Dataset columns: category (string, 107 classes), title (string, 15-179 chars), question_link (string, 59-147 chars), question_body (string, 53-33.8k chars), answer_html (string, 0-28.8k chars), __index_level_0__ (int64, 0-1.58k).
pytorch
What is the relationship between PyTorch and Torch?
https://stackoverflow.com/questions/44371560/what-is-the-relationship-between-pytorch-and-torch
<p>There are two PyTorch repositories :</p> <ul> <li><a href="https://github.com/hughperkins/pytorch" rel="noreferrer">https://github.com/hughperkins/pytorch</a></li> <li><a href="https://github.com/pytorch/pytorch" rel="noreferrer">https://github.com/pytorch/pytorch</a></li> </ul> <p>The first clearly requires Torch and lua and is a wrapper, but the second doesn't make any reference to the Torch project except with its name.</p> <p>How is it related to the <a href="http://torch.ch/" rel="noreferrer">Lua Torch</a>?</p>
<p>Here is a short comparison of PyTorch and Torch.</p> <p><strong>Torch:</strong></p> <blockquote> <p>A tensor library like <code>numpy</code>; unlike <code>numpy</code>, it has strong GPU support.<br /> Torch is used through Lua (yes! you need to have a good understanding of Lua), and for that you will need the LuaRocks package manager.</p> </blockquote> <p><strong>PyTorch:</strong></p> <blockquote> <p>No need for the LuaRocks package manager, no need to write code in Lua. And because we are using Python, we can develop deep learning models with great flexibility. We can also exploit major Python packages like <code>scipy</code>, <code>numpy</code>, <code>matplotlib</code> and <code>Cython</code> with PyTorch's own autograd.</p> </blockquote> <p>There is a detailed discussion of this on the <a href="https://discuss.pytorch.org/t/roadmap-for-torch-and-pytorch/38" rel="noreferrer">PyTorch forum</a>. In addition, both PyTorch and Torch use <a href="https://github.com/torch/nn/tree/master/lib/THNN" rel="noreferrer">THNN</a>: Torch provides Lua wrappers to the THNN library, while PyTorch provides Python wrappers for the same.</p> <p>PyTorch combines flexible recurrent nets, weight sharing and efficient memory usage with the ability to interface with C, at a speed comparable to Torch.</p> <p>For more insight, have a look at the discussion <a href="https://discuss.pytorch.org/t/torch-autograd-vs-pytorch-autograd/1671" rel="noreferrer">here</a>.</p>
534
pytorch
How do I display a single image in PyTorch?
https://stackoverflow.com/questions/53623472/how-do-i-display-a-single-image-in-pytorch
<p>How do I display a PyTorch <code>Tensor</code> of shape <code>(3, 224, 224)</code> representing a 224x224 RGB image? Using <code>plt.imshow(image)</code> gives the error:</p> <blockquote> <p>TypeError: Invalid dimensions for image data</p> </blockquote>
<p>Given a <code>Tensor</code> representing the image, use <a href="https://pytorch.org/docs/stable/generated/torch.permute.html#torch.permute" rel="nofollow noreferrer"><code>.permute()</code></a> to put the channels as the last dimension when passing them to matplotlib:</p> <pre><code>plt.imshow(tensor_image.permute(1, 2, 0)) </code></pre> <p>Note: <a href="https://discuss.pytorch.org/t/swap-axes-in-pytorch/970/7" rel="nofollow noreferrer"><code>permute</code> does not copy or allocate memory</a>, and <a href="https://stackoverflow.com/q/48482787/5353461"> <code>from_numpy()</code> doesn't either.</a></p>
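As a quick sanity check of the shape change (names here are illustrative), the channel dimension simply moves to the end, and no data is copied:

```python
import torch

img = torch.rand(3, 224, 224)       # a hypothetical CHW image tensor
hwc = img.permute(1, 2, 0)          # CHW -> HWC, the layout matplotlib expects

print(hwc.shape)                    # torch.Size([224, 224, 3])
# permute returns a view: both tensors share the same underlying storage
print(hwc.data_ptr() == img.data_ptr())  # True
```

The resulting `hwc` tensor can then be passed straight to `plt.imshow`.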
535
pytorch
Is .data still useful in pytorch?
https://stackoverflow.com/questions/51743214/is-data-still-useful-in-pytorch
<p>I'm new to pytorch. I've read a lot of pytorch code that heavily uses the tensor's <code>.data</code> member, but searching for <code>.data</code> in the official documentation and on Google turns up little. I guess <code>.data</code> contains the data in the tensor, but when is it needed and when is it not?</p>
<p><code>.data</code> was an attribute of <code>Variable</code> (an object wrapping a <code>Tensor</code> with history tracking, e.g. for automatic updates), not of <code>Tensor</code>. Actually, <code>.data</code> gave access to the <code>Variable</code>'s underlying <code>Tensor</code>.</p> <p>However, since PyTorch version <code>0.4.0</code>, <code>Variable</code> and <code>Tensor</code> have been merged (into an updated <code>Tensor</code> structure), so <code>.data</code> disappeared along with the previous <code>Variable</code> object (<code>Variable</code> is still there for backward compatibility, but deprecated).</p> <hr> <p>Paragraph from the <a href="https://github.com/pytorch/pytorch/releases/tag/v0.4.0" rel="noreferrer">Release Notes</a> for version <code>0.4.0</code> (I recommend reading the whole section about <code>Variable</code>/<code>Tensor</code> updates):</p> <blockquote> <p><strong>What about <code>.data</code>?</strong></p> <p><code>.data</code> was the primary way to get the underlying <code>Tensor</code> from a <code>Variable</code>. After this merge, calling <code>y = x.data</code> still has similar semantics. So <code>y</code> will be a <code>Tensor</code> that shares the same data with <code>x</code>, is unrelated with the computation history of <code>x</code>, and has <code>requires_grad=False</code>.</p> <p>However, <code>.data</code> can be unsafe in some cases. Any changes on <code>x.data</code> wouldn't be tracked by <code>autograd</code>, and the computed gradients would be incorrect if <code>x</code> is needed in a backward pass. A safer alternative is to use <code>x.detach()</code>, which also returns a <code>Tensor</code> that shares data with <code>requires_grad=False</code>, but will have its in-place changes reported by <code>autograd</code> if <code>x</code> is needed in backward.</p> </blockquote>
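The unsafety described in the release notes can be demonstrated with a small sketch (values chosen arbitrarily): mutating through `.data` bypasses autograd's version tracking, while mutating a `detach()`-ed view is caught at backward time:

```python
import torch

# 1) Mutation through .data: autograd does NOT notice, gradient is silently wrong
x = torch.tensor([3.0], requires_grad=True)
out = x * x                  # autograd saves x to compute d(out)/dx = 2*x
x.data.mul_(10)              # untracked in-place change: the saved value is now 30
out.backward()
print(x.grad)                # tensor([60.]) -- mathematically it should be 6

# 2) Mutation through .detach(): the shared version counter catches it
y = torch.tensor([3.0], requires_grad=True)
out2 = y * y
y.detach().mul_(10)          # bumps y's version counter
err = None
try:
    out2.backward()
except RuntimeError as e:    # "... modified by an inplace operation"
    err = e
```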
536
pytorch
Install specific PyTorch version (pytorch==1.0.1)
https://stackoverflow.com/questions/64062637/install-specific-pytorch-version-pytorch-1-0-1
<p>I'm trying to install specific PyTorch version under conda env:</p> <p>Using pip:</p> <pre><code>pip3 install pytorch==1.0.1 WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip. Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue. To avoid this problem you can invoke Python with '-m pip' instead of running pip directly. Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement pytorch==1.0.1 (from versions: 0.1.2, 1.0.2) ERROR: No matching distribution found for pytorch==1.0.1 </code></pre> <p>Using Conda:</p> <pre><code>conda install pytorch==1.0.1 Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - pytorch==1.0.1 Current channels: - https://repo.anaconda.com/pkgs/main/osx-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/osx-64 - https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre> <p>I was able to find this version under <a href="https://anaconda.org/soumith/pytorch" rel="noreferrer">https://anaconda.org/soumith/pytorch</a> but is there a way to find it and install from console?</p>
<p>You can download/install the version you like from the official <a href="https://anaconda.org/pytorch/pytorch/files?version=1.0.1" rel="noreferrer">PyTorch Conda package</a>; the channel you linked is old and has not been supported/updated for quite some time now.<br /> Install your desired version like this:</p> <pre><code>conda install pytorch==1.0.1 torchvision==0.2.2 -c pytorch </code></pre> <p>If you are looking for a pip version, you can view and access all versions from <a href="https://download.pytorch.org/whl/torch_stable.html" rel="noreferrer">here</a> as well, and simply run:</p> <pre><code>pip install torch==1.0.1 -f https://download.pytorch.org/whl/torch_stable.html </code></pre> <p>You can always check the <a href="https://pytorch.org/get-started/previous-versions/" rel="noreferrer">previous versions</a> here as well.</p>
537
pytorch
What are Torch Scripts in PyTorch?
https://stackoverflow.com/questions/53900396/what-are-torch-scripts-in-pytorch
<p>I've just found that PyTorch docs expose something that is called <a href="https://pytorch.org/docs/stable/jit.html?highlight=model%20features" rel="noreferrer">Torch Scripts</a>. However, I do not know:</p> <ul> <li>When they should be used?</li> <li>How they should be used?</li> <li>What are their benefits?</li> </ul>
<p>Torch Script is one of two modes of using the PyTorch <a href="https://en.wikipedia.org/wiki/Just-in-time_compilation" rel="noreferrer">just-in-time compiler</a>, the other being <a href="https://pytorch.org/docs/stable/jit.html#torch.jit.trace" rel="noreferrer">tracing</a>. The benefits are explained in the linked documentation:</p> <blockquote> <p>Torch Script is a way to create serializable and optimizable models from PyTorch code. Any code written in Torch Script can be saved from your Python process and loaded in a process where there is no Python dependency.</p> </blockquote> <p>The above quote actually holds for both scripting and tracing. So:</p> <ol> <li>You gain the ability to serialize your models and later run them outside of Python, via LibTorch, a C++ native module. This allows you to embed your DL models in various production environments like mobile or IoT. There is an official guide on exporting models to C++ <a href="https://pytorch.org/tutorials/advanced/cpp_export.html" rel="noreferrer">here</a>.</li> <li>PyTorch can <em>compile</em> your jit-able modules rather than running them as an interpreter, allowing for various optimizations and improving performance, both during training and inference. This is equally helpful for development and production.</li> </ol> <p>Regarding Torch Script specifically, in comparison to tracing, it is a subset of Python, specified in detail <a href="https://pytorch.org/docs/stable/jit_language_reference.html#language-reference" rel="noreferrer">here</a>, which, when adhered to, can be compiled by PyTorch. It is more laborious to write Torch Script modules than to trace regular <code>nn.Module</code> subclasses, but it allows for some extra features over tracing, most notably flow control like <code>if</code> statements or <code>for</code> loops. Tracing treats such flow control as &quot;constant&quot;: in other words, if you have an <code>if model.training</code> clause in your module and trace it with <code>training=True</code>, it will always behave this way, even if you change the <code>training</code> variable to <code>False</code> later on.</p> <p>To answer your first question, you <em>need</em> to use <code>jit</code> if you want to deploy your models outside Python; otherwise, you <em>should</em> use <code>jit</code> if you want to gain some execution performance at the price of extra development effort (as not every model can be straightforwardly made compliant with <code>jit</code>). In particular, you should use Torch Script if your code cannot be <code>jit</code>ed with tracing alone because it relies on features such as <code>if</code> statements. For maximum ergonomics, you probably want to <a href="https://pytorch.org/docs/stable/jit.html#mixing-tracing-and-scripting" rel="noreferrer">mix the two</a> on a case-by-case basis.</p> <p>Finally, for <em>how</em> they should be used, please refer to all the documentation and tutorial links.</p>
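A minimal sketch of the difference (the module and values are made up): scripting compiles both branches of a data-dependent `if`, while a trace bakes in whichever branch the example input happened to take:

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # data-dependent control flow
        if x.sum() > 0:
            return x * 2
        return -x

scripted = torch.jit.script(Gate())           # compiles both branches
print(scripted(torch.tensor([1.0])))          # tensor([2.])
print(scripted(torch.tensor([-1.0])))         # tensor([1.])

# tracing records only the branch taken for the example input (sum > 0 here)
traced = torch.jit.trace(Gate(), torch.tensor([1.0]))
print(traced(torch.tensor([-1.0])))           # tensor([-2.]) -- wrong branch!
```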
538
pytorch
PyTorch: What is numpy.linalg.multi_dot() equivalent in PyTorch
https://stackoverflow.com/questions/64520994/pytorch-what-is-numpy-linalg-multi-dot-equivalent-in-pytorch
<p>I am trying to perform matrix multiplication of multiple matrices in PyTorch and was wondering what is the equivalent of <code>numpy.linalg.multi_dot()</code> in PyTorch?</p> <p>If there isn't one, what is the next best way (in terms of speed and memory) I can do this in PyTorch?</p> <p>Code:</p> <pre><code>import numpy as np import torch A = np.random.rand(3, 3) B = np.random.rand(3, 3) C = np.random.rand(3, 3) results = np.linalg.multi_dot([A, B, C]) A_tsr = torch.tensor(A) B_tsr = torch.tensor(B) C_tsr = torch.tensor(C) # What is the PyTorch equivalent of np.linalg.multi_dot()? </code></pre> <p>Many thanks!</p>
<p><s>Looks like one can send tensors into <code>multi_dot</code>.</s></p> <p>Actually, the numpy implementation casts everything into numpy arrays. If your tensors are on the CPU and detached, this would work; otherwise, the conversion to numpy would fail.</p> <p>So in general, there likely isn't a drop-in alternative. I think your best shot is to take the <code>multi_dot</code> implementation, e.g. <a href="https://github.com/numpy/numpy/blob/v1.19.0/numpy/linalg/linalg.py#L2621-L2739" rel="nofollow noreferrer">from here for numpy v1.19.0</a>, and adjust it to handle tensors / skip the cast to numpy. Given the similar interface and the code's simplicity, this should be pretty straightforward.</p>
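Since this answer was written, PyTorch (1.9 and later) gained `torch.linalg.multi_dot`, which takes a sequence of tensors and chooses the multiplication order like NumPy's version does; on older releases, plain chained matmul gives the same product, just without the cost-based reordering:

```python
import torch

A, B, C = (torch.rand(3, 3) for _ in range(3))

result = torch.linalg.multi_dot([A, B, C])   # available in PyTorch >= 1.9
chained = A @ B @ C                          # same product, fixed left-to-right order
print(torch.allclose(result, chained))       # True
```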
539
pytorch
Install PyTorch from requirements.txt
https://stackoverflow.com/questions/60912744/install-pytorch-from-requirements-txt
<p>Torch documentation says use</p> <pre><code>pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html </code></pre> <p>to install the latest version of PyTorch. This works when I do it manually, but when I add it to req.txt and do <code>pip install -r req.txt</code>, it fails and says <code>ERROR: No matching distribution</code>.</p> <p>Edit: adding the whole lines from req.txt and the error here.</p> <pre><code>torch==1.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html </code></pre> <pre><code>ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from -r requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0) ERROR: No matching distribution found for torch==1.4.0+cpu (from -r requirements.txt (line 1)) </code></pre>
<p>Add a <code>--find-links</code> line in <code>requirements.txt</code> before <code>torch</code>:</p> <pre><code>--find-links https://download.pytorch.org/whl/torch_stable.html torch==1.2.0+cpu </code></pre> <p>Source: <a href="https://github.com/pytorch/pytorch/issues/29745#issuecomment-553588171" rel="noreferrer">https://github.com/pytorch/pytorch/issues/29745#issuecomment-553588171</a></p>
540
pytorch
How Pytorch Tensor get the index of specific value
https://stackoverflow.com/questions/47863001/how-pytorch-tensor-get-the-index-of-specific-value
<p>With python lists, we can do:</p> <pre><code>a = [1, 2, 3] assert a.index(2) == 1 </code></pre> <p>Is there a direct <code>.index()</code> equivalent for a pytorch tensor?</p>
<p>I think there is no direct translation from <code>list.index()</code> to a pytorch function. However, you can achieve similar results using <code>tensor==number</code> and then the <code>nonzero()</code> function. For example:</p> <pre><code>t = torch.Tensor([1, 2, 3]) print((t == 2).nonzero(as_tuple=True)[0]) </code></pre> <p>This piece of code prints</p> <blockquote> <p><code>tensor([1])</code></p> </blockquote>
541
pytorch
Installing PyTorch via Conda
https://stackoverflow.com/questions/49951846/installing-pytorch-via-conda
<p>Objective: Create a conda environment with pytorch and torchvision. Anaconda Navigator 1.8.3, python 3.6, MacOS 10.13.4.</p> <p>What I've tried:</p> <ul> <li>In Navigator, created a new environment. Tried to install pytorch and torchvision but could not because the UI search for packages does not find any packages available matching pytorch, torch, torchvision, or similar strings.</li> <li><code>conda install pytorch torchvision -c pytorch</code></li> <li><code>conda update --all</code></li> </ul> <p>pytorch 0.3.1, torch 0.3.1, and torchvision 0.2.0 now appear as installed in the root environment. However, the root environment is no longer cloneable; the clone button is gray/disabled (it used to be enabled/cloneable). I could use the root environment as a fallback but the main point of conda is to be able to create separate and disposable environments. What am I missing?</p> <p>UPDATE -----------------</p> <p>Running <code>conda install -c pytorch pytorch</code> yields: <code># All requested packages already installed.</code> But if I activate the <code>pytorch</code> environment and list the packages therein, there is no package containing the word "torch". If I then do <code>conda search pytorch</code> I get <code>PackagesNotFoundError: The following packages are not available from current channels: - pytorch</code>. If I activate the <code>base</code> environment and then do <code>conda list</code> then pytorch is in the package list for base. So how does one create a separate environment containing pytorch?</p>
<p>You seem to have installed PyTorch in your base environment; you therefore cannot use it from your other "pytorch" env.</p> <p>Either:</p> <ul> <li><p>directly create a new environment (let's call it <code>pytorch_env</code>) with PyTorch: <code>conda create -n pytorch_env -c pytorch pytorch torchvision</code></p></li> <li><p>or switch to the pytorch environment you have already created with <code>source activate pytorch_env</code> and then install PyTorch in it: <code>conda install -c pytorch pytorch torchvision</code></p></li> </ul>
542
pytorch
What does `-1` of `view()` mean in PyTorch?
https://stackoverflow.com/questions/50792316/what-does-1-of-view-mean-in-pytorch
<p>As the question says, what does <code>-1</code> of <code>view()</code> do in PyTorch?</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = torch.arange(1, 17) &gt;&gt;&gt; a tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.]) &gt;&gt;&gt; a.view(1,-1) tensor([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.]]) &gt;&gt;&gt; a.view(-1,1) tensor([[ 1.], [ 2.], [ 3.], [ 4.], [ 5.], [ 6.], [ 7.], [ 8.], [ 9.], [ 10.], [ 11.], [ 12.], [ 13.], [ 14.], [ 15.], [ 16.]]) </code></pre> <p>Does <code>-1</code> of <code>view()</code> in PyTorch generate additional dimension? Does <code>-1</code> of <code>view()</code> in PyTorch behave the same as <code>-1</code> of <code>reshape()</code> in NumPy?</p>
<p>Yes, it does behave like <code>-1</code> in <code>numpy.reshape()</code>, i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.</p> <p>For instance:</p> <pre class="lang-py prettyprint-override"><code>import torch x = torch.arange(6) print(x.view(3, -1)) # inferred size will be 2 as 6 / 3 = 2 # tensor([[ 0., 1.], # [ 2., 3.], # [ 4., 5.]]) print(x.view(-1, 6)) # inferred size will be 1 as 6 / 6 = 1 # tensor([[ 0., 1., 2., 3., 4., 5.]]) print(x.view(1, -1, 2)) # inferred size will be 3 as 6 / (1 * 2) = 3 # tensor([[[ 0., 1.], # [ 2., 3.], # [ 4., 5.]]]) # print(x.view(-1, 5)) # throw error as there's no int N so that 5 * N = 6 # RuntimeError: invalid argument 2: size '[-1 x 5]' is invalid for input with 6 elements print(x.view(-1, -1, 3)) # throw error as only one dimension can be inferred # RuntimeError: invalid argument 1: only one dimension can be inferred </code></pre>
543
pytorch
How can I process multiple losses in PyTorch?
https://stackoverflow.com/questions/53994625/how-can-i-process-multi-loss-in-pytorch
<p><a href="https://i.sstatic.net/yBrXW.png" rel="noreferrer"><img src="https://i.sstatic.net/yBrXW.png" alt="enter image description here"></a></p> <p>As shown above, I want to use some auxiliary losses to improve my model's performance.<br/> What kind of code can implement this in pytorch?</p> <pre><code>#one loss1.backward() loss2.backward() loss3.backward() optimizer.step() #two loss1.backward() optimizer.step() loss2.backward() optimizer.step() loss3.backward() optimizer.step() #three loss = loss1+loss2+loss3 loss.backward() optimizer.step() </code></pre> <p>Thanks for your answer!</p>
<p>The first and third attempts are exactly the same and correct, while the second approach is completely wrong.</p> <p>In PyTorch, low-layer gradients are <strong>not</strong> &quot;overwritten&quot; by subsequent <code>backward()</code> calls; rather, they are accumulated, or summed. This makes the first and third approaches identical, though the first might be preferable if you have a low-memory GPU/RAM (a batch size of 1024 with one <code>backward() + step()</code> call is the same as having 8 batches of size 128 and 8 <code>backward()</code> calls, with one <code>step()</code> call at the end).</p> <p>To illustrate the idea, here is a simple example. We want to get our tensor <code>x</code> close to 40, 50 and 60 simultaneously:</p> <pre><code>x = torch.tensor([1.0],requires_grad=True) loss1 = criterion(40,x) loss2 = criterion(50,x) loss3 = criterion(60,x) </code></pre> <p>Now the first approach (we use <code>tensor.grad</code> to get the current gradient for our tensor <code>x</code>):</p> <pre><code>loss1.backward() loss2.backward() loss3.backward() print(x.grad) </code></pre> <p>This outputs: <code>tensor([-294.])</code> (EDIT: put <code>retain_graph=True</code> in the first two <code>backward</code> calls for more complicated computational graphs)</p> <p>The third approach:</p> <pre><code>loss = loss1+loss2+loss3 loss.backward() print(x.grad) </code></pre> <p>Again the output is: <code>tensor([-294.])</code></p> <p>The second approach is different because we don't call <code>opt.zero_grad</code> after calling <code>step()</code>. This means that gradients accumulate across the three <code>step</code> calls: the first <code>step</code> applies the gradient of the first <code>backward</code> call alone, the second applies the sum of the first two, and the third applies the sum of all three. For example, if the 3 losses provide gradients <code>5,1,4</code> for the same weight, then instead of 10 (=5+1+4), your weight effectively receives <code>5*3+1*2+4*1=21</code> as its total update.</p> <p>For further reading: <a href="https://discuss.pytorch.org/t/pytorch-gradients/884/2" rel="noreferrer">Link 1</a>, <a href="https://discuss.pytorch.org/t/why-do-we-need-to-set-the-gradients-manually-to-zero-in-pytorch/4903/20" rel="noreferrer">Link 2</a></p>
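The snippets above leave `criterion` undefined; a minimal runnable version, assuming a simple squared-error criterion (which reproduces the `-294` gradient), looks like this:

```python
import torch

def criterion(target, x):
    # hypothetical squared-error loss matching the answer's numbers
    return (x - target) ** 2

x = torch.tensor([1.0], requires_grad=True)
loss = criterion(40, x) + criterion(50, x) + criterion(60, x)
loss.backward()
print(x.grad)  # tensor([-294.])  i.e. 2*(1-40) + 2*(1-50) + 2*(1-60)
```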
544
pytorch
Indexing Pytorch tensor
https://stackoverflow.com/questions/59154920/indexing-pytorch-tensor
<p>I have Pytorch code which generates a Pytorch tensor in each iteration of a for loop, all of the same size. I want to assign each of those tensors to a row of a new tensor, which will include all the tensors at the end. In other words, something like this:</p> <pre><code>for i=1:N: X = torch.Tensor([[1,2,3], [3,2,5]]) #Y is a pytorch tensor Y[i] = X </code></pre> <p>I wonder how I can implement this with Pytorch.</p>
<p>You can concatenate the tensors using <a href="https://pytorch.org/docs/stable/torch.html#torch.cat" rel="nofollow noreferrer"><code>torch.cat</code></a>:</p> <pre class="lang-py prettyprint-override"><code>tensors = [] for i in range(N): X = torch.tensor([[1,2,3], [3,2,5]]) tensors.append(X) Y = torch.cat(tensors, dim=0) # dim 0 is the rows of the tensor </code></pre>
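If each `X` should remain addressable as `Y[i]`, as in the question's pseudocode, `torch.stack` (which creates a new leading dimension instead of joining along an existing one) may be the closer fit:

```python
import torch

N = 4
tensors = [torch.tensor([[1, 2, 3], [3, 2, 5]]) for _ in range(N)]

Y = torch.stack(tensors, dim=0)   # shape (4, 2, 3): Y[i] is exactly the i-th X
Z = torch.cat(tensors, dim=0)     # shape (8, 3): rows joined along dim 0

print(Y.shape, Z.shape)
```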
545
pytorch
Pytorch doesn&#39;t support one-hot vector?
https://stackoverflow.com/questions/55549843/pytorch-doesnt-support-one-hot-vector
<p>I am very confused by how Pytorch deals with one-hot vectors. In this <a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html" rel="noreferrer">tutorial</a>, the neural network will generate a one-hot vector as its output. As far as I understand, the schematic structure of the neural network in the tutorial should be like:</p> <p><a href="https://i.sstatic.net/1v35k.png" rel="noreferrer"><img src="https://i.sstatic.net/1v35k.png" alt="enter image description here"></a></p> <p>However, the <code>labels</code> are not in one-hot vector format. I get the following <code>size</code></p> <pre><code>print(labels.size()) print(outputs.size()) output&gt;&gt;&gt; torch.Size([4]) output&gt;&gt;&gt; torch.Size([4, 10]) </code></pre> <p>Miraculously, when I pass the <code>outputs</code> and <code>labels</code> to <code>criterion=CrossEntropyLoss()</code>, there's no error at all.</p> <pre><code>loss = criterion(outputs, labels) # How come it has no error? </code></pre> <h2>My hypothesis:</h2> <p>Maybe pytorch automatically converts the <code>labels</code> to one-hot vector form. So, I tried converting the labels to one-hot vectors before passing them to the loss function.</p> <pre><code>def to_one_hot_vector(num_class, label): b = np.zeros((label.shape[0], num_class)) b[np.arange(label.shape[0]), label] = 1 return b labels_one_hot = to_one_hot_vector(10,labels) labels_one_hot = torch.Tensor(labels_one_hot) labels_one_hot = labels_one_hot.type(torch.LongTensor) loss = criterion(outputs, labels_one_hot) # Now it gives me error </code></pre> <p>However, I got the following error</p> <blockquote> <p>RuntimeError: multi-target not supported at /opt/pytorch/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15</p> </blockquote> <p>So, one-hot vectors are not supported in <code>Pytorch</code>? How does <code>Pytorch</code> calculate the <code>cross entropy</code> for the two tensors <code>outputs = [1,0,0],[0,0,1]</code> and <code>labels = [0,2]</code>? It doesn't make sense to me at all at the moment.</p>
<p>PyTorch states in its documentation for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss" rel="noreferrer"><code>CrossEntropyLoss</code></a> that</p> <blockquote> <p>This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch</p> </blockquote> <p>In other words, <code>CrossEntropyLoss</code> conceptually has your <code>to_one_hot_vector</code> function built in and does not expose a one-hot API. Note that one-hot vectors are memory-inefficient compared to storing class labels.</p> <p>If you are given one-hot vectors and need to go to the class-label format (for instance, to be compatible with <code>CEL</code>), you can use <code>argmax</code> like below:</p> <pre><code>import torch labels = torch.tensor([1, 2, 3, 5]) one_hot = torch.zeros(4, 6) one_hot[torch.arange(4), labels] = 1 reverted = torch.argmax(one_hot, dim=1) assert (labels == reverted).all().item() </code></pre>
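Since PyTorch 1.1, the scatter shown above also has a built-in helper, `torch.nn.functional.one_hot`:

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([1, 2, 3, 5])
one_hot = F.one_hot(labels, num_classes=6)  # shape (4, 6), dtype int64
reverted = one_hot.argmax(dim=1)            # back to class-label format
print(torch.equal(labels, reverted))        # True
```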
546
pytorch
Run Pytorch examples with Pytorch build from source
https://stackoverflow.com/questions/76260489/run-pytorch-examples-with-pytorch-build-from-source
<p>I have build pytorch 2.0.1 from source. Using cuda 11.7, cudnn v8, and the driver for the nvidia GPU is 515.43.04 (CUDA version 11.7). Altough Pytorch seems to build successfully when I am trying to run examples downloaded from <a href="https://github.com/pytorch/examples" rel="nofollow noreferrer">github</a> I see the following error which is related to cuDNN:</p> <pre><code>CUDA available! Training on GPU. terminate called after throwing an instance of 'c10::Error' what(): GET was unable to find an engine to execute this computation Exception raised from run_single_conv at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:671 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7edfcb24d7 in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libc10.so) frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x68 (0x7f7edfc7c434 in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libc10.so) frame #2: &lt;unknown function&gt; + 0xe4314c (0x7f7e9cc3d14c in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so) frame #3: &lt;unknown function&gt; + 0xe433eb (0x7f7e9cc3d3eb in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so) frame #4: &lt;unknown function&gt; + 0xe27dba (0x7f7e9cc21dba in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so) frame #5: at::native::cudnn_convolution(at::Tensor const&amp;, at::Tensor const&amp;, c10::ArrayRef&lt;long&gt;, c10::ArrayRef&lt;long&gt;, c10::ArrayRef&lt;long&gt;, long, bool, bool, bool) + 0x96 (0x7f7e9cc22406 in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so) frame #6: &lt;unknown function&gt; + 0x2b16b97 (0x7f7e9e910b97 in 
/tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so) frame #7: &lt;unknown function&gt; + 0x2b16c50 (0x7f7e9e910c50 in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cuda.so) frame #8: at::_ops::cudnn_convolution::call(at::Tensor const&amp;, at::Tensor const&amp;, c10::ArrayRef&lt;long&gt;, c10::ArrayRef&lt;long&gt;, c10::ArrayRef&lt;long&gt;, long, bool, bool, bool) + 0x23d (0x7f7ec4780ecd in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so) frame #9: at::native::_convolution(at::Tensor const&amp;, at::Tensor const&amp;, c10::optional&lt;at::Tensor&gt; const&amp;, c10::ArrayRef&lt;long&gt;, c10::ArrayRef&lt;long&gt;, c10::ArrayRef&lt;long&gt;, bool, c10::ArrayRef&lt;long&gt;, long, bool, bool, bool, bool) + 0x1515 (0x7f7ec3adec45 in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so) frame #10: &lt;unknown function&gt; + 0x2c434c6 (0x7f7ec4b004c6 in /tmp/manospavl/anaconda/envs/pytorch-dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so) frame #11: &lt;unknown function&gt; + 0x2c43547 (0x7f7ec4b00547 in /tmp/manospavl/anaconda/envs/pytorch-dev </code></pre> <p>I have tried the most recent version of PyTorch, 2.1.0, and other examples, but all seem to produce the same error. Additionally, I have written two simple examples that work. I have also checked that cuDNN exists in my setup.</p>
<p>The issue was that a previously installed local PyTorch was being picked up instead of the build from source.</p>
547
pytorch
Hyperparameter optimization for Pytorch model
https://stackoverflow.com/questions/44260217/hyperparameter-optimization-for-pytorch-model
<p>What is the best way to perform hyperparameter optimization for a PyTorch model? Implement e.g. random search myself? Use scikit-learn? Or is there anything else I am not aware of?</p>
<p>Many researchers use <a href="http://ray.readthedocs.io/en/latest/tune.html" rel="noreferrer">RayTune</a>. It's a scalable hyperparameter tuning framework, specifically for deep learning. You can easily use it with any deep learning framework (2 lines of code below), and it provides most state-of-the-art algorithms, including HyperBand, Population-based Training, Bayesian Optimization, and BOHB.</p> <pre><code>import torch.optim as optim from ray import tune from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test def train_mnist(config): train_loader, test_loader = get_data_loaders() model = ConvNet() optimizer = optim.SGD(model.parameters(), lr=config[&quot;lr&quot;]) for i in range(10): train(model, optimizer, train_loader) acc = test(model, test_loader) tune.report(mean_accuracy=acc) analysis = tune.run( train_mnist, config={&quot;lr&quot;: tune.grid_search([0.001, 0.01, 0.1])}) print(&quot;Best config: &quot;, analysis.get_best_config(metric=&quot;mean_accuracy&quot;)) # Get a dataframe for analyzing trial results. df = analysis.dataframe() </code></pre> <p>[Disclaimer: I contribute actively to this project!]</p>
548
pytorch
Why do we &quot;pack&quot; the sequences in PyTorch?
https://stackoverflow.com/questions/51030782/why-do-we-pack-the-sequences-in-pytorch
<p>I was trying to replicate <a href="https://discuss.pytorch.org/t/simple-working-example-how-to-use-packing-for-variable-length-sequence-inputs-for-rnn/2120" rel="noreferrer">How to use packing for variable-length sequence inputs for rnn</a> but I guess I first need to understand why we need to &quot;pack&quot; the sequence.</p> <p>I understand why we &quot;pad&quot; them but why is &quot;packing&quot; (via <code>pack_padded_sequence</code>) necessary?</p>
<p>I have stumbled upon this problem too and below is what I figured out.</p> <p>When training an RNN (LSTM or GRU or vanilla-RNN), it is difficult to batch variable-length sequences. For example: if the lengths of the sequences in a size-8 batch are [4,6,8,5,4,3,7,8], you will pad all the sequences and that will result in 8 sequences of length 8. You would end up doing 64 computations (8x8), but you needed to do only 45. Moreover, if you wanted to do something fancy like using a bidirectional-RNN, it would be harder to do batch computations just by padding and you might end up doing more computations than required.</p> <p>Instead, PyTorch allows us to pack the sequence; internally, a packed sequence is a tuple of two lists. One contains the elements of the sequences, interleaved by time step (see example below), and the other contains the <s>size of each sequence</s> batch size at each step. This is helpful in recovering the actual sequences as well as telling the RNN what the batch size is at each time step. This has been pointed out by @Aerin. The packed sequence can be passed to the RNN and it will internally optimize the computations.</p> <p>I might have been unclear at some points, so let me know and I can add more explanations.</p> <p>Here's a code example:</p> <pre><code> a = [torch.tensor([1,2,3]), torch.tensor([3,4])] b = torch.nn.utils.rnn.pad_sequence(a, batch_first=True) &gt;&gt;&gt;&gt; tensor([[ 1, 2, 3], [ 3, 4, 0]]) torch.nn.utils.rnn.pack_padded_sequence(b, batch_first=True, lengths=[3,2]) &gt;&gt;&gt;&gt; PackedSequence(data=tensor([ 1, 3, 2, 4, 3]), batch_sizes=tensor([ 2, 2, 1])) </code></pre>
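To complete the round trip, `pad_packed_sequence` undoes the packing — a small sketch extending the numbers used in the answer above:

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Two variable-length sequences, padded into one batch (batch_first=True)
a = [torch.tensor([1, 2, 3]), torch.tensor([3, 4])]
padded = pad_sequence(a, batch_first=True)  # shape [2, 3], zero-padded

# Pack: only the 5 real elements are kept, interleaved by time step.
# Note: by default lengths must be sorted in decreasing order (enforce_sorted=True).
packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)
print(packed.data)         # tensor([1, 3, 2, 4, 3])
print(packed.batch_sizes)  # tensor([2, 2, 1])

# Unpack: recover the padded batch and the original lengths
unpadded, lengths = pad_packed_sequence(packed, batch_first=True)
print(unpadded)  # tensor([[1, 2, 3], [3, 4, 0]])
print(lengths)   # tensor([3, 2])
```

The `PackedSequence` can be fed directly to `nn.LSTM`/`nn.GRU`, and the returned packed output can be unpacked the same way.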
549
pytorch
No N-dimensional transpose in PyTorch
https://stackoverflow.com/questions/44841654/no-n-dimensional-tranpose-in-pytorch
<p>PyTorch's <code>torch.transpose</code> function only transposes 2D inputs. Documentation is <a href="http://pytorch.org/docs/master/torch.html#torch.transpose" rel="noreferrer">here</a>.</p> <p>On the other hand, Tensorflow's <code>tf.transpose</code> function allows you to transpose a tensor of <code>N</code> arbitrary dimensions.</p> <p>Can someone please explain why PyTorch does not/cannot have N-dimension transpose functionality? Is this due to the dynamic nature of the computation graph construction in PyTorch versus Tensorflow's Define-then-Run paradigm?</p>
<p>It's simply called differently in pytorch. <a href="http://pytorch.org/docs/master/tensors.html#torch.Tensor.permute" rel="noreferrer">torch.Tensor.permute</a> will allow you to swap dimensions in pytorch like tf.transpose does in TensorFlow.</p> <p>As an example of how you'd convert a 4D image tensor from NHWC to NCHW (not tested, so might contain bugs):</p> <pre><code>&gt;&gt;&gt; img_nhwc = torch.randn(10, 480, 640, 3) &gt;&gt;&gt; img_nhwc.size() torch.Size([10, 480, 640, 3]) &gt;&gt;&gt; img_nchw = img_nhwc.permute(0, 3, 1, 2) &gt;&gt;&gt; img_nchw.size() torch.Size([10, 3, 480, 640]) </code></pre>
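For what it's worth, in recent PyTorch versions `torch.transpose` does accept N-dimensional tensors — it just swaps exactly two dimensions — while `permute` handles an arbitrary reordering. A sketch contrasting the two (shapes only; both return non-contiguous views, hence the usual `.contiguous()` caveat):

```python
import torch

img_nhwc = torch.randn(10, 480, 640, 3)

# permute: arbitrary reordering of all dims (like tf.transpose with a full perm)
nchw = img_nhwc.permute(0, 3, 1, 2)

# transpose: swaps exactly two dims, but works on tensors of any rank
hw_swapped = img_nhwc.transpose(1, 2)  # NHWC -> NWHC

print(nchw.shape)        # torch.Size([10, 3, 480, 640])
print(hw_swapped.shape)  # torch.Size([10, 640, 480, 3])

# Both return views; call .contiguous() before .view()
flat = nchw.contiguous().view(10, -1)
print(flat.shape)        # torch.Size([10, 921600])
```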
550
pytorch
How to build pytorch source?
https://stackoverflow.com/questions/71075872/how-to-build-pytorch-source
<p>When I use PyTorch, it shows that the CUDA version PyTorch was built with and the CUDA version of my system are inconsistent, so I need to rebuild PyTorch from source.</p> <pre class="lang-sh prettyprint-override"><code># install dependencies pip install astunparse numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses # Download pytorch source git clone --recursive https://github.com/pytorch/pytorch cd pytorch # if you are updating an existing checkout git submodule sync git submodule update --init --recursive --jobs 0 # Build # if you want to use pytorch with cuda, set `USE_CUDA=1` python setup.py install # torchvision install from source # Download git clone --recursive --branch v0.11.1 https://github.com/pytorch/vision.git cd vision python setup.py install </code></pre>
<p>Building PyTorch from source is not trivial; there is extensive documentation for it <a href="https://github.com/pytorch/pytorch#from-source" rel="nofollow noreferrer">in the PyTorch README</a>. However, I think you should either install an older PyTorch version directly, one that is compatible with your system's version of CUDA, or use Docker with the matching version (the safer option).</p> <p>You could also try to update your system's CUDA if it supports newer drivers. Good luck.</p>
551
pytorch
How does one use Pytorch (+ cuda) with an A100 GPU?
https://stackoverflow.com/questions/66992585/how-does-one-use-pytorch-cuda-with-an-a100-gpu
<p>I was trying to use my current code with an A100 GPU but I get this error:</p> <pre><code>---&gt; backend='nccl' /home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning: A100-SXM4-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the A100-SXM4-40GB GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ </code></pre> <p>which is rather confusing because it points to the usual pytorch installation but doesn't tell me which combination of pytorch version + cuda version to use for my specific hardware (A100). What is the right way to install pytorch for an A100?</p> <hr /> <p>These are some versions I've tried:</p> <pre><code># conda install -y pytorch==1.8.0 torchvision cudatoolkit=10.2 -c pytorch # conda install -y pytorch torchvision cudatoolkit=10.2 -c pytorch #conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=10.2 -c pytorch -c conda-forge # conda install -y pytorch==1.6.0 torchvision cudatoolkit=10.2 -c pytorch #conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge # conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch # conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge # conda install -y pytorch torchvision cudatoolkit=9.2 -c pytorch # For Nano, CC # conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge </code></pre> <hr /> <p>Note that this can be subtle because I've had this error with this machine + pytorch version in the past:</p> <p><a href="https://stackoverflow.com/questions/66807131/how-to-solve-the-famous-unhandled-cuda-error-nccl-version-2-7-8-error">How to solve the famous `unhandled cuda error, NCCL version 2.7.8` error?</a></p> <hr /> 
<h1>Bonus 1:</h1> <p>I still have errors:</p> <pre><code>ncclSystemError: System call (socket, malloc, munmap, etc) failed. Traceback (most recent call last): File &quot;/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py&quot;, line 1423, in &lt;module&gt; main() File &quot;/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py&quot;, line 1365, in main train(args=args) File &quot;/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py&quot;, line 1385, in train args.opt = move_opt_to_cherry_opt_and_sync_params(args) if is_running_parallel(args.rank) else args.opt File &quot;/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/distributed.py&quot;, line 456, in move_opt_to_cherry_opt_and_sync_params args.opt = cherry.optim.Distributed(args.model.parameters(), opt=args.opt, sync=syn) File &quot;/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/cherry/optim.py&quot;, line 62, in __init__ self.sync_parameters() File &quot;/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/cherry/optim.py&quot;, line 78, in sync_parameters dist.broadcast(p.data, src=root) File &quot;/home/miranda9/miniconda3/envs/meta_learning_a100/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py&quot;, line 1090, in broadcast work = default_pg.broadcast([tensor], opts) RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8 </code></pre> <p>One of the answers suggested that <code>nvcc</code> &amp; <code>torch.version.cuda</code> should match, but they do not:</p> <pre><code>(meta_learning_a100) [miranda9@hal-dgx ~]$ python -c &quot;import torch;print(torch.version.cuda)&quot; 11.1 (meta_learning_a100) [miranda9@hal-dgx ~]$ nvcc -V nvcc: NVIDIA (R) Cuda compiler driver 
Copyright (c) 2005-2020 NVIDIA Corporation Built on Wed_Jul_22_19:09:09_PDT_2020 Cuda compilation tools, release 11.0, V11.0.221 Build cuda_11.0_bu.TC445_37.28845127_0 </code></pre> <p>How do I match them? Is this the error? Can someone share their pip, conda and nvcc versions to see what setup works?</p> <p>More error messages:</p> <pre><code>hal-dgx:21797:21797 [0] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83&lt;0&gt; [1]virbr0:192.168.122.1&lt;0&gt; hal-dgx:21797:21797 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21797:21797 [0] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83&lt;0&gt; hal-dgx:21797:21797 [0] NCCL INFO Using network IB NCCL version 2.7.8+cuda11.1 hal-dgx:21805:21805 [2] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83&lt;0&gt; [1]virbr0:192.168.122.1&lt;0&gt; hal-dgx:21799:21799 [1] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83&lt;0&gt; [1]virbr0:192.168.122.1&lt;0&gt; hal-dgx:21805:21805 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21799:21799 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21811:21811 [3] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83&lt;0&gt; [1]virbr0:192.168.122.1&lt;0&gt; hal-dgx:21811:21811 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21811:21811 [3] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83&lt;0&gt; hal-dgx:21811:21811 [3] NCCL INFO Using network IB hal-dgx:21799:21799 [1] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB 
enp226s0:141.142.153.83&lt;0&gt; hal-dgx:21805:21805 [2] NCCL INFO NET/IB : Using [0]mlx5_0:1/IB [1]mlx5_1:1/IB [2]mlx5_2:1/IB [3]mlx5_3:1/IB [4]mlx5_4:1/IB [5]mlx5_5:1/IB [6]mlx5_6:1/IB [7]mlx5_7:1/IB ; OOB enp226s0:141.142.153.83&lt;0&gt; hal-dgx:21799:21799 [1] NCCL INFO Using network IB hal-dgx:21805:21805 [2] NCCL INFO Using network IB hal-dgx:21797:27906 [0] misc/ibvwrap.cc:280 NCCL WARN Call to ibv_create_qp failed hal-dgx:21797:27906 [0] NCCL INFO transport/net_ib.cc:360 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO transport/net_ib.cc:437 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO include/net.h:21 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO include/net.h:51 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO init.cc:300 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO init.cc:566 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO init.cc:840 -&gt; 2 hal-dgx:21797:27906 [0] NCCL INFO group.cc:73 -&gt; 2 [Async thread] hal-dgx:21811:27929 [3] misc/ibvwrap.cc:280 NCCL WARN Call to ibv_create_qp failed hal-dgx:21811:27929 [3] NCCL INFO transport/net_ib.cc:360 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO transport/net_ib.cc:437 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO include/net.h:21 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO include/net.h:51 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO init.cc:300 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO init.cc:566 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO init.cc:840 -&gt; 2 hal-dgx:21811:27929 [3] NCCL INFO group.cc:73 -&gt; 2 [Async thread] </code></pre> <p>after putting</p> <pre><code>import os os.environ[&quot;NCCL_DEBUG&quot;] = &quot;INFO&quot; </code></pre>
<p>From the link <a href="https://pytorch.org/get-started/locally/" rel="noreferrer">pytorch site</a> from @SimonB 's answer, I did:</p> <pre><code>pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html </code></pre> <p>This solved the problem for me.</p>
552
pytorch
PyTorch: new_ones vs ones
https://stackoverflow.com/questions/52866333/pytorch-new-ones-vs-ones
<p>In PyTorch what is the difference between <code>new_ones()</code> vs <code>ones()</code>. For example,</p> <pre><code>x2.new_ones(3,2, dtype=torch.double) </code></pre> <p>vs </p> <pre><code>torch.ones(3,2, dtype=torch.double) </code></pre>
<p>For the sake of this answer, I am assuming that your <code>x2</code> is a previously defined <code>torch.Tensor</code>. If we then head over to the <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor.new_ones" rel="noreferrer">PyTorch documentation</a>, we can read the following on <code>new_ones()</code>:</p> <blockquote> <p>Returns a Tensor of size <code>size</code> filled with <code>1</code>. By default, the returned Tensor has the same <code>torch.dtype</code> and <code>torch.device</code> as this tensor.</p> </blockquote> <p>Whereas <a href="https://pytorch.org/docs/stable/torch.html#torch.ones" rel="noreferrer"><code>ones()</code></a></p> <blockquote> <p>Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument sizes.</p> </blockquote> <p>So, essentially, <code>new_ones</code> allows you to quickly create a new <code>torch.Tensor</code> on the same device and data type as a <em>previously existing</em> tensor (with ones), whereas <code>ones()</code> serves the purpose of creating a <code>torch.Tensor</code> from scratch (filled with ones).</p>
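A small sketch of the difference (the float64 tensor here is just a stand-in for a pre-existing `x2`):

```python
import torch

x2 = torch.zeros(2, 2, dtype=torch.float64)  # pretend this is a pre-existing tensor

a = x2.new_ones(3, 2)  # inherits dtype (and device) from x2
b = torch.ones(3, 2)   # created from scratch, default dtype (usually float32)

print(a.dtype)  # torch.float64
print(b.dtype)  # torch.float32 (the default dtype)
```

So `x2.new_ones(3, 2, dtype=torch.double)` and `torch.ones(3, 2, dtype=torch.double)` produce the same values; the difference only matters when you rely on `new_ones` to inherit the dtype/device instead of spelling them out.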
553
pytorch
Pytorch Installation for different CUDA architectures
https://stackoverflow.com/questions/68496906/pytorch-installation-for-different-cuda-architectures
<p>I have a Dockerfile which installs PyTorch library from the source code.</p> <p>Here is the snippet from Dockerfile which performs the installation from source code of pytorch</p> <pre><code>RUN cd /tmp/ \ &amp;&amp; git clone https://github.com/pytorch/pytorch.git \ &amp;&amp; cd pytorch \ &amp;&amp; git submodule sync &amp;&amp; git submodule update --init --recursive \ &amp;&amp; sudo TORCH_CUDA_ARCH_LIST=&quot;6.0 6.1 7.0 7.5 8.0&quot; python3 setup.py install </code></pre> <p>I don't have proper understanding of what's happening here and would appreciate some input from the community:</p> <ul> <li>Why does PyTorch need different way of installation for different CUDA versions?</li> <li>What is the role of <code>TORCH_CUDA_ARCH_LIST</code> in this context?</li> <li>If my machine has multiple CUDA setups, does that mean I will have multiple PyTorch versions (specific to each CUDA setup) installed in my Docker container?</li> <li>If my machine has none of the mentioned CUDA setups (&quot;6.0 6.1 7.0 7.5 8.0&quot;), will the PyTorch installation fail?</li> </ul>
<p><strong>TL;DR</strong> The version you choose needs to correlate with your hardware, otherwise the code won't run, even if it compiles. So for example, if you want it to run on an RTX 3090, you need to make sure <code>sm_80</code>, <code>sm_86</code> or <code>sm_87</code> is in the list. <code>sm_87</code> can do things that <code>sm_80</code> might not be able to do, and it might do things faster that the others can do.</p> <blockquote> <p>Why does PyTorch need different way of installation for different CUDA versions?</p> </blockquote> <p>New hardware is being made all the time, and the compilers and drivers that support the new architectures are often not backwards compatible, and (not sure about the case of CUDA, but definitely in the case of AMD) not even forwards compatible - so having a compiler that has known support for specific hardware, is important.</p> <blockquote> <p>What is the role of TORCH_CUDA_ARCH_LIST in this context?</p> </blockquote> <p>I'm guessing here, but I think that Pytorch will compile libraries for each of these architectures, and can then pick optimized functions at runtime if these architectures are present in hardware.</p> <blockquote> <p>If my machine has multiple CUDA setups, does that mean I will have multiple PyTorch versions (specific to each CUDA setup) installed in my Docker container?</p> </blockquote> <p>I'm guessing again, but I think they will all be in the same container as multiple libraries containing different optimizations for different hardware.</p> <blockquote> <p>If my machine has none of the mentioned CUDA setups (&quot;6.0 6.1 7.0 7.5 8.0&quot;), will the PyTorch installation fail?</p> </blockquote> <p>IIRC even if you can coax the installation into working, code execution might fail for a number of reasons, usually because of hardware incompatibility.</p> <p>You can refer to the Nvidia compiler documentation at <a href="https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-feature-list" 
rel="noreferrer">https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-feature-list</a> to help you pick the right versions of CUDA for your intended hardware, eg. here are the hardware versions:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>nvcc tag</th> <th>TORCH_CUDA_ARCH_LIST</th> <th>GPU Arch</th> <th>Year</th> <th>eg. GPU</th> </tr> </thead> <tbody> <tr> <td>sm_50, sm_52 and sm_53</td> <td>5.0 5.1 5.3</td> <td><a href="https://en.wikipedia.org/wiki/Maxwell_(microarchitecture)" rel="noreferrer">Maxwell</a> support</td> <td>2014</td> <td>GTX 9xx</td> </tr> <tr> <td>sm_60, sm_61, and sm_62</td> <td>6.0 6.1 6.2</td> <td><a href="https://en.wikipedia.org/wiki/Pascal_(microarchitecture)" rel="noreferrer">Pascal</a> support</td> <td>2016</td> <td>10xx, Pxxx</td> </tr> <tr> <td>sm_70 and sm_72</td> <td>7.0 7.2</td> <td><a href="https://en.wikipedia.org/wiki/Volta_(microarchitecture)" rel="noreferrer">Volta</a> support</td> <td>2017</td> <td>Titan V</td> </tr> <tr> <td>sm_75</td> <td>7.5</td> <td><a href="https://en.wikipedia.org/wiki/Turing_(microarchitecture)" rel="noreferrer">Turing</a> support</td> <td>2018</td> <td>most 20xx</td> </tr> <tr> <td>sm_80, sm_86 and sm_87</td> <td>8.0 8.6 8.7</td> <td><a href="https://en.wikipedia.org/wiki/Ampere_(microarchitecture)" rel="noreferrer">Ampere</a> support</td> <td>2020</td> <td>RTX 30xx, Axx[xx]</td> </tr> <tr> <td>sm_89</td> <td>8.9</td> <td><a href="https://en.wikipedia.org/wiki/Ada_Lovelace_(microarchitecture)" rel="noreferrer">Ada</a> support</td> <td>2022</td> <td>RTX xxxx 40xx L4xx</td> </tr> <tr> <td>sm_90, sm_90a</td> <td>9.0 9.0a</td> <td><a href="https://en.wikipedia.org/wiki/Hopper_(microarchitecture)" rel="noreferrer">Hopper</a> support</td> <td>2022</td> <td>H100</td> </tr> </tbody> </table></div> <p>Surprisingly, I could not find a list and had to compile this myself.</p> <p>From the above you can garner that <code>sm_50</code> is <code>5.0</code> and so 
on...</p> <p>How do you know which <code>nvcc</code> tags to use?</p> <pre><code>$ locate nvcc ... $ /usr/local/cuda-11.7/bin/nvcc --help|grep arch ... --list-gpu-arch (-arch-ls) List the virtual device architectures (compute_XX) supported by the compiler and exit. If both --list-gpu-code and --list-gpu-arch are set, the list is ... $ /usr/local/cuda-11.7/bin/nvcc --list-gpu-arch compute_35 compute_37 compute_50 compute_52 compute_53 compute_60 compute_61 compute_62 compute_70 compute_72 compute_75 compute_80 compute_86 compute_87 </code></pre> <p>Again, here you can see that CUDA 11.7 supports Nvidia GPU's from the <a href="https://en.wikipedia.org/wiki/Tesla_(microarchitecture)" rel="noreferrer">Tesla</a> series which is not even listed on current documentation anymore. Of course those microarchitectures do not support all the functions exposed by Pytorch, so a lot of things won't run on it - and in most cases the compiler should warn you about that if you try to compile it for those versions, but the reality is that not everything is tested by the Nvidia developers, especially if you tread off the beaten track - still way more tame than the AMD world where Open Source third party drivers are ahead of vendor drivers in many respects.</p> <p>Because of the increasing complexity of hardware and compilers, the future looks less and less like vendor compilers like CUDA and ROCm, and more and more like OpenCL, and <em>cross fingers</em> Mojo, so that you don't have to worry about the magic numbers that make each version perform optimally.</p>
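The mapping between `TORCH_CUDA_ARCH_LIST` entries and `nvcc` `sm_XX` tags in the table above is mechanical: drop the dot. A hypothetical helper (not part of PyTorch's build system, just to make the correspondence explicit; it also tolerates the `+PTX` suffix PyTorch accepts):

```python
def arch_list_to_sm_tags(arch_list: str) -> list[str]:
    """Map TORCH_CUDA_ARCH_LIST entries like '8.6' to nvcc tags like 'sm_86'.

    Illustrative helper only -- PyTorch does this translation internally.
    A '+PTX' suffix (e.g. '7.0+PTX') is stripped before mapping.
    """
    tags = []
    for entry in arch_list.split():
        entry = entry.removesuffix("+PTX")
        major, minor = entry.split(".")
        tags.append(f"sm_{major}{minor}")
    return tags

print(arch_list_to_sm_tags("6.0 6.1 7.0 7.5 8.0"))
# ['sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80']
```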
554
pytorch
Pytorch Cuda for ubuntu 20.04
https://stackoverflow.com/questions/71822979/pytorch-cuda-for-ubuntu-20-04
<p>I'm trying to get PyTorch with CUDA 10 compatibility via <code>conda install pytorch torchvision cudatoolkit=10.2 -c pytorch</code> (from <a href="https://discuss.pytorch.org/t/pytorch-with-cuda-11-compatibility/89254" rel="nofollow noreferrer">https://discuss.pytorch.org/t/pytorch-with-cuda-11-compatibility/89254</a>), but there is a timeout error:</p> <pre><code> Proceed ([y]/n)? y Downloading and Extracting Packages pytorch-mutex-1.0 | 3 KB | | 0% torchvision-0.12.0 | 8.8 MB | | 0% ffmpeg-4.3 | 9.9 MB | | 0% pytorch-1.11.0 | 622.9 MB | | 0% CondaHTTPError: HTTP 000 CONNECTION FAILED for url &lt;https://conda.anaconda.org/pytorch/noarch/pytorch-mutex-1.0-cuda.tar.bz2&gt; Elapsed: - CondaHTTPError: HTTP 000 CONNECTION FAILED for url &lt;https://conda.anaconda.org/pytorch/noarch/pytorch-mutex-1.0-cuda.tar.bz2&gt; Elapsed: - </code></pre>
<p>So I was running WSL2 and didn't shut down for many days. A reboot fixed the issue.</p>
555
pytorch
How to load a list of numpy arrays to pytorch dataset loader?
https://stackoverflow.com/questions/44429199/how-to-load-a-list-of-numpy-arrays-to-pytorch-dataset-loader
<p>I have a huge list of numpy arrays, where each array represents an image, and I want to load them using a torch.utils.data.DataLoader object. But the documentation of torch.utils.data.DataLoader mentions that it loads data directly from a folder. How do I modify it for my case? I am new to pytorch and any help would be greatly appreciated. My numpy array for a single image looks something like this. The image is an RGB image.</p> <pre><code>[[[ 70 82 94] [ 67 81 93] [ 66 82 94] ..., [182 182 188] [183 183 189] [188 186 192]] [[ 66 80 92] [ 62 78 91] [ 64 79 95] ..., [176 176 182] [178 178 184] [180 180 186]] [[ 62 82 93] [ 62 81 96] [ 65 80 99] ..., [169 172 177] [173 173 179] [172 172 178]] ..., </code></pre>
<p>I think what DataLoader actually requires is an input that subclasses <code>Dataset</code>. You can either write your own dataset class that subclasses <code>Dataset</code> or use <code>TensorDataset</code> as I have done below:</p> <pre><code>import torch import numpy as np from torch.utils.data import TensorDataset, DataLoader my_x = [np.array([[1.0,2],[3,4]]),np.array([[5.,6],[7,8]])] # a list of numpy arrays my_y = [np.array([4.]), np.array([2.])] # another list of numpy arrays (targets) tensor_x = torch.Tensor(my_x) # transform to torch tensor tensor_y = torch.Tensor(my_y) my_dataset = TensorDataset(tensor_x,tensor_y) # create your dataset my_dataloader = DataLoader(my_dataset) # create your dataloader </code></pre> <p>Works for me.</p>
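One practical note on the approach above: building a tensor from a Python list of numpy arrays is slow (and newer PyTorch versions warn about it), so it is usually better to `np.stack` the list into one array first. A sketch with hypothetical RGB images shaped like the question's (HxWxC, uint8), converted to the NCHW float layout most models expect:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# A hypothetical list of HxWxC uint8 RGB images, as in the question
images = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(10)]
labels = [i % 2 for i in range(10)]

# Stack into one array (fast), move channels first (NCHW), scale to [0, 1]
x = torch.from_numpy(np.stack(images)).permute(0, 3, 1, 2).float() / 255.0
y = torch.tensor(labels)

loader = DataLoader(TensorDataset(x, y), batch_size=4, shuffle=True)
for xb, yb in loader:
    print(xb.shape)  # torch.Size([4, 3, 32, 32]) for the first batch
    break
```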
556
pytorch
pytorch tensorboard &quot;add_embedding error&quot;
https://stackoverflow.com/questions/66490589/pytorch-tensorboard-add-embedding-error
<p>Hi everyone, I'm stuck using TensorBoard in PyTorch. The point is that the add_embedding method raises the error below:</p> <pre><code>Traceback (most recent call last): File &quot;test2.py&quot;, line 126, in &lt;module&gt; writer.add_embedding(features, metadata=class_labels, label_img = images.unsqueeze(1)) File &quot;/home/dgjung/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py&quot;, line 798, in add_embedding fs = tf.io.gfile.get_filesystem(save_path) AttributeError: module 'tensorflow._api.v2.io.gfile' has no attribute 'get_filesystem' </code></pre> <p>My code is from the <a href="https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html?highlight=tensorboard" rel="nofollow noreferrer">pytorch tutorial</a>.</p> <pre><code># log embeddings features = images.view(-1, 28 * 28) writer.add_embedding(features, metadata=class_labels, label_img=images.unsqueeze(1)) </code></pre> <p>My environment is:</p> <ul> <li>PyTorch : '1.7.1'</li> <li>Tensorflow : '2.4.1'</li> <li>Python : 3.8.8</li> </ul> <p>Please help me!</p>
<p>This is a nasty little bug that someone needs to patch. There's a conversation about it <a href="https://github.com/pytorch/pytorch/issues/47139" rel="nofollow noreferrer">here</a>.</p> <p>I was able to fix it by adding:</p> <pre><code>import tensorboard as tb tf.io.gfile = tb.compat.tensorflow_stub.io.gfile </code></pre> <p>to torch/utils/tensorboard/writer.py just above line 798</p> <p>glhf</p>
557
pytorch
What is volatile variable in Pytorch
https://stackoverflow.com/questions/49837638/what-is-volatile-variable-in-pytorch
<p>What is volatile attribute of a Variable in Pytorch? Here's a sample code for defining a variable in PyTorch.</p> <pre><code>datatensor = Variable(data, volatile=True) </code></pre>
<p>Basically, set the input to a network to volatile if you are doing inference only and won't be running backpropagation in order to conserve memory.</p> <p>From the <a href="http://pytorch.org/docs/stable/notes/autograd.html#volatile" rel="noreferrer">docs</a>:</p> <blockquote> <p>Volatile is recommended for purely inference mode, when you’re sure you won’t be even calling .backward(). It’s more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False.</p> </blockquote> <p>Edit: The volatile keyword has been <a href="http://pytorch.org/2018/04/22/0_4_0-migration-guide.html" rel="noreferrer">deprecated</a> as of pytorch version 0.4.0</p>
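Since `volatile` was removed in 0.4.0, the equivalent modern idiom is the `torch.no_grad()` context manager (recent versions also offer `torch.inference_mode()`, which is stricter and a bit faster) — a minimal sketch:

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

# Normal forward pass: autograd tracks the computation
out_train = model(x)
print(out_train.requires_grad)  # True

# Modern replacement for volatile=True: disable autograd for inference,
# saving the memory that would otherwise hold intermediate activations
with torch.no_grad():
    out = model(x)
print(out.requires_grad)  # False
```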
558
pytorch
PyTorch equivalence for softmax_cross_entropy_with_logits
https://stackoverflow.com/questions/46218566/pytorch-equivalence-for-softmax-cross-entropy-with-logits
<p>I was wondering is there an equivalent PyTorch loss function for TensorFlow's <code>softmax_cross_entropy_with_logits</code>?</p>
<blockquote> <p>is there an equivalent PyTorch loss function for TensorFlow's <code>softmax_cross_entropy_with_logits</code>?</p> </blockquote> <h3 id="torch.nn.functional.cross_entropy-ihhr"><code>torch.nn.functional.cross_entropy</code></h3> <p>This takes logits as inputs (performing <code>log_softmax</code> internally). Here &quot;logits&quot; are just some values that are not probabilities (i.e. not necessarily in the interval <code>[0,1]</code>).</p> <p>But, logits are also the values that will be converted to probabilities. If you consider the name of the tensorflow function you will understand it is a pleonasm (since the <code>with_logits</code> part assumes <code>softmax</code> will be called).</p> <p>The PyTorch implementation looks like this:</p> <pre><code>loss = F.cross_entropy(x, target) </code></pre> <p>Which is equivalent to:</p> <pre><code>lp = F.log_softmax(x, dim=-1) loss = F.nll_loss(lp, target) </code></pre> <p>It is not <code>F.binary_cross_entropy_with_logits</code> because this function assumes multi-label classification:</p> <pre><code>F.sigmoid + F.binary_cross_entropy = F.binary_cross_entropy_with_logits </code></pre> <p>It is not <code>torch.nn.functional.nll_loss</code> either because this function takes log-probabilities (after <code>log_softmax()</code>) not logits.</p>
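A quick numeric check of the equivalence claimed above (shapes and seed are arbitrary):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(5, 3)              # 5 samples, 3 classes (raw scores, not probabilities)
target = torch.tensor([0, 2, 1, 1, 0])  # class indices

loss_a = F.cross_entropy(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits, dim=-1), target)

print(torch.allclose(loss_a, loss_b))  # True
```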
559
pytorch
Pytorch backpropagation
https://stackoverflow.com/questions/75745988/pytorch-backpropagation
<p>When there are max and absolute-value operations in a PyTorch model, how does PyTorch compute gradients for these operations during backpropagation? Please give a detailed answer, thank you!</p>
<p><code>torch.abs</code> is non-differentiable only at 0, and it seems that pytorch implements a 0 derivative over some interval [-epsilon,+epsilon] near 0 <a href="https://discuss.pytorch.org/t/how-does-autograd-deal-with-non-differentiable-opponents-such-as-abs-and-max/34538" rel="nofollow noreferrer">https://discuss.pytorch.org/t/how-does-autograd-deal-with-non-differentiable-opponents-such-as-abs-and-max/34538</a>. <code>torch.max</code> is just an index selection operation, which has gradient of 1 for the selected indices and 0 for the non-selected indices. <a href="https://datascience.stackexchange.com/questions/11699/backprop-through-max-pooling-layers">https://datascience.stackexchange.com/questions/11699/backprop-through-max-pooling-layers</a></p>
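A tiny sketch confirming both subgradient conventions described above — `abs` uses `sign(x)` (so the gradient is exactly 0 at 0), and `max` routes a gradient of 1 only to the selected element:

```python
import torch

# abs: derivative is sign(x); PyTorch uses subgradient 0 exactly at 0
x = torch.tensor([-2.0, 0.0, 3.0], requires_grad=True)
x.abs().sum().backward()
print(x.grad)  # tensor([-1., 0., 1.])

# max: gradient of 1 flows only to the selected (arg-max) element
y = torch.tensor([1.0, 5.0, 2.0], requires_grad=True)
y.max().backward()
print(y.grad)  # tensor([0., 1., 0.])
```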
560
pytorch
How does pytorch backprop through argmax?
https://stackoverflow.com/questions/54969646/how-does-pytorch-backprop-through-argmax
<p>I'm building Kmeans in pytorch using gradient descent on centroid locations, instead of expectation-maximisation. Loss is the sum of square distances of each point to its nearest centroid. To identify which centroid is nearest to each point, I use argmin, which is not differentiable everywhere. However, pytorch is still able to backprop and update weights (centroid locations), giving similar performance to sklearn kmeans on the data.</p> <p>Any ideas how this is working, or how I can figure this out within pytorch? Discussion on pytorch github suggests argmax is not differentiable: <a href="https://github.com/pytorch/pytorch/issues/1339" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/issues/1339</a>.</p> <p>Example code below (on random pts):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import torch num_pts, batch_size, n_dims, num_clusters, lr = 1000, 100, 200, 20, 1e-5 # generate random points vector = torch.from_numpy(np.random.rand(num_pts, n_dims)).float() # randomly pick starting centroids idx = np.random.choice(num_pts, size=num_clusters) kmean_centroids = vector[idx][:,None,:] # [num_clusters,1,n_dims] kmean_centroids = torch.tensor(kmean_centroids, requires_grad=True) for t in range(4001): # get batch idx = np.random.choice(num_pts, size=batch_size) vector_batch = vector[idx] distances = vector_batch - kmean_centroids # [num_clusters, #pts, #dims] distances = torch.sum(distances**2, dim=2) # [num_clusters, #pts] # argmin membership = torch.min(distances, 0)[1] # [#pts] # cluster distances cluster_loss = 0 for i in range(num_clusters): subset = torch.transpose(distances,0,1)[membership==i] if len(subset)!=0: # to prevent NaN cluster_loss += torch.sum(subset[:,i]) cluster_loss.backward() print(cluster_loss.item()) with torch.no_grad(): kmean_centroids -= lr * kmean_centroids.grad kmean_centroids.grad.zero_() </code></pre>
<p>As alvas noted in the comments, <code>argmax</code> is not differentiable. However, once you compute it and assign each datapoint to a cluster, the derivative of loss with respect to the location of these clusters is well-defined. This is what your algorithm does.</p> <p>Why does it work? If you had only one cluster (so that the <code>argmax</code> operation didn't matter), your loss function would be quadratic, with minimum at the mean of the data points. Now with multiple clusters, you can see that your loss function is piecewise (in higher dimensions think volumewise) quadratic - for any set of centroids <code>[C1, C2, C3, ...]</code> each data point is assigned to some centroid <code>CN</code> and the loss is <em>locally</em> quadratic. The extent of this locality is given by all alternative centroids <code>[C1', C2', C3', ...]</code> for which the assignment coming from <code>argmax</code> remains the same; within this region the <code>argmax</code> can be treated as a constant, rather than a function and thus the derivative of <code>loss</code> is well-defined.</p> <p>Now, in reality, it's unlikely you can treat <code>argmax</code> as constant, but you can still treat the naive "argmax-is-a-constant" derivative as pointing approximately towards a minimum, because the majority of data points are likely to indeed belong to the same cluster between iterations. And once you get close enough to a local minimum such that the points no longer change their assignments, the process can converge to a minimum.</p> <p>Another, more theoretical way to look at it is that you're doing an approximation of expectation maximization. Normally, you would have the "compute assignments" step, which is mirrored by <code>argmax</code>, and the "minimize" step which boils down to finding the minimizing cluster centers given the current assignments. 
The minimum is given by <code>d(loss)/d([C1, C2, ...]) == 0</code>, which for a quadratic loss is given analytically by the means of data points within each cluster. In your implementation, you're solving the same equation but with a gradient descent step. In fact, if you used a 2nd order (Newton) update scheme instead of 1st order gradient descent, you would be implicitly reproducing exactly the baseline EM scheme.</p>
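<p>The EM connection in the last paragraph can be illustrated numerically. The sketch below (hypothetical 1-D data and two clusters; NumPy stands in for the autograd math) holds the <code>argmin</code> assignments fixed and shows that a full Newton step on the resulting quadratic loss lands exactly on the per-cluster means, i.e. the classic EM "minimize" update:</p>

```python
import numpy as np

# Hypothetical 1-D toy data, two clusters
pts = np.array([0.0, 1.0, 10.0, 11.0])
centroids = np.array([0.5, 8.0])

# "argmin" assignment step -- treated as a constant while differentiating
assign = np.argmin(np.abs(pts[:, None] - centroids[None, :]), axis=1)

# With assignments fixed, loss = sum_k sum_{x in cluster k} (C_k - x)^2,
# so d(loss)/d(C_k) = 2 * sum_{x in cluster k} (C_k - x)
grad = np.array([2 * np.sum(centroids[k] - pts[assign == k]) for k in range(2)])

# Second derivative is 2 * (number of points assigned to cluster k)
hess = np.array([2 * np.sum(assign == k) for k in range(2)])

# A full Newton step lands exactly on the per-cluster means -- the EM update
new_centroids = centroids - grad / hess
means = np.array([pts[assign == k].mean() for k in range(2)])
```

<p>Plain gradient descent with a small learning rate takes partial steps towards the same targets, which is why the training loop in the question still converges.</p>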
561
pytorch
Pytorch install with anaconda error
https://stackoverflow.com/questions/45906706/pytorch-install-with-anaconda-error
<p>I get this error:</p> <pre><code>C:\Users&gt;conda install pytorch torchvision -c soumith Fetching package metadata ............. PackageNotFoundError: Package missing in current win-64 channels: - pytorch </code></pre> <p>I got <code>conda install pytorch torchvision -c soumith</code> from <a href="http://pytorch.org/" rel="nofollow noreferrer">Pytorch official website</a> and I have OSX/conda/3.6/none for settings on Pytorch site(should be correct). I am new to conda, any tips how to solve this?</p>
<p>Use the following commands to install pytorch on windows</p> <p>for Windows 10 and Windows Server 2016, CUDA 8</p> <pre><code>conda install -c peterjc123 pytorch cuda80 </code></pre> <p>for Windows 10 and Windows Server 2016, CUDA 9</p> <pre><code>conda install -c peterjc123 pytorch cuda90 </code></pre> <p>for Windows 7/8/8.1 and Windows Server 2008/2012, CUDA 8</p> <pre><code>conda install -c peterjc123 pytorch_legacy cuda80 </code></pre>
562
pytorch
Pytorch tensor indexing
https://stackoverflow.com/questions/57071002/pytorch-tensor-indexing
<p>I am currently working on converting some code from tensorflow to pytorch, and I encountered a problem with the <a href="https://www.tensorflow.org/api_docs/python/tf/gather" rel="nofollow noreferrer"><code>tf.gather</code></a> function; there is no direct equivalent in pytorch.</p> <p>What I am trying to do is basically indexing. I have two tensors: a feature tensor of shape <code>[minibatch, 60, 2]</code> and an indexing tensor of shape <code>[minibatch, 8]</code>; call the first tensor <code>A</code> and the second one <code>B</code>.</p> <p>In Tensorflow, it is directly done with <code>tf.gather(A, B, batch_dims=1)</code>.</p> <p>How do I achieve this in pytorch?</p> <p>I have tried <code>A[B]</code> indexing, which does not seem to work,</p> <p>and <code>A[0][B[0]]</code> works, but the output shape is <code>[8, 2]</code>.</p> <p>I need the shape <code>[minibatch, 8, 2]</code>.</p> <p>It would probably work if I stacked the per-sample results into <code>[stack, 8, 2]</code>, but I have no idea how to do it.</p> <pre><code>tensorflow out = tf.gather(logits, indices, batch_dims=1) </code></pre> <pre><code>pytorch out = A[B] -&gt; something like this will be great </code></pre> <p>Output shape of <code>[minibatch, 8, 2]</code></p>
<p>I think you are looking for <a href="https://pytorch.org/docs/stable/torch.html#torch.gather" rel="nofollow noreferrer"><code>torch.gather</code></a></p> <pre class="lang-py prettyprint-override"><code>out = torch.gather(A, 1, B[..., None].expand(*B.shape, A.shape[-1])) </code></pre>
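<p>If it helps to check the semantics outside of PyTorch: NumPy's <code>np.take_along_axis</code> behaves like <code>torch.gather</code>, so the same expand-then-gather expression can be sketched there (the shapes below are hypothetical, with a minibatch of 4):</p>

```python
import numpy as np

np.random.seed(0)
# Hypothetical shapes: A is [minibatch, 60, 2], B holds indices into axis 1
A = np.random.rand(4, 60, 2)
B = np.random.randint(0, 60, size=(4, 8))

# Expand B over the trailing feature dimension, then gather along axis 1;
# out[i, j, c] == A[i, B[i, j], c]
idx = np.broadcast_to(B[..., None], (4, 8, 2))
out = np.take_along_axis(A, idx, axis=1)
```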
563
pytorch
Where do I get a CPU-only version of PyTorch?
https://stackoverflow.com/questions/51730880/where-do-i-get-a-cpu-only-version-of-pytorch
<p>I'm trying to get a basic app running with Flask + PyTorch, and host it on Heroku. However, I run into the issue that the maximum slug size is 500mb on the free version, and PyTorch itself is ~500mb.</p> <p>After some google searching, someone wrote about finding a cpu-only version of PyTorch, and using that, which is much smaller <a href="https://www.codementor.io/@akshaysharma17/how-and-why-i-built-an-ml-based-python-api-hosted-on-heroku-j74qbfwn1" rel="noreferrer">how-and-why-i-built-an-ml-based-python-api-hosted-on-heroku-j74qbfwn1</a>.</p> <p>However, I'm pretty lost as to how this is done, and the person didn't document this at all. Any advice is appreciated, thanks.</p> <p>EDIT:</p> <p>To be more specific about my problem, I tried installing torch by (as far as I understand), including a requirements.txt which listed torch as a dependency. Current I have: torch==0.4.1. However this doesn't work bc of size.</p> <p>My question is, do you know what I could write in the requirements file to get the cpu-only version of torch that is smaller, or alternatively, if the requirements.txt doesn't work for this, what I would do instead, to get the cpu version.</p>
<p>Per the Pytorch website, you can install <code>pytorch-cpu</code> with</p> <pre><code>conda install pytorch-cpu torchvision-cpu -c pytorch </code></pre> <p>You can see from the files on <a href="https://anaconda.org/pytorch/pytorch-cpu/files" rel="noreferrer">Anaconda cloud</a>, that the size varies between 26 and 56MB depending on the OS where you want to install it.</p> <p>You can get the wheel from <code>http://download.pytorch.org/whl/cpu/</code>. The wheel is 87MB.</p> <p>You can setup the installation by putting the link to the wheel in the <code>requirements.txt</code> file. If you use Python 3.6 on Heroku:</p> <pre><code>http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl </code></pre> <p>otherwise, for Python 2.7:</p> <pre><code>http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl </code></pre> <p>For example if your requirements are <code>pytorch-cpu</code>, <code>numpy</code> and <code>scipy</code> and you're using Python 3.6, the <code>requirements.txt</code> would look like:</p> <pre><code>http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl numpy scipy </code></pre>
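<p>On newer PyTorch releases (an assumption relative to the 0.4.1-era links above), the CPU-only wheels are also published on a dedicated pip index, so a <code>requirements.txt</code> can select them without hard-coding a wheel URL:</p>

```
# requirements.txt -- CPU-only PyTorch from the dedicated index
--index-url https://download.pytorch.org/whl/cpu
torch
```

<p>Note that <code>--index-url</code> applies to every requirement in the file, so it can be safer to keep packages that you want resolved from PyPI (e.g. <code>numpy</code>, <code>scipy</code>) in a separate requirements file or install them first.</p>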
564
pytorch
PyTorch and PyTorch-operator kubeflow pipelines
https://stackoverflow.com/questions/66201581/pytorch-and-pytorch-operator-kubeflow-pipelines
<p>I am trying to integrate pytorch and pytorch-operators into kubeflow pipelines and I am not able to get a good resource for both. Is this possible in the current implementation?</p> <p>I understand that TFJob and PyTorchJob all run training containers on top of a kubernetes cluster but I am trying to integrate them into a pipeline.</p>
565
pytorch
Pytorch version for cuda 12.2
https://stackoverflow.com/questions/76678846/pytorch-version-for-cuda-12-2
<p>I am unable to find the Pytorch version for cuda driver 12.2. Can anyone please guide me where can I find any material that helps.</p> <p>I have installed currently pytorch version 11.7. While training the model i am facing following error.</p> <p>** RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda)</p> <p>The detected CUDA version (12.2) mismatches the version that was used to compile PyTorch (11.7). Please make sure to use the same CUDA versions.**</p> <p>PS : I have nvidia driver 535</p> <p>Thanks in advance</p>
<p>You can install the nightly build. Note you should have <code>cudnn</code> installed already, I am using cudnn v8.9.3. The 12.1 PyTorch version works fine with CUDA v12.2.2:</p> <p><code>conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia</code></p>
566
pytorch
Pytorch custom activation functions?
https://stackoverflow.com/questions/55765234/pytorch-custom-activation-functions
<p>I'm having issues with implementing custom activation functions in Pytorch, such as Swish. How should I go about implementing and using custom activation functions in Pytorch?</p>
<p>There are <strong>four</strong> possibilities depending on what you are looking for. You will need to ask yourself two questions:</p> <p><strong>Q1)</strong> Will your activation function have learnable parameters?</p> <p>If <strong>yes</strong>, you have no choice but to create your activation function as an <code>nn.Module</code> class because you need to store those weights.</p> <p>If <strong>no</strong>, you are free to simply create a normal function, or a class, depending on what is convenient for you.</p> <p><strong>Q2)</strong> Can your activation function be expressed as a combination of existing PyTorch functions?</p> <p>If <strong>yes</strong>, you can simply write it as a combination of existing PyTorch functions and won't need to create a <code>backward</code> function which defines the gradient.</p> <p>If <strong>no</strong>, you will need to write the gradient by hand.</p> <p><strong>Example 1: SiLU function</strong></p> <p>The <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#:%7E:text=of%20this%20article.-,SiLU,-%5Bedit%5D" rel="noreferrer">SiLU</a> function <code>f(x) = x * sigmoid(x)</code> does not have any learned weights and can be written entirely with existing PyTorch functions, thus you can simply define it as a function:</p> <pre><code>def silu(x): return x * torch.sigmoid(x) </code></pre> <p>and then simply use it as you would <code>torch.relu</code> or any other activation function.</p> <p><strong>Example 2: SiLU with learned slope</strong></p> <p>In this case you have one learned parameter, the slope, thus you need to make a class of it. Note that the slope must be wrapped in <code>nn.Parameter</code> (not multiplied onto one afterwards, which would produce a plain tensor) so that it is registered and actually trained:</p> <pre><code>class LearnedSiLU(nn.Module): def __init__(self, slope = 1): super().__init__() self.slope = torch.nn.Parameter(slope * torch.ones(1)) def forward(self, x): return self.slope * x * torch.sigmoid(x) </code></pre> <p><strong>Example 3: with backward</strong></p> <p>If you have something for which you need to create your own gradient function, you can look at this
example: <a href="https://stackoverflow.com/questions/46509039/pytorch-define-custom-function">Pytorch: define custom function</a></p>
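<p>For Example 3, the piece you must supply yourself is the analytic gradient. As a hedged sketch (plain NumPy rather than <code>torch.autograd.Function</code>, so it runs without autograd): this is the derivative a hand-written <code>backward</code> for SiLU would return, verified against central finite differences:</p>

```python
import numpy as np

# Hand-derived gradient for SiLU:
# d/dx [x * sigmoid(x)] = s * (1 + x * (1 - s)), where s = sigmoid(x)
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    return x * sigmoid(x)

def silu_grad(x):
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

# Sanity-check the analytic gradient numerically
x = np.linspace(-3.0, 3.0, 7)
eps = 1e-6
numeric = (silu(x + eps) - silu(x - eps)) / (2 * eps)
```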
567
pytorch
PyTorch / Gensim - How do I load pre-trained word embeddings?
https://stackoverflow.com/questions/49710537/pytorch-gensim-how-do-i-load-pre-trained-word-embeddings
<p>I want to load a pre-trained word2vec embedding with gensim into a PyTorch embedding layer.</p> <p>How do I get the embedding weights loaded by gensim into the PyTorch embedding layer?</p>
<p>I just wanted to report my findings about loading a gensim embedding with PyTorch.</p> <hr> <ul> <li><h2>Solution for PyTorch <code>0.4.0</code> and newer:</h2></li> </ul> <p>From <code>v0.4.0</code> there is a new function <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding.from_pretrained" rel="noreferrer"><code>from_pretrained()</code></a> which makes loading an embedding very comfortable. Here is an example from the documentation.</p> <pre><code>import torch import torch.nn as nn # FloatTensor containing pretrained weights weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]]) embedding = nn.Embedding.from_pretrained(weight) # Get embeddings for index 1 input = torch.LongTensor([1]) embedding(input) </code></pre> <p>The weights from <a href="https://radimrehurek.com/gensim/" rel="noreferrer"><em>gensim</em></a> can easily be obtained by:</p> <pre><code>import gensim model = gensim.models.KeyedVectors.load_word2vec_format('path/to/file') weights = torch.FloatTensor(model.vectors) # formerly syn0, which is soon deprecated </code></pre> <p>As noted by @Guglie: in newer gensim versions the weights can be obtained through <a href="https://radimrehurek.com/gensim/models/word2vec.html" rel="noreferrer"><code>model.wv</code></a>:</p> <pre><code>weights = torch.FloatTensor(model.wv.vectors) </code></pre> <hr> <ul> <li><h2>Solution for PyTorch version <code>0.3.1</code> and older:</h2></li> </ul> <p>I'm using version <code>0.3.1</code> and <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding.from_pretrained" rel="noreferrer"><code>from_pretrained()</code></a> isn't available in this version.</p> <p>Therefore I created my own <code>from_pretrained</code> so I can also use it with <code>0.3.1</code>.</p> <p><em>Code for <code>from_pretrained</code> for PyTorch versions <code>0.3.1</code> or lower:</em></p> <pre><code>def from_pretrained(embeddings, freeze=True): assert embeddings.dim() == 2, \ 'Embeddings parameter is expected to be 2-dimensional' rows, cols = 
embeddings.shape embedding = torch.nn.Embedding(num_embeddings=rows, embedding_dim=cols) embedding.weight = torch.nn.Parameter(embeddings) embedding.weight.requires_grad = not freeze return embedding </code></pre> <p>The embedding can be loaded then just like this:</p> <pre><code>embedding = from_pretrained(weights) </code></pre> <p>I hope this is helpful for someone.</p>
568
pytorch
pytorch backports.functools_lru_cache conflict
https://stackoverflow.com/questions/53137588/pytorch-backports-functools-lru-cache-conflict
<p>I'm using <em>Windows 10</em>, and my the installation dir is: <code>anacoda2/python2.7/python3.6/opencv/cdua10/cudnn ....</code> </p> <p>Now I want to install pytorch, with this command: <code>conda install pytorch -c pytorch</code></p> <p>But as result I'm getting this error:</p> <pre><code>C:\Users\MM&gt;conda install pytorch -c pytorch Solving environment: failed UnsatisfiableError: The following specifications were found to be in conflict: - backports.functools_lru_cache - pytorch Use "conda info &lt;package&gt;" to see the dependencies for each package. </code></pre> <p>The version of <code>backports.functools_lru_cache</code> is <code>1.4</code>.</p> <p><strong>Does anyone know how to solve this?</strong></p>
<p>I faced a similar problem while installing PyTorch. I was able to resolve it by creating a new Anaconda environment with Python 3.6 (e.g. <code>conda create -n pytorch_env python=3.6</code>) and installing PyTorch there.</p>
569
pytorch
How to add a new dimension to a PyTorch tensor?
https://stackoverflow.com/questions/65470807/how-to-add-a-new-dimension-to-a-pytorch-tensor
<p>In NumPy, I would do</p> <pre class="lang-py prettyprint-override"><code>a = np.zeros((4, 5, 6)) a = a[:, :, np.newaxis, :] assert a.shape == (4, 5, 1, 6) </code></pre> <p>How to do the same in PyTorch?</p>
<pre><code>a = torch.zeros(4, 5, 6) a = a[:, :, None, :] assert a.shape == (4, 5, 1, 6) </code></pre>
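<p>PyTorch also exposes this as an explicit op, <code>Tensor.unsqueeze(dim)</code> (e.g. <code>a.unsqueeze(2)</code>), which mirrors NumPy's <code>np.expand_dims</code>. The NumPy side of that equivalence:</p>

```python
import numpy as np

a = np.zeros((4, 5, 6))
b = a[:, :, None, :]      # None plays the role of np.newaxis
c = np.expand_dims(a, 2)  # PyTorch analogue: a.unsqueeze(2)
```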
570
pytorch
pytorch delete model from gpu
https://stackoverflow.com/questions/53350905/pytorch-delete-model-from-gpu
<p>I want to make a cross validation in my project based on Pytorch. And I didn't find any method that pytorch provided to delete the current model and empty the memory of GPU. Could you tell that how can I do it?</p>
<p>Freeing memory in PyTorch works as it does with the normal Python garbage collector. This means once all references to an <em>Python-Object</em> are gone it will be deleted.</p> <p>You can delete references by using the <a href="https://stackoverflow.com/questions/20847149/how-does-del-operator-work-in-list-in-python"><code>del</code></a> operator:</p> <pre class="lang-py prettyprint-override"><code>del model </code></pre> <p>You have to make sure though that there is no reference to the respective object left, otherwise the memory won't be freed.</p> <p>So once you've deleted all references of your <code>model</code>, it should be deleted and the memory freed.</p> <p>If you want to learn more about memory management you can take a look here: <a href="https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management" rel="noreferrer">https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management</a></p>
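<p>The &quot;no reference left&quot; condition can be sketched with plain Python (a stand-in <code>Model</code> class and a <code>weakref</code> probe, no GPU required). One PyTorch-specific detail on top of this: even after the object is collected, the caching allocator keeps the freed GPU memory reserved, so call <code>torch.cuda.empty_cache()</code> if you need <code>nvidia-smi</code> to reflect the release:</p>

```python
import gc
import weakref

class Model:
    """Stand-in for an nn.Module."""

m = Model()
alias = m              # a second reference keeps the object alive
probe = weakref.ref(m)

del m
gc.collect()
still_alive = probe() is not None  # `alias` still holds the object

del alias
gc.collect()
collected = probe() is None        # last reference gone, object freed
```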
571
pytorch
Pytorch Install
https://stackoverflow.com/questions/74107561/pytorch-install
<p>I have an error when installing PyTorch, please help me: CondaHTTPError: HTTP 000 CONNECTION FAILED for url <a href="https://conda.anaconda.org/pytorch/win-64/current_repodata.json" rel="nofollow noreferrer">https://conda.anaconda.org/pytorch/win-64/current_repodata.json</a> Elapsed: -</p> <p>An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way. 'https://conda.anaconda.org/pytorch/win-64'</p>
<blockquote> <p>An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way</p> </blockquote> <p>The possible reason for the HTTP error could be an unstable network connection or a corporate firewall.</p> <p>If it was an unstable network connection, <strong>as mentioned in the error message, retry the installation steps that failed.</strong></p> <p>If you are behind a corporate firewall, you might need additional steps to add your proxy server to the <code>.condarc</code> file on your machine.</p> <ul> <li>Since you are on Windows, you could open the Anaconda prompt and run <code>conda info</code> to figure out where the <code>.condarc</code> file is located.</li> <li>Find the proxy by running <code>echo %HTTP_PROXY%</code> in the Anaconda Prompt (or <code>echo $env:HTTP_PROXY</code> in PowerShell); <code>echo &quot;$http_proxy&quot;</code> is the bash syntax and won't work on Windows. Copy the proxy.</li> <li>Open the <code>.condarc</code> file and paste the proxy under the <code>proxy_servers</code> section.</li> </ul> <p>For more details see: <a href="https://conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#configure-conda-for-use-behind-a-proxy-server-proxy-servers" rel="nofollow noreferrer">Anaconda Docs: Configure conda for use behind a proxy server (proxy_servers)</a></p>
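<p>For the last step, a minimal <code>.condarc</code> fragment (the host, port and credentials below are placeholders):</p>

```
proxy_servers:
    http: http://user:password@proxy.example.com:8080
    https: https://user:password@proxy.example.com:8080
```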
572
pytorch
Pytorch 1.13 dataloader is significantly faster than Pytorch 2.0.1
https://stackoverflow.com/questions/77417920/pytorch-1-13-dataloader-is-significantly-faster-than-pytorch-2-0-1
<p>I've noticed that PyTorch 2.0.1 DataLoader is significantly slower than PyTorch 1.13 DataLoader, especially when the number of workers is set to something other than 0. I've done some research and found that this is due to a change in the way that PyTorch handles multiprocessing in version 2.0.1. In PyTorch 1.13, the DataLoader uses a separate process for each worker. In PyTorch 2.0.1, the DataLoader uses a thread pool to manage the workers.</p> <p>I'm using a simple DataLoader, but I need to stick to PyTorch 2.0.1 for other reasons. I'm looking for a workaround to speed up my DataLoader.</p> <p>Steps to reproduce:</p> <p>Load a dataset using PyTorch 1.13 DataLoader with the following settings: num_workers: 32 pin_memory: True Time the data loading process. Expected behavior:</p> <p>The data loading process should be faster with PyTorch 2.0.1 DataLoader.</p> <p>Actual behavior:</p> <p>The data loading process is significantly slower with PyTorch 2.0.1 DataLoader.</p> <p>Environment:</p> <p>PyTorch version: 1.13, 2.0.1 Python version: 3.9 Operating system: Ubuntu 20.04 Question:</p> <p>Is there a workaround to speed up the PyTorch 2.0.1 DataLoader?</p> <p>Additional notes:</p> <p>I've tried reducing the number of workers, but this doesn't significantly improve the performance. I've also tried using a smaller batch size, but this also doesn't significantly improve the performance. I appreciate any help you can provide.</p>
573
pytorch
PyTorch is installed but not imported
https://stackoverflow.com/questions/53165990/pytorch-is-installed-but-not-imported
<p>I am trying to build PyTorch. Reference site:<a href="https://github.com/hughperkins/pytorch" rel="nofollow noreferrer">https://github.com/hughperkins/pytorch</a></p> <p>but, When we performed unit test, The following error occur.</p> <pre><code>ImportError while importing test module '/home/usr2/pytorch/test/testByteTensor.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: test/testByteTensor.py:2: in &lt;module&gt; import PyTorch E ImportError: No module named 'PyTorch' __________________ ERROR collecting test/testDoubleTensor.py ___________________ </code></pre> <p>I understand that PyTorch is not imported. but It is confirmed that pytorch is installed. Is there a way to solve this problem?</p> <p>environment</p> <pre><code>ubuntu 16.04 python3.5 cuda9.2 </code></pre>
<p>In fact, you should do <code>import torch</code> instead of <code>import PyTorch</code>.<br> Here is what works for me (I installed it using conda):</p> <pre><code>&gt;&gt;&gt; import torch &gt;&gt;&gt; torch.version &gt;&gt;&gt; &lt;module 'torch.version' from '/home/koke_cacao/miniconda3/envs/ml/lib/python3.6/site-packages/torch/version.py'&gt; &gt;&gt;&gt; print(torch.__version__) &gt;&gt;&gt; 0.4.1.post2 &gt;&gt;&gt; a = torch.FloatTensor(2,3) &gt;&gt;&gt; tensor([[-7.4368e-13, 3.0911e-41, -9.6122e-13], [ 3.0911e-41, -7.3734e-13, 3.0911e-41]]) </code></pre> <p>Edit: this version works with no problem at all for me. But if you insist on running the unit tests, maybe other people can solve that part of your problem.</p>
574
pytorch
Using pytorch Dataloader in pytorch kmeans
https://stackoverflow.com/questions/63348557/using-pytorch-dataloader-in-pytorch-kmeans
<p>I'm trying to perform a K-means clusterization using pytorch-kmeans (<a href="https://github.com/subhadarship/kmeans_pytorch" rel="nofollow noreferrer">github</a>). I have approx. 27M arrays with 512 elements each; the approx. size of the numpy array is 51GB, which is bigger than my GPU RAM (32GB). Is there a way I can batch the arrays using the pytorch DataLoader class?</p>
575
pytorch
400% higher error with PyTorch compared with identical Keras model (with Adam optimizer)
https://stackoverflow.com/questions/73600481/400-higher-error-with-pytorch-compared-with-identical-keras-model-with-adam-op
<hr /> <p><strong>TLDR</strong>:</p> <p><em>A simple (single hidden-layer) feed-forward Pytorch model trained to predict the function <code>y = sin(X1) + sin(X2) + ... sin(X10)</code> substantially underperforms an identical model built/trained with Keras. Why is this so and what can be done to mitigate the difference in performance?</em></p> <hr /> <p>In training a regression model, I noticed that PyTorch drastically underperforms an identical model built with Keras.</p> <p><strong>This phenomenon has been observed and reported previously</strong>:</p> <ul> <li><p><a href="https://discuss.pytorch.org/t/the-same-model-produces-worse-results-on-pytorch-than-on-tensorflow/5380" rel="noreferrer">The same model produces worse results on pytorch than on tensorflow</a></p> </li> <li><p><a href="https://discuss.pytorch.org/t/cnn-model-in-pytorch-giving-30-less-accuracy-to-tensoflowflow-model/85410" rel="noreferrer">CNN model in pytorch giving 30% less accuracy to Tensoflowflow model</a>:</p> </li> <li><p><a href="https://discuss.pytorch.org/t/pytorch-adam-vs-tensorflow-adam/74471" rel="noreferrer">PyTorch Adam vs Tensorflow Adam</a></p> </li> <li><p><a href="https://discuss.pytorch.org/t/suboptimal-convergence-when-compared-with-tensorflow-model/5099" rel="noreferrer">Suboptimal convergence when compared with TensorFlow model</a></p> </li> <li><p><a href="https://discuss.pytorch.org/t/rnn-and-adam-slower-convergence-than-keras/11278" rel="noreferrer">RNN and Adam: slower convergence than Keras</a></p> </li> <li><p><a href="https://discuss.pytorch.org/t/pytorch-comparable-but-worse-than-keras-on-a-simple-feed-forward-network/9928" rel="noreferrer">PyTorch comparable but worse than keras on a simple feed forward network</a></p> </li> <li><p><a href="https://www.reddit.com/r/pytorch/comments/ox0g4e/why_is_the_pytorch_model_doing_worse_than_the/" rel="noreferrer">Why is the PyTorch model doing worse than the same model in Keras even with the same weight 
initialization?</a></p> </li> <li><p><a href="https://stackoverflow.com/questions/59344571/why-keras-behave-better-than-pytorch-under-the-same-network-configuration">Why Keras behave better than Pytorch under the same network configuration?</a></p> </li> </ul> <p><strong>The following explanations and suggestions have been made previously as well</strong>:</p> <ol> <li><p>Using the same decimal precision (32 vs 64): <a href="https://discuss.pytorch.org/t/the-same-model-produces-worse-results-on-pytorch-than-on-tensorflow/5380" rel="noreferrer">1</a>, <a href="https://www.reddit.com/r/MachineLearning/comments/7nw67c/d_pytorch_are_adam_and_rmsprop_okay/" rel="noreferrer">2</a>,</p> </li> <li><p>Using a CPU instead of a GPU: <a href="https://discuss.pytorch.org/t/the-same-model-produces-worse-results-on-pytorch-than-on-tensorflow/5380" rel="noreferrer">1</a>,<a href="https://discuss.pytorch.org/t/rnn-and-adam-slower-convergence-than-keras/11278/2?u=smth" rel="noreferrer">2</a></p> </li> <li><p>Change <code>retain_graph=True</code> to <code>create_graph=True</code> in computing the 2nd derivative with <code>autograd.grad</code>: <a href="https://discuss.pytorch.org/t/pytorch-adam-vs-tensorflow-adam/74471" rel="noreferrer">1</a></p> </li> <li><p>Check if keras is using a regularizer, constraint, bias, or loss function in a different way from pytorch: <a href="https://discuss.pytorch.org/t/suboptimal-convergence-when-compared-with-tensorflow-model/5099/2" rel="noreferrer">1</a>,<a href="https://discuss.pytorch.org/t/pytorch-comparable-but-worse-than-keras-on-a-simple-feed-forward-network/9928/4" rel="noreferrer">2</a></p> </li> <li><p>Ensure you are computing the validation loss in the same way: <a href="https://discuss.pytorch.org/t/suboptimal-convergence-when-compared-with-tensorflow-model/5099/3" rel="noreferrer">1</a></p> </li> <li><p>Use the same initialization routine: <a 
href="https://discuss.pytorch.org/t/suboptimal-convergence-when-compared-with-tensorflow-model/5099/3" rel="noreferrer">1</a>,<a href="https://stackoverflow.com/questions/59344571/why-keras-behave-better-than-pytorch-under-the-same-network-configuration">2</a></p> </li> <li><p>Training the pytorch model for longer epochs: <a href="https://discuss.pytorch.org/t/rnn-and-adam-slower-convergence-than-keras/11278?u=smth" rel="noreferrer">1</a></p> </li> <li><p>Trying several random seeds: <a href="https://discuss.pytorch.org/t/rnn-and-adam-slower-convergence-than-keras/11278/8?u=smth" rel="noreferrer">1</a></p> </li> <li><p>Ensure that <code>model.eval()</code> is called in validation step when training pytorch model: <a href="https://discuss.pytorch.org/t/pytorch-comparable-but-worse-than-keras-on-a-simple-feed-forward-network/9928" rel="noreferrer">1</a></p> </li> <li><p>The main issue is with the Adam optimizer, not the initialization: <a href="https://www.reddit.com/r/MachineLearning/comments/7nw67c/d_pytorch_are_adam_and_rmsprop_okay/" rel="noreferrer">1</a></p> </li> </ol> <p>To understand this issue, I trained a simple two-layer neural network (much simpler than my original model) in Keras and PyTorch, using the same hyperparameters and initialization routines, and following all the recommendations listed above. However, the PyTorch model results in a mean squared error (MSE) that is 400% higher than the MSE of the Keras model.</p> <p><strong>Here is my code:</strong></p> <p><strong>0. Imports</strong></p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.stats import pearsonr from sklearn.preprocessing import MinMaxScaler from sklearn import metrics from torch.utils.data import Dataset, DataLoader import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras.regularizers import L2 from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam </code></pre> <p><strong>1. 
Generate a reproducible dataset</strong></p> <pre class="lang-py prettyprint-override"><code> def get_data(): np.random.seed(0) Xtrain = np.random.normal(0, 1, size=(7000,10)) Xval = np.random.normal(0, 1, size=(700,10)) ytrain = np.sum(np.sin(Xtrain), axis=-1) yval = np.sum(np.sin(Xval), axis=-1) scaler = MinMaxScaler() ytrain = scaler.fit_transform(ytrain.reshape(-1,1)).reshape(-1) yval = scaler.transform(yval.reshape(-1,1)).reshape(-1) return Xtrain, Xval, ytrain, yval class XYData(Dataset): def __init__(self, X, y): super(XYData, self).__init__() self.X = torch.tensor(X, dtype=torch.float32) self.y = torch.tensor(y, dtype=torch.float32) self.len = len(y) def __getitem__(self, index): return (self.X[index], self.y[index]) def __len__(self): return self.len # Data, dataset, and dataloader Xtrain, Xval, ytrain, yval = get_data() traindata = XYData(Xtrain, ytrain) valdata = XYData(Xval, yval) trainloader = DataLoader(dataset=traindata, shuffle=True, batch_size=32, drop_last=False) valloader = DataLoader(dataset=valdata, shuffle=True, batch_size=32, drop_last=False) </code></pre> <p><strong>2. 
Build Keras and PyTorch models with identical hyperparameters and initialization methods</strong></p> <pre class="lang-py prettyprint-override"><code>class TorchLinearModel(nn.Module): def __init__(self, input_dim=10, random_seed=0): super(TorchLinearModel, self).__init__() _ = torch.manual_seed(random_seed) self.hidden_layer = nn.Linear(input_dim,100) self.initialize_layer(self.hidden_layer) self.output_layer = nn.Linear(100, 1) self.initialize_layer(self.output_layer) def initialize_layer(self, layer): _ = torch.nn.init.xavier_normal_(layer.weight) #_ = torch.nn.init.xavier_uniform_(layer.weight) _ = torch.nn.init.constant(layer.bias,0) def forward(self, x): x = self.hidden_layer(x) x = self.output_layer(x) return x def mean_squared_error(ytrue, ypred): return torch.mean(((ytrue - ypred) ** 2)) def build_torch_model(): torch_model = TorchLinearModel() optimizer = optim.Adam(torch_model.parameters(), betas=(0.9,0.9999), eps=1e-7, lr=1e-3, weight_decay=0) return torch_model, optimizer def build_keras_model(): x = layers.Input(shape=10) z = layers.Dense(units=100, activation=None, use_bias=True, kernel_regularizer=None, bias_regularizer=None)(x) y = layers.Dense(units=1, activation=None, use_bias=True, kernel_regularizer=None, bias_regularizer=None)(z) keras_model = Model(x, y, name='linear') optimizer = Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.9999, epsilon=1e-7, amsgrad=False) keras_model.compile(optimizer=optimizer, loss='mean_squared_error') return keras_model # Instantiate models torch_model, optimizer = build_torch_model() keras_model = build_keras_model() </code></pre> <p><strong>3. 
Train PyTorch model for 100 epochs:</strong></p> <pre class="lang-py prettyprint-override"><code> torch_trainlosses, torch_vallosses = [], [] for epoch in range(100): # Training losses = [] _ = torch_model.train() for i, (x,y) in enumerate(trainloader): optimizer.zero_grad() ypred = torch_model(x) loss = mean_squared_error(y, ypred) _ = loss.backward() _ = optimizer.step() losses.append(loss.item()) torch_trainlosses.append(np.mean(losses)) # Validation losses = [] _ = torch_model.eval() with torch.no_grad(): for i, (x, y) in enumerate(valloader): ypred = torch_model(x) loss = mean_squared_error(y, ypred) losses.append(loss.item()) torch_vallosses.append(np.mean(losses)) print(f&quot;epoch={epoch+1}, train_loss={torch_trainlosses[-1]:.4f}, val_loss={torch_vallosses[-1]:.4f}&quot;) </code></pre> <p><strong>4. Train Keras model for 100 epochs:</strong></p> <pre class="lang-py prettyprint-override"><code>history = keras_model.fit(Xtrain, ytrain, sample_weight=None, batch_size=32, epochs=100, validation_data=(Xval, yval)) </code></pre> <p><strong>5. Loss in training history</strong></p> <pre class="lang-py prettyprint-override"><code>plt.plot(torch_trainlosses, color='blue', label='PyTorch Train') plt.plot(torch_vallosses, color='blue', linestyle='--', label='PyTorch Val') plt.plot(history.history['loss'], color='brown', label='Keras Train') plt.plot(history.history['val_loss'], color='brown', linestyle='--', label='Keras Val') plt.legend() </code></pre> <p><a href="https://i.sstatic.net/RajJk.png" rel="noreferrer"><img src="https://i.sstatic.net/RajJk.png" alt="enter image description here" /></a></p> <p><em>Keras records a much lower error in the training. Since this may be due to a difference in how Keras computes the loss, I calculated the prediction error on the validation set with sklearn.metrics.mean_squared_error</em></p> <p><strong>6. 
Validation error after training</strong></p> <pre class="lang-py prettyprint-override"><code>ypred_keras = keras_model.predict(Xval).reshape(-1) ypred_torch = torch_model(torch.tensor(Xval, dtype=torch.float32)) ypred_torch = ypred_torch.detach().numpy().reshape(-1) mse_keras = metrics.mean_squared_error(yval, ypred_keras) mse_torch = metrics.mean_squared_error(yval, ypred_torch) print('Percent error difference:', (mse_torch / mse_keras - 1) * 100) r_keras = pearsonr(yval, ypred_keras)[0] r_pytorch = pearsonr(yval, ypred_torch)[0] print(&quot;r_keras:&quot;, r_keras) print(&quot;r_pytorch:&quot;, r_pytorch) plt.scatter(ypred_keras, yval); plt.title('Keras'); plt.show(); plt.close() plt.scatter(ypred_torch, yval); plt.title('Pytorch'); plt.show(); plt.close() </code></pre> <pre class="lang-py prettyprint-override"><code>Percent error difference: 479.1312469426776 r_keras: 0.9115184443702814 r_pytorch: 0.21728812737220082 </code></pre> <p><a href="https://i.sstatic.net/y9y8x.png" rel="noreferrer"><img src="https://i.sstatic.net/y9y8x.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/KaLYZ.png" rel="noreferrer"><img src="https://i.sstatic.net/KaLYZ.png" alt="enter image description here" /></a></p> <p><em>The correlation of predicted values with ground truth is 0.912 for Keras but 0.217 for Pytorch, and the error for Pytorch is 479% higher!</em></p> <p><strong>7. Other trials</strong> I also tried:</p> <ul> <li>Lowering the learning rate for Pytorch (lr=1e-4), <strong>R increases from 0.217 to 0.576</strong>, but it's still much worse than Keras (r=0.912).</li> <li>Increasing the learning rate for Pytorch (lr=1e-2), <strong>R is worse at 0.095</strong></li> <li>Training numerous times with different random seeds. The <strong>performance is roughly the same</strong>, regardless.</li> <li>Trained for longer than 100 epochs. 
No improvement was observed!</li> <li>Used <code>torch.nn.init.xavier_uniform_</code> instead of <code>torch.nn.init.xavier_normal_</code> in the initialization of the weights. R <strong>improves from 0.217 to 0.639</strong>, but it's still worse than Keras (0.912).</li> </ul> <hr /> <p><strong>What can be done to ensure that the PyTorch model converges to a reasonable error comparable with the Keras model?</strong></p> <hr />
<p>The problem here is unintentional broadcasting in the PyTorch training loop.</p> <p>The result of a <code>nn.Linear</code> operation always has shape <code>[B,D]</code>, where <code>B</code> is the batch size and <code>D</code> is the output dimension. Therefore, in your <code>mean_squared_error</code> function <code>ypred</code> has shape <code>[32,1]</code> and <code>ytrue</code> has shape <code>[32]</code>. By the <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="noreferrer">broadcasting rules</a> used by NumPy and PyTorch this means that <code>ytrue - ypred</code> has shape <code>[32,32]</code>. What you almost certainly meant is for <code>ypred</code> to have shape <code>[32]</code>. This can be accomplished in many ways; probably the most readable is to use <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.flatten.html?highlight=tensor%20flatten#torch.Tensor.flatten" rel="noreferrer"><code>Tensor.flatten</code></a></p> <pre class="lang-py prettyprint-override"><code>class TorchLinearModel(nn.Module): ... def forward(self, x): x = self.hidden_layer(x) x = self.output_layer(x) return x.flatten() </code></pre> <p>which produces the following train/val curves</p> <p><a href="https://i.sstatic.net/JjIdk.png" rel="noreferrer"><img src="https://i.sstatic.net/JjIdk.png" alt="enter image description here" /></a></p>
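The shape mismatch described in this answer can be reproduced with NumPy alone, since NumPy and PyTorch follow the same broadcasting rules; a minimal sketch (shapes only, no real model):

```python
import numpy as np

# ypred comes out of an nn.Linear-style layer with shape [B, 1];
# ytrue is the target vector with shape [B]
ypred = np.zeros((32, 1))
ytrue = np.zeros(32)

# Broadcasting silently expands [32] - [32, 1] into a [32, 32] matrix,
# so the "MSE" is averaged over 1024 pairwise differences, not 32
diff_bad = ytrue - ypred
print(diff_bad.shape)   # (32, 32)

# Flattening the prediction restores the intended elementwise difference
diff_good = ytrue - ypred.flatten()
print(diff_good.shape)  # (32,)
```

The same `flatten()` at the end of `forward` is what fixes the PyTorch model above.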
576
pytorch
pytorch compute pairwise difference: Incorrect result in NumPy vs PyTorch and different PyTorch versions
https://stackoverflow.com/questions/55884299/pytorch-compute-pairwise-difference-incorrect-result-in-numpy-vs-pytorch-and-di
<p>Suppose I have two arrays, and I want to calculate the row-wise differences between every pair of rows of two matrices of the same shape, as follows. This is how the procedure looks in numpy, and I want to replicate the same thing in pytorch.</p> <pre><code>&gt;&gt;&gt; a = np.array([[1,2,3],[4,5,6]]) &gt;&gt;&gt; b = np.array([[3,4,5],[5,3,2]]) &gt;&gt;&gt; c = a[np.newaxis,:,:] - b[:,np.newaxis,:] &gt;&gt;&gt; print(c) [[[-2 -2 -2] [ 1 1 1]] [[-4 -1 1] [-1 2 4]]] </code></pre> <p>BTW, I tried the same thing using pytorch, but it does not work. Is there any way we could accomplish the same thing in pytorch?</p> <pre><code>&gt;&gt;&gt; import torch &gt;&gt;&gt; a = torch.from_numpy(a) &gt;&gt;&gt; b = torch.from_numpy(b) &gt;&gt;&gt; c1 = a[None,:,:] &gt;&gt;&gt; c2 = b[:,None,:] &gt;&gt;&gt; diff = c1 - c2 &gt;&gt;&gt; print(diff.size()) torch.Size([1, 2, 3]) </code></pre> <p>I was actually looking for <code>torch.Size([2,2,3])</code>. (P.S. I also tried unsqueeze from pytorch, but it doesn't work).</p>
<p>The issue arises because you are using <strong>PyTorch 0.1</strong>. With PyTorch 1.0.1, the same NumPy operation generalizes to PyTorch without any modifications or issues. Here is a snapshot of the run in Colab.</p> <p><a href="https://i.sstatic.net/axKBu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/axKBu.png" alt="enter image description here"></a></p> <p>As we can see, we indeed get the same results.</p> <hr> <p>Here is an attempt to reproduce the incorrect result you faced:</p> <pre><code>&gt;&gt;&gt; t1 = torch.from_numpy(a) &gt;&gt;&gt; t2 = torch.from_numpy(b) &gt;&gt;&gt; t1[np.newaxis, ...] - t2[:, np.newaxis, ...] (0 ,.,.) = -2 -2 -2 -1 2 4 [torch.LongTensor of size 1x2x3] &gt;&gt;&gt; torch.__version__ '0.1.12_1' </code></pre> <p>So, please upgrade your PyTorch version to <strong>1.0.1</strong>!</p> <hr> <h3>Digging into more details:</h3> <p>The main reason why it didn't work in <strong>PyTorch version 0.1</strong> is that broadcasting was not completely implemented then. Basically, the tensor promotion to 3D followed by a subtraction can be achieved in two steps (in version <strong>1.0.1</strong>):</p> <pre><code>&gt;&gt;&gt; t1[:1, ] - t2 &gt;&gt;&gt; tensor([[-2, -2, -2], # t1_r1 [-4, -1, 1]]) # t1_r2 &gt;&gt;&gt; t1[1:, ] - t2 &gt;&gt;&gt; tensor([[ 1, 1, 1], # t2_r1 [-1, 2, 4]]) # t2_r2 </code></pre> <p>Stacking the results of the above two operations row-wise in the order (t1_r1, t2_r1, t1_r2, t2_r2), with each row treated as 2D, gives us the shape <code>(2, 2, 3)</code>.</p> <p>Now, trying the above two steps in version 0.1 throws the error:</p> <blockquote> <p>RuntimeError: inconsistent tensor size at /opt/conda/conda-bld/pytorch_1501971235237/work/pytorch-0.1.12/torch/lib/TH/generic/THTensorMath.c:831</p> </blockquote>
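Because PyTorch ≥ 1.0 follows NumPy's broadcasting semantics, the expected result can be cross-checked in NumPy alone; a torch-free sketch of the same subtraction and its two-step equivalent:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([[3, 4, 5], [5, 3, 2]])

# One-step broadcast version: diff[i, j] = a[j] - b[i], shape (2, 2, 3)
diff = a[np.newaxis, :, :] - b[:, np.newaxis, :]
print(diff.shape)  # (2, 2, 3)

# Two-step version mirroring the answer: subtract each row of a from all
# rows of b, then interleave the results along axis 1
step = np.stack([a[0] - b, a[1] - b], axis=1)
print(np.array_equal(diff, step))  # True
```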
577
pytorch
Anaconda always wants to replace my GPU Pytorch version with the CPU Pytorch version when updating
https://stackoverflow.com/questions/62630186/anaconda-always-want-to-replace-my-gpu-pytorch-version-to-cpu-pytorch-version-wh
<p>I have a newly installed Anaconda3 (version 2020.02) environment, and I have installed the GPU version of Pytorch with the command <code>conda install pytorch torchvision cudatoolkit=10.2 -c pytorch</code>. I have verified that my Pytorch indeed runs fine on the GPU.</p> <p>However, whenever I update Anaconda with <code>conda update --all</code>, the following message always shows:</p> <pre><code>The following packages will be SUPERSEDED by a higher-priority channel: pytorch pytorch::pytorch-1.5.0-py3.7_cuda102_~ --&gt; pkgs/main::pytorch-1.5.0-cpu_py37h9f948e0_0 </code></pre> <p>In other words, it always wants to replace my GPU version of Pytorch with the CPU version. If I continue the update, it installs the CPU version of Pytorch and my previous GPU Pytorch code no longer runs. I have also tried the command <code>conda update --all --no-channel-priority</code> but the message still shows.</p> <p>To my knowledge I have never modified Anaconda channels or added custom channels. How can I get rid of this message?</p>
<p>It's happening because, by default, conda prefers packages from a higher priority channel over any version from a lower priority channel. -- <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-channels.html" rel="nofollow noreferrer">conda docs</a></p> <p>You can solve this problem by setting the priority of the <code>pytorch</code> channel higher than the default channel by changing the order in <code>.condarc</code> -- <a href="https://stackoverflow.com/q/48547046/6210807">more here</a></p> <pre><code>channels: - pytorch - defaults - conda-forge channel_priority: true </code></pre> <p>or you can upgrade by passing the channel as an option:</p> <pre><code>conda update --all -c pytorch </code></pre>
578
pytorch
how to install pytorch in python2.7?
https://stackoverflow.com/questions/57835948/how-to-install-pytorch-in-python2-7
<p>I am using python 2.7 in a virtual environment. I tried to install pytorch in python 2.7 but I got the errors below:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment: - pytorch-cpu -&gt; python[version='3.5.*|3.6.*'] - pytorch-cpu -&gt; python[version='&gt;=3.5,&lt;3.6.0a0|&gt;=3.6,&lt;3.7.0a0|&gt;=3.7,&lt;3.8.0a0'] If python is on the left-most side of the chain, that's the version you've asked for. When python appears to the right, that indicates that the thing on the left is somehow not available for the python version you are constrained to. Your current python version is (python=2.7). Note that conda will not change your python version to a different minor version unless you explicitly specify that. The following specifications were found to be incompatible with each other: Package wheel conflicts for: python=2.7 -&gt; pip -&gt; wheel pytorch-cpu -&gt; python[version='&gt;=3.6,&lt;3.7.0a0'] -&gt; pip -&gt; wheel Package vc conflicts for: python=2.7 -&gt; sqlite[version='&gt;=3.27.2,&lt;4.0a0'] -&gt; vc[version='14.*|&gt;=14,&lt;15.0a0|&gt;=14.1,&lt;15.0a0'] python=2.7 -&gt; vc[version='9.*|&gt;=9,&lt;10.0a0'] pytorch-cpu -&gt; numpy[version='&gt;=1.11'] -&gt; vc[version='14|14.*|&gt;=14,&lt;15.0a0'] pytorch-cpu -&gt; vc[version='&gt;=14.1,&lt;15.0a0'] Package cffi conflicts for: pytorch-cpu -&gt; cffi pytorch-cpu -&gt; python[version='&gt;=3.6,&lt;3.7.0a0'] -&gt; pip -&gt; requests -&gt; urllib3[version='&gt;=1.21.1,&lt;1.25'] -&gt; cryptography[version='&gt;=1.3.4'] -&gt; cffi[version='&gt;=1.7'] python=2.7 -&gt; pip -&gt; requests -&gt; urllib3[version='&gt;=1.21.1,&lt;1.25'] -&gt; cryptography[version='&gt;=1.3.4'] -&gt; cffi[version='&gt;=1.7'] Package pip conflicts for: python=2.7
-&gt; pip pytorch-cpu -&gt; python[version='&gt;=3.6,&lt;3.7.0a0'] -&gt; pip Package setuptools conflicts for: python=2.7 -&gt; pip -&gt; setuptools pytorch-cpu -&gt; python[version='&gt;=3.6,&lt;3.7.0a0'] -&gt; pip -&gt; setuptools Package msgpack-python conflicts for: python=2.7 -&gt; pip -&gt; cachecontrol -&gt; msgpack-python pytorch-cpu -&gt; python[version='&gt;=3.6,&lt;3.7.0a0'] -&gt; pip -&gt; cachecontrol -&gt; msgpack-python</code></pre> </div> </div> </p> <p>I tried <code>conda install pytorch-cpu -c pytorch</code> and the link (<a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">https://pytorch.org/get-started/locally/</a>), but it did not work. So what should I do to install torch in python 2.7? I want to install the pytorch CPU version.</p> <p>Please help :)</p>
<p>Here's the link to the <a href="https://pytorch.org" rel="nofollow noreferrer">PyTorch official download page</a></p> <p>From here, you can choose the python version (2.7) and CUDA (None) and other relevant details based on your environment and OS.</p> <p>Other helpful links:</p> <ul> <li><a href="https://stackoverflow.com/questions/49918479/how-to-install-pytorch-in-anaconda-with-conda-or-pip">windows</a><br></li> <li><a href="https://medium.com/@bryant.kou/how-to-install-pytorch-on-windows-step-by-step-cc4d004adb2a" rel="nofollow noreferrer">windows</a><br></li> <li><a href="https://dev.to/berry_clione/install-pytorch-on-mac-by-pip-2fga" rel="nofollow noreferrer">mac</a><br></li> <li><a href="https://www.learnopencv.com/installing-deep-learning-frameworks-on-ubuntu-with-cuda-support/" rel="nofollow noreferrer">ubuntu</a><br></li> <li><a href="https://www.javatpoint.com/pytorch-installation" rel="nofollow noreferrer">all</a></li> </ul>
579
pytorch
Unable to install Pytorch in Ubuntu
https://stackoverflow.com/questions/63272687/unable-to-install-pytorch-in-ubuntu
<p>I'm using the following command to install pytorch in my conda environment.</p> <pre><code>conda install pytorch=0.4.1 cuda90 -c pytorch </code></pre> <p>However, I'm getting the following error</p> <blockquote> <p>Solving environment: failed</p> <p>PackagesNotFoundError: The following packages are not available from current channels:</p> <ul> <li>pytorch=0.4.1</li> <li>cuda90</li> </ul> <p>Current channels:</p> <ul> <li><a href="https://conda.anaconda.org/pytorch/linux-32" rel="nofollow noreferrer">https://conda.anaconda.org/pytorch/linux-32</a></li> <li><a href="https://conda.anaconda.org/pytorch/noarch" rel="nofollow noreferrer">https://conda.anaconda.org/pytorch/noarch</a></li> <li><a href="https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-32" rel="nofollow noreferrer">https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-32</a></li> <li><a href="https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/noarch" rel="nofollow noreferrer">https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/noarch</a></li> <li><a href="https://repo.anaconda.com/pkgs/main/linux-32" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/main/linux-32</a></li> <li><a href="https://repo.anaconda.com/pkgs/main/noarch" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/main/noarch</a></li> <li><a href="https://repo.anaconda.com/pkgs/free/linux-32" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/free/linux-32</a></li> </ul> </blockquote> <blockquote> <p>To search for alternate channels that may provide the conda package you're looking for, navigate to</p> <pre><code>https://anaconda.org </code></pre> </blockquote> <p>How can I sort this out? I have ofcourse installed cuda 9 and nvcc works.</p>
<p>Go directly to the pytorch website and follow the instructions for your setup and it will tell you exactly the command required to install - <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">pytorch - get started</a></p> <p>For example:</p> <p><a href="https://i.sstatic.net/D5XqP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D5XqP.png" alt="enter image description here" /></a></p> <p>If you're looking for older versions of PyTorch, the version history and commands to install can be found here - <a href="https://pytorch.org/get-started/previous-versions/" rel="nofollow noreferrer">Installing Previous Versions of PyTorch</a></p> <p>If this doesn't work for you, your last option is to build from source yourself. Here's the GitHub repo for version 0.4.1 - <a href="https://github.com/pytorch/pytorch/tree/v0.4.1" rel="nofollow noreferrer">pytorch at 0.4.1</a>. The steps to install from source are outlined on the repo <a href="https://github.com/pytorch/pytorch/tree/v0.4.1#from-source" rel="nofollow noreferrer">here</a>.</p>
580
pytorch
Using pytorch Cuda on MacBook Pro
https://stackoverflow.com/questions/63423463/using-pytorch-cuda-on-macbook-pro
<p>I am using a MacBook Pro (16-inch, 2019, macOS 10.15.5 (19F96))</p> <p>GPU</p> <ul> <li>AMD Radeon Pro 5300M</li> <li>Intel UHD Graphics 630</li> </ul> <p>I am trying to use Pytorch with Cuda on my mac.</p> <p>All of the guides I saw assume that I have an Nvidia graphics card.</p> <p>I found this: <a href="https://github.com/pytorch/pytorch/issues/10657" rel="noreferrer">https://github.com/pytorch/pytorch/issues/10657</a> issue, but it looks like I need to install ROCm, and according to their <a href="https://github.com/pytorch/pytorch/issues/10657" rel="noreferrer">Supported Operating Systems</a>, it only supports Linux.</p> <p>Is it possible to run Pytorch on the GPU using a mac and an AMD graphics card?</p>
<h2>PyTorch now supports training using Metal.</h2> <p>Announcement: <a href="https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/" rel="noreferrer">https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/</a></p> <p>To get started, install the latest nightly build of PyTorch: <a href="https://pytorch.org/get-started/locally/" rel="noreferrer">https://pytorch.org/get-started/locally/</a></p> <hr /> <h2>Answer pre May 2022</h2> <p>Unfortunately, no GPU acceleration is available when using Pytorch on macOS. CUDA has not been available on macOS for a while, and it only runs on NVIDIA GPUs. AMD's equivalent library, ROCm, requires Linux.</p> <p>If you are working with macOS 12.0 or later and would be willing to use TensorFlow instead, you can use the Mac optimized build of TensorFlow, which supports GPU training using Apple's own GPU acceleration library Metal.</p> <p>Currently, you need Python 3.8 (&lt;=3.7 and &gt;=3.9 don't work) to run it. To install, run:</p> <pre><code>pip3 install tensorflow-macos pip3 install tensorflow-metal </code></pre> <p>You may need to uninstall existing tensorflow distributions first or work in a virtual environment.</p> <p>Then you can just</p> <pre><code>import tensorflow as tf tf.test.is_gpu_available() # should return True </code></pre>
581
pytorch
PyTorch: error message &quot;torch has no [...] member&quot;
https://stackoverflow.com/questions/50319943/pytorch-error-message-torch-has-no-member
<p>Good evening, I have just installed PyTorch 0.4.0 and I'm trying to carry out the first tutorial "What is PyTorch?" I have written a Tutorial.py file which I try to execute with Visual Studio Code.</p> <p>Here is the code:</p> <pre><code>from __future__ import print_function import torch print (torch.__version__) x = x = torch.rand(5, 3) print(x) </code></pre> <p>Unfortunately, when I try to debug it, I get an error message: "torch has no rand member"</p> <p>This is true with any member function of torch I may try.</p> <p>Can anybody help me please?</p>
<p><em>In case you haven't got a solution to your problem or someone else encounters it.</em></p> <p>The error is raised because Pylint (<em>a Python static code analysis tool</em>) does not recognize <code>rand</code> as a member function. You can either configure Pylint to <em>ignore</em> this problem or you can whitelist torch (<em>better solution</em>) to remove the lint errors by adding the following to your <code>.pylintrc</code> file.</p> <pre><code>[TYPECHECK] # List of members which are set dynamically and missed by Pylint inference # system, and so shouldn't trigger E1101 when accessed. generated-members=numpy.*, torch.* </code></pre> <p>In Visual Studio Code, you could also add the following to the user settings:</p> <pre><code>"python.linting.pylintArgs": [ "--generated-members=numpy.*,torch.*" ] </code></pre> <p>The issue is discussed <a href="https://github.com/pytorch/pytorch/issues/701" rel="noreferrer">here</a> on the PyTorch GitHub page.</p>
582
pytorch
PyTorch and CUDA driver
https://stackoverflow.com/questions/52562352/pytorch-and-cuda-driver
<p>I have CUDA 9.2 installed. For example:</p> <pre><code>(base) c:\&gt;nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Wed_Apr_11_23:16:30_Central_Daylight_Time_2018 Cuda compilation tools, release 9.2, V9.2.88 </code></pre> <p>I installed PyTorch on Windows 10 using:</p> <pre><code>conda install pytorch cuda92 -c pytorch pip3 install torchvision </code></pre> <p>I ran the test script:</p> <pre><code>(base) c:\&gt;python Python 3.6.5 |Anaconda custom (64-bit)| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; from __future__ import print_function &gt;&gt;&gt; import torch &gt;&gt;&gt; x = torch.rand(5, 3) &gt;&gt;&gt; print(x) tensor([[0.7041, 0.5685, 0.4036], [0.3089, 0.5286, 0.3245], [0.3504, 0.8638, 0.1118], [0.6517, 0.9209, 0.6801], [0.0315, 0.1923, 0.8720]]) &gt;&gt;&gt; quit() </code></pre> <p>So far, so good. Then I ran:</p> <pre><code>(base) c:\&gt;python Python 3.6.5 |Anaconda custom (64-bit)| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import torch &gt;&gt;&gt; torch.cuda.is_available() False &gt;&gt;&gt; </code></pre> <p>Why did PyTorch say CUDA was not available?</p> <p>The GPU is a compute capability 3.0 Quadro K3000M:</p> <pre><code>(base) C:\Program Files\NVIDIA Corporation\NVSMI&gt;nvidia-smi.exe Mon Oct 01 16:36:47 2018 NVIDIA-SMI 385.54 Driver Version: 385.54 -------------------------------+----------------------+---------------------- GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 0 Quadro K3000M WDDM | 00000000:01:00.0 Off | N/A N/A 35C P0 N/A / N/A | 29MiB / 2048MiB | 0% Default </code></pre>
<p>Ever since <a href="https://github.com/pytorch/pytorch/releases/tag/v0.3.1" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/releases/tag/v0.3.1</a>, PyTorch binary releases have removed support for older GPUs with CUDA compute capability 3.0. According to <a href="https://en.wikipedia.org/wiki/CUDA" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/CUDA</a>, the compute capability of the Quadro K3000M is 3.0.</p> <p>Therefore, you might have to build pytorch from source or try other packages. Please refer to this thread for more information -- <a href="https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803" rel="nofollow noreferrer">https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803</a>.</p>
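The support check boils down to comparing (major, minor) compute-capability tuples of the kind returned by `torch.cuda.get_device_capability`. A torch-free sketch; note the minimum capability below is an illustrative assumption, since the linked release notes only state that capability 3.0 binaries were dropped:

```python
# Assumed minimum for the prebuilt binaries -- for illustration only
MIN_BINARY_CAPABILITY = (3, 5)

# (major, minor) for the Quadro K3000M, from the CUDA Wikipedia table
quadro_k3000m = (3, 0)

def supported_by_prebuilt_binaries(capability):
    # Tuples compare lexicographically: (3, 0) < (3, 5) < (6, 1)
    return capability >= MIN_BINARY_CAPABILITY

print(supported_by_prebuilt_binaries(quadro_k3000m))  # False
print(supported_by_prebuilt_binaries((6, 1)))         # True
```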
583
pytorch
What is the difference between .pt, .pth and .pwf extensions in PyTorch?
https://stackoverflow.com/questions/59095824/what-is-the-difference-between-pt-pth-and-pwf-extentions-in-pytorch
<p>I have seen in some code examples that people use .pwf as the model file saving format. But in the PyTorch documentation, .pt and .pth are recommended. I used .pwf and it worked fine for a small 1->16->16 convolutional network.</p> <p>My question is: what is the difference between these formats? Why is the .pwf extension not even recommended in the PyTorch documentation, and why do people still use it?</p>
<p>There are no differences between the extensions that were listed: <code>.pt</code>, <code>.pth</code>, <code>.pwf</code>. One can use whatever extension (s)he wants. So, if you're using <code>torch.save()</code> for saving models, then it by default uses python pickle (<code>pickle_module=pickle</code>) to save the objects and some metadata. Thus, you have the liberty to choose the extension you want, as long as it doesn't cause collisions with any other standardized extensions.</p> <p>Having said that, it is however <a href="https://discuss.pytorch.org/t/what-does-pth-tar-extension-mean/36697/3" rel="noreferrer"><strong>not</strong> recommended to use <code>.pth</code> extension</a> when checkpointing models because it collides with <a href="https://docs.python.org/3.8/library/site.html" rel="noreferrer">Python path (<code>.pth</code>) configuration files</a>. Because of this, I myself use <code>.pth.tar</code> or <code>.pt</code> but not <code>.pth</code>, or any other extensions.</p> <hr /> <p>The standard way of checkpointing models in PyTorch is not finalized yet. Here is an open issue, as of this writing: <a href="https://github.com/pytorch/pytorch/issues/14864" rel="noreferrer">Recommend a different file extension for models (.PTH is a special extension for Python) - issues/14864 </a></p> <p>It's been <a href="https://github.com/pytorch/pytorch/issues/14864#issuecomment-477195843" rel="noreferrer">suggested by @soumith</a> to use:</p> <ul> <li><code>.pt</code> for checkpointing models in pickle format</li> <li><code>.ptc</code> for checkpointing models in pytorch compiled (for JIT)</li> </ul>
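Since `torch.save` defaults to Python's pickle module, the file extension has no effect on the serialization itself. A torch-free sketch with plain pickle illustrates this; the checkpoint dict here is a made-up stand-in for a real state_dict:

```python
import os
import pickle
import tempfile

# Hypothetical checkpoint contents, standing in for a model state_dict
checkpoint = {"epoch": 10, "weights": [0.1, 0.2]}

# Any extension round-trips identically; only .pth risks colliding with
# Python's site-packages path-configuration files.
for ext in (".pt", ".pth.tar", ".pwf"):
    path = os.path.join(tempfile.gettempdir(), "model" + ext)
    with open(path, "wb") as f:
        pickle.dump(checkpoint, f)
    with open(path, "rb") as f:
        restored = pickle.load(f)
    assert restored == checkpoint
    os.remove(path)

print("all extensions round-trip")
```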
584
pytorch
pytorch versus autograd.numpy
https://stackoverflow.com/questions/62404451/pytorch-versus-autograd-numpy
<p>What are the big differences between pytorch and numpy, in particular, the autograd.numpy package? (since both of them can compute the gradient automatically for you.) I know that pytorch can move tensors to the GPU, but is this the only reason for choosing pytorch over numpy? While pytorch is well known for deep learning, obviously it can be used for almost any machine learning algorithm; its nn.Module structure is very flexible, and we don't have to confine ourselves to neural networks. (although I've never seen any neural network model written in numpy) So I'm wondering what the biggest difference between pytorch and numpy is. </p>
<p>I'm not sure if this question can be objectively answered, but besides the GPU functionality, it offers</p> <ul> <li>Parallelisation across GPUs</li> <li>Parallelisation across Machines</li> <li>DataLoaders / Manipulators incl. asynchronous pre-fetching</li> <li>Optimizers</li> <li>Predefined/Pretrained Models (can save you a lot of time)</li> <li>...</li> </ul> <p>But as you said, it's built around deep/machine learning, so that is what it's good at, while numpy (together with scipy) is much more general and can be used to solve a large range of other engineering problems (possibly using methods that are not en vogue at the moment).</p>
585
pytorch
PyTorch: How to get the shape of a Tensor as a list of int
https://stackoverflow.com/questions/46826218/pytorch-how-to-get-the-shape-of-a-tensor-as-a-list-of-int
<p>In numpy, <code>V.shape</code> gives a tuple of ints of dimensions of V.</p> <p>In tensorflow <code>V.get_shape().as_list()</code> gives a list of integers of the dimensions of V.</p> <p>In pytorch, <code>V.size()</code> gives a size object, but how do I convert it to ints?</p>
<p>For PyTorch v1.0 and possibly above:</p> <pre><code>&gt;&gt;&gt; import torch &gt;&gt;&gt; var = torch.tensor([[1,0], [0,1]]) # Using .size function, returns a torch.Size object. &gt;&gt;&gt; var.size() torch.Size([2, 2]) &gt;&gt;&gt; type(var.size()) &lt;class 'torch.Size'&gt; # Similarly, using .shape &gt;&gt;&gt; var.shape torch.Size([2, 2]) &gt;&gt;&gt; type(var.shape) &lt;class 'torch.Size'&gt; </code></pre> <p>You can cast any torch.Size object to a native Python list:</p> <pre><code>&gt;&gt;&gt; list(var.size()) [2, 2] &gt;&gt;&gt; type(list(var.size())) &lt;class 'list'&gt; </code></pre> <hr> <p>In PyTorch v0.3 and 0.4:</p> <p>Simply <code>list(var.size())</code>, e.g.:</p> <pre><code>&gt;&gt;&gt; import torch &gt;&gt;&gt; from torch.autograd import Variable &gt;&gt;&gt; from torch import IntTensor &gt;&gt;&gt; var = Variable(IntTensor([[1,0],[0,1]])) &gt;&gt;&gt; var Variable containing: 1 0 0 1 [torch.IntTensor of size 2x2] &gt;&gt;&gt; var.size() torch.Size([2, 2]) &gt;&gt;&gt; list(var.size()) [2, 2] </code></pre>
586
pytorch
Run pytorch in pyodide?
https://stackoverflow.com/questions/64358372/run-pytorch-in-pyodide
<p>Is there any way I can run the python library pytorch in pyodide? I tried installing pytorch with micropip but it gives this error message:</p> <blockquote> <p>Couldn't find a pure Python 3 wheel for 'pytorch'</p> </blockquote>
<p>In Pyodide, micropip only allows installing pure Python wheels (i.e. wheels that don't have compiled extensions). The filenames of those wheels end with <code>none-any.whl</code> (see <a href="https://www.python.org/dev/peps/pep-0427/#file-name-convention" rel="noreferrer">PEP 427</a>).</p> <p>If you look at the Pytorch wheels currently available on PyPI, their filenames end with e.g. <code>x86_64.whl</code>, which means they would only work on the <a href="https://en.wikipedia.org/wiki/X86-64" rel="noreferrer">x86_64 architecture</a> and not in the WebAssembly VM.</p> <p>The general solution to this is to add a package to the <a href="https://pyodide.readthedocs.io/en/latest/new_packages.html" rel="noreferrer">Pyodide build system</a>. However, in the case of pytorch there is a blocker: cffi is currently not supported in pyodide (<a href="https://github.com/iodide-project/pyodide/issues/761#issuecomment-701224843" rel="noreferrer">GH-pyodide#761</a>), while it is required at runtime by pytorch (see an example of a build <a href="https://github.com/conda-forge/pytorch-cpu-feedstock/blob/3867ff11725f05aff3f3bff97074fe80229d3ceb/recipe/meta.yaml#L94" rel="noreferrer">setup from conda-forge</a>). So it is unlikely that pytorch will be available in pyodide in the near future.</p>
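Per PEP 427, platform compatibility is encoded in the wheel filename, so the check micropip effectively performs can be sketched as a simple string test (the helper name here is invented for illustration):

```python
def is_pure_python_wheel(filename: str) -> bool:
    """Per PEP 427, wheel filenames end in ...-{abi}-{platform}.whl;
    'none-any.whl' means no compiled ABI and any platform."""
    return filename.endswith("none-any.whl")

# A pure-Python wheel, installable by micropip:
print(is_pure_python_wheel("requests-2.28.1-py3-none-any.whl"))  # True

# A CPython/x86_64 build with compiled extensions -- not installable:
print(is_pure_python_wheel(
    "torch-1.13.0-cp310-cp310-manylinux1_x86_64.whl"))  # False
```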
587
pytorch
PyTorch: Error 803: system has unsupported display driver / cuda driver combination (CUDA 11.7, pytorch 1.13.1)
https://stackoverflow.com/questions/75688024/pytorch-error-803-system-has-unsupported-display-driver-cuda-driver-combinat
<p>I can't get PyTorch to work.</p> <p>I have cuda and NVIDIA drivers installed</p> <pre><code>nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA Corporation Built on Wed_Jun__8_16:49:14_PDT_2022 Cuda compilation tools, release 11.7, V11.7.99 Build cuda_11.7.r11.7/compiler.31442593_0 </code></pre> <p>I have installed PyTorch using the following command</p> <pre><code>conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia </code></pre> <p>I am testing PyTorch using the following code snippet</p> <pre class="lang-py prettyprint-override"><code>import torch print(torch.__version__) print(torch.cuda.is_available()) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Using device:', device) print() #Additional Info when using cuda if device.type == 'cuda': print(torch.cuda.get_device_name(0)) print('Memory Usage:') print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB') print('Cached: ', round(torch.cuda.memory_reserved(0)/1024**3,1), 'GB') </code></pre> <p>Which tells me PyTorch can't access CUDA</p> <pre><code> 1.13.1 /home/vn/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at /opt/conda/conda-bld/pytorch_1670525541990/work/c10/cuda/CUDAFunctions.cpp:109.) return torch._C._cuda_getDeviceCount() &gt; 0 False Using device: cpu </code></pre> <p>In case it makes any difference I am running <code>6.1.15-060115-generic</code> kernel under <code>ubuntu 22.04</code></p>
<p>TL;DR - &quot;installed CUDA&quot; doesn't mean &quot;CUDA can be used by the card.&quot;</p> <p>Ultimately I had to get <code>nvidia-smi</code> to work. The easiest way to do that was to use the NVIDIA drivers that came with Ubuntu.</p>
588
pytorch
PyTorch: predict single example
https://stackoverflow.com/questions/51041128/pytorch-predict-single-example
<p>Following the example from: </p> <p><a href="https://github.com/jcjohnson/pytorch-examples" rel="noreferrer">https://github.com/jcjohnson/pytorch-examples</a></p> <p>This code trains successfully: </p> <pre><code># Code in file tensor/two_layer_net_tensor.py import torch device = torch.device('cpu') # device = torch.device('cuda') # Uncomment this to run on GPU # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random input and output data x = torch.randn(N, D_in, device=device) y = torch.randn(N, D_out, device=device) # Randomly initialize weights w1 = torch.randn(D_in, H, device=device) w2 = torch.randn(H, D_out, device=device) learning_rate = 1e-6 for t in range(500): # Forward pass: compute predicted y h = x.mm(w1) h_relu = h.clamp(min=0) y_pred = h_relu.mm(w2) # Compute and print loss; loss is a scalar, and is stored in a PyTorch Tensor # of shape (); we can get its value as a Python number with loss.item(). loss = (y_pred - y).pow(2).sum() print(t, loss.item()) # Backprop to compute gradients of w1 and w2 with respect to loss grad_y_pred = 2.0 * (y_pred - y) grad_w2 = h_relu.t().mm(grad_y_pred) grad_h_relu = grad_y_pred.mm(w2.t()) grad_h = grad_h_relu.clone() grad_h[h &lt; 0] = 0 grad_w1 = x.t().mm(grad_h) # Update weights using gradient descent w1 -= learning_rate * grad_w1 w2 -= learning_rate * grad_w2 </code></pre> <p>How can I predict a single example ? My experience thus far is utilising feedforward networks using just <code>numpy</code>. 
After training a model I utilise forward propagation, but for a single example:</p> <pre><code>new = np.asarray(toclassify) Z1 = np.dot(weight_layer_1, new.T) + bias_1 sigmoid_activation_1 = sigmoid(Z1) Z2 = np.dot(weight_layer_2, sigmoid_activation_1) + bias_2 sigmoid_activation_2 = sigmoid(Z2) </code></pre> <p><code>sigmoid_activation_2</code> contains the predicted vector attributes.</p> <p>Is the idiomatic PyTorch way the same? Use forward propagation in order to make a single prediction?</p>
<p>The code you posted is a simple demo trying to reveal the inner mechanism of such deep learning frameworks. These frameworks, including PyTorch, Keras, Tensorflow and many more, automatically handle the forward calculation, and the tracking and applying of gradients, for you, as long as you have defined the network structure. However, the code you showed still tries to do all of this manually. That's why predicting one example feels cumbersome: you are still doing it from scratch.</p> <p>In practice, we define a model class inheriting from <code>torch.nn.Module</code>, initialize all the network components (like linear layers, GRU or LSTM layers, etc.) in the <code>__init__</code> function, and define how these components interact with the network input in the <code>forward</code> function.</p> <p>Taking the example from the page you've provided:</p> <pre><code># Code in file nn/two_layer_net_module.py import torch class TwoLayerNet(torch.nn.Module): def __init__(self, D_in, H, D_out): &quot;&quot;&quot; In the constructor we instantiate two nn.Linear modules and assign them as member variables. &quot;&quot;&quot; super(TwoLayerNet, self).__init__() self.linear1 = torch.nn.Linear(D_in, H) self.linear2 = torch.nn.Linear(H, D_out) def forward(self, x): &quot;&quot;&quot; In the forward function we accept a Tensor of input data and we must return a Tensor of output data. We can use Modules defined in the constructor as well as arbitrary (differentiable) operations on Tensors. &quot;&quot;&quot; h_relu = self.linear1(x).clamp(min=0) y_pred = self.linear2(h_relu) return y_pred # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random Tensors to hold inputs and outputs x = torch.randn(N, D_in) y = torch.randn(N, D_out) # Construct our model by instantiating the class defined above. model = TwoLayerNet(D_in, H, D_out) # Construct our loss function and an Optimizer.
The call to model.parameters() # in the SGD constructor will contain the learnable parameters of the two # nn.Linear modules which are members of the model. loss_fn = torch.nn.MSELoss(size_average=False) optimizer = torch.optim.SGD(model.parameters(), lr=1e-4) for t in range(500): # Forward pass: Compute predicted y by passing x to the model y_pred = model(x) # Compute and print loss loss = loss_fn(y_pred, y) print(t, loss.item()) # Zero gradients, perform a backward pass, and update the weights. optimizer.zero_grad() loss.backward() optimizer.step() </code></pre> <p>The code defined a model named TwoLayerNet, it initializes two linear layers in the <code>__init__</code> function and further defines how these two linears interact with the input <code>x</code> in the <code>forward</code> function.</p> <p>Having the model defined, we can perform a single feed-forward operation as follows. Say <code>xu</code> contains a single unseen example:</p> <pre><code>xu = torch.randn(D_in) </code></pre> <p>Then this performs the prediction:</p> <pre><code>y_pred = model(torch.atleast_2d(xu)) </code></pre>
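To make the single-example workflow concrete, here is a minimal sketch (a hedged example, not part of the original answer: the layer sizes mirror the `TwoLayerNet` above, but the model is rebuilt inline with `nn.Sequential` for brevity; `model.eval()` and `torch.no_grad()` are the standard PyTorch inference idioms):

```python
import torch

# Same architecture as the TwoLayerNet above, written inline for brevity.
D_in, H, D_out = 1000, 100, 10
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

xu = torch.randn(D_in)               # a single unseen example, shape (D_in,)

model.eval()                         # switch layers like dropout/batch-norm to eval mode
with torch.no_grad():                # no gradient tracking needed for inference
    y_pred = model(xu.unsqueeze(0))  # add a batch dimension -> shape (1, D_in)

print(y_pred.shape)  # torch.Size([1, 10])
```

Unsqueezing to a batch of one (or `torch.atleast_2d`, as in the answer) keeps the input shape consistent with what the layers saw during training.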
589
pytorch
CUDA HOME in pytorch installation
https://stackoverflow.com/questions/52298146/cuda-home-in-pytorch-installation
<p>I installed pytorch via conda with cuda 7.5</p> <pre><code>conda install pytorch=0.3.0 cuda75 -c pytorch
&gt;&gt;&gt; import torch
&gt;&gt;&gt; torch.cuda.is_available()
True
</code></pre> <p>I didn't do any other installations for cuda other than this, since it looks like pytorch comes with cuda</p> <p>Now, I am trying to set up yolo2 <a href="https://github.com/longcw/yolo2-pytorch" rel="nofollow noreferrer">https://github.com/longcw/yolo2-pytorch</a></p> <p>However, I am getting an error in the <code>./make.sh</code> command</p> <p>this is the error </p> <blockquote> <p>OSError: The nvcc binary could not be located in your $PATH. Either add it to your path, or set $CUDAHOME</p> </blockquote> <p>I'm assuming I need to set CUDAHOME in my path, but I am not able to locate any cuda directory having the nvcc binary. Any pointers on it? </p>
<p>The CUDA package which is distributed via anaconda is not a complete CUDA toolkit installation. It only includes the necessary libraries and tools to support <code>numba</code> and <code>pyculib</code> and other GPU accelerated binary packages they distribute, like <code>tensorflow</code> and <code>pytorch</code>.</p> <p>If you need a fully functional CUDA toolkit (and it seems you do), you will need to install one yourself. Word to the wise -- install the same version that you have installed within anaconda. With a tiny bit of PATH modification, everything should just work.</p>
590
pytorch
pytorch PIP and CONDA error?
https://stackoverflow.com/questions/47943081/pytorch-pip-and-conda-error
<p>Guys I am new to python and deeplearning world</p> <p>I tried to install pytorch using conda</p> <p>I get this Error...</p> <pre><code>(base) C:\WINDOWS\system32&gt;conda install pytorch </code></pre> <blockquote> <p>`Solving environment: failed</p> </blockquote> <p>PackagesNotFoundError: The following packages are not available from current channels:</p> <ul> <li>pytorch</li> </ul> <p>Current channels:</p> <ul> <li><a href="https://repo.continuum.io/pkgs/main/win-64" rel="nofollow noreferrer">https://repo.continuum.io/pkgs/main/win-64</a></li> <li><a href="https://repo.continuum.io/pkgs/main/noarch" rel="nofollow noreferrer">https://repo.continuum.io/pkgs/main/noarch</a></li> <li><p><a href="https://repo.continuum.io/pkgs/free/win-64" rel="nofollow noreferrer">https://repo.continuum.io/pkgs/free/win-64</a></p> <p>Couldnt post all the channels due to reputation issue on stackoverflow...</p></li> </ul> <p>Trying Pip for installing Pytorch It just opens pytorch site after this error:</p> <blockquote> <p>(base) C:\WINDOWS\system32>pip install pytorch Collecting pytorch Using cached pytorch-0.1.2.tar.gz Building wheels for collected packages: pytorch Running setup.py bdist_wheel for pytorch ... 
error Complete output from command C:\ProgramData\Anaconda3\python.exe -u -c "import setuptools, tokenize;<strong>file</strong>='C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch\setup.py';f=getattr(tokenize, 'open', open)(<strong>file</strong>);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, <strong>file</strong>, 'exec'))" bdist_wheel -d C:\Users\micha\AppData\Local\Temp\tmpqmo4j08upip-wheel- --python-tag cp36: Traceback (most recent call last): File "", line 1, in File "C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch\setup.py", line 17, in raise Exception(message) Exception: You should install pytorch from <a href="http://pytorch.org" rel="nofollow noreferrer">http://pytorch.org</a></p> </blockquote> <hr> <p>Failed building wheel for pytorch Running setup.py clean for pytorch Failed to build pytorch Installing collected packages: pytorch Running setup.py install for pytorch ... error Complete output from command C:\ProgramData\Anaconda3\python.exe -u -c "import setuptools, tokenize;<strong>file</strong>='C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch\setup.py';f=getattr(tokenize, 'open', open)(<strong>file</strong>);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, <strong>file</strong>, 'exec'))" install --record C:\Users\micha\AppData\Local\Temp\pip-vms7q49e-record\install-record.txt --single-version-externally-managed --compile: Traceback (most recent call last): File "", line 1, in File "C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch\setup.py", line 13, in raise Exception(message) Exception: You should install pytorch from <a href="http://pytorch.org" rel="nofollow noreferrer">http://pytorch.org</a></p> <pre><code>---------------------------------------- </code></pre> <p>Exception: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pip\commands\install.py", line 342, in run prefix=options.prefix_path, File 
"C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_set.py", line 784, in install **kwargs File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_install.py", line 878, in install spinner=spinner, File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils__init__.py", line 707, in call_subprocess % (command_desc, proc.returncode, cwd)) pip.exceptions.InstallationError: Command "C:\ProgramData\Anaconda3\python.exe -u -c "import setuptools, tokenize;<strong>file</strong>='C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch\setup.py';f=getattr(tokenize, 'open', open)(<strong>file</strong>);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, <strong>file</strong>, 'exec'))" install --record C:\Users\micha\AppData\Local\Temp\pip-vms7q49e-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch\</p> <p>During handling of the above exception, another exception occurred:</p> <p>Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pip\basecommand.py", line 215, in main status = self.run(options, args) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\commands\install.py", line 385, in run requirement_set.cleanup_files() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_set.py", line 729, in cleanup_files req.remove_temporary_source() File "C:\ProgramData\Anaconda3\lib\site-packages\pip\req\req_install.py", line 977, in remove_temporary_source rmtree(self.source_dir) File "C:\ProgramData\Anaconda3\lib\site-packages\pip_vendor\retrying.py", line 49, in wrapped_f return Retrying(*dargs, **dkw).call(f, *args, **kw) File "C:\ProgramData\Anaconda3\lib\site-packages\pip_vendor\retrying.py", line 212, in call raise attempt.get() File "C:\ProgramData\Anaconda3\lib\site-packages\pip_vendor\retrying.py", line 247, in get six.reraise(self.value[0], self.value[1], self.value[2]) File 
"C:\ProgramData\Anaconda3\lib\site-packages\six.py", line 693, in reraise raise value File "C:\ProgramData\Anaconda3\lib\site-packages\pip_vendor\retrying.py", line 200, in call attempt = Attempt(fn(*args, **kwargs), attempt_number, False) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils__init__.py", line 102, in rmtree onerror=rmtree_errorhandler) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 494, in rmtree return _rmtree_unsafe(path, onerror) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 393, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\ProgramData\Anaconda3\lib\site-packages\pip\utils__init__.py", line 114, in rmtree_errorhandler func(path) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\micha\AppData\Local\Temp\pip-build-t86penrg\pytorch'</p> <p>I also tried conda install -c peterjc123 pytorch=0.1.12 and soumith but I get the same error not found </p> <p>Any Idea where I am going wrong </p> <p>Tried other forum tips and post also reinstalled Anaconda but still the same issue </p>
<p>I found how to use PyTorch on Windows here: <a href="https://www.superdatascience.com/pytorch/" rel="nofollow noreferrer">https://www.superdatascience.com/pytorch/</a></p> <pre><code>conda install -c peterjc123 pytorch
</code></pre> <p>did the trick for me.</p>
591
pytorch
PyTorch not downloading
https://stackoverflow.com/questions/57642019/pytorch-not-downloading
<p>I go to the PyTorch website and select the following options</p> <p>PyTorch Build: Stable (1.2)</p> <p>Your OS: Windows</p> <p>Package: pip</p> <p>Language: Python 3.7</p> <p>CUDA: None</p> <p>(All of these are correct)</p> <p>Then it displays a command to run</p> <p><code>pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html</code></p> <p>I have already tried to mix around the different options but none of them has worked.</p> <hr /> <p>ERROR: <code>ERROR: Could not find a version that satisfies the requirement torch==1.2.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.2.0+cpu</code></p> <p>I tried to do pip install pytorch but pytorch doesn't support pypi</p>
<p>I've been in the same situation. My problem was the Python version, specifically whether it was 32-bit or 64-bit.</p> <p>The Python I had installed was 32-bit. You should check which variant you installed: search for Python in the Windows Settings app and it will show whether the 32-bit or 64-bit build is installed.</p> <p>After I installed the 64-bit build of Python, the problem was solved.</p> <p>I hope you figure it out!</p> <p>environment: Windows 10</p>
592
pytorch
pytorch package too huge
https://stackoverflow.com/questions/69526212/pytorch-package-too-huge
<p>After installing pytorch via</p> <pre><code>RUN python3 -m pip install --no-cache-dir torch==1.9.1
</code></pre> <p>I realised that the corresponding docker layer is 1.78 GB. Is there any way to reduce the pytorch size? The current version is with GPU &amp; CUDA 10.2.</p>
<p>If you are not using a <strong>gpu</strong> with your docker container, the <strong>cpu</strong> version will be much smaller since it does not contain all the overhead of <strong>CUDA</strong>. To have a <strong>cpu</strong>-only version you can use:</p> <pre><code>RUN python3 -m pip install --no-cache-dir torch==1.9.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
</code></pre> <p>(The <code>-f</code> flag is needed because the <code>+cpu</code> builds are hosted on PyTorch's own package index, not on PyPI.)</p> <p>You can also use <strong>wheel</strong> files <strong>(.whl)</strong> to install pytorch; this approach can also be viable if you want to trim some unnecessary components. You can find them on the <a href="https://download.pytorch.org/whl/cpu" rel="nofollow noreferrer">pytorch website</a>:</p>
593
pytorch
Tensorflow to PyTorch
https://stackoverflow.com/questions/65092587/tensorflow-to-pytorch
<p>I'm transferring TensorFlow code to PyTorch.<br /> The lines below are the problem I couldn't solve yet.<br /> I'm not familiar with PyTorch, so it's not easy for me to find the matching methods in the PyTorch library.<br /> Can anyone help me?<br /> p.s. The shape of <code>alpha</code> is (batch, N).</p> <pre><code>alpha_cumsum = tf.cumsum(alpha, axis = 1)
len_batch = tf.shape(alpha_cumsum)[0]
rand_prob = tf.random_uniform(shape = [len_batch, 1], minval = 0., maxval = 1.)
alpha_relu = tf.nn.relu(rand_prob - alpha_cumsum)
alpha_index = tf.count_nonzero(alpha_relu, 1)
alpha_hard = tf.one_hot(alpha_index, len(a))
</code></pre>
<p>Below, each of your TensorFlow calls is followed by the corresponding PyTorch function. Most have the same name and are documented in the PyTorch docs (<a href="https://pytorch.org/docs/stable/index.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/index.html</a>)</p> <pre class="lang-py prettyprint-override"><code>tf.cumsum(alpha, axis = 1)
torch.cumsum(alpha, dim=1)

tf.shape(alpha_cumsum)[0]
alpha_cumsum.shape[0]

tf.random_uniform(shape = [len_batch, 1], minval = 0., maxval = 1.)
torch.rand([len_batch,1])

tf.nn.relu(rand_prob - alpha_cumsum)
torch.nn.functional.relu(rand_prob - alpha_cumsum)

tf.count_nonzero(alpha_relu, 1)
torch.count_nonzero(alpha_relu, dim=1)

tf.one_hot(alpha_index, len(a))
torch.nn.functional.one_hot(alpha_index, len(a))  # assuming len(a) is number of classes
</code></pre>
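Putting the translated calls together, here is a small end-to-end sketch (a hedged illustration, not from the original answer: the tensor values are made up, and the random draw is replaced by a fixed probability so the result is reproducible; note that TF's `one_hot` silently yields an all-zero row when the index equals the depth, while `torch.nn.functional.one_hot` would raise instead, so the index must stay below the number of classes):

```python
import torch
import torch.nn.functional as F

# Toy attention weights: one batch row that sums to 1 (illustrative values).
alpha = torch.tensor([[0.2, 0.3, 0.5]])
N = alpha.shape[1]

alpha_cumsum = torch.cumsum(alpha, dim=1)             # [[0.2, 0.5, 1.0]]
rand_prob = torch.tensor([[0.6]])                     # fixed instead of torch.rand for reproducibility
alpha_relu = F.relu(rand_prob - alpha_cumsum)         # [[0.4, 0.1, 0.0]]
alpha_index = torch.count_nonzero(alpha_relu, dim=1)  # tensor([2])
alpha_hard = F.one_hot(alpha_index, N)                # [[0, 0, 1]]

print(alpha_hard)
```

The hard one-hot row picks the bucket of the cumulative distribution that the sampled probability falls into, which is what the TF snippet computes as well.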
594
pytorch
Get the data type of a PyTorch tensor
https://stackoverflow.com/questions/53374499/get-the-data-type-of-a-pytorch-tensor
<p>I understand that PyTorch tensors are homogenous, ie, each of the elements are of the same type.</p> <p>How do I find out the type of the elements in a PyTorch tensor?</p>
<p>There are three related notions:</p> <pre><code>dtype          ||  CPU tensor         ||  GPU tensor
torch.float32  ||  torch.FloatTensor  ||  torch.cuda.FloatTensor
</code></pre> <p>You get the first with <code>print(t.dtype)</code> if <code>t</code> is your tensor; use <code>t.type()</code> for the other two.</p>
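A quick sketch of the three in action (illustrative values; the CUDA line is commented out since it needs a GPU):

```python
import torch

t = torch.tensor([1.0, 2.0])  # defaults to 32-bit floats

print(t.dtype)   # torch.float32      (the dtype)
print(t.type())  # torch.FloatTensor  (the CPU tensor class)

# On a CUDA device the same tensor would report the GPU class:
# t.cuda().type() == 'torch.cuda.FloatTensor'
```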
595
pytorch
Runtime Error when converting Pytorch model to PyTorch jit script
https://stackoverflow.com/questions/74907368/runtime-error-when-converting-pytorch-model-to-pytorch-jit-script
<p>I am trying to make a simple PyTorch model and convert it to a PyTorch jit script using the code below. (The final goal is to convert it to PyTorch Mobile.)</p> <pre><code>class Concat(nn.Module):
    def __init__(self):
        super(Concat, self).__init__()

    def forward(self, x):
        return torch.cat(x,1)

class Net(nn.Module):
    def __init__(self) -&gt; None:
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, 1)
        self.conv2 = nn.Conv2d(16, 32, 3, 1)

    def forward(self, x):
        y = self.conv1(x)
        y = self.conv2(y)
        z = self.conv1(x)
        z = self.conv2(z)
        return (y, z)

net = nn.Sequential(
    Net(),
    Concat()
)
mobile_net = torch.quantization.convert(net)
scripted_net = torch.jit.script(mobile_net)
</code></pre> <p>But the above code throws the following error.</p> <pre><code>RuntimeError Traceback (most recent call last) Cell In [2], line 26 21 net = nn.Sequential( 22 Net(), 23 Concat() 24 ) 25 mobile_net = torch.quantization.convert(net) ---&gt; 26 scripted_net = torch.jit.script(mobile_net) File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_script.py:1286, in script(obj, optimize, _frames_up, _rcb, example_inputs) 1284 if isinstance(obj, torch.nn.Module): 1285 obj = call_prepare_scriptable_func(obj) -&gt; 1286 return torch.jit._recursive.create_script_module( 1287 obj, torch.jit._recursive.infer_methods_to_compile 1288 ) 1290 if isinstance(obj, dict): 1291 return create_script_dict(obj) File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_recursive.py:476, in create_script_module(nn_module, stubs_fn, share_types, is_tracing) 474 if not is_tracing: 475 AttributeTypeIsSupportedChecker().check(nn_module) --&gt; 476 return create_script_module_impl(nn_module, concrete_type, stubs_fn) File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_recursive.py:538, in create_script_module_impl(nn_module, concrete_type, stubs_fn) 535 script_module._concrete_type = concrete_type 537 # Actually create the ScriptModule, initializing it with the function we just defined --&gt; 538 
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn) 540 # Compile methods if necessary 541 if concrete_type not in concrete_type_store.methods_compiled: File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_script.py:615, in RecursiveScriptModule._construct(cpp_module, init_fn) 602 &quot;&quot;&quot; 603 Construct a RecursiveScriptModule that's ready for use. PyTorch 604 code should use this to construct a RecursiveScriptModule instead (...) 612 init_fn: Lambda that initializes the RecursiveScriptModule passed to it. 613 &quot;&quot;&quot; 614 script_module = RecursiveScriptModule(cpp_module) --&gt; 615 init_fn(script_module) 617 # Finalize the ScriptModule: replace the nn.Module state with our 618 # custom implementations and flip the _initializing bit. 619 RecursiveScriptModule._finalize_scriptmodule(script_module) File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_recursive.py:516, in create_script_module_impl.&lt;locals&gt;.init_fn(script_module) 513 scripted = orig_value 514 else: 515 # always reuse the provided stubs_fn to infer the methods to compile --&gt; 516 scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn) 518 cpp_module.setattr(name, scripted) 519 script_module._modules[name] = scripted File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_recursive.py:542, in create_script_module_impl(nn_module, concrete_type, stubs_fn) 540 # Compile methods if necessary 541 if concrete_type not in concrete_type_store.methods_compiled: --&gt; 542 create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) 543 # Create hooks after methods to ensure no name collisions between hooks and methods. 544 # If done before, hooks can overshadow methods that aren't exported. 
545 create_hooks_from_stubs(concrete_type, hook_stubs, pre_hook_stubs) File ~\anaconda3\envs\yolov5pytorch\lib\site-packages\torch\jit\_recursive.py:393, in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) 390 property_defs = [p.def_ for p in property_stubs] 391 property_rcbs = [p.resolution_callback for p in property_stubs] --&gt; 393 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults) RuntimeError: Arguments for call are not valid. The following variants are available: aten::cat(Tensor[] tensors, int dim=0) -&gt; Tensor: Expected a value of type 'List[Tensor]' for argument 'tensors' but instead found type 'Tensor (inferred)'. Inferred the value for argument 'tensors' to be of type 'Tensor' because it was not annotated with an explicit type. aten::cat.names(Tensor[] tensors, str dim) -&gt; Tensor: Expected a value of type 'List[Tensor]' for argument 'tensors' but instead found type 'Tensor (inferred)'. Inferred the value for argument 'tensors' to be of type 'Tensor' because it was not annotated with an explicit type. aten::cat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) -&gt; Tensor(a!): Expected a value of type 'List[Tensor]' for argument 'tensors' but instead found type 'Tensor (inferred)'. Inferred the value for argument 'tensors' to be of type 'Tensor' because it was not annotated with an explicit type. aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -&gt; Tensor(a!): Expected a value of type 'List[Tensor]' for argument 'tensors' but instead found type 'Tensor (inferred)'. Inferred the value for argument 'tensors' to be of type 'Tensor' because it was not annotated with an explicit type. 
The original call is: File &quot;C:\Users\pawan\AppData\Local\Temp\ipykernel_16484\3929675973.py&quot;, line 6 def forward(self, x): return torch.cat(x,1) ~~~~~~~~~ &lt;--- HERE </code></pre> <p>I am new to PyTorch and not familiar with its internals; please suggest a solution. If <code>torch.cat</code> is combined into the <code>forward</code> method of the <code>Net</code> class, i.e. if instead of <code>return (y, z)</code> we do <code>return torch.cat((y, z), 1)</code>, then it works, but I want to do it using a separate class for the concatenation.</p>
<p><strong>Why the error happens</strong></p> <p>While compiling <code>Concat.forward</code>, <code>torch.jit</code> assumes the parameter <code>x</code> is a <code>Tensor</code>. Later, <code>torch.jit</code> realizes the actual argument passed to <code>Concat.forward</code> is a tuple <code>(y, z)</code>, so <code>torch.jit</code> concludes &quot;Arguments for call are not valid&quot; (because a tuple isn't a <code>Tensor</code>).</p> <p><strong>How to fix it</strong></p> <p><a href="https://pytorch.org/docs/stable/jit_language_reference.html#default-types" rel="nofollow noreferrer">Explicitly specify</a> the type of the parameter <code>x</code> in <code>Concat.forward</code> as <code>Tuple[torch.Tensor, torch.Tensor]</code>, so that <code>torch.jit</code> knows what you want.</p> <pre class="lang-py prettyprint-override"><code>from typing import Tuple

class Concat(nn.Module):
    def __init__(self):
        super(Concat, self).__init__()

    def forward(self, x: Tuple[torch.Tensor, torch.Tensor]):
        #                 ^^^ torch.jit.script needs this ^^^
        return torch.cat(x,1)

class Net(nn.Module):
    def __init__(self) -&gt; None:
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, 1)
        self.conv2 = nn.Conv2d(16, 32, 3, 1)

    def forward(self, x):
        y = self.conv1(x)
        y = self.conv2(y)
        z = self.conv1(x)
        z = self.conv2(z)
        return (y, z)

net = nn.Sequential(
    Net(),
    Concat()
)
mobile_net = torch.quantization.convert(net)
scripted_net = torch.jit.script(mobile_net)
</code></pre>
596
pytorch
Pytorch installation
https://stackoverflow.com/questions/77478747/pytorch-installation
<p>I am trying to install PyTorch with Python 3.12.0 and CUDA 12.1 on Windows 11, but I get the error</p> <p>ERROR: could not find a version that satisfies the requirement torch (from version: None) ERROR: No matching distribution found for torch</p> <p>I installed the NVIDIA CUDA 12.1 toolkit as well.</p> <p>I tried to install PyTorch using the PyTorch website, but it isn't working and gives me the same error: ERROR: could not find a version that satisfies the requirement torch (from version: None) ERROR: No matching distribution found for torch</p>
<p>Now, pytorch 2.2.0, 2.2.1 and 2.2.2 support Windows with CUDA 12.1.</p> <p>By the way, I built a tool website that makes it easy to find and download the right install wheels.</p> <p><a href="https://install.pytorch.site/?python=Python+3.12&amp;device=CUDA+12.1" rel="nofollow noreferrer">https://install.pytorch.site/?python=Python+3.12&amp;device=CUDA+12.1</a></p>
597
pytorch
Keras Upsampling2d vs PyTorch Upsampling
https://stackoverflow.com/questions/71585394/keras-upsampling2d-vs-pytorch-upsampling
<p>I am trying to convert a Keras model to PyTorch. It involves <code>UpSampling2D</code> from <code>keras</code>. When I used <code>torch.nn.UpsamplingNearest2d</code> in pytorch (as the default mode of <code>UpSampling2D</code> in keras is <code>nearest</code>), I got inconsistent results. The example is as follows:</p> <p><strong>Keras behaviour</strong></p> <pre class="lang-py prettyprint-override"><code>In [3]: t1 = tf.random_normal([32, 8, 8, 512]) # as we have channels last in keras

In [4]: u_s = tf.keras.layers.UpSampling2D(2)(t1)

In [5]: u_s.shape
Out[5]: TensorShape([Dimension(32), Dimension(16), Dimension(16), Dimension(512)])
</code></pre> <p>So the output shape is <code>(32,16,16,512)</code>. Now let's do the same thing with PyTorch.</p> <p><strong>PyTorch Behaviour</strong></p> <pre class="lang-py prettyprint-override"><code>In [2]: t1 = torch.randn([32,512,8,8]) # as channels first in pytorch

In [3]: u_s = torch.nn.UpsamplingNearest2d(2)(t1)

In [4]: u_s.shape
Out[4]: torch.Size([32, 512, 2, 2])
</code></pre> <p>Here the output shape is <code>(32,512,2,2)</code> as compared to <code>(32,512,16,16)</code> expected from keras.</p> <p>So how do I get equivalent results of Keras in PyTorch? Thanks</p>
<p>Keras uses a scaling factor to upsample. <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D" rel="nofollow noreferrer">SOURCE</a>.</p> <pre><code>tf.keras.layers.UpSampling2D(size, interpolation='nearest')
</code></pre> <blockquote> <p>size: Int, or tuple of 2 integers. The upsampling factors for rows and columns.</p> </blockquote> <p>PyTorch provides both a direct <strong>output size</strong> and a <strong>scaling factor</strong>. <a href="https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingNearest2d.html" rel="nofollow noreferrer">SOURCE</a>.</p> <pre><code>torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)
</code></pre> <blockquote> <p>To specify the scale, it takes either the size or the scale_factor as its constructor argument.</p> </blockquote> <hr /> <p>So, in your case</p> <pre><code># scaling factor in keras
t1 = tf.random.normal([32, 8, 8, 512])
tf.keras.layers.UpSampling2D(2)(t1).shape
TensorShape([32, 16, 16, 512])

# direct output size in pytorch
t1 = torch.randn([32,512,8,8]) # as channels first in pytorch
torch.nn.UpsamplingNearest2d(size=(16, 16))(t1).shape
# or torch.nn.UpsamplingNearest2d(size=16)(t1).shape
torch.Size([32, 512, 16, 16])

# scaling factor in pytorch
torch.nn.UpsamplingNearest2d(scale_factor=2)(t1).shape
torch.Size([32, 512, 16, 16])
</code></pre>
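As a tiny numeric sanity check of the nearest-neighbour behaviour (the tensor values are illustrative):

```python
import torch

# One 2x2 single-channel "image", batch size 1.
t = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])

up = torch.nn.UpsamplingNearest2d(scale_factor=2)(t)
print(up.shape)  # torch.Size([1, 1, 4, 4])
print(up[0, 0])
# tensor([[1., 1., 2., 2.],
#         [1., 1., 2., 2.],
#         [3., 3., 4., 4.],
#         [3., 3., 4., 4.]])
```

Each input pixel is simply duplicated into a 2x2 block, which is exactly what Keras' `UpSampling2D(2)` with `interpolation='nearest'` does (modulo the channels-first vs channels-last layout).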
598
pytorch
PyTorch model input shape
https://stackoverflow.com/questions/66488807/pytorch-model-input-shape
<p>I loaded a custom PyTorch model and I want to find out its input shape. Something like this:</p> <pre><code>model.input_shape </code></pre> <p>Is it possible to get this information?</p> <hr /> <p><strong>Update:</strong> <code>print()</code> and <code>summary()</code> don't show this model's input shape, so they are not what I'm looking for.</p>
<h1>PyTorch flexibility</h1> <p>PyTorch models are very flexible objects, to the point where they do not enforce or generally expect a fixed input shape for data.</p> <p>If you have certain layers there may be constraints e.g.:</p> <ul> <li>a flatten followed by a fully connected layer of width N would enforce the dimensions of your original input (M1 x M2 x ... Mn) to have a product equal to N</li> <li>a 2d convolution of N input channels would enforce the data to be 3-dimensional, with the first dimension having size N</li> </ul> <p>But as you can see neither of these enforces the <em>total</em> shape of the data.</p> <blockquote> <p>We might not realize it right now, but in more complex models, getting the size of the first linear layer right is sometimes a source of frustration. We’ve heard stories of famous practitioners putting in arbitrary numbers and then relying on error messages from PyTorch to backtrack the correct sizes for their linear layers. Lame, eh? Nah, it’s all legit!</p> </blockquote> <ul> <li><em>Deep Learning with PyTorch</em></li> </ul> <h1>Investigation</h1> <h2>Simple case: First layer is Fully Connected</h2> <p>If your model's first layer is a fully connected one, then the first layer in <code>print(model)</code> will detail the expected dimensionality of a single sample.</p> <h2>Ambiguous case: CNN</h2> <p>If it is a convolutional layer however, since these are dynamic and will stride as long/wide as the input permits, there is no simple way to retrieve this info from the model itself.<sup>1</sup> This flexibility means that for many architectures <em>multiple compatible input sizes</em><sup>2</sup> will all be acceptable by the network.</p> <p>This is a feature of PyTorch's <a href="https://stackoverflow.com/a/62815025/9067615">Dynamic computational graph</a>.</p> <h3>Manual inspection</h3> <p>What you will need to do is investigate the network architecture, and once you've found an interpretable layer (if one is present e.g. 
fully connected) &quot;work backwards&quot; with its dimensions, determining how the previous layers (e.g. poolings and convolutions) have compressed/modified it.</p> <h3>Example</h3> <p>e.g. in the following model from <em>Deep Learning with PyTorch</em> (8.5.1):</p> <pre class="lang-py prettyprint-override"><code>class NetWidth(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(16 * 8 * 8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 16 * 8 * 8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out
</code></pre> <p>We see the model takes an input 2D image with <code>3</code> channels and:</p> <ul> <li><code>Conv2d</code> -&gt; sends it to an image of the same size with 32 channels</li> <li><code>max_pool2d(,2)</code> -&gt; halves the size of the image in each dimension</li> <li><code>Conv2d</code> -&gt; sends it to an image of the same size with 16 channels</li> <li><code>max_pool2d(,2)</code> -&gt; halves the size of the image in each dimension</li> <li><code>view</code> -&gt; reshapes the image</li> <li><code>Linear</code> -&gt; takes a tensor of size <code>16 * 8 * 8</code> and sends to size <code>32</code></li> <li>...</li> </ul> <p>So working backwards, we have:</p> <ul> <li>a tensor of shape <code>16 * 8 * 8</code></li> <li>un-reshaped into shape (channels x height x width)</li> <li>un-max_pooled in 2d with factor 2, so height and width un-halved</li> <li>un-convolved from 16 channels to 32<br /> <strong>Hypothesis:</strong> It is likely 16 in the product thus refers to the number of channels, and that the image seen by <code>view</code> was of shape (channels, 8,8), and currently is (channels, 16,16)<sup>2</sup></li> <li>un-max_pooled in 2d with factor 2, so height and width un-halved 
again (channels, 32,32)</li> <li>un-convolved from 32 channels to 3</li> </ul> <p>So assuming the kernel_size and padding are sufficient that the convolutions themselves maintain image dimensions, it is likely that the input image is of shape (3,32,32) i.e. RGB 32x32 pixel square images.</p> <hr /> <p><strong>Notes:</strong></p> <sup> <ol> <li><p>Even the external package <a href="https://stackoverflow.com/a/49989438/9067615"><code>pytorch-summary</code></a> requires you to provide the input shape in order to display the shape of the output of each layer.</p> </li> <li><p>It could however be any 2 numbers whose product equals 8*8 e.g. (64,1), (32,2), (16,4) etc.; however, since the code is written as 8*8 it is likely the authors used the actual dimensions.</p> </li> </ol> </sup>
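One way to confirm such a hypothesis empirically is to simply try a forward pass and let PyTorch complain if the dimensions don't line up. A hedged sketch, reusing the `NetWidth` class from the book excerpt above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetWidth(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(16 * 8 * 8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 16 * 8 * 8)
        out = torch.tanh(self.fc1(out))
        return self.fc2(out)

model = NetWidth()
out = model(torch.randn(1, 3, 32, 32))  # hypothesised input shape: (batch, 3, 32, 32)
print(out.shape)  # torch.Size([1, 2]) -> the hypothesis was compatible
```

A wrong guess (say `torch.randn(1, 3, 64, 64)`) would instead raise a shape-mismatch error at the `view`/`fc1` step, which is exactly the backtracking workflow the quoted book describes.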
599
pandas
How can I iterate over rows in a Pandas DataFrame?
https://stackoverflow.com/questions/16476924/how-can-i-iterate-over-rows-in-a-pandas-dataframe
<p>I have a pandas dataframe, <code>df</code>:</p> <pre class="lang-none prettyprint-override"><code>   c1   c2
0  10  100
1  11  110
2  12  120
</code></pre> <p>How do I iterate over the rows of this dataframe? For every row, I want to access its elements (values in cells) by the name of the columns. For example:</p> <pre class="lang-py prettyprint-override"><code>for row in df.rows:
    print(row['c1'], row['c2'])
</code></pre> <hr /> <p>I found a <a href="https://stackoverflow.com/questions/7837722/what-is-the-most-efficient-way-to-loop-through-dataframes-with-pandas">similar question</a>, which suggests using either of these:</p> <ul> <li> <pre class="lang-py prettyprint-override"><code>for date, row in df.T.iteritems():
</code></pre> </li> <li> <pre class="lang-py prettyprint-override"><code>for row in df.iterrows():
</code></pre> </li> </ul> <p>But I do not understand what the <code>row</code> object is and how I can work with it.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html#pandas-dataframe-iterrows" rel="noreferrer"><code>DataFrame.iterrows</code></a> is a generator which yields both the index and row (as a Series):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]})
df = df.reset_index()  # make sure indexes pair with number of rows

for index, row in df.iterrows():
    print(row['c1'], row['c2'])
</code></pre> <pre><code>10 100
11 110
12 120
</code></pre> <hr /> <p>Obligatory disclaimer from the <a href="https://pandas.pydata.org/docs/user_guide/basics.html#iteration" rel="noreferrer">documentation</a></p> <blockquote> <p>Iterating through pandas objects is generally <strong>slow</strong>. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches:</p> <ul> <li>Look for a <em>vectorized</em> solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, …</li> <li>When you have a function that cannot work on the full DataFrame/Series at once, it is better to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html#pandas.DataFrame.apply" rel="noreferrer" title="pandas.DataFrame.apply"><code>apply()</code></a> instead of iterating over the values. See the docs on <a href="https://pandas.pydata.org/docs/user_guide/basics.html#basics-apply" rel="noreferrer">function application</a>.</li> <li>If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the <a href="https://pandas.pydata.org/docs/user_guide/enhancingperf.html#enhancingperf" rel="noreferrer">enhancing performance</a> section for some examples of this approach.</li> </ul> </blockquote> <p>Other answers in this thread delve into greater depth on alternatives to iter* functions if you are interested to learn more.</p>
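To illustrate the disclaimer, here is the same per-row computation done with `iterrows` and then vectorized (the data is the frame from the answer; the vectorized form is the one the documentation recommends):

```python
import pandas as pd

df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]})

# Row-by-row: simple to read, but slow on large frames.
sums_loop = [row['c1'] + row['c2'] for _, row in df.iterrows()]

# Vectorized: operates on whole columns at once.
sums_vec = (df['c1'] + df['c2']).tolist()

print(sums_loop)  # [110, 121, 132]
print(sums_vec)   # [110, 121, 132]
```

Both produce the same values; the vectorized version avoids constructing a Series per row and scales far better.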
600
pandas
How do I select rows from a DataFrame based on column values?
https://stackoverflow.com/questions/17071871/how-do-i-select-rows-from-a-dataframe-based-on-column-values
<p>How can I select rows from a DataFrame based on values in some column in Pandas?</p> <p>In SQL, I would use:</p> <pre class="lang-sql prettyprint-override"><code>SELECT * FROM table WHERE column_name = some_value </code></pre>
<p>To select rows whose column value equals a scalar, <code>some_value</code>, use <code>==</code>:</p> <pre><code>df.loc[df['column_name'] == some_value] </code></pre> <p>To select rows whose column value is in an iterable, <code>some_values</code>, use <code>isin</code>:</p> <pre><code>df.loc[df['column_name'].isin(some_values)] </code></pre> <p>Combine multiple conditions with <code>&amp;</code>:</p> <pre><code>df.loc[(df['column_name'] &gt;= A) &amp; (df['column_name'] &lt;= B)] </code></pre> <p>Note the parentheses. Due to Python's <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="noreferrer">operator precedence rules</a>, <code>&amp;</code> binds more tightly than <code>&lt;=</code> and <code>&gt;=</code>. Thus, the parentheses in the last example are necessary. Without the parentheses</p> <pre><code>df['column_name'] &gt;= A &amp; df['column_name'] &lt;= B </code></pre> <p>is parsed as</p> <pre><code>df['column_name'] &gt;= (A &amp; df['column_name']) &lt;= B </code></pre> <p>which results in a <a href="https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o">Truth value of a Series is ambiguous error</a>.</p> <hr /> <p>To select rows whose column value <em>does not equal</em> <code>some_value</code>, use <code>!=</code>:</p> <pre><code>df.loc[df['column_name'] != some_value] </code></pre> <p>The <code>isin</code> returns a boolean Series, so to select rows whose value is <em>not</em> in <code>some_values</code>, negate the boolean Series using <code>~</code>:</p> <pre><code>df = df.loc[~df['column_name'].isin(some_values)] # .loc is not in-place replacement </code></pre> <hr /> <p>For example,</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}) print(df) # A B C D # 0 foo one 0 0 # 1 bar one 1 2 # 2 
foo two 2 4 # 3 bar three 3 6 # 4 foo two 4 8 # 5 bar two 5 10 # 6 foo one 6 12 # 7 foo three 7 14 print(df.loc[df['A'] == 'foo']) </code></pre> <p>yields</p> <pre><code> A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 </code></pre> <hr /> <p>If you have multiple values you want to include, put them in a list (or more generally, any iterable) and use <code>isin</code>:</p> <pre><code>print(df.loc[df['B'].isin(['one','three'])]) </code></pre> <p>yields</p> <pre><code> A B C D 0 foo one 0 0 1 bar one 1 2 3 bar three 3 6 6 foo one 6 12 7 foo three 7 14 </code></pre> <hr /> <p>Note, however, that if you wish to do this many times, it is more efficient to make an index first, and then use <code>df.loc</code>:</p> <pre><code>df = df.set_index(['B']) print(df.loc['one']) </code></pre> <p>yields</p> <pre><code> A C D B one foo 0 0 one bar 1 2 one foo 6 12 </code></pre> <p>or, to include multiple values from the index use <code>df.index.isin</code>:</p> <pre><code>df.loc[df.index.isin(['one','two'])] </code></pre> <p>yields</p> <pre><code> A C D B one foo 0 0 one bar 1 2 two foo 2 4 two foo 4 8 two bar 5 10 one foo 6 12 </code></pre>
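The same filters can also be written with `DataFrame.query`, which reads closer to the SQL in the question. A sketch using the example frame above:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(),
                   'B': 'one one two three two two one three'.split(),
                   'C': np.arange(8),
                   'D': np.arange(8) * 2})

# equivalent to df.loc[df['A'] == 'foo']
foo = df.query("A == 'foo'")

# combined conditions, without the parentheses that & / | require
mid = df.query('C >= 2 and C <= 5')
```

`query` parses the expression string itself, so the operator-precedence pitfall with `&` does not arise.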
601
pandas
How do I change the size of figures drawn with Matplotlib?
https://stackoverflow.com/questions/332289/how-do-i-change-the-size-of-figures-drawn-with-matplotlib
<p>How do I change the size of figure drawn with Matplotlib?</p>
<p><a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figure.html" rel="noreferrer"><code>figure</code></a> tells you the call signature:</p> <pre><code>from matplotlib.pyplot import figure figure(figsize=(8, 6), dpi=80) </code></pre> <p><code>figure(figsize=(1,1))</code> would create an inch-by-inch image, which would be 80-by-80 pixels unless you also give a different dpi argument.</p>
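To see the inches/dpi/pixels relationship concretely, here is a small sketch (the `Agg` backend is selected only so it runs without a display; pixel size = inches × dpi):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, safe without a display
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 6), dpi=80)

# pixel dimensions follow from figsize (inches) times dpi
width_px = fig.get_figwidth() * fig.dpi    # 8 * 80
height_px = fig.get_figheight() * fig.dpi  # 6 * 80
```

An existing figure can also be resized after creation with `fig.set_size_inches(w, h)`.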
602
pandas
Renaming column names in Pandas
https://stackoverflow.com/questions/11346283/renaming-column-names-in-pandas
<p>I want to change the column labels of a Pandas DataFrame from</p> <pre><code>['$a', '$b', '$c', '$d', '$e'] </code></pre> <p>to</p> <pre><code>['a', 'b', 'c', 'd', 'e'] </code></pre>
<p>Just assign it to the <code>.columns</code> attribute:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'$a':[1,2], '$b': [10,20]}) &gt;&gt;&gt; df $a $b 0 1 10 1 2 20 &gt;&gt;&gt; df.columns = ['a', 'b'] &gt;&gt;&gt; df a b 0 1 10 1 2 20 </code></pre>
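If you only want to rename some columns, or derive the new names from the old ones, `DataFrame.rename` avoids listing every column. A sketch applied to the question's `$`-prefixed names:

```python
import pandas as pd

df = pd.DataFrame({'$a': [1, 2], '$b': [10, 20]})

# rename via an explicit mapping (unmentioned columns are left alone) ...
df1 = df.rename(columns={'$a': 'a', '$b': 'b'})

# ... or compute the new names programmatically
df2 = df.rename(columns=lambda name: name.lstrip('$'))
```

Both calls return a new frame; pass `inplace=True` to modify `df` directly.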
603
pandas
Delete a column from a Pandas DataFrame
https://stackoverflow.com/questions/13411544/delete-a-column-from-a-pandas-dataframe
<p>To delete a column in a DataFrame, I can successfully use:</p> <pre class="lang-py prettyprint-override"><code>del df['column_name'] </code></pre> <p>But why can't I use the following?</p> <pre class="lang-py prettyprint-override"><code>del df.column_name </code></pre> <p>Since it is possible to access the Series via <code>df.column_name</code>, I expected this to work.</p>
<p>As you've guessed, the right syntax is </p> <pre><code>del df['column_name'] </code></pre> <p>It's difficult to make <code>del df.column_name</code> work simply as the result of syntactic limitations in Python. <code>del df[name]</code> gets translated to <code>df.__delitem__(name)</code> under the covers by Python.</p>
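A method-based alternative that also handles several columns at once is `DataFrame.drop`. A quick sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})

# returns a new frame without the named column(s);
# pass inplace=True to mutate df instead
df2 = df.drop(columns=['b'])
```

Unlike `del`, `drop` leaves the original frame untouched by default.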
604
pandas
How do I get the row count of a Pandas DataFrame?
https://stackoverflow.com/questions/15943769/how-do-i-get-the-row-count-of-a-pandas-dataframe
<p>How do I get the number of rows of a pandas dataframe <code>df</code>?</p>
<p>For a dataframe <code>df</code>, one can use any of the following:</p> <ul> <li><code>len(df.index)</code></li> <li><code>df.shape[0]</code></li> <li><code>df[df.columns[0]].count()</code> (== <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.count.html" rel="noreferrer">number of non-NaN values</a> in first column)</li> </ul> <p><a href="https://i.sstatic.net/wEzue.png" rel="noreferrer"><img src="https://i.sstatic.net/wEzue.png" alt="Performance plot" /></a></p> <hr /> <p>Code to reproduce the plot:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import perfplot perfplot.save( &quot;out.png&quot;, setup=lambda n: pd.DataFrame(np.arange(n * 3).reshape(n, 3)), n_range=[2**k for k in range(25)], kernels=[ lambda df: len(df.index), lambda df: df.shape[0], lambda df: df[df.columns[0]].count(), ], labels=[&quot;len(df.index)&quot;, &quot;df.shape[0]&quot;, &quot;df[df.columns[0]].count()&quot;], xlabel=&quot;Number of rows&quot;, ) </code></pre>
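Note that plain `len(df)` works too and is equivalent to `len(df.index)`. A small sketch contrasting the row count with the non-NaN count:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'x': [1.0, np.nan, 3.0]})

n_rows = len(df)           # counts all rows, NaN included
n_valid = df['x'].count()  # counts only non-NaN values in the column
```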
605
pandas
Selecting multiple columns in a Pandas dataframe
https://stackoverflow.com/questions/11285613/selecting-multiple-columns-in-a-pandas-dataframe
<p>How do I select columns <code>a</code> and <code>b</code> from <code>df</code>, and save them into a new dataframe <code>df1</code>?</p> <pre class="lang-none prettyprint-override"><code>index a b c 1 2 3 4 2 3 4 5 </code></pre> <p>Unsuccessful attempt:</p> <pre class="lang-py prettyprint-override"><code>df1 = df['a':'b'] df1 = df.ix[:, 'a':'b'] </code></pre>
<p>The column names (which are strings) cannot be sliced in the manner you tried.</p> <p>Here you have a couple of options. If you know from context which variables you want to slice out, you can just return a view of only those columns by passing a list into the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#basics" rel="noreferrer"><code>__getitem__</code> syntax</a> (the []'s).</p> <pre><code>df1 = df[['a', 'b']] </code></pre> <p>Alternatively, if it matters to index them numerically and not by their name (say your code should automatically do this without knowing the names of the first two columns) then you can do this instead:</p> <pre><code>df1 = df.iloc[:, 0:2] # Remember that Python does not slice inclusive of the ending index. </code></pre> <p>Additionally, you should familiarize yourself with the idea of a view into a Pandas object vs. a copy of that object. The first of the above methods will return a new copy in memory of the desired sub-object (the desired slices).</p> <p>Sometimes, however, there are indexing conventions in Pandas that don't do this and instead give you a new variable that just refers to the same chunk of memory as the sub-object or slice in the original object. This will happen with the second way of indexing, so you can modify it with the <code>.copy()</code> method to get a regular copy. When this happens, changing what you think is the sliced object can sometimes alter the original object. Always good to be on the look out for this.</p> <pre><code>df1 = df.iloc[0, 0:2].copy() # To avoid the case where changing df1 also changes df </code></pre> <p>To use <code>iloc</code>, you need to know the column positions (or indices). 
As the column positions may change, instead of hard-coding indices you can use <code>iloc</code> together with the <code>get_loc</code> method of the DataFrame's <code>columns</code> attribute to obtain column positions.</p> <pre><code>{c: df.columns.get_loc(c) for c in df.columns} </code></pre> <p>This dictionary maps column names to their positions, so you can look columns up by name and still index with <code>iloc</code>.</p>
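A small sketch tying the pieces together on the question's frame, resolving positions from names so nothing is hard-coded:

```python
import pandas as pd

df = pd.DataFrame({'a': [2, 3], 'b': [3, 4], 'c': [4, 5]}, index=[1, 2])

# label-based selection: copy of the two columns
df1 = df[['a', 'b']]

# position-based selection; look the positions up by name
cols = [df.columns.get_loc(c) for c in ['a', 'b']]
df2 = df.iloc[:, cols].copy()
```

Both approaches produce the same sub-frame; the `.copy()` guards against view/copy surprises when you later assign into `df2`.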
606
pandas
How to change the order of DataFrame columns?
https://stackoverflow.com/questions/13148429/how-to-change-the-order-of-dataframe-columns
<p>I have the following DataFrame (<code>df</code>):</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame(np.random.rand(10, 5)) </code></pre> <p>I add more column(s) by assignment:</p> <pre class="lang-py prettyprint-override"><code>df['mean'] = df.mean(1) </code></pre> <p>How can I move the column <code>mean</code> to the front, i.e. set it as first column leaving the order of the other columns untouched?</p>
<p>One easy way would be to reassign the dataframe with a list of the columns, rearranged as needed. </p> <p>This is what you have now: </p> <pre><code>In [6]: df Out[6]: 0 1 2 3 4 mean 0 0.445598 0.173835 0.343415 0.682252 0.582616 0.445543 1 0.881592 0.696942 0.702232 0.696724 0.373551 0.670208 2 0.662527 0.955193 0.131016 0.609548 0.804694 0.632596 3 0.260919 0.783467 0.593433 0.033426 0.512019 0.436653 4 0.131842 0.799367 0.182828 0.683330 0.019485 0.363371 5 0.498784 0.873495 0.383811 0.699289 0.480447 0.587165 6 0.388771 0.395757 0.745237 0.628406 0.784473 0.588529 7 0.147986 0.459451 0.310961 0.706435 0.100914 0.345149 8 0.394947 0.863494 0.585030 0.565944 0.356561 0.553195 9 0.689260 0.865243 0.136481 0.386582 0.730399 0.561593 In [7]: cols = df.columns.tolist() In [8]: cols Out[8]: [0L, 1L, 2L, 3L, 4L, 'mean'] </code></pre> <p>Rearrange <code>cols</code> in any way you want. This is how I moved the last element to the first position: </p> <pre><code>In [12]: cols = cols[-1:] + cols[:-1] In [13]: cols Out[13]: ['mean', 0L, 1L, 2L, 3L, 4L] </code></pre> <p>Then reorder the dataframe like this: </p> <pre><code>In [16]: df = df[cols] # OR df = df.ix[:, cols] In [17]: df Out[17]: mean 0 1 2 3 4 0 0.445543 0.445598 0.173835 0.343415 0.682252 0.582616 1 0.670208 0.881592 0.696942 0.702232 0.696724 0.373551 2 0.632596 0.662527 0.955193 0.131016 0.609548 0.804694 3 0.436653 0.260919 0.783467 0.593433 0.033426 0.512019 4 0.363371 0.131842 0.799367 0.182828 0.683330 0.019485 5 0.587165 0.498784 0.873495 0.383811 0.699289 0.480447 6 0.588529 0.388771 0.395757 0.745237 0.628406 0.784473 7 0.345149 0.147986 0.459451 0.310961 0.706435 0.100914 8 0.553195 0.394947 0.863494 0.585030 0.565944 0.356561 9 0.561593 0.689260 0.865243 0.136481 0.386582 0.730399 </code></pre>
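A compact variant of the same idea uses `pop`/`insert`, so no full column list is needed (note that `insert` mutates the frame in place):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(12).reshape(3, 4))
df['mean'] = df.mean(axis=1)

# move 'mean' to the front, leaving the other columns' order untouched
col = df.pop('mean')     # removes the column and returns it as a Series
df.insert(0, 'mean', col)
```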
607
pandas
Change column type in pandas
https://stackoverflow.com/questions/15891038/change-column-type-in-pandas
<p>I created a DataFrame from a list of lists:</p> <pre class="lang-py prettyprint-override"><code>table = [ ['a', '1.2', '4.2' ], ['b', '70', '0.03'], ['x', '5', '0' ], ] df = pd.DataFrame(table) </code></pre> <p>How do I convert the columns to specific types? In this case, I want to convert columns 2 and 3 into floats.</p> <p>Is there a way to specify the types while converting the list to DataFrame? Or is it better to create the DataFrame first and then loop through the columns to change the dtype for each column? Ideally I would like to do this in a dynamic way because there can be hundreds of columns, and I don't want to specify exactly which columns are of which type. All I can guarantee is that each column contains values of the same type.</p>
<p>You have four main options for converting types in pandas:</p> <ol> <li><p><a href="https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html" rel="noreferrer"><code>to_numeric()</code></a> - provides functionality to safely convert non-numeric types (e.g. strings) to a suitable numeric type. (See also <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html" rel="noreferrer"><code>to_datetime()</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_timedelta.html" rel="noreferrer"><code>to_timedelta()</code></a>.)</p> </li> <li><p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="noreferrer"><code>astype()</code></a> - convert (almost) any type to (almost) any other type (even if it's not necessarily sensible to do so). Also allows you to convert to <a href="https://pandas.pydata.org/docs/user_guide/categorical.html" rel="noreferrer">categorial</a> types (very useful).</p> </li> <li><p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.infer_objects.html" rel="noreferrer"><code>infer_objects()</code></a> - a utility method to convert object columns holding Python objects to a pandas type if possible.</p> </li> <li><p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.convert_dtypes.html" rel="noreferrer"><code>convert_dtypes()</code></a> - convert DataFrame columns to the &quot;best possible&quot; dtype that supports <code>pd.NA</code> (pandas' object to indicate a missing value).</p> </li> </ol> <p>Read on for more detailed explanations and usage of each of these methods.</p> <hr /> <h1>1. 
<code>to_numeric()</code></h1> <p>The best way to convert one or more columns of a DataFrame to numeric values is to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html" rel="noreferrer"><code>pandas.to_numeric()</code></a>.</p> <p>This function will try to change non-numeric objects (such as strings) into integers or floating-point numbers as appropriate.</p> <h2>Basic usage</h2> <p>The input to <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html" rel="noreferrer"><code>to_numeric()</code></a> is a Series or a single column of a DataFrame.</p> <pre><code>&gt;&gt;&gt; s = pd.Series([&quot;8&quot;, 6, &quot;7.5&quot;, 3, &quot;0.9&quot;]) # mixed string and numeric values &gt;&gt;&gt; s 0 8 1 6 2 7.5 3 3 4 0.9 dtype: object &gt;&gt;&gt; pd.to_numeric(s) # convert everything to float values 0 8.0 1 6.0 2 7.5 3 3.0 4 0.9 dtype: float64 </code></pre> <p>As you can see, a new Series is returned. Remember to assign this output to a variable or column name to continue using it:</p> <pre><code># convert Series my_series = pd.to_numeric(my_series) # convert column &quot;a&quot; of a DataFrame df[&quot;a&quot;] = pd.to_numeric(df[&quot;a&quot;]) </code></pre> <p>You can also use it to convert multiple columns of a DataFrame via the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="noreferrer"><code>apply()</code></a> method:</p> <pre><code># convert all columns of DataFrame df = df.apply(pd.to_numeric) # convert all columns of DataFrame # convert just columns &quot;a&quot; and &quot;b&quot; df[[&quot;a&quot;, &quot;b&quot;]] = df[[&quot;a&quot;, &quot;b&quot;]].apply(pd.to_numeric) </code></pre> <p>As long as your values can all be converted, that's probably all you need.</p> <h2>Error handling</h2> <p>But what if some values can't be converted to a numeric type?</p> <p><a href="https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html" 
rel="noreferrer"><code>to_numeric()</code></a> also takes an <code>errors</code> keyword argument that allows you to force non-numeric values to be <code>NaN</code>, or simply ignore columns containing these values.</p> <p>Here's an example using a Series of strings <code>s</code> which has the object dtype:</p> <pre><code>&gt;&gt;&gt; s = pd.Series(['1', '2', '4.7', 'pandas', '10']) &gt;&gt;&gt; s 0 1 1 2 2 4.7 3 pandas 4 10 dtype: object </code></pre> <p>The default behaviour is to raise if it can't convert a value. In this case, it can't cope with the string 'pandas':</p> <pre><code>&gt;&gt;&gt; pd.to_numeric(s) # or pd.to_numeric(s, errors='raise') ValueError: Unable to parse string </code></pre> <p>Rather than fail, we might want 'pandas' to be considered a missing/bad numeric value. We can coerce invalid values to <code>NaN</code> as follows using the <code>errors</code> keyword argument:</p> <pre><code>&gt;&gt;&gt; pd.to_numeric(s, errors='coerce') 0 1.0 1 2.0 2 4.7 3 NaN 4 10.0 dtype: float64 </code></pre> <p>The third option for <code>errors</code> is just to ignore the operation if an invalid value is encountered:</p> <pre><code>&gt;&gt;&gt; pd.to_numeric(s, errors='ignore') # the original Series is returned untouched </code></pre> <p>This last option is particularly useful when you are converting your entire DataFrame but don't know which of your columns can be converted reliably to a numeric type. In that case, just write:</p> <pre><code>df.apply(pd.to_numeric, errors='ignore') </code></pre> <p>The function will be applied to each column of the DataFrame. Columns that can be converted to a numeric type will be converted, while columns that cannot (e.g.
they contain non-digit strings or dates) will be left alone.</p> <h2>Downcasting</h2> <p>By default, conversion with <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html" rel="noreferrer"><code>to_numeric()</code></a> will give you either an <code>int64</code> or <code>float64</code> dtype (or whatever integer width is native to your platform).</p> <p>That's usually what you want, but what if you wanted to save some memory and use a more compact dtype, like <code>float32</code>, or <code>int8</code>?</p> <p><a href="https://pandas.pydata.org/docs/reference/api/pandas.to_numeric.html" rel="noreferrer"><code>to_numeric()</code></a> gives you the option to downcast to either <code>'integer'</code>, <code>'signed'</code>, <code>'unsigned'</code>, <code>'float'</code>. Here's an example for a simple series <code>s</code> of integer type:</p> <pre><code>&gt;&gt;&gt; s = pd.Series([1, 2, -7]) &gt;&gt;&gt; s 0 1 1 2 2 -7 dtype: int64 </code></pre> <p>Downcasting to <code>'integer'</code> uses the smallest possible integer that can hold the values:</p> <pre><code>&gt;&gt;&gt; pd.to_numeric(s, downcast='integer') 0 1 1 2 2 -7 dtype: int8 </code></pre> <p>Downcasting to <code>'float'</code> similarly picks a smaller than normal floating type:</p> <pre><code>&gt;&gt;&gt; pd.to_numeric(s, downcast='float') 0 1.0 1 2.0 2 -7.0 dtype: float32 </code></pre> <hr /> <h1>2. <code>astype()</code></h1> <p>The <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="noreferrer"><code>astype()</code></a> method enables you to be explicit about the dtype you want your DataFrame or Series to have. It's very versatile in that you can try and go from one type to any other.</p> <h2>Basic usage</h2> <p>Just pick a type: you can use a NumPy dtype (e.g. <code>np.int16</code>), some Python types (e.g. 
bool), or pandas-specific types (like the categorical dtype).</p> <p>Call the method on the object you want to convert and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="noreferrer"><code>astype()</code></a> will try and convert it for you:</p> <pre><code># convert all DataFrame columns to the int64 dtype df = df.astype(int) # convert column &quot;a&quot; to int64 dtype and &quot;b&quot; to complex type df = df.astype({&quot;a&quot;: int, &quot;b&quot;: complex}) # convert Series to float16 type s = s.astype(np.float16) # convert Series to Python strings s = s.astype(str) # convert Series to categorical type - see docs for more details s = s.astype('category') </code></pre> <p>Notice I said &quot;try&quot; - if <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="noreferrer"><code>astype()</code></a> does not know how to convert a value in the Series or DataFrame, it will raise an error. For example, if you have a <code>NaN</code> or <code>inf</code> value you'll get an error trying to convert it to an integer.</p> <p>As of pandas 0.20.0, this error can be suppressed by passing <code>errors='ignore'</code>. Your original object will be returned untouched.</p> <h2>Be careful</h2> <p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="noreferrer"><code>astype()</code></a> is powerful, but it will sometimes convert values &quot;incorrectly&quot;. For example:</p> <pre><code>&gt;&gt;&gt; s = pd.Series([1, 2, -7]) &gt;&gt;&gt; s 0 1 1 2 2 -7 dtype: int64 </code></pre> <p>These are small integers, so how about converting to an unsigned 8-bit type to save memory?</p> <pre><code>&gt;&gt;&gt; s.astype(np.uint8) 0 1 1 2 2 249 dtype: uint8 </code></pre> <p>The conversion worked, but the -7 was wrapped round to become 249 (i.e. 
2<sup>8</sup> - 7)!</p> <p>Trying to downcast using <code>pd.to_numeric(s, downcast='unsigned')</code> instead could help prevent this error.</p> <hr /> <h1>3. <code>infer_objects()</code></h1> <p>Version 0.21.0 of pandas introduced the method <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.infer_objects.html" rel="noreferrer"><code>infer_objects()</code></a> for converting columns of a DataFrame that have an object datatype to a more specific type (soft conversions).</p> <p>For example, here's a DataFrame with two columns of object type. One holds actual integers and the other holds strings representing integers:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'a': [7, 1, 5], 'b': ['3','2','1']}, dtype='object') &gt;&gt;&gt; df.dtypes a object b object dtype: object </code></pre> <p>Using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.infer_objects.html" rel="noreferrer"><code>infer_objects()</code></a>, you can change the type of column 'a' to int64:</p> <pre><code>&gt;&gt;&gt; df = df.infer_objects() &gt;&gt;&gt; df.dtypes a int64 b object dtype: object </code></pre> <p>Column 'b' has been left alone since its values were strings, not integers. If you wanted to force both columns to an integer type, you could use <code>df.astype(int)</code> instead.</p> <hr /> <h1>4. <code>convert_dtypes()</code></h1> <p>Version 1.0 and above includes a method <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.convert_dtypes.html" rel="noreferrer"><code>convert_dtypes()</code></a> to convert Series and DataFrame columns to the best possible dtype that supports the <code>pd.NA</code> missing value.</p> <p>Here &quot;best possible&quot; means the type most suited to hold the values. 
For example, this means a pandas integer type if all of the values are integers (or missing values): an object column of Python integer objects is converted to <code>Int64</code>, and a column of NumPy <code>int32</code> values will become the pandas dtype <code>Int32</code>.</p> <p>With our <code>object</code> DataFrame <code>df</code>, we get the following result:</p> <pre><code>&gt;&gt;&gt; df.convert_dtypes().dtypes a Int64 b string dtype: object </code></pre> <p>Since column 'a' held integer values, it was converted to the <code>Int64</code> type (which is capable of holding missing values, unlike <code>int64</code>).</p> <p>Column 'b' contained string objects, so was changed to pandas' <code>string</code> dtype.</p> <p>By default, this method will infer the type from object values in each column. We can change this by passing <code>infer_objects=False</code>:</p> <pre><code>&gt;&gt;&gt; df.convert_dtypes(infer_objects=False).dtypes a object b string dtype: object </code></pre> <p>Now column 'a' remained an object column: pandas knows it can be described as an 'integer' column (internally it ran <a href="https://github.com/pandas-dev/pandas/blob/6b2d0260c818e62052eaf535767f3a8c4b446c69/pandas/_libs/lib.pyx#L1188-L1434" rel="noreferrer"><code>infer_dtype</code></a>) but didn't infer exactly what dtype of integer it should have so did not convert it. Column 'b' was again converted to 'string' dtype as it was recognised as holding 'string' values.</p>
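Applied to the question's table, `to_numeric` can be restricted to just the columns that should be numeric, leaving the first column as strings. A quick sketch:

```python
import pandas as pd

table = [
    ['a', '1.2', '4.2'],
    ['b', '70', '0.03'],
    ['x', '5', '0'],
]
df = pd.DataFrame(table)

# convert columns 1 and 2 to numbers; column 0 stays as object/strings
df[[1, 2]] = df[[1, 2]].apply(pd.to_numeric)
```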
608
pandas
Create a Pandas Dataframe by appending one row at a time
https://stackoverflow.com/questions/10715965/create-a-pandas-dataframe-by-appending-one-row-at-a-time
<p>How do I create an empty <code>DataFrame</code>, then add rows, one by one?</p> <p>I created an empty <code>DataFrame</code>:</p> <pre><code>df = pd.DataFrame(columns=('lib', 'qty1', 'qty2')) </code></pre> <p>Then I can add a new row at the end and fill a single field with:</p> <pre><code>df = df._set_value(index=len(df), col='qty1', value=10.0) </code></pre> <p>It works for only one field at a time. What is a better way to add new row to <code>df</code>?</p>
<p>You can use <code>df.loc[i]</code>, where the row with index <code>i</code> will be what you specify it to be in the dataframe.</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; from numpy.random import randint &gt;&gt;&gt; df = pd.DataFrame(columns=['lib', 'qty1', 'qty2']) &gt;&gt;&gt; for i in range(5): &gt;&gt;&gt; df.loc[i] = ['name' + str(i)] + list(randint(10, size=2)) &gt;&gt;&gt; df lib qty1 qty2 0 name0 3 3 1 name1 2 4 2 name2 2 8 3 name3 2 1 4 name4 9 6 </code></pre>
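When appending many rows, it is usually much faster to accumulate plain Python records first and build the DataFrame once at the end, rather than growing the frame row by row. A sketch of that pattern:

```python
import pandas as pd

# collect rows as dictionaries, then construct the frame in one call
rows = []
for i in range(5):
    rows.append({'lib': 'name' + str(i), 'qty1': i, 'qty2': i * 2})

df = pd.DataFrame(rows, columns=['lib', 'qty1', 'qty2'])
```

Each `df.loc[i] = ...` assignment can trigger internal reallocation, so the one-shot construction scales far better for large loops.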
609
pandas
Pretty-print an entire Pandas Series / DataFrame
https://stackoverflow.com/questions/19124601/pretty-print-an-entire-pandas-series-dataframe
<p>I work with Series and DataFrames on the terminal a lot. The default <code>__repr__</code> for a Series returns a reduced sample, with some head and tail values, but the rest missing.</p> <p>Is there a builtin way to pretty-print the entire Series / DataFrame? Ideally, it would support proper alignment, perhaps borders between columns, and maybe even color-coding for the different columns.</p>
<p>You can also use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.option_context.html" rel="noreferrer"><code>option_context</code></a>, with one or more options:</p> <pre><code>with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also print(df) </code></pre> <p>This will automatically return the options to their previous values.</p> <p>If you are working on jupyter-notebook, using <code>display(df)</code> instead of <code>print(df)</code> will use jupyter rich display logic <a href="https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.display" rel="noreferrer">(like so)</a>.</p>
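For a one-off dump without touching any display options, `to_string()` renders the entire object as plain text. A quick sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': range(100), 'b': range(100)})

text = df.to_string()  # full frame, no "..." truncation
print(text)
```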
610
pandas
How to deal with SettingWithCopyWarning in Pandas
https://stackoverflow.com/questions/20625582/how-to-deal-with-settingwithcopywarning-in-pandas
<h2>Background</h2> <p>I just upgraded my Pandas from 0.11 to 0.13.0rc1. Now, the application is popping out many new warnings. One of them like this:</p> <pre class="lang-none prettyprint-override"><code>E:\FinReporter\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE </code></pre> <p>I want to know what exactly it means? Do I need to change something?</p> <p>How should I suspend the warning if I insist to use <code>quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE</code>?</p> <h2>The function that gives warnings</h2> <pre><code>def _decode_stock_quote(list_of_150_stk_str): &quot;&quot;&quot;decode the webpage and return dataframe&quot;&quot;&quot; from cStringIO import StringIO str_of_all = &quot;&quot;.join(list_of_150_stk_str) quote_df = pd.read_csv(StringIO(str_of_all), sep=',', names=list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefg')) #dtype={'A': object, 'B': object, 'C': np.float64} quote_df.rename(columns={'A':'STK', 'B':'TOpen', 'C':'TPCLOSE', 'D':'TPrice', 'E':'THigh', 'F':'TLow', 'I':'TVol', 'J':'TAmt', 'e':'TDate', 'f':'TTime'}, inplace=True) quote_df = quote_df.ix[:,[0,3,2,1,4,5,8,9,30,31]] quote_df['TClose'] = quote_df['TPrice'] quote_df['RT'] = 100 * (quote_df['TPrice']/quote_df['TPCLOSE'] - 1) quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE quote_df['TAmt'] = quote_df['TAmt']/TAMT_SCALE quote_df['STK_ID'] = quote_df['STK'].str.slice(13,19) quote_df['STK_Name'] = quote_df['STK'].str.slice(21,30)#.decode('gb2312') quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10]) return quote_df </code></pre> <h2>More warning messages</h2> <pre class="lang-none prettyprint-override"><code>E:\FinReporter\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_index,col_indexer] = value instead quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE E:\FinReporter\FM_EXT.py:450: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TAmt'] = quote_df['TAmt']/TAMT_SCALE E:\FinReporter\FM_EXT.py:453: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10]) </code></pre>
<p>The <code>SettingWithCopyWarning</code> was created to flag potentially confusing &quot;chained&quot; assignments, such as the following, which does not always work as expected, particularly when the first selection returns a <em>copy</em>. [see <a href="https://github.com/pydata/pandas/pull/5390" rel="noreferrer">GH5390</a> and <a href="https://github.com/pydata/pandas/issues/5597" rel="noreferrer">GH5597</a> for background discussion.]</p> <pre><code>df[df['A'] &gt; 2]['B'] = new_val # new_val not set in df </code></pre> <p>The warning offers a suggestion to rewrite as follows:</p> <pre><code>df.loc[df['A'] &gt; 2, 'B'] = new_val </code></pre> <p>However, this doesn't fit your usage, which is equivalent to:</p> <pre><code>df = df[df['A'] &gt; 2] df['B'] = new_val </code></pre> <p>While it's clear that you don't care about writes making it back to the original frame (since you are overwriting the reference to it), unfortunately this pattern cannot be differentiated from the first chained assignment example. Hence the (false positive) warning. The potential for false positives is addressed in the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="noreferrer">docs on indexing</a>, if you'd like to read further. 
You can safely disable this new warning with the following assignment.</p> <pre><code>import pandas as pd pd.options.mode.chained_assignment = None # default='warn' </code></pre> <hr /> <h2>Other Resources</h2> <ul> <li><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="noreferrer">pandas User Guide: Indexing and selecting data</a></li> <li><a href="https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html" rel="noreferrer">Python Data Science Handbook: Data Indexing and Selection</a></li> <li><a href="https://realpython.com/pandas-settingwithcopywarning/" rel="noreferrer">Real Python: SettingWithCopyWarning in Pandas: Views vs Copies</a></li> <li><a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="noreferrer">Dataquest: SettingwithCopyWarning: How to Fix This Warning in Pandas</a></li> <li><a href="https://towardsdatascience.com/explaining-the-settingwithcopywarning-in-pandas-ebc19d799d25" rel="noreferrer">Towards Data Science: Explaining the SettingWithCopyWarning in pandas</a></li> </ul>
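If you do intend to work on an independent subset, taking an explicit copy makes the intent clear and avoids the warning for that frame. A quick sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 3, 5], 'B': [10, 20, 30]})

# explicit copy: assignments affect only sub, never df
sub = df[df['A'] > 2].copy()
sub['B'] = 0
```

This is usually preferable to silencing the warning globally, since the warning still fires for genuine chained-assignment bugs elsewhere.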
611
pandas
How to drop rows of Pandas DataFrame whose value in a certain column is NaN
https://stackoverflow.com/questions/13413590/how-to-drop-rows-of-pandas-dataframe-whose-value-in-a-certain-column-is-nan
<p>I have this DataFrame and want only the records whose EPS column is not NaN:</p> <pre class="lang-none prettyprint-override"><code> STK_ID EPS cash STK_ID RPT_Date 601166 20111231 601166 NaN NaN 600036 20111231 600036 NaN 12 600016 20111231 600016 4.3 NaN 601009 20111231 601009 NaN NaN 601939 20111231 601939 2.5 NaN 000001 20111231 000001 NaN NaN </code></pre> <p>...i.e. something like <code>df.drop(....)</code> to get this resulting dataframe:</p> <pre class="lang-none prettyprint-override"><code> STK_ID EPS cash STK_ID RPT_Date 600016 20111231 600016 4.3 NaN 601939 20111231 601939 2.5 NaN </code></pre> <p>How do I do that?</p>
<p>Don't drop, just take the rows where EPS is not NA:</p> <pre class="lang-py prettyprint-override"><code>df = df[df['EPS'].notna()] </code></pre>
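A minimal sketch (with made-up data shaped like the question's EPS column) showing that the boolean-mask form and `dropna` with its `subset` parameter give the same result:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'EPS':  [np.nan, np.nan, 4.3, np.nan, 2.5, np.nan],
                   'cash': [np.nan, 12, np.nan, np.nan, np.nan, np.nan]})

kept = df[df['EPS'].notna()]           # boolean-mask form from the answer
also_kept = df.dropna(subset=['EPS'])  # equivalent dropna form
print(kept['EPS'].tolist())  # [4.3, 2.5]
```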
612
pandas
How to add a new column to an existing DataFrame
https://stackoverflow.com/questions/12555323/how-to-add-a-new-column-to-an-existing-dataframe
<p>I have the following indexed DataFrame with named columns and rows not- continuous numbers:</p> <pre class="lang-none prettyprint-override"><code> a b c d 2 0.671399 0.101208 -0.181532 0.241273 3 0.446172 -0.243316 0.051767 1.577318 5 0.614758 0.075793 -0.451460 -0.012493 </code></pre> <p>I would like to add a new column, <code>'e'</code>, to the existing data frame and do not want to change anything in the data frame (i.e., the new column always has the same length as the DataFrame).</p> <pre class="lang-none prettyprint-override"><code>0 -0.335485 1 -1.166658 2 -0.385571 dtype: float64 </code></pre> <p>I tried different versions of <code>join</code>, <code>append</code>, <code>merge</code>, but I did not get the result I wanted, only errors at most.</p> <p>How can I add column <code>e</code> to the above example?</p>
<p><strong>Edit 2017</strong></p> <p>As indicated in the comments and by @Alexander, currently the best method to add the values of a Series as a new column of a DataFrame is to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="noreferrer"><strong><code>assign</code></strong></a>:</p> <pre><code>df1 = df1.assign(e=pd.Series(np.random.randn(sLength)).values) </code></pre> <hr /> <p><strong>Edit 2015</strong><br /> Some reported getting the <code>SettingWithCopyWarning</code> with this code.<br /> However, the code still runs perfectly with the current pandas version 0.16.1.</p> <pre><code>&gt;&gt;&gt; sLength = len(df1['a']) &gt;&gt;&gt; df1 a b c d 6 -0.269221 -0.026476 0.997517 1.294385 8 0.917438 0.847941 0.034235 -0.448948 &gt;&gt;&gt; df1['e'] = pd.Series(np.random.randn(sLength), index=df1.index) &gt;&gt;&gt; df1 a b c d e 6 -0.269221 -0.026476 0.997517 1.294385 1.757167 8 0.917438 0.847941 0.034235 -0.448948 2.228131 &gt;&gt;&gt; pd.version.short_version '0.16.1' </code></pre> <p>The <code>SettingWithCopyWarning</code> aims to inform of a possibly invalid assignment on a copy of the DataFrame. It doesn't necessarily say you did it wrong (it can trigger false positives), but from 0.13.0 it lets you know there are more adequate methods for the same purpose. 
Then, if you get the warning, just follow its advice: <em>Try using .loc[row_index,col_indexer] = value instead</em>.</p> <pre><code>&gt;&gt;&gt; df1.loc[:,'f'] = pd.Series(np.random.randn(sLength), index=df1.index) &gt;&gt;&gt; df1 a b c d e f 6 -0.269221 -0.026476 0.997517 1.294385 1.757167 -0.050927 8 0.917438 0.847941 0.034235 -0.448948 2.228131 0.006109 &gt;&gt;&gt; </code></pre> <p>In fact, this is currently the more efficient method, as <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="noreferrer">described in the pandas docs</a>.</p> <hr /> <p>Original answer:</p> <p>Use the original df1 indexes to create the series:</p> <pre><code>df1['e'] = pd.Series(np.random.randn(sLength), index=df1.index) </code></pre>
613
pandas
Get a list from Pandas DataFrame column headers
https://stackoverflow.com/questions/19482970/get-a-list-from-pandas-dataframe-column-headers
<p>I want to get a list of the column headers from a Pandas DataFrame. The DataFrame will come from user input, so I won't know how many columns there will be or what they will be called.</p> <p>For example, if I'm given a DataFrame like this:</p> <pre class="lang-none prettyprint-override"><code> y gdp cap 0 1 2 5 1 2 3 9 2 8 7 2 3 3 4 7 4 6 7 7 5 4 8 3 6 8 2 8 7 9 9 10 8 6 6 4 9 10 10 7 </code></pre> <p>I would get a list like this:</p> <pre class="lang-none prettyprint-override"><code>['y', 'gdp', 'cap'] </code></pre>
<p>You can get the values as a list by doing:</p> <pre><code>list(my_dataframe.columns.values) </code></pre> <p>Also you can simply use (as shown in <a href="https://stackoverflow.com/a/19483602/4909087">Ed Chum's answer</a>):</p> <pre><code>list(my_dataframe) </code></pre>
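Both forms above, plus `columns.tolist()`, produce the same plain Python list, as this small sketch shows:

```python
import pandas as pd

df = pd.DataFrame({'y': [1, 2], 'gdp': [2, 3], 'cap': [5, 9]})

# Three equivalent ways to get the headers as a plain list:
a = list(df.columns.values)
b = list(df)             # iterating a DataFrame yields its column labels
c = df.columns.tolist()
print(a)  # ['y', 'gdp', 'cap']
```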
614
pandas
Use a list of values to select rows from a Pandas dataframe
https://stackoverflow.com/questions/12096252/use-a-list-of-values-to-select-rows-from-a-pandas-dataframe
<p>Let’s say I have the following Pandas dataframe:</p> <pre><code>df = DataFrame({'A': [5,6,3,4], 'B': [1,2,3,5]}) df A B 0 5 1 1 6 2 2 3 3 3 4 5 </code></pre> <p>I can subset based on a specific value:</p> <pre><code>x = df[df['A'] == 3] x A B 2 3 3 </code></pre> <p>But how can I subset based on a list of values? - something like this:</p> <pre><code>list_of_values = [3, 6] y = df[df['A'] in list_of_values] </code></pre> <p>To get:</p> <pre><code> A B 1 6 2 2 3 3 </code></pre>
<p>You can use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="noreferrer"><code>isin</code></a> method:</p> <pre><code>In [1]: df = pd.DataFrame({'A': [5,6,3,4], 'B': [1,2,3,5]}) In [2]: df Out[2]: A B 0 5 1 1 6 2 2 3 3 3 4 5 In [3]: df[df['A'].isin([3, 6])] Out[3]: A B 1 6 2 2 3 3 </code></pre> <p>And to get the opposite use <code>~</code>:</p> <pre><code>In [4]: df[~df['A'].isin([3, 6])] Out[4]: A B 0 5 1 3 4 5 </code></pre>
615
pandas
Convert list of dictionaries to a pandas DataFrame
https://stackoverflow.com/questions/20638006/convert-list-of-dictionaries-to-a-pandas-dataframe
<p>How can I convert a list of dictionaries into a DataFrame? I want to turn</p> <pre class="lang-py prettyprint-override"><code>[{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': &quot;february&quot;}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] </code></pre> <p>into</p> <pre class="lang-none prettyprint-override"><code> month points points_h1 time year 0 NaN 50 NaN 5:00 2010 1 february 25 NaN 6:00 NaN 2 january 90 NaN 9:00 NaN 3 june NaN 20 NaN NaN </code></pre>
<p>If <code>ds</code> is a list of <code>dict</code>s:</p> <pre><code>df = pd.DataFrame(ds) </code></pre> <p>Note: this does not work with nested data.</p>
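A sketch of both cases: the flat list of dicts from the question, and a hypothetical nested record that first needs flattening (e.g. with `pd.json_normalize`, available in modern pandas):

```python
import pandas as pd

ds = [{'points': 50, 'time': '5:00', 'year': 2010},
      {'points': 25, 'time': '6:00', 'month': 'february'}]
df = pd.DataFrame(ds)  # keys become columns, missing keys become NaN

# For *nested* dicts, flatten first, e.g. with pd.json_normalize:
nested = [{'id': 1, 'info': {'points': 50, 'time': '5:00'}}]
flat = pd.json_normalize(nested)
print(list(flat.columns))  # ['id', 'info.points', 'info.time']
```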
616
pandas
&quot;Large data&quot; workflows using pandas
https://stackoverflow.com/questions/14262433/large-data-workflows-using-pandas
<p>I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for its out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons.</p> <p>One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive.</p> <p>My first thought is to use <code>HDFStore</code> to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier-to-use alternative. My question is this:</p> <p>What are some best-practice workflows for accomplishing the following:</p> <ol> <li>Loading flat files into a permanent, on-disk database structure</li> <li>Querying that database to retrieve data to feed into a pandas data structure</li> <li>Updating the database after manipulating pieces in pandas</li> </ol> <p>Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data".</p> <p>Edit -- an example of how I would like this to work:</p> <ol> <li>Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.</li> <li>In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory.</li> <li>I would create new columns by performing various operations on the selected columns.</li> <li>I would then have to append these new columns into the database structure.</li> </ol> <p>I am trying to find a best-practice way of performing these steps. 
Reading links about pandas and pytables it seems that appending a new column could be a problem.</p> <p>Edit -- Responding to Jeff's questions specifically:</p> <ol> <li>I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns.</li> <li>Typical operations involve combining several columns using conditional logic into a new, compound column. For example, <code>if var1 &gt; 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'</code>. The result of these operations is a new column for every record in my dataset.</li> <li>Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model.</li> <li>A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case.</li> <li>It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. 
When creating new columns, however, I would pull all rows of data and only the columns I need for the operations.</li> <li>The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns.</li> </ol> <p>It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).</p>
<p>I routinely use tens of gigabytes of data in just this fashion e.g. I have tables on disk that I read via queries, create data and append back.</p> <p>It's worth reading <a href="http://pandas-docs.github.io/pandas-docs-travis/io.html#hdf5-pytables" rel="noreferrer">the docs</a> and <a href="https://groups.google.com/forum/m/?fromgroups#!topic/pydata/cmw1F3OFJSc" rel="noreferrer">late in this thread</a> for several suggestions for how to store your data.</p> <p>Details which will affect how you store your data, like:<br> <em>Give as much detail as you can; and I can help you develop a structure.</em></p> <ol> <li>Size of data, # of rows, columns, types of columns; are you appending rows, or just columns? </li> <li>What will typical operations look like? E.g. do a query on columns to select a bunch of rows and specific columns, then do an operation (in-memory), create new columns, save these.<br> (Giving a toy example could enable us to offer more specific recommendations.)</li> <li>After that processing, then what do you do? Is step 2 ad hoc, or repeatable?</li> <li>Input flat files: how many, rough total size in Gb. How are these organized e.g. by records? Does each one contain different fields, or do they have some records per file with all of the fields in each file?</li> <li>Do you ever select subsets of rows (records) based on criteria (e.g. select the rows with field A > 5)? and then do something, or do you just select fields A, B, C with all of the records (and then do something)?</li> <li>Do you 'work on' all of your columns (in groups), or are there a good proportion that you may only use for reports (e.g. 
you want to keep the data around, but don't need to pull in that column explicitly until final results time)?</li> </ol> <h2>Solution</h2> <p><em>Ensure you have <a href="http://pandas.pydata.org/getpandas.html" rel="noreferrer">pandas at least <code>0.10.1</code></a> installed.</em></p> <p>Read <a href="http://pandas-docs.github.io/pandas-docs-travis/io.html#iterating-through-files-chunk-by-chunk" rel="noreferrer">iterating files chunk-by-chunk</a> and <a href="http://pandas-docs.github.io/pandas-docs-travis/io.html#multiple-table-queries" rel="noreferrer">multiple table queries</a>.</p> <p>Since pytables is optimized for row-wise operation (which is what you query on), we will create a table for each group of fields. This way it's easy to select a small group of fields (which will work with a big table, but it's more efficient to do it this way... I think I may be able to fix this limitation in the future... this is more intuitive anyhow):<br> (The following is pseudocode.)</p> <pre><code>import numpy as np import pandas as pd # create a store store = pd.HDFStore('mystore.h5') # this is the key to your storage: # this maps your fields to a specific group, and defines # what you want to have as data_columns. # you might want to create a nice class wrapping this # (as you will want to have this map and its inversion) group_map = dict( A = dict(fields = ['field_1','field_2',.....], dc = ['field_1',....,'field_5']), B = dict(fields = ['field_10',...... ], dc = ['field_10']), ..... 
REPORTING_ONLY = dict(fields = ['field_1000','field_1001',...], dc = []), ) group_map_inverted = dict() for g, v in group_map.items(): group_map_inverted.update(dict([ (f,g) for f in v['fields'] ])) </code></pre> <p>Reading in the files and creating the storage (essentially doing what <code>append_to_multiple</code> does):</p> <pre><code>for f in files: # read in the file, additional options may be necessary here # the chunksize is not strictly necessary, you may be able to slurp each # file into memory in which case just eliminate this part of the loop # (you can also change chunksize if necessary) for chunk in pd.read_table(f, chunksize=50000): # we are going to append to each table by group # we are not going to create indexes at this time # but we *ARE* going to create (some) data_columns # figure out the field groupings for g, v in group_map.items(): # create the frame for this group frame = chunk.reindex(columns = v['fields'], copy = False) # append it store.append(g, frame, index=False, data_columns = v['dc']) </code></pre> <p>Now you have all of the tables in the file (actually you could store them in separate files if you wish, you would prob have to add the filename to the group_map, but probably this isn't necessary).</p> <p>This is how you get columns and create new ones:</p> <pre><code>frame = store.select(group_that_I_want) # you can optionally specify: # columns = a list of the columns IN THAT GROUP (if you wanted to # select only say 3 out of the 20 columns in this sub-table) # and a where clause if you want a subset of the rows # do calculations on this frame new_frame = cool_function_on_frame(frame) # to 'add columns', create a new group (you probably want to # limit the columns in this new_group to be only NEW ones # (e.g. 
so you don't overlap from the other tables) # add this info to the group_map store.append(new_group, new_frame.reindex(columns = new_columns_created, copy = False), data_columns = new_columns_created) </code></pre> <p>When you are ready for post_processing:</p> <pre><code># This may be a bit tricky; and depends what you are actually doing. # I may need to modify this function to be a bit more general: report_data = store.select_as_multiple([groups_1,groups_2,.....], where =['field_1&gt;0', 'field_1000=foo'], selector = group_1) </code></pre> <p>About data_columns, you don't actually need to define <strong>ANY</strong> data_columns; they allow you to sub-select rows based on the column. E.g. something like:</p> <pre><code>store.select(group, where = ['field_1000=foo', 'field_1001&gt;0']) </code></pre> <p>They may be most interesting to you in the final report generation stage (essentially a data column is segregated from other columns, which might impact efficiency somewhat if you define a lot).</p> <p>You also might want to:</p> <ul> <li>create a function which takes a list of fields, looks up the groups in the groups_map, then selects these and concatenates the results so you get the resulting frame (this is essentially what select_as_multiple does). <em>This way the structure would be pretty transparent to you.</em></li> <li>indexes on certain data columns (makes row-subsetting much faster).</li> <li>enable compression.</li> </ul> <p>Let me know when you have questions!</p>
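The chunk-by-chunk reading pattern at the heart of this workflow can be illustrated without HDF5 at all (the `HDFStore` calls above assume PyTables is installed); here an in-memory CSV stands in for a large flat file:

```python
import io
import pandas as pd

# Stand-in for a large flat file; chunksize bounds memory use.
csv = "A,B\n" + "\n".join(f"{i},{i * 2}" for i in range(10))

total = 0
for chunk in pd.read_csv(io.StringIO(csv), chunksize=4):
    # each chunk is an ordinary DataFrame of at most 4 rows
    total += int(chunk['A'].sum())
print(total)  # 45
```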
617
pandas
Writing a pandas DataFrame to CSV file
https://stackoverflow.com/questions/16923281/writing-a-pandas-dataframe-to-csv-file
<p>I have a dataframe in pandas which I would like to write to a CSV file.</p> <p>I am doing this using:</p> <pre class="lang-py prettyprint-override"><code>df.to_csv('out.csv') </code></pre> <p>And getting the following error:</p> <pre class="lang-none prettyprint-override"><code>UnicodeEncodeError: 'ascii' codec can't encode character u'\u03b1' in position 20: ordinal not in range(128) </code></pre> <ul> <li>Is there any way to get around this easily (i.e. I have unicode characters in my data frame)?</li> <li>And is there a way to write to a tab delimited file instead of a CSV using e.g. a 'to-tab' method (that I don't think exists)?</li> </ul>
<p>To delimit by a tab you can use the <code>sep</code> argument of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="noreferrer"><code>to_csv</code></a>:</p> <pre><code>df.to_csv(file_name, sep='\t') </code></pre> <p>To use a specific encoding (e.g. 'utf-8') use the <code>encoding</code> argument:</p> <pre><code>df.to_csv(file_name, sep='\t', encoding='utf-8') </code></pre> <p>In many cases you will want to remove the index and add a header:</p> <pre><code>df.to_csv(file_name, sep='\t', encoding='utf-8', index=False, header=True) </code></pre>
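A quick round-trip sketch (using an in-memory buffer so it needs no files; the `encoding` argument only matters when writing to an actual file on disk):

```python
import io
import pandas as pd

df = pd.DataFrame({'name': ['alpha', 'βeta'], 'val': [1, 2]})

buf = io.StringIO()
df.to_csv(buf, sep='\t', index=False)  # tab-delimited, no index column
back = pd.read_csv(io.StringIO(buf.getvalue()), sep='\t')
print(back.equals(df))  # True
```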
618
pandas
How do I expand the output display to see more columns of a Pandas DataFrame?
https://stackoverflow.com/questions/11707586/how-do-i-expand-the-output-display-to-see-more-columns-of-a-pandas-dataframe
<p>Is there a way to widen the display of output in either interactive or script-execution mode?</p> <p>Specifically, I am using the <code>describe()</code> function on a Pandas <code>DataFrame</code>. When the <code>DataFrame</code> is five columns (labels) wide, I get the descriptive statistics that I want. However, if the <code>DataFrame</code> has any more columns, the statistics are suppressed and something like this is returned:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt; Index: 8 entries, count to max &gt;&gt; Data columns: &gt;&gt; x1 8 non-null values &gt;&gt; x2 8 non-null values &gt;&gt; x3 8 non-null values &gt;&gt; x4 8 non-null values &gt;&gt; x5 8 non-null values &gt;&gt; x6 8 non-null values &gt;&gt; x7 8 non-null values </code></pre> <p>The &quot;8&quot; value is given whether there are 6 or 7 columns. What does the &quot;8&quot; refer to?</p> <p>I have already tried dragging the <a href="https://en.wikipedia.org/wiki/IDLE" rel="noreferrer">IDLE</a> window larger, as well as increasing the &quot;Configure IDLE&quot; width options, to no avail.</p>
<p>(For Pandas versions before 0.23.4, see at bottom.)</p> <p>Use <code>pandas.set_option(optname, val)</code>, or equivalently <code>pd.options.&lt;opt.hierarchical.name&gt; = val</code>. Like:</p> <pre><code>import pandas as pd pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) </code></pre> <p>Pandas will try to autodetect the size of your terminal window if you set <code>pd.options.display.width = 0</code>.</p> <p>Here is the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html" rel="noreferrer">help for <code>set_option</code></a>:</p> <pre> set_option(pat,value) - Sets the value of the specified option Available options: display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format, height, line_width, max_columns, max_colwidth, max_info_columns, max_info_rows, max_rows, max_seq_items, mpl_style, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, width] mode.[sim_interactive, use_inf_as_null] Parameters ---------- pat - str/regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g., *x.y.z.option_name*), your code may break in future versions if new options with similar names are introduced. value - new value of option. Returns ------- None Raises ------ KeyError if no such option exists display.chop_threshold: [default: None] [currently: None] : float or None if set to a float value, all float values smaller then the given threshold will be displayed as exactly 0 by repr and friends. display.colheader_justify: [default: right] [currently: right] : 'left'/'right' Controls the justification of column headers. used by DataFrameFormatter. display.column_space: [default: 12] [currently: 12]No description available. 
display.date_dayfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the day first, eg 20/01/2005 display.date_yearfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the year first, e.g., 2005/01/20 display.encoding: [default: UTF-8] [currently: UTF-8] : str/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. display.expand_frame_repr: [default: True] [currently: True] : boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, `max_columns` is still respected, but the output will wrap-around across multiple "pages" if it's width exceeds `display.width`. display.float_format: [default: None] [currently: None] : callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See core.format.EngFormatter for an example. display.height: [default: 60] [currently: 1000] : int Deprecated. (Deprecated, use `display.height` instead.) display.line_width: [default: 80] [currently: 1000] : int Deprecated. (Deprecated, use `display.width` instead.) display.max_columns: [default: 20] [currently: 500] : int max_rows and max_columns are used in __repr__() methods to decide if to_string() or info() is used to render an object to a string. In case python/IPython is running in a terminal this can be set to 0 and Pandas will correctly auto-detect the width the terminal and swap to a smaller format in case all columns would not fit vertically. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. 'None' value means unlimited. 
display.max_colwidth: [default: 50] [currently: 50] : int The maximum width in characters of a column in the repr of a Pandas data structure. When the column overflows, a "..." placeholder is embedded in the output. display.max_info_columns: [default: 100] [currently: 100] : int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. display.max_info_rows: [default: 1690785] [currently: 1690785] : int or None max_info_rows is the maximum number of rows for which a frame will perform a null check on its columns when repr'ing To a console. The default is 1,000,000 rows. So, if a DataFrame has more 1,000,000 rows there will be no null check performed on the columns and thus the representation will take much less time to display in an interactive session. A value of None means always perform a null check when repr'ing. display.max_rows: [default: 60] [currently: 500] : int This sets the maximum number of rows Pandas should output when printing out various output. For example, this value determines whether the repr() for a dataframe prints out fully or just a summary repr. 'None' value means unlimited. display.max_seq_items: [default: None] [currently: None] : int or None when pretty-printing a long sequence, no more then `max_seq_items` will be printed. If items are ommitted, they will be denoted by the addition of "..." to the resulting string. If set to None, the number of items to be printed is unlimited. display.mpl_style: [default: None] [currently: None] : bool Setting this to 'default' will modify the rcParams used by matplotlib to give plots a more pleasing visual style by default. Setting this to None/False restores the values to their initial value. 
display.multi_sparse: [default: True] [currently: True] : boolean "sparsify" MultiIndex display (don't display repeated elements in outer levels within groups) display.notebook_repr_html: [default: True] [currently: True] : boolean When True, IPython notebook will use html representation for Pandas objects (if it is available). display.pprint_nest_depth: [default: 3] [currently: 3] : int Controls the number of nested levels to process when pretty-printing display.precision: [default: 7] [currently: 7] : int Floating point output precision (number of significant digits). This is only a suggestion display.width: [default: 80] [currently: 1000] : int Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and Pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. mode.sim_interactive: [default: False] [currently: False] : boolean Whether to simulate interactive mode for purposes of testing mode.use_inf_as_null: [default: False] [currently: False] : boolean True means treat None, NaN, INF, -INF as null (old way), False means None and NaN are null, but INF, -INF are not null (new way). Call def: pd.set_option(self, *args, **kwds) </pre> <hr /> <h3>Older version information</h3> <p><em>Much of this has been deprecated.</em></p> <p>As @bmu <a href="https://stackoverflow.com/a/11708664/623735">mentioned</a>, Pandas auto detects (by default) the size of the display area, a summary view will be used when an object repr does not fit on the display. You mentioned resizing the IDLE window, to no effect. If you do <code>print df.describe().to_string()</code> does it fit on the IDLE window?</p> <p>The terminal size is determined by <code>pandas.util.terminal.get_terminal_size()</code> (deprecated and removed), this returns a tuple containing the <code>(width, height)</code> of the display. 
Does the output match the size of your IDLE window? There might be an issue (there was one before when running a terminal in Emacs).</p> <p>Note that it is possible to bypass the autodetect, <code>pandas.set_printoptions(max_rows=200, max_columns=10)</code> will never switch to summary view if number of rows, columns does not exceed the given limits.</p> <hr /> <p>The <code>max_colwidth</code> option helps in seeing untruncated form of each column.</p> <p><a href="https://i.sstatic.net/J412l.png" rel="noreferrer"><img src="https://i.sstatic.net/J412l.png" alt="TruncatedColumnDisplay" /></a></p>
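When you only need the wider display once, `pd.option_context` applies the same options temporarily without changing global state (a sketch, assuming a recent pandas where `display.max_columns` accepts `None` for "unlimited"):

```python
import pandas as pd

df = pd.DataFrame({f'c{i}': range(3) for i in range(30)})

# Temporarily lift the column limit without touching global options.
with pd.option_context('display.max_columns', None, 'display.width', 1000):
    shown = repr(df)

print('c29' in shown)  # True -- the last column is no longer truncated
```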
619
pandas
Creating an empty Pandas DataFrame, and then filling it
https://stackoverflow.com/questions/13784192/creating-an-empty-pandas-dataframe-and-then-filling-it
<p>I'm starting from the pandas DataFrame documentation here: <em><a href="http://pandas.pydata.org/pandas-docs/stable/dsintro.html" rel="noreferrer">Introduction to data structures</a></em></p> <p>I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. I'd like to initialize the DataFrame with columns A, B, and timestamp rows, all 0 or all NaN.</p> <p>I'd then add initial values and go over this data calculating the new row from the row before, say <code>row[A][t] = row[A][t-1]+1</code> or so.</p> <p>I'm currently using the code as below, but I feel it's kind of ugly and there must be a way to do this with a DataFrame directly or just a better way in general.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import datetime as dt import scipy as s base = dt.datetime.today().date() dates = [ base - dt.timedelta(days=x) for x in range(9, -1, -1) ] valdict = {} symbols = ['A','B', 'C'] for symb in symbols: valdict[symb] = pd.Series( s.zeros(len(dates)), dates ) for thedate in dates: if thedate &gt; dates[0]: for symb in valdict: valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)] </code></pre>
<p>Here's a couple of suggestions:</p> <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html" rel="noreferrer"><code>date_range</code></a> for the index:</p> <pre><code>import datetime import pandas as pd import numpy as np todays_date = datetime.datetime.now().date() index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D') columns = ['A','B', 'C'] </code></pre> <p><em>Note: we could create an empty DataFrame (with <code>NaN</code>s) simply by writing:</em></p> <pre><code>df_ = pd.DataFrame(index=index, columns=columns) df_ = df_.fillna(0) # With 0s rather than NaNs </code></pre> <p>To do these type of calculations for the data, use a <a href="https://en.wikipedia.org/wiki/NumPy" rel="noreferrer">NumPy</a> array:</p> <pre><code>data = np.array([np.arange(10)]*3).T </code></pre> <p>Hence we can create the DataFrame:</p> <pre class="lang-none prettyprint-override"><code>In [10]: df = pd.DataFrame(data, index=index, columns=columns) In [11]: df Out[11]: A B C 2012-11-29 0 0 0 2012-11-30 1 1 1 2012-12-01 2 2 2 2012-12-02 3 3 3 2012-12-03 4 4 4 2012-12-04 5 5 5 2012-12-05 6 6 6 2012-12-06 7 7 7 2012-12-07 8 8 8 2012-12-08 9 9 9 </code></pre>
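For the iterative "each row depends on the previous row" calculation in the question, a common alternative (a sketch, not the only way) is to accumulate plain dicts in a list and build the DataFrame once at the end, which avoids growing the frame row by row:

```python
import pandas as pd

# Accumulate plain dicts, then build the frame once at the end.
rows = []
prev = 0
for t in range(5):
    prev = prev + 1          # row[t] computed from row[t-1]
    rows.append({'A': prev, 'B': prev * 2})

df = pd.DataFrame(rows)
print(df['A'].tolist())  # [1, 2, 3, 4, 5]
```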
620
pandas
Deleting DataFrame row in Pandas based on column value
https://stackoverflow.com/questions/18172851/deleting-dataframe-row-in-pandas-based-on-column-value
<p>I have the following DataFrame:</p> <pre class="lang-none prettyprint-override"><code> daysago line_race rating rw wrating line_date 2007-03-31 62 11 56 1.000000 56.000000 2007-03-10 83 11 67 1.000000 67.000000 2007-02-10 111 9 66 1.000000 66.000000 2007-01-13 139 10 83 0.880678 73.096278 2006-12-23 160 10 88 0.793033 69.786942 2006-11-09 204 9 52 0.636655 33.106077 2006-10-22 222 8 66 0.581946 38.408408 2006-09-29 245 9 70 0.518825 36.317752 2006-09-16 258 11 68 0.486226 33.063381 2006-08-30 275 8 72 0.446667 32.160051 2006-02-11 475 5 65 0.164591 10.698423 2006-01-13 504 0 70 0.142409 9.968634 2006-01-02 515 0 64 0.134800 8.627219 2005-12-06 542 0 70 0.117803 8.246238 2005-11-29 549 0 70 0.113758 7.963072 2005-11-22 556 0 -1 0.109852 -0.109852 2005-11-01 577 0 -1 0.098919 -0.098919 2005-10-20 589 0 -1 0.093168 -0.093168 2005-09-27 612 0 -1 0.083063 -0.083063 2005-09-07 632 0 -1 0.075171 -0.075171 2005-06-12 719 0 69 0.048690 3.359623 2005-05-29 733 0 -1 0.045404 -0.045404 2005-05-02 760 0 -1 0.039679 -0.039679 2005-04-02 790 0 -1 0.034160 -0.034160 2005-03-13 810 0 -1 0.030915 -0.030915 2004-11-09 934 0 -1 0.016647 -0.016647 </code></pre> <p>I need to remove the rows where <code>line_race</code> is equal to <code>0</code>. What's the most efficient way to do this?</p>
<p>If I'm understanding correctly, it should be as simple as:</p> <pre><code>df = df[df.line_race != 0] </code></pre>
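The same filter can also be expressed with `DataFrame.query`, which some find more readable for compound conditions; this is a sketch using a hypothetical miniature version of the question's frame:

```python
import pandas as pd

# Hypothetical miniature version of the DataFrame in the question
df = pd.DataFrame({"line_race": [11, 0, 9, 0], "rating": [56, 70, 66, 64]})

# Keep only the rows where line_race is non-zero
filtered = df.query("line_race != 0")
print(filtered)
```

Both forms return a new DataFrame; neither modifies `df` in place.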
621
pandas
Combine two columns of text in pandas dataframe
https://stackoverflow.com/questions/19377969/combine-two-columns-of-text-in-pandas-dataframe
<p>I have a dataframe that looks like</p> <pre class="lang-none prettyprint-override"><code>Year quarter 2000 q2 2001 q3 </code></pre> <p>How do I add a new column by combining these columns to get the following dataframe?</p> <pre class="lang-none prettyprint-override"><code>Year quarter period 2000 q2 2000q2 2001 q3 2001q3 </code></pre>
<p>If both columns are strings, you can concatenate them directly:</p> <pre><code>df[&quot;period&quot;] = df[&quot;Year&quot;] + df[&quot;quarter&quot;] </code></pre> <p>If one (or both) of the columns are not string typed, you should convert it (them) first,</p> <pre><code>df[&quot;period&quot;] = df[&quot;Year&quot;].astype(str) + df[&quot;quarter&quot;] </code></pre> <h3><strong>Beware of NaNs when doing this!</strong></h3> <hr /> <p>If you need to join multiple string columns, you can use <code>agg</code>:</p> <pre><code>df['period'] = df[['Year', 'quarter', ...]].agg('-'.join, axis=1) </code></pre> <p>Where &quot;-&quot; is the separator.</p>
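To illustrate the NaN caveat above with a small sketch (the missing `Year` value here is hypothetical): a NaN forces the column to float, so `astype(str)` produces strings like `"2000.0"` and turns the missing value into the literal string `"nan"`:

```python
import numpy as np
import pandas as pd

# Hypothetical frame with a missing Year; the NaN makes the column float64
df = pd.DataFrame({"Year": [2000, 2001, np.nan], "quarter": ["q2", "q3", "q1"]})

# Naive conversion: floats stringify with a trailing ".0",
# and the missing value becomes the literal string "nan"
naive = df["Year"].astype(str) + df["quarter"]

# One workaround: drop NaNs and cast through int first; the missing row
# stays NaN in the result thanks to index alignment
safe = df["Year"].dropna().astype(int).astype(str) + df["quarter"]
print(naive.tolist())
```

Depending on your needs, `fillna` before or after the concatenation is another option.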
622
pandas
How are iloc and loc different?
https://stackoverflow.com/questions/31593201/how-are-iloc-and-loc-different
<p>Can someone explain how these two methods of slicing are different? I've seen <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="noreferrer">the docs</a> and I've seen previous similar questions (<a href="https://stackoverflow.com/questions/28757389/loc-vs-iloc-vs-ix-vs-at-vs-iat">1</a>, <a href="https://stackoverflow.com/questions/27667759/is-ix-always-better-than-loc-and-iloc-since-it-is-faster-and-supports-i">2</a>), but I still find myself unable to understand how they are different. To me, they seem interchangeable in large part, because they are at the lower levels of slicing.</p> <p>For example, say we want to get the first five rows of a <code>DataFrame</code>. How is it that these two work?</p> <pre class="lang-py prettyprint-override"><code>df.loc[:5] df.iloc[:5] </code></pre> <p>Can someone present cases where the distinction in uses are clearer?</p> <hr /> <p>Once upon a time, I also wanted to know how these two functions differed from <code>df.ix[:5]</code> but <code>ix</code> has been removed from pandas 1.0, so I don't care anymore.</p>
<h2>Label <em>vs.</em> Location</h2> <p>The main distinction between the two methods is:</p> <ul> <li><p><code>loc</code> gets rows (and/or columns) with particular <strong>labels</strong>.</p> </li> <li><p><code>iloc</code> gets rows (and/or columns) at integer <strong>locations</strong>.</p> </li> </ul> <p>To demonstrate, consider a series <code>s</code> of characters with a non-monotonic integer index:</p> <pre><code>&gt;&gt;&gt; s = pd.Series(list(&quot;abcdef&quot;), index=[49, 48, 47, 0, 1, 2]) 49 a 48 b 47 c 0 d 1 e 2 f &gt;&gt;&gt; s.loc[0] # value at index label 0 'd' &gt;&gt;&gt; s.iloc[0] # value at index location 0 'a' &gt;&gt;&gt; s.loc[0:1] # rows at index labels between 0 and 1 (inclusive) 0 d 1 e &gt;&gt;&gt; s.iloc[0:1] # rows at index location between 0 and 1 (exclusive) 49 a </code></pre> <p>Here are some of the differences/similarities between <code>s.loc</code> and <code>s.iloc</code> when passed various objects:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>&lt;object&gt;</th> <th>description</th> <th><code>s.loc[&lt;object&gt;]</code></th> <th><code>s.iloc[&lt;object&gt;]</code></th> </tr> </thead> <tbody> <tr> <td><code>0</code></td> <td>single item</td> <td>Value at index <em>label</em> <code>0</code> (the string <code>'d'</code>)</td> <td>Value at index <em>location</em> 0 (the string <code>'a'</code>)</td> </tr> <tr> <td><code>0:1</code></td> <td>slice</td> <td><strong>Two</strong> rows (labels <code>0</code> and <code>1</code>)</td> <td><strong>One</strong> row (first row at location 0)</td> </tr> <tr> <td><code>1:47</code></td> <td>slice with out-of-bounds end</td> <td><strong>Zero</strong> rows (empty Series)</td> <td><strong>Five</strong> rows (location 1 onwards)</td> </tr> <tr> <td><code>1:47:-1</code></td> <td>slice with negative step</td> <td><strong>three</strong> rows (labels <code>1</code> back to <code>47</code>)</td> <td><strong>Zero</strong> rows (empty Series)</td> </tr> <tr> <td><code>[2, 
0]</code></td> <td>integer list</td> <td><strong>Two</strong> rows with given labels</td> <td><strong>Two</strong> rows with given locations</td> </tr> <tr> <td><code>s &gt; 'e'</code></td> <td>Bool series (indicating which values have the property)</td> <td><strong>One</strong> row (containing <code>'f'</code>)</td> <td><code>NotImplementedError</code></td> </tr> <tr> <td><code>(s&gt;'e').values</code></td> <td>Bool array</td> <td><strong>One</strong> row (containing <code>'f'</code>)</td> <td>Same as <code>loc</code></td> </tr> <tr> <td><code>999</code></td> <td>int object not in index</td> <td><code>KeyError</code></td> <td><code>IndexError</code> (out of bounds)</td> </tr> <tr> <td><code>-1</code></td> <td>int object not in index</td> <td><code>KeyError</code></td> <td>Returns last value in <code>s</code></td> </tr> <tr> <td><code>lambda x: x.index[3]</code></td> <td>callable applied to series (here returning 3<sup>rd</sup> item in index)</td> <td><code>s.loc[s.index[3]]</code></td> <td><code>s.iloc[s.index[3]]</code></td> </tr> </tbody> </table> </div> <p><code>loc</code>'s label-querying capabilities extend well beyond integer indexes, and it's worth highlighting a couple of additional examples.</p> <p>Here's a Series where the index contains string objects:</p> <pre><code>&gt;&gt;&gt; s2 = pd.Series(s.index, index=s.values) &gt;&gt;&gt; s2 a 49 b 48 c 47 d 0 e 1 f 2 </code></pre> <p>Since <code>loc</code> is label-based, it can fetch the first value in the Series using <code>s2.loc['a']</code>. It can also slice with non-integer objects:</p> <pre><code>&gt;&gt;&gt; s2.loc['c':'e'] # all rows lying between 'c' and 'e' (inclusive) c 47 d 0 e 1 </code></pre> <p>For DateTime indexes, we don't need to pass the exact date/time to fetch by label. 
For example:</p> <pre><code>&gt;&gt;&gt; s3 = pd.Series(list('abcde'), pd.date_range('now', periods=5, freq='M')) &gt;&gt;&gt; s3 2021-01-31 16:41:31.879768 a 2021-02-28 16:41:31.879768 b 2021-03-31 16:41:31.879768 c 2021-04-30 16:41:31.879768 d 2021-05-31 16:41:31.879768 e </code></pre> <p>Then to fetch the row(s) for March/April 2021 we only need:</p> <pre><code>&gt;&gt;&gt; s3.loc['2021-03':'2021-04'] 2021-03-31 17:04:30.742316 c 2021-04-30 17:04:30.742316 d </code></pre> <h2>Rows and Columns</h2> <p><code>loc</code> and <code>iloc</code> work the same way with DataFrames as they do with Series. It's useful to note that both methods can address columns and rows together.</p> <p>When given a tuple, the first element is used to index the rows and, if it exists, the second element is used to index the columns.</p> <p>Consider the DataFrame defined below:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; df = pd.DataFrame(np.arange(25).reshape(5, 5), index=list('abcde'), columns=['x','y','z', 8, 9]) &gt;&gt;&gt; df x y z 8 9 a 0 1 2 3 4 b 5 6 7 8 9 c 10 11 12 13 14 d 15 16 17 18 19 e 20 21 22 23 24 </code></pre> <p>Then for example:</p> <pre><code>&gt;&gt;&gt; df.loc['c': , :'z'] # rows 'c' and onwards AND columns up to 'z' x y z c 10 11 12 d 15 16 17 e 20 21 22 &gt;&gt;&gt; df.iloc[:, 3] # all rows, but only the column at index location 3 a 3 b 8 c 13 d 18 e 23 </code></pre> <p>Sometimes we want to mix label and positional indexing methods for the rows and columns, somehow combining the capabilities of <code>loc</code> and <code>iloc</code>.</p> <p>For example, consider the following DataFrame. 
How best to slice the rows up to and including 'c' <em>and</em> take the first four columns?</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; df = pd.DataFrame(np.arange(25).reshape(5, 5), index=list('abcde'), columns=['x','y','z', 8, 9]) &gt;&gt;&gt; df x y z 8 9 a 0 1 2 3 4 b 5 6 7 8 9 c 10 11 12 13 14 d 15 16 17 18 19 e 20 21 22 23 24 </code></pre> <p>We can achieve this result using <code>iloc</code> and the help of another method:</p> <pre><code>&gt;&gt;&gt; df.iloc[:df.index.get_loc('c') + 1, :4] x y z 8 a 0 1 2 3 b 5 6 7 8 c 10 11 12 13 </code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/version/0.19.1/generated/pandas.Index.get_loc.html" rel="noreferrer"><code>get_loc()</code></a> is an index method meaning &quot;get the position of the label in this index&quot;. Note that since slicing with <code>iloc</code> is exclusive of its endpoint, we must add 1 to this value if we want row 'c' as well.</p>
623
pandas
How do I count the NaN values in a column in pandas DataFrame?
https://stackoverflow.com/questions/26266362/how-do-i-count-the-nan-values-in-a-column-in-pandas-dataframe
<p>I want to find the number of <code>NaN</code> in each column of my data.</p>
<p>Use the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.isna.html" rel="noreferrer"><code>isna()</code></a> method (or its alias <code>isnull()</code>, which is also compatible with older pandas versions &lt; 0.21.0) and then sum to count the NaN values. For one column:</p> <pre><code>&gt;&gt;&gt; s = pd.Series([1,2,3, np.nan, np.nan]) &gt;&gt;&gt; s.isna().sum() # or s.isnull().sum() for older pandas versions 2 </code></pre> <p>For several columns, this also works:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'a':[1,2,np.nan], 'b':[np.nan,1,np.nan]}) &gt;&gt;&gt; df.isna().sum() a 1 b 2 dtype: int64 </code></pre>
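If you want a single grand total across the whole DataFrame rather than a per-column breakdown, you can chain a second `sum()` (the Series returned by the first sum reduces to a scalar) — a small sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, np.nan], 'b': [np.nan, 1, np.nan]})

per_column = df.isna().sum()     # NaN count per column (a Series)
total = df.isna().sum().sum()    # NaN count for the entire frame (a scalar)
print(total)
```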
624
pandas
How to delete rows from a pandas DataFrame based on a conditional expression
https://stackoverflow.com/questions/13851535/how-to-delete-rows-from-a-pandas-dataframe-based-on-a-conditional-expression
<p>I have a pandas DataFrame and I want to delete rows from it where the length of the string in a particular column is greater than 2.</p> <p>I expect to be able to do this (per <a href="https://stackoverflow.com/questions/11881165/slice-pandas-dataframe-by-row">this answer</a>):</p> <pre><code>df[(len(df['column name']) &lt; 2)] </code></pre> <p>but I just get the error:</p> <pre><code>KeyError: u'no item named False' </code></pre> <p>What am I doing wrong?</p> <p>(Note: I know I can use <code>df.dropna()</code> to get rid of rows that contain any <code>NaN</code>, but I didn't see how to remove rows based on a conditional expression.)</p>
<p>When you do <code>len(df['column name'])</code> you are just getting one number, namely the number of rows in the DataFrame (i.e., the length of the column itself). If you want to apply <code>len</code> to each element in the column, use <code>df['column name'].map(len)</code>. So try</p> <pre><code>df[df['column name'].map(len) &lt; 2] </code></pre>
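For string columns there is also the vectorised `.str.len()` accessor, which returns NaN for missing values (a NaN then fails the `< 2` comparison and the row is dropped) instead of raising a `TypeError` the way `map(len)` would on NaN. A sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical data; 'column name' mirrors the question's column
df = pd.DataFrame({'column name': ['a', 'abc', 'xy', 'b']})

# Vectorised length check via the .str accessor
short = df[df['column name'].str.len() < 2]
print(short)
```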
625
pandas
Pandas Merging 101
https://stackoverflow.com/questions/53645882/pandas-merging-101
<ul> <li>How can I perform a (<code>INNER</code>| (<code>LEFT</code>|<code>RIGHT</code>|<code>FULL</code>) <code>OUTER</code>) <code>JOIN</code> with pandas?</li> <li>How do I add NaNs for missing rows after a merge?</li> <li>How do I get rid of NaNs after merging?</li> <li>Can I merge on the index?</li> <li>How do I merge multiple DataFrames?</li> <li>Cross join with pandas</li> <li><code>merge</code>? <code>join</code>? <code>concat</code>? <code>update</code>? Who? What? Why?!</li> </ul> <p>... and more. I've seen these recurring questions asking about various facets of the pandas merge functionality. Most of the information regarding merge and its various use cases today is fragmented across dozens of badly worded, unsearchable posts. The aim here is to collate some of the more important points for posterity.</p> <p>This Q&amp;A is meant to be the next installment in a series of helpful user guides on common pandas idioms (see <a href="https://stackoverflow.com/questions/47152691/how-to-pivot-a-dataframe">this post on pivoting</a>, and <a href="https://stackoverflow.com/questions/49620538/what-are-the-levels-keys-and-names-arguments-for-in-pandas-concat-functio">this post on concatenation</a>, which I will be touching on, later).</p> <p>Please note that this post is <em>not</em> meant to be a replacement for <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html" rel="noreferrer">the documentation</a>, so please read that as well! 
Some of the examples are taken from there.</p> <hr /> <h3>Table of Contents</h3> <p><sub>For ease of access.</sub></p> <ul> <li><p><a href="https://stackoverflow.com/a/53645883/4909087">Merging basics - basic types of joins</a> (read this first)</p> </li> <li><p><a href="https://stackoverflow.com/a/65167356/4909087">Index-based joins</a></p> </li> <li><p><a href="https://stackoverflow.com/a/65167327/4909087">Generalizing to multiple DataFrames</a></p> </li> <li><p><a href="https://stackoverflow.com/a/53699013/4909087">Cross join</a></p> </li> </ul>
<p>This post aims to give readers a primer on SQL-flavored merging with Pandas, how to use it, and when not to use it.</p> <p>In particular, here's what this post will go through:</p> <ul> <li><p>The basics - types of joins (LEFT, RIGHT, OUTER, INNER)</p> <ul> <li>merging with different column names</li> <li>merging with multiple columns</li> <li>avoiding duplicate merge key column in output</li> </ul> </li> </ul> <p>What this post (and other posts by me on this thread) will not go through:</p> <ul> <li>Performance-related discussions and timings (for now). Mostly notable mentions of better alternatives, wherever appropriate.</li> <li>Handling suffixes, removing extra columns, renaming outputs, and other specific use cases. There are other (read: better) posts that deal with that, so figure it out!</li> </ul> <blockquote> <p><strong>Note</strong> Most examples default to INNER JOIN operations while demonstrating various features, unless otherwise specified.</p> <p>Furthermore, all the DataFrames here can be copied and replicated so you can play with them. Also, see <a href="https://stackoverflow.com/questions/31610889/how-to-copy-paste-dataframe-from-stackoverflow-into-python">this post</a> on how to read DataFrames from your clipboard.</p> <p>Lastly, all visual representation of JOIN operations have been hand-drawn using Google Drawings. 
Inspiration from <a href="https://stackoverflow.com/a/55858991/4909087">here</a>.</p> </blockquote> <hr /> <hr /> <h1>Enough talk - just show me how to use <code>merge</code>!</h1> <h3>Setup &amp; Basics</h3> <pre><code>np.random.seed(0) left = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)}) right = pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'value': np.random.randn(4)}) left key value 0 A 1.764052 1 B 0.400157 2 C 0.978738 3 D 2.240893 right key value 0 B 1.867558 1 D -0.977278 2 E 0.950088 3 F -0.151357 </code></pre> <p>For the sake of simplicity, the key column has the same name (for now).</p> <p>An <strong>INNER JOIN</strong> is represented by</p> <img src="https://i.sstatic.net/YvuOa.png" width="500"/> <blockquote> <p><strong>Note</strong> This, along with all the forthcoming figures, follows this convention:</p> <ul> <li><strong>blue</strong> indicates rows that are present in the merge result</li> <li><strong>red</strong> indicates rows that are excluded from the result (i.e., removed)</li> <li><strong>green</strong> indicates missing values that are replaced with <code>NaN</code>s in the result</li> </ul> </blockquote> <p>To perform an INNER JOIN, call <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="noreferrer"><code>merge</code></a> on the left DataFrame, specifying the right DataFrame and the join key (at the very least) as arguments.</p> <pre><code>left.merge(right, on='key') # Or, if you want to be explicit # left.merge(right, on='key', how='inner') key value_x value_y 0 B 0.400157 1.867558 1 D 2.240893 -0.977278 </code></pre> <p>This returns only rows from <code>left</code> and <code>right</code> which share a common key (in this example, &quot;B&quot; and &quot;D&quot;).</p> <p>A <strong>LEFT OUTER JOIN</strong>, or LEFT JOIN, is represented by</p> <img src="https://i.sstatic.net/BECid.png" width="500" /> <p>This can be performed by specifying <code>how='left'</code>.</p> 
<pre><code>left.merge(right, on='key', how='left') key value_x value_y 0 A 1.764052 NaN 1 B 0.400157 1.867558 2 C 0.978738 NaN 3 D 2.240893 -0.977278 </code></pre> <p>Carefully note the placement of NaNs here. If you specify <code>how='left'</code>, then only keys from <code>left</code> are used, and missing data from <code>right</code> is replaced by NaN.</p> <p>And similarly, for a <strong>RIGHT OUTER JOIN</strong>, or RIGHT JOIN, which is...</p> <img src="https://i.sstatic.net/8w1US.png" width="500" /> <p>...specify <code>how='right'</code>:</p> <pre><code>left.merge(right, on='key', how='right') key value_x value_y 0 B 0.400157 1.867558 1 D 2.240893 -0.977278 2 E NaN 0.950088 3 F NaN -0.151357 </code></pre> <p>Here, keys from <code>right</code> are used, and missing data from <code>left</code> is replaced by NaN.</p> <p>Finally, for the <strong>FULL OUTER JOIN</strong>, given by</p> <img src="https://i.sstatic.net/euLoe.png" width="500" /> <p>specify <code>how='outer'</code>.</p> <pre><code>left.merge(right, on='key', how='outer') key value_x value_y 0 A 1.764052 NaN 1 B 0.400157 1.867558 2 C 0.978738 NaN 3 D 2.240893 -0.977278 4 E NaN 0.950088 5 F NaN -0.151357 </code></pre> <p>This uses the keys from both frames, and NaNs are inserted for missing rows in both.</p> <p>The documentation summarizes these various merges nicely:</p> <p><a href="https://i.sstatic.net/5qDIy.png" rel="noreferrer"><img src="https://i.sstatic.net/5qDIy.png" alt="Enter image description here" /></a></p> <hr /> <h3><strong>Other JOINs - LEFT-Excluding, RIGHT-Excluding, and FULL-Excluding/ANTI JOINs</strong></h3> <p>You can perform <strong>LEFT-Excluding JOINs</strong> and <strong>RIGHT-Excluding JOINs</strong> in two steps.</p> <p>For a LEFT-Excluding JOIN, represented as</p> <img src="https://i.sstatic.net/bXWIV.png" width="500"/> <p>start by performing a LEFT OUTER JOIN and then filtering to rows coming from <code>left</code> only (excluding everything from the right),</p> 
<pre><code>(left.merge(right, on='key', how='left', indicator=True) .query('_merge == &quot;left_only&quot;') .drop('_merge', axis=1)) key value_x value_y 0 A 1.764052 NaN 2 C 0.978738 NaN </code></pre> <p>Where,</p> <pre><code>left.merge(right, on='key', how='left', <b>indicator=True</b>) key value_x value_y _merge 0 A 1.764052 NaN left_only 1 B 0.400157 1.867558 both 2 C 0.978738 NaN left_only 3 D 2.240893 -0.977278 both</code></pre> <p>And similarly, for a RIGHT-Excluding JOIN,</p> <img src="https://i.sstatic.net/Z0br2.png" width="500"/> <pre><code>(left.merge(right, on='key', how='right', <b>indicator=True</b>) .query('_merge == "right_only"') .drop('_merge', axis=1)) key value_x value_y 2 E NaN 0.950088 3 F NaN -0.151357</code></pre> <p>Lastly, if you are required to do a merge that only retains keys from the left or right, but not both (IOW, performing an <strong>ANTI-JOIN</strong>),</p> <img src="https://i.sstatic.net/PWMYd.png" width="500"/> <p>You can do this in similar fashion—</p> <pre><code>(left.merge(right, on='key', how='outer', indicator=True) .query('_merge != &quot;both&quot;') .drop('_merge', axis=1)) key value_x value_y 0 A 1.764052 NaN 2 C 0.978738 NaN 4 E NaN 0.950088 5 F NaN -0.151357 </code></pre> <hr /> <h3><strong>Different names for key columns</strong></h3> <p>If the key columns are named differently—for example, <code>left</code> has <code>keyLeft</code>, and <code>right</code> has <code>keyRight</code> instead of <code>key</code>—then you will have to specify <code>left_on</code> and <code>right_on</code> as arguments instead of <code>on</code>:</p> <pre><code>left2 = left.rename({'key':'keyLeft'}, axis=1) right2 = right.rename({'key':'keyRight'}, axis=1) left2 keyLeft value 0 A 1.764052 1 B 0.400157 2 C 0.978738 3 D 2.240893 right2 keyRight value 0 B 1.867558 1 D -0.977278 2 E 0.950088 3 F -0.151357 </code></pre> <pre><code>left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner') keyLeft value_x keyRight value_y 0 B 
0.400157 B 1.867558 1 D 2.240893 D -0.977278 </code></pre> <hr /> <h3><strong>Avoiding duplicate key column in output</strong></h3> <p>When merging on <code>keyLeft</code> from <code>left</code> and <code>keyRight</code> from <code>right</code>, if you only want either of the <code>keyLeft</code> or <code>keyRight</code> (but not both) in the output, you can start by setting the index as a preliminary step.</p> <pre><code>left3 = left2.set_index('keyLeft') left3.merge(right2, left_index=True, right_on='keyRight') value_x keyRight value_y 0 0.400157 B 1.867558 1 2.240893 D -0.977278 </code></pre> <p>Contrast this with the output of the command just before (that is, the output of <code>left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner')</code>), and you'll notice <code>keyLeft</code> is missing. You can figure out what column to keep based on which frame's index is set as the key. This may matter when, say, performing some OUTER JOIN operation.</p> <hr /> <h3><strong>Merging only a single column from one of the <code>DataFrames</code></strong></h3> <p>For example, consider</p> <pre><code>right3 = right.assign(newcol=np.arange(len(right))) right3 key value newcol 0 B 1.867558 0 1 D -0.977278 1 2 E 0.950088 2 3 F -0.151357 3 </code></pre> <p>If you are required to merge only &quot;newcol&quot; (without any of the other columns), you can usually just subset columns before merging:</p> <pre><code>left.merge(right3[['key', 'newcol']], on='key') key value newcol 0 B 0.400157 0 1 D 2.240893 1 </code></pre> <p>If you're doing a LEFT OUTER JOIN, a more performant solution would involve <code>map</code>:</p> <pre><code># left['newcol'] = left['key'].map(right3.set_index('key')['newcol']) left.assign(newcol=left['key'].map(right3.set_index('key')['newcol'])) key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 </code></pre> <p>As mentioned, this is similar to, but faster than</p> <pre><code>left.merge(right3[['key', 'newcol']], 
on='key', how='left') key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 </code></pre> <hr /> <h3><strong>Merging on multiple columns</strong></h3> <p>To join on more than one column, specify a list for <code>on</code> (or <code>left_on</code> and <code>right_on</code>, as appropriate).</p> <pre><code>left.merge(right, on=['key1', 'key2'] ...) </code></pre> <p>Or, in the event the names are different,</p> <pre><code>left.merge(right, left_on=['lkey1', 'lkey2'], right_on=['rkey1', 'rkey2']) </code></pre> <hr /> <h3><strong>Other useful <code>merge*</code> operations and functions</strong></h3> <ul> <li><p>Merging a DataFrame with Series on index: See <a href="https://stackoverflow.com/a/40762674/4909087">this answer</a>.</p> </li> <li><p>Besides <code>merge</code>, <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="noreferrer"><code>DataFrame.update</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="noreferrer"><code>DataFrame.combine_first</code></a> are also used in certain cases to update one DataFrame with another.</p> </li> <li><p><a href="http://pandas.pydata.org/pandas-docs/version/0.19.0/generated/pandas.merge_ordered.html" rel="noreferrer"><code>pd.merge_ordered</code></a> is a useful function for ordered JOINs.</p> </li> <li><p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge_asof.html" rel="noreferrer"><code>pd.merge_asof</code></a> (read: merge_asOf) is useful for <em>approximate</em> joins.</p> </li> </ul> <p><strong>This section only covers the very basics, and is designed to only whet your appetite. 
For more examples and cases, see the <a href="https://pandas.pydata.org/pandas-docs/stable/merging.html" rel="noreferrer">documentation on <code>merge</code>, <code>join</code>, and <code>concat</code></a> as well as the links to the function specifications.</strong></p> <hr /> <hr /> <h1>Continue Reading</h1> <p>Jump to other topics in Pandas Merging 101 to continue learning:</p> <ul> <li><p><a href="https://stackoverflow.com/a/53645883/4909087">Merging basics - basic types of joins</a> <sup>*</sup></p> </li> <li><p><a href="https://stackoverflow.com/a/65167356/4909087">Index-based joins</a></p> </li> <li><p><a href="https://stackoverflow.com/a/65167327/4909087">Generalizing to multiple DataFrames</a></p> </li> <li><p><a href="https://stackoverflow.com/a/53699013/4909087">Cross join</a></p> </li> </ul> <p><sub>*You are here.</sub></p>
626
pandas
Filter pandas DataFrame by substring criteria
https://stackoverflow.com/questions/11350770/filter-pandas-dataframe-by-substring-criteria
<p>I have a pandas DataFrame with a column of string values. I need to select rows based on partial string matches.</p> <p>Something like this idiom:</p> <pre class="lang-py prettyprint-override"><code>re.search(pattern, cell_in_question) </code></pre> <p>returning a boolean. I am familiar with the syntax of <code>df[df['A'] == &quot;hello world&quot;]</code> but can't seem to find a way to do the same with a partial string match, say <code>'hello'</code>.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html#string-methods" rel="noreferrer">Vectorized string methods (i.e. <code>Series.str</code>)</a> let you do the following:</p> <pre><code>df[df['A'].str.contains(&quot;hello&quot;)] </code></pre> <p>This is available in pandas <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.8.1.html" rel="noreferrer">0.8.1</a> and up.</p>
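`str.contains` also accepts a regex pattern by default, plus two keyword arguments that are easy to overlook: `case=False` for case-insensitive matching, and `na=False` so NaN cells count as non-matches instead of putting NaN into the boolean mask (which would break indexing). A small sketch with hypothetical data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['Hello world', 'goodbye', np.nan, 'say hello']})

# case=False: case-insensitive; na=False: treat NaN cells as non-matches
mask = df['A'].str.contains('hello', case=False, na=False)
matches = df[mask]
print(matches)
```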
627
pandas
How to filter Pandas dataframe using &#39;in&#39; and &#39;not in&#39; like in SQL
https://stackoverflow.com/questions/19960077/how-to-filter-pandas-dataframe-using-in-and-not-in-like-in-sql
<p>How can I achieve the equivalents of SQL's <code>IN</code> and <code>NOT IN</code>?</p> <p>I have a list with the required values. Here's the scenario:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']}) countries_to_keep = ['UK', 'China'] # pseudo-code: df[df['country'] not in countries_to_keep] </code></pre> <p>My current way of doing this is as follows:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']}) df2 = pd.DataFrame({'country': ['UK', 'China'], 'matched': True}) # IN df.merge(df2, how='inner', on='country') # NOT IN not_in = df.merge(df2, how='left', on='country') not_in = not_in[pd.isnull(not_in['matched'])] </code></pre> <p>But this seems like a horrible kludge. Can anyone improve on it?</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="noreferrer"><code>pd.Series.isin</code></a>.</p> <p>For &quot;IN&quot; use: <code>something.isin(somewhere)</code></p> <p>Or for &quot;NOT IN&quot;: <code>~something.isin(somewhere)</code></p> <p>As a worked example:</p> <pre><code>&gt;&gt;&gt; df country 0 US 1 UK 2 Germany 3 China &gt;&gt;&gt; countries_to_keep ['UK', 'China'] &gt;&gt;&gt; df.country.isin(countries_to_keep) 0 False 1 True 2 False 3 True Name: country, dtype: bool &gt;&gt;&gt; df[df.country.isin(countries_to_keep)] country 1 UK 3 China &gt;&gt;&gt; df[~df.country.isin(countries_to_keep)] country 0 US 2 Germany </code></pre>
628
pandas
Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o
<p>I want to filter my dataframe with an <code>or</code> condition to keep rows with a particular column's values that are outside the range <code>[-0.25, 0.25]</code>. I tried:</p> <pre><code>df = df[(df['col'] &lt; -0.25) or (df['col'] &gt; 0.25)] </code></pre> <p>But I get the error:</p> <blockquote> <p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> </blockquote>
<p>The <code>or</code> and <code>and</code> Python statements require <strong>truth</strong>-values. For pandas, these are considered ambiguous, so you should use &quot;bitwise&quot; <code>|</code> (or) or <code>&amp;</code> (and) operations:</p> <pre><code>df = df[(df['col'] &lt; -0.25) | (df['col'] &gt; 0.25)] </code></pre> <p>These are overloaded for these kinds of data structures to yield the element-wise <code>or</code> or <code>and</code>.</p> <hr /> <p>Just to add some more explanation to this statement:</p> <p>The exception is thrown when you want to get the <code>bool</code> of a <code>pandas.Series</code>:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; x = pd.Series([1]) &gt;&gt;&gt; bool(x) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>You hit a place where the operator <strong>implicitly</strong> converted the operands to <code>bool</code> (you used <code>or</code> but it also happens for <code>and</code>, <code>if</code> and <code>while</code>):</p> <pre><code>&gt;&gt;&gt; x or x ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). &gt;&gt;&gt; x and x ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). &gt;&gt;&gt; if x: ... print('fun') ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). &gt;&gt;&gt; while x: ... print('fun') ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>Besides these four statements, there are several Python functions that hide some <code>bool</code> calls (like <code>any</code>, <code>all</code>, <code>filter</code>, ...). 
These are normally not problematic with <code>pandas.Series</code>, but for completeness I wanted to mention these.</p> <hr /> <p>In your case, the exception isn't really helpful, because it doesn't mention the <strong>right alternatives</strong>. For <code>and</code> and <code>or</code>, if you want element-wise comparisons, you can use:</p> <ul> <li><p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_or.html" rel="noreferrer"><code>numpy.logical_or</code></a>:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; np.logical_or(x, y) </code></pre> <p>or simply the <code>|</code> operator:</p> <pre><code>&gt;&gt;&gt; x | y </code></pre> </li> <li><p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="noreferrer"><code>numpy.logical_and</code></a>:</p> <pre><code>&gt;&gt;&gt; np.logical_and(x, y) </code></pre> <p>or simply the <code>&amp;</code> operator:</p> <pre><code>&gt;&gt;&gt; x &amp; y </code></pre> </li> </ul> <p>If you're using the operators, then be sure to set your parentheses correctly because of <a href="https://docs.python.org/reference/expressions.html#operator-precedence" rel="noreferrer">operator precedence</a>.</p> <p>There are <a href="https://docs.scipy.org/doc/numpy/reference/routines.logic.html" rel="noreferrer">several logical NumPy functions</a> which <em>should</em> work on <code>pandas.Series</code>.</p> <hr /> <p>The alternatives mentioned in the Exception are more suited if you encountered it when doing <code>if</code> or <code>while</code>. I'll shortly explain each of these:</p> <ul> <li><p>If you want to check if your Series is <strong>empty</strong>:</p> <pre><code>&gt;&gt;&gt; x = pd.Series([]) &gt;&gt;&gt; x.empty True &gt;&gt;&gt; x = pd.Series([1]) &gt;&gt;&gt; x.empty False </code></pre> <p>Python normally interprets the <code>len</code>gth of containers (like <code>list</code>, <code>tuple</code>, ...) 
as truth-value if it has no explicit Boolean interpretation. So if you want the Python-like check, you could do: <code>if x.size</code> or <code>if not x.empty</code> instead of <code>if x</code>.</p> </li> <li><p>If your <code>Series</code> contains <strong>one and only one</strong> Boolean value:</p> <pre><code>&gt;&gt;&gt; x = pd.Series([100]) &gt;&gt;&gt; (x &gt; 50).bool() True &gt;&gt;&gt; (x &lt; 50).bool() False </code></pre> </li> <li><p>If you want to check the <strong>first and only item</strong> of your Series (like <code>.bool()</code>, but it works even for non-Boolean contents):</p> <pre><code>&gt;&gt;&gt; x = pd.Series([100]) &gt;&gt;&gt; x.item() 100 </code></pre> </li> <li><p>If you want to check if <strong>all</strong> or <strong>any</strong> item is not-zero, not-empty or not-False:</p> <pre><code>&gt;&gt;&gt; x = pd.Series([0, 1, 2]) &gt;&gt;&gt; x.all() # Because one element is zero False &gt;&gt;&gt; x.any() # because one (or more) elements are non-zero True </code></pre> </li> </ul>
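Putting the two halves of this answer together, here is a minimal, self-contained sketch (the series values are made up for illustration) showing an element-wise `|` building a boolean mask and `.any()` reducing it to the single truth value that `if` needs:

```python
import pandas as pd

s = pd.Series([-0.5, 0.1, 0.6])

# Element-wise: | returns a boolean Series, one entry per element
mask = (s < -0.25) | (s > 0.25)

# Reduction: .any() collapses the mask to a single bool, which is
# what `if` requires (a plain `if mask:` would raise the ValueError)
if mask.any():
    filtered = s[mask]
```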
629
pandas
Shuffle DataFrame rows
https://stackoverflow.com/questions/29576430/shuffle-dataframe-rows
<p>I have the following DataFrame:</p> <pre><code> Col1 Col2 Col3 Type 0 1 2 3 1 1 4 5 6 1 ... 20 7 8 9 2 21 10 11 12 2 ... 45 13 14 15 3 46 16 17 18 3 ... </code></pre> <p>The DataFrame is read from a CSV file. All rows which have <code>Type</code> 1 are on top, followed by the rows with <code>Type</code> 2, followed by the rows with <code>Type</code> 3, etc.</p> <p>I would like to shuffle the order of the DataFrame's rows so that all <code>Type</code>'s are mixed. A possible result could be:</p> <pre><code> Col1 Col2 Col3 Type 0 7 8 9 2 1 13 14 15 3 ... 20 1 2 3 1 21 10 11 12 2 ... 45 4 5 6 1 46 16 17 18 3 ... </code></pre> <p>How can I achieve this?</p>
<p>The idiomatic way to do this with Pandas is to use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html" rel="noreferrer"><code>.sample</code></a> method of your data frame to sample all rows without replacement:</p> <pre class="lang-py prettyprint-override"><code>df.sample(frac=1) </code></pre> <p>The <code>frac</code> keyword argument specifies the fraction of rows to return in the random sample, so <code>frac=1</code> means to return all rows (in random order).</p> <hr /> <p><strong>Note:</strong> If you wish to shuffle your dataframe in-place and reset the index, you could do e.g.</p> <pre class="lang-py prettyprint-override"><code>df = df.sample(frac=1).reset_index(drop=True) </code></pre> <p>Here, specifying <code>drop=True</code> prevents <code>.reset_index</code> from creating a column containing the old index entries.</p> <p><strong>Follow-up note:</strong> Although the operation above is not literally <em>in-place</em> (the <em>reference</em> changes, i.e. <code>id(df)</code> before the reassignment differs from <code>id(df)</code> after it), the net memory footprint stays essentially flat: once <code>df</code> is rebound, the original frame's data can be released, so no second full copy is kept around. You can verify this with a simple memory profiler:</p> <pre><code>$ python3 -m memory_profiler .\test.py Filename: .\test.py Line # Mem usage Increment Line Contents ================================================ 5 68.5 MiB 68.5 MiB @profile 6 def shuffle(): 7 847.8 MiB 779.3 MiB df = pd.DataFrame(np.random.randn(100, 1000000)) 8 847.9 MiB 0.1 MiB df = df.sample(frac=1).reset_index(drop=True) </code></pre>
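If you need the shuffle to be reproducible (e.g. in tests), `.sample` also accepts a `random_state` seed; a small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Col1': [1, 4, 7], 'Type': [1, 1, 2]})

# The same seed yields the same permutation on every run
shuffled_a = df.sample(frac=1, random_state=42).reset_index(drop=True)
shuffled_b = df.sample(frac=1, random_state=42).reset_index(drop=True)
```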
630
pandas
pandas.parser.CParserError: Error tokenizing data
https://stackoverflow.com/questions/18039057/pandas-parser-cparsererror-error-tokenizing-data
<p>I'm trying to use pandas to manipulate a .csv file but I get this error:</p> <blockquote> <p>pandas.parser.CParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 12</p> </blockquote> <p>I have tried to read the pandas docs, but found nothing.</p> <p>My code is simple:</p> <pre><code>path = 'GOOG Key Ratios.csv' #print(open(path).read()) data = pd.read_csv(path) </code></pre> <p>How can I resolve this? Should I use the <code>csv</code> module or another language?</p>
<p>You could also try:</p> <pre><code>data = pd.read_csv('file1.csv', on_bad_lines='skip') </code></pre> <p>Note that this will cause the offending lines to be skipped. If you don't expect many bad lines and want to at least know how many were skipped and where they occurred, use <code>on_bad_lines='warn'</code>. For more advanced handling of bad lines, you can pass a callable.</p> <p><strong>Edit</strong></p> <p>For Pandas &lt; 1.3.0 try</p> <pre><code>data = pd.read_csv(&quot;file1.csv&quot;, error_bad_lines=False) </code></pre> <p>as per the <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="noreferrer">pandas API reference</a>.</p>
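To see the behaviour without touching a real file, one can feed `read_csv` an in-memory CSV whose third line has one field too many (this assumes pandas >= 1.3, where `on_bad_lines` exists):

```python
import io
import pandas as pd

# Line 3 ("3,4,5") has 3 fields where the header promises 2
raw = "a,b\n1,2\n3,4,5\n6,7\n"

# on_bad_lines='skip' drops the malformed line instead of raising
df = pd.read_csv(io.StringIO(raw), on_bad_lines='skip')
```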
631
pandas
Constructing DataFrame from values in variables yields &quot;ValueError: If using all scalar values, you must pass an index&quot;
https://stackoverflow.com/questions/17839973/constructing-dataframe-from-values-in-variables-yields-valueerror-if-using-all
<p>I have two variables as follows.</p> <pre class="lang-py prettyprint-override"><code>a = 2 b = 3 </code></pre> <p>I want to construct a DataFrame from this:</p> <pre class="lang-py prettyprint-override"><code>df2 = pd.DataFrame({'A':a, 'B':b}) </code></pre> <p>This generates an error:</p> <pre class="lang-none prettyprint-override"><code>ValueError: If using all scalar values, you must pass an index </code></pre> <p>I tried this also:</p> <pre class="lang-py prettyprint-override"><code>df2 = (pd.DataFrame({'a':a, 'b':b})).reset_index() </code></pre> <p>This gives the same error message. How do I do what I want?</p>
<p>The error message says that if you're passing scalar values, you have to pass an index. So you can either not use scalar values for the columns -- e.g. use a list:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'A': [a], 'B': [b]}) &gt;&gt;&gt; df A B 0 2 3 </code></pre> <p>or use scalar values and pass an index:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'A': a, 'B': b}, index=[0, 3]) &gt;&gt;&gt; df A B 0 2 3 3 2 3 </code></pre>
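A related option, if the scalars arrive together as a dict, is to wrap the dict in a list so that it is read as a single row (using the question's variables):

```python
import pandas as pd

a, b = 2, 3

# A list of dicts: each dict becomes one row, so no explicit index is needed
df = pd.DataFrame([{'A': a, 'B': b}])
```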
632
pandas
How to convert index of a pandas dataframe into a column
https://stackoverflow.com/questions/20461165/how-to-convert-index-of-a-pandas-dataframe-into-a-column
<p>How to convert an index of a dataframe into a column?</p> <p>For example:</p> <pre class="lang-none prettyprint-override"><code> gi ptt_loc 0 384444683 593 1 384444684 594 2 384444686 596 </code></pre> <p>to</p> <pre class="lang-none prettyprint-override"><code> index1 gi ptt_loc 0 0 384444683 593 1 1 384444684 594 2 2 384444686 596 </code></pre>
<p>either:</p> <pre><code>df['index1'] = df.index </code></pre> <p>or <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="noreferrer"><code>.reset_index</code></a>:</p> <pre><code>df = df.reset_index() </code></pre> <hr /> <p>If you have a multi-index frame with 3 levels of index, like:</p> <pre><code>&gt;&gt;&gt; df val tick tag obs 2016-02-26 C 2 0.0139 2016-02-27 A 2 0.5577 2016-02-28 C 6 0.0303 </code></pre> <p>and you want to convert the 1st (<code>tick</code>) and 3rd (<code>obs</code>) levels in the index into columns, you could do:</p> <pre><code>&gt;&gt;&gt; df.reset_index(level=['tick', 'obs']) tick obs val tag C 2016-02-26 2 0.0139 A 2016-02-27 2 0.5577 C 2016-02-28 6 0.0303 </code></pre>
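To end up with the exact column name shown in the question (`index1`), you can rename the column that `reset_index` creates; a minimal sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({'gi': [384444683, 384444684]})

# reset_index moves the (unnamed) index into a column called 'index';
# rename it to the desired 'index1'
out = df.reset_index().rename(columns={'index': 'index1'})
```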
633