Unnamed: 0: int64, range 0 to 378k
id: int64, range 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, range -10 to 5.87k
9,800
57,037,715
Pandas: Delete rows with different encoding of 0s in python
<p>I have calculated statistical values and written them to a csv file. The nan values are replaced with zeros. There are rows with only zeros and there are rows with both 0 and 0.0 values only. How can I delete these rows? According to the attached image, rows 5 and 6 (only 0.0s) and rows 9 and 11 (both 0s and 0.0s) need to be deleted. <a href="https://i.stack.imgur.com/gUzDO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gUzDO.png" alt="enter image description here"></a></p> <pre><code>import pandas as pd all_df = pd.read_csv('source.csv') all_df.dropna(subset=all_df.columns.tolist()[1:], how='all', inplace=True) all_df.fillna(0, inplace=True) all_df.to_csv('outfile.csv', index=False) </code></pre>
<h2><strong>Use <code>all_df[(all_df.T != 0).any()]</code> or <code>all_df[(all_df != 0).any(axis=1)]</code>:</strong></h2> <pre><code>import pandas as pd all_df = pd.DataFrame({'a':[0,0,0,1], 'b':[0,0,0,1]}) print(all_df) </code></pre> <pre><code> a b 0 0 0 1 0 0 2 0 0 3 1 1 </code></pre> <pre><code>all_df = all_df[(all_df.T != 0).any()] all_df </code></pre> <pre><code> a b 3 1 1 </code></pre> <h2>EDIT 1: After looking at your data, a solution is to convert all numerical columns to float and then do the operations. This problem arises from the way the initial data were saved into the .csv file.</h2> <pre><code>all_df = pd.read_csv('/Users/me/Downloads/Test11.csv') # do not select the 'activity' column df = all_df.loc[:, all_df.columns != 'activity'] # convert to float df = df.astype(float) # remove rows that are all 0s mask = (df != 0).any(axis=1) df = df[mask] # mask the activity column with the same rows recover_lines_of_activity_column = all_df['activity'][mask] # Final result final_df = pd.concat([recover_lines_of_activity_column, df], axis = 1) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/9rTCo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9rTCo.png" alt="enter image description here"></a></p>
python|python-3.x|pandas|csv
2
9,801
56,976,791
Using pypi pretrained models vs PyTorch
<p>I have two setups. One takes approx. 10 minutes to run; the other is still going after an hour:</p> <p>10 m:</p> <pre class="lang-py prettyprint-override"><code>import pretrainedmodels def resnext50_32x4d(pretrained=False): pretrained = 'imagenet' if pretrained else None model = pretrainedmodels.se_resnext50_32x4d(pretrained=pretrained) return nn.Sequential(*list(model.children())) learn = cnn_learner(data, resnext50_32x4d, pretrained=True, cut=-2, split_on=lambda m: (m[0][3], m[1]), metrics=[accuracy, error_rate]) </code></pre> <p>Not finishing:</p> <pre class="lang-py prettyprint-override"><code>import torchvision.models as models def get_model(pretrained=True, model_name = 'resnext50_32x4d', **kwargs ): arch = models.resnext50_32x4d(pretrained, **kwargs ) return arch learn = Learner(data, get_model(), metrics=[accuracy, error_rate]) </code></pre> <p>This is all copied and hacked from other people's code, so there are parts that I do not understand. But the most perplexing part is why one would be so much faster than the other. I would like to use the second option because it's easier for me to understand and I can just swap out the pretrained model to test different ones.</p>
<p>The two architectures are different. I assume you are using <a href="https://github.com/Cadene/pretrained-models.pytorch/blob/master/README.md" rel="nofollow noreferrer">pretrained-models.pytorch</a>.</p> <p>Please notice you are using <strong>SE</strong>-ResNeXt in your first example and plain ResNeXt in the second (the standard one from <code>torchvision</code>).</p> <p>The first version uses a different block architecture (Squeeze and Excitation); the research paper describing it is <a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.pdf" rel="nofollow noreferrer">here</a>.</p> <p>I'm not sure about the exact differences between the two architectures and implementations apart from the different building block used, but you could <code>print</code> both models and check for differences.</p> <p>Finally, <a href="https://towardsdatascience.com/squeeze-and-excitation-networks-9ef5e71eacd7" rel="nofollow noreferrer">here</a> is a nice article summarizing what <strong>Squeeze and Excitation</strong> is. Basically, you do <code>GlobalAveragePooling</code> on all channels (in PyTorch it would be <code>torch.nn.AdaptiveAvgPool2d(1)</code> followed by <code>flatten</code>), push the result through two linear layers (with a <code>ReLU</code> activation in between) finished by a <code>sigmoid</code>, in order to get a weight for each channel. Finally, you multiply the channels by those weights.</p> <p>Additionally, you are doing something strange with the modules by transforming them into <code>torch.nn.Sequential</code>. There may be some logic in the <code>forward</code> call of the pretrained network that you are removing by copying the modules; that may play a part as well.</p>
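<p>For reference, a minimal sketch of a Squeeze-and-Excitation block in PyTorch could look like the following (the channel count and the reduction ratio are assumptions for illustration; 16 is the ratio used in the paper):</p>
<pre><code>import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excitation: rescale each channel
</code></pre>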
python|pytorch|pre-trained-model|fast-ai
1
9,802
56,940,258
pandas iterate over rows and concat results automatically?
<p>I want to iterate over rows and concat all the resulting dataframes, preserving the original row information. I have a working example:</p> <p>MWE:</p> <pre><code>import pandas as pd df = pd.DataFrame({'a': list(range(3)), 'b': list(range(3))}) pd.concat(df.apply(lambda row: ( pd.DataFrame(pd.np.zeros((row.a + row.b + 1, 2)), columns=['c', 'd']).assign(**row) ), axis=1).values).reset_index(drop=True) c d a b 0 0.0 0.0 0 0 1 0.0 0.0 1 1 2 0.0 0.0 1 1 3 0.0 0.0 1 1 4 0.0 0.0 2 2 5 0.0 0.0 2 2 6 0.0 0.0 2 2 7 0.0 0.0 2 2 8 0.0 0.0 2 2 </code></pre> <p>but I feel this is hacky. I would have guessed there is a <em>direct</em> way to concat all the results obtained from an <code>apply</code> (like in R). The things I dislike:</p> <ul> <li>adding initial values with <code>**row</code></li> <li>using the underlying numpy array to use <code>pd.concat</code></li> <li><code>reset_index</code> because the final index comes from the new dataframe created in the loop instead of the original one.</li> </ul>
<p>I can't find the duplicate. But IIUC, you are trying to do a sort of cross join on the two dataframes:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'a': list(range(3)), 'b': list(range(3))}) df2 = pd.DataFrame([[1,2],[3,4]], columns=('c','d')) pd.concat((df2.loc[np.tile(df2.index, len(df))].reset_index(drop=True), df.loc[df.index.repeat(len(df2))].reset_index(drop=True)), axis=1, ignore_index=True) </code></pre> <p>Output:</p> <pre><code> 0 1 2 3 0 1 2 0 0 1 3 4 0 0 2 1 2 1 1 3 3 4 1 1 4 1 2 2 2 5 3 4 2 2 </code></pre> <p>Or similarly:</p> <pre><code>common_idx = pd.MultiIndex.from_product((df.index, df2.index)) out1 = df.reindex(common_idx.get_level_values(0)).set_index(common_idx) out2 = df2.reindex(common_idx.get_level_values(1)).set_index(common_idx) pd.concat((out2,out1),axis=1).reset_index(drop=True) </code></pre> <p>outputs:</p> <pre><code> c d a b 0 1 2 0 0 1 3 4 0 0 2 1 2 1 1 3 3 4 1 1 4 1 2 2 2 5 3 4 2 2 </code></pre>
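<p>As a side note, on pandas 1.2 or later the same cartesian product can be written more directly with a cross merge (not available in older versions):</p>
<pre><code>out = df.merge(df2, how='cross')  # every row of df paired with every row of df2
</code></pre>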
python|pandas|apply
0
9,803
46,074,452
Filtering Pandas dataframe rows
<p>I have a dataframe that has one column <strong>numbers</strong>. The column's data are strings of numbers separated by commas.</p> <pre><code>numbers ------- 1,3,4,5,17,30 5,6,18,37,41,42 1,2,5,14,19,20 1,5,13,20,29,31 1,9,10,11,14,17 2,9,13,25,30,35 </code></pre> <p>How to get all the strings that contain numbers <strong>1</strong> &amp; <strong>5</strong> only?</p> <p>The desired output:</p> <pre><code>numbers ------- 1,3,4,5,17,30 1,2,5,14,19,20 1,5,13,20,29,31 </code></pre>
<p>You can create <code>df1</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a>, compare with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a>, and use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>any</code></a> for both conditions. Last, filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>df1 = df['numbers'].str.split(',', expand=True).astype(int) df = df[df1.eq(1).any(1) &amp; df1.eq(5).any(1)] print (df) numbers 0 1,3,4,5,17,30 2 1,2,5,14,19,20 3 1,5,13,20,29,31 </code></pre> <p>Another solution uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>contains</code></a> for the conditions:</p> <pre><code>a = df['numbers'].str.contains(',1,|,1$|^1,') b = df['numbers'].str.contains(',5,|,5$|^5,') df = df[a &amp; b] print (df) numbers 0 1,3,4,5,17,30 2 1,2,5,14,19,20 3 1,5,13,20,29,31 </code></pre>
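<p>If the regex feels brittle, a set-based check is another option. A small sketch that splits each string and tests membership directly:</p>
<pre><code>required = {'1', '5'}
mask = df['numbers'].str.split(',').map(required.issubset)  # True if both 1 and 5 appear
df = df[mask]
</code></pre>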
python-3.x|pandas
3
9,804
23,122,079
lambda function in class header
<p>I am trying to create a nan value for integers. The design I am thinking about is the following: I need to create an <code>isnan</code> lambda function in the class definition header.</p> <pre><code>import numpy as np class Integer(object): type = int nan = -1 isnan = lambda val: val==-1 def __new__(cls, value): return cls.type(value) class Float(object): type = float isnan = lambda val: np.isnan(val) def __new__(cls, value): return cls.type(value) </code></pre> <p>but it returns an error</p> <pre><code>&gt;&gt; Integer.isnan(1) &gt;&gt; Traceback (most recent call last): &gt;&gt; File "&lt;stdin&gt;", line 1, in &lt;module&gt; &gt;&gt; TypeError: unbound method &lt;lambda&gt;() must be called with Integer instance as first argument (got int instance instead) </code></pre>
<p>The issue is that your <code>isnan</code> functions are being treated as instance methods by Python. Even though you're using them "unbound", Python 2 still does a type check to ensure that the first argument to a method is an instance of the class (e.g. <code>self</code>). In Python 3, unbound methods have been discarded, and your code would work just fine.</p> <p>You can work around this by passing the lambda function through <code>staticmethod</code>:</p> <pre><code>isnan = staticmethod(lambda val: val == -1) </code></pre> <p>Or you could use a regular function definition, with <code>staticmethod</code> as a decorator:</p> <pre><code>@staticmethod def isnan(val): return val == -1 </code></pre> <p>Note that if you made your classes inherit from their <code>type</code> value, you could call <code>isnan</code> as an actual instance method:</p> <pre><code>class Integer(int): # no __new__ needed def isnan(self): return self == -1 </code></pre> <p>This would let you call <code>Integer(5).isnan()</code>, rather than what you do in your current code.</p> <p>One final suggestion: Don't use <code>type</code> as a variable name, since it is already the name of the built-in <code>type</code> class. It's not as bad using it as a class attribute as it would be as a variable (where it would shadow the built-in), but it can still be confusing.</p>
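<p>With the <code>staticmethod</code> fix in place, the call from the question behaves as expected:</p>
<pre><code>print(Integer.isnan(1))   # False
print(Integer.isnan(-1))  # True
</code></pre>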
python|python-2.7|numpy
2
9,805
35,690,983
Optimize this loop with numpy
<p>Generating 5 million points <code>r[i]</code> recursively with:</p> <pre><code>import numpy as np n, a, b, c = 5000000, 0.0000002, 0.5, 0.4 eps = np.random.normal(0, 1, n) sigma = np.ones(n) * np.sqrt(a) r = np.zeros(n) for i in range(1,n): sigma[i] = np.sqrt(a + b * r[i-1] ** 2 + c * sigma[i-1] ** 2) r[i] = sigma[i] * eps[i] </code></pre> <p>takes approximately 17 seconds on my standard i5 laptop.</p> <p>I have used Cython quite often in the past and I know that using it here would probably give a speedup by a factor of 10 &lt; k &lt; 100.</p> <p>But before having to use Cython in such cases, I was wondering: <strong>is there a plain Numpy/Python method that I don't know of that would optimize this much?</strong></p>
<p>Simply changing it to <code>math.sqrt</code> instead of <code>np.sqrt</code> gives you about 40% speedup here.</p> <p>Since I'm quite a numba fanatic I tried the numba version versus your one (<code>initial</code>) and the math-one (<code>normal</code>)</p> <pre><code>import numpy as np import math import numba as nb n, a, b, c = 500000, 0.0000002, 0.5, 0.4 eps = np.random.normal(0, 1, n) sigma = np.ones(n) * np.sqrt(a) r = np.zeros(n) def initial(n, a, b, c, eps, sigma, r): for i in range(1,n): sigma[i] = np.sqrt(a + b * r[i-1] ** 2 + c * sigma[i-1] ** 2) r[i] = sigma[i] * eps[i] def normal(n, a, b, c, eps, sigma, r): for i in range(1,n): sigma[i] = math.sqrt(a + b * r[i-1] ** 2 + c * sigma[i-1] ** 2) r[i] = sigma[i] * eps[i] @nb.njit def function(n, a, b, c, eps, sigma, r): for i in range(1,n): sigma[i] = math.sqrt(a + b * r[i-1] ** 2 + c * sigma[i-1] ** 2) r[i] = sigma[i] * eps[i] </code></pre> <p>Then just to verify the results are the same:</p> <pre><code>sigma1 = sigma.copy() sigma2 = sigma.copy() sigma3 = sigma.copy() r1 = r.copy() r2 = r.copy() r3 = r.copy() initial(n, a, b, c, eps, sigma1, r1) normal(n, a, b, c, eps, sigma2, r2) function(n, a, b, c, eps, sigma3, r3) np.testing.assert_array_almost_equal(sigma1, sigma2) np.testing.assert_array_almost_equal(sigma1, sigma3) np.testing.assert_array_almost_equal(r1, r2) np.testing.assert_array_almost_equal(r1, r3) </code></pre> <p>Well what about speed (I used n=500000 to have some faster timeit results):</p> <pre><code>%timeit initial(n, a, b, c, eps, sigma1, r1) 1 loop, best of 3: 7.27 s per loop %timeit normal(n, a, b, c, eps, sigma2, r2) 1 loop, best of 3: 4.49 s per loop %timeit function(n, a, b, c, eps, sigma3, r3) 100 loops, best of 3: 17.7 ms per loop </code></pre> <p>I know you didn't want cython so numba is probably also out of the question but the speedup is amazing (410 times faster!)</p>
python|numpy|cython|numerical-methods
2
9,806
11,731,768
Installing numpy on Red Hat 6?
<p>I'm trying to install numpy on a Red Hat (RHEL6) 64-bit linux machine that has Python 2.7. I downloaded and untar'd numpy 1.6.2 from Sourceforge, and I did the following commands in the numpy-1.6.2 folder:</p> <pre><code>python ./setup.py build sudo python ./setup.py install #without sudo, this gives a permissions error. </code></pre> <p>Then, when I do <code>import numpy</code> on the Python prompt, I get <code>ImportError: No module named numpy</code>.</p> <p>I read somewhere that numpy 1.6.2 is for Python 3.x, so I also tried the above steps with numpy 1.5.1, and I got the same <code>ImportError</code>.</p> <p>I'm speculating that the solution lies in some environment variable gymnastics, but I'm not sure what files/directories Python needs to "see" that isn't in scope. Any suggestions for how to get numpy working?</p> <p>I also tried some precompiled binaries for RHEL, but they gave various errors when I did <code>sudo yum install [numpy precompiled binary url].rpm</code>.</p> <p>As an aside, my motivation for installing numpy is to use <a href="http://sourceforge.net/projects/pygnuplot/" rel="nofollow">PyGnuplot</a>. Also, I've installed numpy and PyGnuplot on other machines before, but it's been on Ubuntu and Mac OS.</p>
<p>RHEL6 ships numpy 1.4.1, see <a href="http://distrowatch.com/table.php?distribution=redhat&amp;pkglist=true&amp;version=rhel-6.7#pkglist" rel="nofollow">distrowatch</a>. If 1.4.1 is new enough for you, you can install it with:</p> <pre><code>$ yum install numpy </code></pre>
python|linux|numpy|installation|rhel6
1
9,807
28,482,943
How to use feature hasher to convert non-numerical discrete data so that it can be passed to SVM?
<p>I am trying to use the CRX dataset from the UCI Machine Learning repository. This particular dataset contains some features which are not continuous variables. Therefore I need to convert them into numerical values before they can be passed to an SVM.</p> <p>I initially looked into using the one-hot encoder, which takes integer values and converts them into matrices (e.g. if a feature has three possible values, 'red', 'blue' and 'green', this would be converted into three binary features: 1,0,0 for 'red', 0,1,0 for 'blue' and 0,0,1 for 'green'). This would be ideal for my needs, except for the fact that it can only deal with integer features.</p> <pre><code>def get_crx_data(debug=False): with open("/Volumes/LocalDataHD/jt306/crx.data", "rU") as infile: features_array = [] reader = csv.reader(infile,dialect=csv.excel_tab) for row in reader: features_array.append(str(row).translate(None,"[]'").split(",")) features_array = np.array(features_array) print features_array.shape print features_array[0] labels_array = features_array[:,15] features_array = features_array[:,:15] print features_array.shape print labels_array.shape print("FeatureHasher on frequency dicts") hasher = FeatureHasher(n_features=44) X = hasher.fit_transform(line for line in features_array) print X.shape get_crx_data() </code></pre> <p>This returns</p> <pre><code>Reading CRX data from disk Traceback (most recent call last): File"/Volumes/LocalDataHD/PycharmProjects/FeatureSelectionPython278/Crx2.py", line 38, in &lt;module&gt; get_crx_data() File "/Volumes/LocalDataHD/PycharmProjects/FeatureSelectionPython278/Crx2.py", line 32, in get_crx_data X = hasher.fit_transform(line for line in features_array) File "/Volumes/LocalDataHD/anaconda/lib/python2.7/site-packages/sklearn/base.py", line 426, in fit_transform return self.fit(X, **fit_params).transform(X) File "/Volumes/LocalDataHD/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/hashing.py", line 129, in transform _hashing.transform(raw_X, self.n_features, self.dtype) File "_hashing.pyx", line 44, in sklearn.feature_extraction._hashing.transform (sklearn/feature_extraction/_hashing.c:1649) File "/Volumes/LocalDataHD/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/hashing.py", line 125, in &lt;genexpr&gt; raw_X = (_iteritems(d) for d in raw_X) File "/Volumes/LocalDataHD/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/hashing.py", line 15, in _iteritems return d.iteritems() if hasattr(d, "iteritems") else d.items() AttributeError: 'numpy.ndarray' object has no attribute 'items' (690, 16) ['0' ' 30.83' ' 0' ' u' ' g' ' w' ' v' ' 1.25' ' 1' ' 1' ' 1' ' 0' ' g' ' 202' ' 0' ' +'] (690, 15) (690,) FeatureHasher on frequency dicts Process finished with exit code 1 </code></pre> <p>How can I use feature hashing (or an alternative method) to convert this data from classes (some of which are strings, others are discrete numerical values) into data which can be handled by an SVM? I have also looked into using one-hot encoding, but that only takes integers as input.</p>
<p>The issue is that the <code>FeatureHasher</code> object expects each row of input to have a particular structure -- or really, one of three different <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html" rel="noreferrer">possible structures</a>. The first possibility is a dictionary of <code>feature_name:value</code> pairs. The second is a list of <code>(feature_name, value)</code> tuples. And the third is a flat list of <code>feature_name</code>s. In the first two cases, the feature names are mapped to columns in the matrix, and the given values are stored at those columns for each row. In the last, the presence or absence of a feature in the list is implicitly understood as a <code>True</code> or <code>False</code> value. Here are some simple, concrete examples:</p> <pre><code>&gt;&gt;&gt; hasher = sklearn.feature_extraction.FeatureHasher(n_features=10, ... non_negative=True, ... input_type='dict') &gt;&gt;&gt; X_new = hasher.fit_transform([{'a':1, 'b':2}, {'a':0, 'c':5}]) &gt;&gt;&gt; X_new.toarray() array([[ 1., 2., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 5., 0., 0.]]) </code></pre> <p>This illustrates the default mode -- what the <code>FeatureHasher</code> will expect if you don't pass <code>input_type</code>, as in your original code. As you can see, the expected input is a list of dictionaries, one for each input sample or row of data. Each dictionary contains an arbitrary number of feature names, mapped to values for that row. </p> <p>The output, <code>X_new</code>, contains a sparse representation of the array; calling <code>toarray()</code> returns a new copy of the data as a vanilla <code>numpy</code> array.</p> <p>If you want to pass <code>(feature_name, value)</code> tuples instead, pass <code>input_type='pair'</code>. Then you can do this:</p> <pre><code>&gt;&gt;&gt; hasher = sklearn.feature_extraction.FeatureHasher(n_features=10, ... non_negative=True, ... input_type='pair') &gt;&gt;&gt; X_new = hasher.fit_transform([[('a', 1), ('b', 2)], [('a', 0), ('c', 5)]]) &gt;&gt;&gt; X_new.toarray() array([[ 1., 2., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 5., 0., 0.]]) </code></pre> <p>And finally, if you just have boolean values, you don't have to pass values explicitly at all -- the <code>FeatureHasher</code> will simply assume that if a feature name is present, then its value is <code>True</code> (represented here as the floating point value <code>1.0</code>). </p> <pre><code>&gt;&gt;&gt; hasher = sklearn.feature_extraction.FeatureHasher(n_features=10, ... non_negative=True, ... input_type='string') &gt;&gt;&gt; X_new = hasher.fit_transform([['a', 'b'], ['a', 'c']]) &gt;&gt;&gt; X_new.toarray() array([[ 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0., 0., 0., 1., 0., 0.]]) </code></pre> <p>Unfortunately, your data doesn't seem to consistently be in any one of these formats. However, it shouldn't be <em>too</em> hard to modify what you have to fit the <code>'dict'</code> or <code>'pair'</code> format. Let me know if you need help with that; in that case, please say more about the format of the data you're trying to convert. </p>
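<p>For the data in the question, one way to massage each row into the <code>'dict'</code> format is sketched below. The column names <code>f0</code>, <code>f1</code>, ... are made up for illustration; string values are fine here because the <code>FeatureHasher</code> hashes a string value together with its feature name:</p>
<pre><code># build one dict per row, keyed by an invented column name
rows = ({'f%d' % i: val.strip() for i, val in enumerate(row)}
        for row in features_array)
hasher = FeatureHasher(n_features=44, input_type='dict')
X = hasher.fit_transform(rows)
</code></pre>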
python|numpy|machine-learning|scikit-learn|feature-extraction
6
9,808
33,210,112
Does Python-VIPS support assignment to part of its image ?
<p>I've been using Python 3 and Numpy to handle an image processing task where I'm assembling small tiles into a large, complete image. </p> <p>I'd do this: </p> <pre><code>canvas = np.zeros((max_y + tilesize_y, max_x + tilesize_x, 3), dtype='uint8') </code></pre> <p>Where <code>max_x</code>, <code>max_y</code> are the maximum starting coordinates of individual image tiles. Then I'd paste images into this large canvas like this:</p> <pre><code>canvas[jj['YStart']: jj['YStart'] + tilesize_y, jj['XStart']: jj['XStart'] + tilesize_x] = temp_img </code></pre> <p>Where <code>jj</code> is an entry of the filelist that records where each tile belongs. I'm wondering if a similar operation can be achieved in VIPS with Python?</p>
<p>VIPS has no destructive operations: you can only build new images, you can't modify existing images. This restriction is why vips can do things like automatic parallelisation and operation caching. </p> <p>Behind the scenes it has some extra machinery to make this less inefficient than it sounds. You can solve your problem like this:</p> <pre><code>#!/usr/bin/python import sys import random from gi.repository import Vips # make vips leaktest itself .. this also reports a memory high-water mark # you'll get a small speedup if you comment this out Vips.leak_set(True) composite = Vips.Image.black(10000, 10000) for filename in sys.argv[1:]: tile = Vips.Image.new_from_file(filename, access = Vips.Access.SEQUENTIAL) x = random.randint(0, 10000) y = random.randint(0, 10000) composite = composite.insert(tile, x, y) composite.write_to_file("output.tif") </code></pre> <p>There's a searchable list of all the vips operators here:</p> <p><a href="http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/func-list.html" rel="nofollow">http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/func-list.html</a></p> <p>The docs for insert are here:</p> <p><a href="http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-conversion.html#vips-insert" rel="nofollow">http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-conversion.html#vips-insert</a></p> <p>Although you seem to be making a new (huge) image for every iteration, in fact behind your back vips will share the images and only create the bits it needs. Additionally, setting the "sequential" hint on open means vips can stream the sub images in as it writes the final tiff.</p> <p>Run like this:</p> <pre><code>$ time ./insert.py ~/pics/k*.jpg memory: high-water mark 53.81 MB real 0m1.913s user 0m0.939s sys 0m0.266s $ ls ~/pics/k*.jpg | wc 8 8 278 </code></pre> <p>That's pasting in 8 large jpg images. The reported memory use is for pixel buffers, it doesn't include all memory. This script will break if you try pasting in a mix of RGB and RGBA images, you'll need to add some stuff to handle alpha channels. </p> <p>(in fact there is a destructive paste operation: </p> <p><a href="http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-draw.html#vips-draw-image" rel="nofollow">http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-draw.html#vips-draw-image</a></p> <p>it's there for paintbox-style programs that really do need to modify images, it's not really suitable for general use)</p>
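<p>Mapping this onto the tile list from the question would look roughly like the sketch below (the <code>'Filename'</code> key is an assumption; <code>'XStart'</code>/<code>'YStart'</code> are the fields from your loop):</p>
<pre><code>composite = Vips.Image.black(max_x + tilesize_x, max_y + tilesize_y, bands=3)
for jj in filelist:
    tile = Vips.Image.new_from_file(jj['Filename'], access=Vips.Access.SEQUENTIAL)
    composite = composite.insert(tile, jj['XStart'], jj['YStart'])
composite.write_to_file('outfile.tif')
</code></pre>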
python|numpy|image-processing|vips
1
9,809
33,369,772
numpy: distinguish/convert between different types of nan?
<p>I've been seeing a lot of errors like:</p> <pre><code>FloatingPointError: invalid value encountered in multiply </code></pre> <p>on some data I'm loading from disk (using astropy.io.fits). It appears to be related to <a href="https://github.com/numpy/numpy/issues/3190" rel="nofollow">this issue</a>, i.e., I have a 'signalling nan' instead of a 'quiet nan'.</p> <p>This is problematic, because I can't simply 'clean' the data. If I try to convert the array to an array with the same dtype, e.g.:</p> <pre><code>arr = arr.astype(arr.dtype) </code></pre> <p>the nan stays the same, i.e. <code>np.isnan</code> generates a warning, though if I change the dtype</p> <pre><code># arr.dtype is float32 originally arr = arr.astype(np.float64) </code></pre> <p>the warning goes away for multiplication/np.isnan/etc. I don't want to use this workaround since it necessitates changing the size of the array.</p> <p>So, how can I distinguish between those without reverting to the string representation of the nan? Is there a (cheap!) way to convert all 'signalling' nans to quiet ones?</p>
<p>This will replace all the nans in <code>arr</code> with the default quiet nan:</p> <pre><code>with np.errstate(invalid='ignore'): arr[np.isnan(arr)] = np.nan </code></pre> <hr> <p>For what it's worth, here's a quick <code>issnan</code> function that is True for signaling nans only:</p> <pre><code>import numpy as np def issnan(a): """ Returns True where elements of `a` are signaling nans. `a` must be a numpy array with data type float32 or float64. This function assumes IEEE 754 floating point representation, and that the first (left-most) bit in the mantissa is the "quiet" bit. That is, a nan value with this bit set to 0 is a signaling nan. """ if a.dtype == np.float64: v = a.view(np.uint64) # t1 is true where all the exponent bits are 1 and the # quiet bit is 0. t1 = (v &amp; 0x7FF8000000000000) == 0x7FF0000000000000 # t2 is non-zero where at least one bit (not including # the quiet bit) in the mantissa is 1. (If the mantissa # is all zeros and the exponent is all ones, the value is # infinity.) t2 = v &amp; 0x0007FFFFFFFFFFFF return np.logical_and(t1, t2) elif a.dtype == np.float32: v = a.view(np.uint32) t1 = (v &amp; 0x7FC00000) == 0x7F800000 t2 = v &amp; 0x003FFFFF return np.logical_and(t1, t2) else: raise ValueError('a must have dtype float32 or float64') </code></pre> <p>For example,</p> <pre><code>In [151]: z Out[151]: array([ nan, nan, inf, 1.], dtype=float32) In [152]: [hex(r) for r in z.view(np.uint32)] Out[152]: ['0x7f800001L', '0x7fc00000L', '0x7f800000L', '0x3f800000L'] In [153]: issnan(z) Out[153]: array([ True, False, False, False], dtype=bool) </code></pre>
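<p>To try it out, a signaling nan can be built directly from its bit pattern (a quick sketch; <code>0x7FF0000000000001</code> has all exponent bits set, the quiet bit clear, and a nonzero mantissa):</p>
<pre><code>snan = np.array([0x7FF0000000000001], dtype=np.uint64).view(np.float64)  # signaling
qnan = np.array([np.nan])  # the default quiet nan
print(issnan(snan))  # [ True]
print(issnan(qnan))  # [False]
</code></pre>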
arrays|numpy
4
9,810
66,448,110
Slices across Contourf plots at different angles to get 2D line plots
<p>I am trying to generate 2D line plots at different angles or slices of a matplotlib contourf plot.</p> <p>As an example, take the matplotlib contourf demo below:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt origin = 'lower' delta = 0.025 x = y = np.arange(-3.0, 3.01, delta) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 nr, nc = Z.shape fig1, ax2 = plt.subplots(constrained_layout=True) CS = ax2.contourf(X, Y, Z, 10, cmap=plt.cm.viridis, origin=origin,extend='both') ax2.set_title('Random Plot') ax2.set_xlabel('X Axis') ax2.set_ylabel('Y Axis') cbar = fig1.colorbar(CS) </code></pre> <p>Ideally, I want to generate lines at different angles (30, 45, 60 degrees) across the map (starting at any arbitrary point and running to the end of the existing array) and then plot the Z variations across that line.</p> <p>I think a simpler problem in principle would be lines from (X2,Y2) to (X1,Y1), plotting the Z variation for the given contour (which is already interpolated data).</p> <p>As an example, the original problem would be a line from (-3,-3) at a 45 deg angle across. An analogous problem would be, let's say, a line from (-3,-3) to (3,3), plotting the Z variation at different locations on that line.</p> <p>The source contour plot generated is: <a href="https://i.stack.imgur.com/CvyIQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CvyIQ.png" alt="Contourf Plot" /></a></p>
<p>Here is a rather inefficient approach, but it does the job. It recalculates the function on a new grid of which it only needs the diagonal.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import RectBivariateSpline delta = 0.025 x = y = np.arange(-3.0, 3.01, delta) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X ** 2 - Y ** 2) Z2 = np.exp(-(X - 1) ** 2 - (Y - 1) ** 2) Z = (Z1 - Z2) * 2 nr, nc = Z.shape x1, y1 = -3, -2 x2, y2 = 3, 2 fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15, 5)) CS = ax1.contourf(X, Y, Z, 10, cmap=plt.cm.viridis, origin='lower', extend='both') ax1.plot([x1, x2], [y1, y2], color='k', ls='--', lw=3, alpha=0.6) ax1.set_xlabel('X Axis') ax1.set_ylabel('Y Axis') cbar = fig.colorbar(CS, ax=ax1) spline_func = RectBivariateSpline(x, y, Z) xline = np.linspace(x1, x2, 200) yline = np.linspace(y1, y2, 200) zline = spline_func(xline, yline) ax2.plot(xline, zline.diagonal()) ax2.set_xlabel('X Axis') ax2.set_ylabel('Z Axis') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/xpFMV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xpFMV.png" alt="example plot" /></a></p>
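<p>As a side note, <code>RectBivariateSpline</code> also has an <code>ev</code> method that evaluates the spline at arbitrary (x, y) pairs, which avoids building the full 200x200 grid just to keep its diagonal. A sketch, plotting against distance along the slice instead of x:</p>
<pre><code>zline = spline_func.ev(xline, yline)     # pointwise values along the cut
dist = np.hypot(xline - x1, yline - y1)  # distance from the start of the line
ax2.plot(dist, zline)
</code></pre>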
python|numpy|matplotlib|contourf
1
9,811
66,458,784
ValueError: Error when checking input: expected input_13 to have 3 dimensions, but got array with shape (50000, 32, 32, 3)
<p>I'm having a dimension error when I'm training a variational autoencoder and I can't figure out what I'm doing wrong. I had a <a href="https://stackoverflow.com/questions/66454445/valueerror-error-when-checking-input-expected-dense-85-input-to-have-4-dimensi">dimension error in a neural network</a> that just used <code>Dense</code> layers, but I solved it by adding the <code>Flatten</code> layer. This error can't be solved that way.</p> <p>The dataset I'm using is CIFAR10.</p> <p>Here's my code, along with its outputs:</p> <pre><code>(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() class Sampling(keras.layers.Layer): def call(self, inputs): mean, log_var = inputs return K.random_normal(tf.shape(log_var)) * K.exp(log_var / 2) + mean tf.random.set_seed(42) np.random.seed(42) codings_size = 10 inputs = keras.layers.Input(shape=(32, 32)) z = keras.layers.Flatten()(inputs) z = keras.layers.Dense(150, activation=&quot;selu&quot;)(z) z = keras.layers.Dense(100, activation=&quot;selu&quot;)(z) codings_mean = keras.layers.Dense(codings_size)(z) codings_log_var = keras.layers.Dense(codings_size)(z) codings = Sampling()([codings_mean, codings_log_var]) variational_encoder = keras.models.Model( inputs=[inputs], outputs=[codings_mean, codings_log_var, codings]) decoder_inputs = keras.layers.Input(shape=[codings_size]) x = keras.layers.Dense(100, activation=&quot;selu&quot;)(decoder_inputs) x = keras.layers.Dense(150, activation=&quot;selu&quot;)(x) x = keras.layers.Dense(28 * 28, activation=&quot;sigmoid&quot;)(x) outputs = keras.layers.Reshape([28, 28])(x) variational_decoder = keras.models.Model(inputs=[decoder_inputs], outputs=[outputs]) _, _, codings = variational_encoder(inputs) reconstructions = variational_decoder(codings) variational_ae = keras.models.Model(inputs=[inputs], outputs=[reconstructions]) variational_ae.summary() latent_loss = -0.5 * K.sum( 1 + codings_log_var - K.exp(codings_log_var) - K.square(codings_mean), axis=-1) variational_ae.add_loss(K.mean(latent_loss) / 784.) variational_ae.compile(loss=&quot;binary_crossentropy&quot;, optimizer=&quot;rmsprop&quot;, metrics=[rounded_accuracy]) history = variational_ae.fit(x_train, x_train, epochs=25, batch_size=128, validation_data=(x_test, x_test)) </code></pre> <p>Model Summary:</p> <pre><code>Model: &quot;model_20&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_13 (InputLayer) [(None, 32, 32)] 0 _________________________________________________________________ model_18 (Model) [(None, 10), (None, 10), 170870 _________________________________________________________________ model_19 (Model) (None, 28, 28) 134634 ================================================================= Total params: 305,504 Trainable params: 305,504 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p>Traceback:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-163-310c1702abb5&gt; in &lt;module&gt; 33 variational_ae.compile(loss=&quot;binary_crossentropy&quot;, optimizer=&quot;rmsprop&quot;, metrics=[rounded_accuracy]) 34 history = variational_ae.fit(x_train, x_train, epochs=25, batch_size=128, ---&gt; 35 validation_data=(x_test, x_test)) ... ~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix) 571 ': expected ' + names[i] + ' to have ' + 572 str(len(shape)) + ' dimensions, but got array ' --&gt; 573 'with shape ' + str(data_shape)) 574 if not check_batch_axis: 575 data_shape = data_shape[1:] ValueError: Error when checking input: expected input_13 to have 3 dimensions, but got array with shape (50000, 32, 32, 3) </code></pre> <p>I tried putting <code>shape=(32, 32, 3)</code>, but then the same call fails one step later with:</p> <pre><code>... ~/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_loss_and_target_compatibility(targets, loss_fns, output_shapes) 808 raise ValueError('A target array with shape ' + str(y.shape) + 809 ' was passed for an output of shape ' + str(shape) + --&gt; 810 ' while using as loss `' + loss_name + '`. ' 811 'This loss expects targets to have the same shape ' 812 'as the output.') ValueError: A target array with shape (50000, 32, 32, 3) was passed for an output of shape (None, 28, 28) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output. </code></pre> <p>I also tried removing the <code>Input</code> layer (so that the first layer is the <code>Flatten</code> layer), but I still have to provide the input shape, or else I get an error complaining that 'Flatten' object has no attribute 'shape'. When I do provide the input shape (<code>input_shape=(32, 32)</code>), I get the same error.</p> <p>Can someone tell me what is going wrong here and how do I fix it?</p>
<p>There are at least two problems in your code:</p> <ul> <li>Your input data does not match the input of your network: <code>(32,32,3)</code> vs <code>(32,32)</code>. One possible fix is to load your images in grayscale to match your network input, or to make your network accept images with 3 channels.</li> <li>Your ground truth (or label) does not match the output of your network: <code>(32,32)</code> vs <code>(28,28)</code>. You need to redesign your decoder part to make it output a matrix with the same shape as your input (in your example, a <code>(32,32)</code> matrix).</li> </ul> <hr /> <p>To convert your array to grayscale, you can use <a href="https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_grayscale" rel="nofollow noreferrer"><code>tf.image.rgb_to_grayscale</code></a>, and use <a href="https://www.tensorflow.org/api_docs/python/tf/squeeze" rel="nofollow noreferrer"><code>tf.squeeze</code></a> to get rid of the last dimension:</p> <pre><code>x_train_grayscale = tf.squeeze(tf.image.rgb_to_grayscale(x_train), axis=-1) </code></pre>
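<p>A minimal sketch of the second fix, assuming the grayscale route is taken, so only the decoder tail changes to match the 32x32 input:</p>
<pre><code>x = keras.layers.Dense(100, activation='selu')(decoder_inputs)
x = keras.layers.Dense(150, activation='selu')(x)
x = keras.layers.Dense(32 * 32, activation='sigmoid')(x)  # was 28 * 28
outputs = keras.layers.Reshape([32, 32])(x)               # was [28, 28]
</code></pre>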
python|tensorflow|keras|autoencoder
1
9,812
66,404,407
Fastest way to append nonzero numpy array elements to list
<p>I want to add all nonzero elements from a <code>numpy</code> array <code>arr</code> to a list <code>out_list</code>. Previous research suggests that for numpy arrays, using <code>np.nonzero</code> is most efficient. (My own benchmark below actually suggests it can be slightly improved using <code>np.delete</code>).</p> <p>However, in my case I want my output to be a list, because I am combining many arrays for which I don't know the number of nonzero elements (so I can't effectively preallocate a numpy array for them). Hence, I was wondering whether there are some synergies that can be exploited to speed up the process. While my naive list comprehension approach is much slower than the pure numpy approach, I got some promising results combining list comprehension with <code>numba</code>.</p> <p>Here's what I found so far:</p> <pre><code>import numpy as np from numba import njit n = 60_000 # size of array nz = 0.3 # fraction of zero elements arr = (np.random.random_sample(n) - nz).clip(min=0) # method 1 def add_to_list1(arr, out): out.extend(list(arr[np.nonzero(arr)])) # method 2 def add_to_list2(arr, out): out.extend(list(np.delete(arr, arr == 0))) # method 3 def add_to_list3(arr, out): out += [x for x in arr if x != 0] # method 4 (not sure how to get numba to accept an empty list as argument) @njit def add_to_list4(arr): return [x for x in arr if x != 0] out_list = [] %timeit add_to_list1(arr, out_list) out_list = [] %timeit add_to_list2(arr, out_list) out_list = [] %timeit add_to_list3(arr, out_list) _ = add_to_list4(arr) # call once to compile out_list = [] %timeit out_list.extend(add_to_list4(arr)) </code></pre> <p>Yielding the following results:</p> <pre><code>2.51 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 2.19 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 15.6 ms ± 183 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 1.63 ms ± 158 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) </code></pre> <p>Not surprisingly, <code>numba</code> outperforms all other methods. Among the rest, method 2 (using <code>np.delete</code>) is the best. Am I missing any obvious alternative that exploits the fact that I am converting to a list afterwards? Can you think of anything to further speed up the process?</p> <h2>Edit 1:</h2> <p>Performance of <code>.tolist()</code>:</p> <pre><code># method 5 def add_to_list5(arr, out): out += arr[arr != 0].tolist() # method 6 def add_to_list6(arr, out): out += np.delete(arr, arr == 0).tolist() # method 7 def add_to_list7(arr, out): out += arr[arr.astype(bool)].tolist() </code></pre> <p>Timings are on par with <code>numba</code>:</p> <pre><code>1.62 ms ± 118 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 1.65 ms ± 104 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 1.78 ms ± 119 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) </code></pre> <h2>Edit 2:</h2> <p>Here's some benchmarking using Mad Physicist's suggestion to use <code>np.concatenate</code> to construct a <code>numpy</code> array instead.</p> <pre><code>import time # construct numpy array using np.concatenate out_list = [] t = time.perf_counter() for i in range(100): out_list.append(arr[arr != 0]) result = np.concatenate(out_list) print(f&quot;Time elapsed: {time.perf_counter() - t:.4f}s&quot;) # compare with best list-based method out_list = [] t = time.perf_counter() for i in range(100): out_list += arr[arr != 0].tolist() print(f&quot;Time elapsed: {time.perf_counter() - t:.4f}s&quot;) </code></pre> <p>Concatenating <code>numpy</code> arrays indeed yields another significant speed-up, although it is not directly comparable since the output is a <code>numpy</code> array instead of a list. So what is best will depend on the precise use.</p> <pre><code>Time elapsed: 0.0400s Time elapsed: 0.1430s </code></pre> <h2>TLDR;</h2> <p>1/ using <code>arr[arr != 0]</code> is the fastest of all the indexing options</p> <p>2/ using <code>.tolist()</code> instead of <code>list(.)</code> speeds things up by a factor of 1.3 - 1.5</p> <p>3/ with the gains of 1/ and 2/ combined, the speed is on par with <code>numba</code></p> <p>4/ if having a <code>numpy</code> array instead of a <code>list</code> is acceptable, then using <code>np.concatenate</code> yields another gain in speed by a factor of ~3.5 compared to the best alternative</p>
<p>I submit that the method of choice, if you are indeed looking for a <code>list</code> output, is:</p> <pre class="lang-py prettyprint-override"><code>def f(arr, out_list): out_list += arr[arr != 0].tolist() </code></pre> <p>It seems to beat all the other methods mentioned so far in the OP's question or in other responses (at the time of this writing).</p> <p>If, however, you are looking for a result as a <code>numpy</code> array, then following @MadPhysicist's version (slightly modified to use <code>arr[arr != 0]</code> instead of using <code>np.nonzero()</code>) is almost 6x faster, see at the end of this post.</p> <p>Side note: I would <em>avoid</em> using <code>%timeit out_list.extend(some_list)</code>: it keeps adding to <code>out_list</code> during the many loops of <code>timeit</code>. Example:</p> <pre class="lang-py prettyprint-override"><code>out_list = [] %timeit out_list.extend([1,2,3]) </code></pre> <p>and now:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; len(out_list) 243333333 # yikes </code></pre> <p><strong>Timings</strong></p> <p>On 60K items on my machine, I see:</p> <pre class="lang-py prettyprint-override"><code>out_list = [] a = %timeit -o out_list + arr[arr != 0].tolist() b = %timeit -o out_list + arr[np.nonzero(arr)].tolist() c = %timeit -o out_list + list(arr[np.nonzero(arr)]) </code></pre> <p>Yields:</p> <pre class="lang-none prettyprint-override"><code>1.23 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 1.53 ms ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 4.29 ms ± 3.02 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p>And:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; c.average / a.average 3.476 &gt;&gt;&gt; b.average / a.average 1.244 </code></pre> <p><strong>For a <code>numpy</code> array result instead</strong></p> <p>Following @MadPhysicist, you can get some extra boost by <em>not</em> turning the arrays into lists, but using <code>np.concatenate()</code> instead:</p> <pre class="lang-py prettyprint-override"><code>def all_nonzero(arr_iter): &quot;&quot;&quot;return non zero elements of all arrays as a np.array&quot;&quot;&quot; return np.concatenate([a[a != 0] for a in arr_iter]) def all_nonzero_list(arr_iter): &quot;&quot;&quot;return non zero elements of all arrays as a list&quot;&quot;&quot; out_list = [] for a in arr_iter: out_list += a[a != 0].tolist() return out_list </code></pre> <pre class="lang-py prettyprint-override"><code>from itertools import repeat ta = %timeit -o all_nonzero(repeat(arr, 100)) tl = %timeit -o all_nonzero_list(repeat(arr, 100)) </code></pre> <p>Yields:</p> <pre class="lang-none prettyprint-override"><code>39.7 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) 227 ms ± 680 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre> <p>and</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; tl.average / ta.average 5.75 </code></pre>
python|arrays|list|numpy
3
9,813
66,680,238
How to install numpy on linux subsystem for windows?
<p>I'm trying to install NumPy on the Linux subsystem for Windows, but when I try to install pip with <code>sudo apt install python-pip</code> (so that I can use <code>pip install numpy</code>), it gives the error <code>E: Unable to locate package python-pip</code>.</p> <p>Any suggestions?</p> <p>Thanks</p> <p>Edit: When I run <code>pip install numpy</code> it gives the error: <code>Command 'pip' not found, but there are 18 similar ones.</code> My recollection is also that Python did not come installed with Ubuntu, but I could be remembering incorrectly.</p>
<p>Try this <code>sudo apt install python3-pip</code></p>
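<p>Once pip is installed, numpy goes on the usual way (updating the package index first helps if the package is not found):</p>
<pre><code>sudo apt update
sudo apt install python3-pip
pip3 install numpy
</code></pre>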
windows|numpy|ubuntu
0
9,814
16,375,781
Python simple nested for loops
<p>I am trying a simple nested for loop in Python to scan a thresholded image to detect the white pixels and store their locations. The problem is that although the array it is reading from is only 160*120 (19200 pixels), it still takes about 6 s to execute. My code is as follows, and any help or guidance would be greatly appreciated:</p> <pre><code>from PIL import Image import numpy as np im = Image.open('PYGAMEPIC') r, g, b = np.array(im).T x = np.zeros_like(b) height = len(x[0]) width = len(x) x[r &gt; 120] = 255 x[g &gt; 100] = 0 x[b &gt; 100] = 0 row_array = np.zeros(shape = (19200,1)) col_array = np.zeros(shape = (19200,1)) z = 0 for i in range (0,width-1): for j in range (0,height-1): if x[i][j] == 255: z = z+1 row_array[z] = i col_array[z] = j </code></pre>
<p>First, it shouldn't take 6 seconds. Trying your code on a 160x120 image takes ~0.2 s for me.</p> <p>That said, for good <code>numpy</code> performance, you generally want to avoid loops. Sometimes it's simpler to vectorize along all except the smallest axis and loop along that, but when possible you should try to do everything at once. This usually makes things both faster (pushing the loops down to C) and easier.</p> <p>Your for loop itself seems a little strange to me-- you seem to have an off-by-one error both in terms of where you're starting storing the results (your first value is placed in <code>z=1</code>, not <code>z=0</code>) and in terms of how far you're looking (<code>range(0, x-1)</code> doesn't include <code>x-1</code>, so you're missing the last row/column-- probably you want <code>range(x)</code>.)</p> <p>If all you want is the indices where <code>r &gt; 120</code> but neither <code>g &gt; 100</code> nor <code>b &gt; 100</code>, there are much simpler approaches. We can create boolean arrays. For example, first we can make some dummy data:</p> <pre><code>&gt;&gt;&gt; r = np.random.randint(0, 255, size=(8,8)) &gt;&gt;&gt; g = np.random.randint(0, 255, size=(8,8)) &gt;&gt;&gt; b = np.random.randint(0, 255, size=(8,8)) </code></pre> <p>Then we can find the places where our condition is met:</p> <pre><code>&gt;&gt;&gt; (r &gt; 120) &amp; ~(g &gt; 100) &amp; ~(b &gt; 100) array([[False, True, False, False, False, False, False, False], [False, False, True, False, False, False, False, False], [False, True, False, False, False, False, False, False], [False, False, False, True, False, True, False, False], [False, False, False, False, False, False, False, False], [False, True, False, False, False, False, False, False], [False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False]], dtype=bool) </code></pre> <p>Then we can use <code>np.where</code> to get the coordinates:</p> <pre><code>&gt;&gt;&gt; r_idx, c_idx = np.where((r &gt; 120) &amp; ~(g &gt; 100) &amp; ~(b &gt; 100)) &gt;&gt;&gt; r_idx array([0, 1, 2, 3, 3, 5]) &gt;&gt;&gt; c_idx array([1, 2, 1, 3, 5, 1]) </code></pre> <p>And we can sanity-check these by indexing back into <code>r</code>, <code>g</code>, and <code>b</code>:</p> <pre><code>&gt;&gt;&gt; r[r_idx, c_idx] array([166, 175, 155, 150, 241, 222]) &gt;&gt;&gt; g[r_idx, c_idx] array([ 6, 29, 19, 62, 85, 31]) &gt;&gt;&gt; b[r_idx, c_idx] array([67, 97, 30, 4, 50, 71]) </code></pre>
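<p>Applied to the thresholded array from the question, the whole double loop collapses to a single call (note it returns only the matching coordinates, not zero-padded arrays):</p>
<pre><code>row_array, col_array = np.where(x == 255)  # all white-pixel indices at once
</code></pre>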
python|for-loop|numpy|pygame|nested-loops
2
9,815
57,401,131
I have obtained two more dataframes from my original one, how can I merge the columns that I need into a final one?
<p>I have a table with 4 columns. From this data I obtained another 2 tables with some rolling averages from the original table. Now I want to combine these 3 into a final table, but the indexes are not in order anymore and I can't do it. I just started to learn Python, I have zero experience, and I would really need all the help I can get.</p> <p>DF</p> <pre><code>+----+------------+-----------+------+------+ | | A | B | C | D | +----+------------+-----------+------+------+ | 1 | Home Team | Away Team | Htgs | Atgs | | 2 | dalboset | sopot | 1 | 2 | | 3 | calnic | resita | 1 | 3 | | 4 | sopot | dalboset | 2 | 2 | | 5 | resita | sopot | 4 | 1 | | 6 | sopot | dalboset | 2 | 1 | | 7 | caransebes | dalboset | 1 | 2 | | 8 | calnic | resita | 1 | 3 | | 9 | dalboset | sopot | 2 | 2 | | 10 | calnic | resita | 4 | 1 | | 11 | sopot | dalboset | 2 | 1 | | 12 | resita | sopot | 1 | 2 | | 13 | sopot | dalboset | 1 | 3 | | 14 | caransebes | dalboset | 2 | 2 | | 15 | calnic | resita | 4 | 1 | | 16 | dalboset | sopot | 2 | 1 | | 17 | calnic | resita | 1 | 2 | | 18 | sopot | dalboset | 4 | 1 | | 19 | resita | sopot | 2 | 1 | | 20 | sopot | dalboset | 1 | 2 | | 21 | caransebes | dalboset | 1 | 3 | | 22 | calnic | resita | 2 | 2 | +----+------------+-----------+------+------+ </code></pre> <p>CODE</p> <pre><code>df1 = df.groupby('Home Team')['Htgs', 'Atgs'].rolling(window=4, min_periods=3).mean() df1 = df1.rename(columns={'Htgs': 'Htgs/3', 'Atgs': 'Htgc/3'}) df1 df2 = df.groupby('Away Team')['Htgs', 'Atgs'].rolling(window=4, min_periods=3).mean() df2 = df2.rename(columns={'Htgs': 'Atgc/3', 'Atgs': 'Atgs/3'}) df2 </code></pre> <p>Now I need a solution to see the columns with the rolling averages next to the Home Team, Away Team, Htgs, Atgs columns from the original table.</p>
<p>Done! I created a new column directly in the data frame like this:</p> <pre><code>df = pd.read_csv('Fd.csv')

df['Htgs/3'] = df.groupby('Home Team')['Htgs'].rolling(window=4, min_periods=3).mean().reset_index(0, drop=True)
</code></pre> <p><code>Htgs/3</code> will be the new column with the rolling average of <code>Htgs</code> per home team, and for the rest I will do the same as in this part.</p>
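<p>For completeness, the remaining rolling-average columns from df1/df2 would follow the same pattern (a sketch, assuming the column naming from the question):</p> <pre><code>df['Htgc/3'] = df.groupby('Home Team')['Atgs'].rolling(window=4, min_periods=3).mean().reset_index(0, drop=True)
df['Atgc/3'] = df.groupby('Away Team')['Htgs'].rolling(window=4, min_periods=3).mean().reset_index(0, drop=True)
df['Atgs/3'] = df.groupby('Away Team')['Atgs'].rolling(window=4, min_periods=3).mean().reset_index(0, drop=True)
</code></pre>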
python|pandas|dataframe|group-by|rolling-computation
0
9,816
57,518,057
How can I convert an image from pixels to one-hot encodings?
<p>I have a PNG image I am loading within Tensorflow using:</p> <pre><code>image = tf.io.decode_png(tf.io.read_file(path), channels=3) </code></pre> <p>The image contains pixels that match a lookup like this:</p> <pre><code>image_colors = [ (0, 0, 0), # black (0.5, 0.5, 0.5), # grey (1, 0.5, 1), # pink ] </code></pre> <p>How can I convert it so that the output has the pixels mapped into one-hot encodings where the hot component would be the matching color?</p>
<p>Let me assume for convenience that all values in <code>image_colors</code> are in <code>[0, 255]</code>:</p> <pre><code>image_colors = [ (0, 0, 0), # black (127, 127, 127), # grey (255, 127, 255), # pink ] </code></pre> <p>My approach maps pixels into one-hot values as follows:</p> <pre><code># Create a "color reference" tensor from image_colors color_reference = tf.cast(tf.constant(image_colors), dtype=tf.uint8) # Load the image and obtain tensor with one-hot values image = tf.io.decode_png(tf.io.read_file(path), channels=3) comp = tf.equal(image[..., None, :], color_reference) one_hot = tf.cast(tf.reduce_all(comp, axis=-1), dtype=tf.float32) </code></pre> <p>Note that you can easily add new colors to <code>image_colors</code> without changing the TF implementation. Also, this assumes that all pixels in the <code>image</code> are in <code>image_colors</code>. If that is not the case, one could define a range for each color and then use other <code>tf.math</code> operations (e.g. <code>tf.greater</code> and <code>tf.less</code>) instead of <code>tf.equal</code>.</p>
python|tensorflow
5
9,817
57,583,783
Scraping works fine until appended to list
<p>I am a beginner trying to scrape bitcoin price history, everything works fine until I try to append it to a list, as nothing ends up being appended to the list.</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd from datetime import datetime url = 'https://coinmarketcap.com/currencies/bitcoin/historical-data/?start=20130428&amp;end=20190821' page = requests.get(url).content soup = BeautifulSoup(page, 'html.parser') priceDiv = soup.find('div', attrs={'class':'table-responsive'}) rows = priceDiv.find_all('tr') data = [] i = 0 for row in rows: temp = [] tds = row.findChildren() for td in tds: temp.append(td.text) if i &gt; 0: temp[0] = temp[0].replace(',', '') temp[6] = temp[6].replace(',', '') if temp[5] == '-': temp[5] = 0 else: temp[5] = temp[5].replace(',', '') data.append({'date': datetime.strptime(temp[0], '%b %d %Y'), 'open': float(temp[1]), 'high': float(temp[2]), 'low': float(temp[3]), 'close': float(temp[4]), 'volume': float(temp[5]), 'market_cap': float(temp[6])}) i += 1 df = pd.DataFrame(data) </code></pre> <p>If I try to print df or data it is just empty.</p>
<p>As noted above, you need to increment <code>i</code> outside the check for <code>i &gt; 0</code>.</p> <p>Secondly, have you considered using pandas' <code>.read_html()</code>? That will do the hard work for you.</p> <p><strong>Code:</strong></p> <pre><code>import pandas as pd

url = 'https://coinmarketcap.com/currencies/bitcoin/historical-data/?start=20130428&amp;end=20190821'

dfs = pd.read_html(url)
df = dfs[0]
</code></pre> <p><strong>Output:</strong></p> <pre><code>print (df)
              Date     Open*  ...       Volume    Market Cap
0     Aug 20, 2019  10916.35  ...  15053082175  192530283565
1     Aug 19, 2019  10350.28  ...  16038264603  195243306008
2     Aug 18, 2019  10233.01  ...  12999813869  185022920955
3     Aug 17, 2019  10358.72  ...  13778035685  182966857173
4     Aug 16, 2019  10319.42  ...  20228207096  185500055339
5     Aug 15, 2019  10038.42  ...  22899115082  184357666577
6     Aug 14, 2019  10889.49  ...  19990838300  179692803424
7     Aug 13, 2019  11385.05  ...  16681503537  194762696644
8     Aug 12, 2019  11528.19  ...  13647198229  203441494985
9     Aug 11, 2019  11349.74  ...  15774371518  205941632235
10    Aug 10, 2019  11861.56  ...  18125355447  202890020455
11    Aug 09, 2019  11953.47  ...  18339989960  211961319133
12    Aug 08, 2019  11954.04  ...  19481591730  213788089212
13    Aug 07, 2019  11476.19  ...  22194988641  213330426789
14    Aug 06, 2019  11811.55  ...  23635107660  205023347814
15    Aug 05, 2019  10960.74  ...  23875988832  210848822060
16    Aug 04, 2019  10821.63  ...  16530894787  195907875403
17    Aug 03, 2019  10519.28  ...  15352685061  193233960601
18    Aug 02, 2019  10402.04  ...  17489094082  187791090996
19    Aug 01, 2019  10077.44  ...  17165337858  185653203391
20    Jul 31, 2019   9604.05  ...  16631520648  180028959603
21    Jul 30, 2019   9522.33  ...  13829811132  171472452506
22    Jul 29, 2019   9548.18  ...  13791445323  169880343827
23    Jul 28, 2019   9491.63  ...  13738687093  170461958074
24    Jul 27, 2019   9871.16  ...  16817809536  169099540423
25    Jul 26, 2019   9913.13  ...  14495714483  176085968354
26    Jul 25, 2019   9809.10  ...  15821952090  176806451137
27    Jul 24, 2019   9887.73  ...  17398734322  175005760794
28    Jul 23, 2019  10346.75  ...  17851916995  176572890702
29    Jul 22, 2019  10596.95  ...  16334414913  184443440748
               ...       ...  ...          ...           ...
2276  May 27, 2013    133.50  ...            -    1454029510
2277  May 26, 2013    131.99  ...            -    1495293015
2278  May 25, 2013    133.10  ...            -    1477958233
2279  May 24, 2013    126.30  ...            -    1491070770
2280  May 23, 2013    123.80  ...            -    1417769833
2281  May 22, 2013    122.89  ...            -    1385778993
2282  May 21, 2013    122.02  ...            -    1374013440
2283  May 20, 2013    122.50  ...            -    1363709900
2284  May 19, 2013    123.21  ...            -    1363204703
2285  May 18, 2013    123.50  ...            -    1379574546
2286  May 17, 2013    118.21  ...            -    1373723882
2287  May 16, 2013    114.22  ...            -    1325726787
2288  May 15, 2013    111.40  ...            -    1274623813
2289  May 14, 2013    117.98  ...            -    1243874488
2290  May 13, 2013    114.82  ...            -    1315710011
2291  May 12, 2013    115.64  ...            -    1281982625
2292  May 11, 2013    117.70  ...            -    1284207489
2293  May 10, 2013    112.80  ...            -    1305479080
2294  May 09, 2013    113.20  ...            -    1254535382
2295  May 08, 2013    109.60  ...            -    1264049202
2296  May 07, 2013    112.25  ...            -    1240593600
2297  May 06, 2013    115.98  ...            -    1249023060
2298  May 05, 2013    112.90  ...            -    1288693176
2299  May 04, 2013     98.10  ...            -    1250316563
2300  May 03, 2013    106.25  ...            -    1085995169
2301  May 02, 2013    116.38  ...            -    1168517495
2302  May 01, 2013    139.00  ...            -    1298954594
2303  Apr 30, 2013    144.00  ...            -    1542813125
2304  Apr 29, 2013    134.44  ...            -    1603768865
2305  Apr 28, 2013    135.30  ...            -    1488566728

[2306 rows x 7 columns]
</code></pre>
python-3.x|pandas|beautifulsoup|python-requests
0
9,818
43,838,557
Custom boolean filtering in Pandas?
<p>I have a dataframe</p> <pre><code>          0         1         2         3         4 Marketcap
0  1.707280  0.666952  0.638515 -0.061126  2.291747     1.71B
1 -1.017134  1.353627  0.618433  0.008279  0.148128     1.82B
2 -0.774057 -0.165566 -0.083345  0.741598 -0.139851      1.1M
3 -0.630724  0.250737  1.308556 -1.040799  1.064456    30.92M
4  2.029370  0.899612  0.261146  1.474148 -1.663970   476.74k
5  2.029370  0.899612  0.261146  1.474148 -1.663970        -1
</code></pre> <p>Is there some sort of custom filter method that would let Python know B > M > K?</p> <p>Say I want to filter <code>df[df.Marketcap &gt; 35.00M]</code>; is there a clever or clean way to do this? Having the M or B makes the value very readable and easy to differentiate. </p> <p>Thank you. </p> <p>EDIT: Reopened the thread, as Max U's answer, while excellent, seems to trigger a pandas bug, for which we opened an issue on GitHub. </p>
<p>This isn't super clean, but it does the trick and doesn't use any python iteration:</p> <p><strong>Code:</strong></p> <pre><code># Create a separate column (which you can omit later) that converts 'Marketcap' strings to numbers df['cap'] = df.loc[df['Marketcap'].str.contains('B'), 'Marketcap'].str.replace('B','').astype(float) * 1000 df['cap'].fillna(df.loc[df['Marketcap'].str.contains('M'), 'Marketcap'].str.replace('M',''), inplace = True) # For pandas pre-0.20.0 (&lt;May 2017) print df.ix[df['cap'].astype(float) &gt; 35, :-1] # For pandas 0.20.0+ (.ix[] deprecated) print df.iloc[df[df['cap'].astype(float) &gt; 35].index, :-1] # Or, alternate pandas 0.20.0+ option (thanks @Psidom) print df[df['cap'].astype(float) &gt; 35].iloc[:,:-1] </code></pre> <p><strong>Output:</strong></p> <pre><code> 0 1 2 3 4 Marketcap 0 1.707280 0.666952 0.638515 -0.061126 2.291747 1.71B 1 -1.017134 1.353627 0.618433 0.008279 0.148128 1.82B 4 2.029370 0.899612 0.261146 1.474148 -1.663970 100.9M </code></pre>
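<p>A tidier variant (a sketch, using the column names from the question) maps the suffixes to multipliers explicitly, which also covers the 'k' values:</p> <pre><code>mult = {'B': 1e9, 'M': 1e6, 'k': 1e3}

# split '1.71B' into the numeric part and the (optional) suffix, then scale
num = df['Marketcap'].str.extract(r'([-\d.]+)', expand=False).astype(float)
suffix = df['Marketcap'].str.extract(r'([BMk])', expand=False)
cap = num * suffix.map(mult).fillna(1)

df[cap &gt; 35e6]
</code></pre>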
pandas|filtering
3
9,819
43,768,498
Tensorflow Basic Example Error: CUBLAS_STATUS_NOT_INITIALIZED
<p>Hello I am trying to install and run tensorflow 1.0.</p> <p>I am using the following guide <a href="https://www.tensorflow.org/get_started/mnist/beginners" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/mnist/beginners</a></p> <p>However when I run the file mnist_softmax.py I get the following errors.</p> <pre><code>python3 mnist_softmax.py Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz 2017-05-03 14:25:28.243213: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-05-03 14:25:28.243234: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-05-03 14:25:28.243238: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-05-03 14:25:28.243241: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-05-03 14:25:28.243244: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. 2017-05-03 14:25:28.436478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate (GHz) 1.582 pciBusID 0000:02:00.0 Total memory: 10.91GiB Free memory: 349.06MiB 2017-05-03 14:25:28.436501: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0 2017-05-03 14:25:28.436505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y 2017-05-03 14:25:28.436510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -&gt; (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0) 2017-05-03 14:25:30.507057: E tensorflow/stream_executor/cuda/cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED 2017-05-03 14:25:30.507091: W tensorflow/stream_executor/stream.cc:1550] attempting to perform BLAS operation using StreamExecutor without BLAS support Traceback (most recent call last): File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call return fn(*args) File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1021, in _run_fn status, run_metadata) File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__ next(self.gen) File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(100, 784), b.shape=(784, 10), m=100, n=10, k=784 [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_recv_Placeholder_0/_9, Variable/read)]] 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "mnist_softmax.py", line 79, in &lt;module&gt;
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "mnist_softmax.py", line 66, in main
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 778, in run
    run_metadata_ptr)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(100, 784), b.shape=(784, 10), m=100, n=10, k=784
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_recv_Placeholder_0/_9, Variable/read)]]

Caused by op 'MatMul', defined at:
  File "mnist_softmax.py", line 79, in &lt;module&gt;
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "mnist_softmax.py", line 43, in main
    y = tf.matmul(x, W) + b
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py", line 1801, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1263, in _mat_mul
    transpose_b=transpose_b, name=name)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/fernando/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(100, 784), b.shape=(784, 10), m=100, n=10, k=784
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_recv_Placeholder_0/_9, Variable/read)]]
</code></pre> <p>I am not sure why I am getting this error. I also cannot run the matrixMulCUBLAS CUDA example; it gives the following error.</p> <pre><code>./matrixMulCUBLAS
[Matrix Multiply CUBLAS] - Starting...
GPU Device 0: "GeForce GTX 1080 Ti" with compute capability 6.1

MatrixA(640,480), MatrixB(480,320), MatrixC(640,320)
CUDA error at matrixMulCUBLAS.cpp:277 code=1(CUBLAS_STATUS_NOT_INITIALIZED) "cublasCreate(&amp;handle)"
</code></pre> <p>ALL CUDA examples work UNLESS they use CUBLAS; I'm not sure if this is related to my tensorflow error.</p>
<p>@FernandoMM I got my script to run where I was getting the same error. In my case, I was running external displays off my GPU and they were eating up all the GPU RAM. I disconnected all displays and restarted Python (in my case I was using a Jupyter server) and it worked. It looks like you have only 'Free memory: 349.06MiB'. Maybe freeing up some memory will work for you as well? I am still not sure why this worked for me or how it relates to the error received, so maybe someone else can enlighten us :).</p>
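<p>If freeing memory is not an option, another common workaround (a sketch for TF 1.x, which your traceback suggests you are on) is to stop TensorFlow from pre-allocating nearly all of the GPU up front:</p> <pre><code>import tensorflow as tf

# let TensorFlow grow its GPU allocation as needed instead of grabbing it all
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
</code></pre>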
python-3.x|tensorflow
0
9,820
43,541,625
Fast indexing and bit flipping of boolean data using Python
<p>Using Python, I'm running a simulation where a community of species goes through a sequential set of time steps ('scenes') in each of which extinction occurs. From an initial set of N species, each extinction needs to select a number of survivors, which then forms the pool to be subsampled at the next extinction. The number of survivors at each step is drawn randomly from a binomial distribution, given the community size and a per-species probability of survival. </p> <p>The examples below show a single chain of steps, but in practice the solution needs to be able to cope with branching, where the community surviving at one time step splits into two separate trajectories, each undergoing its own independent extinction.</p> <p>As a sketch of the process:</p> <pre><code>1111111111111111 (Initial 16 species, choose 11 survivors)
0110110101101111 (11 species, choose 9 survivors)
0110110101100011 (9 species, choose 5 survivors)
0100100000100011 (End of simulation)
</code></pre> <p>This process gets used a lot and the communities can get quite large, so I'm trying to speed it up as much as possible and keep the memory use down. Currently I have three competing implementations:</p> <p>A) Using a boolean numpy matrix to store which species are alive at each time step. The original motivation for this was that it would have a lower memory profile by just storing the presence/absence of the species, but <code>numpy</code> uses a full byte to store boolean values so this is eight times less memory efficient than I thought! </p> <pre><code>import numpy as np

def using_2D_matrix(nspp=1000, nscene=250):
    # define a matrix to hold the communities and
    # set the initial community
    m = np.zeros((nscene, nspp), dtype='bool_')
    m[0, ] = 1
    # loop over each extinction scene, looking up the indices
    # of live species and then selecting survivors
    for i in range(0, nscene - 1):
        candidates = np.where(m[i,])[0]
        n_surv = np.random.binomial(len(candidates), 0.99)
        surv = np.random.choice(candidates, size=n_surv, replace=False)
        m[i + 1, surv] = 1
    return m
</code></pre> <p>B) Storing a dictionary of 1D arrays holding the unique indices for surviving species removes the need to use <code>np.where</code>. It might have higher memory use because it will likely need to use <code>uint32</code> to store the ids, but where extinction is high, you only have to store a short list of indices rather than a whole row of the boolean array, so it is going to be case specific.</p> <pre><code>def using_dict_of_arrays(nspp=1000, nscene=250):
    # initialise a dictionary holding an array giving a
    # unique integer to each species
    m = {0: np.arange(nspp)}
    # loop over the scenes, selecting survivors
    for i in range(0, nscene - 1):
        n_surv = np.random.binomial(len(m[i]), 0.99)
        surv = np.random.choice(m[i], size=n_surv, replace=False)
        m[i + 1] = surv
    return m
</code></pre> <p>Of these, B is faster by about 10-15%. </p> <pre><code>import timeit

A = timeit.Timer(using_2D_matrix)
A.timeit(100) # 1.6549

B = timeit.Timer(using_dict_of_arrays)
B.timeit(100) # 1.3580
</code></pre> <p>C) I then thought about doing this using <code>bitarray</code> to store the presence or absence of species in communities compactly as actual bits. This could also offer efficiencies by using bitops to compare overlap in communities.
So:</p> <pre><code>def using_bitarray(nspp=1000, nscene=250): # initialise the starting community m = {0: bitarray('1' * nspp)} for i in range(0, nscene): # pick how many die and which they are (fewer bits to swap) n_die = np.random.binomial(m[i].count(True), 0.01) unlucky = np.random.choice(m[i].search(bitarray('1')), size=n_die, replace=False) # clone the source community and kill some off m[i + 1] = bitarray(m[i]) for s in unlucky: m[i + 1][s] = False return m </code></pre> <p>All of which is nice, but quite a lot slower. </p> <pre><code>C = timeit.Timer(using_bitarray) C.timeit(100) # 2.54035 </code></pre> <p>Am I missing an approach that would work faster?</p>
<p>Here's an alternative that's pretty fast:</p> <pre><code>def using_shuffled_array(nspp=1000, nscene=250): a = np.arange(nspp) np.random.shuffle(a) m = np.zeros(nscene, dtype=int) m[0] = nspp # loop over the scenes, selecting survivors for i in range(0, nscene - 1): m[i + 1] = np.random.binomial(m[i], 0.99) return a, m </code></pre> <p>Instead of generating a separate array for each generation, it shuffles the initial sequence of species numbers once, and then for each generation, it determines how many survive. After the call <code>a, m = using_shuffled_array()</code>, <code>a[:m[k]]</code> gives the survivors at generation <code>k</code>.</p> <p>Here's a timing comparison:</p> <pre><code>In [487]: %timeit using_dict_of_arrays() 100 loops, best of 3: 7.93 ms per loop In [488]: %timeit using_shuffled_array() 1000 loops, best of 3: 607 µs per loop </code></pre>
python|performance|numpy|bit-manipulation
1
9,821
43,814,594
Multiply each slice of image by weights
<p>I'm training a network where one tensor <code>t1</code> has the following shape:</p> <p><code>shape(t1) = [?, 300, 300, 10]</code></p> <p>and another tensor <code>t2</code> has shape:</p> <p><code>shape(t2) = [?, 10]</code></p> <p>I would like to multiply each element of the <code>t2</code> tensor by the corresponding <code>[300, 300]</code> slice of tensor <code>t1</code>. Anybody know how to do that? So far I've written the following:</p> <pre><code>def mul_concat(I):
    A = []
    for i in range(d1.shape[1].value):
        A.append(d1[:, i] * I[:, :, :, i])
    return reduce(lambda a, b: a + b, A)
</code></pre> <p>However, I get an error because of the <code>batch size</code> dimension. Any ideas how to fix that?</p>
<p>If you reshape <code>t2</code> to shape <code>[?, 1, 1, 10]</code>, then Tensorflow's broadcasting rules will do the rest:</p> <pre><code>t2_reshaped = tf.reshape(t2, [-1, 1, 1, 10]) output = t1 * t2_reshaped </code></pre> <p>Many Tensorflow operators allow broadcasting; the rules for broadcasting are the same as <code>numpy</code>'s rules for broadcasting. See <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html</a></p> <p>Hope that helps!</p>
tensorflow
0
9,822
72,902,456
The Panda's DataFrame.apply() doesn't work as intended
<p>The task that I am trying to accomplish is to define a function that adds 1 to the elements of the 'grade' column of a DataFrame if the corresponding element in the 'sqft_living' column is less than 400 and adds 2 to the elements of the 'grade' column if the corresponding element in the 'sqft_living' column is greater than 400. This function is then applied to the DataFrame using DataFrame.apply() method.</p> <p>The dataset that I am working on is called 'House Sales in King County, USA' Link to the dataset: <a href="https://www.kaggle.com/datasets/harlfoxem/housesalesprediction" rel="nofollow noreferrer">https://www.kaggle.com/datasets/harlfoxem/housesalesprediction</a></p> <p>The 'grade' column and the 'sqft_living' column, of the dataset, looks like this:</p> <pre><code> id sqft_living grade 0 7129300520 1180 7 1 6414100192 2570 7 2 5631500400 770 6 3 2487200875 1960 7 4 1954400510 1680 8 ... ... ... ... 21608 263000018 1530 8 21609 6600060120 2310 8 21610 1523300141 1020 7 21611 291310100 1600 8 21612 1523300157 1020 7 </code></pre> <p>The code that I am using is:</p> <pre><code>def myfunc(x): if x&lt;400 and x&gt;0: housing['grade'] = housing['grade'].add(1) elif x&gt;400: housing['grade'] = housing['grade'].add(2) housing['sqft_living'].apply(myfunc) </code></pre> <p>Here, 'housing' is the dataset.</p> <p>This gives me the output as:</p> <pre><code> id sqft_living grade 0 7129300520 1180 86447 1 6414100192 2570 86447 2 5631500400 770 86446 3 2487200875 1960 86447 4 1954400510 1680 86448 ... ... ... ... 21608 263000018 1530 86448 21609 6600060120 2310 86448 21610 1523300141 1020 86447 21611 291310100 1600 86448 21612 1523300157 1020 86447 </code></pre> <p>I notice here, that the last digits of the elements of the 'grade' column are the same as their original value</p> <p>However, when I do something like:</p> <pre><code>def myfunc(x): if x&lt;400 and x&gt;0: housing['grade'] = '+' elif x&gt;400: housing['grade'] = '-' housing['sqft_living'].apply(myfunc) </code></pre> <p>The code works as intended, and gives the output</p> <pre><code> id sqft_living grade 0 7129300520 1180 - 1 6414100192 2570 - 2 5631500400 770 - 3 2487200875 1960 - 4 1954400510 1680 - ... ... ... ... 21608 263000018 1530 - 21609 6600060120 2310 - 21610 1523300141 1020 - 21611 291310100 1600 - 21612 1523300157 1020 - </code></pre> <p>I am unable to understand why the code gives the mentioned output in the first case and I'd also like to know the way by which I could accomplish the task</p>
<blockquote> <p>the last digits of the elements of the 'grade' column are the same as their original value</p> </blockquote> <p>Your function adds to the <em>whole</em> column once per matching row, and it's just a coincidence that those repeated <code>add(1)</code> and <code>add(2)</code> calls sum to a multiple of ten (<code>86440</code> in your example), so the last digit looks unchanged.</p> <p><code>housing['grade']</code> refers to the whole column; you want to change one row at a time and return it:</p> <pre class="lang-py prettyprint-override"><code>def myfunc(row):
    if 0 &lt; row['sqft_living'] &lt; 400:
        row['grade'] += 1
    elif row['sqft_living'] &gt; 400:
        row['grade'] += 2
    return row

housing = housing.apply(myfunc, axis=1)
</code></pre> <p>Or with <code>np.select</code></p> <pre class="lang-py prettyprint-override"><code>housing['grade'] = np.select(
    [housing['sqft_living'].between(0, 400, inclusive='neither'), housing['sqft_living'] &gt; 400],
    [housing['grade'].add(1), housing['grade'].add(2)]
)
</code></pre>
python|pandas|dataframe
1
9,823
73,130,005
When is `stage is None` in pytorch lightning?
<p>Some official pytorch lightning <a href="https://pytorch-lightning.readthedocs.io/en/stable/extensions/datamodules.html" rel="nofollow noreferrer">docs</a> have code that refer to <code>stage</code> as <code>Optional[str]</code> with for example the following code</p> <pre><code>import pytorch_lightning as pl from torch.utils.data import random_split, DataLoader # Note - you must have torchvision installed for this example from torchvision.datasets import MNIST from torchvision import transforms class MNISTDataModule(pl.LightningDataModule): def __init__(self, data_dir: str = &quot;./&quot;): super().__init__() self.data_dir = data_dir self.transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]) def prepare_data(self): # download MNIST(self.data_dir, train=True, download=True) MNIST(self.data_dir, train=False, download=True) def setup(self, stage: Optional[str] = None): # Assign train/val datasets for use in dataloaders if stage == &quot;fit&quot; or stage is None: mnist_full = MNIST(self.data_dir, train=True, transform=self.transform) self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000]) # Assign test dataset for use in dataloader(s) if stage == &quot;test&quot; or stage is None: self.mnist_test = MNIST(self.data_dir, train=False, transform=self.transform) if stage == &quot;predict&quot; or stage is None: self.mnist_predict = MNIST(self.data_dir, train=False, transform=self.transform) def train_dataloader(self): return DataLoader(self.mnist_train, batch_size=32) def val_dataloader(self): return DataLoader(self.mnist_val, batch_size=32) def test_dataloader(self): return DataLoader(self.mnist_test, batch_size=32) def predict_dataloader(self): return DataLoader(self.mnist_predict, batch_size=32) </code></pre> <p>When does stage take the value of None? I could find no docs describing this.</p>
<p>The Trainer will never send <code>stage=None</code> to the setup hook, or any of the other hooks that take this argument. The type is annotated as optional and the default value is None for historical reasons. The value will always be one of &quot;fit&quot;, &quot;validate&quot;, &quot;test&quot;, &quot;predict&quot;.</p> <p>There is an <a href="https://github.com/Lightning-AI/lightning/issues/14062" rel="nofollow noreferrer">RFC to change this</a> to a required argument to avoid confusion. The link provides some more context on why it has been like this in the past.</p>
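<p>In practice that means a <code>setup</code> hook only ever needs to branch on those four strings; a minimal sketch:</p> <pre><code>from typing import Optional

def setup(self, stage: Optional[str] = None):
    # the Trainer always passes one of the four strings; None never arrives in practice
    if stage == &quot;fit&quot;:
        ...  # build train/val datasets
    elif stage == &quot;validate&quot;:
        ...
    elif stage == &quot;test&quot;:
        ...
    elif stage == &quot;predict&quot;:
        ...
</code></pre>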
deep-learning|pytorch|pytorch-lightning
1
9,824
73,139,985
Pandas change value based on other column values
<p>I want to change the value of each item in the column 'ageatbirth' to '1' if the 'race' is 2. In pseudocode:</p> <pre><code>If 'race' == 2 and 'ageatbirth' == 2:
    'ageatbirth' = 1
</code></pre> <p>Is there an easy way to do this for a very large dataset?</p>
<p>Use a boolean mask:</p> <pre class="lang-py prettyprint-override"><code>m = df['race'].eq(2) &amp; df['ageatbirth'].eq(2)

df['ageatbirth'] = df['ageatbirth'].mask(m, 1)
# or
df.loc[m, 'ageatbirth'] = 1
</code></pre>
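<p>An equivalent vectorized alternative, if you prefer <code>numpy</code> (reusing the same mask <code>m</code>):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

df['ageatbirth'] = np.where(m, 1, df['ageatbirth'])
</code></pre>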
python|pandas
1
9,825
73,117,582
How to cancel filtering if two string conditions occur
<p>I have code that looks like this:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('data.csv', nrows=100000)

def get_time(event_1, event_2):
    clean = df[(df['event_name'] == event_1) | (df['event_name'] == event_2)]
    ...

final_df = get_time(event_1 = 'open', event_2 = 'close')
</code></pre> <p>What I want is to keep my original data unfiltered when <code>event_1 = 'open'</code> and <code>event_2 = 'close'</code>. In all other cases I want it to filter as in the line with the <code>clean</code> variable.</p> <p>Original output</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>event</th> </tr> </thead> <tbody> <tr> <td>2022/10/05</td> <td>open</td> </tr> <tr> <td>2022/10/06</td> <td>jump</td> </tr> <tr> <td>2022/10/05</td> <td>run</td> </tr> <tr> <td>2022/10/06</td> <td>close</td> </tr> </tbody> </table> </div> <p>Expected when event_1 = 'open', event_2 = 'close'</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>event</th> </tr> </thead> <tbody> <tr> <td>2022/10/05</td> <td>open</td> </tr> <tr> <td>2022/10/06</td> <td>jump</td> </tr> <tr> <td>2022/10/05</td> <td>run</td> </tr> <tr> <td>2022/10/06</td> <td>close</td> </tr> </tbody> </table> </div> <p>Expected when event_1 = 'open', event_2 = 'jump'</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>event</th> </tr> </thead> <tbody> <tr> <td>2022/10/05</td> <td>open</td> </tr> <tr> <td>2022/10/06</td> <td>jump</td> </tr> </tbody> </table> </div> <p>Appreciate your help</p>
<p>Use a simple <code>if</code> statement</p> <pre><code>def get_time(event_1, event_2): if event_1 == 'open' and event_2 == 'close': clean = df else: clean = df[(df['event_name'] == event_1) | (df['event_name'] == event_2)] ... </code></pre>
python|pandas|dataframe
1
9,826
73,132,314
How to fix the error of "plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)"
<p>I'm testing the &quot;<code>Variational autoencoder (VAE)</code>&quot; from this link: <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow noreferrer">https://blog.keras.io/building-autoencoders-in-keras.html</a></p> <p>But the following code will cause error:</p> <pre><code>x_test_encoded = encoder.predict(x_test, batch_size=batch_size) plt.figure(figsize=(6, 6)) plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test) plt.colorbar() plt.show() </code></pre> <p>Here is the error:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_3369/2564361963.py in &lt;cell line: 2&gt;() 1 plt.figure(figsize=(6, 6)) ----&gt; 2 plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test) 3 plt.colorbar() 4 plt.show() TypeError: list indices must be integers or slices, not tuple </code></pre> <p>I have tried to print the data type of <code>x_test_encoded</code>:</p> <pre><code>print(&quot;x_test_encoded: type: {}, len: {}&quot;.format( type(x_test_encoded), len(x_test_encoded) )) x_test_encoded </code></pre> <p>Output:</p> <pre><code>x_test_encoded: type: &lt;class 'list'&gt;, len: 3 [array([[-0.5947027 , -0.52120334], [ 0.4796909 , 0.3287477 ], [-3.005046 , -0.34468982], ..., [-0.62982583, -0.240578 ], [-0.9350344 , 0.38294736], [ 0.82171804, -0.7310184 ]], dtype=float32), array([[-1.2073065 , -1.0317452 ], [-1.517373 , -1.224308 ], [-0.15326577, -0.5757952 ], ..., [-1.0883944 , -0.97350955], [-0.97157186, -1.1322904 ], [-1.5575923 , -1.2778456 ]], dtype=float32), array([[-0.60485274, -0.56204545], [ 0.46670908, 0.34251845], [-2.9650807 , -0.40624568], ..., [-0.5976225 , -0.25444514], [-0.95324576, 0.4048244 ], [ 0.8028061 , -0.66895425]], dtype=float32)] </code></pre> <p>Print the first element:</p> <pre><code>print(&quot;x_test_encoded[:][0]: type: {}, len: {}, shape: {}, dtype: {}&quot;.format( type(x_test_encoded[:][0]), len(x_test_encoded[:][0]), x_test_encoded[:][0].shape, x_test_encoded[:][0].dtype )) x_test_encoded[:][0] </code></pre> <p>Output:</p> <pre><code>x_test_encoded[:][0]: type: &lt;class 'numpy.ndarray'&gt;, len: 10000, shape: (10000, 2), dtype: float32 array([[-0.5947027 , -0.52120334], [ 0.4796909 , 0.3287477 ], [-3.005046 , -0.34468982], ..., [-0.62982583, -0.240578 ], [-0.9350344 , 0.38294736], [ 0.82171804, -0.7310184 ]], dtype=float32) </code></pre> <p>How to fix the error of &quot;<code>plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)</code>&quot;?</p> <p>What does &quot;<code>[:, 0]</code>&quot; mean in a &quot;<code>list</code>&quot; type variable?</p>
<p>Your encoder has three outputs <code>[z_mean, z_log_sigma, z]</code>, but you are actually only interested in <code>z</code>, at position [-1]. So, it should actually be something like this (the tutorial seems to have an error):</p> <pre><code>x_test_encoded = encoder.predict(x_test, batch_size=32)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[-1][:, 0], x_test_encoded[-1][:, 1], c=y_test)
plt.colorbar()
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/STPgA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/STPgA.png" alt="enter image description here" /></a></p> <p>Since <code>z</code> consists of two latent variables, you index 0 and 1.</p>
python|numpy|tensorflow|matplotlib|keras
1
9,827
73,101,278
How to append one data frame to another based on a condition
<p>I have 2 dfs. I want to append one to the other only if the first df is not null. E.g.:</p> <p>df1</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Name</th> <th style="text-align: left;">place</th> <th style="text-align: left;">Animal</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Abc</td> <td style="text-align: left;">China</td> <td style="text-align: left;">Lion</td> </tr> </tbody> </table> </div> <p>df2</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Name</th> <th style="text-align: left;">place</th> <th style="text-align: left;">Animal</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Xyz</td> <td style="text-align: left;">London</td> <td style="text-align: left;">cheeta</td> </tr> <tr> <td style="text-align: left;">Tom</td> <td style="text-align: left;">Paris</td> <td style="text-align: left;">dog</td> </tr> </tbody> </table> </div> <p>Now I want to append df1 to df2 only if df1 is not null. How do I do that? What I tried:</p> <pre class="lang-py prettyprint-override"><code>for i in len(df1):
    if i &gt; 1:
        df2.append(df1)
</code></pre> <p>but I am getting an error. Is there a better approach?</p>
<p>You can place whatever code you want in the if statement; I just used a print of &quot;DF1 is empty&quot; as a placeholder.</p> <pre><code>df1 = pd.DataFrame()
df2 = pd.DataFrame({&quot;Name&quot;:[&quot;ABC&quot;, &quot;XYZ&quot;]})

# Check if df1 is empty, if not, concatenate df1 and df2 and reset the index
if df1.empty:
    print(&quot;DF1 is empty&quot;)
else:
    df2 = pd.concat([df1, df2], ignore_index=True) # You can remove &quot;ignore_index&quot; if you don't want to reset the index
</code></pre>
python|pandas
1
9,828
70,513,641
Azure Databricks OOM error that causes the connection to the Python REPL to be closed
<p>In the following sample code, in <code>one cell</code> of our <a href="https://docs.microsoft.com/en-us/azure/databricks/scenarios/what-is-azure-databricks#:%7E:text=Azure%20Databricks%20is%20a%20data,Microsoft%20Azure%20cloud%20services%20platform.&amp;text=For%20a%20big%20data%20pipeline,Event%20Hub%2C%20or%20IoT%20Hub." rel="noreferrer">Azure Databricks</a> notebook, the code loads about 20 million records into a <a href="https://pandas.pydata.org/" rel="noreferrer">Python pandas</a> <code>dataframe</code> from an <code>Azure SQL db</code> and does some dataframe column transformation by applying some functions (as shown in the code snippet below). But after running the code for about half an hour, Databricks throws the following error:</p> <p><strong>Error</strong>:</p> <pre><code>ConnectException: Connection refused (Connection refused)
Error while obtaining a new communication channel
ConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.
</code></pre> <p><strong>Remarks</strong>: The table has about 150 columns. The <code>Spark settings</code> on Databricks are as follows: <strong>Cluster</strong>: <code>128 GB , 16 Cores, DBR 8.3, Spark 8.3, Scala 2.12</code></p> <p><strong>Question</strong>: What could be the cause of the error, and how can we fix it?</p> <pre><code>import sqlalchemy as sq
import pandas as pd

def fn_myFunction(lastname):
    testvar = lastname.lower()
    testvar = testvar.strip()
    return testvar

pw = dbutils.secrets.get(scope='SomeScope',key='sql')

engine = sq.create_engine('mssql+pymssql://SERVICE.Databricks.NONPUBLICETL:'+pw+'MyAzureSQL.database.windows.net:1433/TEST', isolation_level=&quot;AUTOCOMMIT&quot;)

app_df = pd.read_sql('select * from MyTable', con=engine)

#create new column
app_df['NewColumn'] = app_df['TestColumn'].apply(lambda x: fn_myFunction(x))
.............
.............
</code></pre>
<p>This means that the driver crashed because of an OOM (out-of-memory) exception, and after that a new connection to the driver cannot be established.</p> <p>Please try the options below:</p> <ul> <li>Try increasing driver-side memory and then retry.</li> <li>Look at the Spark job DAG, which gives you more info on the data flow.</li> </ul> <p>For more information, see this <a href="https://blog.clairvoyantsoft.com/apache-spark-out-of-memory-issue-b63c7987fff" rel="nofollow noreferrer">article</a> by Aditi Sinha.</p>
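<p>Separately, since the whole 20-million-row table is pulled onto the driver in one go, streaming it in chunks can keep the pandas peak much lower. A sketch reusing the question's <code>engine</code> and <code>fn_myFunction</code> (the chunk size is an arbitrary choice):</p> <pre><code>chunks = []
# stream the query result in 500k-row pieces instead of one 20M-row frame
for chunk in pd.read_sql('select * from MyTable', con=engine, chunksize=500000):
    chunk['NewColumn'] = chunk['TestColumn'].apply(fn_myFunction)
    chunks.append(chunk)

app_df = pd.concat(chunks, ignore_index=True)
</code></pre>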
python|pandas|apache-spark|azure-sql-database|azure-databricks
2
9,829
42,723,780
How does adding a (500x5000) and (5000x1) matrix result in a (500x5000) matrix?
<p>I am following a tutorial in an iPython notebook. My intention is to calculate (X - X_train)^2, storing the result in dists. The following code seems to work; however, I don't understand how it works.</p> <p>Why does (2*inner_prod + train_sum), which adds differently-sized matrices, yield a 500x5000 matrix?</p> <p>How are the matrices processed in the calculation of dists?</p> <pre><code>test_sum = np.sum(np.square(X), axis=1) # summed each example #(500x1)
train_sum = np.sum(np.square(self.X_train), axis=1) # summed each example #(5000x1)
inner_prod = np.dot(X, self.X_train.T) #matrix multiplication for 2-D arrays (500x3072)*(3072x5000)=(500x5000)

print inner_prod.shape
print X.shape
print self.X_train.T.shape
print test_sum.shape
print test_sum.size
print train_sum.shape
print train_sum.size
print test_sum.reshape(-1,1).shape # how... does reshaping work???
print (2*inner_prod+train_sum).shape

dists = np.sqrt(np.reshape(test_sum,(-1,1)) - 2 * inner_prod + train_sum) # (500x1) - 2*(500x5000) + (5000x1) = (500x5000)
print dists.shape
</code></pre> <p>The print statements give the following:</p> <pre><code>(500L, 5000L)
(500L, 3072L)
(3072L, 5000L)
(500L,)
500
(5000L,)
5000
(500L, 1L)
(500L, 5000L)
(500L, 5000L)
</code></pre>
<pre><code>print train_sum.shape              # (5000,)
print train_sum.size
print test_sum.reshape(-1,1).shape # (500,1)
print (2*inner_prod+train_sum).shape
</code></pre> <p><code>test_sum.reshape(-1,1)</code> returns a new array with a new shape (but shared data). It does not reshape <code>test_sum</code> itself.</p> <p>So the addition broadcasting does:</p> <pre><code>(500,5000) + (5000,) =&gt; (500,5000)+(1,5000)=&gt;(500,5000)
</code></pre> <p>If it had done the reshape, you'd have gotten an error.</p> <pre><code>(500,5000) + (5000,1) =&gt; error

In [68]: np.ones((500,5000))+np.zeros((5000,1))
ValueError: operands could not be broadcast together with shapes (500,5000) (5000,1)
</code></pre> <p>There's really only one way to add that (500,5000) array and the (5000,) one, and that's what you got without (effective) reshape.</p> <p><code>train_sum.shape = (-1,1)</code> acts in place, but isn't used as often as <code>reshape</code>. Stick with the reshape, but use it right.</p>
python|numpy
1
9,830
42,947,298
ValueError: cannot reshape array of size 30470400 into shape (50,1104,104)
<p>I am trying to run through this tutorial: <a href="http://emmanuelle.github.io/segmentation-of-3-d-tomography-images-with-python-and-scikit-image.html" rel="noreferrer">http://emmanuelle.github.io/segmentation-of-3-d-tomography-images-with-python-and-scikit-image.html</a>, </p> <p>where I want to do segmentation of 3-D tomography images with Python. </p> <p>I'm struggling right at the beginning, with reshaping the image. </p> <p>This is the code: </p> <pre><code>%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import time as time

data = np.fromfile('/data/data_l67/dalladas/Python3/Daten/Al8Cu_1000_g13_t4_200_250.vol', dtype=np.float32)
data.shape
(60940800,)
data.reshape((50,1104,104))
</code></pre> <blockquote> <p>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () ----&gt; 1 data.reshape((50,1104,104))</p> <p>ValueError: cannot reshape array of size 30470400 into shape (50,1104,104)</p> </blockquote> <p>Can somebody help me out? </p>
<p>It seems that there is a typo, since <code>1104*1104*50=60940800</code> and you are trying to reshape to dimensions <code>50,1104,104</code>. So it seems that you need to change 104 to 1104.</p>
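<p>That is, assuming the volume really is 50 slices of 1104x1104:</p> <pre><code>data = data.reshape((50, 1104, 1104))  # 50 * 1104 * 1104 == 60940800
</code></pre>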
python|numpy|reshape
22
9,831
27,296,380
Create conditional probabilities in pandas from dataframe
<p>I have probabilities in a pandas dataframe df (from 1 July 2011 up to 31 July 2011, in 15-min periods). Here is an excerpt:</p> <pre><code>   Date_Time              prob
0  2011-07-01 00:00:00  0.0112
1  2011-07-01 00:15:00  0.0224
2  2011-07-01 00:30:00  0.0112
3  2011-07-01 00:45:00  0.0896
4  2011-07-01 01:00:00  0.0112
5  2011-07-01 01:15:00  0.0112
6  2011-07-01 01:30:00  0.0336
7  2011-07-01 01:45:00  0.1081
8  2011-07-01 02:00:00  0.0112
</code></pre> <p>I want to calculate the conditional probabilities (probability of A given B -&gt; P(A|B)) of one 15-min period given each of its four(!) predecessors, and this for every row (period). That means (I used the index to name the rows here):</p> <p>P(4|0), P(4|1), P(4|2), P(4|3)</p> <p>P(5|1), P(5|2), P(5|3), P(5|4)</p> <p>and so on.</p> <p>The formula is: P(A|B) = P(A and B) / P(B), i.e. (P(A)*P(B))/P(B)</p> <p>Sorry, but I have no idea how I can do that. Maybe there is a useful pandas function which I could fit, but I did not find anything.</p>
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/version/0.15.1/generated/pandas.DataFrame.shift.html" rel="nofollow"><code>shift()</code></a> to calculate all these probabilities</p> <pre><code>&gt;&gt;&gt; for i in range(1,5): ... probB = df.shift(i)['prob'] ... probA = df['prob'] ... df['prob -' + str(i)] = (probA * probB) / probB ... &gt;&gt;&gt; df Date_Time prob prob -1 prob -2 prob -3 prob -4 0 2011-07-01 00:00:00 0.0112 NaN NaN NaN NaN 1 2011-07-01 00:15:00 0.0224 0.0224 NaN NaN NaN 2 2011-07-01 00:30:00 0.0112 0.0112 0.0112 NaN NaN 3 2011-07-01 00:45:00 0.0896 0.0896 0.0896 0.0896 NaN 4 2011-07-01 01:00:00 0.0112 0.0112 0.0112 0.0112 0.0112 5 2011-07-01 01:15:00 0.0112 0.0112 0.0112 0.0112 0.0112 6 2011-07-01 01:30:00 0.0336 0.0336 0.0336 0.0336 0.0336 7 2011-07-01 01:45:00 0.1081 0.1081 0.1081 0.1081 0.1081 8 2011-07-01 02:00:00 0.0112 0.0112 0.0112 0.0112 0.0112 </code></pre>
python|pandas
0
9,832
25,158,561
ipython pylab: print histogram from dictionary
<p>I have a dictionary <code>d</code>:</p> <pre><code>d = {'apples': 5, 'oranges': 2, 'bananas': 2, 'lemons': 1, 'coconuts': 1} </code></pre> <p>how can I display it graphically, as a histogram using (<code>pylab/matplotlib/ pandas/what-ever-is-best-suitable-for-simple-histograms</code>)</p> <p>What I am looking for is a graphical analogy to the following:</p> <pre><code>X X X X X X X X X X X ------------- A O B L C </code></pre>
<p>Using matplotlib:</p> <pre><code>import matplotlib.pyplot as plt d = {'apples': 5, 'oranges': 2, 'bananas': 2, 'lemons': 1, 'coconuts': 1} plt.bar(range(len(d)), d.values(), align='center') plt.xticks(range(len(d)), d.keys(), rotation=25) </code></pre> <p><img src="https://i.stack.imgur.com/S104h.png" alt="enter image description here"></p> <p>Or, to make it colorful:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt d = {'apples': 5, 'oranges': 2, 'bananas': 2, 'lemons': 1, 'coconuts': 1} jet = plt.get_cmap('jet') N = len(d) plt.bar(range(N), d.values(), align='center', color=jet(np.linspace(0, 1.0, N))) plt.xticks(range(N), d.keys(), rotation=25) </code></pre> <p><img src="https://i.stack.imgur.com/wdlXn.png" alt="enter image description here"></p>
python|matplotlib|pandas|ipython
3
9,833
30,711,969
object of type 'float' has no len() when using to_stata
<p>I have three columns in my dataset that I'm trying to save as a STATA dta file. These are the last three lines I run after I clean the data.</p> <pre><code>macro1=macro1.rename(columns={'index':'year', 'Price Index, PCE':'pce','Unemployment Rate':'urate'})
macro1.convert_objects(convert_numeric=True).dtypes
macro1[['year','pce', 'urate']].to_stata('file path\file name.dta', write_index=False)
</code></pre> <p>These are the data types of these variables:</p> <pre><code>year     float64
pce      float64
urate    float64
dtype: object
</code></pre> <p>The problem is, when I try to convert these columns to <code>.dta</code> I get an error message:</p> <pre><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
&lt;ipython-input-69-a2069ee823e7&gt; in &lt;module&gt;()
     36 macro1=macro1.rename(columns={'index':'year', 'Price Index, PCE':'pce','Unemployment Rate':'urate'})
     37 macro1.convert_objects(convert_numeric=True).dtypes
---&gt; 38 macro1[['pce']].to_stata('file path\file name.dta', write_index=False)
     39 #macro1

C:\Users\chungk\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\core\frame.pyc in to_stata(self, fname, convert_dates, write_index, encoding, byteorder, time_stamp, data_label)
   1262                              time_stamp=time_stamp, data_label=data_label,
   1263                              write_index=write_index)
-&gt; 1264         writer.write_file()
   1265
   1266     @Appender(fmt.docstring_to_string, indents=1)

C:\Users\chungk\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\stata.pyc in write_file(self)
   1245             self._write(_pad_bytes("", 5))
   1246         if self._convert_dates is None:
-&gt; 1247             self._write_data_nodates()
   1248         else:
   1249             self._write_data_dates()

C:\Users\chungk\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\stata.pyc in _write_data_nodates(self)
   1327                 if var is None or var == np.nan:
   1328                     var = _pad_bytes('', typ)
-&gt; 1329                 if len(var) &lt; typ:
   1330                     var = _pad_bytes(var, typ)
   1331                 if compat.PY3:

TypeError: object of type 'float' has no len()
</code></pre> <p>The problem is with both <code>urate</code> and <code>pce</code>, because when I try saving only <code>year</code>, it works. </p> <p>I'm not sure where the problem lies. Any help would be much appreciated.</p>
<p><code>convert_objects</code> does not convert the dtypes in place, so you need to assign the result:</p> <pre><code>macro1 = macro1.convert_objects(convert_numeric=True)
</code></pre> <p>See the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html#pandas.DataFrame.convert_objects" rel="nofollow">docs</a>.</p>
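<p>On later pandas versions, where <code>convert_objects</code> was deprecated, a rough per-column equivalent (a sketch using the question's column names) is <code>pd.to_numeric</code>:</p> <pre><code># rough equivalent of convert_objects on pandas &gt;= 0.17
for col in ['year', 'pce', 'urate']:
    macro1[col] = pd.to_numeric(macro1[col], errors='coerce')
</code></pre>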
python|pandas
0
9,834
26,739,564
How can I parallelize the functions "leastsq" and/or "curve_fit"?
<p>What is the best way to parallelize the fitting procedure for multicore computers using scipy functions? As far as I can see from the manual, these functions do not have a parameter like <code>nprocs</code>. Does that mean they are not supposed to be parallelized?</p>
<p>Short answer: there is no built-in parallelization at the moment. There is <a href="https://github.com/scipy/scipy/pull/3192" rel="nofollow">a proposal to use multiprocessing</a> in leastsq, but it's not clear whether this is worth it. (if you feel like giving that code a drive and report the results, that'll be appreciated).</p> <p>That being said, you can parallelize the evaluation of your objective function as you wish. Eg push it to a compiled extension (Cython, C, Fortran), release the GIL and use OpenMP or even explicit threads. Your mileage may vary <em>a lot</em> depending on details of your particular problem.</p>
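<p>As a rough sketch of that idea with <code>multiprocessing</code> (the exponential model and the chunking below are made-up placeholders; whether this wins depends on how expensive the model evaluation is relative to the inter-process overhead):</p> <pre><code>import numpy as np
from multiprocessing import Pool
from scipy.optimize import leastsq

def model_chunk(args):
    # hypothetical model, evaluated on one slice of the x data
    params, x = args
    a, b = params
    return a * np.exp(-b * x)

x = np.linspace(0, 10, 1000000)
y = 2.0 * np.exp(-0.5 * x)
x_chunks = np.array_split(x, 4)

pool = Pool(4)

def residuals(params):
    # farm the model evaluation out to the workers, then stitch the pieces back
    pieces = pool.map(model_chunk, [(params, xc) for xc in x_chunks])
    return y - np.concatenate(pieces)

best, ier = leastsq(residuals, [1.0, 1.0])
</code></pre>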
numpy|parallel-processing|scipy|mathematical-optimization|curve-fitting
3
9,835
26,783,651
Efficient Partitioning of Pandas DataFrame rows between sandwiched indicator variables
<p>Suppose I have a pandas df with an indicator row that sandwiches a period. Ex. </p> <pre><code>In [9]: pd.DataFrame({'col1':np.arange(1,11),'indicator':[0,1,0,0,0,1,0,0,1,1]}) Out[9]: col1 indicator 0 1 0 1 2 1 2 3 0 3 4 0 4 5 0 5 6 1 6 7 0 7 8 0 8 9 1 9 10 1 </code></pre> <p>What I want to do, is to use groupby to select the partitions separated by the indicators. </p> <p>ex. </p> <p>Group 1</p> <pre><code>col1 indicator 0 1 0 1 2 1 </code></pre> <p>Group 2</p> <pre><code>2 3 0 3 4 0 4 5 0 5 6 1 </code></pre> <p>Group 3</p> <pre><code>6 7 0 7 8 0 8 9 1 </code></pre> <p>Group 4</p> <pre><code>9 10 1 </code></pre> <p>The naive solution will be to just take the indicator column out as a list, run a for-loop through it, and just label each part. But suppose the dataset is really big, and you want to avoid the for-loop. Is there something more clever that can be done here, to separate out the different groups? </p> <p>Thanks! </p>
<p>Just assign another column as a <code>cumsum</code> of <code>indicator</code>, then apply <code>groupby</code>, this should do the trick:</p> <pre><code># reverse the order as you have indicator at end of group, then reverse back df['grouped'] = df['indicator'].loc[::-1].cumsum().loc[::-1] for g in df.groupby('grouped', sort=False): print g (4, col1 indicator grouped 0 1 0 4 1 2 1 4) (3, col1 indicator grouped 2 3 0 3 3 4 0 3 4 5 0 3 5 6 1 3) (2, col1 indicator grouped 6 7 0 2 7 8 0 2 8 9 1 2) (1, col1 indicator grouped 9 10 1 1) </code></pre>
python|pandas|group-by
3
9,836
39,344,587
Select rows if columns meet condition
<p>I have a <code>DataFrame</code> with 75 columns. </p> <p>How can I select rows based on a condition in a specific array of columns? If I want to do this on all columns I can just use</p> <pre><code>df[(df.values &gt; 1.5).any(1)] </code></pre> <p>But let's say I just want to do this on columns 3:45.</p>
<p>Use <code>ix</code> to slice the columns using ordinal position:</p> <pre><code>In [31]:
df = pd.DataFrame(np.random.randn(5,10), columns=list('abcdefghij'))
df
Out[31]:
          a         b         c         d         e         f         g  \
0 -0.362353  0.302614 -1.007816 -0.360570  0.317197  1.131796  0.351454
1  1.008945  0.831101 -0.438534 -0.653173  0.234772 -1.179667  0.172774
2  0.900610  0.409017 -0.257744  0.167611  1.041648 -0.054558 -0.056346
3  0.335052  0.195865  0.085661  0.090096  2.098490  0.074971  0.083902
4 -0.023429 -1.046709  0.607154  2.219594  0.381031 -2.047858 -0.725303

          h         i         j
0  0.533436 -0.374395  0.633296
1  2.018426 -0.406507 -0.834638
2 -0.079477  0.506729  1.372538
3 -0.791867  0.220786 -1.275269
4 -0.584407  0.008437 -0.046714
</code></pre> <p>So to slice the 4th to 5th columns inclusive:</p> <pre><code>In [32]:
df.ix[:, 3:5]
Out[32]:
          d         e
0 -0.360570  0.317197
1 -0.653173  0.234772
2  0.167611  1.041648
3  0.090096  2.098490
4  2.219594  0.381031
</code></pre> <p>So in your case</p> <pre><code>df[((df.ix[:, 2:45]).values &gt; 1.5).any(1)]
</code></pre> <p>should work.</p> <p>Indexing is <code>0</code>-based, and the start of the range is included while the end is not, so here the 3rd column (position 2) is included and we slice up to position 45, which is not included in the slice.</p>
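<p>On pandas 0.20+, where <code>.ix</code> is deprecated, the positional equivalent uses <code>.iloc</code>:</p> <pre><code>df[(df.iloc[:, 2:45].values &gt; 1.5).any(1)]
</code></pre>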
pandas|indexing|dataframe|conditional-statements|any
2
9,837
39,030,075
Sort out dataframe rows where the index meets certain conditions
<p>I have a data frame like this:</p> <pre><code>          name      pe  outstanding     totals  totalAssets
code
300533     abc   30.04      2500.00   10000.00     82066.80
300532     def   31.27      2100.00    8400.00     77945.25
603986     NiT   23.40      2500.00   10000.00     89517.36
600187     ITG    0.00    145562.42  145562.42    393065.88
000652     IGE  929.15    146567.31  147557.39   2969607.50
</code></pre> <p>I want to select those rows whose index's first 3 characters are in <code>['000', '300']</code>.</p> <p>The result will be:</p> <pre><code>          name      pe  outstanding     totals  totalAssets
code
300533     abc   30.04      2500.00   10000.00     82066.80
300532     def   31.27      2100.00    8400.00     77945.25
000652     IGE  929.15    146567.31  147557.39   2969607.50
</code></pre> <p>Thanks.</p>
<p>You can use <code>str</code> to extract the first 3 characters from index:</p> <pre><code>df[df.index.str[:3].isin(['300', '000'])] # name pe outstanding totals totalAssets # code #300533 abc 30.04 2500.00 10000.00 82066.80 #300532 def 31.27 2100.00 8400.00 77945.25 #000652 IGE 929.15 146567.31 147557.39 2969607.50 </code></pre>
python|pandas
2
9,838
39,024,852
Ignore string data that does not match a certain format when calculating "min" with Pandas
<p>I have a DataFrame column 'datetime' with the values in this format:</p> <p><code>'2016-08-01 13:43:35'</code></p> <p>I would like to find the min and max values. The problem is that some of the rows are missing time values so they look like this:</p> <p><code>'2016-07-29 '</code></p> <p>How can I exclude the rows with missing data when calculating the min and max?</p> <p>Here is how I'm finding the min value:</p> <pre><code>min_ = df['datetime'].min() </code></pre> <p>The minimum value that I'm trying to find is the earliest date/time combination where both are included. So for example in from my data:</p> <p>'7/29/2016 11:02:38' would be the desired value.</p>
<p>You can convert values that have a specific format to datetime, and the remaining ones will be NaT. If you take the minimum on the resulting series, it will ignore NaTs.</p> <pre><code>df['datetime'] = ['2016-08-01 13:43:35', '2016-06-01 13:43:35', '2013-08-01 13:43:35', '2016-07-29 '] df Out: datetime 0 2016-08-01 13:43:35 1 2016-06-01 13:43:35 2 2013-08-01 13:43:35 3 2016-07-29 pd.to_datetime(df['datetime'], format='%Y-%m-%d %H:%M:%S', errors='coerce') Out: 0 2016-08-01 13:43:35 1 2016-06-01 13:43:35 2 2013-08-01 13:43:35 3 NaT Name: datetime, dtype: datetime64[ns] pd.to_datetime(df['datetime'], format='%Y-%m-%d %H:%M:%S', errors='coerce').min() Out: Timestamp('2013-08-01 13:43:35') </code></pre>
python|pandas
1
9,839
29,015,556
numpy element transformation with lambda?
<p>In a 3D numpy array, I need to transform each element as follows: if it's less than 0, it must become 0; if it's greater than 255, it must become 255; otherwise it remains as-is.</p> <p>How can I achieve that with numpy? I am thinking of something like</p> <pre><code>img.transform_each(transform_func)
</code></pre> <p>where</p> <pre><code>def transform_func(x):
    if x&lt;0: return 0; # etc
</code></pre> <p>Is there any built-in function like <code>transform_each</code> for that? I don't want to write the for-for-for loop manually.</p>
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.clip.html" rel="nofollow"><code>clip</code></a> to keep the values of an array within a particular range. For example:</p> <pre><code>&gt;&gt;&gt; a = np.array([-1, 23, 312, 47, -5])
&gt;&gt;&gt; a.clip(0, 255)
array([  0,  23, 255,  47,   0])
</code></pre> <p>This returns a new array of the same shape containing the clipped values - you'll need to reassign <code>a</code> to this new array if you want to make the changes permanent, or else use the <code>out</code> parameter to perform the operation in place:</p> <pre><code>np.clip(a, 0, 255, out=a)
</code></pre> <p>An alternative approach, which opens the way to more complex operations, is the idea of boolean indexing.</p> <p>For example, to set elements of an array <code>a</code> which are less than <code>0</code> to <code>0</code>:</p> <pre><code>a[a &lt; 0] = 0
</code></pre> <p>Or to multiply all values equal to <code>2</code> by <code>7</code>, you can write:</p> <pre><code>a[a == 2] *= 7
</code></pre>
python|arrays|numpy|vectorization
4
9,840
33,613,945
How to deal with inputs outside 0-1 range in tensorflow?
<p>In the example provided at <a href="http://www.tensorflow.org/get_started" rel="nofollow">http://www.tensorflow.org/get_started</a>, if I multiply the input by 2</p> <pre><code>x_data = np.float32(np.random.rand(2, 100))*2
</code></pre> <p>I get nonsense output, while I expected to get the same solution.</p> <pre><code>0 [[ -67.06586456 -109.13352203]] [-7.67297792]
20 [[ nan nan]] [ nan]
40 [[ nan nan]] [ nan]
60 [[ nan nan]] [ nan]
80 [[ nan nan]] [ nan]
100 [[ nan nan]] [ nan]
120 [[ nan nan]] [ nan]
140 [[ nan nan]] [ nan]
160 [[ nan nan]] [ nan]
180 [[ nan nan]] [ nan]
200 [[ nan nan]] [ nan]
</code></pre> <p>How does tensorflow handle inputs that are not in the 0-1 range?</p> <p><em>EDIT</em>: Using <code>AdagradOptimizer</code> works without an issue.</p>
<p>The issue is that the example uses a very aggressive learning rate:</p> <pre><code>optimizer = tf.train.GradientDescentOptimizer(0.5) </code></pre> <p>This makes learning faster, but stops working if you change the problem a bit. A learning rate of <code>0.01</code> would be more typical:</p> <pre><code>optimizer = tf.train.GradientDescentOptimizer(0.01) </code></pre> <p>Now your modification works fine. :)</p>
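<p>As the question's edit notes, <code>AdagradOptimizer</code> also copes with the rescaled inputs, since it adapts the effective step size per parameter. A sketch of that swap, keeping the rest of the example unchanged:</p> <pre><code>optimizer = tf.train.AdagradOptimizer(0.5)
</code></pre>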
python|tensorflow
10
9,841
23,818,364
Speed up a loop in pandas
<p>I am struggling with an issue in Python pandas. I have a DataFrame which represents connections to a website:</p> <pre><code>No.  IDs    date                  duration_since_last_visit
1    4678   2012-11-30 23:59:59   0
2    4703   2012-11-30 23:59:23   0
3    4678   2012-11-30 23:58:46   73s
4    5803   2012-11-30 23:58:19   0
5    4678   2012-11-30 23:58:07   39s
</code></pre> <p>I am trying to find the mean visit time for each ID number. I managed to do that with:</p> <pre><code>for i in df['IDs'].values:
    report['mean_time_visits']=report[report['IDs']==i].duration_since_last_visit.mean()
</code></pre> <p>But my array has 350,000 rows and the result takes forever to compute. I wanted to know if I did something wrong, and if there's a way to do this task faster.</p>
<p>No loops needed.</p> <pre><code>In [12]: df.groupby('IDs')['duration_since_last_visit'].mean() Out[12]: IDs 4678 37.333333 4703 0.000000 5803 0.000000 Name: duration_since_last_visit, dtype: float64 </code></pre> <p>You'll find that vectorized operations are more efficient in pandas / numpy.</p>
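<p>One caveat from the sample data: the <code>duration_since_last_visit</code> column mixes <code>0</code> and strings like <code>'73s'</code>, and <code>mean()</code> needs numbers. Assuming the column really was read in as strings, strip the unit and convert first (a sketch):</p> <pre><code># Drop the trailing 's' and coerce to numbers before averaging
df['duration_since_last_visit'] = pd.to_numeric(
    df['duration_since_last_visit'].astype(str).str.rstrip('s'))
df.groupby('IDs')['duration_since_last_visit'].mean()
</code></pre>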
python|pandas|dataframe
3
9,842
23,814,517
Get indices of intersecting rows of Numpy 2d Array
<p>I want to get the indices of the intersecting rows of a main numpy 2d array A, with another one B.</p> <pre><code>A=array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) B=array([[1, 4], [1, 2], [5, 6], [6, 3]]) result=[0,2] </code></pre> <p>Where this should return [0,2] based on the indices of array A.</p> <p>How can this be done efficiently for 2d arrays?</p> <p>Thank you!</p> <p><strong>edit</strong></p> <p>I have tried the function:</p> <pre><code>k[np.in1d(k.view(dtype='i,i').reshape(k.shape[0]),k2.view(dtype='i,i'). reshape(k2.shape[0]))] </code></pre> <p>from <a href="https://stackoverflow.com/questions/16210738/numpy-in1d-for-2d-arrays">Implementation of numpy in1d for 2D arrays?</a> but I get a reshape error. My datatype is floats (with two decimals). Moreover, I also tried with sets but the performance is quite slow.</p>
<p>With minimal changes, you can get your approach to work:</p> <pre><code>In [15]: A Out[15]: array([[ 1, 2], [ 3, 4], [ 5, 6], [ 7, 8], [ 9, 10]]) In [16]: B Out[16]: array([[1, 4], [1, 2], [5, 6], [6, 3]]) In [17]: np.in1d(A.view('i,i').reshape(-1), B.view('i,i').reshape(-1)) Out[17]: array([ True, False, True, False, False], dtype=bool) In [18]: np.nonzero(np.in1d(A.view('i,i').reshape(-1), B.view('i,i').reshape(-1))) Out[18]: (array([0, 2], dtype=int64),) In [19]: np.nonzero(np.in1d(A.view('i,i').reshape(-1), B.view('i,i').reshape(-1)))[0] Out[19]: array([0, 2], dtype=int64) </code></pre> <p>If your arrays are not floats, and are both contiguous, then the following will be faster:</p> <pre><code>In [21]: dt = np.dtype((np.void, A.dtype.itemsize * A.shape[1])) In [22]: np.nonzero(np.in1d(A.view(dt).reshape(-1), B.view(dt).reshape(-1)))[0] Out[22]: array([0, 2], dtype=int64) </code></pre> <p>And a quick timing:</p> <pre><code>In [24]: %timeit np.nonzero(np.in1d(A.view('i,i').reshape(-1), B.view('i,i').reshape(-1)))[0] 10000 loops, best of 3: 75 µs per loop In [25]: %timeit np.nonzero(np.in1d(A.view(dt).reshape(-1), B.view(dt).reshape(-1)))[0] 10000 loops, best of 3: 29.8 µs per loop </code></pre>
python|arrays|numpy
5
9,843
29,496,622
Can I use dtype to find out whether the elements of a numpy array are strings?
<p>I got a numpy array, for example:</p> <pre><code>myArray = np.array(['a','bc'])
</code></pre> <p>Is it possible to use <code>dtype</code> to find out whether its elements are strings?</p> <p>The only way I can think of is checking <code>myArray.dtype == 'S2'</code>, but my problem is that I don't know in advance how many characters there are in my elements.</p> <p>Can I use something like <code>myArray.dtype == 'str'</code>?</p>
<p>You could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.issubdtype.html" rel="nofollow"><code>issubdtype</code></a> to do the checking:</p> <pre><code>&gt;&gt;&gt; np.issubdtype(myArray.dtype, str) True </code></pre> <p>The function checks whether a given dtype is ordered below another in NumPy's <a href="http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html" rel="nofollow">type hierarchy</a>.</p> <p>Alternatively, you could inspect the dtype's character code directly. String types have code <code>'S'</code>:</p> <pre><code>&gt;&gt;&gt; myArray.dtype.char 'S' </code></pre>
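<p>If you want a single check that covers both byte strings and unicode strings, you can also test the dtype's kind code, which is <code>'S'</code> for byte strings and <code>'U'</code> for unicode:</p> <pre><code>&gt;&gt;&gt; myArray.dtype.kind in ('S', 'U')
True
</code></pre>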
python|arrays|numpy|types
4
9,844
29,719,136
Select only the column names that contain a specific string
<p>A simple example should make this obvious. Sample data:</p> <pre><code>df = pd.DataFrame( np.random.randn(2,6), columns=['x','y','xy','yx','xx','yy'] ) </code></pre> <p>Now, I just want to list the values for columns containing 'x'. Here's a couple of ways:</p> <pre><code>df[[ x for x in df.columns if 'x' in x ]] Out[53]: x xy yx xx 0 2.089078 1.111139 -0.218800 1.025810 1 -0.343189 0.274676 -0.342798 -0.503809 df[ df.columns[pd.Series(df.columns).str.contains('x')] ] Out[54]: x xy yx xx 0 2.089078 1.111139 -0.218800 1.025810 1 -0.343189 0.274676 -0.342798 -0.503809 </code></pre> <p>The latter approach seems promising but it's just really ugly and I haven't so far found a way to shorten it. Something more like this would be great:</p> <pre><code>df[ columns_with( df, 'x' ) ] </code></pre> <p>and in fact I did something just like that with a function, but am wondering if there is a pandastic way to do this without a user written function or monkeypatch?</p> <p>For motivation/background, this sort of thing is super useful when you have an unfamiliar dataset with lots of columns or even when you have a familiar dataset but can't remember the exact name of one variable out of hundreds. For the situations where I need this functionality, I'll often be doing this over and over again during data exploration stages, so it's really worth it to me to have a quick and simple way to do this.</p>
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="noreferrer"><code>DataFrame.filter</code></a> with the <code>like</code> argument:</p> <pre><code>&gt;&gt;&gt; df.filter(like="x") x xy yx xx 0 -1.467867 0.766077 1.210667 1.116529 1 -0.041965 0.546325 -0.590660 1.037151 </code></pre> <p>The <code>like</code> argument means "keep info axis where <code>arg in col == True</code>".</p>
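<p><code>filter</code> also accepts a <code>regex</code> argument if the pattern is more involved than a plain substring; for this example it returns the same four columns:</p> <pre><code>&gt;&gt;&gt; df.filter(regex='x')
</code></pre>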
python|pandas
11
9,845
62,454,451
Python - get the mean of values between 2 dates
<p>I'd like to get the mean of values between 2 dates, grouped by shop.</p> <p>I have a first xlsx file with the sales by shop and date:</p> <pre><code>shop sell date
a    100  2000
a    122  2001
a    300  2002
b    55   2000
b    245  2001
b    1156 2002
</code></pre> <p>And I have another file with the start and stop date for each shop:</p> <pre><code>shop start stop
a    2000  2002
a    2000  2001
b    2000  2000
</code></pre> <p>I'd like to get the mean of <code>sell</code> between each pair of dates from the 2nd file.</p> <p>I tried something like this, but I get a list of DataFrames and it isn't well optimized:</p> <pre><code>dfend = []
for i in df2.values:
    filt1 = df.shop == i[0]
    filt2 = df.date &gt;= i[1]
    filt3 = df.date &lt;= i[2]
    dfgrouped = df.where(filt1 &amp; filt2 &amp; filt3).groupby('shop').agg(mean = ('sell','mean'),
                                                                    begin = ('date','min'),
                                                                    end = ('date', 'max'))
    dfend.append(dfgrouped)
</code></pre> <p>Can someone help me?</p> <p>Thanks a lot.</p>
<p><code>merge</code> the two DataFrames on 'shop'. Then you can check the date condition using <code>between</code> to filter down to the rows that count. Finally <code>groupby</code> + <code>mean</code>. (This assumes your second df is unique.)</p> <pre><code>m = df2.merge(df1, how='left')
(m[m['date'].between(m['start'], m['stop'])]
   .groupby(['shop', 'start', 'stop'])['sell'].mean()
   .reset_index())

#  shop start stop sell
#0    a  2000 2001  111
#1    a  2000 2002  174
#2    b  2000 2000   55
</code></pre> <hr> <p>If there are some rows in <code>df2</code> that will have no qualifying rows in <code>df1</code>, then instead use <code>mask</code> so that they still get a row after the <code>groupby</code> (this is also the reason why <code>df2</code> is the left DataFrame in the merge). Here I added an extra row:</p> <pre><code>print(df2)
#  shop start stop
#0    a  2000 2002
#1    a  2000 2001
#2    b  2000 2000
#3    e  1999 2011

m = df2.merge(df1, how='left')
(m.where(m['date'].between(m['start'], m['stop']))
   .groupby([m.shop, m.start, m.stop])['sell'].mean()
   .reset_index())

#  shop start stop   sell
#0    a  2000 2001  111.0
#1    a  2000 2002  174.0
#2    b  2000 2000   55.0
#3    e  1999 2011    NaN
</code></pre>
python|pandas|dataframe|where-clause
1
9,846
62,083,007
increment counter in a column based on certain condition pandas
<pre><code>col1 col2
-    -
-    -
no   1
-    -
no   2
no   3
</code></pre> <p>I have 2 columns in a dataframe. Whenever 'no' is encountered in col1, I need to increment the counter in col2, as shown above.</p>
<p>Compare values with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> (i.e. <code>==</code>), then take the cumulative sum and replace the non-<code>no</code> values with <code>-</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a>:</p> <pre><code>m = df['col1'].eq('no')
df['col2'] = m.cumsum().where(m, '-')
print (df)
  col1 col2
0    -    -
1    -    -
2   no    1
3    -    -
4   no    2
5   no    3
</code></pre>
python|pandas|dataframe
3
9,847
62,353,697
How to get the aggregation statistics of duplicated rows of pandas dataframe?
<p>How can I get, for each row, the mean of <code>price</code> over the rows that share the same (duplicated) values in some columns?</p> <h1>Setup</h1> <pre><code>df = pd.DataFrame({'A': [0,0,1,1,0,1],
                   'B': [0,0,1,1,0,1],
                   'C': [0,1,0,1,0,1],
                   'unused': [0.1,0.2,0.3,0.4,0.5,0.5],
                   'price': [5,10,50,100,10,200]
                   })
print(df)
</code></pre> <h1>Required output</h1> <pre><code>   A  B  C  unused  price  ABC_mean
0  0  0  0     0.1      5       7.5
1  0  0  1     0.2     10        10
2  1  1  0     0.3     50        50
3  1  1  1     0.4    100       150
4  0  0  0     0.5     10       7.5
5  1  1  1     0.5    200       150
</code></pre>
<p>Use groupby with transform, which broadcasts the group mean back to every row (this creates the <code>ABC_mean</code> column from the required output):</p> <pre><code>df['ABC_mean'] = df.groupby(['A','B','C'])['price'].transform('mean')
</code></pre>
python|pandas
0
9,848
62,163,194
PyTorch: The number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)
<p>I'm trying to convert a CPU model to GPU using PyTorch, but I'm running into issues. I'm running this on Colab and I'm sure that PyTorch detects a GPU. This is a deep Q network (RL).</p> <p>I declare my network as: <code>Q = Q_Network(input_size, hidden_size, output_size).to(device)</code></p> <p>I ran into an issue when I tried to pass arguments through the network (it expected type cuda but got type cpu), so I added <code>.to(device)</code>:</p> <pre class="lang-py prettyprint-override"><code>batch = np.array(shuffled_memory[i:i+batch_size])

b_pobs = np.array(batch[:, 0].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_pact = np.array(batch[:, 1].tolist(), dtype=np.int32)
b_reward = np.array(batch[:, 2].tolist(), dtype=np.int32)
b_obs = np.array(batch[:, 3].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_done = np.array(batch[:, 4].tolist(), dtype=np.bool)

q = Q(torch.from_numpy(b_pobs).to(device))
q_ = Q_ast(torch.from_numpy(b_obs).to(device))

maxq = torch.max(q_.data,axis=1)
target = copy.deepcopy(q.data)

for j in range(batch_size):
    print(target[j, b_pact[j]].shape) # torch.Size([])
    target[j, b_pact[j]] = b_reward[j]+gamma*maxq[j]*(not b_done[j]) # I run into issues here
</code></pre> <p>Here is the error:</p> <p><code>RuntimeError: expand(torch.cuda.FloatTensor{[50]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)</code></p>
<p><code>target[j, b_pact[j]]</code> is a single element of the tensor (a scalar, hence size of <code>torch.Size([])</code>). If you want to assign anything to it, the right hand side can only be a scalar. That is not the case, as one of the terms is a tensor with 1 dimension (a vector), namely your <code>maxq[j]</code>.</p> <p>When specifying a dimension <code>dim</code> (<code>axis</code> is treated as a synonym) to <a href="https://pytorch.org/docs/stable/torch.html#torch.max" rel="nofollow noreferrer"><code>torch.max</code></a>, it will return a named tuple of <code>(values, indices)</code>, where <code>values</code> contains the maximum values and <code>indices</code> the location of each of the maximum values (equivalent to argmax).</p> <p><code>maxq[j]</code> is not indexing into the maximum values, but rather the tuple of <code>(values, indices)</code>. If you only want the values you can use one of the following to get the values out of the tuple (all of them are equivalent, you can use whichever you prefer):</p> <pre class="lang-py prettyprint-override"><code># Destructure/unpack and ignore the indices maxq, _ = torch.max(q_.data,axis=1) # Access first element of the tuple maxq = torch.max(q_.data,axis=1)[0] # Access `values` of the named tuple maxq = torch.max(q_.data,axis=1).values </code></pre>
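<p>Putting it together with the variable names from the question (a sketch; the rest of the training loop is assumed unchanged):</p> <pre class="lang-py prettyprint-override"><code># Keep only the maximum values, not the (values, indices) tuple
maxq = torch.max(q_.data, axis=1).values

target = copy.deepcopy(q.data)
for j in range(batch_size):
    # The right-hand side is now a scalar, matching target[j, b_pact[j]]
    target[j, b_pact[j]] = b_reward[j] + gamma * maxq[j] * (not b_done[j])
</code></pre>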
python|numpy|pytorch|tensor
3
9,849
51,228,876
Count of values in a pandas df
<p>I am trying to count the number of values in a pandas df. I want to do this row by row. So for the df below, I want to count the number of non-empty tuples in each cell, exported row by row.</p> <pre><code>d = ({
    'A' : [[(1,2),(3,4)],[(1,2)],[()],[(1,2)]],
    'B' : [[(1,2)],[(1,2)],[(1,2),(3,4)],[(1,2)]],
    'C' : [[(1,2)],[()],[()],[()]],
    'D' : [[()],[(1,2),(3,4)],[(1,2),(3,4)],[()]],
    })
df = pd.DataFrame(data=d)
</code></pre> <p>I tried to do this using:</p> <pre><code>l = ([len(i) for i in df])
</code></pre> <p>Output:</p> <pre><code>[1, 1, 1, 1]
</code></pre> <p>whereas I'm hoping the intended output is:</p> <pre><code>   A  B  C  D
0  2  1  0  1
1  1  1  2  1
2  1  0  0  0
3  0  2  2  0
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow noreferrer"><code>applymap</code></a> with <code>if-else</code>:</p> <pre><code>df = df.applymap(lambda x: len(x) if x != [()] else 0)
#alternative
#df = df.applymap(lambda x: 0 if x == [()] else len(x))
print (df)
   A  B  C  D
0  2  1  1  0
1  1  1  0  2
2  0  2  0  2
3  1  1  0  0
</code></pre> <p>Lastly, if you need to transpose the output:</p> <pre><code>df1 = pd.DataFrame(df.T.values, index=df.index, columns=df.columns)
print (df1)
   A  B  C  D
0  2  1  0  1
1  1  1  2  1
2  1  0  0  0
3  0  2  2  0
</code></pre>
python|pandas
0
9,850
51,366,666
Ecommerce item sales forecasting with pandas and statsmodels
<p>I want to forecast item sales (number of sales for each product) with pandas and statsmodels for an ecommerce business. Because item sales is a count variable, I'm assuming a Poisson model would work best.</p> <p>In an ideal world the model will be used to decide which products to use in ads (increasing product views) and also to set price points (changing prices) for the best performance/profitability.</p> <p>So far so good, however when I try:</p> <pre><code>...
import statsmodels.formula.api as smf
...
result = smf.poisson(formula="Item_Sales ~ Product_Detail_Views + Variant_Price + C(Product_Type)", data=df).fit()
</code></pre> <p>I get:</p> <pre><code>RuntimeWarning: invalid value encountered in multiply
  return -np.dot(L*X.T, X)
RuntimeWarning: invalid value encountered in greater_equal
  return mu &gt;= 0
RuntimeWarning: invalid value encountered in greater
  oldparams) &gt; tol))
</code></pre> <p>And a table full of NaNs.</p> <p>If I use OLS with the same dataset:</p> <pre><code>result = smf.ols(formula="Item_Sales ~ Product_Detail_Views + Variant_Price + C(Product_Type)", data=df).fit()
</code></pre> <p>I get an R-squared of 0.809, so the data is good. The model isn't as usable, though, as I get negative predictions, which are obviously not possible (you cannot have negative sales of items).</p> <p>How can I make the Poisson model work?</p>
<p>Looks like a data problem. Since no sample data is shown, I cannot be sure. You can try using GLM with a Poisson family, or GEE with a Poisson family.</p> <p>example:</p> <pre><code>import statsmodels.api as sm
import statsmodels.formula.api as smf

smf.glm('sedimentation ~ C(control_grid)', data=df,
        family=sm.families.Poisson()).fit()
</code></pre>
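<p>Applied to the question's own formula, a hedged sketch (assuming <code>df</code> is the same frame used for the OLS fit):</p> <pre><code>import statsmodels.api as sm
import statsmodels.formula.api as smf

result = smf.glm(formula="Item_Sales ~ Product_Detail_Views + Variant_Price + C(Product_Type)",
                 data=df, family=sm.families.Poisson()).fit()
print(result.summary())
</code></pre> <p>A nice side effect of the Poisson family: predictions come from the exponential of the linear predictor, so they can never be negative.</p>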
python|pandas|statsmodels
1
9,851
48,136,804
tf.Estimator.train throws as_list() is not defined on an unknown TensorShape
<p>I created a custom <code>input_fn</code> and converted a keras model into a <code>tf.Estimator</code> for training. However, it keeps throwing an error.</p> <ul> <li><p>Here is my model summary. I have attempted to set the <code>Input</code> layer with <code>batch_shape=(16, 320, 320, 3)</code> for testing, but the problem still persists.</p> <pre><code>inputs = Input(batch_shape=(16, 320, 320, 3), name='input_images')
outputs = yolov2.predict(inputs)

model = Model(inputs, outputs)
model.compile(optimizer= tf.keras.optimizers.Adam(lr=learning_rate),
              loss = compute_loss)
</code></pre></li> <li><p>I used <code>tf.keras.estimator.model_to_estimator</code> for the conversion. I also created an <code>input_fn</code> for training:</p> <pre><code>def input_fn(images, labels, batch_size, shuffle=True):
    dataset = create_tfdataset(images, labels)
    dataset = dataset.shuffle().batch(batch_size)
    iterator = dataset.make_one_shot_iterator()
    images, labels = iterator.get_next()
    return {'input_images': images}, labels

estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
estimator.train(input_fn = lambda: input_fn(images, labels, 32),
                max_steps = 1000)
</code></pre></li> <li><p>And it throws me this error:</p> <pre><code>input_tensor = Input(tensor=x, name='input_wrapper_for_' + name)
...
File "/home/dat/anaconda3/envs/webapp/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 1309, in __init__
    self._batch_input_shape = tuple(input_tensor.get_shape().as_list())
"as_list() is not defined on an unknown TensorShape.")
ValueError: as_list() is not defined on an unknown TensorShape.
</code></pre></li> </ul>
<p>I had the same problem. In input_fun, if you look at images in line "return {'input_images': images}, labels", you'll see that your tensor has no shape. You have to call set_shape for each image. Look at <a href="https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py</a>, they call vgg_preprocessing.preprocess_image to set the shape</p>
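<p>A minimal sketch of that fix, assuming the 320x320x3 shape from the <code>Input</code> layer in the question:</p> <pre><code>def _set_shapes(image, label):
    # The static shape is unknown after decoding; pin it down explicitly
    image.set_shape([320, 320, 3])
    return image, label

dataset = dataset.map(_set_shapes)
</code></pre>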
python|tensorflow|deep-learning|keras
1
9,852
48,253,428
NumPy hstack throws "ValueError: all the input arrays must have same number of dimensions?"
<pre><code>data = [['297348640', 'Y', '12', 'Y'],
        ['300737722','Y', '1', 'Y'],
        ['300074407', 'Y', '1', 'N']]
</code></pre> <p>I want to convert this into a NumPy array, so I did:</p> <pre><code>data = np.array(data)
</code></pre> <p>The above is working fine.</p> <p>Now I have two lists, say</p> <pre><code>a = [0,2,6]
b = [21,21,9]
</code></pre> <p>I have to append these at the end of my previous list:</p> <pre><code>data = [['297348640', 'Y', '12', 'Y', 0, 21],
        ['300737722','Y', '1', 'Y', 2, 21],
        ['300074407', 'Y', '1', 'N', 6, 9]]
</code></pre> <p>I tried this, but it's giving me a dimension error:</p> <pre><code>a = np.array([a])
b = np.array([b])
data = np.hstack(data,(a))
data = np.hstack(data,(b))

ValueError: all the input arrays must have same number of dimensions
</code></pre>
<p>Similar to @cᴏʟᴅsᴘᴇᴇᴅ's solution, but instead of passing <code>dtype=object</code>, you can be more explicit by passing <code>data</code>'s dtype:</p> <pre><code>data = np.array([['297348640', 'Y', '12', 'Y'], ['300737722','Y', '1', 'Y'], ['300074407', 'Y', '1', 'N']]) a = [0,2,6] b = [21,21,9] a = np.array(a, dtype=data.dtype) b = np.array(b, dtype=data.dtype) data = np.hstack((data, a[:, None], b[:, None])) </code></pre> <p>The first argument to <code>np.hstack</code> is a <em>sequence of arrays</em>. Right now, you are passing <code>np.hstack(data,(a))</code>, which actually gets parsed as two arguments. Adding an additional set of parantheses brings <code>data</code> and <code>a</code> (and <code>b</code>) into one sequence (a tuple).</p> <p>And lastly as for the indexing: <a href="https://stackoverflow.com/q/37867354/7954504">In numpy, what does selection by [:,None] do?</a>. This is mimicking <code>np.reshape()</code>.</p>
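<p>Alternatively, <code>np.column_stack</code> treats 1-D inputs as columns, so the manual reshaping can be skipped entirely. Note that because <code>data</code> holds strings, <code>a</code> and <code>b</code> are cast to strings to match:</p> <pre><code>data = np.column_stack((data, a, b))
</code></pre>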
python|arrays|numpy
4
9,853
48,393,080
Plot Multicolored Time Series Plot based on Conditional in Python
<p>I have a pandas financial time-series DataFrame with two columns and one datetime index.</p> <pre><code>            TOTAL.PAPRPNT.M  Label
1973-03-01        25504.000      3
1973-04-01        25662.000      3
1973-05-01        25763.000      0
1973-06-01        25996.000      0
1973-07-01        26023.000      1
1973-08-01        26005.000      1
1973-09-01        26037.000      2
1973-10-01        26124.000      2
1973-11-01        26193.000      3
1973-12-01        26383.000      3
</code></pre> <p>As you can see, each data point corresponds to a 'Label'. This label essentially classifies whether the line from the previous 'point' to the next 'point' carries certain characteristics (different types of stock graph changes), and the plot should therefore use a <strong>separate color</strong> for each of them. This question is related to <a href="https://stackoverflow.com/questions/31590184/plot-multicolored-line-based-on-conditional-in-python">Plot Multicolored line based on conditional in python</a>, but the 'groupby' part went over my head, and that scheme is bicolored rather than multicolored (I have four labels).</p> <p>I want to create a multicolored plot of the graph based on the labels associated with each entry in the dataframe.</p>
<p>Here's an example of what I think your trying to do. It's based on the MPL documentation mentioned in the comments and uses randomly generated data. Just map the colormap boundaries to the discrete values given by the number of classes.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.collections import LineCollection from matplotlib.colors import ListedColormap, BoundaryNorm import pandas as pd num_classes = 4 ts = range(10) df = pd.DataFrame(data={'TOTAL': np.random.rand(len(ts)), 'Label': np.random.randint(0, num_classes, len(ts))}, index=ts) print(df) cmap = ListedColormap(['r', 'g', 'b', 'y']) norm = BoundaryNorm(range(num_classes+1), cmap.N) points = np.array([df.index, df['TOTAL']]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) lc = LineCollection(segments, cmap=cmap, norm=norm) lc.set_array(df['Label']) fig1 = plt.figure() plt.gca().add_collection(lc) plt.xlim(df.index.min(), df.index.max()) plt.ylim(-1.1, 1.1) plt.show() </code></pre> <p>Each line segment is coloured according to the class label given in <code>df['Label']</code> Here's a sample result:</p> <p><a href="https://i.stack.imgur.com/OMXNf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/OMXNf.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|plot|time-series
6
9,854
48,461,808
Convert a dictionary including many dataframes into a dataframe
<p>I have a dictionary from an influxdb time-series database. This dictionary includes many keys like KEY1, KEY2, etc. Each key corresponds to a tabular dataset. As an example:</p> <pre><code>dict = {
KEY1:        MAX   MIN
      Date1  max1  min1
      Date2  max2  min2,
KEY2:        MAX   MIN
      Date1  max3  min3
      Date2  max4  min4}
</code></pre> <p>What I want is a data frame like:</p> <pre><code>KEYS  DATES  MAX   MIN
KEY1  Date1  max1  min1
KEY1  Date2  max2  min2
KEY2  Date1  max3  min3
KEY2  Date2  max4  min4
</code></pre> <p>I hope it's clear enough.</p>
<p>Just add a <code>key</code> attribute to each dataframe and <code>concat</code>, like so:</p> <pre><code>import pandas as pd d = {'k1': pd.DataFrame({'max': [1, 2, 3], 'min': [1, 2, 3]}, index=[0, 1, 2]), 'k2': pd.DataFrame({'max': [4, 5, 6], 'min': [4, 5, 6]}, index=[3, 4, 5])} for k in d: d[k].insert(0, 'key', [k]*len(d[k])) res = pd.concat(d.values()) print(res) </code></pre> <p>prints out</p> <pre><code> key max min 0 k1 1 1 1 k1 2 2 2 k1 3 3 3 k2 4 4 4 k2 5 5 5 k2 6 6 </code></pre> <p>n.b. this also works if the indices are repeated across dataframes:</p> <pre><code> key max min 0 k1 1 1 1 k1 2 2 2 k1 3 3 0 k2 4 4 1 k2 5 5 2 k2 6 6 </code></pre>
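<p>A shorter sketch that avoids mutating the original frames: <code>pd.concat</code> accepts the dict directly and puts its keys into an extra index level, which you can then turn into a column:</p> <pre><code>res = (pd.concat(d)
         .reset_index(level=0)
         .rename(columns={'level_0': 'key'}))
</code></pre>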
python|pandas|dictionary|dataframe|time-series
0
9,855
48,880,634
How to select rows and replace some columns in pandas
<pre><code>import pandas as pd
import numpy as np

dic = {'A': [np.nan, 4, np.nan, 4],
       'B': [9, 2, 5, 3],
       'C': [0, 0, 5, 3]}
df = pd.DataFrame(dic)
df
</code></pre> <p>If I have data like below</p> <pre><code>     A  B  C
0  NaN  9  0
1  4.0  2  0
2  NaN  5  5
3  4.0  3  3
</code></pre> <p>I want to select the rows where column A is <code>NaN</code> and replace column B's value with np.nan, as follows.</p> <pre><code>     A    B  C
0  NaN  NaN  0
1  4.0  2.0  0
2  NaN  NaN  5
3  4.0  3.0  3
</code></pre> <p>I tried to do <code>df[df.A.isna()]["B"]=np.nan</code>, but it didn't work.<br> According to <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">this page</a>, I should select data with <code>df.iloc</code>. But the problem is that if df has numerous rows, I can't select data by typing in indices by hand.</p>
<p><strong>Option 1</strong><br> You were pretty close actually. Use <code>pd.Series.isnull</code> on <code>A</code> and assign values to <code>B</code> using <code>df.loc</code>: </p> <pre><code>df.loc[df.A.isnull(), 'B'] = np.nan df A B C 0 NaN NaN 0 1 4.0 2.0 0 2 NaN NaN 5 3 4.0 3.0 3 </code></pre> <hr> <p><strong>Option 2</strong><br> <code>np.where</code>:</p> <pre><code>df['B'] = np.where(df.A.isnull(), np.nan, df.B) df A B C 0 NaN NaN 0 1 4.0 2.0 0 2 NaN NaN 5 3 4.0 3.0 3 </code></pre>
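<p><strong>Option 3</strong><br> For completeness, <code>Series.mask</code> does the same thing; it replaces values with <code>NaN</code> wherever the condition is <code>True</code>:</p> <pre><code>df['B'] = df['B'].mask(df.A.isnull())
df

     A    B  C
0  NaN  NaN  0
1  4.0  2.0  0
2  NaN  NaN  5
3  4.0  3.0  3
</code></pre>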
python|pandas
5
9,856
48,757,358
Pandas Merge Rows with Duplicate Ids Conditionally, Suitable for to CSV
<p>I have the following df and I want to merge the lines that have the same Ids, unless there are duplicates</p> <pre><code>Ids A B C D E F G H I J 4411 24 2 55 26 1 4411 24 2 54 26 0 4412 22 4 54 26 0 4412 18 8 54 26 0 7401 12 14 54 26 0 7401 0 25 53 26 0 7402 24 2 54 26 0 7402 25 1 54 26 0 10891 16 10 54 26 0 10891 3 23 54 26 0 10891 5 10 6 15 0 </code></pre> <p>Example output </p> <pre><code>Ids A B C D E F G H I J 4411 24 2 55 26 1 24 2 54 26 0 4412 22 4 54 26 0 18 8 54 26 0 7401 12 14 54 26 0 0 25 53 26 0 7402 24 2 54 26 0 25 1 54 26 0 10891 16 10 54 26 0 3 23 54 26 0 10891 5 10 6 15 0 </code></pre> <p>I tried groupby but that throws errors when you write to csv. </p>
<p>This will be slow, but it achieves what you need:</p> <pre><code>df.replace('',np.nan).groupby('Ids').apply(lambda x: pd.DataFrame(x).apply(lambda x: sorted(x, key=pd.isnull),0)).dropna(axis=0,thresh=2).fillna('')

Out[539]:
     Ids     A     B     C     D    E     F     G     H     I    J
0   7402  24.0   2.0  54.0  26.0  0.0  25.0   1.0  54.0  26.0  0.0
2  10891  16.0  10.0  54.0  26.0  0.0   3.0  23.0  54.0  26.0  0.0
3  10891   5.0  10.0   6.0  15.0  0.0
</code></pre>
python|pandas|csv|dataframe
2
9,857
70,982,709
Tensorflow save and load_model not working but save and load_weights does
<p>I am using tensorflow version 2.8.0:</p> <p>I have seen this issue from multiple sources all over forums, githubs, and even some here for the past 5 years with no definitive answer that has worked for me... For some reason, in certain situations, a loaded model from a previous save yields very different results from the original model evaluation. I haven't seen any well documented and investigative questions about this so I thought I'd show my full code below (simple illustration of the issue).</p> <p>This is an application of transfer learning from a pre-trained tensorflow model. The model is first trained through 5 epochs on train_data, then fine tuned (with more trainable params) for 5 more. Evaluating the model on test_data shows an accuracy of 0.5671. The model is then saved and loaded in .h5 format (I have also tried the tf SavedModel format and the result is the same). The resultant loaded_model yields an evaluation accuracy on the same, unaltered test_data of 0.4535.</p> <p>The result should be the same (0.5671)... so to further investigate I decided to save the fine tuned model's weights independently, construct and compile the same model architecture in new_model, and load the saved model's weights into new_model. Evaluating new_model yields the correct result, an accuracy of 0.5671. ----- Okay, so it must be the weights not saving properly right? I pulled the weights from each of these three models (model, loaded_model, new_model) and compared their flattened results. They are all the same. I really have no idea what's going on here but I'm assuming it is not random initialization, because the loaded_model evaluation results really did not perform anywhere near the fine tuned model - I would assume they would converge much closer.</p> <pre><code>import tensorflow as tf tf.random.set_seed(42) import pandas as pd import numpy as np import os import pathlib data_dir = pathlib.Path(&quot;101_food_classes_10_percent/train&quot;) class_names = np.array(sorted([item.name for item in data_dir.glob('*')])) train_dir = './101_food_classes_10_percent/train/' test_dir = './101_food_classes_10_percent/test/' from tensorflow.keras.preprocessing.image import ImageDataGenerator datagen=ImageDataGenerator() train_data = datagen.flow_from_directory(directory = train_dir, target_size = (224,224), batch_size = 32, class_mode='categorical') test_data = datagen.flow_from_directory(directory = test_dir, target_size = (224,224), batch_size = 32, class_mode='categorical') from tensorflow.keras.layers.experimental import preprocessing data_augmentation = tf.keras.Sequential([ preprocessing.RandomFlip('horizontal'), preprocessing.RandomRotation(0.2), preprocessing.RandomZoom(0.2), preprocessing.RandomHeight(0.2), preprocessing.RandomWidth(0.2) #preprocessing.Rescaling(1/255.) in EfficientNet it's already scaled but could use this for non-scaled ], name = 'data_augmentation') </code></pre> <pre><code>Found 7575 images belonging to 101 classes. Found 25250 images belonging to 101 classes. 
</code></pre> <pre><code># Build headless model - Feature Extraction # Setup base with frozen layers base_model = tf.keras.applications.EfficientNetB0(include_top=False) base_model.trainable=False inputs = tf.keras.layers.Input(shape = (224,224,3)) x = data_augmentation(inputs) x = base_model(x, training=False) x = tf.keras.layers.GlobalAveragePooling2D()(x) # Pool base_model's outputs into a feature vector outputs = tf.keras.layers.Dense(len(class_names), activation='softmax')(x) model = tf.keras.Model(inputs,outputs) model.compile('Adam', 'categorical_crossentropy', metrics=['accuracy']) model.summary() </code></pre> <pre><code>Model: &quot;model_1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_4 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ data_augmentation (Sequentia (None, None, None, 3) 0 _________________________________________________________________ efficientnetb0 (Functional) (None, None, None, 1280) 4049571 _________________________________________________________________ global_average_pooling2d_1 ( (None, 1280) 0 _________________________________________________________________ dense_1 (Dense) (None, 101) 129381 ================================================================= Total params: 4,178,952 Trainable params: 129,381 Non-trainable params: 4,049,571 _________________________________________________________________ </code></pre> <pre><code>history = model.fit(train_data, validation_data=test_data, validation_steps=int(0.15*len(test_data)), epochs=5, callbacks = [checkpoint_callback]) </code></pre> <pre><code>Epoch 1/5 237/237 [==============================] - 63s 230ms/step - loss: 3.4712 - accuracy: 0.2482 - val_loss: 2.4446 - val_accuracy: 0.4497 Epoch 2/5 237/237 [==============================] - 52s 221ms/step - loss: 2.3575 - accuracy: 0.4561 - val_loss: 2.0051 - val_accuracy: 0.5093 Epoch 3/5 237/237 [==============================] - 51s 216ms/step - loss: 1.9838 - accuracy: 0.5265 - val_loss: 1.8313 - val_accuracy: 0.5360 Epoch 4/5 237/237 [==============================] - 51s 212ms/step - loss: 1.7497 - accuracy: 0.5761 - val_loss: 1.7417 - val_accuracy: 0.5461 Epoch 5/5 237/237 [==============================] - 53s 221ms/step - loss: 1.6035 - accuracy: 0.6141 - val_loss: 1.7012 - val_accuracy: 0.5601 </code></pre> <pre><code>model.evaluate(test_data) </code></pre> <pre><code>790/790 [==============================] - 87s 110ms/step - loss: 1.7294 - accuracy: 0.5481 [1.7294203042984009, 0.5480791926383972] </code></pre> <pre><code># Fine tuning: unfreeze some layers, lower leaning rate by 10x base_model.trainable=True # Refreeze every layer except last 5, adjust tiner tuned features down the model for layer in base_model.layers[:-5]: layer.trainable=False # recompile and lower learning rate by 10x model.compile(tf.keras.optimizers.Adam(learning_rate=0.0001), 'categorical_crossentropy', metrics=['accuracy']) model.summary() </code></pre> <pre><code>Model: &quot;model_1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_4 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ data_augmentation (Sequentia (None, None, None, 3) 0 _________________________________________________________________ 
efficientnetb0 (Functional) (None, None, None, 1280) 4049571 _________________________________________________________________ global_average_pooling2d_1 ( (None, 1280) 0 _________________________________________________________________ dense_1 (Dense) (None, 101) 129381 ================================================================= Total params: 4,178,952 Trainable params: 910,821 Non-trainable params: 3,268,131 _________________________________________________________________ </code></pre> <pre><code># Fine Tune for 5 more epochs starting with last epoch left off at: fine_tune_epochs=10 # Total number of epochs we're after: 5 feature extraction, 5 fine tuning history_fine_tune = model.fit(train_data, validation_data = test_data, validation_steps=int(0.15*len(test_data)), epochs = fine_tune_epochs, initial_epoch = history.epoch[-1]) </code></pre> <pre><code>Epoch 5/10 237/237 [==============================] - 59s 220ms/step - loss: 1.3571 - accuracy: 0.6543 - val_loss: 1.6403 - val_accuracy: 0.5567 Epoch 6/10 237/237 [==============================] - 51s 213ms/step - loss: 1.2478 - accuracy: 0.6688 - val_loss: 1.6805 - val_accuracy: 0.5596 Epoch 7/10 237/237 [==============================] - 46s 193ms/step - loss: 1.1424 - accuracy: 0.6964 - val_loss: 1.6352 - val_accuracy: 0.5736 Epoch 8/10 237/237 [==============================] - 45s 191ms/step - loss: 1.0902 - accuracy: 0.7065 - val_loss: 1.6494 - val_accuracy: 0.5657 Epoch 9/10 237/237 [==============================] - 46s 193ms/step - loss: 1.0229 - accuracy: 0.7275 - val_loss: 1.6348 - val_accuracy: 0.5633 Epoch 10/10 237/237 [==============================] - 45s 191ms/step - loss: 0.9704 - accuracy: 0.7434 - val_loss: 1.6990 - val_accuracy: 0.5670 </code></pre> <pre><code>model.evaluate(test_data) </code></pre> <pre><code>790/790 [==============================] - 83s 105ms/step - loss: 1.6578 - accuracy: 0.5671 [1.657836675643921, 0.5670890808105469] </code></pre> <pre><code>model.save(&quot;./101_food_classes_10_percent/big_modelh5&quot;) loaded_model = tf.keras.models.load_model(&quot;./101_food_classes_10_percent/big_modelh5.h5&quot;) loaded_model.summary() </code></pre> <pre><code>Model: &quot;model_1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_4 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ data_augmentation (Sequentia (None, None, None, 3) 0 _________________________________________________________________ efficientnetb0 (Functional) (None, None, None, 1280) 4049571 _________________________________________________________________ global_average_pooling2d_1 ( (None, 1280) 0 _________________________________________________________________ dense_1 (Dense) (None, 101) 129381 ================================================================= Total params: 4,178,952 Trainable params: 910,821 Non-trainable params: 3,268,131 _________________________________________________________________ </code></pre> <pre><code>loaded_model.evaluate(test_data) </code></pre> <pre><code>790/790 [==============================] - 85s 104ms/step - loss: 2.1780 - accuracy: 0.4535 - loss: 2.1790 - accuracy [2.1780412197113037, 0.4534653425216675] </code></pre> <pre><code># Try save_weights to another model model.save_weights('my_model_weights.h5') inputs = tf.keras.layers.Input(shape = (224,224,3)) x = data_augmentation(inputs) x = base_model(x, 
training=False) x = tf.keras.layers.GlobalAveragePooling2D()(x) # Pool base_model's outputs into a feature vector outputs = tf.keras.layers.Dense(len(class_names), activation='softmax')(x) new_model = tf.keras.Model(inputs,outputs) new_model.compile('Adam', 'categorical_crossentropy', metrics=['accuracy']) new_model.summary() </code></pre> <pre><code>Model: &quot;model_2&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_5 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ data_augmentation (Sequentia (None, None, None, 3) 0 _________________________________________________________________ efficientnetb0 (Functional) (None, None, None, 1280) 4049571 _________________________________________________________________ global_average_pooling2d_2 ( (None, 1280) 0 _________________________________________________________________ dense_2 (Dense) (None, 101) 129381 ================================================================= Total params: 4,178,952 Trainable params: 910,821 Non-trainable params: 3,268,131 _________________________________________________________________ </code></pre> <pre><code>new_model.load_weights('my_model_weights.h5') # Saving weights works... but not save and load_model new_model.evaluate(test_data) </code></pre> <pre><code>790/790 [==============================] - 88s 109ms/step - loss: 1.6578 - accuracy: 0.5671 [1.6578353643417358, 0.5670890808105469] </code></pre> <pre><code># Check if weights are the same? m1 = model.get_weights() m2 = new_model.get_weights() m3 = loaded_model.get_weights() len(m1)==len(m2)==len(m3) </code></pre> <pre><code>True </code></pre> <pre><code>from collections.abc import Iterable def flatten(l): for el in l: if isinstance(el, Iterable) and not isinstance(el, (str, bytes)): yield from flatten(el) else: yield el m1 = flatten(m1) m2 = flatten(m2) m3 = flatten(m3) print(list(m1)==list(m2)) print(list(m1)==list(m3)) </code></pre> <pre><code>True True </code></pre>
<p>This is because you have not saved your entire model with the <code>.h5</code> extension; you only used <code>.h5</code> when saving the weights. <br>Please check the code section below:</p> <pre><code>model.save(&quot;./101_food_classes_10_percent/big_modelh5&quot;) # add .h5
loaded_model = tf.keras.models.load_model(&quot;./101_food_classes_10_percent/big_modelh5.h5&quot;)
loaded_model.summary()
</code></pre> <p>Use this code to save the entire model in the <code>HDF5</code> file format and try loading it again:</p> <pre><code>model.save(&quot;./101_food_classes_10_percent/big_modelh5.h5&quot;)
</code></pre> <p>Check <a href="https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model" rel="nofollow noreferrer">this</a> for more details on saving a model in <code>.hdf5</code> format.</p>
python|tensorflow|keras|save|load
0
9,858
70,753,740
sns.barplot ValueError: Length of values (9) does not match length of index (363)
<p>I wanted to extract the number of specific values from the columns - (<code>I</code> <code>s</code> <code>a</code> <code>x</code>) etc, so as I extracted I got stack that my chart doesn't want to read that.</p> <h1># Crashes AND Newes Frequency (I s a x)</h1> <pre><code># occurance.sort() qqq= df.Crashes.str.count(&quot;Wall Street Crash of 1929&quot;).sum() www= df.Crashes.str.count(&quot;Russian financial crisis of 1998&quot;).sum() eee= df.Crashes.str.count(&quot;Dot-com bubble of 2000&quot;).sum() rrr= df.Crashes.str.count(&quot;Financial crisis of 2007–08&quot;).sum() ttt= df.Crashes.str.count(&quot;Cryptocurrency crash of 2018&quot;).sum() yyy= df.Crashes.str.count(&quot;Chinese stock bubble of 2007&quot;).sum() uuu= df.Crashes.str.count(&quot;March Covid-19 crash of 2020&quot;).sum() iii= df.Crashes.str.count(&quot;Other&quot;).sum() ooo= df.Crashes.str.count(&quot;I do not know any&quot;).sum() occurance11 = [qqq,www,eee,rrr,ttt,yyy,uuu,iii,ooo] plt.figure(figsize=(8,6)) sns.barplot(df[&quot;News_frequency&quot;], y=occurance11,) plt.title('Correlation between Frequency of Following Financial News and Highest Education from Russians Investors', fontsize=14) plt.xlabel(&quot;Frequency of Following Financial (1: Never, 4: Always)&quot;) plt.ylabel(&quot;Type of Education&quot;); # occurance.sort() # plt.figure(figsize=(10,8)) # New_Colors = ['green','blue','purple','brown','teal','black','orange'] # plt.bar(Investment__goal, occurance,color=New_Colors) # plt.title('Known Financial Crashes by Russians ', fontsize=14) # plt.xlabel('Crashes', fontsize=14) # plt.ylabel('Occurrence', fontsize=14) # plt.grid(True) # plt.xticks( # rotation=45, # horizontalalignment='right', # fontweight='light', # fontsize='x-large') # for index,data in enumerate(occurance): # plt.text(x=index , y =data+1 , s=f&quot;{data}&quot; , fontdict=dict(fontsize=12)) # plt.tight_layout() # plt.show() # print (len(df.Crashes)) </code></pre> <pre><code>ValueError Traceback (most recent call last) /var/folders/q8/qn3d11d90fbbz0j6kllhpn9h0000gn/T/ipykernel_34081/161240386.py in &lt;module&gt; 22 23 plt.figure(figsize=(8,6)) ---&gt; 24 sns.barplot(df[&quot;News_frequency&quot;], y=occurance11,) 25 plt.title('Correlation between Frequency of Following Financial News and Highest Education from Russians Investors', fontsize=14) 26 plt.xlabel(&quot;Frequency of Following Financial (1: Never, 4: Always)&quot;) ~/opt/anaconda3/lib/python3.9/site-packages/seaborn/_decorators.py in inner_f(*args, **kwargs) 44 ) 45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)}) ---&gt; 46 return f(**kwargs) 47 return inner_f 48 ~/opt/anaconda3/lib/python3.9/site-packages/seaborn/categorical.py in barplot(x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge, ax, **kwargs) 3180 ): 3181 -&gt; 3182 plotter = _BarPlotter(x, y, hue, data, order, hue_order, 3183 estimator, ci, n_boot, units, seed, 3184 orient, color, palette, saturation, ~/opt/anaconda3/lib/python3.9/site-packages/seaborn/categorical.py in __init__(self, x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge) 1582 errwidth, capsize, dodge): 1583 &quot;&quot;&quot;Initialize the plotter.&quot;&quot;&quot; -&gt; 1584 self.establish_variables(x, y, hue, data, orient, 1585 order, hue_order, units) 1586 self.establish_colors(color, palette, saturation) 
~/opt/anaconda3/lib/python3.9/site-packages/seaborn/categorical.py in establish_variables(self, x, y, hue, data, orient, order, hue_order, units) 204 205 # Group the numeric data --&gt; 206 plot_data, value_label = self._group_longform(vals, groups, 207 group_names) 208 ~/opt/anaconda3/lib/python3.9/site-packages/seaborn/categorical.py in _group_longform(self, vals, grouper, order) 248 else: 249 index = None --&gt; 250 vals = pd.Series(vals, index=index) 251 252 # Group the val data ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/series.py in __init__(self, data, index, dtype, name, copy, fastpath) 428 index = ibase.default_index(len(data)) 429 elif is_list_like(data): --&gt; 430 com.require_length_match(data, index) 431 432 # create/copy the manager ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/common.py in require_length_match(data, index) 529 &quot;&quot;&quot; 530 if len(data) != len(index): --&gt; 531 raise ValueError( 532 &quot;Length of values &quot; 533 f&quot;({len(data)}) &quot; ValueError: Length of values (9) does not match length of index (363) &lt;Figure size 576x432 with 0 Axes&gt; </code></pre>
<p>You'll probably want to store all your terms into a list. That way, the list of occurrences can be created via a loop. The terms can serve as labels for the bars. As they are quite long, newlines can be inserted to display them over multiple lines.</p> <p>Converting lists to numpy arrays, <code>np.argsort()</code> can be used to find the order of the values. Adding <code>[::-1]</code> reverses the ordering, which then can be used to index the arrays.</p> <p>Here is some example code with dummy data showing how it could work:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = ... terms = [&quot;Wall Street Crash of 1929&quot;, &quot;Russian financial crisis of 1998&quot;, &quot;Dot-com bubble of 2000&quot;, &quot;Financial crisis of 2007–08&quot;, &quot;Cryptocurrency crash of 2018&quot;, &quot;Chinese stock bubble of 2007&quot;, &quot;March Covid-19 crash of 2020&quot;, &quot;Other&quot;, &quot;I do not know any&quot;] occurance11 = np.array([df.Crashes.str.count(term).sum() for term in terms]) ordering = np.argsort(occurance11)[::-1] terms_with_newlines = [term.replace(' ', '\n').replace('of\n', 'of ').replace('I\ndo', 'I do') for term in terms] terms_with_newlines = np.array(terms_with_newlines) fig, ax = plt.subplots(figsize=(12, 4)) sns.barplot(x=terms_with_newlines[ordering], y=occurance11[ordering], palette='flare', ax=ax) sns.despine() ax.tick_params(axis='x', length=0, labelrotation=0) plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/sA5eO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sA5eO.png" alt="sns.barplot with counts" /></a></p>
python|pandas|seaborn
0
9,859
51,862,716
Cannot insert subtotals into pandas dataframe
<p>I'm rather new to Python and to Pandas. With the help of Google and StackOverflow, I've been able to get most of what I'm after. However, this one has me stumped. I have a dataframe that looks like this:</p> <pre><code> SalesPerson 1 SalesPerson 2 SalesPerson 3 Revenue Number of Orders Revenue Number of Orders Revenue Number of Orders In Process Stage 1 8347 8 9941 5 5105 7 In Process Stage 2 3879 2 3712 3 1350 10 In Process Stage 3 7885 4 6513 8 2218 2 Won Not Invoiced 4369 1 1736 5 4950 9 Won Invoiced 7169 5 5308 3 9832 2 Lost to Competitor 8780 1 3836 7 2851 3 Lost to No Action 2835 5 4653 1 1270 2 </code></pre> <p>I would like to add subtotal rows for In Process, Won, and Lost, so that my data looks like:</p> <pre><code> SalesPerson 1 SalesPerson 2 SalesPerson 3 Revenue Number of Orders Revenue Number of Orders Revenue Number of Orders In Process Stage 1 8347 8 9941 5 5105 7 In Process Stage 2 3879 2 3712 3 1350 10 In Process Stage 3 7885 4 6513 8 2218 2 In Process Subtotal 20111 14 20166 16 8673 19 Won Not Invoiced 4369 1 1736 5 4950 9 Won Invoiced 7169 5 5308 3 9832 2 Won Subtotal 11538 6 7044 8 14782 11 Won Percent 27% 23% 20% 25% 54% 31% Lost to Competitor 8780 1 3836 7 2851 3 Lost to No Action 2835 5 4653 1 1270 2 Lost Subtotal 11615 6 8489 8 4121 5 Lost Percent 27% 23% 24% 25% 15% 14% Total 43264 26 35699 32 27576 35 </code></pre> <p>So far, my code looks like:</p> <pre><code> def create_win_lose_table(dataframe): in_process_stagename_list = {'In Process Stage 1', 'In Process Stage 2', 'In Process Stage 3'} won_stagename_list = {'Won Invoiced', 'Won Not Invoiced'} lost_stagename_list = {'Lost to Competitor', 'Lost to No Action'} temp_Pipeline_df = dataframe.copy() for index, row in temp_Pipeline_df.iterrows(): if index not in in_process_stagename_list: temp_Pipeline_df.drop([index], inplace = True) Pipeline_sum = temp_Pipeline_df.sum() #at the end I was going to concat the sum to the original dataframe, but that's where I'm stuck </code></pre> <p>I have only started to work on the in process dataframe. My thought was that once I figured that out I could then just duplicate that process for the Won and Lost categories. Any thoughts or approaches are welcome.</p> <p>Thank you! Jon</p>
<p>Simple Example for you. </p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(np.random.rand(5, 5)) df_total = pd.DataFrame(np.sum(df.iloc[:, :].values, axis=0)).transpose() df_with_totals = df.append(df_total) df_with_totals 0 1 2 3 4 0 0.743746 0.668769 0.894739 0.947641 0.753029 1 0.236587 0.862352 0.329624 0.637625 0.288876 2 0.817637 0.250593 0.363517 0.572789 0.785234 3 0.140941 0.221746 0.673470 0.792831 0.170667 4 0.965435 0.836577 0.790037 0.996000 0.229857 0 2.904346 2.840037 3.051388 3.946885 2.227662 </code></pre> <p>You can use the rename argument in Pandas to call the summary row whatever you want.</p>
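<p>One caveat: <code>DataFrame.append</code> was deprecated in pandas 1.4 and removed in 2.0, so on newer versions the same idea reads:</p> <pre><code>df_with_totals = pd.concat([df, df_total], ignore_index=True)
</code></pre>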
python|pandas|dataframe|subtotal
1
9,860
41,935,088
Shuffle groups of rows of a 2D array - NumPy
<p>Suppose I have a (50, 5) array. Is there a way for me to shuffle it on the basis of groupings of rows/sequences of datapoints, i.e. instead of shuffling every row, shuffle chunks of say, 5 rows?</p> <p>Thanks</p>
<p><strong>Approach #1 :</strong> Here's an approach that reshapes into a <code>3D</code> array based on the group size, indexes into the indices of blocks with shuffled indices obtained from <code>np.random.permutation</code> and finally reshapes back to <code>2D</code> -</p> <pre><code>N = 5 # Blocks of N rows M,n = a.shape[0]//N, a.shape[1] out = a.reshape(M,-1,n)[np.random.permutation(M)].reshape(-1,n) </code></pre> <p>Sample run -</p> <pre><code>In [141]: a Out[141]: array([[89, 26, 12], [97, 60, 96], [94, 38, 54], [41, 63, 29], [88, 62, 48], [95, 66, 32], [28, 58, 80], [26, 35, 89], [72, 91, 38], [26, 70, 93]]) In [142]: N = 2 # Blocks of N rows In [143]: M,n = a.shape[0]//N, a.shape[1] In [144]: a.reshape(M,-1,n)[np.random.permutation(M)].reshape(-1,n) Out[144]: array([[94, 38, 54], [41, 63, 29], [28, 58, 80], [26, 35, 89], [89, 26, 12], [97, 60, 96], [72, 91, 38], [26, 70, 93], [88, 62, 48], [95, 66, 32]]) </code></pre> <hr> <p><strong>Approach #2 :</strong> One can also simply use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.shuffle.html" rel="nofollow noreferrer"><code>np.random.shuffle</code></a> for an in-situ change -</p> <pre><code>np.random.shuffle(a.reshape(M,-1,n)) </code></pre> <p>Sample run -</p> <pre><code>In [156]: a Out[156]: array([[15, 12, 14], [55, 39, 35], [73, 78, 36], [54, 52, 32], [83, 34, 91], [42, 11, 98], [27, 65, 47], [78, 75, 82], [33, 52, 93], [87, 51, 80]]) In [157]: N = 2 # Blocks of N rows In [158]: M,n = a.shape[0]//N, a.shape[1] In [159]: np.random.shuffle(a.reshape(M,-1,n)) In [160]: a Out[160]: array([[15, 12, 14], [55, 39, 35], [27, 65, 47], [78, 75, 82], [73, 78, 36], [54, 52, 32], [33, 52, 93], [87, 51, 80], [83, 34, 91], [42, 11, 98]]) </code></pre>
numpy|multidimensional-array|shuffle
3
9,861
64,214,933
Checking ending characters in dataframe and replacing them
<p>I would like to add two new columns to my pandas dataframe, based on the following conditions:</p> <ul> <li>if a sentence ends with '...', add a new column with value 1, otherwise 0;</li> <li>if a sentence ends with '...', add a new column containing the sentence without the '...' at the end</li> </ul> <p>Something like this:</p> <pre><code>Text
bla bla bla ...
once upon a time
pretty little liars
Batman ...
</code></pre> <p>Expected:</p> <pre><code>Text                   T    Clean
bla bla bla ...        1    bla bla bla
once upon a time       0    once upon a time
pretty little liars    0    pretty little liars
Batman ...             1    Batman
</code></pre> <p>I tried to apply regex, but probably <code>str.endswith</code> would be a better approach to check if a sentence ends with <code>...</code>, since it assigns a boolean value (my <code>T</code> column).</p> <p>I have tried as follows: <code>df['Text'].str.endswith('...')</code>, but I would need to create a new column with 1 and 0. For cleaning the text, I would check if <code>T</code> is true: if it is, I would remove the <code>...</code> at the end.</p> <pre><code>df['Clean'] = df['Text'].str.rstrip('...')
</code></pre> <p>or <code>df['Clean'] = df['Text'].str[:-3]</code> (but it does not include any logical condition or information on <code>...</code>)</p> <p>or <code>df['Clean'] = df['Text'].str.replace(r'...$', '')</code></p> <p>It is important that I only consider sentences ending with <code>...</code>, so that I avoid deleting <code>...</code> in the middle of a sentence, where it has a different meaning.</p>
<p>For the first column, I would use the approach you suggested:</p> <pre class="lang-py prettyprint-override"><code>df['T'] = df['Text'].str.endswith('...')
</code></pre> <p>(Technically this will create a boolean column, not an integer column. You can use <code>astype()</code> to convert if you care about this.)</p> <p>For the second column, I would unconditionally replace, making sure to escape the dots, since in a regex an unescaped <code>.</code> matches any character:</p> <pre class="lang-py prettyprint-override"><code>df['Clean'] = df['Text'].str.replace(r'\.\.\.$', '', regex=True)
</code></pre> <p>If a value doesn't end in <code>...</code>, the replace won't do anything.</p>
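<p>If you'd rather avoid regex entirely, a sketch that reuses the boolean column (slicing off the last three characters, then stripping the space left behind):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

df['Clean'] = np.where(df['T'], df['Text'].str[:-3].str.rstrip(), df['Text'])
</code></pre>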
python|pandas
2
9,862
64,225,653
Random order of one pandas.DataFrame with respect to another
<p>I have the following structure:</p> <pre><code>data_Cnx = pd.read_csv(path_Connection,sep='\t',header=None) data_Cnx.columns = [&quot;ConnectionID&quot;] data_Srv = pd.read_csv(path_Service,sep='\t',header=None) data_Srv.columns = [&quot;ServiceID&quot;] </code></pre> <p>that can be visualized as the following:</p> <pre><code>print(data_Cnx) ConnectionID 0 CN0120 1 CN0121 2 CN0122 3 CN0123 4 CN0124 ... ... 20 CN0166 21 CN0167 22 CN0168 23 CN0171 24 CN0172 [25 rows x 1 columns] print(data_Srv) ServiceID 0 ST030 1 ST030 2 ST030 3 ST030 4 ST030 ... ... 20 ST040 21 ST040 22 ST040 23 ST050 24 ST050 [25 rows x 1 columns] </code></pre> <p>Literally, each element from <code>data_Cnx</code> corresponds to a parallel element in <code>data_Srv</code>, respecting the order. For instance:</p> <pre><code>CN0120 corresponds to ST030 CN0121 corresponds to ST030 .... CN0166 corresponds to ST040 CN0167 corresponds to ST040 ... CN0171 corresponds to ST050 ... </code></pre> <p>I would like to have another structure or different <code>data_Cnx</code> and <code>data_Srv</code> in which the order of <code>data_Cnx</code> can be randomized, but always in respect of what corresponds in <code>data_Srv</code>. For instance:</p> <p>The <code>data_Cnx</code> and <code>data_Srv</code> can be visualized as the following:</p> <pre><code>print(data_Cnx) ConnectionID 0 CN0120 1 CN0168 2 CN0156 3 CN0133 4 CN0161 ... ... 20 CN0121 21 CN0143 22 CN0127 23 CN0151 24 CN0132 print(data_Srv) [25 rows x 1 columns] ServiceID 0 ST030 1 ST040 2 ST070 3 ST010 4 ST040 ... ... 20 ST030 21 ST050 22 ST030 23 ST070 24 ST010 </code></pre> <p>I was thinking of using <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.randn.html" rel="nofollow noreferrer">randn</a>, but obviously it uses integers as parameters. Do you have an easier idea of how can this be implemented?</p>
<p>I simply found that the following works:</p> <pre><code>bigdata = pd.concat([data_Srv, data_Cnx], axis=1)
bigdata.sample(n=20)
</code></pre> <p>Note that <code>sample(n=20)</code> keeps only 20 of the 25 rows; to shuffle all rows, use <code>sample(frac=1)</code> instead. If someone suggests a better idea, I would be open to trying it :)</p>
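<p>A minimal sketch of the full-shuffle variant — concatenating keeps each ConnectionID and ServiceID on the same row, so shuffling the combined frame preserves the correspondence (column names assumed from the question):</p> <pre><code>import pandas as pd

bigdata = pd.concat([data_Cnx, data_Srv], axis=1)

# frac=1 returns all rows in a random order; random_state makes it reproducible
shuffled = bigdata.sample(frac=1, random_state=42).reset_index(drop=True)

data_Cnx = shuffled[['ConnectionID']]
data_Srv = shuffled[['ServiceID']]
</code></pre>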
python|pandas|random
1
9,863
64,462,987
Why does my PCA change every time I run the code in python?
<p>I imputed any missing values in my dataframe with the median of each feature and scaled using StandardScaler(). I ran a regular kneighbors classifier with n=3 and the accuracy stays consistent.</p> <p>Now I am to do the PCA of the resulting dataset with n_components=4 and apply K-neighbors with 3 neighbors. However, the PCA dataset and the kneighbors accuracy change every time I run the program, even though the master dataset itself doesn't change. I even tried using the first 4 features of the dataset when applying kneighbors, and even that is inconsistent.</p> <pre><code>data = pd.read_csv('dataset.csv')
y = merged['Life expectancy at birth (years)']
X_train, X_test, y_train, y_test = train_test_split(data, y, train_size=0.7, test_size=0.3, random_state=200)

for i in range(len(features)):
    featuredata = X_train.iloc[:,i]
    fulldata = data.iloc[:,i]
    fulldata.fillna(featuredata.median(), inplace=True)
    data.iloc[:,i] = fulldata

scaler = preprocessing.StandardScaler().fit(X_train)
data = scaler.transform(data)
</code></pre> <p>If I apply KNeighbors here, it runs fine, and my accuracy score remains the same.</p> <pre><code>pcatest = PCA(n_components=4)
pca_data = pcatest.fit_transform(data)
X_train, X_test, y_train, y_test = train_test_split(pca_data, y, train_size=0.7, test_size=0.3)
pca = neighbors.KNeighborsClassifier(n_neighbors=3)
pca.fit(X_train, y_train)
y_pred_pca = pca.predict(X_test)
pca_accuracy = accuracy_score(y_test, y_pred_pca)
</code></pre> <p>However, my pca_accuracy score changes every time I run the code. What can I do to make it consistent?</p> <pre><code>first4_data = data[:,:4]
X_train, X_test, y_train, y_test = train_test_split(first4_data, y, train_size=0.7, test_size=0.3)
first4 = neighbors.KNeighborsClassifier(n_neighbors=3)
first4.fit(X_train, y_train)
y_pred_first4 = first4.predict(X_test)
first4_accuracy = accuracy_score(y_test, y_pred_first4)
</code></pre> <p>I am only taking the first 4 features/columns and the data should remain the same, but for some reason, the accuracy score changes every time I run it.</p>
<p>You need to give <code>random_state</code> a value in <code>train_test_split</code>; otherwise every run produces a different split and therefore a different result. What happens is that every time you split your data, the rows are shuffled differently unless you fix the random state. It's the equivalent of <code>set.seed()</code> in <code>R</code>.</p>
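<p>A minimal sketch — fixing the seed in both of the later splits from the question (variable names taken from the question) should make the scores reproducible:</p> <pre><code>from sklearn.model_selection import train_test_split

# same random_state gives the same split, and so the same accuracy, on every run
X_train, X_test, y_train, y_test = train_test_split(
    pca_data, y, train_size=0.7, test_size=0.3, random_state=200)
</code></pre>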
python|pandas|pca|knn
0
9,864
47,834,297
TensorFlow softmax_cross_entropy_with_logits: are "labels" also trained (if differentiable)?
<p>The softmax cross-entropy with logits loss function is used to reduce the difference between the logits and labels provided to the function. Typically, the labels are fixed for supervised learning and the logits are adapted. But what happens when the labels come from a differentiable source, e.g., another network? Do both networks, i.e., the "logits network" and the "labels network" get trained by the subsequent optimizer, or does this loss function always treat the labels as fixed?</p> <p>TLDR: Does tf.nn.softmax_cross_entropy_with_logits() also provide gradients for the labels (if they are differentiable), or are they always considered fixed?</p> <p>Thanks!</p>
<p>You need to use <code>tf.nn.softmax_cross_entropy_with_logits_v2</code> to get gradients with respect to the labels. The original <code>tf.nn.softmax_cross_entropy_with_logits</code> stops the gradient at the labels and always treats them as fixed; the <code>_v2</code> variant backpropagates into both <code>logits</code> and <code>labels</code> (and you can wrap the labels in <code>tf.stop_gradient</code> if you want the old behaviour back).</p>
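<p>A minimal TF 1.x sketch of the difference — <code>labels_net</code> and <code>logits_net</code> are hypothetical placeholders standing in for the outputs of the two networks:</p> <pre><code>import tensorflow as tf  # TF 1.x API assumed

# Gradients flow into BOTH networks:
loss = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.nn.softmax(labels_net), logits=logits_net)

# Gradients flow into the logits network only (labels treated as fixed):
loss_fixed = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.stop_gradient(tf.nn.softmax(labels_net)), logits=logits_net)
</code></pre>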
tensorflow|fixed|label|gradient|loss-function
0
9,865
48,954,748
Normalize gradient magnitude to unit length in tensorflow
<p>How can we normalize the gradient magnitude to a unit length in tensorflow?</p> <p>I am trying to do something like</p> <pre><code>gradients = tf.gradients(self.loss, _params)
gradients_norm = tf.norm(gradients, name='norm')
final_gradients = [(gradients/gradients_norm, var) for grad, var in gradients]
</code></pre> <p>Any clue? Thank you</p>
<p>There are some Gradient Clipping functions that do what you want in one step:</p> <p><a href="https://www.tensorflow.org/api_guides/python/train#Gradient_Clipping" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/train#Gradient_Clipping</a></p> <p>For example:</p> <pre><code>tf.clip_by_norm(t, clip_norm, axes=None, name=None)
</code></pre> <p>Once you've got your gradients, just as you've shown there, you'll want to use those new, clipped gradients instead of the original gradients. Use:</p> <pre><code>tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None)
</code></pre> <p><a href="https://www.tensorflow.org/api_guides/python/train#Optimizers" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/train#Optimizers</a></p> <p>The <code>apply_gradients</code> op should be run instead of the normal minimizer op to train the network.</p> <p>Example - the normal training op:</p> <pre><code>train_op = tf.train.GradientDescentOptimizer(learning_rate=1e-4).minimize(loss_function)
</code></pre> <p>Example - your training op. Note that <code>tf.gradients</code> takes the loss first and returns a plain list of gradients, so it has to be zipped with the variables:</p> <pre><code>trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
gradients = tf.gradients(self._loss, trainable_vars)
grads_and_vars = [(tf.clip_by_norm(grad, clip_norm), var)
                  for grad, var in zip(gradients, trainable_vars)]
train_op = tf.train.GradientDescentOptimizer(learning_rate=1e-4).apply_gradients(grads_and_vars)
</code></pre> <p>Note the use of <code>tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)</code>, which gets you a <code>list</code> of trainable variables. These are all the variables the optimizer would normally update by default.</p> <p>Keep in mind that <code>clip_by_norm</code> only rescales gradients whose norm exceeds <code>clip_norm</code>; it does not scale smaller gradients up to unit length.</p>
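<p>If you need the gradient to have exactly unit norm (clipping only shrinks gradients that are too large), a minimal sketch is to divide each gradient by its own norm, with a small epsilon assumed to avoid division by zero:</p> <pre><code>gradients = tf.gradients(self.loss, _params)
grads_and_vars = [(grad / (tf.norm(grad) + 1e-12), var)
                  for grad, var in zip(gradients, _params)]
train_op = tf.train.GradientDescentOptimizer(1e-4).apply_gradients(grads_and_vars)
</code></pre>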
tensorflow|gradient|normalize
2
9,866
49,057,750
groupby select value only if match
<p>I got my data sorted correctly, but now I'm trying to find a way to group by the "first non-empty string value". Is there a way to do this without changing the rest of the data? <code>first()</code> was close, but not quite what I needed:</p> <pre><code>grouped = sortedvals.groupby(['name']).first().reset_index()
</code></pre> <p>It doesn't work if the first value is empty, i.e. <code>'',2</code> (my goal is to return 2), but it does work for everything else.</p>
<blockquote> <p>Use the <code>replace</code> function to replace blank values with <code>np.nan</code>, so that <code>first()</code> skips over them:</p> </blockquote> <pre><code>import numpy as np

grouped = sortedvals.replace('', np.nan).groupby(['name']).first().reset_index()
</code></pre>
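<p>A minimal sketch of why this works — <code>GroupBy.first()</code> skips NaN but not empty strings, so converting blanks to NaN first makes it pick the first non-empty value per group (data made up for illustration):</p> <pre><code>import numpy as np
import pandas as pd

sortedvals = pd.DataFrame({'name': ['a', 'a', 'b'],
                           'val':  ['',  2,   3]})

grouped = (sortedvals.replace('', np.nan)
                     .groupby(['name']).first().reset_index())
print(grouped)
#   name  val
# 0    a    2
# 1    b    3
</code></pre>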
python-3.x|pandas|pandas-groupby
1
9,867
48,988,697
consistent colors after multiple calls of pd.DataFrame.plot()
<p>I have a dataframe <code>v</code> with some numerical data in it.</p> <pre><code>v = pd.DataFrame(data=np.random.rand(300,3))
</code></pre> <p>I want to plot on the same <code>matplotlib</code> figure:</p> <ul> <li>a scatter plot</li> <li>a moving average of the same points</li> </ul> <p>I do that using <code>pd.DataFrame.plot()</code></p> <pre><code>plt.figure()
v.plot(style='o', legend=False, ax=plt.gca(), alpha=0.2, ls='')
v.rolling(7).mean().plot(legend=False, ax=plt.gca())
</code></pre> <p>This works fine.</p> <p>However, the points drawn by the first plot are colored according to their column number, and the same happens for the lines in the second plot.</p> <p>I would like the colors to be consistent between the two plot commands, so that each moving-average line has the same color as the scatter points it was computed from. How can I get that?</p> <p>Here is what I get running the code. Obviously, I cannot figure out if the red lines correspond to the green, orange or blue points...</p> <p><a href="https://i.stack.imgur.com/tQkwM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tQkwM.png" alt="enter image description here"></a></p>
<p><strong>ORIGINAL</strong></p> <p>I believe you need -</p> <pre><code>%matplotlib inline  # only for jupyter notebooks
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np

colors = {0: 'red', 1: 'green', 2: 'blue'}
v = pd.DataFrame(data=np.random.rand(300,3))
plt.figure()
v.plot(marker='o', legend=False, ax=plt.gca(), ls='', alpha=0.2, color=list(colors.values()))
v.rolling(7).mean().plot(legend=False, ax=plt.gca(), color=list(colors.values()))
</code></pre> <p><strong>UPDATE</strong></p> <p>Just go with -</p> <p><strong>Option 1 (no extra <code>cm</code> dependency)</strong></p> <pre><code>colors_rand = np.random.rand(len(v.columns), 3)
v.plot(marker='o', legend=False, ax=plt.gca(), ls='', alpha=0.5, color=colors_rand)
v.rolling(7).mean().plot(legend=False, ax=plt.gca(), color=colors_rand)
</code></pre> <p><strong>Option 2 (as suggested by OP)</strong></p> <pre><code>from matplotlib import cm

v.plot(marker='o', legend=False, ax=plt.gca(), ls='', alpha=0.5, color=cm.rainbow(np.linspace(0, 1, v.shape[1])))
v.rolling(7).mean().plot(legend=False, ax=plt.gca(), color=cm.rainbow(np.linspace(0, 1, v.shape[1])))
</code></pre> <p>(Note that the keyword is <code>color</code>, not <code>colors</code>, and the dictionary's values need to be wrapped in <code>list()</code> to be passed as a color list.)</p>
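<p>Another option — a sketch that reads the colors back from the first plot, so the rolling-mean lines reuse exactly the colors matplotlib assigned to the scatter points, whatever they were:</p> <pre><code>import matplotlib.pyplot as plt

ax = v.plot(style='o', legend=False, alpha=0.2, ls='')
# each column's markers are a Line2D artist; grab its assigned color
point_colors = [line.get_color() for line in ax.get_lines()]
v.rolling(7).mean().plot(ax=ax, legend=False, color=point_colors)
plt.show()
</code></pre>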
python|pandas|matplotlib|plot|colors
2
9,868
49,015,337
Reading and writing column data in Python with Pandas
<p>This endeavour is a variation on the wonderful <a href="https://github.com/MagerValp/MacModelShelf" rel="nofollow noreferrer">Mac Model Shelf</a>. I have managed thus far to write the code myself that can read single Mac serial numbers at the command line and give back the corresponding model type, based on the last 3 or 4 chars in the serial.</p> <p>Right now I am trying to write a script to read in the column data in an Excel file and return the results for each cell in the neighbouring column.</p> <p>The output Excel would hopefully look something like this (with headers)...</p> <pre><code>Serial          Model
C12PT70EG8WP    Macbook Pro 2015 15" 2.5 Ghz i7
K12PT7EG0PW     iMac 2010 Intel Core Duo 1.6 Ghz
</code></pre> <p>This is all based on an Excel file that supplies its data to a python shelve. Here is a small example of how it reads... I've called it 'pgList.xlsx' in the main code. In reality it will be hundreds of lines long.</p> <pre><code>G8WP    Macbook Pro 2015 15" 2.5 Ghz i7
0PW     iMac 2010 Intel Core Duo 1.6 Ghz
3RT     iPad Pro 2017
</code></pre> <p>Main python3 code...</p> <pre><code>import shelve
import pandas as pd

#getting the shelve/database ready from the library excel file
DBPATH = "/Users/me/PycharmProjects/shelve/macmodelshelfNEW"
databaseOfMacs = shelve.open(DBPATH)
excelDict = pd.read_excel('pgList.xlsx', header=None, index_col=0, squeeze=True).to_dict()
databaseOfMacs.update(excelDict)

#loading up the excel file and serial numbers I want to examine...
df = pd.read_excel('testSerials.xlsx', sheet_name='Sheet1')
listSerials = df['Serial']
listModels = df['Model']

for i in listSerials:
    inputSerial = i
    inputSerial = inputSerial.upper()
    modelCodeIsolatedFromSerial = ""
    if len(inputSerial) == 12:
        modelCodeIsolatedFromSerial = inputSerial[-4:]
    elif len(inputSerial) == 11:
        modelCodeIsolatedFromSerial = inputSerial[-3:]
    try:
        model = databaseOfMacs[modelCodeIsolatedFromSerial]
        #printing to console to check code works
        print(model)
    except:
        print("Result not found")

databaseOfMacs.clear()
databaseOfMacs.close()
</code></pre> <p>Could you guys help me out with writing the results back to the same Excel file? So, for example, if the serial number was in cell A2, the result (the model type) would be written to B2?</p> <p>I have tried including this line of code before the main 'for' loop, but it only ever serves to wipe the Excel file empty after running the script! I just comment it out for the moment.</p> <pre><code>writer = pd.ExcelWriter('testSerials.xlsx', engine='xlsxwriter')
</code></pre> <p>Could you also help me handle any potential blank cells in the serials column? A blank will throw back this error.</p> <pre><code>AttributeError: 'float' object has no attribute 'upper'
</code></pre> <p>Thanks again for looking after me!</p> <p>WL</p> <p><strong>UPDATE</strong></p> <p>The comments I have received so far have really helped. I think the part where I am getting stuck is getting the output of the 'for' loop ('model' in this case) into the 'Model' column. The variable 'listModels' doesn't seem to behave like other lists in Python 3, i.e. I cannot append anything to it.</p> <p><strong>UPDATE 2</strong></p> <p>Some more tinkering, trying to get the result of the serial-number lookup of the values in the "Serial" column into the "Model" column.</p> <p>I have tried (without any real success):</p> <pre><code>try:
    model = databaseOfMacs[modelCodeIsolatedFromSerial]
    print(model)
    listModels.replace(['nan'], [model], inplace=True)
</code></pre> <p>This doesn't give me an error message, but still nothing appears in the outputted Excel file.</p> <p>When I run a for loop to print the contents of 'listModels' I just get back a list of "NaN"s, suggesting nothing at all has been changed... bummer!</p> <p>I've also tried:</p> <pre><code>try:
    model = databaseOfMacs[modelCodeIsolatedFromSerial]
    print(model)
    listModels[i] = model
</code></pre> <p>This will spit back a console error about</p> <pre><code>A value is trying to be set on a copy of a slice from a DataFrame
</code></pre> <p>but at least I can see the model name relating to a serial number in the console when I iterate through 'listModels'. Still nothing in the output Excel file though (along with a 'nan' for every serial number that is examined?)</p> <p>I am sure it's something small that I am missing in the code to fix this problem. Thanks again to anybody who can help me out.</p> <p><strong>UPDATE 3</strong></p> <p>I've solved it on my own. Just had to use a while loop instead.</p> <pre><code>sizeOfSerialsList = len(listSerials)
count = 0
while (count &lt; sizeOfSerialsList):
    inputSerial = listSerials.iloc[count]
    inputSerial = str(inputSerial).upper()
    modelCodeIsolatedFromSerial = ""
    model = ""
    if len(inputSerial) == 12:
        modelCodeIsolatedFromSerial = inputSerial[-4:]
    elif len(inputSerial) == 11:
        modelCodeIsolatedFromSerial = inputSerial[-3:]
    try:
        model = databaseOfMacs[modelCodeIsolatedFromSerial]
        listModels.iloc[count] = model
    except:
        listModels.iloc[count] = "Not found"
    count = count + 1
</code></pre>
<p>See Update 3 for code that solved the issue</p>
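<p>For reference, a vectorized sketch of the same lookup — it assumes <code>databaseOfMacs</code> is still open and the column names from the question, and maps the last 3–4 characters of each serial through the lookup table without an explicit loop:</p> <pre><code>import numpy as np
import pandas as pd

serials = df['Serial'].fillna('').astype(str).str.upper()

# 12-char serials use the last 4 chars, 11-char serials the last 3
codes = np.select(
    [serials.str.len() == 12, serials.str.len() == 11],
    [serials.str[-4:], serials.str[-3:]],
    default='')

df['Model'] = pd.Series(codes, index=df.index).map(
    lambda c: databaseOfMacs.get(c, 'Not found'))
df.to_excel('testSerials.xlsx', index=False)
</code></pre>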
python|excel|python-3.x|pandas
0
9,869
49,308,813
Reprinting column but changes don't stick when working with regex
<p>I am trying to use regex to remove the <code>$</code> signs and <code>,</code> separators, and turn the columns into floats.</p> <pre><code>df[[cols]].replace({'\$': '', ',': ''}, regex=True).astype(float)
</code></pre> <p>When I check my work to see if the changes stick, I still get <code>$</code> and <code>,</code>.</p> <p>Is there an <code>inplace=True</code> parameter or something?</p>
<p><code>replace</code> returns a new object rather than modifying the dataframe in place, so you need to assign the result back:</p> <pre><code>df[cols] = df[cols].replace({'\$': '', ',': ''}, regex=True).astype(float)
</code></pre> <p>Alternatively, per column you can do</p> <pre><code>df[col] = df[col].apply(lambda x: float(x.replace('$', '').replace(',', '')))
</code></pre>
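<p>For completeness, a sketch of the assignment-based version on hypothetical data, handling both the dollar signs and the thousands separators:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'price': ['$1,200', '$350', '$42,000']})

df['price'] = (df['price']
               .replace({r'\$': '', ',': ''}, regex=True)
               .astype(float))
print(df)
#      price
# 0   1200.0
# 1    350.0
# 2  42000.0
</code></pre>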
python|regex|pandas
0
9,870
58,708,874
Reduce sum with condition in tensorflow
<p>I am given a 2D Tensor with stochastic rows. After applying <code>tf.math.greater()</code> and <code>tf.cast(tf.int32)</code> I am left with a Tensor of 0's and 1's. I now want to apply a reduce sum onto that matrix, but with a condition: once at least one 1 has been summed and a 0 follows, I want to discard all following 1's as well, meaning <code>1 0 1</code> should result in <code>1</code> instead of <code>2</code>.</p> <p>I have tried to solve the problem with <code>tf.scan()</code>, but I was not able to come up with a function yet that is able to handle leading 0's, because a row might look like: <code>0 0 0 1 0 1</code>. One idea was to set the lower part of the matrix to one (because I know everything left of the diagonal will always be 0) and then have a function like <code>tf.scan()</code> run to filter out the spots (see code and error message below).</p> <p>Let <code>z</code> be the matrix after <code>tf.cast</code>.</p> <pre><code>helper = tf.matrix_band_part(tf.ones_like(z), -1, 0)
z = tf.math.logical_or(tf.cast(z, tf.bool), tf.cast(helper, tf.bool))
z = tf.cast(z, tf.int32)
z = tf.scan(lambda a, x: x if a == 1 else 0, z)
</code></pre> <p>Resulting in:</p> <p><code>ValueError: Incompatible shape for value ([]), expected ([5])</code></p>
<p>IIUC, this is one way to do what you want without scanning or looping. It may be a bit convoluted, and it actually iterates the columns twice (one cumsum and one cumprod), but being vectorized operations I think it is probably faster. The code is TF 2.x but runs the same in TF 1.x (except for the last line, obviously).</p> <pre><code>import tensorflow as tf

# Example data
a = tf.constant([[0, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 1],
                 [1, 1, 1, 0],
                 [1, 1, 0, 1],
                 [0, 1, 1, 1],
                 [1, 1, 1, 1]])
# Cumsum columns
c = tf.math.cumsum(a, axis=1)
# Column-wise differences
diffs = tf.concat([tf.ones([tf.shape(c)[0], 1], c.dtype), c[:, 1:] - c[:, :-1]], axis=1)
# Find point where we should not sum anymore (cumsum is not zero and difference is zero)
cutoff = tf.equal(a, 0) &amp; tf.not_equal(c, 0)
# Make mask
mask = tf.math.cumprod(tf.dtypes.cast(~cutoff, tf.uint8), axis=1)
# Compute result
result = tf.reduce_max(c * tf.dtypes.cast(mask, c.dtype), axis=1)
print(result.numpy())
# [0 1 2 1 3 2 3 4]
</code></pre>
python|tensorflow
2
9,871
70,129,985
Use Trailing X Range of Rows when Defining a New Column
<p>I'm looking to turn a <code>for</code> loop, when creating values for a new column, into a one line statement using <code>numpy.where()</code> instead. I'm trying to implement the Doji logic <a href="https://tlc.thinkorswim.com/center/reference/thinkScript/Functions/Tech-Analysis/IsDoji" rel="nofollow noreferrer">here</a> in Python, but that's not really important to the question I have. Before this gets downvoted, I'm providing this big dataset to ensure that whoever helps me with this is working with the same dataset that I am, and to ensure that there are a few instances of <code>True</code> to compare to.</p> <p>To reproduce, (warning, long df creation ahead) create this dataframe of data:</p> <pre><code>import numpy as np
import pandas as pd

# Create a reproducible, static dataframe.
# 1 minute SPY data. Skip to the bottom...
df = pd.DataFrame([
    { &quot;time&quot;: &quot;2021-10-26 9:30&quot;, &quot;open&quot;: &quot;457.2&quot;, &quot;high&quot;: &quot;457.29&quot;, &quot;low&quot;: &quot;456.78&quot;, &quot;close&quot;: &quot;456.9383&quot;, &quot;volume&quot;: &quot;594142&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:31&quot;, &quot;open&quot;: &quot;456.94&quot;, &quot;high&quot;: &quot;457.07&quot;, &quot;low&quot;: &quot;456.8&quot;, &quot;close&quot;: &quot;456.995&quot;, &quot;volume&quot;: &quot;194061&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:32&quot;, &quot;open&quot;: &quot;456.99&quot;, &quot;high&quot;: &quot;457.22&quot;, &quot;low&quot;: &quot;456.84&quot;, &quot;close&quot;: &quot;457.21&quot;, &quot;volume&quot;: &quot;186114&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:33&quot;, &quot;open&quot;: &quot;457.22&quot;, &quot;high&quot;: &quot;457.45&quot;, &quot;low&quot;: &quot;457.2011&quot;, &quot;close&quot;: &quot;457.308&quot;, &quot;volume&quot;: &quot;294158&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:34&quot;, &quot;open&quot;: &quot;457.31&quot;, &quot;high&quot;: &quot;457.4&quot;, &quot;low&quot;: &quot;457.25&quot;, &quot;close&quot;: &quot;457.32&quot;, &quot;volume&quot;: &quot;172574&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:35&quot;, &quot;open&quot;: &quot;457.31&quot;, &quot;high&quot;: &quot;457.48&quot;, &quot;low&quot;: &quot;457.18&quot;, &quot;close&quot;: &quot;457.44&quot;, &quot;volume&quot;: &quot;396668&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:36&quot;, &quot;open&quot;: &quot;457.48&quot;, &quot;high&quot;: &quot;457.6511&quot;, &quot;low&quot;: &quot;457.44&quot;, &quot;close&quot;: &quot;457.57&quot;, &quot;volume&quot;: &quot;186777&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:37&quot;, &quot;open&quot;: &quot;457.5699&quot;, &quot;high&quot;: &quot;457.73&quot;, &quot;low&quot;: &quot;457.5699&quot;, &quot;close&quot;: &quot;457.69&quot;, &quot;volume&quot;: &quot;187596&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:38&quot;, &quot;open&quot;: &quot;457.7&quot;, &quot;high&quot;: &quot;457.73&quot;, &quot;low&quot;: &quot;457.54&quot;, &quot;close&quot;: &quot;457.63&quot;, &quot;volume&quot;: &quot;185570&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:39&quot;, &quot;open&quot;: &quot;457.63&quot;, &quot;high&quot;: &quot;457.64&quot;, &quot;low&quot;: &quot;457.31&quot;, &quot;close&quot;: &quot;457.59&quot;, &quot;volume&quot;: &quot;164707&quot; },
    { &quot;time&quot;: &quot;2021-10-26 9:40&quot;, &quot;open&quot;: &quot;457.59&quot;, &quot;high&quot;: &quot;457.72&quot;, &quot;low&quot;: &quot;457.46&quot;, &quot;close&quot;: &quot;457.7199&quot;, &quot;volume&quot;: &quot;167438&quot; },
    {
&quot;time&quot;: &quot;2021-10-26 9:41&quot;, &quot;open&quot;: &quot;457.72&quot;, &quot;high&quot;: &quot;457.8&quot;, &quot;low&quot;: &quot;457.68&quot;, &quot;close&quot;: &quot;457.72&quot;, &quot;volume&quot;: &quot;199951&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:42&quot;, &quot;open&quot;: &quot;457.73&quot;, &quot;high&quot;: &quot;457.74&quot;, &quot;low&quot;: &quot;457.6&quot;, &quot;close&quot;: &quot;457.62&quot;, &quot;volume&quot;: &quot;152134&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:43&quot;, &quot;open&quot;: &quot;457.6&quot;, &quot;high&quot;: &quot;457.65&quot;, &quot;low&quot;: &quot;457.45&quot;, &quot;close&quot;: &quot;457.5077&quot;, &quot;volume&quot;: &quot;142530&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:44&quot;, &quot;open&quot;: &quot;457.51&quot;, &quot;high&quot;: &quot;457.64&quot;, &quot;low&quot;: &quot;457.4001&quot;, &quot;close&quot;: &quot;457.61&quot;, &quot;volume&quot;: &quot;122575&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:45&quot;, &quot;open&quot;: &quot;457.61&quot;, &quot;high&quot;: &quot;457.76&quot;, &quot;low&quot;: &quot;457.58&quot;, &quot;close&quot;: &quot;457.75&quot;, &quot;volume&quot;: &quot;119886&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:46&quot;, &quot;open&quot;: &quot;457.74&quot;, &quot;high&quot;: &quot;457.75&quot;, &quot;low&quot;: &quot;457.37&quot;, &quot;close&quot;: &quot;457.38&quot;, &quot;volume&quot;: &quot;183157&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:47&quot;, &quot;open&quot;: &quot;457.42&quot;, &quot;high&quot;: &quot;457.49&quot;, &quot;low&quot;: &quot;457.37&quot;, &quot;close&quot;: &quot;457.44&quot;, &quot;volume&quot;: &quot;128542&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:48&quot;, &quot;open&quot;: &quot;457.43&quot;, &quot;high&quot;: &quot;457.49&quot;, &quot;low&quot;: &quot;457.33&quot;, &quot;close&quot;: &quot;457.44&quot;, &quot;volume&quot;: &quot;154181&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:49&quot;, &quot;open&quot;: &quot;457.43&quot;, &quot;high&quot;: &quot;457.5898&quot;, &quot;low&quot;: &quot;457.42&quot;, &quot;close&quot;: &quot;457.47&quot;, &quot;volume&quot;: &quot;163063&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:50&quot;, &quot;open&quot;: &quot;457.45&quot;, &quot;high&quot;: &quot;457.59&quot;, &quot;low&quot;: &quot;457.44&quot;, &quot;close&quot;: &quot;457.555&quot;, &quot;volume&quot;: &quot;96229&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:51&quot;, &quot;open&quot;: &quot;457.56&quot;, &quot;high&quot;: &quot;457.61&quot;, &quot;low&quot;: &quot;457.31&quot;, &quot;close&quot;: &quot;457.4217&quot;, &quot;volume&quot;: &quot;110380&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:52&quot;, &quot;open&quot;: &quot;457.42&quot;, &quot;high&quot;: &quot;457.56&quot;, &quot;low&quot;: &quot;457.42&quot;, &quot;close&quot;: &quot;457.47&quot;, &quot;volume&quot;: &quot;107518&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:53&quot;, &quot;open&quot;: &quot;457.475&quot;, &quot;high&quot;: &quot;457.51&quot;, &quot;low&quot;: &quot;457.4&quot;, &quot;close&quot;: &quot;457.48&quot;, &quot;volume&quot;: &quot;78062&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:54&quot;, &quot;open&quot;: &quot;457.49&quot;, &quot;high&quot;: &quot;457.57&quot;, &quot;low&quot;: &quot;457.42&quot;, &quot;close&quot;: &quot;457.46&quot;, &quot;volume&quot;: &quot;133883&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:55&quot;, &quot;open&quot;: &quot;457.47&quot;, &quot;high&quot;: &quot;457.56&quot;, &quot;low&quot;: 
&quot;457.45&quot;, &quot;close&quot;: &quot;457.51&quot;, &quot;volume&quot;: &quot;98998&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:56&quot;, &quot;open&quot;: &quot;457.51&quot;, &quot;high&quot;: &quot;457.54&quot;, &quot;low&quot;: &quot;457.43&quot;, &quot;close&quot;: &quot;457.43&quot;, &quot;volume&quot;: &quot;110237&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:57&quot;, &quot;open&quot;: &quot;457.43&quot;, &quot;high&quot;: &quot;457.65&quot;, &quot;low&quot;: &quot;457.375&quot;, &quot;close&quot;: &quot;457.65&quot;, &quot;volume&quot;: &quot;98794&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:58&quot;, &quot;open&quot;: &quot;457.66&quot;, &quot;high&quot;: &quot;457.69&quot;, &quot;low&quot;: &quot;457.35&quot;, &quot;close&quot;: &quot;457.45&quot;, &quot;volume&quot;: &quot;262154&quot; }, { &quot;time&quot;: &quot;2021-10-26 9:59&quot;, &quot;open&quot;: &quot;457.45&quot;, &quot;high&quot;: &quot;457.47&quot;, &quot;low&quot;: &quot;457.33&quot;, &quot;close&quot;: &quot;457.4&quot;, &quot;volume&quot;: &quot;74685&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:00&quot;, &quot;open&quot;: &quot;457.41&quot;, &quot;high&quot;: &quot;457.48&quot;, &quot;low&quot;: &quot;457.18&quot;, &quot;close&quot;: &quot;457.38&quot;, &quot;volume&quot;: &quot;166617&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:01&quot;, &quot;open&quot;: &quot;457.39&quot;, &quot;high&quot;: &quot;457.7&quot;, &quot;low&quot;: &quot;457.39&quot;, &quot;close&quot;: &quot;457.5&quot;, &quot;volume&quot;: &quot;265649&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:02&quot;, &quot;open&quot;: &quot;457.51&quot;, &quot;high&quot;: &quot;457.57&quot;, &quot;low&quot;: &quot;457.39&quot;, &quot;close&quot;: &quot;457.53&quot;, &quot;volume&quot;: &quot;131947&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:03&quot;, &quot;open&quot;: &quot;457.53&quot;, &quot;high&quot;: &quot;457.54&quot;, &quot;low&quot;: &quot;457.4&quot;, &quot;close&quot;: &quot;457.51&quot;, &quot;volume&quot;: &quot;80111&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:04&quot;, &quot;open&quot;: &quot;457.51&quot;, &quot;high&quot;: &quot;457.62&quot;, &quot;low&quot;: &quot;457.5&quot;, &quot;close&quot;: &quot;457.6101&quot;, &quot;volume&quot;: &quot;117174&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:05&quot;, &quot;open&quot;: &quot;457.621&quot;, &quot;high&quot;: &quot;457.64&quot;, &quot;low&quot;: &quot;457.51&quot;, &quot;close&quot;: &quot;457.58&quot;, &quot;volume&quot;: &quot;168758&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:06&quot;, &quot;open&quot;: &quot;457.58&quot;, &quot;high&quot;: &quot;457.64&quot;, &quot;low&quot;: &quot;457.46&quot;, &quot;close&quot;: &quot;457.61&quot;, &quot;volume&quot;: &quot;84076&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:07&quot;, &quot;open&quot;: &quot;457.62&quot;, &quot;high&quot;: &quot;457.7401&quot;, &quot;low&quot;: &quot;457.62&quot;, &quot;close&quot;: &quot;457.66&quot;, &quot;volume&quot;: &quot;125156&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:08&quot;, &quot;open&quot;: &quot;457.665&quot;, &quot;high&quot;: &quot;457.69&quot;, &quot;low&quot;: &quot;457.5&quot;, &quot;close&quot;: &quot;457.67&quot;, &quot;volume&quot;: &quot;116919&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:09&quot;, &quot;open&quot;: &quot;457.69&quot;, &quot;high&quot;: &quot;457.72&quot;, &quot;low&quot;: &quot;457.5&quot;, &quot;close&quot;: &quot;457.57&quot;, &quot;volume&quot;: &quot;102551&quot; }, { &quot;time&quot;: &quot;2021-10-26 
10:10&quot;, &quot;open&quot;: &quot;457.56&quot;, &quot;high&quot;: &quot;457.75&quot;, &quot;low&quot;: &quot;457.56&quot;, &quot;close&quot;: &quot;457.7&quot;, &quot;volume&quot;: &quot;109165&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:11&quot;, &quot;open&quot;: &quot;457.7&quot;, &quot;high&quot;: &quot;457.725&quot;, &quot;low&quot;: &quot;457.63&quot;, &quot;close&quot;: &quot;457.66&quot;, &quot;volume&quot;: &quot;146209&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:12&quot;, &quot;open&quot;: &quot;457.665&quot;, &quot;high&quot;: &quot;457.88&quot;, &quot;low&quot;: &quot;457.64&quot;, &quot;close&quot;: &quot;457.86&quot;, &quot;volume&quot;: &quot;210620&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:13&quot;, &quot;open&quot;: &quot;457.855&quot;, &quot;high&quot;: &quot;457.96&quot;, &quot;low&quot;: &quot;457.83&quot;, &quot;close&quot;: &quot;457.95&quot;, &quot;volume&quot;: &quot;159975&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:14&quot;, &quot;open&quot;: &quot;457.95&quot;, &quot;high&quot;: &quot;458.02&quot;, &quot;low&quot;: &quot;457.93&quot;, &quot;close&quot;: &quot;457.95&quot;, &quot;volume&quot;: &quot;152042&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:15&quot;, &quot;open&quot;: &quot;457.96&quot;, &quot;high&quot;: &quot;458.15&quot;, &quot;low&quot;: &quot;457.96&quot;, &quot;close&quot;: &quot;458.08&quot;, &quot;volume&quot;: &quot;146047&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:16&quot;, &quot;open&quot;: &quot;458.085&quot;, &quot;high&quot;: &quot;458.17&quot;, &quot;low&quot;: &quot;457.99&quot;, &quot;close&quot;: &quot;458.15&quot;, &quot;volume&quot;: &quot;100732&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:17&quot;, &quot;open&quot;: &quot;458.17&quot;, &quot;high&quot;: &quot;458.33&quot;, &quot;low&quot;: &quot;458.155&quot;, &quot;close&quot;: &quot;458.245&quot;, &quot;volume&quot;: &quot;235072&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:18&quot;, &quot;open&quot;: &quot;458.25&quot;, &quot;high&quot;: &quot;458.29&quot;, &quot;low&quot;: &quot;458.14&quot;, &quot;close&quot;: &quot;458.16&quot;, &quot;volume&quot;: &quot;422002&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:19&quot;, &quot;open&quot;: &quot;458.17&quot;, &quot;high&quot;: &quot;458.2801&quot;, &quot;low&quot;: &quot;458.1699&quot;, &quot;close&quot;: &quot;458.28&quot;, &quot;volume&quot;: &quot;114611&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:20&quot;, &quot;open&quot;: &quot;458.29&quot;, &quot;high&quot;: &quot;458.39&quot;, &quot;low&quot;: &quot;458.24&quot;, &quot;close&quot;: &quot;458.37&quot;, &quot;volume&quot;: &quot;241797&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:21&quot;, &quot;open&quot;: &quot;458.37&quot;, &quot;high&quot;: &quot;458.42&quot;, &quot;low&quot;: &quot;458.31&quot;, &quot;close&quot;: &quot;458.345&quot;, &quot;volume&quot;: &quot;124824&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:22&quot;, &quot;open&quot;: &quot;458.33&quot;, &quot;high&quot;: &quot;458.49&quot;, &quot;low&quot;: &quot;458.33&quot;, &quot;close&quot;: &quot;458.47&quot;, &quot;volume&quot;: &quot;132125&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:23&quot;, &quot;open&quot;: &quot;458.47&quot;, &quot;high&quot;: &quot;458.48&quot;, &quot;low&quot;: &quot;458.38&quot;, &quot;close&quot;: &quot;458.42&quot;, &quot;volume&quot;: &quot;204075&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:24&quot;, &quot;open&quot;: &quot;458.42&quot;, &quot;high&quot;: &quot;458.44&quot;, &quot;low&quot;: &quot;458.29&quot;, 
&quot;close&quot;: &quot;458.34&quot;, &quot;volume&quot;: &quot;126912&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:25&quot;, &quot;open&quot;: &quot;458.33&quot;, &quot;high&quot;: &quot;458.34&quot;, &quot;low&quot;: &quot;458.18&quot;, &quot;close&quot;: &quot;458.1899&quot;, &quot;volume&quot;: &quot;101231&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:26&quot;, &quot;open&quot;: &quot;458.17&quot;, &quot;high&quot;: &quot;458.24&quot;, &quot;low&quot;: &quot;458.13&quot;, &quot;close&quot;: &quot;458.2&quot;, &quot;volume&quot;: &quot;72580&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:27&quot;, &quot;open&quot;: &quot;458.2&quot;, &quot;high&quot;: &quot;458.21&quot;, &quot;low&quot;: &quot;458.14&quot;, &quot;close&quot;: &quot;458.19&quot;, &quot;volume&quot;: &quot;68729&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:28&quot;, &quot;open&quot;: &quot;458.185&quot;, &quot;high&quot;: &quot;458.23&quot;, &quot;low&quot;: &quot;458.13&quot;, &quot;close&quot;: &quot;458.1912&quot;, &quot;volume&quot;: &quot;54422&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:29&quot;, &quot;open&quot;: &quot;458.2&quot;, &quot;high&quot;: &quot;458.34&quot;, &quot;low&quot;: &quot;458.2&quot;, &quot;close&quot;: &quot;458.21&quot;, &quot;volume&quot;: &quot;138841&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:30&quot;, &quot;open&quot;: &quot;458.2&quot;, &quot;high&quot;: &quot;458.25&quot;, &quot;low&quot;: &quot;458.11&quot;, &quot;close&quot;: &quot;458.1119&quot;, &quot;volume&quot;: &quot;92084&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:31&quot;, &quot;open&quot;: &quot;458.12&quot;, &quot;high&quot;: &quot;458.205&quot;, &quot;low&quot;: &quot;458.04&quot;, &quot;close&quot;: &quot;458.16&quot;, &quot;volume&quot;: &quot;146496&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:32&quot;, &quot;open&quot;: &quot;458.1477&quot;, &quot;high&quot;: &quot;458.27&quot;, &quot;low&quot;: &quot;458.1477&quot;, &quot;close&quot;: &quot;458.205&quot;, &quot;volume&quot;: &quot;94342&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:33&quot;, &quot;open&quot;: &quot;458.205&quot;, &quot;high&quot;: &quot;458.25&quot;, &quot;low&quot;: &quot;458.17&quot;, &quot;close&quot;: &quot;458.195&quot;, &quot;volume&quot;: &quot;94324&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:34&quot;, &quot;open&quot;: &quot;458.2&quot;, &quot;high&quot;: &quot;458.29&quot;, &quot;low&quot;: &quot;458.1975&quot;, &quot;close&quot;: &quot;458.23&quot;, &quot;volume&quot;: &quot;96848&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:35&quot;, &quot;open&quot;: &quot;458.23&quot;, &quot;high&quot;: &quot;458.24&quot;, &quot;low&quot;: &quot;458.175&quot;, &quot;close&quot;: &quot;458.2&quot;, &quot;volume&quot;: &quot;83119&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:36&quot;, &quot;open&quot;: &quot;458.19&quot;, &quot;high&quot;: &quot;458.23&quot;, &quot;low&quot;: &quot;458.08&quot;, &quot;close&quot;: &quot;458.12&quot;, &quot;volume&quot;: &quot;99426&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:37&quot;, &quot;open&quot;: &quot;458.13&quot;, &quot;high&quot;: &quot;458.18&quot;, &quot;low&quot;: &quot;458.08&quot;, &quot;close&quot;: &quot;458.17&quot;, &quot;volume&quot;: &quot;65034&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:38&quot;, &quot;open&quot;: &quot;458.17&quot;, &quot;high&quot;: &quot;458.26&quot;, &quot;low&quot;: &quot;458.14&quot;, &quot;close&quot;: &quot;458.245&quot;, &quot;volume&quot;: &quot;149649&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:39&quot;, 
&quot;open&quot;: &quot;458.24&quot;, &quot;high&quot;: &quot;458.359&quot;, &quot;low&quot;: &quot;458.24&quot;, &quot;close&quot;: &quot;458.25&quot;, &quot;volume&quot;: &quot;120754&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:40&quot;, &quot;open&quot;: &quot;458.26&quot;, &quot;high&quot;: &quot;458.31&quot;, &quot;low&quot;: &quot;458.22&quot;, &quot;close&quot;: &quot;458.25&quot;, &quot;volume&quot;: &quot;91216&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:41&quot;, &quot;open&quot;: &quot;458.25&quot;, &quot;high&quot;: &quot;458.25&quot;, &quot;low&quot;: &quot;458.1216&quot;, &quot;close&quot;: &quot;458.15&quot;, &quot;volume&quot;: &quot;51800&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:42&quot;, &quot;open&quot;: &quot;458.15&quot;, &quot;high&quot;: &quot;458.2&quot;, &quot;low&quot;: &quot;457.96&quot;, &quot;close&quot;: &quot;458.03&quot;, &quot;volume&quot;: &quot;101539&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:43&quot;, &quot;open&quot;: &quot;458.02&quot;, &quot;high&quot;: &quot;458.02&quot;, &quot;low&quot;: &quot;457.94&quot;, &quot;close&quot;: &quot;458&quot;, &quot;volume&quot;: &quot;86088&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:44&quot;, &quot;open&quot;: &quot;457.9903&quot;, &quot;high&quot;: &quot;458.04&quot;, &quot;low&quot;: &quot;457.84&quot;, &quot;close&quot;: &quot;457.89&quot;, &quot;volume&quot;: &quot;95357&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:45&quot;, &quot;open&quot;: &quot;457.9&quot;, &quot;high&quot;: &quot;457.955&quot;, &quot;low&quot;: &quot;457.81&quot;, &quot;close&quot;: &quot;457.83&quot;, &quot;volume&quot;: &quot;93449&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:46&quot;, &quot;open&quot;: &quot;457.83&quot;, &quot;high&quot;: &quot;458.01&quot;, &quot;low&quot;: &quot;457.822&quot;, &quot;close&quot;: &quot;457.965&quot;, &quot;volume&quot;: &quot;100225&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:47&quot;, &quot;open&quot;: &quot;457.9789&quot;, &quot;high&quot;: &quot;458.08&quot;, &quot;low&quot;: &quot;457.9499&quot;, &quot;close&quot;: &quot;458.07&quot;, &quot;volume&quot;: &quot;277336&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:48&quot;, &quot;open&quot;: &quot;458.05&quot;, &quot;high&quot;: &quot;458.2&quot;, &quot;low&quot;: &quot;458.05&quot;, &quot;close&quot;: &quot;458.1999&quot;, &quot;volume&quot;: &quot;144024&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:49&quot;, &quot;open&quot;: &quot;458.191&quot;, &quot;high&quot;: &quot;458.25&quot;, &quot;low&quot;: &quot;458.14&quot;, &quot;close&quot;: &quot;458.16&quot;, &quot;volume&quot;: &quot;89625&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:50&quot;, &quot;open&quot;: &quot;458.1566&quot;, &quot;high&quot;: &quot;458.3&quot;, &quot;low&quot;: &quot;458.12&quot;, &quot;close&quot;: &quot;458.28&quot;, &quot;volume&quot;: &quot;99426&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:51&quot;, &quot;open&quot;: &quot;458.279&quot;, &quot;high&quot;: &quot;458.34&quot;, &quot;low&quot;: &quot;458.23&quot;, &quot;close&quot;: &quot;458.32&quot;, &quot;volume&quot;: &quot;136285&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:52&quot;, &quot;open&quot;: &quot;458.32&quot;, &quot;high&quot;: &quot;458.35&quot;, &quot;low&quot;: &quot;458.26&quot;, &quot;close&quot;: &quot;458.345&quot;, &quot;volume&quot;: &quot;59124&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:53&quot;, &quot;open&quot;: &quot;458.35&quot;, &quot;high&quot;: &quot;458.4&quot;, &quot;low&quot;: &quot;458.34&quot;, &quot;close&quot;: 
&quot;458.35&quot;, &quot;volume&quot;: &quot;68658&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:54&quot;, &quot;open&quot;: &quot;458.34&quot;, &quot;high&quot;: &quot;458.37&quot;, &quot;low&quot;: &quot;458.31&quot;, &quot;close&quot;: &quot;458.33&quot;, &quot;volume&quot;: &quot;71029&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:55&quot;, &quot;open&quot;: &quot;458.32&quot;, &quot;high&quot;: &quot;458.36&quot;, &quot;low&quot;: &quot;458.28&quot;, &quot;close&quot;: &quot;458.3&quot;, &quot;volume&quot;: &quot;92136&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:56&quot;, &quot;open&quot;: &quot;458.31&quot;, &quot;high&quot;: &quot;458.38&quot;, &quot;low&quot;: &quot;458.27&quot;, &quot;close&quot;: &quot;458.3475&quot;, &quot;volume&quot;: &quot;62093&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:57&quot;, &quot;open&quot;: &quot;458.34&quot;, &quot;high&quot;: &quot;458.355&quot;, &quot;low&quot;: &quot;458.3&quot;, &quot;close&quot;: &quot;458.35&quot;, &quot;volume&quot;: &quot;61162&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:58&quot;, &quot;open&quot;: &quot;458.35&quot;, &quot;high&quot;: &quot;458.36&quot;, &quot;low&quot;: &quot;458.32&quot;, &quot;close&quot;: &quot;458.325&quot;, &quot;volume&quot;: &quot;66327&quot; }, { &quot;time&quot;: &quot;2021-10-26 10:59&quot;, &quot;open&quot;: &quot;458.33&quot;, &quot;high&quot;: &quot;458.34&quot;, &quot;low&quot;: &quot;458.09&quot;, &quot;close&quot;: &quot;458.125&quot;, &quot;volume&quot;: &quot;133687&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:00&quot;, &quot;open&quot;: &quot;458.12&quot;, &quot;high&quot;: &quot;458.31&quot;, &quot;low&quot;: &quot;458.12&quot;, &quot;close&quot;: &quot;458.145&quot;, &quot;volume&quot;: &quot;96792&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:01&quot;, &quot;open&quot;: &quot;458.16&quot;, &quot;high&quot;: &quot;458.29&quot;, &quot;low&quot;: &quot;458.11&quot;, &quot;close&quot;: &quot;458.19&quot;, &quot;volume&quot;: &quot;70797&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:02&quot;, &quot;open&quot;: &quot;458.2&quot;, &quot;high&quot;: &quot;458.25&quot;, &quot;low&quot;: &quot;458.14&quot;, &quot;close&quot;: &quot;458.23&quot;, &quot;volume&quot;: &quot;83904&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:03&quot;, &quot;open&quot;: &quot;458.25&quot;, &quot;high&quot;: &quot;458.26&quot;, &quot;low&quot;: &quot;458.16&quot;, &quot;close&quot;: &quot;458.18&quot;, &quot;volume&quot;: &quot;59358&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:04&quot;, &quot;open&quot;: &quot;458.185&quot;, &quot;high&quot;: &quot;458.19&quot;, &quot;low&quot;: &quot;457.96&quot;, &quot;close&quot;: &quot;457.975&quot;, &quot;volume&quot;: &quot;115402&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:05&quot;, &quot;open&quot;: &quot;457.98&quot;, &quot;high&quot;: &quot;458.14&quot;, &quot;low&quot;: &quot;457.98&quot;, &quot;close&quot;: &quot;458.14&quot;, &quot;volume&quot;: &quot;134739&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:06&quot;, &quot;open&quot;: &quot;458.13&quot;, &quot;high&quot;: &quot;458.1401&quot;, &quot;low&quot;: &quot;457.99&quot;, &quot;close&quot;: &quot;458.03&quot;, &quot;volume&quot;: &quot;132432&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:07&quot;, &quot;open&quot;: &quot;458.03&quot;, &quot;high&quot;: &quot;458.11&quot;, &quot;low&quot;: &quot;457.97&quot;, &quot;close&quot;: &quot;457.97&quot;, &quot;volume&quot;: &quot;332595&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:08&quot;, &quot;open&quot;: 
&quot;457.96&quot;, &quot;high&quot;: &quot;458.01&quot;, &quot;low&quot;: &quot;457.89&quot;, &quot;close&quot;: &quot;457.99&quot;, &quot;volume&quot;: &quot;112124&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:09&quot;, &quot;open&quot;: &quot;458&quot;, &quot;high&quot;: &quot;458.02&quot;, &quot;low&quot;: &quot;457.92&quot;, &quot;close&quot;: &quot;458.01&quot;, &quot;volume&quot;: &quot;49906&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:10&quot;, &quot;open&quot;: &quot;458.01&quot;, &quot;high&quot;: &quot;458.13&quot;, &quot;low&quot;: &quot;458.01&quot;, &quot;close&quot;: &quot;458.13&quot;, &quot;volume&quot;: &quot;378085&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:11&quot;, &quot;open&quot;: &quot;458.13&quot;, &quot;high&quot;: &quot;458.13&quot;, &quot;low&quot;: &quot;458.03&quot;, &quot;close&quot;: &quot;458.11&quot;, &quot;volume&quot;: &quot;47473&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:12&quot;, &quot;open&quot;: &quot;458.11&quot;, &quot;high&quot;: &quot;458.13&quot;, &quot;low&quot;: &quot;458.04&quot;, &quot;close&quot;: &quot;458.0699&quot;, &quot;volume&quot;: &quot;307628&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:13&quot;, &quot;open&quot;: &quot;458.07&quot;, &quot;high&quot;: &quot;458.16&quot;, &quot;low&quot;: &quot;458.04&quot;, &quot;close&quot;: &quot;458.13&quot;, &quot;volume&quot;: &quot;39463&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:14&quot;, &quot;open&quot;: &quot;458.119&quot;, &quot;high&quot;: &quot;458.119&quot;, &quot;low&quot;: &quot;458.02&quot;, &quot;close&quot;: &quot;458.06&quot;, &quot;volume&quot;: &quot;37030&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:15&quot;, &quot;open&quot;: &quot;458.06&quot;, &quot;high&quot;: &quot;458.18&quot;, &quot;low&quot;: &quot;458.05&quot;, &quot;close&quot;: &quot;458.18&quot;, &quot;volume&quot;: &quot;67514&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:16&quot;, &quot;open&quot;: &quot;458.175&quot;, &quot;high&quot;: &quot;458.21&quot;, &quot;low&quot;: &quot;458.1&quot;, &quot;close&quot;: &quot;458.185&quot;, &quot;volume&quot;: &quot;87491&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:17&quot;, &quot;open&quot;: &quot;458.18&quot;, &quot;high&quot;: &quot;458.195&quot;, &quot;low&quot;: &quot;458.14&quot;, &quot;close&quot;: &quot;458.17&quot;, &quot;volume&quot;: &quot;37629&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:18&quot;, &quot;open&quot;: &quot;458.18&quot;, &quot;high&quot;: &quot;458.27&quot;, &quot;low&quot;: &quot;458.159&quot;, &quot;close&quot;: &quot;458.26&quot;, &quot;volume&quot;: &quot;72492&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:19&quot;, &quot;open&quot;: &quot;458.25&quot;, &quot;high&quot;: &quot;458.25&quot;, &quot;low&quot;: &quot;458.15&quot;, &quot;close&quot;: &quot;458.19&quot;, &quot;volume&quot;: &quot;42138&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:20&quot;, &quot;open&quot;: &quot;458.2&quot;, &quot;high&quot;: &quot;458.31&quot;, &quot;low&quot;: &quot;458.2&quot;, &quot;close&quot;: &quot;458.25&quot;, &quot;volume&quot;: &quot;66885&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:21&quot;, &quot;open&quot;: &quot;458.24&quot;, &quot;high&quot;: &quot;458.27&quot;, &quot;low&quot;: &quot;458.21&quot;, &quot;close&quot;: &quot;458.23&quot;, &quot;volume&quot;: &quot;48999&quot; }, { &quot;time&quot;: &quot;2021-10-26 11:22&quot;, &quot;open&quot;: &quot;458.23&quot;, &quot;high&quot;: &quot;458.3&quot;, &quot;low&quot;: &quot;458.195&quot;, &quot;close&quot;: &quot;458.29&quot;, 
    &quot;volume&quot;: &quot;49565&quot; },
    { &quot;time&quot;: &quot;2021-10-26 11:23&quot;, &quot;open&quot;: &quot;458.3&quot;, &quot;high&quot;: &quot;458.31&quot;, &quot;low&quot;: &quot;458.24&quot;, &quot;close&quot;: &quot;458.31&quot;, &quot;volume&quot;: &quot;51411&quot; },
    { &quot;time&quot;: &quot;2021-10-26 11:24&quot;, &quot;open&quot;: &quot;458.31&quot;, &quot;high&quot;: &quot;458.31&quot;, &quot;low&quot;: &quot;458.18&quot;, &quot;close&quot;: &quot;458.23&quot;, &quot;volume&quot;: &quot;43851&quot; },
    { &quot;time&quot;: &quot;2021-10-26 11:25&quot;, &quot;open&quot;: &quot;458.24&quot;, &quot;high&quot;: &quot;458.27&quot;, &quot;low&quot;: &quot;458.2&quot;, &quot;close&quot;: &quot;458.25&quot;, &quot;volume&quot;: &quot;35606&quot; }
])
</code></pre> <p>... then see the rest of the code below. Just copy and paste the code above the code below.</p> <pre><code># Convert open and close to numeric re: the .csv to .json
# converter tool I used online...
df['open'] = pd.to_numeric(df['open'])
df['close'] = pd.to_numeric(df['close'])

# Define the Doji lookback length and body factor
length = 20
bodyFactor = 0.05

# First, create a column that defines each row's
# &quot;body height&quot;
df['bodyHeight'] = abs(df['close'] - df['open'])

# Create a placeholder column for &quot;isDoji&quot; flag
df['isDoji'] = False

# WORKING EXAMPLE
# Iterate through the df
for i in range(length, len(df)):
    # If the current body height &lt;= the rolling average of prev
    # body heights * bodyFactor, set it to true
    if abs(df['close'].iloc[i] - df['open'].iloc[i]) &lt;= df['bodyHeight'].iloc[i-length:i].mean() * bodyFactor:
        df['isDoji'].iloc[i] = True

# Export the correct answer dataset to compare the next function
# to below
df.to_csv(&quot;correct_answers.csv&quot;, index=False)

# NON-WORKING EXAMPLE
# What I'd like to do is NOT use a for loop to do the above. I envision
# we can use np.where() here? But I don't know how to designate a range
# of rows without using iloc. I know you can use .shift() here, but
# if the length is 20, or more, I don't want to manually add 20 .shifts(x)
# to the code. Pseudocode would look like this
df['isDoji2'] = np.where(abs(df['close'].iloc[i] - df['open'].iloc[i]) &lt;= df['bodyHeight'].iloc[i-length:i].mean() * bodyFactor, True, False)
# ... but obviously the iloc[i]'s don't work here.

# Check if isDoji2 matches isDoji
df.to_csv(&quot;correct_answers2.csv&quot;, index=False)
</code></pre> <p>I'm hoping you can see what I'm trying to accomplish here, and I don't know 100% if this can be done WITHOUT using a for loop, but figured I would check because the less iteration I have to do with the dataset I'm working on, the better off I'll be.</p> <p>Can you do this kind of column calculation using the previous X rows in one line like this? Thanks!</p>
<p>Holy smokes, I got it! Thanks to inspo from <a href="https://stackoverflow.com/questions/40700762/pandas-dataframe-sum-of-shiftx-for-x-in-range1-n">this answer</a>.</p> <p>This does what I'm looking for, verified against the final dataset that I save:</p> <pre><code>df['isDoji2'] = np.where(
    abs(df['close'] - df['open'])
    &lt;= abs(df['close'] - df['open']).rolling(window=length).mean().shift() * bodyFactor,
    True, False)
</code></pre> <p>So I can just take the rolling mean of abs(close - open) and shift it afterwards, which compares each row against the average of the previous <code>length</code> rows. Thanks for letting me hash that out! haha</p>
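<p>As a side note, a slightly shorter equivalent sketch that reuses the <code>bodyHeight</code> column already defined in the question — the comparison itself yields booleans, so <code>np.where</code> isn't strictly needed:</p> <pre><code>df['isDoji2'] = (df['bodyHeight']
                 &lt;= df['bodyHeight'].rolling(window=length).mean().shift() * bodyFactor)

# sanity check against the loop-based version
print((df['isDoji'] == df['isDoji2']).all())
</code></pre>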
python|python-3.x|pandas
0
9,872
70,089,778
vectorized matrix list application in numpy
<p>The problem I am trying to solve is as follows. I am given a matrix of arbitrary dimension representing indices of a list, and then a list. I would like to get back a matrix with the list elements swapped in for the indices. I can't figure out how to do that in a vectorized way: i.e., if <code>z = [[0,1], [1,0]]</code> and <code>list = [20,10]</code>, I'd want <code>[[20,10], [10,20]]</code> returned.</p>
<p>When they both are <code>np.array</code>, you can do indexing in a natural way:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

z = np.array([[0, 1], [1, 0]])
a = np.array([20, 10])

output = a[z]
print(output)
# [[20 10]
#  [10 20]]
</code></pre>
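<p>Since the question mentions index matrices of arbitrary dimension — the same fancy indexing generalizes; a quick sketch with a 3D index array:</p> <pre class="lang-py prettyprint-override"><code>z3 = np.array([[[0, 1], [1, 0]],
               [[1, 1], [0, 0]]])
print(a[z3])
# [[[20 10]
#   [10 20]]
#
#  [[10 10]
#   [20 20]]]
</code></pre>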
python|numpy
1
9,873
70,182,601
Replace consecutive duplicates in 2D numpy array
<p>I have a two dimensional numpy array <code>x</code>:</p> <pre><code>import numpy as np

x = np.array([
    [1, 2, 8, 4, 5, 5, 5, 3],
    [0, 2, 2, 2, 2, 1, 1, 4]
])
</code></pre> <p>My goal is to replace all consecutive duplicate numbers with a specific value (let's take <code>-1</code>), but leaving one occurrence unchanged. I could do this as follows:</p> <pre><code>def replace_consecutive_duplicates(x):
    consec_dup = np.zeros(x.shape, dtype=bool)
    consec_dup[:, 1:] = np.diff(x, axis=1) == 0
    x[consec_dup] = -1
    return x

# current output
replace_consecutive_duplicates(x)
# array([[ 1,  2,  8,  4,  5, -1, -1,  3],
#        [ 0,  2, -1, -1, -1,  1, -1,  4]])
</code></pre> <p>However, in this case the one occurrence left unchanged is always the first. My goal is to leave the middle occurrence unchanged. So given the same x as input, the desired output of function <code>replace_consecutive_duplicates</code> is:</p> <pre><code># desired output
replace_consecutive_duplicates(x)
# array([[ 1,  2,  8,  4, -1,  5, -1,  3],
#        [ 0, -1,  2, -1, -1,  1, -1,  4]])
</code></pre> <p>Note that in the case of consecutive duplicate sequences with an even number of occurrences, the middle-left value should be left unchanged. So the consecutive duplicate sequence <code>[2, 2, 2, 2]</code> in <code>x[1]</code> becomes <code>[-1, 2, -1, -1]</code>.</p> <p>Also note that I'm looking for a vectorized solution for 2D numpy arrays since performance is of absolute importance in my particular use case.</p> <p>I've already tried looking at things like run length encoding and using <code>np.diff()</code>, but I didn't manage to solve this. Hope you guys can help!</p>
<p>The main problem is that you require the length of each run of consecutive values. This is not easy to get with numpy, but using <code>itertools.groupby</code> we can solve it with the following code.</p> <pre><code>import itertools

import numpy as np

x = np.array([
    [1, 2, 8, 4, 5, 5, 5, 3],
    [0, 2, 2, 2, 2, 1, 1, 4]
])


def replace_row(arr: np.ndarray, new_val=-1):
    results = []
    for val, count in itertools.groupby(arr):
        k = len(list(count))
        results.extend([new_val] * ((k - 1) // 2))
        results.append(val)
        results.extend([new_val] * (k // 2))
    return np.fromiter(results, arr.dtype)


if __name__ == '__main__':
    for idx, row in enumerate(x):
        x[idx, :] = replace_row(row)
    print(x)
</code></pre> <p>Output:</p> <pre><code>[[ 1  2  8  4 -1  5 -1  3]
 [ 0 -1  2 -1 -1  1 -1  4]]
</code></pre> <p>This isn't vectorized, but since every row is handled independently it could be parallelized with multiprocessing if needed.</p>
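<p>If a pure-numpy version is needed for performance, a sketch of a run-length-based approach for a single row — the run starts come from <code>np.flatnonzero</code> on the change mask, and the kept position is the middle-left element of each run:</p> <pre><code>import numpy as np

def replace_row_np(arr, new_val=-1):
    change = np.empty(arr.size, dtype=bool)
    change[0] = True
    change[1:] = arr[1:] != arr[:-1]          # True where a new run starts
    run_starts = np.flatnonzero(change)
    run_lengths = np.diff(np.append(run_starts, arr.size))
    keep = run_starts + (run_lengths - 1) // 2  # middle-left element of each run
    out = np.full(arr.size, new_val, dtype=arr.dtype)
    out[keep] = arr[keep]
    return out

result = np.vstack([replace_row_np(row) for row in x])
print(result)
# [[ 1  2  8  4 -1  5 -1  3]
#  [ 0 -1  2 -1 -1  1 -1  4]]
</code></pre> <p>This still loops over rows, but all the per-row work is vectorized numpy.</p>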
python|python-3.x|numpy|duplicates|numpy-ndarray
0
9,874
70,248,895
pandas: removing duplicate values in rows with same index in two columns
<p>I have a dataframe as follows:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'text': ['she is good', 'she is bad'],
                   'label': ['she is good', 'she is good']})
</code></pre> <p>I would like to compare the two columns row-wise and, if two same-indexed rows have the same values, replace the duplicate in the 'label' column with the word 'same'.</p> <p>Desired output:</p> <pre><code>          text        label
0  she is good         same
1   she is bad  she is good
</code></pre> <p>So far, I have tried the following, but it returns an error:</p> <pre><code>ValueError: Length of values (1) does not match length of index (2)

df['label'] = np.where(df.query(&quot;text == label&quot;), df['label'] == ' ', df['label'] == df['label'])
</code></pre>
<p>Your syntax is not correct, have a look at the documentation of <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>. Check for equality between your two columns, and replace the values in your label column:</p> <pre><code>import numpy as np

df['label'] = np.where(df['text'].eq(df['label']), 'same', df['label'])
</code></pre> <p>prints:</p> <pre><code>          text        label
0  she is good         same
1   she is bad  she is good
</code></pre>
python-3.x|pandas|duplicates|rowwise
1
9,875
56,140,870
Setting initial state in dynamic RNN
<p>Based on the link:</p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn</a></p> <p>In the example, it is shown that the "initial state" is defined in the <strong>first example and not in the second example</strong>. Could anyone please explain what is the <strong>purpose of the initial state</strong>? What's the <strong>difference</strong> if I <strong>don't set it</strong> vs if i <strong>set it</strong>? Is it <strong>only required in a single RNN cell</strong> and <strong>not in a stacked cell</strong> like in the example provided in the link?</p> <p>I'm currently debugging my RNN model, as it seemed to classify different questions in the same category, which is strange. I suspect that it might have to do with me not setting the initial state of the cell. </p>
<blockquote> <p>Could anyone please explain what is the purpose of the initial state?</p> </blockquote> <p>The state matrix holds the hidden activations that are carried over from one timestep to the next; it joins the hidden neurons of consecutive timesteps. Hence it carries temporal information from the previous timesteps.</p> <p>Providing a previously computed state matrix via the <code>initial_state=</code> argument gives the RNN cell a <em>memory</em> of its previous activations.</p> <blockquote> <p>What's the difference if I don't set it vs if I set it?</p> </blockquote> <p>If we set the initial state to one produced by another model or a previous run, it means that we are restoring the memory of the RNN cell so that it does not have to start from scratch.</p> <p>In the TF docs, they initialize <code>initial_state</code> as a <code>zero_state</code> matrix.</p> <p>If you don't set <code>initial_state</code>, TensorFlow initializes it to zeros for you (you then have to pass the <code>dtype</code> argument instead). Note that the state is not a trained weight matrix; it is recomputed from the inputs on every run.</p> <blockquote> <p>Is it only required in a single RNN cell and not in a stacked cell like in the example provided in the link?</p> </blockquote> <p>I don't know exactly why they haven't set the <code>initial_state</code> in the Stacked RNN example, but every type of RNN carries a state across timesteps, whether you initialize it explicitly or not.</p> <p><em>Maybe the Stacked RNN was the point of interest in the docs, and not the setting of <code>initial_state</code>.</em></p> <p><strong>Tip:</strong></p> <p>In most cases, you will not need to set the <code>initial_state</code> for an RNN; TensorFlow can handle this for us. It is mainly useful for things like seq2seq models (feeding an encoder's final state into a decoder) or for carrying state across batches.</p> <p>Your RNN is probably facing some other issue; it builds up its own state during each run and doesn't require an explicit power-up.</p>
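<p>A minimal TF 1.x sketch of both variants — <code>inputs</code> and <code>batch_size</code> are hypothetical placeholders:</p> <pre><code>import tensorflow as tf  # TF 1.x API assumed

cell = tf.nn.rnn_cell.LSTMCell(num_units=64)

# Without initial_state: dtype is required, and the state starts at zeros
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# With initial_state: e.g. resume from a state produced by a previous run
init_state = cell.zero_state(batch_size, dtype=tf.float32)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)
</code></pre>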
tensorflow|recurrent-neural-network
1
9,876
56,093,391
Trying to port a tensorflow python script to javascript
<p>I'm trying to port this python code to javascript. I'm getting very different results in my js script, so I wanted to make sure that my <strong>dense layers</strong> are correct:</p> <h2>Python</h2> <pre><code>trainValues = ...  # data source
trainLabels = ...  # data source

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(24, activation=tf.nn.relu),
    tf.keras.layers.Dense(2, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x=trainValues, y=trainLabels, epochs=5)
</code></pre> <h2>Node.js</h2> <pre><code>let trainValues = ... // data source
let trainLabels = ... // data source

const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [24], units: 24, activation: 'relu'}));
model.add(tf.layers.dense({units: 1, activation: 'softmax'}));
model.compile({
    loss: tf.losses.softmaxCrossEntropy,
    optimizer: tf.train.adam(),
    metrics: ['accuracy']
});

trainValues = tf.tensor2d(trainValues);
trainLabels = tf.tensor1d(trainLabels);
await model.fit(trainValues, trainLabels, { epochs: 5 });
</code></pre>
<p>Your second dense layers seem to have a different number of units (<code>2</code> in python, <code>1</code> in JavaScript).</p> <p>In addition, your loss functions are different (<code>sparse_categorical_crossentropy</code> in python, <code>softmaxCrossEntropy</code> in JavaScript). Instead of providing one of the <code>tf.losses.*</code> functions, you can simply pass a string here (as defined <a href="https://github.com/tensorflow/tfjs-layers/blob/d086a5d014b2f74ec498b044b6127e57f5c83070/src/metrics.ts#L257" rel="nofollow noreferrer">here</a>).</p> <p>To have an identical model in JavaScript the code should look like this:</p> <pre class="lang-js prettyprint-override"><code>const model = tf.sequential(); model.add(tf.layers.dense({inputShape: [24], units: 24, activation: 'relu'})); model.add(tf.layers.dense({units: 2, activation: 'softmax'})); model.compile({ loss: 'sparseCategoricalCrossentropy', optimizer: tf.train.adam(), metrics: ['accuracy'] }); </code></pre> <p>I'm assuming that the number of input units is <code>24</code> and that you correctly handled the data.</p>
node.js|tensorflow|tensorflow.js
1
9,877
64,795,510
Comparing specific row in a column with all columns for that specific row in a dataframe
<p>I'm new to Python and have been trying to figure this out for a week now.</p> <p>I have a dataset with 2 rows and roughly 2000 columns; the data came in a dictionary format and I used pd.DataFrame to convert it (don't know if this is helpful or not).</p> <p>Here is an example</p> <pre><code>          gene1  gene2  gene3  etc
location  [1,2]  [3,4]  [5,6]
enhancer  ATCG   GGGG   CATA
</code></pre> <p>I want to compare the enhancer from gene 1 to the enhancer of each of the other genes, one by one, to count how many character differences there are between them. I know I can't simply add a new column for this, so I think the best solution is to save the new information to a new DataFrame.</p> <p>Example output</p> <pre><code>            gene1  gene2  gene3
difference  0      3      4
</code></pre> <p>I would like an idea on how to approach this from a different perspective; I've tried doing it with nested loops but couldn't figure it out.</p> <p>Thank you</p>
<p>This is definitely not the best way to do that, but I would try the following as it is the most straight-forward to me.</p> <pre><code>df = pd.DataFrame({&quot;gene1&quot;:[[1,2],&quot;ATCG&quot;], &quot;gene2&quot;:[[3,4],&quot;GGGG&quot;], &quot;gene3&quot;:[[5,6],&quot;CATA&quot;]},index=[&quot;location&quot;,&quot;enhancer&quot;]) target_gene = df.loc[&quot;enhancer&quot;,&quot;gene1&quot;] df.loc[&quot;difference&quot;,:] = list(map(lambda x:sum([c1!=c2 for (c1,c2) in zip(x,target_gene)]), df.loc[&quot;enhancer&quot;,:])) df </code></pre> <p><a href="https://i.stack.imgur.com/m2cjY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m2cjY.png" alt="enter image description here" /></a></p>
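<p>If, as mentioned in the question, you'd rather keep the counts in a separate DataFrame, a one-liner on top of the code above could be:</p> <pre><code>result = df.loc[["difference"]]  # one-row DataFrame with the difference counts
</code></pre>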
python|pandas
0
9,878
64,851,635
How can I get a substring of a row in a column returned by loc?
<p>I have a dataframe (df) which has the column 'Date Created'.</p> <p>I need to splice the string inside 'Date Created' so that I'm only left with the numerical day instead of the entire datetime string (For example, I want to cut 'Sun Mar 03 2020 11:52 pm' to &quot;2020/03/&quot;+ 'string in Date Created'[8:10] (9th and 10th character).</p> <p>I tried this but I get a copy warning:</p> <pre><code>for x in range(len(df)): df.iloc[x]['date'] = &quot;202003&quot; + (df.iloc[x]['Date Created'])[8:10] </code></pre> <p>I go to the documentation and it has instructions on how to use loc to get substrings but they do so for a very specific example case that doesn't apply to my code.</p> <p>I tried this then:</p> <pre><code>df['date'] = '' df.loc[:,['Date Created']] = &quot;202003&quot;+ (df.loc[:,['Date Created']])[8:10] </code></pre> <p>But this also doesn't work. Can someone please help on how I can get the 9th and 10th character of each row of Date Created and assign that to a new column (or even replace the existing value in Date Created)? TIA!</p>
<p>I made up this dataframe.</p> <pre><code>df = pd.DataFrame({&quot;Date Created&quot;: [&quot;Sun Mar 03 2020 11:52 pm&quot;, &quot;Sun Mar 08 2020 11:52 pm&quot;, &quot;Sun Mar 09 2020 11:52 pm&quot;]}) </code></pre> <p>So with</p> <pre><code>df.loc[:, &quot;Date Created&quot;] = &quot;202003&quot; + df[&quot;Date Created&quot;].str[8:10] </code></pre> <p>You'll get this</p> <p><a href="https://i.stack.imgur.com/MaiZd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MaiZd.jpg" alt="enter image description here" /></a></p>
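<p>If you'd rather keep <code>Date Created</code> intact and write to a new column, as also mentioned in the question, the same pattern works:</p> <pre><code>df["date"] = "202003" + df["Date Created"].str[8:10]
</code></pre>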
python|pandas|string|dataframe|copy
1
9,879
64,877,604
How to shift a histogram to the right?
<p>I have a 1D array for a histogram with bin boundaries:</p> <pre><code>bins = np.arange(1, 6,2)
data = np.array([1,2,3,4,4,4,3,2,3,3,3])
plt.hist(data, bins=bins, histtype='step')
</code></pre> <p>But I want to shift the histogram horizontally to the right by 1 unit on the x-axis. How do I do that? I don't want the shape of the bins to change, just the whole histogram to shift.</p> <p>I tried doing:</p> <pre><code>x0 = 1
plt.hist(data+x0, bins=np.arange(1+x0, 6+x0,2+x0), histtype='step')
</code></pre> <p>but it changes the bin boundaries. How do I amend this?</p>
<p>You were close, but in your <code>np.arange</code> you don't want to increase the step size. So:</p> <pre><code>plt.hist(data+x0, bins=np.arange(1+x0, 6+x0, 2), histtype='step') </code></pre> <p>Both graphs:</p> <p><a href="https://i.stack.imgur.com/SJ0oI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SJ0oI.png" alt="enter image description here" /></a></p>
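<p>For reference, a complete sketch plotting both histograms, using the same made-up data as the question:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

bins = np.arange(1, 6, 2)
data = np.array([1, 2, 3, 4, 4, 4, 3, 2, 3, 3, 3])
x0 = 1  # horizontal shift

plt.hist(data, bins=bins, histtype='step', label='original')
plt.hist(data + x0, bins=np.arange(1 + x0, 6 + x0, 2), histtype='step', label='shifted')
plt.legend()
plt.show()
</code></pre>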
python|numpy|jupyter|histogram
1
9,880
64,990,444
merge 4 columns into two columns
<p>I have a DataFrame with four repeating columns that I would like to merge into two columns.</p> <pre><code>Product ID  Year_X  Month_X  Year_Y  Month_Y
1           2020    1        2014    11
1           2019    2        2018    10
2           2022    5        2010    8
2           2021    1        2019    9
</code></pre> <p>The output should be like this:</p> <pre><code>Product ID  Year  Month
1           2014  11
1           2018  10
1           2019  2
1           2020  1
2           2010  8
2           2019  9
2           2021  1
2           2022  5
</code></pre> <p>Thank you</p>
<p>Create unique index first by <code>reset_index</code> then you can use <code>wide_to_long</code>:</p> <pre><code>print (pd.wide_to_long(df.reset_index(), stubnames=[&quot;Year&quot;, &quot;Month&quot;], i=&quot;index&quot;, j=&quot;Key&quot;, sep=&quot;_&quot;, suffix=&quot;\w*&quot;) .reset_index(drop=True)) Product ID Year Month 0 1 2020 1 1 1 2019 2 2 2 2022 5 3 2 2021 1 4 1 2014 11 5 1 2018 10 6 2 2010 8 7 2 2019 9 </code></pre>
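<p>To also match the sorted order of the question's expected output, the result can be assigned and sorted afterwards (same column names as above):</p> <pre><code>out = (pd.wide_to_long(df.reset_index(), stubnames=["Year", "Month"],
                       i="index", j="Key", sep="_", suffix="\w*")
         .reset_index(drop=True)
         .sort_values(["Product ID", "Year"])
         .reset_index(drop=True))
</code></pre>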
python|pandas
2
9,881
40,137,529
How to get pixel matrix of recognized objects when classifying a image using Tensorflow?
<p>I want to get the pixel matrix of objects within an image when the image is classified by TensorFlow (classify_image.py).</p> <p>In other words, the recognized objects must be segmented first. E.g. if there is a computer in the picture, I want to get all the pixels which belong to the computer.</p> <p>But so far I cannot find a sample in TensorFlow's tutorials.</p> <p>The only thing I can get is the recognition result through TensorFlow's sample code,</p> <p>e.g.</p> <pre><code>softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
predictions = np.squeeze(predictions)

# Creates node ID --&gt; English string lookup.
node_lookup = NodeLookup()

top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
for node_id in top_k:
    human_string = node_lookup.id_to_string(node_id)
    score = predictions[node_id]
    print('%s (score = %.5f)' % (human_string, score))
</code></pre> <p>Does someone have an idea? Is this possible?</p>
<p>Here are some links to models for image segmentation using TensorFlow :</p> <ul> <li><a href="https://github.com/Russell91/TensorBox" rel="nofollow">TensorBox</a></li> <li><a href="https://github.com/fabianbormann/Tensorflow-DeconvNet-Segmentation" rel="nofollow">DeConvNet</a></li> <li><a href="https://github.com/MarvinTeichmann/Tensorflow-Segmentation-Toolkit" rel="nofollow">Segmentation Toolkit</a></li> </ul>
python|tensorflow|deep-learning|object-recognition
1
9,882
40,228,453
Delete row and column in symmetric array if all the values in a row (or column) do not satisfy a given condition
<p>I've got a sparse, symmetric array and I'm trying to delete a row and column of that array if all the individual entries of a given row (and column) do not satisfy some threshold condition. For example if </p> <pre><code>min_value = 2

a = np.array([[2, 2, 1, 0, 0],
              [2, 0, 1, 4, 0],
              [1, 1, 0, 0, 1],
              [0, 4, 0, 1, 0],
              [0, 0, 1, 0, 0]])
</code></pre> <p>I would like to keep the rows (and columns) where it has at least a value of 2, so that with the above example this would yield</p> <pre><code>a_new = np.array([[2, 2, 0],
                  [2, 0, 4],
                  [0, 4, 1]])
</code></pre> <p>So I would lose rows 3 and 5 (and columns 3 and 5) since every entry is less than 2. I've had a look at <a href="https://stackoverflow.com/questions/19010990/how-could-i-remove-the-rows-of-an-array-if-one-of-the-elements-of-the-row-does-n">How could I remove the rows of an array if one of the elements of the row does not satisfy a condition?</a>, <a href="https://stackoverflow.com/questions/38607586/delete-columns-based-on-repeat-value-in-one-row-in-numpy-array">Delete columns based on repeat value in one row in numpy array</a> and <a href="https://stackoverflow.com/questions/22028555/delete-a-column-in-a-multi-dimensional-array-if-all-elements-in-that-column-sati">Delete a column in a multi-dimensional array if all elements in that column satisfy a condition</a> but the marked solutions do not fit what I'm attempting to accomplish. </p> <p>I was thinking of performing something similar to:</p> <pre><code>a_new = []
min_count = 2

for row in a:
    for i in row:
        if i &gt;= min_count:
            a_new.append(row)

print(a_new)
</code></pre> <p>but this doesn't work, since it doesn't delete a bad column, and if there are two (or more) instances where a value is greater than the threshold it appends the row multiple times.</p>
<p>You could have a vectorized solution to solve it as shown below -</p> <pre><code># Get valid mask mask = a &gt;= min_value # As per requirements, look for ANY match along rows and cols and # use those masks to index into row and col dim of input array with # 1D open meshes from np.ix_ and thus select a 2D slice out of it out = a[np.ix_(mask.any(1),mask.any(0))] </code></pre> <p>A simpler way to express it would be by selecting rows and then columns, like so -</p> <pre><code>a[mask.any(1)][:,mask.any(0)] </code></pre> <p><em>Abusing</em> the symmetric nature of the input array, it would simplify to -</p> <pre><code>mask0 = (a&gt;=min_value).any(0) out = a[np.ix_(mask0,mask0)] </code></pre> <p>Sample run -</p> <pre><code>In [488]: a Out[488]: array([[2, 2, 1, 0, 0], [2, 0, 1, 4, 0], [1, 1, 0, 0, 1], [0, 4, 0, 1, 0], [0, 0, 1, 0, 0]]) In [489]: min_value Out[489]: 2 In [490]: mask0 = (a&gt;=min_value).any(0) In [491]: a[np.ix_(mask0,mask0)] Out[491]: array([[2, 2, 0], [2, 0, 4], [0, 4, 1]]) </code></pre> <hr> <p>Alternatively, we can use row and column indices of valid mask, like so -</p> <pre><code>r,c = np.where(a&gt;=min_value) out = a[np.unique(r)[:,None],np.unique(c)] </code></pre> <p>Again <em>abusing</em> the symmetric nature, the simplified version would be -</p> <pre><code>r = np.unique(np.where(a&gt;=min_value)[0]) out = a[np.ix_(r,r)] </code></pre> <p><code>r</code> could also be obtained with a mix of boolean operations -</p> <pre><code>r = np.flatnonzero((a&gt;=min_value).any(0)) </code></pre>
python|arrays|numpy
1
9,883
44,123,874
'DataFrame' object has no attribute 'sort'
<p>I'm facing a problem here: in my Python environment I have installed <code>numpy</code>, but I still get this error:</p> <blockquote> <p><strong>'DataFrame' object has no attribute 'sort'</strong></p> </blockquote> <p>Can anyone give me some idea..</p> <p>This is my code :</p> <pre><code>final.loc[-1] =['', 'P','Actual']
final.index = final.index + 1  # shifting index
final = final.sort()
final.columns=[final.columns,final.iloc[0]]
final = final.iloc[1:].reset_index(drop=True)
final.columns.names = (None, None)
</code></pre>
<p><code>sort()</code> was deprecated for DataFrames in favor of either:</p> <ul> <li><a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="noreferrer"><code>sort_values()</code></a> to <strong>sort by column(s)</strong></li> <li><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="noreferrer"><code>sort_index()</code></a> to <strong>sort by the index</strong> </li> </ul> <p><code>sort()</code> was deprecated (but still available) in Pandas with release 0.17 (2015-10-09) with the introduction of <code>sort_values()</code> and <code>sort_index()</code>. It was removed from Pandas with release 0.20 (2017-05-05).</p>
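<p>A minimal before/after sketch on a made-up frame:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'A': [3, 1, 2]})

# before pandas 0.20: df = df.sort('A')
df = df.sort_values(by='A')  # sort by column(s)
df = df.sort_index()         # sort by the index
</code></pre>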
python|pandas|numpy|dataframe
270
9,884
69,369,193
Replace string in column with other text
<p>This seems like an elementary question with many online examples, but for some reason it does not work for me.</p> <p>I am trying to replace any cells in column 'A' that have the value = &quot;Facility-based testing-OH&quot; with the value = &quot;Facility based testing-OH&quot;. If you note, the only difference between the two is a single '-', however for my purposes I do not want to use the split function on a delimeter. Simply want to locate the values that need replacement.</p> <p>I have tried the following code, but none have worked.</p> <p>1st Method:</p> <pre><code>df = df.str.replace('Facility-based testing-OH','Facility based testing-OH') </code></pre> <p>2nd Method:</p> <pre><code>df['A'] = df['A'].str.replace(['Facility-based testing-OH'], &quot;Facility based testing-OH&quot;), inplace=True </code></pre> <p>3rd Method</p> <pre><code>df.loc[df['A'].isin(['Facility-based testing-OH'])] = 'Facility based testing-OH' </code></pre>
<p>Try:</p> <pre class="lang-py prettyprint-override"><code>df[&quot;A&quot;] = df[&quot;A&quot;].str.replace( &quot;Facility-based testing-OH&quot;, &quot;Facility based testing-OH&quot;, regex=False ) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> A 0 Facility based testing-OH 1 Facility based testing-OH </code></pre> <hr /> <p><code>df</code> used:</p> <pre class="lang-none prettyprint-override"><code> A 0 Facility-based testing-OH 1 Facility based testing-OH </code></pre>
python|pandas|string|replace
0
9,885
69,542,117
Pandas missing value; with ffill and add comment
<p>following through pandas documentation for <code>df.fillna(method=&quot;ffill&quot;)</code>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer">here</a>. How to add a new column with comments?</p> <pre><code> df_1 = pd.DataFrame([['2021-01-01', 'Supp_1', 'Product_1', 1],
                      ['2021-02-01', 'Supp_1', 'Product_1', ''],
                      ['2021-03-01','Supp_1', 'Product_1', np.nan],
                      ['2021-04-01', 'Supp_1', 'Product_1', 1.25],
                      ['2021-01-01', 'Supp_1', 'Product_2', 1.5],
                      ['2021-02-01', 'Supp_1', 'Product_2', ''],
                      ['2021-03-01','Supp_1', 'Product_2', np.nan],
                      ['2021-04-01', 'Supp_1', 'Product_2', 1.75]],
                     columns=['Date','Supplier','Product','Cost'])

         Date Supplier    Product  Cost
0  2021-01-01   Supp_1  Product_1     1
1  2021-02-01   Supp_1  Product_1
2  2021-03-01   Supp_1  Product_1   NaN
3  2021-04-01   Supp_1  Product_1  1.25
4  2021-01-01   Supp_1  Product_2   1.5
5  2021-02-01   Supp_1  Product_2
6  2021-03-01   Supp_1  Product_2   NaN
7  2021-04-01   Supp_1  Product_2  1.75
</code></pre> <p>Expected df_2,</p> <pre><code>         Date Supplier    Product  Cost      Cost_Assumption
0  2021-01-01   Supp_1  Product_1  1.00               Actual
1  2021-02-01   Supp_1  Product_1  1.00  Cost per 2021-01-01
2  2021-03-01   Supp_1  Product_1  1.00  Cost per 2021-01-01
3  2021-04-01   Supp_1  Product_1  1.25               Actual
4  2021-01-01   Supp_1  Product_2  1.50               Actual
5  2021-02-01   Supp_1  Product_2  1.50  Cost per 2021-01-01
6  2021-03-01   Supp_1  Product_2  1.50  Cost per 2021-01-01
7  2021-04-01   Supp_1  Product_2  1.75               Actual
</code></pre>
<p>Could you not create the Cost_Assumption column first based on the Cost column?</p> <pre><code>df_1.loc[df_1['Cost'] == '', 'Cost_Assumption'] = 'Cost per 2021-01-01' df_1.loc[df_1['Cost'].isnull(), 'Cost_Assumption'] = 'Cost per 2021-01-01' df_1['Cost_Assumption'] = df_1['Cost_Assumption'].fillna('Actual') </code></pre> <p>And then ffill your cost column</p>
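<p>For completeness, a sketch of that final ffill step; the empty strings have to be converted to NaN first, since <code>ffill</code> only fills missing values:</p> <pre><code>import numpy as np

df_1['Cost'] = df_1['Cost'].replace('', np.nan).ffill()
</code></pre> <p>Note that the hard-coded 'Cost per 2021-01-01' comment happens to match this example; a fully general solution would derive the date from the last 'Actual' row in each group.</p>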
python|pandas|dataframe
0
9,886
69,632,464
Ranking last row of dataframe by entire column
<p>I'd like to rank the values in the last row of a dataframe by the corresponding column above and return a list of the ranks by the 'min' amount. Example below:</p> <pre><code>df = [[10, 2, 8, 4], [12, 6, 4, 1], [8, 4, 3, 2], [9, 3, 4, 6]] df = pd.DataFrame(df) print(df) 0 1 2 3 0 10 2 8 4 1 12 6 4 1 2 8 4 3 2 3 9 3 4 6 </code></pre> <p>The desired result would rank 9 in column 0 against the entire column, so it would return a 3 for that element in the list. In column 2, &quot;4&quot; is ranked 2nd in that column (hence the 'min' method of ranking). Desired result below:</p> <pre><code>result = [3, 3, 2, 1] </code></pre>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rank.html" rel="nofollow noreferrer"><code>rank</code></a></p> <pre><code>rs = df.rank(method=&quot;dense&quot;, ascending=False).iloc[-1].tolist() print(rs) </code></pre> <p><strong>Output</strong></p> <pre><code>[3.0, 3.0, 2.0, 1.0] </code></pre>
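<p>Since the question asked for the <code>'min'</code> ranking method, swapping that in (and casting to int) gives the same result on this data:</p> <pre><code>rs = df.rank(method="min", ascending=False).iloc[-1].astype(int).tolist()
# [3, 3, 2, 1]
</code></pre>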
python|pandas
2
9,887
69,354,637
How to select data after date which is the index of the max value of columns for each group by pandas?
<pre><code>            ts_code     low    high
2021-08-01  881105.TI  1485.0  1629.0
2021-08-01  885452.TI  2216.0  2391.0
2021-08-01  885525.TI  7427.0  8552.0
2021-08-01  885641.TI   621.0   671.0
2021-08-08  881105.TI  1496.0  1623.0
2021-08-08  885452.TI  2297.0  2406.0
2021-08-08  885525.TI  7300.0  7868.0
2021-08-08  885641.TI   668.0   691.0
2021-08-15  881105.TI  1606.0  1776.0
2021-08-15  885452.TI  2352.0  2459.0
2021-08-15  885525.TI  7525.0  8236.0
2021-08-15  885641.TI   685.0   719.0
2021-08-22  881105.TI  1656.0  1804.0
2021-08-22  885452.TI  2329.0  2415.0
2021-08-22  885525.TI  7400.0  8270.0
2021-08-22  885641.TI   691.0   720.0
</code></pre> <p>The type of index is <code>datetime64[ns]</code>.</p> <p><strong>Goal</strong></p> <ul> <li>select the rows on and after the date where the <code>high</code> column reaches its maximum, for each <code>ts_code</code> group.</li> </ul> <p><strong>Expected</strong></p> <pre><code>            ts_code     low    high
2021-08-22  881105.TI  1656.0  1804.0
2021-08-15  885452.TI  2352.0  2459.0
2021-08-22  885452.TI  2329.0  2415.0
2021-08-01  885525.TI  7427.0  8552.0
2021-08-08  885525.TI  7300.0  7868.0
2021-08-15  885525.TI  7525.0  8236.0
2021-08-22  885525.TI  7400.0  8270.0
2021-08-22  885641.TI   691.0   720.0
</code></pre> <p>For example, the max date of <code>881105.TI</code> is <code>2021-08-22</code> and of <code>885525.TI</code> it is <code>2021-08-01</code>. The output for each <code>ts_code</code> contains the rows on and after the related max date.</p> <p><strong>Try and ref</strong></p> <ul> <li><a href="https://stackoverflow.com/questions/53842287/select-rows-with-highest-value-from-groupby">This post</a> returns rows with highest value.</li> </ul>
<p>Let us try comparing the index against <code>transform</code> with <code>idxmax</code>:</p> <pre><code>out = df[df.index >= df.groupby('ts_code')['high'].transform('idxmax')]
out

              ts_code     low    high
2021-08-01  885525.TI  7427.0  8552.0
2021-08-08  885525.TI  7300.0  7868.0
2021-08-15  885452.TI  2352.0  2459.0
2021-08-15  885525.TI  7525.0  8236.0
2021-08-22  881105.TI  1656.0  1804.0
2021-08-22  885452.TI  2329.0  2415.0
2021-08-22  885525.TI  7400.0  8270.0
2021-08-22  885641.TI   691.0   720.0
</code></pre>
python|pandas
0
9,888
69,433,422
How to add the same vector to all vectors in numpy array without loops?
<p>I am trying to plot a 3D mathematical expression using numpy and matplotlib:</p> <p>The mathematical expression is:</p> <p><code>Z(x,y) = exp[(v - v_t)*(v - v_t)']</code></p> <p>while:</p> <p><code>v= [x, y]</code> and <code>v_t = [x_t, y_t]</code></p> <p>initiating vectors through the following code:</p> <pre><code>import numpy as np CONST = 1 x = np.linspace(-5,5,20) y = np.linspace(-5,5,20) v = np.array([x,y]) v_t = np.array([CONST,CONST]) </code></pre> <p>The question is, how can I execute the subtraction of v_t from each vector in array v, in a single command without looping?</p> <p>the result should be something like so:</p> <p><code>v = ([-5, -4, ... , 4, 5], [-5, -4, ... , 4, 5])</code></p> <p><code>v_t = ([1,1])</code></p> <p><code>v - v_t = (x_i - CONST, y_i - CONST) = ([-6, -5, ... , 3, 4],[-6, -5, ... , 3, 4])</code></p>
<pre><code>In [63]: CONST = 1 ...: x = np.linspace(-5,5,11) ...: y = np.linspace(-5,5,11) ...: v = np.array([x,y]) ...: v_t = np.array([CONST,CONST]) </code></pre> <p>The resulting arrays and shapes:</p> <pre><code>In [64]: v Out[64]: array([[-5., -4., -3., -2., -1., 0., 1., 2., 3., 4., 5.], [-5., -4., -3., -2., -1., 0., 1., 2., 3., 4., 5.]]) In [65]: v_t Out[65]: array([1, 1]) In [66]: v.shape Out[66]: (2, 11) In [67]: v_t.shape Out[67]: (2,) </code></pre> <p>By the rules of broadcasting, we need to change <code>v_t</code> to (2,1):</p> <pre><code>In [68]: v - v_t[:,None] Out[68]: array([[-6., -5., -4., -3., -2., -1., 0., 1., 2., 3., 4.], [-6., -5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]]) </code></pre>
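<p>Equivalently, an explicit <code>reshape</code> can replace the <code>None</code> indexing:</p> <pre><code>v - v_t.reshape(-1, 1)  # same as v - v_t[:, None]
</code></pre>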
python|arrays|numpy
1
9,889
69,307,041
Getting RESNet18 to work with float32 data
<p>I have float32 data that I am trying to get ResNet18 to work with. I am using the ResNet model in torchvision (and PyTorch Lightning) and modified it to use one-channel (grayscale) data like so:</p> <pre><code>class ResNetMSTAR(pl.LightningModule):
  def __init__(self):
    super().__init__()
    # define model and loss
    self.model = resnet18(num_classes=3)
    self.model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    self.loss = nn.CrossEntropyLoss()

  @auto_move_data # this decorator automatically handles moving your tensors to GPU if required
  def forward(self, x):
    return self.model(x)

  def training_step(self, batch, batch_no):
    # implement single training step
    x, y = batch
    logits = self(x)
    loss = self.loss(logits, y)
    return loss

  def configure_optimizers(self):
    # choose your optimizer
    return torch.optim.RMSprop(self.parameters(), lr=0.005)
</code></pre> <p>When I try to run this model I am getting the following error:</p> <pre><code>File &quot;/usr/local/lib64/python3.6/site-packages/torch/nn/functional.py&quot;, line 2824, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward
</code></pre> <p>Is there anything that I can do differently to keep this error from happening?</p>
<p>The problem is that the <code>y</code> you're feeding your cross entropy loss is not a LongTensor, but a FloatTensor. <code>CrossEntropyLoss</code> expects a LongTensor as the target, and raises this error otherwise.</p> <p>This is an ugly fix:</p> <pre><code>x, y = batch
y = y.long()
</code></pre> <p>But what I recommend you to do is to go to where the dataset is defined and make sure you are generating long targets; this way you won't reproduce this error if you change how your training loop works.</p>
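<p>A sketch of that dataset-level fix, assuming a map-style dataset with hypothetical <code>images</code> and <code>labels</code> arrays:</p> <pre><code>import torch
from torch.utils.data import Dataset

class MSTARDataset(Dataset):  # hypothetical dataset class
    def __init__(self, images, labels):
        self.images = images
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        x = torch.as_tensor(self.images[idx], dtype=torch.float32)
        # long targets, as required by nn.CrossEntropyLoss
        y = torch.as_tensor(self.labels[idx], dtype=torch.long)
        return x, y
</code></pre>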
machine-learning|pytorch|computer-vision|cross-entropy|pytorch-lightning
2
9,890
69,628,672
How to divide a numpy array elementwise by another numpy array of lower dimension
<p>Let's say I have a numpy array <code>[[0,1],[3,4],[5,6]]</code> and want to divide it elementwise by <code>[1,2,0]</code>. The desired result will be <code>[[0,1],[1.5,2],[0,0]]</code>. So if the division is by zero, then the result is zero. I only found a way to do it in pandas dataframe with div command, but couldn't find it for numpy arrays and conversion to dataframe does not seem like a good solution.</p>
<p>You could wrap your operation with <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> to assign the invalid values to <code>0</code>:</p> <pre><code>&gt;&gt;&gt; np.where(d[:,None], x/d[:,None], 0) array([[0. , 1. ], [1.5, 2. ], [0. , 0. ]]) </code></pre> <p>This will still raise a warning though because we're not avoiding the division by zero:</p> <pre><code>/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in `true_divide` &quot;&quot;&quot;Entry point for launching an IPython kernel. </code></pre> <p>A better way is to provide a mask to <a href="https://numpy.org/doc/stable/reference/generated/numpy.divide.html" rel="nofollow noreferrer"><code>np.divide</code></a> with the <code>where</code> argument:</p> <pre><code>&gt;&gt;&gt; np.divide(x, d[:,None], where=d[:,None] != 0) array([[0. , 1. ], [1.5, 2. ], [0. , 0. ]]) </code></pre>
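<p>For reference, the arrays assumed above, taken from the question, would be:</p> <pre><code>import numpy as np

x = np.array([[0, 1], [3, 4], [5, 6]])
d = np.array([1, 2, 0])
</code></pre>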
python|numpy
1
9,891
41,138,822
How to create a string type tensor in tensorflow C api?
<p>What exactly should the <code>data</code> below in the parameter list be?</p> <pre><code>TF_Tensor* tensorStr = TF_NewTensor(TF_STRING, nullptr, 0, &amp;data[0], 8, no_op, nullptr);
</code></pre> <p>I tried:</p> <pre><code>char * data = "blah";
char* data[] = {"blah"};
char data[1][4] = {{'b','l','a','h'}};
</code></pre> <p>all out of luck. When fed as the input, I always get:</p> <pre><code>Malformed TF_STRING tensor; element 0 out of range
</code></pre>
<p>Example of valid (but a bit ugly) code which creates a string tensor:</p> <pre><code>std::string input_str = "abracdabra"; // any input string
size_t encoded_size = TF_StringEncodedSize(input_str.size());
size_t total_size = 8 + encoded_size; // 8 extra bytes - for start_offset
char *input_encoded = (char*)malloc(total_size);
for (int i =0; i &lt; 8; ++i) { // fills start_offset
    input_encoded[i] = 0;
}
TF_StringEncode(input_str.c_str(), input_str.size(), input_encoded+8, encoded_size, status); // fills the rest of tensor data
if (TF_GetCode(status) != TF_OK){
    fprintf(stderr, "ERROR: something wrong with encoding: %s", TF_Message(status));
    return 1;
}
TF_Tensor* input = TF_NewTensor(TF_STRING, NULL, 0, input_encoded, total_size, &amp;Deallocator, 0);
</code></pre> <p>Why it works: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h#L213" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h#L213</a> According to this link, the data for a string tensor consists of two parts. The last part is the input string encoded via the TF_StringEncode function. The first part is the 'start_offset' array: for each string element it stores a 64-bit offset to where that element's encoded bytes begin, so for a single string eight zero bytes do the trick.</p> <p>Another example of tensor creation could be found in the C API tests: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api_test.cc#L1934" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api_test.cc#L1934</a></p>
c|tensorflow
4
9,892
40,885,983
Pandas: Setting different colors for fliers within one boxplot
<p>I would like to set different colors for outliers in a boxplot based on categories. </p> <pre><code>f = plt.figure() ax = f.add_subplot(111) df = pd.DataFrame({"X":[-100,-10,0,0,0,10,100], "Category":["A","A","A","A","B","B","B",]}) bp = df.boxplot("X", return_type="dict", ax=ax, grid=False) ax.set_ylim(-110,110) plt.text(1,90,"this flier red",ha='center',va='center') plt.text(1,-90,"this flier blue",ha='center',va='center') </code></pre> <p><a href="https://i.stack.imgur.com/1u6Mv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1u6Mv.png" alt="Different flier colors in boxplot"></a></p> <p>How can I give the fliers (crosses above and below the caps) different colors?</p> <p>I know that I can set different colors for the whiskers by</p> <pre><code>bp["whiskers"][0].set_color("b") bp["whiskers"][1].set_color("r") </code></pre> <p>and it makes sense that <code>bp["whiskers"]</code> returns a list of 2 Line objects (one for the top whisker and one for the bottom one). But for <code>bp["fliers"]</code> I only get one list element (<code>bp["fliers"].set_color("r")</code> doesn't even do anything.</p> <p>Thanks for the help.</p> <p>Max</p>
<p>Okay, this is one solution. <code>bp[&quot;fliers&quot;][0].get_data()</code> returns a tuple with the x-y values. Then one just has to plot via</p> <pre><code>ax.plot([1],[bp["fliers"][0].get_data()[1][0]], 'b+')
ax.plot([1],[bp["fliers"][0].get_data()[1][1]], 'r+')
</code></pre> <p><a href="https://i.stack.imgur.com/Nv3sN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nv3sN.png" alt="enter image description here"></a></p>
python|pandas|boxplot
2
9,893
41,066,244
Tensorflow: 'module' object has no attribute 'scalar_summary'
<p>I tried to run the following code to test my TensorBoard; however, when I ran the program, there was an error saying:</p> <pre><code>'module' object has no attribute 'scalar_summary'
</code></pre> <p>I want to know how I can fix this issue, thanks.</p> <p>The following is the system info:</p> <ul> <li>Operating System: Ubuntu 16.04 LTS</li> <li>Tensorflow version: 0.12rc (master)</li> <li>Running environment: Jupyter Notebook</li> </ul> <p><strong>Test program and Output:</strong> <a href="https://i.stack.imgur.com/qgt9G.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qgt9G.png" alt="enter image description here"></a></p>
<p>The <code>tf.scalar_summary()</code> function was moved in the master branch, after the 0.12 release. You can now find it as <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/summary.html#scalar"><code>tf.summary.scalar()</code></a>.</p>
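<p>A minimal before/after sketch, assuming a scalar loss tensor:</p> <pre><code>import tensorflow as tf

loss = tf.constant(0.5)  # hypothetical scalar loss
# TF &lt;= 0.12: summary_op = tf.scalar_summary('loss', loss)
summary_op = tf.summary.scalar('loss', loss)
</code></pre>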
tensorflow
48
9,894
41,191,911
Why does `scipy.interpolate.griddata` fail for readonly arrays?
<p>I have some data which I try to interpolate using <code>scipy.interpolate.griddata</code>. In my use-case I marked some of the numpy arrays read-only, which apparently breaks the interpolation:</p> <pre><code>import numpy as np from scipy import interpolate x0 = 10 * np.random.randn(100, 2) y0 = np.random.randn(100) x1 = np.random.randn(3, 2) x0.flags.writeable = False # x1.flags.writeable = False interpolate.griddata(x0, y0, x1) </code></pre> <p>yields the following exception:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-14-a6e09dbdd371&gt; in &lt;module&gt;() 6 # x1.flags.writeable = False 7 ----&gt; 8 interpolate.griddata(x0, y0, x1) /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/ndgriddata.pyc in griddata(points, values, xi, method, fill_value, rescale) 216 ip = LinearNDInterpolator(points, values, fill_value=fill_value, 217 rescale=rescale) --&gt; 218 return ip(xi) 219 elif method == 'cubic' and ndim == 2: 220 ip = CloughTocher2DInterpolator(points, values, fill_value=fill_value, scipy/interpolate/interpnd.pyx in scipy.interpolate.interpnd.NDInterpolatorBase.__call__ (scipy/interpolate/interpnd.c:3930)() scipy/interpolate/interpnd.pyx in scipy.interpolate.interpnd.LinearNDInterpolator._evaluate_double (scipy/interpolate/interpnd.c:5267)() scipy/interpolate/interpnd.pyx in scipy.interpolate.interpnd.LinearNDInterpolator._do_evaluate (scipy/interpolate/interpnd.c:6006)() /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/interpnd.so in View.MemoryView.memoryview_cwrapper (scipy/interpolate/interpnd.c:17829)() /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/interpnd.so in View.MemoryView.memoryview.__cinit__ (scipy/interpolate/interpnd.c:14104)() ValueError: buffer source array is read-only </code></pre> <p>Clearly, the interpolation function doesn't like that the arrays are write-protected. However, I don't understand why they want to change this – I certainly don't expect my input to be mutated by a call to the interpolation function and this is also not mentioned in the documentation as far as I can tell. Why would the function behave like this?</p> <p>Note that setting <code>x1</code> readonly instead of <code>x0</code> leads to a similar error.</p>
<p>The <a href="https://github.com/scipy/scipy/blob/master/scipy/interpolate/interpnd.pyx" rel="nofollow noreferrer">relevant code</a> is written in Cython, and when Cython requests a memoryview of the input array, <a href="https://mail.python.org/pipermail/cython-devel/2013-February/003396.html" rel="nofollow noreferrer">it always asks for a writeable one</a>, even if you don't need it.</p> <p>Since an array flagged as non-writeable will refuse to provide a writeable memoryview, the code fails, even though it didn't need to write to the array in the first place.</p>
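<p>A minimal workaround, assuming you can afford the extra memory, is to hand <code>griddata</code> writable copies while keeping your originals read-only:</p> <pre><code>interpolate.griddata(x0.copy(), y0, x1.copy())  # copies are writable by default
</code></pre>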
python|numpy|scipy
4
9,895
54,044,050
Iterate over columns of a DataFrame and assign values
<p>I have a one-column DataFrame (data), indexed by ordered dates, and I want to create a second DataFrame with p columns, assigning to each column a shifted version of data. I.e., I want to see in the first column data.shift(1), in the second column data.shift(2), etc. My implementation is as follows:</p> <pre><code>lagged_data = pd.DataFrame(index = data.index, columns=[i+1 for i in range(p)])

for i in range(p):
    lagged_data.iloc[:,i] = data.shift(i+1)
</code></pre> <p>However, after the execution only the first column is updated, while all the others remain filled with np.nan. See below the result (with p=3):</p> <pre><code>print(lagged_data.head())

          1     2    3
Date
gen-75   NaN   NaN  NaN
feb-75   0.03  NaN  NaN
mar-75   0.04  NaN  NaN
apr-75  -0.04  NaN  NaN
mag-75   0.04  NaN  NaN
</code></pre> <p>Oddly enough, by repeating the same loop one more time, ALL columns are populated correctly. I really can't see the reason for this behaviour; I have also tried to create a copy by doing</p> <pre><code> lagged_data.iloc[:,i] = data.shift(i+1).copy()
</code></pre> <p>but this gives the same result as before.</p>
<h3>Assign series to series</h3> <p>You are assigning a dataframe to a series. While this gives a result, you shouldn't <em>expect</em> this to work. Instead, assign a series to a series and use <code>pd.Series.shift</code>:</p> <pre><code>data = pd.DataFrame({'A': [1, 2, 3, 4, 5]}) lagged_data = pd.DataFrame(index=data.index, columns=[i+1 for i in range(3)]) for i in range(3): lagged_data.iloc[:,i] = data.iloc[:, 0].shift(i + 1) print(lagged_data) # 1 2 3 # 0 NaN NaN NaN # 1 1.0 NaN NaN # 2 2.0 1.0 NaN # 3 3.0 2.0 1.0 # 4 4.0 3.0 2.0 </code></pre> <p>Notice <code>data</code> is a <code>pd.DataFrame</code> object, while <code>data.iloc[:, 0]</code> is a <code>pd.Series</code> object.</p> <h3><a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat</code></a> with a list comprehension</h3> <p>In this case, you can use <code>pd.concat</code> with a list comprehension instead, specifying the <code>keys</code> argument and <code>axis=1</code>:</p> <pre><code>res = pd.concat([data.iloc[:, 0].shift(i+1) for i in range(3)], keys=list(range(1, 4)), axis=1) </code></pre>
python|pandas|dataframe
2
9,896
53,932,097
AttributeError: 'psycopg2.extensions.cursor' object has no attribute 'fast_executemany'
<p><code>to_sql()</code> is too slow, so I am trying to resolve the problem, but when I run the following code I get:</p> <blockquote> <p>AttributeError: 'psycopg2.extensions.cursor' object has no attribute 'fast_executemany'</p> </blockquote> <pre><code>@event.listens_for(conn, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    if executemany:
        cursor.fast_executemany = True
        cursor.commit()
</code></pre>
<p>Use a single multi-row <code>INSERT</code> built with tuples; it is around 200 times faster than <code>executemany</code> in psycopg2:</p> <pre><code># .decode() is needed on Python 3, where mogrify returns bytes
args_str = ','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s,%s)", x).decode('utf-8') for x in tup)
cur.execute("INSERT INTO table VALUES " + args_str)
</code></pre> <p>It's equivalent to:</p> <pre><code>INSERT INTO table VALUES ('a', 'b', 'c'), ('a', 'b', 'c'), ('a', 'b', 'c'), ('a', 'b', 'c');
</code></pre>
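<p>As a side note, psycopg2 also ships a helper for exactly this pattern; a sketch assuming the same cursor <code>cur</code> and the same sequence of row tuples <code>tup</code>:</p> <pre><code>from psycopg2.extras import execute_values

execute_values(cur, "INSERT INTO table VALUES %s", tup, page_size=1000)
</code></pre>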
python|pandas|amazon-redshift|pandas-to-sql
4
9,897
53,921,226
How to broadcast a subset of a text string from a pandas dataframe column
<p>I am trying to extract the year and rainfall values from messy text strings stored in a dataframe column and save these to new columns. I did it via list comprehensions, after testing with different slicing methods unsuccessfully. Is list comprehension the best way to extract a subset of a string for broadcasting?</p> <p>Much thanks to all!</p> <pre><code>df = pd.DataFrame([' 1883 1 6.3 1.7 6 122.1 ---', ' 1883 2 8.0 2.8 2 69.8 ---', ' 1883 3 4.8 -1.6 23 29.6 ---',]) df['split'] = df[0].str.split() df['year'] = [df['split'][i][0] for i in df.index] df['rainfall'] = [float(df['split'][i][5]) for i in df.index] </code></pre>
<pre><code>df['split'] = df[0].str.split() df['year']=df['split'].map(lambda x:x[0]) df['rainfall']=df['split'].map(lambda x:x[5]) df=df[['year','rainfall']] df year rainfall 0 1883 122.1 1 1883 69.8 2 1883 29.6 </code></pre>
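<p>A more vectorised alternative, assuming the same whitespace-separated layout, is <code>str.split</code> with <code>expand=True</code>:</p> <pre><code>parts = df[0].str.split(expand=True)
df['year'] = parts[0]
df['rainfall'] = parts[5].astype(float)
</code></pre>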
python|pandas
1
9,898
52,656,351
How to replace specific rows (based on conditions) using values with similar features condition in pandas?
<p>I'm having trouble replacing specific values that satisfy a condition, using values selected by another condition.</p> <h3>Example of dataframe (df)</h3> <pre><code>  Gender  Surname  Ticket
0   masc  Family1     a12
1    fem  NoGroup     aa3
2    boy  Family1     125
3    fem  Family2     aa3
4    fem  Family4     525
5   masc  NoGroup     a52
</code></pre> <p>The condition to substitute the values in all rows of the df['Surname'] column is:</p> <p><code>if ((df['Gender']!= masc) &amp; (df['Surname'] == 'NoGroup'))</code></p> <p>The code must search for rows that have an equal Ticket and substitute the corresponding Surname value; otherwise keep the value that already exists ('NoGroup').</p> <p>In this example, the ['Surname'] value in row 1 ('NoGroup') should be replaced by 'Family2', which corresponds to row 3.</p> <p>I tried this way, but it did not work:</p> <pre><code>for i in zip((df['Gender']!='man') &amp; df['Surname']=='noGroup'):
    df['Surname'][i] = df.loc[df['Ticket']==df['Surname'][i]]
</code></pre>
<p>With Pandas you should aim for vectorised calculations rather than row-wise loops. Here's one approach. First convert selected values to <code>None</code>:</p> <pre><code>df.loc[df['Gender'].ne('masc') &amp; df['Surname'].eq('NoGroup'), 'Surname'] = None </code></pre> <p>Then create a series mapping from <code>Ticket</code> to <code>Surname</code> after a filter:</p> <pre><code>s = df[df['Surname'].notnull()].drop_duplicates('Ticket').set_index('Ticket')['Surname'] </code></pre> <p>Finally, map null values with the calculated series:</p> <pre><code>df['Surname'] = df['Surname'].fillna(df['Ticket'].map(s)) </code></pre> <p>Result:</p> <pre><code> Gender Surname Ticket 0 masc Family1 a12 1 fem Family2 aa3 2 boy Family1 125 3 fem Family2 aa3 4 fem Family4 525 5 masc NoGroup a52 </code></pre>
python|pandas
1
9,899
46,558,129
search column by dictionary key and replace by dictionary value
<p>I have a dictionary <code>d = {'nam': 'name', 'lin': 'link'}</code>.</p> <p>I have a data frame that has the below column names:</p> <p>nam1 nam2 nam3 nam_4 nam_5 lin1 lin2</p> <p>How do I replace the column names of the dataframe with the dictionary values?</p> <p>Thanks</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>Series.replace</code></a>, so first convert <code>index</code> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow noreferrer"><code>to_series</code></a>:</p> <pre><code>df = pd.DataFrame(columns=['nam1', 'nam2', 'nam3', 'nam_4', 'nam_5', 'lin1', 'lin2']) d={'nam':'name', 'lin':'link'} df.columns = df.columns.to_series().replace(d, regex=True) print (df) Empty DataFrame Columns: [name1, name2, name3, name_4, name_5, link1, link2] Index: [] </code></pre> <p>EDIT:</p> <pre><code>df = pd.DataFrame(columns=['nam1', 'nam2', 'nam3', 'nam_4', 'nam_5', 'lin1', 'A']) d={'nam':'name', 'lin':'link'} s = df.columns.to_series() pat = "(" + '|'.join(d.keys()) + ")" df.columns = s.str.extract(pat, expand=False).combine_first(s).replace(d) print (df) Empty DataFrame Columns: [name, name, name, name, name, link, A] Index: [] </code></pre>
python|pandas
0