Dataset schema (column, dtype, min to max of values or string lengths):
Unnamed: 0: int64, 0 to 378k
id: int64, 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, -10 to 5.87k
4,600
37,145,206
How to use summaries on tensorflow's retrain inception final layer
<p>I have been successfully using <a href="https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html" rel="nofollow">tensorflow's tutorial on retraining</a> the final layer to handle new classes and I would like to add some summaries to check how the cross-entropy is evolving.</p> <p>I have looked into the <a href="https://www.tensorflow.org/versions/r0.8/how_tos/summaries_and_tensorboard/index.html" rel="nofollow">documentation</a> and tried to replicate it but I wasn't successful. Right now, I can launch TensorBoard and see the graph but nothing else is displayed on the other tabs (events, images, histograms).</p> <p>Has anyone added summaries to the inception retraining example?</p> <p>Thanks in advance</p>
<p>I added TensorBoard summaries to the stock code for the TensorFlow image retraining <a href="https://www.tensorflow.org/versions/master/how_tos/image_retraining/index.html" rel="nofollow">tutorial</a>. You can checkout the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py" rel="nofollow">latest TensorFlow code on GitHub</a> if you want to use TensorBoard with the tutorial. I also wrote a <a href="http://maxmelnick.com/2016/07/04/visualizing-tensorflow-retrain.html" rel="nofollow">blog post</a> about it if you're interested in more details.</p>
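<p>The linked retrain.py is the full solution; the following is only a minimal sketch of the summary mechanics it relies on, assuming a TF 1.x-style API (the r0.8 docs this question links to used <code>tf.scalar_summary</code> and <code>tf.train.SummaryWriter</code> instead), with a toy graph standing in for the actual retraining graph:</p>
<pre><code>import tensorflow as tf  # assumes a TF 1.x-era API

# Toy stand-in for the retraining graph: a scalar fed in every step.
cross_entropy = tf.placeholder(tf.float32, name='cross_entropy')
tf.summary.scalar('cross_entropy', cross_entropy)

merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/retrain_logs', sess.graph)
    for step in range(100):
        summary = sess.run(merged, feed_dict={cross_entropy: 1.0 / (step + 1)})
        writer.add_summary(summary, step)
    writer.close()
</code></pre>
<p>Running <code>tensorboard --logdir /tmp/retrain_logs</code> should then populate the scalars/events tab rather than just the graph tab.</p>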
tensorflow|tensorboard
2
4,601
37,552,727
Subtracting Two Columns with a Groupby in Pandas
<p>I have a <code>dataframe</code> and would like to subtract two columns of the previous row, provided that the previous row has the same <code>Name</code> value. If it does not, then I would like it to yield <code>NaN</code> and fill with <code>-</code>. My <code>groupby</code> expression yields the error, <code>TypeError: 'Series' objects are mutable, thus they cannot be hashed</code>, which is very ambiguous. What am I missing?</p> <pre><code>import pandas as pd df = pd.DataFrame(data=[['Person A', 5, 8], ['Person A', 13, 11], ['Person B', 11, 32], ['Person B', 15, 20]], columns=['Names', 'Value', 'Value1']) df['diff'] = df.groupby('Names').apply(df['Value'].shift(1) - df['Value1'].shift(1)).fillna('-') print df </code></pre> <p>Desired Output:</p> <pre><code> Names Value Value1 diff 0 Person A 5 8 - 1 Person A 13 11 -3 2 Person B 11 32 - 3 Person B 15 20 -21 </code></pre>
<p>You can add <code>lambda x</code> and change <code>df['Value']</code> to <code>x['Value']</code>, similar with <code>Value1</code> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="noreferrer"><code>reset_index</code></a>:</p> <pre><code>df['diff'] = df.groupby('Names') .apply(lambda x: x['Value'].shift(1) - x['Value1'].shift(1)) .fillna('-') .reset_index(drop=True) print (df) Names Value Value1 diff 0 Person A 5 8 - 1 Person A 13 11 -3 2 Person B 11 32 - 3 Person B 15 20 -21 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.shift.html" rel="noreferrer"><code>DataFrameGroupBy.shift</code></a>:</p> <pre><code>df1 = df.groupby('Names')['Value','Value1'].shift() print (df1) Value Value1 0 NaN NaN 1 5.0 8.0 2 NaN NaN 3 11.0 32.0 df['diff'] = (df1.Value - df1.Value1).fillna('-') print (df) Names Value Value1 diff 0 Person A 5 8 - 1 Person A 13 11 -3 2 Person B 11 32 - 3 Person B 15 20 -21 </code></pre>
python|python-2.7|pandas
7
4,602
37,319,630
Python: return the row index of the minimum in a matrix
<p>I want to print the <strong>index of the row containing the minimum</strong> element of the matrix.</p> <p>My matrix is <code>matrix = [[22,33,44,55],[22,3,4,12],[34,6,4,5,8,2]]</code></p> <p>and the code is</p> <pre><code>matrix = [[22,33,44,55],[22,3,4,12],[34,6,4,5,8,2]] a = np.array(matrix) buff_min = matrix.argmin(axis = 0) print(buff_min) #index of the row containing the minimum element min = np.array(matrix[buff_min]) print(str(min.min(axis=0))) #print the minimum of that row print(min.argmin(axis = 0)) #index of the minimum print(matrix[buff_min]) # print the whole row containing the minimum </code></pre> <p>After running, my result is</p> <blockquote> <p>1<br> 3<br> 1<br> [22, 3, 4, 12]</p> </blockquote> <p>the first number should be 2, because <strong>the minimum is 2 in the third list</strong> ([34,6,4,5,8,2]), <strong>but it returns 1</strong>. It returns 3 as the minimum of the matrix. What's the error?</p>
<p>I am not sure which version of Python you are using; I tested this on Python 2.7 and 3.2. As mentioned, your syntax for <strong>argmin</strong> is not correct; it should be in the format</p> <pre><code>import numpy as np np.argmin(array_name, axis) </code></pre> <p>Next, while NumPy knows about arrays of arbitrary objects, <strong>it's optimized for homogeneous arrays of numbers with fixed dimensions</strong>. If you really need arrays of arrays, it is better to use a nested list. But depending on the intended use of your data, different data structures might be even better, e.g. a masked array if you have some invalid data points.</p> <p>If you really want flexible NumPy arrays, use something like this:</p> <pre><code>np.array([[22,33,44,55],[22,3,4,12],[34,6,4,5,8,2]], dtype=object) </code></pre> <p>However, this will create a one-dimensional array that stores references to lists, which means that you will lose most of the benefits of NumPy (vector processing, locality, slicing, etc.).</p> <p>Also, if you can resize your rows so that the matrix is rectangular, things might work; I haven't tested it, but conceptually that should be an easy solution. Still, <strong>I would prefer a nested list for this kind of input</strong> matrix.</p>
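<p>As a concrete sketch of the nested-list suggestion above (using the question's ragged matrix, plus a made-up rectangular array for the NumPy case):</p>
<pre><code>import numpy as np

matrix = [[22, 33, 44, 55], [22, 3, 4, 12], [34, 6, 4, 5, 8, 2]]

# Plain Python: works even though the sublists have different lengths.
row_idx = min(range(len(matrix)), key=lambda i: min(matrix[i]))
print(row_idx)               # 2
print(min(matrix[row_idx]))  # 2

# NumPy equivalent, but only for a rectangular (equal-length rows) matrix:
rect = np.array([[22, 33, 44], [22, 3, 4], [34, 6, 2]])
r, c = np.unravel_index(np.argmin(rect), rect.shape)
print(r, rect[r, c])         # 2 2
</code></pre>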
python|numpy|matrix|min|minimum
1
4,603
41,935,637
Create new column in pandas dataframe based on whether a value in the row reappears in dataframe
<p>I have a csv I've imported as a pandas dataframe which looks like this:</p> <pre><code>TripId, DeviceId, StartDate, EndDate 817d0e7, dbf69e23, 2015-04-18T13:54:27.000Z, 2015-04-18T14:59:06.000Z 817d0f5, fkri449g, 2015-04-18T13:59:21.000Z, 2015-04-18T14:50:56.000Z 8145g5g, dbf69e23, 2015-04-18T15:12:26.000Z, 2015-04-18T16:21:04.000Z 4jhbfu4, fkigit95, 2015-04-18T14:23:40.000Z, 2015-04-18T14:59:38.000Z 8145g66, dbf69e23, 2015-04-20T11:20:24.000Z, 2015-04-20T16:22:41.000Z ... </code></pre> <p>I want to add a new column, with an indicator value based on whether the DeviceId reappears in my dataframe, with a StartDate 1hour after the current EndDate. So my new dataframe should look like:</p> <pre><code>TripId, DeviceId, StartDate, EndDate, newcol 817d0e7, dbf69e23, 2015-04-18T13:54:27.000Z, 2015-04-18T14:59:06.000Z, 1 817d0f5, fkri449g, 2015-04-18T13:59:21.000Z, 2015-04-18T14:50:56.000Z, 0 8145g5g, dbf69e23, 2015-04-18T15:12:26.000Z, 2015-04-18T16:21:04.000Z, 0 4jhbfu4, fkigit95, 2015-04-18T14:23:40.000Z, 2015-04-18T14:59:38.000Z, 0 8145g66, dbf69e23, 2015-04-20T11:20:24.000Z, 2015-04-20T16:22:41.000Z, 0 ... </code></pre> <p>I've started to write some code, but I'm unsure how to proceed.</p> <pre><code>df['newcol'] = np.where(df['DeviceId'].isin(df['DeviceId']) and , 1, 0) </code></pre> <p>One problem is that I'm not sure how to find device id in dataframe excluding current row, and another is that I don't know how to tackle the time issue. </p> <p>EDIT: I've been working on it a bit, and my new code is now:</p> <pre><code>df['UniqueId'] = range(0, 14571, 1) df['StartDate'] = pd.to_datetime(df['StartDate']) df['EndDate'] = pd.to_datetime(df['EndDate']) df2 = df.loc[df.duplicated(subset=['DeviceId'],keep=False)] #Returns list of trips with repeated deviceid DeviceIds = df2['DeviceId'].tolist() DeviceIds = list(set(DeviceIds)) for ID in DeviceIds: temp = df2.loc[df2['DeviceId'] == ID] temp.sort_values(by='StartDate') temp['PreviousEnd'] = temp['EndDate'].shift(periods=1) temp['Difference'] = temp['StartDate'] - temp['PreviousEnd'] temp['Difference'] = [1 if x &lt; pd.Timedelta('1H') else 0 for x in temp['Difference']] temp = temp[['UniqueId','Difference']] df.join(temp, on='UniqueId', how='left',rsuffix='2') </code></pre> <p>The it creates the right temp dataframe, but I can't seem to join the values in Difference to the original dataframe</p>
<p>I managed to get it working, the code I used was:</p> <pre><code>df['UniqueId'] = range(0, 14571, 1) df['StartDate'] = pd.to_datetime(df['StartDate']) df['EndDate'] = pd.to_datetime(df['EndDate']) #converts dates to dateTime df2 = df.loc[df.duplicated(subset=['DeviceId'],keep=False)] #Returns list of trips with repeated deviceid DeviceIds = df2['DeviceId'].tolist() DeviceIds = list(set(DeviceIds)) df3 = pd.DataFrame(columns = ['UniqueId','Difference']) for ID in DeviceIds: #creats mini dataframes for every DeviceId temp = df2.loc[df2['DeviceId'] == ID] temp.sort_values(by='StartDate') temp['PreviousEnd'] = temp['EndDate'].shift(periods=1) temp['Difference'] = temp['StartDate'] - temp['PreviousEnd'] temp['Difference'] = [1 if x &lt; pd.Timedelta('24H') else 0 for x in temp['Difference']] temp = temp[['UniqueId','Difference']] df3 = pd.concat([df3,temp]) df.set_index('UniqueId').join(df3.set_index('UniqueId'),how='left') </code></pre>
python|python-3.x|pandas
0
4,604
41,751,160
2D dot product on two 3D matrices along an axis
<p>Given two matrices A and B with dimensions of (x,y,z) and (y,x,z) respectively, how can I take the dot product over the first two dimensions of the two matrices? The result should have dimension (x,x,z). </p> <p>Thanks!</p>
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a> with literally the same string expression -</p> <pre><code>np.einsum('xyz,yiz-&gt;xiz',a,b) # a,b are input arrays </code></pre> <p>Note that we have used <code>yiz</code> as the string notation for the second array and not <code>yxz</code>, as that <code>i</code> is supposed to be a new dimension in the output array and is not to be aligned with the first axis of the first array for which we have already assigned <code>x</code>. The dimensions that are to be aligned are given the same string notation. </p>
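<p>A quick sanity check of the expression, comparing it against an explicit 2D dot product per <code>z</code>-slice (the shapes here are made up for illustration):</p>
<pre><code>import numpy as np

x, y, z = 3, 4, 5
a = np.random.rand(x, y, z)
b = np.random.rand(y, x, z)

out = np.einsum('xyz,yiz-&gt;xiz', a, b)   # shape (x, x, z)

# Same result, one 2D dot product per z-slice
ref = np.stack([a[:, :, k] @ b[:, :, k] for k in range(z)], axis=2)

print(out.shape)              # (3, 3, 5)
print(np.allclose(out, ref))  # True
</code></pre>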
numpy|matrix-multiplication
1
4,605
41,730,519
Slicing NumPy array with dictionary
<p>Is there a simple option to slice a NumPy array with a predefined dictionary of indices?</p> <p>For example:</p> <pre><code>&gt;&gt; a = array([3, 9, 1, 5, 5]) </code></pre> <p>and a (fictitious) dictionary:</p> <pre><code>&gt;&gt; index_dict = {'all_except_first': (1:None), 'all_except_last': (None:-1)} </code></pre> <p>and then:</p> <pre><code>&gt;&gt; a[index_dict['all_except_first']] &gt;&gt; array([9, 1, 5, 5]) &gt;&gt; a[index_dict['all_except_last']] &gt;&gt; array([3, 9, 1, 5]) </code></pre> <p>Sort of slicing with names and not with numbers.</p>
<p>Create <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer"><code>slice</code></a>s:</p> <pre><code>&gt;&gt;&gt; index_dict = {'all_except_first': slice(1, None), 'all_except_last': slice(None, -1)} &gt;&gt;&gt; &gt;&gt;&gt; a[index_dict['all_except_first']] array([9, 1, 5, 5]) &gt;&gt;&gt; a[index_dict['all_except_last']] array([3, 9, 1, 5]) </code></pre>
python|numpy|dictionary|slice
3
4,606
37,935,729
Extract first characters from list series pandas
<p>I have a string series containing multiples words. I want to extract the first character of each word per row in a vectorized fashion. </p> <p>So far, I have been able to split the words into a list, but haven't found a vectorized way of getting the first characters. </p> <pre><code>s = pd.Series(['aa bb cc', 'cc dd ee', 'ff ga', '0w']) &gt;&gt;&gt; s. str.split() 0 [aa, bb, cc] 1 [cc, dd, ee] 2 [ff, ga] 3 [0w] </code></pre> <p>Eventually, I want something like this:</p> <pre><code>0 [a, b, c] 1 [c, d, e] 2 [f, g] 3 [0] </code></pre>
<p>Another faster solution is nested list comprehension:</p> <pre><code>s2 = pd.Series([[y[0] for y in x.split()] for x in s.tolist()]) print (s2) 0 [a, b, c] 1 [c, d, e] 2 [f, g] 3 [0] dtype: object </code></pre> <p>Thank you <a href="https://stackoverflow.com/questions/37935729/extract-first-characters-from-list-series-pandas/37936260#comment63326032_37936260">clocker</a> for improvement - you can remove <code>tolist()</code>:</p> <pre><code>print (pd.Series([[y[0] for y in x.split()] for x in s])) </code></pre> <p><strong>Timings</strong>:</p> <pre><code>import pandas as pd s = pd.Series(['aa bb cc', 'cc dd ee', 'ff ga', '0w']) s = pd.concat([s]*10000).reset_index(drop=True) print(s) In [42]: %timeit pd.Series([[y[0] for y in x.split()] for x in s.tolist()]) 10 loops, best of 3: 28.6 ms per loop In [43]: %timeit (s.str.split().map(lambda lst : [string[0] for string in lst])) 10 loops, best of 3: 50.4 ms per loop In [44]: %timeit (s.str.split().apply(lambda lst: [list(elt)[0] for elt in lst])) 10 loops, best of 3: 76.1 ms per loop In [59]: %timeit (pd.Series([[y[0] for y in x.split()] for x in s])) 10 loops, best of 3: 28.8 ms per loop </code></pre>
python|string|pandas|dataframe|character
2
4,607
37,890,989
Why isn't this Conv2d_Transpose / deconv2d returning the original input in tensorflow?
<pre><code>weights = tf.placeholder("float",[5,5,1,1]) imagein = tf.placeholder("float",[1,32,32,1]) conv = tf.nn.conv2d(imagein,weights,strides=[1,1,1,1],padding="SAME") deconv = tf.nn.conv2d_transpose(conv, weights, [1,32,32,1], [1,1,1,1],padding="SAME") dw = np.random.rand(5,5,1,1) noise = np.random.rand(1,32,32,1) sess = tf.InteractiveSession() convolved = conv.eval(feed_dict={imagein: noise, weights: dw}) deconvolved = deconv.eval(feed_dict={imagein: noise, weights: dw}) </code></pre> <p>I've been trying to figure out conv2d_transpose in order to reverse a convolution in Tensorflow. My understanding is that "deconvolved" should contain the same data as "noise" after applying a normal convolution and then its transpose, but "deconvolved" just contains some completely different image. Is there something wrong with my code, or is the theory incorrect?</p>
<p>There's a reason it's called <code>conv2d_transpose</code> rather than <code>deconv2d</code>: it isn't deconvolution. Convolution isn't an orthogonal transformation, so its inverse (deconvolution) isn't the same as its transpose (<code>conv2d_transpose</code>).</p> <p>Your confusion is understandable: calling the transpose of convolution "deconvolution" has been standard neural network practice for years. I am happy that we were able to fix the name to be mathematically correct in TensorFlow; more details here:</p> <p><a href="https://github.com/tensorflow/tensorflow/issues/256" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/256</a></p>
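<p>A tiny NumPy illustration of that point, writing a 1-D "SAME"-padded convolution as a matrix <code>W</code> (the kernel and sizes are made up): applying <code>W.T</code> after <code>W</code> does not give back the input, because <code>W</code> is not orthogonal.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

n = 32
kernel = rng.random(5)

# Build the n x n matrix that applies the kernel with "SAME" padding.
W = np.zeros((n, n))
for i in range(n):
    for offset in range(-2, 3):
        j = i + offset
        if 0 &lt;= j &lt; n:
            W[i, j] = kernel[offset + 2]

x = rng.random(n)
y = W @ x          # the "convolution"
x_t = W.T @ y      # the "transposed convolution" of the result

print(np.allclose(x_t, x))               # False: transpose is not the inverse
print(np.allclose(W.T @ W, np.eye(n)))   # False: W is not orthogonal

# A real (least-squares) deconvolution does recover x, since this particular W
# happens to be invertible and well-conditioned.
x_inv = np.linalg.solve(W.T @ W, W.T @ y)
print(np.allclose(x_inv, x))             # True
</code></pre>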
tensorflow|convolution|deconvolution
4
4,608
37,630,202
python pandas: passing in dataframe to df.apply
<p>Long time user of this site but first time asking a question! Thanks to all of the benevolent users who have been answering questions for ages :)</p> <p>I have been using <code>df.apply</code> lately and ideally want to pass a dataframe into the <code>args</code> parameter to look something like so: <code> df.apply(testFunc, args=(dfOther), axis = 1)</code></p> <p>My ultimate goal is to iterate over the dataframe I am passing in the <code>args</code> parameter and check logic against each row of the original dataframe, say <code> df </code>, and return some value from <code> dfOther </code>. So say I have a function like this:</p> <pre><code>def testFunc(row, dfOther): for index, rowOther in dfOther.iterrows(): if row['A'] == rowOther[0] and row['B'] == rowOther[1]: return dfOther.at[index, 'C'] df['OTHER'] = df.apply(testFunc, args=(dfOther), axis = 1) </code></pre> <p>My current understanding is that <code>args</code> expects a Series object, and so if I actually run this we get the following error:</p> <pre><code>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>However before I wrote <code>testFunc</code> which only passes in a single dataframe, I had actually written <code>priorTestFunc</code>, which looks like this... And it works!</p> <pre><code>def priorTestFunc(row, dfOne, dfTwo): for index, rowOne in dfOne.iterrows(): if row['A'] == rowOne[0] and row['B'] == rowOne[1]: return dfTwo.at[index, 'C'] df['OTHER'] = df.apply(testFunc, args=(dfOne, dfTwo), axis = 1) </code></pre> <p>So to my dismay I have been coming into the habit of writing <code>testFunc</code> like so and it has been working as intended:</p> <pre><code>def testFunc(row, dfOther, _): for index, rowOther in dfOther.iterrows(): if row['A'] == rowOther[0] and row['B'] == rowOther[1]: return dfOther.at[index, 'C'] df['OTHER'] = df.apply(testFunc, args=(dfOther, _), axis = 1) </code></pre> <p>I would really appreciate if someone could let me know why this would be the case and maybe errors that I will be prone to, or maybe another alternative for solving this kind of problem!!</p> <p>EDIT: As requested by the comment: My dfs generally look like the below.. They will have two matching columns and will be returning a value from the <code>dfOther.at[index, column]</code> I have considered <code>pd.concat([dfOther, df])</code> however I will be running an algorithm testing conditions on <code>df</code> and then updating it accordingly from specific values on <code>dfOther</code>(which will also be updating) and I would like <code> df</code> to be relatively neat, as opposed to making a multindex and throwing just about everything in it. Also I am aware <code>df.iterrows</code> is in general slow, but these dataframes will be about 500 rows at the max, so scalability isn't really a massive concern for me at the moment.</p> <pre><code>df Out[10]: A B C 0 foo bur 6000 1 foo bur 7000 2 foo bur 8000 3 bar kek 9000 4 bar kek 10000 5 bar kek 11000 dfOther Out[12]: A B C 0 foo bur 1000 1 foo bur 2000 2 foo bur 3000 3 bar kek 4000 4 bar kek 5000 5 bar kek 6000 </code></pre>
<p>The error is in this line:</p> <pre><code> File "C:\Anaconda3\envs\p2\lib\site-packages\pandas\core\frame.py", line 4017, in apply if kwds or args and not isinstance(func, np.ufunc): </code></pre> <p>Here, <code>if kwds or args</code> is checking whether the length of <code>args</code> passed to <code>apply</code> is greater than 0. It is a common way to check if an iterable is empty:</p> <pre><code>l = [] if l: print("l is not empty!") else: print("l is empty!") </code></pre> <blockquote> <p><code>l is empty!</code></p> </blockquote> <pre><code>l = [1] if l: print("l is not empty!") else: print("l is empty!") </code></pre> <blockquote> <p><code>l is not empty!</code></p> </blockquote> <p>If you had passed a tuple to <code>df.apply</code> as <code>args</code>, it would return True and there wouldn't be a problem. However, Python does not interpret (df) as a tuple:</p> <pre><code>type((df)) Out[39]: pandas.core.frame.DataFrame </code></pre> <p>It is just a DataFrame/variable inside parentheses. When you type <code>if df</code>:</p> <pre><code>if df: print("df is not empty") Traceback (most recent call last): File "&lt;ipython-input-40-c86da5a5f1ee&gt;", line 1, in &lt;module&gt; if df: File "C:\Anaconda3\envs\p2\lib\site-packages\pandas\core\generic.py", line 887, in __nonzero__ .format(self.__class__.__name__)) ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>You get the same error message. However, if you use a comma to indicate that it'a tuple, it works fine:</p> <pre><code>if (df, ): print("tuple is not empty") tuple is not empty </code></pre> <p>As a result, adding a comma to <code>args=(dfOther)</code> by making it a <a href="https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences" rel="noreferrer">singleton</a> should solve the problem.</p> <pre><code>df['OTHER'] = df.apply(testFunc, args=(dfOther, ), axis = 1) </code></pre>
python|pandas|dataframe
8
4,609
37,818,063
How to calculate conditional probability of values in dataframe pandas-python?
<p>I want to calculate conditional probabilities of the ratings ('A','B','C') in the rating column. </p> <pre><code> company model rating type 0 ford mustang A coupe 1 chevy camaro B coupe 2 ford fiesta C sedan 3 ford focus A sedan 4 ford taurus B sedan 5 toyota camry B sedan </code></pre> <p>Output:</p> <pre><code>Prob(rating=A) = 0.333333 Prob(rating=B) = 0.500000 Prob(rating=C) = 0.166667 Prob(type=coupe|rating=A) = 0.500000 Prob(type=sedan|rating=A) = 0.500000 Prob(type=coupe|rating=B) = 0.333333 Prob(type=sedan|rating=B) = 0.666667 Prob(type=coupe|rating=C) = 0.000000 Prob(type=sedan|rating=C) = 1.000000 </code></pre> <p>Any help, thanks!</p>
<p>You can use <code>.groupby()</code> and the built-in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html" rel="noreferrer"><code>.div()</code></a>:</p> <pre><code>rating_probs = df.groupby('rating').size().div(len(df)) rating A 0.333333 B 0.500000 C 0.166667 </code></pre> <p>and the conditional probs:</p> <pre><code>df.groupby(['type', 'rating']).size().div(len(df)).div(rating_probs, axis=0, level='rating') coupe A 0.500000 B 0.333333 sedan A 0.500000 B 0.666667 C 1.000000 </code></pre>
python|pandas|dataframe|probability
17
4,610
64,255,974
create string using dataframe values
<p>The dataframe has</p> <p>segment | percentage_change</p> <p>segment1 | 25%</p> <p>segment2 | 30%</p> <p>segment3 | 40%</p> <p>I need to create sentences for the top 3:</p> <p>&quot;Segment3 has highest percentage change of 40%&quot;</p> <p>&quot;Segment2 has 2nd highest percentage change of 30%&quot;</p> <p>&quot;Segment1 has 3rd highest percentage change of 25%&quot; &quot;Segment 1 has 5% more change than segment 2&quot; &quot;Segment 2 has 10% more change than segment 3&quot;</p> <p>All these sentences will be added as cell values in a new dataframe. Thanks for the help!</p>
<p>Use:</p> <pre><code>#converted column with percentage to numeric df['num'] = df['percentage_change'].str.rstrip('%').astype(float) #get 3top rows by numeric column df1 = df.nlargest(3, 'num') #create difference column converted to strings df1['diff'] = df1['num'].diff(-1).fillna(0).astype(str).str.replace('\.[0]*','') + '%' #shifting segment column df1['diff_seg'] = df1['segment'].shift(-1) #default index strating by 1 df1 = df1.reset_index(drop=True) df1.index = df1.index + 1 print (df1) segment percentage_change num diff diff_seg 1 segment3 40% 40.0 10% segment2 2 segment2 30% 30.0 5% segment1 3 segment1 25% 25.0 0% NaN </code></pre> <p>Then is used <code>f-string</code>s for formating of new columns:</p> <pre><code>f1 = lambda x: f'{x[&quot;segment&quot;].title()} has {x.name}. highest percentage change of {x[&quot;percentage_change&quot;]}' f2 = lambda x: f'{x[&quot;diff_seg&quot;].title()} has {x[&quot;diff&quot;]} more change than {x[&quot;segment&quot;]}' df1['out'] = df1.apply(f1, axis=1) df1['out1'] = df1.iloc[:-1].apply(f2, axis=1) print (df1) segment percentage_change num diff diff_seg \ 1 segment3 40% 40.0 10% segment2 2 segment2 30% 30.0 5% segment1 3 segment1 25% 25.0 0% NaN out \ 1 Segment3 has 1. highest percentage change of 40% 2 Segment2 has 2. highest percentage change of 30% 3 Segment1 has 3. highest percentage change of 25% out1 1 Segment2 has 10% more change than segment3 2 Segment1 has 5% more change than segment2 3 NaN </code></pre>
python|python-3.x|pandas|string|dataframe
0
4,611
64,431,003
Check similarity of 2 pandas dataframes
<p>I am trying to compare 2 pandas dataframes in terms of column names and datatypes. With assert_frame_equal, I get an error since the shapes are different. Is there a way to ignore that? I could not find one in the documentation.</p> <p>With df1_dict == df2_dict, it just says whether they are similar or not; I am trying to print any differences in terms of feature names or datatypes.</p> <pre><code>df1_dict = dict(df1.dtypes) df2_dict = dict(df2.dtypes) # df1_dict = {'A': np.dtype('O'), 'B': np.dtype('O'), 'C': np.dtype('O')} # df2_dict = {'A': np.dtype('int64'), 'B': np.dtype('O'), 'C': np.dtype('O')} print(set(df1_dict) - set(df2_dict)) print(f'''Are two datsets similar: {df1_dict == df2_dict}''') pd.testing.assert_frame_equal(df1, df2) </code></pre> <p>Any suggestions would be appreciated.</p>
<p>It seems to me that if the two dataframe descriptions are outer joined, you would have all the information you want.</p> <p>example:</p> <pre><code>df1 = pd.DataFrame({'a': [1,2,3], 'b': list('abc')}) df2 = pd.DataFrame({'a': [1.0,2.0,3.0], 'b': list('abc'), 'c': [10,20,30]}) diff = df1.dtypes.rename('df1').reset_index().merge( df2.dtypes.rename('df2').reset_index(), how='outer' ) def check(x): if pd.isnull(x.df1): return 'df1-missing' if pd.isnull(x.df2): return 'df2-missing' if x.df1 != x.df2: return 'type-mismatch' return 'ok' diff['diff_status'] = diff.apply(check, axis=1) # diff prints: index df1 df2 diff_status 0 a int64 float64 type-mismatch 1 b object object ok 2 c NaN int64 df1-missing </code></pre>
pandas|dataframe
1
4,612
47,986,662
Why `xavier_initializer()` and `glorot_uniform_initializer()` are duplicated to some extent?
<p><code>xavier_initializer(uniform=True, seed=None, dtype=tf.float32)</code> and <code>glorot_uniform_initializer(seed=None, dtype=tf.float32)</code> refer to the same person Xavier Glorot. Why not consolidate them into one function?</p> <p><code>xavier_initializer</code> is in <code>tf.contrib.layers</code>. <code>glorot_uniform_initializer</code> in <code>tf</code>. Will the namespace of <code>contrib</code> eventually go away and things in <code>contrib</code> will be moved to the namespace of <code>tf</code>?</p>
<p>Yes, <code>tf.contrib.layers.xavier_initializer</code> and <code>tf.glorot_uniform_initializer</code> both implement the same concept described in this <a href="http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf" rel="nofollow noreferrer">JMLR paper: <code>Understanding the difficulty of training deep feedforward neural networks</code></a>, which can be seen in the code:</p> <ul> <li><p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/init_ops.py#L1210" rel="nofollow noreferrer">tf.glorot_uniform_initializer</a></p></li> <li><p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/initializers.py#L31" rel="nofollow noreferrer">tf.contrib.layers.xavier_initializer</a></p></li> </ul> <p>With typical values for <code>fan_in</code>, <code>fan_out</code>, <code>mode = FAN_AVG</code>, and <code>uniform = True</code>, both implementations sample values from the <em>standard uniform distribution</em> over the limit <strong>[</strong><code>-sqrt(3), sqrt(3)</code><strong>)</strong>.</p> <p>Because <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/init_ops.py" rel="nofollow noreferrer"><code>tf.initializer</code></a> supports a wide variety of initialization strategies, it's highly likely that it will stay, whereas the contrib module, which just has <code>xavier_initializer</code>, will most probably be deprecated in future versions.</p> <p>So, yes, it's highly likely that in future versions the <code>tf.contrib.layers.xavier_initializer</code> way of initialization will go away.</p>
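<p>A quick way to see the equivalence empirically is to evaluate both initializers and check that the draws stay inside the Glorot limit <code>sqrt(6 / (fan_in + fan_out))</code>. This is only a sketch and assumes a TF 1.x environment where <code>tf.contrib</code> still exists and initializer objects can be called directly with a shape:</p>
<pre><code>import numpy as np
import tensorflow as tf  # assumes TF 1.x, where tf.contrib is available

shape = [300, 400]
limit = np.sqrt(6.0 / (shape[0] + shape[1]))   # ~0.0926

xavier = tf.contrib.layers.xavier_initializer(uniform=True)(shape)
glorot = tf.glorot_uniform_initializer()(shape)

with tf.Session() as sess:
    a, b = sess.run([xavier, glorot])

print(limit)
print(a.min(), a.max())   # both matrices stay within [-limit, limit)
print(b.min(), b.max())
</code></pre>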
python|tensorflow|machine-learning|deep-learning|initializer
3
4,613
47,923,429
How to shuffle a pandas dataframe randomly by row
<p>I am trying to shuffle a pandas dataframe by row instead of column. </p> <p>I have the following dataframe: </p> <pre><code> row1 row2 row3 1 3 1 6 2 5 2 7 3 7 3 8 4 9 4 9 </code></pre> <p>And would like to shuffle the df to achieve a random permutation such as: </p> <pre><code> row1 row2 row3 1 6 3 1 2 3 9 2 3 7 5 8 4 4 9 7 </code></pre> <p>I tried: </p> <pre><code>df1 = df.reindex(np.random.permutation(df.index)) </code></pre> <p>however, this permutes only by column and not row. </p>
<p>You can achieve this by using the sample method and applying it along axis=1. This will shuffle the elements in a row:</p> <pre><code>df = df.sample(frac=1, axis=1).reset_index(drop=True) </code></pre> <p>However, your desired dataframe looks completely randomised, which can be done by shuffling by row and then by column:</p> <pre><code>df = df.sample(frac=1, axis=1).sample(frac=1).reset_index(drop=True) </code></pre> <p>Edit:</p> <pre><code>import numpy as np df = df.apply(np.random.permutation, axis=1) </code></pre>
python|pandas|numpy|shuffle
8
4,614
47,862,262
How to subtract channel wise mean in keras?
<p>I have implemented a lambda function to resize an image from 28x28x1 to 224x224x3. I need to subtract the VGG mean from all the channels. When I try this, I get the error </p> <p>TypeError: 'Tensor' object does not support item assignment </p> <pre><code>def try_reshape_to_vgg(x): x = K.repeat_elements(x, 3, axis=3) x = K.resize_images(x, 8, 8, data_format="channels_last") x[:, :, :, 0] = x[:, :, :, 0] - 103.939 x[:, :, :, 1] = x[:, :, :, 1] - 116.779 x[:, :, :, 2] = x[:, :, :, 2] - 123.68 return x[:, :, :, ::-1] </code></pre> <p>What's the recommended solution to do element-wise subtraction of tensors?</p>
<p>You can use <code>keras.applications.imagenet_utils.preprocess_input</code> on tensors after Keras 2.1.2. It will subtract the VGG mean from <code>x</code> under the default mode <code>'caffe'</code>.</p> <pre class="lang-py prettyprint-override"><code>from keras.applications.imagenet_utils import preprocess_input def try_reshape_to_vgg(x): x = K.repeat_elements(x, 3, axis=3) x = K.resize_images(x, 8, 8, data_format="channels_last") x = preprocess_input(x) return x </code></pre> <p>If you would like to stay in an older version of Keras, maybe you can check how it is implemented in Keras 2.1.2, and extract useful lines into <code>try_reshape_to_vgg</code>.</p> <pre class="lang-py prettyprint-override"><code>def _preprocess_symbolic_input(x, data_format, mode): global _IMAGENET_MEAN if mode == 'tf': x /= 127.5 x -= 1. return x if data_format == 'channels_first': # 'RGB'-&gt;'BGR' if K.ndim(x) == 3: x = x[::-1, ...] else: x = x[:, ::-1, ...] else: # 'RGB'-&gt;'BGR' x = x[..., ::-1] if _IMAGENET_MEAN is None: _IMAGENET_MEAN = K.constant(-np.array([103.939, 116.779, 123.68])) # Zero-center by mean pixel if K.dtype(x) != K.dtype(_IMAGENET_MEAN): x = K.bias_add(x, K.cast(_IMAGENET_MEAN, K.dtype(x)), data_format) else: x = K.bias_add(x, _IMAGENET_MEAN, data_format) return x </code></pre>
machine-learning|tensorflow|deep-learning|keras|tensor
4
4,615
47,783,978
tf.train.shuffle_batch hangs forever (using tensorflow ver. 1.4)
<p>I've a small tfrecords file with only 640 records. Below code hangs and I don't know what's wrong with it:</p> <pre><code>def read_from_tfrecord(tfrecord_file): tfrecord_file_queue = tf.train.string_input_producer(tfrecord_file, name = 'queue') reader = tf.TFRecordReader() _, tfrecord_serialized = reader.read(tfrecord_file_queue) tfrecord_features = tf.parse_single_example(tfrecord_serialized, features = {'label': tf.FixedLenFeature([], tf.string), 'snippet': tf.FixedLenFeature([], tf.string)}, name = 'features') snippet = tf.decode_raw(tfrecord_features['snippet'], tf.float32) snippet = tf.reshape(snippet, [x_height, x_width, num_channels]) label = tf.decode_raw(tfrecord_features['label'], tf.int32) label = tf.reshape(label, [2]) snippets_shuffled, labels_shuffled = tf.train.shuffle_batch([snippet, label], batch_size = 2, capacity = 10, num_threads = 1, min_after_dequeue = 4) return snippets_shuffled, labels_shuffled </code></pre> <p>and:</p> <pre><code>with tf.Session() as sess: coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) sess.run(tf.global_variables_initializer()) snippet, label = read_from_tfrecord(['./TFRecordFile/test_tmp.tfrecords']) print('1') # it prints 1 a, b = sess.run([snippet, label]) # it hangs here! print('2') # it never prints 2 </code></pre> <p>any help appreciated.</p>
<p>Ok, I resolved the issue. I spent a lot of time making this work, so I am posting the answer here in case others face a similar problem. On top of what Seven suggested, </p> <p>I had to add <code>tf.train.start_queue_runners(sess)</code>. The code will look like this:</p> <pre><code>snippet, label = read_from_tfrecord(['./TFRecordFile/test_tmp.tfrecords']) with tf.Session() as sess: coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) tf.train.start_queue_runners(sess) # &lt;==== Add This Line sess.run(tf.global_variables_initializer()) print('1') # it prints 1 a, b = sess.run([snippet, label]) # it hangs here! print('2') # it never prints 2 </code></pre>
tensorflow|queue
0
4,616
47,710,955
In numpy, why is .var() a method while mean() is a normal function?
<p>As an R programmer learning Python, I've been getting confused by the Python syntax a few times. Many of these behaviors seem arbitrary to me. It would help me to understand the reasons behind why things are the way they are in Python. I'm also new to OOP, so this might be the reason for my confusion.</p> <p>Specifically, these two points confused me the most while following a tutorial:</p> <ol> <li><p>To compute an array's mean, you use <code>np.mean(myarray)</code>. But to compute the variance, you use <code>myarray.var()</code> (that's a <em>method</em>, right?). This seems arbitrary - was there a reason for choosing to implement things in this way?</p></li> <li><p>Plotting a histogram consists of two subsequent commands, <code>plot.hist(values, 50)</code> followed by <code>plt.show()</code>. Why is the second call necessary? And where does the "result" of the first call get stored? Is this some kind of OOP magic?</p></li> </ol>
<p>You can use these methods both ways: The reasoning was to make scientific python packages friendly for users not comfortable with OOP, and provide a familiar API to people used to matlab, or R.<br> As pointed out in the comments by @Mel, the matplotlib package also shares this feature.</p> <pre><code>import numpy as np a = np.array([range(10)]) a.mean(), a.var(), np.mean(a), np.var(a) </code></pre> <h3>output:</h3> <pre><code>(4.5, 8.25, 4.5, 8.25) </code></pre>
python|numpy|methods
3
4,617
49,022,769
how to convert float to string excluding NaN in pandas within one line code?
<p>I would like to convert a column of float value to string, following is my current way:</p> <pre><code>userdf['phone_num'] = userdf['phone_num'].apply(lambda x: "{:.0f}".format(x) if x is not None else x) </code></pre> <p>However, it also converts the NaN to string "nan" which is bad when I check the missing value in this column, any better idea?</p> <p>Thanks!</p>
<p>I think you should check for NaN values instead of comparing to None:</p> <pre><code>userdf['phone_num'] = userdf['phone_num'].apply(lambda x: "{:.0f}".format(x) if not pd.isnull(x) else x) </code></pre>
python|pandas
3
4,618
49,225,933
When turning a list of lists of tuples to an array, how can I stop tuples from creating a 3rd dimension?
<p>I have a list of lists (each sublist of the same length) of tuples (each tuple of the same length, 2). Each sublist represents a sentence, and the tuples are bigrams of that sentence. </p> <p>When using <code>np.asarray</code> to turn this into an array, python seems to interpret the tuples as me asking for a 3rd dimension to be created.</p> <p>Full working code here:</p> <pre><code>import numpy as np from nltk import bigrams arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) bi_grams = [] for sent in arr: bi_grams.append(list(bigrams(sent))) bi_grams = np.asarray(bi_grams) print(bi_grams) </code></pre> <p>So before turning <code>bi_grams</code> to an array it looks like this: <code>[[(1, 2), (2, 3)], [(4, 5), (5, 6)], [(7, 8), (8, 9)]]</code></p> <p>Output of above code:</p> <pre><code>array([[[1, 2], [2, 3]], [[4, 5], [5, 6]], [[7, 8], [8, 9]]]) </code></pre> <p>Converting a list of lists to an array in this way is normally fine, and creates a 2D array, but it seems that python interprets the tuples as an added dimension, so the output is of shape <code>(3, 2, 2)</code>, when in fact I want, and was expecting, a shape of <code>(3, 2)</code>.</p> <p>The output I want is:</p> <pre><code>array([[(1, 2), (2, 3)], [(4, 5), (5, 6)], [(7, 8), (8, 9)]]) </code></pre> <p>which is of shape <code>(3, 2)</code>. </p> <p>Why does this happen? How can I achieve the array in the form/shape that I want?</p>
<p>Here are two more methods to complement @hpaulj's answer. One of them, the <code>frompyfunc</code> methods seems to scale a bit better than the other methods, although hpaulj's preallocation method is also not bad if we get rid of the loop. See timings below:</p> <pre><code>import numpy as np import itertools bi_grams = [[(1, 2), (2, 3)], [(4, 5), (5, 6)], [(7, 8), (8, 9)]] def f_pp_1(bi_grams): return np.frompyfunc(itertools.chain.from_iterable(bi_grams).__next__, 0, 1)(np.empty((len(bi_grams), len(bi_grams[0])), dtype=object)) def f_pp_2(bi_grams): res = np.empty((len(bi_grams), len(bi_grams[0])), dtype=object) res[...] = bi_grams return res def f_hpaulj(bi_grams): res = np.empty((len(bi_grams), len(bi_grams[0])), dtype=object) for i, j in np.ndindex(res.shape): res[i, j] = bi_grams[i][j] return res print(np.all(f_pp_1(bi_grams) == f_pp_2(bi_grams))) print(np.all(f_pp_1(bi_grams) == f_hpaulj(bi_grams))) from timeit import timeit kwds = dict(globals=globals(), number=1000) print(timeit('f_pp_1(bi_grams)', **kwds)) print(timeit('f_pp_2(bi_grams)', **kwds)) print(timeit('f_hpaulj(bi_grams)', **kwds)) big = 10000 * bi_grams print(timeit('f_pp_1(big)', **kwds)) print(timeit('f_pp_2(big)', **kwds)) print(timeit('f_hpaulj(big)', **kwds)) </code></pre> <p>Sample output:</p> <pre><code>True &lt;- same result for True &lt;- different methods 0.004281356999854324 &lt;- frompyfunc small input 0.002839841999957571 &lt;- prealloc ellipsis small input 0.02361366100012674 &lt;- prealloc loop small input 2.153144505 &lt;- frompyfunc large input 5.152567720999741 &lt;- prealloc ellipsis large input 33.13142323599959 &lt;- prealloc looop large input </code></pre>
python|arrays|list|numpy
1
4,619
58,916,783
How to best perform recursion on a pandas dataframe column
<p>I am trying to calculate an index value over a time series within a pandas dataframe. This index depends on the previous row's result to calculate each row after the first iteration. I've attempted to do this recursively, within iteration over the dataframe's rows, but I find that the first two rows of the calculation are correct, but the third and subsequent rows are inaccurate. </p> <p>I think this is because after the initial value, subsquent index calculations are going wrong and then set all other subsequent calculations wrong.</p> <p>What is causing this inaccuracy. Is there a better approach than the one I've taken? </p> <p>A sample of the output looks like this:</p> <pre><code> ticket_cat Sector Year factor Incorrect_index_value correct_index_value prev_row Revenue LSE Jan 2004 100.00 100.00 Revenue LSE Jan 2005 4.323542894 104.3235 104.3235 100.00 Revenue LSE Jan 2006 3.096308080 98.823 107.5537 &lt;--incorrect row Revenue LSE Jan 2007 6.211666 107.476 114.2345 &lt;--incorrect row Revenue LD Jan 2004 100.00 100.0000 Revenue LD Jan 2005 3.5218 103.5218 103.5218 Revenue LD Jan 2006 2.7417 99.2464 106.3602 &lt;--- incorrect row Revenue LD Jan 2007 3.3506 104.1353 109.9239 &lt;--- incorrect row </code></pre> <p>The code snippet I have is as follows: stpassrev is the dataframe</p> <pre class="lang-py prettyprint-override"><code>#insert initial value for index stpassrev['index_value'] = np.where( (stpassrev['Year'] == 'Jan 2004' ) &amp; (stpassrev['Ticket_cat']=='Revenue'), 100.00,np.nan ) #set up initial values for prec_row column stpassrev['prev_row'] = np.where( #only have relevant row impacted (stpassrev['Year'] == 'Jan 2005' ) &amp; (stpassrev['Ticke_cat']=='Revenue'), 100.00, np.nan ) #calculate the index_value for i in range(1,len(stpassrev)): stpassrev.loc[i,'passrev'] = np.where( (stpassrev.loc[i,'Ticket_cat']=='Revenue' ) &amp; (pd.isna(stpassrev.loc[i,'factor'])==False), ((100+stpassrev.loc[i,'factor'] ) /stpassrev.loc[i-1,'index_value'])*100, stpassrev.loc[i,'index_value']) stpassrev.loc[i,'prev_row'] = stpassrev.loc[i-1,'index_value'] </code></pre>
<p>Based on your updated question, you just need to do this:</p> <pre><code># assign a new temp_factor with initial values and prep for cumprod stpassrev['temp_factor'] = np.where(stpassrev['factor'].isna(), 1, stpassrev['factor'].add(100).div(100)) # calculate the cumprod based on the temp_factor (grouped by Sector) and multiply by 100 for index_value stpassrev['index_value'] = stpassrev.groupby('Sector')['temp_factor'].cumprod().mul(100) </code></pre> <p>Results:</p> <pre><code> ticket_cat Sector Year factor temp_factor index_value 0 Revenue LSE Jan 2004 NaN 1.000000 100.000000 1 Revenue LSE Jan 2005 4.323543 1.043235 104.323543 2 Revenue LSE Jan 2006 3.096308 1.030963 107.553721 3 Revenue LSE Jan 2007 6.211666 1.062117 114.234599 4 Revenue LD Jan 2004 NaN 1.000000 100.000000 5 Revenue LD Jan 2005 3.521800 1.035218 103.521800 6 Revenue LD Jan 2006 2.741700 1.027417 106.360057 7 Revenue LD Jan 2007 3.350600 1.033506 109.923757 </code></pre> <p>If you need it rounded to 4 digit precision, add <code>.round(4)</code> after the <code>.mul(100)</code>:</p> <pre><code>stpassrev['index_value'] = stpassrev.groupby('Sector')['temp_factor'].cumprod().mul(100).round(4) ticket_cat Sector Year factor temp_factor index_value 0 Revenue LSE Jan 2004 NaN 1.000000 100.0000 1 Revenue LSE Jan 2005 4.323543 1.043235 104.3235 2 Revenue LSE Jan 2006 3.096308 1.030963 107.5537 3 Revenue LSE Jan 2007 6.211666 1.062117 114.2346 4 Revenue LD Jan 2004 NaN 1.000000 100.0000 5 Revenue LD Jan 2005 3.521800 1.035218 103.5218 6 Revenue LD Jan 2006 2.741700 1.027417 106.3601 7 Revenue LD Jan 2007 3.350600 1.033506 109.9238 </code></pre>
python|pandas|recursion
1
4,620
70,024,115
How to calculate the number of days since a given event in each group
<p>Below is a sample data frame:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'StudentName': ['Anil','Ramu','Ramu','Anil','Peter','Peter','Anil','Ramu','Peter','Anil'], 'ExamDate': ['2021-01-10','2021-01-20','2021-02-22','2021-03-30','2021-01-04','2021-06-06','2021-04-30','2021-07-30','2021-07-08','2021-09-07'], 'Result': ['Fail','Pass','Fail','Pass','Pass','Pass','Pass','Pass','Fail','Pass']}) StudentName ExamDate Result 0 Anil 2021-01-10 Fail 1 Ramu 2021-01-20 Pass 2 Ramu 2021-02-22 Fail 3 Anil 2021-03-30 Pass 4 Peter 2021-01-04 Pass 5 Peter 2021-06-06 Pass 6 Anil 2021-04-30 Pass 7 Ramu 2021-07-30 Pass 8 Peter 2021-07-08 Fail 9 Anil 2021-09-07 Pass </code></pre> <p>For each row, I would like to calculate the number of days it has been since that student's last failed test:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'StudentName': ['Anil','Ramu','Ramu','Anil','Peter','Peter','Anil','Ramu','Peter','Anil'], 'ExamDate': ['2021-01-10','2021-01-20','2021-02-22','2021-03-30','2021-01-04','2021-06-06','2021-04-30','2021-07-30','2021-07-08','2021-09-07'], 'Result': ['Fail','Pass','Fail','Pass','Pass','Pass','Pass','Pass','Fail','Pass'], 'LastFailedDays': [0, 0, 0, 79, 0, 0, 110, 158, 0, 240]}) StudentName ExamDate Result LastFailedDays 0 Anil 2021-01-10 Fail 0 1 Ramu 2021-01-20 Pass 0 2 Ramu 2021-02-22 Fail 0 3 Anil 2021-03-30 Pass 79 4 Peter 2021-01-04 Pass 0 5 Peter 2021-06-06 Pass 0 6 Anil 2021-04-30 Pass 110 7 Ramu 2021-07-30 Pass 158 8 Peter 2021-07-08 Fail 0 9 Anil 2021-09-07 Pass 240 </code></pre> <p>For example:</p> <ul> <li>Anil failed on 2021-01-10, so for that row it will be zero days.</li> <li>Anil's next record, which is successful, is on 2021-03-30, so the number of days for that row will be the number of days from his previous failed date 2021-01-10 to 2021-03-30, which is 79 days.</li> <li>Anil's third record, which is also successful, is on 2021-04-30, so the number of days there will be again, the number of days 2021-01-10 (his last failed date) to 2021-04-30, which is 110 days.</li> </ul> <p>It is doable with regular loops but I am looking for a more conventional Pandas solution. I'm guessing it's possible with <code>groupby</code>.</p>
<h2>TL;DR</h2> <p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.ffill.html" rel="nofollow noreferrer"><code>groupby.ffill</code></a> to generate each student's last failed date and subtract from <code>ExamDate</code> to get <code>LastFailedDays</code>:</p> <pre><code>df['ExamDate'] = pd.to_datetime(df['ExamDate']) df['LastFailedDays'] = (df['ExamDate'].sub( df['ExamDate'].where(df['Result'] == 'Fail').groupby(df['StudentName']).ffill() ).dt.days.fillna(0)) # StudentName ExamDate Result LastFailedDays # 0 Anil 2021-01-10 Fail 0.0 # 1 Ramu 2021-01-20 Pass 0.0 # 2 Ramu 2021-02-22 Fail 0.0 # 3 Anil 2021-03-30 Pass 79.0 # 4 Peter 2021-01-04 Pass 0.0 # 5 Peter 2021-06-06 Pass 0.0 # 6 Anil 2021-04-30 Pass 110.0 # 7 Ramu 2021-07-30 Pass 158.0 # 8 Peter 2021-07-08 Fail 0.0 # 9 Anil 2021-09-07 Pass 240.0 </code></pre> <p>Re: comments, to group by multiple columns, e.g. <code>StudentClass</code> and <code>StudentName</code>, use a list as the grouper:</p> <pre><code>...groupby([df['StudentClass'], df['StudentName']]).ffill() </code></pre> <hr /> <h2>Details</h2> <ol> <li><p>Convert <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>:</p> <pre><code>df['ExamDate'] = pd.to_datetime(df['ExamDate']) </code></pre> </li> <li><p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a> to generate each student's last failed date (here I've made it a column for easier visualization):</p> <pre><code>df['LastFailedDate'] = df['ExamDate'].where(df['Result'] == 'Fail') # StudentName ExamDate Result LastFailedDate # 0 Anil 2021-01-10 Fail 2021-01-10 # 1 Ramu 2021-01-20 Pass NaT # 2 Ramu 2021-02-22 Fail 2021-02-22 # 3 Anil 2021-03-30 Pass NaT # 4 Peter 2021-01-04 Pass NaT # 5 Peter 2021-06-06 Pass NaT # 6 Anil 2021-04-30 Pass NaT # 7 Ramu 2021-07-30 Pass NaT # 8 Peter 2021-07-08 Fail 2021-07-08 # 9 Anil 2021-09-07 Pass NaT </code></pre> </li> <li><p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.ffill.html" rel="nofollow noreferrer"><code>groupby.ffill</code></a> to forward-fill the last failed date for each student (<code>NaT</code> if no previous failed exam):</p> <pre><code>df['LastFailedDate'] = df['LastFailedDate'].groupby(df['StudentName']).ffill() # StudentName ExamDate Result LastFailedDate # 0 Anil 2021-01-10 Fail 2021-01-10 # 1 Ramu 2021-01-20 Pass NaT # 2 Ramu 2021-02-22 Fail 2021-02-22 # 3 Anil 2021-03-30 Pass 2021-01-10 # 4 Peter 2021-01-04 Pass NaT # 5 Peter 2021-06-06 Pass NaT # 6 Anil 2021-04-30 Pass 2021-01-10 # 7 Ramu 2021-07-30 Pass 2021-02-22 # 8 Peter 2021-07-08 Fail 2021-07-08 # 9 Anil 2021-09-07 Pass 2021-01-10 </code></pre> </li> <li><p>Finally subtract the exam dates by the last failed dates and use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>dt.days</code></a> to extract the number of days:</p> <pre><code>df['LastFailedDays'] = df['ExamDate'].sub(df['LastFailedDate']).dt.days.fillna(0) # StudentName ExamDate Result LastFailedDate LastFailedDays # 0 Anil 2021-01-10 Fail 2021-01-10 0.0 # 1 Ramu 2021-01-20 Pass NaT 0.0 # 2 Ramu 2021-02-22 Fail 2021-02-22 0.0 # 3 Anil 2021-03-30 Pass 2021-01-10 79.0 # 4 Peter 2021-01-04 Pass NaT 
0.0 # 5 Peter 2021-06-06 Pass NaT 0.0 # 6 Anil 2021-04-30 Pass 2021-01-10 110.0 # 7 Ramu 2021-07-30 Pass 2021-02-22 158.0 # 8 Peter 2021-07-08 Fail 2021-07-08 0.0 # 9 Anil 2021-09-07 Pass 2021-01-10 240.0 </code></pre> </li> </ol>
python|pandas|pandas-groupby
2
4,621
56,266,533
How to use TensorFlow tf.data.Dataset flat_map to produce a derived dataset?
<p>I have a Pandas DataFrame, and I'm loading part of it into a tf.data Dataset:</p> <pre class="lang-py prettyprint-override"><code>dataset = tf.data.Dataset.from_tensor_slices(( df.StringColumn.values, df.IntColumn1.values, df.IntColumn2.values, )) </code></pre> <p>Now what I would like to do is to use something like <code>flat_map</code> to produce a derived Dataset that takes the data in each row and produces a bunch of rows in the derived Dataset for each row in the original.</p> <p>But <code>flat_map</code> seems to just pass me placeholder tensors in the <code>lambda</code> function.</p> <p>I'm using TensorFlow 2.0 alpha 0 if that matters.</p> <p><strong>Edit:</strong></p> <p>What I'd like is to be able to write something like this:</p> <pre class="lang-py prettyprint-override"><code>derived = dataset.flat_map(replicate) def replicate(s, i1, i2): return [[0, s, i1, i2], [0.25, s, i1, i2], [0.5, s, i1, i2], [0.75, s, i1, i2]] </code></pre> <p>... and then have <code>derived</code> be a Dataset with four columns and four times as many rows as <code>dataset</code>.</p> <p>But when I try this, <code>s</code> isn't a value, it's a string placeholder tensor.</p> <p><strong>Edit 2:</strong></p> <p>Okay, what I meant is that the <code>replicate</code> function needs to know the values of the row it's replicating:</p> <pre class="lang-py prettyprint-override"><code>slice_count = 16 def price(frac, total, size0, price0, size1, price1, size2, price2, size3, price3): total_per_slice = total / slice_count start = frac * total_per_slice finish = start + total_per_slice price = \ (price0 * (min(finish, size0) - max(start, 0) if 0 &lt; finish and start &lt; size0 else 0)) + \ (price1 * (min(finish, size1) - max(start, size0) if size0 &lt; finish and start &lt; size1 else 0)) + \ (price2 * (min(finish, size2) - max(start, size1) if size1 &lt; finish and start &lt; size2 else 0)) + \ (price3 * (min(finish, size3) - max(start, size2) if size2 &lt; finish and start &lt; size3 else 0)) def replicate(size0, price0, size1, price1, size2, price2, size3, price3): total = size0 + size1 + size2 + size3 return [[ price(frac, total, size0, price0, size1, price1, size2, price2, size3, price3), frac / slice_count] for frac in range(slice_count)] derived = dataset.flat_map(replicate) </code></pre> <p>It's not sufficient to just be able to pass placeholders along. Is this something I can do, or is it doable if I can somehow translate it into TensorFlow's calculation graphs, or is it just not doable the way I'm trying to do it?</p>
<p>Possibly a long way around but you can also use <code>.concatenate()</code> with <code>apply()</code> to achieve a 'flat mapping'</p> <p>something like this:</p> <pre><code>def replicate(ds): return (ds.map(lambda s,i1,i2: (s, i1, i2, tf.constant(0.0))) .concatenate(ds.map(lambda s,i1,i2: (s, i1, i2, tf.constant(0.25)))) .concatenate(ds.map(lambda s,i1,i2: (s, i1, i2, tf.constant(0.5)))) .concatenate(ds.map(lambda s,i1,i2: (s, i1, i2, tf.constant(0.75))))) derived = dataset.apply(replicate) </code></pre> <p>should give you the output you were expecting</p>
python|tensorflow|tensorflow-datasets
0
4,622
56,360,809
Pandas does not read CSV as it writes it
<p>I created a dataframe, and I wanted to export it as a CSV. I used the <code>df.to_csv()</code> method.</p> <p>When I read the csv that I created, it's not parsed well and I have some column values mixed between each other.</p> <p>I tried to change the encoding, as well as the delimiter, but it doesn't solve my problem.</p> <p>Here is a sample of my dataframe before being exported as a CSV:</p> <pre><code> societe ... cluster 6 ACTION AIR ENVIRONNEMENT ... aquavalley 7 AD NUCLEIS ... aquavalley 8 AD'OCC ... aquavalley 9 ADEQUABIO ... aquavalley 10 ADICT SOLUTIONS ... aquavalley </code></pre> <p>Then I export it with:</p> <pre><code>csv_df.to_csv(r"path.csv", sep="\t") </code></pre> <p>and read it with:</p> <pre><code>pd.read_csv(r"path.csv", sep="\t", engine='python') </code></pre> <p>and I obtain something like this:</p> <pre><code> 7 AD NUCLEIS ... aquavalley 8 AD'OCC ... None 215 Rue 34000 Mont... contact@cc.com ... None 9 ADEQUABIO ... aquavalley </code></pre>
<p>You can try adding the argument <code>index</code> in <code>to_csv</code>:</p> <pre><code>df.to_csv(r"path.csv", sep="\t", index=False) </code></pre> <p>Or the problem could be that your fields contain tabs, in which case I would suggest changing the separator.</p>
python|pandas|csv|export
0
4,623
56,098,067
Naive prediction using pandas
<p>Suppose, I have a data set:</p> <pre><code>ix m_t1 m_t2 1 42 84 2 12 12 3 100 50 </code></pre> <p>then, we can use </p> <pre><code>df = df[['m_t1', 'm_t2']].pct_change(axis=1).mul(100)[1] </code></pre> <p>to calculate the difference between <code>m_t1</code> and <code>m_t2</code> in %</p> <p>like </p> <pre><code>diff 100 0 -50 </code></pre> <p>I would like to apply this difference on <code>m_t2</code> to get <code>m_t3_predicted</code></p> <pre><code>m_t3_predicted 168 12 25 </code></pre> <p>How can I do it?</p> <p>P.S. Is there any name for the algorithm?</p>
<p>Try this:</p> <pre><code>df_diff=df[['m_t1', 'm_t2']].pct_change(axis=1).mul(100).drop(columns=["m_t1"]) </code></pre> <pre><code>df_diff diff 0 100.0 1 0.0 2 -50.0 </code></pre> <p>Rename column in df_diff:</p> <pre><code>df_diff.columns=["diff"] </code></pre> <p>Concat dataframes:</p> <pre><code>df_result=pd.concat([df,df_diff],axis=1) </code></pre> <p>Then calculate:</p> <pre><code>df_result["m_t3_predicted"]=df_result["m_t2"]+df_result["diff"]/100*df_result["m_t2"] </code></pre> <p>Result:</p> <pre><code> ix m_t1 m_t2 diff m_t3_predicted 0 1 42 84 100.0 168.0 1 2 12 12 0.0 12.0 2 3 100 50 -50.0 25.0 </code></pre>
python|pandas|dataframe|prediction|difference
2
4,624
55,764,055
Reverse the Multi label binarizer in pandas
<p>I have pandas dataframe as </p> <pre><code>import pandas as pd from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() # load sample data df = pd.DataFrame( {'user_id':['1','1','2','2','2','3'], 'fruits':['banana','orange','orange','apple','banana','mango']}) </code></pre> <p>I collect all the fruits for each user using below code - </p> <pre><code># collect fruits for each user transformed_df= df.groupby('user_id').agg({'fruits':lambda x: list(x)}).reset_index() print(transformed_df) user_id fruits 0 1 [banana, orange] 1 2 [orange, apple, banana] 2 3 [mango] </code></pre> <p>Once I get this list, I do multilabel-binarizer operation to convert this list into ones or zeroes</p> <pre><code># perform MultiLabelBinarizer final_df = transformed_df.join(pd.DataFrame(mlb.fit_transform(transformed_df.pop('fruits')),columns=mlb.classes_,index=transformed_df.index)) print(final_df) user_id apple banana mango orange 0 1 0 1 0 1 1 2 1 1 0 1 2 3 0 0 1 0 </code></pre> <p>Now, I have a requirement wherein, the input dataframe given to me is <code>final_df</code> and I need to get back the <code>transformed_df</code> which contains the list of <code>fruits</code> for each user.</p> <p>How can I get this <code>transformed_df</code> back , given that I have <code>final_df</code> as input dataframe?</p> <p>I am trying to get this working </p> <pre><code># Trying to get this working inverse_df = final_df.join(pd.DataFrame(mlb.inverse_transform(final_df.loc[:, final_df.columns != 'user_id'].as_matrix()))) inverse_df user_id apple banana mango orange 0 1 2 0 1 0 1 0 1 banana orange None 1 2 1 1 0 1 apple banana orange 2 3 0 0 1 0 mango None None </code></pre> <p>But it doesnt give me the list back.</p>
<p><code>inverse_transform()</code> method should help. Here's the documentation - <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MultiLabelBinarizer.html#sklearn.preprocessing.MultiLabelBinarizer.inverse_transform" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MultiLabelBinarizer.html#sklearn.preprocessing.MultiLabelBinarizer.inverse_transform</a>.</p>
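<p>For completeness, a small sketch of the round trip with the question's data, using the same fitted <code>mlb</code> (so that its <code>classes_</code> line up with the indicator columns of <code>final_df</code>). Note that the recovered lists come back in the binarizer's sorted label order, not the original insertion order:</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({'user_id': ['1', '1', '2', '2', '2', '3'],
                   'fruits': ['banana', 'orange', 'orange', 'apple', 'banana', 'mango']})

transformed_df = df.groupby('user_id')['fruits'].agg(list).reset_index()

mlb = MultiLabelBinarizer()
indicator = pd.DataFrame(mlb.fit_transform(transformed_df['fruits']),
                         columns=mlb.classes_, index=transformed_df.index)
final_df = transformed_df[['user_id']].join(indicator)

# Reverse the binarization: feed the indicator columns back through inverse_transform
recovered = pd.DataFrame({
    'user_id': final_df['user_id'],
    'fruits': [list(labels) for labels in mlb.inverse_transform(final_df[mlb.classes_].values)]
})
print(recovered)
#   user_id                   fruits
# 0       1         [banana, orange]
# 1       2  [apple, banana, orange]
# 2       3                  [mango]
</code></pre>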
python-3.x|sklearn-pandas
0
4,625
55,652,404
Fetch Google Analytics API with Python and Google2Pandas
<p>My plan is to fetch data from the GA API with Python 3 and <a href="https://github.com/AURA199X/Google2Pandas" rel="nofollow noreferrer">google2Pandas</a>.</p> <p>My problem so far is that I don't know where to start. When I look at the google2pandas README it looks easy, but I have trouble building my own script with it and implementing the OAuth2 part. </p> <p>What is the right way to start with these <a href="https://github.com/AURA199X/Google2Pandas/tree/master/google2pandas" rel="nofollow noreferrer">boilerplates?</a> </p> <p>All those functions are a bit confusing to me. </p> <p>What do I really need to use the analytics v4 API and fetch some simple stuff for my dashboard? Which parameters do I have to set, and how or where in the file should I do that? Another question is, do I have to use those functions in a new python file or can I just start with the _panalysis_ga.py? </p>
<p>The link to the repository kind of has the answer, but I appreciate it's not always clear if you've never seen it before. There is no need to do anything for the OAuth2 process as the library seems to take care of that.</p> <p>Use <code>pip</code> to install the google2Pandas library on your machine.</p> <p>You then need to create a GCP account if you don't already have one, and follow step 1 <a href="https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/installed-py" rel="nofollow noreferrer">here</a> to get the credentials.</p> <p>You can then use the Quick Demo shown in the README file of the repository (modify the query to your needs).</p> <p><strong>EDIT</strong></p> <p>Look into the New and Improved section of the README file as it is the most up-to-date one.</p>
python-3.x|pandas|google-analytics-api
0
4,626
65,030,278
Cannot get_attribute('href') from element via Selenium
<p>I've been stuck at this for eons now... Can you please help?</p> <p>Trying to build a scraper that scrapes listings on <a href="https://ingatlan.com/lista/elado+lakas+budapest" rel="nofollow noreferrer">this website</a> and I just cannot for the life of me get the URL of each listing. Can you please help?</p> <p>I've tried numerous ways to locate the element, this latest one is by the absolute XPath (by class always failed as well)</p> <p>The code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys import pandas as pd import time PATH = &quot;/Users/csongordoma/Documents/chromedriver&quot; driver = webdriver.Chrome(PATH) driver.get('https://ingatlan.com/lista/elado+lakas+budapest') data = {} df = pd.DataFrame(columns=['Price', 'Address', 'Size', 'Rooms', 'URL']) listings = driver.find_elements_by_css_selector('div.listing__card') for listing in listings: data['Price'] = listing.find_elements_by_css_selector('div.price')[0].text data['Address'] = listing.find_elements_by_css_selector('div.listing__address')[0].text # data['Size'] = listing.find_elements_by_css_selector('div.listing__parameter listing__data--area-size')[0].text data['URL'] = listing.find_elements_by_xpath('/html[1]/body[1]/div[1]/div[2]/div[4]/div[1]/main[1]/div[1]/div[1]/div[1]/a[3]')[0].text df = df.append(data, ignore_index=True) print(len(listings)) print(data) # driver.find_element_by_xpath(&quot;//a[. = 'Következő oldal']&quot;).click() driver.quit() </code></pre> <p>The error message:</p> <pre><code>Traceback (most recent call last): File &quot;hello.py&quot;, line 18, in &lt;module&gt; data['URL'] = listing.find_elements_by_xpath('/html[1]/body[1]/div[1]/div[2]/div[4]/div[1]/main[1]/div[1]/div[1]/div[1]/a[3]')[0].text IndexError: list index out of range </code></pre> <p>Many thanks!</p>
<p>Something like the below should work to get the second <code>a</code> element inside each listing and read its <code>href</code>. Note the leading <code>.</code> in the XPath: without it, <code>//a[2]</code> searches the whole page instead of just the listing element it is called on.</p> <pre><code>data['URL'] = listing.find_element_by_xpath('.//a[2]').get_attribute('href')
</code></pre>
python|pandas|selenium
1
4,627
64,998,101
How to remove part of the column name?
<p>I have a DataFrame with several columns like:</p> <pre><code>'clientes_enderecos.CEP', 'tabela_clientes.RENDA','tabela_produtos.cod_ramo', 'tabela_qar.chave', etc </code></pre> <p>I want to change the name of the columns and remove all the text neighbord a dot.</p> <p>I only know the method <code>pandas.rename({'A':'a','B':'b'})</code></p> <p>But to name them as they are now I used:</p> <pre><code>df_tabela_clientes.columns = [&quot;tabela_clientes.&quot; + str(col) for col in df_tabela_clientes.columns] </code></pre> <p>How could I reverse the process?</p>
<p>Try rename with lambda and string manipulation:</p> <pre><code>df = pd.DataFrame(columns=['clientes_enderecos.CEP', 'tabela_clientes.RENDA','tabela_produtos.cod_ramo', 'tabela_qar.chave']) print(df) #Empty DataFrame #Columns: [clientes_enderecos.CEP, tabela_clientes.RENDA, tabela_produtos.cod_ramo, #tabela_qar.chave] #Index: [] dfc = df.rename(columns=lambda x: x.split('.')[-1]) print(dfc) #Empty DataFrame #Columns: [CEP, RENDA, cod_ramo, chave] #Index: [] </code></pre>
pandas|dataframe
1
4,628
64,952,142
How to read a numpy array float value without changing its format?
<p>I am using pandas and numpy do feature extraction. It take a long time to complete this task so I want to save DataFrame for later use.</p> <p>I write a large pandas.Dataframe which contains multiple 2-d numpy array into a csv file. These cell value like this:</p> <pre><code> color contrast dissimilarity \ 0 134.000000 [[0.0]] [[0.0]] 1 135.133333 [[0.16000000000000003]] [[0.16000000000000003]] </code></pre> <p>Then I read data from the csv file, the format of float number changed like this:</p> <pre><code> color contrast dissimilarity 0 134.00 [[0.]] [[0.]] 1 135.13 [[0.16]] [[0.16]] </code></pre> <p>The float value '0.0' become '0.' . So when I use the dataframe read from the csv file as params for my model, it raise error:</p> <pre><code>ValueError: could not convert string to float: '[[0.]]' </code></pre> <p>This is how I write df to csv file: from datetime import datetime</p> <pre><code>now = datetime.now() current_time = now.strftime(&quot;%x%H:%M:%S&quot;) print(&quot;Current Time =&quot;, current_time) current_time = current_time.replace(':', '') current_time = current_time.replace('/', '') compression_opts = dict(method='zip', archive_name= current_time + '.csv') df.to_csv(current_time + 'test.zip', index=False, compression=compression_opts) </code></pre> <p>This is how I read file</p> <pre><code>df2 = pd.read_csv('112220153048.csv', sep=',') </code></pre> <p><strong>Is there a way that don't change number format when write data to cvs file?</strong></p>
<p>The csv file is converting each element to string because it cannot recognize the brackets as numpy does. There are two solutions I can think of.</p> <p><strong>One is more hacky</strong>, and a little bit ugly. If you have to use the csv, then you could try to parse each element slicing the brackets out.</p> <pre><code>element = &quot;[[0.1]]&quot; float_from_element = float(element[2:-2]) &gt;&gt;&gt; 0.1 </code></pre> <p><strong>My second suggestion</strong> is to use <code>pickle</code> to save data instead of saving it as a csv file. This might be useful if you are only processing the dataframes on python and don't need to read the csv outside of it. The pickle package will save the dataframe or <a href="https://stackoverflow.com/questions/33642951/python-using-pandas-structures-with-large-csviterate-and-chunksize">a series of chunks</a> as a binary file that you can save on your hard drive. Then when you load the pickle file, it will load as a python object, which will conserve its properties as a pandas dataframe or numpy array.</p> <p>I think pandas has native support for pickle, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_pickle.html" rel="nofollow noreferrer">read this link</a>.</p>
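<p>A minimal sketch of the pickle route, assuming <code>df</code> is the DataFrame that holds the nested arrays (pandas supports this directly via <code>to_pickle</code> / <code>read_pickle</code>):</p> <pre><code># binary round-trip: dtypes and the nested numpy arrays survive intact
df.to_pickle('features.pkl')
df2 = pd.read_pickle('features.pkl')

# compression can also be used; it is inferred from the extension
# df.to_pickle('features.pkl.zip')
</code></pre>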
python|pandas|numpy|csv
1
4,629
40,300,782
Unhashable type : 'list' Error
<p>I'm getting this error for the following code </p> <pre><code>def cleaning(CURRENT,STRING,NEXT): data.ix[data[NEXT].str.contains(STRING,na=False),CURRENT] =... data[NEXT][data[NEXT].str.contains(STRING,na=False)] d = ['lower','Less'] c = a[5:] for x,y in zip(range(len(c)),d): cleaning(c[x],d,c[x+1]) cleaning(c[x],d,c[x+2]) </code></pre> <p>Here, data is a pandas DataFrame. However for the same function I'm getting no error in the following code</p> <pre><code>a = ['UBC','LBC', 'HC', 'FC', 'P:C/F','P', 'A', 'Sex'] b = ['upper','lower','hair','footwear'] for x,y in zip(range(len(a)),b): cleaning(a[x],y,a[x+1]) cleaning(a[x],y,a[x+2]) </code></pre> <p>I know this is because we can't use a list as the key in a dict but I'm not sure how that's happening here and why is it working for one loop and not the other.</p>
<p>You are passing in <code>d</code>, a list, as the <code>STRING</code> argument:</p> <pre><code>d = ['lower','Less'] # ... cleaning(c[x],d,c[x+1]) # ^ </code></pre> <p>Your second example works, you pass in <code>y</code> instead, which is a single element from the <code>b</code> list:</p> <pre><code>b = ['upper','lower','hair','footwear'] for x,y in zip(range(len(a)),b): # ^ one element from b ^ cleaning(a[x],y,a[x+1]) # ^ </code></pre> <p>The <code>pandas.Series.str.contains</code> method accepts regexes by default, and <code>re.compile</code> uses a dictionary as a cache to hold compiled patterns. Because you passed in a list, you get your error:</p> <pre><code>&gt;&gt;&gt; pandas.Series(['aa', 'bb', 'cc']).str.contains(['a']) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Users/mjpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site-packages/pandas/core/strings.py", line 1458, in contains regex=regex) File "/Users/mjpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site-packages/pandas/core/strings.py", line 222, in str_contains regex = re.compile(pat, flags=flags) File "/Users/mjpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py", line 194, in compile return _compile(pattern, flags) File "/Users/mjpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py", line 237, in _compile p, loc = _cache[cachekey] TypeError: unhashable type: 'list' </code></pre> <p>The fix is to pass in <code>y</code> instead of <code>d</code>:</p> <pre><code>for x, y in zip(range(len(c)) ,d): cleaning(c[x], y, c[x + 1]) cleaning(c[x], y, c[x + 2]) </code></pre> <p>You may want to come up with better variable names; one-letter names are hard to distinguish and easily lead to errors like these.</p>
python|pandas|for-loop|dictionary
1
4,630
39,488,282
total size of new array must be unchanged
<p>I have two arrays x1 and x2, both are 1*14 arrays i am trying to zip them up and then perform reshape.</p> <p>The code is as below ;</p> <pre><code>x1 </code></pre> <p>Out[122]: array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9])</p> <pre><code>x2 </code></pre> <p>Out[123]: array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3])</p> <pre><code>X = np.array(zip(x1, x2)).reshape(2, len(x1)) </code></pre> <p>ValueErrorTraceback (most recent call last) in () ----> 1 X = np.array(zip(x1, x2)).reshape(2, len(x1))</p> <p>ValueError: total size of new array must be unchanged</p>
<p>I would assume you're on Python 3, where <code>zip</code> returns an iterator, so <code>np.array(zip(x1, x2))</code> wraps the zip object itself instead of building a 2-column array, which is why the reshape fails.</p> <p>You should call <code>list</code> on the <em>zipped</em> items:</p> <pre><code>X = np.array(list(zip(x1, x2))).reshape(2, len(x1))
#            ^^^^
print(X)
# [[1 1 2 3 3 2 1 2 5 8 6 6 5 7]
#  [5 6 6 7 7 1 8 2 9 1 7 1 9 3]]
</code></pre> <p>In Python 2, <code>zip</code> returns a list and not an iterator as with Python 3, and your previous code would work fine.</p>
python|arrays|numpy|reshape
3
4,631
39,820,963
Fastest way to select rows where value of column of strings is in a set
<p>I have a <code>set</code> of email addresses that I've selected from one set of values. I'd like to subset a <code>pandas DataFrame</code> to include only rows where the <code>unique_id</code> column value is not contained in the set. Here's what I've done that is running very slow:</p> <pre><code>signup_emails = set(online_signup.unique_id) non_email_signup_event_emails = event_attendees[event_attendees.unique_id.apply(lambda x: x in signup_emails) == False].email </code></pre> <p>The table is several hundred thousand rows, but my computer just freezes up on this command, shows a high CPU load, and doesn't terminate even after long waits (20 minutes). How can I do this faster?</p>
<p>Use the vectorised <code>isin</code> method on the <code>unique_id</code> column instead of <code>apply</code>.</p> <pre><code>event_attendees[event_attendees.unique_id.isin(signup_emails)]
</code></pre> <p>For the rows whose <code>unique_id</code> is <em>not</em> in <code>signup_emails</code>, negate the mask with <code>~</code> and then take the email column:</p> <pre><code>non_email_signup_event_emails = event_attendees[~event_attendees.unique_id.isin(signup_emails)].email
</code></pre>
python|pandas
1
4,632
39,526,831
Multiprocessing python function for numerical calculations
<p>Hoping to get some help here with parallelising my python code, I've been struggling with it for a while and come up with several errors in whichever way I try, currently running the code will take about 2-3 hours to complete, The code is given below; </p> <pre><code>import numpy as np from scipy.constants import Boltzmann, elementary_charge as kb, e import multiprocessing from functools import partial Tc = 9.2 x = [] g= [] def Delta(T): ''' Delta(T) takes a temperature as an input and calculates a temperature dependent variable based on Tc which is defined as a global parameter ''' d0 = (pi/1.78)*kb*Tc D0 = d0*(np.sqrt(1-(T**2/Tc**2))) return D0 def element_in_sum(T, n, phi): D = Delta(T) matsubara_frequency = (np.pi * kb * T) * (2*n + 1) factor_d = np.sqrt((D**2 * cos(phi/2)**2) + matsubara_frequency**2) element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d) return element def sum_elements(T, M, phi): ''' sum_elements(T,M,phi) is the most computationally heavy part of the calculations, the larger the M value the more accurate the results are. T: temperature M: number of steps for matrix calculation the larger the more accurate the calculation phi: The phase of the system can be between 0- pi ''' X = list(np.arange(0,M,1)) Y = [element_in_sum(T, n, phi) for n in X] return sum(Y) def KO_1(M, T, phi): Iko1Rn = (2 * np.pi * kb * T /e) * sum_elements(T, M, phi) return Iko1Rn def main(): for j in range(1, 92): T = 0.1*j for i in range(1, 314): phi = 0.01*i pool = multiprocessing.Pool() result = pool.apply_async(KO_1,args=(26000, T, phi,)) g.append(result) pool.close() pool.join() A = max(g); x.append(A) del g[:] </code></pre> <p>My approach was to try and send the KO1 function into a multiprocessing pool but I either get a <code>Pickling</code> error or a <code>too many files open</code>, Any help is greatly appreciated, and if multiprocessing is the wrong approach I would love any guide.</p>
<p>This is not an answer to the question, but if I may, I would propose how to speed up the code using simple numpy array operations. Have a look at the following code:</p> <pre><code>import numpy as np from scipy.constants import Boltzmann, elementary_charge as kb, e import time Tc = 9.2 RAM = 4*1024**2 # 4GB def Delta(T): ''' Delta(T) takes a temperature as an input and calculates a temperature dependent variable based on Tc which is defined as a global parameter ''' d0 = (np.pi/1.78)*kb*Tc D0 = d0*(np.sqrt(1-(T**2/Tc**2))) return D0 def element_in_sum(T, n, phi): D = Delta(T) matsubara_frequency = (np.pi * kb * T) * (2*n + 1) factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2) element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d) return element def KO_1(M, T, phi): X = np.arange(M)[:,np.newaxis,np.newaxis] sizeX = int((float(RAM) / sum(T.shape))/sum(phi.shape)/8) #8byte i0 = 0 Iko1Rn = 0. * T * phi while (i0+sizeX) &lt;= M: print "X = %i"%i0 indices = slice(i0, i0+sizeX) Iko1Rn += (2 * np.pi * kb * T /e) * element_in_sum(T, X[indices], phi).sum(0) i0 += sizeX return Iko1Rn def main(): T = np.arange(0.1,9.2,0.1)[:,np.newaxis] phi = np.linspace(0,np.pi, 361) M = 26000 result = KO_1(M, T, phi) return result, result.max() T0 = time.time() r, rmax = main() print time.time() - T0 </code></pre> <p>It runs a bit more than 20sec on my PC. One has to be careful not to use too much memory, that is why there is still a loop with a bit complicated construction to use only pieces of X. If enough memory is present, then it is not necessary.</p> <p>One should also note that this is just the first step of speeding up. Much improvement could be reached still using e.g. just in time compilation or cython.</p>
python|multithreading|numpy|multiprocessing
1
4,633
39,865,212
dyld: Library not loaded: @rpath/libcudart.8.0.dylib, while building tensorflow on Mac OSX
<p>I am building tensorflow on my Mac(an hackintosh, so I have a GPU, and already installed CUDA8.0. It works fine with building caffe, so I am sure it works.) I have already set up the environment variables as following(I have put these in <code>.zshrc</code>,<code>.bash_profile</code> and <code>.bashrc</code>):</p> <pre><code>export CUDA_HOME=/usr/local/cuda export DYLD_LIBRARY_PATH="$DYLD_LIBRARY_PATH:$CUDA_HOME/lib" export PATH="$CUDA_HOME/bin:$PATH" export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$CUDA_HOME/lib:$CUDA_HOME/extras/CUPTI/lib" </code></pre> <p><code>./configure</code> works fine. Then I start build using command <code>bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package</code>. Then I got this error:</p> <pre><code> ERROR: /Development/tensorflow/tensorflow/python/BUILD:572:1: Executing genrule //tensorflow/python:array_ops_pygenrule failed: bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped): com.google.devtools.build.lib.shell.AbnormalTerminationException: Process terminated by signal 5. dyld: Library not loaded: @rpath/libcudart.8.0.dylib Referenced from: /private/var/tmp/_bazel_zarzen/bdf1cb43f3ff02468b610730bd03f348/execroot/tensorflow/bazel-out/host/bin/tensorflow/python/gen_array_ops_py_wrappers_cc Reason: image not found /bin/bash: line 1: 92702 Trace/BPT trap: 5 bazel-out/host/bin/tensorflow/python/gen_array_ops_py_wrappers_cc @tensorflow/python/ops/hidden_ops.txt 1 &gt; bazel-out/local_darwin-opt/genfiles/tensorflow/python/ops/gen_array_ops.py Target //tensorflow/tools/pip_package:build_pip_package failed to build </code></pre> <p>I can make sure the missed library is there. And I also tried install pre-built binary(I know it only support CUDA7.5, so I set up the PATH to point to CUDA7.5, but it doesn't work. when I try to <code>import tensorflow</code>, similar error <code>Library not loaded: @rpath/libcudart.7.5.dylib</code>, only version number changed).</p> <p>I don't know why it cannot find the <code>lib</code>. Anyone can help? or any suggestions?</p>
<p>The following should fix the error.</p> <p>Find the file "genrule-setup.sh". The file should be in </p> <pre><code>&lt;tensorflow source dir&gt;/bazel-tensorflow/external/bazel_tools/tools/genrule/ </code></pre> <p>If the timestamp of this file changes then bazel build will fail saying the file is corrupted. So before modifying this file make a note of the timestamp</p> <pre><code>stat genrule-setup.sh </code></pre> <p>You should get an output like this:</p> <pre><code>16777220 25929227 -rwxr-xr-x 1 user wheel 0 242 "Oct 12 23:46:28 2016" "Oct 10 21:49:39 2026" "Oct 12 21:49:39 2016" "Oct 12 21:49:38 2016" 4096 8 0 genrule-setup.sh </code></pre> <p>Note down the second timestamp "Oct 10 21:49:39 2026" from the above output</p> <p>edit the genrule-setup.sh file</p> <pre><code>nano genrule-setup.sh </code></pre> <p>and add the environment configuration to the end of the file</p> <pre><code>export DYLD_LIBRARY_PATH=/usr/local/cuda/lib </code></pre> <p>save and close the editor.</p> <p>Then change the timestamp to the original timestamp</p> <pre><code>touch -t YYYYMMDDhhmm.SS genrule-setup.sh </code></pre> <p>for e.g. </p> <pre><code>touch -t 202610102149.39 genrule-setup.sh </code></pre> <p>Finally, create a symbolic link to avoid "Segmentation fault: 11" error</p> <pre><code>ln -sf /usr/local/cuda/lib/libcuda.dylib /usr/local/cuda/lib/libcuda.1.dylib </code></pre> <p>Now restart the build</p> <pre><code>bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package </code></pre>
macos|tensorflow|building
9
4,634
44,335,384
python numpy contains text "array"
<p>I am using a binarizer to get some one-hot-vectors. For some reason my output arrays contain a text literally saying "array".</p> <p>The form is like:</p> <pre><code>[array( [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ] )] </code></pre>
<p>It's not a string in your data. It is a NumPy array inside of a list, and <code>array(...)</code> is simply how NumPy arrays are rendered when printed.</p> <p>Test it with <code>np.array([2, 3])</code>: the output will be <code>array([2, 3])</code>.</p>
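<p>If a plain 2-D array is needed afterwards (here <code>a</code> stands for the list shown in the question, and the rows are assumed to have equal length):</p> <pre><code>import numpy as np

one_hot = a[0]               # the list holds a single 2-D array, so just take it out

# or, if several arrays were appended to the list:
# one_hot = np.vstack(a)     # stack them row-wise into one 2-D array
</code></pre>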
python|arrays|python-3.x|numpy
2
4,635
44,096,471
What is an efficient way to loop through dataframes with pandas?
<p>I have a column in dataframes which contains values <code>'a','b','c','d' and 'e'</code> and there total <strong>1.5 million</strong> records. I would like to convert the values in to numerical categories such as <code>a=&gt;1,b=&gt;2,c=&gt;3,d=&gt;4 and e=&gt;5</code>. </p> <p>Since I have <strong>1.5 million</strong> records, what is the most <em>efficient</em> way I can do this operation?</p>
<p>I think using <code>df.applymap()</code> with an efficient function would do the trick.</p>
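<p>A sketch of the dictionary-lookup idea: for a single column, <code>Series.map</code> with a dict is the usual vectorised route (the column name <code>category</code> is an assumption), while <code>applymap</code> applies the same lookup element-wise across the whole frame.</p> <pre><code>mapping = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}

# map one column through the dictionary
df['category'] = df['category'].map(mapping)

# or, as suggested above, over every cell of the frame
# df = df.applymap(lambda x: mapping.get(x, x))
</code></pre>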
python|pandas|numpy
0
4,636
69,578,431
How to fix StreamlitAPIException: ("Expected bytes, got a 'int' object", 'Conversion failed for column FG% with type object')
<p><strong>Error:</strong></p> <pre><code>StreamlitAPIException: (&quot;Expected bytes, got a 'int' object&quot;, 'Conversion failed for column FG% with type object') </code></pre> <p><strong>Error Traceback</strong></p> <pre><code>Traceback: File &quot;C:\Users\ASUS\streamlit_freecodecamp-main\app_3_eda_basketball\basketball_app.py&quot;, line 44, in &lt;module&gt; st.dataframe(df_selected_team) </code></pre>
<p>It’s a bug that came with <code>streamlit 0.85.0</code>. <a href="https://blog.streamlit.io/all-in-on-apache-arrow/" rel="noreferrer"><code>pyarrow</code></a> has an issue with <code>numpy.dtype</code> values (which df.dtypes returns).</p> <p>The <a href="https://issues.apache.org/jira/browse/ARROW-14087" rel="noreferrer">issue</a> has been filed and hopefully will be taken care of soon.</p> <p>A possible workaround is to convert DataFrame cells to strings with <code>df.astype(str)</code></p> <p>In your case</p> <pre><code>test = df_selected_team.astype(str) st.dataframe(test) </code></pre> <p>or</p> <p>downgrade your streamlit version to <code>0.84</code></p> <p>or</p> <p>A preferable solution for this is to use the old dataframe serializer by setting this in your .streamlit/config.toml file:</p> <pre><code>[global] dataFrameSerialization = &quot;legacy&quot; </code></pre> <p>This allows you to continue upgrading to the latest version of Streamlit.</p> <p>Follow this <a href="https://discuss.streamlit.io/t/after-upgrade-to-the-latest-version-now-this-error-id-showing-up-arrowinvalid/15794/7" rel="noreferrer">thread</a> for more updates</p>
python|pandas|streamlit
25
4,637
69,460,270
Missing column values fill based on the available values
<p>How to fill missing values for apple <code>variety</code> from the <strong>same column</strong> when there are 1-4 varieties per farm and but cannot be two varieties with the same <code>ripening</code> index on the same farm? Assume the column has all possible scenarios.</p> <p>For instance, in the below sample, '<em>Empire</em>' and <em>'Honeycrisp'</em> have the same <code>ripening</code> but they are from the different farms.</p> <p>A sample <code>df</code> (a part of a larger dataframe):</p> <pre><code>df = pd.DataFrame( {'farm': [419,382, 382, 382, 411, 411, 411], 'variety': ['Gala', 'Gala', 'Empire', '', 'Honeycrisp', '', 'Fuji'], 'ripening':[2,2,3,3,3,3,6], 'D': np.random.randn(7)*10, 'E': list('abcdefg') } ) df Out[223]: farm variety ripening D E 0 419 Gala 2 12.921246 a 1 382 Gala 2 -2.776150 b 2 382 Empire 3 3.551226 c 3 382 3 2.715187 d 4 411 Honeycrisp 3 -13.557640 e 5 411 3 -11.525100 f 6 411 Fuji 6 -3.660661 g </code></pre> <p>my desired output:</p> <pre><code> farm variety ripening D E 0 419 Gala 2 12.921246 a 1 382 Gala 2 -2.776150 b 2 382 Empire 3 3.551226 c 3 382 Empire 3 2.715187 d 4 411 Honeycrisp 3 -13.557640 e 5 411 Honeycrisp 3 -11.525100 f 6 411 Fuji 6 -3.660661 g </code></pre>
<p>Use:</p> <pre><code>#create NaNs instead of empty strings
df['variety'] = df['variety'].replace('', np.nan)

#test if only 1 unique category per ripening and farm
m = df.groupby(['farm','ripening'])['variety'].transform('nunique').eq(1)

#only for filtered rows forward fill values per group
df.update(df[m].groupby(['farm','ripening'])['variety'].ffill())
print (df)
   farm     variety  ripening          D  E
0   419        Gala         2 -12.571434  a
1   382        Gala         2   1.839992  b
2   382      Empire         3  18.946881  c
3   382      Empire         3   6.552552  d
4   411  Honeycrisp         3  11.755782  e
5   411  Honeycrisp         3  11.272973  f
6   411        Fuji         6   7.416918  g
</code></pre>
python|pandas|pandas-groupby
1
4,638
69,525,091
move multiple csv files by column value using python
<p>I have thousands of csv files under uncategorised parent folder, all the files have one date column containing same date for all the rows. I want to check the date value of each files and move/copy to month wise folder using python.</p> <p>I have tried key = df.iloc[0]['Date'] but not able to use key.endswith or key.<strong>contains</strong></p>
<p>Here I am looping through the files and reading the first row of the date column. I have created new folders in the directory beforehand, named after each month of the year. Once I have read the date, I convert it to the month name (e.g. April, May). I then look through the entries in the directory and, if the month name and a folder name match, I move the file into that folder.</p> <pre><code>import os
import pandas as pd
import datetime

files = os.listdir()

for file in files:
    if &quot;.csv&quot; in file:
        df = pd.read_csv(file)
        dates = df['date']
        date = dates[0]
        date = datetime.datetime.strptime(date, &quot;%d/%m/%y&quot;)
        date = date.strftime(&quot;%B&quot;)
        for folder in files:
            if date.lower() == folder.lower():
                os.rename(file, os.path.join(folder, file))
</code></pre>
python|pandas|csv
0
4,639
54,241,625
Calculate average of column x if column y meets criteria, for each y
<p>How do I retrieve the value of column Z and its average if any value are >1</p> <pre><code>data=[9,2,3,4,5,6,7,8] df = pd.DataFrame(np.random.randn(8, 5),columns=['A', 'B', 'C', 'D','E']) fd=pd.DataFrame(data,columns=['Z']) df=pd.concat([df,fd], axis=1) l=[] for x,y in df.iterrows(): for i,s in y.iteritems(): if s &gt;1: l.append(x) print(df['Z']) </code></pre> <p>The expected output will most likely be a dictionary with the column name as key and the average of Z as its values.</p>
<p>Do you mean this?</p> <pre><code>df[df['Z']&gt;1].loc[:,'Z'].mean(axis=0) </code></pre> <p>or </p> <pre><code>df[df['Z']&gt;1]['Z'].mean() </code></pre>
python|pandas|dataframe
1
4,640
54,078,450
Tensorflow Logits and Labels must be broadcastable
<p>I am very green working with Tensorflow, and can not seem to get past this error. I have been trouble shooting this error for two days now and I can't get it to work. Can anyone see an issue with the code? I am using python3 via Jupyter Notebook. Thanks for the assistance.</p> <p>Here is my code:</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("official/MNIST_data/", one_hot=True) Extracting official/MNIST_data/train-images-idx3-ubyte.gz Extracting official/MNIST_data/train-labels-idx1-ubyte.gz Extracting official/MNIST_data/t10k-images-idx3-ubyte.gz Extracting official/MNIST_data/t10k-labels-idx1-ubyte.gz type(mnist) tensorflow.contrib.learn.python.learn.datasets.base.Datasets mnist.train.num_examples 55000 mnist.test.num_examples 10000 Preparation for building CNN model: define supporting Functions Initialize weights in Filter def initialize_weights (filter_shape): init_random_dist = tf.truncated_normal(filter_shape, stddev=.1) return (tf.Variable(init_random_dist)) def initialize_bias(bias_shape): initial_bias_vals = tf.constant(.1, shape=bias_shape) return(tf.Variable(initial_bias_vals)) def create_convolution_layer_and_compute_dot_product(inputs, filter_shape): filter_initialized_with_weights = initialize_weights(filter_shape) conv_layer_outputs = tf.nn.conv2d(inputs, filter_initialized_with_weights, strides = [1,1,1,1], padding = 'SAME') return(conv_layer_outputs) def create_relu_layer_and_compute_dotproduct_plus_b(inputs, filter_shape): b = initialize_bias([filter_shape[3]]) relu_layer_outputs = tf.nn.relu(inputs + b) return (relu_layer_outputs) def create_maxpool2by2_and_reduce_spatial_size(inputs): pooling_layer_outputs = tf.nn.max_pool(inputs, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') return(pooling_layer_outputs) def create_fully_conected_layer_and_compute_dotproduct_plus_bias(inputs, output_size): input_size = int(inputs.get_shape()[1]) W = initialize_weights([input_size, output_size]) b = initialize_bias([output_size]) fc_xW_plus_bias_outputs = tf.matmul(inputs, W) + b return(fc_xW_plus_bias_outputs) Build the Convolutional Neural Network x = tf.placeholder(tf.float32, shape = [None, 784]) y_true = tf.placeholder(tf.float32, [None, 10]) x_image = tf.reshape(x, [-1,28,28,1]) conv_layer_1_outputs \ = create_convolution_layer_and_compute_dot_product(x_image, filter_shape=[5,5,1,32]) conv_relu_layer_1_outputs \ = create_relu_layer_and_compute_dotproduct_plus_b(conv_layer_1_outputs, filter_shape=[5,5,1,32]) pooling_layer_1_ouptuts = create_maxpool2by2_and_reduce_spatial_size(conv_relu_layer_1_outputs) conv_layer_2_outputs \ = create_convolution_layer_and_compute_dot_product(conv_layer_1_outputs, filter_shape=[5,5,32,64]) conv_relu_layer_2_outputs \ = create_relu_layer_and_compute_dotproduct_plus_b(conv_layer_2_outputs, filter_shape=[5,5,32,64]) pooling_layer_2_outputs = create_maxpool2by2_and_reduce_spatial_size(conv_relu_layer_2_outputs) pooling_layer_2_outputs_flat=tf.reshape(pooling_layer_2_outputs, [-1,7*7*64]) fc_layer_1_outputs \ = create_fully_conected_layer_and_compute_dotproduct_plus_bias(pooling_layer_2_outputs_flat, output_size=1024) fc_relu_layer_1_outputs = tf.nn.relu(fc_layer_1_outputs) hold_prob = tf.placeholder(tf.float32) fc_dropout_outputs = tf.nn.dropout(fc_layer_1_outputs, keep_prob=hold_prob) y_pred = 
create_fully_conected_layer_and_compute_dotproduct_plus_bias(fc_dropout_outputs, output_size=10) softmax_cross_entropy_loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=y_pred) cross_entropy_mean = tf.reduce_mean(softmax_cross_entropy_loss) optimizer = tf.train.AdamOptimizer(learning_rate=.001) cnn_trainer = optimizer.minimize(cross_entropy_mean) vars_initializer = tf.global_variables_initializer() steps = 5000 Run tf.session to train and test deep learning CNN model with tf.Session() as sess: sess.run(vars_initializer) for i in range(steps): batch_x, batch_y = mnist.train.next_batch(50) sess.run(cnn_trainer, feed_dict={x: batch_x, y_true: batch_y, hold_prob: .5}) if i % 100 == 0: print('ON STEP: {}', format(i)) print('ACCURACY: ') matches = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y_true, 1)) acc = tf.reduce_mean(tf.cast(matches, tf.float32)) test_accuracy = sess.run(acc, feed_dict = {x: mnist.test.images, y_true: mnist.test.labels, hold_prob: 1.0}) print(test_accuracy) print('\n') </code></pre> <p>Here is the exact error message:</p> <pre><code>InvalidArgumentError: logits and labels must be broadcastable: logits_size=[200,10] labels_size=[50,10] [[node softmax_cross_entropy_with_logits_7 (defined at &lt;ipython-input-162-3d06fe78186c&gt;:1) = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](add_31, softmax_cross_entropy_with_logits_7/Reshape_1)]] </code></pre>
<p>Posting this in case someone else is having similar issues. </p> <p>The error should read "<strong>Dumb User</strong>" lol. I passed the wrong variable into the second layer. </p> <pre><code>pooling_layer_1_ouptuts = create_maxpool2by2_and_reduce_spatial_size(conv_relu_layer_1_outputs) conv_layer_2_outputs \ = create_convolution_layer_and_compute_dot_product(conv_layer_1_outputs, filter_shape=[5,5,32,64]) </code></pre> <p>should be:</p> <pre><code>pooling_layer_1_ouptuts = create_maxpool2by2_and_reduce_spatial_size(conv_relu_layer_1_outputs) conv_layer_2_outputs \ = create_convolution_layer_and_compute_dot_product(pooling_layer_1_ouptuts , filter_shape=[5,5,32,64]) </code></pre>
python-3.x|tensorflow|jupyter-notebook
0
4,641
54,092,650
Retrieve a word from file name in python
<p>I have list of 5 excel files in a specific path as mentioned below : <code>'Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\**\\*Claypot*.csv'.</code> The list of 5 excel files and the paths are as per below</p> <pre><code>['Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\December - SVCD\\UAE _ Citymax _Claypot_ Burdubai_fullcampaignfile.csv', 'Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\January2019 - SVCD\\UAE _ Citymax _Claypot_ Burdubai_fullcampaignfile.csv', 'Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\November - SVCD\\UAE _ Citymax _ Claypot_BD_fullcampaignfile.csv', 'Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\October - SVCD\\UAE _ Citymax _Claypot_ Burdubai_fullcampaignfile.csv', 'Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\sept - svcd\\UAE _ Claypot _ Burdubai_fullcampaignfile.csv'] </code></pre> <p>Now I am trying to retrieve the Month name from each excel file name and add to my data frames as per below code, but getting struck as I am able to retrieve only for November Month which is incorrect. Please help me</p> <pre><code>m=['November','December','October','September','August'] def extract(folderpath): final=glob.glob(folderpath) frames = [] for file in final: j=0 df = pd.read_csv(file, error_bad_lines=False) df['Month']=m[j] frames.append(df) j=j+1 mergedfile = pd.concat(frames) return mergedfile a=extract('Z:\\Ruchika\\Citymax_Dec06\\SVCDs\\**\\*Claypot*.csv') Input : a.shape Ouput : (3232487, 31) Input : a['Month'].value_counts() Output : November 3232487 Name: Month, dtype: int64 </code></pre>
<p>I'm guessing it can be any month, so why not just check for months:</p> <pre><code>filename = r'Z:\Ruchika\Citymax_Dec06\SVCDs\December - SVCD\UAE _ Citymax Claypot Burdubai_fullcampaignfile.csv' for month in ['October', 'November', 'December']: # List of months if month in filename: print('Month is:', month) </code></pre>
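<p>If this has to work for any month, one way to fold the same check into the original <code>extract()</code> loop is a small helper built on <code>calendar.month_name</code>. This is only a sketch: it matches on the parent folder of each globbed path and on three-letter prefixes so that folders like <code>sept - svcd</code> are still recognised.</p> <pre><code>import os
import calendar

def month_from_path(path):
    """Return the month name implied by the file's parent folder, or None."""
    folder = os.path.basename(os.path.dirname(path)).lower()   # e.g. 'december - svcd'
    for month in calendar.month_name[1:]:                       # 'January' ... 'December'
        if month.lower()[:3] in folder:                         # 'sep' matches 'sept - svcd'
            return month
    return None

# inside extract(), instead of df['Month'] = m[j]:
# df['Month'] = month_from_path(file)
</code></pre>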
python|string|pandas|filenames|series
1
4,642
38,491,881
Grouping and ungrouping based on a column
<p>My goal is to be able to group rows of a CSV file by a column value, and also to perform the inverse operation. To give an example, it is desired to be able to transform back and forth between these two formats:</p> <pre><code>uniqueId, groupId, feature_1, feature_2 1, 100, text of 1, 10 2, 100, some text of 2, 20 3, 200, text of 3, 30 4, 200, more text of 4, 40 5, 100, another text of 5, 50 </code></pre> <p>Grouped on the groupId:</p> <pre><code>uniqueId, groupId, feature_1, feature_2 1|2|5, 100, text of 1|some text of 2|another text of 5, 10|20|50 3|4, 200, text of 3|more text of 4, 30|40 </code></pre> <p>The delimiter (here |) is assumed to not exist anywhere in the data.</p> <p>I am trying to use Pandas to perform this transformation. My code so far can access the cell of rows grouped by a groupId, but I do not know how to populate the new dataframe.</p> <p>How can my method be completed to accomplish the transformation into the desired new df?</p> <p>How would an inverse method look like, that transforms the new df back to the original one?</p> <p>If R is a better tool for this job, I am also open to suggestions in R.</p> <pre><code>import pandas as pd def getGroupedDataFrame(df, groupByField, delimiter): ''' Create a df with the rows grouped on groupByField, values separated by delimiter''' groupIds = set(df[groupByField]) df_copy = pd.DataFrame(index=groupIds,columns=df.columns) # iterate over the different groupIds for groupId in groupIds: groupRows = df.loc[df[groupByField] == groupId] # for all rows of the groupId for index, row in groupRows.iterrows(): # for all columns in the df for column in df.columns: print row[column] # this prints the value the cell # here append row[column] to its cell in the df_copy row of groupId, separated by delimiter </code></pre>
<p>To perform the grouping, you can <code>groupby</code> on <code>'groupId'</code>, and then within each group perform a join with your given delimiter on each column:</p> <pre><code>def group_delim(grp, delim='|'): """Join each columns within a group by the given delimiter.""" return grp.apply(lambda col: delim.join(col)) # Make sure the DataFrame consists of strings, then apply grouping function. grouped = df.astype(str).groupby('groupId').apply(group_delim) # Drop the grouped groupId column, and replace it with the index groupId. grouped = grouped.drop('groupId', axis=1).reset_index() </code></pre> <p>The grouped output:</p> <pre><code> groupId uniqueId feature_1 feature_2 0 100 1|2|5 text of 1|some text of 2|another text of 5 10|20|50 1 200 3|4 text of 3|more text of 4 30|40 </code></pre> <p>Similar idea for the inverse process, but since each row is a unique group you can just use a regular <code>apply</code>, no need for a <code>groupby</code>:</p> <pre><code>def ungroup_delim(col, delim='|'): """Split elements in a column by the given delimiter, stacking columnwise""" return col.str.split(delim, expand=True).stack() # Apply the ungrouping function, and forward fill elements that aren't grouped. ungrouped = grouped.apply(ungroup_delim).ffill() # Drop the unwieldy altered index for a new one. ungrouped = ungrouped.reset_index(drop=True) </code></pre> <p>And ungrouping yields the original data:</p> <pre><code> groupId uniqueId feature_1 feature_2 0 100 1 text of 1 10 1 100 2 some text of 2 20 2 100 5 another text of 5 50 3 200 3 text of 3 30 4 200 4 more text of 4 40 </code></pre> <p>To use different delimiters, you'd just pass <code>delim</code> as an argument to <code>apply</code>:</p> <pre><code>foo.apply(group_delim, delim=';') </code></pre> <p>As a side note, in general iterating over DataFrames is quite slow. Whenever possible you'll want to use a vectorized approach like what I've done above.</p>
python|r|csv|pandas
4
4,643
38,111,010
how to index a numpy array using conditions?
<p>Suppose I have an array like this:</p> <pre><code>a = np.array([[2,1], [4,2], [1,3],...] </code></pre> <p>I want to retrieve the elements of the second column where the corresponding elements in the first column match some condition. So something like</p> <pre><code>a[a[:,0] == np.array([2,4]),1] (?) </code></pre> <p>should give</p> <pre><code>np.array([1,2]) </code></pre>
<p>While this uses <code>list</code> to collect results and requires a <code>for</code> loop, this collects the second column values once the first column has passed some criteria (in a <code>list</code> of acceptable results in this case).</p> <pre><code>a = np.array([[2, 1], [4, 2], [1, 3]]) b = [] criteria = [2, 4] for entry in a: if entry[0] in criteria: b.append(entry[1]) b = np.array(b) </code></pre>
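<p>For reference, a vectorised version of the same filter: <code>np.isin</code> builds the boolean mask over the first column in one call (available since NumPy 1.13; on older versions <code>np.in1d</code> does the same job).</p> <pre><code>import numpy as np

a = np.array([[2, 1], [4, 2], [1, 3]])
criteria = [2, 4]

b = a[np.isin(a[:, 0], criteria), 1]   # array([1, 2])
</code></pre>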
python|arrays|numpy
0
4,644
38,406,511
Write json format using pandas Series and DataFrame
<p>I'm working with csvfiles. My goal is to write a json format with csvfile information. Especifically, I want to get a similar format as miserables.json</p> <p>Example:</p> <pre><code>{"source": "Napoleon", "target": "Myriel", "value": 1}, </code></pre> <p>According with the information I have the format would be:</p> <pre><code>[ { "source": "Germany", "target": "Mexico", "value": 1 }, { "source": "Germany", "target": "USA", "value": 2 }, { "source": "Brazil", "target": "Argentina", "value": 3 } ] </code></pre> <p>However, with the code I used the output looks as follow:</p> <pre><code>[ { "source": "Germany", "target": "Mexico", "value": 1 }, { "source": null, "target": "USA", "value": 2 } ][ { "source": "Brazil", "target": "Argentina", "value": 3 } ] </code></pre> <p><code>Null</code> source must be Germany. This is one of the main problems, because there are more cities with that issue. Besides this, the information is correct. I just want to remove several list inside the format and replace null to correct country.</p> <p>This is the code I used using <code>pandas</code> and <code>collections</code>.</p> <pre><code>csvdata = pandas.read_csv('file.csv', low_memory=False, encoding='latin-1') countries = csvdata['country'].tolist() newcountries = list(set(countries)) for element in newcountries: bills = csvdata['target'][csvdata['country'] == element] frquency = Counter(bills) sourceTemp = [] value = [] country = element for k,v in frquency.items(): sourceTemp.append(k) value.append(int(v)) forceData = {'source': Series(country), 'target': Series(sourceTemp), 'value': Series(value)} dfForce = DataFrame(forceData) jsondata = dfForce.to_json(orient='records', force_ascii=False, default_handler=callable) parsed = json.loads(jsondata) newData = json.dumps(parsed, indent=4, ensure_ascii=False, sort_keys=True) # since to_json doesn´t have append mode this will be written in txt file savetxt = open('data.txt', 'a') savetxt.write(newData) savetxt.close() </code></pre> <p>Any suggestion to solve this problem are appreciate!</p> <p>Thanks</p>
<p>Consider removing the <code>Series()</code> around the scalar value, country. By doing so and then upsizing the dictionaries of series into a dataframe, you force <code>NaN</code> (later converted to <code>null</code> in json) into the series to match the lengths of other series. You can see this by printing out the dfForce dataframe:</p> <pre><code>from pandas import Series from pandas import DataFrame country = 'Germany' sourceTemp = ['Mexico', 'USA', 'Argentina'] value = [1, 2, 3] forceData = {'source': Series(country), 'target': Series(sourceTemp), 'value': Series(value)} dfForce = DataFrame(forceData) # source target value # 0 Germany Mexico 1 # 1 NaN USA 2 # 2 NaN Argentina 3 </code></pre> <p>To resolve, simply keep country as scalar in dictionary of series:</p> <pre><code>forceData = {'source': country, 'target': Series(sourceTemp), 'value': Series(value)} dfForce = DataFrame(forceData) # source target value # 0 Germany Mexico 1 # 1 Germany USA 2 # 2 Germany Argentina 3 </code></pre> <hr> <p>By the way, you do not need a dataframe object to output to json. Simply use a list of dictionaries. Consider the following using an <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict" rel="nofollow">Ordered Dictionary collection</a> (to maintain the order of keys). In this way the growing list dumps into a text file without appending which would render an invalid json as opposite facing adjacent square brackets <code>...][...</code> are not allowed.</p> <pre><code>from collections import OrderedDict ... data = [] for element in newcountries: bills = csvdata['target'][csvdata['country'] == element] frquency = Counter(bills) for k,v in frquency.items(): inner = OrderedDict() inner['source'] = element inner['target'] = k inner['value'] = int(v) data.append(inner) newData = json.dumps(data, indent=4) with open('data.json', 'w') as savetxt: savetxt.write(newData) </code></pre>
python|json|python-3.x|pandas
1
4,645
38,381,031
Change a year in pandas dataframe if it is lower than 1900
<p>I have to process data where someone has been using a date value with a year of 1700 where there is not an actual event date. 1700 breaks datetime, which starts at 1900, but I'm sure you all know that.</p> <p>I have converted the data to datetime and then tried an if statement:</p> <pre><code>df["DATE"] = pd.to_datetime(df["DATE"]) if df['DATE'].dt.year.any() &lt; 1900 #assigning today's date df['DATE'] = dt.datetime.today().strftime("%m/%d/%y") else: #the original date value, formatted df["DATE"] = df["DATE"].map(lambda x: x.strftime("%m/%d/%y")) </code></pre> <p>The <code>if</code> statement does not catch the <code>1700</code> and I get the error:<br> <code>"ValueError: year=1700 is before 1900"</code></p> <p>pandas version: 0.18.0 numpy version: 1.11.1</p>
<p>I'm having trouble reproducing this issue exactly, but have you tried assigning only to the <code>DATE</code> column with <code>.loc</code>? Indexing with the mask alone, as in <code>df[mask] = ...</code>, would overwrite every column of the matching rows.</p> <pre><code>df.loc[df.DATE.dt.year &lt; 1900, 'DATE'] = dt.datetime.today()
df.DATE = df.DATE.map(lambda x: x.strftime("%m/%d/%y"))
</code></pre>
python|datetime|pandas
2
4,646
38,221,981
Unpacking list of lists generated by a zip into one list
<p>I am again manipulating dataframes. Here I concatenate multiple dataframe using row as common reference. Then I want to reorder the columns by "pairing" the first one columns of each df together, and so on. All for the sake of data readability</p> <p>Here is my code:</p> <pre><code>df_list=[df_1,df_2,df_3] return_df=pd.concat(df_list,axis=1, join='outer') dfcolumns_list=[df_1.columns,df_2.columns,df_3.columns] print (return_df.columns) print(dfcolumns_list) list_columns=np.array(list(zip(*dfcolumns_list))).reshape(1,-1)[0] print (list_columns) list_columns=np.array([x for x in zip(*dfcolumns_list)]).reshape(1,-1)[0] print (list_columns) return_df=return_df[list_columns] </code></pre> <p>My question is related to: </p> <pre><code>list_columns=np.array(list(zip(*dfcolumns_list))).reshape(1,-1)[0] </code></pre> <p>or alternatively</p> <pre><code>list_columns=np.array([x for x in zip(*dfcolumns_list)]).reshape(1,-1)[0] </code></pre> <p>It takes the list of indexes, unpacks it in the zip, takes the first element of each column index, outputs it as a tuple/sublist contained in a list, transforms it into an array ,then reshapes it to get rid of the sublists which would cause the </p> <pre><code> return_df=return_df[list_columns] </code></pre> <p>to break. At last, the call to index 0 <code>[0]</code> allows it to retrieve the final list into the np.array (which I need to reshape).</p> <p>My question is: is there nothing less ugly than that? I like <code>zip</code> and similar functions, but I hate to have no simple mean/trick to unpack the generated tuples/sublist for reordering purposes.</p> <p>(It also came to my mind while redacting that I could maybe do the df differently, so I would also give points to that, but my main question is still how to do what I am doing more elegantly/with more Pythonic syntax. The <code>[0]</code> in the end is the dirtiest of all...</p>
<p>You may just zip all column lists and then flatten the list of lists</p> <pre><code>list_columns = [ col for cols in zip( *dfcolumns_list ) for col in cols ] </code></pre>
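<p>An equivalent spelling with <code>itertools</code>, if the nested comprehension feels heavy:</p> <pre><code>from itertools import chain

list_columns = list(chain.from_iterable(zip(*dfcolumns_list)))
</code></pre>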
python|pandas|dataframe|zip|multiple-columns
1
4,647
65,952,132
Calculate mean() of NumPy 2D-array grouped by values in a separate list with strings corresponding to each row in the 2D array
<p>I'm attending a course on Data Analysis with Python (Numpy, Pandas etc).</p> <p>We have an assignment where we are supposed to calculate mean() of an array - based on values of another list. This might seem a bit unclear so here's an example:</p> <pre><code>list = ['A','A','A','A','B','B','B','B'] array = [ [5.1, 3.5, 1.4, 0.2], [4.9, 3. , 1.4, 0.2], [4.7, 3.2, 1.3, 0.2], [4.6, 3.1, 1.5, 0.2], [5. , 3.6, 1.4, 0.2], [5.4, 3.9, 1.7, 0.4], [4.6, 3.4, 1.4, 0.3], [5. , 3.4, 1.5, 0.2] ] </code></pre> <p>The list-values corresponds to categories for the rows in the array and we are asked to calculate the mean of each column grouped by A and B. I suppose this could be done by converting the data into a Pandas dataframe - but the assignment pertains to Numpy so i suppose we are somehow supposed to solve it without Pandas.</p> <p>I have struggled and googled to no avail. Any help is much appreciated.</p> <p>Thanks</p> <p>B.R. Anders</p>
<p>The quickest way I can think of is to split the rows and compute the mean. However, this approach is a quick cheat and falls short if you want to generalize your solution to different forms for <code>list</code>:</p> <pre><code>&gt;&gt;&gt; [x.mean() for x in np.split(np.array(array), 2)] [2.40625, 2.58750] </code></pre> <hr /> <p>A more appropriate solution is to prepare a dictionary of categories. Then sequentially append the rows to the correct entry in the map. I have renamed <code>list</code> to <code>keys</code>.</p> <pre><code>&gt;&gt;&gt; res = {k: [] for k in set(keys)} {'A': [], 'B': []} &gt;&gt;&gt; for k, row in zip(keys, array): ... res[k] += row &gt;&gt;&gt; res {'A': [5.1, 3.5, 1.4, 0.2, 4.9, 3.0, 1.4, 0.2, 4.7, 3.2, 1.3, 0.2, 4.6, 3.1, 1.5, 0.2], 'B': [5.0, 3.6, 1.4, 0.2, 5.4, 3.9, 1.7, 0.4, 4.6, 3.4, 1.4, 0.3, 5.0, 3.4, 1.5, 0.2]} </code></pre> <p>Then compute the means:</p> <pre><code>&gt;&gt;&gt; [(k, sum(v)/len(v)) for k, v in res.items()] [('B', 2.5875), ('A', 2.40625)] </code></pre> <p>This will work for any number of categories, and any form of category sequence <code>keys</code>. So long as <code>len(keys)</code> is equal to the number of rows.</p> <hr /> <p>I am sure you can come up with a full NumPy alternative yourself.</p>
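<p>If a mean per <em>column</em> for each group is wanted rather than one scalar per group, a boolean-mask sketch over the same data (here <code>keys</code> stands in for the question's list of labels, renamed so it does not shadow the built-in <code>list</code>):</p> <pre><code>import numpy as np

keys = np.array(['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'])
arr = np.array(array)                      # the 8x4 data from the question

col_means = {k: arr[keys == k].mean(axis=0) for k in np.unique(keys)}
# {'A': array([4.825, 3.2  , 1.4  , 0.2  ]),
#  'B': array([5.   , 3.575, 1.5  , 0.275])}
</code></pre>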
python|list|numpy
2
4,648
46,258,301
How to raise an exception if we assign a value in a numpy array outside of a given range?
<p>I'm a python beginner and I'm implementing a version of k-means. </p> <p>I'm defining the k-means class and one of the class attributes is <code>__class</code>, where <code>__class[i] = j</code> means that the <code>i</code>-th data point is assigned to the <code>j</code>-th cluster. This means that if we have <code>n</code> datapoints and <code>k</code> clusters, then <code>0 &lt;= __class[i] &lt; k</code> for each <code>i in range(n)</code>.</p> <p>Now, what I want to do (to be error safe) is to raise an exception if we do something like <code>__class[i] = impossibleK</code> where <code>impossibleK &lt; 0 V impossibleK &gt;= k</code> and <code>i in range(n)</code>. In few words, I want that exception is thrown whenever we assign an impossible cluster to an element of <code>__class</code>.</p> <p>How can I automatize this check in Python?</p> <p>This the class and the constructor:</p> <pre><code>import numpy as np class CLUMPY: def __init__(self, k, file): # input file self.__file = file print("k=",k) print("Reading {}...".format(file)) # data points self.__points = np.loadtxt(file) # number of data points self.__n = self.__points.shape[0] # data points dimensionality (or length according to numpy terminology) self.__d = self.__points.shape[1] print("Read {}: {} points in {} dimensions.".format(file, self.__n, self.__d)) # __class[i] = j : the i-th data point is assigned to the j-th cluster self.__class = np.zeros(self.__n, dtype=np.int8) if __name__ == "__main__": clumpy = CLUMPY(2, "datasets/202d") </code></pre>
<p>You can use error handling in python...</p> <pre><code>try: __class[i] = j # impossibleK except IndexError: print("Index error occurred") </code></pre>
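<p>Note that <code>IndexError</code> only fires for an invalid index <code>i</code>; NumPy will happily store an out-of-range cluster value <code>j</code>. If the goal is to reject impossible cluster ids at assignment time, explicit validation is the simplest route. A minimal sketch, assuming the constructor also stores <code>k</code> as <code>self.__k</code>:</p> <pre><code>def set_class(self, i, j):
    """Assign the i-th point to cluster j, rejecting impossible clusters."""
    if not 0 &lt;= j &lt; self.__k:
        raise ValueError("cluster {} is outside the valid range [0, {})".format(j, self.__k))
    self.__class[i] = j
</code></pre>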
python|arrays|numpy
0
4,649
58,197,320
How do I sum elements of a pandas dataframe?
<p>I'm new to python and this is already the second question I ask here. I have the following pandas dataframe obtained from an API: </p> <pre><code> data metadata 1388534400000 {'electricity': 0.0} NaN 1388538000000 {'electricity': 0.0} NaN 1388541600000 {'electricity': 0.0} NaN 1388545200000 {'electricity': 0.0} NaN 1388548800000 {'electricity': 0.0} NaN </code></pre> <p>These are only the first elements of the dataframe (columns 'data' and 'metadata'), the last lines are these:</p> <pre><code>1420066800000 {'electricity': 0.0} NaN params NaN {'lat': '51.564', 'lon': ... units NaN {'time': 'UTC', 'electricity': 'kW'} </code></pre> <p>I had python print the type of structure and it returned this: </p> <p>I want to get the sum of all the float values that come after 'electricity' but don't know how. I have searched on google and couldn't find anything that works. Does anyone know how to solve this? Thank you! </p> <p>This is how I'm requesting the data:</p> <pre class="lang-py prettyprint-override"><code>def query_pv(lat, lon, date_from, date_to, tilt, azim=180, tracking=0, system_loss=10, capacity=1, dataset='merra2', interpolate=False, local_time=False, raw=False): s = requests.session() # get token token = _load_token() # send token through header s.headers = {'Authorization': 'Token ' + token} url = API_BASE + 'data/pv' # pre-process inputs date_from = _date_to_string(date_from) date_to = _date_to_string(date_to) args = { 'lat': lat, 'lon': lon, 'date_from': date_from, 'date_to': date_to, 'dataset': dataset, 'capacity': capacity, 'system_loss': system_loss, 'tracking': tracking, 'tilt': tilt, 'azim': azim, 'format': 'json', # 'metadata': metadata, 'raw': raw } r = s.get(url, params=args) if not r.ok: raise Exception('Query failed. Check input parameters.') return pd.read_json(r.text, orient='index') </code></pre>
<p>You can access your data through an apply and then sum over the column:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'A': [{'electricity':0.0},{'electricity':1.0},{'electricity':5},{'electricity':4}],'B':['a','b','a','c']})
sumElectricity = df['A'].dropna().apply(lambda x: x['electricity']).sum()
</code></pre> <p>This approach however implies that every row is either null or has a dict with an <code>electricity</code> key. </p>
python-3.x|pandas
0
4,650
58,565,642
I want to convert .csv file to a Numpy array
<p>I would like to convert a <code>mydata.csv</code> file to a Numpy array.</p> <p>I have a matrix representation <code>mydata.csv</code> file (The matrix is 14*79 with signed values without any header name.)</p> <pre><code>-0.094391 -0.086641 0.31659 0.66066 -0.33076 0.02751 … -0.26169 -0.022418 0.47564 0.39925 -0.22232 0.16129 … -0.33073 0.026102 0.62409 -0.098799 -0.086641 0.31832 … -0.22134 0.15488 0.69289 -0.26515 -0.021011 0.47096 … </code></pre> <p>I thought this code would work for this case.</p> <pre><code>import numpy as np data = np.genfromtxt('mydata.csv', dtype=float, delimiter=',', names=False) </code></pre> <p>but it did not work.</p> <p>and I would like to have final Numpy data shape as <code>data.shape = (14, 79)</code></p> <p>My error message looks like this though..</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-26-060012d7c568&gt; in &lt;module&gt; 1 import numpy as np 2 ----&gt; 3 data = np.genfromtxt('output.csv', dtype=float, delimiter=',', names=False) ~\Anaconda3\envs\tensorflow\lib\site-packages\numpy\lib\npyio.py in genfromtxt(fname, dtype, comments, delimiter, skip_header, skip_footer, converters, missing_values, filling_values, usecols, names, excludelist, deletechars, replace_space, autostrip, case_sensitive, defaultfmt, unpack, usemask, loose, invalid_raise, max_rows, encoding) 1810 deletechars=deletechars, 1811 case_sensitive=case_sensitive, -&gt; 1812 replace_space=replace_space) 1813 # Make sure the names is a list (for 2.5) 1814 if names is not None: ~\Anaconda3\envs\tensorflow\lib\site-packages\numpy\lib\_iotools.py in easy_dtype(ndtype, names, defaultfmt, **validationargs) 934 # Simple dtype: repeat to match the nb of names 935 if nbtypes == 0: --&gt; 936 formats = tuple([ndtype.type] * len(names)) 937 names = validate(names, defaultfmt=defaultfmt) 938 ndtype = np.dtype(list(zip(names, formats))) TypeError: object of type 'bool' has no len() </code></pre>
<p>For this, you first create a list of <code>CSV</code> files (<strong>file_names</strong>) that you want to append. Then you can export this into a single <code>CSV</code> file by reshaping Numpy-Array. This will help you to move forward:</p> <pre><code>import pandas as pd import numpy as np combined_csv_files = pd.concat( [ pd.read_csv(f) for f in file_names ]) </code></pre> <p>Now, if you want to <strong>Export</strong> these files into <strong>Single .csv-File</strong>, use like:</p> <pre><code>combined_csv_files.to_csv( "combined_csv.csv", index=False) </code></pre> <p>Now, in order to obtain Numpy Array, you can move forward like this:</p> <pre><code>data_set = pd.read_csv('combined_csv.csv', header=None) data_frames = pd.DataFrame(data_set) required_array = np.array(data_frames.values) print(required_array) </code></pre> <p>Here you can also reshape Numpy Array by using:</p> <pre><code>required_array.shape = (100, 14, 79) </code></pre> <p>I have perform simple test on <strong>cmd</strong> to confirm this: </p> <pre><code>&gt;&gt;&gt; y = np.zeros((2, 3, 4)) &gt;&gt;&gt; y.shape (2, 3, 4) &gt;&gt;&gt; y.shape = (3, 8) &gt;&gt;&gt; y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) </code></pre>
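<p>As a side note on the original traceback: the <code>TypeError: object of type 'bool' has no len()</code> comes from passing <code>names=False</code>, since <code>genfromtxt</code> expects <code>None</code>, <code>True</code> or a sequence of column names there. For the single 14*79 file in the question, a minimal sketch (assuming the values really are comma-separated):</p> <pre><code>import numpy as np

data = np.genfromtxt('mydata.csv', delimiter=',')   # omit names=, or pass names=None
print(data.shape)                                    # expected: (14, 79)
</code></pre>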
python|numpy|csv
3
4,651
60,887,648
Colorize the background of a seaborn plot using a column in dataframe
<h1>Question</h1> <p>How to shade or colorize the <em>background</em> of a <a href="https://seaborn.pydata.org/" rel="nofollow noreferrer">seaborn</a> plot using a column of a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">dataframe</a>?</p> <h1>Code snippet</h1> <pre><code>import numpy as np import seaborn as sns; sns.set() import matplotlib.pyplot as plt fmri = sns.load_dataset(&quot;fmri&quot;) fmri.sort_values('timepoint',inplace=True) ax = sns.lineplot(x=&quot;timepoint&quot;, y=&quot;signal&quot;, data=fmri) arr = np.ones(len(fmri)) arr[:300] = 0 arr[600:] = 2 fmri['background'] = arr ax = sns.lineplot(x=&quot;timepoint&quot;, y=&quot;signal&quot;, hue=&quot;event&quot;, data=fmri) </code></pre> <p>Which produced this graph:<br /> <a href="https://i.stack.imgur.com/z6a1r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z6a1r.png" alt="Actual output" /></a></p> <h1>Desired output</h1> <p>What I'd like to have, according to the value in the new column <code>'background'</code> and any <a href="https://seaborn.pydata.org/tutorial/color_palettes.html" rel="nofollow noreferrer">palette</a> or user defined <a href="https://en.wikipedia.org/wiki/Web_colors" rel="nofollow noreferrer">colors</a>, something like this:</p> <p><a href="https://i.stack.imgur.com/B9vOt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B9vOt.png" alt="Desired output" /></a></p>
<p><code>ax.axvspan()</code> could work for you, assuming backgrounds don't overlap over timepoints.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import seaborn as sns; sns.set() import matplotlib.pyplot as plt fmri = sns.load_dataset("fmri") fmri.sort_values('timepoint',inplace=True) arr = np.ones(len(fmri)) arr[:300] = 0 arr[600:] = 2 fmri['background'] = arr fmri['background'] = fmri['background'].astype(int).astype(str).map(lambda x: 'C'+x) ax = sns.lineplot(x="timepoint", y="signal", hue="event", data=fmri) ranges = fmri.groupby('background')['timepoint'].agg(['min', 'max']) for i, row in ranges.iterrows(): ax.axvspan(xmin=row['min'], xmax=row['max'], facecolor=i, alpha=0.3) </code></pre> <p><a href="https://i.stack.imgur.com/BO8d5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BO8d5.png" alt="enter image description here"></a></p>
python|pandas|dataframe|graph|seaborn
9
4,652
61,062,303
Deploy python app to Heroku "Slug Size too large"
<p>I'm trying to deploy a Streamlit app written in python to Heroku. My whole directory is 4.73 MB, where 4.68 MB is my ML model. My <code>requirements.txt</code> looks like this:</p> <pre><code>absl-py==0.9.0 altair==4.0.1 astor==0.8.1 attrs==19.3.0 backcall==0.1.0 base58==2.0.0 bleach==3.1.3 blinker==1.4 boto3==1.12.29 botocore==1.15.29 cachetools==4.0.0 certifi==2019.11.28 chardet==3.0.4 click==7.1.1 colorama==0.4.3 cycler==0.10.0 decorator==4.4.2 defusedxml==0.6.0 docutils==0.15.2 entrypoints==0.3 enum-compat==0.0.3 future==0.18.2 gast==0.2.2 google-auth==1.11.3 google-auth-oauthlib==0.4.1 google-pasta==0.2.0 grpcio==1.27.2 h5py==2.10.0 idna==2.9 importlib-metadata==1.5.2 ipykernel==5.2.0 ipython==7.13.0 ipython-genutils==0.2.0 ipywidgets==7.5.1 jedi==0.16.0 Jinja2==2.11.1 jmespath==0.9.5 joblib==0.14.1 jsonschema==3.2.0 jupyter-client==6.1.1 jupyter-core==4.6.3 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 Markdown==3.2.1 MarkupSafe==1.1.1 matplotlib==3.2.1 mistune==0.8.4 nbconvert==5.6.1 nbformat==5.0.4 notebook==6.0.3 numpy==1.18.2 oauthlib==3.1.0 opencv-python==4.2.0.32 opt-einsum==3.2.0 pandas==1.0.3 pandocfilters==1.4.2 parso==0.6.2 pathtools==0.1.2 pickleshare==0.7.5 Pillow==7.0.0 prometheus-client==0.7.1 prompt-toolkit==3.0.4 protobuf==3.11.3 pyasn1==0.4.8 pyasn1-modules==0.2.8 pydeck==0.3.0b2 Pygments==2.6.1 pyparsing==2.4.6 pyrsistent==0.16.0 python-dateutil==2.8.0 pytz==2019.3 pywinpty==0.5.7 pyzmq==19.0.0 requests==2.23.0 requests-oauthlib==1.3.0 rsa==4.0 s3transfer==0.3.3 scikit-learn==0.22.2.post1 scipy==1.4.1 Send2Trash==1.5.0 six==1.14.0 sklearn==0.0 streamlit==0.56.0 tensorboard==2.1.1 tensorflow==2.1.0 tensorflow-estimator==2.1.0 termcolor==1.1.0 terminado==0.8.3 testpath==0.4.4 toml==0.10.0 toolz==0.10.0 tornado==5.1.1 traitlets==4.3.3 tzlocal==2.0.0 urllib3==1.25.8 validators==0.14.2 watchdog==0.10.2 wcwidth==0.1.9 webencodings==0.5.1 Werkzeug==1.0.0 widgetsnbextension==3.5.1 wincertstore==0.2 wrapt==1.12.1 zipp==3.1.0 </code></pre> <p>When I push my app to Heroku, the message is:</p> <pre><code>remote: -----&gt; Discovering process types remote: Procfile declares types -&gt; web remote: remote: -----&gt; Compressing... remote: ! Compiled slug size: 623.5M is too large (max is 500M). remote: ! See: http://devcenter.heroku.com/articles/slug-size remote: remote: ! Push failed </code></pre> <p>How can my slug size be too large? Is it the size of the requirements? Then how is it possible to deploy a python app using tensorflow to Heroku after all? Thanks for the help!</p>
<p><em>I have already answered this <a href="https://stackoverflow.com/a/62356779/11105967">here</a>.</em></p> <p>Turns out the Tensorflow 2.0 module is very large (more than 500MB, the limit for Heroku) because of its GPU support. Since Heroku doesn't support GPU, it doesn't make sense to install the module with GPU support.</p> <h2>Solution:</h2> <p>Simply replace <strong>tensorflow</strong> with <strong>tensorflow-cpu</strong> in your requirements.</p> <p>This worked for me, hope it works for you too!</p>
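<p>For illustration, the change in <code>requirements.txt</code> would look something like this (the version pin below just mirrors the one from the question; keep whichever pin you actually need):</p> <pre><code># before
tensorflow==2.1.0

# after
tensorflow-cpu==2.1.0
</code></pre>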
python|tensorflow|heroku|tensorflow2.0
54
4,653
60,883,431
Limit the x axis in matplotlib python
<p>I have code that produces a live graph, updating every few seconds. It all functions EXACTLY as I want other than a single issue, the x axis keeps adding new values but never removing old ones</p> <p>in the example code below, because I limit the dataframe to 6 columns, I expect to never see more than 6 measurements represented on my x-axis. Instead, the graph continues to update and eventually the poiunts are too close together.</p> <pre><code>from matplotlib import pyplot from matplotlib.animation import FuncAnimation import pandas as pd from datetime import datetime import threading import random import time measurements = ['abc','bcd','afr','reg','wow'] counter = 0 figure = pyplot.figure() measurement_frame = pd.DataFrame(index = measurements) def get_live(counter2, col_num): measurement_frame.iat[counter2,col_num] = random.randint(50,80) def add_to_dataframe(): global measurement_frame #timey = datetime.now().strftime('%H:%M:%S') timey = datetime.now().time() if measurement_frame.shape[1] == 6: measurement_frame.drop(measurement_frame.columns[0], axis = 1, inplace = True) measurement_frame[timey] = measurements col_num = measurement_frame.shape[1]-1 print(col_num) counter2 = 0 for item in measurements: t = threading.Thread(target=get_live, args=(counter2, col_num,)) t.start() counter2 = counter2 +1 t.join() print(measurement_frame.columns[0]) time.sleep(1) def update(frame): add_to_dataframe() x_data = measurement_frame.columns print(x_data[0]) y1_data = measurement_frame.loc[measurement_frame.index[0]] y2_data = measurement_frame.loc[measurement_frame.index[1]] y3_data = measurement_frame.loc[measurement_frame.index[2]] y4_data = measurement_frame.loc[measurement_frame.index[3]] y5_data = measurement_frame.loc[measurement_frame.index[4]] line, = pyplot.plot_date(x_data, y1_data, '-', color = 'b') line2, = pyplot.plot_date(x_data, y2_data, '-', color = 'g') line3, = pyplot.plot_date(x_data, y3_data, '-', color = 'r') line4, = pyplot.plot_date(x_data, y4_data, '-', color = 'm') line5, = pyplot.plot_date(x_data, y5_data, '-', color = 'y') line.set_data(x_data, y1_data) line2.set_data(x_data, y2_data) line3.set_data(x_data, y3_data) line4.set_data(x_data, y4_data) line5.set_data(x_data, y5_data) figure.gca().set_xlim(x_data[0]) figure.gca().autoscale() print(figure.gca().get_xlim()) return line, line2, line3, line4, line5, animation = FuncAnimation(figure, update, interval=1000) pyplot.show() </code></pre> <p>What I need is that after the dataframe reaches maximum size, the far left measurements are removed, so as to not exceed a set number of measurements on the screen at once. Note that the dataframe already drops unneeded columns before adding a new one when it reaches a certain size, but my graph does not reflect that</p>
<p>using autoscale tries to keep old data in view. If you drop autoscale and use</p> <pre><code>figure.gca().set_xlim(left =x_data[0], right = datetime.now().time()) </code></pre> <p>it works as intended</p> <p>the full code is now </p> <pre><code>from matplotlib import pyplot from matplotlib.animation import FuncAnimation import pandas as pd from datetime import datetime import threading import random import time measurements = ['abc','bcd','afr','reg','wow'] counter = 0 figure = pyplot.figure() measurement_frame = pd.DataFrame(index = measurements) def get_live(counter2, col_num): measurement_frame.iat[counter2,col_num] = random.randint(50,80) def add_to_dataframe(): global measurement_frame #timey = datetime.now().strftime('%H:%M:%S') timey = datetime.now().time() if measurement_frame.shape[1] == 6: measurement_frame.drop(measurement_frame.columns[0], axis = 1, inplace = True) measurement_frame[timey] = measurements col_num = measurement_frame.shape[1]-1 print(col_num) counter2 = 0 for item in measurements: t = threading.Thread(target=get_live, args=(counter2, col_num,)) t.start() counter2 = counter2 +1 t.join() print(measurement_frame.columns[0]) time.sleep(1) def update(frame): add_to_dataframe() x_data = measurement_frame.columns print(x_data[0]) y1_data = measurement_frame.loc[measurement_frame.index[0]] y2_data = measurement_frame.loc[measurement_frame.index[1]] y3_data = measurement_frame.loc[measurement_frame.index[2]] y4_data = measurement_frame.loc[measurement_frame.index[3]] y5_data = measurement_frame.loc[measurement_frame.index[4]] line, = pyplot.plot_date(x_data, y1_data, '-', color = 'b') line2, = pyplot.plot_date(x_data, y2_data, '-', color = 'g') line3, = pyplot.plot_date(x_data, y3_data, '-', color = 'r') line4, = pyplot.plot_date(x_data, y4_data, '-', color = 'm') line5, = pyplot.plot_date(x_data, y5_data, '-', color = 'y') line.set_data(x_data, y1_data) line2.set_data(x_data, y2_data) line3.set_data(x_data, y3_data) line4.set_data(x_data, y4_data) line5.set_data(x_data, y5_data) figure.gca().set_xlim(left =x_data[0], right = datetime.now().time()) print(figure.gca().get_xlim()) return line, line2, line3, line4, line5, animation = FuncAnimation(figure, update, interval=1000) pyplot.show() </code></pre>
python|pandas|user-interface|matplotlib|graph
2
4,654
60,847,550
model.evaluate() varies wildly with number of steps when using generators
<p>Running tensorflow 2.x in Colab with its internal keras version (tf.keras). My model is a 3D convolutional UNET for multiclass segmentation (not sure if it's relevant). I've successfully trained (high enough accuracy on validation) this model the traditional way but I'd like to do augmentation to improve it, therefore I'm switching to (hand-written) generators. When I use generators I see my loss increasing and my accuracy decreasing <em>a lot</em> (e.g.: loss increasing 4-fold, not some %) in the fit.</p> <p>To try to localize the issue I've tried loading my trained weights and computing the metrics on the data returned by the generators. And what's happening makes no sense. I can see that the results visually are ok.</p> <pre><code>model.evaluate(validationGenerator,steps=1) 2s 2s/step - loss: 0.4037 - categorical_accuracy: 0.8716 model.evaluate(validationGenerator,steps=2) 2s/step - loss: 1.7825 - categorical_accuracy: 0.7158 model.evaluate(validationGenerator,steps=4) 7s 2s/step - loss: 1.7478 - categorical_accuracy: 0.7038 </code></pre> <p>Why would the loss vary with the number of steps? I could guess some % due to statistical variations... not 4 fold increase!</p> <p>If I try</p> <pre><code>x,y = next(validationGenerator) nSamples = x.shape[0] meanLoss = np.zeros(nSamples) meanAcc = np.zeros(nSamples) for pIdx in range(nSamples): y_pred = model.predict(np.expand_dims(x[pIdx,:,:,:,:],axis=0)) meanAcc[pIdx]=np.mean(tf.keras.metrics.categorical_accuracy(np.expand_dims(y[pIdx,:,:,:,:],axis=0),y_pred)) meanLoss[pIdx]=np.mean(tf.keras.metrics.categorical_crossentropy(np.expand_dims(y[pIdx,:,:,:,:],axis=0),y_pred)) print(np.mean(meanAcc)) print(np.mean(meanLoss)) </code></pre> <p>I get accuracy~85% and loss ~0.44. Which is what I expect from the previous fit, and it varies by vary little from one batch to the other. And these are the same exact numbers that I get if I do model.evaluate() with 1 step (using the same generator function).<br> However I need about 30 steps to run trough my whole training dataset. What should I do? If I fit my already good model to this generator it indeed worsen the performances a lot (it goes from a nice segmentation of the image to uniform predictions of 25% for each of the 4 classes!!!!) </p> <p>Any idea on where to debud the issue? I've also visually looked at the images produced by the generator and at the model predictions and everything looks correct (as testified by the numbers I found when evaluating using a single step). I've tried writing a minimal working example with a 2 layers model but... in it the issue does not happen.</p> <p><strong>UPDATE: Generators code</strong> So, as I've been asked, these are the generators code. 
They're handwritten</p> <pre><code>def dataGen (X,Y_train): patchS = 64 #set the size of the patch I extract batchS = 16 #number of samples per batch nSamples = X.shape[0] #get total number of samples immSize = X.shape[1:] #get the shape of the iamge to crop #Get 4 patches from each image #extract them randomly, and in random patient order patList = np.array(range(0,nSamples),dtype='int16') patList = patList.reshape(nSamples,1) patList = np.tile(patList,(4,2)) patList[:nSamples,0]=0 #Use this index to tell the code where to get the patch from patList[nSamples:2*nSamples,0]=1 patList[2*nSamples:3*nSamples,0]=2 patList[3*nSamples:4*nSamples,0]=3 np.random.shuffle(patList) patStart=0 Xout = np.zeros((batchS,patchS,patchS,patchS,immSize[3])) #allocate output vector while True: Yout = np.zeros((batchS,patchS,patchS,patchS)) #allocate vector of labels for patIdx in range(batchS): XSR = 32* (patList[patStart+patIdx,0]//2) #get the index of where to extract the patch YSR = 32* (patList[patStart+patIdx,0]%2) xStart = random.randrange(XSR,XSR+32) #get a patch randomly somewhere between a range yStart = random.randrange(YSR,YSR+32) zStart = random.randrange(0,26) patInd = patList[patStart+patIdx,1] Xout[patIdx,:,:,:,:] = X[patInd,xStart:(xStart+patchS),yStart:(yStart+patchS),zStart:(zStart+patchS),:] Yout[patIdx,:,:,:] = Y_train[patInd,xStart:(xStart+patchS),yStart:(yStart+patchS),zStart:(zStart+patchS)] if((patStart+patIdx)&gt;(patList.shape[0]-2)): np.random.shuffle(patList) #after going through the whole list restart patStart=0 patStart = patStart+batchS Yout = tf.keras.utils.to_categorical (Yout, num_classes=4, dtype='float32') #convert to one hot encoding yield Xout, Yout </code></pre>
<p>Posting the <strong>workaround</strong> I've found for the future person coming here from Google.</p> <p><em>Apparently</em> the issue lies in how Keras calls a handwritten generator: when it is called multiple times in a row via evaluate(gen, steps=N), it returns wrong outputs. There's no documentation around about how to address this or how such a generator should be written.</p> <p>I ended up rewriting my code using a <code>tf.keras.utils.Sequence</code> class, and the otherwise unchanged code now works perfectly. No way to know why.</p>
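<p>For reference, here is a minimal sketch of what such a <code>Sequence</code> can look like; the names, shapes and batch size are placeholders rather than the exact code from the question:</p> <pre><code>import numpy as np
import tensorflow as tf

class PatchSequence(tf.keras.utils.Sequence):
    # Each __getitem__ call must return one complete batch.
    def __init__(self, x, y, batch_size=16):
        self.x, self.y = x, y
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

# model.fit(PatchSequence(x_train, y_train), epochs=...)
# model.evaluate(PatchSequence(x_val, y_val))
</code></pre>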
tensorflow|machine-learning|keras
2
4,655
71,517,152
tensorflow: load checkpoint
<p>I've been training a model which looks a bit like:</p> <pre><code>base_model = tf.keras.applications.ResNet50(weights=weights, include_top=False, input_tensor=input_tensor) for layer in base_model.layers: layer.trainable = False x = tf.keras.layers.GlobalMaxPool2D()(base_model.output) output = tf.keras.Sequential() output.add(tf.keras.layers.Dense(2, activation='linear')) output.add(tf.keras.layers.Dense(2, activation='linear')) output.add(tf.keras.layers.Dense(2, activation='linear')) output.add(tf.keras.layers.Dense(2, activation='linear')) output.add(tf.keras.layers.Dense(2, activation='linear')) return output(x) </code></pre> <p>I setup checkpoints saving with code like:</p> <pre><code>cp_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_path, verbose=1, save_weights_only=True, save_freq=batch_size*5) </code></pre> <p>Yesterday I started a fit to run for 11 epochs. I'm not sure why, but the machine restarted during the 7th epoch. Naturally I want to resume fitting from the start of epoch 7.</p> <p>The checkpoint code above created three files:</p> <p><a href="https://i.stack.imgur.com/dEKpp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dEKpp.png" alt="enter image description here" /></a></p> <p>The contents of checkpoint are:</p> <pre><code>model_checkpoint_path: &quot;checkpoint&quot; all_model_checkpoint_paths: &quot;checkpoint&quot; </code></pre> <p>The other two files are binary. I tried to load the checkpoint weights with both:</p> <pre><code>model.load_weights('./2022-03-16_21-10/checkpoints/checkpoint.data-00000-of-00001') model.load_weights('./2022-03-16_21-10/checkpoints/') </code></pre> <p>Both fail with <code>NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files</code>.</p> <p>How can I restore this checkpoint and as a result resume fitting?</p> <p>I'm using tensorflow 2.4.</p>
<p>These might help: <a href="https://www.tensorflow.org/guide/checkpoint" rel="nofollow noreferrer">Training checkpoints</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint" rel="nofollow noreferrer">tf.train.Checkpoint</a>. According to the documentation, you should be able to load the model using something like this:</p> <pre><code>model = tf.keras.Model(...) checkpoint = tf.train.Checkpoint(model) # Restore the checkpointed values to the `model` object. checkpoint.restore(save_path) </code></pre> <p>I am not sure it will work if the checkpoint contains other variables. You might have to use <code>checkpoint.restore(path).expect_partial()</code>.</p> <p>You can also check the content that has been saved (according to the documentation) by <em>Manually inspecting checkpoints</em> :</p> <pre><code>reader = tf.train.load_checkpoint('./tf_ckpts/') shape_from_key = reader.get_variable_to_shape_map() dtype_from_key = reader.get_variable_to_dtype_map() sorted(shape_from_key.keys()) </code></pre>
tensorflow|tensorflow2.x
0
4,656
42,397,380
Improving data validation efficiency in Pandas
<p>I load data from a CSV into Pandas and do validation on some of the fields like this:</p> <pre><code>(1.5s) loans['net_mortgage_margin'] = loans['net_mortgage_margin'].map(lambda x: convert_to_decimal(x)) (1.5s) loans['current_interest_rate'] = loans['current_interest_rate'].map(lambda x: convert_to_decimal(x)) (1.5s) loans['net_maximum_interest_rate'] = loans['net_maximum_interest_rate'].map(lambda x: convert_to_decimal(x)) (48s) loans['credit_score'] = loans.apply(lambda row: get_minimum_score(row), axis=1) (&lt; 1s) loans['loan_age'] = ((loans['factor_date'] - loans['first_payment_date']) / np.timedelta64(+1, 'M')).round() + 1 (&lt; 1s) loans['months_to_roll'] = ((loans['next_rate_change_date'] - loans['factor_date']) / np.timedelta64(+1, 'M')).round() + 1 (34s) loans['first_payment_change_date'] = loans.apply(lambda x: validate_date(x, 'first_payment_change_date', loans.columns), axis=1) (37s) loans['first_rate_change_date'] = loans.apply(lambda x: validate_date(x, 'first_rate_change_date', loans.columns), axis=1) (39s) loans['first_payment_date'] = loans.apply(lambda x: validate_date(x, 'first_payment_date', loans.columns), axis=1) (39s) loans['maturity_date'] = loans.apply(lambda x: validate_date(x, 'maturity_date', loans.columns), axis=1) (37s) loans['next_rate_change_date'] = loans.apply(lambda x: validate_date(x, 'next_rate_change_date', loans.columns), axis=1) (36s) loans['first_PI_date'] = loans.apply(lambda x: validate_date(x, 'first_PI_date', loans.columns), axis=1) (36s) loans['servicer_name'] = loans.apply(lambda row: row['servicer_name'][:40].upper().strip(), axis=1) (38s) loans['state_name'] = loans.apply(lambda row: str(us.states.lookup(row['state_code'])), axis=1) (33s) loans['occupancy_status'] = loans.apply(lambda row: get_occupancy_type(row), axis=1) (37s) loans['original_interest_rate_range'] = loans.apply(lambda row: get_interest_rate_range(row, 'original'), axis=1) (36s) loans['current_interest_rate_range'] = loans.apply(lambda row: get_interest_rate_range(row, 'current'), axis=1) (33s) loans['valid_credit_score'] = loans.apply(lambda row: validate_credit_score(row), axis=1) (60s) loans['origination_year'] = loans['first_payment_date'].map(lambda x: x.year if x.month &gt; 2 else x.year - 1) (&lt; 1s) loans['number_of_units'] = loans['unit_count'].map(lambda x: '1' if x == 1 else '2-4') (32s) loans['property_type'] = loans.apply(lambda row: validate_property_type(row), axis=1) </code></pre> <p>Most of these are functions that find the <code>row</code> value, a few directly convert an element to something else, but all in all, these are ran for the entire dataframe line by line. When this code was written, the data frames were small enough that this was not an issue. The code is now, however, being adapted to take in significantly larger tables, such that this part of the code takes far too long.</p> <p>What is the best way to optimize this? My first thought was to go row by row, but apply all of these functions/transformations on the row once (i.e. for row in df, do func1, func2, ..., func21), but I'm not sure if that is the best way to deal with that. Is there a way to avoid lambda to get the same result, for example, since I assume it's lambda that takes a long time? Running Python 2.7 in case that matters.</p> <p>Edit: most of these calls run at about the same rate per row (a few are pretty fast). 
This is a dataframe with 277,659 rows, which is in the 80th percentile in terms of size.</p> <p>Edit2: example of a function: </p> <pre><code>def validate_date(row, date_type, cols): date_element = row[date_type] if date_type not in cols: return np.nan if pd.isnull(date_element) or len(str(date_element).strip()) &lt; 2: # can be blank, NaN, or "0" return np.nan if date_element.day == 1: return date_element else: next_month = date_element + relativedelta(months=1) return pd.to_datetime(dt.date(next_month.year, next_month.month, 1)) </code></pre> <p>This is similar to the longest call (origination_year) which extracts values from a date object (year, month, etc.). Others, like property_type for example, are just checking for irregular values (e.g. "N/A", "NULL", etc.) but still take a little while just to go through each one.</p>
<p><strong>tl;dr: Consider distributing the processing.</strong> An improvement would be reading the data in chunks and using multiple processes. Source: <a href="http://gouthamanbalaraman.com/blog/distributed-processing-pandas.html" rel="nofollow noreferrer">http://gouthamanbalaraman.com/blog/distributed-processing-pandas.html</a></p> <pre><code>import multiprocessing as mp
import pandas as pd

def process_frame(df):
    # put your per-chunk validation/transformation here;
    # returning the row count just illustrates collecting results
    return len(df)

if __name__ == "__main__":

    reader = pd.read_csv(csv_file, chunksize=CHUNKSIZE)  # csv_file: path to your data
    pool = mp.Pool(4)  # use 4 processes

    funclist = []
    for df in reader:
        # process each data frame
        f = pool.apply_async(process_frame, [df])
        funclist.append(f)

    result = 0
    for f in funclist:
        result += f.get(timeout=10)  # timeout in 10 seconds

    print("There are %d rows of data" % result)
</code></pre> <p>Another option might be to use <a href="http://www.gnu.org/software/parallel/parallel_tutorial.html" rel="nofollow noreferrer">GNU parallel</a>. Here is another good example of using <a href="http://randyzwitch.com/gnu-parallel-medium-data/" rel="nofollow noreferrer">GNU parallel</a>.</p>
python|python-2.7|pandas
0
4,657
69,680,491
Tensorflow can't append batches together after doing the first epoch
<p>I am running into problems with my code after I removed the loss function of the <code>compile</code> step (set it equal to <code>loss=None</code>) and added one with the intention of adding another, loss function through the <code>add_loss</code> method. I can call <code>fit</code> and it trains for one epoch but then I get this error:</p> <pre><code>ValueError: operands could not be broadcast together with shapes (128,) (117,) (128,) </code></pre> <p>My batch size is 128. It looks like <code>117</code> is somehow dependent on the number of examples that I am using. When I vary the number of examples, I get different numbers from <code>117</code>. They are all my number of examples mod my batch size. I am at a loss about how to fix this issue. I am using <code>tf.data.TFRecordDataset</code> as input.</p> <p>I have the following simplified model:</p> <pre><code>class MyModel(Model): def __init__(self): super(MyModel, self).__init__() encoder_input = layers.Input(shape=INPUT_SHAPE, name='encoder_input') x = encoder_input x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', strides=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv2D(32, (3, 3), activation='relu', padding='same', strides=2)(x) x = layers.BatchNormalization()(x) x = layers.Flatten()(x) encoded = layers.Dense(LATENT_DIM, name='encoded')(x) self.encoder = Model(encoder_input, outputs=[encoded]) self.decoder = tf.keras.Sequential([ layers.Input(shape=LATENT_DIM), layers.Dense(32 * 32 * 32), layers.Reshape((32, 32, 32)), layers.Conv2DTranspose(32, kernel_size=3, strides=2, activation='relu', padding='same'), layers.Conv2DTranspose(64, kernel_size=3, strides=2, activation='relu', padding='same'), layers.Conv2D(3, kernel_size=(3, 3), activation='sigmoid', padding='same')]) def call(self, x): encoded = self.encoder(x) decoded = self.decoder(encoded) # Loss function. Has to be here because I intend to add another, more layer-interdependent, loss function. r_loss = tf.math.reduce_sum(tf.math.square(x - decoded), axis=[1, 2, 3]) self.add_loss(r_loss) return decoded def read_tfrecord(example): example = tf.io.parse_single_example(example, CELEB_A_FORMAT) image = decode_image(example['image']) return image, image def load_dataset(filenames, func): dataset = tf.data.TFRecordDataset( filenames ) dataset = dataset.map(partial(func), num_parallel_calls=tf.data.AUTOTUNE) return dataset def train_autoencoder(): filenames_train = glob.glob(TRAIN_PATH) train_dataset_x_x = load_dataset(filenames_train[:4], func=read_tfrecord) autoencoder = Autoencoder() # The loss function used to be defined here and everything worked fine before. def r_loss(y_true, y_pred): return tf.math.reduce_sum(tf.math.square(y_true - y_pred), axis=[1, 2, 3]) optimizer = tf.keras.optimizers.Adam(1e-4) autoencoder.compile(optimizer=optimizer, loss=None) autoencoder.fit(train_dataset_x_x.batch(AUTOENCODER_BATCH_SIZE), epochs=AUTOENCODER_NUM_EPOCHS, shuffle=True) </code></pre>
<p>If you only want to get rid of the error and don't care about the last &quot;remainder&quot; batch of your dataset, you can use the keyword argument <code>drop_remainder=True</code> inside of <code>train_dataset_x_x.batch()</code>, that way all of your batches will be the same size.</p> <p>FYI, it's usually better practice to batch your dataset outside of the function call for <code>fit</code>:</p> <pre class="lang-py prettyprint-override"><code>data = data.batch(32) model.fit(data) </code></pre>
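<p>As a concrete illustration of the first suggestion (variable names taken from the question), the fit call would become:</p> <pre class="lang-py prettyprint-override"><code>autoencoder.fit(train_dataset_x_x.batch(AUTOENCODER_BATCH_SIZE, drop_remainder=True),
                epochs=AUTOENCODER_NUM_EPOCHS,
                shuffle=True)
</code></pre>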
python|tensorflow|keras|loss-function|tf.data.dataset
1
4,658
69,729,338
Python - creating column containing the name of a team's opponent based on info in dataframe
<p>I have a dataframe (called &quot;games&quot;) that contains a play-by-play list of a basketball games, where I record scoring streaks within single games (GameID identifies a specific game). The dataframe is sorted by matches (i.e., GameID).</p> <p>Example of dataset &quot;games&quot;:</p> <pre><code> GameID TeamID Scoring Streak 0 nbaG1 A 23 1 nbaG1 B 12 2 nbaG1 B 11 3 nbaG1 A 24 4 nbaG1 B 21 5 nbaG2 C 15 6 nbaG2 C 12 7 nbaG2 D 17 8 nbaG2 C 11 9 nbaG2 D 21 10 nbaG3 E 10 11 nbaG3 F 12 12 nbaG3 F 14 </code></pre> <p>From this dataframe I would like to simply create a column that displays the name of the opposing team within the respective match. For example in Game1 (nbaG1) it is Team A vs B. So, if Team A scores the new column &quot;opponents&quot; should say &quot;B&quot;. However I have no idea how to scan for names within each game and return the value of the opposing team...nor have I found tips in other threads.</p> <p>Desired output for my dataset &quot;games&quot;:</p> <pre><code> GameID TeamID Scoring Streak Opponents 0 nbaG1 A 23 B 1 nbaG1 B 12 A 2 nbaG1 B 11 A 3 nbaG1 A 24 B 4 nbaG1 B 21 A 5 nbaG2 C 15 D 6 nbaG2 C 12 D 7 nbaG2 D 17 C 8 nbaG2 C 11 D 9 nbaG2 D 21 C 10 nbaG3 E 10 F 11 nbaG3 F 12 E 12 nbaG3 F 14 E </code></pre>
<p>First, group your dataframe by GameID and invoke <code>.unique()</code> to get the two teams that are playing the game</p> <pre><code>teams = df.groupby(&quot;GameID&quot;)[&quot;TeamID&quot;].unique() # game_teams : GameID nbaG1 [A, B] nbaG2 [C, D] nbaG3 [E, F] </code></pre> <p>Then, use this to look up both teams in each game and add the column to your original dataframe:</p> <pre><code>df[&quot;Teams&quot;] = teams[df[&quot;GameID&quot;]].to_list() # df: GameID TeamID Scoring Streak Teams 0 nbaG1 A 23 [A, B] 1 nbaG1 B 12 [A, B] 2 nbaG1 B 11 [A, B] 3 nbaG1 A 24 [A, B] 4 nbaG1 B 21 [A, B] 5 nbaG2 C 15 [C, D] 6 nbaG2 C 12 [C, D] 7 nbaG2 D 17 [C, D] 8 nbaG2 C 11 [C, D] 9 nbaG2 D 21 [C, D] 10 nbaG3 E 10 [E, F] 11 nbaG3 F 12 [E, F] 12 nbaG3 F 14 [E, F] </code></pre> <p>Finally, <code>apply</code> a function to each row that takes the element from <code>Teams</code> that is not in <code>TeamID</code></p> <pre><code>def select_opponent(row): for team in row[&quot;Teams&quot;]: if team != row[&quot;TeamID&quot;]: return team return None df[&quot;Opponent&quot;] = df.apply(select_opponent, axis=1) # df: GameID TeamID Scoring Streak Teams Opponent 0 nbaG1 A 23 [A, B] B 1 nbaG1 B 12 [A, B] A 2 nbaG1 B 11 [A, B] A 3 nbaG1 A 24 [A, B] B 4 nbaG1 B 21 [A, B] A 5 nbaG2 C 15 [C, D] D 6 nbaG2 C 12 [C, D] D 7 nbaG2 D 17 [C, D] C 8 nbaG2 C 11 [C, D] D 9 nbaG2 D 21 [C, D] C 10 nbaG3 E 10 [E, F] F 11 nbaG3 F 12 [E, F] E 12 nbaG3 F 14 [E, F] E </code></pre>
python|pandas
1
4,659
43,101,849
Python PIL/numpy conversion
<p>I'm having problems converting between python PIL images and numpy arrays. I already checked existing Stackoverflow posts on this, but it didn't solve the problem:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import PIL rgb_img = plt.imread('some-image.png') PIL_rgb_img = PIL.Image.fromarray(np.uint8(rgb_img)) plt.imshow(PIL_rgb_img) </code></pre> <p>I get a black screen. I tried with and without converting to uint8, and I also tried only keeping the RGB channels out of the entire RGBA data. Nothing worked.</p>
<p>I may not be able to give you a full explanation (for that, you may want to read the matplotlib function docs), but after a few tests it is clear that the following is happening:</p> <p>When you call:</p> <pre><code>rgb_img = plt.imread('img.png')
</code></pre> <p>it gives a NumPy float array, where the color values lie between [0 - 1] (for grayscale as well as RGB).</p> <p>When you call: </p> <pre><code>PIL_rgb_img = PIL.Image.fromarray(np.uint8(rgb_img))
</code></pre> <p>which converts it to <code>uint8</code> values, it takes what is supposed to be 255 and turns it into <code>1</code>, which is completely wrong,</p> <p>because in <code>uint8</code> the values should be between [0 - 255].</p> <p>So when you then call:</p> <pre><code>plt.imshow(PIL_rgb_img)
</code></pre> <p>it shows an image 'faded' by a factor of 255, which is very close to black.</p> <p>P. S.: </p> <p>This only happens with '.png' files; it seems to be something in how <code>plt.imread</code> handles them.</p> <p>To solve it, just use:</p> <pre><code>img = 'some_img.png'
rgb_img = plt.imread(img)

if img.split('.')[-1]=='png':
    PIL_rgb_img = PIL.Image.fromarray(np.uint8(rgb_img*255))
else:
    PIL_rgb_img = PIL.Image.fromarray(np.uint8(rgb_img))

plt.imshow(PIL_rgb_img)
</code></pre> <p>That should fix it.</p>
python|numpy|python-imaging-library
2
4,660
72,249,904
Pandas data manipulation from column to row elements
<p>I have dataset with millions of rows, here is an example of what it looks like and what I intend to output:</p> <pre><code>data = [[1, 100, 8], [1, 100, 4], [1, 100,6], [2, 100, 0], [2, 200, 1], [3, 300, 7], [4, 400, 2], [5, 100, 6], [5, 100, 3], [5, 600, 1]] df= pd.DataFrame(data, columns =['user', 'time', 'item']) print(df) user time item 1 100 8 1 100 4 1 100 6 2 100 0 2 200 1 3 300 7 4 400 2 5 100 6 5 100 3 5 600 1 </code></pre> <p>The desired output should have all items consumed by a user within the same time to appear together in the <code>items</code> column as follows</p> <pre><code>user time item 1 100 8,4,6 2 100 0 5 100 6,3 2 200 1 3 300 7 4 400 2 5 500 6 </code></pre> <p>For example, <code>user: 1</code> consumed products <code>8,4,6</code> within <code>time: 100</code></p> <p>How could this be achieved?</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.astype.html" rel="nofollow noreferrer"><code>df.astype</code></a> with <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>Groupby.agg</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>df.sort_values</code></a>:</p> <pre><code>In [489]: out = df.astype(str).groupby(['user', 'time'])['item'].agg(','.join).reset_index().sort_values('time') In [490]: out Out[490]: user time item 0 1 100 8,4,6 1 2 100 0 5 5 100 6,3 2 2 200 1 3 3 300 7 4 4 400 2 6 5 600 1 </code></pre>
pandas|numpy|data-manipulation
1
4,661
50,612,923
Tensorflow export estimators for prediction
<p>I wonder how I can export the estimator and then import it for prediction, following the MNIST tutorial from <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/tutorials/layers/cnn_mnist.py" rel="nofollow noreferrer">Tensorflow's page</a>. Thank you!</p>
<p>The <code>Estimator</code> has <code>model_dir</code> args where the model will be saved. So during prediction we use the <code>Estimator</code> and call the <code>predict</code> method which recreates the graph and the checkpoints are loaded.</p> <p>For the <code>MNIST</code> example, the prediction code would be:</p> <pre><code>tf.reset_default_graph() # An input-function to predict the class of new data. predict_input_fn = tf.estimator.inputs.numpy_input_fn( x={"x": eval_data}, num_epochs=1, shuffle=False) mnist_classifier = tf.estimator.Estimator( model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model") #Prediction call predictions = mnist_classifier.predict(input_fn=predict_input_fn) pred_class = np.array([p['classes'] for p in predictions]).squeeze() print(pred_class) # Output # [7 2 1 ... 4 5 6] </code></pre>
python|tensorflow|mnist
2
4,662
50,621,614
could not convert string to float: 'cd9f3b1a-2eb8-4cdb-86d1-5d4c2740b1dc'
<p>I am a newbie in Data Science and Python. So I try to use KMeans from sklearn. I have information about calls, and I want to find centroids. So I can do it for one phone number, but can't for 10. When I used for-loop I got the mistake "could not convert string to float: 'cd9f3b1a-2eb8-4cdb-86d1-5d4c2740b1dc'".</p> <p>For one phone number. It works.</p> <pre><code>df = pd.read_csv('Datasets/CDR.csv') df.CallDate = pd.to_datetime(df.CallDate) df.CallTime = pd.to_timedelta(df.CallTime) df.Duration = pd.to_timedelta(df.Duration) in_numbers = df.In.unique().tolist() in_numbers user1 = df[(df.In == in_numbers[0])] user1 = user1[(user1.DOW == 'Sat') | (user1.DOW == 'Sun')] user1 = user1[(user1.CallTime &lt; "06:00:00") | (user1.CallTime &gt; "22:00:00")] fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(user1.TowerLon,user1.TowerLat, c='g', marker='o', alpha=0.2) ax.set_title('Weekend Calls (&lt;6am or &gt;10p)') user1 = pd.concat([user1.TowerLon, user1.TowerLat], axis = 1) model = KMeans(n_clusters = 2) labels = model.fit_predict(user1) centroids = model.cluster_centers_ print(centroids) ax.scatter(centroids[:,0], centroids[:,1], marker='x', c='red', alpha=0.5, linewidths=3, s=169) plt.show() </code></pre> <p>But when I put it in loop I get the error.</p> <pre><code>locations = [] for i in range(10): user = df[(df.In == in_numbers[i])] user.plot.scatter(x='TowerLon', y='TowerLat', c='purple', alpha=0.1, title='Call Locations', s = 30) user = user[(user.DOW == 'Sat') | (user.DOW == 'Sun')] user = user[(user.CallTime &lt; "06:00:00") | (user.CallTime &gt; "22:00:00")] fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(user.TowerLon,user.TowerLat, c='g', marker='o', alpha=0.2) ax.set_title('Weekend Calls (&lt;6am or &gt;10p)') model = KMeans(n_clusters = 2) labels = model.fit_predict(user) centroids = model.cluster_centers_ ax.scatter(centroids[:,0], centroids[:,1], marker='x', c='red', alpha=0.5, linewidths=3, s=169) locations.append(centroids) plt.show() </code></pre> <p>Where is my mistake? Thank you</p> <p><a href="https://drive.google.com/file/d/1Ub6PN5UBLDXeEKdq_3xR6d7zluN-rxsh/view?usp=sharing" rel="nofollow noreferrer">CDR.csv</a></p>
<p>I was missing this line inside the loop:</p> <blockquote> <p>user = pd.concat([user.TowerLon, user.TowerLat], axis = 1)</p> </blockquote> <p>Thanks, all.</p>
python|k-means|data-science|sklearn-pandas
0
4,663
45,572,247
How to index a list of class instances with a TensorFlow tensor
<p>Given a list of class instances, I need to index it using a TensorFlow tensor. For example:</p> <pre><code>class Something():
    def __init__(self):
        self.a = 1
        self.b = 2

list = [Something() for a in range(0, 10)]
index_queue = tf.train.range_input_producer(len(list))
index = index_queue.dequeue()
result = list[index]
tensor = function_that_returns_tensor(result)

with tf.Session() as sess:
    sess.run(tensor)
</code></pre> <p>The code above gives the following error: <code>TypeError: list indices must be integers, not Tensor</code></p> <p>And using <code>tf.gather(list, index)</code> gives the following error:</p> <pre><code>TypeError: Expected binary or unicode string, got &lt;__main__.Something object at 0x7f4529fae2b0&gt;
</code></pre> <p>Any help would be highly appreciated. Thanks!</p>
<p>The problem is due to a core mechanic in how TensorFlow works. When you call TensorFlow methods like <code>tf.train.range_input_producer(len(list))</code> or <code>tf.constant</code> you're not actually <em>running</em> those operations. You're just adding those operations to the TensorFlow computation graph. You then have to use the <code>run</code> method of a <code>tf.Session</code> instance to run those operations and get results from them. <code>TypeError: list indices must be integers, not Tensor</code> is telling you that you're passing the reference to the tensor on the computation graph as the index, not the result returned from running the operation that produces that tensor.</p> <p>For a more detailed explanation, see <a href="https://www.tensorflow.org/get_started/get_started#the_computational_graph" rel="nofollow noreferrer">this TensorFlow documentation</a>.</p>
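<p>A minimal sketch of that run-then-index pattern (reusing the <code>Something</code> class from the question; the queue-runner boilerplate is needed so that <code>range_input_producer</code> actually produces values):</p> <pre><code>import tensorflow as tf

somethings = [Something() for _ in range(10)]

index_queue = tf.train.range_input_producer(len(somethings), shuffle=False)
index = index_queue.dequeue()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    idx = sess.run(index)        # running the op yields a plain integer
    result = somethings[idx]     # ordinary Python list indexing now works

    coord.request_stop()
    coord.join(threads)
</code></pre>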
python|tensorflow
0
4,664
45,333,681
Handling NA in groupby + transform
<p><strong>Edit (Jul/2021):</strong></p> <blockquote> <p>Back in the days (Jun/2017) I filled an <a href="https://github.com/pandas-dev/pandas/issues/17093#issuecomment-859993083" rel="nofollow noreferrer">issue on Pandas' Github</a>. Since it was/is a minor issue (you can work around that, e.g., Scott's answer), and as we may guess Pandas-dev crew is overloaded, it took some time to be addressed.</p> <p>One of these days I got an update on that issue as it was addressed ;)</p> <ul> <li><a href="https://github.com/pandas-dev/pandas/issues/17093#issuecomment-859993083" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/17093#issuecomment-859993083</a></li> </ul> <p>This edit is to (i) update this community on this particular matter, (ii) to thank and acknowledge all the open source developers -- the <em>Pandas' Developers</em>, in particular -- for the effort, diligence , and passion. Respect.</p> </blockquote> <hr /> <p><strong>Original post:</strong></p> <p>I am having issues in transform to a group where the column used for grouping has NaN values.</p> <p>The following code <strong>used to work</strong> until <code>pandas</code> version <strong>0.19.1</strong>. Now I've updated my environment to version <strong>0.20.3</strong> and it <strong>works no more</strong>.</p> <p>The example code:</p> <pre><code>import numpy import pandas df = pandas.DataFrame({'A':numpy.random.rand(100), 'B':numpy.random.rand(100)*10, 'C':numpy.random.randint(0,10,100)}) df.loc[:9,'C']=None df.groupby('C')['B'].transform(lambda x:x.mean()) </code></pre> <p>As of version <code>0.20.3</code> it raises the following error message:</p> <blockquote> <p>ValueError: Length mismatch: Expected axis has 90 elements, new values have 100 elements</p> </blockquote> <p>After reading the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#na-and-nat-group-handling" rel="nofollow noreferrer">doc</a> I understand this a new behaviour; not a bug.</p> <p>But it is not clear to me how to update my code, or work around that.</p> <p>My goal is to have all the (output) values, but the ones where <code>C==None</code>, to be the result of each group's average (i.e, <code>mean</code>). The first 10 output' values (<code>df.loc[:9,...</code>) would be stay untouched (same as in '<code>B</code>').</p> <p>Any suggestion?</p> <p>Thanks in advance.</p>
<p>Let's <code>mask</code> and uniquely identify those NaN with <code>cumsum</code>:</p> <pre><code>new_c = df['C'].mask(df['C'].isnull(),df['C'].isnull().cumsum()) df.groupby(new_c)['B'].transform('mean') </code></pre> <p>Or, if you testing some more complicated function</p> <pre><code>df.groupby(new_c)['B'].transform(lambda x: x.mean()) </code></pre> <p>Output: </p> <pre><code>Out[54]: 0 5.249441 1 4.987245 2 5.245857 3 6.450159 4 4.017234 5 4.421589 6 3.673986 7 4.746087 8 5.841651 9 5.394510 10 4.421589 11 4.421589 12 4.746087 13 4.746087 14 6.450159 15 6.450159 16 3.813816 17 5.249441 18 5.841651 19 3.813816 20 3.673986 21 4.017234 22 6.450159 23 3.673986 24 4.987245 25 5.245857 26 4.017234 27 4.017234 28 6.450159 29 4.987245 .... </code></pre>
python|python-3.x|pandas
0
4,665
62,854,320
Pandas new dataframe based on combinations of all values in a column
<p>I have collected location data from buses over some time and want to build a model predicting when a bus will arrive at a certain stop.</p> <p>In its most simple form, I have a DataFrame like this:</p> <pre><code>import pandas as pd df = pd.DataFrame({'station': ['Station 1', 'Station 2', 'Station 3', 'Station 4'], 'arrival_time': ['10:00', '10:02', '10:03', '10:05']}) print(df) station arrival_time 0 Station 1 10:00 1 Station 2 10:02 2 Station 3 10:03 3 Station 4 10:05 </code></pre> <p>I would like to map the arrival time at each station to the arrival time at a station later in the trip. The expected output looks something like this:</p> <pre><code> station_prev arrival_time_prev station_next arrival_time_next 0 Station 1 10:00 Station 2 10:02 1 Station 2 10:02 Station 3 10:03 2 Station 3 10:03 Station 4 10:05 3 Station 1 10:00 Station 3 10:03 4 Station 2 10:02 Station 4 10:05 5 Station 1 10:00 Station 4 10:05 </code></pre> <p>I have experimented with df.shift() and the following works for singular DataFrames.</p> <pre><code>import pandas as pd import numpy as np def combos(df): columns_prev = np.array(df.columns) + '_prev' columns_next = np.array(df.columns) + '_next' df_combo = pd.DataFrame() for i in range(1, df.shape[0]): df_prev = df.shift(i) df_prev.columns = columns_prev df_next = df.copy() df_next.columns = columns_next combo = pd.concat([df_prev, df_next], axis=1).dropna() df_combo = df_combo.append(combo, ignore_index=True) return df_combo </code></pre> <p>However, it is quite slow for larger DataFrames and regularly breaks when I try to wrap it into a larger function that aggregates data from many trips (I often get key errors, but do not understand why). Any ideas on how to do this more elegantly, efficiently and reliably? Thanks a lot in advance!</p>
<p>Convert &quot;station&quot; to an ordered categorical column:</p> <pre><code>df['station'] = pd.Categorical(df['station'], ordered=True).codes </code></pre> <p>You can now do a cross join and filter:</p> <pre><code>tmp = df.assign(key=1) (tmp.merge(tmp, on='key', suffixes=('_prev', '_next')) .drop('key', 1) .query('station_prev &lt; station_next')) station_prev arrival_time_prev station_next arrival_time_next 1 0 10:00 1 10:02 2 0 10:00 2 10:03 3 0 10:00 3 10:05 6 1 10:02 2 10:03 7 1 10:02 3 10:05 11 2 10:03 3 10:05 </code></pre>
python|pandas
0
4,666
62,764,560
How to make pixel arrays from RGB image without losing its spatial information in python?
<p>I am wondering whether there is any way to convert RGB images to pixel vectors without losing their spatial information in Python. As far as I know, I can read the images and transform them into pixel vectors, but I am not sure that doing it this way still preserves the images' spatial information in the pixel vectors. How can I build pixel vectors from an RGB image in this way?</p> <p><strong>my attempt</strong>:</p> <p>I tried the following, but I am not sure it is correct:</p> <pre><code>import matplotlib.pyplot as plt

image = plt.imread('dog.jpg')
im = image/255.0
print(im.shape) #(32, 32, 3)

pixels = im.reshape(im.shape[0]*im.shape[1], im.shape[2])
</code></pre> <p>I want to make sure the pixel vectors are built from the RGB images without losing the pixel order and spatial information. How can I make this happen? Any thoughts?</p> <p>I think <code>numpy</code> might have functions to do this. Can anyone point me to how to do this with <code>numpy</code>?</p> <p><strong>graphic illustration</strong>:</p> <p>Here is a simple graphic illustration of making pixel vectors from RGB images:</p> <p><a href="https://i.stack.imgur.com/fPQqx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fPQqx.jpg" alt="make pixel vectors from RGB image" /></a></p> <p>As this diagram shows, we have an RGB image with shape (4,4,3); it needs to be turned into pixel vectors without losing its spatial information and pixel order, and the pixel vectors from each channel (Red, Green, Blue) then need to be combined into a pixel matrix or dataframe. I am curious how to get this done in Python.</p> <p><strong>goal</strong>:</p> <p>I want to make pixel vectors from RGB images because the resulting pixel vectors need to be expanded with a Taylor expansion. Can anyone point out how to make this happen?</p>
<p>Are you just trying to reshape each channel into a column vector and then join them horizontally? That's what I understood from the graphic illustration, and the way I would do it is something like this:</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

image = plt.imread('monkey.png')
image = image / 255.0

red = image[:,:,0]
green = image[:,:,1]
blue = image[:,:,2]

def to_vector(matrix):
    # collect each column and stack them on top of each other,
    # so the pixel order (top-to-bottom, column by column) is preserved
    result = []
    for i in range(matrix.shape[1]):
        result.append(matrix[:, i].reshape(-1, 1))
    return np.vstack(result)

red = to_vector(red)
green = to_vector(green)
blue = to_vector(blue)

vector = np.hstack((red, green, blue))
</code></pre>
python|arrays|image|numpy
2
4,667
62,655,883
official module in tensorflow examples at tensorflow.org
<p>I've been following a tensorflow tutorial: <a href="https://www.tensorflow.org/official_models/fine_tuning_bert" rel="nofollow noreferrer">https://www.tensorflow.org/official_models/fine_tuning_bert</a></p> <p>In the first code snippet, I saw a lot of imports from an <code>official</code> module:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

import tensorflow as tf

import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()

from official.modeling import tf_utils
from official import nlp
from official.nlp import bert

# Load the required submodules
import official.nlp.optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
</code></pre> <p>The problem is that I found no module named <code>official</code>. I guess this <code>official</code> module is somehow problem-specific or meant for the BERT model (from tf-hub), since the BERT model uses specific text preprocessing and the <code>official</code> module provides it.</p> <p>So, where can I find and download this <code>official</code> module, and how do I make its imports work? I'm using Python 3.7, tf-2.2 and tf-hub-0.8.0.</p> <p>Please help me out.</p>
<p>The official modules of TensorFlow can be found in the <a href="https://github.com/tensorflow/models/tree/master/official" rel="nofollow noreferrer">TensorFlow Model Garden Repository</a></p>
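<p>In practice, the <code>official</code> package from the Model Garden can usually be installed from PyPI, after which the tutorial's imports resolve (pick a release that matches your TensorFlow version):</p> <pre><code>pip install tf-models-official
</code></pre> <p>Alternatively, clone the Model Garden repository and add its root directory to your <code>PYTHONPATH</code> so that <code>import official</code> works.</p>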
python|tensorflow|nlp|bert-toolkit
2
4,668
62,544,461
Array math on numpy structured arrays
<pre><code>import numpy as np arr = np.array([(1,2), (3,4)], dtype=[('c1', float), ('c2', float)]) arr += 3 </code></pre> <p>results in an invalid type promotion error. Is there a way I can have nice labeled columns like a structured array, but still be able to do operations like it's a simple <code>dtype=float</code> array?</p> <p>Alternatively, is there an easy way to cast a <code>dtype=float</code> array into a structured array? i.e.</p> <pre><code>arr = np.array([(1,2), (3,4)], dtype=float) arr_struc = arr.astype([('c1', float), ('c2', float)]) </code></pre> <p>only where it doesn't broadcast and matches columns to names. Seems like I shouldn't have to do this loop:</p> <pre><code>arr_struc = np.zeros(2, dtype=[('c1', float), ('c2', float)]) for i,key in enumerate(arr_struc.dtype.names): arr_struc[key] = arr[i,:] </code></pre>
<p>Hmmm. One option, use a view for this:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; arr = np.array([(1,2), (3,4)], dtype=[('c1', float), ('c2', float)]) &gt;&gt;&gt; view = arr.view(float) &gt;&gt;&gt; view += 3 &gt;&gt;&gt; arr array([(4., 5.), (6., 7.)], dtype=[('c1', '&lt;f8'), ('c2', '&lt;f8')]) &gt;&gt;&gt; view array([4., 5., 6., 7.]) </code></pre> <p>Not the cleanest. But it's a solution.</p> <p>EDIT:</p> <p>Yes, don't use <code>astype</code> use <em>a view again</em>:</p> <pre><code>&gt;&gt;&gt; arr = np.array([(1,2), (3,4)], dtype=float) &gt;&gt;&gt; arr array([[1., 2.], [3., 4.]]) &gt;&gt;&gt; struct = arr.view(dtype=[('c1', float), ('c2', float)]) &gt;&gt;&gt; struct array([[(1., 2.)], [(3., 4.)]], dtype=[('c1', '&lt;f8'), ('c2', '&lt;f8')]) &gt;&gt;&gt; struct.shape (2, 1) </code></pre> <p>You may have to reshape it to your liking:</p> <pre><code>&gt;&gt;&gt; struct.squeeze() array([(1., 2.), (3., 4.)], dtype=[('c1', '&lt;f8'), ('c2', '&lt;f8')]) </code></pre>
python|numpy|numpy-ndarray
2
4,669
62,868,593
What is more efficient than np.sum and numpy boolean operators?
<p>I am having some trouble getting my code to run quickly.</p> <p>After using a line by line profiler on my code, I have found that the following lines are where most of my inefficiencies come from:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import datetime timestamps = np.array(timestamps) mask = (minTime &lt;= timestamps) &amp; (timestamps &lt;= maxTime) count = np.sum(mask) </code></pre> <p><code>timestamps</code> starts out as a list of datetimes, and <code>minTime</code> is a single datetime.</p> <p>An example value for timestamps:</p> <pre><code>minTime = datetime.datetime(2020, 5, 21, 2, 27, 26) timestamps = [datetime.datetime(2020, 5, 21, 2, 27, 26), datetime.datetime(2020, 5, 21, 2, 27, 26), datetime.datetime(2020, 5, 21, 2, 27, 26), datetime.datetime(2020, 5, 21, 2, 30, 55), datetime.datetime(2020, 5, 21, 2, 30, 55), datetime.datetime(2020, 5, 21, 2, 30, 55), datetime.datetime(2020, 5, 21, 2, 34, 26), datetime.datetime(2020, 5, 21, 2, 34, 26), datetime.datetime(2020, 5, 21, 2, 34, 26), datetime.datetime(2020, 5, 21, 2, 39, 26), datetime.datetime(2020, 5, 21, 2, 39, 26), datetime.datetime(2020, 5, 21, 2, 39, 26)] </code></pre> <p>Is there a more efficient way to rewrite the code above?</p> <p>Any advice is appreciated.</p>
<p>Looks like <code>numpy.datetime64</code> objects are pretty fast. About a 2x speed up from standard lib <code>datetime</code>. Pandas kind of flounders here. It does a little bit better than what you see below if you use the pandas Timestamps as an index on Series object and use the <code>.loc</code> accessor. But not that much better.</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime import numpy import pandas py_dts = numpy.array([ datetime(2020, 5, 21, 2, 27, 26), datetime(2020, 5, 21, 2, 27, 26), datetime(2020, 5, 21, 2, 27, 26), datetime(2020, 5, 21, 2, 30, 55), datetime(2020, 5, 21, 2, 30, 55), datetime(2020, 5, 21, 2, 30, 55), datetime(2020, 5, 21, 2, 34, 26), datetime(2020, 5, 21, 2, 34, 26), datetime(2020, 5, 21, 2, 34, 26), datetime(2020, 5, 21, 2, 39, 26), datetime(2020, 5, 21, 2, 39, 26), datetime(2020, 5, 21, 2, 39, 26) ]) min_pydt = datetime(2020, 5, 21, 2, 27, 26) max_pydt = datetime(2020, 5, 21, 2, 39, 26) min_npdt = numpy.datetime64(min_pydt) max_npdt = numpy.datetime64(max_pydt) min_pddt = pandas.Timestamp(min_pydt) max_pddt = pandas.Timestamp(max_pydt) np_64s = numpy.array([numpy.datetime64(d) for d in py_dts]) pd_tss = pandas.Series([pandas.Timestamp(d) for d in py_dts]) def counter(timestamps, mindt, maxdt): return ((mindt &lt;= timestamps) &amp; (timestamps &lt;= maxdt)).sum() </code></pre> <p>In the a Jupyter notebook I did:</p> <pre><code>%%timeit counter(py_dts, min_pydt, max_pydt) </code></pre> <blockquote> <p>17.4 µs ± 1.31 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)</p> </blockquote> <pre><code>%%timeit counter(np_64s, min_npdt, max_npdt) </code></pre> <blockquote> <p>7.42 µs ± 102 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)</p> </blockquote> <pre><code>%%timeit counter(pd_tss, min_pddt, max_pddt) </code></pre> <blockquote> <p>531 µs ± 2.99 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)</p> </blockquote>
python|arrays|numpy|boolean|boolean-logic
1
4,670
62,791,598
how can I find intersection of multiple lines with a curve?
<p>I have a file which has x and y. For each line that passes from the y-axis, I can find the intersection but I wanted to have an automatic way to find the intersections of a bunch lines that pass from y-axis like the figure below:</p> <p><em>perspective result</em> <a href="https://i.stack.imgur.com/C1GQp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1GQp.png" alt="enter image description here" /></a></p> <p>the code that I have written for finding intersections one-by-one is below:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt with open('txtfile1.out', 'r') as f: lines = f.readlines() x = [float(line.split()[0]) for line in lines] y = [float(line.split()[1]) for line in lines] xx = [] for i in range(1,len(x)): if (y[i] &gt; 0 and y[i-1] &lt; 0) or (y[i] &lt; 0 and y[i-1] &gt; 0): xx.append((x[i]+x[i-1])/2) yx = [0 for _ in range(len(xx))] plt.plot(x,y) plt.plot(xx,yx, color=&quot;C2&quot;, marker=&quot;o&quot;, ls=&quot;&quot;, ms=10) </code></pre> <p>the thing that I have</p> <p><em>Current result</em> <a href="https://i.stack.imgur.com/ebZ6M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ebZ6M.png" alt="enter image description here" /></a></p>
<p>You can try to set up an extra loop to check for multiple intersection values which you input and use dictionary to hold list of matches against intersection value as key. This theoretically should plot all intersections of y you desire into same graph</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt with open('txtfile1.out', 'r') as f: lines = f.readlines() x = [float(line.split()[0]) for line in lines] y = [float(line.split()[1]) for line in lines] intersections = [0, -20, 10] intersection_matches = {intersection: [] for intersection in intersections} # or just define directly: intersection_matches ={ 0: [] , -20: [], 10: [] } for i in range(1, len(x)): for intersection, xx in intersection_matches.items(): if (y[i] &gt; intersection and y[i-1] &lt; intersection or (y[i] &lt; intersection and y[i-1] &gt; intersection)): xx.append((x[i]+x[i-1])/2) plt.plot(x,y) for intersection, xx in intersection_matches.items(): yx = [intersection] * len(xx) plt.plot(xx, yx, color=&quot;C2&quot;, marker=&quot;o&quot;, ls=&quot;&quot;, ms=10) </code></pre>
python|numpy|matplotlib
2
4,671
54,548,747
Calculating a summary row in a pandas dataframe
<p>Here's what I have (Pandas DataFrame), that I created so far: Code: </p> <pre><code>table = pd.pivot_table(df1, index=['Assignee', 'IssueType'], columns=['Status'], values='Key', aggfunc={'Key': np.count_nonzero}, dropna=True) table['Total'] = table.sum(axis=1) table = table.fillna(0) table = table.apply(pd.to_numeric, errors='ignore') table = table.astype(int) table.to_csv(output_file_path, sep=delimiter) </code></pre> <p>Output: </p> <pre><code>Assignee~IssueType~Analysis~Blocked~Closed~Done~In Progress~Open~Ready For QA Testing~Total Smith, John~Story~0~0~0~0~0~1~0~1 Smith, John~Sub-task~0~0~0~0~0~1~0~1 Smith, John~Task~0~0~0~0~2~5~0~7 Doe, Jane~Bug~0~0~0~0~1~0~0~1 Polo, Marco~Bug~0~0~0~0~0~2~0~2 Polo, Marco~Story~0~0~1~0~0~0~0~1 Polo, Marco~Task~1~0~0~0~4~2~0~7 </code></pre> <p>Here's what I would like to have (considering that I could have numeric/non-numeric columns: </p> <pre><code>Assignee~IssueType~Analysis~Blocked~Closed~Done~In Progress~Open~Ready For QA Testing~Total Smith, John~Story~0~0~0~0~0~1~0~1 Smith, John~Sub-task~0~0~0~0~0~1~0~1 Smith, John~Task~0~0~0~0~2~5~0~7 Doe, Jane~Bug~0~0~0~0~1~0~0~1 Polo, Marco~Bug~0~0~0~0~0~2~0~2 Polo, Marco~Story~0~0~1~0~0~0~0~1 Polo, Marco~Task~1~0~0~0~4~2~0~7 **GrandTotal~GrandTotal~1~0~1~0~7~11~0~20** </code></pre> <p>What would be the best/optimal way to achieve this using Pandas DataFrames? Appreciate your help in advance.</p>
<p>Here's my answer to this question. There is probably still room for improvement, but at least it works well enough for my purposes.</p> <pre><code>def append_summary_total(df_index, file_path, delimiter): file_path = os.path.abspath(file_path) delimiter = str(delimiter) df = pd.read_csv(file_path, sep=delimiter) sums = df.select_dtypes(pd.np.number).sum().rename('Grand Total') df.loc['Grand Total'] = df.select_dtypes(pd.np.number).sum() df = df.fillna("GrandTotal") df = df.set_index(df_index) df = df.apply(pd.to_numeric, errors='ignore') df = df.astype(int) df.to_csv(file_path, sep=delimiter) </code></pre> <p>Here's the output: <a href="https://i.stack.imgur.com/B72mU.png" rel="nofollow noreferrer">Sample Output</a></p>
python|pandas|python-2.7
0
4,672
73,644,154
pandas creates 2 copies of files in a loop
<p>I have a dataframe like as below</p> <pre><code>import numpy as np import pandas as pd from numpy.random import default_rng rng = default_rng(100) cdf = pd.DataFrame({'Id':[1,2,3,4,5], 'customer': rng.choice(list('ACD'),size=(5)), 'region': rng.choice(list('PQRS'),size=(5)), 'dumeel': rng.choice(list('QWER'),size=(5)), 'dumma': rng.choice((1234),size=(5)), 'target': rng.choice([0,1],size=(5)) }) </code></pre> <p>I am trying to split the dataframe based on <code>Customer</code> and store it in a folder. Not necessary to understand the full code. The issue is in the last line.</p> <pre><code>i = 0 for k, v in df.groupby(['Customer']): print(k.split('@')[0]) LM = k.split('@')[0] i = i+1 unique_cust_names = '_'.join(v['Customer'].astype(str).unique()) unique_ids = '_'.join(v['Id'].astype(str).unique()) unique_location = '_'.join(v['dumeel'].astype(str).unique()) filename = '_'.join([unique_ids, unique_cust_names, unique_location, LM]) print(filename) with pd.ExcelWriter(f&quot;{filename}.xlsx&quot;, engine='xlsxwriter') as writer: v.to_excel(writer,columns=col_list,index=False) wb = load_workbook(filename = 'format_sheet.xlsx') sheet_from =wb.worksheets[0] wb1 = load_workbook(filename = f&quot;{filename}.xlsx&quot;) sheet_to = wb1.worksheets[0] copy_styles(sheet_from, sheet_to) #wb.close() tab = Table(displayName = &quot;Table1&quot;, ref = &quot;A1:&quot; + get_column_letter(wb1.worksheets[0].max_column) + str(wb1.worksheets[0].max_row) ) style = TableStyleInfo(name=&quot;TableStyleMedium2&quot;, showFirstColumn=False, showLastColumn=False, showRowStripes=True, showColumnStripes=False) tab.tableStyleInfo = style wb1.worksheets[0].add_table(tab) #wb1.worksheets[0].parent.save(f&quot;{filename}.xlsx&quot;) wb1.save(&quot;test_files/&quot; + f&quot;{filename}.xlsx&quot;) # issue is here wb1.close() print(&quot;Total number of customers to be emailed is &quot;, i) </code></pre> <p>Though the code works fine, the issue is in the below line I guess</p> <pre><code>wb1.save(&quot;test_files/&quot; + f&quot;{filename}.xlsx&quot;) # issue is here </code></pre> <p>This creates two copies of files.. One in the current folder as jupyter notebook file and other one inside the <code>test_files</code> folder.</p> <p>For ex: I see two files called <code>test1.xlsx</code> one in the current folder and one inside the <code>test_files</code> folder (path is test_files/test1.xlsx)</p> <p>How can I avoid this?</p> <p>I expect my output to generate/save only 1 file for each customer inside the <code>test_files</code> folder?</p>
<p>The issue is happening because you are referencing 2 different file names: once with the prefix <code>&quot;test_files/&quot;</code> and once without it. The best way to handle it is to define the file name once, as follows</p> <pre><code>dir_filename = &quot;test_files/&quot; + f&quot;{filename}.xlsx&quot; </code></pre> <p>and then reference it in the following places</p> <pre><code>with pd.ExcelWriter(f&quot;{filename}.xlsx&quot;, engine='xlsxwriter') as writer: v.to_excel(writer,columns=col_list,index=False) ## wb1 = load_workbook(filename = f&quot;{filename}.xlsx&quot;) ## wb1.save(&quot;test_files/&quot; + f&quot;{filename}.xlsx&quot;) </code></pre> <p>Hope it helps</p>
python|pandas|dataframe|file|group-by
1
4,673
73,554,722
Searching word frequency in pandas from dict
<p>Here is the code which I am using:</p> <pre><code>import pandas as pd data = [['This is a long sentence which contains a lot of words among them happy', 1], ['This is another sentence which contains the word happy* with special character', 1], ['Content and merry are another words which implies happy', 2], ['Sad is not happy', 2], ['unfortunate has negative conotations', 1]] df = pd.DataFrame(data, columns=['string', 'id']) words = { &quot;positive&quot; : [&quot;happy&quot;, &quot;content&quot;], &quot;negative&quot; : [&quot;sad&quot;, &quot;unfortunate&quot;], &quot;neutral&quot; : [&quot;neutral&quot;, &quot;000&quot;] } </code></pre> <p>I want the output dataframe to look for keys in the dictionary and search for them in the dataframe but the key can be only be counted one time against an id.</p> <p>Simply put:</p> <ul> <li>Group by id.</li> <li>For each group: see if at least one word in all sentences of a group is positive, negative and neutral.</li> <li>Then sum up the counts for all groups.</li> </ul> <p>For example.</p> <pre><code> string id 0 This is a long sentence which contains a lot o... 1 1 This is another sentence which contains the wo... 1 2 Content and merry are another words which impl... 2 3 Sad is not happy 2 4 unfortunate has negative connotations 1 </code></pre> <p>The id &quot;1&quot; in row number 0 and 1 both contain the dict values for key positive. Thus <code>positive</code> can be counted only 1 time for id 1. Also in the last row it contains the word &quot;unfortunate&quot; thus.</p> <p><strong>For id 1</strong></p> <p>positive : 1</p> <p>negative : 1</p> <p>neutral : 0</p> <p>After all <code>id</code>s are summed up, the final dataframe should look like this:</p> <pre><code>word freq positive 2 negative 2 neutral 0 </code></pre> <p>Could you please advise how this can be accomplished in pandas</p>
<p>The following code should do the job, although it does not rely entirely on pandas. Note that I use phrase.lower() so that matches are counted regardless of case.</p> <pre class="lang-py prettyprint-override"><code>from collections import Counter out = df.groupby(&quot;id&quot;)['string'].apply(list) def get_count(grouped_element): counter = Counter({&quot;postive&quot;: 0, &quot;negative&quot;: 0, &quot;neutral&quot;: 0}) words = { &quot;postive&quot; : [&quot;happy&quot;, &quot;content&quot;], &quot;negative&quot; : [&quot;sad&quot;, &quot;unfortunate&quot;], &quot;neutral&quot; : [&quot;neutral&quot;, &quot;000&quot;] } for phrase in grouped_element: if counter[&quot;postive&quot;] &lt; 1: for word in words[&quot;postive&quot;]: if word in phrase.lower(): counter.update([&quot;postive&quot;]) break if counter[&quot;negative&quot;] &lt; 1: for word in words[&quot;negative&quot;]: if word in phrase.lower(): counter.update([&quot;negative&quot;]) break if counter[&quot;neutral&quot;] &lt; 1: for word in words[&quot;neutral&quot;]: if word in phrase.lower(): counter.update([&quot;neutral&quot;]) break return counter counter = Counter({&quot;postive&quot;: 0, &quot;negative&quot;: 0, &quot;neutral&quot;: 0}) for phrases in out: result = get_count(phrases) counter.update(result) print(counter) </code></pre> <p>The output is:</p> <pre><code>Counter({'postive': 2, 'negative': 2, 'neutral': 0}) </code></pre> <p>To convert to a dataframe:</p> <pre class="lang-py prettyprint-override"><code>out = {&quot;word&quot;: [], &quot;freq&quot;: []} for key, val in counter.items(): out[&quot;word&quot;].append(key) out[&quot;freq&quot;].append(val) pd.DataFrame(out) </code></pre> <pre><code> word freq 0 postive 2 1 negative 2 2 neutral 0 </code></pre>
python|pandas|dataframe|dictionary
1
4,674
71,317,141
optimizing multiple loss functions in pytorch
<p>I am training a model with different outputs in PyTorch, and I have four different losses for positions (in meter), rotations (in degree), and velocity, and a boolean value of 0 or 1 that the model has to predict.<br /> AFAIK, there are two ways to define a final loss function here:</p> <p>one - the naive weighted sum of the losses</p> <p>two - the defining coefficient for each loss to optimize the final loss.</p> <p>So, My question is how is better to weigh these losses to obtain the final loss, correctly?</p>
<p>This is not a question about programming but instead about optimization in a multi-objective setup. The two options you've described come down to the same approach, which is a linear combination of the loss terms. However, keep in mind that there are many other approaches out there, such as dynamic loss weighting, uncertainty weighting, etc. In practice, the most commonly used approach is the linear combination, where each objective gets a weight that is determined via grid search or random search.</p> <p>You can look up this survey on multi-task learning which showcases some approaches: <strong>Multi-Task Learning for Dense Prediction Tasks: A Survey</strong>, Vandenhende <em>et al.</em>, T-PAMI'20.</p> <p>This is an active line of research; as such, there is no definitive answer to your question.</p>
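<p>To make the linear-combination approach concrete, here is a minimal PyTorch sketch. The weight values and loss names below are illustrative assumptions, not something prescribed by the question; in practice the weights are hyperparameters tuned via grid or random search:</p> <pre class="lang-py prettyprint-override"><code>import torch

# Hypothetical weights for the four objectives (position, rotation,
# velocity, boolean flag); these are tuning knobs, not fixed values.
w_pos, w_rot, w_vel, w_flag = 1.0, 0.1, 0.5, 1.0

def combined_loss(loss_pos, loss_rot, loss_vel, loss_flag):
    # Each argument is a scalar tensor; the weighted sum is a single
    # scalar, so one backward pass updates all four heads at once.
    return (w_pos * loss_pos + w_rot * loss_rot
            + w_vel * loss_vel + w_flag * loss_flag)

# Tiny demonstration with dummy scalar losses:
loss = combined_loss(torch.tensor(0.8), torch.tensor(12.0),
                     torch.tensor(0.3), torch.tensor(0.7))
print(loss)  # tensor(2.8500)
</code></pre>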
python|optimization|pytorch|loss-function|loss
2
4,675
71,232,915
How to group and calculate monthly average in pandas dataframe
<p>I am trying to group a dataset based on the name and find the monthly average. i.e sum all the values for each name divided by the number of the distinct month for each name.</p> <p>For example,</p> <pre><code>name time values A 2011-01-17 10 B 2011-02-17 20 A 2011-01-11 10 A 2011-03-17 30 B 2011-02-17 10 </code></pre> <p>The expected result is</p> <pre><code>name monthly_avg A 25 B 30 </code></pre> <p>I have tried</p> <pre><code>data.groupby(['name'])['values'].mean().reset_index(name='Monthly Average') </code></pre> <p>but it gives the output below instead of my desired output above:</p> <pre><code>name Monthly Average A 16.666667 B 15.000000 </code></pre>
<p>Convert the values to datetimes first, then aggregate the <code>sum</code> per <code>name</code> and month with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html" rel="nofollow noreferrer"><code>Grouper</code></a>, and finally get the <code>mean</code> per the first level <code>name</code>:</p> <pre><code>data['time'] = pd.to_datetime(data['time']) df = (data.groupby(['name', pd.Grouper(freq='m', key='time')])['values'].sum() .groupby(level=0) .mean() .reset_index(name='Monthly Average')) print (df) name Monthly Average 0 A 25 1 B 30 </code></pre> <p>A solution with month periods is possible if you change <code>Grouper</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a>:</p> <pre><code>data['time'] = pd.to_datetime(data['time']) df = (data.groupby(['name', data['time'].dt.to_period('m')])['values'] .sum() .groupby(level=0) .mean() .reset_index(name='Monthly Average')) print (df) name Monthly Average 0 A 25 1 B 30 </code></pre>
python|pandas
0
4,676
71,342,754
Use dataframe name in dataframe.query
<p>I have two dataframes</p> <p>df1</p> <pre><code>Company Region ID Walmart North 1 Walmart North 2 OneStore North 3 OneStore North 4 OneStore South 4 Walmart South 4 OneStore West 4 Walmart East 5 </code></pre> <p>df2</p> <pre><code>Company Region ID Sales Walmart North 1 100 Walmart North 1 150 Walmart North 1 50 Walmart North 2 400 OneStore North 3 250 OneStore North 3 150 OneStore North 4 200 OneStore South 4 300 Walmart South 4 100 Walmart South 4 250 OneStore West 4 350 OneStore West 4 100 Walmart East 5 300 Walmart East 5 400 </code></pre> <p>Final output required</p> <p>df1</p> <pre><code>Company Region ID Sales Walmart North 1 300 Walmart North 2 400 OneStore North 3 400 OneStore North 4 200 OneStore South 4 300 Walmart South 4 350 OneStore West 4 450 Walmart East 5 700 </code></pre> <p>then i want to use a query like</p> <pre><code>df2.query(&quot;(df2['Company']==df1.at[0,'Company'])&amp;(df2['Region']==df1.at[0,'Region'])&amp;(df2['ID']==df1.at[0,'ID'])&quot;)['Sales'].sum() but i get an error: **UndefinedVariableError: name 'df2' is not defined** </code></pre> <p>if df2 is removed from query</p> <pre><code>df2.query(&quot;(Company==df1.at[0,'Company'])&amp;(Region==df1.at[0,'Region'])&amp;(ID==df1.at[0,'ID'])&quot;)['Sales'].sum() UndefinedVariableError: name 'df1' is not defined </code></pre> <p>if i use</p> <pre><code>df2.loc[(df2['Company']==df1.at[0,'Company'])&amp;(df2['Region']==df1.at[0,'Region'])&amp;(df2['ID']==df1.at[0,'ID']),'Sales'].sum() </code></pre> <p>then i get required result, (300 for first row)</p> <p>but i want to use query function as this is a sample data and i work with a large dataset where columns vary so i generate a query based on columns like</p> <pre><code>'&amp;'.join(&quot;(df2['%s']==df1.at[0,'%s'])&quot; % x for x in ['col1', 'col2', . . . . ]) </code></pre> <p>How can i make the query to work for this condition</p> <p><em><strong>Update:</strong></em></p> <p><em>Thanks to <a href="https://stackoverflow.com/users/2956135/emma">Emma</a> for merge &amp; groupby,</em></p> <p>i got the output dataframe as required</p> <pre><code>df_1 Company Region ID Sales OneStore North 3 400 OneStore North 4 200 OneStore South 4 300 OneStore West 4 450 Walmart East 5 700 Walmart North 1 300 Walmart North 2 400 Walmart South 4 350 </code></pre> <p>Now i have another dataframe</p> <p>df3</p> <pre><code>Company Region ID Stock Units Walmart North 1 Full 5 Walmart North 1 Full 7 Walmart North 2 Full 4 Walmart North 2 Restock 26 Walmart North 2 Restock 34 OneStore North 3 Full 2 OneStore North 3 Restock 26 OneStore North 4 Full 3 Walmart South 4 Full 5 Walmart South 4 Restock 74 OneStore West 4 Full 9 OneStore West 4 Full 7 OneStore West 4 Restock 53 OneStore West 5 Full 2 Walmart East 5 Full 2 Walmart East 5 Full 1 Walmart East 5 Restock 36 </code></pre> <p>I need to get the sum of units where stock staus is restock and add this data to new column in df_1, i'm trying to use groupby by filtering df3 where stock status is restock, buy i'm getting error</p> <pre><code>df = df_1.merge(df3, on=['Company', 'Region', 'ID'], how='left') df_1.loc[:,'Units_req'] = df[df['Stock']=='Restock'].groupby(['Company', 'Region', 'ID', 'Sales', 'Profit'])['Units'].sum() TypeError: incompatible index of inserted column with frame index </code></pre> <p>Please help with best approach to this problem, i still have to add data from different tables based on multiple conditions, i still find it easy if anyone could help on how to use dataframe names in dataframe.query</p>
<p>You can merge the 2 dataframes first, then aggregate on group.</p> <pre class="lang-py prettyprint-override"><code># This will merge the Sales information to the first dataframe. df = df1.merge(df2, on=['Company', 'Region', 'ID'], how='left') # Then, you can group by the Company, Region, and ID and aggregate the Sales. df = df.groupby(['Company', 'Region', 'ID']).Sales.sum().reset_index() </code></pre> <p>Or you can do this in 1 shot.</p> <pre class="lang-py prettyprint-override"><code>df = (df1.merge(df2, on=['Company', 'Region', 'ID'], how='left') .groupby(['Company', 'Region', 'ID']).Sales.sum() .reset_index()) </code></pre>
python|pandas|dataframe
0
4,677
52,041,963
Select rows containing a NaN following a specific value in Pandas
<p>I am trying to create a new DataFrame consisting of the rows corresponding to the value 1.0 or NaN in the last column, whereby I only take the Nans under a 1.0 (that is, I'm interested in everything until a 0.0 appears).</p> <pre><code>Timestamp Value Mode 00-00-10 34567 1.0 00-00-20 45425 00-00-30 46773 0.0 00-00.40 64567 00-00-50 25665 1.0 00-00-60 25678 </code></pre> <p>My attempt is:</p> <pre><code>for row in data.itertuples(): while data[data.Mode != 0.0]: df2 = df2.append(row) else: #How do I differentiate between a NaN under a 1.0 and a NaN under a 0.0? print (df2) </code></pre> <p>The idea is to save every row until a 0.0 appears, and afterwards ignore every row until a 1.0 appears again.</p>
<p>You can use <code>.ffill</code> to figure out if it's a <code>NaN</code> below a 1 or a 0.</p> <p>Here are the <code>NaN</code> values below a 1</p> <pre><code>df[(df['Mode'].isnull()) &amp; df['Mode'].ffill() == 1] # Timestamp Value Mode #1 00-00-20 45425 NaN #5 00-00-60 25678 NaN </code></pre> <p>To get all of the <code>1</code>s and <code>NaN</code> below:</p> <pre><code>df[((df['Mode'].isnull()) &amp; df['Mode'].ffill() == 1) | df.Mode == 1] # Timestamp Value Mode #0 00-00-10 34567 1.0 #1 00-00-20 45425 NaN #4 00-00-50 25665 1.0 #5 00-00-60 25678 NaN </code></pre> <hr> <p>You can get away with slightly nicer logic, since you have only 1 and 0, though this might not always work due to the <code>NaN</code> in <code>'Mode'</code> (It seems to work for the above bit)</p> <pre><code>df[((df['Mode'].isnull()) &amp; df['Mode'].ffill()) | df.Mode] </code></pre>
python|pandas|dataframe
2
4,678
60,433,580
How to check if a column contains backslash
<pre><code>data = {'value': ['red','red\blue','yellow'] } df = pd.DataFrame (data, columns = ['value']) </code></pre> <p>I tried to use:</p> <pre><code>df[df['value'].str.contains("\\", na = False)]['value'].count() </code></pre> <p>but got the error:</p> <pre><code>bad escape (end of pattern) at position 0 </code></pre> <p>Thanks a lot.</p>
<p>The sample data was changed to avoid the <code>\b</code> escape sequence. Add the <code>r</code> prefix to the pattern, because <code>regex=True</code> by default. For the count, it is simpler to use the <code>sum</code> of the <code>True</code> values:</p> <pre><code>data = {'value': ['red','red\ blue','yellow']} df = pd.DataFrame (data, columns = ['value']) print(df) value 0 red 1 red\ blue 2 yellow print (df['value'].str.contains(r"\\", na = False).sum()) 1 </code></pre> <p>Another idea is to avoid regexes entirely with the <code>regex=False</code> parameter of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a>:</p> <pre><code>print (df['value'].str.contains("\\", na = False, regex=False).sum()) 1 </code></pre>
python|pandas
2
4,679
60,549,984
How to create new column by manipulating another column? pandas
<p>I am trying to make a new column depending on different criteria. I want to add characters to the string dependent on the starting characters of the column. An example of the data:</p> <pre><code>RH~111~header~120~~~~~~~ball RL~111~detailed~12~~~~~hat RA~111~account~13~~~~~~~~~car </code></pre> <p>I want to change those starting with RH and RL, but not the ones starting with RA. So I want to look like:</p> <pre><code>RH~111~header~120~~1~~~~~ball RL~111~detailed~12~~cancel~~~ball RA~111~account~12~~~~~~~~~ball </code></pre> <p>I have attempted to use str split, but it doesn't seem to actually be splitting the string up</p> <pre><code>(np.where(~df['1'].str.startswith('RH'), df['1'].str.split('~').str[5], df['1'])) </code></pre> <p>This is referencing the correct columns but not splitting it where I thought it would, and cant seem to get further than this. I feel like I am not really going about this the right way.</p>
<p>Define a function to replace element number <em>pos</em> in the <em>arr</em> list:</p> <pre><code>def repl(arr, pos): arr[pos] = '1' if arr[0] == 'RH' else 'cancel' return '~'.join(arr) </code></pre> <p>Then perform the substitution:</p> <pre><code>df[0] = df[0].mask(df[0].str.match('^R[HL]'), df[0].str.split('~').apply(repl, pos=5)) </code></pre> <p>Details:</p> <ul> <li><code>str.match</code> ensures that only the proper elements are substituted.</li> <li><code>df[0].str.split('~')</code> splits the column of strings into a column of lists (one list per string).</li> <li><code>apply(repl, pos=5)</code> computes the value to substitute.</li> </ul> <p>I assumed that you have a DataFrame with a single column, so its column name is <em>0</em> (an <strong>integer</strong>), instead of <em>'1'</em> (a string). If this is not the case, change the column name in the code above.</p>
python|pandas
1
4,680
60,338,266
Python: distance from index to 1s in binary mask
<p>I have a binary mask like this:</p> <pre><code>X = [[0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 1, 1], [0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1], [0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1]] </code></pre> <p>I have a certain index in this array and want to compute the distance from that index to the closest <code>1</code> in the mask. If there's already a <code>1</code> at that index, the distance should be zero. Examples (assuming Manhattan distance):</p> <pre><code>distance(X, idx=(0, 5)) == 0 # already is a 1 -&gt; distance is zero distance(X, idx=(1, 2)) == 2 # second row, third column distance(X, idx=(0, 0)) == 5 # upper left corner </code></pre> <p>Is there already existing functionality like this in Python/NumPy/SciPy? Both Euclidian and Manhattan distance would be fine. I'd prefer to avoid computing distances for the entire matrix (as that is pretty big in my case), and only get the distance for my one index.</p>
<p>Here's one for the <code>manhattan</code> distance metric for a single entry -</p> <pre><code>def bwdist_manhattan_single_entry(X, idx): nz = np.argwhere(X==1) return np.abs(idx-nz).sum(1).min() </code></pre> <p>Sample run -</p> <pre><code>In [143]: bwdist_manhattan_single_entry(X, idx=(0,5)) Out[143]: 0 In [144]: bwdist_manhattan_single_entry(X, idx=(1,2)) Out[144]: 2 In [145]: bwdist_manhattan_single_entry(X, idx=(0,0)) Out[145]: 5 </code></pre> <p>Optimize performance further by extracting only the boundary elements of the blobs of <code>1s</code> -</p> <pre><code>from scipy.ndimage.morphology import binary_erosion def bwdist_manhattan_single_entry_v2(X, idx): k = np.ones((3,3),dtype=int) nz = np.argwhere((X==1) &amp; (~binary_erosion(X,k,border_value=1))) return np.abs(idx-nz).sum(1).min() </code></pre> <p>The number of elements in <code>nz</code> is smaller with this method than with the earlier one, hence the performance improvement.</p>
python|numpy|scipy
3
4,681
60,532,088
Convert a list into a Numpy array lose two of its three axis only with one dataset
<p>I have a python code that reads NIFTI images using SimpleITK library. Then it converts that images into a Numpy Array. Then, I extend the Numpy Array into a list.</p> <p>I have 20 FLAIR.nii.gz files. Each of them have 48 slices.</p> <p>When I have all the 48 slices of all the 20 patients, I convert the list into a Numpy Array.</p> <p>I do it this way because I'm newbie with Python and I don't know any other way to do it.</p> <p>The code is:</p> <pre><code>import os import SimpleITK as sitk import numpy as np flair_dataset = [] # For each patient directory # data_path is a list with all of the patient's directory. for i in data_path: img_path = os.path.join(file_path, i, 'pre') mask_path = os.path.join(file_path, i) for name in glob.glob(img_path+'/FLAIR*'): # Reads images using SimpleITK. brain_image = sitk.ReadImage(name) # Get a numpy array from a SimpleITK Image. brain_array = sitk.GetArrayFromImage(brain_image) flair_dataset.extend(brain_array) if debug: print('brain_image size: ', brain_image.GetSize()) print('brain_array Shape: ', brain_array.shape) print('flair_dataset length:', len(flair_dataset)) print('flair_dataset length: ', len(flair_dataset)) print('flair_dataset[1] type: ', print(type(flair_dataset[1]))) print('flair_dataset[1] shape: ', print(flair_dataset[1].shape)) flair_array = np.array(flair_dataset) print('flair_array.shape: ', flair_array.shape) print('flair_array.dtype: ', flair_array.dtype) </code></pre> <p>This code generates this output (all FLAIR.nii.gz files have the same shape):</p> <pre><code>data_path = ['68', '55', '50', '61', '63', '52', '51', '60', '67', '58', '59', '53', '69', '64', '56', '65', '54', '62', '66', '57'] patient_data_path = 68 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 48 Mask list length: 48 patient_data_path = 55 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 96 Mask list length: 96 patient_data_path = 50 brain_image size: (256, 232, 48) brain_array Shape: (48, 232, 256) flair_dataset length: 144 WMH image Size: (256, 232, 48) WMH array Shape: (48, 232, 256) Mask list length: 144 patient_data_path = 61 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 192 Mask list length: 192 patient_data_path = 63 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 240 Mask list length: 240 patient_data_path = 52 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 288 Mask list length: 288 patient_data_path = 51 brain_image size: (256, 232, 48) brain_array Shape: (48, 232, 256) flair_dataset length: 336 WMH image Size: (256, 232, 48) WMH array Shape: (48, 232, 256) Mask list length: 336 patient_data_path = 60 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 384 Mask list length: 384 patient_data_path = 67 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 432 Mask list length: 432 patient_data_path = 58 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 480 Mask list length: 480 patient_data_path = 59 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 528 Mask list length: 528 patient_data_path = 53 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 576 Mask list length: 576 patient_data_path = 69 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) 
flair_dataset length: 624 Mask list length: 624 patient_data_path = 64 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 672 Mask list length: 672 patient_data_path = 56 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 720 Mask list length: 720 patient_data_path = 65 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 768 Mask list length: 768 patient_data_path = 54 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 816 Mask list length: 816 patient_data_path = 62 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 864 Mask list length: 864 patient_data_path = 66 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 912 Mask list length: 912 patient_data_path = 57 brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) flair_dataset length: 960 Mask list length: 960 </code></pre> <p>The final output from the code is:</p> <pre><code>flair_dataset length: 960 mask_dataset length: 960 flair_dataset[1] type: &lt;class 'numpy.ndarray'&gt; flair_dataset[1] shape: (256, 232) flair_array.shape: (960,) flair_array.dtype: object </code></pre> <p>My problem:</p> <p>I don't understand why flair_array has this shape: <code>(960,)</code>. <code>flair_array dtype</code> is <code>object</code>.</p> <p>I have tried the same code, without changing anything, and it works perfectly. It has 20 patients also, and 48 slices for each FLAIR.nii.gz file also.</p> <p>Its output:</p> <pre><code>data_path = ['39', '31', '2', '23', '35', '29', '17', '49', '27', '8', '33', '4', '19', '41', '37', '11', '25', '6', '0', '21'] patient_data_path = 39 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 48 Mask list length: 48 patient_data_path = 31 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 96 Mask list length: 96 patient_data_path = 2 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 144 Mask list length: 144 patient_data_path = 23 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 192 Mask list length: 192 patient_data_path = 35 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 240 Mask list length: 240 patient_data_path = 29 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 288 Mask list length: 288 patient_data_path = 17 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 336 Mask list length: 336 patient_data_path = 49 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 384 Mask list length: 384 patient_data_path = 27 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 432 Mask list length: 432 patient_data_path = 8 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 480 Mask list length: 480 patient_data_path = 33 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 528 Mask list length: 528 patient_data_path = 4 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 576 Mask list length: 576 patient_data_path = 19 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 624 Mask list length: 624 
patient_data_path = 41 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 672 Mask list length: 672 patient_data_path = 37 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 720 Mask list length: 720 patient_data_path = 11 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 768 Mask list length: 768 patient_data_path = 25 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 816 Mask list length: 816 patient_data_path = 6 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 864 Mask list length: 864 patient_data_path = 0 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 912 Mask list length: 912 patient_data_path = 21 brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) flair_dataset length: 960 Mask list length: 960 </code></pre> <p>This is the final output for this dataset:</p> <pre><code>flair_dataset length: 960 mask_dataset length: 960 flair_dataset[1] type: &lt;class 'numpy.ndarray'&gt; flair_dataset[1] shape: (240, 240) flair_array.shape: (960, 240, 240) flair_array.dtype: float32 </code></pre> <p>With this second dataset the <code>flair_array</code> is <code>float32</code>.</p> <p>Why the first <code>flair_array</code> shape is <code>(960,)</code>?</p> <p><strong>UPDATE:</strong><br/> In both datasets, <code>brain_array.dtype</code> is <code>float32</code> always.</p>
<p>In one case</p> <pre><code>flair_array.shape: (960,) flair_array.dtype: object </code></pre> <p>in the other</p> <pre><code>flair_array.shape: (960, 240, 240) flair_array.dtype: float32 </code></pre> <p>You make these with:</p> <pre><code>flair_array = np.array(flair_dataset) </code></pre> <p>If all the elements of <code>flair_dataset</code> have the same shape, it can create a multidimensional array from them.</p> <p>But if one or more of the arrays in the list differ in shape, it has to give up on the multidimensional goal, and instead just makes an object dtype array, which is very much like a list - it contains references to the original arrays.</p> <p>In the original list most of the elements are</p> <pre><code>brain_image size: (232, 256, 48) brain_array Shape: (48, 256, 232) </code></pre> <p>but I also see some</p> <pre><code>brain_image size: (256, 232, 48) brain_array Shape: (48, 232, 256) </code></pre> <p>In the second set all are</p> <pre><code>brain_image size: (240, 240, 48) brain_array Shape: (48, 240, 240) </code></pre> <p>When people report an (n,) shape where they expect (n,m,p), I suspect an object dtype array caused by a mix of element shapes. That's why I asked about <code>dtype</code>.</p>
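<p>A minimal, self-contained demonstration of this behaviour (the shapes here are arbitrary examples, not your MRI dimensions):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

# All elements share a shape, so np.array stacks them into one
# multidimensional float array.
same = np.array([np.zeros((2, 3)), np.zeros((2, 3))])
print(same.shape, same.dtype)    # (2, 2, 3) float64

# Shapes differ, so NumPy falls back to a 1D object array that merely
# holds references to the original arrays. Recent NumPy versions require
# dtype=object to be passed explicitly; older ones fell back silently,
# which is what produced the (960,) object array in the question.
mixed = np.array([np.zeros((2, 3)), np.zeros((3, 2))], dtype=object)
print(mixed.shape, mixed.dtype)  # (2,) object
</code></pre>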
python|arrays|numpy|simpleitk|nifti
3
4,682
72,720,453
Python: extract column from pandas pivot
<p>I have a pivoted table <code>total_chart = df.pivot_table(index=&quot;Name&quot;, values=&quot;Items&quot;, aggfunc='count')</code>. The output gives</p> <pre><code>A 8 B 52 C 24 D 6 E 43 F 5 G 13 I 1 </code></pre> <p>I am trying to get only the second column (the numbers only). Is there any simple way to get it?</p>
<p>The code below should do the trick for you. It counts &quot;Items&quot;, sorts the result ascending by the index &quot;Name&quot;, and outputs just the counts without the index.</p> <pre class="lang-py prettyprint-override"><code>df['Items'].value_counts().sort_index(ascending=True).tolist() </code></pre>
python|pandas|pivot|aggregate
2
4,683
72,835,220
Pandas compare next row (several rows) by condition
<p>I have a dataframe with two columns: price and pattern (can be 0 if absent or 1 if exists).</p> <pre><code>Price Pattern 10 0 12 1 15 0 11 0 9 0 </code></pre> <p>First, i need to iterate to find row with existing pattern (pattern = 1), then</p> <ol> <li>compare price in current row (12) with price in next row (15). Basically i need to know how price changed (15 - 12) and put result in new column &quot;diff_1_tf&quot; so later i can use all of these values to see average / overall picture.</li> <li>compare price in current row (12) with price in third row from current row (9 - 12) and put result in new column &quot;diff_3_tf&quot;.</li> </ol> <p>I know that shift can be usefull but i just cant understand how to make it work in my case. I'm stuck here. Please help.</p> <pre><code>new_df = df[['price', &quot;pattern&quot;]].copy() for row in new_df[&quot;pattern&quot;]: if row == 1: print(row) </code></pre> <p><strong>Update: finally i solved my problem with new_df.iterrows() and index manipulations</strong></p>
<p>I guess that you can use the pandas built-in function <code>diff</code>, which already has a handy <code>periods</code> parameter and finds the difference between the <code>i</code>-th and the <code>i+periods</code>-th value. I extended your example to verify whether this is what you are looking for:</p> <pre class="lang-py prettyprint-override"><code> Price Pattern 0 10 0 1 12 1 2 15 0 3 11 0 4 9 0 5 8 1 6 5 0 7 4 1 8 7 0 9 9 0 10 11 0 11 2 1 </code></pre> <p>Price change column:</p> <pre class="lang-py prettyprint-override"><code>df_new['price_change_%d' % period] = df.diff(periods=period).set_index('Pattern').loc[-1].reset_index(drop=True) </code></pre> <p>Outputs for different period values:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; period = 1 0 3.0 1 -3.0 2 3.0 Name: Price, dtype: float64 &gt;&gt; period = 2 0 -1.0 1 5.0 Name: Price, dtype: float64 &gt;&gt; period = 3 0 -3.0 1 -1.0 2 7.0 Name: Price, dtype: float64 </code></pre> <p>Also, note that for <code>period == 2</code> it will output only 2 values because the 6th and 8th positions both contain pattern 1. I am not sure about those edge cases and how you would want to prevent or post-process them.</p>
python|pandas|compare|rows
0
4,684
72,669,941
Why my model doesn't train with keras ImageDataGenerator?
<p>I use the Keras API to train a CNN on Cifar10.</p> <p>Here is my code :</p> <pre><code>(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() conv_network = Input(shape=(32, 32, 3), name=&quot;img&quot;) x = Conv2D(filters=32, kernel_size=(3,3), strides=2, activation=&quot;relu&quot;)(conv_network) x = Conv2D(filters=64, kernel_size=(3,3), strides=2, activation=&quot;relu&quot;)(x) x = Conv2D(filters=128, kernel_size=(3,3), strides=2, activation=&quot;relu&quot;)(x) x = Flatten()(x) x = Dense(1024, activation='relu')(x) output = Dense(10, activation='softmax')(x) model = tf.keras.Model(conv_network, output, name=&quot;convolutional_network&quot;) model.compile(loss='sparse_categorical_crossentropy',optimizer='Adam', metrics=['accuracy']) </code></pre> <p>I train my model using the following :</p> <pre><code>r = model.fit(x_train, y_train, epochs=25,validation_data=(x_test, y_test)) </code></pre> <p>It trains successfully :</p> <pre><code>Epoch 1/25 1563/1563 [==============================] - 7s 4ms/step - loss: 1.7196 - accuracy: 0.4259 - val_loss: 1.3780 - val_accuracy: 0.5105 Epoch 2/25 1563/1563 [==============================] - 6s 4ms/step - loss: 1.2711 - accuracy: 0.5519 - val_loss: 1.2598 - val_accuracy: 0.5600 Epoch 3/25 1563/1563 [==============================] - 7s 4ms/step - loss: 1.1004 - accuracy: 0.6137 - val_loss: 1.2390 - val_accuracy: 0.5776 Epoch 4/25 1563/1563 [==============================] - 7s 4ms/step - loss: 0.9520 - accuracy: 0.6678 - val_loss: 1.2774 - val_accuracy: 0.5767 Epoch 5/25 1563/1563 [==============================] - 7s 4ms/step - loss: 0.7858 - accuracy: 0.7257 - val_loss: 1.3226 - val_accuracy: 0.5921 Epoch 6/25 1563/1563 [==============================] - 6s 4ms/step - loss: 0.6334 - accuracy: 0.7791 - val_loss: 1.5789 - val_accuracy: 0.5586 Epoch 7/25 1563/1563 [==============================] - 6s 4ms/step - loss: 0.5178 - accuracy: 0.8227 - val_loss: 1.7296 - val_accuracy: 0.5730 Epoch 8/25 1563/1563 [==============================] - 6s 4ms/step - loss: 0.4163 - accuracy: 0.8589 - val_loss: 2.0499 - val_accuracy: 0.5682 Epoch 9/25 1563/1563 [==============================] - 6s 4ms/step - loss: 0.3794 - accuracy: 0.8739 - val_loss: 2.0991 - val_accuracy: 0.5820 Epoch 10/25 1563/1563 [==============================] - 7s 4ms/step - loss: 0.3453 - accuracy: 0.8901 - val_loss: 2.3261 - val_accuracy: 0.5697 </code></pre> <p>Now, when I train with a ImageDataGenerator that doesn't do any kind of augmentation, the predictions are random and it doesn't train at all :</p> <pre><code>datagen = ImageDataGenerator() model.fit(datagen.flow(x_train, y_train, batch_size=32), steps_per_epoch=50000 / 32, epochs=10) </code></pre> <p>Results in :</p> <pre><code>Epoch 1/10 1562/1562 [==============================] - 7s 4ms/step - loss: 1.6822 - accuracy: 0.1010 Epoch 2/10 1562/1562 [==============================] - 7s 4ms/step - loss: 1.2881 - accuracy: 0.0982 Epoch 3/10 1562/1562 [==============================] - 7s 4ms/step - loss: 1.1302 - accuracy: 0.0987 Epoch 4/10 1562/1562 [==============================] - 7s 4ms/step - loss: 0.9817 - accuracy: 0.1001 Epoch 5/10 1562/1562 [==============================] - 7s 4ms/step - loss: 0.8215 - accuracy: 0.1011 Epoch 6/10 1562/1562 [==============================] - 7s 4ms/step - loss: 0.6760 - accuracy: 0.1000 Epoch 7/10 1562/1562 [==============================] - 7s 4ms/step - loss: 0.5445 - accuracy: 0.1005 Epoch 8/10 1562/1562 [==============================] - 7s 4ms/step - loss: 
0.4660 - accuracy: 0.1006 Epoch 9/10 1562/1562 [==============================] - 7s 4ms/step - loss: 0.4048 - accuracy: 0.1002 Epoch 10/10 1562/1562 [==============================] - 7s 4ms/step - loss: 0.3641 - accuracy: 0.1006 </code></pre> <p>What am I doing wrong here ?</p>
<p>I found a solution after trial and error but I still don't fully understand why my previous code didn't work.</p> <pre><code>conv_network = Input(shape=(32, 32, 3), name=&quot;img&quot;) x = Conv2D(filters=32, kernel_size=(3,3), strides=2, activation=&quot;relu&quot;)(conv_network) x = Conv2D(filters=64, kernel_size=(3,3), strides=2, activation=&quot;relu&quot;)(x) x = Conv2D(filters=128, kernel_size=(3,3), strides=2, activation=&quot;relu&quot;)(x) x = Flatten()(x) x = Dense(1024, activation='relu')(x) output = Dense(10, activation='softmax')(x) model = tf.keras.Model(conv_network, output, name=&quot;convolutional_network&quot;) model.summary() model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy']) (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() y_train = tf.keras.utils.to_categorical(y_train, 10) y_test = tf.keras.utils.to_categorical(y_test, 10) datagen = ImageDataGenerator() # fits the model on batches with real-time data augmentation: model.fit(datagen.flow(x_train, y_train, batch_size=32), validation_data=(x_test, y_test), steps_per_epoch=len(x_train) / 32, epochs=10) </code></pre> <p>What changed is that:</p> <ul> <li>instead of using sparse_categorical_crossentropy, I used categorical_crossentropy;</li> <li>instead of training the network with the raw categorical y values, I changed them to one-hot encoded y values.</li> </ul> <p>If someone has a clear explanation of why it works now, I would be glad to hear it. Also, is there a way to successfully train the model without using one-hot encoding? Thank you</p>
python|tensorflow|keras|deep-learning|data-augmentation
0
4,685
59,831,772
Select one dimension of Multidimensional array with list - numpy
<p>I have a 3D array of shape <code>(800,5,4)</code> like:</p> <pre><code>arr = array([[35. , 33. , 33. , 0.15], [47. , 47. , 44. , 0.19], [49. , 56. , 60. , 0.31], ..., [30. , 27. , 25. , 0.07], [54. , 49. , 42. , 0.14], [33. , 30. , 28. , 0.22]]) </code></pre> <p>I have a 1D array of indeces for the second dimension (so they range from 0 to 4) like this:</p> <pre><code>indeces = [0,3,2,0,1,1,1,0,...,0,1,2,2,4,3] </code></pre> <p>I want to select the <code>idx</code> item from the second dimension, and get back an array of shape <code>(800,4)</code></p> <p>I have tried the following but could not make it work:</p> <pre><code>indexed = arr[:,indeces,:] </code></pre> <p>What am I missing?</p>
<pre><code>In [178]: arr = np.arange(24).reshape(2,3,4) </code></pre> <p>If I have a list of 7 items:</p> <pre><code>In [179]: idx = [0,1,1,2,2,0,1] In [180]: arr[:,idx,:] Out[180]: array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [ 8, 9, 10, 11], [ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[12, 13, 14, 15], [16, 17, 18, 19], [16, 17, 18, 19], [20, 21, 22, 23], [20, 21, 22, 23], [12, 13, 14, 15], [16, 17, 18, 19]]]) In [181]: _.shape Out[181]: (2, 7, 4) </code></pre> <p>To produce a (2,4) result, we have to pick one element on the 2nd dim for each pair of indices from the other dimensions.</p> <p>A general case would be to make <code>idx</code> a (2,4) array, and index with dimensions that also broadcast to (2,4):</p> <pre><code>In [182]: idx = np.array([0,1,1,2,2,0,1,0]).reshape(2,4) In [183]: arr[np.arange(2)[:,None],idx,np.arange(4)] Out[183]: array([[ 0, 5, 6, 11], [20, 13, 18, 15]]) In [184]: _.shape Out[184]: (2, 4) </code></pre> <p>Or we could pick with a scalar:</p> <pre><code>In [185]: arr[:,2,:] Out[185]: array([[ 8, 9, 10, 11], [20, 21, 22, 23]]) </code></pre> <p><code>@a_guest</code> showed how to do this with an <code>idx</code> that matches the 1st dimension (and slices the last).</p> <p>One way or another, your <code>idx</code> has to map or broadcast with the other dimensions.</p>
python|numpy|multidimensional-array|numpy-ndarray
1
4,686
59,711,878
How to find max from strings with multiple decimals in python pandas?
<p>I have a data-frame with column entries like below. How can I find max value in such case ? The max value here I would consider ( though not true) is 5.0.5.658</p> <pre><code>4.6.0.2292 4.6.0.3122 4.8.0.1500 4.8.0.1938 5.0.4.283 5.0.5.658 </code></pre>
<p>Because you get the error:</p> <blockquote> <p>TypeError: '&gt;=' not supported between instances of 'float' and 'str' </p> </blockquote> <p>it means there are some missing values. So remove them with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dropna.html" rel="nofollow noreferrer"><code>Series.dropna</code></a> and then get the <code>max</code>, which gives the maximum by lexicographical sorting:</p> <pre><code>print (df['col'].dropna().max()) </code></pre> <p>If you need the maximal value by numerical sorting instead:</p> <pre><code>idx = (df['col'].str.split('\.', expand=True) .astype(float) .sort_values(list(range(4)), ascending=False) .index[0]) print (df.loc[idx, 'col']) </code></pre>
python|python-3.x|pandas
1
4,687
32,408,550
Adding calculated columns to the Dataframe in pandas
<p>There is a large csv file imported. Below is the output, where <code>Flavor_Score</code> and <code>Overall_Score</code> are results of applying <code>df.groupby('beer_name').mean()</code> across a multitude of testers. I would like to add a column Std Deviation for each: <code>Flavor_Score</code> and <code>Overall_Score</code> to the right of the mean column. The function is clear but how to add a column for display? Of course, I can generate an array and append it (right?) but it would seem to be a cumbersome way.</p> <pre><code> Beer_name Beer_Style Flavor_Score Overall_Score Coors Light 2.0 3.0 Sam Adams Dark 4.0 4.5 Becks Light 3.5 3.5 Guinness Dark 2.0 2.2 Heineken Light 3.5 3.7 </code></pre>
<p>You could use</p> <pre><code>df.groupby('Beer_name').agg(['mean','std']) </code></pre> <p>This computes the mean and the std for each group.</p> <hr> <p>For example,</p> <pre><code>import numpy as np import pandas as pd np.random.seed(2015) N = 100 beers = ['Coors', 'Sam Adams', 'Becks', 'Guinness', 'Heineken'] style = ['Light', 'Dark', 'Light', 'Dark', 'Light'] df = pd.DataFrame({'Beer_name': np.random.choice(beers, N), 'Flavor_Score': np.random.uniform(0, 10, N), 'Overall_Score': np.random.uniform(0, 10, N)}) df['Beer_Style'] = df['Beer_name'].map(dict(zip(beers, style))) print(df.groupby('Beer_name').agg(['mean','std'])) </code></pre> <p>yields</p> <pre><code> Flavor_Score Overall_Score mean std mean std Beer_name Becks 5.779266 3.033939 6.995177 2.697787 Coors 6.521966 2.008911 4.066374 3.070217 Guinness 4.836690 2.644291 5.577085 2.466997 Heineken 4.622213 3.108812 6.372361 2.904932 Sam Adams 5.443279 3.311825 4.697961 3.164757 </code></pre>
python|pandas|dataframe|mean|calculated-columns
0
4,688
40,443,357
Understanding dimension of input to pre-defined LSTM
<p>I am trying to design a model in tensorflow to predict next words using lstm.<br> <a href="https://www.tensorflow.org/versions/master/tutorials/recurrent/index.html" rel="nofollow noreferrer">Tensorflow</a> tutorial for RNN gives pseudocode how to use LSTM for PTB dataset.<br> I reached to step of generating batches and labels.</p> <pre><code>def generate_batches(raw_data, batch_size): global data_index data_len = len(raw_data) num_batches = data_len // batch_size #batch = dict.fromkeys([i for i in range(num_batches)]) #labels = dict.fromkeys([i for i in range(num_batches)]) batch = np.ndarray(shape=(batch_size), dtype=np.float) labels = np.ndarray(shape=(batch_size, 1), dtype=np.float) for i in xrange(batch_size) : batch[i] = raw_data[i + data_index] labels[i, 0] = raw_data[i + data_index + 1] data_index = (data_index + 1) % len(raw_data) return batch, labels </code></pre> <p>This code gives batch and labels size (batch_size X 1).<br></p> <p>These batch and labels can also be size of (batch_size x vocabulary_size) using <code>tf.nn.embedding_lookup()</code>.</p> <p>So, the problem here is how to proceed next using the function <code>rnn_cell.BasicLSTMCell</code> or using user defined lstm model?<br> What will be the <code>input dimension to LSTM cell</code> and how will it be used with <code>num_steps</code>?<br> Which size of batch and labels is useful in any scenario?</p>
<p>The full example for PTB is in the source <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py" rel="nofollow noreferrer">code</a>. There are recommended defaults (<code>SmallConfig</code>, <code>MediumConfig</code>, and <code>LargeConfig</code>) that you can use.</p>
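<p>To make the shapes concrete, here is a rough sketch that follows the structure of the PTB example. The sizes are illustrative and the exact module paths may differ between TensorFlow versions, so treat this as an approximation rather than the canonical code:</p> <pre><code>import tensorflow as tf

batch_size, num_steps, hidden_size, vocab_size = 20, 35, 200, 10000

# word ids for one batch: shape [batch_size, num_steps]
input_data = tf.placeholder(tf.int32, [batch_size, num_steps])

# the embedding lookup turns each id into a hidden_size vector,
# so the LSTM input becomes [batch_size, num_steps, hidden_size]
embedding = tf.get_variable('embedding', [vocab_size, hidden_size])
inputs = tf.nn.embedding_lookup(embedding, input_data)

cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
state = cell.zero_state(batch_size, tf.float32)

# num_steps is the unrolling length: at each step the cell only sees a
# [batch_size, hidden_size] slice, while the state carries information
# from one step to the next
outputs = []
for t in range(num_steps):
    if t &gt; 0:
        tf.get_variable_scope().reuse_variables()
    output, state = cell(inputs[:, t, :], state)
    outputs.append(output)
</code></pre> <p>With this layout the labels are simply the same word-id matrix shifted by one position, so a (batch_size x 1) batch corresponds to num_steps = 1; a (batch_size x vocabulary_size) one-hot batch is not needed once the embedding lookup is used, since the lookup works directly on integer ids.</p>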
python|tensorflow|recurrent-neural-network|lstm
0
4,689
40,485,246
Pandas - Convert HH:MM:SS.F string to seconds - Caveat : HH sometimes goes over 24H
<p>I have the following dataframe : </p> <blockquote> <p><code>**flashtalking_df =**</code></p> </blockquote> <pre><code>+--------------+--------------------------+------------------------+ | Placement ID | Average Interaction Time | Total Interaction Time | +--------------+--------------------------+------------------------+ | 2041083 | 00:01:04.12182 | 24:29:27.500 | | 2041083 | 00:00:54.75043 | 52:31:48.89108 | +--------------+--------------------------+------------------------+ </code></pre> <p>where 00:01:04.12182 = HH:MM:SS.F</p> <p>I need to convert both columns, Average Interaction Time, and Total Interaction Time into seconds.</p> <p>The problem is that Total Interaction Time goes over 24h.</p> <p>I found the following code which works for the most part. However, when the Total Interaction Time goes over 24h, it gives me </p> <p><code>ValueError: time data '24:29:27.500' does not match format '%H:%M:%S.%f'</code></p> <p>This is the function I am currently using, which I grabbed from another Stack Overflow question, for both Average Interaction Time and Total Interaction Time:</p> <pre><code>flashtalking_df['time'] = flashtalking_df['Total Interaction Time'].apply(lambda x: datetime.datetime.strptime(x,'%H:%M:%S.%f')) flashtalking_df['timedelta'] = flashtalking_df['time'] - datetime.datetime.strptime('00:00:00.00000','%H:%M:%S.%f') flashtalking_df['Total Interaction Time'] = flashtalking_df['timedelta'].apply(lambda x: x / np.timedelta64(1, 's')) </code></pre> <p>If there's an easier way, please let me know.</p> <p>Thank you for all your help</p>
<p>I think you first need to convert with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="noreferrer"><code>to_timedelta</code></a> and then to <code>seconds</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="noreferrer"><code>astype</code></a>:</p> <pre><code>df['Average Interaction Time'] = pd.to_timedelta(df['Average Interaction Time']) .astype('timedelta64[s]') .astype(int) df['Total Interaction Time'] = pd.to_timedelta(df['Total Interaction Time']) .astype('timedelta64[s]') .astype(int) .map('{:,.2f}'.format) print (df) Placement ID Average Interaction Time Total Interaction Time 0 2041083 64 88,167.00 1 2041083 54 189,108.00 </code></pre> <p>A solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.total_seconds.html" rel="noreferrer"><code>total_seconds</code></a>, thanks to <a href="https://stackoverflow.com/questions/40485246/pandas-convert-hhmmss-f-string-to-seconds-caveat-hh-sometimes-goes-over/40485568?noredirect=1#comment68216435_40485568">NickilMaveli</a>:</p> <pre><code>df['Average Interaction Time'] = pd.to_timedelta(df['Average Interaction Time']) .dt.total_seconds() .map('{:,.2f}'.format) df['Total Interaction Time'] = pd.to_timedelta(df['Total Interaction Time']) .dt.total_seconds() .map('{:,.2f}'.format) print (df) Placement ID Average Interaction Time Total Interaction Time 0 2041083 64.12 88,167.50 1 2041083 54.75 189,108.89 </code></pre>
date|datetime|pandas|seconds
8
4,690
18,651,142
How to reindex_axis Pandas Panel to MultiIndex
<p>I have a 3D panel data. I am unable to reindex it to a multi index along level 2.</p> <p>I have created the multi index 'mind'.</p> <pre><code>import pandas as pd mind = pd.MultiIndex.from_arrays([['Consumer,Cyclical','Industrial','Software'], ['Retail','MiscellaneousManufactur','Technology'], ['AZO','AZZ','AZPN']],names=['sec','sub','ticker']) dfclose = pd.DataFrame([[1.1,2.1,3.1],[1.2,2.2,3.2]], index=['2013-09-02','2013-09-03'], columns=['AZO','AZZ','AZPN']) dfmean = dfclose - dfclose.mean() pdata2 = pd.Panel({'close':dfclose, 'mean':dfmean}) pdata2.minor_axis.name='ticker' pdata3=pdata2.reindex_axis(mind,axis=2,level='ticker') </code></pre> <p>But the pdata3 is not getting mapped to the new multi index and giving NaN.</p>
<p>This appears to be a bug in 0.12 (and will be fixed in 0.13).<br> A workaround is not to reindex after, but to use the MultiIndex when creating dfclose:</p> <pre><code>dfclose = pd.DataFrame([[1.1, 2.1, 3.1], [1.2, 2.2, 3.2]], index=['2013-09-02','2013-09-03'], columns=mind) dfmean = dfclose - dfclose.mean() pdata2 = pd.Panel({'close':dfclose, 'mean':dfmean}) pdata2.minor_axis.name='ticker' In [11]: pdata2.iloc[0] Out[12]: sec Consumer,Cyclical Industrial Software sub Retail MiscellaneousManufactur Technology ticker AZO AZZ AZPN 2013-09-02 1.1 2.1 3.1 2013-09-03 1.2 2.2 3.2 </code></pre> <p>Another option is to just use a DataFrame:</p> <pre><code>In [12]: pd.concat([dfmean, dfclose], axis=1, keys=['dfmean' ,'dfclose']) Out[12]: dfmean dfclose sec Consumer,Cyclical Industrial Software Consumer,Cyclical Industrial Software sub Retail MiscellaneousManufactur Technology Retail MiscellaneousManufactur Technology ticker AZO AZZ AZPN AZO AZZ AZPN 2013-09-02 -0.05 -0.05 -0.05 1.1 2.1 3.1 2013-09-03 0.05 0.05 0.05 1.2 2.2 3.2 </code></pre>
python|pandas|panel|multi-index
1
4,691
61,650,474
ValueError: Columns must be same length as key in pandas
<p>i have df below </p> <pre><code> Cost,Reve 0,3 4,0 0,0 10,10 4,8 len(df['Cost']) = 300 len(df['Reve']) = 300 </code></pre> <p>I need to divide <code>df['Cost'] / df['Reve']</code></p> <p>Below is my code</p> <pre><code>df[['Cost','Reve']] = df[['Cost','Reve']].apply(pd.to_numeric) </code></pre> <p>I got the error <code>ValueError: Columns must be same length as key</code></p> <pre><code>df['C/R'] = df[['Cost']].div(df['Reve'].values, axis=0) </code></pre> <p>I got the error <code>ValueError: Wrong number of items passed 2, placement implies 1</code></p>
<p>The problem is duplicated column names; verify:</p> <pre><code>#generate duplicates df = pd.concat([df, df], axis=1) print (df) Cost Reve Cost Reve 0 0 3 0 3 1 4 0 4 0 2 0 0 0 0 3 10 10 10 10 4 4 8 4 8 df[['Cost','Reve']] = df[['Cost','Reve']].apply(pd.to_numeric) print (df) # ValueError: Columns must be same length as key </code></pre> <p>You can find these column names with:</p> <pre><code>print (df.columns[df.columns.duplicated(keep=False)]) Index(['Cost', 'Reve', 'Cost', 'Reve'], dtype='object') </code></pre> <p>If the duplicated columns contain the same values, you can remove the duplicates with:</p> <pre><code>df = df.loc[:, ~df.columns.duplicated()] df[['Cost','Reve']] = df[['Cost','Reve']].apply(pd.to_numeric) #simplify division df['C/R'] = df['Cost'].div(df['Reve']) print (df) Cost Reve C/R 0 0 3 0.0 1 4 0 inf 2 0 0 NaN 3 10 10 1.0 4 4 8 0.5 </code></pre>
python|pandas
7
4,692
61,769,094
pandas - Broadcasting division
<p>I'm having two dataframes: </p> <ul> <li><code>df_1</code> with single index <code>i</code> and one column <code>LB</code> of <code>float</code>s.</li> <li><code>df_2</code> with multiindex <code>i, a, s</code> and 500 columns of <code>floats</code>s.</li> </ul> <p>My goal is to divide each value in <code>df_1[LB]</code> by each cell of <code>df_2</code> with corresponding index <code>i</code> in order to outputing <code>df_3</code> with same dimension of <code>df_2</code>.</p> <p>My old approach with iteration works for <code>df_2</code> with two levels multiindex but failed when i added the third level.</p> <pre class="lang-py prettyprint-override"><code>df_3= pd.DataFrame(index=df_2.index, columns=df_2.columns) for _i in i: df_3.loc[_i] = df_1.loc[_i][LB] / df_2.loc[_i] # TypeError: cannot align on a multi-index with out specifying the join levels </code></pre> <p>I wonder if there is a general broadcasting way?</p> <p>Edit: I found a way to replicating values of <code>df_1</code> into <code>df_3</code> then divide <code>df_3</code> by <code>df_2</code>:</p> <pre class="lang-py prettyprint-override"><code>df_3 = pd.DataFrame(index=df_2.index, columns=df_2.columns) for _i in i: df_3.loc[_i] = df_1.loc[_i][LB] df_3 = df_3 / df_2 </code></pre> <p>But then if <code>df_1</code> also has multiindex (subset of <code>df_2</code>), what is the nicest way to propagating values of <code>df_1</code> to <code>df_3</code> without looping?</p>
<p>You can <em>broadcast</em> <code>df_1</code> to match the multi-level index of the second dataframe. Then you can easily broadcast the division at the numpy level:</p> <pre><code>tmp = pd.DataFrame(np.repeat(df_1.values, len(df_2)/len(df_1)), index = df_2.index, columns=df_1.columns) df_3 = pd.DataFrame(df_2.values / tmp.values, index=df_2.index, columns=df_2.columns) </code></pre> <p>The only requirements are:</p> <ul> <li><code>df_1</code> must have one single column</li> <li><code>df_2</code> must have either same index as <code>df1</code> or have a multi-index for which the first level is <code>df1.index</code></li> </ul> <hr> <p>In fact, it is enough to just reshape <code>df_2.values</code> and let numpy broadcast the operation:</p> <pre><code>df_3 = pd.DataFrame(data=( df_2.values.reshape(len(df_1),(len(df_2) // len(df_1)) * len(df_2.columns)) / df_1.values).reshape(len(df_2), len(df_2.columns)), index=df_2.index, columns=df_2.columns) </code></pre>
python|pandas
1
4,693
61,968,661
sort dataframe with dates as column headers in pandas
<p>My dates have to be in water years and <strong>I wanted to find a way where I have the column start with date 09/30/1899_24:00 and end with date 9/30/1999_24:00.</strong></p> <p><a href="https://i.stack.imgur.com/Rtx6f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rtx6f.png" alt="enter image description here"></a> </p> <p>Initially I had it like this (picture below) but when I did the dataframe pivot it messed up the order. <a href="https://i.stack.imgur.com/BHMTw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BHMTw.png" alt="enter image description here"></a></p> <p>Here is a snip of my code</p> <pre><code> sim = pd.read_csv(headout,parse_dates=True, index_col='date') sim['Layer'] = sim.groupby('date').cumcount() + 1 sim['Layer'] = 'L' + sim['Layer'].astype(str) sim = sim.pivot(index = None , columns = 'Layer').T sim = sim.reset_index() sim = sim.rename(columns={"level_0": "NodeID"}) sim["NodeID"]= sim['NodeID'].astype('int64') sim['gse'] = sim['NodeID'].map(sta.set_index(['NodeID'])['GSE']) </code></pre>
<h2>The issue is that <code>24:00</code> is not a valid time</h2> <ul> <li>If you don't convert the date column to a valid datetime, the column will be treated as strings. <ul> <li>This will make it very difficult to perform any type of time-based analysis</li> <li>The columns will then be ordered lexicographically, as strings, like this: <code>'09/30/1899_24:00', '10/31/1899_24:00', '11/30/1898_24:00', '11/30/1899_24:00'</code></li> <li>Note that <code>11/30/1898</code> ends up after <code>10/31/1899</code> even though it is a year earlier</li> </ul></li> <li>Replace <code>24:00</code> with <code>23:59</code></li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd

# dataframe
df = pd.DataFrame({'date': ['09/30/1899_24:00', '09/30/1899_24:00', '09/30/1899_24:00', '09/30/1899_24:00',
                            '10/31/1899_24:00', '10/31/1899_24:00', '10/31/1899_24:00', '10/31/1899_24:00',
                            '11/30/1899_24:00', '11/30/1899_24:00']})

 |    | date             |
 |---:|:-----------------|
 |  0 | 09/30/1899_24:00 |
 |  1 | 09/30/1899_24:00 |
 |  2 | 09/30/1899_24:00 |
 |  3 | 09/30/1899_24:00 |
 |  4 | 10/31/1899_24:00 |
 |  5 | 10/31/1899_24:00 |
 |  6 | 10/31/1899_24:00 |
 |  7 | 10/31/1899_24:00 |
 |  8 | 11/30/1899_24:00 |
 |  9 | 11/30/1899_24:00 |

# replace 24:00
df.date = df.date.str.replace('24:00', '23:59')

# format as datetime
df.date = pd.to_datetime(df.date, format='%m/%d/%Y_%H:%M')

# final
date
0   1899-09-30 23:59:00
1   1899-09-30 23:59:00
2   1899-09-30 23:59:00
3   1899-09-30 23:59:00
4   1899-10-31 23:59:00
5   1899-10-31 23:59:00
6   1899-10-31 23:59:00
7   1899-10-31 23:59:00
8   1899-11-30 23:59:00
9   1899-11-30 23:59:00
</code></pre> <h2>Alternatively, remove the time component entirely</h2> <pre class="lang-py prettyprint-override"><code>df.date = df.date.str.replace('_24:00', '')
df.date = pd.to_datetime(df.date, format='%m/%d/%Y')

        date
0 1899-09-30
1 1899-09-30
2 1899-09-30
3 1899-09-30
4 1899-10-31
5 1899-10-31
6 1899-10-31
7 1899-10-31
8 1899-11-30
9 1899-11-30
</code></pre>
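<p>Once the dates are real datetimes, the pivoted columns can be put back into chronological order explicitly. A minimal sketch (assuming <code>sim</code> is the pivoted dataframe from the question and its column labels are already datetimes):</p> <pre><code># sort the date columns chronologically after the pivot
sim = sim.reindex(sorted(sim.columns), axis=1)

# if the original water-year labels are needed for output, rebuild them as strings
sim.columns = [c.strftime('%m/%d/%Y') + '_24:00' for c in sim.columns]
</code></pre>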
python|pandas|sorting|date|header
1
4,694
61,684,195
How to set a ** parameter in Python
<p>I'm a newbie in Python.</p> <p>I'm using Python 3.7.7 and TensorFlow 2.1.0.</p> <p>This is my code:</p> <pre><code>import tensorflow as tf
import tensorflow_datasets as tfds

d = {"name": "omniglot:3.0.0", "data_dir": "d:\\tmp"}

omniglot_builder = tfds.builder("omniglot:3.0.0", builder_init_kwargs=d)
omniglot_builder.download_and_prepare(download_dir="d:\\tmp")
</code></pre> <p>But I get this error:</p> <blockquote> <p>got an unexpected keyword argument 'builder_init_kwargs'</p> </blockquote> <p>I want to set <code>data_dir</code>, but I don't know how to do it. I have tried to set <code>download_dir</code> in <code>omniglot_builder.download_and_prepare(download_dir="d:\\tmp")</code>, but it still downloads to <code>~/tensorflow_datasets</code>.</p> <p>From the TensorFlow documentation for <a href="https://www.tensorflow.org/datasets/api_docs/python/tfds/builder" rel="nofollow noreferrer">tfds.builder</a>:</p> <blockquote> <p>**builder_init_kwargs: dict of keyword arguments passed to the DatasetBuilder. These will override keyword arguments passed in name, if any.</p> </blockquote> <p>How can I set the <code>builder_init_kwargs</code> parameter value?</p>
<p>The <a href="https://www.tensorflow.org/datasets/api_docs/python/tfds/builder" rel="nofollow noreferrer">docs</a> say the <code>tfds.builder</code> function has the signature:</p> <pre><code>tfds.builder(
    name, **builder_init_kwargs
)
</code></pre> <p>So you want to do this:</p> <pre><code>builder_kwargs = {"name": "omniglot:3.0.0", "data_dir": "d:\\tmp"}
tfds.builder(**builder_kwargs)
</code></pre> <p>(Avoid calling the variable <code>dict</code>, since that would shadow the built-in type.) The <code>**</code> syntax unpacks the dictionary into keyword arguments, making the above code equivalent to:</p> <pre><code>tfds.builder(name="omniglot:3.0.0", data_dir="d:\\tmp")
</code></pre>
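<p>The <code>**</code> unpacking is plain Python and has nothing to do with TensorFlow. A tiny illustrative sketch (the <code>build</code> function here is made up for demonstration):</p> <pre><code>def build(name, data_dir):
    return "building " + name + " in " + data_dir

kwargs = {"name": "omniglot:3.0.0", "data_dir": "d:\\tmp"}

# the two calls below are equivalent
print(build(**kwargs))
print(build(name="omniglot:3.0.0", data_dir="d:\\tmp"))
</code></pre>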
python|tensorflow|tensorflow-datasets
1
4,695
58,082,023
How exactly does Ray share data to workers?
<p>There are many simple tutorials and also SO questions and answers out there which claim that Ray somehow shares data with the workers, but none of these go into the exact details of what gets shared how on which OS.</p> <p>For example in this SO answer: <a href="https://stackoverflow.com/a/56287012/1382437">https://stackoverflow.com/a/56287012/1382437</a> an np array gets serialised into the shared object store and then used by several workers all accessing the same data (code copied from that answer):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import ray ray.init() @ray.remote def worker_func(data, i): # Do work. This function will have read-only access to # the data array. return 0 data = np.zeros(10**7) # Store the large array in shared memory once so that it can be accessed # by the worker tasks without creating copies. data_id = ray.put(data) # Run worker_func 10 times in parallel. This will not create any copies # of the array. The tasks will run in separate processes. result_ids = [] for i in range(10): result_ids.append(worker_func.remote(data_id, i)) # Get the results. results = ray.get(result_ids) </code></pre> <p>The <code>ray.put(data)</code> call puts the serialised representation of the data into the shared object store and passes back a handle/id for it.</p> <p>then when <code>worker_func.remote(data_id, i)</code> is invoked, the <code>worker_func</code> gets passed the deserialised data.</p> <p>But what exactly happens in between? Clearly the <code>data_id</code> is used to locate the serialised version of data and deserialise it.</p> <p><strong>Q1:</strong> When the data gets &quot;deserialised&quot; does this always create a copy of the original data? I would think yes, but I am not sure.</p> <p>Once the data has been deserialised, it gets passed to a worker. Now, if the same data needs to get passed to another worker, there are two possibilities:</p> <p><strong>Q2:</strong> When an object that has already been deserialised gets passed to a worker, will it be via another copy or that exact same object? If it is the exact same object, is this using the standard shared memory approach to share data between processes? On Linux this would mean copy-on-write, so does this mean that as soon as the object is written to, another copy of it is created?</p> <p><strong>Q3:</strong> Some tutorials/answers seem to indicate that the overhead of deserialising and sharing data between workers is very different depending on the type of data (Numpy versus non-Numpy) so what are the details there? Why is numpy data shared more efficiently and is this still efficient when the client tries to write to that numpy array (which I think would always create a local copy for the process?) ?</p>
<p>This is a great question, and one of the cool features that Ray has. Ray provides a way to <strong>schedule functions in a distributed environment</strong>, but it also provides a <strong>cluster store</strong> that manages data sharing between these tasks.</p> <p>Here are the kinds of objects that Ray stores and manages:</p> <ul> <li>Objects added with <code>ray.put</code></li> <li>A result from <code>function.remote</code></li> <li>A Ray actor (the instantiation of a remote class in a Ray cluster)</li> </ul> <p>For all of these alternatives, the objects are managed by the Ray Object Store - also known as Plasma in some documents (see <a href="https://docs.ray.io/en/latest/ray-core/memory-management.html" rel="nofollow noreferrer">Memory Management in Ray Docs</a>, and <a href="https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview#heading=h.wjbl6vbr5enb" rel="nofollow noreferrer">Object Management in the Ray Architecture Whitepaper</a>).</p> <p>Given a Ray cluster with multiple nodes, and having each node running multiple processes, Ray may store objects in any of these locations:</p> <ul> <li>The local memory space for the running process</li> <li>The shared memory space for all processes in a single node</li> <li>(Only if necessary to reclaim memory) Persistent storage / hard drive</li> </ul> <p>For example, when you call a function remotely in Ray, Ray needs to manage the result from that function. There are two alternatives:</p> <ul> <li>If the serialized result is small, then Ray will send it back directly to the caller, and it will be stored <strong>in the local memory space of the caller</strong>. (see left side of the picture below, where the result is stored in the owner process)</li> <li>If the serialized result is large, then Ray will store it in the <strong>shared memory of the node executing the function</strong>. (see right side of the picture below, where the result is stored in the shared-memory object store in the local node).</li> </ul> <p><a href="https://i.stack.imgur.com/dbEWs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dbEWs.png" alt="ray example" /></a></p> <p>In general, Ray aims to make these details transparent to the user. As long as you're using the appropriate Ray APIs, Ray will behave as expected, and take care of managing all objects stored in the cluster's object store.</p> <hr /> <p>Now onto your questions:</p> <p>Q1: When does the data get serialized/deserialized?</p> <ul> <li>It all depends on whether the data has to be transferred over the network or not. If the data does not need to travel over the network, or be spilled to disk, Ray will try to avoid serializing/deserializing it, because there's a cost to doing that. For example, an object in shared memory does not need to be serialized/deserialized, because it can be directly dereferenced by the processes with access to that memory.</li> </ul> <p>Q2: When an object that has already been deserialised gets passed to a worker, will it be via another copy or that exact same object?</p> <ul> <li><p>Objects in the Ray Object Store are immutable (except for Actors, which are a special kind of object). When Ray shares an object with another worker, it does so because it knows the object will not change (actors, on the other hand, are always held in a single worker, and cannot be copied to multiple workers).</p> </li> <li><p>In short: You can't modify the objects in the Ray Object Store.
If you want an updated version of an object, you'll need to create a new object.</p> </li> </ul> <p>Q3: Some tutorials/answers seem to indicate that the overhead of deserialising and sharing data between workers is very different depending on the type of data (Numpy versus non-Numpy) so what are the details there?</p> <ul> <li><p>Some data is designed to have very similar representation in-memory as in serialized format. For example, Arrow objects only need to be 'cast' into a byte stream, and shared without performing any special computation. Numpy data is also laid out in memory as a C array that can simply be 'cast' to a byte buffer (on the other hand, Python lists are an array of references, where you need to serialize the object of each reference)</p> </li> <li><p>Other kinds of data require more computation to be serialized. For example, if you need to serialize a Python function along with its closure, then it may be very slow. Consider the function below: To serialize it you will need to serialize the function, but also all of the variables that it accesses from its enclosing context (e.g. <code>MAX_ELEMENTS</code>).</p> </li> </ul> <pre class="lang-py prettyprint-override"><code>MAX_ELEMENTS = 10 def batch_elements(input): arr = [] for elm in input: arr.append(elm) if len(arr) &gt; MAX_ELEMENTS: yield arr arr = [] if arr: yield arr </code></pre> <p>I hope that helps - I'm happy to dive further into this.</p>
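<p>For Q2 and Q3, a small experiment makes the zero-copy, read-only behaviour of numpy arrays concrete. This is only a sketch, assuming a local <code>ray.init()</code>; details such as the read-only flag may vary across Ray versions and object sizes:</p> <pre><code>import numpy as np
import ray

ray.init()

data = np.zeros(10**7)
ref = ray.put(data)        # serialized once into the shared object store

restored = ray.get(ref)    # for large numpy arrays this is typically a
                           # zero-copy view backed by shared memory
print(restored.flags.writeable)   # usually False: the stored object is immutable

# to "modify" it, copy first and put a new object
updated = restored.copy()
updated[0] = 1.0
new_ref = ray.put(updated)
</code></pre>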
python|numpy|serialization|shared-memory|ray
3
4,696
57,744,353
DataFrame is empty, expected data in it
<p>I want to find duplicate items between two columns in Excel, i.e. rows where both columns hold the same value. So for example my Excel consists of:</p> <pre><code>  list_A  list_B
0  ideal   ideal
1  brown  colour
2   blue    blew
3    red     red
</code></pre> <p>I checked the pandas documentation and tried the <code>duplicated</code> method, but I simply don't know why it keeps saying "DataFrame is empty". It finds both columns and I guess iterates over them, but why doesn't it compare the values?</p> <p>I also tried using <code>iterrows</code> but honestly don't know how to implement it.</p> <p>When running the code I get this output:</p> <p>Empty DataFrame</p> <p>Columns: [list A, list B]</p> <p>Index: []</p> <pre><code>import pandas as pd

pt = pd.read_excel(r"C:\Users\S531\Desktop\pt.xlsx")
dfObj = pd.DataFrame(pt)
doubles = dfObj[dfObj.duplicated()]
print(doubles)
</code></pre> <p>The output I'm looking for is:</p> <pre><code>  list_A list_B
0  ideal  ideal
3    red    red
</code></pre> <p>The final solved code looks like this:</p> <pre><code>import pandas as pd

pt = pd.read_excel(r"C:\Users\S531\Desktop\pt.xlsx")
doubles = pt[pt['list_A'] == pt['list_B']]
print(doubles)
</code></pre>
<p>The term "duplicate" is usually used to mean rows that are exact duplicates of previous rows (see the documentation of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer">pd.DataFrame.duplicate</a>).</p> <p>What you are looking for is just the rows where these two columns are equal. For that, you want:</p> <p><code>doubles = pt[pt['list_A'] == pt['list_B']]</code></p>
python|pandas
0
4,697
58,086,280
pandas join remove columns if the same
<p>I have 2 dataframes <code>df1</code> and <code>df2</code> that I want to join based on their column <code>'C'</code></p> <pre><code>import pandas df1 = pandas.DataFrame(data=[[1,0,2,4],[2,3,1,3]],columns=['A','B','C','D']) df2 = pandas.DataFrame(data=[[2,2,2,4],[3,4,1,3]],columns=['A','F','C','D']) df1 A B C D 0 1 0 2 4 1 2 3 1 3 df2 A F C D 0 2 2 2 4 1 3 4 1 3 # Merge the dataframes dataframe_matched = df1.join( other=df2.set_index('C'), on='C', how="inner", lsuffix="_left", rsuffix="_right", sort=True, ) dataframe_matched A_left B C D_left A_right F D_right 1 2 3 1 3 3 4 3 0 1 0 2 4 2 2 4 </code></pre> <p>The columns <code>D_left</code> and <code>D_right</code> are the same. Is there an easy way to keep only 1 with the original name?</p> <pre><code>dataframe_matched A_left B C D A_right F 1 2 3 1 3 3 4 0 1 0 2 4 2 2 </code></pre>
<p>You can do <code>drop_duplicates</code></p> <pre><code>df1.merge(df2,on='C').T.drop_duplicates().T Out[288]: A_x B C D_x A_y F 0 1 0 2 4 2 2 1 2 3 1 3 3 4 </code></pre> <p>Update </p> <pre><code>pd.concat([df1.set_index('C'),df2.set_index('C')],1,keys=['right','left']).\ T.reset_index(level=1).\ drop_duplicates().set_index('level_1',append=True).T Out[337]: right left level_1 A B D A F C 2 1 0 4 2 2 1 2 3 3 3 4 </code></pre>
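<p>Note that the transpose trick upcasts everything to a single dtype. If that matters, an alternative sketch is to drop the duplicated column explicitly and restore its original name (this assumes the two <code>D</code> columns really are identical):</p> <pre><code>merged = df1.merge(df2, on='C', suffixes=('_left', '_right'))

if merged['D_left'].equals(merged['D_right']):
    merged = merged.drop(columns='D_right').rename(columns={'D_left': 'D'})

print(merged)
#    A_left  B  C  D  A_right  F
# 0       1  0  2  4        2  2
# 1       2  3  1  3        3  4
</code></pre>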
python|pandas|join
1
4,698
37,106,284
Pandas and Large dataframe
<p>I decided to use pandas (0.18.1) to work on a log data from one of my models using discrete element particles. This log has attributes related to 400000 particles (x,y,z positions and velocities; around 5M rows) with the following structure:</p> <pre><code>***************************************** * Log File Started 16:12:54 Fri May 06 2016 * 4.00-182 (64-bit) * * * ***************************************** elrond&gt; Ball_Id 400000 Ballx 4.90707890560e+002 Bally 9.19154644947e+001 Ballz -1.02229145082e+002 Top 0 Dx 1.38904597749e+000 Dy -6.35282219552e-001 Dz -1.64199872399e+001 Velx -1.02171891554e-001 Vely -1.05325799073e-002 Velz 4.04701964190e-003 V_rotx -6.86579713474e-004 V_roty 9.14539972137e-004 V_rotz -7.76239471255e-005 Ball_Id 399999 Ballx 7.48469370428e+002 Bally 2.46351257548e+001 Ballz -8.62490399310e+001 Top 0 Dx 6.96274451933e-001 Dy 1.32036797483e+000 Dz -1.87517847236e+001 Velx -1.05970416552e-002 Vely 7.21491947832e-003 Velz 7.55093644847e-004 V_rotx 5.17377621567e-006 V_roty 2.59041151397e-005 V_rotz -2.31863427848e-005 Ball_Id 399998 Ballx 1.19395239848e+002 Bally 7.80444921824e+001 Ballz 2.34352803814e+000 Top 0 Dx 5.90917177795e+001 Dy 1.37004693793e+000 Dz 1.61822040639e+001 Velx 1.31243808962e+001 Vely -8.20542806383e-001 Velz 6.19737823128e+000 V_rotx -4.89777825136e-002 V_roty 9.36324827264e-002 V_rotz -5.90727285357e-002 </code></pre> <p>I want to get a file with this format:</p> <pre><code>Ball_Id Ballx Bally Ballz Topo Dx Dy Dz Velx Vely Velz V_rotx V_roty V_rotz 400000 4.90714073236e+002 9.19065373175e+001 -1.02231392317e+002 0 1.39522865407e+000 -6.44209396797e-001 -1.64222344741e+001 2.68881171417e-002 -1.81227520077e-002 -4.04738585013e-003 7.75669240314e-005 -4.00875407555e-004 -1.41810083383e-004 399999 7.48472521138e+002 2.46451444724e+001 -8.62470162686e+001 0 6.99425161310e-001 1.33038669240e+000 -1.87497610612e+001 1.18932839949e-002 4.69256261481e-003 1.38621378252e-002 -6.30154171502e-006 -3.23043526114e-004 2.16368702869e-007 399998 1.28116171848e+002 7.67039376593e+001 7.55623907648e+000 0 6.78126497794e+001 2.94924148016e-002 2.13949151023e+001 6.33940244884e+000 1.73376959946e-001 4.85967665797e+000 -3.52816583310e-001 -5.38872247688e-001 1.12736371677e-001 399996 4.79841096924e+002 -1.62882386399e+002 -1.30791611129e+002 Topo1 2.73837679243e+000 -1.47077675894e+000 -6.28235946603e+000 7.90493795999e-002 -3.39089755154e-002 1.02726075741e-003 -1.14738159279e-004 -7.24753898272e-005 -6.78627383629e-005 </code></pre> <p>So far I was able to write a very inefficient code that takes an eternity to get to the final file that I want. Any suggestion to improve it would be great. Thanks</p> <pre><code>import pandas as pd #================================================================================= df = pd.read_csv("Desloc_Caixa_Compress_14_04_16_19.log",index_col=0,header = None, skiprows =[0,1,2,3,4,5,6,7],engine='python',skipfooter = 4, sep=" ") dados = df[0:14] #================================================================================= k=14; f=28; m=28; n=42 while (n&lt;=len(df)): a=df[k:f] b=df[m:n] k+=28; f+=28 m+=28; n+=28 dados = pd.concat([dados,a, b], axis=1) #================================================================================= d= dados.transpose() data = d.set_index('Ball_Id') data.to_csv('Data_14_04_16_19.txt', sep='\t') #================================================================================= </code></pre>
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/reshaping.html" rel="nofollow"><code>df.pivot</code></a>:</p> <pre><code>import pandas as pd df = pd.read_csv("Desloc_Caixa_Compress_14_04_16_19.log", header=None, skiprows=8, engine='python', skipfooter=4, sep=" ") df['index'] = (df[0] == 'Ball_Id').cumsum() df = df.pivot(index='index', columns=0, values=1) </code></pre> <p>yields</p> <pre><code>0 Ball_Id Ballx Bally Ballz Dx Dy index 1 400000.0 490.707891 91.915464 -102.229145 1.389046 -0.635282 2 399999.0 748.469370 24.635126 -86.249040 0.696274 1.320368 3 399998.0 119.395240 78.044492 2.343528 59.091718 1.370047 \ 0 Dz Top V_rotx V_roty V_rotz Velx Vely index 1 -16.419987 0.0 -0.000687 0.000915 -0.000078 -0.102172 -0.010533 2 -18.751785 0.0 0.000005 0.000026 -0.000023 -0.010597 0.007215 3 16.182204 0.0 NaN NaN NaN 13.124381 -0.820543 0 Velz index 1 0.004047 2 0.000755 3 NaN </code></pre>
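<p>If you also want the columns in the log's original attribute order and <code>Ball_Id</code> as the index, a possible follow-up after the pivot (a sketch; it assumes the pivot produced all of these columns):</p> <pre><code>cols = ['Ball_Id', 'Ballx', 'Bally', 'Ballz', 'Top', 'Dx', 'Dy', 'Dz',
        'Velx', 'Vely', 'Velz', 'V_rotx', 'V_roty', 'V_rotz']

df = df.reindex(columns=cols)                 # restore the log's attribute order
df['Ball_Id'] = df['Ball_Id'].astype('int64')
df = df.set_index('Ball_Id').sort_index(ascending=False)   # optional: 400000 first

df.to_csv('Data_14_04_16_19.txt', sep='\t')
</code></pre>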
python|pandas
1
4,699
36,971,635
Scipy fitting polynomial model to some data
<p>I am trying to find an appropriate function for the permeability of cells under varying conditions. If I assume constant permeability, I can fit it to the experimental data and use sklearn's <code>PolynomialFeatures</code> together with a <code>LinearModel</code> (as explained in <a href="https://stackoverflow.com/questions/31406975/polynomial-regression-using-python">this post</a>) in order to determine a correlation between the conditions and the permeability. However, the permeability is not constant, so now I am trying to fit my model with the permeability as a function of the process conditions. The <code>PolynomialFeatures</code> module of sklearn is quite nice to use.</p> <p>Is there an equivalent function within scipy or numpy which allows me to create a polynomial model (including interaction terms, e.g. <code>a*x[0]*x[1]</code> etc.) of varying order without writing the whole function by hand?</p> <p>The standard polynomial class in numpy does not seem to support interaction terms.</p>
<p>I'm not aware of such a function that does exactly what you need, but you can achieve it using a combination of <code>itertools</code> and <code>numpy</code>.</p> <p>If you have <code>n_features</code> predictor variables, you essentially must generate all vectors of length <code>n_features</code> whose entries are non-negative integers and sum to the specified order. Each new feature column is the product of the original features raised component-wise to one of these power vectors.</p> <p>For example, if <code>order = 3</code> and <code>n_features = 2</code>, one of the new features will be the old features raised to the respective powers <code>[2,1]</code>. I've written some code below for arbitrary order and number of features. I've modified the generation of vectors that sum to <code>order</code> from <a href="https://stackoverflow.com/a/28969798/1430829">this post</a>.</p> <pre><code>import itertools
import numpy as np
from scipy.special import binom

def polynomial_features_with_cross_terms(X, order):
    """
    X: numpy ndarray
        Matrix of shape `(n_samples, n_features)` to be transformed.
    order: integer
        Order of polynomial features to be computed.

    returns: T, powers.
        `T` is a matrix of shape `(n_samples, n_poly_features)`.
        Note that `n_poly_features` is equal to:

           `n_features+order-1` Choose `n_features-1`

        See: https://en.wikipedia.org\
        /wiki/Stars_and_bars_%28combinatorics%29#Theorem_two

        `powers` is a matrix of shape `(n_features, n_poly_features)`.
        Each column specifies the power by row of the respective feature,
        in the respective column of `T`.
    """
    n_samples, n_features = X.shape
    n_poly_features = int(binom(n_features+order-1, n_features-1))
    powers = np.zeros((n_features, n_poly_features))
    T = np.zeros((n_samples, n_poly_features), dtype=X.dtype)

    combos = itertools.combinations(range(n_features+order-1), n_features-1)
    for i,c in enumerate(combos):
        powers[:,i] = np.array([
            b-a-1 for a,b in zip((-1,)+c, c+(n_features+order-1,))
        ])

        T[:,i] = np.prod(np.power(X, powers[:,i]), axis=1)

    return T, powers
</code></pre> <p>Here's some example usage:</p> <pre><code>&gt;&gt;&gt; X = np.arange(-5,5).reshape(5,2)
&gt;&gt;&gt; T,p = polynomial_features_with_cross_terms(X, order=3)
&gt;&gt;&gt; print(X)
[[-5 -4]
 [-3 -2]
 [-1  0]
 [ 1  2]
 [ 3  4]]
&gt;&gt;&gt; print(p)
[[ 0.  1.  2.  3.]
 [ 3.  2.  1.  0.]]
&gt;&gt;&gt; print(T)
[[ -64  -80 -100 -125]
 [  -8  -12  -18  -27]
 [   0    0    0   -1]
 [   8    4    2    1]
 [  64   48   36   27]]
</code></pre> <p>Finally, I should mention that the <a href="https://en.wikipedia.org/wiki/Polynomial_kernel" rel="nofollow noreferrer">SVM polynomial kernel</a> achieves exactly this effect without explicitly computing the polynomial map. There are of course pros and cons to this, but I figured I should mention it for you to consider if you have not yet.</p>
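<p>If you need all terms up to a given order, rather than only the terms of one exact total degree, one way is to stack the blocks this helper returns. A quick sketch using the function defined above:</p> <pre><code># design matrix with a constant column plus all cross terms of total degree 1..max_order
max_order = 3
blocks = [polynomial_features_with_cross_terms(X, o)[0] for o in range(1, max_order + 1)]
design = np.hstack([np.ones((X.shape[0], 1), dtype=X.dtype)] + blocks)

# `design` can then be fed to np.linalg.lstsq or a scipy.optimize fit
</code></pre>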
python|numpy|scipy|scikit-learn
1