Dataset schema (for string columns, min/max are character lengths):

| column     | dtype  | min   | max   |
|------------|--------|-------|-------|
| Unnamed: 0 | int64  | 0     | 378k  |
| id         | int64  | 49.9k | 73.8M |
| title      | string | 15    | 150   |
| question   | string | 37    | 64.2k |
| answer     | string | 37    | 44.1k |
| tags       | string | 5     | 106   |
| score      | int64  | -10   | 5.87k |
377,700
45,800,520
Specify Multi-Level columns using pd.read_clipboard?
Here's some data from another question:

```
main    Meas1     Meas2  Meas3  Meas4   Meas5
sublvl  Value     Value  Value  Value   Value
count   7.000000  1.0    1.0    582.00  97.000000
mean    30        37.0   26.0   33.03   16.635350
```

I would like to read in this data in such a way that the first column is actually the index, and the first two rows are treated as multi-level columns where `MeasX` is the first level and `Value` is the second level.

How can I do this using `pd.read_clipboard`?

---

My `pd.read_clipboard` series:

- [How do you handle column names having spaces in them when using pd.read_clipboard?](https://stackoverflow.com/questions/45528198/)
- [How to handle custom named index when copying a dataframe using pd.read_clipboard?](https://stackoverflow.com/questions/45725500/how-to-handle-custom-named-index-when-copying-a-dataframe-using-pd-read-clipboar)
- [Copying MultiIndex dataframes with pd.read_clipboard?](https://stackoverflow.com/questions/45740537/copying-multiindex-dataframes-with-pd-read-clipboard)
```
In [17]: pd.read_clipboard(sep='\s+', index_col=[0], header=[0, 1])
Out[17]:
main   Meas1 Meas2 Meas3   Meas4     Meas5
sublvl Value Value Value   Value     Value
count    7.0   1.0   1.0  582.00  97.00000
mean    30.0  37.0  26.0   33.03  16.63535
```
python|pandas|dataframe
6
377,701
45,793,412
Structure dataset from rows to columns pandas python
I have a dataframe like the following, with many feature columns, but only 3 mentioned below:

```
productid |feature1   |value1  |feature2    |value2 |feature3     |value3
100001    |weight     |130g    |            |       |price        |$140.50
100002    |weight     |200g    |pieces      |12 pcs |dimensions   |150X75cm
100003    |dimensions |70X30cm |price       |$22.90 |             |
100004    |price      |$12.90  |manufacturer|ABC    |calories     |556Kcal
100005    |calories   |1320Kcal|dimensions  |20X20cm|manufacturer |XYZ
```

and I want to structure it in the following way using pandas:

```
productid  weight  dimensions  price    calories  no. of pieces  manufacturer
100001     130g                $140.50
100002     200g    150X75cm                       12 pcs
100003             70X30cm     $22.90
100004                         $12.90   556Kcal                  ABC
100005             20X20cm              1320Kcal                 XYZ
```

I studied various pandas methods like reset_index, stack, etc., but couldn't get it to convert in the required way.
Here's a reproducible example; check the comments for details.

```python
import pandas as pd
from StringIO import StringIO

data = """
productid|feature1|value1|feature2|value2|feature3|value3
100001|weight|130g|||price|$140.50
100002|weight|200g|pieces|12pcs|dimensions|150X75cm
100003|dimensions|70X30cm|price|$22.90||
100004|price|$12.90|manufacturer|ABC|calories|556Kcal
100005|calories|1320Kcal|dimensions|20X20cm|manufacturer|XYZ
"""

# simulate reading from a csv file
df = pd.read_csv(StringIO(data), sep="|")

# pivot all (productid, feature{x}, value{x}) tuples into tabular dataframes
# and append them to the following list
converted = []

# you can construct this programmatically (out of scope for now)
mapping = {"feature1": "value1", "feature2": "value2", "feature3": "value3"}

# iteritems() becomes items() in Python 3
for feature, values in mapping.iteritems():
    # pivot (productid, feature{x}, value{x}) into a tabular dataframe
    # column names: feature{x}
    # values: value{x}
    df1 = pd.pivot_table(df, values=values, index=["productid"],
                         columns=[feature], aggfunc=lambda x: x.iloc[0])
    # remove the name from the pivoted dataframe to get a standard dataframe
    df1.columns.name = None
    # keep productid in the dataframe as a column
    df1.reset_index(inplace=True)
    converted.append(df1)

# merge all dataframes in the list converted into one dataframe
final_df1 = converted[0]
for index, df_ in enumerate(converted[1:]):
    final_df1 = pd.merge(final_df1, df_, how="outer")

import numpy as np
# replace None with np.nan so groupby().first() takes the first non-NaN value
final_df1.fillna(value=np.nan, inplace=True)

# format the data to match what the OP wants
final_df1 = final_df1.groupby("productid", as_index=False).first()

print(final_df1)
```

The output:

```
   productid dimensions manufacturer pieces    price calories weight
0     100001        NaN          NaN    NaN  $140.50      NaN   130g
1     100002   150X75cm          NaN  12pcs      NaN      NaN   200g
2     100003    70X30cm          NaN    NaN   $22.90      NaN    NaN
3     100004        NaN          ABC    NaN   $12.90  556Kcal    NaN
4     100005    20X20cm          XYZ    NaN      NaN 1320Kcal    NaN
```
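On Python 3 (where the `StringIO` module above becomes `io.StringIO`), the same reshape can be sketched with one melt-style stack and a single pivot. This is an alternative illustration, not the original answer's approach:

```python
import pandas as pd
from io import StringIO

data = """\
productid|feature1|value1|feature2|value2|feature3|value3
100001|weight|130g|||price|$140.50
100002|weight|200g|pieces|12pcs|dimensions|150X75cm
100003|dimensions|70X30cm|price|$22.90||
100004|price|$12.90|manufacturer|ABC|calories|556Kcal
100005|calories|1320Kcal|dimensions|20X20cm|manufacturer|XYZ
"""

df = pd.read_csv(StringIO(data), sep="|")

# stack each (feature{x}, value{x}) pair into long (productid, feature, value) rows
frames = []
for i in (1, 2, 3):
    sub = df[["productid", "feature%d" % i, "value%d" % i]].copy()
    sub.columns = ["productid", "feature", "value"]
    frames.append(sub)

# empty feature cells were read as NaN, so drop them
long_form = pd.concat(frames, ignore_index=True).dropna(subset=["feature"])

# a single pivot turns the long rows back into one column per feature
result = long_form.pivot_table(index="productid", columns="feature",
                               values="value", aggfunc="first")
print(result)
```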
python|pandas|dataframe
1
377,702
45,751,891
There are two formats of datetime in the same time series; how can I change them to one format?
I want to split a time series into two sets: train and test. Here's my code:

```python
train = data.iloc[:1100]
test = data.iloc[1101:]
```

Here's what the time series looks like: [image](https://i.stack.imgur.com/nMo1R.png)

And here's the train series. There's no time, only the date, in the index:

[image](https://i.stack.imgur.com/IFZFQ.png)

Finally, the test: [image](https://i.stack.imgur.com/PHZKY.png)

How do I change the index to the same form?
Consider the simplified series `s`:

```
s = pd.Series(1, pd.date_range('2010-08-16', periods=5, freq='12H'))

s

2010-08-16 00:00:00    1
2010-08-16 12:00:00    1
2010-08-17 00:00:00    1
2010-08-17 12:00:00    1
2010-08-18 00:00:00    1
Freq: 12H, dtype: int64
```

But when I subset `s`, leaving only `Timestamp`s that need no time element, pandas does me the "favor" of not displaying a bunch of zeros for no reason.

```
s.iloc[::2]

2010-08-16    1
2010-08-17    1
2010-08-18    1
Freq: 24H, dtype: int64
```

But rest assured, the values are the same:

```
s.iloc[::2].index[0] == s.index[0]

True
```

And have the same dtype and precision:

```
print(s.iloc[::2].index.values.dtype)

dtype('<M8[ns]')
```

And

```
print(s.index.values.dtype)

dtype('<M8[ns]')
```
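If a uniform printed form still matters, one display-only option (a sketch, not part of the original answer; it replaces the datetime index with plain strings, so only do this for presentation) is to format both indexes explicitly:

```python
# render every index entry with an explicit time component (display only)
train.index = train.index.strftime('%Y-%m-%d %H:%M:%S')
test.index = test.index.strftime('%Y-%m-%d %H:%M:%S')
```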
python-2.7|pandas|datetime|timestamp|time-series
2
377,703
46,093,577
what is the equivalent of theano.tensor.clip in pytorch?
I want to clip my tensor (not gradient) values to some range. Is there any function in PyTorch like the theano.tensor.clip() function in Theano?
The function you are searching for is called `torch.clamp`. You can find the documentation [here](http://pytorch.org/docs/master/torch.html#torch.clamp).
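A minimal usage sketch (the tensor values below are made up for illustration; `torch.clamp` clips element-wise into `[min, max]`):

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

# equivalent of theano.tensor.clip(x, -1, 1)
y = torch.clamp(x, min=-1.0, max=1.0)
print(y)  # tensor([-1.0000, -0.5000,  0.0000,  0.5000,  1.0000])
```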
theano|pytorch
2
377,704
46,153,647
KeyError: 0 when accessing value in pandas series
In my script I have `df['Time']` as shown below:

```
497   2017-08-06 11:00:00
548   2017-08-08 15:00:00
580   2017-08-10 04:00:00
646   2017-08-12 23:00:00
Name: Time, dtype: datetime64[ns]
```

But when I do

```python
t1 = pd.Timestamp(df['Time'][0])
```

I get an error like this:

> KeyError: 0

Do I need any type conversion here? If yes, how can it be fixed?
You're looking for `df.iloc`:

```python
df['Time'].iloc[0]
```

`df['Time'][0]` would've worked if your series had an index beginning from `0`.

And if you need the scalar only, use [`Series.iat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iat.html):

```python
df['Time'].iat[0]
```
python|pandas|indexing|series|keyerror
37
377,705
45,853,159
Python / Pandas - Calculating ratio
I have this dataframe:

```
bal:
             year   id   unit period    Revenues  Ativo Não-Circulante  \
business_id
9564         2012  302  dsada  anual  5964168.52           10976013.70
9564         2011  303  dsada  anual  5774707.15           10867868.13
2361         2013  304  dsada  anual  3652575.31            6608468.52
2361         2012  305  dsada  anual   321076.15            6027066.03
2361         2011  306  dsada  anual  3858137.49            9733126.02
2369         2012  307  dsada  anual   351373.66            9402830.89
8104         2012  308  dsada  anual  3503226.02            6267307.01
...
```

I want to create a column named "Growth". It would be:

(Revenues from this year / Revenues from last year) - 1

The dataframe should look like this:

```
             year   id   unit period    Revenues  Growth  \
business_id
9564         2012  302  dsada  anual  5964168.52  0.0328
9564         2011  303  dsada  anual  5774707.15     NaN
2361         2013  304  dsada  anual  3652575.31   10.37
2361         2012  305  dsada  anual   321076.15   -0.91
2361         2011  306  dsada  anual  3858137.49     NaN
2369         2012  307  dsada  anual   351373.66     NaN
8104         2012  308  dsada  anual  3503226.02     NaN
...
```

How could I do that?
I'll assume your dataframe is named `df`. First reset your index so that `business_id` is a column, then sort the result on `year`. Now group the dataframe on `business_id` and transform the result to get the percent change in revenues. Finally, re-sort the index to get the original order.

```python
df2 = df.reset_index().sort_values(['year'])
df2 = (
    df2
    .assign(Growth=df2.groupby(['business_id'])['Revenues'].transform(
        lambda group: group.pct_change()))
    .sort_index()
)

>>> df2
   business_id  year   id   unit period    Revenues  Ativo Não-Circulante     Growth
0         9564  2012  302  dsada  anual  5964168.52           10976013.70   0.032809
1         9564  2011  303  dsada  anual  5774707.15           10867868.13        NaN
2         2361  2013  304  dsada  anual  3652575.31            6608468.52  10.376041
3         2361  2012  305  dsada  anual   321076.15            6027066.03  -0.916779
4         2361  2011  306  dsada  anual  3858137.49            9733126.02        NaN
5         2369  2012  307  dsada  anual   351373.66            9402830.89        NaN
6         8104  2012  308  dsada  anual  3503226.02            6267307.01        NaN
```

I think you have an error in your expected output:

```
5964168.52 / 5774707.15 - 1 = 0.0328  # vs. 0.16 shown.
```
python|pandas
1
377,706
45,899,613
Divide certain columns by another column in pandas
Was wondering if there is a more efficient way of dividing multiple columns by a certain column. For example, say I have:

```
prev   open   close  volume
20.77  20.87  19.87  962816
19.87  19.89  19.56  668076
19.56  19.96  20.1   578987
20.1   20.4   20.53  418597
```

And I would like to get:

```
prev   open    close   volume
20.77  1.0048  0.9567  962816
19.87  1.0010  0.9844  668076
19.56  1.0204  1.0276  578987
20.1   1.0149  1.0214  418597
```

Basically, columns 'open' and 'close' have been divided by the value from column 'prev'.

I was able to do this by:

```python
df['open'] = list(map(lambda x, y: x/y, df['open'], df['prev']))
df['close'] = list(map(lambda x, y: x/y, df['close'], df['prev']))
```

I was wondering if there is a simpler way, especially if there are, say, 10 columns to be divided by the same value?
```python
df2[['open','close']] = df2[['open','close']].div(df2['prev'].values, axis=0)
```

Output:

```
    prev      open     close  volume
0  20.77  1.004815  0.956668  962816
1  19.87  1.001007  0.984399  668076
2  19.56  1.020450  1.027607  578987
3  20.10  1.014925  1.021393  418597
```
python|pandas|dataframe
9
377,707
46,081,177
Save data frame as csv/text file in pandas without line numbering
I have created a data frame from a text file in pandas:

```python
df = pd.read_table('inputfile.txt', names=['Line'])
```

When I do `df`:

```
                                                Line
0  17/08/31 13:24:48 INFO spark.SparkContext: Run...
1  17/08/31 13:24:49 INFO spark.SecurityManager: ...
2  17/08/31 13:24:49 INFO spark.SecurityManager: ...
3  17/08/31 13:24:49 INFO spark.SecurityManager: ...
4  17/08/31 13:24:49 INFO util.Utils: Successfull...
5  17/08/31 13:24:49 INFO slf4j.Slf4jLogger: Slf4...
6  17/08/31 13:24:49 INFO Remoting: Starting remo...
7  17/08/31 13:24:50 INFO Remoting: Remoting star...
8  17/08/31 13:24:50 INFO Remoting: Remoting now ...
9  17/08/31 13:24:50 INFO util.Utils: Successfull...
```

Now I want to save this file as `csv`:

```python
df.to_csv('outputfile')
```

The result I get is this:

```
0,17/08/31 13:24:48 INFO spark.SparkContext: Running Spark version 1.6.0
1,17/08/31 13:24:49 INFO spark.SecurityManager: Changing view acls to: user1
2,17/08/31 13:24:49 INFO spark.SecurityManager: Changing modify acls to: user1
3,17/08/31 13:24:49 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user1);
4,17/08/31 13:24:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 17101.
5,17/08/31 13:24:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
6,17/08/31 13:24:49 INFO Remoting: Starting remoting
7,17/08/31 13:24:50 INFO Remoting: Remoting started; listening on addresses :
8,17/08/31 13:24:50 INFO Remoting: Remoting now listens on addresses:
9,17/08/31 13:24:50 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 100033.
```

I want my output to be:

```
17/08/31 13:24:48 INFO spark.SparkContext: Running Spark version 1.6.0
17/08/31 13:24:49 INFO spark.SecurityManager: Changing view acls to: user1
17/08/31 13:24:49 INFO spark.SecurityManager: Changing modify acls to: user1
17/08/31 13:24:49 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user1);
17/08/31 13:24:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 17101.
17/08/31 13:24:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/08/31 13:24:49 INFO Remoting: Starting remoting
17/08/31 13:24:50 INFO Remoting: Remoting started; listening on addresses :
17/08/31 13:24:50 INFO Remoting: Remoting now listens on addresses:
17/08/31 13:24:50 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 100033.
```

I have tried a couple of methods like the ones below, but I still get the same result, not my desired output:

```python
np.savetxt(r'np.txt', df.Line, fmt='%d')
df.to_csv(sep=' ', index=False, header=False)
```
James' answer is likely correct given the special case. More generally, though, the standard behaviour of pandas is to write the row index as a leading column without a header. To remove it, simply set the `index=` argument to `None`:

```python
df.to_csv("outfile.csv", index=None)
```
python|pandas
6
377,708
45,949,160
Finding z-scores of data in a test dataframe in Pandas
I have data that is grouped and split into training and test sets. I am looking to compute `z`-scores. On the training set this is easy, as I can use built-in functions to compute the mean and standard deviation.

Here is an example, where I am looking for the z-scores by place:

```python
import pandas as pd
import numpy as np

# My example dataframe
train = pd.DataFrame({'place': ['Winterfell', 'Winterfell', 'Winterfell', 'Winterfell',
                                'Dorne', 'Dorne', 'Dorne'],
                      'temp':  [23, 10, 0, -32, 90, 110, 100]})

test = pd.DataFrame({'place': ['Winterfell', 'Winterfell', 'Dorne'],
                     'temp':  [6, -8, 100]})

# get the z-scores by group for the training set
train.loc[:, 'z'] = train.groupby('place')['temp'].transform(lambda x: (x - x.mean()) / x.std())
```

Now the training dataframe takes the form:

| Place      | temp | z      |
|------------|------|--------|
| Winterfell |   23 |  0.969 |
| Winterfell |   10 |  0.415 |
| Winterfell |    0 | -0.011 |
| Winterfell |  -32 | -1.374 |
| Dorne      |   90 | -1.000 |
| Dorne      |  110 |  1.000 |
| Dorne      |  100 |  0.000 |

which is what I want.

The problem is that I now want to use the means and standard deviations from the training set to calculate the z-scores in the test set. I can get the mean and standard deviation easily enough:

```python
summary = train.groupby('place').agg({'temp': [np.mean, np.std]}).xs('temp', axis=1, drop_level=True)

print(summary)
              mean        std
place
Dorne       100.00  10.000000
Winterfell    0.25  23.471614
```

I have some complicated ways of doing what I want, but since this is a task I have to do often, I am looking for a tidy way of doing it. Here is what I have tried so far:

1. Making a dictionary `dict` out of the summary table, where I can extract the mean and standard deviation as a tuple, then doing an apply on the test set:

   ```python
   test.loc[:, 'z'] = test.apply(lambda row: (row.temp - dict[row.place][0]) / dict[row.place][1], axis=1)
   ```

   Why I don't like it:

   - The dictionary makes it hard to read; you need to know the structure of `dict`.
   - If a place appears in the test set but not the training set, instead of getting a NaN, the code will throw an error.

2. Using an index:

   ```python
   test.set_index('place', inplace=True)
   test.loc[:, 'z'] = (test['temp'] - summary['mean']) / summary['std']
   ```

   Why I don't like it:

   - Looks like it should work, but instead gives me only NaNs.

The final result should be the test set with a `z` column computed from the training-set means and standard deviations. Is there a standard pythonic way of doing this sort of combination?
**Option 1**
`pd.Series.map`

```python
test.assign(z=
    (test.temp - test.place.map(summary['mean'])) / test.place.map(summary['std'])
)

        place  temp         z
0  Winterfell     6  0.244977
1  Winterfell    -8 -0.351488
2       Dorne   100  0.000000
```

---

**Option 2**
`pd.DataFrame.eval`

```python
test.assign(z=
    test.join(summary, on='place').eval('(temp - mean) / std')
)

        place  temp         z
0  Winterfell     6  0.244977
1  Winterfell    -8 -0.351488
2       Dorne   100  0.000000
```
python|pandas|dataframe
4
377,709
45,862,139
Extracting slices that are identical between two dataframes
How can I combine two dataframes `df1` and `df2` in order to get `df3`, which has the rows of `df1` and `df2` that have the same index (and the same values in the columns)?

```python
df1 = pd.DataFrame({'A': ['A0', 'A2', 'A3', 'A7'],
                    'B': ['B0', 'B2', 'B3', 'B7'],
                    'C': ['C0', 'C2', 'C3', 'C7'],
                    'D': ['D0', 'D2', 'D3', 'D7']},
                   index=[0, 2, 3, 7])
```

Test 1:

```python
df2 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A7'],
                    'B': ['B0', 'B1', 'B2', 'B7'],
                    'C': ['C0', 'C1', 'C2', 'C7'],
                    'D': ['D0', 'D1', 'D2', 'D7']},
                   index=[0, 1, 2, 7])
```

Test 2:

```python
df2 = pd.DataFrame({'A': ['A1'],
                    'B': ['B1'],
                    'C': ['C1'],
                    'D': ['D1']},
                   index=[1])
```

Expected output, test 1:

```
    A   B   C   D
0  A0  B0  C0  D0
2  A2  B2  C2  D2
7  A7  B7  C7  D7
```

Expected output, test 2:

```
Empty DataFrame
Columns: [A, B, C, D]
Index: []
```
First, get the intersection of the indices. Next, keep only the rows where all the columns are identical, indexing into either dataframe:

```python
idx = df1.index & df2.index
df_out = df1.loc[idx][(df1.loc[idx] == df2.loc[idx]).all(1)]
print(df_out)
```

You can also use `df.isin` (slightly different from the other answer):

```python
df_out = df1[df1.isin(df2).all(1)]
print(df_out)
```

---

**Test 1**

```
    A   B   C   D
0  A0  B0  C0  D0
2  A2  B2  C2  D2
7  A7  B7  C7  D7
```

**Test 2**

```
Empty DataFrame
Columns: [A, B, C, D]
Index: []
```
python|pandas|dataframe
1
377,710
46,079,644
Tensorflow "Attempting to use uninitialized value ..." Error when restoring
I made an RNN model. After training, I saved it in `tf.Session()` like this:

```python
# main.py
# ...(training)
saver = tf.train.Saver()
save_path = saver.save(sess, "Save data/RNN-model.ckpt")
```

and in run.py I tried to restore the saved data:

```python
# run.py
# ...
with tf.Session() as sess:
    saver = tf.train.Saver()
    ckpt = tf.train.get_checkpoint_state('Save data/')
    if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
        saver.restore(sess, ckpt.model_checkpoint_path)
    # ...(training or testing again)
```

When running, an error is raised:

```
FailedPreconditionError : Attempting to use uninitialized value accuracy/total
```

However, when I delete the code concerning `accuracy` in run.py, as the error message suggests, it seems to work well.

Am I missing something? Any comments or answers would help me.

Thanks.
The `accuracy` operation is backed by local variables, which are not saved in the checkpoint, so they must be initialized manually. Adding `sess.run(tf.local_variables_initializer())` after `restore` will initialize the local variables.
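A sketch of where the call fits in run.py, reusing the session/saver setup from the question:

```python
with tf.Session() as sess:
    saver = tf.train.Saver()
    ckpt = tf.train.get_checkpoint_state('Save data/')
    if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
        saver.restore(sess, ckpt.model_checkpoint_path)
    # local variables (e.g. the total/count counters behind tf.metrics.accuracy)
    # are not stored in the checkpoint, so initialize them after restoring
    sess.run(tf.local_variables_initializer())
    # ...(training or testing again)
```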
python|tensorflow|initialization
1
377,711
45,789,351
Tensorflow using Python programming
I'm really a newbie in Python programming, especially the TensorFlow concept. I already installed TensorFlow on my PC, but when I run a simple program to print "Hello, TensorFlow!", the output is always prefixed with `b'`, like in this picture: [Error Image](https://i.stack.imgur.com/SCtH6.png). My source code looks like this:

```python
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
sess = tf.Session()
print(sess.run(hello))
```

Can anybody help me solve this problem, please? Sorry for my bad English. Thanks.
In Python 3 there are two types of strings:

1. byte strings
2. strings

Byte strings are arrays of bytes, displayed with a `b'` prefix. To convert bytes into a string, you need to decode it. Bytes objects have a `decode` method that converts them to a normal string; it expects an encoding, usually 'utf-8'.

```python
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
sess = tf.Session()
print(sess.run(hello).decode("utf-8"))
```
python|tensorflow
2
377,712
46,045,096
tensorflow windows create own plugin
I have TensorFlow+GPU successfully built from source on Windows 10 with Visual Studio 2015.

As a result, I get `tensorflow.dll` and `tensorflow.lib`. I have `CUDA 8.0` and `cudnn 5.0`, with a GTX 1080 GPU.

However, my question is not about building and compiling TensorFlow; it's about creating TensorFlow plugins.

I followed the [tutorial](https://www.tensorflow.org/extend/adding_an_op) to construct my own "plug-in" and then tried to compile it as a Windows .dll. Since Windows does not export symbols automatically for me, I compiled a static lib first and used the tool

```
/tensorflow/contrib/cmake/tools/create_def_file.py
```

to create a `.def` file, and eventually used that to compile the `.dll`.

However, in my Python code, when I try to

```python
correlation = tf.load_op_library('correlation.dll')
```

and then call

```python
correlation.correlation()
```

with Correlation registered using `REGISTER_OP("Correlation")`, it still tells me:

> AttributeError: module '7b088d8b906b36d3e50721b0adbaaa6a' has no attribute 'correlation'

I think this is just a Windows (or cl compiler) issue; maybe what `REGISTER_OP("Correlation")` does is just not picked up by the compiler.

So is there anything I can do to make this happen on Windows?
Loading custom op libraries via tf.load_op_library() is not supported on Windows (at least with TensorFlow 1.8). The workaround is to add your custom op into the TensorFlow library itself. Follow the example of tf.user_ops.my_fact, implemented in tensorflow\tensorflow\core\user_ops\fact.cc:

1. Put your C++ implementation in tensorflow\tensorflow\core\user_ops
2. Add a Python binding in tensorflow\tensorflow\python\user_ops
3. Compile TensorFlow (read [the instructions here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/cmake))
4. Replace the tensorflow directory in your Conda environment with tensorflow\tensorflow\contrib\cmake\build\tf_python\tensorflow
5. Your new op function will be importable from tf.user_ops
windows|tensorflow
1
377,713
45,799,017
Inplace Forward Fill on a multi-level column dataframe
I have the following dataframe:

```python
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
          ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)
df.loc["B", (slice(None), 'two')] = np.nan
```

Now, I want to forward fill, in place, the data for columns "baz" and "foo" (so not for columns "bar" and "qux"). I tried:

```python
df[["baz", "foo"]].ffill(inplace=True)
```

but the resulting dataframe did not forward fill any of the values. How can I create a dataframe with forward-filled data for those two columns only?
I believe the problem is that `df[["baz", "foo"]]` returns a copy, so the `inplace=True` fill modifies the copy and the result is discarded. Try accessing the slice with `df.loc` and assigning the `ffill`ed dataframe slice back:

```python
df.loc[:, ["baz", "foo"]] = df[["baz", "foo"]].ffill()
```

Output:

```
first        baz                 foo
second       one       two       one       two
A       0.465254  0.629161 -0.176656 -1.263927
B       2.051213  0.629161  1.539584 -1.263927
C      -0.463592 -0.240445 -0.014090  0.170188
```

---

Alternatively, you could use `df.fillna(method='ffill')`:

```python
df.loc[:, ["baz", "foo"]] = df[["baz", "foo"]].fillna(method='ffill')
```
python|pandas|dataframe|hierarchical|fillna
1
377,714
46,084,008
How to groupby in pandas dataframe by one column only if other column value is not same
I have a dataframe as follows:

```
df

    ID  first    last
0  123    Joe  Thomas
1  456  James   Jonas
2  675  James   Jonas
3  457  James  Thomas
```

I want an output as follows:

```python
{'Thomas': [123, 457], 'James': [675, 457]}
```

such that, for all the rows where `'last'` is the same but `'first'` is different, or where `'first'` is the same but `'last'` is different, I get the IDs.

I'm trying to do it as follows:

```python
for i in zip(df['ID'], df['first'], df['last']):
    last.setdefault(i[2], [])
    first.setdefault(i[1], [])
    last[i[2]].append(i[0])
    first[i[1]].append(i[0])
```

with which I get the output:

```python
>>> first
{'James': [456, 675, 457], 'Joe': [123]}
>>> last
{'Thomas': [123, 457], 'Jonas': [456, 675]}
```

But this only groups by either 'first' or 'last' and does not check that the other one should not be the same. How do I get the desired output?

UPDATE:

Dropped duplicates as:

```python
df = df.drop_duplicates(subset=['first', 'last'], take_last=False)
```

ANSWER:

Did it this way. Not sure if this is correct. Any suggestions?

```python
new_d = pd.melt(df.sort_values('ID').drop_duplicates(['first','last']), 'ID').groupby('value').ID.apply(list).to_dict()
low_d = {k: v for k, v in new_d.items() if len(v) != 1}
```
Building off of the answer that @Abdou provided in the comment, I can confirm this works in Python 2.7.13 with pandas 0.20.1, and also in Python 3.6.2 with pandas 0.20.3:

```python
from __future__ import division, print_function
import pandas as pd
import sys


def main():
    print("python version is: %s" % sys.version)
    print("pandas version: %s" % pd.__version__)
    df = pd.DataFrame(data={'first': ['Joe', 'James', 'James', 'James'],
                            'last': ['Thomas', 'Jonas', 'Jonas', 'Thomas'],
                            'ID': [123, 456, 675, 457]})
    grouped = df.groupby('first')\
                .apply(lambda x: x.drop_duplicates(['last'], keep='last'))
    melted = pd.melt(grouped, 'ID', ['first', 'last'], 'denoms', 'names')
    result = melted[melted.names.duplicated(keep=False)]\
        .groupby('names')['ID']
    print(result.apply(list).to_dict())


if __name__ == "__main__":
    main()
```
python|pandas|dictionary|dataframe|group-by
0
377,715
45,824,837
Tensorflow and Numpy missing
I am using Ubuntu 14.04. I am trying to use the tensorflow module, and I installed it the same way I would install any other package or module, but it is not recognized by Python as being installed, even though pip says it is. I am not sure what is going on.

See for yourselves:

```
$ sudo pip install tensorflow
The directory '/home/tex/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/tex/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: tensorflow in /home/tex/.local/lib/python2.7/site-packages
Requirement already satisfied: six>=1.10.0 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: markdown>=2.6.8 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: bleach==1.5.0 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: backports.weakref==1.0rc1 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: html5lib==0.9999999 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: werkzeug>=0.11.10 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: mock>=2.0.0 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: numpy>=1.11.0 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: wheel in /usr/lib/python2.7/dist-packages (from tensorflow)
Requirement already satisfied: protobuf>=3.2.0 in /home/tex/.local/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied: funcsigs>=1; python_version < "3.3" in /home/tex/.local/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow)
Requirement already satisfied: pbr>=0.11 in /home/tex/.local/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow)
Requirement already satisfied: setuptools in /home/tex/.local/lib/python2.7/site-packages (from protobuf>=3.2.0->tensorflow)
```

But when I try to import it from Python, this is what I get:

```
$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named tensorflow
```

Why is this happening? I am also having a much weirder error. I am using Flask in a [virtualenv](https://virtualenv.pypa.io/en/stable/). When I start my virtualenv, it does not recognize that numpy is installed, even though it is, and it is recognized *outside* the virtualenv. Let me show you:

```
(venv)tex@ubuntu:~/scratch/ilya/mock$ sudo pip install numpy
The directory '/home/tex/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/tex/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: numpy in /home/tex/.local/lib/python2.7/site-packages
(venv)tex@ubuntu:~/scratch/ilya/mock$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named numpy
```

However, when I exit the virtualenv...

```
tex@ubuntu:~/scratch/ilya/mock$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>>
```

⊙_ʘ

Edit: not a possible duplicate, because the linked question doesn't address the same issues; it's basically a completely different question.
Just because you source your virtualenv doesn't mean the `pip` command will reference the pip of the virtualenv; `pip` is more than likely still linked to your default Python interpreter.

You can try the following to get it working.

Start by uninstalling both modules:

```
[root@server] sudo pip uninstall tensorflow
[root@server] sudo pip uninstall numpy
```

Then source your virtualenv:

```
[root@server] source ~/venv/activate
```

Then install the modules using pip while explicitly calling the python command:

```
(venv)[root@server] python -m pip install tensorflow
(venv)[root@server] python -m pip install numpy
```

Then see if they are available:

```
(venv)[root@server] python
>> import numpy
```
python|numpy|tensorflow|virtualenv|python-2.x
1
377,716
46,071,998
TensorFlow: How to use 'num_epochs' in a string_input_producer
I can't enable epoch limits on my string_input_producer without getting an OutOfRange error (requested x, current size 0). It doesn't seem to matter how many elements I request; there are always 0 available.

Here is my FileQueue builder:

```python
def get_queue(base_directory):
    files = [f for f in os.listdir(base_directory) if f.endswith('.bin')]
    shuffle(files)
    file = [os.path.join(base_directory, files[0])]

    fileQueue = tf.train.string_input_producer(file, shuffle=False, num_epochs=1)
    return fileQueue
```

If I remove `num_epochs=1` from the string_input_producer, it can create samples fine.

My input pipeline:

```python
def input_pipeline(instructions, fileQueue):
    example, label, feature_name_list = read_binary_format(fileQueue, instructions)

    num_preprocess_threads = 16
    capacity = 20

    example, label = tf.train.batch(
        [example, label],
        batch_size=50000,  # set the batch size way bigger so we always return the full amount of samples from the file
        allow_smaller_final_batch=True,
        capacity=capacity,
        num_threads=num_preprocess_threads)

    return example, label
```

And lastly my session:

```python
with tf.Session(graph=tf.Graph()) as sess:
    train_inst_set = sf.DeserializationInstructions.from_filename(os.path.join(input_dir, "Train/config.json"))
    fileQueue = sf.get_queue(os.path.join(input_dir, "Train"))
    features_train, labels_train = sf.input_pipeline(train_inst_set, fileQueue)

    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord, sess=sess)

    train_feature_batch, train_label_batch = sess.run([features_train, labels_train])
```
The issue was caused by this: [Issue #1045](https://github.com/tensorflow/tensorflow/issues/1045)

For whatever reason, tf.global_variables_initializer does not initialize all variables; the local variables need to be initialized too.

Add

```python
sess.run(tf.group(tf.global_variables_initializer(),
                  tf.local_variables_initializer()))
```

to your session.
python|tensorflow
1
377,717
45,957,672
python: dictionary and numpy.array issue
I have a dictionary of arrays (all the same length) keyed by strings. My goal is to create a new dictionary with the same keys but with the arrays cut down, keeping only the elements I need. I wrote a function to do it, but the problem is that it returns a dictionary where the same array (of the correct cut length) is associated with every key, while the print statements I put in to check show the correct association. Here's the function:

```python
def extract_years(dic, initial_year, final_year):
    dic_extr = {}
    l = numpy.size(dic[dic.keys()[0]])
    if final_year != 2013:
        a = numpy.zeros((final_year - initial_year)*251)
    elif final_year == 2013:
        a = numpy.zeros(l - (initial_year-1998)*251)
    for i in range(0, len(dic)):
        # print i
        for k in range(0, numpy.size(a)):
            a[k] = dic[dic.keys()[i]][(initial_year-1998)*251 + k]
            # print k
        dic_extr[dic.keys()[i]] = a
        print dic.keys()[i]
        print dic_extr[dic.keys()[i]]
    print dic_extr.keys()
    print dic_extr
    return dic_extr
```

As I said, `print dic_extr[dic.keys()[i]]` shows the correct results, while the final `print dic_extr` shows a dictionary with the same array associated with every key.
In Python, variables hold references to objects, so every key in your dictionary ends up referring to the same array object `a`. You have to create a new instance of `a` for each iteration of the outer `for` loop. You could do this, for example, by initializing the `a` array inside that loop, like this:

```python
def extract_years(dic, initial_year, final_year):
    dic_extr = {}
    l = numpy.size(dic[dic.keys()[0]])
    for i in range(0, len(dic)):
        # a fresh array per key, so the keys no longer share one object
        if final_year != 2013:
            a = numpy.zeros((final_year - initial_year)*251)
        elif final_year == 2013:
            a = numpy.zeros(l - (initial_year-1998)*251)
        for k in range(0, numpy.size(a)):
            a[k] = dic[dic.keys()[i]][(initial_year-1998)*251 + k]
            # print k
        dic_extr[dic.keys()[i]] = a
        print dic.keys()[i]
        print dic_extr[dic.keys()[i]]
    print dic_extr.keys()
    print dic_extr
    return dic_extr
```

Perhaps this is not the most elegant solution, but it should work.
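A more compact sketch of the same fix (an assumption-laden rewrite, keeping the original's 251-days-per-year and 1998 base-year constants): since the kept elements are contiguous, a slice plus `.copy()` replaces the inner loop and guarantees each key gets its own array.

```python
import numpy

def extract_years(dic, initial_year, final_year):
    start = (initial_year - 1998) * 251
    # slice to the end for the final partial year (2013), otherwise to the year boundary
    stop = None if final_year == 2013 else start + (final_year - initial_year) * 251
    # .copy() ensures every key owns an independent array, not a shared view
    return {key: numpy.asarray(values)[start:stop].copy()
            for key, values in dic.items()}
```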
python|arrays|python-2.7|numpy|dictionary
1
377,718
45,867,202
ValueError: shapes (9,) and (4,) not aligned
I am training a NN to play 2048 using reinforcement learning. Or at least I think I am, cause I am new to this.

This is what NeuralNetwork.py looks like:

```python
import random
import numpy as np

def nonlin(x, deriv=False):
    if(deriv==True):
        return x * (1-x)
    return 1/(1+np.exp(-x))

np.random.seed(1)

class NeuralNetwork:
    next_ID = 0

    def __init__(self, HyperParams):
        self.synapses = []
        for synapse in range(len(HyperParams)-1):
            self.synapses.append(2*np.random.random((HyperParams[synapse], HyperParams[synapse+1]))-1)
        self.score = 0
        # self.name = words[random.randint(0, len(words))].strip()
        self.name = str(NeuralNetwork.next_ID)
        NeuralNetwork.next_ID += 1

    def train_batch(self, epoch, state, outcome):
        for i in range(epoch):
            self.layers = []
            self.layers.append(state)
            for j in range(len(self.synapses)):
                self.layers.append(nonlin(np.dot(self.layers[-1], self.synapses[j])))
            error = outcome - self.layers[-1]
            if (i % 1000) == 0:
                print(str(np.mean(np.abs(error))))
            for j in range(1, 1+len(self.synapses)):
                delta = error * nonlin(self.layers[-j], True)
                error = delta.dot(self.synapses[-j].T)
                self.synapses[-j] += self.layers[-(j+1)].T.dot(delta)

    def train(self, state, outcome):
        self.layers = []
        self.layers.append(state)
        for j in range(len(self.synapses)):
            self.layers.append(nonlin(np.dot(self.layers[-1], self.synapses[j])))
        error = outcome - self.layers[-1]
        print("error: ", error.shape)
        for j in range(1, 1+len(self.synapses)):
            delta = error * nonlin(self.layers[-j], True)
            print("delta: ", delta.shape)
            error = delta.dot(self.synapses[-j].T)
            print("layer: ", self.layers[-(j+1)].shape)
            print("layer.T: ", self.layers[-(j+1)].T.shape)
            # this is the issue
            print("dot: ", self.layers[-(j+1)].T.dot(delta).shape)
            self.synapses[-j] += self.layers[-(j+1)].T.dot(delta)

    def next_gen(self):
        child = NeuralNetwork([1])
        for synapse in self.synapses:
            # add variation
            child.synapses.append(synapse + 0.1*np.random.random(synapse.shape)-0.05)
        # child.name += " son of " + self.name
        child.name += "<-" + self.name
        return child

    def feed(self, state):
        self.layers = []
        self.layers.append(state)
        for j in range(len(self.synapses)):
            self.layers.append(nonlin(np.dot(self.layers[-1], self.synapses[j])))
        return self.layers[-1]
```

This is what 2048.py looks like:

```python
import random
import os
import sys
import math
import numpy as np
from NeuralNetwork import *

# global vars, constants and setup
board = {}
row_size = 4
random.seed(1)
HP = (16, 9, 4)

# set up game board
for i in range(row_size):      # row
    for j in range(row_size):  # column
        board[(i,j)] = 0

# display function
def display():
    for i in range(row_size):
        print('\t'.join([str(board[(i,j)]) for j in range(row_size)]))
    print()

# logic function
def logic(move, NN):
    """
    char move is the move, one of any in "asdw"
    NN is a NeuralNetwork object
    """
    # print("mov", move)
    score = 0
    if move == 's':
        for j in range(row_size):  # columns
            row_pointer = row_size-1
            for i in reversed(range(row_size-1)):  # go up the rows
                if board[(i, j)] != 0:
                    # if there is a non-empty square above, and this is a zero
                    if board[(row_pointer, j)] == 0:
                        board[(row_pointer, j)] = board[(i, j)]
                        board[(i, j)] = 0
                        # row_pointer -= 1  # This is the new block to focus on
                    # if there is a non-empty square above, and they are not equivalent
                    elif board[(i, j)] != board[(row_pointer, j)]:
                        # while this intuitively is not a swap, without it I would need to zero board[(i,j)];
                        # that zero would cause problems if row_pointer-1 == i
                        board[(row_pointer-1, j)], board[(i, j)] = board[(i, j)], board[(row_pointer-1, j)]
                        row_pointer -= 1  # This is the new block to focus on
                    # if there is a non-empty square above, and they are the same
                    elif board[(i, j)] == board[(row_pointer, j)]:
                        board[(row_pointer, j)] += board[(i, j)]
                        board[(i, j)] = 0
                        score += board[(row_pointer, j)] + math.log(board[(row_pointer, j)], 2)
    elif move == 'w':
        for j in range(row_size):  # columns
            row_pointer = 0
            for i in range(1, row_size):  # go down the rows
                if board[(i, j)] != 0:
                    if board[(row_pointer, j)] == 0:
                        board[(row_pointer, j)] = board[(i, j)]
                        board[(i, j)] = 0
                    elif board[(i, j)] != board[(row_pointer, j)]:
                        board[(row_pointer+1, j)], board[(i, j)] = board[(i, j)], board[(row_pointer+1, j)]
                        row_pointer += 1  # This is the new block to focus on
                    elif board[(i, j)] == board[(row_pointer, j)]:
                        board[(row_pointer, j)] += board[(i, j)]
                        board[(i, j)] = 0
                        score += board[(row_pointer, j)] + math.log(board[(row_pointer, j)], 2)
    elif move == 'a':
        for i in range(row_size):  # rows
            column_pointer = 0
            for j in range(1, row_size):  # go right through the columns
                if board[(i, j)] != 0:
                    if board[(i, column_pointer)] == 0:
                        board[(i, column_pointer)] = board[(i, j)]
                        board[(i, j)] = 0
                    elif board[(i, j)] != board[(i, column_pointer)]:
                        board[(i, column_pointer+1)], board[(i, j)] = board[(i, j)], board[(i, column_pointer+1)]
                        column_pointer += 1
                    elif board[(i, j)] == board[(i, column_pointer)]:
                        board[(i, column_pointer)] += board[(i, j)]
                        board[(i, j)] = 0
                        score += board[(i, column_pointer)] + math.log(board[(i, column_pointer)], 2)
    elif move == 'd':
        for i in range(row_size):  # rows
            column_pointer = row_size-1
            for j in reversed(range(row_size-1)):  # go left through the columns
                if board[(i, j)] != 0:
                    if board[(i, column_pointer)] == 0:
                        board[(i, column_pointer)] = board[(i, j)]
                        board[(i, j)] = 0
                    elif board[(i, j)] != board[(i, column_pointer)]:
                        board[(i, column_pointer-1)], board[(i, j)] = board[(i, j)], board[(i, column_pointer-1)]
                        column_pointer -= 1
                    elif board[(i, j)] == board[(i, column_pointer)]:
                        board[(i, column_pointer)] += board[(i, j)]
                        board[(i, j)] = 0
                        score += board[(i, column_pointer)] + math.log(board[(i, column_pointer)], 2)
    else:
        print("something is wrong")
    NN.score += score
    return score

# checks to see whether there are any valid moves in a full board with no 0's
def is_game_over():
    # check the top-left square
    for i in range(row_size-1):
        for j in range(row_size-1):
            if board[(i,j)] in [board[(i+1,j)], board[(i,j+1)]]:
                # check the one below and to the right
                return False
    # Check the right-most column
    for j in range(row_size-1):
        if board[(row_size-1,j)] == board[(row_size-1,j+1)]:
            return False
    # Check the bottom row
    for i in range(row_size-1):
        if board[(i,row_size-1)] == board[(i+1,row_size-1)]:
            return False
    # There is no way to combine, game over
    return True

# NN controls
NN = NeuralNetwork(HP)

for step in range(10):
    # set up game board
    for i in range(row_size):      # row
        for j in range(row_size):  # column
            board[(i,j)] = 0
    previous_board = []
    quit = False
    # game loop
    while not quit:
        # set a new empty tile to a 2
        while True:
            i = random.randint(0, row_size-1)
            j = random.randint(0, row_size-1)
            # print(i,j,board[(i,j)])
            if board[(i,j)] != 0:
                continue
            else:
                board[(i,j)] = 2
                break
        # View
        # display()
        # normalize data and make a guess with nn
        state = np.array([board[(i,j)] for j in range(row_size) for i in range(row_size)])
        state[state==0] = 1
        state = np.log2(state)
        state = state / np.max(state)
        # print('\n'.join(['\t'.join([str(state[j*row_size+i]) for j in range(row_size)]) for i in range(row_size)]))
        move = NN.feed(state)
        # move
        reward = 0
        previous_board = list(board.values())
        while True:
            if len(move[move == 0]) == 4:
                if is_game_over():
                    # print("Game Over")
                    quit = True
                break
            reward = logic("asdw"[move.argmax()], NN)
            if previous_board == list(board.values()):
                move[move.argmax()] = 0
                continue
            else:
                break
        if reward:
            reward = nonlin(math.log2(reward) - math.log2(2048))
            move[np.argmax(move)] += reward
        NN.train(state, move)
    display()
    print("score: " + str(NN.score))
    NN.score = 0
```

I was told that numpy would know what to do when it encountered two 1-D arrays dotted, but that's not happening. Should I make these arrays 2D, with their inner dimension being 1? Could you help?

Here is the full error:

```
Traceback (most recent call last):
  File "2048.py", line 195, in <module>
    NN.train(state, move)
  File "/home/jeff/Programs/grad_descent/NeuralNetwork.py", line 71, in train
    print("dot: ", self.layers[-(j+1)].T.dot(delta).shape)
ValueError: shapes (9,) and (4,) not aligned: 9 (dim 0) != 4 (dim 0)
```

As you can see, they are both 1D vectors, so numpy should just dot them.
It will work if you give an explicit column representation (shape `(n, 1)`), using `np.newaxis`.

**Note:** If you're looking for a scalar output, the two vectors need to be of [equal length](https://en.wikipedia.org/wiki/Dot_product). The error message in the OP shows you are trying to take the dot product of a length-`9` and a length-`4` vector. I'm assuming that you actually want `.dot()` to return an outer product. If not, an inner product won't work; in that case, try to figure out why you're not getting two equal-length vectors where you expect to see them.

With:

```python
a = np.array([1,2,3])
b = np.array([2,3,4,5])
```

the shapes of `a` and `b` are `(3,)` and `(4,)`, respectively:

```python
try:
    print(a.shape)
    print(b.shape)
    print("a.b: \n{}".format(np.dot(a, b.T)))
except ValueError as e:
    print("failed: {}".format(e))
```

Output:

```
(3,)
(4,)
failed: shapes (3,) and (4,) not aligned: 3 (dim 0) != 4 (dim 0)
```

With `newaxis`, the shapes become `(3,1)` and `(4,1)`:

```python
aa = a[:, np.newaxis]
bb = b[:, np.newaxis]
try:
    print(aa.shape)
    print(bb.shape)
    print("aa.bb: \n{}".format(np.dot(aa, bb.T)))
except ValueError as e:
    print("failed: {}".format(e))
```

Output:

```
(3, 1)
(4, 1)
aa.bb:
[[ 2  3  4  5]
 [ 4  6  8 10]
 [ 6  9 12 15]]
```
python|numpy|dot-product
0
377,719
45,812,652
Delete rows in subsequences that contain leading zeros in a dataframe
I have a data frame in the following format, containing a time series:

```
A   B   C   201401  201402  201403
a1  b1  c1  100     200     300
a2  b2  c2  0       250     0
```

I have used [pandas.melt](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html) to flatten this data into the following format:

```
A   B   C   YYYYMM  Value
a1  b1  c1  201401  100
a1  b1  c1  201402  200
a1  b1  c1  201403  300
a2  b2  c2  201401  0
a2  b2  c2  201402  250
a2  b2  c2  201403  0
```

Now, for a particular combination of [A, B, C], I only want the time series **starting from** non-zero values, so my output should look like this:

```
A   B   C   YYYYMM  Value
a1  b1  c1  201401  100
a1  b1  c1  201402  200
a1  b1  c1  201403  300
a2  b2  c2  201402  250
a2  b2  c2  201403  0
```

I tried:

```python
df.groupby(['A','B','C']).apply(lambda x: x['Value'][np.where(x['Value']>0)[0][0]:])
```

This just gives me the time series and doesn't make the changes in place. What should I do to achieve this?
I continued with your idea of grouping and then filtering. The basic idea is to take each group, find the first non-zero Value's index (assuming the rows are already sorted by date), and then just ungroup and clean up.

```python
def applyFunc(row):
    row_values = np.array(row.Value)
    first_non_zero_index = next((i for i, x in enumerate(row_values) if x), None)
    return row.iloc[first_non_zero_index:]

(df.groupby(['A','B','C'])
   .apply(applyFunc)
   .drop(["A","B","C"], axis=1)
   .reset_index()
   .drop("level_3", axis=1))
```

Uses a snippet from https://stackoverflow.com/a/19502403/2750819
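A vectorized sketch of the same filter (not from the original answer; it likewise assumes rows are ordered by YYYYMM within each group): a cumulative "seen a non-zero Value yet" flag per group serves directly as a boolean mask.

```python
# 1 from the first non-zero Value onwards within each (A, B, C) group, else 0
nonzero_seen = (df['Value'].ne(0).astype(int)
                  .groupby([df['A'], df['B'], df['C']])
                  .cummax())
result = df[nonzero_seen.astype(bool)]
```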
python-2.7|pandas|dataframe
0
377,720
46,074,863
Load saved checkpoint and predict not producing same results as in training
I'm training based on sample code I found on the Internet. The accuracy in testing is at 92% and the checkpoints are saved in a directory. In parallel (the training has been running for 3 days now), I want to create my prediction code so I can learn more instead of just waiting.

This is my third day of deep learning, so I probably don't know what I'm doing. Here's how I'm trying to predict:

- Instantiate the model using the same code as in training
- Load the last checkpoint
- Try to predict

The code works, but the results are nowhere near 90%.

Here's how I create the model:

```python
INPUT_LAYERS = 2
OUTPUT_LAYERS = 2
AMOUNT_OF_DROPOUT = 0.3
HIDDEN_SIZE = 700
INITIALIZATION = "he_normal"  # : Gaussian initialization scaled by fan_in (He et al., 2014)
CHARS = list("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ .")


def generate_model(output_len, chars=None):
    """Generate the model"""
    print('Build model...')
    chars = chars or CHARS
    model = Sequential()
    # "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE
    # note: in a situation where your input sequences have a variable length,
    # use input_shape=(None, nb_feature).
    for layer_number in range(INPUT_LAYERS):
        model.add(recurrent.LSTM(HIDDEN_SIZE, input_shape=(None, len(chars)), init=INITIALIZATION,
                                 return_sequences=layer_number + 1 < INPUT_LAYERS))
        model.add(Dropout(AMOUNT_OF_DROPOUT))
    # For the decoder's input, we repeat the encoded input for each time step
    model.add(RepeatVector(output_len))
    # The decoder RNN could be multiple layers stacked or a single layer
    for _ in range(OUTPUT_LAYERS):
        model.add(recurrent.LSTM(HIDDEN_SIZE, return_sequences=True, init=INITIALIZATION))
        model.add(Dropout(AMOUNT_OF_DROPOUT))

    # For each step of the output sequence, decide which character should be chosen
    model.add(TimeDistributed(Dense(len(chars), init=INITIALIZATION)))
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
```

In a separate file, predict.py, I import this method to create my model and try to predict:

```python
# ... imports ...

model = generate_model(len(question), dataset['chars'])
model.load_weights('models/weights.204-0.20.hdf5')


def decode(pred):
    return character_table.decode(pred, calc_argmax=False)


x = np.zeros((1, len(question), len(dataset['chars'])))
for t, char in enumerate(question):
    x[0, t, character_table.char_indices[char]] = 1.

preds = model.predict_classes([x], verbose=0)[0]
print("======================================")
print(decode(preds))
```

I don't know what the problem is. I have about 90 checkpoints in my directory and I'm loading the last one based on accuracy. All of them were saved by a `ModelCheckpoint`:

```python
checkpoint = ModelCheckpoint(MODEL_CHECKPOINT_DIRECTORYNAME + '/' + MODEL_CHECKPOINT_FILENAME,
                             save_best_only=True)
```

I'm stuck. What am I doing wrong?
In the repo you provided, the training and validation sentences are inverted before being fed into the model (as is commonly done in seq2seq learning).

```python
dataset = DataSet(DATASET_FILENAME)
```

As you can see, the default value for `inverted` is `True`, and the questions are inverted.

```python
class DataSet(object):
    def __init__(self, dataset_filename, test_set_fraction=0.1, inverted=True):
        self.inverted = inverted
        ...
        question = question[::-1] if self.inverted else question
        questions.append(question)
```

You can try to invert the sentences during prediction. Specifically:

```python
x = np.zeros((1, len(question), len(dataset['chars'])))
for t, char in enumerate(question):
    x[0, len(question) - t - 1, character_table.char_indices[char]] = 1.
```
python|tensorflow|deep-learning|keras
2
377,721
23,383,253
"tuple index out of range" reading pandas pickled Panel
`data` is a pandas Panel:

```
data
<class 'pandas.core.panel.Panel'>
Dimensions: 16 (items) x 1954 (major_axis) x 6 (minor_axis)
Items axis: ADRE to SPY
Major_axis axis: 2004-12-01 00:00:00+00:00 to 2012-08-31 00:00:00+00:00
Minor_axis axis: open to price
```

Save to disk:

```python
pandas.to_pickle(data, 'data.pkl')
```

But when I try to read the pkl file

```python
pandas.read_pickle('data.pkl')
```

I get:

```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 print type(data)
      2 data.to_pickle('G:\temp\test.pkl')
----> 3 pd.read_pickle('G:\temp\test.pkl')

C:\Python27\lib\site-packages\pandas-0.13.1-py2.7-win32.egg\pandas\io\pickle.pyc in read_pickle(path)
     47
     48     try:
---> 49         return try_read(path)
     50     except:
     51         if PY3:

C:\Python27\lib\site-packages\pandas-0.13.1-py2.7-win32.egg\pandas\io\pickle.pyc in try_read(path, encoding)
     44     except:
     45         with open(path, 'rb') as fh:
---> 46             return pc.load(fh, encoding=encoding, compat=True)
     47
     48     try:

C:\Python27\lib\site-packages\pandas-0.13.1-py2.7-win32.egg\pandas\compat\pickle_compat.pyc in load(fh, encoding, compat, is_verbose)
     87         up.is_verbose = is_verbose
     88
---> 89         return up.load()
     90     except:
     91         raise

C:\Python27\lib\pickle.pyc in load(self)
    856             while 1:
    857                 key = read(1)
--> 858                 dispatch[key](self)
    859         except _Stop, stopinst:
    860             return stopinst.value

C:\Python27\lib\site-packages\pandas-0.13.1-py2.7-win32.egg\pandas\compat\pickle_compat.pyc in load_reduce(self)
     16     args = stack.pop()
     17     func = stack[-1]
---> 18     if type(args[0]) is type:
     19         n = args[0].__name__
     20         if n == u('DeprecatedSeries') or n == u('DeprecatedTimeSeries'):

IndexError: tuple index out of range
```

I can work around this, but my question is: am I using to/from pickle correctly?
<p>Pickles are saved by:</p> <pre><code>panel.to_pickle('file_name.pkl')
</code></pre> <p>You don't appear to be using a string filename, and you are adding an extra (non-quoted) argument.</p> <p>Reading uses a quoted filename as well:</p> <pre><code>pd.read_pickle('file_name.pkl')
</code></pre> <p>On python 27-32 bit on windows</p> <pre><code>Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; np.__version__
'1.7.1'
&gt;&gt;&gt; pd.__version__
'0.13.1-791-g07f6d46'
&gt;&gt;&gt; from pandas.util import testing as tm
&gt;&gt;&gt; panel = tm.makePanel()
&gt;&gt;&gt; pd.to_pickle(panel,'test.pkl')
&gt;&gt;&gt; pd.read_pickle('test.pkl')
&lt;class 'pandas.core.panel.Panel'&gt;
Dimensions: 3 (items) x 30 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-02-11 00:00:00
Minor_axis axis: A to D
</code></pre> <p>So I'm not sure exactly what your data looks like; maybe show a reproducible example.</p>
python-2.7|pandas
2
377,722
23,123,625
Different result of code example on book: Python for Data Analysis
<p>I have a question on the book "Python for Data Analysis", if anyone is interested in this book.</p> <p>After running an example on page 244 <em>(Plotting Maps: Visualizing Haiti Earthquake Crisis Data)</em>, my result of dummy_frame.ix doesn't look the same as what the book says, as below:</p> <pre><code>dummy_frame = DataFrame(np.zeros((len(data), len(code_index))), index=data.index, columns=code_index)

If all goes well, dummy_frame should look something like this:

In [107]: dummy_frame.ix[:, :6]
Out[107]:
&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 3569 entries, 0 to 3592
Data columns:
1     3569  non-null values
1a    3569  non-null values
1b    3569  non-null values
1c    3569  non-null values
1d    3569  non-null values
2     3569  non-null values
dtypes: float64(6)
</code></pre> <p>My result is:</p> <pre><code>In [61]: dummy_frame.ix[:, :6]
Out[61]:
    1  1a  1b  1c  1d  2
0   0   0   0   0   0  0
4   0   0   0   0   0  0
5   0   0   0   0   0  0
6   0   0   0   0   0  0
7   0   0   0   0   0  0
&lt;snip&gt;
57  0   0   0   0   0  0
58  0   0   0   0   0  0
59  0   0   0   0   0  0
60  0   0   0   0   0  0
61  0   0   0   0   0  0
62  0   0   0   0   0  0
..  ..  ..  ..  ..  .. ..

[3569 rows x 6 columns]
</code></pre> <p>I checked its <a href="http://www.oreilly.com/catalog/errataunconfirmed.csp?isbn=0636920023784" rel="nofollow">errata page</a> but this is not mentioned there. I ensured there is no typo of mine here and also ran it on two different machines, but the result was the same.</p> <p>Any advice please?</p> <p>Edited:</p> <p>Thanks dartdog, joris! I didn't notice the DataFrame display was changed. Now I can get the same result with .info()</p> <pre><code>In [5]: dummy_frame.ix[:, :6].info()
&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 3569 entries, 0 to 3592
Data columns (total 6 columns):
1     3569 non-null float64
1a    3569 non-null float64
1b    3569 non-null float64
1c    3569 non-null float64
1d    3569 non-null float64
2     3569 non-null float64
dtypes: float64(6)
</code></pre>
<p>That is the same result as far as I can see. Pandas has been changing the default display of DataFrames: the example in the book uses the older summary display, while the output you got is the newer format that shows the beginning and end of the frame. Read up on the display options in the docs for the pandas version you are using.</p>
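<p>For example (a sketch; exact option names can differ between pandas versions):</p> <pre><code>import pandas as pd

pd.set_option('display.max_rows', 60)  # controls when the truncated begin/end view kicks in
dummy_frame.ix[:, :6].info()           # reproduces the old summary-style display
</code></pre>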
python|pandas|data-analysis
2
377,723
23,308,578
How to find numpy.argmax() on part of list and save index?
<p>I have:</p> <pre><code>array = [1, 2, 3, 4, 5, 6, 7, 8];
</code></pre> <p>I need to find <code>numpy.argmax</code> only for the last 4 elements in the array.</p> <p>This does not work, because the index is lost:</p> <pre><code>&gt;&gt;&gt; array = [1, 2, 3, 4, 5, 6, 7, 8];
&gt;&gt;&gt; print (array[4:8]);
[5, 6, 7, 8]
&gt;&gt;&gt; print (np.argmax(array[4:8]) );
3
</code></pre> <p>The result must be 7</p>
<p>The simple approach would be to just add 4 back to the output. Assuming the offset isn't always 4, add the slice start back in general:</p> <p><code>print np.argmax(array[x : 8]) + x</code></p>
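<p>Applied to the array from the question:</p> <pre><code>&gt;&gt;&gt; array = [1, 2, 3, 4, 5, 6, 7, 8]
&gt;&gt;&gt; np.argmax(array[4:8]) + 4
7
</code></pre>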
python|numpy
3
377,724
23,267,054
How to combine two sine waves without cracks
<p>I'm using Python, pyaudio and scipy and I would like to combine two sine waves (two tones) in a way that one tone is played after another (create melody). Let's assume that I have two arrays: <code>tone1</code> and <code>tone2</code>.</p> <p><code>tone1</code> contains data of sine wave with frequency of 350 Hz. <code>tone2</code> contains sine wave's data with frequency of 440 Hz.</p> <p>My question is: how to combine these two arrays (<code>tone1</code> and <code>tone2</code>) into one array that, after being played, will give me a melody without noticeable crack between these two sine waves (<code>tone1</code> and <code>tone2</code>)?</p>
<p>Append them together and apply a Fourier transform smoothing filter. In the regions with a single tone, the Fourier transform will have only one component, and the filter will do nothing; whereas in the transition region you will get both components (plus the spurious high-frequency content coming from the jump), which the filter would hopefully smooth out.</p>
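<p>A minimal sketch of that idea (my own illustration, not from the answer above; the sample rate, tone lengths, and cutoff frequency are all assumptions to be tuned by ear):</p> <pre><code>import numpy as np
from scipy import signal

rate = 44100                                   # assumed sample rate
t = np.arange(int(0.5 * rate)) / float(rate)   # half a second per tone
tone1 = np.sin(2 * np.pi * 350 * t)
tone2 = np.sin(2 * np.pi * 440 * t)

melody = np.concatenate([tone1, tone2])        # hard junction -&gt; audible click

# Low-pass the result to attenuate the high-frequency content of the jump.
# A 4th-order Butterworth at 2 kHz is a guess.
b, a = signal.butter(4, 2000.0 / (rate / 2.0))
smoothed = signal.filtfilt(b, a, melody)
</code></pre> <p>A short linear cross-fade at the junction is another common way to get the same effect without filtering the whole signal.</p>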
python|numpy|scipy|signals|sine-wave
1
377,725
23,390,455
Multiply number of distances in distance matrix prior to histogram binning
<p>I am using scipy.spatial.distance.pdist to calculate the distances from an array of coordinates, followed by numpy.histogram to bin the results. Currently this treats each coordinate as though one object were there; however, I have multiple objects at that same coordinate. One option is to change the arrays so that each coordinate occurs multiple times, once for each object at that coordinate, however this would substantially increase the size of the array and the time of calculation for pdist, since it scales as N^2, and this is prohibitively costly and speed is important in this application.</p> <p>A second approach would be to treat the resulting distance matrix such that each distance is repeated ni*nj times, where ni is the number of objects at coordinate i and nj the number of objects at coordinate j. This would transform the original MxM distance matrix into an NxN distance matrix, where M is the total number of coordinates in the array, but N is the total number of objects. But again, this seems to be unnecessarily costly since all I really need to do is somehow tell the histogramming function to multiply the number of events at distance ij by ni*nj. In other words, is there any way to tell numpy.histogram that there's not just one object at distance ij, but that there are ni*nj objects instead?</p> <p>Other ideas are obviously welcome.</p> <p>Edit:</p> <p>This is an example of the first approach.</p> <pre><code>import numpy as np
from scipy.spatial import distance  # (fixed import: pdist lives in scipy.spatial.distance)
import matplotlib.pyplot as plt

#create array of 5 coordinates in 3D
coords = np.random.random(15).reshape(5,3)
'''array([[ 0.66500534,  0.10145476,  0.92528492],
       [ 0.52677892,  0.07756804,  0.50976737],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.02707651,  0.21878467,  0.55855427],
       [ 0.81564621,  0.82750694,  0.53083443]])'''

#number of objects at each coordinate
objects = np.random.randint(1,10,5)
#array([5, 3, 8, 5, 1])

#create new array with coordinates for each individual object
new_coords = np.zeros((objects.sum(),3))

#there's surely a simpler way to do this
j = 0
for coord in range(coords.shape[0]):
    for i in range(objects[coord]):
        new_coords[j] = coords[coord]
        j += 1

'''new_coords
array([[ 0.66500534,  0.10145476,  0.92528492],
       [ 0.66500534,  0.10145476,  0.92528492],
       [ 0.66500534,  0.10145476,  0.92528492],
       [ 0.66500534,  0.10145476,  0.92528492],
       [ 0.66500534,  0.10145476,  0.92528492],
       [ 0.52677892,  0.07756804,  0.50976737],
       [ 0.52677892,  0.07756804,  0.50976737],
       [ 0.52677892,  0.07756804,  0.50976737],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.50030508,  0.37635556,  0.20828815],
       [ 0.02707651,  0.21878467,  0.55855427],
       [ 0.02707651,  0.21878467,  0.55855427],
       [ 0.02707651,  0.21878467,  0.55855427],
       [ 0.02707651,  0.21878467,  0.55855427],
       [ 0.02707651,  0.21878467,  0.55855427],
       [ 0.81564621,  0.82750694,  0.53083443]])'''

#calculate distance matrix of old and new arrays
distances_old = distance.pdist(coords)
distances_new = distance.pdist(new_coords)

#calculate and plot normalized histograms (typically just use np.histogram without plotting)
plt.hist(distances_old, range=(0,1), alpha=.5, normed=True)
(array([ 0.,  0.,  0.,  0.,  2.,  1.,  2.,  2.,  2.,  1.]),
 array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9,  1. ]),
 &lt;a list of 10 Patch objects&gt;)

plt.hist(distances_new, range=(0,1), alpha=.5, normed=True)
(array([ 2.20779221,  0.        ,  0.        ,  0.        ,  1.68831169,
         0.64935065,  2.07792208,  2.81385281,  0.34632035,  0.21645022]),
 array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9,  1. ]),
 &lt;a list of 10 Patch objects&gt;)

plt.show()
</code></pre> <p><img src="https://i.stack.imgur.com/8narO.png" alt="histograms"></p> <p>The second approach would instead treat the distance matrix rather than the coordinate matrix, but I haven't figured that code out yet.</p> <p>Both approaches seem inefficient to me and I think manipulating the binning process of np.histogram is more likely to be efficient, since it's just basic multiplication, but I'm not sure how to tell np.histogram to treat each coordinate as having a variable number of objects to count.</p>
<p>Something like this might work:</p> <pre><code>from scipy.spatial import distance

positions = np.random.rand(10, 2)
counts = np.random.randint(1, 5, len(positions))

distances = distance.pdist(positions)
i, j = np.triu_indices(len(positions), 1)

bins = np.linspace(0, 1, 10)
h, b = np.histogram(distances, bins=bins, weights=counts[i]*counts[j])
</code></pre> <p>It checks out compared to repeating, excepting the <code>0</code>-distances:</p> <pre><code>repeated = np.repeat(positions, counts, 0)
rdistances = distance.pdist(repeated)
hr, br = np.histogram(rdistances, bins=bins)

In [83]: h
Out[83]: array([11, 22, 27, 43, 67, 46, 40,  0, 19,  0])

In [84]: hr
Out[84]: array([36, 22, 27, 43, 67, 46, 40,  0, 19,  0])
</code></pre>
python|numpy|scipy|histogram
1
377,726
35,563,351
Pandas Split Column String and Plot unique values
<p>I have a dataframe <code>Df</code> that looks like this:</p> <pre><code>    Country             Year
0   Australia, USA      2015
1   USA, Hong Kong, UK  1982
2   USA                 2012
3   USA                 1994
4   USA, France         2013
5   Japan               1988
6   Japan               1997
7   USA                 2013
8   Mexico              2000
9   USA, UK             2005
10  USA                 2012
11  USA, UK             2014
12  USA                 1980
13  USA                 1992
14  USA                 1997
15  USA                 2003
16  USA                 2004
17  USA                 2007
18  USA, Germany        2009
19  Japan               2006
20  Japan               1995
</code></pre> <p>I want to make a bar chart for the <code>Country</code> column. If I try this</p> <pre><code>Df.Country.value_counts().plot(kind='bar')
</code></pre> <p>I get this plot</p> <p><a href="https://i.stack.imgur.com/DqOyg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DqOyg.png" alt="enter image description here"></a></p> <p>which is incorrect because it doesn't separate the countries. My goal is to obtain a bar chart that plots the count of each country in the column, but to achieve that, first I have to somehow split the string in each row (if needed) and then plot the data. I know I can use <code>Df.Country.str.split(', ')</code> to split the strings, but if I do this I can't plot the data.</p> <p>Does anyone have an idea how to solve this problem?</p>
<p>You could use the vectorized <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.str.split.html" rel="nofollow noreferrer">Series.str.split</a> method to split the <code>Country</code>s:</p> <pre><code>In [163]: df['Country'].str.split(r',\s+', expand=True)
Out[163]:
           0          1     2
0  Australia        USA  None
1        USA  Hong Kong    UK
2        USA       None  None
3        USA       None  None
4        USA     France  None
...
</code></pre> <hr> <p>If you <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer">stack</a> this DataFrame to move all the values into a single column, then you can apply <code>value_counts</code> and plot as before:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(
    {'Country': ['Australia, USA', 'USA, Hong Kong, UK', 'USA', 'USA',
                 'USA, France', 'Japan', 'Japan', 'USA', 'Mexico', 'USA, UK',
                 'USA', 'USA, UK', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA',
                 'USA, Germany', 'Japan', 'Japan'],
     'Year': [2015, 1982, 2012, 1994, 2013, 1988, 1997, 2013, 2000, 2005,
              2012, 2014, 1980, 1992, 1997, 2003, 2004, 2007, 2009, 2006, 1995]})

counts = df['Country'].str.split(r',\s+', expand=True).stack().value_counts()
counts.plot(kind='bar')

plt.show()
</code></pre> <p><img src="https://i.stack.imgur.com/6vXdl.png" width="300"></p>
python|pandas|plot|bar-chart
6
377,727
35,487,005
Pandas and dataframe: How to transform a ordinal variable in a binary variable?
<p>I have a column of my dataframe <code>df = pd.read_csv('somedata')</code> namely df['rank'] which is an ordinal variable. I want to create a binary column where df['rkGood'] is equal to 1 when df['rank'] ranges from 20 to 40, and 0 otherwise.</p> <p>I am trying something like this, but it is not working:</p> <pre><code>df['rkGood']= 1 if (df['rank']&gt;20 &amp; df['rank']&lt;=40) else 0 </code></pre> <p>How should I do this?</p>
<p>First initialize your column to zeros, then use <code>loc</code> as follows:</p> <pre><code>df['rkGood'] = 0
df.loc[(df['rank'] &gt; 20) &amp; (df['rank'] &lt;= 40), 'rkGood'] = 1
</code></pre> <p>Or...</p> <pre><code>df['rkGood'] = 0
df.loc[df['rank'].between(20, 40, inclusive=True), 'rkGood'] = 1
</code></pre> <p>(Note <code>df['rank']</code> rather than <code>df.rank</code>: <code>rank</code> is also the name of a DataFrame method, so attribute access would not reach the column.)</p>
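<p>As a side note, the same thing can be written in one line by casting the boolean mask directly (a sketch, assuming <code>df['rank']</code> is numeric):</p> <pre><code>df['rkGood'] = ((df['rank'] &gt; 20) &amp; (df['rank'] &lt;= 40)).astype(int)
</code></pre>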
python|pandas
2
377,728
35,627,827
Numpy: How to query multiple multidimensional arrays?
<p>Assume I have 3 arrays:</p> <pre><code>a=np.array([[1,2,3], [3,4,5], [6,7,8]]) b=np.array([[1], [5], [4]]) c=np.array([[1], [2], [3]]) </code></pre> <p>Now, I want to select all rows from a, which have a matching row with b=4 and c=3.</p> <p><strong>So the question is, how to do:</strong></p> <pre><code>d = np.subset(a,'b==4 and c==3') </code></pre> <p>In this case I expect as output</p> <pre><code>[6,7,8] </code></pre>
<p>This will do:</p> <pre><code>&gt;&gt;&gt; a = np.array([[1,2,3],
...               [3,4,5],
...               [6,7,8]])
&gt;&gt;&gt;
&gt;&gt;&gt; b = np.array([[1],
...               [5],
...               [4]])
&gt;&gt;&gt;
&gt;&gt;&gt; c = np.array([[1],
...               [2],
...               [3]])
&gt;&gt;&gt;
&gt;&gt;&gt; a[((b==4) &amp; (c==3)).squeeze()]
array([[6, 7, 8]])
</code></pre>
python|numpy|scipy
4
377,729
35,534,057
iterating dataframe columns after groupby
<p>I am trying to group csv data read into a dataframe using pandas. I am doing a groupby on the user_id column and am able to do so successfully. How can I retrieve the column data after the groupby result? My csv columns are like this: , user_id, status</p> <pre><code>import pandas as pd
import csv

df = pd.DataFrame(pd.read_csv("test1.csv"))
grouped = df.groupby('user_id')
#writer = csv.writer(open("rewww.csv", 'w'))
for user_id, status in grouped:
    print status
</code></pre>
<p>You are almost there:</p> <pre><code>for ix, grouped_df in grouped:
    print grouped_df['status']
</code></pre>
python|csv|pandas
0
377,730
35,461,548
Filling data using .fillNA(), data pulled from Quandl
<p>I've pulled some stock data from Quandl for both Crude Oil prices (WTI) and Caterpillar (CAT) price. When I concatenate the two dataframes together I'm left with some NaNs. My ultimate goal is to run a pearsonr() to assess the correlation (along with p-values), however I can't get pearsonr() to work because of all the NaNs. So I'm trying to clean them up. When I use the .fillna() function it doesn't seem to be working. I've even tried .interpolate() as well as .dropna(). None of them appear to work. Here is my working code.</p> <pre><code>import Quandl
import pandas as pd
import numpy as np

#WTI Data#
WTI_daily = Quandl.get("DOE/RWTC", collapse="daily", trim_start="1986-10-10", trim_end="1986-10-15")
WTI_daily.columns = ['WTI']

#CAT Data
CAT_daily = Quandl.get("YAHOO/CAT.6", collapse="daily", trim_start="1986-10-10", trim_end="1986-10-15")
CAT_daily.columns = ['CAT']

#Combine Data Frames
daily_price_df = pd.concat([CAT_daily, WTI_daily], axis=1)
print daily_price_df

#Verify they are dataFrames:
def really_a_df(var):
    if isinstance(var, pd.DataFrame):
        print "DATAFRAME SUCCESS"
    else:
        print "Wahh Wahh"
    return 'done'

print really_a_df(daily_price_df)

#Fill NAs  #CAN'T GET THIS TO WORK!!
daily_price_df.fillna(method='pad', limit=8)
print daily_price_df

#Try to interpolate  #CAN'T GET THIS TO WORK!!
daily_price_df.interpolate()
print daily_price_df

#Drop NAs  #CAN'T GET THIS TO WORK!!
daily_price_df.dropna(axis=1)
print daily_price_df
</code></pre> <p>For what it's worth, I've managed to get the function working when I create a dataframe from scratch using this code:</p> <pre><code>import pandas as pd
import numpy as np

d = {'a': 0., 'b': 1., 'c': 2., 'd': None, 'e': 6}
d_series = pd.Series(d, index=['a', 'b', 'c', 'd', 'e'])
d_df = pd.DataFrame(d_series)
d_df = d_df.fillna(method='pad')
print d_df
</code></pre> <p>Initially I was thinking that perhaps my data wasn't in dataframe form, but I used a simple test to confirm they are in fact dataframes. The only conclusion that remains (in my opinion) is that it is something about the structure of the Quandl dataframe, or possibly its TimeSeries nature. Please know I'm somewhat new to python, so structure answers for a beginner/novice. Any help is much appreciated!</p>
<p>Pot shot: have you just forgotten to assign the result, or to use the inplace flag? By default <code>fillna</code> returns a new frame rather than modifying the original.</p> <pre><code>daily_price_df = daily_price_df.fillna(method='pad', limit=8)
# or
daily_price_df.fillna(method='pad', limit=8, inplace=True)
</code></pre>
python|pandas|data-cleaning
2
377,731
35,679,118
Unable to convert MATLAB to Python code for repmat and symmetry
<p>MATLAB code:</p> <pre><code>n = 2048;
d = 1;
order = 2048;
nn = [-(n/2):(n/2-1)]';
h = zeros(size(nn),'single');
h(n/2+1) = 1 / 4;
odd = mod(nn,2) == 1;
h(odd) = -1 ./ (pi * nn(odd)).^2;
f_kernel = abs(fft(h))*2;
filt = f_kernel(1:order/2+1)';
w = 2*pi*(0:size(filt,2)-1)/order;
filt(w&gt;pi*d) = 0;
filt = [filt , filt(end-1:-1:2)];
filt = repmat(filt',[1 1024]);
</code></pre> <p>Python code:</p> <pre><code>import numpy as np
import numpy.matlib
from numpy.matlib import repmat
from numpy.fft import fft  # (added: fft was not imported)

d = 1
filt_length = 2048
nn = np.linspace(-1024, 1023, 2048)
nn = np.transpose(nn)
h = np.zeros((2048))
h[1024] = 0.25
odd = (nn % 2)
for i in range(0, 2048):
    if odd[i] == 1:
        h[i] = -1 / ((np.pi * nn[i]) ** 2)
f_kernel = abs(fft(h)) * 2
filt = np.transpose(f_kernel[0:1024])
w = (np.pi) * np.linspace(0, 1, 1025)
</code></pre> <p>However, I have been unable to convert the last 3 lines of the MATLAB code to Python. Any suggestions? The second-to-last step of the MATLAB code creates a ramp filter of size 2048 (it goes from 0 to 1 over 1024 steps and from 1 back to 0 over another 1024 steps). The last repmat makes the size of filt (2048, 1024).</p>
<p>If I were on my computer, I'd start Octave and IPython sessions, and start replicating the code line by line. I'd use smaller dimensions to easily watch the results. I'd pay special attention to shapes. Since my Matlab is rusty it is easier to do it this way than in my head. And more reliable.</p> <p><code>np.arange</code> is easier to use than <code>linspace</code>. <code>transpose</code> is unnecessary with 1d arrays. <code>np.repeat</code> or <code>np.tile</code> will probably perform <code>repmat's</code> job. I don't recall exactly what <code>repmat</code> does. Initially I would copy loops, but I'd try replace them when code is working. <code>fft</code> may require <code>scipy</code>.</p>
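<p>For the three lines the question is stuck on, here is a rough, untested sketch of a translation (it assumes <code>filt</code> and <code>w</code> are 1-d arrays of matching length, which the posted Python code doesn't quite guarantee yet):</p> <pre><code>import numpy as np

filt[w &gt; np.pi * d] = 0                       # filt(w&gt;pi*d) = 0;
filt = np.concatenate([filt, filt[-2:0:-1]])  # [filt , filt(end-1:-1:2)]: mirror without the endpoints
filt = np.tile(filt[:, None], (1, 1024))      # repmat(filt',[1 1024]) -&gt; shape (2048, 1024)
</code></pre>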
python|matlab|python-2.7|numpy
0
377,732
35,612,629
Most efficient way to create non-redundant correlation matrix Python?
<p>I feel like numpy, scipy, or networkx has a method to do this but I just haven't figured it out yet.</p> <p><strong>My question is how to create a non-redundant correlation matrix, in the form of a DataFrame, from a redundant correlation matrix for LARGE DATASETS in the MOST EFFICIENT way (in Python)?</strong></p> <p>I'm using this method on a 7000x7000 matrix and it's taking forever on my MacBook Air with 4GB RAM (I know, I definitely shouldn't use this for programming but that's another discussion).</p> <p>Example of a redundant correlation matrix</p> <p><a href="https://i.stack.imgur.com/tzVFC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tzVFC.png" alt="enter image description here"></a></p> <p>Example of a non-redundant correlation matrix</p> <p><a href="https://i.stack.imgur.com/mxl8t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mxl8t.png" alt="enter image description here"></a></p> <p>I gave a pretty naive way of doing it below but there has to be a better way. I like storing my matrices in sparse matrices and converting them to dataframes for storage purposes.</p> <pre><code>import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix  # (added: csr_matrix was not imported)

#Example DataFrame
L_test = [[0.999999999999999, 0.374449352805868, 0.000347439531148995, 0.00103026903356954, 0.0011830950375467401],
          [0.374449352805868, 1.0, 1.17392596672424e-05, 1.49428208843456e-07, 1.216664263989e-06],
          [0.000347439531148995, 1.17392596672424e-05, 1.0, 0.17452569907144502, 0.238497202355299],
          [0.00103026903356954, 1.49428208843456e-07, 0.17452569907144502, 1.0, 0.7557000865939779],
          [0.0011830950375467401, 1.216664263989e-06, 0.238497202355299, 0.7557000865939779, 1.0]]
labels = ['AF001', 'AF002', 'AF003', 'AF004', 'AF005']
DF_1 = pd.DataFrame(L_test, columns=labels, index=labels)

#Create non-redundant similarity matrix
n, m = DF_1.shape  #they will be the same since it's adjacency

#Empty array to fill
A_tmp = np.zeros((n, m))

#Copy part of the array
for i in range(n):
    for j in range(m):
        A_tmp[i, j] = DF_1.iloc[i, j]
        if j == i:
            break

#Make array sparse for storage
A_csr = csr_matrix(A_tmp)

#Recreate DataFrame
DF_2 = pd.DataFrame(A_csr.todense(), columns=DF_1.columns, index=DF_1.index)
DF_2.head()
</code></pre>
<p>I think you can create an array with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.tril.html#numpy.tril" rel="nofollow"><code>np.tril</code></a> and then multiply it with the <code>DataFrame</code> <code>DF_1</code>:</p> <pre><code>print np.tril(np.ones(DF_1.shape))
[[ 1.  0.  0.  0.  0.]
 [ 1.  1.  0.  0.  0.]
 [ 1.  1.  1.  0.  0.]
 [ 1.  1.  1.  1.  0.]
 [ 1.  1.  1.  1.  1.]]

print np.tril(np.ones(DF_1.shape)) * DF_1
          AF001         AF002     AF003   AF004  AF005
AF001  1.000000  0.000000e+00  0.000000  0.0000      0
AF002  0.374449  1.000000e+00  0.000000  0.0000      0
AF003  0.000347  1.173926e-05  1.000000  0.0000      0
AF004  0.001030  1.494282e-07  0.174526  1.0000      0
AF005  0.001183  1.216664e-06  0.238497  0.7557      1
</code></pre>
python|pandas|matrix|dataframe|adjacency-matrix
3
377,733
35,401,041
Concatenation of 2 1D `numpy` Arrays Along 2nd Axis
<p>Executing</p> <pre><code>import numpy as np
t1 = np.arange(1,10)
t2 = np.arange(11,20)
t3 = np.concatenate((t1,t2),axis=1)
</code></pre> <p>results in a</p> <pre><code>Traceback (most recent call last):
  File "&lt;ipython-input-264-85078aa26398&gt;", line 1, in &lt;module&gt;
    t3 = np.concatenate((t1,t2),axis=1)
IndexError: axis 1 out of bounds [0, 1)
</code></pre> <p>Why does it report that axis 1 is out of bounds?</p>
<p>Your title explains it - a 1d array does not have a 2nd axis!</p> <p>But having said that, on my system, as on <code>@Oliver W.</code>'s, it does not produce an error:</p> <pre><code>In [655]: np.concatenate((t1,t2),axis=1)
Out[655]: array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 11, 12, 13, 14, 15, 16, 17, 18, 19])
</code></pre> <p>This is the result I would have expected from <code>axis=0</code>:</p> <pre><code>In [656]: np.concatenate((t1,t2),axis=0)
Out[656]: array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 11, 12, 13, 14, 15, 16, 17, 18, 19])
</code></pre> <p>It looks like <code>concatenate</code> ignores the <code>axis</code> parameter when the arrays are 1d. I don't know if this is something new in my 1.9 version, or something old.</p> <p>For more control consider using the <code>vstack</code> and <code>hstack</code> wrappers that expand array dimensions if needed:</p> <pre><code>In [657]: np.hstack((t1,t2))
Out[657]: array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 11, 12, 13, 14, 15, 16, 17, 18, 19])

In [658]: np.vstack((t1,t2))
Out[658]:
array([[ 1,  2,  3,  4,  5,  6,  7,  8,  9],
       [11, 12, 13, 14, 15, 16, 17, 18, 19]])
</code></pre>
arrays|numpy|concatenation|numpy-ndarray|index-error
19
377,734
35,513,574
Multiple Categorical Input Variables in Tensorflow
<p>I have a data-set in which each feature vector has 50 features, 45 of which are categorical. I am having trouble sending the categorical variables into tensorflow. I have found an example <a href="https://medium.com/@ilblackdragon/tensorflow-tutorial-part-3-c5fc0662bc08" rel="nofollow">tutorial for tensorflow with categorical variables</a>, but do not understand how to adapt this to work with a set which has both types of data, and multiple features. My first attempt is below, but this does not encode the majority of variables.</p> <pre><code>input_classes, input_gradients, outputs = databank.get_dataset()

print("Creating feature matrix")
inputs = np.array(input_classes, dtype=np.int32)
outputs = np.array(outputs, dtype=np.int32)

random.seed(42)
input_train, input_test, output_train, output_test = cross_validation.train_test_split(
    inputs, outputs, test_size=0.2, random_state=42)

print("Creating DNN")
# Prepare the neural net
def my_model(X, y):
    # DNN with 10,20,10 hidden layers and dropout chance of 0.5
    layers = skflow.ops.dnn(X, [10, 20, 10], keep_prob=0.5)
    return skflow.models.logistic_regression(layers, y)

classifier = skflow.TensorFlowEstimator(model_fn=my_model, n_classes=2)

print("Testing DNN")
# Test the neural net
classifier.fit(input_train, output_train)
score = metrics.accuracy_score(classifier.predict(input_test), output_test)
print("Accuracy: %f" % score)
</code></pre> <p>I think the real problem is that I don't really understand how to handle the input 'tensor' X in the my_model function in the above code.</p>
<p>Use a categorical processor to map your categories into integers before inputting, like so:</p> <pre><code>cat_processor = skflow.preprocessing.CategoricalProcessor()
X_train = np.array(list(cat_processor.fit_transform(X_train)))
X_test = np.array(list(cat_processor.transform(X_test)))
n_classes = len(cat_processor.vocabularies_[0])
</code></pre>
python|machine-learning|tensorflow|skflow
0
377,735
35,493,275
Contouring non-uniform 2d data in python/matplotlib above terrain
<p>I am having trouble contouring some data in matplotlib. I am trying to plot a vertical cross-section of temperature that I sliced from a 3d field of temperature.</p> <p>My temperature array (T) is of size 50*300 where 300 is the number of horizontal levels which are evenly spaced. However, 50 is the number of vertical levels that are: a) non-uniformly spaced; and b) have a different starting level for each vertical column. As in, there are always 50 vertical levels, but sometimes they span from 100 - 15000 m, and sometimes from 300 - 20000 m (due to terrain differences).</p> <p>I also have a 2d array of height (Z; same shape as T), a 1d array of horizontal location (LAT), and a 1d array of terrain height (TER).</p> <p>I am trying to get a plot similar to the one <a href="http://www2.mmm.ucar.edu/wrf/OnLineTutorial/Graphics/NCL/Examples/CROSS_SECTION/plt_CrossSection_smooth4-1.png" rel="nofollow">here</a> in which you can see the terrain blacked out and the data contoured around it.</p> <p>My first attempt to plot this was to create a meshgrid of horizontal distance and height, and then contourf temperature with those arguments as well. However numpy.meshgrid requires 1d inputs, and my height is a 2d variable. Doing something like this only begins contouring upwards from the first column:</p> <pre><code>ax1 = plt.gca()
z1, x1 = np.meshgrid(LAT, Z[:,0])
plt.contourf(z1, x1, T)
ax1.fill_between(z1[0,:], 0, TER, facecolor='black')
</code></pre> <p>Which produces <a href="http://i.stack.imgur.com/z2c9U.png" rel="nofollow">this</a>. If I use Z[:,-1] in the meshgrid, it contours underground for columns to the left, which obviously I don't want. What I really would like is to use some 2d array for Z in the meshgrid, but I'm not sure how to go about that.</p> <p>I've also looked into the griddata function, but that requires 1d inputs as well. Does anyone have any ideas on how to approach this? Any help is appreciated!</p>
<p>From what I understand, your data is structured. Then you can directly use the <code>contourf</code> or <code>contour</code> option in <code>matplotlib</code>. The code you present has the right idea, but you should use</p> <pre><code>x1, z1 = np.meshgrid(LAT, Z[:,0])
plt.contourf(x1, Z, T)
</code></pre> <p>for the contours. I have an example below.</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

L, H = np.pi*np.mgrid[-1:1:100j, -1:1:100j]
T = np.cos(L)*np.cos(2*H)
H = np.cos(L) + H
plt.contourf(L, H, T, cmap="hot")
plt.show()
</code></pre> <p>Note that the grid is generated with the original bounding box, but the plot is made with the height that has been transformed, not the initial one. Also, you can use <a href="http://matplotlib.org/examples/pylab_examples/tricontour_demo.html" rel="nofollow noreferrer"><code>tricontour</code></a> for unstructured data (or in general), but then you will need to generate the triangulation (which in your case is straightforward).</p> <p><a href="https://i.stack.imgur.com/74XPD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/74XPD.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|plot
1
377,736
35,757,439
would it be straight forward to implement a spatial transformer network in tensorflow?
<p>I am interested in trying things out with a spatial transformer network, and I can't find any implementation of it in caffe or tensorflow, which are the only two libraries I'm interested in using. I have a pretty good grasp of tensorflow, but was wondering if it would be straightforward to implement with the existing building blocks that tensorflow offers, without having to do something too complicated like writing a custom C++ module.</p>
<p>Yes, it is very straightforward to set up the Tensorflow graph for a spatial transformer network with the existing API.</p> <p>You can find an example implementation in Tensorflow here [1].</p> <p>[1] <a href="https://github.com/daviddao/spatial-transformer-tensorflow" rel="noreferrer">https://github.com/daviddao/spatial-transformer-tensorflow</a></p>
tensorflow
5
377,737
35,380,933
How to merge two pandas DataFrames based on a similarity function?
<p>Given dataset 1</p> <pre><code>name,x,y
st. peter,1,2
big university portland,3,4
</code></pre> <p>and dataset 2</p> <pre><code>name,x,y
saint peter,3,4
uni portland,5,6
</code></pre> <p>The goal is to merge on</p> <pre><code>d1.merge(d2, on="name", how="left")
</code></pre> <p>There are no exact matches on name though. So I'm looking to do a kind of fuzzy matching. The technique does not matter in this case, more how to incorporate it efficiently into pandas.</p> <p>For example, <code>st. peter</code> might match <code>saint peter</code> in the other, but <code>big university portland</code> might deviate too much from <code>uni portland</code> for us to match them.</p> <p>One way to think of it is to allow joining with the lowest Levenshtein distance, but only if it is below 5 edits (<code>st. --&gt; saint</code> is 4).</p> <p>The resulting dataframe should only contain the row <code>st. peter</code>, and contain both "name" variations, and both <code>x</code> and <code>y</code> variables.</p>
<p>Did you look at <a href="https://pypi.python.org/pypi/fuzzywuzzy" rel="noreferrer">fuzzywuzzy</a>?</p> <p>You might do something like:</p> <pre><code>import pandas as pd
import fuzzywuzzy.process as fwp

choices = list(df2['name'])

def fmatch(row):
    minscore = 95  # or whatever score works for you
    choice, score = fwp.extractOne(row['name'], choices)  # row['name'], not row.name (which is the index label)
    return choice if score &gt; minscore else None

df1['df2_name'] = df1.apply(fmatch, axis=1)

merged = pd.merge(df1, df2,
                  left_on='df2_name',
                  right_on='name',
                  suffixes=['_df1', '_df2'],
                  how='outer')  # assuming you want to keep unmatched records
</code></pre> <p>Caveat emptor: I haven't tried to run this.</p>
python|pandas|merge|fuzzy-comparison
6
377,738
35,411,879
Taking an average of an array according to another array of indices
<p>Say I have an array that looks like this:</p> <pre><code>a = np.array([0, 20, 40, 30, 60, 35, 15, 18, 2])
</code></pre> <p>and I have an array of indices that I want to average between:</p> <pre><code>averaging_indices = np.array([2, 4, 7, 8])
</code></pre> <p>What I want to do is to average the elements of array a according to the averaging_indices array. Just to make that clear, I want to take the averages:</p> <pre><code>np.mean(a[0:2]), np.mean(a[2:4]), np.mean(a[4:7]), np.mean(a[7:8]), np.mean(a[8:])
</code></pre> <p>and I want to return an array that then has the correct dimensions, in this case</p> <pre><code>result = [10, 35, 36.66, 18, 2]
</code></pre> <p>Can anyone think of a neat way to do this? The only way I can imagine is by looping, which is very anti-numpy.</p>
<p>Here's a vectorized approach with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a> -</p> <pre><code># Create "shifts array" and then IDs array for use with np.bincount later on
shifts_array = np.zeros(a.size, dtype=int)
shifts_array[averaging_indices] = 1
IDs = shifts_array.cumsum()

# Use np.bincount to get the summations for each tag and also tag counts.
# Thus, get tagged averages as final output.
out = np.bincount(IDs, a) / np.bincount(IDs)
</code></pre> <p>Sample input, output -</p> <pre><code>In [60]: a
Out[60]: array([ 0, 20, 40, 30, 60, 35, 15, 18,  2])

In [61]: averaging_indices
Out[61]: array([2, 4, 7, 8])

In [62]: out
Out[62]: array([ 10.        ,  35.        ,  36.66666667,  18.        ,   2.        ])
</code></pre>
python|arrays|numpy|mean
1
377,739
35,456,290
How to slice off 1 pixel layer from an image data array with numpy or similar
<p>I have a FITS image of specified dimensions, in pixels, and I wish to slice one pixel off the top and one off the bottom. I have attempted to use:</p> <pre><code>sliced_array = my_array[1:-1,1:0]
</code></pre> <p>However, when I query the shape of the newly sliced array using <code>print(sliced_array.shape)</code>, this gives me:</p> <blockquote> <p>(4070, 0)</p> </blockquote> <p>How would this be achieved?</p> <p><strong>N.B.</strong> I'm working in Python, with the numpy and astropy modules.</p>
<p>You can slice off the topmost and bottommost rows like this:</p> <pre><code>my_array[1:-1,:]
</code></pre> <p>preserving all the columns while excluding the top and bottom rows.</p>
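<p>If you also wanted to trim one pixel from the left and right edges (which the attempted <code>1:0</code> column slice suggests), the same pattern extends to both axes:</p> <pre><code>sliced_array = my_array[1:-1, 1:-1]  # drops a 1-pixel border on all four sides
</code></pre>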
python|arrays|numpy
2
377,740
35,493,764
After calculating a tensor, how can I show it as a image?
<p>I have a one-dimensional numpy array. After performing a calculation in TensorFlow, I get a <code>tf.Tensor</code> as output. I am trying to reshape it into a 2-dimensional array and show it as an image.</p> <p>If it were a numpy ndarray, I would know how to plot it as an image. But it is a tensor now!</p> <p>Although I tried <code>tensor.eval()</code> to convert it into a numpy array, I got an error saying "No default session".</p> <p>Can anyone teach me how to show a tensor as an image?</p> <pre><code>...
...
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# training
for i in range(1):
    sess.run(train_step, feed_dict={x: x_data.T, y_: y_data.T})

# testing
probability = tf.argmax(y,1);
sess.run(probability, feed_dict={x: x_test.T})

#show result
img_res = tf.reshape(probability,[len_y,len_x])
fig, ax = plt.subplots(ncols = 1)
# It is the following line that I do not know how to make work...
ax.imshow(np.asarray(img_res.eval()))  #how to plot a tensor ?#
plt.show()
...
...
</code></pre>
<p>The immediate error you're seeing is because <code>Tensor.eval()</code> only works when there is a <a href="https://www.tensorflow.org/versions/master/api_docs/python/client.html#get_default_session" rel="nofollow">"default <code>Session</code>"</a>. This requires that either (i) you're executing in a <code>with tf.Session():</code> block, (ii) you're executing in a <code>with sess.as_default():</code> block, or (iii) you're using <a href="https://www.tensorflow.org/versions/master/api_docs/python/client.html#InteractiveSession" rel="nofollow"><code>tf.InteractiveSession</code></a>.</p> <p>There are two simple workarounds to make your case work:</p> <pre><code># Pass the session to eval().
ax.imshow(img_res.eval(session=sess))

# Use sess.run().
ax.imshow(sess.run(img_res))
</code></pre> <p>Note that, as a larger point about visualizing your image, you might consider using the <a href="https://www.tensorflow.org/versions/master/api_docs/python/train.html#image_summary" rel="nofollow"><code>tf.image_summary()</code></a> op along with <a href="https://www.tensorflow.org/versions/master/how_tos/summaries_and_tensorboard/index.html" rel="nofollow">TensorBoard</a> to visualize tensors produced by a larger training pipeline.</p>
python|arrays|image|numpy|tensorflow
3
377,741
35,660,658
find if a list of list has items from another list
<p>We have two lists,</p> <pre><code>l = ["a","b","c"]
s = [["a","b","c"],
     ["a","d","c"],
     ["a-B1","b","c"],
     ["a","e","c"],
     ["a_2","c"],
     ["a","d-2"],
     ["a-3","b","c-1-1","d"]]
print l
print s
</code></pre> <p>Now, I am trying to see if each 2nd-level list of <code>s</code> has a fuzzy match to any of the items in list <code>l</code>.</p> <pre><code>matches = list()
matchlist2 = list()
for i in range(len(s)):
    matches.append([])
    for j in range(len(s[i])):
        for x in l:
            if s[i][j].find(x) &gt;= 0:
                print s[i][j]
                matches[i].append(True)
                break
        else:
            matches[i].append(False)
    matchlist2.append(all(x for x in matches[i]))
print matches
print matchlist2
</code></pre> <p>This gives me what was intended, but I am not happy with how many loops it has. I am also working with pandas, so a pandas solution would be great too. In pandas, these are just two columns of two dataframes.</p> <pre><code>[[True, True, True], [True, False, True], [True, True, True], [True, False, True], [True, True], [True, False], [True, True, True, False]]
</code></pre> <p>The second list checks whether all items in each sublist had a match.</p> <pre><code>[True, False, True, False, True, False, False]
</code></pre>
<p>I prefer this solution for conciseness and readability:</p> <pre><code>&gt;&gt;&gt; [all(any(x.startswith(y) for y in l) for x in sub) for sub in s] [True, False, True, False, True, False, False] </code></pre>
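<p>If you also want the per-item True/False lists from the question, the same idea works with one level less aggregation:</p> <pre><code>&gt;&gt;&gt; [[any(x.startswith(y) for y in l) for x in sub] for sub in s]
[[True, True, True], [True, False, True], [True, True, True], [True, False, True], [True, True], [True, False], [True, True, True, False]]
</code></pre>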
python|pandas
2
377,742
35,605,501
python where function doesn't work
<p>I'm dealing with a longitude array named <code>LON</code>, but I encountered some problems with the <code>numpy.where()</code> function.</p> <pre><code>&gt;&gt;&gt; print LON[777,777]
13.4635573678
&gt;&gt;&gt; print np.where(LON == 13.4635573678)[0]
[]
&gt;&gt;&gt; print np.where(LON == 13.4635573678)[1]
[]
</code></pre> <p>It doesn't find the <code>LON</code> entry where the array is equal to a value that certainly exists. Is the problem related to the fact that I'm dealing with double variables? Because until now <code>np.where()</code> always worked fine for integers, floats and strings...</p>
<p>One way to work around this might be to use <code>np.where</code> with an approximate match:</p> <pre><code>&gt;&gt;&gt; X = np.linspace(1, 10, 100).reshape((10,10))
&gt;&gt;&gt; np.where(abs(X - 6.3) &lt; 0.1)
(array([5, 5]), array([8, 9]))
&gt;&gt;&gt; X[np.where(abs(X - 6.3) &lt; 0.1)]
array([ 6.27272727,  6.36363636])
</code></pre> <p>Of course, this could give you more than one match if the epsilon (0.1 in this example) is too large, but the same could be the case when using an exact match, in case there are multiple entries in the array with the same coordinate.</p> <p>Edit: as pointed out <a href="https://stackoverflow.com/questions/35605501/python-where-function-doesnt-work/35606162#comment58905274_35606162">in comments</a>, you could also use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html" rel="nofollow noreferrer"><code>np.isclose</code></a>, even with Python 2.7 where <code>math.isclose</code> is not available. Note, however, that <code>np.isclose</code> will not give an array of coordinates, but an array of <code>True</code>/<code>False</code>. If you need the coordinates, you could pipe the result of <code>np.isclose</code> through <code>np.where</code> again.</p> <pre><code>&gt;&gt;&gt; np.where(np.isclose(X, 6.3636))
(array([5]), array([9]))
&gt;&gt;&gt; X[np.isclose(X, 6.3636)]
array([ 6.36363636])
</code></pre> <p>Alternatively, you could consider changing the function that gave you that coordinate from the <code>LON</code> array to also return the position of that value <em>in</em> the array. This way, you would not need to use <code>np.where</code> at all.</p>
python|numpy
3
377,743
12,030,398
concatenate multiple columns based on index in pandas
<p>As a follow up to <a href="https://stackoverflow.com/questions/12021730/can-pandas-handle-variable-length-whitespace-as-column-delimeters">this post</a>, I would like to concatenate a number of columns based on their index, but I am encountering some problems. In this example I get an AttributeError related to the map function. Help with this error would be appreciated, as would code that does the equivalent concatenation of columns.</p> <pre><code>#data
df = DataFrame({'A':['a','b','c'],
                'B':['d','e','f'],
                'C':['concat','me','yo'],
                'D':['me','too','tambien']})

#row function to concat rows with index greater than 2
def cnc(row):
    temp = []
    for x in range(2,(len(row))):
        if row[x] != None:
            temp.append(row[x])
    return map(concat, temp)

#apply function per row
new = df.apply(cnc,axis=1)

#Expected Output
new
concat me
me too
yo tambien
</code></pre> <p>thanks, zach cp</p>
<p>How about something like this?</p> <pre><code>&gt;&gt;&gt; from pandas import *
&gt;&gt;&gt; df = DataFrame({'A':['a','b','c'], 'B':['d','e','f'], 'C':['concat','me','yo'], 'D':['me','too','tambien']})
&gt;&gt;&gt; df
   A  B       C        D
0  a  d  concat       me
1  b  e      me      too
2  c  f      yo  tambien
&gt;&gt;&gt; df.columns[2:]
Index([C, D], dtype=object)
&gt;&gt;&gt; df[df.columns[2:]]
        C        D
0  concat       me
1      me      too
2      yo  tambien
&gt;&gt;&gt; [' '.join(row) for row in df[df.columns[2:]].values]
['concat me', 'me too', 'yo tambien']
&gt;&gt;&gt; df["new"] = [' '.join(row) for row in df[df.columns[2:]].values]
&gt;&gt;&gt; df
   A  B       C        D         new
0  a  d  concat       me   concat me
1  b  e      me      too      me too
2  c  f      yo  tambien  yo tambien
</code></pre> <p>If you have <code>None</code> objects floating around, you could handle that too. For example:</p> <pre><code>&gt;&gt;&gt; df["C"][1] = None
&gt;&gt;&gt; df
   A  B       C        D
0  a  d  concat       me
1  b  e    None      too
2  c  f      yo  tambien
&gt;&gt;&gt; rows = df[df.columns[2:]].values
</code></pre> <p>In near-English:</p> <pre><code>&gt;&gt;&gt; new = [' '.join(word for word in row if word is not None) for row in rows]
&gt;&gt;&gt; new
['concat me', 'too', 'yo tambien']
</code></pre> <p>Using <code>filter</code>:</p> <pre><code>&gt;&gt;&gt; new = [' '.join(filter(None, row)) for row in rows]
&gt;&gt;&gt; new
['concat me', 'too', 'yo tambien']
</code></pre> <p>etc. You could do it in one line but I think it's clearer to separate it.</p>
python|pandas
8
377,744
28,401,458
Pandas Time Index pick largest number/last number on given day
<p>I have a Pandas DataFrame object that looks something like this:</p> <pre><code>'Thing 1':
             Actual    Predicted       Error
Date
2014-09-15   140.00     0.000000  140.000000
2014-09-15   358.03   127.738344  230.291656
2014-09-16   373.04   326.672566   46.367434
2014-09-17   427.99   340.367941   87.622059
2014-09-18   484.87   390.505241   94.364759
2014-09-18   488.22   442.403505   45.816495
2014-09-18   491.57   445.460101   46.109899
2014-09-29   553.37   448.516697  104.853303
2014-09-29  1329.07   504.904052  824.165948
2014-10-01  1200.00  1212.665718   12.665718
2014-10-01  1289.78  1094.900089  194.879911
2014-10-07  1314.78  1176.816864  137.963136
</code></pre> <p>I would like to remove duplicate entries for the same day and pick the highest value for a given day. In other words, I want something like this:</p> <pre><code>'Thing 1':
             Actual    Predicted       Error
Date
2014-09-15   358.03   127.738344  230.291656
2014-09-16   373.04   326.672566   46.367434
2014-09-17   427.99   340.367941   87.622059
2014-09-18   491.57   445.460101   46.109899
2014-09-29  1329.07   504.904052  824.165948
2014-10-01  1289.78  1094.900089  194.879911
2014-10-07  1314.78  1176.816864  137.963136
</code></pre> <p>Essentially, because of how the DataFrame object was created, I always keep the last entry for a given day and discard any others.</p> <p>Any ideas? My mind is totally fried from a day of coding...</p>
<p>You can use <code>groupby</code> with <code>agg</code>. <code>agg</code> takes a dictionary of functions. Since within each group the highest observation is the last one, you can use the <code>last</code> function:</p> <pre><code>df.groupby('Date').agg({'Actual':'last','Predicted':'last','Error':'last'})
</code></pre> <p>This returns:</p> <pre><code>             Actual    Predicted       Error
Date
2014-09-15   358.03   127.738344  230.291656
2014-09-16   373.04   326.672566   46.367434
2014-09-17   427.99   340.367941   87.622059
2014-09-18   491.57   445.460101   46.109899
2014-09-29  1329.07   504.904052  824.165948
2014-10-01  1289.78  1094.900089  194.879911
2014-10-07  1314.78  1176.816864  137.963136
</code></pre>
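<p>Since every column here comes from the same (last) row of each group, a shorter equivalent is to take the last row of each group directly:</p> <pre><code>df.groupby('Date').last()
</code></pre>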
python-3.x|pandas|time-series|dataframe|anaconda
1
377,745
28,576,676
Calculate minimum value for each column of multi-indexed DataFrame in pandas
<p>I have a multi-indexed DataFrame with the following structure:</p> <pre><code>          metric1                    metric2
      experiment1  experiment2  experiment1  experiment2
run1          1.2          1.5          0.2          0.9
run2          2.1          0.7          0.4          4.3
</code></pre> <p>How can I calculate the minimum (maximum, mean, etc.) value for each column and get a DataFrame like this:</p> <pre><code>          metric1                    metric2
      experiment1  experiment2  experiment1  experiment2
run1          1.2          1.5          0.2          0.9
run2          1.6          0.9          0.3          3.1
run3          2.1          0.7          0.4          4.3
min           1.2          0.7          0.2          0.9
max           2.1          1.5          0.4          4.3
</code></pre>
<p>You can take the min, max, and mean, then use pd.concat to stitch everything together. You'll need to transpose (T), then transpose back to get the dataframe to concat the way you want.</p> <pre><code>In [91]: df = pd.DataFrame(dict(exp1=[1.2,2.1], exp2=[1.5,0.7]), index=["run1", "run2"])

In [92]: df_min, df_max, df_mean = df.min(), df.max(), df.mean()

In [93]: df_min.name, df_max.name, df_mean.name = "min", "max", "mean"

In [94]: pd.concat((df.T, df_min, df_max, df_mean), axis=1).T
Out[94]:
      exp1  exp2
run1  1.20   1.5
run2  2.10   0.7
min   1.20   0.7
max   2.10   1.5
mean  1.65   1.1
</code></pre> <p>Should work the same with a multi-index.</p>
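<p>A variant that avoids the double transpose is to build the summary rows as their own frame, reusing the named Series from above, and concatenate along the index (a sketch of the same idea):</p> <pre><code>summary = pd.DataFrame([df_min, df_max, df_mean])  # the Series names become the row labels
result = pd.concat((df, summary))
</code></pre>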
python|pandas
4
377,746
28,782,487
Read HDF5 based file as a numpy array in Python
<p>How can I load in a <code>.hws</code> file as a numpy array?<br> Based on the description in <a href="http://kingler.net/2007/05/22/90" rel="nofollow">http://kingler.net/2007/05/22/90</a>, which says it is an HDF5 based format, I found <a href="https://confluence.slac.stanford.edu/display/PSDM/How+to+access+HDF5+data+from+Python" rel="nofollow">https://confluence.slac.stanford.edu/display/PSDM/How+to+access+HDF5+data+from+Python</a>, which might be useful. However, by following the instructions described on that page:</p> <pre><code>hdf5_file_name = '/reg/d/psdm/XPP/xppcom10/hdf5/xppcom10-r0546.h5'
dataset_name = '/Configure:0000/Run:0000/CalibCycle:0000/Camera::FrameV1/XppSb4Pim.1:Tm6740.1/image'
event_number = 5

file = h5py.File(hdf5_file_name, 'r')
dataset = file[dataset_name]
arr1ev = dataset[event_number]
file.close()
</code></pre> <p>I got an error in the sixth line after I fixed the first three lines for my case:</p> <pre><code>file_name = '~/Desktop/audioData_A.hws'
item = h5py.File(file_name, 'r')
print item.name
ds = item['/']
print len(ds)
arr1ev = ds[1]
</code></pre> <p>which returns:</p> <pre><code>&lt;HDF5 group "/" (1 members)&gt;
1
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
&lt;ipython-input-59-b33014aeccc8&gt; in &lt;module&gt;()
      4 ds = item['/']
      5 print len(ds)
----&gt; 6 arr1ev = ds[1]

/usr/local/lib/python2.7/site-packages/h5py/_objects.so in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2405)()

/usr/local/lib/python2.7/site-packages/h5py/_objects.so in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2362)()

/usr/local/lib/python2.7/site-packages/h5py/_hl/group.pyc in __getitem__(self, name)
    158             raise ValueError("Invalid HDF5 object reference")
    159         else:
--&gt; 160             oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
    161
    162         otype = h5i.get_type(oid)

/usr/local/lib/python2.7/site-packages/h5py/_hl/base.pyc in _e(self, name, lcpl)
    119         else:
    120             try:
--&gt; 121                 name = name.encode('ascii')
    122                 coding = h5t.CSET_ASCII
    123             except UnicodeEncodeError:

AttributeError: 'int' object has no attribute 'encode'
</code></pre> <p>The problem is that I don't know how to get the information for <code>dataset_name</code> and <code>event_number</code>. From <code>item.parent</code>, the second line in my case, I guessed the corresponding values should be <code>/</code> and <code>1</code>, which doesn't work.</p> <p>Please find the file in the link if you need it: <a href="https://drive.google.com/file/d/0B1UyTlIs325wbWhwR3NTVmFpWTg/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0B1UyTlIs325wbWhwR3NTVmFpWTg/view?usp=sharing</a></p>
<p>I downloaded your file and took a look at it. After reading the <code>.hws</code> file you get a dictionary with exactly one key, <code>"wfm_group0"</code> (you can see the keys in the file by using <code>item.keys()</code>). The value for this key is again dictionary-like, with the keys <code>"axes"</code>, <code>"id"</code>, <code>"traces"</code> and <code>"vector"</code>.</p> <p>I don't know where you want to go from there, but maybe you can play around with this information and see where it gets you.</p>
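<p>A rough sketch for exploring the file further (untested; whether each member is a group or a dataset has to be checked before reading):</p> <pre><code>import h5py

item = h5py.File('audioData_A.hws', 'r')
grp = item['wfm_group0']
print grp.keys()     # expected: ['axes', 'id', 'traces', 'vector']

for name in grp:
    print name, grp[name]   # shows whether each member is a Group or a Dataset

# If, say, 'vector' turns out to be (or contain) a dataset,
# read it into memory as a numpy array:
# data = grp['vector'][...]
</code></pre>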
python|arrays|numpy|hdf5
2
377,747
28,507,052
How to split numpy array in batches?
<p>It sounds easy, but I don't know how to do it.</p> <p>I have a numpy 2d array of</p> <pre><code>X = (1783,30)
</code></pre> <p>and I want to split it into batches of 64. I write the code like this:</p> <pre><code>batches = abs(len(X) / BATCH_SIZE) + 1   # It gives 28
</code></pre> <p>I am trying to do prediction of results batchwise. So I fill the batch with zeros and I overwrite them with predicted results.</p> <pre><code>predicted = []

for b in xrange(batches):
    data4D = np.zeros([BATCH_SIZE,1,96,96])  #create 4D array, first value is batch_size, last number of inputs
    data4DL = np.zeros([BATCH_SIZE,1,1,1])   # need to create 4D array as output, first value is batch_size, last number of outputs
    data4D[0:BATCH_SIZE,:] = X[b*BATCH_SIZE:b*BATCH_SIZE+BATCH_SIZE,:]  # fill value of input xtrain

    #predict
    #print [(k, v[0].data.shape) for k, v in net.params.items()]
    net.set_input_arrays(data4D.astype(np.float32), data4DL.astype(np.float32))
    pred = net.forward()

    print 'batch ', b
    predicted.append(pred['ip1'])

print 'Total in Batches ', data4D.shape, batches
print 'Final Output: ', predicted
</code></pre> <p>But in the last batch, number 28, there are only 55 elements instead of 64 (1783 elements in total), and it gives</p> <p><code>ValueError: could not broadcast input array from shape (55,1,96,96) into shape (64,1,96,96)</code></p> <p>What is the fix for this?</p> <p>PS: the network prediction requires the batch size to be exactly 64.</p>
<p>I don't really understand your question either, especially what X looks like. If you want to create sub-groups of equal size from your array, try this:</p> <pre><code>def group_list(l, group_size):
    """
    :param l:           list
    :param group_size:  size of each group
    :return:            Yields successive group-sized lists from l.
    """
    for i in xrange(0, len(l), group_size):
        yield l[i:i+group_size]
</code></pre>
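<p>Applied to the question's zero-padding scheme, the broadcast error comes from assigning a 55-row chunk into a 64-row slice. One fix is to only overwrite the rows the chunk actually has (a sketch; it assumes, as your own assignment does, that the rows of <code>X</code> fit into the <code>(1,96,96)</code> slots of <code>data4D</code>):</p> <pre><code>chunk = X[b*BATCH_SIZE : b*BATCH_SIZE + BATCH_SIZE]
data4D[:len(chunk)] = chunk   # the remaining rows of data4D stay zero
</code></pre>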
python|numpy
14
377,748
28,664,103
How to transform a time series pandas dataframe using the index attributes?
<p>Given a dataframe with time series that looks like this:</p> <pre><code>                      Close
2015-02-20 14:00:00  1200.1
2015-02-20 14:10:00  1199.8
2015-02-21 14:00:00  1199.3
2015-02-21 14:10:00  1199.0
2015-02-22 14:00:00  1198.4
2015-02-22 14:10:00  1199.7
</code></pre> <p>How can I apply a function that transforms it into a dataframe like this:</p> <pre><code>            '14:00'  '14:10'
2015-02-20   1200.1   1199.8
2015-02-21   1199.3   1199.0
2015-02-22   1198.4   1199.7
</code></pre> <p>Note: This is a simplified example. The actual dataframe has many days and all the intraday minutes too, so it would be useful if it is an efficient procedure.</p> <p>Thanks</p>
<p>You can pivot on the <code>date</code> and <code>time</code> components of the index.</p> <p>Create the frame:</p> <pre><code>i = pd.to_datetime(['2015-02-20 14:00:00','2015-02-20 14:10:00','2015-02-21 14:20:00',
                    '2015-02-21 14:30:00','2015-02-22 14:40:00','2015-02-22 14:50:00'])
df = pd.DataFrame(index=i, data={'Close':[1200.1,1199.8,1199.3,1199.0,1198.4,1199.7]})
</code></pre> <p>pivot:</p> <pre><code>pd.pivot_table(df, index=df.index.date, columns=df.index.time, values='Close')
</code></pre> <p>returns:</p> <pre><code>            14:00:00  14:10:00  14:20:00  14:30:00  14:40:00  14:50:00
2015-02-20    1200.1    1199.8       NaN       NaN       NaN       NaN
2015-02-21       NaN       NaN    1199.3      1199       NaN       NaN
2015-02-22       NaN       NaN       NaN       NaN    1198.4    1199.7
</code></pre> <p>Use <code>aggfunc</code> as an argument of <code>pivot_table</code> to determine how data is aggregated if necessary.</p>
python|pandas|time-series|dataframe
1
377,749
28,595,465
ValueError: could not convert string to float, NumPy
<p>I have a script where I am writing a JSON web-service response to an Esri file geodatabase. I am receiving the error: ValueError: could not convert string to float: Microwaves</p> <p>I have used the exact same script before with U40 being the dtype for all strings.</p> <p>My script and results are below:</p> <pre><code>import json
import jsonpickle
import requests
import arcpy
import numpy

fc = "C:\MYLATesting.gdb\MYLA311"
if arcpy.Exists(fc):
    arcpy.Delete_management(fc)

f = open('C:\Users\Administrator\Desktop\myla311.json', 'r')
data = jsonpickle.encode(jsonpickle.decode(f.read()))

url = "myUrl"
headers = {'Content-type': 'text/plain', 'Accept': '/'}
r = requests.post(url, data=data, headers=headers)

sr = arcpy.SpatialReference(4326)
decoded = json.loads(r.text)

SRAddress = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['SRAddress']
latitude = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['Latitude']
longitude = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['Longitude']
CommodityType = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['ListOfLa311ElectronicWaste']['La311ElectronicWaste'][0]['Type']
ItemType = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['ListOfLa311ElectronicWaste']['La311ElectronicWaste'][0]['ElectronicWestType']
ItemCount = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['ListOfLa311ElectronicWaste']['La311ElectronicWaste'][0]['ItemCount']
CommodityType1 = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['ListOfLa311ElectronicWaste']['La311ElectronicWaste'][1]['Type']
ItemType1 = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['ListOfLa311ElectronicWaste']['La311ElectronicWaste'][1]['ElectronicWestType']
ItemCount1 = decoded['Response']['ListOfServiceRequest']['ServiceRequest'][0]['ListOfLa311ElectronicWaste']['La311ElectronicWaste'][1]['ItemCount']

print SRAddress
print latitude
print longitude
print CommodityType
print ItemType
print ItemCount
print CommodityType1
print ItemType1
print ItemCount1

item = {'SRAddress': SRAddress, 'Longitude': longitude, 'Latitude': latitude,
        'CommodityType': CommodityType, 'ItemType': ItemType, 'ItemCount': ItemCount}

import numpy as np  #NOTE THIS

keys = ['SRAddress','Longitude','Latitude','CommodityType','ItemType', 'ItemCount']
k1, k2, k3, k4, k5, k6 = keys
data_line = {'SRAddress': SRAddress, 'Longitude': longitude, 'Latitude': latitude,
             'CommodityType': CommodityType, 'ItemType': ItemType, 'ItemCount': ItemCount}

frmt = '\nStraight dictionary output\n Address: {} Long: {} Lat: {}'
print(frmt.format(item[k1], item[k2], item[k3], item[k4], item[k5], item[k6]))

print '\noption 1: List comprehension with unicode'
a = tuple([unicode(item[key]) for key in keys])  # list comprehension with unicode
print('{}'.format(a))

dt = np.dtype([('SRAddress','U40'),('CommodityType','U40'),('ItemType','U40'),
               ('ItemCount','U40'),('longitude','&lt;f8'),('latitude','&lt;f8')])
arr = np.array(a, dtype=dt)
print '\narray unicode\n', arr
print 'dtype', arr.dtype

print '\noption 2: List comprehension without unicode'
b = tuple([item[key] for key in keys])
print('{}'.format(b))

dt = np.dtype([('SRAddress','U40'),('CommodityType','U40'),('ItemType','U40'),
               ('ItemCount','U40'),('longitude','&lt;f8'),('latitude','&lt;f8')])
arr = np.array(b, dtype=dt)
print '\narray without unicode\n', arr
print 'dtype', arr.dtype

arcpy.da.NumPyArrayToFeatureClass(arr, fc, ['longitude', 'latitude'], sr)
</code></pre> <p>Results (the traceback is interleaved with the print output):</p> <pre><code>C:\Python27\ArcGIS10.2\python.exe C:/MYLAScripts/MYLAJson.py
Traceback (most recent call last):
5810 N WILLIS AVE, 91411
  File "C:/MYLAScripts/MYLAJson.py", line 71, in &lt;module&gt;
34.176277
    arr = np.array(a,dtype=dt)
-118.455249
ValueError: could not convert string to float: Microwaves
Electronic Waste
Microwaves
3
Electronic Waste
Televisions (Any Size)
6

Straight dictionary output
 Address: 5810 N WILLIS AVE, 91411 Long: -118.455249 Lat: 34.176277

option 1: List comprehension with unicode
(u'5810 N WILLIS AVE, 91411', u'-118.455249', u'34.176277', u'Electronic Waste', u'Microwaves', u'3')
</code></pre>
<p>You have</p> <pre><code>keys = ['SRAddress','Longitude','Latitude','CommodityType','ItemType', 'ItemCount']
</code></pre> <p>then the script makes a tuple of values from the dict <code>item</code> using these keys in that order:</p> <pre><code>a = tuple([unicode(item[key]) for key in keys])
</code></pre> <p>Then when you convert this tuple to an array</p> <pre><code>arr = np.array(a,dtype=dt)
</code></pre> <p>it is trying to stuff the value associated with <code>ItemType</code> into the <code>longitude</code> field of the struct. The <code>keys</code> list should be in the same order as the struct fields of your dtype. Ideally you wouldn't even bother copying this information and instead use <code>dt.names</code>. Then as long as the struct fields have the same names as the dict you're trying to convert, it takes values in the correct order.</p>
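<p>A minimal sketch of the fix, using the variables from the question: list the dict keys in the same order as the dtype fields (NumPy will happily convert the numeric strings to floats for the <code>&lt;f8</code> fields):</p> <pre><code># Same order as the dtype fields, so nothing lands in the wrong slot
keys = ['SRAddress', 'CommodityType', 'ItemType', 'ItemCount', 'Longitude', 'Latitude']
a = tuple([unicode(item[key]) for key in keys])
arr = np.array(a, dtype=dt)

# If the dtype field names matched the dict keys exactly (the original dtype
# uses lowercase 'longitude'/'latitude'), you could skip the keys list entirely:
# a = tuple(item[name] for name in dt.names)
</code></pre>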
python|numpy
1
377,750
28,419,453
h5py: how to read selected rows of an hdf5 file?
<p>Is it possible to read a given set of rows from an hdf5 file without loading the whole file? I have quite big hdf5 files with loads of datasets, here is an example of what I had in mind to reduce time and memory usage:</p> <pre><code>#! /usr/bin/env python import numpy as np import h5py infile = 'field1.87.hdf5' f = h5py.File(infile,'r') group = f['Data'] mdisk = group['mdisk'].value val = 2.*pow(10.,10.) ind = np.where(mdisk&gt;val)[0] m = group['mcold'][ind] print m </code></pre> <p><code>ind</code> doesn't give consecutive rows but rather scattered ones.</p> <p>The above code fails, but it follows the standard way of slicing an hdf5 dataset. The error message I get is:</p> <pre><code>Traceback (most recent call last): File "./read_rows.py", line 17, in &lt;module&gt; m = group['mcold'][ind] File "/cosma/local/Python/2.7.3/lib/python2.7/site-packages/h5py-2.3.1-py2.7-linux-x86_64.egg/h5py/_hl/dataset.py", line 425, in __getitem__ selection = sel.select(self.shape, args, dsid=self.id) File "/cosma/local/Python/2.7.3/lib/python2.7/site-packages/h5py-2.3.1-py2.7-linux-x86_64.egg/h5py/_hl/selections.py", line 71, in select sel[arg] File "/cosma/local/Python/2.7.3/lib/python2.7/site-packages/h5py-2.3.1-py2.7-linux-x86_64.egg/h5py/_hl/selections.py", line 209, in __getitem__ raise TypeError("PointSelection __getitem__ only works with bool arrays") TypeError: PointSelection __getitem__ only works with bool arrays </code></pre>
<p>I have a sample h5py file with:</p> <pre><code>data = f['data'] # &lt;HDF5 dataset "data": shape (3, 6), type "&lt;i4"&gt;
# is arange(18).reshape(3,6)

ind=np.where(data[:]%2)[0]
# array([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=int32)

data[ind]
# getitem only works with boolean arrays error

data[ind.tolist()]
# can't read data (Dataset: Read failed) error
</code></pre> <p>This last error is caused by repeated values in the list.</p> <p>But indexing with lists with unique values works fine</p> <pre><code>In [150]: data[[0,2]]
Out[150]:
array([[ 0,  1,  2,  3,  4,  5],
       [12, 13, 14, 15, 16, 17]])

In [151]: data[:,[0,3,5]]
Out[151]:
array([[ 0,  3,  5],
       [ 6,  9, 11],
       [12, 15, 17]])
</code></pre> <p>So does an array with the proper dimension slicing:</p> <pre><code>In [157]: data[ind[[0,3,6]],:]
Out[157]:
array([[ 0,  1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17]])

In [165]: f['data'][:2,np.array([0,3,5])]
Out[165]:
array([[ 0,  3,  5],
       [ 6,  9, 11]])

In [166]: f['data'][[0,1],np.array([0,3,5])]
# error about only one indexing array allowed
</code></pre> <p>So if the indexing is right - unique values, and matching the array dimensions, it should work.</p> <p>My simple example doesn't test how much of the array is loaded. The documentation sounds as though elements are selected from the file without loading the whole array into memory.</p>
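<p>For the case in the question, where <code>ind</code> comes from <code>np.where</code> on a 1-D array (so it is already unique and sorted), two variants that older h5py versions accept (a sketch, assuming <code>mdisk</code> has the same length as the <code>mcold</code> dataset):</p> <pre><code>m = group['mcold'][ind.tolist()]   # plain list of unique, sorted ints
m = group['mcold'][mdisk &gt; val]    # boolean mask of the same length

# If an index array might contain repeats, deduplicate first
# (np.unique also sorts):
import numpy as np
rows = np.unique(ind)
m = group['mcold'][rows.tolist()]
</code></pre>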
python|numpy|dataset|h5py
5
377,751
28,393,803
Python: Joining two dataframes on a primary key
<p>I have two DataFrames A and B. I want to replace the rows in A with the rows in B where the values in a specific column match.</p> <pre><code>A:
      1           2         3
0   asd    0.304012  0.358484
1  fdsa   -0.198157  0.616415
2   gfd   -0.054764  0.389018
3    ff         NaN  1.164172

B:
      1           2         3
0   asd    10.4012   1.458484
1  fdsa  100.198157  2.015
</code></pre> <p>I want the following result:</p> <pre><code>      1           2         3
0   asd    10.4012   1.458484   (row merged from B on column 1)
1  fdsa  100.198157  2.015      (row merged from B on column 1)
2   gfd   -0.054764  0.389018
3    ff         NaN  1.164172
</code></pre>
<p>Just call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html#pandas.DataFrame.update" rel="nofollow"><code>update</code></a>: this will overwrite the lhs df with the contents of the rhs df where there is a match. In your case, replace <code>df</code> and <code>df1</code> with <code>A</code> and <code>B</code> respectively:</p> <pre><code>In [13]:
df.update(df1)
df
Out[13]:
      1           2         3
0   asd   10.401200  1.458484
1  fdsa  100.198157  2.015000
2   gfd   -0.054764  0.389018
3    ff         NaN  1.164172
</code></pre>
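<p>Note that <code>update</code> aligns on the index, not on column <code>1</code>; it works above only because the row labels already line up. If they might not, make column <code>1</code> the index on both sides first (a sketch, assuming its values are unique):</p> <pre><code>A = A.set_index(1)
A.update(B.set_index(1))
A = A.reset_index()
</code></pre>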
python|pandas
2
377,752
28,704,142
How do you use pandas.DataFrame columns as index, columns, and values?
<p>I can't seem to figure out how to ask this question in a searchable way, but I feel like this is a simple question.</p> <p>Given a pandas Dataframe object, I would like to use one column as the index, one column as the columns, and a third column as the values.</p> <p>For example:</p> <pre><code>   a    b  c
0  1  dog  2
1  1  cat  1
2  1  rat  6
3  2  cat  2
4  3  dog  1
5  3  cat  4
</code></pre> <hr> <p>I would like to use column 'a' as my index values, column 'b' as my columns, and column 'c' as the values for each row/column and fill with 0 for missing values (if possible). For example...</p> <pre><code>   dog  cat  rat
1    2    1    6
2    0    2    0
3    1    4    0
</code></pre> <p>This would be an 'a' by 'b' matrix with 'c' as the filling values</p>
<p>It's (almost) exactly as you phrase it:</p> <pre><code>df.pivot_table(index="a", columns="b", values="c", fill_value=0) </code></pre> <p>gives</p> <pre><code>b cat dog rat a 1 1 2 6 2 2 0 0 3 4 1 0 </code></pre> <p>HTH</p>
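<p>A sketch of an equivalent using <code>pivot</code>, assuming each <code>(a, b)</code> pair occurs at most once (unlike <code>pivot_table</code>, <code>pivot</code> raises on duplicates):</p> <pre><code>df.pivot(index='a', columns='b', values='c').fillna(0).astype(int)
</code></pre>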
python|indexing|pandas
3
377,753
28,772,573
Python - parallelize a python loop for 2D masked array?
<p>Probably a commonplace question, but how can I parallelize this loop in Python?</p> <pre><code>for i in range(0,Nx.shape[2]):
    for j in range(0,Nx.shape[2]):
        NI=Nx[:,:,i]; NJ=Nx[:,:,j]
        Ku[i,j] = (NI[mask!=True]*NJ[mask!=True]).sum()
</code></pre> <p>So my question: what's the easiest way to parallelize this code?</p> <hr> <p><strong>Edit:</strong> An example of data</p> <pre><code>import random
import numpy as np
import numpy.ma as ma
from numpy import unravel_index

#my input
Nx = np.random.rand(5,5,5)

#mask creation
mask_positions = zip(*np.where((Nx[:,:,0] &lt; 0.4)))
mask_array_positions = np.asarray(mask_positions)
i, j = mask_array_positions.T
mask = np.zeros(Nx[:,:,0].shape, bool)
mask[i,j] = True
</code></pre> <p>And I want to calculate Ku by parallelizing. My aim is to use the Ku array to solve a linear problem, so I have to set the masked values apart (they represent nearly half of my array).</p>
<p>I think you want to 'vectorize', to use <code>numpy</code> terminology, not parallelize in the multiprocess way.</p> <p>Your calculation is essentially a dot (matrix) product. Apply the <code>mask</code> once to the whole array to get a 2d array, <code>NIJ</code>. Its shape will be <code>(N,5)</code>, where <code>N</code> is the number of <code>True</code> values in <code>~mask</code>. Then it's just a <code>(5,N)</code> array 'dotted' with a <code>(N,5)</code> - ie. sum over the <code>N</code> dimension, leaving you with a <code>(5,5)</code> array.</p> <pre><code>NIJ = Nx[~mask,:] Ku = np.dot(NIJ.T,NIJ) </code></pre> <p>In quick tests it matches the <code>Ku</code> produced by your double loop. Depending on the underlying library used for <code>np.dot</code> there might be some multicore calculation, but that's usually not a priority issue for <code>numpy</code> users.</p> <hr> <p>Applying the large boolean <code>mask</code> is the most time consuming part of these calculations - both the vectorized and iterative versions.</p> <p>For a <code>mask</code> with 400,000 True values, compare these 2 indexing times:</p> <pre><code>In [195]: timeit (NI[:400,:1000],NJ[:400,:1000]) 100000 loops, best of 3: 4.87 us per loop In [196]: timeit (NI[mask],NJ[mask]) 10 loops, best of 3: 98.8 ms per loop </code></pre> <p>Selecting the same number of items with basic (slice) indexing is several orders of magnitude faster than advanced indexing with the <code>mask</code>.</p> <p>Substituting <code>np.dot(NI[mask],NJ[mask])</code> for <code>(NI[mask]*NJ[mask]).sum()</code> only saves a few ms.</p>
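<p>A quick sanity check that the vectorized version matches the double loop, using the small random data from the question (a sketch):</p> <pre><code>import numpy as np

Nx = np.random.rand(5, 5, 5)
mask = Nx[:, :, 0] &lt; 0.4            # True where masked out

n = Nx.shape[2]
Ku_loop = np.empty((n, n))
for i in range(n):
    for j in range(n):
        NI, NJ = Nx[:, :, i], Nx[:, :, j]
        Ku_loop[i, j] = (NI[mask != True] * NJ[mask != True]).sum()

NIJ = Nx[~mask, :]                  # shape (N, 5), N = number of unmasked cells
Ku_vec = np.dot(NIJ.T, NIJ)

print(np.allclose(Ku_loop, Ku_vec))  # True
</code></pre>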
python|loops|numpy|parallel-processing|mask
4
377,754
28,761,925
Pandas: Taking slices from a DataFrame and recombining them into a separate DF
<p>I'm trying to take slices from a DataFrame and recombine them into a separate DF. However I'm getting a <code>ValueError: cannot reindex from a duplicate axis</code>.</p> <pre><code>run1 = df['run_1']
run2 = df['run_2']

a = run1[305:340]
b = run1[258:270]
c = run2[258:270]
d = run2[305:340]

first_slice = a.combine_first(b)
second_slice = c.combine_first(d)

df1 = pd.DataFrame(first_slice, second_slice)
</code></pre>
<p>Your code will fail as the params to the <code>DataFrame</code> ctor are:</p> <blockquote> <p>pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False)</p> </blockquote> <p>So even if it didn't complain it wouldn't produce what you want. There are various methods of <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">joining, merging and concatenating</a> multiple dfs, in your case <code>concat</code> is what you want:</p> <p><code>df1 = pd.concat([first_slice, second_slice])</code></p>
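<p>As a side note: if the goal was actually two columns side by side (one per run) rather than one long stacked column, concatenate along the columns instead; a sketch:</p> <pre><code>df1 = pd.concat([first_slice, second_slice], axis=1, keys=['run_1', 'run_2'])
</code></pre>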
python|pandas|dataframe|slice
1
377,755
28,505,598
Convert time format in pandas
<p>I have a string object in this format, 2014-12-08 09:30:00.066000, that I want to convert to a datetime variable. I also want this to be less granular: I want it just at second precision, for example</p> <p>2014-12-08 09:30:00.066000 to 2014-12-08 09:30:00</p> <p>I am trying to use the pd.to_datetime function but it's not working for me. Anyone know how to do this? Thanks!</p>
<p>See this:</p> <p><a href="https://stackoverflow.com/questions/13785932/how-to-round-a-pandas-datetimeindex">How to round a Pandas `DatetimeIndex`?</a></p> <pre><code>from pandas.lib import Timestamp def to_the_second(ts): return Timestamp(long(round(ts.value, -9))) df['My_Date_Column'].apply(to_the_second) </code></pre>
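<p>In more recent pandas versions (0.18+) the same truncation works without reaching into <code>pandas.lib</code>; a sketch, assuming the column holds strings or datetimes:</p> <pre><code>s = pd.to_datetime(df['My_Date_Column'])
df['My_Date_Column'] = s.dt.floor('S')   # drop sub-second precision
</code></pre>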
python|pandas
0
377,756
51,052,416
Pandas dataframe groupby into list, with list in cell data
<p>Consider this input df</p> <pre><code>my_input_df = pd.DataFrame({ 'export_services': [[1],[2,4,5],[4,6], [2,4,5],[1]], 'seaport':['china','africa','europe', 'mexico','europe'], 'price_of_fish':['100','200','250','125','75']}) </code></pre> <p>How to group on a column which contains lists and combine the other columns into a list? </p> <pre><code>my_output_df = pd.DataFrame({ 'export_services': [[1],[2,4,5],[4,6]], 'seaport':[['china','europe'],['africa','mexico'],'europe'], 'price_of_fish':[['100','75'],'200',['250','125']]}) </code></pre> <p>I have tried with</p> <pre><code>my_input_df.groupby('export_services').apply(list) </code></pre> <p>which gives</p> <blockquote> <p>TypeError: unhashable type: 'list'</p> </blockquote> <p>Any ideas?</p> <p>Notes: It's OK if all the grouped rows in my_output_df are lists, even for a single entry.</p>
<p>First, convert to <strong><code>tuple</code></strong>, which can be hashed:</p> <pre><code>df.export_services = df.export_services.apply(tuple) </code></pre> <p><strong><code>groupby</code></strong> with <strong><code>agg</code></strong></p> <pre><code>df.groupby('export_services').agg(list).reset_index() export_services seaport price_of_fish 0 (1,) [china, europe] [100, 75] 1 (2, 4, 5) [africa, mexico] [200, 125] 2 (4, 6) [europe] [250] </code></pre>
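<p>If you need <code>export_services</code> back as lists in the result, convert the tuples again afterwards:</p> <pre><code>out = df.groupby('export_services').agg(list).reset_index()
out.export_services = out.export_services.apply(list)
</code></pre>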
python|pandas|dataframe|pandas-groupby
1
377,757
50,799,510
How to run custom GPU tensorflow::op from C++ code?
<p>I followed these examples to write a custom op in TensorFlow:<br> <a href="https://www.tensorflow.org/extend/adding_an_op" rel="nofollow noreferrer">Adding a New Op</a><br> <a href="https://www.github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/adding_an_op/cuda_op_kernel.cu.cc" rel="nofollow noreferrer">cuda_op_kernel</a><br> I changed the function to the operation I need to do.<br> But all the examples are tested from Python code.<br> I need to run my op from C++ code; how can I do this?</p>
<p>This simple example shows the construction and the execution of a graph using the <a href="https://www.tensorflow.org/api_guides/cc/guide" rel="nofollow noreferrer">C++ API</a>:</p> <pre><code>// tensorflow/cc/example/example.cc

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;
  Scope root = Scope::NewRootScope();
  // Matrix A = [3 2; -1 0]
  auto A = Const(root, { {3.f, 2.f}, {-1.f, 0.f} });
  // Vector b = [3 5]
  auto b = Const(root, { {3.f, 5.f} });
  // v = Ab^T
  auto v = MatMul(root.WithOpName("v"), A, b, MatMul::TransposeB(true)); // &lt;- in your case you should put here your custom Op

  std::vector&lt;Tensor&gt; outputs;
  ClientSession session(root);
  // Run and fetch v
  TF_CHECK_OK(session.Run({v}, &amp;outputs));
  // Expect outputs[0] == [19; -3]
  LOG(INFO) &lt;&lt; outputs[0].matrix&lt;float&gt;();
  return 0;
}
</code></pre> <p>As in the Python counterpart, you first need to build a computational graph in a scope, which in this case has only a matrix multiplication in it, whose end point is in <code>v</code>. Then you need to open a new session (<code>session</code>) for the scope, and run it on your graph. In this case there is no feed dictionary, but at the end of the page there is an example on how to feed values:</p> <pre><code>Scope root = Scope::NewRootScope();
auto a = Placeholder(root, DT_INT32);
// [3 3; 3 3]
auto b = Const(root, 3, {2, 2});
auto c = Add(root, a, b);
ClientSession session(root);
std::vector&lt;Tensor&gt; outputs;

// Feed a &lt;- [1 2; 3 4]
session.Run({ {a, { {1, 2}, {3, 4} } } }, {c}, &amp;outputs);
// outputs[0] == [4 5; 6 7]
</code></pre> <p><sup>All the code segments here reported come from the C++ API guide for TensorFlow</sup></p> <p>If you want to call a custom Op you have to use almost the same code. I have a custom op in <a href="https://github.com/MatteoRagni/tf.ZeroOut.gpu" rel="nofollow noreferrer">this repository</a> that I will use as an example code. The OP has been registered:</p> <pre><code>REGISTER_OP("ZeroOut")
    .Input("to_zero: int32")
    .Output("zeroed: int32")
    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext *c) {
      c-&gt;set_output(0, c-&gt;input(0));
      return Status::OK();
    });
</code></pre> <p>and the Op is defined to be a Cuda Kernel in the <a href="https://github.com/MatteoRagni/tf.ZeroOut.gpu/blob/master/zero_out.cu.cc" rel="nofollow noreferrer">cuda file</a>. To launch the Op I have to (again) create a new computational graph, register my op, open a session and make it run from my code (note that ZeroOut is registered for int32, so the input constant and the fetched matrix must be int32 as well):</p> <pre><code>Scope root = Scope::NewRootScope();
// Matrix A = [3 2; -1 0], int32 as ZeroOut expects
auto A = Const(root, { {3, 2}, {-1, 0} });
auto v = ZeroOut(root.WithOpName("v"), A);
std::vector&lt;Tensor&gt; outputs;
ClientSession session(root);
// Run and fetch v
TF_CHECK_OK(session.Run({v}, &amp;outputs));
LOG(INFO) &lt;&lt; outputs[0].matrix&lt;int32&gt;();
</code></pre>
c++|tensorflow
2
377,758
50,978,573
pandas sql using var containing list
<p>I created a list of locations by doing this:</p> <pre><code>list_NA = []
for x in df['place']:
    if x and x not in list_NA:
        list_NA.append(x)
</code></pre> <p>This gives me a list like this:</p> <pre><code>print(list_NA)
['DEN', 'BOS', 'DAB', 'MIB', 'SAA', 'LAB', 'NYB', 'AGA', 'QRO', 'DCC', 'PBC', 'MIC', 'MDW', 'SAB', 'LAA', 'NYA', 'PHL', 'DCB', 'CHA', 'CHB', 'SEB', 'AGB', 'SEC', 'DAA', 'MEX']
</code></pre> <p>I want to use this list in my where clause like this:</p> <pre><code>df2 = pd.read_sql("select airport from "+db+" where airport in "+list_NA+"", conn)
</code></pre> <p>But I keep getting this error:</p> <pre><code>TypeError: Can't convert 'list' object to str implicitly
</code></pre> <p>I tried to do str(list_NA) or tuple(list_NA), but neither worked.</p>
<p>You will want to convert list_NA to a comma separated string with single quotes.</p> <pre><code>"','".join(list_NA)
</code></pre> <p>but you'll also need to wrap that in single quotes on either end as well.</p> <pre><code>df2 = pd.read_sql("select airport from "+db+" where airport in ('"+ "','".join(list_NA) +"')", conn)
</code></pre>
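<p>Building SQL by string concatenation is also fragile (and unsafe with untrusted input); a safer sketch using query parameters (the <code>%s</code> placeholder style is what MySQL drivers typically expect; check your driver's docs):</p> <pre><code>placeholders = ','.join(['%s'] * len(list_NA))
query = "select airport from " + db + " where airport in (" + placeholders + ")"
df2 = pd.read_sql(query, conn, params=list_NA)
</code></pre>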
mysql|pandas|where-clause
1
377,759
50,779,612
Pandaic way to handle empty strings in isin()
<p>The final print statement below shows three items when only two, 'b' and 'c', are wanted. What is the pandaic way to not include the empty strings in the result?</p> <pre><code>print(sys.version)
print(np.__version__)
print(pd.__version__)

3.6.4
1.14.2
0.22.0
</code></pre> <p>&lt;!- -&gt;</p> <pre><code>import string
ds1 = pd.Series(list(string.ascii_lowercase[:3]), (range(3)))
ds2 = pd.Series(list(string.ascii_lowercase[1:4]), (range(1,4)))
ds1[0]=''
ds2[3]=''

print(ds1)
0
1    b
2    c
dtype: object

print(ds2)
1    b
2    c
3
dtype: object
</code></pre> <p>&lt;!- -&gt;</p> <pre><code>print(ds1[ds1.isin(ds2)])  # returns three items, only want 'b' and 'c'
0
1    b
2    c
dtype: object
</code></pre> <p>I tried using isnull() to no avail.</p> <pre><code>print(ds1.isnull())
</code></pre> <p>output:</p> <pre><code>0    False
1    False
2    False
dtype: bool
</code></pre>
<p>Empty strings do not correspond to NaN, None, etc. Just filter them out like you'd normally do.</p> <pre><code>ds1[ds1.isin(filter(None, ds2))] 1 b 2 c dtype: object </code></pre>
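<p>An equivalent with an explicit boolean mask, which avoids building a filtered copy of <code>ds2</code>:</p> <pre><code>ds1[ds1.isin(ds2) &amp; (ds1 != '')]
</code></pre>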
python|string|pandas
4
377,760
50,907,880
Data quality - Pandas
<p>I'm doing a data quality project using Python and Pandas. I have an input dataframe where each column is categorical data, and I want to return a dataframe where each column consists of the top 10 most frequently occurring categories in that column in order, together with the names of said categories (i.e. a key-value pair or a tuple of Categorical variable : Count in each cell).</p>
<p>You can get value-count pairs in dictionary format, like so:</p> <pre><code>df["column"].value_counts(False).to_dict()
</code></pre> <p>And you can use this method <em>iteratively</em> to populate a dataframe, like so:</p> <pre><code>#Import dependencies
import numpy as np
import pandas as pd

#Create dataframe with random data
df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))

#Create a 'result dataframe'
resultDf = pd.DataFrame(columns=list(df.keys()))

#Append value-count pairs to new dataframe
for column in df.keys():
    _dict_ = df[column].value_counts(False).to_dict()
    for index, key in enumerate(_dict_):
        resultDf.loc[index, column] = [key,_dict_.get(key)]
</code></pre>
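<p>If you want the top 10 per column assembled in one step (columns with fewer than 10 categories are padded with NaN), a sketch:</p> <pre><code>result = pd.concat(
    {col: pd.Series(list(df[col].value_counts().head(10).items()))
     for col in df.columns},
    axis=1)
</code></pre> <p>Each cell then holds a <code>(category, count)</code> tuple, ordered by frequency.</p>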
python|pandas
0
377,761
50,824,464
How can I use pandas to set a value for all rows that match part of a multi part index
<p>I have the following pandas dataframe:</p> <pre><code>&gt;&gt;&gt; import pandas &gt;&gt;&gt; indexes = [['a', 'a', 'c', 'd', 'd', '1'], ['1', '1', '3', '4', '5', '6']] &gt;&gt;&gt; pandas.DataFrame(index=indexes, columns=["Year", "Color", "Manufacturer"]) Year Color Manufacturer a 1 NaN NaN NaN 1 NaN NaN NaN c 3 NaN NaN NaN d 4 NaN NaN NaN 5 NaN NaN NaN 1 6 NaN NaN NaN </code></pre> <p>What command could I use to set the Manufacturer column to "Manf X" in all rows that have "1" for their second index value? I've tried the following commands but have not had much luck:</p> <pre><code>set_value((,'1'), "Manufacturer", "Manf X") set_value((:,'1'), "Manufacturer", "Manf X") </code></pre> <p>It looks like I can use a similar command for setting the column in all rows that have 1 for their first index value, but I just can't get it working when I'm looking to just match on the second index value.</p> <pre><code>set_value(('1',), "Manufacturer", "Manf X") </code></pre>
<p>One way, using <a href="https://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow noreferrer">slicers</a>:</p> <pre><code>import pandas as pd indexes = [['a', 'a', 'c', 'd', 'd', '1'], ['1', '1', '3', '4', '5', '6']] df = pd.DataFrame(index=indexes, columns=["Year", "Color", "Manufacturer"]) df.sort_index(inplace=True) print(df) df.loc[pd.IndexSlice[:, '1'], ["Manufacturer"]] = "SomeManufacturer" print(df) </code></pre> <p>Before:</p> <pre> Year Color Manufacturer 1 6 NaN NaN NaN a 1 NaN NaN NaN 1 NaN NaN NaN c 3 NaN NaN NaN d 4 NaN NaN NaN 5 NaN NaN NaN </pre> <p>After:</p> <pre> Year Color Manufacturer 1 6 NaN NaN NaN a 1 NaN NaN SomeManufacturer 1 NaN NaN SomeManufacturer c 3 NaN NaN NaN d 4 NaN NaN NaN 5 NaN NaN NaN </pre> <p>(Sorting the index is required. Without sorting:) <pre>UnsortedIndexError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (2), lexsort depth (0)'</pre></p>
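<p>An alternative sketch that avoids the sort requirement: build a boolean mask from the second index level and assign through <code>loc</code>:</p> <pre><code>mask = df.index.get_level_values(1) == '1'
df.loc[mask, 'Manufacturer'] = 'Manf X'
</code></pre>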
python|python-3.x|pandas
2
377,762
51,035,439
Mapping using date to create a new column in data frame
<p>I was trying to map two data frames based on the Date. However, I got the following error:</p> <blockquote> <p>"InvalidIndexError: Reindexing only valid with uniquely valued Index objects"</p> </blockquote> <p>I am using the following <strong>df1</strong> and want to create a new column "Fix Week":</p> <pre><code>kickoffDate  kickoffTime  hometeam_team1
2016-08-13   11:30:00     Hull City
2016-08-13   14:00:00     Middlesbrough
2016-08-13   14:00:00     Middlesbrough
2016-08-13   14:00:00     Middlesbrough
2016-08-13   14:00:00     Middlesbrough
</code></pre> <p>The <strong>df2</strong> that I am going to map is as follows:</p> <pre><code>Round  Date        Home Team       Away Team
1      2016-08-13  Hull            Leicester
1      2016-08-13  Burnley         Swansea
1      2016-08-13  Crystal Palace  West Brom
1      2016-08-13  Everton         Spurs
</code></pre> <p>To get a new column, I am using the following code:</p> <pre><code>df1['fix'] = df1.kickoffDate.map(df2.set_index('Date').Round).astype(float)
</code></pre> <p>but it gave me the error mentioned above.</p> <p>Would anyone advise me?</p> <p>Thanks</p> <p>Zep</p>
<p>The problem is that your <code>Date</code> values are duplicated in <code>df2</code>.</p> <p>So you need to remove the duplicates first to get unique <code>Date</code> rows:</p> <pre><code>df2 = df2.drop_duplicates('Date')
print (df2)
   Round        Date Home Team Away Team
0      1  2016-08-13      Hull Leicester

df1['fix'] = df1.kickoffDate.map(df2.set_index('Date').Round).astype(float)
print (df1)
  kickoffDate kickoffTime hometeam_team1  fix
0  2016-08-13    11:30:00      Hull City  1.0
1  2016-08-13    14:00:00  Middlesbrough  1.0
2  2016-08-13    14:00:00  Middlesbrough  1.0
3  2016-08-13    14:00:00  Middlesbrough  1.0
4  2016-08-13    14:00:00  Middlesbrough  1.0
</code></pre>
python|python-3.x|pandas
1
377,763
50,968,827
Compare lists of column rows and using filters on them in pandas
<pre><code>sales = [(3588, [1,2,3,4,5,6], [1,38,9,2,18,5]),
         (3588, [2,5,7], [1,2,4,8,14]),
         (3588, [3,10,13], [1,3,4,6,12]),
         (3588, [4,5,61], [1,2,3,4,11,5]),
         (3590, [3,5,6,1,21], [3,10,13]),
         (3590, [8,1,2,4,6,9], [2,5,7]),
         (3591, [1,2,4,5,13], [1,2,3,4,5,6])
        ]
labels = ['goods_id', 'properties_id_x', 'properties_id_y']
df = pd.DataFrame.from_records(sales, columns=labels)
df
Out[4]:
   goods_id     properties_id_x       properties_id_y
0      3588  [1, 2, 3, 4, 5, 6]  [1, 38, 9, 2, 18, 5]
1      3588           [2, 5, 7]      [1, 2, 4, 8, 14]
2      3588         [3, 10, 13]      [1, 3, 4, 6, 12]
3      3588          [4, 5, 61]   [1, 2, 3, 4, 11, 5]
4      3590    [3, 5, 6, 1, 21]           [3, 10, 13]
5      3590  [8, 1, 2, 4, 6, 9]             [2, 5, 7]
6      3591    [1, 2, 4, 5, 13]    [1, 2, 3, 4, 5, 6]
</code></pre> <p>I have a df of goods and their properties. I need to compare each good's <strong>properties_id_x</strong> with <strong>properties_id_y</strong> row by row and return only those rows where both lists contain both <code>"1"</code> and <code>"5"</code>. I cannot figure out how to do it.</p> <p><strong>Desired output:</strong> </p> <pre><code>0      3588  [1, 2, 3, 4, 5, 6]  [1, 38, 9, 2, 18, 5]
6      3591    [1, 2, 4, 5, 13]    [1, 2, 3, 4, 5, 6]
</code></pre>
<p><strong>Option 1:</strong></p> <pre><code>In [176]: mask = df.apply(lambda r: {1,5} &lt;= (set(r['properties_id_x']) &amp; set(r['properties_id_y'])), axis=1) In [177]: mask Out[177]: 0 True 1 False 2 False 3 False 4 False 5 False 6 True dtype: bool In [178]: df[mask] Out[178]: goods_id properties_id_x properties_id_y 0 3588 [1, 2, 3, 4, 5, 6] [1, 38, 9, 2, 18, 5] 6 3591 [1, 2, 4, 5, 13] [1, 2, 3, 4, 5, 6] </code></pre> <p><strong>Option 2:</strong></p> <pre><code>In [183]: mask = df.properties_id_x.map(lambda x: {1,5} &lt;= set(x)) &amp; df.properties_id_y.map(lambda x: {1,5} &lt;= set(x)) In [184]: df[mask] Out[184]: goods_id properties_id_x properties_id_y 0 3588 [1, 2, 3, 4, 5, 6] [1, 38, 9, 2, 18, 5] 6 3591 [1, 2, 4, 5, 13] [1, 2, 3, 4, 5, 6] </code></pre>
python-3.x|pandas
2
377,764
51,023,364
Get a value of a column by using another column's value
<pre><code>  user  min  max
1  Tom    1    5
2  Sam    4    6
</code></pre> <p>I have this dataframe. I know the user is Sam and I want to get his 'min' value, like this (the user is unique):</p> <pre><code>df[Sam,'min'] = 4
</code></pre> <p>How can I do this?</p>
<p>First create an index from column <code>user</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> and then select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a>:</p> <pre><code>df = df.set_index('user')
print (df.index)
Index(['Tom', 'Sam'], dtype='object', name='user')

a = df.loc['Sam','min']
print (a)
4
</code></pre> <p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <code>loc</code>; because this returns a <code>Series</code>, you need to select the first value:</p> <pre><code>a = df.loc[df['user'] == 'Sam','min'].values[0]
</code></pre>
python|pandas|dataframe
2
377,765
50,842,397
how to get standardised (Beta) coefficients for multiple linear regression using statsmodels
<p>When using the <code>.summary()</code> function in statsmodels (with pandas), the OLS Regression Results include the following fields.</p> <pre><code>coef    std err          t      P&gt;|t|      [0.025      0.975]
</code></pre> <p>How can I get the standardised coefficients (which exclude the intercept), similarly to what is achievable in SPSS?</p>
<p>You just need to standardize your original DataFrame using a z distribution (i.e., z-score) first and then perform a linear regression. </p> <p>Assume you name your dataframe as <code>df</code>, which has independent variables <code>x1</code>, <code>x2</code>, and <code>x3</code>, and dependent variable <code>y</code>. Consider the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np from scipy import stats import statsmodels.formula.api as smf # standardizing dataframe df_z = df.select_dtypes(include=[np.number]).dropna().apply(stats.zscore) # fitting regression formula = 'y ~ x1 + x2 + x3' result = smf.ols(formula, data=df_z).fit() # checking results result.summary() </code></pre> <p>Now, the <code>coef</code> will show you the standardized (beta) coefficients so that you can compare their influence on your dependent variable.</p> <p>Notes:</p> <ol> <li>Please keep in mind that you need <code>.dropna()</code>. Otherwise, <code>stats.zscore</code> will return all <code>NaN</code> for a column if it has any missing values.</li> <li>Instead of using <code>.select_dtypes()</code>, you can select column manually but make sure all the columns you selected are numeric.</li> <li>If you only care about the standardized (beta) coefficients, you can also use <code>result.params</code> to return it only. It will usually be displayed in a scientific-notation fashion. You can use something like <code>round(result.params, 5)</code> to round them.</li> </ol>
python|pandas|regression|statsmodels|coefficients
10
377,766
51,089,470
How do I edit my function in python such that my output dataframe shows N/A?
<p><img src="https://i.stack.imgur.com/kVjh9.png" alt="enter image description here"></p> <p>I have defined a function below in python to extract hours of operation based on a business_id that I have already retrieved:</p> <pre><code>def hours_operation(business_id):
    day = day_open(business_id)
    start = day_start(business_id)
    end = day_end(business_id)
    day1 = []
    for i in day:
        try:
            value = "start"+str(i)
        except:
            value = 'None'
        day1.append(value)
    dict1 = {}
    for i in range(len(day1)):
        dict1[day1[i]]=start[i]
    start_df = pd.DataFrame(dict1, index=[0])
    day2 = []
    for i in day:
        day2.append("end"+str(i))
    dict2 = {}
    for i in range(len(day2)):
        dict2[day2[i]]=end[i]
    end_df = pd.DataFrame(dict2, index=[0])
    for i in day:
        start_df['end'+str(i)]=end_df['end'+str(i)]
    return start_df
</code></pre> <p>The issue is that my output appears like this:</p> <pre><code>Out[36]:
  start0 start1 start2 start3 start4  ...  end2  end3  end4  end5  end6
0   1100   1100   1100   1100   1100  ...  2200  2200  2200  2200  2100
</code></pre> <p>I like this format; however, in cases where a business works only 4 days a week and not 7, I want it to return start and end as N/A.</p> <p>This is what my desired output should look like if a business works only 4 days a week:</p> <pre><code>  start0 start1 start2 start3 start4 start5 start6 end1 end2 end3 end4 end5 end6
0   1100   1100   1100   1100   1100    N/A    N/A 2200 2200 2200  N/A  N/A
</code></pre>
<p>You can use the python ternary conditional expression, like so:</p> <pre><code>dict1[day1[i]] = start[i] if start[i] is not None else 'N/A'
</code></pre> <p>This assumes that <code>start[i]</code> is the value of the start time, and that the value you'd like to not output is a NoneType object. If it is 0 or some string, for example, just update the condition in that statement to reflect that. (Of course you'll have to do the same thing for the end time.)</p>
python|pandas|api
0
377,767
50,725,804
Modify DataFrame index
<p>I have a DataFrame with a wrong DateTimeIndex. The hours and minutes must be moved to the left:</p> <p>2016-07-07 00:08:30 -> 2016-07-07 08:30:00</p> <p>I know how to make the change with regex, but I do not know how to replace the index by the modified one. Something like df.index.replace(lambda old_index:new_index)...</p> <p>Can anybody help?</p>
<p>By using <code>to_datetime</code> with <code>format</code>:</p> <pre><code>#idx=pd.Index(pd.to_datetime(pd.Series('2016-07-07 00:08:30')))
pd.to_datetime(pd.Series(idx).astype(str),format='%Y-%m-%d %S:%H:%M')
Out[562]:
0   2016-07-07 08:30:00
dtype: datetime64[ns]
</code></pre> <p>And for the new index use:</p> <pre><code>df.index = pd.to_datetime(df.index.astype(str),format='%Y-%m-%d %S:%H:%M')
</code></pre>
pandas|dataframe
1
377,768
50,929,606
How to concat dataframes so missing values are set to NaN
<p>I have two dataframes df1 and df2. Here is a toy example to show my question. <code>df1</code> looks like</p> <pre><code>col1 col2
1    2
0    7
</code></pre> <p><code>df2</code> looks like</p> <pre><code>col1 col3
9    2
5    3
</code></pre> <p>I would like to do <code>pd.concat([df1,df2])</code> but in such a way that the result is:</p> <pre><code>col1 col2 col3
1    2    NaN
0    7    NaN
9    NaN  2
5    NaN  3
</code></pre> <p>Is there a way to do this?</p>
<pre><code>import pandas as pd df1 = pd.DataFrame() df1['col1'] = [1, 0] df1['col2'] = [2, 7] df2 = pd.DataFrame() df2['col1'] = [9, 5] df2['col3'] = [5, 3] x = df1.append(df2) </code></pre> <p>Output:</p> <pre><code> col1 col2 col3 0 1 2.0 NaN 1 0 7.0 NaN 0 9 NaN 5.0 1 5 NaN 3.0 </code></pre>
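<p><code>pd.concat</code>, which the question mentions, gives the same result directly; <code>ignore_index=True</code> avoids the duplicated 0/1 index labels above (depending on your pandas version you may also want <code>sort=False</code> to keep the column order):</p> <pre><code>x = pd.concat([df1, df2], ignore_index=True)
</code></pre>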
python|pandas
1
377,769
50,882,613
How to do a SQL-type INNER JOIN in Python and only naming a couple of column names?
<p>I am experienced with SQL, but am new to Python.</p> <p>I am attempting to use the join or pandas.merge functions to complete the following simple SQL Join:</p> <pre><code>SELECT a.Patient_ID, a.Physician, b.Hospital FROM DF1 a INNER JOIN DF2 b on a.Patient_ID=b.Patient_ID_Number </code></pre> <p>Here is as close as I've gotten:</p> <pre><code>import pandas as pd output=pd.merge(DF1, DF2, how='inner', left_on='Patient_ID', right_on='Patient_ID_Number') </code></pre> <p>However, this produces the equivalent of the following SQL query:</p> <pre><code>SELECT * FROM DF1 a INNER JOIN DF2 b on a.Patient_ID=b.Patient_ID_Number </code></pre> <p>I'm not familiar with indexing or keys, so I am trying to implement a simple code translation only for now. If it's not integral but just a nice feature, I will learn it later.</p> <p>Thanks!</p>
<p>You can specify the columns in the join statement:</p> <pre><code> output=pd.merge(DF1[['Patient_ID','Physician']], DF2[['Hospital','Patient_ID_Number']], how='inner', left_on='Patient_ID', right_on='Patient_ID_Number') </code></pre> <p>You do have to carry over both columns that you're joining on in your statement</p> <p>You can specify which columns to keep before your join statement</p> <pre><code>DF1=DF1[['Patient_ID','Physician']] DF2=DF2[['Patient_ID_Number','Hospital']] output=pd.merge(DF1, DF2, how='inner', left_on='Patient_ID', right_on='Patient_ID_Number') </code></pre> <p>Or only keep the columns after your join statement</p> <pre><code>output=output[['Patient_ID','Physician','Hospital']] </code></pre>
python|sql|sql-server|pandas|join
2
377,770
50,858,194
Appending Columns from several worksheets Python
<p>I am trying to import certain columns of data from several different sheets inside of a workbook. However, while appending it only seems to append 'q2 survey' to a new workbook. How do I get this to append properly?</p> <pre><code>import sys, os import pandas as pd import xlrd import xlwt b = ['q1 survey', 'q2 survey','q3 survey'] #Sheet Names df_t = pd.DataFrame(columns=["Month","Date", "Year"]) #column Name xls = "path_to_file/R.xls" sheet=[] df_b=pd.DataFrame() pd.read_excel(xls,sheet) for sheet in b: df=pd.read_excel(xls,sheet) df.rename(columns=lambda x: x.strip().upper(), inplace=True) bill=df_b.append(df[df_t]) bill.to_excel('Survey.xlsx', index=False) </code></pre>
<p>I think if you do:</p> <pre><code>b = ['q1 survey', 'q2 survey','q3 survey'] #Sheet Names
list_col = ["Month","Date", "Year"] #column Name
xls = "path_to_file/R.xls"

#create the empty df named bill to append after
bill= pd.DataFrame(columns = list_col)

for sheet in b:
    # read the sheet
    df=pd.read_excel(xls,sheet)
    df.rename(columns=lambda x: x.strip().upper(), inplace=True)
    # need to assign bill again
    bill=bill.append(df[list_col])

# to excel
bill.to_excel('Survey.xlsx', index=False)
</code></pre> <p>it should work and correct the errors in your code, but you can do it a bit differently using <code>pd.concat</code>:</p> <pre><code>list_sheet = ['q1 survey', 'q2 survey','q3 survey'] #Sheet Names
list_col = ["Month","Date", "Year"] #column Name

# read once the xls file and then access the sheet in the loop, should be faster
xls_file = pd.ExcelFile("path_to_file/R.xls")

#create a list to append the df
list_df_to_concat = []

for sheet in list_sheet :
    # read the sheet
    df= pd.read_excel(xls_file, sheet)
    df.rename(columns=lambda x: x.strip().upper(), inplace=True)
    # append the df to the list
    list_df_to_concat.append(df[list_col])

# to excel
pd.concat(list_df_to_concat).to_excel('Survey.xlsx', index=False)
</code></pre>
excel|python-2.7|pandas|xlrd|xlwt
1
377,771
50,781,373
Using feed_dict is more than 5x faster than using dataset API?
<p>I created a dataset in TFRecord format for testing. Every entry contains 200 columns, named <code>C1</code> - <code>C199</code>, each being a strings list, and a <code>label</code> column to denote the labels. The code to create the data can be found here: <a href="https://github.com/codescv/tf-dist/blob/8bb3c44f55939fc66b3727a730c57887113e899c/src/gen_data.py#L25" rel="noreferrer">https://github.com/codescv/tf-dist/blob/8bb3c44f55939fc66b3727a730c57887113e899c/src/gen_data.py#L25</a></p> <p>Then I used a linear model to train the data. The first approach looks like this:</p> <pre><code>dataset = tf.data.TFRecordDataset(data_file) dataset = dataset.prefetch(buffer_size=batch_size*10) dataset = dataset.map(parse_tfrecord, num_parallel_calls=5) dataset = dataset.repeat(num_epochs) dataset = dataset.batch(batch_size) features, labels = dataset.make_one_shot_iterator().get_next() logits = tf.feature_column.linear_model(features=features, feature_columns=columns, cols_to_vars=cols_to_vars) train_op = ... with tf.Session() as sess: sess.run(train_op) </code></pre> <p>The full code can be found here: <a href="https://github.com/codescv/tf-dist/blob/master/src/lr_single.py" rel="noreferrer">https://github.com/codescv/tf-dist/blob/master/src/lr_single.py</a></p> <p>When I run the code above, I get 0.85 steps/sec (batch size being 1024).</p> <p>In the second approach, I manually get batches from Dataset into python, then feed them to a placeholder, like this:</p> <pre><code>example = tf.placeholder(dtype=tf.string, shape=[None]) features = tf.parse_example(example, features=tf.feature_column.make_parse_example_spec(columns+[tf.feature_column.numeric_column('label', dtype=tf.float32, default_value=0)])) labels = features.pop('label') train_op = ... dataset = tf.data.TFRecordDataset(data_file).repeat().batch(batch_size) next_batch = dataset.make_one_shot_iterator().get_next() with tf.Session() as sess: data_batch = sess.run(next_batch) sess.run(train_op, feed_dict={example: data_batch}) </code></pre> <p>The full code can be found here: <a href="https://github.com/codescv/tf-dist/blob/master/src/lr_single_feed.py" rel="noreferrer">https://github.com/codescv/tf-dist/blob/master/src/lr_single_feed.py</a></p> <p>When I run the code above, I get 5 steps/sec. That is 5x faster than the first approach. This is what I do not understand, because theoretically the second should be slower due to the extra serialization/deserialization of data batches.</p> <p>Thanks!</p>
<p>There is currently (as of TensorFlow 1.9) a performance issue when using <code>tf.data</code> to map and batch tensors that have a large number of features with a small amount of data in each. The issue has two causes:</p> <ol> <li><p>The <code>dataset.map(parse_tfrecord, ...)</code> transformation will execute O(<code>batch_size</code> * <code>num_columns</code>) small operations to create a batch. By contrast, feeding a <code>tf.placeholder()</code> to <code>tf.parse_example()</code> will execute O(1) operations to create the same batch.</p></li> <li><p>Batching many <code>tf.SparseTensor</code> objects using <code>dataset.batch()</code> is much slower than directly creating the same <code>tf.SparseTensor</code> as the output of <code>tf.parse_example()</code>.</p></li> </ol> <p>Improvements to both these issues are underway, and should be available in a future version of TensorFlow. In the meantime, you can improve the performance of the <code>tf.data</code>-based pipeline by switching the order of the <code>dataset.map()</code> and <code>dataset.batch()</code> and rewriting the <code>dataset.map()</code> to work on a vector of strings, like the feeding based version:</p> <pre><code>dataset = tf.data.TFRecordDataset(data_file) dataset = dataset.prefetch(buffer_size=batch_size*10) dataset = dataset.repeat(num_epochs) # Batch first to create a vector of strings as input to the map(). dataset = dataset.batch(batch_size) def parse_tfrecord_batch(record_batch): features = tf.parse_example( record_batch, features=tf.feature_column.make_parse_example_spec( columns + [ tf.feature_column.numeric_column( 'label', dtype=tf.float32, default_value=0)])) labels = features.pop('label') return features, labels # NOTE: Parallelism might not be as useful, because the individual map function now does # more work per invocation, but you might want to experiment with this. dataset = dataset.map(parse_tfrecord_batch) # Add a prefetch at the end to pipeline execution. dataset = dataset.prefetch(1) features, labels = dataset.make_one_shot_iterator().get_next() # ... </code></pre> <hr> <p><strong>EDIT (2018/6/18)</strong>: To answer your questions from the comments:</p> <blockquote> <ol> <li>Why is <code>dataset.map(parse_tfrecord, ...)</code> O(<code>batch_size</code> * <code>num_columns</code>), not O(<code>batch_size</code>)? If parsing requires enumeration of the columns, why doesn't parse_example take O(<code>num_columns</code>)?</li> </ol> </blockquote> <p>When you wrap TensorFlow code in a <code>Dataset.map()</code> (or other functional transformation) a constant number of extra operations per output are added to "return" values from the function and (in the case of <code>tf.SparseTensor</code> values) "convert" them to a standard format. When you directly pass the outputs of <code>tf.parse_example()</code> to the input of your model, these operations aren't added. While they are very small operations, executing so many of them can become a bottleneck. (Technically the parsing <em>does</em> take O(<code>batch_size</code> * <code>num_columns</code>) <strong>time</strong>, but the constants involved in parsing are much smaller than executing an operation.)</p> <blockquote> <ol start="2"> <li>Why do you add a prefetch at the end of the pipeline?</li> </ol> </blockquote> <p>When you're interested in performance, this is almost always the best thing to do, and it should improve the overall performance of your pipeline. 
For more information about best practices, see the <a href="https://www.tensorflow.org/performance/datasets_performance" rel="noreferrer">performance guide for <code>tf.data</code></a>. </p>
tensorflow|tensorflow-datasets
16
377,772
51,080,393
How to use pretrained keras model with batch normalization layer?
<p>I have a pretrained model with batch_normalization model. When I run:</p> <pre><code>model.layers.get_weights </code></pre> <p>I can see that there are beta/gama values in batch_normalization layers, which means that the model has been trained, and the value has meanings.</p> <p>I want to load the model and use it in tensorflow. When I run:</p> <pre><code>sess.run(tf.report_uninitialized_variables(tf.global_variables())) </code></pre> <p>It gives me variables from batch_normalization layer: unitialized_variable</p> <pre><code>array(['pretrain_variable/pretrain_variable/batch_normalization_11/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_11/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_11/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_11/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_15/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_15/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_15/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_15/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_9/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_9/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_9/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_9/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_13/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_13/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_13/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_13/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_16/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_16/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_16/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_16/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_14/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_14/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_14/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_14/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_10/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_10/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_10/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_10/moving_variance/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_12/moving_mean/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_12/moving_mean/local_step_1', 'pretrain_variable/pretrain_variable/batch_normalization_12/moving_variance/biased_1', 'pretrain_variable/pretrain_variable/batch_normalization_12/moving_variance/local_step_1', 'train_variable/output_y_0/kernel', 'train_variable/output_y_0/bias', 'train_variable/output_y_1/kernel', 'train_variable/output_y_1/bias', 'train_variable/output_y_2/kernel', 'train_variable/output_y_2/bias', 
'train_variable/output_y_3/kernel', 'train_variable/output_y_3/bias', 'train_variable/output_y_4/kernel', 'train_variable/output_y_4/bias', 'train_variable/output_y_5/kernel', 'train_variable/output_y_5/bias', 'train_variable/output_y_6/kernel', 'train_variable/output_y_6/bias', 'train_variable/output_y_7/kernel', 'train_variable/output_y_7/bias', 'train_variable/output_y_8/kernel', 'train_variable/output_y_8/bias', 'train_variable/output_y_9/kernel', 'train_variable/output_y_9/bias', 'train_variable/output_y_10/kernel', 'train_variable/output_y_10/bias', 'train_variable/output_y_11/kernel', 'train_variable/output_y_11/bias', 'train_variable/beta1_power', 'train_variable/beta2_power', 'train_variable/train_variable/output_y_0/kernel/Adam', 'train_variable/train_variable/output_y_0/kernel/Adam_1', 'train_variable/train_variable/output_y_0/bias/Adam', 'train_variable/train_variable/output_y_0/bias/Adam_1', 'train_variable/train_variable/output_y_1/kernel/Adam', 'train_variable/train_variable/output_y_1/kernel/Adam_1', 'train_variable/train_variable/output_y_1/bias/Adam', 'train_variable/train_variable/output_y_1/bias/Adam_1', 'train_variable/train_variable/output_y_2/kernel/Adam', 'train_variable/train_variable/output_y_2/kernel/Adam_1', 'train_variable/train_variable/output_y_2/bias/Adam', 'train_variable/train_variable/output_y_2/bias/Adam_1', 'train_variable/train_variable/output_y_3/kernel/Adam', 'train_variable/train_variable/output_y_3/kernel/Adam_1', 'train_variable/train_variable/output_y_3/bias/Adam', 'train_variable/train_variable/output_y_3/bias/Adam_1', 'train_variable/train_variable/output_y_4/kernel/Adam', 'train_variable/train_variable/output_y_4/kernel/Adam_1', 'train_variable/train_variable/output_y_4/bias/Adam', 'train_variable/train_variable/output_y_4/bias/Adam_1', 'train_variable/train_variable/output_y_5/kernel/Adam', 'train_variable/train_variable/output_y_5/kernel/Adam_1', 'train_variable/train_variable/output_y_5/bias/Adam', 'train_variable/train_variable/output_y_5/bias/Adam_1', 'train_variable/train_variable/output_y_6/kernel/Adam', 'train_variable/train_variable/output_y_6/kernel/Adam_1', 'train_variable/train_variable/output_y_6/bias/Adam', 'train_variable/train_variable/output_y_6/bias/Adam_1', 'train_variable/train_variable/output_y_7/kernel/Adam', 'train_variable/train_variable/output_y_7/kernel/Adam_1', 'train_variable/train_variable/output_y_7/bias/Adam', 'train_variable/train_variable/output_y_7/bias/Adam_1', 'train_variable/train_variable/output_y_8/kernel/Adam', 'train_variable/train_variable/output_y_8/kernel/Adam_1', 'train_variable/train_variable/output_y_8/bias/Adam', 'train_variable/train_variable/output_y_8/bias/Adam_1', 'train_variable/train_variable/output_y_9/kernel/Adam', 'train_variable/train_variable/output_y_9/kernel/Adam_1', 'train_variable/train_variable/output_y_9/bias/Adam', 'train_variable/train_variable/output_y_9/bias/Adam_1', 'train_variable/train_variable/output_y_10/kernel/Adam', 'train_variable/train_variable/output_y_10/kernel/Adam_1', 'train_variable/train_variable/output_y_10/bias/Adam', 'train_variable/train_variable/output_y_10/bias/Adam_1', 'train_variable/train_variable/output_y_11/kernel/Adam', 'train_variable/train_variable/output_y_11/kernel/Adam_1', 'train_variable/train_variable/output_y_11/bias/Adam', 'train_variable/train_variable/output_y_11/bias/Adam_1'], dtype=object) </code></pre> <p>I have to run this code in order to use the model. I was afraid that this will destroy the parameter in batch_normalization layer. 
However, I verified that the parameters stayed the same. So, my question is: why are the parameters in the batch_normalization layer reported as uninitialized, yet stay the same after sess.run(tf.variables_initializer)? </p>
<p>There will be beta and gamma values, as these parameters are initialized before the model is trained. By default gamma is initialised to 1 and beta to 0.</p>
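<p>Regarding the worry about <code>tf.variables_initializer</code>: a common TF1 pattern is to initialize only the variables that are actually uninitialized (here the optimizer slots and the moving-average bookkeeping), which by construction leaves the trained weights and batch-norm statistics untouched; a sketch, assuming a live session <code>sess</code>:</p> <pre><code># report_uninitialized_variables returns names (bytes in Python 3)
uninit_names = set(sess.run(tf.report_uninitialized_variables()))
uninit_vars = [v for v in tf.global_variables()
               if v.name.split(':')[0].encode() in uninit_names]
sess.run(tf.variables_initializer(uninit_vars))
</code></pre>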
tensorflow|keras
1
377,773
51,037,433
Why does pandas 2min bucket print NaN although all my row values are numbers (not NaN)?
<p>I know that in my data the response_bytes column does not have NaN values, because when I run <code>data[data.response_bytes.isna()].count()</code> I get 0 as a result.</p> <p>When I then resample into 2-minute buckets, take the mean, and call head, I get NaN:</p> <pre><code>print(data.reset_index().set_index('time').resample('2min').mean().head())
                     index  identity  user  http_code  response_bytes  unknown
time
2018-01-31 09:26:00    0.5       NaN   NaN      200.0           264.0      NaN
2018-01-31 09:28:00    NaN       NaN   NaN        NaN             NaN      NaN
2018-01-31 09:30:00    NaN       NaN   NaN        NaN             NaN      NaN
2018-01-31 09:32:00    NaN       NaN   NaN        NaN             NaN      NaN
2018-01-31 09:34:00    NaN       NaN   NaN        NaN             NaN      NaN
</code></pre> <p>Why does the 2-minute-bucketed mean of response_bytes have NaN values?</p> <p>I wanted to experiment and learn how time bucketing works in pandas. So I used the log file <code>http://www.cs.tufts.edu/comp/116/access.log</code> as input data, loaded it into a pandas DataFrame, applied a 2-minute time bucket (for the first time in my life) and ran mean(). I wasn't expecting to see any NaN in the <strong>response_bytes</strong> column because none of its values are NaN.</p> <p>Here is my full code:</p> <pre><code>import urllib.request
import pandas as pd
import re
from datetime import datetime
import pytz

pd.set_option('max_columns',10)

def parse_str(x):
    """
    Returns the string delimited by two characters.
    Example:
        `&gt;&gt;&gt; parse_str('[my string]')`
        `'my string'`
    """
    return x[1:-1]

def parse_datetime(x):
    '''
    Parses datetime with timezone formatted as:
        `[day/month/year:hour:minute:second zone]`
    Example:
        `&gt;&gt;&gt; parse_datetime('13/Nov/2015:11:45:42 +0000')`
        `datetime.datetime(2015, 11, 3, 11, 45, 4, tzinfo=&lt;UTC&gt;)`
    Due to problems parsing the timezone (`%z`) with `datetime.strptime`, the timezone
    will be obtained using the `pytz` library.
    '''
    dt = datetime.strptime(x[1:-7], '%d/%b/%Y:%H:%M:%S')
    dt_tz = int(x[-6:-3])*60+int(x[-3:-1])
    return dt.replace(tzinfo=pytz.FixedOffset(dt_tz))

# data = pd.read_csv(StringIO(accesslog))
url = "http://www.cs.tufts.edu/comp/116/access.log"
accesslog = urllib.request.urlopen(url).read().decode('utf-8')

fields = ['host', 'identity', 'user', 'time_part1', 'time_part2', 'cmd_path_proto',
          'http_code', 'response_bytes', 'referer', 'user_agent', 'unknown']

data = pd.read_csv(url, sep=' ', header=None, names=fields, na_values=['-'])

# Panda's parser mistakenly splits the date into two columns, so we must concatenate them
time = data.time_part1 + data.time_part2
time_trimmed = time.map(lambda s: re.split('[-+]', s.strip('[]'))[0]) # Drop the timezone for simplicity
data['time'] = pd.to_datetime(time_trimmed, format='%d/%b/%Y:%H:%M:%S')
data.head()

print(data.reset_index().set_index('time').resample('2min').mean().head())
</code></pre> <p>I was expecting the time-bucketed mean of the response_bytes column not to be NaN.</p>
<p>It is expected behaviour, because <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow noreferrer"><code>resampling</code></a> converts to a regular time interval, so if there are no samples you get <code>NaN</code>.</p> <p>So it means there are no datetimes within some 2-minute intervals, e.g. between <code>2018-01-31 09:28:00</code> and <code>2018-01-31 09:30:00</code>, so the <code>mean</code> cannot be computed and you get <code>NaN</code>s.</p> <pre><code>print (data[data['time'].between('2018-01-31 09:28:00','2018-01-31 09:30:00')])
Empty DataFrame
Columns: [host, identity, user, time_part1, time_part2, cmd_path_proto, http_code, response_bytes, referer, user_agent, unknown, time]
Index: []

[0 rows x 12 columns]
</code></pre>
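<p>If the empty buckets are unwanted in the output, drop them after resampling:</p> <pre><code>out = data.reset_index().set_index('time').resample('2min').mean()
print(out.dropna(how='all').head())
</code></pre>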
python|pandas
1
377,774
50,672,019
tensorflow bijector construction
<p>I am new to tensorflow distribution and bijector. I know that when they designed the tensorflow distribution package, they partitioned a tensor's shape into three groups: [sample_shape, batch_shape, event_shape]. But I find it hard to understand why, when we define a new bijector class, the parent class's event dimension is always defined to be 1. For example, the following code is a Real-NVP bijector class, and in its <strong>init</strong> function:</p> <pre><code>super(NVPCoupling, self).__init__(
    event_ndims=1, validate_args=validate_args, name=name)
</code></pre> <p>But as I understand it, this Real-NVP class is acting on a tensor whose event dimension is D, right?</p> <pre><code>def net(x, out_size):
    return layers.stack(x, layers.fully_connected, [512, 512, out_size])

# Affine Coupling layer for Real-NVP

class NVPCoupling(tfb.Bijector):
    """NVP affine coupling layer for 2D units.
    """
    def __init__(self, D, d, layer_id=0, validate_args=False, name="NVPCoupling"):
        """
        Args:
          d: First d units are pass-thru units.
        """
        # first d numbers decide scaling/shift factor for remaining D-d numbers.
        super(NVPCoupling, self).__init__(
            event_ndims=1, validate_args=validate_args, name=name)
        self.D, self.d = D, d
        self.id = layer_id
        # create variables here
        tmp = tf.placeholder(dtype=DTYPE, shape=[1, self.d])
        self.s(tmp)
        self.t(tmp)

    def s(self, xd):
        with tf.variable_scope('s%d' % self.id, reuse=tf.AUTO_REUSE):
            return net(xd, self.D - self.d)

    def t(self, xd):
        with tf.variable_scope('t%d' % self.id, reuse=tf.AUTO_REUSE):
            return net(xd, self.D - self.d)

    def _forward(self, x):
        xd, xD = x[:, :self.d], x[:, self.d:]
        yD = xD * tf.exp(self.s(xd)) + self.t(xd)  # [batch, D-d]
        return tf.concat([xd, yD], axis=1)

    def _inverse(self, y):
        yd, yD = y[:, :self.d], y[:, self.d:]
        xD = (yD - self.t(yd)) * tf.exp(-self.s(yd))
        return tf.concat([yd, xD], axis=1)

    def _forward_log_det_jacobian(self, x):
        event_dims = self._event_dims_tensor(x)
        xd = x[:, :self.d]
        return tf.reduce_sum(self.s(xd), axis=event_dims)
</code></pre> <p>Also, when we use a sample tensor to train it, the tensor has shape [batch_size, D]. But the tmp placeholder has a shape=[1, self.d], not [batch_size, self.d]. What is the reason for that? I hope some experts can clarify this. Thanks.</p>
<p><code>event_ndims</code> is the <em>number</em> of event dimensions, not the size of the input. Thus <code>event_ndims=1</code> operates on vectors, <code>event_ndims=2</code> on matrices, and so on. See the <code>__init__</code> docstring for the <code>Bijector</code> class.</p>
python|tensorflow|machine-learning|unsupervised-learning
1
377,775
51,043,372
Replace a column values with its mean of groups in dataframe
<p>I have a DataFrame as</p> <pre><code>Page  Line  y
1     2     3.2
1     2     6.1
1     3     7.1
2     4     8.5
2     4     9.1
</code></pre> <p>I have to replace column y with the mean of its values within groups. I can do that for grouping by one column using this code.</p> <pre><code>df['y'] = df['y'].groupby(df['Page'], group_keys=False).transform('mean')
</code></pre> <p><strong>I am trying to replace the values of y by the mean of groups by 'Page' and 'Line'. Something like this,</strong></p> <pre><code>Page  Line  y
1     2     4.65
1     2     4.65
1     3     7.1
2     4     8.8
2     4     8.8
</code></pre> <p>I have searched through a lot of answers on this site but couldn't find this application. Using python3 with pandas.</p>
<p>You need list of columns names, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> parameter <code>by</code>:</p> <blockquote> <p><strong>by</strong> : mapping, function, label, or <strong>list of labels</strong></p> <p>Used to determine the groups for the groupby. If by is a function, it’s called on each value of the object’s index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series’ values are first aligned; see .align() method). If an ndarray is passed, the values are used as-is determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted a (single) key.</p> </blockquote> <pre><code>df['y'] = df.groupby(['Page', 'Line'])['y'].transform('mean') print (df) Page Line y 0 1 2 4.65 1 1 2 4.65 2 1 3 7.10 3 2 4 8.80 4 2 4 8.80 </code></pre> <p>Your solution should be changed to this syntactic sugar - pass Series in list:</p> <pre><code>df['y'] = df['y'].groupby([df['Page'], df['Line']]).transform('mean') </code></pre>
python|python-3.x|pandas
9
377,776
51,101,386
Subplot of difference between data points imported with Pandas and conversion of time values
<p>I'm relatively new to Python (in the process of self-teaching) and so this is proving to be quite a learning curve but I'm very happy to get to grips with it. I have a set of data points from an experiment in excel, one column is time (with the format 00:00:00:000) and a second column is the measured parameter.</p> <p>I'm using pandas to read the excel document in order to produce a graph from it with time along the x-axis and the measured variable along the y-axis. However, when I plot the data, the time column becomes the data point number (i.e. 00:00:00:000 - 00:05:40:454 becomes 0 - 2000) and I'm not sure why. Could anyone please advise how to rectify this?</p> <p>Secondly, I'd like to produce a subplot that shows the difference between the y-values as a function of time, basically a gradient to show the variation. Is there a way to easily calculate this and display it using pandas?</p> <p>Here is my code, please do forgive how basic it is!</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import pylab df = pd.read_excel('rest.xlsx', 'Sheet1') df.plot(legend=False, grid=False) plt.show() plt.savefig('myfig') </code></pre>
<p>If you just read the excel file, pandas will create a RangeIndex, starting at 0. To use your time information from you excel file as index, you have to specify the name (as string) of the time column with the key-word argument index_col in the read_excel call:</p> <pre><code>df = pd.read_excel('rest.xlsx', 'Sheet1', index_col='name_of_time_column') </code></pre> <p>Just replace 'name_of_time_column' with the actual name of the column that contains the time information.<br> (Hopefully pandas will automatically parse the time information to a Datetimeindex, but your format should be fine.) The plot will use the Datetimeindex on x-axis. To get the time difference between each datapoint, use the diff method with argument 1 on your DataFrame:</p> <pre><code>difference = df.diff(1) difference.plot(legend=False, grid=False) </code></pre>
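<p>Putting it together with a subplot for the differences (a sketch; <code>'time'</code> is a placeholder for your actual column name):</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel('rest.xlsx', 'Sheet1', index_col='time')

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
df.plot(ax=ax1, legend=False, grid=False)          # the measured parameter
df.diff(1).plot(ax=ax2, legend=False, grid=False)  # point-to-point change
plt.savefig('myfig')
plt.show()
</code></pre>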
python|pandas|matplotlib|time|graph
0
377,777
50,959,398
customize the color of a bar chart while reading from two different data frames in seaborn
<p>I have plotted a bar chart using the code below:</p> <pre><code>dffinal['CI-noCI']='Cognitive Impairement'
nocidffinal['CI-noCI']='Non Cognitive Impairement'
res=pd.concat([dffinal,nocidffinal])
sns.barplot(x='6month',y='final-formula',data=res,hue='CI-noCI')
plt.xticks(fontsize=8, rotation=45)
plt.show()
</code></pre> <p>The result is as below:</p> <p><a href="https://i.stack.imgur.com/AqFPV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AqFPV.png" alt="enter image description here" /></a></p> <p>I want to change the colors to red and green. How can I do that? Just as information, this plot reads from two different data frames.</p> <p>The links I have gone through dealt with the case where there was only one data frame, so they did not apply to my case.</p> <p>Thanks :)</p>
<p>You can use matplotlib to overwrite Seaborn's default color cycling to ensure the hues it uses are red and green.</p> <pre><code>import matplotlib.pyplot as plt plt.rcParams['axes.prop_cycle'] = ("cycler('color', 'rg')") </code></pre> <hr> <p>Example:</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({'date': [1,2,3,4,4,5], 'value': [10,15,35,14,18,4], 'hue_v': [1,1,2,1,2,2]}) # The normal seaborn coloring is blue and orange sns.barplot(x='date', y='value', data=df, hue='hue_v') </code></pre> <p><a href="https://i.stack.imgur.com/WjqQA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WjqQA.png" alt="enter image description here"></a></p> <pre><code># Now change the color cycling and re-make the same plot: plt.rcParams['axes.prop_cycle'] = ("cycler('color', 'rg')") sns.barplot(x='date', y='value', data=df, hue='hue_v') </code></pre> <p><a href="https://i.stack.imgur.com/Bfp1O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bfp1O.png" alt="enter image description here"></a></p> <p>This will now impact all of the other figures you make, so if you want to restore the seaborn defaults for all other plots you need to then do:</p> <pre><code>sns.reset_orig() </code></pre>
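<p>Alternatively, and without touching the global rcParams, you can usually pass the colors directly to seaborn via <code>palette</code> (exact behaviour depends on your seaborn version):</p> <pre><code>sns.barplot(x='date', y='value', data=df, hue='hue_v', palette=['red', 'green'])
</code></pre>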
pandas|dataframe|matplotlib|bar-chart|seaborn
0
377,778
50,710,889
Tensorflow hub tags and exporting
<p>I am very confused about how tags are supposed to work in hub and how I use them when exporting. How can I train on the train part of my graph and export the serving one?</p> <p>I have the following code:</p> <pre><code>def user_module_fn(foo, bar):
    x = tf.sparse_placeholder(tf.float32, shape=[-1, 32], name='name')
    y = something(x)
    hub.add_signature(name='my_name', inputs={"x": x}, outputs={"default": y})

module_spec = hub.create_module_spec(user_module_fn, tags_and_args=[
    (set(), {"foo": foo, "bar": bar}),
    ({"train"}, {"foo": foo, "bar": baz})
])

m = hub.Module(module_spec, name="my_name", trainable=True, tags={"train"})
hub.register_module_for_export(m, "my_name")
</code></pre> <p>My question is the following: since I am instantiating the module <code>m</code> with <code>tags={'train'}</code>, I think I am using the right one for training. Does this imply that I am <em>only</em> exporting the one tagged with <code>train</code>? How do I use the <code>train</code> one for training and <code>set()</code> (so the default) for serving?</p>
<p>In the best (i.e., simplest) case, your module doesn't need any tags at all, namely when one and the same piece of TensorFlow graph fits all intended uses of the module. For that, just leave <code>tags</code> or <code>tags_and_args</code> unset to get the default (an empty set of tags).</p> <p>Tags are needed if the same module needs more than one version of its graph, say, a training version that applies dropout in training mode, and an inference version that makes dropout a no-op. You'll typically see code like</p> <pre><code>def module_fn(training):
  inputs = tf.placeholder(dtype=tf.float32, shape=[None, 50])
  layer1 = tf.layers.fully_connected(inputs, 200)
  layer1 = tf.layers.dropout(layer1, rate=0.5, training=training)
  layer2 = tf.layers.fully_connected(layer1, 100)
  outputs = dict(default=layer2)
  hub.add_signature(inputs=inputs, outputs=outputs)

...

tags_and_args = [(set(), {"training": False}),
                 ({"train"}, {"training": True})]
module_spec = hub.create_module_spec(module_fn, tags_and_args)
</code></pre> <p>Creating the module spec runs module_fn for <em>all</em> the provided argument dicts, and stores <em>all</em> the graphs built from them behind the scenes. When you make a module from that spec and then export it, it will contain all the graph versions that were created, tagged with the respective sets of strings.</p> <p>The <code>tags=...</code> argument to <code>m = hub.Module(...)</code> merely controls which of the different graph versions gets used in the current graph, say, when <code>m</code> is called (i.e., applied to inputs). It does not constrain what <code>m.export(...)</code> writes out.</p>
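<p>For completeness, a sketch of how the two graph versions would then be used, building on the <code>module_spec</code> above:</p> <pre><code># Training: pick the graph tagged {"train"} and allow gradient updates.
m_train = hub.Module(module_spec, trainable=True, tags={"train"})

# Serving/inference: the default (empty) tag set selects the dropout-free graph.
m_serve = hub.Module(module_spec)  # tags defaults to the empty set
</code></pre>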
python|tensorflow|tensorflow-serving|tensorflow-estimator|tensorflow-hub
1
377,779
51,040,382
ZeroDivisionError - Pandas
<p>The script is the following - aiming to show the differences in average click through rates by keyword ranking position - highlighting queries/pages with under performing ctrs.</p> <p>Until recently it has been working fine - however it now gives me the below ZeroDivisionError.</p> <pre><code>import os import sys import math from statistics import median import numpy as np import pandas as pd in_file = 'data.csv' thresh = 5 df = pd.read_csv(in_file) # Round position to tenths df = df.round({'position': 1}) # Restrict garbage 1 impression, 1 click, 100% CTR entries df = df[df.clicks &gt;= thresh] df.head() def apply_stats(row, df): if int(row['impressions']) &gt; 5: ctr = float(row['ctr']) pos = row['position'] # Median median_ctr = median(df.ctr[df.position==pos]) # Mad mad_ctr = df.ctr[df.position==pos].mad() row['score'] = round(float( (1 * (ctr - median_ctr))/mad_ctr ), 3 ) row['mad'] = mad_ctr row['median'] = median_ctr return row df = df.apply(apply_stats, args=(df,), axis = 1) df.to_csv('out2_' + in_file) df.head() </code></pre> <p>The error I'm receiving is this:</p> <pre><code>----------------------------------------- ZeroDivisionErrorTraceback (most recent call last) &lt;ipython-input-33-f1eef41d1c9a&gt; in &lt;module&gt;() ----&gt; 1 df = df.apply(apply_stats, args=(df,), axis = 1) 2 df.to_csv('out2_' + in_file) 3 df.head() ~\Anaconda3\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds) 6002 args=args, 6003 kwds=kwds) -&gt; 6004 return op.get_result() 6005 6006 def applymap(self, func): ~\Anaconda3\lib\site-packages\pandas\core\apply.py in get_result(self) 140 return self.apply_raw() 141 --&gt; 142 return self.apply_standard() 143 144 def apply_empty_result(self): ~\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_standard(self) 246 247 # compute the result using the series generator --&gt; 248 self.apply_series_generator() 249 250 # wrap results ~\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_series_generator(self) 275 try: 276 for i, v in enumerate(series_gen): --&gt; 277 results[i] = self.f(v) 278 keys.append(v.name) 279 except Exception as e: ~\Anaconda3\lib\site-packages\pandas\core\apply.py in f(x) 72 if kwds or args and not isinstance(func, np.ufunc): 73 def f(x): ---&gt; 74 return func(x, *args, **kwds) 75 else: 76 f = func &lt;ipython-input-32-900a8cda8fce&gt; in apply_stats(row, df) 11 mad_ctr = df.ctr[df.position==pos].mad() 12 ---&gt; 13 row['score'] = round(float( (1 * (ctr - median_ctr))/mad_ctr ), 3 ) 14 row['mad'] = mad_ctr 15 row['median'] = median_ctr ZeroDivisionError: ('float division by zero', 'occurred at index 317') </code></pre> <p>The data in the CSV are all integers for clicks, impressions + floats for ctr, position.</p> <p>Is there an error in the script or likely a data formatting issue?</p>
<p>It looks like you're getting a row where <code>mad_ctr</code> is zero, so just add a check for that case:</p> <pre><code>row['score'] = round(float( (1 * (ctr - median_ctr))/mad_ctr ), 3 ) if mad_ctr != 0 else 0
</code></pre> <p>This will set <code>score</code> to zero if <code>mad_ctr</code> is zero, but you could also use <code>None</code> or some other default value if you prefer.</p>
python|pandas|division|zero|divide-by-zero
1
377,780
50,849,202
Pandas for each group of 4 calendar months find the date on which maximum value occurred
<p>For each group of 4 consecutive calendar months, I need to find the date on which the maximum value occurred.</p> <p>My DataFrame looks like this:</p> <p><a href="https://i.stack.imgur.com/7Yggm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Yggm.png" alt="DataFrame"></a></p>
<pre><code>import numpy as np
import pandas as pd
from numpy.random import randn as rn

DatetimeIndex = pd.date_range('1/1/2015', '12/31/2015', freq='B')
np.random.seed(101)
df = pd.DataFrame(rn(len(DatetimeIndex)), index=DatetimeIndex)
df.groupby(pd.Grouper(freq='4MS', label='right')).max()[:3]
</code></pre> <pre><code>                   0
2015-05-01  2.706850
2015-09-01  2.302987
2016-01-01  2.493990
</code></pre>
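<p>Note that <code>.max()</code> returns the maximum <em>value</em> per 4-month bucket; to get the <em>date</em> on which it occurred (what the question asks for), one option is <code>idxmax</code>:</p> <pre><code>df.groupby(pd.Grouper(freq='4MS', label='right')).idxmax()
</code></pre>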
pandas|datetime|data-science|pandas-groupby|data-science-experience
1
377,781
51,035,281
How to populate dataframe from dictionary in loop
<p>I am trying to perform entity analysis on text and I want to put the results in a dataframe. Currently the results are not stored in a dictionary, nor in a Dataframe. The results are extracted with two functions.</p> <p>df:</p> <pre><code>ID title cur_working pos_arg neg_arg date 132 leave yes good coffee management, leadership and salary 13-04-2018 145 love it yes nice colleagues long days 14-04-2018 </code></pre> <p>I have the following code:</p> <pre><code>result = entity_analysis(df, 'neg_arg', 'ID') #This code loops through the rows and calls the function entities_text() def entity_analysis(df, col, idcol): temp_dict = {} for index, row in df.iterrows(): id = (row[idcol]) x = (row[col]) entities = entities_text(x, id) #temp_dict.append(entities) #final = pd.DataFrame(columns = ['id', 'name', 'type', 'salience']) return print(entities) def entities_text(text, id): """Detects entities in the text.""" client = language.LanguageServiceClient() ent_df = {} if isinstance(text, six.binary_type): text = text.decode('utf-8') # Instantiates a plain text document. document = types.Document( content=text, type=enums.Document.Type.PLAIN_TEXT) # Detects entities in the document. entities = client.analyze_entities(document).entities # entity types from enums.Entity.Type entity_type = ('UNKNOWN', 'PERSON', 'LOCATION', 'ORGANIZATION', 'EVENT', 'WORK_OF_ART', 'CONSUMER_GOOD', 'OTHER') for entity in entities: ent_df[id] = ({ 'name': [entity.name], 'type': [entity_type[entity.type]], 'salience': [entity.salience] }) return print(ent_df) </code></pre> <p>This code gives the following outcome:</p> <pre><code>{'132': {'name': ['management'], 'type': ['OTHER'], 'salience': [0.16079013049602509]}} {'132': {'name': ['leadership'], 'type': ['OTHER'], 'salience': [0.05074194446206093]}} {'132': {'name': ['salary'], 'type': ['OTHER'], 'salience': [0.27505040168762207]}} {'145': {'name': ['days'], 'type': ['OTHER'], 'salience': [0.004272154998034239]}} </code></pre> <p>I have created <code>temp_dict</code> and a <code>final</code> dataframe in the function <code>entity_analysis()</code>. <a href="https://stackoverflow.com/questions/28056171/how-to-build-and-fill-pandas-dataframe-from-for-loop">This thread</a> explained that appending to a dataframe in a loop is not efficient. <strong>I don't know how to populate the dataframe in an efficient way</strong>. <a href="https://stackoverflow.com/questions/30854728/python3-4-dataframe-from-dictionary">These</a> <a href="https://stackoverflow.com/questions/3294889/iterating-over-dictionaries-using-for-loops">threads</a> are related to my question but they explain how to populate a Dataframe from existing data. When I try to use <code>temp_dict.update(entities)</code> and return <code>temp_dict</code> I get an error:</p> <blockquote> <p>in entity_analysis temp_dict.update(entities) TypeError: 'NoneType' object is not iterable</p> </blockquote> <p>I want the output to be like this:</p> <pre><code>ID name type salience 132 management OTHER 0.16079013049602509 132 leadership OTHER 0.05074194446206093 132 salary OTHER 0.27505040168762207 145 days OTHER 0.004272154998034239 </code></pre>
<p>One solution is to create a list of lists via your <code>entities</code> iterable. Then feed your list of lists into <code>pd.DataFrame</code>:</p> <pre><code>LoL = [] for entity in entities: LoL.append([id, entity.name, entity_type[entity.type], entity.salience]) df = pd.DataFrame(LoL, columns=['ID', 'name', 'type', 'salience']) </code></pre> <p>If you <em>also</em> need the dictionary in the format you currently produce, then you can add your current logic to your <code>for</code> loop. However, first check whether you <em>need</em> to use two structures to store identical data.</p>
python|pandas|dictionary|google-natural-language
1
377,782
50,785,716
Add two columns, i.e. mean_a and mean_b
<pre><code>#       Price
0        1.00
1       12.23
2        3.24
3       12.67
6      149.98
7       19.98
8     1883.23
9        1.99
10       4.89
11       9.99
12      12.99
13      18.23
14      17.99
15      18.98
16      18.11
17      19.10
18      20.30
19    1901.30
20      20.27
</code></pre> <p>Suppose I have the dataframe above. I would like to add two columns, <code>mean_a</code> and <code>mean_b</code>. <code>mean_a</code> would compute the mean of the next <code>k</code> levels and <code>mean_b</code> would compute the mean of the previous <code>k</code> levels. For instance, at <code>#10</code> with <code>k=3</code>, <code>mean_a = (4.89 + 9.99 + 12.99)/3 = 9.29</code> and <code>mean_b = (4.89 + 1.99 + 1883.23)/3 = 630.0366667</code>. How can I implement that in Python?</p> <p>I have tried this, but I don't think it is good:</p> <pre><code>def moving_average(self, df, col_name='smooth_midprice', k=10):
    ma_cols = []
    mb_cols = []
    temp_df = pd.DataFrame()
    for i in range(0, k+1):
        ma_col = 'M_A_{}'.format(i)
        ma_cols.append(ma_col)
        mb_col = 'M_B_{}'.format(i)
        mb_cols.append(mb_col)
        temp_df[ma_col] = df[col_name].shift(i)
        temp_df[mb_col] = df[col_name].shift(-i)

    df['M_A'] = temp_df[ma_cols].mean(axis=1, skipna=True, numeric_only=True)
    df['M_B'] = temp_df[mb_cols].mean(axis=1, skipna=True, numeric_only=True)
    return df
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="nofollow noreferrer">You can just using <code>rolling</code></a> (Notice <code>.iloc</code> is to reverse the order of the df)</p> <pre><code>df['mean_a'] = df.Price.rolling(3,min_periods =1).mean() df['mean_b'] = df.Price.iloc[::-1].rolling(3,min_periods =1).mean() df Out[9]: Price mean_a mean_b 0 1.00 1.000000 5.490000 1 12.23 6.615000 9.380000 2 3.24 5.490000 55.296667 3 12.67 9.380000 60.876667 6 149.98 55.296667 684.396667 7 19.98 60.876667 635.066667 8 1883.23 684.396667 630.036667 9 1.99 635.066667 5.623333 10 4.89 630.036667 9.290000 11 9.99 5.623333 13.736667 12 12.99 9.290000 16.403333 13 18.23 13.736667 18.400000 14 17.99 16.403333 18.360000 15 18.98 18.400000 18.730000 16 18.11 18.360000 19.170000 17 19.10 18.730000 646.900000 18 20.30 19.170000 647.290000 19 1901.30 646.900000 960.785000 20 20.27 647.290000 20.270000 </code></pre> <p>Fix your code </p> <pre><code>col_name='Price' k=10 ma_cols = [] mb_cols = [] temp_df = pd.DataFrame() for i in range(0, k + 1): ma_col = 'M_A_{}'.format(i) ma_cols.append(ma_col) mb_col = 'M_B_{}'.format(i) mb_cols.append(mb_col) temp_df[ma_col] = df[col_name].shift(i) temp_df[mb_col] = df[col_name].shift(-i) df['M_A'] = temp_df[ma_cols].stack().groupby(level=0).head(3).mean(level=0)#change 3 to k df['M_B'] = temp_df[mb_cols].stack().groupby(level=0).head(3).mean(level=0) df Out[35]: Price mean_a mean_b M_A M_B 0 1.00 1.000000 5.490000 1.000000 5.490000 1 12.23 6.615000 9.380000 6.615000 9.380000 2 3.24 5.490000 55.296667 5.490000 55.296667 3 12.67 9.380000 60.876667 9.380000 60.876667 6 149.98 55.296667 684.396667 55.296667 684.396667 7 19.98 60.876667 635.066667 60.876667 635.066667 8 1883.23 684.396667 630.036667 684.396667 630.036667 9 1.99 635.066667 5.623333 635.066667 5.623333 10 4.89 630.036667 9.290000 630.036667 9.290000 11 9.99 5.623333 13.736667 5.623333 13.736667 12 12.99 9.290000 16.403333 9.290000 16.403333 13 18.23 13.736667 18.400000 13.736667 18.400000 14 17.99 16.403333 18.360000 16.403333 18.360000 15 18.98 18.400000 18.730000 18.400000 18.730000 16 18.11 18.360000 19.170000 18.360000 19.170000 17 19.10 18.730000 646.900000 18.730000 646.900000 18 20.30 19.170000 647.290000 19.170000 647.290000 19 1901.30 646.900000 960.785000 646.900000 960.785000 20 20.27 647.290000 20.270000 647.290000 20.270000 </code></pre>
python|pandas|dataframe
3
377,783
50,996,395
Pandas dataframe sort_values set default ascending=False
<p>Would it be possible to define a property so that <code>sort_values(ascending=False)</code> is always the default? I sort in descending order quite often, so I would like to set that as the default behavior.</p>
<p>Another option would be to declare the settings you often use in the beginning of your code and pass them as kwargs.</p> <p>Personally I would, however, write it out every time.</p> <pre><code>import pandas as pd p = {"ascending":False, "inplace":True} df = pd.DataFrame({ 'col1': [1,6,2,5,9,3] }) df.sort_values(by='col1', **p) print(df) </code></pre> <p>Returns:</p> <pre><code> col1 4 9 1 6 3 5 5 3 2 2 0 1 </code></pre>
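<p>If you would rather have a reusable callable than a kwargs dict, <code>functools.partial</code> is another option:</p> <pre><code>from functools import partial
import pandas as pd

# A callable that always sorts descending, in place.
sort_desc = partial(pd.DataFrame.sort_values, ascending=False, inplace=True)

df = pd.DataFrame({'col1': [1, 6, 2, 5, 9, 3]})
sort_desc(df, by='col1')  # same as df.sort_values(by='col1', ascending=False, inplace=True)
print(df)
</code></pre>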
python|pandas
2
377,784
51,012,084
numpy, taking array difference of their intersection
<p>I have multiple numpy arrays and I want to create new arrays doing something that is like an XOR ... but not quite.</p> <p>My input is two arrays, array1 and array2. My output is a modified (or new array, I don't really care) version of array1.</p> <p>The modification is elementwise, by doing the following:</p> <p>1.) If either array has 0 for the given index, then the index is left unchanged. 2.) If array1 and array2 are nonzero, then the modified array is assigned the value of array1's index subtracted by array2's index, down to a minimum of zero.</p> <p>Examples:</p> <pre><code>array1: [0, 3, 8, 0] array2: [1, 1, 1, 1] output: [0, 2, 7, 0] array1: [1, 1, 1, 1] array2: [0, 3, 8, 0] output: [1, 0, 0, 1] array1: [10, 10, 10, 10] array2: [8, 12, 8, 12] output: [2, 0, 2, 0] </code></pre> <p>I would like to be able to do this with say, a single numpy.copyto statement, but I don't know how. Thank you.</p> <p>edit: </p> <p>it just hit me. could I do:</p> <pre><code>new_array = np.zeros(size_of_array1) numpy.copyto(new_array, array1-array2, where=array1&gt;array2) </code></pre> <p>Edit 2: Since I have received several answers very quickly I am going to time the different answers against each other to see how they do. Be back with results in a few minutes.</p> <p>Okay, results are in:</p> <p>array of random ints 0 to 5, size = 10,000, 10 loops</p> <p>1.)using my np.copyto method</p> <p>2.)using clip</p> <p>3.)using maximum</p> <pre><code>0.000768184661865 0.000391960144043 0.000403165817261 </code></pre> <p>Kasramvd also provided some useful timings below</p>
<pre><code>In [73]: np.maximum(0,np.array([0,3,8,0])-np.array([1,1,1,1])) Out[73]: array([0, 2, 7, 0]) </code></pre> <p>This doesn't explicitly address</p> <blockquote> <p>If either array has 0 for the given index, then the index is left unchanged. </p> </blockquote> <p>but the results match for all examples:</p> <pre><code>In [74]: np.maximum(0,np.array([1,1,1,1])-np.array([0,3,8,0])) Out[74]: array([1, 0, 0, 1]) In [75]: np.maximum(0,np.array([10,10,10,10])-np.array([8,12,8,12])) Out[75]: array([2, 0, 2, 0]) </code></pre>
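<p>If you want the zero-leaving rule enforced explicitly rather than relying on it coinciding with the examples, one way is a mask with <code>np.where</code>:</p> <pre><code>import numpy as np

a1 = np.array([10, 0, 10, 10])
a2 = np.array([8, 12, 0, 12])

mask = (a1 != 0) &amp; (a2 != 0)           # only modify where both are nonzero
out = np.where(mask, np.maximum(0, a1 - a2), a1)
print(out)                             # [ 2  0 10  0]
</code></pre>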
python|arrays|numpy|multidimensional-array|numpy-ndarray
4
377,785
50,907,980
Adding row values to a dataframe based on matching column labels
<p>I try to get my head around this problem. I have three dataframes and I would like to merge (concatenate?) two of these dataframes based on values inside a third one. Here are the dataframes:</p> <p>df1:</p> <pre><code>index,fields,a1,a2,a3,a4,a5 2018-06-01,price,1.1,2.1,3.1,4.1,5.1 2018-06-01,amount,15,25,35,45,55 2018-06-02,price,1.2,2.2,3.2,4.2,5.2 2018-06-02,amount,16,26,36,46,56 2018-06-03,price,1.3,2.3,3.3,4.3,5.3 2018-06-03,amount,17,27,37,47,57 </code></pre> <p>df2:</p> <pre><code>index,fields,b1,b2,b3 2018-06-01,clients,1,2,3 2018-06-02,clients,1,2,3 2018-06-03,clients,1,2,3 </code></pre> <p>Columns in df1 and df2 are different but their relationship is in df3.</p> <p>df3:</p> <pre><code>index,product1,product2 0,a1,b1 1,a2,b1 2,a3,b2 3,a4,b2 4,a5,b3 </code></pre> <p>I would like to merge the data in df1 and df2 but keep the same columns as in d1 (as b1, b2, b3 are referenced with a1, a2, a3, a4 and a5). Here is df4, the desired dataframe I want.</p> <p>df4:</p> <pre><code> index,fields,a1,a2,a3,a4,a5 2018-06-01,price,1.1,2.1,3.1,4.1,5.1 2018-06-01,amount,15,25,35,45,55 2018-06-01,clients,1,1,2,2,3 2018-06-02,price,1.2,2.2,3.2,4.2,5.2 2018-06-02,amount,16,26,36,46,56 2018-06-02,clients,4,4,5,5,6 2018-06-03,price,1.3,2.3,3.3,4.3,5.3 2018-06-03,amount,17,27,37,47,57 2018-06-03,clients,7,7,8,8,9 </code></pre> <p>many thanks in advance,</p>
<p>Unpivot <code>df2</code> using <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.melt.html" rel="nofollow noreferrer"><code>df.melt</code></a>:</p> <pre><code>df2_melt = df2.melt(["index", "fields"], var_name="product2") </code></pre> <p>Drop redundant column <code>index</code> from reference table <code>df3</code> and <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>pd.merge</code></a> it with <code>melted df2</code>:</p> <pre><code>merged = pd.merge(df2_melt, df3.drop("index", axis=1), on="product2")\ .drop("product2", axis=1) </code></pre> <p>Do <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pd.pivot_table</code></a> from merge result:</p> <pre><code>new_rows = pd.pivot_table(merged, index=["index", "fields"], columns="product1", values="value")\ .reset_index() </code></pre> <p>Add new rows to <code>df1</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat</code></a>, sort rows and reset index:</p> <pre><code>pd.concat([df1, new_rows]).sort_values("index").reset_index(drop=True) </code></pre> <p><strong>Result</strong>:</p> <pre><code>product1 index fields a1 a2 a3 a4 a5 0 2018-06-01 price 1.1 2.1 3.1 4.1 5.1 1 2018-06-01 amount 15.0 25.0 35.0 45.0 55.0 2 2018-06-01 clients 1.0 1.0 2.0 2.0 3.0 3 2018-06-02 price 1.2 2.2 3.2 4.2 5.2 4 2018-06-02 amount 16.0 26.0 36.0 46.0 56.0 5 2018-06-02 clients 1.0 1.0 2.0 2.0 3.0 6 2018-06-03 price 1.3 2.3 3.3 4.3 5.3 7 2018-06-03 amount 17.0 27.0 37.0 47.0 57.0 8 2018-06-03 clients 1.0 1.0 2.0 2.0 3.0 </code></pre>
python|python-3.x|pandas|dataframe
1
377,786
50,873,164
How can i find the "non-unique" rows?
<p>I imported CSV files with over 500k rows: one year of data, one row every minute. To merge two of these files, I want to re-sample the index to every minute:</p> <pre><code>Temp= pd.read_csv("Temp.csv", sep=";", decimal="," , thousands='.' ,encoding="cp1252")
Temp["Time"] = pd.to_datetime(Temp["Time"],dayfirst=True)
Temp.set_index(['Time'], inplace=True)
Temp= Temp.resample('1Min').ffill()
</code></pre> <p>But I got the error:</p> <blockquote> <p>cannot reindex a non-unique index with a method or limit</p> </blockquote> <p>How can I find the "non-unique" rows?</p>
<p>You can return a slice of all duplicated rows using <code>df.duplicated()</code>.</p> <p>In your case:</p> <pre><code>Temp[Temp.duplicated(subset=None, keep=False)]
</code></pre> <p>Here <code>subset</code> can be changed if you want to find duplicates only in a specific column, and <code>keep=False</code> specifies that all duplicated rows are shown, regardless of whether a row is the first or a later occurrence.</p> <p>Documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html</a></p>
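<p>Once you have identified them, a common next step before resampling is to drop the duplicated index entries, e.g. keeping the first occurrence:</p> <pre><code>Temp = Temp[~Temp.index.duplicated(keep='first')]
Temp = Temp.resample('1Min').ffill()
</code></pre>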
pandas|csv|datetime
1
377,787
50,904,144
Write the dataframe in loop to Multiple Excel File
<p>I have 500 Excel files. From each file I have to skip the first 4 rows and select a few columns. Either I can create a new Excel file for each input file with the selected columns, or I can push the data into SQL Server.</p> <p>I need to create one function that can read all the files, do the required processing, and give me the output either in Excel or in SQL.</p>
<p>It's convenient to use the <a href="https://docs.python.org/3/library/os.html" rel="nofollow noreferrer"><code>os</code></a> library for working with the file system.<br> The function <code>clean_one</code> is from your code with minor changes. The function <code>clean_all</code> applies <code>clean_one</code> to every file in the <code>root</code> directory (which in my code is <a href="https://docs.python.org/3/library/os.html#os.getcwd" rel="nofollow noreferrer"><code>os.getcwd</code></a> [the current working directory]):</p> <pre><code>import os
import pandas as pd

def clean_one(path, n):
    df = pd.read_excel(path, skiprows=4)
    col_list = ['Emp Code', 'Emp Name', 'Gross Earnings', 'Provident Fund',
                'Provident Fund_A', 'Profession Tax', 'ESIC Deduction',
                'ESIC Deduction_A', 'Gross Deductions', 'Net Salary',
                'Salary Bank', 'Salary Account No', 'IFSC Code', 'PAN',
                'Location', 'PF_Membership_No', 'State For PT']
    df.to_excel('File_%d.xlsx' % n, columns=col_list)

def clean_all(root):
    for n, filepath in enumerate(os.listdir(root)):
        path = os.path.join(root, filepath)
        clean_one(path, n)

if __name__ == "__main__":
    root = os.getcwd()  # Replace it with the necessary directory
    clean_all(root)
</code></pre>
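<p>If you would rather push the selected columns into SQL Server instead of writing Excel files, a sketch with SQLAlchemy could look like the following (the connection string and table name are placeholders for your environment, and <code>col_list</code> is the same list as above):</p> <pre><code>import pandas as pd
from sqlalchemy import create_engine

# Placeholder DSN -- replace with your actual SQL Server connection details.
engine = create_engine('mssql+pyodbc://user:password@my_dsn')

def push_one(path):
    df = pd.read_excel(path, skiprows=4)
    # Append the selected columns to a (hypothetical) 'salaries' table.
    df[col_list].to_sql('salaries', engine, if_exists='append', index=False)
</code></pre>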
python|excel|pandas
0
377,788
50,681,183
Pandas Dataframe splice data into 2 columns and make a number with a comma and integer
<p>I am currently running into two issues:</p> <p>My data-frame looks like this:</p> <pre><code>, male_female, no_of_students
0, 24 : 76, "81,120"
1, 33 : 67, "12,270"
2, 50 : 50, "10,120"
3, 42 : 58, "5,120"
4, 12 : 88, "2,200"
</code></pre> <p>What I would like to achieve is this:</p> <pre><code>, male, female, no_of_students
0, 24, 76, 81120
1, 33, 67, 12270
2, 50, 50, 10120
3, 42, 58, 5120
4, 12, 88, 2200
</code></pre> <p>Basically, I want to convert male_female into two columns and no_of_students into a column of integers. I tried a bunch of things, such as converting the no_of_students column into another type with .astype, but nothing seems to work properly, and I also couldn't find a smart way of splitting the male_female column.</p> <p>Hopefully someone can help me out!</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>pop</code></a> for new columns by separator, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>strip</code></a> trailing values, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>replace</code></a> and if necessary convert to <code>integer</code>s:</p> <pre><code>df[['male','female']] = df.pop('male_female').str.split(' : ', expand=True) df['no_of_students'] = df['no_of_students'].str.strip('" ').str.replace(',','').astype(int) df = df[['male','female', 'no_of_students']] print (df) male female no_of_students 0 24 76 81120 1 33 67 12270 2 50 50 10120 3 42 58 5120 4 12 88 2200 </code></pre>
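<p>If you also want <code>male</code>/<code>female</code> as numbers rather than strings after the split, add a cast:</p> <pre><code>df[['male','female']] = df[['male','female']].astype(int)
</code></pre>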
python|pandas|dataframe
6
377,789
51,095,631
Annual Return vs Annualized Return vs CAGR
<p>I read different articles on the Internet and Stack Overflow about <strong>Annual Return</strong> vs <strong>Annualised Return</strong> vs <strong>CAGR</strong>, and I am a bit confused: I do not know whether they are different or the same thing.</p> <p>Here is what I need. Suppose I have a growth rate of 10% over 3 years. Starting with 100$ I have:</p> <p>Initial capital: 100$, Year 1: 110$, Year 2: 121$, Year 3: 133$</p> <p>Now the <strong>Annual return</strong> here is 10% and the <strong>Annual mean return</strong> should be 33/3 = 11%. Correct? Obviously, the <strong>Annual mean return</strong> does not give me a good measure of the growth rate because it does not take compound interest into account. Am I correct?</p> <p>Suppose now I reverse my input data in the problem. I have the initial capital 100$ and the final 133$, and I want to calculate the growth rate. The formula should be (Capital final / Capital initial)^(1/3) - 1.</p> <p>(133/100)^(1/3) - 1 = (1.33)^(1/3) - 1 ≈ 0.0997</p> <p>which corresponds more or less to 10%. Are my computation and reasoning correct so far? If so, what is the difference between <strong>Annual Return</strong> and <strong>CAGR</strong>? It seems they are the same, correct?</p> <p>Suppose now my time frame goes from September 10 2008 to February 10 2012; my time frame is 3 years and 153 days. According to this website (see video) <a href="https://www.investopedia.com/ask/answers/071014/what-formula-calculating-compound-annual-growth-rate-cagr-excel.asp" rel="nofollow noreferrer">https://www.investopedia.com/ask/answers/071014/what-formula-calculating-compound-annual-growth-rate-cagr-excel.asp</a> the number of years here is (3 + K), where K is the number of days converted into years; in our example K = 153/365 ≈ 0.42, so the number of years is N = 3 + 0.42 = 3.42, and this is the value I should use in the formula. Am I correct?</p> <p>My last doubt is that the website suggests dividing the number of days by 360 or 365 to calculate K, but should I consider in my calculation only the number of days the stock market is open (roughly 252)?</p> <p>The last question, about Python: suppose I have a pandas DataFrame df. Can someone post a snippet of code I can use to calculate the Annual Return and/or CAGR? The code should take into consideration that the time frame could span more than one year and not be a perfect multiple of one year (like the example above).</p> <p>I tried to write it, but I am not sure I did a good job.</p> <p>In addition, what happens if my timeframe is less than one year? Should I use the Annualized Return? How would the code change in this case?</p> <p>Thanks in advance for the help.</p>
<p>Thanks to the comments and some days of work, I found detailed answers to my questions. I'll put them here for the benefit of others, divided into two parts: theory and programming.</p> <p><strong>Theory</strong></p> <p>According to my example above, if I have an investment with a 10% annual return and an initial capital of 100$, I have:</p> <p>Year 1: 110$, Year 2: 121$, Year 3: 133.1$</p> <p>The total earning is 33.1$, and it is called the <strong>Cumulative Return</strong>. If you divide this earning by the number of years you get the <strong>Average mean return</strong>. It is a good estimate for understanding trends, but it does not take compound interest into consideration. To get the real <strong>Annual Return</strong> you should apply this formula (which is derived from the compound interest formula -- see Wikipedia for details):</p> <p><strong>Annual Return (or CAGR)</strong> = (Capital final / Capital initial)^(1/N) - 1</p> <p>where N is the number of years. If the investment starts on January 1st and finishes on December 31st, N is an integer; otherwise N is a real number whose decimal part is calculated as the number of remaining days divided by 365.</p> <p><strong>Programming</strong></p> <p>Initially, since I am a beginner programmer in Python, I thought I needed complex algorithms to calculate returns from my time series. Later, once I understood the theory, things became easier. Suppose you have:</p> <ul> <li>start_time</li> <li>end_time</li> <li>start_price</li> <li>end_price</li> </ul> <p>The cumulative return is an easy formula in Python:</p> <pre><code>cum_return_percentage=((end_price/start_price) - 1)*100
</code></pre> <p>The annual return formula is also easy to implement:</p> <pre><code>annual_return=(pow((end_price/start_price),(1/number_years))-1)*100
</code></pre> <p>The problem is to calculate number_years correctly as a real value whose decimal part accounts for the remaining days. Here is the code I am using:</p> <pre><code>from calendar import isleap

diffyears = end_date.year - start_date.year
difference = end_date - start_date.replace(end_date.year)
days_in_year = 366 if isleap(end_date.year) else 365
number_years = diffyears + difference.days/days_in_year
annual_return = (pow((end_price/start_price),(1/number_years))-1)*100
</code></pre> <p>To give credit, this code comes from this discussion: <a href="https://stackoverflow.com/questions/4436957/pythonic-difference-between-two-dates-in-years">Pythonic difference between two dates in years?</a></p> <p>Hope this helps others with the same problems.</p>
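<p>For completeness, here is the same computation wrapped in a self-contained function, with a sanity check on the toy numbers from the question (the dates below are made up; any pair of <code>datetime.date</code> objects works):</p> <pre><code>from calendar import isleap
from datetime import date

def cagr(start_date, end_date, start_price, end_price):
    # Whole years plus the fractional remainder of the final year.
    diffyears = end_date.year - start_date.year
    difference = end_date - start_date.replace(year=end_date.year)
    days_in_year = 366 if isleap(end_date.year) else 365
    number_years = diffyears + difference.days / days_in_year
    return ((end_price / start_price) ** (1 / number_years) - 1) * 100

# 100$ -&gt; 133.1$ over exactly 3 years should give ~10%:
print(cagr(date(2015, 1, 1), date(2018, 1, 1), 100.0, 133.1))
</code></pre> <p>Note that for a period shorter than one year, <code>number_years</code> is below 1 and the same formula annualizes the return automatically.</p>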
python|pandas
3
377,790
50,718,634
Can't recognize dtype int as int in computation
<p>I have two columns (serverTs, FTs) in a DataFrame which are timestamps in Unix Time format. In my code I need to subtract one from the other. When I did so, I received an error saying I can't subtract strings, so I declared the types of serverTs and FTs as integers.</p> <pre><code>file = r'S:\Работа с клиентами\Клиенты\BigTV Rating\fts_check.csv'
col_names = ["Day", "vcId", "FTs", "serverTs", "locHost", "tnsTmsec", "Hits", "Uniqs"]
df_empty = pd.DataFrame()

with open(file) as fl:
    chunk_iter = pd.read_csv(fl, sep='\t', names=col_names,
                             dtype={'serverTs': np.int32, 'FTs': np.int32},
                             chunksize = 100000)
    for chunk in chunk_iter:
        chunk['diff'] = np.array(chunk['serverTs'])-np.array(chunk['FTs'])
        chunk = chunk[chunk['diff'] &gt; 180]
        df_empty = pd.concat([df_empty,chunk])
</code></pre> <p>But the program gives me an error:</p> <blockquote> <p>TypeError Traceback (most recent call last) pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()</p> <p>TypeError: Cannot cast array from dtype('O') to dtype('int32') according to the rule 'safe'</p> <p>During handling of the above exception, another exception occurred:</p> <p>ValueError Traceback (most recent call last) in () 6 #dtype={'serverTs': np.int32, 'FTs': np.int32}, 7 #chunk_iter = chunk_iter.astype({'serverTs': np.int32, 'FTs': np.int32}) ----> 8 for chunk in chunk_iter: 9 #print(chunk[chunk['FTs'] == 'NaN']) 10 #chunk[['serverTs','FTs']] = chunk[['serverTs','FTs']].astype('int32')</p> <p>C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in <code>__next__</code>(self) 1040 def <code>__next__</code>(self): 1041 try: -> 1042 return self.get_chunk() 1043 except StopIteration: 1044 self.close()</p> <p>C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in get_chunk(self, size) 1104 raise StopIteration<br> 1105 size = min(size, self.nrows - self._currow) -> 1106 return self.read(nrows=size) 1107 1108 </p> <p>C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows) 1067 raise ValueError('skipfooter not supported for iteration') 1068 -> 1069 ret = self._engine.read(nrows) 1070 1071 if self.options.get('as_recarray'):</p> <p>C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows) 1837 def read(self, nrows=None): 1838<br> try: -> 1839 data = self._reader.read(nrows) 1840 except StopIteration: 1841 if self._first_chunk:</p> <p>pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()</p> <p>pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()</p> <p>pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()</p> <p>pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_column_data()</p> <p>pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()</p> <p>ValueError: invalid literal for int() with base 10: 'FTs'</p> </blockquote> <p>I'm taking the data from Hadoop with SQL queries, so I checked for any symbols with letters, but there are only numbers. Moreover, if FTs had any non-numeric characters, it could not appear in the database. What could be the problem?</p>
<p>The problem here is that you are passing a <code>names</code> along with a <code>dtypes</code> argument. This causes <code>header</code> to act as <code>None</code>. So consider:</p> <pre><code>In [1]: import pandas as pd, numpy as np In [2]: dt={'serverTs': np.int32, 'FTs': np.int32} In [3]: import io In [4]: s = """FTs,serverTs ...: 0,1 ...: 1,2 ...: """ In [5]: pd.read_csv(io.StringIO(s)) Out[5]: FTs serverTs 0 0 1 1 1 2 In [6]: pd.read_csv(io.StringIO(s), dtype=dt) Out[6]: FTs serverTs 0 0 1 1 1 2 </code></pre> <p>Works fine. However, if I pass <code>names</code>:</p> <pre><code>In [8]: names = 'FTs','serverTs' In [9]: pd.read_csv(io.StringIO(s), dtype=dt, names=names) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() TypeError: Cannot cast array from dtype('O') to dtype('int32') according to the rule 'safe' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) &lt;ipython-input-9-18dcd5477b7e&gt; in &lt;module&gt;() ----&gt; 1 pd.read_csv(io.StringIO(s), dtype=dt, names=names) /Users/juan/anaconda3/lib/python3.5/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision) 707 skip_blank_lines=skip_blank_lines) 708 --&gt; 709 return _read(filepath_or_buffer, kwds) 710 711 parser_f.__name__ = name /Users/juan/anaconda3/lib/python3.5/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds) 453 454 try: --&gt; 455 data = parser.read(nrows) 456 finally: 457 parser.close() /Users/juan/anaconda3/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows) 1067 raise ValueError('skipfooter not supported for iteration') 1068 -&gt; 1069 ret = self._engine.read(nrows) 1070 1071 if self.options.get('as_recarray'): /Users/juan/anaconda3/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows) 1837 def read(self, nrows=None): 1838 try: -&gt; 1839 data = self._reader.read(nrows) 1840 except StopIteration: 1841 if self._first_chunk: pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_column_data() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() ValueError: invalid literal for int() with base 10: 'FTs' In [10]: </code></pre> <p>So one solution is to pass the correct header index:</p> <pre><code>In [10]: pd.read_csv(io.StringIO(s), dtype=dt, names=names, header=0) Out[10]: FTs serverTs 0 0 1 1 1 2 </code></pre> <p>Or better yet, don't pass the <code>names</code> at all, <code>pandas</code> will infer it for you anyway:</p> <pre><code>In [11]: 
pd.read_csv(io.StringIO(s), dtype=dt) Out[11]: FTs serverTs 0 0 1 1 1 2 </code></pre>
python|python-3.x|pandas|dataframe
1
377,791
50,746,096
How to match cv2.imread to the keras image.img_load output
<p>I'm studying deep learning. I trained an image classification algorithm. The problem, however, is that to train images I used:</p> <pre><code>test_image = image.load_img('some.png', target_size = (64, 64))
test_image = image.img_to_array(test_image)
</code></pre> <p>While for the actual application I use:</p> <pre><code>test_image = cv2.imread('trick.png')
test_image = cv2.resize(test_image, (64, 64))
</code></pre> <p>But I found that these give different ndarrays (different data):</p> <p>Last entries from load_img:</p> <pre><code> [ 64.  71.  66.]
 [ 64.  71.  66.]
 [ 62.  69.  67.]]]
</code></pre> <p>Last entries from cv2.imread:</p> <pre><code> [ 15  23  27]
 [ 16  24  28]
 [ 14  24  28]]]
</code></pre> <p>so the system is not working. Is there a way to match the results of one to the other?</p>
<p>OpenCV reads images in BGR format whereas in keras, it is represented in RGB. To get the OpenCV version to correspond to the order we expect (RGB), simply reverse the channels:</p> <pre><code>test_image = cv2.imread('trick.png') test_image = cv2.resize(test_image, (64, 64)) test_image = test_image[...,::-1] # Added </code></pre> <p>The last line reverses the channels to be in RGB order. You can then feed this into your keras model.</p> <p>Another point I'd like to add is that <code>cv2.imread</code> usually reads in images in <code>uint8</code> precision. Examining the output of your keras loaded image, you can see that the data is in floating point precision so you may also want to convert to a floating-point representation, such as <code>float32</code>:</p> <pre><code>import numpy as np # ... # ... test_image = test_image[...,::-1].astype(np.float32) </code></pre> <p>As a final point, depending on how you trained your model it's usually customary to normalize the image pixel values to a <code>[0,1]</code> range. If you did this with your keras model, make sure you divide your values by 255 in your image read in through OpenCV:</p> <pre><code>import numpy as np # ... # ... test_image = (test_image[...,::-1].astype(np.float32)) / 255.0 </code></pre>
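<p>Equivalently, OpenCV's own conversion routine does the same channel reordering, if you prefer an explicit call over slicing:</p> <pre><code>test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB)
</code></pre>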
python-3.x|numpy|opencv|image-processing|keras
22
377,792
50,983,646
Pandas drop before first valid index and after last valid index for each column of a dataframe
<p>I have a dataframe like this:</p> <pre><code>df = pd.DataFrame({'timestamp':pd.date_range('2018-01-01', '2018-01-02', freq='2h', closed='right'),'col1':[np.nan, np.nan, np.nan, 1,2,3,4,5,6,7,8,np.nan], 'col2':[np.nan, np.nan, 0, 1,2,3,4,5,np.nan,np.nan,np.nan,np.nan], 'col3':[np.nan, -1, 0, 1,2,3,4,5,6,7,8,9], 'col4':[-2, -1, 0, 1,2,3,4,np.nan,np.nan,np.nan,np.nan,np.nan] })[['timestamp', 'col1', 'col2', 'col3', 'col4']] </code></pre> <p>which looks like this:</p> <pre><code> timestamp col1 col2 col3 col4 0 2018-01-01 02:00:00 NaN NaN NaN -2.0 1 2018-01-01 04:00:00 NaN NaN -1.0 -1.0 2 2018-01-01 06:00:00 NaN 0.0 NaN 0.0 3 2018-01-01 08:00:00 1.0 1.0 1.0 1.0 4 2018-01-01 10:00:00 2.0 NaN 2.0 2.0 5 2018-01-01 12:00:00 3.0 3.0 NaN 3.0 6 2018-01-01 14:00:00 NaN 4.0 4.0 4.0 7 2018-01-01 16:00:00 5.0 NaN 5.0 NaN 8 2018-01-01 18:00:00 6.0 NaN 6.0 NaN 9 2018-01-01 20:00:00 7.0 NaN 7.0 NaN 10 2018-01-01 22:00:00 8.0 NaN 8.0 NaN 11 2018-01-02 00:00:00 NaN NaN 9.0 NaN </code></pre> <p>Now, I want to find an efficient and pythonic way of chopping off (for each column! Not counting timestamp) before the first valid index and after the last valid index. In this example I have 4 columns, but in reality I have a lot more, 600 or so. I am looking for a way of chop of all the NaN values before the first valid index and all the NaN values after the last valid index.</p> <p>One way would be to loop through I guess.. But is there a better way? This way has to be efficient. I tried to "unpivot" the dataframe using melt, but then this didn't help. </p> <p>An obvious point is that each column would have a different number of rows after the chopping. So I would like the result to be a list of data frames (one for each column) having timestamp and the column in question. For instance:</p> <pre><code> timestamp col1 3 2018-01-01 08:00:00 1.0 4 2018-01-01 10:00:00 2.0 5 2018-01-01 12:00:00 3.0 6 2018-01-01 14:00:00 NaN 7 2018-01-01 16:00:00 5.0 8 2018-01-01 18:00:00 6.0 9 2018-01-01 20:00:00 7.0 10 2018-01-01 22:00:00 8.0 </code></pre> <p><strong>My try</strong></p> <p>I tried like this:</p> <pre><code>final = [] columns = [c for c in df if c !='timestamp'] for col in columns: first = df.loc[:, col].first_valid_index() last = df.loc[:, col].last_valid_index() final.append(df.loc[:, ['timestamp', col]].iloc[first:last+1, :]) </code></pre>
<p>One idea is to use a list or dictionary comprehension after setting your index as <code>timestamp</code>. You should test with your data to see if this resolves your issue with performance. It is unlikely to help if your limitation is memory.</p> <pre><code>df = df.set_index('timestamp') final = {col: df[col].loc[df[col].first_valid_index(): df[col].last_valid_index()] \ for col in df} print(final) {'col1': timestamp 2018-01-01 08:00:00 1.0 2018-01-01 10:00:00 2.0 2018-01-01 12:00:00 3.0 2018-01-01 14:00:00 4.0 2018-01-01 16:00:00 5.0 2018-01-01 18:00:00 6.0 2018-01-01 20:00:00 7.0 2018-01-01 22:00:00 8.0 Name: col1, dtype: float64, ... 'col4': timestamp 2018-01-01 02:00:00 -2.0 2018-01-01 04:00:00 -1.0 2018-01-01 06:00:00 0.0 2018-01-01 08:00:00 1.0 2018-01-01 10:00:00 2.0 2018-01-01 12:00:00 3.0 2018-01-01 14:00:00 4.0 Name: col4, dtype: float64} </code></pre>
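<p>If you need actual DataFrames (with <code>timestamp</code> back as a column), as in your own attempt, you can convert the trimmed Series afterwards:</p> <pre><code>final_frames = [s.reset_index() for s in final.values()]
</code></pre>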
python|pandas
1
377,793
50,753,075
Not getting correct subtraction result for difference of two images in python
<p>This is my Python code, where I want to find the difference between two images.</p> <pre><code>import cv2
import numpy as np

img1=cv2.imread('/storage/emulated/0/a.jpg',0)
print(img1[0:1])
img2=img1
img2[0:1994]=1
print(img2[0:1])
rows,cols=img1[0:1].shape
print(rows)
print(cols)
rows,cols=img2[0:1].shape
print(rows)
print(cols)
print(np.subtract(img1[0:1,0:1], img2[0:1,0:1]))
</code></pre> <p>I am subtracting these numpy arrays but always getting zero. Kindly help regarding this matter.</p>
<p>The problem lies in the way you have copied the image.</p> <p>When you assign an object using the assignment operator (<code>=</code>), changes made through one name are reflected in the other as well, because both names refer to the same underlying array. So in your case, when you do <code>img2 = img1</code>, the changes made to <code>img2</code> are also reflected in <code>img1</code>. Hence, upon subtraction, you are always getting zero.</p> <p>A quick fix is to use the <code>copy()</code> method. This creates a new object <code>img2</code> altogether, so changes made to <code>img2</code> will not be reflected in <code>img1</code> and vice versa.</p> <pre><code>img2 = img1.copy()
</code></pre> <p>Now, with the rest of the script unchanged, printing <code>print(np.subtract(img1[0:1,0:1], img2[0:1,0:1]))</code> yields <code>[[233]]</code>.</p> <p>Have a look at <a href="https://realpython.com/copying-python-objects/" rel="nofollow noreferrer">THIS BLOG POST</a> also.</p>
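<p>A minimal demonstration of the aliasing:</p> <pre><code>import numpy as np

a = np.zeros(3, dtype=np.uint8)
b = a            # b is just another name for the same array
b[0] = 7
print(a[0])      # 7 -&gt; modifying b modified a

c = a.copy()     # independent copy
c[1] = 9
print(a[1])      # still 0
</code></pre>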
python|numpy|opencv
1
377,794
51,071,365
Convert Points to Lines Geopandas
<p>Hello, I am trying to convert a list of X and Y coordinates to lines. I want to map this data with a <code>groupby</code> on the IDs and also by time. My code executes successfully as long as I <code>groupby</code> one column, but with two columns I run into errors. I referenced this <a href="https://gis.stackexchange.com/questions/202190/turn-a-geodataframe-of-x-y-coordinates-into-linestrings-using-groupby">question</a>.</p> <p>Here's some sample data:</p> <pre><code>ID  X          Y          Hour
1   -87.78976  41.97658   16
1   -87.66991  41.92355   16
1   -87.59887  41.708447  17
2   -87.73956  41.876827  16
2   -87.68161  41.79886   16
2   -87.5999   41.7083    16
3   -87.59918  41.708485  17
3   -87.59857  41.708393  17
3   -87.64391  41.675133  17
</code></pre> <p>Here's my code:</p> <pre><code>df = pd.read_csv("snow_gps.csv", sep=';')

#zip the coordinates into a point object and convert to a GeoData Frame
geometry = [Point(xy) for xy in zip(df.X, df.Y)]
geo_df = GeoDataFrame(df, geometry=geometry)

# aggregate these points with the GroupBy
geo_df = geo_df.groupby(['track_seg_point_id', 'Hour'])['geometry'].apply(lambda x: LineString(x.tolist()))
geo_df = GeoDataFrame(geo_df, geometry='geometry')
</code></pre> <p>Here is the error:</p> <pre><code>ValueError: LineStrings must have at least 2 coordinate tuples
</code></pre> <p>This is the final result I am trying to get:</p> <pre><code>ID Hour  geometry
1  16    LINESTRING (-87.78976 41.97658, -87.66991 41.9...
1  17    LINESTRING (-87.78964000000001 41.976634999999...
1  18    LINESTRING (-87.78958 41.97663499999999, -87.6...
2  16    LINESTRING (-87.78958 41.976612, -87.669785 41...
2  17    LINESTRING (-87.78958 41.976624, -87.66978 41....
3  16    LINESTRING (-87.78958 41.97666, -87.6695199999...
3  17    LINESTRING (-87.78954 41.976665, -87.66927 41....
</code></pre> <p>Any suggestions or ideas on how to group by multiple parameters would be great.</p>
<p>Your code is good; the problem is your data.</p> <p>You can see that if you group by ID and Hour, then there is only 1 point grouped under an ID of 1 and an hour of 17. A LineString has to consist of at least 2 points (coordinate tuples). I added another point to your sample data:</p> <pre><code>ID  X          Y          Hour
1   -87.78976  41.97658   16
1   -87.66991  41.92355   16
1   -87.59887  41.708447  17
1   -87.48234  41.677342  17
2   -87.73956  41.876827  16
2   -87.68161  41.79886   16
2   -87.5999   41.7083    16
3   -87.59918  41.708485  17
3   -87.59857  41.708393  17
3   -87.64391  41.675133  17
</code></pre> <p>and as you can see, the code below is almost identical to yours:</p> <pre><code>import pandas as pd
import geopandas as gpd
from shapely.geometry import Point, LineString, shape

df = pd.read_csv(&quot;snow_gps.csv&quot;, sep='\s*,\s*')

#zip the coordinates into a point object and convert to a GeoData Frame
geometry = [Point(xy) for xy in zip(df.X, df.Y)]
geo_df = gpd.GeoDataFrame(df, geometry=geometry)

geo_df2 = geo_df.groupby(['ID', 'Hour'])['geometry'].apply(lambda x: LineString(x.tolist()))
geo_df2 = gpd.GeoDataFrame(geo_df2, geometry='geometry')
</code></pre>
python|pandas|geopandas
12
377,795
50,954,783
Converting RGB data into an array from a text file to create an Image
<p>I am trying to convert txt RGB data from <em>file.txt</em> into an array. And then, using that array, convert the RGB array into an image. (RGB data is found at this github repository: <a href="https://github.com/abood91/RPiMLX90640/blob/master/file.txt" rel="nofollow noreferrer">IR Sensor File.txt</a>).</p> <p>I am trying to convert the .txt file into an array which I could use the PIL/Image library and convert the array into an Image, and then put it through the following script to create my image.</p> <p>My roadblock right now is converting the arrays in file.txt into an appropriate format to work with the Image function.</p> <pre><code>from PIL import Image import numpy as np data = [ARRAY FROM THE file.txt] img = Image.fromarray(data, 'RGB') img.save('my.png') img.show() </code></pre> <p>The RGB data looks like as follows, and can also be found at the .txt file from that github repository linked above:</p> <pre><code>[[(0,255,20),(0,255,50),(0,255,10),(0,255,5),(0,255,10),(0,255,25),(0,255,40),(0,255,71),(0,255,137),(0,255,178),(0,255,147),(0,255,158),(0,255,142),(0,255,163),(0,255,112),(0,255,132),(0,255,137),(0,255,153),(0,255,101),(0,255,122),(0,255,122),(0,255,147),(0,255,66),(0,255,66),(0,255,30),(0,255,61),(0,255,0),(0,255,0),(0,255,40),(0,255,66),(15,255,0),(0,255,15)], [(0,255,40),(0,255,45),(15,255,0),(20,255,0),(10,255,0),(35,255,0),(0,255,5),(0,255,56),(0,255,173),(0,255,168),(0,255,153),(0,255,137),(0,255,158),(0,255,147),(0,255,127),(0,255,117),(0,255,142),(0,255,142),(0,255,122),(0,255,122),(0,255,137),(0,255,137),(0,255,101),(0,255,66),(0,255,71),(0,255,61),(0,255,25),(0,255,25),(0,255,61),(0,255,35),(0,255,0),(35,255,0)], [(0,255,15),(0,255,25),(51,255,0),(71,255,0),(132,255,0),(101,255,0),(35,255,0),(0,255,20),(0,255,91),(0,255,153),(0,255,132),(0,255,147),(0,255,132),(0,255,158),(0,255,122),(0,255,132),(0,255,142),(0,255,158),(0,255,122),(0,255,137),(0,255,142),(0,255,147),(0,255,101),(0,255,101),(0,255,86),(0,255,86),(0,255,50),(0,255,45),(0,255,50),(0,255,56),(0,255,30),(56,255,0)], [(0,255,45),(0,255,10),(76,255,0),(127,255,0),(132,255,0)]] </code></pre>
<p>I think this should work - no idea if it's decent Python:</p> <pre><code>#!/usr/local/bin/python3
from PIL import Image
import numpy as np
import re

# Read in entire file
with open('sensordata.txt') as f:
    s = f.read()

# Find anything that looks like numbers
l = re.findall(r'\d+', s)

# Convert to a uint8 numpy array and reshape (PIL expects 8-bit values for RGB)
data = np.array(l, dtype=np.uint8).reshape((24, 32, 3))

# Convert to image and save
img = Image.fromarray(data, 'RGB')
img.save('result.png')
</code></pre> <p><a href="https://i.stack.imgur.com/FoMuy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FoMuy.png" alt="enter image description here"></a></p> <p>I enlarged and contrast-stretched the image subsequently so you can see it!</p>
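<p>For reference, one possible way to do the enlarging in PIL as well (the scale factor is arbitrary; nearest-neighbour keeps the blocky sensor pixels visible):</p> <pre><code>big = img.resize((320, 240), Image.NEAREST)
big.save('result_big.png')
</code></pre>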
arrays|image|numpy
2
377,796
51,064,502
Pandas dataframe yield error
<p>I am trying to yield rows one by one from a pandas dataframe, but I get an error. The dataframe holds stock price data, including daily open, close, high, low prices and volume information.</p> <p>The following is my code. This class gets data from a MySQL database:</p> <pre><code>class HistoricMySQLDataHandler(DataHandler):

    def __init__(self, events, symbol_list):
        """
        Initialises the historic data handler by requesting
        a list of symbols.

        Parameters:
        events - The Event Queue.
        symbol_list - A list of symbol strings.
        """
        self.events = events
        self.symbol_list = symbol_list
        self.symbol_data = {}
        self.latest_symbol_data = {}
        self.continue_backtest = True
        self._connect_MySQL()

    def _connect_MySQL(self):
        # get stock price for symbol s
        db_host = 'localhost'
        db_user = 'sec_user'
        db_pass = 'XXX'
        db_name = 'securities_master'
        con = mdb.connect(db_host, db_user, db_pass, db_name)
        for s in self.symbol_list:
            sql = "SELECT * FROM daily_price WHERE symbol = '%s'" % s
            self.symbol_data[s] = pd.read_sql(sql, con=con, index_col='price_date')

    def _get_new_bar(self, symbol):
        """
        Returns the latest bar from the data feed as a tuple of
        (symbol, datetime, open, low, high, close, volume).
        """
        for row in self.symbol_data[symbol].itertuples():
            yield (symbol,
                   datetime.datetime.strptime(row[0], '%Y-%m-%d %H:%M:%S'),
                   row[15], row[17], row[16], row[18], row[20])

    def update_bars(self):
        """
        Pushes the latest bar to the latest_symbol_data structure
        for all symbols in the symbol list.
        """
        for s in self.symbol_list:
            try:
                bar = self._get_new_bar(s).__next__()
            except StopIteration:
                self.continue_backtest = False
</code></pre> <p>In the main function:</p> <pre><code># Declare the components with respective parameters
symbol_list = ["GOOG"]
events = queue.Queue()
bars = HistoricMySQLDataHandler(events, symbol_list)

while True:
    # Update the bars (specific backtest code, as opposed to live trading)
    if bars.continue_backtest == True:
        bars.update_bars()
    else:
        break
    time.sleep(1)
</code></pre> <p>Data example:</p> <pre><code>symbol_data["GOOG"] =
price_date  id exchange_id ticker instrument name                  ... high_price low_price close_price adj_close_price volume
2014-03-27  29 None        GOOG   stock      Alphabet Inc Class C  ... 568.0000   552.9200  558.46      558.46          13100
</code></pre> <p>The <code>update_bars</code> function calls <code>_get_new_bar</code> to move to the next row (the next day's price).</p> <p>My objective is to get the stock price day by day (iterating the rows of the dataframe), but <code>self.symbol_data[s]</code> in <code>_connect_MySQL</code> is a dataframe while in <code>_get_new_bar</code> it is a generator, hence I get this error:</p> <blockquote> <p>AttributeError: <code>'generator'</code> object has no attribute <code>'itertuples'</code></p> </blockquote> <p>Anyone have any ideas?</p> <p>I am using Python 3.6. Thanks.</p> <p><code>self.symbol_data</code> is a <code>dict</code>; <code>symbol</code> is a string key to get the dataframe. The data is stock price data. For example, <code>self.symbol_data["GOOG"]</code> returns a dataframe with Google's daily stock price information indexed by date, each row including open, low, high, close prices and volume. My goal is to iterate this price data day by day using <code>yield</code>.</p> <p><code>_connect_MySQL</code> gets the data from the database. In this example, s = "GOOG" in the function.</p>
<p>I found the bug. My code in another place changed the dataframe into a generator. A stupid mistake lol.</p> <p>I didn't post this line in the question, but it is what changed the datatype:</p> <pre><code># Reindex the dataframes
for s in self.symbol_list:
    self.symbol_data[s] = self.symbol_data[s].reindex(index=comb_index, method='pad').iterrows()
</code></pre>
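<p>A minimal sketch of the fix, assuming the reindex itself is still needed and that <code>comb_index</code> is a combined index built earlier: drop the trailing <code>.iterrows()</code> so the value stays a DataFrame and <code>itertuples()</code> keeps working in <code>_get_new_bar</code>.</p> <pre><code># Reindex the dataframes but keep them as DataFrames
# (no .iterrows() here, so itertuples() works later)
for s in self.symbol_list:
    self.symbol_data[s] = self.symbol_data[s].reindex(index=comb_index, method='pad')
</code></pre>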
python-3.x|pandas|dataframe|yield-keyword
0
377,797
50,943,043
What is the fastest way to get the result of matrix < matrix in numpy?
<p>Suppose I have a matrix <code>M_1</code> of dimension (M, A) and a matrix <code>M_2</code> of dimension (M, B). The result of <code>M_1 &lt; M_2</code> should be a matrix of dimension (M, B, A), whereby each row of <code>M_1</code> is compared with each element of the corresponding row of <code>M_2</code>, giving a boolean (or 0/1) vector for each comparison.</p> <p>For example, if I have a matrix of</p> <pre><code>M1 = [[1,2,3]
      [3,4,5]]

M2 = [[1,2],
      [3,4]]

result should be

[[[False, False, False],
  [True, False, False]],
 [[False, False, False],
  [True, False, False]]]
</code></pre> <p>Currently, I am using for loops, which is tremendously slow when I have to repeat this operation many times (taking months). Hopefully, there is a vectorized way to do this. If not, what else can I do?</p> <p>I am looking at <code>M_1</code> being (500, 3000000) and <code>M_2</code> being (500, 500), repeated about 10000 times.</p>
<p>For NumPy arrays, extend dims with <code>None/np.newaxis</code> such that the first axes are aligned while the second ones are <em>spread</em>, which lets them be compared in an elementwise fashion. Finally do the comparison, leveraging <a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>broadcasting</code></a> for a vectorized solution -</p> <pre><code>M1[:,None,:] &lt; M2[:,:,None]
</code></pre> <p>Sample run -</p> <pre><code>In [19]: M1
Out[19]:
array([[1, 2, 3],
       [3, 4, 5]])

In [20]: M2
Out[20]:
array([[1, 2],
       [3, 4]])

In [21]: M1[:,None,:] &lt; M2[:,:,None]
Out[21]:
array([[[False, False, False],
        [ True, False, False]],

       [[False, False, False],
        [ True, False, False]]])
</code></pre> <p>For lists as inputs, use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.expand_dims.html" rel="nofollow noreferrer"><code>numpy.expand_dims</code></a> and then compare -</p> <pre><code>In [42]: M1 = [[1,2,3],
    ...:       [3,4,5]]
    ...:
    ...: M2 = [[1,2],
    ...:       [3,4]]

In [43]: np.expand_dims(M1, axis=1) &lt; np.expand_dims(M2, axis=2)
Out[43]:
array([[[False, False, False],
        [ True, False, False]],

       [[False, False, False],
        [ True, False, False]]])
</code></pre> <p><strong>Further boost</strong></p> <p>A further boost comes from leveraging <a href="https://github.com/pydata/numexpr/blob/master/doc/user_guide.rst#enabling-intel-vml-support" rel="nofollow noreferrer"><code>multi-core</code> with the <code>numexpr</code> module</a> for large data -</p> <pre><code>In [44]: import numexpr as ne

In [52]: M1 = np.random.randint(0,9,(500, 30000))

In [53]: M2 = np.random.randint(0,9,(500, 500))

In [55]: %timeit M1[:,None,:] &lt; M2[:,:,None]
1 loop, best of 3: 3.32 s per loop

In [56]: %timeit ne.evaluate('M1e&lt;M2e',{'M1e':M1[:,None,:],'M2e':M2[:,:,None]})
1 loop, best of 3: 1.53 s per loop
</code></pre>
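<p>One caveat at the sizes quoted in the question: the full boolean result of shape (500, 500, 3000000) would occupy roughly 750 GB, so it cannot be held in memory at once. A hedged sketch of chunked evaluation follows; the shapes are scaled-down stand-ins, and the per-chunk <code>sum</code> is an assumed placeholder for whatever reduction the real computation needs.</p> <pre><code>import numpy as np

# Stand-ins for the real (500, 3000000) and (500, 500) arrays
M1 = np.random.randint(0, 9, (500, 30000))
M2 = np.random.randint(0, 9, (500, 500))

chunk = 10  # rows per step; at the full width even ~2 rows is ~3 GB of booleans
results = []
for start in range(0, M1.shape[0], chunk):
    stop = start + chunk
    out = M1[start:stop, None, :] &lt; M2[start:stop, :, None]
    # Reduce each chunk before the next one, instead of keeping the full cube
    results.append(out.sum(axis=2))
counts = np.concatenate(results)  # shape (500, 500)
</code></pre>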
python|performance|numpy|vectorization
3
377,798
20,805,083
Numpy arrays assignment operations indexed with arrays
<p>I have an array <code>y</code> with indexes of values that must be <em>incremented by one</em> in another array <code>x</code>, just like <code>x[y] += 1</code>. This is an example:</p> <pre><code>&gt;&gt;&gt; x = np.zeros(5,dtype=np.int)
&gt;&gt;&gt; y = np.array([1,4])
&gt;&gt;&gt; x
array([0, 0, 0, 0, 0])
&gt;&gt;&gt; x[y] += 1
&gt;&gt;&gt; x
array([0, 1, 0, 0, 1])
</code></pre> <p>So far so good, but then I have this problem:</p> <pre><code>&gt;&gt;&gt; x
array([0, 1, 0, 0, 1])
&gt;&gt;&gt; y = np.array([1,1])
&gt;&gt;&gt; x
array([0, 1, 0, 0, 1])
&gt;&gt;&gt; x[y] += 1
&gt;&gt;&gt; x
array([0, 2, 0, 0, 1])
</code></pre> <p>I was expecting <code>x</code> to be <code>array([0, 3, 0, 0, 1])</code>: <code>x[1]</code> should be incremented by one twice, but it was incremented just once.</p> <p>How can I do it? Why is this happening?</p>
<p>Do this:</p> <pre><code>&gt;&gt;&gt; x=np.array([0, 0, 0, 0, 0])
&gt;&gt;&gt; y=np.array([1,4])
&gt;&gt;&gt; x+=np.bincount(y, minlength=x.size)
&gt;&gt;&gt; x
array([0, 1, 0, 0, 1])
&gt;&gt;&gt; y=np.array([1,1])
&gt;&gt;&gt; x+=np.bincount(y, minlength=x.size)
&gt;&gt;&gt; x
array([0, 3, 0, 0, 1])
&gt;&gt;&gt; map(id, x[y])
[20481944, 20481944]
</code></pre> <p><code>x[[1,1]]</code> references the same element twice, so the buffered <code>+= 1</code> only acts on that element once.</p>
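<p>A hedged alternative sketch: recent NumPy versions (1.8+) also provide <code>np.add.at</code>, an unbuffered in-place operation that handles repeated indices directly.</p> <pre><code>import numpy as np

x = np.zeros(5, dtype=int)
y = np.array([1, 1])
np.add.at(x, y, 1)  # unbuffered, so repeated indices accumulate
print(x)            # [0 2 0 0 0]
</code></pre>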
python|arrays|numpy
3
377,799
20,584,266
Tips for speeding up my python code
<p>I have written a python program that needs to deal with quite large data sets for a machine learning task. I have a train set (about 6 million rows) and a test set (about 2 million rows). So far my program runs in a reasonable amount of time until I get to the last part of my code. The thing is I have my machine learning algorithm that makes predictions, and I save those predictions into a list. But before I write my predictions to a file I need to do one thing. There are duplicates in my train and test set. I need to find those duplicates in the train set and extract their corresponding labels. To achieve this I created a dictionary with my training examples as keys and my labels as values. Afterwards, I create a new list and iterate over my test set and train set. If an example in my test set can be found in my train set, I append the corresponding label to my new list; otherwise, I append my prediction.</p> <p><strong>The actual code I used to achieve the matter I described above:</strong></p> <pre><code>listed_predictions = list(predictions)

"""creating a dictionary"""
train_dict = dict(izip(train,labels))

result = []
for sample in xrange(len(listed_predictions)):
    if test[sample] in train_dict.keys():
        result.append(train_dict[test[sample]])
    else:
        result.append(predictions[sample])
</code></pre> <p>This loop takes roughly 2 million iterations. I thought about numpy arrays, since those should scale better than python lists, but I have no idea how I could achieve the same with numpy arrays. I also thought about other optimization solutions like Cython, but before I dive into that, I am hoping that there is low-hanging fruit that I, as an inexperienced programmer with no formal computing education, don't see.</p> <p><strong>Update</strong> I have implemented thefourtheye's solution, and it brought my runtime down to about 10 hours, which is fast enough for what I want to achieve. Everybody, thank you for your help and suggestions.</p>
<p>Two suggestions:</p> <ol> <li><p>To check if a key is in a dict, simply use <code>in</code> on the dict itself; this is an O(1) average-case lookup. (By contrast, <code>in train_dict.keys()</code> builds a list in Python 2 and scans it linearly on every iteration.)</p> <pre><code>if key in train_dict:
</code></pre></li> <li>Use comprehensions whenever possible.</li> </ol> <p>So, your code becomes like this:</p> <pre><code>result = [train_dict.get(test[sample], predictions[sample])
          for sample in xrange(len(listed_predictions))]
</code></pre>
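<p>A further sketch in the same spirit, assuming <code>test</code> and <code>predictions</code> are aligned and of equal length: pairing them directly avoids the index bookkeeping entirely (<code>izip</code> is Python 2; use the built-in <code>zip</code> on Python 3).</p> <pre><code>from itertools import izip

# One dict lookup per test sample, falling back to the model's prediction
result = [train_dict.get(t, p) for t, p in izip(test, predictions)]
</code></pre>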
python|python-2.7|optimization|numpy
4