Unnamed: 0   int64           0 to 378k
id           int64           49.9k to 73.8M
title        string lengths  15 to 150
question     string lengths  37 to 64.2k
answer       string lengths  37 to 44.1k
tags         string lengths  5 to 106
score        int64           -10 to 5.87k
374,000
50,965,178
Converting a list of time stamps into integers
<p>I have a list of Timestamp objects:</p> <pre><code>[Timestamp('2018-01-02 00:00:00'), Timestamp('2018-01-03 00:00:00'), Timestamp('2018-01-04 00:00:00'), Timestamp('2018-01-05 00:00:00'), Timestamp('2018-01-08 00:00:00'), Timestamp('2018-01-09 00:00:00'), Timestamp('2018-01-10 00:00:00'), Timestamp('2018-01-11 00:00:00'), Timestamp('2018-01-12 00:00:00'), Timestamp('2018-03-26 00:00:00'), Timestamp('2018-03-27 00:00:00')] </code></pre> <p>How do I convert them to integers, like:</p> <pre><code>[20180102, 20180103, 20180104, ..., 20180327] </code></pre>
<p>Pandas Timestamp has a method <em>strftime</em> (string from time) that converts a Timestamp object into a given string format. The format you want is '%Y%m%d'. To apply this to all elements in your list, you can use a list comprehension to build a new list with the formatted values.</p> <p>If you want integers instead of strings, you have to convert the formatted strings with <code>int</code>. Assuming your Timestamps are stored in time_list, you get your formatted list with</p> <pre><code>time_list_formatted = [int(x.strftime('%Y%m%d')) for x in time_list] </code></pre>
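<p>A vectorized alternative, as a minimal sketch (assuming the list is called time_list as above): build the integers from the year/month/day components of a DatetimeIndex instead of calling strftime once per element.</p> <pre><code>import pandas as pd

idx = pd.DatetimeIndex(time_list)
ints = (idx.year * 10000 + idx.month * 100 + idx.day).tolist()
# [20180102, 20180103, ..., 20180327]
</code></pre>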
python-3.x|pandas|datetime|timestamp|integer
0
374,001
50,676,296
scatter update tensor with index obtained using argmax
<p>I'm trying to update a tensor's max value with another value, like so:</p> <pre><code>actions = tf.argmax(output, axis=1) gen_targets = tf.scatter_nd_update(output, actions, q_value) </code></pre> <p>I'm getting an error: <code>AttributeError: 'Tensor' object has no attribute 'handle'</code> on <code>scatter_nd_update</code>.</p> <p>The <code>output</code> and <code>actions</code> are placeholders declared as:</p> <pre><code>output = tf.placeholder('float', shape=[None, num_action]) reward = tf.placeholder('float', shape=[None]) </code></pre> <p>What am I doing wrong and what would be the correct way to achieve this?</p>
<p>You are trying to update the value of <code>output</code>, which is of type <code>tf.placeholder</code>. Placeholders are immutable objects; you cannot update their values. The tensor you are trying to update must instead be a variable, e.g. <a href="https://www.tensorflow.org/api_docs/python/tf/Variable" rel="nofollow noreferrer">tf.Variable</a>, in order for <a href="https://www.tensorflow.org/api_docs/python/tf/scatter_nd_update" rel="nofollow noreferrer">tf.scatter_nd_update()</a> to be able to update its value. One way to solve this is to create a variable and then assign the value of the placeholder to the variable using <a href="https://www.tensorflow.org/api_docs/python/tf/assign" rel="nofollow noreferrer">tf.assign()</a>. Since one of the dimensions of the placeholder is <code>None</code> and may be of arbitrary size at runtime, you may want to set the <code>validate_shape</code> argument of <code>tf.assign()</code> to <code>False</code>; this way the shape of the placeholder does not need to match the shape of the variable. After the assignment, the shape of <code>var_output</code> will match the actual shape of the object that was fed via the placeholder.</p> <pre><code>output = tf.placeholder('float', shape=[None, num_action]) # dummy variable initialization var_output = tf.Variable(0, dtype=output.dtype) # assign value of placeholder to the var_output var_output = tf.assign(var_output, output, validate_shape=False) # ... gen_targets = tf.scatter_nd_update(var_output, actions, q_value) # ... sess.run(gen_targets, feed_dict={output: feed_your_placeholder_here}) </code></pre>
python|tensorflow|reinforcement-learning
2
374,002
50,970,156
join to dataframe with substrings and sorting
<p>I wish to append some data from one dataframe to another. The problem is that I need to build a key to be able to map the values between the two dataframes. So I built an example, where df1 has a column "RAW". This column contains a string that needs to be split, taking the first 3 characters from the left and 3 from the right, then sorted alphabetically. Meaning if "RAW" is "RTYdfhgvisdhQWE", the string that I want to use is QWERTY. It then needs to be mapped to the proper CODE in df2 using the CODE and the DATE.</p> <pre><code>import pandas as pd df1 = pd.DataFrame(columns=["RAW", "DATE", "VALUE"]) df1.at[0, 'RAW'] = 'QWE/RTY' df1.at[0, 'DATE'] = '2012-01-01' df1.at[0, 'VALUE'] = 'TEST0' df1.at[1, 'RAW'] = 'RTY/AZE' df1.at[1, 'DATE'] = '2015-06-11' df1.at[1, 'VALUE'] = 'TEST1' df2 = pd.DataFrame(columns=["CODE", "DATE", "RES"]) df2.at[0, 'CODE'] = 'QWERTY' df2.at[0, 'DATE'] = '2012-03-01' df2.at[0, 'RES'] = 1.1 df2.at[0, 'CODE'] = 'QWERTY' df2.at[0, 'DATE'] = '2012-01-01' df2.at[0, 'RES'] = 1.3 df2.at[1, 'CODE'] = 'AZERTY' df2.at[0, 'DATE'] = '2012-06-11' df2.at[1, 'RES'] = 1.4 def buildcodefromrow(mystring): return [ mystring[0:3] + mystring[4:3] if mystring[0:2] &lt; mystring[4:6] else mystring[4:6] + mystring[0:2]] df1['BUILTCODE'] = buildcodefromrow(df1['RAW']) df1 = pd.merge(df1, df2, left_on=['BUILTCODE', 'DATE'], right_on=['CODE', 'DATE']) </code></pre> <p>Any help appreciated!</p>
<p>Change your <code>buildcodefromrow</code> function to the following:</p> <pre><code>def buildcodefromrow(mystring): return mystring[0:3] + mystring[4:] if mystring[0:3] &lt; mystring[4:] else mystring[4:] + mystring[0:3] </code></pre> <p>and the <code>BUILTCODE</code> column in df1 can be achieved using:</p> <pre><code>df1['BUILTCODE'] = df1['RAW'].apply(buildcodefromrow) </code></pre> <p>The merged df1 should look like:</p> <pre><code> RAW DATE VALUE BUILTCODE CODE RES 0 QWE/RTY 2012-01-01 TEST0 QWERTY QWERTY 1.3 1 RTY/AZE 2015-06-11 TEST1 AZERTY AZERTY 1.4 </code></pre> <p>If this is not the output you expect, please edit your question with the expected output. Thanks!</p>
python|pandas|merge|substring
0
374,003
51,018,104
How to group phone number with and without country code
<p>I am trying to detect phone numbers. My country code is <code>+62</code>, but some phone manufacturers or operators use <code>0</code> and some use <code>+62</code>; after querying and pivoting I get the pivoted data below, which treats the two prefixes as separate numbers.</p> <p>Here's the pivoted data</p> <pre><code>Id +623684682 03684682 +623684684 03684684 1 1 0 1 1 2 1 1 2 1 </code></pre> <p>Here's what I need to group to, but I don't want to group manually (<code>+623684682</code> and <code>03684682</code> are the same, etc.)</p> <pre><code>Id 03684682 03684684 1 1 2 2 2 3 </code></pre>
<p>I think you need <code>replace</code> with aggregate <code>sum</code>:</p> <pre><code>df = df.groupby(lambda x: x.replace('+62','0'), axis=1).sum() </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>replace</code></a> the column names and <code>sum</code>:</p> <pre><code>df.columns = df.columns.str.replace('\+62','0') df = df.sum(level=0, axis=1) </code></pre> <hr> <pre><code>print (df) 03684682 03684684 Id 1 1 2 2 2 3 </code></pre>
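<p>A small runnable check of the first approach, reconstructing the pivoted frame from the sample data above:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'+623684682': [1, 1],
                   '03684682': [0, 1],
                   '+623684684': [1, 2],
                   '03684684': [1, 1]},
                  index=pd.Index([1, 2], name='Id'))

# the callable is applied to each column label, so +62... and 0... collapse together
print(df.groupby(lambda x: x.replace('+62', '0'), axis=1).sum())
#     03684682  03684684
# Id
# 1          1         2
# 2          2         3
</code></pre>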
python|pandas|dataframe
4
374,004
50,792,130
RNN classifier: how to show the weights to understand how the neural network makes the decision
<p>I am using an RNN to build a simple classifier that classifies a paragraph of words into different catalogs. It has an embedding layer, followed by RNN layers and then a Dense layer, as shown below.</p> <p>It can predict correctly, but besides the prediction, how can I know why the RNN arrived at this prediction? For example, what is the weight on each word of the paragraph?</p> <p>What words have made the RNN believe it belongs to a specific catalog?</p> <pre><code>model = Sequential() embedding_size = 300 model.add(Embedding(input_dim=num_words+1, output_dim=embedding_size, input_length=max_tokens, name='layer_embedding', weights=[embedding_matrix], trainable=True)) model.add(Bidirectional(GRU(32,return_sequences=True))) model.add(Bidirectional(GRU(32,return_sequences=True))) model.add(Bidirectional(GRU(32))) model.add(Dense(numdense, activation='sigmoid')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) </code></pre>
<p><a href="https://github.com/keras-team/keras/issues/3088" rel="nofollow noreferrer">This post on GitHub</a> proposes a way to see the parameters' name while printing them:</p> <pre><code>for e in zip(model.layers[0].trainable_weights, model.layers[0].get_weights()): print('Param %s:\n%s' % (e[0],e[1])) </code></pre>
python|tensorflow|recurrent-neural-network
1
374,005
50,905,236
Grouping and calculating data
<p>I'm almost a newbie with Pandas, so I'd like to know if a certain operation is possible before I start coding around it.</p> <p>I have a set of data of employees' working hours, like this (these are oversimplified; the real data is thousands and thousands of records)</p> <pre><code> ID Name Date Hour Type 0 123 Bob 01/01/2018 09:00 In 1 123 Bob 01/01/2018 09:30 Out 2 123 Bob 01/01/2018 10:00 In 3 123 Bob 01/01/2018 12:00 Out 4 123 Bob 01/01/2018 13:00 In 5 123 Bob 01/01/2018 17:00 Out 6 456 Max 01/01/2018 09:00 In 7 456 Max 01/01/2018 12:00 Out 8 456 Max 01/01/2018 13:00 In 9 456 Max 01/01/2018 17:00 Out 10 123 Bob 02/01/2018 09:00 In 11 123 Bob 02/01/2018 09:30 Out 12 123 Bob 02/01/2018 10:00 In 13 123 Bob 02/01/2018 17:00 Out 14 456 Max 02/01/2018 10:00 In 15 456 Max 02/01/2018 17:00 Out </code></pre> <p>I know how powerful Python and Pandas are at manipulating data; I'd like to know if there's a way to get this kind of output without going through iterative coding</p> <pre><code> ID Name Date HourWorked 0 123 Bob 01/01/2018 06:30 1 456 Max 01/01/2018 07:00 2 123 Bob 02/01/2018 07:30 3 456 Max 02/01/2018 07:00 </code></pre> <p>In the end, what I need is (for every employee ID) to calculate the hours/minutes worked for every single day.</p> <p>I have looked at a lot of GroupBy examples, but haven't found anything useful. </p> <p>TIA</p>
<p>Convert the hours to <code>datetime</code>, <code>groupby</code> the In/Out pairs and take the difference. Then sum the differences, grouping by <code>'ID'</code>, <code>'Name'</code> and <code>'Date'</code>, i.e. </p> <pre><code>df['Hour'] = pd.to_datetime(df['Hour']) df['diff'] = df.groupby((df['Type'] == 'In').cumsum())['Hour'].diff() df_new = df.groupby(['ID','Name','Date'])['diff'].sum().to_frame('Hours Worked') Hours Worked ID Name Date 123 Bob 01/01/2018 06:30:00 02/01/2018 07:30:00 456 Max 01/01/2018 07:00:00 02/01/2018 07:00:00 </code></pre>
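<p>A self-contained check of this approach on a trimmed version of the sample data (Bob's morning only):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ID': [123] * 4,
                   'Name': ['Bob'] * 4,
                   'Date': ['01/01/2018'] * 4,
                   'Hour': ['09:00', '09:30', '10:00', '12:00'],
                   'Type': ['In', 'Out', 'In', 'Out']})

# to_datetime parses the bare times as today's date plus the time,
# which is fine here because only the differences are used
df['Hour'] = pd.to_datetime(df['Hour'])
df['diff'] = df.groupby((df['Type'] == 'In').cumsum())['Hour'].diff()
print(df.groupby(['ID', 'Name', 'Date'])['diff'].sum())
# 123  Bob  01/01/2018    0 days 02:30:00
</code></pre>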
python|pandas|dataframe|pandas-groupby
4
374,006
50,812,838
Error in pip install torchvision on Windows 10
<p>On <a href="https://pytorch.org/" rel="noreferrer">pytorch</a>, installing on Windows 10 with conda and CUDA 9.0.</p> <p>cmd did not complain when I ran <code>conda install pytorch cuda90 -c pytorch</code>, but when I ran <code>pip3 install torchvision</code> I got this error message.</p> <pre><code>Requirement already satisfied: torchvision in PATHTOFILE\python35\lib\site-packages (0.2.1) Requirement already satisfied: numpy in PATHTOFILE\python35\lib\site-packages (from torchvision) (1.12.0+mkl) Requirement already satisfied: six in PATHTOFILE\python35\lib\site-packages (from torchvision) (1.10.0) Collecting pillow&gt;=4.1.1 (from torchvision) Using cached https://files.pythonhosted.org/packages/ab/d2/d27a21bd3e64db1ca1dc7dc16026a16d77f5c3ffca9ec619eddeea7c47ce/Pillow-5.1.0-cp35-cp35m-win_amd64.whl Collecting torch (from torchvision) Using cached https://files.pythonhosted.org/packages/5f/e9/bac4204fe9cb1a002ec6140b47f51affda1655379fe302a1caef421f9846/torch-0.1.2.post1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Users\USERNAME~1\AppData\Local\Temp\pip-install-a70g611u\torch\setup.py", line 11, in &lt;module&gt; raise RuntimeError(README) RuntimeError: PyTorch does not currently provide packages for PyPI (see status at https://github.com/pytorch/pytorch/issues/566). Please follow the instructions at http://pytorch.org/ to install with miniconda instead. ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in C:\Users\USERNAME~1\AppData\Local\Temp\pip-install-a70g611u\torch\ </code></pre> <p>Has anyone else got this error?</p>
<p>I tried to use:</p> <pre><code>pip install torchvision </code></pre> <p>but it didn't work for me. So, I googled this problem more carefully and found another solution:</p> <pre><code>pip install --no-deps torchvision </code></pre> <p>I hope it will be helpful.</p> <p>Update: I want to add that "--no-deps" means that no dependency packages will be downloaded.</p>
python|python-3.x|pytorch
9
374,007
50,782,385
Sort IP's as sets of numerical octets (__not__ lexicographically)
<p>I'm looking to sort entries in each row/cell of a specific column 'D' via IP address in ascending order. The entries are stored on a new line and have the associated protocol and port listed at the end of the IP, which I am not concerned about; I want to sort just the 4 octets of the IP address. I feel this needs some sort of regex with a lambda function of some kind. Sometimes there might be a host name instead of an IP address.</p> <p>Example dataframe would be:</p> <pre><code>ID A B C D 1 x x x 10.0.0.50/TCP/50 192.168.1.90/TCP/51 server1/TCP/80 10.0.0.9/TCP/78 2 y y y 192.168.3.90/UDP/53 10.0.4.10/TCP/65 10.0.3.4/TCP/34 host1/UDP/80 3 z z z 10.0.0.40/TCP/80 10.0.0.41/TCP/443 192.168.2.70/UDP/98 10.0.0.9/TCP/12 </code></pre> <p>The desired output would be:</p> <pre><code>ID A B C D 1 x x x 10.0.0.9/TCP/78 10.0.0.50/TCP/50 192.168.1.90/TCP/51 server1/TCP/80 2 y y y 10.0.3.4/TCP/34 10.0.4.10/TCP/65 192.168.3.90/UDP/53 host1/UDP/80 3 z z z 10.0.0.9/TCP/12 10.0.0.40/TCP/80 10.0.0.41/TCP/443 192.168.2.70/UDP/98 </code></pre> <p>In order to achieve the above dataframe I initially combine row D using a groupby, which works, however the IP addresses are not in order:</p> <pre><code>df = df.groupby(['ID','A','B','C'], sort=False, as_index=False)['D'].apply('\n'.join) </code></pre> <p>It might be more efficient to combine and sort at the same time if possible, rather than using 2 separate commands?</p> <p>Any thoughts much appreciated. I have looked at a couple of examples, however none seem to fit. Hopefully my explanation is clear enough; thanks in advance for any assistance. </p>
<p>assuming you have your original DF, <strong>before</strong> grouping:</p> <pre><code>In [70]: df Out[70]: ID A B C D 0 1.0 x x x 10.0.0.50/TCP/50 1 1.0 x x x 192.168.1.90/TCP/51 2 1.0 x x x server1/TCP/80 3 1.0 x x x 10.0.0.9/TCP/78 4 2.0 y y y 192.168.3.90/UDP/53 5 2.0 y y y 10.0.4.10/TCP/65 6 2.0 y y y 10.0.3.4/TCP/34 7 2.0 y y y host1/UDP/80 8 3.0 z z z 10.0.0.40/TCP/80 9 3.0 z z z 10.0.0.41/TCP/443 10 3.0 z z z 192.168.2.70/UDP/98 11 3.0 z z z 10.0.0.9/TCP/12 </code></pre> <p><strong>Option 1:</strong> multi-index DF:</p> <pre><code>In [69]: (df.assign(x=df.D.replace(['/.*',r'\b(\d{1})\b',r'\b(\d{2})\b'], ...: ['',r'00\1',r'0\1'], ...: regex=True)) ...: .sort_values('x') ...: .groupby(['ID','A','B','C'], sort=False, as_index=False)['D'] ...: .apply('\n'.join) ...: .to_frame('D')) ...: ...: Out[69]: D ID A B C 1.0 x x x 10.0.0.9/TCP/78\n10.0.0.50/TCP/50\n192.168.1.9... 3.0 z z z 10.0.0.9/TCP/12\n10.0.0.40/TCP/80\n10.0.0.41/T... 2.0 y y y 10.0.3.4/TCP/34\n10.0.4.10/TCP/65\n192.168.3.9... </code></pre> <p><strong>Option 2:</strong> regular DF:</p> <pre><code>In [75]: (df.assign(x=df.D.replace(['/.*',r'\b(\d{1})\b',r'\b(\d{2})\b'], ...: ['',r'00\1',r'0\1'], ...: regex=True)) ...: .sort_values('x') ...: .groupby(['ID','A','B','C'], sort=False, as_index=False)['D'] ...: .apply('\n'.join) ...: .reset_index(name='D')) ...: ...: Out[75]: ID A B C D 0 1.0 x x x 10.0.0.9/TCP/78\n10.0.0.50/TCP/50\n192.168.1.9... 1 3.0 z z z 10.0.0.9/TCP/12\n10.0.0.40/TCP/80\n10.0.0.41/T... 2 2.0 y y y 10.0.3.4/TCP/34\n10.0.4.10/TCP/65\n192.168.3.9... </code></pre> <hr> <p>the following might help to understand how does it work:</p> <p>add a virtual column <code>x</code> with zero padded IP octets:</p> <pre><code>In [71]: df.assign(x=df.D.replace(['/.*',r'\b(\d{1})\b',r'\b(\d{2})\b'], ...: ['',r'00\1',r'0\1'], ...: regex=True)) ...: ...: Out[71]: ID A B C D x 0 1.0 x x x 10.0.0.50/TCP/50 010.000.000.050 1 1.0 x x x 192.168.1.90/TCP/51 192.168.001.090 2 1.0 x x x server1/TCP/80 server1 3 1.0 x x x 10.0.0.9/TCP/78 010.000.000.009 4 2.0 y y y 192.168.3.90/UDP/53 192.168.003.090 5 2.0 y y y 10.0.4.10/TCP/65 010.000.004.010 6 2.0 y y y 10.0.3.4/TCP/34 010.000.003.004 7 2.0 y y y host1/UDP/80 host1 8 3.0 z z z 10.0.0.40/TCP/80 010.000.000.040 9 3.0 z z z 10.0.0.41/TCP/443 010.000.000.041 10 3.0 z z z 192.168.2.70/UDP/98 192.168.002.070 11 3.0 z z z 10.0.0.9/TCP/12 010.000.000.009 </code></pre> <p>sort DF by a virtual column <code>x</code>:</p> <pre><code>In [72]: (df.assign(x=df.D.replace(['/.*',r'\b(\d{1})\b',r'\b(\d{2})\b'], ...: ['',r'00\1',r'0\1'], ...: regex=True)) ...: .sort_values('x')) ...: ...: Out[72]: ID A B C D x 3 1.0 x x x 10.0.0.9/TCP/78 010.000.000.009 11 3.0 z z z 10.0.0.9/TCP/12 010.000.000.009 8 3.0 z z z 10.0.0.40/TCP/80 010.000.000.040 9 3.0 z z z 10.0.0.41/TCP/443 010.000.000.041 0 1.0 x x x 10.0.0.50/TCP/50 010.000.000.050 6 2.0 y y y 10.0.3.4/TCP/34 010.000.003.004 5 2.0 y y y 10.0.4.10/TCP/65 010.000.004.010 1 1.0 x x x 192.168.1.90/TCP/51 192.168.001.090 10 3.0 z z z 192.168.2.70/UDP/98 192.168.002.070 4 2.0 y y y 192.168.3.90/UDP/53 192.168.003.090 7 2.0 y y y host1/UDP/80 host1 2 1.0 x x x server1/TCP/80 server1 </code></pre>
python|pandas|csv|dataframe|pandas-groupby
1
374,008
50,847,374
Convert multiple columns to string in pandas dataframe
<p>I have a pandas data frame with different data types. I want to convert more than one column in the data frame to string type. I have individually done for each column but want to know if there is an efficient way?</p> <p>So at present I am doing something like this:</p> <pre><code>repair['SCENARIO']=repair['SCENARIO'].astype(str) repair['SERVICE_TYPE']= repair['SERVICE_TYPE'].astype(str) </code></pre> <p>I want a function that would help me pass multiple columns and convert them to strings.</p>
<p>To convert <a href="https://stackoverflow.com/q/15891038/6060083">multiple</a> columns to string, include a <em>list</em> of columns to your above-mentioned command:</p> <pre><code>df[['one', 'two', 'three']] = df[['one', 'two', 'three']].astype(str) # add as many column names as you like. </code></pre> <p>That means that <em>one</em> way to convert <a href="https://stackoverflow.com/q/19482970/6060083">all</a> columns is to construct the list of columns like this:</p> <pre><code>all_columns = list(df) # Creates list of all column headers df[all_columns] = df[all_columns].astype(str) </code></pre> <p>Note that the latter can also be done directly (see comments).</p>
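<p>As that note says, the whole frame can be converted in one call; <code>astype</code> also accepts a column-to-dtype mapping if you only want some columns (a small sketch):</p> <pre><code>df = df.astype(str)  # every column at once

# or only selected columns, via a dict mapping column name to dtype
df = df.astype({'one': str, 'three': str})
</code></pre>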
python|python-2.7|pandas
62
374,009
50,864,087
Accessing MxM numpy array that is first index of a tuple
<p>As stated in the title, I have a list of tuples that look like (numpy_array, id), where the numpy array is m x m. I need to access each element of the numpy array (i.e. all m^2 of them) but am having a tough time doing this without unpacking the tuple.</p> <p>I would rather not unpack the tuples because of how much data there is and how long that would take. </p> <p>If I unpack the tuple, the code would look like below. Is there a way to index this so that I don't need to unpack?</p> <pre><code> for x in range(length): for y in range(length): if(instance1[x][y]==instance2[x][y]): distance -=1 </code></pre>
<p>If you just want to access an element at a specific position of the n-dimensional numpy array directly, you can use <b>n-dimensional indexing</b>. <br>For example:<br> to access the element in the third column of the first row of a 3x3 array <em>c</em>, I do <em>c[0,2]</em>.</p> <pre><code>c = np.random.rand( 3,3 ) print(c) print( 'Element:', c[0,2]) </code></pre> <p>Check the official doc <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html" rel="nofollow noreferrer">Numpy Indexing</a></p> <p><strong>Update</strong><br> In the case of a list of tuples, you index into each data structure in turn<br></p> <pre><code>import numpy as np a =[ ( np.random.rand( 2,2 ), 0 ), #first tuple ( np.random.rand( 2,2 ), 2 ), #second tuple ( np.random.rand( 2,2 ), 3 ), # ... ( np.random.rand( 2,2 ), 1 ) ] print( np.shape(a) ) # accessing list a # (4,2) print( np.shape(a[0]) ) # accessing the first tuple in a # (2) print( np.shape(a[0][0]) ) # accessing the 2x2 array inside the first tuple # (2,2) print( np.shape(a[0][0][0,1]) ) # accessing the [0,1] element inside the array # () #another example c = ( np.array([ [1,2,3],[4,5,6],[7,8,9] ]), 8 ) print( c[0][0,2] ) # output: 3 </code></pre>
python|numpy|indexing|tuples
1
374,010
50,950,045
Create list from pandas dataframe for distinct values in a column
<p>From the following Pandas dataframe,</p> <pre><code>df = pd.DataFrame({'Id': [102,102,102,303,303,944,944,944,944],'A':[1.2,1.2,1.2,0.8,0.8,2.0,2.0,2.0,2.0],'B':[1.8,1.8,1.8,1.0,1.0,2.2,2.2,2.2,2.2], 'A_scored_time':[10,25,0,33,0,40,0,90,0],'B_scored_time':[0,0,30,0,41,0,75,0,95]}) </code></pre> <p>I was trying to create lists derived from the combinations of <code>['A_scored_time','B_scored_time']</code>, to obtain the following lists corresponding to each unique <code>Id</code>:</p> <pre><code>Id(102) = A_Time = [10,25], B_Time = [30] Id(303) = A_Time = [33], B_Time = [41] Id(944) = A_Time = [40,90], B_Time = [75,95] </code></pre> <p>These lists will be applied in the function below.</p> <pre><code>x1 = [1,0,0] x2 = [0,1,0] x3 = [0,0,1] k = 100 # constant total_timeslot = 100 # same as k A_Time = [] B_Time = [] </code></pre> <p>For i in range(number of distinct Ids), and df has 3 distinct Ids here, I build the probability array y for each i. </p> <pre><code>y = np.array([1-(A + B)/k, A/k, B/k]) def sum_squared_diff(x1, x2, x3, y): ssd = [] for k in range(total_timeslot): if k in A_Time: ssd.append(sum((x2 - y) ** 2)) elif k in B_Time: ssd.append(sum((x3 - y) ** 2)) else: ssd.append(sum((x1 - y) ** 2)) return ssd </code></pre> <p>The output will be an array of len k. Once I acquire this, I will sum all n (n = number of distinct Ids) arrays, which is what I'm after.<br> The outcome for <code>df</code> is:</p> <pre><code>Id(102) = sum(sum_squared_diff(x1, x2, x3, y)) =5.872800000000018 Id(303) = sum(sum_squared_diff(x1, x2, x3, y)) = 3.9407999999999896 Id(944) = sum(sum_squared_diff(x1, x2, x3, y)) =7.760800000000006 </code></pre> <p>giving <code>total sum = 17.574400000000015</code>. </p>
<p>To answer the question in your title use:</p> <pre><code>df.groupby('Id')[['A_scored_time','B_scored_time']]\ .agg(lambda x: x[x != 0].tolist())\ .reset_index() </code></pre> <p>Output:</p> <pre><code> Id A_scored_time B_scored_time 0 102 [10, 25] [30] 1 303 [33] [41] 2 944 [40, 90] [75, 95] </code></pre>
python|pandas|dataframe
4
374,011
51,087,335
Tensorflow Performing Feature Extraction (on the whole Dataset) is very time consuming
<p>I want to perform feature extraction on TensorFlow's standard MNIST dataset (before training my neural network), which is a simple tf.matmul(), but it takes about 3 hours to complete. Any tuning tricks or ideas to reduce the time? The code looks like below</p> <pre><code>def apply_feature_extraction(data, feature_mapper): weights, bias = feature_mapper return session.run(tf.add(tf.matmul(data, weights), bias)) batch_x, batch_y = mnist.train.next_batch(batch_size) transformed_features = apply_feature_extraction(batch_x, my_feature_mapper) </code></pre>
<p>You should not create any operations while executing the graph!</p> <p>Each time you call <code>apply_feature_extraction</code> you add a new operation <code>tf.add(tf.matmul(...))</code> to your graph. As a result your graph gets bloated.</p> <p>First, create a fully defined graph that contains all variables and operations you need, and then just execute ops within a <code>tf.Session</code> that are defined in the graph.</p> <p>In your case that might look like this:</p> <pre><code>def apply_feature_extraction(data, feature_mapper): weights, bias = feature_mapper return tf.add(tf.matmul(data, weights), bias) batch_x, batch_y = mnist.train.next_batch(batch_size) # define graph x = tf.placeholder(tf.float32, shape=None, name='input') transformed_features = apply_feature_extraction(x, my_feature_mapper) # execute graph with tf.Session() as sess: trans_feat_evaluated = sess.run(transformed_features, feed_dict={x: batch_x}) </code></pre>
python|tensorflow
1
374,012
50,723,226
Able to read one file but failing to read multiple JSON files from a folder into a pandas dataframe
<p>I have a folder that has many json files, say the folder is "myfolder" and the files are: data1.json, data2.json, data3.json... and so forth.</p> <p>There are 6 key names in total and these json files all have the same key names, say: col1, col2, col3, col4, col5, and col6 (i.e. the columns of the df when these are converted into a dataframe)</p> <p>I want to read all these files into one pandas (or any other) dataframe.</p> <p>What I am doing is:</p> <pre><code>os.chdir("D:/myfolder/") with open(json_files[0], encoding='utf-8') as data_file: data = json.loads(data_file.read()) df = json.loads(open('data1.json').read()) df = pd.io.json.json_normalize(df) df.columns = df.columns.map(lambda x: x.split(".")[-1]) </code></pre> <p>And I got a DF for one file, but I am not sure how I can read all the files in a loop and append to <code>df</code>? I tried a for loop but could not do it.</p> <p>Is there a way out?</p>
<p>You can make a list of dataframes via a <code>for</code> loop. Then use <code>pd.concat</code> to combine in a final step.</p> <p>It's not advisable to continually append to an existing dataframe, as <code>pd.DataFrame.append</code> is expensive relative to <code>list.append</code> and a single <code>pd.concat</code> call.</p> <pre><code>dfs = [] for file in json_files: df = json.loads(open(file).read()) df = pd.io.json.json_normalize(df) df.columns = df.columns.map(lambda x: x.split('.')[-1]) dfs.append(df) df_full = pd.concat(dfs, ignore_index=True) </code></pre>
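<p>If <code>json_files</code> is not built yet, one way to collect it, assuming all the data files sit directly in the folder:</p> <pre><code>import glob

json_files = sorted(glob.glob('D:/myfolder/*.json'))
</code></pre>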
python|json|pandas|dataframe
0
374,013
50,950,614
Converting column into multi index column
<p>I am new to python. As the result of a groupby I got a data frame which looks like this. </p> <pre><code>temp = pd.DataFrame({'Year' : [2018,2017], 'week' : [200, 100], 'mtd' : [100, 200], 'qtd' : [300, 345], 'ytd': [400, 500]}) temp.set_index('Year', inplace = True) print(temp) week mtd qtd ytd Year 2018 200 100 300 400 2017 100 200 345 500 </code></pre> <p>but I want to convert it into something like this. </p> <p><a href="https://i.stack.imgur.com/aR4Q1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aR4Q1.png" alt="enter image description here"></a></p> <p>I tried stack(), unstack() and multi-indexing but didn't succeed. Please help</p>
<p>Use, <code>stack</code>, <code>to_frame</code> and <code>T</code>:</p> <pre><code>temp.stack().to_frame().T </code></pre> <p>Output:</p> <pre><code>Year 2018 2017 week mtd qtd ytd week mtd qtd ytd 0 200 100 300 400 100 200 345 500 </code></pre>
python-3.x|pandas|dataframe
6
374,014
50,929,426
How can I multiply each column of a Tensor by all columns of an another using Tensorflow?
<p>Let <code>a</code> and <code>b</code> be tensors defined as:</p> <pre><code>a = tf.constant([[1, 4], [2, 5], [3, 6]], tf.float32) b = tf.constant([[10, 40], [20, 50], [30, 60]], tf.float32) </code></pre> <p>I am looking for a way to multiply each column of <code>a</code> by all columns of <code>b</code>, producing a result as below:</p> <pre><code>[[10, 40, 40, 160], [40, 100, 100, 250], [90, 180, 180, 360]] </code></pre> <p>I need an operation that can be performed over a tensor with an arbitrary number of columns (> 2).</p> <p>I already developed a solution that can be used within a loop. You can checkout it <a href="https://gist.github.com/lucasvenez/5adf645a30f97517c741c40da30c462c" rel="nofollow noreferrer">here</a>.</p> <p>Thank you for you attention.</p>
<p>Am I missing something? Why not just</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf a = tf.constant([[1, 4], [2, 5], [3, 6]], tf.float32) b = tf.constant([[10, 40], [20, 50], [30, 60]], tf.float32) h_b, w_a = a.shape.as_list()[:2] w_b = a.shape.as_list()[1] c = tf.einsum('ij,ik-&gt;ikj', a, b) c = tf.reshape(c,[h_b, w_a * w_b]) with tf.Session() as sess: print(sess.run(c)) </code></pre> <p>edit: add <code>foo.shape.as_list()</code></p>
tensorflow|linear-algebra|matrix-multiplication
3
374,015
50,779,261
Adding metadata to a Keras LSTM
<p>I looked at several answers, and was not able to see a clear solution to what I'm trying to do.</p> <p>I have an LSTM for binary text classification that takes the top 40k words in a corpus, then operates on the first 50 tokens. Prepared like this:</p> <pre><code>max_words = 40000 max_review_length = 50 embedding_vector_length = 100 batch_size = 128 epochs = 10 all_texts = combo.title.tolist() lstm_text_tokenizer = Tokenizer(nb_words=max_words) lstm_text_tokenizer.fit_on_texts(all_texts) x_train = lstm_text_tokenizer.texts_to_sequences(x_train.title.tolist()) x_test = lstm_text_tokenizer.texts_to_sequences(x_test.title.tolist()) x_test = sequence.pad_sequences(x_test, maxlen=50) x_train = sequence.pad_sequences(x_train, maxlen=50) </code></pre> <p>My current model looks like this:</p> <pre><code>def lstm_cnn_model(max_words, embedding_vector_length, max_review_length): model = Sequential() model.add(Embedding(max_words, embedding_vector_length, input_length=max_review_length)) model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu')) model.add(MaxPooling1D(pool_size=2)) model.add(LSTM(100)) model.add(Dense(num_classes)) model.add(Activation('softmax')) return model </code></pre> <p>I also have a 1-dimensional list of meta data for each example, with one value per example. I may have more complex meta data to add in the future.</p> <p>My question is, what is the best way to combine these two inputs in the training of the model?</p>
<p>It would be wise to now switch to the <a href="https://keras.io/getting-started/functional-api-guide/" rel="nofollow noreferrer">functional API</a> and create a multi-input network which will take the text as well the meta data:</p> <pre><code>text_in = Input(shape=(max_review_length,)) meta_in = Input(shape=(1,)) # so 1 meta feature per review # Embedding(...)(text_in) # You process text_in however you like text_features = LSTM(100)(embedded_text) merged = concatenate([text_features, meta_in]) # (samples, 101) text_class = Dense(num_classes, activation='softmax')(merged) model = Model([text_in, meta_in], text_class) return model </code></pre> <p>The idea is that the functional API gives you the option to create computation graphs that can use both inputs in a non-sequential way. You can extract features from text, features from meta data and merge them to see if it improves classification. You might want to look at <a href="https://visualstudiomagazine.com/articles/2014/01/01/how-to-standardize-data-for-neural-networks.aspx" rel="nofollow noreferrer">how to encode data</a> for using meta data.</p>
python|tensorflow|keras
4
374,016
50,938,182
Combining Excel worksheets over multiple loops
<p>I've got a number of Excel workbooks, each with multiple worksheets, that I'd like to combine. </p> <p>I've set up two sets of loops (one while, one for) to read in rows for each sheet in a given workbook and then do the same for all workbooks. </p> <p>I tried to do it on a subset of these, and it appears to work until I try to combine the two sets using the pd.concat function. Error given is</p> <blockquote> <p>TypeError: first argument must be an iterable of pandas objects, you passed an object of type "DataFrame"</p> </blockquote> <p>Any idea what I'm doing incorrectly?</p> <pre><code>import pandas as pd d = 2013 numberOfSheets = 5 while d &lt; 2015: #print(str(d) + ' beginning') f ='H:/MyDocuments/Z Project Work/scriptTest ' + str(d) + '.xlsx' for i in range(1,numberOfSheets+1): data = pd.read_excel(f, sheetname = 'Table '+str(i), header=None) print(i) df.append(data) print(str(d) + ' complete') print(df) d += 1 df = pd.concat(df) print(df) final = "H:/MyDocuments/Z Project Work/mergedfile.xlsx" df.to_excel(final) </code></pre>
<p>As the error says, <code>pd.concat()</code> requires an iterable, like a list: <code>pd.concat([df1, df2])</code> will concatenate <code>df1</code> and <code>df2</code> along the default axis of 0, which means <code>df2</code> is appended to the bottom of <code>df1</code>.</p> <p>Two issues need fixing:</p> <ol> <li>The <code>for</code> loop refers to <code>df</code> before assigning anything to it.</li> <li>The variable <code>df</code> is overwritten with each iteration of the <code>for</code> loop.</li> </ol> <p>One workaround is to create an empty list of DataFrames before the loops, then append DataFrames to that list, and finally concatenate all the DataFrames in that list. Something like this:</p> <pre><code>import pandas as pd d = 2013 numberOfSheets = 5 dfs = [] while d &lt; 2015: #print(str(d) + ' beginning') f ='H:/MyDocuments/Z Project Work/scriptTest ' + str(d) + '.xlsx' for i in range(1, numberOfSheets + 1): data = pd.read_excel(f, sheetname='Table ' + str(i), header=None) print(i) dfs.append(data) print(str(d) + ' complete') d += 1 # ignore_index=True gives the result a default IntegerIndex # starting from 0 df_final = pd.concat(dfs, ignore_index=True) print(df_final) final_path = "H:/MyDocuments/Z Project Work/mergedfile.xlsx" df_final.to_excel(final_path) </code></pre>
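<p>As a side note, passing <code>sheetname=None</code> makes <code>read_excel</code> return every sheet at once as an ordered dict of DataFrames, which removes the inner loop; a sketch, assuming each workbook contains only the five Table sheets:</p> <pre><code>dfs = []
for d in (2013, 2014):
    f = 'H:/MyDocuments/Z Project Work/scriptTest ' + str(d) + '.xlsx'
    sheets = pd.read_excel(f, sheetname=None, header=None)  # {sheet name: DataFrame}
    dfs.extend(sheets.values())

df_final = pd.concat(dfs, ignore_index=True)
</code></pre>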
python|pandas
2
374,017
51,039,857
Pandas count values greater than current row in the last n rows
<p>How to get count of values greater than current row in the last n rows?</p> <p>Imagine we have a dataframe as following:</p> <pre><code> col_a 0 8.4 1 11.3 2 7.2 3 6.5 4 4.5 5 8.9 </code></pre> <p>I am trying to get a table such as following where n=3.</p> <pre><code> col_a col_b 0 8.4 0 1 11.3 0 2 7.2 2 3 6.5 3 4 4.5 3 5 8.9 0 </code></pre> <p>Thanks in advance.</p>
<p>In pandas is best dont loop because slow, here is better use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="noreferrer"><code>rolling</code></a> with custom function:</p> <pre><code>n = 3 df['new'] = (df['col_a'].rolling(n+1, min_periods=1) .apply(lambda x: (x[-1] &lt; x[:-1]).sum()) .astype(int)) print (df) col_a new 0 8.4 0 1 11.3 0 2 7.2 2 3 6.5 3 4 4.5 3 5 8.9 0 </code></pre> <p>If performance is important, use <a href="https://stackoverflow.com/a/48884154/2901002">strides</a>:</p> <pre><code>n = 3 x = np.concatenate([[np.nan] * (n), df['col_a'].values]) def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) arr = rolling_window(x, n + 1) df['new'] = (arr[:, :-1] &gt; arr[:, [-1]]).sum(axis=1) print (df) col_a new 0 8.4 0 1 11.3 0 2 7.2 2 3 6.5 3 4 4.5 3 5 8.9 0 </code></pre> <p><strong>Performance</strong>: Here is used <a href="https://github.com/nschloe/perfplot" rel="noreferrer"><code>perfplot</code></a> in small window <code>n = 3</code>:</p> <p><a href="https://i.stack.imgur.com/BpkDU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BpkDU.png" alt="g1"></a></p> <pre><code>np.random.seed(1256) n = 3 def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) def roll(df): df['new'] = (df['col_a'].rolling(n+1, min_periods=1).apply(lambda x: (x[-1] &lt; x[:-1]).sum(), raw=True).astype(int)) return df def list_comp(df): df['count'] = [(j &lt; df['col_a'].iloc[max(0, i-3):i]).sum() for i, j in df['col_a'].items()] return df def strides(df): x = np.concatenate([[np.nan] * (n), df['col_a'].values]) arr = rolling_window(x, n + 1) df['new1'] = (arr[:, :-1] &gt; arr[:, [-1]]).sum(axis=1) return df def make_df(n): df = pd.DataFrame(np.random.randint(20, size=n), columns=['col_a']) return df perfplot.show( setup=make_df, kernels=[list_comp, roll, strides], n_range=[2**k for k in range(2, 15)], logx=True, logy=True, xlabel='len(df)') </code></pre> <p>Also I was curious about performance in large window, <code>n = 100</code>:</p> <p><a href="https://i.stack.imgur.com/qjp1j.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qjp1j.png" alt="g2"></a></p>
python|pandas|dataframe
7
374,018
50,681,501
Tensorflow Graph, Session management for cross-validation
<p>What is the proper way of managing (initialize, close, reset) sessions and graphs when implementing cross-validation in Tensorflow?</p> <p>Should I reset the session and graph for each fold, or is it better/possible to keep a single session throughout the entire process? One advantage of doing the latter is that resources are bound throughout the entire process, e.g. preventing that some unrelated process running on the same machine claims the GPU while I am processing/saving results in-between folds.</p> <p>Would it be enough to keep the same graph/session and simply re-initialize the parameters by:</p> <pre><code>with tf.Graph().as_default(): with tf.Session() as sess: # define model here # for fold in folds: init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()) sess.run(init) # train model here # </code></pre> <p>Also, I guess that ideally summaries (and possibly checkpoints?) should be kept separately for each fold.</p>
<p>I prefer to create a session and use it for as long as I require it; this is much better in terms of speed and performance, and when we later close the session we can claim the resources back. Moreover, if we set the allow_growth config option to True, other programs will also be able to use the GPU's resources in the meantime.</p>
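<p>A minimal sketch of that setup, combining the single long-lived session with the per-fold re-initialization from the question (allow_growth keeps TensorFlow from grabbing all GPU memory up front):</p> <pre><code>config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))

with tf.Graph().as_default():
    # define model here #
    init = tf.group(tf.global_variables_initializer(),
                    tf.local_variables_initializer())
    with tf.Session(config=config) as sess:
        for fold in folds:
            sess.run(init)  # reset all parameters for this fold
            # train model here #
</code></pre>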
session|tensorflow|graph|cross-validation
0
374,019
51,040,682
Split integer into digits using numpy
<p>I have a question. It has been asked before, but as far as I can see never using numpy. I want to split a value into its separate digits, do something, and turn it back into a number. Based on the questions referenced below I can do what I want, but I would prefer to do it all in numpy; I expect it is more efficient because I'm not converting back and forth between lists and numpy arrays. See example:</p> <p>Example:</p> <pre><code>import numpy as np l = np.array([43365644]) # is input array n = int(43365644) m = [int(d) for d in str(n)] o = list(np.sort(np.asarray(m))) p = np.asarray(''.join(map(str,o))) </code></pre> <p>I have tried several times but without much luck. At one moment I used the split function and it worked (in the terminal), but after adding it to a script it failed again and I was unable to reproduce what I did before.</p> <p><code>q = np.sort(np.split(l,1),axis=1)</code> gives no error, but it is still a single value.</p> <p><code>q = np.sort(np.split(l,8),axis=1)</code> gives the following error:</p> <pre><code>Traceback (most recent call last): File "python", line 1, in &lt;module&gt; ValueError: array split does not result in an equal division </code></pre> <p>Is there some way that this is possible in numpy? Thanks in advance.</p> <p>Referenced questions:<br> <a href="https://stackoverflow.com/questions/21270320/turn-a-single-number-into-single-digits-python#21270338">Turn a single number into single digits Python</a><br> <a href="https://stackoverflow.com/questions/489999/convert-list-of-ints-to-one-number#490020">Convert list of ints to one number?</a></p>
<p>Pretty simple:</p> <ol> <li>Divide your number by 1, 10, 100, 1000, ... rounding down</li> <li>Modulo the result by 10</li> </ol> <p>which yields</p> <pre><code>l // 10 ** np.arange(10)[:, None] % 10 </code></pre> <p>Or if you want a solution that works for</p> <ul> <li>any base</li> <li>any number of digits and</li> <li>any number of dimensions</li> </ul> <p>you can do</p> <pre><code>l = np.random.randint(0, 1000000, size=(3, 3, 3, 3)) l.shape # (3, 3, 3, 3) b = 10 # Base, in our case 10, for 1, 10, 100, 1000, ... n = np.ceil(np.max(np.log(l) / np.log(b))).astype(int) # Number of digits d = np.arange(n) # Divisor base b, b ** 2, b ** 3, ... d.shape = d.shape + (1,) * (l.ndim) # Add dimensions to divisor for broadcasting out = l // b ** d % b out.shape # (6, 3, 3, 3, 3) </code></pre>
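<p>To close the loop on the original goal (sort the digits, then turn them back into one number), a minimal sketch building on the divisor trick above, assuming a known fixed digit width:</p> <pre><code>import numpy as np

l = np.array([43365644])
n = 8  # assumed digit width

digits = l // 10 ** np.arange(n)[:, None] % 10   # least-significant digit first
digits = np.sort(digits, axis=0)[::-1]           # descending here reads as ascending left to right
result = (digits * 10 ** np.arange(n)[:, None]).sum(axis=0)
print(result)  # [33444566]
</code></pre>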
python|python-3.x|numpy
2
374,020
51,053,155
matplotlib pandas: plotting multiple lines from the same y column
<p>Trying to plot using matplotlib, but with separate lines based on the value of a column that is neither x nor y.</p> <p>For example this is my DF:</p> <pre><code>code reqs value AGB 253319 57010.16528 ABC 242292 35660.58176 DCC 240440 36587.45336 CHB 172441 57825.83052 DEF 148357 34129.71166 </code></pre> <p>Which yields this plot with <code>df.plot(x='reqs',y='value',figsize=(8,4))</code>:</p> <p><a href="https://i.stack.imgur.com/5CqhT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5CqhT.png" alt="plot without codes"></a></p> <p>What I'm looking to do is have a plot with multiple lines, one line for each of the codes. Right now it's just drawing 1 line and ignoring the code column.</p> <p>I tried searching for an answer, but each one is asking about multiple y's. I don't have multiple y's; I have the same y but with different groupings (surely I'm using the wrong terms to describe what I'm trying to do; hopefully this example and image make sense).</p> <p>The result should look something like this: <a href="https://i.stack.imgur.com/iIrJq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iIrJq.png" alt="enter image description here"></a></p>
<p>So I worked out how to do exactly this, in case anyone is curious:</p> <pre><code>plt_df = df fig, ax = plt.subplots() for key,grp in plt_df.groupby(['code']): ax = grp.plot(ax=ax, kind ='line',x='reqs',y='value',label=key,figsize=(20,4),title = "someTitle") plt.show() </code></pre>
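<p>If seaborn (0.9 or later) is available, the same grouped-line plot is a one-liner; the <code>hue</code> argument does the per-code grouping:</p> <pre><code>import seaborn as sns
import matplotlib.pyplot as plt

sns.lineplot(data=plt_df, x='reqs', y='value', hue='code')
plt.show()
</code></pre>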
pandas|matplotlib|plot|jupyter
0
374,021
50,731,887
Pandas GroupBy two columns, calculate the total based on one column but calculate the percentage based on the total for the agregator
<p>I have derived my desired groupings but would like to calculate a percentage column based on the totals per month i.e. regardless of the string in originating_system_id</p> <pre><code>d = [('Total_RFQ_For_Month', 'size')] df_RFQ_Channel = df.groupby(['Year_Month','originating_system_id'])['state'].agg(d) #df_RFQ_Channel['RFQ_Pcent_For_Month'] = ? display(df_RFQ_Channel) Year_Month originating_system_id Total_RFQ_For_Month RFQ_Pcent_For_Month 2017-11 BBT 59 7.90% EUCR 33 4.42% MAXL 6 0.80% MXUS 649 86.88% 2017-12 BBT 36 73.47% EUCR 7 14.29% MAXL 6 12.24% 2018-01 BBT 88 9.52% EUCR 26 2.81% MAXL 4 0.43% MXUS 800 86.58% VOIX 6 0.65% </code></pre> <p>Example:</p> <pre><code>7.90% is BBT's Total_RFQ_For_Month (59) divided by the sum of all for 2017-11 (747) 2.81% is EUCR's Total_RFQ_For_Month (26) divided by the sum of all for 2018-01 (924). </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="noreferrer"><code>transform</code></a> for <code>Series</code> with same size as original <code>DataFrame</code>, so is possible divide by <code>Total_RFQ_For_Month</code> column:</p> <pre><code>#create columns from MultiIndex df = df.reset_index() s = df.groupby('Year_Month')['Total_RFQ_For_Month'].transform('sum') df['RFQ_Pcent_For_Month'] = df['Total_RFQ_For_Month'].div(s).mul(100).round(2) print (df) Year_Month originating_system_id Total_RFQ_For_Month RFQ_Pcent_For_Month 0 2017-11 BBT 59 7.90 1 2017-11 EUCR 33 4.42 2 2017-11 MAXL 6 0.80 3 2017-11 MXUS 649 86.88 4 2017-12 BBT 36 73.47 5 2017-12 EUCR 7 14.29 6 2017-12 MAXL 6 12.24 7 2018-01 BBT 88 9.52 8 2018-01 EUCR 26 2.81 9 2018-01 MAXL 4 0.43 10 2018-01 MXUS 800 86.58 11 2018-01 VOIX 6 0.65 </code></pre> <p>For percentage:</p> <pre><code>df['RFQ_Pcent_For_Month'] = (df['Total_RFQ_For_Month'].div(s) .mul(100) .round(2) .astype(str) .add('%')) print (df) Year_Month originating_system_id Total_RFQ_For_Month RFQ_Pcent_For_Month 0 2017-11 BBT 59 7.9% 1 2017-11 EUCR 33 4.42% 2 2017-11 MAXL 6 0.8% 3 2017-11 MXUS 649 86.88% 4 2017-12 BBT 36 73.47% 5 2017-12 EUCR 7 14.29% 6 2017-12 MAXL 6 12.24% 7 2018-01 BBT 88 9.52% 8 2018-01 EUCR 26 2.81% 9 2018-01 MAXL 4 0.43% 10 2018-01 MXUS 800 86.58% 11 2018-01 VOIX 6 0.65% </code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (s) 0 747 1 747 2 747 3 747 4 49 5 49 6 49 7 924 8 924 9 924 10 924 11 924 Name: Total_RFQ_For_Month, dtype: int64 </code></pre>
python|pandas|dataframe|pandas-groupby
7
374,022
51,012,775
Slicing based on a range of column in a multiindex column dataframe
<p>I am creating my dataframe by doing the following:</p> <pre><code>months = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ] monthyAmounts = [ "actual", "budgeted", "difference" ] income = [] names = [] for x in range( incomeIndex + 1, expensesIndex ): amounts = [ randint( -1000, 15000 ) for x in range( 0, len( months ) * len( monthyAmounts ) ) ] income.append( amounts ) names.append( f"name_{x}" ) index = pd.Index( names, name = 'category' ) columns = pd.MultiIndex.from_product( [ months, monthyAmounts ], names = [ 'month', 'type' ] ) incomeDF = pd.DataFrame( income, index = index, columns = columns ) </code></pre> <p>The dataframe looks like: (removed months March - December)</p> <pre><code> Jan Feb ... actual budgeted difference actual budgeted difference name_13 14593 -260 10165 9767 629 10054 name_14 6178 1398 13620 1821 10986 -663 name_15 2432 3279 7545 8196 1052 7386 name_16 9964 13098 10342 5564 4631 7422 </code></pre> <p>What I want is, for every row, to slice the difference column for the months Jan - May. What I can do is slice the difference column for all of the months by doing:</p> <pre><code>incomeDifferenceDF = incomeDF.loc[ :, idx[ :, 'difference' ] ] </code></pre> <p>which gives me a dataframe that looks like: (months March - December removed)</p> <pre><code> Jan Feb .... difference difference name_13 10165 10054 name_14 13620 -663 name_15 7545 7386 name_16 10342 7422 </code></pre> <p>What I have tried is:</p> <pre><code>incomeDifferenceDF = incomeDF.loc[ :, idx[ 'Jan' : 'May', 'difference' ] ] </code></pre> <p>but that gives me the error:</p> <pre><code>UnsortedIndexError: 'MultiIndex slicing requires the index to be lexsorted: slicing on levels [0], lexsort depth 0' </code></pre> <p>So, this seems close, but I am uncertain how to resolve the problem.</p> <p>I have also tried:</p> <pre><code>incomeDifferenceDF = incomeDF.loc[ :, idx[ ['Jan':'May'], 'difference' ] ] </code></pre> <p>But that just generates the error:</p> <pre><code>SyntaxError: invalid syntax ( Points at ['Jan':'May'] ) </code></pre> <p>What is the best way to do this?</p>
<p>If need select by <code>MultiIndex</code>, need boolean masks:</p> <pre><code>index = pd.Index( [1,2,3,4], name = 'category' ) budgetMonths = pd.date_range( "January, 2018", periods = 12, freq = 'BM' ) months = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ] monthyAmounts = [ "actual", "budgeted", "difference" ] columns = pd.MultiIndex.from_product( [ months, monthyAmounts ], names = [ 'month', 'type' ]) incomeDF = pd.DataFrame( 10, index = index, columns = columns ) #trick for get values between idx = pd.Series(0,index=months).loc['Jan' : 'May'].index print (idx) Index(['Jan', 'Feb', 'Mar', 'Apr', 'May'], dtype='object') mask1 = incomeDF.columns.get_level_values(0).isin(idx) mask2 = incomeDF.columns.get_level_values(1) == 'difference' incomeDifferenceDF = incomeDF.loc[:, mask1 &amp; mask2] print (incomeDifferenceDF) month Jan Feb Mar Apr May type difference difference difference difference difference category 1 10 10 10 10 10 2 10 10 10 10 10 3 10 10 10 10 10 4 10 10 10 10 10 </code></pre>
python|pandas|dataframe|slice|multi-index
1
374,023
51,084,768
ValueError: Input 0 of node incompatible with expected float_ref
<p>I'm getting the exception below while trying to import my optimized frozen graph.</p> <pre><code># read pb into graph_def with tf.gfile.GFile(pb_file, "rb") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) # import graph_def with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def) </code></pre> <p>Getting the exception in this line:</p> <pre><code>tf.import_graph_def(graph_def) </code></pre> <pre><code>Traceback (most recent call last):
  File "/home/automator/PycharmProjects/tensorflow/venv/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 489, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node import/final_retrain_ops/Wx_plus_b/weights_quant/AssignMinLast was passed float from import/final_retrain_ops/Wx_plus_b/weights_quant/min:0 incompatible with expected float_ref.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/snap/pycharm-community/64/helpers/pydev/pydevd.py", line 1664, in &lt;module&gt;
    main()
  File "/snap/pycharm-community/64/helpers/pydev/pydevd.py", line 1658, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/snap/pycharm-community/64/helpers/pydev/pydevd.py", line 1068, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/snap/pycharm-community/64/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/automator/PycharmProjects/tensorflow/tfliteme.py", line 389, in &lt;module&gt;
    printTensors("/home/automator/Desktop/cervix/optimized_model.pb")
  File "/home/automator/PycharmProjects/tensorflow/tfliteme.py", line 374, in printTensors
    tf.import_graph_def(graph_def)
  File "/home/automator/PycharmProjects/tensorflow/venv/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/home/automator/PycharmProjects/tensorflow/venv/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 493, in import_graph_def
    raise ValueError(str(e))
ValueError: Input 0 of node import/final_retrain_ops/Wx_plus_b/weights_quant/AssignMinLast was passed float from import/final_retrain_ops/Wx_plus_b/weights_quant/min:0 incompatible with expected float_ref.
</code></pre>
<p>Make sure your <code>pb_file</code> is in the right format (something like <a href="https://github.com/timctho/convolutional-pose-machines-tensorflow/blob/master/run_freeze_graph.py" rel="nofollow noreferrer">this</a>) and also try to have some value in the 'name' parameter of <code>import_graph_def()</code> to try and override the "import" default value, like so:</p> <pre><code># read pb into graph_def with tf.gfile.GFile(pb_file, "rb") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) # import graph_def with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def, name='') </code></pre>
python|python-3.x|tensorflow|pycharm|tensorflow-lite
2
374,024
50,839,884
Filtering pandas dataframe by introspecting each element of a row
<p>I have a dataframe that contains an object in a column. </p> <p>For example:</p> <pre><code>df['id_original'].iloc[0].Class Out[20]: u'Classtype1' df['id_original'].iloc[1].Class Out[20]: u'Classtype2' </code></pre> <p>How can I filter the dataframe so that I only get the rows where the column 'id_original' contains an object with the property Class equal to 'Classtype1'? Or, even better, in combination with <code>.isin(allowed_class_type_list)</code>?</p> <p>Is there any way to achieve this with .isin, or will I have to iterate over all the rows with iterrows? An elegant one-line solution is preferred.</p>
<p>You can use:</p> <pre><code>df.loc[df['id_original'].apply(lambda x: x.Class in allowed_class_type_list)] </code></pre> <p>Consider below minified example:</p> <pre><code>class Example: def __init__(self, class_): self.Class = class_ ex1 = Example('class1') ex2 = Example('class2') ex3 = Example('class3') ex4 = Example('class4') df = pd.DataFrame({ 'id_original':[ex1, ex2, ex2, ex1, ex4, ex3, ex3, ex4] }) allowed_class_type_list = ['class1', 'class4'] </code></pre> <p>You can filter using:</p> <pre><code>df.loc[df['id_original'].apply(lambda x: x.Class in allowed_class_type_list)] </code></pre> <p>Output:</p> <pre><code> id_original 0 &lt;__main__.Example object at 0x000000000A597390&gt; 3 &lt;__main__.Example object at 0x000000000A597390&gt; 4 &lt;__main__.Example object at 0x000000000A597B00&gt; 7 &lt;__main__.Example object at 0x000000000A597B00&gt; </code></pre>
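<p>Closer to the <code>.isin</code> spirit of the question, you can first materialize the attribute as its own Series and then filter; a small sketch reusing the example above:</p> <pre><code>classes = df['id_original'].map(lambda x: x.Class)
filtered = df[classes.isin(allowed_class_type_list)]
</code></pre>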
python|pandas|filtering|series
3
374,025
50,978,117
How to plot loss curve in Tensorflow without using Tensorboard?
<p>Hey, I am new to Tensorflow. I used a DNN to train the model and I would like to plot the loss curve. However, I do not want to use Tensorboard since I am really not familiar with it. I wonder whether it is possible to extract the loss at each step and plot it using another plotting package or scikit-learn?</p> <p>Really appreciated!</p>
<p>Change your <code>sess.run(training_function, feed_dict)</code> statement so it includes your loss function as well. Then use something like Matplotlib to plot the data.</p> <pre><code>_, loss = sess.run((training_function, loss_function), feed_dict) loss_list.append(loss) import matplotlib.pyplot as plt plt.plot(loss_list) </code></pre>
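<p>Put together as a minimal sketch (training_function, loss_function, feed_dict and num_steps are placeholders for whatever your graph and loop actually define):</p> <pre><code>loss_list = []
for step in range(num_steps):
    _, loss = sess.run((training_function, loss_function), feed_dict)
    loss_list.append(loss)

import matplotlib.pyplot as plt
plt.plot(loss_list)
plt.xlabel('step')
plt.ylabel('loss')
plt.show()
</code></pre>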
python-3.x|tensorflow|machine-learning
4
374,026
50,718,146
Creating a dataframe from values extracted from a json column in Pandas
<p>I loaded a .csv file into a df, and one of the rows of a column contains a list of dictionaries like below. </p> <pre><code>data = [{"character": "Jake Sully", "gender": 2}, {"character": "Neytiri", "gender": 1}, {"character": "Dr. Grace Augustine","gender": 1}, {"character": "Col. Quaritch", "gender": 2}] </code></pre> <p>But of course after loading it, it's read as a string. So, I converted each row in the column to json, which makes it easy to extract values based on the key name. I then need to create a separate df like so.</p> <pre><code>df = {'character': ['Jake Sully','Neytiri', 'Dr. Grace Augustine', 'Col.Quaritch'], 'gender': [2, 1, 1, 2]} </code></pre> <p>This is my code, but I can't quite get the desired df output right.</p> <pre><code>df = pd.DataFrame() #create new df keys = ['character','gender'] #keys to extract values from json lst=[] for val in data: #to iterate over data series for object in json.loads(val): for key in keys: lst.append(object[key]) df = pd.concat([df,pd.DataFrame(lst,columns=[key])], axis=1) </code></pre> <p>Can someone tell me what I am doing wrong?</p>
<p><code>pd.DataFrame</code> accepts a list of dictionaries directly:</p> <pre><code>data = [{"character": "Jake Sully", "gender": 2,}, {"character": "Neytiri", "gender": 1}, {"character": "Dr. Grace Augustine","gender": 1}, {"character": "Col. Quaritch", "gender": 2}] df = pd.DataFrame(data) # or pd.DataFrame.from_dict(data) print(df) character gender 0 Jake Sully 2 1 Neytiri 1 2 Dr. Grace Augustine 1 3 Col. Quaritch 2 </code></pre> <p>Therefore, you only need to extract a list of dictionaries from your json file. One way you can do this is via <code>json.loads</code>.</p> <p>A better idea is to read your data directly into a dataframe via <code>pd.read_json</code>.</p>
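<p>If the column still holds the raw string from the .csv, a small sketch for getting from there to the list of dictionaries (the column name 'cast' is hypothetical; literal_eval covers strings stored with single quotes, which are not valid JSON):</p> <pre><code>import ast
import json
import pandas as pd

raw = df['cast'].iloc[0]             # 'cast' is a hypothetical column name
try:
    records = json.loads(raw)        # valid JSON: double-quoted keys/strings
except ValueError:
    records = ast.literal_eval(raw)  # Python-repr style, single quotes

result = pd.DataFrame(records)
</code></pre>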
python|pandas
2
374,027
51,023,386
Tensorboard for Windows for Tensorflowsharp
<p>Is it possible to run a standalone version of Tensorboard on Windows WITHOUT installing tensorflow and python?</p> <p>I want to look at output from Tensorflowsharp only.</p>
<p>There is a standalone version. You can find it <a href="https://github.com/dmlc/tensorboard" rel="nofollow noreferrer">here</a>.</p>
tensorflow|tensorboard|tensorflowsharp
0
374,028
50,798,172
Pytorch: how to make the trainloader use a specific amount of images?
<p>Assume I am using the following calls:</p> <pre><code>trainset = torchvision.datasets.ImageFolder(root="imgs/", transform=transform) trainloader = torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=1) </code></pre> <p>As far as I can tell, this defines the trainset as consisting of all the images in the folder "imgs/", with labels as defined by the specific folder location.</p> <p>My question is: is there any direct/easy way to define the trainset to be a sub-sample of the images in this folder? For example, define trainset to be a random sample of 10 images from every sub-folder?</p> <p>Thanks in advance </p>
<p>You can wrap the class <a href="https://github.com/pytorch/vision/blob/3f6c23c0d3056d66a14635bacc7e4b8a8a067069/torchvision/datasets/folder.py#L53" rel="nofollow noreferrer"><code>DatasetFolder</code></a> (or ImageFolder) in another class to limit the dataset:</p> <pre><code>import torch.utils.data as data class LimitDataset(data.Dataset): def __init__(self, dataset, n): self.dataset = dataset self.n = n def __len__(self): return self.n def __getitem__(self, i): return self.dataset[i] </code></pre> <p>You can also define some mapping between the index in <code>LimitDataset</code> and the index in the original dataset to define more complex behavior (such as random subsets).</p> <p>If you want to limit the batches per epoch instead of the dataset size:</p> <pre><code>from itertools import islice for data in islice(dataloader, 0, batches_per_epoch): ... </code></pre> <p>Note that if you use this with a shuffling DataLoader, the dataset size stays the same but the data seen in each epoch is limited; if you don't shuffle, it also effectively limits the dataset size.</p>
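<p>Usage with the loader from the question might look like this (100 is an arbitrary cap; for the "10 random images per sub-folder" variant you would add the index mapping mentioned above):</p> <pre><code>trainset = torchvision.datasets.ImageFolder(root="imgs/", transform=transform)
limited = LimitDataset(trainset, 100)  # expose only the first 100 samples
trainloader = torch.utils.data.DataLoader(limited, batch_size=4, shuffle=True, num_workers=1)
</code></pre>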
pytorch
4
374,029
50,965,844
Using Boolean Logic to clean DF in pandas
<p>df </p> <pre><code>shape square shape circle animal NaN NaN dog NaN cat NaN fish color red color blue </code></pre> <p>desired_df</p> <pre><code>shape square shape circle animal dog animal cat animal fish color red color blue </code></pre> <p>I have a df that contains information that needs to be normalized.</p> <p>I have noticed a pattern that indicates how to join the columns and normalize the data.</p> <p>If in a row Col1 != NaN and Col2 == NaN, and directly in the following row Col1 == NaN and Col2 != NaN, then the values from Col1 and Col2 should be joined. This continues until arriving at a row that contains values Col1 != NaN and Col2 != NaN.</p> <p>Is there a way to solve this in <code>pandas</code>?</p> <p>The first step that I am thinking of is to create an additional column containing True/False values in order to determine which columns to join; however, once doing that, I am not sure how to assign the value in Col1 to all of the relevant values in Col2. </p> <p>Any suggestions to arrive at the desired result?</p>
<p>Rather than encoding the row-by-row pattern you identified, you can try <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ffill.html" rel="nofollow noreferrer"><code>pd.Series.ffill</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.bfill.html" rel="nofollow noreferrer"><code>pd.Series.bfill</code></a> to reach your desired result:</p> <pre><code>df[0] = df[0].ffill() df[1] = df[1].bfill() </code></pre> <p>Then drop duplicates:</p> <pre><code>df = df.drop_duplicates() print(df) 0 1 0 shape square 1 shape circle 2 animal dog 4 animal cat 5 animal fish 6 color red 7 color blue </code></pre>
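<p>For reference, a minimal reproducible setup for the snippet above, assuming the two unnamed columns shown in the question:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    0: ['shape', 'shape', 'animal', np.nan, np.nan, np.nan, 'color', 'color'],
    1: ['square', 'circle', np.nan, 'dog', 'cat', 'fish', 'red', 'blue'],
})
</code></pre>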
python|pandas|boolean|series
3
374,030
50,891,974
Concatenating dataframes in loop
<p>I am struggling with something simple and it drives me crazy. </p> <p>Why does concatenating like below not replace df1 with df1 plus an additional column?</p> <pre><code>df1 = pd.DataFrame({'A':[1, 2, 3], 'B':[4, 5, 6]}) df2 = pd.DataFrame({'A':[1, 2, 3], 'B':[4, 5, 6]}) df3 = pd.DataFrame({'C':[999, 999, 999]}) for table in [df1, df2]: table = pd.concat((table, df3), axis=1) df1 </code></pre> <p><a href="https://i.stack.imgur.com/Bi3Y2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bi3Y2.png" alt="enter image description here"></a></p> <p>Thanks!</p> <p>[edit] I need to obtain separately for df1 and df2:</p> <p><a href="https://i.stack.imgur.com/0bCdQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0bCdQ.png" alt="enter image description here"></a></p>
<p>You have two DataFrames, referenced by the variable names "df1" and "df2". Now, you loop over these DataFrames under the alias "table". Inside the loop, "table" is reassigned to the result of <code>concat</code>, which only rebinds the local name. Since <code>concat</code> is not in-place, <em>neither</em> of the original DataFrames is modified.</p> <p>My advice is to maintain a list of DataFrames.</p> <pre><code>df_list = [df1, df2] </code></pre> <p>Now, modify the <em>list</em>:</p> <pre><code>for i, df in enumerate(df_list): df_list[i] = pd.concat([df, df3], axis=1) </code></pre> <p><code>df_list</code> will reflect the update because it now holds the newly created <code>concat</code> outputs.</p> <pre><code>df1, df2 = df_list print(df1) A B C 0 1 4 999 1 2 5 999 2 3 6 999 print(df2) A B C 0 1 4 999 1 2 5 999 2 3 6 999 </code></pre>
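<p>A dict keyed by name works just as well if you prefer to keep the frames labeled; a small sketch of the same idea:</p> <pre><code>frames = {'df1': df1, 'df2': df2}
for name in frames:
    frames[name] = pd.concat([frames[name], df3], axis=1)

df1, df2 = frames['df1'], frames['df2']
</code></pre>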
python|pandas|dataframe|concatenation
1
374,031
51,083,770
TensorflowJS: Training multiple models simultaneously (for performance)
<p>In my project, I am training many small graphs. Seeing as how the work is being done on the GPU, and the GPU is running at a low 5%, would it make sense to train many graphs simultaneously for a performance boost? I'm just a bit concerned, as I know JS isn't really a thread-capable language.</p> <p>Are there any other things I could look for to improve training performance?</p>
<p>In theory when training on the GPU with Tensorflow.js, you've got a number of elements that all need balancing:</p> <h2>1: GPU usage</h2> <p>Of course, the amount that the GPU is being used matters - the ultimate goal is to max out the GPU to train as efficiently as possible in terms of time.</p> <p>If you're seeing low GPU usage, your bottleneck is likely somewhere else in the picture - see below.</p> <h2>2: Javascript CPU usage</h2> <p>As you note, Javascript is not really a thread-based language. To this end, one has to watch the CPU usage of the main Javascript thread. If it's maxing out a CPU, then that's probably the bottleneck. A number of things can be done to improve the situation:</p> <ul> <li>If you're training multiple models, try training them in different processes. In the browser this means WebWorkers (assuming that WebWorkers are compatible with Tensorflow.js); for Node.js this means multiple processes (e.g. with <a href="https://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options" rel="nofollow noreferrer"><code>child_process.fork()</code></a> - careful of transferring lots of data between processes, it's <em>slow</em>), or perhaps <a href="https://nodejs.org/api/worker_threads.html" rel="nofollow noreferrer">threads</a> (though I haven't personally tried that).</li> <li>If you have a lot of preprocessing steps to get your data into the right format, try doing some of these ahead of time to speed things up. Also check to see if the layers in your model will accept a slightly different input format that's less work to convert to.</li> </ul> <h2>3: GPU memory usage</h2> <p>A limiting factor to the number of models that can be trained in parallel on a given GPU is the amount of memory that they use. Most dedicated GPUs use their own special dedicated VRAM, which can be quite limited. Check with your GPU manufacturer or OS provider to see how to monitor this.</p> <h2>4: I/O bandwidth</h2> <p>If neither your CPU nor your GPU is maxed out, then your issue is probably bandwidth. This can be in several places:</p> <ul> <li>Loading the original data in the first place</li> <li>Data transfer between the CPU &amp; GPU during the training process (as @BlessedKey notes, increasing the batch size can help here, but be careful of the increase in memory usage)</li> <li>Data transfer between processes on the CPU (this is <em>slow</em> - especially in Node.js) - try loading the data directly in the process that is going to use it</li> </ul> <h2>Conclusion</h2> <p>Sorry for the long answer. This is sort of for my own reference as much as it is an answer to your question. Anyway, to summarise:</p> <ul> <li>Try training multiple models in parallel</li> <li>Be careful of I/O and memory bandwidth</li> <li>Monitor performance closely to see if you gain a net increase in training speed</li> </ul>
tensorflow.js
0
374,032
51,053,219
sampling from list of dicts
<p>I have data like the sample below. It's very large and I would like to sample the first 10 items from it. It looks like a list of dicts, but if I try user_train[:5] I get an error. I can sample one item at a time; e.g. user_train[4] works. Any tips are greatly appreciated. </p> <p>code:</p> <pre><code>user_train[0] </code></pre> <p>output:</p> <pre><code>[{u'asin': u'B00APT3MHO', u'helpful': [0, 0], u'overall': 5.0, 'productid': 1, u'reviewText': u"Good for someone who likes skinny jeans but doesn't look great in the legging-tight ones. A little stretchy. Not super tight in the knee or ankle, but snug on the thigh and calf.", u'reviewTime': u'11 17, 2013', u'reviewerID': u'A1JWX45KHE34AL', u'reviewerName': u'varnienarsil', u'summary': u'Love these jeans', u'unixReviewTime': 1384646400}, {u'asin': u'B00CJ5NH36', u'helpful': [0, 0], u'overall': 5.0, 'productid': 2, u'reviewText': u"This shirt with it's bold graphic is seriously adorable. I have pretty narrow shoulders, and like the way the sleeves slope off them. The shirt fits loosely in a way that is flattering and I liked the length. I'm no model, but the shirt looks on me as great as it looks in the photo.", u'reviewTime': u'11 17, 2013', u'reviewerID': u'A1JWX45KHE34AL', u'reviewerName': u'varnienarsil', u'summary': u'As cute as it looks', u'unixReviewTime': 1384646400}, {u'asin': u'B00F9NGAPM', u'helpful': [1, 1], u'overall': 3.0, 'productid': 4, u'reviewText': u"The shirt is a little flowy-er than I expected. I like the way it drapes, but the arms are a bit loose (and on me, short&amp;#8212;I'm pretty tall). Has a sort of after-yoga feel rather than the urban feel I was looking for. Super comfortable.", u'reviewTime': u'11 17, 2013', u'reviewerID': u'A1JWX45KHE34AL', u'reviewerName': u'varnienarsil', u'summary': u"Like, don't love", u'unixReviewTime': 1384646400}] </code></pre> <p>Update: code:</p> <pre><code>user_train[:5] </code></pre> <p>error:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-17-bb27c2e9fa75&gt; in &lt;module&gt;() ----&gt; 1 user_train[:5] TypeError: unhashable type </code></pre>
<p>If you wanted to find a way around it, you could go:</p> <p><code>sample = [user_train[x] for x in range(10)]</code></p> <p>This is called a list comprehension. The <code>unhashable type</code> error usually means you are indexing a dict rather than a list: <code>user_train[:5]</code> passes a <code>slice</code> object, which is unhashable and so cannot be used as a dict key, whereas <code>user_train[4]</code> works because the integer key is hashable.</p>
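<p>If <code>user_train</code> does turn out to be a dict, a small sketch of taking the first few items without assuming integer keys:</p> <pre><code>from itertools import islice

first_ten = list(islice(user_train.values(), 10))       # first 10 values
first_ten_pairs = dict(islice(user_train.items(), 10))  # keep the keys too
</code></pre>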
python|python-2.7|numpy-ndarray
1
374,033
51,064,456
Principal component analysis dimension reduction in python
<p>I have to implement my own PCA function <code>Y, V = PCA(data, M, whitening)</code> that computes the first M principal components and transforms the data, so that y_n = U^T x_n. The function should further return V, the amount of variance that is explained by the transformation.</p> <p>I have to reduce the dimension of the data from D=4 to M=2, given the function skeleton below:</p> <pre><code>def PCA(data,nr_dimensions=None, whitening=False): """ perform PCA and reduce the dimension of the data (D) to nr_dimensions Input: data... samples, nr_samples x D nr_dimensions... dimension after the transformation, scalar whitening... False -&gt; standard PCA, True -&gt; PCA with whitening Returns: transformed data... nr_samples x nr_dimensions variance_explained... amount of variance explained by the first nr_dimensions principal components, scalar""" if nr_dimensions is not None: dim = nr_dimensions else: dim = 2 </code></pre> <p>What I have done is the following:</p> <pre><code>import numpy as np import matplotlib.cm as cm import matplotlib.mlab as mlab import matplotlib.pyplot as plt import scipy.stats as stats from scipy.stats import multivariate_normal import pdb import sklearn from sklearn import datasets #covariance matrix mean_vec = np.mean(data) cov_mat = (data - mean_vec).T.dot((data - mean_vec)) / (data.shape[0] - 1) print('Covariance matrix \n%s' % cov_mat) #now the eigendecomposition of the cov matrix cov_mat = np.cov(data.T) eig_vals, eig_vecs = np.linalg.eig(cov_mat) print('Eigenvectors \n%s' % eig_vecs) print('\nEigenvalues \n%s' % eig_vals) # Make a list of (eigenvalue, eigenvector) tuples eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))] </code></pre> <p>This is the point where I get stuck: I don't know what to do next to actually reduce the dimension.</p> <p>Any help would be welcome! :)</p>
<h2>Here is a simple example for the case where the initial matrix A that contains the samples and features has shape=[samples, features]</h2> <pre><code>from numpy import array from numpy import mean from numpy import cov from numpy.linalg import eig # define a matrix A = array([[1, 2], [3, 4], [5, 6]]) print(A) # calculate the mean of each column since I assume that it's column is a variable/feature M = mean(A.T, axis=1) print(M) # center columns by subtracting column means C = A - M print(C) # calculate covariance matrix of centered matrix V = cov(C.T) print(V) # eigendecomposition of covariance matrix values, vectors = eig(V) print(vectors) print(values) # project data P = vectors.T.dot(C.T) print(P.T) </code></pre>
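<p>To connect this back to the required signature, here is a hedged sketch that keeps only the first <code>nr_dimensions</code> components and returns the fraction of variance they explain. The whitening step (dividing each component by the square root of its eigenvalue) follows the usual convention and is an assumption about what the assignment expects:</p> <pre><code>import numpy as np

def PCA(data, nr_dimensions=2, whitening=False):
    X = data - data.mean(axis=0)          # center the data
    eig_vals, eig_vecs = np.linalg.eig(np.cov(X.T))
    order = np.argsort(eig_vals)[::-1]    # sort eigenpairs, largest first
    eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]
    U = eig_vecs[:, :nr_dimensions]       # D x M projection matrix
    if whitening:
        U = U / np.sqrt(eig_vals[:nr_dimensions])  # unit variance per component
    Y = X.dot(U)                          # transformed data, nr_samples x M
    variance_explained = eig_vals[:nr_dimensions].sum() / eig_vals.sum()
    return Y, variance_explained
</code></pre>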
python|numpy|machine-learning|scikit-learn|pca
0
374,034
50,774,747
Randomly remove 30% of values in numpy array
<p>I have a 2D numpy array which contains my values (some of them can be NaN). I want to remove 30% of the non-NaN values and replace them with the mean of the array. How can I do so? What I tried so far:</p> <pre><code>def spar_removal(array, mean_value, sparseness): array1 = deepcopy(array) array2 = array1 spar_size = int(round(array2.shape[0]*array2.shape[1]*sparseness)) for i in range (0, spar_size): index = np.random.choice(np.where(array2 != mean_value)[1]) array2[0, index] = mean_value return array2 </code></pre> <p>But this only ever picks positions in the first row of my array. How can I pick positions from the whole array? It seems that <code>choice</code> works only along one dimension. I guess what I want is to compute the <code>(x, y)</code> pairs whose values I will replace with <code>mean_value</code>.</p>
<p>There's likely a better way, but consider:</p> <pre><code>import numpy as np x = np.array([[1,2,3,4], [1,2,3,4], [np.NaN, np.NaN, np.NaN, np.NaN], [1,2,3,4]]) # Get a vector of 1-d indexed indexes of non NaN elements indices = np.where(np.isfinite(x).ravel())[0] # Shuffle the indices, select the first 30% (rounded down with int()) to_replace = np.random.permutation(indices)[:int(indices.size * 0.3)] # Replace those indices with the mean (ignoring NaNs) x[np.unravel_index(to_replace, x.shape)] = np.nanmean(x) print(x) </code></pre> <p><strong>Example Output</strong></p> <pre> [[ 2.5 2. 2.5 4. ] [ 1. 2. 3. 4. ] [ nan nan nan nan] [ 2.5 2. 3. 4. ]] </pre> <p>NaNs will never change and floor(0.3 * number of non-NaN elements) will be set to the mean (the mean ignoring NaNs).</p>
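<p>A small variant of the same idea: instead of permuting all indices and slicing, you can sample the 30% directly with <code>np.random.choice</code> and <code>replace=False</code>, which reads a little closer to the original attempt:</p> <pre><code>to_replace = np.random.choice(indices, size=int(indices.size * 0.3), replace=False)
x[np.unravel_index(to_replace, x.shape)] = np.nanmean(x)
</code></pre>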
python|arrays|numpy
5
374,035
50,744,565
How to handle non-determinism when training on a GPU?
<p>While tuning the hyperparameters to get my model to perform better, I noticed that the score I get (and hence the model that is created) is different every time I run the code despite fixing all the seeds for random operations. This problem does not happen if I run on CPU.</p> <p>I googled and found out that this is a common issue when using a GPU to train. <a href="https://www.twosigma.com/insights/article/a-workaround-for-non-determinism-in-tensorflow/" rel="noreferrer">Here is a very good/detailed example with short code snippets to verify the existence of that problem.</a></p> <p>They pinpointed the non-determinism to "tf.reduce_sum" function. However, that is not the case for me. it could be because I'm using different hardware (1080 TI) or a different version of CUDA libraries or Tensorflow. It seems like there are many different parts of the CUDA libraries that are non-deterministic and it doesn't seem easy to figure out exactly which part and how to get rid of it. Also, this must have been by design, so it's likely that there is a sufficient efficiency increase in exchange for non-determinism.</p> <p>So, my question is:</p> <p>Since GPUs are popular for training NNs, people in this field must have a way to deal with non-determinism, because I can't see how else you'd be able to reliably tune the hyperparameters. What is the standard way to handle non-determinism when using a GPU?</p>
<p><strong>TL;DR</strong></p> <ul> <li>Non-determinism for <em>a priori</em> deterministic operations comes from concurrent (multi-threaded) implementations.</li> <li>Despite constant progress on that front, TensorFlow does not currently guarantee determinism for all of its operations. After a quick search on the internet, it seems that the situation is similar to the other major toolkits.</li> <li>During training, unless you are debugging an issue, it is OK to have fluctuations between runs. Uncertainty is in the nature of training, and it is wise to measure it and take it into account when comparing results – <em>even when toolkits eventually reach perfect determinism in training</em>.</li> </ul> <p><strong>That, but much longer</strong></p> <p>When you see neural network operations as mathematical operations, you would expect everything to be deterministic. Convolutions, activations, cross-entropy – these are all mathematical equations and should be deterministic. Even pseudo-random operations such as shuffling, drop-out, noise and the like are entirely determined by a seed.</p> <p>When you see those operations from their computational implementation, on the other hand, you see them as massively parallelized computations, which can be a source of randomness unless you are very careful.</p> <p>The heart of the problem is that, when you run operations on several parallel threads, you typically do not know which thread will end first. It is not important when threads operate on their own data, so for example, applying an activation function to a tensor should be deterministic. But when those threads need to synchronize, such as when you compute a sum, then the result may depend on the order of the summation, and in turn, on which thread ended first.</p> <p>From there, you have broadly speaking two options:</p> <ul> <li><p>Keep the non-determinism associated with simpler implementations.</p> </li> <li><p>Take extra care in the design of your parallel algorithm to reduce or remove non-determinism in your computation. The added constraint usually results in slower algorithms.</p> </li> </ul> <p>Which route does CuDNN take? Well, mostly the deterministic one. In recent releases, deterministic operations are the norm rather than the exception. But it used to offer many non-deterministic operations, and more importantly, it used to not offer some operations such as reductions, which people needed to implement themselves in CUDA with a variable degree of consideration for determinism.</p> <p>Some libraries such as theano were ahead on this topic, exposing early on a <a href="http://deeplearning.net/software/theano/library/config.html#config.deterministic" rel="nofollow noreferrer"><code>deterministic</code></a> flag that the user could turn on or off – but as you can see from its description, it is far from offering any guarantee.</p> <blockquote> <p>If <code>more</code>, sometimes we will select some implementations that are more deterministic, but slower. In particular, on the GPU, we will avoid using AtomicAdd. Sometimes we will still use non-deterministic implementation, e.g. when we do not have a GPU implementation that is deterministic. Also, see the dnn.conv.algo* flags to cover more cases.</p> </blockquote> <p>In TensorFlow, the realization of the need for determinism came rather late, but it's slowly getting there – helped by the advances of CuDNN on that front as well. For a long time, reductions were non-deterministic, but now they seem to be deterministic.
The fact that CuDNN introduced deterministic reductions in version 6.0 may have helped, of course.</p> <p>It seems that currently, <a href="https://github.com/tensorflow/tensorflow/issues/2732#issuecomment-388906092" rel="nofollow noreferrer">the main obstacle for TensorFlow towards determinism is the backward pass of the convolution</a>. It is indeed one of the few operations for which CuDNN proposes a non-deterministic algorithm, labeled <code>CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0</code>. This algorithm is still in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/stream_executor/cuda/cuda_dnn.cc#L253" rel="nofollow noreferrer">the list of possible choices for the backward filter</a> in TensorFlow. And since <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/gpu/cudnn_convolution_algorithm_picker.cc#L246" rel="nofollow noreferrer">the choice of the filter seems to be based on performance</a>, it could indeed be picked if it is more efficient. (I am not so familiar with TensorFlow's C++ code so take this with a grain of salt.)</p> <p><strong>Is this important?</strong></p> <p>If you are debugging an issue, determinism is not merely important: it is mandatory. You need to reproduce the steps that led to a problem. This is currently a real issue with toolkits like TensorFlow. To mitigate this problem, your only option is to debug live, adding checks and breakpoints at the correct locations – not great.</p> <p>Deployment is another aspect of things, where it is often desirable to have a deterministic behavior, in part for human acceptance. While nobody would reasonably expect a medical diagnosis algorithm to never fail, it would be awkward if a computer could give the same patient a different diagnosis depending on the run. (Although doctors themselves are not immune to this kind of variability.)</p> <p>Those reasons are legitimate motivations to fix non-determinism in neural networks.</p> <p>For all other aspects, I would say that we need to accept, if not embrace, the non-deterministic nature of neural net training. For all practical purposes, training <em>is</em> stochastic. We use stochastic gradient descent, shuffle data, use random initialization and dropout – and more importantly, training data is itself but a random sample of data. From that standpoint, the fact that computers can only generate pseudo-random numbers with a seed is an artifact. When you train, your loss is a value that also comes with a confidence interval due to this stochastic nature. Comparing those values to optimize hyper-parameters while ignoring those confidence intervals does not make much sense – therefore it is vain, in my opinion, to spend too much effort fixing non-determinism in that, and many other, cases.</p>
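<p>As a practical footnote to the hyper-parameter question: even though GPU kernels may stay non-deterministic, it is still worth pinning down every seed you control, so that run-to-run variation comes only from the hardware. A minimal sketch for the TensorFlow 1.x / Python setup discussed here (the seed value is arbitrary):</p> <pre><code>import os
import random
import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'  # ideally set before the interpreter starts
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)

# Reducing thread-level parallelism removes one source of non-determinism
# on the CPU side, at the cost of speed:
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
sess = tf.Session(config=config)
</code></pre> <p>With seeds fixed, repeating a configuration a few times and comparing means rather than single runs is a cheap way to apply the confidence-interval reasoning above.</p>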
python|tensorflow|machine-learning|deep-learning
25
374,036
51,066,894
Improve performance of a for loop comparing pandas dataframe rows
<p>I'm facing a performance problem with Python/Pandas. I have a for loop comparing consecutive rows in a Pandas DataFrame:</p> <pre><code>for i in range(1, N): if df.column_A.iloc[i] == df.column_A.iloc[i-1]: if df.column_B.iloc[i] == 'START' and df.column_B.iloc[i-1] == 'STOP': df.time.iloc[i] = df.time.iloc[i] - df.time.iloc[i-1] </code></pre> <p>This works properly, but it is extremely slow. My dataframe has around 1M rows, and I'm wondering if there is some way to improve performance. I've read about vectorization, but I can't figure out where to start.</p>
<p>I think you can use <code>shift</code> and a <code>mask</code>:</p> <pre><code>mask = ((df.column_A == df.column_A.shift()) &amp; (df.column_B == 'START') &amp; (df.column_B.shift() == 'STOP')) df.loc[mask, 'time'] -= df.time.shift().loc[mask] </code></pre> <p>The mask selects the rows where the value in 'column_A' is equal to the value in the previous row (obtained by <code>shift</code>), where 'column_B' is equal to 'START', and where the previous row has 'STOP'. Using <code>loc</code> lets you update the column 'time', for all the rows selected by the mask, by subtracting the value of the previous row (<code>shift</code> again).</p> <p>EDIT: with an example:</p> <pre><code>df = pd.DataFrame({'column_A': [0,1,1,2,1,2,2], 'column_B': ['START', 'STOP', 'START','STOP', 'START','STOP', 'START'], 'time':range(7)}) column_A column_B time 0 0 START 0 1 1 STOP 1 2 1 START 2 3 2 STOP 3 4 1 START 4 5 2 STOP 5 6 2 START 6 </code></pre> <p>so here rows 2 and 6 meet your condition, as the previous row has the same value in column_A and they have 'START' in column_B while the previous row has 'STOP'.</p> <p>After running the code you get <code>df</code>:</p> <pre><code> column_A column_B time 0 0 START 0.0 1 1 STOP 1.0 2 1 START 1.0 3 2 STOP 3.0 4 1 START 4.0 5 2 STOP 5.0 6 2 START 1.0 </code></pre> <p>where the value in time at row 2 is 1 (the original 2 minus the previous row's 1), and the same for row 6 (6 - 5)</p> <p><strong>EDIT for time comparison</strong>: let's create a df with 3000 rows</p> <pre><code>df = pd.DataFrame( [['A', 'START', 3], ['A', 'STOP', 6], ['B', 'STOP', 2], ['C', 'STOP', 1], ['C', 'START', 9], ['C', 'STOP', 7]], columns=['column_A', 'column_B', 'time'] ) df = pd.concat([df]*500) df.shape Out[16]: (3000, 3) </code></pre> <p>now create two functions with the two methods:</p> <pre><code># original method def espogian (df): N = df.shape[0] for i in range(1, N): if df.column_A.iloc[i] == df.column_A.iloc[i-1]: if df.column_B.iloc[i] == 'START' and df.column_B.iloc[i-1] == 'STOP': df.time.iloc[i] = df.time.iloc[i] - df.time.iloc[i-1] return df # mine def ben(df): mask = ((df.column_A == df.column_A.shift()) &amp; (df.column_B == 'START') &amp; (df.column_B.shift() == 'STOP')) df.loc[mask, 'time'] -= df.time.shift().loc[mask] return df </code></pre> <p>and run <code>timeit</code>:</p> <pre><code>%timeit espogian (df) 1 loop, best of 3: 8.71 s per loop %timeit ben (df) 100 loops, best of 3: 4.79 ms per loop # verify they are equal df1 = espogian (df) df2 = ben (df) (df1==df2).all() Out[24]: column_A True column_B True time True </code></pre>
python|performance|pandas
4
374,037
50,704,445
Remove NoneType from BeautifulSoup
<p>I'm trying to remove the commas from the numbers I extracted with the following code:</p> <pre><code>with requests.Session() as s: url = 'https://www.zoopla.co.uk/for-sale/property/london/paddington/?q=Paddington%2C%20London&amp;results_sort=newest_listings&amp;search_source=home' r = s.get(url, headers=req_headers) soup = BeautifulSoup(r.content, 'lxml') prices = [] for price in soup.find_all('a', {"class":"listing-results-price text-price"}): prices.append(price.text) if price is None: print('none') df['price'] = prices df['price'] = df['price'].str.extract('(\d+([\d,]?\d)*(\.\d+)?)', expand=True) #remove extract numbers with commas df['price'] = df['price'].replace(',','', inplace = True) </code></pre> <p>This returns a column in which all the values are None. Is there any way to remove this NoneType error?</p> <p>Before I run the final line, the dataframe is as follows:</p> <pre><code> price 0 NaN 1 1,875,000 2 4,950,000 3 500,000 4 675,000 5 980,000 6 475,000 7 849,950 8 1,050,000 9 1,050,000 10 650,000 11 1,100,000 12 1,300,000 13 895,000 14 1,000,000 15 26,800,000 16 1,600,000 17 695,000 18 2,100,000 19 510,000 20 1,200,000 21 3,000,000 22 599,000 23 26,800,000 24 1,550,000 25 750,000 26 1,600,000 27 1,025,000 </code></pre>
<p>With <code>df['price'].replace(',','', inplace = True)</code>, you are replacing in place, which returns <code>None</code> – so assigning that result back wipes out the column. Besides, <code>Series.replace</code> without <code>regex=True</code> matches whole cell values rather than substrings, so it would not strip the commas anyway.</p> <p>You need the string accessor instead:</p> <pre><code>df['price'] = df['price'].str.replace(',','') </code></pre> <p>Output:</p> <pre><code>0 NaN 1 1875000 2 4950000 3 500000 4 675000 5 980000 6 475000 7 849950 8 1050000 9 1050000 </code></pre> <p>For reference, have a look at the <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.str.html" rel="nofollow noreferrer">docs</a></p>
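<p>If the goal is to treat the cleaned column as numbers afterwards, one extra hedged step converts it while leaving the NaN intact:</p> <pre><code>df['price'] = pd.to_numeric(df['price'].str.replace(',', ''), errors='coerce')
</code></pre>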
python|pandas|beautifulsoup|nonetype
2
374,038
50,786,597
custom sort in python pandas dataframe needs better approach
<p>I have a dataframe like this</p> <pre><code>user = pd.DataFrame({'User':['101','101','101','102','102','101','101','102','102','102'],'Country':['India','Japan','India','Brazil','Japan','UK','Austria','Japan','Singapore','UK']}) </code></pre> <p>I want to apply a custom sort on Country, with Japan at the top for both users.</p> <p>I have done this, but it is not my expected output:</p> <pre><code>user.sort_values(['User','Country'], ascending=[True, False], inplace=True) </code></pre> <p>My expected output:</p> <pre><code>expected_output = pd.DataFrame({'User':['101','101','101','101','101','102','102','102','102','102'],'Country':['Japan','India','India','UK','Austria','Japan','Japan','Brazil','Singapore','UK']}) </code></pre> <p>I tried to cast the column as a category, passing the categories with Japan at the top. Is there another approach? I don't want to pass the whole list of countries every time – I just want to specify, say, Japan for user 101 or UK for user 102, and have the remaining rows follow in their original order.</p> <p>Thanks</p>
<p>Create a new helper key to sort by, using <code>map</code>:</p> <pre><code>user.assign(New=user.Country.map({'Japan':1}).fillna(0)).sort_values(['User','New'], ascending=[True, False]).drop('New',1) Out[80]: Country User 1 Japan 101 0 India 101 2 India 101 5 UK 101 6 Austria 101 4 Japan 102 7 Japan 102 3 Brazil 102 8 Singapore 102 9 UK 102 </code></pre> <p>Update based on the comment:</p> <pre><code>mapdf=pd.DataFrame({'Country':['Japan','UK'],'User':['101','102'],'New':[1,1]}) user.merge(mapdf,how='left').fillna(0).sort_values(['User','New'], ascending=[True, False]).drop('New',1) Out[106]: Country User 1 Japan 101 0 India 101 2 India 101 5 UK 101 6 Austria 101 9 UK 102 3 Brazil 102 4 Japan 102 7 Japan 102 8 Singapore 102 </code></pre>
python|python-2.7|pandas
2
374,039
50,694,066
Error during training using Tensorflow with our own data
<p>I am using <a href="https://www.tensorflow.org/" rel="nofollow noreferrer">Tensorflow</a> for training for developing the model to detect whether the message is spam or not. I am using <strong>Python</strong>.</p> <p>My training data size is 3000 rows and 3 columns, and the size of test data is 2700 rows and 3 columns.</p> <pre><code>Batch size: 500 n_nodes_hl1 = 500 n_nodes_hl2 = 500 n_classes = 2 batch_size = 32 total_batches = int(3000 / batch_size) hm_epochs = 10 x = tf.placeholder('float') y = tf.placeholder('float') hidden_1_layer = {'f_fum': n_nodes_hl1, 'weight': tf.Variable(tf.random_normal([3000, n_nodes_hl1])), 'bias': tf.Variable(tf.random_normal([n_nodes_hl1]))} hidden_2_layer = {'f_fum': n_nodes_hl2, 'weight': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'bias': tf.Variable(tf.random_normal([n_nodes_hl2]))} output_layer = {'f_fum': None, 'weight': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes])), 'bias': tf.Variable(tf.random_normal([n_classes])), } </code></pre> <p>I am getting this error during compilation:</p> <pre><code>WARNING:tensorflow:From saving_and_restoring.py:50: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See @{tf.nn.softmax_cross_entropy_with_logits_v2}. Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1322, in _do_call return fn(*args) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1307, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [32,12], In[1]: [2794,500] [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Placeholder_0_0, Variable/read)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "saving_and_restoring.py", line 103, in &lt;module&gt; train_neural_network(x) File "saving_and_restoring.py", line 89, in train_neural_network y: np.array(batch_y)}) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run run_metadata_ptr) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [32,12], In[1]: [2794,500] [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Placeholder_0_0, Variable/read)]] Caused by op 'MatMul', defined at: File "saving_and_restoring.py", line 103, in &lt;module&gt; train_neural_network(x) File "saving_and_restoring.py", line 49, in train_neural_network prediction = neural_network_model(x) File "saving_and_restoring.py", line 36, in 
neural_network_model l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['bias']) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 2122, in matmul a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 4279, in mat_mul name=name) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3392, in create_op op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1718, in __init__ self._traceback = self._graph._extract_stack() # pylint: disable=protected-access InvalidArgumentError (see above for traceback): Matrix size-incompatible: In[0]: [32,12], In[1]: [2794,500] [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Placeholder_0_0, Variable/read)]] </code></pre> <p>Python version is <strong>3.6</strong> and I am using <strong>nltk</strong> for sentimental analysis.</p> <p>Please help.</p> <p>Thanks.</p> <p><strong>EDIT</strong></p> <pre><code>train_set_shuffled.csv shape=(2792,3) </code></pre> <p>My code:</p> <pre><code>import tensorflow as tf import pickle import numpy as np import nltk from nltk.tokenize import word_tokenize from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() n_nodes_hl1 = 500 n_nodes_hl2 = 500 n_classes = 2 columns=3 batch_size = 32 total_batches = int(3000 / batch_size) hm_epochs = 10 x = tf.placeholder( tf.float32) y = tf.placeholder( tf.float32) hidden_1_layer = {'f_fum': n_nodes_hl1, 'weight': tf.Variable(tf.random_normal([3000, n_nodes_hl1])), 'bias': tf.Variable(tf.random_normal([n_nodes_hl1]))} hidden_2_layer = {'f_fum': n_nodes_hl2, 'weight': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'bias': tf.Variable(tf.random_normal([n_nodes_hl2]))} output_layer = {'f_fum': None, 'weight': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes])), 'bias': tf.Variable(tf.random_normal([n_classes])), } def neural_network_model(data): l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['bias']) l1 = tf.nn.relu(l1) l2 = tf.add(tf.matmul(l1, hidden_2_layer['weight']), hidden_2_layer['bias']) l2 = tf.nn.relu(l2) output = tf.matmul(l2, output_layer['weight']) + output_layer['bias'] return output saver = tf.train.Saver() tf_log = 'tf.log' def train_neural_network(x): prediction = neural_network_model(x) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y)) optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) try: epoch = int(open(tf_log, 'r').read().split('\n')[-2]) + 1 print('STARTING:', epoch) except: epoch = 1 while epoch &lt;= hm_epochs: if epoch != 1: saver.restore(sess, "model.ckpt") epoch_loss = 1 with open('lexicon-2500-2638.pickle', 'rb') as f: lexicon = pickle.load(f) with open('train_set_shuffled.csv', buffering=20000, encoding='latin-1') as f: batch_x = [] batch_y = [] batches_run = 0 for line in f: label = line.split(':::')[0] tweet = line.split(':::')[1] current_words = word_tokenize(tweet.lower()) current_words = [lemmatizer.lemmatize(i) for i in current_words] features = np.zeros(len(lexicon)) for word in current_words: if word.lower() in 
lexicon: index_value = lexicon.index(word.lower()) # OR DO +=1, test both features[index_value] += 1 line_x = list(features) line_y = eval(label) batch_x.append(line_x) batch_y.append(line_y) if len(batch_x) &gt;= batch_size: _, c = sess.run([optimizer, cost], feed_dict={x: np.array(batch_x), y: np.array(batch_y)}) epoch_loss += c batch_x = [] batch_y = [] batches_run += 1 print('Batch run:', batches_run, '/', total_batches, '| Epoch:', epoch, '| Batch Loss:', c, ) saver.save(sess, "model.ckpt") print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss) with open(tf_log, 'a') as f: f.write(str(epoch) + '\n') epoch += 1 train_neural_network(x) def test_neural_network(): prediction = neural_network_model(x) with tf.Session() as sess: sess.run(tf.initialize_all_variables()) for epoch in range(hm_epochs): try: saver.restore(sess, "model.ckpt") except Exception as e: print(str(e)) epoch_loss = 0 correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct, 'float')) feature_sets = [] labels = [] counter = 0 with open('processed-test-set.csv', buffering=20000) as f: for line in f: try: features = list(eval(line.split('::')[0])) label = list(eval(line.split('::')[1])) feature_sets.append(features) labels.append(label) counter += 1 except: pass print('Tested', counter, 'samples.') test_x = np.array(feature_sets) test_y = np.array(labels) print('Accuracy:', accuracy.eval({x: test_x, y: test_y})) test_neural_network() </code></pre> <p><strong>EDIT 2</strong></p> <pre><code>feature_colum_size=12 x = tf.placeholder(tf.float32,shape=[batch_size,feature_colum_size]) y = tf.placeholder(tf.float32,shape=[batch_size,feature_colum_size]) hidden_1_layer = {'f_fum': n_nodes_hl1, 'weight': tf.Variable(tf.random_normal([feature_colum_size, n_nodes_hl1])), 'bias': tf.Variable(tf.random_normal([n_nodes_hl1]))} hidden_2_layer = {'f_fum': n_nodes_hl2, 'weight': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'bias': tf.Variable(tf.random_normal([n_nodes_hl2]))} output_layer = {'f_fum': None, 'weight': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes])), 'bias': tf.Variable(tf.random_normal([n_classes])), } </code></pre>
<p>I think the problem is where you are generating your input data (<code>batch_x</code>). It seems that your input shape is <code>[batch_size, 12]</code>, which you are multiplying (<code>matmul</code>) with your first hidden layer weights (<code>hidden_1_layer['weight']</code>) of shape <code>[3000, n_nodes_hl1]</code>, causing the matrix multiplication operation to fail. The way lines are read and parsed from <code>train_set_shuffled.csv</code> also makes me think that the input feature size (12) is not consistent.</p> <p>What I think you should do:</p> <p>Fix the input feature size: change <code>x = tf.placeholder('float')</code> to <code>x = tf.placeholder(tf.float32, shape=[batch_size, feature_column_size])</code></p> <p>change the <code>'weight': tf.Variable(tf.random_normal([3000, n_nodes_hl1]))</code> to </p> <pre><code>'weight': tf.Variable(tf.random_normal([feature_column_size, n_nodes_hl1])), </code></pre> <p><code>feature_column_size</code> should be consistent between your input and the first hidden layer in order to be able to perform the <code>matmul</code> multiplication.</p>
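<p>Pulling the advice together, a hedged sketch of the shape-consistent setup. Here <code>feature_column_size</code> is assumed to come from the lexicon built in the question, and note that the labels placeholder should be sized by the number of classes, not by the feature size (the question's EDIT 2 uses the feature size for <code>y</code> as well):</p> <pre><code>feature_column_size = len(lexicon)  # one count per lexicon word

x = tf.placeholder(tf.float32, shape=[None, feature_column_size])
y = tf.placeholder(tf.float32, shape=[None, n_classes])

hidden_1_layer = {
    'weight': tf.Variable(tf.random_normal([feature_column_size, n_nodes_hl1])),
    'bias':   tf.Variable(tf.random_normal([n_nodes_hl1])),
}
</code></pre> <p>Using <code>None</code> as the leading dimension keeps the same graph usable for both the training batches and the differently sized test set.</p>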
python|pandas|tensorflow|nltk
0
374,040
51,078,625
How to ignore some input layer, while predicting, in a keras model trained with multiple input layers?
<p>I'm working with neural networks and I've implemented the following architecture using <code>keras</code> with <code>tensorflow</code> backend:</p> <p><a href="https://i.stack.imgur.com/P4wDh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P4wDh.png" alt="enter image description here"></a></p> <p>For training, I'll give some labels in the layer <code>labels_vector</code>; this vector can have <code>int32</code> values (i.e. 0 could be a label). For the testing phase, I need to just ignore this input layer: if I set it to <code>0</code>, results could be wrong, since I've trained with labels that can be equal to the <code>0</code> vector. Is there a way to simply ignore or disable this layer in the prediction phase? Thanks in advance. </p>
<blockquote> <p>How to ignore some input layer?</p> </blockquote> <p>You <strong>can't</strong>. Keras cannot just ignore an input layer, as the output depends on it.</p> <p>One solution to get nearly what you want is to define a dedicated "null" label in your training data. Your network will learn to <em>ignore</em> it if it finds that it is not an important feature.</p> <p>If <code>labels_vector</code> is a vector of categorical labels, use <em>one-hot encoding</em> instead of <em>integer encoding</em>. Integer encoding assumes that there is a natural ordered relationship between the labels, which is wrong here.</p>
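<p>A hedged sketch of the suggested workaround, combining the one-hot encoding with a dedicated "null" class; <code>labels_vector</code>, <code>x_test</code> and <code>model</code> stand in for your actual objects, and for the network to actually learn to ignore the null class, some training samples should carry it as well:</p> <pre><code>import numpy as np
from keras.utils import to_categorical

n_labels = 10                 # assumed number of real label values
NULL_CLASS = n_labels         # reserve one extra index for "no label"

# training: one-hot encode the labels over n_labels + 1 classes
labels_onehot = to_categorical(labels_vector, num_classes=n_labels + 1)

# testing: feed the explicit null class instead of an ambiguous all-zeros vector
null_labels = to_categorical(np.full(len(x_test), NULL_CLASS),
                             num_classes=n_labels + 1)
predictions = model.predict([x_test, null_labels])
</code></pre>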
python-3.x|tensorflow|keras|disabled-input
3
374,041
50,846,719
Cannot replace special characters in a Python pandas dataframe
<p>I'm working with Python 3.5 in Windows. I have a dataframe where a <code>'titles'</code> str type column contains headline titles, some of which have special characters such as <code>â</code>,<code>€</code>,<code>˜</code>. </p> <p>I am trying to replace these with an empty string <code>''</code> using <code>pandas.replace</code>. I have tried various iterations and nothing works. I am able to replace regular characters, but these special characters just don't seem to work. </p> <p>The code runs without error, but the replacement simply does not occur; instead the original title is returned. Below is what I have tried already. Any advice would be much appreciated.</p> <pre><code>df['clean_title'] = df['titles'].replace('€','',regex=True) df['clean_titles'] = df['titles'].replace('€','') df['clean_titles'] = df['titles'].str.replace('€','') def clean_text(row): return re.sub('€','',str(row)) return str(row).replace('€','') df['clean_title'] = df['titles'].apply(clean_text) </code></pre>
<p>We can only assume that you refer to non-ASCII characters as 'special' characters. </p> <p>To remove <em>all</em> non-ASCII characters in a pandas dataframe column, do the following:</p> <pre><code>df['clean_titles'] = df['titles'].str.replace(r'[^\x00-\x7f]', '') </code></pre> <p>Note that this is a scalable solution, as it works for <em>any</em> non-ASCII character. </p>
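<p>Two hedged alternatives, depending on what you want to happen to accented letters: byte-level stripping via the string accessor, or Unicode normalization that keeps the base letter and drops only the combining marks:</p> <pre><code>import unicodedata

# drop every non-ASCII character outright
df['clean_title'] = df['titles'].str.encode('ascii', 'ignore').str.decode('ascii')

# or: decompose accents first, so 'café' becomes 'cafe' rather than 'caf'
df['clean_title'] = df['titles'].apply(
    lambda s: unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii'))
</code></pre>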
python|regex|string|pandas|dataframe
4
374,042
50,741,335
Calculating sum of a combination of columns in pandas, row-wise, with output file with the name of said combination
<p>I am looking for a way of generating a csv file for a specific combination of data from columns in a dataframe.</p> <p>My data looks like this (except with 200 more rows)</p> <pre><code>+-------------------------------+-----+----------+---------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+---------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+ | Species | OGT | Domain | A | C | D | E | F | G | H | I | K | L | M | N | P | Q | R | S | T | V | W | Y | +-------------------------------+-----+----------+---------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+---------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+ | Aeropyrum pernix | 95 | Archaea | 9.7659115711 | 0.6720465616 | 4.3895390781 | 7.6501943794 | 2.9344881615 | 8.8666657183 | 1.5011817208 | 5.6901432494 | 4.1428307243 | 11.0604191603 | 2.21143353 | 1.9387130928 | 5.1038552753 | 1.6855017182 | 7.7664358772 | 6.266067034 | 4.2052190807 | 9.2692433532 | 1.318690698 | 3.5614200159 | | Argobacterium fabrum | 26 | Bacteria | 11.5698896021 | 0.7985475923 | 5.5884500155 | 5.8165463343 | 4.0512504104 | 8.2643271309 | 2.0116736244 | 5.7962804605 | 3.8931525401 | 9.9250463349 | 2.5980609708 | 2.9846761128 | 4.7828063605 | 3.1262365491 | 6.5684282943 | 5.9454781844 | 5.3740045968 | 7.3382308193 | 1.2519739683 | 2.3149400984 | | Anaeromyxobacter dehalogenans | 27 | Bacteria | 16.0337898849 | 0.8860252895 | 5.1368827707 | 6.1864992608 | 2.9730203513 | 9.3167603253 | 1.9360386851 | 2.940143349 | 2.3473650439 | 10.898494736 | 1.6343905351 | 1.5247123262 | 6.3580285706 | 2.4715303021 | 9.2639057482 | 4.1890063803 | 4.3992339725 | 8.3885969061 | 1.2890166336 | 1.8265589289 | | Aquifex aeolicus | 85 | Bacteria | 5.8730327277 | 0.795341216 | 4.3287799008 | 9.6746388172 | 5.1386954322 | 6.7148035486 | 1.5438364179 | 7.3358775924 | 9.4641440609 | 10.5736658776 | 1.9263080969 | 3.6183861236 | 4.0518679067 | 2.0493569604 | 4.9229955632 | 4.7976564501 | 4.2005259246 | 7.9169763709 | 0.9292167138 | 4.1438942987 | | Archaeoglobus fulgidus | 83 | Archaea | 7.8742687687 | 1.1695110027 | 4.9165979364 | 8.9548767369 | 4.568636662 | 7.2640358917 | 1.4998752909 | 7.2472039919 | 6.8957233203 | 9.4826333048 | 2.6014466253 | 3.206476915 | 3.8419576418 | 1.7789787933 | 5.7572748236 | 5.4763351139 | 4.1490633048 | 8.6330814159 | 1.0325605451 | 3.6494619148 | +-------------------------------+-----+----------+---------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+---------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+ </code></pre> <p>What I want to do is find a way of generating a csv with species, OGT, and then a combination of a few of the other columns, say A,C,E &amp; G and the sum of the percentages of those particular values.</p> <p>So output looking something like this: (these sums are just made up)</p> <p>ACEG.csv</p> <pre><code> Species OGT Sum of percentage ------------------------------- ----- ------------------- Aeropyrum pernix 95 23.4353 Anaeromyxobacter dehalogenans 26 20.3232 Argobacterium 
fabrum 27 14.2312 Aquifex aeolicus 85 15.0403 Archaeoglobus fulgidus 83 34.0532 </code></pre> <p>The aim of this is so I can do this for each of the 10 million combinations of each column (A-Y), but I figure that's a simple for loop. I initially was trying to achieve this in R, but upon reflection, using pandas in Python is probably a better bet.</p>
<p>Something like this?</p> <pre><code>def subset_to_csv(cols): df['Sum of percentage'] = your_data[list(cols)].sum(axis=1) df.to_csv(cols + '.csv') df = your_data[['Species', 'OGT']].copy() for c in your_list_of_combinations: subset_to_csv(c) </code></pre> <p>Where <code>cols</code> is a string containing the columns you want to subset, e.g.: <code>'ABC'</code> (the <code>.copy()</code> avoids pandas' <code>SettingWithCopyWarning</code> when the summed column is added)</p>
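<p>For generating the combinations themselves, <code>itertools.combinations</code> is the natural fit. A hedged sketch over the twenty amino-acid columns; be aware that writing one CSV per subset means on the order of a million files, so accumulating the results into a single summary frame may be kinder to your filesystem:</p> <pre><code>from itertools import combinations

value_cols = list('ACDEFGHIKLMNPQRSTVWY')  # the 20 value columns

for r in range(1, len(value_cols) + 1):
    for combo in combinations(value_cols, r):
        subset_to_csv(''.join(combo))
</code></pre>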
python|pandas|dataframe|combinatorics
2
374,043
50,868,147
Use .filter within a function
<p>I'm trying to create a function that creates a pivot table, and I need to filter one column based on a string.</p> <pre><code>df = DataFrame({'Breed': ['Sheltie', 'Bernard', 'Husky', 'Husky', 'pig', 'Sheltie','Bernard'], 'Metric': ['One month walked', 'two month walked', 'three month walked', 'four month walked', 'one month waiting', 'two month waiting', 'Three month waiting'], 'Age': [1,2,3,4,5,6,7]}) </code></pre> <p>I want a pivot table with the ages of all the dogs summed up, where they have a 'completed' metric, regardless of what month.</p> <p>It would look a little something like this:</p> <pre><code> Age Breed Metric sum ------------------------------------ Husky one month walked 4 Husky four month walked 5 </code></pre> <p>The function would filter out any of the metrics that are not 'walked', while summing up each of the 'completed' metrics.</p> <p>I've been trying this so far.</p> <pre><code>import pandas as pd import fnmatch def Dog_Walked_Completed(dfprime): return dfprime[dfprime['Breed'] == 'Husky'].groupby(['Breed','Metric']).fnmatch.filter(lambda df : (df['Metric']=='?completion')).any().agg({'Age': ['sum']}) </code></pre> <p>But whenever I try that, I get a 'DataFrameGroupBy' object has no attribute 'fnmatch' error. Is there a different way to do wildcard searches within a function?</p>
<p>Assuming you want to find the sum of ages for each breed that has a given word in their metric, you can take the following approach.</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame({'Breed': ['Sheltie', 'Bernard', 'Husky', 'Husky', 'pig', 'Sheltie','Bernard'],'Metric': ['One month walked', 'two month walked', 'three month walked', 'four month walked', 'one month waiting', 'two month waiting', 'Three month waiting'],'Age': [1,2,3,4,5,6,7]}) &gt;&gt;&gt; df Age Breed Metric 0 1 Sheltie One month walked 1 2 Bernard two month walked 2 3 Husky three month walked 3 4 Husky four month walked 4 5 pig one month waiting 5 6 Sheltie two month waiting 6 7 Bernard Three month waiting </code></pre> <p>Now let's create a boolean mask which checks for the word 'completion' in the <code>Metric</code> column of the dataframe <code>df</code> (avoid naming it <code>bool</code>, which would shadow the built-in).</p> <pre><code>&gt;&gt;&gt; mask = df['Metric'].str.contains('completion') </code></pre> <p>Now you can <code>groupby</code> on the Breed and the <code>mask</code> variable to find the sum of ages.</p> <pre><code>&gt;&gt;&gt; pvt_tbl = df.groupby(['Breed',mask])['Age'].sum() &gt;&gt;&gt; pvt_tbl Breed Metric Bernard False 9 Husky False 7 Sheltie False 7 pig False 5 Name: Age, dtype: int64 </code></pre> <p>Since there was no 'completion' word in the sample data, all rows returned False. But we can check for the word 'walked', as there are some rows where it is present.</p> <pre><code>&gt;&gt;&gt; mask1 = df['Metric'].str.contains('walked') &gt;&gt;&gt; pvt_tbl1 = df.groupby(['Breed',mask1])['Age'].sum() &gt;&gt;&gt; pvt_tbl1 Breed Metric Bernard False 7 True 2 Husky True 7 Sheltie False 6 True 1 pig False 5 Name: Age, dtype: int64 </code></pre> <p>Hope this is what you want to do.</p> <p><strong>Update</strong> As per comments:</p> <pre><code>&gt;&gt;&gt; df.groupby(['Breed','Metric'])['Age'].sum() Breed Metric Bernard Three month waiting 7 two month walked 2 Husky four month walked 4 three month walked 3 Sheltie One month walked 1 two month waiting 6 pig one month waiting 5 Name: Age, dtype: int64 </code></pre>
python|pandas
1
374,044
50,866,385
Efficient grouping into dict
<p>I have a list of tuples:</p> <pre><code>[('Player1', 'A', 1, 100), ('Player1', 'B', 15, 100), ('Player2', 'A', 7, 100), ('Player2', 'B', 65, 100), ('Global Total', None, 88, 100)] </code></pre> <p>Which I wish to convert to a dict in the following format:</p> <pre><code>{ 'Player1': { 'A': [1, 12.5], 'B': [15, 18.75], 'Total': [16, 18.18] }, 'Player2': { 'A': [7, 87.5], 'B': [65, 81.25], 'Total': [72, 81.81] }, 'Global Total': { 'A': [8, 100], 'B': [80, 100] } } </code></pre> <p>So each Player dict has its <em>local</em> total value and its percentage according to the <em>global</em> total value.</p> <p>Currently, I do it like this:</p> <pre><code>fixed_vals = {} for name, status, qtd, prct in data_set: # This is the list of tuples var if name in fixed_vals: fixed_vals[name].update({status: [qtd, prct]}) else: fixed_vals[name] = {status: [qtd, prct]} fixed_vals['Global Total']['Total'] = fixed_vals['Global Total'].pop(None) total_a = 0 for k, v in fixed_vals.items(): if k != 'Global Total': total_a += v['A'][0] fixed_vals['Global Total']['A'] = [ total_a, total_a * 100 / fixed_vals['Global Total']['Total'][0] ] fixed_vals['Global Total']['B'] = [ fixed_vals['Global Total']['Total'][0] - total_a, fixed_vals['Global Total']['Total'][0] - fixed_vals['Global Total']['A'][1] ] for player, vals in fixed_vals.items(): if player != 'Global Total': vals['A'][1] = vals['A'][0] * 100 / fixed_vals['Global Total']['A'][0] vals['B'][1] = fixed_vals['Global Total']['A'][1] - vals['B'][1] </code></pre> <p>the problem being that this is not very flexible, since I have to do something similar to this but with almost 12 categories (A, B, ...)</p> <p>Is there a better approach to this? Perhaps this is trivial with pandas?</p> <p><strong>Edit for clarification:</strong></p> <p>There are no duplicate categories for each Player; every one of them has the same sequence (some might have 0, but the category is unique)</p>
<p>Everyone seems attracted to a dict-only solution, but why not try converting to <code>pandas</code>?</p> <pre><code>import pandas as pd # given tuple_list = [('Player1', 'A', 1, 100), ('Player1', 'B', 15, 100), ('Player2', 'A', 7, 100), ('Player2', 'B', 65, 100), ('Global Total', None, 88, 100)] # make a dataframe df = pd.DataFrame(tuple_list , columns = ['player', 'game','score', 'pct']) del df['pct'] df = df[df.player!='Global Total'] df = df.pivot(index='player', columns='game', values='score') df.columns.name='' df.index.name='' # just a check assert df.to_dict() == {'A': {'Player1': 1, 'Player2': 7}, 'B': {'Player1': 15, 'Player2': 65}} # A B #player #Player1 1 15 #Player2 7 65 print('Obtained dataset:\n', df) </code></pre> <p>Basically, all you need is the 'df' dataframe, and the rest you can compute and add later; no need to save it to a dictionary. </p> <p>Below is an update on the OP's request:</p> <pre><code># the sum across columns is this - this was the 'Grand Total' in the dicts # A 8 # B 80 sum_col = df.sum(axis=0) # let's calculate the share of each player's score: shares = df / df.sum(axis=0) * 100 assert shares.transpose().to_dict() == {'Player1': {'A': 12.5, 'B': 18.75}, 'Player2': {'A': 87.5, 'B': 81.25}} # in 'shares' the columns add to 100%: # A B #player #Player1 12.50 18.75 #Player2 87.50 81.25 # let's mix up a dataframe close to the original dictionary structure mixed_df = pd.concat([df.A, shares.A, df.B, shares.B], axis=1) totals = mixed_df.sum(axis=0) totals.name = 'Total' mixed_df = mixed_df.append(totals.transpose()) mixed_df.columns = ['A', 'A_pct', 'B', 'B_pct'] print('\nProducing some statistics\n', mixed_df) </code></pre>
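<p>If you do need the nested-dict shape from the question at the end, pandas can hand it back directly; a small sketch assuming the <code>mixed_df</code> built above:</p> <pre><code>result = mixed_df.to_dict(orient='index')
# e.g. result['Player1'] -&gt; {'A': 1.0, 'A_pct': 12.5, 'B': 15.0, 'B_pct': 18.75}
</code></pre>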
python|pandas
2
374,045
50,678,866
how to save model filters in keras
<p>I am visualizing my cnn model filters (kernels) using code from <a href="https://github.com/julienr/ipynb_playground/blob/master/keras/convmnist/keras_cnn_mnist.ipynb" rel="nofollow noreferrer">here</a>, which is the following:</p> <pre><code>import numpy as np import pylab as pl import matplotlib.cm as cm from mpl_toolkits.axes_grid1 import make_axes_locatable def nice_imshow(ax, data, vmin=None, vmax=None, cmap=None): """Wrapper around pl.imshow""" if cmap is None: cmap = cm.jet if vmin is None: vmin = data.min() if vmax is None: vmax = data.max() divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.05) im = ax.imshow(data, vmin=vmin, vmax=vmax, interpolation='nearest', cmap=cmap) pl.colorbar(im, cax=cax) # pl.savefig("featuremaps--{}".format(layer_num) + '.jpg') import numpy.ma as ma def make_mosaic(imgs, nrows, ncols, border=1): """ Given a set of images with all the same shape, makes a mosaic with nrows and ncols """ nimgs = imgs.shape[0] imshape = imgs.shape[1:] mosaic = ma.masked_all((nrows * imshape[0] + (nrows - 1) * border, ncols * imshape[1] + (ncols - 1) * border), dtype=np.float32) paddedh = imshape[0] + border paddedw = imshape[1] + border for i in range(nimgs): row = int(np.floor(i / ncols)) col = i % ncols mosaic[row * paddedh:row * paddedh + imshape[0], col * paddedw:col * paddedw + imshape[1]] = imgs[i] return mosaic # Visualize weights W=model.layers[8].get_weights()[0][:,:,0,:] W=np.swapaxes(W,0,2) W = np.squeeze(W) print("W shape : ", W.shape) pl.figure(figsize=(15, 15)) pl.title('conv1 weights') nice_imshow(pl.gca(), make_mosaic(W, 8, 8), cmap=cm.binary) </code></pre> <p>I want to save the filter images. Generally we use <code>fig.savefig("featuremaps-kernel-{}".format(layer_num) + '.jpg')</code> for saving figures, but it's not working in this case, maybe because of the <code>nice_imshow</code> function. Please tell me what I have to write to save the figures programmatically, not manually. With a large network, doing this by hand is a lot of manual work. </p>
<p>I had a similar issue trying to save figures in Keras with <code>plt.savefig</code>. It always resulted in blank images.</p> <p>I never really found out why it happened, if I recall correctly it only occurred when using multiprocessing, but I may be wrong.</p> <p>I solved it using a non-interactive backend, which should anyway be the proper choice if you're never going to display them with <code>plt.show()</code>.</p> <p>At the top of your matplotlib imports add</p> <pre><code>import matplotlib as mpl mpl.use('Agg') </code></pre> <p>Also, if you're saving many images like this at some point matplotlib will complain about too many open figures. You should add a <code>plt.close()</code> call after each <code>plt.savefig</code>.</p> <p>Sorry about the purely anecdotal answer, maybe someone with better insight will comment.</p>
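<p>Putting the answer's advice together with the question's helpers, a hedged sketch of a save loop. The layer indices are placeholders – pick the conv layers of your own model; <code>nice_imshow</code> and <code>make_mosaic</code> are the functions from the question:</p> <pre><code>import matplotlib as mpl
mpl.use('Agg')                      # must run before pyplot is imported
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np

conv_layer_ids = [0, 4, 8]          # hypothetical indices of Conv2D layers
for layer_num in conv_layer_ids:
    W = model.layers[layer_num].get_weights()[0][:, :, 0, :]
    W = np.squeeze(np.swapaxes(W, 0, 2))
    fig = plt.figure(figsize=(15, 15))
    nice_imshow(plt.gca(), make_mosaic(W, 8, 8), cmap=cm.binary)
    fig.savefig('featuremaps-kernel-{}.jpg'.format(layer_num))
    plt.close(fig)                  # avoid "too many open figures"
</code></pre>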
python|numpy|matplotlib|keras
1
374,046
50,777,871
Does TensorFlow use all of the hardware on the GPU?
<p>The <a href="https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf" rel="nofollow noreferrer">NVidia GP100</a> has 30 TPC circuits and 240 "texture units". Do the TPCs and texture units get used by TensorFlow, or are these disposable bits of silicon for machine learning? </p> <p>I am looking at GPU-Z and Windows 10's built-in GPU performance monitor on a running neural net training session, and I see various hardware functions are underutilized. Tensorflow uses CUDA. CUDA has access, I presume, to all hardware components. If I know where the gap is (between Tensorflow and the underlying CUDA) and whether it is material (how much silicon is wasted), I can, for example, remediate by making a clone of TensorFlow, modifying it, and then submitting a pull request.</p> <p>For example, the answer below discusses texture objects, accessible from CUDA. NVidia notes that these can be used to <a href="https://devblogs.nvidia.com/cuda-pro-tip-kepler-texture-objects-improve-performance-and-flexibility/" rel="nofollow noreferrer">speed up latency-sensitive, short-running kernels</a>. If I google "TextureObject tensorflow" I don't get any hits. So I can sort of assume, barring evidence to the contrary, that TensorFlow is not taking advantage of TextureObjects.</p> <p>NVidia markets GPGPUs for neural net training. So far it seems they have adopted a dual-use strategy for their circuits, so they are leaving in circuits not used for machine learning. This raises the question of whether a pure TensorFlow circuit would be more efficient. <a href="https://www.blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/" rel="nofollow noreferrer">Google is now promoting TPUs for this reason.</a> The jury is out on whether TPUs are actually cheaper for TensorFlow than NVidia GPUs. <a href="https://www.extremetech.com/computing/247403-nvidia-claims-pascal-gpus-challenge-googles-tensorflow-tpu-updated-benchmarks" rel="nofollow noreferrer">NVidia is challenging Google's price/performance claims.</a></p>
<p>None of those things are separate pieces of individual hardware that can be addressed separately in CUDA. Read this passage on page 10 of your document:</p> <blockquote> <p><strong>Each GPC inside GP100 has ten SMs</strong>. Each SM has 64 CUDA Cores and four texture units. <strong>With 60 SMs</strong>, GP100 has a total of 3840 single precision CUDA Cores and 240 texture units. Each memory controller is attached to 512 KB of L2 cache, and each HBM2 DRAM stack is controlled by a pair of memory controllers. The full GPU includes a total of 4096 KB of L2 cache. </p> </blockquote> <p>And if we read just above that:</p> <blockquote> <p>GP100 was built to be the highest performing parallel computing processor in the world to address the needs of the GPU accelerated computing markets serviced by our Tesla P100 accelerator platform. Like previous Tesla-class GPUs, GP100 is composed of an array of Graphics Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), and memory controllers. A full GP100 consists of six GPCs, 60 Pascal SMs, <strong>30 TPCs (each including two SMs)</strong>, and eight 512-bit memory controllers (4096 bits total).</p> </blockquote> <p>and take a look at the diagram, where we see the following:</p> <p><a href="https://i.stack.imgur.com/3ulBi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3ulBi.png" alt="enter image description here"></a></p> <p>So not only are the GPCs and SMs not separate pieces of hardware, but even the <em>TPCs</em> are just another way to reorganize the hardware architecture and come up with a fancy marketing name. You can clearly see that the TPC doesn't add anything new in the diagram; it just looks like a container for the SMs. It's [1 GPC]:[5 TPCs]:[10 SMs].</p> <p>The memory controllers are something <em>all hardware</em> is going to have in order to interface with RAM; it happens that more memory controllers can enable higher bandwidth, see this diagram:</p> <p><a href="https://i.stack.imgur.com/L8f0j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L8f0j.png" alt="enter image description here"></a></p> <p>where "High bandwidth memory" refers to <a href="https://en.wikipedia.org/wiki/High_Bandwidth_Memory" rel="nofollow noreferrer">HBM2</a>, a type of video memory like GDDR5, in other words, video RAM. This isn't something you would directly address in software with CUDA any more than you would do so with X86 desktop machines. </p> <p>So in reality, we only have SMs here, not TPCs and GPCs. So to answer your question, since <a href="https://github.com/tensorflow/tensorflow" rel="nofollow noreferrer">TensorFlow</a> takes advantage of <a href="https://github.com/tensorflow/tensorflow/tree/851c28951b7345f303f4ec1d6490e3fadaa0a40e/third_party/toolchains/gpus/cuda" rel="nofollow noreferrer">CUDA</a>, presumably it's going to use all the available hardware it can. </p> <p><strong>EDIT: The poster edited their question to an entirely different question, and has new misconceptions there, so here is the answer to that:</strong></p> <p>Texture Processing Clusters (TPCs) and texture units are not the same thing. TPCs appear to be merely an organization of Streaming Multiprocessors (SMs) with a bit of marketing magic thrown in. </p> <p>Texture units are not a concrete term, and features differ from GPU to GPU, but basically you can think of them as the combination of texture memory, or ready access to texture memory, which exploits spatial coherence, versus the L1/L2/L3... caches, which exploit temporal coherence, combined with some fixed-function capability. Fixed functionality may include filtered, interpolated access (often at least linear interpolation), different coordinate modes, mipmapping control and anisotropic texture filtering. See the <a href="https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#texture-and-surface-memory" rel="nofollow noreferrer">CUDA 9.0 Guide</a> on this topic to get an idea of texture unit functionality and what you can control with CUDA. On the diagram we can see the texture units at the bottom. </p> <p><a href="https://i.stack.imgur.com/kNz4b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kNz4b.png" alt="enter image description here"></a></p> <p>Clearly these are completely different from the TPCs shown in the first picture I posted, which, at least according to the diagram, have no extra functionality associated with them and are merely a container for two SMs. </p> <p>Now, despite the fact that you <em>can</em> address texture functionality within CUDA, you often don't need to. The texture units' fixed-function capabilities are not all that useful to neural nets; however, the spatially coherent texture memory is often <em>automatically</em> used by CUDA as an optimization even if you don't explicitly try to access it. In this way, TensorFlow still would not be "wasting" silicon. </p>
tensorflow|gpu|gpgpu
6
374,047
50,993,978
How to get Keras network to not output all 1s
<p>I have a bunch of images that look like this of someone playing a videogame (a simple game I created in Tkinter):</p> <p><a href="https://i.stack.imgur.com/8N4pe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8N4pe.png" alt="Ball falling in videogame; player&#39;s box is at the bottom"></a></p> <p>The idea of the game is that the user controls the box at the bottom of the screen in order to dodge the falling balls (they can only dodge left and right).</p> <p>My goal is to have the neural network output the position of the player on the bottom of the screen. If they're totally on the left, the neural network should output a <code>0</code>, if they're in the middle, a <code>.5</code>, and all the way right, a <code>1</code>, and all the values in-between.</p> <p>My images are 300x400 pixels. I stored my data very simply. I recorded each of the images and position of the player as a tuple for each frame in a 50-frame game. Thus my result was a list in the form <code>[(image, player position), ...]</code> with 50 elements. I then pickled that list.</p> <p>So in my code I try to create an extremely basic feed-forward network that takes in the image and outputs a value between 0 and 1 representing where the box on the bottom of the image is. But my neural network is only outputting 1s.</p> <p>What should I change in order to get it to train and output values close to what I want?</p> <p>Of course, here is my code:</p> <pre><code># machine learning code mostly from https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pickle

def pil_image_to_np_array(image):
    '''Takes an image and converts it to a numpy array'''
    # from https://stackoverflow.com/a/45208895
    # all my images are black and white, so I only need one channel
    return np.array(image)[:, :, 0:1]

def data_to_training_set(data):
    # split the list in the form [(frame 1 image, frame 1 player position), ...] into [[all images], [all player positions]]
    inputs, outputs = [list(val) for val in zip(*data)]
    for index, image in enumerate(inputs):
        # convert the PIL images into numpy arrays so Keras can process them
        inputs[index] = pil_image_to_np_array(image)
    return (inputs, outputs)

if __name__ == "__main__":
    # fix random seed for reproducibility
    np.random.seed(7)

    # load data
    # data will be in the form [(frame 1 image, frame 1 player position), (frame 2 image, frame 2 player position), ...]
    with open("position_data1.pkl", "rb") as pickled_data:
        data = pickle.load(pickled_data)
    X, Y = data_to_training_set(data)

    # get the width of the images
    width = X[0].shape[1]  # == 400

    # convert the player position (a value between 0 and the width of the image) to values between 0 and 1
    for index, output in enumerate(Y):
        Y[index] = output / width

    # flatten the image inputs so they can be passed to a neural network
    for index, inpt in enumerate(X):
        X[index] = np.ndarray.flatten(inpt)

    # keras expects an array (not a list) of image-arrays for input to the neural network
    X = np.array(X)
    Y = np.array(Y)

    # create model
    model = Sequential()
    # my images are 300 x 400 pixels, so each input will be a flattened array of 120000 gray-scale pixel values
    # keep it super simple by not having any deep learning
    model.add(Dense(1, input_dim=120000, activation='sigmoid'))

    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')

    # Fit the model
    model.fit(X, Y, epochs=15, batch_size=10)

    # see what the model is doing
    predictions = model.predict(X, batch_size=10)
    print(predictions) # this prints all 1s! # TODO fix
</code></pre> <p><strong>EDIT:</strong> print(Y) gives me:</p> <p><a href="https://i.stack.imgur.com/gaZo1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gaZo1.png" alt="&lt;code&gt;print(Y)&lt;/code&gt;"></a></p> <p>so it's definitely not all zeroes.</p>
<p>Of course, a deeper model might give you a better accuracy, but considering the fact that your images are simple, a pretty simple (shallow) model with only one hidden layer should give a medium to high accuracy. So here are the modifications you need to make this happen:</p> <ol> <li><p>Make sure <code>X</code> and <code>Y</code> are of type <code>float32</code> (currently, <code>X</code> is of type <code>uint8</code>):</p> <pre><code>X = np.array(X, dtype=np.float32)
Y = np.array(Y, dtype=np.float32)
</code></pre></li> <li><p>When training a neural network it would be much better to normalize the training data. Normalization helps the optimization process go smoothly and speeds up the convergence to a solution. It further prevents large values from causing large gradient updates, which would be disruptive. Usually, the values of each feature in the input data should fall in a small range, where two common ranges are <code>[-1,1]</code> and <code>[0,1]</code>. Therefore, to make sure that all values fall in the range <code>[-1,1]</code>, we subtract from each feature its mean and divide it by its standard deviation:</p> <pre><code>X_mean = X.mean(axis=0)
X -= X_mean
X_std = X.std(axis=0)
X /= X_std + 1e-8  # add a very small constant to prevent division by zero
</code></pre> <p>Note that we are normalizing each feature (i.e. each pixel in this case) here, not each image. When you want to predict on new data, i.e. in inference or testing mode, you need to subtract <code>X_mean</code> from test data and divide it by <code>X_std</code> (you should <strong>NEVER EVER</strong> subtract from test data its own mean or divide it by its own standard deviation; rather, use the mean and std of training data):</p> <pre><code>X_test -= X_mean
X_test /= X_std + 1e-8
</code></pre></li> <li><p>If you apply the changes in points one and two, you might notice that the network no longer predicts only ones or only zeros. Rather, it shows some faint signs of learning and predicts a mix of zeros and ones. This is not bad but it is far from good and we have high expectations! The predictions should be much better than a mix of only zeros and ones. Here, you should take into account the (forgotten!) learning rate. Since the network has a relatively large number of parameters considering a relatively simple problem (and there are few samples of training data), you should choose a smaller learning rate to smooth the gradient updates and the learning process:</p> <pre><code>from keras import optimizers
model.compile(loss='mean_squared_error', optimizer=optimizers.Adam(lr=0.0001))
</code></pre> <p>You would notice the difference: the loss value reaches around <code>0.01</code> after 10 epochs. And the network no longer predicts a mix of zeros and ones; rather the predictions are much more accurate and close to what they should be (i.e. <code>Y</code>).</p></li> <li><p>Don't forget! We have high (logical!) expectations. So, how can we do better without adding any new layers to the network (obviously, we assume that <a href="https://cdn-images-1.medium.com/max/600/1*QbekpmNE8lCvSQzHLTPDHQ.png" rel="nofollow noreferrer">adding more layers</a> <em>might</em> help!!)?</p> <p>4.1. Gather more training data.</p> <p>4.2. Add weight regularization.
Common ones are L1 and L2 regularization (I highly recommend the Jupyter <a href="https://github.com/fchollet/deep-learning-with-python-notebooks" rel="nofollow noreferrer">notebooks</a> of the book <a href="https://www.manning.com/books/deep-learning-with-python" rel="nofollow noreferrer"><em>Deep Learning with Python</em></a> written by <em>François Chollet</em>, the creator of Keras. Specifically, <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/4.4-overfitting-and-underfitting.ipynb" rel="nofollow noreferrer">here</a> is the one which discusses regularization.)</p></li> </ol> <ol start="5"> <li><p>You should always evaluate your model in a proper and unbiased way. Evaluating it on the training data (that you have used to train it) does not tell you anything about how well your model would perform on unseen (i.e. new or real world) data points (e.g. consider a model which stores or memorizes all the training data. It would perform perfectly on the training data, but it would be a useless model and perform poorly on new data). So we should have test and train datasets: we train the model on the training data and evaluate the model on the test (i.e. new) data. However, during the process of coming up with a good model you perform lots of experiments: for example, you first change the type and number of layers, train the model and then evaluate it on test data to make sure it is good. Then you change another thing, say the learning rate, train it again and then evaluate it again on test data... To make it short, these cycles of tuning and evaluation somehow cause a kind of over-fitting to the test data. Therefore, we would need a third dataset called <em>validation data</em> (read more: <a href="https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set">What is the difference between test set and validation set?</a>):</p> <pre><code># first shuffle the data to make sure it isn't in any particular order
indices = np.arange(X.shape[0])
np.random.shuffle(indices)
X = X[indices]
Y = Y[indices]

# you have 200 images
# we select 100 images for training,
# 50 images for validation and 50 images for test data
X_train = X[:100]
X_val = X[100:150]
X_test = X[150:]

Y_train = Y[:100]
Y_val = Y[100:150]
Y_test = Y[150:]

# train and tune the model
# you can train and tune the model multiple times,
# each time with different architecture, hyper-parameters, etc.
model.fit(X_train, Y_train, epochs=15, batch_size=10, validation_data=(X_val, Y_val))

# only after completing the tuning of your model
# you should evaluate it on the test data, just once
model.evaluate(X_test, Y_test)

# after you are satisfied with the model performance
# and want to deploy your model for production use (i.e. real world)
# you can train your model once more on the whole data available
# with the best configurations you have found out in your tunings
model.fit(X, Y, epochs=15, batch_size=10)
</code></pre> <p>(Actually, when we have few training data available it would be wasteful to separate validation and test data from the whole available data.
In this case, and if the model is not computationally expensive, instead of separating out a fixed validation set one can use cross-validation, e.g. K-fold cross-validation, or iterated K-fold cross-validation when there are very few data samples.)</p></li> </ol> <hr> <p>It is around 4 AM at the time of writing this answer and I am feeling sleepy, but I would like to mention one more thing which is not directly related to your question: by using the Numpy library and its functionalities and methods you can write more concise and efficient code and also save yourself a lot of time. So make sure you practice using it more, as it is heavily used in the machine learning community and libraries. To demonstrate this, here is the same code you have written but with more use of Numpy (<strong>Note that I have not applied all the changes I mentioned above in this code</strong>):</p> <pre><code># machine learning code mostly from https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pickle

def pil_image_to_np_array(image):
    '''Takes an image and converts it to a numpy array'''
    # from https://stackoverflow.com/a/45208895
    # all my images are black and white, so I only need one channel
    return np.array(image)[:, :, 0]

def data_to_training_set(data):
    # split the list in the form [(frame 1 image, frame 1 player position), ...] into [[all images], [all player positions]]
    inputs, outputs = zip(*data)
    inputs = [pil_image_to_np_array(image) for image in inputs]
    inputs = np.array(inputs, dtype=np.float32)
    outputs = np.array(outputs, dtype=np.float32)
    return (inputs, outputs)

if __name__ == "__main__":
    # fix random seed for reproducibility
    np.random.seed(7)

    # load data
    # data will be in the form [(frame 1 image, frame 1 player position), (frame 2 image, frame 2 player position), ...]
    with open("position_data1.pkl", "rb") as pickled_data:
        data = pickle.load(pickled_data)
    X, Y = data_to_training_set(data)

    # get the width of the images
    width = X.shape[2]  # == 400

    # convert the player position (a value between 0 and the width of the image) to values between 0 and 1
    Y /= width

    # flatten the image inputs so they can be passed to a neural network
    X = np.reshape(X, (X.shape[0], -1))

    # create model
    model = Sequential()
    # my images are 300 x 400 pixels, so each input will be a flattened array of 120000 gray-scale pixel values
    # keep it super simple by not having any deep learning
    model.add(Dense(1, input_dim=120000, activation='sigmoid'))

    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')

    # Fit the model
    model.fit(X, Y, epochs=15, batch_size=10)

    # see what the model is doing
    predictions = model.predict(X, batch_size=10)
    print(predictions) # this prints all 1s! # TODO fix
</code></pre>
python|tensorflow|machine-learning|neural-network|keras
1
374,048
51,027,339
DataFrame - table in table from nested dictionary
<p>I use python 3.</p> <p>This is my data structure:</p> <pre><code>dictionary = {
    'HexaPlex x50': {
        'Vendor': 'Dell Inc.',
        'BIOS Version': '12.72.9',
        'Newest BIOS': '12.73.9',
        'Against M &amp; S': 'Yes',
        'W10 Support': 'Yes',
        'Computers': {
            'someName001': '12.72.9',
            'someName002': '12.73.9',
            'someName003': '12.73.9'
        },
        'Mapped Category': ['SomeOtherCategory']
    },
    ...
}
</code></pre> <p>I have managed to create a table that displays columns created from keys of the first nested dictionary (which starts with <code>'Vendor'</code>). The row name is <code>'HexaPlex x50'</code>. One of the columns contains computers with a number, i.e. the nested dictionary:</p> <pre><code>{'someName001': '12.72.9', 'someName002': '12.73.9', 'someName003': '12.73.9'}
</code></pre> <p>I would like to be able to have the key values pairs inside the table in the cell under column <code>'Computers'</code>, in effect a nested table.</p> <p>ATM it looks like this:</p> <p><a href="https://i.stack.imgur.com/ugvJ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ugvJ3.png" alt="Screenshot of current table display"></a></p> <p>The table should look somewhat like this</p> <p><a href="https://i.stack.imgur.com/9F56D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9F56D.png" alt="Screenshot of preferred table with one dictionary entry per row"></a></p> <p>How can I achieve this?</p> <p>Further, I would like to color the numbers or the cell that has a lower BIOS version than the newest one.</p> <p>I also face the problem that in one case the dictionary that contains the computers is so large that it gets abbreviated even though I have set <code>pd.set_option('display.max_colwidth', -1)</code>. This looks like so:</p> <p><a href="https://i.stack.imgur.com/zAD7b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zAD7b.png" alt="Close-up of dictionary string"></a></p>
<p>As already emphasized in the comments, pandas does not support "sub-dataframes". For the sake of KISS, I would recommend duplicating those rows (or to manage two separate tables... if really necessary).</p> <p>The answers in the question you referred to (<a href="https://stackoverflow.com/questions/39640936/parsing-a-dictionary-in-a-pandas-dataframe-cell-into-new-row-cells-new-columns">parsing a dictionary in a pandas dataframe cell into new row cells (new columns)</a>) result in <em>new</em> (frame-wide) columns for <em>each</em> (row-local) "computer name". I doubt that this is what you aim for, considering your domain model.</p> <hr> <p>The abbreviation of pandas can be circumvented by using another output engine, e.g. <a href="https://pypi.org/project/tabulate/" rel="nofollow noreferrer">tabulate</a> (<a href="https://stackoverflow.com/questions/18528533/pretty-printing-a-pandas-dataframe">Pretty Printing a pandas dataframe</a>):</p> <pre><code># standard pandas output
      Vendor BIOS Version Newest BIOS Against M &amp; S W10 Support     Computer Location      ...     Category4     Category5     Category6     Category7     Category8     Category9     Category0
0  Dell Inc.      12.72.9     12.73.9           Yes         Yes  someName001  12.72.9      ...  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory
1  Dell Inc.      12.72.9     12.73.9           Yes         Yes  someName002  12.73.9      ...  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory
2  Dell Inc.      12.73.9     12.73.9           Yes         Yes  someName003  12.73.9      ...  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory

[3 rows x 17 columns]

# tabulate psql (with headers)
+----+------------+----------------+---------------+-----------------+---------------+-------------+------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+
|    | Vendor     | BIOS Version   | Newest BIOS   | Against M &amp; S   | W10 Support   | Computer    | Location   | Category1    | Category2    | Category3    | Category4    | Category5    | Category6    | Category7    | Category8    | Category9    | Category0    |
|----+------------+----------------+---------------+-----------------+---------------+-------------+------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------|
|  0 | Dell Inc.  | 12.72.9        | 12.73.9       | Yes             | Yes           | someName001 | 12.72.9    | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory |
|  1 | Dell Inc.  | 12.72.9        | 12.73.9       | Yes             | Yes           | someName002 | 12.73.9    | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory |
|  2 | Dell Inc.  | 12.73.9        | 12.73.9       | Yes             | Yes           | someName003 | 12.73.9    | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory |
+----+------------+----------------+---------------+-----------------+---------------+-------------+------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+

# tabulate psql
+---+------------+---------+---------+-----+-----+-------------+---------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+
| 0 | Dell Inc.  | 12.72.9 | 12.73.9 | Yes | Yes | someName001 | 12.72.9 | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory |
| 1 | Dell Inc.  | 12.72.9 | 12.73.9 | Yes | Yes | someName002 | 12.73.9 | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory |
| 2 | Dell Inc.  | 12.73.9 | 12.73.9 | Yes | Yes | someName003 | 12.73.9 | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory | SomeCategory |
+---+------------+---------+---------+-----+-----+-------------+---------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+--------------+

# tabulate plain
    Vendor     BIOS Version    Newest BIOS    Against M &amp; S    W10 Support    Computer     Location    Category1     Category2     Category3     Category4     Category5     Category6     Category7     Category8     Category9     Category0
 0  Dell Inc.  12.72.9         12.73.9        Yes              Yes            someName001  12.72.9     SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory
 1  Dell Inc.  12.72.9         12.73.9        Yes              Yes            someName002  12.73.9     SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory
 2  Dell Inc.  12.73.9         12.73.9        Yes              Yes            someName003  12.73.9     SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory  SomeCategory
</code></pre> <p>You could also use some <code>groupBy(..).apply(..)</code> + string magic to produce a string representation which simply hides the duplicates:</p> <pre><code># tabulate + merge manually
+----+--------------+------------+----------------+---------------+-----------------+---------------+-------------+------------+--------------+--------------+
|    | Type         | Vendor     | BIOS Version   | Newest BIOS   | Against M &amp; S   | W10 Support   | Computer    | Location   | Category1    | Category2    |
|----+--------------+------------+----------------+---------------+-----------------+---------------+-------------+------------+--------------+--------------|
|  0 | HexaPlex x50 | Dell Inc.  | 12.72.9        | 12.73.9       | Yes             | Yes           | someName001 | 12.72.9    | SomeCategory | SomeCategory |
|    |              |            | 12.72.9        |               |                 |               | someName002 | 12.73.9    |              |              |
|    |              |            | 12.73.9        |               |                 |               | someName003 | 12.73.9    |              |              |
+----+--------------+------------+----------------+---------------+-----------------+---------------+-------------+------------+--------------+--------------+
</code></pre> <hr> <p>Styled output can be generated via the new <a href="https://pandas.pydata.org/pandas-docs/stable/style.html" rel="nofollow noreferrer">Styling API</a> which is still <strong>provisional and under development</strong>:</p> <p><a href="https://i.stack.imgur.com/n13cS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n13cS.png" alt="styled pandas output with some cells highlighted in red"></a></p> <p>Again, you can use some logic to 'merge' consecutively redundant values in a column (quick example, I assume some more effort could result in much nicer output):</p> <p><a href="https://i.stack.imgur.com/TWW4F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TWW4F.png" alt="styled pandas output with some cells highlighted in red and some hidden"></a></p> <hr> <p><strong>Code</strong> for the above examples</p> <pre><code>import pandas as pd
from tabulate import tabulate
import functools

def pprint(df, headers=True, fmt='psql'):
    # https://stackoverflow.com/questions/18528533/pretty-printing-a-pandas-dataframe
    print(tabulate(df, headers='keys' if headers else '', tablefmt=fmt))

df = pd.DataFrame({
    'Type': ['HexaPlex x50'] * 3,
    'Vendor': ['Dell Inc.'] * 3,
    'BIOS Version': ['12.72.9', '12.72.9', '12.73.9'],
    'Newest BIOS': ['12.73.9'] * 3,
    'Against M &amp; S': ['Yes'] * 3,
    'W10 Support': ['Yes'] * 3,
    'Computer': ['someName001', 'someName002', 'someName003'],
    'Location': ['12.72.9', '12.73.9', '12.73.9'],
    'Category1': ['SomeCategory'] * 3,
    'Category2': ['SomeCategory'] * 3,
    'Category3': ['SomeCategory'] * 3,
    'Category4': ['SomeCategory'] * 3,
    'Category5': ['SomeCategory'] * 3,
    'Category6': ['SomeCategory'] * 3,
    'Category7': ['SomeCategory'] * 3,
    'Category8': ['SomeCategory'] * 3,
    'Category9': ['SomeCategory'] * 3,
    'Category0': ['SomeCategory'] * 3,
})

print("# standard pandas print")
print(df)
print("\n# tabulate tablefmt=psql (with headers)")
pprint(df)
print("\n# tabulate tablefmt=psql")
pprint(df, headers=False)
print("\n# tabulate tablefmt=plain")
pprint(df, fmt='plain')

def merge_cells_for_print(rows, ls='\n'):
    result = pd.DataFrame()
    for col in rows.columns:
        vals = rows[col].values
        if all([val == vals[0] for val in vals]):
            result[col] = [vals[0]]
        else:
            result[col] = [ls.join(vals)]
    return result

print("\n# tabulate + merge manually")
pprint(df.groupby('Type').apply(merge_cells_for_print).reset_index(drop=True))

# https://pandas.pydata.org/pandas-docs/stable/style.html
# https://pandas.pydata.org/pandas-docs/version/0.22.0/generated/pandas.io.formats.style.Styler.apply.html#pandas.io.formats.style.Styler.apply
def highlight_lower(ref, col):
    return [f'color: {"red" if hgl else ""}' for hgl in col &lt; ref]

def merge_duplicates(col):
    vals = col.values
    return [''] + ['color: transparent' if curr == pred else '' for pred, curr in zip(vals[1:], vals)]

with open('only_red.html', 'w+') as f:
    style = df.style
    style = style.apply(functools.partial(highlight_lower, df['Newest BIOS']), subset=['BIOS Version'])
    f.write(style.render())

with open('red_and_merged.html', 'w+') as f:
    style = df.style
    style = style.apply(functools.partial(highlight_lower, df['Newest BIOS']), subset=['BIOS Version'])
    style = style.apply(merge_duplicates)
    f.write(style.render())
</code></pre>
python|html|python-3.x|pandas
2
374,049
20,490,994
How to avoid rate-limiting 429 error in Twython
<p>I've created a function designed to run through a column of Twitter handles in a pandas dataframe, yet it always seems to hit a rate-limiting error after just 14 calls.</p> <p>Here's the code.</p> <pre><code>def poll_twitter(dfr):
    followers = twitter.get_followers_ids(screen_name = dfr['handle'])
    time.sleep(5)
    print "looping..."
    return len(followers['ids'])

df[datetime.datetime.today()] = df.apply(poll_twitter, axis=1)
</code></pre> <p>Here's the error</p> <p>TwythonRateLimitError: (u'Twitter API returned a 429 (Too Many Requests), Rate limit exceeded'</p> <p>The list is only 100 handles so I assumed there would be plenty of calls available.</p> <p>What's the way of fixing it?</p>
<p>Twitter's GET followers/ids <a href="https://dev.twitter.com/docs/api/1.1/get/followers/ids" rel="nofollow">endpoint</a> in API version 1.1 has a limit of 15 requests per 15-minute window, i.e. about 60 requests per hour.</p> <p>Note also that it returns up to 5000 ids per request, so you have to issue more requests for highly followed users. For example, loading only <a href="https://twitter.com/BarackObama" rel="nofollow">Barack Obama</a>'s followers list will take <code>40434976/(5000*60*24) = 5.62</code> days.</p>
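<p>A rough sketch of how you might pace the calls so that window limit is never hit, reusing the <code>twitter</code> client and dataframe layout from the question (the 61-second sleep is an assumption chosen so that at most ~15 calls fall in any 15-minute window):</p> <pre><code>import time
from twython import TwythonRateLimitError

def poll_twitter(dfr):
    while True:
        try:
            followers = twitter.get_followers_ids(screen_name=dfr['handle'])
            time.sleep(61)  # ~15 calls per 15-minute window keeps us under the limit
            return len(followers['ids'])
        except TwythonRateLimitError:
            time.sleep(15 * 60)  # wait out the current window, then retry
</code></pre>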
python|twitter|pandas|twython
3
374,050
20,862,068
Merging Pandas DataFrames with the same column name
<p>I have a dataset, let's say:</p> <pre><code>Column with duplicates  value1  value2
1                       5       0
1                       0       9
</code></pre> <p>And what I want</p> <pre><code>Column with duplicates  value1  value2
1                       5       9
</code></pre> <p>I cannot figure out how to get this to work. The closest I got was using merge, but that left me with different suffixes.</p> <p>Any ideas?</p> <p>My real data looks like:</p> <pre><code>trial  Time       1   2    3    4
1      '0-100'    0   100  0    0
1      '0-100'    32  0    0    0
1      '100-200'  0   0    100  0
.
.
.
2      '0-100'    0   100  0    0
</code></pre> <p>I want to keep the trials separate, and just merge the Times</p>
<p>IIUC, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>groupby</code></a> and then aggregate:</p> <pre><code>&gt;&gt;&gt; df
   Column with duplicates  value1  value2
0                       1       5       0
1                       1       0       9

[2 rows x 3 columns]

&gt;&gt;&gt; df.groupby("Column with duplicates", as_index=False).sum()
   Column with duplicates  value1  value2
0                       1       5       9

[1 rows x 3 columns]
</code></pre> <hr> <p>On the OP's updated example:</p> <pre><code>&gt;&gt;&gt; df
   trial       Time   1    2    3  4
0      1    '0-100'   0  100    0  0
1      1    '0-100'  32    0    0  0
2      1  '100-200'   0    0  100  0
3      2    '0-100'   0  100    0  0

[4 rows x 6 columns]

&gt;&gt;&gt; df.groupby("trial", as_index=False).sum()
   trial   1    2    3  4
0      1  32  100  100  0
1      2   0  100    0  0

[2 rows x 5 columns]
</code></pre>
python|pandas
2
374,051
20,478,949
How to force larger steps on scipy.optimize functions?
<p>I have a function <code>compare_images(k, a, b)</code> that compares two 2d-arrays <code>a</code> and <code>b</code></p> <p>Inside the funcion, I apply a <code>gaussian_filter</code> with <code>sigma=k</code> to <code>a</code> My idea is to estimate how much I must to smooth image <code>a</code> in order for it to be similar to image <code>b</code></p> <p>The problem is my function <code>compare_images</code> will only return different values if <code>k</code> variation is over <code>0.5</code>, and if I do <code>fmin(compare_images, init_guess, (a, b)</code> it usually get stuck to the <code>init_guess</code> value.</p> <p>I believe the problem is <code>fmin</code> (and <code>minimize</code>) tends to start with very small steps, which in my case will reproduce the exact same return value for <code>compare_images</code>, and so the method thinks it already found a minimum. It will only try a couple times.</p> <p>Is there a way to force <code>fmin</code> or any other minimizing function from <code>scipy</code> to take larger steps? Or is there any method better suited for my need?</p> <p><strong>EDIT:</strong> I found a temporary solution. First, as recommended, I used <code>xtol=0.5</code> and higher as an argument to <code>fmin</code>. Even then, I still had some problems, and a few times <code>fmin</code> would return <code>init_guess</code>. I then created a simple loop so that if <code>fmin == init_guess</code>, I would generate another, random <code>init_guess</code> and try it again.</p> <p>It's pretty slow, of course, but now I got it to run. It will take 20h or so to run it for all my data, but I won't need to do it again.</p> <p>Anyway, to better explain the problem for those still interested in finding a better solution:</p> <ul> <li>I have 2 images, <code>A</code> and <code>B</code>, containing some scientific data.</li> <li><code>A</code> looks like a few dots with variable value (it's a matrix of in which each valued point represents where a event occurred and it's intensity)</li> <li><code>B</code> looks like a smoothed heatmap (it is the observed density of occurrences)</li> <li><code>B</code> looks just like if you applied a gaussian filter to <code>A</code> with a bit of semi-random noise.</li> <li>We are approximating <code>B</code> by applying a gaussian filter with constant <code>sigma</code> to <code>A</code>. This <code>sigma</code> was chosen visually, but only works for a certain class of images. </li> <li>I'm trying to obtain an optimal <code>sigma</code> for each image, so later I could find some relations of <code>sigma</code> and the class of event showed in each image.</li> </ul> <p>Anyway, thanks for the help!</p>
<p>Quick check: you probably really meant <code>fmin(compare_images, init_guess, (a,b))</code>?</p> <p>If <code>gaussian_filter</code> behaves as you say, your function is piecewise constant, meaning that optimizers relying on derivatives (i.e. most of them) are out. You can try a global optimizer like <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.anneal.html" rel="nofollow">anneal</a>, or brute-force search over a sensible range of <code>k</code>'s.</p> <p>However, as you described the problem, in general there will only be a clear, global minimum of <code>compare_images</code> if <code>b</code> is a smoothed version of <code>a</code>. Your approach makes sense if you want to determine the amount of smoothing of <code>a</code> that makes both images most similar.</p> <p>If the question is "how similar are the images", then I think pixelwise comparison (maybe with a bit of smoothing) is the way to go. Depending on what images we are talking about, it might be necessary to align the images first (e.g. for comparing photographs). Please clarify :-)</p> <p><strong>edit</strong>: Another idea that might help: rewrite compare_images so that it calculates two versions of smoothed-<code>a</code> -- one with sigma=<code>floor(k)</code> and one with <code>ceil(k)</code> (i.e. round k to the next-lower/higher int). Then calculate <code>a_smooth = a_floor*(1-kfrac)+a_ceil*kfrac</code>, with <code>kfrac</code> being the fractional part of <code>k</code>. This way the compare function becomes continuous w.r.t <code>k</code>.</p> <p>Good Luck!</p>
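<p>To make that edit concrete, here is a minimal sketch of the floor/ceil blending idea. The sum-of-squared-differences metric and all names here are placeholders, not taken from the original code, and the objective is clamped at <code>k = 0</code> since a negative sigma is invalid:</p> <pre><code>import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import fmin

def blended_compare(k, a, b):
    k = max(float(np.atleast_1d(k)[0]), 0.0)  # fmin passes a length-1 array
    k_floor, k_ceil = np.floor(k), np.ceil(k)
    kfrac = k - k_floor
    # blend the two neighbouring integer sigmas so the objective is continuous in k
    a_floor = gaussian_filter(a, sigma=k_floor)
    a_ceil = gaussian_filter(a, sigma=k_ceil)
    a_smooth = a_floor * (1 - kfrac) + a_ceil * kfrac
    return np.sum((a_smooth - b) ** 2)  # placeholder similarity metric

# k_opt = fmin(blended_compare, 2.0, args=(a, b))
</code></pre>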
python|optimization|numpy|scipy|gaussian
4
374,052
20,808,393
Python: Defining a minimum bounding rectangle
<p>I have data in the following format, a list of 2d x,y coordinates:</p> <pre><code>[(6, 7), (2, 4), (8, 9), (3, 7), (5, 4), (9, 9)]
</code></pre> <p>and I'm trying to iterate through the list to find the minimum bounding box in the format [(minx,miny),(maxx,miny),(maxx,maxy),(minx,maxy)]</p> <p>Thus I've written the code below, however it doesn't appear to be working, possibly because I'm passing it a list rather than an array. The input and code are below; print coords returns the list previously mentioned:</p> <pre><code>os.chdir("filepath")
with open ('file.csv') as csvfile:
    coords = [(int(x), int(y)) for x, y in csv.reader(csvfile, delimiter= ',')]
    print coords

def bounding_box(coords):
    min_x, min_y = numpy.min(coords[0], axis=0)
    max_x, max_y = numpy.max(coords[0], axis=0)
    return numpy.array(
        [(min_x, min_y), (max_x, min_y), (max_x, max_y), (min_x, max_y)])
</code></pre> <p>Could anyone point me in the right direction? Feel free to disregard the entire code above if it's not helping.</p>
<p>Why don't you just iterate through the list with four counters: <code>min_x</code>, <code>min_y</code>, <code>max_x</code>, and <code>max_y</code></p> <pre><code>def bounding_box(coords):
    min_x = 100000 # start with something much higher than expected min
    min_y = 100000
    max_x = -100000 # start with something much lower than expected max
    max_y = -100000

    for item in coords:
        if item[0] &lt; min_x:
            min_x = item[0]
        if item[0] &gt; max_x:
            max_x = item[0]
        if item[1] &lt; min_y:
            min_y = item[1]
        if item[1] &gt; max_y:
            max_y = item[1]

    return [(min_x,min_y),(max_x,min_y),(max_x,max_y),(min_x,max_y)]
</code></pre> <p>Using your example input:</p> <pre><code>bounding_box([(6, 7), (2, 4), (8, 9), (3, 7), (5, 4), (9, 9)])
&gt;&gt; [(2, 4), (9, 4), (9, 9), (2, 9)]
</code></pre>
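<p>Since the question is tagged numpy, a vectorized sketch of the same thing, assuming the coordinate list is first turned into an (N, 2) array:</p> <pre><code>import numpy as np

def bounding_box_np(coords):
    arr = np.asarray(coords)        # shape (N, 2)
    min_x, min_y = arr.min(axis=0)  # column-wise minima
    max_x, max_y = arr.max(axis=0)  # column-wise maxima
    return [(min_x, min_y), (max_x, min_y), (max_x, max_y), (min_x, max_y)]
</code></pre>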
python|list|numpy|2d|gis
2
374,053
33,229,140
How do I drop a table in SQLAlchemy when I don't have a table object?
<p>I want to drop a table (if it exists) before writing some data in a Pandas dataframe:</p> <pre><code>def store_sqlite(in_data, dbpath = 'my.db', table = 'mytab'):
    database = sqlalchemy.create_engine('sqlite:///' + dbpath)

    ## DROP TABLE HERE

    in_data.to_sql(name = table, con = database, if_exists = 'append')
    database.close()
</code></pre> <p>The SQLAlchemy documentation all points to a <code>Table.drop()</code> object - how would I create that object, or equivalently is there an alternative way to drop this table?</p> <p><strong>Note</strong> : I can't just use <code>if_exists = 'replace'</code> as the input data is actually a dict of DataFrames which I loop over - I've suppressed that code for clarity (I hope). </p>
<p>From the pandas docs:</p> <p>"You can also run a plain query without creating a dataframe with execute(). This is useful for queries that don’t return values, such as INSERT. This is functionally equivalent to calling execute on the SQLAlchemy engine or db connection object."</p> <p><a href="http://pandas.pydata.org/pandas-docs/version/0.18.0/io.html#id3" rel="noreferrer">http://pandas.pydata.org/pandas-docs/version/0.18.0/io.html#id3</a></p> <p>So I do this;</p> <pre><code>from pandas.io import sql
sql.execute('DROP TABLE IF EXISTS %s'%table, engine)
sql.execute('VACUUM', engine)
</code></pre> <p>Where "engine" is the SQLAlchemy database object (the OP's "database" above). Vacuum is optional, just reduces the size of the sqlite file (I use the table drop part infrequently in my code).</p>
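<p>If you would rather skip the pandas sql helpers, a sketch of the same drop issued directly against the engine. Note this is written against a newer SQLAlchemy API than the answer above (the question's <code>my.db</code>/<code>mytab</code> names are reused for illustration), so treat it as an assumption if you are on an older version:</p> <pre><code>from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///my.db')
with engine.connect() as conn:
    conn.execute(text('DROP TABLE IF EXISTS mytab'))
    conn.commit()  # needed on SQLAlchemy 2.x, where connections no longer autocommit
</code></pre>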
python|pandas|sqlalchemy
10
374,054
33,344,359
How to add conditional columns to pandas df
<p>I want to create a column in a dataframe that is conditionally filled with values. Basically my dataframe looks like this</p> <pre><code>       Origin  X
0   Guatemala  x
1       China  x
2       Kenya  x
3   Venezuela  x
4  Bangladesh  x
</code></pre> <p>What I want to do now is create an additional column 'Continent', which adds the continent dependent on the country. My result would look like this:</p> <pre><code>       Origin  X      Continent
0   Guatemala  x  South america
1       China  x           Asia
2       Kenya  x         Africa
3   Venezuela  x  South america
4  Bangladesh  x           Asia
</code></pre> <p>I have tried the following codes to achieve what I want:</p> <pre><code>def GetContinents(x):
    if x['Origin']== 'Thailand' or 'Indonesia' or 'China' or 'Japan' or 'Bangladesh':
        return 'Asia'
    elif x['Origin']== 'Boliva' or 'Guatemala' or 'Venezuela' or 'Mexico' or 'Argentinia':
        return 'South America'
    elif x['Origin']== 'Guinea Bissau' or 'Egypt' or 'Zaire' or 'Kenya':
        return 'Africa'
    else:
        return 'unknown'

df['Continent']= df.apply(GetContinents, axis=1)
</code></pre> <p>This one fills all the columns in 'continent' with 'Asia' mysteriously.</p> <pre><code>df['Continent'] = np.where(df['Origin'] == 'Bangladesh', 'Asia', 'unknown')
</code></pre> <p>This one works fine in terms that it fills 'Asia' into the right column and unknown into all others, but when I try to make something like <code>df['Continent'] = np.where(df['Origin'] == 'Bangladesh' or 'China', 'Asia', 'unknown')</code> I get an error. </p> <p>So basically my question is: how can I fulfill my if condition with different values? </p>
<p>You can construct the lists for each continent and <code>apply</code> a func:</p> <pre><code>In [35]:
asia = ['Thailand','Indonesia','China','Japan','Bangladesh']
south_america = ['Boliva' , 'Guatemala' , 'Venezuela' , 'Mexico' , 'Argentinia']
africa = [ 'Guinea Bissau' , 'Egypt' , 'Zaire' , 'Kenya']

def find_continent(x):
    if x in asia:
        return 'Asia'
    elif x in south_america:
        return 'South America'
    elif x in africa:
        return 'Africa'
    else:
        return 'Unknown'

df['Continent'] = df['Origin'].apply(find_continent)
df

Out[35]:
       Origin  X      Continent
0   Guatemala  x  South America
1       China  x           Asia
2       Kenya  x         Africa
3   Venezuela  x  South America
4  Bangladesh  x           Asia
</code></pre> <p>Or if you have a much larger df then you can just make successive calls using <code>isin</code> and mask the rows using <code>loc</code>:</p> <pre><code>In [38]:
df.loc[df['Origin'].isin(asia),'Continent'] = 'Asia'
df.loc[df['Origin'].isin(south_america),'Continent'] = 'South America'
df.loc[df['Origin'].isin(africa),'Continent'] = 'Africa'
df['Continent'] = df['Continent'].fillna('Unknown')
df

Out[38]:
       Origin  X      Continent
0   Guatemala  x  South America
1       China  x           Asia
2       Kenya  x         Africa
3   Venezuela  x  South America
4  Bangladesh  x           Asia
</code></pre> <p>As to why your attempts didn't work:</p> <pre><code>if x['Origin']== 'Thailand' or 'Indonesia' or 'China' or 'Japan' or 'Bangladesh'
</code></pre> <p>This returns <code>True</code> because <code>or 'Indonesia'</code> is always <code>True</code> so all rows get set to Asia.</p> <p>You should change it to like this:</p> <pre><code>if x['Origin'] in ('Thailand' , 'Indonesia' , 'China' , 'Japan' , 'Bangladesh'):
</code></pre> <p>See related: <a href="https://stackoverflow.com/questions/15112125/how-do-i-test-one-variable-against-multiple-values">How do I test one variable against multiple values?</a></p> <p>Using <code>np.where</code> would be fine but you're not masking the rows so you continuously overwrite the rows so only the last op persists.</p>
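<p>Another common idiom for this kind of lookup is a plain dict plus <code>Series.map</code>, which avoids the per-row function call entirely. A small sketch reusing the <code>asia</code>/<code>south_america</code>/<code>africa</code> lists defined above:</p> <pre><code># build a single country -&amp;gt; continent lookup table
continent_of = {country: 'Asia' for country in asia}
continent_of.update({country: 'South America' for country in south_america})
continent_of.update({country: 'Africa' for country in africa})

# map() leaves unmatched countries as NaN, which fillna turns into 'Unknown'
df['Continent'] = df['Origin'].map(continent_of).fillna('Unknown')
</code></pre>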
python|if-statement|pandas|conditional|dataframe
1
374,055
33,424,503
Pandas usecols all except last
<p>I have a csv file, is it possible to have <code>usecols</code> take all columns except the last one when utilizing <code>read_csv</code> without listing every column needed.</p> <p>For example, if I have a 13 column file, I can do <code>usecols=[0,1,...,10,11]</code>. Doing <code>usecols=[:-1]</code> will give me syntax error?</p> <p>Is there another alternative? I'm using <code>pandas 0.17</code></p>
<p>Starting from version <code>0.20</code> the <code>usecols</code> argument of <code>read_csv</code> in pandas accepts a callable filter, i.e. a <code>lambda</code> expression. Hence if you know the name of the column you want to skip you can do as follows:</p> <pre><code>columns_to_skip = ['foo','bar']
df = pd.read_csv(file, usecols=lambda x: x not in columns_to_skip)
</code></pre> <p>Here's the documentation <a href="https://pandas.pydata.org/pandas-docs/stable/io.html#filtering-columns-usecols" rel="noreferrer">reference</a>. </p>
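<p>If the goal is literally "everything except the last column" and the header names are not known in advance, one alternative is to read everything and drop the last column afterwards; it costs a little extra parsing but needs no column names (the file name below is a placeholder):</p> <pre><code>import pandas as pd

df = pd.read_csv('file.csv')
df = df.iloc[:, :-1]  # keep all rows, drop only the last column
</code></pre>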
python|pandas
17
374,056
33,189,971
Numpy loadtxt works with urllib2 response but not requests response
<p>I am attempting to load a csv file from a url such as <a href="http://real-chart.finance.yahoo.com/table.csv?s=PXD&amp;d=9&amp;e=17&amp;f=2015&amp;g=d&amp;a=11&amp;b=12&amp;c=1970&amp;ignore=.csv" rel="nofollow">http://real-chart.finance.yahoo.com/table.csv?s=PXD&amp;d=9&amp;e=17&amp;f=2015&amp;g=d&amp;a=11&amp;b=12&amp;c=1970&amp;ignore=.csv</a> into a numpy array using loadtxt. If I use urllib2.urlopen(url) this works fine but I am running into errors with requests.get(url)</p> <p>Example:</p> <pre><code>file = urllib2.urlopen(url)
Date,Open,High,Low,Close,Volume,AdjClose = np.loadtxt(file, unpack=True, delimiter=',', skiprows=1,
                                                      converters={0:mdates.strpdate2num('%Y-%m-%d')})
print AdjClose[:5]
</code></pre> <p>Returns:</p> <pre><code>[ 140.570007  136.580002  133.240005  131.889999]
</code></pre> <p>However doing the same with:</p> <pre><code>file = requests.get(url)
</code></pre> <p>Results in errors such as this no matter what changes I have been attempting to make to the parameters:</p> <pre><code>ValueError: time data '33.00' does not match format '%Y-%m-%d'
</code></pre> <p>Adding .text to requests.get(url) results in one long string of the entire data set along with \n characters. </p> <p>Any ideas? Thanks!</p>
<p>With <code>requests</code>, I think you need to explicitly iterate over the lines of the response, otherwise <code>loadtxt</code> won't pick up the individual rows properly. Try:</p> <pre><code>Date,Open,High,Low,Close,Volume,AdjClose = np.loadtxt(file.iter_lines(), unpack=True, delimiter=',', skiprows=1, converters={0:mdates.strpdate2num('%Y-%m-%d')}) </code></pre>
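<p>For completeness, a self-contained sketch of the requests route; <code>io.StringIO</code> also works since <code>loadtxt</code> accepts any iterable of lines. The URL variable and column layout follow the question, and the <code>strpdate2num</code> converter is kept from the question even though it has been deprecated in more recent matplotlib releases:</p> <pre><code>import io
import requests
import numpy as np
import matplotlib.dates as mdates

response = requests.get(url)  # `url` is the Yahoo CSV endpoint from the question
data = np.loadtxt(io.StringIO(response.text), unpack=True, delimiter=',', skiprows=1,
                  converters={0: mdates.strpdate2num('%Y-%m-%d')})
Date, Open, High, Low, Close, Volume, AdjClose = data
print(AdjClose[:5])
</code></pre>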
python|csv|numpy|python-requests|urllib2
1
374,057
33,359,411
Mean Absolute Error - Python
<p>I'm new to Python.</p> <p>I have to implement a function that can calculate the MAE between 2 images.</p> <p>Here is the MAE formula I have learnt: <a href="https://i.stack.imgur.com/Qegha.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qegha.png" alt="enter image description here"></a></p> <p>Here is my code:</p> <pre><code>def calculateMAE(imageA, imageB):
    """
    Calculate MAE between 2 images
    np: numpy
    """
    mae = np.sum(imageB.astype("float") - imageA.astype("float"))
    mae /= float(imageA.shape[0] * imageA.shape[1] * 255)
    if (mae &lt; 0):
        return mae * -1
    else:
        return mae
</code></pre> <p>Can anyone tell me if my function is right? Thanks in advance!</p>
<p>The absolute sign in the mean absolute error is applied to each entry in the sum, so you can't check whether <code>mae &lt; 0</code> after you have summed it up - you need to put it inside the sum!</p> <p>Hence you should have something like</p> <pre><code>mae = np.sum(np.absolute(imageB.astype("float") - imageA.astype("float")))
</code></pre> <p>Where <code>np.absolute(matrix)</code> calculates the absolute value element-wise.</p>
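<p>Putting it together with the question's normalization, a corrected version of the whole function might look like this; the division by 255 is kept from the question, and <code>np.mean</code> replaces the manual sum-and-divide:</p> <pre><code>import numpy as np

def calculateMAE(imageA, imageB):
    """Mean absolute error between two equally-sized images."""
    diff = np.absolute(imageB.astype("float") - imageA.astype("float"))
    return np.mean(diff) / 255.0  # scale the per-pixel error into [0, 1]
</code></pre>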
python|numpy
21
374,058
33,342,702
how to convert a string type to date format
<p>My source data has a column including the date information but it is a string type. Typical lines are like this:</p> <pre><code>04 13, 2013
07 1, 2012
</code></pre> <p>I am trying to convert to a date format, so I used panda's <code>to_datetime</code> function:</p> <pre><code>df['ReviewDate_formated'] = pd.to_datetime(df['ReviewDate'],format='%mm%d, %yyyy')
</code></pre> <p>But I got this error message:</p> <pre><code>ValueError: time data '04 13, 2013' does not match format '%mm%d, %yyyy' (match)
</code></pre> <p>My questions are:</p> <ol> <li><p>How do I convert to a date format?</p></li> <li><p>I also want to extract to Month and Year and Day columns because I need to do some month over month comparison? But the problem here is the length of the string varies.</p></li> </ol>
<p>Your format string is incorrect, you want <code>'%m %d, %Y'</code>, there is a <a href="http://strftime.org/" rel="nofollow">reference</a> that shows what the valid format identifiers are:</p> <pre><code>In [30]:
import io
import pandas as pd
t="""ReviewDate
04 13, 2013
07 1, 2012"""
df = pd.read_csv(io.StringIO(t), sep=';')
df

Out[30]:
    ReviewDate
0  04 13, 2013
1   07 1, 2012

In [31]:
pd.to_datetime(df['ReviewDate'], format='%m %d, %Y')

Out[31]:
0   2013-04-13
1   2012-07-01
Name: ReviewDate, dtype: datetime64[ns]
</code></pre> <p>To answer the second part, once the dtype is a <code>datetime64</code> then you can call the vectorised <code>dt</code> accessor methods to get just the <code>day</code>, <code>month</code>, and <code>year</code> portions:</p> <pre><code>In [33]:
df['Date'] = pd.to_datetime(df['ReviewDate'], format='%m %d, %Y')
df['day'],df['month'],df['year'] = df['Date'].dt.day, df['Date'].dt.month, df['Date'].dt.year
df

Out[33]:
    ReviewDate       Date  day  month  year
0  04 13, 2013 2013-04-13   13      4  2013
1   07 1, 2012 2012-07-01    1      7  2012
</code></pre>
python|date|pandas
0
374,059
33,303,314
Confusing behaviour of Pandas crosstab() function with dataframe containing NaN values
<p>I'm using Python 3.4.1 with numpy 1.10.1 and pandas 0.17.0. I have a large dataframe that lists species and gender of individual animals. It's a real-world dataset and there are, inevitably, missing values represented by NaN. A simplified version of the data can be generated as:</p> <pre><code>import numpy as np
import pandas as pd

tempDF = pd.DataFrame({ 'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
                        'species': ["dog","dog",np.nan,"dog","dog","cat","cat","cat","dog","cat","cat","dog","dog","dog","dog",np.nan,"cat","cat","dog","dog"],
                        'gender': ["male","female","female","male","male","female","female",np.nan,"male","male","female","male","female","female","male","female","male","female",np.nan,"male"]})
</code></pre> <p>Printing the dataframe gives:</p> <pre><code>    gender  id species
0     male   1     dog
1   female   2     dog
2   female   3     NaN
3     male   4     dog
4     male   5     dog
5   female   6     cat
6   female   7     cat
7      NaN   8     cat
8     male   9     dog
9     male  10     cat
10  female  11     cat
11    male  12     dog
12  female  13     dog
13  female  14     dog
14    male  15     dog
15  female  16     NaN
16    male  17     cat
17  female  18     cat
18     NaN  19     dog
19    male  20     dog
</code></pre> <p>I want to generate a cross-tabulated table to show number of males and females in each species using the following:</p> <pre><code>pd.crosstab(tempDF['species'],tempDF['gender'])
</code></pre> <p>This produces the following table:</p> <pre><code>gender   female  male
species
cat           4     2
dog           3     7
</code></pre> <p>Which is what I'd expect. However, if I include the margins=True option, it produces:</p> <pre><code>pd.crosstab(tempDF['species'],tempDF['gender'],margins=True)

gender   female  male  All
species
cat           4     2    7
dog           3     7   11
All           9     9   20
</code></pre> <p>As you can see, the marginal totals appear to be incorrect, presumably caused by the missing data in the dataframe. Is this intended behaviour? In my mind, it seems very confusing. Surely marginal totals should be totals of rows and columns as they appear in the table and not include any missing data that isn't represented in the table. Including dropna=False does not affect the outcome.</p> <p>I can delete any row with a NaN before creating the table but that seems to be a lot of extra work and a lot of extra things to think about when doing an analysis. Should I report this as a bug?</p>
<p>I suppose one workaround would be to convert the NaNs to 'missing' before creating the table, and then the cross-tabulation will include columns and rows specifically for missing values:</p> <pre><code>pd.crosstab(tempDF['species'].fillna('missing'),tempDF['gender'].fillna('missing'),margins=True)

gender   female  male  missing  All
species
cat           4     2        1    7
dog           3     7        1   11
missing       2     0        0    2
All           9     9        2   20
</code></pre> <p>Personally, I would like to see that as the default behaviour so I wouldn't have to remember to replace all the NaNs in every crosstab calculation.</p>
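<p>If this is needed in more than one place, a tiny wrapper keeps the fillna from being forgotten; a sketch, where the 'missing' label is arbitrary:</p> <pre><code>import pandas as pd

def crosstab_with_missing(rows, cols, label='missing', **kwargs):
    """pd.crosstab, but NaNs are counted in an explicit 'missing' bucket."""
    return pd.crosstab(rows.fillna(label), cols.fillna(label), **kwargs)

crosstab_with_missing(tempDF['species'], tempDF['gender'], margins=True)
</code></pre>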
python|pandas|dataframe|nan|crosstab
21
374,060
33,121,760
Python: create a new dataframe column and write the index correspondig to datetime intervals
<p>I have the following dataframe:</p> <pre><code>date_time            value   member
2013-10-09 09:00:00  664639  Jerome
2013-10-09 09:05:00  197290  Hence
2013-10-09 09:10:00  470186  Ann
2013-10-09 09:15:00  181314  Mikka
2013-10-09 09:20:00  969427  Cristy
2013-10-09 09:25:00  261473  James
2013-10-09 09:30:00  003698  Oliver
</code></pre> <p>and the second dataframe where I have the bounds like:</p> <pre><code>date_start           date_end
2013-10-09 09:19:00  2013-10-09 09:25:00
2013-10-09 09:25:00  2013-10-09 09:40:00
</code></pre> <p>so I need to create a new column where I will write the index of each interval between two datetime points:</p> <p>smth like:</p> <pre><code>date_time            value   member  session
2013-10-09 09:00:00  664639  Jerome  1
2013-10-09 09:05:00  197290  Hence   1
2013-10-09 09:10:00  470186  Ann     1
2013-10-09 09:15:00  181314  Mikka   2
2013-10-09 09:20:00  969427  Cristy  2
2013-10-09 09:25:00  261473  James   2
2013-10-09 09:30:00  003698  Oliver  2
</code></pre> <p>the following code creates the column <code>'session'</code>, but doesn't write the index of the session (i.e. the index of the row in the <code>bounds</code> dataframe) in the <code>'session'</code> column, so it doesn't separate the initial dataframe into intervals:</p> <pre><code>def create_interval():
    df['session']=''
    for index, row in bounds.iterrows():
        s = row['date_start']
        e = row['date_end']
        mask=(df['date'] &gt; s) &amp; (df['date'] &lt; e)
        df.loc[mask]['session']='[index]'
    return df
</code></pre> <p><strong>UPDATE</strong></p> <p>The problem is that the code <code>bounds['date_start'].searchsorted(df['date_time'])</code> doesn't give the result I want to obtain, i.e. one index value for every interval: <code>df['Session']</code> = 1 for the first interval, = 2 for the second and so on. The <code>Session</code> column is aimed at separating the different intervals that lie in between <code>date_start</code> and <code>date_end</code> of <code>bounds</code>. I suppose that if <code>df['date_time']</code> is not the same as <code>bounds['date_start']</code> it already increments the index for <code>session</code>, which is not exactly what I'm looking for</p>
<p>I'm assuming you want the actual index location (zero-based), you can call <code>apply</code> on your 'date_time' column and call <code>np.searchsorted</code> to find the index location of where in <code>bounds</code> df it falls in:</p> <pre><code>In [266]:
df['Session'] = df['date_time'].apply(lambda x: np.searchsorted(bounds['date_start'], x)[0])
df

Out[266]:
            date_time   value  member  Session
0 2013-10-09 09:00:00  664639  Jerome        0
1 2013-10-09 09:05:00  197290   Hence        0
2 2013-10-09 09:10:00  470186     Ann        0
3 2013-10-09 09:15:00  181314   Mikka        0
4 2013-10-09 09:20:00  969427  Cristy        1
5 2013-10-09 09:25:00  261473   James        1
6 2013-10-09 09:30:00    3698  Oliver        2
</code></pre> <p><strong>EDIT</strong></p> <p>@Jeff has pointed out that <code>apply</code> is unnecessary here and of course he's right, this will be much faster:</p> <pre><code>In [293]:
df['session'] = bounds['date_start'].searchsorted(df['date_time'])
df

Out[293]:
            date_time   value  member  session
0 2013-10-09 09:00:00  664639  Jerome        0
1 2013-10-09 09:05:00  197290   Hence        0
2 2013-10-09 09:10:00  470186     Ann        0
3 2013-10-09 09:15:00  181314   Mikka        0
4 2013-10-09 09:20:00  969427  Cristy        1
5 2013-10-09 09:25:00  261473   James        1
6 2013-10-09 09:30:00    3698  Oliver        2
</code></pre>
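<p>On newer pandas you can also make the intervals explicit with an <code>IntervalIndex</code>, which reads closer to the intent. A sketch under the assumption that the bounds do not overlap; rows that fall outside every interval get <code>-1</code> rather than a neighbouring index:</p> <pre><code># build half-open intervals [date_start, date_end) from the bounds frame
intervals = pd.IntervalIndex.from_arrays(bounds['date_start'], bounds['date_end'], closed='left')

# look up which interval (if any) each timestamp falls into
df['session'] = intervals.get_indexer(df['date_time'])
</code></pre>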
python|pandas|dataframe
2
374,061
9,296,658
How to filter a numpy array using another array's values?
<p>I have two NumPy arrays, e.g.:</p> <pre><code>a = [1,2,3,4,5]
</code></pre> <p>and a filter array, e.g.:</p> <pre><code>f = [False, True, False, False, True]

len(a) == len(f)
</code></pre> <p>How can I get a new numpy array with only the values in a where the same index in <code>f</code> is True? In my case: <code>[2, 5]</code>.</p> <p>According to the accepted solution (with different values):</p> <pre><code>&gt;&gt;&gt; a = numpy.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
&gt;&gt;&gt; b = numpy.array([True, False, True, False, True, False, True, False, True, False])
&gt;&gt;&gt; a[b]
array([1, 3, 5, 7, 9])
</code></pre>
<p>NumPy supports <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing" rel="noreferrer">boolean indexing</a></p> <pre><code>a[f] </code></pre> <p>This assumes that <code>a</code> and <code>f</code> are NumPy arrays rather than Python lists (as in the question). You can convert with <code>f = np.array(f)</code>.</p>
python|arrays|filter|numpy
38
374,062
9,412,500
Image Interpolation in python
<p>I am trying to use interpolation to remove chromatic aberration from an image. The code I have generates the following error: TypeError: unhashable type: 'numpy.ndarray'. Below is my code - any help would be greatly appreciated. Thank you - Areej</p> <pre><code>#splitting an image into its separate bands source = im.split() Cfixed = source[2] Cwarp = source[1] #take the image minus a ew-wide edge roi = [ew+1, xdim-ew, ew+1, ydim-ew]; roi_pad = [roi[0]-ew, roi[1]+ew, roi[2]-ew, roi[3]+ew]; for k in range(0,centers_x.size): cx = centers_x[k] cy = centers_y[k] wz = warps[k] import scipy as sp from scipy import interpolate def warpRegion(Cwarp, roi_pad, (cx, cy, wz)): #Unpack region indices sx, ex, sy, ey = roi_pad xramp, yramp = np.mgrid[sx:ex+1, sy:ey+1] shapeofgrid=xramp.shape print 'shape of x grid'+str(shapeofgrid) xrampc = xramp - cx; yrampc = yramp - cy; xramp1 = 1/wz*xrampc; yramp1 = 1/wz*yrampc; xrampf = xrampc.flatten() yrampf = yrampc.flatten() xramp1f = xramp1.flatten() yramp1f = yramp1.flatten() reg_w = sp.interpolate.interp2d(yrampf,xrampf,Cwarp, yramp1f, xramp1f,'cubic'); </code></pre>
<p>A possible explanation of the error message is that you are trying to use a NumPy array as a dict key or a set element. Look at where the error occurs and study the type of every variable referenced on that line. If you need help, post a runnable example and the full traceback of the exception.</p>
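<p>A minimal sketch reproducing the error: a NumPy array cannot be used where a hashable object (such as a dict key) is expected, while a tuple can:</p> <pre><code>import numpy as np

a = np.array([1, 2])
d = {}
try:
    d[a] = 'value'  # raises TypeError: unhashable type: 'numpy.ndarray'
except TypeError as e:
    print(e)
d[tuple(a)] = 'value'  # a tuple is hashable, so this works
print(d)
</code></pre>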
python|numpy|scipy
1
374,063
6,030,906
Merging a list of numpy arrays into one array (fast)
<p>What would be the fastest way to merge a list of numpy arrays into one array if one knows the length of the list and the size of the arrays, which is the same for all?</p> <p>I tried two approaches:</p> <ul> <li><p><code>merged_array = array(list_of_arrays)</code> from <a href="https://stackoverflow.com/questions/2106287/pythonic-way-to-create-a-numpy-array-from-a-list-of-numpy-arrays">Pythonic way to create a numpy array from a list of numpy arrays</a> and</p></li> <li><p><code>vstack</code></p></li> </ul> <p>As you can see, <code>vstack</code> is faster, but for some reason the first run takes three times longer than the second. I assume this is caused by (missing) <a href="https://stackoverflow.com/questions/3491802/what-is-the-preferred-way-to-preallocate-numpy-arrays">preallocation</a>. So how would I preallocate an array for <code>vstack</code>? Or do you know a faster method?</p> <p>Thanks!</p> <p><strong>[UPDATE]</strong></p> <p><strong>I want <code>(25600, 320)</code> not <code>(80, 320, 320)</code></strong>, which means <code>merged_array = array(list_of_arrays)</code> won't work for me. Thanks Joris for pointing that out!!!</p> <p><strong>Output:</strong></p> <pre><code>0.547468900681 s merged_array = array(first_list_of_arrays) 0.547191858292 s merged_array = array(second_list_of_arrays) 0.656183958054 s vstack first 0.236850976944 s vstack second </code></pre> <p><strong>Code:</strong></p> <pre><code>import numpy import time width = 320 height = 320 n_matrices=80 secondmatrices = list() for i in range(n_matrices): temp = numpy.random.rand(height, width).astype(numpy.float32) secondmatrices.append(numpy.round(temp*9)) firstmatrices = list() for i in range(n_matrices): temp = numpy.random.rand(height, width).astype(numpy.float32) firstmatrices.append(numpy.round(temp*9)) t1 = time.time() first1=numpy.array(firstmatrices) print time.time() - t1, "s merged_array = array(first_list_of_arrays)" t1 = time.time() second1=numpy.array(secondmatrices) print time.time() - t1, "s merged_array = array(second_list_of_arrays)" t1 = time.time() first2 = firstmatrices.pop() for i in range(len(firstmatrices)): first2 = numpy.vstack((firstmatrices.pop(),first2)) print time.time() - t1, "s vstack first" t1 = time.time() second2 = secondmatrices.pop() for i in range(len(secondmatrices)): second2 = numpy.vstack((secondmatrices.pop(),second2)) print time.time() - t1, "s vstack second" </code></pre>
<p>You have 80 arrays 320x320? So you probably want to use <code>dstack</code>:</p> <pre><code>first3 = numpy.dstack(firstmatrices) </code></pre> <p>This stacks along the third axis, returning one 320x320x80 array holding the same data that <code>numpy.array(firstmatrices)</code> returns as 80x320x320:</p> <pre><code>timeit numpy.dstack(firstmatrices) 10 loops, best of 3: 47.1 ms per loop timeit numpy.array(firstmatrices) 1 loops, best of 3: 750 ms per loop </code></pre> <p>If you want to use <code>vstack</code>, it will return a 25600x320 array:</p> <pre><code>timeit numpy.vstack(firstmatrices) 100 loops, best of 3: 18.2 ms per loop </code></pre>
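<p>To address the preallocation part of the question directly, here is a sketch, assuming the list still holds all <code>n_matrices</code> arrays of identical shape (the names come from the question's benchmark code): allocate the target once with <code>numpy.empty</code> and copy each block into its slice, which avoids the repeated reallocation of the <code>vstack</code> loop:</p> <pre><code># preallocate the full 25600x320 result, then fill it block by block
merged = numpy.empty((n_matrices * height, width), dtype=numpy.float32)
for i, m in enumerate(firstmatrices):
    merged[i * height:(i + 1) * height, :] = m
</code></pre>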
python|arrays|numpy
22
374,064
5,896,747
numpy.memmap for an array of strings?
<p>Is it possible to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html" rel="nofollow"><code>numpy.memmap</code></a> to map a large disk-based array of <strong>strings</strong> into memory?</p> <p>I know it can be done for floats and suchlike, but this question is specifically about strings.</p> <p>I am interested in solutions for both fixed-length and variable-length strings.</p> <p>The solution is free to dictate any reasonable file format.</p>
<p>If all the strings have the same length, as suggested by the term "array", this is easily possible:</p> <pre><code>a = numpy.memmap("data", dtype="S10") </code></pre> <p>would be an example for strings of length 10.</p> <p><strong>Edit</strong>: Since apparently the strings don't have the same length, you need to index the file to allow for O(1) item access. This requires reading the whole file once and storing the start indices of all strings in memory. Unfortunately, I don't think there is a pure NumPy way of indexing without creating an array the same size as the file in memory first. This array can be dropped after extracting the indices, though.</p>
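<p>A rough sketch of that indexing idea, assuming the strings are stored newline-terminated in the file: memory-map the raw bytes, record each string's start and end once, then decode on demand:</p> <pre><code>import numpy as np

raw = np.memmap('data', dtype='uint8', mode='r')
ends = np.flatnonzero(raw == 10)             # byte 10 == '\n' marks string ends
starts = np.concatenate(([0], ends[:-1] + 1))

def get(i):
    # decode the i-th string without reading the whole file into memory
    return raw[starts[i]:ends[i]].tobytes().decode()
</code></pre>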
python|string|numpy|memory-mapped-files|large-data
5
374,065
5,935,893
Any reason why Octave, R, Numpy and LAPACK yield different SVD results on the same matrix?
<p>I'm using Octave and R to compute SVD using a simple matrix and getting two different answers! The code is listed below:</p> <p>R </p> <pre><code>&gt; a&lt;-matrix(c(1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1), 9, 4) &gt; a [,1] [,2] [,3] [,4] [1,] 1 1 0 0 [2,] 1 1 0 0 [3,] 1 1 0 0 [4,] 1 0 1 0 [5,] 1 0 1 0 [6,] 1 0 1 0 [7,] 1 0 0 1 [8,] 1 0 0 1 [9,] 1 0 0 1 &gt; a.svd &lt;- svd(a) &gt; a.svd$d [1] 3.464102e+00 1.732051e+00 1.732051e+00 1.922963e-16 &gt; a.svd$u [,1] [,2] [,3] [,4] [1,] -0.3333333 0.4714045 -1.741269e-16 7.760882e-01 [2,] -0.3333333 0.4714045 -3.692621e-16 -1.683504e-01 [3,] -0.3333333 0.4714045 -5.301858e-17 -6.077378e-01 [4,] -0.3333333 -0.2357023 -4.082483e-01 6.774193e-17 [5,] -0.3333333 -0.2357023 -4.082483e-01 6.774193e-17 [6,] -0.3333333 -0.2357023 -4.082483e-01 6.774193e-17 [7,] -0.3333333 -0.2357023 4.082483e-01 5.194768e-17 [8,] -0.3333333 -0.2357023 4.082483e-01 5.194768e-17 [9,] -0.3333333 -0.2357023 4.082483e-01 5.194768e-17 &gt; a.svd$v [,1] [,2] [,3] [,4] [1,] -0.8660254 0.0000000 -4.378026e-17 0.5 [2,] -0.2886751 0.8164966 -2.509507e-16 -0.5 [3,] -0.2886751 -0.4082483 -7.071068e-01 -0.5 [4,] -0.2886751 -0.4082483 7.071068e-01 -0.5 </code></pre> <p>Octave</p> <pre><code>octave:32&gt; a = [ 1, 1, 1, 1, 1, 1, 1, 1, 1; 1, 1, 1, 0, 0, 0, 0, 0, 0; 0, 0, 0, 1, 1, 1, 0, 0, 0; 0, 0, 0, 0, 0, 0, 1, 1, 1 ] a = 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 octave:33&gt; [u, s, v] = svd (a) u = -8.6603e-01 -1.0628e-16 2.0817e-17 -5.0000e-01 -2.8868e-01 5.7735e-01 -5.7735e-01 5.0000e-01 -2.8868e-01 -7.8868e-01 -2.1132e-01 5.0000e-01 -2.8868e-01 2.1132e-01 7.8868e-01 5.0000e-01 s = Diagonal Matrix 3.4641e+00 0 0 0 0 0 1.7321e+00 0 0 0 0 0 1.7321e+00 0 0 0 0 0 3.7057e-16 0 0 0 0 0 0 v = -3.3333e-01 3.3333e-01 -3.3333e-01 7.1375e-01 -3.3333e-01 3.3333e-01 -3.3333e-01 -1.3472e-02 -3.3333e-01 3.3333e-01 -3.3333e-01 -7.0027e-01 -3.3333e-01 -4.5534e-01 -1.2201e-01 -9.0583e-17 -3.3333e-01 -4.5534e-01 -1.2201e-01 2.0440e-17 -3.3333e-01 -4.5534e-01 -1.2201e-01 2.0440e-17 -3.3333e-01 1.2201e-01 4.5534e-01 8.3848e-17 -3.3333e-01 1.2201e-01 4.5534e-01 8.3848e-17 -3.3333e-01 1.2201e-01 4.5534e-01 8.3848e-17 </code></pre> <p>I'm new to both Octave and R so my first question is am I doing this right? If so, which one is "correct"? Are they both right? I've also tried this out in numpy and calling the LAPACK functions dgesdd and dgesvd directly. Numpy give me an answer similar to Octave and calling the LAPACK functions gives me an answer similar to R.</p> <p>Thanks!</p>
<p>In the SVD decomposition $A=UDV^T$, only $D$ is unique (up to the ordering of the singular values). The singular vectors are not: any column pair $u_i$, $v_i$ can be replaced by $-u_i$, $-v_i$ without changing the product, and when a singular value is repeated (here $\sqrt{3}\approx 1.732$ appears twice) any orthogonal rotation within that subspace gives an equally valid $U$ and $V$. So it is not surprising that different algorithms can give different results. What matters is that $D$ must be the same for all algorithms and that $UDV^T$ reconstructs $A$.</p>
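<p>A quick numerical check of this, assuming NumPy: the singular values agree with R and Octave, and the product reconstructs the matrix even though <code>u</code> and <code>vt</code> may differ from theirs:</p> <pre><code>import numpy as np

a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0],
              [1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0],
              [1, 0, 0, 1], [1, 0, 0, 1], [1, 0, 0, 1]], dtype=float)
u, s, vt = np.linalg.svd(a, full_matrices=False)
print(s)                                    # 3.4641, 1.7321, 1.7321, ~0
print(np.allclose(u @ np.diag(s) @ vt, a))  # True: the product reconstructs a
</code></pre>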
r|numpy|octave|lapack|svd
8
374,066
6,114,115
Windows + virtualenv + pip + NumPy (problems when installing NumPy)
<p>On Windows, I normally just use the binary installer, but I would like to install <a href="http://en.wikipedia.org/wiki/NumPy" rel="noreferrer">NumPy</a> only in a virtualenv this time, so I created a virtual env:</p> <pre><code>virtualenv --no-site-packages --distribute summary_python cd summary_python/Scripts activate.bat </code></pre> <p>Then I tried to install NumPy</p> <pre><code>pip install numpy </code></pre> <p>And I get an error. My pip.log is pasted below:</p> <pre><code>Downloading/unpacking numpy Running setup.py egg_info for package numpy non-existing path in 'numpy\\distutils': 'site.cfg' F2PY Version 2 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in c:\Users\fname.lname\Documents\summary_python\lib libraries mkl,vml,guide not found in C:\ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries ptf77blas,ptcblas,atlas not found in C:\ NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries f77blas,cblas,atlas not found in C:\ NOT AVAILABLE blas_info: libraries blas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries blas not found in C:\ NOT AVAILABLE blas_src_info: NOT AVAILABLE NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in c:\Users\fname.lname\Documents\summary_python\lib libraries mkl,vml,guide not found in C:\ NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries lapack_atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in C:\ numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries lapack_atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\ numpy.distutils.system_info.atlas_info NOT AVAILABLE lapack_info: libraries lapack not found in c:\Users\fname.lname\Documents\summary_python\lib libraries lapack not found in C:\ NOT AVAILABLE lapack_src_info: NOT AVAILABLE NOT AVAILABLE running egg_info running build_src build_src building py_modules sources building library "npymath" sources No module named msvccompiler in numpy.distutils; trying from distutils Running from numpy source directory.c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:531: UserWarning: Specified path is invalid. warnings.warn('Specified path %s is invalid.' % d) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1417: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1426: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. 
warnings.warn(BlasNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1429: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1333: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1344: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1347: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) error: Unable to find vcvarsall.bat Complete output from command python setup.py egg_info: non-existing path in 'numpy\\distutils': 'site.cfg' F2PY Version 2 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in c:\Users\fname.lname\Documents\summary_python\lib libraries mkl,vml,guide not found in C:\ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries ptf77blas,ptcblas,atlas not found in C:\ NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries f77blas,cblas,atlas not found in C:\ NOT AVAILABLE blas_info: libraries blas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries blas not found in C:\ NOT AVAILABLE blas_src_info: NOT AVAILABLE NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in c:\Users\fname.lname\Documents\summary_python\lib libraries mkl,vml,guide not found in C:\ NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries lapack_atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in C:\ numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries lapack_atlas not found in c:\Users\fname.lname\Documents\summary_python\lib libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\ numpy.distutils.system_info.atlas_info NOT AVAILABLE lapack_info: libraries lapack not found in c:\Users\fname.lname\Documents\summary_python\lib libraries lapack not found in C:\ NOT AVAILABLE lapack_src_info: NOT AVAILABLE NOT AVAILABLE running egg_info running build_src 
build_src building py_modules sources building library "npymath" sources No module named msvccompiler in numpy.distutils; trying from distutils Running from numpy source directory.c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:531: UserWarning: Specified path is invalid. warnings.warn('Specified path %s is invalid.' % d) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1417: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1426: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1429: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1333: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1344: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) c:\Users\fname.lname\Documents\summary_python\build\numpy\numpy\distutils\system_info.py:1347: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
warnings.warn(LapackSrcNotFoundError.__doc__) error: Unable to find vcvarsall.bat ---------------------------------------- Command python setup.py egg_info failed with error code 1 Exception information: Traceback (most recent call last): File "c:\Users\fname.lname\Documents\summary_python\lib\site-packages\pip-1.0.1-py2.7.egg\pip\basecommand.py", line 126, in main self.run(options, args) File "c:\Users\fname.lname\Documents\summary_python\lib\site-packages\pip-1.0.1-py2.7.egg\pip\commands\install.py", line 223, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "c:\Users\fname.lname\Documents\summary_python\lib\site-packages\pip-1.0.1-py2.7.egg\pip\req.py", line 986, in prepare_files req_to_install.run_egg_info() File "c:\Users\fname.lname\Documents\summary_python\lib\site-packages\pip-1.0.1-py2.7.egg\pip\req.py", line 222, in run_egg_info command_desc='python setup.py egg_info') File "c:\Users\fname.lname\Documents\summary_python\lib\site-packages\pip-1.0.1-py2.7.egg\pip\__init__.py", line 255, in call_subprocess % (command_desc, proc.returncode)) InstallationError: Command python setup.py egg_info failed with error code 1 </code></pre>
<p>I've had success installing NumPy binaries into a virtualenv with good 'ol <code>easy_install</code> and a little bit of un-archiving magic.</p> <p>The <code>numpy-1.x.x-win32-superpack-python2.x.exe</code> release you download from <a href="http://en.wikipedia.org/wiki/SourceForge" rel="nofollow noreferrer">SourceForge</a> is really just a thin wrapper around three separate binary distributions (with <a href="http://en.wikipedia.org/wiki/SSE3" rel="nofollow noreferrer">SSE3</a>, <a href="http://en.wikipedia.org/wiki/SSE2" rel="nofollow noreferrer">SSE2</a>, or no SSE enabled, depending on the capabilities of your CPU). If you open up the superpack EXE file in <a href="http://en.wikipedia.org/wiki/7-Zip" rel="nofollow noreferrer">7-Zip</a> (or another archive utility), you can extract those individual setup files somewhere to use separately.</p> <p>Then, activate your virtual environment, and run</p> <pre><code>easy_install c:\path\to\extracted\numpy-1.x.x-sse3.exe </code></pre> <p>to install the SSE3-optimized binaries, for example. <code>easy_install</code> is smart enough to find everything it needs inside that <code>wininst</code> bundle and will extract the compiled <a href="https://stackoverflow.com/questions/2051192">egg</a> into your virtualenv's site-packages folder. I can also confirm that pip is still able to recognize and/or uninstall NumPy when you do this, and that using pip to install other packages which depend on NumPy works just fine.</p> <p>The only catch is knowing which optimization level to use (SSE3, SSE2, or no-SSE). If you have a <a href="http://en.wikipedia.org/wiki/SSE3#CPUs_with_SSE3" rel="nofollow noreferrer">reasonably modern processor</a> (newer than, say, a <a href="https://en.wikipedia.org/wiki/Pentium_4" rel="nofollow noreferrer">Pentium&nbsp;4</a> or <a href="https://en.wikipedia.org/wiki/Athlon_64" rel="nofollow noreferrer">Athlon&nbsp;64</a>), it's probably safe to go with the full SSE3. You can probably also run the test suite to confirm everything works as expected.</p> <hr> <p>I've found the <code>easy_install</code> "trick" to be really useful for installing all sorts of binary packages into a virtualenv. Even though I have all the requisite compilers set up on my machine, it's usually easier/faster/safer to stick with the official release when one is provided.</p>
python|windows|numpy|pip|virtualenv
48
374,067
6,022,359
Python "list order"
<p>I have 2 problems with my data; can anyone help me?</p> <p>How can I get from this:</p> <p><strong>1.</strong></p> <pre><code>k=[['1','7', 'U1'], ['1.5', '8', 'U1'], ['2', '5.5', 'U1']] </code></pre> <p>to this:</p> <pre><code>1,7,U1 1.5,8,U1 2,5.5,U1 </code></pre> <hr> <p><strong>EDIT 2: I made some changes to the second case; still searching for a solution to this one:</strong></p> <p><strong>2.</strong> And how to get from this</p> <pre><code>l=array([[[ 4.24231542], 'U1'], [[ 3.41424819], 'U1'], [[ 2.17214734], 'U1'],], dtype=object) </code></pre> <p>to this:</p> <pre><code>4.24231542,U1 3.41424819,U1 2.17214734,U1 </code></pre> <p>Thanks</p>
<p>One-line functional style:</p> <pre><code>print '\n'.join(','.join(x) for x in k) </code></pre>
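<p>For the second case, each row of <code>l</code> pairs a one-element array with a string, so pulling the scalar out explicitly works; this is a sketch assuming the structure shown in the question:</p> <pre><code># row[0][0] extracts the number from the one-element inner array
print '\n'.join('%s,%s' % (row[0][0], row[1]) for row in l)
</code></pre>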
python|numpy
5
374,068
66,633,109
Aggregating row repeats in pandas (run lengths)
<p>In the following dataframe of snapshots of a given system, I am interested in recording any changes in <code>var1</code> or <code>var2</code> <em>over time</em>, assuming that the state of the system remains the same until something changes. This is similar to run length encoding, which condenses sequences in which the same data values occur in many consecutive data elements. In that sense, I am interested in capturing the runs. For example:</p> <pre><code> var1 var2 timestamp foo 2 2017-01-01 00:07:45 foo 2 2017-01-01 00:13:42 foo 3 2017-01-01 00:19:41 bar 3 2017-01-01 00:25:41 bar 2 2017-01-01 00:37:36 bar 2 2017-01-01 00:43:37 foo 2 2017-01-01 01:01:29 foo 2 2017-01-01 01:01:34 bar 2 2017-01-01 01:19:25 bar 2 2017-01-01 01:25:22 </code></pre> <p>should be condensed to:</p> <pre><code>expected_output var1 var2 min max foo 2 2017-01-01 00:07:45 2017-01-01 00:19:41 foo 3 2017-01-01 00:19:41 2017-01-01 00:25:41 bar 3 2017-01-01 00:25:41 2017-01-01 00:37:36 bar 2 2017-01-01 00:37:36 2017-01-01 01:01:29 foo 2 2017-01-01 01:01:29 2017-01-01 01:19:25 bar 2 2017-01-01 01:25:22 None </code></pre> <p>I have tried the the following aggregation, which effectively deduplicates <code>var1</code> and <code>var2</code> and provide the min and max timestamps per group:</p> <pre><code>output = test.groupby(['var1','var2'])['timestamp'].agg(['min','max']).reset_index() output var1 var2 min max bar 2 2017-01-01 00:37:36 2017-01-01 01:25:22 bar 3 2017-01-01 00:25:41 2017-01-01 00:25:41 foo 2 2017-01-01 00:07:45 2017-01-01 01:01:34 foo 3 2017-01-01 00:19:41 2017-01-01 00:19:41 </code></pre> <p>However, <code>var1</code> and <code>var2</code> can change and revert back to the same original values over time, so a min/max function does not work since <code>var1 </code>and <code>var2</code> should be compared to the previous value in the same column over time, similar to but not exactly what the <code>shift()</code> method does.</p> <p>Is there an efficient method in pandas or numpy, similar to the <code>rle()</code> method in R, that would group or partition such runs and take the min timestamp of the next run as its max? The real dataset is over 10 million rows. Any suggestions here would be appreciated!</p>
<p>For contiguous grouping you can group on <code>(df.col != df.col.shift()).cumsum()</code></p> <p>You want it for either column so you can <code>|</code> them together.</p> <pre><code>&gt;&gt;&gt; ((df.var1 != df.var1.shift()) | (df.var2 != df.var2.shift())).cumsum() 0 1 1 1 2 2 3 3 4 4 5 4 6 5 7 5 8 6 9 6 dtype: int64 </code></pre> <p>groupby + agg</p> <pre><code>&gt;&gt;&gt; cond = ((df.var1 != df.var1.shift()) | (df.var2 != df.var2.shift())).cumsum() &gt;&gt;&gt; output = df.groupby(cond).agg( ... var1=('var1', 'first'), ... var2=('var2', 'first'), ... min=('timestamp', 'min'), ... max=('timestamp', 'max') ... ) &gt;&gt;&gt; output var1 var2 min max 1 foo 2 2017-01-01 00:07:45 2017-01-01 00:13:42 2 foo 3 2017-01-01 00:19:41 2017-01-01 00:19:41 3 bar 3 2017-01-01 00:25:41 2017-01-01 00:25:41 4 bar 2 2017-01-01 00:37:36 2017-01-01 00:43:37 5 foo 2 2017-01-01 01:01:29 2017-01-01 01:01:34 6 bar 2 2017-01-01 01:19:25 2017-01-01 01:25:22 </code></pre> <p>You can then set the max to the next row's min:</p> <pre><code>&gt;&gt;&gt; output['max'] = output['min'].shift(-1) &gt;&gt;&gt; output var1 var2 min max 1 foo 2 2017-01-01 00:07:45 2017-01-01 00:19:41 2 foo 3 2017-01-01 00:19:41 2017-01-01 00:25:41 3 bar 3 2017-01-01 00:25:41 2017-01-01 00:37:36 4 bar 2 2017-01-01 00:37:36 2017-01-01 01:01:29 5 foo 2 2017-01-01 01:01:29 2017-01-01 01:19:25 6 bar 2 2017-01-01 01:19:25 NaN </code></pre>
python|pandas|numpy|duplicates|partitioning
3
374,069
66,745,856
Change values in a Python dataframe, based on values in another dataframe
<p>I have two dataframes:</p> <pre><code>import pandas as pd import numpy as np data1 = {1: [1,2,3], 2: [1,2,3]} df1 = pd.DataFrame(data1) data2 = {1: [50,60,12], 2: [14,70,60]} df2 = pd.DataFrame(data2) </code></pre> <p>Where a value is &lt; 30 in df2, I want to change the respective value in df1 to a colon (<code>:</code>).</p> <p>My current approach is to change df2 to values of 1 where &gt; 30, and -99 where &lt; 30, then multiply the dataframes, and then change negative values to a colon.</p> <pre><code>df2 = df2.apply(lambda x: np.where(x &lt; 30, -99, 1)) df1 = df2.mul(df1) df1 = df1.apply(lambda x: np.where(x &lt; 0, &quot;:&quot;, x)) </code></pre> <p>It works with positive values, though I'm certain there has to be a simpler approach.</p> <p>Thanks in advance.</p>
<p>If same index and columns names in both DataFrames use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>DataFrame.mask</code></a> and pass mask from compare another DataFrame:</p> <pre><code>df1 = df1.mask(df2 &lt; 30, ':') print (df1) 1 2 0 1 : 1 2 2 2 : 3 </code></pre>
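<p>For the opposite behaviour, keeping values where the condition holds and replacing everything else, <code>DataFrame.where</code> is the complement of <code>mask</code>:</p> <pre><code>df1.where(df2 &lt; 30, ':')
</code></pre>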
python|pandas
0
374,070
66,659,528
Difference between scipy.stats.norm.pdf and plotting gaussian manually
<p>I'm plotting a simple normal distribution using scipy.stats, but for some reason when I try to compare it to the regular gaussian formula the plot looks very different:</p> <pre><code>import numpy as np import scipy.stats as stats x = np.linspace(-50,175,10000) sig1, mu1 = 10.0, 30.0 y1 = stats.norm.pdf(x, mu1, sig1) y11 = np.exp(-(x-mu1)**2/2*sig1)/(np.sqrt(2*np.pi*sig1)) plt.plot(x,y11) plt.plot(x,y1) </code></pre> <p>The result is:</p> <p><a href="https://i.stack.imgur.com/A2xQm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A2xQm.png" alt="enter image description here" /></a></p> <p>Can someone explain to me why they are not the same?</p>
<p><code>stats.norm.pdf</code> requires sigma, but in your calculation you are using it as variance. Also there are two brackets missing.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats x = np.linspace(-50, 175, 10000) sig1, mu1 = 10.0, 30.0 var1 = sig1 ** 2 y1 = stats.norm.pdf(x, mu1, sig1) y11 = np.exp(-((x - mu1) ** 2) / (2 * var1)) / (np.sqrt(2 * np.pi * var1)) plt.plot(x, y11) plt.plot(x, y1) plt.show() </code></pre> <p>Which produces the same plot.</p> <p>Cheers!</p>
python|numpy|scipy|gaussian
2
374,071
66,546,186
How can you prepare data with time series and static data for classficiation?
<p>I am trying to build a binary classifier to predict the propensity of customers transitioning from one account to another. I have age, gender, and customer-segment data, but also a time series of their bank balances for the last 18 months on a monthly basis, and a lot of high-cardinality categorical variables.</p> <p>So, what I want to know is: how do I transform the time series data so it's in a compact, static form like the rest of the data points (age, gender, etc.)? Or can I just throw it into the algorithm too?</p> <p>Sample data may look like the below: customer number, age, gender, marital status code, 18mth-bal, 17mth-bal,...,3mth-bal, postcode-segment ..</p> <p>Any help would be fantastic! Thank you.</p>
<p>I would generate descriptive statistics for each time series. Standard deviation seems interesting, but you could also use percentiles, mean, min and max... or all of them.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

# add a column with the standard deviation of each customer's balance history
# (works when each 'balance' cell holds a list or array of the known amounts)
df['standard_deviation'] = df['balance'].apply(np.std)
</code></pre> <p>This works if the content of a 'balance' cell is a list or an array (of the 18 last amounts known for this customer, for example).</p> <p>Sorry I can't be more specific as I can't see your dataframe, but I hope it helps!</p>
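<p>The same pattern extends to any other summary statistic; for example, a sketch adding the mean and two percentiles, still reusing the hypothetical 'balance' column from above:</p> <pre><code>df['mean_balance'] = df['balance'].apply(np.mean)
df['p25_balance'] = df['balance'].apply(lambda b: np.percentile(b, 25))
df['p75_balance'] = df['balance'].apply(lambda b: np.percentile(b, 75))
</code></pre>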
python|pandas|dataframe|deep-learning|data-cleaning
0
374,072
66,676,823
FileNotFoundError: [Errno 2] No such file or directory: [Instert file path]
<p>I have an issue. Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from pandas import ExcelFile test = pd.read_excel(&quot;C:\\Users\\John\\Desktop\\Python_work\\stock\\zen\\OutputFiles\\Test_file.csv&quot;, header=0) print(test) </code></pre> <p>My problem is the code does not see the file &quot;Test_file.csv&quot; at all... I have also tried putting the file in the same directory as the code itself. it still does not see the file. I have used a .txt file just to see if the code can see any file and the code recognizes the .txt file has no columns which is expected. It is as if the code is blind to excel files...</p> <p>Does anyone have a solution?</p>
<p>You could try using a raw string instead (<code>r&quot;&quot;</code>):</p> <pre><code>amzn = pd.read_excel(r&quot;C:\Users\John\Desktop\Python_work\stock\zen\OutputFiles\Test_file.csv&quot;, header=0) </code></pre>
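<p>Separately, and this is only a guess from the path shown: the file has a <code>.csv</code> extension, and <code>read_excel</code> only handles Excel formats, so if it really is a CSV the matching reader would be:</p> <pre><code># only if the file is actually CSV rather than a renamed Excel workbook
test = pd.read_csv(r'C:\Users\John\Desktop\Python_work\stock\zen\OutputFiles\Test_file.csv', header=0)
</code></pre>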
python|pandas
0
374,073
66,654,257
Find no of days gap for a specific ID when it has a flag X in other column
<p>I want to calculate the number of days between consecutive rows where the 'flag' column equals 'X', for the same ID.</p> <p>The dataframe that I have:</p> <pre><code>ID Date flag 1 1-1-2020 X 1 10-1-2020 null 1 15-1-2020 X 2 1-2-2020 X 2 10-2-2020 X 2 15-2-2020 X 3 15-2-2020 null </code></pre> <p>The dataframe I want:</p> <pre><code>ID Date flag no_of_days 1 1-1-2020 X 14 1 10-1-2020 null null 1 15-1-2020 X null 2 1-2-2020 X 9 2 10-2-2020 X 8 2 18-2-2020 X null 3 15-2-2020 null null </code></pre> <p>Thanks in advance.</p>
<p>First filter rows by <code>X</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> and then subtract shifted column per groups by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.shift.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.shift</code></a> and last convert timedeltas to days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a>:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date'], dayfirst=True) df['new'] = df[df['flag'].eq('X')].groupby('ID')['Date'].shift(-1).sub(df['Date']).dt.days print (df) ID Date flag new 0 1 2020-01-01 X 14.0 1 1 2020-01-10 NaN NaN 2 1 2020-01-15 X NaN 3 2 2020-02-01 X 9.0 4 2 2020-02-10 X 8.0 5 2 2020-02-18 X NaN 6 3 2020-02-15 NaN NaN </code></pre>
python-3.x|pandas|dataframe
1
374,074
66,600,784
Get category "Date" in Excel when writing data from pandas
<p>I am trying to write a dataframe into an excel, but I specifically need the column (in excel) to be of category &quot;Date&quot;.</p> <p>What I'm trying to achieve therefore is:</p> <pre><code>x = pandas.DataFrame(data=['04/01/2020'], columns=['Date']) x.to_excel(&quot;&lt;path&gt;/ExcelFile.xlsx&quot;) </code></pre> <p>This gives me the following: <a href="https://i.stack.imgur.com/J2gEH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J2gEH.png" alt="enter image description here" /></a></p> <p>I would like to have the &quot;General&quot; category marked as Date by default when I load the Excel file.</p> <p>I cannot do this manually, because there are hundreds of files that need the same treatment.</p> <p>I have tried the following:</p> <pre><code>x['Date'] = pandas.to_datetime(x['Date']) </code></pre> <p>which gives me:</p> <p><a href="https://i.stack.imgur.com/nWfDT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nWfDT.png" alt="enter image description here" /></a></p> <p>But that gives me in the Excel, category &quot;Custom&quot;.</p> <p>Any suggestions on how to make this happen?</p>
<p>You could use <code>pd.ExcelWriter</code>, a class for writing DataFrame objects into Excel sheets:</p> <pre><code>import pandas as pd x = pd.DataFrame(data=['04/01/2020'], columns=['Date']) x['Date'] = pd.to_datetime(x['Date']) with pd.ExcelWriter('&lt;path&gt;/ExcelFile.xlsx', datetime_format='DD/MM/YYYY') as writer: x.to_excel(writer) </code></pre> <p>You should see the 'date' format when you open the file. You didn't specify which part is the day and which is the month in your case, but you can set that with 'datetime_format' as you want.</p>
python|python-3.x|excel|pandas|dataframe
0
374,075
66,450,324
Get the sum of a multikey dict by one key and add it to a dataframe column in Python?
<p>I have a dataframe and a dict as follows:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(np.array([[1, 2], [4, 5]]),columns=['a', 'b']) df a b 0 1 2 1 4 5 dict {(0, 'A', 1): 1, (0, 'A', 2): 2, (1, 'B', 1): 3, (1, 'B', 2): 4} </code></pre> <p>I am trying to get the total sum by the first key of the dict and add the result as a new column to my dataframe.</p> <p>This is what I have so far, but I am thinking there must be a more efficient way to do this.</p> <pre><code>total_by_1st={} for (x, _, _), v in dict.items(): if x in total_by_1st: total_by_1st[x] += v else: total_by_1st[x]=v total_by_1st {0: 3, 1: 7} df['c'] = df.index.map(total_by_1st) df a b c 0 1 2 3 1 4 5 7 </code></pre>
<blockquote> <p>I am trying to get the total sum by the first key of the dict and add the result as a new column to my dataframe</p> </blockquote> <p>You can convert to series and sum on level 0:</p> <pre><code>df['new'] = pd.Series(d).sum(level=0) </code></pre> <hr /> <pre><code>print(df) a b new 0 1 2 3 1 4 5 7 </code></pre> <p>Where <code>d</code> is the name of the variable which stores your dictionary. Please note that you should not name a variable same as a builtin (<em><code>d</code> or something similar instead of <code>dict</code></em>)</p>
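<p>Note that in newer pandas versions the <code>level</code> argument of <code>sum</code> is deprecated; the equivalent spelling is an explicit groupby on the index level:</p> <pre><code>df['new'] = pd.Series(d).groupby(level=0).sum()
</code></pre>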
python|pandas|dictionary
3
374,076
66,410,694
Python - Can't match strings from file
<p>I have this <code>textfile.txt</code>:</p> <pre><code>i car air me </code></pre> <p>And a dictionary defined as:</p> <pre><code>dictionary = {&quot;me&quot;:3, &quot;you&quot;:4, &quot;else&quot;: 10, &quot;i&quot;:2} </code></pre> <p>I'm looking for a way to delete the words in textfile.txt from the dictionary in a generalizable way (I'm using a loop here). My attempt:</p> <pre><code>words_to_delete = open(&quot;textfile.txt&quot;, &quot;r&quot;) for i in words_to_delete.readlines(): del dictionary[i] # Output: KeyError: 'i\n' </code></pre> <p>Going further, I think this is because of the following:</p> <pre><code>for i in words_to_delete.readlines(): print(i == &quot;me&quot;) # Output: False, False, False, False </code></pre> <p>Why are the values from the loop not equal to the words in <code>textfile.txt</code>?</p> <p>If I run this:</p> <pre><code>for i in words_to_delete.readlines(): print(type(i)) &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'str'&gt; &lt;class 'str'&gt; </code></pre> <p>Each value is a string, so why do comparisons against literal strings like <code>&quot;me&quot;</code> return <code>False</code>?</p>
<p>You need to trim the whitespace (in this case the trailing <code>\n</code>, which is a newline). Call the <code>strip</code> method on each line before using it as a key (e.g. <code>s.strip()</code>).</p>
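<p>A minimal sketch of the fix applied to the code from the question, with a membership check added, since words like <code>car</code> are not dictionary keys:</p> <pre><code>dictionary = {'me': 3, 'you': 4, 'else': 10, 'i': 2}

with open('textfile.txt') as words_to_delete:
    for line in words_to_delete:
        word = line.strip()      # drops the trailing '\n'
        if word in dictionary:   # avoid KeyError for words not in the dict
            del dictionary[word]

print(dictionary)                # {'you': 4, 'else': 10}
</code></pre>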
numpy
1
374,077
66,471,525
Summarize several columns with looping through columns in python
<p>I have a very strange survey data structure like the below sample. During the survey number of smartphone per household were collected and then collect information about how many individuals use each device for a particular activity.</p> <p>Exmple : F3_{smartphone number}_{HH_member_id} so F3_1_4 will be F3 &amp; {first Household smartphone}=1 &amp; {Number of Household_member_using/sharing this device = 4}</p> <p>Or if 3 members in the household shearing a device , F3_1_1, F3_1_2, F3_1_3 will be 1.</p> <p>I am trying to pull out individual devices and calculate number of phones used for that activity and by how many. Here is my try</p> <pre><code>df_ph = pd.DataFrame() for h in range(1,5): df_shared_ph = pd.DataFrame(None) for i in range(1,15): df_temp_ph = df[[&quot;respid&quot;, &quot;f3_&quot; + str(h) + &quot;_&quot; + str(i)]].copy() df_temp_ph.rename(columns = {&quot;f3_&quot; + str(h) + &quot;_&quot; + str(i): &quot;Smartph&quot;}, inplace = True) df_shared_ph = pd.concat([df_shared_ph, df_temp_ph], axis=0).dropna(subset=[&quot;Smartph&quot;]) df_shared_ph = df_shared_ph.groupby(['respid']).agg({'Smartph': 'sum'}).reset_index() df_ph = pd.concat([df_ph, df_shared_ph], axis=0) print(&quot;used for X and by how many:\n&quot; + str(df_ph['Smartph'].value_counts())) </code></pre> <p>My snippet is working without error but it will duplicate many rows/id in my original data for some reason and I couldn't figure out why. Am I missing something here? is there an alternative way to do this?</p> <pre><code>df_ph.duplicated(['respid']).sum() == 0 False </code></pre> <p>Sample data:</p> <pre><code># output to a dict # the dict can be converted to a dataframe with # df = pd.DataFrame.from_dict(d, orient='index') # d is the name of the dict {0: {'f3_1_1': 1.0, 'f3_1_10': nan, 'f3_1_11': nan, 'f3_1_12': nan, 'f3_1_13': nan,'f3_1_14': nan, 'f3_1_15': nan, 'f3_1_2': 0.0, 'f3_1_3': 0.0,'f3_1_4': 0.0,'f3_1_5': nan,'f3_1_6': nan, 'f3_1_7': nan, 'f3_1_8': nan, 'f3_1_9': nan, 'f3_2_1': 0.0, 'f3_2_10': nan, 'f3_2_11': nan, 'f3_2_12': nan, 'f3_2_13': nan, 'f3_2_14': nan, 'f3_2_15': nan, 'f3_2_2': 1.0, 'f3_2_3': 0.0, 'f3_2_4': 0.0, 'f3_2_5': nan, 'f3_2_6': nan, 'f3_2_7': nan, 'f3_2_8': nan, 'f3_2_9': nan, 'f3_3_1': 0.0, 'f3_3_10': nan, 'f3_3_11': nan, 'f3_3_12': nan, 'f3_3_13': nan, 'f3_3_14': nan, 'f3_3_15': nan, 'f3_3_2': 0.0, 'f3_3_3': 1.0, 'f3_3_4': 0.0, 'f3_3_5': nan, 'f3_3_6': nan, 'f3_3_7': nan, 'f3_3_8': nan, 'f3_3_9': nan, 'f3_4_1': 0.0, 'f3_4_10': nan, 'f3_4_11': nan, 'f3_4_12': nan, 'f3_4_13': nan, 'f3_4_14': nan, 'f3_4_15': nan, 'f3_4_2': 0.0, 'f3_4_3': 0.0, 'f3_4_4': 1.0, 'f3_4_5': nan, 'f3_4_6': nan, 'f3_4_7': nan, 'f3_4_8': nan, 'f3_4_9': nan, 'f3_5_1': nan, 'f3_5_10': nan, 'f3_5_11': nan, 'f3_5_12': nan, 'f3_5_13': nan, 'f3_5_14': nan, 'f3_5_15': nan, 'f3_5_2': nan, 'f3_5_3': nan, 'f3_5_4': nan, 'f3_5_5': nan, 'f3_5_6': nan, 'f3_5_7': nan, 'f3_5_8': nan, 'f3_5_9': nan, 'respid': 13766.0}, 1: {'f3_1_1': nan, 'f3_1_10': nan, 'f3_1_11': nan, 'f3_1_12': nan, 'f3_1_13': nan, 'f3_1_14': nan, 'f3_1_15': nan, 'f3_1_2': nan, 'f3_1_3': nan, 'f3_1_4': nan, 'f3_1_5': nan, 'f3_1_6': nan, 'f3_1_7': nan, 'f3_1_8': nan, 'f3_1_9': nan, 'f3_2_1': nan, 'f3_2_10': nan, 'f3_2_11': nan, 'f3_2_12': nan, 'f3_2_13': nan, 'f3_2_14': nan, 'f3_2_15': nan, 'f3_2_2': nan, 'f3_2_3': nan, 'f3_2_4': nan, 'f3_2_5': nan, 'f3_2_6': nan, 'f3_2_7': nan, 'f3_2_8': nan, 'f3_2_9': nan, 'f3_3_1': nan, 'f3_3_10': nan, 'f3_3_11': nan, 'f3_3_12': nan, 'f3_3_13': nan, 'f3_3_14': nan, 'f3_3_15': nan, 'f3_3_2': nan, 'f3_3_3': nan, 
'f3_3_4': nan, 'f3_3_5': nan, 'f3_3_6': nan, 'f3_3_7': nan, 'f3_3_8': nan, 'f3_3_9': nan, 'f3_4_1': nan, 'f3_4_10': nan, 'f3_4_11': nan, 'f3_4_12': nan, 'f3_4_13': nan, 'f3_4_14': nan, 'f3_4_15': nan, 'f3_4_2': nan, 'f3_4_3': nan, 'f3_4_4': nan, 'f3_4_5': nan, 'f3_4_6': nan, 'f3_4_7': nan, 'f3_4_8': nan, 'f3_4_9': nan, 'f3_5_1': nan, 'f3_5_10': nan, 'f3_5_11': nan, 'f3_5_12': nan, 'f3_5_13': nan, 'f3_5_14': nan, 'f3_5_15': nan, 'f3_5_2': nan, 'f3_5_3': nan, 'f3_5_4': nan, 'f3_5_5': nan, 'f3_5_6': nan, 'f3_5_7': nan, 'f3_5_8': nan, 'f3_5_9': nan, 'respid': 16346.0}, 2: {'f3_1_1': 1.0, 'f3_1_10': nan, 'f3_1_11': nan, 'f3_1_12': nan, 'f3_1_13': nan, 'f3_1_14': nan, 'f3_1_15': nan, 'f3_1_2': 0.0, 'f3_1_3': nan, 'f3_1_4': nan, 'f3_1_5': nan, 'f3_1_6': nan, 'f3_1_7': nan, 'f3_1_8': nan, 'f3_1_9': nan, 'f3_2_1': 0.0, 'f3_2_10': nan, 'f3_2_11': nan, 'f3_2_12': nan, 'f3_2_13': nan, 'f3_2_14': nan, 'f3_2_15': nan, 'f3_2_2': 1.0, 'f3_2_3': nan, 'f3_2_4': nan, 'f3_2_5': nan, 'f3_2_6': nan, 'f3_2_7': nan, 'f3_2_8': nan, 'f3_2_9': nan, 'f3_3_1': nan, 'f3_3_10': nan, 'f3_3_11': nan, 'f3_3_12': nan, 'f3_3_13': nan, 'f3_3_14': nan, 'f3_3_15': nan, 'f3_3_2': nan, 'f3_3_3': nan, 'f3_3_4': nan, 'f3_3_5': nan, 'f3_3_6': nan, 'f3_3_7': nan, 'f3_3_8': nan, 'f3_3_9': nan, 'f3_4_1': nan, 'f3_4_10': nan, 'f3_4_11': nan, 'f3_4_12': nan, 'f3_4_13': nan, 'f3_4_14': nan, 'f3_4_15': nan, 'f3_4_2': nan, 'f3_4_3': nan, 'f3_4_4': nan, 'f3_4_5': nan, 'f3_4_6': nan, 'f3_4_7': nan, 'f3_4_8': nan, 'f3_4_9': nan, 'f3_5_1': nan, 'f3_5_10': nan, 'f3_5_11': nan, 'f3_5_12': nan, 'f3_5_13': nan, 'f3_5_14': nan, 'f3_5_15': nan, 'f3_5_2': nan, 'f3_5_3': nan, 'f3_5_4': nan, 'f3_5_5': nan, 'f3_5_6': nan, 'f3_5_7': nan, 'f3_5_8': nan, 'f3_5_9': nan, 'respid': 11293.0}, 3: {'f3_1_1': nan, 'f3_1_10': nan, 'f3_1_11': nan, 'f3_1_12': nan, 'f3_1_13': nan, 'f3_1_14': nan, 'f3_1_15': nan, 'f3_1_2': nan, 'f3_1_3': nan, 'f3_1_4': nan, 'f3_1_5': nan, 'f3_1_6': nan, 'f3_1_7': nan, 'f3_1_8': nan, 'f3_1_9': nan, 'f3_2_1': nan, 'f3_2_10': nan, 'f3_2_11': nan, 'f3_2_12': nan, 'f3_2_13': nan, 'f3_2_14': nan, 'f3_2_15': nan, 'f3_2_2': nan, 'f3_2_3': nan, 'f3_2_4': nan, 'f3_2_5': nan, 'f3_2_6': nan, 'f3_2_7': nan, 'f3_2_8': nan, 'f3_2_9': nan, 'f3_3_1': nan, 'f3_3_10': nan, 'f3_3_11': nan, 'f3_3_12': nan, 'f3_3_13': nan, 'f3_3_14': nan, 'f3_3_15': nan, 'f3_3_2': nan, 'f3_3_3': nan, 'f3_3_4': nan, 'f3_3_5': nan, 'f3_3_6': nan, 'f3_3_7': nan, 'f3_3_8': nan, 'f3_3_9': nan, 'f3_4_1': nan, 'f3_4_10': nan, 'f3_4_11': nan, 'f3_4_12': nan, 'f3_4_13': nan, 'f3_4_14': nan, 'f3_4_15': nan, 'f3_4_2': nan, 'f3_4_3': nan, 'f3_4_4': nan, 'f3_4_5': nan, 'f3_4_6': nan, 'f3_4_7': nan, 'f3_4_8': nan, 'f3_4_9': nan, 'f3_5_1': nan, 'f3_5_10': nan, 'f3_5_11': nan, 'f3_5_12': nan, 'f3_5_13': nan, 'f3_5_14': nan, 'f3_5_15': nan, 'f3_5_2': nan, 'f3_5_3': nan, 'f3_5_4': nan, 'f3_5_5': nan, 'f3_5_6': nan, 'f3_5_7': nan, 'f3_5_8': nan, 'f3_5_9': nan, 'respid': 15965.0}, 4: {'f3_1_1': 1.0, 'f3_1_10': nan, 'f3_1_11': nan, 'f3_1_12': nan, 'f3_1_13': nan, 'f3_1_14': nan, 'f3_1_15': nan, 'f3_1_2': 0.0, 'f3_1_3': 0.0, 'f3_1_4': nan, 'f3_1_5': nan, 'f3_1_6': nan, 'f3_1_7': nan, 'f3_1_8': nan, 'f3_1_9': nan, 'f3_2_1': 0.0, 'f3_2_10': nan, 'f3_2_11': nan, 'f3_2_12': nan, 'f3_2_13': nan, 'f3_2_14': nan, 'f3_2_15': nan, 'f3_2_2': 1.0, 'f3_2_3': 0.0, 'f3_2_4': nan, 'f3_2_5': nan, 'f3_2_6': nan, 'f3_2_7': nan, 'f3_2_8': nan, 'f3_2_9': nan, 'f3_3_1': 0.0, 'f3_3_10': nan, 'f3_3_11': nan, 'f3_3_12': nan, 'f3_3_13': nan, 'f3_3_14': nan, 'f3_3_15': nan, 'f3_3_2': 0.0, 'f3_3_3': 1.0, 'f3_3_4': nan, 
'f3_3_5': nan, 'f3_3_6': nan, 'f3_3_7': nan, 'f3_3_8': nan, 'f3_3_9': nan, 'f3_4_1': nan, 'f3_4_10': nan, 'f3_4_11': nan, 'f3_4_12': nan, 'f3_4_13': nan, 'f3_4_14': nan, 'f3_4_15': nan, 'f3_4_2': nan, 'f3_4_3': nan, 'f3_4_4': nan, 'f3_4_5': nan, 'f3_4_6': nan, 'f3_4_7': nan, 'f3_4_8': nan, 'f3_4_9': nan, 'f3_5_1': nan, 'f3_5_10': nan, 'f3_5_11': nan, 'f3_5_12': nan, 'f3_5_13': nan, 'f3_5_14': nan, 'f3_5_15': nan, 'f3_5_2': nan, 'f3_5_3': nan, 'f3_5_4': nan, 'f3_5_5': nan, 'f3_5_6': nan, 'f3_5_7': nan, 'f3_5_8': nan, 'f3_5_9': nan, 'respid': 7110.0}} </code></pre>
<p>Clearly you have encoded multi-index columns. You can decode as follows.</p> <pre><code>df = pd.DataFrame.from_dict(d, orient='index').set_index(&quot;respid&quot;) # d is the name of the dict # remove redundant &quot;f3_&quot; from column name df = df.rename(columns={c:c[3:] for c in df.columns if c.startswith(&quot;f3_&quot;)}) # F3_{smartphone number}_{HH_member_id} # make columns a multiindex df.columns = pd.MultiIndex.from_tuples([tuple(c.split(&quot;_&quot;)) for c in df.columns], names=[&quot;smartphone_no&quot;,&quot;household_id&quot;]) # now its simple to work with DF df.stack() </code></pre> <h3>output</h3> <pre><code>smartphone_no 1 2 3 4 5 respid household_id 13766.0 1 1.0 0.0 0.0 0.0 NaN 2 0.0 1.0 0.0 0.0 NaN 3 0.0 0.0 1.0 0.0 NaN 4 0.0 0.0 0.0 1.0 NaN 11293.0 1 1.0 0.0 NaN NaN NaN 2 0.0 1.0 NaN NaN NaN 7110.0 1 1.0 0.0 0.0 NaN NaN 2 0.0 1.0 0.0 NaN NaN 3 0.0 0.0 1.0 NaN NaN </code></pre>
python|pandas|for-loop|pivot-table|reshape
1
374,078
66,743,171
Training using object detection api is not running on GPUs in AI Platform
<p>I am trying to run the training of some models in tensorflow 2 object detection api.</p> <p>I am using this command:</p> <pre><code>gcloud ai-platform jobs submit training segmentation_maskrcnn_`date +%m_%d_%Y_%H_%M_%S` \ --runtime-version 2.1 \ --python-version 3.7 \ --job-dir=gs://${MODEL_DIR} \ --package-path ./object_detection \ --module-name object_detection.model_main_tf2 \ --region us-central1 \ --scale-tier CUSTOM \ --master-machine-type n1-highcpu-32 \ --master-accelerator count=4,type=nvidia-tesla-p100 \ -- \ --model_dir=gs://${MODEL_DIR} \ --pipeline_config_path=gs://${PIPELINE_CONFIG_PATH} </code></pre> <p>The training job is submitted successfully but when I look at my submitted job on AI platform I notice that it's not using the GPUs! <a href="https://i.stack.imgur.com/BRlbi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BRlbi.png" alt="enter image description here" /></a></p> <p>Also, when looking at the logs for my training job, I noticed that in some cases it couldn't open cuda. It would say something like this:</p> <pre><code>Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib64 </code></pre> <p>I was using AI platform for training a few months back and it was successful. I don't know what has changed now! In fact, for my own setup, nothing has changed.</p> <p>For the record, I am training Mask RCNN now. A few months back I trained Faster RCNN and SSD models.</p>
<blockquote> <p>Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib64</p> </blockquote> <p>I'm not sure, as I couldn't test this myself. A quick Google search shows that people have encountered this issue for many reasons, and the solution depends on the cause. The same query has already been asked on SO, and you probably missed it somehow; check it first, <a href="https://stackoverflow.com/questions/55224016/importerror-libcublas-so-10-0-cannot-open-shared-object-file-no-such-file-or">here</a>.</p> <p>Also, check the related issues posted below:</p> <ul> <li><a href="https://github.com/tensorflow/tensorflow/issues/26182" rel="nofollow noreferrer">TensorFlow Issue #26182</a></li> <li><a href="https://github.com/tensorflow/tensorflow/issues/45930" rel="nofollow noreferrer">TensorFlow Issue #45930</a></li> <li><a href="https://github.com/tensorflow/tensorflow/issues/38578" rel="nofollow noreferrer">TensorFlow Issue #38578</a></li> </ul> <p>If the issue remains after checking every possible solution, update your question with the details.</p> <p>I think there is a mismatch between your CUDA versions (<code>CUDA</code>, <code>cuDNN</code>) and your <code>tf</code> version; you should check them first in your working environment. Also, ensure the CUDA path is set properly. According to the given error message, you need to make sure the following is set correctly:</p> <pre><code>export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.0/lib64/ </code></pre>
tensorflow|object-detection|object-detection-api|gcp-ai-platform-training|google-ai-platform
0
374,079
66,616,312
Numpy - count nonzero elements in 3d array
<p>I have a sudoku board stored as <code>blocks = np.full(81, fill_value=0).reshape((9,3,3))</code> (<em>Important note: blocks are indexed sequentially, but to take up less space I show them as a single 9x9 block instead of <code>9x3x3</code>; the middle block is index 4 (instead of <code>(1,1)</code>) and the bottom-left block is index 6</em>).<br /> I want to count the number of nonzero elements per block, for example:</p> <pre><code>[[0 0 0 0 0 0 0 0 0] [2 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0] [0 0 0 5 7 4 0 0 0] [0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0]] </code></pre> <p>This has 1 nonzero in block 0 and 3 in block 4. I'm trying to use np.count_nonzero to achieve this, but the return value is never what I want no matter what axis I set as the parameter.<br /> <strong>What I'd like to have as the output is a 9-long 1d array</strong>, but instead I get a <code>(3,3)</code> if I use count_nonzero along axis 0, and a <code>(9,3)</code> along axes 1 and 2. While <code>axis=2</code> does contain the values I want, they are in different columns. Should I try to extract the values into a 1d array, or is there a way to make this work properly with count_nonzero?</p> <p>Edit: just to clarify, blocks looks like this:</p> <pre><code>[[[0 0 0] [2 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]] [[5 7 4] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]] [[0 0 0] [0 0 0] [0 0 0]]] </code></pre>
<p>I think this is what you are trying to do if I understand your requirements correctly:</p> <pre><code>&gt;&gt;&gt; z array([[0, 0, 0, 0, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 5, 7, 4, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) &gt;&gt;&gt; np.bincount(z.nonzero()[0], minlength=9) array([0, 1, 0, 3, 0, 0, 0, 0, 0]) </code></pre>
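<p>Since <code>blocks</code> is already shaped <code>(9, 3, 3)</code>, you can also get the 9-long result directly by passing a tuple of axes to <code>count_nonzero</code> (supported since NumPy 1.12):</p> <pre><code>&gt;&gt;&gt; np.count_nonzero(blocks, axis=(1, 2))
array([1, 0, 0, 0, 3, 0, 0, 0, 0])
</code></pre>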
python|numpy|multidimensional-array
0
374,080
66,398,540
How to set new columns in a multi-column index from a dict with partially specified tuple keys?
<p>I have a pandas dataframe initialized in the following way:</p> <pre><code>import pandas as pd my_multi_index = pd.MultiIndex.from_tuples([('a', 'a1'), ('a', 'a2'), ('b', 'b1'), ('b', 'b2')], names=['key1', 'key2']) df = pd.DataFrame(data=[[1, 2], [3, 4], [5, 6], [7, 8]], columns=['col1', 'col2'], index=my_multi_index) print(df) </code></pre> <p>which gives:</p> <pre><code># col1 col2 # key1 key2 # a a1 1 2 # a2 3 4 # b b1 5 6 # b2 7 8 </code></pre> <p>Now I'd like to add a new column <code>desc1</code> to this dataframe using partial key slicing BUT not in code, I'd like to do this from configuration i.e. a dictionary with partial tuple keys:</p> <pre><code># i'd like to externalize this and not hardcode it i.e. easier maintenance df.loc[pd.IndexSlice['a', :], 'desc1'] = 'x' df.loc[pd.IndexSlice['b', 'b1'], 'desc1'] = 'y1' df.loc[pd.IndexSlice['b', 'b2'], 'desc1'] = 'y2' print(df) </code></pre> <p>which gives:</p> <pre><code># key1 key2 # a a1 1 2 x # a2 3 4 x # b b1 5 6 y1 # b2 7 8 y2 </code></pre> <p>notice that setting 'x' doesn't depend on the second component of the <code>('a', _)</code> key and setting 'y1' and 'y2' do depend on the second component of the <code>('b', 'b1')</code> key. A possible solution is to fully specify the mapping but this is also not desirable if I have a 100 <code>(a, _)</code> whose assignment doesn't depend on the second component. I wish to reach the above result but not hard-coding the sliced assignments, instead I'd like to do it from an externalized dictionary:</p> <p>My configuration dictionary would look like this:</p> <pre><code>my_dict = { ('a', None): 'x', ('b', 'b1'): 'y1', ('b', 'b2'): 'y2' } </code></pre> <p>Is there a pythonic and pandas-tonic way to apply this dictionary with partially specified keys to reach the sliced assignment produced before?</p>
<p>We can leverage the fact that we can pass tuples as a MultiIndex slicer. Also we slightly adjust your <code>my_dict</code>. Then we apply a simple for loop:</p> <pre><code>my_dict = { ('a',): 'x', ('b', 'b1'): 'y1', ('b', 'b2'): 'y2' } for idx, value in my_dict.items(): df.loc[idx, 'desc1'] = value </code></pre> <pre><code> col1 col2 desc1 key1 key2 a a1 1 2 x a2 3 4 x b b1 5 6 y1 b2 7 8 y2 </code></pre> <hr /> <p>Second option would be to use <code>Index.map</code> and filling in the first value in your dict, so we can use <code>Series.ffill</code>:</p> <pre><code>my_dict = { ('a', 'a1'): 'x', ('b', 'b1'): 'y1', ('b', 'b2'): 'y2' } df['desc1'] = df.index.map(my_dict) df['desc1'] = df['desc1'].ffill() col1 col2 desc1 key1 key2 a a1 1 2 x a2 3 4 x b b1 5 6 y1 b2 7 8 y2 </code></pre>
python|pandas|dataframe|slice|multi-index
2
374,081
66,666,342
Filter a pandas row without repetition based on combination of values in separate columns
<p>I have a dataframe as follows</p> <pre><code>criteria 1 criteria 2 value a1 a2 99 b1 a2 88 c1 a2 77 a1 b2 66 b1 b2 55 c1 b2 44 a1 c2 33 b1 c2 22 c1 c2 11 </code></pre> <p>My intention is to take the rows with the best value (3rd column) such that no value of criteria 1 or criteria 2 is repeated within its column. Here is my desired result:</p> <pre><code>criteria 1 criteria 2 value a1 a2 99 b1 b2 55 c1 c2 11 </code></pre> <p>Could you please share some ideas to deal with this? Thanks!</p>
<p>Try this.</p> <pre><code># sort by value so the best remaining row is always on top
df_new = df.sort_values('value', ascending=False)
result = df_new.iloc[[0],]
for i in range(1, df_new.criteria1.nunique()):
    # drop rows whose criteria1 or criteria2 already appear in the result
    df_new = df_new[~df_new.criteria1.isin(result.criteria1) &amp; ~df_new.criteria2.isin(result.criteria2)]
    # keep the best remaining row
    result = result.append(df_new.iloc[[0],])
</code></pre> <p><strong>Output</strong> (dataframe named 'result')</p> <pre><code>criteria1 criteria2 value a1 a2 99 b1 b2 55 c1 c2 11 </code></pre>
python|python-3.x|pandas|dataframe
1
374,082
66,558,764
Can't Train SSD Inception-V2 with Larger Input Resolution with TensorFlow Object Detection API
<p>I am looking to use the TensorFlow Object Detection API to train SSD Inception-V2 from scratch on a custom dataset with resolution larger than 300x300.</p> <p>I am referencing this as a sample config file: <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_inception_v2_coco.config" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_inception_v2_coco.config</a></p> <p>I have successfully trained a 4-class custom model with okay performance by setting <code>num_classes: 4</code> and pointing the training data path to my custom dataset.</p> <p>However, the input resolution was set to 300x300 with:</p> <pre><code>image_resizer {
  fixed_shape_resizer {
    height: 300
    width: 300
  }
}
</code></pre> <p>My dataset has pretty small objects and I want to increase the input resolution during training.</p> <p>However, if I just change this setting to:</p> <pre><code>image_resizer {
  fixed_shape_resizer {
    height: 640
    width: 640
  }
}
</code></pre> <p>the model does not train at all and the loss stays stagnant. I saw a few other threads that talked about changing the anchor boxes and customizing the SSD network to be compatible with the new resolution.</p> <p>I have tried several configurations of anchor boxes and model customizations but I can never get the model training. (It looks like it's training, but the loss doesn't go down and inference produces garbage outputs.)</p> <p>Has anyone trained SSD Inception-V2 with the TensorFlow Object Detection API on a resolution other than 300x300 who can supply more concrete steps to execute the training?</p>
<p>The original <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">SSD paper</a> from 2016 was designed around 2 specific input image sizes, <code>300x300</code> and <code>512x512</code>, with a VGG-16 backbone and speed as a major design factor. You can try resizing the images to <code>512x512</code> and then training. However, the fact that the repo ships <code>300x300</code> as the default probably means the model and its anchor settings work best when the inputs are of that size and not any other.</p> <p>There are, however, many other models that allow an input size of <code>640x640</code>.</p> <p>In the TensorFlow model zoo - version 1, you have the <code>ssd_resnet50_v1</code> <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync.config" rel="nofollow noreferrer">config file</a>, and in <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">version 2</a> you have many other variants of SSD and EfficientDet that support <code>640x640</code> (with different backbones, however).</p> <p>You will probably get better results by training with the above-mentioned models.</p>
python|tensorflow|deep-learning|object-detection|object-detection-api
0
374,083
66,343,293
Danfo dataFrame - Replace values by index, column
<p>In pandas DataFrames in Python, replacing values by column and index is very straightforward.</p> <p>Example DataFrame:</p> <pre><code>df = pd.DataFrame({'A': [1, 2, 3], 'B': [200, 300, 400]})

   A    B
0  1  200
1  2  300
2  3  400
</code></pre> <p>Replacing values is as simple as:</p> <pre><code>df['A'][0] = 800

     A    B
0  800  200
1    2  300
2    3  400
</code></pre> <hr /> <p>How do you replace a value by column and index in a Danfo DataFrame?</p>
<p>Please try:</p> <pre><code>let df_rep = df.replace({
    &quot;replace&quot;: 1,
    &quot;with&quot;: 800,
    &quot;in&quot;: [&quot;A&quot;]
})
</code></pre> <p>You probably noticed, but just in case: <code>1</code> here is the value to match, not the index, so this replaces by value rather than by position.</p> <p><strong>Official documentation</strong></p> <ul> <li><code>replace</code>: int, float, str. The value to replace.</li> <li><code>with</code>: int, float, str. The new value to replace with.</li> <li><code>in</code>: Array. An array of column names to replace in. If not specified, replace in all columns.</li> </ul> <p><a href="https://danfo.jsdata.org/api-reference/dataframe/danfo.dataframe.replace" rel="nofollow noreferrer">https://danfo.jsdata.org/api-reference/dataframe/danfo.dataframe.replace</a></p>
pandas|dataframe|danfojs
0
374,084
66,583,713
resize video data to fit model.predict
<p>The model was trained the following way</p> <pre><code>model = keras.Sequential() model.add(Conv2D(64, (3, 3), input_shape=(16, 120, 120, 3), padding='same', activation='relu')) </code></pre> <p>How can I resize videos to pass them to <code>trained_model.predict</code> below for prediction?</p> <pre><code>trained_model = load_model(&quot;cyclist.h5&quot;) trained_model.predict('7.avi') </code></pre>
<p>It worked this way (checking <code>ret</code> on every read, so a video with fewer frames than expected does not crash):</p> <pre><code>import cv2
import numpy as np

file = '7.avi'
cap = cv2.VideoCapture(file)
frameCount = 16
frameWidth = 120
frameHeight = 120
buf = np.empty((frameCount, frameHeight, frameWidth, 3), np.dtype('uint8'))

fc = 0
ret = True
while fc &lt; frameCount and ret:
    ret, frame = cap.read()  # ret reflects whether a frame was actually read
    if ret:
        buf[fc] = cv2.resize(frame, (frameWidth, frameHeight), interpolation=cv2.INTER_CUBIC)
        fc += 1

cap.release()
cv2.destroyAllWindows()

trained_model.predict(np.expand_dims(buf, axis=0))
</code></pre>
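<p>One caveat worth checking: OpenCV decodes frames in BGR channel order, while many Keras models are trained on RGB input. Whether a conversion is needed depends on how the training clips were decoded (an assumption here), but if so it is a one-liner per frame before writing into the buffer:</p> <pre><code>frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # only if the model expects RGB
</code></pre>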
python|tensorflow|keras
0
374,085
66,685,673
tf.io.decode_raw returns a tensor; how to make it bytes or a string?
<p>I've been struggling with this for a while. I've searched Stack Overflow and checked the TF2 docs a bunch of times. There is one solution indicated, but I don't understand why my solution doesn't work.</p> <p>In my case, I store a binary string (i.e., bytes) in tfrecords. If I iterate over the dataset via <code>as_numpy_iterator()</code> or directly call <code>numpy()</code> on each item, I can get back the binary string; while iterating the dataset eagerly, it does work.</p> <p>I'm not sure what exactly <code>map()</code> passes to <code>test_callback</code>. The argument doesn't have a <code>numpy</code> method or property, and the same goes for what <code>tf.io.decode_raw</code> returns (it is a Tensor, but it has no <code>numpy</code> either).</p> <p>Essentially I need to take a binary string, parse it via my <code>x = decoder.FromString(y)</code>, and then pass it to my encoder, which will transform the binary string <code>x</code> into a tensor.</p> <pre><code>def test_callback(example_proto):
    # I tried to figure out whether I can use bytes.decode
    # directly and what the most optimal solution is.
    parsed_features = tf.io.decode_raw(example_proto, out_type=tf.uint8)
    # tf.io.decode_raw returns a tensor with N bytes.
    x = creator.FromString(parsed_features.numpy)
    encoded_seq = midi_encoder.encode(x)
    return encoded_seq

raw_dataset = tf.data.TFRecordDataset(filenames=[&quot;main.tfrecord&quot;])
raw_dataset = raw_dataset.map(test_callback)
</code></pre> <p>Thank you, folks.</p>
<p>I found one solution, but I would love to see more suggestions. Wrapping the callback in <code>tf.py_function</code> makes it run eagerly, which is why <code>.numpy()</code> is available inside it, whereas a plain <code>map()</code> traces the function in graph mode and only passes symbolic tensors.</p> <pre><code>def test_callback(example_proto):
    from_string = creator.FromString(example_proto.numpy())
    encoded_seq = encoder.encoder(from_string)
    return encoded_seq

raw_dataset = tf.data.TFRecordDataset(filenames=[&quot;main.tfrecord&quot;])
raw_dataset = raw_dataset.map(lambda x: tf.py_function(test_callback, [x], [tf.int64]))
</code></pre> <p>My understanding is that <code>tf.py_function</code> carries a performance penalty, since it falls back to the Python interpreter.</p> <p>Thank you</p>
tensorflow|tensorflow2.0|tensorflow-datasets
0
374,086
66,693,034
Use pandas.apply with multiple arguments to return several columns
<p>I am trying to preprocess a dataset with pandas. I want to use a function with multiple arguments (one from a column of the dataframe, others are variables) which returns several outputs like this:</p> <pre><code>def preprocess(Series,var1,var2,var3,var4): return 1,2,3,4 </code></pre> <p>I want to use the native pandas.apply to use this function on one column of my dataframe like this:</p> <pre><code>import pandas as pd df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B']) df['C'], df['D'], df['E'], df['F'] = df.apply(lambda x: preprocess(x['A'], 1, 2, 3, 4), axis=1) </code></pre> <p>But the last line gives me the following error:</p> <blockquote> <p>ValueError: not enough values to unpack (expected 4, got 3)</p> </blockquote> <p>I understand my last line returns one tuple of 4 values <code>(1,2,3,4)</code> per line whereas I wanted to get each of these values in the columns <code>C</code>, <code>D</code>, etc.</p> <p>How can I perform this?</p>
<p>You need to rewrite your function to return a Series; that way, <code>apply</code> returns a dataframe:</p> <pre><code>def preprocess(Series, var1, var2, var3, var4):
    return pd.Series([1, 2, 3, 4])
</code></pre> <p>One caveat: unpacking that dataframe with <code>df['C'], df['D'], df['E'], df['F'] = ...</code> iterates over its <em>column labels</em> (0, 1, 2, 3), not its values, so assign all four columns at once instead:</p> <pre><code>df[['C', 'D', 'E', 'F']] = df.apply(lambda x: preprocess(x['A'], 1, 2, 3, 4), axis=1)
</code></pre> <p>which returns</p> <pre><code>   A  B  C  D  E  F
0  4  9  1  2  3  4
1  4  9  1  2  3  4
2  4  9  1  2  3  4
</code></pre> <p><strong>Update</strong>: Without rewriting the function:</p> <pre><code>processed = df.apply(lambda x: preprocess(x['A'], 1, 2, 3, 4), axis=1)
df['C'], df['D'], df['E'], df['F'] = np.array(processed.to_list()).T
</code></pre>
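<p>If rewriting the function is undesirable, <code>apply</code> can also expand tuple results into columns directly via its <code>result_type</code> argument (available since pandas 0.23); a sketch with the question's names:</p> <pre><code>df[['C', 'D', 'E', 'F']] = df.apply(
    lambda x: preprocess(x['A'], 1, 2, 3, 4), axis=1, result_type='expand')
</code></pre>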
python-3.x|pandas|dataframe
1
374,087
66,610,067
Is there a faster/better way to apply a function in order to create a new column, across different axes?
<p>I have two DataFrames that look like this:</p> <p>df1 (pretty small):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>sales</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10</td> </tr> <tr> <td>2</td> <td>20</td> </tr> </tbody> </table> </div> <p>and df2 (very large &gt;5Mil):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>idx1</th> <th>idx2</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> </tr> </tbody> </table> </div> <p>and I want the final to look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>idx1</th> <th>idx2</th> <th>totalSales</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> <td>30</td> </tr> </tbody> </table> </div> <p>I currently have this working but it is very slow:</p> <pre><code>df2['totalSales'] = df2.apply(lambda x: df1.loc[x]['sales'].sum(), axis=1) </code></pre> <p>Are there any faster/better ways to go about this? This works for me just fine, but it takes a very long time to run. Thanks in advance!</p>
<p>This should be faster than <code>apply</code>:</p> <pre><code>df2['totalSales'] = df2.idx1.map(df1.sales) + df2.idx2.map(df1.sales) df2 # idx1 idx2 totalSales #0 1 2 30 </code></pre>
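<p>As a side note, if there were more than two index columns, the same idea generalizes with a small loop. A sketch assuming every key column's name starts with <code>idx</code>:</p> <pre><code>idx_cols = [c for c in df2.columns if c.startswith('idx')]
df2['totalSales'] = sum(df2[c].map(df1.sales) for c in idx_cols)
</code></pre>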
python|pandas|dataframe
1
374,088
66,376,207
How to change an array of a single column to a row of values instead of arrays in Python
<p>I am trying to convert an array of arrays, each containing only one integer, to a single array with just the integers.</p> <p>This is my code below. After the double for loop, <code>k</code> holds the number of filled rows, and the last statement deletes all the unfilled rows and then transposes the result.</p> <pre><code>handles.Background = np.zeros(((len(imgY) * len(imgX)), len(imgZ)))
WhereIsBackground = np.zeros((len(imgY), len(imgX)))
k = 0
for i in range(len(imgY)):
    for j in range(len(imgX)):
        if img[i,j,handles.PS_Index] &lt; (handles.PS_Mean_Intensity / 8):
            handles.Background[k,:] = img[i,j,:]
            WhereIsBackground[i,j] = 1
            k = k+1
handles.Background = np.delete(handles.Background, np.s_[k:(len(imgY)*len(imgX))+1], 0).T
</code></pre> <p>At this point, I can access data by using <code>handles.Background[n]</code>, but this returns an array that contains a single value. I was trying to convert <code>handles.Background</code> so that <code>handles.Background[n]</code> returns a single number instead of an array containing that value. So, I'm getting <code>array([0.])</code> when I run <code>handles.Background[0]</code>, but I want to get just <code>0</code>.</p> <p>I've observed that <code>int(handles.Background[i])</code> returns an integer, and I tried to reassign the values using a for loop, but the result didn't really change. What would be the best option for me?</p> <pre><code>for i in range(len(handles.Background)):
    handles.Background[i] = int(handles.Background[i])
</code></pre>
<p>if <code>handles.Background[n]</code> returns an array, you can index into that, too, using the same [n] notation.</p> <p>So you are looking for</p> <pre><code>handles.Background[n][0] </code></pre> <p>If you want to unpack the whole array at once, you can use this:</p> <pre><code>handles.Background = [bg[0] for bg in handles.Background] </code></pre>
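<p>Alternatively, since this appears to be a NumPy array whose rows each hold a single value (an assumption based on the <code>array([0.])</code> output), flattening keeps it a NumPy array instead of turning it into a Python list:</p> <pre><code>handles.Background = handles.Background.ravel()  # shape (n, 1) -&gt; (n,)
</code></pre>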
python|arrays|numpy|transpose
0
374,089
66,397,600
Name pandas dataframe from a for loop with the use of range and the year in question
<p>I'm trying to get some data from a website, and the data consists of one Excel file per year (from 2015 to 2021). I feel I'm nearly done, but what is missing is being able to save every annual result into a separate dataframe with a distinct name (with the year as a suffix). This probably has a simple solution, and possibly there are other approaches, but what I'm trying to do is have the final dataframe in the code (<code>df_long = ...</code>) not be named <code>df_long</code> but <code>df_long_2015</code>, <code>df_long_2016</code>, etc. as the for loop progresses. I'm thinking of concatenating all the years at the end. The problem now is that on every iteration over a year the <code>df_long</code> dataframe is overwritten, thus losing the result of the previous year. Appreciate any help... thanks.</p> <pre><code>for aar in range(2015,2021+1):
    print(aar)
    url = f'https://www.nordpoolgroup.com/48c8e5/globalassets/marketdata-excel-files/elspot-prices_{aar}_daily_nok.xls'
    liste = pd.read_html(url, parse_dates=True, decimal=',', thousands='.', header=2, index_col=0, encoding='UTF-8')
    df = pd.DataFrame(liste[0])
    df.index = pd.to_datetime(df.index, format = '%Y-%m-%d')
    df_long = df.stack().to_frame()
    df_long.reset_index(inplace=True)
    df_long.columns = ['Dato','Område','Pris']
    filt = df_long['Område'].isin(['Oslo','Bergen','Tr.heim','Tromsø','Kr.sand','Molde'])
    df_long = df_long.loc[filt, :]
</code></pre>
<p>Create an empty dictionary, add each dataframe to the dictionary with the name, e.g. df_long_2015, as the key and the dataframe as the value.</p>
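<p>A minimal sketch of that approach, reusing the loop from the question (the loop body is elided; it is the same scraping/reshaping code as above):</p> <pre><code>frames = {}
for aar in range(2015, 2021 + 1):
    # ... build df_long for this year exactly as in the question ...
    frames[f'df_long_{aar}'] = df_long

# access one year:
df_2015 = frames['df_long_2015']

# or concatenate every year into a single dataframe at the end:
alle = pd.concat(frames.values(), ignore_index=True)
</code></pre>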
python|pandas|dataframe|for-loop
1
374,090
66,704,609
function for multiple conditions in pandas using dictionaries
<p>I am trying to build a function that takes a dataframe and a dictionary and returns a dataframe based on the conditions in the dictionary. My code looks like:</p> <pre><code>import pandas as pd

column_names=['name','surname','age']
lfa=[(&quot;tom&quot;,&quot;jones&quot;,44),(&quot;elvis&quot;,&quot;prestley&quot;,50),(&quot;jim&quot;,&quot;reeves&quot;,30)]
lfa=pd.DataFrame(lfa,columns=column_names)

def flip(df,conditions):
    return (df[(df['name'].isin(['tom']))&amp;
               (df['surname'].isin(['jones']))])

filter={'name':'tom','age':44}
flip(lfa,filter)
</code></pre> <p>I am struggling to find the most efficient way of returning data based on the conditions in the dictionary, i.e. if I pass a filter of name=tom and age=44, it should apply those conditions inside the function.</p> <p>NOTE: I am trying to build a generic function that can take any dataframe with a flexible set of conditions.</p>
<p>Use:</p> <pre><code>df = lfa[lfa[list(f.keys())].eq(f).all(axis=1)] print (df) name surname age 0 tom jones 44 </code></pre> <p><strong>Details</strong>:</p> <p>First filter columns by keys of dictionary:</p> <pre><code>print (lfa[list(f.keys())]) name age 0 tom 44 1 elvis 50 2 jim 30 </code></pre> <p>Compare by dictionary:</p> <pre><code>print (lfa[list(f.keys())].eq(f)) name age 0 True True 1 False False 2 False False </code></pre> <p>And then test if all values match per rows by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a>:</p> <pre><code>print (lfa[list(f.keys())].eq(f).all(axis=1)) 0 True 1 False 2 False dtype: bool </code></pre> <p>If possible some keys no match:</p> <pre><code>f={'name':'tom','age':44, 'aa':78} df = lfa[lfa.reindex(f.keys(), axis=1).eq(f).all(axis=1)] print (df) Empty DataFrame Columns: [name, surname, age] Index: [] </code></pre>
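<p>Wrapped into the generic <code>flip</code> function that the question asks for (a sketch; <code>conditions</code> plays the role of <code>f</code> above):</p> <pre><code>def flip(df, conditions):
    return df[df[list(conditions.keys())].eq(conditions).all(axis=1)]

flip(lfa, {'name': 'tom', 'age': 44})
</code></pre>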
python|pandas
1
374,091
66,671,232
Calculate FLOPS (Floating Point Operations per Second) of TensorFlow lite model
<p>Is there any way to directly measure the FLOPS of a .tflite model? I've found some topics about this, but only for the unconverted model.</p>
<p>There is a tool that measures TFLite model performance. Please take a look at <a href="https://www.tensorflow.org/lite/performance/measurement" rel="nofollow noreferrer">https://www.tensorflow.org/lite/performance/measurement</a>.</p> <p>The benchmark tool can measure useful metrics including initialization time, inference time, memory footprint and so on.</p>
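<p>For reference, a typical invocation of the prebuilt <code>benchmark_model</code> binary looks roughly like this (treat the exact flags as a sketch and check the linked measurement page for the current set):</p> <pre><code>benchmark_model --graph=your_model.tflite --num_threads=4 --enable_op_profiling=true
</code></pre>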
tensorflow|tensorflow-lite
1
374,092
66,622,761
How to speed up scipy.stats.truncnorm for 3D array?
<p>For 1D array, I found that the generation of <code>rsv</code> by <code>truncnorm</code> is at least 1 order of magnitude faster than using <code>norm</code> and <code>np.where</code>. The 1D test code is shown below. Timing result is:</p> <pre><code>atotal time= 0.0018085979972966015 # norm and np.where btotal time= 0.0006862149748485535 # truncnorm </code></pre> <p>For 3D array, I found <code>truncnorm</code> very much slower than using <code>norm</code> and <code>np.where</code>. The 3D test code is shown below. Timing result is:</p> <pre><code>atotal time= 0.29120742401573807 # norm and np.where btotal time= 34.368199132964946 # truncnorm </code></pre> <p><strong>Question:</strong></p> <ol> <li>In the 3D comparison, why is <code>truncnorm</code> 2 orders of magnitude slower than <code>norm</code> and <code>np.where</code>? How can I speed things up for this 3D <code>truncnorm</code>? The 1D Comparison result showed that <code>truncnorm</code> should be faster than the <code>norm</code> and <code>np.where</code> approach.</li> </ol> <p>Essentially, I am trying to use <code>truncnorm</code> for 3D arrays. I don't understand what I am doing wrong. Appreciate advice to overcoming my issue.</p> <p><a href="https://i.stack.imgur.com/Bt3lF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bt3lF.png" alt="1D Comparison" /></a></p> <p><a href="https://i.stack.imgur.com/NKV76.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKV76.png" alt="3D Comparison" /></a></p> <p><strong>Test Code for 1D Comparison:</strong></p> <pre><code>import numpy as np from numpy.random import default_rng, SeedSequence from scipy.stats import norm, truncnorm import time import matplotlib.pyplot as plt p=1000 loc = 3 scale = 1.475 # Approach A: scipy.stats.norm astart=time.perf_counter() sq1 = np.random.SeedSequence(1234567890) rng = default_rng( sq1 ) anumbers = norm.rvs( loc=loc, scale=scale, size=p, random_state=rng ) al = np.where( anumbers&lt;0 ) for x in al[0]: while True: g = norm.rvs( loc=loc, scale=scale, size=1, random_state=rng ) if g &gt;= 0.0: anumbers[x] = g[0] break al = np.where( anumbers&lt;0 ) aend=time.perf_counter() print( f'anumbers={anumbers} size={anumbers.size} mean={anumbers.mean()} std={anumbers.std()}' ) print( f'al = {al} {al[0].size}' ) # Approach B: scipy.stats.truncnorm bstart=time.perf_counter() sq1 = np.random.SeedSequence(1234567890) rng = default_rng( sq1 ) left = 0 right = np.inf a = ( left - loc ) / scale b = ( right - loc ) / scale bnumbers = truncnorm.rvs( a, b, loc=loc, scale=scale, size=p, random_state=rng ) bl = np.where( bnumbers&lt;0 ) bend=time.perf_counter() print( f'\nbnumbers={bnumbers} size={bnumbers.size} mean={bnumbers.mean()} std={bnumbers.std()}' ) print( f'bl = {bl} {bl[0].size}' ) print() print( f'atotal time= {aend-astart}' ) print( f'btotal time= {bend-bstart}' ) fig, ax = plt.subplots(1, 1) ax.set_title('1D Comparison') ax.hist( anumbers, bins=100, label='scipy.stats.norm and numpy.where methods', alpha=0.6 ) ax.hist( bnumbers, bins=100, label='scipy.stats.trucnorm', alpha=0.6 ) ax.legend( loc='upper right' ) plt.show() </code></pre> <p><strong>Test Code for 3D Comparison:</strong></p> <pre><code>import numpy as np from numpy.random import default_rng, SeedSequence from scipy.stats import norm, truncnorm import time import matplotlib.pyplot as plt it=15 s=18 p=1000 pshape=(it,s,p) size = it*s*p mu = 3 sigma = 1.475 loc = np.empty( p, dtype=np.int64 ) loc[:]=mu loc3d = np.broadcast_to( loc[None,None,:], pshape ) scale = 
np.empty( p, dtype=np.float64 ) scale[:]=sigma scale3d = np.broadcast_to( scale[None,None,:], pshape ) print( f'loc3d={loc3d} shape={loc3d.shape}' ) print( f'\nscale3d={scale3d} shape={scale3d.shape}' ) # Approach A: scipy.stats.norm astart=time.perf_counter() sq1 = np.random.SeedSequence(1234567890) rng = default_rng( sq1 ) anumbers3d = norm.rvs( loc=loc3d, scale=scale3d, size=pshape, random_state=rng ) al3d = np.where( anumbers3d&lt;0 ) for x,y,z in zip(*al3d): while True: g = norm.rvs( loc=loc3d[x,y,z], scale=scale3d[x,y,z], size=1, random_state=rng ) if g &gt;= 0.0: anumbers3d[x,y,z] = g[0] break al3d = np.where( anumbers3d&lt;0 ) aend=time.perf_counter() #print( f'anumbers3d={anumbers3d} size={anumbers3d.size} mean={anumbers3d.mean()} std={anumbers3d.std()}' ) print( f'al3d = {al3d} {al3d[0].size}' ) ## Approach B: scipy.stats.truncnorm bstart=time.perf_counter() sq1 = np.random.SeedSequence(1234567890) rng = default_rng( sq1 ) left = np.zeros( p, dtype=np.int64 ) left3d = np.broadcast_to( left[None,None,:], pshape ) right = np.empty( p, dtype=np.float64 ) right[:] = np.inf right3d = np.broadcast_to( right[None,None,:], pshape ) a3d = ( left3d - loc3d ) / scale3d b3d = ( right3d - loc3d ) / scale3d bnumbers3d = truncnorm.rvs( a3d, b3d, loc=loc3d, scale=scale3d, size=pshape, random_state=rng ) bl3d = np.where( bnumbers3d&lt;0 ) bend=time.perf_counter() #print( f'\nbnumbers3d={bnumbers3d} size={bnumbers3d.size} mean={bnumbers3d.mean()} std={bnumbers3d.std()}' ) print( f'bl3d = {bl3d} {bl3d[0].size}' ) print( f'\natotal time= {aend-astart}' ) print( f'btotal time= {bend-bstart}' ) fig,ax = plt.subplots(1, 2) fig.suptitle('3D Comparison', fontsize=18) ax[0].set_title('scipy.stats.norm and numpy.where') ax[1].set_title('scipy.stats.truncnorm') for x in anumbers3d: for y in x: ax[0].hist( y, bins=100, label='anumbers3d', alpha=0.6 ) for x in bnumbers3d: for y in x: ax[1].hist( y, bins=100, label='bnumbers3d', alpha=0.6 ) plt.show() </code></pre>
<p><code>truncnorm.rvs</code> is so much slower in the 3d case because the parameters you feed in for <code>a</code>, <code>b</code>, <code>loc</code>, and <code>scale</code> are arrays the same shape as your desired output. So, for the numbers you show, you need to create four 270,000-element arrays, which will clog your memory (which on its own slows the code), <em>plus</em> scipy needs to check every element of the arrays for every element it generates. It's perfectly fine to use scalars for those parameters, as you do in the 1d case; the shape of the output array is determined by the <code>size</code> argument.</p> <p>A side note: because of this checking-every-element behavior, you won't get the same output using arrays vs. using scalars in the function call, even if you use the same <code>SeedSequence</code>.</p>
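<p>Concretely for the question's 3D case, a sketch using the scalar parameters already defined there (<code>mu</code>, <code>sigma</code>, <code>pshape</code>, <code>rng</code>), since the same distribution applies to every element:</p> <pre><code>a = (0 - mu) / sigma      # scalar lower bound
b = np.inf                # scalar upper bound
bnumbers3d = truncnorm.rvs(a, b, loc=mu, scale=sigma,
                           size=pshape, random_state=rng)
</code></pre>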
python|numpy|scipy
0
374,093
66,718,311
Why does pandas convert UNIX timestamps to multiple different date-time values?
<p>I have a pandas dataframe with UNIX timestamps (these are integers and not time objects). I'd like to convert the UNIX timestamps into local time (according to China timezone). So, based on this, I tried to do the following:</p> <pre><code>import pandas as pd data = {'timestamp': [1540651297, 1540651300, 1540651303, 1540651306, 1540651309, 1540651312]} df = pd.DataFrame (data, columns = ['timestamp']) df df['timestamp1'] = pd.to_datetime(df.timestamp, unit='s') df['timestamp2']=df['timestamp'].apply(lambda d: datetime.datetime.fromtimestamp(int(d)).strftime('%Y-%m-%d %H:%M:%S')) df['timestamp3'] = df['timestamp1'].dt.tz_localize('Asia/Shanghai').dt.tz_convert('UTC') </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>timestamp</th> <th>timestamp1</th> <th>timestamp2</th> <th>timestamp3</th> </tr> </thead> <tbody> <tr> <td>1540651297</td> <td>2018-10-27 14:41:37</td> <td>2018-10-27 22:41:37</td> <td>2018-10-27 06:41:37+00:00</td> </tr> <tr> <td>1540651300</td> <td>2018-10-27 14:41:40</td> <td>2018-10-27 22:41:40</td> <td>2018-10-27 06:41:40+00:00</td> </tr> <tr> <td>1540651303</td> <td>2018-10-27 14:41:43</td> <td>2018-10-27 22:41:43</td> <td>2018-10-27 06:41:43+00:00</td> </tr> <tr> <td>1540651306</td> <td>2018-10-27 14:41:46</td> <td>2018-10-27 22:41:46</td> <td>2018-10-27 06:41:46+00:00</td> </tr> <tr> <td>1540651309</td> <td>2018-10-27 14:41:49</td> <td>2018-10-27 22:41:49</td> <td>2018-10-27 06:41:49+00:00</td> </tr> </tbody> </table> </div>
<pre><code>df['timestamp1'] = pd.to_datetime(df.timestamp, unit='s')
</code></pre> <p>This statement creates a column of datetime values converted from the epoch seconds. The datetime values are time-zone naive and represent UTC.</p> <pre><code>df['timestamp2']=df['timestamp'].apply(lambda d: datetime.datetime.fromtimestamp(int(d)).strftime('%Y-%m-%d %H:%M:%S'))
</code></pre> <p><code>datetime.datetime.fromtimestamp</code> takes in a timestamp and returns a local datetime. For <code>Asia/Shanghai</code>, the offset from UTC would be +8. The datetime values are still time-zone naive.</p> <pre><code>df['timestamp1'].dt.tz_localize('Asia/Shanghai')
</code></pre> <p>This returns a Series of time-zone-aware datetimes built from the time-zone-naive ones (<code>timestamp1</code>).</p> <p><code>2018-10-27 14:41:37</code> becomes <code>2018-10-27 14:41:37+08:00</code>.</p> <pre><code>df['timestamp1'].dt.tz_localize('Asia/Shanghai').dt.tz_convert('UTC')
</code></pre> <p>The <code>dt.tz_convert('UTC')</code> converts a tz-aware datetime from one time zone to another. <code>2018-10-27 14:41:37+08:00</code> is converted to the UTC datetime <code>2018-10-27 06:41:37+00:00</code>. What you should have done instead is</p> <pre><code>df['timestamp3'] = df['timestamp1'].dt.tz_localize('UTC').dt.tz_convert('Asia/Shanghai')
</code></pre> <p>This converts the time-zone-naive UTC datetime to a time-zone-aware UTC datetime and then to the <code>Asia/Shanghai</code> time zone. The result would be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>timestamp</th> <th>timestamp1</th> <th>timestamp2</th> <th>timestamp3</th> </tr> </thead> <tbody> <tr> <td>1540651297</td> <td>2018-10-27 14:41:37</td> <td>2018-10-27 22:41:37</td> <td>2018-10-27 22:41:37+08:00</td> </tr> <tr> <td>1540651300</td> <td>2018-10-27 14:41:40</td> <td>2018-10-27 22:41:40</td> <td>2018-10-27 22:41:40+08:00</td> </tr> <tr> <td>1540651303</td> <td>2018-10-27 14:41:43</td> <td>2018-10-27 22:41:43</td> <td>2018-10-27 22:41:43+08:00</td> </tr> <tr> <td>1540651306</td> <td>2018-10-27 14:41:46</td> <td>2018-10-27 22:41:46</td> <td>2018-10-27 22:41:46+08:00</td> </tr> <tr> <td>1540651309</td> <td>2018-10-27 14:41:49</td> <td>2018-10-27 22:41:49</td> <td>2018-10-27 22:41:49+08:00</td> </tr> </tbody> </table> </div>
python|pandas|datetime|timezone
2
374,094
66,562,140
How do I vectorize a function which has multiple outputs with Numba?
<p>For example, I want to vectorize the following function:</p> <pre><code>@nb.njit(nb.types.UniTuple(nb.float64,2)(nb.float64, nb.float64)) def add_subtract(x,y): return x+y, x-y </code></pre> <p>However, when I use @numba.vectorize like this:</p> <pre><code>@nb.vectorize([nb.types.UniTuple(nb.float64,2)(nb.float64, nb.float64)]) def add_subtract(x,y): return x+y, x-y </code></pre> <p>It will show:</p> <pre><code>NotImplementedError Traceback (most recent call last) &lt;ipython-input-80-4e6510dce1de&gt; in &lt;module&gt; 1 @nb.vectorize([nb.types.UniTuple(nb.float64,2)(nb.float64, nb.float64)]) ----&gt; 2 def add_subtract(x,y): 3 return x+y, x-y E:\anaconda3\lib\site-packages\numba\np\ufunc\decorators.py in wrap(func) 118 vec = Vectorize(func, **kws) 119 for sig in ftylist: --&gt; 120 vec.add(sig) 121 if len(ftylist) &gt; 0: 122 vec.disable_compile() E:\anaconda3\lib\site-packages\numba\np\ufunc\dufunc.py in add(self, sig) 168 &quot;&quot;&quot; 169 args, return_type = sigutils.normalize_signature(sig) --&gt; 170 return self._compile_for_argtys(args, return_type) 171 172 def _compile_for_args(self, *args, **kws): E:\anaconda3\lib\site-packages\numba\np\ufunc\dufunc.py in _compile_for_argtys(self, argtys, return_type) 220 actual_sig = ufuncbuilder._finalize_ufunc_signature( 221 cres, argtys, return_type) --&gt; 222 dtypenums, ptr, env = ufuncbuilder._build_element_wise_ufunc_wrapper( 223 cres, actual_sig) 224 self._add_loop(utils.longint(ptr), dtypenums) E:\anaconda3\lib\site-packages\numba\np\ufunc\ufuncbuilder.py in _build_element_wise_ufunc_wrapper(cres, signature) 177 # Get dtypes 178 dtypenums = [as_dtype(a).num for a in signature.args] --&gt; 179 dtypenums.append(as_dtype(signature.return_type).num) 180 return dtypenums, ptr, cres.environment 181 E:\anaconda3\lib\site-packages\numba\np\numpy_support.py in as_dtype(nbtype) 149 if isinstance(nbtype, types.PyObject): 150 return np.dtype(object) --&gt; 151 raise NotImplementedError(&quot;%r cannot be represented as a Numpy dtype&quot; 152 % (nbtype,)) 153 NotImplementedError: UniTuple(float64 x 2) cannot be represented as a Numpy dtype </code></pre> <p>If I rewrite this function like this:</p> <pre><code>@nb.vectorize([nb.float64[:](nb.float64, nb.float64)]) def add_subtract(x,y): return np.array([x+y, x-y]) </code></pre> <p>It will still show:</p> <pre><code>NotImplementedError: array(float64, 1d, A) cannot be represented as a Numpy dtype </code></pre> <p>How do I vectorize this function? Is it possible to vectorize a function which has multiple outputs?</p>
<p><code>vectorize</code> only works on a single scalar output (broadcasted to the dimensions of your input vector). As a workaround you can use <code>guvectorize</code>:</p> <pre><code>import numpy as np from numba import guvectorize @guvectorize( [&quot;void(float64[:], float64[:] , float64[:], float64[:])&quot;], &quot;(),()-&gt;(),()&quot; ) def add_subtract(x, y, s, d): s[:] = x + y d[:] = x - y dim = (2, 3, 4) x = np.random.random_sample(dim) y = np.random.random_sample(dim) s, d = add_subtract(x, y) </code></pre>
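<p>For context on the pieces above: the layout string <code>&quot;(),()-&gt;(),()&quot;</code> declares two scalar inputs and two scalar outputs, and NumPy then broadcasts the kernel over input arrays of any shape. The outputs are written with slice assignment (<code>s[:] = ...</code>) because they are passed into the kernel as preallocated output buffers rather than returned.</p>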
python|numpy|numba
2
374,095
66,378,218
How to suppress KeyError in Python when dataframe is empty when mapping multiple Foursquare results and an API result is blank
<p>I'm retrieving Foursquare venue data and plotting it on a Folium map. I'm plotting several API call results on the same map.</p> <p>When the API returns an empty JSON result because there are no queried venues within the search, it throws a KeyError because the code is referencing columns in the dataframe that doesn't exist, because the API result is blank.</p> <p>I want to continue to display the map with other results, and have the code ignore or suppress instances where the API result is blank.</p> <p>I've tried try/except/if to test if the dataframe is blank, though cannot figure out how to &quot;ignore the blanks and skip to the next API result&quot;.</p> <p>Any advice would be appreciated.</p> <pre><code>## Foursquare Query 11 - name origin location address = 'Convent Station, NJ' ## Try &quot;Madison, NJ&quot; for working location example geolocator = Nominatim(user_agent=&quot;foursquare_agent&quot;) location = geolocator.geocode(address) latitude = location.latitude longitude = location.longitude ## name search parameters search_query = 'Pharmacy' radius = 1200 ## define corresponding URL url = 'https://api.foursquare.com/v2/venues/search?client_id={}&amp;client_secret={}&amp;ll={},{}&amp;oauth_token={}&amp;v={}&amp;query={}&amp;radius={}&amp;limit={}'.format(CLIENT_ID, CLIENT_SECRET, latitude, longitude,ACCESS_TOKEN, VERSION, search_query, radius, LIMIT) results = requests.get(url).json() ## Convert to pandas dataframe # assign relevant part of JSON to venues venues = results['response']['venues'] venues # tranform venues into a dataframe dataframe = json_normalize(venues) ## Filter results to only areas of interest # keep only columns that include venue name, and anything that is associated with location filtered_columns = ['name', 'categories'] + [col for col in dataframe.columns if col.startswith('location.')] + ['id'] dataframe_filtered = dataframe.loc[:, filtered_columns] # function that extracts the category of the venue def get_category_type(row): try: categories_list = row['categories'] except: categories_list = row['venue.categories'] if len(categories_list) == 0: return None else: return categories_list[0]['name'] # filter the category for each row dataframe_filtered['categories'] = dataframe_filtered.apply(get_category_type, axis=1) # clean column names by keeping only last term dataframe_filtered.columns = [column.split('.')[-1] for column in dataframe_filtered.columns] ## Visualize the data dataframe_filtered.name # add the query 11 pharmacies as blue circle markers for name, lat, lng, label in zip(dataframe_filtered.name, dataframe_filtered.lat, dataframe_filtered.lng, dataframe_filtered.name+&quot; - &quot;+dataframe_filtered.city+&quot;, &quot;+dataframe_filtered.state): folium.CircleMarker( [lat, lng], radius=5, color='blue', popup= label, fill = True, fill_color='blue', fill_opacity=0.6 ).add_to(venues_map) ### ### ### # display map print('Location loaded, search parameters defined, url generated, results saved &amp; converted to dataframe, map generated!') venues_map </code></pre> <p>Error when API result is blank (there are no Foursquare results in the search radius)</p> <pre><code>/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:21: FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead --------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-57-2deac5f680d4&gt; in &lt;module&gt;() 24 # keep only columns that include venue 
name, and anything that is associated with location 25 filtered_columns = ['name', 'categories'] + [col for col in dataframe.columns if col.startswith('location.')] + ['id'] ---&gt; 26 dataframe_filtered = dataframe.loc[:, filtered_columns] 27 28 # function that extracts the category of the venue 6 frames /usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing) 1296 if missing == len(indexer): 1297 axis_name = self.obj._get_axis_name(axis) -&gt; 1298 raise KeyError(f&quot;None of [{key}] are in the [{axis_name}]&quot;) 1299 1300 # We (temporarily) allow for some missing keys with .loc, except in KeyError: &quot;None of [Index(['name', 'categories', 'id'], dtype='object')] are in the [columns]&quot; </code></pre>
<p>If you would just like to silence a KeyError then try this:</p> <pre><code>try: ... except KeyError as ke: pass </code></pre>
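<p>For this specific case, rather than silencing the error after the fact, you can skip a query whose response contains no venues before building the dataframe. A sketch against the response structure shown in the question:</p> <pre><code>venues = results['response'].get('venues', [])
if not venues:
    print('No venues returned for this query; skipping.')
else:
    dataframe = pd.json_normalize(venues)
    # ... filtering and plotting as in the question ...
</code></pre>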
python|pandas|foursquare|keyerror|folium
0
374,096
66,429,404
xtensor : How to write an vector to an array
<p>What is the equivalent in <a href="https://github.com/xtensor-stack/xtensor" rel="nofollow noreferrer">xtensor</a>, or the most efficient way, to write a vector into a row of an array?</p> <p>Thanks</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

array = np.zeros((4, 4))
array[0] = np.array([1, 2, 3, 4])  # this
</code></pre>
<p>The easiest way is</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;xtensor/xbuilder.hpp&gt;  // xt::zeros
#include &lt;xtensor/xtensor.hpp&gt;
#include &lt;xtensor/xview.hpp&gt;

int main()
{
    // equivalent of np.zeros((4, 4)); note that xt::xtensor always
    // needs its rank as a template argument
    xt::xtensor&lt;double, 2&gt; array = xt::zeros&lt;double&gt;({4, 4});
    xt::xtensor&lt;double, 1&gt; row = {1, 2, 3, 4};
    // equivalent of array[0] = ...
    xt::view(array, 0) = row;
    return 0;
}
</code></pre> <p>You can use <code>xt::xarray</code> for flexibility, but it is less efficient.</p>
python|c++|numpy|xtensor
1
374,097
66,532,414
Tensorflow saved model does not contain input names
<p>We are currently training an object detection model in tensorflow 2.4.0 which is working fine. However, to be able to serve it we need to wrap it with an image pre-processing layer that takes the image bytes as input and converts them to the image tensor required by the detection model. See the following code:</p> <pre><code>png_file = 'myfile.png' input_tensor = tf.io.read_file(png_file, name='image_bytes') def preprocessing_layer(inputs): image_tensor = tf.image.decode_image(inputs, channels=3) image_tensor = tf.expand_dims( image_tensor, axis=0, name=None ) return image_tensor model = keras.Sequential( [ keras.Input(tensor=input_tensor, dtype=tf.dtypes.string, name='image_bytes', batch_size=1), tf.keras.layers.Lambda(lambda inp: preprocessing_layer(inp)), yolo_model ] ) model.summary() </code></pre> <p>This wrapped model provides useful detection and if we call <code>model.input_names</code> the correct names are returned: <code>['image_bytes']</code>.</p> <p>Now if we save the model using <code>model.save('model_path')</code> the saved model does not contain the input names anymore and replaces them with generic ones (<code>args_0</code>).</p> <pre><code>signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['args_0'] tensor_info: dtype: DT_STRING shape: () name: serving_default_args_0:0 The given SavedModel SignatureDef contains the following output(s): outputs['model'] tensor_info: dtype: DT_FLOAT shape: (1, 64512, 6) </code></pre> <p>This is a problem because tensorflow serving relies on the name ending with <code>_bytes</code> to convert base64 input.</p> <p>Would you please provide hints on how to retain the input names when saving the model?</p>
<p>The problem stems from the way you defined your lambda layer, and the way you setup your model.</p> <p>Your lambda function should be able to treat a batch, which is currently not the case. You can naively use <code>tf.map_fn</code> to make it handle a batch of images, like so:</p> <pre><code>def preprocessing_layer(str_inputs): def decode(inputs): image_tensor = tf.image.decode_image(inputs[0], channels=3) image_tensor = tf.expand_dims( image_tensor, axis=0, name=None ) return image_tensor return tf.map_fn(decode, str_inputs, fn_output_signature=tf.uint8) </code></pre> <p>Then you can define your model using a symbolic <code>tf.keras.Input</code>, setting the shape to <code>()</code> (to specify no dimension other that the batch size) :</p> <pre><code>model = keras.Sequential( [ keras.Input((), dtype=tf.dtypes.string, name='image_bytes'), tf.keras.layers.Lambda(lambda inp: preprocessing_layer(inp)), yolo_model ] ) </code></pre> <p>Now the model is correctly created, and the signature can be correctly exported.</p>
python|tensorflow|keras|tensorflow-serving|tensorflow2.x
2
374,098
66,464,851
Flattening dictionary with pd.json_normalize
<p>I am currently working on flattening this dictionary file and have reached a number of roadblocks. I am trying to use <code>json_normalize</code> to flatten this data. If I test with individual instances it works, but if I want to flatten all the data it returns an error stating <code>KeyError: '0'</code>. I'm not sure how to fix this.</p> <p>Example of the data:</p> <pre><code>data = {1:{
    'Name': &quot;Thrilling Tales of Dragon Slayers&quot;,
    'IDs':{
        &quot;StoreID&quot;: ['123445452543'],
        &quot;BookID&quot;: ['543533254353'],
        &quot;SalesID&quot;: ['543267765345']}},
    2:{
    'Name': &quot;boring Tales of Dragon Slayers&quot;,
    'IDs':{
        &quot;StoreID&quot;: ['111111', '1121111'],
        &quot;BookID&quot;: ['543533254353', '4324232342'],
        &quot;SalesID&quot;: ['543267765345', '4353543']}}}
</code></pre> <p>My code:</p> <pre><code>d_flat = pd.io.json.json_normalize(data, meta=['Title', 'StoreID', 'BookID', 'SalesID'])
</code></pre>
<h2>Setup</h2> <p>Your data is structured inconveniently. I want to focus on:</p> <ol> <li>Getting the lists in <code>'IDs'</code> into a list of dictionaries, which would be far more convenient.</li> <li>Getting rid of the useless keys in the parent dictionary. All we care about are the values.</li> </ol> <p>Your <code>data</code>:</p> <pre><code>{1: {'Name': 'Thrilling Tales of Dragon Slayers', 'IDs': {'StoreID': ['123445452543'], 'BookID': ['543533254353'], 'SalesID': ['543267765345']}}, 2: {'Name': 'boring Tales of Dragon Slayers', 'IDs': {'StoreID': ['111111', '1121111'], 'BookID': ['543533254353', '4324232342'], 'SalesID': ['543267765345', '4353543']}}} </code></pre> <p>What I want it to look like:</p> <pre><code>[{'Name': 'Thrilling Tales of Dragon Slayers', 'IDs': [{'StoreID': '123445452543', 'BookID': '543533254353', 'SalesID': '543267765345'}]}, {'Name': 'boring Tales of Dragon Slayers', 'IDs': [{'StoreID': '111111', 'BookID': '543533254353', 'SalesID': '543267765345'}, {'StoreID': '1121111', 'BookID': '4324232342', 'SalesID': '4353543'}]}] </code></pre> <hr /> <h2>Restructure Data</h2> <h3>Reasonable Way</h3> <p>Simple loop, don't mess around. This gets us what I showed above</p> <pre><code>new = [] for v in data.values(): temp = {**v} # This is intended to keep all the other data that might be there ids = temp.pop('IDs') # I have to focus on this to create the records temp['IDs'] = [dict(zip(ids, x)) for x in zip(*ids.values())] new.append(temp) </code></pre> <h3>Cute one-liner</h3> <pre><code>new = [{**v, 'IDs': [dict(zip(v['IDs'], x)) for x in zip(*v['IDs'].values())]} for v in data.values()] </code></pre> <h2>Create <code>DataFrame</code> with <code>pd.json_normalize</code></h2> <p>In this call to <code>json_normalize</code> we need to specify the path to the records, i.e. the list of id dictionaries found at the <code>'IDs'</code> key. <code>json_normalize</code> will create one row in the dataframe for every item in that list. This will be done with the the <code>record_path</code> parameter and we pass a <code>tuple</code> that describes the path (if it were in a deeper structure) or a string (if the key is at the top layer, which for us, it is).</p> <pre><code>record_path = 'IDs' </code></pre> <p>Then we want to tell <code>json_normalize</code> what keys are metadata for the records. If there are more than one record, as we have, then the metadata will be repeated for each record.</p> <pre><code>meta = 'Name' </code></pre> <p>So the final solution looks like this:</p> <pre><code>pd.json_normalize(new, record_path='IDs', meta='Name') StoreID BookID SalesID Name 0 123445452543 543533254353 543267765345 Thrilling Tales of Dragon Slayers 1 111111 543533254353 543267765345 boring Tales of Dragon Slayers 2 1121111 4324232342 4353543 boring Tales of Dragon Slayers </code></pre> <hr /> <h2>However</h2> <p>If we are restructuring anyway, might as well make it so we can just pass it to the dataframe constructor.</p> <pre><code>pd.DataFrame([ {'Name': r['Name'], **dict(zip(r['IDs'], x))} for r in data.values() for x in zip(*r['IDs'].values()) ]) Name StoreID BookID SalesID 0 Thrilling Tales of Dragon Slayers 123445452543 543533254353 543267765345 1 boring Tales of Dragon Slayers 111111 543533254353 543267765345 2 boring Tales of Dragon Slayers 1121111 4324232342 4353543 </code></pre> <hr /> <h2>Bonus Content</h2> <p>While we are at it. The data is ambiguous in regards to whether or not each id type has the same number of ids. 
Suppose they did not.</p> <pre><code>data = {1:{ 'Name': &quot;Thrilling Tales of Dragon Slayers&quot;, 'IDs':{ &quot;StoreID&quot;: ['123445452543'], &quot;BookID&quot;: ['543533254353'], &quot;SalesID&quot;: ['543267765345']}}, 2:{ 'Name': &quot;boring Tales of Dragon Slayers&quot;, 'IDs':{ &quot;StoreID&quot;: ['111111', '1121111'], &quot;BookID&quot;: ['543533254353', '4324232342'], &quot;SalesID&quot;: ['543267765345', '4353543', 'extra id']}}} </code></pre> <p>Then we can use <code>zip_longest</code> from <code>itertools</code></p> <pre><code>from itertools import zip_longest pd.DataFrame([ {'Name': r['Name'], **dict(zip(r['IDs'], x))} for r in data.values() for x in zip_longest(*r['IDs'].values()) ]) Name StoreID BookID SalesID 0 Thrilling Tales of Dragon Slayers 123445452543 543533254353 543267765345 1 boring Tales of Dragon Slayers 111111 543533254353 543267765345 2 boring Tales of Dragon Slayers 1121111 4324232342 4353543 3 boring Tales of Dragon Slayers None None extra id </code></pre>
python|json|pandas|dictionary|json-normalize
3
374,099
66,711,799
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. GPU not detected by pytorch
<p>Having trouble with CUDA + Pytorch this is the error. I reinstalled CUDA and cudnn multiple times.</p> <p>Conda env is detecting GPU but its giving errors with pytorch and certain cuda libraries. I tried with Cuda 10.1 and 10.0, and cudnn version 8 and 7.6.5, Added cuda to path and everything.</p> <p>However anaconda is showing cuda tool kit 9.0 is installed, whilst I clearly installed 10.0, so I am not entirely sure what's the deal with that.</p> <pre><code> =&gt; loading model from models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth Traceback (most recent call last): File &quot;hydroman2.py&quot;, line 580, in &lt;module&gt; pose_model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False) File &quot;C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py&quot;, line 593, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File &quot;C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py&quot;, line 773, in _legacy_load result = unpickler.load() File &quot;C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py&quot;, line 729, in persistent_load deserialized_objects[root_key] = restore_location(obj, location) File &quot;C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py&quot;, line 178, in default_restore_location result = fn(storage, location) File &quot;C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py&quot;, line 154, in _cuda_deserialize device = validate_cuda_device(location) File &quot;C:\Users\Fardin\anaconda3\envs\myenv\lib\site-packages\torch\serialization.py&quot;, line 138, in validate_cuda_device raise RuntimeError('Attempting to deserialize object on a CUDA ' RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. </code></pre> <p>System info</p> <pre><code> System info: -------------------------------------------------------------------------------- __Time Stamp__ Report started (local time) : 2021-03-19 19:59:06.957967 UTC start time : 2021-03-19 15:59:06.957967 Running time (s) : 4.003899 __Hardware Information__ Machine : AMD64 CPU Name : znver1 CPU Count : 12 Number of accessible CPUs : 12 List of accessible CPUs cores : 0 1 2 3 4 5 6 7 8 9 10 11 CFS Restrictions (CPUs worth of runtime) : None CPU Features : 64bit adx aes avx avx2 bmi bmi2 clflushopt clzero cmov cx16 cx8 f16c fma fsgsbase fxsr lzcnt mmx movbe mwaitx pclmul popcnt prfchw rdrnd rdseed sahf sha sse sse2 sse3 sse4.1 sse4.2 sse4a ssse3 xsave xsavec xsaveopt xsaves Memory Total (MB) : 16334 Memory Available (MB) : 8787 __OS Information__ Platform Name : Windows-10-10.0.19041-SP0 Platform Release : 10 OS Name : Windows OS Version : 10.0.19041 OS Specific Version : 10 10.0.19041 SP0 Multiprocessor Free Libc Version : ? __Python Information__ Python Compiler : MSC v.1916 64 bit (AMD64) Python Implementation : CPython Python Version : 3.8.5 Python Locale : en_US.cp1252 __LLVM Information__ LLVM Version : 10.0.1 __CUDA Information__ CUDA Device Initialized : True CUDA Driver Version : 11020 CUDA Detect Output: Found 1 CUDA devices id 0 b'GeForce GTX 1070' [SUPPORTED] compute capability: 6.1 pci device id: 0 pci bus id: 6 Summary: 1/1 devices are supported CUDA Librairies Test Output: Finding cublas from &lt;unknown&gt; named cublas.dll trying to open library... 
ERROR: failed to open cublas: Could not find module 'cublas.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding cusparse from &lt;unknown&gt; named cusparse.dll trying to open library... ERROR: failed to open cusparse: Could not find module 'cusparse.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding cufft from &lt;unknown&gt; named cufft.dll trying to open library... ERROR: failed to open cufft: Could not find module 'cufft.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding curand from &lt;unknown&gt; named curand.dll trying to open library... ERROR: failed to open curand: Could not find module 'curand.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding nvvm from &lt;unknown&gt; named nvvm.dll trying to open library... ERROR: failed to open nvvm: Could not find module 'nvvm.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding cudart from &lt;unknown&gt; named cudart.dll trying to open library... ERROR: failed to open cudart: Could not find module 'cudart.dll' (or one of its dependencies). Try using the full path with constructor syntax. Finding libdevice from &lt;unknown&gt; searching for compute_20... ERROR: can't open libdevice for compute_20 searching for compute_30... ERROR: can't open libdevice for compute_30 searching for compute_35... ERROR: can't open libdevice for compute_35 searching for compute_50... ERROR: can't open libdevice for compute_50 __ROC information__ ROC Available : False ROC Toolchains : None HSA Agents Count : 0 HSA Agents: None HSA Discrete GPUs Count : 0 HSA Discrete GPUs : None __SVML Information__ SVML State, config.USING_SVML : True SVML Library Loaded : True llvmlite Using SVML Patched LLVM : True SVML Operational : True __Threading Layer Information__ TBB Threading Layer Available : False +--&gt; Disabled due to Unknown import problem. OpenMP Threading Layer Available : True +--&gt;Vendor: MS Workqueue Threading Layer Available : True +--&gt;Workqueue imported successfully. __Numba Environment Variable Information__ None found. 
__Conda Information__ Conda Build : 3.20.5 Conda Env : 4.9.2 Conda Platform : win-64 Conda Python Version : 3.8.5.final.0 Conda Root Writable : True __Installed Packages__ _pytorch_select 1.1.0 cpu anaconda _tflow_select 2.3.0 mkl anaconda absl-py 0.12.0 pypi_0 pypi alabaster 0.7.12 pypi_0 pypi appdirs 1.4.3 py36h28b3542_0 anaconda argparse 1.4.0 pypi_0 pypi asn1crypto 1.3.0 py36_0 anaconda astor 0.8.1 pyh9f0ad1d_0 conda-forge astunparse 1.6.3 pypi_0 pypi atomicwrites 1.4.0 py_0 anaconda attrs 19.3.0 py_0 anaconda babel 2.9.0 pypi_0 pypi backcall 0.2.0 py_0 anaconda backports 1.0 py_2 anaconda backports.weakref 1.0.post1 py36h9f0ad1d_1001 conda-forge blas 1.0 mkl anaconda bleach 1.5.0 py36_0 conda-forge blinker 1.4 py_1 conda-forge brotlipy 0.7.0 py36he774522_1000 anaconda bzip2 1.0.8 he774522_0 anaconda ca-certificates 2020.10.14 0 anaconda cachetools 4.1.1 py_0 anaconda certifi 2020.6.20 py36_0 anaconda cffi 1.14.0 py36h7a1dbc1_0 anaconda chardet 3.0.4 py36_1003 anaconda click 7.1.2 pyh9f0ad1d_0 conda-forge cloudpickle 1.4.1 py_0 anaconda colorama 0.4.3 py_0 anaconda contextlib2 0.6.0.post1 py_0 anaconda cpuonly 1.0 0 pytorch cryptography 2.9.2 py36h7a1dbc1_0 anaconda cudatoolkit 9.0 1 anaconda cudnn 7.6.5 cuda9.0_0 anaconda curl 7.71.0 h2a8f88b_0 anaconda cycler 0.10.0 py36h009560c_0 anaconda cython 0.29.22 pypi_0 pypi cytoolz 0.10.1 py36he774522_0 anaconda dask-core 2.19.0 py_0 anaconda decorator 4.4.2 py_0 anaconda defusedxml 0.6.0 py_0 anaconda dlib 19.20 py36h5653133_1 conda-forge docker-py 4.2.1 py36h9f0ad1d_0 conda-forge docker-pycreds 0.4.0 py_0 anaconda docutils 0.16 pypi_0 pypi easydict 1.7 pypi_0 pypi entrypoints 0.3 py36_0 anaconda ffmpeg 2.7.0 0 menpo flake8 3.8.3 py_0 anaconda flake8-polyfill 1.0.2 py36_0 anaconda flake8-quotes 3.0.0 pyh9f0ad1d_0 conda-forge flatbuffers 1.12 pypi_0 pypi freetype 2.10.2 hd328e21_0 anaconda gast 0.2.2 pypi_0 pypi geos 3.8.1 h33f27b4_0 anaconda gettext 0.19.8.1 hb01d8f6_1002 conda-forge git 2.23.0 h6bb4b03_0 anaconda glib 2.58.3 py36h04c7ab9_1004 conda-forge google-auth 1.28.0 pypi_0 pypi google-auth-oauthlib 0.4.3 pypi_0 pypi google-pasta 0.2.0 pyh8c360ce_0 conda-forge grpcio 1.32.0 pypi_0 pypi h5py 2.10.0 py36h5e291fa_0 anaconda hdf5 1.10.4 h7ebc959_0 anaconda html5lib 0.9999999 py36_0 conda-forge icc_rt 2019.0.0 h0cc432a_1 anaconda icu 58.2 ha925a31_3 anaconda idna 2.10 py_0 anaconda imageio 2.8.0 py_0 anaconda imageio-ffmpeg 0.4.2 py_0 conda-forge imagesize 1.2.0 pypi_0 pypi imgaug 0.4.0 pypi_0 pypi importlib-metadata 1.7.0 py36_0 anaconda importlib_metadata 1.7.0 0 anaconda intel-openmp 2019.4 245 anaconda ipykernel 5.3.0 py36h5ca1d4c_0 anaconda ipyparallel 6.3.0 pypi_0 pypi ipython 7.16.1 py36h5ca1d4c_0 anaconda ipython_genutils 0.2.0 py36_0 anaconda ipywidgets 7.5.1 py_0 anaconda jedi 0.17.1 py36_0 anaconda jinja2 2.11.2 py_0 anaconda joblib 0.15.1 py_0 anaconda jpeg 9d he774522_0 conda-forge json-tricks 3.15.5 pypi_0 pypi jsonschema 3.2.0 py36_0 anaconda jupyter 1.0.0 py36_7 anaconda jupyter_client 6.1.3 py_0 anaconda jupyter_console 6.1.0 py_0 anaconda jupyter_core 4.6.3 py36_0 anaconda keras-applications 1.0.8 py_1 anaconda keras-preprocessing 1.1.2 pypi_0 pypi kiwisolver 1.2.0 py36h74a9793_0 anaconda krb5 1.18.2 hc04afaa_0 anaconda leptonica 1.78.0 h919f142_2 conda-forge libarchive 3.3.3 h0643e63_5 anaconda libcurl 7.71.0 h2a8f88b_0 anaconda libffi 3.2.1 h6538335_1007 conda-forge libgpuarray 0.7.6 hfa6e2cd_1003 conda-forge libiconv 1.15 vc14h29686d3_5 [vc14] anaconda libmklml 2019.0.5 0 anaconda libpng 1.6.37 h2a8f88b_0 anaconda 
libprotobuf 3.12.3 h7bd577a_0 anaconda libsodium 1.0.18 h62dcd97_0 anaconda libssh2 1.9.0 h7a1dbc1_1 anaconda libtiff 4.1.0 h56a325e_0 anaconda libwebp 1.0.2 hfa6e2cd_5 conda-forge libxml2 2.9.10 h464c3ec_1 anaconda libxslt 1.1.34 he774522_0 anaconda lxml 4.5.0 py36h1350720_0 anaconda lz4-c 1.8.1.2 h2fa13f4_0 anaconda lzo 2.10 he774522_2 anaconda m2w64-gcc-libgfortran 5.3.0 6 conda-forge m2w64-gcc-libs 5.3.0 7 conda-forge m2w64-gcc-libs-core 5.3.0 7 conda-forge m2w64-gmp 6.1.0 2 conda-forge m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge mako 1.1.0 py_0 anaconda markdown 3.3.4 pypi_0 pypi markupsafe 1.1.1 py36he774522_0 anaconda matplotlib 3.1.3 py36_0 anaconda matplotlib-base 3.1.3 py36h64f37c6_0 anaconda mccabe 0.6.1 py36_1 anaconda mistune 0.8.4 py36he774522_0 anaconda mkl 2018.0.3 1 anaconda mkl_fft 1.0.6 py36hdbbee80_0 anaconda mkl_random 1.0.1 py36h77b88f5_1 anaconda mock 4.0.3 pypi_0 pypi more-itertools 8.4.0 py_0 anaconda moviepy 1.0.1 py_0 conda-forge msys2-conda-epoch 20160418 1 conda-forge nbconvert 5.6.1 py36_0 anaconda nbformat 5.0.7 py_0 anaconda networkx 2.4 py_0 anaconda ninja 1.9.0 py36h74a9793_0 anaconda nose 1.3.7 pypi_0 pypi notebook 6.0.3 py36_0 anaconda numpy 1.19.5 pypi_0 pypi oauthlib 3.1.0 py_0 anaconda olefile 0.46 py36_0 anaconda opencv-python 3.4.1.15 pypi_0 pypi openjpeg 2.3.1 h57dd2e7_3 conda-forge openssl 1.1.1h he774522_0 anaconda opt-einsum 3.3.0 pypi_0 pypi packaging 20.4 py_0 anaconda pandas 1.0.3 py36h47e9c7a_0 anaconda pandoc 2.9.2.1 0 anaconda pandocfilters 1.4.2 py36_1 anaconda parso 0.7.0 py_0 anaconda pcre 8.44 ha925a31_0 anaconda pep8-naming 0.8.2 py36_0 anaconda pickleshare 0.7.5 py36_0 anaconda pillow 7.1.2 py36hcc1f983_0 anaconda pip 20.2.4 py36_0 anaconda pluggy 0.13.1 py36_0 anaconda poppler 0.87.0 hdbe765f_0 conda-forge poppler-data 0.4.9 1 conda-forge proglog 0.1.9 py_0 conda-forge prometheus_client 0.8.0 py_0 anaconda prompt-toolkit 3.0.5 py_0 anaconda prompt_toolkit 3.0.5 0 anaconda protobuf 3.12.3 py36h33f27b4_0 anaconda psutil 5.8.0 pypi_0 pypi py 1.9.0 py_0 anaconda pyasn1 0.4.8 py_0 anaconda pyasn1-modules 0.2.8 pypi_0 pypi pycocotools 2.0 pypi_0 pypi pycodestyle 2.6.0 py_0 anaconda pycparser 2.20 py_0 anaconda pyflakes 2.2.0 py_0 anaconda pygments 2.6.1 py_0 anaconda pygpu 0.7.6 py36h7725771_1001 conda-forge pyjwt 1.7.1 py_0 conda-forge pyopenssl 19.1.0 py36_0 anaconda pyparsing 2.4.7 py_0 anaconda pyqt 5.9.2 py36h6538335_2 anaconda pyreadline 2.1 py36_1001 conda-forge pyrsistent 0.16.0 py36he774522_0 anaconda pysocks 1.7.1 py36_0 anaconda pytesseract 0.3.3 pyh8c360ce_0 conda-forge pytest 5.4.3 py36_0 anaconda python 3.6.10 h9f7ef89_1 anaconda python-dateutil 2.8.1 py_0 anaconda python_abi 3.6 1_cp36m conda-forge pytorch 1.5.1 py3.6_cpu_0 [cpuonly] pytorch pytz 2020.1 py_0 anaconda pywavelets 1.1.1 py36he774522_0 anaconda pywin32 223 py36hfa6e2cd_1 anaconda pywinpty 0.5.7 py36_0 anaconda pyyaml 5.3.1 py36he774522_0 anaconda pyzmq 19.0.1 py36ha925a31_1 anaconda qt 5.9.7 vc14h73c81de_0 [vc14] anaconda qtconsole 4.7.5 py_0 anaconda qtpy 1.9.0 py_0 anaconda requests 2.24.0 py_0 anaconda requests-oauthlib 1.3.0 pyh9f0ad1d_0 conda-forge rsa 4.6 pyh9f0ad1d_0 conda-forge scikit-image 0.16.2 py36h47e9c7a_0 anaconda scikit-learn 0.20.1 py36hb854c30_0 anaconda scipy 1.4.1 pypi_0 pypi send2trash 1.5.0 py36_0 anaconda setuptools 50.3.0 py36h9490d1a_1 anaconda shapely 1.6.4 pypi_0 pypi simplejson 3.17.0 py36he774522_0 anaconda sip 4.19.8 py36h6538335_0 anaconda six 1.15.0 py_0 anaconda sklearn 0.0 pypi_0 pypi slidingwindow 0.0.14 pypi_0 
pypi snowballstemmer 2.1.0 pypi_0 pypi sphinx 3.5.2 pypi_0 pypi sphinxcontrib-applehelp 1.0.2 pypi_0 pypi sphinxcontrib-devhelp 1.0.2 pypi_0 pypi sphinxcontrib-htmlhelp 1.0.3 pypi_0 pypi sphinxcontrib-jsmath 1.0.1 pypi_0 pypi sphinxcontrib-qthelp 1.0.3 pypi_0 pypi sphinxcontrib-serializinghtml 1.1.4 pypi_0 pypi sqlite 3.32.3 h2a8f88b_0 anaconda swig 3.0.12 h047fa9f_3 anaconda tbb 2020.0 h74a9793_0 anaconda tbb4py 2020.0 py36h74a9793_0 anaconda tensorboard 1.13.1 pypi_0 pypi tensorboard-plugin-wit 1.8.0 pypi_0 pypi tensorboardx 1.6 py_0 conda-forge tensorflow 2.4.1 pypi_0 pypi tensorflow-estimator 1.13.0 pypi_0 pypi tensorflow-gpu 1.13.1 pypi_0 pypi tensorflow-gpu-estimator 2.1.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi terminado 0.8.3 py36_0 anaconda testpath 0.4.4 py_0 anaconda theano 1.0.4 py36h003fed8_1002 conda-forge threadpoolctl 2.1.0 pyh5ca1d4c_0 anaconda tk 8.6.10 he774522_0 anaconda toolz 0.10.0 py_0 anaconda torchfile 0.1.0 py_0 conda-forge torchvision 0.6.1 py36_cpu [cpuonly] pytorch tornado 6.0.4 py36he774522_1 anaconda tqdm 4.47.0 py_0 anaconda traitlets 4.3.3 py36_0 anaconda typing-extensions 3.7.4.3 pypi_0 pypi urllib3 1.25.11 py_0 anaconda vc 14.1 h0510ff6_4 anaconda visdom 0.1.8.9 0 conda-forge vs2015_runtime 14.16.27012 hf0eaf9b_3 anaconda vs2017_win-64 19.16.27038 h2e3bad8_2 conda-forge vswhere 2.7.1 h21ff451_0 anaconda wcwidth 0.2.5 py_0 anaconda webencodings 0.5.1 py36_1 anaconda websocket-client 0.57.0 py36_1 anaconda werkzeug 1.0.1 pyh9f0ad1d_0 conda-forge wget 1.16.3 0 menpo wheel 0.35.1 py_0 anaconda widgetsnbextension 3.5.1 py36_0 anaconda win_inet_pton 1.1.0 py36_0 anaconda wincertstore 0.2 py36h7fe50ca_0 anaconda winpty 0.4.3 4 anaconda wrapt 1.12.1 py36h68a101e_1 conda-forge xz 5.2.5 h62dcd97_0 anaconda yacs 0.1.8 pypi_0 pypi yaml 0.1.7 hc54c509_2 anaconda zeromq 4.3.2 ha925a31_2 anaconda zipp 3.3.1 py_0 anaconda zlib 1.2.11 h62dcd97_4 anaconda zstd 1.3.7 h508b16e_0 anaconda No errors reported. </code></pre>
<p>Solved.</p> <p>Conda had installed the CPU-only build of PyTorch for some reason (visible as <code>pytorch 1.5.1 py3.6_cpu_0 [cpuonly]</code> in the package list above), and reinstalling PyTorch on its own didn't help while the <code>cpuonly</code> meta-package was still present.</p> <p>Uninstalling PyTorch: <code>conda uninstall pytorch</code></p> <p>Followed by uninstalling cpuonly: <code>conda uninstall cpuonly</code></p> <p>Then installing PyTorch again solved it.</p>
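<p>For completeness, a sketch of the reinstall with CUDA support (the <code>cudatoolkit</code> version here is an assumption; pick the one matching your driver, 10.1 fits the PyTorch 1.5 / GTX 1070 setup described in the question):</p> <pre><code>conda uninstall pytorch torchvision cpuonly
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
</code></pre>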
python-3.x|pytorch
0