Dataset columns (name, dtype, value range or string length range):

  Unnamed: 0   int64    0 to 378k
  id           int64    49.9k to 73.8M
  title        string   lengths 15 to 150
  question     string   lengths 37 to 64.2k
  answer       string   lengths 37 to 44.1k
  tags         string   lengths 5 to 106
  score        int64    -10 to 5.87k
9,600
65,696,566
Pandas merge in a function
<p>I am trying to use pandas merge in a function as shown below.</p> <p>Here is what I am trying to use :</p> <pre><code>test = inv_level(left, right, 'user_id', 'new_id', 'left') #left.merge(right, on='user_id', how='left') </code></pre> <p>where inv_level is a function as defined below</p> <pre><code>def inv_level ( child_inv, parent_inv, lefton, righton, how ): level_inv = pd.merge(parent_inv, child_inv, left_on=lefton, right_on=righton, how=how) return level_inv </code></pre> <p>When I use merge directly without a function it works fine. But, when I use merge in a function and call the function passing parameters, it throws an error. I am not sure what exactly is the issue. I appreciate your input/support.</p> <p>File &quot;C:\temp\env\3.8.6\lib\site-packages\pandas\core\reshape\merge.py&quot;, line 1005, in _get_merge_keys right_keys.append(right._get_label_or_level_values(rk)) File &quot;C:\temp\env\3.8.6\lib\site-packages\pandas\core\generic.py&quot;, line 1563, in _get_label_or_level_values raise KeyError(key) KeyError: 'new_id'</p> <p>Thanks,</p> <p>Following is the complete program.</p> <pre><code>import numpy as np import pandas as pd def inv_level ( child_inv, parent_inv, lefton, righton, how ): level_inv = pd.merge(parent_inv, child_inv, left_on=lefton, right_on=righton, how=how) return level_inv def main(event, context): np.random.seed(0) # transactions left = pd.DataFrame({'transaction_id': ['A', 'B', 'C', 'D'], 'user_id': ['Peter', 'John', 'John', 'Anna'], 'value': np.random.randn(4), }) # users right = pd.DataFrame({'new_id': ['Paul', 'Mary', 'John', 'Anna'], 'favorite_color': ['blue', 'blue', 'red', np.NaN], }) ''' test = inv_level(left, right, 'user_id', 'new_id', 'left') #left.merge(right, on='user_id', how='left') The above throws an error ''' test = pd.merge(left, right, left_on='user_id', right_on='new_id', how='left') print(test) if __name__ == &quot;__main__&quot;: main(&quot;&quot;, &quot;&quot;) </code></pre>
<p>You have effectively swapped <code>parent_inv</code> and <code>child_inv</code>, so in your solution the column <code>new_id</code> was looked up in <code>left</code> instead of <code>right</code>. Swap the first two parameters in the function signature:</p> <pre><code>#swapped DataFrames def inv_level (parent_inv, child_inv, lefton, righton, how ): </code></pre>
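<p>A minimal runnable sketch of the corrected function described in the answer above, keeping the original call unchanged (DataFrame and column names are taken from the question):</p> <pre><code>
import numpy as np
import pandas as pd

# swapped parameter order: the DataFrame passed first is now parent_inv,
# so lefton refers to its columns and righton to the child's columns
def inv_level(parent_inv, child_inv, lefton, righton, how):
    return pd.merge(parent_inv, child_inv, left_on=lefton, right_on=righton, how=how)

left = pd.DataFrame({'transaction_id': ['A', 'B', 'C', 'D'],
                     'user_id': ['Peter', 'John', 'John', 'Anna'],
                     'value': np.random.randn(4)})
right = pd.DataFrame({'new_id': ['Paul', 'Mary', 'John', 'Anna'],
                      'favorite_color': ['blue', 'blue', 'red', np.nan]})

# the same call as in the question now runs without the KeyError
test = inv_level(left, right, 'user_id', 'new_id', 'left')
print(test)
</code></pre>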
python|pandas
0
9,601
65,872,965
How to get unique lists in a Pandas column of lists
<p>I have the following DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame({'name': [&quot;John&quot;, &quot;Jack&quot;, &quot;Jeff&quot;, &quot;Kate&quot;], &quot;hobbies&quot;:[[&quot;pirates&quot;], [&quot;pirates&quot;], [&quot;climbing&quot;, &quot;yoga&quot;], [&quot;yoga&quot;]]}) # name hobbies # 0 John [pirates] # 1 Jack [pirates] # 2 Jeff [climbing, yoga] # 3 Kate [yoga] </code></pre> <p>I would like to have a list of the unique lists in hobbies. Just to be clear, I don't want the list of unique hobbies (i.e. <code>[&quot;pirates&quot;, &quot;climbing&quot;, &quot;yoga&quot;]</code>), which is already covered in several questions including this one: <a href="https://stackoverflow.com/questions/58528989/pandas-get-unique-values-from-column-of-lists">pandas get unique values from column of lists</a></p> <p>I would like instead the list <code>[['pirates'], ['yoga'], ['climbing', 'yoga']]</code>.</p> <p>I have thought of the following way but that does not seem very &quot;panda-ic&quot;:</p> <pre><code>[list(t) for t in {tuple(h) for h in df[&quot;hobbies&quot;]}] </code></pre> <p>Is there a better way to do it?</p>
<p>Let us change the <code>list</code> to <code>tuple</code> so we can do <code>drop_duplicates</code></p> <pre><code>out = df.hobbies.apply(tuple).drop_duplicates().apply(list).tolist() Out[143]: [['pirates'], ['climbing', 'yoga'], ['yoga']] </code></pre> <p>If you do not need converting back to <code>list</code>, you could do:</p> <pre><code>df.hobbies.apply(tuple).unique() </code></pre>
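<p>A self-contained version of the same idea, assuming the example DataFrame from the question (the order of the result follows first appearance):</p> <pre><code>
import pandas as pd

df = pd.DataFrame({'name': ['John', 'Jack', 'Jeff', 'Kate'],
                   'hobbies': [['pirates'], ['pirates'], ['climbing', 'yoga'], ['yoga']]})

# lists are unhashable, tuples are not, so convert before deduplicating
out = df['hobbies'].apply(tuple).drop_duplicates().apply(list).tolist()
print(out)  # [['pirates'], ['climbing', 'yoga'], ['yoga']]
</code></pre>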
python|pandas
3
9,602
63,455,052
Redshift where clause using multiple columns
<p>Suppose I have the following table in <strong>redshift</strong>:</p> <pre><code>table | a | b | |----:|----:| | 3 | 1 | | 1 | 8 | | 7 | 6 | | 4 | 0 | | 5 | 6 | | 5 | 2 | | 5 | 9 | | 4 | 3 | | 7 | 9 | | 9 | 8 | </code></pre> <p>And in python, I have the following list of tuples:</p> <pre><code>x = [(3,1), (4,2), (10, 1), (7,9), (5,2), (6,1)] </code></pre> <p>I want to extract all rows from the table where the tuple <code>(a,b)</code> is in x using pd.read_sql_query`.</p> <p>If I only had one column it would be a simple SQL WHERE clause, something like:</p> <pre><code>query = f''' SELECT * FROM table WHERE a IN {x_sql} ''' pd.read_sql_query(query, engine) </code></pre> <p>My final result would be:</p> <pre><code>| a | b | |----:|----:| | 3 | 1 | | 5 | 2 | | 7 | 9 | </code></pre> <p>I wanted to create a query like:</p> <pre><code>#doesn't work SELECT * FROM table WHERE a,b IN ((3,1), (4,2), (10, 1), (7,9), (5,2), (6,1)) </code></pre>
<p>We can use <code>.stack</code> with <code>isin</code> and <code>.loc</code> to filter along the index:</p> <pre><code>x = [(3,1), (4,2), (10, 1), (7,9), (5,2), (6,1)] df.loc[df.stack().groupby(level=0).agg(tuple).isin(x)] a b 1 3 1 6 5 2 9 7 9 </code></pre>
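<p>A self-contained sketch of the filter above; the example table is built directly as a DataFrame here, whereas in the question it would come from <code>pd.read_sql_query</code>:</p> <pre><code>
import pandas as pd

df = pd.DataFrame({'a': [3, 1, 7, 4, 5, 5, 5, 4, 7, 9],
                   'b': [1, 8, 6, 0, 6, 2, 9, 3, 9, 8]})
x = [(3, 1), (4, 2), (10, 1), (7, 9), (5, 2), (6, 1)]

# collapse each row into an (a, b) tuple, then keep rows whose tuple is in x
mask = df.stack().groupby(level=0).agg(tuple).isin(x)
print(df.loc[mask])
#    a  b
# 0  3  1
# 5  5  2
# 8  7  9
</code></pre>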
python|pandas|amazon-redshift
1
9,603
63,419,344
How to insert elements from a vector into a matrix based on an array of random indexes
<p>Basically, I'm trying to insert elements from a vector into a matrix based on a random index per row.</p> <pre><code>size = 100000 answer_count = 4 num_range = int(1e4) a = torch.randint(-num_range, num_range, size=(size, )) b = torch.randint(-num_range, num_range, size=(size, )) answers = torch.randint(-num_range, num_range, size=(size, answer_count)) for i in range(size): answers[i, np.random.randint(answer_count)] = a[i] + b[i] </code></pre> <p>I tried something like</p> <pre><code>c = a + b pos = torch.randint(answer_count, size=(size, )) answers[:, pos] = c </code></pre> <p>But I'm certainly doing something wrong.</p>
<p>I think you need to change the last line like this:</p> <pre><code>answers[np.arange(size), pos] = c </code></pre> <p>The problem lies in incorrect use of advanced indexing. To understand the difference between the two forms of indexing, try printing out <code>answers[:, pos]</code> vs. <code>answers[np.arange(size), pos]</code> and you will see why the former does not work. <code>answers[np.arange(size), pos]</code> pairs each <code>pos</code> with <strong>a single row</strong>, while <code>answers[:, pos]</code> selects <strong>ALL rows</strong> for each <code>pos</code>. More information on advanced indexing is in the <a href="https://numpy.org/doc/stable/reference/arrays.indexing.html" rel="nofollow noreferrer">numpy docs here</a>.</p>
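<p>A small self-contained sketch of the difference, with the sizes reduced so the output is readable; numpy stands in for torch here purely for illustration, since the indexing behaves the same way:</p> <pre><code>
import numpy as np

size, answer_count = 5, 4
answers = np.zeros((size, answer_count), dtype=int)
c = np.arange(1, size + 1)         # values to insert, one per row
pos = np.array([0, 3, 1, 1, 2])    # target column for each row

answers[np.arange(size), pos] = c  # one (row, column) pair per element of c
print(answers)
# [[1 0 0 0]
#  [0 0 0 2]
#  [0 3 0 0]
#  [0 4 0 0]
#  [0 0 5 0]]
</code></pre>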
python|arrays|numpy|torch
0
9,604
63,460,217
Assign labels: all values are false
<p>I have some problem to assign label whether a condition is satisfied. Specifically, I would like to assign False (or 0) to rows which contains at least one of these words</p> <pre><code>my_list=[&quot;maths&quot;, &quot;science&quot;, &quot;geography&quot;, &quot;statistics&quot;] </code></pre> <p>in one of these fields:</p> <pre><code>path | Subject | Notes </code></pre> <p>and look for these websites <code>webs=[&quot;www.stanford.edu&quot;, &quot;www.ucl.ac.uk&quot;, &quot;www.sorbonne-universite.fr&quot;]</code> in column <code>web</code>.</p> <p>To do this I am using the following code:</p> <pre><code> def part_is_in(x, values): output = False for val in values: if val in str(x): return True break return output def assign_value(filename): my_list=[&quot;maths&quot;, &quot;&quot;, &quot;science&quot;, &quot;geography&quot;, &quot;statistics&quot;] filename['Label'] = filename[['path','subject','notes']].apply(part_is_in, values= my_list) filename['Low_Subject']=filename['Subject'] filename['Low_Notes']=filename['Notes'] lower_cols = [col for col in filename if col not in ['Subject','Notes']] filename[lower_cols]= filename[lower_cols].apply(lambda x: x.astype(str).str.lower(),axis=1) webs=[&quot;https://www.stanford.edu&quot;, &quot;https://www.ucl.ac.uk&quot;, &quot;http://www.sorbonne-universite.fr&quot;] # NEW COLUMN # this is still inside the function but I cannot add an indent within this post filename['Label'] = pd.Series(index = filename.index, dtype='object') for index, row in filename.iterrows(): value = row['web'] if any(x in str(value) for x in webs): filename.at[index,'Label'] = True else: filename.at[index,'Label'] = False for index, row in filename.iterrows(): value = row['Subject'] if any(x in str(value) for x in my_list): filename.at[index,'Label'] = True else: filename.at[index,'Label'] = False for index, row in filename.iterrows(): value = row['Notes'] if any(x in str(value) for x in my_list): filename.at[index,'Label'] = True else: filename.at[index,'Label'] = False for index, row in filename.iterrows(): value = row['path'] if any(x in str(value) for x in my_list): filename.at[index,'Label'] = True else: filename.at[index,'Label'] = False return(filename) </code></pre> <p>My dataset is</p> <pre><code>web path Subject Notes www.stanford.edu /maths/ NA NA www.ucla.com /history/ History of Egypt NA www.kcl.ac.uk /datascience/ Data Science 50 students ... </code></pre> <p>The expected output is:</p> <pre><code>web path Subject Notes Label www.stanford.edu /maths/ NA NA 1 # contains the web and maths www.ucla.com /history/ History of Egypt NA 0 www.kcl.ac.uk /datascience/ Data Science 50 students 1 # contains the word science ... </code></pre> <p>Using my code, I am getting all values <code>False</code>. Are you able to spot the issue?</p>
<ul> <li>The final values in <code>Labels</code> are Booleans <ul> <li>If you want <code>ints</code>, use <code>df.Label = df.Label.astype(int)</code></li> </ul> </li> <li><code>def test_words</code> <ul> <li>fill all <code>NaN</code>s, which are <code>float</code> type, with <code>''</code>, which is <code>str</code> type</li> <li>convert all words to lowercase</li> <li>replace all <code>/</code> with <code>' '</code></li> <li>split on <code>' '</code> to make a list</li> <li>combine all the lists into a single a set</li> <li>use set methods to determine if the row contains a word in <code>my_list</code> <ul> <li><a href="https://docs.python.org/3/library/stdtypes.html#frozenset.intersection" rel="nofollow noreferrer"><code>set.intersection</code></a> <ul> <li><code>{'datascience'}.intersection({'science'})</code> returns an empty <code>set</code>, because there is not intersection.</li> <li><code>{'data', 'science'}.intersection({'science'})</code> returns <code>{'science'}</code>, because there's an intersection on that word.</li> </ul> </li> </ul> </li> </ul> </li> <li><code>lambda x: any(x in y for y in webs)</code> <ul> <li>for each value in <code>webs</code>, check if <code>web</code> is in that value <ul> <li><code>'www.stanford.edu' in 'https://www.stanford.edu'</code> is <code>True</code></li> </ul> </li> <li>evaluates as <code>True</code> if <code>any</code> are <code>True</code>.</li> </ul> </li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd # test data and dataframe data = {'web': ['www.stanford.edu', 'www.ucla.com', 'www.kcl.ac.uk'], 'path': ['/maths/', '/history/', '/datascience/'], 'Subject': [np.nan, 'History of Egypt', 'Data Science'], 'Notes': [np.nan, np.nan, '50 students']} df = pd.DataFrame(data) # given my_list my_list = [&quot;maths&quot;, &quot;science&quot;, &quot;geography&quot;, &quot;statistics&quot;] my_list = set(map(str.lower, my_list)) # convert to a set and verify words are lowercase # given webs; all values should be lowercase webs = [&quot;https://www.stanford.edu&quot;, &quot;https://www.ucl.ac.uk&quot;, &quot;http://www.sorbonne-universite.fr&quot;] # function to test for word content def test_words(v: pd.Series) -&gt; bool: v = v.fillna('').str.lower().str.replace('/', ' ').str.split(' ') # replace na, lower case , convert to list s_set = {st for row in v for st in row if st} # join all the values in the lists to one set return True if s_set.intersection(my_list) else False # True if there is a word intersection between sets # test for conditions in the word columns and the web column df['Label'] = df[['path', 'Subject', 'Notes']].apply(test_words, axis=1) | df.web.apply(lambda x: any(x in y for y in webs)) # display(df) web path Subject Notes Label 0 www.stanford.edu /maths/ NaN NaN True 1 www.ucla.com /history/ History of Egypt NaN False 2 www.kcl.ac.uk /datascience/ Data Science 50 students True </code></pre> <h2>Notes Regarding Original Code</h2> <ul> <li>It's not a good idea to use <code>iterrows</code> multiple times. For a large dataset it will be very time-consuming and error prone. <ul> <li>It was easier to write a new function then interpret the different code blocks for each column.</li> </ul> </li> </ul>
python|pandas
-1
9,605
55,401,003
Is there a way to apply a condition to a regex in pandas?
<p>I'm filtering a column in pandas but want to keep certain values.</p> <p>My goal is to change the values all players that aren't Federer, Nadal and Djokovic to "Other" so</p> <p>Before:</p> <pre><code>winner_name Federer Nadal Djokovic Kyrgios Hewitt </code></pre> <p>After:</p> <pre><code>winner_name Federer Nadal Djokovic Other Other </code></pre> <p>I've tried this</p> <pre><code>df['winner_name'] = df['winner_name'].replace(to_replace=r"^(.(?&lt;!Roger Federer))*?$", value='other',regex=True) </code></pre> <p>but this replaces all the values other than Federer to 'other'.</p> <pre><code>winner_name Federer Other Other Other Other </code></pre> <p>I wish to apply the conditional to more than one value</p>
<p><code>np.where</code> and <code>isin</code> are enough here:</p> <pre><code>df['winner_name'] = np.where(df['winner_name'].isin(['Federer', 'Nadal', 'Djokovic']), df['winner_name'], 'Other') </code></pre>
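<p>A self-contained sketch of that line, assuming the single column from the question:</p> <pre><code>
import numpy as np
import pandas as pd

df = pd.DataFrame({'winner_name': ['Federer', 'Nadal', 'Djokovic', 'Kyrgios', 'Hewitt']})

keep = ['Federer', 'Nadal', 'Djokovic']
df['winner_name'] = np.where(df['winner_name'].isin(keep), df['winner_name'], 'Other')
print(df)
#   winner_name
# 0     Federer
# 1       Nadal
# 2    Djokovic
# 3       Other
# 4       Other
</code></pre>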
python|pandas
2
9,606
55,300,601
Python numpy reshape
<p>I have a variable z storing the length and height features of image files, where z is</p> <pre><code>z = [length, height] </code></pre> <p>and I want to change these dimensions to just:</p> <pre><code>z = [area] where area = length * height </code></pre> <p>I tried using the numpy reshape function as follows:</p> <pre><code>area = z.shape[0] * z.shape[1] #length * height z = z.reshape(-1) #was trying to reduce to just z = [area] </code></pre> <p>but it seems like I'm not using the reshape function correctly. Could anyone help me out?</p>
<p>Simple example of how to use reshape:</p> <pre><code>import numpy as np a = np.random.randint(0,10,(10,10)) b = np.reshape(a, (100,)) print(b) </code></pre> <p>For your case it will be:</p> <pre><code>print(a.shape) # prints (length,height) b = np.reshape(a, (length * height,)) print(b.shape) # prints (length * height,) </code></pre> <p>To perform a reshape in place you can also use:</p> <pre><code>a.shape = ((100,)) </code></pre>
python|numpy
1
9,607
56,482,227
Split sentences into substrings containing varying number of words using pandas
<p>My question is related to this past of question of mine: <a href="https://stackoverflow.com/q/56395681/9024698">Split text in cells and create additional rows for the tokens</a>.</p> <p>Let's suppose that I have the following in a <code>DataFrame</code> in <code>pandas</code>:</p> <pre><code>id text 1 I am the first document and I am very happy. 2 Here is the second document and it likes playing tennis. 3 This is the third document and it looks very good today. </code></pre> <p>and I want to split the text of each id in tokens of random number of words (varying between two values e.g. 1 and 5) so I finally want to have something like the following:</p> <pre><code>id text 1 I am the 1 first document 1 and I am very 1 happy 2 Here is 2 the second document and it 2 likes playing 2 tennis 3 This is the third 3 document and 3 looks very 3 very good today </code></pre> <p>Keep in mind that my dataframe may also have other columns except for these two which should be simply copied at the new dataframe in the same way as <code>id</code> above.</p> <p>What is the most efficient way to do this?</p>
<p>Define a function to extract chunks in a random fashion using <code>itertools.islice</code>:</p> <pre><code>from itertools import islice import random lo, hi = 3, 5 # change this to whatever def extract_chunks(it): chunks = [] while True: chunk = list(islice(it, random.choice(range(lo, hi+1)))) if not chunk: break chunks.append(' '.join(chunk)) return chunks </code></pre> <p>Call the function through a list comprehension to ensure least possible overhead, then <code>stack</code> to get your output:</p> <pre><code>pd.DataFrame([ extract_chunks(iter(text.split())) for text in df['text']], index=df['id'] ).stack() id 1 0 I am the 1 first document and I 2 am very happy. 2 0 Here is the 1 second document and 2 it likes playing tennis. 3 0 This is the third 1 document and it looks 2 very good today. </code></pre> <p>You can extend the <code>extract_chunks</code> function to perform tokenisation. Right now, I use a simple splitting on whitespace which you can modify.</p> <hr> <p>Note that if you have other columns you don't want to touch, you can do something like a <code>melt</code>ing operation here.</p> <pre><code>u = pd.DataFrame([ extract_chunks(iter(text.split())) for text in df['text']]) (pd.concat([df.drop('text', 1), u], axis=1) .melt(df.columns.difference(['text']))) </code></pre>
python|string|pandas|tokenize
2
9,608
67,142,969
How to insert an array into another one using slices
<p>I want to create a large <code>NumPy array</code> (<code>L</code>) to hold the result of some operations. However, I can only compute one part of <code>L</code> at a time. Then, to have <code>L</code>, I need to create an array of zeros of shape <code>L.shape</code>, and fill it using these parts or subarrays. I'm currently able to do it, but in a very inefficient way.</p> <p>If the shape of <code>L</code> is <code>(x, y, z, a, b, c)</code>, then I'm creating <code>a</code> NumPy arrays of shape <code>(x, y, z, 1, b, c)</code> which correspond to the different parts of <code>L</code>, from part <code>0</code> to part <code>a-1</code>. I'm forced to create arrays of this particular shape due to the operations involved.</p> <p>In order to fill the array of zeros, I'm creating one <code>Pandas DataFrame</code> per subarray (or part). Each dataframe contains the indices and the values of one subarray of shape <code>(x, y, z, 1, b, c)</code>, like this:</p> <pre><code>index0 | index1 | index2 | index3 | index4 | index5 | value ------------------------------------------------------------ 0 | 0 | 0 | 0 | 0 | 0 | 434.2 0 | 0 | 0 | 0 | 0 | 1 | 234.5 ..., and so on. </code></pre> <p>Because of the shape <code>(x, y, z, 1, b, c)</code>, <code>index3</code> can only contain zeros. So, there's a change to make before the values can be inserted at the right index of <code>L</code>: the column at <code>index3</code> will contain only <code>0s</code> for the first subarray, only <code>1s</code> for the second subarray, etc. So, <code>df['index3'] = subarray_number</code>, where <code>subarray_number</code> goes from <code>0</code> to <code>a-1</code>. Only the <code>column</code> at <code>index3</code> is changed.</p> <p>So, the fifth subarray represented as a dataframe would look like this:</p> <pre><code>index0 | index1 | index2 | index3 | index4 | index5 | value ------------------------------------------------------------ 0 | 0 | 0 | 4 | 0 | 0 | 434.2 0 | 0 | 0 | 4 | 0 | 1 | 234.5 ... x-1 | y-1 | z-1 | 4 | b-1 | c-1 | 371.8 </code></pre> <p>After this, I only have to iterate over the rows of each of the dataframes using <code>iterrows</code>, and assign the values to the corresponding indices of the array of zeros, like this:</p> <pre class="lang-py prettyprint-override"><code>for subarray_df in subarrays_dfs: for i, row in subarray_df.iterrows(): index0, index1, index2, index3, index4, index5, value = row L[index0][index1][index2][index3][index4][index5] = value </code></pre> <p>The problem is that converting the arrays to dataframes and then assigning the values one by one is expensive, especially for large arrays. I would like to insert the subarrays in <code>L</code> directly without having to go through this intermediate step.</p> <p>I tried using slices but the generated array is not the one I expect. This is what I'm doing:</p> <pre><code>L[:subarray.shape[0], :subarray.shape[1], :subarray.shape[2], subarray_number, :subarray.shape[4], :subarray.shape[5]] = subarray </code></pre> <p>What would be the right way of using slices to fill <code>L</code> the way I need?</p> <p>Thanks!</p>
<p>Your example is not very clear, but maybe you can adapt something from this code snippet. It looks to me like you are generating your L array, shape (x, y, z, a, b, c), by computing <code>a</code> slices, of shape (x, y, z, b, c), equivalent to (x, y, z, 1, b, c). Let me know if I am completely wrong.</p> <pre><code>import numpy as np L = np.zeros((10, 10, 10, 2, 10, 10)) # shape (x, y, z, a, b, c) def compute(): return np.random.rand(10, 10, 10, 10, 10) # shape (x, y, z, b, c) for k in range(L.shape[3]): L[:, :, :, k, :, :] = compute() # Select slice of shape (x, y, z, b, c) </code></pre> <p>Basically, It computes a slice of the array (part of the array) and place it at the desired location.</p> <p>One thing to note, an array of shape <code>(x, y, z, a, b, c)</code> can quickly get out of hand. For instance I naively tried to do <code>L = np.zeros((100, 100, 100, 5, 100, 100))</code>, resulting in a 373 Gb RAM allocation.. Depending on the size of your data, maybe you could work only on each slices and store them to disk when the others are not in use?</p> <p>Following my comment, some snippet for you to get this dimension problem:</p> <pre><code>import numpy as np L = np.zeros((10, 10, 10)) L.shape # (10, 10, 10) L[:, 0, :].shape # (10, 10) L[:, 0:3, :].shape # (10, 3, 10) </code></pre> <p>The slice on <code>:</code> selects all, the slice on <code>x:y</code> selets all from <code>x</code> to <code>y</code> and the slice on a specific index <code>k</code> selects only that 'line'/'column' (analogy for 2D), and thus returning an array of dimension <code>n-1</code>. In 2D, a line or column would be 1D.</p>
python|arrays|pandas|numpy
0
9,609
47,404,102
How do I reuse a trained model in a DNN?
<p>Everyone!</p> <p>I have a question related to reusing a trained model (tensorflow).</p> <p>I have a trained model.</p> <p>I want to predict new data using the trained model.</p> <p>I use DNNClassifier.</p> <p>I have a model.ckpt-200000.meta, model.ckpt-200000.index, checkpoint, and eval folder.</p> <p>But I don't know how to reuse this model.</p> <p>Please help me.</p>
<p>First, you need to import your graph,</p> <pre><code>with tf.Session() as sess: sess.run(tf.global_variables_initializer()) new_saver = tf.train.import_meta_graph('model.ckpt-200000.meta') new_saver.restore(sess, tf.train.latest_checkpoint('./')) </code></pre> <p>Then you can give input to the graph and get the output.</p> <pre><code>graph = tf.get_default_graph() input = graph.get_tensor_by_name("input:0")#input tensor by name feed_dict ={input:} #input to the model #Now, access theoutput operation. op_to_restore = graph.get_tensor_by_name("y_:0") #output tensor print sess.run(op_to_restore,feed_dict) #get output here </code></pre> <p>Few things to note,</p> <ul> <li>You can replace the above code with your training part of the graph (<em>i.e</em> you can get the output without training).</li> <li><p>However, you still have to construct your graph as previously and only replace the training part.</p></li> <li><p>Above method only loading the weights for the constructed graph. Therefore, you have to construct the graph first.</p></li> </ul> <p>A good tutorial on this can be found here, <a href="http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/" rel="nofollow noreferrer">http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/</a></p> <p>If you don't want to construct the graph again you can follow this tutorial, <a href="https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc" rel="nofollow noreferrer">https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc</a></p>
tensorflow
0
9,610
68,328,242
Split rows to create new rows in Pandas Dataframe
<p>I have a pandas dataframe in which one column of text strings contains multiple comma-separated values. I want to split each field and create a new row per entry only where the number of commas is equal to 2. My entire dataframe has only values with either no. of commas =1 or 2. For example, a should become b:</p> <pre><code>In [7]: a Out[7]: var1 var2 var3 0 a,b,c 1 X 1 d,e,f 2 Y 2 g,h 3 Z </code></pre> <pre><code>In [8]: b Out[8]: var1 var2 var3 0 a,c 1 X 1 b,c 1 X 2 d,f 2 Y 3 e,f 2 Y 4 g,h 3 Z </code></pre>
<ul> <li>have taken the approach that you want <strong>combinations</strong> of constituent parts</li> <li>specifically there is a <strong>combination</strong> you want to exclude</li> <li>have used an additional column just for purpose of transparency of solution</li> </ul> <pre><code>import io import itertools df = pd.read_csv(io.StringIO(&quot;&quot;&quot; var1 var2 var3 0 a,b,c 1 X 1 d,e,f 2 Y 2 g,h 3 Z&quot;&quot;&quot;), sep=&quot;\s+&quot;) df[&quot;var1_2&quot;] = df[&quot;var1&quot;].str.split(&quot;,&quot;).apply(lambda x: [&quot;,&quot;.join(list(c)) for c in itertools.combinations(x, 2) if len(x)&lt;=2 or list(c) != x[:2]]) df.explode(&quot;var1_2&quot;) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">var1</th> <th style="text-align: right;">var2</th> <th style="text-align: left;">var3</th> <th style="text-align: left;">var1_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">a,b,c</td> <td style="text-align: right;">1</td> <td style="text-align: left;">X</td> <td style="text-align: left;">a,c</td> </tr> <tr> <td style="text-align: left;">a,b,c</td> <td style="text-align: right;">1</td> <td style="text-align: left;">X</td> <td style="text-align: left;">b,c</td> </tr> <tr> <td style="text-align: left;">d,e,f</td> <td style="text-align: right;">2</td> <td style="text-align: left;">Y</td> <td style="text-align: left;">d,f</td> </tr> <tr> <td style="text-align: left;">d,e,f</td> <td style="text-align: right;">2</td> <td style="text-align: left;">Y</td> <td style="text-align: left;">e,f</td> </tr> <tr> <td style="text-align: left;">g,h</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Z</td> <td style="text-align: left;">g,h</td> </tr> </tbody> </table> </div>
python-3.x|pandas|dataframe|python-2.7|split
1
9,611
68,191,705
Convert tuple to int - python
<p>I have the following table (df):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>shape</th> <th>data</th> </tr> </thead> <tbody> <tr> <td>POLYGON</td> <td>((1280 16068.18, 1294 16059, 1297 16060, 1300 16063, 1303 16065, 1308 16066))</td> </tr> <tr> <td>POINT</td> <td>POINT ((37916311947 12769))</td> </tr> <tr> <td>POLYGON</td> <td>POLYGON ((1906.23 12983, 1908 12982, 1916 12974, 1917 12972, 1917 12970))</td> </tr> </tbody> </table> </div> <p>I would like to convert the table to the following format:</p> <p>Desired output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>converted_data</th> </tr> </thead> <tbody> <tr> <td>[(1280, 16068), (1294, 16059), (1297, 16060), (1300, 16063), (1303, 16065), (1308, 16066)]</td> </tr> <tr> <td>[(37916311947, 12769)]</td> </tr> <tr> <td>[(1906, 12983), (1908, 12982), (1916, 12974), (1917, 12972), (1917, 12970)]</td> </tr> </tbody> </table> </div> <p>I would like to modify the parentheses, add commas, and remove the word POLYGON or POINT. Here is what I have tried so far:</p> <pre><code>res1 = [] for ip, geom in zip(df2['data'], df2['SHAPE']): if geom == 'POINT': st = str(ip)[8:-2] elif geom == 'POLYGON/SURFACE': st = str(ip)[10:-2] s = st.split(',') res1.append(s) res = [] for i in res1: res.append([tuple(map(int, j.split())) for j in i]) data2 = df2.copy() data2['converted_data']=res </code></pre> <p>The above script runs but saves the output as tuples and not ints. How do I optimize my script?</p>
<p>The first part of your code seems fine - In the second part you are probably trying to split <code>i</code> instead of <code>j</code></p> <pre><code>x = '1280 16068.18, 1294 16059, 1297 16060, 1300 16063, 1303 16065, 1308 16066' x_split = [tuple(map(lambda x: int(float(x)), i.strip().split())) for i in x.strip().split(',')] #[(1280, 16068), # (1294, 16059), # (1297, 16060), # (1300, 16063), # (1303, 16065), # (1308, 16066)] </code></pre>
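<p>A sketch of how the split could be applied to the whole <code>data</code> column of the question's DataFrame; the regex that strips the leading POLYGON/POINT label and the outer parentheses is an assumption about the raw strings, not part of the answer above, and the example frame is trimmed for brevity:</p> <pre><code>
import re
import pandas as pd

df = pd.DataFrame({'shape': ['POLYGON', 'POINT'],
                   'data': ['POLYGON ((1906.23 12983, 1908 12982))',
                            'POINT ((37916311947 12769))']})

def to_pairs(s):
    # drop the leading word and the surrounding (( )) before splitting
    s = re.sub(r'^[A-Z/]*\s*\(\(|\)\)$', '', s.strip())
    return [tuple(int(float(v)) for v in part.split()) for part in s.split(',')]

df['converted_data'] = df['data'].apply(to_pairs)
print(df['converted_data'].tolist())
# [[(1906, 12983), (1908, 12982)], [(37916311947, 12769)]]
</code></pre>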
python|pandas|string|integer|parentheses
1
9,612
59,367,908
Sum the values of certain rows to another nearest row based on condition
<p>I have the dataframe as below</p> <pre><code>id log loc pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac A c City 8 0 48 0 0 0 1 A d City 2 6 0 180 4 10 0 A e City 0 22 87 0 0 0 1 A f City 8 0 35 0 0 0 1 A g City 8 2 42 0 0 0 1 A h City 4 4 0 115 4 2 0 A i City 2 0 32 0 0 0 1 B j Hill 3 0 24 0 0 0 1 B k City 6 8 116 0 0 2 1 B l City 2 4 200 0 0 2 1 C m City 2 0 40 0 0 0 0 C n Hill 5 0 1 0 2 0 0 C o City 5 0 7 0 0 5 1 </code></pre> <p>As you can see, there are zeroes(0) in the column <code>As/Ac</code>. What i want to do is, when we have a zero, add the values of the zeros rows to the next 1 row. The result expected is as either of the one below.</p> <p>Here values of the 'zero' rows added to the closet 1 row below it but the 'zero' row itself has not changed.</p> <pre><code>id log loc pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac A c City 8 0 48 0 0 0 1 A d City 2 6 0 180 4 10 0 A e City 2 28 87 180 4 10 1 A f City 8 0 35 0 0 0 1 A g City 8 2 42 0 0 0 1 A h City 4 4 0 115 4 2 0 A i City 6 4 32 115 4 2 1 B j Hill 3 0 24 0 0 0 1 B k City 6 8 116 0 0 2 1 B l City 2 4 200 0 0 2 1 C m City 2 0 40 0 0 0 0 C n Hill 5 0 1 0 2 0 0 C o City 12 0 48 0 5 5 1 </code></pre> <p>or</p> <p>Here values of the 'zero' rows added to the closet 1 row below &amp; also the 'zero' row itself is updated with new values except for column As/Ac. I want As/Ac to be unchanged because i will drop the zero rows later.</p> <pre><code> id log loc pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac A c City 8 0 48 0 0 0 1 A d City 2 28 87 180 4 10 0 A e City 2 28 87 180 4 10 1 A f City 8 0 35 0 0 0 1 A g City 8 2 42 0 0 0 1 A h City 6 4 32 115 4 2 0 A i City 6 4 32 115 4 2 1 B j Hill 3 0 24 0 0 0 1 B k City 6 8 116 0 0 2 1 B l City 2 4 200 0 0 2 1 C m City 12 0 48 0 5 5 0 C n Hill 12 0 48 0 5 5 0 C o City 12 0 48 0 5 5 1 </code></pre> <p>I tried <code>df['As/Ac'].shift(fill_value=0).shift(-1).cumsum()</code> which gives the group' where zero's are followed by one's but i am not able to proceed further (summing them) because i need to retain the first 3 columns &amp; they are different.</p> <p>I also tried as below but i get an error.</p> <pre><code>df['validheads'] = df['As/Ac'].shift(fill_value=0).shift(-1).cumsum() df.iloc[:,3:].groupby(['validheads'],as_index=False).sum() </code></pre>
<p>moys, with some help from jezrael, i finished up my solution, i was missing two lines that i've added below</p> <pre><code>df['Truth'] = df['As/Ac'] == 0 | ( (df['As/Ac'].shift() == 0) &amp; (df['As/Ac'] == 1) ) df['T'] = df['Truth'].ne(df['Truth'].shift()).cumsum() # from jezrael cols = df.select_dtypes(np.number).columns.difference(['T']) df.loc[df['Truth'], cols] = df.loc[df['Truth'], cols] .groupby(df['T']).cumsum() id log loc pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac Truth T 0 A c City 8 0 48 0 0 0 1 False 1 1 A d City 2 6 0 180 4 10 0 True 2 2 A e City 2 28 87 180 4 10 1 True 2 3 A f City 8 0 35 0 0 0 1 False 3 4 A g City 8 2 42 0 0 0 1 False 3 5 A h City 4 4 0 115 4 2 0 True 4 6 A i City 6 4 32 115 4 2 1 True 4 7 B j Hill 3 0 24 0 0 0 1 False 5 8 B k City 6 8 116 0 0 2 1 False 5 9 B l City 2 4 200 0 0 2 1 False 5 10 C m City 2 0 40 0 0 0 0 True 6 11 C n Hill 7 0 41 0 2 0 0 True 6 12 C o City 12 0 48 0 2 5 1 True 6 </code></pre> <p>Modifying Shijith's answer with his permission you will get:</p> <pre><code>In [4658]: df.groupby(df.loc[::-1, 'As/Ac'].cumsum()[::-1]).cumsum() Out[4658]: pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac 0 8 0 48 0 0 0 1 1 2 6 0 180 4 10 0 2 2 28 87 180 4 10 1 3 8 0 35 0 0 0 1 4 8 2 42 0 0 0 1 5 4 4 0 115 4 2 0 6 6 4 32 115 4 2 1 7 3 0 24 0 0 0 1 8 6 8 116 0 0 2 1 9 2 4 200 0 0 2 1 10 2 0 40 0 0 0 0 11 7 0 41 0 2 0 0 12 12 0 48 0 2 5 1 </code></pre>
python|python-3.x|pandas|pandas-groupby
2
9,613
57,284,345
How to install nvidia apex on Google Colab
<p>what I did is follow the instruction on the official github site</p> <pre><code>!git clone https://github.com/NVIDIA/apex !cd apex !pip install -v --no-cache-dir ./ </code></pre> <p>it gives me the error:</p> <pre><code>ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. Exception information: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 178, in main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 326, in run self.name, wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 268, in populate_requirement_set wheel_cache=wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/constructors.py", line 248, in install_req_from_line "nor 'pyproject.toml' found." % name pip._internal.exceptions.InstallationError: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. </code></pre>
<p>Worked for me after adding CUDA_HOME enviroment variable:</p> <pre><code>%%writefile setup.sh export CUDA_HOME=/usr/local/cuda-10.1 git clone https://github.com/NVIDIA/apex pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex </code></pre> <pre><code>!sh setup.sh </code></pre>
python|gpu|pytorch|nvidia|google-colaboratory
18
9,614
57,007,490
How to find which values of a dataframe 'significantly' differ from a specific mean
<p>I am creating a Pandas DataFrame where one column is the temperature at half hourly intervals for the year. </p> <blockquote> <p>I want to create a column which on each row contains the mean value for that month at that time. </p> </blockquote> <p>For example, in the row containing the value: "13:00:00 2018-02-02", I want the value to be the average of the temperature readings taken at 1pm during February. I am doing this so that I can identify which specific times have unusual reading of the temperature.</p> <p>I have tried to do this by using .loc and for-loops.</p> <p>Here is my code, I run this and get an error message. </p> <pre><code>import numpy as np import datetime as dat #df_train has been defined and is a Pandas DataFrame df_train['Time']=df_train['Date and Time'].dt.time df_train['Month']=df_train['Date and Time'].dt.month times=np.array(df_train.loc[df_train['Date']==dat.date(2018, 1, 2)].Time) means=[] for i in range(1,13): df_hour=df_train.loc[df_train['Month']==int(i)] for time in times: df_hour=df_hour.loc[df_hour['Time']==time] means.append(df_hour['Temp'].values.mean()) </code></pre> <p>I was hoping that I could then add means to my dataframe.</p> <p>The error read:</p> <pre><code>C:\Users\ocallaghan_m\Desktop\Forecasting\Python_Code\Neural Networks\Non Recursive NN\48 steps type\Next Day With Day Type and BH &amp; Weather\data.py:74: RuntimeWarning: Mean of empty slice. means.append(df_hour['Temp'].values.mean()) </code></pre> <p>Any help with this code or any alternative methods would be greatly appreciated.</p>
<p>I think you can use pandas' <code>groupby()</code> method to achieve what you want (instead of the for loops).</p> <p>Here is the code:</p> <pre><code>means = df_train.groupby(['Month', 'Time']).Temp.mean() df_train.set_index(['Month', 'Time'], inplace=True) df_train['Mean'] = means df_train.reset_index(inplace=True) </code></pre>
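<p>An equivalent variant using <code>groupby().transform</code>, which returns the per-group mean already aligned to the original rows, so no index juggling is needed (column names as in the question are assumed):</p> <pre><code>
# mean temperature over all readings that share this row's Month and Time,
# returned as a Series aligned to df_train's index
df_train['Mean'] = df_train.groupby(['Month', 'Time'])['Temp'].transform('mean')

# deviation of each half-hourly reading from its month/time-of-day average
df_train['Deviation'] = df_train['Temp'] - df_train['Mean']
</code></pre>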
python|pandas
1
9,615
45,976,111
How can i use Seaborn.lmplot function without naming DataFrame columns?
<p>I want to use <strong><em>Seaborn.lmplot</em></strong> to scatter a DataFrame's <strong>"column 1"</strong> vs <strong>"column 2"</strong>, but according to the Seaborn documentation:</p> <blockquote> <p>Seaborn.lmplot(x, y, data, hue=None, ...)</p> </blockquote> <p>we should provide column names when using this function; in other words, the DataFrame's columns should be strings instead of integers.</p> <p>I've tried the following methods, but none of them works:</p> <pre><code>import seaborn as sns import pandas as pd df = pd.read_csv('someData.csv',sep=',', header=None) sns.lmplot(0,1, data= df) sns.lmplot(df[0], df[1]) sns.lmplot("0","1",data= df) </code></pre> <p>My dataFrame looks something like this: <a href="https://i.stack.imgur.com/iEyfx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iEyfx.png" alt="enter image description here"></a></p> <p>Is there any way I can use Seaborn.lmplot without naming the columns?</p>
<p>Seaborn does not allow unnamed columns.<br> However, an easy solution is to rename the columns just for the purpose of plotting. Mapping each integer label to a string (<code>lambda x: str(x)</code>) allows strings to be used as column names for the seaborn plot.</p> <pre><code>sns.lmplot(x="0", y="1", data=df.rename(columns=lambda x: str(x))) </code></pre> <p>The original dataframe stays unchanged.</p>
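<p>A slightly shorter form of the same trick: <code>rename</code> accepts any callable, so the built-in <code>str</code> can replace the lambda (a sketch, assuming the two unnamed columns from the question):</p> <pre><code>
import pandas as pd
import seaborn as sns

df = pd.read_csv('someData.csv', sep=',', header=None)

# integer column labels 0 and 1 become the strings '0' and '1' for plotting only
sns.lmplot(x='0', y='1', data=df.rename(columns=str))
</code></pre>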
python|pandas|dataframe|seaborn
5
9,616
51,080,169
Tensorflow: JRE Fatal error (SIGILL (0x4)) at loading _clustering_ops.so
<p>Created a test java application which loads a trained python model through Tensorflow.</p> <p>Had to add the below line to fix this exception "Op type not registered 'NearestNeighbors' in binary"</p> <pre><code>TensorFlow.loadLibrary(/tmp/path/to/_clustering_ops.so); </code></pre> <p>My application runs with no issue on my computer. </p> <p>However, when running the application on a server, the application crashes with the following details.</p> <pre><code># A fatal error has been detected by the Java Runtime Environment: # # SIGILL (0x4) at pc=0x00007f40a00d923a, pid=1412, tid=0x00007f405a9e7700 # # JRE version: OpenJDK Runtime Environment (8.0_171-b11) (build 1.8.0_171-8u171-b11-0ubuntu0.16.04.1-b11) # Java VM: OpenJDK 64-Bit Server VM (25.171-b11 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [clustering_ops.so+0x823a] Eigen::PlainObjectBase&lt;Eigen::Matrix&lt;float, -1, 1, 0, -1, 1&gt; &gt;::PlainObjectBase&lt;Eigen::CwiseBinaryOp&lt;Eigen::internal::scalar_product_op&lt;float, float&gt;, Eigen::CwiseNullaryOp&lt;Eigen::internal::scalar_constant_op&lt;float&gt;, Eigen::Matrix&lt;float, -1, 1, 0, -1, 1&gt; const&gt; const, Eigen::PartialReduxExpr&lt;Eigen::Map&lt;Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const, 0, Eigen::Stride&lt;0, 0&gt; &gt; const, Eigen::internal::member_squaredNorm&lt;float&gt;, 1&gt; const&gt; &gt; (Eigen::DenseBase&lt;Eigen::CwiseBinaryOp&lt;Eigen::internal::scalar_product_op&lt;float, float&gt;, Eigen::CwiseNullaryOp&lt;Eigen::internal::scalar_constant_op&lt;float&gt;, Eigen::Matrix&lt;float, -1, 1, 0, -1, 1&gt; const&gt; const, Eigen::PartialReduxExpr&lt;Eigen::Map&lt;Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const, 0, Eigen::Stride&lt;0, 0&gt; &gt; const, Eigen::internal::member_squaredNorm&lt;float&gt;, 1&gt; const&gt; &gt; const&amp;)+0x6a </code></pre> <p>Debugging:</p> <pre><code>(gdb) disassemble Dump of assembler code for function __GI_raise: 0x00007f8bad12f3f0 &lt;+0&gt;: mov %fs:0x2d4,%ecx 0x00007f8bad12f3f8 &lt;+8&gt;: mov %fs:0x2d0,%eax 0x00007f8bad12f400 &lt;+16&gt;: movslq %eax,%rsi 0x00007f8bad12f403 &lt;+19&gt;: test %esi,%esi 0x00007f8bad12f405 &lt;+21&gt;: jne 0x7f8bad12f438 &lt;__GI_raise+72&gt; 0x00007f8bad12f407 &lt;+23&gt;: mov $0xba,%eax 0x00007f8bad12f40c &lt;+28&gt;: syscall 0x00007f8bad12f40e &lt;+30&gt;: mov %eax,%ecx 0x00007f8bad12f410 &lt;+32&gt;: mov %eax,%fs:0x2d0 0x00007f8bad12f418 &lt;+40&gt;: movslq %eax,%rsi 0x00007f8bad12f41b &lt;+43&gt;: movslq %edi,%rdx 0x00007f8bad12f41e &lt;+46&gt;: mov $0xea,%eax 0x00007f8bad12f423 &lt;+51&gt;: movslq %ecx,%rdi 0x00007f8bad12f426 &lt;+54&gt;: syscall =&gt; 0x00007f8bad12f428 &lt;+56&gt;: cmp $0xfffffffffffff000,%rax 0x00007f8bad12f42e &lt;+62&gt;: ja 0x7f8bad12f450 &lt;__GI_raise+96&gt; 0x00007f8bad12f430 &lt;+64&gt;: repz retq 0x00007f8bad12f432 &lt;+66&gt;: nopw 0x0(%rax,%rax,1) 0x00007f8bad12f438 &lt;+72&gt;: test %ecx,%ecx 0x00007f8bad12f43a &lt;+74&gt;: jg 0x7f8bad12f41b &lt;__GI_raise+43&gt; 0x00007f8bad12f43c &lt;+76&gt;: mov %ecx,%edx 0x00007f8bad12f43e &lt;+78&gt;: neg %edx 0x00007f8bad12f440 &lt;+80&gt;: and $0x7fffffff,%ecx 0x00007f8bad12f446 &lt;+86&gt;: cmove %esi,%edx 0x00007f8bad12f449 &lt;+89&gt;: mov %edx,%ecx 0x00007f8bad12f44b &lt;+91&gt;: jmp 0x7f8bad12f41b &lt;__GI_raise+43&gt; 0x00007f8bad12f44d &lt;+93&gt;: nopl (%rax) 0x00007f8bad12f450 &lt;+96&gt;: mov 0x38ea21(%rip),%rdx # 0x7f8bad4bde78 0x00007f8bad12f457 &lt;+103&gt;: neg %eax 0x00007f8bad12f459 &lt;+105&gt;: mov %eax,%fs:(%rdx) 0x00007f8bad12f45c &lt;+108&gt;: mov $0xffffffff,%eax 
0x00007f8bad12f461 &lt;+113&gt;: retq End of assembler dump. (gdb) bt #0 0x00007f8bad12f428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54 #1 0x00007f8bad13102a in __GI_abort () at abort.c:89 #2 0x00007f8bac432c59 in ?? () from /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so #3 0x00007f8bac5e8047 in ?? () from /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so #4 0x00007f8bac43c6ef in JVM_handle_linux_signal () from /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so #5 0x00007f8bac42fd88 in ?? () from /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so #6 &lt;signal handler called&gt; #7 0x00007f8ba808023a in Eigen::PlainObjectBase&lt;Eigen::Matrix&lt;float, -1, 1, 0, -1, 1&gt; &gt;::PlainObjectBase&lt;Eigen::CwiseBinaryOp&lt;Eigen::internal::scalar_product_op&lt;float, float&gt;, Eigen::CwiseNullaryOp&lt;Eigen::internal::scalar_constant_op&lt;float&gt;, Eigen::Matrix&lt;float, -1, 1, 0, -1, 1&gt; const&gt; const, Eigen::PartialReduxExpr&lt;Eigen::Map&lt;Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const, 0, Eigen::Stride&lt;0, 0&gt; &gt; const, Eigen::internal::member_squaredNorm&lt;float&gt;, 1&gt; const&gt; &gt;(Eigen::DenseBase&lt;Eigen::CwiseBinaryOp&lt;Eigen::internal::scalar_product_op&lt;float, float&gt;, Eigen::CwiseNullaryOp&lt;Eigen::internal::scalar_constant_op&lt;float&gt;, Eigen::Matrix&lt;float, -1, 1, 0, -1, 1&gt; const&gt; const, Eigen::PartialReduxExpr&lt;Eigen::Map&lt;Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const, 0, Eigen::Stride&lt;0, 0&gt; &gt; const, Eigen::internal::member_squaredNorm&lt;float&gt;, 1&gt; const&gt; &gt; const&amp;) () from /srv/path/to/clustering_ops.so #8 0x00007f8ba8088e6e in tensorflow::NearestNeighborsOp::Compute(tensorflow::OpKernelContext*) () from /srv/path/to/_clustering_ops.so #9 0x00007f8b5dbf364c in ?? () #10 0x0000000000000000 in ?? () </code></pre> <p>I am suspecting this is an issue with the server. However cannot figure out what it is. I made sure both environment were the same (my instance on the server and localhost: Ubuntu 16.04.4 LTS and javac 1.8.0_171). I also ran a RAM test on the server and didn't get an issue.</p> <p>Would appreciate if someone pointed me in the right direction to get a fix to this.</p> <hr> <p>UPDATE 1: Thank you for the reply @Employed Russian.</p> <p>I hadn't build the .so file myself but I am retrieving it from the tensorflow library files.</p> <p>Following your recommendations I thought of cloning the entire tensorflow project on github and build the clustering_ops.so from the clustering_ops.cc file in 'tensorflow/contrib/factorization/ops/clustering_ops.cc'. However, I had to give up on that, at least for now, because of the too many paths' updates required in the imports.</p> <p>I then thought that if this was a hardware compatibility issue, I would install tensorflow on the server and use the clustering_ops.so file found in the downloaded files. 
This I did and, good enough, I am getting a different error:</p> <pre><code>2018-07-03 14:37:47.871 ERROR 13026 --- [nio-9090-exec-1] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [/test] threw exception [Handler dispatch failed; nested exception is java.lang.UnsatisfiedLinkError: $HOME/clustering_ops.so: undefined symbol: _ZN10tensorflow7strings6StrCatERKNS0_8AlphaNumE] with root cause java.lang.UnsatisfiedLinkError: $HOME/clustering_ops.so: undefined symbol: _ZN10tensorflow7strings6StrCatERKNS0_8AlphaNumE at org.tensorflow.TensorFlow.loadLibrary(TensorFlow.java:47) ~[libtensorflow-1.5.0.jar!/:na] at com.domain.serverTest.controller.TestController.postSomething(TestController.java:41) ~[classes!/:0.0.1-SNAPSHOT] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_171] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_171] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_171] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_171] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:877) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:661) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851) ~[spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-embed-websocket-8.5.31.jar!/:8.5.31] at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) ~[tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.31.jar!/:8.5.31] 
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1468) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_171] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_171] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_171] </code></pre> <p>UPDATE 2: Downloading tensorflow from source and compiling with the right setting for the -march flag resolved the above error. However, another issue arose on which I would appreciate any help. I've been battling with it for some time now and failed to get a hint as to what might be the root cause.</p> <pre><code># A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007fb191313512, pid=5931, tid=0x00007fb13abe8700 # # JRE version: OpenJDK Runtime Environment (8.0_171-b11) (build 1.8.0_171-8u171-b11-0ubuntu0.16.04.1-b11) # Java VM: OpenJDK 64-Bit Server VM (25.171-b11 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [libc.so.6+0x84512] cfree+0x22 </code></pre>
<blockquote> <p>I am suspecting this is an issue with the server. However cannot figure out what it is</p> </blockquote> <p>The issue very likely is similar to <a href="https://stackoverflow.com/a/50980619/50617">this one</a>.</p> <p>Your development machine and your server have different processors with different instruction sets (server being older), and when you build on the development machine, the compiler (by default) generates instructions that work fine on development machine, but do not work on the server.</p> <blockquote> <p><code>(gdb) disassemble Dump of assembler code for function __GI_raise:</code></p> </blockquote> <p>That is not the function you want to disassemble. What you want is:</p> <pre><code>(gdb) x/i 0x00007f8ba808023a </code></pre> <p>which is the instruction that generated <code>SIGILL</code>. You are likely to find that that is an avx2 instruction, and that your server doesn't support avx2.</p> <p>You can see what your server supports in <code>/proc/cpuinfo</code> (or just Google the model number).</p> <p>Once you've identified the instruction set your server supports, build your code with appropriate <code>-march=...</code> <a href="https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html" rel="nofollow noreferrer">setting</a>, and it should work on both the development machine and the server.</p>
tensorflow|gdb
0
9,617
66,711,884
vectorized quadratic formula, why is the runtime warning invalid value in numpy.sqrt() still raised?
<p>numpy version 1.20.1</p> <p>Bob lawblaw's Law Blog, I need more details, this post has tooooo much code.</p> <pre><code>def quadratic_formula(a, b, c): &quot;&quot;&quot;just like the song Args: a (numpy.ndarray): shape(N,1) b (numpy.ndarray): shape(N,1) c (numpy.ndarray): shape(N,1) Returns: numpy.ndarray: shape(N,2) * [soln_a, soln_b] * [np.NaN, np.NaN] when discriminant is negative * [soln_a, soln_a] single soln when discriminate == 0] Notes: .. math:: ax^2 + bx + c = 0 solns = \\frac{-b \\pm \\sqrt{b^2 -4ac}}{2a} &quot;&quot;&quot; # TODO: raise value error if any a == 0 a = np.array(a) b = np.array(b) c = np.array(c) det = b ** 2 - 4 * a * c res_a = np.where( det &gt;= 0, (-b + np.sqrt(det)) / (2 * a), np.NaN, ) res_b = np.where( det &gt;= 0, (-b - np.sqrt(det)) / (2 * a), np.NaN, ) res = np.array([res_a, res_b]).T return res </code></pre> <pre><code>a = [1,2] b = [1,0] c = [0,1] res = quadratic_formula(a,b,c) print(res) </code></pre> <pre><code>&gt;&gt;&gt; [[0, -1], [NaN, NaN]] </code></pre> <p>works, but raise <code>RuntimeWarning: invalid value encountered in sqrt</code>.</p> <p>Why is the square root even evaluated for a negative discriminant? Any suggestions for implementation?</p>
<p>Note that you are still computing <code>np.sqrt(det)</code> for all values of <code>det</code>, hence the warning. The <code>np.where</code> filters the x and y arrays <em>after</em> they have been computed.</p> <p>The implementation can be fixed by simply casting the a, b and c arrays to complex.</p> <pre><code> a = np.array(a).astype(complex) b = np.array(b).astype(complex) c = np.array(c).astype(complex) </code></pre> <p>That way numpy knows to use the complex version of sqrt. Once you are there you can completely omit the np.where and check after the fact whether your solutions are real, if that is all you are interested in.</p>
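<p>A minimal sketch of that fix, casting to complex and masking non-real roots back to NaN afterwards, keeping the shape of the question's function:</p> <pre><code>
import numpy as np

def quadratic_formula(a, b, c):
    a = np.array(a).astype(complex)
    b = np.array(b).astype(complex)
    c = np.array(c).astype(complex)
    det = b ** 2 - 4 * a * c          # complex dtype, so sqrt never warns
    roots = np.array([(-b + np.sqrt(det)) / (2 * a),
                      (-b - np.sqrt(det)) / (2 * a)]).T
    # keep only real solutions, mirroring the original NaN behaviour
    return np.where(np.isreal(roots), roots.real, np.nan)

print(quadratic_formula([1, 2], [1, 0], [0, 1]))
# [[ 0. -1.]
#  [nan nan]]
</code></pre>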
python|numpy
1
9,618
57,511,783
How to group by value for certain time period
<p>I had a DataFrame like below:</p> <pre><code> Item Date Count a 6/1/2018 1 b 6/1/2018 2 c 6/1/2018 3 a 12/1/2018 3 b 12/1/2018 4 c 12/1/2018 1 a 1/1/2019 2 b 1/1/2019 3 c 1/1/2019 2 </code></pre> <p>I would like to get the sum of Count per Item with the specified duration from 7/1/2018 to 6/1/2019. For this case, the expected output will be:</p> <pre><code> Item TotalCount a 5 b 7 c 3 </code></pre>
<p>We can use <code>query</code> with <code>Series.between</code> and chain that with <code>GroupBy.sum</code>:</p> <pre><code>df.query('Date.between("07-01-2018", "06-01-2019")').groupby('Item')['Count'].sum() </code></pre> <p><strong>Output</strong></p> <pre><code>Item a 5 b 7 c 3 Name: Count, dtype: int64 </code></pre> <hr> <p>To match your exact output, use <code>reset_index</code>:</p> <pre><code>df.query('Date.between("07-01-2018", "06-01-2019")').groupby('Item')['Count'].sum()\ .reset_index(name='Totalcount') </code></pre> <p><strong>Output</strong></p> <pre><code> Item Totalcount 0 a 5 1 b 7 2 c 3 </code></pre>
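<p>If <code>query</code> feels awkward, the same thing can be written with a plain boolean mask; this sketch assumes the <code>Date</code> column is (or is converted to) a real datetime column:</p> <pre><code>import pandas as pd

df['Date'] = pd.to_datetime(df['Date'])
mask = df['Date'].between('2018-07-01', '2019-06-01')

out = (df.loc[mask]
         .groupby('Item', as_index=False)['Count'].sum()
         .rename(columns={'Count': 'TotalCount'}))
print(out)
#   Item  TotalCount
# 0    a           5
# 1    b           7
# 2    c           3
</code></pre>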
python-3.x|pandas
1
9,619
51,264,932
How to properly make an assignment to a slice of a multiindexed dataframe in pandas?
<p>I am running Pandas 0.20.3 with Python 3.5.3 on macOS.</p> <p>I have a multiindexed dataframe similar to the following <code>df</code>:</p> <pre><code>import pandas as pd import numpy as np refs = ['A', 'B'] dates = pd.date_range(start='2018-01-01', end='2018-12-31') df = pd.DataFrame({'ref': np.repeat(refs, len(dates)), 'date': np.tile(dates, len(refs)), 'value': np.random.randn(len(dates) * len(refs))}) df.set_index(["ref", "date"], inplace=True) </code></pre> <p>I want to modify the dataframe and set some values to 0. Say where <code>ref</code> is equal to 'A' and where date is before 2018-01-15.</p> <p>I am using the following:</p> <pre><code>df.loc["A"].loc[df.loc["A"].index &lt; pd.to_datetime('2018-01-15')] = 0 </code></pre> <p>I do not get any <code>SettingWithCopyWarning</code>and the dataframe is modified correctly on my mac. However when I run this code on a Windows environment with the same pandas version, the dataframe is not modified.</p> <p>Hence my question: Is the above code incorrect? If not, how to properly make the assignment I need?</p>
<p>I think you need to chain 2 boolean masks, selecting the values of the <code>MultiIndex</code> levels with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow noreferrer"><code>get_level_values</code></a>:</p> <pre><code>m1 = df.index.get_level_values(0) == 'A' m2 = df.index.get_level_values(1) &lt; '2018-01-15' df.loc[m1 &amp; m2, 'value'] = 0 </code></pre> <hr> <pre><code>print (df.head(20)) value ref date A 2018-01-01 0.000000 2018-01-02 0.000000 2018-01-03 0.000000 2018-01-04 0.000000 2018-01-05 0.000000 2018-01-06 0.000000 2018-01-07 0.000000 2018-01-08 0.000000 2018-01-09 0.000000 2018-01-10 0.000000 2018-01-11 0.000000 2018-01-12 0.000000 2018-01-13 0.000000 2018-01-14 0.000000 2018-01-15 -0.701757 2018-01-16 -0.160638 2018-01-17 -0.226917 2018-01-18 -0.431952 2018-01-19 -0.339794 2018-01-20 -0.050133 </code></pre>
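<p>An equivalent, arguably more readable spelling uses <code>pd.IndexSlice</code>; this sketch assumes the MultiIndex is sorted, which it is when built as in the question:</p> <pre><code>import pandas as pd

idx = pd.IndexSlice
df.loc[idx['A', :'2018-01-14'], 'value'] = 0
</code></pre>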
python|pandas|dataframe
1
9,620
51,536,771
pandas left join on column with list values
<p>Giving this two Data samples, I would like to join by a column that in the left join dataframe the value is a list of one element of several and in the other dataframe is the same colum (primary key) with aditional information without list as format. </p> <p>with this example</p> <pre><code>df1 = pd.DataFrame({'ID':[[1111],[2222,3333],[4444,5555],[6666]],'NAME':['foo','bar','zoo','bahh']}) df2 = pd.DataFrame({'ID':[[1111],[2222],[3333],[4444],[5555],[7777]],'ALT_NAME':['foo_alt','bar_alt','zoo_alt','baoo','razz','foo fi']}) print(df1) print(df2) </code></pre> <p>Output[1]:</p> <pre><code> ID NAME 0 [1111] foo 1 [2222, 3333] bar 2 [4444, 5555] zoo 3 [6666] bahh </code></pre> <p>Output[2]:</p> <pre><code> ALT_NAME ID 0 foo_alt [1111] 1 bar_alt [2222] 2 wis_alt [3333] 3 baoo [4444] 4 razz [5555] 5 foo fi [7777] </code></pre> <p>The result should be:</p> <pre><code> ID NAME ALT NAME 0 [1111] foo [foo_alt] 1 [2222, 3333] bar [bar_alt , wis_alt] 2 [4444, 5555] zoo [baoo, razz] 3 [6666] bahh nan </code></pre> <h1>Proposed solution:</h1> <p>I could solve it by splitting the ID in several columns and do several left joins, but I expect finding onliner or smarter solution. So, The nature of this question is more python learning oriented.</p>
<p>You should convert your Output[2] to a map (a pandas series), e.g.:</p> <pre><code>df2.ID = df2.ID.apply(lambda x: x[0]) s2 = df2.set_index('ID')['ALT_NAME'] # let us rename it s2 as it is a series now! </code></pre> <p>When that is done you can simply use apply and fetch the values with a list comprehension:</p> <pre><code>df1['ALT NAME'] = df1.ID.apply(lambda x: [s2.get(i,None) for i in x]) print(df1) </code></pre> <p>Returns:</p> <pre><code> ID NAME ALT NAME 0 [1111] foo [foo_alt] 1 [2222, 3333] bar [bar_alt, zoo_alt] 2 [4444, 5555] zoo [baoo, razz] 3 [6666] bahh [None] </code></pre> <hr> <p><strong>Small comment</strong>: This does not give you the <code>nan</code> in the last row. But if you have one match and one non-match, should the result not be [match1, None] anyway?</p> <p>df2 after conversion to s2:</p> <pre><code>ID 1111 foo_alt 2222 bar_alt 3333 zoo_alt 4444 baoo 5555 razz 7777 foo fi </code></pre> <p>One-row version: <code>s2 = df2.assign(ID=df2.ID.apply(lambda x: x[0])).set_index('ID')['ALT_NAME']</code></p>
python|pandas
3
9,621
70,966,698
How to access the channel dimensions of a tensor
<p>I have an output tensor of shape [32,24,24,6], i.e. [batch_size, height, width, channel dimension]. I want to access the channel dimension values and work on them, maybe get them as a tuple or list of tensors which I plan to use in the elems of tf.map_fn. I tried using indexing([-1, -1, -1, 0:6]) but I am not sure if it is the right way. Is there a right way in which I can access the channel dimension? Can I try tensor.get_shape().as_list() and then access it by using a for loop? I am confused; any suggestions will be appreciated.</p>
<p>You can use the keras backend, which also works in graph mode. Here an example:</p> <pre><code>import tensorflow as tf testTensor = tf.random.uniform(shape = (32,128,128,3)) shape = tf.keras.backend.int_shape(testTensor) # (32,128,128,3) </code></pre>
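<p>If the goal is to actually pull out the per-channel values rather than just the shape, plain slicing or <code>tf.unstack</code> along the last axis works and stays graph-compatible; a small sketch (the reduce_mean is only a stand-in for whatever you want to do per channel):</p> <pre><code>import tensorflow as tf

x = tf.random.uniform(shape=(32, 24, 24, 6))

first_channel = x[..., 0]            # shape (32, 24, 24)
channels = tf.unstack(x, axis=-1)    # list of 6 tensors, each (32, 24, 24)

# e.g. feed the per-channel tensors to tf.map_fn
per_channel_mean = tf.map_fn(lambda c: tf.reduce_mean(c), tf.stack(channels))
</code></pre>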
python|numpy|tensorflow
0
9,622
37,530,228
How do I add an arbitrary value to a TensorFlow summary?
<p>In order to log a simple value <code>val</code> to a TensorBoard summary I need to</p> <pre><code>val = 5 test_writer.add_summary(sess.run(tf.scalar_summary('test', val)), global_step) </code></pre> <p>Is </p> <pre><code>sess.run(tf.scalar_summary('test', val)) </code></pre> <p>really necessary to get <code>val</code> added as a summary?</p>
<p>Here's another (perhaps slightly more up-to-date) solution with the tf.summary.FileWriter class:</p> <pre><code>import time summary_writer = tf.summary.FileWriter(logdir=output_dir) value = tf.Summary.Value(tag='variable name', simple_value=value) summary_writer.add_event(tf.summary.Event(summary=tf.Summary(value=[value]), wall_time=time.time(), step=global_step)) </code></pre> <p>Then you can create your SummarySaverHook as such:</p> <pre><code>summary_hook = tf.train.SummarySaverHook( summary_writer=summary_writer, summary_op=your_summary_op) </code></pre> <p>which you can pass to your MonitoredTrainingSession. An example of a summary_op is <code>tf.summary.merge_all()</code></p> <p>NOTE: You will have to wait for the FileWriter to flush for it to appear in your events file. You can force it by calling <code>summary_writer.flush()</code></p> <hr> <p>A simpler solution:</p> <pre><code>summary_writer = tf.summary.FileWriter(output_dir) summary = tf.Summary() summary.value.add(tag='name of var', simple_value=value) summary_writer.add_summary(summary, global_step) summary_writer.flush() </code></pre>
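<p>For what it is worth, in TensorFlow 2.x the same thing is a few lines with the new summary API; a sketch, assuming eager execution:</p> <pre><code>import tensorflow as tf

writer = tf.summary.create_file_writer(output_dir)
with writer.as_default():
    tf.summary.scalar('test', val, step=global_step)
writer.flush()
</code></pre>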
logging|tensorflow|tensorboard
12
9,623
42,108,321
Doing Pandas properly... rather than using a loop
<p>I'm just getting started with Pandas and I'm finding it hard to treat dataframes like dataframes. Every now and again, I just can't work out how to do something without iterating through rows.</p> <p>For example, I've got a dataframe with budget info. I want to extract the 'vendor' from the 'short description', which is a string of one of three potential forms:</p> <ol> <li>blah blah blah to <em>vendor name</em></li> <li>blah blah blah at <em>vendor name</em></li> <li><em>vendor name</em></li> </ol> <p>I can do this using the following code, but I can't help but feel that it's not using Pandas properly. Any thoughts on improving it?</p> <pre><code>for i, row in dataframe.iterrows(): current = dataframe['short description'][i] if 'to' in current: point_of_break = current.index('to') + 3 dataframe['vendor'][i] = current[point_of_break:] elif 'at' in current: point_of_break = current.index('at') + 3 dataframe['vendor'][i] = current[point_of_break:] else: dataframe['vendor'][i] = current </code></pre>
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a> by <code>to</code> or <code>at</code> and then select last value of list by <code>str[-1]</code>:</p> <p>I implemented this <a href="https://stackoverflow.com/a/42109585/2901002">solution</a>.</p> <pre><code>df = pd.DataFrame({'A':['blah blah blah to "vendor name"', 'blah blah blah at "vendor name"', '"vendor name"']}) print (df) A 0 blah blah blah to "vendor name" 1 blah blah blah at "vendor name" 2 "vendor name" print (df.A.str.split('[at|to]\s+')) 0 [blah blah blah t, "vendor name"] 1 [blah blah blah a, "vendor name"] 2 ["vendor name"] Name: A, dtype: object df['vendor'] = df.A.str.split('(at|to) *').str[-1] print (df) A vendor 0 blah blah blah to "vendor name" "vendor name" 1 blah blah blah at "vendor name" "vendor name" 2 "vendor name" "vendor name" </code></pre> <p>Alternatively use:</p> <pre><code>df['vendor'] = df.A.str.split('[at|to]\s+').str[-1] print (df) A vendor 0 blah blah blah to "vendor name" "vendor name" 1 blah blah blah at "vendor name" "vendor name" 2 "vendor name" "vendor name" </code></pre>
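<p>A tighter pattern that only strips up to a whole word <code>to</code> or <code>at</code> (so letters inside other words are left alone) is a one-liner with <code>str.replace</code>; a sketch, assuming the vendor name itself never contains " to " or " at ":</p> <pre><code>df['vendor'] = df['A'].str.replace(r'^.*\b(?:to|at)\s+', '', regex=True)
</code></pre>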
python|pandas|dataframe
3
9,624
37,736,730
Pandas GroupBy Two Text Columns And Return The Max Rows Based On Counts
<p>I'm trying to figure out the max <code>(First_Word, Group)</code> pairs </p> <pre><code>import pandas as pd df = pd.DataFrame({'First_Word': ['apple', 'apple', 'orange', 'apple', 'pear'], 'Group': ['apple bins', 'apple trees', 'orange juice', 'apple trees', 'pear tree'], 'Text': ['where to buy apple bins', 'i see an apple tree', 'i like orange juice', 'apple fell out of the tree', 'partrige in a pear tree']}, columns=['First_Word', 'Group', 'Text']) First_Word Group Text 0 apple apple bins where to buy apple bins 1 apple apple trees i see an apple tree 2 orange orange juice i like orange juice 3 apple apple trees apple fell out of the tree 4 pear pear tree partrige in a pear tree </code></pre> <p>Then I do a <code>groupby</code>:</p> <pre><code>grouped = df.groupby(['First_Word', 'Group']).count() Text First_Word Group apple apple bins 1 apple trees 2 orange orange juice 1 pear pear tree 1 </code></pre> <p>And I now want to filter it down to only unique index rows that have the max <code>Text</code> counts. Below you'll notice <code>apple bins</code> was removed because <code>apple trees</code> has the max value.</p> <pre><code> Text First_Word Group apple apple trees 2 orange orange juice 1 pear pear tree 1 </code></pre> <p>This <a href="https://stackoverflow.com/questions/15707746/python-how-can-i-get-rows-which-have-the-max-value-of-the-group-to-which-they">max value of group</a> question is similar but when I try something like this:</p> <pre><code>df.groupby(["First_Word", "Group"]).count().apply(lambda t: t[t['Text']==t['Text'].max()]) </code></pre> <p>I get an error: <code>KeyError: ('Text', 'occurred at index Text')</code>. If I add <code>axis=1</code> to the <code>apply</code> I get <code>IndexError: ('index out of bounds', 'occurred at index (apple, apple bins)')</code></p>
<p>Given <code>grouped</code>, you now want to group by the <code>First Word</code> index level, and find the index labels of the maximum row for each group (using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html" rel="nofollow"><code>idxmax</code></a>):</p> <pre><code>In [39]: grouped.groupby(level='First_Word')['Text'].idxmax() Out[39]: First_Word apple (apple, apple trees) orange (orange, orange juice) pear (pear, pear tree) Name: Text, dtype: object </code></pre> <p>You can then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>grouped.loc</code></a> to select rows from <code>grouped</code> by index label:</p> <pre><code>import pandas as pd df = pd.DataFrame( {'First_Word': ['apple', 'apple', 'orange', 'apple', 'pear'], 'Group': ['apple bins', 'apple trees', 'orange juice', 'apple trees', 'pear tree'], 'Text': ['where to buy apple bins', 'i see an apple tree', 'i like orange juice', 'apple fell out of the tree', 'partrige in a pear tree']}, columns=['First_Word', 'Group', 'Text']) grouped = df.groupby(['First_Word', 'Group']).count() result = grouped.loc[grouped.groupby(level='First_Word')['Text'].idxmax()] print(result) </code></pre> <p>yields</p> <pre><code> Text First_Word Group apple apple trees 2 orange orange juice 1 pear pear tree 1 </code></pre>
python|pandas|max
3
9,625
38,054,160
pandas wont add columns in order
<p>I have two data frames </p> <p>numbers:</p> <pre><code>Unnamed: 0 Name Number 42 42 Aberavon 1742 43 43 Aberconwy 2769 16 16 Aberdeen North 3253 25 25 Aberdeen South 4122 355 355 Airdrie and Shotts 1194 44 44 Aldershot 4517 </code></pre> <p>and electorate:</p> <pre><code> Unnamed: 0 Unnamed: 0.1 Name Number 0 533 533 Aberavon 49821 1 534 534 Aberconwy 45525 2 591 591 Aberdeen North 67745 3 592 592 Aberdeen South 68056 4 593 593 Airdrie and Shotts 66792 5 0 0 Aldershot 72430 </code></pre> <p>when I input</p> <pre><code>numbers['No. Voters'] = electorate['Number'] </code></pre> <p>for <code>print(numbers)</code> I get:</p> <pre><code> Unnamed: 0 Name Number No.Voters 42 42 Aberavon 1742 80805 43 43 Aberconwy 2769 78796 16 16 Aberdeen North 3253 68343 25 25 Aberdeen South 4122 66347 355 355 Airdrie and Shotts 1194 77534 </code></pre> <p>which is obviously wrong, and I am not sure why: the index should not matter, because both frames are in order of name anyway, as I passed each through the sort_values function.</p> <p>Can anyone tell me what is going wrong and what the correct command would be to match a new column in dataframe numbers to the number value in electorate?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a> for converting column <code>Number</code> to a <code>numpy array</code>, so the values are assigned by position instead of being aligned by index:</p> <pre><code>numbers['No. Voters'] = electorate['Number'].values </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> both <code>DataFrames</code> (assigning the result back) so the indices align correctly:</p> <pre><code>numbers = numbers.reset_index(drop=True) electorate = electorate.reset_index(drop=True) numbers['No. Voters'] = electorate['Number'] </code></pre>
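<p>Since both frames appear to share the constituency <code>Name</code>, a merge on that column avoids relying on row order at all; a sketch, assuming the names match exactly between the two frames:</p> <pre><code>numbers = numbers.merge(
    electorate[['Name', 'Number']].rename(columns={'Number': 'No. Voters'}),
    on='Name', how='left')
</code></pre>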
python|numpy|pandas|multiple-columns
1
9,626
37,781,319
plot N planes parallel to XZ axis in 3D in python
<p>I want to plot N planes (say 10) parallel to the XZ axis and equidistant from each other using python. If possible it would be nice to select the number of planes from the user; for example, if the user gives "20", then 20 planes will be drawn in 3D. This is what I did, but I would like to know whether there is a way to refer to each plane, or to get each plane's equation?</p> <pre><code> import numpy as np import itertools import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D plt3d = plt.figure().gca(projection='3d') xx, zz = np.meshgrid(range(10), range(10)) yy =0.5 for _ in itertools.repeat(None, 20): plt3d.plot_surface(xx, yy, zz) plt.hold(True) yy=yy+.1 plt.show() </code></pre>
<p>Here is an example how to implement what you need in a very generic way.</p> <pre><code>from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt from matplotlib import cm from pylab import meshgrid,linspace,zeros,dot,norm,cross,vstack,array,matrix,sqrt def rotmatrix(axis,costheta): """ Calculate rotation matrix Arguments: - `axis` : Rotation axis - `costheta` : Rotation angle """ x,y,z = axis c = costheta s = sqrt(1-c*c) C = 1-c return matrix([[ x*x*C+c, x*y*C-z*s, x*z*C+y*s ], [ y*x*C+z*s, y*y*C+c, y*z*C-x*s ], [ z*x*C-y*s, z*y*C+x*s, z*z*C+c ]]) def plane(Lx,Ly,Nx,Ny,n,d): """ Calculate points of a generic plane Arguments: - `Lx` : Plane Length first direction - `Ly` : Plane Length second direction - `Nx` : Number of points, first direction - `Ny` : Number of points, second direction - `n` : Plane orientation, normal vector - `d` : distance from the origin """ x = linspace(-Lx/2,Lx/2,Nx) y = linspace(-Ly/2,Ly/2,Ny) # Create the mesh grid, of a XY plane sitting on the orgin X,Y = meshgrid(x,y) Z = zeros([Nx,Ny]) n0 = array([0,0,1]) # Rotate plane to the given normal vector if any(n0!=n): costheta = dot(n0,n)/(norm(n0)*norm(n)) axis = cross(n0,n)/norm(cross(n0,n)) rotMatrix = rotmatrix(axis,costheta) XYZ = vstack([X.flatten(),Y.flatten(),Z.flatten()]) X,Y,Z = array(rotMatrix*XYZ).reshape(3,Nx,Ny) dVec = (n/norm(n))*d X,Y,Z = X+dVec[0],Y+dVec[1],Z+dVec[2] return X,Y,Z if __name__ == "__main__": # Plot as many planes as you like Nplanes = 10 # Set color list from a cmap colorList = cm.jet(linspace(0,1,Nplanes)) # List of Distances distList = linspace(-10,10,Nplanes) # Plane orientation - normal vector normalVector = array([0,1,1]) # Y direction # Create figure fig = plt.figure() ax = fig.gca(projection='3d') # Plotting for i,ypos in enumerate(linspace(-10,10,10)): # Calculate plane X,Y,Z = plane(20,20,100,100,normalVector,distList[i]) ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.8, color=colorList[i]) # Set plot display parameters ax.set_xlabel('X') ax.set_xlim(-10, 10) ax.set_ylabel('Y') ax.set_ylim(-10, 10) ax.set_zlabel('Z') ax.set_zlim(-10, 10) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/0kAOx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0kAOx.png" alt="enter image description here"></a></p> <p>If you need to rotate the plane around the normal vector, you can also use the rotation matrix for that.</p> <p>Cheers</p>
python|python-2.7|numpy|matplotlib
4
9,627
64,502,496
how to specify to use pandas .replace() instead of str.replace() when defining a function?
<p>I want to filter out certain words from a pandas dataframe column and make a new column of the filtered text. I attempted the solution from <a href="https://stackoverflow.com/questions/48684774/how-to-delete-words-from-a-dataframe-column-that-are-present-in-dictionary-in-pa">here</a>, but I think im having the issue of python thinking that I want to call the <code>str.replace()</code> instead of <code>df.replace()</code>. I'm not sure how to specify the latter as long as I'm calling it within a function.</p> <p><strong>df:</strong></p> <pre><code>id old_text 0 my favorite color is blue 1 you have a dog 2 we built the house ourselves 3 i will visit you </code></pre> <pre><code>def removeWords(txt): words = ['i', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself'] txt = txt.replace('|'.join(words), '', regex=True) return txt df['new_text'] = df['old_text'].apply(removeWords) </code></pre> <p><strong>error:</strong></p> <pre><code>TypeError: replace() takes no keyword arguments </code></pre> <p><strong>desired output:</strong></p> <pre><code>id old_text new_text 0 my favorite color is blue favorite color is blue 1 you have a dog have a dog 2 we built the house ourselves built the house 3 i will visit you will visit you </code></pre> <p><strong>other things tried:</strong></p> <pre><code>def removeWords(txt): words = ['i', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself'] txt = [word for word in txt.split() if word not in words] return txt df['new_text'] = df['old_text'].apply(removeWords) </code></pre> <p><strong>this returns:</strong></p> <pre><code>id old_text new_text 0 my favorite color is blue favorite, color, is, blue 1 you have a dog have, a, dog 2 we built the house ourselves built, the, house 3 i will visit you will, visit, you </code></pre>
<p>From this line:</p> <pre><code>txt.replace(rf&quot;\b({'|'.join(words)})\b&quot;, '', regex=True) </code></pre> <p>This is the signature for <code>pd.Series.replace</code> so your function takes a series as input. On the other hand,</p> <pre><code>df['old_text'].apply(removeWords) </code></pre> <p>applies the function to <em>each cell</em> of <code>df['old_text']</code>. That means, <code>txt</code> would be just a string, and the signature for <code>str.replace</code> does not have keyword arguments (<code>regex=True</code>) in this case.</p> <p>TLDR, you want to do:</p> <pre><code>df['new_text'] = removeWords(df['old_text']) </code></pre> <p>Output:</p> <pre><code> id old_text new_text 0 0 my favorite color is blue favorte color s blue 1 1 you have a dog have a dog 2 2 we built the house ourselves bult the house selves 3 3 i will visit you wll vst </code></pre> <p>But as you can see, your function replaces the <code>i</code> within the words. You may want to modify the pattern so as it only replaces the whole words with the boundary indicator <code>\b</code>:</p> <pre><code>def removeWords(txt): words = ['i', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself'] # note the `\b` here return txt.replace(rf&quot;\b({'|'.join(words)})\b&quot;, '', regex=True) </code></pre> <p>Output:</p> <pre><code> id old_text new_text 0 0 my favorite color is blue favorite color is blue 1 1 you have a dog have a dog 2 2 we built the house ourselves built the house 3 3 i will visit you will visit </code></pre>
python|regex|pandas
2
9,628
64,499,856
WASM backend for tensorflowjs throws "Unhandled Rejection (RuntimeError): index out of bounds" error in Reactjs
<p>I am trying to set up a WASM back-end for blazeface face detection model in a react app. Although the demo with the vanillajs can run it without any error for hours, in react it throws &quot;Unhandled Rejection (RuntimeError): index out of bounds error&quot; after leaving the cam open for more than 3-5 minutes.</p> <p>Entire app crashes with this error. From the log of the error below, maybe it is related to <code>disposeData()</code> or <code>disposeTensor()</code> functions which to my guess, they are related to garbage collecting. But I don't know if it is a bug from the WASM lib itself or not. Do you have any idea why this might happen?</p> <p>Below I provide my render prediction function as well.</p> <pre class="lang-js prettyprint-override"><code> renderPrediction = async () =&gt; { const model = await blazeface.load({ maxFaces: 1, scoreThreshold: 0.95 }); if (this.play) { const canvas = this.refCanvas.current; const ctx = canvas.getContext(&quot;2d&quot;); const returnTensors = false; const flipHorizontal = true; const annotateBoxes = true; const predictions = await model.estimateFaces( this.refVideo.current, returnTensors, flipHorizontal, annotateBoxes ); if (predictions.length &gt; 0) { ctx.clearRect(0, 0, canvas.width, canvas.height); for (let i = 0; i &lt; predictions.length; i++) { if (returnTensors) { predictions[i].topLeft = predictions[i].topLeft.arraySync(); predictions[i].bottomRight = predictions[i].bottomRight.arraySync(); if (annotateBoxes) { predictions[i].landmarks = predictions[i].landmarks.arraySync(); } } try { } catch (err) { console.log(err.message); } const start = predictions[i].topLeft; const end = predictions[i].bottomRight; const size = [end[0] - start[0], end[1] - start[1]]; if (annotateBoxes) { const landmarks = predictions[i].landmarks; ctx.fillStyle = &quot;blue&quot;; for (let j = 0; j &lt; landmarks.length; j++) { const x = landmarks[j][0]; //console.log(typeof x) // number const y = landmarks[j][1]; ctx.fillRect(x, y, 5, 5); } } } } requestAnimationFrame(this.renderPrediction); } }; </code></pre> <p>full log of the error:</p> <pre><code>Unhandled Rejection (RuntimeError): index out of bounds (anonymous function) unknown ./node_modules/@tensorflow/tfjs-backend-wasm/dist/tf-backend-wasm.esm.js/&lt;/tt&lt;/r&lt;/r._dispose_data C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/wasm-out/tfjs-backend-wasm.js:9 disposeData C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/backend_wasm.ts:115 112 | 113 | disposeData(dataId: DataId) { 114 | const data = this.dataIdMap.get(dataId); &gt; 115 | this.wasm._free(data.memoryOffset); | ^ 116 | this.wasm.tfjs.disposeData(data.id); 117 | this.dataIdMap.delete(dataId); 118 | } disposeTensor C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:838 835 | 'tensors'); 836 | let res; 837 | const inputMap = {}; &gt; 838 | inputs.forEach((input, i) =&gt; { | ^ 839 | inputMap[i] = input; 840 | }); 841 | return this.runKernelFunc((_, save) =&gt; { dispose C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/tensor.ts:388 endScope C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:983 tidy/&lt; C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:431 428 | if (kernel != null) { 429 | kernelFunc = () =&gt; { 430 | const numDataIdsBefore = this.backend.numDataIds(); &gt; 431 | out = kernel.kernelFunc({ inputs, attrs, backend: this.backend }); | ^ 432 | const outInfos = Array.isArray(out) ? 
out : [out]; 433 | if (this.shouldCheckForMemLeaks()) { 434 | this.checkKernelForMemLeak(kernelName, numDataIdsBefore, outInfos); scopedRun C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:448 445 | // inputsToSave and outputsToSave. Currently this is the set of ops 446 | // with kernel support in the WASM backend. Once those ops and 447 | // respective gradients are modularised we can remove this path. &gt; 448 | if (outputsToSave == null) { | ^ 449 | outputsToSave = []; 450 | } 451 | const outsToSave = outTensors.filter((_, i) =&gt; outputsToSave[i]); tidy C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:431 428 | if (kernel != null) { 429 | kernelFunc = () =&gt; { 430 | const numDataIdsBefore = this.backend.numDataIds(); &gt; 431 | out = kernel.kernelFunc({ inputs, attrs, backend: this.backend }); | ^ 432 | const outInfos = Array.isArray(out) ? out : [out]; 433 | if (this.shouldCheckForMemLeaks()) { 434 | this.checkKernelForMemLeak(kernelName, numDataIdsBefore, outInfos); tidy C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/globals.ts:190 187 | const tensors = getTensorsInContainer(container); 188 | tensors.forEach(tensor =&gt; tensor.dispose()); 189 | } &gt; 190 | /** 191 | * Keeps a `tf.Tensor` generated inside a `tf.tidy` from being disposed 192 | * automatically. 193 | */ estimateFaces C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/node_modules/@tensorflow-models/blazeface/dist/blazeface.esm.js:17 Camera/this.renderPrediction C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/src/Camera.js:148 145 | const returnTensors = false; 146 | const flipHorizontal = true; 147 | const annotateBoxes = true; &gt; 148 | const predictions = await model.estimateFaces( | ^ 149 | this.refVideo.current, 150 | returnTensors, 151 | flipHorizontal, async*Camera/this.renderPrediction C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/src/Camera.js:399 396 | // } 397 | } 398 | } &gt; 399 | requestAnimationFrame(this.renderPrediction); | ^ 400 | } 401 | }; 402 | </code></pre>
<p>After using a tensor to make predictions you will need to free the tensor from the device's memory, otherwise memory will build up and can cause the error you are having. This can simply be done using <code>tf.dispose()</code> to manually specify the place at which you want to dispose the tensors. You do it right after making predictions on the tensor.</p> <pre><code>const predictions = await model.estimateFaces( this.refVideo.current, returnTensors, flipHorizontal, annotateBoxes ); tf.dispose(this.refVideo.current); </code></pre> <p>You can also just use <code>tf.tidy()</code> which does this automatically for you. With it you can just wrap the function where you handle the image tensors used for making predictions. This question on <a href="https://github.com/tensorflow/tfjs/issues/2204" rel="nofollow noreferrer">github</a> goes through it quite well, but I am not too sure about the implementation as it can only be used for synchronous function calls.</p> <p>Or you could wrap the code for handling the image tensors in the following code, which will also clean up any unused tensors.</p> <pre><code>tf.engine().startScope() // handling image tensors function tf.engine().endScope() </code></pre>
javascript|reactjs|webassembly|tensorflow.js
2
9,629
47,646,312
group all directly and indirectly related records using python pandas
<p>Thanks in advance:</p> <p>I am trying to generate a group identifier in a many-to-many relationship table which has 2 columns defining IDs of parent entities and child entities:</p> <p>Example dataframe below: (parent (p), and child (c))</p> <pre><code>df = pd.DataFrame(np.array([[1,7],[1,3],[1,4],[3,2],[5,1],[6,0]])) df.columns= ['p', 'c'] </code></pre> <p>Table looks like below:</p> <pre><code>p c 1 7 1 3 1 4 3 2 5 1 6 0 </code></pre> <p>I am trying to get all directly and indirectly linked records in a group. For example:</p> <ul> <li>Parent record 1 is the parent of [7,3,4], and</li> <li>Parent record 5 is the parent of 1</li> <li>Parent record 3 is the parent of 2, and 2 is a grandchild of 1</li> </ul> <p>So I want to generate an ID for all related records. Since parent record 6 is not related to any record, I will move it to another group; a sample result is below:</p> <pre><code>p c grp 1 7 A 1 3 A 1 4 A 3 2 A 5 1 A 6 0 B </code></pre> <p>My current way of thinking:</p> <p>For each record, if it doesn't have a group yet:</p> <ul> <li>Get all directly related record IDs </li> <li>Then, for each directly related record ID, recursively perform the same function to find all related records for the child until they have no child records </li> <li>Then assign a group to the whole list of related record IDs</li> </ul> <p>I am not sure if it is the right way to do it, and it seems unnecessarily slow, and I have to pass down all the parent records in the chain to the child record in order for it not to perform the same search for already-searched results. </p> <p>Would really appreciate it if someone could give me a better solution. :) </p>
<p>You can check <a href="https://networkx.github.io/documentation/networkx-1.10/index.html" rel="nofollow noreferrer"><code>networkx</code></a> </p> <pre><code>import networkx as nx G=nx.from_pandas_dataframe(df, 'c', 'p') l=list(nx.connected_components(G)) dfmap=pd.DataFrame.from_dict(l) dfmap.index=['B','A'] dfmap=dfmap.stack() d=dict(list(zip(dfmap.values.astype(int),dfmap.index.get_level_values(0)))) df['grp']=df.replace(d).p df Out[14]: p c grp 0 1 7 A 1 1 3 A 2 1 4 A 3 3 2 A 4 5 1 A 5 6 0 B </code></pre> <p>More Info</p> <pre><code>import matplotlib.pyplot as plt nx.draw(G) </code></pre> <p><a href="https://i.stack.imgur.com/n00HN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n00HN.png" alt="enter image description here"></a></p>
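<p><code>from_pandas_dataframe</code> was removed in networkx 2.x; a sketch of the same idea with the current API, mapping every connected component to a group label (the labels come out as integers here rather than A/B):</p> <pre><code>import networkx as nx

G = nx.from_pandas_edgelist(df, 'p', 'c')
group_map = {node: grp
             for grp, comp in enumerate(nx.connected_components(G))
             for node in comp}
df['grp'] = df['p'].map(group_map)
</code></pre>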
python|pandas|many-to-many|relationship
4
9,630
47,778,162
Python/Numpy have I already written the swiftest code for large array?
<p> <strong>GOAL:</strong> I have a large 1d array (3000000+) of distances with many duplicate distances. I am trying to write the swiftest function that returns all distances that appear n times in the array. I have written a function in numpy but there is a bottleneck at one line in the code. Swift performance is an issue because the calculations are done in a for loop for 2400 different large distance arrays.</p> <pre><code>import numpy as np for t in range(0, 2400): a=np.random.randint(1000000000, 5000000000, 3000000) b=np.bincount(a,minlength=np.size(a)) c=np.where(b == 3)[0] #SLOW STATEMENT/BOTTLENECK return c </code></pre> <p> <strong>EXPECTED RESULTS:</strong> Given a 1d array of distances [2000000000,3005670000,2000000000,12345667,4000789000,12345687,12345667,2000000000,12345667] I would expect back an array of [2000000000,12345667] when queried to return an array of all distances that appear 3 times in the main array.</p> <p> What should I do?</p>
<p>Use <code>np.unique</code>:</p> <pre><code>a=np.random.randint(0,10**6,3*10**6) uniques,counts=np.unique(a,return_counts=True) In [293]: uniques[counts==14] Out[293]: array([ 4541, 250510, 622471, 665409, 881697, 920586]) </code></pre> <p>This takes less than a second, but I don't understand why your <code>where</code> statement is slow. For me your solution is faster:</p> <pre><code>In [313]: %timeit b=np.bincount(a,minlength=a.size) 61.5 ms ± 4.82 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [314]: %timeit np.where(b==3)[0] 11.8 ms ± 271 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [315]: %timeit uniques,counts=np.unique(a,return_counts=True) 424 ms ± 6.82 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [316]: %timeit Counter(a) 1.41 s ± 18 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre> <p><strong>EDIT</strong></p> <pre><code>import numba import numpy as np @numba.jit() def count(a,n): counters=np.zeros(10**6,np.int32) for i in a: counters[i] += 1 res=np.empty_like(counters) k = 0 for i,j in enumerate(counters): if j == n: res[k] = i k += 1 return res[:k] </code></pre> <p>This numba function can give you a 3X improvement. For more you must look at parallel solutions, <a href="http://numba.pydata.org/numba-doc/0.13/CUDAJit.html" rel="nofollow noreferrer">on GPU for example</a>. </p> <pre><code>In [26]: %timeit count(a,13) 23.6 ms ± 1.56 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre>
python|numpy
3
9,631
49,061,441
Adding 100 2D Arrays of different sizes (numpy)
<p>Is it possible to add two different sized arrays without broadcasting?</p> <p>To my knowledge, broadcasting adds values to the smaller array in order to fill that space. But that would skew my results. I was wondering if there was some way to add two different sized arrays without having to compromise the values?</p> <pre><code>P = np.array([[1,2,3],[2,1,6],[7,9,1]]) L = np.array([[1,2],[4,1]]) </code></pre> <p>output Looks like this: </p> <pre><code>P 1 2 3 2 1 6 7 9 1 L 1 2 4 1 </code></pre> <p>It is important that the (diagonal) 1s in each square matrix align when adding two different sized matrices</p> <p>How can I do that?</p>
<p>Try this:</p> <pre><code>P + np.pad(L, (0,1), 'constant', constant_values=0) </code></pre> <p>So:</p> <pre><code>&gt;&gt;&gt; P array([[1, 2, 3], [2, 1, 6], [7, 9, 1]]) &gt;&gt;&gt; np.pad(L, (0,1), 'constant', constant_values=0) array([[1, 2, 0], [4, 1, 0], [0, 0, 0]]) &gt;&gt;&gt; P + np.pad(L, (0,1), 'constant', constant_values=0) array([[2, 4, 3], [6, 2, 6], [7, 9, 1]]) </code></pre>
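<p>For the more general case in the title (summing many 2-D arrays of different sizes), a small sketch that pads everything up to the largest shape; it assumes all arrays should be aligned at the top-left corner, as in the example above:</p> <pre><code>import numpy as np

def sum_aligned(arrays):
    rows = max(a.shape[0] for a in arrays)
    cols = max(a.shape[1] for a in arrays)
    total = np.zeros((rows, cols), dtype=np.result_type(*arrays))
    for a in arrays:
        total[:a.shape[0], :a.shape[1]] += a
    return total

print(sum_aligned([P, L]))
# [[2 4 3]
#  [6 2 6]
#  [7 9 1]]
</code></pre>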
python|arrays|numpy
0
9,632
59,000,621
Pandas iloc() to identify specific columns from headers row?
<p>I'm trying to create a list of the column headers excluding the initial columns. I am trying to use Pandas' iloc function for this and I feel I am halfway there.</p> <pre><code>column_dates = list(pronto.iloc[[0][2:]]) print(column_dates) </code></pre> <p>Right now, this is returning </p> <pre><code>['Unwanted Variable 1', 'Unwanted Variable 2', 'January 2018', 'February 2018', 'March 2018', 'April 2018', 'May 2018', 'June 2018', 'July 2018', 'August 2018', 'September 2018', 'October 2018', 'November 2018', 'December 2018', 'January 2019', 'February 2019', 'March 2019', 'April 2019', 'May 2019', 'June 2019', 'July 2019', 'August 2019', 'September 2019', 'October 2019', 'November 2019'] </code></pre> <p>How do I specify within iloc that I want the first row (column headers) and then columns 2(3rd column really) onward? I need the columns to be open ended as the width of the data frame can vary depending on the amount of months,</p> <p>Essentially I want this back,</p> <pre><code>['January 2018', 'February 2018', 'March 2018', 'April 2018', 'May 2018', 'June 2018', 'July 2018', 'August 2018', 'September 2018', 'October 2018', 'November 2018', 'December 2018', 'January 2019', 'February 2019', 'March 2019', 'April 2019', 'May 2019', 'June 2019', 'July 2019', 'August 2019', 'September 2019', 'October 2019', 'November 2019'] </code></pre>
<p>if the names of the columns have been properly parsed, then you want</p> <pre><code>pronto.columns[2:] </code></pre> <p>if the names of your columns are appearing in your dataframe as the first row (which they shouldn't), this should work</p> <pre><code>pronto.iloc[0, 2:] </code></pre>
python|pandas
1
9,633
70,227,908
Iterating over rows in a dataframe in Pandas: is there a difference between using df.index and df.iterrows() as iterators?
<p>When iterating through rows in a dataframe in Pandas, is there a difference in performance between using:</p> <pre><code>for index in df.index: .... </code></pre> <p>And:</p> <pre><code>for index, row in df.iterrows(): .... </code></pre> <p>? Which one should be preferred?</p>
<p>Pandas is significantly faster for column-wise operations so consider transposing your dataset and carrying out whatever operation you want. If you absolutely need to iterate through rows and want to keep it simple, you can use</p> <pre><code>for row in df.itertuples(): print(row.column_1) </code></pre> <p><code>df.itertuples</code> is significantly faster than <code>df.iterrows()</code> and iterating over the indices. However, there are faster ways to perform row-wise operations. Check out <a href="https://stackoverflow.com/a/24871316/7766834">this</a> answer for an overview.</p>
python|pandas|dataframe
2
9,634
70,097,166
how do i change the format in python
<p>i have a table like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">grade</th> <th style="text-align: center;">name</th> <th style="text-align: center;">price</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">abc</td> <td style="text-align: center;">25</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">abc</td> <td style="text-align: center;">30</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">abc</td> <td style="text-align: center;">35</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">xyz</td> <td style="text-align: center;">40</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">xyz</td> <td style="text-align: center;">45</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">xyz</td> <td style="text-align: center;">50</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">mno</td> <td style="text-align: center;">55</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">mno</td> <td style="text-align: center;">60</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">mno</td> <td style="text-align: center;">65</td> </tr> </tbody> </table> </div> <p>and I want a table like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">grade</th> <th style="text-align: center;">abc</th> <th style="text-align: center;">xyz</th> <th style="text-align: center;">mno</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">25</td> <td style="text-align: center;">40</td> <td style="text-align: center;">55</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">30</td> <td style="text-align: center;">45</td> <td style="text-align: center;">60</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">35</td> <td style="text-align: center;">50</td> <td style="text-align: center;">65</td> </tr> </tbody> </table> </div> <p>i tried to transpose the dataframe in pandas but didnt work..</p> <p>how can i convert this..?</p>
<p>To get your output, your input should be:</p> <pre><code>&gt;&gt;&gt; df grade name price # your grade column 0 1 abc 25 # 1 1 2 abc 30 # 1 2 3 abc 35 # 1 3 1 mno 55 # 2 4 2 mno 60 # 2 5 3 mno 65 # 2 6 1 xyz 40 # 3 7 2 xyz 45 # 3 8 3 xyz 50 # 3 </code></pre> <p>If you have the input above, you can use <code>pivot</code>:</p> <pre><code>&gt;&gt;&gt; df.pivot('grade', 'name', 'price').reset_index().rename_axis(columns=None) grade abc mno xyz 0 1 25 55 40 1 2 30 60 45 2 3 35 65 50 </code></pre>
python|pandas|dataframe
1
9,635
70,252,159
AttributeError: 'Functional' object has no attribute 'predict_segmentation' When importing TensorFlow model Keras
<p>I have successfully trained a Keras model like:</p> <pre><code>import tensorflow as tf from keras_segmentation.models.unet import vgg_unet # initaite the model model = vgg_unet(n_classes=50, input_height=512, input_width=608) # Train model.train( train_images=train_images, train_annotations=train_annotations, checkpoints_path=&quot;/tmp/vgg_unet_1&quot;, epochs=5 ) </code></pre> <p>And saved it in hdf5 format with:</p> <pre><code>tf.keras.models.save_model(model,'my_model.hdf5') </code></pre> <p>Then I load my model with</p> <pre><code>model=tf.keras.models.load_model('my_model.hdf5') </code></pre> <p>Finally I want to make a segmentation prediction on a new image with</p> <pre><code>out = model.predict_segmentation( inp=image_to_test, out_fname=&quot;/tmp/out.png&quot; ) </code></pre> <p>I am getting the following error:</p> <pre><code>AttributeError: 'Functional' object has no attribute 'predict_segmentation' </code></pre> <p>What am I doing wrong ? Is it when I am saving my model or when I am loading it ?</p> <p>Thanks !</p>
<p><code>predict_segmentation</code> isn't a function available in normal Keras models. It looks like it was added after the model was created in the <a href="https://github.com/divamgupta/image-segmentation-keras/blob/dc830bbd76371aaedbf8cb997bdedca388c544c4/keras_segmentation/models/model_utils.py#L101" rel="nofollow noreferrer"><code>keras_segmentation</code></a> library, which might be why Keras couldn't load it again.</p> <p>I think you have 2 options for this.</p> <ol> <li>You could use the line from the code I linked to manually add the function back to the model.</li> </ol> <pre class="lang-py prettyprint-override"><code>model.predict_segmentation = MethodType(keras_segmentation.predict.predict, model) </code></pre> <ol start="2"> <li>You could create a new <code>vgg_unet</code> with the same arguments when you reload the model, and transfer the weights from your <code>hdf5</code> file to that model as suggested in the <a href="https://www.tensorflow.org/guide/keras/save_and_serialize#transfer_learning_example_2" rel="nofollow noreferrer">Keras documentation</a>.</li> </ol> <pre class="lang-py prettyprint-override"><code>model = vgg_unet(n_classes=50, input_height=512, input_width=608) model.load_weights('my_model.hdf5') </code></pre>
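<p>A slightly more complete sketch of option 1 with the imports spelled out; the <code>keras_segmentation.predict.predict</code> path is taken from the library source linked above, and <code>image_to_test</code> is assumed to be the same variable as in the question:</p> <pre><code>from types import MethodType
from keras_segmentation.predict import predict
import tensorflow as tf

model = tf.keras.models.load_model('my_model.hdf5')
model.predict_segmentation = MethodType(predict, model)

out = model.predict_segmentation(inp=image_to_test, out_fname="/tmp/out.png")
</code></pre>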
python|tensorflow|keras|deep-learning
2
9,636
70,153,833
Dimensional Error for text classification using conv2d layer in keras
<p>I have a dataframe which I split into train and test set and the input shape for the train set is (4115,588). Now I want to create a neural network with Conv2D layers but face this error when I pass in the input shape arguement. ValueError: Input 0 of layer sequential_8 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (None, 588, 1) I tried the following steps:</p> <pre><code>X_train = X_train.to_numpy() X_train = X_train.reshape((X_train.shape[0], X_train.shape[1],1)) model = Sequential() model.add(Conv2D(128, kernel_size=(3,3), input_shape=(X_train.shape[0],588,1), activation='relu')) model.add(MaxPooling2D()) model.add(Conv2D(64, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D()) model.add(Conv2D(32, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D()) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) </code></pre> <p>Can someone guide me on how to solve this error. I am relatively new to this topic.</p>
<p>Conv2D expects a 4+D tensor with shape: <code>batch_shape + (channels, rows, cols)</code> if data_format='channels_first', or a 4+D tensor with shape: <code>batch_shape + (rows, cols, channels)</code> if data_format='channels_last'.</p> <p>I tested your code with the mnist dataset and it's working. <strong>Working sample code</strong></p> <pre><code>from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten import tensorflow as tf (X_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() #X_train = X_train.to_numpy() #X_train = X_train.reshape((X_train.shape[0], X_train.shape[1],1)) model = tf.keras.Sequential() model.add(Conv2D(128, kernel_size=(3,3), input_shape=(28,28,1), activation='relu')) model.add(MaxPooling2D()) model.add(Conv2D(64, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D()) model.add(Conv2D(32, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D()) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) model.fit(X_train, y_train, batch_size=128, epochs=1, verbose=1) </code></pre> <p><strong>Output</strong></p> <pre><code>469/469 [==============================] - 137s 289ms/step - loss: -10766237696.0000 - accuracy: 0.1124 </code></pre>
python|tensorflow|keras
0
9,637
70,059,059
Finding the distance (Haversine) between all elements in a single dataframe
<p>I currently have a dataframe which includes five columns as seen below. I group the elements of the original dataframe such that they are within a 100km x 100km grid. For each grid element, I need to determine whether there is at least one set of points which are 100m away from each other. In order to do this, I am using the Haversine formula and calculating the distance between all points within a grid element using a for loop. This is rather slow as my parent data structure can have billions of points, and each grid element millions. Is there a quicker way to do this?</p> <p>Here is a view into a group in the dataframe. &quot;approx_LatSp&quot; &amp; &quot;approx_LonSp&quot; are what I use for groupBy in a previous function.</p> <pre><code>print(group.head()) Time Lat Lon approx_LatSp approx_LonSp 197825 1.144823 -69.552576 -177.213646 -70.0 -177.234835 197826 1.144829 -69.579416 -177.213370 -70.0 -177.234835 197827 1.144834 -69.606256 -177.213102 -70.0 -177.234835 197828 1.144840 -69.633091 -177.212856 -70.0 -177.234835 197829 1.144846 -69.659925 -177.212619 -70.0 -177.234835 </code></pre> <p>This group is equivalent to one grid element. This group gets passed to the following function which seems to be the crux of my issue (from a performance perspective):</p> <pre><code>def get_pass_in_grid(group): ''' Checks if there are two points within 100m ''' check_100m = 0 check_1km = 0 row_mins = [] for index, row in group.iterrows(): # Get distance distance_from_row = get_distance_lla(row['Lat'], row['Lon'], group['Lat'].drop(index), group['Lon'].drop(index)) minimum = np.amin(distance_from_row) row_mins = row_mins + [minimum] array = np.array(row_mins) m_100 = array[array &lt; 0.1] km_1 = array[array &lt; 1.0] if m_100.size &gt; 0: check_100m = 1 if km_1.size &gt; 0: check_1km = 1 return check_100m, check_1km </code></pre> <p>And the Haversine formula is calculated as follows</p> <pre><code>def get_distance_lla(row_lat, row_long, group_lat, group_long): def radians(degrees): return degrees * np.pi / 180.0 global EARTH_RADIUS lon1 = radians(group_long) lon2 = radians(row_long) lat1 = radians(group_lat) lat2 = radians(row_lat) # Haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = np.sin(dlat / 2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2)**2 c = 2 * np.arcsin(np.sqrt(a)) # calculate the result return(c * EARTH_RADIUS) </code></pre> <p>One way in which I know I can improve this code is to stop the for loop if the 100m is met for any two points. If this is the only way to improve the speed then I will apply this. But I am hoping there is a better way to resolve my problem. Any thoughts are greatly appreciated! Let me know if I can help to clear something up.</p>
<ol> <li><p>Convert all points to Cartesian coordinates to make the task much easier (a distance of 100 m is small enough to disregard that the Earth is not flat).</p> </li> <li><p>Divide each grid into NxN subgrids (20x20, 100x100? check what is faster), and for each point determine in which subgrid it is located. Determine distances within the smaller subgrids (and their neighbours) instead of searching the whole grid.</p> </li> <li><p>Use numpy to vectorize the calculations (doing point no. 1 will definitely help you); a sketch of this is below.</p> </li> </ol>
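<p>A concrete sketch of points 1 and 3 using scipy's <code>cKDTree</code> on a local flat projection; the equirectangular approximation is fine at the 100 m scale, and the column names are taken from the question:</p> <pre><code>import numpy as np
from scipy.spatial import cKDTree

EARTH_RADIUS = 6371.0  # km

def get_pass_in_grid(group):
    lat = np.radians(group['Lat'].to_numpy())
    lon = np.radians(group['Lon'].to_numpy())
    # local flat-earth projection around the grid cell
    x = EARTH_RADIUS * lon * np.cos(lat.mean())
    y = EARTH_RADIUS * lat
    tree = cKDTree(np.column_stack([x, y]))
    check_100m = int(len(tree.query_pairs(r=0.1)) &gt; 0)
    check_1km = int(len(tree.query_pairs(r=1.0)) &gt; 0)
    return check_100m, check_1km
</code></pre>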
python|pandas|numpy|performance
0
9,638
56,069,564
how to draw outlines of objects on an image to a separate image
<p>I am working on a puzzle; my final task here is to identify the edge type of the puzzle piece.</p> <p><a href="https://i.stack.imgur.com/n7wGa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n7wGa.png" alt="enter image description here"></a></p> <p>As shown in the above image, I have managed to rotate and crop out every edge of the piece at the same angle. My next step is to separate the edge line into a separate image, as shown in the image below.</p> <p><a href="https://i.stack.imgur.com/ZJttL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZJttL.png" alt="enter image description here"></a></p> <p>Then I want to fill up one side of the line with a color and process it to decide what type of edge it is. </p> <p>I don't see a proper way to separate the edge line from the image for now. </p> <p>My approach: one way to do it is to scan pixel by pixel and find the black pixels where there is a non-black pixel next to them. This is code that I can implement, but it feels like a primitive and time-consuming approach. </p> <p>So I would appreciate any help or ideas, or any completely different way to detect the hollows and humps. </p> <p>Thanks in advance.</p>
<p>First convert your color image to grayscale. Then apply a threshold, say zero, to obtain a binary image. You may have to use morphological operations to further process the binary image if there are holes. Then find the contours of this image and draw them to a new image.</p> <p><a href="https://i.stack.imgur.com/8KVF2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8KVF2.png" alt="rgb"></a></p> <p><a href="https://i.stack.imgur.com/tMAX5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tMAX5.png" alt="binary"></a></p> <p><a href="https://i.stack.imgur.com/BqVuE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BqVuE.png" alt="cont"></a></p> <p>A simple code is given below, using <code>opencv 4.0.1</code> in <code>python 2.7</code>.</p> <pre><code>import cv2 import numpy as np bgr = cv2.imread('puzzle.png') gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY) _, roi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY) cv2.imwrite('/home/dhanushka/stack/roi.png', roi) cont = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) output = np.zeros(gray.shape, dtype=np.uint8) cv2.drawContours(output, cont[0], -1, (255, 255, 255)) # removing boundary boundary = 255*np.ones(gray.shape, dtype=np.uint8) boundary[1:boundary.shape[0]-1, 1:boundary.shape[1]-1] = 0 toremove = output &amp; boundary output = output ^ toremove </code></pre>
python|image|numpy|opencv|matplotlib
2
9,639
56,055,359
Tensorflow Lite arm64 error: cannot convert ‘const int8x8_t?
<p>I tried building AARCH64 on ubuntu 16.04. I followed this guide(Native Compling) <a href="https://tensorflow.google.cn/lite/guide/build_arm64" rel="nofollow noreferrer">https://tensorflow.google.cn/lite/guide/build_arm64</a>.</p> <p>But I got this error. What is problem ? Also I want try example on Orange Pi 3. How to i can use the libtensorflow-lite.a file after build for arm64. I know Qt ide and c&amp;c++. Thank you.</p> <pre><code>In file included from ./tensorflow/lite/kernels/internal/optimized/depthwiseconv_uint8.h:22:0, from tensorflow/lite/kernels/depthwise_conv.cc:29: ./tensorflow/lite/kernels/internal/optimized/depthwiseconv_uint8_3x3_filter.h: In static member function ‘static void tflite::optimized_ops::depthwise_conv::WorkspacePrefetchWrite&lt;(tflite::DepthwiseConvImplementation)3&gt;::Run(int8, int, int8*)’: ./tensorflow/lite/kernels/internal/optimized/depthwiseconv_uint8_3x3_filter.h:5782:71: note: use -flax-vector-conversions to permit conversions between vectors with differing element types or numbers of subparts vst1_lane_u32(reinterpret_cast&lt;uint32_t*&gt;(ptr), fill_data_vec, 0); ^ ./tensorflow/lite/kernels/internal/optimized/depthwiseconv_uint8_3x3_filter.h:5782:71: error: cannot convert ‘const int8x8_t {aka const __vector(8) signed char}’ to ‘uint32x2_t {aka __vector(2) unsigned int}’ for argument ‘2’ to ‘void vst1_lane_u32(uint32_t*, uint32x2_t, int)’ ./tensorflow/lite/kernels/internal/optimized/depthwiseconv_uint8_3x3_filter.h:5785:35: error: cannot convert ‘const int8x8_t {aka const __vector(8) signed char}’ to ‘uint32x2_t {aka __vector(2) unsigned int}’ for argument ‘2’ to ‘void vst1_lane_u32(uint32_t*, uint32x2_t, int)’ fill_data_vec, 0); ^ tensorflow/lite/tools/make/Makefile:225: recipe for target '/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/obj/tensorflow/lite/kernels/depthwise_conv.o' failed make: *** [/tensorflow/tensorflow/lite/tools/make/gen/aarch64_armv8-a/obj/tensorflow/lite/kernels/depthwise_conv.o] Error 1 make: *** Waiting for unfinished jobs.... </code></pre>
<p>After trying to solve the problem for hours, I think I found a solution for this:</p> <p>Just add the "-flax-vector-conversions" parameter to the CXXFLAGS variable in the tensorflow/lite/tools/make/Makefile file.</p> <p>For me it was in line 58:</p> <pre><code>CXXFLAGS := -O3 -DNDEBUG -fPIC -flax-vector-conversions </code></pre> <p>The previous error is gone, but now I get another error:</p> <pre><code>undefined reference to `shm_open' </code></pre> <p>After that, I added "-lrt" to tensorflow/lite/tools/make/targets/aarch64_makefile.inc</p> <pre><code>LIBS := \ -lstdc++ \ -lpthread \ -lm \ -ldl \ -lrt </code></pre> <p>and changed <strong>BUILD_WITH_NNAPI</strong> in the Makefile to <strong>false</strong> </p> <p>The compile process worked. I will test the TF library as soon as possible.</p>
c++|c|tensorflow|artificial-intelligence|tensorflow-lite
2
9,640
56,128,642
How can I transform a (1,16) array into a (1,16,1) array with np.newaxis?
<p>I want to transform this array of shape (1, 16) </p> <pre><code>[[4 4 4 4 4 4 4 4 4 4 4 4 0 0 0 0]] </code></pre> <p>into a (1,16,1) array. </p> <p>I tried: </p> <pre><code>board = board[np.newaxis, :] </code></pre> <p>but it is not the expected output. </p> <p>How can I do that?</p>
<p>You have to put the <code>np.newaxis</code> at the location of the dimension where you want this new axis.</p> <pre><code>board[np.newaxis,:] -&gt; puts the axis in the first dimension [1,1,16] board[:,np.newaxis] -&gt; puts the axis in the second dimension [1,1,16] board[:,:,np.newaxis] -&gt; puts the axis in the third dimension [1,16,1] </code></pre>
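<p>Equivalent spellings, in case they read better; a small sketch on a board of the same shape:</p> <pre><code>import numpy as np

board = np.zeros((1, 16), dtype=int)

a = board[:, :, np.newaxis]          # (1, 16, 1)
b = np.expand_dims(board, axis=-1)   # (1, 16, 1)
c = board.reshape(1, 16, 1)          # (1, 16, 1)
</code></pre>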
python|arrays|numpy
3
9,641
56,178,968
Repeat rows in pandas data frame with a sequential change in a column value
<p>I want to repeat the rows in my df in a time sequence with forward filling.</p> <p>Original df:</p> <pre><code> A B C Year 0 ABC 0 A 1950 1 CDE 1 A 1950 2 XYZ 1 B 1954 3 123 1 C 1954 4 X12 1 B 1956 5 123 1 D 1956 6 124 1 D 1956 </code></pre> <p>Desired df:</p> <pre><code> A B C Year 0 ABC 0 A 1950 1 CDE 1 A 1950 2 ABC 0 A 1951 3 CDE 1 A 1951 4 ABC 0 A 1952 5 CDE 1 A 1952 6 ABC 0 A 1953 7 CDE 1 A 1953 8 XYZ 1 B 1954 9 123 1 C 1954 10 XYZ 1 B 1955 11 123 1 C 1955 12 X12 1 B 1956 13 123 1 D 1956 14 124 1 D 1956 </code></pre> <p>I have tried converting the Year column to datetime and used a year-wise resampling with forward fill, but that didn't work, as resample gives only one row for each year when resampling year-wise.</p> <pre><code>df.resample('YS').first().ffill().reset_index() </code></pre>
<p>I feel like this is a <a href="https://stackoverflow.com/questions/53218931/how-to-unnest-explode-a-column-in-a-pandas-dataframe/53218939#53218939"><code>unnesting</code></a> problem </p> <pre><code>s=df.astype(str).groupby('Year').agg(list) s.index=s.index.astype(int) s1=s.reindex(np.arange(s.index.min(),s.index.max()+1),method='ffill') yourdf=unnesting(s1,list('ABC')).reset_index() yourdf Out[117]: Year A B C 0 1950 ABC 0 A 1 1950 CDE 1 A 2 1951 ABC 0 A 3 1951 CDE 1 A 4 1952 ABC 0 A 5 1952 CDE 1 A 6 1953 ABC 0 A 7 1953 CDE 1 A 8 1954 XYZ 1 B 9 1954 123 1 C 10 1955 XYZ 1 B 11 1955 123 1 C 12 1956 X12 1 B 13 1956 123 1 D 14 1956 124 1 D </code></pre> <hr> <pre><code>def unnesting(df, explode): idx = df.index.repeat(df[explode[0]].str.len()) df1 = pd.concat([ pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1) df1.index = idx return df1.join(df.drop(explode, 1), how='left') </code></pre>
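<p>On pandas 1.3+ the helper function is not needed any more, since <code>explode</code> accepts several columns at once; a sketch of the same idea under that assumption:</p> <pre><code>s = df.groupby('Year').agg(list)
s = s.reindex(range(s.index.min(), s.index.max() + 1), method='ffill')
yourdf = s.explode(['A', 'B', 'C']).reset_index()
</code></pre>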
pandas
3
9,642
56,134,319
Convert the text string K & M to 10^3 & 10^6
<p>I have data frame with column values -</p> <pre><code>[Themangoescosts$1K] [needtopay20K,10Kdollarsmakesagrand] </code></pre> <p>I need to convert K - 10^3</p> <p>I am not sure how to use the regex option to replace the match value at its location for the list in data frame column</p> <p>Used the below regex to identify the K &amp; M cases - </p> <pre><code>match = re.search("[\d.]+[KM]+", row) </code></pre> <p>And planned to use below to replace the items -</p> <pre><code>mp = {'K':' * 10**3', 'M':' * 10**6'} df2['c'] = pd.eval(df2.offer2.replace(mp.keys(), mp.values(), regex=True).str.replace(r'[\d.]+[KM]+','')) </code></pre> <p>Which results in error -</p> <pre><code>UndefinedVariableError: name 'nan' is not defined </code></pre> <p>Expected Output -</p> <pre><code>[Themangoescosts$1000] [needtopay20000,10000dollarsmakesagrand] </code></pre>
<p>I suggest using</p> <pre><code>df['c'] = df['offer2'].str.replace(r'(?&lt;!\d)(\d{1,3})([KM])', lambda x: '{}000'.format(x.group(1)) if x.group(2) == 'K' else '{}000000'.format(x.group(1)) ) </code></pre> <p>The point is that you may use a callable as the replacement argument when using <code>Series.str.replace</code>.</p> <p><strong>Regex description</strong></p> <ul> <li><code>(?&lt;!\d)</code> - no digit allowed immediately to the left of the current location</li> <li><code>(\d{1,3})</code> - Group 1: one to three digits</li> <li><code>([KM])</code> - Group 2: <code>K</code> or <code>M</code>.</li> </ul> <p>The <code>lambda x: '{}000'.format(x.group(1)) if x.group(2) == 'K' else '{}000000'.format(x.group(1))</code> replacement appends <code>000</code> to Group 1 when the Group 2 value is <code>K</code>, and <code>000000</code> otherwise.</p>
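<p>A quick check on strings like the ones in the question (a sketch; the column name is an assumption):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'offer2': ['Themangoescosts$1K', 'needtopay20K,10Kdollarsmakesagrand']})
df['c'] = df['offer2'].str.replace(
    r'(?&lt;!\d)(\d{1,3})([KM])',
    lambda x: '{}000'.format(x.group(1)) if x.group(2) == 'K' else '{}000000'.format(x.group(1))
)
print(df['c'].tolist())
# ['Themangoescosts$1000', 'needtopay20000,10000dollarsmakesagrand']
</code></pre> <p>Note that in newer pandas versions <code>str.replace</code> treats the pattern literally by default, so you may need to pass <code>regex=True</code>.</p>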
python|regex|pandas
0
9,643
64,946,234
can not import numpy
<p>I have a problem with importing numpy. i did the reinstalling and also used the &quot;pip3 install numpy&quot; command but when i try to import it i face this problem:</p> <pre><code>&gt;&gt;&gt; import numpy Traceback (most recent call last): File &quot;&lt;pyshell#0&gt;&quot;, line 1, in &lt;module&gt; import numpy File &quot;C:\Users\Ehsan_D\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\__init__.py&quot;, line 305, in &lt;module&gt; _win_os_check() File &quot;C:\Users\Ehsan_D\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\__init__.py&quot;, line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('C:\\Users\\Ehsan_D\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. </code></pre> <p>how can i fix it</p>
<p>There seems to be a problem with version 1.19.4. Just open a command prompt and run</p> <pre><code>pip install numpy==1.19.3 </code></pre>
python|numpy|import|pip
0
9,644
65,012,956
Is there a way to get the number of items I print as a result of a FOR cycle in Python?
<p>I have this code:</p> <pre><code>C = 50000 threshold = 0.3 for i in range(1, 18598): if binned_orcs[i] &gt; C and ori[i] &gt; threshold: Origins = print(i) </code></pre> <p>I would like to then have a way to know how many items that for cycle with those conditions printed but because every origin is printed on a different line I can't use <code>len(Origins)</code> I think, is there a way?</p> <p>Like the code output is:</p> <pre><code>1875 2550 3424 7426 7498 9065 9866 9924 11828 12116 12334 13317 13788 15110 15348 16988 17185 17572 18516 </code></pre> <p>which is 19 numbers and I would like to have a code line which when printed would just give me 19.</p>
<p>You could add a counter and track the number yourself:</p> <pre class="lang-py prettyprint-override"><code>count = 0 for i in range(1, 18598): if binned_orcs[i] &gt; C and ori[i] &gt; threshold: print(i) count += 1 print(count) </code></pre>
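<p>If you also want to keep the matching values themselves, a small variation (a sketch reusing the variables from the question) collects them in a list, so the count is just <code>len</code>:</p> <pre><code>origins = [i for i in range(1, 18598) if binned_orcs[i] &gt; C and ori[i] &gt; threshold]
print('\n'.join(str(i) for i in origins))
print(len(origins))  # e.g. 19
</code></pre>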
python|python-3.x|numpy|string-length
2
9,645
64,968,250
How to solve Runtime Error: Empty min/max for tensor Cast while doing post-training quantization (fully quantized tflite model from saved_model)?
<p>I try to create fully quantized tflite model to be able to run it on coral. I downloaded SSD MobileNet V2 FPNLite 640x640 from <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md</a></p> <p>I installed in virtual environment tf-nightly-2.5.0.dev20201123 tf-nightly-models and tensorflow/object_detection_0.1</p> <p>I run this code to do post training quantization</p> <pre><code>import tensorflow as tf import cv2 import numpy as np converter = tf.lite.TFLiteConverter.from_saved_model('./0-ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model/',signature_keys=['serving_default']) # path to the SavedModel directory VIDEO_PATH = '/home/andrej/Videos/outvideo3.h264' def rep_data_gen(): REP_DATA_SIZE = 10#00 a = [] video = cv2.VideoCapture(VIDEO_PATH) i=0 while(video.isOpened()): ret, img = video.read() i=i+1 if not ret or i &gt; REP_DATA_SIZE: print('Reached the end of the video!') break img = cv2.resize(img, (640, 640))#todo parametrize based on network size img = img.astype(np.uint8) #img = (img /127.5) -1 # #img = img.astype(np.float32)#causing types mismatch error a.append(img) a = np.array(a) print(a.shape) # a is np array of 160 3D images for i in tf.data.Dataset.from_tensor_slices(a).batch(1).take(REP_DATA_SIZE): yield [i] #tf2 models converter.allow_custom_ops=True converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.representative_dataset = rep_data_gen converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS] #converter.quantized_input_stats = {'inputs': (0, 255)} #does not help converter.inference_input_type = tf.uint8 # or tf.uint8 converter.inference_output_type = tf.uint8 # or tf.uint8 quantized_model = converter.convert() # Save the model. with open('quantized_model.tflite', 'wb') as f: f.write(quantized_model) </code></pre> <p>I got</p> <pre><code>RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor Cast Empty min/max for tensor Cast </code></pre>
<p>I trained the same model, <code>SSD MobileNet V2 FPNLite 640x640</code>, using the script <code>model_main_tf2.py</code> and then exported the checkpoint to <code>saved_model</code> using the script <code>exporter_main_v2.py</code>. When trying to convert to &quot;.tflite&quot; for use on Edge TPU I was having the same problem.</p> <p>The solution for me was to export the trained model using the script <code>export_tflite_graph_tf2.py</code> instead of <code>exporter_main_v2.py</code> to generate the <code>saved_model.pb</code>. Then the conversion occurred well.</p> <p>Maybe try to generate a saved_model using <code>export_tflite_graph_tf2.py</code>.</p>
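<p>For reference, a hedged sketch of the export step (all paths are placeholders; flag names are those used by the TF2 Object Detection API's <code>export_tflite_graph_tf2.py</code> at the time of writing):</p> <pre><code>python models/research/object_detection/export_tflite_graph_tf2.py \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_dir=path/to/checkpoint \
    --output_directory=path/to/tflite_export
</code></pre> <p>The <code>saved_model</code> written to the output directory can then be passed to <code>TFLiteConverter.from_saved_model</code> as in the question.</p>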
python|tensorflow|tensorflow-lite|google-coral|edge-tpu
2
9,646
40,075,106
Replace values in pandas Series with dictionary
<p>I want to replace values in a pandas <code>Series</code> using a dictionary. I'm following @DSM's <a href="https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict">accepted answer</a> like so:</p> <pre><code>s = Series(['abc', 'abe', 'abg']) d = {'b': 'B'} s.replace(d) </code></pre> <p>But this has no effect:</p> <pre><code>0 abc 1 abe 2 abg dtype: object </code></pre> <p>The <a href="http://nipy.bic.berkeley.edu/nightly/pandas/doc/generated/pandas.Series.replace.html" rel="nofollow noreferrer">documentation</a> explains the required format of the dictionary for <code>DataFrames</code> (i.e. nested dicts with top level keys corresponding to column names) but I can't see anything specific for <code>Series</code>.</p>
<p>You can do it using <code>regex=True</code> parameter:</p> <pre><code>In [37]: s.replace(d, regex=True) Out[37]: 0 aBc 1 aBe 2 aBg dtype: object </code></pre> <p>As you have already <a href="https://stackoverflow.com/questions/40075106/replace-values-in-pandas-series-with-dictionary/40075212#comment67423617_40075106">found out yourself</a> - it's a RegEx replacement and it won't work as you expected:</p> <pre><code>In [36]: s.replace(d) Out[36]: 0 abc 1 abe 2 abg dtype: object </code></pre> <p>this is working as expected:</p> <pre><code>In [38]: s.replace({'abc':'ABC'}) Out[38]: 0 ABC 1 abe 2 abg dtype: object </code></pre>
python|pandas|dictionary|replace
7
9,647
40,138,031
How to read realtime microphone audio volume in python and ffmpeg or similar
<p>I'm trying to read, in <em>near-realtime</em>, the volume coming from the audio of a USB microphone in Python. </p> <p>I have the pieces, but can't figure out how to put it together. </p> <p>If I already have a .wav file, I can pretty simply read it using <strong>wavefile</strong>:</p> <pre><code>from wavefile import WaveReader with WaveReader("/Users/rmartin/audio.wav") as r: for data in r.read_iter(size=512): left_channel = data[0] volume = np.linalg.norm(left_channel) print volume </code></pre> <p>This works great, but I want to process the audio from the microphone in real-time, not from a file.</p> <p>So my thought was to use something like ffmpeg to PIPE the real-time output into WaveReader, but my Byte knowledge is somewhat lacking. </p> <pre><code>import subprocess import numpy as np command = ["/usr/local/bin/ffmpeg", '-f', 'avfoundation', '-i', ':2', '-t', '5', '-ar', '11025', '-ac', '1', '-acodec','aac', '-'] pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=10**8) stdout_data = pipe.stdout.read() audio_array = np.fromstring(stdout_data, dtype="int16") print audio_array </code></pre> <p>That looks pretty, but it doesn't do much. It fails with a <strong>[NULL @ 0x7ff640016600] Unable to find a suitable output format for 'pipe:'</strong> error. </p> <p>I assume this is a fairly simple thing to do given that I only need to check the audio for volume levels. </p> <p>Anyone know how to accomplish this simply? FFMPEG isn't a requirement, but it does need to work on OSX &amp; Linux. </p>
<p>Thanks to @Matthias for the suggestion to use the sounddevice module. It's exactly what I need. </p> <p>For posterity, here is a working example that prints real-time audio levels to the shell: </p> <pre><code># Print out realtime audio volume as ascii bars import sounddevice as sd import numpy as np def print_sound(indata, outdata, frames, time, status): volume_norm = np.linalg.norm(indata)*10 print ("|" * int(volume_norm)) with sd.Stream(callback=print_sound): sd.sleep(10000) </code></pre> <p><a href="https://i.stack.imgur.com/p1EJ3.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/p1EJ3.gif" alt="enter image description here"></a></p>
python|linux|numpy|audio|ffmpeg
38
9,648
39,582,754
Running a Python function on Spark Dataframe
<p>I have a python function which basically does some sampling from the original dataset and converts it into training_test. </p> <p>I have written that code to work on pandas data frame. </p> <p>I was wondering if anyone knows how to implement the same on Spark DAtaframe in pyspark?. Instead of Pandas data frame or numpy array mentioned should I use Spark Dataframe and that's it?</p> <p>Please let me know</p> <pre><code>def train_test_split(recommender,pct_test=0.20,alpha=40): """ This function takes a ratings data and splits it into train, validation and test datasets This function will take in the original user-item matrix and "mask" a percentage of the original ratings where a user-item interaction has taken place for use as a test set. The test set will contain all of the original ratings, while the training set replaces the specified percentage of them with a zero in the original ratings matrix. parameters: ratings - the original ratings matrix from which you want to generate a train/test set. Test is just a complete copy of the original set. This is in the form of a sparse csr_matrix. pct_test - The percentage of user-item interactions where an interaction took place that you want to mask in the training set for later comparison to the test set, which contains all of the original ratings. returns: training_set - The altered version of the original data with a certain percentage of the user-item pairs that originally had interaction set back to zero. test_set - A copy of the original ratings matrix, unaltered, so it can be used to see how the rank order compares with the actual interactions. user_inds - From the randomly selected user-item indices, which user rows were altered in the training data. This will be necessary later when evaluating the performance via AUC. """ test_set = recommender.copy() # Make a copy of the original set to be the test set. test_set=(test_set&gt;0).astype(np.int8) training_set = recommender.copy() # Make a copy of the original data we can alter as our training set. nonzero_inds = training_set.nonzero() # Find the indices in the ratings data where an interaction exists nonzero_pairs = list(zip(nonzero_inds[0], nonzero_inds[1])) # Zip these pairs together of user,item index into list random.seed(0) # Set the random seed to zero for reproducibility num_samples = int(np.ceil(pct_test*len(nonzero_pairs))) # Round the number of samples needed to the nearest integer samples = random.sample(nonzero_pairs, num_samples) # Sample a random number of user-item pairs without replacement user_inds = [index[0] for index in samples] # Get the user row indices item_inds = [index[1] for index in samples] # Get the item column indices training_set[user_inds, item_inds] = 0 # Assign all of the randomly chosen user-item pairs to zero conf_set=1+(alpha*training_set) return training_set, test_set, conf_set, list(set(user_inds)) </code></pre>
<p>You can use randomSplit function on Spark dataframe. </p> <pre><code>(train, test) = dataframe.randomSplit([0.8, 0.2]) </code></pre>
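<p>A small hedged addition: passing a seed makes the split reproducible, and the weights are normalized if they do not sum to 1:</p> <pre><code>train, test = dataframe.randomSplit([0.8, 0.2], seed=42)
print(train.count(), test.count())
</code></pre>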
python|pandas|apache-spark
-1
9,649
39,481,516
Acquire the data from a row in a Pandas
<p>Instructions given by Professor: 1. Using the list of countries by continent from World Atlas data, load in the countries.csv file into a pandas DataFrame and name this data set as countries. 2. Using the data available on Gapminder, load in the Income per person (GDP/capita, PPP$ inflation-adjusted) as a pandas DataFrame and name this data set as income. 3. Transform the data set to have years as the rows and countries as the columns. Show the head of this data set when it is loaded. 4. Graphically display the distribution of income per person across all countries in the world for any given year (e.g. 2000). What kind of plot would be best?</p> <p>In the code below, I have some of these tasks completed, but I'm having a hard time understanding how to acquire data from a DataFrame row. I want to be able to acquire data from a row and then plot it. It may seem like a trivial concept, but I've been at it for a while and need assistance please.</p> <pre><code>%matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt countries = pd.read_csv('2014_data/countries.csv') countries.head(n=3) income = pd.read_excel('indicator gapminder gdp_per_capita_ppp.xlsx') income = income.T def graph_per_year(year): stryear = str(year) dfList = income[stryear].tolist() graph_per_year(1801) </code></pre>
<p>Pandas uses three types of indexing.</p> <p>If you are looking to use integer indexing, you will need to use <code>.iloc</code></p> <pre><code>df_1 Out[5]: consId fan-cnt 0 1155696024483 34.0 1 1155699007557 34.0 2 1155694005571 34.0 3 1155691016680 12.0 4 1155697016945 34.0 df_1.iloc[1,:] #go to the row with index 1 and select all the columns Out[8]: consId 1.155699e+12 fan-cnt 3.400000e+01 Name: 1, dtype: float64 </code></pre> <p>And to go to a particular cell, you can use something along the following lines,</p> <pre><code>df_1.iloc[1][1] Out[9]: 34.0 </code></pre> <p>You need to go through the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow noreferrer">documentation</a> for other types of indexing namely <code>.ix</code> and <code>.loc</code> as suggested by <a href="https://stackoverflow.com/users/4893008/sohier-dane">sohier-dane</a>.</p>
python|pandas|matplotlib|dataframe|data-science
1
9,650
39,807,845
argmin in dataset containing NaN python
<pre><code> SGSIN VNVUT CNSHK HKHKG JPOSA To MYPKL 1 4 8 9 13 SGSIN NaN 3 7 8 12 VNVUT NaN NaN 3 4 8 CNSHK 1 NaN NaN 1 5 HKHKG NaN NaN NaN NaN 3 </code></pre> <p>Let say we have the above dataset using pandas. I want to calculate the <code>arg_minimum</code> over the first column and ignoring the <code>NaN</code>. I tried with </p> <pre><code>df[df[0]].idxmin() </code></pre> <p>but it gives </p> <pre><code>nan </code></pre> <p>but I don't get the right result then. Can someone help me? The result I want is (in this case) </p> <pre><code>[0,3] </code></pre>
<p>You can use numpy's <code>argwhere</code></p> <pre><code>import numpy as np np.argwhere(df['SGSIN'].eq(df['SGSIN'].min())) array([[0], [3]]) </code></pre>
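<p>A slightly more compact variant of the same idea (a sketch; <code>min</code> skips NaN by default, so no extra handling is needed):</p> <pre><code>import numpy as np

pos = np.flatnonzero(df['SGSIN'] == df['SGSIN'].min())
print(pos.tolist())  # [0, 3]
</code></pre>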
python|pandas|machine-learning
0
9,651
44,351,398
Grouping and numbering items in a pandas dataframe
<p>I want to add a column to a dataframe in python/pandas as follows:</p> <pre><code>| MarketID | SelectionID | Time | SelectNumber | | 112337406 | 3819251.0 | 13:38:32 | 4 | | 112337406 | 3819251.0 | 13:39:03 | 4 | | 112337406 | 4979206.0 | 11:29:34 | 1 | | 112337406 | 4979206.0 | 11:37:34 | 1 | | 112337406 | 5117439.0 | 13:36:32 | 3 | | 112337406 | 5117439.0 | 13:37:03 | 3 | | 112337406 | 5696467.0 | 13:23:03 | 2 | | 112337406 | 5696467.0 | 13:23:33 | 2 | | 112337407 | 3819254.0 | 13:39:12 | 4 | | 112337407 | 4979206.0 | 11:29:56 | 1 | | 112337407 | 4979206.0 | 16:27:34 | 1 | | 112337407 | 5117441.0 | 13:36:54 | 3 | | 112337407 | 5117441.0 | 17:47:11 | 3 | | 112337407 | 5696485.0 | 13:23:04 | 2 | | 112337407 | 5696485.0 | 18:23:59 | 2 | </code></pre> <p>I currently have the market ID, Selection ID and Time, I want to generate the SelectNumber column, which represents the time order in which the particular selectionID appears within a particular MarketID. Once numbered all other iterations of the same selection ID within that MarketID need to be numbered the same. The MarketID will always be unique, but the same selectionID can appear in more than 1 MarketID.</p> <p>This has got me stumped, any ideas?</p>
<p>First, you need the combinations of 'MarketID' and 'SelectionID' in order of occurrence, so lets sort on the time. Then, for each 'MarketID' get the unique 'SelectionID's and number them in order of occurrence (already ordered, because df is ordered on column time). Secondly, the combination of number 'MarketID' and 'SelectionID' together with the order will be used later to set the numbers.</p> <p>I'll give you two solution to the first part:</p> <pre><code>dfnewindex = df.sort_values('Time').set_index('MarketID') valuesetter = {} for indx in dfnewindex.index.unique(): selectionid_per_marketid = dfnewindex.loc[indx].sort_values('Time')['SelectionID'].drop_duplicates().values valuesetter.update(dict(zip(zip(len(selectionid_per_marketid)*[indx], selectionid_per_marketid), range(1, 1+len(selectionid_per_marketid))))) </code></pre> <p>100 loops, best of 3: 3.22 ms per loop</p> <pre><code>df_sorted = df.sort_values('Time') valuesetter = {} for mrktid in df_sorted['MarketID'].unique(): sltnids = df_sorted[df_sorted['MarketID']==mrktid]['SelectionID'].drop_duplicates(keep='first').values valuesetter.update(dict(zip(zip(len(sltnids)*[mrktid], sltnids), range(1, 1+len(sltnids))))) </code></pre> <p>100 loops, best of 3: 2.59 ms per loop</p> <p>The boolean slicing solution is slightly faster in this case</p> <p>The output:</p> <pre><code>valuesetter {(112337406, 3819251.0): 4, (112337406, 4979206.0): 1, (112337406, 5117439.0): 3, (112337406, 5696467.0): 2, (112337407, 3819254.0): 4, (112337407, 4979206.0): 1, (112337407, 5117441.0): 3, (112337407, 5696485.0): 2} </code></pre> <p>For the second part, this dict is used to generate a column, i.e. SelectNumber. Again two solutions, the first uses multiindex, the second groupby:</p> <pre><code>map(lambda x: valuesetter[x], df.set_index(['MarketID', 'SelectionID']).index.values) </code></pre> <p>1000 loops, best of 3: 1.23 ms per loop</p> <pre><code>map(lambda x: valuesetter[x], df.groupby(['MarketID', 'SelectionID']).count().index.values) </code></pre> <p>1000 loops, best of 3: 1.59 ms per loop</p> <p>the multiindex seems to be the fastest solution.</p> <p>The final, up to this point, fastest answer:</p> <pre><code>df_sorted = df.sort_values('Time') valuesetter2 = {} for mrktid in df_sorted['MarketID'].unique(): sltnids = df_sorted[df_sorted['MarketID']==mrktid]['SelectionID'].drop_duplicates(keep='first').values valuesetter2.update(dict(zip(zip(len(sltnids)*[mrktid], sltnids), range(1, 1+len(sltnids))))) df_sorted['SelectNumber'] = list(map(lambda x: valuesetter[x], df.set_index(['MarketID', 'SelectionID']).index.values)) </code></pre>
python-3.x|pandas|dataframe
0
9,652
69,608,359
How to convert 6 first values in column to date based on some assumption in Python Pandas?
<p>I have Pandas Data Frame in Python like below:</p> <pre><code>VAL -------- 99050605188 00102255789 20042388956 02111505667 </code></pre> <p>Values are in str format.</p> <p>First 6 numbers means date, for example:</p> <ul> <li>99050605188 --&gt; 1999-05-06</li> <li>00102255789 --&gt; 2000-10-22</li> <li>20042388956 --&gt; 1920-04-23</li> </ul> <p>Be aware that:</p> <ol> <li>if value in column &quot;VAL&quot; starts with 0 it will be year 2000 +, for example 001203... ---&gt; 2000-12-03, 021115...--&gt; 2002-11-15</li> <li>if value in column &quot;VAL&quot; starts with 9,8,7,6,5,4,3,2,1 it will be year 1900+, for example 200423... --&gt; 1920-04-23</li> </ol> <p>So as a result I need something like below (column &quot;Date&quot; in str format):</p> <pre><code>VAL date --------------------------- 99050605188 | 1999-05-06 00102255789 | 2000-10-22 20042388956 | 1920-04-23 02111505667 | 2002-11-15 </code></pre> <p>How can I do that in Python Pandas ?</p>
<p>You can parse the first 6 characters of the string using the format, <code>%y%m%d</code> and then change the year as per your requirement.</p> <p><strong>Demo:</strong></p> <pre><code>from datetime import datetime import pandas as pd df = pd.DataFrame( {'val': ['99050605188', '00102255789', '20042388956', '02111505667']}) date_list = [] for s in df['val']: date = datetime.strptime(s[:6], '%y%m%d') if s[0] != '0' and date.year &gt; 2000: date = date.replace(year=date.year - 100) date_list.append(date.date()) result = df.assign(date=pd.Series(date_list)) print(result) </code></pre> <p><strong>Output:</strong></p> <pre><code> val date 0 99050605188 1999-05-06 1 00102255789 2000-10-22 2 20042388956 1920-04-23 3 02111505667 2002-11-15 </code></pre> <p>Update based on the following request from the OP:</p> <blockquote> <p>could you make update also in terms of situation when val is NaN and in this situation return 1900-01-01 in column &quot;date&quot; ?</p> </blockquote> <pre><code>from datetime import datetime import pandas as pd import numpy as np df = pd.DataFrame( {'val': ['99050605188', '00102255789', '20042388956', '02111505667', np.nan]}) date_list = ['19000101' if pd.isnull(s) else ('20' + s if s[0] == '0' else '19' + s)[:8] for s in df['val']] result = df.assign(date=pd.Series(pd.to_datetime(date_list, format='%Y%m%d'))) print(result) </code></pre> <p><strong>Output:</strong></p> <pre><code> val date 0 99050605188 1999-05-06 1 00102255789 2000-10-22 2 20042388956 1920-04-23 3 02111505667 2002-11-15 4 NaN 1900-01-01 </code></pre>
python|pandas|string|dataframe|date
1
9,653
41,174,249
Looping through pandas data frame while changing row values using regex
<hr> <p><em>-EDIT-</em></p> <p>As Daniel Kasatchkow (below) suggested, I have attempted the following:</p> <pre><code>df._links.str.findall('qwer://abc\\\.x-data\\\.orc/v1/i/\d+/users') </code></pre> <p>But I get the following output:</p> <pre><code>0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN ... </code></pre> <p>UPDATE - Still unable to find a solution</p>
<p>Try something like this</p> <pre><code>import pandas as pd df = pd.DataFrame(["{u'users': {u'href': u'qwer://abc\.x-data\.orc/v1/i/32/users'}, u'self': {u'href': ...","{u'users': {u'href': u'qwer://abc\.x-data\.orc/v1/i/87/users'}, u'self': {u'href': ..."], columns=['_links']) df._links.str.findall('qwer://abc\\\.x-data\\\.orc/v1/i/\d+/users') </code></pre> <p>When using regex I find it helpful to trial out the regex on <a href="http://pythex.org/" rel="nofollow noreferrer">http://pythex.org/</a></p> <p>If the data is in a dictionary format, it would be best to convert it over to a DataFrame using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html" rel="nofollow noreferrer">pandas.DataFrame.from_dict</a></p>
python|regex|python-2.7|pandas|for-loop
1
9,654
54,121,263
Speed issues with pandas and list comprehensions
<p>I have a dataset with 4m rows of data, and I split this into chunks using pd.read_csv(chunk size...) and then perform some simple data cleaning code to get it into a format I need.</p> <pre><code>tqdm.pandas() print("Merging addresses...") df_adds = chunk.progress_apply(merge_addresses, axis = 1) [(chunk.append(df_adds[idx][0], ignore_index=True),chunk.append(df_adds[idx][1], \ ignore_index=True)) for idx in tqdm(range(len(chunk))) \ if pd.notnull(df_adds[idx][0]['street_address'])] def merge_addresses(row): row2 = pd.Series( {'Org_ID' : row.Org_ID, 'org_name': row.org_name, 'street_address': row.street_address2}) row3 = pd.Series( {'Org_ID' : row.Org_ID, 'org_name': row.org_name, 'street_address': row.street_address3}) return row2, row3 </code></pre> <p>I'm using tqdm to analyse the speed of two operations, the first, a pandas apply function runs fine at about 1.5k it/s, and the second, a list comprehension starts at about 2k it/s then quickly drops to 200 it/s. Can anyone help explain how I can improve the speed of this?</p> <p>My aim is to take the street_address 2 &amp; 3 and merge and copy all of them that aren't null into the street_address1 column, duplicating the org_id and org_name as required.</p> <p><strong>Update</strong></p> <p>I've tried to capture any NaNs in merge_addresses and replace them as strings. My aim is to bring address2 and address3 into their own row (with org_name and org_id (so these two fields will be duplicates) in the same column as address1. So potentially there could be three rows for the same org_id but the addresses vary.</p> <pre><code>df_adds = chunk.progress_apply(merge_addresses, axis = 1) [(chunk.append(x[0]), chunk.append(x[1])) for x in tqdm(df_adds) if (pd.notnull(x[0][3]),pd.notnull(x[0][3]))] def merge_addresses(row): if pd.isnull(row.street_address2): row.street_address2 = '' if pd.isnull(row.street_address3): row.street_address3 = '' return ([row.Org_ID, row.pub_name_adj, row.org_name, row.street_address2], [row.Org_ID, row.pub_name_adj, row.org_name, row.street_address3]) </code></pre> <p>I'm getting the error <code>'&lt;' not supported between instances of 'str' and 'int', sort order is undefined for incomparable objects result = result.union(other)</code></p> <p>Using tqdm, the list comprehension appears to be work, but it's painfully slow (24 it/s)</p> <p><strong>Update</strong></p> <p>Just to clarify, the data is in the current format: <a href="https://i.stack.imgur.com/Z2nbF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z2nbF.png" alt="enter image description here"></a></p> <p>And my aim is to get it to the following:</p> <p><a href="https://i.stack.imgur.com/yS5B7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yS5B7.png" alt="enter image description here"></a></p> <p>I've played around with different chunk sizes:</p> <blockquote> <p>20k row = 70 it/s 100k row = 35 it/s 200k = 31 it/s</p> </blockquote> <p>It seems that the better size for the trade-off is 200k rows.</p>
<p>Calling <code>DataFrame.append</code> too frequently can be expensive (<a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html</a>):</p> <blockquote> <p>Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.</p> </blockquote> <p>If you can, use <code>pd.concat</code> for a speedier implementation.</p>
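<p>A hedged sketch of what that looks like here: build the new rows in a plain list inside the loop and concatenate once per chunk, instead of calling <code>append</code> per row (variable names follow the question):</p> <pre><code>new_rows = []
for idx in range(len(chunk)):
    row2, row3 = df_adds[idx]
    if pd.notnull(row2['street_address']):
        new_rows.append(row2)
        new_rows.append(row3)

# one concatenation per chunk instead of one append per row
chunk = pd.concat([chunk, pd.DataFrame(new_rows)], ignore_index=True)
</code></pre> <p>This keeps the per-iteration work cheap and pays the concatenation cost only once per chunk.</p>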
python|pandas|list-comprehension
2
9,655
38,256,180
How to make shared libraries with Bazel at Tensorflow
<p>I've tried building <a href="https://www.tensorflow.org/" rel="noreferrer">tensorflow</a> with <a href="http://www.bazel.io/" rel="noreferrer">bazel</a> as follows:</p> <blockquote> <p>bazel build -c opt --copt="-fPIC" --copt="-g0" //tensorflow/tools/pip_package:build_pip_package</p> </blockquote> <p>I couldn't see <code>.so</code> file under <code>~/tensorflow/bazel-bin/tensorflow/core</code>.</p> <p>There are no <code>.so</code> files but <code>.lo</code> files and <code>.a</code> files.</p> <p>Could you tell me how to make <code>.so</code> files of tensorflow library?</p>
<p><code>//tensorflow:libtensorflow.so</code> is the target you are looking for.</p> <pre><code>bazel build -c opt //tensorflow:libtensorflow.so </code></pre> <p>should produce the file in <code>bazel-bin/tensorflow</code>.</p>
shared-libraries|tensorflow|bazel
8
9,656
38,243,318
Get the string from the DataFrame column an assign to the other column in pandas
<p>I Have a input data frame with columns:</p> <pre><code>Template Template Name This is String This is String line This is Int This is Int Name This is String Name String Name is none Int is empty </code></pre> <p>Expected Output Dataframe:</p> <pre><code>Template Template Name This is String String This is String line String This is Int Int This is Int Name Int This is String Name String String Name is none String Int is empty Int </code></pre> <p>I have tried the below code</p> <pre><code> all_data['Template Name'] = all_data['Template'].str.contains('String') if all_data['Template'].str.contains('String').any() == True: all_data['Template Name'] = 'String' </code></pre> <p>but it just prints 'String' in all the cells, Please help me.</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow"><code>extract</code></a>:</p> <pre><code>df['Template Name'] = df.Template.str.extract('(String|Int)', expand=False) print (df) Template Template Name 0 This is String String 1 This is String line String 2 This is Int Int 3 This is Int Name Int 4 This is String Name String 5 String Name is none String 6 Int is empty Int </code></pre>
python|pandas
3
9,657
38,251,786
How to display out of range values on image histogram?
<p>I want to plot the RGB histograms of an image using <code>numpy.histogram</code>.</p> <p>(See my function <code>draw_histogram</code> below)</p> <p>It works well for a regular range of [0, 255] :</p> <pre><code>import numpy as np import matplotlib.pyplot as plt im = plt.imread('Bulbasaur.jpeg') draw_histogram(im, minimum=0., maximum=255.) </code></pre> <p><img src="https://i.stack.imgur.com/jMTpn.jpg" alt="Bulbasaur.jpeg"> <img src="https://i.stack.imgur.com/Q9sVx.png" alt="Histogram_Bulbasaur"></p> <p><strong>What I want to do :</strong></p> <p>I expect the images I use to have out of range values. Sometimes they will be out of range, sometimes not. I want to use the RGB histogram to analyse how bad the values are out of range.</p> <p>Let's say I expect the values to be at worst in the interval [-512, 512]. I still want the histogram to display the in-range intensities at the right spot, and leave blank the unpopulated range sections. For example, if I draw the histogram of <code>Bulbasaur.jpeg</code> again but with range [-512, 512], I expect to see the same histogram but contracted along the "x" axis (between the two dashed lines in the histogram below).</p> <p><strong>The problem :</strong></p> <p>When I try to draw the histogram for an unregular range, something goes wrong :</p> <pre><code>import numpy as np import matplotlib.pyplot as plt im = plt.imread('Bulbasaur.jpeg') draw_histogram(im, minimum=-512., maximum=512.) </code></pre> <p><a href="https://i.stack.imgur.com/Y0XVn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0XVn.png" alt="enter image description here"></a></p> <p><strong>My code for <code>draw_histogram()</code> :</strong></p> <pre><code>def draw_histogram(im, minimum, maximum): fig = plt.figure() color = ('r','g','b') for i, col in enumerate(color): hist, bins = np.histogram(im[:, :, i], int(maximum-minimum), (minimum, maximum)) plt.plot(hist, color=col) plt.xlim([int(minimum), int(maximum)]) # Draw vertical lines to easily locate the 'regular range' plt.axvline(x=0, color='k', linestyle='dashed') plt.axvline(x=255, color='k', linestyle='dashed') plt.savefig('Histogram_Bulbasaur.png') plt.close(fig) return 0 </code></pre> <p><strong>Question</strong></p> <p>Does anyone know a way of properly drawing RGB histogram with unregular ranges?</p>
<p>You should pass x values to 'plt.plot'</p> <p>I changed:</p> <pre><code>plt.plot(hist, color=col) </code></pre> <p>to this:</p> <pre><code>plt.plot(np.arange(minimum,maximum),hist, color=col) </code></pre> <p>With this change, the graph began to appear normally. Essentially, plt.plot was trying to start plotting the y-values you gave it from np.hist starting at 0. This works when your expected range starts at 0, but when you want to include negative numbers, plt.plot shouldn't start at 0, rather, it should start at minimum, so using np.range to manually assign x values fixes the problem.</p> <p><a href="https://i.stack.imgur.com/6fObF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6fObF.png" alt="enter image description here"></a></p>
python|numpy|image-processing|histogram|rgb
1
9,658
66,122,744
Pandas - get_loc nearest for whole column
<p>I have a df with date and price. Given a datetime, I would like to find the price at the nearest date.</p> <p>This works for one input datetime:</p> <pre><code>import requests, xlrd, openpyxl, datetime import pandas as pd file = &quot;E:/prices.csv&quot; #two columns: Timestamp (UNIX epoch), Price (int) df = pd.read_csv(file, index_col=None, names=[&quot;Timestamp&quot;, &quot;Price&quot;]) df['Timestamp'] = pd.to_datetime(df['Timestamp'],unit='s') df = df.drop_duplicates(subset=['Timestamp'], keep='last') df = df.set_index('Timestamp') file = &quot;E:/input.csv&quot; #two columns: ID (string), Date (dd-mm-yyy hh:ss:mm) dfinput = pd.read_csv(file, index_col=None, names=[&quot;ID&quot;, &quot;Date&quot;]) dfinput['Date'] = pd.to_datetime(dfinput['Date'], dayfirst=True) exampledate = pd.to_datetime(&quot;20-3-2020 21:37&quot;, dayfirst=True) exampleprice = df.iloc[df.index.get_loc(exampledate, method='nearest')][&quot;Price&quot;] print(exampleprice) #price as output </code></pre> <p>I have another dataframe with the datetimes (&quot;dfinput&quot;) I want to lookup prices of and save in a new column &quot;Price&quot;. Something like this which is obviously not working:</p> <pre><code>dfinput['Date'] = pd.to_datetime(dfinput['Date'], dayfirst=True) dfinput['Price'] = df.iloc[df.index.get_loc(dfinput['Date'], method='nearest')][&quot;Price&quot;] dfinput.to_csv('output.csv', index=False, columns=[&quot;Hash&quot;, &quot;Date&quot;, &quot;Price&quot;]) </code></pre> <p>Can I do this for a whole column or do I need to iterate over all rows?</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> (cannot test, because no sample data). Both sides have to be sorted on the merge keys, and <code>dfinput</code> should be the left frame so that every lookup row gets the nearest price:</p> <pre><code>df = df.sort_index() dfinput = dfinput.sort_values('Date') dfinput = pd.merge_asof(dfinput, df, left_on='Date', right_index=True, direction='nearest') </code></pre>
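<p>A tiny self-contained example of the same pattern (a sketch with made-up data, just to show the shape of the call):</p> <pre><code>import pandas as pd

prices = pd.DataFrame(
    {'Price': [10, 20, 30]},
    index=pd.to_datetime(['2020-03-20 21:00', '2020-03-20 21:30', '2020-03-20 22:00'])
)
lookups = pd.DataFrame({'ID': ['a', 'b'],
                        'Date': pd.to_datetime(['2020-03-20 21:37', '2020-03-20 21:55'])})

out = pd.merge_asof(lookups.sort_values('Date'), prices.sort_index(),
                    left_on='Date', right_index=True, direction='nearest')
print(out)  # row 'a' picks 20 (21:30 is nearest), row 'b' picks 30 (22:00 is nearest)
</code></pre>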
pandas|dataframe
1
9,659
66,236,117
Does pandas index have advantage on performance than column?
<p>Till now, I used to put timestamps as the index for my time series dataframe. I felt that if I put the timestamps as the index, I might have performance gain when I search the data comparing to the search with the timestamps in a column. (It's kind of my feeling from the name, 'index'. I felt it might be indexed.) But I start to feel it might not the case.</p> <p>Is there any advantage of using the index comparing to the column?</p>
<p>One answer is in terms of data frame size. I have a data frame with 50M rows</p> <pre><code>df_Usage.info() </code></pre> <p>output</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 49991484 entries, 0 to 49991483 Data columns (total 7 columns): BILL_ACCOUNT_NBR int64 MM_ADJ_BILLING_YEARMO int64 BILLING_USAGE_QTY float64 BILLING_DAYS_CNT int64 TARIFF_RATE_TYP object READ_FROM object READ_TO object dtypes: float64(1), int64(3), object(3) memory usage: 2.6+ GB </code></pre> <p>Setting the first two columns as index (one includes time)</p> <pre><code>df_Usage['MM_ADJ_BILLING_YEARMO'] = pd.to_datetime(df_Usage['MM_ADJ_BILLING_YEARMO'], format='%Y%m') df_Usage.set_index(['BILL_ACCOUNT_NBR','MM_ADJ_BILLING_YEARMO'],inplace = True) df_Usage.info() </code></pre> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; MultiIndex: 49991484 entries, (5659128163, 2020-09-01 00:00:00) to (7150058108, 2020-01-01 00:00:00) Data columns (total 5 columns): BILLING_USAGE_QTY float64 BILLING_DAYS_CNT int64 TARIFF_RATE_TYP object READ_FROM object READ_TO object dtypes: float64(1), int64(1), object(3) memory usage: 2.1+ GB </code></pre> <p>20% reduction in memory</p>
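<p>On the lookup-speed side of the question, a rough way to compare the two layouts yourself (a sketch; exact numbers depend on the data):</p> <pre><code>import numpy as np
import pandas as pd

n = 1_000_000
ts = pd.date_range('2020-01-01', periods=n, freq='s')
df_col = pd.DataFrame({'time': ts, 'price': np.random.rand(n)})
df_idx = df_col.set_index('time').sort_index()

target = ts[500_000]
%timeit df_col[df_col['time'] == target]  # scans the whole column each time
%timeit df_idx.loc[target]                # lookup on a sorted DatetimeIndex
</code></pre> <p>The sorted index usually wins by a large margin for repeated point lookups, on top of the memory saving shown above.</p>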
python|pandas|indexing
1
9,660
52,776,332
Tensorflow-GPU without NVIDIA,possible?
<p>I don't have a NVIDIA graphics card, but I need to use tensorflow-gpu. Is this feasible? what should I do?</p>
<p>You can use Google's colab for free directly in your browser:</p> <blockquote> <p><a href="https://colab.research.google.com/notebooks/welcome.ipynb#recent=true" rel="nofollow noreferrer">https://colab.research.google.com/notebooks/welcome.ipynb#recent=true</a> <a href="https://colab.research.google.com/notebooks/gpu.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/notebooks/gpu.ipynb</a></p> </blockquote> <p>You can easily create a notebook to run tensorflow (which is already installed) from the File menu.</p> <p>To activate the GPU simply select "GPU" in the Accelerator drop-down in Notebook Settings (either through the Edit menu or the command palette at cmd/ctrl-shift-P).</p>
python|tensorflow|nvidia
2
9,661
58,530,051
How to relabel a Pandas Dataframe starting at 1 with RangeIndex when stop is not known?
<p>So I'm building software that takes large csv files of amino acid data and converts them to a pandas DataFrame. I need to relabel the columns 1-n. Is there a way to do this when the stop value is not known? I've tried the following: </p> <pre><code>df = pd.read_csv(file, encoding='utf-8', header=None) df.index = pd.RangeIndex(start=1, stop=3000, step=1) </code></pre> <p>I always end up with a length mismatch error:</p> <pre><code>ValueError: Length mismatch: Expected axis has 2902 elements, new values have 2999 elements </code></pre> <p>Surely there a simple way to anonymize the stop value so that different sequences can be loaded and relabeled properly? </p>
<p>You are way overthinking it. Since you index started at 0 and you now want to change it to 1-based, simply add 1 to the index:</p> <pre><code>df.index += 1 </code></pre>
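<p>If you prefer to keep the explicit <code>RangeIndex</code> form, the stop value just has to be derived from the frame's own length (a sketch):</p> <pre><code>df.index = pd.RangeIndex(start=1, stop=len(df) + 1, step=1)
</code></pre> <p>Both approaches give a 1-based index without knowing the row count in advance.</p>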
python|python-3.x|pandas|bioinformatics
0
9,662
58,470,330
Reassigning dataframe rows or variables with iloc returns NaN
<p>I have a "dataframe A" that is partially complete; some rows are missing data. The missing data can be found on rows in another "dataframe B" (with the same labels, but not position).</p> <p>When I try to reassign the rows of dataframe A, python returns NaN for all the rows I tried to reassign.</p> <p>I have tried: iloc iloc specifying the columns iloc after making column names match iloc on deep copy converting the target column into different dtypes e.g. string, float</p> <h1>First attempt, returns dfA with NaN in all columns of rows A1:A2</h1> <pre><code>dfA.iloc[A1:A2] = dfB.iloc[B1:B2] </code></pre> <h1>Second attempt, returns dfA with NaN in column 2 of rows A1:A2</h1> <pre><code>dfA.iloc[A1:A2, 2] = dfB.iloc[B1:B2, 2] </code></pre> <h1>Third attempt, same issue</h1> <pre><code>dfA_copy = copy.deepcopy(dfA) dfA_copy.iloc[A1:A2] </code></pre> <p><a href="https://i.stack.imgur.com/M0isW.png" rel="nofollow noreferrer">dfA_copy rows A1:A2</a></p> <pre><code>dfB_copy = copy.deepcopy(dfB) dfB_copy.iloc[B1:B2] </code></pre> <p><a href="https://i.stack.imgur.com/fDEWj.png" rel="nofollow noreferrer">dfB_copy rows B1:B2</a></p> <pre><code>dfA_copy.iloc[A1:A2] = dfB_copy.iloc[B1:B2] </code></pre> <p><a href="https://i.stack.imgur.com/R3vxn.png" rel="nofollow noreferrer">deep copy reassignment</a></p> <p>I expected the data from "dataframe B" to replace the missing data in "dataframe A".</p>
<p>If the two pandas DataFrames have the same shape (and aligned row/column labels), you can do something like this:</p> <pre><code>dfA[dfA.isnull()] = dfB </code></pre> <p>If this solves your question, please accept the answer.</p>
python|pandas|dataframe|variable-assignment
0
9,663
58,357,525
df.loc - ValueError: Buffer has wrong number of dimensions (expected 1, got 0)
<p>I currently have the following code - I am trying to get a matching row in one dataframe based on the <code>Last Name</code> column.</p> <pre class="lang-py prettyprint-override"><code>def rule(row): name = row['Last Name'] return rules.loc[rules['Last Name'] == name]['Type'] df['Type'] = df.apply(rule, axis=1) </code></pre> <p>When I run this I get an error, because of the <code>== name</code> in the <code>rule</code> method - how do I fix it?</p> <pre><code>ValueError: ('Buffer has wrong number of dimensions (expected 1, got 0)', 'occurred at index 0') </code></pre> <p>This is what <code>rules</code> looks like:</p> <pre><code> Last Name Type 0 Smith A 1 Doe B </code></pre> <p>and <code>df</code>:</p> <pre><code> Name First Name Last Name 0 John Smith John Smith 1 Jane Doe Jane Doe 2 John Doe John Doe </code></pre> <p>I want the final to look like:</p> <pre><code> Name First Name Last Name Type 0 John Smith John Smith A 1 Jane Doe Jane Doe B 2 John Doe John Doe B </code></pre> <p>EDIT: Added example <code>rules</code> and <code>df</code></p>
<pre><code>df1 = pd.DataFrame({'First Name': ['John', 'Jane','John'], 'Last Name': ['Smith','Doe','Doe']}) print(df1) rules = pd.DataFrame({'Last Name':['Smith', 'Doe'], 'Type': ['A','B']}) print(rules) </code></pre> <p>Output is:</p> <pre><code> First Name Last Name 0 John Smith 1 Jane Doe 2 John Doe Last Name Type 0 Smith A 1 Doe B </code></pre> <p>Then merge the two frames on the shared column:</p> <p><code>df1.merge(rules)</code></p> <p>The output is:</p> <pre><code> First Name Last Name Type 0 John Smith A 1 Jane Doe B 2 John Doe B </code></pre> <p>Is this the answer you were after?</p>
python|pandas
2
9,664
58,584,909
In Pandas frame make column entry that is a list break up its items so that they are listed vertical individually
<p>I've been working on this for hours trying to figure out how to make this json </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>{ "method": "STATIC", "type": "IEEE_8023AD", "dns_search_domains": [ "lab.local1", "lab.local2", "lab.local3" ], "dns_servers": [ "11.12.200.1", "11.12.200.2", "11.12.200.3" ] }</code></pre> </div> </div> </p> <p>convert to this output in pandas</p> <pre><code> method type dns_search_domains dns_servers 0 STATIC IEEE_8023AD lab.local1 11.12.200.1 lab.local2 11.12.200.2 lab.local3 11.12.200.3 </code></pre> <p>Instead the best I can come up with is this. I keep getting this list format under columns dns_search_domains and dns_servers.</p> <pre><code> method type dns_search_domains dns_servers 0 STATIC IEEE_8023AD [lab.local1, lab.local2, lab.local3] [11.12.200.1, 11.12.200.2, 11.12.200.3] </code></pre> <p>Looks like the list format is not allowing me to break the items up individually so they can output vertical under their column. </p> <p>Here is the basic code.</p> <pre><code>import json import pandas as pd from pandas.io.json import json_normalize with open ('network_conf_get.json') as f: d = json.load(f) data = json_normalize(d) pd.set_option('display.max_colwidth', 0) pd.set_option('display.max_columns', None) pd.set_option('expand_frame_repr', False) print(pd.DataFrame(data)) </code></pre>
<p>Thanks set_index did help. </p> <p>Unfortunately, if I decide to add another column of data with a list of a different length compared to columns to "dns_search_domains" and "dns_servers" then I get this error</p> <pre><code>ValueError: arrays must all be same length </code></pre> <p>At this point I've opted to just build out each column manually with pd.Series() and then concatenate them with pd.concat(). Unless anyone has any other suggestions thank you for the help friends!!</p>
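<p>For anyone else who lands here, a rough sketch of that manual route (the file and keys follow the JSON in the question; <code>pd.concat</code> pads the shorter columns with NaN, which avoids the "arrays must all be same length" error):</p> <pre><code>import json
import pandas as pd

with open('network_conf_get.json') as f:
    d = json.load(f)

# wrap scalars so every value becomes a Series, then let concat align them
cols = {k: pd.Series(v if isinstance(v, list) else [v]) for k, v in d.items()}
out = pd.concat(cols, axis=1)
print(out)
</code></pre>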
python|pandas
0
9,665
58,462,173
Training Accuracy increases, then drops sporadically and abruptly. Fix? [Keras] [TensorFlow backend]
<p>I'm doing binary classification.</p> <p>So while training my Model, the training accuracy is increasing, but in some epochs its drops abrupty. Below is an image to illustrate. what am i doing wrong? Why is this happening? What is the explanation? How can I fix this?</p> <p>Also, both the training accuracy and the validation accuracy (especially the validation accuracy) are close to 1 (100%) most of the time, pretty early in the epoch cycles. Why? Is this good or bad? I dont think so right?</p> <p>This is the Data: <a href="https://drive.google.com/open?id=1--1OoFHdOjb2ARyJ2dD80Zw4RkvCY0lK" rel="nofollow noreferrer">https://drive.google.com/open?id=1--1OoFHdOjb2ARyJ2dD80Zw4RkvCY0lK</a></p> <p>"Gewicht" is the output, which i have transformed in the code below to 1 and 0.</p> <p>The below Code is what I have tried:</p> <p>This is the code:</p> <pre><code># -*- coding: utf-8 -*- """ Created on Fri Oct 18 15:44:44 2019 @author: Shahbaz Shah Syed """ #Import the required Libraries from sklearn.metrics import confusion_matrix, precision_score from sklearn.model_selection import train_test_split from keras.layers import Dense,Dropout from keras.models import Sequential from keras.regularizers import l2 import matplotlib.pyplot as plt import pandas as pd import numpy as np ##EXTRACT THE DATA AND SPLITTING IN TRAINING AND TESTING----------------------- Input = 'DATA_Gewicht.xlsx' Tabelle = pd.read_excel(Input,names=['Plastzeit Z [s]','Massepolster [mm]', 'Zylind. Z11 [°C]','Entformen[s]', 'Nachdr Zeit [s]','APC+ Vol. [cm³]', 'Energie HptAntr [Wh]','Fläche WkzDr1 [bar*s]', 'Fläche Massedr [bar*s]', 'Fläche Spritzweg [mm*s]', 'Gewicht']) Gewicht = Tabelle['Gewicht'] #Toleranz festlegen toleranz = 0.5 #guter Bereich für Gewicht Gewicht_mittel = Gewicht.mean() Gewicht_abw = Gewicht.std() Gewicht_tol = Gewicht_abw*toleranz Gewicht_OG = Gewicht_mittel+Gewicht_tol Gewicht_UG = Gewicht_mittel-Gewicht_tol #Gewicht Werte in Gut und Schlecht zuordnen G = [] for element in Gewicht: if element &gt; Gewicht_OG or element &lt; Gewicht_UG: G.append(0) else: G.append(1) G = pd.DataFrame(G) G=G.rename(columns={0:'Gewicht_Value'}) Gewicht = pd.concat([Gewicht, G], axis=1) #extracting columns from sheets Gewicht_Value = Gewicht['Gewicht_Value'] x = Tabelle.drop(columns=['Gewicht']) y = Gewicht_Value #Split the train and test/validation set x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.10, random_state=0) x_train.shape,y_train.shape,x_test.shape,y_test.shape ##Creating a Neural Network---------------------------------------------------- #define and use a Sequential model model = Sequential() #Sequential model is a linear stack of layers #Hidden Layer-1/Input Layer model.add(Dense(200,activation='relu',input_dim=10,kernel_regularizer=l2(0.01))) #adding a layer model.add(Dropout(0.3, noise_shape=None, seed=None)) #Hidden Layer-2 model.add(Dense(200,activation = 'relu',kernel_regularizer=l2(0.01))) model.add(Dropout(0.3, noise_shape=None, seed=None)) #Output layer model.add(Dense(1,activation='sigmoid')) #Compile the Model model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy']) #Check the Model summary model.summary() ##TRAINING the Neural Network-------------------------------------------------- #Train the Model model_output = model.fit(x_train,y_train,epochs=500,batch_size=20,verbose=1,validation_data=(x_test,y_test),) print('Training Accuracy : ' , np.mean(model_output.history['accuracy'])) print('Validation Accuracy : ' , np.mean(model_output.history['val_accuracy'])) 
##CHECKING PREDICTION---------------------------------------------------------- #Do a Prediction and check the Precision y_pred = model.predict(x_test) rounded = [round(x[0]) for x in y_pred] y_pred1 = np.array(rounded,dtype='int64') confusion_matrix(y_test,y_pred1) precision_score(y_test,y_pred1) #Plot the model accuracy over epochs # Plot training &amp; validation accuracy values plt.plot(model_output.history['accuracy']) plt.plot(model_output.history['val_accuracy']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # Plot training &amp; validation loss values plt.plot(model_output.history['loss']) plt.plot(model_output.history['val_loss']) plt.title('model_output loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() </code></pre> <p>What I would like to see is the following image.</p> <p><a href="https://user-images.githubusercontent.com/55457221/67140808-a45e4580-f25e-11e9-89f7-1812a2d04e7d.png" rel="nofollow noreferrer">https://user-images.githubusercontent.com/55457221/67140808-a45e4580-f25e-11e9-89f7-1812a2d04e7d.png</a>)</p> <p><a href="https://user-images.githubusercontent.com/55457221/67140810-aaecbd00-f25e-11e9-9e76-ed737f11aee3.png" rel="nofollow noreferrer">https://user-images.githubusercontent.com/55457221/67140810-aaecbd00-f25e-11e9-9e76-ed737f11aee3.png</a>)</p> <p>Console/Log of the image which i would "like to see" (second 2 images):</p> <p>Epoch 500/500 691/691 [==============================] - 0s 271us/step - loss: 0.5075 - accuracy: 0.7496 - val_loss: 0.4810 - val_accuracy: 0.7792 Training Accuracy : 0.72937775 Validation Accuracy : 0.776207780957222</p> <hr> <p>The actual results:</p> <p><a href="https://user-images.githubusercontent.com/55457221/67140782-5d705000-f25e-11e9-9425-5cc624311e39.png" rel="nofollow noreferrer">https://user-images.githubusercontent.com/55457221/67140782-5d705000-f25e-11e9-9425-5cc624311e39.png</a></p> <p><a href="https://user-images.githubusercontent.com/55457221/67140795-7d077880-f25e-11e9-955e-bfacbe2a1a92.png" rel="nofollow noreferrer">https://user-images.githubusercontent.com/55457221/67140795-7d077880-f25e-11e9-955e-bfacbe2a1a92.png</a></p> <p>Console/Log of the image which i "think is wrong" (first 2 images):</p> <p>Epoch 500/500 774/774 [==============================] - 0s 506us/step - loss: 0.1957 - accuracy: 0.9109 - val_loss: 0.0726 - val_accuracy: 1.0000 Training Accuracy : 0.9189251 Validation Accuracy : 0.9792092989683151</p> <p>Hop you can help me. Thank you in advance guys.</p>
<p>You should shuffle and randomize your data prior to training.</p> <pre><code>def Randomizing(): df = pd.DataFrame({"D1":range(5), "D2":range(5)}) print(df) df2 = df.reindex(np.random.permutation(df.index)) print(df2) Randomizing() </code></pre> <p>This is sample code showing the idea; apply the same <code>reindex(np.random.permutation(...))</code> step to your dataframe <code>Tabelle</code> once you read your data, in order to shuffle it.</p> <p>Hope this helps</p>
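<p>An equivalent one-liner, if you prefer (a sketch; <code>frac=1</code> samples every row in random order and <code>reset_index</code> discards the old ordering):</p> <pre><code>Tabelle = Tabelle.sample(frac=1, random_state=0).reset_index(drop=True)
</code></pre> <p>Do this on <code>Tabelle</code> before splitting, as suggested above.</p>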
python|tensorflow|keras|neural-network|deep-learning
0
9,666
69,253,628
Plugging in pre-trained model on top of embeddings from another pre-trained model, how to make input dimensions work?
<p>I am experimenting with placing a pre-trained model (e.g. VGG, AlexNet, etc...) on top of the embeddings outputted from another model. I think the only unclear part for me is how would I go about making the input dimension work with that newly added pre-trained model? In more concrete terms:</p> <ol> <li>Grab the embeddings of images from pre-trained model 1</li> <li>Plug them into pre-trained model 2 to perform image classification</li> <li>Pre-trained model 2 requires RGB images of certain shape [3, x, x] while I only have the embeddings of shape[512].</li> </ol> <p>Is there any way to get this to work, such that I can input an already processed image embedding into another pre-trained model and successfully perform image classification?</p>
<p>If you already have extracted an embedding from your image, you should not be looking to use a CNN. A typical CNN architecture is comprised of a <em>feature extractor</em> (the convolutional layers) and a <em>classifier</em> (a fully connected layer). The purpose of the convolution part is to extract relevant information from the image while the latter section maps this information to accomplish the desired task (for instance classification task).</p> <p>In your case using a fully connected layer as <code>model 2</code> would make sense.</p>
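<p>A minimal sketch of such a classifier head in PyTorch (the dimensions are assumptions: 512-dim embeddings in, <code>num_classes</code> out; tune the hidden size to your data):</p> <pre><code>import torch
import torch.nn as nn

num_classes = 10  # placeholder

classifier = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, num_classes),
)

embedding = torch.randn(8, 512)  # a batch of 8 precomputed embeddings
logits = classifier(embedding)   # shape: (8, num_classes)
</code></pre> <p>Train it with a standard cross-entropy loss on your labels; the convolutional feature extractor of model 2 is simply not needed, since model 1 has already done that job.</p>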
machine-learning|deep-learning|neural-network|pytorch|computer-vision
0
9,667
68,967,837
Numpy - count of duplicate rows in 3D array
<p>I am looking to count the number of unique rows in a 3D NumPy array. Take the following array:</p> <pre><code>a = np.array([[[1, 2], [1, 2], [2, 3]], [[2, 3], [2, 3], [3, 4]], [[1, 2], [1, 2], [1, 2]]]) </code></pre> <p>My desired output is a 1-D array of the same length as axis 0 of the 3-D array. <code>array([2, 2, 1])</code>.</p> <p>In this example, the output would be 2, 2, 1 because in the first grouping [1, 2] and [2, 3] are the unique values, in the second grouping [2, 3] and [3, 4] are the unique values, and in the third grouping [1, 2] is the &quot;unique&quot; values. Perhaps I'm using unique incorrectly in this context but that is what I'm looking to calculate.</p> <p>The difficulty I'm having is that the count of unique rows will be different. If I use <code>np.unique</code>, the result is broadcast as shown below:</p> <pre><code>&gt;&gt;&gt; np.unique(a, axis=1) array([[[1, 2], [2, 3]], [[2, 3], [3, 4]], [[1, 2], [1, 2]]]) </code></pre> <p>I know I can loop over each a 2D array and use <code>np.apply_along_axis</code>, as described in <a href="https://stackoverflow.com/questions/48473056/number-of-unique-elements-per-row-in-a-numpy-array">this answer</a>.</p> <p>However, I am dealing with arrays as large as <code>(1 000 000, 256, 2)</code>, so I would prefer to avoid loops if this is possible.</p>
<p>Calling <code>np.unique</code> for each 2D plan appear to be extremely slow. Actually, it is <code>np.unique</code> which is slow and not really the pure Python loop.</p> <p>A better approach is to to that manually with <strong>Numba</strong> (using a <code>dict</code>). While this strategy is faster, it is not a silver-bullet. However, this implementation can be easily be <strong>parallelized</strong> to run significantly faster although <code>dict</code> accesses are not very fast. Here is the implementation:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import numba as nb @nb.njit('i4[::1](i4[:,:,::1])', parallel=True) def compute_unique_count(data): n,m,o = data.shape assert o == 2 res = np.empty(n, dtype=np.int32) for i in nb.prange(n): tmp = dict() for j in range(m): tmp[(data[i, j, 0], data[i, j, 1])] = True res[i] = len(tmp) return res </code></pre> <p>Since numbers are <em>small bounded integers</em>, there is an efficient method to make the computation much faster. Indeed, an <strong>associative table</strong> can be used to flag values already found where the index of the array is the key of the <code>dict</code> and the value of the array define either the key as been found so far. The array can be flatten for sake of performance using the expression <code>id = point[0] * maxi + point[1]</code> where <code>maxi</code> is the bound (all values are assumed to be strictly smaller than it). For sake of performance, branches are avoided and <code>maxi</code> should be a power of two typically &gt;= 64 (due to low-level cache effects like cache line conflicts, cache thrashing and false-sharing). The resulting implementation is very fast.</p> <pre class="lang-py prettyprint-override"><code>@nb.njit('int32[::1](int32[:,:,::1])', parallel=True) def compute_unique_count_fastest(data): n,m,o = data.shape assert o == 2 maxi = 64 threadCount = nb.get_num_threads() res = np.empty(n, dtype=np.int32) globalUniqueVals = np.zeros((threadCount, maxi * maxi), dtype=np.uint8) for i in nb.prange(n): threadId = nb.np.ufunc.parallel._get_thread_id() uniqueVals = globalUniqueVals[threadId] uniqueVals.fill(0) # Reset the associative table uniqueCount = 0 for j in range(m): idx = data[i, j, 0] * maxi + data[i, j, 1] uniqueCount += uniqueVals[idx] == 0 uniqueVals[idx] = 1 res[i] = uniqueCount return res </code></pre> <p>Here are the timings on my machine (i5-9600KF with 6 cores) with an array of size <code>(1_000_000, 256, 2)</code> containing random 32-bit integers from 0 to 40:</p> <pre class="lang-none prettyprint-override"><code>np.unique in a comprehension list: 78 000 ms compute_unique_count: 2 050 ms compute_unique_count_fastest: 57 ms </code></pre> <p>The last implementation is <strong>1370 times faster</strong> than the naive implementation.</p>
numpy
1
9,668
44,621,681
Tensorflow: TypeError: helper must be a Helper, received: <class 'helper.GreedyEmbeddingHelper'>
<p>Hello I am trying to create a BasicDecoder with a GreedyEmbeddingHelper but it is giving an error:</p> <pre><code>TypeError: helper must be a Helper, received: &lt;class 'helper.GreedyEmbeddingHelper'&gt; </code></pre> <p>Here is a simplified version of my code:</p> <pre><code> elif self.mode == 'decode': # Start_tokens: [batch_size,] `int32` vector start_tokens = tf.ones([self.batch_size, self.dimension], tf.float32) * 0.1337 end_token = 0.1337 def project_inputs(inputs): print inputs.shape return input_layer(inputs) if not self.use_beamsearch_decode: # Helper to feed inputs for greedy decoding: uses the argmax of the output decoding_helper = helper.GreedyEmbeddingHelper(start_tokens=start_tokens, end_token=end_token, embedding=project_inputs) # Basic decoder performs greedy decoding at each time step print("building greedy decoder..") inference_decoder = seq2seq.BasicDecoder(cell=self.decoder_cell, helper=decoding_helper, initial_state=self.decoder_initial_state, output_layer=output_layer) else: # Beamsearch is used to approximately find the most likely translation print("building beamsearch decoder..") inference_decoder = beam_search_decoder.BeamSearchDecoder(cell=self.decoder_cell, embedding=project_inputs, start_tokens=start_tokens, end_token=end_token, initial_state=self.decoder_initial_state, beam_width=self.beam_width, output_layer=output_layer,) </code></pre> <p>I don't know how to fix it because Helper is an abstract class. So it won't be possible.</p>
<p>GreedyEmbeddingHelper is defined at tf.contrib.seq2seq.GreedyEmbeddingHelper. So instead of <code>helper.GreedyEmbeddingHelper</code>, use <code>tf.contrib.seq2seq.GreedyEmbeddingHelper</code></p>
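<p>As a rough sketch, only the helper's import path changes and the rest of your decode branch stays as it is (this is untested against your full model, and it assumes <code>project_inputs</code>, <code>start_tokens</code> and <code>end_token</code> are the variables you already defined; it does not address the dtype of your start tokens):</p> <pre><code>decoding_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=project_inputs,
    start_tokens=start_tokens,
    end_token=end_token)

inference_decoder = seq2seq.BasicDecoder(
    cell=self.decoder_cell,
    helper=decoding_helper,
    initial_state=self.decoder_initial_state,
    output_layer=output_layer)
</code></pre>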
python|tensorflow|deep-learning|lstm
0
9,669
44,379,949
Pandas: Pivot to True/False, drop column
<p>I'm trying to create what I think is a simple pivot table but am having serious issues. There are two things I'm unable to do:</p> <ol> <li>Get rid of the "partner" column at the end.</li> <li>Set the values to either True or False if each company has that partner.</li> </ol> <p><strong>Setup:</strong></p> <pre><code>df = pd.DataFrame({'company':['a','b','c','b'], 'partner':['x','x','y','y'], 'str':['just','some','random','words']}) </code></pre> <p><strong>Desired Output:</strong></p> <pre><code>company x y a True False b True True c False True </code></pre> <p>I started with: </p> <pre><code>df = df.pivot(values = 'partner', columns = 'partner', index = 'company').reset_index() </code></pre> <p>which gets me close, but when I try to get rid of the "partner" column, I can't even reference it, and it's not the "index". </p> <p>For the second issue, I can use:</p> <pre><code>df.fillna(False, inplace = True) df.loc[~(df['x'] == False), 'x'] = True df.loc[~(df['y'] == False), 'y'] = True </code></pre> <p>but that seems incredibly hacky. Any help would be appreciated.</p>
<p><strong>Option 1</strong> </p> <pre><code>df.groupby(['company', 'partner']).size().unstack(fill_value=0).astype(bool) partner x y company a True False b True True c False True </code></pre> <p>Get rid of names on columns object</p> <pre><code>df.groupby(['company', 'partner']).size().unstack(fill_value=0).astype(bool) \ .rename_axis(None, 1).reset_index() company x y 0 a True False 1 b True True 2 c False True </code></pre> <p><strong>Option 2</strong> </p> <pre><code>pd.crosstab(df.company, df.partner).astype(bool) partner x y company a True False b True True c False True pd.crosstab(df.company, df.partner).astype(bool) \ .rename_axis(None, 1).reset_index() company x y 0 a True False 1 b True True 2 c False True </code></pre> <p><strong>Option 3</strong> </p> <pre><code>f1, u1 = pd.factorize(df.company.values) f2, u2 = pd.factorize(df.partner.values) n, m = u1.size, u2.size b = np.bincount(f1 * m + f2) pad = np.zeros(n * m - b.size, dtype=int) b = np.append(b, pad) v = b.reshape(n, m).astype(bool) pd.DataFrame(np.column_stack([u1, v]), columns=np.append('company', u2)) company x y 0 a True False 1 b True True 2 c False True </code></pre> <hr> <p><strong>Timing</strong><br> <em>small data</em> </p> <pre><code>%timeit df.groupby(['company', 'partner']).size().unstack(fill_value=0).astype(bool).rename_axis(None, 1).reset_index() %timeit pd.crosstab(df.company, df.partner).astype(bool).rename_axis(None, 1).reset_index() %%timeit f1, u1 = pd.factorize(df.company.values) f2, u2 = pd.factorize(df.partner.values) n, m = u1.size, u2.size b = np.bincount(f1 * m + f2) pad = np.zeros(n * m - b.size, dtype=int) b = np.append(b, pad) v = b.reshape(n, m).astype(bool) pd.DataFrame(np.column_stack([u1, v]), columns=np.append('company', u2)) 1000 loops, best of 3: 1.67 ms per loop 100 loops, best of 3: 5.97 ms per loop 1000 loops, best of 3: 301 µs per loop </code></pre>
python|pandas|pivot
9
9,670
60,800,389
slicing with iloc and negative integer in Pandas
<p>I have been following this Python linear regression tutorial: <a href="https://medium.com/@contactsunny/linear-regression-in-python-using-scikit-learn-f0f7b125a204" rel="nofollow noreferrer">https://medium.com/@contactsunny/linear-regression-in-python-using-scikit-learn-f0f7b125a204</a></p> <p>Using the following dataset: <a href="https://github.com/contactsunny/data-science-examples/blob/master/salaryData.csv" rel="nofollow noreferrer">https://github.com/contactsunny/data-science-examples/blob/master/salaryData.csv</a></p> <p>My problem is with the following piece of code: </p> <pre><code>x = dataset.iloc[:, :-1].values </code></pre> <p>What does the negation(-1) do here? Why do I get an error If I use the following as an alternate:</p> <pre><code>x = dataset.iloc[:, 0].values </code></pre>
<p>It means: get all columns except the last column:</p> <pre><code>df = pd.DataFrame(np.random.randint(0,100,(5,5)), index=[*'abcde'], columns=[*'ABCDE']) df.iloc[:,:-1] </code></pre> <p>Output:</p> <pre><code> A B C D a 79 23 9 89 b 67 60 32 82 c 66 18 41 67 d 90 51 63 29 e 34 65 82 82 </code></pre> <p>This statement gets all rows and slices the columns to filter out the last one. Also, your second statement does not raise an error; it is a valid statement:</p> <pre><code>df.iloc[:, 0] </code></pre> <p>Output:</p> <pre><code>a 79 b 67 c 66 d 90 e 34 Name: A, dtype: int32 </code></pre> <p>Get all rows of the first column (position 0).</p>
python|pandas|machine-learning
2
9,671
71,473,205
Jupyter Notebook stuck at loading when importing data with pymongo to a pandas dataframe
<p>Everytime I run it, the cell stays loading and it never finishes. Data is 2mbs so it should load really fast. Pymongo was installed with pymongo[srv]. My code:</p> <pre><code>import pandas as pd import pymongo import os pwd = os.getenv(&quot;mongodb_pwd&quot;) client = pymongo.MongoClient( f&quot;mongodb+srv://...:{pwd}@.../test?authSource=admin&amp;replicaSet=...&amp;readPreference=primary&amp;ssl=true&quot; ) db = client[&quot;general&quot;] data = db[&quot;orders&quot;].find() df = pd.DataFrame(list(data)) </code></pre> <p>any idea why this is happening? It used to work 2 weeks ago (under python 3.9, im using 3.10 now)</p> <p>Ran the file as .py, still stuck.</p> <p>After more than 5 minutes, the data was loaded. Any ideas why it is taking so long?</p>
<p>restarted the pc and everything worked fast again. No idea what happened.</p>
python|pandas|jupyter-notebook|pymongo
0
9,672
71,618,942
Monai : RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 7 but got size 8 for tensor number 1 in the list
<p>I am using <a href="https://github.com/Project-MONAI" rel="nofollow noreferrer">Monai</a> for the 3D Multilabel segmentation task. My input image size is 512x496x49 and my label size is 512x496x49. An Image can have 3 labels in one image. With transform, I have converted the image in size 1x512x512x49 and Label in 3x512x512x49</p> <h1>My Transform</h1> <pre class="lang-py prettyprint-override"><code># Setting tranform for train and test data a_min=6732 a_max=18732 train_transform = Compose( [ LoadImaged(keys=[&quot;image&quot;, &quot;label&quot;]), EnsureChannelFirstd(keys=&quot;image&quot;), ConvertToMultiChannelBasedOnBratsClassesd(keys=&quot;label&quot;), ScaleIntensityRanged(keys='image', a_min=a_min, a_max=a_max, b_min=0.0, b_max=1.0, clip=False), Orientationd(keys=[&quot;image&quot;, &quot;label&quot;], axcodes=&quot;RAS&quot;), # Spacingd(keys=[&quot;image&quot;, &quot;label&quot;], pixdim=( # 1.5, 1.5, 2.0), mode=(&quot;bilinear&quot;, &quot;nearest&quot;)), RandFlipd(keys=[&quot;image&quot;, &quot;label&quot;], prob=0.5, spatial_axis=0), RandFlipd(keys=[&quot;image&quot;, &quot;label&quot;], prob=0.5, spatial_axis=1), RandFlipd(keys=[&quot;image&quot;, &quot;label&quot;], prob=0.5, spatial_axis=2), CropForegroundd(keys=[&quot;image&quot;, &quot;label&quot;], source_key=&quot;image&quot;), NormalizeIntensityd(keys=&quot;image&quot;, nonzero=True, channel_wise=True), SpatialPadd(keys=['image', 'label'], spatial_size= [512, 512, 49]),# it will result in 512x512x49 EnsureTyped(keys=[&quot;image&quot;, &quot;label&quot;]), ] ) val_transform = Compose( [ LoadImaged(keys=[&quot;image&quot;, &quot;label&quot;]), EnsureChannelFirstd(keys=&quot;image&quot;), ConvertToMultiChannelBasedOnBratsClassesd(keys=&quot;label&quot;), ScaleIntensityRanged(keys='image', a_min=a_min, a_max=a_max, b_min=0.0, b_max=1.0, clip=False), Orientationd(keys=[&quot;image&quot;, &quot;label&quot;], axcodes=&quot;RAS&quot;), # Spacingd(keys=[&quot;image&quot;, &quot;label&quot;], pixdim=( # 1.5, 1.5, 2.0), mode=(&quot;bilinear&quot;, &quot;nearest&quot;)), CropForegroundd(keys=[&quot;image&quot;, &quot;label&quot;], source_key=&quot;image&quot;), NormalizeIntensityd(keys=&quot;image&quot;, nonzero=True, channel_wise=True), SpatialPadd(keys=['image', 'label'], spatial_size= [512, 512, 49]),# it will result in 512x512x49 EnsureTyped(keys=[&quot;image&quot;, &quot;label&quot;]), ] ) </code></pre> <h1>Dataloader for training and val</h1> <pre class="lang-py prettyprint-override"><code>train_ds = CacheDataset(data=train_files, transform=train_transform,cache_rate=1.0, num_workers=4) train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=4,collate_fn=pad_list_data_collate) val_ds = CacheDataset(data=val_files, transform=val_transform, cache_rate=1.0, num_workers=4) val_loader = DataLoader(val_ds, batch_size=1, num_workers=4) </code></pre> <h1>3D U-Net Network from Monai</h1> <pre class="lang-py prettyprint-override"><code># standard PyTorch program style: create UNet, DiceLoss and Adam optimizer device = torch.device(&quot;cuda:0&quot;) model = UNet( spatial_dims=3, in_channels=1, out_channels=4, channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2, norm=Norm.BATCH, ).to(device) loss_function = DiceLoss(to_onehot_y=True, sigmoid=True) optimizer = torch.optim.Adam(model.parameters(), 1e-4) dice_metric = DiceMetric(include_background=True, reduction=&quot;mean&quot;) </code></pre> <h1>Training</h1> <pre class="lang-py prettyprint-override"><code>max_epochs = 5 val_interval = 2 
best_metric = -1 best_metric_epoch = -1 epoch_loss_values = [] metric_values = [] post_pred = Compose([EnsureType(), AsDiscrete(argmax=True, to_onehot=4)]) post_label = Compose([EnsureType(), AsDiscrete(to_onehot=4)]) for epoch in range(max_epochs): print(&quot;-&quot; * 10) print(f&quot;epoch {epoch + 1}/{max_epochs}&quot;) model.train() epoch_loss = 0 step = 0 for batch_data in train_loader: step += 1 inputs, labels = ( batch_data[&quot;image&quot;].to(device), batch_data[&quot;label&quot;].to(device), ) optimizer.zero_grad() print(&quot;Size of inputs :&quot;, inputs.shape) print(&quot;Size of inputs[0] :&quot;, inputs[0].shape) # print(&quot;Size of inputs[1] :&quot;, inputs[1].shape) # print(&quot;printing of inputs :&quot;, inputs) outputs = model(inputs) loss = loss_function(outputs, labels) loss.backward() optimizer.step() epoch_loss += loss.item() print( f&quot;{step}/{len(train_ds) // train_loader.batch_size}, &quot; f&quot;train_loss: {loss.item():.4f}&quot;) epoch_loss /= step epoch_loss_values.append(epoch_loss) print(f&quot;epoch {epoch + 1} average loss: {epoch_loss:.4f}&quot;) if (epoch + 1) % val_interval == 0: model.eval() with torch.no_grad(): for val_data in val_loader: val_inputs, val_labels = ( val_data[&quot;image&quot;].to(device), val_data[&quot;label&quot;].to(device), ) roi_size = (160, 160, 160) sw_batch_size = 4 val_outputs = sliding_window_inference( val_inputs, roi_size, sw_batch_size, model) val_outputs = [post_pred(i) for i in decollate_batch(val_outputs)] val_labels = [post_label(i) for i in decollate_batch(val_labels)] # compute metric for current iteration dice_metric(y_pred=val_outputs, y=val_labels) # aggregate the final mean dice result metric = dice_metric.aggregate().item() # reset the status for next validation round dice_metric.reset() metric_values.append(metric) if metric &gt; best_metric: best_metric = metric best_metric_epoch = epoch + 1 torch.save(model.state_dict(), os.path.join( root_dir, &quot;best_metric_model.pth&quot;)) print(&quot;saved new best metric model&quot;) print( f&quot;current epoch: {epoch + 1} current mean dice: {metric:.4f}&quot; f&quot;\nbest mean dice: {best_metric:.4f} &quot; f&quot;at epoch: {best_metric_epoch}&quot; ) </code></pre> <h1>While training I am getting this error</h1> <p><code>RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 7 but got size 8 for tensor number 1 in the list.</code> <img src="https://i.stack.imgur.com/xQNdC.png" alt="enter image description here" /></p> <p>I followed the <a href="https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb" rel="nofollow noreferrer">3D Segmentation Monai tutorial</a> but this was only for 2 classes (including background) therefore I followed the discussion at <a href="https://github.com/Project-MONAI/MONAI/issues/415" rel="nofollow noreferrer">https://github.com/Project-MONAI/MONAI/issues/415</a> but even though I changed what was recommended in this discussion still am getting errors while training.</p>
<p>Your images have a depth of 49, but due to the 4 downsampling steps, each with stride 2, your images need to be divisible by a factor of 2**4=16. Adding in <code>DivisiblePadd([&quot;image&quot;, &quot;label&quot;], 16)</code> should solve it.</p>
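<p>A minimal sketch of where the pad could go in your existing <code>Compose</code> pipelines (same idea for <code>val_transform</code>; all the other transforms stay unchanged):</p> <pre><code>from monai.transforms import DivisiblePadd

train_transform = Compose(
    [
        # ... your existing transforms ...
        SpatialPadd(keys=[&quot;image&quot;, &quot;label&quot;], spatial_size=[512, 512, 49]),
        # pad each spatial dim up to a multiple of 16 (here the depth 49 -&gt; 64)
        DivisiblePadd(keys=[&quot;image&quot;, &quot;label&quot;], k=16),
        EnsureTyped(keys=[&quot;image&quot;, &quot;label&quot;]),
    ]
)
</code></pre>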
pytorch|image-segmentation|multilabel-classification|medical-imaging
0
9,673
69,921,335
Cumulative difference of numbers starting from an initial value
<p>I have a Pandas dataframe containing a series of numbers:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'deduction':[10,60,70,50,60,10,10,60,60,20,50,20,10,90,60,70,30,50,40,60]}) deduction 0 10 1 60 2 70 3 50 4 60 5 10 6 10 7 60 8 60 9 20 10 50 11 20 12 10 13 90 14 60 15 70 16 30 17 50 18 40 19 60 </code></pre> <p>I would like to compute the cumulative difference of these numbers, starting from a larger number (i.e. <code>&lt;base_number&gt; - 10 - 60 - 70 - 50 - ...</code>).</p> <p>My current solution is to negate all the numbers, prepend the (positive) larger number to the dataframe, and then call <code>cumsum()</code>:</p> <pre class="lang-py prettyprint-override"><code># Compact: (-df['deduction'][::-1]).append(pd.Series([start_value], index=[-1]))[::-1].cumsum().reset_index(drop=True) # Expanded: total_series = ( # Negate (-df['deduction'] # Reverse [::-1]) # Add the base value to the end .append(pd.Series([start_value])) # Reverse again (to put the base value at the beginning) [::-1] # Calculate cumulative sum (all the values except the first are negative, so this will work) .cumsum() # Clean up .reset_index(drop=True) ) </code></pre> <p>But I was wondering if there were possible a shorter solution, that didn't append to the series (I hear that that's bad practice).</p> <p>(It doesn't need to be put in a dataframe; a series, like I've done above, will be alright.)</p>
<pre><code>df['total'] = start_value - df[&quot;deduction&quot;].cumsum() </code></pre> <p>If you need the start value at the beginning of the series then shift and insert (there's a few ways to do it, and this is one of them):</p> <pre><code>df['total'] = -df[&quot;deduction&quot;].shift(1, fill_value=-start_value).cumsum() </code></pre>
python|pandas|dataframe
2
9,674
72,279,786
pandas can't read my list of numeric values
<p>I am having an issue with pandas reading my data file. The relevant code I'm using at the moment is this:</p> <pre><code>import pandas as pd file_data = pd.read_csv('14_May_2022.csv') complete_df = pd.DataFrame(file_data) print(complete_df.info()) Current_Players_df = complete_df[[&quot;Game&quot;,&quot;Current_Players&quot;]] print(Current_Players_df) print(&quot;The average current players is:\n&quot;) print(Current_Players_df[&quot;Current_Players&quot;].mean())` </code></pre> <p>when I try to run this all is fine until the last line of code where I get a lengthy error code beginning with:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\christy\AppData\Roaming\Python\Python37\site-packages\pandas\core\nanops.py&quot;, line 1603, in _ensure_numeric x = float(x) ValueError: could not convert string to float: </code></pre> <p>and then going on to list every value (which are all numbers)</p> <hr /> <p>I have tried two possible solutions but neither work. These are the ones I've tried:</p> <pre><code>complete_df['Current_Players']= pd.to_numeric(complete_df['Current_Players'].astype(str).str.strip(),error='coerce') </code></pre> <p>The error message shows:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\christy\OneDrive\Documents\steam data analysis\analysis.py&quot;, line 6, in &lt;module&gt; complete_df['Current_Players']= pd.to_numeric(complete_df['Current_Players'].astype(str).str.strip(),error='coerce') TypeError: to_numeric() got an unexpected keyword argument 'error' </code></pre> <pre><code>str.strip(complete_df['Current_Players']) </code></pre> <p>The error message shows:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\christy\OneDrive\Documents\steam data analysis\analysis.py&quot;, line 7, in &lt;module&gt; str.strip(complete_df['Current_Players']) TypeError: descriptor 'strip' requires a 'str' object but received a 'Series' </code></pre>
<p>First off, it may be hard for us to help without having your data.</p> <p>That said, in your <code>pd.to_numeric()</code> call, the keyword is <code>errors</code> with an 's', you just have error. Fixing that might get you somewhere.</p> <p>Also, why do you have your second line calling <code>pd.DataFrame(file_data)</code>? The first line already created a dataframe - did you find that things weren't working without this? I expect that that line is making no change - or perhaps even making a change for the worse.</p>
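<p>For example, with the keyword spelled <code>errors</code> (untested against your CSV, since we can't see the data):</p> <pre><code>complete_df['Current_Players'] = pd.to_numeric(
    complete_df['Current_Players'].astype(str).str.strip(), errors='coerce')

print(complete_df['Current_Players'].mean())
</code></pre>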
python|pandas
1
9,675
50,576,732
Different functions for calculating phase/argument of complex numbers
<p>Are there any differences between the </p> <pre><code>cmath.phase() </code></pre> <p>function from the <code>cmath</code> module, and the</p> <pre><code>np.angle() </code></pre> <p>function from <code>numpy</code>.</p>
<p>Mathematically, there is no difference between these two functions. Both compute the phase or argument of a complex number as:</p> <pre><code>arg = arctan2(zimag, zreal) </code></pre> <p>See documentation for <a href="https://docs.python.org/2/library/cmath.html#cmath.phase" rel="nofollow noreferrer"><code>cmath.phase</code></a> and source code for <a href="https://github.com/numpy/numpy/blob/6a58e25703cbecb6786faa09a04ae2ec8221348b/numpy/lib/function_base.py#L2136" rel="nofollow noreferrer"><code>numpy.angle</code></a>. From software point of view, as @Julien mentioned in <a href="https://stackoverflow.com/questions/50576732/different-functions-for-calculating-phase-argument-of-complex-numbers#comment88162927_50576732">his comment</a>, <code>cmath.phase()</code> will not work on <code>numpy.ndarray</code>.</p>
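<p>A quick sanity check with an arbitrary value:</p> <pre><code>import cmath
import numpy as np

z = 3 + 4j
print(cmath.phase(z))   # 0.9272952180016122
print(np.angle(z))      # 0.9272952180016122

# np.angle also works element-wise on arrays, which cmath.phase does not:
print(np.angle(np.array([z, 1j])))  # [0.92729522 1.57079633]
</code></pre>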
python|numpy|phase|cmath
4
9,676
50,326,076
how to calculate how many times is changed in the column
<p>How can I calculate, in the simplest way, how many value changes there are in a specific DataFrame column? For example, I have the following DataFrame:</p> <pre><code> a b 0 1 1 1 2 1 3 2 4 1 5 2 6 2 7 3 8 3 9 3 </code></pre> <p>In this DataFrame the values in the column <code>b</code> have changed 4 times (in rows 4, 5, 6 and 8).</p> <p>My very simple solution is:</p> <pre><code>a = 0 for i in range(df.shape[0] - 1): if df['b'].iloc[i] != df['b'].iloc[i+1]: a+=1 </code></pre>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with the <code>index</code>:</p> <pre><code>idx = df.index[df['b'].diff().shift().fillna(0).ne(0)] print (idx) Int64Index([4, 5, 6, 8], dtype='int64') </code></pre> <p>For a more general solution, it is possible to index by <code>arange</code>:</p> <pre><code>a = np.arange(len(df))[df['b'].diff().shift().bfill().ne(0)].tolist() print (a) [4, 5, 6, 8] </code></pre> <p><strong>Explanation</strong>:</p> <p>First, get the differences with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>Series.diff</code></a>:</p> <pre><code>print (df['b'].diff()) 0 NaN 1 0.0 2 0.0 3 1.0 4 -1.0 5 1.0 6 0.0 7 1.0 8 0.0 9 0.0 Name: b, dtype: float64 </code></pre> <p>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a> by one value:</p> <pre><code>print (df['b'].diff().shift()) 0 NaN 1 NaN 2 0.0 3 0.0 4 1.0 5 -1.0 6 1.0 7 0.0 8 1.0 9 0.0 Name: b, dtype: float64 </code></pre> <p>Replace the leading <code>NaN</code>s with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a>:</p> <pre><code>print (df['b'].diff().shift().fillna(0)) 0 0.0 1 0.0 2 0.0 3 0.0 4 1.0 5 -1.0 6 1.0 7 0.0 8 1.0 9 0.0 Name: b, dtype: float64 </code></pre> <p>And compare for not equal to <code>0</code>:</p> <pre><code>print (df['b'].diff().shift().fillna(0).ne(0)) 0 False 1 False 2 False 3 False 4 True 5 True 6 True 7 False 8 True 9 False Name: b, dtype: bool </code></pre>
pandas
1
9,677
50,620,755
How to create an array/dataframe populated by 1 or 0 based on whether time is within a range?
<p>Basically, I have a <em>dataframe</em> which has 2 columns, both of which are hours:</p> <pre><code> 0 1 +-----+----+ 0| 11 | 12 | +-----+----+ 1| 3 | 4 | +-----+----+ 2| 11 | 12 | +-----+----+ 3| 6 | 7 | +-----+----+ 4| 16 | 16 | etc... </code></pre> <p>This has a few thousand rows. I want to make another dataframe which has column headers '1' to '24' (based on the hours of a 24 hour period) and for each row of the dataframe above displays 1 if the hour time is within that range (inclusive) and 0 if it is outside.</p> <p>So for example the second row of the above dataframe would be something like:</p> <pre><code>1 2 3 4 5 6 7 8 ......24 0 0 1 1 0 0 0 0 ......0 </code></pre> <p>And I want to do the same for each row of the first dataframe and append to the new 24 hour data frame.</p> <p>Hopefully this makes sense and someone can help! Happy to walk through further if it doesn't make sense! Also am new to posting on here so not sure exactly how to get the data to paste over in a sensible way.</p>
<p>You can compare and multiply the values by creating a dataframe i.e </p> <pre><code>temp = pd.DataFrame([np.arange(1,25)],index = df.index,) begin = (temp.values&gt;=df['0'].values[:,None]).astype(int) end = (temp.values&lt;=df['1'].values[:,None]).astype(int) pd.DataFrame(begin*end,columns=np.arange(1,25)) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 </code></pre>
python|pandas|loops|time
1
9,678
62,670,041
batch_size in tf model.fit() vs. batch_size in tf.data.Dataset
<p>I have a large dataset that can fit in host memory. However, when I use tf.keras to train the model, it yields GPU out-of-memory problem. Then I look into tf.data.Dataset and want to use its batch() method to batch the training dataset so that it can execute the model.fit() in GPU. According to its documentation, an example is as follows:</p> <pre><code>train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels)) test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels)) BATCH_SIZE = 64 SHUFFLE_BUFFER_SIZE = 100 train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE) test_dataset = test_dataset.batch(BATCH_SIZE) </code></pre> <p>Is the BATCH_SIZE in dataset.from_tensor_slices().batch() the same as the batch_size in the tf.keras modelt.fit()?</p> <p>How should I choose BATCH_SIZE so that GPU has sufficient data to run efficiently and yet its memory is not overflown?</p>
<p>You do not need to pass the <code>batch_size</code> parameter to <code>model.fit()</code> in this case. It will automatically use the BATCH_SIZE that you use in <code>tf.data.Dataset().batch()</code>.</p> <p>As for your other question: the batch size hyperparameter indeed needs to be carefully tuned. If you see OOM errors, you should decrease it until you no longer get OOM (normally, but not necessarily, in this manner: 32 --&gt; 16 --&gt; 8 ...). In fact, you can also try non-power-of-two batch sizes when decreasing.</p> <p>In your case I would start with a batch_size of 2 and increase it gradually (<code>3-4-5-6...</code>).</p> <p>You do not need to provide the <code>batch_size</code> parameter if you use the <code>tf.data.Dataset().batch()</code> method.</p> <p>In fact, even the official <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="nofollow noreferrer">documentation</a> states this:</p> <blockquote> <p>batch_size : Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).</p> </blockquote>
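<p>For illustration, a minimal sketch of the pattern described above (the model and dataset names are just placeholders from your snippet):</p> <pre><code>BATCH_SIZE = 8  # lower this if you still hit OOM, raise it if memory allows

train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)

# note: no batch_size argument here - the datasets are already batched
model.fit(train_dataset, validation_data=test_dataset, epochs=10)
</code></pre>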
tensorflow|tensorflow2.0|tensorflow2.x
5
9,679
62,749,100
JSON to Pandas dataframe conversion
<p>I have trouble with conversion of JSON formatted data to Pandas dataframe. My JSON data looks like this and is stored in the variable active_assets -</p> <pre><code>Asset({ 'class': 'us_equity', 'easy_to_borrow': True, 'exchange': 'NYSE', 'id': '879ac630-107f-43ce-a01d-1fd9da89453c', 'marginable': True, 'name': 'Zymeworks Inc.', 'shortable': True, 'status': 'active', 'symbol': 'ZYME', 'tradable': True}), Asset({ 'class': 'us_equity', 'easy_to_borrow': False, 'exchange': 'NASDAQ', 'id': 'a838c0cf-0008-432e-8882-feee5a6ef7cd', 'marginable': True, 'name': 'Zynerba Pharmaceuticals, Inc. Common Stock', 'shortable': False, 'status': 'active', 'symbol': 'ZYNE', 'tradable': True}), Asset({ 'class': 'us_equity', 'easy_to_borrow': True, 'exchange': 'NASDAQ', 'id': '52eed246-61b0-4e82-95a9-1d23906b752e', 'marginable': True, 'name': 'Zynex, Inc. Common Stock', 'shortable': True, 'status': 'active', 'symbol': 'ZYXI', 'tradable': True}) active_assets = api.list_assets(status='active',asset_class='us_equity') df_dict = [{'SYMBL':i['symbol'], 'NAME':i['name'],'Shortable':i['shortable']} for i in active_assets['Asset']] extracted_df = pd.DataFrame(df_dict) </code></pre> <p>When i run this code I get the following error? Any help is appreciated.</p> <pre><code> File &quot;D:\Trading-Scripts\Scan-alpaca.py&quot;, line 32, in &lt;module&gt; df_dict = [{'SYMBL':i['symbol'], 'NAME':i['name'],'Shortable':i['shortable']} for i in active_assets['Asset']] TypeError: list indices must be integers or slices, not str </code></pre>
<p>The error comes from the indexing in your dictionary comprehension: <code>active_assets</code> is a list, so <code>active_assets['Asset']</code> tries to index a list with a string. Iterate over <code>active_assets</code> directly so that <code>i</code> is a single asset object.</p> <p>You can also try to use the read_json method from pandas.</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html</a></p>
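<p>As a rough sketch (assuming each element of <code>active_assets</code> supports the same key lookups your comprehension already uses; if the elements turn out to be objects rather than dicts, switch to attribute access such as <code>i.symbol</code>):</p> <pre><code>df_dict = [{'SYMBL': i['symbol'], 'NAME': i['name'], 'Shortable': i['shortable']}
           for i in active_assets]
extracted_df = pd.DataFrame(df_dict)
</code></pre>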
python|json|pandas|dictionary
0
9,680
62,702,703
Descriptor 'date' requires a 'datetime.datetime' object but received a 'Series' (Python)
<p>So, I have the following DataFrame:</p> <p><a href="https://i.stack.imgur.com/ITbRK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ITbRK.png" alt="enter image description here" /></a></p> <p>I would like to convert those datetime values to date, so I tried:</p> <pre><code>df['Notification Date']= datetime.date(df['Notification Date']) </code></pre> <p>What am I doing wrong?</p>
<p>You should use <code>pd.to_datetime</code> for pandas series:</p> <pre><code>df['Notification Date'] = pd.to_datetime(df['Notification Date']) </code></pre>
python|pandas|datetime
2
9,681
62,652,886
How to count unique combinations of values in selected columns in pandas data frame including frequencies with the value of 0?
<p>In my dataframe (assume it is called df), I have two columns: one labeled colour and one labeled TOY_ID. Using <code>df.groupby(['Colour', 'TOY_ID']).size()</code> I was able to generate a third column which is unnamed that represents the frequency of the number of times that the other two columns' values have appeared in my df. The output example is shown below:</p> <pre><code>Colour TOY_ID Blue 31490.0 50 31569.0 50 50360636.0 20 .. Yellow 50360636.0 25 50366678.0 9 .. Green 31490.0 17 50366678.0 10 </code></pre> <p>Although this method is working, it does not show the combinations where the first two columns have values of 0. I know this can be done in R but I am unsure how can I do this in Python. The example of my desired output is below. Any suggestions?</p> <pre><code>Colour TOY_ID Blue 31490.0 50 31569.0 50 50360636.0 20 50366678.0 0 .. Yellow 31490.0 0 31569.0 0 50360636.0 25 50366678.0 9 .. Green 31490.0 17 31569.0 0 50360636.0 0 50366678.0 10 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p> <pre><code>s = df.groupby(['Colour', 'TOY_ID']).size() s = s.reindex(pd.MultiIndex.from_product(s.index.levels), fill_value=0) print (s) Colour TOY_ID Blue 31490.0 50 31569.0 50 50360636.0 20 50366678.0 0 Green 31490.0 17 31569.0 0 50360636.0 0 50366678.0 10 Yellow 31490.0 0 31569.0 0 50360636.0 25 50366678.0 9 Name: a, dtype: int64 </code></pre>
python|pandas
1
9,682
62,583,096
Plotting filtered rows
<p>I have this dataset (it is just a sample):</p> <pre><code>Date Name Surname Text 2020/03/20 Joe Smith Include details 2020/03/20 Michael Jordan Describe what you've tried 2020/03/21 Bill Gates Preserve colouring and details 2020/03/24 Bill Gates Preserve colouring ... </code></pre> <p>I extracted specific words from text as follows:</p> <pre><code>def extr(txt): return(df.loc[df['Text'].str.contains(txt, flags=re.IGNORECASE), 'Name'].tolist()) </code></pre> <p>So if I have txt='details' I get the following:</p> <pre><code>extr('details) </code></pre> <p>output</p> <pre><code>['Joe','Bill'] </code></pre> <p>After selecting them, I would like to plot Joe and Bill by date, i.e.</p> <pre><code> 2020/03/20 Joe Smith Include details 2020/03/21 Bill Gates Preserve colouring and details </code></pre> <p>I would like to have a scatter plot with on the x-axis the date (sorted of course) and on the y-axis Name.</p> <p>Since the other 'Bill Gates' does not include details, I am not interested in it.</p> <p>How can I get this information?</p>
<p>You should extract the associated dates along with the names; then you can do something like this:</p> <pre><code>(df.loc[df['Text'].str.contains('details', flags=re.IGNORECASE)] .plot.scatter('Date','Name') ) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/H7uVj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H7uVj.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib
1
9,683
62,857,991
Advice on how to shape data for lstm
<p>I have a time series of 933 matrices, each matrix is a 8x10 matrix. This is my X (input). So X has shape (933, 8, 10). The Y (output) is a time series of 933 vectors, each vector is a 5-dimensional vector. So Y has shape (933, 1, 5).</p> <p>I can also reshape the data (should I?) such as X is (933, 80) and Y is (933, 5) because really in the end it is just 933 samples of a set of 80 numbers for input (imagine 80 pixels in a matrix) and 933 samples of a set of 5 numbers for output.</p> <p>I am writing a CNN-LSTM. I still don't know the size of train/test, let's assume for now that I want to use all 933 samples for training. My model is:</p> <pre><code>model = Sequential() # define CNN model model.add(TimeDistributed(Conv2D(1, (2,2), activation='relu', padding='same', input_shape=(None,8,10)))) model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2)))) model.add(TimeDistributed(Flatten())) # define LSTM model model.add(LSTM(933, activation='relu', input_shape=(8,10))) model.add(Dense(5)) model.compile(optimizer='adam', loss='mse') model.fit(X, Y) </code></pre> <p>I get the following error: ValueError: input tensor must have rank 4.</p> <p>My question is why I am getting this error and how could I solve this?</p>
<p>For LSTM, the input must be <code>3D</code> that is <code>(samples, time steps, features)</code>.<br /> In your case, you need to reshape your data to <code>(933,8,10)</code></p> <p>In the LSTM layer, the argument <code>input_shape</code> takes a tuple of two values that defines the number of <code>time steps</code> and <code>features</code>. In your case, it would be <code>(8,10)</code></p> <p>And the number of units for an LSTM layer you can mention any, usually it will be 32,64 or 128 range. Having too many units will not help the model to learn.</p> <p>Below is the sample LSTM layer for your data.</p> <pre><code>data.shape= (933,8,10) model = Sequential() model.add(LSTM(32, input_shape=(8, 10))) model.add(Dense(5)) model.compile(optimizer='adam', loss='mse') model.fit(X, Y) </code></pre>
python|tensorflow|keras|regression|lstm
1
9,684
62,700,540
My script is only executed to the first element on my dataframe Python
<p>I need your help on a loop here:</p> <p>I have a dataframe df that looks like this:</p> <pre><code> OC OZ ON WT DC DZ DN 0 PL 97 TP 59 DE 63 DC 1 US 61 SU 95 US 95 SU 2 SA 32 FS 57 DQ 09 PO 3 QS 54 FS 13 HR 78 LK 4 DQ 76 DS 65 SQ 94 PO </code></pre> <p>I have some operations that works for the first row and would like to automate it to the rest of the dataframe.</p> <p>*** Expected Output***</p> <pre><code> OC OZ ON WT DC DZ DN VALUE 0 PL 97 TP 59 DE 63 DC 1800 1 US 61 SU 95 US 95 SU 9819 2 SA 32 FS 57 DQ 09 PO 8721 3 QS 54 FS 13 HR 78 LK 6721 4 DQ 76 DS 65 SQ 94 PO 3432 </code></pre> <p>This whole works for the first row, but cannot be executed for the whole dataframe:</p> <pre><code>dic = {} dic['section'] = [] for ix, row in df_road.iterrows(): in_dict1 = {'location': { 'zipCode': {'country': row['OC'], 'code': row['OZ']}, 'location': {'id': '1'}, 'longName': row['ON'], }, 'carriageParameter': {'road': {'truckLoad': 'Auto'} }, 'load': {'weight': str(row['WT']), 'unit': 'ton', 'showEmissionsAtResponse': 'true' } } in_dict2 = {'location': { 'zipCode': {'country': row['DC'], 'code': row['DZ']}, 'location': {'id': '2'}, 'longName': row['DN'] }, 'carriageParameter': {'road': {'truckLoad': 'Auto'} }, 'unload': { 'weight': str(row['WT']), 'unit': 'ton', 'showEmissionsAtResponse': 'true' } } dic['section'].append(in_dict1) dic['section'].append(in_dict2) request_data=dict({'section':'', 'customer':'XXXX', 'password':'XXXX', 'showRoute':'true'}) section_data = dict(dic) request_data.update(section_data) print(request_data) result = client.service.calculateDistribution(**request_data) result = serialize_object(result.result) df = pd.json_normalize(result) </code></pre>
<p>I could solve the issue.</p> <p>The problem was that I had to instantiate <code>dic = {}</code> and <code>dic['section'] = []</code> inside the loop.</p> <p>Below is the final code that works.</p> <pre><code>for ix, row in df_road.iterrows(): dic = {} dic['section'] = [] in_dict1 = {'location': { 'zipCode': {'country': row['OC'], 'code': row['OZ']}, 'location': {'id': '1'}, 'longName': row['ON'], }, 'carriageParameter': {'road': {'truckLoad': 'Auto'} }, 'load': {'weight': str(row['WT']), 'unit': 'ton', 'showEmissionsAtResponse': 'true' } } in_dict2 = {'location': { 'zipCode': {'country': row['DC'], 'code': row['DZ']}, 'location': {'id': '2'}, 'longName': row['DN'] }, 'carriageParameter': {'road': {'truckLoad': 'Auto'} }, 'unload': { 'weight': str(row['WT']), 'unit': 'ton', 'showEmissionsAtResponse': 'true' } } dic['section'].append(in_dict1) dic['section'].append(in_dict2) request_data=dict({'section':'', 'customer':'XXXX', 'password':'XXXX', 'showRoute':'true'}) section_data = dict(dic) request_data.update(section_data) print(request_data) result = client.service.calculateDistribution(**request_data) result = serialize_object(result.result) df = pd.json_normalize(result) </code></pre>
python|python-3.x|pandas
1
9,685
62,508,022
Extract range-start and range-end records from a dataframe
<p>I'd like to calculate the time periods for which <code>Value</code> is in range (41 - 46) and remain at the same value for <code>df</code> below. <code>Value</code> should only update when there is a change, otherwise remains constant.</p> <pre><code> Id Timestamp Value 34213951 34214809 2012-05-01 08:33:47.127 41.5 34214252 34215110 2012-05-01 08:39:06.270 41.5 34214423 34215281 2012-05-01 08:41:56.240 40.5 34214602 34215460 2012-05-01 08:44:55.777 39.5 34214873 34215731 2012-05-01 08:49:25.600 38.5 34215071 34215929 2012-05-01 08:53:04.593 37.5 34215342 34216200 2012-05-01 08:56:47.257 36.5 34216007 34216865 2012-05-01 09:07:24.370 34.5 34216443 34217301 2012-05-01 09:14:46.120 33.5 34216884 34217742 2012-05-01 09:22:51.907 32.5 34217190 34218048 2012-05-01 09:29:00.023 31.5 34217803 34218661 2012-05-01 09:40:08.483 30.5 34218381 34219239 2012-05-01 09:50:20.440 30.5 34218382 34219240 2012-05-01 09:50:22.317 32.5 34218388 34219246 2012-05-01 09:50:26.067 37.5 34218389 34219247 2012-05-01 09:50:27.940 39.0 34218392 34219250 2012-05-01 09:50:29.817 39.5 34218393 34219251 2012-05-01 09:50:31.690 40.5 34218396 34219254 2012-05-01 09:50:35.440 41.0 34218789 34219647 2012-05-01 09:56:55.327 41.0 34218990 34219848 2012-05-01 10:00:07.847 40.0 </code></pre> <p>with:</p> <pre><code>def samevalue(df): df = df.reset_index(drop=True) dataframe = [] flag = 0 start_time = [] start_value = [] end_time = [] end_value = [] for i in range(len(df.index)): if flag == 0: if ((df.loc[i, 'Value']&gt;=41) and (df.loc[i, 'Value']&lt;=46)): start_time = df.loc[i, 'Timestamp'] start_value = df.loc[i, 'Value'] flag = 1 elif flag == 1: if (df.loc[i, 'Data'] != start_temp): end_time = df.loc[i, 'Timestamp'] end_value = df.loc[i, 'Value'] flag = 0 dataframe.append([start_time, end_time, start_value, end_value]) data1 = pd.DataFrame(dataframe, columns= [&quot;StartTime&quot;, &quot;EndTime&quot;, &quot;StartValue&quot;, &quot;EndValue&quot;]) return data1 samevalue(df) </code></pre> <p>Actual Output:</p> <pre><code> StartTime EndTime StartValue EndValue 0 2012-05-01 08:33:47.127 [] 41.5 [] 1 2012-05-01 08:33:47.127 2012-05-01 08:41:56.240000 41.5 40.5 2 2012-05-01 09:50:35.440 2012-05-01 08:41:56.240000 41.0 40.5 3 2012-05-01 09:50:35.440 2012-05-01 10:00:07.847000 41.0 40 </code></pre> <p>Expected Output:</p> <pre><code> StartTime EndTime StartValue EndValue 0 2012-05-01 08:33:47.127 2012-05-01 08:41:56.240 41.5 40.5 1 2012-05-01 09:50:35.440 2012-05-01 10:00:07.847 41.0 40.0 </code></pre> <p>I would have expected that the <code>EndTime</code> is always after the <code>StartTime</code> but it's not the case. Have I missed out something?</p>
<p>Here's a vectorized way of doing that, mostly using <code>shift</code> to compare adjacent rows.</p> <pre><code>df[&quot;in_range&quot;] = (df.Value &gt;= 41) &amp; (df.Value &lt;= 46) df[&quot;end_of_range&quot;] = df.in_range.shift() &amp; ~df.in_range df[&quot;start_of_range&quot;] = ~df.in_range.shift(1).fillna(False) &amp; df.in_range </code></pre> <p>At this point, the dataframe is (I removed the index and the Id for better visibility):</p> <pre><code> Timestamp Value in_range end_of_range start_of_range 0 2012-05-01 08:33:47.127 41.5 True False True 1 2012-05-01 08:39:06.270 41.5 True False False 2 2012-05-01 08:41:56.240 40.5 False True False 3 2012-05-01 08:44:55.777 39.5 False False False ... </code></pre> <p>I now create two dataframes - one for all the 'range start' records, and another one for all the 'range end' records:</p> <pre><code>starts = df[df.start_of_range][[&quot;Timestamp&quot;, &quot;Value&quot;]] ends = df[df.end_of_range][[&quot;Timestamp&quot;, &quot;Value&quot;]] # reset the index of these two dataframes, so I can easily concat them later. starts.index = range(len(starts)) ends.index = range(len(ends)) </code></pre> <p>The value of 'starts' and 'ends' is now:</p> <pre><code> Timestamp Value 0 2012-05-01 08:33:47.127 41.5 1 2012-05-01 09:50:35.440 41.0 Timestamp Value 0 2012-05-01 08:41:56.240 40.5 1 2012-05-01 10:00:07.847 40.0 </code></pre> <p>All that is left now is to <code>concat</code> the two newly created dataframes, so that each start record is aligned with its corresponding end record. Note that the concatenated columns come out in the order start time, start value, end time, end value, so the names are assigned in that order and the columns are then reordered:</p> <pre><code>res = pd.concat([starts, ends], axis=1) res.columns = [&quot;StartTime&quot;, &quot;StartValue&quot;, &quot;EndTime&quot;, &quot;EndValue&quot;] res = res[[&quot;StartTime&quot;, &quot;EndTime&quot;, &quot;StartValue&quot;, &quot;EndValue&quot;]] </code></pre> <p>The result is:</p> <pre><code> StartTime EndTime StartValue EndValue 0 2012-05-01 08:33:47.127 2012-05-01 08:41:56.240 41.5 40.5 1 2012-05-01 09:50:35.440 2012-05-01 10:00:07.847 41.0 40.0 </code></pre>
python|pandas
1
9,686
73,703,110
merge two dataframe based on intersection
<p>I have two dataframes, df1 and df2.</p> <p>df1:</p> <pre><code>categories ; ['hello','world'] ['gogo','albert'] ['dodo'] </code></pre> <p>df2:</p> <pre><code>categories ; ['hello','world'] ['albert'] ['dodji'] </code></pre> <p>I want the result to contain only the lines of df1 where the intersection with the corresponding row of df2 is non-empty (if it is, keep that line of df1). For example, in our case we would have:</p> <p>df_all:</p> <pre><code>categories ; ['hello','world'] ['gogo','albert'] </code></pre> <p>because the intersection of ['hello','world'] of df1 and ['hello','world'] of df2 is non-empty, and the intersection of ['gogo','albert'] and ['albert'] is non-empty, so we keep those lines of df1.</p>
<p>Pandas isn't optimised for Series consisting of lists. I think the best solution is just to use Python sets and check length is nonzero, then use that to mask <code>df1</code>:</p> <pre><code># Set up data df1 = pd.DataFrame({'categories': [['hello','world'],['gogo','albert'],['dodo']]}) df2 = pd.DataFrame({'categories': [['hello','world'],['albert'],['dodji']]}) # Solution mask = [len(set(a).intersection(b)) &gt; 0 for (a,b) in zip(df1.categories, df2.categories)] df1.loc[mask] </code></pre> <p>Output:</p> <pre><code> categories 0 [hello, world] 1 [gogo, albert] </code></pre>
python|pandas|dataframe
1
9,687
73,581,722
How to Compare Two Columns From Two Dataframes for Differences?
<p>I have two DataFrames, and I need to compare two columns in my first DataFrame with two columns in another DataFrame to compare the differences in the values.</p> <p>This is what my first DataFrame looks like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>item_number</th> <th>sell_price</th> </tr> </thead> <tbody> <tr> <td>50</td> <td>12</td> </tr> <tr> <td>50</td> <td>12</td> </tr> <tr> <td>43</td> <td>15</td> </tr> <tr> <td>21</td> <td>20</td> </tr> <tr> <td>66</td> <td>54</td> </tr> <tr> <td>66</td> <td>102</td> </tr> <tr> <td>66</td> <td>76</td> </tr> </tbody> </table> </div> <p>This is what my second DataFrame looks like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>item_number</th> <th>price</th> </tr> </thead> <tbody> <tr> <td>50</td> <td>15</td> </tr> <tr> <td>50</td> <td>15</td> </tr> <tr> <td>43</td> <td>15</td> </tr> <tr> <td>21</td> <td>28</td> </tr> <tr> <td>66</td> <td>87</td> </tr> <tr> <td>66</td> <td>87</td> </tr> <tr> <td>66</td> <td>78</td> </tr> </tbody> </table> </div> <p>Now, how do I compare the <code>item_number</code> and <code>sell_price</code> in my first DataFrame with the <code>item_number</code> and <code>price</code> in my second DataFrame?</p> <p>I need to see the differences between the two DataFrames for the desired columns.</p> <p>I am looking for an output like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>item_number</th> <th>sell_price</th> <th>price</th> </tr> </thead> <tbody> <tr> <td>50</td> <td>12</td> <td>15</td> </tr> <tr> <td>50</td> <td>12</td> <td>15</td> </tr> <tr> <td>21</td> <td>20</td> <td>28</td> </tr> <tr> <td>66</td> <td>54</td> <td>87</td> </tr> <tr> <td>66</td> <td>102</td> <td>87</td> </tr> <tr> <td>66</td> <td>76</td> <td>78</td> </tr> </tbody> </table> </div>
<p>Here's an example:</p> <pre><code>import pandas as pd df1=pd.DataFrame({'item_number':[10,20],'sell_price':[20,40]},index=[0,1]) df2=pd.DataFrame({'item_number':[10,20],'price':[15,20]},index=[0,1]) df1['price']=df2['price'] </code></pre> <p>Note you're effectively adding a column to the original df1. You can always reassign to another df if you wish.</p>
python|pandas|dataframe|compare|comparison
-1
9,688
73,788,242
Shift value by month within a group
<p>Consider the following data frame:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ &quot;id&quot;: [0, 0, 0, 1, 1, 1, 1, 2, 2, 2], &quot;date&quot;: [&quot;2017-01-31&quot;, &quot;2017-02-28&quot;, &quot;2017-03-31&quot;, &quot;2017-01-31&quot;, &quot;2017-03-31&quot;, &quot;2017-04-30&quot;, &quot;2017-05-31&quot;, &quot;2017-01-31&quot;, &quot;2017-03-31&quot;, &quot;2017-05-31&quot;], &quot;value&quot;: [10., 12., 15., 8., 11., 15., 17., 6., 14., 15.] }) df[&quot;date&quot;] = pd.to_datetime(df[&quot;date&quot;], format=&quot;%Y-%m-%d&quot;) </code></pre> <p>I want to create monthly shifted <code>value</code> columns within each <code>id</code> group, where the monthly shifts are specified in a list and can also be negative (meaning past and future shifts should be allowed).</p> <p>Desired result:</p> <pre class="lang-py prettyprint-override"><code>from pandas import Timestamp from numpy import nan data={ 'id': {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 1, 7: 2, 8: 2, 9: 2}, 'date': { 0: Timestamp('2017-01-31 00:00:00'), 1: Timestamp('2017-02-28 00:00:00'), 2: Timestamp('2017-03-31 00:00:00'), 3: Timestamp('2017-01-31 00:00:00'), 4: Timestamp('2017-03-31 00:00:00'), 5: Timestamp('2017-04-30 00:00:00'), 6: Timestamp('2017-05-31 00:00:00'), 7: Timestamp('2017-01-31 00:00:00'), 8: Timestamp('2017-03-31 00:00:00'), 9: Timestamp('2017-05-31 00:00:00') }, 'value': {0: 10.0, 1: 12.0, 2: 15.0, 3: 8.0, 4: 11.0, 5: 15.0, 6: 17.0, 7: 6.0, 8: 14.0, 9: 15.0}, 'value_1': {0: 12.0, 1: 15.0, 2: nan, 3: nan, 4: 15.0, 5: 17.0, 6: nan, 7: nan, 8: nan, 9: nan}, 'value_2': {0: 15.0, 1: nan, 2: nan, 3: 11.0, 4: 17.0, 5: nan, 6: nan, 7: 14.0, 8: 15.0, 9: nan} } df = pd.DataFrame(data=data) </code></pre> <pre><code> id date value value_1 value_2 0 0 2017-01-31 10.0 12.0 15.0 1 0 2017-02-28 12.0 15.0 NaN 2 0 2017-03-31 15.0 NaN NaN 3 1 2017-01-31 8.0 NaN 11.0 4 1 2017-03-31 11.0 15.0 17.0 5 1 2017-04-30 15.0 17.0 NaN 6 1 2017-05-31 17.0 NaN NaN 7 2 2017-01-31 6.0 NaN 14.0 8 2 2017-03-31 14.0 NaN 15.0 9 2 2017-05-31 15.0 NaN NaN </code></pre> <p>In the data frame above, the columns <code>value_1</code> and <code>value_2</code> shall be created.</p> <p>My approach so far:</p> <pre class="lang-py prettyprint-override"><code>from pandas.tseries.offsets import MonthEnd shifts = [1, 2] tmp = df.copy() for shift in shifts: tmp_shifed = tmp.rename(columns={&quot;value&quot;: f&quot;value_{shift}&quot;}).assign(date=df[&quot;date&quot;] + MonthEnd(-1 * shift)) df = df.merge(tmp_shifed, on=[&quot;id&quot;, &quot;date&quot;], how=&quot;left&quot;) </code></pre> <p>It works but I am sure there is a better way to achieve this. Note that my data frame is quite large and that the shift list has the size of 7.</p> <p>Any help is appreciated!</p>
<p>Here's another way to do what your question asks:</p> <pre class="lang-py prettyprint-override"><code>df = df.set_index('id') for shift in [1, 2]: df = df.assign(shift_date=df.date + MonthEnd(shift)) df[f'value_{shift}'] = df.set_index('shift_date', append=True).index.map(df.set_index('date', append=True).value) df = df.drop(columns='shift_date').reset_index() </code></pre> <p>Explanation:</p> <ul> <li>set <code>id</code> column to be the index</li> <li>loop over desired shifts (in months)</li> <li>set a temporary column <code>shift_date</code> that adds the current iteration's shift to the <code>date</code> column</li> <li>append <code>shift_date</code> to the index and use <code>map()</code> to do a <code>value</code> lookup in the original dataframe (with index set to <code>id, date</code>) and populate a new column <code>value_X</code> where <code>X</code> has the value of the current iteration's shift in months</li> <li>after the loop, drop the temporary column <code>shift_date</code> and use <code>reset_index()</code> to restore <code>id</code> as a column.</li> </ul> <p>Output:</p> <pre><code> id date value value_1 value_2 0 0 2017-01-31 10.0 12.0 15.0 1 0 2017-02-28 12.0 15.0 NaN 2 0 2017-03-31 15.0 NaN NaN 3 1 2017-01-31 8.0 NaN 11.0 4 1 2017-03-31 11.0 15.0 17.0 5 1 2017-04-30 15.0 17.0 NaN 6 1 2017-05-31 17.0 NaN NaN 7 2 2017-01-31 6.0 NaN 14.0 8 2 2017-03-31 14.0 NaN 15.0 9 2 2017-05-31 15.0 NaN NaN </code></pre> <p><strong>UPDATE:</strong></p> <p>A different approach would be to identify the min and max dates in the dataframe and automatically create <code>value_X</code> columns for each month. This will create the minimum number of columns with at least one non-null entry:</p> <pre class="lang-py prettyprint-override"><code>dates = pd.date_range(df.date.min(), df.date.max(), freq='M') df['sh'] = df.date.map({dt:i for i, dt in enumerate(dates)}) df = ( df .pivot('id','date','value') .reindex(columns=dates) .set_axis(['value' + (f'_{i}' if i else '') for i, col in enumerate(dates)], axis='columns') .join(df.set_index('id')[['date','sh']]) ) df = df[['date','sh'] + [col for col in df.columns if col.startswith('value')]].set_index('date', append=True) sh = df.pop('sh') df = df.T for col in df.columns: df[col] = df[col].shift(-sh[col]) df = df.T.reset_index() </code></pre> <p>Output:</p> <pre><code> id date value value_1 value_2 value_3 value_4 0 0 2017-01-31 10.0 12.0 15.0 NaN NaN 1 0 2017-02-28 12.0 15.0 NaN NaN NaN 2 0 2017-03-31 15.0 NaN NaN NaN NaN 3 1 2017-01-31 8.0 NaN 11.0 15.0 17.0 4 1 2017-03-31 11.0 15.0 17.0 NaN NaN 5 1 2017-04-30 15.0 17.0 NaN NaN NaN 6 1 2017-05-31 17.0 NaN NaN NaN NaN 7 2 2017-01-31 6.0 NaN 14.0 NaN 15.0 8 2 2017-03-31 14.0 NaN 15.0 NaN NaN 9 2 2017-05-31 15.0 NaN NaN NaN NaN </code></pre>
python|pandas|dataframe|performance|optimization
1
9,689
71,428,904
ValueError: Layer "sequential" expects 1 input(s), but it received 10 input tensors
<p>I am following TFF tutorials to build my FL model My data is contained in different CSV files which are considered as different clients. Following this <a href="https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm" rel="nofollow noreferrer">tutorial</a>, and build the Keras model function as following</p> <pre><code>@tf.function def create_tf_dataset_for_client_fn(dataset_path): return tf.data.experimental.CsvDataset(dataset_path, record_defaults=record_defaults, header=True) @tf.function def add_parsing(dataset): def parse_dataset(*x): return OrderedDict([('y', x[-1]), ('x', x[1:-1])]) return dataset.map(parse_dataset, num_parallel_calls=tf.data.AUTOTUNE) source = tff.simulation.datasets.FilePerUserClientData( dataset_paths, create_tf_dataset_for_client_fn) client_ids = sorted(source.client_ids) # Make sure the client ids are tensor strings when splitting data. source._client_ids = [tf.cast(c, tf.string) for c in source.client_ids] source = source.preprocess(add_parsing) train, test = source.train_test_client_split(source, 1) train_client_ids = train.client_ids train_data = train.create_tf_dataset_for_client(train_client_ids[0]) def create_keras_model(): initializer = tf.keras.initializers.GlorotNormal(seed=0) return tf.keras.models.Sequential([ tf.keras.layers.Input(shape=(32,)), tf.keras.layers.Dense(10, kernel_initializer=initializer), tf.keras.layers.Softmax(), ]) def model_fn(): keras_model = create_keras_model() return tff.learning.from_keras_model( keras_model, input_spec=train_data.element_spec, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) </code></pre> <p>Then I followed instructions and run other <code>@tff.tf_computation</code> functions as the tutorial, like <code>def server_init()</code>, <code>def initialize_fn()</code>, <code>def client_update()</code> and <code>def server_update()</code>. But when I run the def <code>client_update_fn()</code> I got this error</p> <pre><code>ValueError: in user code: File &quot;&lt;ipython-input-14-cada45ffae0f&gt;&quot;, line 12, in client_update * for batch in dataset: File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow_federated/python/learning/keras_utils.py&quot;, line 455, in forward_pass * return self._forward_pass(batch_input, training=training) File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow_federated/python/learning/keras_utils.py&quot;, line 408, in _forward_pass * predictions = self.predict_on_batch(inputs, training) File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow_federated/python/learning/keras_utils.py&quot;, line 398, in predict_on_batch * return self._keras_model(x, training=training) File &quot;/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer_v1.py&quot;, line 740, in __call__ ** self.name) File &quot;/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py&quot;, line 200, in assert_input_compatibility raise ValueError(f'Layer &quot;{layer_name}&quot; expects {len(input_spec)} input(s),' ValueError: Layer &quot;sequential&quot; expects 1 input(s), but it received 10 input tensors. 
Inputs received: [&lt;tf.Tensor 'x:0' shape=() dtype=int32&gt;, &lt;tf.Tensor 'x_1:0' shape=() dtype=int32&gt;, &lt;tf.Tensor 'x_2:0' shape=() dtype=int32&gt;, &lt;tf.Tensor 'x_3:0' shape=() dtype=float32&gt;, &lt;tf.Tensor 'x_4:0' shape=() dtype=float32&gt;, &lt;tf.Tensor 'x_5:0' shape=() dtype=float32&gt;, &lt;tf.Tensor 'x_6:0' shape=() dtype=float32&gt;, &lt;tf.Tensor 'x_7:0' shape=() dtype=float32&gt;, &lt;tf.Tensor 'x_8:0' shape=() dtype=float32&gt;, &lt;tf.Tensor 'x_9:0' shape=() dtype=int32&gt;] </code></pre> <p>Notes:</p> <ul> <li>Each CSV file has 10 columns as features (input) and one column as the label (output).</li> <li>I added <code>shape=(32,)</code> arbitrarily; I don't really know what the shape of the data in each column is.</li> </ul> <p>So, the question is: how do I feed the data to the Keras model and overcome this error?</p> <p>Thanks in advance</p>
<p>A couple problems: Your data has ten separate features, which means you actually need 10 separate inputs for your model. However, you can also stack the features into a tensor and then use a single input with the shape <code>(10,)</code>. Here is a working example, but please note that it uses dummy data and therefore may not make much sense in reality.</p> <p><strong>Create dummy data</strong>:</p> <pre><code>import tensorflow as tf import tensorflow_federated as tff import pandas as pd from collections import OrderedDict import nest_asyncio nest_asyncio.apply() # Dummy data samples = 5 data = [[tf.random.uniform((samples,), maxval=50, dtype=tf.int32).numpy().tolist(), tf.random.uniform((samples,), maxval=50, dtype=tf.int32).numpy().tolist(), tf.random.uniform((samples,), maxval=50, dtype=tf.int32).numpy().tolist(), tf.random.uniform((samples,), maxval=50, dtype=tf.int32).numpy().tolist(), tf.random.normal((samples,)).numpy().tolist(), tf.random.normal((samples,)).numpy().tolist(), tf.random.normal((samples,)).numpy().tolist(), tf.random.normal((samples,)).numpy().tolist(), tf.random.normal((samples,)).numpy().tolist(), tf.random.normal((samples,)).numpy().tolist(), tf.random.uniform((samples,), maxval=50, dtype=tf.int32).numpy().tolist(), tf.random.uniform((samples,), maxval=50, dtype=tf.int32).numpy().tolist()]] df = pd.DataFrame(data) df = df.explode(list(df.columns)) df.to_csv('client1.csv', index= False) df.to_csv('client2.csv', index= False) </code></pre> <p><strong>Load and process dataset</strong>:</p> <pre><code>import tensorflow as tf record_defaults = [int(), int(), int(), int(), float(),float(),float(),float(),float(),float(), int(), int()] @tf.function def create_tf_dataset_for_client_fn(dataset_path): return tf.data.experimental.CsvDataset(dataset_path, record_defaults=record_defaults, header=True) @tf.function def add_parsing(dataset): def parse_dataset(*x): return OrderedDict([('y', x[-1]), ('x', x[1:-1])]) return dataset.map(parse_dataset, num_parallel_calls=tf.data.AUTOTUNE) dataset_paths = {'client1': '/content/client1.csv', 'client2': '/content/client2.csv'} source = tff.simulation.datasets.FilePerUserClientData( dataset_paths, create_tf_dataset_for_client_fn) client_ids = sorted(source.client_ids) # Make sure the client ids are tensor strings when splitting data. 
source._client_ids = [tf.cast(c, tf.string) for c in source.client_ids] source = source.preprocess(add_parsing) train, test = source.train_test_client_split(source, 1) train_client_ids = train.client_ids def reshape_data(d): d['x'] = tf.stack([tf.cast(x, dtype=tf.float32) for x in d['x']]) return d train_data = [train.create_tf_dataset_for_client(c).map(reshape_data).batch(1) for c in train_client_ids] </code></pre> <p><strong>Create and run model</strong>:</p> <pre><code>def create_keras_model(): initializer = tf.keras.initializers.GlorotNormal(seed=0) return tf.keras.models.Sequential([ tf.keras.layers.Input(shape=(10,)), tf.keras.layers.Dense(75, kernel_initializer=initializer), tf.keras.layers.Dense(50, kernel_initializer=initializer), tf.keras.layers.Softmax(), ]) def model_fn(): keras_model = create_keras_model() return tff.learning.from_keras_model( keras_model, input_spec=train_data[0].element_spec, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) def initialize_fn(): model = model_fn() return model.trainable_variables @tf.function def client_update(model, dataset, server_weights, client_optimizer): &quot;&quot;&quot;Performs training (using the server model weights) on the client's dataset.&quot;&quot;&quot; client_weights = model.trainable_variables tf.nest.map_structure(lambda x, y: x.assign(y), client_weights, server_weights) for batch in dataset: with tf.GradientTape() as tape: outputs = model.forward_pass(batch) grads = tape.gradient(outputs.loss, client_weights) grads_and_vars = zip(grads, client_weights) client_optimizer.apply_gradients(grads_and_vars) return client_weights @tf.function def server_update(model, mean_client_weights): &quot;&quot;&quot;Updates the server model weights as the average of the client model weights.&quot;&quot;&quot; model_weights = model.trainable_variables tf.nest.map_structure(lambda x, y: x.assign(y), model_weights, mean_client_weights) return model_weights federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS) @tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS)) def get_average_temperature(client_temperatures): return tff.federated_mean(client_temperatures) str(get_average_temperature.type_signature) get_average_temperature([68.5, 70.3, 69.8]) @tff.tf_computation def server_init(): model = model_fn() return model.trainable_variables @tff.federated_computation def initialize_fn(): return tff.federated_value(server_init(), tff.SERVER) whimsy_model = model_fn() tf_dataset_type = tff.SequenceType(whimsy_model.input_spec) model_weights_type = server_init.type_signature.result @tff.tf_computation(tf_dataset_type, model_weights_type) def client_update_fn(tf_dataset, server_weights): model = model_fn() client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.01) return client_update(model, tf_dataset, server_weights, client_optimizer) @tff.tf_computation(model_weights_type) def server_update_fn(mean_client_weights): model = model_fn() return server_update(model, mean_client_weights) federated_server_type = tff.FederatedType(model_weights_type, tff.SERVER) federated_dataset_type = tff.FederatedType(tf_dataset_type, tff.CLIENTS) @tff.federated_computation(federated_server_type, federated_dataset_type) def next_fn(server_weights, federated_dataset): server_weights_at_client = tff.federated_broadcast(server_weights) client_weights = tff.federated_map( client_update_fn, (federated_dataset, server_weights_at_client)) mean_client_weights = 
tff.federated_mean(client_weights) server_weights = tff.federated_map(server_update_fn, mean_client_weights) return server_weights federated_algorithm = tff.templates.IterativeProcess( initialize_fn=initialize_fn, next_fn=next_fn ) server_state = federated_algorithm.initialize() for round in range(15): server_state = federated_algorithm.next(server_state, train_data) </code></pre> <p>Regarding this line in the model: <code>tf.keras.layers.Dense(50, kernel_initializer=initializer)</code>, I am using 50 output nodes, since I created dummy labels that can vary between 0 and 49. This is necessary when using the <code>SparseCategoricalCrossentropy</code> loss function.</p>
python|tensorflow|keras|tensorflow-federated
1
9,690
71,332,423
I am unable to read data into my Jupyter notebook
<p>I cloned a repo from GitHub to the Google Cloud Workbench. I haven't been able to read in my data to the Jupyter notebook. It seems like it is unable to locate the file. I have checked the file spellings and location, it all seems to be in place. I also tried to read it in as</p> <pre><code>PATH = &quot;data/countypres_2000-2020.csv&quot; df = pd.read_csv(PATH) </code></pre> <p>or as</p> <pre><code>PATH = &quot;eco395m-homework-6/data/countypres_2000-2020.csv&quot; df = pd.read_csv(PATH) </code></pre> <p><a href="https://i.stack.imgur.com/eOcLW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eOcLW.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/U0IKA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0IKA.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/5aLnD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5aLnD.png" alt="The data file exists in the expected directory" /></a></p> <p><a href="https://i.stack.imgur.com/bVNal.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bVNal.png" alt="enter image description here" /></a></p>
<p>Try this:</p> <pre><code>PATH = r&quot;/data/countypres_2000-2020.csv&quot; df = pd.read_csv(PATH) </code></pre>
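<p>If the relative path still fails, it is usually because the notebook's working directory is not the repo root. A minimal, hedged way to check, using only the standard library and the folder names visible in the question:</p> <pre><code>import os
import pandas as pd

print(os.getcwd())    # where the notebook is actually running from
print(os.listdir())   # what that directory contains

# Adjust the prefix based on what the two prints show; the folder names
# below are taken from the question and may need changing.
PATH = os.path.join('eco395m-homework-6', 'data', 'countypres_2000-2020.csv')
df = pd.read_csv(PATH)
</code></pre>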
pandas|jupyter-notebook
0
9,691
71,196,836
Finding a quicker method to construct a matrix in python
<p>I'm trying to construct a (p+1,n) matrix with the code below:</p> <pre><code>import numpy as np

p = 3
xh = np.linspace(0.5,1.5,p+1)
n = 7
x = np.linspace(0,2,n)

M = np.zeros([p+1,n])
l2 = 1

for i in range(len(x)):
    for k in range(len(xh)):
        for j in range(len(xh)):
            if k != j:
                l = (x[i]-xh[j])/(xh[k]-xh[j])
                l2 *= l
            elif k == j:
                l = 1
                l2 *= l
        M[k][i]=l2
        l2 = 1

print(M)
</code></pre> <p>This method produces the matrix I want but is very slow (6 sec for p=40 and n=2000).</p> <p>The matrix itself is a matrix of Lagrange polynomials, for approximating some function. The nodal points, xh, are the points used in forming/calculating the interpolation of a function. They have the property that their values on the original function and the interpolation are always the same. The number of distinct nodal points (p+1) indicates the degree (p) of the polynomial for the Lagrange interpolation. The x points are where a function is to be evaluated. That could be the interpolation of the function or the function itself. This is the formula I'm following:</p> <p><a href="https://i.stack.imgur.com/pTX8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pTX8Q.png" alt="enter image description here" /></a></p> <p>I don't know a faster way to construct this matrix in numpy; other methods seem to keep going wrong when I apply them to the code I've got, and I don't know enough to see why. What faster method can I use here?</p>
<p>Your code can be nicely compiled by decorating a function with <a href="https://numba.pydata.org/numba-doc/latest/user/performance-tips.html" rel="nofollow noreferrer"><code>@nb.njit</code></a> from the <a href="https://numba.pydata.org/" rel="nofollow noreferrer"><code>numba</code></a> package. Some minor redundant parts were removed.</p> <pre><code>import numpy as np import numba as nb @nb.njit def test(p,n): xh = np.linspace(0.5,1.5,p+1) x = np.linspace(0,2,n) M = np.zeros((p+1,n), dtype=nb.float64) l2 = 1 for k in range(len(x)): for i in range(len(xh)): for j in range(len(xh)): if i != j: l = (x[k]-xh[j])/(xh[i]-xh[j]) else: l = 1 l2 *= l M[i][k]=l2 l2 = 1 return M </code></pre> <p>Benchmark for <code>p=40, n=2000</code> on a 2-core colab instance. Array <em>M</em> was computed with your original code.</p> <pre><code>a = [0] %timeit a[0] = test(40,2000) np.testing.assert_allclose(M, a[0]) </code></pre> <p>Runs in <code>5.57 ms per loop vs 2.24 s per loop</code> or ~402x speed up.</p>
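<p>One hedged caveat on the benchmark: the first call to an <code>@nb.njit</code> function includes the JIT compilation time, so it is worth triggering compilation once before timing:</p> <pre><code>test(40, 2000)            # warm-up call compiles the function
%timeit test(40, 2000)    # now the timing reflects only the compiled code
</code></pre>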
python|numpy
2
9,692
71,286,703
How can I change a string in a cell with my pre-made dictionary?
<p>I have a big dataset (pandas df) about news reading that I'm trying to clean, but it's a bit messy. I want to work on countries, but in some (most!) rows it has the city name, not the country. I created a dict where keys are countries and values are cities, and I want to change the city names to country names.</p> <p>To picture the data frame (I have 1 m rows btw):</p> <pre><code>    Country      Age
0   France       25-34
1   Lyon         45-54
2   Kiev         35-44
3   France       25-34
4   New York     25-34
5   Paris        65+
6   Toulouse     35-44
7   Nice         55-64
8   Chicago      45-54
9   Stuttgart    35-44
10  Germany      65+
11  Moscow       25-34
12  USA          45-54
13  Italy        35-44
14  Berlin       65+
15  Russia       25-34
16  Ukraine      45-54
17  Lille        35-44
18  Germany      65+
19  Moscow       25-34
20  Lviv         25-34
21  Vladivostok  25-34
22  Rome         25-34
23  Milan        25-34
</code></pre> <p>My checklist:</p> <pre><code>checklist = {&quot;France&quot;:[&quot;Toulouse&quot;,&quot;Lyon&quot;,&quot;Paris&quot;,&quot;Nice&quot;,&quot;Lille&quot;],&quot;USA&quot;:[&quot;New York&quot;,&quot;Chicago&quot;],&quot;Germany&quot;:[&quot;Berlin&quot;,&quot;Stuttgart&quot;],&quot;Ukraine&quot;:[&quot;Lviv&quot;,&quot;Kiev&quot;],&quot;Russia&quot;:[&quot;Moscow&quot;,&quot;Vladivostok&quot;],&quot;Italy&quot;:[&quot;Rome&quot;,&quot;Milan&quot;]}
</code></pre>
<p>I have written the code that @Tobias suggested in the comments.</p> <pre><code>cities, countries = [], []
for key, value in checklist.items():
    for i in value:
        cities.append(i)
        countries.append(key)
new_checklist = dict(zip(cities, countries))
</code></pre> <p>The above inverts your checklist. You can then change the rows like this (the column is called <code>Country</code> in your frame):</p> <pre><code>df['Country'] = df['Country'].apply(lambda x: new_checklist[x] if x in new_checklist.keys() else x)
</code></pre>
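<p>For what it's worth, pandas can also do this lookup without the explicit loops — a minimal sketch, assuming the same <code>checklist</code> dict from the question:</p> <pre><code># Invert {country: [cities]} into {city: country} in one comprehension
new_checklist = {city: country for country, cities in checklist.items() for city in cities}

# Series.replace maps known cities to their country and leaves everything else untouched
df['Country'] = df['Country'].replace(new_checklist)
</code></pre>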
python|pandas|dataframe
0
9,693
60,612,985
Python list has none type elements even though filled with dataframe elements
<p>I initialize a list of length n using <code>df_list = [None] * n</code>. Then I have a for loop where I fill each element of <code>df_list</code> with a dataframe. Then, when I concat all these dataframes together using <code>df = pd.concat(df_list, axis=0)</code> I end up with fewer rows than expected, and upon further inspection I find that some elements of <code>df_list</code> are <code>None</code> type while others are dataframes. This is strange to me because in my for loop, I print the type of each value before filling it into <code>df_list</code> and they are all dataframes of the desired shape and columns as well. </p> <p>Wondering how, after running the loop, I can have <code>None</code> values in <code>df_list</code> when each value I filled in is a dataframe and not <code>None</code>. </p> <p>Any help here is appreciated - quite puzzled by this!</p>
<p>Why do you even need to initialize the list with None values? Just create an empty list and append your dataframes to it.</p>
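<p>A minimal sketch of that pattern — <code>build_frame(i)</code> is a hypothetical stand-in for whatever your loop body does to produce each dataframe:</p> <pre><code>import pandas as pd

df_list = []                      # start empty instead of [None] * n
for i in range(n):
    part = build_frame(i)         # hypothetical: however each dataframe is built
    df_list.append(part)          # only ever appends real dataframes

df = pd.concat(df_list, axis=0)   # no None entries can slip through
</code></pre> <p>If an element still comes out as <code>None</code> with this pattern, the loop body itself has a code path that never produces a dataframe, which is much easier to spot this way.</p>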
python|pandas|dataframe|concat|nonetype
1
9,694
72,693,016
Find intersections in all dataset rows
<p>I need to write a function.</p> <p>It takes any value from the dataset as input and should look for an intersection in all rows.</p> <p>For example: phone = 87778885566</p> <p>The table is represented by the following fields:</p> <ol> <li>key</li> <li>id</li> <li>phone</li> <li>email</li> </ol> <p>Test data:</p> <ul> <li>1; 12345; 89997776655; test@gmail.com</li> <li>2; 54321; 87778885566; two@gmail.com</li> <li>3; 98765; 87776664577; three@gmail.com</li> <li>4; 66678; 87778885566; four@gmail.com</li> <li>5; 34567; 84547895566; four@gmail.com</li> <li>6; 34567; 89087545678; five@gmail.com</li> </ul> <p>The output should be:</p> <ul> <li>2; 54321; <strong>87778885566;</strong> two@gmail.com</li> <li>4; 66678; <strong>87778885566;</strong> <strong>four@gmail.com</strong></li> <li>5; <strong>34567;</strong> 84547895566; <strong>four@gmail.com</strong></li> <li>6; <strong>34567;</strong> 89087545678; five@gmail.com</li> </ul> <p>It should check all values ​​and if values ​​intersect somewhere, return a dataset with intersections.</p>
<p>You could use <code>recursion</code>:</p> <pre><code>import numpy as np

def relation(dat, values):
    # Mark every cell that matches one of the values we are tracking
    d = dat.apply(lambda x: x.isin(values.ravel()))
    # Keep the rows that contain at least one match
    values1 = dat.iloc[np.unique(np.where(d)[0]), :]
    # If no new values were picked up, stop; otherwise recurse with the enlarged value set
    if set(np.array(values)) == set(values1.to_numpy().ravel()):
        return values1
    else:
        return relation(dat, values1.to_numpy().ravel())

relation(df.astype(str), np.array(['87778885566']))

       1            2                3
1  54321  87778885566    two@gmail.com
3  66678  87778885566   four@gmail.com
4  34567  84547895566   four@gmail.com
5  34567  89087545678  five@gmail.com
</code></pre>
python|python-3.x|pandas|numpy
0
9,695
72,717,049
Scipy Optimize Minimize: Optimization terminated successfully but not iterating at all
<p>I am trying to code an optimizer finding the optimal constant parameters so as to minimize the MSE between an array y and a generic function over X. The generic function is given in pre-order, so for example if the function over X is x1 + c*x2 the function would be [+, x1, *, c, x2]. The objective in the previous example, would be minimizing:</p> <blockquote> <p>sum_for_all_x (y - (x1 + c*x2))^2</p> </blockquote> <p>I show next what I have done to solve the problem. Some things that sould be known are:</p> <ol> <li>X and y are torch tensors.</li> <li>constants is the list of values to be optimized.</li> </ol> <pre><code> def loss(self, constants, X, y): stack = [] # Stack to save the partial results const = 0 # Index of constant to be used for idx in self.traversal[::-1]: # Reverse the prefix notation if idx &gt; Language.max_variables: # If we are dealing with an operator function = Language.idx_to_token[idx] # Get its associated function first_operand = stack.pop() # Get first operand if function.arity == 1: # If the arity of the operator is one (e.g sin) stack.append(function.function(first_operand)) # Append result else: # Same but if arity is 2 second_operand = stack.pop() # Need a second operand stack.append(function.function(first_operand, second_operand)) elif idx == 0: # If it is a constant -&gt; idx 0 indicates a constant stack.append(constants[const]*torch.ones(X.shape[0])) # Append constant const += 1 # Update else: stack.append(X[:, idx - 1]) # Else append the associated column of X prediction = stack[0] return (y - prediction).pow(2).mean().cpu().numpy() def optimize_constants(self, X, y): ''' # This function optimizes the constants of the expression tree. ''' if 0 not in self.traversal: # If there are no constants to be optimized return return self.traversal x0 = [0 for i in range(len(self.constants))] # Initial guess ini = time.time() res = minimize(self.loss, x0, args=(X, y), method='BFGS', options={'disp': True}) print(res) print('Time:', time.time() - ini) </code></pre> <p>The problem is that the optimizer theoretically terminates successfully but does not iterate at all. The output res would be something like that:</p> <pre><code>Optimization terminated successfully. Current function value: 2.920725 Iterations: 0 Function evaluations: 2 Gradient evaluations: 1 fun: 2.9207253456115723 hess_inv: array([[1]]) jac: array([0.]) message: 'Optimization terminated successfully.' nfev: 2 nit: 0 njev: 1 status: 0 success: True x: array([0.]) </code></pre> <p>So far I have tried to:</p> <ol> <li>Change the method in the minimizer (e.g Nelder-Mead, SLSQP,...) but it happens the same with all of them.</li> <li>Change the way I return the result (e.g (y - prediction).pow(2).mean().item())</li> </ol>
<p>It seems that <code>scipy.optimize.minimize</code> does not work well with PyTorch. Changing the code to use NumPy ndarrays solved the problem.</p>
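<p>For anyone hitting the same zero-iteration behaviour, a minimal sketch of that change — <code>evaluate_tree</code> and <code>n_constants</code> are hypothetical stand-ins for a NumPy version of the prefix-notation evaluation and the number of constants to fit; the detach/cpu calls are only needed if the tensors require grad or live on a GPU:</p> <pre><code>import numpy as np
from scipy.optimize import minimize

# Convert the torch tensors to plain NumPy arrays once, before optimizing
X_np = X.detach().cpu().numpy()
y_np = y.detach().cpu().numpy()

def loss(constants, X, y):
    prediction = evaluate_tree(constants, X)      # hypothetical NumPy tree evaluation
    return float(np.mean((y - prediction) ** 2))  # plain float64 scalar for the optimizer

x0 = np.zeros(n_constants)
res = minimize(loss, x0, args=(X_np, y_np), method='BFGS')
</code></pre>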
python|pytorch|scipy-optimize-minimize
0
9,696
59,589,975
Count the duplicate rows in excel using python and i am getting error TypeError: a bytes-like object is required, not 'str'
<p>The email Excel file (email.xlsx) is formatted like this:</p> <pre><code>name email
A    A@gmail.com
B    B@gmailcom
C    c@gmail.com
A    A@gmail.com
B    B@gmail.com
</code></pre> <p>This is the output I want in a second file, outfile.csv:</p> <pre><code>name email        count
A    A@gmail.com  2
B    B@gmailcom   2
C    c@gmail.com  1
</code></pre> <p>This is the Python code. First, I read the Excel file:</p> <pre><code>data_file=pd.read_excel('email.xlsx')
writer = csv.writer(open('outfiles.csv','wb'))
code = defaultdict(int)
for row in data_file:
    code[row[0]] += 1
# now write the file
for row in code.items():
    writer.writerow(row)
</code></pre> <p>Error:</p> <blockquote> <p>writer.writerow(row) TypeError: a bytes-like object is required, not 'str'</p> </blockquote> <p>I am getting this error, so could you please help me out?</p>
<p>If you just want to count the duplicates, use <code>pandas.Series.unique()</code>!</p> <pre><code>import pandas as pd

data = pd.read_excel('email.xlsx')
unique = data.column_name.unique()
duplicates = len(data) - len(unique)
print(&quot;number of duplicate rows is:&quot;, duplicates)
</code></pre> <p>You just need to know the column name; you can see all of them using <code>print(data.columns)</code>.</p>
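<p>If the per-row counts from the question's desired output are what you need (rather than a single total), a hedged sketch — assuming the columns really are named <code>name</code> and <code>email</code> as in the sample data:</p> <pre><code>import pandas as pd

data = pd.read_excel('email.xlsx')

# One row per (name, email) pair, with how often that pair occurs
out = data.groupby(['name', 'email']).size().reset_index(name='count')
out.to_csv('outfile.csv', index=False)
</code></pre>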
python|excel|pandas|csv
2
9,697
59,675,306
Bar chart with customised width in Python
<p>I have this dataframe <code>df</code> which contains -</p> <pre><code>Name  Team Name  Category  Challenge  Points  Time
A     B          1         1ABC       50      2019-11-04 07:37:02
D     B          2         2ACE       150     2019-11-04 09:57:02
X     P          4         4PQR       500     2019-11-05 08:45:02
A     B          3         3PQR       10      2019-11-04 10:25:20
N     P          4         4ABC       120     2019-11-05 08:35:00
C     G          1         1ABC       50      2019-11-04 07:37:02
D     B          4         4RST       200     2019-11-04 10:57:02
</code></pre> <p>I have this ambitious plan of visualizing this dataset as a customised barchart where each team has a building (bar) made of different blocks of varying width (depending on the points associated with that challenge), and the vertical order of the blocks depends on the time (the first one goes at the bottom). In short, the plot for the above data should roughly look like this - </p> <p><a href="https://i.stack.imgur.com/m75xg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m75xg.png" alt="enter image description here"></a></p> <p>The different colours represent the different categories here. I know how to group the data by teams and then plot each team's number of attempts by -</p> <pre><code>df.groupby(['Team Name'])['Challenge'].count().plot.bar()
</code></pre> <p>but beyond that, I'm clueless as to how to change the bar widths. Can someone help with this? Alternatively, if someone has a better idea of how to visualise it using any of the conventional plots, I'd love to hear your opinions too.</p> <p>Thanks!</p>
<p>Does this look like what you want?</p> <p><a href="https://i.stack.imgur.com/ifsTK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ifsTK.png" alt="enter image description here"></a></p> <p>You can accomplish this by manually plotting the 'blocks' via <code>matplotlib.patches</code>, it just requires some extra manipulation to do so algorithmically. Here is a complete example using the data supplied in the question</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from matplotlib.patches import Rectangle import numpy as np import pandas as pd t20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120)] for i in range(len(t20)): r, g, b = t20[i] t20[i] = (r / 255., g / 255., b / 255.) fig, ax = plt.subplots(1) df['Time'] = pd.to_datetime(df['Time']) df = df.sort_values('Time') cat = df['Category'].unique() cidx = dict(zip(cat, range(len(cat)))) mw = max(df['Points']) names = list(df['Team Name'].unique()) nt = len(names) h = 0.5 hs = [0]*3 for ii in range(len(df.index)): w = float(df['Points'].iloc[ii])/mw idx = names.index(df['Team Name'].iloc[ii]) r = Rectangle((idx - w/2.0, hs[idx]), w, h, color=t20[cidx[df['Category'].iloc[ii]]]) hs[idx] += 0.5 ax.add_patch(r) plt.xlim([-0.5, len(names)-0.5]) plt.ylim([0, max(hs)+3]) plt.xticks(range(len(names)), names) plt.show() </code></pre> <p>I used the first 4 colors in the <a href="https://public.tableau.com/profile/chris.gerrard#!/vizhome/TableauColors/ColorPaletteswithRGBValues" rel="nofollow noreferrer">tableau 20 palette</a> in case you were interested.</p> <hr> <h3> Edit </h3> <p>You can add a legend with the line</p> <pre class="lang-py prettyprint-override"><code>plt.legend(handles=[Patch(facecolor=t20[ii], label=cat[ii]) for ii in range(len(t20))]) </code></pre> <p>as long as the additional import of <code>Patches</code> from <code>matplotlib.patches</code> is included, i.e.</p> <pre class="lang-py prettyprint-override"><code>from matplotlib.patches import Rectangle, Patch </code></pre> <p>And the output will be </p> <p><a href="https://i.stack.imgur.com/MnA0W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MnA0W.png" alt="Plot with legend"></a></p>
python|pandas|matplotlib|data-visualization
2
9,698
59,765,712
OPTICS parallelism
<p>I have the following script (<code>optics.py</code>) to estimate clustering with precomputed distances:</p> <pre><code>from sklearn.cluster import OPTICS
import numpy as np

distances = np.load(r'distances.npy')
clust = OPTICS(metric='precomputed', n_jobs=-1)
clust = clust.fit(distances)
</code></pre> <p>Looking at the htop results I can see that only one CPU core is used</p> <p><a href="https://i.stack.imgur.com/gAb2I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gAb2I.png" alt="enter image description here"></a></p> <p>despite the fact that scikit-learn runs the clustering in multiple processes:</p> <p><a href="https://i.stack.imgur.com/I17nJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I17nJ.png" alt="enter image description here"></a></p> <p>Why has <code>n_jobs=-1</code> not resulted in using all the CPU cores?</p>
<p>I'm the primary author of the sklearn OPTICS module. Parallelism is difficult because there is an ordering loop which <em>cannot</em> be run in parallel; that said, the most computationally intensive task is distance calculations, and these can be run in parallel. More specifically, sklearn OPTICS calculates the upper triangle distance matrix one row at a time, starting with 'n' distance lookups, and decreasing to 'n-1, n-2' lookups for a total of n-squared / 2 distance calculations... the problem is that parallelism in sklearn is generally handled by joblib, which uses processes (not threads), which have rather high overhead for creation and destruction when used in a loop. (i.e., you create and destroy the process workers per row as you loop through the data set, and 'n' setups/teardowns of processes have more overhead than the parallelism benefit you get from joblib--this is why <code>n_jobs</code> is disabled for OPTICS.)</p> <p>The best way to 'force' parallelism in OPTICS is probably to define a custom distance metric that runs in parallel -- see this post for a good example of this:</p> <p><a href="https://medium.com/aspectum/acceleration-for-the-nearest-neighbor-search-on-earths-surface-using-python-513fc75984aa" rel="nofollow noreferrer">https://medium.com/aspectum/acceleration-for-the-nearest-neighbor-search-on-earths-surface-using-python-513fc75984aa</a></p> <p>One of the examples above actually forces the distance calculation onto a GPU, but still uses sklearn for the algorithm execution.</p>
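<p>To make that last suggestion concrete, here is a minimal, hedged sketch of the callable-metric route. <code>features</code> stands in for the raw feature matrix the distances were computed from (a callable metric cannot be combined with <code>metric='precomputed'</code>), and the compiled kernel mainly removes per-call Python overhead — the linked post goes further by batching the lookups on a GPU:</p> <pre><code>import numba as nb
from sklearn.cluster import OPTICS

@nb.njit(fastmath=True)
def fast_euclidean(a, b):
    # a and b are single 1-D sample vectors; OPTICS calls this for each pair it needs
    s = 0.0
    for i in range(a.shape[0]):
        d = a[i] - b[i]
        s += d * d
    return s ** 0.5

# Note: this takes the raw samples, not the precomputed distance matrix
clust = OPTICS(metric=fast_euclidean).fit(features)
</code></pre>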
python|numpy|scikit-learn|optics-algorithm
4
9,699
32,491,339
Python Pandas with functions and column equal values
<p>I am trying to create a function that takes an already formatted json.loads(). </p> <pre><code>def data_fp(fp): for line in fp: try: data=json.loads(line) json_data.append(data) except: continue </code></pre> <p>I take the json_data and am trying to clean it. I created a blank dataframe and made a function.</p> <pre><code>df=pd.DataFrame() def data_clean(liste): df['col1'] = map(lambda datas: datas['col1'] if 'col1' in datas else 'NA', liste) df['col2'] = map(lambda datas: datas['col2'] if 'col2' in datas else 'NA', liste) df=df[df['col2']=='foo'] </code></pre> <p>The problem comes from the last line. When I include it in the function, I get an error </p> <blockquote> <p>UnboundLocalError: local variable 'df' referenced before assignment</p> </blockquote> <p>But when I run the function without the last line in it, I get no errors, and I can run the == line in the console and get the desired result.</p> <p>Why does it not work in the function?</p>
<p>Any variable assigned anywhere in a function is local to that function, unless it is specifically declared global. So without the assignment you access the global variable and everything goes fine; with the assignment, every reference to <code>df</code> inside the function points at a not-yet-assigned local variable, hence the error.</p> <p>Look here: <a href="http://effbot.org/pyfaq/how-do-you-set-a-global-variable-in-a-function.htm" rel="nofollow">http://effbot.org/pyfaq/how-do-you-set-a-global-variable-in-a-function.htm</a></p>
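<p>Two hedged ways to apply that to the function from the question — either declare the global explicitly, or (usually cleaner) build and return a new frame instead of mutating a module-level one. <code>json_data</code> and <code>pd</code> come from the question's code; the <code>.get(..., 'NA')</code> calls are equivalent to the if/else lambdas there:</p> <pre><code>import pandas as pd

# Option 1: keep the module-level frame, but declare it
def data_clean_global(liste):
    global df
    df['col1'] = list(map(lambda d: d.get('col1', 'NA'), liste))
    df['col2'] = list(map(lambda d: d.get('col2', 'NA'), liste))
    df = df[df['col2'] == 'foo']

# Option 2 (usually cleaner): build and return a new frame, no global at all
def data_clean(liste):
    out = pd.DataFrame()
    out['col1'] = list(map(lambda d: d.get('col1', 'NA'), liste))
    out['col2'] = list(map(lambda d: d.get('col2', 'NA'), liste))
    return out[out['col2'] == 'foo']

df = data_clean(json_data)
</code></pre>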
python|pandas|global-variables|dataframe
1