Dataset schema:

column       dtype            min / shortest   max / longest
Unnamed: 0   int64            0                378k
id           int64            49.9k            73.8M
title        string (length)  15               150
question     string (length)  37               64.2k
answer       string (length)  37               44.1k
tags         string (length)  5                106
score        int64            -10              5.87k
5,400
69,455,808
Replace Unnamed values in date column with true values
<p>I'm working on this raw data frame that needs some cleaning. So far, I have transformed this xlsx file</p> <p><a href="https://i.stack.imgur.com/2YaT5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2YaT5.png" alt="enter image description here" /></a></p> <p>into this pandas dataframe:</p> <pre><code>print(df.head(16))
</code></pre> <pre><code>                   date technician alkalinity colour     uv    ph turbidity  \
0   2020-02-01 00:00:00  Catherine       24.5     33   0.15  7.24      1.53
1            Unnamed: 2        NaN        NaN    NaN    NaN   NaN      2.31
2            Unnamed: 3        NaN        NaN    NaN    NaN   NaN      2.08
3            Unnamed: 4        NaN        NaN    NaN    NaN   NaN       2.2
4            Unnamed: 5     Michel         24     35  0.152  7.22      1.59
5            Unnamed: 6        NaN        NaN    NaN    NaN   NaN      1.66
6            Unnamed: 7        NaN        NaN    NaN    NaN   NaN      1.71
7            Unnamed: 8        NaN        NaN    NaN    NaN   NaN      1.53
8   2020-02-02 00:00:00  Catherine         24    NaN  0.145  7.21      1.44
9           Unnamed: 10        NaN        NaN    NaN    NaN   NaN      1.97
10          Unnamed: 11        NaN        NaN    NaN    NaN   NaN      1.91
11          Unnamed: 12        NaN        NaN   33.0    NaN   NaN      2.07
12          Unnamed: 13     Michel         24     34   0.15  7.24      1.76
13          Unnamed: 14        NaN        NaN    NaN    NaN   NaN      1.84
14          Unnamed: 15        NaN        NaN    NaN    NaN   NaN      1.72
15          Unnamed: 16        NaN        NaN    NaN    NaN   NaN      1.85

    temperature
0             3
1           NaN
2           NaN
3           NaN
4             3
5           NaN
6           NaN
7           NaN
8             3
9           NaN
10          NaN
11          NaN
12            3
13          NaN
14          NaN
15          NaN
</code></pre> <p>From here, I want to combine the rows so that I only have one row for each date. The values for each row will be the mean of the respective columns, i.e.:</p> <pre><code>print(new_df.head(2))
</code></pre> <pre><code>  date time alkalinity colour uv ph turbidity temperature
0 2020-02-01 00:00:00 24.25 34 0.151 7.23 1.83 3
1 2020-02-02 00:00:00 24 33.5 0.148 7.23 1.82 3
</code></pre> <p>How can I accomplish this when I have Unnamed values in my date column? Thanks!</p>
<p>Try setting the values to <code>NaN</code> and then use <code>ffill</code>:</p> <pre><code>import numpy as np

df.loc[df.date.str.contains('Unnamed', na=False), 'date'] = np.nan
df.date = df.date.ffill()
</code></pre>
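<p>To then collapse each date into a single row of column means, as the question asks, a minimal continuation could look like this (a sketch, not part of the original answer; it assumes the measurement columns are already numeric, and it drops non-numeric columns such as <code>technician</code>):</p> <pre><code>new_df = df.groupby('date', as_index=False).mean(numeric_only=True)
</code></pre>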
python|pandas|dataframe|pandas-groupby|nan
1
5,401
41,132,570
convert columns to rows based on row value using pandas
<p>I have an excel file as follows:</p> <pre><code>ID    wk48  wk49  wk50  wk51  wk52  wk1  wk2
1123    10    22   233     2     4   22   11
1198     9     4    44    23    34    5  234
101      3     6     3    43    33   34   78
</code></pre> <p>I want the output as follows in Python:</p> <pre><code>1123 wk48 10
1123 wk49 22
1123 wk50 233
1123 wk51 2
1123 wk52 4
1123 wk1 22
1123 wk2 11
1198 wk48 9
1198 wk49 4
1198 wk50 44
1198 wk51 23
1198 wk52 34
1198 wk1 5
1198 wk2 234
</code></pre> <p>Any suggestions?</p>
<p>The first thing I would do is set <code>ID</code> as your index with:</p> <pre><code>df.set_index('ID', inplace=True)
</code></pre> <p>Next, you can use the following command to reorient your dataframe:</p> <pre><code>df = df.stack().reset_index()
print(df)
-------------------------------------------
Output:

      ID level_1    0
0   1123    wk48   10
1   1123    wk49   22
2   1123    wk50  233
3   1123    wk51    2
4   1123    wk52    4
5   1123     wk1   22
6   1123     wk2   11
7   1198    wk48    9
8   1198    wk49    4
9   1198    wk50   44
10  1198    wk51   23
11  1198    wk52   34
12  1198     wk1    5
13  1198     wk2  234
14   101    wk48    3
15   101    wk49    6
16   101    wk50    3
17   101    wk51   43
18   101    wk52   33
19   101     wk1   34
20   101     wk2   78
</code></pre>
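<p>Two small follow-ups beyond the original answer (a sketch; <code>df_wide</code> is a hypothetical name for the frame before <code>set_index</code> was called): the stacked result can be given tidy column names, and <code>melt</code> does the same reshape in one step.</p> <pre><code># name the columns produced by stack()
df.columns = ['ID', 'week', 'value']

# one-step alternative: melt, then a stable sort to group rows per ID
long_df = (df_wide.melt(id_vars='ID', var_name='week', value_name='value')
                  .sort_values('ID', kind='stable'))
</code></pre>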
python|pandas
2
5,402
41,075,993
facenet triplet loss with keras
<p>I am trying to implement facenet in Keras with Tensorflow backend and I have some problem with the triplet loss.<a href="https://i.stack.imgur.com/RT3TZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RT3TZ.png" alt="enter image description here" /></a></p> <p>I call the fit function with 3*n number of images and then I define my custom loss function as follows:</p> <pre><code>def triplet_loss(self, y_true, y_pred): embeddings = K.reshape(y_pred, (-1, 3, output_dim)) positive_distance = K.mean(K.square(embeddings[:,0] - embeddings[:,1]),axis=-1) negative_distance = K.mean(K.square(embeddings[:,0] - embeddings[:,2]),axis=-1) return K.mean(K.maximum(0.0, positive_distance - negative_distance + _alpha)) self._model.compile(loss=triplet_loss, optimizer=&quot;sgd&quot;) self._model.fit(x=x,y=y,nb_epoch=1, batch_size=len(x)) </code></pre> <p>where y is just a dummy array filled with 0s</p> <p>The problem is that even after the first iteration with batch size 20 the model starts predicting the same embedding for all the images. So when I first do the prediction on the batch every embedding is different. Then I do the fit and predict again and suddenly all the embeddings becomes almost the same for all the images in the batch</p> <p>Also notice that there is a Lambda layer at the end of the model. It normalizes the output of the net so all the embeddings has a unit length as it was suggested in the face net study.</p> <p>Can anybody help me out here?</p> <p>Model summary</p> <pre><code> Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 224, 224, 3) 0 ____________________________________________________________________________________________________ convolution2d_1 (Convolution2D) (None, 112, 112, 64) 9472 input_1[0][0] ____________________________________________________________________________________________________ batchnormalization_1 (BatchNormal(None, 112, 112, 64) 128 convolution2d_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_1 (MaxPooling2D) (None, 56, 56, 64) 0 batchnormalization_1[0][0] ____________________________________________________________________________________________________ convolution2d_2 (Convolution2D) (None, 56, 56, 64) 4160 maxpooling2d_1[0][0] ____________________________________________________________________________________________________ batchnormalization_2 (BatchNormal(None, 56, 56, 64) 128 convolution2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_3 (Convolution2D) (None, 56, 56, 192) 110784 batchnormalization_2[0][0] ____________________________________________________________________________________________________ batchnormalization_3 (BatchNormal(None, 56, 56, 192) 384 convolution2d_3[0][0] ____________________________________________________________________________________________________ maxpooling2d_2 (MaxPooling2D) (None, 28, 28, 192) 0 batchnormalization_3[0][0] ____________________________________________________________________________________________________ convolution2d_5 (Convolution2D) (None, 28, 28, 96) 18528 maxpooling2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_7 (Convolution2D) (None, 28, 28, 16) 3088 maxpooling2d_2[0][0] 
____________________________________________________________________________________________________ maxpooling2d_3 (MaxPooling2D) (None, 28, 28, 192) 0 maxpooling2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_4 (Convolution2D) (None, 28, 28, 64) 12352 maxpooling2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_6 (Convolution2D) (None, 28, 28, 128) 110720 convolution2d_5[0][0] ____________________________________________________________________________________________________ convolution2d_8 (Convolution2D) (None, 28, 28, 32) 12832 convolution2d_7[0][0] ____________________________________________________________________________________________________ convolution2d_9 (Convolution2D) (None, 28, 28, 32) 6176 maxpooling2d_3[0][0] ____________________________________________________________________________________________________ merge_1 (Merge) (None, 28, 28, 256) 0 convolution2d_4[0][0] convolution2d_6[0][0] convolution2d_8[0][0] convolution2d_9[0][0] ____________________________________________________________________________________________________ convolution2d_11 (Convolution2D) (None, 28, 28, 96) 24672 merge_1[0][0] ____________________________________________________________________________________________________ convolution2d_13 (Convolution2D) (None, 28, 28, 32) 8224 merge_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_4 (MaxPooling2D) (None, 28, 28, 256) 0 merge_1[0][0] ____________________________________________________________________________________________________ convolution2d_10 (Convolution2D) (None, 28, 28, 64) 16448 merge_1[0][0] ____________________________________________________________________________________________________ convolution2d_12 (Convolution2D) (None, 28, 28, 128) 110720 convolution2d_11[0][0] ____________________________________________________________________________________________________ convolution2d_14 (Convolution2D) (None, 28, 28, 64) 51264 convolution2d_13[0][0] ____________________________________________________________________________________________________ convolution2d_15 (Convolution2D) (None, 28, 28, 64) 16448 maxpooling2d_4[0][0] ____________________________________________________________________________________________________ merge_2 (Merge) (None, 28, 28, 320) 0 convolution2d_10[0][0] convolution2d_12[0][0] convolution2d_14[0][0] convolution2d_15[0][0] ____________________________________________________________________________________________________ convolution2d_16 (Convolution2D) (None, 28, 28, 128) 41088 merge_2[0][0] ____________________________________________________________________________________________________ convolution2d_18 (Convolution2D) (None, 28, 28, 32) 10272 merge_2[0][0] ____________________________________________________________________________________________________ convolution2d_17 (Convolution2D) (None, 14, 14, 256) 295168 convolution2d_16[0][0] ____________________________________________________________________________________________________ convolution2d_19 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_18[0][0] ____________________________________________________________________________________________________ maxpooling2d_5 (MaxPooling2D) (None, 14, 14, 320) 0 merge_2[0][0] 
____________________________________________________________________________________________________ merge_3 (Merge) (None, 14, 14, 640) 0 convolution2d_17[0][0] convolution2d_19[0][0] maxpooling2d_5[0][0] ____________________________________________________________________________________________________ convolution2d_21 (Convolution2D) (None, 14, 14, 96) 61536 merge_3[0][0] ____________________________________________________________________________________________________ convolution2d_23 (Convolution2D) (None, 14, 14, 32) 20512 merge_3[0][0] ____________________________________________________________________________________________________ maxpooling2d_6 (MaxPooling2D) (None, 14, 14, 640) 0 merge_3[0][0] ____________________________________________________________________________________________________ convolution2d_20 (Convolution2D) (None, 14, 14, 256) 164096 merge_3[0][0] ____________________________________________________________________________________________________ convolution2d_22 (Convolution2D) (None, 14, 14, 192) 166080 convolution2d_21[0][0] ____________________________________________________________________________________________________ convolution2d_24 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_23[0][0] ____________________________________________________________________________________________________ convolution2d_25 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_6[0][0] ____________________________________________________________________________________________________ merge_4 (Merge) (None, 14, 14, 640) 0 convolution2d_20[0][0] convolution2d_22[0][0] convolution2d_24[0][0] convolution2d_25[0][0] ____________________________________________________________________________________________________ convolution2d_27 (Convolution2D) (None, 14, 14, 112) 71792 merge_4[0][0] ____________________________________________________________________________________________________ convolution2d_29 (Convolution2D) (None, 14, 14, 32) 20512 merge_4[0][0] ____________________________________________________________________________________________________ maxpooling2d_7 (MaxPooling2D) (None, 14, 14, 640) 0 merge_4[0][0] ____________________________________________________________________________________________________ convolution2d_26 (Convolution2D) (None, 14, 14, 224) 143584 merge_4[0][0] ____________________________________________________________________________________________________ convolution2d_28 (Convolution2D) (None, 14, 14, 224) 226016 convolution2d_27[0][0] ____________________________________________________________________________________________________ convolution2d_30 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_29[0][0] ____________________________________________________________________________________________________ convolution2d_31 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_7[0][0] ____________________________________________________________________________________________________ merge_5 (Merge) (None, 14, 14, 640) 0 convolution2d_26[0][0] convolution2d_28[0][0] convolution2d_30[0][0] convolution2d_31[0][0] ____________________________________________________________________________________________________ convolution2d_33 (Convolution2D) (None, 14, 14, 128) 82048 merge_5[0][0] ____________________________________________________________________________________________________ convolution2d_35 (Convolution2D) (None, 14, 14, 32) 20512 merge_5[0][0] 
____________________________________________________________________________________________________ maxpooling2d_8 (MaxPooling2D) (None, 14, 14, 640) 0 merge_5[0][0] ____________________________________________________________________________________________________ convolution2d_32 (Convolution2D) (None, 14, 14, 192) 123072 merge_5[0][0] ____________________________________________________________________________________________________ convolution2d_34 (Convolution2D) (None, 14, 14, 256) 295168 convolution2d_33[0][0] ____________________________________________________________________________________________________ convolution2d_36 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_35[0][0] ____________________________________________________________________________________________________ convolution2d_37 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_8[0][0] ____________________________________________________________________________________________________ merge_6 (Merge) (None, 14, 14, 640) 0 convolution2d_32[0][0] convolution2d_34[0][0] convolution2d_36[0][0] convolution2d_37[0][0] ____________________________________________________________________________________________________ convolution2d_39 (Convolution2D) (None, 14, 14, 144) 92304 merge_6[0][0] ____________________________________________________________________________________________________ convolution2d_41 (Convolution2D) (None, 14, 14, 32) 20512 merge_6[0][0] ____________________________________________________________________________________________________ maxpooling2d_9 (MaxPooling2D) (None, 14, 14, 640) 0 merge_6[0][0] ____________________________________________________________________________________________________ convolution2d_38 (Convolution2D) (None, 14, 14, 160) 102560 merge_6[0][0] ____________________________________________________________________________________________________ convolution2d_40 (Convolution2D) (None, 14, 14, 288) 373536 convolution2d_39[0][0] ____________________________________________________________________________________________________ convolution2d_42 (Convolution2D) (None, 14, 14, 64) 51264 convolution2d_41[0][0] ____________________________________________________________________________________________________ convolution2d_43 (Convolution2D) (None, 14, 14, 128) 82048 maxpooling2d_9[0][0] ____________________________________________________________________________________________________ merge_7 (Merge) (None, 14, 14, 640) 0 convolution2d_38[0][0] convolution2d_40[0][0] convolution2d_42[0][0] convolution2d_43[0][0] ____________________________________________________________________________________________________ convolution2d_44 (Convolution2D) (None, 14, 14, 160) 102560 merge_7[0][0] ____________________________________________________________________________________________________ convolution2d_46 (Convolution2D) (None, 14, 14, 64) 41024 merge_7[0][0] ____________________________________________________________________________________________________ convolution2d_45 (Convolution2D) (None, 7, 7, 256) 368896 convolution2d_44[0][0] ____________________________________________________________________________________________________ convolution2d_47 (Convolution2D) (None, 7, 7, 128) 204928 convolution2d_46[0][0] ____________________________________________________________________________________________________ maxpooling2d_10 (MaxPooling2D) (None, 7, 7, 640) 0 merge_7[0][0] 
____________________________________________________________________________________________________ merge_8 (Merge) (None, 7, 7, 1024) 0 convolution2d_45[0][0] convolution2d_47[0][0] maxpooling2d_10[0][0] ____________________________________________________________________________________________________ convolution2d_49 (Convolution2D) (None, 7, 7, 192) 196800 merge_8[0][0] ____________________________________________________________________________________________________ convolution2d_51 (Convolution2D) (None, 7, 7, 48) 49200 merge_8[0][0] ____________________________________________________________________________________________________ maxpooling2d_11 (MaxPooling2D) (None, 7, 7, 1024) 0 merge_8[0][0] ____________________________________________________________________________________________________ convolution2d_48 (Convolution2D) (None, 7, 7, 384) 393600 merge_8[0][0] ____________________________________________________________________________________________________ convolution2d_50 (Convolution2D) (None, 7, 7, 384) 663936 convolution2d_49[0][0] ____________________________________________________________________________________________________ convolution2d_52 (Convolution2D) (None, 7, 7, 128) 153728 convolution2d_51[0][0] ____________________________________________________________________________________________________ convolution2d_53 (Convolution2D) (None, 7, 7, 128) 131200 maxpooling2d_11[0][0] ____________________________________________________________________________________________________ merge_9 (Merge) (None, 7, 7, 1024) 0 convolution2d_48[0][0] convolution2d_50[0][0] convolution2d_52[0][0] convolution2d_53[0][0] ____________________________________________________________________________________________________ convolution2d_55 (Convolution2D) (None, 7, 7, 192) 196800 merge_9[0][0] ____________________________________________________________________________________________________ convolution2d_57 (Convolution2D) (None, 7, 7, 48) 49200 merge_9[0][0] ____________________________________________________________________________________________________ maxpooling2d_12 (MaxPooling2D) (None, 7, 7, 1024) 0 merge_9[0][0] ____________________________________________________________________________________________________ convolution2d_54 (Convolution2D) (None, 7, 7, 384) 393600 merge_9[0][0] ____________________________________________________________________________________________________ convolution2d_56 (Convolution2D) (None, 7, 7, 384) 663936 convolution2d_55[0][0] ____________________________________________________________________________________________________ convolution2d_58 (Convolution2D) (None, 7, 7, 128) 153728 convolution2d_57[0][0] ____________________________________________________________________________________________________ convolution2d_59 (Convolution2D) (None, 7, 7, 128) 131200 maxpooling2d_12[0][0] ____________________________________________________________________________________________________ merge_10 (Merge) (None, 7, 7, 1024) 0 convolution2d_54[0][0] convolution2d_56[0][0] convolution2d_58[0][0] convolution2d_59[0][0] ____________________________________________________________________________________________________ averagepooling2d_1 (AveragePoolin(None, 1, 1, 1024) 0 merge_10[0][0] ____________________________________________________________________________________________________ flatten_1 (Flatten) (None, 1024) 0 averagepooling2d_1[0][0] 
____________________________________________________________________________________________________ dense_1 (Dense) (None, 128) 131200 flatten_1[0][0] ____________________________________________________________________________________________________ lambda_1 (Lambda) (None, 128) 0 dense_1[0][0] ==================================================================================================== Total params: 7456944 ____________________________________________________________________________________________________ None </code></pre>
<p>What could have happened, other than the learning rate simply being too high, is that an unstable triplet selection strategy was used. If, for example, you only use <strong>'hard triplets'</strong> (triplets where the a-n distance is smaller than the a-p distance), your network weights might collapse all embeddings to a single point (making the loss always equal to the margin, your <code>_alpha</code>, because all embedding distances are zero).</p> <p>This can be fixed by also using other kinds of triplets, like <strong>'semi-hard triplets'</strong>, where a-p is smaller than a-n but the gap between the two distances is still smaller than the margin. So it is worth checking which kinds of triplets each batch actually contains. This is explained in more detail in this blog post: <a href="https://omoindrot.github.io/triplet-loss" rel="noreferrer">https://omoindrot.github.io/triplet-loss</a></p>
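<p>To make the triplet categories concrete, a small helper along these lines can classify a triplet from its anchor-positive and anchor-negative distances (an editorial sketch, not from the original answer; <code>margin</code> plays the role of <code>_alpha</code> above):</p> <pre><code>def triplet_kind(d_ap, d_an, margin=0.2):
    # easy: the negative is already pushed out beyond the margin
    if d_an &gt;= d_ap + margin:
        return 'easy'
    # semi-hard: positive is closer than negative, but the margin is violated
    if d_ap &lt; d_an:
        return 'semi-hard'
    # hard: the negative is closer than the positive
    return 'hard'
</code></pre>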
neural-network|tensorflow|keras
11
5,403
65,958,288
Python solve complex inventory in Dataframe
<p>I need your help solving an inventory-ordering problem. This example has only 4 items, but the real DataFrame has 10,000 items.</p> <p>df = pd.DataFrame(data)</p> <pre><code>              Inventory count  Batch size  Store A needs  Store B needs  Store C needs  Total requires:  Actually requires:
Buckets                   198          20             63             18            104              185                 220
Candy Bars                876         100            567            435            673             1675                1800
Coke (cans)              1759           6           1212            758            836             2806                2814
Masks (boxes)            2000        1000            333            444            555             1332                3000
</code></pre> <ul> <li>df['Inventory count'] is how much inventory I have on hand</li> <li>df['Batch size'] is the multiple I have to assign in each allocation</li> </ul> <p>In the Buckets row, I have 198 buckets on hand. The three stores' total requirement is 185, but due to the batch size, 63-&gt;80, 18-&gt;20, 104-&gt;120, and 80+20+120 = 220. How do I assign the inventory?</p> <p>For Candy Bars and Coke, the inventory count is less than df['Actually requires:']; how can I assign them based on demand ranking?</p> <p>Alternatively, I am open to suggestions if there are better solutions.</p> <p>This is the dataframe: <a href="https://i.stack.imgur.com/sxPLC.jpg" rel="nofollow noreferrer">enter image description here</a></p> <p>Thank you for helping me.</p>
<p>I'll qualify this response by saying you should search allocation strategies for inventory and/or supply chain management if the above example is oversimplified - meaning many more stores and/or inventory allocation strategies are needed.</p> <p>I'm just returning a dictionary, but if you want or need to, you can write the data back to columns in the dataframe, create a new dataframe, or send the data off to a file directly. It all depends on what you need to do.</p> <pre><code>import math

def allocation(x):
    # everything below is counted in whole batches
    max_units_available = math.floor(x['Inventory count'] / x['Batch size'])
    sa_units_need = math.ceil(x['Store A needs'] / x['Batch size'])
    sb_units_need = math.ceil(x['Store B needs'] / x['Batch size'])
    sc_units_need = math.ceil(x['Store C needs'] / x['Batch size'])
    allocation_needs = [sa_units_need, sb_units_need, sc_units_need]
    store_list = ['Store A received', 'Store B received', 'Store C received']

    if max_units_available &gt;= sum(allocation_needs):
        # enough stock: every store gets what it asked for
        return dict(zip(store_list, allocation_needs))
    else:
        # allocation goes in order until all units are allocated;
        # you can add as many allocation strategies as you wish, you just
        # need to code them and work them into your code
        allocation_received = [0] * len(allocation_needs)
        inv_sum = 0
        for i in range(0, len(allocation_needs)):
            inv_sum = inv_sum + allocation_needs[i]
            if max_units_available &lt; inv_sum:
                # not enough left for the full request: hand over the remainder
                allocation_received[i] = max_units_available
                break
            else:
                allocation_received[i] = allocation_needs[i]
                max_units_available = max_units_available - allocation_needs[i]
        return dict(zip(store_list, allocation_received))

df.apply(lambda x: allocation(x), axis=1)
</code></pre> <p>Output:</p> <pre><code>Buckets         {'Store A received': 4, 'Store B received': 1, 'Store C received': 4}
CandyBars       {'Store A received': 6, 'Store B received': 2, 'Store C received': 0}
Coke(cans)      {'Store A received': 202, 'Store B received': 91, 'Store C received': 0}
Masks(boxes)    {'Store A received': 1, 'Store B received': 1, 'Store C received': 0}
</code></pre>
python|pandas|dataframe|numpy|inventory-management
0
5,404
66,177,791
pd.categorical didn't sort bars by specified orders in plot
<p>I was trying to use pd.Categorical to order the bars in a barplot, but the result still didn't get sorted.</p> <pre><code>import pandas as pd
import numpy as np

np.random.seed(10)
df = pd.DataFrame({'x':np.random.randint(1,10,15),'y': ['x']*15})
df.loc[:,'group'] = df['x'].apply(lambda x:'&gt;=5' if x&gt;=5 else x)
df['group'] = df['group'].astype('string')

sample = df['group'].value_counts().reset_index()
sample['index'] = pd.Categorical(sample['index'],categories=['1','2','3','4','5','6','7','8','9','&gt;=5'], ordered=True)
sample.plot(x='index',kind='bar')
</code></pre> <p>After applying ordered=True, the categories still weren't in order and '&gt;=5' was not at the end of the barplot. Not sure why.</p>
<p><code>DataFrame.plot.bar()</code> plots the bars in order of occurrence (that is, against the range) and relabels the ticks with the column specified by <code>x</code>.</p> <p>This is the case even with numerical data:</p> <pre><code>pd.DataFrame({'idx': [3,2,1], 'val':[4,5,6]}).plot.bar(x='idx')
</code></pre> <p>would give:</p> <p><a href="https://i.stack.imgur.com/evtZm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/evtZm.png" alt="enter image description here" /></a></p> <p>In your case, you will need to sort the data before plotting:</p> <pre><code>sample.sort_values('index').plot(x='index',kind='bar')
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/cHxO5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cHxO5.png" alt="enter image description here" /></a></p>
python|pandas
0
5,405
46,588,829
Does slim of tensorflow have distributed version?
<p>The <a href="https://github.com/tensorflow/models/blob/master/research/slim/deployment/model_deploy.py" rel="nofollow noreferrer">model_deploy</a> of slim has DeploymentConfig parameters, such as <code>num_replicas</code>, <code>num_ps_tasks</code>, <code>worker_job_name</code>, <code>ps_job_name</code>, these terms may appear in distributed version, but I don't think the <code>model_deploy</code> is distributed version, because it don't declare <code>tf.train.ClusterSpec</code>.</p> <p>So I can't understand <code>model_deploy</code>, does it want to simulate distributed version on stand-alone computer? And on stand-alone computer, what does <code>ps</code> and <code>worker</code> mean? And the name of <code>tf.device</code>, such as <code>/job:ps/device:CPU:0/task:0</code>, what hardware are <code>/job:ps</code> ans <code>/task:0</code> corresponding to ?</p>
<p>You can call model_deploy in a single-machine and in a distributed setup. If you only have a single machine I recommend setting num_ps_tasks=0 and num_replicas=1 to get the right behavior.</p>
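<p>A hedged illustration of that single-machine configuration (a sketch; <code>model_deploy.DeploymentConfig</code> accepts these arguments in the slim deployment module, but check the version you have vendored and the import path for your checkout):</p> <pre><code>from deployment import model_deploy

# single machine: no parameter servers, one replica, clones on local devices
config = model_deploy.DeploymentConfig(
    num_clones=1,       # one model clone (e.g. one GPU)
    clone_on_cpu=False,
    replica_id=0,
    num_replicas=1,     # single-worker setup
    num_ps_tasks=0)     # no ps job, so variables stay local
</code></pre> <p>In a truly distributed run, <code>ps</code> tasks hold the variables and <code>worker</code> tasks compute gradients; device strings like <code>/job:ps/task:0</code> then map to processes declared in a <code>tf.train.ClusterSpec</code>, not to specific hardware on one box.</p>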
python|c++|tensorflow|distributed-computing|grpc
0
5,406
46,612,097
Extracting number from inside parentheses in column list
<p>I have a pandas dataframe column of lists and want to extract numbers from list strings and add them to their own separate column. </p> <pre><code> Column A 0 [ FUNNY (1), CARING (1)] 1 [ Gives good feedback (17), Clear communicator (2)] 2 [ CARING (3), Gives good feedback (3)] 3 [ FUNNY (2), Clear communicator (1)] 4 [] 5 [] 6 [ CARING (1), Clear communicator (1)] </code></pre> <p>I would like the output to look as follows:</p> <pre><code>FUNNY CARING Gives good feedback Clear communicator 1 1 None None None None 17 2 None 3 3 None 2 None None 1 None None None None </code></pre> <p>etc...</p>
<p>Let's use <code>apply</code> with <code>pd.Series</code>, then <code>extract</code> and reshape with <code>set_index</code> and <code>unstack</code>:</p> <pre><code>df['Column A'].apply(pd.Series).stack().str.extract(r'(\w+)\((\d+)', expand=True)\ .reset_index(1, drop=True).set_index(0, append=True)[1]\ .unstack(1) </code></pre> <p>Output:</p> <pre><code>0 Authentic Caring Classy Funny 0 1 3 None 2 1 2 None 1 2 </code></pre> <h1>Edit with new input data set:</h1> <pre><code>df['Column A'].apply(pd.Series).stack().str.extract(r'(\w+).*\((\d+)', expand=True)\ .reset_index(1, drop=True)\ .set_index(0, append=True)[1]\ .unstack(1) 0 CARING Clear FUNNY Gives 0 1 None 1 None 1 None 2 None 17 2 3 None None 3 3 None 1 2 None 6 1 1 None None </code></pre>
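<p>A caveat worth noting on the pattern above (an editorial addition, not part of the original answer): <code>(\w+)</code> keeps only the first word of each label, so 'Gives good feedback' becomes 'Gives'. A pattern that captures the whole label, with the same reshaping, could look like:</p> <pre><code>df['Column A'].apply(pd.Series).stack().str.extract(r'\s*([^(]+?)\s*\((\d+)', expand=True)\
              .reset_index(1, drop=True)\
              .set_index(0, append=True)[1]\
              .unstack(1)
</code></pre>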
python|pandas
1
5,407
58,177,063
Unable Color Code Points on GeoPanda Map with Contextly Background Map
<p>I am trying to create a Map that has points that are color coded by a category - however when I color by category the index is being included in the category so every point is its own color. Here is some sample code to recreate my problem.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline from matplotlib import cm import geopandas as gpd from shapely.geometry import Point import contextily as ctx list = [[39.17413494, -84.36475604, 'BK'], [38.96151336, -84.55732482, 'McDonalds'], [38.95100498, -84.55013050000001, 'McDonalds'], [38.96186501, -84.55717946, 'McDonalds'], [39.00969774, -84.50139703, 'Subway'], [39.09614656, -84.56445953, 'Pizza Hut'], [38.98661028, -84.39965444, 'Popeyes'], [39.34727542, -84.66033389, 'Arbys'], [39.09854089, -84.55881323, 'Wendys'], [39.0985409, -84.55881323, 'Subway'], [38.98693673, -84.39936496, 'Starbucks'], [39.17663372, -84.66664250000001, 'ChickFilA'], [39.19368097, -84.67709306, 'Subway'], [39.202496000000004, -84.5474509, 'Starbucks'], [39.202496000000004, -84.5474509, 'Starbucks'], [39.05680444, -84.32690772, 'BK'], [39.049786100000006, -84.39536650000001, 'McDonalds'], [39.049786100000006, -84.39536650000001, 'McDonalds'], [39.049786100000006, -84.39536650000001, 'McDonalds'], [39.049786100000006, -84.39536650000001, 'Subway'], [39.049786100000006, -84.39536650000001, 'Pizza Hut'], [39.04982251, -84.39533805, 'Popeyes'], [39.04982249, -84.39533811, 'Arbys'], [39.04982581, -84.39533835, 'Wendys'], [39.04982419, -84.39533558, 'Subway'], [39.04982533, -84.39534599, 'Starbucks'], [39.04982604, -84.39534769, 'ChickFilA'], [39.356410100000005, -84.361086, 'Subway'], [39.18283407, -84.38227921, 'Starbucks'], [39.43731072, -84.26926351, 'Starbucks']] data = pd.DataFrame(list, columns =['Lat', 'Long', 'Type']) geometry = [Point(xy) for xy in zip(data['Long'], data['Lat'])] crs = {'init':'epsg:4326'} gdf = gpd.GeoDataFrame(data, crs=crs, geometry=geometry) cmap = plt.cm.get_cmap('Dark2', 9) gdf = gdf.to_crs(epsg=3857) ax = gdf.plot(c=gdf.Type, cmap=cmap, label=gdf.Type, figsize=(10,10), alpha=.5) ctx.add_basemap(ax, url=ctx.providers.Stamen.TonerLite) ax.set_axis_off() ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) </code></pre> <p>Which results in the following image:</p> <p><a href="https://i.stack.imgur.com/rXibL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rXibL.png" alt="enter image description here"></a></p> <p>As you can see each "McDonalds" is its own color and the legend lists each data point.</p>
<p>You are using incorrect attributes for plot. Geopandas (at least in recent versions) needs <code>column</code>, not <code>c</code>.</p> <pre><code>ax = gdf.plot(column=gdf.Type, cmap=cmap, label=gdf.Type, figsize=(10,10), alpha=.5)
</code></pre> <p>This seems to work. But you should be aware that you have multiple overlapping points, so it does not look the same all the time.</p> <p>EDIT: to have a proper legend, you need to use Geopandas to draw it and pass <code>legend_kwds</code> to the Geopandas <code>plot</code>:</p> <pre><code>ax = gdf.plot(column=gdf.Type, cmap=cmap, label=gdf.Type, figsize=(10,10), alpha=.5,
              legend=True, legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5)})
ctx.add_basemap(ax, url=ctx.providers.Stamen.TonerLite)
ax.set_axis_off()
</code></pre>
python|matplotlib|geopandas
1
5,408
58,233,238
Read csv file using pandas and display cell value with sorted date/time
<p>I am trying to read a csv file using pandas in Python. I have referred to this link: <a href="https://stackoverflow.com/questions/32897414/pandas-read-csv-moves-column-names-over-one">pandas.read_csv moves column names over one</a></p> <p>and used the code below to display the first row of the csv file.</p> <pre><code>prodid ProdParent productname StartDate wfStatus ErrorMessage
FCT TDAR 2752_bg42328_US 3/8/2019 15:21 "PROCESs IS empty"
VEE TNL 2752_bg42329_US 3/8/2019 15:26 "success"
FCT TRAD 2752_bg42328_US 3/8/2019 15:21 "PROCESs IS empty"
VEE TNL 2752_bg42329_US. 3/8/2019 15:32
VEE TNL 2752_bg42329_US 3/8/2019 15:34
VEE TNL 2752_bg42329_US 3/8/2019 15:38
JUR TLO 2755_bg567_US 4/8/2019 03:19
</code></pre> <p>How do I iterate through each and every row using pandas? My csv file has header columns named errorMessage, productName, start date, wfStatus, etc. The issue I am facing is that I have some 8,000 rows in my csv file and I need to filter/fetch only those rows/column values that meet the conditions below:</p> <p>if <code>errorMessage_column_value == blank/null value</code> OR <code>wfSTATUS_columnvalue == blank/null</code>, then fetch the corresponding productName cell/column value where it matches the above condition.</p> <p>Now, if multiple productname values exist on the same date with different times (in the startdate column), I need to get the latest/most recent productName value only.</p> <p>How do I achieve this?</p> <pre><code>df = pd.read_csv(csv_ctrl_file, index_col=False)
print(df.head(1))
</code></pre>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isna.html" rel="nofollow noreferrer"><code>isna()</code></a> to find the blank rows and then use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer">boolean indexing</a>:</p> <pre><code>df[df["errorMessage"].isna() | df[" wfSTATUS_columnvalue"].isna()] </code></pre>
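<p>The filter above covers the blank-value condition; for the second requirement (keeping only the most recent productName per date), a possible continuation is the following sketch (the column names here are taken from the sample and may need adjusting to the real headers):</p> <pre><code>mask = df["ErrorMessage"].isna() | df["wfStatus"].isna()
latest = (df[mask]
          .assign(StartDate=pd.to_datetime(df["StartDate"]))  # parse timestamps
          .sort_values("StartDate")
          .drop_duplicates("productname", keep="last"))       # newest row per product
</code></pre>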
python|python-3.x|pandas
0
5,409
58,188,704
Unable to import Keras(from TensorFlow 2.0) in PyCharm 2019.2
<p>I have just installed the stable version of TensorFlow 2.0 (released on October 1st 2019) in PyCharm.</p> <p><strong>The problem</strong> is that the <strong>keras package is unavailable</strong>. <a href="https://i.stack.imgur.com/wZfqL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wZfqL.png" alt="Unable to import keras"></a></p> <p>The actual error is : </p> <blockquote> <p>"<strong>cannot import name 'keras' from tensorflow</strong>"</p> </blockquote> <p>I have installed via <code>pip install tensorflow==2.0.0</code> the <code>CPU version</code>, and then uninstalled the CPU version and installed the GPU version , via <code>pip install tensorflow-gpu==2.0.0.</code></p> <p>Neither of the above worked versions of TensorFlow were working properly(could not import keras or other packages via <code>from tensorflow.package_X import Y</code>). </p> <p>If I <strong>revert TensorFlow to version 2.0.0.b1</strong>, <strong>keras is available</strong> as a package (PyCharm recognises it) and everything runs smoothly.</p> <p>Is there a way to solve this problem? Am I making a mistake in the installation process?</p> <p>UPDATE --- Importing from the Python Console works and allows the imports without any error. <a href="https://i.stack.imgur.com/85nh1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/85nh1.png" alt="Writing from the console works"></a></p>
<p><strong>For PyCharm Users</strong></p> <p>For those who use PyCharm: install the upcoming (EAP) release <code>2019.3 EAP build 193.3793.14</code> from <a href="https://www.jetbrains.com/idea/nextversion/" rel="noreferrer">here</a>. With that, you will be able to use autocomplete for the current stable release of TensorFlow (i.e. 2.0). I have tried it and it works :).</p> <p><strong>For other IDEs</strong></p> <p>For users with other IDEs, this will be resolved only after the stable version is released, which is anyway the case now. But this might take some more time to fix. See the comment <a href="https://github.com/tensorflow/tensorflow/issues/31973#issuecomment-524928419" rel="noreferrer">here</a>. I assume it is wise to wait and keep using <code>version 2.0.0.b1</code>. On the other hand, avoid imports from <code>tensorflow_core</code> if you do not want to refactor your code in the future.</p> <p><strong>Note:</strong> for autocomplete to work, use an import statement as below:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow.keras as tk

# this does not work for autocomplete
# from tensorflow import keras as tk
</code></pre> <p>The autocomplete works for the TensorFlow 2.0.0 CPU version, but it does not work for the GPU version.</p>
tensorflow|keras|pycharm|tensorflow2.0|tf.keras
11
5,410
69,211,296
TypeError: unsupported operand type(s) for +: 'int' and 'str' - Pandas DataFrame
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">addition</th> <th style="text-align: center;">add-revised</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">6 insertions(+)</td> <td style="text-align: center;">6</td> </tr> <tr> <td style="text-align: center;">NaN</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: center;">8 insertions(+)</td> <td style="text-align: center;">8</td> </tr> </tbody> </table> </div> <p>From the 'addition' column of the data frame, I created the 'add-revised' column.</p> <pre><code> df['add-revised'] = df.addition.str.extract('(\d+)') df['add-revised'] = df['add-revised'].fillna(0) </code></pre> <p>When I attempt to do</p> <pre><code> df_new['add-revised'].mean() </code></pre> <p>It's giving me the following error</p> <pre><code>TypeError: unsupported operand type(s) for +: 'int' and 'str' </code></pre> <p>I attempted to solve the problem with</p> <pre><code>df_new['add-revised'].to_numeric() </code></pre> <p>and it's giving me the following error</p> <pre><code>AttributeError: 'Series' object has no attribute 'to_numeric' </code></pre>
<p>Did you try:</p> <pre><code>df_new['add-revised'] = df_new['add-revised'].astype(int) </code></pre> <p>it works for pandas version '1.2.0'</p>
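<p>For reference on the errors in the question: the <code>to_numeric</code> attempt failed because it is a top-level pandas function, not a Series method. This variant should also work, and <code>errors='coerce'</code> guards against stray non-numeric strings:</p> <pre><code>df_new['add-revised'] = pd.to_numeric(df_new['add-revised'], errors='coerce').fillna(0).astype(int)
</code></pre>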
python|pandas|dataframe
1
5,411
44,416,354
how to save a DNN model with tensorflow
<p>I have code that trains a DNN network. I don't want to train this network every time, because it takes too much time. How can I save the model?</p> <pre><code>def train_model(filename, validation_ratio=0.):
    # define model to be trained
    columns = [tf.contrib.layers.real_valued_column(str(col), dtype=tf.int8)
               for col in FEATURE_COLS]
    classifier = tf.contrib.learn.DNNClassifier(
        feature_columns=columns,
        hidden_units=[100, 100],
        n_classes=N_LABELS,
        dropout=0.3)

    # load and split data
    print('Loading training data.')
    data = load_batch(filename)
    overall_size = data.shape[0]
    learn_size = int(overall_size * (1 - validation_ratio))
    learn, validation = np.array_split(data, [learn_size])
    print('Finished loading data. Samples count = {}'.format(overall_size))

    # learning
    print('Training using batch of size {}'.format(learn_size))
    classifier.fit(input_fn=lambda: pipeline(learn), steps=learn_size)

    if validation_ratio &gt; 0:
        validate_model(classifier, learn, validation)

    return classifier
</code></pre> <p>After running this function, I get a <code>DNNClassifier</code> which I want to save.</p>
<p>I believe this has already been answered here: <a href="https://stackoverflow.com/questions/33759623/tensorflow-how-to-save-restore-a-model?rq=1">Tensorflow: how to save/restore a model?</a></p> <pre><code>saver = tf.train.Saver() saver.save(sess, 'my_test_model',global_step=1000) </code></pre> <p>(code copied from that question's answer)</p>
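<p>A note beyond the linked answer: since the code uses <code>tf.contrib.learn.DNNClassifier</code> rather than a raw session, the simpler route is the estimator's own <code>model_dir</code> argument, which checkpoints automatically during <code>fit</code> and restores on the next construction (a sketch; the path is an example):</p> <pre><code>classifier = tf.contrib.learn.DNNClassifier(
    feature_columns=columns,
    hidden_units=[100, 100],
    n_classes=N_LABELS,
    dropout=0.3,
    model_dir='/tmp/dnn_model')  # checkpoints land here and are reloaded on reuse
</code></pre>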
python|tensorflow|dotnetnuke
1
5,412
61,086,228
How do I remove a specific value from a row in a pandas dataframe?
<p>I have a pandas dataframe that looks something like this:</p> <pre><code>   Column1  Column2  Column3
0        1      NaN      NaN
1        4      NaN      NaN
2      NaN        3      NaN
3      NaN       98      NaN
4      NaN      NaN      562
5      NaN      NaN      742
.
.
.
</code></pre> <p>How would I go about removing all of the unnecessary NaNs and make it look like this?</p> <pre><code>   Column1  Column2  Column3
0        1        3      562
1        4       98      742
.
.
.
</code></pre>
<p>Run:</p> <pre><code>df.apply(lambda col: col.dropna().reset_index(drop=True).astype(int))
</code></pre> <p>Just apply, to each column, a function that drops the <em>NaN</em> values in that column. Because of the <em>NaN</em> values, the columns are generally of <em>float</em> type, so I attempt to cast them to <em>int</em>.</p> <p>Note also that other solutions work only as long as each column contains an equal number of non-NaN values.</p> <p>To check it, add the following row:</p> <pre><code>6 NaN NaN 999
</code></pre> <p>to your 6 initial rows, so that <em>Column3</em> now contains <strong>3</strong> non-NaN values, whereas the other columns contain only <strong>2</strong>.</p> <p>The solution by <em>yatu</em> drops this last row, whereas the solution by <em>Quang</em> results in <em>ValueError: arrays must all be same length</em>.</p> <p>But my solution also works in this case, leaving trailing <em>NaN</em>s in the "too short" columns.</p>
python|pandas
2
5,413
60,792,766
How to fill the area with in Matplotlib
<p>I need to fill the area between y1 and y, but I don't understand how to limit the area under y2.</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

y = lambda z: (4 * z - z ** 2) ** (1 / 2)
y1 = lambda x: (8 * x - x ** 2) ** (1 / 2)
y2 = lambda c: c * 3 ** (1 / 2)

x = np.linspace(0, 12, 500)
z = np.linspace(0, 12, 500)
c = np.linspace(0, 12, 500)

plt.ylim(0, 4)
plt.xlim(0, 4)
plt.plot(z, y(z), color='blue', label="$y=\\sqrt{4x-x^2}$")
plt.plot(c, y2(c), color='black', label='$y=x\\sqrt{3}$')
plt.plot(x, y1(x), color='red', label='$y=\\sqrt{8x-x^2}$')
plt.plot([0, 4], [0, 0], color='yellow', label='y=0')
plt.grid(True, zorder=5)
plt.fill_between(x, y(z), y1(x), where=(y2(c) &gt;= y1(x)), alpha=0.5)
plt.legend()
plt.show()
</code></pre>
<p>Do you want to fill between the minimum of <code>y1, y2</code> and <code>y</code>?</p> <pre><code>miny = np.minimum(y2(x), y1(x))
plt.fill_between(x, y(x), miny, where=(miny &gt;= y(x)), alpha=0.5)
plt.legend()
plt.show()
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/JdDxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JdDxM.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|math|plot
2
5,414
61,162,881
Pivoting dataframes with pd.melt() on time series data
<p>I have some data here:</p> <pre><code>   Country/Region 1/22/20 1/23/20 1/24/20 1/25/20 1/26/20 1/27/20
0  Afghanistan 0 0 0 0 0
1  Albania 0 0 0 0 0
2  Algeria 0 0 0 0 0
3  Andorra 0 0 0 0 0
4  Angola 0 0 0 0 0
5  Antigua and Barbuda 0 0 0 0 0
6  Argentina 0 0 0 0 0
7  Armenia 0 0 0 0 0
8  Australia 0 0 0 0 0
9  Australia 0 0 0 0 3
10 Australia 0 0 0 0 0
11 Australia 0 0 0 0 0
12 Australia 0 0 0 0 0
13 Australia 0 0 0 0 0
14 Australia 0 0 0 0 1
15 Australia 0 0 0 0 0
16 Austria 0 0 0 0 0
17 Azerbaijan 0 0 0 0 0
18 Bahamas 0 0 0 0 0
19 Bahrain 0 0 0 0 0
20 Bangladesh 0 0 0 0 0
</code></pre> <p>I'd like to rearrange this so that the dates are rows, while the countries are columns. Like this:</p> <pre><code>Country/Region  Afghanistan  Albania
1/22/20                   0        0
1/23/20                   0        0
1/24/20                   0        0
</code></pre> <p>and so on. I've tried to use pd.melt, but can't quite nail how to get the desired output. Here's my attempt:</p> <pre><code>%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math

data = pd.read_csv("covid.csv", sep=",")
data = data.drop(["Province/State","Lat","Long"], axis=1)
data_melted = data.melt(value_vars=data.columns[1:], var_name="Date",value_name="Cases")

      Date  Cases
0  1/22/20      0
1  1/22/20      0
2  1/22/20      0
3  1/22/20      0
4  1/22/20      0
5  1/22/20      0
6  1/22/20      0
7  1/22/20      0
8  1/22/20      0
9  1/22/20      0
10 1/22/20      0
11 1/22/20      0
12 1/22/20      0
13 1/22/20      0
14 1/22/20      0
</code></pre> <p>I also tried:</p> <pre><code>data_melted = data.melt(value_vars=[data.columns[1:], "Country/Region"])
</code></pre> <p>but this came up with a TypeError: unhashable type: 'Index' even though "Country/Region" wasn't the index.</p> <p>Would appreciate any help on this.</p>
<p>You are looking to transpose the table:</p> <pre><code>df.set_index('Country/Region').T
</code></pre> <p>I noticed that <code>Australia</code> was repeated multiple times; if you want to consolidate by adding them up:</p> <pre><code>df.set_index('Country/Region').T \
  .groupby(level=0, axis=1) \
  .sum()
</code></pre>
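<p>A side note beyond the original answer: grouping along <code>axis=1</code> is deprecated in recent pandas, so the same consolidation can be written by grouping the rows first and transposing afterwards:</p> <pre><code>df.groupby('Country/Region').sum().T
</code></pre>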
python|pandas|time-series|pivot|melt
2
5,415
71,605,903
Filtering string in column (counts of string) plus average of second column
<p>Looking for a way to filter a column &quot;Role&quot; for counts of a string and get an average of a second column &quot;Rank&quot; for these values. I tried value counts and string.contains but do not know how to bring this together.</p> <pre><code>import pandas as pd

data = {'Role':['Big, Big, Guard, Guard, Forward', 'Big, Big, Guard, Guard, Forward',
                'Big, Guard, Big, Guard, Guard', 'Big, Big, Guard, Forward, Big',
                'Guard, Big, Guard, Guard, Big', 'Big, Big, Guard, Forward, Big'],
        'Rank':[10, 6, 5, 2, 1, 3]}
df = pd.DataFrame(data)
print(df)
</code></pre> <p>df</p> <pre><code>                              Role  Rank
0  Big, Big, Guard, Guard, Forward    10
1  Big, Big, Guard, Guard, Forward     6
2    Big, Guard, Big, Guard, Guard     5
3    Big, Big, Guard, Forward, Big     2
4    Guard, Big, Guard, Guard, Big     1
5    Big, Big, Guard, Forward, Big     3
</code></pre> <p>Idea of the result, filtering for &quot;2* Big&quot;, .........</p> <pre><code>Role           Value count  Rank/avg
Big, Big                 4       5.5
Big, Big, Big            2       2.5
</code></pre> <p>I just edited the values for two Bigs; the original df is too big to add here. The output for 2* Big and 3* Big is the result I am hoping for.</p>
<p>Not exactly the same output format, but if what you are looking for is the <code>Rank/avg</code>, maybe that helps you:</p> <pre class="lang-py prettyprint-override"><code>data = {'Role':['Big, Big, Guard, Guard, Forward', 'Big, Big, Guard, Guard, Forward', 'Big, Guard, Big, Guard, Guard', 'Big, Big, Guard, Forward, Big', 'Guard, Big, Guard, Guard, Big','Big, Big, Guard, Forward, Big' ], 'Rank':[10, 6, 5, 2, 1, 3]} df = pd.DataFrame(data) df['Role'] = df['Role'].str.strip() # Create another column to hold `Role` values as a list df['Role2'] = df['Role'].str.split(', ') # Count how many times each role appears in each `Role` value role_counts = df['Role2'].apply(lambda x: pd.Series(x).value_counts()) # Merge with original dataframe df2 = df.merge(role_counts, left_index=True, right_index=True) # Calculate average by `Role` count df2[role_counts.columns] = df2[role_counts.columns].fillna(0) df2['Rank avg'] = 0 for col in role_counts: df2['Rank avg'] = df2.groupby(col)['Rank'].transform('mean') print(df2) </code></pre> <p>Output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>-</th> <th>Role</th> <th>Rank</th> <th>Role2</th> <th>Big</th> <th>Guard</th> <th>Forward</th> <th>Rank avg</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>Big, Big, Guard, Guard, Forward</td> <td>10</td> <td>[Big, Big, Guard, Guard, Forward]</td> <td>2.0</td> <td>2.0</td> <td>1.0</td> <td>5.25</td> </tr> <tr> <td>1</td> <td>Big, Big, Guard, Guard, Forward</td> <td>6</td> <td>[Big, Big, Guard, Guard, Forward]</td> <td>2.0</td> <td>2.0</td> <td>1.0</td> <td>5.25</td> </tr> <tr> <td>2</td> <td>Big, Guard, Big, Guard, Guard</td> <td>5</td> <td>[Big, Guard, Big, Guard, Guard]</td> <td>2.0</td> <td>3.0</td> <td>0.0</td> <td>3.00</td> </tr> <tr> <td>3</td> <td>Big, Big, Guard, Forward, Big</td> <td>2</td> <td>[Big, Big, Guard, Forward, Big]</td> <td>3.0</td> <td>1.0</td> <td>1.0</td> <td>5.25</td> </tr> <tr> <td>4</td> <td>Guard, Big, Guard, Guard, Big</td> <td>1</td> <td>[Guard, Big, Guard, Guard, Big]</td> <td>2.0</td> <td>3.0</td> <td>0.0</td> <td>3.00</td> </tr> <tr> <td>5</td> <td>Big, Big, Guard, Forward, Big</td> <td>3</td> <td>[Big, Big, Guard, Forward, Big]</td> <td>3.0</td> <td>1.0</td> <td>1.0</td> <td>5.25</td> </tr> </tbody> </table> </div>
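<p>One caution about the loop above (an editorial note, not part of the original answer): each pass overwrites <code>'Rank avg'</code>, so only the grouping by the last role column survives; the printed table appears to reflect a grouping by the <code>Forward</code> count. To average by the count of a single role such as <code>Big</code>, which reproduces the asker's expected 5.5 / 2.5, one groupby is enough:</p> <pre><code># mean Rank over rows sharing the same 'Big' count
df2['Rank avg'] = df2.groupby('Big')['Rank'].transform('mean')

# or a compact summary: one row per Big-count, with occurrence counts
summary = df2.groupby('Big')['Rank'].agg(['count', 'mean'])
</code></pre>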
python|pandas
0
5,416
69,842,813
pandas read_xml missing data
<p>I have tried using pandas <code>read_xml</code> and it reads most of the XML fine, but it leaves some parts out because they are in a slightly different format. I have included an extract below: it reads &quot;Type&quot; and &quot;Activation&quot; fine, but not the &quot;Amt&quot; value. It picks up the column heading &quot;Amt&quot;, just not the value. Could anyone point me in the right direction on how to get it to read it? Thanks.</p> <pre><code>&lt;Type&gt;PYI&lt;/Type&gt;
&lt;Activation&gt;N&lt;/Activation&gt;
&lt;Amt val=&quot;4000&quot; curr=&quot;GBP&quot;/&gt;

xml_df = pd.read_xml(xml_data)
</code></pre> <p>Is anybody able to help? I have gone through the documentation for <code>pandas.read_xml</code> but I can't see why it wouldn't pick this up.</p>
<p>By default, <a href="https://pandas.pydata.org/docs/dev/reference/api/pandas.read_xml.html" rel="nofollow noreferrer"><code>pandas.read_xml</code></a> parses all the <em>immediate</em> descendants of a set of nodes including its child nodes and attributes. Unless, the <code>xpath</code> argument indicates it, <code>read_xml</code> will not go further than <em>immediate</em> descendants.</p> <p>To illustrate your use case. Below is likely the generalized set up of your XML where <code>&lt;Type&gt;</code> and its siblings, <code>&lt;Activation&gt;</code> and <code>&lt;Amt&gt;</code> are parsed. However, <code>&lt;Amt&gt;</code> does not contain a text node, only attributes. So the value in that column should be empty.</p> <pre class="lang-xml prettyprint-override"><code>&lt;root&gt; &lt;row&gt; &lt;Type&gt;PYI&lt;/Type&gt; &lt;!-- Type IS A CHILD NODE OF row --&gt; &lt;Activation&gt;N&lt;/Activation&gt; &lt;!-- Activation IS A CHILD NODE OF row --&gt; &lt;Amt val=&quot;4000&quot; curr=&quot;GBP&quot;/&gt; &lt;!-- Amt IS A CHILD NODE OF row --&gt; &lt;/row&gt; &lt;/root&gt; </code></pre> <p>But then you ask, why did <code>read_xml</code> ignore the <em>val</em> and <em>curr</em> attributes? Because each are not an <em>immediate</em> descendant of <code>&lt;row&gt;</code>. They are descendants of <code>&lt;Amt&gt;</code> (i.e., grandchildren of <code>&lt;row&gt;</code>). If attributes were moved to <code>&lt;row&gt;</code>, then they will be captured as shown below:</p> <pre class="lang-xml prettyprint-override"><code>&lt;root&gt; &lt;row val=&quot;4000&quot; curr=&quot;GBP&quot;&gt; &lt;!-- val AND curr ARE CHILD ATTRIBS OF row --&gt; &lt;Type&gt;PYI&lt;/Type&gt; &lt;!-- Type IS A CHILD NODE OF row --&gt; &lt;Activation&gt;N&lt;/Activation&gt; &lt;!-- Activation IS A CHILD NODE OF row --&gt; &lt;Amt/&gt; &lt;!-- Amt IS A CHILD NODE OF row --&gt; &lt;/row&gt; &lt;/root&gt; </code></pre> <p>To capture those attributes, adjust <code>xpath</code> argument to point to its immediate parent:</p> <pre class="lang-py prettyprint-override"><code>amt_df = pd.read_xml(&quot;Input.xml&quot;, xpath=&quot;//Amt&quot;) </code></pre> <hr /> <p>To have such attributes captured with <code>&lt;row&gt;</code> level information, consider the special-purpose language, <a href="https://stackoverflow.com/tags/xslt/info">XSLT</a>, to transform your original XML to the following:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;root&gt; &lt;row&gt; &lt;Type&gt;PYI&lt;/Type&gt; &lt;Activation&gt;N&lt;/Activation&gt; &lt;Amt_val&gt;4000&lt;/Amt_val&gt; &lt;Amt_curr&gt;GBP&lt;/Amt_curr&gt; &lt;/row&gt; &lt;/root&gt; </code></pre> <p>Above is the intermediate output that is parsed by <code>read_xml</code> when using the <code>stylesheet</code> argument as shown below:</p> <pre class="lang-py prettyprint-override"><code>xsl = '''&lt;xsl:stylesheet version=&quot;1.0&quot; xmlns:xsl=&quot;http://www.w3.org/1999/XSL/Transform&quot;&gt; &lt;xsl:output indent=&quot;yes&quot;/&gt; &lt;xsl:strip-space elements=&quot;*&quot;/&gt; &lt;xsl:template match=&quot;@*|node()&quot;&gt; &lt;xsl:copy&gt; &lt;xsl:apply-templates select=&quot;@*|node()&quot;/&gt; &lt;/xsl:copy&gt; &lt;/xsl:template&gt; &lt;xsl:template match=&quot;row&quot;&gt; &lt;xsl:copy&gt; &lt;xsl:copy-of select=&quot;*[name() != 'Amt']&quot;/&gt; &lt;Amt_val&gt;&lt;xsl:value-of select=&quot;Amt/@val&quot;/&gt;&lt;/Amt_val&gt; &lt;Amt_curr&gt;&lt;xsl:value-of 
select=&quot;Amt/@curr&quot;/&gt;&lt;/Amt_curr&gt; &lt;/xsl:copy&gt; &lt;/xsl:template&gt; &lt;/xsl:stylesheet&gt;'''

row_df = pd.read_xml(&quot;Input.xml&quot;, xpath=&quot;//row&quot;, stylesheet=xsl)
</code></pre>
python|pandas|xml|readxml
1
5,417
43,036,982
Receiving strange permission errors when trying to install tensorflow
<p>I apologize if this is trivial. I'm not too comfortable with running commands in Linux so I'm having trouble debugging the issues below. I am just following the installation process <a href="https://www.tensorflow.org/install/install_mac#installing_with_anaconda" rel="nofollow noreferrer">here.</a> </p> <pre><code>(tensorflow) MAdhavs-MBP:~ madhavthaker$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.1-py2-none-any.whl Collecting tensorflow==1.0.1 from https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.1-py2-none-any.whl Downloading https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.1-py2-none-any.whl (39.3MB) 100% |████████████████████████████████| 39.3MB 15kB/s Collecting mock&gt;=2.0.0 (from tensorflow==1.0.1) Using cached mock-2.0.0-py2.py3-none-any.whl Collecting six&gt;=1.10.0 (from tensorflow==1.0.1) Using cached six-1.10.0-py2.py3-none-any.whl Collecting numpy&gt;=1.11.0 (from tensorflow==1.0.1) Using cached numpy-1.12.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl Collecting protobuf&gt;=3.1.0 (from tensorflow==1.0.1) Using cached protobuf-3.2.0-py2.py3-none-any.whl Collecting wheel (from tensorflow==1.0.1) Using cached wheel-0.29.0-py2.py3-none-any.whl Collecting funcsigs&gt;=1; python_version &lt; "3.3" (from mock&gt;=2.0.0-&gt;tensorflow==1.0.1) Using cached funcsigs-1.0.2-py2.py3-none-any.whl Collecting pbr&gt;=0.11 (from mock&gt;=2.0.0-&gt;tensorflow==1.0.1) Using cached pbr-2.0.0-py2.py3-none-any.whl Collecting setuptools (from protobuf&gt;=3.1.0-&gt;tensorflow==1.0.1) Using cached setuptools-34.3.3-py2.py3-none-any.whl Collecting appdirs&gt;=1.4.0 (from setuptools-&gt;protobuf&gt;=3.1.0-&gt;tensorflow==1.0.1) Using cached appdirs-1.4.3-py2.py3-none-any.whl Collecting packaging&gt;=16.8 (from setuptools-&gt;protobuf&gt;=3.1.0-&gt;tensorflow==1.0.1) Using cached packaging-16.8-py2.py3-none-any.whl Collecting pyparsing (from packaging&gt;=16.8-&gt;setuptools-&gt;protobuf&gt;=3.1.0-&gt;tensorflow==1.0.1) Using cached pyparsing-2.2.0-py2.py3-none-any.whl Installing collected packages: funcsigs, six, pbr, mock, numpy, appdirs, pyparsing, packaging, setuptools, protobuf, wheel, tensorflow Exception: Traceback (most recent call last): File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/commands/install.py", line 342, in run prefix=options.prefix_path, File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_set.py", line 784, in install **kwargs File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_install.py", line 851, in install self.move_wheel_files(self.source_dir, root=root, prefix=prefix) File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_install.py", line 1064, in move_wheel_files isolated=self.isolated, File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/wheel.py", line 345, in move_wheel_files clobber(source, lib_dir, True) File "/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pip-9.0.1-py2.7.egg/pip/wheel.py", line 323, in clobber shutil.copyfile(srcfile, destfile) File
"/Users/madhavthaker/Downloads/anaconda/lib/python2.7/shutil.py", line 83, in copyfile with open(dst, 'wb') as fdst: IOError: [Errno 13] Permission denied: '/Users/madhavthaker/Downloads/anaconda/lib/python2.7/site-packages/pbr/__init__.py' (tensorflow) MAdhavs-MBP:~ madhavthaker$ </code></pre> <p>This makes no sense to me as I'm the only user on my Mac. Any help would be much appreciated.</p>
<p>It looks like pip does not have write permission to your Anaconda <code>site-packages</code> directory. </p> <p>Try <code>sudo pip install</code></p>
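<p>For example, re-running the same command from the question with elevated permissions (a sketch; the wheel URL is the one you already used):</p> <pre><code>sudo pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.1-py2-none-any.whl
</code></pre>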
python|linux|permissions|tensorflow|anaconda
3
5,418
43,373,171
How to remove automated chart titles generated by Pandas
<p>I generated a boxplot using the code below:</p> <pre><code>import pandas as pd import random country = ['A' for z in range(1,6)] + [ 'B' for z in range(1,6)] sales = [random.random() for z in range(1,11)] data = pd.DataFrame({'country':country, 'sales':sales}) bp = data.boxplot(by='country') </code></pre> <p><a href="https://i.stack.imgur.com/1i3fD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1i3fD.png" alt="enter image description here"></a></p> <p>Pandas automatically generated two chart titles: 1. 'Boxplot grouped by country' 2. 'sales'</p> <p>I can get rid of the first one using:</p> <pre><code>bp.get_figure().suptitle('') </code></pre> <p>But I cannot figure out how to get rid of the second one, 'sales'.</p> <p>I have been searching through Stack Overflow for the whole day and nothing seems to work.</p> <p>I am using Python 3.6.1 together with Conda. I run the code in a Jupyter notebook.</p> <p>Thank you in advance for your help!</p>
<p>You also need to get rid of the title on the axes via:</p> <p><code>bp.get_figure().gca().set_title("")</code></p> <p>and if you want to get rid of the [country] part too:</p> <p><code>bp.get_figure().gca().set_xlabel("")</code></p>
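<p>Putting it together with the snippet from the question (a minimal sketch):</p> <pre><code>bp = data.boxplot(by='country')
bp.get_figure().suptitle('')           # removes 'Boxplot grouped by country'
bp.get_figure().gca().set_title('')    # removes 'sales'
bp.get_figure().gca().set_xlabel('')   # removes '[country]'
</code></pre>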
python|pandas|matplotlib|jupyter-notebook
11
5,419
72,268,382
Join 2 data frame with special columns matching new
<p>I want to join two DataFrames and get the result below; I tried many ways but they all failed.</p> <p>I only want the strings in df2['A'] that contain a string from df1['A']. Please help me fix the code.</p> <p>What I want:</p> <pre><code>0 A0_link0 1 A1_link1 2 A2_link2 3 A3_link3 </code></pre> <pre><code>import pandas as pd df1 = pd.DataFrame( { &quot;A&quot;: [&quot;A0&quot;, &quot;A1&quot;, &quot;A2&quot;, &quot;A3&quot;], }) df2 = pd.DataFrame( { &quot;A&quot;: [&quot;A0_link0&quot;, &quot;A1_link1&quot;, &quot;A2_link2&quot;, &quot;A3_link3&quot;, &quot;A4_link4&quot;, 'An_linkn'], &quot;B&quot; : [&quot;B0_link0&quot;, &quot;B1_link1&quot;, &quot;B2_link2&quot;, &quot;B3_link3&quot;, &quot;B4_link4&quot;, 'Bn_linkn'] }) result = pd.concat([df1, df2], ignore_index=True, join= &quot;inner&quot;, sort=False) print(result) </code></pre>
<p>Create an intermediate dataframe and <code>map</code>:</p> <pre><code>d = (df2.assign(key=df2['A'].str.extract(r'([^_]+)')) .set_index('key')) df1['A'].map(d['A']) </code></pre> <p>Output:</p> <pre><code>0 A0_link0 1 A1_link1 2 A2_link2 3 A3_link3 Name: A, dtype: object </code></pre> <p>Or <code>merge</code> if you want several columns from df2 (<code>df1.merge(d, left_on='A', right_index=True)</code>)</p>
pandas
0
5,420
72,321,110
Pick a column value based on column index stored in another column (Pandas)
<p>Let's say we have four columns: Column1, Column2, Column3, ind</p> <pre><code>import pandas as pd tbl = { 'Column1':['Spark',10000,'Python','35days'], 'Column2' :[500,'PySpark',22000,30000], 'Column3':['30days','40days','35days','pandas'], 'ind':[1,2,1,3] } df = pd.DataFrame(tbl) </code></pre> <p>Does anyone know whether there is a way to add a new column, without a loop, that will gather values from the first 3 columns based on the index stored in the 'ind' column?</p> <p>'Course':['Spark','PySpark','Python','pandas']</p> <p>I've tried some combinations with iloc, lambda and apply but failed.</p> <p>Expected output:</p> <pre><code> Column1 Column2 Column3 ind Course 0 Spark 500 30days 1 Spark 1 10000 PySpark 40days 2 PySpark 2 Python 22000 35days 1 Python 3 35days 30000 pandas 3 pandas </code></pre>
<p>IIUC, you can try <code>apply</code> on rows</p> <pre class="lang-py prettyprint-override"><code>df['Course'] = df.apply(lambda row: row.iloc[row['ind']-1], axis=1) </code></pre> <p>Or you can try</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
df['Course'] = df.values[np.arange(len(df['ind'])), df['ind'].sub(1)] </code></pre> <pre><code>print(df) Column1 Column2 Column3 ind Course 0 Spark 500 30days 1 Spark 1 10000 PySpark 40days 2 PySpark 2 Python 22000 35days 1 Python 3 35days 30000 pandas 3 pandas </code></pre>
python|pandas
0
5,421
72,447,242
Compare two dataframes with different shapes and with condition in python
<p>I have two dataframes in Python.</p> <p>First dataframe: <strong>tf_words</strong>, of <strong>shape (1 row, 2235 columns)</strong>, looks like-</p> <pre><code> 0 1 2 3 4 5 6 ...... 2234 0 aa, aaa, aaaa, aaan, aaanu, aada, aadhyam,.....zindabad] </code></pre> <p>Second dataframe: <strong>tf1_bigram</strong>, of <strong>shape (4000, 34319)</strong>, contains bigrams with their occurrences in the dataset; the dataframe looks like-</p> <pre><code>(a, en) (a, ha) (a, padam) (aa, aala) (aa, accountinte) (aa,adhamanaya)... 1 0 0 1 0 0 ... 0 1 0 0 1 0 ... 0 0 1 0 0 1 ... </code></pre> <p><strong>I have to compare the tf_words dataframe with the tf1_bigram dataframe, and the comparison should be as follows.</strong></p> <p>E.g. as seen in the tf_words dataframe, the word 'aa' matches the columns (aa, aala), (aa, accountinte) &amp; (aa,adhamanaya) in the tf1_bigram dataframe, so the values in those matching columns should be multiplied by 0.5.</p> <p>Then check for 'aaa', and if found multiply the matching column by 0.5;</p> <p>then check for 'aaaa', and if found multiply the matching column by 0.5;</p> <p>then for 'aaan', if found multiply the matching column by 0.5;</p> <p>and so on, up to the last word 'zindabad' (having column no. 2234).</p> <p>Thus the output tf1_bigram will look like below:</p> <pre><code>(a, en) (a, ha) (a, padam) (aa, aala) (aa, accountinte) (aa,adhamanaya)... 1 0 0 0.5 0 0 ... 0 1 0 0 0.5 0 ... 0 0 1 0 0 0.5 ... </code></pre> <p>I have tried: <code>tf1_bigram.apply(lambda x: np.multiply(x * 0.5) if x.name in tf_words else x)</code> but the output is not what I expected.</p> <p>Please help!</p>
<p>try this</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd table = { 'a, en':[1,0,0], 'a, ha':[0,1,0], 'a, padam':[0,0,1], 'aa, aala' :[1,0,0], 'aaa, accountinte':[0,1,0], 'aaaa,adhamanaya':[0,0,1], 'aaab,adhamanaya':[0,0,1] } tf1_bigram = pd.DataFrame(table) table = {0:['aa'], 1:['aaa'], 2:['aaaa'], 3:['aaan'], 4:['aaanu'], 5:['aada'], 6:['aadhyam']} tf_words = pd.DataFrame(table) list_tf_words = tf_words.values.tolist() print(tf1_bigram) print(f'\n\n-------------BREAK-----------\n\n') def func(x): for y in list_tf_words[0]: if x.name.find(y) != -1: return x*0.5 else: pass return x tf1_bigram = tf1_bigram.apply(func, axis = 0) print(tf1_bigram) </code></pre> <p>OUTPUT</p> <pre class="lang-py prettyprint-override"><code> a, en a, ha a, padam ... aaa, accountinte aaaa,adhamanaya aaab,adhamanaya 0 1 0 0 ... 0 0 0 1 0 1 0 ... 1 0 0 2 0 0 1 ... 0 1 1 [3 rows x 7 columns] -------------BREAK----------- a, en a, ha a, padam ... aaa, accountinte aaaa,adhamanaya aaab,adhamanaya 0 1 0 0 ... 0.0 0.0 0.0 1 0 1 0 ... 0.5 0.0 0.0 2 0 0 1 ... 0.0 0.5 0.5 [3 rows x 7 columns] </code></pre> <p>If you want to multiply by 0.5 more than once, use this code below</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd table = { 'a, en':[1,0,0], 'a, ha':[0,1,0], 'a, padam':[0,0,1], 'aa, aala' :[1,0,0], 'aaa, aaanu, accountinte':[0,1,0], 'aaaa,adhamanaya':[0,0,1] } tf1_bigram = pd.DataFrame(table) table = {0:['aa'], 1:['aaa'], 2:['aaaa'], 3:['aaan'], 4:['aaanu'], 5:['aada'], 6:['aadhyam']} tf_words = pd.DataFrame(table) list_tf_words = tf_words.values.tolist() print(tf1_bigram) print(f'\n\n-------------BREAK-----------\n\n') def func(x): for y in list_tf_words[0]: if x.name.find(y) != -1: x = x*0.5 else: pass return x tf1_bigram = tf1_bigram.apply(func, axis = 0) print(tf1_bigram) </code></pre> <p>OUTPUT</p> <pre class="lang-py prettyprint-override"><code> a, en a, ha a, padam aa, aala aaa, aaanu, accountinte aaaa,adhamanaya 0 1 0 0 1 0 0 1 0 1 0 0 1 0 2 0 0 1 0 0 1 -------------BREAK----------- a, en a, ha a, padam aa, aala aaa, aaanu, accountinte aaaa,adhamanaya 0 1 0 0 0.5 0.0000 0.000 1 0 1 0 0.0 0.0625 0.000 2 0 0 1 0.0 0.0000 0.125 </code></pre> <p>try this if you need the column name to match the tf_words content exactly</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd table = { 'a, en':[1,0,0], 'a, ha':[0,1,0], 'a, padam':[0,0,1], 'aa, aala' :[1,0,0], 'aaa, accountinte':[0,1,0], 'aaaa,adhamanaya':[0,0,1], 'aaab,adhamanaya':[0,0,1] } tf1_bigram = pd.DataFrame(table) table = {0:['a'], 1:['en'], 2:['aaaa'], 3:['aaan'], 4:['aaanu'], 5:['aada'], 6:['aadhyam']} tf_words = pd.DataFrame(table) list_tf_words = tf_words.values.tolist() print(tf1_bigram) print(f'\n\n-------------BREAK-----------\n\n') def func(x): temp = x.name.split(',') for y in list_tf_words[0]: if (temp[0].strip()) in list_tf_words[0] and (temp[1].strip()) in list_tf_words[0]: # change the &quot;and&quot; condition in case only one value needs to match the list of tf_words return x*0.5 else: return x tf1_bigram = tf1_bigram.apply(func, axis = 0) print(tf1_bigram) </code></pre> <p>OUTPUT</p> <pre class="lang-py prettyprint-override"><code> a, en a, ha a, padam ... aaa, accountinte aaaa,adhamanaya aaab,adhamanaya 0 1 0 0 ... 0 0 0 1 0 1 0 ... 1 0 0 2 0 0 1 ... 0 1 1 [3 rows x 7 columns] -------------BREAK----------- a, en a, ha a, padam ... aaa, accountinte aaaa,adhamanaya aaab,adhamanaya 0 0.5 0 0 ... 0 0 0 1 0.0 1 0 ... 1 0 0 2 0.0 0 1 ...
0 1 1 [3 rows x 7 columns] </code></pre> <p>Solution for Tuples:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd table = { ('a', 'en'):(1,0,0), ('a', 'ha'):[0,1,0], ('a', 'padam'):[0,0,1], ('aa', 'aala') :[1,0,0], ('aaa', 'accountinte'):[0,1,0], ('aaaa','adhamanaya'):[0,0,1], ('aaab','adhamanaya'):[0,0,1] } tf1_bigram = pd.DataFrame(table) table = {0:['a'], 1:['en'], 2:['aaaa'], 3:['aaan'], 4:['aaanu'], 5:['aada'], 6:['aadhyam']} tf_words = pd.DataFrame(table) list_tf_words = tf_words.values.tolist() print(tf1_bigram) print(f'\n\n-------------BREAK-----------\n\n') def func(x): temp = x.name if (temp[0].strip()) in list_tf_words[0] and (temp[1].strip()) in list_tf_words[0]: # change the &quot;and&quot; condition in case only one value needs to match the list of tf_words return x*0.5 else: return x tf1_bigram = tf1_bigram.apply(func, axis = 0) print(tf1_bigram) </code></pre> <p>OUTPUT</p> <pre class="lang-py prettyprint-override"><code> a aa aaa aaaa aaab en ha padam aala accountinte adhamanaya adhamanaya 0 1 0 0 1 0 0 0 1 0 1 0 0 1 0 0 2 0 0 1 0 0 1 1 -------------BREAK----------- a aa aaa aaaa aaab en ha padam aala accountinte adhamanaya adhamanaya 0 0.5 0 0 1 0 0 0 1 0.0 1 0 0 1 0 0 2 0.0 0 1 0 0 1 1 </code></pre>
python|pandas|dataframe|compare
0
5,422
72,178,306
How to Merge two Panel data sets on Date and a combination of columns?
<p>I have two datasets, <code>df1</code> &amp;<code>df2</code>, that look like this:</p> <p><code>df1</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Code</th> <th>City</th> <th>State</th> <th>Population</th> <th>Cases</th> <th>Deaths</th> </tr> </thead> <tbody> <tr> <td>2020-03</td> <td>10001</td> <td>Los Angeles</td> <td>CA</td> <td>5000000</td> <td>122</td> <td>12</td> </tr> <tr> <td>2020-03</td> <td>10002</td> <td>Sacramento</td> <td>CA</td> <td>5400000</td> <td>120</td> <td>2</td> </tr> <tr> <td>2020-03</td> <td>12223</td> <td>Houston</td> <td>TX</td> <td>3500000</td> <td>23</td> <td>11</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>2021-07</td> <td>10001</td> <td>Los Angeles</td> <td>CA</td> <td>5000002</td> <td>12220</td> <td>2200</td> </tr> <tr> <td>2021-07</td> <td>10002</td> <td>Sacramento</td> <td>CA</td> <td>5444000</td> <td>211</td> <td>22</td> </tr> <tr> <td>2021-07</td> <td>12223</td> <td>Houston</td> <td>TX</td> <td>4443300</td> <td>2111</td> <td>330</td> </tr> </tbody> </table> </div> <p><code>df2</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Code</th> <th>City</th> <th>State</th> <th>Quantity x</th> <th>Quantity y</th> </tr> </thead> <tbody> <tr> <td>2019-01</td> <td>100015</td> <td>LOS ANGELES</td> <td>CA</td> <td></td> <td>445</td> </tr> <tr> <td>2019-01</td> <td>100015</td> <td>LOS ANGELES</td> <td>CA</td> <td>330</td> <td></td> </tr> <tr> <td>2019-01</td> <td>100023</td> <td>SACRAMENTO</td> <td>CA</td> <td>4450</td> <td>566</td> </tr> <tr> <td>2019-01</td> <td>1222393</td> <td>HOUSTON</td> <td>TX</td> <td>440</td> <td>NA</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>2021-07</td> <td>100015</td> <td>LOS ANGELES</td> <td>CA</td> <td>31113</td> <td>3455</td> </tr> <tr> <td>2021-07</td> <td>100023</td> <td>SACRAMENTO</td> <td>CA</td> <td>3220</td> <td>NA</td> </tr> <tr> <td>2021-07</td> <td>1222393</td> <td>HOUSTON</td> <td>TX</td> <td>NA</td> <td>3200</td> </tr> </tbody> </table> </div> <p>As you can see, <code>df2</code> starts before <code>df1</code>, but they both end on the same dates. Also, <code>df1</code> and <code>df2</code> both have IDs that share some commonalities but are not equal (in general, <code>df2</code> has an extra one or two digits than <code>df1</code>).</p> <p>Also note that there may be multiple entries for the same date on <code>df2</code>, but with different quantities.</p> <p>I want to merge the two, more specifically I want to merge <code>df1</code> on <code>df2</code>, such that it starts on <code>2019-01</code> and ends on <code>2021-07</code>. In that case, <code>Cases</code> and <code>Deaths</code> would be 0 between <code>2019-01</code> and <code>2020-02</code>.</p> <p>How can I merge <code>df1</code> on <code>df2</code> using Date, City, and State (given that some cities have the same name but are in different states) and using the date specifications mentioned above? 
I would like my combined data frame, <code>df3</code>, to look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Code</th> <th>City</th> <th>State</th> <th>Quantity x</th> <th>Quantity y</th> <th>Population</th> <th>Cases</th> <th>Deaths</th> </tr> </thead> <tbody> <tr> <td>2019-01</td> <td>10001</td> <td>Los Angeles</td> <td>CA</td> <td></td> <td>445</td> <td></td> <td>0</td> <td>0</td> </tr> <tr> <td>2019-01</td> <td>10002</td> <td>Sacramento</td> <td>CA</td> <td>4450</td> <td>556</td> <td></td> <td>0</td> <td>0</td> </tr> <tr> <td>2020-03</td> <td>12223</td> <td>Houston</td> <td>TX</td> <td>440</td> <td>4440</td> <td>35000000</td> <td>23</td> <td>11</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>2021-07</td> <td>10002</td> <td>Sacramento</td> <td>CA</td> <td>3220</td> <td>NA</td> <td>5444000</td> <td>211</td> <td>22</td> </tr> </tbody> </table> </div> <p>Edit: I was thinking: maybe I can first cut <code>df1</code>'s date range and then do the merge. I am not sure about how an outer merge would work with dates that don't necessarily overlap. Perhaps someone has a better idea.</p>
<p>It sounds like you're looking for the <code>how</code> keyword argument in <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html?highlight=how%20outer" rel="nofollow noreferrer"><code>pd.DataFrame.merge</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.join.html?highlight=how%20outer" rel="nofollow noreferrer"><code>pd.DataFrame.join</code></a>.</p> <p>Here is a sample:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df1 = pd.read_json( '{&quot;Date&quot;:{&quot;0&quot;:1583020800000,&quot;1&quot;:1583020800000,&quot;2&quot;:1583020800000,&quot;3&quot;:1625097600000,&quot;4&quot;:1625097600000,&quot;5&quot;:1625097600000},&quot;City&quot;:{&quot;0&quot;:&quot;Los Angeles&quot;,&quot;1&quot;:&quot;Sacramento&quot;,&quot;2&quot;:&quot;Houston&quot;,&quot;3&quot;:&quot;Los Angeles&quot;,&quot;4&quot;:&quot;Sacramento&quot;,&quot;5&quot;:&quot;Houston&quot;},&quot;State&quot;:{&quot;0&quot;:&quot;CA&quot;,&quot;1&quot;:&quot;CA&quot;,&quot;2&quot;:&quot;TX&quot;,&quot;3&quot;:&quot;CA&quot;,&quot;4&quot;:&quot;CA&quot;,&quot;5&quot;:&quot;TX&quot;},&quot;Population&quot;:{&quot;0&quot;:5000000,&quot;1&quot;:5400000,&quot;2&quot;:3500000,&quot;3&quot;:5000002,&quot;4&quot;:5444000,&quot;5&quot;:4443300},&quot;Cases&quot;:{&quot;0&quot;:122,&quot;1&quot;:120,&quot;2&quot;:23,&quot;3&quot;:12220,&quot;4&quot;:211,&quot;5&quot;:2111},&quot;Deaths&quot;:{&quot;0&quot;:12,&quot;1&quot;:2,&quot;2&quot;:11,&quot;3&quot;:2200,&quot;4&quot;:22,&quot;5&quot;:330}}' ) df2 = pd.read_json( '{&quot;Date&quot;:{&quot;0&quot;:1546300800000,&quot;1&quot;:1546300800000,&quot;2&quot;:1546300800000,&quot;3&quot;:1546300800000,&quot;4&quot;:1625097600000,&quot;5&quot;:1625097600000,&quot;6&quot;:1625097600000},&quot;City&quot;:{&quot;0&quot;:&quot;LOS ANGELES&quot;,&quot;1&quot;:&quot;LOS ANGELES&quot;,&quot;2&quot;:&quot;SACRAMENTO&quot;,&quot;3&quot;:&quot;HOUSTON&quot;,&quot;4&quot;:&quot;LOS ANGELES&quot;,&quot;5&quot;:&quot;SACRAMENTO&quot;,&quot;6&quot;:&quot;HOUSTON&quot;},&quot;State&quot;:{&quot;0&quot;:&quot;CA&quot;,&quot;1&quot;:&quot;CA&quot;,&quot;2&quot;:&quot;CA&quot;,&quot;3&quot;:&quot;TX&quot;,&quot;4&quot;:&quot;CA&quot;,&quot;5&quot;:&quot;CA&quot;,&quot;6&quot;:&quot;TX&quot;},&quot;Quantity x&quot;:{&quot;0&quot;:null,&quot;1&quot;:330.0,&quot;2&quot;:4450.0,&quot;3&quot;:440.0,&quot;4&quot;:31113.0,&quot;5&quot;:3220.0,&quot;6&quot;:null},&quot;Quantity y&quot;:{&quot;0&quot;:445.0,&quot;1&quot;:null,&quot;2&quot;:566.0,&quot;3&quot;:null,&quot;4&quot;:3455.0,&quot;5&quot;:null,&quot;6&quot;:3200.0}}' ) print(&quot;\ndf1 = \n&quot;, df1) print(&quot;\ndf2 = \n&quot;, df2) # Transform df1 df1[&quot;City&quot;] = df1[&quot;City&quot;].apply(str.upper) # To merge, need consistent casing df1 = df1.groupby([&quot;Date&quot;, &quot;City&quot;, &quot;State&quot;])[ [&quot;Cases&quot;, &quot;Deaths&quot;] ].sum() # Aggregate cases + deaths just in case... 
# Aggregate in df2 df2 = df2.groupby([&quot;Date&quot;, &quot;City&quot;, &quot;State&quot;])[ [&quot;Quantity x&quot;, &quot;Quantity y&quot;] ].sum() # implicit skipna=True print(&quot;\ndf1' = \n&quot;, df1) print(&quot;\ndf2' = \n&quot;, df2) # MERGE: merging on indices df3 = df1.join(df2, how=&quot;outer&quot;) # key: &quot;how&quot; df3[[&quot;Cases&quot;, &quot;Deaths&quot;]] = ( df3[[&quot;Cases&quot;, &quot;Deaths&quot;]].fillna(0).astype(int) ) # inplace: downcasting complaint df3.reset_index( inplace=True ) # Will cause [&quot;Date&quot;, &quot;City&quot;, &quot;State&quot;] to be ordinary columns, not indices. print(&quot;\ndf3 = \n&quot;, df3) </code></pre> <p>...the output is:</p> <pre class="lang-none prettyprint-override"><code>df1 = Date City State Population Cases Deaths 0 2020-03-01 Los Angeles CA 5000000 122 12 1 2020-03-01 Sacramento CA 5400000 120 2 2 2020-03-01 Houston TX 3500000 23 11 3 2021-07-01 Los Angeles CA 5000002 12220 2200 4 2021-07-01 Sacramento CA 5444000 211 22 5 2021-07-01 Houston TX 4443300 2111 330 df2 = Date City State Quantity x Quantity y 0 2019-01-01 LOS ANGELES CA NaN 445.0 1 2019-01-01 LOS ANGELES CA 330.0 NaN 2 2019-01-01 SACRAMENTO CA 4450.0 566.0 3 2019-01-01 HOUSTON TX 440.0 NaN 4 2021-07-01 LOS ANGELES CA 31113.0 3455.0 5 2021-07-01 SACRAMENTO CA 3220.0 NaN 6 2021-07-01 HOUSTON TX NaN 3200.0 df1' = Cases Deaths Date City State 2020-03-01 HOUSTON TX 23 11 LOS ANGELES CA 122 12 SACRAMENTO CA 120 2 2021-07-01 HOUSTON TX 2111 330 LOS ANGELES CA 12220 2200 SACRAMENTO CA 211 22 df2' = Quantity x Quantity y Date City State 2019-01-01 HOUSTON TX 440.0 0.0 LOS ANGELES CA 330.0 445.0 SACRAMENTO CA 4450.0 566.0 2021-07-01 HOUSTON TX 0.0 3200.0 LOS ANGELES CA 31113.0 3455.0 SACRAMENTO CA 3220.0 0.0 df3 = Date City State Cases Deaths Quantity x Quantity y 0 2019-01-01 HOUSTON TX 0 0 440.0 0.0 1 2019-01-01 LOS ANGELES CA 0 0 330.0 445.0 2 2019-01-01 SACRAMENTO CA 0 0 4450.0 566.0 3 2020-03-01 HOUSTON TX 23 11 NaN NaN 4 2020-03-01 LOS ANGELES CA 122 12 NaN NaN 5 2020-03-01 SACRAMENTO CA 120 2 NaN NaN 6 2021-07-01 HOUSTON TX 2111 330 0.0 3200.0 7 2021-07-01 LOS ANGELES CA 12220 2200 31113.0 3455.0 8 2021-07-01 SACRAMENTO CA 211 22 3220.0 0.0 </code></pre> <p>A few other points:</p> <ul> <li><code>City</code> casing needs to be consistent at join/merge time.</li> <li>You could also do: <code>df1.merge(df2, ..., left_index=True, right_index=True)</code> instead of <code>df1.join</code>. You could also reset the indices via <code>df1.reset_index(inplace=True)</code>, etc. after the groupby-sum line(s) then use <code>.merge(..., on=...)</code> (but the indices are convenient).</li> <li>The final values of <code>Quantity {x,y}</code> are floats because <code>NaN</code>s are present. (See next point.)</li> <li>I would be deliberate about your treatment of <code>NaN</code>s v. auto-filled 0s. In the case of <code>Cases</code>/<code>Deaths</code> it sounds like you had no data <em>BUT</em> you were making the assumption that - in the absence of <code>Cases</code>/<code>Deaths</code> data - the values are <code>0</code>. For the <code>Quantity {x,y}</code> variables, no such assumption seemed to be warranted.</li> </ul>
python|pandas|dataframe|merge
1
5,423
50,485,174
Get Row Position instead of Row Index from iterrows() in Pandas
<p>I'm new to Stack Overflow and I have researched but have not found a satisfying answer.</p> <p>I understand that I can get a row index by using df.iterrows() to iterate through a df. But what if I want to get the row position instead of the row index? What method can I use? </p> <p>Example code that I'm working on is below: </p> <pre><code>import pandas as pd
df = pd.DataFrame({'month': ['Jan', 'Feb', 'March', 'April'], 'year': [2012, 2014, 2013, 2014], 'sale':[55, 40, 84, 31]}) df = df.set_index('month') for idx, value in df.iterrows(): print(idx) </code></pre> <p>How can I get an output of:</p> <pre><code>0 1 2 3 </code></pre> <p>Thanks!</p>
<p>If you need row number instead of index, you should:</p> <ol> <li>Use <code>enumerate</code> for a counter within a loop.</li> <li>Don't extract the index, see options below.</li> </ol> <p><strong>Option 1</strong></p> <p>In most situations, for performance reasons you should try and use <code>df.itertuples</code> instead of <code>df.iterrows</code>. You can specify <code>index=False</code> so that the first element is not the index.</p> <pre><code>for idx, row in enumerate(df.itertuples(index=False)): # do something </code></pre> <p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.itertuples.html#pandas.DataFrame.itertuples" rel="noreferrer"><code>df.itertuples</code></a> returns a namedtuple for each row.</p> <p><strong>Option 2</strong></p> <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html#pandas.DataFrame.iterrows" rel="noreferrer"><code>df.iterrows</code></a>. This is more cumbersome, as you need to separate out an unused variable. In addition, this is inefficient vs <code>itertuples</code>.</p> <pre><code>for idx, (_, row) in enumerate(df.iterrows()): # do something </code></pre>
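<p>For example, with the DataFrame from the question, this prints the row positions (a quick sketch):</p> <pre><code>for pos, row in enumerate(df.itertuples(index=False)):
    print(pos)   # 0, 1, 2, 3
</code></pre>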
python|python-3.x|pandas|for-loop
8
5,424
62,841,529
Using unique values of column as higher level column pandas
<p>Imagine I have a pandas dataframe like the following:</p> <pre><code>Side Year Value 1 Value 2 A 2020 56 5% B 2019 24 3% B 2018 42 4% B 2020 414 31% A 2019 421 51% </code></pre> <p>I would like to have something like this:</p> <pre><code> A B Year Value1 Value2 Value1 Value2 2018 - - 42 4% 2019 421 51% 24 3% 2020 56 5% 414 31% </code></pre> <p>I tried setting Side as an index and then transposing, but instead of unique values I get repeated ones.</p>
<p>You can use a combination of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer">set index</a>, along with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer">unstack</a> and finally a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer">sort index</a> to reshape your data:</p> <pre><code>res = df.set_index([&quot;Year&quot;, &quot;Side&quot;]).unstack().swaplevel(1, 0, axis=1).sort_index(axis=1) # remove axis details res.columns.names = (None, None) # reset to lowest level (-1) res.reset_index(col_level=-1) A B Year Value 1 Value 2 Value 1 Value 2 0 2018 NaN NaN 42.0 4% 1 2019 421.0 51% 24.0 3% 2 2020 56.0 5% 414.0 31% </code></pre>
python|pandas
2
5,425
62,859,627
Image processing: counting individual number pixel in an mask image using Pillow and 3D Numpy
<p>Please help me. I need some opinions on this problem. <strong>I am trying to count the number of R, G, B pixels in a mask image.</strong></p> <p>I have an image whose background is masked with green; a human is masked with red and an object with blue.</p> <p><em><strong>The image size and data type are</strong></em> <em><strong>(1536, 2048, 3) uint8</strong></em></p> <p>I have tried to access the numpy array of pixels</p> <pre><code>img_path = &quot;sample.png&quot; i = Image.open(img_path, 'r') data = asarray(i) array = np.array(i) </code></pre> <p>But the array only shows the background green. Something like below.</p> <pre><code>[[[ 0, 255, 0 0, 255,0]]] </code></pre> <p><em>It does not show the red and blue colors of the image</em></p> <p>I have tried getpixel()</p> <pre><code>i = Image.open(img_path, 'r') r, g, b = i.getpixel((0, 0)) print(&quot;Red: {}, Green: {}, Blue: {}&quot;.format(r, g, b)) </code></pre> <p><strong>It does not count the red and blue colors of the mask image.</strong></p> <p>How do I count the number of R, G, B pixels in the mask image?</p> <p>Where can I read more about accessing and counting the total number of pixels with numpy and Pillow?</p> <p>Please tell me anything related to this.</p>
<pre><code>from PIL import Image with Image.open('hopper.jpg') as im: px = im.load() r, g, b = px[x, y] print(r, g, b) </code></pre> <p>This code worked for me. <strong>x</strong> and <strong>y</strong> are the pixel coordinates. You can get the full details from <a href="https://pillow.readthedocs.io/en/stable/reference/PixelAccess.html" rel="nofollow noreferrer">this page</a>.</p>
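<p>If the goal is to count how many pixels of each mask color there are, here is a minimal NumPy sketch (assuming the mask uses pure red, green and blue values):</p> <pre><code>import numpy as np
from PIL import Image

arr = np.array(Image.open('sample.png').convert('RGB'))
red_count = np.sum(np.all(arr == (255, 0, 0), axis=-1))    # human mask
green_count = np.sum(np.all(arr == (0, 255, 0), axis=-1))  # background mask
blue_count = np.sum(np.all(arr == (0, 0, 255), axis=-1))   # object mask
print(red_count, green_count, blue_count)
</code></pre>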
python|numpy|image-processing|computer-vision|python-imaging-library
0
5,426
62,890,245
implementation of multiple if else condition in pandas data frame
<p>My data frame is -</p> <pre><code>id score 1 50 2 88 3 44 4 77 5 93 </code></pre> <p>I want my data frame to look like -</p> <pre><code>id score is_good 1 50 low 2 88 high 3 44 low 4 77 medium 5 93 high </code></pre> <p>I have written the following code -</p> <pre><code>def selector(row): if row['score'] &gt;= 0 and row['score'] &lt;= 50 : return &quot;low&quot; elif row['score'] &gt; 50 and row['score'] &lt;=80 : return &quot;medium&quot; else: return &quot;high&quot; x['is_good'] = x.apply(lambda row : selector(x), axis=1) </code></pre> <p>I think the logic is fine but the code is not working. Maybe we can use a map function.</p>
<p>This is a good use case for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>pd.cut</code></a>:</p> <pre><code>import numpy as np
df['is_good'] = pd.cut(df.score, [-np.inf,50,80,np.inf], labels=['low','medium','high']) </code></pre> <hr /> <pre><code>print(df) id score is_good 0 1 50 low 1 2 88 high 2 3 44 low 3 4 77 medium 4 5 93 high </code></pre>
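<p>As an aside, the original <code>apply</code> approach also works once the lambda passes the row instead of the whole frame <code>x</code> (that mix-up is why the posted code fails):</p> <pre><code>df['is_good'] = df.apply(selector, axis=1)
</code></pre>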
python|pandas|pandas-groupby
6
5,427
62,822,458
Simplest or most Pythonic way to exclude rows in a DataFrame based on a list of regex patterns?
<p>I know I can exclude rows like so:</p> <pre><code>df = df[ ~df['B'].str.contains(&lt;regex_pattern&gt;) ] </code></pre> <p>But what is the simplest or most Pythonic way to exclude rows from a list of regex patterns? Something like the following would be fine:</p> <pre><code>df = exclude_rows(dataframe, list_of_regex_pats) </code></pre> <p>(Where df would be passed in as 'dataframe'.)</p> <p>How can this be done with DataFrame.drop? Or is this a problem that would call for a recursive function?</p>
<pre><code>def drop_from_patterns(dataframe, column, regex_pattern_list): dfcopy = dataframe.copy() for pattern in regex_pattern_list: dfcopy.drop(dfcopy[ dfcopy[column].str.contains(pattern, case=False) ].index, inplace=True) return dfcopy </code></pre>
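<p>Usage would then mirror the call sketched in the question (assuming the patterns should be matched against column 'B'):</p> <pre><code>df = drop_from_patterns(df, 'B', list_of_regex_pats)
</code></pre> <p>Alternatively, the patterns can be joined into a single regex so only one pass over the column is needed (a sketch, case-insensitive to match the function above):</p> <pre><code>df = df[~df['B'].str.contains('|'.join(list_of_regex_pats), case=False)]
</code></pre>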
python|pandas
0
5,428
54,565,811
Why python script doesn't print to console or can't debug using pdb in Ubuntu
<p>I am looking into this <a href="https://github.com/opencv/training_toolbox_tensorflow" rel="nofollow noreferrer">code</a>.</p> <p>For training lpr, we can use <a href="https://github.com/opencv/training_toolbox_tensorflow/blob/develop/training_toolbox/lpr/train.py" rel="nofollow noreferrer">train.py</a> in the lpr folder.</p> <p>train.py uses methods and classes in <a href="https://github.com/opencv/training_toolbox_tensorflow/blob/develop/training_toolbox/lpr/trainer.py" rel="nofollow noreferrer">trainer.py</a>, such as <code>CTCUtils, InputData, inference and LPRVocab.</code></p> <p>I put print statements inside <code>LPRVocab</code> to see how the code works, as follows.</p> <pre><code>class LPRVocab: @staticmethod def create_vocab(train_list_path, val_list_path, use_h_concat=False, use_oi_concat=False): print('create_vocab called ') [vocab, r_vocab, num_classes] = LPRVocab._create_standard_vocabs(train_list_path, val_list_path) if use_h_concat: [vocab, r_vocab, num_classes] = LPRVocab._concat_all_hieroglyphs(vocab, r_vocab) if use_oi_concat: [vocab, r_vocab, num_classes] = LPRVocab._concat_oi(vocab, r_vocab) return vocab, r_vocab, num_classes @staticmethod def _char_range(char1, char2): """Generates the characters from `char1` to `char2`, inclusive.""" for char_code in range(ord(char1), ord(char2) + 1): yield chr(char_code) # Function for reading special symbols @staticmethod def _read_specials(filepath): characters = set() with open(filepath, 'r') as file_: for line in file_: current_label = line.split(' ')[-1].strip() characters = characters.union(re.findall('(&lt;[^&gt;]*&gt;|.)', current_label)) return characters @staticmethod def _create_standard_vocabs(train_list_path, val_list_path): print('_create_standard_vocabs called ') chars = set().union(LPRVocab._char_range('A', 'Z')).union(LPRVocab._char_range('0', '9')) print(chars) print('for special characters') chars = chars.union(LPRVocab._read_specials(train_list_path)).union(LPRVocab._read_specials(val_list_path)) print(chars) print('for list characters') chars = list(chars) print(chars) print('for sort characters') chars.sort() print(chars) print('for append characters') chars.append('_') print(chars) num_classes = len(chars) print('num_classes '+str(num_classes)) vocab = dict(zip(chars, range(num_classes))) print('vocab ') print(vocab) r_vocab = dict(zip(range(num_classes), chars)) r_vocab[-1] = '' print('r_vocab ') print(r_vocab) return [vocab, r_vocab, num_classes] </code></pre> <p>But I don't see any prints in the console.</p> <p>Then I used </p> <pre><code>python -m pdb train.py </code></pre> <p>then set breakpoints inside trainer.py. The breakpoints are never hit. Pressing the s key also doesn't step into the other files.</p> <p>Why doesn't debugging work, and why doesn't print write to the console? I used Python 3.5.</p>
<p>I recommend the following: </p> <p>wherever you want to debug, put this: </p> <pre><code>import ipdb ipdb.set_trace() </code></pre> <p>Then, in the IPython console, make an instance of your class and call the method you need to debug; execution will stop at your trace point.</p>
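<p>For example, dropped into the method from the question (a placement sketch; the rest of the method body stays as-is):</p> <pre><code>import ipdb

class LPRVocab:
    @staticmethod
    def _create_standard_vocabs(train_list_path, val_list_path):
        ipdb.set_trace()  # execution pauses here when the method is called
        chars = set().union(LPRVocab._char_range('A', 'Z')).union(LPRVocab._char_range('0', '9'))
        # ... rest of the method unchanged
</code></pre>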
python|python-3.x|tensorflow|python-3.5
0
5,429
54,451,127
Creating a HeatMap from a Pandas MultiIndex Series
<p>I have a Pandas DF and I need to create a heatmap. My data looks like this, and I'd like to put the years in columns and the days in rows, and then use that with Seaborn to create a heatmap.</p> <p>I tried multiple ways but I was always getting "inconsistent shape" when I chose the DF, so any recommendation on how to transform it?</p> <p>Year and Days are the index of this series</p> <p>2016 </p> <pre><code> Tuesday 4 Wednesday 6 ..... </code></pre> <p>2017 </p> <pre><code> Tuesday 4.4 Monday 3.5 .... </code></pre> <pre><code>import seaborn as sns ax = sns.heatmap(dayofweek) </code></pre>
<p>If you have a DataFrame like this:</p> <pre><code>years = range(2016,2019) months = range(1,6) df = pd.DataFrame(index=pd.MultiIndex.from_product([years,months])) df['vals'] = np.random.random(size=len(df)) </code></pre> <p>You can reformat the data to a rectangular shape using:</p> <pre><code>df2 = df.reset_index().pivot(columns='level_0',index='level_1',values='vals') sns.heatmap(df2) </code></pre> <p><a href="https://i.stack.imgur.com/g1M81.png" rel="noreferrer"><img src="https://i.stack.imgur.com/g1M81.png" alt="enter image description here"></a></p>
python|pandas|seaborn
7
5,430
73,725,267
Pandas lookup to update value by refereeing col and row with 2 data frames
<p>I have df1 and df2 like below and need to look up the Part and Week column values from df2 and update the corresponding qty values in df1. Initially I tried using the melt function to turn the weeks into a column and used merge to join them, but when I pivot back to the same shape as df1 with the updated values it says the grouper is not 1-dimensional, since Part and Week are repeated. Is there any other, better approach? Please help. (I need to update df1's week values by referring to df2, not to group df2's values.)</p> <p><a href="https://i.stack.imgur.com/BHr0S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BHr0S.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/Ete2t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ete2t.png" alt="enter image description here" /></a></p> <pre><code>{'Part': {0: 'Part1', 1: 'part2', 2: 'Part3'}, 'Week26': {0: nan, 1: nan, 2: nan}, 'Week27': {0: nan, 1: nan, 2: nan}, 'Week28': {0: nan, 1: nan, 2: nan}, 'Week29': {0: nan, 1: nan, 2: nan}, 'Week30': {0: nan, 1: nan, 2: nan}, 'Week31': {0: nan, 1: nan, 2: nan}, 'Week32': {0: nan, 1: nan, 2: nan}, 'Week33': {0: nan, 1: nan, 2: nan}, 'Week34': {0: nan, 1: nan, 2: nan}} {'ITM_NO': {0: 'Part1', 1: 'Part1', 2: 'Part1', 3: 'part2', 4: 'part2', 5: 'part2', 6: 'part2', 7: 'Part3', 8: 'Part3', 9: 'Part3', 10: 'Part3'}, 'WEEK': {0: 'Week26', 1: 'Week27', 2: 'Week28', 3: 'Week26', 4: 'Week27', 5: 'Week28', 6: 'Week29', 7: 'Week29', 8: 'Week30', 9: 'Week31', 10: 'Week32'}, 'QTY': {0: 12, 1: 10, 2: 30, 3: 20, 4: 40, 5: 60, 6: 70, 7: 20, 8: 10, 9: 30, 10: 20}} </code></pre> <p>Expected output <a href="https://i.stack.imgur.com/jeaZz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jeaZz.png" alt="enter image description here" /></a></p>
<p>Pivot the 2nd dataframe, then concatenate it with the first dataframe, and finally get the sum by grouping on the <code>Part</code> column. You can <code>reset_index()</code> at the end if you want to.</p> <pre><code>(pd.concat([ df2 .pivot('ITM_NO', 'WEEK', 'QTY') .reset_index() .rename(columns={'ITM_NO': 'Part'}), df1]) .groupby('Part').sum()) Week26 Week27 Week28 Week29 Week30 Week31 Week32 Week33 Week34 Part Part1 12.0 10.0 30.0 0.0 0.0 0.0 0.0 0.0 0.0 Part3 0.0 0.0 0.0 20.0 10.0 30.0 20.0 0.0 0.0 part2 20.0 40.0 60.0 70.0 0.0 0.0 0.0 0.0 0.0 </code></pre>
python|pandas|dataframe
1
5,431
73,718,365
How to group the dataframe with a list column in Python
<p>I have a data frame like this:</p> <pre><code>l1 = [1,2,3,1,2,3] l2 = ['A','A','A','B','B','B'] values = [['Ram', 'Ford', 'Honda', 'Ford'],['Ford', 'Toyota', 'Subaru'],['Ford', 'Ram'],['Volvo', 'Honda', 'Ford'],['Honda', 'Ford', 'Toyota', 'Ford'],['Ram', 'Ford']] d = {'ID': l1, 'Group': l2, 'Values': values} df = pd.DataFrame(d) </code></pre> <p>df</p> <pre><code>ID Group Values 1 A [Ram, Ford, Honda, Ford] 2 A [Ford, Toyota, Subaru] 3 A [Ford, Ram] 1 B [Volvo, Honda, Ford] 2 B [Honda, Ford, Toyota, Ford] 3 B [Ram, Ford] </code></pre> <p>I want to group the data such that for every ID, I get the count of every value in every group like this:</p> <pre><code> ID Group Value 1 2 3 A Ram 1 0 1 A Ford 2 1 1 A Honda 1 0 0 A Toyota 0 1 0 A Subaru 0 1 0 B Volvo 1 0 0 B Honda 1 1 0 B Ford 1 2 1 B Toyota 0 1 0 B Ram 0 0 1 </code></pre> <p>Can anyone help me with this?</p>
<p>First <code>explode</code> the column with the lists, then use <code>groupby.size</code> to get the counts and <code>unstack</code> to get the desired shape.</p> <pre><code>res = ( df.explode('Values') .groupby(['Group','Values','ID']).size() .unstack('ID',fill_value=0) .reset_index() # if necessary ) print(res) # ID Group Values 1 2 3 # 0 A Ford 2 1 1 # 1 A Honda 1 0 0 # 2 A Ram 1 0 1 # 3 A Subaru 0 1 0 # 4 A Toyota 0 1 0 # 5 B Ford 1 2 1 # 6 B Honda 1 1 0 # 7 B Ram 0 0 1 # 8 B Toyota 0 1 0 # 9 B Volvo 1 0 0 </code></pre>
python|python-3.x|pandas|dataframe
2
5,432
71,216,795
Calculate sum for column in dataframe using pandas
<p>I need to get the sum of the positive values as one value and the sum of the negative values as another value for a column in a dataframe. For example:</p> <pre><code>date | amount 2021-09-02 | 98.3 2021-08-25 | -23.4 2021-08-14 | 34.57 2021-07-30 | -87.9 </code></pre> <p>Then I need (98.3+34.57) and (-23.4-87.9) as the output.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.clip.html" rel="nofollow noreferrer"><code>Series.clip</code></a> with <code>sum</code>:</p> <pre><code>pos = df['amount'].clip(lower=0).sum() neg = df['amount'].clip(upper=0).sum() print (pos) 132.87 print (neg) -111.3 </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> with <code>sum</code> and filtering with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>Series.gt</code></a> for greater, <code>~</code> is used for invert mask for negative values:</p> <pre><code>m = df[&quot;amount&quot;].gt(0) pos = df.loc[m, &quot;amount&quot;].sum() neg = df.loc[~m, &quot;amount&quot;].sum() print (pos) 132.87 print (neg) -111.3 </code></pre>
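<p>A compact alternative is to group by the sign of the column (a sketch; any exact zeros would land in the negative group):</p> <pre><code>sums = df.groupby(df['amount'].gt(0))['amount'].sum()
# False -&gt; sum of negative values, True -&gt; sum of positive values
</code></pre>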
python|pandas|dataframe|sum|output
3
5,433
71,393,426
How to place comma on interval value
<p>Here's my dataset:</p> <pre><code>Id Longitude 1 923237487 2 102237487 3 934237487 4 103423787 </code></pre> <p>What I did:</p> <pre><code>df['Longitude'] = df['Longitude'].str.replace('\.', '', regex=True) df['Longitude'] = (df['Longitude'].str[:3] + '.' + df['Longitude'].str[3:]).astype(float) </code></pre> <p>The result is:</p> <pre><code>Id Longitude 1 923.237487 2 102.237487 3 934.237487 4 103.423787 </code></pre> <p>My expected output (each value should be between 80 and 160):</p> <pre><code>Id Longitude 1 92.3237487 2 102.237487 3 93.4237487 4 103.423787 </code></pre>
<p>IIUC, you could use a regex to find the leading numbers in the range 80-160:</p> <pre class="lang-py prettyprint-override"><code>df['Longitude2'] = (df['Longitude'].astype(str) .str.replace(r'(^(?:1[0-5]|[8-9])[0-9])', r'\1.', regex=True) .astype(float) ) </code></pre> <p>output:</p> <pre><code> Id Longitude Longitude2 0 1 923237487 92.323749 1 2 102237487 102.237487 2 3 934237487 93.423749 3 4 103423787 103.423787 </code></pre>
python|pandas
3
5,434
71,113,051
Convert the date format from MM/DD/YYYY to YYYY-MM-DDT00:00:00.000Z in a list of dictionaries
<p>I want to add my own data to a sample HTML chart. The JavaScript code for the sample data is:</p> <pre><code>var data = []; var visits = 10; for (var i = 1; i &lt; 50000; i++) { visits += Math.round((Math.random() &lt; 0.5 ? 1 : -1) * Math.random() * 10); data.push({ date: new Date(2018, 0, i), value: visits }); } chart.data = data </code></pre> <p>I ran the output of the sample data with <code>console.log(data)</code> and found the sample data is the following list of dictionaries:</p> <pre><code>[{&quot;date&quot;:&quot;2018-11-10T05:00:00.000Z&quot;,&quot;value&quot;:-459}, {&quot;date&quot;:&quot;2018-11-11T05:00:00.000Z&quot;,&quot;value&quot;:-459}, {&quot;date&quot;:&quot;2018-11-12T05:00:00.000Z&quot;,&quot;value&quot;:-464}, ... {&quot;date&quot;:&quot;2020-11-22T05:00:00.000Z&quot;,&quot;value&quot;:-493}] </code></pre> <p>I used <code>pandas</code> to create a list of dictionaries as my own data. I changed my own data to match the format of the above sample data list; the following list is my own data:</p> <pre><code>rnow = [{'date': '01/02/2020', 'value': 13}, {'date': '01/03/2020', 'value': 2}, {'date': '01/06/2020', 'value': 5}, ... {'date': '01/07/2020', 'value': 6}] </code></pre> <p>I realized the date format is different from the sample data. I used the following code to change the date format to yyyy-mm-dd.</p> <pre><code>dfrnow['date'] = pd.to_datetime(dfrnow.date) rnow = dfrnow.to_dict('records') </code></pre> <p>Now my data is shown as:</p> <pre><code>rnow = [{'date': Timestamp('2020-01-02 00:00:00'), 'value': 13}, {'date': Timestamp('2020-01-03 00:00:00'), 'value': 2}, {'date': Timestamp('2020-01-06 00:00:00'), 'value': 5}, ... {'date': Timestamp('2020-01-07 00:00:00'), 'value': 6}] </code></pre> <p>When I replace the sample data with my own data, the chart doesn't show any data. Is that because of this &quot;Timestamp&quot; thing shown in my list, and how should I remove it? Or are there other, better methods I can use to convert my dates?</p> <p>(The following code is all the JS code of this template chart. I listed it here just in case I missed anything.)</p> <pre><code>var chart = am4core.create(&quot;chartdiv&quot;, am4charts.XYChart); chart.paddingRight = 20; var data = []; var visits = 10; for (var i = 1; i &lt; 50000; i++) { visits += Math.round((Math.random() &lt; 0.5 ? 1 : -1) * Math.random() * 10); data.push({ date: new Date(2018, 0, i), value: visits }); } chart.data = {{rnow}}; var dateAxis = chart.xAxes.push(new am4charts.DateAxis()); dateAxis.renderer.grid.template.location = 0; dateAxis.minZoomCount = 5; dateAxis.groupData = true; dateAxis.groupCount = 500; var valueAxis = chart.yAxes.push(new am4charts.ValueAxis()); var series = chart.series.push(new am4charts.LineSeries()); series.dataFields.dateX = &quot;date&quot;; series.dataFields.valueY = &quot;value&quot;; series.tooltipText = &quot;{valueY}&quot;; series.tooltip.pointerOrientation = &quot;vertical&quot;; series.tooltip.background.fillOpacity = 0.5; chart.cursor = new am4charts.XYCursor(); chart.cursor.xAxis = dateAxis; var scrollbarX = new am4core.Scrollbar(); scrollbarX.marginBottom = 20; chart.scrollbarX = scrollbarX; var selector = new am4plugins_rangeSelector.DateAxisRangeSelector(); selector.container = document.getElementById(&quot;selectordiv&quot;); selector.axis = dateAxis; </code></pre>
<p>I solved it by:</p> <pre><code>from datetime import datetime
rnow = dfrnow.to_dict('records') def new_date(date): return datetime.strptime(date, '%m/%d/%Y').strftime('%Y-%m-%dT00:00:00Z') for i in range(len(rnow)): rnow[i]['date'] = new_date(rnow[i]['date']) </code></pre>
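<p>A vectorized pandas alternative that produces the same string format directly (a sketch, assuming pandas is imported as pd and the dates are still the original '%m/%d/%Y' strings):</p> <pre><code>dfrnow['date'] = pd.to_datetime(dfrnow['date'], format='%m/%d/%Y').dt.strftime('%Y-%m-%dT00:00:00Z')
rnow = dfrnow.to_dict('records')
</code></pre>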
javascript|python|pandas|date|charts
0
5,435
52,199,843
How to check that the values of Tensor is contained in other tensor?
<p>I have a problem finding values from one tensor in another tensor.</p> <p>It's similar to the following problem: (URL: <a href="https://stackoverflow.com/questions/52102706/how-to-find-a-value-in-tensor-from-other-tensor-in-tensorflow/52107710?noredirect=1#comment91348123_52107710">How to find a value in tensor from other tensor in Tensorflow</a>)</p> <p>The previous problem asked whether the input values <strong>x[i]</strong>, <strong>y[i]</strong> are contained in the input tensors <strong>label_x</strong>, <strong>label_y</strong>.</p> <p>Here is an example of the previous problem:</p> <pre><code>Input Tensor s_idx = (1, 3, 5, 7) e_idx = (3, 4, 5, 8) label_s_idx = (2, 2, 3, 6) label_e_idx = (2, 3, 4, 8) </code></pre> <p>The problem is to give output[i] a value of 1 if <strong>s_idx[i] == label_s_idx[j] and e_idx[i] == label_e_idx[j]</strong> are satisfied for some j.</p> <p>Thus, in the above example, the output tensor is</p> <pre><code>output = (0, 1, 0, 0) </code></pre> <p>Because (<strong>s_idx[1]</strong> = 3, <strong>e_idx[1]</strong> = 4) is the same as (<strong>label_s_idx[2]</strong> = 3, <strong>label_e_idx[2]</strong> = 4)</p> <p>(s_idx, e_idx) does not contain duplicate pairs, and neither does (label_s_idx, label_e_idx).</p> <p>Therefore, it is assumed that the following input example is impossible:</p> <pre><code>s_idx = (2, 2, 3, 3) e_idx = (2, 3, 3, 3) </code></pre> <p>Because (<strong>s_idx[2]</strong> = 3, <strong>e_idx[2]</strong> = 3) is the same as (<strong>s_idx[3]</strong> = 3, <strong>e_idx[3]</strong> = 3). </p> <p>What I want to change a bit in this problem is to add another value to the input tensor:</p> <pre><code>Input Tensor s_idx = (1, 3, 5, 7) e_idx = (3, 4, 5, 8) label_s_idx = (2, 2, 3, 6) label_e_idx = (2, 3, 4, 8) label_score = (1, 3, 2, 3) </code></pre> <p>*There are no 0 values in the label_score tensor</p> <p>The task in the changed problem is defined as follows:</p> <p>The problem is to give output_2[i] a value of label_score[j] if <strong>s_idx[i] == label_s_idx[j]</strong> and <strong>e_idx[i] == label_e_idx[j]</strong> are satisfied for some j.</p> <p>Therefore, the output_2 should be like this:</p> <pre><code>output = (0, 1, 0, 0) // It is same as previous problem output_2 = (0, 2, 0, 0) </code></pre> <p>How do I code this in TensorFlow in Python?</p>
<p>This perhaps works. Since this is a complex task, try more examples and see if expected results are obtained.</p> <pre><code>import tensorflow as tf s_idx = [1, 3, 5, 7] e_idx = [3, 4, 5, 8] label_s_idx = [2, 2, 3, 6] label_e_idx = [2, 3, 4, 8] label_score = [1, 3, 2, 3] # convert to one-hot vector. # make sure all have the same shape max_idx = tf.reduce_max([s_idx, label_s_idx, e_idx, label_e_idx]) s_oh = tf.one_hot(s_idx, max_idx) label_s_oh = tf.one_hot(label_s_idx, max_idx) e_oh = tf.one_hot(e_idx, max_idx) label_e_oh = tf.one_hot(label_e_idx, max_idx) # make a matrix such that (i,j) element equals one if # idx(i) = label(j) s_mult = tf.matmul(s_oh, label_s_oh, transpose_b=True) e_mult = tf.matmul(e_oh, label_e_oh, transpose_b=True) # find i such that idx(i) = label(j) for s and e, with some j # there is at most one such j by the uniqueness condition. output = tf.reduce_max(s_mult * e_mult, axis=1) with tf.Session() as sess: print(sess.run(output)) # [0. 1. 0. 0.] # extract the label score at the corresponding j index # and store in the index i # then remove redundant dimension output_2 = tf.matmul( s_mult * e_mult, tf.cast(tf.expand_dims(label_score, -1), tf.float32)) output_2 = tf.squeeze(output_2) with tf.Session() as sess: print(sess.run(output_2)) # [0. 2. 0. 0.] </code></pre>
python|tensorflow
1
5,436
60,529,092
Is there a way to produce a count of multiple categories over a resampled date indexed pandas dataframe?
<p>I have a dataframe, indexed by date, containing information about the magnitude of floods - none, small, medium and large which are represented numerically by 0,1,2 and 3 respectively, see below (edited product of df.head(15).to_dict()):</p> <pre><code> date flood 2001-01-01 0.0 2001-01-02 0.0 2001-01-03 0.0 2001-01-04 1.0 2001-01-05 1.0 2001-01-06 1.0 2001-01-07 0.0 2001-01-08 0.0 2001-01-09 2.0 2001-01-10 0.0 2001-01-11 3.0 2001-01-12 0.0 2001-01-13 0.0 2001-01-14 2.0 2001-01-15 0.0 </code></pre> <p>I would like to resample by month (or other specified time period) and produce a count of the frequency of each category occurring. So the output would be something like this, for example:</p> <pre><code> 0 1 2 3 date 1-1-2001 23 6 1 1 1-2-2001 20 7 1 0 1-3-2001 30 1 0 0 ... </code></pre> <p>Any ideas how this might be achieved?</p> <p>Thanks!!</p>
<pre><code># convert date column to datetime format (assuming it isn't already) df['date'] = pd.to_datetime(df['date'], dayfirst=True) # set all days to 1 df['date'] = df['date'].apply(lambda dt: dt.replace(day=1)) # count using pivot table: result = pd.pivot_table(df, index='date', columns='flood', aggfunc=len) print(result) </code></pre> <p>Output:</p> <pre><code>flood 0.0 1.0 2.0 3.0 date 2001-01-01 9 3 2 1 </code></pre>
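<p>If you'd rather group by month (or any other period) directly, a <code>pd.Grouper</code> sketch gives the same table without rewriting the day:</p> <pre><code>result = (df.groupby([pd.Grouper(key='date', freq='MS'), 'flood'])
            .size().unstack(fill_value=0))
</code></pre>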
python|pandas
0
5,437
60,476,943
PyTorch LSTM has nan for MSELoss
<p>My model is:</p> <pre><code>import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable class BaselineModel(nn.Module): def __init__(self, feature_dim=5, hidden_size=5, num_layers=2, batch_size=32): super(BaselineModel, self).__init__() self.num_layers = num_layers self.hidden_size = hidden_size self.lstm = nn.LSTM(input_size=feature_dim, hidden_size=hidden_size, num_layers=num_layers) def forward(self, x, hidden): lstm_out, hidden = self.lstm(x, hidden) return lstm_out, hidden def init_hidden(self, batch_size): hidden = Variable(next(self.parameters()).data.new( self.num_layers, batch_size, self.hidden_size)) cell = Variable(next(self.parameters()).data.new( self.num_layers, batch_size, self.hidden_size)) return (hidden, cell) </code></pre> <p>Training looks like:</p> <pre><code>train_loader = torch.utils.data.DataLoader( train_set, batch_size=BATCH_SIZE, shuffle=True, **params) model = BaselineModel(batch_size=BATCH_SIZE) optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001) loss_fn = torch.nn.MSELoss(reduction='sum') for epoch in range(250): # hidden = (torch.zeros(2, 13, 5), # torch.zeros(2, 13, 5)) # model.hidden = hidden for i, data in enumerate(train_loader): hidden = model.init_hidden(13) inputs = data[0] outputs = data[1] print('inputs', inputs.size()) # print('outputs', outputs.size()) # optimizer.zero_grad() model.zero_grad() # print('inputs', inputs) pred, hidden = model(inputs, hidden) loss = loss_fn(pred, outputs) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() print('Epoch: ', epoch, '\ti: ', i, '\tLoss: ', loss) </code></pre> <p>I have gradient clipping set already, which seems to be the recommended solution. But even after the first step, I get:</p> <blockquote> <p>Epoch: 0 i: 0 Loss: tensor(nan, grad_fn=&lt;MseLossBackward&gt;)</p> </blockquote>
<p>I suspect your issue has to do with your outputs / <code>data[1]</code> (it would help if you show examples of your train_set). Running the following piece of code gives no nan, but I forced shape of output by hand before calling the <code>loss_fn(pred, outputs)</code> :</p> <pre><code>class BaselineModel(nn.Module): def __init__(self, feature_dim=5, hidden_size=5, num_layers=2, batch_size=32): super(BaselineModel, self).__init__() self.num_layers = num_layers self.hidden_size = hidden_size self.lstm = nn.LSTM(input_size=feature_dim, hidden_size=hidden_size, num_layers=num_layers) def forward(self, x, hidden): lstm_out, hidden = self.lstm(x, hidden) return lstm_out, hidden def init_hidden(self, batch_size): hidden = Variable(next(self.parameters()).data.new( self.num_layers, batch_size, self.hidden_size)) cell = Variable(next(self.parameters()).data.new( self.num_layers, batch_size, self.hidden_size)) return (hidden, cell) model = BaselineModel(batch_size=32) optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001) loss_fn = torch.nn.MSELoss(reduction='sum') hidden = model.init_hidden(10) model.zero_grad() pred, hidden = model(torch.randn(2,10,5), hidden) pred.size() #torch.Size([2, 10, 5]) outputs = torch.zeros(2,10,5) loss = loss_fn(pred, outputs) loss loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() print(loss) </code></pre> <p>Please note a common reason for nan values can be related to numerical stability of your learning phase, but usually you have values for the first steps before you see the divergence happening, which is apparently not the case here.</p>
python|pytorch|lstm|gradient-descent
2
5,438
72,521,988
Survey data Cleaning - Grouping Age range - Python pandas
<p>I have this data set with the following value counts for the column <code>Age</code>:</p> <pre><code>&gt;&gt;&gt; game['Age'].value_counts() Between 18 -25 131 Between 26 - 30 21 Under 18 10 31 or more 7 Name: Age, dtype: int64 </code></pre> <p>I'm trying <strong>to create a regrouping of values with 2 groups for this column 'Age'</strong>:</p> <pre><code> - &lt;=25 // (grouping Between 18 -25 + Under 18 ) - &gt;=26 // (grouping Between 26 - 30 + 31 or more ) </code></pre> <p>I have been trying to play with the groupby function but with no good results yet. Can you please help?</p>
<p>You can use <code>np.select</code>:</p> <pre><code>import numpy as np
mapping = { 'Between 18 -25': '&lt;=25', 'Under 18': '&lt;=25', 'Between 26 - 30': '&gt;=26', '31 or more': '&gt;=26', } df['Age'] = np.select([df['Age'] == k for k in mapping.keys()], mapping.values()) </code></pre> <p>Or just use <code>.loc</code>:</p> <pre><code>df.loc[df['Age'] == 'Between 18 -25', 'Age'] = '&lt;=25' df.loc[df['Age'] == 'Under 18', 'Age'] = '&lt;=25' df.loc[df['Age'] == 'Between 26 - 30', 'Age'] = '&gt;=26' df.loc[df['Age'] == '31 or more', 'Age'] = '&gt;=26' </code></pre> <p>Or <code>isin</code>:</p> <pre><code>df.loc[df['Age'].isin(['Between 18 -25', 'Under 18']), 'Age'] = '&lt;=25' df.loc[df['Age'].isin(['Between 26 - 30', '31 or more']), 'Age'] = '&gt;=26' </code></pre>
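<p>Since the <code>mapping</code> dict from the first option is already defined, an equivalent one-liner is (a sketch):</p> <pre><code>df['Age'] = df['Age'].map(mapping)
</code></pre>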
python|pandas|survey
0
5,439
59,896,902
numpy (n, m) and (n, k) to (n, m, k)
<p>Let <code>x</code> be a <code>np.array</code> of shape <code>(n, m)</code>.</p> <p>Let <code>y</code> be a <code>np.array</code> of shape <code>(n, k)</code>.</p> <p>What is the right way of computing the tensor <code>z</code> of shape <code>(n, m, k)</code> such that </p> <pre><code>for all i in [0, n - 1] z[i] = np.dot(x[i][:, np.newaxis], y[i][np.newaxis, :]) </code></pre> <p>?</p> <p>In other words, each pair of rows <code>(x_i, y_i)</code> gives one matrix of shape <code>(m, k)</code>.</p> <p>I looked at <code>np.tensordot</code> but after many trials, I can't find the right value for its <code>axes</code> argument. I'm not sure it's the right tool for the job.</p>
<p>You could use <code>np.einsum()</code> like this:</p> <pre><code>z = np.einsum('ij,ik-&gt;ijk', x, y) </code></pre> <hr> <p>From a quick test, this is also faster than the <code>np.matmul()</code>-based approach (except for very small inputs):</p> <pre><code>import numpy as np x = np.random.randint(1, 100, (2, 3)) y = np.random.randint(1, 100, (2, 4)) %timeit np.einsum('ij,ik-&gt;ijk', x, y) # 100000 loops, best of 3: 3.14 µs per loop %timeit np.matmul(x[:, :, None], y[:, None, :]) # 100000 loops, best of 3: 2.07 µs per loop x = np.random.randint(1, 100, (20, 30)) y = np.random.randint(1, 100, (20, 40)) %timeit np.einsum('ij,ik-&gt;ijk', x, y) # 10000 loops, best of 3: 32.1 µs per loop %timeit np.matmul(x[:, :, None], y[:, None, :]) # 10000 loops, best of 3: 76.8 µs per loop x = np.random.randint(1, 100, (200, 300)) y = np.random.randint(1, 100, (200, 400)) %timeit np.einsum('ij,ik-&gt;ijk', x, y) # 10 loops, best of 3: 48.7 ms per loop %timeit np.matmul(x[:, :, None], y[:, None, :]) # 10 loops, best of 3: 68.2 ms per loop </code></pre> <hr> <p>The application of <code>np.dot()</code> to the broadcastable views like the following:</p> <pre><code>np.dot(x[:, :, None], y[:, None, :]) </code></pre> <p>would not work (it will not even get to the right shape).</p> <p>(EDITED)</p>
numpy|linear-algebra|tensor
2
5,440
59,775,656
Count number of different rows in which each word appears
<p>I have a Pandas DataFrame (or a Series, given that I'm just using one column) that contains strings. I also have a list of words. For each word in this list, I want to check how many different rows it appears in at least once. For example:</p> <pre><code>words = ['hi', 'bye', 'foo', 'bar'] df = pd.Series(["hi hi hi bye foo", "bye bye bye bye", "bar foo hi bar", "hi bye foo bar"]) </code></pre> <p>In this case, the output should be</p> <pre><code>0 hi 3 1 bye 3 2 foo 3 3 bar 2 </code></pre> <p>Because "hi" appears in three different rows (1st, 3rd and 4th), "bar" appears in two (3rd and 4th), and so on.</p> <p>I came up with the following way to do this:</p> <pre><code>word_appearances = {} for word in words: appearances = df.str.count(word).clip(upper=1).sum() word_appearances.update({word: appearances}) pd.DataFrame(word_appearances.items()) </code></pre> <p>This works fine, but the problem is that I have a rather long list of words (around 40,000), around 30,000 rows to check and strings that are not as short as the ones I used in the example. When I try my approach with my real data, it takes forever to run. Is there a way to do this in a more efficient way?</p>
<p>Try a list comprehension with <code>str.contains</code> and <code>sum</code>:</p>

<pre><code>df_out = pd.DataFrame([[word, sum(df.str.contains(word))] for word in words],
                      columns=['word', 'word_count'])

Out[58]:
  word  word_count
0   hi           3
1  bye           3
2  foo           3
3  bar           2
</code></pre>
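<p>A further sketch, not from the original answer: if whole-word matches are acceptable (note that <code>str.contains</code> also matches substrings, e.g. 'hi' inside 'hit'), a single set-based pass over the rows avoids re-scanning every string once per word:</p>

<pre><code>from collections import Counter
import pandas as pd

words = ['hi', 'bye', 'foo', 'bar']
df = pd.Series(["hi hi hi bye foo", "bye bye bye bye",
                "bar foo hi bar", "hi bye foo bar"])

word_set = set(words)
counts = Counter(dict.fromkeys(words, 0))  # start every word at 0
for row in df:
    # each word counts at most once per row
    counts.update(word_set.intersection(row.split()))

df_out = pd.DataFrame(counts.items(), columns=['word', 'word_count'])
</code></pre>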
python|pandas|optimization
2
5,441
59,577,442
Numpy vectorization messes up data type
<p>When using <code>pandas</code> dataframes, it's a common situation to create a column <code>B</code> with the information in column <code>A</code>.</p> <h1>Background</h1> <p>In some cases, it's possible to do this in one go (<code>df['B'] = df['A'] + 4</code>), but in others, the operation is more complex and a separate function is written. In that case, this function can be applied in one of two ways (that I know of):</p> <pre><code>def calc_b(a): return a + 4 df = pd.DataFrame({'A': np.random.randint(0, 50, 5)}) df['B1'] = df['A'].apply(lambda x: calc_b(x)) df['B2'] = np.vectorize(calc_b)(df['A']) </code></pre> <p>The resulting dataframe:</p> <pre><code> A B1 B2 0 17 21 21 1 25 29 29 2 6 10 10 3 21 25 25 4 14 18 18 </code></pre> <p>Perfect - both ways have the correct result. In my code, I've been using the <code>np.vectorize</code> way, as <code>.apply</code> is slow and <a href="https://stackoverflow.com/questions/54432583/when-should-i-ever-want-to-use-pandas-apply-in-my-code">considered bad practise</a>.</p> <h1>Now comes my problem</h1> <p>This method seems to be breaking down when working with datetimes / timestamps. A minimal working example is this:</p> <pre><code>def is_past_midmonth(dt): return (dt.day &gt; 15) df = pd.DataFrame({'date':pd.date_range('2020-01-01', freq='6D', periods=7)}) df['past_midmonth1'] = df['date'].apply(lambda x: is_past_midmonth(x)) df['past_midmonth2'] = np.vectorize(is_past_midmonth)(df['date']) </code></pre> <p>The <code>.apply</code> way works; the resulting dataframe is</p> <pre><code> date past_midmonth1 0 2020-01-01 False 1 2020-01-07 False 2 2020-01-13 False 3 2020-01-19 True 4 2020-01-25 True 5 2020-01-31 True 6 2020-02-06 False </code></pre> <p>But the <code>np.vectorize</code> way fails with an <code>AttributeError: 'numpy.datetime64' object has no attribute 'day'</code>.</p> <p>Digging a bit with <code>type()</code>, the elements of <code>df['date']</code> are of the <code>&lt;class 'pandas._libs.tslibs.timestamps.Timestamp'&gt;</code>, which is also how the function receives them. In the vectorized function, however, they are received as instances of <code>&lt;class 'numpy.datetime64'&gt;</code>, which then causes the error.</p> <p>I have two questions:</p> <ul> <li><strong>Is there a way to 'fix' this behaviour of <code>np.vectorize</code>? How?</strong></li> <li><strong>How can I avoid these kinds of incompatibilities in general?</strong></li> </ul> <p>Of course I can make a mental note to not use <code>np.vectorize</code> functions that take datetime arguments, but that is cumbersome. I'd like a solution that always works so I don't have to think about it whenever I encounter this situation.</p> <p>As stated, this is a <strong>minimal working example</strong> that demonstrates the problem. I know I could use easier, all-column-at-once operations in this case, exactly as I could in the first example with the <code>int</code> column. But that's beside the point here; I'm interested in the general case of vectorizing any function that takes timestamp arguments. 
For those asking about a more concrete/complicated example, I've created one <a href="https://stackoverflow.com/questions/59580504/numpy-vectorization-messes-up-data-type-2">here</a>.</p> <p>Edit: I was wondering if using type hinting would make a difference - if <code>numpy</code> would actually take this information into account - but I doubt it, as using this signature <code>def is_past_midmonth(dt: float) -&gt; bool:</code>, where <code>float</code> is obviously wrong, gives the same error. I'm pretty new to type hinting though, and I don't have an IDE that supports it, so it's a bit hard for me to debug.</p> <p>Many thanks!</p>
<p>Have you considered passing the day as <code>int</code> instead of the <code>datetime64[ns]</code>?</p>

<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np

# I'd avoid using dt as a name, since it's commonly an alias for datetime
def is_past_midmonth1(d):
    return (d.day &gt; 15)

def is_past_midmonth2(day):
    return (day &gt; 15)

N = int(1e4)
df = pd.DataFrame({'date':pd.date_range('2020-01-01', freq='6D', periods=N)})
</code></pre>

<h1>Apply (using datetime)</h1>

<pre class="lang-py prettyprint-override"><code>%%time
df['past_midmonth1'] = df['date'].apply(lambda x: is_past_midmonth1(x))

CPU times: user 55.4 ms, sys: 0 ns, total: 55.4 ms
Wall time: 53.8 ms
</code></pre>

<h1>Apply (using int)</h1>

<pre class="lang-py prettyprint-override"><code>%%time
df['past_midmonth2'] = (df['date'].dt.day).apply(lambda x: is_past_midmonth2(x))

CPU times: user 4.71 ms, sys: 0 ns, total: 4.71 ms
Wall time: 4.16 ms
</code></pre>

<h1><code>np.vectorize</code></h1>

<pre class="lang-py prettyprint-override"><code>%%time
df['past_midmonth2_vec'] = np.vectorize(is_past_midmonth2)(df['date'].dt.day)

CPU times: user 4.2 ms, sys: 75 µs, total: 4.27 ms
Wall time: 3.49 ms
</code></pre>

<h1>Vectorizing your code</h1>

<pre class="lang-py prettyprint-override"><code>%%time
df['past_midmonth3'] = df["date"].dt.day&gt;15

CPU times: user 3.1 ms, sys: 11 µs, total: 3.11 ms
Wall time: 2.41 ms
</code></pre>

<h1>Timing</h1>

<p><a href="https://i.stack.imgur.com/BbZRe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BbZRe.png" alt="enter image description here"></a></p>
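<p>If you do want to keep <code>np.vectorize</code> with the Timestamp-taking function itself, one hedged workaround (an assumption on my part, not tested against every pandas version) is to hand it an object array of Python datetimes instead of the Series, whose underlying values are <code>datetime64[ns]</code>:</p>

<pre><code>import numpy as np

# .dt.to_pydatetime() yields an object array of datetime.datetime,
# so each element keeps its .day attribute inside the vectorized call
dates = df['date'].dt.to_pydatetime()
df['past_midmonth_ts'] = np.vectorize(is_past_midmonth)(dates)
</code></pre>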
python|pandas|numpy
3
5,442
32,176,542
Sometimes get a dataframe returned instead of a Series
<p>I have a looping structure in one class that retrieves rows from a dataframe within another class. The rows are retrieved one by one which means they are returned as a Series. I then perform several operations on the Series and then update the original dataframe row with the changes.</p> <p>All of this works fine 99% of the time, but on very rare occasions instead of getting a Series returned to me I get a dataframe. This makes no sense to me because there are no duplicates so I should get a Series returned to me every single time. Here is basically what im doing:</p> <pre><code>class XYZ: state_df = #create dataframe and populate it def __init__(self): pass def get_state(self, rowname): return self.state_df.loc[rowname].copy() def update_state(self, new_symbol_state): self.state_df.loc[new_symbol_state.name] = new_symbol_state class ABC: def __init__(self): pass def process(): xyz = MyClass.XYZ() state_series = xyz.get_state(rowname) # do stuff with the dataframe row which should be a series # ie: state_series. Then update the original dataframe row xyz.update_state(state_series) </code></pre> <p>So like I said, 99% of the time I get a Series returned to me, I perform some operations on it, then I send it back to the original dataframe and all is fine. However every now and again I get a dataframe instead of a series which makes no sense. Even if I print out the dataframe, it shows that it only has one row (ie: no duplicates), therefore it should be a Series?</p> <p>I need a way to ensure that I ALWAYS get a Series returned to me when calling <code>state_series = xyz.get_state(rowname)</code>. Is there a way to make sure I always get a series returned to me? Or at least if I get a dataframe returned to me which only has 1 row, then how do I change it into a Series.</p>
<p><code>df.loc[rowname]</code> would return a DataFrame if <code>rowname</code> is a list, instead of a single element. Example:</p>

<pre><code>In [14]: df
Out[14]:
   A  B
0  1  3
1  2  4
2  3  5
3  4  5

In [15]: df.loc[0]
Out[15]:
A    1
B    3
Name: 0, dtype: int64

In [16]: type(df.loc[0])
Out[16]: pandas.core.series.Series

In [17]: df.loc[[0]]
Out[17]:
   A  B
0  1  3

In [18]: type(df.loc[[0]])
Out[18]: pandas.core.frame.DataFrame
</code></pre>

<p>Since we cannot see where <code>rowname</code> is coming from, I am guessing this could be the issue; you can check why <code>rowname</code> would sometimes be a list instead of a single value.</p>
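<p>To cover the fallback the question asks about, collapsing an accidental one-row DataFrame into a Series, <code>.squeeze()</code> (or <code>.iloc[0]</code>) does it:</p>

<pre><code>row = df.loc[[0]]      # one-row DataFrame
s = row.squeeze()      # now a Series
# equivalently: s = row.iloc[0]
</code></pre>

<p>Note that <code>squeeze()</code> collapses a 1x1 frame all the way down to a scalar.</p>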
python|pandas
1
5,443
18,708,642
Unexpected eigenvectors in numPy
<p>I have seen <a href="https://stackoverflow.com/questions/13739186/compute-eigenvector-using-a-dominant-eigenvalue">this</a> question, and it is relevant to my attempt to compute the dominant eigenvector in Python with numPy.</p> <p>I am trying to compute the dominant eigenvector of an n x n matrix without having to get into too much heavy linear algebra. I did cursory research on determinants, eigenvalues, eigenvectors, and characteristic polynomials, but I would prefer to rely on the numPy implementation for finding eigenvalues as I believe it is more efficient than my own would be.</p> <p>The problem I encountered was that I used this code:</p> <pre><code> markov = array([[0.8,0.2],[.1,.9]]) print eig(markov) </code></pre> <p>...as a test, and got this output:</p> <pre><code> (array([ 0.7, 1. ]), array([[-0.89442719, -0.70710678], [ 0.4472136 , -0.70710678]])) </code></pre> <p>What concerns me about this is that by the Perron-Frobenius theorem, all of the components of the second eigenvector should be positive (since, according to Wikipedia, "a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector has strictly positive components").</p> <p>Anyone know what's going on here? Is numPy wrong? Have I found an inconsistency in ZFC? Or is it just me being a noob at linear algebra, Python, numPy, or some combination of the three?</p> <p>Thanks for any help that you can provide. Also, this is my first SO question (I used to be active on cstheory.se though), so any advice on improving the clarity of my question would be appreciated, too.</p>
<p>You are just misinterpreting <code>eig</code>'s return. According to the docs, the second return argument is</p> <blockquote> <p>The normalized (unit “length”) eigenvectors, such that the column v[:,i] is the eigenvector corresponding to the eigenvalue w[i].</p> </blockquote> <p>So the eigenvector corresponding to eigenvalue <code>1</code> is not <code>[ 0.4472136 , -0.70710678]</code>, but <code>[-0.70710678, -0.70710678]</code>, as can be easily verified:</p> <pre><code>&gt;&gt;&gt; markov.dot([ 0.4472136 , -0.70710678]) # not an eigenvector array([ 0.21634952, -0.59167474]) &gt;&gt;&gt; markov.dot([-0.70710678, -0.70710678]) # an eigenvector array([-0.70710678, -0.70710678]) </code></pre>
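<p>So to pull out the dominant eigenvector explicitly, index a column, not a row. And since eigenvectors are only defined up to sign, the strictly positive Perron-Frobenius vector is recovered by flipping the sign if needed:</p>

<pre><code>import numpy as np
from numpy.linalg import eig

markov = np.array([[0.8, 0.2], [0.1, 0.9]])
w, v = eig(markov)
dominant = v[:, np.argmax(w)]   # column for the largest eigenvalue
if dominant[0] &lt; 0:             # fix the arbitrary sign
    dominant = -dominant
print(dominant)                 # [0.70710678 0.70710678]
</code></pre>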
python|numpy|linear-algebra|eigenvector
9
5,444
61,708,442
how to keep pytorch model in redis cache to access model faster for video streaming?
<p>I have this code belonging to <code>feature_extractor.py</code> which is a part of this folder in <a href="https://github.com/masouduut94/deep_sort_pytorch/tree/master/deep_sort/deep" rel="noreferrer">here</a>:</p> <pre><code>import torch import torchvision.transforms as transforms import numpy as np import cv2 from .model import Net class Extractor(object): def __init__(self, model_path, use_cuda=True): self.net = Net(reid=True) self.device = "cuda" if torch.cuda.is_available() and use_cuda else "cpu" state_dict = torch.load(model_path, map_location=lambda storage, loc: storage)['net_dict'] self.net.load_state_dict(state_dict) print("Loading weights from {}... Done!".format(model_path)) self.net.to(self.device) self.size = (64, 128) self.norm = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), ]) def _preprocess(self, im_crops): def _resize(im, size): return cv2.resize(im.astype(np.float32) / 255., size) im_batch = torch.cat([self.norm(_resize(im, self.size)).unsqueeze(0) for im in im_crops], dim=0).float() return im_batch def __call__(self, im_crops): im_batch = self._preprocess(im_crops) with torch.no_grad(): im_batch = im_batch.to(self.device) features = self.net(im_batch) return features.cpu().numpy() if __name__ == '__main__': img = cv2.imread("demo.jpg")[:, :, (2, 1, 0)] extr = Extractor("checkpoint/ckpt.t7") feature = extr(img) print(feature.shape) </code></pre> <p>Now Imagine 200 requests are in row to proceed. The process of loading model for each request makes the code run slowly. </p> <p>So I thought it might be a good idea to keep the pytorch model in cache. I modified it like this:</p> <pre><code>from redis import Redis import msgpack as msg r = Redis('111.222.333.444') class Extractor(object): def __init__(self, model_path, use_cuda=True): try: self.net = msg.unpackb(r.get('REID_CKPT')) finally: self.net = Net(reid=True) self.device = "cuda" if torch.cuda.is_available() and use_cuda else "cpu" state_dict = torch.load(model_path, map_location=lambda storage, loc: storage)['net_dict'] self.net.load_state_dict(state_dict) print("Loading weights from {}... Done!".format(model_path)) self.net.to(self.device) packed_net = msg.packb(self.net) r.set('REID_CKPT', packed_net) self.size = (64, 128) self.norm = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), ]) </code></pre> <p>Unfortunately this error comes up:</p> <pre><code> File "msgpack/_packer.pyx", line 286, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 292, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 289, in msgpack._cmsgpack.Packer.pack File "msgpack/_packer.pyx", line 283, in msgpack._cmsgpack.Packer._pack TypeError: can not serialize 'Net' object </code></pre> <p>The reason obviously is because that it cannot convert Net object (<code>pytorch nn.Module</code> class) to bytes. </p> <p>How can I efficiently save pytorch model in cache (or somehow keep it in RAM) and call for it for each request? </p> <p>Thanks everyone.</p>
<p>If you only need to keep the model state in RAM, Redis is not necessary. You could instead mount RAM as a virtual disk and store the model state there. Check out <code>tmpfs</code>.</p>
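<p>A minimal sketch of that idea on Linux, where <code>/dev/shm</code> is typically a RAM-backed tmpfs (the cache path is an assumption; the checkpoint path comes from the question):</p>

<pre><code>import os
import torch

RAM_CKPT = '/dev/shm/reid_state_dict.pth'  # hypothetical tmpfs path

if not os.path.exists(RAM_CKPT):
    # the first request pays the disk read, then parks the weights in RAM
    state_dict = torch.load('checkpoint/ckpt.t7', map_location='cpu')['net_dict']
    torch.save(state_dict, RAM_CKPT)

# later requests load straight from RAM
state_dict = torch.load(RAM_CKPT, map_location='cpu')
</code></pre>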
python|caching|redis|video-streaming|pytorch
2
5,445
61,660,815
Conflict between thousand separator and date format - pandas.read_csv
<p>I have a problem with reading data from csv file with Pythons' read_csv method.</p> <p>Row format: </p> <pre><code>'06.02.2013;544,00;2,52;3,53' </code></pre> <p>With this implementation: </p> <pre><code> df = pd.read_csv(filepath, sep=";", header=5, decimal=",") df['value'] = df['value'].astype(int) </code></pre> <p>Python gives me an error: <strong>invalid literal for int() with base 10: '544,00'</strong>, When I print this dataframe object I can see that some float values have been recognized and some haven't. </p> <pre><code> value value1 value2 Datum 06.02.2013 544,00 2.52 3.53 </code></pre> <p>What I did next was implement a method (even though I do not have thousands in my file): </p> <pre><code>df = pd.read_csv(filepath, sep=";", header=5, decimal=",", thousands = ".") </code></pre> <p>Then I do not get that error, but resulting date is <strong>06022013 instead of 06.02.2013</strong>.</p> <p>To solve that problem, I have tried this:</p> <pre><code>df = pd.read_csv(filepath, sep=";", header=5, dayfirst=True, decimal=",", thousands = ".", parse_dates=[0]) </code></pre> <p>In that case, date is formatted like this: <em>January 2, 2013, midnight.</em></p> <p>And after all of that I have tried to add a <strong>date_parser</strong> to this method like this: </p> <pre><code>df = pd.read_csv(filepath, sep=";", header=5, dayfirst=True, decimal=",", thousands = ".", parse_dates=[0],date_parser=lambda x: datetime.strptime(x, '%d.%m.%Y') ) </code></pre> <p>But it still formatted date like before: <em>January 2, 2013, midnight</em>. Has anyone else encountered such a problem or knows how to solve it?</p> <p>EDIT: So, the real data looks like this (first row after header):</p> <pre><code>0 1 2 3 4 Datum value1 value2 value3 value4 value5 value6 value7 value8 value9 value10 value11 value12 value13 value14 01.03.2020 str1 str2 str3 str4 str5 str6 9,82 9,75 0,75 500,00 544,00 44,00 50,00 49,25 In [1]: df['value11'] = df['value11'].astype(int) Out [1]: invalid literal for int() with base 10: '544,00' </code></pre> <p>Also, error takes place already on first row. I have since come to realize that after changing first row, I get no error. Modified first row:</p> <pre><code>0 1 2 3 4 Datum value1 value2 value3 value4 value5 value6 value7 value8 value9 value10 value11 value12 value13 value14 01.04.2020 str1 str2 str3 str4 str5 str6 36,03 5,46 84,85 23,00 64,00 41,00 59,92 -24,92 </code></pre> <p>Pandas version: 1.0.2</p> <p>EDIT2:</p> <pre><code>df = pd.read_csv(filepath, sep=";", header=5, decimal=",") print(df.iloc[:,7:]) </code></pre> <p>OUTPUT: <a href="https://i.stack.imgur.com/sbgy6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sbgy6.png" alt="**Headers not included because of information sensitivity**"></a></p> <p>EDIT3: I found out how to reproduce this problem. Example of csv file:</p> <pre><code>data.csv 0 1 2 3 4 Datum Datum;value1;value2;value3;value4;value5;value6;value7;value8;value9;value10;value11;value12;value13;value14 01.03.2020;str1;str2;str3;str4;str5;str6;"9,82";"9,75";"0,75";"500,00";"544,00";"44,00";"50,00";"49,25" 01.03.2020;str1;str2;str3;str4;str5;str6;"9,72";"7,00";"27,97";"737,00";"1.123,00";"386,00";"51,03";"23,06" </code></pre> <p>Thanks in advance!</p>
<p>Are you indicating your header row correctly?</p>

<p>Here's a sample CSV:</p>

<pre><code>cat seven_rows.csv
0
1
2
3
4
Datum;value1;value2;value3;value4;value5;value6;value7;value8;value9;value10;value11;value12;value13;value14
01.03.2020;str1;str2;str3;str4;str5;str6;9,82;9,75;0,75;500,00;544,00;44,00;50,00;49,25
</code></pre>

<p>Your original import:</p>

<pre><code>df = pd.read_csv('seven_rows.csv', sep=";", header=5, decimal=",")

        Datum value1 value2 value3 value4 value5 value6  value7  value8  value9  value10  value11  value12  value13  value14
0  01.03.2020   str1   str2   str3   str4   str5   str6    9.82    9.75    0.75    500.0    544.0     44.0     50.0    49.25
</code></pre>

<p>Casting <code>value11</code> to <code>int</code>:</p>

<pre><code>df['value11'] = df['value11'].astype(int)

        Datum value1 value2 value3 value4 value5 value6  value7  value8  value9  value10  value11  value12  value13  value14
0  01.03.2020   str1   str2   str3   str4   str5   str6    9.82    9.75    0.75    500.0      544     44.0     50.0    49.25
</code></pre>
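<p>Since <code>thousands="."</code> also eats the dots in the dates, one hedged workaround (a sketch, assuming the dotted day-first dates and the numeric columns <code>value7</code> through <code>value14</code> from the question's sample) is to skip that option, read everything as text, and clean the numeric columns by hand:</p>

<pre><code>import pandas as pd

df = pd.read_csv(filepath, sep=';', header=5, dtype=str)
df['Datum'] = pd.to_datetime(df['Datum'], format='%d.%m.%Y')

num_cols = ['value7', 'value8', 'value9', 'value10',
            'value11', 'value12', 'value13', 'value14']
for col in num_cols:
    df[col] = (df[col].str.replace('.', '', regex=False)   # drop thousands dots
                      .str.replace(',', '.', regex=False)  # comma decimals become dots
                      .astype(float))
</code></pre>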
python-3.x|pandas|csv|dataframe
1
5,446
61,743,727
Summing two values from different dataframes if certain criteria is matched python
<p>I would like to sum two columns, each in different frame if certain criteria is met.</p> <p>Dataframe 1:</p> <pre><code>desk Type total_position desk1 ES 786.0 desk1 ES1 100 desk2 ES1 0 desk2 ES2 10 desk3 ES 0 desk4 ES1 0 desk5 ES -757 </code></pre> <p>Dataframe 2:</p> <pre><code>desk Type total_position desk1 ES -758.0 desk2 ES 0 desk3 ES -29 desk4 ES 0.0 desk5 ES 786.0 </code></pre> <p>I would like to sum both the positions if only the type is "ES" in the first dataframe and it is the same desk.</p> <p>How do i do that?</p> <p>Expected Answer</p> <pre><code>desk Type total_position desk1 ES 29 desk2 ES1 0 desk3 ES -29 desk4 ES1 0 desk5 ES 29 </code></pre>
<p>I would <code>map</code> and then <code>add</code>:</p>

<pre><code>df1['total_position'] = df1['total_position'].add(
    df1['desk'].map(df2.set_index('desk')['total_position']))
print(df1)
</code></pre>

<hr>

<pre><code>    desk Type  total_position
0  desk1   ES            28.0
1  desk2  ES1             0.0
2  desk3   ES           -29.0
3  desk4  ES1             0.0
4  desk5   ES            29.0
</code></pre>

<p>EDIT for type:</p>

<pre><code>m = (df1['desk'].map(df2.set_index('desk')['total_position'])
       .where(df1['Type'].eq('ES')).fillna(0))
df1['total_position'] = df1['total_position'].add(m)
print(df1)

    desk Type  total_position
0  desk1   ES            28.0
1  desk2  ES1             0.0
2  desk3   ES           -29.0
3  desk4  ES1             0.0
4  desk5   ES            29.0
</code></pre>
python|pandas|numpy
3
5,447
62,028,664
Drop_duplicates fails to drop exact match?
<p>I am scanning for duplicate rows in imported data and I am using pd.duplicated &amp; pd.drop_duplicates to find &amp; drop duplicate rows. I have a set of rows which seem to be exact duplicates. Previously the columns were in a different order, but I merged the data &amp; the problem persists.</p> <p><strong>EDIT:</strong> I should have noted that my data is mixed float/str, so I cannot use numpy methods. I want the solution to be adaptable to variable numbers of columns, so I cannot manually reorder them.</p> <p>Example of two rows that are not being flagged by drop_duplicates:</p> <pre><code>Datetime 2019-09-05 17:36:38 Site Name glacier hut Chlorophyll RFU 0.81 Chlorophyll ug/L 2.93 Cond µS/cm 2593.8 fDOM QSU 76.75 fDOM RFU 24.79 nLF Cond µS/cm 3061.3 ODO % sat 78.6 ODO % local 78.6 ODO mg/L 7.44 ORP mV 196.9 Sal psu 1.58 SpCond µS/cm 3024 BGA PC RFU -0.1 BGA PC ug/L -0.1 TDS mg/L 1966 Turbidity FNU 19.49 TSS mg/L 0 Wiper Position volt 1.211 pH 4.41 pH mV 149.2 Temp °C 17.553 Battery V 5.9 Cable Pwr V 0 sonde_id 19E100810 field_monitor 0 </code></pre> <pre><code>Datetime 2019-09-05 17:36:38 Site Name glacier hut Chlorophyll RFU 0.81 Chlorophyll ug/L 2.93 Cond µS/cm 2593.8 fDOM QSU 76.75 fDOM RFU 24.79 nLF Cond µS/cm 3061.3 ODO % sat 78.6 ODO % local 78.6 ODO mg/L 7.44 ORP mV 196.9 Sal psu 1.58 SpCond µS/cm 3024 BGA PC RFU -0.1 BGA PC ug/L -0.1 TDS mg/L 1966 Turbidity FNU 19.49 TSS mg/L 0 Wiper Position volt 1.211 pH 4.41 pH mV 149.2 Temp °C 17.553 Battery V 5.9 Cable Pwr V 0 sonde_id 19E100810 field_monitor 0 </code></pre> <p>Both also have identical dtypes.</p> <pre><code>Datetime datetime64[ns] Site Name object Chlorophyll RFU float64 Chlorophyll ug/L float64 Cond µS/cm float64 fDOM QSU float64 fDOM RFU float64 nLF Cond µS/cm float64 ODO % sat float64 ODO % local float64 ODO mg/L float64 ORP mV float64 Sal psu float64 SpCond µS/cm float64 BGA PC RFU float64 BGA PC ug/L float64 TDS mg/L float64 Turbidity FNU float64 TSS mg/L float64 Wiper Position volt float64 pH float64 pH mV float64 Temp °C float64 Battery V float64 Cable Pwr V float64 sonde_id object field_monitor float64 </code></pre> <pre><code>Datetime datetime64[ns] Site Name object Chlorophyll RFU float64 Chlorophyll ug/L float64 Cond µS/cm float64 fDOM QSU float64 fDOM RFU float64 nLF Cond µS/cm float64 ODO % sat float64 ODO % local float64 ODO mg/L float64 ORP mV float64 Sal psu float64 SpCond µS/cm float64 BGA PC RFU float64 BGA PC ug/L float64 TDS mg/L float64 Turbidity FNU float64 TSS mg/L float64 Wiper Position volt float64 pH float64 pH mV float64 Temp °C float64 Battery V float64 Cable Pwr V float64 sonde_id object field_monitor float64 </code></pre>
<p>Are there any similar rows in your dataframe? If not, note that the <code>duplicated</code> method returns <code>True</code> only for the second occurrence of the same row, for example:</p>

<pre><code>df = pd.DataFrame([[1,2,3],[2,3,4],[3,4,5],[1,2,3]], columns=["a","b","c"])

df.duplicated()

0    False
1    False
2    False
3     True
dtype: bool
</code></pre>

<p>Edit: you also have to consider that the <code>drop_duplicates()</code> method doesn't edit your original dataframe; it returns a copy of it, so you have to assign it back manually:</p>

<pre><code>df = df.drop_duplicates()
</code></pre>

<p>You can also give a subset for testing specific columns, such as:</p>

<pre><code>df = df.drop_duplicates(subset=['sonde_id','..','...etc'], keep='last')
</code></pre>
python|pandas|dataframe
2
5,448
58,122,871
How to use tensorflow model to train and predict mouse movement?
<pre><code>&lt;html&gt; &lt;body id='body'&gt; &lt;button onclick="StartData(event)"&gt; Start&lt;/button&gt; &lt;button onclick="getStopCordinates(event)"&gt;Stop&lt;/button&gt; &lt;script&gt; let inputs = []; let labels = []; function Mouse(event) { inputs.push({ x: event.clientX, y: event.clientY }) } function StartData() { document.getElementById('body').addEventListener("mouseover", Mouse()) } function getStopCordinates(event) { labels.push({ x: event.clientX, y: event.clientY }) document.getElementById('body').removeEventListener("mouseover", Mouse()) } &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I am using above code to capture all the x y coordinates of the mouse in the body . When the user moves the pointer towards the stop button i am capturing all the x,y coordinates for this . and when user clicks stop i am capturing stop coordinates also . now i want to train the tensorflow js model from the captured points so that when user moves the mouse with same trajectory i can predict that he will click the stop button .</p> <p>tensorflow code : </p> <pre><code> const model = tf.sequential(); // Add a single hidden layer model.add(tf.layers.dense({inputShape: [2], units: 1, useBias: true})); // Add an output layer model.add(tf.layers.dense({units: 1, useBias: true})); const inputTensor = tf.tensor2d(inputs, [inputs.length, 1]); const labelTensor = tf.tensor2d(labels, [labels.length, 1]); trainModel(model,inputs,labels) async function trainModel(model, inputs, labels) { // Prepare the model for training. model.compile({ optimizer: tf.train.adam(), loss: tf.losses.meanSquaredError, metrics: ['mse'], }); const batchSize = 32; const epochs = 50; return await model.fit(inputs, labels, { batchSize, epochs, shuffle: true, callbacks: tfvis.show.fitCallbacks( { name: 'Training Performance' }, ['loss', 'mse'], { height: 200, callbacks: ['onEpochEnd'] } ) }); } </code></pre> <p>but this code gives error as the inputs and labels are not the same so how to correct this code for the above result ?</p>
<p>The inputs and labels should be an array of arrays and not an array of objects. The inputs should rather be</p>

<pre><code> [[1, 2], [4, 6], ...]
</code></pre>

<p>The same thing holds for the labels.</p>

<p>Since you are predicting 2 values, the last layer should have 2 units:</p>

<pre><code> const model = tf.sequential();
 // Add a single hidden layer
 model.add(tf.layers.dense({inputShape: [2], units: 1, useBias: true}));

 // Add an output layer with 2 units, one per predicted coordinate
 model.add(tf.layers.dense({units: 2, useBias: true}));
</code></pre>

<p>The last thing - surely not the least - is to add an activation in order to add non-linearity to the model.</p>
javascript|tensorflow|machine-learning|deep-learning|tensorflow.js
0
5,449
57,843,272
Add scatterplot with different colors and size based on volume?
<p>I want to create a scatterplot with matplotlib and a simple pandas dataframe. Have tested almost everything and nothing works and honestly I have just now ordered a book on matplotlib.</p> <p>Dataframe looks like this</p> <pre><code> Time Type Price Volume 0 03:03:26.936 B 1.61797 1000000 1 03:41:06.192 B 1.61812 1000000 2 05:59:12.799 B 1.62280 410000 3 05:59:12.814 B 1.62280 390000 4 06:43:33.607 B 1.62387 1000000 5 06:43:33.621 S 1.62389 500000 6 06:47:36.834 B 1.62412 1000000 7 08:15:13.903 B 1.62589 1000000 8 09:15:31.496 S 1.62296 500000 9 10:29:24.072 S 1.61876 500000 10 10:49:08.619 S 1.61911 1000000 11 11:07:01.213 S 1.61882 1000000 12 11:07:01.339 S 1.61880 200000 13 11:23:00.300 S 1.61717 1000000 </code></pre> <p>Type B should be green in color and Type S Blue and dots should be different in size depending on volume! Any idea how to achieve this or a guide somewhere?</p>
<p>A solution using just <code>matplotlib</code>:</p>

<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter

# Your Time column is stored as strings. Convert them to Timestamp
# so matplotlib can plot a proper timeline
times = pd.to_datetime(df['Time'])

# Set the marker's color: 'B' is green, 'S' is blue
colors = df['Type'].map({ 'B': 'green', 'S': 'blue' })

# Limit the x-axis from 0:00 to 24:00
xmin = pd.Timestamp('0:00')
xmax = xmin + pd.Timedelta(days=1)

# Make the plot
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(x=times, y=df['Price'], c=colors, s=df['Volume'] / 2000, alpha=0.2)
ax.set(
    xlabel='Time',
    xlim=(xmin, xmax),
    ylabel='Price'
)
ax.xaxis.set_major_formatter(DateFormatter('%H:%M'))
</code></pre>

<p>Result:</p>

<p><a href="https://i.stack.imgur.com/XKRbb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XKRbb.png" alt="Scatter Plot"></a></p>
python|pandas|matplotlib
2
5,450
57,757,871
Create frequency Tables for all the categorical variables of a dataframe in python
<p>I have a data-frame that has columns containing both continuous and categorical variables. I want to create frequency table for all the categorical variables using pandas.</p> <p>I have used <code>.value_counts()</code> function to generate the table, but it is giving me a list.</p> <h1>This code returns a list</h1> <pre><code>for i in Customer_Final.columns: if Customer_Final[i].dtype=='object': print(Customer_Final[i].value_counts()) </code></pre> <h1>The output is</h1> <pre><code>13-07-2011 35 25-09-2011 33 22-11-2011 33 21-12-2013 33 23-10-2011 33 9/3/2013 32 25-08-2012 32 25-11-2012 32 4/1/2013 31 1/1/2014 31 7/10/2011 31 11/10/2012 31 3/2/2014 31 6/10/2012 31 17-04-2011 31 15-03-2013 31 5/4/2012 31 7/5/2012 31 11/5/2012 31 29-02-2012 30 18-02-2012 30 23-05-2012 30 13-06-2011 30 15-09-2013 30 18-12-2013 30 6/8/2013 29 3/1/2014 29 26-09-2011 29 23-07-2012 29 26-07-2011 29 .. 7/2/2012 12 29-08-2011 12 16-05-2013 12 6/1/2014 12 26-11-2012 12 10/9/2013 12 24-11-2013 12 21-05-2011 12 11/4/2013 12 23-04-2013 12 25-12-2011 12 4/6/2011 11 13-04-2013 11 23-05-2011 11 26-05-2013 11 27-11-2013 11 15-05-2012 11 24-01-2012 11 6/4/2011 10 27-01-2012 10 21-05-2013 10 15-04-2012 9 9/12/2012 8 29-01-2012 6 22-02-2014 3 23-02-2014 2 24-02-2014 2 27-02-2014 1 28-02-2014 1 21-02-2014 1 Name: tran_date, Length: 1129, dtype: int64 e-Shop 9311 MBR 4661 Flagship store 4577 TeleShop 4504 Name: Store_type, dtype: int64 27-12-1988 32 17-09-1982 32 25-02-1974 27 20-03-1972 25 18-11-1991 24 09-06-1970 24 26-05-1977 23 20-12-1981 22 08-03-1983 22 21-07-1988 22 08-05-1988 21 08-09-1987 21 05-12-1992 21 16-04-1978 21 23-06-1986 21 06-12-1982 21 19-03-1971 20 20-04-1980 20 10-11-1973 20 21-03-1990 20 27-11-1991 19 11-07-1971 19 29-06-1985 19 02-02-1974 19 26-09-1988 19 26-06-1975 19 14-06-1989 19 08-10-1987 19 07-05-1974 19 17-08-1976 19 .. 29-01-1972 1 03-10-1972 1 03-03-1980 1 22-01-1986 1 01-11-1977 1 01-05-1980 1 01-09-1992 1 04-01-1991 1 08-05-1981 1 06-06-1980 1 14-07-1979 1 28-08-1988 1 02-01-1985 1 29-01-1979 1 19-08-1980 1 08-05-1979 1 21-07-1980 1 12-09-1970 1 23-08-1991 1 04-05-1981 1 29-07-1985 1 22-01-1989 1 23-04-1992 1 01-06-1972 1 21-07-1986 1 10-08-1984 1 12-07-1977 1 14-04-1984 1 26-08-1987 1 01-05-1982 1 Name: DOB, Length: 3987, dtype: int64 M 11811 F 11233 Name: Gender, dtype: int64 Books 6069 Electronics 4898 Home and kitchen 4129 Footwear 2999 Clothing 2960 Bags 1998 Name: prod_cat, dtype: int64 Women 3048 Mens 2912 Kids 1997 Tools 1062 Fiction 1043 Kitchen 1037 Children 1035 Comics 1031 Mobiles 1031 Bath 1023 Furnishing 1007 Non-Fiction 1004 DIY 989 Cameras 985 Personal Appliances 972 Academic 967 Computers 958 Audio and video 952 Name: prod_subcat, dtype: int64 </code></pre> <p>I want to see separate data-frames of the frequency table for each column which has categorical variables in it. But, How can I do this using Pandas?</p> <h1>So, I thought Groupby should help, and written this code</h1> <pre><code>for i in Customer_Final.columns: if Customer_Final[i].dtype=='object': return Customer_Final.groupby([i]).count.reset_index() </code></pre> <h1>But got error</h1> <pre><code>File &quot;&lt;ipython-input-16-5889a174ef03&gt;&quot;, line 3 return Customer_Final.groupby([i]).count().reset_index() ^ SyntaxError: 'return' outside function </code></pre> <p>Please help on How I can return Dataframes containing frequency tables of all the categorical variables? Thanks in advance for helping!</p>
<p>Try this:</p>

<pre><code># Frequency tables for each categorical feature
# (display() comes from IPython/Jupyter; use print() in a plain script)
for column in data.select_dtypes(include=['object']).columns:
    display(pd.crosstab(index=data[column], columns='% observations', normalize='columns') * 100)
</code></pre>
python|pandas
1
5,451
57,849,420
Convert nan to Zero when numpy dtype is "object"
<p>I have a numpy array that contains nan. I attempted to convert those nans to zeros using </p> <pre><code> X_ = np.nan_to_num(X_, copy = False) </code></pre> <p>but it didn't work. I suspect its because dtype of X_ is object. I attempted to convert that to float64 using </p> <pre><code>X_= X_.astype(np.float64) </code></pre> <p>but that didn't work either</p> <p>Is there a way to convert nan to zero when dtype is object?</p>
<p>The &quot;object&quot; dtype was causing me a problem too. But your <code>astype(np.float64)</code> actually did work for me. Thanks!</p>

<pre><code>import numpy as np
import pandas as pd

print(&quot;Creating a numpy array from a mixed type DataFrame can create an 'object' numpy array dtype:&quot;)
A = np.array([1., 2., 3., np.nan]);                    print('A:', A, A.dtype)
B = pd.DataFrame([[1., 2., 3., np.nan,],
                  [1, 2, 3, '4']]  ).to_numpy();       print('B:', B, B.dtype, '\n')

print('Converting vanilla A is fine:\n', np.nan_to_num(A, nan=-99), '\n')
print('But not B:\n', np.nan_to_num(B, nan=-99), '\n')
print('Not even this slice of B, \nB[0, :] : ', B[0, :])
print(np.nan_to_num(B[0, :], nan=-99), '\n')
print('The astype(np.float64) does the trick here:\n',
      np.nan_to_num(B[0, :].astype(np.float64), nan=-99), '\n\n')
</code></pre>

<p>Output:</p>

<pre><code>Creating a numpy array from a mixed type DataFrame can create an 'object' numpy array dtype:
A: [ 1.  2.  3. nan] float64
B: [[1.0 2.0 3.0 nan]
 [1.0 2.0 3.0 '4']] object

Converting vanilla A is fine:
 [  1.   2.   3. -99.]

But not B:
 [[1.0 2.0 3.0 nan]
 [1.0 2.0 3.0 '4']]

Not even this slice of B,
B[0, :] :  [1.0 2.0 3.0 nan]
[1.0 2.0 3.0 nan]

The astype(np.float64) does the trick here:
 [  1.   2.   3. -99.]
</code></pre>
python|numpy|nan|dtype
0
5,452
34,108,134
Finding unique elements in cells of pandas DF and expanding DF to include columns with the names of those unique elements
<p>I have a DF that looks like this:</p> <p><a href="https://i.stack.imgur.com/zjLD8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zjLD8.png" alt="enter image description here"></a></p> <p>I want to create a new DF, let's say instrumentsDF, in some sort of vectorized form so I get something like this:</p> <pre><code>0 Piano Guitar Viola 0 0 0 1 1 0 1 0 2 1 0 1 3 0 1 0 4 1 1 1 </code></pre> <p>I don't know how many unique favored_instruments I have in the cells, which means I don't know how many columns I will have in the new DF.</p> <p>My code thus far is this, but can't think of how to expand it to output what I need:</p> <pre><code>crunk = lambda x: pd.Series([i for i in reversed(x.split(','))]) vector = compDf['favored_instrument'].apply(crunk) print vector </code></pre> <p>Which produces this:</p> <pre><code> 0 1 2 0 Piano NaN NaN 1 Piano NaN NaN 2 Piano NaN NaN 3 Guitar Piano NaN 4 Piano NaN NaN </code></pre> <p>I could try to iterate over each row of the DF, split the value with ',', and add to a python list, but that approach could be slow. Is there a better way? </p>
<p>Pandas has the <code>get_dummies</code> function:</p>

<pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; data = pd.DataFrame({'instrument': ['Piano', 'Piano', 'Guitar', 'Viola', 'Viola', 'Guitar']})
&gt;&gt;&gt; pd.get_dummies(data['instrument'], prefix='instrument')
   instrument_Guitar  instrument_Piano  instrument_Viola
0                  0                 1                 0
1                  0                 1                 0
2                  1                 0                 0
3                  0                 0                 1
4                  0                 0                 1
5                  1                 0                 0
</code></pre>
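<p>And to expand the original frame with those indicator columns, as the question asks, concatenate them back on:</p>

<pre><code>&gt;&gt;&gt; result = pd.concat([data, pd.get_dummies(data['instrument'], prefix='instrument')], axis=1)
</code></pre>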
python|pandas|split
1
5,453
54,833,076
Tensorflow model from very 1st tutorial doesn't work as expected
<p>Trying to start learning tensorflow by tutorials. Started with 1st one (of course) and for some reason when I try to learn model it shows loss number between 10 and 12 and accuracy number is 0.2 and 0.3 but in tutorial numbers are very different. Before I had some troubles installing tensorflow, as I tried to make gpu work with it, but I only got errors, so I reinstalled it with cpu support only (python-tensorflow package archlinux). But also I get <code>2019-02-22 19:18:02.042566: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA</code> error. I don't know if that's the case.</p> <p>My code is:</p> <pre><code>import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation=tf.nn.relu), keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(train_images, train_labels, epochs=5) test_loss, test_acc = model.evaluate(test_images, test_labels) print('Test accuracy:', test_acc) </code></pre> <p>Thanks in advance!</p>
<blockquote>
  <p>Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA</p>
</blockquote>

<p>This is just a warning, stating that you could compile TensorFlow from source to be able to use those instructions.</p>

<p>As for your model, it's OK, but if you normalize the inputs, you'll get 70% accuracy:</p>

<pre><code>train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255
</code></pre>

<p>You can find more on that here: <a href="https://stats.stackexchange.com/questions/211436/why-normalize-images-by-subtracting-datasets-image-mean-instead-of-the-current">https://stats.stackexchange.com/questions/211436/why-normalize-images-by-subtracting-datasets-image-mean-instead-of-the-current</a></p>
tensorflow|archlinux|python-3.7
1
5,454
55,035,424
Apply function to a range of specific rows
<p>I have the following dataframe <code>df</code>:</p> <pre><code> bucket_value is_new_bucket dates 2019-03-07 0 1 2019-03-08 1 0 2019-03-09 2 0 2019-03-10 3 0 2019-03-11 4 0 2019-03-12 5 1 2019-03-13 6 0 2019-03-14 7 1 </code></pre> <p>I want to apply a specific function (let’s say the mean function) to each <code>bucket_value</code> data groups where the column <code>is_new_bucket</code> is equal to zero, such that the resulting dataframe would look like this:</p> <pre><code> mean_values dates 2019-03-08 2.5 2019-03-13 6.0 </code></pre> <p>In other words, applying a function to the consecutive rows where <code>is_new_bucket = 0</code>, which takes the <code>bucket_value</code> as input.</p> <p>For instance, if I want to apply the max function, the resulting dataframe would look like this:</p> <pre><code> max_values dates 2019-03-11 4.0 2019-03-13 6.0 </code></pre>
<p>Using <code>cumsum</code> with a filter:</p>

<pre><code>df.reset_index(inplace=True)
s = df.loc[df.is_new_bucket==0].groupby(df.is_new_bucket.cumsum()).agg({'date':'first','bucket_value':['mean','max']})
s

                     date bucket_value
                    first         mean max
is_new_bucket
1              2019-03-08          2.5   4
2              2019-03-13          6.0   6
</code></pre>

<p>Updated:</p>

<pre><code>df.loc[df.loc[df.is_new_bucket==0].groupby(df.is_new_bucket.cumsum())['bucket_value'].idxmax()]

         date  bucket_value  is_new_bucket
4  2019-03-11             4              0
6  2019-03-13             6              0
</code></pre>

<p>Updated 2: after using <code>cumsum</code> to create the group key <code>Newkey</code>, you can do whatever you need based on the group key:</p>

<pre><code>df['Newkey'] = df.is_new_bucket.cumsum()
df

         date  bucket_value  is_new_bucket  Newkey
0  2019-03-07             0              1       1
1  2019-03-08             1              0       1
2  2019-03-09             2              0       1
3  2019-03-10             3              0       1
4  2019-03-11             4              0       1
5  2019-03-12             5              1       2
6  2019-03-13             6              0       2
7  2019-03-14             7              1       3
</code></pre>
python|pandas|dataframe
2
5,455
54,701,681
String to n*n matrix in python
<p>I am an undergraduate student who loves programming. I encountered a problem today and I don't know how to solve this problem. I looked for "Python - string to matrix representation" (<a href="https://stackoverflow.com/questions/31877901/python-string-to-matrix-representation">Python - string to matrix representation</a>) for help, but I am still confused about this problem.</p> <p>The problem is in the following:</p> <p>Given a string of whitespace separated numbers, create an nxn matrix (a 2d list where with the same number of columns as rows)and return it. The string will contain a perfect square number of integers. The int() and split() functions may be useful. </p> <p>Example: </p> <p>Input: '1 2 3 4 5 6 7 8 9' </p> <p>Output: [[1,2,3],[4,5,6],[7,8,9]] </p> <p>Example 2: </p> <p>Input: '1' </p> <p>Output: [[1]]</p> <p>My answer:</p> <pre><code>import numpy as np def string_to_matrix(str_in): str_in_split = str_in.split() answer = [] for element in str_in_split: newarray = [] for number in element.split(): newarray.append(int(number)) answer.append(newarray) print (answer) </code></pre> <p>The test results are in the following:</p> <pre><code>Traceback (most recent call last): File "/grade/run/test.py", line 20, in test_whitespace self.assertEqual(string_to_matrix('1 2 3 4'), [[1,2],[3,4]]) AssertionError: None != [[1, 2], [3, 4]] Stdout: [[4]] </code></pre> <p>as well as</p> <pre><code>Traceback (most recent call last): File "/grade/run/test.py", line 15, in test_small self.assertEqual(string_to_matrix('1 2 3 4'), [[1,2],[3,4]]) AssertionError: None != [[1, 2], [3, 4]] Stdout: [[4]] </code></pre> <p>as well as</p> <pre><code>Traceback (most recent call last): File "/grade/run/test.py", line 10, in test_one self.assertEqual(string_to_matrix('1'), [[1]]) AssertionError: None != [[1]] Stdout: [[1]] </code></pre> <p>as well as</p> <pre><code>Traceback (most recent call last): File "/grade/run/test.py", line 25, in test_larger self.assertEqual(string_to_matrix('4 3 2 1 8 7 6 5 12 11 10 9 16 15 14 13'), [[4,3,2,1], [8,7,6,5], [12,11,10,9], [16,15,14,13]]) AssertionError: None != [[4, 3, 2, 1], [8, 7, 6, 5], [12, 11, 10, 9], [16, 15, 14, 13]] Stdout: [[13]] </code></pre> <p>I am still confused how to solve this problem. Thank you very much for your help! </p>
<p>Assuming you don't want <code>numpy</code> and want to use a list of lists:</p> <pre><code>def string_to_matrix(str_in): nums = str_in.split() n = int(len(nums) ** 0.5) return list(map(list, zip(*[map(int, nums)] * n))) </code></pre> <p><code>nums = str_in.split()</code> splits by any whitespace, <code>n</code> is the side length of the result, <code>map(int, nums)</code> converts the numbers to integers (from strings), <code>zip(*[map(int, nums)] * n)</code> groups the numbers in groups of <code>n</code>, <code>list(map(list, zip(*[map(int, nums)] * n)))</code> converts the tuples produced by <code>zip</code> into lists.</p>
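<p>A quick check against the examples from the question:</p>

<pre><code>print(string_to_matrix('1 2 3 4 5 6 7 8 9'))
# [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(string_to_matrix('1'))
# [[1]]
</code></pre>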
python|python-3.x|numpy
2
5,456
49,409,350
Element in a series takes on a different value when assigned to a dataframe
<p>I am really puzzled by this... I have an existing DataFrame and when I assign a series of values (of the same length) to a new column somehow the last element in the series takes on a different value when in the DataFrame. This code</p> <pre><code> print('Standalone 2nd to last: ' + series.iloc[-2]) print('Standalone last: ' + series.iloc[-1]) delta['Etf'] = series print('In a frame 2nd to last: ' + delta['Etf'].iloc[-2]) print('In a frame last: ' + str(delta['Etf'].iloc[-1])) </code></pre> <p>produces this output:</p> <pre><code>Standalone 2nd to last: ZHY CN Standalone last: IBDB US In a frame 2nd to last: ZHY CN In a frame last: nan </code></pre> <p>I appreciate any explanation of this.</p>
<p>Per @emmet02's comment: the index of the data frame was different from the one in the series, and hence they did not align perfectly.</p>
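<p>For anyone hitting the same thing, a minimal sketch of the alignment behavior (hypothetical data, not the asker's): assigning a Series to a DataFrame column matches on index labels, so labels missing from the frame are dropped and labels missing from the series become NaN:</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'A': [1, 2]}, index=[0, 1])
s = pd.Series([10, 20], index=[1, 2])  # note the shifted index

df['B'] = s
print(df)
#    A     B
# 0  1   NaN    (no label 0 in s)
# 1  2  10.0    (aligned on label 1)
</code></pre>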
python|pandas
0
5,457
28,073,651
Link C++ program output with Python script
<p>I have a C++ program that uses some very specific method to calculate pairwise distances for a data set (30,000 elements). The output file would be 20 GB, and look something like this:</p> <pre> point1, point2, distancex pointi, pointj, distancexx ..... </pre> <p>I then input the file to Python and use Python (NumPy) for clustering. It takes forever using Python to read the output file. Is there a way to connect the C++ program directly with my Python code to save time on I/O on the intermediate file? Maybe using SWIG?</p>
<p>I assume you have been saving ascii. You could modify your C++ code to write binary instead, and read it with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html" rel="nofollow">numpy.fromfile</a>.</p> <p>For a more direct connection, you would wrap your C++ code as a library (remove main() and drive it from Python) using swig. This allows you to share the memory of arrays between C++ and Python. </p> <p>You can use either Python's <a href="https://docs.python.org/3/c-api/buffer.html" rel="nofollow">buffer protocol</a> on the C++ side together with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html" rel="nofollow">numpy.frombuffer</a> on the Python side. Or you can use the <a href="http://docs.scipy.org/doc/numpy/reference/c-api.array.html" rel="nofollow">numpy headers</a> to directly work on numpy arrays in C++. Here is a small <a href="https://github.com/martinxyz/python/tree/master/realistic" rel="nofollow">swig example project</a> using the second method. (Disclaimer: I wrote it.)</p>
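<p>For the binary route, a hedged sketch of the Python side, assuming the C++ program writes each record as two little-endian 32-bit ints followed by a 64-bit double (the file name and layout are assumptions):</p>

<pre><code>import numpy as np

rec = np.dtype([('point1', '&lt;i4'), ('point2', '&lt;i4'), ('distance', '&lt;f8')])
data = np.fromfile('distances.bin', dtype=rec)  # one structured array, no parsing

pairs = np.stack([data['point1'], data['point2']], axis=1)  # shape (N, 2)
distances = data['distance']
</code></pre>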
python|c++|numpy|swig|boost-python
0
5,458
73,295,865
what is the correct way of splitting dataset into train, validation and test?
<p>I'm following an <a href="https://vijayabhaskar96.medium.com/tutorial-image-classification-with-keras-flow-from-directory-and-generators-95f75ebe5720" rel="nofollow noreferrer">article</a> which says the test folder should also contain a single folder inside which all the test images are present(there will not be subfolders/label folders). On the other hand the train and validation folders should contain ‘n’ folders each containing images of the respective classes. For example:</p> <p><strong>structure 1</strong></p> <pre><code>/Data //train classA folder classB folder classC folder //val classA folder classB folder classC folder //test test folder </code></pre> <p>Again, I learned about using the python library split-folder which splits the data in the following structure,</p> <p><strong>structure 2</strong></p> <pre><code>/Data //train classA folder classB folder classC folder //val classA folder classB folder classC folder //test classA folder classB folder classC folder </code></pre> <p>I implemented one by using the python library split-folder (<strong>structure 2</strong>) and evaluated the model by using the following method,</p> <pre><code>model.evaluate(test_generator,batch_size=32) </code></pre> <p>here I only provided test_generator(which I got from <em>flow_from_directory</em>) to my evaluate function(I did not use any labels) and I got accuracy around 88%. my confusions are:</p> <ol> <li>Which structure should I follow for the data splitting?</li> <li>How can I evaluate or predict my model if I use <em><strong>structure 1</strong></em>? How can I extract labels from the data?</li> <li>How, despite the fact that I did not supply any labels, the Python library split-folder is evaluating the model without throwing an error?</li> </ol>
<ol>
<li>It looks like <code>Structure 2</code> just splits whatever is available, which is fundamentally correct. In reality, you'll most likely be using <code>Structure 1</code> when using <code>flow_from_directory()</code>. You can't perform <code>evaluate()</code> without labels, so your <code>test_generator</code> is more akin to a validation set, but you can technically evaluate using that &quot;test&quot; data since it would be created the same way, though ideally it is used differently.</li>
<li><code>flow_from_directory()</code> outputs a <code>Dataset</code>, which contains features <code>classes</code> and labels <code>class_indices</code>. When you pass a <code>Dataset</code> object to <code>evaluate()</code>, the method uses both features and labels from the passed variable. If you want to extract the labels from an object returned by <code>flow_from_directory()</code>, and the variable is <code>x</code>, it's <code>x.class_indices</code>, which will be a dictionary. When you pass a <code>Dataset</code> to <code>predict()</code>, only the features are used; the labels are ignored. Unless you need to manually retrieve something within the <code>Dataset</code> object, you don't need to access anything within that object when evaluating or predicting (a sketch of both calls follows below).</li>
<li><code>split_folder</code> does not do anything with your model.</li>
</ol>

<p>The subfolders named after classes for train and val are how <code>flow_from_directory()</code> keeps track of each image's class, by comparing the image to the label (the folder name, i.e. the class). Since prediction is supposed to use unseen data as input, it wouldn't have a label to compare to, hence no labels (and no class subfolders) when you split your test folder out.</p>

<p>Another common approach is to split only your train and test sets, then create your validation set from your training set. Both methods are fundamentally the same.</p>
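<p>A minimal sketch of how the two folder layouts get consumed (directory names follow Structure 1; the generator settings are generic assumptions, not taken from the article):</p>

<pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

# val: class subfolders supply the labels, so evaluate() works
val_gen = datagen.flow_from_directory('Data/val', target_size=(224, 224),
                                      class_mode='categorical')
loss, acc = model.evaluate(val_gen)

# test: a single unlabeled subfolder, so only predict() applies
test_gen = datagen.flow_from_directory('Data/test', target_size=(224, 224),
                                       class_mode=None, shuffle=False)
preds = model.predict(test_gen)
</code></pre>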
python|image|tensorflow|keras|dataset
2
5,459
73,419,075
Pandas list comparison giving value error
<p>I have a dataframe I generate by using</p> <pre><code>df = qr_actions.get_pandas_df(query) </code></pre> <p>and then generate a list of the rows using <code>rows = [r[1] for r in df.iterrows()]</code> and am trying to compare it to another list of rows I generate using the same method by doing <code>(rows1 == rows2).all()</code>, but keep getting the error</p> <pre><code>Name: 0, dtype: bool def __nonzero__(self): raise ValueError(&quot;The truth value of a {0} is ambiguous. &quot; &quot;Use a.empty, a.bool(), a.item(), a.any() or a.all().&quot; &gt; .format(self.__class__.__name__)) E ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). /usr/local/lib/python2.7/dist-packages/pandas/core/generic.py:892: ValueError </code></pre> <p>How can I do this? When I try to use pandas series equality functions I get errors because my objects are lists, but doing rows1 == rows2 alone gives me the output above. How can I solve this?</p> <p>===================================================</p> <p>Alternatively, I know my two rows are both</p> <pre><code>[a 1 Name: 0, dtype: int64] </code></pre> <p>How can I compare them to assert true for testing purposes?</p>
<p>Your <code>rows</code> is a plain list, and the operation <code>rows1 == rows2</code> on lists triggers element-wise <code>Series</code> comparisons whose truth value is ambiguous, so you can't apply the <code>all</code> attribute afterwards. Converting your rows to a <code>pd.Series</code> will solve the issue:</p>

<pre><code>rows = pd.Series([r[1] for r in df.iterrows()])
</code></pre>
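<p>For test assertions specifically, pandas' own equality helpers sidestep the ambiguous-truth-value error entirely (assuming a pandas version that ships <code>pandas.testing</code>):</p>

<pre><code>import pandas.testing as pdt

# element-wise: raises an AssertionError with a diff on any mismatch
for r1, r2 in zip(rows1, rows2):
    pdt.assert_series_equal(r1, r2)

# or skip the row lists and compare the frames directly
assert df1.equals(df2)
</code></pre>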
python|pandas|dataframe|debugging
0
5,460
73,297,945
How to get prior close when you have all stocks in a single DF?
<p>Sorry for the noob question. I have a bunch of stocks in a sqlite3 database:</p> <pre><code>import pandas as pd import sqlite3, config connection = sqlite3.connect(config.db_file) connection.row_factory = sqlite3.Row df = pd.read_sql('SELECT * FROM stock_price', connection) # sort the dataframe df.sort_values(by='stock_id', inplace=True) # # set the index to be this and don't drop df.set_index(keys=['stock_id'], drop=False,inplace=True) </code></pre> <p>When I print the df, it gives me the following (where each stock_id refers to a unique stock, e.g APPL):</p> <pre><code> id stock_id date open high low close volume stock_id 1 1 1 2022-08-02 9.83 9.845 9.83 9.830 584772 1 2 1 2022-08-03 9.84 9.860 9.84 9.820 7711 4 3 4 2022-08-03 10.38 10.380 10.38 10.380 199 5 46 5 2022-08-03 34.75 35.200 34.75 35.200 1007 5 45 5 2022-08-02 34.32 34.550 34.32 34.442 1252 ... ... ... ... ... ... ... ... ... 98 8 98 2022-08-02 28.00 28.095 27.90 28.000 2417 99 71 99 2022-08-02 88.19 88.940 87.15 88.370 1045596 99 72 99 2022-08-03 88.34 88.550 87.65 88.410 982710 100 171 100 2022-08-02 117.58 120.010 117.08 119.270 67795 100 172 100 2022-08-03 119.80 121.940 120.60 121.440 4237 [178 rows x 8 columns] </code></pre> <p>I need to target each unique <code>stock_id</code> individually, and get the prior close.</p> <p>I know if each stock was in its own separate dataframe, I could do something like this:</p> <pre><code>final_df['previous close'] = final_df['c'].shift() </code></pre> <p>But when I've tried that, because everything in one dataframe, then you get one stock getting the previous close of an entirely different stock, which isn't what I want.</p> <p>So my question:</p> <p>What's the best to achieve splitting out all these different stocks from one single dataframe and being able to target them individually, and get the previous close price of each stock?</p>
<p>How about shifting the date and merging, i.e.</p>

<pre><code># conversion (if not already datetime)
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')

# help dataframe
df_help = df.copy()
df_help['date'] = df_help['date'] + pd.Timedelta(1, 'D')  # today's close is tomorrow's prev_close
df_help.rename(columns={'close' : 'prev_close'}, inplace=True)
df_help = df_help[['stock_id', 'date', 'prev_close']]

# merge
df = pd.merge(left=df, right=df_help, on=['stock_id', 'date'], how='left')
</code></pre>

<p>For sample data</p>

<pre><code>df = pd.DataFrame({'stock_id' : [1,1,2,2,3,3,4,4,5,5,6,6],
                   'date' : ['2022-08-02', '2022-08-03', '2022-08-02', '2022-08-03',
                             '2022-08-02', '2022-08-03', '2022-08-02', '2022-08-03',
                             '2022-08-02', '2022-08-03', '2022-08-02', '2022-08-03'],
                   'open' : [25.408, 19.859, 27.801, 9.548, 25.825, 22.746,
                             11.26, 17.841, 10.848, 4.354, 29.09, 13.561],
                   'high' : [38.343, 26.984, 33.553, 30.683, 30.87, 32.342,
                             20.318, 22.889, 18.122, 8.736, 18.894, 13.561],
                   'close' : [8.81, 23.385, 13.484, 19.834, 21.274, 28.743,
                              17.734, 20.824, 14.819, 8.736, 12.628, 5.739],})
</code></pre>

<p>this yields</p>

<pre><code>   stock_id       date    open    high   close  prev_close
0         1 2022-08-02  25.408  38.343   8.810         NaN
1         1 2022-08-03  19.859  26.984  23.385       8.810
2         2 2022-08-02  27.801  33.553  13.484         NaN
3         2 2022-08-03   9.548  30.683  19.834      13.484
4         3 2022-08-02  25.825  30.870  21.274         NaN
5         3 2022-08-03  22.746  32.342  28.743      21.274
6         4 2022-08-02  11.260  20.318  17.734         NaN
7         4 2022-08-03  17.841  22.889  20.824      17.734
8         5 2022-08-02  10.848  18.122  14.819         NaN
9         5 2022-08-03   4.354   8.736   8.736      14.819
10        6 2022-08-02  29.090  18.894  12.628         NaN
11        6 2022-08-03  13.561  13.561   5.739      12.628
</code></pre>
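<p>An alternative sketch that doesn't depend on rows being exactly one calendar day apart (weekends or holidays would break the <code>Timedelta</code> join): sort and shift within each stock.</p>

<pre><code>df = df.sort_values(['stock_id', 'date'])
df['prev_close'] = df.groupby('stock_id')['close'].shift()
</code></pre>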
python|pandas|numpy|stock|ohlc
2
5,461
35,281,427
Fast Python plotting library to draw plots directly on 2D numpy array image buffer?
<p>I often draw 2D plots directly on 2D numpy array image buffer coming from opencv webcam stream using opencv drawing functions. And, I send the numpy array to imshow and video writer to monitor and create a video.</p> <pre><code>import cv2 import numpy as np cap = cv2.VideoCapture(0) ret, frame = cap.read() # frame is a 2D numpy array w640 h480 h,w,_ = frame.shape # (480,640,3) x = np.arange(w) writer = cv2.VideoWriter( 'out.avi', cv2.cv.FOURCC('D','I','V','3'), fps=30, frameSize=(w,h), isColor=True ) while True: ret, frame = cap.read() # frame is a 2D numpy array w640 h480 B = frame[:,:,0].sum(axis=0) B = h - h * B / B.max() G = frame[:,:,1].sum(axis=0) G = h - h * G / G.max() R = frame[:,:,2].sum(axis=0) R = h - h * R / R.max() pts = np.vstack((x,B)).astype(np.int32).T cv2.polylines(frame, [pts], isClosed=False, color=(255,0,0)) pts = np.vstack((x,G)).astype(np.int32).T cv2.polylines(frame, [pts], isClosed=False, color=(0,255,0)) pts = np.vstack((x,R)).astype(np.int32).T cv2.polylines(frame, [pts], isClosed=False, color=(0,0,255)) writer.write(frame) cv2.imshow('frame', frame) key = cv2.waitKey(33) &amp; 0xFF # for 64 bit PC if key in 27: # ESC key break cap.release() writer.release() </code></pre> <p><a href="https://i.stack.imgur.com/DqudH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DqudH.png" alt="enter image description here"></a></p> <p>This works great but I wonder if I can do more stuff like what matplotlib can do such as axes, ticks, grid, title, bar graphs etc without rolling out my own plotting library based on basic cv2 drawing functions, which will be possible but I don't want reinventing the wheel.</p> <p>Looking into <a href="https://wiki.python.org/moin/NumericAndScientific/Plotting" rel="noreferrer">https://wiki.python.org/moin/NumericAndScientific/Plotting</a>, there are so many plotting libraries. So, I feel that one of them might already do this.</p> <p>I thought about using matplotlib and export the plot as image by <code>savefig</code>. But this will be slow for video capture.</p> <p>(edit) I could embed a matplotlib plot into frame using <code>mplfig_to_npimage</code> as suggested in the accepted answer! It seems fast enough for video rate.</p> <pre><code>import cv2 from pylab import * from moviepy.video.io.bindings import mplfig_to_npimage fp = r'C:/Users/Public/Videos/Sample Videos/Wildlife.wmv' cap = cv2.VideoCapture(fp) ret, frame = cap.read() # frame is a 2D numpy array h,w,_ = frame.shape writer = cv2.VideoWriter( 'out.avi', cv2.cv.FOURCC('D','I','V','3'), fps=30, frameSize=(w,h), isColor=True ) # prepare a small figure to embed into frame fig, ax = subplots(figsize=(4,3), facecolor='w') B = frame[:,:,0].sum(axis=0) line, = ax.plot(B, lw=3) xlim([0,w]) ylim([40000, 130000]) # setup wide enough range here box('off') tight_layout() graphRGB = mplfig_to_npimage(fig) gh, gw, _ = graphRGB.shape while True: ret, frame = cap.read() # frame is a 2D numpy array B = frame[:,:,0].sum(axis=0) line.set_ydata(B) frame[:gh,w-gw:,:] = mplfig_to_npimage(fig) cv2.imshow('frame', frame) writer.write(frame) key = cv2.waitKey(33) &amp; 0xFF # for 64 bit if key in 27: # ESC key break cap.release() writer.release() </code></pre> <p><a href="https://i.stack.imgur.com/U18IG.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/U18IG.jpg" alt="enter image description here"></a></p>
<p>So, if I'm getting this right, you want:</p> <ul> <li>Plot conceptual figures (paths, polygons), with out-of-the-box indicators (axes, enclosing plots automagically) over an image </li> <li>Video dump and <em>hopefully</em> realtime streaming.</li> </ul> <p>If so, I'd recommend using <a href="http://zulko.github.io/blog/2014/11/29/data-animations-with-python-and-moviepy/" rel="noreferrer">matplotlib with moviepy</a>.</p> <p>Indeed doing <code>savefig</code> to stream video is not the best way, but you can make these two work rather easily.</p> <p>Including a small example from the link above for the record (mind licenses):</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np
from moviepy.video.io.bindings import mplfig_to_npimage
import moviepy.editor as mpy

# DRAW A FIGURE WITH MATPLOTLIB

duration = 2

fig_mpl, ax = plt.subplots(1,figsize=(5,3), facecolor='white')
xx = np.linspace(-2,2,200) # the x vector
zz = lambda d: np.sinc(xx**2)+np.sin(xx+d) # the (changing) z vector
ax.set_title("Elevation in y=0")
ax.set_ylim(-1.5,2.5)
line, = ax.plot(xx, zz(0), lw=3)

# ANIMATE WITH MOVIEPY (UPDATE THE CURVE FOR EACH t). MAKE A GIF.

def make_frame_mpl(t):
    line.set_ydata( zz(2*np.pi*t/duration))  # &lt;= Update the curve
    return mplfig_to_npimage(fig_mpl) # RGB image of the figure

animation = mpy.VideoClip(make_frame_mpl, duration=duration)
animation.write_gif("sinc_mpl.gif", fps=20)
</code></pre>
python|opencv|numpy|matplotlib
5
5,462
67,350,600
Pandas Dataframe: how can i compare values in two columns of a row are equal to the ones in the same columns of a subsequent row?
<p>Let's say I have a dataframe like this</p> <pre><code>Fruit   Color   Weight
apple   red     50
apple   red     75
apple   green   45
orange  orange  80
orange  orange  90
orange  red     90
</code></pre> <p>I would like to add a column with True or False according to whether the Fruit and Color of row x are equal to the Fruit and Color of row x+1, like this:</p> <pre><code>Fruit   Color   Weight  Validity
apple   red     50      True
apple   red     75      False
apple   green   45      False
orange  orange  80      True
orange  orange  90      False
orange  red     90      False
</code></pre> <p>I have tried the following, but I am getting wrong results, so I guess there is an error somewhere:</p> <pre><code>g['Validity'] = (g[['Fruit', 'Color']] == g[['Fruit', 'Color']].shift()).any(axis=1)
</code></pre>
<p>You had the right idea about shifted comparison, but you need to shift backwards so you compare the current row with the next one. Finally use an <code>all</code> condition to enforce that ALL columns are equal in a row:</p> <pre><code>df['Validity'] = df[['Fruit', 'Color']].eq(df[['Fruit', 'Color']].shift(-1)).all(axis=1) df Fruit Color Weight Validity 0 apple red 50 True 1 apple red 75 False 2 apple green 45 False 3 orange orange 80 True 4 orange orange 90 False 5 orange red 90 False </code></pre>
python|pandas
5
5,463
67,249,822
In Python, how to find consecutive negative numbers in a row and return column header of first negative number?
<p>I have a <code>DataFrame</code>:</p> <p><code>first_week_of_consecutive_Negatives</code>: <img src="https://i.stack.imgur.com/etZAE.png" alt="df" /></p> <p>This example <code>df</code> I provided is a small part of the whole df, the df continues (each column is a week). I need to find the first week (column name) where we have identified a pattern of 4 consecutive negative numbers week over week (4 consecutive weeks). I then need to assign the first column <code>'first_week_of_consecutive_Negatives'</code> with the first column name in which that pattern began.</p> <p>For example:</p> <p>In the image I provided, rows 2 &amp; 3 would qualify as consecutively negative for 4 or more weeks, and I would want to return the column name in which that pattern began, in this case for both rows 2 &amp; 3 the value for column <code>'first_week_of_consecutive_Negatives'</code> will be <code>'2020-12-27 00:00:00'</code></p>
<p>Using a simplified version of your <code>df</code>:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'First_Week':np.nan,'2020-12-27':[9,8,-5,0,1,2],'2020-01-03':[9,-2,-1,0,1,1],'2020-01-10':[9,-3,-1,0,1,1],'2020-01-17':[8,-3,-2,0,1,1],'2020-01-24':[8,-4,-3,0,1,1]}) # First_Week 2020-12-27 2020-01-03 2020-01-10 2020-01-17 2020-01-24 # 0 NaN 9 9 9 8 8 # 1 NaN 8 -2 -3 -3 -4 # 2 NaN -5 -1 -1 -2 -3 # 3 NaN 0 0 0 0 0 # 4 NaN 1 1 1 1 1 # 5 NaN 2 1 1 1 1 </code></pre> <p>First build a boolean matrix of 4x consecutive negative matches using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><strong><code>shift()</code></strong></a>:</p> <pre class="lang-py prettyprint-override"><code>num = 4 matches = pd.DataFrame( np.logical_and.reduce([df.T.iloc[1:].shift(-n).lt(0) for n in range(num)]), index=df.columns[1:], columns=df.index, ) # 0 1 2 3 4 5 # 2020-12-27 False False True False False False # 2020-01-03 False True False False False False # 2020-01-10 False False False False False False # 2020-01-17 False False False False False False # 2020-01-24 False False False False False False </code></pre> <p>Then set <code>First_Week</code> as the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.first_valid_index.html" rel="nofollow noreferrer"><strong><code>first_valid_index()</code></strong></a>:</p> <pre class="lang-py prettyprint-override"><code>df['First_Week'] = matches.replace(False, np.nan).apply(lambda x: x.first_valid_index() if x.any() else np.nan) # First_Week 2020-12-27 2020-01-03 2020-01-10 2020-01-17 2020-01-24 # 0 NaN 9 9 9 8 8 # 1 2020-01-03 8 -2 -3 -3 -4 # 2 2020-12-27 -5 -1 -1 -2 -3 # 3 NaN 0 0 0 0 0 # 4 NaN 1 1 1 1 1 # 5 NaN 2 1 1 1 1 </code></pre>
python|pandas
2
5,464
67,545,001
Read Excel data from dynamic column using pandas
<p>In my Excel file, the header starts at D5 and the data rows start at D6. The column data types are displayed in the row above, from cell D4 onwards.</p> <p>I want to skip the data type row and the first blank columns A, B, C, and read the actual Excel data using Python pandas.</p> <p><a href="https://i.stack.imgur.com/3DlRD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3DlRD.png" alt="enter image description here" /></a></p>
<p>Try my code; with it, you can move the data table anywhere in the Excel sheet.</p> <pre><code>filename = &quot;yourfile.xlsx&quot;
sheet = &quot;your sheetname&quot;
df = pd.read_excel(filename, sheet_name=sheet, header=None)

# drop rows and columns that are mostly NaN (fewer than 2 non-NaN values)
df.dropna(axis=0, thresh=2, inplace=True)
df.dropna(axis=1, thresh=2, inplace=True)

# reset index to ensure first index is 0
df.reset_index(drop=True, inplace=True)

# drop the data type row (NUMBER, CHAR and INTEGER)
df.drop([0], inplace=True)
df = df.reset_index(drop=True)

# get empno, ename and sal to set as header, then drop this header row
header = df.iloc[0]
df.columns = header
df.drop([0], inplace=True)
df.reset_index(drop=True, inplace=True)
</code></pre> <p>Output</p> <pre><code>  empno ename  sal
0     1     A  100
1     2     B  200
2     3     C  300
3     4     D  400
4     5     E  500
</code></pre>
python|xml|pandas|dataframe|automation
0
5,465
67,219,581
Keras' model.predict() give an output in binary with softmax activation layer
<p>I trained InceptionResNetV2 from Keras with 40 classes and tested it using model.evaluate(); it was all good. But when I try to use model.predict() with a single image, I get an output like</p> <p><code>[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]</code> instead of a probability distribution.</p> <p>Colab for error reproduction: <a href="https://colab.research.google.com/drive/1BTuNdwQK5CqaggBVAfJ1cTTR2Qp0GH6v?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1BTuNdwQK5CqaggBVAfJ1cTTR2Qp0GH6v?usp=sharing</a></p> <p><a href="https://i.stack.imgur.com/8zX4Q.png" rel="nofollow noreferrer">Model Architecture</a></p>
<p>Preprocessing the test image with Keras' built-in function helped in my case.</p> <pre><code>from keras.applications.inception_resnet_v2 import preprocess_input
img_batch = preprocess_input(img_batch)
</code></pre>
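<p>For context, a minimal single-image pipeline might look like the sketch below. The file name and the <code>model</code> variable are assumptions from my setup, and 299x299 is the default InceptionResNetV2 input size; adjust both to match your training:</p> <pre><code>import numpy as np
from keras.preprocessing import image
from keras.applications.inception_resnet_v2 import preprocess_input

# 'test.jpg' is a placeholder path
img = image.load_img('test.jpg', target_size=(299, 299))
img_batch = np.expand_dims(image.img_to_array(img), axis=0)  # shape (1, 299, 299, 3)
img_batch = preprocess_input(img_batch)  # scales pixel values to [-1, 1]

probs = model.predict(img_batch)
print(np.argmax(probs, axis=1))  # index of the most likely class
</code></pre>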
python|tensorflow|keras
0
5,466
67,519,746
Pytorch transfer learning error: The size of tensor a (16) must match the size of tensor b (128) at non-singleton dimension 2
<p>Currently, I'm working on an image motion deblurring problem with PyTorch. I have two kinds of images: Blurry images (variable = blur_image) that are the input image and the sharp version of the same images (variable = shar_image), which should be the output. Now I wanted to try out transfer learning, but I can't get it to work.</p> <p>Here is the code for my dataloaders:</p> <pre><code>train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle = True) validation_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=batch_size, shuffle = False) test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle = False) </code></pre> <p>Their shape:</p> <pre><code>Trainloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) Trainloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32 Validationloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) Validationloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32 Testloader- Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) Testloader- Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32 </code></pre> <p>The way I use transfer learning (I thought that for the 'in_features' I have to put in the amount of pixels):</p> <pre><code>model = models.alexnet(pretrained=True) model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 128) device_string = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; device = torch.device(device_string) model = model.to(device) </code></pre> <p>The way I define my training process:</p> <pre><code># Define the loss function (MSE was chosen due to the comparsion of pixels # between blurred and sharp images criterion = nn.MSELoss() # Define the optimizer and learning rate optimizer = optim.Adam(model.parameters(), lr=0.001) # Learning rate schedule - If the loss value does not improve after 5 epochs # back-to-back then the new learning rate will be: previous_rate*0.5 #scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( optimizer, mode='min', patience=5, factor=0.5, verbose=True ) def training(model, trainDataloader, epoch): &quot;&quot;&quot; Function to define the model training Args: model (Model object): The model that is going to be trained. trainDataloader (Dataloader object): Dataloader object of the trainset. epoch (Integer): Number of training epochs. 
&quot;&quot;&quot; # Changing model into trainings mode model.train() # Supporting variable to display the loss for each epoch running_loss = 0.0 running_psnr = 0.0 for i, data in tqdm(enumerate(trainDataloader), total=int(len(train_dataset)/trainDataloader.batch_size)): blur_image = data[0] sharp_image = data[1] # Transfer the blurred and sharp image instance to the device blur_image = blur_image.to(device) sharp_image = sharp_image.to(device) # Sets the gradient of tensors to zero optimizer.zero_grad() outputs = model(blur_image) loss = criterion(outputs, sharp_image) # Perform backpropagation loss.backward() # Update the weights optimizer.step() # Add the loss that was calculated during the trainigs run running_loss += loss.item() # calculate batch psnr (once every `batch_size` iterations) batch_psnr = psnr(sharp_image, blur_image) running_psnr += batch_psnr # Display trainings loss trainings_loss = running_loss/len(trainDataloader.dataset) final_psnr = running_psnr/int(len(train_dataset)/trainDataloader.batch_size) final_ssim = ssim(sharp_image, blur_image, data_range=1, size_average=True) print(f&quot;Trainings loss: {trainings_loss:.5f}&quot;) print(f&quot;Train PSNR: {final_psnr:.5f}&quot;) print(f&quot;Train SSIM: {final_ssim:.5f}&quot;) return trainings_loss, final_psnr, final_ssim </code></pre> <p>And here is my way to start the training:</p> <pre><code>train_loss = [] val_loss = [] train_PSNR_score = [] train_SSIM_score = [] val_PSNR_score = [] val_SSIM_score = [] start = time.time() for epoch in range(nb_epochs): print(f&quot;Epoch {epoch+1}\n-------------------------------&quot;) train_epoch_loss = training(model, train_loader, nb_epochs) val_epoch_loss = validation(model, validation_loader, nb_epochs) train_loss.append(train_epoch_loss[0]) val_loss.append(val_epoch_loss[0]) train_PSNR_score.append(train_epoch_loss[1]) train_SSIM_score.append(train_epoch_loss[2]) val_PSNR_score.append(val_epoch_loss[1]) val_SSIM_score.append(val_epoch_loss[2]) scheduler.step(train_epoch_loss[0]) scheduler.step(val_epoch_loss[0]) end = time.time() print(f&quot;Took {((end-start)/60):.3f} minutes to train&quot;) </code></pre> <p>But every time when I want to perform the training I receive the following error:</p> <pre><code> 0%| | 0/249 [00:00&lt;?, ?it/s]Epoch 1 ------------------------------- /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([16, 3, 128, 128])) that is different to the input size (torch.Size([16, 128])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
return F.mse_loss(input, target, reduction=self.reduction) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-195-ff0214e227cd&gt; in &lt;module&gt;() 9 for epoch in range(nb_epochs): 10 print(f&quot;Epoch {epoch+1}\n-------------------------------&quot;) ---&gt; 11 train_epoch_loss = training(model, train_loader, nb_epochs) 12 val_epoch_loss = validation(model, validation_loader, nb_epochs) 13 train_loss.append(train_epoch_loss[0]) &lt;ipython-input-170-dfa2c212ad23&gt; in training(model, trainDataloader, epoch) 25 optimizer.zero_grad() 26 outputs = model(blur_image) ---&gt; 27 loss = criterion(outputs, sharp_image) 28 29 # Perform backpropagation /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --&gt; 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 526 527 def forward(self, input: Tensor, target: Tensor) -&gt; Tensor: --&gt; 528 return F.mse_loss(input, target, reduction=self.reduction) 529 530 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction) 2926 reduction = _Reduction.legacy_get_string(size_average, reduce) 2927 -&gt; 2928 expanded_input, expanded_target = torch.broadcast_tensors(input, target) 2929 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) 2930 /usr/local/lib/python3.7/dist-packages/torch/functional.py in broadcast_tensors(*tensors) 72 if has_torch_function(tensors): 73 return handle_torch_function(broadcast_tensors, tensors, *tensors) ---&gt; 74 return _VF.broadcast_tensors(tensors) # type: ignore 75 76 RuntimeError: The size of tensor a (16) must match the size of tensor b (128) at non-singleton dimension 2 </code></pre> <p><strong>model structure:</strong></p> <pre><code>AlexNet( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU(inplace=True) (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (9): ReLU(inplace=True) (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(6, 6)) (classifier): Sequential( (0): Dropout(p=0.5, inplace=False) (1): Linear(in_features=9216, out_features=4096, bias=True) (2): ReLU(inplace=True) (3): Dropout(p=0.5, inplace=False) (4): Linear(in_features=4096, out_features=4096, bias=True) (5): ReLU(inplace=True) (6): Linear(in_features=4096, out_features=128, bias=True) ) ) </code></pre> <p>I'm a newbie in terms of using Pytorch (and image deblurring in general) and so I rather confused about the meaning of the error message and how to fix it. I tried to change my parameters and nothing worked. 
Does anyone have any advice for me on how to solve this problem?</p> <p>I would appreciate every input :)</p>
<p>You can't use <code>alexnet</code> for this task, because the output of your model and <code>sharp_image</code> must have the same shape. A convnet encodes your image as an embedding, and fully connected layers cannot convert that embedding back to the original image size, so you can't use fully connected layers for decoding. To obtain an output of the same size you need to use <code>ConvTranspose2d()</code> layers.</p> <p><strong>Your encoder should be:</strong></p> <pre><code>class ConvEncoder(nn.Module):
    &quot;&quot;&quot;
    A simple Convolutional Encoder Model
    &quot;&quot;&quot;

    def __init__(self):
        super().__init__()

        self.conv1 = nn.Conv2d(3, 16, (3, 3), padding=(1, 1))
        self.relu1 = nn.ReLU(inplace=True)
        self.maxpool1 = nn.MaxPool2d((2, 2))

        self.conv2 = nn.Conv2d(16, 32, (3, 3), padding=(1, 1))
        self.relu2 = nn.ReLU(inplace=True)
        self.maxpool2 = nn.MaxPool2d((2, 2))

        self.conv3 = nn.Conv2d(32, 64, (3, 3), padding=(1, 1))
        self.relu3 = nn.ReLU(inplace=True)
        self.maxpool3 = nn.MaxPool2d((2, 2))

        self.conv4 = nn.Conv2d(64, 128, (3, 3), padding=(1, 1))
        self.relu4 = nn.ReLU(inplace=True)
        self.maxpool4 = nn.MaxPool2d((2, 2))

    def forward(self, x):
        # Downscale the image with conv maxpool etc.
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.maxpool1(x)

        x = self.conv2(x)
        x = self.relu2(x)
        x = self.maxpool2(x)

        x = self.conv3(x)
        x = self.relu3(x)
        x = self.maxpool3(x)

        x = self.conv4(x)
        x = self.relu4(x)
        x = self.maxpool4(x)

        return x
</code></pre> <p><strong>And your decoder should be:</strong></p> <pre><code>class ConvDecoder(nn.Module):
    &quot;&quot;&quot;
    A simple Convolutional Decoder Model
    &quot;&quot;&quot;

    def __init__(self):
        super().__init__()
        # channel sizes mirror the encoder in reverse: 128 -&gt; 64 -&gt; 32 -&gt; 16 -&gt; 3
        self.deconv1 = nn.ConvTranspose2d(128, 64, (2, 2), stride=(2, 2))
        self.relu1 = nn.ReLU(inplace=True)

        self.deconv2 = nn.ConvTranspose2d(64, 32, (2, 2), stride=(2, 2))
        self.relu2 = nn.ReLU(inplace=True)

        self.deconv3 = nn.ConvTranspose2d(32, 16, (2, 2), stride=(2, 2))
        self.relu3 = nn.ReLU(inplace=True)

        # the last layer maps back to the 3 RGB channels of the sharp image
        self.deconv4 = nn.ConvTranspose2d(16, 3, (2, 2), stride=(2, 2))
        self.relu4 = nn.ReLU(inplace=True)

    def forward(self, x):
        # Upscale the image with convtranspose etc.
        x = self.deconv1(x)
        x = self.relu1(x)

        x = self.deconv2(x)
        x = self.relu2(x)

        x = self.deconv3(x)
        x = self.relu3(x)

        x = self.deconv4(x)
        x = self.relu4(x)  # keeps the reconstructed pixel values non-negative
        return x

encoder = ConvEncoder()
decoder = ConvDecoder()
</code></pre> <p><strong>You can train your model like this:</strong></p> <pre><code>def train_step(encoder, decoder, train_loader, optimizer, loss_fn, device):
    encoder.train()
    decoder.train()

    for batch_idx, (train_img, target_img) in enumerate(train_loader):
        # Move images to device
        train_img = train_img.to(device)
        target_img = target_img.to(device)

        # Zero grad the optimizer
        optimizer.zero_grad()

        # Feed the train images to encoder
        enc_output = encoder(train_img)
        # The output of encoder is input to decoder !
        dec_output = decoder(enc_output)

        # Decoder output is the reconstructed image
        # Compute loss with it and the original image, which is the target image.
        loss = loss_fn(dec_output, target_img)
        # Backpropagate
        loss.backward()
        # Apply the optimizer to the network by calling step.
        optimizer.step()

    # Return the loss of the last batch
    return loss.item()
</code></pre> <p><strong>You might want to visit <a href="https://github.com/yjn870/SRCNN-pytorch" rel="nofollow noreferrer">this repository</a> for help with your project.</strong></p>
python|image-processing|pytorch|tensor|motion-blur
0
5,467
34,549,187
Pandas DataFrame grouped box plot from aggregated results
<p>I want to draw a box plot, but I don't have the raw data, only aggregated results in a Pandas DataFrame.</p> <p>Is it still possible to draw a box plot from the aggregated results?</p> <p>If not, what is the closest plot that I can get to plot the min, max, mean, median, std-dev etc.? I know I can plot them using a line chart, but I need the boxplots to be grouped/clustered.</p> <p>Here is my data; the plotting part is missing. Please help. Thanks</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame({
   'group' : ['Tick Tick Tick', 'Tock Tock Tock', 'Tock Tock Tock', 'Tick Tick Tick']*3, # , ['Tock Tock Tock', 'Tick Tick Tick']*6,
   'person':[x*5 for x in list('ABC')]*4,
   'Median':np.random.randn(12), 'StdDev':np.random.randn(12)
   })

df["Average"]=df["Median"]*1.1
df["Minimum"]=df["Median"]*0.5
df["Maximum"]=df["Median"]*1.6
df["90%"]=df["Maximum"]*0.9
df["95%"]=df["Maximum"]*0.95
df["99%"]=df["Maximum"]*0.99

df
</code></pre> <p><strong>UPDATE</strong>,</p> <p>I'm now one step closer to getting my result -- I have just found that this feature has been <a href="https://github.com/matplotlib/matplotlib/pull/2643" rel="nofollow noreferrer">available since matplotlib 1.4</a>, and I'm using matplotlib 1.5; I tested it and <a href="https://gist.github.com/suntong/56490c89220592fa7409" rel="nofollow noreferrer">proved that it is working for me</a>.</p> <p>The problem is I have no clue why it works, and how to adapt my above code to use this new feature. I'll re-post my working code below; I hope someone can understand it and put two and two together.</p> <p>The data I have are Median, Average, Minimum, 90%, 95%, 99%, Maximum and StdDev, and I hope to chart them all. I took a look at the data structure of <code>logstats</code> of the following code, after the <code>for stats, label in zip(logstats, list('ABCD'))</code>, and found its fields are:</p> <pre><code>[{'cihi': 4.2781254505311281,
  'cilo': 1.6164348064249057,
  'fliers': array([ 19.69118642,  19.01171604]),
  'iqr': 5.1561885723613567,
  'label': 'A',
  'mean': 4.9486856766955922,
  'med': 2.9472801284780168,
  'q1': 1.7655440553898782,
  'q3': 6.9217326277512345,
  'whishi': 12.576334012545718,
  'whislo': 0.24252084924003742},
 {'cihi': 4.3186289184254107,
  'cilo': 1.9963715983778565,
  ...
</code></pre> <p>So, from this</p> <p><a href="https://i.stack.imgur.com/AJDKf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AJDKf.png" alt="box plot"></a></p> <p>and the <code>bxp</code> doc, I'm going to map my data as follows:</p> <ul> <li>whislo: Minimum </li> <li>q1: Median</li> <li>med: Average</li> <li>mean: 90%</li> <li>q3: 95%</li> <li>whishi: 99%</li> <li>and Maximum as fliers</li> </ul> <p>To map them, I'll just do <code>SELECT Minimum AS whislo, [90%] AS mean, [95%] as q3, [99%] as whishi</code>...Here is the final result:</p> <pre><code>raw_data = {'label': ['Label_01 Init', 'Label_02', 'Label_03', 'Label_04', 'Label_05', 'Label_06', 'Label_07', 'Label_08', 'Label_99'],
        'whislo': [0.17999999999999999, 2.0299999999999998, 4.0800000000000001, 2.0899999999999999, 2.3300000000000001, 2.3799999999999999, 1.97, 2.6499999999999999, 0.089999999999999997],
        'q3': [0.5, 4.9699999999999998, 11.77, 5.71, 12.460000000000001, 11.859999999999999, 13.84, 16.969999999999999, 0.29999999999999999],
        'mean': [0.40000000000000002, 4.1299999999999999, 10.619999999999999, 5.0999999999999996, 10.24, 9.0700000000000003, 11.960000000000001, 15.15, 0.26000000000000001],
        'whishi': [1.76, 7.6399999999999997, 20.039999999999999, 6.6699999999999999, 22.460000000000001, 21.66, 16.629999999999999, 19.690000000000001, 1.1799999999999999],
        'q1': [0.28000000000000003, 2.96, 7.6100000000000003, 3.46, 5.8099999999999996, 5.4400000000000004, 6.6299999999999999, 8.9900000000000002, 0.16],
        'fliers': [5.5, 17.129999999999999, 32.890000000000001, 7.9100000000000001, 32.829999999999998, 70.680000000000007, 24.699999999999999, 32.240000000000002, 3.3500000000000001]}

df = pd.DataFrame(raw_data, columns = ['label', 'whislo', 'q1', 'mean', 'q3', 'whishi', 'fliers'])
</code></pre> <p>Now the challenge is how to present my above dataframe in a box plot with multiple levels of grouping. If multiple levels of grouping are too difficult, let's get the plotting from the pd dataframe working first, because my pd dataframe has the same fields as the required np array. So I tried,</p> <pre><code>fig, ax = plt.subplots()
ax.bxp(df.as_matrix(), showmeans=True, showfliers=True, vert=False)
</code></pre> <p>But I got</p> <pre><code>...\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in bxp(self, bxpstats, positions, widths, vert, patch_artist, shownotches, showmeans, showcaps, showbox, showfliers, boxprops, whiskerprops, flierprops, medianprops, capprops, meanprops, meanline, manage_xticks)
   3601         for pos, width, stats in zip(positions, widths, bxpstats):
   3602             # try to find a new label
-&gt; 3603             datalabels.append(stats.get('label', pos))
   3604             # fliers coords
   3605             flier_x = np.ones(len(stats['fliers'])) * pos

AttributeError: 'numpy.ndarray' object has no attribute 'get'
</code></pre> <p>If I use <code>ax.bxp(df.to_records(), ...</code>, then I'll get <code>AttributeError: 'record' object has no attribute 'get'</code>.</p> <p>OK, I finally got it working -- the plotting from the pd dataframe, though not multiple levels of grouping -- like this:</p> <pre><code>df['fliers']=''

fig, ax = plt.subplots()
ax.bxp(df.to_dict('records'), showmeans=True, meanline=True, showfliers=False, vert=False) # shownotches=True,
plt.show()
</code></pre> <p>Note my above data is missing the <code>med</code> field; you can add the correct ones, or use <code>df['med']=df['q1']*1.2</code> to make it work.
</p> <pre><code>import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd def test_bxp_with_ylabels(): np.random.seed(937) logstats = matplotlib.cbook.boxplot_stats( np.random.lognormal(mean=1.25, sigma=1., size=(37,4)) ) print(logstats) for stats, label in zip(logstats, list('ABCD')): stats['label'] = label fig, ax = plt.subplots() ax.set_xscale('log') ax.bxp(logstats, vert=False) test_bxp_with_ylabels() </code></pre> <p><a href="https://i.stack.imgur.com/cynPy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cynPy.png" alt="bxp_with_ylabels"></a></p>
<p>While waiting for a clarification of your df, related to:</p> <pre><code>dic = [{'cihi': 4.2781254505311281,
        'cilo': 1.6164348064249057,
        'fliers': array([ 19.69118642,  19.01171604]),
        'iqr': 5.1561885723613567,
        'mean': 4.9486856766955922,
        'med': 2.9472801284780168,
        'q1': 1.7655440553898782,
        'q3': 6.9217326277512345,
        'whishi': 12.576334012545718,
        'whislo': 0.24252084924003742}]
</code></pre> <p>and how your data should map:</p> <p>from <code>bxp</code> doc:</p> <pre><code>  Required keys are:

  - ``med``: The median (scalar float).
  - ``q1``: The first quartile (25th percentile) (scalar float).
  - ``q3``: The first quartile (50th percentile) (scalar float).
  # Here I guess it's rather : the 3rd quartile (75th percentile)
  - ``whislo``: Lower bound of the lower whisker (scalar float).
  - ``whishi``: Upper bound of the upper whisker (scalar float).

  Optional keys are:

  - ``mean``: The mean (scalar float). Needed if ``showmeans=True``.
  - ``fliers``: Data beyond the whiskers (sequence of floats).
    Needed if ``showfliers=True``.
  - ``cilo`` &amp; ``cihi``: Lower and upper confidence intervals about
    the median. Needed if ``shownotches=True``.
</code></pre> <p>Then, you just have to do:</p> <pre><code>fig, ax = plt.subplots(1,1)
ax.bxp(dic, showmeans=True)
</code></pre> <p>So you just need to find a way to build your <code>dic</code>. Note that it does not plot your <code>std</code>, and for the whiskers you need to choose whether they go up to 90%, 95% or 99%; you can't have all the values. In that case you need to add them afterward with something like <code>plt.hlines()</code>.</p> <p>HTH</p>
python|pandas|matplotlib|plot|dataframe
2
5,468
60,213,340
Concatenate a column of lists within a dataframe
<p>I have a dataframe in pandas and one of my columns contains lists; however, some of the lists in the column have more elements than others:</p> <pre><code>df['Name'].head()
</code></pre> <p>Output: </p> <pre><code>0    ['Andrew', '24']
1    ['James']
2    ['Billy', '19', 'M']
3    ['Grace', '42']
4    ['Amy']
</code></pre> <p>Is it possible for me to concatenate each element in each list together, while still maintaining my df?</p> <p>Desired output: </p> <pre><code>0    'Andrew24'
1    'James'
2    'Billy19M'
3    'Grace42'
4    'Amy'
</code></pre> <p>I tried many different things; the closest I got was using the snippet below, but this concatenated all of the lists together in each record:</p> <pre><code>def concatenate_list_data(list):
    result= ''
    for element in list:
        result += str(element)
    return result

df['Name'] = concatenate_list_data(df['Name'])
</code></pre>
<p>Pandas offers a method for this, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.join.html" rel="nofollow noreferrer"><code>Series.str.join</code></a>: <code>df['Name'].str.join('')</code>.</p>
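<p>For reference, a quick sketch of what that looks like end to end (the frame below just mirrors the question's data):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Name': [['Andrew', '24'], ['James'], ['Billy', '19', 'M']]})
df['Name'] = df['Name'].str.join('')  # joins each row's list of strings
print(df)
#        Name
# 0  Andrew24
# 1     James
# 2  Billy19M
</code></pre>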
python|pandas|list|dataframe
2
5,469
60,104,532
selecting multiple unique columns in pandas
<pre><code>import pandas as pd df = pd.DataFrame({'col1': ['A', 'B', 'C'], 'col2': ['B', 'A', 'A']}) df </code></pre> <p>How would I SELECT DISTINCT col1, col2 from df?</p>
<pre><code>df.drop_duplicates(subset = ['col1','col2']) </code></pre>
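<p>For illustration, here is the effect on the question's frame extended with one duplicate row (the extra <code>('A', 'B')</code> row is mine):</p> <pre><code>import pandas as pd

# sample from the question plus one duplicate ('A', 'B') row
df = pd.DataFrame({'col1': ['A', 'B', 'C', 'A'], 'col2': ['B', 'A', 'A', 'B']})
print(df.drop_duplicates(subset=['col1', 'col2']))
#   col1 col2
# 0    A    B
# 1    B    A
# 2    C    A
</code></pre>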
python|pandas
1
5,470
60,055,086
Keras: Custom loss function with training data not directly related to model
<p>I am trying to convert my CNN written with tensorflow layers to use the keras api in tensorflow (I am using the keras api provided by TF 1.x), and am having issues writing a custom loss function to train the model.</p> <p>According to this guide, when defining a loss function it expects the arguments <code>(y_true, y_pred)</code> <a href="https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses</a></p> <pre><code>def basic_loss_function(y_true, y_pred):
    return ...
</code></pre> <p>However, in every example I have seen, <code>y_true</code> is somehow directly related to the model (in the simple case it is the output of the network). In my problem, this is not the case. How do I implement this if my loss function depends on some training data that is unrelated to the tensors of the model?</p> <p>To be concrete, here is my problem:</p> <p>I am trying to learn an image embedding trained on pairs of images. My training data includes image pairs and annotations of matching points between the image pairs (image coordinates). The input feature is only the image pairs, and the network is trained in a siamese configuration.</p> <p>I am able to implement this successfully with tensorflow layers and train it successfully with tensorflow estimators. My current implementation builds a tf Dataset from a large database of tf Records, where the features are a dictionary containing the images and arrays of matching points. Before, I could easily feed these arrays of image coordinates to the loss function, but here it is unclear how to do so.</p>
<p>There is a hack I often use: calculate the loss within the model, by means of <code>Lambda</code> layers. (This works when the loss is independent of the true data, for instance, and the model doesn't really have an output to be compared.)</p> <p>In a functional API model:</p> <pre><code>def loss_calc(x):
    loss_input_1, loss_input_2 = x  # arbitrary inputs, you choose
                                    # according to what you gave to the Lambda layer

    # here you use some external data that doesn't relate to the samples
    externalData = K.constant(external_numpy_data)

    # calculate the loss
    return the_loss
</code></pre> <p>Using the outputs of the model itself (the tensor(s) that are used in your loss)</p> <pre><code>loss = Lambda(loss_calc)([model_output_1, model_output_2])
</code></pre> <p>Create the model outputting the loss instead of the outputs:</p> <pre><code>model = Model(inputs, loss)
</code></pre> <p>Create a dummy keras loss function for compilation:</p> <pre><code>def dummy_loss(y_true, y_pred):
    return y_pred #where y_pred is the loss itself, the output of the model above

model.compile(loss = dummy_loss, ....)
</code></pre> <p>Use any dummy array correctly sized regarding the number of samples for training; it will be ignored:</p> <pre><code>model.fit(your_inputs, np.zeros((number_of_samples,)), ...)
</code></pre> <hr> <p>Another way of doing it is using a custom training loop.</p> <p>This is much more work, though.</p> <p>Although you're using <code>TF1</code>, you can still turn <a href="https://www.tensorflow.org/guide/eager" rel="noreferrer">eager execution</a> on at the very beginning of your code and do stuff like it's done in <code>TF2</code>. (<code>tf.enable_eager_execution()</code>)</p> <p>Follow the tutorial for custom training loops: <a href="https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough" rel="noreferrer">https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough</a></p> <p>Here, you calculate the gradients yourself, of any result regarding whatever you want. This means you don't need to follow Keras' standard training procedure.</p> <hr> <p>Finally, you can use the approach you suggested of <code>model.add_loss</code>. In this case, you calculate the loss exactly the same way I did in the first answer, and pass this loss tensor to <code>add_loss</code>.</p> <p>You can probably compile a model with <code>loss=None</code> then (not sure), because you're going to use other losses, not the standard one.</p> <p>In this case, your model's output will probably be <code>None</code> too, and you should fit with <code>y=None</code>.</p>
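<p>To make the last route concrete, here is a rough, untested sketch of <code>add_loss</code> on a toy siamese setup. All shapes and layer sizes are placeholders, and the exact behaviour of <code>add_loss</code> together with <code>loss=None</code> varies between TF versions:</p> <pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model, backend as K

# two image inputs sharing one small encoder (the shapes are placeholders)
inp_a = layers.Input(shape=(32, 32, 3))
inp_b = layers.Input(shape=(32, 32, 3))
encoder = tf.keras.Sequential([layers.Flatten(), layers.Dense(16)])
emb_a, emb_b = encoder(inp_a), encoder(inp_b)

model = Model([inp_a, inp_b], [emb_a, emb_b])
# the loss is built from model tensors; swap in your matching-point term here
model.add_loss(K.mean(K.square(emb_a - emb_b)))
model.compile(optimizer='adam', loss=None)

a = np.random.rand(8, 32, 32, 3).astype('float32')
b = np.random.rand(8, 32, 32, 3).astype('float32')
model.fit([a, b], epochs=1)  # no targets needed: the loss comes from add_loss
</code></pre>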
tensorflow|keras|tf.keras
7
5,471
60,011,128
Plotting choropleth map with discrete colorbar/legend using Geopandas
<p>I am trying to make a choropleth map using Geopandas. However, I'm having trouble with the colorbar formatting; it seems to be very limited.</p> <p>Here is the map I have:</p> <p><a href="https://i.stack.imgur.com/FOMNH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FOMNH.png" alt="Map"></a></p> <p>But I would like the colorbar (legend?) to show discrete breaks, or a colour key; I'm not really sure what to call it. Something like this:</p> <p><a href="https://i.stack.imgur.com/R23QZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R23QZ.png" alt="Discrete Color Key"></a></p> <p>Is this even possible with Geopandas? I'm finding nothing in their documentation. Maybe something with <a href="http://geopandas.org/mapping.html#creating-a-legend" rel="nofollow noreferrer"><code>legend_kwds</code></a>?</p> <p>Here is my plotting code</p> <pre><code>fig, ax = plt.subplots(1, figsize = (20, 12) )
ax.axis('off')
merged.plot(ax = ax, column = '2019-12', color = 'grey', label = 'No Data') #Plots 'No Data' layer
merged.dropna().plot(ax = ax, column = '2019-12', cmap = 'Reds', alpha = 1, legend = True) #Plots data layer
ax.legend()
</code></pre> <p>In an ideal world, I'd also be able to manually set the numbers and limits of the intervals as well.</p>
<p><a href="https://stackoverflow.com/a/56695238/8056865">This answer</a> helped a lot. I was able to use <code>classification_kwds</code> and define my own bins using the <code>User_Defined</code> <code>scheme</code> argument when calling <code>gpd.plot()</code>. Although I would like more functionality, this is good enough for now.</p>
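<p>For anyone landing here, a hedged sketch of how that looks, building on the question's plotting code. The bin edges are made-up placeholders, the scheme needs the <code>mapclassify</code> package installed, and the scheme string may be spelled <code>'UserDefined'</code> or <code>'User_Defined'</code> depending on your geopandas version:</p> <pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, figsize=(20, 12))
ax.axis('off')
merged.plot(ax=ax, column='2019-12', color='grey', label='No Data')
# manual class breaks are passed through classification_kwds
merged.dropna().plot(ax=ax, column='2019-12', cmap='Reds', legend=True,
                     scheme='UserDefined',
                     classification_kwds={'bins': [10, 20, 50, 100]})
plt.show()
</code></pre>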
matplotlib|colorbar|geopandas|choropleth|legend-properties
2
5,472
65,360,933
"The Conv2D op currently only supports the NHWC tensor format on the CPU" Error despite NHWC Format (YOLO 3)
<p>I am trying to run a bit of sample code from <a href="https://github.com/theAIGuysCode/yolo-v3" rel="nofollow noreferrer">github</a> in order to learn working with Tensorflow 2 and the YOLO framework. My laptop has an M1000M graphics card, and I installed the CUDA platform from NVIDIA from <a href="https://developer.nvidia.com/cuda-downloads" rel="nofollow noreferrer">here</a>.</p> <p>So the code in question is this bit:</p> <pre><code>tf.compat.v1.disable_eager_execution()

_MODEL_SIZE = (416, 416)
_CLASS_NAMES_FILE = './data/labels/coco.names'
_MAX_OUTPUT_SIZE = 20


def main(type, iou_threshold, confidence_threshold, input_names):
    class_names = load_class_names(_CLASS_NAMES_FILE)
    n_classes = len(class_names)

    model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,
                    max_output_size=_MAX_OUTPUT_SIZE,
                    iou_threshold=iou_threshold,
                    confidence_threshold=confidence_threshold)

    if type == 'images':
        batch_size = len(input_names)
        batch = load_images(input_names, model_size=_MODEL_SIZE)
        inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3])
        detections = model(inputs, training=False)
        saver = tf.compat.v1.train.Saver(tf.compat.v1.global_variables(scope='yolo_v3_model'))

        with tf.compat.v1.Session() as sess:
            saver.restore(sess, './weights/model.ckpt')
            detection_result = sess.run(detections, feed_dict={inputs: batch})

        draw_boxes(input_names, detection_result, class_names, _MODEL_SIZE)

        print('Detections have been saved successfully.')
</code></pre> <p>While executing this (I am also wondering why running detection.py doesn't use the GPU in the first place), I get the error message:</p> <pre><code>File &quot;C:\SDKs etc\Python 3.8\lib\site-packages\tensorflow\python\client\session.py&quot;, line 1451, in _call_tf_sessionrun
    return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.UnimplementedError: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
     [[{{node yolo_v3_model/conv2d/Conv2D}}]]
</code></pre> <p>The full log is <a href="https://pastebin.com/A4NVhPqj" rel="nofollow noreferrer">here</a>.</p> <p>If I am understanding this correctly, the format of <code>inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3])</code> is already NHWC (model size is a tuple of 2 numbers), and I don't know what I need to change in the code to get this running on the CPU.</p>
<blockquote>
<p>If I am understanding this correctly, the format of inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3]) is already NHWC (model size is a tuple of 2 numbers), and I don't know what I need to change in the code to get this running on the CPU.</p>
</blockquote>
<p>Yes, you are. But <a href="https://github.com/theAIGuysCode/yolo-v3/blob/master/yolo_v3.py" rel="nofollow noreferrer">look here</a>:</p>
<pre><code>def __init__(self, n_classes, model_size, max_output_size, iou_threshold,
             confidence_threshold, data_format=None):
    &quot;&quot;&quot;Creates the model.

    Args:
        n_classes: Number of class labels.
        model_size: The input size of the model.
        max_output_size: Max number of boxes to be selected for each class.
        iou_threshold: Threshold for the IOU.
        confidence_threshold: Threshold for the confidence score.
        data_format: The input format.

    Returns:
        None.
    &quot;&quot;&quot;
    if not data_format:
        if tf.test.is_built_with_cuda():
            data_format = 'channels_first'
        else:
            data_format = 'channels_last'
</code></pre>
<p>And later:</p>
<pre><code>def __call__(self, inputs, training):
    &quot;&quot;&quot;Add operations to detect boxes for a batch of input images.

    Args:
        inputs: A Tensor representing a batch of input images.
        training: A boolean, whether to use in training or inference mode.

    Returns:
        A list containing class-to-boxes dictionaries
            for each sample in the batch.
    &quot;&quot;&quot;
    with tf.compat.v1.variable_scope('yolo_v3_model'):
        if self.data_format == 'channels_first':
            inputs = tf.transpose(inputs, [0, 3, 1, 2])
</code></pre>
<p>Solution:</p>
<ol>
<li><p>Check that <a href="https://github.com/theAIGuysCode/yolo-v3/blob/cc438021d95715b2e1ab5a2807bb56ce0e80c355/yolo_v3.py#L333" rel="nofollow noreferrer">tf.test.is_built_with_cuda()</a> works as expected.</p>
</li>
<li><p>If not, set the order manually when creating the model:</p>
<p><code>model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,</code><br />
<code>                max_output_size=_MAX_OUTPUT_SIZE,</code><br />
<code>                iou_threshold=iou_threshold,</code><br />
<code>                confidence_threshold=confidence_threshold,</code><br />
<code>                data_format = 'channels_last')</code></p>
</li>
</ol>
tensorflow|keras|tensorflow2.0|yolo
1
5,473
65,416,794
Issue on Runge Kutta Fehlberg algorithm
<p>I have wrote a code for Runge-Kutta 4th order, which works perfectly fine for a system of differential equations:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt import numba import time start_time = time.clock() @numba.jit() def V(u,t): x1,dx1, x2, dx2=u ddx1=-w**2 * x1 -b * dx1 ddx2=-(w+0.5)**2 * x2 -(b+0.1) * dx2 return np.array([dx1,ddx1,dx2,ddx2]) @numba.jit() def rk4(f, u0, t0, tf , n): t = np.linspace(t0, tf, n+1) u = np.array((n+1)*[u0]) h = t[1]-t[0] for i in range(n): k1 = h * f(u[i], t[i]) k2 = h * f(u[i] + 0.5 * k1, t[i] + 0.5*h) k3 = h * f(u[i] + 0.5 * k2, t[i] + 0.5*h) k4 = h * f(u[i] + k3, t[i] + h) u[i+1] = u[i] + (k1 + 2*(k2 + k3) + k4) / 6 return u, t u, t = rk4(V,np.array([0,0.2,0,0.3]) ,0,100, 20000) print(&quot;Execution time:&quot;,time.clock() - start_time, &quot;seconds&quot;) x1,dx1,x2,dx2 = u.T plt.plot(x1,x2) plt.xlabel('X1') plt.ylabel('X2') plt.show() </code></pre> <p>The above code, returns the desired result: <a href="https://i.stack.imgur.com/aSRhH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aSRhH.png" alt="enter image description here" /></a></p> <p>And thanks to Numba JIT, this code works really fast. However, this method doesn't use adaptive step size and hence, it is not very suitable for a system of stiff differential equations. Runge Kutta Fehlberg method, solves this problem by using a straight forward algorithm. Based on the algorithm (<a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method</a>) I wrote this code which works for <strong>only one</strong> differential equation :</p> <pre><code>import numpy as np def rkf( f, a, b, x0, tol, hmax, hmin ): a2 = 2.500000000000000e-01 # 1/4 a3 = 3.750000000000000e-01 # 3/8 a4 = 9.230769230769231e-01 # 12/13 a5 = 1.000000000000000e+00 # 1 a6 = 5.000000000000000e-01 # 1/2 b21 = 2.500000000000000e-01 # 1/4 b31 = 9.375000000000000e-02 # 3/32 b32 = 2.812500000000000e-01 # 9/32 b41 = 8.793809740555303e-01 # 1932/2197 b42 = -3.277196176604461e+00 # -7200/2197 b43 = 3.320892125625853e+00 # 7296/2197 b51 = 2.032407407407407e+00 # 439/216 b52 = -8.000000000000000e+00 # -8 b53 = 7.173489278752436e+00 # 3680/513 b54 = -2.058966861598441e-01 # -845/4104 b61 = -2.962962962962963e-01 # -8/27 b62 = 2.000000000000000e+00 # 2 b63 = -1.381676413255361e+00 # -3544/2565 b64 = 4.529727095516569e-01 # 1859/4104 b65 = -2.750000000000000e-01 # -11/40 r1 = 2.777777777777778e-03 # 1/360 r3 = -2.994152046783626e-02 # -128/4275 r4 = -2.919989367357789e-02 # -2197/75240 r5 = 2.000000000000000e-02 # 1/50 r6 = 3.636363636363636e-02 # 2/55 c1 = 1.157407407407407e-01 # 25/216 c3 = 5.489278752436647e-01 # 1408/2565 c4 = 5.353313840155945e-01 # 2197/4104 c5 = -2.000000000000000e-01 # -1/5 t = a x = np.array(x0) h = hmax T = np.array( [t] ) X = np.array( [x] ) while t &lt; b: if t + h &gt; b: h = b - t k1 = h * f( x, t ) k2 = h * f( x + b21 * k1, t + a2 * h ) k3 = h * f( x + b31 * k1 + b32 * k2, t + a3 * h ) k4 = h * f( x + b41 * k1 + b42 * k2 + b43 * k3, t + a4 * h ) k5 = h * f( x + b51 * k1 + b52 * k2 + b53 * k3 + b54 * k4, t + a5 * h ) k6 = h * f( x + b61 * k1 + b62 * k2 + b63 * k3 + b64 * k4 + b65 * k5, \ t + a6 * h ) r = abs( r1 * k1 + r3 * k3 + r4 * k4 + r5 * k5 + r6 * k6 ) / h if len( np.shape( r ) ) &gt; 0: r = max( r ) if r &lt;= tol: t = t + h x = x + c1 * k1 + c3 * k3 + c4 * k4 + c5 * k5 T = np.append( T, t ) X = np.append( X, 
[x], 0 ) h = h * min( max( 0.84 * ( tol / r )**0.25, 0.1 ), 4.0 ) if h &gt; hmax: h = hmax elif h &lt; hmin: raise RuntimeError(&quot;Error: Could not converge to the required tolerance %e with minimum stepsize %e.&quot; % (tol,hmin)) break return ( T, X ) </code></pre> <p>but I'm struggling to convert it to a function like the first code, where I can input a system of differential equations. The most confusing part for me, is how can I vectorize everything in the second code without messing things up. In other words, I cannot reproduce the first result using the RKF algorithm. Can anyone point me in the right direction?</p>
<p>I'm not really sure where your problem lies. Setting the parameters that were not given to <code>w=1; b=0.1</code> and calling, without changing anything,</p> <pre><code>T, X = rkf( f=V, a=0, b=100, x0=[0,0.2,0,0.3], tol=1e-6, hmax=1e1, hmin=1e-16 )
</code></pre> <p>gives the phase plot</p> <p><a href="https://i.stack.imgur.com/6s8BK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6s8BK.png" alt="enter image description here" /></a></p> <p>The step sizes grow as the system slows down,</p> <p><a href="https://i.stack.imgur.com/3Ll0G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Ll0G.png" alt="enter image description here" /></a></p> <p>which is the expected behavior for an unfiltered step size controller.</p>
python|numpy|runge-kutta
1
5,474
49,798,944
Not understanding numpy code from a book
<pre><code>X = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 0]])
y = np.array([0, 1, 0, 1])

counts = {}
for label in np.unique(y):
    counts[label] = X[y == label].sum(axis=0)
print("Feature counts: ", counts)
</code></pre> <p>This code is meant to count the number of times a feature of a class is non-zero; however, I do not understand the syntax <code>counts[label] = X[y == label].sum(axis=0)</code>. When I simply run <code>print(y==label)</code>, the numpy array <code>[False True False True]</code> is presented, and I do not understand how this indexes and sums items in the numpy array. Further, I do not understand why <code>y==label</code> has been set. Any help is appreciated. Thank you.</p>
<p>What you are looking at is a classic group-by operation, which sadly numpy does not provide in an elegant way out of the box. You seem more concerned with a low-level understanding than a high-level one; but if the low-level mechanics are in fact not a goal in themselves, there are alternatives that abstract these worries away from you, such as <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP/tree/master/numpy_indexed" rel="nofollow noreferrer">numpy_indexed</a> (disclaimer: I am its author)</p> <pre><code>import numpy_indexed as npi
labels, counts = npi.group_by(y).sum(X)
</code></pre> <p>This will do the same thing, but in a vectorized and thus much more scalable manner.</p>
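<p>As for the low-level mechanics the question asks about, here is the boolean mask spelled out on the book's own arrays:</p> <pre><code>import numpy as np

X = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 0]])
y = np.array([0, 1, 0, 1])

mask = (y == 1)             # array([False,  True, False,  True])
print(X[mask])              # boolean indexing keeps only the rows where mask is True
# [[1 0 1 1]
#  [1 0 1 0]]
print(X[mask].sum(axis=0))  # column-wise sums over just those rows
# [2 0 2 1]
</code></pre>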
python|numpy
0
5,475
50,206,517
Pandas .loc taking a very long time
<p>I have a 10 GB csv file with <code>170,000,000</code> rows and <code>23</code> columns that I read in to a dataframe as follows: </p> <pre><code>import pandas as pd d = pd.read_csv(f, dtype = {'tax_id': str}) </code></pre> <p>I also have a list of strings with nearly 20,000 unique elements: </p> <pre><code>h = ['1123787', '3345634442', '2342345234', .... ] </code></pre> <p>I want to create a new column called <code>class</code> in the dataframe <code>d</code>. I want to assign <code>d['class'] = 'A'</code> whenever <code>d['tax_id']</code> has a value that is found in the list of strings <code>h</code>. Otherwise, I want <code>d['class'] = 'B'</code>. </p> <p>The following code works very quickly on a 1% sample of my dataframe <code>d</code>: </p> <pre><code>d['class'] = 'B' d.loc[d['tax_num'].isin(h), 'class'] = 'A' </code></pre> <p>However, on the complete dataframe <code>d</code>, this code takes over 48 hours (and counting) to run on a 32 core server in batch mode. I suspect that indexing with <code>loc</code> is slowing down the code, but I'm not sure what it could really be. </p> <p><strong>In sum:</strong> Is there a more efficient way of creating the <code>class</code> column? </p>
<p>If your tax numbers are unique, I would recommend setting <code>tax_num</code> to the index and then indexing on that. As it stands, you call <code>isin</code> which is a linear operation. However fast your machine is, it can't do a linear search on 170 million records in a reasonable amount of time.</p> <pre><code>df.set_index('tax_num', inplace=True) # df = df.set_index('tax_num') df['class'] = 'B' df.loc[h, 'class'] = 'A' </code></pre> <p>If you're still suffering from performance issues, I'd recommend switching to distributed processing with <code>dask</code>.</p>
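<p>In case it helps, an untested sketch of what the dask route could look like (the file path and the <code>meta</code> hint are placeholders from my setup):</p> <pre><code>import dask.dataframe as dd

ddf = dd.read_csv('big_file.csv', dtype={'tax_num': str})  # placeholder path
is_a = ddf['tax_num'].isin(list(h))  # boolean column, evaluated in parallel
ddf['class'] = is_a.map({True: 'A', False: 'B'}, meta=('class', 'object'))
df = ddf.compute()  # materialize the result back into pandas
</code></pre>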
python|pandas|select|indexing
3
5,476
50,027,783
Numpy array transform using self elements without for loop
<p>I want to transform all elements in a numpy array the following way:</p> <pre><code>x = np.array([1,2,3,6]) ... some transformation y = np.array([1, 0.5, 0.66, 0.5]) </code></pre> <p>Where the rule is:</p> <pre><code>y[i]=x[i]/x[i+1] </code></pre> <p>But I can't use a for loop or while.</p> <p>I don't see how I could use map or vectorize in this case. Any idea?</p> <p>EDIT: The final goal is to get from a time serie the numpy array containing the returns instead of the values themselves Have edited the values of y since they were incorrect</p>
<p>You could try something like this.</p> <pre><code>import numpy as np x = np.array([1,2,3,6]) y = x / np.r_[x[1:],x[0]] print(y) </code></pre> <p>outputs</p> <pre><code>&gt;&gt;&gt; y [0.5 0.66666667 0.5 6. ] </code></pre> <p>Or this might be faster but has the same idea:</p> <pre><code>import numpy as np x = np.array([1,2,3,6]) y = x / np.roll(x, -1) print(y) </code></pre> <p>outputs</p> <pre><code>&gt;&gt;&gt; y [0.5 0.66666667 0.5 6. ] </code></pre>
python|arrays|numpy
1
5,477
63,813,811
How can I determine the speed of my Tensorflow on my GPU?
<p>I just started with <strong>Tensorflow (Version 2.3.0)</strong> and installed it with GPU support (in a virtual environment using Python 3.5). That seems to work fine. I am using an <strong>nvidia geforce 1060</strong> with Windows 10.<br /> Now my question is: how can I find out how fast my Tensorflow install is working? I found this test: <a href="https://stackoverflow.com/questions/35703201/speed-benchmark-for-testing-tensorflow-install">speed benchmark for testing tensorflow install</a> but it seems too old to work with my versions. Is there something like it for Tensorflow 2? And how can I start it? I would like to compare it to another computer to find out which one is faster, and by how much.</p> <p>Cheers</p> <p>PS: If I can improve my post, just let me know.</p>
<p>I was able to find the full answer in another post which I hadn't found earlier:</p> <p><a href="https://stackoverflow.com/questions/61178521/what-is-the-proper-way-to-benchmark-part-of-tensorflow-graph/63591009#63591009">What is the proper way to benchmark part of tensorflow graph?</a></p>
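<p>For anyone else who lands here, a rough matmul timing sketch (my own, not from that post) can serve as a quick sanity check on TF 2.x; the matrix size and repeat count are arbitrary:</p> <pre><code>import time
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # confirm the GPU is visible

x = tf.random.normal((4000, 4000))
tf.matmul(x, x)  # warm-up so one-time setup cost is not measured

start = time.time()
for _ in range(10):
    y = tf.matmul(x, x)
_ = y.numpy()  # force execution to finish before stopping the clock
print('10 matmuls took', time.time() - start, 'seconds')
</code></pre>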
python|python-3.x|performance|tensorflow|tensorflow2.0
0
5,478
63,936,581
How do I create multiple dataframes from a loop based on two existing dataframes and matching creteria?
<p>I think the code below might make my question easier to understand, but I'll try to explain what I want to do anyway.</p> <p>I have two dataframes, with one column common to both. For each value in df1's col 1, I want the matching rows from df2 to be put in a separate dataframe, looping through df2 until I have a new dataframe for each of the criteria in df1's col 1.</p> <pre><code>Dataframes

df1 = pd.DataFrame([['a', '1'], ['p', '3']],
                   columns=['col 1', 'col 2'])

df2 = pd.DataFrame([['t','a', '1'], ['q','a', '2'], ['x','p', '3']],
                   columns=['col 1', 'col 2', 'col 3'])

for strategy in df2:
    if df2[df2['col 2']] == df1[df1['col 1']]:
        df = df2[df2['col 2']] == df1[df1['strategy']]
        df.to_excel(&quot;output.xlsx&quot;, sheet_name = 'Sheet_name_1')
</code></pre> <p>After that, I want to use each new dataframe in the loop, perform a function on it, and then export that new dataframe to Excel. But for the time being let's focus on the first problem.</p>
<p>Below is a solution provided to me by a colleague at work. A big thank you to him.</p> <p>So, we have two dataframes with a common key: the <code>Strategy</code> column in <code>indices</code> and the <code>Style</code> column in <code>funds</code>. We set the <code>Strategy</code> column as the index of the <code>indices</code> dataframe. We then create a dictionary of dataframes from the <code>funds</code> dataframe, each one based on a <code>Style</code> value. Next we create an example function that we can run. Finally we loop over the strategies, look up the index ticker assigned to each one, and pass each strategy's fund dataframe to our function, which gives us our final set of dataframes. At this point we would export each to Excel individually, one per sheet (I'm still working on this part; see the sketch after the code).</p> <pre><code>import pandas as pd

indices = pd.DataFrame([['Volatility', 'SPX Index'], ['Multistrategy', 'SXXP Index'],
                        ['Real Estate', 'IBEX Index']], columns=['Strategy', 'Ticker'])
indices = indices.set_index('Strategy')
# i would have the strategy as index to help with mapping later

funds = pd.DataFrame([['MABAX US Equity','Real Estate', 'name1'],
                      ['SPY US Equity','Real Estate', 'name2'],
                      ['AAPL US Equity','Volatility', 'name3']],
                     columns=['Ticker', 'Style', 'Name'])

strategies = indices.index.values

data_frames = {strategy:funds[funds['Style']==strategy] for strategy in strategies}
#this gives you a dictionary of dataframes so you can get funds of volatility with
# something like: data_frames['Volatility']

def my_function(fund_df,parameter,index_ticker):
    #this is your function and it takes the fund df of each
    #strategy and also the index ticker of each strategy
    print(fund_df)
    print(index_ticker)
    return

#you could run the function for each dataframe and each index by using something like:
for strategy in strategies:
    index_ticker = indices.loc[strategy,'Ticker']
    fund_df = data_frames[strategy]
    my_function(fund_df,'M',index_ticker)
</code></pre>
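<p>For the remaining export step, a hedged sketch using <code>pd.ExcelWriter</code> with one sheet per strategy (the output file name is a placeholder):</p> <pre><code># one sheet per strategy; note Excel caps sheet names at 31 characters
with pd.ExcelWriter('output.xlsx') as writer:
    for strategy in strategies:
        data_frames[strategy].to_excel(writer, sheet_name=strategy[:31], index=False)
</code></pre>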
python|pandas|dataframe|loops
0
5,479
47,001,413
How to replace 'any strings' with nan in pandas DataFrame using a boolean mask?
<p>I have a 227x4 DataFrame with country names and numerical values to clean (wrangle ?).</p> <p>Here's an abstraction of the DataFrame:</p> <pre><code>import pandas as pd import random import string import numpy as np pdn = pd.DataFrame(["".join([random.choice(string.ascii_letters) for i in range(3)]) for j in range (6)], columns =['Country Name']) measures = pd.DataFrame(np.random.random_integers(10,size=(6,2)), columns=['Measure1','Measure2']) df = pdn.merge(measures, how= 'inner', left_index=True, right_index =True) df.iloc[4,1] = 'str' df.iloc[1,2] = 'stuff' print(df) Country Name Measure1 Measure2 0 tua 6 3 1 MDK 3 stuff 2 RJU 7 2 3 WyB 7 8 4 Nnr str 3 5 rVN 7 4 </code></pre> <p>How do I replace string values with <code>np.nan</code> in all columns without touching the country names?</p> <p>I tried using a boolean mask:</p> <pre><code>mask = df.loc[:,measures.columns].applymap(lambda x: isinstance(x, (int, float))).values print(mask) [[ True True] [ True False] [ True True] [ True True] [False True] [ True True]] # I thought the following would replace by default false with np.nan in place, but it didn't df.loc[:,measures.columns].where(mask, inplace=True) print(df) Country Name Measure1 Measure2 0 tua 6 3 1 MDK 3 stuff 2 RJU 7 2 3 WyB 7 8 4 Nnr str 3 5 rVN 7 4 # this give a good output, unfortunately it's missing the country names print(df.loc[:,measures.columns].where(mask)) Measure1 Measure2 0 6 3 1 3 NaN 2 7 2 3 7 8 4 NaN 3 5 7 4 </code></pre> <p>I have looked at several questions related to mine (<a href="https://stackoverflow.com/questions/44440991/pandas-dataframe-boolean-mask-on-multiple-columns">[1]</a>, <a href="https://stackoverflow.com/questions/30519140/pandas-dataframe-set-value-on-boolean-mask">[2]</a>, <a href="https://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe?s=1|181.1213">[3]</a>, <a href="https://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe">[4]</a>, <a href="https://stackoverflow.com/questions/17383094/python-pandas-numpy-true-false-to-1-0-mapping?s=1|94.6631">[5]</a>, <a href="https://stackoverflow.com/questions/29686046/python-pandas-filter-method-using-boolean-mask?s=1|69.9785">[6]</a>, <a href="https://stackoverflow.com/questions/42035652/how-to-load-an-excel-sheet-and-clean-the-data-in-python?s=1|33.3427">[7]</a>, <a href="https://stackoverflow.com/questions/37837148/replaces-spaces-with-nan-in-pandas-dataframe">[8]</a>), but could not find one that answered my concern.</p>
<p>Assign only columns of interest:</p> <pre><code>cols = ['Measure1','Measure2']
mask = df[cols].applymap(lambda x: isinstance(x, (int, float)))
df[cols] = df[cols].where(mask)
print (df)

  Country Name Measure1 Measure2
0          uFv        7        8
1          vCr        5      NaN
2          qPp        2        6
3          QIC       10       10
4          Suy      NaN        8
5          eFS        6        4
</code></pre> <blockquote> <p>A meta-question, Is it normal that it takes me more than 3 hours to formulate a question here (including research) ?</p> </blockquote> <p>In my opinion yes, creating a good question is really hard.</p>
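<p>If the goal is simply numeric-or-NaN, a shorter alternative that skips building the mask is <code>pd.to_numeric</code> with <code>errors='coerce'</code>, which turns anything non-numeric into <code>NaN</code>:</p> <pre><code>cols = ['Measure1', 'Measure2']
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
</code></pre>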
python|python-3.x|pandas|numpy|dataframe
13
5,480
32,842,303
Looking up values from one csv-file in another csv-file, using a third csv-file as map
<p>I didn't quite figure out how to formulate this question; suggestions to improve the title are welcome.</p> <p>I have three files: <em>e_data.csv</em>, <em>t_data.csv</em> and <em>e2t.csv</em>. I want to merge <code>e_id</code>, <code>t_id</code>, <code>gene_name</code> and <code>value</code> into one file, as represented by <em>desired_result.csv</em>. The naive approach is as follows:</p> <ol> <li>For each row in <em>e_data.csv</em>, extract <code>e_id</code> and <code>value</code>.</li> <li>Check <em>e2t.csv</em> for which <code>t_id</code> corresponds to the given <code>e_id</code>.</li> <li>Check <em>t_data.csv</em> for which <code>gene_name</code> corresponds to the given <code>t_id</code>.</li> <li>Merge them all into one file.</li> </ol> <p>Please see the following example for what I'm trying to achieve:</p> <p><em>e_data.csv:</em></p> <pre><code> e_id value 1 110 2 240 3 370 </code></pre> <p><em>e2t.csv:</em></p> <pre><code> e_id t_id 1 10 2 24 3 32 </code></pre> <p><em>t_data.csv:</em></p> <pre><code> t_id gene_name 10 Gene1 24 Gene2 32 Gene3 </code></pre> <p><em>desired_result.csv:</em></p> <pre><code> gene_name t_id e_id value Gene1 10 1 110 Gene2 24 2 240 Gene3 32 3 370 </code></pre> <p>There's no limitation to which tools or language to use, but I would prefer to use Python, as that's what I'm most familiar with. R could also be an option. I've already implemented a solution in pure Python, but the datasets are rather large, and I'm hoping something like Pandas or Numpy can speed things up a bit. Thanks!</p>
<p>After you load all the csvs using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv" rel="nofollow"><code>read_csv</code></a> you can just iteratively <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="nofollow"><code>merge</code></a> them, so long as the column names are consistent:</p> <pre><code>In [149]: merged = t_data.merge(e2t.merge(e_data))

merged
Out[149]:
   t_id gene_name  e_id  value
0    10     Gene1     1    110
1    24     Gene2     2    240
2    32     Gene3     3    370
</code></pre> <p>The above works because, by default, <code>merge</code> joins on the matching column names and performs an inner merge, so the key values must match on the lhs and rhs.</p>
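<p>Putting it together end to end, a minimal sketch (assuming the three files are comma-separated; pass <code>sep</code> or <code>delim_whitespace=True</code> to <code>read_csv</code> if they are not) might be:</p> <pre><code>import pandas as pd

e_data = pd.read_csv('e_data.csv')
e2t = pd.read_csv('e2t.csv')
t_data = pd.read_csv('t_data.csv')

# inner merges on the shared id columns: e_id links e_data to e2t, t_id links e2t to t_data
merged = t_data.merge(e2t.merge(e_data))
merged.to_csv('desired_result.csv', index=False)
</code></pre>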
python|r|csv|numpy|pandas
4
5,481
32,768,555
find the set of column indices for non-zero values in each row in pandas' data frame
<p>Is there a good way to find the set of column indices for non-zero values in each row in pandas' data frame? Do I have to traverse the data frame row-by-row?</p> <p>For example, the data frame is</p> <pre><code>c1 c2 c3 c4 c5 c6 c7 c8 c9 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 1 1 1 1 1 0 2 1 5 5 0 0 1 0 4 6 4 3 0 1 1 1 1 5 10 3 5 2 4 1 2 2 1 3 6 4 0 1 0 0 0 0 0 3 9 1 0 1 0 2 1 0 </code></pre> <p>The output is expected to be</p> <pre><code>['c1','c2'] ['c1'] ['c2'] ... </code></pre>
<p>It seems you have to traverse the DataFrame by row.</p> <pre><code>cols = df.columns
bt = df.apply(lambda x: x &gt; 0)
bt.apply(lambda x: list(cols[x.values]), axis=1)
</code></pre> <p>and you will get:</p> <pre><code>0                                [c1, c2]
1                                    [c1]
2                                    [c2]
3                                    [c1]
4                                    [c2]
5                                      []
6            [c2, c3, c4, c5, c6, c7, c9]
7                [c1, c2, c3, c6, c8, c9]
8        [c1, c2, c4, c5, c6, c7, c8, c9]
9    [c1, c2, c3, c4, c5, c6, c7, c8, c9]
10                           [c1, c2, c4]
11               [c1, c2, c3, c5, c7, c8]
dtype: object
</code></pre> <p>If performance matters, try passing <code>raw=True</code> to the boolean DataFrame creation, like below:</p> <pre><code>%timeit df.apply(lambda x: x &gt; 0, raw=True).apply(lambda x: list(cols[x.values]), axis=1)
1000 loops, best of 3: 812 µs per loop
</code></pre> <p>That gives a noticeably better performance. The following is the <code>raw=False</code> (default) result:</p> <pre><code>%timeit df.apply(lambda x: x &gt; 0).apply(lambda x: list(cols[x.values]), axis=1)
100 loops, best of 3: 2.59 ms per loop
</code></pre>
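<p>As an alternative sketch, dropping into NumPy with a list comprehension avoids the row-wise <code>apply</code> entirely and is usually faster on large frames (worth benchmarking on your own data):</p> <pre><code>mask = df.values &gt; 0
result = [list(df.columns[row]) for row in mask]
</code></pre>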
python|pandas
12
5,482
32,817,017
Remove numbers from a numpy array
<p>Let us say I have a numpy array of numbers (e.g. integers). I want to drop the number <code>k</code> wherever it happens in the sequence. Currently I am writing a for loop for this, which seems to be overkill. Is there a straightforward way to do it? In general, what if I have more than one number to be dropped? </p>
<p>Assuming <code>A</code> to be the input array and <code>B</code> to be the array containing the numbers to be removed, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="noreferrer"><code>np.in1d</code></a> to get a mask of matches of <code>B</code> in <code>A</code> and then use an inverted version of the mask to index <code>A</code> and get the desired output. Here's what the implementation would look like:</p> <pre><code>A[~np.in1d(A,B).reshape(A.shape)]
</code></pre> <p>Sample run -</p> <pre><code>In [14]: A
Out[14]: array([3, 2, 1, 4, 3, 3, 2, 2, 4, 1])

In [15]: B
Out[15]: array([2, 4])

In [16]: A[~np.in1d(A,B).reshape(A.shape)]
Out[16]: array([3, 1, 3, 3, 1])
</code></pre> <p>For a 2D input array case, you would get a 1D array as output, like so -</p> <pre><code>In [21]: A
Out[21]:
array([[3, 3, 3, 4, 0, 4],
       [2, 4, 4, 4, 4, 3],
       [1, 2, 4, 4, 3, 1],
       [0, 3, 1, 4, 1, 1]])

In [22]: B
Out[22]: array([2, 4])

In [23]: A[~np.in1d(A,B).reshape(A.shape)]
Out[23]: array([3, 3, 3, 0, 3, 1, 3, 1, 0, 3, 1, 1, 1])
</code></pre>
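<p>On newer NumPy (1.13+), <code>np.isin</code> supersedes <code>np.in1d</code> and already returns a mask with <code>A</code>'s shape, so the reshape becomes unnecessary:</p> <pre><code>A[~np.isin(A, B)]
</code></pre>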
python|arrays|numpy|vectorization
5
5,483
38,796,548
Subtract a vector to each row of a dataframe
<p>My dataframe looks like this:</p> <pre><code>fruits = pd.DataFrame({'orange': [10, 20], 'apple': [30, 40], 'banana': [50, 60]})

   apple  banana  orange
0     30      50      10
1     40      60      20
</code></pre> <p>And I have this vector (it's also a dataframe):</p> <pre><code>sold = pd.DataFrame({'orange': [1], 'apple': [2], 'banana': [3]})

   apple  banana  orange
0      2       3       1
</code></pre> <p>I want to subtract this vector from each row of the initial dataframe to obtain a dataframe which looks like this:</p> <pre><code>   apple  banana  orange
0   28.0    47.0     9.0
1   38.0    57.0    19.0
</code></pre> <p>I tried:</p> <pre><code>print fruits.subtract(sold, axis = 0)
</code></pre> <p>And the output is:</p> <pre><code>   apple  banana  orange
0   28.0    47.0     9.0
1    NaN     NaN     NaN
</code></pre> <p>It worked only for the first line. I could create a dataframe filled with the vector for each row. Is there a more efficient way to subtract this vector? I don't want to use a loop.</p>
<p>Convert the df to a Series using <code>squeeze</code> and pass <code>axis=1</code>:</p> <pre><code>In [6]: fruits.sub(sold.squeeze(), axis=1)
Out[6]:
   apple  banana  orange
0     28      47       9
1     38      57      19
</code></pre> <p>The conversion is necessary because, by design, arithmetic operations between dfs align on both indices and columns; passing a Series instead lets you subtract that single row from every row of the df.</p>
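<p>Equivalently, selecting the first row of <code>sold</code> with <code>iloc</code> also yields a Series that aligns on the column labels:</p> <pre><code>In [7]: fruits - sold.iloc[0]
Out[7]:
   apple  banana  orange
0     28      47       9
1     38      57      19
</code></pre>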
python-2.7|pandas|dataframe
4
5,484
38,591,454
How to use ckpt data model into tensorflow iOS example?
<p>I am quite new to machine learning, and I am working on an iOS app for object detection using tensorflow. I have been using the sample data model provided by the tensorflow example in the form of a .pb file (graph.pb), which works just fine for object detection.</p> <p>But my backend team has given me model2_BN.ckpt as the data model file. I have tried to research how to use this file and I have no clue. Is it possible to use the ckpt file on the client side as the data model? If yes, how can I use it in the iOS tensorflow example as the data model?</p> <p>Please help. Thanks</p>
<p>This one is from my backend developer:</p> <pre><code>The .ckpt is the model given by tensorflow which includes all the weights/parameters in the model.
The .pb file stores the computational graph.
To make tensorflow work we need both the graph and the parameters.
There are two ways to get the graph:
(1) use the python program that builds it in the first place (tensorflowNetworkFunctions.py).
(2) Use a .pb file (which would have to be generated by tensorflowNetworkFunctions.py).

The .ckpt file is where all the intelligence is.
</code></pre>
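<p>In practice, a .ckpt is usually combined with the graph definition into a single .pb via TensorFlow's <code>freeze_graph</code> tool. A rough sketch only: the graph file name and the output node name below are placeholders specific to your model, so check both with your backend team:</p> <pre><code>python -m tensorflow.python.tools.freeze_graph \
    --input_graph=graph.pbtxt \
    --input_checkpoint=model2_BN.ckpt \
    --output_graph=frozen_model.pb \
    --output_node_names=output_node
</code></pre> <p>The resulting frozen_model.pb can then be dropped into the iOS example in place of the sample graph.pb.</p>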
ios|machine-learning|tensorflow
2
5,485
38,515,096
module error in multi-node spark job on google cloud cluster
<p>This code runs perfectly when I set master to localhost. The problem occurs when I submit on a cluster with two worker nodes. </p> <p>All the machines have the same version of python and packages. I have also set the path to point to the desired python version, i.e. 3.5.1. When I submit my spark job on the master ssh session, I get the following error:</p> <blockquote> <p>py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, .c..internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/worker.py", line 98, in main command = pickleSer._read_with_length(infile) File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length return self.loads(obj) File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/serializers.py", line 419, in loads return pickle.loads(obj, encoding=encoding) File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/mllib/__init__.py", line 25, in import numpy ImportError: No module named 'numpy'</p> </blockquote> <p>I saw other posts where people did not have access to their worker nodes. I do. I get the same message for the other worker node, and I'm not sure if I am missing some environment setting. Any help will be much appreciated.</p>
<p>Not sure if this qualifies as a solution. I submitted the same job using Dataproc on the Google platform and it worked without any problem. I believe the best way to run jobs on a Google cluster is via the utilities offered on the Google platform. The Dataproc utility seems to iron out any issues related to the environment.</p>
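<p>For anyone who needs to stay on a hand-rolled cluster instead, the usual fix for this error is to make sure the same Python, with numpy installed, exists on every node; a hedged sketch (the interpreter path is illustrative):</p> <pre><code># on the master and on every worker node
sudo pip install numpy

# then point Spark at that interpreter before spark-submit
export PYSPARK_PYTHON=/usr/bin/python3.5
</code></pre>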
python-3.x|numpy|pyspark|google-cloud-platform
0
5,486
62,945,449
How to plot time series data in plotly?
<p>When I plot my data with just the index the graph looks fine. But when I try to plot it with a datetime object in the x axis, the plot gets messed up. Does anyone know why? I provided the head of my data and also the two plots.</p> <p><a href="https://i.stack.imgur.com/0GkNm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GkNm.png" alt="enter image description here" /></a></p> <pre><code>import plotly.express as px fig = px.line(y=data.iloc[:,3]) fig.show() </code></pre> <p><a href="https://i.stack.imgur.com/k1Xk5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k1Xk5.png" alt="enter image description here" /></a></p> <pre><code>fig = px.line(y=data.iloc[:,3],x=data.iloc[:,0]) fig.show() </code></pre> <p><a href="https://i.stack.imgur.com/Gs1QN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gs1QN.png" alt="enter image description here" /></a></p>
<p>It is probably because of missing dates: you have around 180 data points, but your second plot shows the data spanning from 2014 to 2019. That means there are large gaps between consecutive points, which is why the second graph looks the way it does.</p> <p>Instead of datetime objects you could try plotting the column converted into strings, but then it will no longer be a true time series, since the many missing dates are simply not shown.</p>
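<p>It is also worth making sure the column really is parsed as datetimes and sorted before plotting, since unsorted dates produce exactly this kind of back-and-forth line. A hedged sketch, assuming the first column holds the dates and the fourth holds the values:</p> <pre><code>import pandas as pd
import plotly.express as px

data.iloc[:, 0] = pd.to_datetime(data.iloc[:, 0])
data = data.sort_values(data.columns[0])

fig = px.line(data, x=data.columns[0], y=data.columns[3])
fig.show()
</code></pre>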
python|pandas|datetime|plotly|plotly-python
1
5,487
63,103,778
Python: to_csv for decoded dataframe
<p>I have a text file that has encoded strings. I am decoding them using <code>urllib.parse.unquote(string_df.encoded_string)</code> and storing the result in a dataframe.</p> <p>I want to export this dataframe to a text file. However, it just adds garbage values.</p> <p>For example: Encoded String: %D1%81%D0%BF%D0%B0%D1%81%D0%BE-%D0%BF%D1%80%D0...</p> <p>Decoded using <code>urllib.parse.unquote</code>: спасо-преображенский собор</p> <p>Exported value in text file: ?????-?????????????? ?????</p> <p>I have tried exporting to an excel file using <code>to_excel</code>, but when I open the excel file, it gives an illegal character error. I also tried using <code>numpy.savetxt</code>, but it gives the same ?????-?????????????? ?????.</p> <p>Is there any way I can export it to a flat file and still have the desired "спасо-преображенский собор" result?</p>
<p>This looks like a character-encoding problem: make sure your CSV file is written and opened as 'UTF-8' or another compatible encoding, not ASCII, since 'спасо-преображенский собор' is Cyrillic rather than Latin characters.</p>
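<p>Concretely, passing an explicit encoding to <code>to_csv</code> usually fixes this; <code>'utf-8-sig'</code> additionally writes a BOM so that Excel also detects the encoding when opening the file (the file name is a placeholder):</p> <pre><code>df.to_csv('decoded_output.csv', encoding='utf-8-sig', index=False)
</code></pre>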
python|python-3.x|pandas|numpy|decode
1
5,488
67,936,362
How do I look up values in a dataframe and return matching values using python/pandas?
<p>I have 2 large dataframes, df1 and df2. I am missing a column (colB) in df2 and I would like to add that column based on the values in the shared column (colA). If I were using Excel I would do this via a standard vlookup formula, but I'm struggling to get the desired results using the pandas merge function.</p> <p>colA and colB both contain multiple entries of the same value, so I'm using this line of code to create a new dataframe with only the unique pairings.</p> <pre><code>df_keyvalues = df1[[&quot;colA&quot;, &quot;colB&quot;]].drop_duplicates()
</code></pre> <p>I'm then using merge to add colB into df2</p> <pre><code>df2 = df2.merge(df_keyvalues, how = &quot;left&quot;, on = &quot;colA&quot;)
</code></pre> <p>After running the above, I do get colB in df2, but I also get more rows in my dataframe than what I started with.</p> <p>What am I doing wrong?</p> <p>I would like to be able to look up the value in df2["colA"] in df1["colA"] and return the value in df1["colB"]. If the values in df2["colA"] and df1["colA"] are not an exact match, then leave the value in df2["colB"] empty and move on to the next one.</p> <p>Thanks in advance.</p>
<p>If you are getting more rows after the merge, this means that <code>colA</code> is not a unique key of <code>df_keyvalues</code>. This in turn means that the mapping <code>colA -&gt; colB</code> is not unique in <code>df1</code>, i.e. for at least one value of <code>colA</code> there are multiple values of <code>colB</code>.</p> <p>You need to create a unique mapping <code>colA -&gt; colB</code> from <code>df1</code> first. One way to do this would be:</p> <pre class="lang-py prettyprint-override"><code># take the smallest value if A-&gt;B mapping is not unique
df_AtoB = df1.groupby(&quot;colA&quot;, as_index=False).agg(colB_=(&quot;colB&quot;, &quot;min&quot;))
</code></pre> <p>What exactly is the &quot;right&quot; way to de-duplicate above depends on your use-case.</p> <p>Afterwards you can add <code>colB</code> to <code>df2</code> as follows (since <code>df2</code> has no <code>colB</code> yet, the merged column just needs renaming):</p> <pre class="lang-py prettyprint-override"><code>df = df2.merge(df_AtoB, on=&quot;colA&quot;, how=&quot;left&quot;)
df = df.rename(columns={&quot;colB_&quot;: &quot;colB&quot;})
</code></pre>
python|pandas|merge|lookup
1
5,489
31,767,173
How do I vectorize this loop in numpy?
<p>I have this array:</p> <pre><code>arr = np.array([3, 7, 4]) </code></pre> <p>And these boolean indices:</p> <pre><code>cond = np.array([False, True, True]) </code></pre> <p>I want to find the index of the maximum value in the array where the boolean condition is true. So I do:</p> <pre><code>np.ma.array(arr, mask=~cond).argmax() </code></pre> <p>Which works and returns 1. But if I had an array of boolean indices:</p> <pre><code>cond = np.array([[False, True, True], [True, False, True]]) </code></pre> <p>Is there a vectorized/numpy way of iterating through the array of boolean indices to return [1, 2]?</p>
<p>For your special use case of <code>argmax</code>, you may use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a> and set the masked values to the most negative representable integer (integer arrays have no infinity):</p> <pre><code>&gt;&gt;&gt; inf = np.iinfo('i8').max
&gt;&gt;&gt; np.where(cond, arr, -inf).argmax(axis=1)
array([1, 2])
</code></pre> <p>Alternatively, you can manually broadcast using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow"><code>np.tile</code></a>:</p> <pre><code>&gt;&gt;&gt; np.ma.array(np.tile(arr, 2).reshape(2, 3), mask=~cond).argmax(axis=1)
array([1, 2])
</code></pre>
python|numpy
3
5,490
31,869,257
Python & Pandas: Combine columns into a date
<p>In my <code>dataframe</code>, the time is separated in 3 columns: <code>year</code>, <code>month</code>, <code>day</code>, like this: <a href="https://i.stack.imgur.com/XWL8X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XWL8X.png" alt="enter image description here"></a></p> <p>How can I convert them into <code>date</code>, so I can do time series analysis?</p> <p>I can do this:</p> <pre><code>df.apply(lambda x:'%s %s %s' % (x['year'],x['month'], x['day']),axis=1) </code></pre> <p>which gives: </p> <pre><code>1095 1954 1 1 1096 1954 1 2 1097 1954 1 3 1098 1954 1 4 1099 1954 1 5 1100 1954 1 6 1101 1954 1 7 1102 1954 1 8 1103 1954 1 9 1104 1954 1 10 1105 1954 1 11 1106 1954 1 12 1107 1954 1 13 </code></pre> <p>But what follows?</p> <p><strong>EDIT:</strong> This is what I end up with:</p> <pre><code>from datetime import datetime df['date']= df.apply(lambda x:datetime.strptime("{0} {1} {2}".format(x['year'],x['month'], x['day']), "%Y %m %d"),axis=1) df.index= df['date'] </code></pre>
<p>Here's how to convert the values to datetimes:</p> <pre><code>from datetime import datetime
df.apply(lambda x: datetime.strptime("{0} {1} {2} 00:00:00".format(x['year'], x['month'], x['day']), "%Y %m %d %H:%M:%S"), axis=1)
</code></pre>
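<p>On newer pandas (0.18.1+), <code>pd.to_datetime</code> accepts a DataFrame with columns named year/month/day directly, which is vectorized and avoids the row-wise <code>apply</code>:</p> <pre><code>df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
df = df.set_index('date')
</code></pre>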
python|pandas
9
5,491
31,993,186
type switch from int to float64 after merge creating error
<p>I'm trying to merge two dataframes in Pandas. One of the dataframes has a numerical column whose type is "int64"</p> <p>However, after the merge, the type is switched to "float64" for some reason. Note that this is not my join column</p> <p>When I try to access the dataframe, it errors out:</p> <p>In [56]: account_aggregates.head()<br> Out[56]: ) failed: TypeError: %d format: a number is required, not numpy.float64></p>
<p>The reason the dtype is changed to <code>float64</code> is that missing values (<code>NaN</code>) cannot be represented with an integer dtype.</p> <p>With respect to the error message, I had a hunch that it was <code>'display.float_format'</code>, as I answered a <a href="https://stackoverflow.com/questions/31983341/using-scientific-notation-in-pandas/31983844#31983844">question</a> earlier today on this and saw this error. I think it's because you have to pass a <code>str.format</code> method as the value rather than a format string:</p> <p><code>pd.set_option('display.float_format', '{:.2g}'.format)</code> as opposed to <code>pd.set_option('display.float_format', '%.2g')</code>, as an example.</p>
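<p>Once the source of the <code>NaN</code>s is handled, the column can be turned back into integers. Two hedged options ('col' stands for whichever numeric column was affected):</p> <pre><code># fill the gaps and cast back (loses the missing-value information)
df['col'] = df['col'].fillna(0).astype('int64')

# or, on recent pandas (0.24+), use the nullable integer dtype, which keeps missing values
df['col'] = df['col'].astype('Int64')
</code></pre>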
python|numpy|pandas
2
5,492
61,554,255
how can i fix this : AttributeError: module 'tensorflow_core.python.keras.api._v2.keras' has no attribute 'Dense'
<p>AttributeError: module 'tensorflow_core.python.keras.api._v2.keras' has no attribute 'Dense' in model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))model.add(tf.keras.Dense(10, activation=tf.nn.softmax)</p>
<p>In your last <code>model.add()</code> call, you try to use <code>tf.keras.Dense</code> instead of <code>tf.keras.layers.Dense</code>. Modify your code to the following:</p> <pre><code> model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
 model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
 model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))  # &lt;-- your typo was in this line
</code></pre>
python|tensorflow|keras
3
5,493
61,490,955
Why my callback is not invoking in Tensorflow?
<p>Below is my Tensorflow and Python code, which should end the training when accuracy reaches 99% via the callback function. But the callback is never invoked. Where is the problem?</p> <pre><code>def train_mnist():
    class myCallback(tf.keras.callbacks.Callback):
        def on_epoc_end(self, epoch,logs={}):
            if (logs.get('accuracy')&gt;0.99):
                print("Reached 99% accuracy so cancelling training!")
                self.model.stop_training=True

    mnist = tf.keras.datasets.mnist

    (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)
    x_train= x_train/255.0
    x_test= x_test/255.0
    callbacks=myCallback()
    model = tf.keras.models.Sequential([
        # YOUR CODE SHOULD START HERE
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(256, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # model fitting
    history = model.fit(x_train,y_train, epochs=10,callbacks=[callbacks])
    # model fitting
    return history.epoch, history.history['acc'][-1]
</code></pre>
<p>Unfortunately I don't have enough reputation to comment on one of the above comments, but I wanted to point out that the on_epoch_end function is called directly by tensorflow when an epoch ends. In this case, we're just implementing it inside a custom python class, and it will be called automatically by the underlying framework. I'm sourcing from Tensorflow in Practice (deeplearning.ai, week 2) on Coursera, which seems very similar to where the issues with the above callback are coming from. Here's some proof from my most recent run:</p> <pre><code>Epoch 1/20
59968/60000 [============================&gt;.] - ETA: 0s - loss: 1.0648 - acc: 0.9491Inside callback
60000/60000 [==============================] - 34s 575us/sample - loss: 1.0645 - acc: 0.9491
Epoch 2/20
59968/60000 [============================&gt;.] - ETA: 0s - loss: 0.0560 - acc: 0.9825Inside callback
60000/60000 [==============================] - 35s 583us/sample - loss: 0.0560 - acc: 0.9825
Epoch 3/20
59840/60000 [============================&gt;.] - ETA: 0s - loss: 0.0457 - acc: 0.9861Inside callback
60000/60000 [==============================] - 31s 512us/sample - loss: 0.0457 - acc: 0.9861
Epoch 4/20
59840/60000 [============================&gt;.] - ETA: 0s - loss: 0.0428 - acc: 0.9873Inside callback
60000/60000 [==============================] - 32s 528us/sample - loss: 0.0428 - acc: 0.9873
Epoch 5/20
59808/60000 [============================&gt;.] - ETA: 0s - loss: 0.0314 - acc: 0.9909Inside callback
60000/60000 [==============================] - 30s 507us/sample - loss: 0.0315 - acc: 0.9909
Epoch 6/20
59840/60000 [============================&gt;.] - ETA: 0s - loss: 0.0271 - acc: 0.9924Inside callback
60000/60000 [==============================] - 32s 532us/sample - loss: 0.0270 - acc: 0.9924
Epoch 7/20
59968/60000 [============================&gt;.] - ETA: 0s - loss: 0.0238 - acc: 0.9938Inside callback
60000/60000 [==============================] - 33s 555us/sample - loss: 0.0238 - acc: 0.9938
Epoch 8/20
59936/60000 [============================&gt;.] - ETA: 0s - loss: 0.0255 - acc: 0.9934Inside callback
60000/60000 [==============================] - 33s 550us/sample - loss: 0.0255 - acc: 0.9934
Epoch 9/20
59872/60000 [============================&gt;.] - ETA: 0s - loss: 0.0195 - acc: 0.9953Inside callback
60000/60000 [==============================] - 33s 557us/sample - loss: 0.0194 - acc: 0.9953
Epoch 10/20
59744/60000 [============================&gt;.] - ETA: 0s - loss: 0.0186 - acc: 0.9959Inside callback
60000/60000 [==============================] - 33s 551us/sample - loss: 0.0185 - acc: 0.9959
Epoch 11/20
59968/60000 [============================&gt;.] - ETA: 0s - loss: 0.0219 - acc: 0.9954Inside callback
60000/60000 [==============================] - 32s 530us/sample - loss: 0.0219 - acc: 0.9954
Epoch 12/20
59936/60000 [============================&gt;.] - ETA: 0s - loss: 0.0208 - acc: 0.9960Inside callback
60000/60000 [==============================] - 33s 558us/sample - loss: 0.0208 - acc: 0.9960
Epoch 13/20
59872/60000 [============================&gt;.] - ETA: 0s - loss: 0.0185 - acc: 0.9968Inside callback
60000/60000 [==============================] - 31s 520us/sample - loss: 0.0184 - acc: 0.9968
Epoch 14/20
59872/60000 [============================&gt;.] - ETA: 0s - loss: 0.0181 - acc: 0.9970Inside callback
60000/60000 [==============================] - 35s 587us/sample - loss: 0.0181 - acc: 0.9970
Epoch 15/20
59936/60000 [============================&gt;.]
- ETA: 0s - loss: 0.0193 - acc: 0.9971Inside callback
60000/60000 [==============================] - 33s 555us/sample - loss: 0.0192 - acc: 0.9972
Epoch 16/20
59968/60000 [============================&gt;.] - ETA: 0s - loss: 0.0176 - acc: 0.9972Inside callback
60000/60000 [==============================] - 33s 558us/sample - loss: 0.0176 - acc: 0.9972
Epoch 17/20
59968/60000 [============================&gt;.] - ETA: 0s - loss: 0.0183 - acc: 0.9974Inside callback
60000/60000 [==============================] - 33s 555us/sample - loss: 0.0182 - acc: 0.9974
Epoch 18/20
59872/60000 [============================&gt;.] - ETA: 0s - loss: 0.0225 - acc: 0.9970Inside callback
60000/60000 [==============================] - 34s 570us/sample - loss: 0.0224 - acc: 0.9970
Epoch 19/20
59808/60000 [============================&gt;.] - ETA: 0s - loss: 0.0185 - acc: 0.9975Inside callback
60000/60000 [==============================] - 33s 548us/sample - loss: 0.0185 - acc: 0.9975
Epoch 20/20
59776/60000 [============================&gt;.] - ETA: 0s - loss: 0.0150 - acc: 0.9979Inside callback
60000/60000 [==============================] - 34s 565us/sample - loss: 0.0149 - acc: 0.9979
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
&lt;ipython-input-25-1ff3c304aec3&gt; in &lt;module&gt;
----&gt; 1 _, _ = train_mnist_conv()

&lt;ipython-input-24-b469df35dac0&gt; in train_mnist_conv()
     38     )
     39     # model fitting
---&gt; 40     return history.epoch, history.history['accuracy'][-1]
     41

KeyError: 'accuracy'
</code></pre> <p>The KeyError is because the history object does not have the key 'accuracy', so I wanted to address that as a source of concern before continuing on.</p>
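<p>To spell out the two pitfalls in the question's code that this points at: first, the hook must be spelled exactly <code>on_epoch_end</code>; the question defines <code>on_epoc_end</code>, a method name Keras never calls, so the callback silently does nothing. Second, the key read from <code>logs</code> / <code>history.history</code> must match the metric name for your TF version ('accuracy' in TF 2.x when compiling with <code>metrics=['accuracy']</code>, 'acc' in older versions, as in the log above). A corrected sketch:</p> <pre><code>class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):  # note the spelling: on_epoch_end
        acc = logs.get('accuracy') or logs.get('acc')  # key depends on TF version
        if acc is not None and acc &gt; 0.99:
            print("Reached 99% accuracy so cancelling training!")
            self.model.stop_training = True
</code></pre>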
python|tensorflow|machine-learning|keras|deep-learning
0
5,494
61,575,633
Select size for output vector with 1000s of labels
<p>Most of the examples on the Internet regarding <code>multi-label</code> image classification are based on just a <code>few</code> labels. For example, with <code>6</code> classes we get:</p> <pre><code>model = models.Sequential() model.add(layer=base) model.add(layer=layers.Flatten()) model.add(layer=layers.Dense(units=256, activation="relu")) model.add(layer=layers.Dense(units=6, activation="sigmoid")) </code></pre> <pre><code>_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= vgg16 (Model) (None, 7, 7, 512) 14714688 _________________________________________________________________ flatten_1 (Flatten) (None, 25088) 0 _________________________________________________________________ dense_1 (Dense) (None, 256) 6422784 _________________________________________________________________ dense_2 (Dense) (None, 6) 1542 ================================================================= Total params: 21,139,014 Trainable params: 13,503,750 Non-trainable params: 7,635,264 </code></pre> <p>However, for datasets with <code>significantly</code> more labels, the size of the training <code>parameters</code> explodes and eventually training process fails with a <code>ResourceExhaustedError</code> error. For example, with <code>3047</code> label we get:</p> <pre><code>model = models.Sequential() model.add(layer=base) model.add(layer=layers.Flatten()) model.add(layer=layers.Dense(units=256, activation="relu")) model.add(layer=layers.Dense(units=3047, activation="sigmoid")) </code></pre> <pre><code>_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= vgg16 (Model) (None, 7, 7, 512) 14714688 _________________________________________________________________ flatten_1 (Flatten) (None, 25088) 0 _________________________________________________________________ dense_1 (Dense) (None, 256) 6422784 _________________________________________________________________ dense_2 (Dense) (None, 3047) 783079 ================================================================= Total params: 21,920,551 Trainable params: 14,285,287 Non-trainable params: 7,635,264 _________________________________________________________________ </code></pre> <p>Obviously, there is something wrong with my network but not sure how to overcome this issue...</p>
<p>A ResourceExhaustedError is related to memory issues: either your system (or GPU) does not have enough memory for the model and batch size you are using, or some other part of the code is consuming it.</p>
tensorflow|keras|transfer-learning
0
5,495
61,232,554
Batching in tf.Keras 2.1 -> ValueError: Error when checking input
<p>I have a tf dataset on which I'm trying to implement batching in order to reduce memory load. The model works without batching, but I get OOM errors hence trying out batches.</p> <p><strong>Dataset</strong></p> <pre><code>ds = tf.data.Dataset.from_tensors(( (train_ids, train_masks), labels )) &gt;&gt;&gt;ds &lt;TensorDataset shapes: (((22500, 64), (22500, 64)), (22500,)), types: ((tf.int16, tf.int16), tf.int16)&gt; </code></pre> <p><strong>Batching</strong></p> <pre><code>BATCH_SIZE = 4 ds = ds.batch(BATCH_SIZE, drop_remainder=True) TOTAL_BATCHES = math.ceil(len(train_ids) / BATCH_SIZE) TEST_BATCHES = TOTAL_BATCHES // 10 ds.shuffle(TOTAL_BATCHES) print("Total Batches : ", str(TOTAL_BATCHES)) print("Test batches : ", str(TEST_BATCHES)) test_data = ds.take(TEST_BATCHES) val_data = ds.take(TEST_BATCHES) train_data = ds.skip(TEST_BATCHES*2) </code></pre> <p><strong>Model</strong></p> <pre><code>model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2', trainable=False) input_layer = Input(shape=(64, ), dtype=tf.int32) input_mask_layer = Input(shape=(64, ), dtype=tf.int32) bert_layer = model([input_layer, input_mask_layer])[0] flat_layer = Flatten() (bert_layer) dense_output = Dense(n_classes, activation='softmax') (flat_layer) model_ = Model(inputs=[input_layer, input_mask_layer], outputs=dense_output) optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model_.compile(optimizer=optimizer, loss=loss, metrics=[metric]) </code></pre> <p><strong>Summary of Model</strong></p> <pre><code>Model: "model_17" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_64 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ input_65 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ tf_albert_for_sequence_classifi ((None, 2),) 11685122 input_64[0][0] input_65[0][0] __________________________________________________________________________________________________ flatten_18 (Flatten) (None, 2) 0 tf_albert_for_sequence_classifica __________________________________________________________________________________________________ dense_17 (Dense) (None, 45) 135 flatten_18[0][0] ================================================================================================== Total params: 11,685,257 Trainable params: 135 Non-trainable params: 11,685,122 </code></pre> <p>When I call <code>model_.fit(train_data, validation_data=(val_data), epochs=1, verbose=1)</code> with the batch I got the error:</p> <pre><code> ValueError: Error when checking input: expected input_66 to have 2 dimensions, but got array with shape (None, 22500, 64) </code></pre> <p>I know I need to reshape my inputs but I'm not sure how.</p>
<p>I resolved this problem by not using tensorflow datasets. It's obviously not a perfect fix, but it works for now.</p> <p>With the various inputs and labels as tf tensors, I did:</p> <pre><code>input_ids = input_ids.numpy()
input_masks = input_masks.numpy()
labels = labels.numpy()
</code></pre> <p>And then for the fit:</p> <pre><code>model.fit([input_ids, input_masks], labels, epochs=10, batch_size=10)
</code></pre> <p>Batch sizing is now working and I am no longer getting OOM errors.</p>
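<p>For reference, the shape error in the original <code>tf.data</code> approach most likely comes from <code>Dataset.from_tensors</code>, which wraps the entire arrays as a single element of shape (22500, 64). <code>Dataset.from_tensor_slices</code> slices along the first axis instead, so batching then yields the (batch, 64) inputs the model expects. A hedged sketch, untested against the exact model above:</p> <pre><code>ds = tf.data.Dataset.from_tensor_slices(((train_ids, train_masks), labels))
ds = ds.batch(BATCH_SIZE, drop_remainder=True)
</code></pre>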
python|tensorflow|keras
0
5,496
68,766,978
how to convert pandas data frame rows into columns
<p>I have a data frame like</p> <pre><code>df.to_dict('list')
</code></pre> <p>output:</p> <pre><code>{'ChannelPartnerID': [10000, 10000, 10000, 10000, 10000, 10001, 10001, 10001, 10002, 10002],
 'Brand': ['B1', 'B2', 'B3', 'B4', 'B5', 'B1', 'B2', 'B5', 'B1', 'B4'],
 'Sales': [29630, 38573, 1530, 21793, 7155, 26477, 42158, 14612, 6649, 6468]}
</code></pre> <pre><code>df
</code></pre> <p>output:</p> <pre><code>   ChannelPartnerID Brand  Sales
0             10000    B1  29630
1             10000    B2  38573
2             10000    B3   1530
3             10000    B4  21793
4             10000    B5   7155
5             10001    B1  26477
6             10001    B2  42158
7             10001    B5  14612
8             10002    B1   6649
9             10002    B4   6468
</code></pre> <p>I want to group the data by 'ChannelPartnerID', keeping one row per unique 'ChannelPartnerID' value, and convert the 'Brand' column values into columns containing the Sales price for each brand.</p> <p>I want the output to look like this:</p> <pre><code>   ChannelPartnerID  B1_Sales  B2_Sales  B3_Sales  B4_Sales  B5_Sales  B6_Sales  B7_Sales
0             10000     29630     38573      1530     21793      7155         0         0
1             10001     26477     42158         0         0     14612         0         0
2             10002      6649         0         0      6468         0         0         0
</code></pre>
<p>What you want is called a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>:</p> <pre><code>df.pivot(*df).fillna(0).add_suffix('_Sales')
</code></pre> <p>output:</p> <pre><code>Brand             B1_Sales  B2_Sales  B3_Sales  B4_Sales  B5_Sales
ChannelPartnerID
10000                29630     38573      1530     21793      7155
10001                26477     42158         0         0     14612
10002                 6649         0         0      6468         0
</code></pre> <p><em>NB. <code>df.pivot(*df)</code> is a shortcut for <code>df.pivot(index='ChannelPartnerID', columns='Brand', values='Sales')</code></em></p>
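<p>To reproduce the exact desired layout, with <code>ChannelPartnerID</code> back as a regular column and integer sales, the explicit keyword form (required on recent pandas, where <code>pivot</code> arguments became keyword-only) would be:</p> <pre><code>out = (df.pivot(index='ChannelPartnerID', columns='Brand', values='Sales')
         .fillna(0)
         .astype(int)
         .add_suffix('_Sales')
         .rename_axis(columns=None)
         .reset_index())
</code></pre>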
python|pandas|dataframe|transpose
1
5,497
68,595,777
How to drop the all the 1's in a correlation matrix
<p>I'm trying to change/eliminate the 1's that run diagonally in a correlation matrix so that when I take the average of the rows of the correlation matrix, the 1s don't affect the mean of each of the rows.</p> <p>Let's say I have the dataset,</p> <pre><code> A B C D E F 0 45 100 58 78 80 35 1 49 80 80 104 58 20 2 49 80 65 78 79 20 3 65 100 80 159 83 45 4 65 123 78 115 100 50 5 45 122 84 100 85 20 6 60 120 78 44 105 55 7 62 80 109 48 78 25 8 63 39 85 65 79 25 9 80 52 100 50 103 30 10 80 43 78 64 120 60 11 60 60 130 43 135 45 12 80 50 111 59 115 50 13 82 65 130 63 78 90 14 83 58 85 80 45 80 15 100 64 100 65 30 70 </code></pre> <p>When I do <code>dfcorr = df.corr()</code> <code>dfcorr</code>, I get</p> <pre><code> A B C D E F A 1.000000 0.842125 0.834808 0.832773 0.844158 0.806787 B 0.842125 1.000000 0.847606 0.907595 0.818668 0.863645 C 0.834808 0.847606 1.000000 0.718199 0.804671 0.582033 D 0.832773 0.907595 0.718199 1.000000 0.884236 0.878421 E 0.844158 0.818668 0.804671 0.884236 1.000000 0.718668 F 0.806787 0.863645 0.582033 0.878421 0.718668 1.000000 </code></pre> <p>I want all the 1's to be dropped so that if I want to take the mean of each of the rows, the 1's won't affect them.</p>
<p>If you are working with it as a data frame this will work:</p> <pre><code>df=pd.DataFrame({'c1':[1, 0, 0.3, 0.4], 'c2':[0.2, 1, 0.6, 0.4], 'c3':[0.1, 0, 1, 0.4], 'c4':[0.7, 0.2, 0.2, 1]} ) df.where(df!=1).mean(axis=1) </code></pre> <p>This only works correctly if all 1's are on the diagonal.</p>
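<p>If some off-diagonal correlations could legitimately equal 1, a safer variant (a sketch) blanks only the diagonal itself with <code>np.fill_diagonal</code> and then averages, since <code>mean</code> skips <code>NaN</code> by default:</p> <pre><code>import numpy as np

corr = df.corr()
np.fill_diagonal(corr.values, np.nan)  # modifies corr in place, diagonal only
row_means = corr.mean(axis=1)
</code></pre>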
python|pandas
0
5,498
68,527,072
Pandas group by window range
<p>I have the following data table:</p> <pre><code>values
======
2.0
2.5
3.2
7.0
7.8
9.0
11.0
</code></pre> <p>I want to extract groups within a certain window, for example</p> <pre><code>window_size = 1.0
</code></pre> <p>All values within this distance should become one group:</p> <pre><code>values   group
======   ====
2.0      1
2.5      1
3.2      1
7.0      2
7.8      2
9.0      3
11.0     4
</code></pre> <p>3.2 and 2.0 are in one group because 2.5 is between them and the distance to each of them is below the window size 1.0.</p> <p>How to achieve this with pandas?</p> <p><strong>Edit1</strong> (a more sophisticated example that returns wrong groups with the answers below):</p> <pre><code>window_size = 1000000

value        group   correct_group
65951649.0   1       1
59397882.0   1       2
7633231.0    1       3
7638485.0    1       3
68085447.0   2       4
67973423.0   2       4
</code></pre> <p><strong>Edit2</strong> Follow-up question, is it possible to group by another group: <a href="https://stackoverflow.com/questions/68529463/pandas-group-by-window-range-follow-up-question-with-category">Pandas group by window range (Follow up question with category)</a></p>
<p>IIUC use <code>diff</code> and <code>cumsum</code>:</p> <pre><code>window_size = 1.0 df[&quot;group&quot;] = df[&quot;values&quot;].diff().abs().gt(window_size).cumsum()+1 print (df) values group 0 2.0 1 1 2.5 1 2 3.2 1 3 7.0 2 4 7.8 2 5 9.0 3 6 11.0 4 </code></pre>
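<p>For the unsorted values in Edit1, a hedged variant is to sort by value first, build the groups on the sorted series, and let pandas align the result back on the original index. The group labels then follow sorted order rather than order of appearance, but the grouping itself matches <code>correct_group</code>:</p> <pre><code>s = df[&quot;value&quot;].sort_values()
df[&quot;group&quot;] = s.diff().abs().gt(window_size).cumsum().add(1)
</code></pre>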
python|pandas|group-by
3
5,499
68,837,404
Fiona not seeing .shp file as a recognised format
<p>All of my unittests pass on my local machine, however when I try to use a .yml file to test them every time a pull request is created, there are several failures. An example of one of the error messages is shown below:</p> <pre><code>_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ &gt; ??? E fiona._err.CPLE_OpenFailedError: 'static_data/england_wa_2011_clipped.shp' not recognized as a supported file format. fiona/_err.pyx:291: CPLE_OpenFailedError </code></pre> <p>My Linux .yml file is below, I have already tried changing around the working directory and it appears to be correct. The file is not corrupted and as it is the same on both VM's I think it is an issue with Fiona. This file also has a corresponding file for testing on a Windows VM however they are spitting out the same error messages and failing the same tests.</p> <pre><code>name: Python Linux application on: pull_request: branches: [ '**' ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Set up Python 3.8 uses: actions/setup-python@v2 with: python-version: 3.8 - name: Install dependencies run: | python -m pip install --upgrade pip sudo apt-get install libproj-dev proj-data proj-bin sudo apt-get install libgeos-dev sudo apt-add-repository ppa:ubuntugis/ubuntugis-unstable sudo apt-get update sudo apt-get install gdal-bin libgdal-dev pip install GDAL==3.2.3 pip install flake8 pytest Cython numpy pyproj pygeos if [ -f requirements-linux.txt ]; then pip install -r requirements-linux.txt; fi - name: Lint with flake8 run: | # stop the build if there are Python syntax errors or undefined names flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics - name: Test with pytest run: | pytest </code></pre> <p>GitHub Repo: <a href="https://github.com/Zach10a/seedpod_ground_risk" rel="nofollow noreferrer">https://github.com/Zach10a/seedpod_ground_risk</a> The branch is CI.</p>
<p>Your problem is that the <code>actions/checkout@v2</code> action does not, by default, check out files stored using LFS. So while there is a file named, for example, <code>static_data/england_wa_2011_clipped.shp</code> in your repository, the contents are going to look something like this:</p> <pre><code>version https://git-lfs.github.com/spec/v1
oid sha256:c60f74e3b8ed753d771378f0b03b7c8e8a84406f413a37f9f5242ac9235a2e6c
size 114084720
</code></pre> <p>So Fiona is giving you an accurate error:</p> <pre><code>E fiona._err.CPLE_OpenFailedError: 'static_data/england_wa_2011_clipped.shp' not recognized as a supported file format.
</code></pre> <p>You need to instruct the <code>checkout</code> action to download files stored in LFS:</p> <pre><code>    steps:
    - uses: actions/checkout@v2
      with:
        lfs: true
</code></pre> <p>You can find the repository where I test this all out <a href="https://github.com/larsks/so-example-68837404" rel="nofollow noreferrer">here</a>.</p>
python|github-actions|shapefile|geopandas|fiona
1