Unnamed: 0    int64     0 to 378k
id            int64     49.9k to 73.8M
title         string    lengths 15 to 150
question      string    lengths 37 to 64.2k
answer        string    lengths 37 to 44.1k
tags          string    lengths 5 to 106
score         int64     -10 to 5.87k
376,500
67,369,572
Product scoring in pandas dataframe
<p>I have a product id dataframe. I would like to find the best product by scoring each product. For each variable, the higher the value the better the product score, except for returns, where more returns mean a lower score. I also need to assign a different weight to the variables ShippedRevenue and Returns, whose importance may be increased by 20 percent.</p> <p>A scoring function can look like this: Score = ShippedUnits + 1.2*ShippedRevenue + OrderedUnits - 1.2*Returns + View + Stock, where 0&lt;=Score&lt;=100.</p> <p>Please help. Thank you.</p> <pre><code> df_product=pd.DataFrame({'ProductId':['1','2','3','4','5','6','7','8','9','10'],'ShippedUnits': [6,8,0,4,27,3,4,14,158,96],'ShippedRevenue':[268,1705,1300,950,1700,33380,500,2200,21000,24565] ,'OrderedUnits':[23,78,95,52,60,76,68,92,34,76],'Returns':[0,0,6,0,2,5,6,5,2,13],'View': [0,655,11,378,920,12100,75,1394,12368,14356],'Stock':[24,43,65,27,87,98,798,78,99,231] }) </code></pre>
<pre><code> df_product=pd.DataFrame({'ProductId':['1','2','3','4','5','6','7','8','9','10'],'ShippedUnits': [6,8,0,4,27,3,4,14,158,96],'ShippedRevenue':[268,1705,1300,950,1700,33380,500,2200,21000,24565] ,'OrderedUnits':[23,78,95,52,60,76,68,92,34,76],'Returns':[0,0,6,0,2,5,6,5,2,13],'View': [0,655,11,378,920,12100,75,1394,12368,14356],'Stock':[24,43,65,27,87,98,798,78,99,231] }) df_product['score'] = df_product['ShippedUnits'] +1.2*df_product['ShippedRevenue']+df_product['OrderedUnits']-1.2*df_product['Returns']+df_product['View']+df_product['Stock'] df_product['score']=(df_product['score']-df_product['score'].min())/(df_product['score'].max()-df_product['score'].min())*100 df_product </code></pre>
python|pandas|product|segment
1
376,501
67,354,697
Saving prediction_generator results in tensorflow and python
<p>Let's assume we fitted a model in TensorFlow:</p> <pre><code>model.fit( train_generator, epochs=epochs, verbose=1, steps_per_epoch=steps_per_epoch, validation_data=valid_generator, validation_steps=val_steps_per_epoch).history </code></pre> <p>In the next step, we generate predictions.</p> <pre><code> Y_pred = model.predict_generator(valid_generator, np.ceil(valid_generator.samples / valid_generator.batch_size)) </code></pre> <p>I'm wondering if it is possible to save predictions and load them from disk for debugging subsequent code without retraining the model and predicting the data each time after each restart.</p> <p>Of course, it is possible to save and load the model, but there is still some overhead on predicting.</p> <p>Any ideas are highly appreciated. Thanks in advance</p>
<p>Based on my understanding from the comment box, here is some possible solution for your query, let me know if it works for you or not.</p> <blockquote> <p>I'm wondering if it is possible to save predictions and load them from disk for debugging subsequent code without retraining the model and predicting the data each time after each restart.</p> </blockquote> <hr /> <p>First, we build a model and train it first.</p> <pre><code>import tensorflow as tf # Model input = tf.keras.Input(shape=(28, 28)) base_maps = tf.keras.layers.Flatten(input_shape=(28, 28))(input) base_maps = tf.keras.layers.Dense(128, activation='relu')(base_maps) base_maps = tf.keras.layers.Dense(units=10, activation='softmax', name='primary')(base_maps) model = tf.keras.Model(inputs=[input], outputs=[base_maps]) # compile model.compile( loss = tf.keras.losses.CategoricalCrossentropy(), metrics = ['accuracy'], optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) ) # data (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train = tf.divide(x_train, 255) y_train = tf.one_hot(y_train , depth=10) # customized fit model.fit(x_train, y_train, batch_size=512, epochs=3, verbose = 1) </code></pre> <p>Next, We use this trained model to predict unseen data (<code>x_test</code>) and save the prediction to disk so that we can later debug model performance issue.</p> <pre><code>import numpy as np import pandas as pd y_pred = model.predict(x_test) # get prediction y_pred = np.argmax(y_pred, axis=-1) # get class labels # save ground truth and prediction to local disk as CSV file oof = pd.DataFrame(dict( gt = y_test, pd = y_pred, )) oof.to_csv('oof.csv', index=False) oof.head(20) # compute how many prediction are accurate or match oof['check'] = np.where((oof['gt'] == oof['pd']), 'Match', 'No Match') oof.check.value_counts() Match 9492 No Match 508 Name: check, dtype: int64 </code></pre> <p>Like this, we can do various types of analysis from the model prediction and ground truth. However, in order to save probabilities (instead of actual labels), we can also do something like this: <a href="https://stackoverflow.com/a/62150997/9215780">reference</a>.</p> <pre><code>y_pred = model.predict(x_test) np.savetxt(&quot;y_pred.csv&quot;, y_pred , delimiter=&quot;,&quot;) </code></pre>
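<p>If the only goal is to avoid re-running <code>model.predict</code> after every restart, the raw prediction array can also be written to disk in NumPy's binary format, which preserves shape and dtype and avoids CSV parsing on reload. A minimal sketch:</p> <pre><code>import numpy as np

# run the (expensive) prediction once and cache it to disk
y_pred = model.predict(x_test)
np.save('y_pred.npy', y_pred)

# in a later session, skip predicting entirely and just reload the array
y_pred = np.load('y_pred.npy')
</code></pre>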
python|tensorflow|deep-learning|tensorflow2.0|prediction
1
376,502
67,230,437
Extracting ID and Relevant data from a csv dataset in python
<p>Making a program for my Final Year Project. Program takes the longitude and latitude coords from a .csv dataset and plots them on the map. Issue I am having is there is multiple ID's and this totals 445,000+ points.</p> <p>How would I refine it so the program can differentiate between the IDs?</p> <pre><code> def create_image(self, color, width=2): # Creates an image that contains the Map and the GPS record # color = color the GPS line is # width = width of the GPS line data = pd.read_csv(self.data_path, header=0) # sep will separate the latitude from the longitude data.info() self.result_image = Image.open(self.map_path, 'r') img_points = [] gps_data = tuple(zip(data['latitude'].values, data['longitude'].values)) for d in gps_data: x1, y1 = self.scale_to_img(d, (self.result_image.size[0], self.result_image.size[1])) img_points.append((x1, y1)) draw = ImageDraw.Draw(self.result_image) draw.line(img_points, fill=color, width=width) </code></pre> <p>I have also attached the <a href="https://github.com/JagerScouser/Alex_Hulme_FYP" rel="nofollow noreferrer">github project here</a> the program works but I am just trying to minimize how many users it plots at once.</p> <p>Thanks in advance.</p>
<p>To check for a specific ID you could create a filter. For this dataframe</p> <pre><code> long lat ID 0 10 5 test1 1 15 20 test2 </code></pre> <p>you could do the following:</p> <pre><code>id_filt = df_data['ID'] == 'test1' </code></pre> <p>This can be used to filter out every entry from the dataframe that has the ID 'test1'</p> <pre><code>df_data[id_filt] long lat ID 10 5 test1 </code></pre>
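<p>To limit how many users are plotted at once, one possible sketch is to group by the ID column and draw each track separately. The file name and column names below are assumptions for illustration, not taken from the linked project:</p> <pre><code>import pandas as pd

data = pd.read_csv('gps_records.csv')   # hypothetical file with ID, latitude, longitude columns

wanted = data['ID'].unique()[:5]        # e.g. plot only the first 5 IDs
for track_id, group in data[data['ID'].isin(wanted)].groupby('ID'):
    points = list(zip(group['latitude'], group['longitude']))
    # scale each point and draw a separate line for this ID, as in create_image()
</code></pre>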
python|pandas|numpy|csv
0
376,503
34,735,189
error reading date time from csv using pandas
<p>I am using Pandas to read and process csv file. My csv file have date/time column that looks like:</p> <pre><code>11:59:50:322 02 10 2015 -0400 EDT 11:11:55:051 16 10 2015 -0400 EDT 00:38:37:106 02 11 2015 -0500 EST 04:15:51:600 14 11 2015 -0500 EST 04:15:51:600 14 11 2015 -0500 EST 13:43:28:540 28 11 2015 -0500 EST 09:24:12:723 14 12 2015 -0500 EST 13:28:12:346 28 12 2015 -0500 EST </code></pre> <p>How can I read this using python/pandas, so far what I have is this:</p> <pre><code>pd.to_datetime(pd.Series(df['senseStartTime']),format='%H:%M:%S:%f %d %m %Y %z %Z') </code></pre> <p>But this is not working, though previously I was able to use the same code for another format (with a different format specifier). Any suggestions?</p>
<p>The issue you're having is likely because versions of Python before 3.2 (I think?) had a lot of trouble with time zones, so your format string might be screwing up on the %z and %Z parts. For example, in Python 2.7:</p> <pre><code>In [187]: import datetime In [188]: datetime.datetime.strptime('11:59:50:322 02 10 2015 -0400 EDT', '%H:%M:%S:%f %d %m %Y %z %Z') ValueError: 'z' is a bad directive in format '%H:%M:%S:%f %d %m %Y %z %Z' </code></pre> <p>You're using pd.to_datetime instead of datetime.datetime.strptime but the underlying issues are the same, you can refer to <a href="https://stackoverflow.com/questions/12281975/convert-timestamps-with-offset-to-datetime-obj-using-strptime">this thread</a> for help. What I would suggest is instead of using pd.to_datetime, do something like </p> <pre><code>In [191]: import dateutil In [192]: dateutil.parser.parse('11:59:50.322 02 10 2015 -0400') Out[192]: datetime.datetime(2015, 2, 10, 11, 59, 50, 322000, tzinfo=tzoffset(None, -14400)) </code></pre> <p>It should be pretty simple to chop off the timezone at the end (which is redundant since you have the offset), and change the ":" to "." between the seconds and microseconds.</p>
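<p>Applied to the whole column, that suggestion might look like the sketch below. The column name is taken from the question, the helper itself is hypothetical, and <code>dayfirst=True</code> is passed because the question's format is day-month-year:</p> <pre><code>import dateutil.parser
import pandas as pd

def parse_ts(s):
    s = s.rsplit(' ', 1)[0]                      # drop the trailing 'EDT'/'EST'
    hms, rest = s.split(' ', 1)
    hms = hms[::-1].replace(':', '.', 1)[::-1]   # turn the last ':' into '.' for the milliseconds
    return dateutil.parser.parse(hms + ' ' + rest, dayfirst=True)

df['senseStartTime'] = df['senseStartTime'].apply(parse_ts)
</code></pre>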
python|csv|datetime|pandas
1
376,504
34,544,210
Numba 3x slower than numpy
<p>We have a vectorial numpy <strong>get_pos_neg_bitwise</strong> function that use a mask=[132 20 192] and a df.shape of (500e3, 4) that we want to accelerate with numba.</p> <pre><code>from numba import jit import numpy as np from time import time def get_pos_neg_bitwise(df, mask): """ In [1]: print mask [132 20 192] In [1]: print df [[ 1 162 97 41] [ 0 136 135 171] ..., [ 0 245 30 73]] """ check = (np.bitwise_and(mask, df[:, 1:]) == mask).all(axis=1) pos = (df[:, 0] == 1) &amp; check neg = (df[:, 0] == 0) &amp; check pos = np.nonzero(pos)[0] neg = np.nonzero(neg)[0] return (pos, neg) </code></pre> <p>Using tips from @morningsun we made this numba version:</p> <pre><code>@jit(nopython=True) def numba_get_pos_neg_bitwise(df, mask): posneg = np.zeros((df.shape[0], 2)) for idx in range(df.shape[0]): vandmask = np.bitwise_and(df[idx, 1:], mask) # numba fail with # if np.all(vandmask == mask): vandm_equal_m = 1 for i, val in enumerate(vandmask): if val != mask[i]: vandm_equal_m = 0 break if vandm_equal_m == 1: if df[idx, 0] == 1: posneg[idx, 0] = 1 else: posneg[idx, 1] = 1 pos = list(np.nonzero(posneg[:, 0])[0]) neg = list(np.nonzero(posneg[:, 1])[0]) return (pos, neg) </code></pre> <p>But it still 3 times slower than the numpy one (~0.06s Vs ~0,02s).</p> <pre><code>if __name__ == '__main__': df = np.array(np.random.randint(256, size=(int(500e3), 4))) df[:, 0] = np.random.randint(2, size=(1, df.shape[0])) # set target to 0 or 1 mask = np.array([132, 20, 192]) start = time() pos, neg = get_pos_neg_bitwise(df, mask) msg = '==&gt; pos, neg made; p={}, n={} in [{:.4} s] numpy' print msg.format(len(pos), len(neg), time() - start) start = time() msg = '==&gt; pos, neg made; p={}, n={} in [{:.4} s] numba' pos, neg = numba_get_pos_neg_bitwise(df, mask) print msg.format(len(pos), len(neg), time() - start) start = time() pos, neg = numba_get_pos_neg_bitwise(df, mask) print msg.format(len(pos), len(neg), time() - start) </code></pre> <p>Am I missing something ?</p> <pre><code>In [1]: %run numba_test2.py ==&gt; pos, neg made; p=3852, n=3957 in [0.02306 s] numpy ==&gt; pos, neg made; p=3852, n=3957 in [0.3492 s] numba ==&gt; pos, neg made; p=3852, n=3957 in [0.06425 s] numba In [1]: </code></pre>
<p>Try moving the call to <code>np.bitwise_and</code> outside of the loop since numba can't do anything to speed it up:</p> <pre><code>@jit(nopython=True) def numba_get_pos_neg_bitwise(df, mask): posneg = np.zeros((df.shape[0], 2)) vandmask = np.bitwise_and(df[:, 1:], mask) for idx in range(df.shape[0]): # numba fail with # if np.all(vandmask == mask): vandm_equal_m = 1 for i, val in enumerate(vandmask[idx]): if val != mask[i]: vandm_equal_m = 0 break if vandm_equal_m == 1: if df[idx, 0] == 1: posneg[idx, 0] = 1 else: posneg[idx, 1] = 1 pos = np.nonzero(posneg[:, 0])[0] neg = np.nonzero(posneg[:, 1])[0] return (pos, neg) </code></pre> <p>Then I get timings of:</p> <pre><code>==&gt; pos, neg made; p=3920, n=4023 in [0.02352 s] numpy ==&gt; pos, neg made; p=3920, n=4023 in [0.2896 s] numba ==&gt; pos, neg made; p=3920, n=4023 in [0.01539 s] numba </code></pre> <p>So now numba is a bit faster than numpy.</p> <p>Also, it didn't make a huge difference, but in your original function you return numpy arrays, while in the numba version you were converting <code>pos</code> and <code>neg</code> to lists. </p> <p>In general though, I would guess that the function calls are dominated by numpy functions, which numba can't speed up, and the numpy version of the code is already using fast vectorization routines.</p> <p><strong>Update:</strong></p> <p>You can make it faster by removing the <code>enumerate</code> call and index directly into the array instead of grabbing a slice. Also splitting <code>pos</code> and <code>neg</code> into separate arrays helps to avoid slicing along a non-contiguous axis in memory:</p> <pre><code>@jit(nopython=True) def numba_get_pos_neg_bitwise(df, mask): pos = np.zeros(df.shape[0]) neg = np.zeros(df.shape[0]) vandmask = np.bitwise_and(df[:, 1:], mask) for idx in range(df.shape[0]): # numba fail with # if np.all(vandmask == mask): vandm_equal_m = 1 for i in xrange(vandmask.shape[1]): if vandmask[idx,i] != mask[i]: vandm_equal_m = 0 break if vandm_equal_m == 1: if df[idx, 0] == 1: pos[idx] = 1 else: neg[idx] = 1 pos = np.nonzero(pos)[0] neg = np.nonzero(neg)[0] return pos, neg </code></pre> <p>And timings in an ipython notebook:</p> <pre><code> %timeit pos1, neg1 = get_pos_neg_bitwise(df, mask) %timeit pos2, neg2 = numba_get_pos_neg_bitwise(df, mask) ​ 100 loops, best of 3: 18.2 ms per loop 100 loops, best of 3: 7.89 ms per loop </code></pre>
python|numpy|numba
11
376,505
34,762,239
How to reverse sub arrays in numpy?
<p>In my application, I have an array like this:</p> <pre><code>[10,20,30, 40,50,60, 70,80,90, 0.1,0.2,0.3, 0.4,0.5,0.6, 0.7,0.8,0.9, 1,2,3, 4,5,6, 7,8,9] </code></pre> <p>I want to reverse every 9 numbers so that my array looks like this:</p> <pre><code>[90,80,70, 60,50,40, 30,20,10, 0.9,0.8,0.7, 0.6,0.5,0.4, 0.3,0.2,0.1, 9,8,7, 6,5,4, 3,2,1] </code></pre> <p>Can someone tell me how to do this efficiently?</p>
<p>Perhaps something like:</p> <pre><code>n = a.shape[0] a.reshape((n//9,9))[:,::-1].reshape((n,)) array([ 90. , 80. , 70. , 60. , 50. , 40. , 30. , 20. , 10. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 9. , 8. , 7. , 6. , 5. , 4. , 3. , 2. , 1. ]) </code></pre> <p>But this relies on there being a multiple of 9 elements in your array. It leaves the original array unchanged. To alter the original <code>a</code> in-place you can use <code>resize</code>:</p> <pre><code>a.resize((n//9,9)) a[:,::-1] = a a.resize((n,)) </code></pre>
python|arrays|numpy
4
376,506
34,454,683
Find how many weeks passed in Python Pandas
<p>I have a Pandas dataframe with a datetime column. My problem is the following:</p> <p>I have a starting date of 04/08/2014. Since then, I count weeks in chunks of 16 weeks. So, from 04/08/2014 until 11/08/2014, it will be week 1. After 16 weeks, it will start again from week 1. I want to create a new column where it will find the week of the current chunk based on the datetime column.</p> <p>This is what I have done, but it seems that it doesn't work as it should.</p> <pre><code>startingweek = datetime.date(2014, 8, 4) df['WeekChunk'] = int(((df['DateTimeColumn'] - startingweek) / pd.offsets.Day(1))/7/16) </code></pre> <p>I calculated the number of days between the two days, then divided by 7 days to find the number of weeks and then divided by 16 to find the week of chunk.</p> <p>If I use a date of 23/12/2015, it should be week 9. But, the above code seems wrong.</p>
<p>If you need the week within a period of 16, you need the modulo, not division: change "/" to "%" and apply int() before taking the modulo.</p> <pre><code>df['WeekChunk'] = int(((df['DateTimeColumn'] - startingweek) / pd.offsets.Day(1))/7) % 16 </code></pre> <p>P.S. But the first week would be 0, not 1.</p>
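<p>For example, a quick check of the arithmetic for the 23/12/2015 case mentioned in the question, using plain <code>datetime</code> and adding 1 so the chunks are numbered from week 1:</p> <pre><code>import datetime

startingweek = datetime.date(2014, 8, 4)
d = datetime.date(2015, 12, 23)

weeks = (d - startingweek).days // 7   # 72 full weeks since the start
print(weeks % 16 + 1)                  # prints 9, i.e. week 9 of the current 16-week chunk
</code></pre>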
python|datetime|pandas
1
376,507
34,819,979
Why does Pandas give me only one SettingWithCopyWarning?
<p>I have the following code:</p> <pre><code>import pandas as pd import numpy as np mydf = pd.DataFrame({'UID':[1,2,3], 'Price':[10,20,30], 'Shipped':[2,4,6]}) grouped = mydf.groupby('UID').aggregate(np.sum) # Call 1 mydf['Price'].loc[:] = np.round(grouped['Price'], 2) # Call 2 mydf['Shipped'].loc[:] = grouped['Shipped'] </code></pre> <p>The line that I have preceded with <code>Call 1</code> executes with no errors or warnings. The line that I have preceded with <code>Call 2</code> results in a <code>SettingWithCopyWarning</code> error. Why does the one result in the error and not the other? What can I do in my second call in order to get rid of this error?</p> <p>My code executes fine, I'm just tired on seeing this one lone error every time I run my tests.</p>
<p>Writing through the underlying NumPy array bypasses pandas' chained-assignment detection, so no SettingWithCopyWarning is raised:</p> <pre><code>mydf['Shipped'].values[:] = grouped['Shipped'] </code></pre>
python|pandas|warnings
1
376,508
34,768,835
Count of incremental duplicates over dates in Pandas
<p>I have a dataframe in the format below:</p> <pre><code> day value 1/1/15 aa 2/1/15 bb 3/1/15 bb 3/1/15 cc 4/1/15 ee 4/1/15 ff 4/1/15 aa </code></pre> <p>I would like to first group by 'day' and then count the unique values in 'value', adding up the count incrementally for each subsequent day.</p> <p>The result would look like:</p> <pre><code> day value 1/1/15 1 2/1/15 2 3/1/15 3 4/1/15 5 </code></pre> <p>The solution would ideally be in pandas. I don't know where to start; the only idea that I have is to count per group and then use defaultdict to sum up, but how do I do it incrementally following the order of the dates?</p> <p>Thanks! Vincenzo </p>
<p>The following works (on randomly generated example data):</p> <pre><code>import numpy as np import pandas as pd from datetime import date from string import ascii_lowercase values = [l+l for l in ascii_lowercase[:8]] dates = pd.date_range(date(2016, 1, 1), date(2016, 3, 30)) df = pd.DataFrame(data=np.random.choice(values, 500), index=np.random.choice(dates, 500), columns=['value']) df.sort_index().head(25) value 2016-01-01 bb 2016-01-01 dd 2016-01-01 ff 2016-01-02 hh 2016-01-02 aa 2016-01-02 ee 2016-01-02 aa 2016-01-02 gg 2016-01-02 hh 2016-01-02 aa 2016-01-03 cc 2016-01-03 ee print(df.groupby(level=0)['value'].apply(lambda x: x.nunique()).cumsum()) 2016-01-01 3 2016-01-02 7 2016-01-03 9 2016-01-04 13 2016-01-05 18 2016-01-06 20 </code></pre>
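<p>If the goal is the cumulative number of distinct values seen up to and including each day (matching the expected output in the question), one possible sketch using the question's own data and <code>drop_duplicates</code>:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'day': ['1/1/15', '2/1/15', '3/1/15', '3/1/15', '4/1/15', '4/1/15', '4/1/15'],
                   'value': ['aa', 'bb', 'bb', 'cc', 'ee', 'ff', 'aa']})

# keep only the first occurrence of each value, count the new values per day, then accumulate
out = (df.drop_duplicates('value')
         .groupby('day')['value'].size()
         .cumsum())
print(out)
# 1/1/15    1
# 2/1/15    2
# 3/1/15    3
# 4/1/15    5
</code></pre> <p>Days that introduce no new values would be missing from this result and would need to be reindexed and forward-filled if required.</p>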
python|datetime|pandas|duplicates|dataframe
0
376,509
34,803,955
Remove minima from inner dimension in NumPy 2D array
<p>Hello I'm new to python and vectorization.</p> <p>Say you have a 5x3 numpy array like this:</p> <pre><code>array([[ -1.262, -4.034, 2.422], [ 13.849, 14.377, 4.951], [ 3.203, 10.209, -2.865], [ 3.618, -3.51 , -7.059], [ -0.098, -5.012, 6.389]]) </code></pre> <p>and you want to end up with a new 5x2 matrix with minima removed from each inner dimension like this:</p> <pre><code>array([[ -1.262, 2.422], [ 13.849, 14.377], [ 3.203, 10.209], [ 3.618, -3.51 ], [ -0.098, 6.389]]) </code></pre> <p>What is the best way to achieve that? I suppose it is with vectorization?</p> <p>Thank you!</p>
<p>I would think there's a relatively straightforward function for this, but it may not be around in numpy; pandas could probably do this more easily.</p> <p>With numpy, this is one way to do this:</p> <pre><code>In [56]: a = np.array([[ -1.262, -4.034, 2.422], [ 13.849, 14.377, 4.951], [ 3.203, 10.209, -2.865], [ 3.618, -3.51 , -7.059], [ -0.098, -5.012, 6.389]]) In [57]: m = np.argmin(a, axis=1) In [58]: indices = np.ones(shape=a.shape, dtype=bool) In [59]: indices[np.arange(5), m] = False In [60]: a[indices].reshape((-1, a.shape[1]-1)) Out[60]: array([[ -1.262, 2.422], [ 13.849, 14.377], [ 3.203, 10.209], [ 3.618, -3.51 ], [ -0.098, 6.389]]) </code></pre> <p>The step with the boolean indices is to "invert" the indices returned from <code>np.argmin</code>, since the latter returns integer indices, not boolean indices.</p>
arrays|python-3.x|numpy|vectorization
0
376,510
34,808,668
Find only rows that contains "NaN" in a specific column?
<p>How can i find only rows that contains "NaN" in a specific column ?</p> <p>I tried this specific code to merge (left join) two dataFrame and find <strong>ONLY</strong> rows that contains "NaN" in "matricule"'s column. </p> <pre><code>ln []: import pandas as pd import datetime as dt import numpy as np from datetime import datetime from datetime import timedelta from sqlalchemy import create_engine ln []: vehicule_xls = pd.read_excel("vehicule.xls") vehicule_xls ln []: vehicule_sql = pd.read_sql_query('select * from vehicule ', con=engine) vehicule_sql ln []: vehicules = pd.merge(vehicule_xls, vehicule_sql, left_on='Immat', right_on="matricule", how='left', indicator='Indicator') ln []: vehicules[vehicules['matricule'].isnull()] </code></pre> <p>But i got this error on the last command.</p> <pre><code>TypeError: data type not understood </code></pre>
<p>You can obtain a binary vector indicating whether a specific row has a <code>nan</code> in the column of interest as follows</p> <pre><code>fltr = vehicules['matricule'].isnull() </code></pre> <p>You can then extract the subset of interest with</p> <pre><code>subset = vehicules[fltr] </code></pre>
python|numpy|pandas|jupyter|jupyter-notebook
0
376,511
34,552,304
Python given numpy array of weights, find indices which split array so that sum of each split is less than value
<p>I have an 1D array of weights, w and an array of capacities c of the same shape as w. I need to find the smallest array of indices such that when w is split by these indices, the cumsums of split arrays less than the corresponding capacities in c. Given an array of weights and capacities as follows:</p> <pre><code>w = [1,2,3,4,5,6]; c = [3, 12, 7, 6, 12] </code></pre> <p>I need to find the smallest number of indices 'i' so that the cumsums of split arrays less than the corresponding capacities in c. In this case, </p> <pre><code>i = [2, 3, 5] </code></pre> <p>The cumsums of split arrays of w formed by i are </p> <pre><code>[1, 1+2, 1+2+3, 4, 5, 5+6] </code></pre> <p>each element is clearly less than c. The cum sums are calculated as given <a href="https://stackoverflow.com/questions/34525118/find-cumsum-of-subarrays-split-by-indices-for-numpy-array-efficiently">here</a>.</p> <p>An approximation of the required indices is also fine. But the cumsums should be strictly less than corresponding elements in c.</p> <p>w is a very large array (size 100000 elements). I need a vectorized solution for it to be efficient. As said before, approximations are fine as long as the cumsums are less than c</p> <p>Here is what I've tried. I've assumed that the entire c matrix is just one element repeated multiple times (I'm trying to solve a simpler case first and then add complexities). In this case, I just have to ensure that each split array has to have sum less than a given value (somewhat similar to bin packing). I find the indices as follows.</p> <pre><code>weights = np.random.random_integers(1, 20, size=(20)) capacity = 100 # Find cumulative sums and divide by capacity. This gives an approximation of indices. All elements in first # split array would have values between 0 and 1. Those in second array would have elements between 1 and 2, # and so on. When ever the integer part changes, a new split array would be formed. Find indices from this. # After taking the ceiling value of all elements, elements between 0 and 1 would become 1, elements between # 1 and 2 become 2 and so on. The place where the elements change give the indices. Take diff to find the # boundary (of change). indices = np.diff(np.ceil(np.cumsum(weights[i]) / self.sleigh_capacity)) # 0s represent repeated elements, 1s represent values where values change. Find the indices indices = np.where(indices != 0)[0] + 1 </code></pre> <p>This gives me the indices. One thing to note is that this might give me wrong indices, because cumulative sums are calculated from the beginning. That is, cumsum of [1,2,3,2,3] is [1,2,6,8,9]. Now if my capacity is 5. dividing cumsum by 5 and taking ceil gives me [1, 1, 2, 2, 2] which would correspond to splitting indices of [1, 4]. But the actual splitting indices are [1, 3, 4]. I'm handling this by reducing the capacity. That is, if my actual capacity is 5, I'd take it as 4 and then do the above (The value 4 is gotten by pure guess. To be on the safer side I might decrease the capacity even further).</p> <p>But I'm not able to extend this method to the case where the capacities are varying. That is, if I have a capacity array of shape (1,5) then I would have to use a different approach, as this approach wouldn't work.</p>
<pre><code>w = [1,2,3,1,6,6]; c = [1,3,5, 1, 6, 12] </code></pre> <p>The only solution to this is</p> <pre><code>i=[2,3,4,5] </code></pre> <p>The greedy solution (to my understanding) is to take until you cannot take.</p> <p>It starts off with a 2, giving the segment [1, 2], whose cumulative sums [1, 1+2] fit within the first entries of c. However, if the next split is at 4 (as the greedy solution leads to), you get into issues since nothing can satisfy the 1. We should instead have split at 2 and 3.</p> <p>I suggested using backtracking to look back when this happens, but the running time could spiral out of control. The limit of 100k elements seems to suggest a linear or n log n solution at worst. I have ideas of how to do this with dynamic programming, but am still figuring out some specifics. Will update hopefully, or discard this answer after a while. :)</p>
python|arrays|numpy|split|indices
1
376,512
60,170,747
pandas get index values n positions ahead of selection
<p>I have a dataframe with a datetime index. I also have a list of specific dates which I am interested in looking at in my dataframe. I would like to get the rows 'n' positions ahead of my list of specific dates. Say for the example n=5. Here is my code:</p> <pre><code>import pandas as pd # generate an example df output = pd.DataFrame() d = pd.date_range(start='1/1/2000', end='1/1/2006', freq='D') output['Date'] = d output['Value'] = 1 output = output[output['Date'].dt.dayofweek &lt; 5].reset_index(drop=True) # remove weekends output = output.set_index('Date') # dates of interest date_list = pd.to_datetime(['09/05/2002', '15/07/2004', '21/03/2005'], format='%d/%m/%Y') # i can pull out the dates of interest, but I really want the dates '5' positions ahead selection = output.iloc[output.index.isin(date_list)] print(selection) </code></pre> <p>Please note, '5' positions ahead is not the same as timedelta(days=5)</p> <p>I know this can be solved by iteration, something like:</p> <pre><code>for i, row in output.iterrows(): for i2 in date_list: if i == i2: print(i, output.loc[i2:].iloc[5]) </code></pre> <p>But I am looking to do this ideally with a vectorized one liner. Any help would be much appreciated?</p> <p>Many thanks in advance!</p>
<p>You could use <code>flatnonzero</code> to get the indices, add <code>5</code> to them and index:</p> <pre><code>import numpy as np output.iloc[np.flatnonzero(output.index[:-5].isin(date_list)) + 5] Value Date 2002-05-16 1 2004-07-22 1 2005-03-28 1 </code></pre> <hr> <p>Or we also have pandas' <code>nonzero</code>:</p> <pre><code>output.iloc[output.index[:-5].isin(date_list).nonzero()[0]+5] Value Date 2004-07-08 1 2005-03-14 1 </code></pre>
python|pandas
4
376,513
60,014,502
How can I convert some columns of a pandas dataframe to categorical?
<p>This is my code:</p> <pre><code>l1 = list(train) for i in (0,len(l1)): if train[l1[i]].dtypes == object: train[l1[i]] = pd.Categorical(train[l1[i]]) train.info(verbose=True) </code></pre> <p>But this just makes the first variable change and nothing else. The rest of the 62 object variables aren't converted to categorical. </p> <p>How do you do it?</p>
<p>Get all object columns by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>DataFrame.select_dtypes</code></a>, convert to dict and pass to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html" rel="nofollow noreferrer"><code>DataFrame.astype</code></a>:</p> <pre><code>train = train.astype(dict.fromkeys(train.select_dtypes('object').columns, 'category')) </code></pre>
python|pandas
1
376,514
60,012,688
how do I classify or regroup dataset based on time variation in python
<p>I need to assign a number to each value, grouped hourly. How can I add a new column where each cell is grouped by hour: for instance, all the transactions within 00:00:00 to 00:59:59 are filled with 1, transactions within 01:00:00 to 01:59:59 are filled with 2, and so on until 23:00:00 to 23:59:59 are filled with 24.</p> <pre><code>Time_duration = df['period'] print (Time_duration) </code></pre> <pre><code>0 23:59:56 1 23:59:56 2 23:59:55 3 23:59:53 4 23:59:52 ... 74187 00:00:18 74188 00:00:09 74189 00:00:08 74190 00:00:03 74191 00:00:02 </code></pre> <p>This is the result I desire:</p> <pre><code>0 23:59:56 24 1 23:59:56 24 2 23:59:55 24 3 23:59:53 24 4 23:59:52 24 ... 74187 00:00:18 1 74188 00:00:09 1 74189 00:00:08 1 74190 00:00:03 1 74191 00:00:02 1 </code></pre>
<p>You can use <a href="https://docs.microsoft.com/fr-fr/dotnet/standard/base-types/regular-expression-language-quick-reference" rel="nofollow noreferrer">regular expressions</a> and <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer">str.extract</a></p> <pre><code>import pandas as pd pattern= r'^(\d{1,2}):' #capture the digits of the hour df['hour']=df['period'].str.extract(pattern).astype('int') + 1 # cast it as int so that you can add 1 </code></pre>
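<p>If the <code>period</code> strings parse cleanly as durations, a regex-free alternative (a sketch, assuming the values always look like <code>HH:MM:SS</code>) is:</p> <pre><code>import pandas as pd

# hours component of each duration, shifted so that 00:xx maps to 1 and 23:xx maps to 24
df['hour'] = pd.to_timedelta(df['period']).dt.components.hours + 1
</code></pre>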
python|python-3.x|pandas|numpy|datetime
0
376,515
60,273,966
Replacing elements in numpy array
<pre><code>import numpy as np a = np.array([0.75, 0.5, 0.21]) one_list = [1] * 3 L_vec = np.diag(one_list) L_vec[1,0] = a[0] print(L_vec) </code></pre> <p>Expected Result:</p> <pre><code>[[1,0,0],[0.75,1,0],[0,0,1]] </code></pre> <p>Actual Result:</p> <pre><code>[[1 0 0] [0 1 0] [0 0 1]] </code></pre> <p>this is the result I got. I have no idea why. </p>
<p>By default dtype for <code>np.diag</code> is <code>int</code></p> <p>convert it into <code>float</code> so your float values from array <code>a</code> can replace older value</p> <p><code>L_vec = L_vec.astype(float)</code></p> <p>Use below code</p> <pre><code>a = np.array([0.75, 0.5, 0.21]) one_list = [1]*3 L_vec = np.diag(one_list) L_vec = L_vec.astype(float) L_vec[1,0] = a[0] print(L_vec) </code></pre> <p>Output:</p> <pre><code>[[1. 0. 0. ] [0.75 1. 0. ] [0. 0. 1. ]] </code></pre> <p>You can check datatype using <code>print(L_vec.dtype)</code></p>
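<p>Alternatively, the matrix can be built with a floating-point dtype from the start, for example with <code>np.eye</code>, which returns float64 by default:</p> <pre><code>import numpy as np

a = np.array([0.75, 0.5, 0.21])
L_vec = np.eye(3)        # 3x3 identity matrix, dtype float64
L_vec[1, 0] = a[0]
print(L_vec)
</code></pre>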
python|python-3.x|numpy
1
376,516
60,099,785
TensorFlow Lite 2.0 advanced GPU using on Android with C++
<p>I am new in TensorFlow. I built TensorFlow Lite libraries from sources. I try to use TensorFlow for face recognition. This one a part of my project. And I have to use GPU memory for input/output e.g. input data: opengl texture, output data: opengl texture. Unfortunately, this information is outdated: <a href="https://www.tensorflow.org/lite/performance/gpu_advanced" rel="nofollow noreferrer">https://www.tensorflow.org/lite/performance/gpu_advanced</a>. I tried to use gpu::gl::InferenceBuilder for building gpu::gl::InferenceRunner. And I have problem. I don’t understand how I can get the model in GraphFloat32 (Model>) format and TfLiteContext.</p> <p>Example of my experemental code:</p> <pre><code>using namespace tflite::gpu; using namespace tflite::gpu::gl; const TfLiteGpuDelegateOptionsV2 options = { .inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED, .is_precision_loss_allowed = 1 // FP16 }; tfGPUDelegate = TfLiteGpuDelegateV2Create(&amp;options); if (interpreter-&gt;ModifyGraphWithDelegate(tfGPUDelegate) != kTfLiteOk) { __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "GPU Delegate hasn't been created"); return ; } else { __android_log_print(ANDROID_LOG_INFO, "Tensorflow", "GPU Delegate has been created"); } InferenceEnvironmentOptions envOption; InferenceEnvironmentProperties properties; auto envStatus = NewInferenceEnvironment(envOption, &amp;env, &amp;properties); if (envStatus.ok()){ __android_log_print(ANDROID_LOG_INFO, "Tensorflow", "Inference environment has been created"); } else { __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Inference environment hasn't been created"); __android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Message: %s", envStatus.error_message().c_str()); } InferenceOptions builderOptions; builderOptions.usage = InferenceUsage::SUSTAINED_SPEED; builderOptions.priority1 = InferencePriority::MIN_LATENCY; builderOptions.priority2 = InferencePriority::AUTO; builderOptions.priority3 = InferencePriority::AUTO; //The last part requires a model // GraphFloat32* graph; // TfLiteContext* tfLiteContex; // // auto buildStatus = BuildModel(tfLiteContex, delegate_params, &amp;graph); // if (buildStatus.ok()){} </code></pre>
<p>You may look function BuildFromFlatBuffer (<a href="https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc</a>). It creates Interpreter and graph from it.</p> <p>Also Mediapipe uses InferenceRunner you may find for useful in files: <a href="https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc" rel="nofollow noreferrer">https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc</a> <a href="https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc" rel="nofollow noreferrer">https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc</a></p>
android|c++|tensorflow2.0
0
376,517
59,945,409
Calculate Date Columns in Dataframe
<p><a href="https://i.stack.imgur.com/gGAsL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gGAsL.png" alt="enter image description here"></a></p> <p>I try to calculate these columns. I need two new columns, </p> <ul> <li>one named "Starttime" = Date + Time </li> <li>one named "Endtime" = Date + Time + Timedelta</li> </ul> <p>I need them for a gantt diagram in python, the problem ist, that time could appear multiple times. Any Idea how to solve this?</p> <p>I red them out from a Dataframe and got this error here</p> <pre><code>unsupported operand type(s) for +: 'datetime.date' and 'datetime.time' </code></pre>
<p>First convert <code>date</code>s and <code>time</code>s to strings by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer"><code>Series.astype</code></a> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>, for second column use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>:</p> <pre><code>df = pd.DataFrame({'Date': pd.to_datetime(['06.12.2019','06.12.2019']).date, 'Time': pd.to_datetime(['17:20:10','17:20:31']).time, 'TimeDelta':['00:00:21','14:31:09']}) df['Starttime'] = pd.to_datetime(df['Date'].astype(str) + ' ' + df['Time'].astype(str)) df['Endtime'] = df['Starttime'] + pd.to_timedelta(df['TimeDelta']) print (df) Date Time TimeDelta Starttime Endtime 0 2019-06-12 17:20:10 00:00:21 2019-06-12 17:20:10 2019-06-12 17:20:31 1 2019-06-12 17:20:31 14:31:09 2019-06-12 17:20:31 2019-06-13 07:51:40 </code></pre>
python|pandas|dataframe|datetime|timestamp
1
376,518
59,934,963
keras - Error when checking target with embedding layer
<p>I'm trying to run keras model as follows: </p> <pre><code>model = Sequential() model.add(Dense(10, activation='relu',input_shape=(286,))) model.add(Dense(1, activation='softmax',input_shape=(324827, 286))) </code></pre> <p>This code works, but if I'm trying to add an embedding layer:</p> <pre><code>model = Sequential() model.add(Embedding(286,64, input_shape=(286,))) model.add(Dense(10, activation='relu',input_shape=(286,))) model.add(Dense(1, activation='softmax',input_shape=(324827, 286))) </code></pre> <p>I'm getting the following error : </p> <pre><code>ValueError: Error when checking target: expected dense_2 to have 3 dimensions, but got array with shape (324827, 1) </code></pre> <p>My data have 286 features and 324827 rows. I'm probably doing something wrong with the shape definitions, can you tell me what it is ? Thanks</p>
<p>You don't need to provide the input_shape in the Dense layers, only in the first layer; the shapes of the following layers will be computed automatically:</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import Embedding, Dense from tensorflow.keras.models import Sequential # 286 features and 324827 rows (324827, 286) model = Sequential() model.add(Embedding(286,64, input_shape=(286,))) model.add(Dense(10, activation='relu')) model.add(Dense(1, activation='softmax')) model.compile(loss='mse', optimizer='adam') model.summary() </code></pre> <p>returns:</p> <pre><code>Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_2 (Embedding) (None, 286, 64) 18304 _________________________________________________________________ dense_2 (Dense) (None, 286, 10) 650 _________________________________________________________________ dense_3 (Dense) (None, 286, 1) 11 ================================================================= Total params: 18,965 Trainable params: 18,965 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p>I hope this is what you're looking for.</p>
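<p>Note that with a 2-D target of shape <code>(324827, 1)</code>, the 3-D output above would still not line up sample-for-sample. If the intent is one prediction per row, one possible variant (a sketch, not necessarily the intended architecture) is to flatten the embedding before the final layers:</p> <pre><code>from tensorflow.keras.layers import Embedding, Flatten, Dense
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(Embedding(286, 64, input_shape=(286,)))
model.add(Flatten())                       # shape (None, 286*64)
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # one output per sample, shape (None, 1)
model.compile(loss='mse', optimizer='adam')
</code></pre>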
python|tensorflow|keras|embedding
0
376,519
59,986,396
Groupby and apply different function to each column with first and last
<p>I am trying to group the columns then apply different functions to each column. I referred <a href="https://stackoverflow.com/questions/58326349/apply-different-functions-to-different-columns-with-a-singe-pandas-groupby-comma">to the answer here</a> and my code is as shown below</p> <pre><code>def f(x): d = {} d['a'] = x['a'].max() d['b'] = x['b'].first() d['c'] = x['c'].last() return pd.Series(d, index=['a', 'b', 'c']) require_data = required_data.groupby(['S','id', 'lane', 'timestamp','E']).apply(f) </code></pre> <p>And I am getting the following error because of first function</p> <pre><code>TypeError: first() missing 1 required positional argument: 'offset' </code></pre> <p>But I can run groupby with first fine</p> <pre><code>require_data = required_data.groupby(['S','id', 'lane', 'timestamp','E']).first() </code></pre> <p>What is the cause of the error</p>
<p>Better here is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a>, where it is possible to pass column names together with the aggregate methods <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.last.html" rel="nofollow noreferrer"><code>GroupBy.last</code></a>:</p> <pre><code>require_data = (required_data.groupby(['S','id', 'lane', 'timestamp','E']) .agg({'a':'max', 'b':'first', 'c':'last'})) </code></pre> <p>If you want to use your own custom function, it's necessary to select by position, with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iat.html" rel="nofollow noreferrer"><code>Series.iat</code></a> or with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>Series.iloc</code></a>, but as @Erfan mentioned:</p> <blockquote> <p>Using your own custom function is highly discouraged, because of efficiency.</p> </blockquote> <pre><code>def f(x): d = {} d['a'] = x['a'].max() d['b'] = x['b'].iat[0] d['c'] = x['c'].iat[-1] return pd.Series(d, index=['a', 'b', 'c']) </code></pre>
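<p>As a side note, with pandas 0.25 or newer the same aggregation can also be written with named aggregation, which keeps the output column names explicit. A small sketch:</p> <pre><code>require_data = (required_data.groupby(['S', 'id', 'lane', 'timestamp', 'E'])
                             .agg(a=('a', 'max'),
                                  b=('b', 'first'),
                                  c=('c', 'last')))
</code></pre>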
python|pandas
4
376,520
60,308,058
How can I get rid of this error which I am getting with a code to find nearest neighbors?
<p>I have written the following function which takes data, number of peers to find, and index to find the top N nearest neighbors :</p> <pre><code>def fit_nearest_neighbors(data, number_of_peers, index): peer_data = FindPeers.filt_data(data) peer_data_array = np.array(peer_data) knn = NearestNeighbors(algorithm = 'auto', n_neighbors = number_of_peers, metric = 'minkowski', p = 2) knn.fit(peer_data_array) return knn.kneighbors(peer_data_array[index], return_distance = False) </code></pre> <p>But I am getting the following error with the last line of code after return which says :</p> <pre><code>ValueError: Expected 2D array, got 1D array instead: array=[2.86839521e-01 7.63588709e-01 1.00000000e+00 1.73483898e-01 0.00000000e+00 1.25068828e-02 1.66424454e-17 4.38357126e-01 7.55219585e-03 6.03820534e-02 2.72387749e-01]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. </code></pre> <p>The error array that It prints is basically the datapoint present at that particular <code>index</code>. I understand the error but I don't understand how do I make it right. Any type of leads/answers would be helpful.</p>
<p>The error message is clear: <code>peer_data_array[index]</code> is a single sample, i.e. a 1D array, while <code>kneighbors</code> expects a 2D array, so that single row has to be reshaped to shape (1, n_features).</p> <p>You can accomplish this with</p> <pre><code>return knn.kneighbors(peer_data_array[index].reshape(1, -1), return_distance = False) </code></pre>
python|numpy|machine-learning|data-science|knn
0
376,521
60,207,251
Trace Operation in Python not Forming Correct Array Shape
<p>I'm looking to find the trace of matrices (using Numpy) in a function I have defined in Python. The input parameters <code>tensor</code> and <code>tensor_transpose</code> are both matrices of size (N,2,2) and are extracted from a VTK file (N is a rather large number and varies depending on the file). So both <code>A</code> and <code>B</code> are arrays of (N,2,2). By taking the trace of each array (sum of the diagonal terms), a single value for each array should be returned. So <code>np.trace(A)**3)-(np.trace(B)**3</code> should be a single numerical value, with the array being of shape (N,1). My output though does not show this, with the returned shape being <code>(2,)</code>.</p> <p>Can anyone explain why? Is it an issue with the <code>trace</code> function and is there a solution?</p> <pre><code>import numpy as np A=np.array(0.5*(tensor-tensor_transpose)) B=np.array(0.5*(tensor+tensor_transpose)) C=np.array(0.5*((np.trace(A)**3)-(np.trace(B)**3))) print(A.shape) print(B.shape) print(C.shape) #Output #(60600, 2, 2) #(60600, 2, 2) #(2,) </code></pre>
<p>Maybe you need to specify the axes:</p> <pre class="lang-py prettyprint-override"><code>np.trace(A, axis1=1, axis2=2) </code></pre>
python|arrays|numpy|matrix|vtk
0
376,522
60,299,516
Python count column values by other column
<p>I have the table such that: </p> <pre><code>no Order materials status 1 1000 100 available 2 1000 200 not available 3 1001 500 Feb-20 4 1002 400 available 5 1002 300 not available 6 1002 600 available 7 1002 900 available 8 1003 700 available 9 1003 800 available </code></pre> <p>And I'd like to have columns that shows:</p> <ol> <li>Total number of materials per Order</li> <li>Total number of materials with their status per Order</li> </ol> <p>I was able to get the total number of materials per Order:</p> <pre><code>ds.groupby('Order').count() ds['Total Materials'] = ds.groupby('Order')['Order'].transform('count') </code></pre> <p>But not sure how to add a new columns based on conditions where status equals to each status, So that it will look like this: </p> <pre><code>no Order materials status Total Materials available not available Feb-20 1 1000 100 available 2 1 1 0 2 1000 200 not available 2 1 1 0 3 1001 500 Feb-20 1 0 0 1 4 1002 400 available 4 3 1 0 5 1002 300 not available 4 3 1 0 6 1002 600 available 4 3 1 0 7 1002 900 available 4 3 1 0 8 1003 700 available 2 2 0 0 9 1003 800 available 2 2 0 0 </code></pre> <p>Basically trying to figure out how to get the rest of the columns. Would appreciate your help!</p>
<p>I would do a combination of <code>pivot table</code> and <code>merge</code>:</p> <pre><code>ds_final = ds.merge(ds.pivot_table(values='Total Materials',index=['Order'],columns='status',aggfunc='count',fill_value=0).reset_index(),how='left',on='Order') print(ds_final) </code></pre> <p>Output:</p> <pre><code> no Order materiales status Total Materials A Feb-20 not A 0 1 1000 100 A 2 1 0 1 1 2 1000 200 not A 2 1 0 1 2 3 1001 500 Feb-20 1 0 1 0 3 4 1002 400 A 4 3 0 1 4 5 1002 300 not A 4 3 0 1 5 6 1002 600 A 4 3 0 1 6 7 1002 900 A 4 3 0 1 7 8 1003 700 A 2 2 0 0 8 9 1003 800 A 2 2 0 0 </code></pre> <h3>Some extra explanation:</h3> <p>The pivot table helps to generate the columns from the <code>status</code> column. So here's the output of the pivot_table alone:</p> <pre><code>status Order A Feb-20 not A 0 1000 1 0 1 1 1001 0 1 0 2 1002 3 0 1 3 1003 2 0 0 </code></pre> <p>Finally with this output, we can use <code>merge</code> or <code>concat</code> to generate the desired output.</p>
python|pandas
2
376,523
60,247,871
Duplicate tensor name worked while it is not supposed to
<pre><code>def convolutional_block(X, f, filters, stage, block, s = 2): conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' F1, F2, F3 = filters X_shortcut = X X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', padding='valid', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X) X = Activation('relu')(X) X = Conv2D(F2, (f, f), strides = (1,1), name = conv_name_base + '2b', padding='same', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) X = Activation('relu')(X) X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', padding='valid', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) # &lt;&lt;&lt; X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) also works! X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), name = conv_name_base + '1', padding='valid', kernel_initializer = glorot_uniform(seed=0))(X_shortcut) X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut) X = Add()([X, X_shortcut]) X = Activation('relu')(X) return X tf.reset_default_graph() with tf.Session() as test: np.random.seed(1) A_prev = tf.placeholder("float", [3, 4, 4, 6]) X = np.random.randn(3, 4, 4, 6) A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a') test.run(tf.global_variables_initializer()) out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0}) print("out = " + str(out[0][1][1][0])) print(tf.global_variables()) </code></pre> <p>I am amazed by the fact that this line <code>X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)</code> can be replicable with <code>X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)</code> and the code gives no error. I expected it to give the duplicated name error. Could anyone explain why it works. Thank you.</p>
<p>Tensorflow/Keras has its own internal variable name resolution. So when you name two variables with the same name (i.e. <code>x_b</code> and <code>x_b</code>), the latter will be renamed to <code>x_b_1</code>. </p> <p>For example, in your code, here are the variables with name <code>2b</code> in it. This is after changing the lines,</p> <pre><code>X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', padding='valid', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X) </code></pre> <p>to</p> <pre><code>X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2b', padding='valid', kernel_initializer = glorot_uniform(seed=0))(X) X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) # &lt;&lt;&lt; X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X) also works! </code></pre> <p>As you can see ther are a two sets of variables (postfixed with <code>2b</code> and <code>2b_1</code>)</p> <pre><code>&lt;tf.Variable 'res1a_branch2b/kernel:0' shape=(2, 2, 2, 4) dtype=float32&gt; &lt;tf.Variable 'res1a_branch2b/bias:0' shape=(4,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b/gamma:0' shape=(4,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b/beta:0' shape=(4,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b/moving_mean:0' shape=(4,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b/moving_variance:0' shape=(4,) dtype=float32&gt; &lt;tf.Variable 'res1a_branch2b_1/kernel:0' shape=(1, 1, 4, 6) dtype=float32&gt; &lt;tf.Variable 'res1a_branch2b_1/bias:0' shape=(6,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b_1/gamma:0' shape=(6,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b_1/beta:0' shape=(6,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b_1/moving_mean:0' shape=(6,) dtype=float32&gt; &lt;tf.Variable 'bn1a_branch2b_1/moving_variance:0' shape=(6,) dtype=float32&gt; </code></pre>
python|tensorflow|keras
0
376,524
60,089,495
How does the frozen model for speech recognition has been created?
<p>After following the steps train.py and freeze.py from "<a href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/sequences/audio_recognition.md" rel="nofollow noreferrer">https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/sequences/audio_recognition.md</a>", The inputs of my frozen model(<a href="https://imgur.com/a/JtNVkHw" rel="nofollow noreferrer">https://imgur.com/a/JtNVkHw</a>) is quite different from the actual frozen model conv_actions_frozen.pb(<a href="https://imgur.com/a/KJXExbV" rel="nofollow noreferrer">https://imgur.com/a/KJXExbV</a>). What changes should I make to get the original frozen model for speech recognition?. PS: My Python version is 3.7.3 and Tensor flow version is 2.1.0 </p>
<p>You are following a guide intended for TensorFlow 1.x while using TensorFlow 2.1. Try using TensorFlow 1.15.</p>
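<p>For example, in an environment with a compatible interpreter (TensorFlow 1.15 supports Python up to 3.7), something along these lines:</p> <pre><code>pip install tensorflow==1.15
</code></pre>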
python|tensorflow
0
376,525
60,317,953
Python pandas - Divide a column in dataframe1 with a column in dataframe2 based on another column in dataframe1
<p>Dateframe1</p> <pre><code>df = pd.DataFrame(SQL_Query, columns=[ X,Y . . . . Currency,Amount] Index X Y ... Currency Amount 0 74 1 ... USD 100 1 75 1 ... EUR 5000 2 76 1 ... AUD 300 3 79 1 ... EUR 750 [1411137 rows x 162 columns] </code></pre> <p>A large SQL query so I avoid writing out all columns. </p> <pre><code>df1=pd.read_excel(r`FX_EUR.xlsx) Index Currency FX 0 AUD 1.61350 1 BGN 1.95580 2 BRL 4.51450 3 CAD 1.45830 4 CHF 1.09280 </code></pre> <p>So what would I like to achieve is to make a lookup in DF1 to see which Currency is used then divide the "DF1 Amount" column with "DF2 FX" column and to this for all rows in DF1. Either by making a third DF3 or by creating a new column i DF1 called Amount_EUR. </p> <p>Any ideas on how to write this code?</p>
<p>You can use a <code>map</code> to apply the transformation - </p> <pre><code>import pandas as pd df = pd.DataFrame({"Currency": ['USD', 'EUR', 'AUD', 'EUR'], "Amount": [100, 5000, 300, 750]}) df1 = pd.DataFrame({"Currency": ["AUD", "BGN", "BRL", "CAD", "EUR"], "FX": [1.6, 1.9, 4.5, 1.5, 1.1]}) df1 = df1.set_index("Currency") df['FX'] = df['Currency'].map(df1.FX) df['FX_Adj_Amt'] = df['Amount'].div(df['FX']) df # Currency Amount Fx FX_Adj_Amt #0 USD 100 NaN NaN #1 EUR 5000 1.1 4545.454545 #2 AUD 300 1.6 187.500000 #3 EUR 750 1.1 681.818182 </code></pre>
python|python-3.x|pandas|dataframe
0
376,526
60,301,973
Unfurling two columns in a Pandas dataframe into a list of lists
<p>I have two columns in a Pandas dataframe whose values logically follow one another. See the following:</p> <pre><code>Name Includes Account Product Account Product Account Card Account Card Account Plastic Card Account Token Token Token Vault Account Savings Account </code></pre> <p>So Account > Product Account > Card Account, etc. Ultimately I want to create a list of lists where the root ('Account') is the first element of each list. The output should look like the following:</p> <pre><code>[['Account', 'Product Account', 'Card Account', 'Plastic'], ['Account', 'Product Account', 'Card Account', 'Token', 'Token Vault'], ['Account', 'Savings Account']] </code></pre> <p>I basically want to find any and all possible paths between the dataframe elements that may exist. I currently have a code that converts the two dataframe columns into a dictionary structure:</p> <pre><code>def link_hops(dictionary): dictionary = dict(df.groupby('Name')['Includes'].apply(set)) dictionary = {k: list(v) for k, v in dictionary.items()} all_values = set(x for xs in dictionary.values() for x in xs) refs = all_values &amp; set(dictionary.keys()) for k, v in dictionary.items(): for i in range(len(v)): if v[i] in refs: v[i] = {v[i]: v1 for k1, v1 in dictionary.items() if v[i] == k1} dictionary = {k: v for k, v in dictionary.items() if k not in refs} return dictionary </code></pre> <p>I get the following:</p> <pre><code>{'Account': ['Savings Account', {'Product Account': [{'Card Account': ['Plastic', {'Token': ['Token Vault']}]}]}]} </code></pre> <p>This code does the job of defining all of the paths that exist from the root ('Account') to the terminus for each path ('Savings Account', 'Plastic', 'Token Vault'), but I cannot figure out how to convert this into a list format that is scalable. I have a recursion script which does work on small examples like this, but the actual dataframes I am working with can potentially be hundreds or thousands of levels deep when I convert them into dictionaries via <code>link_hops</code>, and easily blow past the recursion limit when I call that script.</p> <p>I want to know if it is possible to skip the intermediate step of converting my dataframe into a dictionary and directly convert it into a list of lists, or even just use <code>.map()</code> or something similar to work on the dataframe directly.</p>
<h3> #Approach1 </h3> <p>Here's one approach, taking each row from the dataframe as a graph edge of a directed graph using <code>NetworkX</code> and looking for the <a href="https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.shortest_paths.generic.shortest_path.html#networkx.algorithms.shortest_paths.generic.shortest_path" rel="nofollow noreferrer"><code>shortest_path</code></a> from <code>Account</code> to the different <em>targets</em>:</p> <pre><code>import numpy as np a = df.values # check correspondence with value in next row and first col m = np.r_[False, (a[:-1, 1] != a[1:, 0])].cumsum() # array([0, 0, 0, 1, 1, 2], dtype=int32) # get indices of where theres is not a correspondence m_diff = np.r_[m[:-1] != m[1:], True] # array([False, False, True, False, True, True]) # get targets of all paths targets = a[m_diff, 1] # array(['Plastic', 'TokenVault', 'SavingsAccount'], dtype=object) # define a directed graph using networkx import networkx as nx #add edges from the graph G = nx.from_pandas_edgelist(df, source='Name', target='Includes') #find all shortest paths from Account to the different found targets [nx.shortest_path(G, 'Account', target) for target in targets] [['Account', 'ProductAccount', 'CardAccount', 'Plastic'], ['Account', 'ProductAccount', 'CardAccount', 'Token', 'TokenVault'], ['Account', 'SavingsAccount']] </code></pre> <hr> <h3> #Approach2 </h3> <p>Another way to find the graph <em>end nodes</em> could be to look at the <a href="https://en.wikipedia.org/wiki/Degree_(graph_theory)" rel="nofollow noreferrer">degree</a>, and keep those that have a degree of 1:</p> <pre><code>G = nx.from_pandas_edgelist(df, source='Name', target='Includes') [nx.shortest_path(G, 'Account', node) for node, degree in G.degree() if degree==1] [['Account', 'ProductAccount', 'CardAccount', 'Plastic'], ['Account', 'ProductAccount', 'CardAccount', 'Token', 'TokenVault'], ['Account', 'SavingsAccount']] </code></pre> <hr> <p>For a visual understanding of the graph problem being solved:</p> <pre><code>pos = nx.spring_layout(G, scale=20) nx.draw(G, pos, node_color='lightblue', node_size=500, with_labels=True) </code></pre> <p><a href="https://i.stack.imgur.com/nb5Gx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nb5Gx.png" alt="enter image description here"></a></p> <p>As we can see, by knowing the <em>sources</em> and <em>targets</em> to look for, by using <code>nx.shortest_path</code> we can obtain the path between <code>Account</code> and the obtained targets</p>
python|pandas|list|networkx
2
376,527
60,322,381
Hopf Bifurcation Plot
<p>I am attempting to code a bifurcation diagram to illustrate the values of f for which the Oregonator model yields oscillatory behaviour. I get the "setting an array element with a sequence" error at the solve_ivp line. I suspect it has something to do the time span but I am not sure. I should get a Hopf bifurcation, i.e. a bullet-like cone in the region of oscillations. </p> <p>Here is the code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.integrate import solve_ivp # Dimensionless constant parameters eps = 0.04 a = 0.0008 # Dimensionless varying parameter - will reveal limit cycle region f = np.linspace(-5,5,250) # Oregonator model def Oregonator(t, Y): x,z = Y; return [(x * (1 - x) + ((a - x) * f * z) / (a + x)) / eps, x - z] # Time span, initial conditions ts = np.linspace(-5, 5, 250) Y0 = [1, 0.5] # Numerical algorithm/method NumSol = solve_ivp(Oregonator, [0, 30], Y0, method="Radau") t = NumSol.t x,z = NumSol.y # Plot fig = plt.figure() ax = fig.gca(projection='3d') ax.plot(f, x, z, 'm') ax.set_xlabel('$f$', fontsize=10) ax.set_ylabel('$x^*$', fontsize=10) ax.set_zlabel('$z^*$', fontsize=10) ax.axis([-5, 5, -5, 5]) plt.grid() plt.show() </code></pre>
<p>You can not do this kind of simultaneous computation. (That is, you can, but it requires explicit coding and is still ill-advised as the step size selection may greatly vary over the range of <code>f</code> values.)</p> <p>You should compute the solution for each of the <code>f</code> values separately, and then plot them. To build a list of all solutions first, it is useful to assemble the construction for a single solution and its last point into a separate function so that the list construction can be done via list processing.</p> <pre class="lang-py prettyprint-override"><code># Dimensionless constant parameters eps = 0.04 a = 0.0008 def limit(f): # Oregonator model def Oregonator(t, Y): x,z = Y; return [(x * (1 - x) + ((a - x) * f * z) / (a + x)) / eps, x - z] # Time span, initial conditions ts = np.linspace(-5, 5, 250) Y0 = [1, 0.5] # Numerical algorithm/method NumSol = solve_ivp(Oregonator, [0, 30], Y0, method="Radau") t = NumSol.t x,z = NumSol.y return x[-1],z[-1] # Dimensionless varying parameter - will reveal limit cycle region f = np.linspace(-5,5,250) x,z = np.array([ limit(ff) for ff in f ]).T # Plot fig = plt.figure() ax = fig.gca(projection='3d') ax.plot(f, x, z, 'mo', ms=2) ax.set_xlabel('$f$', fontsize=10) ax.set_ylabel('$x^*$', fontsize=10) ax.set_zlabel('$z^*$', fontsize=10) ax.axis([-5, 5, -5, 5]) plt.grid() plt.show() </code></pre> <p>This results in a plot</p> <p><a href="https://i.stack.imgur.com/j6mTX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j6mTX.png" alt="plot of the limit points"></a></p>
python|numpy|ode|nonlinear-functions|chemistry
0
376,528
60,118,136
Take a cell's value to indicate a column name in pandas
<p>Input</p> <pre><code> DBN Grade 3 4 5 0 01M015 3 30 44 15 1 01M015 4 30 44 15 2 01M015 5 30 44 15 </code></pre> <p>Desired Output</p> <pre><code> DBN Grade 3 4 5 Enrollment 0 01M015 3 30 44 15 30 1 01M015 4 30 44 15 44 2 01M015 5 30 44 15 15 </code></pre> <p>How would you create the Enrollment column?</p> <p>Note that the column we seek for each record depends on the value at df['Grade']. </p> <p>I've tried <a href="https://stackoverflow.com/questions/60102536/understanding-pandas-apply-looping-behavior">variations</a> of df[df['Grade']] so that I could find the column df['3'], but I haven't been successful.</p> <p>Is there a way to do this simply?</p> <pre><code>import pandas as pd import numpy as np data={'DBN':['01M015','01M015','01M015'], 'Grade':['3','4','5'], '3':['30','30','30'], '4':['44','44','44'], '5':['15','15','15']} df = pd.DataFrame(data) # This line below doesn't work: raises ValueError: Length of values does not match length of index df['Enrollment'] = [df[c] if (df.loc[i,'Grade'] == c) else None for i in df.index for c in df.columns] </code></pre>
<p>Set your index, and then use <code>lookup</code>:</p> <pre><code>df.set_index('Grade').lookup(df['Grade'], df['Grade']) </code></pre> <p></p> <pre><code>array(['30', '44', '15'], dtype=object) </code></pre> <hr> <p>You might run into some issues if your data is numeric (in your sample data it is all strings), requiring a cast to make the lookup succeed.</p>
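<p>To attach the result as the <code>Enrollment</code> column the question asks for, assign the lookup back to the dataframe. A sketch only -- it assumes, as in the sample, that each <code>Grade</code> value appears once in the index; the commented line shows the cast mentioned above for the case where <code>Grade</code> is numeric while the column labels are strings:</p> <pre><code>df['Enrollment'] = df.set_index('Grade').lookup(df['Grade'], df['Grade'])

# if 'Grade' holds numbers but the columns are labelled '3', '4', '5' (strings),
# cast the column-label side to str to avoid a KeyError:
# df['Enrollment'] = df.set_index('Grade').lookup(df['Grade'], df['Grade'].astype(str))
</code></pre>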
python|pandas
4
376,529
60,154,404
Is there the equivalent of to_markdown to read data?
<p>With pandas 1.0.0 the use of <code>.to_markdown()</code> to show the content of a dataframe in this forum in <a href="https://www.markdownguide.org/" rel="noreferrer">markdown</a> is going to proliferate. Is there a convenient way to load the data back into a dataframe? Maybe an option to <code>.from_clipboard(markdown=True)</code>? </p>
<p>You can read markdown tables (or any structured text table) with the pandas <code>read_table</code> function:</p> <p>Let's create a sample markdown table:</p> <pre class="lang-py prettyprint-override"><code>pd.DataFrame({"a": [0, 1], "b":[2, 3]}).to_markdown() </code></pre> <pre class="lang-html prettyprint-override"><code>| | a | b | |---:|----:|----:| | 0 | 0 | 2 | | 1 | 1 | 3 | </code></pre> <p>As you can see, this is just a structured text table where the delimiters are pipes, there's a lot of whitespace, there are null columns on the left-most and right-most, and there's a header underline that must be dropped.</p> <pre class="lang-py prettyprint-override"><code>pd # Read a markdown file, getting the header from the first row and inex from the second column .read_table('df.md', sep="|", header=0, index_col=1, skipinitialspace=True) # Drop the left-most and right-most null columns .dropna(axis=1, how='all') # Drop the header underline row .iloc[1:] a b 0 0 2 1 1 3 </code></pre>
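<p>For the clipboard case the question actually asks about: <code>read_clipboard</code> forwards its keyword arguments to <code>read_csv</code>, so a markdown table copied from a post can be parsed the same way. A sketch, assuming the table is currently on the clipboard:</p> <pre><code># copy a markdown table (like the one above), then:
df = (pd.read_clipboard(sep="|", header=0, index_col=1, skipinitialspace=True)
        .dropna(axis=1, how="all")   # drop the empty outer pipe columns
        .iloc[1:])                   # drop the header underline row
</code></pre>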
python|pandas
18
376,530
60,290,705
Getting rows which contain certain value in a column per each group in Pandas
<p>was wondering if you could provide some guidance with this one- I'm still working on <a href="http://data.un.org/Data.aspx?d=EDATA&amp;f=cmID:ES&amp;c=2,5,6,7,8&amp;s=_crEngNameOrderBy:asc,_enID:asc,yr:desc&amp;v=1" rel="nofollow noreferrer">this data</a>and here's what I am trying to do: -For each grouped 'country or area', I am trying to get the rows that contain the 2016 'quantity' and the 2011 'quantity'. However, it looks like there may be some countries that don't have a row for 2016 or 2011. The problem is I get an error when executing the following code:</p> <pre><code> for c in grp['Country or Area'].unique(): deltafiveyrs.append(grp[(grp['Year'] == 2016.0) &amp; (grp['Country or Area'] == c)]['Quantity'] - grp[(grp['Year'] == 2011.0) &amp; (grp['Country or Area'] == c)]['Quantity']) </code></pre> <p>The error message I get is:</p> <pre><code>/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:5: DeprecationWarning: elementwise comparison failed; this will raise an error in the future. """ --------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-30-90579ab30ed1&gt; in &lt;module&gt;() 3 4 for c in grp['Country or Area'].unique(): ----&gt; 5 deltafiveyrs.append(grp[(grp['Year'] == 2016.0) &amp; (grp['Country or Area'] == c)]['Quantity'] - grp[(grp['Year'] == 2011.0) &amp; (grp['Country or Area'] == c)]['Quantity']) 6 7 /usr/local/lib/python3.6/dist-packages/pandas/core/base.py in __getitem__(self, key) 266 else: 267 if key not in self.obj: --&gt; 268 raise KeyError("Column not found: {key}".format(key=key)) 269 return self._gotitem(key, ndim=1) 270 KeyError: 'Column not found: False' </code></pre> <p>does anybody know what is going on? Should the values in the 'years' column be changed from float to int? And what would be the best way on how to handle the groups with no values for 2011/2016?</p> <p>Many thanks</p>
<p>Explore and clean the data before working on it. A few columns (ignoring Quantity footnotes) contain NaN values, so drop those rows and proceed from there. The following was tried in a Jupyter notebook and gives a dataframe of countries with their quantities for 2011 &amp; 2016.</p> <pre><code>cdf = df[['Country or Area', 'Commodity - Transaction', 'Year', 'Unit', 'Quantity']] cdf = cdf.dropna() cdf[(cdf['Year'] == 2011) | (cdf['Year'] == 2016)] </code></pre>
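<p>From there, one way to get the change from 2011 to 2016 per country without looping is to pivot on Year and subtract the two columns; countries that lack either year simply end up with NaN instead of raising a KeyError. This is only a sketch: it assumes the column names above and sums Quantity across commodity transactions, which you may want to adjust.</p> <pre><code>years = cdf[cdf['Year'].isin([2011, 2016])]
pivot = years.pivot_table(index='Country or Area', columns='Year',
                          values='Quantity', aggfunc='sum')
pivot.columns = pivot.columns.astype(int)   # normalise 2011.0 / 2016.0 labels if Year is float
pivot['delta'] = pivot[2016] - pivot[2011]  # NaN where a country is missing either year
</code></pre>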
python|pandas|int|data-cleaning|isnull
0
376,531
59,911,740
Custom loss function that skips the NaN input
<p>I am building an autoencoder, my data has NaN values in it. How do I create a custom (MSE) loss function, that does not compute loss if it encounters a NaN in the validation data? </p> <p>Got a hint from the web:</p> <pre><code>def nan_mse(y_actual, y_predicted): per_instance = tf.where(tf.is_nan(y_actual), tf.zeros_like(y_actual), tf.square(tf.subtract(y_predicted, y_actual))) return tf.reduce_mean(per_instance, axis=0) </code></pre> <p>But receive loss of NaN:</p> <blockquote> <p>Epoch 1/50 - 25s - loss: nan</p> </blockquote> <p>When I try using the custom loss function in my callback function, after each epoch:</p> <pre><code>predictions = autoencoder.predict(x_pred) mae = (nan_mse(x_pred, predictions)) </code></pre> <blockquote> <p>TypeError: Input 'e' of 'Select' Op has type float32 that does not match type float64 of argument 't'.</p> </blockquote>
<p>I guess, your loss function actually works well. The <code>nan</code> value probably comes from the predictions. Thus the condition <code>tf.is_nan(y_actual)</code> doesn't filter it out. To filter out the prediction's <code>nan</code> you should do as follows:</p> <pre><code>import tensorflow.compat.v1 as tf from tensorflow.compat.v1.keras import backend as K import numpy as np def nan_mse(y_actual, y_predicted): stack = tf.stack((tf.is_nan(y_actual), tf.is_nan(y_predicted)), axis=1) is_nans = K.any(stack, axis=1) per_instance = tf.where(is_nans, tf.zeros_like(y_actual), tf.square(tf.subtract(y_predicted, y_actual))) print(per_instance) return tf.reduce_mean(per_instance, axis=0) print(nan_mse([1.,1.,np.nan,1.,0.], [1.,1.,0.,0.,np.nan])) </code></pre> <p><strong>Out:</strong></p> <pre><code>tf.Tensor(0.2, shape=(), dtype=float32) </code></pre>
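<p>The second error in the question (float32 vs float64) is a separate issue: <code>model.predict</code> returns float32 while the NumPy validation data is float64, so calling the loss directly mixes dtypes. Casting both arguments to one dtype first is a possible fix -- a sketch only; in graph mode the result is still a tensor that has to be evaluated:</p> <pre><code>predictions = autoencoder.predict(x_pred)
mae = nan_mse(tf.cast(x_pred, tf.float32),
              tf.cast(predictions, tf.float32))
</code></pre>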
python|tensorflow|keras|autoencoder
4
376,532
60,138,199
Format bar chart text to 2 decimal places
<p>I have working code as follows:</p> <pre><code>active_parking = pd.pivot_table(parking[parking['Bldg Status'] == 'ACTIVE'], index='Owned/Leased', values='Total Parking Spaces', aggfunc='mean' ) active_parking['% of Total'] = ((active_parking['Total Parking Spaces'] / active_parking['Total Parking Spaces'].sum()) * 100) print(active_parking, '\n') active_parking.plot(kind='bar') for i, number in enumerate(active_parking['Total Parking Spaces']): plt.text(x=i, y=number, s=number, horizontalalignment='center', weight='bold') for i, number in enumerate(active_parking['% of Total']): plt.text(x=i, y=number, s=number, horizontalalignment='left', weight='bold') plt.xticks(rotation=0) plt.show() </code></pre> <p>And it produces this output:</p> <pre><code> Total Parking Spaces % of Total Owned/Leased LEASED 44.707349 37.546059 OWNED 74.365997 62.453941 </code></pre> <p>But my plot has two issues that I can't seem to solve:</p> <pre><code>1) I want the displayed value on top of each bar limited to two decimal places. 2) After solving the above issue, how do I center the value over each bar? </code></pre> <p><a href="https://i.stack.imgur.com/jBmKC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBmKC.png" alt="enter image description here"></a></p> <p>Even if I crop the text display with this: pd.options.display.float_format = '{:.2f}'.format, the plot still shows 14 decimal places.</p>
<p>Use:</p> <pre><code>active_parking.plot(kind='bar') for i, number in enumerate(active_parking['Total Parking Spaces']): plt.text(x=i-.23, y=number + 0.9, s=round(number, 2), weight='bold') for i, number in enumerate(active_parking['% of Total']): plt.text(x=i+.02, y=number + 0.9, s=round(number, 2), weight='bold') plt.xticks(rotation=0) plt.ylim(top=active_parking.values.max() + 8) plt.show() </code></pre>
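<p>A variant that avoids the hand-tuned x/y offsets: format each label to two decimals with an f-string and let matplotlib centre it through the alignment arguments. A sketch using the same <code>active_parking</code> frame as above:</p> <pre><code>ax = active_parking.plot(kind='bar', rot=0)
for container in ax.containers:            # one container of bars per plotted column
    for bar in container:
        ax.text(bar.get_x() + bar.get_width() / 2,   # horizontal centre of the bar
                bar.get_height(),                    # top of the bar
                f"{bar.get_height():.2f}",           # two decimal places
                ha='center', va='bottom', weight='bold')
plt.show()
</code></pre>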
python-3.x|pandas|matplotlib
0
376,533
60,197,465
Advance indexing in Pytorch to get rid of nested for-loops
<p>I have a situation for which I am using nested for-loops, but I want to know if there's a faster way of doing this using some advanced indexing in Pytorch.</p> <p>I have a tensor named <code>t</code>:</p> <pre><code>t = torch.randn(3,8) print(t) tensor([[-1.1258, -1.1524, -0.2506, -0.4339, 0.8487, 0.6920, -0.3160, -2.1152], [ 0.4681, -0.1577, 1.4437, 0.2660, 0.1665, 0.8744, -0.1435, -0.1116], [ 0.9318, 1.2590, 2.0050, 0.0537, 0.6181, -0.4128, -0.8411, -2.3160]]) </code></pre> <p>I want to create a new tensor which indexes values from <code>t</code>. Let's say these indexes are stored in variable <code>indexes</code></p> <pre><code>indexes = [[(0, 1, 4, 5), (0, 1, 6, 7), (4, 5, 6, 7)], [(2, 3, 4, 5)], [(4, 5, 6, 7), (2, 3, 6, 7)]] </code></pre> <p>Each inner tuple in <code>indexes</code> represents four indexes that are to be taken from a row.</p> <p>As an example, based on these indexes my output would be a 6x4 dimension tensor (6 is the total number of tuples in <code>indexes</code>, and 4 corresponds to one value in a tuple)</p> <p>For instance, this is what I want to do:</p> <pre><code>#counting the number of tuples in indexes count_instances = sum([1 for lst in indexes for tupl in lst]) #creating a zero output matrix final_tensor = torch.zeros(count_instances,4) final_tensor[0] = t[0,indexes[0][0]] final_tensor[1] = t[0,indexes[0][1]] final_tensor[2] = t[0,indexes[0][2]] final_tensor[3] = t[1,indexes[1][0]] final_tensor[4] = t[2,indexes[2][0]] final_tensor[5] = t[2,indexes[2][1]] </code></pre> <p>The final output looks like this: print(final_tensor)</p> <pre><code>tensor([[-1.1258, -1.1524, 0.8487, 0.6920], [-1.1258, -1.1524, -0.3160, -2.1152], [ 0.8487, 0.6920, -0.3160, -2.1152], [ 1.4437, 0.2660, 0.1665, 0.8744], [ 0.6181, -0.4128, -0.8411, -2.3160], [ 2.0050, 0.0537, -0.8411, -2.3160]]) </code></pre> <p>I created a function <code>build_tensor</code> (shown below) to achieve this with nested for-loops, but I want to know if there's a faster way of doing it with simple indexing in Pytorch. I want a faster way of doing it because I'm doing this operation hundreds of times with bigger index and t sizes. </p> <p>Any help?</p> <pre><code>def build_tensor(indexes, t): #count tuples count_instances = sum([1 for lst in indexes for tupl in lst]) #create a zero tensor final_tensor = torch.zeros(count_instances,4) final_tensor_idx = 0 for curr_idx, lst in enumerate(indexes): for tupl in lst: final_tensor[final_tensor_idx] = t[curr_idx,tupl] final_tensor_idx+=1 return final_tensor </code></pre>
<p>You can arrange the indices into 2D arrays then do the indexing in one shot like this:</p> <pre><code>rows = [(row,)*len(index_tuple) for row, row_indices in enumerate(indexes) for index_tuple in row_indices] columns = [index_tuple for row_indices in indexes for index_tuple in row_indices] final_tensor = t[rows, columns] </code></pre>
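<p>A quick sanity check that the vectorised indexing matches the original nested-loop version (a sketch, reusing <code>rows</code>, <code>columns</code>, <code>build_tensor</code> and <code>t</code> from above):</p> <pre><code>fast = t[rows, columns]
slow = build_tensor(indexes, t)
assert torch.allclose(fast, slow)   # both give the same 6x4 tensor
</code></pre>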
python|indexing|pytorch|numpy-ndarray
1
376,534
60,064,762
Does Tensorflows MirroredStrategy() split the training model?
<p>Does Tensorflows <code>MirroredStrategy()</code> split the training model across multiple GPUs? I am trying to run a 3D-UNet and I am at a limit of 224x224x224 for the volume for my training data on a single GPU. I am trying to implement <code>MirroredStrategy()</code> and <code>with tf.device():</code> to pass parts of the model to a second GPU. I still am not able to pass the 224x224x224 limit. If I go for a larger volume I get a <code>ResourceExhaustedError</code>. </p> <p>Code:</p> <pre><code>def get_model(optimizer, loss_metric, metrics, lr=1e-3): with tf.device('/job:localhost/replica:0/task:0/device:GPU:0'): inputs = Input((sample_width, sample_height, sample_depth, 1)) conv1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(inputs) conv1 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv1) pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1) drop1 = Dropout(0.5)(pool1) conv2 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(drop1) conv2 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv2) pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2) drop2 = Dropout(0.5)(pool2) conv3 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(drop2) conv3 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(conv3) pool3 = MaxPooling3D(pool_size=(2, 2, 2))(conv3) drop3 = Dropout(0.3)(pool3) conv4 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(drop3) conv4 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(conv4) pool4 = MaxPooling3D(pool_size=(2, 2, 2))(conv4) drop4 = Dropout(0.3)(pool4) conv5 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(drop4) conv5 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(conv5) with tf.device('/job:localhost/replica:0/task:0/device:GPU:1'): up6 = concatenate([Conv3DTranspose(256, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv5), conv4], axis=4) conv6 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(up6) conv6 = Conv3D(256, (3, 3, 3), activation='relu', padding='same')(conv6) up7 = concatenate([Conv3DTranspose(128, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv6), conv3], axis=4) conv7 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(up7) conv7 = Conv3D(128, (3, 3, 3), activation='relu', padding='same')(conv7) up8 = concatenate([Conv3DTranspose(64, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv7), conv2], axis=4) conv8 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(up8) conv8 = Conv3D(64, (3, 3, 3), activation='relu', padding='same')(conv8) up9 = concatenate([Conv3DTranspose(32, (2, 2, 2), strides=(2, 2, 2), padding='same')(conv8), conv1], axis=4) conv9 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(up9) conv9 = Conv3D(32, (3, 3, 3), activation='relu', padding='same')(conv9) conv10 = Conv3D(1, (1, 1, 1), activation='sigmoid')(conv9) with tf.device('/job:localhost/replica:0/task:0/device:CPU:0'): model = Model(inputs=[inputs], outputs=[conv10]) model.compile(optimizer=optimizer(lr=lr), loss=loss_metric, metrics=metrics) return model mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/job:localhost/replica:0/task:0/device:GPU:0", "/job:localhost/replica:0/task:0/device:GPU:1"], cross_device_ops = tf.distribute.HierarchicalCopyAllReduce()) with mirrored_strategy.scope(): model = get_model(optimizer=Adam, loss_metric=dice_coef_loss, metrics=[dice_coef], lr=1e-3) </code></pre> <p>ResourceExhaustedError: </p> <pre><code>ResourceExhaustedError Traceback (most recent call last) &lt;ipython-input-1-7a601312fa7a&gt; 
in &lt;module&gt; 405 # e_drive_model_dir = '\\models\\' 406 model_checkpoint = ModelCheckpoint('unet_seg_cs9300_3d_{epoch:04}.model', monitor=observe_var, save_best_only=False, save_freq = 1000) --&gt; 407 model.fit(train_x, train_y, batch_size= 2, epochs= 10000, verbose=1, shuffle=True, validation_split=0, callbacks=[model_checkpoint]) 408 409 model.save('unet_seg_final_3d_test.model') ~\.conda\envs\gputest\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 647 steps_per_epoch=steps_per_epoch, 648 validation_steps=validation_steps, --&gt; 649 validation_freq=validation_freq) 650 651 batch_size = self._validate_or_infer_batch_size( ~\.conda\envs\gputest\lib\site-packages\tensorflow\python\keras\engine\training_distributed.py in fit_distributed(model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq) 141 validation_steps=validation_steps, 142 validation_freq=validation_freq, --&gt; 143 steps_name='steps_per_epoch') 144 145 ~\.conda\envs\gputest\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs) 272 # `ins` can be callable in tf.distribute.Strategy + eager case. 273 actual_inputs = ins() if callable(ins) else ins --&gt; 274 batch_outs = f(actual_inputs) 275 except errors.OutOfRangeError: 276 if is_dataset: ~\.conda\envs\gputest\lib\site-packages\tensorflow\python\keras\backend.py in __call__(self, inputs) 3290 3291 fetched = self._callable_fn(*array_vals, -&gt; 3292 run_metadata=self.run_metadata) 3293 self._call_fetch_callbacks(fetched[-len(self._fetches):]) 3294 output_structure = nest.pack_sequence_as( ~\.conda\envs\gputest\lib\site-packages\tensorflow\python\client\session.py in __call__(self, *args, **kwargs) 1456 ret = tf_session.TF_SessionRunCallable(self._session._session, 1457 self._handle, args, -&gt; 1458 run_metadata_ptr) 1459 if run_metadata: 1460 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[1,32,240,240,240] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node Adam/gradients/conv3d_17_1/Conv3D_grad/Conv3DBackpropInputV2}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. (1) Resource exhausted: OOM when allocating tensor with shape[1,32,240,240,240] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node Adam/gradients/conv3d_17_1/Conv3D_grad/Conv3DBackpropInputV2}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 
[[GroupCrossDeviceControlEdges_0/Adam/Adam/update_1/Const/_1070]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 1 successful operations. 0 derived errors ignored. </code></pre>
<p>Though it's late, I hope this answer helps others in the future.</p> <p>Note that this has been tested on TF 2.0, so it may not work for older versions.</p> <p>Short answer to the first part of the question:</p> <p><code>MirroredStrategy()</code> does not split the model on separate GPUs; it replicates the model on each and splits the batches. So if the model is trained using a batch size of 32 on a 2 GPU machine, each GPU gets 16 examples. The gradients are accumulated and the model is updated for all 32 examples.</p> <hr /> <p>How can the model itself be split?</p> <p>After a lot of trial and error, I have the following:</p> <ol> <li><p>You can have individual layers and ops on separate devices, but once you wrap them under a single instance of <code>tf.keras.Model</code>, you can call the whole model on a single device only.</p> </li> <li><p>The layers in a model can be referenced and used outside the model, as individual ops instead of only as a collective whole.</p> </li> <li><p>When saving and restoring the model, you can get away with only restoring the weights and then using those weights as specified in point 2, with new instances of layers that don't have variables.</p> </li> </ol> <p>Combining these three points, one way to split the model on multiple GPUs for training and inference is to first create the graph(<code>tf.keras.Model</code>) on a single GPU, then replicate individual components on separate GPUs.</p> <p>A bare minimum example:</p> <pre><code>def create_model(): input = Input((None, None, 3)) x = Conv2D(64, (3, 3), activation='relu')(input) y = Conv2D(64, (3, 3), activation='relu')(input) z = Concatenate()([x, y]) output = Conv2D(3, (3, 3), activation='sigmoid')(z) return tf.keras.Model(inputs=[input], outputs=[output]) def model_graph(input, model): # get all layers that contain trainable parameters layers = [] for layer in model.layers: if len(layer.trainable_variables) != 0: layers.append(layer) # use the list to access layers with trainable variables layer_num = 0 with tf.device('/gpu:0'): x = layers[layer_num](input); layer_num += 1 with tf.device('/gpu:1'): y = layers[layer_num](input); layer_num += 1 # You can create new instances of layers that don't have variables z = Concatenate()([x, y]) output = layers[layer_num](z) return output model = create_model() </code></pre> <p>When you want to use the model on a single device, you can use:</p> <pre><code>output = model(inputs) </code></pre> <p>When you wat to split it across two devices, you can use:</p> <pre><code>output = model_graph(model, inputs) </code></pre>
python|tensorflow|keras|gpu|multi-gpu
1
376,535
60,044,547
Google Colab taking too long to train a GAN
<p>I followed the Generative-Adversarial Network tutorial on the TensorFlow site (linked below), it says "This may take about one minute / epoch with the default settings on Colab.". So far only two epochs have completed with each being around 750 seconds. That means it will take 10 hours to complete all 50 epochs. I tried first with default settings, then tried with TPU selected and if anything it now is worse. I am hoping to find anything wrong on my end which could help speed up the training of the GAN.</p> <p>The code is too long to put here so I'm putting the link to my walkthrough of the tutorial down below.</p> <p>Link to the tutorial: <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/generative/dcgan</a></p> <p>Link to my Google Colab code: <a href="https://drive.google.com/open?id=10-VTSyFqWMRT3NIWOU77he7li4YRE0dY" rel="nofollow noreferrer">https://drive.google.com/open?id=10-VTSyFqWMRT3NIWOU77he7li4YRE0dY</a></p>
<p>Have you tried adjusting the 'runtime type' to GPU in the Runtime menu?</p>
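<p>After switching the runtime it is worth confirming that TensorFlow actually sees the accelerator before starting training. A minimal check -- the second call assumes TF 2.1+; on older versions use <code>tf.config.experimental.list_physical_devices</code>:</p> <pre><code>import tensorflow as tf

print(tf.test.gpu_device_name())                 # e.g. '/device:GPU:0' on a GPU runtime, '' otherwise
print(tf.config.list_physical_devices('GPU'))    # non-empty list means the GPU is visible
</code></pre>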
python-3.x|tensorflow|machine-learning|google-colaboratory|generative-adversarial-network
-1
376,536
60,304,229
Save model as h5 / Save model as .ckpt
<p>i have had big troubles today with saving formats while training a style-transfer neural network.</p> <p>The task is already solved i feel, i only need to save my model and load it again. But i can't find a proper way to do it. </p> <p>I used the following code from github to train a style-transfer network:</p> <p><a href="https://github.com/nikhilagrawal2000/Neural-Style-Transfer-with-Eager-Execution/blob/master/Neural_Style_Transfer_with_Eager_Execution.ipynb" rel="nofollow noreferrer">https://github.com/nikhilagrawal2000/Neural-Style-Transfer-with-Eager-Execution/blob/master/Neural_Style_Transfer_with_Eager_Execution.ipynb</a></p> <p>I already succesfully trained the network.</p> <p>Now, i saved the model using the following line:</p> <pre><code>model.save("/tmp/nst/test.h5") </code></pre> <p>For applying the saved neural network though, i need to use the network in a .ckpt format. </p> <p>Can someone tell me how to switch the data formats between h5 and .ckpt ? </p> <p>Or is there a specific save method for keras, so i can save it as .ckpt? (--> pseudocode: model.save_cpkt("/tmp/nst/test.ckpt")</p> <p>Would be extremely happy if someone could explain that to me, i tried it for several hours now without success. </p>
<p>You can save the weights in checkpoint format using:</p> <pre><code>model.save_weights("modelcheckpoint", save_format="tf") </code></pre> <p>You can read more about saving weights, models and checkpoints <a href="https://www.tensorflow.org/guide/keras/save_and_serialize#weights-only_saving_using_tensorflow_checkpoints" rel="nofollow noreferrer">here</a>.</p>
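<p>To use those weights later, rebuild the same architecture and load the checkpoint back in. A minimal sketch; <code>build_style_transfer_model()</code> is a hypothetical helper standing in for whatever code originally constructed the network, and it must recreate exactly the same layers:</p> <pre><code>model = build_style_transfer_model()   # hypothetical helper -- recreate the same architecture
model.load_weights("modelcheckpoint")  # reads the TensorFlow-format checkpoint files saved above
</code></pre>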
tensorflow|keras|conv-neural-network|dataformat|style-transfer
1
376,537
60,175,023
Converting a numpy ndarray to 1 dataframe column
<p>I have a text features which I convert to numeric using tfidf Vectorizer. The <code>complaint</code> text column is converted as below</p> <pre><code>tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5,ngram_range=(1, 2), stop_words='english') complain_features = tfidf.fit_transform(df.complaint.values.astype('str')).toarray() </code></pre> <p><code>complain_features</code> is a 2D numpy array. I convert it to dataframe using below </p> <pre><code>complain_df = pd.DataFrame(complain_features, index=range(complain_features.shape[0]), columns=range(complain_features.shape[1])) </code></pre> <p>As you can see in the attached image below. <code>complain_df</code> is a 39 column df but I need it to be 1 column. How do I do that? Please suggest.</p> <p><a href="https://i.stack.imgur.com/xn3ao.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xn3ao.png" alt="enter image description here"></a></p>
<p>Try:</p> <pre><code>complain_df['Column1'] =complain_df[complain_df.columns[1:]].apply( lambda x: ','.join(x.dropna().astype(str)), axis=1 ) complain_df </code></pre>
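<p>If the goal is simply one dataframe column that holds each row's full tf-idf vector (rather than a comma-joined string), storing the row arrays directly is another option. A sketch, assuming <code>complain_features</code> has one row per row of <code>df</code>:</p> <pre><code>df['complain_features'] = list(complain_features)   # each cell holds that row's full vector (ndarray)
</code></pre>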
python|pandas|dataframe
0
376,538
59,986,093
How to compare the values at the same index in multiple lists and print the start time and end time if the condition is not satisfied?
<p>Here, i have 6 lists, all of them has same length of data. one is time which contains time from one start point to one end point and another five list contains signals. </p> <pre><code> time = [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67] A = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0] B = [0, 0, 0, 0, 0, 0, 0 ,0 ,0 ,0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2] C = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] D = [0, 0, 0, 0, 0, 0, 0 ,0 ,0 ,0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2] E = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] </code></pre> <p>Here first i want to compare list <strong>A</strong> and <strong>B</strong>. if in list <strong>A</strong> 0 comes and in the same index 2 comes in <strong>B</strong> and if it is True then in second condition check in the same index in other three list there <strong>C</strong> should be 0, <strong>D</strong> should be 0 and <strong>E</strong> should be 1. if this condition satisfy then it is passed but in case in some point it comes different value then i need the start time and end time.</p> <pre><code>or j in range(len(time)): lis = [] lis2 = [] for i in range(len(A)): if(A[i] == 0 and B[i] == 2): if C == 0 and D == 0 and E == 1: lis.append(time[i]) else: lis2.append(time[i]) print lis print lis2 </code></pre> <p>Using this code i've got the time where it is not satisfying but this isn't what i want. i want the start time and end time like this</p> <pre><code>OUTPUT - [33,42] or [33,34,35,36,37,38,39,40,41,42] </code></pre> <p>Because in this time period 1st condition is True and from where it fails 2nd condition from there it should print the time till 1st condition True like i've given in output, then no need to check further.</p> <p>Thank You In Advance. </p>
<p>Using numpy, you can do the following:</p> <pre><code>import numpy as np A = np.array(A) B = np.array(B) C = np.array(C) D = np.array(D) E = np.array(E) time = np.array(time) print time[(A == 0)*(B == 2)*(C == 0)*(D == 0)*(E == 1)] </code></pre> <p>By the way, your example is wrong. The correct result is <code>[32, 34, 35, 36, 37, 39, 40, 48, 49, 50, 51, 52]</code>, thus there are two periods with the correct pattern (from 31 to 40 and from 48 to 52).</p>
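<p>If you also want the start and end time of each unbroken stretch where the pattern holds (closer to what the question asks for), one possible follow-up is to locate the edges of the boolean mask. A sketch, reusing the NumPy arrays defined above:</p> <pre><code>mask = (A == 0) &amp; (B == 2) &amp; (C == 0) &amp; (D == 0) &amp; (E == 1)

edges  = np.diff(np.r_[False, mask, False].astype(int))   # +1 where a run starts, -1 just after it ends
starts = np.flatnonzero(edges == 1)
ends   = np.flatnonzero(edges == -1) - 1

periods = [[time[s], time[e]] for s, e in zip(starts, ends)]   # [start_time, end_time] per run
</code></pre>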
python|pandas|list|numpy|dataframe
0
376,539
59,922,447
Generate a new dataframe with boolean column based on comparing other dataframes values
<p>I have the following 3 dataframes with shape of (8004,29) and the following schema as an example:</p> <pre><code> id var0 var1 var2 var3 var4 ... var29 5171 10.0 2.8 0.0 5.0 1.0 ... 9.4 5171 40.9 2.5 3.4 4.5 1.3 ... 7.7 5171 60.7 3.1 5.2 6.6 3.4 ... 1.0 ... 5171 0.5 1.3 5.1 0.5 0.2 ... 0.4 4567 1.5 2.0 1.0 4.5 0.1 ... 0.4 4567 4.4 2.0 1.3 6.4 0.1 ... 3.3 4567 6.3 3.0 1.5 7.6 1.6 ... 1.6 ... 4567 0.7 1.4 1.4 0.3 4.2 ... 1.7 ... 9584 0.3 2.6 0.0 5.2 1.6 ... 9.7 9584 0.5 1.2 8.3 3.4 1.3 ... 1.7 9584 0.7 3.0 5.6 6.6 3.0 ... 1.0 ... 9584 0.7 1.3 0.1 0.0 2.0 ... 1.7 </code></pre> <p>where each <code>id</code> has 58 elements or rows and there are 138 unique <code>id</code>s.</p> <p>I am only interested in the last column of these dataframes: column <code>var29</code>. What i need to do is the following comparison: </p> <pre><code>if df1['var29'] &gt; (df2['var29'] + df3['var29']) or df1['var29'] &lt; (df2['var29'] - df3['var29']) </code></pre> <p>and generate a new dataframe as a result:</p> <pre><code> id result 5171 True 5171 True 5171 False ... 5171 False 4567 True 4567 True 4567 True ... 4567 False ... 9584 True 9584 False 9584 False ... 9584 True </code></pre> <p>I tried to loop over each index and use lamda to generate result dataframe as follow but it failed:</p> <pre><code>idxs = unique(df1.index).tolist() results = pd.DataFrame(index=df1.index) for idx in idxs: results['result'] = df1.loc[idx]['var29'].apply(lambda x: True if ( (df2['var29'].loc[idx] - df3['var29'].loc[idx]) &gt; x or ( df2['var29'].loc[idx] + df3['var29'].loc[idx]) &lt; x) else False) </code></pre> <p>Can someone help me to generate it? </p>
<p>Here's a way to do it, we map the columns into a single dataframe to ensure the ids are mapped correctly:</p> <pre><code># create a new data new_df = df1.copy() new_df['df2'] = new_df['id'].map(df2.set_index('id')['var29']) new_df['df3'] = new_df['id'].map(df3.set_index('id')['var29']) # use conditions cond = (new_df['var29'] &gt; (new_df['df2'] + new_df['df3'])) | (new_df['var29'] &lt; (new_df['df2'] - new_df['df3'])) new_df['result'] = np.where(cond, True, False) #choose columns new_df = new_df[['id','result']] </code></pre> <blockquote> <p><strong>Sample Data</strong></p> </blockquote> <pre><code>df1 = pd.DataFrame({'id': list(range(10)),'var29': np.random.randn(10)}) df2 = pd.DataFrame({'id': list(range(10)), 'var29': np.random.randn(10)}) df3 = pd.DataFrame({'id': list(range(10)), 'var29': np.random.randn(10)}) </code></pre>
python|pandas|dataframe
1
376,540
59,954,265
Installation errors with tensorflow 2.1.0
<ol> <li><code>pip3 --version</code> == <code>pip 20.0.2 from /home/nitin/anaconda3/envs/tensorflow/lib/python3.7/site-packages/pip (python 3.7)</code></li> <li><code>python --version</code> == <code>python 3.7.6</code></li> </ol> <p>i created an environment with <code>conda create --name tensorflow</code>. I had tensorflow 2.0 installed in it with which, i did with conda. I upgraded it with <code>pip install --upgrade tensorflow</code> from inside the tensorflow environment</p> <p>Now, when i do <code>import tensorflow as tf</code> i get the following error</p> <p><code>2020-01-28 23:01:06.791110: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory 2020-01-28 23:01:06.791210: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory 2020-01-28 23:01:06.791226: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.</code></p> <p>I dont seem to understand what went wrong. Any help?</p>
<p>This is just warning from tensorflow stating that if you want to improve latency and throughput of some models you can harness the power of tensorRT. However you cannot use it because libnvinfer is not installed. In order to install it,</p> <pre><code>wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb sudo apt install ./nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb sudo apt-get update sudo apt-get install -y --no-install-recommends \ libnvinfer6=6.0.1-1+cuda10.1 \ libnvinfer-dev=6.0.1-1+cuda10.1 \ libnvinfer-plugin6=6.0.1-1+cuda10.1 </code></pre> <p>This is with reference to the installation instructions provided by <a href="https://www.tensorflow.org/install/gpu" rel="nofollow noreferrer">tensorflow</a></p>
python|tensorflow|tensorflow2.0
2
376,541
60,150,530
Python Pandas read error, What does this error mean?
<p>I want to import a <code>csv</code> file from the <a href="https://statswales.gov.wales/Catalogue/Health-and-Social-Care/Mental-Health/Psychiatric-Census/patientsinmentalhealthhospitalsandunitsinwaleswithamentalillness" rel="nofollow noreferrer">mental health website</a>.</p> <p>I imported the correct libraries and directory for the file is correct.</p> <p>I used this line:</p> <pre><code>export = pd.read_csv('export.csv') </code></pre> <p>here's the error: </p> <pre class="lang-none prettyprint-override"><code>--------------------------------------------------------------------------- ParserError Traceback (most recent call last) &lt;ipython-input-2-0d82cf77b73c&gt; in &lt;module&gt; ----&gt; 1 export = pd.read_csv('export.csv') ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 683 ) 684 --&gt; 685 return _read(filepath_or_buffer, kwds) 686 687 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 461 462 try: --&gt; 463 data = parser.read(nrows) 464 finally: 465 parser.close() ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows) 1152 def read(self, nrows=None): 1153 nrows = _validate_integer("nrows", nrows) -&gt; 1154 ret = self._engine.read(nrows) 1155 1156 # May alter columns / col_dict ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows) 2057 def read(self, nrows=None): 2058 try: -&gt; 2059 data = self._reader.read(nrows) 2060 except StopIteration: 2061 if self._first_chunk: pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read() pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory() pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows() pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows() pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error() ParserError: Error tokenizing data. C error: Expected 1 fields in line 8, saw 13 </code></pre>
<p>The error means that, based on the earlier lines, the parser expected 1 field on line 8 of the file but found 13.</p> <p>The likely cause is that when you exported the file from the website you also ticked the 'Include Title' and 'Include Metadata' options, which add extra lines above the actual table. Leave those unchecked and just select 'Comma separated' as the Export Type. It should work.</p>
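<p>If re-exporting is not convenient, the extra title/metadata lines can also be skipped at read time. The exact number of rows to skip depends on the file, so the value below is only an assumption -- open the CSV in a text editor and count the lines above the header first:</p> <pre><code># skip the title/metadata block; 7 is a placeholder, use the real count for your file
export = pd.read_csv('export.csv', skiprows=7)

# alternatively, drop malformed lines while you inspect the file
# export = pd.read_csv('export.csv', error_bad_lines=False, warn_bad_lines=True)
</code></pre>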
python|pandas|csv
1
376,542
60,293,398
How to solve this obscure and excessively localised problem with pandas?
<p>I have this data frame wherein I want to give a score to each of the element in a distinct values column.</p> <pre><code>+-------------------------------+-------------------------------+------------------------------+ | A | B | Distinct Values | +-------------------------------+-------------------------------+------------------------------+ | ['a', 'b', 'c'] | ['a', 'b'] | ['a', 'b', 'c'] | | ['c', 'b', 'e', 'a'] | ['b', 'e', 'a'] | ['a', 'b', 'e', 'c'] | | ['a', 'b', 'd', 'e'] | ['a', 'b', 'c'] | ['a', 'b', 'd', 'e', 'c'] | | ['a', 'b', 'c'] | ['a', 'd', 'c'] | ['a', 'b', 'c', 'd'] | | | | | +-- ----------------------------+-------------------------------+------------------------------+ ( NO. of times that element has occurred in A and B) Scoring = ---------------------------------------------------- (Total number of elements(Distinct Values)) </code></pre> <p>This is how it would look like after scoring:</p> <pre><code>+------------------------+--------------------+-----------------------------------------------+ | A | B | Distinct_Values_with_scoring | +------------------------+--------------------+-----------------------------------------------+ | ['a', 'b', 'c'] | ['a', 'b'] |['a':2/3, 'b':2/3, 'c':1/3] | | ['c', 'b', 'e', 'a'] | ['b', 'e', 'a'] |['a':2/4, 'b':2/4, 'e':2/4, 'c':1/4] | | ['a', 'b', 'd', 'e'] | ['a', 'b', 'c'] |['a':2/5, 'b':2/5, 'd':1/5, 'e':1/5, 'c':1/5] | | ['a', 'b', 'c'] | ['a', 'd', 'c'] |['a':2/4, 'b':1/4, 'c':2/4, 'd':1/4] | | | | | +-- ---------------------+--------------------+-----------------------------------------------+ </code></pre> <p>How can I go about solving this problem in pandas? <div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>d = {"A":[['a', 'b', 'c'], ['c', 'b', 'e', 'a'], ['a', 'b', 'd', 'e'], ['a', 'b', 'c']], "B": [['a', 'b'],['b', 'e', 'a'],['a', 'b', 'c'], ['a', 'd', 'c']], "Distinct Values": [['a', 'b', 'c'], ['a', 'b', 'e', 'c'], ['a', 'b', 'd', 'e', 'c'], ['a', 'b', 'c', 'd']]} data = pd.DataFrame(d)</code></pre> </div> </div> </p>
<p>Because columns <code>A</code> and <code>B</code> are both lists, you can just add them together to get the total elements. Then use <code>Counter</code> in a dictionary comprehension to get the count of each letter, and divide each count by the total number of unique letters (determined by the length of the set).</p> <pre><code>from collections import Counter # Sample data. df = pd.DataFrame({ 'A': [['a', 'b', 'c'], ['c', 'b', 'e', 'a'], ['a', 'b', 'd', 'e'], ['a', 'b', 'c']], 'B': [['a', 'b'], ['b', 'e', 'a'], ['a', 'b', 'c'], ['a', 'd', 'c']] }) # Solution. &gt;&gt;&gt; df.assign( Distinct_Values_with_scoring= df['A'] .add(df['B']) .apply(lambda x: {k: v / len(set(x)) for k, v in Counter(x).items()}) ) A B Distinct_Values_with_scoring 0 [a, b, c] [a, b] {'a': 0.6666666666666666, 'b': 0.6666666666666... 1 [c, b, e, a] [b, e, a] {'c': 0.25, 'b': 0.5, 'e': 0.5, 'a': 0.5} 2 [a, b, d, e] [a, b, c] {'a': 0.4, 'b': 0.4, 'd': 0.2, 'e': 0.2, 'c': ... 3 [a, b, c] [a, d, c] {'a': 0.5, 'b': 0.25, 'c': 0.5, 'd': 0.25} </code></pre>
python|pandas
4
376,543
60,015,261
Is there a fast alternative to scipy _norm_pdf for correlated distribution sampling?
<p>I have fit a series of SciPy continuous distributions for a Monte-Carlo simulation and am looking to take a large number of samples from these distributions. However, I would like to be able to take correlated samples, such that the <code>i</code>th sample takes the e.g., 90th percentile from each of the distributions. </p> <p>In doing this, I've found a quirk in SciPy performance:</p> <pre class="lang-py prettyprint-override"><code># very fast way to many uncorrelated samples of length n for shape, loc, scale, in distro_props: sp.stats.norm.rvs(*shape, loc=loc, scale=scale, size=n) # verrrrryyyyy slow way to take correlated samples of length n correlate = np.random.uniform(size=n) for shape, loc, scale, in distro_props: sp.stats.norm.ppf(correlate, *shape, loc=loc, scale=scale) </code></pre> <p>Most of the results about this claim that the slowness on these SciPy distros if from the type-checking etc. wrappers. However when I profiled the code, the vast bulk of the time is spent in the underlying math function <code>[_continuous_distns.py:179(_norm_pdf)]</code><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html" rel="noreferrer">1</a>. Furthermore, it scales with <code>n</code>, implying that it's looping through every elemnt internally.</p> <p>The SciPy <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html" rel="noreferrer">docs on rv_continuous</a> almost seem to suggest that the subclass should override this for performance, but it seems bizarre that I would monkeypatch into SciPy to speed up their ppf. I would just compute this for the normal from the ppf formula, but I also use lognormal and skewed normal, which are more of a pain to implement.</p> <p>So, what is the best way in Python to compute a fast ppf for normal, lognormal, and skewed normal distributions? Or more broadly, to take correlated samples from several such distributions?</p>
<p>If you need just the normal <code>ppf</code>, it is indeed puzzling that it is so slow, but you can use <code>scipy.special.erfinv</code> instead:</p> <pre><code>x = np.random.uniform(0,1,100) np.allclose(special.erfinv(2*x-1)*np.sqrt(2),stats.norm().ppf(x)) # True timeit(lambda:stats.norm().ppf(x),number=1000) # 0.7717257660115138 timeit(lambda:special.erfinv(2*x-1)*np.sqrt(2),number=1000) # 0.015020604943856597 </code></pre> <p>EDIT:</p> <p><code>lognormal</code> and <code>triangle</code> are also straight forward:</p> <pre><code>c = np.random.uniform() np.allclose(np.exp(c*special.erfinv(2*x-1)*np.sqrt(2)),stats.lognorm(c).ppf(x)) # True np.allclose(((1-np.sqrt(1-(x-c)/((x&gt;c)-c)))*((x&gt;c)-c))+c,stats.triang(c).ppf(x)) # True </code></pre> <p>skew normal I'm not familiar enough, unfortunately.</p>
python|numpy|scipy|distribution|montecarlo
3
376,544
60,180,917
Java API for loading TensorFlow models created in python
<p>I am using Java 13 (I'm not sure if that is relevant) and I'm making a blackjack game. I want to add a neural network model after training it in python to my Java application. However, on <a href="https://www.tensorflow.org/install/lang_java" rel="nofollow noreferrer">here</a> it says </p> <blockquote> <p>"Note: There is no libtensorflow support for TensorFlow 2 yet."</p> </blockquote> <p>I haven't even started making my model so I have not tried to load a model into Java. Is this going to be a problem? Do I need to use an older version of TensorFlow?</p>
<p>If you do not want to convert your model to TF Lite, you can also use the snapshots available for TensorFlow Java with 2.x support enabled; see this <a href="https://github.com/tensorflow/java" rel="nofollow noreferrer">repository</a>.</p>
java|python|tensorflow|installation|java-13
1
376,545
60,145,422
Out of memory running VGG-19 on Keras and tensorflow on an 11GB GPU
<p>I am using keras + tensorflow (1.14) on (cuda-10.0). I have a RTX 2080 TI gpu. I am trying to run a VGG-19 model to train on 640*480*1 size images. I run a code a determine the amount of memory GPU needs for running training on batch size 10. It says the needed memory is ~6GB. Still it throws out of memory error on an 11GB GPU with just batch size of 1. What am I missing here? Thanks and regards,</p> <p>The model I am using looks like:</p> <pre><code> model = Sequential() model.add(Conv2D(input_shape=(IMG_SIZE_HEIGHT,IMG_SIZE_WIDTH,1),filters=64,kernel_size=(3,3),padding="same", activation="relu")) model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Convolution2D(128, 3, 3, activation='relu')) model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Flatten()) model.add(Dense(units=4096,activation="relu")) model.add(Dense(units=2048,activation="relu")) model.add(Dropout(0.5)) model.add(Dense(3, activation='softmax')) This model cannot even train dataset of batch size 1! I get an out of memory error. 
I am running the following piece of code to determine how much memory it takes to run training with batch size of 10 : get_model_memory_usage: Conv2D get_model_memory_usage:s: 480 get_model_memory_usage:s: 640 get_model_memory_usage:s: 64 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 75.0 get_model_memory_usage: Conv2D get_model_memory_usage:s: 480 get_model_memory_usage:s: 640 get_model_memory_usage:s: 64 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 75.0 get_model_memory_usage: MaxPooling2D get_model_memory_usage:s: 240 get_model_memory_usage:s: 320 get_model_memory_usage:s: 64 get_model_memory_usage: for layer: MaxPooling2D , memory_usage in MB is: 18.75 get_model_memory_usage: Conv2D get_model_memory_usage:s: 238 get_model_memory_usage:s: 318 get_model_memory_usage:s: 128 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 36.955 get_model_memory_usage: Conv2D get_model_memory_usage:s: 238 get_model_memory_usage:s: 318 get_model_memory_usage:s: 128 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 36.955 get_model_memory_usage: Conv2D get_model_memory_usage:s: 238 get_model_memory_usage:s: 318 get_model_memory_usage:s: 128 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 36.955 get_model_memory_usage: MaxPooling2D get_model_memory_usage:s: 119 get_model_memory_usage:s: 159 get_model_memory_usage:s: 128 get_model_memory_usage: for layer: MaxPooling2D , memory_usage in MB is: 9.239 get_model_memory_usage: Conv2D get_model_memory_usage:s: 119 get_model_memory_usage:s: 159 get_model_memory_usage:s: 256 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 18.478 get_model_memory_usage: Conv2D get_model_memory_usage:s: 119 get_model_memory_usage:s: 159 get_model_memory_usage:s: 256 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 18.478 get_model_memory_usage: Conv2D get_model_memory_usage:s: 119 get_model_memory_usage:s: 159 get_model_memory_usage:s: 256 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 18.478 get_model_memory_usage: Conv2D get_model_memory_usage:s: 119 get_model_memory_usage:s: 159 get_model_memory_usage:s: 256 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 18.478 get_model_memory_usage: MaxPooling2D get_model_memory_usage:s: 59 get_model_memory_usage:s: 79 get_model_memory_usage:s: 256 get_model_memory_usage: for layer: MaxPooling2D , memory_usage in MB is: 4.552 get_model_memory_usage: Conv2D get_model_memory_usage:s: 59 get_model_memory_usage:s: 79 get_model_memory_usage:s: 512 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 9.104 get_model_memory_usage: Conv2D get_model_memory_usage:s: 59 get_model_memory_usage:s: 79 get_model_memory_usage:s: 512 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 9.104 get_model_memory_usage: Conv2D get_model_memory_usage:s: 59 get_model_memory_usage:s: 79 get_model_memory_usage:s: 512 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 9.104 get_model_memory_usage: MaxPooling2D get_model_memory_usage:s: 29 get_model_memory_usage:s: 39 get_model_memory_usage:s: 512 get_model_memory_usage: for layer: MaxPooling2D , memory_usage in MB is: 2.209 get_model_memory_usage: Conv2D get_model_memory_usage:s: 29 get_model_memory_usage:s: 39 get_model_memory_usage:s: 512 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 2.209 get_model_memory_usage: Conv2D get_model_memory_usage:s: 29 get_model_memory_usage:s: 39 
get_model_memory_usage:s: 512 get_model_memory_usage: for layer: Conv2D , memory_usage in MB is: 2.209 get_model_memory_usage: MaxPooling2D get_model_memory_usage:s: 14 get_model_memory_usage:s: 19 get_model_memory_usage:s: 512 get_model_memory_usage: for layer: MaxPooling2D , memory_usage in MB is: 0.52 get_model_memory_usage: Flatten get_model_memory_usage:s: 136192 get_model_memory_usage: for layer: Flatten , memory_usage in MB is: 0.52 get_model_memory_usage: Dense get_model_memory_usage:s: 4096 get_model_memory_usage: for layer: Dense , memory_usage in MB is: 0.016 get_model_memory_usage: Dense get_model_memory_usage:s: 2048 get_model_memory_usage: for layer: Dense , memory_usage in MB is: 0.008 get_model_memory_usage: Dropout get_model_memory_usage:s: 2048 get_model_memory_usage: for layer: Dropout , memory_usage in MB is: 0.008 get_model_memory_usage: Dense get_model_memory_usage:s: 3 get_model_memory_usage: for layer: Dense , memory_usage in MB is: 0.0 get_model_memory_usage: trainable_count: 579334723 non-trainable count 0.0 get_model_memory_usage: final size of the model with batch size: 10 is: 6.087 GB </code></pre> <p>The code to determine memory usage is:</p> <pre><code>def get_model_memory_usage(batch_size, model): number_size = 4.0 if K.floatx() == 'float16': number_size = 2.0 if K.floatx() == 'float64': number_size = 8.0 shapes_mem_count = 0 internal_model_mem_count = 0 for l in model.layers: layer_type = l.__class__.__name__ print("get_model_memory_usage:", layer_type) if layer_type == 'Model': internal_model_mem_count += get_model_memory_usage(batch_size, l) single_layer_mem = 1 for s in l.output_shape: if s is None: continue print(" get_model_memory_usage:s: ", s) single_layer_mem *= s print(" get_model_memory_usage: for layer: ", layer_type, ", memory_usage in MB is: ", np.round(single_layer_mem * number_size / (1024.0 ** 2), 3)) shapes_mem_count += single_layer_mem trainable_count = np.sum([K.count_params(p) for p in set(model.trainable_weights)]) non_trainable_count = np.sum([K.count_params(p) for p in set(model.non_trainable_weights)]) print("get_model_memory_usage: trainable_count: ", trainable_count, " non-trainable count ", non_trainable_count) total_memory = number_size*(batch_size*shapes_mem_count + trainable_count + non_trainable_count) gbytes = np.round(total_memory / (1024.0 ** 3), 3) + internal_model_mem_count print("get_model_memory_usage: final size of the model with batch size: ", batch_size, " is: ", gbytes) return gbytes </code></pre>
<p>You can follow below network to avoid out-of memory issue by including <code>maxpooling</code> layer after every two convolution layers.</p> <pre><code>model = Sequential() model.add(Conv_Base) model.add(Conv2D(input_shape=(32,32,3),filters=64,kernel_size=(3,3),padding="same", activation="relu")) model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Conv2D(filters=128, kernel_size=(3,3), activation='relu',padding="same")) model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2),padding="same")) model.add(Flatten()) model.add(Dense(units=4096,activation="relu")) model.add(Dense(units=2048,activation="relu")) model.add(Dropout(0.5)) model.add(Dense(10, activation='softmax')) model.summary() </code></pre> <p>Output:</p> <pre><code>Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= vgg19 (Model) (None, 1, 1, 512) 20024384 _________________________________________________________________ conv2d_16 (Conv2D) (None, 1, 1, 64) 294976 _________________________________________________________________ conv2d_17 (Conv2D) (None, 1, 1, 64) 36928 _________________________________________________________________ max_pooling2d_8 (MaxPooling2 (None, 1, 1, 64) 0 _________________________________________________________________ conv2d_18 (Conv2D) (None, 1, 1, 128) 73856 _________________________________________________________________ conv2d_19 (Conv2D) (None, 1, 1, 128) 147584 _________________________________________________________________ max_pooling2d_9 (MaxPooling2 (None, 1, 1, 128) 0 _________________________________________________________________ conv2d_20 (Conv2D) (None, 1, 1, 128) 147584 _________________________________________________________________ conv2d_21 (Conv2D) (None, 1, 1, 256) 295168 _________________________________________________________________ max_pooling2d_10 (MaxPooling (None, 1, 1, 256) 0 _________________________________________________________________ conv2d_22 (Conv2D) (None, 1, 1, 256) 590080 _________________________________________________________________ conv2d_23 (Conv2D) (None, 1, 1, 256) 
590080 _________________________________________________________________ max_pooling2d_11 (MaxPooling (None, 1, 1, 256) 0 _________________________________________________________________ conv2d_24 (Conv2D) (None, 1, 1, 256) 590080 _________________________________________________________________ conv2d_25 (Conv2D) (None, 1, 1, 512) 1180160 _________________________________________________________________ max_pooling2d_12 (MaxPooling (None, 1, 1, 512) 0 _________________________________________________________________ conv2d_26 (Conv2D) (None, 1, 1, 512) 2359808 _________________________________________________________________ conv2d_27 (Conv2D) (None, 1, 1, 512) 2359808 _________________________________________________________________ max_pooling2d_13 (MaxPooling (None, 1, 1, 512) 0 _________________________________________________________________ conv2d_28 (Conv2D) (None, 1, 1, 512) 2359808 _________________________________________________________________ conv2d_29 (Conv2D) (None, 1, 1, 512) 2359808 _________________________________________________________________ max_pooling2d_14 (MaxPooling (None, 1, 1, 512) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 512) 0 _________________________________________________________________ dense_3 (Dense) (None, 4096) 2101248 _________________________________________________________________ dense_4 (Dense) (None, 2048) 8390656 _________________________________________________________________ dropout_1 (Dropout) (None, 2048) 0 _________________________________________________________________ dense_5 (Dense) (None, 10) 20490 ================================================================= Total params: 43,922,506 Trainable params: 43,922,506 Non-trainable params: 0 </code></pre>
tensorflow|keras|gpu|conv-neural-network
1
376,546
59,932,796
Python: Pandas support in Pint
<p>I am trying to reproduce the example described in the Python Pint library <a href="https://pint.readthedocs.io/en/latest/pint-pandas.html" rel="nofollow noreferrer">here</a>.</p> <p>In the section "Reading from csv" when running the following line:</p> <pre><code>df_ = df.pint.quantify(level=-1) </code></pre> <p>I got the following message error:</p> <blockquote> <p>AttributeError: 'DataFrame' object has no attribute 'pint'</p> </blockquote> <p>Has anybody a solution to that?</p> <p>Thanks in advance!</p> <p>Best regards.</p>
<p>As @Ivan noted in the comments, you need to install <code>pint-pandas</code> package: </p> <p><code>pip install git+https://github.com/hgrecco/pint-pandas.git</code></p> <p><a href="https://github.com/hgrecco/pint/issues/1001#issuecomment-579210723" rel="nofollow noreferrer">Pandas has an open issue</a> regarding this.</p>
python|pandas|pint
1
376,547
59,933,278
Analyse monthly sale in python
<p>I have a data set like this </p> <pre><code> order_status created_at 0 cancelled 05/08/2018 1 cancelled 06/08/2018 2 dispatched 27/08/2018 3 dispatched 30/08/2018 4 cancelled 05/09/2018 5 dispatched 05/09/2018 6 dispatched 25/09/2018 7 cancelled 23/10/2018 8 dispatched 05/10/2018 9 dispatched 02/08/2018 </code></pre> <p>where the date format is dd/mm/yy. What I want is to analyze the data based on month, like how many orders were cancelled in 8th month of the year, how many were dispatched in 9th month of the year. What I'm doing is something like this </p> <pre><code>df2 = df[['order_status','created_at']].\ set_index('created_at').\ resample('M') df2.iplot(kind='bar', xTitle='Date', yTitle='Order Status', title='Monthly Order Status') </code></pre> <p>but it's throwing error </p> <blockquote> <p>TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'</p> </blockquote> <p>what can i do to get the monthly report of all the orders?</p>
<p>You can use a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a>:</p> <pre><code>df['created_at'] = pd.to_datetime(df['created_at']) f = df.groupby(df.created_at.dt.month)['order_status'].value_counts().reset_index(name='count') created_at order_status count 0 2 dispatched 1 1 5 cancelled 2 2 5 dispatched 2 3 6 cancelled 1 4 8 dispatched 2 5 9 dispatched 1 6 10 cancelled 1 # plot f.plot(kind='bar') </code></pre> <p><a href="https://i.stack.imgur.com/hRy2b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hRy2b.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|seaborn
1
376,548
60,132,576
Adding a rowsum column to a large sparse matrix
<p>I have a large sparse matrix X (2 mil rows, 23k cols), and I would like to add a rowsum column on it and return a sparse matrix.</p> <p>I have tried below </p> <pre><code>np.hstack( (X.toarray(),X.sum(axis=1)) ) </code></pre> <p>but it doesn't work well with large sparse matrix. </p> <p>The thing is, when I call <code>X.toarray()</code>, it blows up and terminates python kernel without giving any error message.</p> <p>Similary I have tried</p> <pre><code>sparse.hstack( X ,sparse.csr_matrix(X.sum(axis=1))) sparse.csr_matrix(X.sum(axis=1)).ndim # is 2 X.ndim # 2 as well </code></pre> <p>but it give me below error message:</p> <pre><code>~/miniconda3/lib/python3.7/site-packages/scipy/sparse/construct.py in bmat(blocks, format, dtype) 546 547 if blocks.ndim != 2: --&gt; 548 raise ValueError('blocks must be 2-D') 549 550 M,N = blocks.shape ValueError: blocks must be 2-D </code></pre> <p>Is there any way to work around this problem? </p>
<pre><code>In [93]: from scipy import sparse In [94]: M = sparse.random(5,7, .2, 'csr') In [95]: M Out[95]: &lt;5x7 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 7 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>One sum is a (n,1) <code>np.matrix</code>:</p> <pre><code>In [96]: M.sum(axis=1) Out[96]: matrix([[0.92949904], [1.068337 ], [0.10927561], [0. ], [0.68352182]]) </code></pre> <p>The other a (1,n) matrix:</p> <pre><code>In [97]: M.sum(axis=0) Out[97]: matrix([[0. , 0.90221854, 0.42335774, 1.35578158, 0. , 0. , 0.10927561]]) </code></pre> <p>add the column to the matrix (note the argument details):</p> <pre><code>In [98]: sparse.hstack((M, M.sum(axis=1))) Out[98]: &lt;5x8 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 11 stored elements in COOrdinate format&gt; </code></pre> <p>add the row matrix:</p> <pre><code>In [99]: sparse.vstack((M, M.sum(axis=0))) Out[99]: &lt;6x7 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 11 stored elements in COOrdinate format&gt; </code></pre>
python|numpy|scikit-learn|scipy
2
376,549
60,103,825
Retrieve data in Pandas
<p>I am using pandas and uproot to read data from a .root file, and I get a table like the following one:</p> <p><a href="https://i.stack.imgur.com/92Xbg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/92Xbg.png" alt="enter image description here"></a></p> <p>The aforementioned table is made with the following code:</p> <pre><code>fname = 'ZZ4lAnalysis_VBFH.root' key = 'ZZTree/candTree' ttree = uproot.open(fname)[key] branches = ['Z1Flav', 'Z2Flav', 'nCleanedJetsPt30', 'LepPt', 'LepLepId'] df = ttree.pandas.df(branches, flatten=False) </code></pre> <p>I need to find the maximum value in LepPt, and, once found the maximum, I also need to retrieve the LepLepId of that maximum value. I have no problem in finding the maximum values:</p> <pre><code>Pt_l1 = [max(i) for i in df.LepPt] </code></pre> <p>In this way I get an array with all the maximum values. However, I have to separate such values according to the LepLepId. So I need an array with the maximum LepPt and |LepLepId|=11 and one with the maximum LepPt and |LepLepId|=13.</p> <p>If someone could give me any hint, advice and/or suggestion, I would be very grateful. </p>
<p>You could use the <a href="https://github.com/scikit-hep/awkward-array/" rel="nofollow noreferrer"><code>awkward.JaggedArray</code></a> interface for this (one of the dependencies of <code>uproot</code>), which allows you to have irregularly sized arrays.</p> <p>For this you would need to slightly change the way you load the data, but it allows you to use the same methods you would use with a normal <code>numpy</code> array, namely <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow noreferrer"><code>argmax</code></a>:</p> <pre><code>fname = 'ZZ4lAnalysis_VBFH.root' key = 'ZZTree/candTree' ttree = uproot.open(fname)[key] # branches = ['Z1Flav', 'Z2Flav', 'nCleanedJetsPt30', 'LepPt', 'LepLepId'] branches = ['LepPt', 'LepLepId'] # to save memory, only load what you need # df = ttree.pandas.df(branches, flatten=False) a = ttree.arrays(branches) # use awkward array interface max_pt_idx = a[b'LepPt'].argmax() max_pt_lepton_id = a[b'LepLepId'][max_pt_idx].flatten() </code></pre> <p>This is then just a normal <code>numpy</code> array, which you can assign to a column of a <code>pandas</code> dataframe if you want to. It should have the right dimensionality and order. It should also be faster than using the built-in Python functions.</p> <p>Note that the keys are bytestrings, instead of normal strings and that you will have to take some extra steps if there are events with no leptons (in which case the <code>flatten</code> will ignore those empty events, destroying the alignment).</p> <p>Alternatively, you can also convert the columns afterwards:</p> <pre><code>import awkward df = ttree.pandas.df(branches, flatten=False) max_pt_idx = awkward.fromiter(df[&quot;LepPt&quot;]).argmax() lepton_id = awkward.fromiter(df[&quot;LepLepId&quot;]) df[&quot;max_pt_lepton_id&quot;] = lepton_id[max_pt_idx].flatten() </code></pre> <p>The former will be faster if you don't need the columns again afterwards, otherwise the latter might be better.</p>
python|pandas|physics|uproot
2
376,550
60,177,310
Converting MultiIndex Pandas DataFrame to Pivot
<p>Seems like this should be easy, but I cannot get it to work for the life of me.</p> <p>I have a dataframe for stock data like below. How can I convert the below dataframe into a pivot table with the date as the rows, stock symbols as the columns, and Adj Close as the values (picture at bottom)</p> <p>I am getting the dataframe with this code: <code>pricing = web.DataReader(['MSFT', 'AAPL'], 'yahoo', datetime.datetime(2020, 1, 1), datetime.datetime(2020, 2, 10))</code></p> <p>DataFrame <a href="https://i.stack.imgur.com/TZ8p0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZ8p0.png" alt="DataFrame"></a> Pivot Table</p> <p><a href="https://i.stack.imgur.com/fFgFp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fFgFp.png" alt="Pivot Table"></a></p> <p><strong>EDIT:</strong></p> <p>If I just do <code>pricing = web.DataReader(['MSFT', 'AAPL'], 'yahoo', datetime.datetime(2020, 1, 1), datetime.datetime(2020, 2, 10))['Adj Close']</code> then I run into problems using .loc[] to grab data using another pivot's index.</p> <p>I have a second pivot (shown below) and when I try to do pricing.loc[pivot2.index] I get an error.</p> <p><a href="https://i.stack.imgur.com/BF1SY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BF1SY.png" alt="Pivot 2"></a></p> <p>Error: </p> <blockquote> <p>KeyError: "None of [DatetimeIndex(['1999-01-01', '2000-01-01', '2003-01-01', '2004-01-01',\n '2005-01-01', '2006-01-01', '2007-01-01', '2008-01-01',\n '2009-01-01', '2010-01-01', '2011-01-01', '2012-01-01',\n '2013-01-01', '2014-01-01', '2015-01-01', '2016-01-01',\n '2017-01-01', '2018-01-01', '2019-01-01'],\n dtype='datetime64[ns]', name='date', freq=None)] are in the [index]"</p> </blockquote>
<p>Try using:</p> <pre><code>pricing['Adj Close'] </code></pre> <p>where,</p> <pre><code>pricing = pd.DataFrame(np.random.randint(50,75,(10,6)), index=pd.date_range('01/2/20', periods=10, freq='D'), columns=pd.MultiIndex.from_product([['Adj Close', 'Close', 'High'],['MSFT', 'AAPL']])) Adj Close Close High MSFT AAPL MSFT AAPL MSFT AAPL 2020-01-02 67 71 58 60 50 53 2020-01-03 54 59 64 72 62 50 2020-01-04 51 53 56 70 63 51 2020-01-05 64 71 74 62 68 62 2020-01-06 74 68 69 71 60 62 2020-01-07 55 55 51 70 74 72 2020-01-08 60 58 74 70 73 69 2020-01-09 51 58 72 54 50 61 2020-01-10 64 56 74 52 59 57 2020-01-11 55 50 68 61 60 59 </code></pre> <p>Using <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#basic-indexing-on-axis-with-multiindex" rel="nofollow noreferrer">Basic indexing on axis with MultiIndex</a> we can just select 'Adj Close' at level 0.</p> <pre><code>pricing['Adj Close'] </code></pre> <p>Output:</p> <pre><code> MSFT AAPL 2020-01-02 66 51 2020-01-03 67 67 2020-01-04 74 74 2020-01-05 73 66 2020-01-06 68 52 2020-01-07 67 50 2020-01-08 73 54 2020-01-09 66 52 2020-01-10 62 73 2020-01-11 61 71 </code></pre>
python|pandas|dataframe|pivot|pivot-table
0
376,551
60,283,015
Return type dependent on input type in Python
<p>The following function in Python can returns a list no matter if the input is a <code>list</code>, a <code>numpy.array</code> or a <code>pandas.Series</code>.</p> <p>What is the pythonic way to write it so that the output type is the same as the input type?</p> <pre><code>def foo(input): output = [] output.append(input[0]) for i in range(1, len(input)-1): if (some condition): output.append(input[i]) output.append(input[-1]) return output </code></pre>
<p>In general, you can't do this without making a lot of assumptions about what the input <em>is</em>.</p> <p>The first step would be to make sure that <code>output</code> is the right type, not a list.</p> <pre><code>output = type(input)() </code></pre> <p>However, this assumes that whatever type your input is, you can create an instance by calling it with no arguments.</p> <p>Next, you have to restrict yourself to operations on <code>output</code> that are supported by all expected inputs. Not all iterables support an <code>append</code> method (<code>set</code>, for instance, uses <code>add</code>, not <code>append</code>), and not all iterables support <code>__getitem__</code> (a generator, for instance). This means that you can't generalize your function <em>too</em> much; you always have to keep in mind which types of input you will support.</p> <p>Alternatively, if the set of types you want to support can create an instance from a list, you can let <code>output = []</code> stand, but convert it just before returning:</p> <pre><code>return type(input)(output) </code></pre>
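<p>For the concrete types in the question (list, <code>numpy.array</code>, <code>pandas.Series</code>) that caveat matters: calling <code>np.ndarray</code> directly on the kept values would not rebuild the data, because that constructor interprets its argument as a shape. A minimal sketch handling those types explicitly could look like this (the <code>v &gt; 0</code> test is just a placeholder for the original 'some condition'):</p> <pre><code>import numpy as np
import pandas as pd

def foo(seq):
    values = list(seq)            # normalise element access; works for list, array and Series
    kept = [values[0]]
    for v in values[1:-1]:
        if v &gt; 0:                 # placeholder for the original 'some condition'
            kept.append(v)
    kept.append(values[-1])

    if isinstance(seq, pd.Series):
        return pd.Series(kept)
    if isinstance(seq, np.ndarray):
        return np.array(kept)     # np.ndarray(kept) would be wrong: it expects a shape
    return type(seq)(kept)        # list, tuple, ...
</code></pre> <p>This keeps the filtering logic of the question unchanged and only changes how the result container is rebuilt.</p>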
python|pandas|list|numpy|polymorphism
1
376,552
60,212,425
Best practice (effiency) data manipulation of pandas big dataframe
<p>I'm dealing with a large datasets and I would like to ask the best way to change some entries of a pandas data frame <code>df</code></p> <p>Here is my code:</p> <pre><code>mask = df.P2I.values &gt; c_th df.loc[mask, 'P1I'] = df.P1I - c_offset df.loc[mask, 'P3I'] = df.P3I - c_offset df.loc[mask, 'P2I'] = df.P2I - c_offset </code></pre> <ol> <li>Is there a way to access those rows-columns directly in a single call and do three operations altogether?</li> <li>Should I call <code>.values</code> to the data frame to have a continuous memory allocation to do the subtraction?</li> </ol> <p>I can't measure performance that's why I'm asking. Thanks guys</p>
<p>You could do something like this in your case:</p> <pre><code>df.loc[mask, ['P1I', 'P2I', 'P3I']] -= c_offset </code></pre> <p>You can also use different offsets for each column if you need to, like this (in terms of performance it looks to be rather similar to the first one):</p> <pre><code>df.loc[mask, ['P1I', 'P2I', 'P3I']] -= [ c_offset_1, c_offset_2, c_offset_3 ] </code></pre> <p>However, if performance is crucial, it seems like the best options is indeed use the numpy format. Probably if your "mathematical wrangling" is larger than a single subtraction, this seems the way to go:</p> <pre><code>df.loc[ mask, ["P1I", "P2I", "P3I"]] = df.loc[ mask, ["P1I", "P2I", "P3I"]].values - c_offset </code></pre> <p><em>Note</em>: OP tested this approach in his/her dataset, and mentioned that it was actually slower than just using the previous one. Tried to replicate this, but my computer almost crashed before I was able to...</p> <p>Some timings I took comparing the different approaches:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({ "P1I": np.random.rand(1000000), "P2I": np.random.rand(1000000), "P3I": np.random.rand(1000000) }) c_th = 0.5 c_offset = -1 mask = df.P2I &gt; c_th %timeit df.loc[ mask, "P1I" ] = df.loc[ mask , "P1I" ] - c_offset; df.loc[ mask, "P2I" ] = df.loc[ mask , "P2I" ] - c_offset; df.loc[ mask, "P3I" ] = df.loc[ mask , "P3I" ] - c_offset # 77.9 ms ± 1.19 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit df.loc[ mask, ["P1I", "P2I", "P3I"]] -= c_offset # 59.3 ms ± 1.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit df.loc[ mask, ["P1I", "P2I", "P3I"]] -= [ c_offset, c_offset, c_offset ] # 59.5 ms ± 3.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit df.loc[ mask, ["P1I", "P2I", "P3I"]] = df.loc[ mask, ["P1I", "P2I", "P3I"]].values - c_offset # 43.6 ms ± 553 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre>
python|pandas|numpy|indexing|data-manipulation
1
376,553
60,288,451
Numpy Changing Matrix dimensions
<p>I have a 28x28 pixel image as a numpy array and its shape is (28,28) using the np.array.shape function. I want the shape to be 784x1. In other words with a NxN matrix how do you convert it to a N^2x1. Using the flatten function i get almost what I'm looking for, the shape from flatten is (784,).</p>
<p>Another possible way is to use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.atleast_2d.html" rel="nofollow noreferrer">np.atleast_2d</a>. Note that it places the flattened data in a single row of shape (1, 784), so transpose the result to get the (784, 1) column shape you asked for:</p> <pre><code>np.atleast_2d(arr.flatten()).T </code></pre>
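<p>For completeness, a couple of other ways to get the (784, 1) column shape directly (a small sketch with a dummy 28 x 28 array; the variable name is just for illustration):</p> <pre><code>import numpy as np

arr = np.zeros((28, 28))              # stand-in for the image array

col_a = arr.reshape(-1, 1)            # (784, 1)
col_b = arr.flatten()[:, np.newaxis]  # (784, 1)

print(col_a.shape, col_b.shape)
</code></pre>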
numpy|array-broadcasting
1
376,554
59,973,453
Is there any difference of precision in python methods to calculate euclidean distance?
<p>I am calculating euclidean distance array by array in a numpy array. I was using <code>np.linalg.norm(v1-v2)</code> to this. Since I was planning to use other distance measures I changed that to <code>scipy.spatial.distance.euclidean(v1,v2)</code> to keep a pattern in my code. </p> <p>I noticed the last digits vary a bit in each scenario. I thouth it wouldn't since scipy euclidean version uses functions from numpy core like <code>dot</code> and <code>sqrt</code>. I tried other ways in Python to calculate euclidean distance to compare and for a specific example I got these results. </p> <pre><code>&gt;&gt;&gt; math.sqrt(sum([(a-b)**2 for a,b in zip(v1,v2)])) 1.0065822095995844 &gt;&gt;&gt; numpy.linalg.norm(v1-v2) 1.0065822095995838 &gt;&gt;&gt; sklearn.metrics.pairwise.euclidean_distances(v1.reshape(1,-1),v2.reshape(1,-1))[0,0] 1.0065822095995838 &gt;&gt;&gt; scipy.spatial.distance.euclidean(v1,v2) 1.006582209599584 </code></pre> <p>Just for the record, in my examples, v1 and v2 are normalized histograms.<br> Why is there this difference in precision? Should this happen?</p>
<p>Floating point numbers are stored in the computer as binary fractions, with 53 bits for the significand in double precision, so you cannot get a floating point answer with more than roughly 15 to 17 significant decimal digits. Each of the implementations you tried groups the multiplications and additions in a slightly different order, and every intermediate rounding step can nudge the last bits, which is why the results agree everywhere except in the final digits. <a href="https://docs.python.org/3/tutorial/floatingpoint.html" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/floatingpoint.html</a></p>
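<p>As a small illustration of the operation-order effect, you can compute the same distance with the arithmetic grouped differently (random normalized vectors stand in for your histograms here; depending on the data, some of the printed values may coincide exactly and others differ in the last digits):</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
v1 = rng.random(64); v1 /= v1.sum()    # normalized &quot;histograms&quot;, as in the question
v2 = rng.random(64); v2 /= v2.sum()

d = v1 - v2
print(np.sqrt(np.dot(d, d)))           # dot product, then sqrt
print(np.sqrt(np.sum(d ** 2)))         # square, sum, then sqrt
print(np.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2))))  # pure-Python accumulation
</code></pre>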
python|numpy|scikit-learn|scipy|euclidean-distance
0
376,555
60,169,809
Replace zeros with mean of non-zeros along an axis of array - Python / NumPy
<p>How can I replace 0s of the first rows by mean of the remaining rows? </p> <pre><code>import numpy as np from sklearn.impute import SimpleImputer data = np.array([[0,0,0,0,3,2,4,4,0], [4,6,8,9,3,1,1,4,0], [4,6,8,9,3,1,1,4,0]]) print (data.shape) imputer = SimpleImputer(missing_values=0, strategy='mean') res = imputer.fit_transform(data) print (res) [[4. 6. 8. 9. 3. 2. 4. 4.] [4. 6. 8. 9. 3. 1. 1. 4.] [4. 6. 8. 9. 3. 1. 1. 4.]] </code></pre> <p>But, should not drop any column.</p> <p>Expected result is:</p> <pre><code>[[4. 6. 8. 9. 3. 2. 4. 4. 0] [4. 6. 8. 9. 3. 1. 1. 4. 0] [4. 6. 8. 9. 3. 1. 1. 4. 0]] </code></pre> <p>Any ideas, guys? </p>
<p>Just indexing should be enough for what you want:</p> <pre><code>m = data[0] == 0 data[0, m] = data[1:,m].mean(0) print(data) array([[4, 6, 8, 9, 3, 2, 4, 4, 0], [4, 6, 8, 9, 3, 1, 1, 4, 0], [4, 6, 8, 9, 3, 1, 1, 4, 0]]) </code></pre> <hr> <p>To fill all zeros from the means of all other rows and <em>excluding zeroes</em> from the mean, we could use a masked array:</p> <pre><code>m = data == 0 means = np.ma.array(data, mask = m).mean(0) data + m * means.data array([[4., 6., 8., 9., 3., 2., 4., 4., 0.], [4., 6., 8., 9., 3., 1., 1., 4., 0.], [4., 6., 8., 9., 3., 1., 1., 4., 0.]]) </code></pre> <hr> <p><strong>Update</strong></p> <p>To fill with the mean of the other columns, you could similarly do:</p> <pre><code>m = data == 0 means = np.ma.array(data, mask = m).mean(1) data + m * means.data[:,None] array([[3.25, 3.25, 3.25, 3.25, 3. , 2. , 4. , 4. , 3.25], [4. , 6. , 8. , 9. , 3. , 1. , 1. , 4. , 4.5 ], [4. , 6. , 8. , 9. , 3. , 1. , 1. , 4. , 4.5 ]]) </code></pre>
python|numpy
2
376,556
59,929,165
Understanding Pandas groupby sum()
<p>I am unable to understand how sum() works in case of groupby(). <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.sum.html#pandas.core.groupby.GroupBy.sum" rel="nofollow noreferrer">Official docs</a> say it computes sum of values but I can't see how:</p> <pre><code>df = pd.DataFrame({'A': [1, 1, 2, 1, 2], 'B': [np.nan, 2, 3, 4, 5], 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C']) </code></pre> <p><a href="https://i.stack.imgur.com/NSldJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NSldJ.png" alt="enter image description here"></a></p> <p>I don't see how it computed the result, it definitely didn't do A+B+C for every row. </p>
<p>Your example is quite bad for showing the effect, but let me explain.</p> <p>Groupby is an operation that takes the values of a column and merges all equal values together. We then need an operation that tells the program how to combine the other columns during that merge, and that operation here is sum. (Other operations: mean, count, ...)</p> <p>In your case the values in <code>B</code> are all unique, so no two rows are merged together. And what is the sum of a single element? Exactly that element.</p> <pre><code>l = [3] print(sum(l)) # Output: 3 </code></pre> <p>And that is what is happening in your example.</p> <p>That is why <a href="https://stackoverflow.com/questions/59929165/understanding-pandas-grouby-sum#comment105980409_59929165">@jezrael</a> said in the comments that you probably want to do <code>df = df.groupby('A').sum()</code></p> <p>The output would be:</p> <pre><code> B C A 1 6.0 4 2 8.0 3 </code></pre> <p>As you can see, we group by column A: rows 0, 1 and 3 (where A is 1) are added together, and so are rows 2 and 4 (where A is 2).</p> <p>Maybe you are looking for this:</p> <pre><code>df.sum() </code></pre> <p>which outputs:</p> <pre><code>A 7.0 B 14.0 C 7.0 </code></pre> <p>Or this, mentioned by <a href="https://stackoverflow.com/questions/59929165/understanding-pandas-grouby-sum/59929657#comment105981095_59929165">@Andrea</a>:</p> <pre><code>df.sum(axis=1) </code></pre> <p>which outputs:</p> <pre><code>0 2.0 1 5.0 2 6.0 3 6.0 4 9.0 </code></pre> <p>But groupby is the wrong way to achieve what you want, I think.</p>
python|pandas|pandas-groupby
1
376,557
59,933,336
Eigen matrix in cpp
<p>How to create a dynamic 3d matrix using the Eigen library. and how can slice the particular channel, in that channel slice some height and width?</p> <p>example:</p> <p>I want to create a matrix of size <code>3 * 320 * 240</code> (here channel width and height known at runtime), and then select a slice of <code>3 * 3</code> in each channel.</p>
<p>Perhaps something like this:</p> <pre><code>#include &lt;iostream&gt; #include &lt;vector&gt; #include &lt;Eigen/Dense&gt; using namespace Eigen; int main() { int a = 320; int b = 240; // Create as many as you want, probably better in a loop. MatrixXd m(a, b); MatrixXd n(a, b); MatrixXd o(a, b); std::vector&lt;MatrixXd&gt; v; v.push_back(m); v.push_back(n); v.push_back(o); std::cout &lt;&lt; v.at(0)(0, 1) &lt;&lt; std::endl; } </code></pre>
c++|tensorflow|eigen|eigen3
0
376,558
60,235,593
np.average of word vectors
<p>I have this dictionary that contains words as keys and their vectors as values.</p> <pre><code>my_dict = {'new': array([ 6.77980e-02, -2.07800e-02, -1.22845e-01, 1.75853e-01, 1.49210e-02]), 'its': array([ 7.85300e-03, -8.81160e-02, 2.60125e-01, 1.77740e-02, -1.09075e-011])} </code></pre> <p>I would like to calculate the element-wise average of the vectors for these given words using <code>np.average()</code>. The result should be a <code>np.ndarray</code>.</p> <p>This is my attempt:</p> <pre><code>average = [np.average(my_dict[x], axis=None) for x in self.my_dict] </code></pre>
<p>So that you calculate one average over all the values (rather than collecting a set of nonsense averages for each individual vector), try:</p> <pre><code>np.average(list(my_dict.values())) </code></pre> <p>(If it doesn't return the shape you expect, try each explicit <code>axis=</code> possibility.)</p>
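<p>For reference, a small sketch (reusing <code>my_dict</code> from the question) of how the <code>axis=</code> argument changes the result; <code>axis=0</code> is the one that gives the element-wise average as an <code>np.ndarray</code>:</p> <pre><code>import numpy as np

vecs = list(my_dict.values())            # list of equal-length word vectors

elementwise = np.average(vecs, axis=0)   # ndarray with one value per position
overall = np.average(vecs)               # single scalar over every element

print(elementwise.shape)                 # (5,) for the example vectors
print(overall)
</code></pre>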
python|numpy
1
376,559
60,310,736
Read S3 Files That Meet Last Modified Window Into DataFrame
<p>I have an S3 bucket with objects where the Last Modified ranges from very old to current. I need to be able to find the files with a last modified stamp within a window, and then read those files (which are JSON) into some sort of Data Frame (pandas, spark, etc.). </p> <p>I have attempted to gather the files, read them in individually and append via the following code but it is painfully slow:</p> <pre><code>session = boto3.session.Session(region_name=region) #Gather all keys that have a modified stamp between max_previous_data_extracted_timestamp and start_time_proper s3 = session.resource('s3', region_name=region) bucket = s3.Bucket(args.sourceBucket) app_body = [] for obj in bucket.objects.all(): obj_datetime = obj.last_modified.replace(tzinfo=None) if args.accountId + '/Patient' in obj.key and obj_datetime &gt; max_previous_data_extracted_timestamp_datetime and obj_datetime &lt;= start_time_datetime: obj_df = pd.read_csv(obj.get()['Body']) app_body.append(obj_df) merged_dataframe = pd.concat(app_body) </code></pre> <p>The logic is functional in that I only get objects that have been modified within the window, however, the next part where it gets the body and appends to the list runs for 30-45 minutes on ~10K files. There has to be a better way to do this that I am just not thinking of. </p>
<p>Spark is the way to go on this one.</p> <p>When talking to an S3 bucket with a large number of files, we always need to keep in mind that listing all objects in a bucket is expensive since it returns 1000 objects at a time and a pointer to fetch the next set. This makes it very hard to parallelise unless you know the structure and use it to optimize those calls.</p> <p>I'm sorry if the code doesn't work, I use Scala, but this should be almost in a working state.</p> <p>Knowing that your structure is <code>bucket/account_identifier/Patient/patient_identifier</code>:</p> <pre><code># account_identifiers -- provided from DB accounts_df = sc.parallelize(account_identifiers, number_of_partitions) paths = accounts_df.mapPartitions(fetch_files_for_account).collect() df = spark.read.json(paths) def fetch_files_for_account(accounts): s3 = boto3.client('s3') result = [] for a in accounts: marker = '' while True: request_result = s3.list_objects(Bucket=args.sourceBucket, Prefix=a, Marker=marker) items = request_result['Contents'] for i in items: obj_datetime = i['LastModified'].replace(tzinfo=None) if obj_datetime &gt; max_previous_data_extracted_timestamp_datetime and obj_datetime &lt;= start_time_datetime: result.append('s3://' + args.sourceBucket +'/' + i['Key']) if not request_result['IsTruncated']: break else: marker = request_result['Marker'] return iter(result) </code></pre> <p>Map partitions will make sure you do not have too many clients instantiated. You can control that number using the <code>number_of_partitions</code>.</p> <p>Another optimisation you can do is to manually load contents after the <code>mapPartitions</code> call instead of using <code>collect()</code>. After that stage you'd have <code>String</code>s that are JSON contents and then you'd call <code>spark.createDataFrame(records, schema)</code>. Note: you have to provide the schema.</p> <p>If you do not have <code>account_identifiers</code>, or the number of files will not get into 100k territory, you would have to list all objects in the bucket, filter by <code>last_modified</code> and basically do the same call:</p> <pre><code>spark.read.json(paths) </code></pre>
python|python-3.x|pandas|apache-spark|boto3
1
376,560
65,168,197
Unable to open geojson file with geopandas, getting TypeError
<p>I'm trying to open a geojson file into geopandas but getting the following error message:</p> <pre><code> Traceback (most recent call last): File &quot;C:\Users\arobe\Anaconda3\envs\test_env\lib\site-packages\geopandas\io\file.py&quot;, line 95, in read_file gdf = GeoDataFrame.from_features(f_filt, crs=crs, columns=columns) File &quot;C:\Users\arobe\Anaconda3\envs\test_env\lib\site-packages\geopandas\geodataframe.py&quot;, line 283, in from_features for f in features_lst: File &quot;fiona/ogrext.pyx&quot;, line 1369, in fiona.ogrext.Iterator.__next__ File &quot;fiona/ogrext.pyx&quot;, line 232, in fiona.ogrext.FeatureBuilder.build TypeError: startswith first arg must be bytes or a tuple of bytes, not str </code></pre> <p>The solutions here <a href="https://stackoverflow.com/questions/53890704/geopandas-cannot-read-a-geojson-properly">geopandas cannot read a geojson properly</a>_ have not worked for me and I've tried all manner of encodings. The data is from UK MSOA dataset (<a href="https://geoportal.statistics.gov.uk/datasets/f341dcfd94284d58aba0a84daf2199e9_2/geoservice?page=720" rel="nofollow noreferrer">https://geoportal.statistics.gov.uk/datasets/f341dcfd94284d58aba0a84daf2199e9_2/geoservice?page=720</a>).</p> <p>The data downloads fine and works ok in Tableau. It also looks to be ok when opened in Notepad++ so it doesn't appear to be a data issue but I'm new to this so really don't know what I'm doing!</p> <p>Any help would be much appreciated.</p> <p>Code snippet:</p> <pre><code> gdf=gpd.read_file(&quot;https://opendata.arcgis.com/datasets/f341dcfd94284d58aba0a84daf2199e9_2.geojson&quot;) print(gdf.head(10)) gdf.to_file(&quot;msoa.geojson&quot;, driver='GeoJSON') gdf2=gpd.read_file(&quot;msoa.geojson&quot; ,driver='GeoJSON' ) print(gdf2.head(10)) </code></pre>
<p>I ran your code on Linux using geopandas v0.8.1 and fiona v1.8.17. All OK. The simple plot is as follows.</p> <p><a href="https://i.stack.imgur.com/fhAz0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fhAz0.png" alt="gb-plot" /></a></p>
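<p>Since the same code works with those versions, the error most likely comes from an older or mismatched geopandas/fiona/GDAL combination in your environment. A quick check you could run (the exact fix depends on how your environment was built) is:</p> <pre><code>import geopandas as gpd
import fiona

print(gpd.__version__)    # 0.8.1 in the working setup above
print(fiona.__version__)  # 1.8.17 in the working setup above

# If these are older, upgrading the packages in your conda/pip environment may resolve it:
#   pip install --upgrade geopandas fiona
</code></pre>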
python|geopandas|fiona
1
376,561
65,072,104
Why doesn't this None filtering work with pandas?
<p>Let's say we have:</p> <pre><code>import pandas as pd df = pd.DataFrame([[1, 2], [4, None], [None, 7]], dtype=object, columns=['a', 'b']) print(df['a']) # 0 1 # 1 4 # 2 None </code></pre> <p>I have read <a href="https://stackoverflow.com/questions/45512763/python-pandas-dataframe-remove-all-rows-where-none-is-the-value-in-any-column">Python Pandas Dataframe, remove all rows where &#39;None&#39; is the value in any column</a> and <a href="https://stackoverflow.com/questions/45117272/pandas-filtering-none-values">Pandas - Filtering None Values</a>, and I do know that <strong>the solution to remove rows with <code>None</code></strong> is to filter the dataframe with <code>df[~df['a'].isnull()]</code>, since we have:</p> <pre><code>df['a'].isnull() # 0 False # 1 False # 2 True </code></pre> <p><strong>Question: why don't these two pythonic-looking solutions fail?</strong></p> <pre><code>df[df['a'] != None] # fails: filters nothing df[df['a'] is not None] # fails too </code></pre>
<p>I think it is because pandas can usually swap <code>None</code> for <code>NaN</code>, and both are meant to be handled by special functions like <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a> (<code>isnull</code> in older versions), rather than by plain <code>==</code>/<code>is</code> comparisons.</p> <p>Your solution works if you test element-wise:</p> <pre><code>df = df[df['a'].apply(lambda x: x is not None)] </code></pre>
python|pandas|dataframe|null
1
376,562
65,312,782
How to compare and match data from different columns of same dataframe
<p>i am new to programming and trying to learn python. pardon me if this sounds silly. i am trying to compare two columns in a dataframe and match the values based on the first column(used as reference). when the values in first column are not available in second or third columns, then i need to enter an NaN. could anyone helpme out how to do this? please look at the input and expected output below</p> <p>Input dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>A</th> <th>B</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>290</td> <td>390</td> <td>160</td> </tr> <tr> <td>1</td> <td>390</td> <td>450</td> <td>290</td> </tr> <tr> <td>2</td> <td>160</td> <td>290</td> <td>NaN</td> </tr> <tr> <td>3</td> <td>450</td> <td>NaN</td> <td>450</td> </tr> </tbody> </table> </div> <p>Expected Output</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>A</th> <th>B</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>290</td> <td>290</td> <td>290</td> </tr> <tr> <td>1</td> <td>390</td> <td>390</td> <td>NaN</td> </tr> <tr> <td>2</td> <td>160</td> <td>NaN</td> <td>160</td> </tr> <tr> <td>3</td> <td>450</td> <td>450</td> <td>450</td> </tr> </tbody> </table> </div>
<p>You can do something like this</p> <pre><code>df = pd.DataFrame([[290, 390, 160],[390, 450, 290], [160, 290, np.NaN], [450, np.NaN, 450]], columns=['A', 'B', 'C']) lis = list(df['A']) print(lis) </code></pre> <p><strong>Output</strong></p> <pre><code>[290, 390, 160, 450] </code></pre> <p>Then</p> <pre><code>b = [i if i in list(df['B']) else np.nan for i in lis] c = [i if i in list(df['C']) else np.nan for i in lis] print(b) print(c) </code></pre> <p>Output</p> <pre><code>[290, 390, nan, 450] #b [290, nan, 160, 450] #c </code></pre> <p>Replace the column B,C with list b and c</p> <pre><code>df = df.assign(B=b) df = df.assign(C=c) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>A</th> <th>B</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>290</td> <td>290</td> <td>290</td> </tr> <tr> <td>1</td> <td>390</td> <td>390</td> <td>NaN</td> </tr> <tr> <td>2</td> <td>160</td> <td>NaN</td> <td>160</td> </tr> <tr> <td>3</td> <td>450</td> <td>450</td> <td>450</td> </tr> </tbody> </table> </div>
python|pandas
1
376,563
65,337,770
Understanding for loop in pandas dataframe
<p>Hello there I was coding in pandas when I found this problem:</p> <pre><code>for label,content in data_temp.items(): print(len(label))# As we can see, this prints for us print(len(data_temp.columns)) </code></pre> <p>Firstly, I was trying to print the label, which is the indicator of the column, right? It outputs these different numbers.</p> <p><strong>7 9 9 7 10 12 8 24 9 11 11 15 13 17 11 18 5 12 16 12 9 5 8 12 5 12 12 15 11 14 17 10 9 6 9 11 9 7 14 14 15 10 23 12 5 15 12 16 10 15 17 17 8 9 7 7 22 34</strong></p> <p>And when I print the <code>print(len(data_temp.columns))</code> it outputs:</p> <p><strong>58</strong></p> <p>Why does the <code>data_temp.columns</code> give me a different number from the label in the for loop <code>data_temp.item()</code>? Aren't the labels of the for loop the indices of the <code>data_temp.columns</code>?</p>
<p>You are printing the length of the labels, not the labels themselves: <code>label</code> is the column name (a string), so <code>len(label)</code> is the number of characters in that name, which is why the numbers vary from column to column, while <code>len(data_temp.columns)</code> is the number of columns (58).</p> <p>Try <code>print(label)</code> and <code>print(data_temp.columns)</code>; that should output the labels one by one in the for loop and then the names of the columns as a list.</p>
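<p>A minimal sketch of the difference (the column names here are made up for illustration):</p> <pre><code>import pandas as pd

data_temp = pd.DataFrame({'revenue': [1], 'cost': [2], 'quantity': [3]})

for label, content in data_temp.items():
    print(label, len(label))     # revenue 7 / cost 4 / quantity 8 -- characters per name

print(len(data_temp.columns))    # 3 -- number of columns
</code></pre>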
python|pandas|dataframe|data-science
0
376,564
65,235,536
Working with Lists as Pandas cell elements
<p>I am working with pandas dataframes where some of the columns have individual lists as cell elements. I want to conditionally select elements in each of the cell in one column and read the corresponding elements in lists with same index in the other column (and then print as another column). I am struggling how to do it. To explain the problem with example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;"></th> <th style="text-align: center;">A</th> <th style="text-align: center;">B</th> <th style="text-align: center;">C</th> <th style="text-align: right;">D</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">0</td> <td style="text-align: center;">3.4</td> <td style="text-align: center;">5.7</td> <td style="text-align: center;">[1,4,2]</td> <td style="text-align: right;">[2.5,3.4,1.2]</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">4</td> <td style="text-align: center;">1.7</td> <td style="text-align: center;">[7,4,5,2]</td> <td style="text-align: right;">[12.15,1.2,34.2,67.2]</td> </tr> </tbody> </table> </div> <p>I want to put condition on lists in column C (e.g. selecting values &gt; 3 ) and read corresponding elements in column D, to print them in column E. This should give me something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;"></th> <th style="text-align: center;">A</th> <th style="text-align: center;">B</th> <th style="text-align: center;">C</th> <th style="text-align: center;">D</th> <th style="text-align: right;">E</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">0</td> <td style="text-align: center;">3.4</td> <td style="text-align: center;">5.7</td> <td style="text-align: center;">[1,4,2]</td> <td style="text-align: center;">[2.5,3.4,1.2]</td> <td style="text-align: right;">[3.4]</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">4</td> <td style="text-align: center;">1.7</td> <td style="text-align: center;">[7,4,5,2]</td> <td style="text-align: center;">[12.15,1.2,34.2,67.2]</td> <td style="text-align: right;">[12.15,1.2,34.2]</td> </tr> </tbody> </table> </div> <p>Help will be much appreciated.</p>
<p>List comprehension is your friend - here are zipped columns, and then in nested list comprehension filtered zipped lists:</p> <pre><code>df['E'] = [[b for a, b in zip(x, y) if a &gt; 3] for x, y in zip(df['C'], df['D'])] print (df) A B C D E 0 3.4 5.7 [1, 4, 2] [2.5, 3.4, 1.2] [3.4] 1 4.0 1.7 [7, 4, 5, 2] [12.15, 1.2, 34.2, 67.2] [12.15, 1.2, 34.2] </code></pre> <p>Or you can use boolean indexing with convert lists to numpy arrays:</p> <pre><code>df['E'] = [list(np.array(y)[np.array(x) &gt; 3]) for x, y in zip(df['C'], df['D'])] print (df) A B C D E 0 3.4 5.7 [1, 4, 2] [2.5, 3.4, 1.2] [3.4] 1 4.0 1.7 [7, 4, 5, 2] [12.15, 1.2, 34.2, 67.2] [12.15, 1.2, 34.2] </code></pre>
python|pandas|list
2
376,565
65,361,712
AWS Lambda Function Exiting Execution Improperly
<p>I have Lambda function trying to run PoseNet from TensorflowJS. The program executed properly until it gets to</p> <pre><code> const net = await posenet.load({ architecture: 'MobileNetV1', inputResolution: { width: 183, height: 275 }, scale: 0.8, }) </code></pre> <p>after this line I have a function <code>detect(net, image)</code> which internally takes the input image and executes <code>const pose = await net.estimateSinglePose(image)</code> which should simply return a JSON object containing the result from the model. However, the program skips over this <code>detect()</code> function and completes the Lambda execution successfully. Why is this?</p>
<p>detect is async so you have to <code>await detect(....)</code></p> <p>or use then</p> <pre><code>detect(img).then(predictions =&gt; { console.log('Predictions: ', predictions); }); </code></pre>
tensorflow|aws-lambda
0
376,566
65,075,327
How to add print OP in TensorFlow layer(GRU)?
<p>I add print OP in GRU source code, and want to debug the input of GRU , and also want to debug with some operation inside GRU, But this print nothing. Dose tf.print don't work inside this source code of GRU. I hope someone can give me some suggesstion. Thank you very much!</p> <pre class="lang-py prettyprint-override"><code> def call(self, inputs, state): &quot;&quot;&quot;Gated recurrent unit (GRU) with nunits cells.&quot;&quot;&quot; import tensorflow as tf print_GRU = tf.print(inputs) #&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; add print OP HERE with tf.control_dependencies([print_GRU]): gate_inputs = math_ops.matmul( array_ops.concat([inputs, state], 1), self._gate_kernel) # gate_inputs = math_ops.matmul( # array_ops.concat([inputs, state], 1), self._gate_kernel) gate_inputs = nn_ops.bias_add(gate_inputs, self._gate_bias) value = math_ops.sigmoid(gate_inputs) r, u = array_ops.split(value=value, num_or_size_splits=2, axis=1) r_state = r * state candidate = math_ops.matmul( array_ops.concat([inputs, r_state], 1), self._candidate_kernel) candidate = nn_ops.bias_add(candidate, self._candidate_bias) c = self._activation(candidate) new_h = u * state + (1 - u) * c return new_h, new_h </code></pre>
<p>Inside <code>call</code>, use this line:</p> <pre><code>tf.py_function(func=tf.print, inp=[inputs], Tout=[]) </code></pre>
python|tensorflow|printing|operation
0
376,567
65,394,175
dataframe split to multiple dataframe for each rows with some condition
<p>I have a dataframe like this.</p> <pre><code>A,B 1,2 3,4 5,6 7,8 9,10 11,12 13,14 </code></pre> <p>I would like to split this above dataframe. The splitted dataframe should contains every three rows. The first dataframe splitted can contain from index 0 to index 2. Second contains from index 1 to index and so on.</p> <pre><code>A,B 1,2 3,4 5,6 A,B 3,4 5,6 7,8 A,B 5,6 7,8 9,10 </code></pre> <p>and so on.</p> <p>I have been using forloop and then using the iloc and then adding those splitted dataframe into the list.</p> <p>I am looking if there is some vectorized method to split that above dataframe in pandas. The dataframe is huge and using forloop through each rows is quite slow.</p>
<p>Assuming you have standard <code>RangeIndex</code> indexes and borrowing a vectorized approach for a rolling window <a href="https://stackoverflow.com/questions/6811183/rolling-window-for-1d-arrays-in-numpy">from here</a>, we can get down to numpy's level and:</p> <pre><code>def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) df.to_numpy()[rolling_window(df.index.values, 3)] </code></pre> <p>which yields</p> <pre><code>array([[[ 1, 2], [ 3, 4], [ 5, 6]], [[ 3, 4], [ 5, 6], [ 7, 8]], [[ 5, 6], [ 7, 8], [ 9, 10]], [[ 7, 8], [ 9, 10], [11, 12]], [[ 9, 10], [11, 12], [13, 14]]]) </code></pre> <p>If you need these as data frames back, just use the constructor and a <code>map</code></p> <pre><code>map(pd.DataFrame, df.to_numpy()[rolling_window(df.index.values, 3)]) </code></pre>
python|pandas
1
376,568
65,174,274
How do I group values in different rows that have the same name in Pandas?
<p>I have this Pandas DataFrame <code>df</code>:</p> <pre><code> column1 column2 0 x a 1 x b 2 x c 3 y d 4 y e 5 y f 6 y g 7 z h 8 z i 9 z j </code></pre> <p>How do I group the values in <code>column2</code> according to the value in <code>column1</code>?</p> <p>Expected output:</p> <pre><code> x y z 0 a d h 1 b e i 2 c f j 3 g </code></pre> <p>I'm new to Pandas, I'd really appreciate your help.</p>
<p>This is a pivot problem with some preprocessing work:</p> <pre><code>(df.assign(index=df.groupby('column1').transform('cumcount')) .pivot('index', 'column1', 'column2')) column1 x y z index 0 a d h 1 b e i 2 c f j 3 NaN g NaN </code></pre> <p>We're pivoting using &quot;column1&quot; as the header and &quot;column2&quot; as the values. To make pivoting possible, we need a 3rd column which identifies the uniqueness of the values being pivoted, so we build that with <code>groupby</code> and <code>cumcount</code>.</p>
python|pandas|dataframe
2
376,569
65,213,769
Merge two values and remove duplicates
<p>I want to merge the values <em>Account details</em> and <em>Account specifics</em> when accounting for which department it is in, and then remove any duplicate processes resulting from the merge. In this example, <em>Account specifics</em> in the HR department will cause a duplicate when merged with <em>Account details</em> in the HR department, since both of them perform <em>Process2</em>. The duplicate will be removed.</p> <p>I have tried a lot with pd.groupby to solve this problem, but I can't seem to figure it out.</p> <p>Here is what it looks like now:</p> <pre><code> Name Department Process 0 Account details HR Process1 1 Account details HR Process2 2 Account details Finance Process1 3 Account specifics HR Process2 4 Account specifics Finance Process2 5 Account specifics Retail Process1 </code></pre> <p>Here is the code:</p> <pre><code>df = pd.DataFrame({&quot;Name&quot;: [&quot;Account details&quot;, &quot;Account details&quot;, &quot;Account details&quot;, &quot;Account specifics&quot;, &quot;Account specifics&quot;, &quot;Account specifics&quot;], &quot;Department&quot;: [&quot;HR&quot;, &quot;HR&quot;, &quot;Finance&quot;, &quot;HR&quot;, &quot;Finance&quot;, &quot;Retail&quot;], &quot;Process&quot;: [&quot;Process1&quot;, &quot;Process2&quot;, &quot;Process1&quot;, &quot;Process2&quot;, &quot;Process2&quot;, &quot;Process1&quot;]}) </code></pre> <p>This is the desired output (with <em>Account details</em> as the merged result):</p> <pre><code> Name Department Process 0 Account details HR Process1 1 Account details HR Process2 2 Account details Finance Process1 3 Account details Finance Process2 4 Account details Retail Process1 </code></pre>
<p>I think the simplest approach is to assign the same value to the <code>Name</code> column and then remove duplicates:</p> <pre><code>df = df.assign(Name = 'Account details').drop_duplicates() print (df) Name Department Process 0 Account details HR Process1 1 Account details HR Process2 2 Account details Finance Process1 4 Account details Finance Process2 5 Account details Retail Process1 </code></pre> <p>If necessary, specify the columns to check for duplicates:</p> <pre><code>df = df.assign(Name = 'Account details').drop_duplicates(subset=['Department','Process']) </code></pre> <p>See more <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer">here</a>.</p>
python|pandas|join|merge
2
376,570
65,192,666
Is there a way to concatenate the output of a pandas apply into a single multiindex?
<p>I have a dataframe with two columns where each of the two columns contains a list indices. I want to get the product of the two lists on a row by row level to create a multiindex.</p> <p>For example, df1 below</p> <pre><code>|---------------------|------------------| | col_a | col_b | |---------------------|------------------| | [A1, A2] | [B1] | |---------------------|------------------| | [A3] | [B2, B3] | |---------------------|------------------| </code></pre> <p>would turn into this:</p> <pre><code>MultiIndex([('A1', 'B1'), ('A2', 'B1'), ('A3', 'B2'), ('A3', 'B3')], names = ['col_a', 'col_b'], length = 4) </code></pre> <p>What I'm doing right now is the following:</p> <pre><code>my_multiindex = df1.apply(lambda row: pd.MultiIndex.from_product([row[&quot;col_a&quot;], row[&quot;col_b&quot;]], names = ['col_a', 'col_b']), axis = 1) </code></pre> <p>However, the output of this is looks like:</p> <pre><code>0 MultiIndex([('A1', 'B1'), ('A2', 'B1')]) 1 MultiIndex([('A3', 'B2'), ('A3', 'B3')]) </code></pre> <p>One object per row, which makes sense since we are using an apply on axis 1.</p> <p>Is there any way that I can &quot;concatenate&quot; all the output objects into a single multiindex? Preferably this can be done within the lambda function, but doing it afterwards in a separate step wouldn't be the end of the world.</p> <p>If you have any suggestions for how to do the overall task (not just the specific &quot;concatenate&quot; step I described) that would be helpful as well. I've already tried doing this in a for loop using iterrows but that is taking way too long as the size of df1 is about 50K rows, the average length of the list in col_a is 300, and the average length of the list in col_b is 30.</p> <p>Any help would be appreciated!</p>
<p>The easiest way is probably also the &quot;dumbest&quot;: just take products of each row, put them all together, and feed them to <code>pd.MultiIndex.from_tuples</code>.</p> <pre><code>import itertools rowwise_products = df.apply( lambda row: list(itertools.product(*row)), axis=1 ) all_tuples = rowwise_products.sum() # list concatenation &gt;&gt;&gt; pd.MultiIndex.from_tuples(all_tuples, names=df.columns) MultiIndex([('A1', 'B1'), ('A2', 'B1'), ('A3', 'B1'), ('A3', 'B2'), ('A3', 'B3')], names=['col_a', 'col_b']) </code></pre>
python|pandas|apply|multi-index
1
376,571
65,141,343
how to split a column into a comma separated string?
<p>Here is sample dataframe and a is my column name.</p> <pre><code> a b x 0 1 3 a 1 2 4 a 2 1 3 b 3 2 5 b 4 2 4 c </code></pre> <p>need a column unique values to be seperated in this way</p> <pre><code>required output: '1','2' </code></pre> <p>below is my code i'm getting like this</p> <pre><code>x=x1['id'].unique() x2=','.join(&quot;\'&quot;+str(i)+&quot;\'&quot; for i in x) for this way of code i'm getting output some thing like this output:&quot;'1','2'&quot; **2nd approach:** x2=','.join(&quot;\'&quot;+x1['id']+&quot;\'&quot;): if i'm do this i'm getting the count of id has been increasing </code></pre> <p>i need to pass output into sql query like select * from abc where a in (x2) for that reason need output something like this</p> <pre><code>x2 --&gt;'1','2' i'm getting x2---&gt;&quot; '1','2'&quot; </code></pre>
<p>Try using your first approach with f-strings to make things easier.</p> <pre><code>x2 =' ,'.join(f&quot;'{str(i)}'&quot; for i in x) query = rf&quot;&quot;&quot; SELECT * FROM abc WHERE a in ({x2}) &quot;&quot;&quot; </code></pre> <p>If you try <code>print(query)</code>, it gives</p> <pre><code>SELECT * FROM abc WHERE a in ('1' ,'2') </code></pre>
python|python-3.x|pandas|dataframe
0
376,572
65,322,276
Custom metric in Keras using keras.losses.CategoricalCrossentropy
<p>I'm struggling to implement a custom metric in Keras (2.4.3 with the tensorflow backend) such that I can trigger an early stopping mechanic. Essentially, I want to have Keras stop training a model should there be too big a decrease in the training loss function. To do this, I am using the following code:</p> <pre><code>def custom_metric(y_true,y_pred): y=keras.losses.CategoricalCrossentropy(y_true,y_pred) z=1.0/(1.0-y.numpy()) return z model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['categorical_accuracy',custom_metric]) custom_stop = EarlyStopping(monitor='custom_metric',min_delta=0,patience=2, verbose=1,mode='min',restore_best_weights=True) </code></pre> <p>I'm getting errors along the lines of AttributeError: 'CategoricalCrossentropy' object has no attribute 'numpy', which I understand is due to the definition of z, but I can't get something equivalent to work using by replacing the floats in the definition of z with tf.constants or anything like that. Does anyone have any suggestions? Thanks a lot</p>
<p>Use this instead, mind the spelling:</p> <pre><code>keras.losses.categorical_crossentropy(y_true,y_pred) </code></pre>
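<p>Putting that together, a minimal sketch of how the metric from the question could be rewritten (the <code>1/(1-loss)</code> expression simply mirrors the original intent and assumes the mean batch loss stays below 1; <code>model</code> and <code>opt</code> are the objects from the question):</p> <pre><code>import tensorflow as tf
from tensorflow import keras

def custom_metric(y_true, y_pred):
    # lowercase categorical_crossentropy is the function form: it returns a tensor
    # of per-sample losses, so everything stays symbolic and no .numpy() is needed
    y = keras.losses.categorical_crossentropy(y_true, y_pred)
    y = tf.reduce_mean(y)
    return 1.0 / (1.0 - y)

model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['categorical_accuracy', custom_metric])
</code></pre>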
python|tensorflow|keras|deep-learning
1
376,573
65,426,110
Pandas groupby by category
<p>I want to group by category</p> <p>I have this DF(for ex)</p> <pre><code>Period val2 val3 1 5546708.53 19741660.61 1 5235399.56 13022005.11 2 2294129.82 7336506.28 3 4888151.37 11870210.71 4 1463851.95 8057862.59 5 1733743.17 5131406.15 5 1682831.20 11953188.47 6 2334756.66 8721801.29 7 1011877.55 5565875.39 8 2171051.93 8348294.45 8 797894.95 7218259.63 9 1005890.25 5085592.10 </code></pre> <p>And I want to group by Period (1-3 is the first group, 4-6 is the second, 7-9 is the third) result</p> <pre><code>Period val2 val3 1 sum(1) sum(2) 4 sum(3) sum(4) 7 sum(5) sum(6) </code></pre>
<p>Use <code>pd.cut</code> to get the buckets:</p> <pre><code>periods = pd.cut(df.Period, bins=[0,3,6,9]) (df.groupby(periods,as_index=False) .agg({'Period':'min', 'val2':'sum', 'val3':'sum'}) ) </code></pre> <p>Output:</p> <pre><code> Period val2 val3 0 1 17964389.28 51970382.71 1 4 7215182.98 33864258.50 2 7 4986714.68 26218021.57 </code></pre>
python|python-3.x|pandas|dataframe
2
376,574
65,178,296
Frustrating error: Throwing KeyError when creating new columns in Pandas
<p>This is weird, I have used this code before and it worked. But now it throws me a KeyError saying:</p> <blockquote> <p>KeyError: &quot;None of [Index(['0', '1'], dtype='object')] are in the [columns]&quot;</p> </blockquote> <pre><code>d = {'col1': [1, 2, 3], 'col2': [3, 4, 5]} df = pd.DataFrame(data=d) df[[str(c) for c in range(2)]] = [[5,6],[6,6], [7,7]] </code></pre> <p>I have even asked my friend to test it on his side and he has no issue executing the code without any errors. Also, my <code>Pandas</code> version is indeed different, one is 1.1.4, one is 1.0.3.</p> <p>Also, if anyone has an elegant solution using <code>.loc</code> or <code>.iloc</code> please let me know as well. I am really frustrated why this does not work on my computer when versioning is not an issue.</p>
<p>For me it is working well.</p> <p>Another solution is to create a new DataFrame and join it to the original:</p> <pre><code>df = df.join(pd.DataFrame([[5,6],[6,6], [7,7]], index=df.index, columns=['0','1'])) print (df) col1 col2 0 1 0 1 3 5 6 1 2 4 6 6 2 3 5 7 7 </code></pre>
python|pandas|numpy
0
376,575
65,233,188
Generate heat-map of cyclical continuous features - 24-hour time
<p>Having a Pandas DF with hour of day, I've calculated the sin/cos time feature, <a href="https://ianlondon.github.io/blog/encoding-cyclical-features-24hour-time/" rel="nofollow noreferrer">based on this article</a>:</p> <pre><code> counter hour sin_time cos_time 0 1 1 2.588190e-01 9.659258e-01 1 0 2 5.000000e-01 8.660254e-01 2 2 3 7.071068e-01 7.071068e-01 3 0 4 8.660254e-01 5.000000e-01 ... 19 0 20 -8.660254e-01 5.000000e-01 20 0 21 -7.071068e-01 7.071068e-01 21 1 22 -5.000000e-01 8.660254e-01 22 0 23 -2.588190e-01 9.659258e-01 </code></pre> <p>I'm trying to plot a heat-map based on the X,Y of the sin/cos time and the value of the counter, so if the counter is 0 no point is added. I've googeled around and written the following code:</p> <pre><code>import numpy as np import numpy.random import matplotlib.pyplot as plt # Generate some test data x = raw_df_tz['sin_time'] y = raw_df_tz['cos_time'] heatmap, xedges, yedges = np.histogram2d(x, y, bins=50) extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]] plt.clf() plt.imshow(heatmap.T, extent=extent, origin='lower') plt.show() </code></pre> <p>Output:</p> <hr /> <p><a href="https://i.stack.imgur.com/2w9ZY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2w9ZY.png" alt="enter image description here" /></a></p> <p>How can I incorporate the counter value and influence the char accordingly?</p>
<p>Found out that you can add a <code>weights</code> argument to <code>histogram2d</code>:</p> <pre><code>np.histogram2d(x, y, weights=w, bins=50) </code></pre> <p>where <code>w</code> is my counter column: <a href="https://i.stack.imgur.com/CSmmQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CSmmQ.png" alt="enter image description here" /></a></p>
python|pandas|time|heatmap|scatter-plot
0
376,576
65,300,388
Tensorflow Taking too much time on GPU
<pre><code>import tensorflow as tf gpus = tf.config.experimental.list_physical_devices('GPU') for gpu in gpus: print(&quot;Name:&quot;, gpu.name, &quot; Type:&quot;, gpu.device_type) from tensorflow.python.client import device_lib device_lib.list_local_devices() tf.test.is_gpu_available() </code></pre> <p>I am able to get the output as True (i.e Tensorflow is able to detect the GPU) but the problem is its taking 5 - 10 minutes in showing the Output and continuously consumes system memory. I am using RTX 3060Ti with Python 3.8, CUDA 10.1, cudnn 7.6, tensorflow 2.3.1 and tensorflow-gpu 2.3.1.</p>
<p>It's because Tensorflow version 2.3 doesn't support using CUDA 11, but all Ampere cards require a minimum of CUDA 11.0 and Cudnn 8.</p> <p>Luckily TensorFlow 2.4 got released recently. It's compatible, but with a slightly lower CUDA 11.0.</p> <p>Please update your installation to use CUDA 11.0 from the archives section on the Nvidia website. It won't be as performant as 11.1 (which was the first version with official support for RTX 3000), but at least it will support Ampere GPUs</p> <p>You can easily check the CUDA and cuDNN versions used to build Tensorflow from <a href="https://www.tensorflow.org/install/source_windows" rel="nofollow noreferrer">here</a>.</p>
python|tensorflow2.0|tensorflow2.x
2
376,577
65,188,251
How to pivot a dataframe to collapse multiple rows into one
<p>I have a dataframe that has 4 columns: UID, Date, Type, and Value as such:</p> <pre><code>UID Date Type Value 50 2020-12-01 3 15 50 2020-12-01 2 13 50 2020-12-01 1 50 135 2020-12-02 2 0 135 2020-12-02 1 12 50 2020-12-02 4 100 50 2020-12-02 2 25 50 2020-12-02 3 15 50 2020-12-02 1 40 </code></pre> <p>For a given user on a given date, each type that appears is unique (i.e there will never be two entries for Type X for UID Y on Day Z). Type is an integer between 1 and 4, inclusive.</p> <p>I would like to transform this into a dataframe that has a column for each type, the value in the corresponding column, and reduces all rows to having a unique UID/Date pair as such, with missing type/value pairs as nan or 0:</p> <pre><code>UID Date Type_1 Type_2 Type_3 Type_4 50 2020-12-01 50 13 15 nan 135 2020-12-02 12 0 nan nan 50 2020-12-02 40 25 15 100 </code></pre> <p>I've been tinkering with pivot but can't quite get it, any assistance would be much appreciated!</p>
<p>It is simply</p> <pre><code>df.pivot(index = ['UID','Date'], values = 'Value', columns = 'Type').add_prefix('Type_') </code></pre> <p>output</p> <pre><code> Type Type_1 Type_2 Type_3 Type_4 UID Date 50 2020-12-01 50.0 13.0 15.0 NaN 2020-12-02 40.0 25.0 15.0 100.0 135 2020-12-02 12.0 0.0 NaN NaN </code></pre> <p>you can stick <code>reset_index()</code> at the end of the expression if you do not like those columns in the index</p>
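<p>For completeness, a sketch of the same call with <code>reset_index()</code> chained on, so <code>UID</code> and <code>Date</code> come back as ordinary columns (result stored in a new variable):</p> <pre><code>out = (df.pivot(index=['UID', 'Date'], values='Value', columns='Type')
         .add_prefix('Type_')
         .reset_index())
</code></pre>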
python|pandas|dataframe
2
376,578
65,357,979
How to delete empty sheets that have a header row from excel workbook in Python?
<p>I need to delete all empty sheets from a workbook, and these sheets have headers, so they are not completely empty. Just row 2 and on will be empty and I need to delete these sheets.</p> <p>I currently have this:</p> <pre><code>def DeleteEmptyColumnsAndRows(filename): import pandas as pd import pathlib full_path = filename df = pd.read_excel(full_path, header=None, sheet_name=None) # engine can be openpyxl if we need .xlsx ext writer = pd.ExcelWriter(new_loc, engine='xlwt') for key in df: sheet = df[key].dropna(how=&quot;all&quot;).dropna(1, how=&quot;all&quot;) sheet.to_excel(writer, key, index=False, header=False) writer.save() </code></pre> <p>This works fine for completely empty sheets but my sheets have headers. Any idea how I can manipulate this to work how I need it to?</p> <p>Thanks in advance, any guidance is appreciated!</p>
<p>Figured it out. This is the solution to delete all sheets from a workbook where the 1st row (the header row) is the only row with any data in it:</p> <pre><code>def DeleteEmptyColumnsAndRows(filename): import pandas as pd full_path = filename df_with_header = pd.read_excel(full_path, header=None, sheet_name=None) df_without_header = pd.read_excel(full_path, header=0, sheet_name=None) # engine can be openpyxl if we need .xlsx ext writer = pd.ExcelWriter(new_loc, engine='xlwt') for key in df_with_header: if not df_without_header[key].empty: print(df_with_header[key]) sheet = df_with_header[key].dropna(how=&quot;all&quot;).dropna(1, how=&quot;all&quot;) sheet.to_excel(writer, key, index=False, header=False) writer.save() </code></pre>
python|python-3.x|pandas|dataframe
0
376,579
65,460,997
Import latest file from S3 bucket to Pandas dataframe
<p><strong>SITUATION</strong></p> <p>I have written some code in Python 3 in which I use the OS and Glob packages to find the latest csv file in a directory and convert it to a Panda dataframe.</p> <p>Code as follows:</p> <pre><code>import pandas as pd from pathlib import Path import glob import os # LOOK FOR ALL CSVs IN FOLDER AND GET LATEST fld = '.' latest_csv_file = glob.glob('/path/to/file/filename.csv') imported_file = max(latest_csv_file, key=os.path.getctime) # IMPORT LATEST CV USING PANDAS imported_file = pd.read_csv(latest_csv_file, dtype={2:'str'}) # REMOVE SPACES FROM COL NAMES AND CONVERT COL NAMES TO LOWER CASE imported_file.columns = imported_file.columns.str.replace(' ', '') imported_file.columns = imported_file.columns.str.lower() </code></pre> <p>This seems to work well, however I need to be able to perform the same operation in my Lambda fuction, which saves a csv file attachment from an incoming email.</p> <p><strong>WHAT I HAVE TRIED</strong></p> <pre><code> bucketname = 'my_bucket' s3_client = boto3.client('s3') response = s3_client.list_objects_v2(Bucket = bucketname2, Prefix = 'attachments/') all = response['Contents'] latest_file = max(all, key=lambda x: x['LastModified']) print(latest_file) </code></pre> <p>This will give me the name of the file in the following format</p> <blockquote> <p>attachments/nolu34lqipv1cl14i0qjebcc1rnqb2ngbnf4ss01-filename.csv</p> <p>[folder]/[original_msg_key]-[filename.csv]</p> </blockquote> <p>However if I try and read the file into a Pandas df I get the follwoing</p> <pre><code>imported_file = pd.read_csv(latest_file, dtype={2:'str'}) </code></pre> <blockquote> <p>module initialization error Invalid file path or buffer object type: &lt;class 'dict'&gt;</p> </blockquote> <p>I understand this to be because Pandas is expecting a specific file path and not an object of type 'dict', but can't see how to achieve my aim.</p> <p>Any help appreciated.</p>
<p>To get the key of your object, you have to use <code>latest_file['Key']</code>, and for pandas you should build a full <code>s3://bucket/key</code> path (the bucket name is part of the URL, not of the key):</p> <pre><code>imported_file = pd.read_csv('s3://' + bucketname + '/' + latest_file['Key'], dtype={2:'str'}) </code></pre> <p>This will require <a href="https://s3fs.readthedocs.io/en/latest/" rel="nofollow noreferrer">s3fs</a> correctly set up for your Python environment, along with AWS credentials for S3 access.</p> <p>You also have <code>bucketname2</code> where <code>bucketname</code> should be used.</p>
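<p>Putting both pieces together (bucket name and prefix taken from the question), a sketch of the Lambda-side code could be:</p> <pre><code>import boto3
import pandas as pd

bucketname = 'my_bucket'
s3_client = boto3.client('s3')

response = s3_client.list_objects_v2(Bucket=bucketname, Prefix='attachments/')
latest_file = max(response['Contents'], key=lambda x: x['LastModified'])

# full s3://bucket/key path; needs s3fs and S3 read permissions on the Lambda role
imported_file = pd.read_csv('s3://' + bucketname + '/' + latest_file['Key'], dtype={2: 'str'})
</code></pre>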
python-3.x|pandas|amazon-web-services|amazon-s3|boto3
0
376,580
65,076,353
How to write a list with strings and DataFrames into an .txt file
<p>I would like to write this list into a .txt file. My problem is that I can't add the whole data of the 1801*20 matrix.</p> <p><a href="https://i.stack.imgur.com/S841m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S841m.png" alt="list" /></a></p> <p>I tried to convert the list into a pandas DataFrame and write it into a .txt.</p> <pre><code>dftistext = pd.DataFrame(tistext) dftistext.to_csv('test.txt', mode='a', header=False,index=None) </code></pre> <p>The index 0 string is right, but as you can see the DataFrame is not written completely. In addition to that I would like to remove the quotation marks as well.</p> <p><a href="https://i.stack.imgur.com/g5oNl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g5oNl.png" alt=".txt File output" /></a></p> <p>Thank you for the help!</p> <p>Edit: now another problem has occurred. When I try to write the pandas DataFrame into the .txt file with <code>fobj.write(e.to_string(header=False,index=None)+'\n')</code>, a space is added before each row. I tried to remove it with <code>lstrip()</code>: <code>fobj.write((((e.to_string(header=False,index=None)).lstrip()))+'\n')</code> but it only works for the first and last row of the 1801 rows. Do you have any suggestions?</p>
<p>We make the assumption that you know that you have two types of elements in your list: string and pandas dataframes. So you have to identify across this only two types.</p> <pre><code>with open('test.txt', 'w+') as fobj: for e in Liste: if isinstance(e, str): fobj.write(e + '\n') else: # The other type -- DataFrame fobj.write(e.to_string(header=False,index=None)+'\n') </code></pre>
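<p>Regarding the edit about the extra space in each row: <code>.lstrip()</code> only strips characters at the very start of the whole multi-row string, which is why only the first row changes. A sketch that strips every line of the <code>to_string()</code> output individually could be:</p> <pre><code>with open('test.txt', 'w+') as fobj:
    for e in Liste:
        if isinstance(e, str):
            fobj.write(e + '\n')
        else:
            text = e.to_string(header=False, index=False)
            # strip the leading/trailing padding of every row, not just the whole block
            fobj.write('\n'.join(line.strip() for line in text.splitlines()) + '\n')
</code></pre>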
python|pandas|list|dataframe|export-to-csv
1
376,581
65,245,787
What is the Problem in my Building Softmax from Scratch in Pytorch
<p>I read this <a href="https://d2l.ai/chapter_linear-networks/softmax-regression-scratch.html" rel="nofollow noreferrer">post</a> ans try to build softmax by myself. Here is the code</p> <pre><code>import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import time import sys import numpy as np #============================ get the dataset ========================= mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=True, download=True, transform=transforms.ToTensor()) mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=False, download=True, transform=transforms.ToTensor()) batch_size = 256 num_workers = 0 train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers) test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers) #============================ train ========================= num_inputs = 28 * 28 num_outputs = 10 epochs = 5 lr = 0.05 # Initi the Weight and bia W = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_outputs)), dtype=torch.float) b = torch.zeros(num_outputs, dtype=torch.float) W.requires_grad_(requires_grad = True) b.requires_grad_(requires_grad=True) # softmax function def softmax(X): X = X.exp() den = X.sum(dim=1, keepdim=True) return X / den # loss def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))).sum() # accuracy function def accuracy(y_hat, y): return (y_hat.argmax(dim=1) == y).float().mean().item() for epoch in range(epochs): train_loss_sum = 0.0 train_acc_sum = 0.0 n_train = 0 for X, y in train_iter: # X.shape: [256, 1, 28, 28] # y.shape: [256] # flatten the X into [256, 28*28] X = X.flatten(start_dim=1) y_pred = softmax(torch.mm(X, W) + b) loss = cross_entropy(y_pred, y) loss.backward() W.data = W.data - lr * W.grad b.data = b.data - lr* b.grad W.grad.zero_() b.grad.zero_() train_loss_sum += loss.item() train_acc_sum += accuracy(y_pred, y) n_train += y.shape[0] # evaluate the Test test_acc, n_test = 0.0, 0 with torch.no_grad(): for X_test, y_test in test_iter: X_test = X_test.flatten(start_dim=1) y_test_pred = softmax(torch.mm(X_test, W) + b) test_acc += accuracy(y_test_pred, y_test) n_test += y_test.shape[0] print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f' % (epoch + 1, train_loss_sum/n_train , train_acc_sum / n_train, test_acc / n_test)) </code></pre> <p>Compare with original post, Here I turn</p> <pre><code>def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))) </code></pre> <p>into</p> <pre><code>def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))).sum() </code></pre> <p>Since the <code>backward</code> need a scalar.</p> <p>However, My results are</p> <pre><code>epoch 1, loss nan, train acc 0.000, test acc 0.000 epoch 2, loss nan, train acc 0.000, test acc 0.000 epoch 3, loss nan, train acc 0.000, test acc 0.000 epoch 4, loss nan, train acc 0.000, test acc 0.000 epoch 5, loss nan, train acc 0.000, test acc 0.000 </code></pre> <p>Any idea?</p> <p>Thanks.</p>
<p>Change:</p> <pre><code>def cross_entropy(y_hat, y): return - torch.log(y_hat.gather(1, y.view(-1, 1))).sum() </code></pre> <p>To:</p> <pre><code>def cross_entropy(y_hat, y): return - torch.log(y_hat[range(len(y_hat)), y] + 1e-8).sum() </code></pre> <p>Outputs should be something like:</p> <pre><code>epoch 1, loss 9.2651, train acc 0.002, test acc 0.002 epoch 2, loss 7.8493, train acc 0.002, test acc 0.002 epoch 3, loss 6.6875, train acc 0.002, test acc 0.003 epoch 4, loss 6.0928, train acc 0.003, test acc 0.003 epoch 5, loss 5.1277, train acc 0.003, test acc 0.003 </code></pre> <p>Also be aware that the <code>nan</code> problem can be caused by <code>X = X.exp()</code> in <code>softmax(X)</code>: when X is too large, <code>exp()</code> outputs <code>inf</code>. When this happens you could clip <code>X</code> before calling <code>exp()</code>.</p>
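<p>A common alternative to clipping is the usual max-subtraction trick: subtracting the per-row maximum before exponentiating does not change the softmax result mathematically, but keeps <code>exp()</code> from overflowing. A sketch:</p> <pre><code>def softmax(X):
    # subtract the row-wise max for numerical stability; the softmax output is unchanged
    X = (X - X.max(dim=1, keepdim=True)[0]).exp()
    return X / X.sum(dim=1, keepdim=True)
</code></pre>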
python-3.x|pytorch|loss-function|softmax
1
376,582
65,175,275
Pytorch: Converting a VGG model into a sequential model, but getting different outputs
<p><strong>Background:</strong> I'm working on an adversarial detector method which requires to access the outputs from each hidden layer. I loaded a pretrained VGG16 from <code>torchvision.models</code>.</p> <p>To access the output from each hidden layer, I put it into a sequential model:</p> <pre><code>vgg16 = models.vgg16(pretrained=True) vgg16_seq = nn.Sequential(*( list(list(vgg16.children())[0]) + [nn.AdaptiveAvgPool2d((7, 7)), nn.Flatten()] + list(list(vgg16.children())[2]))) </code></pre> <p>Without <code>nn.Flatten()</code>, the forward method will complaint about dimensions don't match between <code>mat1</code> and <code>mat2</code>.</p> <p>I looked into the <em>torchvision</em> <a href="https://github.com/pytorch/vision/blob/d3d393672b877f80fedd2d11de6b84fb19599c2e/torchvision/models/vgg.py" rel="nofollow noreferrer">VGG</a> implementation, it uses the <code>[feature..., AvgPool, flatten, classifier...]</code> structure. Since <code>AdaptiveAvgPool2d</code> layer and <code>Flatten</code> layer have <strong>no parameters</strong>, I assume this should work, but I have different outputs.</p> <pre><code>output1 = vgg16(X_small) print(output1.size()) output2 = vgg16_seq(X_small) print(output2.size()) torch.equal(output1, output2) </code></pre> <p><strong>Problem:</strong> They are in the same dimension but different outputs.</p> <blockquote> <p>torch.Size([32, 1000])<br /> torch.Size([32, 1000])<br /> False</p> </blockquote> <p>I tested the outputs right after the <code>AdaptiveAvgPool2d </code> layer, the outputs are equal:</p> <pre><code>output1 = nn.Sequential(*list(vgg16.children())[:2])(X_small) print(output1.size()) output2 = nn.Sequential(*list(vgg16_seq)[:32])(X_small) print(output2.size()) torch.equal(output1, output2) </code></pre> <blockquote> <p>torch.Size([32, 512, 7, 7])<br /> torch.Size([32, 512, 7, 7])<br /> True</p> </blockquote> <p>Can someone point out what went wrong? Thank you</p>
<p>You need to put the models in eval mode before doing inference. The pretrained VGG16 classifier contains <code>Dropout</code> layers, which are stochastic while the model is in training mode, so the two copies produce different outputs until dropout is disabled.</p> <p>i.e.</p> <pre><code>vgg16.eval() vgg16_seq.eval() </code></pre>
python|neural-network|computer-vision|pytorch
1
376,583
65,142,574
How to sum across columns after pandas groupby?
<p>I am using the <code>groupby()</code> operation on a pandas dataframe. I am then trying to sum the columns together for each row. However I keep getting an error when calling <code>sum()</code>.</p> <p>I have attached my code below:</p> <pre class="lang-py prettyprint-override"><code>bike_use = bike_use.groupby(['road_name', 'count_point_id'])['pedal_cycles', 'two_wheeled_motor_vehicles'].sum(axis = 1) </code></pre> <p>And the error that I get is:</p> <pre><code>TypeError: sum() got an unexpected keyword argument 'axis' </code></pre> <p>Even though the documentation for summing and pandas dataframe and a pandas series, <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sum.html" rel="nofollow noreferrer">here</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.sum.html" rel="nofollow noreferrer">here</a> both allow the keyword <code>axis</code>.</p> <p>I don't know why it throws this error even though the functions allow it to take this as a keyword?</p>
<p>I think you want double <code>sum</code> - first aggregate by <code>sum</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.sum.html" rel="nofollow noreferrer"><code>GroupBy.sum</code></a> and then sum both columns with <code>sum(axis=1)</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sum.html" rel="nofollow noreferrer"><code>DataFrame.sum</code></a>:</p> <pre><code>bike_use = bike_use.groupby(['road_name', 'count_point_id'])['pedal_cycles', 'two_wheeled_motor_vehicles'].sum().sum(axis = 1) </code></pre> <p>But if need sum 2 columns without groupby:</p> <pre><code>bike_use['both'] = bike_use[['pedal_cycles', 'two_wheeled_motor_vehicles']].sum(axis = 1) </code></pre>
python|pandas|dataframe
0
376,584
65,146,830
problem with numpy ('module' object is not callable), ( fails to pass a sanity check due to a bug in the windows runtime.)
<p>I got this error trying to install another package: the current Numpy installation fails to pass a sanity check due to a bug in the windows runtime. I've checked the problem and it seems to occur using the newest version of numpy (1.19.4). I then downgraded and it still didn't work. I then tried a lot of uninstalling, reinstalling and whatnot until it seems I destroyed my numpy completely. Even the simplest functions do not work anymore. Right now I'm back to version 1.19.4, but when I try to use it, it says the object is not callable. Example:</p> <pre><code>import numpy as np print(np.random(10)) </code></pre> <p>TypeError: 'module' object is not callable</p> <p>It might be because of permissions? I got these messages while reinstalling numpy: &quot;ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Zugriff verweigert: 'C:\Users\...\Anaconda3\Lib\site-packages\~-mpy\.libs\libopenblas.NOIJJG62EMASZI6NYURL6JBKM4EVBGM7.gfortran-win_amd64.dll' Consider using the <code>--user</code> option or check the permissions.&quot;</p> <p>&quot;ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Zugriff verweigert: 'C:\Users\...\Anaconda3\Lib\site-packages\~andas\_libs\algos.cp38-win_amd64.pyd' Consider using the <code>--user</code> option or check the permissions.&quot;</p> <p>RuntimeError: The current Numpy installation ('C:\Users\...\AppData\Local\Temp\pip-build-env-h9d1jjmf\overlay\Lib\site-packages\numpy\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information:</p> <p>I even reinstalled Anaconda, but I think the old numpy just reinstalls itself. What is the easiest way to reset my numpy completely? (I tried the obvious pip uninstall numpy and pip install numpy...)</p>
<p>You want <code>np.random.random(10)</code>. <code>np.random</code> is the module, <code>np.random.random</code> is the function.</p> <p>Documentation: <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.random.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/random/generated/numpy.random.random.html</a></p>
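<p>For example:</p> <pre><code>import numpy as np

print(np.random.random(10))   # 10 uniform floats in [0, 1)

# or the newer Generator interface (NumPy 1.17+)
rng = np.random.default_rng()
print(rng.random(10))
</code></pre> <p>As for the original sanity-check <code>RuntimeError</code> on Windows: the commonly suggested workaround at the time was to pin an older release, for example <code>pip install numpy==1.19.3</code>, since 1.19.4 deliberately raised that error on affected Windows builds.</p>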
python|numpy|permissions|anaconda|reset
0
376,585
65,392,679
json_normalize: Accessing data that is both in library and array form
<p>I'm trying to extract the values for the 'Resource' in the following nested json</p> <pre><code>{ &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;A-----------&quot;, &quot;Action&quot;: [ &quot;logs:C-----------&quot;, &quot;logs:P-----------&quot; ], &quot;Resource&quot;: [ &quot;a----&quot;, &quot;b----&quot;, &quot;c----&quot; ] } { &quot;Effect&quot;: &quot;A-----------&quot;, &quot;Action&quot;: &quot;l-----------p&quot;, &quot;Resource&quot;: &quot;a-----------*&quot; }, { &quot;Effect&quot;: &quot;A-----------&quot;, &quot;Action&quot;: [ &quot;-----------&quot;, &quot;l-----------&quot; ], &quot;Resource&quot;: [ &quot;a----&quot;, &quot;b----&quot; ] } ] } </code></pre> <p>As you can see, the the 'Resource' information is in an array <code>&quot;Resource&quot;:[&quot;value&quot;]</code> for the first and third example, however the second one is in library form <code>{&quot;Resource&quot;: &quot;value&quot;}</code></p> <p><strong>In the actual json file I'm extracting the information from, most of the 'Resource' data is in array form.</strong></p> <p>I can get the information for the array with the following code:</p> <pre><code>df = pd.json_normalize(response['Document']['Statement'], record_path=['Resource']) </code></pre> <p>but since the 'Resource' in library form is between the two in array form, it gives me this error:</p> <p><code>TypeError(TypeError: {'Resource': '----'} has non list value ---- for path Resource. Must be list or null.</code></p> <p>I know I can access the library information with the code below but I want to get the information for all of the 'Resource' instances in one go.</p> <pre><code>table_df = pd.json_normalize(response['Statement']) table_df = table_df.reindex(columns=['Resource']) </code></pre> <p>What approach could I solve this with?</p>
<p>You don't need pandas to manipulate a json.</p> <pre><code>import pandas as pd data = { &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;A-----------&quot;, &quot;Action&quot;: [ &quot;logs:C-----------&quot;, &quot;logs:P-----------&quot; ], &quot;Resource&quot;: [ &quot;a----&quot;, &quot;b----&quot;, &quot;c----&quot; ] }, { &quot;Effect&quot;: &quot;A-----------&quot;, &quot;Action&quot;: &quot;l-----------p&quot;, &quot;Resource&quot;: &quot;a-----------*&quot; }, { &quot;Effect&quot;: &quot;A-----------&quot;, &quot;Action&quot;: [ &quot;-----------&quot;, &quot;l-----------&quot; ], &quot;Resource&quot;: [ &quot;a----&quot;, &quot;b----&quot; ] } ] } resource = list() for statement in data['Statement']: if isinstance(statement['Resource'], list): resource.extend(statement['Resource']) else: resource.append(statement['Resource']) df = pd.Series(resource) </code></pre> <p>you then get a series that looks like:</p> <pre><code>df 0 a---- 1 b---- 2 c---- 3 a-----------* 4 a---- 5 b---- dtype: object </code></pre>
pandas
1
376,586
65,172,004
Is it possible to close/reopen connection using pd.read_sql and chunking?
<p>Let's say I have a very large table and I want to use pd.read_sql with chunksize = 10000.</p> <p>The way I approach it now is:</p> <pre><code>from sqlalchemy import create_engine import pandas as pd engine = create_engine('dialect://user:pass@host:port/schema') with engine.connect() as conn: for df in pd.read_sql('SELECT * FROM VERY_LARGE_TABLE', con=conn, chunksize=10000): do stuff </code></pre> <p>My issue here (with snowflake as a data source) is that the connection will expire midway through &quot;do stuff&quot;.</p> <p>Is it possible to do something like:</p> <pre><code>engine = create_engine('dialect://user:pass@host:port/schema', echo=False) # chunk 1 with engine.connect() as conn: df = pd.read_sql('SELECT * FROM VERY_LARGE_TABLE', con=conn) do stuff # chunk 2 with engine.connect() as conn: df = pd.read_sql('SELECT * FROM VERY_LARGE_TABLE', con=conn) do stuff </code></pre> <p>The alternative I'm exploring right now is setting <code>connect_args={&quot;client_session_keep_alive&quot;: True}</code> in the engine</p>
<p>Can you do something like the following? (Snowflake uses <code>LIMIT &lt;count&gt; OFFSET &lt;start&gt;</code> rather than the MySQL-style <code>LIMIT start, count</code>.)</p> <pre><code>start = 0 chunk = 5000 while True: with engine.connect() as conn: query = f'SELECT * FROM VERY_LARGE_TABLE LIMIT {chunk} OFFSET {start}' df = pd.read_sql(query, con=conn) # if df is empty stop querying if df.empty: break else: # increase start for next iteration start += chunk do stuff </code></pre>
pandas|sqlalchemy|snowflake-cloud-data-platform
0
376,587
65,362,162
Apply an operation to specific columns in numpy array
<p>I would like to apply feature normalisation to a numpy array. Normally this would be trivial with python broadcasting, for example one would do something like this:</p> <pre><code>train_mean = train.mean(axis=0) train_std = train.std(axis=0) train = (train - train_mean) / train_std val = (val - train_mean) / train_std test = (test - train_mean) / train_std </code></pre> <p>However, my numpy array has 9 columns, hence the shape of <code>train_mean</code> and <code>train_std</code> is <code>(9,)</code>, and I only want to apply normalisation to specific columns in my array, for which I have the indexes in a dictionary:</p> <pre><code>column_indices {'blind angle': 0, 'fully open': 1, 'ibn': 2, 'idh': 3, 'altitude': 4, 'azimuth_sin': 5, 'azimuth_cos': 6, 'dgp': 7, 'ill': 8} </code></pre> <p>I have made a list of the columns I would like to normalise:</p> <pre><code>FEATURE_NORM_COLS = ['blind angle', 'ibn', 'idh', 'altitude'] </code></pre> <p>I only want to normalise these columns based on their index and the respective indexes in my train_mean and train_std lists (which are the same as the indexes of my data).</p> <p>What is the best way to achieve this operation?</p> <p>I have done the following, which seems to get the desired result, but it seems very cumbersome. Is there a better way of doing this?</p> <pre><code>for name in FEATURE_NORM_COLS: train[:, column_indices[name]] = (train[:, column_indices[name]] - train_mean[column_indices[name]]) / train_std[column_indices[name]] </code></pre> <p>UPDATE</p> <p>I have followed an approach similar to the comments which I think is more elegant and avoids looping over each column in the dataset.</p> <pre><code>def normalise(dataset, col_indices=COLUMN_INDICES, norm_cols=NORM_COLS, train_mean=TRAIN_MEAN, train_std=TRAIN_STD): &quot;&quot;&quot; Returns normalised features with mean of zero and std of 1. formula is (train - train_mean) / train_std, but we index by indices since we dont want to normalise all columns. Args: dataset: numpy array to normalise col_indices -&gt; dict: the indices of cols in dataset norm_cols -&gt; list: columns to be normalised train_mean -&gt; list: means of train set columns train_std -&gt; list: std's of train set columns &quot;&quot;&quot; indices = [col_indices[col] for col in norm_cols] dataset[:,indices] = (dataset[:,indices] - train_mean[indices]) / train_std[indices] return dataset </code></pre>
<p>Would that be a better way?</p> <pre><code>from sklearn import preprocessing np.set_printoptions(suppress=True, linewidth=1000, precision=3) np.random.seed(5) train = np.array([np.random.uniform(low=0, high=100, size=10), np.random.uniform(low=0, high=30, size=10), np.random.uniform(low=0, high=70, size=10), np.random.uniform(low=0, high=20, size=10), np.random.uniform(low=0, high=90, size=10), np.random.uniform(low=0, high=50, size=10), np.random.uniform(low=0, high=30, size=10), np.random.uniform(low=0, high=80, size=10), np.random.uniform(low=0, high=90, size=10)]).T column_indices = {'blind angle': 0, 'fully open': 1, 'ibn': 2, 'idh': 3, 'altitude': 4, 'azimuth_sin': 5, 'azimuth_cos': 6, 'dgp': 7, 'ill': 8} FEATURE_NORM_COLS = ['blind angle', 'ibn', 'idh', 'altitude'] indices = [column_indices[c] for c in FEATURE_NORM_COLS] print('TRAIN\n', train, '\n') scaler = preprocessing.StandardScaler().fit(train) train[:,indices] = scaler.transform(train)[:, indices] print('PARTIALLY SCALED\n', train) </code></pre> <p>You can reuse the <code>scaler</code> for your validation set and test set if need be (<a href="https://scikit-learn.org/stable/modules/preprocessing.html" rel="nofollow noreferrer">see documentation</a>).</p> <p>Output:</p> <pre><code>TRAIN [[22.199 2.422 41.995 0.486 23.319 38.543 19.061 4.091 84.919] [87.073 22.153 18.607 4.091 72.225 24.247 24.357 15.093 10.052] [20.672 13.239 19.928 13.997 78.343 1.456 27.8 29.238 75.92 ] [91.861 4.749 17.751 15.59 83.047 4.326 27.379 19.543 31.143] [48.841 26.398 22.929 0.459 0.199 5.573 24.744 63.607 9.074] [61.174 8.223 10.092 11.553 42.254 12.562 2.826 28.168 34.507] [76.591 12.427 11.593 0.033 88.332 48.246 10.831 51.11 45.932] [51.842 8.882 67.475 10.309 35.905 31.588 1.065 39.473 86.499] [29.68 18.864 67.216 12.796 73.236 40.833 16.391 46.68 33.436] [18.772 17.395 13.189 19.712 49.181 28.304 23.884 75.144 1.113]] PARTIALLY SCALED [[-1.085 2.422 0.618 -1.247 -1.132 38.543 19.061 4.091 84.919] [ 1.371 22.153 -0.501 -0.713 0.638 24.247 24.357 15.093 10.052] [-1.143 13.239 -0.437 0.755 0.859 1.456 27.8 29.238 75.92 ] [ 1.552 4.749 -0.542 0.991 1.029 4.326 27.379 19.543 31.143] [-0.077 26.398 -0.294 -1.251 -1.969 5.573 24.744 63.607 9.074] [ 0.39 8.223 -0.908 0.393 -0.447 12.562 2.826 28.168 34.507] [ 0.974 12.427 -0.836 -1.314 1.22 48.246 10.831 51.11 45.932] [ 0.037 8.882 1.836 0.208 -0.677 31.588 1.065 39.473 86.499] [-0.802 18.864 1.824 0.577 0.674 40.833 16.391 46.68 33.436] [-1.215 17.395 -0.76 1.601 -0.196 28.304 23.884 75.144 1.113]] </code></pre>
python-3.x|numpy
2
376,588
65,119,407
Pandas: How do I search series cells for partial string match?
<p>I have a DataFrame created by importing an Excel sheet that has several columns but inconsistent text data in its rows. For example, one column labeled &quot;Airplane 1&quot; has a &quot;Gross weight: 2500&quot; in row 25 where as the column &quot;Airplane 2&quot; has &quot;Gross weight: 3000&quot; in a different row. All the columns have an entry for &quot;Gross weight:&quot; but their row numbers are off by 1 or more. I can iterate through the columns and rows, but I can't seem to query a cell in a row for a specific string. I've tried several approaches, below find one fail. It's reasonably clear that I'm erroneously trying to generate a single boolean from series, thus generating the error, but I can't seem to get into the separate cells in the series. Ultimately, I want to identify specific parameters, &quot;Gross weight:&quot; for example, extract and tie the number associated with that parameter with its specific column. And yes, I'm new at this, thanks in advance...</p> <p>Just to show that the data is there...</p> <pre><code>#print(df.at[2,'Aviat_A-1B']) x = df.loc[11,&quot;Aviat_A-1B&quot;] #x.partition(':') #print(type(x)) #print(x.split(':')) print(x) Gross weight (lbs.): 2000 </code></pre> <p>This doesn't work...</p> <pre><code>sub = 'Gross weight (lbs.):' for index, row in df.iterrows(): print(type(index)) print(index) print('~~~~~~') print(type(row)) print(row) print('------') if row.str.extract(sub): print(type(row)) print(row) print('------') &lt;class 'int'&gt; 3 </code></pre> <pre><code>&lt;class 'pandas.core.series.Series'&gt; 1997_7GCAA_American_Champion_Adventure Price as tested: $76,000 Aviat_A-1B Engine make/model: Lycoming 1960_Beech_Travel_Air_B95 Engine make/model: Lycoming... 1979_Beechcraft_Bonanza_A-36 IO-520BB 1977_Bellanca_8KCAB-180_Super_Decathlon Engine make/model: Lycoming... ... 1974_Piper_Arrow_II_with_LoPresti_Speed Price: $48,500 (plus mod. cost) 1999_Piper_Archer_III Engine make/model: Lycoming Ryan_Navion Engine make/model: Cont. E-185 1997_Mooney_Ovation Engine make/model: Continental IO-550G 1997_Mooney_Encore_Prototype Engine make/model: Cont TSIO-360-SB Name: 3, Length: 61, dtype: object ------ --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-52-8590ea2a7401&gt; in &lt;module&gt; 11 print(row) 12 print('------') ---&gt; 13 if row.str.extract(sub): 14 print(type(row)) 15 print(row) C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in __nonzero__(self) 1327 1328 def __nonzero__(self): -&gt; 1329 raise ValueError( 1330 f&quot;The truth value of a {type(self).__name__} is ambiguous. &quot; 1331 &quot;Use a.empty, a.bool(), a.item(), a.any() or a.all().&quot; ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! UPDATE !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! There has to be a smarter way than this to find and extract a number, 199 in the case below... 
`wing_area = [] #print(df) for col, item in df.iteritems(): # print(col) wing_area_bool = item.str.contains(&quot;Wing area&quot;, na=False) # print(df.index[wing_area_bool]) # print(item[wing_area_bool]) # wing_area.append(item[wing_area_bool]) # wing_area.append(item[wing_area_bool].str.split(&quot;:&quot;)) wing_area.append(item[wing_area_bool].str.split()) print(wing_area[-1]) #print(len(wing_area[-1])) #print(str(wing_area[-1])) #x = (str(wing_area[-1]).split(&quot;,&quot;)) #y = x[-2].split(&quot;]&quot;) #int(y[0].strip()) int(str(wing_area[-1]).split(&quot;,&quot;)[-2].split(&quot;]&quot;)[0].strip()) 22 [Wing, area, (sq., ft.):, 199] Name: 1960_Beech_Travel_Air_B95, dtype: object 199` </code></pre>
<pre><code>df.loc[df.column_name.str.contains(&quot;Gross weight&quot;, na=False)] </code></pre> <p><a href="https://stackoverflow.com/questions/28311655/ignoring-nans-with-str-contains">reference</a></p> <p>To do this iteratively you could loop over the column names (note the bracket indexing, <code>df[name]</code>, so the loop variable is used instead of a literal attribute lookup):</p> <pre><code>for name in [column_name_1, column_name_2, column_name_x]: df.loc[df[name].str.contains(&quot;Gross weight&quot;, na=False)] </code></pre> <p>And depending on what you'd like to do with the result, you would do it inside the loop.</p>
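<p>For the update about pulling the number itself out of cells like <code>Wing area (sq. ft.): 199</code>, <code>str.extract</code> with a capture group may be simpler than repeated splitting; a sketch (the regex is an assumption, adjust it to the exact cell text):</p> <pre><code>wing_area = {}
for col in df.columns:
    s = df[col].str.extract(r'Wing area.*?:\s*(\d+)', expand=False).dropna()
    if not s.empty:
        wing_area[col] = int(s.iloc[0])   # e.g. {'1960_Beech_Travel_Air_B95': 199, ...}
</code></pre>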
python|pandas|string|series
0
376,589
65,200,767
pandas dataframe index remove date from datetime
<p>File dataexample_df.txt:</p> <pre><code>2020-12-04_163024 26.15 26.37 19.40 24.57 2020-12-04_163026 26.15 26.37 19.20 24.57 2020-12-04_163028 26.05 26.37 18.78 24.57 </code></pre> <p>I want to read it in as pandas dataframe where the index column has only the time part in format <code>'%H:%M:%S'</code>, without the date.</p> <pre><code>import pandas as pd df = pd.read_csv(&quot;dataexample_df.txt&quot;, sep=' ', header=None, index_col=0) print(df) </code></pre> <p>Output:</p> <pre><code> 1 2 3 4 0 2020-12-04_163024 26.15 26.37 19.40 24.57 2020-12-04_163026 26.15 26.37 19.20 24.57 2020-12-04_163028 26.05 26.37 18.78 24.57 </code></pre> <p>However, wanted output:</p> <pre><code> 1 2 3 4 0 16:30:24 26.15 26.37 19.40 24.57 16:30:26 26.15 26.37 19.20 24.57 16:30:28 26.05 26.37 18.78 24.57 </code></pre> <p>I have tried different <code>date_parser=</code> -functions (cf. Answers in <a href="https://stackoverflow.com/questions/23797491/parse-dates-in-pandas">Parse_dates in Pandas</a>) but get only error messages. Also, somewhat relevant is <a href="https://stackoverflow.com/q/37801321/11199684">Python/Pandas convert string to time only</a> but no luck, I'm stuck. I'm using Python 3.7.</p>
<p>Considering your <code>df</code> to be this:</p> <pre><code>In [121]: df Out[121]: 1 2 3 4 0 2020-12-04_163024 26.15 26.37 19.40 24.57 2020-12-04_163026 26.15 26.37 19.20 24.57 2020-12-04_163028 26.05 26.37 18.78 24.57 </code></pre> <p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>Series.str.replace</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.time.html" rel="nofollow noreferrer"><code>Series.dt.time</code></a>:</p> <pre><code>In [122]: df.reset_index(inplace=True) In [127]: df[0] = pd.to_datetime(df[0].str.replace('_', ' ')).dt.time In [130]: df.set_index(0, inplace=True) In [131]: df Out[131]: 1 2 3 4 0 16:30:24 26.15 26.37 19.40 24.57 16:30:26 26.15 26.37 19.20 24.57 16:30:28 26.05 26.37 18.78 24.57 </code></pre>
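<p>Alternatively, the conversion may be done while reading the file, by giving <code>read_csv</code> a converter for the first column; a sketch (the format string is assumed from the sample data):</p> <pre><code>import pandas as pd

df = pd.read_csv(
    'dataexample_df.txt', sep=' ', header=None, index_col=0,
    # parse '2020-12-04_163024' and keep only the time part as the index
    converters={0: lambda s: pd.to_datetime(s, format='%Y-%m-%d_%H%M%S').time()},
)
</code></pre>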
python|pandas|dataframe
3
376,590
65,246,124
Dataset Labeled as not found or Corrupt, but the dataset is not corrupt
<p>I have been trying to use this Github (<a href="https://github.com/AntixK/PyTorch-VAE" rel="nofollow noreferrer">https://github.com/AntixK/PyTorch-VAE</a>) and call the CelebA dataset using the config file listed. Specifically under the vae.yaml I have placed the path of the unzipped file where I have downloaded the celeba dataset (<a href="https://www.kaggle.com/jessicali9530/celeba-dataset" rel="nofollow noreferrer">https://www.kaggle.com/jessicali9530/celeba-dataset</a>) on my computer. And every time I run the program, I keep getting these errors:</p> <p>File &quot;/usr/local/lib/python3.6/dist-packages/torchvision/datasets/celeba.py&quot;, line 67, in <strong>init</strong> ' You can use download=True to download it') RuntimeError: Dataset not found or corrupted. You can use download=True to download it AttributeError: 'VAEXperiment' object has no attribute '_lazy_train_dataloader'</p> <p>I have tried to download the dataset, but nothing changes. So I have no idea why the program is not running. The run.py calls the experiment.py which uses this dataloader to retrieve the information:</p> <pre><code>def train_dataloader(self): transform = self.data_transforms() if self.params['dataset'] == 'celeba': dataset = CelebA(root = self.params['data_path'], split = &quot;train&quot;, transform=transform, download=False) else: raise ValueError('Undefined dataset type') self.num_train_imgs = len(dataset) return DataLoader(dataset, batch_size= self.params['batch_size'], shuffle = True, drop_last=True) </code></pre> <p>The config file grabs the information passed on the root. So what I did was upload a few files to google colab (some .jpg files) and when I run the command stated in the GItHub, python run.py -c config/vae.yaml, it states that the dataset is not found or is corrupt. I have tried this on my linux machine and the same error occurs, even when I used the downloaded and unzip link. I have gone further to attempt to change the self.params['data_path'] to the actual path and that still does not work. Any ideas what I can do?</p>
<p>My PyTorch version is 1.6.0.<br /> There are two issues I have faced. Below is my solution. It is not official, but it works for me. Hopefully the next PyTorch version will fix it.</p> <ol> <li>Issue: &quot;Dataset not found or corrupted.&quot;</li> </ol> <p>When I checked the file celeba.py in the torchvision library, I found this line:</p> <pre><code>if ext not in [&quot;.zip&quot;, &quot;.7z&quot;] and not check_integrity(fpath, md5): return False </code></pre> <p>This part makes self._check_integrity() return False, and the program raises the error message we got.</p> <p>Solution: you can skip this part by adding &quot;if False&quot; immediately in front of this line:</p> <pre><code>if False: if ext not in [&quot;.zip&quot;, &quot;.7z&quot;] and not check_integrity(fpath, md5): return False </code></pre> <ol start="2"> <li><p>celeba.py downloads the dataset if you choose download=True, but these two files are broken: &quot;list_landmarks_align_celeba.txt&quot; and &quot;list_attr_celeba.txt&quot;.</p> <p>You need to find them somewhere else, download them and replace the broken ones.</p> </li> </ol> <p>Hope these solutions will help you!</p>
pytorch|dataset|google-colaboratory
0
376,591
65,201,233
How to generate an onnx file with linear layers using Pytorch
<p>I want to create a network on the basis of the vgg16 network, but adding linear layers (Gemm) just after the conv2d layers, for normalization purpose. After that, I want to export the network in an ONNX file.</p> <p>The first part seems to work: I took the Pytorch code for generating the vgg16 and modified it as follows</p> <pre><code>import torch.nn as nn class VGG(nn.Module): def __init__(self, features, num_classes=8, init_weights=True): super(VGG, self).__init__() self.features = features self.classifier = nn.Sequential( nn.Linear(512 * 7 * 7, 4096), nn.Linear(4096, 4096), # New shift layer nn.ReLU(True), nn.Dropout(), nn.Linear(4096, 4096), nn.Linear(4096, 4096), # New shift layer nn.ReLU(True), nn.Dropout(), nn.Linear(4096, 8), nn.Linear(8, 8), # New shift layer ) def forward(self, x): x = self.features(x) x = x.view(x.size(0), -1) x = self.classifier(x) return x def make_layers(cfg, batch_norm=False): layers = [] in_channels = 3 n = 224 for v in cfg: if v == 'M': layers += [nn.MaxPool2d(kernel_size=2, stride=2)] n = int(n / 2) elif v == 'B': layers += [nn.AdaptiveAvgPool2d(n)] else: conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1) linear = nn.Linear(n,n,True) if batch_norm: layers += [conv2d, linear, nn.BatchNorm2d(v), nn.ReLU(inplace=True)] else: layers += [conv2d, linear, nn.ReLU(inplace=True)] in_channels = v return nn.Sequential(*layers) cfg = {'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M','B'], } def vgg16(**kwargs): &quot;&quot;&quot;VGG 16-layer model (configuration &quot;D&quot;) &quot;&quot;&quot; model = VGG(make_layers(cfg['D']), **kwargs) return model </code></pre> <p>But when I insert the weights and export to onnx, I see that my linear layers are not referred to as Gemm but as {Transpose + Matmult + Add}</p> <p><a href="https://i.stack.imgur.com/4Rq6C.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Rq6C.jpg" alt="Top layers in the onnx file" /></a></p> <p>The Transpose part is the weights matrix and the Add part is for the biases (which are all 0).</p> <p>Am I wrong to think that it's possible to do this, or is there a way to get a real Gemm layer here or another way to do this normalization (which is simply multiply all outputs by a single value)?</p>
<p>The input to nn.Linear here is a 4-D tensor, so torch exports it as {Transpose, MatMul, Add}. Only when the input is 2-D will a Gemm op be exported.</p> <p>You can look at the <a href="https://github.com/pytorch/pytorch/blob/b31f58de6fa8bbda5353b3c77d9be4914399724d/torch/nn/functional.py#L1672" rel="nofollow noreferrer">source code</a> of PyTorch for more information.</p>
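<p>A minimal way to see the difference is to export the same <code>nn.Linear</code> once with a 2-D input and once with a 4-D input; a sketch (file names are arbitrary):</p> <pre><code>import torch
import torch.nn as nn

lin = nn.Linear(7, 7)

# 2-D input: exported as a single Gemm node
torch.onnx.export(lin, torch.randn(1, 7), 'linear_2d.onnx')

# 4-D input (as after a conv layer): exported as Transpose + MatMul + Add
torch.onnx.export(lin, torch.randn(1, 3, 7, 7), 'linear_4d.onnx')
</code></pre>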
pytorch|onnx
0
376,592
65,261,037
Getting rid of extra spaces I have in different rows of a column in datafram
<p>I am trying to iterate through the rows of my column (item_2) and get rid of the extra spaces each row has by using</p> <pre><code>&quot; &quot;.join(x.split()) </code></pre> <p>But I get</p> <blockquote> <p>AttributeError: can't set attribute</p> </blockquote> <p>error when running following code.</p> <pre><code>for row in dffirst2[['item_2']].itertuples(): row.item_2 = &quot; &quot;.join(row.item_2.split()) </code></pre>
<p>If there's no leading/trailing whitespace, you can just use <code>.str.replace(r' +',r' ')</code></p> <p>e.g, <code>dffirst['item_2']=dffirst['item_2'].str.replace(r' +',r' ')</code>.</p> <p>If there is, you'll need to toss on a .strip() or use a slightly better regex.</p>
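<p>With leading/trailing whitespace included, a sketch combining the regex with <code>.str.strip()</code> (dataframe and column names taken from the question) could be:</p> <pre><code>dffirst2['item_2'] = dffirst2['item_2'].str.replace(r'\s+', ' ', regex=True).str.strip()
</code></pre>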
python|python-3.x|pandas|dataframe
0
376,593
65,233,322
REGEX IN DATAFRAME PANDAS
<p>I have a dataframe with a column like this:</p> <pre><code>COL1 PACK[5.95 $ /if game game1 + game1] PACK[3 $ /2 products.] </code></pre> <p>I want to create other columns as follows, according to COL1:</p> <pre><code>pack_plus pack 5,95 3 </code></pre> <p>I am OK for pack_plus : <code>PACK\[(\d[\d.]*) $[^][]*\+[^][]*]</code></p> <p>but not for pack (I don't want to select rows with &quot;+&quot;)</p> <p>I have this : <code>PACK\[(\d[\d.]*) €[^][()]*]</code></p> <p>Thank you</p>
<p>You can use</p> <pre class="lang-py prettyprint-override"><code>PACK\[(\d[\d.]*)\s*\$[^][+]*] </code></pre> <p>See the <a href="https://regex101.com/r/DU8ejJ/1" rel="nofollow noreferrer">regex demo</a>.</p> <p><strong>Details</strong></p> <ul> <li><code>PACK\[</code> - <code>PACK[</code> string</li> <li><code>(\d[\d.]*)</code> - Group 1: a digit and then zero or more digits or dots</li> <li><code>\s*</code> - zero or more whitespaces</li> <li><code>\$</code> - a <code>$</code> char</li> <li><code>[^][+]*</code> - zero or more chars other than <code>]</code>, <code>[</code> and <code>+</code></li> <li><code>]</code> - a <code>]</code> char.</li> </ul>
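<p>Applied to the dataframe from the question, a sketch creating both columns with <code>str.extract</code> (column name <code>COL1</code> as in the question; rows that do not match get NaN) could be:</p> <pre><code>df['pack_plus'] = df['COL1'].str.extract(r'PACK\[(\d[\d.]*)\s*\$[^][]*\+[^][]*]', expand=False)
df['pack'] = df['COL1'].str.extract(r'PACK\[(\d[\d.]*)\s*\$[^][+]*]', expand=False)
</code></pre>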
regex|pandas
1
376,594
65,235,662
Python create a data frame by using last n rows
<p>I have a pandas df as follows:</p> <pre><code>Value1 Value2 Label 15.1 12 0 17 5 1 19 2 1 </code></pre> <p>I am looking to build a new df, such that each row contains thee input of the previous <code>n</code> rows. For example if <code>n=2</code> my output should be</p> <pre><code>Value1.1 Value2.1 Value1.2 Value2.2 Value1 Value2 Label 15.1 12 17 5 19 2 1 </code></pre> <p>This is the third row and has a <code>label=1</code>, the <code>Value1</code> and <code>Value2</code> of the previous <code>2</code> rows are appended to the third row. Any thoughts on how I can achieve this in python? Thanks!</p>
<p>Perhaps something like:</p> <pre><code>n = 2 sel = [k for k in df.columns if k != 'Label'] df2 = df for k in range(1, n + 1): df2 = df2.join(df[sel].shift(k), rsuffix=f'.{k}') print(df2) Value1 Value2 Label Value1.1 Value2.1 Value1.2 Value2.2 0 15.1 12 0 NaN NaN NaN NaN 1 17.0 5 1 15.1 12.0 NaN NaN 2 19.0 2 1 17.0 5.0 15.1 12.0 </code></pre> <p>Or, if you prefer the column order you indicated in your example:</p> <pre><code>df2 = df for k in range(1, n+1): df2 = df[sel].shift(k).join(df2, lsuffix=f'.{k}') print(df2) Value1.2 Value2.2 Value1.1 Value2.1 Value1 Value2 Label 0 NaN NaN NaN NaN 15.1 12 0 1 NaN NaN 15.1 12.0 17.0 5 1 2 15.1 12.0 17.0 5.0 19.0 2 1 </code></pre>
python|pandas
1
376,595
65,402,880
Tensorflow batchsize affects precision
<p>The same input, only different batch sizes.</p> <ul> <li>Why the outputs are different?</li> <li>How to avoid the difference or force the output to be the same?</li> </ul> <pre class="lang-py prettyprint-override"><code>from tensorflow import keras from tensorflow.keras import layers def create_net(n_input, n_hidden, activation=None): seq = keras.Sequential( [ layers.Dense(n_hidden, input_shape=(n_input,), activation=activation, ), layers.Dense(159*159, ) ] ) return seq seq2 = create_net(n_input=100000, n_hidden=50, activation='linear') print(seq2.predict(np.ones((64, 100000)), batch_size=32)[0, :]) print(seq2.predict(np.ones((64, 100000)), batch_size=1)[0, :]) </code></pre> <p>output:</p> <pre><code>[-0.10577075 -0.02522129 0.05591403 ... 0.07279566 -0.01813894 -0.03258121] [-0.10577081 -0.02522125 0.05591398 ... 0.07279569 -0.01813904 -0.03258121] </code></pre>
<p>The change in results only shows up when the batch size is very small, i.e. 1 or 2, and the differences are tiny: they come from floating-point rounding, because with a different batch size the underlying matrix-multiply kernels accumulate the sums in a different order, so the last few decimal places can differ.</p> <p>I tested your code with batch sizes larger than 2 and they all gave exactly the same results.</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers tf.random.set_seed(42) np.random.seed = 42 def create_net(n_input, n_hidden, activation=None): seq = keras.Sequential( [ layers.Dense(n_hidden, input_shape=(n_input,), activation=activation, ), layers.Dense(159*159, ) ] ) return seq seq2 = create_net(n_input=100000, n_hidden=50, activation='linear') print(seq2.predict(np.ones((64, 100000)), batch_size=1)[0, :]) print(seq2.predict(np.ones((64, 100000)), batch_size=4)[0, :]) print(seq2.predict(np.ones((64, 100000)), batch_size=8)[0, :]) print(seq2.predict(np.ones((64, 100000)), batch_size=16)[0, :]) print(seq2.predict(np.ones((64, 100000)), batch_size=32)[0, :]) print(seq2.predict(np.ones((64, 100000)), batch_size=64)[0, :]) </code></pre> <p><strong>Result:</strong></p> <pre><code>[-0.03781796 -0.17668417 0.12602948 ... 0.01211533 0.12969542 -0.05006403] # Batch size 1 [-0.03781779 -0.17668422 0.12602954 ... 0.01211521 0.12969519 -0.05006415] [-0.03781779 -0.17668422 0.12602954 ... 0.01211521 0.12969519 -0.05006415] [-0.03781779 -0.17668422 0.12602954 ... 0.01211521 0.12969519 -0.05006415] [-0.03781779 -0.17668422 0.12602954 ... 0.01211521 0.12969519 -0.05006415] [-0.03781779 -0.17668422 0.12602954 ... 0.01211521 0.12969519 -0.05006415] </code></pre>
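<p>A quick way to convince yourself the differences are only rounding noise is to compare the two outputs with a tolerance instead of exact equality; a sketch using the model from the question:</p> <pre><code>import numpy as np

out_a = seq2.predict(np.ones((64, 100000)), batch_size=32)
out_b = seq2.predict(np.ones((64, 100000)), batch_size=1)

print(np.abs(out_a - out_b).max())        # on the order of float32 rounding error
print(np.allclose(out_a, out_b, atol=1e-5))
</code></pre>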
python|tensorflow|keras
0
376,596
65,073,439
How to use Numpy vectorize to calculate columns in Pandas
<p>I have a pd Dataframe and would like to calculate one column based on two others from the same dataframe. I would like to use Numpy vectorisation for this as the dataset is large. Here is the dataframe:</p> <pre><code>Input Dataframe A B 0 567 345 1 123 456 2 568 354 Output Dataframe A B C 0 567 345 567.345 1 123 456 123.456 2 568 354 568.354 </code></pre> <p>where column C is a concatenation between A and B with dot between both values. I am using apply():</p> <pre><code>df['C'] = df.apply(lambda row: str(row['A']) + '.' + str(row['B']), axis=1) </code></pre> <p>instead to iterate over rows/index etc. but still it is slow. I know that I could do:</p> <pre><code>df['C'] = df['A'].values + df['B'].values </code></pre> <p>which is extremely faster, but this will not give me the desired result, and on the same time:</p> <pre><code>df['C'] = str(df['A'].values) + '.' + str(df['B'].values) </code></pre> <p>will give me something completely different. The example is just for presentation purposes (the values of A and B could be of any type). The question is more general. Thank you in advance!</p>
<p>A list comprehension should be faster than apply for such a use case:</p> <pre><code>df['C'] = [f&quot;{a}.{b}&quot; for a,b in zip(df['A'],df['B'])] </code></pre> <hr /> <p>Outputs</p> <pre><code> A B C 0 567 345 567.345 1 123 456 123.456 2 568 354 568.354 </code></pre>
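<p>If both columns are numeric, a fully vectorised alternative (no Python-level loop at all) is to cast to string and concatenate, for example:</p> <pre><code>df['C'] = df['A'].astype(str) + '.' + df['B'].astype(str)
</code></pre>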
python|pandas|numpy
0
376,597
65,165,864
Programming Error (sql syntax) trying to get a MySQL table with "-" in table name into Pandas dataframe
<p>Edit:</p> <p>See answer but &quot;-&quot; in MySQL table names causes problems. I tried this: <a href="https://stackoverflow.com/a/37730334/14767913">https://stackoverflow.com/a/37730334/14767913</a></p> <p>My code:</p> <pre><code>import pandas as pd table_name = 'calcium-foods' df = pd.read_sql('SELECT * FROM calcium-foods', con=engine ) </code></pre> <p>The table is there:</p> <p><a href="https://i.stack.imgur.com/IVzZE.png" rel="nofollow noreferrer">imgfromme</a></p> <p>I connected properly and can get a list of tables using <code>engine.table_names()</code></p> <p>This did not work either;</p> <pre><code>import pandas as pd table_name = 'calcium-foods' sql = &quot;SELECT * from &quot; + table_name print(sql) df = pd.read_sql_query(sql, engine) </code></pre> <p>I am working in Jupyter Notebooks, Python 3.7 or 3.8, and here is the Traceback</p> <pre><code>-------------------------------------------------- ProgrammingError Traceback (most recent call last) ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args) 1266 self.dialect.do_execute_no_params( -&gt; 1267 cursor, statement, context 1268 ) ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/default.py in do_execute_no_params(self, cursor, statement, context) 595 def do_execute_no_params(self, cursor, statement, context=None): --&gt; 596 cursor.execute(statement) 597 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/cursors.py in execute(self, query, args) 162 --&gt; 163 result = self._query(query) 164 self._executed = query ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/cursors.py in _query(self, q) 320 self._clear_result() --&gt; 321 conn.query(q) 322 self._do_get_result() ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in query(self, sql, unbuffered) 504 self._execute_command(COMMAND.COM_QUERY, sql) --&gt; 505 self._affected_rows = self._read_query_result(unbuffered=unbuffered) 506 return self._affected_rows ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in _read_query_result(self, unbuffered) 723 result = MySQLResult(self) --&gt; 724 result.read() 725 self._result = result ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in read(self) 1068 try: -&gt; 1069 first_packet = self.connection._read_packet() 1070 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in _read_packet(self, packet_type) 675 self._result.unbuffered_active = False --&gt; 676 packet.raise_for_error() 677 return packet ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/protocol.py in raise_for_error(self) 222 if DEBUG: print(&quot;errno =&quot;, errno) --&gt; 223 err.raise_mysql_exception(self._data) 224 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/err.py in raise_mysql_exception(data) 106 errorclass = InternalError if errno &lt; 1000 else OperationalError --&gt; 107 raise errorclass(errno, errval) ProgrammingError: (1064, &quot;You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-foods' at line 1&quot;) The above exception was the direct cause of the following exception: ProgrammingError Traceback (most recent call last) &lt;ipython-input-16-7e7208d94a8a&gt; in &lt;module&gt; 3 sql = &quot;SELECT * from &quot; + table_name 4 print(sql) ----&gt; 5 df = 
pd.read_sql_query(sql, engine) 6 #conn = engine.connect() 7 #table_name = 'calcium-foods' ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pandas/io/sql.py in read_sql_query(sql, con, index_col, coerce_float, params, parse_dates, chunksize) 381 coerce_float=coerce_float, 382 parse_dates=parse_dates, --&gt; 383 chunksize=chunksize, 384 ) 385 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pandas/io/sql.py in read_query(self, sql, index_col, coerce_float, parse_dates, params, chunksize) 1293 args = _convert_params(sql, params) 1294 -&gt; 1295 result = self.execute(*args) 1296 columns = result.keys() 1297 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pandas/io/sql.py in execute(self, *args, **kwargs) 1160 &quot;&quot;&quot;Simple passthrough to SQLAlchemy connectable&quot;&quot;&quot; 1161 return self.connectable.execution_options(no_parameters=True).execute( -&gt; 1162 *args, **kwargs 1163 ) 1164 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in execute(self, statement, *multiparams, **params) 2233 2234 connection = self._contextual_connect(close_with_result=True) -&gt; 2235 return connection.execute(statement, *multiparams, **params) 2236 2237 def scalar(self, statement, *multiparams, **params): ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in execute(self, object_, *multiparams, **params) 1001 &quot;&quot;&quot; 1002 if isinstance(object_, util.string_types[0]): -&gt; 1003 return self._execute_text(object_, multiparams, params) 1004 try: 1005 meth = object_._execute_on_connection ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_text(self, statement, multiparams, params) 1176 parameters, 1177 statement, -&gt; 1178 parameters, 1179 ) 1180 if self._has_events or self.engine._has_events: ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args) 1315 except BaseException as e: 1316 self._handle_dbapi_exception( -&gt; 1317 e, statement, parameters, cursor, context 1318 ) 1319 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _handle_dbapi_exception(self, e, statement, parameters, cursor, context) 1509 elif should_wrap: 1510 util.raise_( -&gt; 1511 sqlalchemy_exception, with_traceback=exc_info[2], from_=e 1512 ) 1513 else: ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/util/compat.py in raise_(***failed resolving arguments***) 180 181 try: --&gt; 182 raise exception 183 finally: 184 # credit to ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/base.py in _execute_context(self, dialect, constructor, statement, parameters, *args) 1265 if not evt_handled: 1266 self.dialect.do_execute_no_params( -&gt; 1267 cursor, statement, context 1268 ) 1269 else: ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/sqlalchemy/engine/default.py in do_execute_no_params(self, cursor, statement, context) 594 595 def do_execute_no_params(self, cursor, statement, context=None): --&gt; 596 cursor.execute(statement) 597 598 def is_disconnect(self, e, connection, cursor): ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/cursors.py in execute(self, query, args) 161 query = self.mogrify(query, args) 162 --&gt; 163 result = self._query(query) 164 self._executed = query 165 return result ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/cursors.py in 
_query(self, q) 319 self._last_executed = q 320 self._clear_result() --&gt; 321 conn.query(q) 322 self._do_get_result() 323 return self.rowcount ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in query(self, sql, unbuffered) 503 sql = sql.encode(self.encoding, 'surrogateescape') 504 self._execute_command(COMMAND.COM_QUERY, sql) --&gt; 505 self._affected_rows = self._read_query_result(unbuffered=unbuffered) 506 return self._affected_rows 507 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in _read_query_result(self, unbuffered) 722 else: 723 result = MySQLResult(self) --&gt; 724 result.read() 725 self._result = result 726 if result.server_status is not None: ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in read(self) 1067 def read(self): 1068 try: -&gt; 1069 first_packet = self.connection._read_packet() 1070 1071 if first_packet.is_ok_packet(): ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/connections.py in _read_packet(self, packet_type) 674 if self._result is not None and self._result.unbuffered_active is True: 675 self._result.unbuffered_active = False --&gt; 676 packet.raise_for_error() 677 return packet 678 ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/protocol.py in raise_for_error(self) 221 errno = self.read_uint16() 222 if DEBUG: print(&quot;errno =&quot;, errno) --&gt; 223 err.raise_mysql_exception(self._data) 224 225 def dump(self): ~/anaconda3/envs/gamechangers/lib/python3.7/site-packages/pymysql/err.py in raise_mysql_exception(data) 105 if errorclass is None: 106 errorclass = InternalError if errno &lt; 1000 else OperationalError --&gt; 107 raise errorclass(errno, errval) ProgrammingError: (pymysql.err.ProgrammingError) (1064, &quot;You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-foods' at line 1&quot;) [SQL: SELECT * from calcium-foods] (Background on this error at: http://sqlalche.me/e/13/f405) </code></pre> <p>As an aside, when I click on the SQL Alchemy error URL, it seems like they all take me to the same page rather than error-specific. Is this normal?</p>
<p>ProgrammingError: (1064, &quot;You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-foods' at line 1&quot;)</p>
<p>Try changing this line:</p>
<pre><code>sql = &quot;SELECT * from &quot; + table_name
</code></pre>
<p>to:</p>
<pre><code>sql = &quot;SELECT * from &quot; + table_name + &quot;;&quot;
</code></pre>
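<p>Since the error message complains about the text near '-foods', the hyphen in the table name may also be the problem: MySQL does not allow unquoted identifiers to contain hyphens, so quoting the name with backticks is worth trying as well. A small sketch using the variables from the question:</p>
<pre><code>table_name = 'calcium-foods'
# Backticks let MySQL treat the hyphenated name as a single identifier
sql = &quot;SELECT * from `&quot; + table_name + &quot;`;&quot;
pd.read_sql_query(sql, engine)
</code></pre>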
python|mysql|pandas
0
376,598
65,144,547
keras model.fit ValueError: The outer 2 dimensions of indices.shape=[1,11,1] must match the outer 2 dimensions of updates.shape=[2]
<p>I am training a Keras model with a custom loss and a custom evaluation metric. It trains fine without the metric, but it gives the following error when I try to train like this:</p>
<pre><code>model.compile(optimizer= keras.optimizers.Adam(learning_rate = 1e-3), loss = inner_product, metrics=dice_index_metric)

model.fit([X_train], [y_train], epochs=50, batch_size = 1, validation_split=0.2,
          callbacks = keras.callbacks.EarlyStopping(monitor=&quot;val_loss&quot;, min_delta=0, patience=5), verbose=2)
</code></pre>
<p>Error:</p>
<hr />
<pre><code>ValueError                                Traceback (most recent call last)
&lt;ipython-input-38-270dbe25d468&gt; in &lt;module&gt;
----&gt; 1 hist = model.fit([X_train], [y_train1], epochs=50, batch_size = 1, validation_split=0.2,
      2     callbacks = keras.callbacks.EarlyStopping(monitor=&quot;val_loss&quot;, min_delta=0, patience=5), verbose=2)

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
    106   def _method_wrapper(self, *args, **kwargs):
    107     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
--&gt; 108       return method(self, *args, **kwargs)
    109
    110     # Running inside `run_distribute_coordinator` already.

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1096                 batch_size=batch_size):
   1097               callbacks.on_train_batch_begin(step)
-&gt; 1098               tmp_logs = train_function(iterator)
   1099               if data_handler.should_sync:
   1100                 context.async_wait()

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
    778       else:
    779         compiler = &quot;nonXla&quot;
--&gt; 780         result = self._call(*args, **kwds)
    781
    782       new_tracing_count = self._get_tracing_count()

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
    821       # This is the first call of __call__, so we have to initialize.
    822       initializers = []
--&gt; 823       self._initialize(args, kwds, add_initializers_to=initializers)
    824     finally:
    825       # At this point we know that the initialization is complete (or less

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
    694     self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
    695     self._concrete_stateful_fn = (
--&gt; 696         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    697             *args, **kwds))
    698

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2853       args, kwargs = None, None
   2854     with self._lock:
-&gt; 2855       graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2856     return graph_function
   2857

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
   3211
   3212       self._function_cache.missed.add(call_context_key)
-&gt; 3213       graph_function = self._create_graph_function(args, kwargs)
   3214       self._function_cache.primary[cache_key] = graph_function
   3215       return graph_function, args, kwargs

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3063     arg_names = base_arg_names + missing_arg_names
   3064     graph_function = ConcreteFunction(
-&gt; 3065         func_graph_module.func_graph_from_py_func(
   3066             self._name,
   3067             self._python_function,

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    984         _, original_func = tf_decorator.unwrap(python_func)
    985
--&gt; 986       func_outputs = python_func(*func_args, **func_kwargs)
    987
    988       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
    598         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    599         # the function a weak reference to itself to avoid a reference cycle.
--&gt; 600         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    601     weak_wrapped_fn = weakref.ref(wrapped_fn)
    602

~\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs)
    971           except Exception as e:  # pylint:disable=broad-except
    972             if hasattr(e, &quot;ag_error_metadata&quot;):
--&gt; 973               raise e.ag_error_metadata.to_exception(e)
    974             else:
    975               raise

ValueError: in user code:

    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\training.py:806 train_function  *
        return step_function(self, iterator)
    &lt;ipython-input-36-2c54f0983574&gt;:5 dice_index_metric  *
        y_pred1 = tf.scatter_nd(ind, updates, tf.shape(y_pred))
    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_array_ops.py:8855 scatter_nd  **
        _, _, _op, _outputs = _op_def_library._apply_op_helper(
    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\op_def_library.py:742 _apply_op_helper
        op = g._create_op_internal(op_type_name, inputs, dtypes=None,
    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\func_graph.py:591 _create_op_internal
        return super(FuncGraph, self)._create_op_internal(  # pylint: disable=protected-access
    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\ops.py:3477 _create_op_internal
        ret = Operation(
    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\ops.py:1974 __init__
        self._c_op = _create_c_op(self._graph, node_def, inputs,
    C:\Users\haluk\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\ops.py:1815 _create_c_op
        raise ValueError(str(e))

    ValueError: The outer 2 dimensions of indices.shape=[1,11,1] must match the outer 2 dimensions of updates.shape=[2]: Shapes must be equal rank, but are 2 and 1 for '{{node ScatterNd}} = ScatterNd[T=DT_INT32, Tindices=DT_INT32](strided_slice_2, Const_3, Shape_1)' with input shapes: [1,11,1], [2], [2].
</code></pre>
<p>The custom metric is as follows:</p>
<pre><code>def dice_index_metric(y_true, y_pred):
    ind = tf.argsort(y_pred,axis=-1,direction='ASCENDING',stable=False,name=None)[-2:]
    ind = ind[..., tf.newaxis]
    updates = tf.constant([1, 1])
    y_pred1 = tf.scatter_nd(ind, updates, tf.shape(y_pred))
    innerproduct = tf.minimum(y_true, y_pred1)
    innerproduct = tf.reduce_sum(innerproduct)
    union = tf.maximum(y_true, y_pred1)
    union = tf.reduce_sum(union)
    return innerproduct/union
</code></pre>
<p>The custom metric converts the prediction vector into a vector in which the top-2 predicted elements are 1 and all others are 0, then compares this to the ground truth and computes (# of intersection)/(# of union). So let's say the prediction from the model is:</p>
<p>pred = [0.01, 0.3, 0.4, 0.01, 0.01, 0.2, 0.02, 0.05]</p>
<p>The top 2 values are 0.3 and 0.4 with indexes 1 and 2, so I should recommend this:</p>
<p>recommend = [0, 1, 1, 0, 0, 0, 0, 0]</p>
<p>If the truth values are as follows, then their intersection is only index 2 and their union is [1, 2, 3], so I should return 1/3.</p>
<p>truth = [0, 0, 1, 1, 0, 0, 0, 0]</p>
<p>Model:</p>
<pre><code>inputs = keras.Input(shape =(None,23))
features = layers.LSTM(100)(inputs)
next = layers.Dense(11, activation=activations.sigmoid)(features)
next = layers.Softmax()(next)
model = keras.Model(inputs=[inputs] , outputs=next, name=&quot;LSTMmodel2&quot;)
</code></pre>
<p>First, from your code</p>
<pre><code>ind = tf.argsort(y_pred,axis=-1,direction='ASCENDING',stable=False,name=None)[-2:]
</code></pre>
<p>you have forgotten that y_pred is batched, which means y_pred's shape is not [11,] but rather [N,11]; judging from your error message, the batch size is 1. Therefore, the line above slices along the batch axis (axis 0). This is why the error says</p>
<pre><code>indices.shape=[1,11,1]
</code></pre>
<p>Second, scatter_nd does not work like that. Unlike gather_nd, it does not support batch_dims. Only the last axis of 'indices' determines the position each 'updates' element goes into.</p>
<p>For example,</p>
<pre><code>tf.scatter_nd([[1,2],[1,3]],[1,1],shape=(2,10))
#&lt;tf.Tensor: shape=(2, 10), dtype=int32,
#numpy=array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
#             [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]])&gt;
</code></pre>
<p>So in your case,</p>
<pre><code>N = tf.shape(y_pred)[0]

# Shape: [N,2]
ind = tf.argsort(y_pred,axis=-1,direction='ASCENDING',stable=False,name=None)[:,-2:]
# Shape: [N,2,1]
ind = ind[...,tf.newaxis]

# Dummy range to add index for batch axis
# Shape : [N,1,1]
r = tf.range(N)[:,tf.newaxis,tf.newaxis]
# Shape : [N,2,1]
r = tf.repeat(r,2,axis=1)

# Shape : [N,2,2]
ind = tf.concat([r,ind],axis=-1)
# Shape : [N,2]
updates = tf.ones((N,2))
y_pred1 = tf.scatter_nd(ind, updates, tf.shape(y_pred))
</code></pre>
<p>is the right way to use tf.scatter_nd.</p>
<p>However, that looks rather messy. I would recommend doing it this way instead:</p>
<pre><code>second_max = tf.sort(y_pred,axis=-1,direction='ASCENDING')[:,-2,tf.newaxis]
y_pred1 = tf.cast(y_pred&gt;=second_max,tf.int32)
</code></pre>
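<p>For completeness, here is a rough sketch of how the whole metric could look with the simpler thresholding variant plugged in. It assumes y_true has the same shape as y_pred; the cast to y_true's dtype is an extra precaution I added so that tf.minimum/tf.maximum do not mix integer and float tensors:</p>
<pre><code>import tensorflow as tf

def dice_index_metric(y_true, y_pred):
    # Threshold at the second-largest prediction per sample, so the top-2 entries become 1
    second_max = tf.sort(y_pred, axis=-1, direction='ASCENDING')[:, -2, tf.newaxis]
    y_pred1 = tf.cast(y_pred &gt;= second_max, y_true.dtype)
    # Intersection over union of the two binary vectors
    intersection = tf.reduce_sum(tf.minimum(y_true, y_pred1))
    union = tf.reduce_sum(tf.maximum(y_true, y_pred1))
    return intersection / union
</code></pre>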
python|tensorflow|machine-learning|keras|recommendation-engine
0
376,599
65,327,509
Select/Group rows from a data frame with the nearest values for a specific column(s)
<p>I have two columns in a data frame (you can see a sample below). Usually in columns A &amp; B I get 10 to 12 rows with similar values, for example from index 1 to 10 and then from index 11 to 21. I would like to group these values and get the mean and standard deviation of each group. I found the following line of code to get the index of the nearest value, but I don't know how to do this repetitively:</p>
<pre><code>Index = df['A'].sub(df['A'][0]).abs().idxmin()
</code></pre>
<p>Does anyone have ideas on how to approach this problem?</p>
<pre><code>              A            B
1   3652.194531 -1859.805238
2   3739.026566 -1881.965576
3   3742.095325 -1878.707674
4   3747.016899 -1878.728626
5   3746.214554 -1881.270329
6   3750.325368 -1882.915532
7   3748.086576 -1882.406672
8   3751.786422 -1886.489485
9   3755.448968 -1885.695822
10  3753.714126 -1883.504098
11  -337.969554    24.070990
12  -343.019575    23.438956
13  -344.788697    22.250254
14  -346.433460    21.912217
15  -343.228579    22.178519
16  -345.722368    23.037441
17  -345.923108    23.317620
18  -345.526633    21.416528
19  -347.555162    21.315934
20  -347.229210    21.565183
21  -344.575181    22.963298
22    23.611677    -8.499528
23    26.320500    -8.744512
24    24.374874   -10.717384
25    25.885272    -8.982414
26    24.448127    -9.002646
27    23.808744    -9.568390
28    24.717935    -8.491659
29    25.811393    -8.773649
30    25.084683    -8.245354
31    25.345618    -7.508419
32    23.286342   -10.695104
33 -3184.426285 -2533.374402
34 -3209.584366 -2553.310934
35 -3210.898611 -2555.938332
36 -3214.234899 -2558.244347
37 -3216.453616 -2561.863807
38 -3219.326197 -2558.739058
39 -3214.893325 -2560.505207
40 -3194.421934 -2550.186647
41 -3219.728445 -2562.472566
42 -3217.630380 -2562.132186
43   234.800448   -75.157523
44   236.661235   -72.617806
45   238.300501   -71.963103
46   239.127539   -72.797922
47   232.305335   -70.634125
48   238.452197   -73.914015
49   239.091210   -71.035163
50   239.855953   -73.961841
51   238.936811   -73.887023
52   238.621490   -73.171441
53   240.771812   -73.847028
54   -16.798565     4.421919
55   -15.952454     3.911043
56   -14.337879     4.236691
57   -17.465204     3.610884
58   -17.270147     4.407737
59   -15.347879     3.256489
60   -18.197750     3.906086
</code></pre>
<p>A simpler approach consists of grouping consecutive values whose absolute percentage change stays within a given threshold (let's say 0.5); whenever the change exceeds the threshold, a new group starts:</p>
<pre><code>df['Group'] = (df.A.pct_change().abs()&gt;0.5).cumsum()
df.groupby('Group').agg(['mean', 'std'])
</code></pre>
<p>Output:</p>
<pre><code>                  A                        B
               mean        std          mean       std
Group
0       3738.590934  30.769420  -1880.148905  7.582856
1       -344.724684   2.666137     22.496995  0.921008
2         24.790470   0.994361     -9.020824  0.977809
3      -3210.159806  11.646589  -2555.676749  8.810481
4        237.902230   2.439297    -72.998817  1.366350
5        -16.481411   1.341379      3.964407  0.430576
</code></pre>
<hr />
<p><strong>Note:</strong> I have only used the &quot;A&quot; column, since the &quot;B&quot; column appears to follow the same pattern of consecutive nearest values. You can check if the identified groups are the same between columns with:</p>
<pre><code>grps = (df[['A','B']].pct_change().abs()&gt;1).cumsum()
grps.A.eq(grps.B).all()
</code></pre>
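<p>If you also want every original row annotated with its group's statistics, one possible follow-up is to join the aggregated table back onto the data frame (the flattened column names such as A_mean are just my own choice):</p>
<pre><code>stats = df.groupby('Group')[['A', 'B']].agg(['mean', 'std'])
# Flatten the MultiIndex columns, e.g. ('A', 'mean') -&gt; 'A_mean'
stats.columns = ['_'.join(col) for col in stats.columns]
# Attach the per-group statistics to every row of the original frame
df = df.join(stats, on='Group')
</code></pre>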
python|pandas
1