Unnamed: 0 (int64): 0 to 378k
id (int64): 49.9k to 73.8M
title (string lengths): 15 to 150
question (string lengths): 37 to 64.2k
answer (string lengths): 37 to 44.1k
tags (string lengths): 5 to 106
score (int64): -10 to 5.87k
7,200
53,173,312
Comparing two panda dataframes and writing new dataframes if a row value is common between both dfs
<p>First of all I am going to explain the whole problem and if there is a better way to do this without pandas please say. I have just attempted a bunch of ways and I feel like pandas is likely the best way to go.</p> <p>I have two text files. Each text file looks something like the following:</p> <pre><code>Sometextinbothfiles UniqueText SomeTextThatCouldbeCommon Unique Text </code></pre> <p>There are more columns with UniqueText in but this gives a basic idea of the layout. There is also some header info but this is easy to remove by ignoring the first 22 lines in pandas.The column with the SomeTextThatCouldbeCommon is always in the same place and it is this that I want to look at. It is a filename.</p> <p>Currently I am just pulling in each text file and seperating them in pandas using </p> <pre><code>Data = open("data.star", "r") Datapd = pd.read_csv(Data, sep=r"\s+", skiprows=range(0,23), header=None) </code></pre> <p>So I want to compare the SomeTextThatCouldbeCommon on each line of the text file to the same SomeTextThatCouldbeCommon on EVERY line of the other text file. If there is a match I then want to write out that whole line to a new dataframe/textfile/array. I then want to do the same in reverse. So in the end I have two files that refer to the same files but have the unique data present in each file about that data. </p> <p>I hope I have explained this ok. Please help I am struggling to figure out how to do this. </p>
<p>Here is a simple example for solving your problem; I hope it works for you.</p> <p>Two example data frames (note the dates need to be quoted as strings, otherwise Python evaluates them as subtraction):</p> <pre><code>df1 = pd.DataFrame({ "Date" : ['2013-11-24', '2013-11-24', '2013-11-24', '2013-11-24'], "Fruit" : ['Banana', 'Orange', 'Apple', 'Celery'], "Num" : [22.1, 8.6, 7.6, 10.2], "Color" : ['Yellow', 'Orange', 'Green', 'Green'] }) df2 = pd.DataFrame({ "Date" : ['2013-11-25', '2013-11-24', '2013-11-24', '2018-11-24'], "Fruit" : ['Banana', 'Cherry', 'Mango', 'Celery'], "Num" : [22.1, 8.6, 7.6, 10.2], "Color" : ['Yellow', 'Green', 'Yellow', 'Green'] }) mask = (df1 == df2) df1.where(mask) </code></pre> <p>Where there is a match you keep the original value; otherwise you receive "NaN".</p>
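<p>Since the original question is about keeping only the rows whose filename occurs in both files, a minimal sketch of that idea, assuming the shared filename sits in column <code>2</code> (the column position and file names here are placeholders, not taken from the question), could look like this:</p> <pre><code>import pandas as pd

# read both STAR files the same way as in the question
df_a = pd.read_csv("data_a.star", sep=r"\s+", skiprows=range(0, 23), header=None)
df_b = pd.read_csv("data_b.star", sep=r"\s+", skiprows=range(0, 23), header=None)

# filenames that occur in both files (column 2 is the assumed filename column)
common = set(df_a[2]) &amp; set(df_b[2])

# keep the full rows of each file whose filename is shared, then write them out
df_a[df_a[2].isin(common)].to_csv("a_common.txt", sep=" ", header=False, index=False)
df_b[df_b[2].isin(common)].to_csv("b_common.txt", sep=" ", header=False, index=False)
</code></pre>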
python|pandas|dataframe|text
0
7,201
65,506,956
Matching multiple conditions and returning/appending multiple results, between two dataframes in pandas
<br/> I'm very new to python and really don't know where to start doing the following:<br/> I have two dataframes, df1 and df2.<br/> <pre class="lang-py prettyprint-override"><code>df1 fruit id date 0 apple 2 01/10/20 1 pear 1 15/09/20 2 banana 3 01/06/20 3 peach 4 10/04/20 </code></pre> <pre class="lang-py prettyprint-override"><code>df2 name uid ndate 0 paul 2 02/11/20 1 tracy 1 15/12/20 2 iain 3 01/05/20 3 frida 4 23/02/20 4 david 2 06/06/20 5 peter 3 19/11/20 6 adam 4 07/03/20 7 eve 1 30/11/20 8 hannah 2 25/09/20 9 janine 2 13/08/20 10 charlotte 5 10/04/20 </code></pre> <p>I want to map 'name' values from df2 to new columns in df1.<br/> There are two conditions to be met, first that the 'uid' in df2 must be the same as the 'id' in df1,<br/> and the second that the <em><strong>'ndate' in df2 must be before (less than) the 'date' in df1</strong></em>.<br/> Ideally <em><strong>each matched value should return to new columns to the right of date in df1</strong></em>.<br/></p> <pre class="lang-py prettyprint-override"><code>df1 fruit id date match1 match2 match3 ...(etc) 0 apple 2 01/10/20 david hannah janine 1 pear 1 15/09/20 2 banana 3 01/06/20 iain 3 peach 4 10/04/20 frida adam </code></pre> <p>Hope this makes sense, I've simplified the tables as much as possible, and this is the gist of what I'm looking to do. Any suggestions much appreciated, thanks in advance!</p>
<pre><code>df1.merge(df2,left_on=['id','date'],right_on=['uid','ndate']) </code></pre>
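<p>The one-liner above matches only where both columns are exactly equal; the question, however, asks to match on <code>id</code>/<code>uid</code> alone and keep rows whose <code>ndate</code> is earlier than <code>date</code>, spread across new columns. A hedged sketch of that (day-first date parsing and the <code>match</code> column naming are assumptions) might be:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'fruit': ['apple', 'pear', 'banana', 'peach'],
                    'id': [2, 1, 3, 4],
                    'date': ['01/10/20', '15/09/20', '01/06/20', '10/04/20']})
df2 = pd.DataFrame({'name': ['paul', 'tracy', 'iain', 'frida', 'david', 'peter',
                             'adam', 'eve', 'hannah', 'janine', 'charlotte'],
                    'uid': [2, 1, 3, 4, 2, 3, 4, 1, 2, 2, 5],
                    'ndate': ['02/11/20', '15/12/20', '01/05/20', '23/02/20', '06/06/20',
                              '19/11/20', '07/03/20', '30/11/20', '25/09/20', '13/08/20', '10/04/20']})

df1['date'] = pd.to_datetime(df1['date'], dayfirst=True)
df2['ndate'] = pd.to_datetime(df2['ndate'], dayfirst=True)

# join on the id columns only, then keep matches whose ndate lies before date
m = df1.reset_index().merge(df2, left_on='id', right_on='uid')
m = m[m['ndate'] &lt; m['date']]

# number the matches per df1 row and pivot them into match1, match2, ...
m['match'] = 'match' + (m.groupby('index').cumcount() + 1).astype(str)
result = df1.join(m.pivot(index='index', columns='match', values='name'))
</code></pre>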
python|pandas|dataframe
0
7,202
65,856,489
Python Pandas: How to drop rows by time?
<p>I want to keep rows with time that are between 6am (morning) and 12am (midnight), how should I do it?</p> <p>This is my dataframe: <a href="https://i.stack.imgur.com/Yah8x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yah8x.png" alt="enter image description here" /></a></p> <p>and this is the datatype: <a href="https://i.stack.imgur.com/CUXnd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUXnd.png" alt="enter image description here" /></a></p> <p>I tried this but doesn't work:</p> <pre><code>daytime_start = '06:00:00' daytime_end = '23:59:59' mask = (df['Time'] &gt;= daytime_start) &amp; (df['Time'] &lt;= daytime_end) filtered_df = df.loc[mask] </code></pre>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>df.between_time</code></a>:</p> <pre><code>df.set_index('Date Time').between_time('6:00', '23:59').reset_index() </code></pre>
python|pandas|jupyter-notebook
2
7,203
63,693,361
Drop Hours, Mins, Secs in Timestamp Error
<p>I want to drop the hours/mins/sec in timestamp. I tried using <code>strftime</code> but I'm getting an error</p> <pre><code>import pandas as pd df = pd.read_csv('file.csv') df['Date'] = pd.to_datetime(df.Date) df.strftime('%Y-%m-%d') </code></pre> <p>Here's the exact error I'm getting:</p> <pre><code>AttributeError: 'DataFrame' object has no attribute 'strftime' </code></pre>
<p>Try:</p> <pre><code>df['Date'] = df['Date'].dt.strftime('%Y-%m-%d') </code></pre> <p>or, to keep the datetime dtype and only zero out the time part:</p> <pre><code>df['Date'] = df['Date'].dt.normalize() </code></pre>
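<p>A small illustration of the difference between the two (the sample dates are made up): <code>dt.strftime</code> turns the column into plain strings, while <code>dt.normalize</code> keeps the datetime dtype and just sets the time to midnight.</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Date': pd.to_datetime(['2020-09-01 13:45:10', '2020-09-02 08:00:00'])})

df['Date'].dt.strftime('%Y-%m-%d')   # object dtype: '2020-09-01', '2020-09-02'
df['Date'].dt.normalize()            # datetime64 dtype: 2020-09-01 00:00:00, ...
</code></pre>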
python|pandas
1
7,204
63,715,251
Performing iterative arithmetic over a column in a Pandas dataframe
<p>I am attempting to perform arithmetic on the 'data_d' column.</p> <pre><code>dataframe data_a data_b data_c data_d 60 0.30786 Discharge 2.31714 61 0.30792 Rest 2.34857 121 0.62095 Rest 2.38647 182 0.93398 Discharge 2.31115 183 0.93408 Rest 2.34550 243 1.24711 Rest 2.37162 304 1.56014 Discharge 2.30855 305 1.56019 Rest 2.34215 365 1.87322 Rest 2.36276 426 2.18630 Discharge 2.30591 </code></pre> <p>I want to assign the variables A,B,C into a new column named 'variable'. As shown below.</p> <pre><code>dataframe2 data_a data_b data_c data_d variable 60 0.30786 Discharge 2.31714 A 61 0.30792 Rest 2.34857 B 121 0.62095 Rest 2.38647 C 182 0.93398 Discharge 2.31115 A 183 0.93408 Rest 2.34550 B 243 1.24711 Rest 2.37162 C 304 1.56014 Discharge 2.30855 A 305 1.56019 Rest 2.34215 B 365 1.87322 Rest 2.36276 C 426 2.18630 Discharge 2.30591 A </code></pre> <p>The script then should perform the following operation iteratively over the entire 'data_d' column.</p> <pre><code>(C - (B-A)) (2.38647 - (2.34857-2.31714)) (2.35504) ... </code></pre> <pre><code>dataframe3 measurement 0 2.35504 1 2.33727 2 2.32916 ... ... </code></pre> <p>And so on.</p> <p>Thank you in advance for any insight.</p>
<p>We use the <code>cumsum</code> to create the <code>groupby</code> key , then do <code>cumcount</code> with <code>groupby</code> <code>map</code> the number of count back to letter</p> <pre><code>key = df['data_c'].eq('Discharge').cumsum() df['variable'] = df.groupby(key).cumcount().map({0:'A',1:'B',2:'C'}) df Out[61]: data_a data_b data_c data_d variable 0 60 0.30786 Discharge 2.31714 A 1 61 0.30792 Rest 2.34857 B 2 121 0.62095 Rest 2.38647 C 3 182 0.93398 Discharge 2.31115 A 4 183 0.93408 Rest 2.34550 B 5 243 1.24711 Rest 2.37162 C 6 304 1.56014 Discharge 2.30855 A 7 305 1.56019 Rest 2.34215 B 8 365 1.87322 Rest 2.36276 C 9 426 2.18630 Discharge 2.30591 A </code></pre> <p>Then we just need to pivot : here I am using <code>crosstab</code></p> <pre><code>s = pd.crosstab(index=key, columns=df['variable'], values=df['data_d'], aggfunc='sum') dfout = s.eval('C - (B-A)').to_frame(name = 'measurement') dfout Out[69]: measurement data_c 1 2.35504 2 2.33727 3 2.32916 4 NaN </code></pre>
python|pandas|dataframe|math|data-science
1
7,205
63,704,174
Encoding for patterns with Numpy
<p>I want to find up/down patterns in a time series. This is what I use for simple up/down:</p> <pre><code>diff = np.diff(source, n=1) encoding = np.where(diff &gt; 0, 1, 0) </code></pre> <p>Is there a way with Numpy to do that for patterns with a given lookback length without a slow loop? For example up/up/up = 0 down/down/down = 1 up/down/up = 2 up/down/down = 3.....</p> <p>Thank you for your help.</p>
<p>I learned yesterday about <code>np.lib.stride_tricks.as_strided</code> from one of StackOverflow answers <a href="https://stackoverflow.com/a/53099870/3044825">similar to this</a>. This is an awesome trick and not that hard to understand as I expected. Now, if you get it, let's define a function called <code>rolling</code> that lists all the patterns to check with:</p> <pre><code>def rolling(a, window): shape = (a.size - window + 1, window) strides = (a.itemsize, a.itemsize) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) compare_with = [True, False, True] bool_arr = np.random.choice([True, False], size=15) paterns = rolling(bool_arr, len(compare_with)) </code></pre> <p>And after that you can calculate indexes of pattern matches <a href="https://stackoverflow.com/questions/41992745/using-np-where-to-find-matching-row-in-2d-array">as discussed here</a></p> <pre><code>idx = np.where(np.all(paterns == compare_with, axis=1)) </code></pre> <h3>Sample run:</h3> <pre><code>bool_arr array([ True, False, True, False, True, True, False, False, False, False, False, False, True, True, False]) patterns array([[ True, False, True], [False, True, False], [ True, False, True], [False, True, True], [ True, True, False], [ True, False, False], [False, False, False], [False, False, False], [False, False, False], [False, False, False], [False, False, True], [False, True, True], [ True, True, False]]) idx (array([ 0, 2, 13], dtype=int64),) </code></pre>
python|numpy
1
7,206
63,456,492
Data API : ValueError: `y` argument is not supported when using dataset as input
<p>I have 45000 images of size 224*224, stored as a numpy array. This array, called <code>source_arr</code> has shape 45000,224,224 and it fits in the memory.</p> <p>I divide this array into train, test and validate array and pre-process (normalize and convert greyscale to 3 channels RGB) them using tf.data API.</p> <p>I have written a pre process function like:</p> <pre><code>def pre_process(x): x_norm = (x - mean_Rot_MIP) / Var_Rot_MIP # Stacking along the last dimension to avoid having to move channel axis x_norm_3ch = tf.stack((x_norm, x_norm, x_norm), axis=-1) return x_norm_3ch </code></pre> <p><code>X_train_cases_idx.idx</code> contains the index of images from <code>source_arr</code> that are part of training data.</p> <p>I have read the corresponding training images from <code>source_arr</code> in the dataset object like:</p> <pre><code>X_train = tf.data.Dataset.from_tensor_slices([source_arr[i] for i in X_train_cases_idx.idx]) </code></pre> <p>And then I apply the pre_process function on the training images like <code>X_train = X_train.map(pre_process)</code></p> <p>This is a multiclass classification problem, thus I convert the label variable y into 1 hot encoding like:</p> <pre><code>lb = LabelBinarizer() y_train = lb.fit_transform(y_train) </code></pre> <p>The length of X_train and y_train are 36000</p> <p>I perform the model.fit operation on RESNET50 like:</p> <pre><code>H = model.fit(X_train, y_train, batch_size = BS, validation_data=(X_val, y_val), epochs = NUM_EPOCHS, shuffle =False) </code></pre> <p>and I get an error:</p> <pre><code>ValueError: `y` argument is not supported when using dataset as input. </code></pre> <p>I understand that I need to pass the X_train and y_train both as a tuple in the Dataset object. <strong>How can I do that?</strong></p>
<p>You have source_arr and y_train as NumPy arrays, so you can do:</p> <pre><code>data_set = tf.data.Dataset.from_tensor_slices( (source_arr , y_train) ) </code></pre> <p>If you have source_arr and y_train as tf.data datasets instead, zip them:</p> <pre><code>data_set = tf.data.Dataset.zip( (source_arr , y_train) ) </code></pre>
python|tensorflow|tensorflow-datasets
2
7,207
71,897,140
Formatting an output from data retrieved from a CSV file with Pandas
<p>Basically I'm trying to remove the row index (2) and I think the type information (bottom line of output) from the variable 'speed'. The code is meant to retrieve information from a csv file at a certain location, but I only want the value of that location (1.5), rather than the rest of it. I have looked around but I couldn't find anything that applies to my problem</p> <p>Code:</p> <pre><code>def statRetriever(): statType = int(input(&quot;Would you like the stats of a:\n\t1. Kart\n\t2. Wheel\n\t3. Glider\n\t4. Character\nEnter your number here: &quot;)) if statType == 1: openCSV = pd.read_csv(&quot;kartStats.csv&quot;) kartName = input(&quot;Enter the name of the kart you wants stats for: &quot;) rowNum = openCSV.index[openCSV['Name'] == kartName].tolist() # this gets the index/row number of the row where the kart name is equal to the user inputed one speed = openCSV.iloc[rowNum,1] # iloc can take integers in lists! print(f&quot;{kartName}\n--Speed: {speed}&quot;) </code></pre> <p>Output:</p> <pre><code>Mario Kart 8 Deluxe Toolkit Which tool would you like to use? 1. Randomiser 2. Stat Viewer Enter your number here: 2 Would you like the stats of a: 1. Kart 2. Wheel 3. Glider 4. Character Enter your number here: 1 Enter the name of the kart you wants stats for: Mach 8 Mach 8 --Speed: 2 1.5 Name: Speed, dtype: float64 </code></pre>
<p>In this case there is no need to look up the index/row number of the row where the kart name equals the user's input and use it later. If you change</p> <pre><code>rowNum = openCSV.index[openCSV['Name'] == kartName].tolist() # this gets the index/row number of the row where the kart name is equal to the user inputed one speed = openCSV.iloc[rowNum,1] # iloc can take integers in lists! </code></pre> <p>to</p> <pre><code>speed = openCSV[openCSV['Name'] == kartName].iloc[0, 1] </code></pre> <p>you will get the speed without the row index and the datatype details.</p>
python|pandas|csv
0
7,208
72,107,274
Python interp function that returns first/leftmost match?
<p>I am given something like selected percentile values (5th, 10th, 25th, 50th) and so on, and need to find what percentile a given value is. So I have tried scipy and numpy, but have come across a problem. It is not uncommon for multiple percentiles to have the same value (for example a value of 0 all the way until the 50th percentile). When I interpolate, it always returns the highest value, which introduces a skew into my bulk stats. I have a quick example below. X would be percentile values, Y is the corresponding percentiles. 0.0 is a value I would be interpolating. It seems the interpolation function and method is fairly limited since I have repeating x values.</p> <pre><code>x=[0.0,0.0,0.0,0.0,0.05,0.2,0.5] y=[5,10,25,50,75,90,95] interp = interp1d(x, y, kind='slinear', fill_value='extrapolate') z2 = np.interp(0.0, x, y, left=0, right=100).round(1) z = interp(0.0) print(z) print(z2) </code></pre> <p>In this case, both z and z2 return 50.0, when I expect/want 0.0 or 5.0 (depending on extrapolation). Is there anyway to force these to return the minimum possible value, the middle possible value, or any other way to accomplish this?</p>
<p>Both <code>np.interp()</code> and <code>scipy.interpolate.interp1d()</code> require that the x values must be strictly increasing (i.e. <code>x[i+1] &gt; x[i]</code>), and may return nonsense if they aren't. If you want some specific behavior, you need to preprocess your data to get rid of any repeated x values. For example:</p> <pre><code># assuming x and y are already sorted x_fixed, indices = np.unique(x, return_index=True) y_fixed = [np.min(vals) for vals in np.split(y, indices[1:])] </code></pre>
python|numpy|scipy
0
7,209
55,500,094
How to sum up the prediction vectors of a keras model into a single vector
<p>I have 2 keras models. The first gets as input a string and gives a prediction for example, five classes.</p> <p>In the second model I want to use this output. However, the output of the first model should be summed up into a single output for multiple inputs.</p> <p>I want single prediction for the sum of all entered strings and not a prediction for each entered string.</p> <pre><code>model1 = tf.keras.Sequential() model1.add(Input(shape=(len(inputs[0]),), dtype=tf.float32)) model1.add(Dense(256, activation='relu')) model1.add(Dense(len(helper_classes), activation='softmax')) model2 = tf.keras.Sequential() model2.add(model1) model2.add(Dense(16)) model2.add(Dense(len(classes), activation=tf.nn.softmax)) model2.layers[0].trainable = False model2.compile(loss='categorical_crossentropy',optimizer='adam', metrics=['accuracy']) model2.summary() </code></pre> <p>For explanation: the strings are preprocessed to a float vector.</p> <p>Actual output of model1:<br> Input: "Hello","World", ...<br> Output: [0.2, 0, 0, 0.8, 0],[0, 0, 0.4, 0, 0.6], ...</p> <p>What i need:<br> Input: "Hello","World", ...<br> Output: [0.2 + 0.0 + ... , 0 + 0.0 + ... , 0 + 0.4 + ... , 0.8 + 0.0 + ... , 0 + 0.6 + ...]</p> <p><a href="https://i.stack.imgur.com/IiuQV.png" rel="nofollow noreferrer">Image of model1</a><br> <a href="https://i.stack.imgur.com/b0elz.png" rel="nofollow noreferrer">Image of model1 after adding Reduction Layer</a></p> <p><br> <strong>Solution</strong><br> Okay I solved it now. My first mistake was that I summed up on axis 1. What I could fix with the help of vlad. The second mistake was that I did not keep the dimensions with keep_dims = true.</p> <p>The solution was to insert a lambda layer in the second model which basically does what Vlad and Thibault proposed:</p> <pre><code>model2 = tf.keras.Sequential() model2.add(model1) model2.add(Lambda(lambda x: K.sum(x, axis=0,keepdims=True))) model2.add(Dense(16)) model2.add(Dense(len(classes), activation=tf.nn.softmax)) model2.layers[0].trainable = False model2.compile(loss='categorical_crossentropy',optimizer='adam', metrics=['accuracy']) </code></pre>
<p>Use <code>tf.reduce_sum()</code>:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf output = tf.Variable([[0.2, 0.0, 0.0, 0.8, 0],[0.0, 0.0, 0.4, 0, 0.6],]) reduced = tf.reduce_sum(output, axis=0) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(reduced.eval()) # [0.2 0. 0.4 0.8 0.6] </code></pre> <p>To use it within <code>Keras</code> define a custom layer like this:</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras import layers class ReductionLayer(layers.Layer): def __init__(self): super(ReductionLayer, self).__init__() def call(self, inputs): return tf.reduce_sum(inputs, axis=0) </code></pre> <p>and add it to your <code>Sequential()</code> model:</p> <p><code>model.add(ReductionLayer())</code></p>
tensorflow|keras
2
7,210
55,154,163
Pandas replace the values of multiple columns
<p>If the match value is equal to the sample_input,The value in the sample_input is replaced. The merge method now used can match, But don't know how to replace it. There are many duplicate values in the sample being replaced.</p> <p>The sample_data I used upload to the github. <a href="https://github.com/salemilk/test_tim" rel="nofollow noreferrer">sample_data_input</a></p> <pre><code>import pandas as pd #Read file match = pd.read_excel('match.xlsx', sheet_name='Sheet1') replace = pd.read_excel('replace.xlsx', sheet_name='Sheet1') #replace value sample_input = pd.read_excel('sample_input.xlsx', sheet_name='Sheet1') #raw file #column match_col_n1 = ['e', 'i', 'j', 'k', 'l', 'n', 'label'] match_col_n2 = ['e', 'i', 'j', 'k', 'l', 'n'] replace_col_n = ['i', 'j', 'k', 'l', 'label'] #replace sample_input_col_n = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'] #DataFrame match_data = pd.DataFrame(match, columns=match_col_n1) replace_data = pd.DataFrame(replace, columns=replace_col_n) sample_input_data = pd.DataFrame(sample_input, columns=sample_input_col_n) # tmp tmp = sample_input_data.merge(match_data, how='left', on=None, left_on=match_col_n2, right_on=match_col_n2, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None) sample_input_data['label'] = tmp['label'] #for num in match_data.index.values: # label = match_data.loc[num, 'label'] # sample_input_data[sample_input_data['label'] == label][replace_col_n] = replace_data.iloc[num, :].values sample_input_data = sample_input_data.to_excel('output.xlsx', index=False) </code></pre>
<p>Here's a pretty straightforward way of comparing and contrasting two Excel files.</p> <pre><code>import pandas as pd import numpy as np # Next, read in both of our excel files into dataframes df1 = pd.read_excel('C:\\your_path\\Book1.xlsx', 'Sheet1', na_values=['NA']) df2 = pd.read_excel('C:\\your_path\\Book2.xlsx', 'Sheet1', na_values=['NA']) # Order by account number and reindex so that it stays this way. df1.sort_index(by=["H1"]) df1=df1.reindex() df2.sort_index(by=["H1"]) df2=df2.reindex() # Create a diff function to show what the changes are. def report_diff(x): return x[0] if x[0] == x[1] else '{} ---&gt; {}'.format(*x) # Merge the two datasets together in a Panel . I will admit that I haven’t fully grokked the panel concept yet but the only way to learn is to keep pressing on! diff_panel = pd.Panel(dict(df1=df1,df2=df2)) # Once the data is in a panel, we use the report_diff function to highlight all the changes. I think this is a very intuitive way (for this data set) to show changes. It is relatively simple to see what the old value is and the new one. For example, someone could easily check and see why that postal code changed for account number 880043. diff_output = diff_panel.apply(report_diff, axis=0) diff_output.tail() # One of the things we want to do is flag rows that have changes so it is easier to see the changes. We will create a has_change function and use apply to run the function against each row. def has_change(row): if "---&gt;" in row.to_string(): return "Y" else: return "N" diff_output['has_change'] = diff_output.apply(has_change, axis=1) diff_output.tail() # It is simple to show all the columns with a change: diff_output[(diff_output.has_change == 'Y')] # Finally, let’s write it out to an Excel file: diff_output[(diff_output.has_change == 'Y')].to_excel('C:\\your_path\\diff.xlsx') </code></pre> <p><a href="https://pbpython.com/excel-diff-pandas.html" rel="nofollow noreferrer">https://pbpython.com/excel-diff-pandas.html</a></p>
python|pandas
1
7,211
56,561,607
Pandas: How to increment a new column based on increment and consecutive properties of 2 other columns?
<p>I'm currently working on a bulk data pre-processing framework in pandas and since I'm relatively new to pandas, I can't seem to solve this problem:</p> <p><strong>Given:</strong> A dataset with 2 columns :<code>col_1</code>, <code>col_2</code></p> <p><strong>Required:</strong> A new column <code>req_col</code> such that it's value is incremented if<br> a. the values in <code>col_1</code> are not consecutive<br>OR<br>b.the value of <code>col_2</code> is incremented consecutively </p> <p><strong>NOTE:</strong> </p> <ol> <li><code>col_2</code> always starts from <code>1</code> and always increases in value and values are never missing (always consecutive), eg:1,1,2,2,3,3,4,5,6,6,6,7,8,8,9.....</li> <li><code>col_1</code> always starts from <code>0</code> and always increases in value, but some values can be missing (need not be consecutive), eg:0,1,2,2,3,6,6,6,10,10,10...</li> </ol> <p><strong>EXPECTED ANSWER</strong>:</p> <pre><code>col_1 col_2 req_col #Changes in req_col explained below 0 1 1 0 1 1 0 2 2 #because col_2 value has incremented 1 2 2 1 2 2 3 2 3 #because '3' is not consectutive to '1' in col_1 3 3 4 #because of increment in col_2 5 3 5 #because '5' is not consecutive to '3' in col_1 6 4 6 #because of increment in col_2 and so on... 6 4 6 </code></pre>
<p>Try:</p> <pre><code>df['req_col'] = (df['col_1'].diff().gt(1) | # col_1 is not consecutive df['col_2'].diff().ne(0) # col_2 has a jump ).cumsum() </code></pre> <p>Output:</p> <pre><code>0 1 1 1 2 2 3 2 4 2 5 3 6 4 7 5 8 6 9 6 dtype: int32 </code></pre>
python-3.x|pandas
0
7,212
66,823,458
Merging two DF's on shortest date record and delete non-matching date rows
<p>i have two df's that i need to merge into one new df based on the day, month and year of the df with the shortest record of day, month and year. In other words, if the &quot;day&quot;, &quot;month&quot; and &quot;year&quot; columns do not match in the comparison then i need to delete those rows or do not match. The df with the longest record or rows of day, month and year is &quot;ncm&quot; df and looks like this:</p> <pre><code>ncm.head() Out[358]: plant_name month year power_kwh 0 ALBUREJOS 1 2018 2634.583602 1 ALBUREJOS 1 2019 1947.384812 2 ALBUREJOS 1 2020 1787.296640 3 ALBUREJOS 2 2018 1539.008929 4 ALBUREJOS 2 2019 4948.003274 </code></pre> <p>and, the second df that i need to merge with some missing data and shorter number of dates (day, month and year) is df &quot;dfm&quot; and looks like this:</p> <pre><code>dfm.head() Out[359]: plant_name month year power_obs_kwh 0 ALBUREJOS 1 2018 2631.353970 1 ALBUREJOS 1 2019 1931.685916 2 ALBUREJOS 1 2020 1750.192298 3 ALBUREJOS 1 2021 314.000000 4 ALBUREJOS 2 2018 1537.588323 </code></pre> <p>I have tried multiple iterations of things like this below and have reached this error shown also here.</p> <pre><code>new_df = dfm.merge(ncm, left_on=['month','year'], right_on = ['power_kwh'], how='left') </code></pre> <p>error message:</p> <pre><code>ValueError: len(right_on) must equal len(left_on) </code></pre> <p>thank you for your insight.</p>
<p>In <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a>, parameters <code>left_on</code> and <code>right_on</code> must be the columns you want to use to join the two DataFrames, so they have to be the same. In your case since the columns have the same name you can use <code>on</code> instead</p> <pre><code>dfm.merge(ncm, on=['month','year']) </code></pre> <p>for example</p> <pre><code>np.random.seed(42) df_1 = pd.DataFrame({ 'month': np.random.choice(np.arange(1, 13), 100), 'year': np.random.choice(np.arange(2010, 2019), 100), 'some_data_1': np.random.random(100) }) np.random.seed(33) df_2 = pd.DataFrame({ 'month': np.random.choice(np.arange(1, 13), 100), 'year': np.random.choice(np.arange(2010, 2019), 100), 'some_data_2': np.random.random(100) }) </code></pre> <p>and then we simply do</p> <pre><code>df_1.merge( df_2, on=['month', 'year'] ) </code></pre> <p>which gives</p> <pre><code> month year some_data_1 some_data_2 0 7 2018 0.242055 0.646164 1 7 2018 0.649633 0.646164 2 4 2016 0.672136 0.936810 3 11 2018 0.761620 0.419030 4 11 2018 0.761620 0.533564 .. ... ... ... ... 101 9 2010 0.853009 0.856196 102 9 2010 0.853009 0.602498 103 9 2010 0.853009 0.713095 104 5 2015 0.428184 0.377500 105 12 2010 0.294449 0.455945 [106 rows x 4 columns] </code></pre>
python|pandas|merge|multiple-columns|missing-data
1
7,213
66,837,503
In the data frame of probabilities over time return first column name where value is < .5 for each row
<p>Given a pandas data frame like the following where the column names are the time, the rows are each of the subjects, and the values are probabilities return the column name (or time) the first time the probability is less than .50 for each subject in the data frame. The probabilities are always descending from 1-0 I. have tried looping though the data frame but it is not computationally efficient.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>subject id</th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>…</th> <th>669</th> <th>670</th> <th>671</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>0.997913</td> <td>0.993116</td> <td>0.989017</td> <td>0.976157</td> <td>0.973078</td> <td>0.968056</td> <td>0.963685</td> <td>…</td> <td>0.156092</td> <td>0.156092</td> <td>0.156092</td> </tr> <tr> <td>2</td> <td>1</td> <td>0.990335</td> <td>0.988685</td> <td>0.983145</td> <td>0.964912</td> <td>0.958</td> <td>0.952</td> <td>0.946995</td> <td>…</td> <td>0.148434</td> <td>0.148434</td> <td>0.148434</td> </tr> <tr> <td>3</td> <td>1</td> <td>0.996231</td> <td>0.990571</td> <td>0.985775</td> <td>0.976809</td> <td>0.972736</td> <td>0.969633</td> <td>0.966116</td> <td>…</td> <td>0.17037</td> <td>0.17037</td> <td>0.17037</td> </tr> <tr> <td>4</td> <td>1</td> <td>0.997129</td> <td>0.994417</td> <td>0.991054</td> <td>0.978795</td> <td>0.974216</td> <td>0.96806</td> <td>0.963039</td> <td>…</td> <td>0.15192</td> <td>0.15192</td> <td>0.15192</td> </tr> <tr> <td>5</td> <td>1</td> <td>0.997728</td> <td>0.993598</td> <td>0.986641</td> <td>0.98246</td> <td>0.977371</td> <td>0.972874</td> <td>0.96816</td> <td>…</td> <td>0.154545</td> <td>0.154545</td> <td>0.154545</td> </tr> <tr> <td>6</td> <td>1</td> <td>0.998134</td> <td>0.995564</td> <td>0.989901</td> <td>0.986941</td> <td>0.982313</td> <td>0.972951</td> <td>0.969645</td> <td>…</td> <td>0.17473</td> <td>0.17473</td> <td>0.17473</td> </tr> <tr> <td>7</td> <td>1</td> <td>0.995681</td> <td>0.994131</td> <td>0.990401</td> <td>0.974494</td> <td>0.967941</td> <td>0.961859</td> <td>0.956636</td> <td>…</td> <td>0.144753</td> <td>0.144753</td> <td>0.144753</td> </tr> <tr> <td>8</td> <td>1</td> <td>0.997541</td> <td>0.994904</td> <td>0.991941</td> <td>0.983389</td> <td>0.979375</td> <td>0.973158</td> <td>0.966358</td> <td>…</td> <td>0.158763</td> <td>0.158763</td> <td>0.158763</td> </tr> <tr> <td>9</td> <td>1</td> <td>0.992253</td> <td>0.989064</td> <td>0.979258</td> <td>0.955747</td> <td>0.948842</td> <td>0.942899</td> <td>0.935784</td> <td>…</td> <td>0.150291</td> <td>0.150291</td> <td>0.150291</td> </tr> </tbody> </table> </div> <p>Goal Output</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>subject id</th> <th>time prob &lt; .05</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>100</td> </tr> <tr> <td>2</td> <td>99</td> </tr> <tr> <td>3</td> <td>34</td> </tr> <tr> <td>4</td> <td>19</td> </tr> <tr> <td>5</td> <td>600</td> </tr> <tr> <td>6</td> <td>500</td> </tr> <tr> <td>7</td> <td>222</td> </tr> <tr> <td>8</td> <td>111</td> </tr> <tr> <td>9</td> <td>332</td> </tr> </tbody> </table> </div>
<p>Since the probabilities are always descending you can do this:</p> <pre><code>&gt;&gt;&gt; df.set_index(&quot;subject id&quot;).gt(.98).sum(1) subject id 1 4 2 4 3 4 4 4 5 5 6 6 7 4 8 5 9 3 dtype: int64 </code></pre> <p>note: I'm using <code>.98</code> instead of <code>.5</code> because I'm using only a portion of the data.</p> <hr /> <p>Data used</p> <pre><code>{'subject id': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9}, '0': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1}, '1': {0: 0.997913, 1: 0.990335, 2: 0.996231, 3: 0.997129, 4: 0.997728, 5: 0.998134, 6: 0.995681, 7: 0.997541, 8: 0.992253}, '2': {0: 0.993116, 1: 0.988685, 2: 0.990571, 3: 0.994417, 4: 0.993598, 5: 0.995564, 6: 0.994131, 7: 0.994904, 8: 0.989064}, '3': {0: 0.989017, 1: 0.983145, 2: 0.985775, 3: 0.991054, 4: 0.986641, 5: 0.989901, 6: 0.990401, 7: 0.991941, 8: 0.979258}, '4': {0: 0.976157, 1: 0.964912, 2: 0.976809, 3: 0.978795, 4: 0.98246, 5: 0.986941, 6: 0.974494, 7: 0.983389, 8: 0.955747}, '5': {0: 0.973078, 1: 0.958, 2: 0.972736, 3: 0.974216, 4: 0.977371, 5: 0.982313, 6: 0.967941, 7: 0.979375, 8: 0.948842}, '6': {0: 0.968056, 1: 0.952, 2: 0.969633, 3: 0.96806, 4: 0.972874, 5: 0.972951, 6: 0.961859, 7: 0.973158, 8: 0.942899}, '7': {0: 0.963685, 1: 0.946995, 2: 0.966116, 3: 0.963039, 4: 0.96816, 5: 0.969645, 6: 0.956636, 7: 0.966358, 8: 0.935784}} </code></pre>
python|pandas
2
7,214
47,167,409
Using weights initializer with tf.nn.conv2d
<p>When using <code>tf.layers.conv2d</code>, setting the initializer is easy, it can be done through its parameter. But what if I use <code>tf.nn.conv2d</code>? I use this code. Is this equivalent to setting the <code>kernel_initializer</code> parameter in <code>tf.layers.conv2d</code>? Although the program runs without errors, I don't know how to verify whether it does what it is expected do.</p> <pre><code> with tf.name_scope('conv1_2') as scope: kernel = tf.get_variable(initializer=tf.contrib.layers.xavier_initializer(), shape=[3, 3, 32, 32], name='weights') conv = tf.nn.conv2d(conv1_1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[32], dtype=tf.float32), trainable=True, name='biases') out = tf.nn.bias_add(conv, biases) self.conv1_2 = tf.nn.relu(out, name=scope) self.parameters += [kernel, biases] </code></pre>
<p>The operation underneath is the same (see <a href="https://stackoverflow.com/questions/42785026/tf-nn-conv2d-vs-tf-layers-conv2d">here</a>).</p> <p>As for the kernel and its initialization, I took a glimpse in the code and it <em>looked</em> the same... the <code>layers.conv2d</code> call a <code>tf.get_variable</code> at the end of the day.</p> <p>But I wanted to see it empirically, so here is a test code that declares a conv2d using each method (<code>tf.layers.conv2d</code> and <code>tf.nn.conv2d</code>), evaluates the initialized kernels and compares them.</p> <p>I've arbitrarily set the things that shouldn't interfere in the comparison, such as an input tensor and the strides.</p> <pre><code>import tensorflow as tf import numpy as np # the way you described in your question def _nn(input_tensor, initializer, filters, size): kernel = tf.get_variable( initializer=initializer, shape=[size, size, 32, filters], name='kernel') conv = tf.nn.conv2d( input=input_tensor, filter=kernel, strides=[1, 1, 1, 1], padding='SAME') return kernel # the other way def _layer(input_tensor, initializer, filters, size): tf.layers.conv2d( inputs=input_tensor, filters=filters, kernel_size=size, kernel_initializer=initializer) # 'conv2d/kernel:0' is the name of the generated kernel return tf.get_default_graph().get_tensor_by_name('conv2d/kernel:0') def _get_kernel(method): # an isolated context for each conv2d graph = tf.Graph() sess = tf.Session(graph=graph) with graph.as_default(), sess.as_default(): # important so that same randomness doesnt play a role tf.set_random_seed(42) # arbitrary input tensor with compatible shape input_tensor = tf.constant(1.0, shape=[1, 64, 64, 32]) initializer = tf.contrib.layers.xavier_initializer() kernel = method( input_tensor=input_tensor, initializer=initializer, filters=32, size=3) sess.run(tf.global_variables_initializer()) return sess.run(kernel) if __name__ == '__main__': kernel_nn = _get_kernel(_nn) kernel_layer = _get_kernel(_layer) print('kernels are ', end='') # compares shape and values if np.array_equal(kernel_layer, kernel_nn): print('exactly the same') else: print('not the same!') </code></pre> <p>And the output is... <strong>kernels are exactly the same</strong>.</p> <p>The docs, btw: <a href="https://www.tensorflow.org/api_docs/python/tf/nn/conv2d" rel="nofollow noreferrer">tf.nn.conv2d</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/layers/conv2d" rel="nofollow noreferrer">tf.layers.conv2d</a>.</p>
python|tensorflow|conv-neural-network|initializer
2
7,215
68,283,948
How can i replace a code within my pandas dataframe with a dict mapping?
<p>I have a table like below:</p> <pre><code>Group col1 col2 col3 A shop_101 shop_102 shop_104 B shop_101 shop_105 shop_108 C shop_101 shop_103 shop_109 C shop_111 shop_122 shop_104 </code></pre> <p>I also have a dict which has mappings of these e.g.:</p> <pre><code>{'group_name': {103: 'AUTO', 104: 'BUSINESS', 105: 'STORES', 106: 'DIRECT MARKETING', 107: 'DISCOUNT STORES', 108: 'PHARMACIES', 109: 'GOVERNMENT', 110: 'ELECTRONICS', 112: 'FOOD &amp; GROCERY', 113: 'FUEL', 114: 'GENERAL RETAIL GOODS', 116: 'HEALTH', 124: 'Tfl'}} </code></pre> <p>how can i replace all instances of the code in the dataframe with this mapping from dict?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer"><code>DataFrame.replace</code></a> for substrings replacement values of dict with convert keys to strings:</p> <pre><code>d = {'group_name': {103: 'AUTO', 104: 'BUSINESS', 105: 'STORES', 106: 'DIRECT MARKETING', 107: 'DISCOUNT STORES', 108: 'PHARMACIES', 109: 'GOVERNMENT', 110: 'ELECTRONICS', 112: 'FOOD &amp; GROCERY', 113: 'FUEL', 114: 'GENERAL RETAIL GOODS', 116: 'HEALTH', 124: 'Tfl'}} d1 = {str(k):v for k, v in d['group_name'].items()} df = df.replace(d1, regex=True) print (df) Group col1 col2 col3 0 A shop_101 shop_102 shop_BUSINESS 1 B shop_101 shop_STORES shop_PHARMACIES 2 C shop_101 shop_AUTO shop_GOVERNMENT 3 C shop_111 shop_122 shop_BUSINESS </code></pre> <p>If need specify columns for replacement:</p> <pre><code>cols = ['col1','col2','col3'] d1 = {str(k):v for k, v in d['group_name'].items()} df[cols] = df[cols].replace(d1, regex=True) print (df) Group col1 col2 col3 0 A shop_101 shop_102 shop_BUSINESS 1 B shop_101 shop_STORES shop_PHARMACIES 2 C shop_101 shop_AUTO shop_GOVERNMENT 3 C shop_111 shop_122 shop_BUSINESS </code></pre>
python-3.x|pandas|dataframe
1
7,216
68,422,818
Replace value in dataframe column until specific conditionally changing value is reached
<p>I have a dataframe df which looks like this:</p> <pre><code>data = [[&quot;nota&quot;, &quot;b&quot;], [&quot;notb&quot;, &quot;nota&quot;], [&quot;a&quot;, &quot;b&quot;], [&quot;a&quot;, &quot;notb&quot;], [&quot;notb&quot;, &quot;notb&quot;],[ &quot;nota&quot;, &quot;notb&quot;], [&quot;nota&quot;, &quot;nota&quot;], [&quot;notb&quot;, &quot;a&quot;], [&quot;b&quot;, &quot;notb&quot;], [&quot;b&quot;, &quot;a&quot;], [&quot;nota&quot;, &quot;nota&quot;], [ &quot;notb&quot;, &quot;b&quot;]] df = pd.DataFrame(data, columns = [&quot;status1&quot;, &quot;status2&quot;]) </code></pre> <p>Output looks like:</p> <pre><code>status1 status2 nota b notb nota a b a notb notb notb nota notb nota nota notb a b notb b a nota nota notb b </code></pre> <p>What I would like to do is to iterate through each of the columns seperately (from top to bottom) and replace values (or delete them if that makes it easier) based on some conditions. Starting in the first row, I want to replace all values in the respective column with &quot;empty&quot; until it says a or b. The cells could also be replaced with anything else or get deleted if this makes the whole thing easier since I'm not interested in the altered cells anymore as long as they are not either &quot;a&quot;, &quot;b&quot;, &quot;nota&quot; or &quot;notb&quot;. Then I would like to replace all values with &quot;empty&quot; until nota is reached (in case the first column which was not replaced was &quot;a&quot;). If the first not replaced cell said &quot;b&quot;, I want to replace all cells until there is a &quot;notb&quot;. After that I want to repeat the process and replace every cell in the respective column until &quot;a&quot; or &quot;b&quot; is reached again, and so on.</p> <p>Desired output would look like this:</p> <pre><code> status1 status2 empty b empty empty a empty empty notb empty empty nota empty empty empty empty a b empty empty empty empty nota notb b </code></pre> <p>Please note that I have several other columns that I don't want to treat like those. It is important that it is not possible that there is a &quot;b&quot; after an &quot;a&quot; without an &quot;nota&quot; between them and vice versa.</p> <p>Thank you very much in advance if anyone could help on this.</p>
<p>Try:</p> <pre class="lang-py prettyprint-override"><code>def process_status(x): out, cur, cur_end = [], None, None for v in x: if cur is None and v in {&quot;a&quot;, &quot;b&quot;}: cur, cur_end = v, {&quot;a&quot;: &quot;nota&quot;, &quot;b&quot;: &quot;notb&quot;}[v] out.append(v) elif cur_end == v: cur, cur_end = None, None out.append(v) else: out.append(&quot;Empty&quot;) return out df[&quot;status1&quot;] = process_status(df[&quot;status1&quot;]) df[&quot;status2&quot;] = process_status(df[&quot;status2&quot;]) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> status1 status2 0 Empty b 1 Empty Empty 2 a Empty 3 Empty notb 4 Empty Empty 5 nota Empty 6 Empty Empty 7 Empty a 8 b Empty 9 Empty Empty 10 Empty nota 11 notb b </code></pre>
python|pandas|dataframe
2
7,217
59,343,283
ModuleNotFoundError: No module named 'torch_scope'
<p>Using the macOS terminal, I'm trying to run <code>./autoner_train.sh</code> by following <a href="https://github.com/shangjingbo1226/AutoNER#command" rel="nofollow noreferrer">this guide</a> on GitHub.</p> <p>I have activated my <code>Conda</code> environment and check my <code>PyTorch</code> version</p> <pre><code>(pytorch_env) myname (master) AutoNER $ python -c "import torch; print(torch.__version__)" 1.3.1 </code></pre> <p>After that, when running, I get the following error</p> <blockquote> <p>ModuleNotFoundError: No module named 'torch_scope'</p> </blockquote> <p>I don't know where's the problem. I have installed everything and I tried googling the problem, all I found is that I need <code>PyTorch</code> installed, which I already have.</p>
<p>In the documentation, in the <a href="https://github.com/shangjingbo1226/AutoNER#dependencies" rel="nofollow noreferrer">Dependencies</a> section, you can read:</p> <blockquote> <p><strong>Dependencies</strong> </p> <p>This project is based on <code>python&gt;=3.6</code>. The dependent package for this project is listed as below:</p> <p><code>numpy==1.13.1 tqdm torch-scope&gt;=0.5.0 pytorch==0.4.1</code></p> </blockquote> <p>So you need to install <code>torch-scope&gt;=0.5.0</code> too:</p> <pre><code>pip install torch-scope </code></pre>
python|anaconda|pytorch|conda
0
7,218
59,415,014
Extracting numbers using regex in dataframe for heights (ft,in)
<p>I am trying to extract the numbers from a column in my Pandas data frame <code>[height]</code> using regular expressions. The data in the column is listed as a string using ft and in: e.g."<code>5ft 6in</code>". In order to visualize this data for future analysis I need to convert this format to be entirely in inches and as an integer. So far, I have successfully created a column <code>height_feet</code> using the first line of code below. However, I am having trouble extracting the inches <code>height_in</code>.</p> <pre><code> modcloth_df = modcloth_df.assign(height_feet = modcloth_df['height'].str.extract('(\d+)')) modcloth_df = modcloth_df.assign(height_in = modcloth_df['height'].str.extract('((\d+)in)')) modcloth_df.head() </code></pre> <p>This results in a traceback:</p> <pre><code>ValueError: Wrong number of items passed 2, placement implies 1 </code></pre> <p>This traces back to the second line for extracting inches. I want to then assign a column as the total_height using the two integers.</p>
<ul> <li>Use <a href="https://docs.python.org/3/library/re.html#re.findall" rel="nofollow noreferrer"><code>re.findall</code></a> to extract the digits from your given format</li> <li>Convert the values to <code>int</code>, calculate the value in inches and return it</li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd import re # create dataframe df = pd.DataFrame({'height': ['5ft 6in', '6ft 0in']}) # function to extract numbers, convert and return inches def convert_to_inches(x): values = re.findall(r'\d+', x) return int(values[0]) * 12 + int(values[1]) # apply the function df['height_in'] = df.height.apply(convert_to_inches) # output height height_in 0 5ft 6in 66 1 6ft 0in 72 </code></pre> <ul> <li>If there are cases where the <code>height</code> column does not include <code>in</code></li> </ul> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'height': ['5ft 6in', '6ft 0in', '6ft']}) def convert_to_inches(x): values = re.findall(r'\d+', x) ft = int(values[0]) try: inches = int(values[1]) except IndexError: inches = 0 return ft * 12 + inches df['height_in'] = df.height.apply(convert_to_inches) # output height height_in 0 5ft 6in 66 1 6ft 0in 72 2 6ft 72 </code></pre>
python|regex|pandas|dataframe|extract
1
7,219
59,460,167
Python Dataframe iterate over rows ( compare a values between them) and prepare a groups as output
<p>I have a dataframe like this I want to group them by url and status and split a records by date, is it a more efficient way to do that?</p> <pre><code>def transform_to_unique(df): test = [] counter = 0 #first_row if df.loc[0, 'status']!= df.loc[1, 'status']: counter = counter +1 test.append(counter) for i in range(1, len(df)): if df.loc[i-1, 'url']!= df.loc[i, 'url']: counter=0 if df.loc[i-1, 'status']!= df.loc[i, 'status'] : counter = counter +1 test.append(counter) df['test'] = pd.Series(test) return df df = transform_to_unique(frame) df_g = df.groupby(['url', 'status', 'test'])['date_scraped'].agg({min, max}) </code></pre> <p><a href="https://i.stack.imgur.com/shfW0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/shfW0.png" alt="ouptut from script"></a></p> <p>Here is a dataframe:</p> <pre>1000,20191109,active 1000,20191108,inactive 2000,20191109,active 2000,20191101,inactive 351,20191109,active 351,20191102,active 351,20191026,active 351,20191019,active 351,20191012,active 351,20191005,active 351,20190928,inactive 351,20190921,inactive 351,20190914,inactive 351,20190907,active 351,20190831,active 351,20190615,inactive 3000,20200101,active</pre> <pre><code>import pandas as pd frame =pd.read_clipboard(sep=",", header=None) frame.columns = ['url', 'date_scraped', 'status'] </code></pre>
<p>I'm not sure, whether I'm getting correctly where are you heading with the <code>test</code> column, but is this what you want to achieve (based on the sample data, you posted):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np df.sort_values(["url", "date_scraped"], axis=0, ascending=True, inplace=True) df["date_scraped_till"]=np.where(df["url"]==df["url"].shift(-1), df["date_scraped"].shift(-1), np.nan).astype(np.int32) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code> url date_scraped status date_scraped_till 15 351 20190615 inactive 20190831 14 351 20190831 active 20190907 13 351 20190907 active 20190914 12 351 20190914 inactive 20190921 11 351 20190921 inactive 20190928 10 351 20190928 inactive 20191005 9 351 20191005 active 20191012 8 351 20191012 active 20191019 7 351 20191019 active 20191026 6 351 20191026 active 20191102 5 351 20191102 active 20191109 4 351 20191109 active 0 1 1000 20191108 inactive 20191109 0 1000 20191109 active 0 3 2000 20191101 inactive 20191109 2 2000 20191109 active 0 16 3000 20200101 active 0 </code></pre> <p><strong>Edit</strong></p> <p>If instead of "splitted" you meant "collapsed", this should do the trick (it's basically more efficient way of doing your <code>test</code> column):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np df.sort_values(["url", "date_scraped"], axis=0, ascending=True, inplace=True) df["test"]=np.where((df["url"]==df["url"].shift(1)) &amp; (df["status"]==df["status"].shift(1)), 0,1) df["test"]=df.groupby(["url", "status", "test"])["test"].cumsum().replace(to_replace=0, method='ffill') df_g = df.groupby(['url', 'status', 'test'])['date_scraped'].agg({min, max}) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code> max min url status test 351 active 1 20190907 20190831 2 20191109 20191005 inactive 1 20190615 20190615 2 20190928 20190914 1000 active 1 20191109 20191109 inactive 1 20191108 20191108 2000 active 1 20191109 20191109 inactive 1 20191101 20191101 3000 active 1 20200101 20200101 </code></pre>
python|pandas|dataframe
1
7,220
45,772,006
Perform Conditional Grouping and selecting second best row using Cumcount in Pandas
<p>Here is the data that I have:</p> <pre><code>ID Vehicle Calculator Offer NextCalculator NextOffer 3497827 2002 Ford Explorer Manheim Salvage 190 Copart 190 3497827 2002 Ford Explorer Manheim Salvage 190 IAA 140 3497827 2002 Ford Explorer Manheim Salvage 190 Manheim Salvage 190 3497827 2002 Ford Explorer Manheim Salvage 190 SVP 55 3497828 2003 Honda CRV Manheim Salvage 320 Copart 150 3497828 2003 Honda CRV Manheim Salvage 320 IAA 320 3497828 2003 Honda CRV Manheim Salvage 320 Manheim Salvage 320 3497828 2003 Honda CRV Manheim Salvage 320 SVP 200 </code></pre> <p>What I want to do is find out which is the next best calculator offer for each vehicle? E.g. for 3497827, next best offer is Copart - 190 (not considering Manheim Salvage since we want the next after it) and for 3497828 next best offer would be IAA - 320.</p> <p>So far I have done </p> <pre><code>df = df.sort_values(['ID', 'NextOffer'], ascending=False) df1 = df[df.groupby('ID').cumcount() == 1] </code></pre> <p>which gives me:</p> <pre><code>ID Vehicle Calculator Offer NextCalculator NextOffer 3497827 2002 Ford Explorer Manheim Salvage 190 Manheim Salvage 190 3497828 2003 Honda CRV Manheim Salvage 320 IAA 320 </code></pre> <p>It gives me correct result only if the <code>NextOffers</code> are lesser than the Offer value, but not if <code>NextOffer</code> is same as Offer.</p> <p>What I want is:</p> <pre><code>ID Vehicle Calculator Offer NextCalculator NextOffer 3497827 2002 Ford Explorer Manheim Salvage 190 Copart 190 3497828 2003 Honda CRV Manheim Salvage 320 IAA 320 </code></pre> <p>So my guess is that first I would have to do cumcount() == 0 and if for that row <code>NextCalculator</code> is same as <code>Calculator</code> then I would have to get the second row using cumcount() == 1. Any help in how can I do this or is there any efficient way to get the desired output?</p>
<p>IIUC:</p> <pre><code>In [21]: df.loc[df.query("Calculator != NextCalculator") .groupby('ID', as_index=False).NextOffer.idxmax()] Out[21]: ID Vehicle Calculator Offer NextCalculator NextOffer 0 3497827 2002 Ford Explorer Manheim Salvage 190 Copart 190 5 3497828 2003 Honda CRV Manheim Salvage 320 IAA 320 </code></pre>
python|pandas|grouping
2
7,221
46,112,795
python while loop to combine and delete repeated rows
<p>I got a dataframe like:</p> <pre><code> Type: Volume: Date: Price:.... Q 10 2016.6.1 10 Q 20 2016.6.1 20 T 10 2016.6.2 Q 10 2016.6.3 T 20 2016.6.4 T 20 2016.6.5 Q 10 2016.6.6 </code></pre> <p><a href="https://i.stack.imgur.com/bjsiU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bjsiU.png" alt="here is the full dataframe"></a></p> <p>and I want to add up the value of 'volume' only if two(or more) Ts are consecutive and delete one of the row</p> <p>i.e. to :</p> <pre><code> Q 10 2016.6.1 Q 20 2016.6.1 T 10 2016.6.2 Q 10 2016.6.3 T 20+20=40 2016.6.4 Q 10 2016.6.6 </code></pre> <p>now I'm using a if loop:</p> <pre><code>l = len(df) Volume = df['Volume'] Type = df['Type'] for i in range(2,l-1): if Type[i] == 'Trade': if Type[i] == 'Trade' and Type[i+1] == 'Trade' : Volume[i] = Volume[i]+Volume[i+1] df = np.delete(fd, (i), axis=0) </code></pre> <p>However, I am getting an error:</p> <pre><code>ValueError: Shape of passed values is (8, 303540), indices imply (8, 303541) </code></pre> <p>Also, I would like to change the 'if' loop to a 'while' loop so I can handle data more easily if there are more than two consecutive type 'Trade' data</p>
<p>If you want to edit an iterable while looping over it, it's generally safer to work on a copy of the data inside the loop and replace the original with that updated copy afterwards. This avoids Python getting confused about its position in the iteration (which is the problem that seems hinted at in your error, as it complains about indices).</p>
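<p>A minimal sketch of that idea applied to the sample data from the question (column names are taken from the question; everything else is an assumption): collect the combined rows in a new list while reading the original frame, then build the result from that list instead of deleting rows from the frame you are iterating over.</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Type': ['Q', 'Q', 'T', 'Q', 'T', 'T', 'Q'],
                   'Volume': [10, 20, 10, 10, 20, 20, 10],
                   'Date': ['2016.6.1', '2016.6.1', '2016.6.2', '2016.6.3',
                            '2016.6.4', '2016.6.5', '2016.6.6']})

rows = []                                    # the updated copy is built here
for _, row in df.iterrows():
    if rows and row['Type'] == 'T' and rows[-1]['Type'] == 'T':
        rows[-1]['Volume'] += row['Volume']  # fold consecutive T rows together
    else:
        rows.append(row.copy())

result = pd.DataFrame(rows).reset_index(drop=True)
</code></pre>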
python|numpy
1
7,222
50,755,586
How to loop large parquet file with generators in python?
<p>Is it possible to open parquet files and iterate line by line, using generators? This is to avoid loading the whole parquet file into memory.</p> <p>The content of the file is pandas DataFrame.</p>
<p>You cannot iterate line by line, because that is not how the data is stored. You can, however, iterate through the row groups, each of which is returned as a pandas DataFrame:</p> <pre><code>from fastparquet import ParquetFile pf = ParquetFile('myfile.parq') for df in pf.iter_row_groups(): pass # process each sub-DataFrame df here </code></pre>
python|pandas|dataframe|generator|parquet
7
7,223
50,708,793
tensorflow object detection export_inference_graph.py ckpt name
<p>Does <code>export_inference_graph.py</code> need an exact checkpoint number, or is there a way to run it so that it will use the highest numbered checkpoint in a directory?</p>
<p>It needs the exact checkpoint number in the command in order to find the correct file.</p>
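<p>If the goal is to avoid typing the number by hand, one hedged option (the <code>training/</code> directory and the flag name below are assumptions based on the Object Detection API's usual layout) is to look up the newest checkpoint first and pass its prefix to the script, e.g. via <code>--trained_checkpoint_prefix</code>:</p> <pre><code>import tensorflow as tf

# reads the directory's "checkpoint" bookkeeping file and returns the newest prefix,
# e.g. 'training/model.ckpt-20000'
latest = tf.train.latest_checkpoint('training/')
print(latest)
</code></pre>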
tensorflow|export|object-detection
0
7,224
51,083,978
Indicate whether datetime of row is in a daterange
<p>I'm trying to get dummy variables for holidays in a dataset. I have a couple of date ranges (<code>pd.date_range()</code>) with holidays and a dataframe to which I would like to append a dummy to indicate whether the datetime of that row is in a certain date range of the specified holidays.</p> <p>Small example:</p> <pre><code>ChristmasBreak = list(pd.date_range('2014-12-20','2015-01-04').date) dates = pd.date_range('2015-01-03', '2015-01-06', freq='H') d = {'Date': dates, 'Number': np.random.rand(len(dates))} df = pd.DataFrame(data=d) df.set_index('Date', inplace=True) for i, row in df.iterrows(): if i in ChristmasBreak: df[i,'Christmas'] = 1 </code></pre> <p>The <code>if</code> condition is never true, so matching the dates won't work. Is there any way to do this? Alternative methods to arrive at dummies for this case are welcome as well!</p>
<p>First dont use <strong>iterrows</strong>, because <a href="https://stackoverflow.com/a/24871316/2901002">really slow</a>.</p> <p>Better is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow noreferrer"><code>dt.date</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series,isin</code></a>, last convert boolean mask to integer - <code>True</code>s are <code>1</code>:</p> <pre><code>df = pd.DataFrame(data=d) df['Christmas'] = df['Date'].dt.date.isin(ChristmasBreak).astype(int) </code></pre> <p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow noreferrer"><code>between</code></a>:</p> <pre><code>df['Christmas'] = df['Date'].between('2014-12-20', '2015-01-04').astype(int) </code></pre> <p>If want compare with <code>DatetimeIndex</code>:</p> <pre><code>df = pd.DataFrame(data=d) df.set_index('Date', inplace=True) df['Christmas'] = df.index.date.isin(ChristmasBreak).astype(int) df['Christmas'] = ((df.index &gt; '2014-12-20') &amp; (df.index &lt; '2015-01-04')).astype(int) </code></pre> <p><strong>Sample</strong>:</p> <pre><code>ChristmasBreak = pd.date_range('2014-12-20','2015-01-04').date dates = pd.date_range('2014-12-19 20:00', '2014-12-20 05:00', freq='H') d = {'Date': dates, 'Number': np.random.randint(10, size=len(dates))} df = pd.DataFrame(data=d) df['Christmas'] = df['Date'].dt.date.isin(ChristmasBreak).astype(int) print (df) Date Number Christmas 0 2014-12-19 20:00:00 6 0 1 2014-12-19 21:00:00 7 0 2 2014-12-19 22:00:00 0 0 3 2014-12-19 23:00:00 9 0 4 2014-12-20 00:00:00 1 1 5 2014-12-20 01:00:00 3 1 6 2014-12-20 02:00:00 1 1 7 2014-12-20 03:00:00 8 1 8 2014-12-20 04:00:00 2 1 9 2014-12-20 05:00:00 1 1 </code></pre>
python-3.x|pandas|datetime-format|date-range
2
7,225
50,758,110
Warm start with distribute.MirroredStrategy and tf.Estimator
<p>I'm trying run a multi-gpus training using MirroredStartegy and tf.Estimator. The first attempt is to use <code>tf.train.init_from_chekpoint</code> in the estimator <code>model_fn</code> as follow</p> <pre><code>def model_fn(features, labels, mode, params): ..... tf.train.init_from_checkpoint(params['resnet_checkpoint'], {'/': 'resnet50/'}) .... </code></pre> <p>This throws the following error </p> <pre><code>.../tensorflow/contrib/distribute/python/values.py", line 285, in _get_update_device "Use DistributionStrategy.update() to modify a MirroredVariable.") </code></pre> <p>The next attempt is to use <code>tf.estimator.WarmStartSetting</code></p> <pre><code>ws = tf.estimator.WarmStartSettings( ckpt_to_initialize_from=params['resnet_checkpoint'], vars_to_warm_start='resnet50.*', var_name_to_prev_var_name=var_name_to_prev_var_name ) session_config = tf.ConfigProto(allow_soft_placement=True) if FLAGS.num_gpus == 0: distribution = tf.contrib.distribute.OneDeviceStrategy('device:CPU:0') elif FLAGS.num_gpus == 1: distribution = tf.contrib.distribute.OneDeviceStrategy('device:GPU:0') else: distribution = tf.contrib.distribute.MirroredStrategy( num_gpus=FLAGS.num_gpus ) run_config = tf.estimator.RunConfig(train_distribute=distribution, session_config=session_config) estimator = tf.estimator.Estimator( model_fn=model_function, params=params, config=run_config, model_dir=FLAGS.model_dir, warm_start_from=ws ) </code></pre> <p>Again, this throws an error</p> <pre><code>TypeError: var MUST be one of the following: a Variable, list of Variable or PartitionedVariable, but is &lt;class 'tensorflow.contrib.distribute.python.values.MirroredVariable'&gt; </code></pre> <p>Any ideas to fix one of these two approaches ?</p>
<p>Restoring from checkpoints using the 2 mechanisms you tried is unfortunately not yet supported in MirroredStrategy. I've filed a github issue to track this <a href="https://github.com/tensorflow/tensorflow/issues/19958" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/19958</a>. Please follow this issue for progress. </p>
tensorflow
0
7,226
66,534,544
Reimplementing bert-style pooler throws shape error as if length-dimension were still needed
<p>I have trained an off-the-shelf Transformer().</p> <p>Now I want to use the encoder in order to build a classifier. For that I want to only use the first token's output (bert-style cls-token-result) and run that through a dense layer.</p> <p>What I do:</p> <pre><code>tl.Serial(encoder, tl.Fn('pooler', lambda x: (x[:, 0, :])), tl.Dense(7)) </code></pre> <p><strong>Shapes:</strong> The <strong>encoder gives me shape (64, 50, 512)</strong> with <code>64 = batch_size, </code> <code>50 = seq_len,</code> <code>512 = model_dim</code></p> <p>The <strong>pooler gives me shape (64, 512)</strong> which is as expected and desired.</p> <p>The dense layer is supposed to take the 512 dimensions for each batchmember and classify over 7 classes. But I guess trax/jax still expects this to have length seq_len (50).</p> <pre><code>TypeError: dot_general requires contracting dimensions to have the same shape, got [512] and [50]. </code></pre> <p>What do I miss?</p> <p>Full traceback:</p> <pre><code>Traceback (most recent call last): File &quot;mikado_classes.py&quot;, line 2054, in &lt;module&gt; app.run(main) File &quot;/root/.local/lib/python3.7/site-packages/absl/app.py&quot;, line 300, in run _run_main(main, args) File &quot;/root/.local/lib/python3.7/site-packages/absl/app.py&quot;, line 251, in _run_main sys.exit(main(argv)) File &quot;mikado_classes.py&quot;, line 1153, in main loop_neu.run(2) File &quot;/root/.local/lib/python3.7/site-packages/trax/supervised/training.py&quot;, line 361, in run loss, optimizer_metrics = self._run_one_step(task_index, task_changed) File &quot;/root/.local/lib/python3.7/site-packages/trax/supervised/training.py&quot;, line 483, in _run_one_step batch, rng, step=step, learning_rate=learning_rate File &quot;/root/.local/lib/python3.7/site-packages/trax/optimizers/trainer.py&quot;, line 134, in one_step (weights, self._slots), step, self._opt_params, batch, state, rng) File &quot;/root/.local/lib/python3.7/site-packages/trax/optimizers/trainer.py&quot;, line 173, in single_device_update_fn batch, weights, state, rng) File &quot;/root/.local/lib/python3.7/site-packages/trax/layers/base.py&quot;, line 549, in pure_fn self._caller, signature(x), trace) from None jax._src.traceback_util.FilteredStackTrace: trax.layers.base.LayerError: Exception passing through layer Serial (in pure_fn): layer created in file [...]/trax/supervised/training.py, line 865 layer input shapes: (ShapeDtype{shape:(64, 50), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}) File [...]/trax/layers/combinators.py, line 88, in forward outputs, s = layer.pure_fn(inputs, w, s, rng, use_cache=True) LayerError: Exception passing through layer Serial (in pure_fn): layer created in file [...]/mikado_classes.py, line 1134 layer input shapes: (ShapeDtype{shape:(64, 50), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}) File [...]/trax/layers/combinators.py, line 88, in forward outputs, s = layer.pure_fn(inputs, w, s, rng, use_cache=True) LayerError: Exception passing through layer Dense_7 (in pure_fn): layer created in file [...]/mikado_classes.py, line 1133 layer input shapes: ShapeDtype{shape:(64, 512), dtype:float32} File [...]/trax/layers/assert_shape.py, line 122, in forward_wrapper y = forward(self, x, *args, **kwargs) File [...]/trax/layers/core.py, line 95, in forward return jnp.dot(x, w) + b # Affine map. 
File [...]/_src/numpy/lax_numpy.py, line 3498, in dot return lax.dot_general(a, b, (contract_dims, batch_dims), precision) File [...]/_src/lax/lax.py, line 674, in dot_general preferred_element_type=preferred_element_type) File [...]/site-packages/jax/core.py, line 282, in bind out = top_trace.process_primitive(self, tracers, params) File [...]/jax/interpreters/ad.py, line 285, in process_primitive primal_out, tangent_out = jvp(primals_in, tangents_in, **params) File [...]/jax/interpreters/ad.py, line 458, in standard_jvp val_out = primitive.bind(*primals, **params) File [...]/site-packages/jax/core.py, line 282, in bind out = top_trace.process_primitive(self, tracers, params) File [...]/jax/interpreters/partial_eval.py, line 140, in process_primitive return self.default_process_primitive(primitive, tracers, params) File [...]/jax/interpreters/partial_eval.py, line 147, in default_process_primitive return primitive.bind(*consts, **params) File [...]/site-packages/jax/core.py, line 282, in bind out = top_trace.process_primitive(self, tracers, params) File [...]/jax/interpreters/partial_eval.py, line 1058, in process_primitive out_avals = primitive.abstract_eval(*avals, **params) File [...]/_src/lax/lax.py, line 1992, in standard_abstract_eval shapes, dtypes = shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs) File [...]/_src/lax/lax.py, line 3090, in _dot_general_shape_rule raise TypeError(msg.format(lhs_contracting_shape, rhs_contracting_shape)) TypeError: dot_general requires contracting dimensions to have the same shape, got [512] and [50]. The stack trace above excludes JAX-internal frames. The following is the original exception that occurred, unmodified. -------------------- The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;mikado_classes.py&quot;, line 2054, in &lt;module&gt; app.run(main) File &quot;/root/.local/lib/python3.7/site-packages/absl/app.py&quot;, line 300, in run _run_main(main, args) File &quot;/root/.local/lib/python3.7/site-packages/absl/app.py&quot;, line 251, in _run_main sys.exit(main(argv)) File &quot;mikado_classes.py&quot;, line 1153, in main loop_neu.run(2) File &quot;/root/.local/lib/python3.7/site-packages/trax/supervised/training.py&quot;, line 361, in run loss, optimizer_metrics = self._run_one_step(task_index, task_changed) File &quot;/root/.local/lib/python3.7/site-packages/trax/supervised/training.py&quot;, line 483, in _run_one_step batch, rng, step=step, learning_rate=learning_rate File &quot;/root/.local/lib/python3.7/site-packages/trax/optimizers/trainer.py&quot;, line 134, in one_step (weights, self._slots), step, self._opt_params, batch, state, rng) File &quot;/root/.local/lib/python3.7/site-packages/jax/_src/traceback_util.py&quot;, line 139, in reraise_with_filtered_traceback return fun(*args, **kwargs) File &quot;/root/.local/lib/python3.7/site-packages/jax/api.py&quot;, line 398, in f_jitted return cpp_jitted_f(context, *args, **kwargs) File &quot;/root/.local/lib/python3.7/site-packages/jax/api.py&quot;, line 295, in cache_miss donated_invars=donated_invars) File &quot;/root/.local/lib/python3.7/site-packages/jax/core.py&quot;, line 1275, in bind return call_bind(self, fun, *args, **params) File &quot;/root/.local/lib/python3.7/site-packages/jax/core.py&quot;, line 1266, in call_bind outs = primitive.process(top_trace, fun, tracers, params) File &quot;/root/.local/lib/python3.7/site-packages/jax/core.py&quot;, line 1278, in process return trace.process_call(self, fun, 
tracers, params) File &quot;/root/.local/lib/python3.7/site-packages/jax/core.py&quot;, line 631, in process_call return primitive.impl(f, *tracers, **params) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/xla.py&quot;, line 581, in _xla_call_impl *unsafe_map(arg_spec, args)) File &quot;/root/.local/lib/python3.7/site-packages/jax/linear_util.py&quot;, line 260, in memoized_fun ans = call(fun, *args) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/xla.py&quot;, line 656, in _xla_callable jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(fun, abstract_args) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/partial_eval.py&quot;, line 1216, in trace_to_jaxpr_final jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/partial_eval.py&quot;, line 1196, in trace_to_subjaxpr_dynamic ans = fun.call_wrapped(*in_tracers) File &quot;/root/.local/lib/python3.7/site-packages/jax/linear_util.py&quot;, line 166, in call_wrapped ans = self.f(*args, **dict(self.params, **kwargs)) File &quot;/root/.local/lib/python3.7/site-packages/trax/optimizers/trainer.py&quot;, line 173, in single_device_update_fn batch, weights, state, rng) File &quot;/root/.local/lib/python3.7/site-packages/jax/_src/traceback_util.py&quot;, line 139, in reraise_with_filtered_traceback return fun(*args, **kwargs) File &quot;/root/.local/lib/python3.7/site-packages/jax/api.py&quot;, line 810, in value_and_grad_f ans, vjp_py, aux = _vjp(f_partial, *dyn_args, has_aux=True) File &quot;/root/.local/lib/python3.7/site-packages/jax/api.py&quot;, line 1918, in _vjp out_primal, out_vjp, aux = ad.vjp(flat_fun, primals_flat, has_aux=True) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/ad.py&quot;, line 116, in vjp out_primals, pvals, jaxpr, consts, aux = linearize(traceable, *primals, has_aux=True) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/ad.py&quot;, line 101, in linearize jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals) File &quot;/root/.local/lib/python3.7/site-packages/jax/interpreters/partial_eval.py&quot;, line 506, in trace_to_jaxpr jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals) File &quot;/root/.local/lib/python3.7/site-packages/jax/linear_util.py&quot;, line 166, in call_wrapped ans = self.f(*args, **dict(self.params, **kwargs)) File &quot;/root/.local/lib/python3.7/site-packages/trax/layers/base.py&quot;, line 549, in pure_fn self._caller, signature(x), trace) from None trax.layers.base.LayerError: Exception passing through layer Serial (in pure_fn): layer created in file [...]/trax/supervised/training.py, line 865 layer input shapes: (ShapeDtype{shape:(64, 50), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}) File [...]/trax/layers/combinators.py, line 88, in forward outputs, s = layer.pure_fn(inputs, w, s, rng, use_cache=True) LayerError: Exception passing through layer Serial (in pure_fn): layer created in file [...]/mikado_classes.py, line 1134 layer input shapes: (ShapeDtype{shape:(64, 50), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}, ShapeDtype{shape:(64, 1), dtype:int32}) File [...]/trax/layers/combinators.py, line 88, in forward outputs, s = layer.pure_fn(inputs, w, s, rng, use_cache=True) LayerError: Exception passing through layer Dense_7 (in pure_fn): layer created in file [...]/mikado_classes.py, line 1133 layer input shapes: 
ShapeDtype{shape:(64, 512), dtype:float32} File [...]/trax/layers/assert_shape.py, line 122, in forward_wrapper y = forward(self, x, *args, **kwargs) File [...]/trax/layers/core.py, line 95, in forward return jnp.dot(x, w) + b # Affine map. File [...]/_src/numpy/lax_numpy.py, line 3498, in dot return lax.dot_general(a, b, (contract_dims, batch_dims), precision) File [...]/_src/lax/lax.py, line 674, in dot_general preferred_element_type=preferred_element_type) File [...]/site-packages/jax/core.py, line 282, in bind out = top_trace.process_primitive(self, tracers, params) File [...]/jax/interpreters/ad.py, line 285, in process_primitive primal_out, tangent_out = jvp(primals_in, tangents_in, **params) File [...]/jax/interpreters/ad.py, line 458, in standard_jvp val_out = primitive.bind(*primals, **params) File [...]/site-packages/jax/core.py, line 282, in bind out = top_trace.process_primitive(self, tracers, params) File [...]/jax/interpreters/partial_eval.py, line 140, in process_primitive return self.default_process_primitive(primitive, tracers, params) File [...]/jax/interpreters/partial_eval.py, line 147, in default_process_primitive return primitive.bind(*consts, **params) File [...]/site-packages/jax/core.py, line 282, in bind out = top_trace.process_primitive(self, tracers, params) File [...]/jax/interpreters/partial_eval.py, line 1058, in process_primitive out_avals = primitive.abstract_eval(*avals, **params) File [...]/_src/lax/lax.py, line 1992, in standard_abstract_eval shapes, dtypes = shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs) File [...]/_src/lax/lax.py, line 3090, in _dot_general_shape_rule raise TypeError(msg.format(lhs_contracting_shape, rhs_contracting_shape)) TypeError: dot_general requires contracting dimensions to have the same shape, got [512] and [50]. </code></pre>
<p>The mistake was not in the architecture. The problem was that my <strong>inputs were not shaped correctly</strong>.</p> <p>The target should have been of shape (batch_size, ) but I sent (batch_size, 1). So a target array should have been, e.g.:</p> <pre><code>[1, 5, 99, 2, 1, 3, 2, 8] </code></pre> <p>but I produced</p> <pre><code>[[1], [5], [99], [2], [1], [3], [2], [8]]. </code></pre>
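<p>For illustration, a minimal sketch of the shape fix described above, assuming the labels are held in a NumPy array called <code>targets</code> (a hypothetical name) before being fed to the training loop:</p> <pre><code>import numpy as np

# hypothetical targets of shape (batch_size, 1)
targets = np.array([[1], [5], [99], [2], [1], [3], [2], [8]])

# drop the trailing length-1 axis so the shape becomes (batch_size,)
targets = np.squeeze(targets, axis=-1)
print(targets.shape)  # (8,)
</code></pre>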
tensorflow|bert-language-model|jax|trax
0
7,227
57,404,843
Importing json files into pandas dataframe
<p>I have a couple json files which look like this:</p> <pre><code>data = {"75575": {"name": "Dummy name 1", "season": "", "ep": "", "channel": "Dummy channel 1", "Schedule": ["2017-05-11", "2019-04-30", "", "", "2019-08-01", "2019-08-31", "2017-05-11", "2019-04-30", "", ""]}, "115324": {"name": "Dummy name 2", "season": "", "ep": "", "channel": "Dummy channel 2", "Schedule": ["2017-05-09", "2019-05-31", "2017-05-09", "2019-05-31", "", "", "", "", "2019-09-01", "2019-09-30"]},} </code></pre> <p>I tried to use <code>json_normalize(data)</code> but it resulted in <code>[1 rows x 10 columns]</code>, so I am using the below workaround:</p> <pre><code>import pandas as pd df = pd.DataFrame() for k, v in data.items(): x = pd.Series(["Dummy genre",k, v.get("name"), v.get("season"), v.get("ep"), v.get("channel"), *v.get("Schedule")], index=("Genre", "ID", "Name", "Season", "Episode", "Channel", "Start date 1", "End date 1", "Start date 2", "End date 2", "Start date 3", "End date 3", "Start date 4", "End date 4", "Start date 5", "End date 5")) df = pd.concat([df, x.to_frame().T], ignore_index=True) </code></pre> <p>Is there a way to do it by <code>json_normalize</code>? I tried playing around with the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.json.json_normalize.html" rel="nofollow noreferrer">parameters</a> but couldn't wrap my head around it. Also note that I have to ingest 5 different json files with the same format.</p> <p>My expected output:</p> <pre><code> Genre ID ... Start date 5 End date 5 0 Dummy genre 75575 ... 1 Dummy genre 115324 ... 2019-09-01 2019-09-30 </code></pre>
<p>Not sure about <code>json_normalize</code>, but it seems like you can just use the regular <code>pd.DataFrame</code> constructor:</p> <pre><code>df = pd.DataFrame(data).T df = df.join(pd.DataFrame(df.Schedule.tolist(), index=df.index)).drop('Schedule', 1) </code></pre> <p>Then simply rename the columns with the list you already have.</p>
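<p>For completeness, one possible way to do that renaming step, assuming the columns come out in the order name, season, ep, channel followed by the ten schedule entries, and using the labels from the question's expected output ("Dummy genre" is just the placeholder value from the question):</p> <pre><code>new_names = ['Name', 'Season', 'Episode', 'Channel',
             'Start date 1', 'End date 1', 'Start date 2', 'End date 2',
             'Start date 3', 'End date 3', 'Start date 4', 'End date 4',
             'Start date 5', 'End date 5']
df.columns = new_names

# the outer dict keys became the index, so expose them as an ID column
df = df.rename_axis('ID').reset_index()
df.insert(0, 'Genre', 'Dummy genre')
</code></pre>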
python|pandas
2
7,228
57,718,155
Tensorflow is not using the GPU for python_io library
<p>I am really new to tensorflow and this might be a simple question. I was wondering what the correct mechanism is for assigning GPU devices in the code. Specifically, I want to transfer this part of the code to the GPU:</p> <pre><code>tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE) for lod in range(self.resolution_log2 - 1): tfr_file = self.tfr_prefix + '-r%02d.tfrecords' % (self.resolution_log2 - lod) self.tfr_writers.append(tf.python_io.TFRecordWriter(tfr_file, tfr_opt)) </code></pre> <p>But using the line:</p> <pre><code>with tf.device('/GPU:0'): </code></pre> <p>doesn't seem to help, and when I run <code>nvidia-smi</code> I see that memory and GPU usage are 0.</p> <p>Thanks</p>
<p>Can you try executing the code below and see which device it uses?</p> <pre><code>tf.debugging.set_log_device_placement(True) # Place tensors on the GPU with tf.device('/GPU:0'): a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]) c = tf.matmul(a, b) print(c) </code></pre> <p>Then add the same <code>tf.debugging.set_log_device_placement(True)</code> call before the code you mentioned and see if it works.</p> <p>It should log the device while executing the code.</p>
python|tensorflow|deep-learning|gpu
0
7,229
57,597,121
How to use a boolean array to skip expensive calculations of elements in array?
<p>Is there a way in numpy to use a boolean array to skip calculations of certain elements in an array? I'd like it to skip the evaluation of <code>expensive * arr</code> whenever the corresponding element in <code>bool_arr</code> is <code>False</code>.</p> <pre><code> results = bool_arr &amp; (expensive * arr) </code></pre> <p>This code does not short-circuit, and the <code>and</code> operator is unfit because it does not evaluate elementwise. Is there another elegant solution available in numpy?</p>
<p>You can use <code>bool_arr</code> to work on a subset of the array, so that <code>expensive</code> only runs on a small set of values. Initialise the result as a float array of zeros (writing float values into the boolean mask itself would cast them to booleans), then fill only the masked positions:</p> <pre><code>results = np.zeros_like(arr) results<b>[bool_arr]</b> = expensive * arr<b>[bool_arr]</b></code></pre>
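<p>A small self-contained sketch of the same idea, with made-up example values for <code>arr</code>, <code>bool_arr</code> and <code>expensive</code>:</p> <pre><code>import numpy as np

# hypothetical example data
arr = np.array([1.0, 2.0, 3.0, 4.0])
bool_arr = np.array([True, False, True, False])
expensive = 10.0  # stands in for the costly factor

results = np.zeros_like(arr)
results[bool_arr] = expensive * arr[bool_arr]
print(results)  # [10.  0. 30.  0.]
</code></pre>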
numpy|numpy-ndarray|short-circuiting
1
7,230
70,387,147
return a message and put it in a new column after comparing values in a dataset (using pandas)
<p>I need to check whether the values in the column <code>Deathrate</code> are lower than 9 and return <code>balanced</code>. If not, return <code>Urgent</code>, and put all of this in a new column <code>Humanitarian Help</code>. I tried this first:</p> <pre><code>new = country_data.filter(items=['Country', 'Deathrate']).where(country_data['Deathrate'] &gt; 9) new = new.dropna() new </code></pre>
<p>Try this:</p> <pre><code>import numpy as np country_data['Humanitarian Help'] = np.where(country_data['Deathrate'] &lt; 9, &quot;balanced&quot;, &quot;Urgent&quot;) </code></pre> <p>Note that I changed <code>&gt;</code> (greater than) to <code>&lt;</code> (less than) per &quot;<code>Deathrate</code> is <strong>lower</strong> than 9&quot;</p>
python|pandas
0
7,231
71,087,933
Printing date from Year, Month and Day columns in Pandas
<p>I am looking to add a new column - &quot;date&quot; - to my Pandas dataframe. Below are the first 5 rows of my dataframe: <a href="https://i.stack.imgur.com/TN0as.png" rel="nofollow noreferrer">First 5 rows of the dataframe</a> As seen from the image, the first column is the year, the second the month, and the third the day. Below is what I have tried to do:</p> <pre><code>df['Year'] = pd.to_datetime(df[['Year','Month','Day']]) </code></pre> <p>But I keep getting the error below:</p> <pre><code>ValueError: cannot assemble the datetimes: time data '610101' does not match format '%Y%m%d' (match) </code></pre> <p>It would be great if I could get any help with this.</p>
<p>Try this (using the column names from your dataframe):</p> <pre><code>df.apply(lambda x: '%s %s %s' % (x['Year'], x['Month'], x['Day']), axis=1) </code></pre>
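<p>If you then want actual datetimes in a new column rather than strings, a possible follow-up is shown below; <code>errors='coerce'</code> is used here only as a guess so rows that cannot be parsed (for example two-digit years like the '61' in your error message, which may need an explicit format) become <code>NaT</code> instead of raising:</p> <pre><code>import pandas as pd

date_str = df.apply(lambda x: '%s %s %s' % (x['Year'], x['Month'], x['Day']), axis=1)
df['date'] = pd.to_datetime(date_str, errors='coerce')
</code></pre>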
python|pandas
0
7,232
70,985,993
concatenate dataframes with variable row sizes
<p>I have two CSV files which have different numbers of rows.</p> <pre><code>test1.csv num,sam 1,1.2 2,1.13 3,0.99 test2.csv num,sam 1,1.2 2,1.1 3,0.99 4,1.02 </code></pre> <p>I would like to read the <code>sam</code> columns and append them to an empty dataframe. The thing is that, when I read <code>test1.csv</code>, I extract the base file name, <code>test1</code>, and want to append the <code>sam</code> column under the matching column header in the empty dataframe.</p> <pre><code>big_df = pd.DataFrame(columns =['test1','test2']) pwd = os.getcwd() for file in os.listdir(pwd): filename = os.fsdecode(file) if filename.endswith(&quot;.csv&quot;): prog = filename.split('.')[0] # test1 test2 df = pd.read_csv(filename, usecols=['sam']) # The read dataframe has one column # Move/append that column to the big_df where column == prog big_df[prog] = df print(big_df) </code></pre> <p>But <code>big_df</code> misses the fourth row of <code>test2.csv</code>.</p> <pre><code> test1 test2 0 1.20 1.20 1 1.13 1.1 2 0.99 0.99 </code></pre> <p>I expect to see</p> <pre><code> test1 test2 0 1.20 1.20 1 1.13 1.1 2 0.99 0.99 3 NaN 1.02 </code></pre> <p>How can I fix that?</p>
<p>Using <code>pandas.concat</code> and a simple dictionary comprehension:</p> <pre><code>files = ['test1.csv', 'test2.csv'] df = pd.concat({f.rsplit('.', 1)[0]: pd.read_csv(f).set_index('num')['sam'] for f in files}, axis=1) </code></pre> <p>output:</p> <pre><code> test1 test2 num 1 1.20 1.20 2 1.13 1.10 3 0.99 0.99 4 NaN 1.02 </code></pre>
python|pandas
2
7,233
71,024,507
Pandas dataframe group by 10 min intervals with different actions on other columns
<p>I have a pandas dataframe which includes a timestamp and 71 other columns, something like this:</p> <pre><code> timestamp |close_price|highest_price|volume| ... 2018-09-29 00:00:20 |1809 |1811 | ... | 2018-09-29 00:00:34 |1823 |1832 | 2018-09-29 00:00:59 |1832 |1863 | 2018-09-29 00:01:09 |1800 |1802 | 2018-09-29 00:01:28 |1832 |1845 | . . . </code></pre> <p>I want to put the data into 10-minute intervals and do separate operations on each column. For example, I want the 10-minute intervals of the <code>close_price</code> column to show the <code>last</code> value of the corresponding range in the real table; for the <code>highest_price</code> column, I want the <code>max</code> value of the corresponding range; and for <code>volume</code> I want the <code>mean</code> of the values in that range. I already tried</p> <pre><code>dataTable = datefram.resample(&quot;10min&quot;).agg({'first_price':'first', 'close_price':'last', 'highest_price': 'max', 'volume':'mean', #other attributes... }) </code></pre> <p>but the result seems to be incorrect. Are there any other ways to do what I want? I would appreciate any comments or thoughts.</p> <p>Note that there is no specific pattern in timestamp values. In 1 minute, we can have 0 to 60 rows.</p>
<p>If your data spans multiple days or periods where you don't have any data points, calling <code>resample()</code> can result in lots of additional rows with <code>NaN</code> values. I think your code is actually correct; you just got the wrong impression from seeing all the extra rows.</p>
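<p>If those empty intervals are what is bothering you, a possible way (not part of the original answer, and shown with a shortened version of the aggregation dictionary from the question) is to drop the rows where every aggregated column is NaN:</p> <pre><code>dataTable = datefram.resample("10min").agg({'close_price': 'last',
                                            'highest_price': 'max',
                                            'volume': 'mean'})
# remove 10-minute intervals that contained no rows at all
dataTable = dataTable.dropna(how='all')
</code></pre>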
python|pandas|group-by|pandas-groupby|pandas-resample
1
7,234
70,751,715
User-based encoding/conversion with interactions in pandas
<p>I have this dataframe which looks like this:</p> <p>user_id : Represents user</p> <p>question_id : Represent question number</p> <p>user_answer : which option user has opted for the specific question from (A,B,C,D)</p> <p>correct_answer: What is correct answer for that specific question</p> <p>correct : 1.0 it means user answer is right</p> <p>elapsed_time : it represents time in minutes user took to answer that question</p> <p>timestamp : UNIX TIMESTAMP OF EACH INTERACTION</p> <p>real_date : I have added this column and converted timestamp to human date &amp; time</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">** user_*iD ***</th> <th style="text-align: left;">** question_*id ***</th> <th style="text-align: left;">** user_*answer ***</th> <th style="text-align: left;">** correct_answer **</th> <th style="text-align: left;">** correct **</th> <th style="text-align: left;">** elapsed_*time ***</th> <th style="text-align: left;">** solving_*id ***</th> <th style="text-align: left;">** bundle_*id ***</th> <th style="text-align: left;"><strong>timestamp</strong></th> <th style="text-align: left;"><strong>real_date</strong></th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">1</td> <td style="text-align: left;">A</td> <td style="text-align: left;">A</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">5.00</td> <td style="text-align: left;">1</td> <td style="text-align: left;">b1</td> <td style="text-align: left;">1547794902000</td> <td style="text-align: left;">Friday, January 18, 2019 7:01:42 AM</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">D</td> <td style="text-align: left;">D</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">3.00</td> <td style="text-align: left;">2</td> <td style="text-align: left;">b2</td> <td style="text-align: left;">1547795130000</td> <td style="text-align: left;">Friday, January 18, 2019 7:05:30 AM</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">5</td> <td style="text-align: left;">C</td> <td style="text-align: left;">C</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">7.00</td> <td style="text-align: left;">5</td> <td style="text-align: left;">b5</td> <td style="text-align: left;">1547795370000</td> <td style="text-align: left;">Friday, January 18, 2019 7:09:30 AM</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">10</td> <td style="text-align: left;">C</td> <td style="text-align: left;">C</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">5.00</td> <td style="text-align: left;">10</td> <td style="text-align: left;">b10</td> <td style="text-align: left;">1547806170000</td> <td style="text-align: left;">Friday, January 18, 2019 10:09:30 AM</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">1</td> <td style="text-align: left;">B</td> <td style="text-align: left;">B</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">15.0</td> <td style="text-align: left;">1</td> <td style="text-align: left;">b1</td> <td style="text-align: left;">1547802150000</td> <td style="text-align: left;">Friday, January 18, 2019 9:02:30 AM</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">15</td> <td style="text-align: left;">A</td> <td style="text-align: left;">A</td> <td 
style="text-align: left;">1.0</td> <td style="text-align: left;">2.00</td> <td style="text-align: left;">15</td> <td style="text-align: left;">b15</td> <td style="text-align: left;">1547803230000</td> <td style="text-align: left;">Friday, January 18, 2019 9:20:30 AM</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">7</td> <td style="text-align: left;">C</td> <td style="text-align: left;">C</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">5.00</td> <td style="text-align: left;">7</td> <td style="text-align: left;">b7</td> <td style="text-align: left;">1547802730000</td> <td style="text-align: left;">Friday, January 18, 2019 9:12:10 AM</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">12</td> <td style="text-align: left;">A</td> <td style="text-align: left;">A</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">1.00</td> <td style="text-align: left;">25</td> <td style="text-align: left;">b12</td> <td style="text-align: left;">1547771110000</td> <td style="text-align: left;">Friday, January 18, 2019 12:25:10 AM</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">10</td> <td style="text-align: left;">C</td> <td style="text-align: left;">C</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">2.00</td> <td style="text-align: left;">10</td> <td style="text-align: left;">b10</td> <td style="text-align: left;">1547770810000</td> <td style="text-align: left;">Friday, January 18, 2019 12:20:10 AM</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">3</td> <td style="text-align: left;">D</td> <td style="text-align: left;">D</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">5.00</td> <td style="text-align: left;">3</td> <td style="text-align: left;">b3</td> <td style="text-align: left;">1547770390000</td> <td style="text-align: left;">Friday, January 18, 2019 12:13:10 AM</td> </tr> <tr> <td style="text-align: left;">104</td> <td style="text-align: left;">6</td> <td style="text-align: left;">C</td> <td style="text-align: left;">C</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">6.00</td> <td style="text-align: left;">6</td> <td style="text-align: left;">b6</td> <td style="text-align: left;">1553040610000</td> <td style="text-align: left;">Wednesday, March 20, 2019 12:10:10 AM</td> </tr> <tr> <td style="text-align: left;">104</td> <td style="text-align: left;">4</td> <td style="text-align: left;">A</td> <td style="text-align: left;">A</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">5.00</td> <td style="text-align: left;">4</td> <td style="text-align: left;">b4</td> <td style="text-align: left;">1553040547000</td> <td style="text-align: left;">Wednesday, March 20, 2019 12:09:07 AM</td> </tr> <tr> <td style="text-align: left;">104</td> <td style="text-align: left;">1</td> <td style="text-align: left;">A</td> <td style="text-align: left;">A</td> <td style="text-align: left;">1.0</td> <td style="text-align: left;">2.00</td> <td style="text-align: left;">1</td> <td style="text-align: left;">b1</td> <td style="text-align: left;">1553040285000</td> <td style="text-align: left;">Wednesday, March 20, 2019 12:04:45 AM</td> </tr> </tbody> </table> </div> <p>I need to do some encoding , I don't know which encoding should I do and how?</p> <p>What i need a next dataframe to look like this :</p> <div class="s-table-container"> <table class="s-table"> 
<thead> <tr> <th style="text-align: left;"><strong>user_id</strong></th> <th style="text-align: left;"><strong>b1</strong></th> <th style="text-align: left;"><strong>b2</strong></th> <th style="text-align: left;"><strong>b3</strong></th> <th style="text-align: left;"><strong>b4</strong></th> <th style="text-align: left;"><strong>b5</strong></th> <th style="text-align: left;"><strong>b6</strong></th> <th style="text-align: left;"><strong>b7</strong></th> <th style="text-align: left;"><strong>b8</strong></th> <th style="text-align: left;"><strong>b9</strong></th> <th style="text-align: left;"><strong>b10</strong></th> <th style="text-align: left;"><strong>b11</strong></th> <th style="text-align: left;"><strong>b12</strong></th> <th style="text-align: left;"><strong>b13</strong></th> <th style="text-align: left;"><strong>b14</strong></th> <th style="text-align: left;"><strong>b15</strong></th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">3</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">1</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">2</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">3</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">1</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">2</td> <td style="text-align: left;">0</td> <td style="text-align: left;">3</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> </tr> <tr> <td style="text-align: left;">104</td> <td style="text-align: left;">1</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">2</td> <td style="text-align: left;">0</td> <td style="text-align: left;">3</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> <td style="text-align: left;">0</td> </tr> </tbody> </table> </div> <p>As you can see with the help of timestamp and real_date ; the question_id of each user is not sorted, The new dataframe should contain which of the bundles user has interacted with, 
time-based sorted.</p>
<p>First create the final value for each <code>bundle</code> element using <code>groupby</code> and <code>cumcount</code> then pivot your dataframe. Finally reindex it to get all columns:</p> <pre><code>bundle = [f'b{i}' for i in range(1, 16)] values = df.sort_values('timestamp').groupby('user_iD').cumcount().add(1) out = ( df.assign(value=values).pivot_table('value', 'user_iD', 'bundle_id', fill_value=0) .reindex(bundle, axis=1, fill_value=0) ) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; out bundle_id b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12 b13 b14 b15 user_iD 1 1 2 0 0 3 0 0 0 0 0 0 0 0 0 0 2 1 0 0 0 0 0 2 0 0 4 0 0 0 0 3 3 0 0 1 0 0 0 0 0 0 2 0 3 0 0 0 104 1 0 0 2 0 3 0 0 0 0 0 0 0 0 0 &gt;&gt;&gt; out.reset_index().rename_axis(columns=None) user_iD b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12 b13 b14 b15 0 1 1 2 0 0 3 0 0 0 0 0 0 0 0 0 0 1 2 1 0 0 0 0 0 2 0 0 4 0 0 0 0 3 2 3 0 0 1 0 0 0 0 0 0 2 0 3 0 0 0 3 104 1 0 0 2 0 3 0 0 0 0 0 0 0 0 0 </code></pre>
pandas|encoding|converters
1
7,235
71,041,793
Implementing Multi-Label Margin-Loss in Tensorflow
<p>I wanted to implement the Multi-Label Margin-Loss in TensorFlow, using the PyTorch definition as orientation, i.e.</p> <p><a href="https://i.stack.imgur.com/3GdOf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3GdOf.png" alt="Example" /></a></p> <p><a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html</a></p> <p>This is the naive solution I came up with:</p> <pre><code>def naive(y_true, y_pred, mu = 1.0): pos = tf.ragged.boolean_mask(y_pred, tf.cast(y_true, dtype=tf.bool)) neg = tf.ragged.boolean_mask(y_pred, tf.cast(1 - y_true, dtype=tf.bool)) loss = 0 for i in range(y_true.shape[0]): loss += tf.reduce_mean(tf.nn.relu(mu - (tf.transpose([pos[i]]) - neg[i]))) return loss </code></pre> <p>The implementation above yields correct results (see example below), but I'm having a hard time removing the loop from the function, i.e. expressing this in matrix/vector multiplication, etc.</p> <p>Example:</p> <pre><code>y_pred = tf.constant([[0.1, 0.2, 0.4, 0.8]], dtype=tf.float32) print(y_pred) y_true = tf.constant([[1, 0, 0, 1]], dtype=tf.float32) print(y_true) naive(y_true, y_pred) # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4))) # 0.8500 # (see pytorch example) </code></pre> <p>Any ideas are very welcome.</p>
<p>You could try using <code>tf.while_loop</code> like this:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf def naive(y_true, y_pred, mu = 1.0): pos = tf.ragged.boolean_mask(y_pred, tf.cast(y_true, dtype=tf.bool)) neg = tf.ragged.boolean_mask(y_pred, tf.cast(1 - y_true, dtype=tf.bool)) loss = tf.Variable(0.0, trainable=False) i = tf.constant(0) while_condition = lambda i, loss, pos, neg: tf.math.less(i, tf.shape(y_true)[0]) def body(i, loss, p, n): loss.assign_add(tf.reduce_mean(tf.nn.relu(1.0 - (tf.transpose([p[i]]) - n[i])))) return tf.add(i, 1), loss, p, n _, loss, _,_ = tf.while_loop(while_condition, body, loop_vars=(i, loss, pos, neg)) return loss y_pred = tf.constant([[0.1, 0.2, 0.4, 0.8], [0.1, 0.2, 0.4, 0.8]], dtype=tf.float32) y_true = tf.constant([[1, 0, 0, 1], [1, 0, 0, 1]], dtype=tf.float32) naive(y_true, y_pred) </code></pre> <pre><code>&lt;tf.Tensor: shape=(), dtype=float32, numpy=1.7&gt; </code></pre>
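<p>For reference, a possible fully vectorised sketch (not part of the accepted answer) that avoids the Python loop by building all positive/negative score pairs with broadcasting and masking out the invalid ones; it reproduces the 0.85 value from the question's example:</p> <pre><code>import tensorflow as tf

def vectorized(y_true, y_pred, mu=1.0):
    # pairwise[i, p, n] = y_pred[i, p] - y_pred[i, n]
    pairwise = y_pred[:, :, None] - y_pred[:, None, :]
    hinge = tf.nn.relu(mu - pairwise)
    # keep only pairs where p is a positive label and n a negative one
    pair_mask = y_true[:, :, None] * (1.0 - y_true)[:, None, :]
    per_sample = tf.reduce_sum(hinge * pair_mask, axis=[1, 2]) / tf.reduce_sum(pair_mask, axis=[1, 2])
    return tf.reduce_sum(per_sample)

y_pred = tf.constant([[0.1, 0.2, 0.4, 0.8]], dtype=tf.float32)
y_true = tf.constant([[1, 0, 0, 1]], dtype=tf.float32)
print(vectorized(y_true, y_pred))  # ~0.85
</code></pre>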
python|numpy|tensorflow|tensorflow2.0
0
7,236
51,765,063
Resample sum keeping the index of last observation per day pandas
<p>I have a dataframe:</p> <pre><code>Localmax symbol dvol idx 2016-10-19 09:05:00 st1 5172.159 2016-10-19 09:05:00 2016-10-19 09:05:00 st2 5172.18 2016-10-19 09:05:00 2016-10-19 17:30:00 st1 5000 2016-10-19 17:30:00 2016-10-19 17:40:00 st2 8000 2016-10-19 17:40:00 </code></pre> <p>How can I resample per symbol, so that I have a sum of dvol per day, KEEPING the index of the last observation per day?</p> <p>I tried: </p> <pre><code>&gt; df['idx']=df.index &gt; dvol_sum = df.groupby(['symbol', Grouper(freq='D')])['dvol', 'idx'].agg(['sum']) </code></pre> <p>but it produced just one column of dvol, and the index has a 00:00:00 timestamp.</p> <p>The expected output is:</p> <pre><code> Localmax symbol dvol 2016-10-19 17:30:00 st1 sum of dvol for 2016-10-19 for st1 2016-10-19 17:40:00 st2 sum of dvol for 2016-10-19 for st2 </code></pre>
<p>I think there should be a simple way better than this but this works fine:</p> <pre><code>In [58]: df Out[58]: Localmax symbol dvol idx 0 2016-10-19 09:05:00 st1 5172.159 2016-10-19 09:05:00 1 2016-10-19 09:05:00 st2 5172.180 2016-10-19 09:05:00 2 2016-10-19 17:30:00 st1 5000.000 2016-10-19 17:30:00 3 2016-10-19 17:40:00 st2 8000.000 2016-10-19 17:40:00 4 2016-10-20 17:30:00 st1 6000.000 2016-10-19 17:30:00 5 2016-10-20 17:40:00 st2 9000.000 2016-10-19 17:40:00 In [59]: df['Localmax'] = pd.to_datetime(df['Localmax']) In [60]: df['date'] = df['Localmax'].dt.date In [61]: new_df = df.groupby(['date','symbol'],as_index=False)['dvol'].max() In [62]: new_df['date'] = new_df.date.map(df.groupby(['date'])['Localmax'].max()) In [63]: new_df Out[63]: date symbol dvol 0 2016-10-19 17:40:00 st1 5172.159 1 2016-10-19 17:40:00 st2 8000.000 2 2016-10-20 17:40:00 st1 6000.000 3 2016-10-20 17:40:00 st2 9000.000 </code></pre>
python|pandas|resampling
0
7,237
51,600,034
Lazy evaluation of numpy.max
<p>Suppose I have a 1D numpy array <code>x</code> with shape <code>(n,)</code> consisting mostly of zeros, and a 2D array <code>Y</code> with shape <code>(m,n)</code>. I want to compute </p> <pre><code>np.sum(x * np.max(Y,axis=0)) </code></pre> <p>i.e. the dot product of <code>x</code> with the matrix <code>Y</code> flattened by taking the max of each column. If these arrays are large and <code>x</code> consists mostly of zeros, presumably I'm doing a lot of wasteful <code>max</code> operations. </p> <p>Is there any way to do the computation in a lazy way, so that the max only gets computed for nonzero values? I'm looking for an elegant way - obviously, I could write a for loop and check for zero values. </p>
<p>You can use <code>np.where</code> to find the non zero indices. For example (<code>m=3</code> and <code>n=6</code>):</p> <pre><code>x= np.array([1,0,0,2,3,1]) Y = np.array([[1,2,3,4,5,6], [4,5,6,1,2,3], [7,8,9,4,5,1]]) ind = np.where(x != 0)[0] result = sum(x[ind]*np.max(Y[:,ind], axis=0)) print (result) </code></pre> <p><strong>Output</strong></p> <pre><code>36.0 </code></pre>
python|numpy|functional-programming|lazy-evaluation
1
7,238
51,763,496
Sparse matrix output to csv
<p>I have a sparse matrix <code>z</code> that is a <code>scipy.sparse.csr_matrix</code> and has shape <code>(n,m)</code> where <code>n&lt;&lt;m</code>. I also have labels <code>l</code>, which is simply an <code>np.array</code> of strings with size <code>n</code>.</p> <p>What I'd like to do is make a CSV file with the "ragged" version of the data, i.e. all of the nonzero values in <code>z[0]</code> would go in a column of the CSV file with a header value <code>l[0]</code>, but each column would have a different number of values. Unfortunately <code>numpy</code> doesn't deal with ragged arrays well and I'm not sure what would be an elegant way to construct it.</p> <p>Right now I'm just doing</p> <pre><code>np.savetxt(pth, z.todense().T, delimiter = ",") </code></pre> <p>and adding the column headers manually, as my next process step can handle all the zeros, but it is very slow that way.</p> <p>EXAMPLE:</p> <pre><code>z.todense() array([[0,0,1,0,0,-1,0,3,0,-6,4], [-1,0,4,0,0,0,0,0,0,0,-2]]) l array(["chan1", "chan2"]) </code></pre> <p>What I want:</p> <pre><code>example.csv chan1, chan2 1,-1 -1,4 3,-2 -6, 4, </code></pre>
<pre><code>In [74]: from scipy import sparse In [75]: M = sparse.csr_matrix([[0,0,1,0,0,-1,0,3,0,-6,4], ...: [-1,0,4,0,0,0,0,0,0,0,-2]]) In [76]: M Out[76]: &lt;2x11 sparse matrix of type '&lt;class 'numpy.int64'&gt;' with 8 stored elements in Compressed Sparse Row format&gt; In [77]: M.A Out[77]: array([[ 0, 0, 1, 0, 0, -1, 0, 3, 0, -6, 4], [-1, 0, 4, 0, 0, 0, 0, 0, 0, 0, -2]], dtype=int64) </code></pre> <p><code>lil</code> format gives the data by row:</p> <pre><code>In [78]: Ml = M.tolil() In [79]: Ml.data Out[79]: array([list([1, -1, 3, -6, 4]), list([-1, 4, -2])], dtype=object) </code></pre> <p>Now it's just a matter of writing those lists to file in the way you want:</p> <pre><code>In [81]: from itertools import zip_longest In [82]: for i,j in zip_longest(*Ml.data, fillvalue=''): ...: astr = '%s, %s'%(i,j) ...: print(astr) ...: 1, -1 -1, 4 3, -2 -6, 4, </code></pre> <p><code>zip_longest</code> is an easy way to iterate through several lists, using the longest as reference.</p>
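<p>To get an actual CSV file with the <code>chan1, chan2</code> header row rather than printed output, one possible follow-up (assuming the labels array <code>l</code> and the output path <code>pth</code> from the question, and the <code>Ml</code> lil matrix built above):</p> <pre><code>import csv
from itertools import zip_longest

with open(pth, 'w', newline='') as fh:
    writer = csv.writer(fh)
    writer.writerow(l)                          # header row: chan1, chan2
    for row in zip_longest(*Ml.data, fillvalue=''):
        writer.writerow(row)                    # ragged columns, padded with ''
</code></pre>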
csv|numpy|sparse-matrix
1
7,239
51,597,073
python - No module named dill while using pickle.load()
<p>I have dill installed in my python 2.7 but when I try to unpickle my model it says "No module named dill". The pickled file contains pandas series.</p> <p>EDIT : Here's the snapshot of the traceback on ElasticBeanstalk environment</p> <pre><code>File "/opt/python/current/app/app/models/classification.py", line 663, in __init__ self.lookupdict = pickle.load(open(&lt;filepath&gt;)) File "/usr/lib64/python2.7/pickle.py", line 1384, in load return Unpickler(file).load() File "/usr/lib64/python2.7/pickle.py", line 864, in load dispatch[key](self) File "/usr/lib64/python2.7/pickle.py", line 1096, in load_global klass = self.find_class(module, name) File "/usr/lib64/python2.7/pickle.py", line 1130, in find_class __import__(module) File "/opt/python/run/venv/local/lib64/python2.7/site-packages/gevent/builtins.py", line 93, in __import__ result = _import(*args, **kwargs) ImportError: No module named dill </code></pre>
<p>If the dill version on your Elastic Beanstalk (or other failing) environment differs from your local version, downgrade the dill package there to the version that works on your EC2 or local machine. On your local machine, check the current dill package:</p> <pre><code>pip freeze | grep -i 'dill' </code></pre> <p>e.g. it outputs <code>dill==0.2.7.1</code>, which is lower than the version on Beanstalk.</p> <p>Then downgrade using:</p> <pre><code>pip install dill==0.2.7.1 </code></pre>
python|pandas|pickle|dill
10
7,240
35,884,743
Match till comma OR end of line
<p>I have a pandas DataFrame that looks like this:</p> <pre><code>0 UDP/ax/bsd 1 T Traffic/sa 2 ICMP/v/e,stuff hi/a/abc,ab/a </code></pre> <p>I want to replace everything from the first encountered <code>/</code> till a comma or end of line. So I initially tried <code>df.col_A.replace('/.+','',regex=True)</code>, which just gave me the first word (up to the first slash). </p> <p>To get comma-separated words I attempted the following:</p> <pre><code>df.col_A.replace('/.+[,$]',',',regex=True) </code></pre> <p>My logic was to replace everything from the slash till [comma or EOL]. This didn't have the expected behaviour. How do I amend this?</p> <p>The expected output from line 2 (the third row) of the data frame is:</p> <pre><code>ICMP,stuff hi, ab </code></pre> <p>Note that I am trying to avoid using split as I think this may take longer since it stores the irrelevant pieces as well.</p>
<p>You can use:</p> <pre><code> &gt;&gt;&gt; print re.sub(r'/[^,]*(,|$)', ' \1', 'ICMP/v/e,stuff hi/a/abc,ab/a') ICMP stuff hi ab </code></pre> <p><a href="https://regex101.com/r/gV7gI6/3" rel="nofollow">RegEx Demo</a></p> <p><strong>RegEx Breakup:</strong></p> <pre><code>/ # match literal / [^,]* # match 0 or more of any character that is not comma (,|$) # Match comma or end of line and capture it as group #1 </code></pre> <p>Replacement is <code>" \1"</code> which means space followed by back-reference to group #1</p>
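<p>Since the question works on a DataFrame column, a possible vectorised application of the same pattern (assuming the column really is called <code>col_A</code> as in the question); using <code>\1</code> alone as the replacement keeps the comma separators instead of inserting spaces:</p> <pre><code>df['col_A'] = df['col_A'].str.replace(r'/[^,]*(,|$)', r'\1', regex=True)
# 'ICMP/v/e,stuff hi/a/abc,ab/a' -> 'ICMP,stuff hi,ab'
</code></pre>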
python|regex|pandas
2
7,241
37,448,773
Converting space-aligned text file into Pandas DataFrame
<p>I am quite new to pandas. I have a log text file. I am trying to grab few data point from the file. Below is the code that kind of gets me the desired data but not in desired format. I wanted Pandas data frame with two columns.</p> <pre><code>import os from collections import Counter import pandas as pd #print(os.getcwd()) infile = "myfile.txt" important = [] keep_phrases = ["Host", "User-Agent" ] with open(infile) as f: f = f.readlines() for line in f: for phrase in keep_phrases: if phrase in line: important.append(line) break #print(type(important)) print(important) #Counter(important) pd.DataFrame(important) </code></pre> <p>This does not give me output in two column. I am looking for host and user agent as one row.</p> <p>Sample of the text file as below</p> <pre><code> 15 SessionOpen c aa.bb.cc.ddd 62667 :8080 15 SessionClose c pipe 15 ReqStart c aa.bb.cc.ddd 62667 442374415 15 RxURL c /61665002001003_001/CH4_08_02_24_61665002001003_001_16x9_1500000_Seg1-Frag666 15 RxHeader c Host: ll.abrstream.channel4.com 15 RxHeader c Connection: keep-alive 15 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36 15 RxHeader c X-Requested-With: ShockwaveFlash/21.0.0.216 15 RxHeader c Accept: */* 15 RxHeader c Referer: http://www.channel4.com/programmes/the-tiny-tots-talent-agency/on-demand/61665-002 15 RxHeader c Accept-Encoding: gzip, deflate, sdch 15 RxHeader c Accept-Language: en-US,en;q=0.8 15 ReqEnd c 442374415 1461870946.496117592 1461870947.112555504 0.000315428 0.001363039 0.615074873 15 SessionOpen c aa1.bb1.cc1.ddd1 59409 :8080 15 SessionClose c pipe 15 ReqStart c aa1.bb1.cc1.ddd1 59409 442374416 15 RxURL c /gpsApi.php 15 RxHeader c Content-Length: 0 15 RxHeader c Host: map.yanue.net 15 RxHeader c Connection: Keep-Alive 15 RxHeader c User-Agent: Apache-HttpClient/UNAVAILABLE (java 1.4) 15 ReqEnd c 442374416 1461870950.580444574 1461870951.139206648 0.000064135 0.001196861 0.557565212 15 SessionOpen c aa1.bb1.cc1.ddd1 52179 :8080 15 SessionClose c pipe 15 ReqStart c aa1.bb1.cc1.ddd1 52179 442374417 15 RxURL c /gpsApi.php 15 RxHeader c Content-Length: 0 15 RxHeader c Host: map.yanue.net 15 RxHeader c Connection: Keep-Alive 15 RxHeader c User-Agent: Apache-HttpClient/UNAVAILABLE (java 1.4) 15 ReqEnd c 442374417 1461870951.776547432 1461870952.448071241 0.000062943 0.001109123 0.670414686 18 SessionOpen c aa.bb.cc.ddd 62670 :8080 18 SessionClose c pipe 18 ReqStart c aa.bb.cc.ddd 62670 442374418 18 RxURL c /61665002001003_001/CH4_08_02_24_61665002001003_001_16x9_1500000_Seg1-Frag667 18 RxHeader c Host: ll.abrstream.channel4.com 18 RxHeader c Connection: keep-alive 18 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36 18 RxHeader c X-Requested-With: ShockwaveFlash/21.0.0.216 18 RxHeader c Accept: */* 18 RxHeader c Referer: http://www.channel4.com/programmes/the-tiny-tots-talent-agency/on-demand/61665-002 18 RxHeader c Accept-Encoding: gzip, deflate, sdch 18 RxHeader c Accept-Language: en-US,en;q=0.8 18 ReqEnd c 442374418 1461870951.920178175 1461870952.507097483 0.001731873 0.001337051 0.585582256 15 SessionOpen c aa1.bb1.cc1.ddd1 48034 :8080 15 SessionClose c pipe </code></pre>
<p>You can create a dataframe by creating a list of lists, and then use the dataframe constructor.</p> <p>Loop through each line of the file, like you've started doing, then split each line into the different columns. You can use <a href="https://docs.python.org/3/library/re.html#re.split" rel="nofollow noreferrer">re.split</a> to create a list of the columns, limiting the maximum number of splits to treat the last column as one element. Alternatively, if you know each element is always going to be aligned in the same way, you can use slicing to create that list.</p> <pre><code>import re df_list = [] with open(infile) as f: for line in f: # remove whitespace at the start and the newline at the end line = line.strip() # split each column on whitespace columns = re.split('\s+', line, maxsplit=4) df_list.append(columns) </code></pre> <p>You can then use the method in <a href="https://stackoverflow.com/questions/20638006/convert-list-of-dictionaries-to-dataframe?rq=1">this answer</a> to create the dataframe.</p> <pre><code>df = pd.DataFrame(df_list) </code></pre>
python|pandas|dataframe
0
7,242
42,097,967
Get Max/Min from array based on another array using Numpy.where
<p>Starting with this:</p> <pre><code>import numpy as np x = np.array([0, 2, 8, 9, 4, 1, 12, 4, 33, 11, 5, 3 ]) y = np.array(['', '', '', '', '', 'yo', '', '', '', '', 'yo', '' ]) i = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ]) print np.amax(x[:3] ) print np.amin(x[:3] ) </code></pre> <p>I am trying to get the max or min value for the prior three items using <code>numpy.where</code>. So, in essence, I am trying to use the "index" of the array within <code>np.where</code>. If there is a more performant way to do this, please show it. </p> <p>I tried variations on this:</p> <pre><code>np.where(y == "yo", np.amax(x[:3] ) ,"") </code></pre> <p>result (why is it returning a string?): </p> <pre><code>array(['', '', '', '', '', '8', '', '', '', '', '8', ''], dtype='|S21') </code></pre> <p>wanted:</p> <pre><code> array(['', '', '', '', '', 9, '', '', '', '', 33, ''], dtype='|S21') </code></pre>
<p>First look at the simpler version of <code>where</code>, which finds the indices:</p> <pre><code>In [266]: np.where(y=='yo') Out[266]: (array([ 5, 10], dtype=int32),) </code></pre> <p>Evidently you want all the valyes for <code>y</code>, but replacing the <code>yo</code> with some value from <code>x</code>:</p> <pre><code>In [267]: np.where(y=='yo',x,y) Out[267]: array(['', '', '', '', '', '1', '', '', '', '', '5', ''], dtype='&lt;U11') </code></pre> <p><code>y</code> is string type, and since <code>''</code> can't be converted to a number, the numbers are converted to string.</p> <p>Now if <code>y</code> was object dtype:</p> <pre><code>In [268]: y = np.array(['', '', '', '', '', 'yo', '', '', '', '', 'yo', '' ],object) In [269]: np.where(y=='yo') Out[269]: (array([ 5, 10], dtype=int32),) In [270]: np.where(y=='yo',x,y) Out[270]: array(['', '', '', '', '', 1, '', '', '', '', 5, ''], dtype=object) </code></pre> <p>the replacement is also object dtype and can have a mix of numbers and strings.</p> <p>In this use all 3 terms have the same length. In your use, <code>x</code> and <code>y</code> are replaced with scalars</p> <pre><code>In [271]: np.max(x[:3]) Out[271]: 8 In [272]: np.where(y=='yo',8, '') Out[272]: array(['', '', '', '', '', '8', '', '', '', '', '8', ''], dtype='&lt;U11') In [273]: np.where(y=='yo',8, y) Out[273]: array(['', '', '', '', '', 8, '', '', '', '', 8, ''], dtype=object) </code></pre> <p>To insert <code>9</code> and <code>33</code> you have figure out some way of collecting the max of the previous 3 items, i.e. a running or rolling max. <code>where</code> itself isn't going to help.</p> <p><code>accumulate</code> approximates this (this is the 'maximum' version of <code>cumsum</code>)</p> <pre><code>In [276]: xm=np.maximum.accumulate(x) In [277]: xm Out[277]: array([ 0, 2, 8, 9, 9, 9, 12, 12, 33, 33, 33, 33], dtype=int32) In [278]: np.where(y=='yo',xm, y) Out[278]: array(['', '', '', '', '', 9, '', '', '', '', 33, ''], dtype=object) </code></pre> <p><code>xm</code> is not the maximum of the previous three values, but rather the maximum of all previous values. In this case that's the same, but in general it won't. 
For this <code>x</code> it is different for the last value</p> <hr> <p>Here's one way of getting the max of the previous 3, admittedly a bit crude (with a list comprehension):</p> <pre><code>In [305]: x1=np.concatenate(([0,0],x)) In [306]: xm = [max(x1[i:i+3]) for i in range(0,len(x1))][:len(x)] In [307]: xm Out[307]: [0, 2, 8, 9, 9, 9, 12, 12, 33, 33, 33, 11] In [308]: np.where(y=='yo',xm, y) Out[308]: array(['', '', '', '', '', 9, '', '', '', '', 33, ''], dtype=object) </code></pre> <hr> <p>sliding window with <code>as_strided</code> (adapted from <a href="https://stackoverflow.com/questions/42036229/numpy-matrix-array-shift-insert-by-index">Numpy: Matrix Array Shift / Insert by Index</a>)</p> <pre><code>In [317]: xm=np.lib.stride_tricks.as_strided(x1[::-1],shape=(3,12),strides=(-4,-4)) In [318]: xm Out[318]: array([[ 3, 5, 11, 33, 4, 12, 1, 4, 9, 8, 2, 0], [ 5, 11, 33, 4, 12, 1, 4, 9, 8, 2, 0, 0], [11, 33, 4, 12, 1, 4, 9, 8, 2, 0, 0, 0]]) In [319]: xm.max(axis=0) Out[319]: array([11, 33, 33, 33, 12, 12, 9, 9, 9, 8, 2, 0]) In [320]: xm = xm.max(axis=0)[::-1] In [321]: xm Out[321]: array([ 0, 2, 8, 9, 9, 9, 12, 12, 33, 33, 33, 11]) </code></pre> <hr> <p>Using Paul Panzer's idea for just a few <code>yo</code>:</p> <pre><code>In [29]: idx=np.where(y=='yo') In [30]: idx Out[30]: (array([ 5, 10], dtype=int32),) In [32]: xm = [max(x[i-3:i]) for i in idx[0]] In [33]: xm Out[33]: [9, 33] In [34]: y[idx]=xm In [35]: y Out[35]: array(['', '', '', '', '', 9, '', '', '', '', 33, ''], dtype=object) </code></pre> <p>If it is possible that <code>yo</code> occurs in the first 3 elements, we need to refine <code>xm</code> with something like: </p> <pre><code>xm = [max(x[max(i-3,0):i+1]) if i&gt;0 else x[i] for i in idx[0]] </code></pre> <p>otherwise we get errors from trying to take <code>max([])</code>.</p>
python|arrays|numpy
2
7,243
41,982,290
Pandas customized group aggregation
<p>I have a question regarding pandas and customised group aggregations to find the most efficient way to calculate my values. Here is my code snippet: </p> <pre><code>import pandas as pd listA = list('abcdefghijklmnopqrstuvwxyz') * 2 listB = listA[::-1] listC = listA[::2] * 2 listD = "Won" data1 = range(52) data2 = range(52,104) data3 = range(104,156) rawStructure = [('A', listA), ('B', listB), ('C', listC), ('D', listD), ('Data1', data1), ('Data2', data2), ('Data3', data3)] df = pd.DataFrame.from_items(rawStructure, orient='columns') df.loc[40:,"D"] = "Lost" def customfct(x,y,z): print('x',x) data = round(((x.sum() + y.sum())/z.sum()) * 100,2) return data def f(row): val1 = row.loc[(row['D'] == "Won"), 'Data1'].sum() val2 = row.loc[(row['D'] == "Won"), 'Data2'].sum() val3 = row.loc[(row['D'] == "Won"), 'Data3'].sum() val4 = customfct(row.loc[(row['D'] == "Won"), 'Data1'], row.loc[(row['D'] == "Won"), 'Data2'], row.loc[(row['D'] == "Won"), 'Data3']) return val1, val2, val3, val4 groupByCriteria = "C" agg = df[:].groupby(by=groupByCriteria).apply(f) print(agg) </code></pre> <p>I would like to know if there is a more efficient way to make groupings and apply customised calculations (like the function "customfct", which uses different columns (Data1, Data2, Data3)). My first approach was something like what you can see here: <a href="http://www.shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/" rel="nofollow noreferrer">http://www.shanelynn.ie/summarising-aggregation-and-grouping-data-in-python-pandas/</a>, but it seems to be infeasible to create a formula which isn't constrained to one column (e.g. lambda x: max(x) - min(x)). Furthermore, how would you return a pandas data frame instead of a pandas series (with a tuple)? Thanks in advance!</p> <p>This is my current output (which is correct, but I guess there is a more efficient way):</p> <p><a href="https://i.stack.imgur.com/5DSn5.png" rel="nofollow noreferrer">Pandas output</a></p>
<p>Consider aggregating all <em>Data</em> columns in one <code>groupby()</code> call and then create a new column for <em>val4</em>. Then merge aggregation back to original dataframe.</p> <pre><code># EQUIVALENT EXAMPLE DATA listA = list('abcdefghijklmnopqrstuvwxyz') * 2 df = pd.DataFrame({'A': listA, 'B': listA[::-1], 'C': listA[::2] * 2, 'D': ["Won" for i in range(40)] + ["Lost" for i in range(40,52)], 'Data1': range(52), 'Data2': range(52,104), 'Data3': range(104,156)}) # ADJUSTED METHOD groupByCriteria = "C" grp = df[df['D']=="Won"].groupby(by=groupByCriteria).sum().reset_index()\ .rename(columns={'Data1':'val1','Data2':'val2','Data3':'val3'}) grp['val4'] = round(((grp['val1'] + grp['val2'])/grp['val3']) * 100,2) agg = df.merge(grp, on='C').sort_values('Data1').reset_index(drop=True) </code></pre> <hr> <p>In timing comparisons, adjusted code is markedly faster. Do note: your method was adjusted to return a dataframe and not a series.</p> <pre><code>def origfct(): def customfct(x,y,z): #print('x',x) data = round(((x.sum() + y.sum())/z.sum()) * 100,2) return data def f(row): row['val1'] = row.loc[(row['D'] == "Won"), 'Data1'].sum() row['val2'] = row.loc[(row['D'] == "Won"), 'Data2'].sum() row['val3'] = row.loc[(row['D'] == "Won"), 'Data3'].sum() row['val4'] = customfct(row.loc[(row['D'] == "Won"), 'Data1'], row.loc[(row['D'] == "Won"), 'Data2'], row.loc[(row['D'] == "Won"), 'Data3']) return row groupByCriteria = "C" agg = df[:].groupby(by=groupByCriteria).apply(f) return agg def newsetup(): groupByCriteria = "C" grp = df[df['D']=="Won"].groupby(by=groupByCriteria).sum().reset_index()\ .rename(columns={'Data1':'val1','Data2':'val2','Data3':'val3'}) grp['val4'] = round(((grp['val1'] + grp['val2'])/grp['val3']) * 100,2) agg = df.merge(grp, on='C').sort_values('Data1').reset_index(drop=True) return agg python -mtimeit -n'100' -s'import pyscript as test' 'test.origfct()' # 100 loops, best of 3: 198 msec per loop python -mtimeit -n'100' -s'import pyscript as test' 'test.newsetup()' # 100 loops, best of 3: 16 msec per loop </code></pre>
pandas|grouping|customization|aggregation
1
7,244
42,012,730
How to read two lines from a file and create dynamics keys in a for-loop, a follow-up
<p>This question follows the problem in question: <a href="https://stackoverflow.com/q/41929351/868546">How to read two lines from a file and create dynamics keys in a for-loop?</a></p> <p>But, the nature of the problem has evolved to certain complexity that I want to address.</p> <p>Below is the structure of my data separated by space.</p> <pre><code>chr pos M1 M2 Mk Mg1 F1_hybrid F1_PG F1_block S1 Sk1 S2 Sj 2 16229767 T/T T/T T/T G/T C|T 1|0 726 . T/C T/C T/C 2 16229783 C/C C/C C/C A/C G|C 0|1 726 G/C G/C G/C C|G 2 16229992 A/A A/A A/A G/A G|A 1|0 726 A/A A/A A/A A|G 2 16230007 T/T T/T T/T A/T A|T 1|0 726 A|T A|T A|T A|T 2 16230011 G/G G/G G/G G/G C|G 1|0 726 G/C C|G C|G G/C 2 16230049 A/A A/A A/A A/A T|A 1|0 726 A|T . A/T A/T 2 16230174 . . . C/C T|C 1|0 726 C|T T|C T|C C|T 2 16230190 A/A A/A A/A A/A T|A 1|0 726 T|G G|T T|G T|G 2 16230260 A/A A/A A/A A/A G|A 1|0 726 G/G G/G G/G G/G </code></pre> <p>Explanation:</p> <ul> <li><p>there are two major categories of data in the above file. Data from <code>Group M</code> have sample name starting with <strong>M</strong>, and similarly <code>group S</code> that has several columns names starting with <strong>S</strong>.</p></li> <li><p>And there is a hybrid column (represented by <strong>F1_hybrid</strong>).</p></li> <li><p>the data is the string along the position line. The F1_hybrid is phased with pipe (|) distinguishing the two letters. So, the two strings values from F1 are <code>C-G-G-A-C-T-T-T-G</code>, while another string value is T-C-A-T-G-A-C-A-A. One of this string is from <strong>M-group</strong> while the other is from <strong>S-group</strong> but <strong>I need to do some statistical analyses to do so. However, I can tell that visually that T-C-A-T-G-A-C-A-A string most likely came from M-group.</strong></p></li> </ul> <p>Procedure:</p> <ul> <li><p>I read the first line and create a unique keys using the column information.</p></li> <li><p>Then I read the second and 3rd line and the values in F1_hybrid, which is C|T with G|C. Now, I need to calculate how many GgC (explained as G given C) vs. CgT (C given T) exist between M-group vs. S group.</p></li> <li><p>Then read 3rd (G|C) with 4th (G|A) line in F1_hybrid. So, the states are GgG and AgC. Similarly, <strong>I now count have many GcG vs. AgC exist in M vs. S group.</strong></p></li> </ul> <p>Therefore, I am trying to build a <code>Markov-model</code> which counts the number of state for a phased string from F1 and taking the observed counts in <code>group M</code> vs <code>group S</code>.</p> <p>I am now explaining, how to count the number of any XgY (X given Y) based on F1_hyrbid:</p> <ul> <li><strong>It important to note the conditions before doing the count.</strong> The existing condition may be phased (which is represented by having pipe) vs. unphased (if the if two line have at least one slash (/).</li> </ul> <p><strong>Condition 01:</strong></p> <p>The <code>M1</code> sample has <strong>state as (T/T with C/C)</strong> for 2nd and 3rd line. since the separator is a <strong>slash (/)</strong> and not <strong>pipe (|)</strong> we cannot tell which exact state M1-sample is in. But, we can create combination matrix (for previous state with present state) </p> <pre><code> T T C CgT CgT C CgT CgT </code></pre> <p>Now, we can tell that there are <strong>4 total CgT</strong> </p> <p>and we keep doing the same matrix if this condition meets.</p> <p><strong>Condition 02</strong></p> <p>Same is the case for other samples from Group M, except for Mg1 where the G/T is preceeding A/C. 
So, the matrix is:</p> <pre><code> G T A AgG AgT C CgG CgT </code></pre> <p>So, here we observed 1 count of CgT.</p> <p><strong>Condition 03:</strong></p> <p>But, if the earlier state - present state are phased by pipe in both states <strong>(like A|T at position 16230007 with C|G at position 16230011 for sample Sk1)</strong> we can do a direct count of phase state of observed state at that position, that there are only CgA and GgT, so count of CgT is 0.</p> <p><strong>Condition 04:</strong> If one of the state has pipe (|) but other has slash (/), the condition will be same as both state having slash.</p> <p><strong>Condition 05:</strong> If any of the previous_state or present_state is period(.) the observation count is automatically zero (0) for the state expected from F1_hybrid.</p> <p><strong>So, the expected output should be something like this:</strong></p> <pre><code>pos M1 M2 Mk Mg1 H0 H1 S1 Sk1 S2 Sj 16..9783 4-CgT 4-CgT 4-CgT 1-CgT GgC CgT 0 1-CgT 1-CgT 1-CgT 16..9992 4-AgC 4-AgC 4-AgC 2-AgC GgG AgC 1-AgC 1-AgC 1-AgC 1-AgC,1-GgG 16..0007 4-TgA 4-TgA 4-TgA 1-AgG,1-TgA AgG TgA 2-TgA 2-TgA 2-TgA1 1-TgA ..................contd </code></pre> <p>Or, the values in dictionary format for each column would equally work. Something like <code>['4-CgT','4-CgT','4-CgT','4-CgT']</code> for first M1 at position 16..9783 and same for other.</p>
<p>The question is a bit old, but interesting because you have a very clear specification and you need help to write the code. I will expose a solution following a top-down approach, which is a very well known method, using plain old python. It shouldn't be difficult to adapt to pandas.</p> <p>The top-down approach means to me: <strong>if you don't know how to write it, just name it!</strong></p> <p>You have a file (or a string) as input, and you want to output a file (or a string). It seems quite simple, but you want to merge pairs of rows to build every new row. The idea is:</p> <ol> <li>get the rows of the input, as dictionaries</li> <li>take them by two</li> <li>build a new row for each pair</li> <li>output the result</li> </ol> <p>You don't know for now how to write the generator of rows. You don't know either how to build a new row for each pair. Don't stay blocked by the difficulties, just name the solutions. Imagine you have a function <code>get_rows</code> and a function <code>build_new_row</code>. Let's write this:</p> <pre><code>def build_new_rows(f): """generate the new rows. Output may be redirected to a file""" rows = get_rows(f) # get a generator on rows = dictionaries. r1 = next(rows) # store the first row for r2 in rows: # for every following row yield build_new_row(r1, r2) # yield a new row built of the previous stored row and the current row. r1 = r2 # store the current row, which becomes the previous row </code></pre> <p>Now, examine the two "missing" functions: <code>get_rows</code> and <code>build_new_row</code>. The function <code>get_rows</code> is quite easy to write. Here's the main part:</p> <pre><code>header = process_line(next(f)) for line in f: yield {k:v for k,v in zip(header, process_line(line))} </code></pre> <p>where <code>process_line</code> just splits the line on space, e.g. with a <code>re.split("\s+", line.strip())</code>.</p> <p>The second part is <code>build_new_row</code>. Still the top-down approach: you need to build H0 and H1 from your expected table, and then to build the count of H1 for every M and S according to the conditions you exposed. Pretend you have a <code>pipe_compute</code> function that compute H0 and H1, and a <code>build_count</code> function that builds the count of H1 for every M and S:</p> <pre><code>def build_new_row(r1, r2): """build a row""" h0, h1 = pipe_compute(r1["F1_hybrid"], r2["F1_hybrid"]) # initialize the dict whith the pos, H0 and H1 new_row = {"pos":r2["pos"], "H0":h0, "H1":h1} for key in r1.keys(): if key[0] in ("M", "S"): new_row[key] = build_count(r1[key], r2[key], h1) return new_row </code></pre> <p>You have almost everything now. Take a look at <code>pipe_compute</code>: it's exactly what you have written in your condition 03.</p> <pre><code>def pipe_compute(v1, v2): """build H0 H1 according to condition 03""" xs = v1.split("|") ys = v2.split("|") return [ys[0]+"g"+xs[0], ys[1]+"g"+xs[1]] </code></pre> <p>And for <code>buid_count</code>, stick to the top-down approach:</p> <pre><code>def build_count(v1, v2, to_count): """nothing funny here: just follow the conditions""" if is_slash_count(v1, v2): # are conditions 01, 02, 04 true ? c = slash_count(v1, v2)[to_count] # count how many "to_count" we find in the 2 x 2 table of condtions 01 or 02. elif "|" in v1 and "|" in v2: # condition 03 c = pipe_count(v1, v2)[to_count] elif "." in v1 or "." in v2: # condition 05 return '0' else: raise Exception(v1, v2) return "{}-{}".format(c, to_count) # n-XgY </code></pre> <p>We are still going down. 
When do we have <code>is_slash_count</code>? Two slashes (conditions 01 and 02) or one slash and one pipe (condition 04):</p> <pre><code>def is_slash_count(v1, v2): """conditions 01, 02, 04""" return "/" in v1 and "/" in v2 or "/" in v1 and "|" in v2 or "|" in v1 and "/" in v2 </code></pre> <p>The function <code>slash_count</code> is simply the 2 x 2 table of conditions 01 and 02:</p> <pre><code>def slash_count(v1, v2): """count according to conditions 01, 02, 04""" cnt = collections.Counter() for x in re.split("[|/]", v1): # cartesian product for y in re.split("[|/]", v2): # cartesian product cnt[y+"g"+x] += 1 return cnt # a dictionary XgY -&gt; count(XgY) </code></pre> <p>The function <code>pipe_count</code> is even simpler, because you just have to count the result of <code>pipe_compute</code>:</p> <pre><code>def pipe_count(v1, v2): """count according to condition 03""" return collections.Counter(pipe_compute(v1, v2)) </code></pre> <p>Now you're done (and down). I get this result, which is slightly different from your expectation, but you certainly have already seen my mistake(s?):</p> <pre><code>pos M1 M2 Mk Mg1 H0 H1 S1 Sk1 S2 Sj 16229783 4-CgT 4-CgT 4-CgT 1-CgT GgC CgT 0 1-CgT 1-CgT 1-CgT 16229992 4-AgC 4-AgC 4-AgC 1-AgC GgG AgC 2-AgC 2-AgC 2-AgC 1-AgC 16230007 4-TgA 4-TgA 4-TgA 1-TgA AgG TgA 2-TgA 2-TgA 2-TgA 0-TgA 16230011 4-GgT 4-GgT 4-GgT 2-GgT CgA GgT 1-GgT 1-GgT 1-GgT 1-GgT 16230049 4-AgG 4-AgG 4-AgG 4-AgG TgC AgG 1-AgG 0 1-AgG 1-AgG 16230174 0 0 0 4-CgA TgT CgA 1-CgA 0 1-CgA 1-CgA 16230190 0 0 0 4-AgC TgT AgC 0-AgC 0-AgC 0-AgC 0-AgC 16230260 4-AgA 4-AgA 4-AgA 4-AgA GgT AgA 0-AgA 0-AgA 0-AgA 0-AgA </code></pre> <p>Bonus: <a href="https://tio.run/##lVZtb@M2DP6eX0G4H5qgsWO7XXvNUAxFsKU3oLgB7YcNucBwbDlW41iZrLxd09/ekZLtOD3fhgWgI4lvj0iK0mqvUpFfvr@fQSRins@HsFaJ/alzBqlSq2I4GBQqjBZiw2SSia0TieXg7zUrFBd5MbjyXc@/uXQHqdjaStiShbGttsLOeM4KO5FiaYd2wjNmh3lsR8hXzI73ebjkUWEv2L6weU4iQtqZECt7XSAIe9Xp8OVKSAVcVCPJqlEksoxFGkG1VOzrYVRsOh21U3AHlmVFqYSVKKD6PXpIPtICaY6T37wg3c8kjzUbZ3@My8EsE9GCxk8o9rTwaOTr@UuH/r1r37@9ub6hpefBc01jpNHhGbyDCzf@NYBD/FFFDd1Pl6Q7wvWK7pHGhxG4B0/rjmle0ugwPure3moo94P7msZEh/vab5N3f9S9dF33O8z3RA3MNG5QQ9ejOKCv8Qkhtlq3xKppfNwv6l7dfsRM9NzEjP4cLaMxHXW9myvSdRpE8XrGWFW6FHOaE42amL1b97/8PhNWrT8mOur61626zTh/iAUWXacTswRma57FQc62gRTbopv0hh0yhfw5y5nEkwAqZYACQAIOfFmr1VrBMtzDjGG9x1ximbMYlIAQ6BSRbbJB8ljfc6Yq22bZw8Wc7VSXVs0aniyQPvBcKxkI9NtzlsWnGLvS66Nsr5bR9qRvttNwVm9kXG1EA6LjrmFCxhcMxOwF4TsV5jMoVhlXIHII8z1Owog5mrOSImJFEVDTQIdZuJzFIdBsiFFwtFrX@lpcWH296hRK8lW31@to9RSbDpOo2LTT1WFIescgaOsYhuRjDF4Xw42WWPQ3JPANbRub/VOT9On13lqyW0WuDozmYtKQNwTPARViQAqxZLAJM2yfJlbS/xl8B/vZEvNuqsGwq5ilbh9SSsKKr1hQyqGziVW3LWtKnk8Wyric4Wa44mHGvxnbMY8UbFOuUj3FttiHBxezEcODp1XK7aDDVwvZ1pAs0wCdWA@uNSRA1oOHA@@tU4cW@7guMM@hjt7tHSPME2JO3Cnxu9YjZtB6shoCDa8TlJyiaxPZSKxzRVulVb1FM0i9cnuSqbXMK2WTlJMwbTAnm@9ykvFCQRhFQtJ1R0crEnnM6TYB97IK/I6O18arau9gmTra62X/43IJZbIvcKcX1ty62NGoj/ITr17wptNm6ZgNGpB9xGEWjnBzoVJCmKxzPC4pk3gaXtYIPsHrD3NEKayh1xWDAedFUGRhkZ546A2xHkLZVAEXOS76dq9AyTWDX@B3so@XMx4U5dQ5inDTLRYnFeYp0OsBB4DPAGxfCNeqeBZsGfYELDEsAILsww5JhTNsEyLRcCo0gKXk4nn4HgXLcF8YbjKy8XTFVjN/qL0fU3gCuyyIVtQNy05lGRFUk4@Gfzq2RZPuc/e8NFGwY0HLkBcMft1FbEV6lduTmrVe3@zXN8vBs7MMVTdqpB995vaf879MpfwglVWJtOayviNKX4OTqJUzX@900BpQzTq0ahlU/w6J6qD9fLWhjHJ6qDVedM6ILDDZPXbunW4u9UUwOQym2Ec2XqONkNi@Xcz/0G3Q42RvzuQULu7Aa0YLmSed5P9usdFCKostW2trUz3jdxnyvGqgCUaGC@cJb7t8/vlLFx@1VB/66qAHbv0saN63p1ei6d7YdB89/fX1d6G/c72EXd20dOrM5rswf1r26cUyB4Wu/9MLOh7CJJ4spuYKoOAbt1NT61vJlcaAD3LHTLr4TMfLOxZr1YcYj96SVu/Ov6rzXkOnlMZr1RhsPGTIyw/eVW36FeiupPjiSQ8C6itBAHd3cB4EFO0gODcGTOg77@//AA" 
rel="nofollow noreferrer" title="Python 3 – Try It Online">Try it online!</a></p> <p>What is important is, beyond the solution to this specific problem, the method I used and which is widely used in software development. The code may be improved a lot.</p>
python|pandas|numpy|dictionary|defaultdict
3
7,245
41,796,965
Tensorflow: How to use a trained model in an application?
<p>I have trained a Tensorflow Model, and now I want to export the "function" to use it in my python program. Is that possible, and if yes, how? Any help would be nice, could not find much in the documentation. (I dont want to save a session!)</p> <p>I have now stored the session as you suggested. I am loading it now like this:</p> <pre><code>f = open('batches/batch_9.pkl', 'rb') input = pickle.load(f) f.close() sess = tf.Session() saver = tf.train.Saver() saver.restore(sess, 'trained_network.ckpt') y_pred = [] sess.run(y_pred, feed_dict={x: input}) print(y_pred) </code></pre> <p>However, I get the error "no Variables to save" when I try to initialize the saver. </p> <p>What I want to do is this: I am writing a bot for a board game, and the input is the situation on the board formatted into a tensor. Now I want to return a tensor which gives me the best position to play next, i.e. a tensor with 0 everywhere and a 1 at one position.</p>
<p>I don't know if there is any other way to do it, but you can use your model in another Python program by saving your session:</p> <p>Your training code:</p> <pre><code># build your model sess = tf.Session() # train your model saver = tf.train.Saver() saver.save(sess, 'model/model.ckpt') </code></pre> <p>In your application:</p> <pre><code># build your model (same as training) sess = tf.Session() saver = tf.train.Saver() saver.restore(sess, 'model/model.ckpt') </code></pre> <p>You can then evaluate any tensor in your model using a feed_dict. This obviously depends on your model. For example:</p> <pre><code>#evaluate tensor sess.run(y_pred, feed_dict={x: input_data}) </code></pre>
python|tensorflow
9
7,246
7,948,510
Using pcolor to plot 3 arrays in python
<p>I read a satellite image, and got the data, lat and lon out of the image and put them in arrays. The dimension of lat is (135,90) and lon is also (135,90). The dimension of the data was originally (135,90,4,9,8), where 4 represents the bands of the image. After processing (which used a for loop to put all bands into a single image), the dimension of the data is now (1215,720), which is (135 x 9, 90 x 8). I have a piece of code, which is:</p> <pre><code>x = lat # dimension (135,90) y = lon # dimension (135,90) z = data # dimension ( 1215, 720) plt.figure() plt.pcolor(x,y,z) plt.colorbar() plt.savefig("proj1.png") </code></pre> <p>But then it produced a very bad image below:</p> <p><img src="https://i.stack.imgur.com/DbcnX.png" alt="enter image description here"></p> <p>My friend told me I should take more points in lat and lon, to make them the same dimension as the data, but I don't know how to do that. Is the method he suggested correct?</p>
<p>its me again... The documentation for matplotlib says here <a href="http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.pcolor" rel="nofollow">http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.pcolor</a> that </p> <blockquote> <p>X and Y, if given, specify the (x, y) coordinates of the colored quadrilaterals; the quadrilateral for C[i,j] has corners at:</p> </blockquote> <pre><code> (X[i, j], Y[i, j]), (X[i, j+1], Y[i, j+1]), (X[i+1, j], Y[i+1, j]), (X[i+1, j+1], Y[i+1, j+1]). </code></pre> <blockquote> <p>Ideally the dimensions of X and Y should be one greater than those of C; if the dimensions are the same, then the last row and column of C will be ignored.</p> </blockquote> <p>Yet the dimension (or shape) of your C is totally different from X and Y. Matplotlib ideally wants you to prepare (1) X and Y as the x and y coordinates of the grid points (corner points), and (2) C as the value of the tile surrounded by 4 adjacent grid points. So with x and y having shape 135 by 90, the color array should be either 134 by 89, or 135 by 90. </p> <p>My understanding is that the data for this C is from MODIS pixels, and you already have them 135x90. So you should specify the corner points of those 12150 tiles... Make sense? If you know the lat/lon of the center points, you shift them by half the grid spacing to the left/below, then add one row and column at the right/above to create the grid points. If you use projected coordinates instead of lat/lon, it's the same thing. Or you can forget about the half-distance business and just plug in the X and Y you already have (135x90) as is, along with C, which has to be 135x90, in order to use pcolor.</p> <p>What's the meaning of 9,8 in (135,90,4,9,8)? Do you have 9*8 different properties at each horizontal grid cell, e.g. vertical layers, different kinds of chemical species, or physical properties? If so, you have to pick one and make a plot one at a time (i.e., feed only a C of 135x90 along with your X and Y). Also, you mentioned that 4 is for "band". If this is a color band like RGBK and you want to show that color, then pcolor is probably not the right tool, and you have to look for some other function that understands those 4 numbers. pcolor simply reads the range of the numbers, scales between min and max, and applies a color scale from blue to red (or whatever colormap you choose).</p> <p><strong>EDIT</strong></p> <p>I grabbed a data set for Level-1B, VISIBLE along with documentation from <a href="http://disc.sci.gsfc.nasa.gov/AIRS/data-holdings/by-access-method" rel="nofollow">http://disc.sci.gsfc.nasa.gov/AIRS/data-holdings/by-access-method</a>.</p> <blockquote> <p>This data set is generated from AIRS level 1A digital numbers (DN), including 4 channels in the 0.4 to 1.0 um region of the spectrum. A day's worth of AIRS data is divided into 240 scenes each of 6 minute duration. For the AIRS visible/near IR measurements, an individual scene contains 135 scanlines with a scanline containing 720 cross-track pixels and 9 along-track pixels; there is a total of 720 x 9 x 135 = 874,800 visible/near-IR pixels per scene.</p> </blockquote> <p>So the easiest would be to get the average of the 8x9 values for each location, and pick one of the four tracks one at a time.
Alternatively, since these bands correspond to different colors, as shown in the wavelengths below, </p> <ul> <li>Channel 1: 0.41 um - 0.44 um</li> <li>Channel 2: 0.58 um - 0.68 um</li> <li>Channel 3: 0.71 um - 0.92 um</li> <li>Channel 4: 0.49 um - 0.94 um</li> </ul> <p>you may be able to use these as RGBK values for input to pylab's imshow() function. You may not like the coarse resolution of the output after spatial averaging. In that case you have to get the coordinates of each of the (9,8) pixels within each location, somehow. There should be a standard way, though; the data is a widely used public dataset. </p>
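<p>To make the shapes concrete, here is a minimal sketch with made-up coordinates and data (placeholder numbers, not the actual AIRS values):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

# fake 135 x 90 grid of cell-centre coordinates and a fake (135, 90, 4, 9, 8) data cube
lon, lat = np.meshgrid(np.linspace(-125.0, -100.0, 90), np.linspace(25.0, 55.0, 135))
data5d = np.random.rand(135, 90, 4, 9, 8)

# pick one channel and average the 9x8 sub-pixels so C matches the 135 x 90 grid
c = data5d[:, :, 0, :, :].mean(axis=(2, 3))      # shape (135, 90)

plt.figure()
plt.pcolor(lon, lat, c)   # X, Y, C all 135x90: pcolor simply drops the last row/column of C
plt.colorbar()
plt.savefig("proj1_sketch.png")
</code></pre>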
python|image-processing|numpy|matplotlib
0
7,247
37,766,384
Tensorflow : How to recover a tensor properly
<p>Sorry for this newbie question, but i have some trouble learning tensor flow. I know basic things about ML ( linear regression, nn, cnn, perceptron, Kmeans ..) but i did not have any experience on a particular library.</p> <p>I'm currently learning how to save and recover datas from a graph. In my example, i do have a tensor which shape is equal to [168,8,8] It has been named <strong>saved_tensor</strong></p> <p>But i don't know how to recover it properly, below what i've done so far. As you will see, it is working when shape is constant and as you would imagine, shape can be in the form [x,8,8]</p> <ol> <li>Can please someone guide me on this ?<br> I believe i have to dig into reshaping (which i did btw) but i don't know how to modify simple code below.</li> <li>Could you please recommend a practical guide on Tensorflow (other that official documentation which i found a bit hard to learn) (Saw upcoming books Delip Rao/Tensorflow or Jordi Torres/First Contact With Tensorflow)</li> </ol> <blockquote> <pre><code> t = tf.zeros((168,8,8),tf.double) ten = tf.Variable(t, name="saved_tensor") saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, Path) print("Model restored.") print(ten.eval()) # sth else to do # </code></pre> </blockquote> <p>Regards, Pierre</p> <p>Have found the following site to learn tensorflow from the start :<a href="http://learningtensorflow.com" rel="nofollow">http://learningtensorflow.com</a> </p>
<p>Try creating the Variable without an initial value, and with validate_shape=False, and then running the restore process.</p>
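<p>A rough TF 1.x-style sketch of what that could look like (whether the restore accepts the saved shape this way depends on the TensorFlow version, so treat it only as a starting point):</p> <pre><code>import tensorflow as tf

# Dummy initial value; validate_shape=False leaves the static shape undefined,
# so a checkpointed [x, 8, 8] tensor can be assigned into it on restore.
ten = tf.Variable(tf.zeros((1, 8, 8), tf.float64), name="saved_tensor",
                  validate_shape=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, Path)      # Path: the checkpoint prefix used when saving
    restored = sess.run(ten)
    print(restored.shape)          # e.g. (168, 8, 8), whatever was saved
</code></pre>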
python-2.7|tensorflow|restore
0
7,248
37,813,059
I am getting peak frequency from wav file. But for recorded 2 channels wav it is not working
<p>I am getting the peak frequency from wav files</p> <p>My code for getting the peak frequency from a wav file is:</p> <pre><code>import wave import struct import numpy as np import wave import contextlib if __name__ == '__main__': fname = "test.wav" frate = 0 data_size = 0 with contextlib.closing(wave.open(fname,'r')) as f: frate = f.getframerate() data_size = f.getnframes() wav_file = wave.open(fname, 'r') data = wav_file.readframes(data_size) data_size = data_size * wav_file.getnchannels() print wav_file.getparams() wav_file.close() data = struct.unpack('{n}h'.format(n=data_size), data) data = np.array(data) w = np.fft.fft(data) freqs = np.fft.fftfreq(len(w)) print(freqs.min(), freqs.max()) # Find the peak in the coefficients idx = np.argmax(np.abs(w)) freq = freqs[idx] freq_in_hertz = abs(freq * frate) print(freq_in_hertz) </code></pre> <p>I recorded a wav file with 48000 sample rate, 16 bitwidth, 2 channels. In that file I have a sine tone with 1000Hz. But the script outputting only 500Hz. I dont know where I went wrong. But for single channel and generated wav file with 48000 sample rate, 16 bitwidth, 2 channels it is working fine.</p> <p>I generated the wav file using the following script</p> <pre><code>import math import wave import struct if __name__ == '__main__': # http://stackoverflow.com/questions/3637350/how-to-write-stereo-wav-files-in-python # http://www.sonicspot.com/guide/wavefiles.html freq = 1000 data_size = 454656 * 2 fname = "test.wav" frate = 48000.0 amp = 64000.0 nchannels = 2 sampwidth = 2 framerate = int(frate) nframes = data_size comptype = "NONE" compname = "not compressed" data = [math.sin(2 * math.pi * freq * (x / frate)) for x in range(data_size)] wav_file = wave.open(fname, 'w') wav_file.setparams( (nchannels, sampwidth, framerate, nframes, comptype, compname)) for v in data: wav_file.writeframes(struct.pack('h', int(v * amp / 2))) wav_file.close() </code></pre> <p>I dont know where I did wrong. I uploaded my wav files at script generated wav <a href="http://www.quantumbuddy.com/stackoverflow/script_gen.wav" rel="nofollow">script_gen.wav</a> with 48000 sample rate, 2 channels, 16 bit. Recorded wavs: <a href="http://www.quantumbuddy.com/stackoverflow/recorded_48k_16_2channel.wav" rel="nofollow">2 channel wav</a> with 48000 sample rate, 2 channels, 16 bit 1 channel wav(Not allowing to post the link here, so will post in the comments) with 48000 sample rate, 1 channel, 16 bit.</p> <p>I checked all these peak frequency in audacity, it showing 1000Khz only.</p> <p>But when I tried with my scirpt I am getting correct output for 1 channel wav and failing for 2 channel wav.</p> <p>update: I am getting the half of the peak frequency as the output for 2 channels.</p> <p>I am feeling that I missed something. Can anyone help me in this?</p>
<p>Why so complicated? Consider the following</p> <pre><code>#!/usr/bin/env python3 import numpy as np from numpy import fft import scipy.io.wavfile as wf import matplotlib.pyplot as plt sr = 44100 # sample rate len_sig = 2 # length of resulting signal in seconds f = 1000 # frequency in Hz # set you time axis t = np.linspace(0, len_sig, sr*len_sig) # set your signal mono_data = np.sin(2*np.pi*t*f) # write single channel .wav file wf.write('mono.wav', sr, mono_data) # write two-channel .wav file stereo_data = np.vstack((mono_data, mono_data)).T wf.write('stereo.wav', sr, stereo_data) </code></pre> <p>Now test it by loading and analyzing the data</p> <pre><code># Load data mono_sr, mono_data = wf.read('mono.wav') stereo_sr, stereo_data = wf.read('stereo.wav') # analyze the data X_mono = fft.fft(mono_data) / len(mono_data) # remember to normalize your amplitudes # Remember that half of energy of the signal is distributed over the # positive frequencies and the other half over the negative frequencies. # # Commonly you want see a magnitude spectrum. That means, we ignore the phases. Hence, we # simply multiply the spectrum by 2 and consider ONLY the first half of it. freq_nq = len(X_mono) // 2 X_mono = abs(X_mono[:freq_nq]) * 2 freqs_mono = fft.fftfreq(len(mono_data), 1/mono_sr)[:freq_nq] # in order the analyze a stereo signal you first have to add both channels sum_stereo = stereo_data.sum(axis=1) / 2 # and now the same way as above freq_nq = len(sum_stereo) // 2 X_stereo= abs(fft.fft(sum_stereo))[:freq_nq] / len(stereo_data) * 2 freqs_stereo = fft.fftfreq(len(stereo_data), 1/stereo_sr)[:freq_nq] </code></pre> <p>Peak picking:</p> <pre><code>freqs_mono[np.argmax(X_mono)] # == 1000.0 freqs_stereo[np.argmax(X_stereo)] # == 1000.0 </code></pre> <p>Plot the result:</p> <pre><code>fig, (ax1, ax2) = plt.subplots(2, figsize=(10,5), sharex=True, sharey=True) ax1.set_title('mono signal') ax1.set_xlim([0, 2000]) ax1.plot(freqs_mono, X_mono, 'b', lw=2) ax2.set_title('stereo signal') ax2.plot(freqs_stereo, X_stereo, 'g', lw=2) ax2.set_xlim([0, 2000]) plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/0ixJk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0ixJk.png" alt="Mono and stereo peaks"></a></p>
python|numpy|audio|fft
1
7,249
64,257,005
Converting a DataArray to a DataFrame and preserve coordinate label order
<p>Is there a simple way to convert an xarray DataArray to a pandas DataFrame, where I can prescribe which dimensions get turned into index/columns? For example, suppose I have a DataArray</p> <pre class="lang-py prettyprint-override"><code>import xarray as xr weather = xr.DataArray( name='weather', data=[['Sunny', 'Windy'], ['Rainy', 'Foggy']], dims=['date', 'time'], coords={ 'date': ['Thursday', 'Friday'], 'time': ['Morning', 'Afternoon'], } ) </code></pre> <p>which results in:</p> <pre><code>&lt;xarray.DataArray 'weather' (date: 2, time: 2)&gt; array([['Sunny', 'Windy'], ['Rainy', 'Foggy']], dtype='&lt;U5') Coordinates: * date (date) &lt;U8 'Thursday' 'Friday' * time (time) &lt;U9 'Morning' 'Afternoon' </code></pre> <p>Suppose I now want to move that to a pandas DataFrame indexed by date, with columns time. I can kind of do this by using <code>.to_dataframe()</code> and then <code>.unstack()</code> on the resulting dataframe:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; weather.to_dataframe().unstack() weather time Afternoon Morning date Friday Foggy Rainy Thursday Windy Sunny </code></pre> <p>However, pandas will sort things so rather than Morning followed by Afternoon, I get Afternoon followed by Morning. I was rather hoping there would be an API like</p> <pre class="lang-py prettyprint-override"><code>weather.to_dataframe(index_dims=[...], column_dims=[...]) </code></pre> <p>which could do this reshaping for me, without me having to re-sort my indices and columns afterwards.</p>
<p>In xarray 0.16.1, <code>dim_order</code> was added to <a href="http://xarray.pydata.org/en/stable/generated/xarray.DataArray.to_dataframe.html" rel="nofollow noreferrer"><code>.to_dataframe</code></a>. Does this do what you're looking for?</p> <pre><code>xr.DataArray.to_dataframe( self, name: Hashable = None, dim_order: List[Hashable] = None, ) -&gt; pandas.core.frame.DataFrame Docstring: Convert this array and its coordinates into a tidy pandas.DataFrame. The DataFrame is indexed by the Cartesian product of index coordinates (in the form of a :py:class:`pandas.MultiIndex`). Other coordinates are included as columns in the DataFrame. Parameters ---------- name Name to give to this array (required if unnamed). dim_order Hierarchical dimension order for the resulting dataframe. Array content is transposed to this order and then written out as flat vectors in contiguous order, so the last dimension in this list will be contiguous in the resulting DataFrame. This has a major influence on which operations are efficient on the resulting dataframe. If provided, must include all dimensions of this DataArray. By default, dimensions are sorted according to the DataArray dimensions order. </code></pre>
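<p>For the example above that could look roughly like this (a sketch, assuming xarray 0.16.1 or newer; the final <code>reindex</code> just pins the column order in case pandas still sorts it):</p> <pre><code>df = weather.to_dataframe(dim_order=['date', 'time'])
wide = df['weather'].unstack('time').reindex(columns=['Morning', 'Afternoon'])
print(wide)
</code></pre>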
pandas|python-xarray
1
7,250
64,205,355
How to read a particular cell in an excel sheet through pandas?
<p>I am working on a requirement where in I need to insert value on the fly in an excel sheet that is empty. So, to accomplish the goal, I am using Pandas.</p> <p>I have single column(date), where in multiple rows could be empty. I am reading excel file through pandas. However, I found that if a cell is blank, pandas will ignore it, i.e. it will only show the output of a row that has value. Is that normal. If yes, then how could I go about the requirements?</p> <pre><code>df = pd.read_excel('test1234.xlsx') print(df) </code></pre> <h2>OUTPUT:</h2> <pre><code> Date 0 ami 1 ami 2 ami 3 ami 4 ami 5 wef 6 wef 7 wef 8 wef 9 wef 10 wef 11 wef 12 wef 13 wef 14 wef 15 wef 16 wef </code></pre> <p>I know that the column has 23 rows hat includes empty cells. I understand that the pandas will not show any output after the end of last row. But what about the rows those are blank in between?</p> <p>Regards, Amitesh</p>
<p>pandas fills the rows with NaN when empty cells sit between non-empty rows, but empty rows at the end are interpreted as the end of the data and ignored. If you know the number of ignored rows at the end, you can append them back as below.</p> <pre><code> from numpy import nan as Nan import pandas as pd &gt;&gt;&gt; df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], ... 'B': ['B0', 'B1', 'B2', 'B3'], ... 'C': ['C0', 'C1', 'C2', 'C3'], ... 'D': ['D0', 'D1', 'D2', 'D3']}, ... index=[0, 1, 2, 3]) &gt;&gt;&gt; s2 = pd.Series([Nan,Nan,Nan,Nan], index=['A', 'B', 'C', 'D']) &gt;&gt;&gt; result = df1.append(s2, ignore_index=True) &gt;&gt;&gt; result A B C D 0 A0 B0 C0 D0 1 A1 B1 C1 D1 2 A2 B2 C2 D2 3 A3 B3 C3 D3 4 NaN NaN NaN NaN </code></pre> <p>This operation can be repeated for the number of required empty rows.</p>
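<p>Alternatively, if you know how many rows the sheet should have (23 in the question), a shorter option is to pad the missing trailing rows with <code>reindex</code> — a small sketch:</p> <pre><code>df = pd.read_excel('test1234.xlsx')
df = df.reindex(range(23))   # rows that were dropped at the end come back as NaN
</code></pre>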
python|excel|pandas
0
7,251
47,542,745
Combining mixed data types in pandas column
<p>I have a column in a dataframe called 'Year'. When I invoke;</p> <pre><code>filtered_df['Year'].unique() </code></pre> <p>My result is:</p> <blockquote> <p>array([2013, 2012, 2014, 2015, 2016, 2017, 2011, 2010, 2009, 2008, '2011', '2010', '2015', '2009', 'N 117 ST / GREENWOOD AV N'], dtype=object)</p> </blockquote> <p>I would like to combine the results of the <code>'2011','2010', '2015', and '2009'</code> instances with that of their non-string counterparts. I thought it might be possible to do so using a regular expression, but the only things I have attempted thus far have returned errors that make me believe that my methodology is inherently flawed, so I have not included them here. </p> <p>Any ideas for a computationally efficient solution to this problem?</p>
<p>Usually we convert it to numeric values (all non-convertible values will be turned into <code>NaN</code>) in the following way:</p> <pre><code>filtered_df['Year'] = pd.to_numeric(filtered_df['Year'], errors='coerce') </code></pre>
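<p>A small illustration on made-up data:</p> <pre><code>import pandas as pd

s = pd.Series([2013, '2011', 2015, 'N 117 ST / GREENWOOD AV N'])
print(pd.to_numeric(s, errors='coerce'))
# 0    2013.0
# 1    2011.0
# 2    2015.0
# 3       NaN
# dtype: float64
</code></pre> <p>The result comes back as floats because the <code>NaN</code> forces a float dtype; if you need integers, the nullable <code>.astype('Int64')</code> (pandas 0.24+) can recover them.</p>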
python|pandas|dataframe
2
7,252
49,337,764
How can I calculate standard deviation in pandas dataframe?
<p>I am using norway_new_car_sales_by_model.csv <a href="https://www.kaggle.com/dmi3kno/newcarsalesnorway/data" rel="nofollow noreferrer">Dataset here</a> dataset which you find. I want to find the model that has highest sales fluctuation over the years. I am using standard deviation of the yearly total sales for each model. Expected output is: <a href="https://i.stack.imgur.com/ZF3H7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZF3H7.png" alt="Expected output"></a></p> <pre><code>import pandas as pd import numpy as np data=pd.read_csv("norway_new_car_sales_by_model.csv",header=None,encoding="latin-1") data.columns = ['Year','Month','Make','Model','Quantity','Pct']#give column name data.drop(data.head(1).index, inplace=True) #drop first row data[['Quantity']]=data[['Quantity']].astype(np.int64) data.dropna(subset=['Quantity'], how='all', inplace = True) maketotal_1 = data.pivot_table(values='Quantity',index=['Month','Model','Make'],aggfunc=np.std) </code></pre> <p>My question are that </p> <p>1) I did not handle nan values... even if i try many codes... </p> <p><a href="https://i.stack.imgur.com/03pyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/03pyT.png" alt="My output"></a></p> <p>2) How to get Audi A4 Audi from Index column</p>
<p>I think need:</p> <p>First remove parameter <code>header=None</code> from <code>read_csv</code>, because first in csv are columns names:</p> <pre><code>data=pd.read_csv("norway_new_car_sales_by_model.csv",encoding="latin-1") print (data.head()) Year Month Make Model Quantity Pct 0 2007 1 Volkswagen Volkswagen Passat 1267 10.0 1 2007 1 Toyota Toyota Rav4 819 6.5 2 2007 1 Toyota Toyota Avensis 787 6.2 3 2007 1 Volkswagen Volkswagen Golf 720 5.7 4 2007 1 Toyota Toyota Corolla 691 5.4 </code></pre> <p>Apply <code>pivot_table</code> function with <code>np.std</code>:</p> <pre><code>maketotal_1=data.pivot_table(values='Quantity',index=['Month','Model','Make'],aggfunc=np.std) print (maketotal_1.head()) Quantity Month Model Make 1 Audi A3 Audi 50.986109 Audi A4 Audi 60.549704 Audi A6 Audi NaN Audi Q3 Audi NaN BMW 2-serie BMW NaN </code></pre> <p>Last first remove <code>NaN</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a> and use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> for convert <code>MultiIndex</code> to columns and create unique default index:</p> <pre><code>df1 = maketotal_1.dropna().reset_index() </code></pre> <p>Last per groups by <code>Make</code> get indices of max values by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>idxmax</code></a> and then select rows by <code>loc</code>:</p> <pre><code>df3 = df1.loc[df1.groupby('Make')['Quantity'].idxmax()] print (df3) Month Model Make Quantity 447 12 Audi A3 Audi 119.867427 415 11 BMW i3 BMW 460.936366 56 2 Ford Mondeo Ford 169.889880 235 6 Honda CR-V Honda 171.579671 457 12 Hyundai ix35 Hyundai 32.526912 348 9 Kia Sportage Kia 55.154329 60 2 Mazda CX-5 Mazda 144.030957 14 1 Mercedes-Benz GLC Mercedes-Benz 119.501046 160 4 Mitsubishi ASX Mitsubishi 312.541197 391 10 Nissan Leaf Nissan 225.322584 114 3 Opel Astra Opel 85.182158 22 1 Peugeot 207 Peugeot 97.962578 168 4 Renault Zoe Renault 53.740115 395 10 Skoda Octavia Skoda 121.668767 122 3 Suzuki Vitara Suzuki 85.559921 123 3 Tesla Model S Tesla 510.400823 33 1 Toyota Corolla Toyota 326.683333 179 4 Volkswagen Golf Volkswagen 454.872681 485 12 Volvo V40 Volvo 183.919366 </code></pre> <p>EDIT:</p> <p>There is no <code>Citroen</code> because <code>np.std</code> return <code>NaN</code>:</p> <pre><code>print (maketotal_1[maketotal_1.index.get_level_values('Make') == 'Citroen ']) Quantity Month Model Make 11 Citroen C4 Aircross Citroen NaN </code></pre>
python|pandas|dataframe
2
7,253
49,018,980
Check if two numpy array rows simultaneously satisfy a proposition
<p>This is a follow-up post to a previous question of mine:</p> <p><a href="https://stackoverflow.com/questions/48978144/check-whether-numpy-array-row-is-smaller-than-the-next">Check whether numpy array row is smaller than the next</a></p> <p>Suppose i have the following numpy array:</p> <pre><code>a=np.reshape(np.array([[79,np.nan,87,77,92,133,99,121,103,118,126, 133,131,67]]),(7,2)) In [1]: a Out[1]: array([[ 79., nan], [ 87., 77.], [ 92., 133.], [ 99., 121.], [ 103., 118.], [ 126., 133.], [ 131., 67.]]) </code></pre> <p>I would like to create a new column or array which will be a True/False indicator testing the following proposition:</p> <pre><code>a[-1, 0] &lt; a[1:, 0] and a[-1, 1] &gt; a[1:, 1] </code></pre> <p>The result that i expect is the following:</p> <pre><code>False (because the first value of column 1 is nan) False True True False True False </code></pre> <p>I have tried different variations of the solutions described in my previous post, but so far i have been unsuccessful.</p> <p>EDIT:</p> <p>The idea is to test whether 87&lt;92 and at the same time 77>133 which is False. Then 92&lt;99 and 133>121 which is True etc.</p>
<p>You will use exactly the same strategy as in <a href="https://stackoverflow.com/a/48978474/3104974">Tai's answer</a> in your previous post:</p> <pre><code>b = np.diff(a, axis=0) [[ 8. nan] [ 5. 56.] [ 7. -12.] [ 4. -3.] [ 23. 15.] [ 5. -66.]] </code></pre> <p>Now you use the logical function <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="nofollow noreferrer"><code>np.logical_and</code></a> on array <code>b</code>. Since you always want a <em>trailing False</em>, we can just append it:</p> <pre><code>result = np.logical_and(b[:,0] &gt; 0, b[:,1] &lt; 0) result = np.append(result, np.array([False])) [False False True True False True False] </code></pre> <p>(edited my post according to your comments)</p> <p><strong>Note</strong>: Comparing values with <code>nan</code> will always return <code>False</code>. So if <code>nan</code> appears in some row in the middle, it will always yield <em>two</em> rows that evaluate to <code>False</code>.</p>
python|arrays|python-3.x|numpy
0
7,254
48,907,532
How to sort dataframe with ignoring prefixes?
<p>I can sort dataframe by column like this:</p> <pre><code>df.sort(columns='sort_index', inplace=True) </code></pre> <p>And I can sort array with ignoring prefixes like this:</p> <pre><code>array.sort(key=lambda element: re.sub(re, "", element)) </code></pre> <p>But how to sort dataframe with ignoring prefixes?</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>str.replace</code></a> with <a href="https://stackoverflow.com/q/17901218/2901002"><code>argsort</code></a> for indices and then select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> what reorder rows:</p> <pre><code>df = pd.DataFrame({ 'B': list(range(9)), }, index=['1s','2d','2a','1c','22d','1b','2b','1c','4d']) print (df) B 1s 0 2d 1 2a 2 1c 3 22d 4 1b 5 2b 6 1c 7 4d 8 idx = df.index.str.replace('\D+', '').astype(int).argsort() df = df.iloc[idx] print (df) B 1s 0 1c 3 1b 5 1c 7 2d 1 2a 2 2b 6 4d 8 22d 4 </code></pre>
python|pandas|dataframe|data-structures
2
7,255
48,962,312
ValueError: setting an array element with a sequence()
<blockquote> <p>Before downvoting this question and marked as duplicate, let me just explain the issue, i tried all the possible solutions with similar question here on stack, but none of them worked. i also checked, <a href="https://github.com/numpy/numpy/issues/6584" rel="nofollow noreferrer">setting an array element with a sequence" error could be improved. #6584</a></p> </blockquote> <p>So am training a random forest classifier on 3 different features, all with different dimensions but i reshaped them to to (-1,1), which can fit for training the RF(random forest) model, but it keep on giving the same error again and again as i have tried all the possible things, here are the list of feature functions am using,</p> <blockquote> <p>here , am computing the color features by simply taking mean/average of images in different color spaces,here am working on RGB,LAB,HSV and GRAY image respectively, as from the code below i have flattened all the possible feature vector array, from different color spaces.</p> </blockquote> <pre><code>def extract_color_feature(rgb_roi, lab_roi, hsv_roi, gray_roi): avg_rgb_per_row = np.average(rgb_roi, axis=0) avg_rgb = np.average(avg_rgb_per_row, axis=0).flatten() avg_lab_per_row = np.average(lab_roi, axis=0) avg_lab = np.average(avg_lab_per_row, axis=0).flatten() h, s, _ = cv2.split(hsv_roi) h_avg = cv2.mean(h) s_avg = cv2.mean(s) avg_hs = np.hstack([h_avg, s_avg]).flatten() lbp = extract_lbp(gray_roi).flatten() avg_rgb = np.array(avg_rgb, dtype=np.float32).flatten() avg_lab = np.array(avg_lab, dtype=np.float32).flatten() avg_hs = np.array(avg_hs, dtype=np.float32).flatten() lbp = np.array(lbp, dtype=np.float32).flatten() avg_color = np.hstack([avg_rgb, avg_lab, avg_hs, lbp]) return avg_color.flatten() </code></pre> <blockquote> <p>in the following function i only computed histogram values from different color spaces again RGB,LAB,HSV color spaces used. 
as every histogram here performed on single color channel, so depth of every histogram feature will always be 1.</p> </blockquote> <pre><code>def compute_hist_feature(rgb_seg, hsv_seg, lab_seg, mask): b, g, r = cv2.split(rgb_seg) h, s, v = cv2.split(hsv_seg) l, a, b = cv2.split(lab_seg) r_equ = cv2.equalizeHist(r) g_equ = cv2.equalizeHist(g) b_equ = cv2.equalizeHist(b) r_hist = cv2.calcHist([r_equ], [0], mask, [8], [0, 256]).flatten() g_hist = cv2.calcHist([g_equ], [0], mask, [8], [0, 256]).flatten() b_hist = cv2.calcHist([b_equ], [0], mask, [8], [0, 256]).flatten() l_hist = cv2.calcHist([l], [0], mask, [8], [0, 256]).flatten() a_hist = cv2.calcHist([a], [0], mask, [8], [0, 256]).flatten() bb_hist = cv2.calcHist([b], [0], mask, [8], [0, 256]).flatten() h_hist = cv2.calcHist([h], [0], mask, [8], [0, 256]).flatten() s_hist = cv2.calcHist([s], [0], mask, [8], [0, 256]).flatten() h_hist = np.array(h_hist, dtype=np.float32).flatten() r_hist = np.array(r_hist, dtype=np.float32).flatten() g_hist = np.array(g_hist, dtype=np.float32).flatten() b_hist = np.array(b_hist, dtype=np.float32).flatten() s_hist = np.array(s_hist, dtype=np.float32).flatten() l_hist = np.array(l_hist, dtype=np.float32).flatten() a_hist = np.array(a_hist, dtype=np.float32).flatten() bb_hist = np.array(bb_hist, dtype=np.float32).flatten() hist = np.hstack([r_hist, g_hist, b_hist, h_hist, s_hist, l_hist, a_hist, bb_hist]) return hist.flatten() </code></pre> <blockquote> <p>and finally am using <strong><em>location features</em></strong> , by simply flattened down the (x,y) cordinate list to form a feature array whhich will represent location feautre respectively.</p> </blockquote> <pre><code>cords = [t[::-1] for t in clusters_.get(disc)] # reversing the list of tuples disc_pts = np.array(cords, dtype=np.int32) loc_feat = np.array(cords, dtype=np.float32).flatten() </code></pre> <blockquote> <p>here initially the cords represents to a array with depth 2 coz every pixel have two cordinates so, i flattened it , to form a array with depth of 1.</p> </blockquote> <p>finally i stacked all the three features to form single feature vector,</p> <pre><code>feat_vec = np.hstack([loc_feat, color_feat, hist_feat]).flatten() </code></pre> <blockquote> <p>here i have manually cheked the elements in all three feature vectors, in order to confirm the dtype, dimensions of array are not ambiguous to trigger the error, but everything looks fine to me.</p> </blockquote> <p>this is the first one, location feature</p> <pre><code>[ 82. 209. 82. 210. 83. 210. 82. 211. 83. 211. 82. 212. 83. 212. 84. 212. 81. 213. 82. 213. 83. 213. 84. 213. 81. 214. 82. 214. 83. 214. 84. 214. 81. 215. 82. 215. 83. 215. 84. 215. 81. 216. 82. 216. 83. 216. 84. 216. 81. 217. 82. 217. 83. 217. 84. 217. 81. 218. 82. 218. 83. 218. 84. 218. 85. 218. 81. 219. 82. 219. 83. 219. 84. 219. 85. 219. 81. 220. 82. 220. 83. 220. 84. 220. 85. 220. 81. 221. 82. 221. 83. 221. 84. 221. 85. 221. 81. 222. 82. 222. 83. 222. 84. 222. 85. 222. 86. 222. 81. 223. 82. 223. 83. 223. 84. 223. 85. 223. 86. 223. 81. 224. 82. 224. 83. 224. 84. 224. 85. 224. 86. 224. 81. 225. 82. 225. 83. 225. 84. 225. 85. 225. 86. 225. 87. 225. 81. 226. 82. 226. 83. 226. 84. 226. 85. 226. 86. 226. 87. 226. 81. 227. 82. 227. 83. 227. 84. 227. 85. 227. 86. 227. 87. 227. 82. 228. 83. 228. 84. 228. 85. 228. 86. 228. 87. 228. 82. 229. 83. 229. 84. 229. 85. 229. 86. 229. 87. 229. 82. 230. 83. 230. 84. 230. 85. 230. 86. 230. 87. 230. 82. 231. 83. 231. 84. 231. 85. 231. 86. 231. 87. 231. 82. 232. 83. 232. 84. 232. 85. 232. 86. 
232. 87. 232. 82. 233. 83. 233. 84. 233. 85. 233. 86. 233. 87. 233. 88. 233. 83. 234. 84. 234. 85. 234. 86. 234. 87. 234. 88. 234. 83. 235. 84. 235. 85. 235. 86. 235. 87. 235. 88. 235. 83. 236. 84. 236. 85. 236. 86. 236. 87. 236. 88. 236. 83. 237. 84. 237. 85. 237. 86. 237. 87. 237. 88. 237. 84. 238. 85. 238. 86. 238. 87. 238. 84. 239. 85. 239. 86. 239. 87. 239. 84. 240. 85. 240. 86. 240. 87. 240. 84. 241. 85. 241. 86. 241. 87. 241. 85. 242. 86. 242. 87. 242. 85. 243. 86. 243.] </code></pre> <p>this is color feautre vector</p> <pre><code>[ 3.35917592e-01 3.25945705e-01 3.25065553e-01 3.34438205e-01 2.04288393e-01 1.97153553e-01 1.85440078e-01 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.32209742e-02 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.62172282e-04 3.93258437e-04 1.31086141e-04 9.36329598e-05 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 9.98417616e-01 7.02247198e-04] </code></pre> <p>and this is histogram feature vector</p> <pre><code>[ 0. 0. 0. 0. 0. 0. 0. 169. 0. 0. 0. 0. 0. 0. 0. 169. 0. 163. 6. 0. 0. 0. 0. 0. 0. 0. 0. 169. 0. 0. 0. 0. 169. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 29. 93. 47. 0. 0. 0. 0. 169. 0. 0. 0. 0. 0. 0. 169. 0. 0. 0. 0.] </code></pre> <p>as it can be seen the datatype and dimensions of all three arrays are same, but still getting the error while training with RF or SVC classifier, <strong><em>also when i don't use location feature and train only with color and histogram features, then it doesn't generate the error, and both the training and prediction program works fine.</em></strong> but only when all the three features stacked it geves the error.</p> <p>the error is throwned when RF classifier is set for training.here _data is a list of feature vectors( ~feat_vec~ ) that are computed previously. and _labels are curresponding lables either 1 or 0, for each data(image) samples respectively.</p> <pre><code>model = RandomForestClassifier(n_estimators=100, random_state=42) model.fit(_data, _labels) </code></pre> <p>complete error trace back:</p> <pre><code>Traceback (most recent call last): File "~/openCV/saliency_detection/svm_train.py", line 59, in &lt;module&gt; model.fit(_data, _labels) File "/usr/lib/python2.7/site-packages/sklearn/ensemble/forest.py", line 247, in fit X = check_array(X, accept_sparse="csc", dtype=DTYPE) File "/usr/lib/python2.7/site-packages/sklearn/utils/validation.py", line 382, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) ValueError: setting an array element with a sequence. </code></pre>
<p>Most likely the error is cause by trying to create an array from lists or arrays of differing length. </p> <p>Without the <code>dtype</code> the following creates an <code>object</code> <code>dtype</code> array; with a numeric <code>dtype</code> it raises this error.</p> <pre><code>In [33]: np.array([[1,2,3],[4,5,6],[7,8,9,10]]) Out[33]: array([list([1, 2, 3]), list([4, 5, 6]), list([7, 8, 9, 10])], dtype=object) In [34]: np.array([[1,2,3],[4,5,6],[7,8,9,10]], dtype=int) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-34-677fde45dbde&gt; in &lt;module&gt;() ----&gt; 1 np.array([[1,2,3],[4,5,6],[7,8,9,10]], dtype=int) ValueError: setting an array element with a sequence. </code></pre> <p>It can't create a 2d numeric array from 3 lists of differing length. </p> <pre><code>In [37]: np.array([[1,2,3],[4,5,6],[7,8,9]], dtype=int) Out[37]: array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) </code></pre> <p>In the traceback variable names change, but I'm guessing the problem can be traced back to the <code>_data</code> variable you give <code>fit</code>. You don't show the code that creates <code>_data</code>, but only give a vague description:</p> <blockquote> <p>_data is a list of feature vectors( ~feat_vec~ )</p> </blockquote> <p>From your prints it looks like color and histogram have about 80 values. but location clearly has many more. That's consistent with your claim that</p> <blockquote> <p>also when i don't use location feature and train only with color and histogram features, then it doesn't generate the error, and both the training and prediction program works fine. but only when all the three features stacked it geves the error.</p> </blockquote> <p>The fact that you can <code>hstack</code> them doesn't tell us anything about how they will work in <code>np.array(....)</code>.</p> <pre><code>In [35]: np.hstack([[1,2,3],[4,5,6],[7,8,9,10]]) Out[35]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) </code></pre> <p>Here's a list of previous times that I've answered a question about the same ValueError:</p> <p><a href="https://stackoverflow.com/search?q=user%3A901925+ValueError%2Bsequence">https://stackoverflow.com/search?q=user%3A901925+ValueError%2Bsequence</a></p>
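<p>A quick way to confirm this in your own script (assuming <code>_data</code> is a plain Python list of the per-image feature vectors) is to check whether every vector has the same length before calling <code>fit</code>:</p> <pre><code>lengths = {len(v) for v in _data}
print(lengths)   # more than one value here means the rows cannot form a 2-D numeric array
</code></pre>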
python|arrays|numpy|random-forest|scikit-image
1
7,256
58,700,150
Python Pandas: Can I import a CSV through user input using a dynamic file path?
<p>So I'm trying to create a very basic python UI where a user uploads a CSV with a list of cities and the program creates an inner join with a pre-existing database of zip codes returns a list of cities and their corresponding zip codes. My code is functional so far. I'd just like the user to be able to upload a CSV from anywhere in their system. Right now, the program only reads from the python directory. I don't want to have to specify the file path of the input file. Is there any way this can be done? Here's the code I have right now - </p> <pre><code>import tkinter as tk from tkinter import filedialog import pandas as pd root= tk.Tk() canvas1 = tk.Canvas(root, width = 300, height = 300, bg = 'lightsteelblue2', relief = 'raised') canvas1.pack() def getCSV(): global cities import_file_path = filedialog.askopenfilename() cities = pd.read_csv('cities.csv') zips = pd.read_csv('zips.csv') output = pd.merge(cities, zips, on='state' and 'county', how='inner') output.to_csv('output.csv', encoding='utf-8', index=False) browseButton_CSV = tk.Button(text=" Upload City Data &amp; Close ", command=getCSV, bg='green', fg='white', font=('helvetica', 12, 'bold')) canvas1.create_window(150, 150, window=browseButton_CSV) root.mainloop() </code></pre> <p>I'm kinda new to python and programming in general. Just been learning it over the past month or 2. Any help is appreciated!</p> <p>Thanks, DJ</p>
<p>@gingerhaze's answer - </p> <pre><code>import tkinter as tk from tkinter import filedialog import pandas as pd root= tk.Tk() canvas1 = tk.Canvas(root, width = 300, height = 300, bg = 'lightsteelblue2', relief = 'raised') canvas1.pack() def getCSV(): global cities import_file_path = filedialog.askopenfilename() cities = pd.read_csv(import_file_path) zips = pd.read_csv('zips.csv') output = pd.merge(cities, zips, on='state' and 'county', how='inner') output.to_csv('output.csv', encoding='utf-8', index=False) browseButton_CSV = tk.Button(text=" Upload City Data &amp; Close ", command=getCSV, bg='green', fg='white', font=('helvetica', 12, 'bold')) canvas1.create_window(150, 150, window=browseButton_CSV) root.mainloop() </code></pre>
python|pandas
0
7,257
70,047,986
Replace a single value with multiple values
<p>Given a <code>mask</code> numpy array such as:</p> <pre><code>mask = np.array([0, 0, 1, 0, 0, 0, 1, ...]) </code></pre> <p>I want to replace each <code>1</code> with a <code>target</code> <strong>vector</strong>. Example:</p> <pre><code>target = np.array([5, 4, 3, 2, 1]) mask = np.array([0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0,...]) output = np.array([0, 0, 5, 4, 3, 2, 1, 0, 5, 4, 3, 2, ...]) # Overlaps: mask = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0,...]) output = np.array([0, 0, 5, 4, 3, 2, 5, 4, 3, 2, ...]) </code></pre> <p>Naivly, one can write this via the following (ignoring boundary problems):</p> <pre><code>output = np.zeros_like(mask) for i, x in enumerate(mask): if x == 1: output[i:i+len(target)] = target </code></pre> <p>I'm wondering, whether this is possible without resorting to a for loop?</p>
<p><code>numpy</code> supports assigning value for the same index multiple times in one go, like so:</p> <pre><code>mask = np.array([0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]) padding_idx = [2,3,4,5,6,5,6,7,8,9,8,9,10,11] padding_values = [5,4,3,2,1,5,4,3,2,1,5,4,3,2] mask[padding_idx] = padding_values &gt;&gt;&gt; mask array([0, 0, 5, 4, 3, 5, 4, 3, 5, 4, 3, 2]) </code></pre> <p>You just need to find out <code>padding_idx</code> and <code>padding_values</code>.</p> <p>Note that <code>padding_values = [5,4,3,2,1,5,4,3,2,1,5,4,3,2]</code> has one value missing. So you need also to find a number of values missing. After that you can use <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>broadcasting</code></a></p> <pre><code>vector = np.array([5,4,3,2,1]) N = len(vector) mask = np.array([0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]) idx = np.flatnonzero(mask) missing_values = len(mask) - idx[-1] - N #Broadcast padding_idx = np.flatnonzero(mask)[:,None] + np.arange(N) padding_values = np.repeat(vector[np.newaxis, :], len(idx), axis=0) #Flatten padding_idx = padding_idx.ravel()[:missing_values] padding_values = padding_values.ravel()[:missing_values] #Go! mask[padding_idx] = padding_values &gt;&gt;&gt; mask array([0, 0, 5, 4, 3, 5, 4, 3, 5, 4, 3, 2]) </code></pre>
python|numpy
1
7,258
70,084,538
How to drop column where you don't know the name of the column?
<p>I'm a beginner and I'm wondering about this.</p> <p>For example I have this code:</p> <p><code>df = example.get_data</code></p> <p>And I only know that the header will be a date of <em>numpy.datetime64</em> type. How can I only keep the last 2 years of data without knowing anything more about it?</p> <p>I tried something like this:</p> <p><code>df.drop(df.columns.year &gt;= date.today().year-2, axis=1, inplace = True</code></p> <p>But it's not working. Any suggestions?</p>
<p>If your column names are e.g. <code>'12/02/2021', '14/01/2021', '19/08/2019'</code> you can select all columns of the last two years like that:</p> <pre><code>from pandas.tseries.offsets import DateOffset last_2_years = [c for c in df.columns if pd.to_datetime(c) &gt; pd.Timestamp.today() - DateOffset(years=2)] df = df[last_2_years] </code></pre> <p>It's usually easier to select the columns you want to keep than to drop the columns you don't need, but you can of course also do</p> <pre><code>cols_to_drop = [c for c in df.columns if pd.to_datetime(c) &lt; pd.Timestamp.today()-DateOffset(years=2)] df = df.drop(cols_to_drop, axis=1) </code></pre>
python|python-3.x|pandas|dataframe
1
7,259
56,382,596
Why do we use numpy.argmax() to return an index from a numpy array of predictions?
<p>Let me preface this by saying, I am very new to neural networks, and this is my first time using numpy, tensorflow, or keras.</p> <p>I wrote a neural network to recognize handwritten digits, using the MNIST data set. I followed <a href="https://www.youtube.com/watch?v=wQ8BIBpya2k" rel="nofollow noreferrer">this tutorial</a> by Sentdex and noticed he was using <code>print(np.argmax(predictions[0]))</code> to print the first index from the numpy array of predictions.</p> <p>I tried running the program with that line replaced by <code>print(predictions[i])</code>, (i was set to 0) but the output was not a number, it was: <code>[2.1975785e-08 1.8658861e-08 2.8842608e-06 5.7113186e-05 1.2067199e-10 7.2511304e-09 1.6282028e-12 9.9993789e-01 1.3356166e-08 2.0409643e-06]</code>.</p> <p>My code than I'm confused about is: </p> <pre><code>predictions = model.predict(x_test) for i in range(10): plt.imshow(x_test[i]) plt.show() print("PREDICTION: ", predictions[i]) </code></pre> <p>I read the numpy documentation for the argmax() function, and from what I understand, it takes in a x-dimensional array, converts it to a one-dimensional array, then returns the index of the largest value. The Keras documentation for model.predict() indicated that the function returns a numpy array of the networks predictions. <strong>So I don't understand why we have to use argmax()</strong> to properly print the prediction, because as I understand, it has a completely unrelated purpose.</p> <p>Sorry for the bad code formatting, I couldn't figure out how to properly insert multi line chunks of code into my post </p>
<p>What any classification neural network outputs is a probability distribution over the class indices, meaning that the network assigns one probability to each class. The sum of these probabilities is 1.0. Then the network is trained to assign the highest probability to the correct class, so to recover the class index from the probabilities you have to take the location (index) that has the maximum probability. This is done with the <code>argmax</code> operation.</p>
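<p>As an illustration, here is a small sketch (the probabilities are made up, but shaped like the output shown in the question) of how <code>argmax</code> turns one row of <code>predictions</code> into a digit label:</p> <pre><code>import numpy as np

# One row of model.predict() output: ten class probabilities, one per digit 0-9.
prediction = np.array([2.2e-08, 1.9e-08, 2.9e-06, 5.7e-05, 1.2e-10,
                       7.3e-09, 1.6e-12, 9.999e-01, 1.3e-08, 2.0e-06])

print(prediction.sum())       # ~1.0, so it really is a probability distribution
print(np.argmax(prediction))  # 7, the index (digit) with the highest probability
</code></pre> <p>So <code>argmax</code> is not unrelated at all: it is the step that converts the per-class probabilities back into a single predicted class index.</p>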
python|numpy|tensorflow|machine-learning|keras
2
7,260
56,079,085
How do I overcome the TypeError: cannot convert the series to <class 'float'> error
<p>I am trying to calculate the Latitude and Longitude for a number (series) of flights, for which I tried to use this code </p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from math import radians, cos, sin, asin, sqrt # convert decimal degrees to radians lon1 = df1['from_lon'].map(radians) lat1 = df1['from_lat'].map(radians) lon2 = df1['dest_lon'].map(radians) lat2 = df1['dest_lat'].map(radians) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) results = 3959.0 * c </code></pre> <p>but I keep getting this error</p> <pre class="lang-py prettyprint-override"><code>TypeError: cannot convert the series to &lt;class 'float'&gt; </code></pre> <p>I have seen many people asking about this error on stackoverflow and tried the answers provided to them. What I have tried is to use the equation in this way </p> <pre class="lang-py prettyprint-override"><code>a = dlat.divide(2).apply(sin).pow(2) + cos(lat1) * lat2.apply(cos).multiply(dlon.divide(2).apply(sin).pow(2)) </code></pre> <p>I also tried to use <code>lambda</code> as well as tried to convert each variable to float using <code>.astype(float)</code> but unfortunately nothing worked for me. </p> <p>The data type for lon1, lat1, lon2 and lat2 is <code>float64</code>. A sample of the data:</p> <pre class="lang-py prettyprint-override"><code>lon1 -1.826892 -1.287685 -1.534229 -1.534229 -1.826892 1.775173 -0.062252 </code></pre>
<p>Below is code that works without the error. Basically, you need to apply the scalar <code>math</code> functions element-wise with <code>Series.apply(lambda x: ...)</code>:</p> <pre><code>a = (dlat/2).apply(lambda x : sin(x)) ** 2 + lat1.apply(lambda x : cos(x)) * lat2.apply(lambda x:cos(x))* (dlon/2).apply(lambda x : sin(x)) ** 2 c = 2 * a.apply(lambda x : asin(sqrt(x))) results = 3959.0 * c </code></pre>
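<p>As a side note, if you are happy to use <code>numpy</code>'s vectorised functions instead of the scalar ones from the <code>math</code> module, the same haversine computation can be written without any <code>apply</code>/<code>lambda</code> at all. This is only a sketch and assumes <code>dlat</code>, <code>dlon</code>, <code>lat1</code> and <code>lat2</code> are the radian Series built in the question:</p> <pre><code>import numpy as np

# np.sin/np.cos/np.arcsin/np.sqrt operate element-wise on whole Series,
# so no per-element lambda is needed.
a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
c = 2 * np.arcsin(np.sqrt(a))
results = 3959.0 * c
</code></pre>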
python|python-3.x|pandas|jupyter-notebook
0
7,261
55,800,339
Different accuracies on different machines using same seeds, code and dataset
<p>I am trying to develop a CNN for signature recognition to identify which person a given signature belongs to. There are 3 different classes(persons) and 23 signatures for each of them. Having this little amount of samples, I decided to use the Keras <code>ImageDataGenerator</code> to create additional images.</p> <p>However, testing the CNN on different machines (Windows 10 and Mac OS) gives different accuracy scores when evaluating the model on the test data. 100% on windows and 93% on mac OS. Both machines run python 3.7.3 64-bit.</p> <p>The data is split using <code>train_test_split</code> from <code>sklearn.model_selection</code> with 0.8 for training and 0.2 for testing, <code>random_state</code> is 1. Data and labels are properly normalised and adjusted to fit into the CNN. Various numbers for <code>steps_per_epoch</code>, <code>batch_size</code> and <code>epoch</code> have been tried.</p> <p>I have tried using both <code>np.random.seed</code> and <code>tensorflow.set_random_seed</code>, generating 100% accuracy on the test data using <code>seed(1)</code> on PC, however the same seed on the other machine still yields a different accuracy score. </p> <p>Here is the CNN architecture along with the method call for creating additional images. The following code yields an accuracy of 100% on one machine and 93.33% on the other.</p> <pre class="lang-py prettyprint-override"><code>seed(185) set_random_seed(185) X_train, X_test, y_train, y_test = train_test_split(data, labels, train_size=0.8, test_size=0.2, random_state=1) datagen = ImageDataGenerator() model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(3, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit_generator(datagen.flow(X_train, y_train, batch_size=64), validation_data=(X_test,y_test),steps_per_epoch= 30, epochs=10) model.evaluate(X_test, y_test) </code></pre> <p><strong>EDIT</strong> So after more research I've discovered that using different hardware, specifically different graphics cards, will result in varying accuracies. Saving the trained model and using that to evalute data on different machines is the ideal solution.</p>
<p>Providing the solution here (Answer Section), even though it is present in the Question Section, for the benefit of the community. </p> <p>Using different hardware, specifically different graphics cards, can result in varying accuracies even when the same seeds, code and dataset are used. <code>Saving the trained model and using that to evaluate data</code> on different machines is the ideal solution.</p>
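<p>A minimal sketch of that approach with Keras, reusing the variables from the question's code (the file name <code>signature_cnn.h5</code> is only a placeholder, and the import path may be <code>tensorflow.keras</code> instead of <code>keras</code> depending on your setup):</p> <pre><code>from keras.models import load_model

# On the machine where training happened: train once, then persist the weights.
model.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
                    validation_data=(X_test, y_test),
                    steps_per_epoch=30, epochs=10)
model.save('signature_cnn.h5')

# On any other machine: load the already-trained model and only evaluate it,
# so the hardware-dependent training step is not repeated.
trained_model = load_model('signature_cnn.h5')
print(trained_model.evaluate(X_test, y_test))
</code></pre>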
python|tensorflow|machine-learning|keras|conv-neural-network
0
7,262
55,708,140
Calculate new list with differences of list elements
<p>I am looking for a faster way to calculate the absolute difference between every element of two lists. </p> <p>This is my current code, but it gets a bit slow with big arrays:</p> <pre><code>import numpy as np np.random.seed(10) x_values = np.random.randint(-50,100,size=(10)) test_values = x_values * 2 # print(x_values, test_values) for x in test_values: test = sorted([(j, np.abs(j-x)) for j in x_values], key=lambda x: x[1]) print(test) </code></pre> <p>Output: </p> <pre><code>[(-37, 37), (-17, 57), (4, 78), (12, 86), (27, 101), (38, 112), (50, 124), (57, 131), (72, 146), (76, 150)] [(-37, 3), (-17, 17), (4, 38), (12, 46), (27, 61), (38, 72), (50, 84), (57, 91), (72, 106), (76, 110)] [(12, 4), (4, 4), (27, 19), (-17, 25), (38, 30), (50, 42), (-37, 45), (57, 49), (72, 64), (76, 68)] [(27, 3), (12, 12), (38, 14), (4, 20), (50, 26), (57, 33), (-17, 41), (72, 48), (76, 52), (-37, 61)] [(57, 3), (50, 4), (38, 16), (72, 18), (76, 22), (27, 27), (12, 42), (4, 50), (-17, 71), (-37, 91)] [(76, 0), (72, 4), (57, 19), (50, 26), (38, 38), (27, 49), (12, 64), (4, 72), (-17, 93), (-37, 113)] [(76, 24), (72, 28), (57, 43), (50, 50), (38, 62), (27, 73), (12, 88), (4, 96), (-17, 117), (-37, 137)] [(76, 38), (72, 42), (57, 57), (50, 64), (38, 76), (27, 87), (12, 102), (4, 110), (-17, 131), (-37, 151)] [(76, 68), (72, 72), (57, 87), (50, 94), (38, 106), (27, 117), (12, 132), (4, 140), (-17, 161), (-37, 181)] [(76, 76), (72, 80), (57, 95), (50, 102), (38, 114), (27, 125), (12, 140), (4, 148), (-17, 169), (-37, 189)] </code></pre>
<p>This is a pure numpy solution. I didn't compare it against your code in terms of speed, so let me know:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np np.random.seed(10) x_values = np.random.randint(-50,100,size=(10)) test_values = x_values*2 # create a matrix of len(test_values) times the test_values column vector... test_mat = np.array([test_values]*len(test_values)).T # ... and then calculate the absolute difference matrix abs_mat = np.abs(x_values - test_mat) # this part is to obtain your desired output output = np.column_stack((np.array([x_values]*len(x_values)).flatten(), abs_mat.flatten())) output = np.split(output, len(x_values)) </code></pre>
python|numpy
0
7,263
55,907,611
How do I speed up this file creation process?
<p>I am trying to create a large flat file with fixed width columns that contains multiple layers, but processing seems to be very slow, most likely because I am iterating over each row. For context, this is for transmitting insurance policy information.</p> <p>The hierarchy goes like this: </p> <pre><code>-Policy row --Property on policy ---Coverage on property --Property on policy ---Coverage on property --Owner on policy --Owner on policy --Owner on policy </code></pre> <p>Currently I'm loading the four record types into separate dataframes, and then doing a for loop over each type by pulling them based on the parent record's ID, and then writing them to the file. I'm hoping for some sort of hierarchical dataFrame merge that doesn't force me to scan the file each time I want a record.</p> <pre><code>import re import pandas as pd import math def MakeNumeric(instring): output = re.sub('[^0-9]', '', str(instring)) return str(output) def Pad(instring, padchar, length, align): if instring is None: # Takes care of NULL values instring = '' instring = str(instring).upper() instring = instring.replace(',', '').replace('\n', '').replace('\r', '') instring = instring[:length] if align == 'L': output = instring + (padchar * (length - len(instring))) elif align == 'R': output = (padchar * (length - len(instring))) + instring else: output = instring return output def FileCreation(): POLR = pd.read_parquet(r'POLR.parquet') PRP1 = pd.read_parquet(r'PRP1.parquet') PROP = pd.read_parquet(r'PROP.parquet') SUBJ = pd.read_parquet(r'SUBJ.parquet') rownum = 1 totalrownum = 1 POLRCt = 0 size = 900000 POLR = [POLR.loc[i:i + size - 1, :] for i in range(0, len(POLR), size)] FileCt = 0 print('Predicted File Count: ' + str(math.ceil(len(POLR[0])/ size)) ) for df in POLR: FileCt += 1 filename = r'OutputFile.' + Pad(FileCt, '0', 2, 'R') with open(filename, 'a+') as outfile: for i, row in df.iterrows(): row[0] = Pad(rownum, '0', 9, 'R') row[1] = Pad(row[1], ' ', 4, 'L') row[2] = Pad(row[2], '0', 5, 'R') # I do this for all 50 columns outfile.write((','.join(row[:51])).replace(',', '') + '\n') rownum += 1 totalrownum += 1 for i2, row2 in PROP[PROP.ID == row[51]].iterrows(): row2[0] = Pad(rownum, '0', 9, 'R') row2[1] = Pad(row2[1], ' ', 4, 'L') row2[2] = Pad(row2[2], '0', 5, 'R') # I do this for all 105 columns outfile.write((','.join(row2[:106])).replace(',', '') + '\n') rownum += 1 totalrownum += 1 for i3, row3 in PRP1[(PRP1['id'] == row2['ID']) &amp; (PRP1['VNum'] == row2['vnum'])].iterrows(): row3[0] = Pad(rownum, '0', 9, 'R') row3[1] = Pad(row3[1], ' ', 4, 'L') row3[2] = Pad(row3[2], '0', 5, 'R') # I do this for all 72 columns outfile.write((','.join(row3[:73])).replace(',', '') + '\n') rownum += 1 totalrownum += 1 for i2, row2 in SUBJ[SUBJ['id'] == row['id']].iterrows(): row2[0] = Pad(rownum, '0', 9, 'R') row2[1] = Pad(row2[1], ' ', 4, 'L') row2[2] = Pad(row2[2], '0', 5, 'R') # I do this for all 24 columns outfile.write((','.join(row2[:25])).replace(',', '') + '\n') rownum += 1 totalrownum += 1 POLRCt += 1 print('File {} of {} '.format(str(FileCt),str(len(POLR)) ) + str((POLRCt - 1) / len(df.index) * 100) + '% Finished\r') rownum += 1 rownum = 1 POLRCt = 1 </code></pre> <p>I'm essentially looking for a script that doesn't take multiple days to create a 27M record file.</p>
<p>I ended up populating temp tables for each record level, and creating keys, then inserting them into a permanent staging table and assigning a clustered index to the keys. I then queried the results while using <code>OFFSET</code> and <code>FETCH NEXT %d ROWS ONLY</code> to reduce memory size. I then used the multiprocessing library to break the workload out for each thread on the CPU. Ultimately, the combination of these has reduced the runtime to about 20% of what it was when this question was originally posted.</p>
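<p>For reference, a rough sketch of the chunked-query-plus-multiprocessing pattern described above (the table name, key column, connection string, chunk size and total row count are all placeholders, not the real ones):</p> <pre><code>import pandas as pd
from multiprocessing import Pool
from sqlalchemy import create_engine

CHUNK = 500_000
TOTAL_ROWS = 27_000_000                          # placeholder row count
ENGINE_URL = 'mssql+pyodbc://user:pass@my_dsn'   # placeholder connection string

def process_chunk(offset):
    # Page through the staging table (ordered by its clustered index key)
    # so each worker only holds one chunk in memory at a time.
    query = ('SELECT * FROM staging_policies ORDER BY row_key '
             f'OFFSET {offset} ROWS FETCH NEXT {CHUNK} ROWS ONLY')
    engine = create_engine(ENGINE_URL)
    chunk = pd.read_sql(query, engine)
    # ... build the fixed-width lines for this chunk and write them to a part file ...
    return len(chunk)

if __name__ == '__main__':
    with Pool() as pool:
        pool.map(process_chunk, range(0, TOTAL_ROWS, CHUNK))
</code></pre>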
python|pandas|large-files|file-writing|fixed-width
1
7,264
64,676,672
How can I make a distance matrix with own metric using no loop?
<p>I have a np.array like this:</p> <pre><code>[[ 1.3 , 2.7 , 0.5 , NaN , NaN], [ 2.0 , 8.9 , 2.5 , 5.6 , 3.5], [ 0.6 , 3.4 , 9.5 , 7.4 , NaN]] </code></pre> <p>And a function to compute the distance between two rows:</p> <pre><code>def nan_manhattan(X, Y): nan_diff = np.absolute(X - Y) length = nan_diff.size return np.nansum(nan_diff) * length / (length - np.isnan(nan_diff).sum()) </code></pre> <p>I need all pairwise distances, and I don't want to use a loop. How do I do that?</p>
<p>Use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html" rel="nofollow noreferrer">pdist</a>:</p> <pre><code>import numpy as np from scipy.spatial.distance import pdist, squareform def nan_manhattan(X, Y): nan_diff = np.absolute(X - Y) length = nan_diff.size return np.nansum(nan_diff) * length / (length - np.isnan(nan_diff).sum()) arr = np.array([[1.3, 2.7, 0.5, np.nan, np.nan], [2.0, 8.9, 2.5, 5.6, 3.5], [0.6, 3.4, 9.5, 7.4, np.nan]]) result = squareform(pdist(arr, nan_manhattan)) print(result) </code></pre> <p><strong>Output</strong></p> <pre><code>[[ 0. 14.83333333 17.33333333] [14.83333333 0. 19.625 ] [17.33333333 19.625 0. ]] </code></pre>
python|numpy|numpy-ndarray
4
7,265
39,994,804
pandas groupby apply does not broadcast into a DataFrame
<p>Using pandas 0.19.0. The following code will reproduce the problem:</p> <pre><code>In [1]: import pandas as pd import numpy as np In [2]: df = pd.DataFrame({'c1' : list('AAABBBCCC'), 'c2' : list('abcdefghi'), 'c3' : np.random.randn(9), 'c4' : np.arange(9)}) df Out[2]: c1 c2 c3 c4 0 A a 0.819618 0 1 A b 1.764327 1 2 A c -0.539010 2 3 B d 1.430614 3 4 B e -1.711859 4 5 B f 1.002522 5 6 C g 2.257341 6 7 C h 1.338807 7 8 C i -0.458534 8 In [3]: def myfun(s): """Function does practically nothing""" req = s.values return pd.Series({'mean' : np.mean(req), 'std' : np.std(req), 'foo' : 'bar'}) In [4]: res = df.groupby(['c1', 'c2'])['c3'].apply(myfun) res.head(10) Out[4]: c1 c2 A a foo bar mean 0.819618 std 0 b foo bar mean 1.76433 std 0 c foo bar mean -0.53901 std 0 B d foo bar </code></pre> <p>And, of course I expect this:</p> <pre><code>Out[4]: foo mean std c1 c2 A a bar 0.819618 0 b bar 1.76433 0 c bar -0.53901 0 B d bar 1.43061 0 </code></pre> <p>Pandas automatically converts a Series to a DataFrame when returned by a function that is applied to a Series or a DataFrame. Why is the behavior different for functions applied to groups?</p> <p>I am looking for an answer that will result in the output desired. Bonus points for explaining the difference in behavior among <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow"><code>pandas.Series.apply</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>pandas.DataFrame.apply</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow"><code>pandas.core.groupby.GroupBy.apply</code></a></p>
<p>an easy fix would be to <code>unstack</code></p> <pre><code>df = pd.DataFrame({'c1' : list('AAABBBCCC'), 'c2' : list('abcdefghi'), 'c3' : np.random.randn(9), 'c4' : np.arange(9)}) def myfun(s): """Function does practically nothing""" req = s.values return pd.Series({'mean' : np.mean(req), 'std' : np.std(req), 'foo' : 'bar'}) res = df.groupby(['c1', 'c2'])['c3'].apply(myfun) res.unstack() </code></pre> <p><a href="https://i.stack.imgur.com/WRb1O.png" rel="nofollow"><img src="https://i.stack.imgur.com/WRb1O.png" alt="enter image description here"></a></p>
python-3.x|pandas
2
7,266
39,722,279
Is it a good idea to apply ML libraries on pandas data frame?
<p>I am building a cognitive miner AI bot. The bot has two tasks: one is training and the other is prediction. I'm using a few ML techniques. I have a large number of documents (~200,000) which I train on, and then, when predicting for a query, I follow some steps to find the most accurately matched document from the training data (by looking at the score and confidence of each document). Some of the well-known functions I'm using are TF-IDF, n-grams and cosine similarity over the tokens of the asked query. For this I am using core Python, third-party Python libraries, and a NoSQL database for keeping the training data.</p> <p>NOTE: all performance improvements have been taken care of using core Python as much as possible. (Please don't suggest Elasticsearch or Python Whoosh, because I just want to use my silly code for another decade. :) )</p> <p>I'm facing a performance issue: scoring takes 2-3 seconds, which is not good. I want the result to come back in milliseconds.</p> <p>So my question to you: if I use pandas and try to apply all of the above functionality to it, will it give better performance? Or will numpy matrix calculations give better performance?</p> <p>I don't think any code needs to be pasted here; I just need experienced people's views on my problem. And of course, keep in mind that the solution should be scalable.</p>
<p>It probably won't make much of a difference either way, in terms of performance.</p> <p>Pandas is extremely efficient for loading data and munging it (grouping it in different ways, pivoting, creating new columns from existing columns, and so forth). </p> <p>Once your data is ready for passing to a machine learning algorithm (say, in <code>sklearn</code>), then, basically, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.as_matrix.html" rel="nofollow"><code>pd.DataFrame.as_matrix()</code></a> can transform it into a numpy array, without fundamentally affecting overall performance. It's hard to conceive of any <code>sklearn</code> prediction/classification stage whose cost doesn't dominate this.</p> <p>The <a href="https://pypi.python.org/pypi/sklearn-pandas" rel="nofollow"><code>sklearn-pandas</code> package</a> facilitates this even further.</p> <p>If your performance isn't satisfactory at this point, the solution lies elsewhere.</p>
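<p>A minimal sketch of that handoff (the column names and the choice of classifier here are made up purely for illustration):</p> <pre><code>import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({'f1': [0.1, 0.4, 0.8, 0.3],
                   'f2': [1.0, 0.2, 0.5, 0.9],
                   'label': [0, 1, 1, 0]})

# Munge/prepare in pandas, then hand plain numpy arrays to sklearn.
X = df[['f1', 'f2']].values   # the conversion itself is cheap
y = df['label'].values

clf = LogisticRegression()
clf.fit(X, y)                 # this step dominates the overall cost
print(clf.predict(X))
</code></pre>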
python|pandas|numpy|artificial-intelligence
2
7,267
44,101,223
Dict inside dict to excel export
<p>I am trying to save a dict to an Excel file (I've tried to use xlsxwriter). Example:</p> <pre><code> {'Mod1': {'A': 0.029999999999999999, 'B': 0.050000000000000003, 'C': 0.14000000000000001}, 'Mod2':{'A2': ....}} </code></pre> <p>I am getting Mod1 in the first column (in Excel), then 'A', but not the scores. My code is as follows; I think I must add a for loop but I don't know how.</p> <pre><code>row = 0 col = 0 score = 0 for key in ranks.keys(): row += 1 worksheet.write(row, col, key) for item in ranks[key]: worksheet.write(row, col + 1, item) row += 1 </code></pre> <p>Any ideas ? Thank you!</p>
<p>Here is a small working example based on a best guess at the output structure you are looking for. If it doesn't quite match it should be easy to change. This is more of a data structure issue than an XlsxWriter issue:</p> <pre><code>import xlsxwriter workbook = xlsxwriter.Workbook('test.xlsx') worksheet = workbook.add_worksheet() row = 0 col = 0 ranks = {'Mod1': {'A': 0.029999999999999999, 'B': 0.050000000000000003, 'C': 0.14000000000000001}, 'Mod2': {'A': 1.029999999999999999, 'B': 1.050000000000000003, 'C': 1.14000000000000001}} for model in ranks.keys(): worksheet.write(row, col, model) row += 1 for key, value in ranks[model].items(): worksheet.write(row, col + 1, key) worksheet.write(row, col + 2, value) row += 1 workbook.close() </code></pre> <p><strong>Output:</strong></p> <p><a href="https://i.stack.imgur.com/Zdv1k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zdv1k.png" alt="enter image description here"></a></p> <p>For sorted <code>A .. C</code> categories you could replace the inner loop with something like this:</p> <pre><code> for category in sorted(ranks[model].keys()): worksheet.write(row, col + 1, category) worksheet.write(row, col + 2, ranks[model][category]) row += 1 </code></pre>
python|pandas|numpy|dictionary
1
7,268
69,312,646
How to join 2 columns of word embeddings in Pandas
<p>I have extracted word embeddings of 2 different texts (title and description) and want to train an <code>XGBoost</code> model on both embeddings. The embeddings are <code>200</code> in dimension each as can be seen below:</p> <p><a href="https://i.stack.imgur.com/EgRIk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EgRIk.png" alt="enter image description here" /></a></p> <p>Now I was able to train the model on 1 embedding data and it worked perfectly like this:</p> <pre><code>x=df['FastText'] #training features y=df['Category'] # target variable #Defining Model model = XGBClassifier(objective='multi:softprob') #Evaluation metrics score=['accuracy','precision_macro','recall_macro','f1_macro'] #Model training with 5 Fold Cross Validation scores = cross_validate(model, np.vstack(x), y, cv=5, scoring=score) </code></pre> <p>Now I want to use both the features for training but it gives me an error if I pass 2 columns of df like this:</p> <pre><code>x=df[['FastText_Title','FastText']] </code></pre> <p>One solution I tried is adding both the embeddings like x1+x2 but it decreases accuracy significantly. How do I use both features in <code>cross_validate</code> function?</p>
<p>In the past for multiple inputs, I've done this:</p> <pre><code>features = ['FastText_Title', 'FastText'] x = df[features] y = df['Category'] </code></pre> <p>This creates a DataFrame containing both feature columns. I usually need to scale the data as well using MinMaxScaler once the new feature set has been made.</p>
python|pandas|numpy|machine-learning
0
7,269
69,600,671
How to groupby pandas dataframe and sum values in another column
<p>I have a pandas dataframe with 3 columns (CHAR, VALUE, and WEIGHT).</p> <ul> <li><p>CHAR column contains duplicate values which I need to group ['A', 'A', 'A', 'B', 'B', 'C'].</p> </li> <li><p>VALUE column has a unique value for every unique CHAR [10, 10, 10, 15, 15, 20].</p> </li> <li><p>WEIGHT column has various values [1, 2, 1, 4, 4, 6].</p> </li> </ul> <p>Consider an example of my initial dataframe:</p> <p><a href="https://i.stack.imgur.com/czIu7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/czIu7.png" alt="enter image description here" /></a></p> <p>I need to create a new dataframe which will have 3 columns.</p> <ul> <li>CHAR which will not have any duplicates</li> <li>T_VALUE (total value) which will have a sum of this CHAR's value and all its weights</li> <li>T_WEIGHT (total weight) which will have a sum of this CHAR's weights</li> </ul> <p>Result would look like this:</p> <p><a href="https://i.stack.imgur.com/kc4uA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kc4uA.png" alt="enter image description here" /></a></p> <p>I would highly appreciate any help.</p>
<p>You could use <code>+=</code> instead:</p> <pre><code>newDF = df.groupby(['CHAR', 'VALUE'], as_index=False)['WEIGHT'].sum() newDF['VALUE'] += newDF['WEIGHT'] </code></pre>
python|pandas|dataframe
2
7,270
69,528,507
auto_arima(... , seasonal=False) but got SARIMAX
<p>I want to know the orders (p,d,q) for an ARIMA model, so I've got to use the <a href="https://pypi.org/project/pmdarima/" rel="nofollow noreferrer"><code>pmdarima</code></a> Python package, but it recommends a <strong>SARIMAX</strong> model! Keep reading for more details.<br /> I used the <a href="https://drive.google.com/file/d/185VMdJ2AhOr4qqLvAXM38HrhhKkG_yM7/view?usp=sharing" rel="nofollow noreferrer">Daily Total Female Births</a> data for this purpose. It's a stationary time series.</p> <pre class="lang-py prettyprint-override"><code># importing packages import pandas as pd from pmdarima import auto_arima import warnings warnings.filterwarnings('ignore') # read csv file df = pd.read_csv('/Data/DailyTotalFemaleBirths.csv' , index_col=0 , parse_dates=True) # set daily frequency for datetime indexes df.index.freq = 'D' # now using auto_arima i try to find (p,d,q) orders for ARIMA model. # so i set seasonal=False because i don't want orders for SARIMA! my # goal is to find orders for ARIMA model not SARIMA auto_arima(df['Births'] , start_P= 0 , start_q=0 , max_p=6 , max_q=3 , d=None , error_action='ignore' , suppress_warnings=True , m=12 , seasonal=False , stepwise=True).summary() </code></pre> <p>Then it gives me this:<br /> <a href="https://i.stack.imgur.com/goEJr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/goEJr.jpg" alt="enter image description here" /></a></p> <p>The problem is that although I set <code>seasonal=False</code>, it gives me SARIMAX (which stands for <strong>Seasonal Autoregressive Integrated Moving Average with eXogenous regressors</strong>), but I don't want to consider a seasonal component; that's why I set <code>seasonal=False</code>! It seems that <code>pmdarima</code> doesn't pay attention to <code>seasonal=False</code>!</p> <p>Can someone help me figure out what the problem is?</p> <hr /> <p>Expected result:<br /> <a href="https://i.stack.imgur.com/3Zwz7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zwz7.jpg" alt="enter image description here" /></a></p> <p>for <strong>False Result</strong>: SARIMAX result comes from <code>pmdarima</code> version <code>1.8.3</code><br /> for <strong>True Result</strong>: ARIMA result comes from <code>pmdarima</code> version <code>1.1.0</code></p>
<p>It's not really using a seasonal model. It's just a confusing message.</p> <p>In the pmdarima library, in version <a href="https://github.com/alkaline-ml/pmdarima/blob/master/doc/whats_new.rst#v151" rel="nofollow noreferrer">v1.5.1</a> they changed the statistical model in use from ARIMA to a more flexible and less buggy model called SARIMAX. (It stands for Seasonal Autoregressive Integrated Moving Average Exogenous.)</p> <p>Despite the name, you can use it in a non-seasonal way by setting the seasonal terms to zero.</p> <p>You can double-check whether the model is seasonal or not by using the following code:</p> <pre class="lang-py prettyprint-override"><code>model = auto_arima(...) print(model.seasonal_order) </code></pre> <p>If it shows as <code>(0, 0, 0, 0)</code>, then no seasonality adjustment will be done.</p>
python|pandas|arima|pmdarima
2
7,271
54,172,932
How to append multiple CSV files and add an additional column indicating file name in Python?
<p>I have over 20 CSV files in a single folder. All files have the same structure, they just represent different days. </p> <p>Example:</p> <p>Day01.csv</p> <p>Day02.csv</p> <p>Day03.csv</p> <p>Day04.csv (and so on...)</p> <p>The files contain just two numeric columns: x and y. I would like to append all of these csv files together into one large file and add a column for the file name (day). I have explored similar examples to generate the following code but this code adds each y to a separate column (Y1, Y2, Y3, Y4...and so on). I would like to simply have this appended file as three columns: x, y, file name. How can I modify the code to do the proper append? </p> <p>I have tried the code from this example: <a href="https://stackoverflow.com/questions/42756696/read-multiple-csv-files-and-add-filename-as-new-column-in-pandas">Read multiple csv files and Add filename as new column in pandas</a></p> <pre><code>import pandas as pd import os os.chdir('C:....path to my folder') files = os.listdir() df = pd.concat([pd.read_csv(fp).assign(New=os.path.basename(fp)) for fp in files]) </code></pre> <p>However, this code does not append all Y values under one column. (all other aspects seem to work, however). Can someone help with the code so that all Y values are under a single column?</p>
<p>The following should work by creating the <code>filename</code> column before appending the <code>dataframe</code> to your list.</p> <pre><code>import os import pandas as pd file_list = [] for file in os.listdir(): if file.endswith('.csv'): df = pd.read_csv(file,sep=";") df['filename'] = file file_list.append(df) all_days = pd.concat(file_list, ignore_index=True) all_days.to_csv("all.txt") </code></pre>
python|pandas|csv|append
7
7,272
38,356,555
Pandas: converting .xlsx file to .csv results in a zip file
<p>I am using pandas to convert an .xlsx file to .csv. The problem is that anytime I run the program, the resulting file becomes a zip file instead of a csv file. This is my code:</p> <pre><code> def exl2csv(x,y): exlfilename = str(x) exlsheetname = str(y) workbook = xlrd.open_workbook(exlfilename) worksheet = workbook.sheet_by_name(exlsheetname) csvfileloc = os.path.join('uploads', 'frmxl2csv-' + get_random_id()) dataframe = pd.DataFrame(worksheet) dataframe.to_csv(csvfileloc, sep='\t', encoding='utf-8') return csvfileloc </code></pre> <p>Thanks for helping.</p>
<p>You could use pandas to do the whole thing if you'd like.</p> <p><code>pandas.read_excel()</code> will allow you to specify the sheet you want to read so you could do:</p> <pre><code>def exl2csv(x,y): exlfilename = str(x) exlsheetname = str(y) df = pandas.read_excel(exlfilename, exlsheetname) csvfileloc = os.path.join('uploads', 'frmxl2csv-' + get_random_id()+'.csv') df.to_csv(csvfileloc, sep='\t', encoding='utf-8') return csvfileloc </code></pre> <p>Make sure you pass in '.csv' at the end of the filename for the csv.</p>
python|excel|csv|pandas
0
7,273
66,265,235
Split a string into columns of a table Python
<p>I have a file containing many strings, all of the same format. These strings consist of numbers, all of which are used for providing information about a given problem. I am using Pandas to store my data currently, but not in the format I require.</p> <p>For example, the format of the strings is as follows:</p> <pre><code>10010010000000000000000002 </code></pre> <p>which I want to be split as such:</p> <pre><code>1001 0010 00000000 00000000 0 2 </code></pre> <p>So the 1st 4 bits will become a column, the 2nd 4 bits, the 3rd 8 bits, the 4th 8 bits, and the last 2 bits each being their own column.</p> <p>So the table will consist of 6 columns.</p> <p>Thanks</p>
<p>Try <code>read_fwf</code> (read fixed-width formatted lines), passing the column widths that match your layout (4, 4, 8, 8, 1, 1):</p> <pre><code>pd.read_fwf('file.txt', widths=[4,4,8,8,1,1], dtype='str', header=None) </code></pre>
python|pandas|dataframe
1
7,274
66,288,251
Combining a list of tuple dataframes in python
<p>I have a large dataset where every two rows need to be grouped together and combined into one longer row, basically duplicating the headers and adding the 2nd row to the 1st. Here is a small sample:</p> <pre><code>df = pd.DataFrame({'ID' : [1,1,2,2],'Var1': ['A', 2, 'C', 7], 'Var2': ['B', 5, 'D', 9]}) print(df) ID Var1 Var2 1 A B 1 2 5 2 C D 2 7 9 </code></pre> <p>I will have to group the rows by 'ID', so I ran:</p> <pre><code>grouped = df.groupby(['ID']) grp_lst = list(grouped) </code></pre> <p>This resulted in a list of tuples grouped by id where element 1 is the grouped dataframe I would like to combine.</p> <p>The desired result is a dataframe that looks something like this:</p> <pre><code>ID Var1 Var2 ID.1 Var1.1 Var2.1 1 A B 1 2 5 2 C D 2 7 9 </code></pre> <p>I have to do this over a large data set, where the &quot;ID&quot; is used to group the rows and then I want to basically append the bottom row to the end of the top one.</p> <p>Any help would be appreciated and I assume there is a much easier way to do this than I am doing.</p> <p>Thanks in advance!</p>
<p>Let us try:</p> <pre><code>i = df.groupby('ID').cumcount().astype(str) df_out = df.set_index([df['ID'].values, i]).stack().unstack([2, 1]) df_out.columns = df_out.columns.map('.'.join) </code></pre> <p><strong>Details:</strong></p> <p><code>group</code> the dataframe on <code>ID</code> and use <code>cumcount</code> to create sequential counter to uniquely identify the rows per <code>ID</code>:</p> <pre><code>&gt;&gt;&gt; i 0 0 1 1 2 0 3 1 dtype: object </code></pre> <p>Create multilevel index in the dataframe with the first level set to <code>ID</code> values and second level set to the above sequential counter, then use <code>stack</code> followed by <code>unstack</code> to reshape the dataframe in the desired format:</p> <pre><code>&gt;&gt;&gt; df_out ID Var1 Var2 ID Var1 Var2 #---&gt; Level 0 columns 0 0 0 1 1 1 #---&gt; Level 1 columns 1 1 A B 1 2 5 2 2 C D 2 7 9 </code></pre> <p>Finally flatten the multilevel columns using <code>Index.map</code> with <code>join</code>:</p> <pre><code>&gt;&gt;&gt; df_out ID.0 Var1.0 Var2.0 ID.1 Var1.1 Var2.1 1 1 A B 1 2 5 2 2 C D 2 7 9 </code></pre>
python|pandas|dataframe|pandas-groupby
1
7,275
66,030,740
Unexpected amplitude in numpy fft
<p>I am having an issue with numpy fft not giving me the expected amplitude in the fft plot. This only happens for certain periods as input.</p> <p>I am using a clean sine signal with a period of 25 points over 240 datapoints.</p> <p>The np.fft.rfft gives a peak of 24.</p> <p><img src="https://i.stack.imgur.com/YKIyl.png" alt="enter image description here" /></p> <p>I am wondering what may cause this. I would think clean signal should produce a dirac-delta function like result around 25. I get this type of result for certain periods, but not all. Is there a need for more repetitions of this period in order to specify the period accurately? this does not make sense to me. The fft is done in the following way, where y=my sine datapoints with period 25:</p> <pre><code>fft = np.fft.rfft(y) fft = abs(fft) x=np.fft.rfftfreq(len(y),d=1./1) x = 1/x # to convert from freq to periods. T = 1/f plt.plot(x,fft) </code></pre>
<p>Recall that an FFT is technically computed over an infinite periodic extension of your signal. Therefore, if your signal doesn’t contain an integer number of periods, the periodic extension will contain a discontinuity (in phase, and usually also in amplitude) at the period boundaries. This will manifest as a “smearing” of your signal across multiple adjacent frequency bins. Additionally, note that individual bins of the FFT represent particular frequencies (use fftfreq for a list); any frequency that isn’t representable in a single bin will need to be reconstructed from nearby bins.</p> <p>You can improve your FFT result by applying windowing, e.g. by multiplying your signal with a window function (typically something like a Hamming, Hann or raised cosine filter), which will reduce the amplitude discontinuity, and thus sharpen your peak, at the expense of some accuracy at the edges of your signal.</p>
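<p>A rough sketch of the windowing idea, applied to the code from the question (it reuses the signal <code>y</code> defined there; here a Hann window via <code>np.hanning</code>, but other windows work similarly):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

window = np.hanning(len(y))               # taper the signal to zero at both ends
fft = np.abs(np.fft.rfft(y * window))

freqs = np.fft.rfftfreq(len(y), d=1.0)
periods = 1 / freqs[1:]                   # skip the zero-frequency bin before inverting
plt.plot(periods, fft[1:])
plt.show()
</code></pre> <p>The peak will be slightly wider at its base (that is the accuracy trade-off mentioned above) but far less smeared across distant bins.</p>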
python|numpy|fft
6
7,276
46,316,854
How to parse string representation back to numpy array?
<p>I used opencv to read an image and save it to redis like this:</p> <pre><code>frame=cv2.imread('/path/to/image.png') rd.set('frame', frame) </code></pre> <p>Then, when reading it back, I get a string representation like this:</p> <pre><code>[[[ 38 45 51] [ 38 45 51] [ 38 45 51] ..., [235 217 222]]] </code></pre> <p>then I tried to get it back like this:</p> <pre><code>frameString=rd.get('frame') mat=np.array(frameString) </code></pre> <p>but </p> <pre><code> print mat.shape </code></pre> <p>output</p> <pre><code> () </code></pre> <p>then I tried</p> <pre><code> mat=eval(frameString) </code></pre> <p>this gives me an error:</p> <pre><code> exec exp in global_vars, local_vars File "&lt;console&gt;", line 1, in &lt;module&gt; File "&lt;string&gt;", line 1 [[[ 38 45 51] ^ SyntaxError: invalid syntax </code></pre> <p>The question is: how do I convert this string representation back to a numpy array correctly?</p>
<p>The easiest thing to do would be to encode it as JSON and save that to redis. </p> <pre><code>import json frame = cv2.imread('/path/to/image.png') rd.set('frame', json.dumps(frame.tolist())) frameString = json.loads(rd.get('frame')) mat = np.array(frameString) </code></pre> <p>You can find faster and more compact serialization formats though. </p>
opencv|numpy|redis
0
7,277
46,533,197
Understanding variable scope and changes in Python
<p>I'm using Python 3.6 and Pandas 0.20.3.</p> <p>I'm sure this must be addressed somewhere, but I can't seem to find it. I alter a dataframe inside a function by adding columns; then I restore the dataframe to the original columns. I don't return the dataframe. The added columns stay. I could understand if I add columns inside the function and they are not permanent AND updating the dataframe does not work. I'd also understand if adding columns altered the dataframe and assigning the dataframe also stuck. Here is the code:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame(np.random.randn(10, 5)) df </code></pre> <p>which gives</p> <pre><code> 0 1 2 3 4 0 0.406779 -0.481733 -1.187696 -0.210456 -0.608194 1 0.732978 -0.079787 -0.051720 1.097441 0.089850 2 1.859737 -1.422845 -1.148805 0.254504 1.207134 3 0.074400 -1.352875 -1.341630 -1.371050 0.005505 4 -0.102024 -0.905506 -0.165681 2.424180 0.761963 5 0.400507 -0.069214 0.228971 -0.079805 -1.059972 6 1.284812 0.843705 -0.885566 1.087703 -1.006714 7 0.135243 0.055807 -1.217794 0.018104 -1.571214 8 -0.524320 -0.201561 1.535369 -0.840925 0.215584 9 -0.495721 0.284237 0.235668 -1.412262 -0.002418 </code></pre> <p>Now, I create a function:</p> <pre><code>def mess_around(df): cols = df.columns df['extra']='hi' df = df[cols] </code></pre> <p>then run it and display dataframe:</p> <pre><code>mess_around(df) df </code></pre> <p>which gives:</p> <pre><code> 0 1 2 3 4 extra 0 0.406779 -0.481733 -1.187696 -0.210456 -0.608194 hi 1 0.732978 -0.079787 -0.051720 1.097441 0.089850 hi 2 1.859737 -1.422845 -1.148805 0.254504 1.207134 hi 3 0.074400 -1.352875 -1.341630 -1.371050 0.005505 hi 4 -0.102024 -0.905506 -0.165681 2.424180 0.761963 hi 5 0.400507 -0.069214 0.228971 -0.079805 -1.059972 hi 6 1.284812 0.843705 -0.885566 1.087703 -1.006714 hi 7 0.135243 0.055807 -1.217794 0.018104 -1.571214 hi 8 -0.524320 -0.201561 1.535369 -0.840925 0.215584 hi 9 -0.495721 0.284237 0.235668 -1.412262 -0.002418 hi </code></pre> <p>I know I can solve the problem by return ts. So I can fix the problem. I want to understand where I am going wrong. I suspect that the scope of the variable ts is inside the function; it is given a pointer but that does not change because of scope. Yet the column assignment is using the pointer that is passed in and therefore impacts the dataframe "directly". Is that correct?</p> <p>EDIT: For those that might want to address the dataframe in place, I've added:</p> <pre><code>for c in ts.columns: if c not in cols: del ts[c] </code></pre> <p>I'm guessing if I return the new dataframe, then there will be a potentially large dataframe that will have to be dealt with by garbage collection.</p>
<p>To understand what happens, you should know the difference between passing attributes to functions by value versus passing them by reference:</p> <ul> <li><a href="https://stackoverflow.com/questions/986006/how-do-i-pass-a-variable-by-reference">How do I pass a variable by reference?</a></li> </ul> <hr> <p>You pass a variable <code>df</code> to your function <code>messing_around</code>. The function modifies the <em>original</em> dataframe in-place by adding a column. </p> <p>This subsequent line of code seems to be the cause for confusion here:</p> <pre><code>df = df[cols] </code></pre> <p>What happens here is that the variable <code>df</code> originally held a reference to your dataframe. But, the reassignment causes the <em>variable</em> to point to a different object - your original dataframe is not changed.</p> <p>Here's a simpler example:</p> <pre><code>def foo(l): l.insert(0, np.nan) # original modified l = [4, 5, 6] # reassignment - no change to the original, # but the variable l points to something different lst = [1, 2, 3] foo(lst) print(lst) [nan, 1, 2, 3] # notice here that the insert modifies the original, # but not the reassignment </code></pre>
python|function|pandas
1
7,278
58,390,827
'numpy.int64' object has no attribute 'loc'
<p>I have a csv file with a date and two input values. I need to read the date together with the value contained in the first column. When I ran the code below it gave me this error: "'numpy.int64' object has no attribute 'loc'".</p> <p>Here is my code:</p> <pre><code>data = pd.read_csv("data6.csv") data['date']= pd.to_datetime(data['date'] + " " + data['time'].str.strip(), format='%d/%m/%Y %H:%M:%S') filtered = data['X'] current_X = filtered.iloc[0] current_time = filtered.iloc[0].loc['date'] </code></pre> <p>error:</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-24-b3a8e880770f&gt; in &lt;module&gt;() 1 filtered = data['x'] 2 current_x = filtered.iloc[0] ----&gt; 3 current_time = filtered.iloc[0].loc['date'] AttributeError: 'numpy.int64' object has no attribute 'loc'</code></pre> <p>my csv file:</p> <pre><code>date time x x1 8/6/2018 6:15:00 141 0 8/6/2018 6:45:00 0 20 8/6/2018 7:45:00 0 0 8/6/2018 9:00:00 0 0 8/6/2018 9:25:00 95 30 8/6/2018 9:30:00 0 0 8/6/2018 11:00:00 149 0 8/6/2018 11:30:00 0 0 8/6/2018 13:30:00 0 40 8/6/2018 13:50:00 85 0 8/6/2018 15:00:00 0 0 8/6/2018 15:25:00 0 0</code></pre>
<p>There are 2 possible solutions - select by positions with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>Index.get_loc</code></a> for position of <code>date</code> column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>:</p> <pre><code>current_time = data.iloc[0, data.columns.get_loc('date')] </code></pre> <p>Or get label of first index value and select by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p> <pre><code>current_time = data.loc[data.index[0], 'date'] </code></pre> <p>If there is default RangeIndex:</p> <pre><code>current_time = data.loc[0, 'date'] </code></pre> <p>Your solution not working, because:</p> <pre><code>#returned Series filtered = data['X'] #returned first value of Series - scalar current_X = filtered.iloc[0] #error current_time = filtered.iloc[0].loc['date'] </code></pre>
python-3.x|pandas
1
7,279
58,227,305
KeyError: "None of [Index(['', ''], dtype='object')] are in the [columns]" when trying to select columns on a dask dataframe
<p>I am creating a dask dataframe from a pandas dataframe using the from_pandas() function. When I try to select two columns from the dask dataframe using the square brackets [[ ]], I am getting a KeyError. </p> <p>According to dask documentation, the dask dataframe supports the square bracket column selection like the pandas dataframe. </p> <pre><code># data is a pandas dataframe dask_df = ddf.from_pandas(data, 30) data = data[dask_df[['length', 'country']].apply( lambda x: myfunc(x, countries), meta=('Boolean'), axis=1 ).compute()].reset_index(drop=True) </code></pre> <p>This is the error I am getting: </p> <pre><code>KeyError: "None of [Index(['length', 'country'], dtype='object')] are in the [columns]" </code></pre> <p>I was thinking that this might be something to do with providing the correct meta for the apply, but from the error it seems like the dask dataframe is not able to select the two columns, which should happen before the apply. </p> <p>This works perfectly with if I replace "dask_df" with "data"(pandas df) in the apply line. </p> <p>Is the index not being preserved when I am doing the from_pandas?</p>
<p>Try loading less data at once.</p> <p>I had the same issue, but when I loaded only a subset of my data, it worked.</p> <p>With the large dataset, I was able to run <code>print(dask_df.columns)</code> and see e.g.</p> <p><code>Index(['apple', 'orange', 'pear'], dtype='object', name='fruit')</code>.</p> <p>But when I ran <code>dask_df.compute</code> I would get <code>KeyError: &quot;None of [Index(['apple', 'orange', 'pear'], dtype='object')] are in the [columns]&quot;</code>.</p> <p>I knew that the data set was too big for my memory, and was trying dask hoping it would just figure it out for me =) I guess I have more work to do, but in any case I am glad to be in dask!</p>
pandas|dask
0
7,280
58,512,790
Trouble finding numpy.i
<p>I'm wanting to wrap some c++ code in python using swig, and I need to be able to use numpy.i to convert numpy arrays to vectors.</p> <p>This has been quite the frustrating process, as I haven't been able to find any useful info online as to where I actually get numpy.i from. </p> <p>This is what I currently have running: </p> <p>numpy 1.17.3 </p> <p>swig 2.0.12</p> <p>python 3.7.3</p> <p>Debian 4.9.2</p> <p>From reading <a href="https://docs.scipy.org/doc/numpy/reference/swig.interface-file.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/swig.interface-file.html</a> I'm told that numpy.i should be located in tools/swig/numpy.i, though the only place on my machine that I can find numpy.i is in a python 2.7 folder which I've upgraded from. My working version of python (3.7.3) holds no such file. </p> <pre><code>$ locate numpy.i /usr/lib/python2.7/dist-packages/instant/swig/numpy.i </code></pre> <p><strong>What I've tried:</strong></p> <ul> <li><p>copying the numpy.i (as described above) into my working folder. This is at least recognized by my test.i file when I call %include "numpy.i", but it doesn't seem to allow usage of numpy.i calls. </p></li> <li><p>Copying this code <a href="https://github.com/numpy/numpy/blob/master/tools/swig/numpy.i" rel="nofollow noreferrer">https://github.com/numpy/numpy/blob/master/tools/swig/numpy.i</a> into a new file called numpy.i and putting that in my folder, but I get lots of errors when I try to run it. </p></li> </ul> <p><strong>Is there a standard way to get the proper numpy.i version? Where would I download it from, and where should I put it?</strong></p> <p>I've included some code below as reference:</p> <p>test.i:</p> <pre><code>%module test %{ #define SWIG_FILE_WITH_INIT #include "test.h" %} %include "numpy.i" //this doesn't seem to do anything %init %{ import_array(); %} %apply (int DIM1) {(char x)}; //this doesn't seem to do anything %include "test.h" </code></pre> <p>test.h:</p> <pre><code>#include &lt;iostream&gt; void char_print(char x); </code></pre> <p>test.cpp:</p> <pre><code>#include "test.h" void char_print(char x) { std::cout &lt;&lt; x &lt;&lt; std::endl; return; } </code></pre> <p>tester.py:</p> <pre><code>import test test.char_print(5) #nothing is printed, since this isn't converted properly to a char. </code></pre> <p>This is just a simple example, but I've tried using numpy.i in many different ways (including copying and pasting other people's code that works for them) but it consistently doesn't change anything whether I have it in my test.i file or not. </p> <p><strong>Where/how do I get numpy.i?</strong></p>
<p><strong>Problem:</strong> The numpy.i file I copied over from the python2.7 package isn't compatible, and the compatible version isn't included in the installation package when you go through anaconda (still not sure why they'd do that).</p> <p><strong>Answer:</strong> Find which version of numpy you're running, then go here (<a href="https://github.com/numpy/numpy/releases" rel="nofollow noreferrer">https://github.com/numpy/numpy/releases</a>) and download the numpy-[your_version].zip file, then specifically copy the numpy.i file, found in numpy-[your_version]/tools/swig/. Now paste that numpy.i into your project working directory.</p>
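<p>If you are unsure which numpy version you have installed, one quick way to check from Python is:</p> <pre><code>import numpy
print(numpy.__version__)   # e.g. 1.17.3 -&gt; download numpy-1.17.3.zip and take tools/swig/numpy.i
</code></pre>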
python|c++|numpy|wrapper|swig
0
7,281
58,531,223
Efficient Way to Slice Strings in Pandas
<p>I have a dataset that has over 100 million rows that I am trying to manipulate in pandas. I am trying to slice the string in <code>a</code> based on the values in <code>b</code> and <code>c</code> as the start and end points respectively.</p> <p><a href="https://i.stack.imgur.com/6s8SR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6s8SR.png" alt="enter image description here"></a></p> <p>I can do this with list comprehension like so:</p> <pre><code>df['d'] = [a[1]['a'][a[1]['b']:a[1]['c']] for a in df.iterrows()] </code></pre> <p>This is really slow. I can do the same thing with an apply like this:</p> <pre><code>df['d'] = df.apply(lambda x: x['a'][x['b']:x['c']],axis=1) </code></pre> <p>This is also quite slow. My question is, what is the most efficient way to slice the strings in <code>a</code> using the values in <code>b</code> and <code>c</code> as the start and end for the slice?</p>
<p>Iterating over <code>df.iterrows()</code> is really slow because for each row it creates a separate <code>pd.Series</code> object. For 100 million rows this means 100 million such objects are being created (and discarded). It's better to <code>zip</code> the columns and use this in a comprehension like so:</p> <pre><code>df.assign(d=[a[b:c] for a, b, c in zip(df['a'], df['b'], df['c'])]) </code></pre> <p>This will only create three <code>Series</code> objects and then iterate over them which saves a lot of overhead.</p> <p>You can also take a look at <a href="https://numba.pydata.org/numba-doc/dev/reference/pysupported.html#str" rel="nofollow noreferrer">Numba</a> to write your own function that loops over the data frame.</p>
python|pandas|dataframe|text-processing
3
7,282
68,939,178
TypeError: descriptor 'strftime' for 'datetime.date' objects doesn't apply to a 'str' object
<p>I have a dataframe</p> <pre><code>timestamp 2020-08-26 2020-08-27 2020-08-28 </code></pre> <p>I want it to look like this</p> <pre><code>timestamp 2020-08-26 00:00:00 2020-08-27 00:00:00 2020-08-28 00:00:00 </code></pre> <p>I tried</p> <pre><code>df['timestamp'] = df['timestamp'].apply(lambda x: dt.datetime.strftime(x, '%Y-%m-%d %H:%M:%S')) </code></pre> <p>but it raises <code>TypeError: descriptor 'strftime' for 'datetime.date' objects doesn't apply to a 'str' object</code>.</p> <p>Appreciate your help</p>
<p>try this,</p> <pre><code>df['timestamp'] = pd.to_datetime(df['timestamp']).dt.strftime('%Y-%m-%d %H:%M:%S') </code></pre>
python|python-3.x|pandas|dataframe|datetime
2
7,283
68,964,815
Pandas grouper date_time as per the market hours (Indian Stock Exchange)
<p>Below data is in the interval of 5 mins</p> <p>Dataframe names as df:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>script_id</th> <th>date_time</th> <th>open</th> <th>high</th> <th>low</th> <th>close</th> <th>volume</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>201</td> <td>2019-02-04 14:55:00</td> <td>1408.05</td> <td>1408.05</td> <td>1407</td> <td>1408</td> <td>2384</td> </tr> <tr> <td>1</td> <td>201</td> <td>2019-02-04 15:00:00</td> <td>1408</td> <td>1410.6</td> <td>1407.2</td> <td>1408.85</td> <td>12621</td> </tr> <tr> <td>2</td> <td>201</td> <td>2019-02-04 15:05:00</td> <td>1408.85</td> <td>1410.45</td> <td>1407.05</td> <td>1407.05</td> <td>3880</td> </tr> <tr> <td>3</td> <td>201</td> <td>2019-02-04 15:10:00</td> <td>1407.05</td> <td>1409.4</td> <td>1404.85</td> <td>1404.85</td> <td>12992</td> </tr> <tr> <td>4</td> <td>201</td> <td>2019-02-04 15:15:00</td> <td>1404.85</td> <td>1408.7</td> <td>1403.5</td> <td>1404.25</td> <td>30803</td> </tr> <tr> <td>5</td> <td>201</td> <td>2019-02-04 15:20:00</td> <td>1404.25</td> <td>1405</td> <td>1402.7</td> <td>1404.8</td> <td>14624</td> </tr> <tr> <td>6</td> <td>201</td> <td>2019-02-04 15:25:00</td> <td>1404.8</td> <td>1405</td> <td>1402.05</td> <td>1403.8</td> <td>8407</td> </tr> <tr> <td>7</td> <td>201</td> <td>2019-02-05 09:15:00</td> <td>1400</td> <td>1416.05</td> <td>1400</td> <td>1410.75</td> <td>17473</td> </tr> </tbody> </table> </div> <p>trying to group it in 10 mins by executing below code:</p> <pre><code>df_f = df.groupby(['script_id', pd.Grouper(key='date_time', freq='10T', origin='start')])\ .agg(open=pd.NamedAgg(column='open', aggfunc='first'), high=pd.NamedAgg(column='high', aggfunc='max'), low=pd.NamedAgg(column='low', aggfunc='min'), close=pd.NamedAgg(column='close', aggfunc='last'), volume=pd.NamedAgg(column='volume', aggfunc='sum'))\ .reset_index() print(df_f) </code></pre> <p>Result:</p> <p><a href="https://i.stack.imgur.com/EwL65.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EwL65.png" alt="result" /></a></p> <p><strong>Expected Result:-</strong> 0,1,2 are as expected below should be for 3 and there should not be 4.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>script_id</th> <th>date_time</th> <th>open</th> <th>high</th> <th>low</th> <th>close</th> <th>volume</th> </tr> </thead> <tbody> <tr> <td>3</td> <td>201</td> <td>2019-02-04 15:25:00</td> <td>1404.8 (value of 6)</td> <td>1416.05 (highest among 6 &amp; 7)</td> <td>400 (lowest among 6 &amp; 7)</td> <td>1410.75 (value of 7)</td> <td>25880 (sum of 6 &amp; 7)</td> </tr> </tbody> </table> </div> <p>How can we combine last two 5min tf to one 10min tf?</p> <p>Note:- There are possibilities to have holiday gap as well between two days</p> <p><a href="https://i.stack.imgur.com/yoP6E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yoP6E.png" alt="enter image description here" /></a></p>
<p><strong>Maybe:</strong></p> <pre><code>a = {'script_id': 'first', 'date_time': 'first', 'open': 'first', 'high':'max', 'low':'min', 'close':'last', 'volume':'sum'} print(df.groupby(df.index // 2).agg(a)) script_id date_time open high low close volume 0 201 2019-02-04 14:55:00 1408.05 1410.60 1407.00 1408.85 15005 1 201 2019-02-04 15:05:00 1408.85 1410.45 1404.85 1404.85 16872 2 201 2019-02-04 15:15:00 1404.85 1408.70 1402.70 1404.80 45427 3 201 2019-02-04 15:25:00 1404.80 1416.05 1400.00 1410.75 25880 </code></pre>
python|pandas|dataframe|pandas-groupby
0
7,284
44,804,784
Adding a fixed date to pandas dataframe
<p>I am reading some data and creating a dataframe with from_records in which the data contains a text timestamp HH:MM:SS:000000. I can convert to timeseries with <code>pd.to_datetime(data.timestamp, format='%H:%M:%S:%f')</code>. I know the date of the file from the filename. What is a pythonic and performant way to insert the date (and eventually set it as the index)?</p> <p>Data looks like:</p> <pre><code>12:00:00:000000 100 12:00:01:123456 200 12:00:02:000000 300 </code></pre> <p>Without the date inserted I get a dataframe that looks like:</p> <pre><code>1900-01-01 12:00:00.000000 100 1900-01-01 12:00:01.123456 200 1900-01-01 12:00:02.000000 300 </code></pre> <p>And what I'd want is (given <code>date = datetime.date(2017, 6, 28)</code>:</p> <pre><code>2017-06-28 12:00:00.000000 100 2017-06-28 12:00:01.123456 200 2017-06-28 12:00:02.000000 300 </code></pre> <p><code>pd.to_datetime</code> <code>origin</code> arg sounded like what I want, but it requires the input as a numeric timestamp rather than a string.</p>
<p>You can create string by <code>strftime</code> from date and add it to column <code>time</code>:</p> <pre><code>df['datetime'] = pd.to_datetime(date.strftime('%Y-%m-%d ') + df['time'], format='%Y-%m-%d %H:%M:%S:%f') print (df) time A datetime 0 12:00:00:000000 100 2017-06-28 12:00:00.000000 1 12:00:01:123456 200 2017-06-28 12:00:01.123456 2 12:00:02:000000 300 2017-06-28 12:00:02.000000 </code></pre> <p>And for index:</p> <pre><code>df.index = pd.to_datetime(date.strftime('%Y-%m-%d ') + df['time'], format='%Y-%m-%d %H:%M:%S:%f') print (df) time A time 2017-06-28 12:00:00.000000 12:00:00:000000 100 2017-06-28 12:00:01.123456 12:00:01:123456 200 2017-06-28 12:00:02.000000 12:00:02:000000 300 </code></pre> <p>Another solution:</p> <pre><code>date = datetime.date(2017, 6, 28) days = date - datetime.date(1900, 1, 1) df['datetime'] = pd.to_datetime(df['time'],format='%H:%M:%S:%f') + pd.to_timedelta(days, unit='d') print (df) time A datetime 0 12:00:00:000000 100 2017-06-28 12:00:00.000000 1 12:00:01:123456 200 2017-06-28 12:00:01.123456 2 12:00:02:000000 300 2017-06-28 12:00:02.000000 </code></pre>
python|pandas
2
7,285
44,425,862
TensorFlow Convolution code Optimization
<p>I am using C++ version of TensorFLow and have built 'TensorFlow for Android' successfully using below command 'bazel build -c opt //tensorflow/examples/android:tensorflow_demo' as described in <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android#bazel" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android#bazel</a></p> <p>I am trying to optimize the convolution code. Below are the issues faced</p> <ol> <li>Unable to find the exact location of convolution code. I am able to debug the code till below function in</li> </ol> <p>'return choose( Cond(), kernel.reshape(kernel_dims) .contract(input .extract_image_patches( kernelRows, kernelCols, row_stride, col_stride, row_in_stride, col_in_stride, padding_type) .reshape(pre_contract_dims), contract_dims) .reshape(post_contract_dims), input .extract_image_patches(kernelRows, kernelCols, row_stride, col_stride, row_in_stride, col_in_stride, padding_type) .reshape(pre_contract_dims) .contract(kernel.reshape(kernel_dims), contract_dims) .reshape(post_contract_dims));'</p> <p>as present in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/eigen_spatial_convolutions.h" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/eigen_spatial_convolutions.h</a></p> <p>I have few questions related to the above function.<br></p> <p>1.1 Is the above function really performing convolution ? If so where is the code ?<br><br> 1.2 Is contraction (contract function ) same as convolution ? If both convolution and contraction are same, why is the contract operation being performed to both input and kernel matrix ?<br><br> 1.3 Where are the definitions of functions - choose, reshape, contract ,extract image patches etc ?<br><br></p> <p>2.Unable to extract data (matrices ) from input and kernel matrix .This is in reference to the same page in the above link<br><br> 2.1 I have found a line of code 'kern(kernel);' at line no 946 in the above page. Can I know the location definition of the above function ?<br></p> <p>2.2 I am unable to extract input and kernel matrices from the corresponding 4d tensors(input and kernel) as a float array, as i would like to try optimizing the convolution code using parallel processing. I couldn't find any method to convert Tensor Matrices from Tensor 4D to an array.</p> <p>Please help me in answering the above questions</p>
<p>1.1) Yes it is, the code is what comes after the Cond() statement:</p> <pre><code>// If this condition is met, the first argument is chosen, if not, the second one // is chosen, this condition checks if the input is ColMajor or RowMajor, all of // the tests I've done result in a RowMajor but I don't know what determines this // exactly return choose( Cond(), // ColMajor kernel.reshape(kernel_dims) .contract(input .extract_image_patches( kernelRows, kernelCols, row_stride, col_stride, row_in_stride, col_in_stride, padding_type) .reshape(pre_contract_dims), contract_dims) .reshape(post_contract_dims), // RowMajor input.extract_image_patches(kernelRows, kernelCols, row_stride, col_stride, row_in_stride, col_in_stride, padding_type) .reshape(pre_contract_dims) .contract(kernel.reshape(kernel_dims),contract_dims).reshape(post_contract_dims)); </code></pre> <p>1.2) No, contraction is an abstraction of matrix multiplication to Tensors of N dimensions. It is only being applied to one of them, depending on the condition.</p> <p>1.3) These are all Eigen functions; Eigen has rather unhelpful documentation of their Tensor operations. I found <a href="https://github.com/ROCmSoftwarePlatform/eigen-upstream/tree/master/unsupported/Eigen/CXX11/src/Tensor" rel="nofollow noreferrer">this</a> on their wiki, which can help you to understand what they do; it is not thorough but it can help you wrap your head around the idea of the operation.</p> <p>2.1) I don't know where it is either.</p> <p>2.2) I'm not sure if this can be done directly, Eigen's functions can be rather unintuitive, though if you know the shape of the 4D tensor, you can create a matrix and just assign every element to that matrix (which I reckon wouldn't be very efficient).</p> <p>I just realized this was posted a year ago but I had already written my answer; it might be useful to someone else so I'll just leave it here.</p>
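<p>To make the idea of a contraction concrete, here is a small NumPy sketch (only an illustration with made-up shapes; NumPy's <code>tensordot</code> plays the same role as Eigen's <code>contract</code>):</p> <pre><code># a contraction sums over one paired axis of each tensor, which is exactly
# what an ordinary matrix multiplication does for 2-D arrays
import numpy as np

a = np.random.rand(2, 3, 4)   # rank-3 tensor
b = np.random.rand(4, 5)      # plain matrix

# contract the last axis of `a` with the first axis of `b`
c = np.tensordot(a, b, axes=([2], [0]))
print(c.shape)  # (2, 3, 5) -- behaves like a batched matrix multiplication
</code></pre>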
tensorflow|convolution
2
7,286
60,876,793
converting sequence of nucleotide into 2D array of integers
<p>I am trying to convert nucleotide to integer using the following mapping:</p> <pre><code>A -&gt; 0 C -&gt; 1 G -&gt; 2 T -&gt; 3 </code></pre> <p>The sequence of nucleotide is saved in a pandas dataframe and it looks like:</p> <pre><code> 0 0 GGATAATA 1 CGATAACC </code></pre> <p>I have used the df.apply() method to do the task. Here is the code:</p> <pre><code>import pandas as pd a = ["GGATAATA","CGATAACC"] d = dict(zip('A C G T'.split(), range(4))) df = pd.DataFrame(a) mapping = df[0].apply(lambda s: np.array([d[i] for i in s])) </code></pre> <p>It returns the following numpy array which is one dimensional:</p> <pre><code>print(mapping.values) array([array([2, 2, 0, 3, 0, 0, 3, 0]), array([1, 2, 0, 3, 0, 0, 1, 1])], dtype=object) </code></pre> <p>However, the expected output should be two dimensional array:</p> <pre><code>[[2,2,0,3,0,0,3,0], [1,2,0,3,0,0,1,1]] </code></pre>
<p>Use <code>map</code>:</p> <pre><code>list(map(lambda x: list(map(lambda c: d[c], list(x))), df[0])) </code></pre> <p><strong>Output</strong></p> <pre><code>[[2, 2, 0, 3, 0, 0, 3, 0], [1, 2, 0, 3, 0, 0, 1, 1]] </code></pre> <p>or</p> <pre><code>df[0].agg(list).explode().replace(d).groupby(level=0).agg(list).tolist() </code></pre> <p>I think the first solution is faster:</p> <pre><code>%%timeit list(map(lambda x: list(map(lambda c: d[c], list(x))), df[0])) 11.7 µs ± 392 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></pre> <hr> <pre><code>%%timeit df[0].agg(list).explode().replace(d).groupby(level=0).agg(list).tolist() 5.02 ms ± 697 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre>
python-3.x|pandas|numpy
1
7,287
71,619,122
Flag subset of a dataframe based on another dataframe values
<p>I've encountered a problem which I didn't succeed to solve, for now. Your assistance on this one would be highly appreciated.</p> <p>I have 2 dataframes:</p> <pre><code>first_df: A B C D 1 1 a q zz 2 2 b w xx 3 3 c e yy 4 4 d r vv </code></pre> <pre><code>second_df: C1 C2 1 10 a 2 20 b 3 70 g 4 80 h </code></pre> <p>What I want to achieve is mark rows from first_df, based on values from second_df. Marking those rows should be based on comparing column values:</p> <pre><code>(first_df['A'] == second_df['C1'] * 10) &amp; (first_df['B'] == second_df['C2']) </code></pre> <p>Expected output should be like this:</p> <pre><code>compared_df: A B C D Match 1 1 a q zz True 2 2 b w xx True 3 3 c e yy False 4 4 d r vv False </code></pre> <p>Could you please point me in a right direction? Which pandas methods should I use to achieve my goal? I've tried many things, but I'm pandas beginner, so it's hard to assess if those attempts were correct.</p>
<p>First create a MultiIndex on both of the dataframes, then use <strong><a href="https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.MultiIndex.isin.html" rel="nofollow noreferrer"><code>MultiIndex.isin</code></a></strong> to test for the occurrence of the index values of the first dataframe in the index of the second dataframe in order to create the boolean flag:</p> <pre><code>i1 = first_df.set_index([first_df['A'] * 10, 'B']).index i2 = second_df.set_index(['C1', 'C2']).index first_df['Match'] = i1.isin(i2) </code></pre> <p>Result:</p> <pre><code>print(first_df) A B C D Match 1 1 a q zz True 2 2 b w xx True 3 3 c e yy False 4 4 d r vv False </code></pre>
python|pandas|dataframe
1
7,288
42,383,490
{"error": "Error loading the model"} when using /ml/v1beta1/ml.projects.predict
<p>Using the following API Explorer and body,I get the error {"error": "Error loading the model"}. I was going to start using <a href="https://developers.google.com/resources/api-libraries/documentation/ml/v1beta1/python/latest/ml_v1beta1.projects.html#predict" rel="nofollow noreferrer">https://developers.google.com/resources/api-libraries/documentation/ml/v1beta1/python/latest/ml_v1beta1.projects.html#predict</a>, but would like to verify everything is okay first. </p> <p>Is there a way to see the actual error? </p> <hr> <p><a href="https://developers.google.com/apis-explorer/?authuser=1#p/ml/v1beta1/ml.projects.predict" rel="nofollow noreferrer">https://developers.google.com/apis-explorer/?authuser=1#p/ml/v1beta1/ml.projects.predict</a>?</p> <p>POST <a href="https://ml.googleapis.com/v1beta1/projects/" rel="nofollow noreferrer">https://ml.googleapis.com/v1beta1/projects/</a>{project}/models/{model_name}/versions/v1:predict?key={YOUR_API_KEY}</p> <p>{ "httpBody": { "data": "[{\"placeholder_name\": [44, 158, 178, 156, 111, 101, 110, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"key\": 1}]" } }</p> <p>200</p> <p>cache-control: private content-encoding: gzip content-length: 53 content-type: text/html; charset=utf-8 date: Wed, 22 Feb 2017 05:25:14 GMT server: ESF vary: Origin, X-Origin, Referer</p> <p>{"error": "Error loading the model"} `</p>
<p>I added export.meta, export.index and a renamed export.data-00000-of-00001 -> export to cloud storage.</p> <p>It was an incorrect guess based on the documentation <a href="https://cloud.google.com/ml/docs/concepts/deployment-overview#deployment_location" rel="nofollow noreferrer">https://cloud.google.com/ml/docs/concepts/deployment-overview#deployment_location</a>.</p> <p>So it looks like you just need the files created by <code>saver.save(sess, os.path.join(FLAGS.model_dir, 'export'))</code>.</p>
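<p>For reference, a minimal sketch of the export step referred to above (TF 1.x-era <code>Saver</code> API; the variable and the path below are only placeholders):</p> <pre><code>import os
import tensorflow as tf

model_dir = '/tmp/model'                  # placeholder path
w = tf.Variable(tf.zeros([1]), name='w')  # placeholder variable so there is something to save

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # writes export.meta, export.index and export.data-00000-of-00001
    saver.save(sess, os.path.join(model_dir, 'export'))
</code></pre>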
tensorflow|google-cloud-ml|google-apis-explorer
0
7,289
43,242,246
Pandas dataframe + groupby = failed zooming for x-axis ticks
<p>I am trying to plot some <code>pandas</code> dataframe data but, when it is organised into daily/monthly/yearly sums using <code>groupby</code>, the resulting plot cannot be zoomed correctly.</p> <p>The zoom does work however the x-axis tickmarks don't update correctly. I can't work out the solution to this.</p> <p>Example code:</p> <pre><code>import datetime import pandas as pd import numpy as np arraya = np.random.rand(1,100)[0] arrayb = np.random.rand(1,100)[0] arrayc = np.random.rand(1,100)[0] arrayd = np.random.rand(1,100)[0] day_counts = {'A': arraya, 'B': arrayb, 'C': arrayc, 'D': arrayd} #prepare data df_days = pd.DataFrame(day_counts, index=pd.date_range('2012-01-01', periods=100)) #df_use = df_days.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum() df_use = df_days.groupby([lambda x: x.year, lambda x: x.month]).sum() #prepare percentages df_use_perc = df_use.divide(df_use.sum(axis=1), axis=0).multiply(100) #percentages my_colors = list(['orange', 'blue', 'purple', 'red']) #plot the main subfigure (relative event types) ax = df_use_perc.plot(kind='area', stacked=True, color=my_colors) </code></pre> <p>It is this line that causes the failure:</p> <pre><code>df_use = df_days.groupby([lambda x: x.year, lambda x: x.month]).sum() </code></pre> <p>I can plot it just using the dataframe <code>df_days</code>, without using the <code>groupby</code> function and it works okay but I need to be able to sum up the months etc.</p> <p>Plot: <a href="https://i.stack.imgur.com/RnmKH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RnmKH.png" alt="enter image description here"></a></p> <p>Plot after zooming in massively (the whole x-axis is probably only a few seconds wide): <a href="https://i.stack.imgur.com/hfNDi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hfNDi.png" alt="enter image description here"></a></p>
<p>IIUC you can do the following:</p> <pre><code>x = df_days.groupby(pd.TimeGrouper('MS')).sum() x.div(x.sum(1), 0).mul(100).plot(kind='area', stacked=True, color=my_colors) </code></pre> <p><a href="https://i.stack.imgur.com/Ypx0R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ypx0R.png" alt="enter image description here"></a></p> <p>after zooming:</p> <p><a href="https://i.stack.imgur.com/ID9uK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ID9uK.png" alt="enter image description here"></a></p> <p>Explanation:</p> <pre><code>In [35]: x Out[35]: A B C D 2012-01-01 14.739981 18.306502 11.659834 13.990243 2012-02-01 13.180681 12.487874 15.367421 16.877128 2012-03-01 14.528299 16.936493 16.467844 16.668185 2012-04-01 4.190121 3.110165 5.165066 3.086899 In [36]: x.div(x.sum(1), 0).mul(100) Out[36]: A B C D 2012-01-01 25.112171 31.188374 19.864594 23.834861 2012-02-01 22.759411 21.563123 26.535309 29.142158 2012-03-01 22.489341 26.217149 25.491695 25.801815 2012-04-01 26.942217 19.998167 33.211047 19.848569 </code></pre>
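<p>If you are on a newer pandas version where <code>pd.TimeGrouper</code> has been removed, <code>pd.Grouper</code> should work as a drop-in replacement (a sketch, assuming the same <code>df_days</code> and <code>my_colors</code> as above):</p> <pre><code># group by month start ('MS') on the DatetimeIndex, then plot the percentages
x = df_days.groupby(pd.Grouper(freq='MS')).sum()
x.div(x.sum(1), 0).mul(100).plot(kind='area', stacked=True, color=my_colors)
</code></pre>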
python|pandas|plot|zooming
2
7,290
72,437,202
Count and Find Min, Max of value occurs in a dataframe column
<p>I have a dataframe like this:</p> <pre><code>Date | DayName | A | B | C 2022-03-01 Tuesday 50 20 40 2022-03-02 Wednesday 10 10 20 2022-03-03 Thurday 64 1 9 2022-03-04 Friday 9 7 12 </code></pre> <p>I'd like to add rows like:</p> <pre><code>Date | DayName | A | B | C 2022-03-01 Tuesday 50 20 40 2022-03-02 Wednesday 10 10 20 2022-03-03 Thurday 64 1 9 2022-03-04 Friday 9 7 12 Count 4 4 4 Min 9 1 9 Max 64 20 40 </code></pre> <p>I tried to add a row with:</p> <pre><code>new_row = {'Date':'','DayName': '','A':'','B':'','C':''} frame = frame.append(new_row,ignore_index = True) </code></pre> <p>But I don't know how to compute the count, min and max values. Can somebody help me please?</p>
<p>You can try aggregating multiple functions over the rows and then concatenating the dataframes:</p> <pre class="lang-py prettyprint-override"><code>cols = ['A', 'B', 'C'] agg = (df[cols].agg(['count', min, max]) .rename_axis('Date') .reset_index()) out = pd.concat([df, agg]) </code></pre> <pre><code>print(out) Date DayName A B C 0 2022-03-01 Tuesday 50 20 40 1 2022-03-02 Wednesday 10 10 20 2 2022-03-03 Thurday 64 1 9 3 2022-03-04 Friday 9 7 12 0 count NaN 4 4 4 1 min NaN 9 1 9 2 max NaN 64 20 40 </code></pre>
python|pandas|dataframe
1
7,291
50,381,329
How to merge 2 CSV files together by multiple columns in Python
<p>I have two CSV files. <strong>File 1</strong> that looks like:</p> <pre><code>Ticker | Date | Marketcap A | 2002-03-14 | 600000 A | 2002-06-18 | 520000 . . ABB | 2004-03-16 | 400000 ABB | 2005-07-11 | 800000 . . AD | 2004-03-16 | 680000 . . </code></pre> <p><strong>File 2</strong> like:</p> <pre><code>Ticker | Date | Open | Close | A | 2002-03-14 | 580000 | 500000 | ABB | 2002-03-14 | 500000 | 420000 | AD | 2002-03-16 | 700000 | 670000 | . . . . </code></pre> <p>The periods indicate that values continue on for a large number of entries for each ticker for both <strong>File 1</strong> and <strong>File 2</strong>. The first file has all values for every date and every ticker listed all in one line continuously whereas the second file has all values for every year and ticker listed one-by-one.</p> <p>What I want to do is merge files 1 and 2 based off both "Ticker" and "Date" to look like:</p> <pre><code>Ticker | Date | Marketcap | Open | Close | A | 2002-03-14 | 600000 | 580000 | 500000 | ABB | 2002-03-14 | 520000 | 500000 | 420000 | . . </code></pre> <p>I've tried merging files using something like:</p> <pre><code>a = pd.read_csv("File1.csv") b = pd.read_csv("File2.csv") merged = a.merge(b, on='Date') </code></pre> <p>But I don't think this accounts for both Date and Ticker at once.</p>
<p>I believe you need to use <code>['Date', 'Ticker']</code> instead of just <code>'Date'</code>. Also you might need to specify the <code>how</code> argument depending on what you want.</p>
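<p>A minimal sketch of what that could look like (column names taken from the question; the <code>how</code> value is only an example):</p> <pre><code>import pandas as pd

a = pd.read_csv("File1.csv")
b = pd.read_csv("File2.csv")

# keep a row only when both the ticker and the date agree in the two files
merged = a.merge(b, on=['Ticker', 'Date'], how='inner')
merged.to_csv("merged.csv", index=False)
</code></pre>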
python|pandas|csv|merge
2
7,292
50,297,604
Getting rid of the extra header row for HTML styling of Pandas multi-index DataFrames
<p>This pandas code:</p> <pre><code>import pandas as pd df=pd.DataFrame([[0,0,1,2,3],[0,1,3,4,5],[1,0,4,5,6],[1,1,5,6,7]], columns=['A','B','X','Y','Z']) df.set_index(['A','B'], inplace=True) df </code></pre> <p>displays as follows in IPython Notebook:</p> <p><a href="https://i.stack.imgur.com/OPQjw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OPQjw.png" alt="enter image description here"></a></p> <p>Note that there are two header rows, to distinguish the index columns from the data columns.</p> <p>Is there any way to get these to show in only one header row?</p>
<p>Just include the column names in <code>set_index</code> as well:</p> <pre><code>df.set_index(['A','B','X','Y','Z'], inplace=True) df </code></pre> <p><a href="https://i.stack.imgur.com/R1jJJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R1jJJ.jpg" alt="enter image description here"></a></p>
python|pandas|jupyter-notebook
0
7,293
45,656,479
What's the best way to store Keras model params and model architecture alongside of the model?
<p>I'd like to save all the model parameters (optimizer, learning rate, batch size, etc.) and the model architecture (number and types of layers) alongside the model so that I can later go back and analyze why some models work better.</p> <p><strong>Is there a simple way to store this metadata along with the weights?</strong> </p>
<p>From <a href="https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model" rel="nofollow noreferrer">the docs</a>:</p> <pre><code>from keras.models import load_model model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' del model # deletes the existing model # returns a compiled model # identical to the previous one model = load_model('my_model.h5') </code></pre> <p>This includes the optimizer (which should include the learning rate and the batch size). Besides that, you can use</p> <ul> <li>Using configuration scripts which are under version control (I did so for my masters thesis, see <a href="https://github.com/MartinThoma/msthesis-experiments/tree/master/experiments" rel="nofollow noreferrer">my configuration scripts</a>)</li> <li>Storing the training script</li> </ul> <p>If you want one file, just use a container file format like <code>.tar</code>.</p>
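<p>As a rough sketch of the <code>.tar</code> idea (the parameter values and file names below are just placeholders, not a Keras API):</p> <pre><code>import json
import tarfile

# whatever hyperparameters you trained with
params = {"optimizer": "adam", "learning_rate": 1e-3, "batch_size": 32}
with open("params.json", "w") as f:
    json.dump(params, f)

# bundle the saved model (from model.save('my_model.h5')) together with the metadata
with tarfile.open("my_model_bundle.tar", "w") as tar:
    tar.add("my_model.h5")
    tar.add("params.json")
</code></pre>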
machine-learning|tensorflow|neural-network|keras
2
7,294
62,771,868
AxisError: axis 1 is out of bounds for array of dimension 1 when calculating accuracy of classes
<p>I try to predict 10 classes using this code</p> <pre><code>#Predicting the Test set rules y_pred = model.predict(traindata) y_pred = np.argmax(y_pred, axis=1) y_true = np.argmax(testdata, axis=1) target_names = [&quot;akLembut&quot;,&quot;akMundur&quot;,&quot;akTajam&quot;,&quot;caMenaik&quot;, &quot;caMenurun&quot;, &quot;coretanTengah&quot;, &quot;garisAtas&quot;, &quot;garisBawah&quot;, &quot;garisBawahBanyak&quot;, &quot;ttdCangkang&quot;] print(&quot;\n&quot;+ classification_report(y_true, y_pred, target_names=target_names)) </code></pre> <p>But then I got an error message like this</p> <pre><code>AxisError Traceback (most recent call last) &lt;ipython-input-13-a2b02b251547&gt; in &lt;module&gt;() 2 y_pred = model.predict(traindata) 3 y_pred = np.argmax(y_pred, axis=1) ----&gt; 4 y_true = np.argmax(testdata, axis=1) 5 6 target_names = [&quot;akLembut&quot;,&quot;akMundur&quot;,&quot;akTajam&quot;,&quot;caMenaik&quot;, &quot;caMenurun&quot;, &quot;coretanTengah&quot;, &quot;garisAtas&quot;, &quot;garisBawah&quot;, &quot;garisBawahBanyak&quot;, &quot;ttdCangkang&quot;] &lt;__array_function__ internals&gt; in argmax(*args, **kwargs) 2 frames /usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in _wrapit(obj, method, *args, **kwds) 45 except AttributeError: 46 wrap = None ---&gt; 47 result = getattr(asarray(obj), method)(*args, **kwds) 48 if wrap: 49 if not isinstance(result, mu.ndarray): AxisError: axis 1 is out of bounds for array of dimension 1 </code></pre> <p>I already train the data and I need to know each accuracy.</p>
<p>My guess is that your <code>testdata</code> array is only one-dimensional, so change it to</p> <pre><code>y_true = np.argmax(testdata, axis=0) </code></pre>
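<p>To see why the original call fails, here is a tiny illustration with made-up arrays: <code>argmax(..., axis=1)</code> needs a 2-D input such as one-hot labels, while a flat label vector only has axis 0:</p> <pre><code>import numpy as np

one_hot = np.array([[0, 1, 0], [1, 0, 0]])
print(np.argmax(one_hot, axis=1))  # [1 0] -- fine, the array has two dimensions

labels = np.array([1, 0, 2])       # already class indices, one dimension only
print(np.argmax(labels, axis=0))   # works; axis=1 here would raise the AxisError above
</code></pre>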
python|python-3.x|tensorflow
7
7,295
73,814,929
Formulating self increasing flag with end string based condition
<p>I have the following Dataframe</p> <pre><code>df = pd.DataFrame({'Category': {0: 'onboarding segment-confirmation-unexpected-input origin', 1: 'onboarding segment-confirmation-unexpected-input view', 2: 'product-availability cpf-request-unexpected-input origin', 3: 'product-availability postalcode-validation-true-unexpected-input origin', 4: 'product-availability postalcode-validation-true-unexpected-input view'}, 'UserId': {0: 9090, 1: 4545, 2: 3266, 3: 2894, 4: 2772}}) </code></pre> <p>What I want to do is formulate a flag based on the part of the string other than the word &quot;view&quot; or &quot;origin&quot;: if it is equal to the previous value, keep the same flag; if not, increase the flag value.</p> <p>Wanted result</p> <pre><code>df = pd.DataFrame({'Category': {0: 'onboarding segment-confirmation-unexpected-input origin', 1: 'onboarding segment-confirmation-unexpected-input view', 2: 'product-availability cpf-request-unexpected-input origin', 3: 'product-availability postalcode-validation-true-unexpected-input origin', 4: 'product-availability postalcode-validation-true-unexpected-input view'}, 'UserId': {0: 9090, 1: 4545, 2: 3266, 3: 2894, 4: 2772}, 'Flag':{0:'Flag_1',1:'Flag_1',2:'Flag_2',3:'Flag_3',4:'Flag_3'}}) </code></pre> <p>What would be the way to do this? I tried to slice it and formulate a groupby, but I am having a little difficulty with the increasing part.</p>
<p>Assuming you want to consider the first 2 blocks of the string (blocks being separated by spaces):</p> <pre><code># get substrings, keep first 2 (can be changed) df2 = df['Category'].str.split(expand=True).iloc[:, :2] # start new group if any value is different from the previous row group = df2.ne(df2.shift()).any(axis=1).cumsum() # add flag df['Flag'] = 'Flag_'+group.astype(str) </code></pre> <p>output:</p> <pre><code> Category UserId Flag 0 onboarding segment-confirmation-unexpected-inp... 9090 Flag_1 1 onboarding segment-confirmation-unexpected-inp... 4545 Flag_1 2 product-availability cpf-request-unexpected-in... 3266 Flag_2 3 product-availability postalcode-validation-tru... 2894 Flag_3 4 product-availability postalcode-validation-tru... 2772 Flag_3 </code></pre>
python|pandas
2
7,296
71,167,801
Fill dataframe with consecutive datetimes
<p>I have a DataFrame:</p> <pre><code>| init | end | temp 2022-02-02 10:34:00 | 2022-02-02 11:34:00 | 34 2022-02-02 11:34:00 | 2022-02-02 12:34:00 | 12 2022-02-02 13:34:00 | 2022-02-02 14:34:00 | 23 2022-02-02 14:34:00 | 2022-02-02 15:34:00 | 22 2022-02-02 17:34:00 | 2022-02-02 18:34:00 | 18 </code></pre> <p>I need to fill in the missing times (the end of one is the beginning of another) from a start and end date, if I have <code>start=2022-02-02 09:34:00 end=2022-02-02 18:34:00</code> I need to fill the DataFrame as follows:</p> <pre><code>| init | end | temp **2022-02-02 09:34:00 | 2022-02-02 11:34:00 | 0** 2022-02-02 10:34:00 | 2022-02-02 11:34:00 | 34 2022-02-02 11:34:00 | 2022-02-02 12:34:00 | 12 **2022-02-02 12:34:00 | 2022-02-02 11:34:00 | 0** 2022-02-02 13:34:00 | 2022-02-02 14:34:00 | 23 2022-02-02 14:34:00 | 2022-02-02 15:34:00 | 22 **2022-02-02 15:34:00 | 2022-02-02 11:34:00 | 0** **2022-02-02 16:34:00 | 2022-02-02 11:34:00 | 0** 2022-02-02 17:34:00 | 2022-02-02 18:34:00 | 18 **2022-02-02 18:34:00 | 2022-02-02 11:34:00 | 0** </code></pre>
<p>You can make temporal dataframe which consist of datetime period, then you can OUTER JOIN (using <code>pd.merge()</code>), as follows:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from datetime import timedelta df = pd.DataFrame({ 'init': ['2022-02-02 10:34:00', '2022-02-02 11:34:00', '2022-02-02 13:34:00', '2022-02-02 14:34:00', '2022-02-02 17:34:00'], 'end': ['2022-02-02 11:34:00', '2022-02-02 12:34:00', '2022-02-02 14:34:00', '2022-02-02 15:34:00', '2022-02-02 18:34:00'], 'temp': [34, 12, 23, 22, 18], }) # to convert str to datetime type for init and end columns df['init'] = pd.to_datetime(df['init']) df['end'] = pd.to_datetime(df['end']) # to create temporal dataframe for additional rows tmp_df = pd.DataFrame() tmp_df['init'] = pd.date_range(start=df.iloc[0]['init'] - timedelta(hours=1), end=df.iloc[-1]['end'], freq=&quot;H&quot;) # to create final result result = pd.merge(df, tmp_df, on='init', how='outer') result = result.sort_values(by=['init']).reset_index(drop=True) #result['end'] = result['init'] + timedelta(hours=1) # use this if you make end value as init + 1 hour result['end'] = result['end'].apply(lambda x: datetime(2020, 2, 2, 11, 34, 0) if x is pd.NaT else x) result['temp'] = result['temp'].fillna(0) # convert NaN to 0 print(result) </code></pre> <p>This will print what you expected:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; result init end temp 0 2022-02-02 09:34:00 2020-02-02 11:34:00 0.0 1 2022-02-02 10:34:00 2022-02-02 11:34:00 34.0 2 2022-02-02 11:34:00 2022-02-02 12:34:00 12.0 3 2022-02-02 12:34:00 2020-02-02 11:34:00 0.0 4 2022-02-02 13:34:00 2022-02-02 14:34:00 23.0 5 2022-02-02 14:34:00 2022-02-02 15:34:00 22.0 6 2022-02-02 15:34:00 2020-02-02 11:34:00 0.0 7 2022-02-02 16:34:00 2020-02-02 11:34:00 0.0 8 2022-02-02 17:34:00 2022-02-02 18:34:00 18.0 9 2022-02-02 18:34:00 2020-02-02 11:34:00 0.0 </code></pre> <p>If you want to make &quot;end&quot; column as &quot;init + 1 hour&quot;, then use this code (already commented in the code), <code>#result['end'] = result['init'] + timedelta(hours=1)</code>, instead of <code>result['end'] = result['end'].apply(lambda x: datetime(2020, 2, 2, 11, 34, 0) if x is pd.NaT else x)</code>.</p> <p>This will print following:</p> <pre class="lang-py prettyprint-override"><code> init end temp 0 2022-02-02 09:34:00 2022-02-02 10:34:00 0.0 1 2022-02-02 10:34:00 2022-02-02 11:34:00 34.0 2 2022-02-02 11:34:00 2022-02-02 12:34:00 12.0 3 2022-02-02 12:34:00 2022-02-02 13:34:00 0.0 4 2022-02-02 13:34:00 2022-02-02 14:34:00 23.0 5 2022-02-02 14:34:00 2022-02-02 15:34:00 22.0 6 2022-02-02 15:34:00 2022-02-02 16:34:00 0.0 7 2022-02-02 16:34:00 2022-02-02 17:34:00 0.0 8 2022-02-02 17:34:00 2022-02-02 18:34:00 18.0 9 2022-02-02 18:34:00 2022-02-02 19:34:00 0.0 </code></pre>
python|pandas|datetime
1
7,297
71,435,149
Check for a list of word in DataFrame pandas but skipping those words that are not in
<p>So I'm trying to filter a dataframe using a list of words. The problem is that some words may not be present but could still be useful.</p> <p>This dataframe is a catalog that I'm getting from a web scraping process. Every single row is a different, unique product.</p> <p>The list that I'm using comes from another process and could have words that are not useful because they don't appear in the string, and I can modify it.</p> <p>For example, let's say we have the following dataframe:</p> <pre><code>mycolumn = ['Products'] products = ['Kitadol 500 mg x 24 Comprimidos', 'Paracetamol 500 mg', 'Prestat 75 mg x 40 Comprimidos', 'Pedialyte 60 Manzana x 500 mL Solución Oral', 'Panadol Niños 100mg/Ml Gotas 15ml'] df = pd.DataFrame(products, columns=mycolumn) </code></pre> <p>And I have the following list of words:</p> <pre><code>list_words = ['PARACETAMOL','KITADOL','500','MG','LIB'] </code></pre> <p>In my dataset of products, 'Kitadol 500 mg x 24 Comprimidos' is a product where &quot;Kitadol&quot; is a commercial name and the molecule, which is &quot;Paracetamol&quot;, is not in the description. My main problem is that if I ask for all products that have &quot;paracetamol&quot;, 'Kitadol 500 mg x 24 Comprimidos' will not be in my search.</p> <p>My list of words contains keywords from a dictionary, so for example, if I search in my dictionary for &quot;Paracetamol&quot; and &quot;500&quot;, I get these keywords (there could be more).</p> <p>My goal is to get from my dataset all the products that contain these words, and for the words that are not contained, I want to skip them. For example, the product with the description 'Paracetamol 500 mg' doesn't contain &quot;Kitadol&quot;, so there will not be a full match, just a match on the keywords &quot;Paracetamol&quot;, &quot;500&quot; and &quot;MG&quot;. Right now I don't know how to show all the products that contain at least some of these keywords while ignoring the keywords that are not contained.</p> <p>My final table needs to contain two products:</p> <ul> <li>Kitadol 500 mg x 24 Comprimidos</li> <li>Paracetamol 500 mg</li> </ul> <p>I wonder if someone knows how to deal with this question or can give some ideas. Kind regards and thanks!</p>
<p>If it is possible to specify exactly what is needed for each match (here it is necessary to match all 3 values of each tuple), use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.findall.html" rel="nofollow noreferrer"><code>Series.str.findall</code></a> with <code>re.I</code> to ignore case, and test whether the number of unique matched values equals the length of each tuple in a list comprehension:</p> <pre><code>import re tups = [('PARACETAMOL','500','MG'), ('KITADOL','500','MG')] L = [df['Products'].str.findall(&quot;|&quot;.join(i),flags=re.I).apply(lambda x: len(set(x)))==len(i) for i in tups] df = df[np.logical_or.reduce(L)] print (df) Products 0 Kitadol 500 mg x 24 Comprimidos 1 Paracetamol 500 mg </code></pre>
python|pandas|dataframe|contains
0
7,298
60,557,274
Creating a new Min() and Max() column based another dataframe
<p>I'm trying to create two columns in a new dataframe based on the Min and the Max from an existing dataframe. When groupby is used, it gives NaN values for min and max.</p> <pre><code>df.groupby('street').min()['sold_price'] df.groupby('street').max()['sold_price'] </code></pre> <p>Sample from the existing dataframe:</p> <pre><code>street_name sold_price A 100,000 A 200,100 B 50,000 B 100,000 </code></pre> <p>The new dataframe should be:</p> <pre><code>street_name min max A 100,000 200,000 B 50,000 100,000 </code></pre>
<p>It should be</p> <pre><code>(df.groupby("street_name", as_index=False) ["sold_price"].agg(["min","max"]) ) </code></pre> <hr> <p><strong>Update</strong>: for rename:</p> <pre><code>(df.groupby("street_name", as_index=False) ["sold_price"].agg({'low':'min', 'high':'max'}) ) </code></pre>
python|pandas
0
7,299
60,518,772
Convert ndarray to dict in python3
<p>I have a ndarray that look like this</p> <pre><code>LABEL1 99 113 2010-04-26 20:12:23+00:00 LABEL1 29 143 2010-05-06 20:12:23+00:00 LABEL1 99 323 2010-02-12 20:12:23+00:00 LABEL1 23 223 2010-04-25 20:12:23+00:00 LABEL2 23 23 2010-01-21 20:12:23+00:00 LABEL1 234 123 2010-12-26 20:12:23+00:00 LABEL1 93 133 2010-02-23 20:12:23+00:00 LABEL4 19 1223 2010-07-24 20:12:23+00:00 </code></pre> <p>I need to do some aggregation and return as dict..</p> <p>What I should get at the end is similary to this</p> <pre><code>[ { 'LABEL1': { 'COLA':577, 'COLB': 1058, 'LAST': '2010-12-26 20:12:23+00:00' } }, { 'LABEL2': { 'COLA':23, 'COLB': 23, 'LAST': '2010-01-21 20:12:23+00:00' } }, { 'LABEL4': { 'COLA':19, 'COLB':1223, 'LAST': '2010-07-24 20:12:23+00:00' } } ] </code></pre> <p>The way I was thinking of doing was to convert to DataFrame, then do a group().agg...</p> <pre><code>aggr = select_df.groupby('LABELS').agg({'LABELS': [('LABELS', 'max')], 'COLA': [('COLA', 'sum'), ('COLB', 'count')], {'LAST': [('LAST', 'max')]}) </code></pre> <p>I'm kinda new to Python... and having nightmare with all data conversion required to do this...</p> <p>The original structure is a list</p> <pre><code> [ { 'Label': 'xxxx', 'LABELS': 'xxxx', 'COLA': ##, 'COLB': ##, 'LAST': 'datetime' },... ] </code></pre> <p>If I could simply aggregate directly this list and then concatenate with the next pass (list is read in chunk) to have a final list as mentioned above...</p>
<p>First convert it into dataframe:</p> <p><strong>df:</strong></p> <pre><code> 0 1 2 3 0 LABEL1 29 143 2010-05-06 20:12:23+00:00 1 LABEL1 99 323 2010-02-12 20:12:23+00:00 2 LABEL1 23 223 2010-04-25 20:12:23+00:00 3 LABEL2 23 23 2010-01-21 20:12:23+00:00 4 LABEL1 234 123 2010-12-26 20:12:23+00:00 5 LABEL1 93 133 2010-02-23 20:12:23+00:00 6 LABEL4 19 1223 2010-07-24 20:12:23+00:00 </code></pre> <hr> <pre><code>df.columns = ['label','x','y','z','w'] </code></pre> <hr> <pre><code>df.set_index('label').T.to_dict('dict') </code></pre> <p><strong>result:</strong></p> <pre><code>{'LABEL1': {'x': 93, 'y': 133, 'z': '2010-02-23', 'w': '20:12:23+00:00'}, 'LABEL2': {'x': 23, 'y': 23, 'z': '2010-01-21', 'w': '20:12:23+00:00'}, 'LABEL4': {'x': 19, 'y': 1223, 'z': '2010-07-24', 'w': '20:12:23+00:00'}} </code></pre> <p><strong>Edit:</strong> Then groupby label and aggregate by sum, max</p> <pre><code>df.groupby(["label"])\ .agg({"x": "sum", "y": "sum", "z": "max", "w": "max"}).T.to_dict('dict') </code></pre> <p><strong>result:</strong></p> <pre><code>{'LABEL1': {'x': 478, 'y': 945, 'z': '2010-12-26', 'w': '20:12:23+00:00'}, 'LABEL2': {'x': 23, 'y': 23, 'z': '2010-01-21', 'w': '20:12:23+00:00'}, 'LABEL4': {'x': 19, 'y': 1223, 'z': '2010-07-24', 'w': '20:12:23+00:00'}} </code></pre>
python|pandas|list|dataframe|arraylist
1