Dataset columns: Unnamed: 0 (int64, range 0 to 378k), id (int64, range 49.9k to 73.8M), title (string, length 15 to 150), question (string, length 37 to 64.2k), answer (string, length 37 to 44.1k), tags (string, length 5 to 106), score (int64, range -10 to 5.87k).
3,800
61,933,021
How to overwrite data on an existing excel sheet while preserving all other sheets?
<p>I have a pandas dataframe <code>df</code> which I want to overwrite to a sheet <code>Data</code> of an excel file while preserving all the other sheets since other sheets have formulas linked to sheet <code>Data</code></p> <p>I used the following code but it does not overwrite an existing sheet, it just creates a new sheet with the name <code>Data 1</code></p> <pre><code>with pd.ExcelWriter(filename, engine="openpyxl", mode="a") as writer: df.to_excel(writer, sheet_name="Data") </code></pre> <p>Is there a way to overwrite on an existing sheet?</p>
<p>You can do it using <strong><a href="https://openpyxl.readthedocs.io/en/stable/" rel="noreferrer"><code>openpyxl</code></a>:</strong></p> <pre><code>import pandas as pd from openpyxl import load_workbook book = load_workbook(filename) writer = pd.ExcelWriter(filename, engine='openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) df.to_excel(writer, "Data") writer.save() </code></pre> <p>You need to initialize <code>writer.sheets</code> in order to let <code>ExcelWriter</code> know about the sheets. Otherwise, it will create a new sheet with the name that you pass.</p>
python|excel|pandas|dataframe
14
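<p>A minimal sketch of the same replacement on current pandas (assuming pandas 1.3 or newer, where <code>ExcelWriter</code> accepts <code>if_sheet_exists</code>; on recent versions <code>writer.book</code> is read-only, so the assignment in the answer above no longer applies):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})   # hypothetical data
filename = "report.xlsx"                         # hypothetical workbook containing a "Data" sheet

# mode="a" keeps every other sheet; if_sheet_exists="replace" rewrites only "Data"
with pd.ExcelWriter(filename, engine="openpyxl", mode="a", if_sheet_exists="replace") as writer:
    df.to_excel(writer, sheet_name="Data")
</code></pre>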
3,801
61,871,562
Maintain column consistency when writing into an existing excel file with pandas to_excel
<p>this might be a simple and easy problem but I can't find out how to solve it. The problem is more complex but I made a simple version in order to focus in the real issues.</p> <pre><code>import pandas as pd d = {'col1': [1, 2], 'col2': [3, 4]} df = pd.DataFrame(data=d) e = {'col1': [11, 33], 'ab1': [55,44], 'col2': [22, 66]} df2 = pd.DataFrame(data=e) with pd.ExcelWriter('file.xlsx',mode='a') as writer: df.to_excel(writer, header=True, index=False,engine='xlsxwriter',sheet_name="uno") df2.to_excel(writer, startrow=3, header=False, index=False,engine='xlsxwriter',sheet_name="uno") </code></pre> <p>I have this code where df has:</p> <pre><code> col1 col2 0 1 3 1 2 4 </code></pre> <p>df2:</p> <pre><code> col1 ab1 col2 0 11 55 22 1 33 44 66 </code></pre> <p>The current result is </p> <pre><code> col1 col2 Unnamed: 2 0 1 3 NaN 1 2 4 NaN 2 11 55 22.0 3 33 44 66.0 </code></pre> <p>As you can see the column "col2" does not have the values that the df2 has with that key. so I would like an output like the following:</p> <pre><code> col1 col2 ab1 0 1 3 NaN 1 2 4 NaN 2 11 22 55 3 33 66 44 </code></pre> <p>I cannot change the order in the Dataframes that I am going to insert in the excel since in the real problem they are much larger and they are coming from a mongo db where the inconsistency across different dfs could be greater.</p> <p>Edit: I forgot to mention a constrain of the system, I cannot have both dataframes in memory at the same time since they have a considerable size. For that reason, one of them is inserted into the excel and deleted and then the other one is created and inserted.</p> <p>Thanks</p>
<p>Let us try <code>concat</code> before write into excel </p> <pre><code>s=pd.concat([df,df2],keys=['df','df2'],sort=True) s Out[129]: ab1 col1 col2 df 0 NaN 1 3 1 NaN 2 4 df2 0 55.0 11 22 1 44.0 33 66 with pd.ExcelWriter('file.xlsx',mode='a') as writer: s.loc['df'].to_excel(writer, header=True, index=False,engine='xlsxwriter',sheet_name="uno") s.loc['df2'].to_excel(writer, startrow=3, header=False, index=False,engine='xlsxwriter',sheet_name="uno") </code></pre> <p>Update </p> <pre><code>col=df.columns #df.to_excel(writer, header=True, index=False,engine='xlsxwriter',sheet_name="uno") df2=df2.reindex(columns=col.append(df2.columns.difference(col))) #df2.to_excel(writer, startrow=3, header=False, index=False,engine='xlsxwriter',sheet_name="uno") </code></pre>
python|pandas|mongodb|dataframe
1
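<p>A self-contained sketch of the reindex idea from the "Update" above, showing only the column-alignment step (writing to Excel then proceeds exactly as in the question):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
df2 = pd.DataFrame({'col1': [11, 33], 'ab1': [55, 44], 'col2': [22, 66]})

col = df.columns
# put df's columns first, then any extra columns of df2, so rows line up when appended
df2 = df2.reindex(columns=col.append(df2.columns.difference(col)))
print(df2)
#    col1  col2  ab1
# 0    11    22   55
# 1    33    66   44
</code></pre>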
3,802
61,706,381
How to conditionally groupby a column and transform a pandas dataframe on the basis of row wise operations?
<p>I have a pandas dataframe <em>df</em> of the form:</p> <pre><code> id start_time end_time label 1 0 2 A 1 3 6 C 1 9 11 A 2 0 4 B 2 5 7 A 3 1 10 C 3 20 22 A 3 22.5 24 A </code></pre> <p>I want to groupby column id based on the criteria that <em>end_time</em>(current row) - <em>start_time</em>(previous row)&lt;= <em>threshold</em> and then get the corresponding times and labels as lists in a new dataframe. Effectively, for <em>threshold</em> = 2, the new dataframe after transforming <em>df</em> should look like:</p> <pre><code> id times labels 1 [(0,2), (3,6)] [A, C] 1 [(9,11)] [A] 2 [(0,4), (5,7)] [B, A] 3 [(1,10)] [C] 3 [(20,22), (22.5, 24)] [A, A] </code></pre> <p>What is an efficient, pythonic way to achieve this?</p> <p>The code for generating the sample df: </p> <pre><code>df = pandas.DataFrame([[1,0, 2, 'A'],[1, 3,6,'C'],[1,9,11,'A'],[2,0,4,'B'],[2,5,7,'A'],[3,1,10,'C'],[3,20,22,'A'],[3,22.5,24,'A']],columns=['id', 'start_time', 'end_time', 'label']) </code></pre>
<p>We need to use <code>groupby</code> with <code>shift</code> to create the sub group key , then we just do the <code>groupby</code> with <code>agg</code> </p> <pre><code>s=df.groupby('id').apply(lambda x : (x.start_time-x.end_time.shift(1)).gt(1).cumsum()).reset_index(level=0,drop=True) df['times']=list(zip(df.start_time,df.end_time)) df_out=df.groupby([df.id,s]).agg({'times':list,'label':list}) df_out times label id 1 0 [(0.0, 2), (3.0, 6)] [A, C] 1 [(9.0, 11)] [A] 2 0 [(0.0, 4), (5.0, 7)] [B, A] 3 0 [(1.0, 10)] [C] 1 [(20.0, 22), (22.5, 24)] [A, A] </code></pre>
python|pandas|numpy
5
3,803
61,728,624
Get the pandas dataframe first row with index
<p>Recently I have started using Pandas in my work to handle the data obtained by some sensors I have a dictionary with the sensor values in the following format:</p> <pre class="lang-py prettyprint-override"><code>data={ 2019-10-23 00:00:00: { key1: value1, key2: value2, ... keyN: valueN }, 2019-10-23 00:00:03: { key1: value1, key2: value2, ... keyN: valueN }, ... } </code></pre> <p>I create a pandas dataframe:</p> <pre class="lang-py prettyprint-override"><code>dataframe = pandas.DataFrame.from_dict(data, orient="index") </code></pre> <p>The resulting dataframe looks like this:</p> <pre class="lang-py prettyprint-override"><code>Whole Dataframe: co no2 ... temperature illuminance 2019-10-23 00:00:43 298.66458 0.000000 ... 15.498970 0.0 2019-10-23 00:00:44 305.92203 0.000000 ... 15.498970 0.0 2019-10-23 00:00:37 298.66458 3.456714 ... 15.498970 0.0 2019-10-23 00:00:50 305.92203 0.000000 ... 15.498970 0.0 2019-10-23 00:00:45 305.92203 0.000000 ... 15.498970 0.0 ... ... ... ... ... ... 2019-10-23 23:33:59 327.05542 0.000000 ... 14.740597 0.0 2019-10-23 23:38:37 296.85214 0.000000 ... 14.687190 0.0 2019-10-23 23:43:38 289.69748 0.000000 ... 14.612421 0.0 2019-10-23 23:50:38 282.21335 15.672545 ... 14.526970 0.0 2019-10-23 23:54:44 297.21220 0.000000 ... 14.505608 0.0 </code></pre> <p>Now I need to be able to get the values of the first row, I tried using <code>.iloc[0]</code> and <code>to_dict()</code> to get a dictionary to send through an api rest:</p> <pre class="lang-py prettyprint-override"><code>selected_value = dataframe.iloc[0].to_json() </code></pre> <p>prints this:</p> <pre class="lang-py prettyprint-override"><code>Selected value: {"co":298.66458,"no2":3.456714,"o3":53.318943,"so2":0.0,"humidity":65.13771,"pm1":0.0198951,"pm10":0.0209116,"pm25":0.0209116,"temperature":15.49897,"illuminance":0.0} </code></pre> <p>But it doesn't return the index, I'd like to get something like this (or at least include the index anyway):</p> <pre class="lang-py prettyprint-override"><code>{"2019-10-23 00:00:43": { "co":298.66458, "no2":3.456714, "o3":53.318943, "so2":0.0, "humidity":65.13771, "pm1":0.0198951, "pm10":0.0209116, "pm25":0.0209116, "temperature":15.49897, "illuminance":0.0 } } </code></pre> <p>Any way to do this?</p> <p>PD: Indicate that I perform some intermediate procedures to obtain the sensor values every 10 minutes using the <code>between_time</code> method </p>
<p>you can use <code>head</code> instead like:</p> <pre><code># example data df = pd.DataFrame({'a':range(2), 'b':range(2,4)}, index=pd.to_datetime(['01/01/2018','02/01/2018']).strftime('%Y-%m-%d %H:%M:%S')) print (df.head(1).to_json(orient='index')) {"2018-01-01 00:00:00":{"a":0,"b":2}} #or to_dict maybe print (df.head(1).to_dict(orient='index')) {'2018-01-01 00:00:00': {'a': 0, 'b': 2}} </code></pre>
python|pandas
0
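<p>An alternative sketch that keeps the index without <code>head</code>: selecting the first row with a list of positions returns a one-row DataFrame, and <code>to_dict</code> / <code>to_json</code> with <code>orient='index'</code> then uses the timestamp as the key:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': range(2), 'b': range(2, 4)},
                  index=['2019-10-23 00:00:43', '2019-10-23 00:00:44'])

first = df.iloc[[0]]                       # double brackets -> one-row DataFrame, index preserved
print(first.to_dict(orient='index'))       # {'2019-10-23 00:00:43': {'a': 0, 'b': 2}}
print(first.to_json(orient='index'))       # {"2019-10-23 00:00:43":{"a":0,"b":2}}
</code></pre>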
3,804
61,949,695
NeuralNet regression, input normalization/ouput denormalization and role of activation funcions?
<p>Given a training dataset, Xtrain (m x n) and ytrain(m,) and some neural net sequential model.</p> <p>When and to what range does the training data have to be normalized too? How should predicted values be denormalized? And how do the choices of activation functions of different layers effect this?</p> <ul> <li>do we have to normalize the Xtrain data?</li> <li>does the range we normalize to, for example [0-1], [-1,1], [-5,5], [0,) depend on the input layers activation function domain? or do all activation functions impact it, therefore it should be normalized to the common range of all activation functions in the model?</li> </ul> <p>and for the target (ytrain) used in training:</p> <ul> <li>does that have to denormalized?</li> <li>does it have to be normalized to the range of output layers activation function or common range of all layers?</li> </ul> <p>very confused, so shedding any light on this for me would be greatly appreciated.</p>
<p><strong>Do we have to normalize the Xtrain data?</strong><br> - Yes, we have to</p> <p><strong>Does the range we normalize to depend on the input layer's activation function?</strong><br> - No, it doesn't</p> <p><strong>Does that have to be denormalized?</strong><br> - No</p> <p><strong>Does it have to be normalized to the range of the output layer's activation function or the common range of all layers?</strong><br> - Actually, I didn't get the question. But let me explain how normalization works.</p> <hr> <p>The main aim of normalization is to bring all features to a common scale. Standardization improves the numerical stability of your model. If you have very different features (e.g. some numeric columns in the range 1000 to 20000, some numeric columns in the range -10 to 5, some boolean columns, etc.) you should standardize them. This turns very different features into comparable ones. </p> <p>But why do we need it? In neural networks every neuron takes features and weights as input:<br> <code>g(X) = X^T * w</code><br> So, if some of your features are much bigger than others, the model will pay more attention to the large numbers.</p> <p>Speaking about <strong>denormalization</strong>: do we need to denormalize the <em>y</em> values? No, we don't. Since we didn't normalize the <em>y_train</em> values that the model was trained on, there is nothing to denormalize in the predicted values. </p>
tensorflow|keras|normalization|denormalization
1
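<p>A short illustration of this workflow with scikit-learn scalers (the library choice is an assumption; the answer above does not prescribe one): fit the scaler on the training inputs only and reuse it, and if you do choose to scale the target as well, invert that transform on the predictions:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import StandardScaler

Xtrain = np.random.rand(100, 5) * 1000     # hypothetical features on very different scales
ytrain = np.random.rand(100, 1) * 50

x_scaler = StandardScaler().fit(Xtrain)    # statistics come from the training set only
Xtrain_s = x_scaler.transform(Xtrain)

y_scaler = StandardScaler().fit(ytrain)    # optional: scale the regression target too
ytrain_s = y_scaler.transform(ytrain)

# predictions made in the scaled space are mapped back afterwards:
y_pred_s = ytrain_s[:5]                    # stand-in for model.predict(...)
y_pred = y_scaler.inverse_transform(y_pred_s)
print(y_pred.shape)
</code></pre>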
3,805
54,892,653
Tensorflow Embedding using Continous and Categorical Variable
<p>Based on <a href="https://stackoverflow.com/questions/43574889/tensorflow-embedding-for-categorical-feature">this</a> post, I tried to create another model, where I'm adding both categorical and continous variables. Please find the code below:</p> <pre><code>from __future__ import print_function import pandas as pd; import tensorflow as tf import numpy as np from sklearn.preprocessing import LabelEncoder if __name__ == '__main__': # 1 categorical input feature and a binary output df = pd.DataFrame({'cat2': np.array(['o', 'm', 'm', 'c', 'c', 'c', 'o', 'm', 'm', 'm']), 'num1': np.random.rand(10), 'label': np.array([0, 0, 1, 1, 0, 0, 1, 0, 1, 1])}) encoder = LabelEncoder() encoder.fit(df.cat2.values) X1 = encoder.transform(df.cat2.values).reshape(-1,1) X2 = np.array(df.num1.values).reshape(-1,1) # X = np.concatenate((X1,X2), axis=1) Y = np.zeros((len(df), 2)) Y[np.arange(len(df)), df.label.values] = 1 # Neural net parameters training_epochs = 5 learning_rate = 1e-3 cardinality = len(np.unique(X)) embedding_size = 2 input_X_size = 1 n_labels = len(np.unique(Y)) n_hidden = 10 # Placeholders for input, output cat2 = tf.placeholder(tf.int32, [None], name='cat2') x = tf.placeholder(tf.float32, [None, 1], name="input_x") y = tf.placeholder(tf.float32, [None, 2], name="input_y") embed_matrix = tf.Variable( tf.random_uniform([cardinality, embedding_size], -1.0, 1.0), name="embed_matrix" ) embed = tf.nn.embedding_lookup(embed_matrix, cat2) inputs_with_embed = tf.concat([x, embedding_aggregated], axis=2, name="inputs_with_embed") # Neural network weights h = tf.get_variable(name='h2', shape=[inputs_with_embed, n_hidden], initializer=tf.contrib.layers.xavier_initializer()) W_out = tf.get_variable(name='out_w', shape=[n_hidden, n_labels], initializer=tf.contrib.layers.xavier_initializer()) # Neural network operations #embedded_chars = tf.nn.embedding_lookup(embeddings, x) layer_1 = tf.matmul(inputs_with_embed,h) layer_1 = tf.nn.relu(layer_1) out_layer = tf.matmul(layer_1, W_out) # Define loss and optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=out_layer, labels=y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Initializing the variables init = tf.global_variables_initializer() # Launch the graph with tf.Session() as sess: sess.run(init) for epoch in range(training_epochs): avg_cost = 0. # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: X2,cat2:X1, y: Y}) print("Optimization Finished!") </code></pre> <p>But I'm getting the following error. It seems I'm not concatenating the continous variable and embedding properly. But I'm not understanding how to fix it. </p> <p>Please if someone can please guide me.</p> <pre><code>ValueError: Shape must be at least rank 3 but is rank 2 for 'inputs_with_embed_2' (op: 'ConcatV2') with input shapes: [?,1], [?,2], [] and with computed input tensors: input[2] = &lt;2&gt;. </code></pre> <p>Thanks!</p>
<p>If by <code>embedding_aggregated</code> you mean <code>embed</code> (probably a typo):</p> <p>The error is that there is no <code>axis=2</code> in your case, it should be <code>axis=1</code>:</p> <p><code>inputs_with_embed = tf.concat([x, embed], axis=1, name="inputs_with_embed")</code></p> <p><code>embed</code> has shape [None, embedding_dimension] and <code>x</code> has shape [None, 1].</p> <p>They are both 2D tensors, so you only have access to axis=0 or axis=1 (indexing starts at 0, not 1); therefore, to get <code>inputs_with_embed</code> with shape [None, embedding_dimension+1] you need to concat on <code>axis=1</code>.</p>
tensorflow|embedding
1
3,806
54,752,195
How to define ConvLSTM encoder_decoder in Keras?
<p>I have seen examples of building an encoder-decoder network using LSTM in Keras but I want to have a ConvLSTM encoder-decoder and since the ConvLSTM2D does not accept any 'initial_state' argument so I can pass the initial state of the encoder to the decoder, I tried to use RNN in Keras and tried to pass the ConvLSTM2D as the cell of RNN but I got the following error:</p> <pre><code>ValueError: ('`cell` should have a `call` method. The RNN was passed:', &lt;tf.Tensor 'encoder_1/TensorArrayReadV3:0' shape=(?, 62, 62, 32) dtype=float32&gt;) </code></pre> <p>This is how I tried to define the RNN cell:</p> <pre><code>first_input = Input(shape=(None, 62, 62, 12)) encoder_convlstm2d = ConvLSTM2D(filters=32, kernel_size=(3, 3), padding='same', name='encoder'+ str(1))(first_input ) encoder_outputs, state_h, state_c = keras.layers.RNN(cell=encoder_convlstm2d, return_sequences=False, return_state=True, go_backwards=False, stateful=False, unroll=False) </code></pre>
<p>Below is my approach for encoder-decoder-based solution with ConvLSTM.</p> <pre><code>def convlstm(input_shape): print(np.shape(input_shape)) inpTensor = Input((input_shape)) #encoder net1 = ConvLSTM2D(filters=32, kernel_size=3, padding='same', return_sequences=True)(inpTensor) max_pool1 = MaxPooling3D(pool_size=(2, 2, 2), strides=2, padding='same')(net1) bn1 = BatchNormalization(axis=1)(max_pool1) dp1 = Dropout(0.2)(bn1) net2 = ConvLSTM2D(filters=64, kernel_size=3, padding='same', return_sequences=True)(dp1) max_pool2 = MaxPooling3D(pool_size=(2, 2, 2), strides=2, padding='same')(net2) bn2 = BatchNormalization(axis=1)(max_pool2) dp2 = Dropout(0.2)(bn2) net3 = ConvLSTM2D(filters=128, kernel_size=3, padding='same', return_sequences=True)(dp2) max_pool3 = MaxPooling3D(pool_size=(2, 2, 2), strides=2, padding='same')(net3) bn3 = BatchNormalization(axis=1)(max_pool3) dp3 = Dropout(0.2)(bn3) #decoder net4 = ConvLSTM2D(filters=128, kernel_size=3, padding='same', return_sequences=True)(dp3) up1 = UpSampling3D((2, 2, 2))(net4) net5= ConvLSTM2D(filters=64, kernel_size=3, padding='same', return_sequences=True)(up1) up2 = UpSampling3D((2, 2, 2))(net5) net6 = ConvLSTM2D(filters=32, kernel_size=3, padding='same', return_sequences=False)(up2) up3 = UpSampling2D((2, 2))(net6) out = Conv2D(filters=1, kernel_size=(3, 3), activation='sigmoid', padding='same', data_format='channels_last')(up3) #or use only return out return Model(inpTensor, out) </code></pre>
tensorflow|keras|conv-neural-network|recurrent-neural-network|encoder-decoder
0
3,807
54,934,019
replacing a column's values (if it meets a condition) from another existing column's values
<p>I have a <code>data frame</code> that consists of 3 columns: </p> <pre><code>Id, Summary, Description </code></pre> <p>What I am trying to do is if any values in <code>Description</code> exactly match this string: "This is an empty description", then replace those contents with those of <code>Summary</code>.</p> <p>For example:</p> <p>Before:</p> <pre><code> Id Summary Description 0 1 Cool song This is an empty description 1 2 It was ok was ok because needed more melody 2 3 this was sick This is an empty description 3 4 not a fan i prefer classical over rock 4 5 alright This is an empty description </code></pre> <p>After:</p> <pre><code> Id Summary Description 0 1 Cool song Cool song 1 2 It was ok was ok because needed more melody 2 3 this was sick this was sick 3 4 not a fan i prefer classical over rock 4 5 alright alright </code></pre> <p>The code I have I am using works, but I wonder if there is a better way because I get a warning:</p> <p>Input: </p> <pre><code> df.Description = np.where(df.Description == "This is an empty description", df.Summary, df.Description) </code></pre> <p>Output:</p> <pre><code>C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py:3643: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy self[name] = value </code></pre>
<p>This is a really common warning when you are chaining multiple indexing operations in <code>Pandas</code>. You can read about it in detail <a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="nofollow noreferrer">here</a>. If you want to leverage <code>Pandas</code> native methods, you can do something like the code below without getting any warnings.</p> <pre><code>import pandas as pd mask = (df.Description == "This is an empty description") df.loc[mask, 'Description'] = df.Summary </code></pre>
python-3.x|pandas|dataframe|series
0
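<p>An equivalent sketch with <code>Series.mask</code>, which also assigns the whole column at once and so avoids the chained-assignment warning:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Summary': ['Cool song', 'It was ok'],
                   'Description': ['This is an empty description',
                                   'was ok because needed more melody']})

placeholder = 'This is an empty description'
df['Description'] = df['Description'].mask(df['Description'].eq(placeholder), df['Summary'])
print(df)
</code></pre>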
3,808
54,975,556
Check if Column Has String Object Then Convert to Numeric
<p>I need to check if a column in my dataframe is of type 'object' and then based on that information, change all the values in that column to an integer. Here is the function I wrote to do that:</p> <pre><code>def multiply_by_scalar(self): self.columns_to_index() i = ask_user("What column would you like to multiply by a scalar? Please type in index:\n", int) m = ask_user("Type in the value of the scalar:\n", int) if self.df.columns[i] == np.object: print("{} is of type 'object'. Scalar multiplication can only be applied to dtypes of type 'numeric'.".format(self.df.columns[i])) c = ask_user("Would you like to convert column '{}' to type 'int'?".format(self.df.columns[i])) if c in yes_values: pd.to_numeric(self.df.columns[i]) self.df.columns[i] = self.df.columns[i].multiply(m) print(self.df.columns[i]) else: self.df.columns[i] = self.df.columns[i].multiply(m) print(self.df.columns[i]) </code></pre> <p>NOTE: The <code>self.columns_to_index()</code> is a function in the program that maps each column name to an index and it is not important information to answer the question. </p> <p>When I run this function, I get the error: </p> <p><code>AttributeError: 'str' object has no attribute 'multiply</code></p> <p>Demonstrating that the conversion from a string to an integer did not work.</p>
<p>Here is my solution: </p> <pre><code># df.dtypes.to_dict() creates a dictionary with column names as keys and dtypes as values for colname, coltype in df.dtypes.to_dict().items(): if coltype == 'object': df[colname] = df[colname].astype(int) </code></pre> <p>or, if you have a function fc to execute for each matching column:</p> <pre><code>def fc(colname, coltype): # implement fc here for colname, coltype in df.dtypes.to_dict().items(): if coltype == 'object': fc(colname, coltype) </code></pre>
python|pandas|dataframe
0
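<p>A standalone sketch of the point the traceback hints at: <code>df.columns[i]</code> is only the column name (a string), so the Series has to be selected with that name before converting and multiplying (hypothetical data and scalar):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'price': ['10', '20', '30']})   # values stored as strings, dtype object
col = df.columns[0]                                # 'price' -- just the name
m = 3

if df[col].dtype == object:                        # check the Series' dtype, not the name
    df[col] = pd.to_numeric(df[col])               # convert and assign the result back
df[col] = df[col].multiply(m)
print(df)
</code></pre>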
3,809
55,113,824
pandas valueError when using agg function
<p>I am getting myself acquainted with pandas and I have encountered an issue I cannot find an answer to.</p> <p>I am using the dataset available here <a href="https://raw.githubusercontent.com/Shreyas3108/house-price-prediction/master/kc_house_data.csv" rel="nofollow noreferrer">https://raw.githubusercontent.com/Shreyas3108/house-price-prediction/master/kc_house_data.csv</a></p> <p>I am then running the function <code>df.describe()</code> which outputs everything it should without issue.</p> <p>Since I am only currently only interested in the min, max and diff of the min/max. I am using the <code>df.agg</code> function from pandas to get the min/max of each column by running the following code</p> <pre><code>df.agg([min, max],axis=0) </code></pre> <p>When I run this, I get the error:</p> <pre><code> ~/.virtualenvs/cv/lib/python3.6/site-packages/pandas/core/base.py in _aggregate_multiple_funcs(self, arg, _level, _axis) 615 # if we are empty 616 if not len(results): --&gt; 617 raise ValueError("no results") 618 619 try: ValueError: no results </code></pre> <p>I am not sure why I am getting this error, when <code>df.describe()</code> is able to find the min/max of each column without issue. I have looked for blank and NaN values as well as looking for strings to see if they were producing the problem and my data does not seem to have them.</p> <p>I would appreciate any pointers to where I am going wrong.</p>
<p>I have tried the code below and it runs successfully on the dataset you mentioned in your question.</p> <pre><code>df = pd.read_csv('https://raw.githubusercontent.com/Shreyas3108/house-price-prediction/master/kc_house_data.csv') df = df.agg([min, max]).T CLM = ['max', 'min'] df = (df.drop(CLM, axis=1) .join(df[CLM].apply(pd.to_numeric, errors='coerce'))) df = df[df[CLM].notnull().all(axis=1)] df['Diff'] = df['max'] - df['min'] df </code></pre> <p>Kindly try this and let me know if it works for you.</p>
python|pandas|data-science
0
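<p>A shorter sketch that sidesteps the issue by restricting the aggregation to numeric columns first (an assumption: for the min/max/diff described in the question, only the numeric columns matter):</p>
<pre><code>import pandas as pd

url = ('https://raw.githubusercontent.com/Shreyas3108/'
       'house-price-prediction/master/kc_house_data.csv')
df = pd.read_csv(url)

stats = df.select_dtypes('number').agg(['min', 'max']).T   # one row per numeric column
stats['diff'] = stats['max'] - stats['min']
print(stats)
</code></pre>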
3,810
49,373,835
String substitution in str.replace vs Pandas str.replace
<p>I need to replace a backslash with something else and wrote this code to test the basic concept. Works fine:</p> <pre><code>test_string = str('19631 location android location you enter an area enable quick action honeywell singl\dzone thermostat environment control and monitoring') print(test_string) test_string = test_string.replace('singl\\dzone ','singl_dbl_zone ') print(test_string) 19631 location android location you enter an area enable quick action honeywell singl\dzone thermostat environment control and monitoring 19631 location android location you enter an area enable quick action honeywell singl_dbl_zone thermostat environment control and monitoring </code></pre> <p>However, I have a pandas df full of these (re-configured) strings and when I try to operate on the df, it doesn't work. </p> <pre><code>raw_corpus.loc[:,'constructed_recipe']=raw_corpus['constructed_recipe'].str.replace('singl\\dzone ','singl_dbl_zone ') </code></pre> <p>The backslash remains!</p> <pre><code>323096 you enter an area android location location environment control and monitoring honeywell singl\dzone thermostat enable quick action </code></pre>
<p>There's a difference between <code>str.replace</code> and <code>pd.Series.str.replace</code>. The former accepts substring replacements, and the latter accepts regex patterns.</p> <p>Using <code>str.replace</code>, you'd need to pass a <em>raw string</em> instead.</p> <pre><code>df['col'] = df['col'].str.replace(r'\\d', '_dbl_') </code></pre>
python|string|pandas|replace
2
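<p>A small sketch of the same call; note that on newer pandas (2.0 and later) <code>Series.str.replace</code> defaults to <code>regex=False</code>, so passing the flag explicitly is worthwhile (an assumption about the reader's pandas version):</p>
<pre><code>import pandas as pd

s = pd.Series([r'honeywell singl\dzone thermostat'])
# r'\\d' is the regex for a literal backslash followed by the letter d
print(s.str.replace(r'\\d', '_dbl_', regex=True)[0])
# honeywell singl_dbl_zone thermostat
</code></pre>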
3,811
49,441,151
How to control FEA software like MSC NASTRAN using Python code?
<p>I would like to run MSC NASTRAN using python. I have seen a similiar function in MATLAB using <code>system('nastran.exe file_name.bdf')</code> #where file_name.bdf is the input file to Run using nastran.</p> <p>Hence i tried below using python code, but it did not work,</p> <pre><code>import os os.system('nastran.exe file_name.bdf') </code></pre> <p>Could you tell me where i going wrong?</p> <p>Also, how to give the command line in NASTRAN thru python? Like for example memory allocation for the run, number of cores need to be used for run etc. </p> <p>some NASTRAN command lines include, 1. scr=yes delete=f04,log,xdb pause=yes 2. mem=10gb bpool=3gb memorymaximum=14gb sscr=500gb sdball=500gb mode=i8 ...etc.</p>
<p>I can't speak directly for MSC Nastran, it's been a while since I've used it. But most modern FEA programs have an API (application program interface) to allow you to call commands from an external program like Python or MATLAB. </p> <p>Without an API, you may be limited to using Python to start the program from the command line, which is what your code is trying to do. As for how to launch a program from within Python, check out this question/answer: <a href="https://stackoverflow.com/questions/7032212/how-to-run-application-with-parameters-in-python">How to run application with parameters in Python?</a></p>
python|numpy|system
1
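<p>A hedged sketch of launching the solver from Python with <code>subprocess</code> (the executable path and run keywords are placeholders; <code>os.system</code> only works if nastran.exe is on PATH or given with its full path, and Nastran's command-line keywords are simply appended as extra arguments):</p>
<pre><code>import subprocess

nastran_exe = r"C:\MSC.Software\bin\nastran.exe"     # hypothetical install path
cmd = [nastran_exe, "file_name.bdf",
       "scr=yes", "mem=10gb", "delete=f04,log,xdb"]  # run keywords as extra arguments

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode)
print(result.stdout)
</code></pre>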
3,812
67,235,753
Replace table with dependencies using pandas to_sql's if_exists='replace'
<p>Pandas <code>pd.to_sql()</code> function has the parameter <code>if_exists='replace'</code>, which drops the table and inserts the given <code>DataFrame</code>. But the table I'm trying to replace is part of a <code>view</code>. Is there a way to replace the table and keep the view without having to delete and recreate it?</p>
<p>If the updated data has the same columns, you can truncate the table,</p> <pre><code>con.execute('TRUNCATE table RESTART IDENTITY;') </code></pre> <p>and then run <code>df.to_sql</code> with <code>if_exists='append'</code>. Truncating keeps the table definition in place, so the dependent view survives.</p>
python|sql|pandas|psql|pandas-to-sql
1
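<p>A minimal sketch of that truncate-then-append pattern with SQLAlchemy (connection string, table name and frame are placeholders):</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/mydb")   # hypothetical DSN
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})                      # hypothetical data

with engine.begin() as con:                        # transaction: empty the table first
    con.execute(text("TRUNCATE TABLE my_table RESTART IDENTITY"))

df.to_sql("my_table", engine, if_exists="append", index=False)     # refill without dropping it
</code></pre>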
3,813
60,246,599
get specific value in each group and add it as new column in each group
<p>I want to have something like this:</p> <hr> <p>dataframe:</p> <pre><code> col1 col2 col3 A 'aa' date1 A 'aa' date2 A 'aa' date3 A 'bb' date4 B 'aa' date5 B 'bb' date6 B 'aa' date7 </code></pre> <p>output:</p> <pre><code> col1 col2 col3 col4 A 'aa' date1 date4 A 'aa' date2 date4 A 'aa' date3 date4 A 'bb' date4 date4 B 'aa' date5 date6 B 'bb' date6 date6 B 'aa' date7 date6 </code></pre> <p>I want to group by col1 and based on value in col2 get col3 and add a new column as col4 and set it as this value of col3</p> <p>'aa' and 'bb' are examples so I can't use sort...I should compare it with a value.</p>
<p>Try:</p> <pre class="lang-py prettyprint-override"><code>df=df.sort_values("col2", ascending=False).set_index("col1") df["col4"]=df.groupby("col1")["col3"].first() df=df.reset_index(drop=False) </code></pre> <p>Outputs:</p> <pre class="lang-py prettyprint-override"><code> col1 col2 col3 col4 0 A bb date4 date4 1 B bb date6 date6 2 A aa date1 date4 3 A aa date2 date4 4 A aa date3 date4 5 B aa date5 date6 6 B aa date7 date6 </code></pre> <p><strong>Edit</strong> IIUC - to get <code>col4</code> the value from <code>col3</code>, grouped by <code>col1</code> for where <code>col2=="bb"</code></p> <p>Try:</p> <pre class="lang-py prettyprint-override"><code>df=df.set_index("col1") df["col4"]=df.loc[df["col2"]=="bb"].groupby("col1")["col3"].first() df=df.reset_index(drop=False) </code></pre>
python|python-3.x|pandas|dataframe
4
3,814
59,913,977
Why does very simple port of the official Keras mnist example to tensorflow 2.x result in massive drop in accuracy?
<p>Here is the mnist example from the Keras documentation: <a href="https://keras.io/examples/mnist_cnn/" rel="nofollow noreferrer">https://keras.io/examples/mnist_cnn/</a></p> <p>I put it into google colab, under Tensorflow 1.x, and it performs really well: <a href="https://colab.research.google.com/drive/15NW-lXhRUxqSCCygVxddXCo5ID7yF2iL" rel="nofollow noreferrer">https://colab.research.google.com/drive/15NW-lXhRUxqSCCygVxddXCo5ID7yF2iL</a></p> <p>I made very simple changes to make it execute under TF-2.x: <a href="https://colab.research.google.com/drive/1ul-eFn1XRe9ta3cu5vHchaa4DxStRda_" rel="nofollow noreferrer">https://colab.research.google.com/drive/1ul-eFn1XRe9ta3cu5vHchaa4DxStRda_</a></p> <p>It completely crushes performance! Accuracy drops like a rock!</p> <p>What did I do wrong?</p>
<p>The difference is in the optimizers. <code>tf.keras.optimizers.Adadelta</code> uses a learning rate of 0.001. <code>keras.optimizers.Adadelta</code> uses a learning rate of 1.0.</p> <p>Check <a href="https://keras.io/optimizers/" rel="nofollow noreferrer">keras.optimizers</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adadelta" rel="nofollow noreferrer">tf.keras.optimizers.Adadelta</a> for more details. In particular, the Tensorflow page mentions that Adadelta is supposed to have a learning rate of 1.0 to match the original paper.</p>
tensorflow2.0|mnist|tf.keras
1
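<p>A one-line sketch of the fix implied above, pinning the TF 2.x optimizer to the learning rate the original Keras example expects (<code>model</code> is assumed to be the Sequential model built as in the mnist_cnn example):</p>
<pre><code>import tensorflow as tf

# model built as in keras.io/examples/mnist_cnn
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0),
              metrics=['accuracy'])
</code></pre>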
3,815
65,289,194
How to correctly use apply and int inside values of a single key
<p>I've a basic question: I'm using the following script:</p> <pre><code>import pandas as pd from collections import OrderedDict df = pd.DataFrame({'ID' : ['ID1', 'ID1', &quot;ID1&quot;,&quot;ID2&quot;,&quot;ID2&quot;], &quot;pdb&quot; : [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;,&quot;d&quot;,&quot;e&quot;], &quot;beg&quot;: [1, 3, 40,111,100], &quot;end&quot; : [11, 12, 50,115,110]}) df2 = pd.DataFrame for index, row in df.iterrows(): df['var1'] = df.apply(lambda x : &quot; &quot;.join(list(map(str,range(x['beg'],x['end']+1)))),axis=1) df2 = df.groupby([&quot;ID&quot;], sort=False)['var1'] .apply(lambda x : (' '.join(x.astype(str)))).reset_index(name='var1') df2['var1'] = (df2['var1'].str.split().apply(lambda x: (OrderedDict.fromkeys(x).keys())) .str.join(' ')) df2[&quot;var1&quot;] = df2[&quot;var1&quot;].map(lambda x: int(x)) df2[&quot;var2&quot;] = (df2[&quot;var1&quot;].str.split().apply(lambda x: sorted(x)).str.join(&quot; &quot;)) </code></pre> <p>And I'm getting this error, while trying to convert a string of numbers to ints, so it can be sorted properly: (from this line: <code>df2[&quot;var1&quot;] = df2[&quot;var1&quot;].map(lambda x: int(x)</code> )</p> <pre><code>ValueError: invalid literal for int() with base 10: '1 10 11 12 2 3 4 40 41 42 43 44 45 46 47 48 49 5 50 6 7 8 9' </code></pre> <p>Is there a proper way of doing this? Thanks in advance.</p>
<p>I think you can create new column <code>r</code> with ranges and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>, then sorting, remove duplicates and convert to strings before <code>join</code> per groups by <code>ID</code>:</p> <pre><code>df['r'] = df.apply(lambda x: range(x['beg'],x['end']+1), axis=1) df2 = df.explode('r').drop_duplicates(['ID','r']).sort_values(['ID','r']) df2['r'] = df2['r'].astype(str) df2 = df2.groupby('ID')['r'].agg(' '.join).reset_index() print (df2) ID r 0 ID1 1 2 3 4 5 6 7 8 9 10 11 12 40 41 42 43 44 45 4... 1 ID2 100 101 102 103 104 105 106 107 108 109 110 11... </code></pre> <p>In your solution it is possible by map each value to int, then sorting and last map to strings like:</p> <pre><code>df2[&quot;var2&quot;] = df2[&quot;var1&quot;].str.split().apply(lambda x: ' '.join(list(map(str,sorted(map(int,x)))))) </code></pre>
python|pandas|string|dataframe|integer
1
3,816
49,804,230
Integer-based offset from index label
<p>In <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">Pandas</a>, is it possible to determine the index's label at a given integer offset from a known index label? For instance,</p> <pre><code>&gt;&gt;&gt; df a b 10 1 2 20 3 4 30 5 6 40 7 8 &gt;&gt;&gt; df.index.offset(10, 3) 40 &gt;&gt;&gt; df.index.offset(30, -1) 20 </code></pre>
<p>Use <code>pd.Index.get_loc</code></p> <pre><code>df.index[df.index.get_loc(10) + 3] 40 </code></pre> <hr> <pre><code>df.index[df.index.get_loc(30) - 1] 20 </code></pre>
python|python-3.x|python-2.7|pandas
1
3,817
46,856,221
Numpy 3D array indexing using lists
<p>Suppose I have a numpy array with shape (10, 1000, 1000), and I have three lists, which are supposed to represent the range of indexes of each axis like so:</p> <pre><code>z_range = [0, 5] y_range = [200, 300] x_range = [300, 500] </code></pre> <p>I know I can do the following, but it seems rather verbose:</p> <pre><code>arr[z_range[0]:z_range[1], y_range[0]:y_range[1], x_range[0]:x_range[1]] </code></pre> <p>Is there an easier way to slice this particular array using the three lists?</p>
<p>Indexing takes a tuple, so you can just construct your tuple dynamically, using a generator expression:</p> <pre><code>&gt;&gt;&gt; z_range = [0, 3] &gt;&gt;&gt; y_range = [2, 3] &gt;&gt;&gt; x_range = [3, 5] &gt;&gt;&gt; arr = numpy.arange(5*5*5).reshape(5,5,5) &gt;&gt;&gt; arr[tuple(slice(a, b) for a,b in (x_range, y_range, z_range))] array([[[ 85, 86, 87]], [[110, 111, 112]]]) </code></pre>
python|arrays|numpy|indexing
3
3,818
46,836,389
How can I get access to intermediate activation maps of the pre-trained models in NiftyNet?
<p>I could download and successfully test <a href="https://cmiclab.cs.ucl.ac.uk/CMIC/NiftyNet/tree/dev/demos/brain_parcellation" rel="nofollow noreferrer">brain parcellation demo</a> of <a href="http://niftynet.io/" rel="nofollow noreferrer">NiftyNet</a> package. However, this only gives me the ultimate parcellation result of a pre-trained network, whereas I need to get access to the output of the intermediate layers too. </p> <p>According to this demo, the following line downloads a pre-trained model and a test MR volume:</p> <pre><code>wget -c https://www.dropbox.com/s/rxhluo9sub7ewlp/parcellation_demo.tar.gz -P ${demopath} </code></pre> <p>where <code>${demopath}</code> is the path to the demo folder. Extracting the downloaded file will create a <code>.ckpt</code> file which seems to contain a pre-trained tensorflow model, however I could not manage to load it into a tensorflow session.</p> <p>Is there a way that I can load the pre-trained model and have access to the all its intermediate activation maps? In other words, how can I load the pre-trained models from NiftyNet library into a tensorflow session such that I can explore through the model or probe certain intermediate layer for a any given input image?</p> <p>Finally, in NiftyNet's website it is mentioned that "a number of models from the literature have been (re)implemented in the NiftyNet framework". Are pre-trained weights of these models also available? The demo is using a pre-trained model called HighRes3DNet. If the pre-trained weights of other models are also available, what is the link to download those weights or saved tensorflow models?</p>
<p>To answer your 'Finally' question first, NiftyNet has some network architectures implemented (e.g., VNet, UNet, DeepMedic, HighRes3DNet) that you can train on your own data. For a few of these, there are pre-trained weights for certain applications (e.g. brain parcellation with HighRes3DNet and abdominal CT segmentation with DenseVNet).</p> <p>Some of these pre-trained weights are linked from the demos, like the parcellation one you linked to. We are starting to collect the pre-trained models into a <a href="https://cmiclab.cs.ucl.ac.uk/CMIC/NiftyNetExampleServer/blob/master/model_zoo.md" rel="nofollow noreferrer">model zoo</a>, but this is still a work in progress.</p> <p>Eli Gibson [NiftyNet developer]</p>
tensorflow|pre-trained-model|niftynet
2
3,819
63,225,734
Mix of line and scatter plots from pandas dataframe in a single plot using the tick frequency of the first plot only
<p>I am wanting to compare data within a dataframe, plotting some of the data as lines, and other columns as scatter. My actual data is a combination of model output and observations, I want the observations to be scatter, and the model to be lines.</p> <p>The observations have a LOT of Nan values (most times steps do not have an observation).</p> <p>This MWE duplicates the issue I'm having</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt base = datetime.datetime.today() date_list = [base - datetime.timedelta(days=x) for x in range(40)] df = pd.DataFrame(data = { &quot;Time&quot;: date_list, &quot;Chocolate&quot;: np.random.rand(40), &quot;Strawberry&quot;: np.random.rand(40), &quot;Fake Chocolate&quot;: np.random.rand(40), &quot;Fake Strawberry&quot;: np.random.rand(40), }) df.iloc[3,3] = np.nan ax1 = df.plot(x = 'Time', y = [&quot;Chocolate&quot;,&quot;Strawberry&quot;]) ax1 = df.plot(x = 'Time', y = [&quot;Chocolate&quot;,&quot;Strawberry&quot;]) ax2 = df.plot.scatter(x = 'Time', y = ['Fake Chocolate'], marker = '^', ax = ax1) ax3 = df.plot.scatter(x = 'Time', y = ['Fake Strawberry'], marker = '*', ax = ax1, color = '#ff7f0e') </code></pre> <p><a href="https://i.stack.imgur.com/YxsQI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YxsQI.png" alt="example output" /></a></p> <p>I want to have the x-axis like in the first plot, so taking the style of the line plot where you don't have EVERY date trying to print in a tiny space. How do I do this?</p> <p>I am using <code>ax1.set</code> to set the x and y-axis labels.</p> <p>and if I can sneak in a second question, why is it possible to do multiple lines using <code>y = []</code> but not possible for the scatter plots?</p>
<p>Just add <code>plt.xticks(rotation=45)</code> to the end of your script and you will be fine.</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import datetime base = datetime.datetime.today() date_list = [base - datetime.timedelta(days=x) for x in range(40)] df = pd.DataFrame(data = { &quot;Time&quot;: date_list, &quot;Chocolate&quot;: np.random.rand(40), &quot;Strawberry&quot;: np.random.rand(40), &quot;Fake Chocolate&quot;: np.random.rand(40), &quot;Fake Strawberry&quot;: np.random.rand(40), }) df.iloc[3,3] = np.nan ax1 = df.plot(x = 'Time', y = [&quot;Chocolate&quot;,&quot;Strawberry&quot;]) ax1 = df.plot(x = 'Time', y = [&quot;Chocolate&quot;,&quot;Strawberry&quot;]) ax2 = df.plot.scatter(x = 'Time', y = ['Fake Chocolate'], marker = '^', ax = ax1) ax3 = df.plot.scatter(x = 'Time', y = ['Fake Strawberry'], marker = '*', ax = ax1, color = '#ff7f0e') plt.xticks(rotation=45); </code></pre> <p><a href="https://i.stack.imgur.com/ExTO4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ExTO4.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/HPBpI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HPBpI.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib|plot
0
3,820
63,185,421
RuntimeError: The expanded size of the tensor (7) must match the existing size (128) at non-singleton dimension 3
<p>When I run AdaIN code</p> <pre><code>def adaptive_instance_normalization(content_feat, style_mean, style_std): size = content_feat.size() content_mean, content_std = calc_mean_std(content_feat) normalized_feat = (content_feat - content_mean.expand( size)) / content_std.expand(size) return normalized_feat * style_std.expand(size) + style_mean.expand(size) </code></pre> <p>I got the following error</p> <p>RuntimeError: The expanded size of the tensor (7) must match the existing size (128) at non-singleton dimension 3. Target sizes: [100, 128, 7, 7]. Tensor sizes: [100, 128]</p>
<p>You should be more precise and descriptive while explaining your issue. You cannot expect people to read your mind or be familiar with your exact problem. So first, what should the expected output be, and which line is failing? I guess from the <code>expand</code> calls that you would like to enable broadcasting. Unfortunately, as you can read in the <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor.expand" rel="nofollow noreferrer">official documentation</a>, <code>expand</code> works the same as usual broadcasting, and adds the required extra dimensions at the beginning, not the end.</p> <p>So you should use <code>reshape(size[:2] + (1, 1))</code> in place of <code>expand(size)</code>.</p>
pytorch
1
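<p>A standalone sketch of that reshape on the shapes from the traceback (random tensors stand in for the real features and style statistics): adding trailing singleton dimensions lets broadcasting cover the 7x7 spatial grid instead of calling <code>expand</code>:</p>
<pre><code>import torch

content_feat = torch.randn(100, 128, 7, 7)   # (N, C, H, W)
style_mean = torch.randn(100, 128)           # (N, C) -- no spatial dims yet
style_std = torch.rand(100, 128) + 0.1

size = content_feat.size()
style_mean = style_mean.reshape(size[0], size[1], 1, 1)
style_std = style_std.reshape(size[0], size[1], 1, 1)

content_mean = content_feat.mean(dim=(2, 3), keepdim=True)
content_std = content_feat.std(dim=(2, 3), keepdim=True)
normalized = (content_feat - content_mean) / content_std
out = normalized * style_std + style_mean    # broadcasting handles the (7, 7) spatial dims
print(out.shape)                             # torch.Size([100, 128, 7, 7])
</code></pre>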
3,821
67,631,676
topography data, string '-' can't be converted to float
<p>I'm attempting to import cornea topography data from a CSV file. The imshow fails to plot the data after slicing the axes and converting all to np.array, displaying the error message</p> <p>&quot;raise TypeError(&quot;Image data of dtype {} cannot be converted to &quot; TypeError: Image data of dtype object cannot be converted to float&quot;</p> <p>If I don't convert the data to np.array and just write</p> <p>topo1 = df.iloc[1:142, 1:142].astype(dtype=float)</p> <p>the erroe message says:</p> <p>&quot;return arr.astype(dtype, copy=True) ValueError: could not convert string to float: '-'&quot;</p> <p>So it seems to me that there is a '-' character somewhere in my data that can't be converted to float but I've not been able to locate it and remove it.</p> <p>Could someone please help me solve this problem?</p> <p>All the best, Payman.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt data= pd.read_csv (r'myfile.CSV', header=None) df = pd.DataFrame(data=data) # the first row of df is the x-axis range and the first column is the y-axis range X= np.array(df.iloc[0, 1:142].astype(float)) Y= np.array(df.iloc[1:142, 0].astype(float)) topo1 = np.array(df.iloc[1:142, 1:142].astype(dtype=float, errors = 'ignore')) plt.imshow(topo1) plt.show() </code></pre>
<p>When reading the .csv file, it appears that the key thing is to define the separator manually. As a result, the line should be:</p> <p><code>df = pd.read_csv(filename, header=None, error_bad_lines=False, sep=';')</code></p>
python|pandas|dataframe|imshow|topography
0
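<p>If the '-' characters are genuinely placeholders for missing readings rather than a separator problem, a sketch of treating them as NaN directly at read time (an assumption; it presumes the file otherwise parses with the default separator):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('myfile.CSV', header=None, na_values=['-'])   # every '-' becomes NaN

X = df.iloc[0, 1:142].astype(float).to_numpy()
Y = df.iloc[1:142, 0].astype(float).to_numpy()
topo1 = df.iloc[1:142, 1:142].astype(float).to_numpy()

plt.imshow(topo1)
plt.show()
</code></pre>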
3,822
67,829,360
handle comment lines when reading csv using pandas
<p>Here is a simple example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from io import StringIO s = &quot;&quot;&quot;a b c ------------ A1 1 2 A-2 -NA- 3 ------------ B-1 2 -NA- ------------ &quot;&quot;&quot; df = pd.read_csv(StringIO(s), sep='\s+', comment='-') df a b c 0 A1 1.0 2.0 1 A NaN NaN 2 B NaN NaN </code></pre> <p>For lines containing but not starting with the comment specifier, <code>pandas</code> treats the substring from <code>-</code> as comments.</p> <hr /> <p>My question is as above.</p> <p>Not important but just for curiosity, can <code>pandas</code> handle two different types of comment lines: starting with <code>#</code> or <code>-</code></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from io import StringIO s = &quot;&quot;&quot;a b c # comment line ------------ A1 1 2 A2 -NA- 3 ------------ B1 2 -NA- ------------ &quot;&quot;&quot; df = pd.read_csv(StringIO(s), sep='\s+', comment='#-') df </code></pre> <p>raises <code>ValueError: Only length-1 comment characters supported</code></p>
<p>Another solution: You can &quot;preprocess&quot; the file before <code>.read_csv</code>. For example:</p> <pre class="lang-py prettyprint-override"><code>import re import pandas as pd from io import StringIO s = &quot;&quot;&quot;a b c # comment line ------------ A1 1 2 A-2 -NA- 3 ------------ B-1 2 -NA- ------------ &quot;&quot;&quot; df = pd.read_csv( StringIO(re.sub(r&quot;^-{2,}&quot;, &quot;&quot;, s, flags=re.M)), sep=r&quot;\s+&quot;, comment=&quot;#&quot; ) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> a b c 0 A1 1 2 1 A-2 -NA- 3 2 B-1 2 -NA- </code></pre>
python|pandas|dataframe|csv|comments
2
3,823
67,989,036
How to find the first point of intersection between two arrays?
<pre><code>wavelength_one=target[60].wavelength.value flux_one=target[60].flux.value wavelength_two=target[61].wavelength.value flux_two=target[61].flux.value f,ax=plt.subplots(figsize=(15,10)) ax.plot(wavelength_one,flux_one,color='green') ax.plot(wavelength_two,flux_two,color='black') ax.grid(True) </code></pre> <p><a href="https://i.stack.imgur.com/5QPLq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5QPLq.png" alt="the plot" /></a></p> <p>I have the following code that plots two spectrum with wavelength as the x-axis and flux as the y-axis. I want to find the first point of intersection between these graphs. How would I be able to find the index for the flux value where they intersect for the first time? The flux values are in a numpy array. I want to find the index of the first overlapping flux value and the corresponding wavelength associated with the intersection.</p>
<p>You can try this code snippet, which walks both arrays in parallel and stops at the first index where the two fluxes (and wavelengths) coincide:</p> <pre class="lang-py prettyprint-override"><code>from math import isclose for i in range(len(flux_one)): if isclose(flux_one[i], flux_two[i]) and isclose(wavelength_one[i], wavelength_two[i]): print(i, wavelength_one[i], flux_one[i]) break </code></pre>
python|numpy|matplotlib|numpy-ndarray
0
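<p>A vectorised alternative sketch (assuming both spectra are sampled on the same wavelength grid, so element-wise comparison makes sense): the first sign change of the flux difference marks the first crossing:</p>
<pre><code>import numpy as np

# stand-ins for the real spectra; both share the wavelength grid
wavelength_one = np.linspace(4000, 7000, 500)
flux_one = np.sin(wavelength_one / 300.0)
flux_two = np.cos(wavelength_one / 300.0)

diff = flux_one - flux_two
crossings = np.where(np.diff(np.sign(diff)) != 0)[0]   # indices just before a sign change

if crossings.size:
    i = crossings[0]
    print("first intersection near index", i,
          "wavelength", wavelength_one[i], "flux", flux_one[i])
</code></pre>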
3,824
68,021,867
CNN model is not learn well
<p>I started to learn CNN implementation in PyTorch, and I tried to build CNNs to process the grayscale images with 4 classes from 0 to 3. I got in the beginning accuracy around 0.55. The maximum accuracy I got is ~ 0.683%.</p> <p>I tried SGD and Adam optimizer with different values for lr and batch_size, but the accuracy is still low.</p> <p>I used data Augmentation to create more samples, around 4k.</p> <p>I cannot improve accuracy further and wondered if I could get some advices about what I need to change in CNN structure to increase accuracy. Loss starts around: Loss: [1.497] then decreases near: Loss: [0.001] then fluctuated up and down around this value.</p> <p>I spent time reading about similar problems but without luck. I am using nn.CrossEntropyLoss() for my loss_fn. I don't use softmax for dense layer.</p> <p>This is the Summary of the CNN model:</p> <pre><code>------------------------------------------------------------- Layer (type) Output Shape Param # ============================================================= Conv2d-1 [-1, 32, 128, 128] 320 ReLU-2 [-1, 32, 128, 128] 0 BatchNorm2d-3 [-1, 32, 128, 128] 64 MaxPool2d-4 [-1, 32, 64, 64] 0 Conv2d-5 [-1, 64, 64, 64] 18,496 ReLU-6 [-1, 64, 64, 64] 0 BatchNorm2d-7 [-1, 64, 64, 64] 128 MaxPool2d-8 [-1, 64, 32, 32] 0 Conv2d-9 [-1, 128, 32, 32] 73,856 ReLU-10 [-1, 128, 32, 32] 0 BatchNorm2d-11 [-1, 128, 32, 32] 256 MaxPool2d-12 [-1, 128, 16, 16] 0 Flatten-13 [-1, 32768] 0 Linear-14 [-1, 512] 16,777,728 ReLU-15 [-1, 512] 0 Dropout-16 [-1, 512] 0 Linear-17 [-1, 4] 2,052 ============================================================ </code></pre> <p>I would appreciate the help.</p>
<p>How many images are in the train set? The test set? What is the size of the images? How difficult would you consider the classification of these images to be? Do you think it should be simple or difficult?</p> <p>According to the numbers you have, you're overfitting: your training loss is near 0 (meaning almost nothing will backpropagate to the weights, i.e. your model won't change anymore) and your 68.3% (it's a typo, right?) is presumably from the test set. So you have no problem training the network, which is a good point.</p> <p>Then you can search for ways of countering overfitting online; here are some &quot;classical&quot; possibilities: - you may raise the dropout parameter<br /> - adding a regularizer (L1 or L2) to constrain the learning<br /> - early stopping using a validation set<br /> - using a classical and/or lighter convolutional network (ResNet, Inception) with or without pretrained weights. This last option also depends on your image type (natural, biomedical, ...)<br /> - ... and many more, of varying difficulty to implement</p> <p>Also, technically you are already using a softmax layer, since it is included in PyTorch's CrossEntropyLoss.</p>
pytorch|conv-neural-network|classification
1
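<p>A minimal sketch of the first two suggestions in PyTorch (an illustrative classifier head only, not the asker's actual model): raising the dropout probability and adding an L2 penalty through the optimizer's weight_decay:</p>
<pre><code>import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(                      # stand-in head mirroring the summary's dense part
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),                      # raised dropout to fight overfitting
    nn.Linear(512, 4),
)
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # L2 regularisation
loss_fn = nn.CrossEntropyLoss()             # as in the question; no softmax on the last layer
</code></pre>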
3,825
61,184,276
Exec format error: Try to compile .proto files into .py files with Raspberry Pi
<p>I`m a german studant. For the school I have to make a physics or chemistry project, I decided to install tensorflow on a raspberry pi to train a object detection modal. But there is an error I don´t understand. 'sh: 1: bin/protoc: Exec format error'</p> <p>I tried all versions of protobuf from the source here: <a href="https://github.com/protocolbuffers/protobuf/releases" rel="nofollow noreferrer">Source</a></p> <p>I follow the instructions from the side: <a href="https://gilberttanner.com/blog/installing-the-tensorflow-object-detection-api" rel="nofollow noreferrer">Instruction</a></p> <p>I use a raspberry pi 4 4gb ram. With Raspbain 10 (buster)</p>
<p><code>Exec format error</code> means that you're trying to run a binary that's not compatible with your system.<br> The Raspberry Pi 4 has an ARM Cortex-A72 CPU, and Raspbian is a 32-bit operating system, so you need a 32-bit ARM binary.<br> At first glance, none of the binaries that you can download from the GitHub page you linked to is a 32-bit ARM version, so you'll have to <a href="https://askubuntu.com/a/1072684/740497">compile it from source</a>.</p>
tensorflow|raspberry-pi|proto
0
3,826
61,436,770
SSD’s loss not decreasing in PyTorch
<p>I am implementing SSD(Single shot detector) to study in PyTorch. However, my custom training loss didn't decrease... I've searched and tried various solution for week, but problem is still remaining.</p> <p>What should I do? My loss function is incorrect?</p> <p>Here is my SSD300 model</p> <pre><code>SSD300( (feature_layers): ModuleDict( (conv1_1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu1_1): ReLU() (conv1_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu1_2): ReLU() (pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False) (conv2_1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu2_1): ReLU() (conv2_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu2_2): ReLU() (pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False) (conv3_1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu3_1): ReLU() (conv3_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu3_2): ReLU() (conv3_3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu3_3): ReLU() (pool3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=True) (conv4_1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu4_1): ReLU() (conv4_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu4_2): ReLU() (conv4_3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu4_3): ReLU() (pool4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False) (conv5_1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu5_1): ReLU() (conv5_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu5_2): ReLU() (conv5_3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu5_3): ReLU() (pool5): MaxPool2d(kernel_size=(3, 3), stride=(1, 1), padding=1, dilation=1, ceil_mode=False) (conv6): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6)) (relu6): ReLU() (conv7): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1)) (relu7): ReLU() (conv8_1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (relu8_1): ReLU() (conv8_2): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (relu8_2): ReLU() (conv9_1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)) (relu9_1): ReLU() (conv9_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) (relu9_2): ReLU() (conv10_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1)) (relu10_1): ReLU() (conv10_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1)) (relu10_2): ReLU() (conv11_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1)) (relu11_1): ReLU() (conv11_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1)) (relu11_2): ReLU() ) (localization_layers): ModuleDict( (loc1): Sequential( (l2norm_loc1): L2Normalization() (conv_loc1): Conv2d(512, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_loc1): ReLU() ) (loc2): Sequential( (conv_loc2): Conv2d(1024, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_loc2): ReLU() ) (loc3): Sequential( (conv_loc3): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_loc3): ReLU() ) (loc4): Sequential( (conv_loc4): Conv2d(256, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_loc4): ReLU() ) (loc5): Sequential( (conv_loc5): Conv2d(256, 16, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_loc5): ReLU() ) (loc6): Sequential( (conv_loc6): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_loc6): ReLU() ) ) (confidence_layers): ModuleDict( (conf1): Sequential( (l2norm_conf1): L2Normalization() (conv_conf1): Conv2d(512, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_conf1): ReLU() ) (conf2): Sequential( (conv_conf2): Conv2d(1024, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_conf2): ReLU() ) (conf3): Sequential( (conv_conf3): Conv2d(512, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_conf3): ReLU() ) (conf4): Sequential( (conv_conf4): Conv2d(256, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_conf4): ReLU() ) (conf5): Sequential( (conv_conf5): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_conf5): ReLU() ) (conf6): Sequential( (conv_conf6): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (relu_conf6): ReLU() ) ) (predictor): Predictor() ) </code></pre> <p>My loss function is defined as;</p> <pre class="lang-py prettyprint-override"><code>class SSDLoss(nn.Module): def __init__(self, alpha=1, matching_func=None, loc_loss=None, conf_loss=None): super().__init__() self.alpha = alpha self.matching_strategy = matching_strategy if matching_func is None else matching_func self.loc_loss = LocalizationLoss() if loc_loss is None else loc_loss self.conf_loss = ConfidenceLoss() if conf_loss is None else conf_loss def forward(self, predicts, gts, dboxes): """ :param predicts: Tensor, shape is (batch, total_dbox_nums, 4+class_nums=(cx, cy, w, h, p_class,...) :param gts: Tensor, shape is (batch*bbox_nums(batch), 1+4+class_nums) = [[img's_ind, cx, cy, w, h, p_class,...],.. 
:param dboxes: Tensor, shape is (total_dbox_nums, 4=(cx,cy,w,h)) :return: loss: float """ # get predict's localization and confidence pred_loc, pred_conf = predicts[:, :, :4], predicts[:, :, 4:] # matching pos_indicator, gt_loc, gt_conf = self.matching_strategy(gts, dboxes, batch_num=predicts.shape[0], threshold=0.5) # calculate ground truth value considering default boxes gt_loc = gt_loc_converter(gt_loc, dboxes) # Localization loss loc_loss = self.loc_loss(pos_indicator, pred_loc, gt_loc) # Confidence loss conf_loss = self.conf_loss(pos_indicator, pred_conf, gt_conf) return conf_loss + self.alpha * loc_loss class LocalizationLoss(nn.Module): def __init__(self): super().__init__() self.smoothL1Loss = nn.SmoothL1Loss(reduction='none') def forward(self, pos_indicator, predicts, gts): N = pos_indicator.sum() total_loss = self.smoothL1Loss(predicts, gts).sum(dim=-1) # shape = (batch num, dboxes num) loss = total_loss.masked_select(pos_indicator) return loss.sum() / N class ConfidenceLoss(nn.Module): def __init__(self, neg_factor=3): """ :param neg_factor: int, the ratio(1(pos): neg_factor) to learn pos and neg for hard negative mining """ super().__init__() self.logsoftmax = nn.LogSoftmax(dim=-1) self._neg_factor = neg_factor def forward(self, pos_indicator, predicts, gts): loss = (-gts * self.logsoftmax(predicts)).sum(dim=-1) # shape = (batch num, dboxes num) N = pos_indicator.sum() neg_indicator = torch.logical_not(pos_indicator) pos_loss = loss.masked_select(pos_indicator) neg_loss = loss.masked_select(neg_indicator) neg_num = neg_loss.shape[0] neg_num = min(neg_num, self._neg_factor * N) _, topk_indices = torch.topk(neg_loss, neg_num) neg_loss = neg_loss.index_select(dim=0, index=topk_indices) return (pos_loss.sum() + neg_loss.sum()) / N </code></pre> <p>loss output is below;</p> <pre><code>Training... Epoch: 1, Iter: 1, [32/21503 (0%)] Loss: 28.804445 Training... Epoch: 1, Iter: 10, [320/21503 (1%)] Loss: 12.880742 Training... Epoch: 1, Iter: 20, [640/21503 (3%)] Loss: 15.932519 Training... Epoch: 1, Iter: 30, [960/21503 (4%)] Loss: 14.624641 Training... Epoch: 1, Iter: 40, [1280/21503 (6%)] Loss: 16.301014 Training... Epoch: 1, Iter: 50, [1600/21503 (7%)] Loss: 15.710087 Training... Epoch: 1, Iter: 60, [1920/21503 (9%)] Loss: 12.441727 Training... Epoch: 1, Iter: 70, [2240/21503 (10%)] Loss: 12.283393 Training... Epoch: 1, Iter: 80, [2560/21503 (12%)] Loss: 12.272835 Training... Epoch: 1, Iter: 90, [2880/21503 (13%)] Loss: 12.273635 Training... Epoch: 1, Iter: 100, [3200/21503 (15%)] Loss: 12.273409 Training... Epoch: 1, Iter: 110, [3520/21503 (16%)] Loss: 12.266172 Training... Epoch: 1, Iter: 120, [3840/21503 (18%)] Loss: 12.272820 Training... Epoch: 1, Iter: 130, [4160/21503 (19%)] Loss: 12.274920 Training... Epoch: 1, Iter: 140, [4480/21503 (21%)] Loss: 12.275247 Training... Epoch: 1, Iter: 150, [4800/21503 (22%)] Loss: 12.273258 Training... Epoch: 1, Iter: 160, [5120/21503 (24%)] Loss: 12.277486 Training... Epoch: 1, Iter: 170, [5440/21503 (25%)] Loss: 12.266512 Training... Epoch: 1, Iter: 180, [5760/21503 (27%)] Loss: 12.265674 Training... Epoch: 1, Iter: 190, [6080/21503 (28%)] Loss: 12.265306 Training... Epoch: 1, Iter: 200, [6400/21503 (30%)] Loss: 12.269717 Training... Epoch: 1, Iter: 210, [6720/21503 (31%)] Loss: 12.274122 Training... Epoch: 1, Iter: 220, [7040/21503 (33%)] Loss: 12.263970 Training... Epoch: 1, Iter: 230, [7360/21503 (34%)] Loss: 12.267252 </code></pre>
<p>I must normalize predicted boxes before calculating loss function.</p> <p>The word of variance caused to mislead... <a href="https://leimao.github.io/blog/Bounding-Box-Encoding-Decoding/" rel="nofollow noreferrer">link</a> </p> <pre class="lang-py prettyprint-override"><code>class Encoder(nn.Module): def __init__(self, norm_means=(0, 0, 0, 0), norm_stds=(0.1, 0.1, 0.2, 0.2)): super().__init__() # shape = (1, 1, 4=(cx, cy, w, h)) or (1, 1, 1) self.norm_means = torch.tensor(norm_means, requires_grad=False).unsqueeze(0).unsqueeze(0) self.norm_stds = torch.tensor(norm_stds, requires_grad=False).unsqueeze(0).unsqueeze(0) def forward(self, gt_boxes, default_boxes): """ :param gt_boxes: Tensor, shape = (batch, default boxes num, 4) :param default_boxes: Tensor, shape = (default boxes num, 4) Note that 4 means (cx, cy, w, h) :return: encoded_boxes: Tensor, calculate ground truth value considering default boxes. The formula is below; gt_cx = (gt_cx - dbox_cx)/dbox_w, gt_cy = (gt_cy - dbox_cy)/dbox_h, gt_w = train(gt_w / dbox_w), gt_h = train(gt_h / dbox_h) shape = (batch, default boxes num, 4) """ assert gt_boxes.shape[1:] == default_boxes.shape, "gt_boxes and default_boxes must be same shape" gt_cx = (gt_boxes[:, :, 0] - default_boxes[:, 0]) / default_boxes[:, 2] gt_cy = (gt_boxes[:, :, 1] - default_boxes[:, 1]) / default_boxes[:, 3] gt_w = torch.log(gt_boxes[:, :, 2] / default_boxes[:, 2]) gt_h = torch.log(gt_boxes[:, :, 3] / default_boxes[:, 3]) encoded_boxes = torch.cat((gt_cx.unsqueeze(2), gt_cy.unsqueeze(2), gt_w.unsqueeze(2), gt_h.unsqueeze(2)), dim=2) # normalization return (encoded_boxes - self.norm_means.to(gt_boxes.device)) / self.norm_stds.to(gt_boxes.device) &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; answer!! </code></pre>
python|python-3.x|deep-learning|pytorch
0
3,827
61,574,046
Pandas pivot table: Aggregate function by count of a particular string
<p>I am trying to analyse a DataFrame which contains the Date as the index, and Name and Message as columns. </p> <p>df.head() returns:</p> <pre><code> Name Message Date 2020-01-01 Tom ‎ image omitted 2020-01-01 Michael ‎image omitted 2020-01-02 James ‎image Happy new year you wonderfully awfully people... 2020-01-02 James I was waiting for you ‎image 2020-01-02 James QB whisperer ‎image </code></pre> <p>This is the pivot table I was trying to call off the initial df, which the aggfunc being the count of the existence of a word (eg. image)</p> <pre><code>df_s = df.pivot_table(values='Message',index='Date',columns='Name',aggfunc=(lambda x: x.value_counts()['image'])) </code></pre> <p>Which ideally would show, as an <strong>example</strong>:</p> <pre><code> Name Tom Michael James Date 2020-01-01 1 1 0 2020-01-02 0 0 3 </code></pre> <p>For instance, I've done another df.pivot_table using</p> <pre><code>df_m = df.pivot_table(values='Message',index='Date',columns='Name',aggfunc=lambda x: len(x.unique())) </code></pre> <p>Which aggregates based off the number of messages in a day and this returns the table fine.</p> <p>Thanks in advance</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.count.html" rel="nofollow noreferrer"><code>Series.str.count</code></a> for number of matched values to new column added to DataFrame by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a> and then pivoting with <code>sum</code>:</p> <pre><code>df_m = (df.reset_index() .assign(count= df['Message'].str.count('image')) .pivot_table(index='Date', columns='Name', values='count' , aggfunc='sum', fill_value=0)) print (df_m) Name James Michael Tom Date 2020-01-01 0 1 1 2020-01-02 3 0 0 </code></pre>
python|pandas|lambda|pivot-table|aggregate-functions
2
3,828
61,557,662
Vectorize two pandas columns at once with CountVectorizer
<p>I want I want to apply Sklearn's CountVectorizer at two columns at once. I have tried this:</p> <pre><code>features = df[['col 1', 'col2']] results = df[['col 3'] vectorizer = CountVectorizer(lowercase=False) features = vectorizer.fit_transform(features) results = vectorizer.fit_transform(results) </code></pre> <p>But I get this error:</p> <pre><code>TypeError: expected string or bytes-like object </code></pre> <p>And then I have tried this:</p> <pre><code>from sklearn.compose import make_column_transformer vectorizer = CountVectorizer(lowercase=False) transformer = make_column_transformer((vectorizer, 'col 1'), (vectorizer, 'col 2')) features = transformer.fit_transform(features) results = vectorizer.fit_transform(results) </code></pre> <p>But I get this error:</p> <pre><code>ValueError: Specifying the columns using strings is only supported for pandas DataFrames </code></pre> <p>What am I doing wrong, I saw this second solution here:</p> <p><a href="https://media-exp1.licdn.com/dms/image/C4E22AQFC6Uf5_el2nQ/feedshare-shrink_800/0?e=1591228800&amp;v=beta&amp;t=7ZQbbIvgpQKlTfg1Z_IpGT9DB21LUqy_bkKaNE41l0E" rel="nofollow noreferrer">https://media-exp1.licdn.com/dms/image/C4E22AQFC6Uf5_el2nQ/feedshare-shrink_800/0?e=1591228800&amp;v=beta&amp;t=7ZQbbIvgpQKlTfg1Z_IpGT9DB21LUqy_bkKaNE41l0E</a></p>
<p>This is the solution:</p> <pre><code>features = df.iloc[:, [-2,-3]] results = df.iloc[:, -1] from sklearn.compose import make_column_transformer vectorizer = CountVectorizer(lowercase=False) transformer = make_column_transformer((vectorizer, 'col 1'), (vectorizer, 'col 2')) features = transformer.fit_transform(features) results = vectorizer.fit_transform(results) </code></pre>
python|pandas|scikit-learn
0
3,829
68,771,230
Pandas: How to concat or merge two incomplete dataframe into one more complete dataframe
<p>I would like to concatenate two incomplete data frame with the same data (in theory) regarding a similar index. I tried with pd.concat but I don't managed to get what I need.</p> <p>Here is a simple example of what I would like to do :</p> <pre><code> df1 = pd.DataFrame( { &quot;A&quot;: [&quot;A0&quot;, &quot;A1&quot;, &quot;A2&quot;, &quot;A3&quot;], &quot;B&quot;: [&quot;B0&quot;, &quot;B1&quot;, &quot;B2&quot;, &quot;B4&quot;], &quot;C&quot;: [&quot;C0&quot;, &quot;C1&quot;, &quot;C2&quot;, &quot;B5&quot;], &quot;D&quot;: [np.nan,np.nan,np.nan,np.nan,] }, index=[0, 1, 2, 3],) df2 = pd.DataFrame( { &quot;A&quot;: [&quot;A0&quot;, &quot;A1&quot;, &quot;A5&quot;, &quot;A6&quot;], &quot;B&quot;: [&quot;B0&quot;, &quot;B1&quot;, &quot;B5&quot;, &quot;B6&quot;], &quot;C&quot;: [np.nan,np.nan,np.nan,np.nan,], &quot;D&quot;: [&quot;D0&quot;, &quot;D1&quot;, &quot;D5&quot;, &quot;D6&quot;], }, index=[0, 1, 5, 6] ) res_expected = pd.DataFrame( { &quot;A&quot;: [&quot;A0&quot;, &quot;A1&quot;, &quot;A2&quot;, &quot;A3&quot;, &quot;A5&quot;, &quot;A6&quot;], &quot;B&quot;: [&quot;B0&quot;, &quot;B1&quot;, &quot;B2&quot;, &quot;B3&quot;, &quot;B5&quot;, &quot;B6&quot;], &quot;C&quot;: [&quot;C0&quot;, &quot;C1&quot;, &quot;C2&quot;, &quot;B5&quot;,np.nan,np.nan,], &quot;D&quot;: [&quot;D0&quot;, &quot;D1&quot;, np.nan,np.nan,&quot;D5&quot;, &quot;D6&quot;], }, index=[0, 1, 2, 3, 5, 6] ) </code></pre> <p>Does someone have an idea ?</p> <p>Thanks !</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>combine_first()</code></a>, as follows:</p> <pre><code>df_result = df1.combine_first(df2) </code></pre> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>combine_first()</code></a> works as follows:</p> <blockquote> <p>Combine two DataFrame objects by filling null values in one DataFrame with non-null values from other DataFrame. The row and column indexes of the resulting DataFrame will be the union of the two.</p> </blockquote> <p><strong>Result:</strong></p> <pre><code>print(df_result) A B C D 0 A0 B0 C0 D0 1 A1 B1 C1 D1 2 A2 B2 C2 NaN 3 A3 B4 B5 NaN 5 A5 B5 NaN D5 6 A6 B6 NaN D6 </code></pre>
python|pandas|merge|concatenation|outer-join
2
3,830
65,808,296
How to change only xticks fontsize in pandas Dataframe barplot?
<p>I want to create a barplot from pandas. But I want to change fontsize for x and y-axis separately.</p> <p>This code:</p> <blockquote> <pre><code>max_coef.plot.bar(fontsize = 15, x = 'coef_word', y = 'coefficient', color=['yellow']) </code></pre> </blockquote> <p>Creates this plot:</p> <p><a href="https://i.stack.imgur.com/p4gtY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p4gtY.png" alt="enter image description here" /></a></p> <p>I tried to play around with xticks and xticklabels, but apparently this does not work similar to matplotlib syntax. I already checked documentation but I cannot wrap my head around it.</p> <p>I want to change x and y ticks separately. Please help. + Thanks in advance!</p>
<p>See the code below for changing the font sizes:</p> <pre><code>import matplotlib.pyplot as plt SMALL_SIZE = 8 MEDIUM_SIZE = 10 BIG_SIZE = 12 plt.rc('font', size=SMALL_SIZE) # default text sizes plt.rc('axes', titlesize=MEDIUM_SIZE) # axes title fontsize plt.rc('axes', labelsize=MEDIUM_SIZE) # x and y labels fontsize plt.rc('xtick', labelsize=MEDIUM_SIZE) # x tick labels fontsize plt.rc('ytick', labelsize=MEDIUM_SIZE) # y tick labels fontsize plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize plt.rc('figure', titlesize=BIG_SIZE) # figure title fontsize </code></pre> <p>You may find more info here: <a href="https://matplotlib.org/3.3.3/api/_as_gen/matplotlib.pyplot.rc.html" rel="nofollow noreferrer">matplotlib rc</a></p>
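<p>If you only want to change the tick label sizes for the one plot from the question (rather than globally), a minimal sketch using the axes object that the pandas plot call returns; the column names are taken from the question:</p>
<pre><code>ax = max_coef.plot.bar(x='coef_word', y='coefficient', color='yellow')
ax.tick_params(axis='x', labelsize=15)  # x tick labels only
ax.tick_params(axis='y', labelsize=10)  # y tick labels only
</code></pre>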
python|pandas|bar-chart
2
3,831
65,697,424
Pandas: drop out of sequence row
<p>My Pandas df:</p> <pre><code>import pandas as pd import io data = &quot;&quot;&quot;date value &quot;2015-09-01&quot; 71.925000 &quot;2015-09-06&quot; 71.625000 &quot;2015-09-11&quot; 71.333333 &quot;2015-09-12&quot; 64.571429 &quot;2015-09-21&quot; 72.285714 &quot;&quot;&quot; df = pd.read_table(io.StringIO(data), delim_whitespace=True) df.date = pd.to_datetime(df.date) </code></pre> <p>I Given a user input date ( 01-09-2015). I would like to keep only those date where difference between date and input date is multiple of 5.</p> <p>Expected output:</p> <pre><code>input = 01-09-2015 df: date value 0 2015-09-01 71.925000 1 2015-09-06 71.625000 2 2015-09-11 71.333333 3 2015-09-21 72.285714 </code></pre> <p>My Approach so far: I am taking the delta between input_date and date in pandas and saving this delta in separate column. If delta%5 == 0, keep the row else drop. Is this the best that can be done?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> for filter by mask, here convert input values to datetimes and then timedeltas to days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a>:</p> <pre><code>input1 = '01-09-2015' df = df[df.date.sub(pd.to_datetime(input1)).dt.days % 5 == 0] print (df) date value 0 2015-09-01 71.925000 1 2015-09-06 71.625000 2 2015-09-11 71.333333 4 2015-09-21 72.285714 </code></pre>
pandas
1
3,832
63,582,377
How do I search for index values in each row of an array in numpy creating a boolean array
<p>Given an array with size MxN and an array with size Mx1, I want to compute a boolean array with MxN.</p> <pre><code>import numpy as np M = 2 N = 3 a = np.random.rand(M, N) # The values doesn't matter b = np.random.choice(a=N, size=(M, 1), replace=True) # b = # array([[2], # [1]]) # I found this way to compute the boolean array but I wonder if there's a fancier, elegant way index_array = np.array([np.array(range(N)), ]*M) # Create an index array # index_array = # array([[0, 1, 2], # [0, 1, 2]]) # boolean_array = index_array == b # boolean_array = # array([[False, False, True], # [False, True, False]]) # </code></pre> <p>So i wonder if theres's a fancier, pythonic way of doing this</p>
<p>You could simplify by leveraging broadcasting and comparing with a single 1d range directly:</p> <pre><code>M = 2 N = 3 a = np.random.rand(M, N) b = np.random.choice(a=N, size=(M, 1), replace=True) print(b) array([[1], [2]]) b == np.arange(N) array([[False, True, False], [False, False, True]]) </code></pre> <hr /> <p>In general, broadcasting is handy in these cases because it saves us from having to create arrays compatible in shape to perform operations with other arrays. For the generated array, I'd probably go with the following instead:</p> <pre><code>np.broadcast_to(np.arange(N), (M,N)) array([[0, 1, 2], [0, 1, 2]]) </code></pre> <p>Though as mentioned, NumPy makes life easier here so that we don't have to worry about that.</p>
python|numpy|numpy-ndarray|boolean-algebra
0
3,833
63,675,583
How to replace every row that are filled with zero with a certain value from a Pytorch tensor?
<p>I have a Pytorch tensor of the size <code>bsize x 50 x 50</code> where some of the rows are completely filled with zeros:</p> <pre class="lang-py prettyprint-override"><code> [[0, 2, 0, ..., 0, 0, 0], [2, 0, 2, ..., 0, 0, 0], [0, 2, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 2, 0, ..., 0, 0, 0], [2, 0, 2, ..., 0, 0, 0], [0, 2, 0, ..., 0, 0, 0], </code></pre> <p>I want to replace the rows filled with zeros with a negative value <code>-100</code> throughout in the tensor. Expected tensor -</p> <pre class="lang-py prettyprint-override"><code> [[0, 2, 0, ..., 0, 0, 0], [2, 0, 2, ..., 0, 0, 0], [0, 2, 0, ..., 0, 0, 0], ..., [-100, -100, -100, ..., -100,-100, -100], [-100, -100, -100, ..., -100,-100, -100], [-100, -100, -100, ..., -100,-100, -100]], [[0, 2, 0, ..., 0, 0, 0], [2, 0, 2, ..., 0, 0, 0], [0, 2, 0, ..., 0, 0, 0], </code></pre> <p>Whats the best way to do this avoiding a loop over the row shape ?</p>
<p>Assuming <code>x</code> is your tensor with <code>BxRxC</code> (batch, rows, and columns), you can do something like this:</p> <pre class="lang-py prettyprint-override"><code>x[(x == 0).all(dim=-1)] = -100 </code></pre> <p>Basically:</p> <ul> <li><code>x == 0</code> returns a boolean tensor (shape <code>BxRxC</code>) with <code>True</code> where it is equal to zero;</li> <li>then, <code>.all(dim=-1)</code> returns another boolean tensor, now with shape <code>BxR</code> because we chose to do <code>all</code> in the last dimension (<code>-1</code>), with <code>True</code> where <strong>all</strong> the columns are <code>True</code>;</li> <li>finally, we use this boolean tensor to index the original tensor and the <code>-100</code> is assigned to the <code>True</code> positions.</li> </ul>
python|pytorch|vectorization|tensor
3
3,834
63,715,025
Tricky cascade grouping in pandas
<p>I have a bit of an odd problem that I am trying to solve in pandas. Let's say I have a bunch of objects that have different ways to group them. Here is what our dataframe look like:</p> <pre><code>df=pd.DataFrame([ {'obj': 'Ball', 'group1_id': None, 'group2_id': '7' }, {'obj': 'Balloon', 'group1_id': '92', 'group2_id': '7' }, {'obj': 'Person', 'group1_id': '14', 'group2_id': '11'}, {'obj': 'Bottle', 'group1_id': '3', 'group2_id': '7' }, {'obj': 'Thought', 'group1_id': '3', 'group2_id': None}, ]) obj group1_id group2_id Ball None 7 Balloon 92 7 Person 14 11 Bottle 3 7 Thought 3 None </code></pre> <p>I want to group things together based on any of the groups. Here it is annotated:</p> <pre><code>obj group1_id group2_id # annotated Ball None 7 # group2_id = 7 Balloon 92 7 # group1_id = 92 OR group2_id = 7 Person 14 11 # group1_id = 14 OR group2_id = 11 Bottle 3 7 # group1_id = 3 OR group2_id = 7 Thought 3 None # group1_id = 3 </code></pre> <p>When combined, our output should look like this:</p> <pre><code>count objs composite_id 4 [Ball, Balloon, Bottle, Thought] g1=3,92|g2=7 1 [Person] g1=11|g2=14 </code></pre> <p>Notice that the first three objects we can get based on <code>group2_id=7</code> and then the fourth one, <code>Thought</code>, is because it can match with another item via <code>group1_id=3</code> that assigns it the <code>group_id=7</code> id. Note: for this question assume an item will only ever be in one combined group (and there will never be conditions where it could possibly be in two groups).</p> <p>How could I do this in <code>pandas</code>?</p>
<p>This is not odd at all ~ it is a network (graph) problem.</p> <pre><code>import networkx as nx # handle the missing values first: fill from the same row, so that rows are not classed into the wrong group df['key1']=df['group1_id'].fillna(df['group2_id']) df['key2']=df['group2_id'].fillna(df['group1_id']) # build the network G=nx.from_pandas_edgelist(df, 'key1', 'key2') l=list(nx.connected_components(G)) L=[dict.fromkeys(y,x) for x, y in enumerate(l)] d={k: v for d in L for k, v in d.items()} # use the dict above to map members of the same component to one group id for the groupby out=df.groupby(df.key1.map(d)).agg(objs = ('obj',list) , Count = ('obj','count'), g1= ('group1_id', lambda x : set(x[x.notnull()].tolist())), g2= ('group2_id', lambda x : set(x[x.notnull()].tolist()))) # note: I did not convert the composite id into string format; keeping it in separate columns is easier to understand Out[53]: objs Count g1 g2 key1 0 [Ball, Balloon, Bottle, Thought] 4 {92, 3} {7} 1 [Person] 1 {14} {11} </code></pre> <p>PS: If you need more detail about the network steps, check this <a href="https://stackoverflow.com/a/56344293/7964527">link</a>.</p>
python|pandas
2
3,835
63,460,065
Optimizer for an RNN using pytorch
<p><a href="https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html" rel="nofollow noreferrer">The pytorch RNN tutorial</a> uses</p> <pre><code>for p in net.parameters(): p.data.add_(p.grad.data, alpha = -learning_rate) </code></pre> <p>as optimizer. Does anyone know the difference between doing that or doing the classical <code>optimizer.step()</code>, once an optimizer has been defined explicitly? Is there some special consideration one has to take into when training RNNs in regards to the optimizer?</p>
<p>It looks like the example uses a simple gradient descent algorithm to update:</p> <p><a href="https://i.stack.imgur.com/RBAsN.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RBAsN.gif" alt="p_{t+1}=p_t-\alpha \mathrm{grad}_pJ_t" /></a> where J is cost.</p> <p>If the optimizer your using is a simple gradient descent tool, then there is no difference between using <code>optimizer.step()</code> and the code in the example.</p> <p>I know that's not a super exciting answer to your question, because it depends on how the <code>step()</code> function is written. Check out <a href="http://mcneela.github.io/machine_learning/2019/09/03/Writing-Your-Own-Optimizers-In-Pytorch.html" rel="nofollow noreferrer">this page</a> to learn about <code>step()</code> and <a href="https://pytorch.org/docs/stable/optim.html" rel="nofollow noreferrer">this page</a> to learn more about <code>torch.optim</code>.</p>
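<p>As a hedged illustration of that equivalence: the manual loop in the tutorial is plain SGD, so wiring up an explicit optimizer should behave the same way (assuming <code>net</code>, <code>learning_rate</code> and <code>loss</code> are the objects from the tutorial):</p>
<pre><code>import torch.optim as optim

optimizer = optim.SGD(net.parameters(), lr=learning_rate)

# inside the training loop, instead of the manual p.data.add_ update:
optimizer.zero_grad()
loss.backward()
optimizer.step()  # performs p = p - lr * p.grad for every parameter
</code></pre>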
python|pytorch
1
3,836
63,492,730
Pandas not converting certain columns of dataframe to datetimeindex
<p>My dataframe until now, <a href="https://i.stack.imgur.com/TBwoo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TBwoo.png" alt="enter image description here" /></a></p> <p>and I am trying to convert <code>cols</code> which is a list of all columns from <strong>0 to 188</strong> <code>( cols = list(hdata.columns[ range(0,188) ]) )</code> which are in this format <code>yyyy-mm</code> to datetimeIndex. There are other few columns as well which are 'string' Names and can't be converted to dateTime hence,so I tried doing this,</p> <pre><code>hdata[cols].columns = pd.to_datetime(hdata[cols].columns) #convert columns to **datetimeindex** </code></pre> <p>But this is <strong>not</strong> working. Can you please figure out what is wrong here?</p> <p><strong>Edit:</strong> A better way to work on this type of data is to use <strong>Split-Apply-Combine</strong> method.</p> <p><strong>Step 1:</strong> Split the data which you want to perform some specific operation.</p> <pre><code>nonReqdf = hdata.iloc[:,188:].sort_index() reqdf= reqdf.drop(['CountyName','Metro','RegionID','SizeRank'],axis=1) </code></pre> <p><strong>Step 2:</strong> do the operations. In my case it was converting the dataframe columns with year and months to datetimeIndex. And resample it quarterly.</p> <pre><code>reqdf.columns = pd.to_datetime(reqdf.columns) reqdf = reqdf.resample('Q',axis=1).mean() reqdf = reqdf.rename(columns=lambda x: str(x.to_period('Q')).lower()).sort_index() # renaming so that string is yyyy**q**&lt;1/2/3/4&gt; like 2012q1 or 2012q2 likewise </code></pre> <p><strong>Step 3:</strong> Combine the two splitted dataframe.(<code>merge</code> can be used but may depend on what you want)</p> <pre><code>reqdf = pd.concat([reqdf,nonReqdf],axis=1) </code></pre>
<p>In order to modify some of the labels from an Index (be it for rows or columns), you need to use <code>df.rename</code> as in</p> <pre><code>for i in range(188): df.rename({df.columns[i]: pd.to_datetime(df.columns[i])}, axis=1, inplace=True) </code></pre> <p>Or you can avoid looping by building a full sized index to cover all the columns with</p> <pre><code>df.columns = ( pd.to_datetime(cols) # pass the list with strings to get a partial DatetimeIndex .append(df.columns.difference(cols)) # complete the index with the rest of the columns ) </code></pre>
python|pandas|dataframe|datetimeindex
1
3,837
71,836,954
Numpy slicing with index
<pre><code>x = np.array([ [0, 1], [2, 3], [4, 5], [6, 7], [8, 9] ]) end_point = [3, 4] # Slicing end point # Want: Slicing along the row, like this x[end_point-2 : end_point] = np.array([[2, 5], [4, 7], [6, 9]]) </code></pre> <p>Can I do this thing in an elegant way? Of course, the above code induces a type error &quot;TypeError: only integer scalar arrays can be converted to a scalar index&quot;. Thanks</p>
<p>Here's how I went about figuring out a solution:</p> <pre><code>In [21]: x = np.arange(10).reshape(5,2) In [22]: x Out[22]: array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]) </code></pre> <p>Looks like you want several 'diagonals', which we can get with pairs of indexing lists/arrays:</p> <pre><code>In [23]: x[[1,2],[0,1]] Out[23]: array([2, 5]) In [24]: x[[2,3],[0,1]] Out[24]: array([4, 7]) In [25]: x[[3,4],[0,1]] Out[25]: array([6, 9]) </code></pre> <p>Put those together into index:</p> <pre><code>In [26]: x[[[1,2],[2,3],[3,4]],[0,1]] Out[26]: array([[2, 5], [4, 7], [6, 9]]) </code></pre> <p>And generate that 2d indexing array:</p> <pre><code>In [28]: np.arange(1,4)[:,None]+[0,1] Out[28]: array([[1, 2], [2, 3], [3, 4]]) In [29]: idx = np.arange(1,4)[:,None]+[0,1] In [30]: x[idx,np.arange(2)] Out[30]: array([[2, 5], [4, 7], [6, 9]]) </code></pre>
numpy
1
3,838
55,490,404
Convert pandas data frame to categorical for keras
<p>I am trying to preprocess data in python for use in deep learning keras functions.</p> <p>I use <code>categorical crossentropy</code> as loss function in model fit. It requires categorical variable as target.</p> <p>My target data sample:</p> <pre class="lang-py prettyprint-override"><code> y_train = y_train.astype('category') y_train.head() </code></pre> <pre><code> truth 0 0 1 0 2 1 3 0 4 0 </code></pre> <p>When I tried to convert data frame column to categorical:</p> <pre><code> num_classes=2 y_train = keras.utils.to_categorical(y_train, num_classes) </code></pre> <p>It produced an error: <code>IndexError: index 1 is out of bounds for axis 1 with size 1</code>.</p> <p>How do I convert the data properly?</p> <p>By the way, which keras models are better for binary classification (yes, no) if I have sample of 3800 observations with 2300 numeric (float32) features each? The features describe mostly graphical objects.</p>
<p>Unfortunately I didn't manage to reproduce your error. Running:</p> <pre><code>a=pd.DataFrame(np.concatenate([np.zeros(3),np.ones(3)]) ).astype('int').astype('category') from keras.utils import to_categorical to_categorical(a, 2) </code></pre> <p>I get this output:</p> <pre><code>array([[1., 0.], [1., 0.], [1., 0.], [0., 1.], [0., 1.], [0., 1.]], dtype=float32) </code></pre> <p>Maybe it's a versioning issue!</p> <p>The good news is that you don't have to use <code>categorical_crossentropy</code> for a binary classification problem. You can use <code>binary_crossentropy</code> loss and feed your model with your y_train as the target as it is.</p> <p>Regarding your last question about which Keras model is better for binary classification: Keras pre-trained models refer to images. You seem to have tabular data, so you won't be able to use a pre-trained model; you will have to build a custom model of your own.</p>
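<p>A minimal sketch of the <code>binary_crossentropy</code> route mentioned above; the layer sizes are placeholders only, not a tuned recommendation for the 3800 x 2300 dataset, and <code>X_train</code> is assumed to hold the numeric features:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=2300))
model.add(Dense(1, activation='sigmoid'))  # single sigmoid unit for a yes/no target
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# y_train can stay as a 0/1 column, no to_categorical needed
model.fit(X_train, y_train, epochs=10, batch_size=32)
</code></pre>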
python|pandas|machine-learning|keras
1
3,839
56,835,074
AttributeError: 'Series' object has no attribute 'has_z'
<p>I got the following <code>GeoDataFrame</code>, taken from a CSV file, after some slicing and <code>CRS</code> and <code>geometry</code> assignment:</p> <pre><code> ctf_nom geometry id 0 Prunus mahaleb POINT (429125.795043319 4579664.7564311) 2616 1 Betula pendula POINT (425079.292045901 4585098.09043407) 940 2 Betula pendula POINT (425088.115045896 4585093.66943407) 940 3 Abelia triflora POINT (429116.661043325 4579685.93743111) 2002 4 Abies alba POINT (428219.962044021 4587346.66843531) 797 </code></pre> <p>I've converted the <code>geometry</code> from a <code>str</code> through:</p> <pre><code>from shapely import wkt df['geometry'] = df['geometry'].apply(wkt.loads) df_geo = gpd.GeoDataFrame(df, geometry = 'geometry') </code></pre> <p>and assigned a CRS with:</p> <pre><code>df_geo.crs = {'init' :'epsg:25831'} df_geo.crs </code></pre> <p>When I try to save the reduced GeoDataFrame again with the <code>gdf.to_file()</code> function, it returns the following attribute error:</p> <p><code>AttributeError: 'Series' object has no attribute 'has_z'</code></p> <p>How can I solve this?</p>
<p>You need to explicitly set the geometry column in the <code>GeoDataFrame</code>:</p> <pre><code>df_geo.set_geometry(col='geometry', inplace=True) </code></pre> <hr /> <p><sub>Taken from: <a href="https://gis.stackexchange.com/a/342635/6998">https://gis.stackexchange.com/a/342635/6998</a></sub></p>
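<p>Putting it together with the conversion code from the question, a sketch of the whole flow (the output file name is hypothetical):</p>
<pre><code>import geopandas as gpd
from shapely import wkt

df['geometry'] = df['geometry'].apply(wkt.loads)
df_geo = gpd.GeoDataFrame(df)
df_geo.set_geometry(col='geometry', inplace=True)  # explicit geometry column
df_geo.crs = {'init': 'epsg:25831'}
df_geo.to_file('reduced_trees.shp')  # hypothetical output path
</code></pre>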
python-3.x|attributeerror|geopandas|writetofile
0
3,840
47,288,914
Splitting words to rows in DataFrame
<p>I have a DataFrame where one of the columns contains strings. I would like to split the strings by spaces and then transform the DataTable, so that it contains one word per row.</p> <pre><code>dat = pd.DataFrame(data = {'x' : [1,2], 'y' : ['Lorem ipsum dolor sit amet', 'consectetur adipiscing elit']}) </code></pre> <p>I would like to get DataFrame like below:</p> <pre><code> x y 1 Lorem 1 ipsum ... 2 consectetur 2 adipiscing ... </code></pre> <p>What is the best way to achieve this?</p>
<p>Split the string into a <code>list</code>, then use <code>stack</code>:</p> <pre><code>dat.y=dat.y.str.split(' ') dat.set_index('x').y.apply(pd.Series).stack().reset_index().\ drop('level_1',1).rename(columns={0:'y'}) Out[484]: x y 0 1 Lorem 1 1 ipsum 2 1 dolor 3 1 sit 4 1 amet 5 2 consectetur 6 2 adipiscing 7 2 elit </code></pre>
python|python-3.x|pandas
1
3,841
47,422,603
pandas parse dates with two columns and replace a single point to a double point
<p>This is how I want to read my CSV file in:</p> <pre><code>data01 = pd.read_csv('data/file.csv', sep=';', decimal=',', parse_dates=[['date', 'time']]) </code></pre> <p>Time is given as <i>hh.mm.ss</i> and I want it like this: <i>hh:mm:ss</i>. Is there a way to do this inside pd.read_csv?</p>
<p>For this, you can use <code>str.replace</code> on the column that holds the time (replace the placeholder with your actual column name):</p> <pre><code>data01["(insert name of column with time)"] = data01["(insert name of column with time)"].str.replace(".", ":", regex=False) </code></pre>
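<p>If it matters that the fix happens around the same read step, a sketch that reads first and builds the combined datetime afterwards; the column names <code>date</code> and <code>time</code> are taken from the <code>parse_dates</code> call in the question:</p>
<pre><code>import pandas as pd

data01 = pd.read_csv('data/file.csv', sep=';', decimal=',')
data01['time'] = data01['time'].str.replace('.', ':', regex=False)
data01['date_time'] = pd.to_datetime(data01['date'] + ' ' + data01['time'])
</code></pre>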
python|pandas
0
3,842
68,369,649
Problems casting dtypes
<p>I have a df like this:</p> <pre><code>value_1 Value_2 10.0 20.0 </code></pre> <p><code>value_1</code> and <code>value_2</code> are <code>float64</code>.</p> <p>How do I cast it to int and get this:</p> <pre><code>value_1 Value_2 10 20 </code></pre> <p>I tried:</p> <pre><code> df.astype('int32').dtypes </code></pre> <p>It didn't work!</p> <p>Thanks</p>
<p>Let us try</p> <pre><code>df.astype(int) value_1 Value_2 0 10 20 </code></pre>
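<p>Note that <code>astype</code> returns a new DataFrame rather than converting in place, which is likely why the original attempt appeared to do nothing; assign the result back:</p>
<pre><code>df = df.astype(int)  # or df = df.astype('int32')
print(df.dtypes)
</code></pre>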
pandas|jupyter
1
3,843
68,105,079
PyTorch Error while building CNN: "1only batches of spatial targets supported (3D tensors) but got targets of size: : [1, 2, 64, 64]"
<p>I want to build a CNN like the one in this paper: <a href="https://arxiv.org/abs/1603.08511" rel="nofollow noreferrer">https://arxiv.org/abs/1603.08511</a> (<a href="https://richzhang.github.io/colorization/" rel="nofollow noreferrer">https://richzhang.github.io/colorization/</a> ). As data I got images from the LAB - color space. I wrote a data loader for these l and a, b values and give the l values as input to my Neural Network and the a, b values as label. I get an error &quot;1only batches of spatial targets supported (3D tensors) but got targets of size: : [1, 2, 64, 64]&quot; in the criterion loss function. There is a problem with what I am inserting as &quot;label&quot; into the criterion() method. But the dimensions of the label seem right to me: [1, 2, 64, 64] --&gt; [batch_size, in_channels (a,b), width, heigth]. I set the batch_size to 1 to just see if it's working. I tried to just cut of the batch_size dimension using pytorch.squeeze(), but it didn't work. I don't understand why I can't put in a vector of this shape and size to the criterion() function. Any help is appreciated! My code is below:</p> <pre><code>#importing the libraries import numpy as np import pandas as pd from numpy import random # for creating validation set from sklearn.model_selection import train_test_split # PyTorch libraries and modules import torch import torch.nn as nn import torch.nn.functional as F from torchvision import transforms from torch.autograd import Variable from torch.nn import Linear, ReLU, CrossEntropyLoss, Sequential, Conv2d, MaxPool2d, Module, Softmax, BatchNorm2d, Dropout from torch.optim import Adam, SGD from torch.utils.data import Dataset, DataLoader from torchvision import datasets from torch.utils.data.sampler import SubsetRandomSampler from typing import Any, Tuple # set device device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # define local paths L_path = 'l/gray_scale.npy' ab1_path = 'ab/ab/ab1.npy' ab2_path = 'ab/ab/ab2.npy' ab3_path = 'ab/ab/ab3.npy' image_size = 64 class ColorDataset(Dataset): def __init__(self, transformations=None, seed=42) -&gt; None: if transformations is None: self.transformations = transforms.Compose([ transforms.ToPILImage(), transforms.Resize(image_size), transforms.ToTensor() ]) else: self.transformations = transformations self.seed = seed self.L = np.load(L_path) self.L = np.expand_dims(self.L, -1) # self.L = self.L.transpose((0, 3, 1, 2)) self.ab = np.concatenate([ np.load(ab1_path), np.load(ab2_path), np.load(ab3_path) ], axis=0) # self.ab = self.ab.transpose((0, 3, 1, 2)) print(&quot;All inputs loaded&quot;) def __len__(self) -&gt; int: return len(self.L) def __getitem__(self, index: int) -&gt; Tuple[Any, Any]: random.seed(self.seed) L = self.transformations(self.L[index]) random.seed(self.seed) ab = self.transformations(self.ab[index]) return L, ab # initialize dataset dataset = ColorDataset() dataset_size = len(dataset) # set relative test size (for split) test_size = 0.3 indices = list(range(dataset_size)) np.random.shuffle(indices) split = int(np.floor(test_size * dataset_size)) train_index, test_index = indices[split:], indices[:split] train_sampler = SubsetRandomSampler(train_index) test_sampler = SubsetRandomSampler(test_index) # set batch size batch_size = 1 train_loader = DataLoader(dataset, batch_size=batch_size, sampler=train_sampler, num_workers=0) test_loader = DataLoader(dataset, batch_size=batch_size, sampler=test_sampler, num_workers=0) # Network class Net(nn.Module): def __init__(self): super(Net, 
self).__init__() self.conv1_1 = nn.Conv2d(1, 64, 1) self.conv1_2 = nn.Conv2d(64, 64, 1) self.batch_norm_1 = nn.BatchNorm2d(64) self.conv2_1 = nn.Conv2d(64, 128, 1, 2) self.conv2_2 = nn.Conv2d(128, 128, 1) self.batch_norm_2 = nn.BatchNorm2d(128) self.conv3_1 = nn.Conv2d(128, 256, 1, 2) self.conv3_2 = nn.Conv2d(256, 256, 1) self.conv3_3 = nn.Conv2d(256, 256, 1) self.batch_norm_3 = nn.BatchNorm2d(256) self.conv4_1 = nn.Conv2d(256, 512, 1, 2) self.conv4_2 = nn.Conv2d(512, 512, 1) self.conv4_3 = nn.Conv2d(512, 512, 1) self.batch_norm_4 = nn.BatchNorm2d(512) self.conv5_1 = nn.Conv2d(512, 512, 1) self.conv5_2 = nn.Conv2d(512, 512, 1) self.conv5_3 = nn.Conv2d(512, 512, 1) self.batch_norm_5 = nn.BatchNorm2d(512) self.conv6_1 = nn.Conv2d(512, 512, 1) self.conv6_2 = nn.Conv2d(512, 512, 1) self.conv6_3 = nn.Conv2d(512, 512, 1) self.batch_norm_6 = nn.BatchNorm2d(512) self.conv7_1 = nn.Conv2d(512, 256, 1) self.conv7_2 = nn.Conv2d(256, 256, 1) self.conv7_3 = nn.Conv2d(256, 256, 1) self.batch_norm_7 = nn.BatchNorm2d(256) self.conv8_1 = nn.Conv2d(256, 128, 1) self.conv8_2 = nn.Conv2d(128, 128, 1, 1) self.conv8_3 = nn.Conv2d(128, 128, 1) #define forward pass def forward(self, x): # Pass data through conv1_1 x = self.conv1_1(x) # Use the rectified-linear activation function over x x = F.relu(x) x = self.conv1_2(x) x = F.relu(x) #batch normalization x = self.batch_norm_1(x) x = self.conv2_1(x) x = F.relu(x) x = self.conv2_2(x) x = F.relu(x) #batch normalization x = self.batch_norm_2(x) x = self.conv3_1(x) x = F.relu(x) x = self.conv3_2(x) x = F.relu(x) x = self.conv3_3(x) #batch normalization x = self.batch_norm_3(x) x = self.conv4_1(x) x = F.relu(x) x = self.conv4_2(x) x = F.relu(x) x = self.conv4_3(x) #batch normalization x = self.batch_norm_4(x) x = self.conv5_1(x) x = F.relu(x) x = self.conv5_2(x) x = F.relu(x) x = self.conv5_3(x) #batch normalization x = self.batch_norm_5(x) x = self.conv6_1(x) x = F.relu(x) x = self.conv6_2(x) x = F.relu(x) x = self.conv6_3(x) #batch normalization x = self.batch_norm_6(x) x = self.conv7_1(x) x = F.relu(x) x = self.conv7_2(x) x = F.relu(x) x = self.conv7_3(x) #batch normalization x = self.batch_norm_7(x) x = self.conv8_1(x) x = F.relu(x) x = self.conv8_2(x) x = F.relu(x) x = self.conv8_3(x) return x model = Net() optimizer = Adam(model.parameters(), lr=0.07) # defining the loss function criterion = CrossEntropyLoss() # checking if GPU is available if torch.cuda.is_available(): model = model.cuda() criterion = criterion.cuda() #labels und output des netzwerks in criterion def train(epoch): model.train() train_loss = 0 # train the model model.train() # prep model for training for data, label in train_loader: data = data.to('cuda') label = label.to('cuda') print(label.size()) # clear the gradients of all optimized variables optimizer.zero_grad() #forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss label = label.long() #convert label to long since in criterion long is expected loss = criterion(output, label) #l value is data, ab values are labels # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update running training loss train_loss += loss.item() * data.size(0) # calculate average loss over an epoch train_loss = train_loss / len(train_loader.sampler) # printing the loss print('Epoch : ', epoch+1, '\t', 'loss :', train_loss) # defining number of epochs n_epochs = 1 # empty list to store training 
losses (actually not used) train_losses = [] # training the model for epoch in range(n_epochs): train(epoch) </code></pre> <p>and here the Error:</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-25-c26ffccd8f7e&gt; in &lt;module&gt; 250 # training the model 251 for epoch in range(n_epochs): --&gt; 252 train(epoch) &lt;ipython-input-25-c26ffccd8f7e&gt; in train(epoch) 224 label = label.long() #convert label to long since in criterion long is expected 225 print(label.size()) --&gt; 226 loss = criterion(output, label) #l value is data, ab values are labels 227 # backward pass: compute gradient of the loss with respect to model parameters 228 loss.backward() ~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --&gt; 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target) 960 def forward(self, input: Tensor, target: Tensor) -&gt; Tensor: 961 return F.cross_entropy(input, target, weight=self.weight, --&gt; 962 ignore_index=self.ignore_index, reduction=self.reduction) 963 964 ~\anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2466 if size_average is not None or reduce is not None: 2467 reduction = _Reduction.legacy_get_string(size_average, reduce) -&gt; 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2469 2470 ~\anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2265 elif dim == 4: -&gt; 2266 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2267 else: 2268 # dim == 3 or dim &gt; 4 RuntimeError: 1only batches of spatial targets supported (3D tensors) but got targets of size: : [1, 2, 64, 64] </code></pre>
<p>Why are you using <code>CrossEntropyLoss()</code> for this task?<br /> Why does your network output 128-dim &quot;pixels&quot;?<br /> What are the <em>spatial</em> dimensions of your prediction vs the targets?<br /> You are using only 1x1 kernels, meaning your model makes its predictions for each pixel <em>independently</em> of its neighbors. How do you expect it to learn to predict meaningful colorizations?</p> <p>You should go back to the drawing board and rethink your model and your criterion. Right now this error is the least of your worries.</p>
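<p>Purely as a hedged sketch of one simpler alternative (not the quantized-bin classification scheme of the paper): if the last layer were changed to output 2 channels and the task were treated as direct regression of the a and b values, a plain regression loss sidesteps the class-index target that <code>CrossEntropyLoss</code> expects:</p>
<pre><code>criterion = nn.MSELoss()      # regression over the a/b channels

output = model(data)          # would need shape (batch, 2, H, W)
label = label.float()         # keep the targets as floats, not class indices
loss = criterion(output, label)
</code></pre>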
python|deep-learning|neural-network|pytorch|conv-neural-network
0
3,844
68,238,588
Assigning header names to panda datasheets with varying number of columns
<p>I am looking to assign a header row to panda dataframes with a varying number of columns. The underlying data sheet are targets disclosed by companies, the number of which differ per company</p> <p>Eg. this could be:</p> <pre><code>df = pd.DataFrame([['apple','become carbon neutral','100% renewable'],['Microsoft','carbon neutral','']]) 0 apple become carbon neutral 100% renewable 1 Microsoft carbon neutral </code></pre> <p>Or</p> <pre><code>df2 = pd.DataFrame([['nike','set up foundation'],['Amazon','more energy efficient']]) 0 nike set up foundation 1 amazon more energy efficient </code></pre> <p>The header row should hence be:</p> <pre><code>company name, target 1, target 2, target 3, target 4 etc. </code></pre> <p>What is the best way of achieving this?</p>
<p>Assuming default column names <code>0</code>, <code>1</code>, <code>2</code> etc:</p> <pre><code>df.columns = ['company name', *map('target {}'.format, df.columns[1:])] </code></pre> <p>Or based on a <code>range</code> if columns are already named:</p> <pre><code>df.columns = ['company name', *map('target {}'.format, range(1, len(df.columns)))] </code></pre> <hr /> <p>Complete Working Example:</p> <pre><code>import pandas as pd df = pd.DataFrame([['apple', 'become carbon neutral', '100% renewable'], ['Microsoft', 'carbon neutral', '']]) df.columns = ['company name', *map('target {}'.format, range(1, len(df.columns)))] print(df) </code></pre> <p><code>df</code>:</p> <pre><code> company name target 1 target 2 0 apple become carbon neutral 100% renewable 1 Microsoft carbon neutral </code></pre>
python|pandas|dataframe
0
3,845
68,048,692
Pandas: error when checking for a binary flag pattern
<p>I have a dataframe where one of the columns of type <code>int</code> is storing a binary flag pattern:</p> <pre><code>import pandas as pd df = pd.DataFrame({'flag': [1, 2, 4, 5, 7, 3, 9, 11]}) </code></pre> <p>I tried selecting rows with value matching 4 the way it is typically done (with binary and operator):</p> <pre><code>df[df['flag'] &amp; 4] </code></pre> <p>But it failed with:</p> <blockquote> <p>KeyError: &quot;None of [Int64Index([0, 0, 4, 4, 4, 0, 0, 0], dtype='int64')] are in the [columns]&quot;</p> </blockquote> <p>How to actually select rows matching binary pattern?</p>
<p>The bitwise-flag selection works as you’d expect:</p> <pre><code>&gt;&gt;&gt; df['flag'] &amp; 4 0 0 1 0 2 4 3 4 4 4 5 0 6 0 7 0 Name: flag, dtype: int64 </code></pre> <p>However if you pass this to <code>df.loc[]</code>, you’re asking to get the indexes <code>0</code> and <code>4</code> repeatedly, or if you use <code>df[]</code> directly you’re asking for the column that has <code>Int64Index[...]</code> as column header.</p> <p>Instead, you should force the conversion to a boolean indexer:</p> <pre><code>&gt;&gt;&gt; (df['flag'] &amp; 4) != 0 0 False 1 False 2 True 3 True 4 True 5 False 6 False 7 False Name: flag, dtype: bool &gt;&gt;&gt; df[(df['flag'] &amp; 4) != 0] flag 2 4 3 5 4 7 </code></pre>
python|pandas|series|binary-operators
1
3,846
56,885,523
How to pivot a dataframe with pandas to display values with aggregation and without aggregation
<p>I want to pivot my dataframe using pandas, my dataframe look like this</p> <p><a href="https://i.stack.imgur.com/ZD0yg.png" rel="nofollow noreferrer">Dataframe</a></p> <p>I want <code>shop_id</code> with maximum <code>item_cnt_day</code> with maximum sold <code>item_id</code> sorted by <code>date_block_num</code> in descending order. </p> <p>I have tried this</p> <pre><code>pd.pivot_table(sales1,index=['date_block_num', 'shop_id'], values=["item_cnt_day","item_id"], \ aggfunc={"item_id":lambda x: x.value_counts().idxmax(),'item_cnt_day':sum}).\ sort_values(by=['date_block_num','item_cnt_day'], ascending=False).reset_index().head(10) </code></pre> <p><a href="https://i.stack.imgur.com/2Dhwt.png" rel="nofollow noreferrer">Result dataframe</a> (Not allowed to embed images as per stackoverflow)</p> <p>i want only one row per <code>date_block</code> with <code>shop_id</code> having maximum <code>item_cnt_day</code> with <code>item_id</code> sold maximum. </p>
<p>You can do that in two aggregation steps like:</p> <pre><code># first group by all three attributes to get one line per # this three columns grouped=df.groupby(['date_block_no', 'shop_id', 'item_id']) # and just aggregate the item_cnt_day you want to have listed aggregated=grouped.aggregate({'item_cnt_day': 'sum'}) # make the index columns regular columns again and resort # so the highest sales come first (btw. I think you could remove # date_block_no form the sort if you like, but it doesn't hurt) aggregated.reset_index(inplace=True) aggregated.sort_values(['date_block_no', 'item_cnt_day'], ascending=False, inplace=True) # now aggregate the intermediate result again, but this time # only by date_block_no and only keep the first row per # group, which is the one with the highest sales, because we # sorted it this way above aggregated.groupby(['date_block_no']).aggregate('first') </code></pre>
python|pandas
0
3,847
57,102,089
C++: Passing Operator as Parameter leads to Error "expected an identifier"
<p>I compiled Tensorflow for C++ by using bazel 0.20.0 and VS2015</p> <p>I created a simple C++-Project in VS2019 and tried to build it but following problem occurs:</p> <p>The code part in ...\tensorflow\core\platform\default\logging.h ,that is affected:</p> <pre><code>// Helper functions for CHECK_OP macro. // The (int, int) specialization works around the issue that the compiler // will not instantiate the template version of the function on values of // unnamed enum type - see comment below. // The (size_t, int) and (int, size_t) specialization are to handle unsigned // comparison errors while still being thorough with the comparison. #define TF_DEFINE_CHECK_OP_IMPL(name, op) \ template &lt;typename T1, typename T2&gt; \ inline string* name##Impl(const T1&amp; v1, const T2&amp; v2, \ const char* exprtext) { \ if (TF_PREDICT_TRUE(v1 op v2)) \ return NULL; \ else \ return ::tensorflow::internal::MakeCheckOpString(v1, v2, exprtext); \ } \ inline string* name##Impl(int v1, int v2, const char* exprtext) { \ return name##Impl&lt;int, int&gt;(v1, v2, exprtext); \ } \ inline string* name##Impl(const size_t v1, const int v2, \ const char* exprtext) { \ if (TF_PREDICT_FALSE(v2 &lt; 0)) { \ return ::tensorflow::internal::MakeCheckOpString(v1, v2, exprtext); \ } \ const size_t uval = (size_t)((unsigned)v1); \ return name##Impl&lt;size_t, size_t&gt;(uval, v2, exprtext); \ } \ inline string* name##Impl(const int v1, const size_t v2, \ const char* exprtext) { \ if (TF_PREDICT_FALSE(v2 &gt;= std::numeric_limits&lt;int&gt;::max())) { \ return ::tensorflow::internal::MakeCheckOpString(v1, v2, exprtext); \ } \ const size_t uval = (size_t)((unsigned)v2); \ return name##Impl&lt;size_t, size_t&gt;(v1, uval, exprtext); \ } // We use the full name Check_EQ, Check_NE, etc. in case the file including // base/logging.h provides its own #defines for the simpler names EQ, NE, etc. // This happens if, for example, those are used as token names in a // yacc grammar. TF_DEFINE_CHECK_OP_IMPL(Check_EQ, ==) // Compilation error with CHECK_EQ(NULL, x)? TF_DEFINE_CHECK_OP_IMPL(Check_NE, !=) // Use CHECK(x == NULL) instead. TF_DEFINE_CHECK_OP_IMPL(Check_LE, &lt;=) TF_DEFINE_CHECK_OP_IMPL(Check_LT, &lt;) TF_DEFINE_CHECK_OP_IMPL(Check_GE, &gt;=) TF_DEFINE_CHECK_OP_IMPL(Check_GT, &gt;) #undef TF_DEFINE_CHECK_OP_IMPL </code></pre> <p>leads to following errors: "<strong>expected an identifier</strong>" at the lines</p> <pre><code>TF_DEFINE_CHECK_OP_IMPL(Check_EQ, ==) // Compilation error with CHECK_EQ(NULL, x)? TF_DEFINE_CHECK_OP_IMPL(Check_NE, !=) // Use CHECK(x == NULL) instead. TF_DEFINE_CHECK_OP_IMPL(Check_LE, &lt;=) TF_DEFINE_CHECK_OP_IMPL(Check_LT, &lt;) TF_DEFINE_CHECK_OP_IMPL(Check_GE, &gt;=) TF_DEFINE_CHECK_OP_IMPL(Check_GT, &gt;) </code></pre> <p>I don't see the problem. I am a Visual Studio and C++-Noob but those lines should be valid.</p> <p>I tried the solution in: <a href="https://stackoverflow.com/questions/4530588/passing-operator-as-a-parameter">Passing operator as a parameter</a></p> <p>where <code>#define TF_DEFINE_CHECK_OP_IMPL(name, op)</code> is replaced with <code>#define TF_DEFINE_CHECK_OP_IMPL(name, std::function&lt;bool(bool,bool)&gt; op)</code></p> <p>but that didn't work and I don't think that I want it to be a template.</p> <p>Any advice?</p>
<p>Try to <a href="https://www.tensorflow.org/install/source_windows" rel="nofollow noreferrer">update</a> your bazel version: VS2015 may be compatible with bazel 0.20, but VS2019 is not.</p> <p>Check out the <a href="https://www.tensorflow.org/install/source_windows#cpu" rel="nofollow noreferrer">compatibility table</a>.</p>
c++|tensorflow|visual-studio-2015|compilation|parameter-passing
0
3,848
57,094,179
How do I upsample an irregular dataset based on a data column?
<p>I have a pandas dataframe that contains logged data based on depth. The depth is spaced irregularly. I need the dataset to be spaced in regular dx steps. </p> <p>Is there a way of doing this without stuffing it into separate numpy arrays and interpolating them separately?</p> <p>Separate interpolation of all columns: </p> <pre><code>df=pd.DataFrame(np.array([[0. , 2. , 3.5, 5. , 6. , 18.], [100, 20, 150, 80, 110, 125], [1. , 0.5 , 2.6 , 0.01, 3. , 2.]]).T, columns=['depth', 'value1', 'value2']) step=0.05 # this is what the column "depth" should be like afterwards target_depth=np.linspace(df['depth'].min(),df['depth'].max(),int(df['depth'].max()/step)) </code></pre> <p>I am looking for a pandas or other library function that does the interpolation/resampling.</p>
<p>I ended up using <code>interp1d</code> from <code>scipy.interpolate</code>.</p>
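<p>For reference, a sketch of that approach on the example frame from the question; <code>interp1d</code> lives in <code>scipy.interpolate</code>:</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.interpolate import interp1d

step = 0.05
n = int((df['depth'].max() - df['depth'].min()) / step) + 1
target_depth = np.linspace(df['depth'].min(), df['depth'].max(), n)

resampled = pd.DataFrame({'depth': target_depth})
for col in ['value1', 'value2']:
    f = interp1d(df['depth'], df[col])   # linear interpolation by default
    resampled[col] = f(target_depth)
</code></pre>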
python|pandas|resampling
0
3,849
57,075,168
How to generate new Python dataframe series based on dataframe values
<p>I have a data frame as the one generated by the script below - bringing in dataframe "data".</p> <p>Ideally I would like to generate a new dataframe that combines the id and a sequence of 1 : value. </p> <pre class="lang-py prettyprint-override"><code>d = {'id': ['a', 'b','c'], 'value': [1, 2,1]} data = pd.DataFrame(data=d) data </code></pre> <p>This means that the ideal output would be:</p> <pre class="lang-py prettyprint-override"><code>|------|---------| | ID | value | |------|---------| | a | 1 | | b | 1 | | b | 2 | | c | 1 | |------|---------| </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.repeat.html" rel="nofollow noreferrer"><code>Index.repeat</code></a> by column <code>value</code> and reassign values by counter by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a>:</p> <pre><code>#if not default RangeIndex #data = data.reset_index(drop=True) df = data.loc[data.index.repeat(data['value'])] df['value'] = df.groupby(level=0).cumcount() + 1 df = df.reset_index(drop=True) print (df) id value 0 a 1 1 b 1 2 b 2 3 c 1 </code></pre> <p>Alternative solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a>:</p> <pre><code>df = (data.loc[data.index.repeat(data['value'])] .assign(value=lambda x: x.groupby(level=0).cumcount() + 1) .reset_index(drop=True)) </code></pre>
python|pandas|dataframe|series
2
3,850
57,085,178
How to assign value to a pandas dataframe, when subset by complex index and boolean based conditions?
<p>I would like to replace values in a pandas dataframe, with a complex subsetting pattern. </p> <p>With the .loc accessor, I was only able to subset by chaining multiple conditions, because some of the conditions are index based. But it seems I can not assign values after such a chain of subsetting. <strong>UPDATE:</strong> A further problem is caused by the duplicated indicies. I have updated the example accordingly.</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'a': ['foo'] * 10 + ['bar'] * 10, 'b': range(20)}, index=pd.date_range('2019-01-01','2019-01-10').append(pd.date_range('2019-01-01','2019-01-10'))) df.loc[df['a'] == 'foo', 'b'].loc[pd.to_datetime(['2019-01-05','2019-01-09'])] = np.nan df </code></pre> <p>Result:</p> <pre><code> a b 2019-01-01 foo 0 2019-01-02 foo 1 2019-01-03 foo 2 2019-01-04 foo 3 2019-01-05 foo 4 2019-01-06 foo 5 2019-01-07 foo 6 2019-01-08 foo 7 2019-01-09 foo 8 2019-01-10 foo 9 2019-01-01 bar 10 2019-01-02 bar 11 2019-01-03 bar 12 2019-01-04 bar 13 2019-01-05 bar 14 2019-01-06 bar 15 2019-01-07 bar 16 2019-01-08 bar 17 2019-01-09 bar 18 2019-01-10 bar 19 </code></pre> <p>Expected:</p> <pre><code> a b 2019-01-01 foo 0 2019-01-02 foo 1 2019-01-03 foo 2 2019-01-04 foo 3 2019-01-05 foo NaN 2019-01-06 foo 5 2019-01-07 foo 6 2019-01-08 foo 7 2019-01-09 foo NaN 2019-01-10 foo 9 2019-01-01 bar 10 2019-01-02 bar 11 2019-01-03 bar 12 2019-01-04 bar 13 2019-01-05 bar 14 2019-01-06 bar 15 2019-01-07 bar 16 2019-01-08 bar 17 2019-01-09 bar 18 2019-01-10 bar 19 </code></pre> <p>I have tried alternative approaches like:</p> <pre><code>df.loc[df['a'] == 'foo' and df.index.isin(['2019-01-05','2019-01-09']), 'b'] </code></pre> <p>which drops:</p> <pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>Not even this works, as the isin returns an array without the date based indexing:</p> <pre><code>df['a'] == 'foo' and pd.Series(df.index.isin(['2019-01-05','2019-01-09'])) </code></pre>
<p>You can do this with a single <code>.loc</code> call; chained <code>loc</code> assignment is not safe:</p> <pre><code>df.loc[df.index.isin(['2019-01-05','2019-01-09'])&amp;df.a.eq('foo'),'b']=np.nan </code></pre>
python|pandas|dataframe|subset
4
3,851
45,929,439
How can I insert a column into a dataframe if the column values come from a different file?
<p>Currently I am reading in from a file and it is generating this file (<code>output.txt</code>): </p> <pre><code>Atom nVa avgppm stddev delta 1.H1' 2 5.73649 0.00104651803616 1.0952e-06 1.H2' 1 4.85438 1.H8 1 8.05367 10.H1' 3 5.33823 0.136655138213 0.0186746268 10.H2' 1 4.20449 10.H5 3 5.27571333333 0.231624986634 0.0536501344333 10.H6 5 7.49485 0.0285124165935 0.0008129579 </code></pre> <p>This is the code that reads generates this file (I am reading in from a text file to generate these values)</p> <pre><code>df = pd.read_csv(expAtoms, sep = ' ', header = None) df.columns = ["Atom","ppm"] gb = (df.groupby("Atom", as_index=False).agg({"ppm":["count","mean","std","var"]}).rename(columns={"count":"nVa", "mean":"avgppm","std":"stddev","var":"delta"})) gb.head() gb.columns = gb.columns.droplevel() gb = gb.rename(columns={"":"Atom"}) gb.to_csv("output.txt", sep =" ", index=False) </code></pre> <p>In between my <code>nVa</code> column and my <code>avgppm</code> column, I want to insert another column called <code>predppm</code>. I want to get the values from a file called <code>file.txt</code> which looks like this:</p> <pre><code>5.H6 7.72158 0.3 6.H6 7.70272 0.3 7.H8 8.16859 0.3 1.H1' 7.65014 0.3 9.H8 8.1053 0.3 10.H6 7.5231 0.3 </code></pre> <p>How can I check if the values in the first column of <code>file.txt</code> = the values of the first column in <code>output.txt</code>and if it does, insert the value from the second column of <code>file.txt</code> into a column in my output file between the nVa column and the avgppm column?</p> <p>For example, <code>1.H1'</code> is in output.txt and file.txt, so i would like to create a column called <code>predppm</code> in my output.txt file and have the value <code>7.65014</code> (which comes from second column of file.txt) inserted for the <code>1.H1'</code> atom.</p> <p>I think I understand how to add columns but only for functions that I can use with groupby, but I don't know how I can insert an arbitrary column into my output.</p>
<p>The easiest way is to make an <code>index</code> on the <code>pandas.DataFrame</code>. Pandas has nice logic for matching up indexes.</p> <pre><code>from io import StringIO import pandas as pd # if python2, do: # data = u"""\ data = """\ Atom nVa avgppm stddev delta 1.H1' 2 5.73649 0.00104651803616 1.0952e-06 1.H2' 1 4.85438 1.H8 1 8.05367 10.H1' 3 5.33823 0.136655138213 0.0186746268 10.H2' 1 4.20449 10.H5 3 5.27571333333 0.231624986634 0.0536501344333 10.H6 5 7.49485 0.0285124165935 0.0008129579 """ # if python2, do: # other_data = u"""\ other_data = """\ 5.H6 7.72158 0.3 6.H6 7.70272 0.3 7.H8 8.16859 0.3 1.H1' 7.65014 0.3 9.H8 8.1053 0.3 10.H6 7.5231 0.3 """ # setup these strings so they can be read by pd.read_csv # (not necessary if these are actual files on disk) data_file = StringIO(data) other_data_file = StringIO(other_data) # don't say header=None because the first row has the column names df = pd.read_csv(data_file, sep=' ') # set the index to 'Atom' df = df.set_index('Atom') # header=None because the other_data doesn't have header info other_df = pd.read_csv(other_data_file, sep=' ', header=None) # set the column names since they're not specified in other_data other_df.columns = ['Atom', 'predppm', 'some_other_field'] # set the index to 'Atom' other_df = other_df.set_index('Atom') # this will assign other_df['predppm'] to the correct rows, # because pandas uses the index when assigning new columns df['predppm'] = other_df['predppm'] print(df) # nVa avgppm stddev delta predppm # Atom # 1.H1' 2 5.736490 0.001047 0.000001 7.65014 # 1.H2' 1 4.854380 NaN NaN NaN # 1.H8 1 8.053670 NaN NaN NaN # 10.H1' 3 5.338230 0.136655 0.018675 NaN # 10.H2' 1 4.204490 NaN NaN NaN # 10.H5 3 5.275713 0.231625 0.053650 NaN # 10.H6 5 7.494850 0.028512 0.000813 7.52310 # if you want to return 'Atom' to being a column: df = df.reset_index() </code></pre>
python|pandas
1
3,852
50,909,879
which performance will gain if more memory available?
<p>I got this log:</p> <pre><code>2018-06-18 20:33:24.218811: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_1_bfc) ran out of memory trying to allocate 2.27GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available. </code></pre> <p>and my program can still run. So I wonder which aspect of performance would improve if more memory were available: training speed, accuracy, or something else?</p> <p>Thank you!</p>
<p>If your GPU lacks memory, but the program still runs, then it means that some optimizations won't take place, or some operations will be run on your CPU instead of the GPU, which will decrease the computation speed of your program.</p> <p>So, if more memory is available, you'll improve your training and testing speed. The accuracy should not change however. </p>
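<p>A quick way to see this effect yourself (a sketch, assuming the TF 1.x <code>Session</code> API that this log message comes from) is to log device placement, which shows any ops that fall back to the CPU, and optionally let TensorFlow grab GPU memory incrementally:</p> <pre><code>import tensorflow as tf

config = tf.ConfigProto(log_device_placement=True)  # print which device each op runs on
config.gpu_options.allow_growth = True              # allocate GPU memory only as needed

with tf.Session(config=config) as sess:
    sess.run(train_op)  # 'train_op' stands in for your own training op
</code></pre>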
tensorflow
1
3,853
50,959,561
Convert `String Feature` DataFrame into Float in Azure ML Using Python Script
<p>I am trying to understand how to convert azure ml <code>String Feature</code> data type into float using python script. my data set is contain "HH:MM" data time format. It recognized as <code>String Feature</code> like the following img:</p> <p><a href="https://i.stack.imgur.com/gy4A7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gy4A7.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/aB4P6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aB4P6.jpg" alt="enter image description here"></a></p> <p>I want to convert it into float type which will divide the timestamp by 84600 ( 24 hour) so <code>17:30</code> will be converted into <code>0,729166666666667</code>, so I write python script to convert that. This is my script:</p> <pre><code>import pandas as pd import numpy as np def timeToFloat(x): frt = [3600,60] data = str(x) result = float(sum([a*b for a,b in zip(frt, map(int,data.split(':')))]))/86400 return result if isNotZero(x) else 0.0 def isNotZero(x): return (x is "0") def azureml_main(dataframe1 = None): df = pd.DataFrame(dataframe1) df["Departure Time"] = pd.to_numeric(df["Departure Time"]).apply(timeToFloat) print(df["Departure Time"]) return df, </code></pre> <p>When I run the script it was failed. Then I try to check whether it is <code>str</code> or not, but it returns <code>None</code>.</p> <p>can we treat <code>String Feature</code> as <code>String</code>? or how should I covert this data correctly?</p>
<p>The <code>to_numeric</code> conversion seems to be the problem, as there's no default parsing from an &quot;HH:MM&quot; string to a number.</p> <p>Does it work if you drop <code>pd.to_numeric</code> and just use <code>.apply(timeToFloat)</code> on the column directly?</p> <p>Roope - Microsoft Azure ML Team</p>
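<p>A minimal sketch of that suggestion (it reuses the <code>timeToFloat</code> helper from your script unchanged and assumes the column really holds "HH:MM" strings):</p> <pre><code>import pandas as pd

def azureml_main(dataframe1 = None):
    df = pd.DataFrame(dataframe1)
    # apply the conversion straight to the string column; no pd.to_numeric step
    df["Departure Time"] = df["Departure Time"].apply(timeToFloat)
    return df,
</code></pre>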
python|pandas|csv|azure-machine-learning-studio
0
3,854
66,377,478
Can not get entire table from html - Pandas
<p>I was trying to get data using pandas from a wikipedia article about the largest bankrupts <a href="https://en.wikipedia.org/wiki/List_of_largest_U.S._bank_failures" rel="nofollow noreferrer">DATA</a> but for some reason the table was incomplete. I used this:</p> <pre><code>df = pd.read_html('https://en.wikipedia.org/wiki/List_of_largest_U.S._bank_failures') type(df) len(df) df[1]` </code></pre> <p>PS: I am using hidrogen to run jupyter at Atom. But that was the output: <a href="https://i.stack.imgur.com/SPox5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SPox5.png" alt="Output from Pandas" /></a> Please explain what happened. I am new to Data Science and Pandas</p>
<p>You are getting the whole table. By default, only some number of rows is displayed, hence the <code>...</code> signs in the middle. If you want to display all rows you can change pandas display default as follows:</p> <pre><code># show at most 100 rows pd.options.display.max_rows = 100 </code></pre> <p>Note that this is a display setting only, the DataFrame contains all table data already.</p>
python|pandas|data-science
0
3,855
57,322,649
Converting from list format to matrix showing equality
<p>I'm trying to convert from a pandas dataframe that looks like: </p> <pre><code>Item | Country A | UK B | FR C | DE D | FR </code></pre> <p>And I want to create a matrix that compares each item to each other item based on country, so:</p> <pre class="lang-none prettyprint-override"><code> A B C D A 1 0 0 0 B 0 1 0 1 C 0 0 1 0 D 0 1 0 1 </code></pre> <p>I feel like this should be possible using some sort of pandas pivot but I can't find the right way</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>crosstab</code></a>:</p> <pre><code>df = df.merge(df, on='Country') df = pd.crosstab(df['Item_x'], df['Item_y']) print (df) Item_y A B C D Item_x A 1 0 0 0 B 0 1 0 1 C 0 0 1 0 D 0 1 0 1 </code></pre>
python|pandas
1
3,856
51,218,923
Does tf.train.CheckpointSaverHook in tf.train.MonitoredTrainingSession block training while checkpointing or it is done asynchronously?
<p>I am pretty new in TensorFlow. I am currently curious to track the IO time and bandwidth (preferably percentage of IO time taken in the training process for checkpointing) for checkpointing which is performed by the internal checkpointing mechanism provided by high level <code>tf.train.MonitoredTrainingSession</code> that can be implemented through adding a <code>tf.train.CheckpointSaverHook</code> while initializing the <code>tf.train.MonitoredTrainingSession</code>.</p> <p>I am thinking about using a <code>tf.train.CheckpointSaverListener</code> (i.e. using <code>before_save</code> and <code>after_save</code> methods) to log the time and track IO. But I have a question, will this logging technique I am thinking about give me a proper percentage calculation (i.e. <code>Time taken for checkpointing IO / Time taken for Training * 100%</code>) ?</p> <p>I am suspecting that, this checkpointing is done asynchronously through a thread different from training. I have been looking into the TensorFlow code to find this out, but I thought asking this question here can accelerate my exploration.</p> <p>I am open to any suggestion on using any other alternative technique (e.g. using TensorBoard, IO profiling tools etc.)</p>
<p>I believe it will.</p> <p>The checkpointing isn't done asynchronously. You'd want the checkpoint to contain a consistent snapshot of the variables/parameters and thus do not want to checkpoint asynchronously with other operations that may update the parameter values.</p> <p>The <code>CheckpointSaverHook</code> explicitly uses the <code>Session</code> to execute the operation that saves the checkpoint (<a href="https://github.com/tensorflow/tensorflow/blob/5c7a6fba35436fcf02826c5df953263dfe1f2340/tensorflow/python/training/basic_session_run_hooks.py#L464" rel="nofollow noreferrer">source code</a>) and waits for it to complete (It's basically invoking <a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver#save" rel="nofollow noreferrer"><code>tf.train.Saver.save</code></a>).</p> <p>So, the <code>CheckpointSaverListener</code> you thought of should work out fine - modulo the time taken by any other <code>CheckpointSaverListener</code>s in your program.</p> <p>Hope that helps.</p>
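<p>A minimal sketch of the listener-based timing (assuming the TF 1.x <code>MonitoredTrainingSession</code> API; turning the accumulated seconds into a percentage of total training time is left to your own bookkeeping):</p> <pre><code>import time
import tensorflow as tf

class CheckpointTimingListener(tf.train.CheckpointSaverListener):
    def __init__(self):
        self.total_save_seconds = 0.0

    def before_save(self, session, global_step_value):
        self._start = time.time()

    def after_save(self, session, global_step_value):
        # the save is synchronous, so this interval is checkpoint IO time
        self.total_save_seconds += time.time() - self._start

listener = CheckpointTimingListener()
saver_hook = tf.train.CheckpointSaverHook(checkpoint_dir='/tmp/ckpt',
                                          save_steps=1000,
                                          listeners=[listener])
# pass the hook to tf.train.MonitoredTrainingSession(..., hooks=[saver_hook])
# and compare listener.total_save_seconds with the total wall-clock training time
</code></pre>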
tensorflow|io|profiling|checkpointing
1
3,857
51,536,090
Pandas DataFrame rolling count
<p>I have the following pandas dataframe (just an example):</p> <pre><code>import pandas as pd df = pd.DataFrame(pd.Series(['a','a','a','b','b','c','c','c','c','b','c','a']), columns = ['Data']) </code></pre> <p><br></p> <pre><code> Data 0 a 1 a 2 a 3 b 4 b 5 c 6 c 7 c 8 c 9 b 10 c 11 a </code></pre> <p>The goal is to get another column, <em>Stats</em>, that count the element of <em>Data</em> column as following:</p> <pre><code> Data Stats 0 a 1 a 2 a a3 3 b 4 b b2 5 c 6 c 7 c 8 c c4 9 b b1 10 c c1 11 a a1 </code></pre> <p>Where, for example, <em>a3</em> means "three consecutive <em>a</em> elements", <em>c4</em> means "four consecutive <em>c</em> elements" and so on...</p> <p>Thank you in advance for your help</p>
<p>Here's one way using <code>groupby</code>:</p> <pre><code>counts = df.groupby((df['Data'] != df['Data'].shift()).cumsum()).cumcount() + 1 df['Stats'] = np.where(df['Data'] != df['Data'].shift(-1), df['Data'] + counts.astype(str), '') print(df) Data Stats 0 a 1 a 2 a a3 3 b 4 b b2 5 c 6 c 7 c 8 c c4 9 b b1 10 c c1 11 a a1 </code></pre>
python|pandas|dataframe|counting
2
3,858
70,879,159
Get datetime format from string python
<p>In Python there are multiple DateTime parsers which can parse a date string automatically without providing the datetime format. My problem is that I don't need to cast the datetime, I only need the datetime format.</p> <p>Example: From &quot;2021-01-01&quot;, I want something like &quot;%Y-%m-%d&quot; or &quot;yyyy-MM-dd&quot;.</p> <p>My only idea was to try casting with different formats and get the successful one, but I don't want to list every possible format.</p> <p>I'm working with pandas, so I can use methods that work either with series or the string DateTime parser.</p> <p>Any ideas?</p>
<p>In <code>pandas</code>, this is achieved by <code>pandas._libs.tslibs.parsing.guess_datetime_format</code></p> <pre class="lang-py prettyprint-override"><code>from pandas._libs.tslibs.parsing import guess_datetime_format guess_datetime_format('2021-01-01') # '%Y-%m-%d' </code></pre> <p>As there will always be an ambiguity on the day/month, you can specify the dayfirst case:</p> <pre class="lang-py prettyprint-override"><code>guess_datetime_format('2021-01-01', dayfirst=True) # '%Y-%d-%m' </code></pre>
python|pandas|datetime|format
4
3,859
71,048,186
How to select one and remove correlated features from a long format correlation dataframe directly?
<p>I have a long format pandas dataframe containing feature correlation pairs (with duplicate pairs). I want to select one out of every correlated pair from this long table (100s of feature). Is their a pythonic way to do this without transforming this table into a matrix? Ideally in this example, we need to keep only feature <code>a</code>, since it is correlated to both <code>b</code> and <code>c</code>. In this example, I'm considering the threshold 0.95.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>feature1</th> <th>feature2</th> <th>corr</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>b</td> <td>0.96</td> </tr> <tr> <td>b</td> <td>a</td> <td>0.96</td> </tr> <tr> <td>a</td> <td>c</td> <td>0.95</td> </tr> <tr> <td>c</td> <td>a</td> <td>0.95</td> </tr> <tr> <td>b</td> <td>c</td> <td>0.94</td> </tr> <tr> <td>c</td> <td>b</td> <td>0.94</td> </tr> </tbody> </table> </div> <p>Reproduce this dataframe using</p> <pre><code>import pandas as pd df = pd.DataFrame({'feature1': ['a', 'b', 'a', 'c', 'b', 'c'], 'feature2': ['a', 'b', 'a', 'c', 'b', 'c'], 'corr': [0.96, 0.96, 0.95, 0.95, 0.94, 0.94]}) </code></pre>
<p>Solved this by lexicographical sorting the rows and then removing everything from <code>feature2</code> column.</p> <pre><code>features_list = ['a', 'b', 'c'] def custom_sort(x): min_val = min(x[&quot;feature1&quot;], x[&quot;feature2&quot;]) max_val = max(x[&quot;feature1&quot;], x[&quot;feature2&quot;]) x[&quot;feature1&quot;] = min_val x[&quot;feature2&quot;] = max_val return x drop_features_list = list(df.apply(custom_sort, axis=1)[&quot;feature2&quot;].drop_duplicates()) drop_features_list selected_features = list(set(features_list) - set(drop_features_list)) selected_features &gt; ['a'] </code></pre>
python|pandas
0
3,860
70,809,444
How to filter a dataframe and each row, based on the presence of strings (from another list) in different columns and add a new column with annotation
<p>I have a dataframe (df1) where I would like to search each row for items from listA. If the dataframe has a row that contains 'positive' and one or more of the items from listA, I would like to generate another dataframe (df2) by adding a column called result, listing the listA item + present. Items in list A, may exist as a stand alone item in each row of df1 or they may exist as part of a larger string. I've tried using pandas.DataFrame.loc but I am only able to search through one column at a time which isn't ideal.</p> <pre><code> df1 = pd.DataFrame({'column no': ['1', '2', '3', '4'], 'name': ['fred', 'sammy', 'tom', 'sam'], 'test': ['positive', 'positive', 'negative', 'negative'], 'date': [&quot;15-'05&quot;, &quot;13-'02&quot;, &quot;12-'01&quot;, &quot;29-'08&quot;], 'food':['lemon-2.v4*?-10%;ham-12?-0%;orange?-58%', 'cake', 'cheese', 'eggs']}) listA = [&quot;15-'05&quot;,'ham','tom','cake'] </code></pre> <p>Output:</p> <pre><code> df2 = pd.DataFrame({'column no': ['1', '2', '3', '4'], 'name': ['fred', 'sammy', 'tom', 'sam'], 'test': ['positive', 'positive', 'negative', 'negative'], 'date': [&quot;15-'05&quot;, &quot;13-'02&quot;, &quot;12-'01&quot;, &quot;29-'08&quot;], 'food':['lemon-2.v4*?-10%;ham-12?-0%;orange?-58%', 'cake', 'cheese', 'eggs'], 'result': [&quot;15-'05, ham, present&quot;, &quot;cake, present&quot;, 'tom, present', 'not found']}) </code></pre>
<p>Updated:</p> <p>I have created a function first which is applied to every row ('axis=1') and the results are added to the result column.</p> <pre><code>def check_rows(row): same_values = ', '.join([term for term in listA for substring in row.values if term in substring]) if same_values: return same_values+&quot;, present&quot; else: return 'not found' df1['result'] = df1.apply(lambda x: check_rows(x), axis=1) </code></pre> <p>Output:</p> <pre><code> column no name test date food result 0 1 fred positive 15-'05 lemon-2.v4*?-10%;ham-12?-0%;orange?-58% 15-'05, ham, present 1 2 sammy positive 13-'02 cake cake, present 2 3 tom negative 12-'01 cheese tom, present 3 4 sam negative 29-'08 eggs not found </code></pre>
python|pandas|dataframe
0
3,861
70,977,136
Is there a way to add a new column to a pandas multiindex that only aligns with one level?
<p>I'm looking for a clean way to add a column to a multiindex dataframe, where the value is only repeated once per level=0.</p> <p>For example,</p> <p><strong>I want to add a column to this:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Index level=0</th> <th>Index level=1</th> <th>Value (x)</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>300</td> </tr> <tr> <td></td> <td>2</td> <td>850</td> </tr> <tr> <td></td> <td>3</td> <td>2000</td> </tr> <tr> <td>B</td> <td>1</td> <td>100</td> </tr> <tr> <td></td> <td>2</td> <td>70</td> </tr> <tr> <td></td> <td>3</td> <td>400</td> </tr> </tbody> </table> </div> <p><strong>In order to get to this:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Index level=0</th> <th>Index level=1</th> <th>Value (x)</th> <th>Value (y)</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>300</td> <td>Yellow</td> </tr> <tr> <td></td> <td>2</td> <td>850</td> <td></td> </tr> <tr> <td></td> <td>3</td> <td>2000</td> <td></td> </tr> <tr> <td>B</td> <td>1</td> <td>100</td> <td>Red</td> </tr> <tr> <td></td> <td>2</td> <td>70</td> <td></td> </tr> <tr> <td></td> <td>3</td> <td>400</td> <td></td> </tr> </tbody> </table> </div> <p><strong>I do <em>NOT</em> want this:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Index level=0</th> <th>Index level=1</th> <th>Value (x)</th> <th>Value (y)</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>300</td> <td>Yellow</td> </tr> <tr> <td></td> <td>2</td> <td>850</td> <td>Yellow</td> </tr> <tr> <td></td> <td>3</td> <td>2000</td> <td>Yellow</td> </tr> <tr> <td>B</td> <td>1</td> <td>100</td> <td>Red</td> </tr> <tr> <td></td> <td>2</td> <td>70</td> <td>Red</td> </tr> <tr> <td></td> <td>3</td> <td>400</td> <td>Red</td> </tr> </tbody> </table> </div> <p>I'm not sure how best to create a table here that shows what I'm hoping for, but the important part to me is that y corresponds to all rows of index level=0, but is not repeated for every increment of index level=1. I'm sure that I could the additional rows in the y column with null values but I thought there might be a more elegant way.</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.IndexSlice.html" rel="nofollow noreferrer"><code>pd.IndexSlice</code></a>:</p> <pre><code>idx = pd.IndexSlice df.loc[idx[:, 1], 'Color'] = ['Yellow', 'Red'] print(df) # Output Value Color A 1 300 Yellow 2 850 NaN 3 2000 NaN B 1 100 Red 2 70 NaN 3 400 NaN </code></pre> <p>Or only with <code>slice</code>:</p> <pre><code>df.loc[(slice(None), 1), 'Color'] = ['Yellow', 'Red'] print(df) # Output Value Color A 1 300 Yellow 2 850 NaN 3 2000 NaN B 1 100 Red 2 70 NaN 3 400 NaN </code></pre>
python|pandas|dataframe|multi-index
3
3,862
51,862,276
Creating a numeric variable issue
<p>I'm trying to create a new variable as the mean of another numeric var present in my database <code>(mark1 type = float)</code>. Unfortunately, the result is a new colunm with all <code>NaN</code> values. still can't understand the reanson why. The code i made is the following:</p> <pre><code>df = pd.read_csv("students2.csv") df.loc[:, 'mean_m1'] = pd.Series(np.mean(df['mark1']).mean(), index= df) </code></pre> <p>this the first few rows after the code:</p> <pre><code>df.head() ID gender subject mark1 mark2 mark3 fres mean_m1 0 1 mm 1 17.0 20.0 15.0 neg NaN 1 2 f 2 24.0 330.0 23.0 pos NaN 2 3 FEMale 1 17.0 16.0 24.0 0 NaN 3 4 male 3 27.0 23.0 21.0 1 NaN 4 5 m 2 30.0 22.0 24.0 positive NaN </code></pre> <p><code>None</code> error messages are printed. thx so much!</p>
<p>You need <code>GroupBy</code> + <code>transform</code> with <code>'mean'</code>.</p> <p>For the data you have provided, this is trivially equal to <code>mark1</code>. You should probably map your genders to categories, e.g. <code>M</code> or <code>F</code>, as a preliminary step.</p> <pre><code>df['mean_m1'] = df.groupby('gender')['mark1'].transform('mean') print(df) ID gender subject mark1 mark2 mark3 fres mean_m1 0 1 mm 1 17.000 20.000 15.000 neg 17.000 1 2 f 2 24.000 330.000 23.000 pos 24.000 2 3 FEMale 1 17.000 16.000 24.000 0 17.000 3 4 male 3 27.000 23.000 21.000 1 27.000 4 5 m 2 30.000 22.000 24.000 positive 30.000 </code></pre>
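<p>A possible sketch of that preliminary mapping step (the category labels below are assumptions inferred from the sample rows shown):</p> <pre><code>gender_map = {'mm': 'M', 'm': 'M', 'male': 'M', 'f': 'F', 'female': 'F'}
# lower-case first so 'FEMale' and 'male' end up on the same key
df['gender'] = df['gender'].str.lower().map(gender_map)
df['mean_m1'] = df.groupby('gender')['mark1'].transform('mean')
</code></pre>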
python|python-3.x|pandas|pandas-groupby
0
3,863
35,895,194
Create a list of pandas dataframes with names in pattern
<p>I want to create a list of pandas dataframes with names in some pattern, like list = [df_1, df_2, df_3....] (there are about 15 of them) I know I can define them one by one and append them into a list. Is there any efficient way to do it?</p>
<p>I believe this should work. Naming variables dynamically in Python is not great practice, however, so I would recommend against it:</p> <pre><code>import pandas as pd

dct = {}
for i in range(0, 16):
    dct['df_%s' % i] = pd.DataFrame()

# writing into locals() like this is only reliable at module level, not inside a function
for k, v in dct.items():
    locals()[k] = v
</code></pre>
list|pandas|dataframe
0
3,864
36,209,830
pandas, slice multi-index df with multiple conditions
<p>This question is a continuation of <a href="https://stackoverflow.com/questions/36204249/pandas-re-indexing-with-missing-dates">pandas re-indexing with missing dates</a></p> <p>I want to compute the sum of the values for the most recent 3 months (2015-12, 2015-11, 2015-10). If a stock doesn't have sufficient data i.e. has none,1 or 2 of the 3 months then I want that the value of that sum to be NaN.</p> <p>I can slice and perform a group by and sum but this doesn't give me what I want since it may have excluded stocks that didn't have any data in this three month period and then does not account for stocks that have 1 or 2 months. </p> <p>I imagine I need a multi loc statement but I've tinkered around and have not been able to get the results I want. </p> <pre><code>df2.loc[idx[:,datetime.date(2015,10,1):datetime.date(2015,12,1)],:].groupby(level=0).sum() </code></pre>
<p>try this:</p> <pre><code>In [142]: df Out[142]: value date stock 0 4 2015-01-01 amzn 1 2 2015-02-01 amzn 2 5 2015-03-01 amzn 3 6 2015-04-01 amzn 4 7 2015-05-01 amzn 5 8 2015-06-01 amzn 6 6 2015-07-01 amzn 7 5 2015-08-01 amzn 8 4 2015-09-01 amzn 9 1 2015-10-01 amzn 10 2 2015-11-01 amzn 11 4 2015-12-01 amzn 12 7 2015-12-02 amzn In [143]: df[(df['date'] &gt;= pd.to_datetime('2015-10-01'))].groupby(df['date'].dt.month).sum() Out[143]: value date 10 1 11 2 12 11 </code></pre> <p>Note: I've intentionally added one row to your DF in order to have at least one month with more than one row</p> <pre><code>In [141]: df.loc[12] = [7, pd.to_datetime('2015-12-02'), 'amzn'] </code></pre>
python|pandas|indexing|slice|object-slicing
0
3,865
37,265,159
Lasagne/Theano doesn't consume multi cores while check_blas.py does
<p>I'm running a Logistic Regression classifier on Lasagne/Theano with multiple <strong>cpu</strong> cores.</p> <p>This is my <em>~/.theanorc</em> file:</p> <pre><code>[global] OMP_NUM_THREADS=20 </code></pre> <p>theano/misc/<strong>check_blas.py</strong> consumes all 20 cores but my script doesn't. when I run:</p> <pre><code>python -c 'import theano; print(theano.config)' </code></pre> <p>I see that the value of openmp is False:</p> <blockquote> <p>openmp () Doc: Allow (or not) parallel computation on the CPU with OpenMP. This is the default value used when creating an Op that supports OpenMP parallelization. It is preferable to define it via the Theano configuration file ~/.theanorc or with the environment variable THEANO_FLAGS. Parallelization is only done for some operations that implement it, and even for operations that implement parallelism, each operation is free to respect this flag or not. You can control the number of threads used with the environment variable OMP_NUM_THREADS. If it is set to 1, we disable openmp in Theano by default. <strong>Value: False</strong></p> </blockquote> <p>Does anybody know How I should enable the multi-core feature for my script?</p> <p>blas, atlas, openmp, etc. are installed on my system and as I said work perfectly with check_blas.py.</p>
<p>I found the reason. Besides <em>OMP_NUM_THREADS=20</em>, <strong>openmp=True</strong> should also be set in the ~/.theanorc file and now it consumes all the 20 cores. My <strong>~/.theanorc</strong> file now looks like:</p> <pre><code>[global] OMP_NUM_THREADS=20 openmp=True </code></pre>
numpy|theano|keras|multicore|lasagne
4
3,866
37,186,066
Python: Running maximum by another column?
<p>I have a dataframe like this, which tracks the value of certain items (ids) over time:</p> <pre><code>mytime=np.tile( np.arange(0,10) , 2 ) myids=np.repeat( [123,456], [10,10] ) myvalues=np.random.random_integers(20,30,10*2) df=pd.DataFrame() df['myids']=myids df['mytime']=mytime df['myvalues']=myvalues +-------+--------+----------+--+--+ | myids | mytime | myvalues | | | +-------+--------+----------+--+--+ | 123 | 0 | 29 | | | +-------+--------+----------+--+--+ | 123 | 1 | 23 | | | +-------+--------+----------+--+--+ | 123 | 2 | 26 | | | +-------+--------+----------+--+--+ | 123 | 3 | 24 | | | +-------+--------+----------+--+--+ | 123 | 4 | 25 | | | +-------+--------+----------+--+--+ | 123 | 5 | 29 | | | +-------+--------+----------+--+--+ | 123 | 6 | 28 | | | +-------+--------+----------+--+--+ | 123 | 7 | 21 | | | +-------+--------+----------+--+--+ | 123 | 8 | 20 | | | +-------+--------+----------+--+--+ | 123 | 9 | 26 | | | +-------+--------+----------+--+--+ | 456 | 0 | 26 | | | +-------+--------+----------+--+--+ | 456 | 1 | 24 | | | +-------+--------+----------+--+--+ | 456 | 2 | 20 | | | +-------+--------+----------+--+--+ | 456 | 3 | 26 | | | +-------+--------+----------+--+--+ | 456 | 4 | 29 | | | +-------+--------+----------+--+--+ | 456 | 5 | 29 | | | +-------+--------+----------+--+--+ | 456 | 6 | 24 | | | +-------+--------+----------+--+--+ | 456 | 7 | 21 | | | +-------+--------+----------+--+--+ | 456 | 8 | 27 | | | +-------+--------+----------+--+--+ | 456 | 9 | 29 | | | +-------+--------+----------+--+--+ </code></pre> <p>I'd need to calculate the running maximum for each id.</p> <pre><code>np.maximum.accumulate() </code></pre> <p>would calculate the running maximum regardless of id, whereas I need a similar calculation, which however resets every time the id changes. I can think of a simple script to do it in numba (I have very large arrays and non-vectorised non-numba code would be slow), but is there an easier way to do it?</p> <p>With just two values I can run:</p> <pre><code>df['running max']= np.hstack(( np.maximum.accumulate(df[ df['myids']==123 ]['myvalues']) , np.maximum.accumulate(df[ df['myids']==456 ]['myvalues']) ) ) </code></pre> <p>but this is not feasible with lots and lots of values.</p> <p>Thanks!</p>
<p>Here you go. Assumption is mytime is sorted.</p> <pre><code>mytime=np.tile( np.arange(0,10) , 2 ) myids=np.repeat( [123,456], [10,10] ) myvalues=np.random.random_integers(20,30,10*2) df=pd.DataFrame() df['myids']=myids df['mytime']=mytime df['myvalues']=myvalues groups = df.groupby('myids') df['run_max_group'] = groups['myvalues'].transform(np.maximum.accumulate) </code></pre> <p>Output...</p> <pre><code> myids mytime myvalues run_max_group 0 123 0 27 27 1 123 1 21 27 2 123 2 24 27 3 123 3 25 27 4 123 4 22 27 5 123 5 20 27 6 123 6 20 27 7 123 7 30 30 8 123 8 24 30 9 123 9 22 30 10 456 0 29 29 11 456 1 23 29 12 456 2 30 30 13 456 3 28 30 14 456 4 26 30 15 456 5 25 30 16 456 6 28 30 17 456 7 27 30 18 456 8 20 30 19 456 9 24 30 </code></pre>
python|numpy|pandas|max
2
3,867
37,578,346
Extract Values from heavily nested list of dictionaries with duplicate key value pairs
<p>Trying to extract Total Cash and Cash Equivalent values from complex and messy list of dictionaries. A shortened version of the structure follows below. </p> <p>I've tried: maps, Dataframe.from_dict &amp; .from_records. Trying to avoid using RE.</p> <p>I'm stumped.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code> [{u'Fields': [], u'ReportDate': u'2 June 2016', u'ReportID': u'BalanceSheet', u'ReportName': u'Balance Sheet', u'ReportTitles': [u'Balance Sheet', u'Test Company', u'As at 30 June 2016'], u'ReportType': u'BalanceSheet', u'Rows': [{u'Cells': [{u'Value': u''}, {u'Value': u'30 Jun 2016'}, {u'Value': u'30 Jun 2015'}], u'RowType': u'Header'}, {u'RowType': u'Section', u'Rows': [], u'Title': u'Assets'}, {u'RowType': u'Section', u'Rows': [{u'Cells': [{u'Attributes': [{u'Id': u'account', u'Value': u'c0bxx922-cc31-4d53-b060-cbf23511`2533'}], u'Value': u'Test Bank 1'}, {u'Attributes': [{u'Id': u'account', u'Value': u'c1b4xx22-cc31-4d53-b060-cb45282533'}], u'Value': u'5555.20'}, {u'Attributes': [{u'Id': u'account', u'Value': u'c2b44922-cc31-4d53-b060-cbf4532582533'}], u'Value': u'5555.20'}], u'RowType': u'Row'}, {u'Cells': [{u'Attributes': [{u'Id': u'account', u'Value': u'290c7c3c-a712-4ads6f-9a2f-3d5258aad5a9e'}], u'Value': u'Test Bank 2'}, {u'Attributes': [{u'Id': u'account', u'Value': u'490c7c32-axxxdf6f-9a2f-3db682a3ad5a9e'}], u'Value': u'55555.20'}, {u'Attributes': [{u'Id': u'account', u'Value': u'490xxc3c-a71-adsf6f-9a2f-3d3aad5a9e'}], u'Value': u'55555.20'}], u'RowType': u'Row'}, {u'Cells': [{u'Attributes': [{u'Id': u'account', u'Value': u'c6d4da40-f0df1b0-8f7d-xx45b1405'}], u'Value': u'Test Bank 3'}, {u'Attributes': [{u'Id': u'account', u'Value': u'c6d4da4fg-df-41b0-8f7d-54xx345b1405'}], u'Value': u'5555.20'}, {u'Attributes': [{u'Id': u'account', u'Value': u'c6d4dafgss-9-41b0-8f7d-60xx5b1405'}], u'Value': u'5555.20'}], u'RowType': u'Row'}, {u'Cells': [{u'Value': u'Total Cash and Cash Equivalents'}, {u'Value': u'5555555.20'}, {u'Value': u'5555555.20'}], u'RowType': u'SummaryRow'}], u'Title': u'Cash and Cash Equivalents'}, {u'RowType': u'Section',</code></pre> </div> </div> </p>
<p>If you know that the data will have exactly the format from above and you really just need these two values, you can access it directly (assuming <code>data</code> is your above structure):</p> <pre><code>print data[0]['Rows'][2]['Rows'][3]['Cells'][1]['Value']
print data[0]['Rows'][2]['Rows'][3]['Cells'][2]['Value']
</code></pre> <p>However, this is error prone, both in writing down the correct expression and with respect to changes of the order of the lists (which might not be defined in your format). Since there is a semantic structure behind the data, you could translate the raw data back into an easily accessible object. You might want to change some details but this is a good starting point:</p> <pre><code>from collections import Mapping, OrderedDict
import pandas as pd

class Report(Mapping):
    def __init__(self, data):
        self.sections = OrderedDict()
        for row in data.pop('Rows'):
            getattr(self, 'make_%s' % row['RowType'])(row)
        self.__dict__.update(data)

    def make_Header(self, row):
        self.header = [c['Value'] for c in row['Cells']]

    def make_Section(self, sec):
        def make_row(row):
            cells = [c['Value'] for c in row['Cells']]
            return pd.Series(map(float, cells[1:]), name=cells[0])
        self.sections[sec['Title']] = pd.DataFrame(make_row(r) for r in sec['Rows'])

    def __getitem__(self, item):
        return self.sections[item]

    def __len__(self):
        return len(self.sections)

    def __iter__(self):
        return iter(self.sections)

report = Report(data[0])
print report.ReportName
print report['Cash and Cash Equivalents']
</code></pre>
python|dictionary|pandas
1
3,868
41,701,884
How to efficiently set values of numpy array using indices and boolean indexing?
<p>What is the most efficient way to use a mask to select elements of a multidimensional numpy array, when the mask is to be applied with an offset? For example:</p> <pre><code>import numpy as np # in real application, following line would read an image figure = np.random.uniform(size=(4, 4)) # used as a mask canvas = np.zeros((10, 10)) # The following doesn't do anything, because a copy is modified canvas[np.ix_(np.arange(4) + 3, range(4))][figure &gt; 0.5] = 1.0 print np.mean(figure &gt; 0.5) # should be ~ 0.5 print canvas.max() # prints 0.0 </code></pre> <p>A similar question is posted here: <a href="https://stackoverflow.com/questions/40572482/setting-values-of-numpy-array-when-indexing-an-indexed-array">Setting values of Numpy array when indexing an indexed array</a> but I'm using a mask and I'm not asking why it doesn't work.</p>
<p>The problem, it seems, is that using the arrays returned by <code>np.ix_</code> as index means you are doing advanced indexing, and, <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow noreferrer">as the documentation of NumPy states</a>:</p> <blockquote> <p>Advanced indexing always returns a copy of the data (contrast with basic slicing that returns a <a href="https://docs.scipy.org/doc/numpy/glossary.html#term-view" rel="nofollow noreferrer">view</a>).</p> </blockquote> <p>But in this case, if the real application is similar to the code you have posted (that is, if you really just need an offset), you can get away with basic slicing:</p> <pre><code>import numpy as np figure = np.random.uniform(size=(4, 4)) canvas = np.zeros((10, 10)) # Either of the following works fine canvas[3:(3 + 4), :4][figure &gt; 0.5] = 1.0 canvas[slice(3, 3 + 4), slice(4)][figure &gt; 0.5] = 1.0 print np.mean(figure &gt; 0.5) # ~ 0.5 print canvas.max() # Prints 1.0 now </code></pre>
python|numpy|multidimensional-array
2
3,869
37,869,744
Tensorflow LSTM Regularization
<p>I was wondering how one can implement l1 or l2 regularization within an LSTM in TensorFlow? TF doesn't give you access to the internal weights of the LSTM, so I'm not certain how one can calculate the norms and add it to the loss. My loss function is just RMS for now.</p> <p>The answers <a href="https://stackoverflow.com/questions/37571514/regularization-for-lstm-in-tensorflow">here</a> don't seem to suffice.</p>
<p>The answers in the link you mentioned are the correct way to do it. Iterate through <code>tf.trainable_variables</code> and find the variables associated with your LSTM.</p> <p>An alternative, more complicated and possibly more brittle approach is to re-enter the LSTM's variable_scope, set reuse_variables=True, and call get_variable(). But really, the original solution is faster and less brittle.</p>
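<p>A rough sketch of that approach (it assumes the LSTM variables can be recognised by an <code>'lstm'</code> substring in their names — adjust the filter to the variable scope your cell actually uses, and the <code>1e-4</code> strength is an arbitrary example):</p> <pre><code>import tensorflow as tf

l2_lambda = 1e-4
lstm_vars = [v for v in tf.trainable_variables()
             if 'lstm' in v.name.lower() and 'bias' not in v.name.lower()]
l2_loss = l2_lambda * tf.add_n([tf.nn.l2_loss(v) for v in lstm_vars])

total_loss = rms_loss + l2_loss  # 'rms_loss' stands in for your existing RMS loss
</code></pre>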
tensorflow|lstm
1
3,870
31,674,195
plot normal distribution given mean and sigma - python
<p>I have some data in pandas dataframe</p> <pre><code>df['Difference'] = df.Congruent.values - df.Incongruent.values mean = df.Difference.mean() std = df.Difference.std(ddof=1) median = df.Difference.median() mode = df.Difference.mode() </code></pre> <p>and I want to plot a histogram together with normal distribution in 1 plot. Is there a plotting function that takes mean and sigma as arguments? I don't care whether it is matplotplib, seaborn or ggplot. The best would be if I could mark also mode and median of the data all within 1 plot.</p>
<p>You can use matplotlib/pylab with <code>scipy.stats.norm.pdf</code> and pass the mean and standard deviation as <code>loc</code> and <code>scale</code>:</p> <pre><code>import pylab import numpy as np from scipy.stats import norm x = np.linspace(-10,10,1000) y = norm.pdf(x, loc=2.5, scale=1.5) # for example pylab.plot(x,y) pylab.show() </code></pre> <p><a href="https://i.stack.imgur.com/WPP2K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WPP2K.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|plot|seaborn
7
3,871
31,254,392
Sorting a scipy.stats.itemfreq result containing strings
<p><strong>The Problem</strong></p> <p>I'm attempting to count the frequency of a list of strings and sort it in descending order. <code>scipy.stats.itemfreq</code> generates the frequency results which are output as a numpy array of string elements. This is where I'm stumped. How do I sort it?</p> <p>So far I have tried <code>operator.itemgetter</code> which appeared to work for a small list until I realised that it is sorting by the first string character rather than converting the string to an integer so <code>'5' &gt; '11'</code> as it is comparing <code>5</code> and <code>1</code> not <code>5</code> and <code>11</code>.</p> <p>I'm using python 2.7, numpy 1.8.1, scipy 0.14.0.</p> <p><strong>Example Code:</strong></p> <pre><code>from scipy.stats import itemfreq import operator as op items = ['platypus duck','platypus duck','platypus duck','platypus duck','cat','dog','platypus duck','elephant','cat','cat','dog','bird','','','cat','dog','bird','cat','cat','cat','cat','cat','cat','cat'] items = itemfreq(items) items = sorted(items, key=op.itemgetter(1), reverse=True) print items print items[0] </code></pre> <p><strong>Output:</strong></p> <pre><code>[array(['platypus duck', '5'], dtype='|S13'), array(['dog', '3'], dtype='|S13'), array(['', '2'], dtype='|S13'), array(['bird', '2'], dtype='|S13'), array(['cat', '11'], dtype='|S13'), array(['elephant', '1'], dtype='|S13')] ['platypus duck' '5'] </code></pre> <p><strong>Expected Output:</strong></p> <p>I'm after the ordering so something like:</p> <pre><code>[array(['cat', '11'], dtype='|S13'), array(['platypus duck', '5'], dtype='|S13'), array(['dog', '3'], dtype='|S13'), array(['', '2'], dtype='|S13'), array(['bird', '2'], dtype='|S13'), array(['elephant', '1'], dtype='|S13')] ['cat', '11'] </code></pre> <p><strong>Summary</strong></p> <p>My question is: how do I sort the array (which in this case is a string array) in descending order of counts? Please feel free to suggest alternative and faster/improved methods to my code sample above.</p>
<p>It is unfortunate that <code>itemfreq</code> returns the unique items <em>and</em> their counts in the same array. For your case, it means the counts are converted to strings, which is just dumb.</p> <p>If you can upgrade numpy to version 1.9, then instead of using <code>itemfreq</code>, you can use <code>numpy.unique</code> with the argument <code>return_counts=True</code> (see below for how to accomplish this in older numpy):</p> <pre><code>In [29]: items = ['platypus duck','platypus duck','platypus duck','platypus duck','cat','dog','platypus duck','elephant','cat','cat','dog','bird','','','cat','dog','bird','cat','cat','cat','cat','cat','cat','cat'] In [30]: values, counts = np.unique(items, return_counts=True) In [31]: values Out[31]: array(['', 'bird', 'cat', 'dog', 'elephant', 'platypus duck'], dtype='|S13') In [32]: counts Out[32]: array([ 2, 2, 11, 3, 1, 5]) </code></pre> <p>Get indices that puts <code>counts</code> in decreasing order:</p> <pre><code>In [38]: idx = np.argsort(counts)[::-1] In [39]: values[idx] Out[39]: array(['cat', 'platypus duck', 'dog', 'bird', '', 'elephant'], dtype='|S13') In [40]: counts[idx] Out[40]: array([11, 5, 3, 2, 2, 1]) </code></pre> <p>For older versions of numpy, you can combine <code>np.unique</code> and <code>np.bincount</code>, as follows:</p> <pre><code>In [46]: values, inv = np.unique(items, return_inverse=True) In [47]: counts = np.bincount(inv) In [48]: values Out[48]: array(['', 'bird', 'cat', 'dog', 'elephant', 'platypus duck'], dtype='|S13') In [49]: counts Out[49]: array([ 2, 2, 11, 3, 1, 5]) In [50]: idx = np.argsort(counts)[::-1] In [51]: values[idx] Out[51]: array(['cat', 'platypus duck', 'dog', 'bird', '', 'elephant'], dtype='|S13') In [52]: counts[idx] Out[52]: array([11, 5, 3, 2, 2, 1]) </code></pre> <p>In fact, the above is exactly what <code>itemfreq</code> does. Here's the definition of <code>itemfreq</code> in the scipy source code (without the docstring):</p> <pre><code>def itemfreq(a): items, inv = np.unique(a, return_inverse=True) freq = np.bincount(inv) return np.array([items, freq]).T </code></pre>
python|arrays|sorting|numpy|scipy
2
3,872
64,362,395
processing a dataframe in parallel
<p>I have a process that requires each row of a dataframe processed and then a new value appended to each row. It's a large dataframe and taking hours to process one dataframe at a time.</p> <p>If I have a iterrow loop that sends each row to a function, can I parallize my processing for a speedup? The results of the row are not related</p> <p>basically my code something like this</p> <pre><code>for index, row in df.iterrows(): row['data'] = function[row] </code></pre> <p>Is there a easy way to do this to speed up processing?</p>
<p>While iterating over rows isn't good practice and there are usually alternative approaches with groupby/transform aggregations etc., if in the worst case you really need to do so, follow the answer below. Also, you might not need to reimplement everything here: you can use libraries like <a href="https://dask.org/" rel="nofollow noreferrer">Dask</a>, which is built on top of pandas.</p> <p>But just to give the idea, you can use <code>multiprocessing</code> (<code>Pool.map</code>) in combination with <code>chunking</code>: read the csv in chunks (or make chunks as mentioned at the end of the answer) and map them to the pool; while processing each chunk add the new rows (or add them to a list and make a new chunk) and return it from the function.</p> <p>In the end combine the dataframes when all the pool workers have finished.</p> <pre><code>import pandas as pd
import numpy as np
import multiprocessing

def process_chunk(df_chunk):
    for index, row in df_chunk.reset_index(drop = True).iterrows():
        #your logic for updating this chunk or making a new chunk here
        print(row)
        print(&quot;index is &quot; + str(index))
    #if you added to the same df_chunk, return it; else if you appended
    #rows to a list_of_rows, make a new df with them and return
    #pd.DataFrame(list_of_rows)
    return df_chunk

if __name__ == '__main__':
    #use all available cores, otherwise specify the number you want as an argument,
    #for example if you have 12 cores, leave 1 or 2 for other things
    pool = multiprocessing.Pool(processes=10)
    results = pool.map(process_chunk, [c for c in pd.read_csv(&quot;your_csv.csv&quot;, chunksize=7150)])
    pool.close()
    pool.join()
    #make the new df by concatenating
    concatdf = pd.concat(results, axis=0, ignore_index=True)
</code></pre> <p><strong>Note</strong>: Instead of reading a csv you can pass chunks with the same logic. To calculate the chunk size you might want something like <code>round((length of df) / (number of cores available - 2))</code>, e.g. <code>100000/14 = 7142.85 ≈ 7150 rows</code> per chunk:</p> <pre><code>results = pool.map(process_chunk,
                   [df[c:c+chunk_size] for c in range(0, len(df), chunk_size)])
</code></pre>
python|pandas
1
3,873
64,418,138
Snowflake pandas Connector Kills Kernel
<p>I'm having trouble with the pandas connector for Snowflake.</p> <p>The last line of this code causes the immediate death of the python kernel. Any suggestions on how to diagnose such a situation?</p> <pre><code>import pyarrow import snowflake.connector import pandas as pd ctx = snowflake.connector.connect( user=********, password=********, account=********, warehouse='compute_wh', database='SNOWFLAKE_SAMPLE_DATA', schema='WEATHER' ) cs = ctx.cursor() cs.execute('select * from weather_14_total limit 10') cs.fetch_pandas_all() </code></pre> <p>Note that if <code>fetch_pandas_all()</code> is replaced with <code>fetchone()</code> everything works fine.</p> <p>Thanks in advance.</p> <ul> <li>Keith</li> </ul>
<p>This worked for me:</p> <pre><code>import pandas as pd
from snowflake.connector import connect

qry = &quot;SELECT * FROM TABLE LIMIT 5&quot;

con = connect(
    account = 'ACCOUNT',
    user = 'USER',
    password = 'PASSWORD',
    role= 'ROLE',
    warehouse = 'WAREHOUSE',
    database = 'DATABASE',
    schema = 'SCHEMA'
)

df = pd.read_sql(qry, con)
</code></pre> <p>However, this was the most upvoted answer for a similar question:</p> <pre><code>import pandas as pd
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

url = URL(
    account = 'xxxx',
    user = 'xxxx',
    password = 'xxxx',
    database = 'xxx',
    schema = 'xxxx',
    warehouse = 'xxx',
    role='xxxxx',
    authenticator='https://xxxxx.okta.com',
)
engine = create_engine(url)  # pass the url instance built above, not the URL class
connection = engine.connect()

query = '''
select * from MYDB.MYSCHEMA.MYTABLE
LIMIT 10;
'''

df = pd.read_sql(query, connection)
</code></pre>
pandas|jupyter|snowflake-cloud-data-platform
1
3,874
47,687,872
pandas.DataFrame.replace doesn't seems to work
<p>I'm currently working on Kaggle's Titanic Survival Prediction. There's a problem when I wanted to encode the 'Sex' feature to 0 or 1. I've followed the official documentation on Pandas but it's does not help.</p> <p>Edit: I've noticed that I wrote 'Male' instead of 'male' but still not working after i change to 'male'</p> <p><a href="https://i.stack.imgur.com/PZqOe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PZqOe.png" alt="I&#39;ve followed the official documentation on Pandas but it&#39;s not working"></a></p> <p>Much Appreciated!</p>
<p>Try this: </p> <pre><code>df['Sex'] = df['Sex'].replace(['male', 'female'], [1,0]) </code></pre> <p>or this</p> <pre><code>df['Sex'].replace(['male', 'female'], [1,0], inplace=True) </code></pre> <p>The problem is that you've set <code>inplace=True</code> while also performing an assignment. With <code>inplace=True</code> the call modifies the Series object itself and returns <code>None</code>, so assigning that <code>None</code> back to <code>df['Sex']</code> is what wipes out the column. So either keep <code>inplace=False</code> (the default) or make an assignment, but not both: <code>inplace=True</code> changes the object and returns <code>None</code>, whereas <code>inplace=False</code> returns a new object.</p>
python|pandas|dataframe|machine-learning
4
3,875
47,884,987
pandas create a Boolean column for a df based on one condition on a column of another df
<p>I have two <code>df</code>s, <code>A</code> and <code>B</code>. <code>A</code> is like,</p> <pre><code>date id 2017-10-31 1 2017-11-01 2 2017-08-01 3 </code></pre> <p><code>B</code> is like,</p> <pre><code>type id 1 1 2 2 3 3 </code></pre> <p>I like to create a new boolean column <code>has_b</code> for <code>A</code>, set the column value to <code>True</code> if its corresponding row (<code>A</code> joins <code>B</code> on <code>id</code>) in <code>B</code> does not have <code>type == 1</code>, and its time delta is > 90 days comparing to <code>datetime.utcnow().day</code>; and <code>False</code> otherwise, here is my solution</p> <pre><code> B = B[B['type'] != 1] A['has_b'] = A.merge(B[['id', 'type']], how='left', on='id')['date'].apply(lambda x: datetime.utcnow().day - x.day &gt; 90) A['has_b'].fillna(value=False, inplace=True) </code></pre> <p>expect to see <code>A</code> result in,</p> <pre><code>date id has_b 2017-10-31 1 False 2017-11-01 2 False 2017-08-01 3 True </code></pre> <p>I am wondering if there is a better way to do this, in terms of more concise and efficient code.</p>
<p>First merge <code>A</code> and <code>B</code> on <code>id</code> -</p> <pre><code>i = A.merge(B, on='id') </code></pre> <p>Now, compute <code>has_b</code> - </p> <pre><code>x = i.type.ne(1) y = (pd.to_datetime('today') - i.date).dt.days.gt(90) i['has_b'] = (x &amp; y) </code></pre> <p>Merge back <code>i</code> and <code>A</code> - </p> <pre><code>C = A.merge(i[['id', 'has_b']], on='id') C date id has_b 0 2017-10-31 1 False 1 2017-11-01 2 False 2 2017-08-01 3 True </code></pre> <hr> <p><strong>Details</strong></p> <p><code>x</code> will return a boolean mask for the first condition.</p> <pre><code>i.type.ne(1) 0 False 1 True 2 True Name: type, dtype: bool </code></pre> <p><code>y</code> will return a boolean mask for the second condition. Use <code>to_datetime('today')</code> to get the current date, subtract this from the date column, and access the days component with <code>dt.days</code>.</p> <pre><code>(pd.to_datetime('today') - i.date).dt.days.gt(90) 0 False 1 False 2 True Name: date, dtype: bool </code></pre> <hr> <p>In case, <code>A</code> and <code>B</code>'s IDs do not align, you may need a <em>left</em> merge instead of an inner merge, for the last step - </p> <pre><code>C = A.merge(i[['id', 'has_b']], on='id', how='left') </code></pre> <p>C's <code>has_b</code> column will contain NaNs in this case.</p>
python-3.x|pandas|dataframe
1
3,876
58,864,339
skimage rotated image displays as black
<p>I'm trying to do a simple rotation of a sample image, but when I try to display it, the file just shows black pixels. I can tell that it has rotated, because the dimensions are changed properly.</p> <pre><code>from io import BytesIO import numpy as np from PIL import Image from skimage.transform import rotate from flask import send_file image_file = Image.open(file_path).convert("L") image_array = np.array(image_file) image_array_rotated = rotate(image_array, angle=90, resize=True) rotated_image_file = Image.fromarray(image_array_rotated).convert("L") buffered_image_file = BytesIO() rotated_image_file.save(buffered_image_file, 'PNG') buffered_image_file.seek(0) return send_file(buffered_image_file, mimetype='image/png') </code></pre> <p>If I remove the rotation code and show the original image, or the converted grayscale ("L") image, they both show up fine. My rotated image is just black.</p> <p><a href="https://i.stack.imgur.com/J3Or2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3Or2.png" alt="Original Image"></a></p> <p><a href="https://i.stack.imgur.com/LyE4B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LyE4B.png" alt="Greyscaled Image"></a></p> <p><a href="https://i.stack.imgur.com/YixKk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YixKk.png" alt="Rotated Image"></a></p>
<p>scikit-image automatically converts images to floating point any time that interpolation or convolution is necessary, to ensure precision in calculations. In converting to float, the range of the image is converted to [0, 1]. You can read more about how it treats data types here:</p> <p><a href="https://scikit-image.org/docs/dev/user_guide/data_types.html" rel="nofollow noreferrer">https://scikit-image.org/docs/dev/user_guide/data_types.html</a></p> <p>Here is how you could modify your code to work with PIL data:</p> <pre class="lang-py prettyprint-override"><code>from io import BytesIO

import numpy as np
from PIL import Image
from skimage.transform import rotate
from skimage import util
from flask import send_file

image_file = Image.open(file_path).convert("L")
image_array = util.img_as_float(np.array(image_file))
image_array_rotated = rotate(image_array, angle=90, resize=True)
# convert the float result back to uint8 before handing it to PIL,
# otherwise the [0, 1] floats get truncated to 0 (a black image)
image_array_rotated_bytes = util.img_as_ubyte(image_array_rotated)
rotated_image_file = Image.fromarray(image_array_rotated_bytes).convert("L")

buffered_image_file = BytesIO()
rotated_image_file.save(buffered_image_file, 'PNG')
buffered_image_file.seek(0)

return send_file(buffered_image_file, mimetype='image/png')
</code></pre> <p>Alternatively, you could use <code>skimage.io.imsave</code>, which would do all these conversions for you behind the scenes.</p> <p>Another option, as pointed out by Mark in the comments, is to use PIL for the rotation also.</p>
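<p>For completeness, a sketch of the <code>skimage.io.imsave</code> route mentioned above (this writes to a file path instead of the <code>BytesIO</code> buffer used for <code>send_file</code>):</p> <pre class="lang-py prettyprint-override"><code>from skimage import io, util
from skimage.transform import rotate

image_array = util.img_as_float(io.imread(file_path, as_gray=True))
rotated = rotate(image_array, angle=90, resize=True)
# convert back to uint8 explicitly to avoid a lossy-conversion warning
io.imsave('rotated.png', util.img_as_ubyte(rotated))
</code></pre>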
numpy|python-imaging-library|scikit-image|numpy-ndarray
2
3,877
58,837,753
For loop is not giving expected output using pandas DataFrame
<p>I want to write a bill program where i want the cgst, sgst from the bill as output. All was going fine but i got stuck on a problem. I want separate names of product from the result of dataframe's output but i am getting only the name of only one product but the amount was sum of two...</p> <p><strong>Here's my code:</strong></p> <pre><code>import pandas as pd count = 0 num = int(input("Type number of items: ")) while count &lt; num: count += 1 print("-----------------------") item = input("Enter Item Name: ") SP = int(input("enter selling price of " + item + ": ")) gstrate = float(input("Enter GST rate: ")) cgst = SP * ((gstrate/2)/100) sgst = cgst amount = SP + cgst + sgst data = pd.DataFrame({ 'Item ': [item], 'Price': [SP], 'CGST': [cgst], 'SGST': [sgst], 'Amount payable': [amount], }) print(data) </code></pre> <p><strong>what i am getting is this example output:</strong></p> <pre><code> Type number of items: 2 ----------------------- Enter Item Name: samsung enter selling price of samsung: 2341 Enter GST rate: 34 ----------------------- Enter Item Name: iphone enter selling price of iphone: 1234567 Enter GST rate: 15 Item Price CGST SGST Amount payable 0 iphone 1234567 92592.525 92592.525 1419752.05 ``` </code></pre> <p><strong>What i want the output to be:</strong></p> <pre><code> Type number of items: 2 ----------------------- Enter Item Name: iphone enter selling price of iphone: 1000 Enter GST rate: 18 ----------------------- Enter Item Name: samsung enter selling price of samsung: 1000 Enter GST rate: 18 Item Price CGST SGST Amount payable 0 iphone 1000 90.0 90.0 1180.0 1 samsung 1000 90.0 90.0 1180.0 </code></pre> <p>As you can see, i am getting only name samsung not iphone and samsung saparatley </p>
<p>In each iteration of your loop you are creating a new data frame with only this loops data and overwitting any data that was in the last data frame. So when you finish your loops and print the dataframe all thats in it is the data from the last iteration of the loop since you created a new dataframe on each iteration. </p> <p>Instead you could create the data frame before the loop and then just append to the data frame on each iteration of the loop </p> <pre class="lang-py prettyprint-override"><code>import pandas as pd items = [] columns = ['Item', 'Price', 'CGST', 'SGST', 'Amount payable'] df = pd.DataFrame(columns=columns) num = int(input("Type number of items: ")) for _ in range(num): print("-----------------------") item = input("Enter Item Name: ") SP = int(input("enter selling price of " + item + ": ")) gstrate = float(input("Enter GST rate: ")) cgst = SP * ((gstrate/2)/100) sgst = cgst amount = SP + cgst + sgst data = [item, SP, cgst, sgst, amount] df_row = dict(zip(columns, data)) df = df.append(df_row, ignore_index=True) print(df) </code></pre> <p><strong>OUTPUT</strong></p> <pre><code>Type number of items: 2 ----------------------- Enter Item Name: iphone enter selling price of iphone: 1000 Enter GST rate: 18 ----------------------- Enter Item Name: samsung enter selling price of samsung: 1000 Enter GST rate: 18 Item Price CGST SGST Amount payable 0 iphone 1000 90.0 90.0 1180.0 1 samsung 1000 90.0 90.0 1180.0 </code></pre>
python|python-3.x|pandas
0
3,878
70,291,074
pandad reader :TypeError: only integer scalar arrays can be converted to a scalar index
<p>I am facing an error on this, how to I solve it? I think its on the reshape part but I'm not sure about it.</p> <pre><code> dataset = data.values print(len(dataset)) training_data_size = math.ceil(len(dataset)*.7) scaler = MinMaxScaler(feature_range=(0,1)) scaled_data = scaler.fit_transform(dataset) train_data = scaled_data[0:training_data_size,:] x_train = [] y_train = [] for i in range(60,len(train_data)): x_train.append(train_data[i-60:i,0]) y_train.append(train_data[i,0]) x_train,y_train = np.array(x_train),np.array(y_train) x_train = np.reshape(x_train,(x_train.shape[0],x_train[1],1)) </code></pre> <p>The error is as below:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\USER\AppData\Roaming\Python\Python36\site-packages\numpy\core\fromnumeric.py&quot;, line 58, in _wrapfunc return bound(*args, **kwds) TypeError: only integer scalar arrays can be converted to a scalar index During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:/Users/USER/PycharmProjects/Predict Stock Price/main.py&quot;, line 105, in &lt;module&gt; LSTM_predidction() File &quot;C:/Users/USER/PycharmProjects/Predict Stock Price/main.py&quot;, line 72, in LSTM_predidction x_train = np.reshape(x_train,(x_train.shape[0],x_train[1],1)) File &quot;&lt;__array_function__ internals&gt;&quot;, line 6, in reshape File &quot;C:\Users\USER\AppData\Roaming\Python\Python36\site-packages\numpy\core\fromnumeric.py&quot;, line 299, in reshape return _wrapfunc(a, 'reshape', newshape, order=order) File &quot;C:\Users\USER\AppData\Roaming\Python\Python36\site-packages\numpy\core\fromnumeric.py&quot;, line 67, in _wrapfunc return _wrapit(obj, method, *args, **kwds) File &quot;C:\Users\USER\AppData\Roaming\Python\Python36\site-packages\numpy\core\fromnumeric.py&quot;, line 44, in _wrapit result = getattr(asarray(obj), method)(*args, **kwds) TypeError: only integer scalar arrays can be converted to a scalar index Process finished with exit code 1 </code></pre> <p>Any suggestions would be appreciated</p>
<p>You are missing <code>.shape</code> on the second element of the tuple:</p> <pre><code>x_train = np.reshape(x_train,(x_train.shape[0],x_train[1],1)) </code></pre> <p>Correct:</p> <pre><code>x_train = np.reshape(x_train,(x_train.shape[0],x_train.shape[1],1)) </code></pre>
python|numpy
1
3,879
70,240,761
Changing row values based on previous row values
<p>I have a dataframe which looks like this:</p> <pre><code>Province Admissions Eastern Cape 10 Private 3 Public 7 Free State 20 Private 15 Public 5 </code></pre> <p>I want to change the 'Private' and 'Public' to reference the Province. I want to achieve the following dataframe:</p> <pre><code>Province Admissions Eastern Cape 10 Eastern Cape-Private 3 Eastern Cape-Public 7 Free State 20 Free State-Private 15 Free State-Public 5 </code></pre> <p>I've actually already achieved this by the following code:</p> <pre><code>for row in range(0,len(df)): df['Province'] = np.where((df['Province'] == 'Private'), df['Province'].shift(1)+' '+ df['Province'], df['Province']) df['Province'] = np.where((df['Province'] == 'Public'), df['Province'].shift(2)+' '+ df['Province'], df['Province']) </code></pre> <p>However, I would like to do it in a more general approach, in case the order of the Private and Public is swapped. Right now Private comes before Public hence my method workds. Would appreciate any input!</p>
<p>You can use <code>mask</code> and <code>ffill</code> to build the array of province prefixes, then prepend it (swap the <code>' '</code> separator for <code>'-'</code> if that is the exact output you want):</p> <pre><code>s = df.Province.mask(df.Province.isin(['Private','Public'])).ffill()
df['Province'] = np.where(df.Province.isin(['Private','Public']), s + ' ' + df.Province, df.Province)
</code></pre>
python|pandas
2
3,880
55,750,229
How to Save a Data Frame as a table in SQL
<p>I have a SQL Server on which I have databases that I want to use pandas to alter that data. I know how to get the data using pyodbc into a DataFrame, but then I have no clue how to get that DataFrame back into my SQL Server.</p> <p>I have tried to create an engine with sqlalchemy and use the <code>to_sql</code> command, but I can not get that to work because my engine is never able to connect correctly to my database.</p> <pre class="lang-py prettyprint-override"><code>import pyodbc import pandas server = "server" db = "db" conn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+db+';Trusted_Connection=yes') cursor = conn.cursor() df = cursor.fetchall() data = pandas.DataFrame(df) conn.commit() </code></pre>
<p>You can <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">use pandas.DataFrame.to_sql</a> to insert your dataframe into SQL server. Databases supported by <a href="https://docs.sqlalchemy.org/en/13/core/engines.html" rel="nofollow noreferrer">SQLAlchemy</a> are supported by this method.</p> <p>Here is a example how you can achieve this:</p> <pre><code>from sqlalchemy import create_engine, event from urllib.parse import quote_plus import logging import sys import numpy as np from datetime import datetime, timedelta # setup logging logging.basicConfig(stream=sys.stdout, filemode='a', format='%(asctime)s.%(msecs)3d %(levelname)s:%(name)s: %(message)s', datefmt='%m-%d-%Y %H:%M:%S', level=logging.DEBUG) logger = logging.getLogger(__name__) # get the name of the module def write_to_db(df, database_name, table_name): """ Creates a sqlalchemy engine and write the dataframe to database """ # replacing infinity by nan df = df.replace([np.inf, -np.inf], np.nan) user_name = 'USERNAME' pwd = 'PASSWORD' db_addr = '10.00.000.10' chunk_size = 40 conn = "DRIVER={SQL Server};SERVER="+db_addr+";DATABASE="+database_name+";UID="+user_name+";PWD="+pwd+"" quoted = quote_plus(conn) new_con = 'mssql+pyodbc:///?odbc_connect={}'.format(quoted) # create sqlalchemy engine engine = create_engine(new_con) # Write to DB logger.info("Writing to database ...") st = datetime.now() # start time # WARNING!! -- overwrites the table using if_exists='replace' df.to_sql(table_name, engine, if_exists='replace', index=False, chunksize=chunk_size) logger.info("Database updated...") logger.info("Data written to '{}' databsae into '{}' table ...".format(database_name, table_name)) logger.info("Time taken to write to DB: {}".format((datetime.now()-st).total_seconds())) </code></pre> <p>Calling this method should write your dataframe to the database, note that it will replace the table if there is already a table in the database with the same name.</p>
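<p>A hypothetical call, reusing the DataFrame built from the pyodbc cursor in your snippet (the database and table names are placeholders):</p> <pre><code>write_to_db(data, database_name='db', table_name='my_new_table')
</code></pre>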
sql-server|pandas|pyodbc
2
3,881
55,923,126
How to efficiently save a large pandas.DataFrame with millions or even billions of rows without errors?
<p>How can I save a large DataFrame to disk with good reading speed?</p> <p>I have a large dataset (YouTube-8M). I have extracted the raw data into a dict, and I want to save it as a dataframe for reading by index with a pytorch dataset.</p> <p>Concretely, the validation data looks like this:</p> <pre class="lang-py prettyprint-override"><code>&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 1112356 entries, 0 to 1112355
Data columns (total 4 columns):
id            1112356 non-null object
mean_rgb      1112356 non-null object
mean_audio    1112356 non-null object
label         1112356 non-null object
dtypes: object(4)
memory usage: 42.4+ MB
</code></pre> <p>The dtypes are listed below:</p> <pre class="lang-py prettyprint-override"><code>id         : str
mean_rgb   : numpy.ndarray
mean_audio : numpy.ndarray
label      : numpy.ndarray
</code></pre> <p>I want to save it to disk so that I can read it back efficiently. First, I used <code>hdf5</code> with <code>pd.to_hdf()</code>, but I got an <code>OverflowError</code>.</p> <p>Then, I turned to <code>csv</code>, which saved successfully. However, when I read the data back from this <code>.csv</code>, I get a corrupt <code>dataframe</code> whose row count is far greater than <strong>1112356</strong>.</p> <p>Finally, I saved the <code>dataframe</code> to <code>csv</code> with <code>chunksize=1000</code>; the reading result is still wrong, with <code>2842137</code> rows as well as more confusing inner data:</p> <pre><code>RangeIndex: 2842137 entries, 0 to 2842136
Data columns (total 1 columns):
widwmean_rgbwmean_audiowlabel    object
dtypes: object(1)
memory usage: 21.7+ MB
</code></pre>
<p>The joblib and klepto Python packages might help you.</p> <p>Alternatively, chunk the data: store as much as you can in each chunk while saving, then load the chunks back iteratively and merge them at the end.</p>
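<p>A minimal sketch of the chunked approach with joblib follows; the chunk size and file names are assumptions to illustrate the idea. Unlike CSV, pickling preserves the ndarray cells, so the dtypes survive the round trip:</p> <pre><code>import glob
import joblib
import pandas as pd

chunk_size = 100000  # assumed; tune to your memory budget

# save: split the frame into row chunks and dump each one
for i, start in enumerate(range(0, len(df), chunk_size)):
    joblib.dump(df.iloc[start:start + chunk_size], 'chunk_%05d.pkl' % i)

# load: read the chunks back iteratively and merge at the end
paths = sorted(glob.glob('chunk_*.pkl'))
df_restored = pd.concat((joblib.load(p) for p in paths), ignore_index=True)
</code></pre>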
python|pandas|csv|hdf5
0
3,882
55,728,846
Merge 3 pandas DataFrames based on key columns
<p>I am new to pandas. I have 3 CSV files extracted from a MySQL database and stored in pandas dataframes. I have generated a sequential id for all 3 files; they look like this:</p> <pre><code>df1
id1  key_column1  name1
1    567          qqq
2    898          rrr
3    345          bbb

df2
id2  key_column2  name2
4    967          qqqq
5    998          rrrr
6    945          bbbb

df3
id3  key_column1  key_column2
7    345          967
8    567          945
</code></pre> <p>df1 and df2 represent 2 tables; their original key columns are key_column1 and key_column2, respectively. df3 contains the mapping between df1 and df2 based on their key columns. Now df3 must do the mapping based on the generated sequential ids; it must look like this:</p> <pre><code>df3
id3  id1  id2  key_column1  key_column2
7    3    4    345          967
8    1    6    567          945
</code></pre> <p>I initially tried merging on one column but got None values.</p> <pre><code>df = pd.merge(df1, df3, left_on=df1['key_column1'], right_on=df3['key_column1'], how='inner')
</code></pre>
<p>This works for me:</p> <pre><code>df3.merge(df1, how='left', on='key_column1').merge(df2, how='left', on='key_column2')

   id3  key_column1  key_column2  id1 name1  id2 name2
0    7          345          967    3   bbb    4  qqqq
1    8          567          945    1   qqq    6  bbbb
</code></pre>
python|pandas
1
3,883
64,702,326
How to ignore NaN in column length check
<p>I am trying to calculate the maximum and minimum length of each column in a dataframe which has some missing values. Pandas treats those missing values as &quot;NaN&quot; and counts their length as 3. How do I completely ignore missing values while calculating the maximum and minimum lengths? Here is my code:</p> <pre><code>import pandas as pd

columnname = []
maxColumnLenghts = []
minColumnLenghts = []

for colname in df.columns:
    columnname.append(colname)
for col in range(len(df.columns)):
    minColumnLenghts.append(min(df.iloc[:,col].astype(str).apply(len)))
    maxColumnLenghts.append(max(df.iloc[:,col].astype(str).apply(len)))
</code></pre> <p>Here is my dataframe. You will notice that column c has missing values (pandas converts them into NaN), and hence the minimum length of column c comes out as 3, which is incorrect. <a href="https://i.stack.imgur.com/XFsv7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XFsv7.png" alt="enter image description here" /></a></p>
<p>Taking the method from the previous answer, but drop the NaN's with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dropna.html" rel="nofollow noreferrer">pandas.Series.dropna()</a> and convert each remaining value into a string before counting the min and max lengths, so missing values never enter the calculation. My suggestion is:</p> <pre><code>col_stats = {}
for col in df:
    lengths = df[col].dropna().astype(str).str.len()
    col_stats[col] = [lengths.min(), lengths.max()]
</code></pre>
python|pandas|dataframe
2
3,884
64,628,281
fatal error: numpy/arrayobject.h: No such file or directory #include "numpy/arrayobject.h" in google colab
<p>Hello, I am trying to run a git repo in Google Colab. I installed all requirements as per the repo's instructions.</p> <p>While running a certain file I am getting this error:</p> <p>fatal error: numpy/arrayobject.h: No such file or directory<br /> #include &quot;numpy/arrayobject.h&quot;</p> <p>I already checked solutions for this error, but they recommend changing the code in setup.py, and I am running the file in Google Colab, so could somebody help me?</p> <p>The command I ran in Google Colab was</p> <pre><code>!python build.py build_ext --inplace
</code></pre> <p>for which I got the following error. Could somebody help me? I verified that numpy was already installed.</p> <pre><code>running build_ext
skipping '_nms_gpu_post.c' Cython extension (up-to-date)
building '_nms_gpu_post' extension
creating build/temp.linux-x86_64-3.6
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -c _nms_gpu_post.c -o build/temp.linux-x86_64-3.6/_nms_gpu_post.o
_nms_gpu_post.c:485:10: fatal error: numpy/arrayobject.h: No such file or directory
 #include &quot;numpy/arrayobject.h&quot;
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
</code></pre>
<p>I had a similar problem and I solved it by editing <code>build.py</code> so that the extension's include directories contain NumPy's headers:</p> <pre><code>import numpy
from distutils.extension import Extension

ext_modules = [
    # include_dirs=[numpy.get_include()] points the compiler at the
    # directory that contains numpy/arrayobject.h
    Extension(&quot;my_module&quot;, [&quot;my_module.c&quot;],
              include_dirs=[numpy.get_include()]),
]
</code></pre> <p>Here <code>include_dirs=[numpy.get_include()]</code> is the part that's needed to solve it.</p>
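<p>For context, a minimal sketch of a complete <code>build.py</code> along those lines might look like the following. The module and source file names here (taken from the <code>_nms_gpu_post</code> extension in the error log) and the use of Cython are assumptions, not the repo's exact build script:</p> <pre><code>import numpy
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

ext_modules = [
    # assumed module/source names, based on the error log
    Extension(&quot;_nms_gpu_post&quot;, [&quot;_nms_gpu_post.pyx&quot;],
              include_dirs=[numpy.get_include()]),
]

# run with: python build.py build_ext --inplace
setup(ext_modules=cythonize(ext_modules))
</code></pre>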
python|python-3.x|numpy|jupyter-notebook|google-colaboratory
1
3,885
64,618,229
Using Python Selenium to download a file in memory, not in disk
<p>I have a bunch of scripts that do web scraping, download files, and read them with pandas. This process has to be deployed in a new architecture where downloading the files to disk is not appropriate; instead it is preferable to keep each file in memory and read it with pandas from there. For demonstration purposes I leave here a web-scraping script that downloads an excel file from a random website:</p> <pre><code>import time
import pandas as pd
from io import StringIO, BytesIO
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from datetime import date, timedelta
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

pathDriver = '/path/to/chromedriver'  # placeholder path
driver = webdriver.Chrome(executable_path=pathDriver)

url = 'https://file-examples.com/index.php/sample-documents-download/sample-xls-download/'
driver.get(url)
time.sleep(1)

file_link = driver.find_element_by_xpath('//*[@id=&quot;table-files&quot;]/tbody/tr[1]/td[5]/a[1]')
file_link.click()
</code></pre> <p>This script effectively downloads the file into my Downloads folder. What I've tried is to put a <code>StringIO()</code> or <code>BytesIO()</code> stream before and after the <code>click()</code> method and read the object, similar to this:</p> <pre><code>file_object = StringIO()
df = pd.read_excel(file_object.read())
</code></pre> <p>But the file_object doesn't capture the file, and the file is still downloaded to my disk.</p> <p>Any suggestions?</p>
<p>What you want can be accomplished by adding the <em>selenium add_experimental_option</em> setting. I also redesigned your code to loop through the table, extract the hrefs, and pass them to StringIO. No files are downloaded to my local system using this code.</p> <p>If I missed something, please let me know.</p> <pre><code>import pandas as pd
from time import sleep
from io import StringIO
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

capabilities = DesiredCapabilities().CHROME
chrome_options = Options()
chrome_options.add_argument(&quot;--incognito&quot;)
chrome_options.add_argument(&quot;--disable-infobars&quot;)
chrome_options.add_argument(&quot;start-maximized&quot;)
chrome_options.add_argument(&quot;--disable-extensions&quot;)
chrome_options.add_argument(&quot;--disable-popup-blocking&quot;)

prefs = {
    'profile.default_content_setting_values': {
        'automatic_downloads': 0
    },
    'profile.content_settings.exceptions': {
        'automatic_downloads': 0
    }
}

chrome_options.add_experimental_option('prefs', prefs)
capabilities.update(chrome_options.to_capabilities())

driver = webdriver.Chrome('/usr/local/bin/chromedriver', options=chrome_options)

url_main = 'https://file-examples.com/index.php/sample-documents-download/sample-xls-download/'
driver.get(url_main)

elements = driver.find_elements_by_xpath('//*[@id=&quot;table-files&quot;]//td/a')
for element in elements:
    if str(element.get_attribute(&quot;href&quot;)).endswith('.xls'):
        file_object = StringIO(element.get_attribute(&quot;href&quot;))
        xls_file = file_object.read()
        df = pd.read_excel(xls_file)
        print(df.to_string(index=False))

 First Name  Last Name  Gender        Country  Age       Date    Id
1     Dulce      Abril  Female  United States   32 15/10/2017  1562
2      Mara  Hashimoto  Female  Great Britain   25 16/08/2016  1582
3    Philip       Gent    Male         France   36 21/05/2015  2587
4  Kathleen     Hanner  Female  United States   25 15/10/2017  3549
5   Nereida    Magwood  Female  United States   58 16/08/2016  2468
6    Gaston      Brumm    Male  United States   24 21/05/2015  2554
7      Etta       Hurn  Female  Great Britain   56 15/10/2017  3598
8   Earlean     Melgar  Female  United States   27 16/08/2016  2456
9  Vincenza    Weiland  Female  United States   40 21/05/2015  6548

sleep(360)
</code></pre> <p>Here is an example using a RAMDISK that was mentioned in the comments. 
This option does not use <em>selenium add_experimental_option</em> or StringIO.</p> <pre><code>import fs
import pandas as pd
from time import sleep
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument(&quot;--incognito&quot;)
chrome_options.add_argument(&quot;--disable-infobars&quot;)
chrome_options.add_argument(&quot;start-maximized&quot;)
chrome_options.add_argument(&quot;--disable-extensions&quot;)
chrome_options.add_argument(&quot;--disable-popup-blocking&quot;)

driver = webdriver.Chrome('/usr/local/bin/chromedriver', options=chrome_options)

url_main = 'https://file-examples.com/index.php/sample-documents-download/sample-xls-download/'
driver.get(url_main)

urls_to_process = []
elements = driver.find_elements_by_xpath('//*[@id=&quot;table-files&quot;]//td/a')

# Create RAMDISK
mem_fs = fs.open_fs('mem://')
mem_fs.makedir('hidden_dir')

for element in elements:
    if str(element.get_attribute(&quot;href&quot;)).endswith('.xls'):
        with mem_fs.open('hidden_dir/file1.csv', 'w') as in_file:
            in_file.write(element.get_attribute(&quot;href&quot;))

        with mem_fs.open('hidden_dir/file1.csv', 'r') as out_file:
            df = pd.read_excel(out_file.read())
            print(df.to_string(index=False))
            # same output as above

sleep(360)
</code></pre>
python|pandas|selenium|selenium-webdriver
6
3,886
40,226,883
I don't understand the usage of 'Lambda' and 'Transform' in this code (pandas docs)
<p>I am scouring the pandas docs to try to understand how transform is used and came upon this example from the docs: <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/groupby.html</a> (under 'Transformation')</p> <pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

index = pd.date_range('10/1/1999', periods=1100)
ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
ts = ts.rolling(window=100, min_periods=100).mean().dropna()

key = lambda x: x.year
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ts.groupby(key).transform(zscore)
</code></pre> <p>A couple of things I'm confused about; first, the usage of lambda.</p> <pre><code>key = lambda x: x.year
</code></pre> <p>What datatype is x supposed to represent in this case? I'm unsure which datatypes allow calling the attribute ".year".</p> <p>As for this case:</p> <pre><code>zscore = lambda x: (x - x.mean()) / x.std()
</code></pre> <p>is x going to represent each row of ts (of the rolling result?), and x.mean is the mean of what exactly?</p> <p>Finally, what exactly does transform do in the last line? Is it just replacing the values of ts with zscore? I inspected the variable "transformed", but its index (dates) looks the same as ts' index. So what exactly did groupby(key) do in this case?</p> <p>Thanks!</p>
<p>Using <code>key = lambda x: x.year</code> implies that your index is of <code>dtype='datetime64[ns]'</code>.</p> <p>This calls <code>.year</code> on each index value and groups the df by year.</p> <p>Now that you have a groupby object, you can transform each group:</p> <pre><code>zscore = lambda x: (x - x.mean()) / x.std()
</code></pre> <p>will take each group (that is, each year), calculate the mean (<code>x.mean()</code>) and the standard deviation (<code>x.std()</code>), and apply the formula <code>(x - x.mean()) / x.std()</code> to each data point.</p> <p>So doing this:</p> <pre><code>ts.groupby(key).mean()
Out[274]:
2000    0.4851
2001    0.2568
2002    0.4544
</code></pre> <p>returns the mean for each year, and doing this:</p> <pre><code>ts.groupby(key).std()
Out[275]:
2000    0.1969
2001    0.1539
2002    0.1881
</code></pre> <p>returns the standard deviation for each year.</p> <p>transform applies this to each row, so let's use the first position for a test:</p> <pre><code>ts.head()
Out[277]:
2000-01-08    0.7562
2000-01-09    0.7639
2000-01-10    0.7020
2000-01-11    0.6970
2000-01-12    0.6906
</code></pre> <p>Since the first index falls in year 2000, we use the mean and std of that group: (0.7562 - 0.4851) / 0.1969 = 1.3767</p> <pre><code>ts.groupby(key).transform(zscore).head(2)
Out[282]:
2000-01-08    1.3767
2000-01-09    1.4159
</code></pre>
pandas|lambda|transform
3
3,887
39,788,542
Distributed Tensorflow Training of Reinspect Human detection model
<p>I am working on Distributed Tensorflow, particularly the implementation of the Reinspect model using Distributed Tensorflow given in the following paper: <a href="https://github.com/Russell91/TensorBox" rel="nofollow">https://github.com/Russell91/TensorBox</a>.</p> <p>We are using the between-graph asynchronous implementation of the Distributed Tensorflow settings, but the results are very surprising. While benchmarking, we have found that distributed training takes more than 2 times the training time of single-machine training. Any leads about what could be happening and what else could be tried would be really appreciated. Thanks</p> <p>Note: There is a correction in the post; we are using the between-graph implementation, not the in-graph implementation. Sorry for the mistake.</p>
<p>In general, I wouldn't be surprised if moving from a single-process implementation of a model to a multi-machine implementation would lead to a slowdown. From your question, it's not obvious what might be going on, but here are a few general pointers:</p> <ul> <li><p>If the model has a large number of parameters relative to the amount of computation (e.g. if it mostly performs large matrix multiplications rather than convolutions), then you may find that the network is the bottleneck. What is the bandwidth of your network connection?</p></li> <li><p>Are there a large number of copies between processes, perhaps due to unfortunate device placement? Try collecting and visualizing a timeline to see what is going on when you run your model.</p></li> <li><p>You mention that you are using "in-graph replication", which is <a href="https://stackoverflow.com/a/39681588/3574081">not currently recommended</a> for scalability. In-graph replication can create a bottleneck at the single master, especially when you have a large model graph with many replicas.</p></li> <li><p>Are you using a single input pipeline across the replicas or multiple input pipelines? Using a single input pipeline would create a bottleneck at the process running the input pipeline. (However, with in-graph replication, running multiple input pipelines could also create a bottleneck as there would be one Python process driving the I/O with a large number of threads.)</p></li> <li><p>Or are you using the feed mechanism? Feeding data is much slower when it has to cross process boundaries, as it would in a replicated setting. Using between-graph replication would at least remove the bottleneck at the single client process, but to get better performance you should use an input pipeline. (As <a href="https://stackoverflow.com/a/39809719/3574081">Yaroslav observed</a>, feeding and fetching large tensor values is slower in the distributed version because the data is transferred via RPC. In a single process these would use a simple <code>memcpy()</code> instead.)</p></li> <li><p>How many processes are you using? What does the scaling curve look like? Is there an immediate slowdown when you switch to using a parameter server and single worker replica (compared to a single combined process)? Does the performance get better or worse as you add more replicas?</p></li> </ul>
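<p>For the timeline suggestion above, here is a minimal sketch of collecting a trace in TensorFlow 1.x, assuming an existing session <code>sess</code> and a training op <code>train_op</code> (both hypothetical names):</p> <pre><code>import tensorflow as tf
from tensorflow.python.client import timeline

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

# run one traced training step
sess.run(train_op, options=run_options, run_metadata=run_metadata)

# write a Chrome trace; open it at chrome://tracing
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())
</code></pre> <p>The trace shows which ops run on which device and where the time is spent, which helps distinguish network stalls from compute or input-pipeline bottlenecks.</p>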
computer-vision|tensorflow|distributed|multi-gpu
2
3,888
39,500,258
Pandas: how to get the unique values of a column that contains a list of values?
<p>Consider the following dataframe:</p> <pre><code>df = pd.DataFrame({'name' : [['one two','three four'], ['one'], [], [], ['one two'], ['three']],
                   'col' : ['A','B','A','B','A','B']})
df.sort_values(by='col', inplace=True)

df
Out[62]:
  col                   name
0   A  [one two, three four]
2   A                     []
4   A              [one two]
1   B                  [one]
3   B                     []
5   B                [three]
</code></pre> <p>I would like to get a column that keeps track of all the unique strings included in <code>name</code> for each value of <code>col</code>.</p> <p>That is, the expected output is</p> <pre><code>df
Out[62]:
  col                   name            unique_list
0   A  [one two, three four]  [one two, three four]
2   A                     []  [one two, three four]
4   A              [one two]  [one two, three four]
1   B                  [one]           [one, three]
3   B                     []           [one, three]
5   B                [three]           [one, three]
</code></pre> <p>Indeed, say for group A, you can see that the unique set of strings included in <code>[one two, three four]</code>, <code>[]</code> and <code>[one two]</code> is <code>[one two, three four]</code>.</p> <p>I can obtain the corresponding number of unique values using <a href="https://stackoverflow.com/questions/38355931/pandas-how-to-get-the-unique-number-of-values-in-cells-when-cells-contain-list">Pandas : how to get the unique number of values in cells when cells contain lists?</a>:</p> <pre><code>df['count_unique'] = df.groupby('col')['name'].transform(lambda x: list(pd.Series(x.apply(pd.Series).stack().reset_index(drop=True, level=1).nunique())))

df
Out[65]:
  col                   name  count_unique
0   A  [one two, three four]             2
2   A                     []             2
4   A              [one two]             2
1   B                  [one]             2
3   B                     []             2
5   B                [three]             2
</code></pre> <p>but replacing <code>nunique</code> with <code>unique</code> above fails.</p> <p>Any ideas? Thanks!</p>
<p>Here is the solution:</p> <pre><code>import numpy as np

# concatenate the lists within each group, then keep the unique strings
df['unique_list'] = df.col.map(df.groupby('col')['name'].sum().apply(np.unique))
df
</code></pre> <p><a href="https://i.stack.imgur.com/S8IdI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S8IdI.png" alt="enter image description here"></a></p>
python|pandas
4
3,889
39,824,469
Replacing Elements in 3 dimensional Python List
<p>I have a Python list that, if converted to a NumPy array, would have the following dimensions: (5, 47151, 10)</p> <pre><code>np.array(y_pred_list).shape  # returns (5, 47151, 10)
len(y_pred_list)             # returns 5
</code></pre> <p>I would like to go through every element and replace it as follows:</p> <ul> <li>If the element &gt;= 0.5, then 1.</li> <li>If the element &lt; 0.5, then 0.</li> </ul> <p>Any idea?</p>
<p>First convert the list to a NumPy array. Comparing the array against 0.5 then gives an array that is True where the element is &gt;= 0.5 and False otherwise:</p> <pre><code>import numpy as np

y_pred = np.array(y_pred_list)
new_array = y_pred &gt;= 0.5
</code></pre> <p>Then use the <code>.astype()</code> method for NumPy arrays to make all True elements 1 and all False elements 0:</p> <pre><code>new_array.astype(int)
</code></pre>
python|arrays|list|numpy
1
3,890
44,245,229
Drop duplicates with less precision
<p>I have a pandas DataFrame with string columns and float columns. I would like to use <code>drop_duplicates</code> to remove duplicates. Some of the duplicates are not exactly the same, because there are slight differences in low decimal places. How can I remove duplicates with less precision?</p> <p>Example:</p> <pre><code>import pandas as pd
df = pd.DataFrame.from_dict({'text': ['aaa','aaa','aaa','bb'],
                             'result': [1.000001, 1.000000, 2, 2]})
df
     result text
0  1.000001  aaa
1  1.000000  aaa
2  2.000000  aaa
3  2.000000   bb
</code></pre> <p>I would like to get</p> <pre><code>df_out = pd.DataFrame.from_dict({'text': ['aaa','aaa','bb'],
                                 'result': [1.000001, 2, 2]})
df_out
     result text
0  1.000001  aaa
1  2.000000  aaa
2  2.000000   bb
</code></pre>
<p>Round them first, then drop duplicates on the rounded frame and index back into the original:</p> <pre><code>df.loc[df.round().drop_duplicates().index]

     result text
0  1.000001  aaa
2  2.000000  aaa
3  2.000000   bb
</code></pre>
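<p>Note that <code>round()</code> with no argument rounds to whole numbers. If that is too coarse for your data, pass the number of decimal places you consider significant; for instance, treating values as equal up to 4 decimals:</p> <pre><code>df.loc[df.round(4).drop_duplicates().index]
</code></pre>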
python|pandas
3
3,891
44,353,509
Tensorflow tf.constant_initializer is very slow
<p>I am trying to use pre-trained word2vec embeddings of 100 dimensions for training an LSTM:</p> <pre><code>@staticmethod
def load_embeddings(pre_trained_embeddings_path, word_embed_size):
    embd = []
    import time
    start_time = time.time()
    cnt = 4
    with codecs.open(pre_trained_embeddings_path, mode="r", encoding='utf-8') as f:
        for line in f.readlines():
            values = line.strip().split(' ')
            embd.append(values[1:])
            cnt += 1
            if cnt % 100000 == 0:
                print("word-vectors loaded: %d" % cnt)

    embedding, vocab_size, embed_dim = embd, len(embd), len(embd[0])
    load_end_time = time.time()
    print("word vectors loaded from and start initialising, cnt: %d, time taken: %d secs " % (vocab_size, load_end_time - start_time))

    embedding_init = tf.constant_initializer(embedding, dtype=tf.float16)
    src_word_embedding = tf.get_variable(shape=[vocab_size, embed_dim], initializer=embedding_init, trainable=False, name='word_embedding', dtype=tf.float16)

    print("word-vectors loaded and initialised, cnt: %d, time taken: %d secs" % (vocab_size, time.time() - load_end_time))

    return src_word_embedding
</code></pre> <p>The output when running this method is:</p> <pre><code>word vectors loaded from and start initialising, cnt: 2419080, time taken: 74 secs
word-vectors loaded and initialised, cnt: 2419080, time taken: 1647 secs
</code></pre> <p>System info: <code>tensorflow 1.1.0, tcmalloc, python 3.6, ubuntu 14.04</code></p> <p>HALF an hour to initialize seems very slow. Is that normal behavior? Any idea what the issue could be, and whether there is one?</p> <p>UPDATE: using @sirfz's method of supplying the embeddings through a placeholder made loading them really fast: <code>Initialization Done in 85 secs</code></p>
<p>Loading large constants into a graph is not only slower, it also leaks lots of memory. I had a similar issue which <a href="https://github.com/tensorflow/tensorflow/issues/9742" rel="nofollow noreferrer">I reported not long ago</a> and the best workaround for me was:</p> <pre><code># placeholder for loading your saved embeddings
embedding_init = tf.placeholder(tf.float16, shape=[vocab_size, embed_dim])
src_word_embedding = tf.get_variable(initializer=embedding_init, trainable=False, name='word_embedding', dtype=tf.float16)

# run initialization with the value of embeddings placeholder
session.run(tf.global_variables_initializer(), feed_dict={embedding_init: embedding})
</code></pre>
python|tensorflow|lstm|word2vec
0
3,892
44,006,497
Check values in dataframe against another dataframe and append values if present
<p>I have two dataframes as follows:</p> <pre><code>DF1
A  B  C
1  2  3
4  5  6
7  8  9

DF2
Match  Values
1      a,d
7      b,c
</code></pre> <p>I want to match DF1['A'] with DF2['Match'] and append DF2['Values'] to DF1 if the value exists.</p> <p>So my result will be:</p> <pre><code>A  B  C  Values
1  2  3  a,d
7  8  9  b,c
</code></pre> <p>Now I can use the following code to match the values but it's returning an empty dataframe.</p> <pre><code>df1 = df1[df1['A'].isin(df2['Match'])]
</code></pre> <p>Any help would be appreciated.</p>
<p>Instead of doing a lookup, you can do this in one step by merging the dataframes:</p> <p><code>pd.merge(df1, df2, how='inner', left_on='A', right_on='Match')</code></p> <p>Specify <code>how='inner'</code> if you only want records that appear in both, <code>how='left'</code> if you want all of df1's data.</p> <p>If you want to keep only the Values column:</p> <p><code>pd.merge(df1, df2.set_index('Match')['Values'].to_frame(), how='inner', left_on='A', right_index=True)</code></p>
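<p>For illustration, a quick run with the sample frames from the question (a sketch; the helper <code>Match</code> column is dropped after the join):</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'A': [1, 4, 7], 'B': [2, 5, 8], 'C': [3, 6, 9]})
df2 = pd.DataFrame({'Match': [1, 7], 'Values': ['a,d', 'b,c']})

out = pd.merge(df1, df2, how='inner', left_on='A', right_on='Match').drop('Match', axis=1)
print(out)
#    A  B  C Values
# 0  1  2  3    a,d
# 1  7  8  9    b,c
</code></pre>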
python|pandas
3
3,893
44,087,637
Pandas how does IndexSlice work
<p>I am following this tutorial: <a href="https://github.com/TomAugspurger/pydata-chi-h2t/blob/master/3-Indexing.ipynb" rel="noreferrer">GitHub Link</a></p> <p>If you scroll down (Ctrl+F: Exercise: Select the most-reviewd beers ) to the section that says <code>Exercise: Select the most-reviewd beers</code>:</p> <p>The dataframe is multi-indexed: <a href="https://i.stack.imgur.com/M5QKz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/M5QKz.png" alt="enter image description here"></a></p> <p>To select the most-reviewed beers:</p> <pre><code>top_beers = df['beer_id'].value_counts().head(10).index
reviews.loc[pd.IndexSlice[:, top_beers], ['beer_name', 'beer_style']]
</code></pre> <p>My question is about the way IndexSlice is used here: how come you can skip the colon after top_beers and the code still runs?</p> <pre><code>reviews.loc[pd.IndexSlice[:, top_beers, :], ['beer_name', 'beer_style']]
</code></pre> <p>There are three index levels: <code>profile_name</code>, <code>beer_id</code> and <code>time</code>. Why does <code>pd.IndexSlice[:, top_beers]</code> work (without specifying what to do with the time level)?</p>
<p>To complement the previous answer, let me explain how <code>pd.IndexSlice</code> works and why it is useful.</p> <p>Well, there is not much to say about its implementation. As you read in the <a href="https://github.com/pandas-dev/pandas/blob/main/pandas/core/indexing.py" rel="nofollow noreferrer">source</a>, it just does the following:</p> <pre><code>class IndexSlice(object):
    def __getitem__(self, arg):
        return arg
</code></pre> <p>From this we see that <code>pd.IndexSlice</code> only forwards the arguments that <code>__getitem__</code> has received. Looks pretty stupid, doesn't it? However, it actually does something.</p> <p>As you certainly know already, <a href="https://docs.python.org/3/reference/datamodel.html#object.__getitem__" rel="nofollow noreferrer"><code>obj.__getitem__(arg)</code></a> is called if you access an object <code>obj</code> through its bracket operator <code>obj[arg]</code>. For sequence-type objects, <code>arg</code> can be either an integer or a <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer">slice object</a>. We rarely construct slices ourselves. Rather, we'd use the slice operator <code>:</code> for this purpose, e.g. <code>obj[0:5]</code>.</p> <p>And here comes the point. The Python interpreter converts these slice operators <code>:</code> into slice objects before calling the object's <code>__getitem__(arg)</code> method. Therefore, the return value of <code>IndexSlice.__getitem__()</code> will actually be a slice, an integer (if no <code>:</code> was used), or a tuple of these (if multiple arguments are passed). In summary, the only purpose of <code>IndexSlice</code> is that we don't have to construct the slices on our own. This behavior is particularly useful for <code>pd.DataFrame.loc</code>.</p> <p>Let's first have a look at the following examples:</p> <pre><code>import pandas as pd
idx = pd.IndexSlice

print(idx[0])            # 0
print(idx[0,'a'])        # (0, 'a')
print(idx[:])            # slice(None, None, None)
print(idx[0:3])          # slice(0, 3, None)
print(idx[0.1:2.3])      # slice(0.1, 2.3, None)
print(idx[0:3,'a':'c'])  # (slice(0, 3, None), slice('a', 'c', None))
</code></pre> <p>We observe that all usages of colons <code>:</code> are converted into slice objects. If multiple arguments are passed to the index operator, the arguments are turned into n-tuples.</p> <p>To demonstrate how this could be useful for a pandas data-frame <code>df</code> with a multi-level index, let's have a look at the following.</p> <pre><code># A sample table with three-level row-index
# and single-level column index.
import numpy as np
level0 = range(0,10)
level1 = list('abcdef')
level2 = ['I', 'II', 'III', 'IV']
mi = pd.MultiIndex.from_product([level0, level1, level2])
df = pd.DataFrame(np.random.random([len(mi), 2]),
                  index=mi, columns=['col1', 'col2'])

# Return a view on 'col1', selecting all rows.
df.loc[:, 'col1']                 # pd.Series

# Note: in the above example, the returned value has type
# pd.Series, because only one column is returned. One can
# enforce the returned object to be a data-frame:
df.loc[:, ['col1']]               # pd.DataFrame, or
df.loc[:, 'col1'].to_frame()

# Select all rows with top-level values 0:3.
df.loc[0:3, 'col1']

# If we want to create a slice for multiple index levels
# we need to pass somehow a list of slices. The following
# however leads to a SyntaxError because the slice
# operator ':' cannot be placed inside a list declaration.
df.loc[[0:3, 'a':'c'], 'col1']

# The following is valid python code, but looks clumsy:
df.loc[(slice(0, 3, None), slice('a', 'c', None)), 'col1']

# Here is why pd.IndexSlice is useful. It helps
# to create a slice that makes use of two index-levels.
df.loc[idx[0:3, 'a':'c'], 'col1']

# We can expand the slice specification by a third level.
df.loc[idx[0:3, 'a':'c', 'I':'III'], 'col1']

# A solitary slicing operator ':' means: take them all.
# It is equivalent to slice(None).
df.loc[idx[0:3, 'a':'c', :], 'col1']   # pd.Series

# Semantically, this is equivalent to the following,
# because the last ':' in the previous example does
# not add any information about the slice specification.
df.loc[idx[0:3, 'a':'c'], 'col1']      # pd.Series

# The following lines are also equivalent, but
# both expressions evaluate to a result with multiple columns.
df.loc[idx[0:3, 'a':'c', :], :]        # pd.DataFrame
df.loc[idx[0:3, 'a':'c'], :]           # pd.DataFrame
</code></pre> <p>In summary, <code>pd.IndexSlice</code> helps to improve readability when specifying slices for row and column indices.</p> <p>What pandas then does with these slices is a different story. It essentially selects rows/columns, starting from the topmost index-level and reducing the selection when going further down the levels, depending on how many levels have been specified. <code>pd.DataFrame.loc</code> is an object with its own <code>__getitem__()</code> function that does all this.</p> <p>As you pointed out already in one of your comments, pandas seemingly behaves weirdly in some special cases. The two examples you mentioned will actually evaluate to the same result. However, they are treated differently by pandas internally.</p> <pre><code># This will work.
reviews.loc[idx[top_reviewers, 99, :], ['beer_name', 'brewer_id']]

# This will fail with TypeError &quot;unhashable type: 'Index'&quot;.
reviews.loc[idx[top_reviewers, 99], ['beer_name', 'brewer_id']]

# This fixes the problem. (pd.Index is not hashable, a tuple is.
# However, the problem matters only with the second expression.)
reviews.loc[idx[tuple(top_reviewers), 99], ['beer_name', 'brewer_id']]
</code></pre> <p>Admittedly, the difference is subtle.</p>
python|pandas
16
3,894
69,543,880
How to create sum of binary dummy variable grouped by month in pandas?
<p>I have a dataframe</p> <pre><code>Month      | Acct_id | Sku
2020-01-01 | 1       | book
2020-01-02 | 2       | phone
2020-01-01 | 3       | book
</code></pre> <p>Now, I want to create dummies of the &quot;Sku&quot; column and sum the resulting binary values when grouping by month. Additionally, I also want to get a unique count for the &quot;Acct_id&quot; column, like this:</p> <pre><code>Month      | book | phone | total_accounts
2020-01-01 | 2    | 0     | 2
2020-01-02 | 0    | 1     | 1
</code></pre> <p>I am using</p> <pre><code>dummies = df.set_index('Month')['Sku'].str.get_dummies().sum(level=0).reset_index()
</code></pre> <p>But the output gives only binary values and is not summing. Also, it does NOT grab the account column the way I want it! How do I tweak this?</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>crosstab</code></a>:</p> <pre><code>res = pd.crosstab(index=df[&quot;Month&quot;], columns=df[&quot;Sku&quot;],
                  margins=True, margins_name=&quot;total_counts&quot;).drop(&quot;total_counts&quot;)
print(res)
</code></pre> <p><strong>Output</strong></p> <pre><code>Sku         book  phone  total_counts
Month
2020-01-01     2      0             2
2020-01-02     0      1             1
</code></pre> <p>If you need to strictly match the output, just do as @ddejohn suggested:</p> <pre><code>res = pd.crosstab(index=df[&quot;Month&quot;], columns=df[&quot;Sku&quot;],
                  margins=True, margins_name=&quot;total_counts&quot;).drop(&quot;total_counts&quot;)
res = res.reset_index().rename_axis(None, axis=1)
print(res)
</code></pre> <p><strong>Output</strong></p> <pre><code>        Month  book  phone  total_counts
0  2020-01-01     2      0             2
1  2020-01-02     0      1             1
</code></pre>
python|pandas
1
3,895
69,556,486
How to expand pandas column containing values in arrays to multiple columns?
<p>I have a data frame with a column named 'gear' which contains lists of values... What I want to do now is move each element of every list into the corresponding column. <a href="https://i.stack.imgur.com/Sa25i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Sa25i.png" alt="enter image description here" /></a></p> <p>Is there a way to do this without a for loop? For example, in the first row the value 'Hengerfeste' in the list should be moved to the 'Hengerfeste' column, and so on for each element in the list.</p>
<p>Try <code>explode</code>, then <code>groupby().value_counts()</code>:</p> <pre><code># sample data
df = pd.DataFrame({'col': [['a','b','c'], ['a','c','x'], [], ['b','x','y']]})

(df['col'].explode()          # one row per list element
   .groupby(level=0).value_counts()
   .unstack(fill_value=0)
   .reindex(df.index, fill_value=0)
)
</code></pre> <p>Output:</p> <pre><code>col  a  b  c  x  y
0    1  1  1  0  0
1    1  0  1  1  0
2    0  0  0  0  0
3    0  1  0  1  1
</code></pre>
arrays|pandas|list|dataframe|mapping
1
3,896
54,110,888
How can I vectorize my function to speed up the operation on my dataframe?
<p>I have a dataframe (below, i.e. membership) whose column A contains repeated values in sorted order. There is also a new column (new) which at the beginning of the process is a copy of column <code>C</code>. What I would like to do is: if the previous row in <code>A</code> is the same as the current row in <code>A</code>, and if either the current row of <code>new</code> or the previous row of <code>new</code> is <code>1</code>, then assign 1 to the current <code>new</code>. In the end, the last of the repeated values of <code>A</code> will have <code>new</code> equal to <code>1</code> or <code>0</code> depending on the conditions in the function, and the previous rows where <code>A</code> is repeated will have <code>new</code> set to <code>0</code>. I am able to accomplish that with the function below.</p> <pre><code>membership = pd.DataFrame.from_dict(dict([
    ('A', ['20000000460', '20000000460', '20000000460', '20000000460', '20000000459',
           '20000000461', '20000000461', '20000000462', '20000000464', '20000000464',
           '20000000464', '20000000464', '20000000465', '20000000465', '20000000466']),
    ('B', [4, 0, 5, 0, 6, 0, 2, 5, 6, 7, 4, 3, 2, 7, 9]),
    ('C', [1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1])]))

def members(df, field):
    df[field] = df.C
    print(field)
    for i in range(1, df.shape[0]):
        if (df.loc[i, 'A'] == df.loc[i-1, 'A']) and\
           (df.loc[i-1, field] == 1 or df.loc[i, field] == 1):
            df.loc[i, field] = 1
            df.loc[i-1, field] = 0
</code></pre> <p>The result of this function on the dataframe is shown in this <a href="https://i.stack.imgur.com/qnZWA.png" rel="nofollow noreferrer">image</a>.</p> <p>The issue is that I have a very large dataset and running this function on it is very slow. How can I improve the code to make it faster? I know that if I can vectorize this function in pandas, the time will improve significantly. How can I vectorize it?</p>
<p>IIUC, let me explain the logic a little and see if it matches.</p> <p>If, in any group of A, a value of C is equal to 1, then assign a value of 1 to column 'new' for the last record in that group.</p> <pre><code>import numpy as np

membership['new'] = membership.groupby('A')['C']\
                              .transform(lambda x: np.where(x.index == x.index[-1], x.max(), 0))
</code></pre> <p>Output:</p> <pre><code>              A  B  C  new
0   20000000460  4  1    0
1   20000000460  0  1    0
2   20000000460  5  0    0
3   20000000460  0  0    1
4   20000000459  6  0    0
5   20000000461  0  1    0
6   20000000461  2  0    1
7   20000000462  5  1    1
8   20000000464  6  1    0
9   20000000464  7  1    0
10  20000000464  4  0    0
11  20000000464  3  0    1
12  20000000465  2  0    0
13  20000000465  7  0    0
14  20000000466  9  1    1
</code></pre>
python|pandas|dataframe|data-science
0
3,897
53,870,486
Fastest way to add rows to existing pandas dataframe
<p>I'm currently trying to create a new csv based on an existing csv.</p> <p>I can't find a faster way to set values of a dataframe based on an existing dataframe's values.</p> <pre><code>import pandas
import sys
import numpy
import time

# path to file as argument
path = sys.argv[1]
df = pandas.read_csv(path, sep="\t")

# only care about lines with response_time
df = df[pandas.notnull(df['response_time'])]

# new empty dataframe
new_df = pandas.DataFrame(index=df["datetime"])

# new_df needs to have datetime as index
# and columns based on a combination
# of 2 column names from the previous dataframe
# (there are only 10 different combinations)
# and response_time as values, so there will be lots of
# blank cells but I don't care

for i, row in df.iterrows():
    start = time.time()
    new_df.set_value(row["datetime"], row["name"] + "-" + row["type"], row["response_time"])
    print(i, time.time() - start)
</code></pre> <p>The original dataframe is:</p> <pre><code>   datetime                    name           type   response_time
0  2018-12-18T00:00:00.500829  HSS_ANDROID    audio  0.02430
1  2018-12-18T00:00:00.509108  HSS_ANDROID    video  0.02537
2  2018-12-18T00:00:01.816758  HSS_TEST       audio  0.03958
3  2018-12-18T00:00:01.819865  HSS_TEST       video  0.03596
4  2018-12-18T00:00:01.825054  HSS_ANDROID_2  audio  0.02590
5  2018-12-18T00:00:01.842974  HSS_ANDROID_2  video  0.03643
6  2018-12-18T00:00:02.492477  HSS_ANDROID    audio  0.01575
7  2018-12-18T00:00:02.509231  HSS_ANDROID    video  0.02870
8  2018-12-18T00:00:03.788196  HSS_TEST       audio  0.01666
9  2018-12-18T00:00:03.807682  HSS_TEST       video  0.02975
</code></pre> <p>new_df will look like this:</p> <p><a href="https://i.stack.imgur.com/VdK9J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VdK9J.png" alt="enter image description here"></a></p> <p>It takes 7 ms per loop.</p> <p>It takes an eternity to process an (only!) 400,000-row DataFrame. How can I make it faster?</p>
<p>Indeed, using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> will do what you are looking for:</p> <pre><code>import pandas as pd

new_df = pd.pivot(df.datetime, df.name + '-' + df.type, df.response_time)
print(new_df.head())

                            HSS_ANDROID-audio  HSS_ANDROID-video  \
datetime
2018-12-18T00:00:00.500829             0.0243                NaN
2018-12-18T00:00:00.509108                NaN            0.02537
2018-12-18T00:00:01.816758                NaN                NaN
2018-12-18T00:00:01.819865                NaN                NaN
2018-12-18T00:00:01.825054                NaN                NaN

                            HSS_ANDROID_2-audio  HSS_ANDROID_2-video  \
datetime
2018-12-18T00:00:00.500829                  NaN                  NaN
2018-12-18T00:00:00.509108                  NaN                  NaN
2018-12-18T00:00:01.816758                  NaN                  NaN
2018-12-18T00:00:01.819865                  NaN                  NaN
2018-12-18T00:00:01.825054               0.0259                  NaN

                            HSS_TEST-audio  HSS_TEST-video
datetime
2018-12-18T00:00:00.500829             NaN             NaN
2018-12-18T00:00:00.509108             NaN             NaN
2018-12-18T00:00:01.816758         0.03958             NaN
2018-12-18T00:00:01.819865             NaN         0.03596
2018-12-18T00:00:01.825054             NaN             NaN
</code></pre> <p>And to avoid <code>NaN</code>, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a> with any value you want, such as:</p> <pre><code>new_df = pd.pivot(df.datetime, df.name + '-' + df.type, df.response_time).fillna(0)
</code></pre>
python|pandas
3
3,898
38,445,281
creating text file with multiple arrays of mixed types python
<p>I am trying to create a text file with multiple arrays as the columns of this file. The trick is that each array is a different datatype. For example:</p> <pre><code>import numpy as np

a = np.zeros(100, dtype=np.int) + 2           # integers, all twos
b = np.array(['NA'] * 100)                    # strings, all 'NA'
c = np.ones(100, dtype=np.float) * 99.9999    # floats, all 99.9999

np.savetxt('filename.txt', [a, b, c], delimiter='\t')
</code></pre> <p>However, I get an error:</p> <pre><code>TypeError: Mismatch between array dtype ('|S32') and format specifier ('%.18e %.18e %.18e ... %.18e')
</code></pre> <p>Any ideas? Thanks!</p>
<p>I recommending using <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a> to accomplish this task, which can easily handle multiple data types while writing out a new text file.</p> <pre><code>import numpy as np import pandas as pd # Create an empty DataFrame df = pd.DataFrame() # Populate columns in the dataframe with numpy arrays of different data types df['a'] = np.zeros(100, dtype=np.int)+2 df['b'] = np.array(['NA']*100) df['c'] = np.ones(100, dtype=np.float)*99.9999 # Store the data in a new text file df.to_csv('./my_text_file.txt', index=False) </code></pre> <p>Opening up the .txt file reveals:</p> <pre><code>a,b,c 2,NA,99.999 2,NA,99.999 2,NA,99.999 ... </code></pre>
python|arrays|file|numpy|text
2
3,899
66,094,075
Filtering all entries on today's date (pandas)
<p>I want to filter out all the rows in my data that have today's date in a column.</p> <p>The ('Fixture', 'Date') column holds pandas datetime values.</p> <pre><code>0      2021-05-02
1      2021-06-02
2      2021-06-02
3      2021-06-02
4      2021-06-02
189    2021-06-02
190    2021-06-02
191    2021-07-02
192    2021-07-02
193    2021-08-02
</code></pre> <p>I had the following code filtering it in my script, and if I remember correctly it was working in the past.</p> <pre><code>today = probs_final[probs_final[&quot;Fixture&quot;,&quot;Date&quot;].dt.date.eq(datetime.datetime.today().date())]
</code></pre> <p>But now it returns an empty data-frame.</p> <p>I checked this <a href="https://stackoverflow.com/a/56093654/847773">answer</a>, but this does not work either:</p> <pre><code>today = probs_final[probs_final[&quot;Fixture&quot;,&quot;Date&quot;].dt.date.eq(str(datetime.datetime.now().date()))]
</code></pre>
<p>Your error is that you misunderstand the date format. 2021-08-02 means August 2nd, 2021, not February 8th, 2021 (which may now be today in some time zones).</p> <p>Your code is fine; your dates aren't.</p> <p>Edit:</p> <p>The source of the problem seems to be how you ingest the CSV file. I have had some success using the infer_datetime_format parameter of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv" rel="nofollow noreferrer">read_csv</a>:</p> <pre><code>pd.read_csv(..., infer_datetime_format=True)
</code></pre>
python|pandas|dataframe|datetime
1