Dataset column summary:

Column      Type    Values / string lengths
Unnamed: 0  int64   0 to 378k
id          int64   49.9k to 73.8M
title       string  lengths 15 to 150
question    string  lengths 37 to 64.2k
answer      string  lengths 37 to 44.1k
tags        string  lengths 5 to 106
score       int64   -10 to 5.87k
3,300
51,342,841
How to remove this warning in Python 3
<p>Trying to lowercase and strip a column in Python 3 using pandas, but getting the warning below. What is the right way to do this so the warning does not come up?</p> <pre><code>df["col1"] = df[["col1"]].apply(lambda x: x.str.strip())
df["col1"] = df[["col1"]].apply(lambda x: x.str.lower())
</code></pre> <p>The warning:</p> <pre><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  self[k1] = value[k2]
</code></pre> <p>How do I remove the warning?</p>
<p>To get rid of this warning, apply the operation to a Series instead of a DataFrame. Using <code>df[["col1"]]</code> creates a new DataFrame whose result you then assign back to the column. If you instead modify the column directly, it will be fine. Additionally, the two operations can be chained together:</p> <pre><code>df["col1"] = df["col1"].str.strip().str.lower()
</code></pre>
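<p>For reference, a minimal sketch that reproduces the warning and the fix (the warning typically appears when <code>df</code> is itself a slice of another DataFrame, so taking an explicit <code>copy()</code> is another common remedy):</p> <pre><code>import pandas as pd

base = pd.DataFrame({"col1": ["  Hello ", " WORLD "], "col2": [1, 2]})
df = base[base["col2"] &gt; 0]   # a slice; assigning into it can trigger the warning
df = df.copy()                # work on an explicit copy instead
df["col1"] = df["col1"].str.strip().str.lower()
</code></pre>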
python|python-3.x|pandas
1
3,301
51,405,517
How to iterate through tensors in custom loss function?
<p>I'm using Keras with the TensorFlow backend. My goal is to query the <code>batchsize</code> of the current batch in a <strong>custom loss</strong> function. This is needed to compute values of the custom loss function which depend on the index of particular observations. I'd like to make this clearer with the minimal reproducible examples below.</p> <p>(BTW: Of course I could use the batch size defined for the training procedure and plug in its value when defining the custom loss function, but there are some reasons why this can vary, especially if <code>epochsize % batchsize</code> (epochsize modulo batchsize) is nonzero; then the last batch of an epoch has a different size. I didn't find a suitable approach on Stack Overflow, especially e.g. <a href="https://stackoverflow.com/q/46200080/10099959">Tensor indexing in custom loss function</a> and <a href="https://stackoverflow.com/q/48524836/10099959">Tensorflow custom loss function in Keras - loop over tensor</a> and <a href="https://stackoverflow.com/q/43327668/10099959">Looping over a tensor</a>, because obviously the shape of a tensor can't be inferred when building the graph, which is the case for a loss function; shape inference is only possible when evaluating given the data, which is only possible given the graph. Hence I need to tell the custom loss function to do something with particular elements along a certain dimension without knowing the length of that dimension.)</p> <h1>(this is the same in all examples)</h1> <pre><code>from keras.models import Sequential
from keras.layers import Dense, Activation

# Generate dummy data
import numpy as np
data = np.random.random((1000, 100))
labels = np.random.randint(2, size=(1000, 1))

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
</code></pre> <h1>example 1: nothing special without issue, no custom loss</h1> <pre><code>model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
</code></pre> <p><em>(Output omitted, this runs perfectly fine)</em></p> <h1>example 2: nothing special, with a fairly simple custom loss</h1> <pre><code>def custom_loss(yTrue, yPred):
    loss = np.abs(yTrue - yPred)
    return loss

model.compile(optimizer='rmsprop',
              loss=custom_loss,
              metrics=['accuracy'])

# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
</code></pre> <p><em>(Output omitted, this runs perfectly fine)</em></p> <h1>example 3: the issue</h1> <pre><code>def custom_loss(yTrue, yPred):
    print(yPred)  # Output: Tensor("dense_2/Sigmoid:0", shape=(?, 1), dtype=float32)
    n = yPred.shape[0]
    for i in range(n):  # TypeError: __index__ returned non-int (type NoneType)
        loss = np.abs(yTrue[i] - yPred[int(i/2)])
    return loss

model.compile(optimizer='rmsprop',
              loss=custom_loss,
              metrics=['accuracy'])

# Train the model, iterating on the data in batches of 32 samples
model.fit(data, labels, epochs=10, batch_size=32)
</code></pre> <p>Of course the tensor has no shape info yet, since it can't be inferred when building the graph, only at training time. Hence <code>for i in range(n)</code> raises an error.
Is there any way to perform this?</p> <p><strong>The traceback of the output:</strong> <a href="https://i.stack.imgur.com/b9eCa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/b9eCa.png" alt="enter image description here"></a></p> <h1>-------</h1> <p>BTW here's my true custom loss function in case of any questions. I skipped it above for clarity and simplicity.</p> <pre><code>def neg_log_likelihood(yTrue, yPred):
    yStatus = yTrue[:, 0]
    yTime = yTrue[:, 1]
    n = yTrue.shape[0]
    for i in range(n):
        s1 = K.greater_equal(yTime, yTime[i])
        s2 = K.exp(yPred[s1])
        s3 = K.sum(s2)
        logsum = K.log(s3)
        loss = K.sum(yStatus[i] * yPred[i] - logsum)
    return loss
</code></pre> <p>Here's an image of the partial negative log-likelihood of the Cox proportional hazards model.</p> <p><a href="https://i.stack.imgur.com/jxmXd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jxmXd.png" alt="enter image description here"></a></p> <p>This is to clarify a question in the comments, to avoid confusion. I don't think it is necessary to understand this in detail to answer the question.</p>
<p>As usual, don't loop. There are severe performance drawbacks and also bugs. Use only backend functions unless totally unavoidable (usually it's not unavoidable).</p> <hr> <h2>Solution for example 3:</h2> <p>So, there is a very weird thing there...</p> <blockquote> <p>Do you really want to simply ignore half of your model's predictions? (Example 3)</p> </blockquote> <p>Assuming this is true, just duplicate your tensor in the last dimension, flatten and discard half of it. You get the exact effect you want.</p> <pre><code>def custom_loss(true, pred):
    n = K.shape(pred)[0:1]

    pred = K.concatenate([pred]*2, axis=-1)  # duplicate in the last axis
    pred = K.flatten(pred)                   # flatten
    pred = K.slice(pred,                     # take only half (= n samples)
                   K.constant([0], dtype="int32"),
                   n)

    return K.abs(true - pred)
</code></pre> <h2>Solution for your loss function:</h2> <p>If you have sorted times from greater to lower, just do a cumulative sum.</p> <blockquote> <p><strong>Warning:</strong> If you have one time per sample, you cannot train with mini-batches!!!<br> <code>batch_size = len(labels)</code></p> </blockquote> <p>It makes sense to have time in an additional dimension (many times per sample), as is done in recurrent and 1D conv networks. Anyway, considering your example as expressed, that is shape <code>(samples_equal_times,)</code> for <code>yTime</code>:</p> <pre><code>def neg_log_likelihood(yTrue, yPred):
    yStatus = yTrue[:, 0]
    yTime = yTrue[:, 1]
    n = K.shape(yTrue)[0]

    # sort the times and everything else from greater to lower:
    # note: you can have the data sorted already and avoid doing it here for performance
    # important: yTime will be sorted in the last dimension; make sure it's (None,) in this case
    # or that it's (None, time_length) in the case of many times per sample
    sortedTime, sortedIndices = tf.math.top_k(yTime, n, True)
    sortedStatus = K.gather(yStatus, sortedIndices)
    sortedPreds = K.gather(yPred, sortedIndices)

    # do the calculations
    exp = K.exp(sortedPreds)
    sums = K.cumsum(exp)  # this will have the sum for j &gt;= i in the loop
    logsums = K.log(sums)

    return K.sum(sortedStatus * sortedPreds - logsums)
</code></pre>
python|tensorflow|keras|loss-function
4
3,302
48,127,096
GroupBY frequency counts JSON response - nested field
<p>I'm trying to aggregate the response from an API call that returns a JSON object and get some frequency counts.</p> <p>I've managed to do it for one of the fields in the JSON response, but the same approach isn't working for a second field.</p> <p>Both fields are called "category", but the one that isn't working is nested within "outcome_status".</p> <p>The error I get is KeyError: 'category'.</p> <p>The code below uses a public API that does not require authentication, so it can be tested easily.</p> <pre><code>import simplejson
import requests

# make a polygon for use in the API call
lat_coord = 51.767538
long_coord = -1.497488
lat_upper = str(lat_coord + 0.02)
lat_lower = str(lat_coord - 0.02)
long_upper = str(long_coord + 0.02)
long_lower = str(long_coord - 0.02)

# call from the API - no authentication required
api_call = "https://data.police.uk/api/crimes-street/all-crime?poly=" + lat_lower + "," + long_upper + ":" + lat_lower + "," + long_lower + ":" + lat_upper + "," + long_lower + ":" + lat_upper + "," + long_upper + "&amp;date=2017-01"
print(api_call)
request_resp = requests.get(api_call).json()

import pandas as pd
import numpy as np
df_resp = pd.DataFrame(request_resp)

# frequency counts for non-nested field (this works)
df_resp.groupby('category').context.count()

# next bit tries to do the nested (this doesn't work)
# tried dropping nulls
df_outcome = df_resp['outcome_status'].dropna()
print(df_outcome)

# tried index reset
df_outcome.reset_index()

# just errors
df_outcome.groupby('category').date.count()
</code></pre>
<p>I think you will have the easiest time of it, if you expand the dict in the <code>"outcome_status"</code> column like:</p> <h3>Code:</h3> <pre><code>outcome_status = [ {'outcome_status_' + k: v for k, v in z.items()} for z in ( dict(category=None, date=None) if x is None else x for x in (y['outcome_status'] for y in request_resp) ) ] df = pd.concat([df_resp.drop('outcome_status', axis=1), pd.DataFrame(outcome_status)], axis=1) </code></pre> <p>This uses some comprehensions to rename the fields in the <code>outcome_status</code> by pre-pending <code>"outcome_status_"</code> to the key names and turning them into columns. It also expands <code>None</code> values as well.</p> <h3>Test Code:</h3> <pre><code>import requests import pandas as pd # make a polygon for use in the API call lat_coord = 51.767538 long_coord = -1.497488 lat_upper = str(lat_coord + 0.02) lat_lower = str(lat_coord - 0.02) long_upper = str(long_coord + 0.02) long_lower = str(long_coord - 0.02) # call from the API - no authentication required api_call = ("https://data.police.uk/api/crimes-street/all-crime?poly=" + lat_lower + "," + long_upper + ":" + lat_lower + "," + long_lower + ":" + lat_upper + "," + long_lower + ":" + lat_upper + "," + long_upper + "&amp;date=2017-01") request_resp = requests.get(api_call).json() df_resp = pd.DataFrame(request_resp) outcome_status = [ {'outcome_status_' + k: v for k, v in z.items()} for z in ( dict(category=None, date=None) if x is None else x for x in (y['outcome_status'] for y in request_resp) ) ] df = pd.concat([df_resp.drop('outcome_status', axis=1), pd.DataFrame(outcome_status)], axis=1) # just errors print(df.groupby('outcome_status_category').category.count()) </code></pre> <h3>Results:</h3> <pre><code>outcome_status_category Court result unavailable 4 Investigation complete; no suspect identified 38 Local resolution 1 Offender given a caution 2 Offender given community sentence 3 Offender given conditional discharge 1 Offender given penalty notice 2 Status update unavailable 6 Suspect charged as part of another case 1 Unable to prosecute suspect 9 Name: category, dtype: int64 </code></pre>
python|json|python-3.x|pandas|pandas-groupby
1
3,303
48,256,372
Neural Machine Translation model predictions are off-by-one
<p><strong>Problem Summary</strong></p> <p>In the following example, my NMT model has high loss because it correctly predicts <code>target_input</code> instead of <code>target_output</code>.</p> <pre><code>Targetin : 1 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 9 10 10 10 3 3 10 10 3 10 3 3 10 10 3 9 9 4 4 4 4 4 3 10 3 3 9 9 3 6 6 6 6 6 6 10 9 9 10 10 4 4 4 4 4 4 4 4 4 4 4 4 9 9 9 9 3 3 3 6 6 6 6 6 9 9 10 3 4 4 4 4 4 4 4 4 4 4 4 4 9 9 10 3 10 9 9 3 4 4 4 4 4 4 4 4 4 10 10 4 4 4 4 4 4 4 4 4 4 9 9 10 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 9 3 3 10 6 6 6 6 6 3 9 9 3 3 3 3 3 3 3 10 10 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 5 3 3 3 3 10 10 10 3 9 9 5 10 3 3 3 3 9 9 9 5 10 10 10 10 10 4 4 4 4 3 10 6 6 6 6 6 6 3 5 10 10 10 10 3 9 9 6 6 6 6 6 6 6 6 6 9 9 9 3 3 3 6 6 6 6 6 6 6 6 3 9 9 9 3 3 6 6 6 3 3 3 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Targetout : 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 9 10 10 10 3 3 10 10 3 10 3 3 10 10 3 9 9 4 4 4 4 4 3 10 3 3 9 9 3 6 6 6 6 6 6 10 9 9 10 10 4 4 4 4 4 4 4 4 4 4 4 4 9 9 9 9 3 3 3 6 6 6 6 6 9 9 10 3 4 4 4 4 4 4 4 4 4 4 4 4 9 9 10 3 10 9 9 3 4 4 4 4 4 4 4 4 4 10 10 4 4 4 4 4 4 4 4 4 4 9 9 10 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 9 3 3 10 6 6 6 6 6 3 9 9 3 3 3 3 3 3 3 10 10 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 5 3 3 3 3 10 10 10 3 9 9 5 10 3 3 3 3 9 9 9 5 10 10 10 10 10 4 4 4 4 3 10 6 6 6 6 6 6 3 5 10 10 10 10 3 9 9 6 6 6 6 6 6 6 6 6 9 9 9 3 3 3 6 6 6 6 6 6 6 6 3 9 9 9 3 3 6 6 6 3 3 3 3 3 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Prediction : 3 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 3 3 3 3 3 3 10 3 3 10 3 3 10 3 3 9 3 4 4 4 4 4 3 10 3 3 9 3 3 6 6 6 6 6 6 10 9 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 9 3 3 3 3 3 3 6 6 6 6 6 9 6 3 3 4 4 4 4 4 4 4 4 4 4 4 4 9 3 3 3 10 9 3 3 4 4 4 4 4 4 4 4 4 3 10 4 4 4 4 4 4 4 4 4 4 9 3 3 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 3 3 10 6 6 6 6 6 3 9 3 3 3 3 3 3 3 3 3 3 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 3 3 3 3 3 10 3 3 3 9 3 3 10 3 3 3 3 9 3 9 3 10 3 3 3 3 4 4 4 4 3 10 6 6 6 6 6 6 3 3 10 3 3 3 3 9 3 6 6 6 6 6 6 6 6 6 9 6 9 3 3 3 6 6 6 6 6 6 6 6 3 9 3 9 3 3 6 6 6 3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 Source : 9 16 4 7 22 22 19 1 12 19 12 18 5 18 9 18 5 8 12 19 19 5 5 19 22 7 12 12 6 19 7 3 20 7 9 14 4 11 20 12 7 1 18 7 7 5 22 9 13 22 20 19 7 19 7 13 7 11 19 20 6 22 18 17 17 1 12 17 23 7 20 1 13 7 11 11 22 7 12 1 13 12 5 5 19 22 5 5 20 1 5 4 12 9 7 12 8 14 18 22 18 12 18 17 19 4 19 12 11 18 5 9 9 5 14 7 11 6 4 17 23 6 4 5 12 6 7 14 
4 20 6 8 12 25 4 19 6 1 5 1 5 20 4 18 12 12 1 11 12 1 25 13 18 19 7 12 7 3 4 22 9 9 12 4 8 9 19 9 22 22 19 1 19 7 5 19 4 5 18 11 13 9 4 14 12 13 20 11 12 11 7 6 1 11 19 20 7 22 22 12 22 22 9 3 8 12 11 14 16 4 11 7 11 1 8 5 5 7 18 16 22 19 9 20 4 12 18 7 19 7 1 12 18 17 12 19 4 20 9 9 1 12 5 18 14 17 17 7 4 13 16 14 12 22 12 22 18 9 12 11 3 18 6 20 7 4 20 7 9 1 7 25 13 5 25 14 11 5 20 7 23 12 5 16 19 19 25 19 7 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>As is evident, the prediction matches up almost 100% with <code>target_input</code> instead of <code>target_output</code>, as it should (off-by-one). Loss and gradients are being calculated using <code>target_output</code>, so it is strange that predictions are matching up to <code>target_input</code>.</p> <p><strong>Model Overview</strong></p> <p>An NMT model predicts a sequence of words in a target language using a primary sequence of words in a source language. This is the framework behind Google Translate. Since NMT uses coupled-RNNs, it is supervised and required labelled target input and output.</p> <p>NMT uses a <code>source</code> sequence, a <code>target_input</code> sequence, and a <code>target_output</code> sequence. In the example below, the encoder RNN (blue) uses the source input words to produce a meaning vector, which it passes to the decoder RNN (red), which uses the meaning vector to produce output.</p> <p>When doing new predictions (inference), the decoder RNN uses its own previous output to seed the next prediction in the timestep. However, to improve training, it is allowed to seed itself with the correct previous prediction at each new timestep. 
This is why <code>target_input</code> is necessary for training.</p> <p><a href="https://i.stack.imgur.com/jus6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jus6z.jpg" alt="enter image description here" /></a></p> <p><strong>Code to get an iterator with source, target_in, target_out</strong></p> <pre><code>def get_batched_iterator(hparams, src_loc, tgt_loc): if not (os.path.exists('primary.csv') and os.path.exists('secondary.csv')): utils.integerize_raw_data() source_dataset = tf.data.TextLineDataset(src_loc) target_dataset = tf.data.TextLineDataset(tgt_loc) dataset = tf.data.Dataset.zip((source_dataset, target_dataset)) dataset = dataset.shuffle(hparams.shuffle_buffer_size, seed=hparams.shuffle_seed) dataset = dataset.map(lambda source, target: (tf.string_to_number(tf.string_split([source], delimiter=',').values, tf.int32), tf.string_to_number(tf.string_split([target], delimiter=',').values, tf.int32))) dataset = dataset.map(lambda source, target: (source, tf.concat(([hparams.sos], target), axis=0), tf.concat((target, [hparams.eos]), axis=0))) dataset = dataset.map(lambda source, target_in, target_out: (source, target_in, target_out, tf.size(source), tf.size(target_in))) # Proceed to batch and return iterator </code></pre> <p><strong>NMT model core code</strong></p> <pre><code>def __init__(self, hparams, iterator, mode): source, target_in, target_out, source_lengths, target_lengths = iterator.get_next() # Lookup embeddings embedding_encoder = tf.get_variable(&quot;embedding_encoder&quot;, [hparams.src_vsize, hparams.src_emsize]) encoder_emb_inp = tf.nn.embedding_lookup(embedding_encoder, source) embedding_decoder = tf.get_variable(&quot;embedding_decoder&quot;, [hparams.tgt_vsize, hparams.tgt_emsize]) decoder_emb_inp = tf.nn.embedding_lookup(embedding_decoder, target_in) # Build and run Encoder LSTM encoder_cell = tf.nn.rnn_cell.BasicLSTMCell(hparams.num_units) encoder_outputs, encoder_state = tf.nn.dynamic_rnn(encoder_cell, encoder_emb_inp, sequence_length=source_lengths, dtype=tf.float32) # Build and run Decoder LSTM with TrainingHelper and output projection layer decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(hparams.num_units) projection_layer = layers_core.Dense(hparams.tgt_vsize, use_bias=False) helper = tf.contrib.seq2seq.TrainingHelper(decoder_emb_inp, sequence_length=target_lengths) decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, encoder_state, output_layer=projection_layer) outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder) logits = outputs.rnn_output if mode is 'TRAIN' or mode is 'EVAL': # then calculate loss crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=target_out, logits=logits) target_weights = tf.sequence_mask(target_lengths, maxlen=tf.shape(target_out)[1], dtype=logits.dtype) self.loss = tf.reduce_sum((crossent * target_weights) / hparams.batch_size) if mode is 'TRAIN': # then calculate/clip gradients, then optimize model params = tf.trainable_variables() gradients = tf.gradients(self.loss, params) clipped_gradients, _ = tf.clip_by_global_norm(gradients, hparams.max_gradient_norm) optimizer = tf.train.AdamOptimizer(hparams.l_rate) self.update_step = optimizer.apply_gradients(zip(clipped_gradients, params)) if mode is 'EVAL': # then allow access to input/output tensors to printout self.src = source self.tgt_in = target_in self.tgt_out = target_out self.logits = logits </code></pre>
<p>The core issue is that an NMT model used to predict a language-like syntax with a repetitive structure becomes incentivized to simply predict whatever its previous prediction was. Since it is fed the correct previous prediction at each step by <code>TrainingHelper</code> to speed up training, this artificially produces a local minimum that the model is unable to get out of.</p> <p>The best option I have found is to weight the loss function such that the key points in the output sequence, where the output is not repetitive, are weighted more heavily. This incentivizes the model to get those points correct, rather than just repeating the past prediction.</p>
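<p>A minimal sketch of such a weighting (TF 1.x API, reusing the variable names from the question's training code; the 5x emphasis factor is an arbitrary illustration):</p> <pre><code># up-weight timesteps where the label differs from the previous one,
# so the model cannot do well by simply repeating its last prediction
changed = tf.cast(tf.not_equal(target_out[:, 1:], target_out[:, :-1]), tf.float32)
changed = tf.pad(changed, [[0, 0], [1, 0]])   # first timestep keeps the base weight
change_weights = 1.0 + 4.0 * changed          # 5x weight at transition points

crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=target_out, logits=logits)
target_weights = tf.sequence_mask(target_lengths, maxlen=tf.shape(target_out)[1], dtype=logits.dtype)
self.loss = tf.reduce_sum(crossent * target_weights * change_weights) / hparams.batch_size
</code></pre>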
python|tensorflow|machine-learning|recurrent-neural-network|tensorflow-datasets
1
3,304
48,239,019
Shape must be rank 1 but is rank 0 for 'CTCLoss' (op: 'CTCLoss')
<p>I've successfully converted a Tensor into a SparseTensor with this code:</p> <pre><code>def dense_to_sparse(dense_tensor, out_type):
    indices = tf.where(tf.not_equal(dense_tensor, tf.constant(0, dense_tensor.dtype)))
    values = tf.gather_nd(dense_tensor, indices)
    shape = tf.shape(dense_tensor, out_type=out_type)
    return tf.SparseTensor(indices, values, shape)
</code></pre> <p>I want to try out using a SparseTensor converted from a dense one:</p> <pre><code>input_layer = tf.placeholder(tf.float32, [None, 1596, 48])
dense_labels = tf.placeholder(tf.int32)
sparse_from_dense = dense_to_sparse(dense_labels, out_type=tf.int64)

cell_fw = grid_rnn.Grid2LSTMCell(num_units=128)
cell_bw = grid_rnn.Grid2LSTMCell(num_units=128)

bidirectional_grid_rnn = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, input_layer, dtype=tf.float32)
outputs = tf.reshape(bidirectional_grid_rnn[0], [-1, 256])

W = tf.Variable(tf.truncated_normal([256, 80], stddev=0.1, dtype=tf.float32), name='W')
b = tf.Variable(tf.constant(0., dtype=tf.float32, shape=[80], name='b'))

logits = tf.matmul(outputs, W) + b
logits = tf.reshape(logits, [tf.shape(input_layer)[0], -1, 80])
logits = tf.transpose(logits, (1, 0, 2))

loss = tf.nn.ctc_loss(inputs=logits, labels=sparse_from_dense, sequence_length=320)
</code></pre> <p>Unfortunately, when I do this, I encounter this error:</p> <pre><code>Shape must be rank 1 but is rank 0 for 'CTCLoss' (op: 'CTCLoss') with input shapes: [?,?,80], [?,1], [?], [].
</code></pre> <p>How do I fix this error?</p>
<p>From the Tensorflow documentation <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/nn/connectionist_temporal_classification__ctc_#ctc_loss" rel="nofollow noreferrer">https://www.tensorflow.org/versions/r0.12/api_docs/python/nn/connectionist_temporal_classification__ctc_#ctc_loss</a></p> <blockquote> <p>sequence_length: 1-D int32 vector, size [batch_size]. The sequence lengths.</p> </blockquote> <p>So you need to pass an array/vector of length batch_size instead of an integer.</p>
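<p>For example, with the shapes from the question (a sketch; after the transpose the logits are time-major, so axis 1 is the batch dimension):</p> <pre><code>batch_size = tf.shape(logits)[1]
seq_len = tf.fill([batch_size], 320)   # one length per batch element

loss = tf.nn.ctc_loss(inputs=logits, labels=sparse_from_dense, sequence_length=seq_len)
</code></pre>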
tensorflow|lstm
1
3,305
48,020,122
Currently Animating Scatter Plot With Static Frames. Is there a way to animate over a moving window instead?
<p>I have an array of arrays with format <code>[2000][200,3]</code> that I am creating an animated scatter plot of. 2000 is the number of frames and the interior arrays have format <code>[length, [x,y,inten]]</code>, which are the points to scatter. So for example a single frame will look like:</p> <pre><code>Array[0] = np.array([[x_1,y_1,I_1], [x_2,y_2,I_2], ..., [x_200,y_200,I_200]])
</code></pre> <p>So we have 2000 frames of 200 points each. These points are arbitrarily truncated every 200 and are actually sequential. So I can feasibly reshape the array into:</p> <pre><code>Array = np.array([[x_1,y_1,I_1], [x_2,y_2,I_2], ..., [x_400000,y_400000,I_400000]])
</code></pre> <p>which is no problem for me; I know how to do this.</p> <p>My question is how I can animate a scatter plot that adaptively moves through the points instead of displaying 200-point bins. The code below allows me to plot an animated scatter plot with frames (1-200, 201-400, 401-600, etc.), but the result is not very smooth to the eye. Ideally I would like something that updates at every point, or at least every 10 points, so for example frames (1-200, 2-201, 3-202, etc.) or (1-200, 11-210, 21-220, etc.)</p> <pre><code>numframes = len(Array)

plt.ion()
fig, ax = plt.subplots()
norm = plt.Normalize(Array[:][:,2].min(), Array[:][:,2].max())
sc = ax.scatter(Array[0][:,0], Array[0][:,1], c=Array[0][:,2], cmap=cm.hot, s=5)
plt.xlim(-40, 40)
plt.ylim(0, 200)
plt.draw()

for i in range(numframes):
    sc.set_offsets(np.c_[Array[i][:,0], Array[i][:,1]])
    sc.set_array(Array[i][:,2])
    print(i)
    plt.pause(0.1)

plt.ioff()
plt.show()
</code></pre>
<p>The code below steps continuously through my array of points with a given step size and a window of 200, instead of discretely binning every 200.</p> <pre><code>stepsize = 10

NewArray = np.ravel(Array)
NewArray = NewArray.reshape(2000*200, 3)

plt.ion()
fig, ax = plt.subplots()
norm = plt.Normalize(NewArray[:,2].min(), NewArray[:,2].max())
sc = ax.scatter(NewArray[0:200,0], NewArray[0:200,1], c=NewArray[0:200,2], cmap=cm.jet, s=5)
plt.xlim(-40, 40)
plt.ylim(0, 200)
plt.draw()

for i in range(len(NewArray)//stepsize - 200):
    sc.set_offsets(np.c_[NewArray[(i*stepsize):(i*stepsize)+200, 0],
                         NewArray[(i*stepsize):(i*stepsize)+200, 1]])
    sc.set_array(NewArray[(i*stepsize):(i*stepsize)+200, 2])
    plt.pause(0.1)

plt.ioff()
plt.show()
</code></pre>
python|numpy|animation|matplotlib|plot
0
3,306
48,016,370
How to split a CSV column with repeated text into split 0-1 columns for each possible text variant?
<p>I have a CSV with a column like</p> <pre><code>LABEL a b a a c n o ye s </code></pre> <p>I want to split it into something like:</p> <pre><code>LABEL_a LABEL_b LABEL_c LABEL_n_o LABEL_ye_s 1 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 </code></pre> <p>How to do such thing with pandas?</p>
<p>Using <code>get_dummies</code>:</p> <pre><code>s.str.get_dummies().add_prefix('label_')
Out[19]:
   label_a  label_b  label_c  label_n o  label_ye s
0        1        0        0          0           0
1        0        1        0          0           0
2        1        0        0          0           0
3        1        0        0          0           0
4        0        0        1          0           0
5        0        0        0          1           0
6        0        0        0          0           1
</code></pre>
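<p>Here <code>s</code> is assumed to be the <code>LABEL</code> column read from the CSV, e.g.:</p> <pre><code>import pandas as pd

df = pd.read_csv('data.csv')      # hypothetical file name
s = df['LABEL']
df = df.join(s.str.get_dummies().add_prefix('LABEL_'))
</code></pre>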
python|pandas|csv
3
3,307
48,335,755
df.loc is giving a KeyError in a Linux environment where the same code works fine on Mac
<p>I have a dataframe like this:</p> <pre><code>        key   epic  uname  port
0  PORT-100   None  user5  None
1  PORT-101   None  user1  None
2  PORT-102   None     NA  None
3  PORT-103   None     NA  None
4  PORT-104   None  user2  None
5  PORT-105   None  user3  None
</code></pre> <p>and I have a dictionary:</p> <pre><code>{'PORT-10': ['PORT-100', 'ST-111'],
 'PORT-100': ['PORT-105', ],
 'PORT-101': ['PORT-103']}
</code></pre> <p>I want to change the <code>port</code> column of the dataframe according to the port dictionary, i.e. if any key in the dataframe matches the list of values inside the dictionary, then I'll assign that dict key to <code>df['port']</code>. I am doing it like below:</p> <pre><code>for port in port_dict:
    df.loc[df['key'].isin(port_dict[port]), 'port'] = port
</code></pre> <p>It is working fine on my Mac but giving a KeyError in Linux. I have tried using try...except KeyError, but no luck. I am using Python 3.6 in both cases. Any idea why it behaves differently in the Linux environment, even though I am using the same Python version?</p>
<p>As noted in the comments, this may be due to different versions of pandas being used. At any rate, the more idiomatic way of doing this is to use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html#pandas.Series.map" rel="nofollow noreferrer"><code>Series.map</code></a>. You can read more about vectorizing operations over data <a href="https://pandas.pydata.org/pandas-docs/stable/basics.html#applying-elementwise-functions" rel="nofollow noreferrer">in the docs</a>.</p> <pre><code>port_dict2 = {v: k for k, vs in port_dict.items() for v in vs}
df['port'] = df['key'].map(port_dict2)
</code></pre> <p>This works on my Mac and Linux Mint VM.</p>
pandas|dataframe
3
3,308
48,713,967
Uploading CSV files to Fusion Tables through Python
<p>I am trying to grab data from looker and insert it directly into Google Fusion Tables using the MediaFileUpload so as to not download any files and upload from memory. My current code below returns a TypeError. Any help would be appreciated. Thanks! </p> <p>Error returned to me:</p> <pre><code>Traceback (most recent call last): File "csvpython.py", line 96, in &lt;module&gt; main() File "csvpython.py", line 88, in main media = MediaFileUpload(dataq, mimetype='application/octet-stream', resumable=True) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper return wrapped(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/googleapiclient/http.py", line 548, in __init__ fd = open(self._filename, 'rb') TypeError: expected str, bytes or os.PathLike object, not NoneType </code></pre> <p>Code in question:</p> <pre><code>for x, y, z in zip(look, destination, fusion): look_data = lc.run_look(x) df = pd.DataFrame(look_data) stream = io.StringIO() dataq = df.to_csv(path_or_buf=stream, sep=";", index=False) media = MediaFileUpload(dataq, mimetype='application/octet-stream', resumable=True) replace = ftserv.table().replaceRows(tableId=z, media_body=media, startLine=None, isStrict=False, encoding='UTF-8', media_mime_type='application/octet-stream', delimiter=';', endLine=None).execute() </code></pre> <p>After switching dataq to stream in MediaFileUpload, I have had the following returned to me:</p> <pre><code> Traceback (most recent call last): File "quicktestbackup.py", line 96, in &lt;module&gt; main() File "quicktestbackup.py", line 88, in main media = MediaFileUpload(stream, mimetype='application/octet-stream', resumable=True) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper return wrapped(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/googleapiclient/http.py", line 548, in __init__ fd = open(self._filename, 'rb') TypeError: expected str, bytes or os.PathLike object, not _io.StringIO </code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.to_csv.html#pandas.DataFrame.to_csv%20Df" rel="nofollow noreferrer"><code>DataFrame.to_csv</code> is a void method</a> and any side effects from calling it are passed to <code>stream</code> and not <code>dataq</code>. That is, <code>dataq</code> is <code>NoneType</code> and has no data - your CSV data is in <code>stream</code>.<br> When you construct the media file from the io object, you need to feed it the data from the stream (and not the stream itself), thus its <code>getvalue()</code> <a href="https://docs.python.org/2/library/stringio.html#StringIO.StringIO.getvalue" rel="nofollow noreferrer">method</a> is needed.</p> <pre><code>df.to_csv(path_or_buf=stream, ...) media = MediaFileUpload(stream.getvalue(), ...) </code></pre> <p>The call to FusionTables looks to be perfectly valid.</p>
python-3.x|pandas|file-upload|stream|google-fusion-tables
1
3,309
48,750,682
Pandas - flattening a multiindex column containing tuples, but ignore missing values
<p>I have a multiindex pandas dataframe like this:</p> <pre><code>lst = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (21, 22)] df = pd.DataFrame(lst, pd.MultiIndex.from_product([['A', 'B'], ['1','2', '3', '4']])).loc[:('B', '2')] df["tuple"] = list(zip(df[0], df[1])) #df: 0 1 tuple A 1 1 2 (1, 2) 2 3 4 (3, 4) 3 5 6 (5, 6) 4 7 8 (7, 8) B 1 9 10 (9, 10) 2 11 12 (11, 12) </code></pre> <p>I want to transform the column, containing the tuples, into a list of tuples. My approach is:</p> <pre><code>#dataframe to append list of tuples new_df = pd.DataFrame([1, 2], index = list("AB") ) #voila a list of tuples new_df["list_of_tuples"] = df["tuple"].unstack(level = -1).values.tolist() #new_df: 0 list_of_tuples A 1 [(1, 2), (3, 4), (5, 6), (7, 8)] B 2 [(9, 10), (11, 12), None, None] </code></pre> <p>This works, but only for multiindex dataframes with equal length for each entry. If all entries don't have the same length, the missing columns give rise to a <code>None</code> value in the list. My attempts to remove numpy <code>NaN</code> values, before creating a list, failed. Is there an approach to prevent the appearance of <code>None</code> in the final list of tuples?</p>
<p>Is this what you need?</p> <pre><code>df.groupby(level=[0]).tuple.apply(list)
Out[306]:
A    [(1, 2), (3, 4), (5, 6), (7, 8)]
B                 [(9, 10), (11, 12)]
Name: tuple, dtype: object
</code></pre>
python|python-3.x|pandas|multi-index
3
3,310
48,554,149
How to rewrite TensorFlow checkpoint files?
<p>I want to change a checkpoint file's tensor values using tensors from many other checkpoint files, and then use the modified checkpoint files to restart TF training jobs. I hope you can offer some advice! Thanks!</p>
<p>There are standalone utilities for reading checkpoint files (search for <code>CheckpointReader</code> or <code>NewCheckpointReader</code>) but not modifying them. The easiest approach is probably to load the checkpoint into your model, assign a new value to the variable you want to change, and save this new checkpoint.</p>
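<p>A minimal sketch of that load/assign/save cycle (TF 1.x API; the variable name, new value and checkpoint paths are assumptions for illustration):</p> <pre><code># assumes the model graph defining `my_var` has already been built,
# and `new_value` holds e.g. a tensor read from another checkpoint
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')         # load the existing checkpoint
    sess.run(tf.assign(my_var, new_value))    # overwrite the variable's value
    saver.save(sess, 'model_modified.ckpt')   # write the modified checkpoint
</code></pre>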
tensorflow
1
3,311
48,701,827
numpy - 1 field value in 3d array from a 1d array
<p>I have this issue, I'm trying to build a 3D array where I need later to overwrite eg. [:,:,5] with a value from a 1D array. My arrays look like this in <code>numpy</code>:</p> <p>3D:</p> <pre><code>[[[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]] [[ 0. 150. 10. 300. 25. 0.] [ 1. 25. 2. 75. 7. 0.] [ 4. 0. 0. 0. 0. 0.] [ 5. 0. 0. 0. 0. 0.]]] </code></pre> <p>1D:</p> <pre><code>[ 1806. 1092. 150. 150. 2669. 150. 150. 150. 310. 7181.85] </code></pre> <p>.. and what I want is this:</p> <pre><code>3d[0][0][5] = 1d[0] 3d[0][1][5] = 1d[0] 3d[0][2][5] = 1d[0] 3d[0][3][5] = 1d[0] 3d[1][0][5] = 1d[1] 3d[1][1][5] = 1d[1] 3d[1][2][5] = 1d[1] 3d[1][3][5] = 1d[1] </code></pre> <p>and so on. I have been trying somthing like this:</p> <pre><code>list_product_pricegroup[:,:,5] = migrete_array[:] </code></pre> <p>without any kind of luck, hope someone can guide me in the right direction.</p>
<p>Your array <code>list_product_pricegroup</code> is 10x4x6 and <code>migrete_array</code> is a 1-D vector of 10. Since you index (5) the array <code>list_product_pricegroup</code> before assignment, it is now a 10x4 matrix. You then need to promote <code>migrete_array</code> to a 2-D array of size 10x1 so it can be broadcast, as such:</p> <pre><code>list_product_pricegroup[..., 5] = migrete_array[:, None]
</code></pre>
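<p>A quick self-contained demonstration of the broadcast, with the shapes from the question:</p> <pre><code>import numpy as np

a = np.zeros((10, 4, 6))
v = np.arange(10, dtype=float)
a[..., 5] = v[:, None]   # (10, 1) broadcasts across the 4 rows of each block
print(a[3, :, 5])        # [3. 3. 3. 3.]
</code></pre>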
python|arrays|numpy
2
3,312
48,610,132
Tensorflow crash with CUDNN_STATUS_ALLOC_FAILED
<p>Been searching the web for hours with no results, so figured I'd ask here.</p> <p>I'm trying to make a self driving car following Sentdex's tutorial, but when running the model, I get a bunch of fatal errors. I've searched all over the internet for the solution, and many seem to have the same problem. However, none of the solutions I've found (Including <a href="https://stackoverflow.com/questions/41117740/tensorflow-crashes-with-cublas-status-alloc-failed">this Stack-post</a>), work for me.</p> <p>Here is my software:</p> <ul> <li>Tensorflow: 1.5, GPU version</li> <li>CUDA: 9.0, with the patch</li> <li>CUDnn: 7</li> <li>Windows 10 Pro</li> <li>Python 3.6</li> </ul> <p>Hardware:</p> <ul> <li>Nvidia 1070ti, with latest drivers</li> <li>Intel i5 7600K</li> </ul> <p>Here is the crash log: </p> <p><code>2018-02-04 16:29:33.606903: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_blas.cc:444] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED 2018-02-04 16:29:33.608872: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_blas.cc:444] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED 2018-02-04 16:29:33.609308: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_blas.cc:444] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED 2018-02-04 16:29:35.145249: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED 2018-02-04 16:29:35.145563: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM 2018-02-04 16:29:35.149896: F C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\kernels\conv_ops.cc:717] Check failed: stream-&gt;parent()-&gt;GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo&lt;T&gt;(), &amp;algorithms)</code></p> <p>Here's my code:</p> <pre><code> import tensorflow as tf import numpy as np import cv2 import time from PIL import ImageGrab from getkeys import key_check from alexnet import alexnet import os from sendKeys import PressKey, ReleaseKey, W,A,S,D,Sp import random WIDTH = 80 HEIGHT = 60 LR = 1e-3 EPOCHS = 10 MODEL_NAME = 'DiRT-AI-Driver-{}-{}-{}-epochs.model'.format(LR, 'alexnetv2', EPOCHS) def straight(): PressKey(W) ReleaseKey(A) ReleaseKey(S) ReleaseKey(D) ReleaseKey(Sp) def left(): PressKey(A) ReleaseKey(W) ReleaseKey(S) ReleaseKey(D) ReleaseKey(Sp) def right(): PressKey(D) ReleaseKey(A) ReleaseKey(S) ReleaseKey(W) ReleaseKey(Sp) def brake(): PressKey(S) ReleaseKey(A) ReleaseKey(W) ReleaseKey(D) ReleaseKey(Sp) def handbrake(): PressKey(Sp) ReleaseKey(A) ReleaseKey(S) ReleaseKey(D) ReleaseKey(W) model = alexnet(WIDTH, HEIGHT, LR) model.load(MODEL_NAME) def main(): last_time = time.time() for i in list(range(4))[::-1]: print(i+1) time.sleep(1) paused = False while(True): if not paused: screen = np.array(ImageGrab.grab(bbox=(0,40,1024,768))) screen = cv2.cvtColor(screen,cv2.COLOR_BGR2GRAY) screen = cv2.resize(screen,(80,60)) print('Loop took {} seconds'.format(time.time()-last_time)) last_time = time.time() print('took time') prediction = model.predict([screen.reshape(WIDTH,HEIGHT,1)])[0] print('predicted') moves = list(np.around(prediction)) print('got moves') print(moves,prediction) if moves == [1,0,0,0,0]: straight() elif moves == [0,1,0,0,0]: left() elif moves == [0,0,1,0,0]: brake() elif 
moves == [0,0,0,1,0]: right() elif moves == [0,0,0,0,1]: handbrake() keys = key_check() if 'T' in keys: if paused: paused = False time.sleep(1) else: paused = True ReleaseKey(W) ReleaseKey(A) ReleaseKey(S) ReleaseKey(D) ReleaseKey(Sp) time.sleep(1) main() </code></pre> <p>I've found that the line that crashes Python and spawns the first three errors is this one:</p> <ul> <li><code>prediction = model.predict([screen.reshape(WIDTH,HEIGHT,1)])[0]</code></li> </ul> <p>When running the code, the CPU goes up to a whopping 100%, suggesting that something is seriously off. The GPU goes to about 40-50%.</p> <p>I've tried Tensorflow 1.2 and 1.3, as well as CUDA 8, to no avail. When installing CUDA I do not install the specific drivers, since they are too old for my GPU. I tried different cuDNN versions too, with no luck.</p>
<p>In my case, the issue happened because another python console with <code>tensorflow</code> imported was running. Closing it solved the problem.</p> <p>I have Windows 10, the main errors were :</p> <blockquote> <p>failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED</p> <p>Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED</p> </blockquote>
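<p>In other reported cases of these <code>ALLOC_FAILED</code> errors, letting TensorFlow allocate GPU memory on demand helps (a sketch using the TF 1.x API; whether it applies to a given setup is not guaranteed):</p> <pre><code>import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # grow GPU memory usage as needed
sess = tf.Session(config=config)
</code></pre>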
python|python-3.x|tensorflow|neural-network
12
3,313
48,716,906
Python - Concatenating two images and adding up their color channels
<p>I have two <code>500x500</code> images and need to merge them together, adding up their channels.</p> <p>When I used Numpy's concatenate function, for instance, the returned output became <code>500x1000</code>, and I'm not sure the color channels were added at all.</p> <p>The output I'm looking for when merging two colored <code>500x500</code> images would be <code>500x500x6</code>.</p> <p>How can I do that in Python?</p> <p>Thanks.</p>
<p>a couple of options, if you want separate RGB or stuck together:</p> <pre><code>np.stack([np.zeros((2,2,3)), np.ones((2,2,3))], axis=2) Out[157]: array([[[[ 0., 0., 0.], [ 1., 1., 1.]], [[ 0., 0., 0.], [ 1., 1., 1.]]], [[[ 0., 0., 0.], [ 1., 1., 1.]], [[ 0., 0., 0.], [ 1., 1., 1.]]]]) np.concatenate([np.zeros((2,2,3)), np.ones((2,2,3))], axis=2) Out[158]: array([[[ 0., 0., 0., 1., 1., 1.], [ 0., 0., 0., 1., 1., 1.]], [[ 0., 0., 0., 1., 1., 1.], [ 0., 0., 0., 1., 1., 1.]]]) </code></pre> <p>to address the above, extract each original img:</p> <pre><code>two_img =np.stack([np.zeros((2,2,3)), np.ones((2,2,3))], axis=2) two_img[...,0,:] Out[160]: array([[[ 0., 0., 0.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 0., 0., 0.]]]) two_img[...,1,:] Out[161]: array([[[ 1., 1., 1.], [ 1., 1., 1.]], [[ 1., 1., 1.], [ 1., 1., 1.]]]) too_img = np.concatenate([np.zeros((2,2,3)), np.ones((2,2,3))], axis=2) too_img[...,0:3] Out[163]: array([[[ 0., 0., 0.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 0., 0., 0.]]]) too_img[...,3:] Out[164]: array([[[ 1., 1., 1.], [ 1., 1., 1.]], [[ 1., 1., 1.], [ 1., 1., 1.]]]) </code></pre>
python|numpy|opencv
0
3,314
70,973,324
What does torch.rand(1, 3, 64, 64) mean?
<p>I am a beginner in PyTorch. In one tutorial I saw <strong>torch.rand(1, 3, 64, 64)</strong>; I understand that it creates a Tensor with random numbers following the standard normal distribution.</p> <p>The output looks like:</p> <pre><code>tensor([[[[0.1352, 0.5110, 0.7585,  ..., 0.9067, 0.4730, 0.8077],
          [0.2471, 0.8726, 0.3580,  ..., 0.4983, 0.9747, 0.5219],
          [0.8554, 0.4266, 0.0718,  ..., 0.6734, 0.8739, 0.6137],
          ...,
          [0.2132, 0.9319, 0.5361,  ..., 0.3981, 0.2057, 0.7032],
          [0.3347, 0.5330, 0.7019,  ..., 0.6713, 0.0936, 0.4706],
          [0.6257, 0.6656, 0.3322,  ..., 0.6664, 0.8149, 0.1887]],

         [[0.3210, 0.6469, 0.7772,  ..., 0.3175, 0.5102, 0.9079],
          [0.3054, 0.2940, 0.6611,  ..., 0.0941, 0.3826, 0.3103],
          [0.7484, 0.3442, 0.1034,  ..., 0.8028, 0.4643, 0.2800],
          ...,
          [0.9946, 0.5868, 0.8709,  ..., 0.4837, 0.6691, 0.5303],
          [0.1770, 0.5355, 0.8048,  ..., 0.1843, 0.0658, 0.3817],
          [0.9612, 0.0122, 0.5012,  ..., 0.4198, 0.3294, 0.2106]],

         [[0.5800, 0.5174, 0.5454,  ..., 0.3881, 0.3277, 0.5470],
          [0.8871, 0.7536, 0.9928,  ..., 0.8455, 0.8071, 0.0062],
          [0.2199, 0.0449, 0.2999,  ..., 0.3570, 0.7996, 0.3253],
          ...,
          [0.8238, 0.1100, 0.1489,  ..., 0.0265, 0.2165, 0.2919],
          [0.4074, 0.5817, 0.8021,  ..., 0.3417, 0.1280, 0.9279],
          [0.0047, 0.1796, 0.4522,  ..., 0.3257, 0.2657, 0.4405]]]])
</code></pre> <p>But what do the four parameters <strong>(1, 3, 64, 64)</strong> mean exactly? Thanks!</p>
<p>These parameters are the tensor's dimensions, i.e. its shape. Concretely, this code snippet generates a 4-dimensional tensor of random values between 0 and 1.</p>
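<p>For instance (reading the axes as (N, C, H, W): batch, channels, height, width; that is the usual convention for image tensors, though <code>torch.rand</code> itself attaches no meaning to the axes):</p> <pre><code>import torch

t = torch.rand(1, 3, 64, 64)   # uniform samples in [0, 1)
print(t.shape)                 # torch.Size([1, 3, 64, 64])
print(t.min().item() &gt;= 0.0, t.max().item() &lt; 1.0)   # True True
</code></pre>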
python|pytorch
1
3,315
51,884,792
Pandas - Insert blank row for each group in pandas
<p>I have a dataframe:</p> <pre><code>import pandas as pd
import numpy as np

df1 = pd.DataFrame({'group': [1,1,2,2,2],
                    'value': [2,3,np.nan,5,4]})
df1
   group  value
0      1      2
1      1      3
2      2    NaN
3      2      5
4      2      4
</code></pre> <p>I want to add a row after each group in which the value of <code>value</code> is <code>NaN</code>. The desired output is:</p> <pre><code>   group  value
0      1      2
1      1      3
2      1    NaN
3      2    NaN
4      2      5
5      2      4
6      2    NaN
</code></pre> <p>In my real dataset I have a lot of groups and more columns besides <code>value</code>; I want all of them to be <code>NaN</code> in the newly added row.</p> <p>Thanks a lot for the help.</p>
<h3><code>concat</code> with <code>append</code></h3> <pre><code>s = df1.groupby('group')
out = pd.concat([i.append({'value': np.nan}, ignore_index=True) for _, i in s])
out.group = out.group.ffill().astype(int)
</code></pre> <h3><code>apply</code> with <code>append</code><sup>[1]</sup></h3> <pre><code>df1.groupby('group').apply(
    lambda d: d.append({'group': d.name}, ignore_index=True).astype({'group': int})
).reset_index(drop=True)
</code></pre> <p>Both produce:</p> <pre><code>   group  value
0      1    2.0
1      1    3.0
2      1    NaN
3      2    NaN
4      2    5.0
5      2    4.0
6      2    NaN
</code></pre> <hr> <p><sup>[1]</sup> This solution brought to you by your local <a href="https://stackoverflow.com/users/2336654/pirsquared">@piRSquared</a></p>
python|pandas|dataframe
7
3,316
51,667,769
tf.Print not working if the graph is broken
<p>I'm trying to build a fully convolutional neural network. My problem is that at some phase the shapes of the tensors no longer match, causing an Exception, and I would like to print the shape of the tensors after each step to be able to pinpoint the problem. However, the problem is that tf.Print does not seem to print anything if the graph is broken and an Exception is thrown at some point (even if the exception occurs after the print statement in the pipeline). I'm using the code below for printing. It works OK if I have a working graph. So is it really the case that tf.Print can only be used with working graphs? If so, how could I print the shape of tensors, or is the only possibility to use some debugger, for example tfdbg?</p> <pre><code>upsample = custom_layers.crop_center(input_layer, upsample)
upsample_print = tf.Print(upsample, [tf.shape(upsample)], "shape of tensor is ")

logits = tf.reshape(upsample_print, [-1, 2])
...
</code></pre> <p>The error given is:</p> <pre><code>ValueError: Dimension size must be evenly divisible by 2898844 but is 2005644 for 'gradients/Reshape_grad/Reshape' (op: 'Reshape') with input shapes: [1002822,2], [4] and with input tensors computed as partial shapes: input[1] = [?,1391,1042,2].
</code></pre>
<p><code>tf.Print</code> only prints during runtime. It simply adds a node to the graph that upon execution prints something to the console. So, if your graph cannot be constructed, i.e. no computations can be executed, you will never see an output from <code>tf.Print</code>.</p> <p>At construction time, you can only see static shapes of your tensors (and e.g. print them with the Python native print statement). I am not aware of any way to get the dynamic shape at construction time (the dynamic shape is dependent on the actual input you feed, so there is no way of knowing that before you actually feed something, which only happens during runtime). Knowing the static shapes was often enough for my purposes. If it is not the case for you, try to make the dynamic dimensions static in a toy example and then Python-print all the shapes to track down the problem.</p>
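<p>For example, the static shapes can be printed while the graph is being built (a quick illustration using the tensor names from the question):</p> <pre><code># at graph-construction time, before any session runs:
print(upsample.get_shape())   # static shape; unknown dimensions show as ?
print(logits.get_shape())
</code></pre>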
python|debugging|tensorflow
0
3,317
51,917,032
Splitting CSV files in pandas in Python
<p>I am trying to load a specific column in pandas, but it keeps printing the name of the column and also skips the first part.</p> <p>Can anyone help me?</p> <p>This is the code I am using:</p> <pre><code>import pandas as pd

pd.set_option('display.max_colwidth', -1)

df_iter = pd.read_csv('tweets.csv', chunksize=10000, iterator=True, usecols=["text"])
df_iter = df_iter[1:]

for iter_num in enumerate(df_iter, -1):
    for line in df_iter:
        print(line)
</code></pre>
<p>Firstly, since you are reading the CSV in chunks, I would assume that the file is very large. You need to loop through those chunks to read all the data in the file, then you can merge/concatenate all these chunks.</p> <p>Secondly, enumerate() is not for dataframes; you need iterrows().</p> <p>Something like this:</p> <pre><code>import pandas as pd

pd.set_option('display.max_colwidth', -1)

df_iter = pd.read_csv('tweets.csv', chunksize=10000, iterator=True, usecols=["text"])

df_records = []  # list
for chunk in df_iter:
    df_records.append(chunk)

df_new = pd.concat(df_records)

for iter_num, value in df_new.iterrows():
    print(value[0])
</code></pre>
python|pandas
0
3,318
64,206,483
Creating a pandas DataFrame from a list of dictionaries
<p>I have the following data:</p> <pre><code>sentences = [{'mary':'N', 'jane':'N', 'can':'M', 'see':'V','will':'N'},
             {'spot':'N','will':'M','see':'V','mary':'N'},
             {'will':'M','jane':'N','spot':'V','mary':'N'},
             {'mary':'N','will':'M','pat':'V','spot':'N'}]
</code></pre> <p>I want to create a data frame where each key (from the pairs above) will be a row index and each value (from above) will be a column name. The cells of the data frame will count how often each key/value pairing occurs.</p> <p>The expected result should be:</p> <pre><code>df = pd.DataFrame([(4,0,0), (2,0,0), (0,1,0), (0,0,2), (1,3,0), (2,0,1), (0,0,1)],
                  index=['mary', 'jane', 'can', 'see', 'will', 'spot', 'pat'],
                  columns=('N','M','V'))
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a> per column in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>DataFrame.apply</code></a>, replace missing values, convert to integers and last transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.T.html" rel="nofollow noreferrer"><code>DataFrame.T</code></a>:</p> <pre><code>df = df.apply(pd.value_counts).fillna(0).astype(int).T
print (df)
      M  N  V
mary  0  3  1
jane  0  2  0
can   1  0  0
see   0  0  2
will  3  1  0
spot  0  2  1
pat   0  0  1
</code></pre> <p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.SeriesGroupBy.value_counts.html" rel="nofollow noreferrer"><code>SeriesGroupBy.value_counts</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a>:</p> <pre><code>df = df.stack().groupby(level=1).value_counts().unstack(fill_value=0)
print (df)
      M  N  V
can   1  0  0
jane  0  2  0
mary  0  3  1
pat   0  0  1
see   0  0  2
spot  0  2  1
will  3  1  0
</code></pre>
python-3.x|pandas|dataframe
3
3,319
64,441,356
ModuleNotFoundError: No module named 'official'
<p>File &quot;/home/abir/.local/lib/python3.6/site-packages/object_detection/models/ssd_efficientnet_bifpn_feature_extractor.py&quot;, line 33, in from official.vision.image_classification.efficientnet import efficientnet_model ModuleNotFoundError: No module named 'official'</p>
<p>You need to install the module <code>tf-models-official</code>.</p> <ul> <li>First open Command Prompt in Windows or a Terminal in Linux/Mac.</li> <li>On Windows, make sure <code>pip</code> is on the path, then run:</li> </ul> <pre><code>pip install -U tf-models-official
</code></pre> <ul> <li>If you have multiple Python versions, it's better to run <code>pipVERSION install -U tf-models-official</code>, where <code>pipVERSION</code> is e.g. <code>pip3.7</code>.</li> </ul>
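<p>Afterwards you can quickly check that the module resolves (a simple sanity check; the package provides the <code>official</code> namespace):</p> <pre><code>python -c "import official; print(official.__file__)"
</code></pre>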
python|tensorflow|keras|tensorflow2.0|object-detection
1
3,320
64,473,988
How to Create Partially Stacked Bar Plot
<p>I want to make a partially stacked bar plot of <em>n</em> elements where <em>n</em> - 1 elements are stacked, and the remaining element is another bar adjacent to the stacked bars of equal width. The adjacent bar element is plotted on a secondary y-axis and is typically a percentage, plotted between 0 and 1.</p> <p>The solution that I am currently using is able to represent the data fine, but I'm curious to know how I could achieve the desired result of an equal width single bar next to a stacked bar.</p> <pre><code>import pandas as pd import numpy as np from matplotlib import pyplot as plt import matplotlib.patches as mpatches mylabels=list('BCD') df = pd.DataFrame(np.random.randint(1,11,size=(5,4)), columns=list('ABCD')) df['A'] = list('abcde') df['D'] = np.random.rand(5,1) ax = df.loc[:,~df.columns.isin(['D'])].plot(kind='bar', stacked=True, x='A', figsize=(15,7)) ax2 = ax.twinx() ax2.bar(df.A,df.D, color='g', width=.1) ax2.set_ylim(0,1) handles, labels = ax.get_legend_handles_labels() green_patch = mpatches.Patch(color='g') handles.append(green_patch) ax.legend(handles=handles, labels=mylabels) ax.set_xlabel('') </code></pre> <p><a href="https://i.stack.imgur.com/P56hJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P56hJ.png" alt="Example" /></a></p>
<p>Let's try passing <code>align='edge'</code> and <code>width</code> to control the relative position of the bars:</p> <pre><code>ax = df.drop('D', axis=1).plot.bar(x='A', stacked=True, align='edge', width=-0.4) ax1=ax.twinx() df.plot.bar(x='A',y='D', width=0.4, align='edge', ax=ax1, color='C2') # manually set the limit so the left most bar isn't cropped ax.set_xlim(-0.5) # handle the legends handles, labels = ax.get_legend_handles_labels() h, l = ax1.get_legend_handles_labels() ax.legend(handles=handles + h, labels=mylabels+l) ax1.legend().remove() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/wAHzZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wAHzZ.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib
3
3,321
64,360,116
Error while installing Pandas and same kind of error while installing Datapane
<p>I am getting the following error when I try to install Pandas using pip install pandas. Python would install other modules correctly but not pandas and datapane. I am not sure what the error is saying, any help in fixing is appreciated.</p> <pre><code>Installing build dependencies ... error ERROR: Command errored out with exit status 1: command: 'c:\python\python39\python.exe' 'c:\python\python39\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\Mariya Susnerwala\AppDa ta\Local\Temp\pip-build-env-r96h80d0\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'Cython &gt;=0.29.21,&lt;3' 'numpy==1.15.4; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.15.4; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system!='&quot;' &quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system=='&quot;'&quot;' AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system=='&quot;'&quot;'AI X'&quot;'&quot;'' cwd: None Complete output (21 lines): Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system != &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot; and platform_system != &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.6&quot; and platform_system == &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot; and platform_system == &quot;AIX&quot;' don't match your environment Ignoring numpy: markers 'python_version &gt;= &quot;3.8&quot; and platform_system == &quot;AIX&quot;' don't match your environment Collecting setuptools Using cached setuptools-50.3.0-py3-none-any.whl (785 kB) Collecting wheel Using cached wheel-0.35.1-py2.py3-none-any.whl (33 kB) Collecting Cython&lt;3,&gt;=0.29.21 Using cached Cython-0.29.21-py2.py3-none-any.whl (974 kB) Collecting numpy==1.17.3 Using cached numpy-1.17.3.zip (6.4 MB) Using legacy 'setup.py install' for numpy, since package 'wheel' is not installed. Installing collected packages: setuptools, wheel, Cython, numpy Running setup.py install for numpy: started Running setup.py install for numpy: still running... Running setup.py install for numpy: still running... 
Running setup.py install for numpy: finished with status 'done' ERROR: Could not install packages due to an EnvironmentError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '&quot;C:' ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\python\python39\python.exe' 'c:\python\python39\lib\site-packages\pip' install --ignore-installed --no-user --prefi x 'C:\Users\Mariya Susnerwala\AppData\Local\Temp\pip-build-env-r96h80d0\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org /simple -- setuptools wheel 'Cython&gt;=0.29.21,&lt;3' 'numpy==1.15.4; python_version=='&quot;'&quot;'3.6'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.15.4; python_version=='&quot;' &quot;'3.7'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3.8'&quot;'&quot;' and platform_system!='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;' 3.6'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.16.0; python_version=='&quot;'&quot;'3.7'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' 'numpy==1.17.3; python_version&gt;='&quot;'&quot;'3. 8'&quot;'&quot;' and platform_system=='&quot;'&quot;'AIX'&quot;'&quot;'' Check the logs for full command output. </code></pre>
<p>Maybe try installing it with pipwin.</p> <pre><code>pip install pipwin </code></pre> <p>then</p> <pre><code>pipwin install pandas </code></pre>
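<p>A hedged alternative worth trying first: the log shows pip building numpy from source and then failing with <code>[WinError 123] ... '&quot;C:'</code>, which may be the space in the user path tripping up the isolated build environment. Upgrading pip (and friends) sometimes lets it pick prebuilt wheels instead of compiling, though this depends on wheels existing for your Python version:</p> <pre><code>python -m pip install --upgrade pip setuptools wheel
python -m pip install pandas
</code></pre>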
python|pandas|pip|datapane
0
3,322
64,356,199
Avoiding loops or list comprehension with numpy
<p>Is it possible to replace</p> <pre><code>np.concatenate([np.where(x == i)[0] for i in range(y)]) </code></pre> <p>with something that doesn't involve looping?</p> <p>I want to take an array x, e.g. [0, 1, 2, 0 , 2, 2], and a number y, e.g. 2 in this case, and output an array [0, 3, 1, 2, 4, 5]. E.g. for each integer in the array, write their index locations such that they're &quot;in order&quot;.</p> <p>Perhaps some kind of numpy function that can provide a performance boost over this list comprehension?</p>
<p>Here's an approach that uses <code>argsort</code>:</p> <pre><code># settings
x = np.array([0, 1, 2, 0, 2, 2])
y = 2

# sort the index; a stable sort keeps equal values in their original order
u = np.argsort(x, kind='stable')

# keep only the positions whose value is &lt;= y
mask = x[u] &lt;= y
u[mask]
</code></pre> <p>Output:</p> <pre><code>array([0, 3, 1, 2, 4, 5])
</code></pre>
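<p>If every value in <code>x</code> is already guaranteed to be at most <code>y</code> (as in the example), the mask is unnecessary and the stable sort alone gives the answer:</p> <pre><code>np.argsort(x, kind='stable')  # -&gt; array([0, 3, 1, 2, 4, 5])
</code></pre>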
python|performance|numpy
3
3,323
64,508,095
How to unstack after aggregation using Groupby in Pandas
<p>Hello Data Scientist and Pandas Experts,</p> <p>I need some help to figure out how to better organize my data after applying groupby aggregation method. I have tried unstack to new dataframe but it does not yield the intended results.</p> <p>Here is my data frame:</p> <pre><code>df = [{'Store': 's1', 'Date': Timestamp('2020-08-01 00:00:00'), 'Employee': 'a', 'Department': 'd1', 'ID': 's1ad1', 'Level': 2, 'duties': 'O'},\ {'Store': 's1', 'Date': Timestamp('2020-08-02 00:00:00'), 'Employee': 'a', 'Department': 'd2', 'ID': 's1ad2', 'Level': 2, 'duties': 'C'},\ {'Store': 's1', 'Date': Timestamp('2020-08-03 00:00:00'), 'Employee': 'a', 'Department': 'd1', 'ID': 's1ad1', 'Level': 2, 'duties': 'O'},\ {'Store': 's1', 'Date': Timestamp('2020-08-04 00:00:00'), 'Employee': 'a', 'Department': 'd1', 'ID': 's1ad1', 'Level': 2, 'duties': 'O'},\ {'Store': 's1', 'Date': Timestamp('2020-08-05 00:00:00'), 'Employee': 'a', 'Department': 'd2', 'ID': 's1ad2', 'Level': 2, 'duties': 'C'},\ {'Store': 's2', 'Date': Timestamp('2020-08-08 00:00:00'), 'Employee': 'a', 'Department': 'd1', 'ID': 's2ad1', 'Level': 2, 'duties': 'O'},\ {'Store': 's2', 'Date': Timestamp('2020-08-09 00:00:00'), 'Employee': 'a', 'Department': 'd3', 'ID': 's2ad3', 'Level': 2, 'duties': 'C'},\ {'Store': 's2', 'Date': Timestamp('2020-08-10 00:00:00'), 'Employee': 'a', 'Department': 'd1', 'ID': 's2ad1', 'Level': 2, 'duties': 'O'},\ {'Store': 's2', 'Date': Timestamp('2020-08-11 00:00:00'), 'Employee': 'a', 'Department': 'd1', 'ID': 's2ad1', 'Level': 2, 'duties': 'O'},\ {'Store': 's2', 'Date': Timestamp('2020-08-12 00:00:00'), 'Employee': 'a', 'Department': 'd3', 'ID': 's3ad1', 'Level': 2, 'duties': 'C'},\ {'Store': 's1', 'Date': Timestamp('2020-08-01 00:00:00'), 'Employee': 'b', 'Department': 'd1', 'ID': 's1bd1', 'Level': 1, 'duties': 'O'},\ {'Store': 's1', 'Date': Timestamp('2020-08-02 00:00:00'), 'Employee': 'b', 'Department': 'd2', 'ID': 's1bd2', 'Level': 1, 'duties': 'C'},\ {'Store': 's1', 'Date': Timestamp('2020-08-03 00:00:00'), 'Employee': 'b', 'Department': 'd1', 'ID': 's1bd1', 'Level': 1, 'duties': 'O'},\ {'Store': 's1', 'Date': Timestamp('2020-08-04 00:00:00'), 'Employee': 'b', 'Department': 'd1', 'ID': 's1bd1', 'Level': 1, 'duties': 'O'},\ {'Store': 's1', 'Date': Timestamp('2020-08-05 00:00:00'), 'Employee': 'b', 'Department': 'd2', 'ID': 's1bd2', 'Level': 1, 'duties': 'C'},\ {'Store': 's2', 'Date': Timestamp('2020-08-08 00:00:00'), 'Employee': 'c', 'Department': 'd1', 'ID': 's2ac1', 'Level': 3, 'duties': 'O'},\ {'Store': 's2', 'Date': Timestamp('2020-08-09 00:00:00'), 'Employee': 'c', 'Department': 'd3', 'ID': 's2cd3', 'Level': 3, 'duties': 'C'},\ {'Store': 's2', 'Date': Timestamp('2020-08-10 00:00:00'), 'Employee': 'c', 'Department': 'd1', 'ID': 's2cd1', 'Level': 3, 'duties': 'O'},\ {'Store': 's2', 'Date': Timestamp('2020-08-11 00:00:00'), 'Employee': 'c', 'Department': 'd1', 'ID': 's2cd1', 'Level': 3, 'duties': 'O'},\ {'Store': 's2', 'Date': Timestamp('2020-08-12 00:00:00'), 'Employee': 'c', 'Department': 'd3', 'ID': 's3cd1', 'Level': 3, 'duties': 'C'},\ {'Store': 's3', 'Date': Timestamp('2020-08-08 00:00:00'), 'Employee': 'd', 'Department': 'd1', 'ID': 's3cd1', 'Level': 3, 'duties': 'O'},\ {'Store': 's3', 'Date': Timestamp('2020-08-09 00:00:00'), 'Employee': 'd', 'Department': 'd3', 'ID': 's3dd3', 'Level': 3, 'duties': 'C'},\ {'Store': 's3', 'Date': Timestamp('2020-08-10 00:00:00'), 'Employee': 'd', 'Department': 'd1', 'ID': 's3dd1', 'Level': 3, 'duties': 'O'},\ {'Store': 's3', 'Date': Timestamp('2020-08-11 00:00:00'), 
'Employee': 'd','Department': 'd1', 'ID': 's3dd1', 'Level': 3, 'duties': 'O'},\ {'Store': 's3', 'Date': Timestamp('2020-08-12 00:00:00'), 'Employee': 'd', 'Department': 'd3', 'ID': 's3dd1', 'Level': 3, 'duties': 'C'}] </code></pre> <p>I want to organize my output so its nicely stacked in Store --&gt; Department --&gt; Employee. Something like as follow (Sorry the output is not nicely lined):</p> <pre><code>Store s1 s2 s3 Department d1 d2 d1 d3 d1 d3 Employee ID Level duties first shift last shift #ofshift first shift last shift #ofshift first shift last shift #ofshift first shift last shift #ofshift first shift last shift #ofshift first shift last shift #ofshift a s1ad1 2 O 2020-08-01 2020-08-04 3 a s1ad2 2 C 2020-08-02 2020-08-05 2 a s2ad1 2 O 2020-08-08 2020-08-11 3 a s2ad3 2 O 2020-08-09 2020-08-12 2 b s1bd1 1 O 2020-08-01 2020-08-04 3 b s1bd2 1 C 2020-08-02 2020-08-05 2 c s2cd1 3 O 2020-08-08 2020-08-11 3 c s3ad3 3 O 2020-08-09 2020-08-12 2 d s3dd1 3 O 2020-08-08 2020-08-11 3 d s3dd3 3 O 2020-08-09 2020-08-12 2 </code></pre> <p>So I have tried following Group by expression:</p> <pre><code>df = df.groupby(['Employee', 'Store', 'Department'])\ .agg({'Date':['first', 'last', 'size'], 'ID': 'first', 'Level': 'first', 'duties': 'first'}) # Join the Each Column with its operation. df.columns = df.columns.map('_'.join) # Reset the Index df = df.reset_index().set_index('Employee') # Renaming Columns of Dataframe. df.rename(columns={'Date_first':'First Shift', 'Date_last':'Last Shift', 'Date_size':'# of shift', 'ID_first':'ID', 'Level_first':'Level', 'duties_first':'duties'}, inplace=True) </code></pre> <p>This prints following results:</p> <pre><code> Store Department First Shift Last Shift # of shift ID_first Level duties Employee a s1 d1 2020-08-01 2020-08-04 3 s1ad1 2 O a s1 d2 2020-08-02 2020-08-05 2 s1ad2 2 C a s2 d1 2020-08-08 2020-08-11 3 s2ad1 2 O a s2 d3 2020-08-09 2020-08-12 2 s2ad3 2 C b s1 d1 2020-08-01 2020-08-04 3 s1bd1 1 O b s1 d2 2020-08-02 2020-08-05 2 s1bd2 1 C c s2 d1 2020-08-08 2020-08-11 3 s2ac1 3 O c s2 d3 2020-08-09 2020-08-12 2 s2cd3 3 C d s3 d1 2020-08-08 2020-08-11 3 s3cd1 3 O d s3 d3 2020-08-09 2020-08-12 2 s3dd3 3 C </code></pre> <p>Then I applied the unstack expression as follow:</p> <pre><code>df = df.groupby(['Employee', 'Store', 'Department', 'First Shift', 'Last Shift', '# of shift', 'ID_first',\ 'Level', 'duties'])\ .size()\ .unstack(['Store', 'Department']).fillna(0) </code></pre> <p>It prints our the result as follow:</p> <pre><code>Store s1 s2 s3 Department d1 d2 d1 d3 d1 d3 Employee First Shift Last Shift # of shift ID_first Level duties a 2020-08-01 2020-08-04 3 s1ad1 2 O 1.0 0.0 0.0 0.0 0.0 0.0 2020-08-02 2020-08-05 2 s1ad2 2 C 0.0 1.0 0.0 0.0 0.0 0.0 2020-08-08 2020-08-11 3 s2ad1 2 O 0.0 0.0 1.0 0.0 0.0 0.0 2020-08-09 2020-08-12 2 s2ad3 2 C 0.0 0.0 0.0 1.0 0.0 0.0 b 2020-08-01 2020-08-04 3 s1bd1 1 O 1.0 0.0 0.0 0.0 0.0 0.0 2020-08-02 2020-08-05 2 s1bd2 1 C 0.0 1.0 0.0 0.0 0.0 0.0 c 2020-08-08 2020-08-11 3 s2ac1 3 O 0.0 0.0 1.0 0.0 0.0 0.0 2020-08-09 2020-08-12 2 s2cd3 3 C 0.0 0.0 0.0 1.0 0.0 0.0 d 2020-08-08 2020-08-11 3 s3cd1 3 O 0.0 0.0 0.0 0.0 1.0 0.0 2020-08-09 2020-08-12 2 s3dd3 3 C 0.0 0.0 0.0 0.0 0.0 1.0 </code></pre> <p>I think I am wrongly using Size and unstack. However I can't seems to figure out how to reorganize the data.</p> <p>I would highly appreciate the expert opinion on how to properly organize my data.</p> <p>Once again really appreciate your help and consideration.</p> <p>Thank You.</p>
<p>The crux of your problem is that you need to restructure the data prior to using <code>.unstack()</code>, because your desired format is a matrix with the values being three repeated columns. So, you need to change your dataframe from wide to long and create a new column with these three values in one column <code>Values</code> with another column that categorizes them <code>Shift</code>.</p> <pre><code># Step 1: Named Groupby Agregation naming columns ins specific format required for `pd.wide_to_long`. Must end with integer. df = (df.groupby(['Employee', 'Store', 'Department']) .agg(Shift_1=('Date','first'), Shift_2=('Date','last'), Shift_3=('Date','size'), ID=('ID', 'first'), Level=('Level', 'first'), duties=('duties', 'first')) .reset_index()) # Step 2: In preparation for a matrix The data must be transformed so that the three columns that are values in the matrix must be in long format. df = pd.wide_to_long(df, stubnames='Shift_', i='ID', j='Shift').reset_index().rename(columns={'Shift_':'Values'}) # Step 3: 1,2,3 were required integer suffixes for wide_to_long but now let's change to what we want the columns to be called. df['Shift'] = df['Shift'].replace([1,2,3],['First Shift','Last Shift','# of shift']) # Step 4: Create the matrix be setting index and unstacking to columns df = (df.sort_values(['Employee', 'Store', 'Department']) #values must be sorted in order for how we want columns to appear in matrix format .set_index(['Employee', 'ID', 'Level', 'duties', 'Store', 'Department', 'Shift']) .unstack(['Store', 'Department', 'Shift']).fillna(0)) # Step 5: Cleanup of Multi-index into desred format df.columns = df.columns.reorder_levels([1,2,3,0]).droplevel(3) df = df.reset_index() df Out[1]: Store Employee ID Level duties s1 \ Department d1 Shift First Shift 0 a s1ad1 2 O 2020-08-01 00:00:00 1 a s1ad2 2 C 0 2 a s2ad1 2 O 0 3 a s2ad3 2 C 0 4 b s1bd1 1 O 2020-08-01 00:00:00 5 b s1bd2 1 C 0 6 c s2ac1 3 O 0 7 c s2cd3 3 C 0 8 d s3cd1 3 O 0 9 d s3dd3 3 C 0 Store \ Department d2 Shift Last Shift # of shift First Shift 0 2020-08-04 00:00:00 3 0 1 0 0 2020-08-02 00:00:00 2 0 0 0 3 0 0 0 4 2020-08-04 00:00:00 3 0 5 0 0 2020-08-02 00:00:00 6 0 0 0 7 0 0 0 8 0 0 0 9 0 0 0 Store ... s2 \ Department ... d1 Shift Last Shift # of shift ... # of shift 0 0 0 ... 0 1 2020-08-05 00:00:00 2 ... 0 2 0 0 ... 3 3 0 0 ... 0 4 0 0 ... 0 5 2020-08-05 00:00:00 2 ... 0 6 0 0 ... 3 7 0 0 ... 0 8 0 0 ... 0 9 0 0 ... 0 Store \ Department d3 Shift First Shift Last Shift # of shift 0 0 0 0 1 0 0 0 2 0 0 0 3 2020-08-09 00:00:00 2020-08-12 00:00:00 2 4 0 0 0 5 0 0 0 6 0 0 0 7 2020-08-09 00:00:00 2020-08-12 00:00:00 2 8 0 0 0 9 0 0 0 Store s3 \ Department d1 Shift First Shift Last Shift # of shift 0 0 0 0 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 6 0 0 0 7 0 0 0 8 2020-08-08 00:00:00 2020-08-11 00:00:00 3 9 0 0 0 Store Department d3 Shift First Shift Last Shift # of shift 0 0 0 0 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 6 0 0 0 7 0 0 0 8 0 0 0 9 2020-08-09 00:00:00 2020-08-12 00:00:00 2 [10 rows x 22 columns] </code></pre>
python|pandas|dataframe|pandas-groupby
1
3,324
47,689,217
Attempting to append two Pandas DataFrames within a loop causes the first to be overwritten
<p>I have this function that (among other things) is supposed to read a baseball matches csv file create a list of all the teams (this part works). The file has away match data and home match data, the idea is to split the data change the columns and lastly append the matches data regardless of location (this last part does not work). Instead of appending my away and home games into one dataframe the code overwrites the home games entirely. </p> <p>I have attached my code along with the question. </p> <p>Thank you so much for all your help.</p> <pre><code>df = pd.read_csv('C:\\Users\\data.csv', index_col=0) unique = df['Team Home'].unique() inplace = ['H', 'A'] myway = pd.DataFrame() for i in range(len(unique)): for inp in inplace: if inp == 'H': #loop to find column names with 'Home' and 'Away' Labels located = 'Home' character = 'H' else: located = 'Away' character = 'A' noseclean_h = df[df['Team {}'.format(located)].isin([unique[i]])] noseclean_h = noseclean_h.sort_values('Date') home = [rr for rr in rolling_haiting if character in rr] new_home = [rr.replace('{}'.format(located), '').strip() if character in rr and len(rr) &gt; 2 else rr.replace(character, '') for rr in home] new_home.append('Date') new_home.append('Team') home.append('Date') home.append('Team {}'.format(located)) ncleaned = ncleaned[home] d = dict(zip(home, new_home)) ncleaned .rename(columns=d, inplace=True) nosecleaned_h['Date'] = pd.to_datetime(ncleaned ['Date']) nosecleaned_h.set_index('Date', inplace=True) # set index to date to prevent overlapping nosecleaned_h = nosecleaned_h.append(nosecleaned_h, ignore_index=False) print(nosecleaned_h) ....etc </code></pre>
<p>On each pass through the loop, you are reassigning the variable <code>noseclean_h</code>:</p> <pre><code>noseclean_h = df[df['Team {}'.format(located)].isin([unique[i]])]
</code></pre> <p>Then the line <code>nosecleaned_h = nosecleaned_h.append(nosecleaned_h, ignore_index=False)</code> only appends the dataframe to itself, and the result is overwritten again on the next iteration — so the home games get replaced instead of accumulated. Collect the pieces outside the loop instead, as sketched below.</p>
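<p>A minimal sketch of the accumulate-then-concat pattern (reusing <code>df</code> and <code>unique</code> from the question; the column-renaming logic is omitted and <code>part</code> is just an illustrative name):</p> <pre><code>import pandas as pd

frames = []  # collect every slice here instead of reassigning one variable
for team in unique:
    for located in ['Home', 'Away']:
        # select this team's matches at this location
        part = df[df['Team {}'.format(located)].isin([team])]
        frames.append(part)

# one dataframe with all home and away games, built once after the loop
all_games = pd.concat(frames).sort_values('Date')
</code></pre>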
python|pandas|dataframe|append|overlap
0
3,325
47,977,694
Getting the variance of each column in pandas
<p>I want to calculate the variance of features saved in a Train and Test file a followed :</p> <pre><code>col1 Feature0 Feature1 Feature2 Feature3 Feature4 Feature5 Feature6 Feature7 Feature8 Feature9 col2 26658 40253.5 3.22115e+09 0.0277727 5.95939 266.56 734.248 307.364 0.000566779 0.000520574 col3 2658 4053.5 3.25e+09 0.0277 5.95939 266.56 734.248 307.364 0.000566779 0.000520574 .... </code></pre> <p>for that I've wrote the following :</p> <pre><code>import numpy as np from sklearn.decomposition import PCA import pandas as pd #from sklearn.preprocessing import StandardScaler from sklearn import preprocessing from matplotlib import pyplot as plt # Reading csv file training_file = 'Training.csv' testing_file = 'Test.csv' Training_Frame = pd.read_csv(training_file) Testing_Frame = pd.read_csv(testing_file) Training_Frame.shape # Now we have the feature values saved we start # with the standardisation of the those values stdsc = preprocessing.MinMaxScaler() np_scaled_train = stdsc.fit_transform(Training_Frame.iloc[:,:-2]) sel = VarianceThreshold(threshold=(.2 * (1 - .2))) sel.fit_transform(np_scaled_train) pd_scaled_train = pd.DataFrame(data=np_scaled_train) pd_scaled_train.to_csv('variance_result.csv',header=False, index=False) </code></pre> <p>This obviously doesn't work. the result in <code>variance_result.csv</code> is just the train matrix normalized. So my question how can I get the index of the columns(features) that have a variance bellow 20%. thanks in advance ! </p> <p><strong>Update</strong></p> <p>I've solved the variance issue this way :</p> <pre><code> import numpy as np from sklearn.decomposition import PCA import pandas as pd #from sklearn.preprocessing import StandardScaler from sklearn import preprocessing from matplotlib import pyplot as plt from sklearn.feature_selection import VarianceThreshold # Reading csv file training_file = 'Training.csv' testing_file = 'Test.csv' Training_Frame = pd.read_csv(training_file) Testing_Frame = pd.read_csv(testing_file) Training_Frame.shape # Now we have the feature values saved we start # with the standardisation of the those values stdsc = preprocessing.MinMaxScaler() np_scaled_train = stdsc.fit_transform(Training_Frame.iloc[:,:-2]) pd_scaled_train = pd.DataFrame(data=np_scaled_train) variance =pd_scaled_train.apply(np.var,axis=0) pd_scaled_train.to_csv('variance_result.csv',header=False, index=False) temp_df = pd.DataFrame(variance.values,Training_Frame.columns.values[:-2]) temp_df.T.to_csv('Training_features_variance.csv',index=False) </code></pre> <p>No I still don't know how to get indeces of features with a variance say bigger than <code>0.2</code> from <code>variance</code> other thanks running a loop! </p>
<p>Just set the threshold to 0.0 and then use the <code>variances_</code> attribute of the VarianceThreshold object to get the variances of all your features, then you can identify which of them have lower variance.</p> <pre><code>from sklearn.feature_selection import VarianceThreshold X = [[0, 2, 0, 3], [0, 1, 4, 3], [0, 1, 1, 3]] selector = VarianceThreshold() selector.fit_transform(X) selector.variances_ #Output: array([ 0. , 0.22222222, 2.88888889, 0. ]) </code></pre>
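<p>To then get the indices (or names) of the features whose variance falls below your 20% cutoff — without writing a loop — apply a comparison to <code>variances_</code> directly. This assumes the column order of the fitted matrix matches <code>Training_Frame.iloc[:, :-2]</code>, as in your code:</p> <pre><code>import numpy as np

threshold = 0.2
low_var_idx = np.where(selector.variances_ &lt; threshold)[0]   # positional indices
low_var_cols = Training_Frame.columns[low_var_idx]           # corresponding column names
</code></pre>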
python|pandas|scikit-learn
2
3,326
48,900,977
Find all indexes of a numpy array closest to a value
<p>In a numpy array the indexes of all values closest to a given constant are needed. The background is digital signal processing. The array holds the magnitude function of a filter (<code>np.abs(np.fft.rfft(h))</code>) and certain frequencies (=indexes) are searched where the magnitude is e.g. 0.5 or in another case 0. Most the time the value in question is not included exactly in the sequence. The index of the closed value should be found in this.</p> <p>So far I came up with the following method where I look at the change in sign of the difference between the sequence and the constant. However, this only works for sequences which are monotonically increasing or decresasing at the points in question. It also is off by 1 sometimes.</p> <pre><code>def findvalue(seq, value): diffseq = seq - value signseq = np.sign(diffseq) signseq[signseq == 0] = 1 return np.where(np.diff(signseq))[0] </code></pre> <p>I'm wondering if there is a better solution for this. It is only for 1D real float arrays and the requirement of the computation efficiency is not so high in my case.</p> <p>As a numerical example the following code should return <code>[8, 41]</code>. I replaced the filter magnitude response with a half-wave for simplicity here.</p> <pre><code>f=np.sin(np.linspace(0, np.pi)) findvalue(f, 0.5) </code></pre> <p>Similar questions I found are as following but they do only return the first or second index:<br> <a href="https://stackoverflow.com/questions/29696644/find-the-second-closest-index-to-value">Find the second closest index to value</a><br> <a href="https://stackoverflow.com/questions/2566412/find-nearest-value-in-numpy-array">Find nearest value in numpy array</a></p>
<pre><code>def findvalue(seq, value):
    diffseq = seq - value
    signseq = np.sign(diffseq)
    # compare every adjacent pair so the last pair is not skipped
    zero_crossings = signseq[:-1] != signseq[1:]
    indices = np.where(zero_crossings)[0]
    for i, v in enumerate(indices):
        if abs(seq[v + 1] - value) &lt; abs(seq[v] - value):
            indices[i] = v + 1
    return indices
</code></pre> <p>Some more explanation:</p> <pre><code>def print_vec(v):
    for i, f in enumerate(v):
        print("[{}]{:.2f} ".format(i, f), end='')
    print('')

def findvalue_loud(seq, value):
    diffseq = seq - value
    signseq = np.sign(diffseq)
    print_vec(signseq)
    zero_crossings = signseq[:-1] != signseq[1:]
    print(zero_crossings)
    indices = np.where(zero_crossings)[0]
    # indices contains the index in the original vector
    # just before the seq crosses the value [8 40]
    # this may be good enough for you
    print(indices)
    for i, v in enumerate(indices):
        if abs(seq[v + 1] - value) &lt; abs(seq[v] - value):
            indices[i] = v + 1
    # now indices contains the closest [8 41]
    print(indices)
    return indices
</code></pre>
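<p>Using the example from the question as a quick check, this should give the indices you expect:</p> <pre><code>f = np.sin(np.linspace(0, np.pi))
print(findvalue(f, 0.5))  # expected: [ 8 41]
</code></pre>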
python|numpy|search
1
3,327
49,103,308
Pandas retrieve value in one column(s) corresponding to the maximum value in another
<p>Relatively new Python scripter here with a quick question about Pandas and DataFrames. There may be an easier method in Python to do what I am doing (outside of Pandas), so I am open to any and all suggestions.</p> <p>I have a large data-set (don't we all), with dozens of attributes and tens of thousands of entries. I have successfully opened it (.csv file) and removed the unnecessary columns for the exercise, as well as used pandas techniques I learned from other questions here to parry down the table to something I can use</p> <p>As an example, I now have dataframe <code>df</code>, with three columns - A, B and C. I need to find the index of the max of A and then pull the values of B and C at that index. Based off research on the best method, it seemed that <code>idxmax</code> was the best option.</p> <pre><code>MaxIDX = df['A'].idxmax() </code></pre> <p>This gives me the correct answer, however when I try to then grab a value using <code>at</code> based on this variable, I am getting errors. I believe it is because <code>idxmax</code> produces a series, and not an integer output.</p> <pre><code>variable = df.at[MaxIDX, 'B'] </code></pre> <p>So the question I have is kind of two part.</p> <p>How do I convert the series to the proper input for <code>at</code>? And, is there an easier way to do this that I am completely missing? All I want to do is get the index of the max of column A, and then pull the values of Column B and C at that index.</p> <p>Any help is appreciated. Thanks a bunch! Cheers!</p> <p>Note: Using: Python 3.6.4 and Pandas 0.22.0</p>
<pre><code>np.random.seed(0) df = pd.DataFrame(np.random.randn(5, 3), columns=list('ABC')) df A B C 0 1.764052 0.400157 0.978738 1 2.240893 1.867558 -0.977278 2 0.950088 -0.151357 -0.103219 3 0.410599 0.144044 1.454274 4 0.761038 0.121675 0.443863 df.A.idxmax() 1 </code></pre> <p>What you claim fails, seems to work for me:</p> <pre><code>df.at[df.A.idxmax(), 'B'] 1.8675579901499675 </code></pre> <p>Although, based on your explanation, you may instead want <code>loc</code>, not <code>at</code>:</p> <pre><code>df.loc[df.A.idxmax(), ['B', 'C']] B 1.867558 C -0.977278 Name: 1, dtype: float64 </code></pre> <p>Note: <em>You may want to check that your index does not contain duplicate entries. This is one possible reason for failure.</em></p>
python|pandas|dataframe
1
3,328
48,906,138
Pandas Dataframe: if row in column A, B or C contains “x” or "y", write “z” to new column
<p>Very similar to this question: <a href="https://stackoverflow.com/questions/30953299/pandas-if-row-in-column-a-contains-x-write-y-to-row-in-column-b">Pandas: if row in column A contains &quot;x&quot;, write &quot;y&quot; to row in column B</a></p> <p>I want to know if a row contains "x" or "y" in multiple different columns then output "z" to a new column.</p> <p>INPUT:</p> <pre><code>A B C Cat Dog Pig Monkey Tiger Cat Cow Sheep Goat </code></pre> <p>If "cat" or "tiger" or "lion" - output 1 to new column</p> <p>OUTPUT</p> <pre><code>A B C CAT FAMILY Cat Dog Pig 1 Monkey Tiger Cat 1 Cow Sheep Goat 0 </code></pre>
<p>Use <code>isin</code> with <code>any</code> and <code>astype</code></p> <pre><code>In [298]: cat_family = ["Cat", "Tiger", "Lion"] In [303]: df['CAT_FAMILY'] = df.isin(cat_family).any(1).astype(int) In [304]: df Out[304]: A B C CAT_FAMILY 0 Cat Dog Pig 1 1 Monkey Tiger Cat 1 2 Cow Sheep Goat 0 </code></pre>
python|pandas|dataframe|contains
1
3,329
49,068,020
Filter DataFrame after sklearn.feature_selection
<p>I reduce dimensionality of a dataset (pandas DataFrame).</p> <pre><code>X = df.as_matrix() sel = VarianceThreshold(threshold=0.1) X_r = sel.fit_transform(X) </code></pre> <p>then I wanto to get back the reduced DataFrame (i.e. keep only ok columns)</p> <p>I found only this ugly way to do so, which is very inefficient, do you have any cleaner idea?</p> <pre><code> cols_OK = sel.get_support() # which columns are OK? c = list() for i, col in enumerate(cols_OK): if col: c.append(df.columns[i]) return df[c] </code></pre>
<p>I think you need if return <code>mask</code>:</p> <pre><code>cols_OK = sel.get_support() df = df.loc[:, cols_OK] </code></pre> <p>and if return indices:</p> <pre><code>cols_OK = sel.get_support() df = df.iloc[:, cols_OK] </code></pre>
python|pandas|numpy|scikit-learn|dimensionality-reduction
2
3,330
58,792,421
Check which Python version Pandas is accessing
<p>My system is claiming that pandas requires a different Python, even though that Python version is what's installed. How do I check which version of Python is being accessed by pandas?</p> <pre><code>quinn@quinn-Lemur:~$ sudo -H pip3 install pandas
Requirement already satisfied: pandas in /usr/local/lib/python3.5/dist-packages (0.25.3)
ERROR: Package 'pandas' requires a different Python: 3.5.2 not in '&gt;=3.5.3'
quinn@quinn-Lemur:~$ python3 --version
Python 3.5.2
</code></pre>
<p>Actually it says you have version <code>3.5.2</code> which is not high enough since <code>3.5.3</code> is needed.</p> <p>Try upgrading your Python first.</p>
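<p>To answer the literal question — which interpreter and which pandas you are actually using — a quick check (run it with the same <code>python3</code>):</p> <pre><code>import sys
import pandas as pd

print(sys.version)       # the Python that is running
print(sys.executable)    # path to that interpreter
print(pd.__version__)    # the pandas it imports
print(pd.__file__)       # where that pandas is installed
</code></pre>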
python|pandas|pip
1
3,331
58,843,519
lemmatization issue using Spacy in pandas Series and Dataframe
<p>I am working on <a href="https://www.kaggle.com/crowdflower/twitter-airline-sentiment" rel="nofollow noreferrer">text data</a> with shape (14640, 16), using Pandas and Spacy for preprocessing, but I am having trouble getting the lemmatized form of the text. Moreover, if I work with a pandas Series (i.e. a dataframe with one column) that contains only the text column, I run into a different issue there as well.</p> <p>Code: (Dataframe)</p> <pre><code>nlp = spacy.load(&quot;en_core_web_sm&quot;)
df['parsed_tweets'] = df['text'].apply(lambda x: nlp(x))
df[:3]
</code></pre> <p>Result: <a href="https://i.stack.imgur.com/j3oDf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j3oDf.png" alt="Result" /></a></p> <p>After this I iterate over the parsed_tweets column to get the lemmatized data, but get the error below.</p> <p>Code:</p> <pre><code>for token in df['parsed_tweets']:
    print(token.lemma_)
</code></pre> <p>Error: <a href="https://i.stack.imgur.com/DSTSR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DSTSR.png" alt="Error" /></a></p> <p>Code: (Pandas Series)</p> <pre><code>df1['tweets'] = df['text']
nlp = spacy.load(&quot;en_core_web_sm&quot;)
for text in nlp.pipe(iter(df1), batch_size = 1000, n_threads=-1):
    print(text)
</code></pre> <p>Error: <a href="https://i.stack.imgur.com/em20Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/em20Y.png" alt="Error" /></a></p> <p>Can someone help me with these errors? I tried other Stack Overflow solutions but can't get a Spacy doc object that I can iterate over to get the tokens and lemmatized tokens. What am I doing wrong?</p>
<pre><code>#you can directly get your lemmatized token by running list comprehension in your lambda function df['parsed_tweets'] = df['text'].apply(lambda x: [y.lemma_ for y in nlp(x)]) </code></pre> <p><a href="https://i.stack.imgur.com/zgiN1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zgiN1.png" alt="enter image description here"></a></p> <pre><code>print(type(df['parsed_tweets'][0])) #op spacy.tokens.doc.Doc for i in range(df.shape[0]): for word in df['parsed_tweets'][i]: print(word.lemma_) #op play football i be work hard </code></pre>
python|pandas|dataframe|series|spacy
4
3,332
58,850,101
Generate dataframe columns based on constraints in current dataframe
<p>I have a dataframe with the following columns : </p> <pre><code>Date_2 Date_1 is_B 02/08/2019 01/09/2019 1 02/08/2019 01/09/2019 1 02/08/2019 01/09/2019 0 02/08/2019 01/09/2019 0 . . . . . . . . . 31/08/2019 01/09/2019 0 31/08/2019 01/09/2019 0 31/08/2019 01/09/2019 0 31/08/2019 01/09/2019 0 31/08/2019 01/09/2019 0 31/08/2019 01/09/2019 1 31/08/2019 01/09/2019 1 </code></pre> <p>I want to generate another dataframe df2 such that the output looks like the following : </p> <pre><code>Date_1 Total_count Total(is_b = 1) num_2 num_3 num_5 num_20 01/09/2019 493 147 26 30 32 59 Total_Count = total entries for Date_1 in the dataframe Total(is_b = 1) = total entries for Date_1 where is_b = 1 num_2 = total entries for Date_1 for 2 days where Date_2 = (Date_1 - 1 to Date_1 - 2){Both included as well} num_3 = total entries for Date_1 for 3 days where Date_2 = (Date_1 - 3 to Date_1 - 5){Both included as well} num_5 = total entries for Date_1 for 5 days where Date_2 = (Date_1 - 6 to Date_1 - 10){Both included as well} num_20 = total entries for Date_1 for 20 days where Date_2 = (Date_1 - 11 to Date_1 - 30){Both included as well} </code></pre> <p>I was able to generate first 2 columns easily using : </p> <pre><code>df.groupby('Date_1')['Date_1'].count() df.loc[df.isBooked == 1].groupby('Date_1')['Date_1'].count() </code></pre> <p>I am not sure how to calculate the other columns : </p> <p>I did try this : </p> <pre><code>df.loc[(df.isBooked == 1) &amp; (df.Booking_Date = Flight_Date - 1) &amp; (df.Booking_Date = Flight_Date - 2)].groupby('Flight_Date')['Flight_Date'].count().reset_index(name='num_2') </code></pre> <p>But this is an invalid syntax altogether. </p> <p>Can anyone help me with generating the columns num_2, num_3, num_5, num_20. </p>
<p>The answer has two parts.</p> <h3>Date parsing</h3> <p>It appears from the example, that date is <em>not</em> parsed - they are strings. They must be parsed to perform date operations.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd def dateparse(d): return pd.datetime.strptime(d, '%d/%m/%Y') for c in ['Date_1', 'Date_2']: df[c] = df[c].map(dateparse) </code></pre> <p>If you print <code>df</code>, it should look like this (notice date format):</p> <pre><code> Date_2 Date_1 is_B 0 2019-08-02 2019-09-01 1 1 2019-08-02 2019-09-01 1 2 2019-08-02 2019-09-01 0 3 2019-08-02 2019-09-01 0 </code></pre> <p>Now, the columns have <code>dtype: datetime64[ns]</code>.</p> <h3>Calculation of statistics</h3> <p>We will calculate a few series with <code>Date_1</code> as index, and then merge them.</p> <pre class="lang-py prettyprint-override"><code>total_count = df.groupby('Date_1')['Date_1'].count().rename('Total_Count') total_count_is_b = df[df.is_B == 1].groupby('Date_1')['Date_1'] \ .count().rename('Total(is_b = 1)') </code></pre> <p>To get <code>num_2</code> perform this:</p> <pre class="lang-py prettyprint-override"><code>from datetime import timedelta num_2_df = df[ (df.is_B == 1) &amp; df.Date_2.between( df.Date_1 - timedelta(days=2), df.Date_1 - timedelta(days=1) ) ].groupby('Date_1')['Date_2'].count().rename('num_2') # notice argument order of `pandas.Series.between` </code></pre> <p>Other <code>num_3</code>, <code>num_5</code>, <code>num_20</code> can be calculated analogously:</p> <pre class="lang-py prettyprint-override"><code>num_3_df = df[ (df.is_B == 1) &amp; df.Date_2.between(df.Date_1 - timedelta(days=5), df.Date_1 - timedelta(days=3)) ].groupby('Date_1')['Date_2'].count().rename('num_3') num_5_df = df[ (df.is_B == 1) &amp; df.Date_2.between(df.Date_1 - timedelta(days=10), df.Date_1 - timedelta(days=6)) ].groupby('Date_1')['Date_2'].count().rename('num_5') num_20_df = df[ (df.is_B == 1) &amp; df.Date_2.between(df.Date_1 - timedelta(days=30), df.Date_1 - timedelta(days=11)) ].groupby('Date_1')['Date_2'].count().rename('num_20') </code></pre> <p>Finally all columns are merged to one table:</p> <pre class="lang-py prettyprint-override"><code>result_df = pd.concat( [total_count, total_count_is_b, num_2_df, num_3_df, num_5_df, num_20_df], axis=1 ).fillna(0).astype(int) result_df = result_df.reset_index() </code></pre>
python|pandas|numpy
4
3,333
58,979,824
How to do Cohen Kappa Quadratic Loss in Tensorflow 2.0?
<p>I'm trying to create the loss function according to:</p> <p><a href="https://stackoverflow.com/questions/54831044/how-can-i-specify-a-loss-function-to-be-quadratic-weighted-kappa-in-keras">How can I specify a loss function to be quadratic weighted kappa in Keras?</a></p> <p>But in tensorflow 2.0:</p> <pre><code>tf.contrib.metrics.cohen_kappa </code></pre> <p>No longer exists. Is there an alternative?</p>
<pre><code>def kappa_loss(y_pred, y_true, y_pow=2, eps=1e-10, N=4, bsize=256, name='kappa'):
    """A continuous differentiable approximation of discrete kappa loss.
        Args:
            y_pred: 2D tensor or array, [batch_size, num_classes]
            y_true: 2D tensor or array, [batch_size, num_classes]
            y_pow: int, e.g. y_pow=2
            N: typically num_classes of the model
            bsize: batch_size of the training or validation ops
            eps: a float, prevents divide by zero
            name: Optional scope/name for op_scope.
        Returns:
            A tensor with the kappa loss."""

    with tf.name_scope(name):
        y_true = tf.cast(y_true, dtype='float')
        repeat_op = tf.cast(tf.tile(tf.reshape(tf.range(0, N), [N, 1]), [1, N]), dtype='float')
        repeat_op_sq = tf.square((repeat_op - tf.transpose(repeat_op)))
        weights = repeat_op_sq / tf.cast((N - 1) ** 2, dtype='float')

        pred_ = y_pred ** y_pow
        try:
            pred_norm = pred_ / (eps + tf.reshape(tf.reduce_sum(pred_, 1), [-1, 1]))
        except Exception:
            pred_norm = pred_ / (eps + tf.reshape(tf.reduce_sum(pred_, 1), [bsize, 1]))

        hist_rater_a = tf.reduce_sum(pred_norm, 0)
        hist_rater_b = tf.reduce_sum(y_true, 0)

        conf_mat = tf.matmul(tf.transpose(pred_norm), y_true)

        nom = tf.reduce_sum(weights * conf_mat)
        denom = tf.reduce_sum(weights * tf.matmul(
            tf.reshape(hist_rater_a, [N, 1]), tf.reshape(hist_rater_b, [1, N])) /
                              tf.cast(bsize, dtype='float'))

        return nom / (denom + eps)
</code></pre> <p>and use</p> <pre><code>lossMetric = kappa_loss
model.compile(optimizer=optimizer, loss=lossMetric, metrics=metricsToWatch)
</code></pre> <p>and cast values to floats beforehand:</p> <pre><code>tf.cast(nn_x_train.values, dtype='float')
</code></pre> <p>I also used a numpy validation version:</p> <pre><code>def qwk3(a1, a2, max_rat=3):
    assert(len(a1) == len(a2))
    a1 = np.asarray(a1, dtype=int)
    a2 = np.asarray(a2, dtype=int)

    hist1 = np.zeros((max_rat + 1, ))
    hist2 = np.zeros((max_rat + 1, ))

    o = 0
    for k in range(a1.shape[0]):
        i, j = a1[k], a2[k]
        hist1[i] += 1
        hist2[j] += 1
        o += (i - j) * (i - j)

    e = 0
    for i in range(max_rat + 1):
        for j in range(max_rat + 1):
            e += hist1[i] * hist2[j] * (i - j) * (i - j)

    e = e / a1.shape[0]

    # the quadratic weighted kappa is the scalar 1 - o / e
    return 1 - o / e
</code></pre> <p>and use</p> <pre><code>nn_y_valid = tf.cast(nn_y_train.values, dtype='float')
print(qwk3(nn_y_valid, trainPredict))
</code></pre> <p>where nn_x_train and nn_y_train are pandas dataframes.</p>
python|python-3.x|tensorflow|machine-learning|tensorflow2.0
4
3,334
58,672,185
Pytorch Hardware Requirement
<p><strong>What is the minimum Computation Capability required by the latest PyTorch version?</strong></p> <p>I have Nvidia Geforce 820M with computation capability 2.1. How can I run PyTorch models on my GPU <code>(if it doesn't support naturally)</code></p>
<p>Looking at <a href="https://pytorch.org/get-started/previous-versions/" rel="nofollow noreferrer">this page</a>, PyTorch (even the somewhat oldest versions) support CUDA upwards from version <code>7.5</code>. Whereas, looking at <a href="https://stackoverflow.com/questions/28932864/cuda-compute-capability-requirements">this page</a>, CUDA <code>7.5</code> requires minimum Compute Capability <code>2.0</code>. So, on paper, your machine should support some older version of PyTorch which allows CUDA <code>7.5</code> or preferably <code>8.0</code> (as of writing this answer, the latest version uses minimum CUDA <code>9.2</code>). </p> <p>However, PyTorch also requires cuDNN. So, cuDNN <code>6.0</code> works for CUDA <code>7.5</code>. But cuDNN <code>6.0</code> requires Compute Capability of <code>3.0</code>. So, mostly, PyTorch won't work on your machine. (Thanks for pointing out the cuDNN part <a href="https://stackoverflow.com/users/1695960/robert-crovella">Robert Crovella</a>)</p>
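<p>If you want to verify what your GPU reports (a small check, assuming a CUDA-enabled PyTorch build is installed at all — on very old GPUs it may simply refuse to initialize):</p> <pre><code>import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print("Compute capability: {}.{}".format(major, minor))
else:
    print("No CUDA device usable by this PyTorch build")
</code></pre>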
deep-learning|gpu|pytorch
4
3,335
58,951,331
How to do parallel GPU inferencing in Tensorflow 2.0 + Keras?
<p>Let's begin with the premise that I'm newly approaching to TensorFlow and deep learning in general.</p> <p>I have TF 2.0 Keras-style model trained using <code>tf.Model.train()</code>, two available GPUs and I'm looking to scale down inference times.</p> <p>I trained the model distributing across GPUs using the extremely handy <code>tf.distribute.MirroredStrategy().scope()</code> context manager </p> <pre><code>mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model.compile(...) model.train(...) </code></pre> <p>both GPUs get effectively used (even if I'm not quite happy with the results accuracy).</p> <p>I can't seem to find a similar strategy for distributing inference between GPUs with the <code>tf.Model.predict()</code> method: when i run <code>model.predict()</code> I get (obviously) usage from only one of the two GPUs.</p> <p>Is it possible to istantiate the same model on both GPUs and feed them different chunks of data in parallel?</p> <p>There are posts that suggest how to do it in TF 1.x but I can't seem to replicate the results in TF2.0 </p> <p><a href="https://medium.com/@sbp3624/tensorflow-multi-gpu-for-inferencing-test-time-58e952a2ed95" rel="noreferrer">https://medium.com/@sbp3624/tensorflow-multi-gpu-for-inferencing-test-time-58e952a2ed95</a></p> <p><a href="https://stackoverflow.com/questions/44255362/tensorflow-simultaneous-prediction-on-gpu-and-cpu">Tensorflow: simultaneous prediction on GPU and CPU</a></p> <p>my mental struggles with the question are mainly</p> <ul> <li>TF 1.x is <code>tf.Session()</code>based while sessions are implicit in TF2.0, if I get it correctly, the solutions I read use separate sessions for each GPU and I don't really know how to replicate it in TF2.0 </li> <li>I don't know how to use the <code>model.predict()</code> method with a specific session.</li> </ul> <p>I know that the question is probably not well-formulated but I summarize it as:</p> <p>Does anybody have a clue on how to run Keras-style <code>model.predict()</code> on multiple GPUs (inferencing on a different batch of data on each GPU in a parallel way) in TF2.0?</p> <p>Thanks in advance for any help.</p>
<p>Try loading the model inside a <code>tf.distribute.MirroredStrategy</code> scope and use a greater batch_size (here <code>x_test</code> stands for the data you want to run inference on):</p> <pre><code>mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = tf.keras.models.load_model(saved_model_path)

result = model.predict(x_test, batch_size=greater_batch_size)
</code></pre>
tensorflow|keras|predict|tensorflow2.0|multi-gpu
0
3,336
70,259,623
Define start and end date of several DataFrames with pandas
<p>I have many <code>DataFrames</code> which have a different period lengths. I am trying to create a <code>for loop</code> to define for all those DataFrames a specific start and end day.</p> <p>Here is a simple example:</p> <pre><code>df1: Dates ID1 ID2 0 2021-01-01 0 1 1 2021-01-02 0 0 2 2021-01-03 1 0 3 2021-01-04 2 2 4 2021-01-05 1 4 5 2021-01-06 -1 -2 df2: Dates ID1 ID2 0 2021-01-01 0 1 1 2021-01-02 1 2 2 2021-01-03 -1 3 3 2021-01-04 1 -1 4 2021-01-05 4 2 </code></pre> <p>I want to define a specific start and end day as:</p> <pre><code>start = pd.to_datetime('2021-01-02') end = pd.to_datetime('2021-01-04') </code></pre> <p>So far, I only figured out how to define the period for one <code>DataFrame</code>:</p> <pre><code>df1.loc[(df1['Dates'] &gt;= start) &amp; (df1['Dates'] &lt;= end)] </code></pre> <p>Is there an easy method to loop over all <code>DataFrames</code> at the same time to define the start and end dates?</p> <p>For reproducibility:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({ 'Dates':['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-04', '2021-01-05', '2021-01-06'], 'ID1':[0,0,1,2,1,-1], 'ID2':[1,0,0,2,4,-2]}) df1['Dates'] = pd.to_datetime(df1['Dates']) df2 = pd.DataFrame({ 'Dates':['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-04', '2021-01-05'], 'ID1':[0,1,-1,1,4], 'ID2':[1,2,3,-1,2]}) df2['Dates'] = pd.to_datetime(df2['Dates']) </code></pre>
<p>You can store your dataframes in a list, and then apply your <code>loc</code> formula on all the dataframes in the list using <code>list</code> comprehension, and return back a new list of the resulting filtered dataframes:</p> <pre><code># Create a list with your dataframes dfs = [df1 , df2] # Thresholds start = pd.to_datetime('2021-01-02') end = pd.to_datetime('2021-01-04') # Filter all of them and store back filtered_dfs = [df.loc[(df['Dates'] &gt;= start) &amp; (df['Dates'] &lt;= end)] for df in dfs] </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; print(filtered_dfs) [ Dates ID1 ID2 1 2021-01-02 0 0 2 2021-01-03 1 0 3 2021-01-04 2 2, Dates ID1 ID2 1 2021-01-02 1 2 2 2021-01-03 -1 3 3 2021-01-04 1 -1] &gt;&gt;&gt; print(dfs) [ Dates ID1 ID2 0 2021-01-01 0 1 1 2021-01-02 0 0 2 2021-01-03 1 0 3 2021-01-04 2 2 4 2021-01-05 1 4 5 2021-01-06 -1 -2, Dates ID1 ID2 0 2021-01-01 0 1 1 2021-01-02 1 2 2 2021-01-03 -1 3 3 2021-01-04 1 -1 4 2021-01-05 4 2] </code></pre>
python|pandas|dataframe|for-loop
3
3,337
70,078,251
Cannot run Carlini and Wagner Attack using foolbox on a tensorflow Model
<p>I am using the latest version of foolbox (3.3.1), and my code simply load a RESNET-50 CNN, adds some layers for a transferred learning application, and loads the weights as follows.</p> <pre><code>from numpy.core.records import array import tensorflow as tf from keras.applications.resnet50 import ResNet50, preprocess_input from tensorflow.keras.layers import Dense, Dropout, Flatten from tensorflow.keras.models import Model from tensorflow.keras.layers import Input import cv2 import os import numpy as np import foolbox as FB from sklearn.metrics import accuracy_score from scipy.spatial.distance import cityblock from sklearn.metrics import plot_confusion_matrix from sklearn.metrics import confusion_matrix from PIL import Image import foolbox as FB import math from foolbox.criteria import Misclassification #load model num_classes = 12 #Load model and prepare it for testing print(&quot;Step 1: Load model and weights&quot;) baseModel = ResNet50(weights=None, include_top=False, input_tensor=Input(shape=(224, 224, 3))) headModel = baseModel.output headModel = Flatten(name=&quot;flatten&quot;)(headModel) headModel = Dense(512, activation=&quot;relu&quot;)(headModel) headModel = Dropout(0.5)(headModel) headModel = Dense(num_classes, activation=&quot;softmax&quot;)(headModel) model = Model(inputs=baseModel.input, outputs=headModel) model.load_weights(&quot;RESNET-50/weights/train1-test1.h5&quot;) print(&quot;Step 2: prepare testing data&quot;) #features is a set of (1200,10,224,224,3) images features=np.load(&quot;features.npy&quot;) labels=np.load(&quot;labels.npy&quot;) </code></pre> <p>Now I would like to attack it using the foolbox 3.3.1 Carlini and Wagner attack, here is the way I load the model for foolbox</p> <pre><code>#Lets test the foolbox model bounds = (0, 1) fmodel = fb.TensorFlowModel(model, bounds=bounds) </code></pre> <p>My dataset is split into 10 images per document, I will attack these 10 images using a batch size of 10 for foolbox using Carlini and Wagner attack</p> <pre><code>#for each i, I have 10 images for i in range(0, features.shape[0]): print(&quot;document &quot;+str(i)) #Receive current values #This is a batch of (10,224,224,3) images features_to_test=features[i,:] #Get their labels labels_to_test=labels[i,:] ######################ATTACK IN THE NORMALIZED DOMAIN########################### #lets do the attack #We use an interval of epsilons epsilons = np.linspace(0.01, 1, num=2) attack = fb.attacks.L2CarliniWagnerAttack(fmodel) adversarials = attack(features_to_test, labels_to_test, criterion=Misclassification(labels=labels_to_test), epsilons=epsilons) </code></pre> <p>However, whenever I run the code, here is the error that is returned to me</p> <pre><code>Traceback (most recent call last): File &quot;test_carlini_wagner.py&quot;, line 161, in &lt;module&gt; adversarials = attack(features_to_test, labels_to_test, criterion=Misclassification(labels=labels_to_test), epsilons=epsilons) File &quot;/usr/local/lib/python3.8/dist-packages/foolbox/attacks/base.py&quot;, line 410, in __call__ xp = self.run(model, x, criterion, early_stop=early_stop, **kwargs) File &quot;/usr/local/lib/python3.8/dist-packages/foolbox/attacks/carlini_wagner.py&quot;, line 100, in run bounds = model.bounds AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'bounds' </code></pre> <p>What is supposed to be the error? am I loading my model wrongly? should I add new parameters for the attack called? as previously stated, I am on foolbox 3.3.1.</p>
<p>I think you might have mixed up the parameters of the <code>L2CarliniWagnerAttack</code>. Here is a simplified working example with dummy data:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input from tensorflow.keras.layers import Dense, Dropout, Flatten from tensorflow.keras.models import Model from tensorflow.keras.layers import Input from sklearn.metrics import accuracy_score from scipy.spatial.distance import cityblock from sklearn.metrics import plot_confusion_matrix from sklearn.metrics import confusion_matrix from foolbox import TensorFlowModel from foolbox.criteria import Misclassification from foolbox.attacks import L2CarliniWagnerAttack num_classes = 12 print(&quot;Step 1: Load model and weights&quot;) baseModel = ResNet50(weights=None, include_top=False, input_tensor=Input(shape=(224, 224, 3))) headModel = baseModel.output headModel = Flatten(name=&quot;flatten&quot;)(headModel) headModel = Dense(512, activation=&quot;relu&quot;)(headModel) headModel = Dropout(0.5)(headModel) headModel = Dense(num_classes, activation=&quot;softmax&quot;)(headModel) model = Model(inputs=baseModel.input, outputs=headModel) bounds = (0, 1) fmodel = TensorFlowModel(model, bounds=bounds) images, labels = tf.random.normal((64, 10, 224, 224, 3)), tf.random.uniform((64, 10,), maxval=13, dtype=tf.int32) for i in range(0, images.shape[0]): print(&quot;document &quot;+str(i)) features_to_test=images[i,:] labels_to_test=labels[i,:] epsilons = np.linspace(0.01, 1, num=2) attack = L2CarliniWagnerAttack() adversarials = attack(fmodel, features_to_test, criterion=Misclassification(labels_to_test), epsilons=epsilons) </code></pre> <pre><code>Step 1: Load model and weights document 0 document 1 document 2 document 3 document 4 document 5 document 6 ... </code></pre>
python|tensorflow|keras|neural-network|adversarial-machines
1
3,338
70,368,812
how to compare columns of two dataframes (VLOOKUP) and add a value from other column to one dataframe?
<p>I have two dataframes of different sizes, both have many columns and I need to compare two columns that have a different name and if there is a match then add the value of another column to a new column. This is like a VLOOKUP in excel, search the ID of the dataframe 1 in the company_id of dataframe 2 and if there are coincidences insert in the Qty value in the corresponding row of the dataframe 1</p> <pre><code>df1: ID Area Dept 0 IDX1 A Dept 21 1 IDX2 B Dept 2 2 IDX3 C Dept 3 3 IDX4 D Dept 3 df2: company_id Age Qty 0 ID01 42 10 1 IDX4 40 162 2 ID02 37 17 3 IDX1 42 100 4 ID24 40 12 5 IDX2 37 170 6 ID21 42 10 7 IDX3 40 120 8 ID02 37 17 </code></pre> <p>this is the output that I need:</p> <pre><code>df3: ID Area Dept extracted_qty 0 IDX1 A Dept 21 100 1 IDX2 B Dept 2 170 2 IDX3 C Dept 3 120 3 IDX4 D Dept 3 162 </code></pre>
<p>Use <code>merge</code>:</p> <pre><code>mapping = {'company_id': 'ID', 'Qty': 'extracted_qty'} out = df1.merge(df2.rename(columns=mapping)[['ID', 'extracted_qty']], on='ID') print(out) # Output: ID Area Dept extracted_qty 0 IDX1 A Dept 21 100 1 IDX2 B Dept 2 170 2 IDX3 C Dept 3 120 3 IDX4 D Dept 3 162 </code></pre>
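<p>If some IDs in <code>df1</code> might be missing from <code>df2</code>, a left join keeps those rows with <code>NaN</code> in the new column instead of dropping them:</p> <pre><code>out = df1.merge(df2.rename(columns=mapping)[['ID', 'extracted_qty']], on='ID', how='left')
</code></pre>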
python|pandas
0
3,339
56,363,333
Tensorflow view graph from model.ckpt.meta file
<p>I have a <code>model.ckpt.meta</code> file and I just want to view the architecture/graph structure. I can't find how to do this from the <code>model.ckpt.meta</code> file.</p> <p>Following the code thanks to Milan:</p> <pre><code>tf.train.import_meta_graph("./model.ckpt.meta") for n in tf.get_default_graph().as_graph_def().node: print(n) with tf.Session() as sess: writer = tf.summary.FileWriter("./output/", sess.graph) writer.close() </code></pre> <p>I get the graph below. But a lot of the architecture is missing. <a href="https://i.stack.imgur.com/uzxFw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uzxFw.png" alt="enter image description here"></a></p> <p>EDIT: Woops! I forgot to double click the model. Sorted.</p>
<p>You can import the meta graph in python with <code>tf.train.import_meta_graph</code> and then traverse the nodes in the graph, for example:</p> <pre><code>import tensorflow as tf tf.train.import_meta_graph("./model.ckpt-200000.meta") for n in tf.get_default_graph().as_graph_def().node: print(n) </code></pre> <p>Once imported, you can make a summary file for Tensorboard, which allows you to visualize the graph nicely:</p> <pre><code>with tf.Session() as sess: writer = tf.summary.FileWriter("./output/", sess.graph) writer.close() </code></pre> <p>To see the saved summary file in Tensorboard, run <code>tensorboard --logdir=./output/</code></p>
python|tensorflow
5
3,340
56,286,093
Write pandas dataframe into AWS athena database
<p>I have run a query using pyathena, and have created a pandas dataframe. Is there a way to write the pandas dataframe to AWS athena database directly? Like data.to_sql for MYSQL database. </p> <p>Sharing a example of dataframe code below for reference need to write into AWS athena database:</p> <pre><code>data=pd.DataFrame({'id':[1,2,3,4,5,6],'name':['a','b','c','d','e','f'],'score':[11,22,33,44,55,66]}) </code></pre>
<p>Another modern (as of February 2020) way to achieve this goal is to use the <a href="https://aws-data-wrangler.readthedocs.io/examples.html#typical-pandas-etl" rel="nofollow noreferrer">aws-data-wrangler</a> library. It automates many routine (and sometimes annoying) tasks in data processing.</p> <p>Combined with the case from the question, the code would look like below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import awswrangler as wr

data=pd.DataFrame({'id':[1,2,3,4,5,6],'name':['a','b','c','d','e','f'],'score':[11,22,33,44,55,66]})

# Typical Pandas, Numpy or Pyarrow transformation HERE!

wr.pandas.to_parquet(  # Storing the data and metadata to Data Lake
    dataframe=data,
    database=&quot;database&quot;,
    path=&quot;s3://your-s3-bucket/path/to/new/table&quot;,
    partition_cols=[&quot;name&quot;],
)
</code></pre> <p>This is amazingly helpful, because <a href="https://aws-data-wrangler.readthedocs.io/examples.html#typical-pandas-etl" rel="nofollow noreferrer">aws-data-wrangler</a> knows how to parse the table name from the path (though you can also provide the table name as a parameter) and defines proper types in the Glue catalog according to the dataframe.</p> <p>It is also helpful for querying data with Athena directly into a pandas dataframe:</p> <pre class="lang-py prettyprint-override"><code>df = wr.pandas.read_table(database=&quot;database&quot;, table=&quot;table&quot;)
</code></pre> <p>The whole process is fast and convenient.</p>
python|database|pandas|amazon-athena
5
3,341
56,345,521
Replace dataframe multiple columns with id from another dataframe
<p>I have Pandas Dataframe <strong>df1</strong> as:</p> <blockquote> <pre><code>ID | c1 | c2 | c3 ----------------- 1 | A | B | 32 2 | C | D | 34 3 | A | B | 11 4 | E | F | 3 </code></pre> </blockquote> <p>And <strong>df2</strong>:</p> <blockquote> <pre><code>ID | c1 | c2 ------------ 1 | A | B 2 | C | D 3 | E | F </code></pre> </blockquote> <p>There is foreign key between <strong>df1</strong> and <strong>df2</strong> on columns (c1, c2). Join look like:</p> <pre><code>pd.merge(df1, df2, left_on=['c1','c2'], right_on = ['c1','c2']) </code></pre> <p>Result is:</p> <blockquote> <pre><code>ID_x| c1 | c2 | c3 | ID_y ------------------------- 1 | A | B | 32 | 1 2 | C | D | 34 | 2 3 | A | B | 11 | 1 4 | E | F | 3 | 3 </code></pre> </blockquote> <p>I want to replace (c1,c2) in <strong>df1</strong> with <strong>df2.id</strong>. Expected FINAL df1 is:</p> <blockquote> <pre><code>ID| c3 | df2_id --------------- 1 | 32 | 1 2 | 34 | 2 3 | 11 | 1 4 | 3 | 3 </code></pre> </blockquote> <p>In other words I want to add column 'df2_id' in df1(filled with df2.id value for this row) and drop columns (c1,c2)(they are not necessary anymore).</p> <p>I have idea to do that by: </p> <ol> <li>save result from <strong>merge</strong> in df1 </li> <li>drop unnecessary columns (c1,c2)</li> <li>rename 'ID_y' to 'df2_id' and 'ID_x' to 'ID'</li> </ol> <p>Is there any better solution?</p>
<p>We could make a one-liner out of your steps by making use of the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>suffixes</code></a> argument and <code>on</code> instead of <code>left_on, right_on</code>, plus using <em>method chaining</em> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>drop</code></a>:</p> <pre><code>df1.merge(df2, on=['c1','c2'], suffixes=['_1', '_2']).drop(['c1', 'c2'], axis=1)
</code></pre> <p><strong>Output</strong></p> <pre><code>   ID_1  c3  ID_2
0     1  32     1
1     3  11     1
2     2  34     2
3     4   3     3
</code></pre> <p>To make it exactly like OP's output:</p> <pre><code>df1.merge(df2, on=['c1','c2'], suffixes=['', '_2']).drop(['c1', 'c2'], axis=1).rename(columns={"ID_2": "df2_id"})
</code></pre>
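<p>Assuming the same data as above, that last chain should give something like this (row order may differ between pandas versions):</p> <pre><code>   ID  c3  df2_id
0   1  32       1
1   3  11       1
2   2  34       2
3   4   3       3
</code></pre>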
python|pandas|dataframe
2
3,342
56,088,665
Creating a new variable using other variables in an expression in a multiindexed Pandas dataframe
<p>I have the following multi-indexed Pandas dataframe:</p> <pre><code>toy.to_json() '{"["ISRG","Price"]":{"2004-12-31":10.35,"2005-01-28":10.35,"2005-03-31":14.15,"2005-04-01":14.15,"2005-04-29":14.15,"2005-06-30":15.51,"2005-07-01":15.51,"2005-07-29":15.51,"2005-09-30":20.77,"2005-10-28":20.77},"["ISRG","Price_high"]":{"2004-12-31":13.34,"2005-01-28":13.34,"2005-03-31":16.27,"2005-04-01":16.27,"2005-04-29":16.27,"2005-06-30":17.35,"2005-07-01":17.35,"2005-07-29":17.35,"2005-09-30":25.96,"2005-10-28":25.96},"["ISRG","Price_low"]":{"2004-12-31":7.36,"2005-01-28":7.36,"2005-03-31":12.03,"2005-04-01":12.03,"2005-04-29":12.03,"2005-06-30":13.67,"2005-07-01":13.67,"2005-07-29":13.67,"2005-09-30":15.58,"2005-10-28":15.58},"["EW","Price"]":{"2004-12-31":9.36,"2005-01-28":9.36,"2005-03-31":10.47,"2005-04-01":10.47,"2005-04-29":10.47,"2005-06-30":11.07,"2005-07-01":11.07,"2005-07-29":11.07,"2005-09-30":10.86,"2005-10-28":10.86},"["EW","Price_high"]":{"2004-12-31":10.56,"2005-01-28":10.56,"2005-03-31":11.07,"2005-04-01":11.07,"2005-04-29":11.07,"2005-06-30":11.69,"2005-07-01":11.69,"2005-07-29":11.69,"2005-09-30":11.56,"2005-10-28":11.56},"["EW","Price_low"]":{"2004-12-31":8.15,"2005-01-28":8.15,"2005-03-31":9.87,"2005-04-01":9.87,"2005-04-29":9.87,"2005-06-30":10.46,"2005-07-01":10.46,"2005-07-29":10.46,"2005-09-30":10.16,"2005-10-28":10.16},"["volatility",""]":{"2004-12-31":null,"2005-01-28":null,"2005-03-31":null,"2005-04-01":null,"2005-04-29":null,"2005-06-30":null,"2005-07-01":null,"2005-07-29":null,"2005-09-30":null,"2005-10-28":null}}' </code></pre> <p><a href="https://i.stack.imgur.com/hqf64.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hqf64.png" alt="enter image description here"></a></p> <p>I want with one line of code to create a new column called 'volatility' in the second level (i.e. under both 'ISGR' and 'EW') which will be defined by the following expression:</p> <pre><code>(100 * (Price_high - Price_low)/Price).round() </code></pre> <p>I have two problems: a) I can not create the new column b) I can not assign it</p> <p>Here is the code I used to create the column but it fails:</p> <pre><code>idx = pd.IndexSlice 100 *( toy.loc[:, idx[:, 'Price_high']] - toy.loc[:, idx[:, 'Price_low']].div(toy.loc[:, idx[:, 'Price']])).round() </code></pre> <p>This code line returns NaNs:</p> <p><a href="https://i.stack.imgur.com/i8PzE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i8PzE.png" alt="enter image description here"></a></p>
<p>For the output <code>MultiIndex DataFrame</code>, the selected DataFrames must share the same <code>MultiIndex</code>, so use <code>rename</code>:</p> <pre><code>idx = pd.IndexSlice
Price_high = toy.loc[:, idx[:, 'Price_high']].rename(columns={'Price_high':'new'})
Price_low = toy.loc[:, idx[:, 'Price_low']].rename(columns={'Price_low':'new'})
Price = toy.loc[:, idx[:, 'Price']].rename(columns={'Price':'new'})

df4 = (100 * (Price_high - Price_low)/Price).round()
print (df4)
            ISRG    EW
             new   new
2004-12-31  58.0  26.0
2005-01-28  58.0  26.0
2005-03-31  30.0  11.0
2005-04-01  30.0  11.0
2005-04-29  30.0  11.0
2005-06-30  24.0  11.0
2005-07-01  24.0  11.0
2005-07-29  24.0  11.0
2005-09-30  50.0  13.0
2005-10-28  50.0  13.0
</code></pre> <p>Another approach is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html" rel="nofollow noreferrer"><code>DataFrame.xs</code></a> to avoid the second level, so you work with non-<code>MultiIndex DataFrames</code>:</p> <pre><code>Price_high = toy.xs('Price_high', axis=1, level=1)
Price_low = toy.xs('Price_low', axis=1, level=1)
Price = toy.xs('Price', axis=1, level=1)

df4 = (100 * (Price_high - Price_low)/Price).round()
print (df4)
            ISRG    EW
2004-12-31  58.0  26.0
2005-01-28  58.0  26.0
2005-03-31  30.0  11.0
2005-04-01  30.0  11.0
2005-04-29  30.0  11.0
2005-06-30  24.0  11.0
2005-07-01  24.0  11.0
2005-07-29  24.0  11.0
2005-09-30  50.0  13.0
2005-10-28  50.0  13.0
</code></pre> <p>And then, if you need a <code>MultiIndex</code>, add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p> <pre><code>df4.columns = pd.MultiIndex.from_product([df4.columns, ['new']])
print (df4)
            ISRG    EW
             new   new
2004-12-31  58.0  26.0
2005-01-28  58.0  26.0
2005-03-31  30.0  11.0
2005-04-01  30.0  11.0
2005-04-29  30.0  11.0
2005-06-30  24.0  11.0
2005-07-01  24.0  11.0
2005-07-29  24.0  11.0
2005-09-30  50.0  13.0
2005-10-28  50.0  13.0
</code></pre>
python-3.x|pandas|multi-index|assign
0
3,343
56,335,178
Merge levels of same variable which are in consecutive columns
<p>I have a csv data file which has 2 headers that means one header as a question and the second one as a sub header which has multiple levels or answers for the main header. Current csv look like below table</p> <pre> Header Which country do you live? Which country you previously visited? Users Canada USA UK Mexico Norway India Singapore Pakistan User 1 Canada Singapore User 2 UK India User 3 Mexico Pakistan User 4 Norway India </pre> <p>I need to transform it into below table</p> <pre> Users Which country do you live? Which country you previously visited? User 1 Canada Singapore User 2 UK India User 3 Norway Pakistan User 4 Mexico India </pre> <p>Can someone help me with this?</p> <p>This is how my data look like <a href="https://i.stack.imgur.com/t4e8u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t4e8u.png" alt="enter image description here"></a></p> <p>My input file look like this <a href="https://i.stack.imgur.com/2Owh6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Owh6.png" alt="enter image description here"></a> and this is how my final output look like <a href="https://i.stack.imgur.com/xEHsh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xEHsh.png" alt="enter image description here"></a></p>
<p>First back-fill missing values with <code>bfill</code>, then select the first column and remove the second level of the <code>MultiIndex</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.droplevel.html" rel="nofollow noreferrer"><code>DataFrame.droplevel</code></a>:</p> <pre><code>print (df.columns)
MultiIndex(levels=[['Header', 'Which country do you live?'], ['Canada', 'Mexico', 'UK', 'USA', 'Users']],
           codes=[[0, 1, 1, 1, 1], [4, 0, 3, 2, 1]])
</code></pre> <hr> <pre><code>#if first column is not index, create it
#df = df.set_index([df.columns[0]])

#if there are empty strings, replace them with NaNs
#df = df.replace('', np.nan)

df = df.bfill(axis=1).iloc[:, 0].reset_index().droplevel(level=1, axis=1)
print (df)
   Header Which country do you live?
0  User 1                     Canada
1  User 2                         UK
2  User 3                     Mexico
3  User 4                     Norway
</code></pre> <p>EDIT:</p> <pre><code>df = df.groupby(level=0, axis=1).apply(lambda x: x.bfill(axis=1).iloc[:, 0])
print (df)
   Header Which country do you live? Which country you previously visited?
0  User 1                     Canada                             Singapore
1  User 2                         UK                                 India
2  User 3                     Mexico                              Pakistan
3  User 4                     Norway                                 India
</code></pre>
python|pandas|csv|data-transform
2
3,344
56,033,418
PyTorch and Chainer implementations of the Linear layer- are they equivalent?
<p>I want to use a Linear, Fully-Connected Layer as one of the input layers in my network. The input has shape (batch_size, in_channels, num_samples). It is based on the Tacotron paper: <a href="https://arxiv.org/pdf/1703.10135.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1703.10135.pdf</a>, the Encoder prenet part. It feels to me as if Chainer and PyTorch have different implementations of the Linear layer - are they really performing the same operations or am I misunderstanding something?</p> <p>In PyTorch, the behavior of the Linear layer follows the documentation: <a href="https://pytorch.org/docs/0.3.1/nn.html#torch.nn.Linear" rel="nofollow noreferrer">https://pytorch.org/docs/0.3.1/nn.html#torch.nn.Linear</a> according to which the shapes of the input and output data are as follows:</p> <blockquote> <p>Input: (N,∗,in_features) where * means any number of additional dimensions</p> <p>Output: (N,∗,out_features) where all but the last dimension are the same shape as the input.</p> </blockquote> <p>Now, let's try creating a linear layer in pytorch and performing the operation. I want an output with 8 channels, and the input data will have 3 channels.</p> <pre><code>import numpy as np
import torch
from torch import nn

linear_layer_pytorch = nn.Linear(3, 8)
</code></pre> <p>Let's create some dummy input data of shape (1, 4, 3) - (batch_size, num_samples, in_channels):</p> <pre><code>data = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=np.float32).reshape(1, 4, 3)
data_pytorch = torch.from_numpy(data)
</code></pre> <p>and finally, perform the operation:</p> <pre><code>results_pytorch = linear_layer_pytorch(data_pytorch)
results_pytorch.shape
</code></pre> <p>the shape of the output is as follows: <code>Out[27]: torch.Size([1, 4, 8])</code> Taking a look at the source of the PyTorch implementation:</p> <pre><code>def linear(input, weight, bias=None):
    # type: (Tensor, Tensor, Optional[Tensor]) -&gt; Tensor
    r&quot;&quot;&quot;
    Applies a linear transformation to the incoming data: :math:`y = xA^T + b`.

    Shape:

        - Input: :math:`(N, *, in\_features)` where `*` means any number of
          additional dimensions
        - Weight: :math:`(out\_features, in\_features)`
        - Bias: :math:`(out\_features)`
        - Output: :math:`(N, *, out\_features)`
    &quot;&quot;&quot;
    if input.dim() == 2 and bias is not None:
        # fused op is marginally faster
        ret = torch.addmm(bias, input, weight.t())
    else:
        output = input.matmul(weight.t())
        if bias is not None:
            output += bias
        ret = output
    return ret
</code></pre> <p>It transposes the weight matrix that is passed to it, broadcasts it along the batch_size axis and performs a matrix multiplication. Bearing in mind how a linear layer works, I imagine it as 8 nodes, each connected through a weighted synapse to every channel of an input sample, thus in my case it has 3*8 weights. And that is exactly the shape I see in the debugger: (8, 3).</p> <p>Now, let's jump to Chainer. The Chainer's linear layer documentation is available here: <a href="https://docs.chainer.org/en/stable/reference/generated/chainer.links.Linear.html#chainer.links.Linear" rel="nofollow noreferrer">https://docs.chainer.org/en/stable/reference/generated/chainer.links.Linear.html#chainer.links.Linear</a>. 
According to this documentation, the <em>Linear</em> layer wraps the function <em>linear</em>, which, according to the docs, flattens the input along the non-batch dimensions, and the shape of its weight matrix is <code>(output_size, flattened_input_size)</code></p> <pre><code>import chainer
linear_layer_chainer = chainer.links.Linear(8)
results_chainer = linear_layer_chainer(data)
results_chainer.shape
Out [21]: (1, 8)
</code></pre> <p>Creating the layer as <code>linear_layer_chainer = chainer.links.Linear(3, 8)</code> and calling it causes a size mismatch. So in the case of Chainer, I got totally different results, because this time around I have a weight matrix that is of shape (8, 12) and my results have a shape of (1, 8). So now, here is my question: since the results are clearly different, and both the weight matrices and the outputs have different shapes, how can I make them equivalent and what should be the desired output? In the PyTorch implementation of Tacotron it seems that the PyTorch approach is used as is (<a href="https://github.com/mozilla/TTS/blob/master/layers/tacotron.py" rel="nofollow noreferrer">https://github.com/mozilla/TTS/blob/master/layers/tacotron.py</a>) - Prenet. If that is the case, how can I make Chainer produce the same results (I have to implement this in Chainer)? I will be grateful for any insight, sorry that the post has gotten this long.</p>
<p>Chainer <code>Linear</code> layer (a bit frustratingly) does not apply the transformation to the last axis. Chainer flattens the rest of the axes. Instead you need to provide how many batch axes there are, <a href="https://docs.chainer.org/en/stable/reference/generated/chainer.links.Linear.html#chainer.links.Linear.forward" rel="nofollow noreferrer">documentation</a> which is 2 in your case:</p> <pre><code># data.shape == (1, 4, 3) results_chainer = linear_layer_chainer(data, n_batch_axes=2) # 2 batch axes (1,4) means you apply linear to (..., 3) # results_chainer.shape == (1, 4, 8) </code></pre> <p>You can also use <code>l(data, n_batch_axes=len(data.shape)-1)</code> to always apply to the last dimension which is the default behaviour in PyTorch, Keras etc.</p>
deep-learning|pytorch|linear-algebra|chainer
0
3,345
55,852,570
How can I get all the unique categories within my dataframe using python?
<p>I'm new to Python and trying to work with dataframe manipulation:</p> <p>I have a df with unique categories. I am unable to paste the dataframe because I use the Spyder IDE, which is not interactive and does not display all fields.</p> <p>My input to get all these unique categories within a dataframe:</p> <pre><code>uc =[]
for i in df['Category']:
    if i[0] not in df['Category']:
        uc.append(i[0])
print(uc)
</code></pre> <p>But when I use this script, I only receive the first letters of these categories:</p> <p>Output:</p> <pre><code>['F', 'P', 'N', 'F', 'L', 'T', 'W', 'S', 'W', 'B', 'S', 'F', 'T', 'T', 'B', 'T', 'B', 'L', 'S', 'F', 'F', 'F', 'N', 'P', 'H', 'T', 'L', 'T', 'S', 'E', 'P', 'N', 'T', 'L', 'P', 'L', 'W', 'F', 'N', 'L', 'N', 'L', 'F', 'F', 'N', 'T', 'P', 'L', 'B', 'W', 'L', 'W', 'F', 'F', 'H', 'T', 'F', 'T', 'T', 'N', 'G', 'L', 'M', 'N', 'F', 'N', 'F', 'L', 'N', 'P', 'F', 'B', 'B', 'S', 'F', 'P', 'F', 'P', 'P', 'P', 'B', 'P', 'B', 'B', 'L', 'B', 'F', 'P', 'P', 'B', 'B', 'C', 'G', 'C', 'G', 'B', 'P', 'T', 'P', 'P', 'N', 'G', 'S', 'G', 'F', 'G', 'F', 'T', 'S', 'P', 'F', 'C', 'C', 'C', 'C', 'C', 'G', 'C', 'F', 'C', 'F', 'B', 'G', 'C', 'B', 'B', 'B', 'C', 'P', 'G', 'S', 'D', 'P', 'G', 'F', 'L', 'C', 'G', 'P', 'S', 'B', 'P', 'T', 'T', 'L', 'M', 'F', 'T', 'P', 'C', 'F', 'B', 'M', 'G', 'C', 'P', 'T', 'L', 'F', 'F', 'F', 'T', 'P', 'C', 'G', 'T', 'F', 'F', 'S', 'B', 'M', 'T', 'T', 'T', 'T', 'H', 'B', 'N', 'F', 'A', 'T', 'E', 'M', 'L', 'G', 'P', 'B', 'L', 'N', 'S', 'G', 'G', 'F', 'F', 'F', 'G', 'G', 'G', 'G', 'F', 'T', 'G', 'P', 'G', 'C', 'G', 'G', 'G', 'F', 'T', 'T', 'L', 'F', 'S', 'T', 'F', 'F', 'G', 'G', 'L', 'M', 'T', 'L', 'F', 'B', 'A', 'F', 'B', 'F', 'B', 'B', 'T', 'F', 'B', 'F', 'F', 'P', 'V', 'M', 'S', 'F', 'C', 'B', 'N', 'M', 'W', 'B', 'F', 'B', 'F', 'F', 'M', 'L']
</code></pre> <p>How do I change my script to retrieve the unique categories within a dataframe?</p>
<p>Your loop appends <code>i[0]</code>, the first character of each string, which is why you only see single letters. To get the unique categories, try:</p> <pre><code> df['Category'].unique()
</code></pre>
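<p>A small hypothetical example of what it returns:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Category': ['Fruit', 'Plant', 'Fruit', 'Nut']})
print(df['Category'].unique())
# ['Fruit' 'Plant' 'Nut']
</code></pre>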
python|pandas
5
3,346
64,806,620
Pandas split data frames into multiple csv's based on value from column
<p>I have a question <a href="https://stackoverflow.com/questions/36192633/python-pandas-split-a-data-frame-based-on-a-column-value">similar to this one</a> but I need some further steps. The thing is my file contains 50k+ lines. Each line has 4 values: &quot;Indicator&quot;,&quot;Country&quot;,&quot;Date&quot; and &quot;value&quot;. I want to split my CSV based on country. I do not know how many countries there are, so all rows with the same country name should end up in one CSV file, and so on. The CSV file is not ordered either. I am using pandas and here is my code so far:</p> <pre><code>import pandas as pd

def read_csvfile():
    df = pd.read_csv('ebola_data_db_format.csv', sep= ',')
    #remove the unneeded columns
    df = df[df['Country'] != &quot;Guinea 2&quot;]
    df = df[df['Country'] != &quot;Liberia 2&quot;]
    #reset the index
    df.reset_index(drop=True, inplace=True)
    print (df.head(10))

read_csvfile()
</code></pre> <p>I want to be able to have a CSV file for every country so I can plot their data separately. Help please!</p>
<p>You can use groupby:</p> <pre><code>country_dfs = {k:v for k,v in df.groupby('Country')} </code></pre> <p>To save them in several csv files:</p> <pre><code>for k, v in df.groupby('Country'): v.to_csv(f'{k}.csv') </code></pre> <p>or from <code>country_dfs</code>:</p> <pre><code>for k, v in country_dfs.items(): v.to_csv(f'{k}.csv') </code></pre>
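<p>You can then pull a single country's frame out of the dict, for example (assuming 'Guinea' is one of the values in the Country column):</p> <pre><code>guinea = country_dfs['Guinea']
</code></pre>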
python|pandas|dataframe|csv
3
3,347
39,900,651
Pandas date difference in one column
<p>Here is my dataframe:</p> <pre><code>import pandas as pd
df_manual = pd.DataFrame({'A': ['one', 'one', 'two', 'two', 'one'] ,
                          'B': ['Ar', 'Br', 'Cr', 'Ar','Ar'] ,
                          'C': ['12/15/2011', '11/11/2001', '08/7/2015', '07/3/1999','03/03/2000' ]})
</code></pre> <p>I would like to create a column containing the date difference for column C (with prior grouping). Here is what I wrote:</p> <pre><code>df_manual['C']=pd.to_datetime(df_manual['C'])
df_manual['diff'] = df_manual.groupby(['A'])['C'].transform(lambda x: x.diff())
</code></pre> <p>But the result I get is not a day difference. The resulting difference between 2001-11-11 and 2000-03-03 is the date 1971-09-11, while I need the number of days in between.</p> <p>Any idea how to achieve it?</p>
<p>Use <code>apply</code> instead of <code>transform</code>:</p> <pre><code>df_manual['diff'] = df_manual.groupby(['A'])['C'].apply(lambda x: x.diff()) </code></pre> <p>The resulting output:</p> <pre><code> A B C diff 0 one Ar 2011-12-15 NaT 1 one Br 2001-11-11 -3686 days 2 two Cr 2015-08-07 NaT 3 two Ar 1999-07-03 -5879 days 4 one Ar 2000-03-03 -618 days </code></pre> <p>If you want <code>df_manual['diff']</code> to be an integer instead of a timedelta, use the <code>dt.days</code> accessor:</p> <pre><code>df_manual['diff'] = df_manual.groupby(['A'])['C'].apply(lambda x: x.diff()).dt.days </code></pre>
pandas|date-difference
4
3,348
40,062,641
Pandas time series: groupby and sum from noon to noon
<p>My pandas dataframe is structured like this (with 'date' as index):</p> <pre><code> starttime duration_seconds date 2012-12-24 11:52:00 31800 2012-12-23 0:28:00 35940 2012-12-22 2:00:00 26820 2012-12-21 1:57:00 23520 2012-12-20 1:32:00 23100 2012-12-19 0:50:00 25080 2012-12-18 1:17:00 24780 2012-12-17 0:38:00 25440 2012-12-15 10:38:00 32760 2012-12-14 0:35:00 23160 2012-12-12 22:54:00 3960 2012-12-12 0:21:00 24060 2012-12-10 23:45:00 900 2012-12-11 11:00:00 24840 2012-12-10 0:27:00 25980 2012-12-09 19:29:00 4320 2012-12-09 3:00:00 29880 2012-12-08 2:07:00 34380 </code></pre> <p>I use the following to groupby date and sum the total seconds each day:</p> <pre><code>df_sum = df.groupby(df.index.date).sum() </code></pre> <p>What I'd like to do is sum duration_seconds from noon on one day to noon on the following day. Is there an elegant (pandas) way of doing this? Thanks in advance!</p>
<p><code>pd.TimeGrouper</code> is a custom groupby class for time-interval grouping of NDFrames with a <code>DatetimeIndex</code>, <code>TimedeltaIndex</code> or <code>PeriodIndex</code>. (If your dataframe index is using date-strings, you'll need to convert it to a DatetimeIndex first by using <code>df.index = pd.DatetimeIndex(df.index)</code>.)</p> <p><code>df.groupby(pd.TimeGrouper('24H')).sum()</code> groups <code>df</code> using 24-hour intervals starting at time <code>00:00:00</code>. </p> <p><code>df.groupby(pd.TimeGrouper('24H', base=12)).sum()</code> groups <code>df</code> using 24-hour intervals starting at time <code>12:00:00</code>:</p> <pre><code>In [90]: df.groupby(pd.TimeGrouper('24H', base=12)).sum()
Out[90]: 
                     duration_seconds
2012-12-07 12:00:00           34380.0
2012-12-08 12:00:00           34200.0
2012-12-09 12:00:00           26880.0
2012-12-10 12:00:00           24840.0
2012-12-11 12:00:00           28020.0
2012-12-12 12:00:00               NaN
2012-12-13 12:00:00           23160.0
2012-12-14 12:00:00           32760.0
2012-12-15 12:00:00               NaN
2012-12-16 12:00:00           25440.0
2012-12-17 12:00:00           24780.0
2012-12-18 12:00:00           25080.0
2012-12-19 12:00:00           23100.0
2012-12-20 12:00:00           23520.0
2012-12-21 12:00:00           26820.0
2012-12-22 12:00:00           35940.0
2012-12-23 12:00:00           31800.0
</code></pre> <hr> <p>Documentation on <code>pd.TimeGrouper</code> is a little sparse. It is a subclass of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Grouper.html#pandas-grouper" rel="nofollow"><code>pd.Grouper</code></a> and thus many of its parameters have the same meaning as those documented for <code>pd.Grouper</code>. You can find more examples of <code>pd.TimeGrouper</code> usage in the <a href="http://pandas.pydata.org/pandas-docs/stable/cookbook.html#resampling" rel="nofollow">Cookbook</a>. I found the <code>base</code> parameter by inspecting <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/tseries/resample.py#L982" rel="nofollow">the source code</a>. The <code>base</code> parameter in <code>pd.TimeGrouper</code> has the same meaning as the <code>base</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow"><code>pd.resample</code></a> and that is not surprising since <code>pd.resample</code> is <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/tseries/resample.py#L941" rel="nofollow">implemented using <code>pd.TimeGrouper</code></a>.</p> <p>In fact, come to think of it, another way to compute the desired result is:</p> <pre><code>df.resample('24H', base=12).sum()
</code></pre>
python-2.7|pandas
3
3,349
40,291,132
Comparing columns of 2 dataframes
<p>I am trying to get the columns that are unique to a data frame.</p> <p>DF_A has 10 columns. DF_B has 3 columns (all three match column names in DF_A).</p> <p>Previously I was using:</p> <p>cols_to_use = DF_A.columns - DF_B.columns.</p> <p>Since my pandas update, I am getting this error: TypeError: cannot perform <strong>sub</strong> with this index type: </p> <p>What should I be doing now instead?</p> <p>Thank you!</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html" rel="nofollow">difference</a> method:</p> <p>Demo:</p> <pre><code>In [12]: df Out[12]: a b c d 0 0 8 0 3 1 3 4 1 7 2 0 5 4 0 3 0 9 7 0 4 5 8 5 4 In [13]: df2 Out[13]: a d 0 4 3 1 3 1 2 1 2 3 3 4 4 0 3 In [14]: df.columns.difference(df2.columns) Out[14]: Index(['b', 'c'], dtype='object') In [15]: cols = df.columns.difference(df2.columns) In [16]: df[cols] Out[16]: b c 0 8 0 1 4 1 2 5 4 3 9 7 4 8 5 </code></pre>
python|pandas|dataframe
1
3,350
69,384,465
pandas dataframe and/or condition syntax
<p>This pandas dataframe condition works perfectly</p> <pre><code> df2 = df1[(df1.A &gt;= 1) | (df1.C &gt;= 1) ]
</code></pre> <p>But if I want to filter rows based on 2 conditions</p> <pre><code>(1) A&gt;=1 &amp; B=10
(2) C &gt;=1

df2 = df1[(df1.A &gt;= 1 &amp; df1.B=10) | (df1.C &gt;= 1) ]
</code></pre> <p>it gives me an error message</p> <pre><code>[ERROR] Cannot perform 'rand_' with a dtyped [object] array and scalar of type [bool]
</code></pre> <p>Can anyone help? Thank you!</p>
<p>One set of brackets is missing. Add brackets surrounding A and B individually as well.</p> <p>Try this</p> <pre><code>df2 = df1[((df1.A &gt;= 1) &amp; (df1.B==10)) | (df1.C &gt;= 1) ]
</code></pre> <p>Example</p> <pre><code>df1 = pd.DataFrame({'A': [0,0,1,1,2,2], 'B': [0,10,0,10,0,10], 'C': [2,2,3,3,0,0]})

df1
   A   B  C
0  0   0  2
1  0  10  2
2  1   0  3
3  1  10  3
4  2   0  0
5  2  10  0

df2 = df1[((df1.A &gt;= 1) &amp; (df1.B==10)) | (df1.C &gt;= 1) ]

df2
   A   B  C
0  0   0  2
1  0  10  2
2  1   0  3
3  1  10  3
5  2  10  0
</code></pre>
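<p>For reference, the same filter written with <code>DataFrame.query</code>, which avoids the bracket pitfalls entirely (assuming the columns are numeric, as in the example above):</p> <pre><code>df2 = df1.query('(A &gt;= 1 and B == 10) or C &gt;= 1')
</code></pre>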
pandas|python-3.7
1
3,351
69,437,233
Object detection model in a for loop - memory issue
<p>I've been trying to run object detection on a batch of images, but the graph keeps using more and more memory. I think the variables are not being removed (when reassigned) and just keep getting added. I tried resetting the default graph, clearing the session and manually deleting + garbage collection.</p> <p>This is basically the for loop:</p> <pre><code>for i in range(num_images):
    box, m, s = detect(imgs[i])
    boxes.append(box)
</code></pre> <p>This is <code>detect</code>:</p> <pre><code>def detect(img):
    height, width, channels = img.shape
    detector_output = detector(tf.expand_dims(img, axis=0))
    classes = detector_output['detection_classes'][0]
    most_likely = tf.convert_to_tensor(classes[0])
    second_ = 20
    box = detector_output['detection_boxes'][0][0]
    box = tf.math.multiply(box, [height, width, height, width])
    box = tf.cast(box, tf.int16)
    del detector_output
    K.clear_session()
    return (box, most_likely)
</code></pre> <p>I've been keeping track of memory and it increases linearly with the loop, so I'm thinking it's detector_output that just gets added each loop.</p> <p>How could I solve this, what is a smarter way to do this?</p> <p>PS the reason I have (thinking I have to) do it this way is because the models do not support batching: <a href="https://tfhub.dev/tensorflow/collections/object_detection/2" rel="nofollow noreferrer">https://tfhub.dev/tensorflow/collections/object_detection/2</a></p> <p>Thank you very much</p>
<p>The <code>imgs</code> array could be taking a huge chunk of your memory as all the images are already loaded. You could try loading each image one by one and running the <code>detector()</code></p> <p>So your code has to change into something like this:</p> <pre><code>for name in image_names: # this function loads the image from disk and converts to a np array image = load(name) box, m, s = detect(image) boxes.append(box) </code></pre> <p>If you're still running out of memory, you can try saving <code>box</code> as a <code>.npy</code> file instead of appending to the <code>boxes</code> list, but the <code>boxes</code> list is unlikely to cause memory problems.</p>
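<p>A minimal sketch of such a <code>load</code> function, assuming the images are ordinary files readable by Pillow (the exact loader depends on your pipeline):</p> <pre><code>from PIL import Image
import numpy as np

def load(name):
    # hypothetical loader: read one image from disk into a numpy array
    with Image.open(name) as img:
        return np.asarray(img.convert('RGB'))
</code></pre>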
tensorflow|keras|tensorflow2.0|transfer-learning
0
3,352
69,655,047
How can I handles duplicates when plotting a csv?
<p>I have a csv file and a python script in which I use pandas and matplotlib to plot the values. The script is</p> <pre><code>#!/usr/bin/env python3

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys

def main(argv):
    in_csv_file=argv[0]
    print(argv[0])
    df= pd.read_csv(in_csv_file)
    print(df.head())
    print(df.columns.tolist())
    df.plot(kind='line',x='Frame',y='Confidence',color='blue', grid=True)
    #plt.show()
    plt.savefig('output.png')

if __name__ == '__main__':
    main(sys.argv[1:])
</code></pre> <p>This works without problems when all lines in the csv have a unique value for &quot;Frame&quot;. But what happens if I have, for example, this csv:</p> <pre><code>Frame,V1,V2,V3,V4,V5,Confidence
1,0,0,0,0,0,0
2,0,0,0,0,0,0
3,0,0,0,0,0,0
4,0,0,0,0,0,0
5,0,0,0,0,0,0
6,4,2,3,3,4,0.5
7,4,2,3,3,4,0.7
8,4,2,3,3,4,0.7
9,4,2,3,3,4,0.9
9,4,2,3,3,4,0.5
10,4,2,3,3,4,0.7
11,4,2,3,3,4,0.9
12,4,2,3,3,4,0.6
13,4,2,3,3,4,0.5
14,4,2,3,3,4,0.3
15,0,0,0,0,0,0
</code></pre> <p>In this case I get a plot like</p> <p><a href="https://i.stack.imgur.com/psHYW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psHYW.png" alt="Actual output" /></a></p> <p>However I would like a plot more like</p> <p><a href="https://i.stack.imgur.com/3lG60.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3lG60.png" alt="Desired output" /></a></p> <p>As you can see, the value for frame 8 is followed by two values for frame 9 and then one for frame 10.</p> <p>Is this possible, and how can I modify the code to handle it?</p>
<p>One way is to pivot you data and plot:</p> <pre><code>(df.assign(count=df.groupby('Frame').cumcount()) .pivot(index='Frame', columns='count', values='Confidence') .ffill(axis=1) .plot(color='C0', legend=None, ylabel='Confidence') ) </code></pre> <p>Which gives:</p> <p><a href="https://i.stack.imgur.com/EUrSA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EUrSA.png" alt="enter image description here" /></a></p> <p>Now, this may give different output when there are two consecutive duplicates, e.g., what do you expect when there are two <code>8</code> frames?</p>
python|pandas|matplotlib
2
3,353
69,622,199
Is there a way to achieve this with groupby/pivot table in pandas?
<p>How can I transform this dataframe</p> <pre><code>Year    Gender  Count
2018    Female   4010
2018    Male    19430
2019    Female   3212
2019    Male    16138
</code></pre> <p>to</p> <pre><code>Year    Male   Female  Ratio
2018    19430    4010   0.21
2019    16138    3212   0.20
</code></pre> <p>using groupby/pivot/any custom function?</p>
<p>Something like this:</p> <pre><code>new_df = df.pivot(&quot;Year&quot;, &quot;Gender&quot;, &quot;Count&quot;).assign(Ratio=lambda r: r.Female / r.Male) # To remove the &quot;Gender&quot; name from column index new_df.columns.name = None # Reset row index as column new_df = new_df.reset_index() </code></pre> <p>Result:</p> <pre><code> Year Female Male Ratio 0 2018 4010 19430 0.206382 1 2019 3212 16138 0.199033 </code></pre>
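<p>An equivalent sketch with <code>set_index</code>/<code>unstack</code> instead of <code>pivot</code>:</p> <pre><code>new_df = df.set_index(['Year', 'Gender'])['Count'].unstack()
new_df['Ratio'] = new_df['Female'] / new_df['Male']
</code></pre>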
python|pandas
0
3,354
41,071,586
I got an error when getting the variables of convolution layers in tensorflow
<p>I want to get the variables of the convolution layers and visualize them. My code is:</p> <pre><code>d3 = de_conv(d2, weights2['wc2'], biases2['bc2'], out_shape=[batch_size , c2, c2, 128])
d3 = batch_norm(d3, epsilon=1e-5, decay=0.9)
d3 = tf.nn.relu(d3)
tf.add_to_collection('weight_2', weights2['wc3'])
</code></pre> <p>and in the test:</p> <pre><code>with tf.Session() as sess:
    saver.restore(sess , model_path)
    conv_weights = sess.run([tf.get_collection('weight_2')])
    #visualize the weights
    conv_weights = np.array(conv_weights)
    print(conv_weights.shape)
    vis_square(conv_weights)
</code></pre> <p>But I don't understand why conv_weights has such confusing dimensions:</p> <pre><code>(1, 1, 5, 5, 1, 128)
</code></pre>
<p>Weights for conv layers should be <code>[filter height, filter width, input channels, number of filters (output channels)]</code>. Except for the first two dimensions, your weights fit. Is it just wrapped in two lists? E.g. <code>[[weights]]</code> instead of just <code>weights</code>.</p>
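<p>A quick sketch of how to unwrap it, assuming the collection holds a single kernel tensor:</p> <pre><code># tf.get_collection already returns a list, so don't wrap it in another one
conv_weights = sess.run(tf.get_collection('weight_2'))[0]
print(conv_weights.shape)  # expected: (5, 5, 1, 128)
</code></pre>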
python|tensorflow|deep-learning
0
3,355
41,090,854
assign value of arbitrary line in 2-d array to nans
<p>I have a 2D numpy array, z, in which I would like to assign values to nan based on the equation of a line +/- a width of 20. I am trying to implement the Raman 2nd scattering correction as it is done by the eem_remove_scattering method in the eemR package listed here: <a href="https://cran.r-project.org/web/packages/eemR/vignettes/introduction.html" rel="nofollow noreferrer">https://cran.r-project.org/web/packages/eemR/vignettes/introduction.html</a> but the method isn't visible.</p> <pre><code> import numpy as np
 ex = np.array([240, 245, 250, 255, 260, 265, 270, 275, 280, 285, 290, 295,
       300, 305, 310, 315, 320, 325, 330, 335, 340, 345, 350, 355, 360, 365,
       370, 375, 380, 385, 390, 395, 400, 405, 410, 415, 420, 425, 430, 435,
       440, 445, 450])

 em = np.array([300, 302, 304, 306, 308, 310, 312, 314, 316, 318, 320, 322,
       324, 326, 328, 330, 332, 334, 336, 338, 340, 342, 344, 346, 348, 350,
       352, 354, 356, 358, 360, 362, 364, 366, 368, 370, 372, 374, 376, 378,
       380, 382, 384, 386, 388, 390, 392, 394, 396, 398, 400, 402, 404, 406,
       408, 410, 412, 414, 416, 418, 420, 422, 424, 426, 428, 430, 432, 434,
       436, 438, 440, 442, 444, 446, 448, 450, 452, 454, 456, 458, 460, 462,
       464, 466, 468, 470, 472, 474, 476, 478, 480, 482, 484, 486, 488, 490,
       492, 494, 496, 498, 500, 502, 504, 506, 508, 510, 512, 514, 516, 518,
       520, 522, 524, 526, 528, 530, 532, 534, 536, 538, 540, 542, 544, 546,
       548, 550, 552, 554, 556, 558, 560, 562, 564, 566, 568, 570, 572, 574,
       576, 578, 580, 582, 584, 586, 588, 590, 592, 594, 596, 598, 600])

 X, Y = np.meshgrid(ex, em)

 z = np.sin(X) + np.cos(Y)
</code></pre> <p>The equation that I would like to apply is em = - 2 ex/ (0.00036*ex-1) + 500. I want every value in the array that intersects this line (+/- 20) to be set to nans. It's simple enough to set a single element to nans, but I haven't been able to locate a python function to apply this equation to the array and only set values that intersect with this line to nans. The desired output would be a new array with the same dimensions as z, but with the values that intersect the line equivalent to nan. Any suggestions on how to proceed are greatly appreciated.</p>
<p>Use <code>np.where</code> in the form <code>np.where( "condition for intersection", np.nan, z)</code>:</p> <pre><code>zi = np.where( np.abs(-2*X/(0.00036*X-1) + 500 - Y) &lt;= 20, np.nan, z) </code></pre> <p>As a matter of fact, there are no intersections here because (0.00036*ex-1) is close to -1 for all your values, which makes <code>- 2*ex/(0.00036*ex-1)</code> close to <code>2*ex</code>, and adding 500 brings this over any values you have in <code>em</code>. But in principle this works. </p> <p>Also, I suspect that the goal you plan to achieve by setting those values to NaN would be better achieved by using a <a href="https://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html" rel="nofollow noreferrer">masked array</a>.</p>
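<p>For reference, the masked-array equivalent of the same condition is a one-line sketch:</p> <pre><code>zm = np.ma.masked_where(np.abs(-2*X/(0.00036*X - 1) + 500 - Y) &lt;= 20, z)
</code></pre>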
python|arrays|numpy
1
3,356
41,070,549
Get index when looping through one column of pandas
<p>I have a simple dataframe:</p> <pre><code>index  a  y
0      1  2
1      4  6
2      5  8
</code></pre> <p>I want to loop through the "a" column, and print out its index for a specific value.</p> <pre><code>for x in df.a:
    if x == 4:
        print ("Index of that row")
</code></pre> <p>What syntax should I use to obtain the index value when the for loop hits the specific value in the "a" column that I am seeking?</p> <p>Thank You</p>
<p>A series is like a dictionary, so you can use the <code>.iteritems</code> method:</p> <pre><code>for idx, x in df['a'].iteritems(): if x==4: print('Index of that row: {}'.format(idx)) </code></pre>
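<p>If you only need the matching index labels, a vectorized alternative (no loop) is:</p> <pre><code>idx = df.index[df['a'] == 4]
print(idx.tolist())
</code></pre>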
python|pandas
11
3,357
53,808,834
python pandas dataframe adding total column with filtered criteria
<p>I have a file where I compare different pieces of information for different views of an underlying dataset. The goal is to list out the pieces of information and compare the totals.</p> <p>I have the following dataframe:</p> <pre><code>import pandas

df = pandas.DataFrame({"Measures": ['Country','State','County','City'],
                       "Green": ['Included','Excluded','Included','Included'],
                       "Orange": ['Excluded', 'Excluded', 'Excluded', 'Included']})
</code></pre> <p>I have the following underlying dataset:</p> <pre><code>Location    Green    Orange
Country     1        6
State       3        10
County      2        15
City        5        20
</code></pre> <p>I would like the final outcome to look like this:</p> <pre><code>Measures    Green       Orange
Country     Included    Excluded
State       Excluded    Excluded
County      Included    Excluded
City        Included    Included
Total       8           20
</code></pre>
<p>You can use <code>df</code> to mask the underlying dataframe's values before computing the sum.</p> <pre><code>m = df.eq('Included')

# Assume df2 is your underlying DataFrame.
v = df2[m].sum()
# Assign the total back as a new row in df.
df.loc['Total', :] = v[df2.dtypes != object]

df
      Measures     Green    Orange
0      Country  Included  Excluded
1        State  Excluded  Excluded
2       County  Included  Excluded
3         City  Included  Included
Total      NaN         8        20
</code></pre> <hr> <p>Another option, if you want output that matches yours more closely, is to set "Measures" and "Location" as the indexes respectively.</p> <pre><code>df = df.set_index('Measures')
df2 = df2.set_index('Location')

m = df.eq('Included')
v = df2[m].sum()
df.loc['Total', :] = v

df
             Green    Orange
Measures                    
Country   Included  Excluded
State     Excluded  Excluded
County    Included  Excluded
City      Included  Included
Total            8        20
</code></pre>
python|pandas|dataframe
1
3,358
52,839,122
Reshaping/Pivoting Data with Date Value
<p>I need to pivot/reshape long form data 2 ways: 1) adding date columns (end-of-month) and filling in the numeric value (total) 2) adding date columns (end-of-month) and filling in the date value (the day-of-month on which the 'total' value from the previous pivot was reached)</p> <p>I can do 1 with:</p> <pre><code>data = pd.DataFrame({'date': ['1-12-2016', '1-23-2016', '2-23-2016', '2-1-2016', '3-4-2016'],
                     'EOM': ['1-31-2016', '1-31-2016', '2-28-2016', '2-28-2016', '3-31-2016'],
                     'country':['uk', 'usa', 'fr','fr','uk'],
                     'tr_code': [10, 21, 20, 10,12],
                     'TOTAL': [435, 367,891,1234,231]
                     })
data['EOM'] = pd.to_datetime(data['EOM'])
data['date'] = pd.to_datetime(data['date'])

data_total = data.pivot_table(values='TOTAL', index=['country','tr_code'], columns='EOM')

Out[73]: 
EOM              2016-01-31  2016-02-28  2016-03-31
country tr_code                                    
fr      10              NaN      1234.0         NaN
        20              NaN       891.0         NaN
uk      10            435.0         NaN         NaN
        12              NaN         NaN       231.0
usa     21            367.0         NaN         NaN
</code></pre> <p>However, trying to change the value argument to 'date' produces: DataError: No numeric types to aggregate</p> <p>I basically want two df's: the one I accomplished, and another in the same format, but with the 'date' on which that total was reached instead of the 'TOTAL' value.</p> <p>Any help is greatly appreciated.</p>
<h3><code>set_index</code> with <code>unstack</code></h3> <p>This assumes the combinations of <code>['country', 'tr_code', 'EOM']</code> are unique and will break if they are not. This is why an aggregation function is important. We need a rule if and when we get multiple observations of a combination.</p> <pre><code>data.set_index(['country', 'tr_code', 'EOM']).date.unstack() EOM 2016-01-31 2016-02-28 2016-03-31 country tr_code fr 10 NaT 2016-02-01 NaT 20 NaT 2016-02-23 NaT uk 10 2016-01-12 NaT NaT 12 NaT NaT 2016-03-04 usa 21 2016-01-23 NaT NaT </code></pre> <hr> <h3><code>aggfunc</code> / <code>pivot_table</code></h3> <p>The default aggregation function is <code>mean</code> and that makes no sense for dates. <code>first</code> will do. Could also have used <code>last</code> which ALollz had used in their deleted answer.</p> <pre><code>data.pivot_table( values='date', index=['country', 'tr_code'], columns='EOM', aggfunc='first') EOM 2016-01-31 2016-02-28 2016-03-31 country tr_code fr 10 NaT 2016-02-01 NaT 20 NaT 2016-02-23 NaT uk 10 2016-01-12 NaT NaT 12 NaT NaT 2016-03-04 usa 21 2016-01-23 NaT NaT </code></pre> <hr> <h3><code>groupby</code></h3> <p>Less glamorous way of doing the same thing as <code>pivot_table</code></p> <pre><code>data.groupby(['country', 'tr_code', 'EOM']).date.first().unstack() EOM 2016-01-31 2016-02-28 2016-03-31 country tr_code fr 10 NaT 2016-02-01 NaT 20 NaT 2016-02-23 NaT uk 10 2016-01-12 NaT NaT 12 NaT NaT 2016-03-04 usa 21 2016-01-23 NaT NaT </code></pre>
python-3.x|pandas
7
3,359
52,635,812
how to determine if any column has a particular value
<p>I have a dataframe that looks like this:</p> <pre><code>ID Column1 Column2 Column3 1 cats dog bird 2 dog elephant tiger 3 leopard monkey cat </code></pre> <p>I'd like to create a new column that indicates whether <code>cat</code> is present in that row, as part of a string, so that the dataframe looks like this:</p> <pre><code> ID Column1 Column2 Column3 Column4 1 cats dog bird Yes 2 dog elephant tiger No 3 leopard monkey cat Yes </code></pre> <p>I would like to do this without assessing each column individually, because in the real data set there are a lot of columns. </p>
<p>The following should do the trick for you:</p> <pre><code>df['Column4'] = np.where((df.astype(np.object)=='cat').any(1), 'Yes', 'No') </code></pre> <p>Working example: </p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; d = {'ID': [1, 2, 3], 'Column1': ['cat', 'dog', 'leopard'], 'Column2': ['dog', 'elephant', 'monkey'], 'Column3': ['bird', 'tiger', 'cat']} &gt;&gt;&gt; df = pd.DataFrame(data=d) &gt;&gt;&gt; df Column1 Column2 Column3 ID 0 cat dog bird 1 1 dog elephant tiger 2 2 leopard monkey cat 3 &gt;&gt;&gt; df['Column4'] = np.where((df.astype(np.object)=='cat').any(1), 'Yes', 'No') &gt;&gt;&gt; df Column1 Column2 Column3 ID Column4 0 cat dog bird 1 Yes 1 dog elephant tiger 2 No 2 leopard monkey cat 3 Yes </code></pre> <p><strong>EDIT:</strong> In case you want to check if any of the columns <em>contains</em> a particular string you can use the following: </p> <pre><code>df['Column4'] = df.apply(lambda r: r.str.contains('cat', case=False).any(), axis=1) </code></pre>
python|string|pandas|row
3
3,360
46,237,149
Creating list of multiple copies from another list in python
<p>Is there any function in numpy that can achieve this? Function <code>f</code> below is kind of awkward:</p> <pre><code>def f(l,times):
    res=[]
    for i in range(len(l)):
        res+=[l[i]]*times[i]
    return res

In [93]:f([1,2,3],[2,2,2])
Out [93]:[1, 1, 2, 2, 3, 3]
</code></pre>
<p><code>np.repeat</code> does exactly this. Ex:</p> <pre><code>In [8]: a = np.arange(4) In [9]: b = np.array([1, 2, 1, 3]) In [10]: np.repeat(a, b) Out[10]: array([0, 1, 1, 2, 3, 3, 3]) </code></pre> <p>If you are working with >= 2 dimensional arrays, you can specify an axis parameter. See the doc <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer">here</a>.</p>
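<p>With a 2-D array and the <code>axis</code> parameter, for example:</p> <pre><code>In [11]: a2 = np.array([[1, 2], [3, 4]])

In [12]: np.repeat(a2, [1, 2], axis=0)
Out[12]:
array([[1, 2],
       [3, 4],
       [3, 4]])
</code></pre>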
python|numpy
0
3,361
46,516,244
Why is a category column seen as a column of strings in pandas?
<p>I have a dataset containing ints, floats and strings. I (think I) converted all strings to categories with the following statements:</p> <pre><code>for col in list (X):
    if X[col].dtype == np.object_:#dtype ('object'):
        X [col] = X [col].str.lower().astype('category', copy=False)
</code></pre> <p>However, when I input the data into a random forest model I get the error: </p> <pre><code>ValueError: could not convert string to float: 'non-compliant by no payment'
</code></pre> <p>The string 'non-compliant by no payment' occurs in a column named <code>X['compliance_detail']</code> and when I request its <code>dtype</code> I get <code>category</code>. When I ask for its values:</p> <pre><code>In[111]: X['compliance_detail'].dtype
Out[111]: category

In[112]: X['compliance_detail'].value_counts()
Out[112]: 
non-compliant by no payment                          5274
non-compliant by late payment more than 1 month       939
compliant by late payment within 1 month              554
compliant by on-time payment                          374
compliant by early payment                             10
compliant by payment with no scheduled hearing          7
compliant by payment on unknown date                    3
Name: compliance_detail, dtype: int64
</code></pre> <p>Does somebody know what's happening here? Why is a string seen in categorical data? Why is a dtype of Int64 listed for this column? </p> <p>Thank you for your time.</p>
<p>I should have read the docs more carefully ;-) Most statistical estimators in sklearn do not handle categories, as they do in R. RandomForestClassifiers can handle categories without problems in theory, but the implementation in sklearn does not allow it (for now). My mistake was to think that they could, because theory says they can and it worked nicely in R. However, <a href="http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow noreferrer">the sklearn documentation</a> says the following about the fit function:</p> <blockquote> <p>X : array-like or sparse matrix of shape = [n_samples, n_features]</p> <p>The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.</p> </blockquote> <p>Thus there is no room for categories, and when they are factorized they are treated as numbers. <a href="https://datascience.stackexchange.com/questions/5226/strings-as-features-in-decision-tree-random-forest">In this article</a> it is explained how categories work in Pandas and what their pitfalls are. I advise everybody who wants to use categories to read it, especially those with an R background. I hope this aspect will be improved, as in the current situation one cannot make full use of some procedures.</p>
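<p>For completeness, a sketch of the usual workarounds before fitting a sklearn estimator (assuming <code>X</code> is the frame from the question):</p> <pre><code># one-hot encode the category/object columns
X_encoded = pd.get_dummies(X)

# or map each category to an integer code (the ordering is arbitrary)
X['compliance_detail'] = X['compliance_detail'].cat.codes
</code></pre>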
python|pandas|dataframe|categories
1
3,362
58,456,213
pandas dataframe to_html formatters - can't display image
<p>I'm trying to add an up and down arrow to a pandas data frame with to_html for an email report.</p> <p>I'm using a lambda function to insert an up or down arrow into column values in my data frame. I know the image html works OK because I can put it in the body of the email and it works fine, but when I use this function and the pandas formatter it outputs like in the below image (PS: I know the CID is different to what's in my function, I was just testing something).</p> <p>Anyone have any idea why? Or does anyone have a better way to do it?</p> <p><a href="https://i.stack.imgur.com/baJ2n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/baJ2n.png" alt="pandas formatter error"></a></p> <p>call:</p> <pre><code>worst_20_accounts.to_html(index=False,formatters={'Last Run Rank Difference': lambda x: check_html_val_for_arrow(x)}))
</code></pre> <p>function:</p> <pre><code>def check_html_val_for_arrow(x):
    try:
        if x &gt; 0:
            return str(x) + ' &lt;img src="cid:image7"&gt;'
        elif x &lt; 0:
            return str(x) + ' &lt;img src="cid:image8"&gt;'
        else:
            return str(x)
    except:
        return str(x)
</code></pre>
<h3><code>escape=False</code></h3> <p>By default, the <code>pandas.DataFrame.to_html</code> method escapes any html in the dataframe's values.</p> <pre><code>my_img_snippet = ( "&lt;img src='https://www.pnglot.com/pngfile/detail/" "208-2086079_right-green-arrow-right-green-png-arrow.png'&gt;" ) df = pd.DataFrame([[my_img_snippet]]) </code></pre> <p>Then</p> <pre><code>from IPython.display import HTML HTML(df.to_html(escape=False)) </code></pre> <p><a href="https://i.stack.imgur.com/Cb4edm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cb4edm.png" alt="enter image description here"></a></p> <hr> <h3><code>style</code></h3> <p>You can let the <code>styler</code> object handle the rendering</p> <pre><code>df.style </code></pre> <p><a href="https://i.stack.imgur.com/Cb4edm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cb4edm.png" alt="enter image description here"></a></p>
html|pandas|dataframe|email|formatter
1
3,363
68,947,153
Return column names if contains all zeros or NaNs in Pandas
<p>Say I have a dataframe as follows:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 0, np.NaN, 0], [0, 0, np.NaN, 0]],
                  columns=['a', 'b', 'c', 'd'])
print(df)
</code></pre> <p>Out:</p> <pre><code>   a  b   c  d
0  1  0 NaN  0
1  0  0 NaN  0
</code></pre> <p>I would like to get the column names (<code>b</code>, <code>c</code>, and <code>d</code> for the case above) whose values are all <strong>zeros</strong> or <strong>NaNs</strong>. How could I achieve that in Pandas? Thanks.</p> <p>To subset zeros columns:</p> <pre><code>df.loc[:, ~(df != 0).any(axis=0)]
</code></pre> <p>Out:</p> <pre><code>   b  d
0  0  0
1  0  0
</code></pre>
<p>You can directly use <code>.any()</code> to detect <code>null</code> and <code>0</code></p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df a b c d 0 1 0 NaN 0 1 0 0 NaN 0 &gt;&gt;&gt; df.any() a True b False c False d False dtype: bool </code></pre> <p>You can use this as-is, or call <code>.to_dict()</code> to create a pure-Python mapping</p> <pre><code>&gt;&gt;&gt; df.any().to_dict() {'a': True, 'b': False, 'c': False, 'd': False} </code></pre> <hr /> <p>EDIT: Original Answer (incorrect)</p> <p>This is a rare case where you should iterate over the DataFrame as you're interested in the columns, not the rows</p> <pre class="lang-py prettyprint-override"><code>na_or_0_cols = [] for column in df: # iterates by-name if df[column].isna().all() or (df[column] == 0).all(): na_or_0_cols.append(column) print(na_or_0_cols) </code></pre> <p>Out:</p> <pre><code>['b', 'c', 'd'] </code></pre> <p>(this doesn't work because it won't detect both)</p>
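<p>And if you want the column names themselves rather than the boolean mask:</p> <pre><code>&gt;&gt;&gt; df.columns[~df.any()].tolist()
['b', 'c', 'd']
</code></pre>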
python-3.x|pandas|dataframe
1
3,364
68,875,464
Vintage / Static Pool Analysis in Pandas / Anaconda
<p>I'm looking to do vintage analysis in pandas with some csv data, and I'm wondering if I can streamline the process vs. iterating through a dataframe to create the vintage analysis. For example, I have a dataset similar to below in csv and I've read it into a dataframe <code>default</code>.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Origination</th> <th>Default Month</th> <th>Default Amount</th> </tr> </thead> <tbody> <tr> <td>1Q20</td> <td>6</td> <td>3000</td> </tr> <tr> <td>1Q20</td> <td>8</td> <td>2000</td> </tr> <tr> <td>1Q20</td> <td>4</td> <td>1000</td> </tr> <tr> <td>2Q20</td> <td>6</td> <td>3000</td> </tr> <tr> <td>2Q20</td> <td>8</td> <td>2500</td> </tr> <tr> <td>1Q20</td> <td>6</td> <td>3000</td> </tr> <tr> <td>2Q20</td> <td>0</td> <td>0</td> </tr> <tr> <td>3Q20</td> <td>3</td> <td>1000</td> </tr> <tr> <td>2Q20</td> <td>4</td> <td>3000</td> </tr> <tr> <td>3Q20</td> <td>6</td> <td>4000</td> </tr> </tbody> </table> </div> <p>Now I know I can easily hit the dataframe <code>default.groupby(['Origination']).sum()</code> which tells me by quarter what defaulted, but doesn't give me the time dimension in months. Similarly grouping by the columns will give me defaults by month but not differentiate by quarter.</p> <p>I'm guessing using something like <a href="https://stackoverflow.com/questions/39922986/pandas-group-by-and-sum">this question</a> and grouping by two dimensions <code>default.groupby(['Origination','Default Month']).sum()</code> would allow me to split it into quarters and by months, <strong>BUT</strong> is there a way to effectively do that in a vintage format?</p> <p>I created a blank dataframe with the form:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> </tr> </thead> <tbody> <tr> <td>1Q20</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>2Q20</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>3Q20</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> </tbody> </table> </div> <p>I'm trying to transform that dataset by summation into a vintage table like so; note, for example, that 1Q20 had two loans default in month 6, so that is now $6,000.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Default</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> </tr> </thead> <tbody> <tr> <td>1Q20</td> <td>0</td> <td>0</td> <td>0</td> <td>1000</td> <td>0</td> <td>6000</td> <td>0</td> <td>2000</td> </tr> <tr> <td>2Q20</td> <td>0</td> <td>3000</td> <td>0</td> <td>0</td> <td>0</td> <td>3000</td> <td>0</td> <td>2500</td> </tr> <tr> <td>3Q20</td> <td>0</td> <td>0</td> <td>1000</td> <td>0</td> <td>0</td> <td>4000</td> <td>0</td> <td>0</td> </tr> </tbody> </table> </div> <p>Is there an easy way to do this I'm overlooking, or is it a dual groupby and then trying to assign the data by category into the dataframe?</p>
<p>You should try:</p> <pre><code>df.set_index(['Origination', 'Default Month']).unstack(level=1)
</code></pre> <p>Alternatively, if you have duplicates, use <code>pivot_table</code>:</p> <pre><code>(pd.pivot_table(df, index='Origination', columns=['Default Month'],
                values='Default Amount', aggfunc='sum', fill_value=0)
   .reindex(range(12), axis=1)   # those two lines to ensure
   .fillna(0, downcast='infer')  # all columns are present
)
</code></pre> <p>output:</p> <pre><code>Default Month  0  1  2     3     4  5     6  7     8  9  10  11
Origination                                                    
1Q20           0  0  0     0  1000  0  6000  0  2000  0   0   0
2Q20           0  0  0     0  3000  0  3000  0  2500  0   0   0
3Q20           0  0  0  1000     0  0  4000  0     0  0   0   0
</code></pre>
python|pandas|dataframe|numpy
1
3,365
44,475,993
Pandas less than or equal to giving TypeError: invalid type comparison
<p>I have 4 lists based on which I want to continuously filter my Pandas data-frame </p> <pre><code>categoryList=['Parameter1', 'Parameter1', 'Parameter2', 'Parameter2']
conditionList=['b1', 'b41', 'm1', 'm2']
conditionDescList=['&gt;', 'btn', '&lt;=', 'btn']
conditionParamList=['1000', '2:3', '0.5', '0.1:0.3']
</code></pre> <p>Now I am trying the below code to filter rows from my 2 data-frames (<code>df_custid_marker</code>, <code>df_custid_bp</code>) based on <code>categoryList</code></p> <pre><code> k =0
 for i in conditionDescList:
     if(categoryList[k]=='Parameter1'):
         if(i=='btn'):
             arrValues=conditionParamList[k].split(":")
             minVal=arrValues[0]
             maxVal=arrValues[1]
             df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] &gt; minVal) &amp; (df_custid_marker[conditionList[k]] &lt; maxVal)]
         elif(i=='&gt;'):
             df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] &gt; conditionParamList[k])]
         elif(i=='&lt;'):
             df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] &lt; conditionParamList[k])]
         elif(i=='&lt;='):
             df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] &lt; conditionParamList[k]) | (df_custid_marker[conditionList[k]] == conditionParamList[k])]
         elif(i=='&gt;='):
             df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] &gt; conditionParamList[k]) | (df_custid_marker[conditionList[k]] == conditionParamList[k])]
         else:
             df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] == conditionParamList[k])]
     k+=1

 k =0
 for i in conditionDescList:
     if(categoryList[k]=='Parameter2'):
         if(i=='btn'):
             arrValues=conditionParamList[k].split(":")
             minVal=arrValues[0]
             maxVal=arrValues[1]
             df_custid_bp=df_custid_bp[(df_custid_bp[conditionList[k]] &gt; minVal) &amp; (df_custid_bp[conditionList[k]] &lt; maxVal)]
         elif(i=='&gt;'):
             df_custid_bp=df_custid_bp[(df_custid_bp[conditionList[k]] &gt; conditionParamList[k])]
         elif(i=='&lt;'):
             df_custid_bp=df_custid_bp[(df_custid_bp[conditionList[k]] &lt; conditionParamList[k])]
         elif(i=='&lt;='):
             df_custid_bp=df_custid_bp[(df_custid_bp[conditionList[k]] &lt; conditionParamList[k]) | (df_custid_bp[conditionList[k]] == conditionParamList[k])]
         elif(i=='&gt;='):
             df_custid_bp=df_custid_bp[(df_custid_bp[conditionList[k]] &gt; conditionParamList[k]) | (df_custid_bp[conditionList[k]] == conditionParamList[k])]
         else:
             df_custid_bp=df_custid_bp[(df_custid_bp[conditionList[k]] == conditionParamList[k])]
     k+=1
</code></pre> <p>Now I am getting the below error for <code>&lt;=</code> <code>df_custid_marker=df_custid_marker[(df_custid_marker[conditionList[k]] &lt; conditionParamList[k]) | (df_custid_marker[conditionList[k]] == conditionParamList[k])]</code> </p> <p><code>raise TypeError("invalid type comparison")</code></p>
<p>The problem was that the data-frame column type was <code>float</code> while the list values were strings, so the comparison raised a <code>TypeError</code>. Casting the parameters to numbers before comparing fixes it.</p>
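<p>A minimal sketch of that fix, assuming the thresholds are numeric strings:</p> <pre><code># cast the string threshold to float before comparing with a float column
threshold = float(conditionParamList[k])
df_custid_marker = df_custid_marker[df_custid_marker[conditionList[k]] &lt;= threshold]
</code></pre>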
python|pandas
1
3,366
44,381,192
How do I aggregate the values from two columns and create a new column from it?
<p>How can I sum up the values in column "number" for each state and then create a new column from those values next to "number"?</p> <p>So far I have this to aggregate: </p> <pre><code>out_state_total['df']=df.groupby('State')['out-of-state'].sum(axis=1) </code></pre> <p>But for some reason I can't create a new column from those values...</p> <pre><code>example state in-state out-of-state final_state red NJ 3000 99 AL blue ND 43 500 AK green NY 8000 10 AZ gray NJ 94 20 AR orange DE 32 7 </code></pre>
<p>Use <code>transform</code>, which broadcasts the group sums back onto the original rows (note the double brackets when selecting multiple columns):</p> <pre><code>df[['in_state_total','out_state_total']] = df.groupby('state')[['in-state', 'out-of-state']].transform('sum') example state in-state out-of-state in_state_total out_state_total 0 red NJ 3000 99 3094 119 1 blue ND 43 500 43 500 2 green NY 8000 10 8000 10 3 gray NJ 94 20 3094 119 4 orange DE 32 7 32 7 </code></pre>
python|python-2.7|python-3.x|pandas
1
3,367
60,930,158
Tensorflow saving subclass model which has multiple arguments to call() method
<p>I am following the tensorflow neural machine translation tutorial: <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/nmt_with_attention</a></p> <p>I am trying to save the Encoder and Decoder models, which are subclasses of tf.keras.Model and work properly during training and inference. However, when I try to save them I get the following error:</p> <pre><code>TypeError: call() missing 1 required positional argument: 'initial_state' </code></pre> <p>Here is the code:</p> <pre><code>class Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_matrix, n_units, batch_size): super(Encoder, self).__init__() self.n_units = n_units self.batch_size = batch_size self.embedding = Embedding(vocab_size, embedding_matrix.shape[1], weights=[embedding_matrix], trainable=True, mask_zero=True) self.lstm = LSTM(n_units, return_sequences=True, return_state=True, recurrent_initializer="glorot_uniform") def call(self, input_utterence, initial_state): input_embed = self.embedding(input_utterence) encoder_states, h1, c1 = self.lstm(input_embed, initial_state=initial_state) return encoder_states, h1, c1 def create_initial_state(self): return tf.zeros((self.batch_size, self.n_units)) encoder = Encoder(vocab_size, embedding_matrix, LSTM_DIM, BATCH_SIZE) # do some training... tf.saved_model.save(encoder, "encoder_model") </code></pre> <p>I also tried to make the call method take one input list argument only and unpack the variables I need within the method, but then I get the following error when trying to save:</p> <pre><code>File "C:\Users\Fady\Documents\Machine Learning\chatbot\models\seq2seq_model.py", line 32, in call input_utterence, initial_state = inputs ValueError: too many values to unpack (expected 2) </code></pre>
<p>You can export the model successfully if you package your inputs in a list. You also need to specify the input signatures to export your model, here your code with slight modifications which works</p> <pre><code>import tensorflow as tf from tensorflow.keras.layers import Embedding, LSTM import numpy as np print('TensorFlow: ', tf.__version__) vocab_size = 10000 LSTM_DIM = 256 BATCH_SIZE = 16 embedding_matrix = np.random.randn(vocab_size, 300) class Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_matrix, n_units, batch_size): super(Encoder, self).__init__() self.n_units = n_units self.batch_size = batch_size self.embedding = Embedding(vocab_size, embedding_matrix.shape[1], weights=[embedding_matrix], trainable=True, mask_zero=True) self.lstm = LSTM(n_units, return_sequences=True, return_state=True, recurrent_initializer="glorot_uniform") @tf.function def call(self, inputs): input_utterence, initial_state = inputs input_embed = self.embedding(input_utterence) encoder_states, h1, c1 = self.lstm(input_embed, initial_state=initial_state) return encoder_states, h1, c1 def create_initial_state(self): return tf.zeros((self.batch_size, self.n_units)) random_input = tf.random.uniform(shape=[BATCH_SIZE, 3], maxval=vocab_size, dtype=tf.int32) encoder = Encoder(vocab_size, embedding_matrix, LSTM_DIM, BATCH_SIZE) initial_state = [encoder.create_initial_state(), encoder.create_initial_state()] _ = encoder([random_input, initial_state]) # required so that encoder.build is triggered tf.saved_model.save(encoder, "encoder_model", signatures=encoder.call.get_concrete_function( [ tf.TensorSpec(shape=[None, None], dtype=tf.int32, name='input_utterence'), [ tf.TensorSpec(shape=[None, LSTM_DIM], dtype=tf.float32, name='initial_h'), tf.TensorSpec(shape=[None, LSTM_DIM], dtype=tf.float32, name='initial_c') ] ])) loaded_model = tf.saved_model.load('encoder_model') loaded_model([random_input, initial_state]) </code></pre> <p>output:</p> <pre><code>TensorFlow: 2.2.0-rc1 WARNING:tensorflow:From /home/dl_user/tf_stable/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. 
INFO:tensorflow:Assets written to: encoder_model/assets (&lt;tf.Tensor: shape=(16, 3, 256), dtype=float32, numpy= array([[[-0.06000457, 0.02422162, -0.05310762, ..., -0.01340707, 0.12212028, -0.02747637], [ 0.13303193, 0.3119418 , -0.17995344, ..., -0.10185111, 0.09568192, 0.06919193], [-0.08075664, -0.11490613, -0.20294832, ..., -0.14999194, 0.02177649, 0.05538464]], [[-0.03792192, -0.08431012, 0.03687581, ..., -0.1768839 , -0.10469476, 0.08730042], [-0.02956271, 0.43850696, -0.07400024, ..., 0.04097629, 0.209705 , 0.27194855], [ 0.02529916, 0.18367583, -0.11409087, ..., 0.0458075 , 0.2065246 , 0.22976378]], [[ 0.04196627, 0.08302739, 0.02218204, ..., 0.07388053, -0.05696848, -0.31895265], [-0.00536443, 0.1566213 , -0.22412768, ..., 0.10560389, 0.20187919, -0.1896591 ], [ 0.26364946, 0.13163888, 0.14586888, ..., 0.19517538, 0.17677066, -0.40476215]], ..., [[ 0.10999472, 0.07398727, 0.23443945, ..., -0.1912791 , -0.0195728 , 0.11717851], [ 0.03978832, 0.07587367, 0.16567066, ..., -0.29463592, 0.05950819, 0.0242265 ], [ 0.2505787 , 0.15849623, 0.06635283, ..., -0.17969091, 0.12549783, -0.11459641]], [[-0.20408148, 0.04629526, 0.00601436, ..., 0.21321473, 0.04952445, -0.0129672 ], [-0.14671509, 0.2911171 , 0.13047697, ..., -0.03531414, -0.16794083, 0.01575338], [-0.08337164, 0.08723269, 0.16235027, ..., 0.07919721, 0.05701642, 0.15379705]], [[-0.2747393 , 0.24351111, -0.05829309, ..., -0.00448833, 0.07568972, 0.03978251], [-0.16282909, -0.04586324, -0.0054924 , ..., 0.11050001, 0.1312355 , 0.16555254], [ 0.07759799, -0.07308074, -0.10038756, ..., 0.18139914, 0.07769153, 0.1375772 ]]], dtype=float32)&gt;, &lt;tf.Tensor: shape=(16, 256), dtype=float32, numpy= array([[-0.08075664, -0.11490613, -0.20294832, ..., -0.14999194, 0.02177649, 0.05538464], [ 0.02529916, 0.18367583, -0.11409087, ..., 0.0458075 , 0.2065246 , 0.22976378], [ 0.26364946, 0.13163888, 0.14586888, ..., 0.19517538, 0.17677066, -0.40476215], ..., [ 0.2505787 , 0.15849623, 0.06635283, ..., -0.17969091, 0.12549783, -0.11459641], [-0.08337164, 0.08723269, 0.16235027, ..., 0.07919721, 0.05701642, 0.15379705], [ 0.07759799, -0.07308074, -0.10038756, ..., 0.18139914, 0.07769153, 0.1375772 ]], dtype=float32)&gt;, &lt;tf.Tensor: shape=(16, 256), dtype=float32, numpy= array([[-0.32829475, -0.18770668, -0.2956414 , ..., -0.2427501 , 0.03146099, 0.16033864], [ 0.05112522, 0.6664379 , -0.19836858, ..., 0.10015503, 0.511694 , 0.51550364], [ 0.3379809 , 0.7145362 , 0.22311993, ..., 0.372106 , 0.25914627, -0.81374717], ..., [ 0.36742535, 0.29009506, 0.13245934, ..., -0.4318537 , 0.26666188, -0.20086129], [-0.17384854, 0.22998339, 0.27335796, ..., 0.09973672, 0.10726923, 0.47339764], [ 0.22148325, -0.11998752, -0.16339599, ..., 0.31903535, 0.20365229, 0.28087002]], dtype=float32)&gt;) </code></pre>
python|tensorflow|machine-learning|keras|tf.keras
6
3,368
61,091,608
Visualize the model's training process taking into account ModelCheckpoint
<p>I am training a Tensorflow model, in which I include a checkpoint to save the best model (based on val_loss).</p> <pre><code>checkpoint = ModelCheckpoint(filepath, monitor='val_rmse', verbose=2, \ save_best_only=True, save_weights_only=False, \ mode='min', save_frequency=1) </code></pre> <p>After the training, to visualize the model's training process epoch after epoch using the stats stored in the <code>history</code> object, I do:</p> <pre><code>plotter.plot({'Basic': history}, metric = 'loss') </code></pre> <p>Question: What do I do if I want to visualize the model's training process not epoch after epoch, but only up to the epoch where the best model is saved? E.g., if I initially set epochs=5,000 but the best model is at epoch 2,000, I want to chart only until epoch 2,000. </p> <p>Thanks</p>
<p>From <a href="https://keras.io/callbacks/" rel="nofollow noreferrer">documentation</a>: The <code>filepath</code> can contain named formatting options, which will be filled with the values of <code>epoch</code> and keys in <code>logs</code> (passed in on_epoch_end).</p> <p>For example: if <code>filepath</code> is weights.{<code>epoch</code>:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the <code>epoch</code> number and the validation loss in the filename. e.g <strong>weights.0150-0.88.hdf5</strong></p> <p>You can then inspect the file name and plot until the desired epoch number.</p>
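<p>A minimal sketch of that idea (assuming <code>history</code> is the object returned by <code>model.fit</code> and the monitored quantity &mdash; <code>val_rmse</code> in the question's checkpoint &mdash; appears in <code>history.history</code>): find the epoch with the best monitored value and truncate the curves before plotting.</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

monitored = history.history['val_rmse']
best_epoch = int(np.argmin(monitored)) + 1  # mode='min': epoch where the best model was saved

plt.plot(history.history['loss'][:best_epoch], label='loss')
plt.plot(monitored[:best_epoch], label='val_rmse')
plt.legend()
plt.show()
</code></pre>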
tensorflow
0
3,369
60,803,983
Remove groups with groupby if no row contains a pattern in pandas
<p>Hello, I have a dataframe such as </p> <pre><code>col1 col2 G1 OP2 G1 OP0 G1 OPP G1 OPL_Lh G2 OII G2 OIP G2 IOP G3 TYU G4 TUI G4 TYUI G4 TR_Lh </code></pre> <p>and I would like to group by <code>col1</code> and remove from the df the groups that do not contain at least one row where <code>col2</code> contains </p> <pre><code>'_Lh' </code></pre> <p>Here I should only keep <code>G1</code> and <code>G4</code> and get: </p> <pre><code>col1 col2 G1 OP2 G1 OP0 G1 OPP G1 OPL_Lh G4 TUI G4 TYUI G4 TR_Lh </code></pre> <p>Does someone have an idea? Thank you </p>
<p>IIUC,</p> <p>you can use a boolean test and <code>isin</code> to filter in the groups that contain <code>_Lh</code></p> <pre><code>m = df[df['col2'].str.contains('_Lh')]['col1'] df[df['col1'].isin(m)].groupby('col1')... </code></pre> <hr> <pre><code>print(df[df['col1'].isin(m)]) col1 col2 0 G1 OP2 1 G1 OP0 2 G1 OPP 3 G1 OPL_Lh 8 G4 TUI 9 G4 TYUI 10 G4 TR_Lh </code></pre>
python|pandas
1
3,370
71,714,565
How to calculate percentages from multiple columns
<p>I want to create a table that looks like this:</p> <p><a href="https://i.stack.imgur.com/NiHTF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NiHTF.png" alt="1" /></a></p> <p>So far I have a table I created to get the value counts but I need help with creating a table that calculates the total value of row 0 and 1. I'm using this dataset: <a href="https://github.com/fivethirtyeight/data/tree/master/bob-ross" rel="nofollow noreferrer">https://github.com/fivethirtyeight/data/tree/master/bob-ross</a></p> <p><a href="https://i.stack.imgur.com/yHIXO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yHIXO.png" alt="2" /></a></p> <p>Code:</p> <pre><code>ross = bobross[['Apple frame', 'Aurora borealis', 'Barn', 'Beach', 'Boat', 'Bridge', 'Building', 'Bushes', 'Cabin', 'Cactus', 'Circle frame', 'Cirrus clouds', 'Cliff', 'Clouds', 'Coniferous tree', 'Cumulus clouds', 'Decidious tree', 'Diane andre', 'Dock', 'Double oval frame', 'Farm', 'Fence', 'Fire', 'Florida frame', 'Flowers', 'Fog', 'Framed', 'Grass', 'Guest', 'Half circle frame', 'Half oval frame', 'Hills', 'Lake', 'Lakes', 'Lighthouse', 'Mill', 'Moon', 'At least one mountain', 'At least two mountains', 'Nighttime', 'Ocean', 'Oval frame', 'Palm trees', 'Path', 'Person', 'Portrait', 'Rectangle 3d frame', 'Rectangular frame', 'River or stream', 'Rocks', 'Seashell frame', 'Snow', 'Snow-covered mountain', 'Split frame', 'Steve ross', 'Man-made structure', 'Sun', 'Tomb frame', 'At least one tree', 'At least two trees', 'Triple frame', 'Waterfall', 'Waves', 'Windmill', 'Window frame', 'Winter setting', 'Wood framed']].apply(pd.Series.value_counts) ross </code></pre>
<p>IIUC,</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/bob-ross/elements-by-episode.csv') dfi = df.set_index(['EPISODE', 'TITLE']) (dfi.sum()/np.sum(dfi.to_numpy())) </code></pre> <p>Output:</p> <pre><code>APPLE_FRAME 0.000310 AURORA_BOREALIS 0.000621 BARN 0.005278 BEACH 0.008382 BOAT 0.000621 ... WAVES 0.010556 WINDMILL 0.000310 WINDOW_FRAME 0.000310 WINTER 0.021422 WOOD_FRAMED 0.000310 Length: 67, dtype: float64 </code></pre>
python|pandas
0
3,371
42,503,495
Creating a new column in pandas quantile using Quantile function
<p>I want to create a column Quantile, for each date. Calculated the Quantile for each unique value Sales value. Ie Category always corresponds to the same number in sales for each particular date.</p> <p>I have dataframe which is indexed by date. There are many dates and multiple of the same dates. Example of the subset of df for 1 day:</p> <pre><code> Category Sales Ratio 1 Ratio 2 11/19/2016 Bar 300 0.46 0.96 11/19/2016 Bar 300 0.56 0.78 11/19/2016 Bar 300 0.43 0.96 11/19/2016 Bar 300 0.47 0.94 11/19/2016 Casino 550 0.92 0.12 11/19/2016 Casino 550 0.43 0.74 11/19/2016 Casino 550 0.98 0.65 11/19/2016 Casino 550 0.76 0.67 11/19/2016 Casino 550 0.79 0.80 11/19/2016 Casino 550 0.90 0.91 11/19/2016 Casino 550 0.89 0.31 11/19/2016 Café 700 0.69 0.99 11/19/2016 Café 700 0.07 0.18 11/19/2016 Café 700 0.75 0.59 11/19/2016 Café 700 0.07 0.64 11/19/2016 Café 700 0.14 0.42 11/19/2016 Café 700 0.30 0.67 11/19/2016 Pub 250 0.64 0.09 11/19/2016 Pub 250 0.93 0.37 11/19/2016 Pub 250 0.69 0.42 </code></pre> <p>I want a code which adds a new column called Quantile which calculates for each date the 0.5 quantile of unique Sales. Key thing to note is Sales is always the same for a category for a particular date (things change as dates change).</p> <p>Example of a solution: df['Quantile'] = df.Sales.groupby(df.index).transform(lambda x: x.quantile(q=0.5, axis=0, interpolation='midpoint'))</p> <p>However this would not suffice (even if it worked). For this example (for this one date), In the new column df['Quantile'], all values would be the same for a partcular date.</p> <p>For this date the calculation would use 300, 550, 700 and 250 for the quantile.</p> <p>Therefore the final df would look like this:</p> <pre><code> Category Sales Ratio 1 Ratio 2 Quantile 11/19/2016 Bar 300 0.46 0.96 425 11/19/2016 Bar 300 0.56 0.78 425 11/19/2016 Bar 300 0.43 0.96 425 11/19/2016 Bar 300 0.47 0.94 425 11/19/2016 Casino 550 0.92 0.12 425 11/19/2016 Casino 550 0.43 0.74 425 11/19/2016 Casino 550 0.98 0.65 425 11/19/2016 Casino 550 0.76 0.67 425 11/19/2016 Casino 550 0.79 0.80 425 11/19/2016 Casino 550 0.90 0.91 425 11/19/2016 Casino 550 0.89 0.31 425 11/19/2016 Café 700 0.69 0.99 425 11/19/2016 Café 700 0.07 0.18 425 11/19/2016 Café 700 0.75 0.59 425 11/19/2016 Café 700 0.07 0.64 425 11/19/2016 Café 700 0.14 0.42 425 11/19/2016 Café 700 0.30 0.67 425 11/19/2016 Pub 250 0.64 0.09 425 11/19/2016 Pub 250 0.93 0.37 425 11/19/2016 Pub 250 0.69 0.42 425 </code></pre> <p>If I was to do Quantile of all Sales for a particular date without looking at only one element of each category I would get something like 550 (which I do not want).</p> <p>Key thing is I would like the code to be simple, and reasonably fast (as date is quite big). Also interpolation has to be midpoint.</p>
<p>It seems you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.drop_duplicates.html" rel="noreferrer"><code>drop_duplicates</code></a>:</p> <pre><code>df['Quantile'] = df.Sales.groupby(df.index) .transform(lambda x: x.drop_duplicates().quantile()) print (df) Category Sales Ratio 1 Ratio 2 Quantile 11/19/2016 Bar 300 0.46 0.96 425 11/19/2016 Bar 300 0.56 0.78 425 11/19/2016 Bar 300 0.43 0.96 425 11/19/2016 Bar 300 0.47 0.94 425 11/19/2016 Casino 550 0.92 0.12 425 11/19/2016 Casino 550 0.43 0.74 425 11/19/2016 Casino 550 0.98 0.65 425 11/19/2016 Casino 550 0.76 0.67 425 11/19/2016 Casino 550 0.79 0.80 425 11/19/2016 Casino 550 0.90 0.91 425 11/19/2016 Casino 550 0.89 0.31 425 11/19/2016 Cafe 700 0.69 0.99 425 11/19/2016 Cafe 700 0.07 0.18 425 11/19/2016 Cafe 700 0.75 0.59 425 11/19/2016 Cafe 700 0.07 0.64 425 11/19/2016 Cafe 700 0.14 0.42 425 11/19/2016 Cafe 700 0.30 0.67 425 11/19/2016 Pub 250 0.64 0.09 425 11/19/2016 Pub 250 0.93 0.37 425 11/19/2016 Pub 250 0.69 0.42 425 </code></pre> <hr> <pre><code>df['Quantile'] = df.Sales.groupby(df.index) .transform(lambda x: np.percentile(x.unique(), 50)) print (df) Category Sales Ratio 1 Ratio 2 Quantile 11/19/2016 Bar 300 0.46 0.96 425 11/19/2016 Bar 300 0.56 0.78 425 11/19/2016 Bar 300 0.43 0.96 425 11/19/2016 Bar 300 0.47 0.94 425 11/19/2016 Casino 550 0.92 0.12 425 11/19/2016 Casino 550 0.43 0.74 425 11/19/2016 Casino 550 0.98 0.65 425 11/19/2016 Casino 550 0.76 0.67 425 11/19/2016 Casino 550 0.79 0.80 425 11/19/2016 Casino 550 0.90 0.91 425 11/19/2016 Casino 550 0.89 0.31 425 11/19/2016 Cafe 700 0.69 0.99 425 11/19/2016 Cafe 700 0.07 0.18 425 11/19/2016 Cafe 700 0.75 0.59 425 11/19/2016 Cafe 700 0.07 0.64 425 11/19/2016 Cafe 700 0.14 0.42 425 11/19/2016 Cafe 700 0.30 0.67 425 11/19/2016 Pub 250 0.64 0.09 425 11/19/2016 Pub 250 0.93 0.37 425 11/19/2016 Pub 250 0.69 0.42 425 </code></pre>
python|pandas|group-by|quantile
6
3,372
69,876,267
How to slice very large dataframe by days without walking the entire dataframe each time?
<p>I have two massive datetime indexed and sorted dataframes that I need compare groups from one to groups from another.</p> <pre><code>start, end = df.index.min(), df.index.max() for day in pd.date_range(start.date(), end.date()+a_day, freq='D'): current_df = df[df.index.date == day.date()] current_df2= df2[df2.index.date == day.date()] do_heavy_lift(df, df2) </code></pre> <p>I thought the heavy lift would take most of the time but the slicing by date takes &gt; 95% of the time.</p> <p>Upon reflection, each time I slice a dataframe it is probably walking the entire index.</p> <p>Is there any way to significantly improve this approach? The indexes are sorted. Can I:</p> <ol> <li>Tell it to stop searching after the end of the current day?</li> <li>Remember where it was at the end of the previous day and start there for the next day?</li> </ol> <p>You can create a sample dataset using:</p> <pre><code>import numpy as np from datetime import datetime, timedelta date_today = datetime.now() days = pd.date_range(date_today, date_today + timedelta(7000), freq='H') np.random.seed(seed=1111) data = np.random.randint(1, high=100, size=len(days)) df = pd.DataFrame({'test': days, 'col2': data}) df = df.set_index('test') np.random.seed(seed=1345) data = np.random.randint(1, high=100, size=len(days)) df2 = pd.DataFrame({'test': days, 'col2': data}) df2 = df2.set_index('test') df.sort_index(inplace=True) df2.sort_index(inplace=True) </code></pre> <blockquote> <p>CPU times: user 9min 50s, sys: 4.27 s, total: 9min 54s Wall time: 10min 45s</p> </blockquote>
<p>You can use <code>groupby</code> + <code>apply</code>. Normalize the datetime index (keep only the date part) so that all rows of the same day fall into one group.</p> <pre><code>def do_heavy_lift(day_df): # do stuff here return ... out = pd.concat([df, df2], axis=1) \ .groupby(lambda x: x.normalize()) \ .apply(do_heavy_lift) </code></pre>
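<p>Alternatively &mdash; a rough sketch, assuming both indexes are sorted as stated in the question &mdash; <code>Index.searchsorted</code> performs a binary search, so each day's slice is found without scanning the whole frame and each lookup effectively starts where the previous day ended:</p> <pre><code>import pandas as pd

start = df.index.min().normalize()
end = df.index.max().normalize()
one_day = pd.Timedelta(days=1)

for day in pd.date_range(start, end, freq='D'):
    # binary search on the sorted index instead of a full scan
    lo, hi = df.index.searchsorted(day), df.index.searchsorted(day + one_day)
    lo2, hi2 = df2.index.searchsorted(day), df2.index.searchsorted(day + one_day)
    do_heavy_lift(df.iloc[lo:hi], df2.iloc[lo2:hi2])
</code></pre>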
python|pandas|dataframe|python-datetime
1
3,373
43,152,803
How do I subtract the max element of each row of a 2D tensor from all elements of that row
<p>Originally, I asked this with the global max, but the solution of just subtracting <code>tf.reduce_max()</code> doesn't work when you put in dimensions. I'd want something like <code> mytensor - tf.reduce_max(mytensor, 1) </code> but this gives a dimension error.</p> <p>I can't use <code> tf.constant(value = tf.reduce_max(mytensor,1) , shape = mytensor.get_shape()[1])</code> with a specified value because the output of <code>reduce_max()</code> is a tensor and not a constant.</p>
<p>For global max, you can do:</p> <pre><code>import tensorflow as tf inp = tf.constant([[1, 2, 3],[4,5,6] ]) res=tf.reduce_max(inp) res1=inp-res sess = tf.Session() print(sess.run(res)) print(sess.run(res1)) </code></pre> <p>Then res is 6 and res1 is</p> <pre><code>[[-5 -4 -3] [-2 -1 0]] </code></pre> <p>If you want to subtract the maximum element in each row, this will do the job:</p> <pre><code>import tensorflow as tf inp = tf.constant([[1, 2, 3],[6,6,6] ]) res=tf.reduce_max(inp,1) res1=inp-tf.reshape(res,[-1,1]) sess = tf.Session() print(sess.run(res1)) </code></pre> <p>Then <code>res1</code> is </p> <pre><code> [[-2 -1 0] [ 0 0 0]] </code></pre>
python|tensorflow
1
3,374
72,360,518
Merge two pandas dataframes with multiple columns per row
<p>I have a dataframe &quot;df1&quot; that look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">company id</th> <th style="text-align: center;">company name</th> <th style="text-align: right;">dealid_1</th> <th style="text-align: right;">dealyear_1</th> <th style="text-align: right;">dealid_2</th> <th style="text-align: right;">dealyear_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">C1</td> <td style="text-align: center;">ABC</td> <td style="text-align: right;"></td> <td style="text-align: right;"></td> <td style="text-align: right;"></td> <td style="text-align: right;"></td> </tr> <tr> <td style="text-align: left;">C2</td> <td style="text-align: center;">DEF</td> <td style="text-align: right;"></td> <td style="text-align: right;"></td> <td style="text-align: right;"></td> <td style="text-align: right;"></td> </tr> </tbody> </table> </div> <p>Where I want to fill the blank cells with data from another dataframe &quot;df2&quot; which looks like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">deal id</th> <th style="text-align: center;">deal year</th> <th style="text-align: right;">company id</th> <th style="text-align: right;">company name</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">D1</td> <td style="text-align: center;">2010</td> <td style="text-align: right;">C1</td> <td style="text-align: right;">ABC</td> </tr> <tr> <td style="text-align: left;">D2</td> <td style="text-align: center;">2015</td> <td style="text-align: right;">C1</td> <td style="text-align: right;">ABC</td> </tr> <tr> <td style="text-align: left;">D3</td> <td style="text-align: center;">2012</td> <td style="text-align: right;">C2</td> <td style="text-align: right;">DEF</td> </tr> <tr> <td style="text-align: left;">D4</td> <td style="text-align: center;">2017</td> <td style="text-align: right;">C2</td> <td style="text-align: right;">DEF</td> </tr> </tbody> </table> </div> <p>So the final result for &quot;df1&quot; should be as follows:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">company id</th> <th style="text-align: center;">company name</th> <th style="text-align: right;">dealid_1</th> <th style="text-align: right;">dealyear_1</th> <th style="text-align: right;">dealid_2</th> <th style="text-align: right;">dealyear_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">C1</td> <td style="text-align: center;">ABC</td> <td style="text-align: right;">D1</td> <td style="text-align: right;">2010</td> <td style="text-align: right;">D2</td> <td style="text-align: right;">2015</td> </tr> <tr> <td style="text-align: left;">C2</td> <td style="text-align: center;">DEF</td> <td style="text-align: right;">D3</td> <td style="text-align: right;">2012</td> <td style="text-align: right;">D4</td> <td style="text-align: right;">2017</td> </tr> </tbody> </table> </div> <p>Can anyone please help me with this?</p> <p>Thank you!</p>
<p>You can use:</p> <pre><code>df3 = (df2.drop(columns='company name') .assign(col=df2.groupby('company name').cumcount().add(1).astype(str)) .pivot(index='company id', columns='col') ) df3.columns = df3.columns.map('_'.join) out = df1[['company id', 'company name']].merge(df3, on='company id') </code></pre> <p>output:</p> <pre><code> company id company name deal id_1 deal id_2 deal year_1 deal year_2 0 C1 ABC D1 D2 2010 2015 1 C2 DEF D3 D4 2012 2017 </code></pre>
python|pandas|dataframe|join|merge
0
3,375
72,280,550
Why matplotlib imshow shows different images by changing the order of the array?
<p>I have a test case that reshaping the array changes the result of <code>plt.imshow</code>:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from skimage import io file_raw_path = &quot;8258792/Fig5_ColorfulCell_raw.tif&quot; im = io.imread(file_raw_path) im= np.max(im, axis=0) im_reshaped = im.reshape((im.shape[1],im.shape[2],im.shape[0])) for i in range(im.shape[0]): plt.imshow(im[i],cmap='gray') plt.show() for i in range(im_reshaped.shape[2]): plt.imshow(im_reshaped[...,i],cmap='gray') plt.show() </code></pre> <p>The first loop shows these images:</p> <p><a href="https://i.stack.imgur.com/TSanf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TSanf.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/ovuGL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ovuGL.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/8FHsA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8FHsA.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/HVQLU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HVQLU.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/w6BWy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w6BWy.png" alt="enter image description here" /></a></p> <p>And the second loop shows this image (of course 5 times the same thing...):</p> <p><a href="https://i.stack.imgur.com/7i8Bs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7i8Bs.png" alt="enter image description here" /></a></p> <p>Any idea why this is happening?!</p>
<p><code>np.reshape()</code> doesn't move any data around; it just changes where the axes &quot;wrap around&quot;. You can think about it as first flattening the input array, then wrapping the data across the axes to fit the new shape.</p> <pre class="lang-python prettyprint-override"><code>&gt;&gt;&gt; arr = np.arange(6).reshape(2, 3) array([[0, 1, 2], [3, 4, 5]]) &gt;&gt;&gt; arr.reshape(3, 2) array([[0, 1], [2, 3], [4, 5]]) </code></pre> <p>If you read across left-to-right, top-to-bottom, all the numbers are in the same order.</p> <p>You probably want <code>np.transpose()</code> and friends, which (essentially) shuffle the data around to change the order of the axes, so that <code>im[i, j, k] == im.transpose(1, 2, 0)[j, k, i]</code> (note, it doesn't actually move any data, it just looks like that). For your use case, <code>np.moveaxis(im, 0, -1)</code> will do the same thing, and is a bit easier to read (&quot;move axis 0 to the end&quot;).</p> <pre class="lang-python prettyprint-override"><code>&gt;&gt;&gt; arr.transpose(1, 0) array([[0, 3], [1, 4], [2, 5]]) </code></pre>
python|numpy|matplotlib
1
3,376
50,328,545
Stochastic Gradient Descent for Linear Regression on partial derivatives
<p>I am implementing stochastic gradient descent for linear regression manually by considering the partial derivatives (df/dm) and (df/db).</p> <p>The objective is to randomly select the initial weights w0 and then let them converge. As this is stochastic, we have to take a sample of the data set on each run.</p> <p>The learning rate should initially be 1 and after every run it should be halved, so when wK+1 equals wK (k=1,2,3,......) the loop should stop.</p> <p>This is implemented on the Boston dataset in Sklearn.</p> <p>As I am new to Python I didn't use functions. Below is the code:</p> <pre><code>r= 1 m_deriv = 0 b_deriv = 0 learning_rate = 1 it = 1 w0_random = np.random.rand(13) w0 = np.asmatrix(w0_random).T b = np.random.rand() b0 = np.random.rand() while True: df_sample = bos.sample(100) price = df_sample['price'] price = np.asmatrix(price) xi = np.asmatrix(df_sample.drop('price',axis=1)) N = len(xi) for i in range(N): # -2x * (y-(mx +b)) m_deriv += np.dot(-2*xi[i].T , (price[:,i] - np.dot(xi[i] , w0_random) + b)) # -2(y - (mx + b)) b_deriv += -2*(price[:,i] - (np.dot(xi[i] , w0_random) + b)) w0_new = m_deriv * learning_rate b0_new = b_deriv * learning_rate w1 = w0 - w0_new b1 = b0 - b0_new it += 1 if (w0==w1).all(): break else: w0 = w1 b0 = b1 learning_rate = learning_rate/2 </code></pre> <p>When the loop runs I am getting large values for w as well as b. They are not converging properly. Where did the loop go wrong so that it results in such high values, and how can I solve it?</p>
<p>In the above case, applying <code>StandardScaler</code> to <code>xi</code> before processing gives good results; also use <code>w1</code> instead of <code>w0_random</code> inside the loop.</p> <pre><code>from sklearn.preprocessing import StandardScaler import numpy as np bos['PRICE'] = boston.target X = bos.drop('PRICE', axis = 1) Y = bos['PRICE'] df_sample =X[:100] price =Y[:100] xi_1=[] price_1=[] N = len(df_sample) for j in range(N): scaler = StandardScaler() scaler.fit(df_sample) xtrs = scaler.transform(df_sample) xi_1.append(xtrs) yi=np.asmatrix(price) price_1.append(yi) #print(price_1) #print(xi_1) xi=xi_1 price=price_1 r= 1 m_deriv = 0 b_deriv = 0 learning_rate = 1 it = 1 w0_random = np.random.rand(13) w0 = np.asmatrix(w0_random).T b = np.random.rand() b0 = np.random.rand() while True: for i in range(N): # -2x * (y-(mx +b)) w1=w0 b1=b0 m_deriv = np.dot(-2*xi[i].T , (price[i] - np.dot(xi[i] , w1) + b1)) # -2(y - (mx + b)) b_deriv = -2*(price[i] - (np.dot(xi[i] , w1) + b1)) w0_new = m_deriv * learning_rate b0_new = b_deriv * learning_rate w1 = w0 - w0_new b1 = b0 - b0_new it += 1 if (w0==w1).all(): break else: w0 = w1 b0 = b1 learning_rate = learning_rate/2 print("m_deriv=",m_deriv) print("b_deriv=",b_deriv) </code></pre>
python|pandas|numpy|gradient-descent
2
3,377
45,276,830
Xcode version must be specified to use an Apple CROSSTOOL
<p>I try to build tensorflow-serving using bazel but I've encountered some errors during the building </p> <pre><code>ERROR:/private/var/tmp/_bazel_Kakadu/3f0c35881c95d2c43f04614911c03a57/external/local_config_cc/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL. ERROR: Analysis of target '//tensorflow_serving/sources/storage_path:file_system_storage_path_source_proto' failed; build aborted. </code></pre> <p>I've already tried to use <code>bazel clean</code> and <code>bazel clean --expunge</code> but it didn't help and still Bazel doesn't see my xcode (I suppose) but it's completely installed. I even reinstalled it to make sure that all works fine but the error didn't disappeared </p> <p>My Bazel version is </p> <pre><code>Build label: 0.5.2-homebrew Build target: bazel-out/darwin_x86_64-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar Build time: Thu Jul 13 12:29:40 2017 (1499948980) Build timestamp: 1499948980 Build timestamp as int: 1499948980 KakaduDevs-Mac-mini:serving Kakadu$ </code></pre> <p>OS is MacOS Sierra version 10.12.5</p> <p>What should I do to specify Xcode version in bazel to avoid this error? It seems that the error is common but I haven't found how I can make the bazel build. P.S I'm trying to install tensorflow-serving the way it's explained here. <a href="https://tensorflow.github.io/serving/setup" rel="noreferrer">https://tensorflow.github.io/serving/setup</a></p>
<pre><code>bazel clean --expunge sudo xcode-select -s /Applications/Xcode.app/Contents/Developer sudo xcodebuild -license bazel clean --expunge bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package </code></pre>
tensorflow|bazel|tensorflow-serving
106
3,378
54,507,486
Merging two TRUE/FALSE dataframe columns keeping only TRUE
<p>I have two columns in a pandas dataframe, like below:</p> <pre><code>df[1] df[2] TRUE TRUE FALSE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE </code></pre> <p>From these two columns, how do I make the following new column:</p> <pre><code>df[3] TRUE TRUE TRUE FALSE TRUE FALSE </code></pre>
<p>Looks like you need the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>any</code></a> function, like that:</p> <pre><code>df['result_col'] = df.any(axis=1) </code></pre>
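<p>Note that this assumes the two columns are actually boolean dtype; if they hold the strings "TRUE"/"FALSE", every non-empty string is truthy, so map them to booleans first. A minimal sketch:</p> <pre><code>import pandas as pd

df = pd.DataFrame({1: ['TRUE', 'FALSE', 'TRUE', 'FALSE', 'TRUE', 'FALSE'],
                   2: ['TRUE', 'TRUE', 'FALSE', 'FALSE', 'FALSE', 'FALSE']})

# convert the string flags to real booleans before aggregating
bools = df.replace({'TRUE': True, 'FALSE': False})
df['result_col'] = bools.any(axis=1)
print(df['result_col'])
</code></pre>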
python|pandas|dataframe
3
3,379
73,839,377
The Adam optimizer is showing an error in Keras Tensorflow
<p>I was training a neural network to recognize angry and happy emotion. The code:</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.python.keras.optimizer_v1 import Adam from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Activation, Dense, MaxPool2D, Conv2D, Flatten from tensorflow.python.keras.metrics import categorical_crossentropy from sklearn.metrics import confusion_matrix import itertools import os import shutil import glob import random import matplotlib.pyplot as plt import warnings trainpath = 'angry-vs-happy/train' testpath = 'angry-vs-happy/test' validpath = 'angry-vs-happy/valid' train_batches = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input).flow_from_directory(directory=trainpath, target_size=(224,224), classes =['angry', 'happy'], batch_size=10) test_batches = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input).flow_from_directory(directory=testpath, target_size=(224,224), classes =['angry', 'happy'], batch_size=10) valid_batches = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input).flow_from_directory(directory=validpath, target_size=(224,224), classes =['angry', 'happy'], batch_size=10, shuffle=False) assert train_batches.n == 1000 assert valid_batches.n == 200 assert test_batches.n == 100 assert train_batches.num_classes == valid_batches.num_classes == test_batches.num_classes == 2 imgs, labels = next(train_batches) model = Sequential([ Conv2D(filters=32, kernel_size=(3,3),activation = 'relu', padding='same', input_shape = (224,224,3)), MaxPool2D(pool_size=(2,2), strides=2), Conv2D(filters=64, kernel_size=(3,3),activation='relu', padding='same'), MaxPool2D(pool_size=(2,2), strides=2), Flatten(), Dense(units=2, activation='softmax'), ]) model.summary() model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x=train_batches, validation_data=valid_batches, epochs=10, verbose=2) </code></pre> <p>But it shows an error:</p> <pre><code>ValueError: ('`tf.compat.v1.keras` Optimizer (', &lt;tensorflow.python.keras.optimizer_v1.Adam object at 0x0000022339FBEDD0&gt;, ') is not supported when eager execution is enabled. Use a `tf.keras` Optimizer instead, or disable eager execution.') </code></pre> <p>But when I rewrite the model.compile code as :</p> <pre><code>model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) </code></pre> <p>it shows that :</p> <pre><code>ValueError: Could not interpret optimizer identifier: &lt;keras.optimizers.optimizer_v2.adam.Adam object at 0x0000028E41B7EE60&gt; </code></pre>
<p>Use <code>tf.keras.optimizers</code>, and remove <code>.python.</code> from the imports. I don't see anything about <code>tensorflow.python.keras</code> in the documentation, so I would not use it</p> <pre><code>from tensorflow.keras.optimizers import Adam from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Activation, Dense, MaxPool2D, Conv2D, Flatten from tensorflow.keras.metrics import categorical_crossentropy </code></pre>
python|tensorflow|machine-learning|keras|neural-network
1
3,380
73,647,160
How to select and combine different columns based on specific condition in pandas python?
<pre><code>df = pd.DataFrame(data={ &quot;id&quot;: ['a', 'a', 'b', 'b', 'a', 'c', 'c', 'b'], &quot;transaction_amount&quot;: [110, 0, 10, 30, 40.4, 62.2, 20, 20], &quot;principal_amount&quot;: [100, 0, 0, 0, 40, 60, 0, 0], &quot;interest_amount&quot;: [10, 0, 10, 0, 0.4, 0.6, 10, 0], &quot;overpayment_amount&quot;: [0, 0, 0, 0, 0, 1.6, 10, 20], }) </code></pre> <p>I have the above dataframe. I want to have a column ,<code>amount</code>, and populate it as follows:</p> <ul> <li>Create a row for each <code>principal_amount</code>, <code>interest_amount</code> and <code>overpayment_amount</code> if it's value is not 0, and assign <code>principal</code>, <code>interest</code> and <code>overpayment</code> to a new column, <code>transaction_type</code>, respectively.</li> <li>Get value from <code>transaction_amount</code> if other three column values are 0 for that row.</li> </ul> <p>The output should look like this:</p> <pre><code> amount transaction_type id 3 30.0 NaN b 0 100.0 principal a 4 40.0 principal a 5 60.0 principal c 0 10.0 interest a 2 10.0 interest b 4 0.4 interest a 5 0.6 interest c 6 10.0 interest c 5 1.6 overpayment c 6 10.0 overpayment c 7 20.0 overpayment b </code></pre> <p>My current solution:</p> <pre><code>import pandas as pd df = pd.DataFrame(data={ &quot;id&quot;: ['a', 'a', 'b', 'b', 'a', 'c', 'c', 'b'], &quot;transaction_amount&quot;: [110, 0, 10, 30, 40.4, 62.2, 20, 20], &quot;principal_amount&quot;: [100, 0, 0, 0, 40, 60, 0, 0], &quot;interest_amount&quot;: [10, 0, 10, 0, 0.4, 0.6, 10, 0], &quot;overpayment_amount&quot;: [0, 0, 0, 0, 0, 1.6, 10, 20], }) columns = [&quot;amount&quot;, &quot;transaction_type&quot;] output_df = pd.DataFrame(columns=columns) # Add transaction amount condition = (df[&quot;principal_amount&quot;] == 0) &amp; (df[&quot;interest_amount&quot;] == 0) &amp; (df[&quot;overpayment_amount&quot;] == 0) &amp; (df[&quot;transaction_amount&quot;] != 0) subdf = df.loc[condition, ['id', 'transaction_amount']] subdf = subdf.rename(columns={'transaction_amount': &quot;amount&quot;}) output_df = output_df.append(subdf) # Add principal and interest for field in [&quot;principal_amount&quot;, &quot;interest_amount&quot;, &quot;overpayment_amount&quot;]: subdf = df.loc[df[field] != 0, ['id', field]] subdf[&quot;transaction_type&quot;] = field.split(&quot;_&quot;)[0] subdf = subdf.rename(columns={field: &quot;amount&quot;}) output_df = output_df.append(subdf) </code></pre> <p>Is there any pandas feature that helps me do this implementation more concise and efficient?</p>
<p>One approach could be as follows.</p> <pre><code>import pandas as pd import numpy as np out = df.reset_index(drop=False).melt( id_vars=['index'], value_vars=list(df.columns)[1:], var_name='transaction_type', value_name='amount' ).set_index('index') out = out[out['amount'].gt(0)] out['v'] = out.index.value_counts() out = out[out.v.eq(1) | out.transaction_type.ne('transaction_amount')].drop('v', axis=1) out['transaction_type'] = out['transaction_type']\ .str.replace('_amount','').replace({'transaction':np.nan}) out = out.iloc[:,::-1] out.index.name=None out['id'] = df['id'] print(out) amount transaction_type id 3 30.0 NaN b 0 100.0 principal a 4 40.0 principal a 5 60.0 principal c 0 10.0 interest a 2 10.0 interest b 4 0.4 interest a 5 0.6 interest c 6 10.0 interest c 5 1.6 overpayment c 6 10.0 overpayment c 7 20.0 overpayment b </code></pre> <p>Explanation approach:</p> <ul> <li>We use <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>df.melt</code></a> to get all the column names (starting from the second column) and amounts in two separate columns, and make sure also to keep the original index values (reset index first, and then set it to 'index' again).</li> <li>We keep only the rows where amount &gt; 0 by using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>Series.gt</code></a> on <code>amount</code>.</li> <li>We create a temporary column to store <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> applied to the index. Each index value with a value count of <code>1</code> will only have a value associated with <code>transaction_amount</code>.</li> <li>We use this information for another filter: keep only rows that have <code>out['v'].eq(1)</code> <em>or</em> have a <code>transaction_type</code> that is not 'transaction_amount'. Afterwards, we can drop the temporary column again.</li> <li>Finally, we get rid of '_amount' in the column <code>transaction_type</code> and also replace 'transaction' with <code>NaN</code> values. Last cosmetic procedure is to get the columns in the requested order, to remove the index name, and add <code>id</code> as an extra column.</li> </ul>
python|pandas
1
3,381
71,163,720
How to return all column in groupby in pandas?
<p>Given data frame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Group</th> <th>count</th> <th>status</th> <th>Duration</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>2</td> <td>1</td> <td>2.4</td> </tr> <tr> <td>A</td> <td>4</td> <td>0</td> <td>7</td> </tr> <tr> <td>A</td> <td>2</td> <td>1</td> <td>4</td> </tr> <tr> <td>B</td> <td>3</td> <td>1</td> <td>6</td> </tr> <tr> <td>B</td> <td>2</td> <td>0</td> <td>7</td> </tr> </tbody> </table> </div> <pre><code>df.groupby(&quot;Group&quot;)[&quot;Duration&quot;].max() </code></pre> <p>Expected Result data frame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Group</th> <th>count</th> <th>status</th> <th>Duration</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>4</td> <td>0</td> <td>7</td> </tr> <tr> <td>B</td> <td>2</td> <td>0</td> <td>7</td> </tr> </tbody> </table> </div>
<p>You'll also need <code>as_index=False</code> to prevent the group columns from becoming the index in your output.</p> <pre><code>df.groupby(&quot;Group&quot;,as_index=False)[[&quot;count&quot;,&quot;status&quot;,&quot;Duration&quot;]].max() </code></pre>
python|pandas|dataframe|pandas-groupby
0
3,382
71,184,483
Changing the order of middle levels of a tensor (python)?
<p>Imagine I have the following numpy array:</p> <pre><code>array([[['Xa0', 'Ya0'], ['Xa1', 'Ya1'], ['Xa2', 'Ya2']], [['Xb0', 'Yb0'], ['Xb1', 'Yb1'], ['Xb2', 'Yb2']], [['Xc0', 'Yc0'], ['Xc1', 'Yc1'], ['Xc2', 'Yc2']]], dtype='&lt;U3') </code></pre> <p>How could I change the ordering to have the following:</p> <pre><code>array([[['Xa2', 'Ya2'], ['Xa1', 'Ya1'], ['Xa0', 'Ya0']], [['Xb2', 'Yb2'], ['Xb1', 'Yb1'], ['Xb0', 'Yb0']], [['Xc2', 'Yc2'], ['Xc1', 'Yc1'], ['Xc0', 'Yc0']]], dtype='&lt;U3') </code></pre> <p>?</p> <p>P.S.: The entries are floats, not strings...</p>
<p>Well, I've just found the answer: use <code>numpy.flip(tensor, axis=1)</code>, which reverses the order of the entries along axis 1 (the middle level here).</p>
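<p>A minimal sketch reproducing the arrays from the question:</p> <pre><code>import numpy as np

arr = np.array([[['Xa0', 'Ya0'], ['Xa1', 'Ya1'], ['Xa2', 'Ya2']],
                [['Xb0', 'Yb0'], ['Xb1', 'Yb1'], ['Xb2', 'Yb2']],
                [['Xc0', 'Yc0'], ['Xc1', 'Yc1'], ['Xc2', 'Yc2']]])

flipped = np.flip(arr, axis=1)  # reverses the middle axis
print(flipped)
</code></pre>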
python|pandas|numpy
1
3,383
71,422,940
Convert string date column with format of ordinal numeral day, abbreviated month name, and normal year to %Y-%m-%d
<p>Given the following <code>df</code> with string <code>date</code> column with ordinal numbers for day, abbreviated month name for month, and normal year:</p> <pre><code> date oil gas 0 1st Oct 2021 428 99 1 10th Sep 2021 401 101 2 2nd Oct 2020 189 74 3 10th Jan 2020 659 119 4 1st Nov 2019 691 130 5 30th Aug 2019 742 162 6 10th May 2019 805 183 7 24th Aug 2018 860 182 8 1st Sep 2017 759 183 9 10th Mar 2017 617 151 10 10th Feb 2017 591 149 11 22nd Apr 2016 343 88 12 10th Apr 2015 760 225 13 23rd Jan 2015 1317 316 </code></pre> <p>I'm wondering how could we parse <code>date</code> column to standard <code>%Y-%m-%d</code> format?</p> <p>My ideas so far: 1. strip ordinal indicators (<code>'st', 'nd', 'rd', 'th'</code>) from character day string while keeping the day number with <code>re</code>; 2. and convert abbreviated month name to numbers (seems not <code>%b</code>), 3. finally convert them to <code>%Y-%m-%d</code>.</p> <p>Code may be useful for the first step:</p> <pre><code>re.compile(r&quot;(?&lt;=\d)(st|nd|rd|th)&quot;).sub(&quot;&quot;, df['date']) </code></pre> <p><strong>References:</strong></p> <p><a href="https://metacpan.org/release/DROLSKY/DateTime-Locale-0.46/view/lib/DateTime/Locale/en_US.pm#Months" rel="nofollow noreferrer">https://metacpan.org/release/DROLSKY/DateTime-Locale-0.46/view/lib/DateTime/Locale/en_US.pm#Months</a></p>
<p><code>pd.to_datetime</code> already handles this case if you don't specify the <code>format</code> parameter:</p> <pre><code>&gt;&gt;&gt; pd.to_datetime(df['date']) 0 2021-10-01 1 2021-09-10 2 2020-10-02 3 2020-01-10 4 2019-11-01 5 2019-08-30 6 2019-05-10 7 2018-08-24 8 2017-09-01 9 2017-03-10 10 2017-02-10 11 2016-04-22 12 2015-04-10 13 2015-01-23 Name: date, dtype: datetime64[ns] </code></pre>
python-3.x|pandas|datetime|python-dateutil
1
3,384
52,028,341
How to predict missing values in python using linear regression 3 year worth of data
<p>Hey guys, so I have 3 years' worth of data from 2012~2014; however, the 2014 data has missing values (100 rows). I'm really not sure how to deal with it. This is my attempt: </p> <pre><code>X = red2012Mob.values y = red2014Mob.values X = X.reshape(-1,1) y = y.reshape(-1,1) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(X_train, y_train) y_pred = regressor.predict(X_test) </code></pre> <p>I'm not changing any of the 2014 data where values are missing; I just input it directly into the model.</p>
<p>There are two ways:</p> <ul> <li>Drop the instances with missing data (e.g. using <code>red2014Mob.dropna()</code>, or if it is time series, leave out complete blocks of missing data, e.g. start later in 2014).</li> <li>Impute the missing data. Here, however, you won't get a one-size-fits-all answer, as it really depends on your data and your problem. Since you seem to have time series data, the simplest strategy for "small" holes is to use linear or constant interpolation. If time dependency is not so important, the mean of the column may be a good strategy. For larger holes you may find a suitable model to fill the data. Sometimes a "naive" strategy like reusing the value from one seasonal period before (e.g. last Monday's data for the current Monday) may work, or you use a KNN Imputer (either check out <a href="https://github.com/scikit-learn/scikit-learn/pull/9212" rel="nofollow noreferrer">this</a> sklearn PR or the package discussed <a href="https://stackoverflow.com/a/45321581/6694255">here</a>). For the simple strategies, there is also a module in the upcoming <a href="http://scikit-learn.org/dev/modules/generated/sklearn.impute.SimpleImputer.html#sklearn.impute.SimpleImputer" rel="nofollow noreferrer">sklearn release</a>.</li> </ul> <p>In practice I usually combine methods. For instance, up to a point I will try the imputation strategies from the second bullet, but if the data is too bad it is usually better to have less "good" data than a lot of imputed data.</p>
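<p>A minimal sketch of the simple strategies above (the time-indexed series here is made up for illustration; the right choice still depends on your data):</p> <pre><code>import pandas as pd
import numpy as np

s = pd.Series([1.0, 2.0, np.nan, np.nan, 5.0],
              index=pd.date_range('2014-01-01', periods=5))

dropped = s.dropna()              # option 1: drop the rows with missing data
linear = s.interpolate()          # option 2a: linear interpolation for small holes
mean_filled = s.fillna(s.mean())  # option 2b: column mean, if time dependency matters less
print(linear)
</code></pre>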
pandas|numpy|scikit-learn
3
3,385
52,077,487
Python Multilevel Indexing using pandas read_csv method
<p>I want to read the following table as a pandas dataframe</p> <p><a href="https://i.stack.imgur.com/MYOfd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MYOfd.png" alt="enter image description here"></a></p> <p>Say the dataframe is df, the purpose is to query df['acct_id']['A']['0-3_mon] should give me 10</p> <p>I have done it for panel data, where everything is a column and then you create a multi-level-index for both cross-section and time-series.</p> <p>But over here, the source data itself has more than two levels of columns. How do I read this csv as a multi-level index? I am stuck here, any idea. </p> <p>Some of the similar work if you want to look at - <a href="https://lectures.quantecon.org/py/pandas_panel.html" rel="nofollow noreferrer">https://lectures.quantecon.org/py/pandas_panel.html</a></p> <p>Thanks a lot.</p>
<p>Create <code>DataFrame</code> with <code>MultiIndex</code>, because <a href="http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dsintro-deprecate-panel" rel="nofollow noreferrer"><code>deprecate panel</code></a>:</p> <pre><code>df = pd.read_csv(file, header=[0,1], index_col=[0]) </code></pre> <p>And then select by <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow noreferrer">slicers</a>:</p> <pre><code>idx = pd.IndexSlice print (df.loc[1, idx['A', '0-3_mon']]) </code></pre> <p><strong>Sample</strong>: with no Multindex names:</p> <pre><code>import pandas as pd temp=u"""A;A;B;B 0-3_mon;3-6_mon;0-3_mon;3-6_mon 1;10;12;14;18 2;11;15;17;19 3;13;16;21;20""" #after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv' df = pd.read_csv(pd.compat.StringIO(temp), sep=";", header=[0,1]) print (df) A B 0-3_mon 3-6_mon 0-3_mon 3-6_mon 1 10 12 14 18 2 11 15 17 19 3 13 16 21 20 print (df.columns) MultiIndex(levels=[['A', 'B'], ['0-3_mon', '3-6_mon']], labels=[[0, 0, 1, 1], [0, 1, 0, 1]]) idx = pd.IndexSlice print (df.loc[1, idx['A', '0-3_mon']]) 10 </code></pre> <p><strong>Sample</strong> with specified names of MultiIndex:</p> <pre><code>import pandas as pd temp=u"""acct_id;A;A;B;B level;0-3_mon;3-6_mon;0-3_mon;3-6_mon 1;10;12;14;18 2;11;15;17;19 3;13;16;21;20""" #after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv' df = pd.read_csv(pd.compat.StringIO(temp), sep=";", index_col=[0], header=[0,1]) print (df) acct_id A B level 0-3_mon 3-6_mon 0-3_mon 3-6_mon 1 10 12 14 18 2 11 15 17 19 3 13 16 21 20 print (df.columns) MultiIndex(levels=[['A', 'B'], ['0-3_mon', '3-6_mon']], labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=['acct_id', 'level']) idx = pd.IndexSlice print (df.loc[1, idx['A', '0-3_mon']]) 10 </code></pre>
python|pandas|csv|dataframe|data-science
3
3,386
60,673,109
Pandas - Drop row from list of values
<p>I have a simple dataframe:</p> <pre><code>df = pd.DataFrame({'ID': [100, 101, 134, 139, 192], 'Name': ['Tom', 'Dave', 'Steve', 'Bob', 'Jim']}) </code></pre> <p>and a list of values:</p> <pre><code>id_list = [100, 139] </code></pre> <p>I want to drop the rows from my dataframe if the 'ID' column == one of the values in my id_list.</p> <p>The desired output is...</p> <pre><code> ID Name 1 101 Dave 2 134 Steve 4 192 Jim </code></pre>
<p>You can use <code>.isin()</code> for the <code>ID</code> series preceded with <code>~</code>. Essentialy this works like <em>"Not in"</em>:</p> <pre><code>output_df = df[~df['ID'].isin(id_list)] </code></pre> <p>Output:</p> <pre><code> ID Name 1 101 Dave 2 134 Steve 4 192 Jim </code></pre>
python|pandas
3
3,387
60,374,642
fill data in a dataframe when a certain condition is satisfied
<pre><code>id date idx comments 1 01-05-2018 0 null 2 02-05-2018 0 null 3 03-05-2018 Y null 4 04-05-2018 Y null </code></pre> <p>when <strong>idx</strong> = 0, <strong>comments</strong> column needs to be updated as <strong>'flow reported as null for id (mention the respective id) and date (mention the respective date)'</strong></p>
<p>Using <code>list comprehension</code> with <code>zip</code> and <code>f-strings</code>:</p> <pre><code>df['comments'] = [f'flow reported as null for id {i} and date {d}' if idx == '0' else 'NULL' for i, d, idx in zip(df['id'], df['date'], df['idx'])] id date idx comments 0 1 01-05-2018 0 flow reported as null for id 1 and date 01-05-... 1 2 02-05-2018 0 flow reported as null for id 2 and date 02-05-... 2 3 03-05-2018 Y NULL 3 4 04-05-2018 Y NULL </code></pre> <p>Or using <code>apply</code>:</p> <pre><code>df['comments'] = ( df.apply(lambda x: f'flow reported as null for id {x["id"]} and date {x["date"]}' if x['idx'] == '0' else 'NULL', axis=1) ) </code></pre>
pandas
0
3,388
72,818,254
Collapse specific multiindex columns pandas dataframe
<p>I'm importing an Excel file which has the following structure:</p> <pre><code>| | Cat 1 | | | | Cat 2 | | | Total | |code| a | b | c | d | a | b | c | | |data| data |data|data|data| data |data|data| data | </code></pre> <p>I want to keep the information in the double header row so I use:</p> <pre><code>df = pd.read_excel(file, sheet, header=[0,1] </code></pre> <p>But this gives me the following MultiIndex: <code>print(df.columns)</code>:</p> <pre><code>MultiIndex([('Unnamed: 0_level_0', 'code'), ( 'Cat 1', 'a'), ( 'Cat 1', 'b'), etc. ( 'Cat 2', 'a'), ( 'Cat 2', 'b'), etc. ( 'Total','Unnamed: 8_level_1')],) </code></pre> <p>I'm looking for a way to collapse the <code>unnamed: x_level_y</code> columns so I can access them simply with <code>df['code']</code> or <code>df['Total']</code>. I've tried <code>df.rename(columns={'Unnamed: 0_level_0: ''})</code>, but this isn't generalisable if I don't know which levels are missing, and doesn't allow me to access the column with just the single layered name. The other answers I've found are about removing any columns which contain <code>Unnamed</code> in the column name, but I want to keep the columns and the data they contain.</p>
<p>You can re-create the MultiIndex and put the existing name in level 0 for all columns where any level contains <code>Unnamed</code>:</p> <pre><code>df.columns = pd.MultiIndex.from_tuples( [(c[1],'') if 'Unnamed' in c[0] else (c[0],'') if 'Unnamed' in c[1] else c for c in df.columns.to_list()]) </code></pre>
python|pandas|dataframe|multi-index
1
3,389
72,500,516
Why doesn't this Python pandas code work on my dataset?
<p>I am a newbie in data science, and I encountered a problem with pandas in Python. Basically, I want to substitute the values lower than 0 in a column with 0, and I wonder why this does not work:</p> <p>Image of my dataset:<br /> <a href="https://i.stack.imgur.com/7F13s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7F13s.png" alt="dataset" /></a></p> <p>Original:</p> <pre><code>submit[submit.score&lt;0].score = 0 </code></pre> <p>Fixed:</p> <pre><code>submit.loc[submit.score&lt;0, 'score'] = 0 </code></pre> <p>I have already solved this problem by using <code>loc</code>, but it really confuses me. Any explanation would be great.</p>
<p>Your first attempt is equivalent to <code>submit[submit['score'] &lt; 0]['score'] = 0</code>. Whenever you see multiple <code>[</code> and <code>]</code> pairs in your pandas code, it might be a bad sign. In this case, with <code>submit[submit['score'] &lt; 0]</code> you're creating a copy of your dataframe, so you're basically assigning <code>0</code> to the <code>score</code> column <em>on that copy</em>, which isn't going to do anything.</p> <p>By using <code>loc</code>, you eliminate the copy and assign directly to the dataframe.</p>
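<p>A quick sketch demonstrating the difference (assuming a small DataFrame with a <code>score</code> column, as in the question):</p> <pre><code>import pandas as pd

submit = pd.DataFrame({'score': [-1.0, 0.5, -2.0]})

# chained indexing: assigns to a temporary copy, so the original is unchanged
submit[submit.score &lt; 0]['score'] = 0   # emits SettingWithCopyWarning
print(submit)  # still contains the negative scores

# .loc: assigns directly to the original DataFrame
submit.loc[submit.score &lt; 0, 'score'] = 0
print(submit)  # negatives replaced with 0
</code></pre>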
python|pandas
1
3,390
72,584,994
Convert pandas df to orc bytes
<p>Following is generated by this line of code:</p> <pre><code>table_bytes = df.to_parquet() </code></pre> <pre><code>table_bytes: b'PAR1\x15\x04\x15@\x15DL\x15\x08\x15\x04\x12\x00\x00 |\x03\x00\x00\x00Tom\x04\x00\x00\x00nick\x05\x00\x00\x00krish\x04\x00\x00\x00jack\x15\x00\x15\x14\x15\x18,\x15\x08\x15\x04\x15\x06\x15\x06\x1c6\x00(\x04nick\x18\x03Tom\x00\x00\x00\n$\x02\x00\x00\x00\x08\x01\x02\x03\xe4\x00&amp;\xc0\x01\x1c\x15\x0c\x195\x04\x00\x06\x19\x18\rcustomer_name\x15\x02\x16\x08\x16\xb0\x01\x16\xb8\x01&amp;h&amp;\x08\x1c6\x00(\x04nick\x18\x03Tom\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00\x15\x04\x15@\x15@L\x15\x08\x15\x04\x12\x00\x00 \x0c\x84\x90\x01\x00\x01\x01\x08\xc2\x15\x02\x01\x07@\x00\x97\xbb\x01\x00\x00\x00\x00\x00\x13\xd9\x02\x00\x00\x00\x00\x00\x15\x00\x15\x14\x15\x18,\x15\x08\x15\x04\x15\x06\x15\x06\x1c\x18\x08\x13\xd9\x02\x00\x00\x00\x00\x00\x18\x08\x84\x90\x01\x00\x00\x00\x00\x00\x16\x00(\x08\x13\xd9\x02\x00\x00\x00\x00\x00\x18\x08\x84\x90\x01\x00\x00\x00\x00\x00\x00\x00\x00\n$\x02\x00\x00\x00\x08\x01\x02\x03\xe4\x00&amp;\xc2\x04\x1c\x15\x04\x195\x04\x00\x06\x19\x18\x0bcustomer_id\x15\x02\x16\x08\x16\xea\x01\x16\xee\x01&amp;\xb0\x03&amp;\xd4\x02\x1c\x18\x08\x13\xd9\x02\x00\x00\x00\x00\x00\x18\x08\x84\x90\x01\x00\x00\x00\x00\x00\x16\x00(\x08\x13\xd9\x02\x00\x00\x00\x00\x00\x18\x08\x84\x90\x01\x00\x00\x00\x00\x00\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00\x15\x04\x15@\x15&lt;L\x15\x08\x15\x04\x12\x00\x00 \x08\xe8\x03\x00\x05\x01\x04\xc0\x12\x05\x07@\x00\r\x00\x00\x00\x00\x00\x00\x00u&quot;\x00\x00\x00\x00\x00\x00\x15\x00\x15\x14\x15\x18,\x15\x08\x15\x04\x15\x06\x15\x06\x1c\x18\x08u&quot;\x00\x00\x00\x00\x00\x00\x18\x08\r\x00\x00\x00\x00\x00\x00\x00\x16\x00(\x08u&quot;\x00\x00\x00\x00\x00\x00\x18\x08\r\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\n$\x02\x00\x00\x00\x08\x01\x02\x03\xe4\x00&amp;\xfa\x07\x1c\x15\x04\x195\x04\x00\x06\x19\x18\x14purchase_amt_last_30\x15\x02\x16\x08\x16\xea\x01\x16\xea\x01&amp;\xe8\x06&amp;\x90\x06\x1c\x18\x08u&quot;\x00\x00\x00\x00\x00\x00\x18\x08\r\x00\x00\x00\x00\x00\x00\x00\x16\x00(\x08u&quot;\x00\x00\x00\x00\x00\x00\x18\x08\r\x00\x00\x00\x00\x00\x00\x00\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00\x15\x04\x156\x15:L\x15\x02\x15\x04\x12\x00\x00\x1bh\x17\x00\x00\x002022-06-11 19:23:16.477\x15\x00\x15\x12\x15\x16,\x15\x08\x15\x04\x15\x06\x15\x06\x1c6\x00(\x172022-06-11 19:23:16.477\x18\x172022-06-11 19:23:16.477\x00\x00\x00\t \x02\x00\x00\x00\x08\x01\x01\x08\x00&amp;\xd4\x0b\x1c\x15\x0c\x195\x04\x00\x06\x19\x18\ractivity_time\x15\x02\x16\x08\x16\xf2\x01\x16\xfa\x01&amp;\xb0\n&amp;\xda\t\x1c6\x00(\x172022-06-11 19:23:16.477\x18\x172022-06-11 19:23:16.477\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00\x15\x04\x15\n\x15\x0eL\x15\x02\x15\x04\x12\x00\x00\x05\x10\x01\x00\x00\x006\x15\x00\x15\x12\x15\x16,\x15\x08\x15\x04\x15\x06\x15\x06\x1c6\x00(\x016\x18\x016\x00\x00\x00\t 
\x02\x00\x00\x00\x08\x01\x01\x08\x00&amp;\xb0\x0e\x1c\x15\x0c\x195\x04\x00\x06\x19\x18\x05month\x15\x02\x16\x08\x16n\x16v&amp;\xe4\r&amp;\xba\r\x1c6\x00(\x016\x18\x016\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00\x15\x02\x19l5\x00\x18\x06schema\x15\n\x00\x15\x0c%\x02\x18\rcustomer_name%\x00L\x1c\x00\x00\x00\x15\x04%\x02\x18\x0bcustomer_id\x00\x15\x04%\x02\x18\x14purchase_amt_last_30\x00\x15\x0c%\x02\x18\ractivity_time%\x00L\x1c\x00\x00\x00\x15\x0c%\x02\x18\x05month%\x00L\x1c\x00\x00\x00\x16\x08\x19\x1c\x19\\&amp;\xc0\x01\x1c\x15\x0c\x195\x04\x00\x06\x19\x18\rcustomer_name\x15\x02\x16\x08\x16\xb0\x01\x16\xb8\x01&amp;h&amp;\x08\x1c6\x00(\x04nick\x18\x03Tom\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00&amp;\xc2\x04\x1c\x15\x04\x195\x04\x00\x06\x19\x18\x0bcustomer_id\x15\x02\x16\x08\x16\xea\x01\x16\xee\x01&amp;\xb0\x03&amp;\xd4\x02\x1c\x18\x08\x13\xd9\x02\x00\x00\x00\x00\x00\x18\x08\x84\x90\x01\x00\x00\x00\x00\x00\x16\x00(\x08\x13\xd9\x02\x00\x00\x00\x00\x00\x18\x08\x84\x90\x01\x00\x00\x00\x00\x00\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00&amp;\xfa\x07\x1c\x15\x04\x195\x04\x00\x06\x19\x18\x14purchase_amt_last_30\x15\x02\x16\x08\x16\xea\x01\x16\xea\x01&amp;\xe8\x06&amp;\x90\x06\x1c\x18\x08u&quot;\x00\x00\x00\x00\x00\x00\x18\x08\r\x00\x00\x00\x00\x00\x00\x00\x16\x00(\x08u&quot;\x00\x00\x00\x00\x00\x00\x18\x08\r\x00\x00\x00\x00\x00\x00\x00\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00&amp;\xd4\x0b\x1c\x15\x0c\x195\x04\x00\x06\x19\x18\ractivity_time\x15\x02\x16\x08\x16\xf2\x01\x16\xfa\x01&amp;\xb0\n&amp;\xda\t\x1c6\x00(\x172022-06-11 19:23:16.477\x18\x172022-06-11 19:23:16.477\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00&amp;\xb0\x0e\x1c\x15\x0c\x195\x04\x00\x06\x19\x18\x05month\x15\x02\x16\x08\x16n\x16v&amp;\xe4\r&amp;\xba\r\x1c6\x00(\x016\x18\x016\x00\x19,\x15\x04\x15\x04\x15\x02\x00\x15\x00\x15\x04\x15\x02\x00\x00\x00\x16\xe4\x07\x16\x08&amp;\x08\x16\x80\x08\x14\x00\x00\x19,\x18\x06pandas\x18\xac\x07{&quot;index_columns&quot;: [{&quot;kind&quot;: &quot;range&quot;, &quot;name&quot;: null, &quot;start&quot;: 0, &quot;stop&quot;: 4, &quot;step&quot;: 1}], &quot;column_indexes&quot;: [{&quot;name&quot;: null, &quot;field_name&quot;: null, &quot;pandas_type&quot;: &quot;unicode&quot;, &quot;numpy_type&quot;: &quot;object&quot;, &quot;metadata&quot;: {&quot;encoding&quot;: &quot;UTF-8&quot;}}], &quot;columns&quot;: [{&quot;name&quot;: &quot;customer_name&quot;, &quot;field_name&quot;: &quot;customer_name&quot;, &quot;pandas_type&quot;: &quot;unicode&quot;, &quot;numpy_type&quot;: &quot;object&quot;, &quot;metadata&quot;: null}, {&quot;name&quot;: &quot;customer_id&quot;, &quot;field_name&quot;: &quot;customer_id&quot;, &quot;pandas_type&quot;: &quot;int64&quot;, &quot;numpy_type&quot;: &quot;int64&quot;, &quot;metadata&quot;: null}, {&quot;name&quot;: &quot;purchase_amt_last_30&quot;, &quot;field_name&quot;: &quot;purchase_amt_last_30&quot;, &quot;pandas_type&quot;: &quot;int64&quot;, &quot;numpy_type&quot;: &quot;int64&quot;, &quot;metadata&quot;: null}, {&quot;name&quot;: &quot;activity_time&quot;, &quot;field_name&quot;: &quot;activity_time&quot;, &quot;pandas_type&quot;: &quot;unicode&quot;, &quot;numpy_type&quot;: &quot;object&quot;, &quot;metadata&quot;: null}, {&quot;name&quot;: &quot;month&quot;, &quot;field_name&quot;: &quot;month&quot;, &quot;pandas_type&quot;: &quot;unicode&quot;, &quot;numpy_type&quot;: &quot;object&quot;, &quot;metadata&quot;: 
null}], &quot;creator&quot;: {&quot;library&quot;: &quot;pyarrow&quot;, &quot;version&quot;: &quot;7.0.0&quot;}, &quot;pandas_version&quot;: &quot;1.3.3&quot;}\x00\x18\x0cARROW:schema\x18\x8c\x0e/////0AFAAAQAAAAAAAKAA4ABgAFAAgACgAAAAABBAAQAAAAAAAKAAwAAAAEAAgACgAAAOQDAAAEAAAAAQAAAAwAAAAIAAwABAAIAAgAAAC8AwAABAAAAKwDAAB7ImluZGV4X2NvbHVtbnMiOiBbeyJraW5kIjogInJhbmdlIiwgIm5hbWUiOiBudWxsLCAic3RhcnQiOiAwLCAic3RvcCI6IDQsICJzdGVwIjogMX1dLCAiY29sdW1uX2luZGV4ZXMiOiBbeyJuYW1lIjogbnVsbCwgImZpZWxkX25hbWUiOiBudWxsLCAicGFuZGFzX3R5cGUiOiAidW5pY29kZSIsICJudW1weV90eXBlIjogIm9iamVjdCIsICJtZXRhZGF0YSI6IHsiZW5jb2RpbmciOiAiVVRGLTgifX1dLCAiY29sdW1ucyI6IFt7Im5hbWUiOiAiY3VzdG9tZXJfbmFtZSIsICJmaWVsZF9uYW1lIjogImN1c3RvbWVyX25hbWUiLCAicGFuZGFzX3R5cGUiOiAidW5pY29kZSIsICJudW1weV90eXBlIjogIm9iamVjdCIsICJtZXRhZGF0YSI6IG51bGx9LCB7Im5hbWUiOiAiY3VzdG9tZXJfaWQiLCAiZmllbGRfbmFtZSI6ICJjdXN0b21lcl9pZCIsICJwYW5kYXNfdHlwZSI6ICJpbnQ2NCIsICJudW1weV90eXBlIjogImludDY0IiwgIm1ldGFkYXRhIjogbnVsbH0sIHsibmFtZSI6ICJwdXJjaGFzZV9hbXRfbGFzdF8zMCIsICJmaWVsZF9uYW1lIjogInB1cmNoYXNlX2FtdF9sYXN0XzMwIiwgInBhbmRhc190eXBlIjogImludDY0IiwgIm51bXB5X3R5cGUiOiAiaW50NjQiLCAibWV0YWRhdGEiOiBudWxsfSwgeyJuYW1lIjogImFjdGl2aXR5X3RpbWUiLCAiZmllbGRfbmFtZSI6ICJhY3Rpdml0eV90aW1lIiwgInBhbmRhc190eXBlIjogInVuaWNvZGUiLCAibnVtcHlfdHlwZSI6ICJvYmplY3QiLCAibWV0YWRhdGEiOiBudWxsfSwgeyJuYW1lIjogIm1vbnRoIiwgImZpZWxkX25hbWUiOiAibW9udGgiLCAicGFuZGFzX3R5cGUiOiAidW5pY29kZSIsICJudW1weV90eXBlIjogIm9iamVjdCIsICJtZXRhZGF0YSI6IG51bGx9XSwgImNyZWF0b3IiOiB7ImxpYnJhcnkiOiAicHlhcnJvdyIsICJ2ZXJzaW9uIjogIjcuMC4wIn0sICJwYW5kYXNfdmVyc2lvbiI6ICIxLjMuMyJ9AAAAAAYAAABwYW5kYXMAAAUAAAD4AAAAqAAAAGQAAAAwAAAABAAAACz///8AAAEFEAAAABgAAAAEAAAAAAAAAAUAAABtb250aAAAABT///9U////AAABBRAAAAAgAAAABAAAAAAAAAANAAAAYWN0aXZpdHlfdGltZQAAAET///+E////AAABAhAAAAAoAAAABAAAAAAAAAAUAAAAcHVyY2hhc2VfYW10X2xhc3RfMzAAAAAAzP///wAAAAFAAAAAxP///wAAAQIQAAAAJAAAAAQAAAAAAAAACwAAAGN1c3RvbWVyX2lkAAgADAAIAAcACAAAAAAAAAFAAAAAEAAUAAgABgAHAAwAAAAQABAAAAAAAAEFEAAAACQAAAAEAAAAAAAAAA0AAABjdXN0b21lcl9uYW1lAAAABAAEAAQAAAA=\x00\x18\x1fparquet-cpp-arrow version 7.0.0\x19\\\x1c\x00\x00\x1c\x00\x00\x1c\x00\x00\x1c\x00\x00\x1c\x00\x00\x00s\r\x00\x00PAR1' </code></pre> <p>I wish to generate the similar thing for a orc file. I am able to generate a ORC file by following code:</p> <pre><code>table = pa.Table.from_pandas(df, preserve_index=False) table_bytes = orc.write_table(table, orc_file) </code></pre> <p>However, it writes an orc file. I want something that to_parquet provides (which is in the form of bytes.</p> <p>Any idea on how can I achieve that?</p> <p>Thank you so much in advance. Hope what I mentioned makes sense. Otherwise, please let me know and I will provide further context.</p>
<p>I got it like this:</p> <pre><code>import pandas as pd import pyarrow as pa from pyarrow import orc df = pd.DataFrame({&quot;col1&quot;: [1, 2, 3]}) print(df.to_parquet()) # Write the table to a file, then read the bytes back from it. orc.write_table(pa.table({&quot;col1&quot;: [1, 2, 3]}), &quot;test.orc&quot;) with open('test.orc', &quot;rb&quot;) as file: bytes_read = file.read() print(bytes_read) </code></pre>
b'ORC\n\x0b\n\x03\x00\x00\x00\x12\x04\x08\x03P\x00\n\x15\n\x05\x00\x00\x00\x00\x00\x12\x0c\x08\x03\x12\x06\x08\x02\x10\x06\x18\x0cP\x00\xff\xe0\xff\xe0F\x02$`\n\x06\x08\x06\x10\x00\x18\r\n\x06\x08\x06\x10\x01\x18\x17\n\x06\x08\x00\x10\x00\x18\x02\n\x06\x08\x00\x10\x01\x18\x02\n\x06\x08\x01\x10\x01\x18\x04\x12\x04\x08\x00\x10\x00\x12\x04\x08\x02\x10\x00\x1a\x03GMT\n\x14\n\x04\x08\x03P\x00\n\x0c\x08\x03\x12\x06\x08\x02\x10\x06\x18\x0cP\x00\x08\x03\x10e\x1a\n\x08\x03\x10$\x18\x08 9(\x03&quot;\x11\x08\x0c\x12\x01\x01\x1a\x04col1 \x00(\x000\x00&quot;\x08\x08\x04 \x00(\x000\x000\x03:\x04\x08\x03P\x00:\x0c\x08\x03\x12\x06\x08\x02\x10\x06\x18\x0cP\x00@\x90NH\x01b\x051.7.2\x08O\x10\x00\x18\x80\x80\x04&quot;\x02\x00\x0c(\x160\x06\x82\xf4\x03\x03ORC\x17' </code></pre> <p>On Windows 10 it gives an error ModuleNotFoundError: No module named 'pyarrow._orc'. In google colab worked without errors.</p>
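<p>If you want to avoid the temporary file entirely, pyarrow also has an in-memory sink. This is only a sketch, under the assumption that <code>orc.write_table</code> accepts a pyarrow <code>NativeFile</code> such as <code>BufferOutputStream</code>:</p> <pre><code>import pyarrow as pa
from pyarrow import orc

table = pa.table({&quot;col1&quot;: [1, 2, 3]})

# Write into an in-memory pyarrow buffer instead of a file on disk.
sink = pa.BufferOutputStream()
orc.write_table(table, sink)

# getvalue() returns a pyarrow.Buffer; to_pybytes() converts it to bytes.
table_bytes = sink.getvalue().to_pybytes()
print(table_bytes[:3])  # ORC files start with the magic bytes b'ORC'
</code></pre>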
python|python-3.x|pandas|orc
1
3,391
72,657,714
How can I flag whether column values increased or decreased in the last n months in a time series data frame in Python?
<p>I am working with customer data and need to flag customers whose salary increased or decreased over the last 6 and 12 months. How can I do this?</p>
<p>Not having a ton of information, let's assume your timeseries data is set up so each row consists of monthly data, i.e. row 1 is January data, row 2 is February, etc.</p> <p>Here is an implementation <a href="https://www.statology.org/pandas-difference-between-rows/" rel="nofollow noreferrer">https://www.statology.org/pandas-difference-between-rows/</a></p> <p>Then you can check each row in the <code>delta</code> column: if it is positive, the salary increased; otherwise it decreased.</p> <p>A plain vectorized comparison is simpler than <code>apply</code> with a lambda here and returns a boolean flag directly:</p> <p><code>df['flag'] = df['delta'] &gt; 0</code></p>
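<p>To make that concrete, here is a small runnable sketch (the column names and windows are assumptions for illustration; it presumes one row per consecutive month per customer):</p> <pre><code>import pandas as pd

# Hypothetical monthly salary data for a single customer.
df = pd.DataFrame({'salary': [3000, 3000, 3100, 3100, 3200, 3150, 3300,
                              3300, 3350, 3400, 3400, 3500, 3550]})

# Change versus 6 and 12 months ago.
df['delta_6m'] = df['salary'].diff(periods=6)
df['delta_12m'] = df['salary'].diff(periods=12)

# Boolean flags: True where salary went up / down over the window.
df['increased_6m'] = df['delta_6m'] &gt; 0
df['decreased_6m'] = df['delta_6m'] &lt; 0
</code></pre>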
pandas|time-series
0
3,392
59,628,607
Sort values by columns and not rows
<p>I hope you can help me with this stupid problem.</p> <p>I need to sort my columns by highest values. My dataframe consists of 31 columns, with the first 7 looking like this. </p> <p><a href="https://i.stack.imgur.com/VbhVe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VbhVe.png" alt="enter image description here"></a></p> <p>It needs to look like this </p> <p><a href="https://i.stack.imgur.com/tkUJp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tkUJp.png" alt="enter image description here"></a></p> <p>I have tried this code </p> <pre><code>sorted_df = df_1.sort_values(df_1.last_valid_index(), axis=1) </code></pre> <p>But it won't work. Basically, out of all the columns, I need to find the 6 columns with the highest values. Can you help me out? </p> <p>Thank you!</p>
<p>You can transpose, sort, and transpose back:</p> <pre><code>df = pd.DataFrame( { "name": ["messi"], "height": [170], "weight": [72], "attack_cross": [88] }) df.T[df.T.index != 'name'].sort_values(0, ascending=False).T </code></pre> <p>which gives</p> <pre><code> height attack_cross weight 0 170 88 72 </code></pre> <p>Add the player name back and you are good.</p> <p>If you only want the top 6, you can add in <code>head(6)</code>:</p> <pre><code>df.T[df.T.index != 'name'].sort_values(0, ascending=False).head(6).T </code></pre>
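<p>If you only need the six largest values rather than a fully sorted frame, <code>nlargest</code> on the row is another option (a sketch on the same one-row frame):</p> <pre><code>top6 = df.drop(columns='name').iloc[0].nlargest(6)
</code></pre>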
python|pandas|sorting
2
3,393
54,771,985
Extract non-digit characters before certain character in pandas dataframe
<p>I have a pandas dataframe that looks like this:</p> <pre><code>&gt; row extract_column &gt; 0 412952266-desiredtext1»randtext-irrelevant &gt; 1 512952766-desiredtext1»randtext-irrelevant &gt; 2 212952766-desiredtext1»randtext-irrelevant &gt; 3 112953066-desiredtext1»randtext-irrelevant &gt; 4 712953066-desiredtext1»randtext-irrelevant &gt; 5 612953366-desiredtext1»randtext-irrelevant &gt; 6 912953366-desiredtext1»randtext-irrelevant &gt; 7 412954866-desiredtext1»randtext-irrelevant &gt; 8 312954966-desiredtext1»randtext-irrelevant &gt; 9 212954966-desiredtext1»randtext-irrelevant &gt; 10 612955866-desiredtext1»randtext-irrelevant &gt; 11 912256266-desiredtext1»randtext-irrelevant &gt; 12 812256366-desiredtext1»randtext-irrelevant &gt; 13 512256566-desiredtext1»randtext-irrelevant &gt; 14 412256566-desiredtext1»randtext-irrelevant &gt; 15 312256566-desiredtext1»randtext-irrelevant &gt; 16 212256566-desiredtext1»randtext-irrelevant &gt; 17 612256566-desiredtext1»randtext-irrelevant &gt; 18 812956666-desiredtext2»randtext-irrelevant &gt; 19 912957166-desiredtext2»randtext-irrelevant &gt; 20 012957866-desiredtext2»randtext-irrelevant &gt; 21 12952966-desiredtext2»randtext-irrelevant &gt; 22 2012953066-desiredtext2»randtext-irrelevant &gt; 23 012953066-desiredtext2»randtext-irrelevant &gt; 24 312953066-desiredtext2»randtext-irrelevant &gt; 25 112254166-desiredtext2»randtext-irrelevant &gt; 26 712254166-desiredtext2»randtext-irrelevant </code></pre> <p>I want to get the desiredtext1, desiredtext2 fields from extract_column. The desired data is always followed by the » symbol and preceded by 9 digits followed by a dash.</p>
<p>Try with <code>extract</code>. The pattern below captures everything between the dash and the <code>»</code> marker (note that the escapes in <code>[^\.]</code> and <code>\»</code> were unnecessary):</p> <pre><code>df.extract_column.str.extract(r'-([^»]*)»', expand=False) </code></pre>
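<p>A quick runnable check on a couple of the sample rows from the question:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'extract_column': [
    '412952266-desiredtext1»randtext-irrelevant',
    '812956666-desiredtext2»randtext-irrelevant',
]})

out = df.extract_column.str.extract(r'-([^»]*)»', expand=False)
print(out.tolist())  # ['desiredtext1', 'desiredtext2']
</code></pre>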
python|regex|string|pandas
2
3,394
55,019,885
Resume Training tf.keras Tensorboard
<p>I encountered some problems when I continued training my model and visualized the progress on TensorBoard.</p> <p><a href="https://i.stack.imgur.com/GsdB7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GsdB7.png" alt="Tensorboard Training Visualization"></a></p> <p>My question is how do I resume training from the same step without specifying any epoch manually? Ideally, simply loading the saved model would somehow read the <code>global_step</code> from the saved optimizer and continue training from there.</p> <p>I have provided some code below to reproduce the issue.</p> <pre><code>import tensorflow as tf from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.models import load_model mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, callbacks=[TensorBoard()]) model.save('./final_model.h5', include_optimizer=True) del model model = load_model('./final_model.h5') model.fit(x_train, y_train, epochs=10, callbacks=[TensorBoard()]) </code></pre> <p>You can run TensorBoard with the command:</p> <pre><code>tensorboard --logdir ./logs </code></pre>
<p>You can set the parameter <code>initial_epoch</code> in the <code>model.fit()</code> call to the number of the epoch you want your training to start from. Take into account that the model trains until the epoch of index <code>epochs</code> is reached (it is not a number of additional iterations given by <code>epochs</code>). In your example, if you want to train for 10 more epochs, it should be:</p> <pre><code>model.fit(x_train, y_train, initial_epoch=9, epochs=19, callbacks=[TensorBoard()]) </code></pre> <p>This will allow you to visualise your plots on TensorBoard correctly. More extensive information about these parameters can be found in the <a href="https://keras.io/models/model/#fit" rel="noreferrer">docs</a>.</p>
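<p>If you want to avoid hard-coding the epoch at all, one workaround is to persist the last finished epoch yourself. This is only a sketch; the <code>epoch.txt</code> side file is an assumption of the example, not something Keras provides:</p> <pre><code>import os

initial_epoch = 0
if os.path.exists('epoch.txt'):
    with open('epoch.txt') as f:
        initial_epoch = int(f.read())

epochs = initial_epoch + 10  # train for 10 more epochs
model.fit(x_train, y_train,
          initial_epoch=initial_epoch,
          epochs=epochs,
          callbacks=[TensorBoard()])

with open('epoch.txt', 'w') as f:
    f.write(str(epochs))
</code></pre>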
python|tensorflow|machine-learning|keras|tensorboard
8
3,395
49,540,365
Keras composed neural network model from two neural network models
<p>I am using Keras with TensorFlow to implement my model (M). Let's suppose that I have the following input features F = {x, y, a1, a2, a3, ..., an}. I want to build a deep model (M1) using only x and y. Then, the output of (M1) together with all the remaining features (a1, a2, ..., an) will be the input of another model (M2). </p> <p>x,y --&gt; M1 --&gt; z, a1, a2, ..., an --&gt; M2 --&gt; final output </p> <p>How can I build such a model in Keras? </p>
<p>Use <a href="https://keras.io/getting-started/functional-api-guide/" rel="nofollow noreferrer">Keras functional api</a>.</p> <p>It's not entirely clear to me whether you mean to have a second model that is only trained on the output of first model, or something that could make both models trained jointly.</p> <p>If you mean for M1 and M2 to be trained separately, then assuming <code>x, y, a</code> are your input <code>ndarray</code>s you can do something like that:</p> <pre><code>input_x = Input(shape=...) input_y = Input(shape=...) ... M1 = ... # your first model m1_output = M1.output # assuming M1 outputs only one tensor m2_input = Input(batch_shape=m1_output.shape) # the part that you can feed outputs from M1 m2_output = ... M2 = Model(inputs=[m2_input,a], outputs=[m2_output]) </code></pre> <p>You could also train both parts simultaneously, then <a href="https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models" rel="nofollow noreferrer">it's also covered in Functional API's documentation</a>. You'd need to define M2 like this:</p> <pre><code>M2 = Model(inputs=M1.inputs + [a], outputs=M1.outputs + [m2_output]) </code></pre> <p>Of course you'd have to work out the losses accordingly.</p>
python|tensorflow|neural-network|keras
1
3,396
49,701,918
tf.layers.batch_normalization parameters
<p>I am not sure if it is only me who thinks that the TensorFlow documentation is a bit weak.</p> <p>I was planning to use the tf.nn.batch_normalization function to implement batch normalization but later discovered the tf.layers.batch_normalization function, which seemingly should be the one to use for its simplicity. But the documentation is really poor, if I may say so.</p> <p>I am trying to understand how to <em>correctly</em> use it, but with the information provided on the web page it is really not easy. I am hoping that maybe some other people have experience and can help me (and possibly many others) to understand it.. </p> <p>Let me share the interface first:</p> <pre><code>tf.layers.batch_normalization( inputs, axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, training=False, trainable=True, name=None, reuse=None, renorm=False, renorm_clipping=None, renorm_momentum=0.99, fused=None, virtual_batch_size=None, adjustment=None ) </code></pre> <p>Q1) Beta values are initialized to zero and gamma values are initialized to 1, but it does not say why. When batch normalization is used, I understand that the ordinary bias parameter of the neural network becomes obsolete, and the beta parameter in the batch normalization step does essentially the same thing. From that angle, setting beta to zero is understandable. But why are gamma values initialized to 1? Is that really the most efficient way?</p> <p>Q2) I see a momentum parameter there as well. The documentation just says "Momentum for the moving average.". I assume that this parameter is used when calculating the "mean" value for a certain mini batch in the corresponding hidden layer. In other words, the mean value used in batch normalization is NOT the mean of the current mini batch; it is rather primarily the mean of the last 100 mini batches (since momentum = 0.99). But it is very unclear how this parameter affects the execution in testing, or if I am just validating my model on the dev set by calculating cost and accuracy. My <em>assumption</em> is that any time I deal with test and dev sets, I set the parameter "training" to False, so that the momentum parameter becomes obsolete for that particular execution and the "mean" and "variance" values that were calculated during training are used instead of calculating new mean and variance values. That is how I think it should work, but I do not see anything in the documentation confirming it. Could anyone confirm that my understanding is correct? If not, I would really appreciate further explanation on this.</p> <p>Q3) I am having difficulties making sense of the trainable parameter. I assume the beta and gamma params are meant here. Why would they not be trainable?</p> <p>Q4) The "reuse" parameter. What is it really?</p> <p>Q5) The adjustment parameter. Another mystery..</p> <p>Q6) A kind of summary question.. Here is my overall assumption that needs confirmation and feedback.. The important params here are inputs, axis, momentum, center, scale, and training. And I assume that as long as training=True when training, we are safe. 
And as long as training=False when validating dev set or test set or even when using the model in real life, we are safe too.</p> <p>Any feedback will really be appreciated.</p> <p>ADDENDUM:</p> <p>Confusion continues. Help!</p> <p>I am trying to use this function instead of implementing a batch normalizer manually. I have the following forward propagation function that loops through layers of the NN.</p> <pre><code>def forward_propagation_with_relu(X, num_units_in_layers, parameters, normalize_batch, training, mb_size=7): L = len(num_units_in_layers) A_temp = tf.transpose(X) for i in range (1, L): W = parameters.get("W"+str(i)) b = parameters.get("b"+str(i)) Z_temp = tf.add(tf.matmul(W, A_temp), b) if normalize_batch: if (i &lt; (L-1)): with tf.variable_scope("batch_norm_scope", reuse=tf.AUTO_REUSE): Z_temp = tf.layers.batch_normalization(Z_temp, axis=-1, training=training) A_temp = tf.nn.relu(Z_temp) return Z_temp #This is the linear output of last layer </code></pre> <p>The tf.layers.batch_normalization(..) function wants to have static dimensions but I do not have it in my case.</p> <p>Since I apply mini batches rather than training the entire train set each time before I run the optimizer, 1 dimension of the X appears to be unknown.</p> <p>If I write:</p> <pre><code>print(X.shape) </code></pre> <p>I get:</p> <pre><code>(?, 5) </code></pre> <p>And when this is the case, when I run the whole program I get the following error below.</p> <p>I saw in some other threads that some people say that they could solve the problem by using tf.reshape function. I try it.. Forward prop goes fine but later on it crashes in the Adam Optimizer..</p> <p>Here is what I get when I run the code above (without using tf.reshape):</p> <p>How do I solve this???</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-191-990fb7d7f7f6&gt; in &lt;module&gt;() 24 parameters = nn_model(train_input_paths, dev_input_paths, test_input_paths, learning_rate, num_train_epochs, 25 normalize_batch, epoch_period_to_save_cost, minibatch_size, num_units_in_layers, ---&gt; 26 lambd, print_progress) 27 28 print(parameters) &lt;ipython-input-190-59594e979129&gt; in nn_model(train_input_paths, dev_input_paths, test_input_paths, learning_rate, num_train_epochs, normalize_batch, epoch_period_to_save_cost, minibatch_size, num_units_in_layers, lambd, print_progress) 34 # Forward propagation: Build the forward propagation in the tensorflow graph 35 ZL = forward_propagation_with_relu(X_mini_batch, num_units_in_layers, ---&gt; 36 parameters, normalize_batch, training) 37 38 with tf.name_scope("calc_cost"): &lt;ipython-input-187-8012e2fb6236&gt; in forward_propagation_with_relu(X, num_units_in_layers, parameters, normalize_batch, training, mb_size) 15 with tf.variable_scope("batch_norm_scope", reuse=tf.AUTO_REUSE): 16 Z_temp = tf.layers.batch_normalization(Z_temp, axis=-1, ---&gt; 17 training=training) 18 19 A_temp = tf.nn.relu(Z_temp) ~/.local/lib/python3.5/site-packages/tensorflow/python/layers/normalization.py in batch_normalization(inputs, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, training, trainable, name, reuse, renorm, renorm_clipping, renorm_momentum, fused, virtual_batch_size, adjustment) 775 _reuse=reuse, 776 _scope=name) --&gt; 777 return layer.apply(inputs, 
training=training) 778 779 ~/.local/lib/python3.5/site-packages/tensorflow/python/layers/base.py in apply(self, inputs, *args, **kwargs) 805 Output tensor(s). 806 """ --&gt; 807 return self.__call__(inputs, *args, **kwargs) 808 809 def _add_inbound_node(self, ~/.local/lib/python3.5/site-packages/tensorflow/python/layers/base.py in __call__(self, inputs, *args, **kwargs) 676 self._defer_regularizers = True 677 with ops.init_scope(): --&gt; 678 self.build(input_shapes) 679 # Create any regularizers added by `build`. 680 self._maybe_create_variable_regularizers() ~/.local/lib/python3.5/site-packages/tensorflow/python/layers/normalization.py in build(self, input_shape) 251 if axis_to_dim[x] is None: 252 raise ValueError('Input has undefined `axis` dimension. Input shape: ', --&gt; 253 input_shape) 254 self.input_spec = base.InputSpec(ndim=ndims, axes=axis_to_dim) 255 ValueError: ('Input has undefined `axis` dimension. Input shape: ', TensorShape([Dimension(6), Dimension(None)])) </code></pre> <p>This is so hopeless.. </p> <p>ADDENDUM(2)</p> <p>I am adding more information:</p> <p>The following simply means that there are 5 units in input layer, 6 units in each hidden layer, and 2 units in output layer.</p> <pre><code>num_units_in_layers = [5,6,6,2] </code></pre> <p>Here is the updated version of forward prop function with tf.reshape</p> <pre><code>def forward_propagation_with_relu(X, num_units_in_layers, parameters, normalize_batch, training, mb_size=7): L = len(num_units_in_layers) print("X.shape before reshape: ", X.shape) # ADDED LINE 1 X = tf.reshape(X, [mb_size, num_units_in_layers[0]]) # ADDED LINE 2 print("X.shape after reshape: ", X.shape) # ADDED LINE 3 A_temp = tf.transpose(X) for i in range (1, L): W = parameters.get("W"+str(i)) b = parameters.get("b"+str(i)) Z_temp = tf.add(tf.matmul(W, A_temp), b) if normalize_batch: if (i &lt; (L-1)): with tf.variable_scope("batch_norm_scope", reuse=tf.AUTO_REUSE): Z_temp = tf.layers.batch_normalization(Z_temp, axis=-1, training=training) A_temp = tf.nn.relu(Z_temp) return Z_temp #This is the linear output of last layer </code></pre> <p>When I do this, I can run the forward prop function. But it seems to be crashing in later execution. Here is the error that I get. (Note that I print out the shape of input X before and after reshaping in the forward prop function).</p> <pre><code>X.shape before reshape: (?, 5) X.shape after reshape: (7, 5) --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) ~/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args) 1349 try: -&gt; 1350 return fn(*args) 1351 except errors.OpError as e: ~/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1328 feed_dict, fetch_list, target_list, -&gt; 1329 status, run_metadata) 1330 ~/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg) 515 compat.as_text(c_api.TF_Message(self.status.status)), --&gt; 516 c_api.TF_GetCode(self.status.status)) 517 # Delete the underlying status object from memory otherwise it stays alive InvalidArgumentError: Incompatible shapes: [7] vs. 
[2] [[Node: forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub = Sub[T=DT_FLOAT, _class=["loc:@batch_norm_scope/batch_normalization/moving_mean"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](forward_prop/batch_norm_scope/batch_normalization/cond_2/Switch_1:1, forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub/Switch_1:1)]] During handling of the above exception, another exception occurred: InvalidArgumentError Traceback (most recent call last) &lt;ipython-input-222-990fb7d7f7f6&gt; in &lt;module&gt;() 24 parameters = nn_model(train_input_paths, dev_input_paths, test_input_paths, learning_rate, num_train_epochs, 25 normalize_batch, epoch_period_to_save_cost, minibatch_size, num_units_in_layers, ---&gt; 26 lambd, print_progress) 27 28 print(parameters) &lt;ipython-input-221-59594e979129&gt; in nn_model(train_input_paths, dev_input_paths, test_input_paths, learning_rate, num_train_epochs, normalize_batch, epoch_period_to_save_cost, minibatch_size, num_units_in_layers, lambd, print_progress) 88 cost_mini_batch, 89 accuracy_mini_batch], ---&gt; 90 feed_dict={training: True}) 91 nr_of_minibatches += 1 92 sum_minibatch_costs += minibatch_cost ~/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata) 893 try: 894 result = self._run(None, fetches, feed_dict, options_ptr, --&gt; 895 run_metadata_ptr) 896 if run_metadata: 897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) ~/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata) 1126 if final_fetches or final_targets or (handle and feed_dict_tensor): 1127 results = self._do_run(handle, final_targets, final_fetches, -&gt; 1128 feed_dict_tensor, options, run_metadata) 1129 else: 1130 results = [] ~/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 1342 if handle is None: 1343 return self._do_call(_run_fn, self._session, feeds, fetches, targets, -&gt; 1344 options, run_metadata) 1345 else: 1346 return self._do_call(_prun_fn, self._session, handle, feeds, fetches) ~/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args) 1361 except KeyError: 1362 pass -&gt; 1363 raise type(e)(node_def, op, message) 1364 1365 def _extend_graph(self): InvalidArgumentError: Incompatible shapes: [7] vs. 
[2] [[Node: forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub = Sub[T=DT_FLOAT, _class=["loc:@batch_norm_scope/batch_normalization/moving_mean"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](forward_prop/batch_norm_scope/batch_normalization/cond_2/Switch_1:1, forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub/Switch_1:1)]] Caused by op 'forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub', defined at: File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in &lt;module&gt; app.launch_new_instance() File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance app.start() File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 478, in start self.io_loop.start() File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tornado/ioloop.py", line 888, in start handler_func(fd_obj, events) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events self._handle_recv() File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tornado/stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2728, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2850, in run_ast_nodes if self.run_code(code, result): File 
"/home/cesncn/anaconda3/envs/tensorflow/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-222-990fb7d7f7f6&gt;", line 26, in &lt;module&gt; lambd, print_progress) File "&lt;ipython-input-221-59594e979129&gt;", line 36, in nn_model parameters, normalize_batch, training) File "&lt;ipython-input-218-62e4c6126c2c&gt;", line 19, in forward_propagation_with_relu training=training) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/normalization.py", line 777, in batch_normalization return layer.apply(inputs, training=training) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/base.py", line 807, in apply return self.__call__(inputs, *args, **kwargs) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/base.py", line 697, in __call__ outputs = self.call(inputs, *args, **kwargs) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/normalization.py", line 602, in call lambda: self.moving_mean) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/utils.py", line 211, in smart_cond return control_flow_ops.cond(pred, true_fn=fn1, false_fn=fn2, name=name) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 316, in new_func return func(*args, **kwargs) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1985, in cond orig_res_t, res_t = context_t.BuildCondBranch(true_fn) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1839, in BuildCondBranch original_result = fn() File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/normalization.py", line 601, in &lt;lambda&gt; lambda: _do_update(self.moving_mean, new_mean), File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/layers/normalization.py", line 597, in _do_update var, value, self.momentum, zero_debias=False) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/training/moving_averages.py", line 87, in assign_moving_average update_delta = (variable - value) * decay File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 778, in _run_op return getattr(ops.Tensor, operator)(a._AsTensor(), *args) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py", line 934, in binary_op_wrapper return func(x, y, name=name) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4819, in _sub "Sub", x=x, y=y, name=name) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3267, in create_op op_def=op_def) File "/home/cesncn/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1650, in __init__ self._traceback = self._graph._extract_stack() # pylint: disable=protected-access InvalidArgumentError (see above for traceback): Incompatible shapes: [7] vs. 
[2] [[Node: forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub = Sub[T=DT_FLOAT, _class=["loc:@batch_norm_scope/batch_normalization/moving_mean"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](forward_prop/batch_norm_scope/batch_normalization/cond_2/Switch_1:1, forward_prop/batch_norm_scope/batch_normalization/cond_2/AssignMovingAvg/sub/Switch_1:1)]] </code></pre> <p>Regarding the question why the shape of X is not static.. I don't know... Here is how I set up the dataset.</p> <pre><code>with tf.name_scope("next_train_batch"): filenames = tf.placeholder(tf.string, shape=[None]) dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.flat_map(lambda filename: tf.data.TextLineDataset(filename).skip(1).map(decode_csv)) dataset = dataset.shuffle(buffer_size=1000) dataset = dataset.batch(minibatch_size) iterator = dataset.make_initializable_iterator() X_mini_batch, Y_mini_batch = iterator.get_next() </code></pre> <p>I have 2 csv files that include the train data.</p> <pre><code>train_path1 = "train1.csv" train_path2 = "train2.csv" train_input_paths = [train_path1, train_path2] </code></pre> <p>And I use the initializable iterator as follows:</p> <pre><code>sess.run(iterator.initializer, feed_dict={filenames: train_input_paths}) </code></pre> <p>During the training, I keep getting mini batches from the train set. Everything works fine when I disable batch normalization. If I enable batch norm, it requires a static shape of the input X (mini batch). I reshape it, but this time it crashes later in the execution as seen above. </p> <p>ADDENDUM(3)</p> <p>I guess I figured out where it crashes. It probably crashes when I run the optimizer after calculating the cost.</p> <p>First the sequence of commands: first forward prop, then compute cost, then run the optimizer. The first two seem to work, but not the optimizer.</p> <p>Here is how I define the optimizer:</p> <pre><code>with tf.name_scope("train"): update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(update_ops): # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost_mini_batch) </code></pre> <p>I have the update_ops there to be able to update the moving averages. If I interpret it right, it is crashing exactly when it tries to update the moving averages. I might be misinterpreting the error msg as well.. </p> <p>ADDENDUM(4)</p> <p>I tried to normalize based on the known dimension and it worked! But that's not the dimension I would like to normalize, which is now confusing. Let me elaborate:</p> <p>nr of units in input layer: 5; nr of units in layer 1 (first hidden layer): 6; so weight1 is a (6, 5) matrix. Assume that the mini batch size is 7. The shape of A[0] (or X_mini_batch) in my case is (7, 5), where 7 is the # training samples in the mini batch, and 5 is the # units in the input layer.</p> <p>When calculating Z[1]... Z[1] = weight1 * A[0].transpose ... then the shape of Z[1] is a (6, 7) matrix, where each column gives 6 features for each train sample.</p> <p>The question is then which column do we want to normalize in Z[1]? What makes sense to me is that you normalize each feature across all given train samples. This means that I need to normalize each row, because I have different feature values for different train examples in each row. And since Z[1] has the shape (6, 7), if I set axis=0, it should refer to normalization in each row. And 7 is the unknown number in my case, so it doesn't hurt. 
Based on this logic, it works! But I am totally puzzled whether axis=0 really refers to each row here... Let me show another example of this axis issue, which has bothered me for a long time now..</p> <p>Here is a code example (unrelated to this topic):</p> <pre><code>cc = tf.constant([[1.,2.,3.], [4.,5.,6.]]) with tf.Session() as sess: print(sess.run(tf.reduce_mean(cc, axis=0))) print(sess.run(tf.reduce_mean(cc, axis=1))) </code></pre> <p>This gives the following output:</p> <pre><code>[2.5 3.5 4.5] [2. 5.] </code></pre> <p>When I set axis to 0, it gives the average of each column. And if axis=1, it gives the average of each row.</p> <p>(Note that cc.shape gives (2,3))</p> <p>Now the million dollar question: in a 2-dimensional matrix, is axis 0 or 1 when I want to address each row?</p> <p>ADDENDUM(5) I guess I get it now. Let me summarize my axis understanding here. Hopefully I am getting it right this time...</p> <p>Here is the Z[1] matrix representation with the shape (6,7):</p> <p>t_ex: train example, f: feature</p> <pre><code>t_ex1 t_ex2 t_ex3 t_ex4 t_ex5 t_ex6 t_ex7 f1 f1 f1 f1 f1 f1 f1 f2 f2 f2 f2 f2 f2 f2 f3 f3 f3 f3 f3 f3 f3 f4 f4 f4 f4 f4 f4 f4 f5 f5 f5 f5 f5 f5 f5 f6 f6 f6 f6 f6 f6 f6 </code></pre> <p>In this mini batch above, there are 7 train examples and each train example has 6 features (since there are 6 units in layer 1). When we say "tf.layers.batch_normalization(..., axis=0)", we mean that the normalization has to be done per row for each feature, to eliminate the high variance between - say - the f1 values in the first row.</p> <p>In other words, we do NOT normalize f1, f2, f3, f4, f5, f6 with each other. We normalize the f1 values with each other, the f2 values with each other, and so on..</p>
<p>Q1) Initializing gamma as 1 and beta as 0 means directly using the normalized inputs. Since there is no prior information about what the variance of a layer output should be, it is fair enough to assume a standard Gaussian.</p> <p>Q2) During the training phase (<code>training=True</code>), each batch is normalized with its own mean and variance, assuming that training data are randomly sampled. During test (<code>training=False</code>), since the test data could be arbitrarily sampled, we cannot use their mean and variance. Thus we use, as you said, the moving-average estimates from the last "100" training iterations.</p> <p>Q3) Yes, trainable refers to <code>beta</code> and <code>gamma</code>. There are cases where you would set <code>trainable=False</code>, e.g. if a novel method is used to update the parameters, or if the batch_norm layer is pre-trained and needs to be frozen. </p> <p>Q4) You may have noticed <code>reuse</code> parameters in other <code>tf.layers</code> functions as well. In general, if you want to call a layer more than once (e.g. training and validation) and you do not want TensorFlow to think that you are creating a new layer, you set <code>reuse=True</code>. I prefer <code>with tf.variable_scope(..., reuse=tf.AUTO_REUSE):</code> to achieve the same purpose.</p> <p>Q5) I am not sure about this one. I guess it is for users who want to design new tricks to adjust the scale and bias.</p> <p>Q6) Yes, you are right.</p>
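<p>To tie the answers together, here is a minimal TF 1.x usage sketch (the layer sizes and learning rate are placeholder assumptions; the important parts are the <code>training</code> flag and the <code>UPDATE_OPS</code> dependency that keeps the moving mean/variance up to date, the same pattern as in your ADDENDUM(3)):</p> <pre><code>import tensorflow as tf

training = tf.placeholder(tf.bool, name='training')
x = tf.placeholder(tf.float32, shape=[None, 5])
y = tf.placeholder(tf.int64, shape=[None])

h = tf.layers.dense(x, 6, use_bias=False)  # bias is redundant with beta
h = tf.layers.batch_normalization(h, training=training)
h = tf.nn.relu(h)
logits = tf.layers.dense(h, 2)

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))

# The moving averages are updated via UPDATE_OPS, so make the train op
# depend on them explicitly.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
</code></pre> <p>Feed <code>training: True</code> while training and <code>training: False</code> for dev/test evaluation.</p>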
python|tensorflow|machine-learning|neural-network
6
3,397
49,346,423
How to fill column in pandas.DataFrame using data from another pandas.DataFrame?
<p>I have the first pandas.DataFrame</p> <pre><code> first_key second_key 0 0 1 1 0 1 2 0 2 3 0 3 4 0 3 </code></pre> <p>and also the second pandas.DataFrame</p> <pre><code> key status 0 1 'good' 1 2 'bad' 2 3 'good' </code></pre> <p>And I want to get the following pandas.DataFrame</p> <pre><code> first_key second_key status 0 0 1 'good' 1 0 1 'good' 2 0 2 'bad' 3 0 3 'good' 4 0 3 'good' </code></pre> <p>How to do this? </p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> by Series created from second <code>DataFrame</code>:</p> <pre><code>df['status'] = df['second_key'].map(df1.set_index('key')['status']) print (df) first_key second_key status 0 0 1 'good' 1 0 1 'good' 2 0 2 'bad' 3 0 3 'good' 4 0 3 'good' </code></pre>
python|pandas
3
3,398
73,217,739
Variation in color representation using Matplotlib
<p>I am trying to represent the array <code>p</code> with different values on the grid as shown. I identify the <code>max,min</code> of <code>p</code> and set the colorbar according to <code>Amax,Amin</code>. However, I do not see color variation even though the values are quite different. I don't know if there is problem with <code>color_list.append(color(P[i]/Amax))</code>.</p> <pre><code>import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.patches import Rectangle import numpy as np from matplotlib.colors import Normalize from matplotlib import cm import math fig,ax = plt.subplots(1) n=3 N=2*n*(n-1) #p = np.random.uniform(a,b,N) p=np.array([[0.000053 , 0.00005017809219905991 , 0.00005034247517775545 , 0.000048835631206379674, 0.000051109595745001285, 0.0000519589078015949 , 0.00004938357446869813 , 0.000047575361703047204, 0.0000502876808515236 , 0.00004858905673833636 , 0.00004724659574565612 , 0.00005242465957456561 ]]) p=p.reshape(12,1) P = np.array(p) P=P.reshape(len(P),1) Max=max(P) #print(&quot;Max =&quot;,Max) Min=min(P) #print(&quot;Min =&quot;,Min) a=Min b=Max Amax= b*math.ceil(Max/b) Amin= a*math.floor(Min/a) #print(Amax, Amin) color = cm.get_cmap('Greys') norm = Normalize(vmin=Amin, vmax=Amax) color_list = [] for i in range(len(P)): color_list.append(color(P[i]/Amax)) #print(color_list) id = 0 for j in range(0, n): for k in range(n-1): ax.hlines(200+200*(n-j-1)+5*n, 200*(k+1)+5*n, 200*(k+2)+5*n, zorder=0, colors=color_list[id],linewidth=5.0) id += 1 for i in range(0, n): rect = mpl.patches.Rectangle((200+200*i, 200+200*j), 10*n, 10*n, linewidth=1, edgecolor='black', facecolor='black') ax.add_patch(rect) if j &lt; n-1: ax.vlines(200+200*i+5*n, 200*(n-1-j)+5*n, 200*(n-j)+5*n, zorder=0, colors=color_list[id],linewidth=5.0) id += 1 cb = fig.colorbar(cm.ScalarMappable(cmap=color, norm=norm)) cb.set_label(&quot;Radius (m)&quot;) ax.set_xlim(left = 0, right = 220*n) ax.set_ylim(bottom = 0, top = 220*n) # ax.set_yticklabels([]) # ax.set_xticklabels([]) plt.axis('off') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/yxsde.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yxsde.png" alt="enter image description here" /></a></p>
<p>You are scaling your <code>color_list</code> over [0, Amax] instead of [Amin, Amax]. You can use the <code>norm</code> you already defined to scale it:</p> <pre><code>for i in range(len(P)): color_list.append(color(norm(P[i]))) </code></pre> <p><a href="https://i.stack.imgur.com/Ezbtz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ezbtz.png" alt="enter image description here" /></a></p>
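<p>As a side note, both the colormap and <code>Normalize</code> accept arrays, so the loop can likely be replaced by one vectorized call (a sketch):</p> <pre><code>color_list = color(norm(P.ravel()))  # (N, 4) array of RGBA rows
</code></pre>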
python|numpy|matplotlib
1
3,399
73,439,060
Extending a dataframe, filling in missing time, and keeping the other column values with the corresponding time?
<p>More about my problem: I have a 2-column dataframe (one information-based column and one time-based column) that is ~190k rows long. I am missing some dates, and would like to fill in the missing dates while keeping the information with the correct date, and then come back and fill in the missing information by resampling with interpolation.</p> <p>What I've tried so far:</p> <pre><code>data_res = pd.period_range(min(data.time), max(data.time), freq = 'H') ...: data_res.reindex(data_res) </code></pre> <p>which was great, because it gave me the output and the correct times I needed.</p> <pre><code>(PeriodIndex(['1984-10-25 09:00', '1984-10-25 10:00', '1984-10-25 11:00', '1984-10-25 12:00', '1984-10-25 13:00', '1984-10-25 14:00', '1984-10-25 15:00', '1984-10-25 16:00', '1984-10-25 17:00', '1984-10-25 18:00', ... '2022-08-16 09:00', '2022-08-16 10:00', '2022-08-16 11:00', '2022-08-16 12:00', '2022-08-16 13:00', '2022-08-16 14:00', '2022-08-16 15:00', '2022-08-16 16:00', '2022-08-16 17:00', '2022-08-16 18:00'], dtype='period[H]', length=211460), </code></pre> <p>Then I used:</p> <pre><code>data1['time'] = pd.Series(data_res) </code></pre> <p>which had no issues. Finally I printed the table to double-check, and the output was:</p> <pre><code>189741 rows × 2 columns </code></pre> <p>So instead of going from the late 1980s to 2022 like when I imported it, it now cuts off at 2006. I understand that the dataframe gets cut off when the row length of the time matches the information column. My problem seems to be twofold: insert the full time range, and keep the information column values with the corresponding dates. I tried looking for similar problems, but everything I found was about inserting a shorter column into a dataframe and filling the extra values with NA - which is similar to what I need, but not helpful in pointing me in the right direction. Does anyone have any ideas on how to fix this? I would also be happy to provide extra information if needed.</p>
<p>Try a merge of 2 dfs instead of pd.Series.</p> <p>Time Series Range (using an 8 hr range for simplicity):</p> <pre><code>import pandas as pd import numpy as np s = pd.to_datetime(&quot;2022-08-01 00:00:00&quot;) e = pd.to_datetime(&quot;2022-08-01 08:00:00&quot;) data_res = pd.DataFrame(pd.period_range(s, e, freq = 'H'), columns = ['Time']) </code></pre> <p>DF with Data (Just sampling a few of the rows from the full dataset and adding random values as info)</p> <pre><code>actual_data = data_res.sample(n=6).sort_values(by=['Time']) actual_data['info'] = np.random.random(size=len(actual_data)) </code></pre> <pre><code> Time info 0 2022-08-01 00:00 0.549414 2 2022-08-01 02:00 0.746876 3 2022-08-01 03:00 0.715491 5 2022-08-01 05:00 0.521234 6 2022-08-01 06:00 0.822393 7 2022-08-01 07:00 0.430862 </code></pre> <p>You can then merge these 2 dfs on time and fill nulls from there - I'm filling with 0, but you can use whatever interpolation method is needed.</p> <pre><code>data_joined = pd.merge(data_res, actual_data, on=['Time'], how = 'left').fillna({'info': 0}) </code></pre> <pre><code> Time info 0 2022-08-01 00:00 0.549414 1 2022-08-01 01:00 0.000000 2 2022-08-01 02:00 0.746876 3 2022-08-01 03:00 0.715491 4 2022-08-01 04:00 0.000000 5 2022-08-01 05:00 0.521234 6 2022-08-01 06:00 0.822393 7 2022-08-01 07:00 0.430862 8 2022-08-01 08:00 0.000000 </code></pre>
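<p>And since the question mentions interpolation, you can swap the zero-fill for <code>interpolate</code> on the merged frame (a sketch on the same frames):</p> <pre><code>data_joined = pd.merge(data_res, actual_data, on=['Time'], how='left')
data_joined['info'] = data_joined['info'].interpolate()
</code></pre>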
python|pandas|dataframe|time-series
1