| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
377,100
| 63,410,790
|
plots in matplotlib become noisy
|
<p>So my code is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
x1 = [1, 2, 3, 4, 5]
y1 = [1, 4, 9, 14, 25]
t = np.arange(0.0, 2.0, 0.01)
s1 = np.sin(2*np.pi*t)
s2 = np.sin(4*np.pi*t)
plt.figure(1)
plt.subplots(211)
plt.plot(x, y)
plt.subplots(212)
plt.plot(x1, y1)
plt.figure(2)
plt.subplot(211)
plt.plot(t, s1)
plt.subplot(212)
plt.plot(t, 2*s1)
plt.show()
</code></pre>
<p>When I run only figure 2 everything is fine, but when I run all of the code the plots become noisy.
Also, instead of two figures, figure 1 comes up empty,
as you can see in the picture:
<a href="https://i.stack.imgur.com/0DMhV.png" rel="nofollow noreferrer">pic</a></p>
|
<p>You have a small typo in your code:
in the plots belonging to <code>figure(1)</code>, change <code>subplots()</code> to <code>subplot()</code> (remove the trailing s).</p>
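<p>A minimal sketch of the corrected <code>figure(1)</code> block (only the two calls change):</p>
<pre><code>plt.figure(1)
plt.subplot(211)   # subplot, not subplots
plt.plot(x, y)
plt.subplot(212)
plt.plot(x1, y1)
</code></pre>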
|
python|linux|numpy|matplotlib
| 2
|
377,101
| 63,553,184
|
Get the row number from given indices in Pandas
|
<pre><code>import pandas as pd
df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6]},
index=['first','second','third'])
</code></pre>
<p>Retrieving the corresponding indices from a given list of row numbers is simple:</p>
<pre><code>df.iloc[[1,2]].index
</code></pre>
<p>But what about the converse? Suppose I am given a list <code>['second','third']</code>. How can I return <code>df</code>'s row numbers 1 and 2 from that?</p>
|
<p>Try:</p>
<pre><code>df.index.get_indexer(['second', 'third'])
</code></pre>
<p>Output:</p>
<pre><code>array([1, 2], dtype=int64)
</code></pre>
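<p>For a single label, or as an explicit alternative sketch, <code>Index.get_loc</code> returns the same positions:</p>
<pre><code>[df.index.get_loc(label) for label in ['second', 'third']]
# [1, 2]
</code></pre>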
|
pandas|dataframe
| 2
|
377,102
| 63,497,760
|
pandas group by with isin statement return index true/false
|
<p>I have the following dataframe:</p>
<pre><code> balance
currency
JPY 2342
USD 33245
BTC 23424
ETH 19080
CNY 89678
</code></pre>
<p>The following code returns the sums from the dataframe in only two categories, True and False:</p>
<pre><code>bal = df.groupby(~df.index.isin(['BTC', 'ETH'])).sum()
</code></pre>
<p>The expected output is as follows, where [JPY, USD, CNY] have been summed under one index named JPY:</p>
<pre><code> balance
currency
JPY 125265
BTC 23424
ETH 19080
</code></pre>
<p>Any help would be very appreciated.</p>
|
<p>You could try with <code>where</code> first and then <code>groupby</code> + <code>sum</code>:</p>
<pre><code>groups=df.index.where(df.index.isin(['BTC', 'ETH']),'JPY')
df.groupby(groups).sum()
</code></pre>
<p>Output:</p>
<pre><code> balance
currency
BTC 23424
ETH 19080
JPY 125265
</code></pre>
<hr />
<p><strong>Details</strong>:<br />
Mask and replace with <code>where</code>+<code>isin</code> to change the values that are not in the list with <code>'JPY'</code>:</p>
<pre><code>groups=df.index.where(df.index.isin(['BTC', 'ETH']),'JPY')
print(groups)
>>>Index(['JPY', 'JPY', 'BTC', 'ETH', 'JPY'], dtype='object', name='currency')
</code></pre>
<p>Then group by <code>groups</code> and sum.</p>
|
python|pandas
| 1
|
377,103
| 63,445,562
|
pandas Timestamp to datetime.date
|
<p>I have a problem converting a pandas Series to <code>datetime.datetime</code>.</p>
<p>I got DataFrame - df, with column <strong>Timestamp</strong> of type: <code>pandas._libs.tslibs.timestamps.Timestamp</code> and column <strong>Timestamp-end</strong> of type: <code>pandas._libs.tslibs.timedeltas.Timedelta</code>
<a href="https://i.stack.imgur.com/2wKyL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2wKyL.png" alt="enter image description here" /></a></p>
<p>I found that topic on SO: <a href="https://stackoverflow.com/questions/25852044/converting-pandas-tslib-timestamp-to-datetime-python">Converting pandas.tslib.Timestamp to datetime python</a> but the suggestions on this topic did not work.</p>
<p>Is there any way to convert it to datetime? If not, how can I subtract the <strong>Timestamp-end</strong> column from the <strong>Timestamp</strong> column to get a date and time, given that they are of Timestamp and Timedelta type?</p>
<p>How I created <strong>Timestamp</strong> column:</p>
<pre><code>import adodbapi
import pandas as pd
import numpy as np
import datetime as dt
cursor = myConn.cursor()
cursor.execute(query)
# every row in query_list is type of SQLrow
query_list = [row for row in cursor]
df = pd.DataFrame({'TagAddress':[row[0] for row in query_list], 'Timestamp':[row[1] for row in query_list], 'Value':[row[3] for row in query_list]})
</code></pre>
<p><strong>Timestamp-end</strong> column:</p>
<pre><code>df['Timestamp-end'] = pd.NaT
# in for loop, dict values are type of timestamps.Timestamp
df['Timestamp-end'].iloc[i] = df['Timestamp'].iloc[i] - current_errors_timestamp[curr_fault_key]
</code></pre>
<p>My expected output (column <strong>Result</strong>):</p>
<p>I just want to subtract the <code>Timedelta</code> from the <code>Timestamp</code> to get the new <code>Result</code> column. With type <code>datetime.datetime</code> I can do it without any problems.</p>
<pre><code>Timestamp ErrorValue Machine Station FAULT Timestamp-end Result
2020-06-20 08:01:09.562 370 T1 R1 1 0 days 00:00:06 2020-06-20 08:01:03
2020-06-20 08:01:21.881 370 T1 R1 0 0 days 00:00:12.319000 2020-06-20 08:01:09
2020-06-20 08:07:06.708 338 T1 R1 0 0 days 00:00:24.623000 2020-06-20 08:06:42
2020-06-20 08:07:31.041 338 T1 R1 0 0 days 00:00:18.333000 2020-06-20 08:07:13
</code></pre>
|
<p>I believe you need to convert the column to dates:</p>
<pre><code>df['Timestamp1'] = df['Timestamp'].dt.date
</code></pre>
<p>Or better, remove the times by setting them to <code>00:00:00</code>:</p>
<pre><code>df['Timestamp1'] = df['Timestamp'].dt.normalize()
</code></pre>
<p>And then subtract.</p>
<p>EDIT: You can subtract values and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.floor.html" rel="nofollow noreferrer"><code>Series.dt.floor</code></a> for seconds:</p>
<pre><code>df['Timestamp-end'] = pd.to_timedelta(df['Timestamp-end'])
df['Result'] = df['Timestamp'].sub(df['Timestamp-end']).dt.floor('S')
print (df)
Timestamp ErrorValue Machine Station FAULT Timestamp-end \
0 2020-06-20 08:01:09.562 370 T1 R1 1 00:00:06
1 2020-06-20 08:01:21.881 370 T1 R1 0 00:00:12.319000
2 2020-06-20 08:07:06.708 338 T1 R1 0 00:00:24.623000
3 2020-06-20 08:07:31.041 338 T1 R1 0 00:00:18.333000
Result
0 2020-06-20 08:01:03
1 2020-06-20 08:01:09
2 2020-06-20 08:06:42
3 2020-06-20 08:07:12
</code></pre>
|
python|pandas|datetime
| 2
|
377,104
| 63,645,473
|
Python Pandas: Combine Dataframes that are unevenly filled
|
<p>Good day,</p>
<p>from one of our clients we're getting csv-exports that are looking something like this:</p>
<pre><code>id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
1 abc object_1 12 none none none none
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
2 def object_2 7 object_3 19 none none
id | name | object_a | amount_a | object_b | amount_b | object_c | amount_c
3 ghi object_4 25 none none none none
</code></pre>
<p>Now I really only care about the object pairs (object name and amount). In each set of data the maximum number of pairs is always the same, but they are filled at random positions.
My question: is it possible to load them all into a dataframe and convert them into something like this:</p>
<pre><code>object | amount
object_1 12
object_2 7
object_3 19
object_4 25
</code></pre>
<p>Loading all these csv exports into a single dataframe isn't the problem, but does pandas contain a solution for this kind of problem?</p>
<p>Thanks for all your help!</p>
|
<p>First <code>concat</code> all the csvs, and then use <code>pd.wide_to_long</code>:</p>
<pre><code>csv_paths = ["your_csv_paths..."]
df = pd.concat([pd.read_csv(i) for i in csv_paths]).replace("none", np.NaN)
print (pd.wide_to_long(df, stubnames=["object","amount"],
i=["id","name"],j="Hi", suffix="\w*",
sep="_").dropna())
object amount
id name Hi
1 abc a object_1 12
2 def a object_2 7
b object_3 19
3 ghi a object_4 25
</code></pre>
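<p>A shorter alternative sketch with <code>pandas.lreshape</code> (assuming the same concatenated frame with <code>none</code> already replaced by <code>NaN</code>):</p>
<pre><code>groups = {"object": ["object_a", "object_b", "object_c"],
          "amount": ["amount_a", "amount_b", "amount_c"]}
print(pd.lreshape(df, groups)[["object", "amount"]].dropna())
</code></pre>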
|
python|pandas|dataframe|sorting
| 2
|
377,105
| 63,639,993
|
Dataframe values pre-processing - make read_excel drop the apostrophe in values during import, or other ways to convert strings to numeric values after importing
|
<p>I have an excel file with the data format below to import in dataframe.</p>
<p>My current code allows me to extract the exact rows shown in the picture into the dataframe.</p>
<pre><code>df_gdp = pd.read_excel (open(gdp_path,'rb'), sheet_name='T2', skiprows= 5, skipfooter= 29)
</code></pre>
<p>Below is the data in excel:
<a href="https://i.stack.imgur.com/cba6j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cba6j.png" alt="enter image description here" /></a></p>
<p>Below is my dataframe output:
<a href="https://i.stack.imgur.com/iOQAO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOQAO.png" alt="enter image description here" /></a></p>
<p>Problem: the values shown above are imported as strings, with the apostrophes at the beginning and end not being shown.</p>
<p>When I try to convert the values to numbers using the methods below, it doesn't work.</p>
<pre><code>df_gdp.iloc[1:, 1] = df_gdp.iloc[1:, 0].str.replace("'", "").astype(float)
or
b1 = df_gdp.iloc[:, 54:61].values.astype(float)
</code></pre>
<p>ValueError: could not convert string to float: '384,870.3'</p>
<p>There is something that I might have missed: perhaps I should have added something to my <code>read_excel</code> call during the import, but I don't know what.</p>
<p>I looked up the <code>dtype</code> argument of <code>read_excel</code> but couldn't find an example of how to declare a specific range of columns to convert to numbers during import. The example that I found is like the one below:</p>
<pre><code>pd.read_excel('tmp.xlsx', index_col=0, dtype={'Name': str, 'Value': float})
</code></pre>
<p>My data has too many year columns to declare them individually; is there a way out?</p>
<p>My desired numpy array output after conversion is below (not [ '69124.4' ....]) :</p>
<pre><code>[ 69124.4 63585.4 51331.7 174596.4 183850.7 -107672.4 49833.8
120578.6 40884.1 106405. 126586.1 94867.2 22184.3 100575.9
110966.1 52548.9 243641.7]
</code></pre>
|
<p>Instead of:</p>
<pre><code>df_gdp.iloc[1:, 1] = df_gdp.iloc[1:, 0].str.replace("'", "").astype(float)
</code></pre>
<p>You must use:</p>
<pre><code>lst = df_gdp.iloc[0,1:].to_list()
lst = [s.replace(',', '') for s in lst]
lst = [float(i) for i in lst]
</code></pre>
<p>Now lst is: <code>[69124.4 63585.4 51331.7 174596.4 , ...]</code><br></p>
<p>Works fine for:</p>
<p><a href="https://i.stack.imgur.com/xAu9m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xAu9m.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe
| 1
|
377,106
| 63,656,778
|
How to load a trained TF1 protobuf model into TF2?
|
<p><em><strong>Update:</strong> This is a bug in tensorflow. Track progress <a href="https://github.com/tensorflow/tensorflow/issues/42980" rel="noreferrer">here</a>.</em></p>
<p>I have created and trained a model using stable-baselines, which uses Tensorflow 1.
Now I need to use this trained model in an environment where I only have access to Tensorflow 2 or PyTorch.
I figured I would go with Tensorflow 2 as the <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/load" rel="noreferrer">documentation says</a> I should be able to load models created with Tensorflow 1.</p>
<p><em>I can load the pb file without a problem in Tensorflow 1</em>:</p>
<pre><code>global_session = tf.Session()
with global_session.as_default():
    model_loaded = tf.saved_model.load_v2('tensorflow_model')
    model_loaded = model_loaded.signatures['serving_default']
    init = tf.global_variables_initializer()
    global_session.run(init)
</code></pre>
<p><em>However in Tensorflow 2 I get the following error</em>:</p>
<pre><code>can_be_imported = tf.saved_model.contains_saved_model('tensorflow_model')
assert(can_be_imported)
model_loaded = tf.saved_model.load('tensorflow_model/')
ValueError: Node 'loss/gradients/model/batch_normalization_3/FusedBatchNormV3_1_grad/FusedBatchNormGradV3' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 64. Shapes are [0] and [64].
</code></pre>
<p><em>Model definition</em>:</p>
<pre><code>NUM_CHANNELS = 64
BN1 = BatchNormalization()
BN2 = BatchNormalization()
BN3 = BatchNormalization()
BN4 = BatchNormalization()
BN5 = BatchNormalization()
BN6 = BatchNormalization()
CONV1 = Conv2D(NUM_CHANNELS, kernel_size=3, strides=1, padding='same')
CONV2 = Conv2D(NUM_CHANNELS, kernel_size=3, strides=1, padding='same')
CONV3 = Conv2D(NUM_CHANNELS, kernel_size=3, strides=1)
CONV4 = Conv2D(NUM_CHANNELS, kernel_size=3, strides=1)
FC1 = Dense(128)
FC2 = Dense(64)
FC3 = Dense(7)
def modified_cnn(inputs, **kwargs):
    relu = tf.nn.relu
    log_softmax = tf.nn.log_softmax
    layer_1_out = relu(BN1(CONV1(inputs)))
    layer_2_out = relu(BN2(CONV2(layer_1_out)))
    layer_3_out = relu(BN3(CONV3(layer_2_out)))
    layer_4_out = relu(BN4(CONV4(layer_3_out)))
    flattened = tf.reshape(layer_4_out, [-1, NUM_CHANNELS * 3 * 2])
    layer_5_out = relu(BN5(FC1(flattened)))
    layer_6_out = relu(BN6(FC2(layer_5_out)))
    return log_softmax(FC3(layer_6_out))

class CustomCnnPolicy(CnnPolicy):
    def __init__(self, *args, **kwargs):
        super(CustomCnnPolicy, self).__init__(*args, **kwargs, cnn_extractor=modified_cnn)
model = PPO2(CustomCnnPolicy, env, verbose=1)
</code></pre>
<p><em>Model saving in TF1:</em></p>
<pre><code>with model.graph.as_default():
    tf.saved_model.simple_save(model.sess, 'tensorflow_model',
                               inputs={"obs": model.act_model.obs_ph},
                               outputs={"action": model.act_model._policy_proba})
</code></pre>
<p>Fully reproducible code can be found in the following 2 google colab notebooks:
<a href="https://colab.research.google.com/drive/1ftm9vgiJA8LGwLuUFtaG0aYCS9lUrzET?usp=sharing" rel="noreferrer">Tensorflow 1 saving and loading</a>
<a href="https://colab.research.google.com/drive/18IWxx-eppX2Sjoo2h1hGe0TzCHNiykF1?usp=sharing" rel="noreferrer">Tensorflow 2 loading</a></p>
<p>Direct link to the saved model:
<a href="https://drive.google.com/drive/folders/1KmfLDIlAqX80Ha1F7MmC314loI2ayiFY?usp=sharing" rel="noreferrer">model</a></p>
|
<p>You can use the compatibility layer of TensorFlow.</p>
<p>All <code>v1</code> functionality is available under <code>tf.compat.v1</code> namespace.</p>
<p>I managed to load your model in TF 2.1 (nothing special about that version, I just have it locally):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
tf.__version__
Out[2]: '2.1.0'
model = tf.compat.v1.saved_model.load_v2('~/tmp/tensorflow_model')
model.signatures
Out[3]: _SignatureMap({'serving_default': <tensorflow.python.eager.wrap_function.WrappedFunction object at 0x7ff9244a6908>})
</code></pre>
|
tensorflow|tensorflow2.0|stable-baselines
| 9
|
377,107
| 63,639,345
|
tf.data.experimental.save VS TFRecords
|
<p>I have noticed that the method <a href="https://www.tensorflow.org/api_docs/python/tf/data/experimental/save" rel="nofollow noreferrer"><code>tf.data.experimental.save</code></a> (added in r2.3) allows saving a <code>tf.data.Dataset</code> to file in just one line of code, which seems extremely convenient. Are there still some benefits in serializing a <code>tf.data.Dataset</code> and writing it into a TFRecord ourselves, or is this save function supposed to replace this process?</p>
|
<p>TFRecords have several benefits, especially when using large datasets. <a href="https://medium.com/mostly-ai/tensorflow-records-what-they-are-and-how-to-use-them-c46bc4bbb564" rel="nofollow noreferrer">TFRecord</a> - if you are working with large datasets, using a binary file format for storage of your data can have a significant impact on the performance of your import pipeline and, as a consequence, on the training time of your model. Binary data takes up less space on disk, takes less time to copy and can be read much more efficiently from disk. This is especially true if your data is stored on spinning disks, due to the much lower read/write performance in comparison with SSDs.</p>
<p><code>tf.data.experimental.save</code> and <code>tf.data.experimental.load</code> will be useful if you are not worried about the performance of your import pipeline.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/data/experimental/save" rel="nofollow noreferrer">tf.data.experimental.save</a> - The saved dataset is saved in multiple file "shards". By default, the dataset output is divided to shards in a round-robin fashion. The datasets saved through <code>tf.data.experimental.save</code> should only be consumed through <code>tf.data.experimental.load</code>, which is guaranteed to be backwards compatible.</p>
|
tensorflow|dataset
| 0
|
377,108
| 63,530,609
|
Using rolling functions in pandas with a series where the time index is very sparse
|
<p>I have a time series where the index is in milliseconds and is quite sparse. You can have many entries a few ms apart and nothing for seconds.</p>
<p>I would like to compute a rolling min / max, but I can't get it to work.</p>
<p>The index is built that way:</p>
<pre><code>df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms')
</code></pre>
<p>First I tried this:</p>
<pre><code>df['rolling_low'] = df['price'].rolling('1m').min()
</code></pre>
<p>but then I get this error:</p>
<blockquote>
<p>window must be an integer</p>
</blockquote>
<p>looking at various posts, I tried this:</p>
<pre><code>df['rolling_low'] = df.rolling('1m', on='timestamp')['price'].min()
</code></pre>
<p>For some reason it has a different syntax than the first attempt, but anyhow, it gives me:</p>
<blockquote>
<p>timestamp must be monotonic</p>
</blockquote>
<p>Another search on SO and I added this:</p>
<pre><code>df = df.sort_index()
</code></pre>
<p>but it's still the same problem.</p>
<p>This issue seems very unclear to me: I don't really understand the error message, I don't understand the difference between the two syntaxes I have tried, and I can't find much documentation about this error besides a couple of online posts with the same problem and no solution that works in my case.</p>
<p>What does the error mean exactly? And, additionally, how do I fix it? :)</p>
|
<p>When you do the following: <code>df['rolling_low'] = df['price'].rolling('1m').min()</code> you've got to make sure that
your timestamp is the index of the dataframe. This is done in: <code>df = df.set_index("timestamp")</code>
in the code example below, otherwise you will get <code>ValueError: window must be an integer</code> error.
I agree the error is quite nebulous in this context. Here's a working example :)</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
"timestamp": [
"2020-04-20 12:00:00.123",
"2020-04-20 12:00:00.126",
"2020-04-20 12:00:00.128",
"2020-04-20 12:00:05.126",
"2020-04-20 12:00:05.140",
"2020-04-20 12:00:05.156",
"2020-04-20 12:00:12.126",
"2020-04-20 12:00:12.129",
],
"price": range(8),
}
)
df["timestamp"] = pd.to_datetime(df["timestamp"])
df = df.set_index("timestamp")
df["rolling_low"] = df["price"].rolling("1s").min()
</code></pre>
<p>Output:</p>
<pre><code> price rolling_low
timestamp
2020-04-20 12:00:00.123 0 0.0
2020-04-20 12:00:00.126 1 0.0
2020-04-20 12:00:00.128 2 0.0
2020-04-20 12:00:05.126 3 3.0
2020-04-20 12:00:05.140 4 3.0
2020-04-20 12:00:05.156 5 3.0
2020-04-20 12:00:12.126 6 6.0
2020-04-20 12:00:12.129 7 6.0
</code></pre>
<p>If you want to do 1 minute aggregations use "60s" as the arg to <code>rolling</code>.</p>
|
python|pandas
| 0
|
377,109
| 63,450,286
|
pd.groupby on another groupby, transposing results of pd.cut
|
<p>Another rather complicated question I'm stuck at regarding Pandas and its groupby and cut function. Situation is as follows, let's say I have a DataFrame that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import Pandas as pd
pd.DataFrame(data)
A B C ipv4
0 1 3 3 0.0.0.0
1 2 2 1 140.0.0.0
2 3 1 3 230.0.0.0
3 1 1 2 140.0.0.0
4 3 1 2 NaN
</code></pre>
<p>At this point, I have to add that the actual DataFrames I'm working with here can contain millions(!) of rows, so performance is something I have to keep in mind here.
I've made a function that gives me the power set of A, B and C, so <code>pset = [(A), (B), (C), (A,B), ... ]</code> without the empty one, you get the idea. I'm now grouping by each of these combinations in a loop and I'm creating a <code>count_df</code> for each one like this:</p>
<pre class="lang-py prettyprint-override"><code>for combination in pset:
df.groupby(list(combination))
count_df = df.size().reset_index().rename(columns={0: 'count'})
print(count_df)
A count
0 1 2
1 2 1
2 3 2
...
A B count
0 1 1 1
1 1 3 1
2 2 2 1
3 3 1 2
...
</code></pre>
<p>We're coming closer to my problem: I need to add some very basic information about <a href="https://networkustad.com/2019/08/27/classful-network-addressing/" rel="nofollow noreferrer">IP classes</a> to each row of the <code>count_df</code> with their respective A-B-C combination (you can scroll down in the provided link to <em>High Order Bit (HOB)</em> and look at the table to get a quick idea of what I'm trying to do here). I've added another column to my <code>df</code> for this, containing the first octet of each row's ipv4, and used Pandas' cut to get the counts for each interval quite fast:</p>
<pre class="lang-py prettyprint-override"><code># I use 256 as value for any row that has "NaN" instead of a real address
df["ipv4"].replace(to_replace="NaN", value="256.0.0.0", inplace=True)
df["first_octet"] = df["ipv4"].apply(lambda x: int(x.partition(".")[0]))
df["cut_group"] = pd.cut(data["first_octet"], [0, 127, 191, 223, 239, 255, 256])
print(df)
A B C ipv4 first_octet cut_group
0 1 3 3 0.0.0.0 0 (0, 127.0]
1 2 2 1 140.0.0.0 140 (127.0, 191.0]
2 3 1 3 230.0.0.0 230 (223.0, 239.0]
3 1 1 2 140.0.0.0 140 (127.0, 191.0]
4 3 1 2 256.0.0.0 256 (255.0, 256.0]
for combination in pset:
    grouped = df.groupby(list(combination) + ["cut_group"])
    count_df = grouped.size().reset_index().rename(columns={0: 'count'})
    print(count_df)
A cut_group count
0 1 (0, 127] 1
1 1 (127, 191] 1
2 1 (191, 223] 0
3 1 (223, 239] 0
4 1 (239, 255] 0
5 1 (255, 256] 0
6 2 (0, 127] 0
7 2 (127, 191] 1
8 2 (191, 223] 0
9 2 (223, 239] 0
10 2 (239, 255] 0
11 2 (255, 256] 0
12 3 (0, 127] 0
13 3 (127, 191] 0
14 3 (191, 223] 0
15 3 (223, 239] 1
16 3 (239, 255] 0
17 3 (255, 256] 1
...
A B cut_group count
0 1 1 (0, 127] 0
1 1 1 (127, 191] 1
2 1 1 (191, 223] 0
3 1 1 (223, 239] 0
4 1 1 (239, 255] 0
5 1 1 (255, 256] 0
6 1 2 (0, 127] 0
7 1 2 (127, 191] 0
8 1 2 (191, 223] 0
9 1 2 (223, 239] 0
10 1 2 (239, 255] 0
11 1 2 (255, 256] 0
12 1 3 (0, 127] 1
13 1 3 (127, 191] 0
14 1 3 (191, 223] 0
15 1 3 (223, 239] 0
16 1 3 (239, 255] 0
17 1 3 (255, 256] 0
18 2 1 (0, 127] 0
19 2 1 (127, 191] 0
20 2 1 (191, 223] 0
21 2 1 (223, 239] 0
22 2 1 (239, 255] 0
23 2 1 (255, 256] 0
24 2 2 (0, 127] 0
25 2 2 (127, 191] 1
26 2 2 (191, 223] 0
27 2 2 (223, 239] 0
28 2 2 (239, 255] 0
29 2 2 (255, 256] 0
30 2 3 (0, 127] 0
31 2 3 (127, 191] 0
32 2 3 (191, 223] 0
33 2 3 (223, 239] 0
34 2 3 (239, 255] 0
35 2 3 (255, 256] 0
36 3 1 (0, 127] 0
37 3 1 (127, 191] 0
38 3 1 (191, 223] 0
39 3 1 (223, 239] 1
40 3 1 (239, 255] 0
41 3 1 (255, 256] 1
42 3 2 (0, 127] 0
43 3 2 (127, 191] 0
44 3 2 (191, 223] 0
45 3 2 (223, 239] 0
46 3 2 (239, 255] 0
47 3 2 (255, 256] 0
48 3 3 (0, 127] 0
49 3 3 (127, 191] 0
50 3 3 (191, 223] 0
51 3 3 (223, 239] 0
52 3 3 (239, 255] 0
53 3 3 (255, 256] 0
...
</code></pre>
<p>Ok, so the next step here is now missing for me. What I need is an output that looks like this for each combination of the pset:</p>
<pre class="lang-py prettyprint-override"><code>for combination in pset:
<???>
print(count_df)
A count (0, 127] (127, 191] (191, 223] (223, 239] (239, 255] (255, 256]
0 1 2 1 1 0 0 0 0
1 2 1 0 1 0 0 0 0
2 3 1 0 0 0 1 0 1
...
A B count (0, 127] (127, 191] (191, 223] (223, 239] (239, 255] (255, 256]
0 1 1 1 0 1 0 0 0 0
1 1 2 0 0 0 0 0 0 0
2 1 3 1 1 0 0 0 0 0
3 2 1 0 0 0 0 0 0 0
4 2 2 1 0 1 0 0 0 0
5 2 3 0 0 0 0 0 0 0
6 3 1 2 0 0 0 1 0 1
7 3 2 0 0 0 0 0 0 0
8 3 3 0 0 0 0 0 0 0
...
</code></pre>
<p>I'm not sure how to get to that. The columns of <code>count_df</code> could also be <code>A-B-C count classA classB classC classD classE classNaN</code> for clarification. The <code>count</code> column needs to indicate the count of how many underlying rows had the individual combination of A-B-C like I would get calling <code>df.groupby(list(combination)).size().reset_index().rename(columns={0: 'count'})</code>, the interval columns need to indicate the count of how many underlying rows were counted for the individual class of the individual combination of A-B-C. You can sum up the problem to something like having a groupby with <code>groupby1 = df.groupby(list(combination) + ["cut_group"])</code> and after that another groupby on that one like <code>groupby2 = groupby1.groupby(list(combination))</code> and adding the class count information from <code>groupby1</code> transposed to rows. These last lines here are nonesense code, just to clarify what I mean.</p>
<p>I'm open to any suggestion regarding filling out the mentioned 'gap' in my code, as well as any suggestion to do something different here using other functions of Pandas which I don't know of yet. As always, I'm happy to learn different ways of using Pandas. Thank you!</p>
|
<p>what you can do is to <code>join</code> a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>pd.get_dummies</code></a> of the column cut_group and then use <code>sum</code> in the <code>groupby</code>, something like:</p>
<pre><code># get dummies
dummies = pd.get_dummies(df["cut_group"])
df_ = df.join(dummies)  # you can reassign to df if you want

for combination in pset:
    gr = df_.groupby(list(combination))  # change to df if you reassigned the join to df above
    count_df = (gr.size().to_frame('count')
                  .join(gr[dummies.columns].sum()))
    print(count_df)
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
377,110
| 63,607,320
|
How to group rows based on first and last review date columns for each ID and forward fill N/A values using python?
|
<p>I have the following data frame:</p>
<p><a href="https://i.stack.imgur.com/sAjHy.png" rel="nofollow noreferrer">Data frame</a></p>
<p>Here both Hotel_id and Chef_Id are unique(both can act as a primary key).</p>
<p>I need to fill in the missing rows in the Month_Year column (add consecutive month-years between the first and last date for each id, and then forward-fill the other corresponding column values). I need something like this:</p>
<p><a href="https://i.stack.imgur.com/tB1NA.png" rel="nofollow noreferrer">Expected data frame</a></p>
<p>Here I have shown it for a few ids, but I need to apply this concept to every id in the data frame.</p>
<p>Please let me know the solution.</p>
<hr />
<p>@r-beginners, Please find the below data for your reference:</p>
<pre><code>Hotel_id Month_Year last_review_date
2400614 May-2015 March-2016
2400614 June-2015 March-2016
2400614 December-2015 March-2016
2400614 January-2016 March-2016
2400614 March-2016 March-2016
2400133 April-2016 May-2017
2400133 June-2016 May-2017
2400133 August-2016 May-2017
2400133 January-2017 May-2017
2400133 April-2017 May-2017
2400133 May-2017 May-2017
2400178 June-2015 April-2018
2400178 July-2016 April-2018
2400178 August-2016 April-2018
2400178 January-2017 April-2018
2400178 March-2017 April-2018
2400178 April-2018 April-2018
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.last.html" rel="nofollow noreferrer"><code>GroupBy.last</code></a>:</p>
<pre><code>df['last_review_date'] = df.groupby('Hotel_id')['Month_Year'].transform('last')
print (df)
Hotel_id Month_Year last_review_date
0 2400614 May-2015 March-2016
1 2400614 June-2015 March-2016
2 2400614 December-2015 March-2016
3 2400614 January-2016 March-2016
4 2400614 March-2016 March-2016
5 2400133 April-2016 May-2017
6 2400133 June-2016 May-2017
7 2400133 August-2016 May-2017
8 2400133 January-2017 May-2017
9 2400133 April-2017 May-2017
10 2400133 May-2017 May-2017
11 2400178 June-2015 April-2018
12 2400178 July-2016 April-2018
13 2400178 August-2016 April-2018
14 2400178 January-2017 April-2018
15 2400178 March-2017 April-2018
16 2400178 April-2018 April-2018
</code></pre>
<p>Another idea is to convert the values to datetimes and return the maximal values per group:</p>
<pre><code>df['Month_Year'] = pd.to_datetime(df['Month_Year'], format='%B-%Y')
df['last_review_date'] = df.groupby('Hotel_id')['Month_Year'].transform('max')
print (df)
Hotel_id Month_Year last_review_date
0 2400614 2015-05-01 2016-03-01
1 2400614 2015-06-01 2016-03-01
2 2400614 2015-12-01 2016-03-01
3 2400614 2016-01-01 2016-03-01
4 2400614 2016-03-01 2016-03-01
5 2400133 2016-04-01 2017-05-01
6 2400133 2016-06-01 2017-05-01
7 2400133 2016-08-01 2017-05-01
8 2400133 2017-01-01 2017-05-01
9 2400133 2017-04-01 2017-05-01
10 2400133 2017-05-01 2017-05-01
11 2400178 2015-06-01 2018-04-01
12 2400178 2016-07-01 2018-04-01
13 2400178 2016-08-01 2018-04-01
14 2400178 2017-01-01 2018-04-01
15 2400178 2017-03-01 2018-04-01
16 2400178 2018-04-01 2018-04-01
</code></pre>
<p>If you need the original format of the datetimes:</p>
<pre><code>dates = pd.to_datetime(df['Month_Year'], format='%B-%Y')
df['last_review_date'] = dates.groupby(df['Hotel_id']).transform('max').dt.strftime('%B-%Y')
print (df)
Hotel_id Month_Year last_review_date
0 2400614 May-2015 March-2016
1 2400614 June-2015 March-2016
2 2400614 December-2015 March-2016
3 2400614 January-2016 March-2016
4 2400614 March-2016 March-2016
5 2400133 April-2016 May-2017
6 2400133 June-2016 May-2017
7 2400133 August-2016 May-2017
8 2400133 January-2017 May-2017
9 2400133 April-2017 May-2017
10 2400133 May-2017 May-2017
11 2400178 June-2015 April-2018
12 2400178 July-2016 April-2018
13 2400178 August-2016 April-2018
14 2400178 January-2017 April-2018
15 2400178 March-2017 April-2018
16 2400178 April-2018 April-2018
</code></pre>
<p>EDIT:</p>
<p>If you need to add all months between the existing datetimes per group, use:</p>
<pre><code>df['Month_Year'] = pd.to_datetime(df['Month_Year'], format='%B-%Y')
df1 = (df.set_index('Month_Year')
.groupby('Hotel_id')
.resample('1M')
.ffill()
.reset_index(level=0, drop=True)
.reset_index())
</code></pre>
<hr />
<pre><code>print (df1)
Month_Year Hotel_id
0 2016-04-30 2400133
1 2016-05-31 2400133
2 2016-06-30 2400133
3 2016-07-31 2400133
4 2016-08-31 2400133
5 2016-09-30 2400133
6 2016-10-31 2400133
7 2016-11-30 2400133
8 2016-12-31 2400133
9 2017-01-31 2400133
10 2017-02-28 2400133
11 2017-03-31 2400133
12 2017-04-30 2400133
13 2017-05-31 2400133
14 2015-06-30 2400178
15 2015-07-31 2400178
16 2015-08-31 2400178
17 2015-09-30 2400178
18 2015-10-31 2400178
19 2015-11-30 2400178
20 2015-12-31 2400178
21 2016-01-31 2400178
22 2016-02-29 2400178
23 2016-03-31 2400178
24 2016-04-30 2400178
25 2016-05-31 2400178
26 2016-06-30 2400178
27 2016-07-31 2400178
28 2016-08-31 2400178
29 2016-09-30 2400178
30 2016-10-31 2400178
31 2016-11-30 2400178
32 2016-12-31 2400178
33 2017-01-31 2400178
34 2017-02-28 2400178
35 2017-03-31 2400178
36 2017-04-30 2400178
37 2017-05-31 2400178
38 2017-06-30 2400178
39 2017-07-31 2400178
40 2017-08-31 2400178
41 2017-09-30 2400178
42 2017-10-31 2400178
43 2017-11-30 2400178
44 2017-12-31 2400178
45 2018-01-31 2400178
46 2018-02-28 2400178
47 2018-03-31 2400178
48 2018-04-30 2400178
49 2015-05-31 2400614
50 2015-06-30 2400614
51 2015-07-31 2400614
52 2015-08-31 2400614
53 2015-09-30 2400614
54 2015-10-31 2400614
55 2015-11-30 2400614
56 2015-12-31 2400614
57 2016-01-31 2400614
58 2016-02-29 2400614
59 2016-03-31 2400614
</code></pre>
|
python|pandas
| 1
|
377,111
| 63,348,945
|
Write DF to existing Excel file with pandas and openxlpy
|
<p>I am trying to write a df to an existing worksheet. Initially the programme opens a dialog box to select a new sheet name to be added, then finds the path of the Excel file.</p>
<pre><code>#reload window to select new sheet name
root = tk.Tk()
root.geometry('500x200+350+150')
root.configure(bg='#072462')
#set buttons
label_date = Label(root, text='Input New Sheet Name', bg='#072462', fg='white')
ws_new_title = Entry(root, width=45, bg='#0A1944', fg='white', highlightthickness=0, borderwidth=0)
ws_new_title.insert(0, 'YYYY-MM-DD') #insert guide text in box
button_ws_name = Button(root, width=20, text='Select Export Excel File', command=root.quit)
button_ws_name.configure(fg='#C51E42', bg='#06122f', borderwidth=5)
label_date.pack()
ws_new_title.pack()
button_ws_name.pack(pady=30)
root.mainloop()
</code></pre>
<p>Code for dialog box to find excel path (working)</p>
<pre><code>#find and save to new excel sheet based on a input of name
root = tk.Tk()
root.title('Great Britain Basketball')
root.withdraw()
file_path_export = filedialog.askopenfilename(initialdir='/Desktop/', title='Select Player Tracking document')
root.update()
export_fn = file_path_export
</code></pre>
<p>Code to open the workbook and add a new sheet (I think this might be wrong):</p>
<pre><code>book = load_workbook(export_fn)
writer = pd.ExcelWriter(export_fn, engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
</code></pre>
<p>Finally, to write the df (full_append) to Excel:</p>
<pre><code>full_append.to_excel(writer, startrow=len(df)+1, index=False, header=False, sheet_name='Season Data') #working -write to exsisting sheet and append to bottom row
full_append.to_excel(writer, sheet_name=ws_new_title, index=False) #not working - write to same sheet but on seperate sheet taking name from input
writer.save()
</code></pre>
<p>So the main issue is that the programme is not adding a new sheet to the workbook using the prescribed sheet name (the ws_new_title variable) and writing the df to it. Thank you.</p>
|
<p>Managed to find the syntax error. After setting the entry button</p>
<pre><code>ws_new_title = Entry(root, width=45, bg='#0A1944', fg='white', highlightthickness=0, borderwidth=0)
</code></pre>
<p>I needed to read this entry as a string before using it as a new sheet name in the existing Excel workbook. To do this I used the <code>.get()</code> function:</p>
<pre><code>#change entry to a string
ws_name = ws_new_title.get()
</code></pre>
<p>Then finally append df to 'Season Data' and save to a new sheet based on date</p>
<pre><code>#write to excel workbook
book = load_workbook(export_fn)
writer = pd.ExcelWriter(export_fn, engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
full_data_table.to_excel(writer, sheet_name=ws_name, index=False) #working but need to update name based on input
full_data_table.to_excel(writer, startrow=len(df)+1, index=False, header=False, sheet_name='Season Data') #working -write to exsisting sheet and append to bottom row
writer.save()
writer.close()
</code></pre>
|
python|excel|pandas|dataframe
| 0
|
377,112
| 63,732,828
|
How do I load multiple already saved models
|
<p>I trained 5 different classifiers and saved them to disk, like so:</p>
<pre><code>for e in range(len(ensemble)):  # ensemble: list of 5 models
    save_file = "Model_" + str(e) + ".h5"
    ensemble[e].save(save_file)
    print("MODEL ", e, " SAVED!")
</code></pre>
<p>So that I have five models, one for each classifier:</p>
<pre><code>$ls
accuracy.txt dfObj.csv Model_0.h5 Model_1.h5 Model_2.h5
Model_3.h5 Model_4.h5 predicted_label.csv real_label.csv
</code></pre>
<p>I would like to evaluate these classifiers on a new unseen dataset, so I need to load them. So I tried this:</p>
<pre><code>import os
from keras.models import load_model

ensemble = []
for root, dirs, files in os.walk(path):
    for file in files:
        if file.endswith('.h5'):
            model = load_model(file)
            ensemble.append(model)
</code></pre>
<p>Which generates the error:</p>
<pre><code> raise IOError("SavedModel file does not exist at: %s/{%s|%s}" %
OSError: SavedModel file does not exist at: Model_4.h5/{saved_model.pbtxt|saved_model.pb}
</code></pre>
<p>Can someone point out how to load these models back for evaluation on the unseen dataset?</p>
|
<p>That's because you are loading from the wrong path.</p>
<p>The file names that you are iterating over from the <code>files</code> list are relative paths.</p>
<p>Here is an example:</p>
<pre><code>import os

model.save('/tmp/test.h5')

for root, dirs, files in os.walk('/tmp/'):
    print(root)                        # root path '/tmp/'
    for file in files:
        print(file)                    # relative path 'test.h5'
        keras.models.load_model(file)  # error: relative path
</code></pre>
<p>loading from absolute path works:</p>
<pre><code>keras.models.load_model('/tmp/test.h5')
</code></pre>
<p>This should be the correct way to load the absolute paths:</p>
<pre><code># !/usr/bin/python3
import os

for root, dirs, files in os.walk(your_path, topdown=False):
    for name in files:
        print(os.path.join(root, name))
    for name in dirs:
        print(os.path.join(root, name))
</code></pre>
<p>details of <code>os.walk</code> <a href="https://www.tutorialspoint.com/python3/os_walk.htm" rel="nofollow noreferrer">here</a></p>
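<p>Applied to your original loop, a minimal sketch would be (the only change is joining <code>root</code> with the file name):</p>
<pre><code>import os
from keras.models import load_model

ensemble = []
for root, dirs, files in os.walk(path):
    for file in files:
        if file.endswith('.h5'):
            # build the full path before loading
            ensemble.append(load_model(os.path.join(root, file)))
</code></pre>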
|
python|python-3.x|tensorflow|keras
| 1
|
377,113
| 63,541,279
|
Is there a way to loop through a dataframe and assign a value in a new column based on a list?
|
<p>I have one dataframe with 2 columns and I want to add a new column;</p>
<p>This new column should be updated based on a list that I have:</p>
<pre><code>list = [0,1,2,3,6,7,9,10]
</code></pre>
<p>The new column is only updated with the list value if the flag (in col2) is 1.
If the flag is 0, the row in the new column is not populated.</p>
<p>Current DF</p>
<pre><code>+-------------+---------+
| context | flag |
+-------------+---------+
| 0 | 1 |
| 0 | 1 |
| 0 | 0 |
| 2 | 1 |
| 2 | 1 |
| 2 | 1 |
| 2 | 1 |
| 2 | 0 |
| 4 | 1 |
| 4 | 1 |
| 4 | 0 |
+-------------+---------+
</code></pre>
<p>Desired DF</p>
<pre><code>+-------------+---------+-------------+
| context | flag | new_context |
+-------------+---------+-------------+
| 0 | 1 | 0 |
| 0 | 1 | 1 |
| 0 | 0 | |
| 2 | 1 | 2 |
| 2 | 1 | 3 |
| 2 | 1 | 6 |
| 2 | 1 | 7 |
| 2 | 0 | |
| 4 | 1 | 9 |
| 4 | 1 | 10 |
| 4 | 0 | |
+-------------+---------+-------------+
</code></pre>
<p>Right now, I loop through the indices of the list and assign the list value to the new_context column. Then I increment to go through the list.
The values are populated in the correct spots but they all say 0. I don't believe it's iterating through the list properly.</p>
<pre><code>list_length = len(list)

i = 0
for i in range(list_length):
    df["new_context"] = [list[i] if ele == 0 else "" for ele in df["flag"]]
    if df["flag"] == 0: i += 1
</code></pre>
<p>I have also tried to iterate through the entire dataframe, however I think it's just applying the same list value (first list value of 0)</p>
<pre><code>i = 0
for index, row in df.iterrows():
    df["new_context"] = [list[i] if ele == 0 else "" for ele in df["flag"]]
    if row['flag'] == 0: i += 1
</code></pre>
<p>How can I use the next list value to populate the new column where the flag=1?
It seems i+=1 is not working.</p>
|
<p>Let us try</p>
<pre><code>l = [0,1,2,3,6,7,9,10]
df['New']=''
df.loc[df.flag==1,'New']=l
df
Out[80]:
context flag New
0 0 1 0
1 0 1 1
2 0 0
3 2 1 2
4 2 1 3
5 2 1 6
6 2 1 7
7 2 0
8 4 1 9
9 4 1 10
10 4 0
</code></pre>
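<p>One assumption worth checking before that assignment: the list must have exactly as many elements as there are rows with <code>flag == 1</code>, otherwise the assignment will fail. A quick check:</p>
<pre><code>assert (df.flag == 1).sum() == len(l)
</code></pre>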
|
python|pandas|list|dataframe
| 3
|
377,114
| 63,584,149
|
Virtual Environment Settings for Rasa
|
<p>I have been building a chatbot with Rasa for a while. I created a venv for it, called Chatbot. I have tried many versions of different packages and frameworks, such as Python (3.8, 3.7, ...), TensorFlow (1.13, 1.15, 2.1, 2.2), conda (4.5.12, 4.8, ...) and pip (20.1.1, 20.2). However, I couldn't train and run my Rasa chatbot because of continuous errors, such as incompatibilities between those installations, despite searching through lots of references for those failures.</p>
<hr />
<p>Now I feel quite exhausted and discouraged. How could I figure out the best solution for this problem (which versions of those libraries work well together)? Could anyone who has succeeded in developing a Rasa chatbot help me?
Thanks a lot.</p>
|
<p>I use conda and it works well with rasa (conda-4.8.4, python-3.7.7, pip-20.2.2, rasa-1.10.10). Note: Rasa requires Python 3.6 or 3.7</p>
<pre><code>conda create --name rasa_test python=3.7
conda activate rasa_test
conda install -c anaconda pip
conda update --all
pip install rasa
</code></pre>
|
python|tensorflow|compatibility|rasa
| 1
|
377,115
| 63,652,270
|
Keep column index after pandas.melt?
|
<p>I have a data frame of values varying over time. For example, the number of cars I observe on a street:</p>
<pre><code>df = pd.DataFrame(
[{'Orange': 0, 'Green': 2, 'Blue': 1},
{'Orange': 2, 'Green': 4, 'Blue': 4},
{'Orange': 1, 'Green': 3, 'Blue': 10}
])
</code></pre>
<p>I want to create graphs that highlight the cars with the highest values. So I sort by maximum value.</p>
<pre><code>df.loc[:, df.max().sort_values(ascending=False).index]
</code></pre>
<pre><code> Blue Green Orange
0 1 2 0
1 4 4 2
2 10 3 1
</code></pre>
<p>I'm using seaborn to create these graphs. From what I understand I need to melt this representation to a tidy format.</p>
<pre><code>tidy = pd.melt(df.reset_index(), id_vars=['index'], var_name='color', value_name='number')
</code></pre>
<pre><code> index color number
0 0 Blue 1
1 1 Blue 4
2 2 Blue 10
3 0 Green 2
4 1 Green 4
5 2 Green 3
6 0 Orange 0
7 1 Orange 2
8 2 Orange 1
</code></pre>
<p>How can I add a column that represents the column order before the data frame was melted?</p>
<pre><code> index color number importance
0 0 Blue 1 0
1 1 Blue 4 0
2 2 Blue 10 0
3 0 Green 2 1
4 1 Green 4 1
5 2 Green 3 1
6 0 Orange 0 2
7 1 Orange 2 2
8 2 Orange 1 2
</code></pre>
<p>I see that I can still find the maximum columns after melting, but I'm not sure how to add that as a new column to the data frame:</p>
<pre><code>tidy.groupby('color').number.max().sort_values(ascending=False).index
</code></pre>
<pre><code>Index(['Blue', 'Green', 'Orange'], dtype='object', name='color')
</code></pre>
<p><em>EDIT</em>
To clarify, I'm plotting this on a line graph.</p>
<pre><code>axes = sns.relplot(data=tidy, x='index', y='number', hue='color', kind="line")
</code></pre>
<p>This is what the graph currently looks like:
<a href="https://i.stack.imgur.com/FrCFo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FrCFo.png" alt="car sample graph" /></a></p>
<p>I want to use the importance data to either color/bold the lines, or split the graph into multiple graphs, so it looks something like these:</p>
<p><img src="https://i.stack.imgur.com/m61pPb.png" alt="car sample graph" />
<img src="https://i.stack.imgur.com/0qSMDb.png" alt="car sample graph" /></p>
|
<p>You can make a <code>MultiIndex</code> on the columns, then stack both levels.</p>
<pre><code># Map color to importance
d = (df.max().rank(method='dense', ascending=False)-1).astype(int)
df.columns = pd.MultiIndex.from_arrays([df.columns, df.columns.map(d)],
names=['color', 'importance'])
#color Orange Green Blue
#importance 2 1 0
#0 0 2 1
#1 2 4 4
#2 1 3 10
df = df.rename_axis(index='index').stack([0,1]).to_frame('value').reset_index()
</code></pre>
<hr />
<pre><code> index color importance value
0 0 Blue 0 1.0
1 0 Green 1 2.0
2 0 Orange 2 0.0
3 1 Blue 0 4.0
4 1 Green 1 4.0
5 1 Orange 2 2.0
6 2 Blue 0 10.0
7 2 Green 1 3.0
8 2 Orange 2 1.0
</code></pre>
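<p>A lighter-weight sketch that skips the <code>MultiIndex</code> and simply maps each color to its rank (assuming <code>df</code> and <code>tidy</code> as built in the question, before the column changes above):</p>
<pre><code># rank colors by their maximum value, highest first
order = {color: rank for rank, color in enumerate(df.max().sort_values(ascending=False).index)}
tidy['importance'] = tidy['color'].map(order)
</code></pre>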
|
python|pandas|dataframe|seaborn
| 1
|
377,116
| 63,589,810
|
Create pandas dataframe based on row search substring value
|
<p>I am struggling to shape my Python data into a dataframe. Can anyone help me with the code that might get me there? It seems the easiest solution would be to create columns based on substrings of text from the rows but I cannot find documentation to get me the shape I am seeking from the rows.</p>
<p><a href="https://i.stack.imgur.com/NGBWC.png" rel="nofollow noreferrer">Original Dataframe - no column headers, data all in rows</a></p>
<p><a href="https://i.stack.imgur.com/b3uvU.png" rel="nofollow noreferrer">Desired Dataframe - bounding box rows to columns with uniform header, confidence to column</a></p>
<p>My response is structured as follows:</p>
<pre><code>{
"status": "succeeded",
"createdDateTime": "2020-08-28T19:21:29Z",
"lastUpdatedDateTime": "2020-08-28T19:21:31Z",
"analyzeResult": {
"version": "3.0.0",
"readResults": [{
"page": 1,
"angle": 0.1296,
"width": 1700,
"height": 2200,
"unit": "pixel",
"lines": [{
"boundingBox": [
182,
119,
383,
119,
383,
161,
182,
160
],
"text": "FORM 101",
"words": [{
"boundingBox": [
183,
120,
305,
120,
305,
161,
182,
161
],
"text": "FORM",
"confidence": 0.987
},
{
"boundingBox": [
318,
120,
381,
120,
382,
162,
318,
161
],
"text": "101",
"confidence": 0.987
}
]
},
{
"boundingBox": [
578,
129,
1121,
129,
1121,
163,
578,
162
],
"text": "The Commonwealth of Massachusetts",
"words": [{
"boundingBox": [
579,
129,
634,
129,
634,
162,
579,
161
],
"text": "The",
"confidence": 0.988
},
{
"boundingBox": [
641,
129,
868,
129,
866,
164,
640,
162
],
"text": "Commonwealth",
"confidence": 0.979
},
{
"boundingBox": [
874,
129,
902,
129,
900,
164,
872,
164
],
"text": "of",
"confidence": 0.988
},
{
"boundingBox": [
908,
129,
1120,
130,
1117,
163,
906,
164
],
"text": "Massachusetts",
"confidence": 0.977
}
]
},
{
"boundingBox": [
1341,
137,
1540,
138,
1540,
164,
1341,
163
],
"text": "DIA USE ONLY",
"words": [{
"boundingBox": [
1342,
138,
1392,
138,
1392,
164,
1341,
163
],
"text": "DIA",
"confidence": 0.983
},
{
"boundingBox": [
1397,
138,
1452,
139,
1452,
164,
1397,
164
],
"text": "USE",
"confidence": 0.983
},
{
"boundingBox": [
1457,
139,
1539,
138,
1540,
164,
1457,
164
],
"text": "ONLY",
"confidence": 0.986
}
]
},
{
"boundingBox": [
459,
169,
1235,
168,
1235,
202,
459,
203
],
"text": "Department of Industrial Accidents - Department 101",
"words": [{
"boundingBox": [
460,
170,
634,
170,
634,
203,
460,
204
],
"text": "Department",
"confidence": 0.981
},
{
"boundingBox": [
640,
170,
669,
170,
669,
203,
640,
203
],
"text": "of",
"confidence": 0.983
},
{
"boundingBox": [
676,
170,
821,
169,
821,
203,
676,
203
],
"text": "Industrial",
"confidence": 0.981
},
{
"boundingBox": [
828,
169,
967,
169,
966,
203,
828,
203
],
"text": "Accidents",
"confidence": 0.952
},
{
"boundingBox": [
973,
169,
993,
169,
993,
203,
973,
203
],
"text": "-",
"confidence": 0.983
},
{
"boundingBox": [
1000,
169,
1176,
169,
1176,
203,
999,
203
],
"text": "Department",
"confidence": 0.982
},
{
"boundingBox": [
1183,
169,
1236,
169,
1235,
203,
1182,
203
],
"text": "101",
"confidence": 0.987
}
]
},
{
"boundingBox": [
511,
205,
1189,
205,
1189,
233,
511,
234
],
"text": "1 Congress Street, Suite 100, Boston, Massachusetts 02114-2017",
"words": [{
"boundingBox": [
513,
206,
520,
206,
519,
233,
512,
233
],
"text": "1",
"confidence": 0.974
},
{
"boundingBox": [
525,
206,
625,
206,
624,
234,
524,
233
],
"text": "Congress",
"confidence": 0.981
},
{
"boundingBox": [
630,
206,
702,
206,
701,
234,
629,
234
],
"text": "Street,",
"confidence": 0.977
},
{
"boundingBox": [
707,
206,
763,
206,
762,
234,
706,
234
],
"text": "Suite",
"confidence": 0.983
},
{
"boundingBox": [
769,
206,
812,
206,
811,
234,
767,
234
],
"text": "100,",
"confidence": 0.983
},
{
"boundingBox": [
818,
206,
898,
206,
897,
234,
816,
234
],
"text": "Boston,",
"confidence": 0.983
},
{
"boundingBox": [
903,
206,
1059,
205,
1058,
234,
902,
234
],
"text": "Massachusetts",
"confidence": 0.975
},
{
"boundingBox": [
1064,
205,
1189,
205,
1187,
233,
1063,
234
],
"text": "02114-2017",
"confidence": 0.978
}
]
},
{
"boundingBox": [
422,
236,
1279,
237,
1279,
263,
422,
263
],
"text": "Info. Line 800-323-3249 ext. 470 in Mass. Outside Mass. - 617-727-4900 ext. 470",
"words": [{
"boundingBox": [
423,
237,
472,
237,
472,
263,
422,
263
],
"text": "Info.",
"confidence": 0.983
},
{
"boundingBox": [
477,
237,
526,
237,
526,
264,
477,
264
],
"text": "Line",
"confidence": 0.986
},
{
"boundingBox": [
531,
237,
674,
237,
674,
264,
531,
264
],
"text": "800-323-3249",
"confidence": 0.977
},
{
"boundingBox": [
679,
237,
718,
237,
718,
264,
679,
264
],
"text": "ext.",
"confidence": 0.982
},
{
"boundingBox": [
724,
237,
763,
237,
763,
264,
723,
264
],
"text": "470",
"confidence": 0.986
},
{
"boundingBox": [
768,
237,
790,
237,
790,
264,
768,
264
],
"text": "in",
"confidence": 0.987
},
{
"boundingBox": [
795,
237,
865,
237,
865,
264,
795,
264
],
"text": "Mass.",
"confidence": 0.983
},
{
"boundingBox": [
870,
237,
953,
237,
953,
264,
870,
264
],
"text": "Outside",
"confidence": 0.981
},
{
"boundingBox": [
958,
237,
1019,
237,
1020,
264,
958,
264
],
"text": "Mass.",
"confidence": 0.984
},
{
"boundingBox": [
1025,
237,
1036,
237,
1037,
264,
1025,
264
],
"text": "-",
"confidence": 0.983
},
{
"boundingBox": [
1042,
237,
1184,
237,
1185,
264,
1042,
264
],
"text": "617-727-4900",
"confidence": 0.975
},
{
"boundingBox": [
1190,
237,
1229,
238,
1229,
264,
1190,
264
],
"text": "ext.",
"confidence": 0.985
},
{
"boundingBox": [
1234,
238,
1278,
238,
1278,
264,
1234,
264
],
"text": "470",
"confidence": 0.983
}
]
},
{
"boundingBox": [
716,
264,
984,
266,
984,
293,
715,
292
],
"text": "http://www.mass.gov/dia",
"words": [{
"boundingBox": [
717,
265,
985,
267,
984,
294,
716,
293
],
"text": "http://www.mass.gov/dia",
"confidence": 0.952
}]
},
{
"boundingBox": [
398,
299,
1289,
299,
1289,
342,
398,
342
],
"text": "EMPLOYER'S FIRST REPORT OF INJURY",
"words": [{
"boundingBox": [
399,
300,
693,
300,
693,
341,
399,
343
],
"text": "EMPLOYER'S",
"confidence": 0.98
},
{
"boundingBox": [
702,
300,
836,
300,
836,
341,
702,
341
],
"text": "FIRST",
"confidence": 0.982
},
{
"boundingBox": [
845,
300,
1036,
300,
1036,
341,
844,
341
],
"text": "REPORT",
"confidence": 0.985
},
{
"boundingBox": [
1045,
300,
1105,
300,
1104,
342,
1044,
341
],
"text": "OF",
"confidence": 0.988
},
{
"boundingBox": [
1113,
300,
1288,
299,
1287,
343,
1113,
342
],
"text": "INJURY",
"confidence": 0.986
}
]
},
{
"boundingBox": [
691,
354,
1005,
355,
1005,
395,
691,
393
],
"text": "OR FATALITY",
"words": [{
"boundingBox": [
691,
354,
760,
355,
760,
395,
692,
394
],
"text": "OR",
"confidence": 0.988
},
{
"boundingBox": [
768,
355,
1005,
356,
1003,
395,
768,
395
],
"text": "FATALITY",
"confidence": 0.981
}
]
}
]
}]
}
}
</code></pre>
|
<p>Since you did not supply your data in a usable form or much explanation, the following mostly does what you want.</p>
<ol>
<li>comments explain approach</li>
<li>there is more work to be done on <em>linekey</em>; however, I cannot see the relationship between the actual data and the outcome you posted as an image</li>
</ol>
<pre><code>import re
import numpy as np
import pandas as pd
df = pd.DataFrame(
{0:["analyzeResult_readResults_0_lines_0_text","analyzeResult_readResults_0_lines_0_words_0_boundingBox_0","analyzeResult_readResults_0_lines_0_words_0_boundingBox_1","analyzeResult_readResults_0_lines_0_words_0_boundingBox_2","analyzeResult_readResults_0_lines_0_words_0_boundingBox_3","analyzeResult_readResults_0_lines_0_words_0_boundingBox_4","analyzeResult_readResults_0_lines_0_words_0_boundingBox_5","analyzeResult_readResults_0_lines_0_words_0_boundingBox_6","analyzeResult_readResults_0_lines_0_words_0_boundingBox_7","analyzeResult_readResults_0_lines_0_words_0_text","analyzeResult_readResults_0_lines_0_words_0_confidence","analyzeResult_readResults_0_lines_0_words_1_boundingBox_0","analyzeResult_readResults_0_lines_0_words_1_boundingBox_1","analyzeResult_readResults_0_lines_0_words_1_boundingBox_2","analyzeResult_readResults_0_lines_0_words_1_boundingBox_3","analyzeResult_readResults_0_lines_0_words_1_boundingBox_4","analyzeResult_readResults_0_lines_0_words_1_boundingBox_5","analyzeResult_readResults_0_lines_0_words_1_boundingBox_6","analyzeResult_readResults_0_lines_0_words_1_boundingBox_7","analyzeResult_readResults_0_lines_0_words_1_text","analyzeResult_readResults_0_lines_0_words_1_confidence","analyzeResult_readResults_0_lines_1_boundingBox_0","analyzeResult_readResults_0_lines_1_boundingBox_1","analyzeResult_readResults_0_lines_1_boundingBox_2","analyzeResult_readResults_0_lines_1_boundingBox_3","analyzeResult_readResults_0_lines_1_boundingBox_4","analyzeResult_readResults_0_lines_1_boundingBox_5","analyzeResult_readResults_0_lines_1_boundingBox_6","analyzeResult_readResults_0_lines_1_boundingBox_7"],
1:["FORM 101",183,120,305,120,305,161,182,161,"FORM",0.987,318,120,381,120,382,162,318,161,101,0.987,578,129,1121,129,1121,163,578,162],
},
index=[17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]
)
df = (
df
.rename(columns={0:"key",1:"val"})
.assign(
b=lambda x: x["key"].str.extract("(.*)_bounding"),
c=lambda x: x["key"].str.extract("(.*)_confidence"),
# linekey is everything before "_bounding" or "_confidence". pull the two together
linekey=lambda x: np.where(x["b"].isna(),
np.where(x["c"].isna(), x["key"], x["c"]),
x["b"]),
# column key is every thing after line key minus leading "_"
colkey=lambda x: x.apply(lambda r: r["key"].replace(r["linekey"], "").strip("_"), axis=1)
)
.assign(
# cleanup special case line keys...
colkey=lambda x: np.where(x["colkey"]=="", "Value", x["colkey"].replace("confidence","Confidence"))
)
# remove working columns
.drop(columns=["b","c","key"])
# mixed values and strings so use "first" and unstack to get to desired layout
.groupby(["linekey","colkey"]).agg({"val":"first"}).unstack()
)
print(df.to_string())
</code></pre>
<p><strong>output</strong></p>
<pre><code> val
colkey Confidence Value boundingBox_0 boundingBox_1 boundingBox_2 boundingBox_3 boundingBox_4 boundingBox_5 boundingBox_6 boundingBox_7
linekey
analyzeResult_readResults_0_lines_0_text NaN FORM 101 NaN NaN NaN NaN NaN NaN NaN NaN
analyzeResult_readResults_0_lines_0_words_0 0.987 NaN 183 120 305 120 305 161 182 161
analyzeResult_readResults_0_lines_0_words_0_text NaN FORM NaN NaN NaN NaN NaN NaN NaN NaN
analyzeResult_readResults_0_lines_0_words_1 0.987 NaN 318 120 381 120 382 162 318 161
analyzeResult_readResults_0_lines_0_words_1_text NaN 101 NaN NaN NaN NaN NaN NaN NaN NaN
analyzeResult_readResults_0_lines_1 NaN NaN 578 129 1121 129 1121 163 578 162
</code></pre>
|
python-3.x|pandas|dataframe
| 0
|
377,117
| 63,471,524
|
String Contains and String Does Not Contain in Same Command
|
<p>I have a dataset that looks like this. There are thousands of variations of the <code>symptom</code> column.</p>
<pre><code>ID Symptoms
1  neck infection, fever
2  tonsil peritonsillar
3  lymph laceration
4  tonsil sore, cough
5  Leg break
6  ear ache, headache
</code></pre>
<p>I want all IDs that HAD either "neck", "lymph" or "tonsil" as a symptom, and of these IDs, I only want to flag a 1 for a new variable <code>Lymph_Node_Neck</code> for those that DID NOT have the adjoining text "abscess", "laceration" or "peritonsillar".</p>
<p>So for example, if I were to run the correct code for this request:</p>
<pre><code>ID Symptoms Lymph_Node_Neck
1 neck infection, fever 1
2 tonsil peritonsillar 0
3 lymph laceration 0
4 tonsil sore, cough 1
5 Leg break 0
6 ear ache, headache 0
</code></pre>
<p>Here is the code I'm attempting to use to accomplish this analysis but when I run it I get an error.</p>
<pre><code>LABS_TAT.loc[:,"Lymph_Node_Neck"]=np.where((LABS_TAT["Symptoms"].str.contains("neck|lymph|tonsil", case=False)&(~LABS_TAT["Symptoms"].str.contains("abscess|laceration|peritonsillar", case=False)),1,0)
SyntaxError: unexpected EOF while parsing
</code></pre>
<p>Am I getting this error because I'm trying to combine a string contains with a string does not contain?</p>
|
<p><code>SyntaxError: unexpected EOF while parsing</code></p>
<p>This is a syntax error, meaning it's not even attempting to execute your code yet. EOF means End Of File. So it's reached the end of the file, but it was expecting to see some other syntax. In this case a closing parenthesis:</p>
<pre><code>LABS_TAT.loc[:,"Lymph_Node_Neck"]=np.where((LABS_TAT["Symptoms"].str.contains("neck|lymph|tonsil", case=False)&(~LABS_TAT["Symptoms"].str.contains("abscess|laceration|peritonsillar", case=False)),1,0))
</code></pre>
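<p>For readability (and to make unbalanced parentheses easier to spot), the same condition can be split into two boolean Series first, a small sketch:</p>
<pre><code>has_site = LABS_TAT["Symptoms"].str.contains("neck|lymph|tonsil", case=False, na=False)
has_excluded = LABS_TAT["Symptoms"].str.contains("abscess|laceration|peritonsillar", case=False, na=False)
LABS_TAT["Lymph_Node_Neck"] = np.where(has_site & ~has_excluded, 1, 0)
</code></pre>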
|
python|pandas
| 1
|
377,118
| 63,719,559
|
How to check for vertex matches and replace the repeat index on the edge?
|
<p>After developing code for Delaunay triangulation, I made a list of node coordinates and found that there are a number of duplicate nodes.</p>
<p>So I avoid duplicate nodes:</p>
<pre><code>distort = tri.Triangulation(mesh_x, mesh_y) #triangulation
#making list of nodes coordinates
data = np.array([mesh_x, mesh_y])
data = np.transpose(data)
# remove duplicated nodes (np.unique also sorts the rows)
unique_data = np.unique(data, axis = 0)
</code></pre>
<p>Now I have a problem with edges that connect the nodes. After removing the duplicate nodes, the edges are reallocated and instead of a smooth grid I got something like this:<a href="https://i.stack.imgur.com/sV6Id.jpg" rel="nofollow noreferrer">image</a></p>
<p>How can I check for vertex matches and replace the repeat index on the edge and get a smooth grid? (like this <a href="https://i.stack.imgur.com/ZW1yW.png" rel="nofollow noreferrer">image</a>)</p>
|
<p>Every time you remove a node, you need to subtract one from every triangle vertex index that is greater than the removed node's index.</p>
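<p>A sketch of one way to do that remapping in a single step with numpy (the variable names here are illustrative; for a matplotlib <code>Triangulation</code> the triangle indices are in <code>distort.triangles</code>): <code>np.unique</code> with <code>return_inverse=True</code> returns, for every original node, the row of the deduplicated array it collapsed into, so it both redirects duplicates to their kept counterpart and shifts the remaining indices.</p>
<pre><code>import numpy as np

# data: (n_nodes, 2) array of x, y coordinates; triangles: (n_tri, 3) indices into data
unique_data, inverse = np.unique(data, axis=0, return_inverse=True)
# inverse[i] is the row of unique_data that node i maps to,
# so the old triangle/edge indices translate directly:
new_triangles = inverse[triangles]
</code></pre>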
|
python|numpy|triangulation|vertex
| 0
|
377,119
| 63,539,890
|
convert a list in rows of dataframe in one column to simple string
|
<p>I have a dataframe which has list in one column that I want to convert into a simple string</p>
<pre><code> id data_words_nostops
26561364 [andrographolide, major, labdane, diterpenoid]
26561979 [dgat, plays, critical, role, hepatic, triglyc]
26562217 [despite, success, imatinib, inhibiting, bcr]
</code></pre>
<p>DESIRED OUTPUT</p>
<pre><code>id data_words_nostops
26561364 andrographolide, major, labdane, diterpenoid
26561979 dgat, plays, critical, role, hepatic, triglyc
26562217 despite, success, imatinib, inhibiting, bcr
</code></pre>
|
<p>Try this :</p>
<pre class="lang-py prettyprint-override"><code>df['data_words_nostops'] = df['data_words_nostops'].apply(lambda row : ','.join(row))
</code></pre>
<p>Complete code :</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
l1 = ['26561364', '26561979', '26562217']
l2 = [['andrographolide', 'major', 'labdane', 'diterpenoid'],['dgat', 'plays', 'critical', 'role', 'hepatic', 'triglyc'],['despite', 'success', 'imatinib', 'inhibiting', 'bcr']]
df = pd.DataFrame(list(zip(l1, l2)),
columns =['id', 'data_words_nostops'])
df['data_words_nostops'] = df['data_words_nostops'].apply(lambda row : ','.join(row))
</code></pre>
<p><strong>Output :</strong></p>
<pre><code>id data_words_nostops
0 26561364 andrographolide,major,labdane,diterpenoid
1 26561979 dgat,plays,critical,role,hepatic,triglyc
2 26562217 despite,success,imatinib,inhibiting,bcr
</code></pre>
|
python|pandas
| 3
|
377,120
| 63,401,585
|
torch.jit.script(module) vs @torch.jit.script decorator
|
<p>Why does adding the decorator "@torch.jit.script" result in an error, while I can call torch.jit.script on that module? E.g. this fails:</p>
<pre class="lang-py prettyprint-override"><code>import torch
@torch.jit.script
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.script(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
</code></pre>
<pre><code>"C:\Users\Administrator\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torch\jit\__init__.py", line 1262, in script
raise RuntimeError("Type '{}' cannot be compiled since it inherits"
RuntimeError: Type '<class '__main__.MyCell'>' cannot be compiled since it inherits from nn.Module, pass an instance instead
</code></pre>
<p>While the following code works well:</p>
<pre class="lang-py prettyprint-override"><code>class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.script(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
</code></pre>
<p>This question is also featured on <a href="https://discuss.pytorch.org/t/torch-jit-script-module-vs-torch-jit-script-decorator/92735" rel="nofollow noreferrer">PyTorch forums</a>.</p>
|
<p>The reason for your error is <a href="https://pytorch.org/docs/master/jit_language_reference.html#id2" rel="nofollow noreferrer">here</a>, this bullet point precisely:</p>
<blockquote>
<p>No support for inheritance or any other polymorphism strategy, except
for inheriting from object to specify a new-style class.</p>
</blockquote>
<p>Also, as stated at the top:</p>
<blockquote>
<p>TorchScript class support is experimental. Currently it is best suited
for simple record-like types (think a NamedTuple with methods
attached).</p>
</blockquote>
<p>Currently, its purpose is simple <strong>Python</strong> classes (see the other points in the link I've provided) and functions; see that link for more information.</p>
<p>You can also check <a href="https://github.com/pytorch/pytorch/blob/master/torch/jit/_script.py#L735" rel="nofollow noreferrer"><code>torch.jit.script</code> source code</a> to get a better grasp of how it works.</p>
<p>From what it seems, when you pass an instance, all <code>attributes</code> which should be preserved are recursively parsed (<a href="https://github.com/pytorch/pytorch/blob/master/torch/jit/_script.py#L887" rel="nofollow noreferrer">source</a>). You can follow this function along (quite commented, but too long for an answer, see <a href="https://github.com/pytorch/pytorch/blob/master/torch/jit/_recursive.py#L294" rel="nofollow noreferrer">here</a>), though exact reason why this is the case (and why it was designed this way) is beyond my knowledge (so hopefully someone with expertise in <code>torch.jit</code>'s inner workings will speak more about it).</p>
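<p>As a minimal sketch of what the decorator itself is currently meant for — standalone functions (and simple record-like classes), while <code>nn.Module</code> subclasses are scripted by passing an instance, as in the question's working snippet:</p>
<pre><code>import torch

@torch.jit.script
def fused_step(x, h):
    # scripting a free function with the decorator is supported
    return torch.tanh(x + h)

scripted_cell = torch.jit.script(MyCell())  # modules: pass an instance instead
</code></pre>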
|
pytorch|torchscript
| 2
|
377,121
| 63,402,635
|
Finding intersection of Pandas dataframes within range
|
<p>A project I'm working on requires merging two dataframes together along some line with a delta. Basically, I need to take a dataframe with a non-linear 2D line and find the data points within the other that fall along that line, plus or minus a delta.</p>
<h3>Dataframe 1 (Line that we want to find points along)</h3>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.read_csv('path/to/df1/data.csv')
df1
</code></pre>
<pre><code> x y
0 0.23 0.54
1 0.27 0.95
2 0.78 1.59
...
97 0.12 2.66
98 1.74 0.43
99 0.93 4.23
</code></pre>
<h3>Dataframe 2 (Dataframe we want to filter, leaving points within some delta)</h3>
<pre class="lang-py prettyprint-override"><code>df2 = pd.read_csv('path/to/df2/data.csv')
df2
</code></pre>
<pre><code> x y
0 0.21 0.51
1 0.27 0.35
2 3.45 1.19
...
971 0.94 2.60
982 1.01 1.33
993 0.43 2.43
</code></pre>
<h3>Finding the coarse line</h3>
<pre><code>DELTA = 0.03
coarse_line = find_coarse_line(df1, df2, DELTA)
coarse_line
</code></pre>
<pre><code> x y
0 0.21 0.51
1 0.09 2.68
2 0.23 0.49
...
345 1.71 0.45
346 0.96 0.40
347 0.81 1.62
</code></pre>
<p>I've tried using <code>df.loc((df['x'] >= BOTLEFT_X) & (df['x'] >= BOTLEFT_Y) & (df['x'] <= TOPRIGHT_X) & (df['y'] <= TOPRIGHT_Y))</code> among many, many other Pandas functions and whatnot but have yet to find anything that works, much less anything efficient (with datasets >2 million points).</p>
|
<p>I have taken an approach using <code>merge()</code>, where x and y are placed into bins derived from the <em>good</em> curve <code>df1</code>.</p>
<ol>
<li>generated a uniform line, <em>y=x^2</em></li>
<li>randomised it a small amount to generate <code>df1</code></li>
<li>randomised it a large amount to generate <code>df2</code> also generated three times as many co-ordinates</li>
<li>take <code>df1</code> as reference for good ranges of x and y co-ordinates to split into bins using <code>pd.cut()</code>. bins being 1/3 of total number of co-ordinates is working well</li>
<li>standardised these back into arrays for use again in <code>pd.cut()</code> when merging</li>
</ol>
<p>You can see from scatter plots, it's doing a pretty reasonable job of finding and keeping points close to curve in <code>df2</code></p>
<pre><code>import pandas as pd
import random
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,3, sharey=True, sharex=False, figsize=[20,5])
linex = [i for i in range(100)]
liney = [i**2 for i in linex]
df1 = pd.DataFrame({"x":[l*random.uniform(0.95, 1.05) for l in linex],
"y":[l*random.uniform(0.95, 1.05) for l in liney]})
df1.plot("x","y", kind="scatter", ax=ax[0])
df2 = pd.DataFrame({"x":[l*random.uniform(0.5, 1.5) for l in linex*3],
"y":[l*random.uniform(0.5, 1.5) for l in liney*3]})
df2.plot("x","y", kind="scatter", ax=ax[1])
# use bins on x and y axis - both need to be within range to find
bincount = len(df1)//3
xc = pd.cut(df1["x"], bincount).unique()
yc = pd.cut(df1["y"], bincount).unique()
xc = np.sort([intv.left for intv in xc] + [xc[-1].right])
yc = np.sort([intv.left for intv in yc] + [yc[-1].right])
dfm = (df2.assign(
xb=pd.cut(df2["x"],xc, duplicates="drop"),
yb=pd.cut(df2["y"],yc, duplicates="drop"),
).query("~(xb.isna() | yb.isna())") # exclude rows where df2 falls outside of range of df1
.merge(df1.assign(
xb=pd.cut(df1["x"],xc, duplicates="drop"),
yb=pd.cut(df1["y"],yc, duplicates="drop"),
),
on=["xb","yb"],
how="inner",
suffixes=("_l","_r")
)
)
dfm.plot("x_l", "y_l", kind="scatter", ax=ax[2])
print(f"graph 2 pairs:{len(df2)} graph 3 pairs:{len(dfm)}")
</code></pre>
<p><a href="https://i.stack.imgur.com/ywa11.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ywa11.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|filter
| 1
|
377,122
| 63,569,448
|
Dataframe merge creates multiple columns
|
<pre><code>import numpy as np
import pandas as pd
np.random.seed(0)
left = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)})
right = pd.DataFrame({'key': [ 'E', 'F', 'G', 'H'], 'value': np.random.randn(4)})
df = left.merge(right, on='key', how='outer', indicator=True)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/9oooZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9oooZ.png" alt="enter image description here" /></a></p>
<p>This always creates value_X and value_y column, is it possible to have only one value column with merge?</p>
|
<p>I think you want something like this; otherwise please share how you want your output to look:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(0)
left = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)})
right = pd.DataFrame({'key': [ 'E', 'F', 'G', 'H'], 'value': np.random.randn(4)})
df = pd.concat([left,right])
df
</code></pre>
|
pandas|dataframe
| 0
|
377,123
| 63,330,365
|
How does the pandas groupby() function make a difference in this code?
|
<pre><code> import pandas as pd
data = {'Company':['GOOG','MSFT','FB','GOOG','MSFT','FB'],
'Dates':["1970-01-01 01:00:00","1970-01-01 01:00:02","1970-01-01 01:00:03","1970-01-01 01:00:04","1970-01-01 01:00:05","1970-01-01 01:00:06"]}
df = pd.DataFrame(data)
 df["Dates"]=pd.to_datetime(df["Dates"])
 df.Dates.diff().dt.total_seconds()/3600
</code></pre>
<p>This code gives me output</p>
<pre><code> 0 NaN
1 0.000556
2 0.000278
3 0.000278
4 0.000278
5 0.000278
Name: Dates, dtype: float64
</code></pre>
<p>and</p>
<pre><code>df.groupby("Company").Dates.diff().dt.total_seconds()/3600
</code></pre>
<p>this gives me output</p>
<pre><code> 0 NaN
1 NaN
2 NaN
3 0.001111
4 0.000833
5 0.000833
Name: Dates, dtype: float64
</code></pre>
<p>Can you explain what groupby function does here?</p>
|
<p>The reason you get three <code>NaN</code> values is that there are three different company names in <code>df</code>, so when we do <code>groupby</code>, it splits the dataframe into 3 groups, applies <code>diff</code> to each of them, and then <code>concat</code>s the results back together.</p>
<p>Detail :</p>
<pre><code>df["Dates"] = pd.to_datetime(df["Dates"])
...:
for x , y in df.groupby('Company'):
...: print(y)
...: print(y['Dates'].diff().dt.total_seconds())
...:
Company Dates
2 FB 1970-01-01 01:00:03
5 FB 1970-01-01 01:00:06
2 NaN
5 3.0
Name: Dates, dtype: float64
Company Dates
0 GOOG 1970-01-01 01:00:00
3 GOOG 1970-01-01 01:00:04
0 NaN
3 4.0
Name: Dates, dtype: float64
Company Dates
1 MSFT 1970-01-01 01:00:02
4 MSFT 1970-01-01 01:00:05
1 NaN
4 3.0
Name: Dates, dtype: float64
</code></pre>
|
python|python-3.x|pandas|numpy|data-science
| 1
|
377,124
| 63,629,615
|
Numpy.array indexing
|
<pre><code>import numpy as np
arr = np.array([[0, 1, 0],
[1, 0, 0],
[1, 0, 0]])
mask = arr
print('boolean mask is:')
print(mask)
print('arr[mask] is:')
print(arr[mask])
</code></pre>
<p>Result:</p>
<pre><code>boolean mask is:
[[0 1 0]
[1 0 0]
[1 0 0]]
arr[mask] is:
[[[0 1 0]
[1 0 0]
[0 1 0]]
[[1 0 0]
[0 1 0]
[0 1 0]]
[[1 0 0]
[0 1 0]
[0 1 0]]]
</code></pre>
<p>I know how indexing works when the mask is a boolean 2-D array, but I am confused about why the result here is 3-D.
Can anyone explain it?</p>
|
<pre><code>import numpy as np
l = [[0,1,2],[3,5,4],[7,8,9]]
arr = np.array(l)
mask = arr[:,:] > 5
print(mask) # shows boolean results
print(mask.sum()) # shows how many items are > 5
print(arr[:,1]) # slicing
print(arr[:,2]) # slicing
print(arr[:, 0:3]) # slicing
</code></pre>
<p>output</p>
<pre><code>[[False False False]
[False False False]
[ True True True]]
3
[1 5 8]
[2 4 9]
[[0 1 2]
[3 5 4]
[7 8 9]]
</code></pre>
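<p>Regarding the original example: <code>mask</code> there is an <em>integer</em> array of 0s and 1s, not a boolean one, so <code>arr[mask]</code> performs fancy (integer) indexing — each element of <code>mask</code> picks row 0 or row 1 of <code>arr</code>, which is why the result has shape (3, 3, 3). Casting the mask to <code>bool</code> gives the element-wise selection instead — a small sketch:</p>
<pre><code>import numpy as np

arr = np.array([[0, 1, 0],
                [1, 0, 0],
                [1, 0, 0]])
print(arr[arr])               # int mask -> fancy indexing along axis 0, shape (3, 3, 3)
print(arr[arr.astype(bool)])  # boolean mask -> 1-D array of the selected elements: [1 1 1]
</code></pre>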
|
python|numpy
| 1
|
377,125
| 63,650,263
|
Mapping value from dictionary to dataframe
|
<p>I have this <code>df</code>:</p>
<pre><code> opponent pontos_num
0 262 29.1
1 265 28.8
2 284 21.4
3 282 16.3
4 266 14.8
5 292 12.4
6 373 9.6
7 354 6.8
8 277 6.3
9 294 5.5
10 276 3.9
11 356 3.5
12 280 3.3
13 263 0.9
14 293 0.2
15 264 0.2
16 285 -1.6
17 290 -5.3
18 267 -6.2
19 275 -6.5
</code></pre>
<p>And this dict:</p>
<pre><code>teams_dict = {'team1':262, 'team2': 263, 'team3': 264, 'team4':265, 'team5':266,
'team6':267, 'team7':275, 'team8': 276, 'team9': 277, 'team10': 280, 'team11': 282,
'team12':284, 'team13':285, 'team14':290, 'team15':292, 'team16':293, 'team17':294,
'team18':354, 'team19':356, 'team20':373}
</code></pre>
<hr />
<p>Now I'm trying to bring team names into my <code>df</code>. I'm trying:</p>
<pre><code> df['opponent_name'] = df['opponent'].map(lambda x: teams_dict[x])
</code></pre>
<p>But I'm getting:</p>
<pre><code>KeyError: 262
</code></pre>
<hr />
<p>What am I missing?</p>
|
<p>As sushanth already commented, you need to swap <code>teams_dict</code> to have <code>value: key</code> pairs instead.</p>
<p>Then <code>.replace</code> does the conversion in one step:</p>
<pre><code>df['opponent_name'] = df.opponent.replace(
{v: k for k, v in teams_dict.items()})
</code></pre>
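<p>For completeness, <code>.map</code> also works here once the dict is inverted, because every opponent code appears in <code>teams_dict</code>; the practical difference is that <code>.map</code> turns any code missing from the dict into <code>NaN</code>, while <code>.replace</code> leaves it unchanged:</p>
<pre><code>inv = {v: k for k, v in teams_dict.items()}
df['opponent_name'] = df['opponent'].map(inv)  # unmapped codes would become NaN
</code></pre>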
|
python|pandas
| 0
|
377,126
| 63,426,561
|
How to get a link in a dataframe
|
<p>with attached screenshot my question can be explained quite well.
<a href="https://i.stack.imgur.com/tqpry.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tqpry.png" alt="Screenshot" /></a></p>
<p>I am <strong>scraping</strong> the following page: <a href="https://www.transfermarkt.de/tsg-1899-hoffenheim/kader/verein/533/saison_id/2019/plus/1" rel="nofollow noreferrer">https://www.transfermarkt.de/tsg-1899-hoffenheim/kader/verein/533/saison_id/2019/plus/1</a></p>
<p>Table 1 lists the team. In the second column is the player. I need the link as you can see in the screenshot on the bottom left.</p>
<p>When I look into the data frame normally, I only get the following in this cell: "Oliver BaumannO. BaumannTorwart" But I am looking for "https://www.transfermarkt.de/oliver-baumann/profil/spieler/55089".</p>
<p>You guys got any ideas?</p>
<p>Code:</p>
<pre><code>import pandas as pd
import requests
# Global variables
HEADS = {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'}
dateiname = 'test.xlsx'
# Global variables
def get_response(url):
# URL-Anfrage durchfuehren
try:
response = requests.get(url, headers=HEADS)
except AttributeError:
print('AttributeError')
return response
def scraping_kader(response):
try:
dfs = pd.read_html(response.text)
#dfs = dfs.to_html(escape=False)
print(dfs[1])
print(dfs[1].iloc[0, :])
except ImportError:
print(' ImportError')
except ValueError:
print(' ValueError')
except AttributeError:
print(' AttributeError')
response = get_response('https://www.transfermarkt.de/tsg-1899-hoffenheim/kader/verein/533/saison_id/2019/plus/1')
scraping_kader(response)
</code></pre>
|
<p>As far as I know, <code>read_html</code> only extracts the text from a table; it ignores links, hidden elements, attributes, etc.</p>
<p>You need a module like <code>BeautifulSoup</code> or <code>lxml</code> to work with the full HTML and pull out the needed information manually.</p>
<pre><code> soup = BeautifulSoup(response.text, 'html.parser')
all_tooltips = soup.find_all('td', class_='hauptlink')
for item in all_tooltips:
item = item.find('a', class_='spielprofil_tooltip')
if item:
print(item['href']) #, item.text)
</code></pre>
<p>This example gets only the links, but you can extract other elements in the same way.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
#import pandas as pd
HEADS = {
'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
}
def get_response(url):
try:
response = requests.get(url, headers=HEADS)
except AttributeError:
print('AttributeError')
return response
def scraping_kader(response):
try:
soup = BeautifulSoup(response.text, 'html.parser')
all_tooltips = soup.find_all('td', class_='hauptlink')
for item in all_tooltips:
item = item.find('a', class_='spielprofil_tooltip')
if item:
print(item['href']) #, item.text)
#print(dfs[1])
#print(dfs[1].iloc[0, :])
except ImportError:
print(' ImportError')
except ValueError:
print(' ValueError')
except AttributeError:
print(' AttributeError')
# --- main --
response = get_response('https://www.transfermarkt.de/tsg-1899-hoffenheim/kader/verein/533/saison_id/2019/plus/1')
scraping_kader(response)
</code></pre>
<p>Result</p>
<pre><code>/oliver-baumann/profil/spieler/55089
/philipp-pentke/profil/spieler/8246
/luca-philipp/profil/spieler/432671
/stefan-posch/profil/spieler/223974
/kevin-vogt/profil/spieler/84435
/benjamin-hubner/profil/spieler/52348
/kevin-akpoguma/profil/spieler/160241
/kasim-adams/profil/spieler/263801
/ermin-bicakcic/profil/spieler/51676
/havard-nordtveit/profil/spieler/42234
/melayro-bogarde/profil/spieler/476915
/konstantinos-stafylidis/profil/spieler/148967
/pavel-kaderabek/profil/spieler/143798
/joshua-brenet/profil/spieler/207006
/florian-grillitsch/profil/spieler/195736
/diadie-samassekou/profil/spieler/315604
/dennis-geiger/profil/spieler/251309
/ilay-elmkies/profil/spieler/443752
/christoph-baumgartner/profil/spieler/324278
/mijat-gacinovic/profil/spieler/215864
/jacob-bruun-larsen/profil/spieler/293281
/sargis-adamyan/profil/spieler/125614
/felipe-pires/profil/spieler/327911
/robert-skov/profil/spieler/270393
/ihlas-bebou/profil/spieler/237164
/andrej-kramaric/profil/spieler/46580
/ishak-belfodil/profil/spieler/111039
/munas-dabbur/profil/spieler/145866
/klauss/profil/spieler/498862
/maximilian-beier/profil/spieler/578392
</code></pre>
|
python|pandas|hyperlink
| 1
|
377,127
| 63,580,402
|
Create cumulative list pandas
|
<p>I have this DataFrame</p>
<pre><code>lst = [[1,0],[None,1],[2,0],[2,0],[None,1],[None,1],[3,0],[None,1] ]
df1 = pd.DataFrame(lst,columns = ['id','is_cumulative'])
</code></pre>
<p>output</p>
<pre><code> id is_cumulative
0 1.0 0
1 NaN 1
2 2.0 0
3 2.0 0
4 NaN 1
5 NaN 1
6 3.0 0
7 NaN 1
</code></pre>
<p>I want replace the NaN values to cumulative list for <code>id</code> column</p>
<pre><code> id is_cumulative
0 1 0
1 [1] 1
2 2 0
3 2 0
4 [1, 2] 1
5 [1, 2] 1
6 3 0
7 [1, 2, 3] 1
</code></pre>
<p>Some explanation: wherever the <code>is_cumulative</code> value is 1, the <code>id</code> column is NaN, and it needs to be replaced with the cumulative list of ids.
The data alternates between rows with a new id and rows that should hold the cumulative list of all ids seen up to that row.</p>
|
<p>Let us work only with the <code>id</code> column: <code>dropna</code>, drop the duplicates, build the cumulative string with <code>cumsum</code>, then <code>reindex</code>, forward-fill and <code>fillna</code>:</p>
<pre><code>s = (df1.id.dropna().drop_duplicates().astype(str)+',').cumsum().str[:-1].str.split(',').reindex(df1.index).ffill()
df1.id = df1.id.fillna(s)
df1
Out[425]:
id is_cumulative
0 1 0
1 [1.0] 1
2 2 0
3 2 0
4 [1.0, 2.0] 1
5 [1.0, 2.0] 1
6 3 0
7 [1.0, 2.0, 3.0] 1
</code></pre>
|
python|pandas
| 2
|
377,128
| 63,415,624
|
Normalising a 2D histogram
|
<p>I have a 2D histogram h1 with var1 on the x axis and var2 on the y axis, which I've plotted from a <code>dataframe</code>. I have normalised it as I want in c++ but now need to do the same in python and am struggling with how to get and set bin content.</p>
<p>The idea is to remove the effect of having more events in one part of the distribution than in another and only leave the correlation between <code>var1</code> and <code>var2</code>.</p>
<p>Working Code in c++:</p>
<pre><code>double norm = h1->GetEntries()/h1->GetNbinsX();
int nbins = h1->GetNbinsX();
for(int i = 1; i< nbins+1; i++)
{
double nevents = 0.;
for(int iy = 1; iy< h1->GetNbinsY()+1; iy++)
{
float bincont = h1->GetBinContent(i,iy);
nevents+=bincont;
}
for(int iy = 1; iy< h1->GetNbinsY()+1; iy++)
{
float bincont = h1->GetBinContent(i,iy);
float fact = norm/nevents;
float value = bincont*fact;
h1->SetBinContent(i,iy,value);
}
}
</code></pre>
<p>Attempt for code in python:</p>
<pre><code>plt.hist2d(var1, var2, bins=(11100, 1030), cmap=plt.cm.BuPu)
norm = 10
for i in var1:
nevents = 0.
for j in var2:
plt.GetBinContent(i,j)
nevents+=bincont
for j in var2:
plt.GetBinContent(i,j)
fact = norm/nevents
value = bincont*fact
plt.SetBinContent(i, j, value)
</code></pre>
<p>Edit after help from @JohanC:</p>
<p>Problem has been resolved. Make sure you don't have nan-s when normalising, because dealing with them is always a pain.</p>
|
<p>To manipulate the contents of the bins, you could first calculate them, change them and only then draw the plot.</p>
<p><code>plt.hist2d()</code> returns the bin contents (a 2D matrix) together with the bin edges in both directions. To get the same information without plotting, <code>np.histogram2d()</code> returns exactly the same values. Afterwards, the result can be plotted via <code>plt.pcolormesh()</code>.</p>
<p>For some reason, the returned matrix is transposed. So, the first step is to transpose it again.</p>
<p>To calculate sums and do multiplications and divisions on 2D arrays, numpy has some powerful array and <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting</a> operations. The double loops in C++ are just one operation in numpy: <code>hist *= norm / hist.sum(axis=0, keepdims=True)</code>. As the denominator can be zero, the warning can be suppressed (the result will be <code>NaN</code>s and <code>Inf</code>s that are ignored for plotting).</p>
<p>Here is some demo code. Note that using <code>bins=(11100, 1030)</code> is extremely large. The code below uses much smaller values.</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np
N = 1000000
var1 = np.concatenate([np.random.uniform(0, 20, size=9 * N // 10), np.random.normal(10, 1, size=N // 10)])
var2 = var1 * 0.1 + np.random.normal(size=N)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 4))
norm = 10
binsX = 200
binsY = 100
ax1.hist2d(var1, var2, bins=(binsX, binsY), cmap='BuPu')
ax1.set_title('regular 2d histogram')
hist, xedges, yedges = np.histogram2d(var1, var2, bins=(binsX, binsY))
hist = hist.T
with np.errstate(divide='ignore', invalid='ignore'): # suppress division by zero warnings
hist *= norm / hist.sum(axis=0, keepdims=True)
ax2.pcolormesh(xedges, yedges, hist, cmap='BuPu')
ax2.set_title('normalized columns')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/1saNj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1saNj.png" alt="example plot" /></a></p>
<p>PS: About <code>hist *= norm / hist.sum(axis=0, keepdims=True)</code>:</p>
<ul>
<li><code>hist.sum(axis=0, keepdims=True)</code> creates a new matrix (name it <code>s</code>) where for each <code>h[i, j]</code> the elements are replaced by the sum over all <code>i</code>, so <code>s[i, j] = sum([h[k,j] for k in range(0, N)])</code>. Without <code>keepdims=True</code>, a 1D array would be created with only the sums.</li>
<li><code>hist *= norm / s</code> creates a loop over all <code>i,j</code> as in <code>h[i,j]=h[i,j]*norm/s[i,j]</code>. Division by zero creates <code>NaN</code> when dividing zero by zero, and <code>inf</code> when dividing another number by zero. These values are ignored by <code>pcolormesh</code>.</li>
</ul>
<p>Optionally you could execute <a href="https://numpy.org/doc/stable/reference/generated/numpy.nan_to_num.html" rel="nofollow noreferrer"><code>nan_to_num()</code></a>:</p>
<pre><code>hist = np.nan_to_num(hist, nan=0, posinf=0, neginf=0)
</code></pre>
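<p>To make the <code>keepdims</code> broadcasting described above concrete, a tiny worked example (the numbers are arbitrary):</p>
<pre><code>h = np.array([[1., 3.],
              [1., 1.]])
s = h.sum(axis=0, keepdims=True)  # [[2., 4.]], shape (1, 2): one sum per column
print(h * 10 / s)                 # each column rescaled so it sums to 10:
# [[5.   7.5]
#  [5.   2.5]]
</code></pre>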
|
python|c++|pandas|dataframe|matplotlib
| 4
|
377,129
| 63,351,521
|
How to group and get the three most frequent values?
|
<p>I want to group by id and get the three most frequent cities. For example, I have this original dataframe:</p>
<pre><code> ID City
1 London
1 London
1 New York
1 London
1 New York
1 Berlin
2 Shanghai
2 Shanghai
</code></pre>
<p>and result i want is like this:</p>
<pre><code>ID first_frequent_city second_frequent_city third_frequent_city
1 London New York Berlin
2 Shanghai NaN NaN
</code></pre>
|
<p>First step is use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.SeriesGroupBy.value_counts.html" rel="nofollow noreferrer"><code>SeriesGroupBy.value_counts</code></a> for count values of <code>City</code> per <code>ID</code>, advantage is already values are sorted, then get counter by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a>, filter first <code>3</code> values by <code>loc</code>, pivoting by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>DataFrame.pivot</code></a>, change columns names and last convert <code>ID</code> to column by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p>
<pre><code>df = (df.groupby('ID')['City'].value_counts()
.groupby(level=0).cumcount()
.loc[lambda x: x < 3]
.reset_index(name='c')
.pivot('ID','c','City')
.rename(columns={0:'first_', 1:'second_', 2:'third_'})
.add_suffix('frequent_city')
.rename_axis(None, axis=1)
.reset_index())
print (df)
ID first_frequent_city second_frequent_city third_frequent_city
0 1 London New York Berlin
1 2 Shanghai NaN NaN
</code></pre>
|
python|pandas
| 4
|
377,130
| 63,595,213
|
Hadamard product for each unique pair of columns in numpy array
|
<p>Using Python (3.7.7) and numpy (1.17.4), I am working with medium sized 2d numpy arrays (from 5000x80 up to 200,000x120). For a given array, I want to calculate the Hadamard product between all possible unique pairs of column-vectors of that array.</p>
<p>I have:</p>
<pre><code> A A
[a,b,c,d] [a,b,c,d]
[1,2,3,4] [1,2,3,4]
[4,5,6,7] * [4,5,6,7]
[7,8,9,1] [7,8,9,1]
</code></pre>
<p>and I want to get:</p>
<pre><code>[a*b, ac, ad, bc, bd, cd]
[ 2., 3., 4., 6., 8., 12.]
[20., 24., 28., 30., 35., 42.]
[56., 63., 7., 72., 8., 9.]
</code></pre>
<p>I already have a solution from a colleague using np.kron which I adapted a bit:</p>
<pre><code>def hadamard_kron(A: np.ndarray):
    """Returns the hadamard products of all unique pairs of all columns,
    and return indices signifying which columns constitute a given pair.
    """
    n = A.shape[0]
    ind1 = (np.kron(np.arange(0, n).reshape((n, 1)), np.ones((n, 1)))).squeeze().astype(int)
    ind2 = (np.kron(np.ones((n, 1)), np.arange(0, n).reshape((n, 1)))).squeeze().astype(int)
    xmat2 = np.kron(A, np.ones((n, 1))) * np.kron(np.ones((n, 1)), A)
    hadamard_inputs = xmat2[ind2 > ind1, :]
    ind1_ = ind1[ind1 < ind2]
    ind2_ = ind2[ind1 < ind2]
    return hadamard_inputs, ind1_, ind2_
hadamard_A, first_pair_members, second_pair_members = hadamard_kron(a.transpose())
</code></pre>
<p>Note that hadamard_A is what I want, but transposed (which is also what I want for further processing). Also, ind1_ (ind2_) gives the indices for the objects which feature as the first (second) element in the pair for which the hadamard product is calculated. I need those as well.</p>
<p>However, I feel this code is too inefficient: it takes too long, and since I call this function several times during my algorithm, I was wondering whether there is a cleverer solution? Am I overlooking some numpy/scipy tools I could cleverly combine for this task?</p>
<p>Thanks all! :)</p>
|
<p><strong>Approach #1</strong></p>
<p>Simplest one with <a href="https://numpy.org/doc/stable/reference/generated/numpy.triu_indices.html" rel="nofollow noreferrer"><code>np.triu_indices</code></a> -</p>
<pre><code>In [45]: a
Out[45]:
array([[1, 2, 3, 4],
[4, 5, 6, 7],
[7, 8, 9, 1]])
In [46]: r,c = np.triu_indices(a.shape[1],1)
In [47]: a[:,c]*a[:,r]
Out[47]:
array([[ 2, 3, 4, 6, 8, 12],
[20, 24, 28, 30, 35, 42],
[56, 63, 7, 72, 8, 9]])
</code></pre>
<p><strong>Approach #2</strong></p>
<p>Memory-efficient one for large arrays -</p>
<pre><code>m,n = a.shape
s = np.r_[0,np.arange(n-1,-1,-1).cumsum()]
out = np.empty((m, n*(n-1)//2), dtype=a.dtype)
for i,(s0,s1) in enumerate(zip(s[:-1], s[1:])):
out[:,s0:s1] = a[:,i,None] * a[:,i+1:]
</code></pre>
<p><strong>Approach #3</strong></p>
<p>Masking based one -</p>
<pre><code>m,n = a.shape
mask = ~np.tri(n,dtype=bool)
m3D = np.broadcast_to(mask, (m,n,n))
b1 = np.broadcast_to(a[...,None], (m,n,n))
b2 = np.broadcast_to(a[:,None,:], (m,n,n))
out = (b1[m3D]* b2[m3D]).reshape(m,-1)
</code></pre>
<p><strong>Approach #4</strong></p>
<p>Extend approach #2 for a <code>numba</code> one -</p>
<pre><code>from numba import njit
def numba_app(a):
m,n = a.shape
out = np.empty((m, n*(n-1)//2), dtype=a.dtype)
return numba_func(a,out,m,n)
@njit
def numba_func(a,out,m,n):
for p in range(m):
I = 0
for i in range(n):
for j in range(i+1,n):
out[p,I] = a[p,i] * a[p,j]
I += 1
return out
</code></pre>
<p>Then, leverage <code>parallel</code> processing (as pointed out in comments by @max9111), like so -</p>
<pre><code>from numba import prange
def numba_app_parallel(a):
m,n = a.shape
out = np.empty((m, n*(n-1)//2), dtype=a.dtype)
return numba_func_parallel(a,out,m,n)
@njit(parallel=True)
def numba_func_parallel(a,out,m,n):
for p in prange(m):
I = 0
for i in range(n):
for j in range(i+1,n):
out[p,I] = a[p,i] * a[p,j]
I += 1
return out
</code></pre>
<h3>Benchmarking</h3>
<p>Using <a href="https://github.com/droyed/benchit" rel="nofollow noreferrer"><code>benchit</code></a> package (few benchmarking tools packaged together; disclaimer: I am its author) to benchmark proposed solutions.</p>
<pre><code>import benchit
in_ = [np.random.rand(5000, 80), np.random.rand(10000, 100), np.random.rand(20000, 120)]
funcs = [ehsan, app1, app2, app3, numba_app, numba_app_parallel]
t = benchit.timings(funcs, in_, indexby='shape')
t.rank()
t.plot(logx=False, save='timings.png')
</code></pre>
<p><a href="https://i.stack.imgur.com/185gu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/185gu.png" alt="enter image description here" /></a></p>
<p>Conclusion : <code>Numba</code> ones seem to be doing pretty well and <code>app2</code> among NumPy ones.</p>
|
python|arrays|numpy|unique
| 3
|
377,131
| 63,697,309
|
python, pandas - selecting group in multi indexing series
|
<p>I would like some advice on selecting a group.</p>
<p><a href="https://i.stack.imgur.com/Q1m9x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q1m9x.png" alt="https://i.stack.imgur.com/Q1m9x.png" /></a></p>
<p>For instance, I have tried to select Warehouse (JobTitle) and PA (loc):</p>
<pre><code>fdO2.xs(('Warehouse','PA'))
fdO2.loc[('Warehouse','PA')]
</code></pre>
<p>and for some reason I get this error:</p>
<pre><code>KeyError: ('Warehouse', 'PA')
KeyError: 'PA'
</code></pre>
<p>any advice?</p>
|
<p>You can extract the "<code>Location</code>" and "<code>Sum of Spend</code>" for the <code>Warehouse</code> and <code>PA</code>:</p>
<pre><code>fdO2 = fdO2.loc[(fdO2["JobTitle"]=="Warehouse") & (fdO2["loc"]=="PA"),['Location','Sum of Spend']]
</code></pre>
<p>Or if you want all the columns for the selected <code>Warehouse</code> and <code>PA</code>:</p>
<pre><code>fdO2 = fdO2.loc[(fdO2["JobTitle"]=="Warehouse") & (fdO2["loc"]=="PA"),:]
</code></pre>
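<p>If <code>JobTitle</code> and <code>loc</code> are actually levels of a MultiIndex rather than ordinary columns (the screenshot suggests a multi-indexed series), a sketch under that assumption is to select by level <em>name</em>, which avoids depending on the level order:</p>
<pre><code>fdO2.xs(('Warehouse', 'PA'), level=('JobTitle', 'loc'))
</code></pre>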
|
python|pandas|dataframe|multi-index
| 0
|
377,132
| 63,596,240
|
Why does an error appear when I apply a function to a dataframe?
|
<pre><code>def function(s):
if (s['col1'] == 'something1')|(s['col1'] == 'smth2')|(s['col1'] == 'smth3'):
return 'A'
elif (s['col1'] == 'smth4')|(s['col1'] == 'smth5'):
return 'B'
elif (s['col1'] == 'smth6')|(s['col1'] == 'smth7'):
return 'C'
else:
return 'D'
</code></pre>
<p>The function above worked. But when I apply it to dataframe:</p>
<pre><code>df['new_col'] = df.apply(function, axis = 1)
</code></pre>
<p>I get:</p>
<pre><code>TypeError: ("'bool' object is not callable", 'occurred at index 0')
</code></pre>
|
<p>For me it works correctly; here is an alternative solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.select</code></a>:</p>
<pre><code>df = pd.DataFrame({
'col1':['something1','jeff bridges','smth7','billy boy','smth5']})
print (df)
def function(s):
if (s['col1'] == 'something1')|(s['col1'] == 'smth2')|(s['col1'] == 'smth3'):
return 'A'
elif (s['col1'] == 'smth4')|(s['col1'] == 'smth5'):
return 'B'
elif (s['col1'] == 'smth6')|(s['col1'] == 'smth7'):
return 'C'
else:
return 'D'
df['new_col'] = df.apply(function, axis = 1)
</code></pre>
<hr />
<pre><code>m1 = df['col1'].isin(['something1','smth2','smth3'])
m2 = df['col1'].isin(['smth4','smth5'])
m3 = df['col1'].isin(['smth6','smth7'])
df['new_col1'] = np.select([m1, m2, m3], ['A','B','C'], default='D')
print (df)
col1 new_col new_col1
0 something1 A A
1 jeff bridges D D
2 smth7 C C
3 billy boy D D
4 smth5 B B
</code></pre>
|
python|pandas|function|dataframe|typeerror
| 1
|
377,133
| 63,455,140
|
How to multiply two pandas columns from separate dataframes together?
|
<p>I have two pandas dataframes, one as a look up table and one 'main' table.</p>
<p>The look up table is so.</p>
<pre><code>import pandas as pd
lu_dict = {'state': ['OH', 'TX', 'IA', 'WY', 'KS'], 'fire_pct':[0.542630,.174425,0.206752,0.004621,0.441946]
, 'hail_pct':[0.008787,0.440272,0.422005,0.434709,0.312338]
,'tw_pct':[0.101449,0.179536,0.159886,0.028349,0.151416]
,'other_pct':[0.224980,0.160096,0.149560,0.393357,0.036523]
,'wp_pct':[0.122154,0.045671,0.061796,0.138963,0.057777]}
lu = pd.DataFrame(lu_dict)
</code></pre>
<p>The main table is like so:</p>
<pre><code>preds_dict = {'state':['OH', 'TX', 'IA', 'WY', 'KS'],
'fire_preds':[.01,.02,.03,.015,.66]
, 'hail_preds':[.03,.005,.12,.23,.006]
,'tw_preds':[.001,.02,.0035,.04,.02]
,'other_preds':[.003,.05,.001,.01,.06]
,'wp_preds':[.002,.03,.005,.01,.04]}
preds = pd.DataFrame(preds_dict)
</code></pre>
<p>I need the observation in the 'main' table to match on the <code>state</code> column in the look up table, then multiply <code>fire_pct</code> in the look up table by 'fire_preds` in the 'main' table, 'other_pct' by 'other_preds', 'wp_pct' by 'wp_preds' etc.</p>
<p>If a dictionary would work better for a lookup table, that's fine. I just need to keep the main table in it's current data frame form for further processing.</p>
<p>Finally, the output I'm looking for is the sum of those multiplication outputs in one column.</p>
|
<p>IIUC, you need to do some renaming to get pandas to align data correctly.</p>
<pre><code>mults = (lu.rename(columns=dict(zip(lu.columns, preds.columns))).set_index('state') *
preds.set_index('state'))
print(mults)
</code></pre>
<p>Output:</p>
<pre><code> fire_preds hail_preds tw_preds other_preds wp_preds
state
OH 0.005426 0.000264 0.000101 0.000675 0.000244
TX 0.003488 0.002201 0.003591 0.008005 0.001370
IA 0.006203 0.050641 0.000560 0.000150 0.000309
WY 0.000069 0.099983 0.001134 0.003934 0.001390
KS 0.291684 0.001874 0.003028 0.002191 0.002311
</code></pre>
<p>Sum products:</p>
<pre><code>mults.sum()
fire_preds 0.306871
hail_preds 0.154963
tw_preds 0.008414
other_preds 0.014954
wp_preds 0.005624
dtype: float64
</code></pre>
<p>Sum by states:</p>
<pre><code>mults.sum(axis=1)
state
OH 0.006711
TX 0.018656
IA 0.057861
WY 0.106510
KS 0.301089
dtype: float64
</code></pre>
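<p>If the per-state sums should end up as a single column on the main table (which stays in its current form, as requested), one way is to map the row sums back by state — <code>weighted_sum</code> is just an illustrative name:</p>
<pre><code>preds['weighted_sum'] = preds['state'].map(mults.sum(axis=1))
</code></pre>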
|
pandas
| 1
|
377,134
| 63,442,862
|
Unable to load images from a Google Cloud Storage bucket in TensorFlow or Keras
|
<p>I have a bucket on Google Cloud Storage that contains images for a TensorFlow model training. I'm using <code>tensorflow_cloud</code> to load the images stored in the bucket called <code>stereo-train</code> and the full URL to the directory with images is:</p>
<pre><code>gs://stereo-train/data_scene_flow/training/dat
</code></pre>
<p>But using this path in the <code>tf.keras.preprocessing.image_dataset_from_directory</code> function, I get the error in the log in Google Cloud Console:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'gs://stereo-train/data_scene_flow/training/dat'
</code></pre>
<p>How to fix this?</p>
<p>Code:</p>
<pre><code>GCP_BUCKET = "stereo-train"
kitti_dir = os.path.join("gs://", GCP_BUCKET, "data_scene_flow")
kitti_training_dir = os.path.join(kitti_dir, "training", "dat")
ds = tf.keras.preprocessing.image_dataset_from_directory(kitti_training_dir, image_size=(375,1242), batch_size=batch_size, shuffle=False, label_mode=None)
</code></pre>
<p>Even when I use the following, it doesn't work:</p>
<pre><code>
filenames = np.sort(np.asarray(os.listdir(kitti_train))).tolist()
# Make a Dataset of image tensors by reading and decoding the files.
ds = list(map(lambda x: tf.io.decode_image(tf.io.read_file(kitti_train + x)), filenames))
</code></pre>
<p><code>tf.io.read_file</code> instead of the keras function, I get the same error. How to fix this?</p>
|
<p>If you are using Linux or OSX you can use <a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">Google Cloud Storage FUSE</a> which will allow you to mount your bucket locally and use it like any other file system. Follow the <a href="https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md" rel="nofollow noreferrer">installation guide</a> and then mount your bucket somewhere on your system, ie.:</p>
<pre><code>mkdir /mnt/buckets
gcsfuse gs://stereo-train /mnt/buckets
</code></pre>
<p>Then you should be able to use the paths from the mount point in your code and load the content from the bucket in Keras.</p>
|
python|tensorflow|keras|google-cloud-platform
| 2
|
377,135
| 63,394,131
|
Loading CSV with Pandas - Array's not parsed correctly
|
<p>I have a dataset which I transformed to CSV as potential input for a keras auto encoder.
The loading of the CSV works flawlessly with <code>pandas.read_csv()</code>, but the data types are not correct.</p>
<p>The csv solely contains two columns: <strong>label</strong> and <strong>features</strong>, where the label column contains strings and the features column contains arrays of signed integers ([-1, 1]). So in general it is a pretty simple structure.</p>
<p>To get two different dataframes for further processing I created them via:</p>
<p><code>labels = pd.DataFrame(columns=['label'], data=csv_data, dtype='U')</code>
and</p>
<p><code>features = pd.DataFrame(columns=['features'], data=csv_data)</code></p>
<p>in both cases I got wrong datatypes as both are marked as <code>object</code> typed dataframes. <strong>What am I doing wrong?</strong>
For the features it is even harder because the parsing returns me a <code>pandas.sequence</code> that contains the array as string: <code>['[1, ..., 1]']</code>.</p>
<p>So I tried a tedious workaround by parsing the string back to a numpy array via <code>.to_numpy()</code>, a Python cast for every element and then an <code>np.asarray()</code> - but the type of the dataframe is still incorrect. I think this cannot be the general approach to solve this task. As I am fairly new to pandas I checked some tutorials and the API, but in most cases a cell in a dataframe contains a single value rather than a complete array. Maybe my overall design of the dataframe is just not suitable for this task.</p>
<p>Any help appreciated!</p>
|
<p>You are reading the file as strings, but one column actually contains a Python list, so you need to evaluate the string to get the list back.
I am not sure of the use case, but you can also split the labels into separate columns for a more readable dataframe.</p>
<pre><code>import pandas as pd
features = ["featurea","featureb","featurec","featured","featuree"]
labels = ["[1,0,1,1,1,1]","[1,0,1,1,1,1]","[1,0,1,1,1,1]","[1,0,1,1,1,1]","[1,0,1,1,1,1]"]
df = pd.DataFrame(list(zip(features, labels)),
columns =['Features', 'Labels'])
import ast
#convert Strings to lists
df['Labels'] = df['Labels'].map(ast.literal_eval)
df.index = df['Features']
#Since list itself might not be useful you can split and expand it to multiple columns
new_df = pd.DataFrame(df['Labels'].values.tolist(),index= df.index)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> 0 1 2 3 4 5
Features
featurea 1 0 1 1 1 1
featureb 1 0 1 1 1 1
featurec 1 0 1 1 1 1
featured 1 0 1 1 1 1
featuree 1 0 1 1 1 1
</code></pre>
|
python|pandas|dataframe|csv
| 1
|
377,136
| 63,651,243
|
Your input ran out of data
|
<pre><code>history = model.fit_generator(
train_generator,
steps_per_epoch=50,
epochs=10,
verbose=1,
validation_data = validation_generator,
validation_steps=50)
</code></pre>
<p>tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least <code>steps_per_epoch * epochs</code> batches (in this case, 5000 batches). You may need to use the repeat() function when building your dataset.</p>
|
<p>To solve this problem, we need to pay attention to two items:</p>
<ol>
<li>How to define batch_size and steps_per_epoch.
The simple answer is steps_per_epoch = total_train_size // batch_size</li>
<li>How to define the maximum number of epochs for the training process.
This is not as straightforward as the first item.</li>
</ol>
<p>Most answers cover the first item; I didn't find a good answer for the second, so I'll try to explain it below:</p>
<p>I have a training set of 93 samples and a batch_size of 32, so for the first item:
steps_per_epoch = total_train_size // batch_size = 93 // 32 = 2</p>
<p>For the second item, it depends on how many non-repeated batches your data generator can provide. With 93 samples and 32 samples per batch, each epoch has 2 training steps, so by my reckoning 93 // 2 = 46 epochs can be served without repeating batches, and epoch 47 will trigger this error.</p>
<p>I didn't find a reference for the TensorFlow data generator, so this is just my understanding; if anything is wrong please correct me, thanks!</p>
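<p>A minimal sketch of the usual fix, assuming the input is a <code>tf.data.Dataset</code> (a Keras <code>ImageDataGenerator</code> already loops forever, in which case only <code>steps_per_epoch</code> needs to come down); the names <code>train_dataset</code>, <code>validation_dataset</code>, <code>total_train_size</code> and <code>validation_size</code> are placeholders:</p>
<pre><code>steps_per_epoch = total_train_size // batch_size      # e.g. 93 // 32 = 2

history = model.fit(
    train_dataset.repeat(),                           # let the dataset cycle indefinitely
    steps_per_epoch=steps_per_epoch,
    epochs=10,
    validation_data=validation_dataset.repeat(),
    validation_steps=validation_size // batch_size)
</code></pre>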
|
tensorflow2.0|tensorflow-datasets
| 1
|
377,137
| 63,716,828
|
Extract a particular value from categorical column using Python
|
<p>Following is the sample table which consists of transaction data of bank customers. I need to create a separate column as annual salary of customer taking the data from <code>txn_description</code> column.</p>
<pre><code>Customer_ID txn_description Amount Type
01 POS 345 Dr
02 SALARY 2000 Cr
03 INTER BANK 148 Dr
04 SALARY 1500 Cr
05 NEFT 289 Dr
06 SALARY 1800 Cr
01 NEFT 40 Dr
02 SALARY 2000 Cr
04 POS 69 Dr
04 SALARY 1500 Cr
06 SALARY 1800 Cr
</code></pre>
<p>Note: The transaction data is of three months. So the salary is credited to a particular customer's account thrice in this table for three months.</p>
<p>(Dr = Debit transaction and Cr = Credit transaction)</p>
|
<p>you could try this,</p>
<pre><code>df= df[df["txn_description"]=="SALARY"]
df["Annual"] = df["Amount"]*12
</code></pre>
<p>O/P:</p>
<pre><code> Customer_ID txn_description Amount Annual
1 2 SALARY 2000 24000
3 4 SALARY 1500 18000
5 6 SALARY 1800 21600
</code></pre>
<p>Furthermore, if you want to bring this back onto the original frame, build a mapping first:</p>
<pre><code>dic = df.set_index("Customer_ID")["Annual"].to_dict()
</code></pre>
<p>and apply it to the original dataframe with <code>df["Customer_ID"].map(dic)</code>.</p>
<p>Explanation:</p>
<ol>
<li>First remove the unwanted records, keeping only the 'Cr' / SALARY rows.</li>
<li>Now the dataframe holds one monthly salary record per customer, i.e. customer id and amount map one to one.</li>
<li>Multiply the amount by 12 to get the annual value.</li>
<li>Build the customer-to-annual-value dict and map it back onto the original frame.</li>
</ol>
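<p>Since the question notes the salary is credited three times (once per month), deduplicating by customer before building the mapping keeps it one-to-one — a minimal sketch starting again from the original dataframe, with <code>Annual_Salary</code> as an illustrative column name:</p>
<pre><code>annual = (df[df["txn_description"] == "SALARY"]
            .drop_duplicates("Customer_ID")
            .set_index("Customer_ID")["Amount"] * 12)
df["Annual_Salary"] = df["Customer_ID"].map(annual)
</code></pre>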
|
python|pandas|data-science|data-analysis
| 1
|
377,138
| 21,908,375
|
R parameter DROP equivalent in Pandas
|
<p>I would like to know what is the Python (Pandas) equivalent of the following R code:</p>
<pre><code>outDataFrame <- myDataFrame[, rownames(inputDataFrame), drop=FALSE]
</code></pre>
<ul>
<li>row names of inputDataFrame are the same as the column names of myDataFrame.</li>
<li>each row of myDataframe contains only one TRUE value (all other values are FALSE)</li>
</ul>
<p>The result outDataFrame should have:</p>
<ul>
<li>same row names as myDataFrame row names</li>
<li>only one column</li>
<li>the values contained in that column should correspond to the colum name of myDataFrame for which the value is TRUE</li>
</ul>
<p>I hope it is understandable...</p>
<p>Kind Regards</p>
<p>R.</p>
|
<p>Here is another way, using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow">np.argmax</a>:</p>
<pre><code>In [55]: myDataFrame = pd.DataFrame([(True,False,False), (False,False,True), (False,False,True)], index=list('ABC'), columns=list('XYZ'))
In [56]: myDataFrame
Out[56]:
X Y Z
A True False False
B False False True
C False False True
[3 rows x 3 columns]
In [58]: pd.Series(myDataFrame.columns[np.argmax(myDataFrame.values, axis=1)], index=myDataFrame.index)
Out[58]:
A X
B Z
C Z
dtype: object
</code></pre>
<p>It's long but perhaps faster especially for large dataframes:</p>
<pre><code>In [76]: myDataFrame2 = pd.concat([myDataFrame]*10000)
In [77]: %timeit pd.Series(myDataFrame2.columns[np.argmax(myDataFrame2.values, axis=1)], index=myDataFrame2.index)
1000 loops, best of 3: 1.19 ms per loop
In [78]: %timeit pd.Series( np.dot( myDataFrame2, myDataFrame2.columns ), index=myDataFrame2.index )
100 loops, best of 3: 5.72 ms per loop
In [79]: %timeit myDataFrame2.apply(lambda row: myDataFrame2.columns[row][0], axis=1)
1 loops, best of 3: 1.15 s per loop
</code></pre>
|
python|r|select|pandas
| 0
|
377,139
| 21,866,930
|
left hand side eigenvector in python?
|
<p>How do I calculate the left-hand-side eigenvector in Python?</p>
<pre><code> >>> import numpy as np
>>> from scipy.linalg import eig
>>> np.set_printoptions(precision=4)
>>> T = np.mat("0.2 0.4 0.4;0.8 0.2 0.0;0.8 0.0 0.2")
>>> print "T\n", T
T
[[ 0.2 0.4 0.4]
[ 0.8 0.2 0. ]
[ 0.8 0. 0.2]]
>>> w, vl, vr = eig(T, left=True)
>>> vl
array([[ 0.8165, 0.8165, 0. ],
[ 0.4082, -0.4082, -0.7071],
[ 0.4082, -0.4082, 0.7071]])
</code></pre>
<p>This does not seem correct, google has not been kind on this!</p>
|
<p>Your result is correct to my understanding.</p>
<p>However, you might be misinterpreting it. The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html" rel="noreferrer">numpy docs</a> are a bit clearer on what the left eigenvectors should be.</p>
<blockquote>
<p>Finally, it is emphasized that v consists of the right (as in
right-hand side) eigenvectors of a. A vector y satisfying dot(y.T, a)
= z * y.T for some number z is called a left eigenvector of a, and, in general, the left and right eigenvectors of a matrix are not
necessarily the (perhaps conjugate) transposes of each other.</p>
</blockquote>
<p>I.e. you need to transpose the vectors in <code>vl</code>. <code>vl[:,i].T</code> is the i-th left eigenvector.
If I test this, I get, that the results are correct.</p>
<pre><code>>>> import numpy as np
>>> from scipy.linalg import eig
>>> np.set_printoptions(precision=4)
>>> T = np.mat("0.2 0.4 0.4;0.8 0.2 0.0;0.8 0.0 0.2")
>>> print "T\n", T
T
[[ 0.2 0.4 0.4]
[ 0.8 0.2 0. ]
[ 0.8 0. 0.2]]
>>> w, vl, vr = eig(T, left=True)
>>> vl
array([[ 0.8165, 0.8165, 0. ],
[ 0.4082, -0.4082, -0.7071],
[ 0.4082, -0.4082, 0.7071]])
>>> [ np.allclose(np.dot(vl[:,i].T, T), w[i]*vl[:,i].T) for i in range(3) ]
[True, True, True]
</code></pre>
|
python-2.7|numpy|eigenvector
| 6
|
377,140
| 21,469,261
|
Extract subarray between certain value in Python
|
<p>I have a list of values that are the result of merging many files. I need to pad some of the values. I know that each sub-section begins with the value -1. I am trying to basically extract a sub-array between -1's in the main array via iteration.</p>
<p>For example supposed this is the main list:</p>
<pre><code>-1 1 2 3 4 5 7 -1 4 4 4 5 6 7 7 8 -1 0 2 3 5 -1
</code></pre>
<p>I would like to extract the values between the -1s:</p>
<pre><code>list_a = 1 2 3 4 5 7
list_b = 4 4 4 5 6 7 7 8
list_c = 0 2 3 5 ...
list_n = a1 a2 a3 ... aM
</code></pre>
<p>I have extracted the indices for each -1 by searching through the main list:</p>
<pre><code>minus_ones = [i for i, j in izip(count(), q) if j == -1]
</code></pre>
<p>I also assembled them as pairs using a common recipe:</p>
<pre><code>def pairwise(iterable):
a, b = tee(iterable)
next(b, None)
return izip(a,b)
for index in pairwise(minus_ones):
print index
</code></pre>
<p>The next step I am trying to do is grab the values between the index pairs, for example:</p>
<pre><code> list_b: (7 , 16) -> 4 4 4 5 6 7 7 8
</code></pre>
<p>so I can then do some work to those values (I will add a fixed int. to each value in each sub-array).</p>
|
<p>You mentioned <code>numpy</code> in the tags. If you're using it, have a look at <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html" rel="nofollow"><code>np.split</code></a>.</p>
<p>For example:</p>
<pre><code>import numpy as np
x = np.array([-1, 1, 2, 3, 4, 5, 7, -1, 4, 4, 4, 5, 6, 7, 7, 8, -1, 0, 2,
3, 5, -1])
arrays = np.split(x, np.where(x == -1)[0])
arrays = [item[1:] for item in arrays if len(item) > 1]
</code></pre>
<p>This yields:</p>
<pre><code>[array([1, 2, 3, 4, 5, 7]),
array([4, 4, 4, 5, 6, 7, 7, 8]),
array([0, 2, 3, 5])]
</code></pre>
<p>What's going on is that <code>where</code> will yield an array (actually a tuple of arrays, therefore the <code>where(blah)[0]</code>) of the indicies where the given expression is true. We can then pass these indicies to <code>split</code> to get a sequence of arrays. </p>
<p>However, the result will contain the <code>-1</code>'s and an empty array at the start, if the sequence starts with <code>-1</code>. Therefore, we need to filter these out.</p>
<p>If you're not already using <code>numpy</code>, though, your (or @DSM's) <code>itertools</code> solution is probably a better choice.</p>
|
python|arrays|list|numpy
| 4
|
377,141
| 21,566,379
|
Fitting a 2D Gaussian function using scipy.optimize.curve_fit - ValueError and minpack.error
|
<p>I intend to fit a 2D Gaussian function to images showing a laser beam to get its parameters like <code>FWHM</code> and position. So far I tried to understand how to define a 2D Gaussian function in Python and how to pass x and y variables to it.</p>
<p>I've written a little script which defines that function, plots it, adds some noise to it and then tries to fit it using <code>curve_fit</code>. Everything seems to work except the last step in which I try to fit my model function to the noisy data. Here is my code:</p>
<pre><code>import scipy.optimize as opt
import numpy as np
import pylab as plt
#define model function and pass independant variables x and y as a list
def twoD_Gaussian((x,y), amplitude, xo, yo, sigma_x, sigma_y, theta, offset):
xo = float(xo)
yo = float(yo)
a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2)
b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2)
c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2)
return offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) + c*((y-yo)**2)))
# Create x and y indices
x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
x,y = np.meshgrid(x, y)
#create data
data = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, 0, 10)
# plot twoD_Gaussian data generated above
plt.figure()
plt.imshow(data)
plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
initial_guess = (3,100,100,20,40,0,10)
data_noisy = data + 0.2*np.random.normal(size=len(x))
popt, pcov = opt.curve_fit(twoD_Gaussian, (x,y), data_noisy, p0 = initial_guess)
</code></pre>
<p>Here is the error message I get when running the script using <code>winpython 64-bit</code> <code>Python 2.7</code>:</p>
<pre><code>ValueError: object too deep for desired array
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python\WinPython-64bit-2.7.6.2\python-2.7.6.amd64\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "E:/Work Computer/Software/Python/Fitting scripts/2D Gaussian function fit/2D_Gaussian_LevMarq_v2.py", line 39, in <module>
popt, pcov = opt.curve_fit(twoD_Gaussian, (x,y), data_noisy, p0 = initial_guess)
File "C:\Python\WinPython-64bit-2.7.6.2\python-2.7.6.amd64\lib\site-packages\scipy\optimize\minpack.py", line 533, in curve_fit
res = leastsq(func, p0, args=args, full_output=1, **kw)
File "C:\Python\WinPython-64bit-2.7.6.2\python-2.7.6.amd64\lib\site-packages\scipy\optimize\minpack.py", line 378, in leastsq
gtol, maxfev, epsfcn, factor, diag)
minpack.error: Result from function call is not a proper array of floats.
</code></pre>
<p>What is it that am I doing wrong? Is it how I pass the independent variables to the model <code>function/curve_fit</code>?</p>
|
<p>The output of <code>twoD_Gaussian</code> needs to be 1D. What you can do is add a <code>.ravel()</code> onto the end of the last line, like this:</p>
<pre><code>def twoD_Gaussian((x, y), amplitude, xo, yo, sigma_x, sigma_y, theta, offset):
xo = float(xo)
yo = float(yo)
a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2)
b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2)
c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2)
g = offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo)
+ c*((y-yo)**2)))
return g.ravel()
</code></pre>
<p>You'll obviously need to reshape the output for plotting, e.g:</p>
<pre><code># Create x and y indices
x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
x, y = np.meshgrid(x, y)
#create data
data = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, 0, 10)
# plot twoD_Gaussian data generated above
plt.figure()
plt.imshow(data.reshape(201, 201))
plt.colorbar()
</code></pre>
<p>Do the fitting as before:</p>
<pre><code># add some noise to the data and try to fit the data generated beforehand
initial_guess = (3,100,100,20,40,0,10)
data_noisy = data + 0.2*np.random.normal(size=data.shape)
popt, pcov = opt.curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess)
</code></pre>
<p>And plot the results:</p>
<pre><code>data_fitted = twoD_Gaussian((x, y), *popt)
fig, ax = plt.subplots(1, 1)
ax.hold(True)
ax.imshow(data_noisy.reshape(201, 201), cmap=plt.cm.jet, origin='bottom',
extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted.reshape(201, 201), 8, colors='w')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/BbxGR.png" alt="enter image description here"></p>
|
python|numpy|scipy|data-fitting
| 46
|
377,142
| 21,769,010
|
Removing list comps from numpy code
|
<p>I'm in the middle of constructing a geometric neural net, and I'm running up against an issue with vectorization. Basically there is a lambda function I have defined that really should run on each sample provided as input. Problem being that it would be most convenient to pass in the inputs as an array with the last axis of that array serving as the "sample axis" (axis where each index is a full sample)</p>
<p>I have a solution that works, that basically just does this in a listcomp and then converts it back to a numpy array for the rest of the calculations. (Let me know if you want to see any of the functions defined, but I don't think they're hugely relevant)</p>
<pre><code>class GeometricNeuralNet(object):
    def __init__(self, c, weight_domain=math.log(2)):
        """
        Dimensions of c should be a tuple that indicates the size of each layer.
        First number should be the number of input units, and the last should be the number of output units.
        Other entries should be the sizes of hidden layers.
        """
        weight_matrix = lambda a, b: np.exp(np.random.uniform(-weight_domain, weight_domain, [a,b]))
        self.weights = [weight_matrix(c[i], c[i+1]) for i in range(len(c) - 1)]
        self.predict = lambda input_vector, end=None: reduce(transfer_function, [input_vector] + self.weights[:end])
    def train(self, samples, outputs, learning_rate):
        # Forward Pass
        true_inputs = np.array([self.predict(sample, -1) for sample in samples])
        print true_inputs.shape
</code></pre>
<p>My major problem with this code is the weird way that <code>true_inputs</code> is being calculated. Is there any way around this? <code>np.vectorize</code> and <code>np.frompyfunc</code> don't seem to allow an axis argument, which would really be crucial here.</p>
<p>EDIT:</p>
<p>Here's the <code>transfer_function</code> method.</p>
<pre><code>def transfer_function(x, y):
    return gmean(np.power(x, y.T), axis=1)
</code></pre>
|
<p>You should check out numpy's apply_along_axis method: <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html</a></p>
<pre><code>>>> def my_func(a):
... """Average first and last element of a 1-D array"""
... return (a[0] + a[-1]) * 0.5
>>> b = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> np.apply_along_axis(my_func, 0, b)
array([ 4., 5., 6.])
>>> np.apply_along_axis(my_func, 1, b)
array([ 2., 5., 8.])
</code></pre>
|
python|arrays|numpy|vectorization
| 1
|
377,143
| 21,752,989
|
numpy: Efficiently avoid 0s when taking log(matrix)
|
<pre><code>from numpy import *
m = array([[1,0],
[2,3]])
</code></pre>
<p>I would like to compute the element-wise <code>log2(m)</code>, but only in the places where <code>m</code> is not 0. In those places, I would like to have 0 as a result.</p>
<p>I am now fighting against:</p>
<pre><code>RuntimeWarning: divide by zero encountered in log2
</code></pre>
<hr>
<p><strong>Try 1: using <code>where</code></strong></p>
<pre><code>res = where(m != 0, log2(m), 0)
</code></pre>
<p>which computes me the correct result, but I still get logged a <code>RuntimeWarning: divide by zero encountered in log2</code>. It looks like (and syntactically it is quite obvious) numpy still computes <code>log2(m)</code> on the full matrix and only afterwards <code>where</code> picks the values to keep.</p>
<p>I would like to avoid this warning.</p>
<hr>
<p><strong>Try 2: using masks</strong></p>
<pre><code>from numpy import ma
res = ma.filled(log2(ma.masked_equal(m, 0)), 0)
</code></pre>
<p>Sure masking away the zeros will prevent <code>log2</code> to get applied to them, won't it? Unfortunately not: We still get <code>RuntimeWarning: divide by zero encountered in log2</code>.</p>
<p>Even though the matrix is masked, <code>log2</code> still seems to be applied to every element.</p>
<hr>
<p><strong>How can I efficiently compute the element-wise log of a numpy array without getting division-by-zero warnings?</strong></p>
<ul>
<li>Of course I could temporarily disable the logging of these warnings using <code>seterr</code>, but that doesn't look like a clean solution.</li>
<li>And sure a double <em>for</em> loop would help with treating 0s specially, but defeats the efficiency of numpy.</li>
</ul>
<p>Any ideas?</p>
|
<p>We can use masked arrays for this:</p>
<pre><code>>>> from numpy import *
>>> m = array([[1,0], [2,3]])
>>> x = ma.log2(m)
>>> print x.filled(0)
[[ 0.         0.       ]
 [ 1.         1.5849625]]
</code></pre>
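<p>If you'd rather avoid masked arrays, a reasonably recent NumPy also lets you pass <code>where</code>/<code>out</code> to the ufunc itself, so the log is simply never evaluated at the zero entries. A minimal sketch of that idea:</p>
<pre><code>import numpy as np

m = np.array([[1, 0],
              [2, 3]])

# out supplies the value (0) kept wherever the condition is False,
# and where=(m != 0) means log2 is never evaluated at the zeros,
# so no divide-by-zero warning is emitted
res = np.log2(m, out=np.zeros_like(m, dtype=float), where=(m != 0))
print(res)
</code></pre>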
|
numpy
| 34
|
377,144
| 21,451,776
|
convert numpy string array into int array
|
<hr>
<p>I have a numpy.ndarray</p>
<pre><code>a = [['-0.99' '' '0.56' ..., '0.56' '-2.02' '-0.96']]
</code></pre>
<p>how to convert it to int?</p>
<p>output : </p>
<pre><code>a = [[-0.99 0.0 0.56 ..., 0.56 -2.02 -0.96]]
</code></pre>
<p>I want 0.0 in place of blank ''</p>
|
<pre><code>import numpy as np
a = np.array([['-0.99', '', '0.56', '0.56', '-2.02', '-0.96']])
a[a == ''] = 0.0
a = a.astype(np.float)
</code></pre>
<p>Result is:</p>
<pre><code>[[-0.99 0. 0.56 0.56 -2.02 -0.96]]
</code></pre>
<p>Your values are floats, not integers. It is not clear if you want a list of lists or a numpy array as your end result. You can easily get a list of lists like this:</p>
<pre><code>a = a.tolist()
</code></pre>
<p>Result:</p>
<pre><code>[[-0.99, 0.0, 0.56, 0.56, -2.02, -0.96]]
</code></pre>
|
python|numpy
| 15
|
377,145
| 24,876,906
|
Removing duplicates from a series based on a symmetric matrix in pandas
|
<p>I am new to Pandas and have been unable to find a succinct solution to the following problem. </p>
<p>Say I have a Series of data based on a symmetric (distance)matrix, what is the most efficient way to drop duplicates from the following series?</p>
<pre><code>from pandas import DataFrame
df = DataFrame([[0, 1, 2],
[1, 0, 3],
[2, 3, 0]],
index=['a', 'b', 'c'],
columns=['a', 'b', 'c'])
ser = df.stack()
ser
a a 0
b 1
c 2
b a 1
b 0
c 3
c a 2
b 3
c 0
</code></pre>
<p>What I want to do is remove duplicate pairs, since the matrix is symmetric. The output should look like this</p>
<pre><code>a a 0
b 1
c 2
b b 0
c 3
c c 0
</code></pre>
|
<p>The following code runs faster than the currently accepted answer:</p>
<pre><code>import numpy as np
def dm_to_series1(df):
    df = df.astype(float)
    df.values[np.triu_indices_from(df, k=1)] = np.nan
    return df.unstack().dropna()
</code></pre>
<p>The type of the <code>DataFrame</code> is converted to <code>float</code> so that elements can be nulled with <code>np.nan</code>. In practice, a distance matrix would probably already store floats so this step may not be strictly necessary. The upper triangle (excluding the diagonal) is nulled and these entries are removed after converting the <code>DataFrame</code> to a <code>Series</code>.</p>
<p>I adapted the currently accepted solution in order to compare runtimes. Note that I updated it to use a set instead of a list for faster runtime:</p>
<pre><code>def dm_to_series2(df):
    ser = df.stack()
    seen = set()
    for tup in ser.index.tolist():
        if tup[::-1] in seen:
            continue
        seen.add(tup)
    return ser[seen]
</code></pre>
<p>Testing the two solutions on the original example dataset:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[0, 1, 2],
[1, 0, 3],
[2, 3, 0]],
index=['a', 'b', 'c'],
columns=['a', 'b', 'c'])
</code></pre>
<p>My solution:</p>
<pre><code>In [4]: %timeit dm_to_series1(df)
1000 loops, best of 3: 538 µs per loop
</code></pre>
<p>@Marius' solution:</p>
<pre><code>In [5]: %timeit dm_to_series2(df)
1000 loops, best of 3: 816 µs per loop
</code></pre>
<p>I also tested against a larger distance matrix by randomly generating a 50x50 matrix using scikit-bio's <code>skbio.stats.distance.randdm</code> function and converting that to a <code>DataFrame</code>:</p>
<pre><code>from skbio.stats.distance import randdm
big_dm = randdm(50)
big_df = pd.DataFrame(big_dm.data, index=big_dm.ids, columns=big_dm.ids)
</code></pre>
<p>My solution:</p>
<pre><code>In [7]: %timeit dm_to_series1(big_df)
1000 loops, best of 3: 649 µs per loop
</code></pre>
<p>@Marius' solution:</p>
<pre><code>In [8]: %timeit dm_to_series2(big_df)
100 loops, best of 3: 3.61 ms per loop
</code></pre>
<p>Note that my solution may not be as memory-efficient as @Marius' solution because I'm creating a copy of the input <code>DataFrame</code> and making modifications to it. If it is acceptable to modify the input <code>DataFrame</code>, the code could be updated to be more memory-efficient by using in-place <code>DataFrame</code> operations.</p>
<p>Note: my solution was inspired by the answers in <a href="https://stackoverflow.com/q/14129979/3776794">this SO question</a>.</p>
|
python|pandas
| 3
|
377,146
| 24,780,697
|
numpy: unique list of colors in the image
|
<p>I have an image <code>img</code>:</p>
<pre><code>>>> img.shape
(200, 200, 3)
</code></pre>
<p>On pixel (100, 100) I have a nice color:</p>
<pre><code>>>> img[100,100]
array([ 0.90980393, 0.27450982, 0.27450982], dtype=float32)
</code></pre>
<p>Now my question is: How many different colors are there in this image, and how do I enumerate them?</p>
<p>My first idea was <code>numpy.unique()</code>, but somehow I am using this wrong.</p>
|
<p>Your initial idea to use <code>numpy.unique()</code> actually can do the job perfectly with the best performance:</p>
<pre><code>numpy.unique(img.reshape(-1, img.shape[2]), axis=0)
</code></pre>
<p>First, we flatten the matrix so that it has as many rows as there are pixels in the image; the columns are the color components of each pixel.</p>
<p>Then we take the unique rows of the flattened matrix. (The <code>axis</code> argument of <code>numpy.unique()</code> requires NumPy 1.13 or newer.)</p>
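<p>If you also want to know how often each color occurs, <code>return_counts=True</code> gives both in one call. A small sketch, assuming <code>img</code> is the array from the question:</p>
<pre><code>import numpy as np

pixels = img.reshape(-1, img.shape[2])                      # one row per pixel
colors, counts = np.unique(pixels, axis=0, return_counts=True)
print(len(colors))      # number of distinct colors
print(colors[:5])       # the colors themselves
print(counts[:5])       # how many pixels have each color
</code></pre>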
|
python|numpy
| 52
|
377,147
| 24,666,289
|
Converting groupby dataframe (with dropped duplicate rows by group) into normal dataframe
|
<p>How do I convert a groupby dataframe I created in order to drop duplicates by a group back into a normal dataframe? </p>
<pre><code>df3 = df2.groupby('Organization')
df3 = df3.drop_duplicates('Name')
</code></pre>
<p>I tried this, but this seems to create abnormal properties that don't allow me to subset my data</p>
<pre><code>df3 = df3.add_suffix(' ').reset_index()
df3 = df3.set_index(df3.level_1)
df3.columns = map(lambda x: x.strip(), df3.columns)
df3.ix[:,2:]
AssertionError: Number of manager items must equal union of block items
# manager items: 12, # tot_items: 13
</code></pre>
|
<p>Instead of creating a groupby to drop duplicates, have you considered:</p>
<pre><code>df4 = df3.drop_duplicates(['Organization', 'Name'])
</code></pre>
<p>That will keep you in a normal dataframe the whole time and should accomplish what you're trying to do.</p>
<p>If you want to group and "ungroup" then <a href="https://stackoverflow.com/questions/20122521/is-there-an-ungroup-by-operation-opposite-to-groupby-in-pandas">this post may help</a></p>
|
python|pandas
| 0
|
377,148
| 24,593,694
|
How to troubleshoot code in the case of big data
|
<p>I'm trying to implement <a href="https://stackoverflow.com/questions/24581967/count-lines-with-same-value-in-column-in-python">this python solution to count the number of lines with identical content in the first few columns</a> of a table. Here is my code:</p>
<pre><code>#count occurrences of reads
import pandas as pd
#pd.options.display.large_repr = 'info'
#pd.set_option('display.max_rows', 100000000)
#pd.set_option('display.width',50000)
import sys
file1 = sys.argv[1]
file2 = file1[:4] + '_multi_nobidir_count.soap'
df = pd.read_csv(file1,sep='\t',header=None)
df.columns = ['v0','v1','v2','v3','v4','v5','v6','v7','v8','v9','v10','v11']
df['v3']=df.groupby(['v0','v1','v2']).transform(sum).v3
df.to_csv(file2,sep='\t',index=False,header=False)
</code></pre>
<p>It worked fine with the test data (200 lines) but gives me the following error when I apply it to the real data (20 million lines):</p>
<pre><code>Traceback (most recent call last):
File "count_same_reads.py", line 14, in <module>
df['v3']=df.groupby(['v0','v1','v2']).transform(sum).v3
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0-py2.7-linux-x86_64.egg/pandas/core/groupby.py", line 2732, in transform
return self._transform_item_by_item(obj, fast_path)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.14.0-py2.7-linux-x86_64.egg/pandas/core/groupby.py", line 2799, in _transform_item_by_item
raise TypeError('Transform function invalid for data types')
TypeError: Transform function invalid for data types
</code></pre>
<p>How do I go about troubleshooting, to find out why I am getting this error?</p>
<p>[EDIT] Uncommenting the <code>pd.options.</code> and <code>pd.set_option</code> lines did not change the outcome.</p>
<p>[EDIT2] Taking into consideration some of the replies below, I ran the following code on my data to output any lines of data that do not have a number in the 4th column:</p>
<pre><code>#test data type
import sys
file1 = sys.argv[1]
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False
with open(file1, 'r') as data:
    for row in data:
        a = row.strip().split()[3]
        if is_number(a) == False:
            print row.strip()
<p>This worked on the test data in which I changed one of the rows' fourth column value from <code>1</code> to <code>e</code>, it output only the line containing the letter instead of a number. I ran it on the original big data but no lines were returned.</p>
|
<p>Open the file <code>/usr/local/lib/python2.7/dist-packages/pandas-0.14.0-py2.7-linux-x86_64.egg/pandas/core/groupby.py</code>, go to line 2799.</p>
<p>Right before the following statement, in the same indent level, add a line to print the value of the offending data.</p>
<pre><code>raise TypeError('Transform function invalid for data types')
</code></pre>
<p>Now, right before the <code>TypeError</code> is thrown, you will know what data caused the error.</p>
<p>Given that you are trying to sum, I would speculate that you have a non-numeric value in your column, but I do not have your data, so that is pure speculation.</p>
<hr>
<p>I have taken a quick look at the code region around where the error occurs, and it appears that you should in this case be inspecting the object <code>obj</code> before the <code>TypeError</code> is raised.</p>
<pre><code>for i, col in enumerate(obj):
    try:
        output[col] = self[col].transform(wrapper)
        inds.append(i)
    except Exception:
        pass
if len(output) == 0: # pragma: no cover
    raise TypeError('Transform function invalid for data types')
</code></pre>
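<p>If you would rather not edit the pandas source, a quick, non-invasive way to hunt for the offending values from your own script is to look at the dtypes and at which Python types actually ended up in the column. A sketch, using the column names from the question:</p>
<pre><code>print(df.dtypes)                              # an 'object' column where you expect int64/float64 is the red flag
print(df['v3'].map(type).value_counts())      # which Python types actually sit in that column
</code></pre>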
|
python|debugging|pandas|bigdata
| 2
|
377,149
| 24,771,027
|
evaluating numpy polynomials at other polynomials
|
<p>numpy.lib.polynomial.polyval lets you evaluate a polynomial using another polynomial:</p>
<pre><code>numpy.polyval(poly1d([1, 2, 3]), 2)
Out[832]: 11
numpy.polyval(poly1d([1, 1]), poly1d([1, 1, 1]))
Out[820]: poly1d([ 1., 1., 2.])
</code></pre>
<p>for instance. How do you do the same using numpy.polynomial.polynomial.polyval?</p>
<pre><code>numpy.polynomial.polynomial.polyval(2, [3, 2, 1])
Out[833]: 11.0
numpy.polynomial.polynomial.polyval(Polynomial([1, 1, 1]), Polynomial([1, 1]))
Out[834]: Polynomial([ 1., 1.], [-1., 1.], [-1., 1.])
</code></pre>
|
<p>The easiest way is to use the Polynomial class.</p>
<pre><code>In [1]: from numpy.polynomial import Polynomial as P
In [2]: p1 = P([1,1])
In [3]: p2 = P([1,1,1])
In [4]: p2(p1)
Out[4]: Polynomial([ 3., 3., 1.], [-1., 1.], [-1., 1.])
In [5]: p1(p2)
Out[5]: Polynomial([ 2., 1., 1.], [-1., 1.], [-1., 1.])
</code></pre>
<p>If you insist on polyval, you need both the coefficients, and a polynomial to use as x.</p>
<pre><code>In [12]: import numpy.polynomial.polynomial as poly
In [13]: poly.polyval(p1, [1,1,1])
Out[13]: Polynomial([ 3., 3., 1.], [-1., 1.], [-1., 1.])
In [14]: poly.polyval(p2, [1,1])
Out[14]: Polynomial([ 2., 1., 1.], [-1., 1.], [-1., 1.])
</code></pre>
|
python|numpy|polynomial-math
| 3
|
377,150
| 24,917,580
|
Is there a way to pass different vertical lines to each subplot when using pandas histogram with "by=somevar"?
|
<p>I'm making histograms using pandas and I find this approach convenient. </p>
<p>For example if I do: </p>
<p>df['plotvar'].hist(by='Zone')</p>
<p>I get </p>
<p><img src="https://i.stack.imgur.com/56pgT.png" alt="Histograms">
But now I want to add the 95%CI on each of these subgroups, and of course the intervals are different for each group. I could do it just using plt.axvline in matplotlib, but not sure how to do it when I've made the original plots using pandas. TIA for any inputs/suggestions. </p>
<p>edit: I should add that I already know what the 95% CI values are. This is just a plotting question (how to apply the axvline to each of these subplots). Thx. </p>
|
<p><code>DataFrame.hist()</code> with <code>by=</code> returns the array of matplotlib Axes, so you could grab that and then iterate over it.</p>
<p>For similar functionality that speaks pandas but has more flexible features you could use the <a href="http://stanford.edu/~mwaskom/software/seaborn/examples/many_facets.html" rel="nofollow">FacetGrid</a> object from <a href="http://stanford.edu/~mwaskom/software/seaborn" rel="nofollow">seaborn</a>.</p>
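<p>A minimal sketch of that first idea — note that <code>ci_per_zone</code> and its values are made up here; substitute the intervals you already computed (and the zone labels your data actually uses):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# hypothetical CI bounds per zone -- replace with your own values
ci_per_zone = {'1': (10.0, 30.0), '2': (12.0, 28.0)}

axes = df['plotvar'].hist(by=df['Zone'])          # array of Axes, one per group
for ax in np.asarray(axes).ravel():
    zone = ax.get_title()                         # pandas titles each subplot with its group label
    if zone in ci_per_zone:
        low, high = ci_per_zone[zone]
        ax.axvline(low, color='k', linestyle='--')
        ax.axvline(high, color='k', linestyle='--')
plt.show()
</code></pre>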
|
python|matplotlib|pandas|histogram
| 0
|
377,151
| 24,684,441
|
Pandas: Concatenate dataframe and keep duplicate indices
|
<p>I have two dataframes that I would like to concatenate column-wise (axis=1) with an inner join. One of the dataframes has some duplicate indices, but the rows are not duplicates, and I don't want to lose the data from those :</p>
<pre><code>df1 = pd.DataFrame([{'a':1,'b':2},{'a':1,'b':3},{'a':2,'b':4}],
columns = ['a','b']).set_index('a')
df2 = pd.DataFrame([{'a':1,'c':5},{'a':2,'c':6}],columns = ['a','c']).set_index('a')
>>> df1
b
a
1 2
1 3
2 4
>>> df2
c
a
1 5
2 6
</code></pre>
<p>The default <code>concat</code> behavior is to fill missing values with NaNs:</p>
<pre><code>>>> pd.concat([df1,df2])
b c
a
1 2 NaN
1 3 NaN
2 4 NaN
1 NaN 5
2 NaN 6
</code></pre>
<p>I want to keep the duplicate indices from df1 and fill them with duplicated values from df2, but in pandas 0.13.1 an inner join on the columns produces an error. In more recent versions of pandas concat does what I want:</p>
<pre><code>>>> pd.concat([df1, df2], axis=1, join='inner')
b c
a
1 2 5
1 3 5
2 4 6
</code></pre>
<p>What's the best way to achieve the result I want? Is there a groupby solution? Or maybe I shouldn't be using <code>concat</code> at all?</p>
|
<p>You can perform a merge and set the params to use the index from the lhs and rhs:</p>
<pre><code>In [4]:
df1.merge(df2, left_index=True, right_index=True)
Out[4]:
b c
a
1 2 5
1 3 5
2 4 6
[3 rows x 2 columns]
</code></pre>
<p>Concat should've worked, it worked for me:</p>
<pre><code>In [5]:
pd.concat([df1,df2], join='inner', axis=1)
Out[5]:
b c
a
1 2 5
1 3 5
2 4 6
[3 rows x 2 columns]
</code></pre>
|
python|pandas|concat
| 7
|
377,152
| 24,556,426
|
Aggregate a DataFrame column by counting special elements
|
<p>I have a DataFrame <code>b</code> in the following format: </p>
<pre><code> chip sampleid WL ok
0 1 test 4 True
1 2 test 4 False
</code></pre>
<p>If I want to count number of True elements in <code>b['ok']</code>, I can run this: </p>
<pre><code>In [125]: sum(b['ok'])
Out[125]: 1
</code></pre>
<p>I now want to group this DataFrame by <code>sampleid</code> and count the chips as well as the number of True elements in the ok column. </p>
<pre><code>In [121]: c = b.groupby('sampleid', as_index=False).aggregate({'chip': lambda x: len(x.unique()), 'ok': sum})
In [122]: c
Out[122]:
sampleid chip ok
0 test 2 True
</code></pre>
<p>Why is this behaviour different than from above? How can I count the elements in the column? The expected output is: </p>
<pre><code> sampleid chip ok
0 test 2 1
</code></pre>
|
<p>This clearly is a bug: <code>aggregate</code> will try to convert the result to the same <code>dtype</code> the original <code>DataFrame</code> has. Here the <code>sum</code> will return <code>1</code>, and <code>bool(1)</code> is <code>True</code>. If both of the values in <code>ok</code> were <code>False</code>, the result would be <code>False</code> (<code>bool(0)</code>). Further examples:</p>
<pre><code>In [85]:
print df.groupby('sampleid', as_index=False).aggregate({'chip': lambda x: len(x.unique()),
'ok': lambda x: np.mean(x)})
sampleid chip ok
0 test 2 0.5 #somehow if a float(?) is returned, converting doesn't happen , despite of bool(0.5)==True
In [87]:
print df.groupby('sampleid', as_index=False).aggregate({'chip': lambda x: len(x.unique()),
'ok': lambda x: np.ptp(x)})
sampleid chip ok
0 test 2 True #np.ptp() will return 1 and bool(1)==True
</code></pre>
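<p>A simple way to sidestep the conversion is to make the column numeric before aggregating, so there is no bool dtype to cast the result back to — a sketch with the frame from the question:</p>
<pre><code>b['ok'] = b['ok'].astype(int)    # True/False -> 1/0
c = b.groupby('sampleid', as_index=False).aggregate(
        {'chip': lambda x: len(x.unique()), 'ok': sum})
</code></pre>
<p>which gives <code>ok</code> as <code>1</code> rather than <code>True</code>.</p>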
|
python|pandas
| 0
|
377,153
| 24,762,706
|
Python Scipy Optimization curve_fit
|
<p>I have two numpy arrays x and y and would like to fit a curve to the data. The fitting function is an exponential with a and t as fitting parameters, and another numpy array ex. </p>
<pre><code>import numpy as np
import scipy
import scipy.optimize as op
k=1.38e-23
h=6.63e-34
c=3e8
def func(ex,a,t):
    return a*np.exp(-h*c/(ex*1e-9*k*t))
t0=300 #initial guess
print op.curve_fit(func,x,y,t0)
</code></pre>
|
<p>Your initial guess should contain two values like <code>t0=(300, 1.)</code> since you have two fitting parameters (<code>a</code> and <code>t</code>).</p>
<p>You need to define the points you want to fit, i.e. defining <code>x</code> and <code>y</code> before calling <code>curve_fit()</code>.</p>
|
python|optimization|numpy|scipy|curve-fitting
| 3
|
377,154
| 24,638,059
|
RuntimeWarning: overflow encountered in np.exp(x**2)
|
<p>I need to calculate <code>exp(x**2)</code> where <code>x = numpy.arange(30,90)</code>. This raises the warning:</p>
<pre><code>RuntimeWarning: overflow encountered in exp
inf
</code></pre>
<p>I cannot safely ignore this warning, but neither SymPy nor mpmath is a solution and I need to perform array operations so a Numpy solution would be my dream.</p>
<p>Does anyone know how to handle this problem?</p>
|
<p>I think you can get around this by normalizing your data before exponentiating:</p>
<blockquote>
<p>Normalization</p>
</blockquote>
<p>I hit the same overflow and this is how I dealt with it. Before normalizing, my classification accuracy was 86%; after normalizing it was 96%.<br>
first:<br>
<a href="http://i.stack.imgur.com/3VmJo.gif" rel="nofollow">Min-Max scaling</a><br>
second:<br>
<a href="http://i.stack.imgur.com/BYs0q.png" rel="nofollow">Z-score standardization</a></p>
<p>These are the two common ways to implement <code>normalization</code>.<br>
I used the first one, slightly altered: I rescale so that the largest value becomes 10, and the exponential of numbers that small no longer <code>overflow</code>s.<br>
I hope this helps!</p>
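<p>A minimal sketch of that min-max rescaling (note the caveat: this changes the values you exponentiate, so it only makes sense when a downstream model cares about relative rather than absolute magnitudes):</p>
<pre><code>import numpy as np

x = np.arange(30, 90)
v = x.astype(float) ** 2

# min-max scaling as described above: rescale so the largest value is 10
v_scaled = 10.0 * (v - v.min()) / (v.max() - v.min())
safe = np.exp(v_scaled)     # no overflow warning
</code></pre>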
|
python|numpy|exp
| 3
|
377,155
| 24,825,275
|
Selecting rows by members of a list contained in a cell
|
<p>In my dataframe I have a column that contains lists of items. I would like to select only those rows that contain all or several items. At least matching a list would be great.</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[2,[2,3,8]]], columns=['a','b'])
df
</code></pre>
<p>I have tried the following: </p>
<pre><code> df[df['b'] == [2,3,8]]
df[[2,3,8] in df['b']] # and etc.
</code></pre>
<p>I feel blindfolded here...</p>
<p>To FooBar:</p>
<p>I am analysing scientific fields. The list contains codes of different scientific fields, and each row represents a case in which these fields co-occur. I could keep the list members in separate columns, but the problem is that the number of co-occurring fields varies, so I thought it was OK to keep a list in a cell.</p>
|
<p>I think you can do the following :</p>
<pre><code>idx = []
S = [2,3,8]
for i, line in df.iterrows():
    if set(S).issubset(line['b']):
        idx.append(i)
</code></pre>
<p>Now, you can select only the rows you're interested in :</p>
<pre><code>df_subset = df.ix[idx]
</code></pre>
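<p>The same test can also be written without the explicit loop, using <code>apply</code> on the column (a sketch, with <code>S</code> as above):</p>
<pre><code>mask = df['b'].apply(lambda lst: set(S).issubset(lst))
df_subset = df[mask]
</code></pre>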
|
python|pandas|selection|dataframe
| 2
|
377,156
| 24,597,446
|
pandas TimeSeries diff() reverts to Series
|
<p>I am working with some TimeSeries data in this format:</p>
<blockquote>
<p>1984-12-12 14:08:00<br>
1984-12-12 14:25:00<br>
1984-12-12 14:47:00<br>
1984-12-12 16:37:00<br>
1984-12-12 16:37:00<br>
1984-12-12 16:37:00<br>
1984-12-12 17:52:00<br>
1984-12-12 17:52:00<br>
1984-12-12 19:29:00 </p>
</blockquote>
<p>Over the past few <code>days!</code>, what seemed to be a few simple operations (a pleasant afternoon), has turned hackish and grim.</p>
<p>Here are the reqs btw:</p>
<ul>
<li>take the difference between certain rows in a TimeSeries </li>
<li>generate the cumsum of the differences. </li>
</ul>
<p>First, when I approach pandas and the whole <code>group-apply-combine</code> paradigm, what I like to do is </p>
<ul>
<li>create some group over the DataFrame </li>
<li>write a function that takes a group object and returns a group object</li>
<li>use a lamda apply to pass groups to the function</li>
</ul>
<p>I believe this is standard, and the reason I like using it is implicit concatentation of groups, multiple columns, and new column insertion. (it also removes looping over groups, makes vectorization easier) ... but i think it has trouble dealing with empty groups...</p>
<p>Anyhow, to get the differences of the TimeSeries, I found using <code>shift()</code> to get the time differences threw a <code>StopIteration</code> error, using <code>diff(1)</code> threw no errors. </p>
<p>However, the new delta column (the time difference between rows with events) turns into a Series.</p>
<pre><code>time ev delta
1984-12-12 14:08:00 1 NaT
1984-12-12 14:25:00 1 00:17:00
1984-12-12 14:47:00 1 00:22:00
1984-12-12 16:37:00 0 01:50:00
1984-12-12 16:37:00 1 01:50:00
1984-12-12 16:37:00 0 01:50:00
1984-12-12 17:52:00 0 01:15:00
1984-12-12 17:52:00 1 01:15:00
1984-12-12 19:29:00 1 01:37:00
</code></pre>
<p>Trying to convert the Series to a TimeSeries proved unfruitful. An error is thrown due to a format issue (a verylongnumber+L is found, not in hour,minute,sec format); this apparently aborts the whole attempt and try/except can't get past it.</p>
<pre><code>try:
    pd.to_datetime(d['delta'], format='%H:%M:%S')
except:
    pass
</code></pre>
<p>Another error that keeps popping up is <code>StopIteration</code> error from getting a sum of the times. </p>
<pre><code>gg['cumt'] = pd.rolling_apply( gg['time'], 2, np.sum )
gg['cumt'] = pd.rolling_sum(gg['time'],2).shift(1)
gg['cumt'] = gg.apply(lambda x: pd.expanding_sum(x['time'], min_periods=2) )
</code></pre>
<p>I believe a simple <code>cumsum</code>, <code>gg['cumt'] = gg['tavg'].cumsum()</code>, did not throw an error, but the time formatting issue causes the strings to convert to some int and they are summed as tiny numbers. </p>
<p>Any help, general or specific is appreciated:</p>
<p>I like the simple idea of writing a function and returning a group. I haven't explored the <code>transform</code> function too much (I don't think I could get it to work); does returning modified groups in functions remove the need for transforms/broadcasting? Is this what is causing my <code>StopIteration</code> error? I get the feeling that it cannot deal with some groups being empty.</p>
|
<p>Pandas 0.12.0, Numpy 1.7.1, Python 2.7.5, Linux Mint</p>
<pre><code>import pandas as pd
import StringIO
data = '''time
1984-12-12 14:08:00
1984-12-12 14:25:00
1984-12-12 14:47:00
1984-12-12 16:37:00
1984-12-12 16:37:00
1984-12-12 16:37:00
1984-12-12 17:52:00
1984-12-12 17:52:00
1984-12-12 19:29:00'''
df = pd.read_csv(StringIO.StringIO(data))
df['time'] = pd.DatetimeIndex(df['time'])
df['delta'] = df['time'].diff()
#df['delta'] = pd.TimeSeries(df['delta']) # sorry, not needed
#df['delta'][0] = 0 # to remove NaT
# better method to remove NaT - thanks to Jeff
df['delta'] = df['delta'].fillna(0)
df['cumsum'] = df['delta'].cumsum()
print df
</code></pre>
<p>result</p>
<pre><code> time delta cumsum
0 1984-12-12 14:08:00 00:00:00 00:00:00
1 1984-12-12 14:25:00 00:17:00 00:17:00
2 1984-12-12 14:47:00 00:22:00 00:39:00
3 1984-12-12 16:37:00 01:50:00 02:29:00
4 1984-12-12 16:37:00 00:00:00 02:29:00
5 1984-12-12 16:37:00 00:00:00 02:29:00
6 1984-12-12 17:52:00 01:15:00 03:44:00
7 1984-12-12 17:52:00 00:00:00 03:44:00
8 1984-12-12 19:29:00 01:37:00 05:21:00
</code></pre>
|
python|pandas
| 4
|
377,157
| 24,870,953
|
Does pandas iterrows have performance issues?
|
<p>I have noticed very poor performance when using iterrows from pandas.</p>
<p>Is it specific to iterrows and should this function be avoided for data of a certain size (I'm working with 2-3 million rows)?</p>
<p><a href="https://github.com/pydata/pandas/issues/7683" rel="nofollow noreferrer">This discussion</a> on GitHub led me to believe it is caused when mixing dtypes in the dataframe, however the simple example below shows it is there even when using one dtype (float64). This takes 36 seconds on my machine:</p>
<pre><code>import pandas as pd
import numpy as np
import time
s1 = np.random.randn(2000000)
s2 = np.random.randn(2000000)
dfa = pd.DataFrame({'s1': s1, 's2': s2})
start = time.time()
i=0
for rowindex, row in dfa.iterrows():
    i+=1
end = time.time()
print end - start
</code></pre>
<p>Why are vectorized operations like apply so much quicker? I imagine there must be some row by row iteration going on there too.</p>
<p>I cannot figure out how to not use iterrows in my case (this I'll save for a future question). Therefore I would appreciate hearing if you have consistently been able to avoid this iteration. I'm making calculations based on data in separate dataframes.</p>
<p>A simplified version of what I want to run:</p>
<pre><code>import pandas as pd
import numpy as np
#%% Create the original tables
t1 = {'letter':['a','b'],
'number1':[50,-10]}
t2 = {'letter':['a','a','b','b'],
'number2':[0.2,0.5,0.1,0.4]}
table1 = pd.DataFrame(t1)
table2 = pd.DataFrame(t2)
#%% Create the body of the new table
table3 = pd.DataFrame(np.nan, columns=['letter','number2'], index=[0])
#%% Iterate through filtering relevant data, optimizing, returning info
for row_index, row in table1.iterrows():
    t2info = table2[table2.letter == row['letter']].reset_index()
    table3.ix[row_index,] = optimize(t2info,row['number1'])

#%% Define optimization
def optimize(t2info, t1info):
    calculation = []
    for index, r in t2info.iterrows():
        calculation.append(r['number2']*t1info)
    maxrow = calculation.index(max(calculation))
    return t2info.ix[maxrow]
</code></pre>
|
<p>Generally, <code>iterrows</code> should only be used in very, very specific cases. This is the general order of precedence for performance of various operations:</p>
<ol>
<li>vectorization</li>
<li>using a custom Cython routine</li>
<li>apply
<ul>
<li>reductions that can be performed in Cython</li>
<li>iteration in Python space</li>
</ul>
</li>
<li>itertuples</li>
<li>iterrows</li>
<li>updating an empty frame (e.g., using loc one-row-at-a-time)</li>
</ol>
<p>Using a custom Cython routine is usually too complicated, so let's skip that for now.</p>
<ol>
<li><p>Vectorization is <em>always</em>, <em>always</em> the first and best choice. However, there is a small set of cases (usually involving a recurrence) which cannot be vectorized in obvious ways. Furthermore, on a smallish <code>DataFrame</code>, it may be faster to use other methods. (A tiny illustration of the gap versus <code>iterrows</code> follows this list.)</p>
</li>
<li><p><code>apply</code> <em>usually</em> can be handled by an iterator in Cython space. This is handled internally by pandas, though it depends on what is going on inside the <code>apply</code> expression. For example, <code>df.apply(lambda x: np.sum(x))</code> will be executed pretty swiftly, though of course, <code>df.sum(1)</code> is even better. However something like <code>df.apply(lambda x: x['b'] + 1)</code> will be executed in Python space, and consequently is much slower.</p>
</li>
<li><p><code>itertuples</code> does not box the data into a <code>Series</code>. It just returns the data in the form of tuples.</p>
</li>
<li><p><code>iterrows</code> <em>does</em> box the data into a <code>Series</code>. Unless you really need this, use another method.</p>
</li>
<li><p>Updating an empty frame a-single-row-at-a-time. I have seen this method used WAY too much. It is by far the slowest. It is probably common place (and reasonably fast for some Python structures), but a <code>DataFrame</code> does a fair number of checks on indexing, so this will always be very slow to update a row at a time. Much better to create new structures and <code>concat</code>.</p>
</li>
</ol>
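<p>To make that gap concrete, here is a tiny self-contained comparison — not your exact computation, just the same work done vectorized versus row by row (timings vary by machine, but the difference is typically orders of magnitude):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randn(10000), 'b': np.random.randn(10000)})

# vectorized: one call, executed in compiled code
s_vec = df['a'] + df['b']

# iterrows: every row gets boxed into a Series before the addition happens in Python
s_iter = pd.Series([row['a'] + row['b'] for _, row in df.iterrows()],
                   index=df.index)
</code></pre>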
|
python|performance|pandas|iteration
| 246
|
377,158
| 24,935,282
|
Pandas: merging two dataframes
|
<p>The output of a MERGE operation on two pandas data frames does not yield the expected result:</p>
<pre><code>**dfmatrix**:
… young label filename
0 … 1 neg cv005_29357
1 … 0 neg cv006_17022
2 … 0 neg cv007_4992
3 … 1 neg cv008_29326
4 … 1 neg cv009_29417
**dfscores**:
filename score
0 cv005_29357 -10
1 cv006_17022 5
dfnew = pandas.merge(dfmatrix, dfscores, on='filename', how='outer', left_index=False, right_index=False)
**dfnew**:
… young label filename score_y
0 … 0 neg cv005_29357 NaN
1 … 1 neg cv006_17022 NaN
2 … 0 neg cv007_4992 NaN
3 … 0 neg cv008_29326 NaN
4 … 1 neg cv009_29417 NaN
Excpected Output:
**dfnew**:
… young label filename score_y
0 … 0 neg cv005_29357 -10
1 … 1 neg cv006_17022 5
2 … 0 neg cv007_4992 NaN
3 … 0 neg cv008_29326 NaN
4 … 1 neg cv009_29417 NaN
</code></pre>
<p>What am I doing wrong?</p>
<p>Update: <a href="https://stackoverflow.com/questions/18792918/pandas-combining-2-data-frames-join-on-a-common-column">this post</a> suggests that MERGE is the way to go for the purposes of joining two data frames</p>
|
<p>The problem was at the file level: the entries in the <code>filename</code> column of the <code>dfscores</code> file being read had trailing whitespace, which caused the JOIN to fail. Admittedly, this is not a glorious moment for me, but nevertheless these things happen and I think it is worth posting the answer as it may happen to other less experienced coders.</p>
<p>To automate the process:</p>
<p><code>dfscores['filename'] = dfscores['filename'].map(lambda x: x.strip())</code></p>
<p>source: <a href="https://stackoverflow.com/questions/13682044/pandas-dataframe-remove-unwanted-parts-from-strings-in-a-column">Pandas DataFrame: remove unwanted parts from strings in a column</a></p>
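<p>Equivalently, the vectorized string methods do the same thing and read a little more naturally:</p>
<p><code>dfscores['filename'] = dfscores['filename'].str.strip()</code></p>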
|
python|pandas
| 0
|
377,159
| 30,155,740
|
Incorrect Pandas DataFrame creation using lists
|
<p>I want to create a data frame using Python's pandas by reading a text file. The values are tab-separated, but when I use this code:</p>
<pre><code>import sys
import pandas as pd
query = sys.argv[1]
df = pd.DataFrame()
with open(query) as file_open:
    for line in iter(file_open.readline, ''):
        if line.startswith("#CHROM"):
            columns = line.split("\t")
        if line.startswith("chr7"):
            df = df.append(line.split("\t"))
print df
print len(df)
</code></pre>
<p>My output is:</p>
<pre><code>...
0 chr7
1 158937585
2 rs3763427
3 T
4 C
5 931.21
6 .
7 AC=2;AF=1.00;AN=2;DP=24;Dels=0.00;FS=0.000;HRu...
8 GT:DP:GQ:PL:A:C:G:T:IR
9 1/1:24:72.24:964,72,0:0,0:11,12:0,0:0,0:0\n
0 chr7
1 158937597
2 .
3 C
4 CG
5 702.73
6 .
7 AC=2;AF=1.00;AN=2;BaseQRankSum=-1.735;DP=19;FS...
8 GT:DP:GQ:PL:A:C:G:T:IR
9 1/1:19:41.93:745,42,0:0,0:10,8:0,0:0,0:17\n
[510350 rows x 1 columns]
510350
</code></pre>
<p>The text file contains this format:</p>
<pre><code>#CHROM \t POS \t ID \t REF \t ALT \t QUAL \t FILTER \t INFO \n
chr7 \t 149601 \t tMERGED_DEL_2_39754 \t T \t .\t 141.35 \t . \t AC=0;AF=0.00;AN=2;DP=37;MQ=37.00;MQ0=0;1000gALT=<DEL>;AF1000g=0.09.. \n
chr7 \t 149616 \t rs190051229 \t C \t . \t 108.65 \t . \t AC=0;AF=0.00;AN=2;DP=35;MQ=37.00;MQ0=0;1000gALT=T;AF1000g=0.00.. \n
...
</code></pre>
<p>I want the data frame to look like:</p>
<pre><code> #CHROM POS ID REF ALT QUAL FILTER INFO
chr7 149601 MERGED.. T . 141.35 . AC=0;AF=0.00;A..
chr7 149616 rs1900.. C . 108.65 . AC=0;AF=0.00;A..
...
</code></pre>
<p>Reading each line with the code above creates a list of the values in that line:</p>
<pre><code>['chr7','149601','MERGED..','T','.','141.35','.','AC=0;AF=0;A..'\n]
</code></pre>
<p>What is wrong with my code?</p>
<p>Thank you.</p>
<p>Rodrigo</p>
|
<p>Don’t read the file by hand. Use pandas’ powerful <code>read_csv</code>:</p>
<pre><code>df = pd.read_csv(query, sep='\t')
</code></pre>
<p>Full program:</p>
<pre><code>import sys
import pandas as pd
query = sys.argv[1]
df = pd.read_csv(query, sep='\t')
print df
</code></pre>
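<p>If, as in the original loop, only the <code>chr7</code> rows are of interest, a follow-up boolean filter keeps it simple (this assumes the header really parses to a first column named <code>#CHROM</code>):</p>
<pre><code>df = df[df['#CHROM'] == 'chr7']
</code></pre>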
|
python|list|pandas|dataframe
| 2
|
377,160
| 29,852,233
|
Assigning coefficient vector back to features in scikit learn Lasso
|
<p>I am running a Lasso in scikit learn on a dataset. Here is how my design matrix(X) looks like: </p>
<pre><code> Year Country SW NY.GDP.DEFL.KD.ZG.1 NY.GDP.PCAP.KD.ZG NY.GDP.DEFL.KD.ZG NE.IMP.GNFS.ZS NY.GDP.DISC.CN FS.AST.PRVT.GD.ZS FS.AST.DOMS.GD.ZS NY.GDS.TOTL.ZS NY.GDP.DISC.KN NY.GDP.NGAS.RT.ZS NY.GDP.PETR.RT.ZS NY.GDP.COAL.RT.ZS NY.GDP.MINR.RT.ZS NY.GDP.TOTL.RT.ZS MS.MIL.XPND.GD.ZS
0 0 0 1 -3576217.383052 -5146876.546040 -3471506.772186 -2633821.885258 -3.680928e+06 91.575314 99.278420 -5670429.600369 -3.785639e+06 -4832744.713442 -5461008.378638 -3366796.16132 -3995059.826515 -5565718.989504 -1691426.387465
1 1 0 1 5.713486 0.563529 4.713486 21.969161 -5.000000e+06 88.625556 92.244479 23.625253 1.309500e+10 1.089173 0.983267 0.00000 1.471053 3.860570 2.057921
2 2 0 1 3.559686 2.640931 2.559686 21.466621 -1.000000e+06 87.785550 93.413707 24.273287 1.558700e+10 1.014641 1.021970 0.00000 1.371797 3.681716 1.925137
3 3 0 1 1.337874 3.811404 0.337874 20.646004 1.000000e+06 84.262083 91.313310 23.840716 1.962200e+10 0.445549 0.412880 0.00000 1.079369 2.178213 1.994438
4 0 1 1 7.638720 9.914861 6.638720 25.640006 -1.305679e+11 129.923249 146.277785 51.979295 -6.818467e+11 0.164374 1.500932 2.37375 2.563449 6.954085 2.079635
</code></pre>
<p>It has three categorical features in the beginning. </p>
<p>Here is how my Target vector(Y) looks like: </p>
<pre><code>0 -0.003094
1 -0.015327
2 0.100617
3 0.067728
4 0.089962
</code></pre>
<p>Both are currently pandas data frame/Series. </p>
<p>Now I recode my categorical variables in X using OneHotEncoder of scikit-learn: </p>
<pre><code>from sklearn import preprocessing
X_train=preprocessing.OneHotEncoder(categorical_features=[0,1,2],sparse=False).fit_transform(data_train)
</code></pre>
<p>This transforms the data to something like this: </p>
<pre><code>X_train[0:2]
Out[473]:
array([[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
-3.57621738e+06, -5.14687655e+06, -3.47150677e+06,
-2.63382189e+06, -3.68092799e+06, 9.15753144e+01,
9.92784200e+01, -5.67042960e+06, -3.78563860e+06,
-4.83274471e+06, -5.46100838e+06, -3.36679616e+06,
-3.99505983e+06, -5.56571899e+06, -1.69142639e+06],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.00000000e+00,
5.71348642e+00, 5.63529053e-01, 4.71348642e+00,
2.19691610e+01, -5.00000000e+06, 8.86255560e+01,
9.22444788e+01, 2.36252526e+01, 1.30950000e+10,
1.08917343e+00, 9.83266854e-01, 0.00000000e+00,
1.47105308e+00, 3.86057046e+00, 2.05792067e+00]])
</code></pre>
<p>After this I do missing value imputation: </p>
<pre><code>X_imputed=preprocessing.Imputer().fit_transform(X_train)
X_imputed[0:1]
Out[474]:
array([[ 1. , 0. , 0. ,
0. , 1. , 0. ,
0. , 0. , 0. ,
0. , 0. , 0. ,
0. , 0. , 1. ,
-3576217.38305151, -5146876.54603993, -3471506.77218561,
-2633821.88525845, -3680927.9939174 , 91.57531444,
99.27842 , -5670429.60036941, -3785638.6047833 ,
-4832744.71344225, -5461008.37863762, -3366796.16131972,
-3995059.82651509, -5565718.98950351, -1691426.3874654 ]])
</code></pre>
<p>By now I have started getting confused about the order of variables: after using OneHotEncoder my data frame is converted to a numpy array and the headers are stripped. So I am not sure what the first 13 columns (which are dummies for the three categoricals) are, or in which order they appear. </p>
<p>Secondly, I go ahead and run LassoCV to get the right alpha value of Lasso and corresponding coefficients. </p>
<pre><code>from sklearn import linear_model
lasso=linear_model.LassoCV(max_iter=2000,cv=10,normalize=True)
lasso.fit(X_imputed,Y_train)
</code></pre>
<p>When I check which alpha value it finally chose using cross validation
it gave this: </p>
<pre><code>lasso.alpha_
Out[476]:
4.1303618102099771e-05
</code></pre>
<p>So I am assuming this alpha value is the best one, i.e. the one that gives the least MSE over all 10 folds. </p>
<p>But now when I try to find the lasso path for all the alphas it tried, here is what I get. I am creating a numpy array to store the MSE of all 10 folds for each alpha chosen by lasso (100 alphas for 10 folds):</p>
<pre><code>scores=np.zeros((100,2))
scores[:,0]=lasso.mse_path_[:,0]
scores[:,1]=np.mean(lasso.mse_path_[:,1:],axis=1)
scr=scores[scores[:,1].argsort()]
</code></pre>
<p>Since I have sorted my scores matrix in ascending order of MSE for each alpha , I expect the first record to show me the alpha for which the score is min. </p>
<pre><code>scr[0]
Out[477]:
array([ 441334.91133953, 0.00739538])
</code></pre>
<p>But I see an alpha value totally different from what I got in the above step using lasso.alpha_. That one was to the power of -5 and this one is to the power of +5. Why is that? </p>
<p>Thirdly, here is my coefficient vector from lasso. How do I know which coefficient is mapped to which feature in my original data set (data_train)? This is what I need in the end: the weights corresponding to each feature for the best chosen alpha. </p>
<pre><code>lasso.coef_
Out[478]:
array([ 0.02930289, 0.01039652, -0. , -0.05448752, 0.01310975,
0. , -0.03755883, 0.02754805, -0.0498908 , -0.10531218,
-0.08303772, 0.00465392, 0. , -0.04597282, 0. ,
0.00000003, 0. , 0. , 0. , 0. ,
-0.00101291, 0.00155892, 0. , 0. , 0. ,
0. ,
</code></pre>
<p>Right now, because the headers are stripped, I have no clue which weights correspond to which feature. Also, why is the alpha value different when I use lasso.alpha_ versus when I look at lasso.mse_path_ and check the lowest MSE? </p>
<p>Any idea? </p>
|
<p>To relate the feature indices back to the original feature columns, you can use the <code>feature_indices_</code> attribute of <code>OneHotEncoder</code> after fitting:</p>
<pre><code>from sklearn import preprocessing
encoder = preprocessing.OneHotEncoder(categorical_features=[0,1,2])
X_train = encoder.fit_transform(data_train)
print encoder.feature_indices_
</code></pre>
<p>Output:</p>
<pre><code>[0 4 6 8]
</code></pre>
<p>According to the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>feature_indices_ : array of shape (n_features,)
Indices to feature ranges. Feature i in the original data is mapped to features from feature_indices_[i] to feature_indices_[i+1] (and then potentially masked by active_features_ afterwards)</p>
</blockquote>
<p>In this case, the first 4 dimensions in the one-hot encoded space correspond to column <code>Year</code>, the next 2 correspond to column <code>Country</code> and the last 2 correspond to <code>SW</code>.</p>
|
python|numpy|pandas|scikit-learn
| 1
|
377,161
| 29,882,965
|
Python list to pandas dataframe
|
<p>I have a list that follows this format:</p>
<pre><code>a=['date name','10150425010245 name1','10150425020245 name2']
</code></pre>
<p>I am trying to convert this to Pandas df:</p>
<pre><code>newlist=[]
for item in a:
    newlist.append(item.split(' '))
</code></pre>
<p>Now, convert this to df:</p>
<pre><code>pd.DataFrame(newlist)
</code></pre>
<p>which results in</p>
<pre><code> 0 1
0 date name
1 10150425010245 name1
2 10150425020245 name2
</code></pre>
<p>I want to have 'date' and 'name' as header, but I can't manage to do that. Is there a more efficient way to automatically convert a list of strings into a dataframe than this?</p>
|
<p>You were on the right track. With a slight modification, your code works fine.</p>
<pre><code> import pandas as pd
a=['date name','10150425010245 name1','10150425020245 name2']
newlist=[]
for item in a:
    newlist.append(item.split(' '))
newlist2=pd.DataFrame(newlist,columns=["date","name"])[1:]
newlist2
date name
10150425010245 name1
10150425020245 name2
</code></pre>
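<p>A slightly tidier variant (just a sketch) takes the column names straight from the first entry instead of slicing the header row off afterwards:</p>
<pre><code>import pandas as pd

a = ['date name', '10150425010245 name1', '10150425020245 name2']
rows = [item.split() for item in a]
df = pd.DataFrame(rows[1:], columns=rows[0])
</code></pre>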
|
python|pandas
| 4
|
377,162
| 30,028,149
|
how to select column values to display in pandas groupby
|
<p>I am using pandas 0.16.0, I have data:</p>
<pre><code>id A B
1 10 100
2 10 101
3 20 102
</code></pre>
<p>when I call</p>
<pre><code>df.groupby(['A']).groups
</code></pre>
<p>I have </p>
<pre><code>{10: [1 2], 20: [3]}
</code></pre>
<p>and I want to have this (values from column B)</p>
<pre><code>{10: [100, 101], 20: [102]}
</code></pre>
<p>please help</p>
|
<p>One way is to group by, apply a function that collects each group's values into a list, and then convert to a dict.</p>
<pre><code>In [92]: df.groupby(['A']).apply(lambda x: x['B'].tolist()).to_dict()
Out[92]: {10: [100, 101], 20: [102]}
</code></pre>
|
python|pandas|data-analysis
| 2
|
377,163
| 30,037,186
|
Splitting each item in an array into a separate data frame column
|
<p>I'm relatively new to python. I have a data frame and I need to split each character of the data in each column into its own column in another data frame. I split the data into a dictionary, but just found that I need a new data frame. Here's the deal:</p>
<p>The source data frame looks like this:</p>
<pre><code> Col1
1 100100
2 000000
3 020001
4 100300
</code></pre>
<p>I have a dictionary like this:</p>
<pre><code>1: "['1', '0', '0', '1', '0', '0']",
2: "['0', '0', '0', '0', '0', '0']",
3: "['0', '2', '0', '0', '0', '1']",
4: "['1', '0', '0', '3', '0', '0']"
</code></pre>
<p>and need to end up with a data frame in this format:</p>
<pre><code> 0 1 2 3 4 5
1 1 0 0 1 0 0
2 0 0 0 0 0 0
3 0 2 0 0 0 1
4 1 0 0 3 0 0
</code></pre>
<p>Any advice would be appreciated - I haven't had any luck in my searches. I'd assume going direct from the source data to the new data frame is ideal. Or is using the dictionary I created (source==>dict==>new data frame) a better route? Thanks.</p>
|
<p>It's not the most elegant, but life is short, so I'd apply <code>list</code> to get the values and then <code>pd.Series</code> to expand them into columns:</p>
<pre><code>>>> df
Col1
1 100100
2 000000
3 020001
4 100300
>>> df.Col1.apply(list).apply(pd.Series).astype(int)
0 1 2 3 4 5
1 1 0 0 1 0 0
2 0 0 0 0 0 0
3 0 2 0 0 0 1
4 1 0 0 3 0 0
</code></pre>
|
python|pandas
| 3
|
377,164
| 30,178,871
|
How do I use numpy.genfromtxt to read a lower triangular matrix into a numpy array?
|
<p>I have a lower triangular matrix like this</p>
<pre><code>1
2 3
4 5 6
</code></pre>
<p>in a text file, and I want to read it into a numpy array with zeros above the main diagonal. The simplest code I can think of</p>
<pre><code>import io
import scipy
data = "1\n2 3\n4 5 6"
scipy.genfromtxt(io.BytesIO(data.encode()))
</code></pre>
<p>fails with </p>
<pre><code>ValueError: Some errors were detected !
Line #2 (got 2 columns instead of 1)
Line #3 (got 3 columns instead of 1)
</code></pre>
<p>which makes sense, because in the text file, there isn't <em>anything</em> in the upper triangular part of the matrix, so numpy doesn't know what to interpret as missing values.</p>
<p>Looking at the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">documentation</a>, I want something like the <code>invalid_raise = False</code> option, except I don't want to skip the "invalid" rows. </p>
<hr>
<p>With some modifications from the answer below, the final code I'm using is</p>
<pre><code>import scipy
with open("data.txt", "r") as r:
    data = r.read()
n = data.count("\n") + 1
mat = scipy.zeros((n, n))
mat[scipy.tril_indices_from(mat)] = data.split()
</code></pre>
|
<p><code>np.tril_indices_from()</code> makes it easy to populate the lower triangle of your array through fancy indexing:</p>
<pre><code>data = "1\n2 3\n4 5 6"
n = len(data.split('\n'))
data = data.replace('\n', ' ').split()
a = np.zeros((n, n))
a[np.tril_indices_from(a)] = data
print(a)
#array([[ 1., 0., 0.],
# [ 2., 3., 0.],
# [ 4., 5., 6.]])
</code></pre>
|
python|arrays|numpy|matrix
| 2
|
377,165
| 29,836,477
|
Pandas create new column with count from groupby
|
<p>I have a df that looks like the following:</p>
<pre><code>id item color
01 truck red
02 truck red
03 car black
04 truck blue
05 car black
</code></pre>
<p>I am trying to create a df that looks like this:</p>
<pre><code>item color count
truck red 2
truck blue 1
car black 2
</code></pre>
<p>I have tried </p>
<pre><code>df["count"] = df.groupby("item")["color"].transform('count')
</code></pre>
<p>But it is not quite what I am searching for.</p>
<p>Any guidance is appreciated</p>
|
<p>That's not a new column, that's a new DataFrame:</p>
<pre><code>In [11]: df.groupby(["item", "color"]).count()
Out[11]:
id
item color
car black 2
truck blue 1
red 2
</code></pre>
<p>To get the result you want is to use <code>reset_index</code>:</p>
<pre><code>In [12]: df.groupby(["item", "color"])["id"].count().reset_index(name="count")
Out[12]:
item color count
0 car black 2
1 truck blue 1
2 truck red 2
</code></pre>
<p>To get a "new column" you could use transform:</p>
<pre><code>In [13]: df.groupby(["item", "color"])["id"].transform("count")
Out[13]:
0 2
1 2
2 2
3 1
4 2
dtype: int64
</code></pre>
<p>I recommend reading the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html">split-apply-combine section of the docs</a>.</p>
|
python|pandas
| 125
|
377,166
| 30,053,329
|
Elegant way to create empty pandas DataFrame with NaN of type float
|
<p>I want to create a Pandas DataFrame filled with NaNs. During my research I found <a href="https://stackoverflow.com/questions/13784192/creating-an-empty-pandas-dataframe-then-filling-it">an answer</a>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(index=range(0,4),columns=['A'])
</code></pre>
<p>This code results in a DataFrame filled with NaNs of type "object". So they cannot be used later on for example with the <code>interpolate()</code> method. Therefore, I created the DataFrame with this complicated code (inspired by <a href="https://stackoverflow.com/questions/1704823/initializing-numpy-matrix-to-something-other-than-zero-or-one">this answer</a>):</p>
<pre><code>import pandas as pd
import numpy as np
dummyarray = np.empty((4,1))
dummyarray[:] = np.nan
df = pd.DataFrame(dummyarray)
</code></pre>
<p>This results in a DataFrame filled with NaN of type "float", so it can be used later on with <code>interpolate()</code>. Is there a more elegant way to create the same result?</p>
|
<p>Simply pass the desired value as first argument, like <code>0</code>, <code>math.inf</code> or, here, <code>np.nan</code>. The constructor then initializes and fills the value array to the size specified by arguments <code>index</code> and <code>columns</code>:</p>
<pre><code>>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.nan, index=[0, 1, 2, 3], columns=['A', 'B'])
>>> df
A B
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
>>> df.dtypes
A float64
B float64
dtype: object
</code></pre>
|
python|pandas|numpy|dataframe|nan
| 124
|
377,167
| 30,272,137
|
groupby on sparse matrix in pandas: filling them first
|
<p>I have a pandas DataFrame <code>df</code> with shape (1000000,3) as follows:</p>
<pre><code>id cat team
1 'cat1' A
1 'cat2' A
2 'cat3' B
3 'cat1' A
4 'cat3' B
4 'cat1' B
</code></pre>
<p>Then I dummify with respect to the <code>cat</code> column in order to get ready for a machine learning classification.</p>
<pre><code>df2 = pandas.get_dummies(df,columns=['cat'], sparse=True)
</code></pre>
<p>But when I try to do:</p>
<pre><code>df2.groupby(['id','team']).sum()
</code></pre>
<p>It get stuck and the computing never ends. So instead of grouping by right away, I try:</p>
<pre><code>df2 = df2.fillna(0)
</code></pre>
<p>But it does not work and the DataFrame is still full of <code>NaN</code> values. Why does the <code>fillna()</code> function not fill my DataFrame as it should?
In other words, how can a pandas sparse matrix I got from get_dummies be filled with 0 instead of NaN?</p>
<p>I also tried:</p>
<pre><code>df2 = pandas.get_dummies(df,columns=['cat'], sparse=True).to_sparse(fill_value=0)
</code></pre>
<p>This time, <code>df2</code> is well filled with 0, but when I try:</p>
<pre><code>print df2.groupby(['id','team']).sum()
</code></pre>
<p>I get:</p>
<pre><code>C:\Anaconda\lib\site-packages\pandas\core\groupby.pyc in loop(labels, shape)
3545 for i in range(1, nlev):
3546 stride //= shape[i]
-> 3547 out += labels[i] * stride
3548
3549 if xnull: # exclude nulls
ValueError: operands could not be broadcast together with shapes (1205800,) (306994,) (1205800,)
</code></pre>
<p>My solution was to do:</p>
<pre><code>df2 = pandas.DataFrame(np.nan_to_num(df2.as_matrix()))
df2.groupby(['id','team']).sum()
</code></pre>
<p>And it works, but it takes a lot of memory. Can someone help me find a better solution, or at least understand why I can't fill a sparse matrix with zeros easily? And why is it impossible to use <code>groupby()</code> then <code>sum()</code> on a sparse matrix?</p>
|
<p>I think your problem is due to mixing of dtypes. But you could get around it like this. First, provide only the relevant column to <code>get_dummies()</code> rather than the whole dataframe:</p>
<pre><code>df2 = pd.get_dummies(df['cat']).to_sparse(0)
</code></pre>
<p>After that, you can add other variables back but everything needs to be numeric. A pandas sparse dataframe is just a wrapper on a sparse (and homogenous dtype) numpy array.</p>
<pre><code>df2['id'] = df['id']
'cat1' 'cat2' 'cat3' id
0 1 0 0 1
1 0 1 0 1
2 0 0 1 2
3 1 0 0 3
4 0 0 1 4
5 1 0 0 4
</code></pre>
<p>For non-numeric types, you could do the following:</p>
<pre><code>df2['team'] = df['team'].astype('category').cat.codes
</code></pre>
<p>This groupby seems to work OK:</p>
<pre><code>df2.groupby('id').sum()
'cat1' 'cat2' 'cat3'
id
1 1 1 0
2 0 0 1
3 1 0 0
4 1 0 1
</code></pre>
<p>An additional but possibly important point for memory management is that you can often save substantial memory with categoricals rather than string objects (perhaps you are already doing this though):</p>
<pre><code>df['cat2'] = df['cat'].astype('category')
df[['cat','cat2']].memory_usage()
cat 48
cat2 30
</code></pre>
<p>Not much savings here for the small example dataframe but could be a substantial difference in your actual dataframe.</p>
|
pandas|fill|sparse-matrix
| 1
|
377,168
| 30,206,293
|
Matrix/Tensor Triple Product?
|
<p>An algorithm I'm working on requires computing, in a couple places, a type of matrix triple product.</p>
<p>The operation takes three square matrices with identical dimensions, and produces a 3-index tensor. Labeling the operands <code>A</code>, <code>B</code> and <code>C</code>, the <code>(i,j,k)</code>-th element of the result is</p>
<pre><code>X[i,j,k] = \sum_a A[i,a] B[a,j] C[k,a]
</code></pre>
<p>In numpy, you can compute this with <code>einsum('ia,aj,ka->ijk', A, B, C)</code>.</p>
<p>Questions: </p>
<ul>
<li>Does this operation have a standard name?</li>
<li>Can I compute this with a single BLAS call?</li>
<li>Are there any other heavy-optimized numerical C/Fortran libraries that can compute expressions of this type?</li>
</ul>
|
<h2>Introduction and Solution Code</h2>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a>, is really hard to beat, <strong>but</strong> in rare cases, you can still beat it, if you can bring in <code>matrix-multiplication</code> into the computations. After few trials, it seems you can bring in <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow noreferrer"><code>matrix-multiplication with np.dot</code></a> to surpass the performance with <code>np.einsum('ia,aj,ka->ijk', A, B, C)</code>.</p>
<p>The basic idea is that we break down the "all einsum" operation into a combination of <code>np.einsum</code> and <code>np.dot</code> as listed below:</p>
<ul>
<li>The summations for <code>A:[i,a]</code> and <code>B:[a,j]</code> are done with <code>np.einsum</code> to get us a <code>3D array:[i,j,a]</code>.</li>
<li>This 3D array is then reshaped into a <code>2D array:[i*j,a]</code> and the third array, <code>C[k,a]</code> is transposed to <code>[a,k]</code>, with the intention of performing <code>matrix-multiplication</code> between these two, giving us <code>[i*j,k]</code> as the matrix product, as we lose the index <code>[a]</code> there.</li>
<li>The product is reshaped into a <code>3D array:[i,j,k]</code> for the final output.</li>
</ul>
<p>Here's the implementation for the first version discussed so far -</p>
<pre><code>import numpy as np
def tensor_prod_v1(A,B,C): # First version of proposed method
    # Shape parameters
    m,d = A.shape
    n = B.shape[1]
    p = C.shape[0]
    # Calculate \sum_a A[i,a] B[a,j] to get a 3D array with indices as (i,j,a)
    AB = np.einsum('ia,aj->ija', A, B)
    # Calculate entire summation losing a-ith index & reshaping to desired shape
    return np.dot(AB.reshape(m*n,d),C.T).reshape(m,n,p)
</code></pre>
<p>Since we are summing the <code>a-th</code> index across all three input arrays, one can have three different methods to sum along the a-th index. The code listed earlier was for <code>(A,B)</code>. Thus, we can also have <code>(A,C)</code> and <code>(B,C)</code> giving us two more variations, as listed next:</p>
<pre><code>def tensor_prod_v2(A,B,C):
    # Shape parameters
    m,d = A.shape
    n = B.shape[1]
    p = C.shape[0]
    # Calculate \sum_a A[i,a] C[k,a] to get a 3D array with indices as (i,k,a)
    AC = np.einsum('ia,ja->ija', A, C)
    # Calculate entire summation losing a-ith index & reshaping to desired shape
    return np.dot(AC.reshape(m*p,d),B).reshape(m,p,n).transpose(0,2,1)
def tensor_prod_v3(A,B,C):
# Shape parameters
m,d = A.shape
n = B.shape[1]
p = C.shape[0]
# Calculate \sum_a B[a,j] C[k,a] to get a 3D array with indices as (a,j,k)
BC = np.einsum('ai,ja->aij', B, C)
# Calculate entire summation losing a-ith index & reshaping to desired shape
return np.dot(A,BC.reshape(d,n*p)).reshape(m,n,p)
</code></pre>
<p>Depending upon the shapes of the input arrays, different approaches would yield different speedups with respect to each other, but we are hopeful that all would be better than the <code>all-einsum</code> approach. The performance numbers are listed in the next section.</p>
<h2>Runtime Tests</h2>
<p>This is probably the most important section, as we try to look into the speedup numbers with the three variations of the proposed approach over
the <code>all-einsum</code> approach as originally proposed in the question.</p>
<p>Dataset #1 (Equal shaped arrays) :</p>
<pre><code>In [494]: L1 = 200
...: L2 = 200
...: L3 = 200
...: al = 200
...:
...: A = np.random.rand(L1,al)
...: B = np.random.rand(al,L2)
...: C = np.random.rand(L3,al)
...:
In [495]: %timeit tensor_prod_v1(A,B,C)
...: %timeit tensor_prod_v2(A,B,C)
...: %timeit tensor_prod_v3(A,B,C)
...: %timeit np.einsum('ia,aj,ka->ijk', A, B, C)
...:
1 loops, best of 3: 470 ms per loop
1 loops, best of 3: 391 ms per loop
1 loops, best of 3: 446 ms per loop
1 loops, best of 3: 3.59 s per loop
</code></pre>
<p>Dataset #2 (Bigger A) :</p>
<pre><code>In [497]: L1 = 1000
...: L2 = 100
...: L3 = 100
...: al = 100
...:
...: A = np.random.rand(L1,al)
...: B = np.random.rand(al,L2)
...: C = np.random.rand(L3,al)
...:
In [498]: %timeit tensor_prod_v1(A,B,C)
...: %timeit tensor_prod_v2(A,B,C)
...: %timeit tensor_prod_v3(A,B,C)
...: %timeit np.einsum('ia,aj,ka->ijk', A, B, C)
...:
1 loops, best of 3: 442 ms per loop
1 loops, best of 3: 355 ms per loop
1 loops, best of 3: 303 ms per loop
1 loops, best of 3: 2.42 s per loop
</code></pre>
<p>Dataset #3 (Bigger B) :</p>
<pre><code>In [500]: L1 = 100
...: L2 = 1000
...: L3 = 100
...: al = 100
...:
...: A = np.random.rand(L1,al)
...: B = np.random.rand(al,L2)
...: C = np.random.rand(L3,al)
...:
In [501]: %timeit tensor_prod_v1(A,B,C)
...: %timeit tensor_prod_v2(A,B,C)
...: %timeit tensor_prod_v3(A,B,C)
...: %timeit np.einsum('ia,aj,ka->ijk', A, B, C)
...:
1 loops, best of 3: 474 ms per loop
1 loops, best of 3: 247 ms per loop
1 loops, best of 3: 439 ms per loop
1 loops, best of 3: 2.26 s per loop
</code></pre>
<p>Dataset #4 (Bigger C) :</p>
<pre><code>In [503]: L1 = 100
...: L2 = 100
...: L3 = 1000
...: al = 100
...:
...: A = np.random.rand(L1,al)
...: B = np.random.rand(al,L2)
...: C = np.random.rand(L3,al)
In [504]: %timeit tensor_prod_v1(A,B,C)
...: %timeit tensor_prod_v2(A,B,C)
...: %timeit tensor_prod_v3(A,B,C)
...: %timeit np.einsum('ia,aj,ka->ijk', A, B, C)
...:
1 loops, best of 3: 250 ms per loop
1 loops, best of 3: 358 ms per loop
1 loops, best of 3: 362 ms per loop
1 loops, best of 3: 2.46 s per loop
</code></pre>
<p>Dataset #5 (Bigger a-th dimension length) :</p>
<pre><code>In [506]: L1 = 100
...: L2 = 100
...: L3 = 100
...: al = 1000
...:
...: A = np.random.rand(L1,al)
...: B = np.random.rand(al,L2)
...: C = np.random.rand(L3,al)
...:
In [507]: %timeit tensor_prod_v1(A,B,C)
...: %timeit tensor_prod_v2(A,B,C)
...: %timeit tensor_prod_v3(A,B,C)
...: %timeit np.einsum('ia,aj,ka->ijk', A, B, C)
...:
1 loops, best of 3: 373 ms per loop
1 loops, best of 3: 269 ms per loop
1 loops, best of 3: 299 ms per loop
1 loops, best of 3: 2.38 s per loop
</code></pre>
<p><strong>Conclusions:</strong> We are seeing a speedup of <strong><code>8x-10x</code></strong> with the variations of the proposed approach over the <code>all-einsum</code> approach listed in the question.</p>
|
matlab|numpy|matrix|matrix-multiplication|blas
| 7
|
377,169
| 30,144,170
|
What does pandas' sub operator do?
|
<p>This is coming straight from the tutorial, which I can't understand even after reading the doc.</p>
<pre><code>In [14]: df = DataFrame({'one' : Series(randn(3), index=['a', 'b', 'c']),
....: 'two' : Series(randn(4), index=['a', 'b', 'c', 'd']),
....: 'three' : Series(randn(3), index=['b', 'c', 'd'])})
....:
In [15]: df
Out[15]:
one three two
a -0.626544 NaN -0.351587
b -0.138894 -0.177289 1.136249
c 0.011617 0.462215 -0.448789
d NaN 1.124472 -1.101558
In [16]: row = df.ix[1]
In [17]: column = df['two']
In [18]: df.sub(row, axis='columns')
Out[18]:
one three two
a -0.487650 NaN -1.487837
b 0.000000 0.000000 0.000000
c 0.150512 0.639504 -1.585038
d NaN 1.301762 -2.237808
</code></pre>
<p>Why does the second row turn into 0? Is it being <code>sub</code>-stituted with 0?</p>
<p>Also, when I use <code>row = df.ix[0]</code>, the entire second column turns into <code>NaN</code>. Why?</p>
|
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sub.html#pandas.DataFrame.sub" rel="noreferrer"><code>sub</code></a> means subtract, so lets walk through this:</p>
<pre><code>In [44]:
# create some data
df = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
'three' : pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
df
Out[44]:
one three two
a -1.536737 NaN 1.537104
b 1.486947 -0.429089 -0.227643
c 0.219609 -0.178037 -1.118345
d NaN 1.254126 -0.380208
In [45]:
# take a copy of 2nd row
row = df.ix[1]
row
Out[45]:
one 1.486947
three -0.429089
two -0.227643
Name: b, dtype: float64
In [46]:
# now subtract the 2nd row row-wise
df.sub(row, axis='columns')
Out[46]:
one three two
a -3.023684 NaN 1.764747
b 0.000000 0.000000 0.000000
c -1.267338 0.251052 -0.890702
d NaN 1.683215 -0.152565
</code></pre>
<p>So what is probably confusing you is what happens when you specify 'columns' as the axis to operate on. We've subtracted the value of the 2nd row from each row, which explains why the 2nd row has now become all 0's. The data you've passed is a Series and we're aligning on columns, so in effect we're aligning against the column names, which is why the operation is performed row-wise.</p>
<pre><code>In [47]:
# now take a copy of the first row
row = df.ix[0]
row
Out[47]:
one -1.536737
three NaN
two 1.537104
Name: a, dtype: float64
In [48]:
# perform the same op
df.sub(row, axis='columns')
Out[48]:
one three two
a 0.000000 NaN 0.000000
b 3.023684 NaN -1.764747
c 1.756346 NaN -2.655449
d NaN NaN -1.917312
</code></pre>
<p>So why do we now have a column with all <code>NaN</code> values? It's because any arithmetic operation involving a <code>NaN</code> produces a <code>NaN</code>:</p>
<pre><code>In [55]:
print(1 + np.NaN)
print(1 * np.NaN)
print(1 / np.NaN)
print(1 - np.NaN)
nan
nan
nan
nan
</code></pre>
|
python|pandas
| 5
|
377,170
| 30,042,873
|
How to rapidly load data into memory with python?
|
<p>I have a large csv file (5 GB) and I can read it with <code>pandas.read_csv()</code>. This operation takes a lot of time, 10-20 minutes. </p>
<p>How can I speed it up? </p>
<p>Would it be useful to transform the data into a <code>sqlite</code> format? If so, what should I do?</p>
<p>EDIT: More information:
The data contains 1852 columns and 350000 rows. Most of the columns are float64 and contain numbers. Some others contain strings or dates (which I suppose are treated as strings).</p>
<p>I am using a laptop with 16 GB of RAM and SSD hard drive. The data should fit fine in memory (but I know that python tends to increase the data size)</p>
<p>EDIT 2 : </p>
<p>During the loading I receive this message</p>
<pre><code>/usr/local/lib/python3.4/dist-packages/pandas/io/parsers.py:1164: DtypeWarning: Columns (1841,1842,1844) have mixed types. Specify dtype option on import or set low_memory=False.
data = self._reader.read(nrows)
</code></pre>
<p>EDIT: SOLUTION</p>
<p>Read the csv file once and save it as
<code>data.to_hdf('data.h5', 'table')</code>
This format is incredibly efficient</p>
|
<p>This actually depends on which part of reading it is taking 10 minutes.</p>
<ul>
<li>If it's actually reading from disk, then obviously any more compact form of the data will be better.</li>
<li>If it's processing the CSV format (you can tell this because your CPU is at near 100% on one core while reading; it'll be very low for the other two), then you want a form that's already preprocessed.</li>
<li>If it's swapping memory, e.g., because you only have 2GB of physical RAM, then nothing is going to help except splitting the data.</li>
</ul>
<p>It's important to know which one you have. For example, stream-compressing the data (e.g., with <code>gzip</code>) will make the first problem a lot better, but the second one even worse.</p>
<p>It sounds like you probably have the second problem, which is good to know. (However, there are things you can do that will probably be better no matter what the problem.)</p>
<hr>
<p>Your idea of storing it in a sqlite database is nice because it can at least potentially solve all three at once; you only read the data in from disk as-needed, and it's stored in a reasonably compact and easy-to-process form. But it's not the best possible solution for the first two, just a "pretty good" one.</p>
<p>In particular, if you actually do need to do array-wide work across all 350000 rows, and can't translate that work into SQL queries, you're not going to get much benefit out of sqlite. Ultimately, you're going to be doing a giant <code>SELECT</code> to pull in all the data and then process it all into one big frame.</p>
<hr>
<p>Another option is writing out the shape and structure information yourself, then writing the underlying arrays in NumPy binary form. Then, for reading, you reverse that. NumPy's binary form just stores the raw data as compactly as possible, and it's a format that can be written blindingly quickly (it's basically just dumping the raw in-memory storage to disk). That will improve both the first and second problems.</p>
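<p>A rough sketch of that route (file names and column selection here are illustrative assumptions, not from the question):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.read_csv('data.csv')                     # slow one-off conversion
num = df.select_dtypes(include=[np.number])      # keep only the homogeneous numeric block
np.save('data_numeric.npy', num.values)          # near-raw binary dump, very fast to write
cols = list(num.columns)                         # keep the structure info separately
arr = np.load('data_numeric.npy')                # fast to read back (or mmap_mode='r')
df_num = pd.DataFrame(arr, columns=cols)
</code></pre>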
<hr>
<p>Similarly, storing the data in HDF5 (either using Pandas IO or an external library like PyTables or h5py) will improve both the first and second problems. HDF5 is designed to be a reasonably compact and simple format for storing the same kind of data you usually store in Pandas. (And it includes optional compression as a built-in feature, so if you know which of the two you have, you can tune it.) It won't solve the second problem quite as well as the last option, but probably well enough, and it's much simpler (once you get past setting up your HDF5 libraries).</p>
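<p>A minimal sketch of the HDF5 round trip (file and key names are just placeholders), which is essentially what the edit at the top of the question ended up doing:</p>
<pre><code>import pandas as pd

df = pd.read_csv('data.csv')            # slow, done only once
df.to_hdf('data.h5', 'table')           # requires PyTables; compression is optional
df = pd.read_hdf('data.h5', 'table')    # fast on every subsequent run
</code></pre>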
<hr>
<p>Finally, pickling the data may sometimes be faster. <a href="https://docs.python.org/3/library/pickle.html" rel="nofollow"><code>pickle</code></a> is Python's native serialization format, and it's hookable by third-party modules—and NumPy and Pandas have both hooked it to do a reasonably good job of pickling their data.</p>
<p>(Although this doesn't apply to the question, it may help someone searching later: If you're using Python 2.x, make sure to explicitly use <a href="https://docs.python.org/2.7/library/pickle.html#data-stream-format" rel="nofollow">pickle format 2</a>; IIRC, NumPy is very bad at the default pickle format 0. In Python 3.0+, this isn't relevant, because the default format is at least 3.)</p>
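<p>For completeness, the pandas pickle round trip is equally short (again, the file name is just a placeholder):</p>
<pre><code>df.to_pickle('data.pkl')        # df from the earlier sketches
df = pd.read_pickle('data.pkl')
</code></pre>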
|
python|performance|pandas
| 4
|
377,171
| 29,997,115
|
python- pandas read_excel getting wrong numbers for index_col
|
<p>I'm trying to read a .xlsx file that has 4 sheets each with a Time and Absorbance column like below:</p>
<pre><code>Time Absorbance
0 0.1254
5 0.1278
10 0.128
15 0.1286
20 0.1303
25 0.1295
30 0.1296
35 0.1308
40 0.1301
45 0.1301
50 0.1309
...
</code></pre>
<p>I want to make a DataFrame with each sheet as a different column and the time as the row index currently my code is as follows:</p>
<pre><code>import numpy as np
import pandas as pd, datetime as dt
import glob, os
runDir = "/Users/AaronT/Documents/Lab/Cascade/DTRA"
if os.getcwd() != runDir:
os.chdir(runDir)
files = glob.glob("PTE_Kinetics*.xlsx")
df = pd.DataFrame()
for each in files:
sheets = pd.ExcelFile(each).sheet_names
for sheet in sheets:
df[sheet] = pd.read_excel(each, sheet, index_col='Time')
print df
</code></pre>
<p>However, my output does not have the proper values for the row index:</p>
<pre><code> Forced Wash Elution Wash Flow Through
0 0.1254 -0.0062 0.0544 0.0443
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 0.1278 -0.0027 0.0560 0.0459
6 NaN NaN NaN NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
10 0.1280 -0.0004 0.0564 0.0467
11 NaN NaN NaN NaN
12 NaN NaN NaN NaN
13 NaN NaN NaN NaN
14 NaN NaN NaN NaN
...
</code></pre>
<p>Maybe I'm not understanding how index_col works. I was able to make a separate DataFrame for each sheet with the proper times, but I would prefer they all be in the same one. Any suggestions?</p>
<p>Edit: Here is a link to the <a href="https://www.dropbox.com/s/5bzqfy3s0aemmmf/PTE_Kinetics_04-30-2015.xlsx?dl=0" rel="nofollow">excel file</a>.</p>
|
<p>Note that each sheet is read correctly; you're just not gluing them together right:</p>
<pre><code>In [11]: for sheet in e.sheet_names:
print(pd.read_excel("PTE_Kinetics_04-30-2015.xlsx", sheet, index_col='Time').head(3))
Absorbance
Time
0 0.1254
5 0.1278
10 0.1280
Absorbance
Time
0 -0.0062
5 -0.0027
10 -0.0004
Absorbance
Time
0 0.0544
5 0.0560
10 0.0564
Absorbance
Time
0 0.0443
5 0.0459
10 0.0467
</code></pre>
<p>Rather than as a DataFrame I would prefer to extract them into a dict:</p>
<pre><code>d = {}
for sheet in e.sheet_names:
    # .head(3) only keeps the demo output short; drop it to read the full sheets
    d[sheet] = pd.read_excel("PTE_Kinetics_04-30-2015.xlsx", sheet, index_col='Time').head(3)
</code></pre>
<p>Now you can glue them up (without worrying about excel):</p>
<pre><code>In [21]: pd.concat(d).unstack(0)
Out[21]:
Absorbance
Elution Flow Through Forced Wash Wash
Time
0 -0.0062 0.0443 0.1254 0.0544
5 -0.0027 0.0459 0.1278 0.0560
10 -0.0004 0.0467 0.1280 0.0564
</code></pre>
|
python|excel|pandas
| 0
|
377,172
| 53,723,928
|
AttributeError: 'Series' object has no attribute 'reshape'
|
<p>I'm using the scikit-learn linear regression algorithm.
While scaling the Y target feature with:</p>
<pre><code>Ys = scaler.fit_transform(Y)
</code></pre>
<p>I got</p>
<blockquote>
<p>ValueError: Expected 2D array, got 1D array instead:</p>
</blockquote>
<p>After that I reshaped using:</p>
<pre><code>Ys = scaler.fit_transform(Y.reshape(-1,1))
</code></pre>
<p>But got error again:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'reshape'</p>
</blockquote>
<p>So I checked pandas.Series documentation page and it says:</p>
<blockquote>
<p>reshape(*args, **kwargs) <em>Deprecated since version 0.19.0.</em></p>
</blockquote>
|
<p>The solution is linked from the reshape method's <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.reshape.html#pandas.Series.reshape" rel="noreferrer">documentation page</a>.</p>
<p>Instead of <code>Y.reshape(-1,1)</code> you need to use:</p>
<pre><code>Y.values.reshape(-1,1)
</code></pre>
|
python|python-3.x|pandas|reshape|attributeerror
| 137
|
377,173
| 53,745,233
|
How to use the values in a column to identify which column to analyse in a different dataframe?
|
<p>I have two pandas dataframes, one with raw data, and the other is an analysis output based on data analysis of the first dataframe. The setup is below:</p>
<pre><code>df1
P1T P2T P3T
P N P
N P U
P P U
U U N
df2
Indicator Indicator State Occurrences
P1T P
P1T N
P1T U
P2T P
P2T N
P2T U
P3T P
P3T N
P3T U
</code></pre>
<p>In <code>df1</code>, each column represents an 'Indicator', and each indicator can have three states: 'P', 'N', or 'U'.</p>
<p><code>df2</code> lists each 'Indicator' and the range of states it can have, each representing a different case. It is supposed to then count the number of occurrences of each case and output that number in the 'Occurrences' column. That is,</p>
<pre><code>df2
Indicator Indicator State Occurrences
P1T P 2
P1T N 1
P1T U 1
P2T P 2
P2T N 1
P2T U 1
P3T P 1
P3T N 1
P3T U 2
</code></pre>
<p>Is it possible to use the value in the <code>df2['Indicators']</code> column to specify the column in <code>df1</code> to perform a count in, and then the value in <code>df2['Indicator State']</code> column to provide the 'countif' condition?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>melt</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>size</code></a> for <code>MultiIndex Series</code>:</p>
<pre><code>df3 = (df1.melt(var_name='Indicator', value_name='Indicator State')
.groupby(['Indicator','Indicator State'])
.size()
.rename('Occurrences'))
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a>:</p>
<pre><code>df3 = df1.apply(lambda x: x.value_counts()).unstack().rename('Occurrences')
</code></pre>
<hr>
<pre><code>print (df3)
Indicator Indicator State
P1T N 1
P 2
U 1
P2T N 1
P 2
U 1
P3T N 1
P 1
U 2
Name: Occurrences, dtype: int64
</code></pre>
<p>Last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a> it to original <code>DataFrame</code>:</p>
<pre><code>#if necessary remove only NaN column
df2 = df2.drop('Occurrences', axis=1)
df2 = df2.join(df3, on=['Indicator','Indicator State'])
print (df2)
Indicator Indicator State Occurrences
0 P1T P 2
1 P1T N 1
2 P1T U 1
3 P2T P 2
4 P2T N 1
5 P2T U 1
6 P3T P 1
7 P3T N 1
8 P3T U 2
</code></pre>
|
python|pandas|dataframe|analysis
| 2
|
377,174
| 53,415,157
|
Filtering outliers before using group by
|
<p>I have a dataframe with price column (p) and I have some undesired values like (0, 1.50, 92.80, 0.80). Before I calculate the mean of the price by product code, I would like to remove these outliers</p>
<pre><code> Code Year Month Day Q P
0 100 2017 1 4 2.0 42.90
1 100 2017 1 9 2.0 42.90
2 100 2017 1 18 1.0 45.05
3 100 2017 1 19 2.0 45.05
4 100 2017 1 20 1.0 45.05
5 100 2017 1 24 10.0 46.40
6 100 2017 1 26 1.0 46.40
7 100 2017 1 28 2.0 92.80
8 100 2017 2 1 0.0 0.00
9 100 2017 2 7 2.0 1.50
10 100 2017 2 8 5.0 0.80
11 100 2017 2 9 1.0 45.05
12 100 2017 2 11 1.0 1.50
13 100 2017 3 8 1.0 49.90
14 100 2017 3 17 6.0 45.05
15 100 2017 3 24 1.0 45.05
16 100 2017 3 30 2.0 1.50
</code></pre>
<p>How would be a good way to filter the outliers for each product (group by code) ?</p>
<p>I tried this:</p>
<pre><code>stds = 1.0 # Number of standard deviation that defines 'outlier'.
z = df[['Code','P']].groupby('Code').transform(
lambda group: (group - group.mean()).div(group.std()))
outliers = z.abs() > stds
df[outliers.any(axis=1)]
</code></pre>
<p>And then :</p>
<pre><code>print(df[['Code', 'Year', 'Month','P']].groupby(['Code', 'Year', 'Month']).mean())
</code></pre>
<p>But the outlier filter doesn`t work properly.</p>
|
<p>IIUC You can use a groupby on <code>Code</code>, do your <code>z</code> score calculation on <code>P</code>, and filter if the <code>z</code> score is greater than your threshold:</p>
<pre><code>stds = 1.0
filtered_df = df[~df.groupby('Code')['P'].transform(lambda x: abs((x-x.mean()) / x.std()) > stds)]
Code Year Month Day Q P
0 100 2017 1 4 2.0 42.90
1 100 2017 1 9 2.0 42.90
2 100 2017 1 18 1.0 45.05
3 100 2017 1 19 2.0 45.05
4 100 2017 1 20 1.0 45.05
5 100 2017 1 24 10.0 46.40
6 100 2017 1 26 1.0 46.40
11 100 2017 2 9 1.0 45.05
13 100 2017 3 8 1.0 49.90
14 100 2017 3 17 6.0 45.05
15 100 2017 3 24 1.0 45.05
filtered_df[['Code', 'Year', 'Month','P']].groupby(['Code', 'Year', 'Month']).mean()
P
Code Year Month
100 2017 1 44.821429
2 45.050000
3 46.666667
</code></pre>
|
python|pandas|numpy
| 2
|
377,175
| 53,473,188
|
Why is Python Pandas loc not returning a single item?
|
<p>For an assignment I need to access entries in a CSV file by the column name and index.</p>
<p>I am using a for loop to get each index. Contrary to my expectations,</p>
<pre><code>read.loc[[i], ["magType"]]
</code></pre>
<p>returns something like
<a href="https://i.stack.imgur.com/Unxiq.png" rel="nofollow noreferrer">this</a></p>
<p>instead of just "mb" which is what I would expect based on a pandas cheatsheet (linked below)
Why is this? And how can I get just the item (in this case "mb" without the magType and 0)?</p>
<pre><code>import pandas as pd
read = pd.read_csv("earthquake_data.csv")
print(read.loc[[0], ["magType"]])
</code></pre>
<blockquote>
<p>magType</p>
<blockquote>
<p>0 mb</p>
</blockquote>
</blockquote>
<p><a href="https://s3.amazonaws.com/assets.datacamp.com/blog_assets/PandasPythonForDataScience.pdf" rel="nofollow noreferrer">https://s3.amazonaws.com/assets.datacamp.com/blog_assets/PandasPythonForDataScience.pdf</a></p>
|
<p><code>read.loc[0, "magType"]</code> returns the element at <code>0</code>-th row and <code>magType</code> column - presumably the result you desire.</p>
<p><code>read.loc[[0], ["magType"]]</code>, on the other hand, returns a <em>slice</em> of the DataFrame, with columns from <code>["magType"]</code>, with all the rows whose indices are in <code>[0]</code>. (Compare <code>read.loc[[1, 2, 3], ["magType", "magnitude"]]</code> or <code>read.loc[[0:10], ["magType", "magnitude"]]</code> to see a general case of this.) Like any other DataFrame, it has indices and column names.</p>
|
python|pandas
| 1
|
377,176
| 53,454,043
|
Python: How to tokenize every type of URL paths?
|
<p>I have a dataframe of website urls and I need to first extract url domains (e.g. google.com) and url paths (e.g. foo/foo2/foo3/sjj.html), and second to tokenize the path part of the urls. The problem is that they can be in any of the following forms:</p>
<pre><code>1- https://www.politics.com/watch?v=4PykB_cU
(desired output: [watch])
2- https://www.politics.com/video/2014/USA/hello_world_how_are_you
(desired output: [video, USA, hello, world, how, are, you])
3- https://www.politics.com/video/2014/USA/hello-world-how-are-you
(desired output: [video, USA, hello, world, how, are, you])
4- https://www.politics.com/video/2014/USA/helloworldhowareyou
(desired output: [video, USA, hello, world, how, are, you]
5- https://www.politics.com/video/2014/USA/HelloWorldHowAreYou
(desired output: [video, USA, Hello, World, How, Are, You]
6- https://www.politics.com/1VOuFvY
(desired output: [])
</code></pre>
<p>Is there any function or package that can automatically parse and tokenize all these types of url paths?</p>
|
<p>The first three can be accomplished with <code>string.split()</code>.</p>
<p>For the fifth you can split on the capital letters with a regex, or just by iterating through the characters.</p>
<p>The fourth one will require much more effort. The only method I can think of is entity recognition with the entire English dictionary as entities to match, and even then you'll need to disambiguate some conflicting matches. A rough sketch of the easy cases is below.</p>
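<p>A minimal sketch covering cases 1-3 and 5 (case 4, and opaque IDs like case 6, still need dictionary-based word segmentation on top of this):</p>
<pre><code>import re
from urllib.parse import urlparse

def tokenize_path(url):
    path = urlparse(url).path                               # drops the domain and the query string
    tokens = []
    for part in path.strip('/').split('/'):
        part = re.sub(r'(?&lt;=[a-z])(?=[A-Z])', ' ', part)    # split CamelCase into words
        tokens.extend(t for t in re.split(r'[-_ ]+', part) if t)
    return [t for t in tokens if not t.isdigit()]           # drop purely numeric segments like years

print(tokenize_path('https://www.politics.com/video/2014/USA/hello_world_how_are_you'))
# ['video', 'USA', 'hello', 'world', 'how', 'are', 'you']
</code></pre>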
|
python-3.x|pandas|nltk
| 1
|
377,177
| 53,501,449
|
Numerically append binary numbers in np.array
|
<p>Considering the following example array :</p>
<pre><code>a = np.array([0,1,1,0,1,1,1,0,1,0])
</code></pre>
<p>Which could be of any dtype (int, float...) </p>
<p>How would I get the following output without using nasty loops and string casts ?</p>
<pre><code>np.array([0b01,0b10,0b11,0b10,0b10])
</code></pre>
|
<pre><code>a = a.astype(int)
output = a[0::2] * 2 + a[1::2]
</code></pre>
<p>Gives the array you've described (though it doesn't print in binary).</p>
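<p>If you also want to see the values in binary, one option (purely for display) is <code>np.binary_repr</code>:</p>
<pre><code>import numpy as np

a = np.array([0, 1, 1, 0, 1, 1, 1, 0, 1, 0]).astype(int)
output = a[0::2] * 2 + a[1::2]
print(output)                                          # [1 2 3 2 2]
print([np.binary_repr(v, width=2) for v in output])    # ['01', '10', '11', '10', '10']
</code></pre>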
|
python|numpy
| 1
|
377,178
| 53,685,733
|
How to identify column value change in pandas python
|
<p>I have a pandas DataFrame as below that have data for strike_price and value. </p>
<pre><code> date time int_sp value
1 20180903 09:16 11700 283.90
315 20180903 14:31 11700 273.85
316 20180903 14:32 11700 274.05
317 20180903 14:33 11600 295.35
390 20180904 09:31 11600 284.5
391 20180904 09:32 11500 304.15
403 20180904 09:44 11500 301.6
404 20180904 09:45 11600 282.4
405 20180904 09:46 11500 300.35
406 20180904 09:47 11500 300.35
407 20180904 09:48 11500 300.95
408 20180904 09:49 11500 301.3
409 20180904 09:50 11600 280.4
474 20180904 10:55 11600 279.25
475 20180904 10:56 11500 300.15
</code></pre>
<p>My first trade should always be a sell on the first record. Now, whenever the strike price (int_sp) changes, I need to buy back the sold position and create a new trade by selling at the new strike price.</p>
<p>This is my expected output.</p>
<pre><code>sell_date sell_time buy_date buy_time int_sp sell_price buy_price
20180903 09:16 20180903 14:32 11700 283.90 274.05
20180903 14:33 20180904 09:31 11600 295.35 284.5
20180904 09:32 20180904 09:44 11500 304.15 301.6
20180904 09:45 20180904 09:45 11600 282.4 282.4
20180904 09:46 20180904 09:49 11500 300.35 301.3
20180904 09:50 20180904 10:55 11600 280.4 279.25
20180904 10:56 TBD TBD 11500 300.15 TBD
</code></pre>
<p>I am very new to pandas and can't think of how to achieve this.
Can somebody help me with this?</p>
|
<p>IIUC, use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html" rel="nofollow noreferrer"><code>diff</code></a> to get sell info</p>
<pre><code>ndf = df.loc[df['int_sp'].diff().ne(0)].add_prefix('sell_').reset_index().copy()
</code></pre>
<p>Now, use mask</p>
<pre><code>mask = df['int_sp'].diff().shift(-1).fillna(0).ne(0)
</code></pre>
<p>This mask filters on lagged values, which are the buy-related values. Then just assign</p>
<pre><code>ndf.loc[:, 'buy_value'] = df.loc[mask, 'value'].reset_index(drop=True)
ndf.loc[:, 'buy_date'] = df.loc[mask, 'date'].reset_index(drop=True)
ndf.loc[:, 'buy_time'] = df.loc[mask, 'time'].reset_index(drop=True)
sell_date sell_time sell_int_sp sell_value buy_value buy_date buy_time
index
1 20180903 09:16 11700 283.90 274.05 20180903.0 14:32
317 20180903 14:33 11600 295.35 284.50 20180904.0 09:31
391 20180904 09:32 11500 304.15 301.60 20180904.0 09:44
404 20180904 09:45 11600 282.40 282.40 20180904.0 09:45
405 20180904 09:46 11500 300.35 301.30 20180904.0 09:49
409 20180904 09:50 11600 280.40 279.25 20180904.0 10:55
475 20180904 10:56 11500 300.15 NaN NaN NaN
</code></pre>
|
python|pandas
| 1
|
377,179
| 53,789,715
|
Transform pandas DataFrame from wide to long and count occurrences of a unique value
|
<p>Hello dear community, I have a pretty specific problem that I sadly cannot wrap my mind around. The DataFrame I want to transform currently looks like this.</p>
<pre><code>df_larceny
CATEGORY INCIDENTYEAR INCIDENTMONTH
LARCENY 2009 1
LARCENY 2009 1
LARCENY 2009 1
.............................
.............................
LARCENY 2016 11
LARCENY 2016 12
LARCENY 2016 12
LARCENY 2016 12
</code></pre>
<p>after the Transformation it should look like this.</p>
<pre><code>COUNT INCIDENTYEAR INCIDENTMONTH
234 2009 1
453 2009 2
847 2009 3
943 2009 4
958 2009 5
.............................
.............................
324 2016 11
372 2016 12
241 2016 12
412 2016 12
</code></pre>
<p>Basically I want to count how often larceny occurred in every month of every year.</p>
<p>I tried <a href="https://stackoverflow.com/questions/36537945/reshape-wide-to-long-in-pandas?newreg=333267eb8a42493797f6eec95f674a1e">this tutorial</a> before, sadly without any luck. </p>
<p>I also tried various methods with value_counts() but sadly with no luck.</p>
<p>Out of pure despair at the end I did it manually for another DataFrame wich kinda looked like this </p>
<pre><code>jan09 = df["CATEGORY"].loc['2009-01-01':'2009-02-01'].value_counts().sum()
jan10 = df["CATEGORY"].loc['2010-01-01':'2010-02-01'].value_counts().sum()
jan11 = df["CATEGORY"].loc['2011-01-01':'2011-02-01'].value_counts().sum()
jan12 = df["CATEGORY"].loc['2012-01-01':'2012-02-01'].value_counts().sum()
jan13 = df["CATEGORY"].loc['2013-01-01':'2013-02-01'].value_counts().sum()
jan14 = df["CATEGORY"].loc['2014-01-01':'2014-02-01'].value_counts().sum()
jan15 = df["CATEGORY"].loc['2015-01-01':'2015-02-01'].value_counts().sum()
jan16 = df["CATEGORY"].loc['2016-01-01':'2016-02-01'].value_counts().sum()
jan_df = [jan09,jan10,jan11,jan12,jan13,jan14,jan15,jan16]`
</code></pre>
<p>I did this for every month and created a new DataFrame at the end, which even to an amateur like me looks way too inefficient.
I hope anyone can help me out here.</p>
|
<p>Maybe something like this:</p>
<pre><code>(df_larceny[df_larceny['CATEGORY'] == 'LARCENY']
 .groupby(['INCIDENTYEAR', 'INCIDENTMONTH'])
 .size()                        # one count per (year, month) group
 .reset_index(name='COUNT'))
</code></pre>
|
python|pandas|dataframe|format|transform
| 2
|
377,180
| 53,662,651
|
Unable to fill missing hours with appropriate values in pandas dataframe
|
<p>I have a dataset which is basically a list of list</p>
<pre><code>data = [[(datetime.datetime(2018, 12, 6, 10, 0), Decimal('7.0000000000000000')), (datetime.datetime(2018, 12, 6, 11, 0), Decimal('2.0000000000000000')), (datetime.datetime(2018, 12, 6, 12, 0), Decimal('43.6666666666666667')), (datetime.datetime(2018, 12, 6, 14, 0), Decimal('8.0000000000000000')), (datetime.datetime(2018, 12, 7, 9, 0), Decimal('12.0000000000000000')), (datetime.datetime(2018, 12, 7, 10, 0), Decimal('2.0000000000000000')), (datetime.datetime(2018, 12, 7, 11, 0), Decimal('2.0000000000000000')), (datetime.datetime(2018, 12, 7, 17, 0), Decimal('2.0000000000000000'))], [(datetime.datetime(2018, 12, 6, 10, 0), 28.5), (datetime.datetime(2018, 12, 6, 11, 0), 12.75), (datetime.datetime(2018, 12, 6, 12, 0), 12.15), (datetime.datetime(2018, 12, 6, 14, 0), 12.75), (datetime.datetime(2018, 12, 7, 9, 0), 12.75), (datetime.datetime(2018, 12, 7, 10, 0), 12.75), (datetime.datetime(2018, 12, 7, 11, 0), 12.75), (datetime.datetime(2018, 12, 7, 17, 0), 12.75)]]
</code></pre>
<p>It basically contains two lists each of them with a <code>date</code> and <code>metric</code> column. I need to extract the metric column values of each of the list and find a a coorelation between them.</p>
<p>Note: The dates are similar in each of the list</p>
<p>So first I load each of the list into pandas and set date index.</p>
<pre><code>data1 = data[0]
data2 = data[1]
df1 = pd.DataFrame(data1)
df1[0] = pd.to_datetime(df1[0], errors='coerce')
df1.set_index(0, inplace=True)
df2 = pd.DataFrame(data2)
df2[0] = pd.to_datetime(df2[0], errors='coerce')
df2.set_index(0, inplace=True)
</code></pre>
<p>Now I merge the two data frames (both of them share the same dates).</p>
<pre><code>df = pd.merge(df1,df2, how='inner', left_index=True, right_index=True)
</code></pre>
<p>Now my data frame looks something like this</p>
<pre><code> 1_x 1_y
0
2018-12-06 10:00:00 7.0000000000000000 28.50
2018-12-06 11:00:00 2.0000000000000000 12.75
2018-12-06 12:00:00 43.6666666666666667 12.15
2018-12-06 14:00:00 8.0000000000000000 12.75
2018-12-07 09:00:00 12.0000000000000000 12.75
2018-12-07 10:00:00 2.0000000000000000 12.75
2018-12-07 11:00:00 2.0000000000000000 12.75
2018-12-07 17:00:00 2.0000000000000000 12.75
</code></pre>
<p>But if you look at the final dataframe, it has missing hours. I need to ensure the missing hours are introduced with appropriate values.</p>
<p>Now I saw this example which talks about reindexing <a href="https://www.tutorialspoint.com/python_pandas/python_pandas_reindexing.htm" rel="nofollow noreferrer">https://www.tutorialspoint.com/python_pandas/python_pandas_reindexing.htm</a> but I am not sure how to replicate this in my example. The values should be set using <code>interpolate</code>, but that method only offers <code>ffill</code>, <code>bfill</code> and <code>nearest</code>.</p>
<p>How can I add the missing hours with appropriate values?</p>
<p>Note: The dataset is a sql query output.To handle the <code>Decimal</code> type in the output, I used <code>from decimal import Decimal</code>. </p>
|
<p>Try:</p>
<pre><code>df.resample('H').interpolate()
</code></pre>
<p>Output:</p>
<pre><code> 1_x 1_y
0
2018-12-06 10:00:00 7.000000 28.50
2018-12-06 11:00:00 2.000000 12.75
2018-12-06 12:00:00 43.666667 12.15
2018-12-06 13:00:00 25.833333 12.45
2018-12-06 14:00:00 8.000000 12.75
2018-12-06 15:00:00 8.210526 12.75
2018-12-06 16:00:00 8.421053 12.75
2018-12-06 17:00:00 8.631579 12.75
2018-12-06 18:00:00 8.842105 12.75
2018-12-06 19:00:00 9.052632 12.75
2018-12-06 20:00:00 9.263158 12.75
2018-12-06 21:00:00 9.473684 12.75
2018-12-06 22:00:00 9.684211 12.75
2018-12-06 23:00:00 9.894737 12.75
2018-12-07 00:00:00 10.105263 12.75
2018-12-07 01:00:00 10.315789 12.75
2018-12-07 02:00:00 10.526316 12.75
2018-12-07 03:00:00 10.736842 12.75
2018-12-07 04:00:00 10.947368 12.75
2018-12-07 05:00:00 11.157895 12.75
2018-12-07 06:00:00 11.368421 12.75
2018-12-07 07:00:00 11.578947 12.75
2018-12-07 08:00:00 11.789474 12.75
2018-12-07 09:00:00 12.000000 12.75
2018-12-07 10:00:00 2.000000 12.75
2018-12-07 11:00:00 2.000000 12.75
2018-12-07 12:00:00 2.000000 12.75
2018-12-07 13:00:00 2.000000 12.75
2018-12-07 14:00:00 2.000000 12.75
2018-12-07 15:00:00 2.000000 12.75
2018-12-07 16:00:00 2.000000 12.75
2018-12-07 17:00:00 2.000000 12.75
</code></pre>
|
python|pandas
| 1
|
377,181
| 53,674,690
|
Creating a Y_true Dataset in Keras
|
<p>Here's my current call to model.fit in Keras </p>
<pre><code>history_callback = model.fit(x_train/255.,
validation_train_data,
validation_split=validation_split,
batch_size=batch_size,
callbacks=callbacks)
</code></pre>
<p>In this example <code>x_train</code> is a list of numpy arrays that contains all of my image data. The way <code>validation_train_data</code> is structured, though, is that it's a list of numpy arrays of totally different sizes that is equal in length to the list of numpy arrays containing my images. The data for each image is contained in validation_train_data such that <code>x_train[i]</code> corresponds to a set containing <code>validation_train_data[0][i]</code>, <code>validation_train_data[1][i]</code>, <code>validation_train_data[2][i]</code>, etc. Is there any way I can reformat my validation_train_data such that it can properly be used as a <code>y_true</code> in a custom keras loss function?</p>
|
<p>I managed to solve my problem by writing a generator function which generated a batch of x and y data as lists and put them together as a tuple. I then called fit_generator with generator=my_generator and it worked just fine. If you have oddly structured input data, you should consider writing a generator to take care of it. A rough sketch is shown after the tutorial link below.</p>
<p>This is the tutorial I used to do so:
<a href="https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly" rel="nofollow noreferrer">https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly</a></p>
|
python|tensorflow|keras
| 0
|
377,182
| 53,533,907
|
Delete values in rows based on value in column and then split cell in multiple rows in Pandas
|
<p>When System appears in column "Type" I want to delete all values from that row, except the value from the column "Name". When Hardware appears in column "Type" I want to delete all values from that row except the value from column "Color".
After that, I want to split all cells from the column "Text" which are not empty into multiple rows, and to keep rows which are empty from that column.</p>
<p>Here is data frame that I have:</p>
<p>df</p>
<pre><code> Type Text Name ID Color
System aca\nmaca\nstream\nphase\n Gary 123 Red
System aca\nmaca\nstream\nphase\n Mary 3254 Yellow
Hardware a\nmaca\nstream\nphase\n Jerry 158 White
Software ca\nmaca\nstream\nphase\n Perry 56414 Green
Software aca\nmac\nstream\nphase\n Jimmy 548 Blue
System aca\nmaca\nstream\nphase\n Marc 5658 Black
System aca\nmaca\nstram\npha\n John 867 Pink
Hardware aca\nma\nstream\nphase\n Sam 665 Gray
Hardware aca\nmaca\nstream\nphase\n Jury 5784 Azure
System aca\nmaca\nstream\nphase\n Larry 5589 Fawn
Software aca\nmaca\nst\nphase\n James 6568 Magenta
System aca\nmaca\nstream\nph\n Kevin 568 Cyan
</code></pre>
<p>And here is the desired result:</p>
<pre><code> Type Text Name ID Color
System Gary
System Mary
Hardware White
Software ca Perry 56414 Green
Software maca Perry 56414 Green
Software stream Perry 56414 Green
Software phase Perry 56414 Green
Software aca Jimmy 548 Blue
Software mac Jimmy 548 Blue
Software stream Jimmy 548 Blue
Software phase Jimmy 548 Blue
System Marc
System John
Hardware Gray
Hardware Azure
System Larry
Software aca James 6568 Magenta
Software maca James 6568 Magenta
Software st James 6568 Magenta
Software phase James 6568 Magenta
System Kevin
</code></pre>
<p>For split cells to multiple rows I tried this function:</p>
<pre><code> def SepInRows(df, c):
s = df[c].str.split('\n', expand=True).stack()
i = s.index.get_level_values(0)
df2 = df.loc[i].copy()
df2[c] = s.values
return df2
</code></pre>
<p>But it drops rows with empty values in the column "Text", which is not what I want.</p>
<p>How to solve this? </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>mask</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html" rel="nofollow noreferrer"><code>difference</code></a> in preprocessing and then <a href="https://stackoverflow.com/a/53494908/2901002">this solution</a>:</p>
<pre><code>c1 = df.columns.difference(['Type','Name'])
c2 = df.columns.difference(['Type','Color'])
df[c1] = df[c1].mask(df['Type'] == 'System', np.nan)
df[c2] = df[c2].mask(df['Type'] == 'Hardware', np.nan)
</code></pre>
<hr>
<pre><code>cols = df.columns
df1 = (df.join(df.pop('Text').str.split('\n', expand=True)
.stack()
.reset_index(level=1, drop=True)
.rename('Text'))
).reset_index(drop=True).reindex(columns=cols)
print (df1)
Type Text Name ID Color
0 System NaN Gary NaN NaN
1 System NaN Mary NaN NaN
2 Hardware NaN NaN NaN White
3 Software ca Perry 56414.0 Green
4 Software maca Perry 56414.0 Green
5 Software stream Perry 56414.0 Green
6 Software phase Perry 56414.0 Green
7 Software Perry 56414.0 Green
8 Software aca Jimmy 548.0 Blue
9 Software mac Jimmy 548.0 Blue
10 Software stream Jimmy 548.0 Blue
11 Software phase Jimmy 548.0 Blue
12 Software Jimmy 548.0 Blue
13 System NaN Marc NaN NaN
14 System NaN John NaN NaN
15 Hardware NaN NaN NaN Gray
16 Hardware NaN NaN NaN Azure
17 System NaN Larry NaN NaN
18 Software aca James 6568.0 Magenta
19 Software maca James 6568.0 Magenta
20 Software st James 6568.0 Magenta
21 Software phase James 6568.0 Magenta
22 Software James 6568.0 Magenta
23 System NaN Kevin NaN NaN
</code></pre>
|
python|pandas|dataframe|split|cell
| 1
|
377,183
| 53,751,882
|
Pytorch modify dataset label
|
<p>This is a code snippet for loading images as dataset from <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html" rel="nofollow noreferrer">pytorch transfer learning tutorial</a>:</p>
<pre><code>data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
</code></pre>
<p>And this is one of the examples in dataset:</p>
<pre><code>image_datasets['val'][0]:
(tensor([[[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489],
[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489],
[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489],
...,
[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489],
[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489],
[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489]],
[[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286],
[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286],
[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286],
...,
[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286],
[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286],
[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286]],
[[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400],
[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400],
[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400],
...,
[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400],
[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400],
[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400]]]), 0)
</code></pre>
<p>Is there any method (best practices) to change the example data in dataset, for example change label 0 to label 1. The following does not work:</p>
<pre><code>image_datasets['val'][0] = (image_datasets['val'][0][0], 1)
</code></pre>
|
<p>Yes, though not (easily) programmatically. The labels are coming from <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder" rel="nofollow noreferrer">torchvision.datasets.ImageFolder</a> and reflect the directory structure of your dataset (as seen on your HDD). Firstly, I suspect you may want to know the directory name as a string. This is poorly documented but the dataset has a <code>classes</code> attribute which stores those. So</p>
<pre><code>img, lbl = image_datasets['val'][0]
directory_name = image_datasets['val'].classes[lbl]
</code></pre>
<p>If you're looking to consistently return those instead of class IDs, you can use the <code>target_transform</code> api as follows:</p>
<pre><code>image_datasets['val'].target_transform = lambda id: image_datasets['val'].classes[id]
</code></pre>
<p>which will make the loader return strings instead of IDs from now on. If you're looking for something more advanced you can reimplement/inherit from <code>ImageFolder</code> or <code>DatasetFolder</code> and implement your own semantics. The only methods you need to provide are <code>__len__</code> and <code>__getitem__</code>.</p>
|
python|deep-learning|pytorch|transfer-learning
| 1
|
377,184
| 53,607,181
|
Run multiple Keras models in sequence
|
<p>I am running 3 keras cnn models in sequence on all images in a folder. After completing the predictions of all three models on one image, I get the error below in the next iteration of the loop.</p>
<pre><code> File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1078, in _run
'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("input_1:0", shape=(?, ?, ?, 3), dtype=float32) is not an element of this graph.
</code></pre>
<p>My code structure for one model:</p>
<pre><code>def model_1():
K.clear_session()
cnn_model = load_model(model_path, compile=False)
with K.get_session().as_default() as sess:
...Do inference....
</code></pre>
<p>I have tried solutions mentioned <a href="https://github.com/tensorflow/tensorflow/issues/14356" rel="nofollow noreferrer">here</a> and <a href="https://stackoverflow.com/questions/40785224/tensorflow-cannot-interpret-feed-dict-key-as-tensor">here</a> but none of them worked for me.</p>
|
<p>Adding tf.Session().as_default() resolved the issue: </p>
<pre><code>def model_1():
K.clear_session()
cnn_model = load_model(model_path, compile=False)
tf.Session().as_default()
with K.get_session().as_default() as sess:
...Do inference....
</code></pre>
<p>Adding answer that worked for me in case someone wants to resolve similar issue.</p>
|
python|tensorflow|keras
| 1
|
377,185
| 53,726,915
|
How to use RandomState with Sklearn RandomizedSearchCV on multiple cores
|
<p>I am puzzled about the right way to use <code>np.random.RandomState</code> with <code>sklearn.model_selection.RandomizedSearchCV</code> when running on multiple cores. </p>
<p>I use <code>RandomState</code> to generate pseudo-random numbers so that my results are reproducible. I give <code>RandomizedSearchCV</code> an instance of <code>RandomState</code> and set <code>n_jobs=-1</code> so that it uses all six cores. </p>
<p>Running on multiple cores introduces an asynchronous element. I expect that this will cause requests for pseudo-random numbers from the various cores to be made in different orders in different runs. Therefore the different runs should give different results, rather than displaying reproducibility.</p>
<p>But in fact the results are reproducible. For a given value of <code>n_iter</code> (i.e., the number of draws from the parameter space), the best hyper-parameter values found are identical from one run to the next. I also get the same values if <code>n_jobs</code> is a positive number that is smaller than the number of cores.</p>
<p>To be specific, here is code:</p>
<pre><code>import numpy as np
import scipy.stats as stats
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold, train_test_split
# Use RandomState for reproducibility.
random_state = np.random.RandomState(42)
# Get data. Split it into training and test sets.
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=random_state, stratify=y)
# Prepare for hyper-parameter optimization.
n_iter = 1_000
base_clf = GradientBoostingClassifier(
random_state=random_state, max_features='sqrt')
param_space = {'learning_rate': stats.uniform(0.05, 0.2),
'n_estimators': [50, 100, 200],
'subsample': stats.uniform(0.8, 0.2)}
# Generate data folds for cross validation.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
# Create the search classifier.
search_clf = RandomizedSearchCV(
base_clf, param_space, n_iter=n_iter, scoring='f1_weighted', n_jobs=-1,
cv=skf, random_state=random_state, return_train_score=False)
# Optimize the hyper-parameters and print the best ones found.
search_clf.fit(X_train, y_train)
print('Best params={}'.format(search_clf.best_params_))
</code></pre>
<p>I have several questions.</p>
<ol>
<li><p>Why do I get reproducible results despite the asynchronous aspect?</p></li>
<li><p>The documentation for <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html" rel="nofollow noreferrer"><code>RandomizedSearchCV</code></a> says about the <code>random_state</code> parameter: "Pseudo random number generator state used for random uniform sampling from lists of possible values instead of scipy.stats distributions." Does this mean that it does not affect the distributions in the parameter space? Is the code above sufficient to ensure reproducibility, or do I need to set <code>np.random.seed()</code>, or perhaps write something like this:</p>
<pre><code>distn_learning_rate = stats.uniform(0.05, 0.2)
distn_learning_rate.random_state = random_state
distn_subsample = stats.uniform(0.8, 0.2)
distn_subsample.random_state = random_state
param_space = {'learning_rate': distn_learning_rate,
'n_estimators': [50, 100, 200],
'subsample': distn_subsample}
</code></pre></li>
<li><p>Overall, is this the correct way to set up <code>RandomizedSearchCV</code> for reproducibility?</p></li>
<li><p>Is using a single instance of <code>RandomState</code> ok, or should I use separate instances for <code>train_test_split</code>, <code>GradientBoostingClassifier</code>, <code>StratifiedKFold</code>, and <code>RandomizedSearchCV</code>? Also, the documentation of <a href="https://docs.scipy.org/doc/numpy-1.15.4/reference/generated/numpy.random.seed.html" rel="nofollow noreferrer"><code>np.random.seed</code></a> says that the seed is set when <code>RandomState</code> is initialized. How does this interact with <code>RandomizedSearchCV</code> setting the seed?</p></li>
<li><p>When <code>n_jobs</code> is set to use fewer than all the cores, I still see activity on all the cores, though the usage level per core increases and the elapsed time decreases as the number of cores increases. Is this just sklearn and/or macOS optimizing the machine usage?</p></li>
</ol>
<p>I am using macOS 10.14.2, Python 3.6.7, Numpy 1.15.4, Scipy 1.1.0, and Sklearn 0.20.1.</p>
|
<p>The parameter candidates are generated before passing to the multi-threaded functionality using a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ParameterSampler.html" rel="nofollow noreferrer">ParameterSampler object</a>. So only a single <code>random_state</code> is enough for reproducibility of RandomizedSearchCV. </p>
<p>Note that I said <code>"reproducibility of RandomizedSearchCV"</code>. For the estimators used inside it (<code>base_clf</code> here), each estimator should carry its own <code>random_state</code> as you have done. </p>
<p>Now, talking about <code>a single instance of RandomState</code>: it is perfectly fine for the code which is sequential. The only case to worry about is when multi-processing kicks in. So let's analyze the steps which happen during your program's execution.</p>
<ol>
<li>You set up a <code>RandomState</code> object with a seed. It has a state now.</li>
<li>Inside <code>train_test_split</code>, a <code>StratifiedShuffleSplit</code> is used (because you have used the <code>stratify</code> param) which will use the passed <code>RandomState</code> object to split and generate permutations of the train and test data. So the internal state of the <code>RandomState</code> is changed now. But it's sequential, so there is nothing to worry about.</li>
<li>Now you set this <code>random_state</code> object in <code>skf</code>. But no splitting happens until <code>fit()</code> in <code>RandomizedSearchCV</code> is called. So state is unchanged.</li>
<li><p>After that, when <code>search_clf.fit</code> is called, <a href="https://github.com/scikit-learn/scikit-learn/blob/55bf5d9/sklearn/model_selection/_search.py#L691" rel="nofollow noreferrer">the following happens</a>:</p>
<ol>
<li><a href="https://github.com/scikit-learn/scikit-learn/blob/55bf5d9/sklearn/model_selection/_search.py#L1511" rel="nofollow noreferrer"><code>_run_search()</code></a> is executed, which will use the <code>random_state</code> to generate all the parameter combinations at once (according to given <code>n_iters</code>). So still no part of multi-threading is happening, and everything is good.</li>
<li><p><a href="https://github.com/scikit-learn/scikit-learn/blob/55bf5d9/sklearn/model_selection/_search.py#L695" rel="nofollow noreferrer"><code>evaluate_candidates()</code></a> is called. The interesting part is this:</p>
<pre><code>out = parallel(delayed(_fit_and_score)(clone(base_estimator),
X, y,
train=train, test=test,
parameters=parameters,
**fit_and_score_kwargs)
for parameters, (train, test)
in product(candidate_params,
cv.split(X, y, groups)))
</code></pre></li>
<li><p>The part after <code>parallel(delayed(_fit_and_score)</code> is still sequential and is handled by the parent thread. </p>
<ul>
<li><code>cv.split()</code> will use the <code>random_state</code> (change its state) to generate train test splits</li>
<li><code>clone(estimator)</code> will clone all the parameters of the estimator, (the <code>random_state</code> also). So the changed state of <code>RandomState</code> from <code>cv.split</code> object becomes the base state in <code>estimator</code></li>
<li>The above two steps happen multiple times (number of splits x parameter combinations times) from parent thread (without asynchronicity). And each time the original <code>RandomState</code> is cloned to serve the estimator. So the results are reproducible.</li>
<li>So when the actual multi-threading part is started, the original <code>RandomState</code> is not used, but each estimator (thread) will have its own copy of <code>RandomState</code></li>
</ul></li>
</ol></li>
</ol>
<p>Hope this is making sense, and answers your question. Scikit-learn <a href="https://scikit-learn.org/stable/faq.html#how-do-i-set-a-random-state-for-an-entire-execution" rel="nofollow noreferrer">explicitly requests the user</a> to set up like this:</p>
<pre><code>import numpy as np
np.random.seed(42)
</code></pre>
<p>to make entire execution reproducible, but what you are doing will also do.</p>
<p>I am not entirely sure about your last question, as I am not able to reproduce that on my system. I have 4 cores and when I set <code>n_jobs=2</code> or <code>3</code> I only see that many cores at 100% and the remaining ones at around 20-30%. My system specs:</p>
<pre><code>System:
python: 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51) [GCC 7.2.0]
machine: Linux-4.15.0-20-generic-x86_64-with-debian-buster-sid
Python deps:
pip: 18.1
setuptools: 40.2.0
sklearn: 0.20.1
numpy: 1.15.4
scipy: 1.1.0
Cython: 0.29
pandas: 0.23.4
</code></pre>
|
python|numpy|scikit-learn|scipy|numpy-random
| 3
|
377,186
| 53,613,722
|
Loss function for simple Reinforcement Learning algorithm
|
<p>This question comes from watching the following video on TensorFlow and Reinforcement Learning from Google I/O 18: <a href="https://www.youtube.com/watch?v=t1A3NTttvBA" rel="nofollow noreferrer">https://www.youtube.com/watch?v=t1A3NTttvBA</a></p>
<p>Here they train a very simple RL algorithm to play the game of Pong.</p>
<p>In the slides they use, the loss is defined like this ( approx @ 11m 25s ):</p>
<pre><code>loss = -R(sampled_actions * log(action_probabilities))
</code></pre>
<p>Further they show the following code ( approx @ 20m 26s):</p>
<pre><code># loss
cross_entropies = tf.losses.softmax_cross_entropy(
onehot_labels=tf.one_hot(actions, 3), logits=Ylogits)
loss = tf.reduce_sum(rewards * cross_entropies)
# training operation
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001, decay=0.99)
train_op = optimizer.minimize(loss)
</code></pre>
<p>Now my question is this: they use +1 for winning and -1 for losing as rewards. In the code that is provided, won't any cross-entropy loss that's multiplied by a negative reward become very low (negative)? And if the training operation uses the optimizer to minimize the loss, isn't the algorithm then being trained to lose?</p>
<p>Or is there something fundamental I'm missing (probably because of my very limited mathematical skills)?</p>
|
<p>Great question Corey. I have also wondered exactly what this popular loss function in RL actually means. I've seen many implementations of it, but many contradict each other. To my understanding, it means this:</p>
<p>Loss = - log(pi) * A</p>
<p>Where A is the advantage compared to a baseline case. In Google's case, they used a baseline of 0, so A = R. This is multiplied by that specific action at that specific time, so in your above example, actions were one hot encoded as [1, 0, 0]. We will ignore the 0s and only take the 1. Hence we have the above equation.</p>
<p><strong>If you intuitively calculate this loss for a negative reward:</strong></p>
<p>Loss = - (-1) * log(P)</p>
<p>But for any P less than 1, log of that value will be negative. Therefore, you have a negative loss which can be interpreted as "very good", but really doesn't make physical sense.</p>
<p><strong>The correct way:</strong></p>
<p>However in my opinion, and please others correct me if I'm wrong, you do not calculate the loss directly. You take the <strong><em>gradient</em></strong> of the loss. That is, you take the derivative of -log(pi)*A.</p>
<p>Therefore, you would have:</p>
<p>-(d(pi) / pi) * A</p>
<p>Now, if you have a large negative reward, it will translate to a very large loss.</p>
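<p>A tiny numerical illustration (not from the talk's code) of how the reward sign flips the direction of the update for the loss <code>-R*log(p)</code>:</p>
<pre><code>import numpy as np

p = 0.3                       # probability the policy assigned to the sampled action
for R in (+1.0, -1.0):
    loss = -R * np.log(p)
    dloss_dp = -R / p         # derivative of -R*log(p) with respect to p
    print(R, round(loss, 3), round(dloss_dp, 3))

# R = +1: gradient is negative, so gradient descent increases p (reinforce the action)
# R = -1: gradient is positive, so gradient descent decreases p (discourage the action)
</code></pre>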
<p>I hope this makes sense.</p>
|
python|tensorflow|reinforcement-learning
| 3
|
377,187
| 53,444,173
|
Replace zeros with last value different from zero
|
<p>I have the following dataframe:</p>
<pre><code>print(inventory_df)
dt_op Prod_1 Prod_2 Prod_n
1 10/09/18 5 50 2
2 11/09/18 4 0 0
3 12/09/18 2 0 0
4 13/09/18 0 0 0
5 14/09/18 4 30 1
</code></pre>
<p>I would like to replace the values equal to zero with the last value != zero in each column, as:</p>
<pre><code>print(final_inventory_df)
dt_op Prod_1 Prod_2 Prod_n
1 10/09/18 5 50 2
2 11/09/18 4 50 2
3 12/09/18 2 50 2
4 13/09/18 2 50 2
5 14/09/18 4 30 1
</code></pre>
<p>How could I do it?</p>
|
<p>The idea is to replace <code>0</code> with NaN using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>mask</code></a> and then forward fill with the previous non-missing values:</p>
<pre><code>cols = df.columns.difference(['dt_op'])
df[cols] = df[cols].mask(df[cols] == 0).ffill().astype(int)
</code></pre>
<p>Similar solution with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>df[cols] = pd.DataFrame(np.where(df[cols] == 0, np.nan, df[cols]),
index=df.index,
columns=cols).ffill().astype(int)
print (df)
dt_op Prod_1 Prod_2 Prod_n
1 10/09/18 5 50 2
2 11/09/18 4 50 2
3 12/09/18 2 50 2
4 13/09/18 2 50 2
5 14/09/18 4 30 1
</code></pre>
<p>Solution for fun - convert all columns except <code>dt_op</code> to integer:</p>
<pre><code>d = dict.fromkeys(df.columns.difference(['dt_op']), 'int')
df = df.mask(df == 0).ffill().astype(d)
</code></pre>
|
python|pandas
| 5
|
377,188
| 53,520,576
|
Is it possible to feed dynamic shape matrix in Tensorflow?
|
<p>I have a data set with a non-static shape like <code>(batch_size, None, None, None, 92)</code>.</p>
<pre><code>for x in x_data:
print(x.shape)
(4, 4, 8, 92)
(3, 3, 7, 92)
(4, 4, 8, 92)
(3, 3, 7, 92)
(4, 4, 8, 92)
(4, 4, 7, 92)
(3, 3, 7, 92)
(4, 4, 8, 92)
(4, 4, 8, 92)
(3, 3, 8, 92)
</code></pre>
<p>But when I try to feed this <strong>x_data</strong> to my <strong>X placeholder</strong> I face an error.</p>
<pre><code>X = tf.placeholder(tf.float32, [None, None, None, None, 92])
with tf.Session() as sess:
c, _ = sess.run([cost, optimizer], feed_dict={X: x_data, Y: y_data})
</code></pre>
<p>This error may be caused by the non-static shape of the input data.
<br>Error message:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/bsjun/Documents/GitHub/CCpyNN/CCpyNN/Inception_v.2.py", line 274, in <module>
c, hy, _ = sess.run([cost, logit_layer, optimizer], feed_dict={X: batch_x[i], Y: batch_y[i], keep_prob: 0.8})
File "C:\Users\bsjun\AppData\Local\conda\conda\envs\tf_normal\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\bsjun\AppData\Local\conda\conda\envs\tf_normal\lib\site-packages\tensorflow\python\client\session.py", line 1121, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "C:\Users\bsjun\AppData\Local\conda\conda\envs\tf_normal\lib\site-packages\numpy\core\numeric.py", line 501, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.
</code></pre>
<p>Is it impossible to feed a matrix with a dynamic shape?</p>
<p>Here is a summary of my code.</p>
<pre><code>import tensorflow as tf
import numpy as np
shapes = [(4, 4, 8, 92), (3, 3, 7, 92), (4, 4, 8, 92), (3, 3, 7, 92)]
x_data = []
for s in shapes:
x = np.zeros(shape=s)
print(x.shape)
x_data.append(x)
X = tf.placeholder(tf.float32, [None, None, None, None, 92])
with tf.Session() as sess:
sess.run(X, feed_dict={X: x_data})
</code></pre>
|
<p>It won't work for the data which you provide, but there are some options for how to deal with that.</p>
<p><strong>Why it doesn't work</strong></p>
<p>The line</p>
<pre><code> x = tf.placeholder(tf.float32, [None, None, None, 92])
</code></pre>
<p>means that the shape of the input array is not known beforehand. But it must still be an object which can be converted to a numpy array. As your input data is a sequence of numpy arrays of different shapes, it cannot be converted.</p>
<p><strong>How to deal with that</strong></p>
<p><strong>1. Provide separate input for your model.</strong> You could, probably, modify the code of your model, providing it two inputs:</p>
<ul>
<li>Input data with the "maximum" shape. That means, that any array from your sequence would fit into this shape;</li>
<li>Actual shapes of the input arrays.</li>
</ul>
<p>Such an approach is used, for example, in <a href="https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn" rel="nofollow noreferrer">tf.nn.dynamic_rnn()</a>, where one of the parameters is the actual data, and another is <code>sequence_length</code> - the length of each sequence.</p>
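<p>As a rough sketch of this first option, assuming you zero-pad every array to the largest shape in the sequence and keep the real shapes around so the model can use them later (the names are illustrative):</p>
<pre><code>import numpy as np

def pad_to_max(arrays):
    # Element-wise maximum shape across the whole sequence, e.g. (4, 4, 8, 92) here.
    max_shape = tuple(np.max([a.shape for a in arrays], axis=0))
    padded = np.zeros((len(arrays),) + max_shape, dtype=np.float32)
    real_shapes = np.array([a.shape for a in arrays])
    for i, a in enumerate(arrays):
        padded[i][tuple(slice(0, s) for s in a.shape)] = a
    return padded, real_shapes

x_batch, x_shapes = pad_to_max(x_data)
# sess.run(..., feed_dict={X: x_batch, X_shapes: x_shapes})
# where X_shapes would be an extra placeholder holding the actual shapes
</code></pre>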
<p><strong>2. Vary shape of data from batch to batch.</strong> Another option would be to feed arrays of different shapes on each batch. So, for example, you group <code>batch_size</code> arrays of shape (4, 4, 8, 92) into one batch and feed it into the model. Then you take <code>batch_size</code> arrays of shape (3, 3, 8, 92) and do one more pass and so on. So, the shape can vary in the dataset, but it should be constant in the single batch.</p>
|
python|tensorflow|deep-learning
| 2
|
377,189
| 53,664,902
|
Parsing from Excel multisheet file: List comprehension between columns
|
<p>I am trying to parse an excel file that has many sheets. Each sheet has a column that has information as follows (3 sheets=3 columns):</p>
<pre><code>ReceivedEmail OpenedEmail ClickedURL
aaaa@aaa.com gggg@aaa.com aaaa@aaa.com
bbbb@aaa.com dddd@aaa.com rrrr@aaa.com
cccc@aaa.com rrrr@aaa.com
dddd@aaa.com aaaa@aaa.com
eeee@aaa.com oooo@aaa.com
ffff@aaa.com
gggg@aaa.com
rrrr@aaa.com
qqqq@aaa.com
oooo@aaa.com
</code></pre>
<p>What I want is a single table that retains the first column of sheet one, i.e. the one that has all data regarding ReceivedEmail (persons we mass e-mailed). The next columns should be the first column of each subsequent sheet, but instead of repeating the e-mails, I want to use a list comprehension to check if each OpenedEmail exists in ReceivedEmail and give <em>1</em>, else give <em>0</em>.</p>
<p>Here's what I did so far:</p>
<pre><code>import pandas as pd
xl = pd.ExcelFile(path_to_file)
xl.sheet_names
['ReceivedEmail', 'OpenedEmail', 'ClickedURL']
df = xl.parse(sheet_name=xl.sheet_names[0], header=None)
df.rename(columns={df.columns[0]:xl.sheet_names[0]}, inplace=True);
df.columns[0]
['ReceivedEmail']
# then I created a buffer dataframe to check next columns
df_buffer = xl.parse(sheet_name=xl.sheet_names[1], header=None)
df_buffer.rename(columns={df_buffer.columns[0]:xl.sheet_names[1]}, inplace=True);
</code></pre>
<p>But then when I run list comprehension like this:</p>
<pre><code>df[df_buffer.columns[0]] = [1 if x in df[df.columns[0]] else 0 for x in df_buffer[df_buffer.columns[0]]]
</code></pre>
<p>I get an error:</p>
<p><em>ValueError: Length of values does not match length of index</em></p>
<p>Any clue how to solve this error or handle the problem in a smart way? I am doing it manually to see if it works, then I could do a loop later, but I am stuck with the error.</p>
<p>End result should be:</p>
<pre><code>ReceivedEmail OpenedEmail ClickedURL
aaaa@aaa.com 1 1
bbbb@aaa.com 0 0
cccc@aaa.com 0 0
dddd@aaa.com 1 0
eeee@aaa.com 0 0
ffff@aaa.com 0 0
gggg@aaa.com 1 0
rrrr@aaa.com 1 1
qqqq@aaa.com 0 0
oooo@aaa.com 1 0
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="nofollow noreferrer"><code>read_excel</code></a> with parameter <code>sheetname=None</code> to return all sheets as an ordered dictionary of DataFrames:</p>
<p><em>Notice:</em></p>
<p><em>Each sheet has one column.</em></p>
<pre><code>dfs = pd.read_excel('file.xlsx', sheetname=None)
print (dfs)
OrderedDict([('ReceivedEmail', a
0 aaaa@aaa.com
1 bbbb@aaa.com
2 cccc@aaa.com
3 dddd@aaa.com
4 eeee@aaa.com
5 ffff@aaa.com
6 gggg@aaa.com
7 rrrr@aaa.com
8 qqqq@aaa.com
9 oooo@aaa.com), ('OpenedEmail', a
0 gggg@aaa.com
1 dddd@aaa.com
2 rrrr@aaa.com
3 aaaa@aaa.com
4 oooo@aaa.com), ('ClickedURL', a
0 aaaa@aaa.com
1 rrrr@aaa.com)])
</code></pre>
<p>Then join everything together, fix the column order by subsetting with <code>[]</code>, and for each column from the second onward check membership with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a>; last, convert the boolean mask to integers.</p>
<pre><code>cols = list(dfs.keys())
df = pd.concat({k: v.iloc[:, 0] for k, v in dfs.items()}, axis=1)[cols]
df.iloc[:, 1:] = df.iloc[:, 1:].apply(lambda x: df.iloc[:, 0].isin(x)).astype(int)
print (df)
ReceivedEmail OpenedEmail ClickedURL
0 aaaa@aaa.com 1 1
1 bbbb@aaa.com 0 0
2 cccc@aaa.com 0 0
3 dddd@aaa.com 1 0
4 eeee@aaa.com 0 0
5 ffff@aaa.com 0 0
6 gggg@aaa.com 1 0
7 rrrr@aaa.com 1 1
8 qqqq@aaa.com 0 0
9 oooo@aaa.com 1 0
</code></pre>
|
python|excel|pandas|parsing|list-comprehension
| 1
|
377,190
| 53,663,293
|
How to change the tensor shape in middle layers?
|
<p>Say I have a 2000x100 matrix. I put it into a 10-dimension embedding layer, which gives me a 2000x100x10 tensor, so it's 2000 examples where each example is a 100x10 matrix. I then pass it to a Conv1D and KMaxPooling to get a 2000x24 matrix, which is 2000 examples where each example is a 24-dimension vector. Now I would like to recombine those examples before I apply another layer: combine the first 10 examples together, and so on, so I get a tuple, and then pass that tuple to the next layer.
My question is: can I do that with Keras? Any idea on how to do it?</p>
|
<p>The idea of using "samples" is that these samples should be unique and not relate to each other. </p>
<p>This is something Keras will demand from your model: if it started with 2000 samples, it must end with 2000 samples. Ideally, these samples do not talk to each other; you can use custom layers to hack this, but only in the middle. You will need to end with 2000 samples anyway.</p>
<p>I believe you're going to end your model with 200 groups, so maybe you should already start with shape <code>(200,10,100)</code> and use <code>TimeDistributed</code> wrappers:</p>
<pre><code>inputs = Input((10,100)) #shape (200,10,100)
out = TimeDistributed(Embedding(....))(inputs) #shape (200,10,100,10)
out = TimeDistributed(Conv1D(...))(out) #shape (200,10,len,filters)
#here, you use your layer that will work on the groups without TimeDistributed.
</code></pre>
<p>To reshape a tensor without changing the batch size, use the <code>Reshape(newShape)</code> layer, where <code>newShape</code> does not include the first dimension (batch size). </p>
<p>To reshape a tensor including the batch size, use a <code>Lambda(lambda x: K.reshape(x,newShape))</code> layer, where <code>newShape</code> includes the first dimension (batch size) - Here you must remember the warning above: somewhere you will need to undo this change so you end up with the same batch size as the input. </p>
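<p>A rough sketch of that second route, assuming 2000 samples grouped into chunks of 10 and a placeholder group-level layer (the layer and sizes are only illustrative, and remember Keras still assumes the batch size is unchanged, so the final reshape must restore it):</p>
<pre><code>from keras.layers import Input, Lambda, Dense
from keras.models import Model
import keras.backend as K

inputs = Input((24,))                                              # (2000, 24) after your conv part
grouped = Lambda(lambda x: K.reshape(x, (-1, 10, 24)))(inputs)     # (200, 10, 24): batch size hacked
processed = Dense(32)(grouped)                                     # stand-in for your group-level layer
restored = Lambda(lambda x: K.reshape(x, (-1, 32)))(processed)     # back to (2000, 32)

model = Model(inputs, restored)
</code></pre>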
|
tensorflow|keras|nlp|embedding|word-embedding
| 0
|
377,191
| 53,494,667
|
Can training a deep learning model using Intel Xeon CPU in tensorflow solve the shortage of GPU memory?
|
<p>NVidia GPUs have 16GB of memory at most, which limits large model training. Model parallelism may need modification of the deep learning framework. Is it feasible to train tensorflow models using Intel multi-core CPUs? Could you give some advice about the hardware configuration and the performance?</p>
|
<p>You can try using Intel AI Devcloud which is cloud hosted hardware and software platform available to developers, researchers and startups to learn and get started on their Artificial Intelligence projects. It has Intel® Xeon® Scalable Processors and each processor has 24 cores with 2-way hyper-threading. Each processor has access to 96 GB of on-platform RAM. </p>
<p>Refer the below link for more details.</p>
<p><a href="https://ai.intel.com/devcloud/" rel="nofollow noreferrer">https://ai.intel.com/devcloud/</a></p>
<p>You can access this platform for 30 days by registering in the following link.</p>
<p><a href="https://software.intel.com/en-us/ai-academy/devcloud" rel="nofollow noreferrer">https://software.intel.com/en-us/ai-academy/devcloud</a></p>
<p>You will get a welcome mail which gives the user name and password. Open the hyperlink in the welcome mail to get more details on how to connect and use the Devcloud.
To get the best performance on Devcloud, change the parallelism threads and OpenMP settings (either inside the code or in the terminal) as below: </p>
<p>In the terminal:</p>
<pre><code>export OMP_NUM_THREADS="NUM_PARALLEL_EXEC_UNITS"
export KMP_BLOCKTIME="0"
export KMP_SETTINGS="1"
export KMP_AFFINITY="granularity=fine,verbose,compact,1,0"
</code></pre>
<p>Inside code:</p>
<pre><code>import os

os.environ["OMP_NUM_THREADS"] = "NUM_PARALLEL_EXEC_UNITS"
os.environ["KMP_BLOCKTIME"] = "0"
os.environ["KMP_SETTINGS"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
</code></pre>
<p>For more details regarding optimization, please refer:</p>
<p><a href="https://communities.intel.com/docs/DOC-112392" rel="nofollow noreferrer">https://communities.intel.com/docs/DOC-112392</a></p>
<p>Hope this helps.</p>
|
tensorflow|cpu
| 1
|
377,192
| 53,597,813
|
Groupby pandas in different sections
|
<p>I have a serialized dataset that has its content separated by spaces, like this <code>#a value1 #b value2 ....</code>, where the first element with # is the column name and the second is the value. My problem occurs in some sections of this dataset that have a sequence like "#% value1 #% value2": this specific mark represents a column with multiple values, so I need a mechanism to transform these multiple lines into one. E.g. original data = <code>#a value1 #b value2 #% value3 #% value4 #a value5 #b value6 #% value7 #% value8</code></p>
<p>After my split process:</p>
<pre><code>Key value
#a. Value1
#b. Value2
#%. Value3
#%. Value4
#a. Value5
#b. Value6
#%. Value7
#%. Value8
</code></pre>
<p>But I need this:</p>
<pre><code>Key value
#a. Value1
#b. Value2
#%. Value3,Value4
#a. Value5
#b. Value6
#%. Value7,Value8
</code></pre>
<p>How can I perform this local groupby using pandas? One detail is that this is a huge dataset (~2GB) and I'm running all this on a good, but normal, PC.</p>
|
<p>First create the helper key by using <code>shift</code> and <code>cumsum</code>, then it becomes a regular <code>groupby</code> and <code>join</code> problem.</p>
<pre><code>s=(df.Key!=df.Key.shift()).cumsum()
df.groupby([df.Key,s]).value.apply(','.join).\
sort_index(level=1).\
reset_index(level=1,drop=True)
Out[788]:
Key
#a. Value1
#b. Value2
#%. Value3,Value4
#a. Value5
#b. Value6
#%. Value7,Value8
Name: value, dtype: object
</code></pre>
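<p>For reference, with the sample data above the helper key <code>s</code> simply numbers each run of consecutive identical keys, which is what keeps the two <code>#%.</code> blocks in separate groups:</p>
<pre><code>print(s.tolist())
# [1, 2, 3, 3, 4, 5, 6, 6]
</code></pre>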
|
python|pandas|pandas-groupby
| 4
|
377,193
| 53,496,913
|
Import error tensorflow-gpu in ubuntu18.04
|
<p>I'm getting some error while importing tensorflow.</p>
<p>My computer's Specifications:</p>
<p>OS:ubuntu 18.04</p>
<p>Nvidia RTX 2080 Ti*2 </p>
<p>Nvidia driver-415</p>
<p>CUDA:10.0</p>
<p>cuDNN:7.3.0
tensorflow:1.11.0</p>
<pre><code>import tensorflow
</code></pre>
<p>Error:</p>
<blockquote>
<p>Traceback (most recent call last): File
"/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py",
line 58, in
from tensorflow.python.pywrap_tensorflow_internal import * File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 28, in
_pywrap_tensorflow_internal = swig_import_helper() File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/usr/lib/python3.6/imp.py", line 243,
in load_module
return load_dynamic(name, filename, file) File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec) ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last): File "", line 1, in
File
"/usr/local/lib/python3.6/dist-packages/tensorflow/<strong>init</strong>.py", line
22, in
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File
"/usr/local/lib/python3.6/dist-packages/tensorflow/python/<strong>init</strong>.py",
line 49, in
from tensorflow.python import pywrap_tensorflow File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py",
line 74, in
raise ImportError(msg) ImportError: Traceback (most recent call last): File
"/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py",
line 58, in
from tensorflow.python.pywrap_tensorflow_internal import * File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 28, in
_pywrap_tensorflow_internal = swig_import_helper() File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py",
line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/usr/lib/python3.6/imp.py", line 243,
in load_module
return load_dynamic(name, filename, file) File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec) ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory</p>
<p>Failed to load the native TensorFlow runtime.</p>
<p>See
<a href="https://www.tensorflow.org/install/install_sources#common_installation_problems" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_sources#common_installation_problems</a></p>
<p>for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.</p>
</blockquote>
<p>I already tried to use Ubuntu 16.04, but the GPU wasn't supported there.
Installation of CUDA 9.0 &amp; CUDA 9.2 was not supported either.</p>
<p>How can I use tensorflow-gpu?</p>
<p>I already added the paths in <code>~/.bashrc</code>:</p>
<pre><code>export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64$LD_LIBRARY_PATH
</code></pre>
|
<p>The <code>tensorflow-gpu</code> package is built against Cuda 9.0, but you have Cuda 10.0 installed. </p>
<p>You need either to downgrade your version of Cuda to 9.0 (but if I recall, that's not possible with a 2080Ti), or build tensorflow from source. There is extensive documentation on how to do so on the <a href="https://www.tensorflow.org/install/source" rel="nofollow noreferrer">tensorflow website</a>.</p>
<p>You can also try to install the package <code>tf-nightly-gpu</code>. You should note that this version is more experimental though, as it has not been tested as extensively. </p>
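<p>Whichever route you take (a source build against CUDA 10.0 or <code>tf-nightly-gpu</code>), a quick sanity check once the import succeeds could look like this (just an illustration):</p>
<pre><code>import tensorflow as tf

print(tf.VERSION)
print(tf.test.is_gpu_available())   # should be True once CUDA/cuDNN are linked correctly
print(tf.test.gpu_device_name())    # e.g. '/device:GPU:0'
</code></pre>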
|
python|tensorflow|nvidia|ubuntu-18.04
| 1
|
377,194
| 53,553,444
|
Determining a cell value based on previous/next cell value(s)
|
<p>This is a follow-up from a previous post of mine: <a href="https://stackoverflow.com/questions/53539078/change-multiple-cell-values-based-on-other-cell-values/53539132?noredirect=1#comment93950476_53539132">Change multiple cell values based on other cell value(s)</a></p>
<p>this is what a normal cycle of an object looks like:</p>
<pre><code> DateTime limitswitchopen limitswitchclose safetyedgeclose safetyedgeopen photocells traploopext rectloopext moving comment
0 2018-11-12 15:04:46.861 0 1 0 0 0 0 0 close NaN
1 2018-11-12 15:04:57.149 0 1 0 0 0 0 0 close NaN
2 2018-11-12 15:05:05.046 0 1 0 0 0 0 0 close Normaal
3 2018-11-12 15:05:06.859 0 0 0 0 0 0 0 movingToopen Normaal
4 2018-11-12 15:05:10.080 0 0 0 0 0 0 0 movingToopen Normaal
5 2018-11-12 15:05:11.801 1 0 0 0 0 0 0 open Normaal
6 2018-11-12 15:05:13.409 1 0 0 0 0 0 0 open Normaal
7 2018-11-12 15:05:17.142 1 0 0 0 0 1 0 open Normaal
8 2018-11-12 15:05:18.754 1 0 0 0 0 1 0 open Normaal
9 2018-11-12 15:05:19.055 1 0 0 0 0 0 1 open Normaal
10 2018-11-12 15:05:19.763 1 0 0 0 0 0 1 open Normaal
11 2018-11-12 15:05:20.367 1 0 0 0 0 0 0 open Normaal
12 2018-11-12 15:05:21.575 0 0 0 0 0 0 0 movingToclose Normaal
13 2018-11-12 15:05:23.385 0 0 0 0 0 0 0 movingToclose Normaal
14 2018-11-12 15:05:26.505 0 1 0 0 0 0 0 close Normaal
15 2018-11-12 15:05:26.906 0 1 0 0 0 0 0 close NaN
</code></pre>
<p>I need to know in which direction the object is moving. So what i did was</p>
<pre><code>df['moving'] = df[(df.limitswitchclose == 0) &amp; (df.limitswitchopen == 0)]
df['open'] = df[(df.limitswitchclose == 0) &amp; (df.limitswitchopen == 1)]
df['close'] = df[(df.limitswitchclose == 1) &amp; (df.limitswitchopen == 0)]
</code></pre>
<p>which I merged into one column and then made the previous post.
I then used that code to create the dataframe above where the direction is shown, which works in this case. But when the cycle gets interrupted, the direction changes based on an interruption during opening/closing.</p>
<pre><code> DateTime limitswitchopen limitswitchclose safetyedgeclose safetyedgeopen photocells traploopext rectloopext moving comment
41 2018-11-12 15:06:09.931 0 1 0 0 0 0 0 close Fotocellopen
42 2018-11-12 15:06:11.944 0 0 0 0 0 0 0 movingToclose Fotocellopen
43 2018-11-12 15:06:13.756 0 0 0 0 1 0 0 movingToclose Fotocellopen
44 2018-11-12 15:06:15.168 0 0 0 0 0 0 0 movingToclose Fotocellopen
45 2018-11-12 15:06:18.388 0 1 0 0 0 0 0 close Fotocellopen
46 2018-11-12 15:06:20.100 0 0 0 0 0 0 0 movingToopen Fotocellopen
47 2018-11-12 15:06:23.316 0 0 0 0 0 0 0 movingToopen Fotocellopen
48 2018-11-12 15:06:25.730 1 0 0 0 0 0 0 open Fotocellopen
49 2018-11-12 15:06:26.637 1 0 0 0 0 0 0 open Fotocellopen
50 2018-11-12 15:06:27.644 1 0 0 0 0 1 0 open Fotocellopen
51 2018-11-12 15:06:28.550 1 0 0 0 0 1 1 open Fotocellopen
52 2018-11-12 15:06:28.855 1 0 0 0 0 0 1 open Fotocellopen
53 2018-11-12 15:06:29.356 1 0 0 0 0 0 0 open Fotocellopen
54 2018-11-12 15:06:30.563 1 0 0 0 0 0 0 open Fotocellopen
55 2018-11-12 15:06:31.369 0 0 0 0 0 0 0 movingToclose Fotocellopen
56 2018-11-12 15:06:32.575 0 0 0 0 0 0 0 movingToclose Fotocellopen
57 2018-11-12 15:06:35.593 0 1 0 0 0 0 0 close Fotocellopen
</code></pre>
<p>at <code>43 2018-11-12 15:06:13.756</code> <code>photocells = 1</code> this will make the object close and then start to open again. </p>
<p>So what this dataframe should be is:</p>
<pre><code> DateTime limitswitchopen limitswitchclose safetyedgeclose safetyedgeopen photocells traploopext rectloopext moving comment
41 2018-11-12 15:06:09.931 0 1 0 0 0 0 0 close Fotocellopen
42 2018-11-12 15:06:11.944 0 0 0 0 0 0 0 movingToopen Fotocellopen
43 2018-11-12 15:06:13.756 0 0 0 0 1 0 0 movingToopen Fotocellopen
44 2018-11-12 15:06:15.168 0 0 0 0 0 0 0 movingToclose Fotocellopen
45 2018-11-12 15:06:18.388 0 1 0 0 0 0 0 close Fotocellopen
46 2018-11-12 15:06:20.100 0 0 0 0 0 0 0 movingToopen Fotocellopen
47 2018-11-12 15:06:23.316 0 0 0 0 0 0 0 movingToopen Fotocellopen
48 2018-11-12 15:06:25.730 1 0 0 0 0 0 0 open Fotocellopen
49 2018-11-12 15:06:26.637 1 0 0 0 0 0 0 open Fotocellopen
50 2018-11-12 15:06:27.644 1 0 0 0 0 1 0 open Fotocellopen
51 2018-11-12 15:06:28.550 1 0 0 0 0 1 1 open Fotocellopen
52 2018-11-12 15:06:28.855 1 0 0 0 0 0 1 open Fotocellopen
53 2018-11-12 15:06:29.356 1 0 0 0 0 0 0 open Fotocellopen
54 2018-11-12 15:06:30.563 1 0 0 0 0 0 0 open Fotocellopen
55 2018-11-12 15:06:31.369 0 0 0 0 0 0 0 movingToclose Fotocellopen
56 2018-11-12 15:06:32.575 0 0 0 0 0 0 0 movingToclose Fotocellopen
57 2018-11-12 15:06:35.593 0 1 0 0 0 0 0 close Fotocellopen
</code></pre>
<p>So what I need is a way to determine whether the object is opening or closing.
If limitswitchclose goes from <code>1</code> to <code>0</code> it will always be opening, and if limitswitchopen goes from <code>1</code> to <code>0</code> it will always be closing. But based on the other columns in df it can change direction. If <code>safetyedgeopen = 1</code> during opening it will close again. But if <code>traploopext = 1</code> during opening it will continue opening.</p>
<p>How do I tackle this problem?</p>
<p>(I'll continue trying to solve it and post my answer if it works, I can give more examples of how I want the dataframe to look but the post was getting long) </p>
|
<p>If I understand what you mainly ask, it's "how do I assign a value based on the value of the previous row." The way I would do this is to simply shift the filtering condition by one row. I won't give the solution for all the options, but you can just expand the filtering condition appropriately.</p>
<p>This line of code filters for rows where the previous row has the value 1 in <code>photocells</code>, and if that's the case, swaps the current row's <code>movingToclose</code> and <code>movingToopen</code> values:</p>
<pre><code>mask = df.shift(1).photocells == 1
df.loc[mask, 'moving'] = (df.loc[mask, 'moving']
                          .str.replace('Toclose', 'To_open')
                          .str.replace('Toopen', 'To_close')
                          .str.replace('_', ''))
</code></pre>
|
python|pandas|dataframe
| 1
|
377,195
| 53,513,032
|
Pandas groupby and apply - getting a new DataFrame over the groupby variable
|
<p>I'm trying to use <code>pandas.DataFrame.groupby(['x'])</code> in order to make calculations on the grouped <code>df</code>, by <code>x</code>.</p>
<p>Problems arise when <code>'x'</code> repeats more than once: the apply function will do the calculations as many times as <code>'x'</code> repeats, though I only need the 'aggregated' values (it's not really an <em>aggregation</em> but more like <em>processing</em>).</p>
<p>Here's a toy example:</p>
<pre><code>def simulate_complicated_func(df):
# This function simulates complicate calculations
returned_col_names = ['calc1', 'calc2', 'calc3']
df['calc1'] = ''.join(df['var1'])
df['calc2'] = df['var2'].mean()
df['calc3'] = ''.join(df['var1']) + str(df['var2'].max())
return df[['id'] + returned_col_names]
df = pd.DataFrame({'id':['id1', 'id1', 'id2', 'id3', 'id3', 'id3'],
'var1':['abc', 'cba', 'abc', 'cba', 'abc', 'cba'],
'var2':[9, 4, 7, 4, 1, 3]})
print(df)
id var1 var2
0 id1 abc 9
1 id1 cba 4
2 id2 abc 7
3 id3 cba 4
4 id3 abc 1
5 id3 cba 3
res_df = df.groupby(['id']).apply(simulate_complicated_func).drop_duplicates()
print(res_df)
id calc1 calc2 calc3
0 id1 abccba 6.500000 abccba9
2 id2 abc 7.000000 abc7
3 id3 cbaabccba 2.666667 cbaabccba4
</code></pre>
<p>The output is exactly what I want, but it's not efficient. Is there a better way doing it using pandas?</p>
<h2>Edit: Optimize how?</h2>
<p>If we'll add a <code>print</code> statement to <code>simulate_complicated_func()</code></p>
<pre><code>def simulate_complicated_func(df):
# This function simulates complicate calculations
print("function called")
# ...
</code></pre>
<p>We can see that the code will print it 6 times:</p>
<pre><code>function called
function called
function called
function called
function called
function called
</code></pre>
<p>In reality, we only need to access this function 3 times (the number of groups created by groupby).</p>
|
<p>One idea is to return a <code>Series</code> from the custom function, so <code>drop_duplicates</code> is not necessary:</p>
<pre><code>def simulate_complicated_func(df):
# This function simulates complicate calculations
returned_col_names = ['calc1', 'calc2', 'calc3']
a = ''.join(df['var1'])
b = df['var2'].mean()
c = ''.join(df['var1']) + str(df['var2'].max())
return pd.Series([a,b,c], index=returned_col_names)
res_df = df.groupby(['id']).apply(simulate_complicated_func).reset_index()
print(res_df)
id calc1 calc2 calc3
0 id1 abccba 6.500000 abccba9
1 id2 abc 7.000000 abc7
2 id3 cbaabccba 2.666667 cbaabccba4
</code></pre>
<p>Another idea is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.agg</code></a>, but it only works for processing columns with aggregating functions like <code>join</code> and <code>mean</code>. The <code>agg</code> function works with each column separately, so <code>calc3</code> cannot be computed in an easy/effective way - a custom function is still necessary, and the outputs are joined together at the end:</p>
<pre><code>def simulate_complicated_func(df):
# This function simulates complicate calculations
returned_col_names = ['calc3']
c = ''.join(df['var1']) + str(df['var2'].max())
return pd.Series([c], index=returned_col_names)
d = {'var1': ''.join, 'var2':'mean'}
cols = {'var1':'calc1','var2':'calc2'}
g = df.groupby(['id'])
df1 = g.agg(d).rename(columns=cols)
print (df1)
calc1 calc2
id
id1 abccba 6.500000
id2 abc 7.000000
id3 cbaabccba 2.666667
df2 = df.groupby(['id']).apply(simulate_complicated_func)
print(df2)
calc3
id
id1 abccba9
id2 abc7
id3 cbaabccba4
df = pd.concat([df1, df2], axis=1).reset_index()
print (df)
id calc1 calc2 calc3
0 id1 abccba 6.500000 abccba9
1 id2 abc 7.000000 abc7
2 id3 cbaabccba 2.666667 cbaabccba4
</code></pre>
|
python|pandas
| 2
|
377,196
| 53,680,401
|
Combining date and time (timestamp) in a list of dataframes
|
<p>I wish to combine date and time into one timestamp on a list of dataframes and to designate a week to the call date. </p>
<p>Here is the error :
ValueError: could not convert string to Timestamp </p>
<p>I have used the following function :</p>
<pre><code>def new_call_time(df):
i=0
df[' CALL_DATE_MANIPULATED']=str(df['CALL_DATE'][i]).split()[0] + ' ' + str(df['CALL_TIME'][i])
df[' UNIX_TIME']= pd.Timestamp(df[' CALL_DATE_MANIPULATED'][i]).value//10 ** 9
df[' WEEK']=''
for i in range(len(df)):
df[' CALL_DATE_MANIPULATED'][i]=str(df['CALL_DATE'][i]).split()[0] + ' ' + str(df['CALL_TIME'][i])
df[' UNIX_TIME'][i]= pd.Timestamp(df[' CALL_DATE_MANIPULATED'][i]).value// 10 ** 9
df[' WEEK'][i]=df[' UNIX_TIME'][i]//604800
return df
</code></pre>
<p>Here is the function call statement :</p>
<pre><code>for df in data_frame :
df = new_call_time(df)
</code></pre>
<p>Here are the tables I have read from excel sheets (contained in a list called data_frame) :</p>
<pre><code> CALL_DATE CALL_TIME
01-JAN-2016 00:15:06
01-JAN-2016 07:07:00
CALL_DATE CALL_TIME
01-JAN-2016 08:40:38
01-JAN-2016 08:44:14
CALL_DATE CALL_TIME
01-JAN-2016 08:51:10
01-JAN-2016 09:06:31
</code></pre>
<p>This is working for an individual dataframe but isn't working for a list of dataframes.</p>
<p>The new tables should have the following columns too :
example : data_frame[0] -</p>
<pre><code> CALL_DATE CALL_TIME CALL_DATE_MANIPULATED UNIX_TIME WEEK
01-JAN-2016 00:15:06 01-JAN-2016 00:15:06 1451607306 2400
01-JAN-2016 07:07:00 01-JAN-2016 07:07:00 1451632020 2400
</code></pre>
<p>Thanks a lot :)))</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> with indexing <code>str[0]</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> with parameter <code>errors='coerce'</code>, which gives <code>NaT</code> if some values do not match the format defined in <code>format</code> (the <code>format</code> parameter is there for better performance, but it can be omitted):</p>
<pre><code>import numpy as np
import pandas as pd

def new_call_time(df):
df['CALL_DATE_MANIPULATED'] = (df['CALL_DATE'].astype(str).str.split().str[0] + ' ' +
df['CALL_TIME'].astype(str))
dates = pd.to_datetime(df['CALL_DATE_MANIPULATED'],
errors='coerce',
format='%d-%b-%Y %H:%M:%S')
df['UNIX_TIME'] = dates.values.astype(np.int64) // 10 ** 9
df['WEEK'] = df['UNIX_TIME'] //604800
return df
</code></pre>
<p>Call function in list comprehension for new <code>list of DataFrames</code>:</p>
<pre><code>data_frame1 = [new_call_time(df) for df in data_frame]
</code></pre>
|
python|pandas|timestamp|unix-timestamp|valueerror
| 1
|
377,197
| 53,530,132
|
Convert pandas datetime to hours from start
|
<p>I have a dataframe with the following datetime index:</p>
<pre><code>DatetimeIndex(['2018-10-17 00:00:00', '2018-10-17 01:00:00',
'2018-10-17 02:00:00', '2018-10-17 03:00:00',
'2018-10-17 04:00:00', '2018-10-17 05:00:00',
'2018-10-17 06:00:00', '2018-10-17 07:00:00',
'2018-10-17 08:00:00', '2018-10-17 09:00:00',
...
'2018-11-29 15:00:00', '2018-11-29 16:00:00',
'2018-11-29 17:00:00', '2018-11-29 18:00:00',
'2018-11-29 19:00:00', '2018-11-29 20:00:00',
'2018-11-29 21:00:00', '2018-11-29 22:00:00',
'2018-11-29 23:00:00', '2018-11-30 00:00:00'],
dtype='datetime64[ns]', name='dates', length=914, freq=None)
</code></pre>
<p>How do I convert it to hours from the first datetime index, i.e. 0, 1, 2...?</p>
|
<p>You can subtract the first datetime from all values in your index, then divide by <code>numpy.timedelta64(1,'h')</code> (a timedelta of 1 hour):</p>
<pre><code>(df.index - df.index[0]) / np.timedelta64(1,'h')
Float64Index([ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0,
8.0, 9.0,
...
1047.0, 1048.0, 1049.0, 1050.0, 1051.0, 1052.0,
1053.0, 1054.0, 1055.0, 1056.0],
dtype='float64', name='dates')
</code></pre>
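<p>If you want the result as a regular column of integer hour offsets rather than as an index, something like this should work (the column name is arbitrary):</p>
<pre><code>df['hours_from_start'] = ((df.index - df.index[0]) / np.timedelta64(1, 'h')).astype(int)
</code></pre>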
|
python|pandas|datetime|indexing
| 2
|
377,198
| 53,361,618
|
How to Upsample Pandas DataFrame using Timestamp
|
<p>I have a DataFrame like this (don't care about NaN values):</p>
<p><a href="https://i.stack.imgur.com/FHgOZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHgOZ.png" alt="enter image description here"></a></p>
<p>And I would like to upsample it each 20 milliseconds.</p>
<p>What I did is:</p>
<pre><code>df = df.set_index('TIMESTAMP')
df = df.resample('20ms').ffill()
</code></pre>
<p>But I get the error:</p>
<pre><code>Traceback (most recent call last):
sens_encoded = sens_encoded.resample('20ms').ffill()
TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index'
</code></pre>
<p>So I tried to convert TIMESTAMP to DateTime, which should be already:</p>
<pre><code>df = df.set_index('TIMESTAMP')
df.index = pd.to_datetime(df.index) //Added this
df = df.resample('20ms').ffill()
</code></pre>
<p>But I get the error:</p>
<pre><code>Traceback (most recent call last):
df.index = pd.to_datetime(df.index)
TypeError: <class 'tuple'> is not convertible to datetime
</code></pre>
<p>EDIT:</p>
<p>I think the problem might be that after set_index('TIMESTAMP'), the dataframe looks like this (note the parentheses in the timestamp values):</p>
<p><a href="https://i.stack.imgur.com/CAx4g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CAx4g.png" alt="enter image description here"></a></p>
<p><strong>EDIT2</strong>:</p>
<p>I found out why I was getting those parentheses in df.
It was because I was creating it by assigning the column names as a list wrapped in an extra pair of square brackets. The correct way to do it is:</p>
<pre><code>columns_names = ['D07', 'C10', ...]
df = pd.DataFrame(columns=columns_names)
</code></pre>
<p><s>df = pd.DataFrame(columns=[columns_names])</s></p>
|
<p>First set the first level of the <code>MultiIndex</code> as the columns to remove the broken one-level <code>MultiIndex</code>.</p>
<p>Add the parameter <code>errors='coerce'</code> to convert non-parseable values to <code>NaT</code> if necessary. It is also possible to first convert the column, then create the <code>DatetimeIndex</code> and finally <code>upsample</code>:</p>
<pre><code>df.columns = df.columns.get_level_values(0)
df['TIMESTAMP'] = pd.to_datetime(df['TIMESTAMP'], errors='coerce')
df = df.set_index('TIMESTAMP').resample('20ms').ffill()
</code></pre>
<p>Or:</p>
<pre><code>df.columns = df.columns.get_level_values(0)
df = df.set_index('TIMESTAMP')
df.index = pd.to_datetime(df.index, errors='coerce')
df = df.resample('20ms').ffill()
</code></pre>
|
python|pandas|resampling
| 3
|
377,199
| 53,565,437
|
MLWIC: Machine Learning for Wildlife Image Classification in R Issues with Python
|
<p>I am a wildlife PhD researcher manually identifying ~1.5 million game camera photos by species. A machine learning package in R has recently come out of a research project and I've been trying to get the script to run in R for about 12 hours and can't seem to get it right (I have used R and python a lot, but I am no expert and this is the first question I have asked on here so forgive me if I haven't done this correctly).</p>
<p>The ReadMe (To understand what I am trying to do you will probably have to read this, I apologize) for the package downloaded on Github is located at: <a href="https://github.com/mikeyEcology/MLWIC/blob/master/README.md" rel="nofollow noreferrer">https://github.com/mikeyEcology/MLWIC/blob/master/README.md</a></p>
<p>Unfortunately for me, the package was developed on a Macintosh platform and I have Windows.
I followed the steps in the ReadMe as follows:</p>
<p>1: Installed the MLWIC package using the code:</p>
<pre><code>devtools::install_github("mikeyEcology/MLWIC")
library(MLWIC)
</code></pre>
<p>2: Followed the instructions to install "pip", python, and "TensorFlow" at
<a href="https://www.tensorflow.org/install/pip" rel="nofollow noreferrer">https://www.tensorflow.org/install/pip</a></p>
<p>3: Downloaded the L1 folder</p>
<p>4: I ran a slightly different code than outlined in the ReadMe; it is as follows:
<code>setup(python_loc = "...")</code>, using the location I got from running "where python" in Anaconda.</p>
<p>After this initial setup, I ran the code for the "classify function":
library(MLWIC)</p>
<pre><code>setup(python_loc = "C:/ProgramData/Anaconda3", conda_loc = "auto", r_reticulate = FALSE)
setwd("C:/Users/werdel/Desktop/MachineLearning")
help("classify")
classify(path_prefix = "C:/Users/werdel/Desktop/MachineLearning/images",# this is the absolute path to the images.
data_info = "C:/Users/werdel/Desktop/MachineLearning/image_labels.csv", # this is the location of the csv containing image information. It has Unix linebreaks and no headers.
model_dir = "C:/Users/werdel/Desktop/MachineLearning", # assuming this is where you stored the L1 folder in Step 3 of the instructions: github.com/mikeyEcology/MLWIC/blob/master/README
python_loc = "C:/ProgramData/Anaconda3/python.exe", # the location of Python on your computer.
save_predictions = "model_predictions.txt" # this is the default and you should use it unless you have reason otherwise.)
</code></pre>
<p>This is where the problem seemed to arise. It seems to run fine, with the output showing a file created in my working directory, but when I check, there is no file. I have tried changing python location, downloading new and old versions of anaconda, messing with environments, but nothing has changed the fact that there is no file created in my working directory:</p>
<pre><code>> library(MLWIC)
> setup(python_loc = "C:/ProgramData/Anaconda3", conda_loc = "auto", r_reticulate = FALSE)
Remove all packages in environment C:\PROGRA~3\ANACON~1\envs\r-reticulate:
## Package Plan ##
environment location: C:\PROGRA~3\ANACON~1\envs\r-reticulate
The following packages will be REMOVED:
ca-certificates: 2018.03.07-0
certifi: 2018.10.15-py37_0
openssl: 1.1.1a-he774522_0
pip: 18.1-py37_0
python: 3.7.1-he44a216_5
setuptools: 40.6.2-py37_0
vc: 14.1-h0510ff6_4
vs2015_runtime: 14.15.26706-h3a45250_0
wheel: 0.32.3-py37_0
wincertstore: 0.2-py37_0
Solving environment: ...working... done
## Package Plan ##
environment location: C:\PROGRA~3\ANACON~1\envs\r-reticulate
added / updated specs:
- python
The following NEW packages will be INSTALLED:
ca-certificates: 2018.03.07-0
certifi: 2018.10.15-py37_0
openssl: 1.1.1a-he774522_0
pip: 18.1-py37_0
python: 3.7.1-he44a216_5
setuptools: 40.6.2-py37_0
vc: 14.1-h0510ff6_4
vs2015_runtime: 14.15.26706-h3a45250_0
wheel: 0.32.3-py37_0
wincertstore: 0.2-py37_0
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
#
# To activate this environment, use:
# > activate r-reticulate
#
# To deactivate an active environment, use:
# > deactivate
#
# * for power-users using bash, you must source
#
Solving environment: ...working... failed
UnsatisfiableError: The following specifications were found to be in conflict:
- argparse
- tensorflow
Use "conda info <package>" to see the dependencies for each package.
Error: Error 1 occurred installing packages into conda environment r-reticulate
> classify(path_prefix = "C:/Users/werdel/Desktop/MachineLearning/images", # this is
the absolute path to the images.
+ data_info = "C:/Users/werdel/Desktop/MachineLearning/image_labels.csv", #
this is the location of the csv containing image information. It has Unix linebreaks
and no headers.
+ model_dir = "C:/Users/werdel/Desktop/MachineLearning", # assuming this is
where you stored the L1 folder in Step 3 of the instructions:
github.com/mikeyEcology/MLWIC/blob/master/README
+ python_loc = "C:/ProgramData/Anaconda3/python.exe", # the location of Python
on your computer.
+ save_predictions = "model_predictions.txt" # this is the default and you
should use it unless you have reason otherwise.
+ )
[1] "evaluation of images took 0.000504970550537109 secs. The results are stored in
C:/Users/werdel/Desktop/MachineLearning/L1/model_predictions.txt. To view the results
in a viewer-friendly format, please use the function make_output"
</code></pre>
<p>So my final question is: does it seem like I set something up wrong while installing pip, TensorFlow, Anaconda, and Python, or is it something with the way I am coding, etc.?</p>
|
<p>If I am not mistaken there is a small bug in their code that ignores the "data_info" path. Try renaming your "image_labels.csv" to "data_info.csv" and put the file inside the model_dir. This solved the problem for me. Also, use "C:/ProgramData/Anaconda3/" instead of "C:/ProgramData/Anaconda3/python.exe"</p>
|
python|r|tensorflow|github|anaconda
| 1
|