Dataset columns (dtype, min to max): Unnamed: 0 (int64, 0 to 378k); id (int64, 49.9k to 73.8M); title (string, lengths 15 to 150); question (string, lengths 37 to 64.2k); answer (string, lengths 37 to 44.1k); tags (string, lengths 5 to 106); score (int64, -10 to 5.87k)
6,500
64,474,626
How to load data in tensorflow from subdirectories
<p>I have a subset of ImageNet data contained in sub-folders locally, where each sub-folder represents a class of images. There are potentially hundreds of classes, and therefore sub-folders, and each subfolder can contain hundreds of images. Here is an example of this structure with a subset of folders. I want to train a classification model in tensorflow, but I am not sure how to format and load the data given this structure of different image classes in different folders and the class label being the name of the folder. Normally I've just used datasets that already exist in tensorflow like mnist or cifar10 which are formatted and easy to use.</p> <p><a href="https://i.stack.imgur.com/A56CW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A56CW.png" alt="enter image description here" /></a></p>
<p>You can use <code>tf.keras.preprocessing.image_dataset_from_directory()</code>.</p> <p>Your directory structure would be something like this but with many more classes:</p> <pre><code>main_directory/ ...class_a/ ......a_image_1.jpg ......a_image_2.jpg ...class_b/ ......b_image_1.jpg ......b_image_2.jpg </code></pre> <p>I would suggest you split the dataset before this step as I think the data is split here randomly and not by stratified sampling(if your datasets are imbalanced then do this first and do not use the validation split to do it for you as I am not sure of the nature of how splitting is done as there is no mention of it).</p> <p><strong>Example:</strong></p> <pre><code>train_dataset = image_dataset_from_directory( directory=TRAIN_DIR, labels=&quot;inferred&quot;, label_mode=&quot;categorical&quot;, class_names=[&quot;0&quot;, &quot;10&quot;, &quot;5&quot;], image_size=SIZE, seed=SEED, subset=None, interpolation=&quot;bilinear&quot;, follow_links=False, ) </code></pre> <p>Important things you have to set:</p> <ol> <li><p><strong>Labels must be inferred</strong> where the labels of the images are generated based on the directory structure so it follows the order of the classes.</p> </li> <li><p><strong>Label mode</strong> has to be set to &quot;categorical&quot; which encodes the labels as a categorical vector.</p> </li> <li><p><strong>Class names</strong> you can set this yourself where you would have to list the order of the folders in the directory otherwise the order is based on alphanumeric ordering. What you can do here as you have lots of folders is use <code>os.walk(directory)</code> to get the list of the directories in the order that they are.</p> </li> <li><p><strong>Image size</strong> you can resize the images to be of the same size. Do so according to the model that you are using i.e., MobileNet takes in (224,224) so you can set this to (224,224).</p> </li> </ol> <p><a href="https://keras.io/api/preprocessing/image/" rel="nofollow noreferrer">More information here</a>.</p>
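<p>A minimal sketch of getting the class names in directory order with <code>os.walk</code> (assuming <code>TRAIN_DIR</code> is the main directory, as in the example above):</p> <pre><code>import os

# os.walk yields (dirpath, dirnames, filenames); the first tuple's dirnames
# entry is the list of class folders directly under TRAIN_DIR
class_names = next(os.walk(TRAIN_DIR))[1]
# sort them if you want a deterministic, alphanumeric order
class_names = sorted(class_names)
</code></pre>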
tensorflow|keras|subdirectory|tensorflow-datasets|loaddata
3
6,501
49,172,050
Efficient way of writing multiple conditions for filtering data using loc or iloc
<p>I have written the code like below to filter out the records from the column named 'Document Type' which contains around 25 categorical values.</p> <pre><code>salesdf.loc[(salesdf['Document type'] != 'AVC') &amp; (salesdf['Document type'] != 'CC') &amp; (salesdf['Document type'] != 'CDI') &amp; (salesdf['Document type'] != 'BSX') &amp; (salesdf['Document type'] != 'BTR') &amp; (salesdf['Document type'] != 'FAF')] </code></pre> <p>I am just wondering if there is an efficient way of writing code that gives me the same output?</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a> with inverted condition by <code>~</code>:</p> <pre><code>salesdf[~salesdf['Document type'].isin(['AVC', 'CC','CDI', 'BSX','BTR','FAF'])] </code></pre> <p><strong>Sample</strong>:</p> <pre><code>salesdf = pd.DataFrame({ 'Document type': ['AVC','CDI','CC','a','b','FAF','BTR','c','BSX'] }) print (salesdf) Document type 0 AVC 1 CDI 2 CC 3 a 4 b 5 FAF 6 BTR 7 c 8 BSX a = salesdf.loc[(salesdf['Document type'] != 'AVC') &amp; (salesdf['Document type'] != 'CC') &amp; (salesdf['Document type'] != 'CDI') &amp; (salesdf['Document type'] != 'BSX') &amp; (salesdf['Document type'] != 'BTR') &amp; (salesdf['Document type'] != 'FAF')] print (a) Document type 3 a 4 b 7 c b = salesdf[~salesdf['Document type'].isin(['AVC', 'CC','CDI', 'BSX','BTR','FAF'])] print (b) Document type 3 a 4 b 7 c </code></pre>
python|pandas|pandas-loc
3
6,502
49,305,174
fast way to stack vectors into a matrix in Python
<p>I want to stack 100k vectors of the same length (500) into a single matrix in Python, but it takes too much time.</p> <p>Here is my code:</p> <pre><code>stacked = all_vectors[0] for i in range(1,100000): stacked = np.column_stack((stacked ,all_vectors[i])) </code></pre> <p>Do you know how to make this quicker?</p>
<p>You should get the answer you want with</p> <pre><code>stacked = np.column_stack(all_vectors[:100000]) </code></pre> <p>There appears to be no difference between that and</p> <pre><code>stacked = np.array(all_vectors[:100000]).transpose() </code></pre> <p>as you can see from this interactive session:</p> <pre><code>&gt;&gt;&gt; stacked = np.column_stack(all_vectors[:100000]) &gt;&gt;&gt; sstacked = np.array(all_vectors[:100000]).transpose() &gt;&gt;&gt; stacked == sstacked array([[ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], ..., [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True]], dtype=bool) &gt;&gt;&gt; (stacked == sstacked).all() True </code></pre> <p><em>EDIT</em>: Timing results appear to prefer the second method:</p> <pre><code>%%timeit vector = list(range(1, 1+10)) all_vectors = [vector] *100_000 result = np.column_stack(all_vectors) 396 ms ± 18.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %%timeit vector = list(range(1, 1+10)) all_vectors = [vector] *100_000 result = np.array(all_vectors) np.array(all_vectors[:100000]).transpose() 152 ms ± 3.16 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre>
python|loops|numpy|matrix|vector
3
6,503
59,036,924
In python is there a way to delete parts of a column?
<p>I want to trim the values of a pandas data frame. For example, I have the following:</p> <pre><code> A B C 33344-10 5555-78 999902 3444441 5555679 2334 2334 5555 3344 </code></pre> <p>And I would like the result to be:</p> <pre><code>A B C 3334 5555 9999 3444 5555 2334 2334 5555 3344 </code></pre> <p>If anyone could help it would be very appreciated.</p>
<p>slice each column in a loop as below</p> <pre class="lang-py prettyprint-override"><code>columns = df.columns for column in columns: df[column] = df[column].astype(str).str[:4] df </code></pre> <p>which gives you the following output</p> <pre><code> A B C 0 3334 5555 9999 1 3444 5555 2334 2 2334 5555 3344 </code></pre>
python|pandas|dataframe
5
6,504
58,664,914
How do I split multiple columns?
<p>I would like to split each of the columns in my dataset.</p> <p>The idea is to split out the number between "/" and the string between "/" and "@" and put these values into new columns.</p> <p>I tried something like this:</p> <pre><code>new_df = dane['1: Brandenburg'].str.split('/',1) </code></pre> <p>and then creating new columns for it. But I don't want to do this for all 60 columns.</p> <pre><code>first column 1: Branburg : ES-NL-10096/1938/X1@hkzydzon.dk/6749 BE-BR-6986/3551/B1@oqk.bf/39927 PH-SA-39552610/2436/A1@venagi.hr/80578 PA-AE-59691/4881/X1@zhicksl.cl/25247 second column 2: Achon : DE-JP-20082/2066/A2@qwier.cu/68849 NL-LK-02276/2136/A1@ozmdpfts.de/73198 OM-PH-313/3671/Z1@jtqy.ml/52408 AE-ID-9632/3806/C3@lhbt.ar/83484 etc,etc... </code></pre>
<p>As I understood, you want to extract <strong>two parts</strong> from each cell. E.g. from <em>ES-NL-10096/1938/X1@hkzydzon.dk/6749</em> there should be extracted:</p> <ul> <li><em>1938</em> - the number between slashes,</li> <li><em>X1</em> - the string between the second slash and <em>@</em>.</li> </ul> <p>To do this, you can run:</p> <pre><code>df.stack().str.extract(r'/(?P&lt;num&gt;\d+)/(?P&lt;txt&gt;[A-Z\d]+)@')\ .stack().unstack([1, 2]) </code></pre> <p>You will get a MultiIndex on columns:</p> <ul> <li>top level - the name of the "source" column,</li> <li>second level - <em>num</em> and <em>txt</em> - 2 extracted "parts".</li> </ul> <p>For your sample data, the result is:</p> <pre><code> 1: Brandenburg 2: Achon num txt num txt 0 1938 X1 2066 A2 1 3551 B1 2136 A1 2 2436 A1 3671 Z1 3 4881 X1 3806 C3 </code></pre>
python|pandas|split
1
6,505
70,170,844
Pandas JSON Normalize - Choose Correct Record Path
<p>I am trying to figure out how to normalize the nested JSON response sampled below.</p> <p>Right now, <code>json_normalize(res,record_path=['data'])</code> is giving me MOST of the data I need but what I would really like is the detail in the &quot;session_pageviews&quot; list/dict with the attributes of the data list/dic included.</p> <p>I tried <code>json_normalize(res,record_path=['data', ['session_pageviews']], meta = ['data'])</code> but I get an error: <code>ValueError: operands could not be broadcast together with shape (32400,) (180,)</code></p> <p>I also tried <code>json_normalize(res,record_path=['data'], max_level = 1)</code> but that does not unnest session_pageviews</p> <p>Any help would be appreciated!</p> <p><a href="https://i.stack.imgur.com/AyLhX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AyLhX.png" alt="enter image description here" /></a></p>
<p>You can try to apply the following function to your json:</p> <pre><code>def flatten_nested_json_df(df): df = df.reset_index() s = (df.applymap(type) == list).all() list_columns = s[s].index.tolist() s = (df.applymap(type) == dict).all() dict_columns = s[s].index.tolist() while len(list_columns) &gt; 0 or len(dict_columns) &gt; 0: new_columns = [] for col in dict_columns: horiz_exploded = pd.json_normalize(df[col]).add_prefix(f'{col}.') horiz_exploded.index = df.index df = pd.concat([df, horiz_exploded], axis=1).drop(columns=[col]) new_columns.extend(horiz_exploded.columns) # inplace for col in list_columns: #print(f&quot;exploding: {col}&quot;) df = df.drop(columns=[col]).join(df[col].explode().to_frame()) new_columns.append(col) s = (df[new_columns].applymap(type) == list).all() list_columns = s[s].index.tolist() s = (df[new_columns].applymap(type) == dict).all() dict_columns = s[s].index.tolist() return df </code></pre> <p>by doing this:</p> <pre><code>df1= flatten_nested_json_df(df) </code></pre> <p>where</p> <pre><code>df = pd.json_normalize(json) </code></pre> <p>That should give you all the information contained in your json.</p>
python|pandas|json-normalize
2
6,506
70,301,937
How do I filter specific number using `np.where`
<p>Case</p> <p><a href="https://i.stack.imgur.com/vBA4e.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vBA4e.jpg" alt="enter image description here" /></a></p> <p>so I have an array like these, and how do I extract the green number from the array?</p> <p>The output that I wanted :</p> <pre><code>[11, 13, 15, 21, 23, 25, 31, 33, 35] </code></pre> <p>My code to make the array:</p> <pre><code>a = np.arange(start=11, stop=36, step=1) </code></pre> <p>My code for filtering:</p> <pre><code>print(a[np.where((a % 2 == 1))]) </code></pre>
<p>One way is to build the 5x5 grid and then pick every other row and column by position - first the columns, then the rows:</p> <pre><code>array = np.array( [[11, 12, 13, 14, 15], [16, 17, 18, 19, 20], [21, 22, 23, 24, 25], [26, 27, 28, 29, 30], [31, 32, 33, 34, 35]]) filter1 = array[:, [0, 2, 4]] filter2 = filter1[[0, 2, 4], :].flatten() &gt;&gt;[11 13 15 21 23 25 31 33 35] </code></pre> <p>Hope this answers your question.</p>
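<p>If you would rather do the row and column selection in one step, a small sketch with <code>np.ix_</code> (using the same <code>array</code> as above) would be:</p> <pre><code>import numpy as np

# np.ix_ builds an open mesh, so rows 0, 2, 4 and columns 0, 2, 4
# are picked together in a single indexing operation
result = array[np.ix_([0, 2, 4], [0, 2, 4])].flatten()
print(result)  # [11 13 15 21 23 25 31 33 35]
</code></pre>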
python|numpy
0
6,507
70,137,421
Removing the index when appending data and rewriting CSV using pandas
<p>I have a script that runs on a daily basis to collect data. I record this data in a CSV file using the following code:</p> <pre><code>old_df = pd.read_csv('/Users/tdonov/Desktop/Python/Realestate Scraper/master_data_for_realestate.csv') old_df = old_df.append(dataframe_for_cvs, ignore_index=True) old_df.to_csv('/Users/tdonov/Desktop/Python/Realestate Scraper/master_data_for_realestate.csv') </code></pre> <p>I am using <code>append(ignore_index=True)</code>, but after every run of the code I still get additional columns created at the start of my CSV. I delete them manually, but is there a way to stop them from the code itself? I looked the function but I am still not sure if it is possible. My result file gets the following columns added after every run (one at a time, after each run): <a href="https://i.stack.imgur.com/KcIbs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KcIbs.png" alt="enter image description here" /></a></p> <p>This is really annoying to have to delete everytime.</p> <p>Update: Data looks like that: <a href="https://i.stack.imgur.com/a3DAl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a3DAl.png" alt="enter image description here" /></a> However the id is not unique. Every day it can be repeated. In my case it is not unique. This is an id of an online offer. The offer can be available for one day or for 5 months, or couple of days.</p>
<p>Did you try passing <code>index=False</code> when writing the file? By default <code>to_csv</code> also writes the DataFrame index as an unnamed column, which is the extra column that keeps appearing after every run:</p> <pre><code>to_csv(index=False) </code></pre>
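<p>A minimal sketch of the daily update with the index suppressed (the path is shortened here for readability; <code>dataframe_for_cvs</code> is the new data from your scraper):</p> <pre><code>import pandas as pd

path = 'master_data_for_realestate.csv'  # use your full path here
old_df = pd.read_csv(path)
old_df = old_df.append(dataframe_for_cvs, ignore_index=True)
# index=False stops pandas from writing the row index as an extra column
old_df.to_csv(path, index=False)
</code></pre>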
python|pandas
1
6,508
55,682,800
Selecting from one dataframe using values from a second dataframe
<p>I have two dataframes with the same index and columns:</p> <pre><code>In: import pandas as pd import numpy as np import random df1 = pd.DataFrame({'A' : [ random.random(), random.random(), random.random()], 'B' : [ random.random(), random.random(), random.random()], 'C' : [ random.random(), random.random(), random.random()]}) df2 = pd.DataFrame({'A' : [random.randint(0,10), random.randint(0,10), random.randint(0,10)], 'B' : [random.randint(0,10), random.randint(0,10), random.randint(0,10)], 'C' : [random.randint(0,10), random.randint(0,10), random.randint(0,10)]}) df1 Out: A B C 0 0.424566 0.054485 0.830993 1 0.673692 0.754941 0.621544 2 0.890594 0.805776 0.878123 In: df2 Out: A B C 0 9 9 3 1 4 6 6 2 10 2 9 </code></pre> <p>I want to select values from <code>df1</code> depending on the corresponding value in <code>df2</code> and return it as an array.</p> <p>e.g. selecting by the value <code>6</code> in the example above would return <code>[0.754941, 0.621544]</code></p> <p>I have looked at <code>mask</code> but can't see how to apply a mask from one df to the second df.</p>
<p>If both DataFrames have the same index and columns, it is possible to index the <code>2d array</code> created by <code>to_numpy</code> or <code>values</code> with a mask built from <code>df2</code>:</p> <pre><code>#pandas 0.24+ a = df1.to_numpy()[df2 == 6] #older pandas versions #a = df1.values[df2 == 6] print (a) [0.754941 0.621544] </code></pre>
python|pandas|selection|mask
4
6,509
39,892,684
How can I format the index column(s) with xlsxwriter?
<p>I'm using <strong>xlsxwriter</strong> and the <strong>set_column</strong> function that format the columns in my excel outputs. </p> <p>However, formatting seems to be ignored when applied to the index column (or index columns in case of multi index). </p> <p>I've found a workaround, so far is to introduce a fake index with <strong>reset_index</strong> then pass <strong>index=False</strong> to the to_excel function but then the nice merging feature of the multi index will be gone too.</p> <p>Any ideas?</p> <pre><code>import pandas as pd import numpy as np from Config import TEMP_XL_FILE def temp(): ' temp' pdf = pd.DataFrame(np.random.randn(6,4), columns=list('ABCD')) pdf.set_index('A', drop=True, inplace=True) writer = pd.ExcelWriter(TEMP_XL_FILE, engine='xlsxwriter') pdf.to_excel(writer, 'temp') workbook = writer.book worksheet = writer.sheets['temp'] tempformat = workbook.add_format({'num_format': '0%', 'align': 'center'}) worksheet.set_column(-1, 3, None, tempformat) writer.save() if __name__ == '__main__': temp() </code></pre>
<p>The pandas <code>ExcelWriter</code> overwrites the <code>XlsxWriter</code> formats in the index columns. To prevent that, change the pandas <code>header_style</code> to <code>None</code></p> <pre><code>header_style = {"font": {"bold": True}, "borders": {"top": "thin", "right": "thin", "bottom": "thin", "left": "thin"}, "alignment": {"horizontal": "center", "vertical": "top"}} </code></pre> <p>To do that:</p> <pre><code>import pandas.io.formats.excel pandas.io.formats.excel.header_style = None </code></pre> <p>See also</p> <ul> <li><a href="https://stackoverflow.com/questions/41447606/xlsxwriter-not-applying-format-to-header-row-of-dataframe-python-pandas">xlsxwriter not applying format to header row of dataframe - Python Pandas</a></li> <li><a href="https://stackoverflow.com/questions/42234622/pandas-raising-attributeerror-module-pandas-core-has-no-attribute-format">Pandas raising: AttributeError: module &#39;pandas.core&#39; has no attribute &#39;format&#39;</a></li> </ul>
python|excel|pandas|xlsxwriter
2
6,510
39,732,126
Using Pandas groupby methods, find largest values in each group
<p>By using Pandas groupby, I have data on how much activity certain users have on average any given day of the week. Grouped by user and day, I compute max and mean for several users in the last 30 days.</p> <p>Now I want to find, for every user, which day of the week corresponds to their daily max activity, and what is the average magnitude of that activity.</p> <p>What is the method in pandas to perform such a task?</p> <p>The original data looks something like this:</p> <pre><code> userID countActivity weekday 0 3 25 5 1 3 58 6 2 3 778 0 3 3 78208 1 4 3 6672 2 </code></pre> <p>The object that has these groups is created from the following:</p> <pre><code>aggregations = { 'countActivity': { 'maxDaily': 'max', 'meanDaily': 'mean' } } dailyAggs = df.groupby(['userID','weekday']).agg(aggregations) </code></pre> <p>The groupby object looks something like this:</p> <pre><code> countActivity maxDaily meanDaily userID weekday 3 0 84066 18275.6 1 78208 20698.5 2 172579 64930.75 3 89535 25443 4 6152 2809 </code></pre> <p>Pandas groupby method <code>filter</code> seems to be needed here, but I'm stumped how on how to proceed.</p>
<p>I'd first do a <code>groupby</code> on <code>'userID'</code>, and then write an <code>apply</code> function to do the rest. The <code>apply</code> function will take a <code>'userID'</code> group, perform another <code>groupby</code> on <code>'weekday'</code> to do your aggregations, and then only return the row that contains the maximum value for <code>maxDaily</code>, which can be found with <code>argmax</code>.</p> <pre><code>def get_max_daily(grp): aggregations = {'countActivity': {'maxDaily': 'max', 'meanDaily': 'mean'}} grp = grp.groupby('weekday').agg(aggregations).reset_index() return grp.loc[grp[('countActivity', 'maxDaily')].argmax()] result = df.groupby('userID').apply(get_max_daily) </code></pre> <p>I've added a row to your sample data to make sure the daily aggregations were working correctly, since your sample data only contains one entry per weekday:</p> <pre><code> userID countActivity weekday 0 3 25 5 1 3 58 6 2 3 778 0 3 3 78208 1 4 3 6672 2 5 3 78210 1 </code></pre> <p>The resulting output:</p> <pre><code> weekday countActivity meanDaily maxDaily userID 3 1 78209 78210 </code></pre>
pandas
5
6,511
44,116,689
Siamese Model with LSTM network fails to train using tensorflow
<p><strong>Dataset Description</strong></p> <p>The dataset contains a set of question pairs and a label which tells if the questions are same. e.g.</p> <blockquote> <p>"How do I read and find my YouTube comments?" , "How can I see all my Youtube comments?" , "1"</p> </blockquote> <p>The goal of the model is to identify if the given question pair is same or different.</p> <p><strong>Approach</strong></p> <p>I have created a <a href="https://www.quora.com/What-are-Siamese-neural-networks-what-applications-are-they-good-for-and-why" rel="nofollow noreferrer">Siamese network</a> to identify if two questions are same. Following is the model:</p> <pre><code>graph = tf.Graph() with graph.as_default(): embedding_placeholder = tf.placeholder(tf.float32, shape=embedding_matrix.shape, name='embedding_placeholder') with tf.variable_scope('siamese_network') as scope: labels = tf.placeholder(tf.int32, [batch_size, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='question1_keep_prob') with tf.name_scope('question1') as question1_scope: question1_inputs = tf.placeholder(tf.int32, [batch_size, seq_len], name='question1_inputs') question1_embedding = tf.get_variable(name='embedding', initializer=embedding_placeholder, trainable=False) question1_embed = tf.nn.embedding_lookup(question1_embedding, question1_inputs) question1_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) question1_drop = tf.contrib.rnn.DropoutWrapper(question1_lstm, output_keep_prob=keep_prob) question1_multi_lstm = tf.contrib.rnn.MultiRNNCell([question1_drop] * lstm_layers) q1_initial_state = question1_multi_lstm.zero_state(batch_size, tf.float32) question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=q1_initial_state) scope.reuse_variables() with tf.name_scope('question2') as question2_scope: question2_inputs = tf.placeholder(tf.int32, [batch_size, seq_len], name='question2_inputs') question2_embedding = question1_embedding question2_embed = tf.nn.embedding_lookup(question2_embedding, question2_inputs) question2_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) question2_drop = tf.contrib.rnn.DropoutWrapper(question2_lstm, output_keep_prob=keep_prob) question2_multi_lstm = tf.contrib.rnn.MultiRNNCell([question2_drop] * lstm_layers) q2_initial_state = question2_multi_lstm.zero_state(batch_size, tf.float32) question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=q2_initial_state) </code></pre> <p>Calculate the cosine distance using the RNN outputs:</p> <pre><code>with graph.as_default(): diff = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(question1_outputs[:, -1, :], question2_outputs[:, -1, :])), reduction_indices=1)) margin = tf.constant(1.) 
labels = tf.to_float(labels) match_loss = tf.expand_dims(tf.square(diff, 'match_term'), 0) mismatch_loss = tf.expand_dims(tf.maximum(0., tf.subtract(margin, tf.square(diff)), 'mismatch_term'), 0) loss = tf.add(tf.matmul(labels, match_loss), tf.matmul((1 - labels), mismatch_loss), 'loss_add') distance = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(distance) </code></pre> <p>Following is the code to train the model:</p> <pre><code>with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer(), feed_dict={embedding_placeholder: embedding_matrix}) iteration = 1 for e in range(epochs): summary_writer = tf.summary.FileWriter('/Users/mithun/projects/kaggle/quora_question_pairs/logs', sess.graph) summary_writer.add_graph(sess.graph) for ii, (x1, x2, y) in enumerate(get_batches(question1_train, question2_train, label_train, batch_size), 1): feed = {question1_inputs: x1, question2_inputs: x2, labels: y[:, None], keep_prob: 0.9 } loss1 = sess.run([distance], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss1)) if iteration%50==0: val_acc = [] for x1, x2, y in get_batches(question1_val, question2_val, label_val, batch_size): feed = {question1_inputs: x1, question2_inputs: x2, labels: y[:, None], keep_prob: 1 } batch_acc = sess.run([accuracy], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/quora_pairs.ckpt") </code></pre> <p>I have trained the above model with about 10,000 labeled data. But, the accuracy is stagnant at around 0.630 and strangely the validation accuracy is same across all the iterations. </p> <pre><code>lstm_size = 64 lstm_layers = 1 batch_size = 128 learning_rate = 0.001 </code></pre> <p>Is there anything wrong with the way I have created the model? </p>
<p>This is a common problem with imbalanced datasets like the recently released Quora dataset which you are using. Since the Quora dataset is imbalanced (~63% negative and ~37% positive examples) you need proper initialization of weights. Without weight initialization your solution will be stuck in a local minimum and it will train to predict only the negative class. Hence the 63% accuracy, because that is the percentage of 'not similar' questions in your validation data. If you check the results obtained on your validation set you will notice that it predicts all zeros. A truncated normal distribution proposed in He et al., <a href="http://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">http://arxiv.org/abs/1502.01852</a> is a good alternative for initializing the weights.</p>
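<p>As an illustration only (the variable name and stddev below are made up, not taken from your graph; <code>lstm_size</code> is from your hyperparameters), a TF1-style weight created with a truncated normal initializer would look like this:</p> <pre><code>import tensorflow as tf

# He-style truncated normal initialization for a weight matrix;
# the shape and stddev are illustrative values, not tuned ones
weights = tf.get_variable(
    'output_weights',
    shape=[lstm_size, 2],
    initializer=tf.truncated_normal_initializer(stddev=0.1))
</code></pre>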
tensorflow|lstm|recurrent-neural-network
2
6,512
69,482,458
Am I mislabeling my data in my neural network?
<p>I'm working on an implementation of EfficientNet in Tensorflow. My model is overfitting and predicting all three classes as just a single class. My training and validation accuracy is in the 99% after a few epochs and my loss is &lt;0.5. I have 32,000 images between the three classes (12, 8, 12).</p> <p>My hypothesis is that it has to do with the way I input the data and one hot coded the labels. Perhaps it is due to everything being labeled the same accidentally, but I can't figure out where.</p> <pre class="lang-py prettyprint-override"><code> # Load Data train_ds = tf.keras.utils.image_dataset_from_directory( train_dir, labels='inferred', seed=42, image_size=(height, width), batch_size=batch_size ) val_ds = tf.keras.utils.image_dataset_from_directory( val_dir, labels='inferred', seed=42, image_size=(height, width), batch_size=batch_size ) class_names = train_ds.class_names num_classes = len(class_names) print('There are ' + str(num_classes) + ' classes:\n' + str(class_names)) # Resize images train_ds = train_ds.map(lambda image, label: ( tf.image.resize(image, (height, width)), label)) val_ds = val_ds.map(lambda image, label: ( tf.image.resize(image, (height, width)), label)) </code></pre> <p>This provides a sample of the correct images and class labels:</p> <pre class="lang-py prettyprint-override"><code> # # Visualization of samples # plt.figure(figsize=(10, 10)) # for images, labels in train_ds.take(1): # for i in range(9): # ax = plt.subplot(3, 3, i + 1) # plt.imshow(images[i].numpy().astype(&quot;uint8&quot;)) # plt.title(class_names[labels[i]]) # plt.axis(&quot;off&quot;) </code></pre> <p>Could this be causing an issue with labels?</p> <pre class="lang-py prettyprint-override"><code> # Prepare inputs # One-hot / categorical encoding def input_preprocess(image, label): label = tf.one_hot(label, num_classes) return image, label train_ds = train_ds.map(input_preprocess, num_parallel_calls=tf.data.AUTOTUNE) train_ds = train_ds.prefetch(tf.data.AUTOTUNE) val_ds = val_ds.map(input_preprocess) </code></pre> <p>My network:</p> <pre class="lang-py prettyprint-override"><code> def build_model(num_classes): inputs = Input(shape=(height, width, 3)) x = img_augmentation(inputs) model = EfficientNetB0( include_top=False, input_tensor=x, weights=&quot;imagenet&quot;) # Freeze the pretrained weights model.trainable = False # Rebuild top x = layers.GlobalAveragePooling2D(name=&quot;avg_pool&quot;)(model.output) x = layers.BatchNormalization()(x) top_dropout_rate = 0.4 x = layers.Dropout(top_dropout_rate, name=&quot;top_dropout&quot;)(x) outputs = layers.Dense(num_classes, activation=&quot;softmax&quot;, name=&quot;pred&quot;)(x) # Compile model = tf.keras.Model(inputs, outputs, name=&quot;EfficientNet&quot;) optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) model.compile( optimizer=optimizer, loss=&quot;categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;] ) return model with strategy.scope(): model = build_model(num_classes=num_classes) epochs = 40 hist = model.fit(train_ds, epochs=epochs, validation_data=val_ds, workers=6, verbose=1, callbacks=callback) plot_hist(hist) </code></pre>
<p>Well first off you are writing more code than you need to. In train_ds and val_ds you did not specify the parameter label_mode. By default it is set to 'int'. Which means your labels will be integers. This is fine if your compile your model using loss=tf.keras.losses.SparseCategoricalCrossentropy. If you had set</p> <pre><code>label_mode= 'categorical' then you can use loss=tf.keras.losses.CategoricalCrossentropy </code></pre> <p>You did convert you labels to one-hot-encoded and that appears to have been done correctly. But you could have avoided having to do that by setting the label mode to categorical as mentioned. You also wrote code to resize the images. This is not necessary since tf.keras.utils.image_dataset_from_directory resized the images for you. I had trouble getting your model to run probably because I don't have the code for x = img_augmentation(inputs). you have the code</p> <pre><code>model = EfficientNetB0( include_top=False, input_tensor=x, weights=&quot;imagenet&quot;) </code></pre> <p>Since you are using the model API I think this should be</p> <pre><code>model = EfficientNetB0( include_top=False, weights=&quot;imagenet&quot;, pooling='max')(x) </code></pre> <p>NOTE I included pooliing='max' so efficientnet produces a one dimensional tensor output and thus you do not need the layer</p> <pre><code>x = layers.GlobalAveragePooling2D(name=&quot;avg_pool&quot;)(model.output) </code></pre> <p>I also modified your code to produce a test_ds so I could test the accuracy of the model. Of course I used a different dataset but the results were fine. My complete code is shown below</p> <pre><code>train_dir=r'../input/beauty-detection-data-set/train' val_dir=r'../input/beauty-detection-data-set/valid' batch_size=32 height=224 width=224 train_ds = tf.keras.preprocessing.image_dataset_from_directory( train_dir, labels='inferred', validation_split=0.1, subset=&quot;training&quot;, label_mode='categorical', seed=42, image_size=(height, width), batch_size=batch_size ) test_ds = tf.keras.preprocessing.image_dataset_from_directory( train_dir, labels='inferred', validation_split=0.1, subset=&quot;validation&quot;, label_mode='categorical', seed=42, image_size=(height, width), batch_size=batch_size) val_ds = tf.keras.preprocessing.image_dataset_from_directory( val_dir, labels='inferred', seed=42, label_mode='categorical', image_size=(height, width), batch_size=batch_size ) class_names = train_ds.class_names num_classes = len(class_names) print('There are ' + str(num_classes) + ' classes:\n' + str(class_names)) img_shape=(224,224,3) base_model=tf.keras.applications.EfficientNetB3(include_top=False, weights=&quot;imagenet&quot;,input_shape=img_shape, pooling='max') x=base_model.output x=keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001 )(x) x = Dense(256, kernel_regularizer = regularizers.l2(l = 0.016),activity_regularizer=regularizers.l1(0.006), bias_regularizer=regularizers.l1(0.006) ,activation='relu')(x) x=Dropout(rate=.45, seed=123)(x) output=Dense(num_classes, activation='softmax')(x) model=Model(inputs=base_model.input, outputs=output) model.compile(Adamax(lr=.001), loss='categorical_crossentropy', metrics=['accuracy']) epochs =5 hist = model.fit(train_ds, epochs=epochs, validation_data=val_ds, verbose=1) accuracy =model.evaluate(test_ds, verbose=1)[1] print (accuracy) ``` </code></pre>
python|tensorflow|deep-learning|neural-network|multilabel-classification
1
6,513
69,528,805
How to send a Python Dataframe through an E-mail?
<p>I'm having trouble sending my DataFrame through an e-mail. The DataFrame looks fine whenever I export it or print it. Through an e-mail however, it looks messed up.</p> <p>Of course, I've tried the <code>to_html</code> option that Pandas offers. And also the <code>build_table</code> from the <code>pretty_html_table</code> package.</p> <p>Whenever I send my dataFrame through an e-mail, it returns this:</p> <pre class="lang-html prettyprint-override"><code> Dear mister X, Please have a look at these numbers. There is a big change in revenue since 11-10-2021 compared to 10-10-2021: &lt;p&gt;&lt;table border=&quot;0&quot; class=&quot;dataframe&quot;&gt; &lt;thead&gt; &lt;tr style=&quot;text-align: right;&quot;&gt; &lt;th style = &quot;background-color: #FFFFFF;font-family: Century Gothic, sans-serif;font-size: medium;color: #305496;text-align: left;border-bottom: 2px solid #305496;padding: 0px 20px 0px 0px;width: auto&quot;&gt;Product Category&lt;/th&gt; &lt;th style = &quot;background-color: #FFFFFF;font-family: Century Gothic, sans-serif;font-size: medium;color: #305496;text-align: left;border-bottom: 2px solid #305496;padding: 0px 20px 0px 0px;width: auto&quot;&gt;Source / Medium&lt;/th&gt; &lt;th style = &quot;background-color: #FFFFFF;font-family: Century Gothic, sans-serif;font-size: medium;color: #305496;text-align: left;border-bottom: 2px solid #305496;padding: 0px 20px 0px 0px;width: auto&quot;&gt;Avg. Rev. Period 1&lt;/th&gt; &lt;th style = &quot;background-color: #FFFFFF;font-family: Century Gothic, sans-serif;font-size: medium;color: #305496;text-align: left;border-bottom: 2px solid #305496;padding: 0px 20px 0px 0px;width: auto&quot;&gt;Avg. Rev. Period 2&lt;/th&gt; &lt;th style = &quot;background-color: #FFFFFF;font-family: Century Gothic, sans-serif;font-size: medium;color: #305496;text-align: left;border-bottom: 2px solid #305496;padding: 0px 20px 0px 0px;width: auto&quot;&gt;Percentage Change&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td style = &quot;background-color: #D9E1F2;font-family: Century Gothic, sans-serif;font-size: medium;text-align: left;padding: 0px 20px 0px 0px;width: auto&quot;&gt;TEST&lt;/td&gt; &lt;td style = &quot;background-color: #D9E1F2;font-family: Century Gothic, sans-serif;font-size: medium;text-align: left;padding: 0px 20px 0px 0px;width: auto&quot;&gt;TEST&lt;/td&gt; &lt;td style = &quot;background-color: #D9E1F2;font-family: Century Gothic, sans-serif;font-size: medium;text-align: left;padding: 0px 20px 0px 0px;width: auto&quot;&gt;0.01&lt;/td&gt; &lt;td style = &quot;background-color: #D9E1F2;font-family: Century Gothic, sans-serif;font-size: medium;text-align: left;padding: 0px 20px 0px 0px;width: auto&quot;&gt;100&lt;/td&gt; &lt;td style = &quot;background-color: #D9E1F2;font-family: Century Gothic, sans-serif;font-size: medium;text-align: left;padding: 0px 20px 0px 0px;width: auto&quot;&gt;-52&lt;/td&gt; &lt;/tr&gt; ......... </code></pre> <p>I think it probably has to do with the code I use to send the e-mail. I want to insert different variables in it. Here's the code for the e-mail:</p> <pre class="lang-py prettyprint-override"><code> email_df = build_table(df, &quot;blue_light&quot;) subject = f&quot;Subject: blablabla - {today}&quot; message = f&quot;\n\nDear mister X, \n\n Please have a look at these numbers. 
There is a big change in revenue since {period 1} compared to {period 2}: \n\n {email_df}&quot; emailcontent = f'{subject}{message}' context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.sendmail(sender_email, receiver_email, emailcontent) </code></pre> <p>Any help is welcome! :)</p>
<p>You are currently sending the email in plain text format, which makes it impossible to ensure the column spacing is correct unless the recipient happens to view their emails in a plain text reader.</p> <p>To ensure the table is spaced correctly you have to send an HTML email - see the <a href="https://docs.python.org/3/library/email.examples.html" rel="nofollow noreferrer">example here</a> and <a href="https://stackoverflow.com/a/58318206/567595">this answer</a>. Something like the following (untested) should work:</p> <pre><code>from email.message import EmailMessage import smtplib import ssl email = EmailMessage() email['Subject'] = f&quot;Subject: blablabla - {today}&quot; email['From'] = sender_email email['To'] = receiver_email email.set_content(f&quot;&quot;&quot;\ &lt;html&gt;&lt;head&gt;&lt;/head&gt;&lt;body&gt; &lt;p&gt;Dear mister X,&lt;/p&gt; &lt;p&gt;Please have a look at these numbers. There is a big change in revenue since {period 1} compared to {period 2}:&lt;/p&gt; {df.to_html()} # or use build_table(df) for prettier formatting &lt;/body&gt;&lt;/html&gt;&quot;&quot;&quot;, subtype='html') context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender_email, password) server.send_message(email) </code></pre>
python|pandas|dataframe|email
1
6,514
54,144,408
Repeatedly execute same code before/after statements/code blocks
<p>I am filtering some data in a <code>pandas.DataFrame</code> and want to track the rows I loose. So basically, I want to</p> <pre><code>df = pandas.read_csv(...) n1 = df.shape[0] df = ... # some logic that might reduce the number of rows print(f'Lost {n1 - df.shape[0]} rows') </code></pre> <p>Now there are multiple of these filter steps, and the code before/after it is always the same. So I am looking for a way to abstract that away.</p> <p>Of course the first thing that comes into mind are decorators - however, I don't like the idea of creating a bunch of functions with just one LOC.</p> <p>What I came up with are context managers:</p> <pre><code>from contextlib import contextmanager @contextmanager def rows_lost(df): try: n1 = df.shape[0] yield df finally: print(f'Lost {n1 - df.shape[0]} rows') </code></pre> <p>And then:</p> <pre><code>with rows_lost(df) as df: df = ... </code></pre> <p>I am wondering whether there is a better solution to this?</p> <p><strong>Edit</strong>: I just realized that the context manager approach does not work, if a filter step returns a new object (which is the default for pandas Dataframes). It only works when the objects are modified "in place".</p>
<p>You could write a "wrapper-function" that wraps the filter you specify:</p> <pre><code>def filter1(arg): return arg+1 def filter2(arg): return arg*2 def wrap_filter(arg, filter_func): print('calculating with argument', arg) result = filter_func(arg) print('result', result) return result wrap_filter(5, filter1) wrap_filter(5, filter2) </code></pre> <p>The only thing that this improves on using a decorator is that you can choose to call the filter without the wrapper...</p>
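<p>Applied back to the DataFrame case from your question, the same idea might look like this (a sketch; <code>some_column</code> is just a placeholder filter, and each step is assumed to take and return a DataFrame):</p> <pre><code>def report_rows_lost(df, filter_func):
    n_before = df.shape[0]
    df = filter_func(df)
    print(f'Lost {n_before - df.shape[0]} rows')
    return df

# each filter step is an ordinary function from DataFrame to DataFrame
df = report_rows_lost(df, lambda d: d[d['some_column'].notna()])
</code></pre>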
python|python-3.x|pandas
0
6,515
53,987,380
How does torch.empty calculate the values?
<p>Every time I run <code>torch.empty(5, 3)</code> I get one of these two results:</p> <pre><code>&gt;&gt;&gt; torch.empty(5, 3) tensor([[ 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000], [ 0.0000, -0.0000, 0.0000], [ 0.0000, 0.0000, -50716.6250]]) &gt;&gt;&gt; torch.empty(5, 3) tensor([[0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000]]) </code></pre> <p>I tried this multiple times and I still get one of these two results. I tried changing the size; the number <code>-50716.6250</code> appeared again.</p> <p>Are the values here random? Why are those numbers recurring?</p>
<p><a href="https://pytorch.org/docs/stable/torch.html#torch.empty" rel="nofollow noreferrer"><code>torch.empty</code></a> returns "a tensor filled with uninitialized data."</p> <p>If you want to have a tensor filled with zeros, use <a href="https://pytorch.org/docs/stable/torch.html#torch.zeros" rel="nofollow noreferrer"><code>torch.zeros</code></a>.</p>
python|numpy|pytorch|torch
4
6,516
54,059,953
I can't adapt my dataset to VGG-net, getting size mismatch
<p>I’m trying to implement the pre-trained VGG net to my script, in order to recognize faces from my dataset in RGB [256,256], but I’m getting a “size mismatch, m1: [1 x 2622], m2: [4096 x 2]” even if i'm resizing my images it doesn't work, as you can see my code work with resnet and alexnet.</p> <p>I've tryed resizing the images with the function interpolate but the size mismatch persist.</p> <pre><code>def training(model_conv, learning_rate, wd, net): criterion = nn.CrossEntropyLoss(weight= torch.FloatTensor([1,1])) optimizer = torch.optim.Adam(model_conv.fc.parameters(), lr=learning_rate, weight_decay = wd) total_step = len(train_loader) loss_list = [] acc_list = [] print("Inizio il training") for epoch in range(num_epochs): for i, (im, labels) in enumerate(train_loader): images = torch.nn.functional.interpolate(im, 224, mode = 'bilinear') outputs = model_conv(images) loss = criterion(outputs, labels) loss_list.append(loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() if (i + 1) % 100 == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch + 1, num_epochs, i + 1, total_step, loss.item())) torch.save(model_conv, 'TrainedModel.pt') return images, labels def main(): net = "vgg" learning_rate = 10e-6 wd = 10e-4 if net == "vgg": print("Hai selezionato VGG") model_conv = VGG_FACE.vgg_face data = torch.load("VGG_FACE.pth") model_conv.load_state_dict(data) model_conv.fc = nn.Linear(4096, 2) model_conv[-1] = model_conv.fc if __name__ == '__main__': main() </code></pre> <p>For example this is another code where I used correctly my VGG with some random images</p> <pre><code>def test(): N=5 net = VGG_FACE.vgg_face data = torch.load("VGG_FACE.pth") net.load_state_dict(data) net.eval() names = open("names.txt").read().split() with torch.no_grad(): mean = np.array([93.5940, 104.7624, 129.1863]) images = scipy.misc.imread("cooper2.jpg", mode="RGB") images = scipy.misc.imresize(images, [224, 224]) images = images.astype(np.float32) images -= mean[np.newaxis, np.newaxis, :] images = np.transpose(images, (2, 0, 1)) images = images[np.newaxis, ...] 
images = torch.tensor(images, dtype=torch.float32) y = net(images) y = torch.nn.functional.softmax(y, 1) rank = torch.topk(y[0, :], N) for i in range(N): index = rank[1][i].item() score = rank[0][i].item() print("{}) {} ({:.2f})".format(i + 1, names[index], score)) print() numero_classi = 2 net[-1] = torch.nn.Linear(4096, numero_classi) if __name__ == "__main__": test() </code></pre> <p>the error i'm gettin is </p> <pre><code> File "/Users/danieleligato/PycharmProjects/parametral/VGGTEST.py", line 53, in training outputs = model_conv(images) RuntimeError: size mismatch, m1: [4 x 2622], m2: [4096 x 2] at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorMath.cpp:2070 </code></pre> <p><strong>THIS IS THE VGG NET THAT I'M USING</strong></p> <pre><code>class LambdaBase(nn.Sequential): def __init__(self, fn, *args): super(LambdaBase, self).__init__(*args) self.lambda_func = fn def forward_prepare(self, input): output = [] for module in self._modules.values(): output.append(module(input)) return output if output else input class Lambda(LambdaBase): def forward(self, input): return self.lambda_func(self.forward_prepare(input)) class LambdaMap(LambdaBase): def forward(self, input): return map(self.lambda_func,self.forward_prepare(input)) class LambdaReduce(LambdaBase): def forward(self, input): return reduce(self.lambda_func,self.forward_prepare(input)) vgg_face = nn.Sequential( # Sequential, nn.Conv2d(3,64,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(64,64,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.MaxPool2d((2, 2),(2, 2),(0, 0),ceil_mode=True), nn.Conv2d(64,128,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(128,128,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.MaxPool2d((2, 2),(2, 2),(0, 0),ceil_mode=True), nn.Conv2d(128,256,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(256,256,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(256,256,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.MaxPool2d((2, 2),(2, 2),(0, 0),ceil_mode=True), nn.Conv2d(256,512,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(512,512,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(512,512,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.MaxPool2d((2, 2),(2, 2),(0, 0),ceil_mode=True), nn.Conv2d(512,512,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(512,512,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.Conv2d(512,512,(3, 3),(1, 1),(1, 1)), nn.ReLU(), nn.MaxPool2d((2, 2),(2, 2),(0, 0),ceil_mode=True), Lambda(lambda x: x.view(x.size(0),-1)), # View, nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(25088,4096)), # Linear, nn.ReLU(), nn.Dropout(0.5), nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(4096,4096)), # Linear, nn.ReLU(), nn.Dropout(0.5), nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(4096,2622)), # Linear, ) </code></pre>
<p>The error comes from this line:</p> <pre><code>model_conv.fc = nn.Linear(4096, 2) </code></pre> <p>Change to:</p> <pre><code>model_conv.fc = nn.Linear(2622, 2) </code></pre> <p>The mismatch message <code>m1: [4 x 2622], m2: [4096 x 2]</code> says the features arriving at your new layer are 2622-dimensional (the original number of identity classes produced by <code>vgg_face</code>), so the replacement layer has to accept 2622 inputs rather than 4096.</p>
python-3.x|pytorch|vgg-net|transfer-learning
0
6,517
38,285,774
How to save a csv file so iPython shell can open and use it?
<p>I'm a newbie to Python. I'm having trouble opening my csv file in the iPython shell, although I can open my file in Spyder just fine. How can I save a csv, or any other file, properly to be used by both Spyder and iPython?</p> <p>For example, I tried opening up and reading a file</p> <pre><code>DATA_data = open('Data.csv') DATA_reader = (pd.read_csv(Data), ',') print DATA_reader </code></pre> <p>and I would get this error message:</p> <pre><code>IOError: [Errno 2] No such file or directory: 'Data.csv' </code></pre> <p>Also, how can I make sure that the csv file is in the same directory as my python script?</p>
<p>A <code>csv</code> file is plain text, so almost any code can read it. With <code>ipython</code> you can read it with shell command, with Python read, or with numpy or pandas.</p> <p>The first issue knowing where the file is located. That's a file system issue - what's directory. In <code>ipython</code> you can use the <code>%pwd</code> magic to see the current directory, and <code>%cd</code> to change directories. <code>%ls</code> gives the directory listing. With <code>magics</code> you can all the file and directory manipulation that you could with a linux shell like <code>bash</code> (and windows with some terminology adjustment).</p> <p>Once you've located the file you can look at with <code>%cat</code></p> <p>For example:</p> <pre><code>In [26]: %pwd Out[26]: '/home/paul' In [27]: %ls ~/ Desktop/ Downloads/ mypy/ Public/ Videos/ bin/ Documents/ Music/ Pictures/ Templates/ In [28]: %cd mypy /home/paul/mypy In [29]: %ls test* test test2.hdf test.h5 test.ipy test.mat test.npz test1.hdf test.gz test.hdf5 testipy.py test.npy test.txt In [30]: %cat test one a 1 two b 2 three c 3 </code></pre> <p>Plain Python read:</p> <pre><code>In [34]: f=open('test') In [35]: f.read() Out[35]: ' one a 1\n two b 2\n three c 3\n' In [36]: f.close() </code></pre> <p><code>np.genfromtxt</code> is the most powerful <code>numpy</code> csv reader:</p> <pre><code>In [38]: np.genfromtxt('test',dtype=None) Out[38]: array([(b'one', b'a', 1), (b'two', b'b', 2), (b'three', b'c', 3)], dtype=[('f0', 'S5'), ('f1', 'S1'), ('f2', '&lt;i4')]) </code></pre> <p>In this case I got a 1d array with a structured dtype - because of the mix of string and numeric columns. My delimiter in this case is white space.</p> <p>or <code>loadtxt</code>:</p> <pre><code>In [40]: np.loadtxt('test',dtype='str') Out[40]: array([["b'one'", "b'a'", "b'1'"], ["b'two'", "b'b'", "b'2'"], ["b'three'", "b'c'", "b'3'"]], dtype='&lt;U8') </code></pre> <p>I don't have <code>pandas</code> installed on this machine so can't demonstrate that, though I think <code>data = pd.read_csv('data.csv', delimiter=',')</code> should be sufficient (ie. file name and delimiter).</p>
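<p>For completeness, a sketch of the <code>pandas</code> read that could not be demonstrated above (assuming the same whitespace-delimited <code>test</code> file with no header row):</p> <pre><code>import pandas as pd

# sep=r'\s+' splits on runs of whitespace; header=None because the file
# has no header row, so pandas assigns integer column labels
data = pd.read_csv('test', sep=r'\s+', header=None)
print(data)
</code></pre>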
python|python-2.7|csv|pandas|ipython
1
6,518
66,249,826
list of strings into list of integers
<p>How to convert multiple columns where each row contains a list of strings to rows containing lists of integers?</p> <p>From this state</p> <pre><code>index column_1 column_2 column_3 column_4 column_5 column_6 0 ['1','1'] ['7','6'] ['1','3'] 7 2 ['5','1'] </code></pre> <p>To this state</p> <pre><code>index column_1 column_2 column_3 column_4 column_5 column_6 0 [1,1] [7 , 6 ] [ 1 , 3 ] 7 2 [5 , 1] </code></pre> <p>I have tried many solutions the only one that worked was the one below <strong>but</strong> for a single column</p> <pre><code>df['column_1'].map(lambda a: map(int, a)) </code></pre>
<p>Here is a solution that converts that dataframe to a 2D list, performs the necessary conversion to integers, and converts back to dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame({'column_1':[['1','1']], 'column_2':[['7','6']], 'column_3':[['1','3']], 'column_4':[7], 'column_5':[2], 'column_6':[['5','1']]}) cols = df.columns.array lst = df.to_numpy().tolist() for row in lst: for col in row: if type(col) == list: for i, numstring in enumerate(col): col[i] = int(numstring) df = pd.DataFrame(lst, columns= cols) print(df) #output: column_1 column_2 column_3 column_4 column_5 column_6 0 [1, 1] [7, 6] [1, 3] 7 2 [5, 1] </code></pre>
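<p>An alternative sketch that stays inside pandas, converting any cell that is a list of strings while leaving scalar cells untouched:</p> <pre><code>import pandas as pd

# applymap visits every cell; only list cells get their elements cast to int
df = df.applymap(lambda v: [int(x) for x in v] if isinstance(v, list) else v)
print(df)
</code></pre>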
python-3.x|pandas
0
6,519
66,192,403
How to get time values as strings, during read_excel execution?
<p>I have to parse an ODF-format turnstile data file. The file contains employees' entry/exit time values in HH:MM:SS (like 141:59:30).<br /> <a href="https://drive.google.com/file/d/1j0EEraI-JbfXKoaliXi_LV85S3_nIAk5/view?usp=sharing" rel="nofollow noreferrer">link to sample file on GoogleDrive</a></p> <p>My attempts to open the file with df = pd.read_excel(filename, engine=&quot;odf&quot;, ...) crashed with an exception: ParserError: hour must be in 0..23: 141:59:30.</p> <p>I tried to open the file in several ways:</p> <ol> <li><code>df = pd.read_excel(filename, engine=&quot;odf&quot;, skiprows=3)</code> - &quot;skiprows&quot; to cut the useless header rows.</li> <li><code>df = pd.read_excel(filename, engine=&quot;odf&quot;, skiprows=3, dtype=str)</code> - I thought &quot;dtype=str&quot; would represent all cells as strings and prevent the automatic datetime parsing.</li> </ol> <p>But I still could not get rid of the ParserError exception. Can you point me to a way to get values like '141:59:30' as strings during read_excel execution?</p>
<p>You can pass a dictionary to the dtype parameter where you input your column name as the key, and the data type as the value.</p> <p>Could look something like this :</p> <pre><code>df = pd.read_excel(filename, engine=&quot;odf&quot;, skiprows=3, dtype={'time_col':str}) </code></pre> <p><strong>UPDATE</strong></p> <p>You could also try passing a converter function in the read statement.</p> <pre><code>def to_timedelta(x): return pd.to_timedelta(x) df = pd.read_excel(filename, engine=&quot;odf&quot;, skiprows=3, converters={-1:to_timedelta}) </code></pre>
python|pandas|datetime|ods
0
6,520
66,121,576
How to actually save a csv file to google drive from colab?
<p>So, this problem seems very simple but apparently it is not. I need to transform a pandas dataframe to a csv file and save it in google drive.</p> <p>My drive is mounted, I was able to save a zip file and other kinds of files to my drive. However, when I do:</p> <pre><code>df.to_csv(&quot;file_path\data.csv&quot;) </code></pre> <p>it seems to save it where I want, it's on the left panel in my colab, where you can see all your files from all your directories. I can also read this csv file as a dataframe with pandas in the same colab.</p> <p><strong>HOWEVER</strong>, when I actually go on my Google Drive, I can never find it! But I need code to save it to my drive because I want the user to be able to just run all cells and find the csv file in the drive.</p> <p>I have tried everything I could find online and I am running out of ideas! Can anyone help please?</p> <p>I have also tried this, which creates a visible file named data.csv, but it only contains the file path:</p> <pre><code>import csv with open('file_path/data.csv', 'w', newline='') as csvfile: csvfile.write('file_path/data.csv') </code></pre> <p>HELP :'(</p> <p><strong>Edit</strong>:</p> <pre><code>import csv with open('/content/drive/MyDrive/Datatourisme/tests_automatisation/data_tmp.csv') as f: s = f.read() with open('/content/drive/MyDrive/Datatourisme/tests_automatisation/data.csv', 'w', newline='') as csvfile: csvfile.write(s) </code></pre> <p>seems to do the trick.</p> <ol> <li>First export as csv with pandas (named this one data_tmp.csv),</li> <li>then read it and put that in a variable,</li> <li>then write the result of this &quot;reading&quot; into another file that I named data.csv,</li> </ol> <p>This data.csv file can be found in my drive :)</p> <p><strong>HOWEVER</strong>, when the csv file I try to open is too big (mine has 100,000 rows), it does nothing. Has anyone got any idea?</p>
<ol> <li><p>First of all, mount your Google Drive in Colab:</p> <p><code>from google.colab import drive</code></p> <p><code>drive.mount('/content/drive')</code></p> </li> <li><p>Allow Google Drive permission</p> </li> <li><p>Save your data frame as CSV using this function:</p> <p><code>import pandas as pd</code></p> <p><code>filename = 'filename.csv'</code></p> <p><code>df.to_csv('/content/drive/' + filename)</code></p> </li> </ol> <p>In some cases, directory <code>'/content/drive/'</code> may not work, so try <code>'/content/drive/MyDrive/'</code>.</p> <p>Hope it helps!</p>
pandas|csv|save|google-colaboratory|drive
1
6,521
65,961,796
Loop and Accumulate Sum from Pandas Column Made of Lists
<p>Currently, my Pandas data frame looks like the following</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Row_X</th> </tr> </thead> <tbody> <tr> <td>[&quot;Medium, &quot;High&quot;, &quot;Low&quot;]</td> </tr> <tr> <td>[&quot;Medium&quot;]</td> </tr> </tbody> </table> </div> <p>My intention is to iterate through the list in each row such that:</p> <pre><code>summation = 0 for value in df[&quot;Row_X&quot;]: if &quot;High&quot; in value: summation = summation + 10 elif &quot;Medium&quot; in value: summation = summation + 5 else: summation= summation + 0 </code></pre> <p>Finally, I wish to apply this to each and create a new column that looks like the following:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Row_Y</th> </tr> </thead> <tbody> <tr> <td>15</td> </tr> <tr> <td>10</td> </tr> </tbody> </table> </div> <p>My assumption is that either np.select() or apply() can play into this but thus far have encountered errors with implementing either.</p>
<p>We can do:</p> <pre><code>mapper = {'Medium' : 5, 'High' : 10} </code></pre> <hr /> <pre><code>df['Row_Y'] = [sum([mapper[word] for word in l if word in mapper]) for l in df['Row_X']] </code></pre> <p>If <strong>pandas version &gt; 0.25.0</strong> We can use</p> <pre><code>df['Row_Y'] = df['Row_X'].explode().map(mapper).sum(level=0) </code></pre> <hr /> <pre><code>print(df) Row_X Row_Y 0 [Medium, High, Low] 15 1 [Medium] 5 </code></pre>
python|pandas
4
6,522
58,237,439
How do I apply CountVectorizer to each row in a dataframe?
<p>I have a dataframe, say df, which has 3 columns. Columns A and B are strings. Column C is a numeric variable. <a href="https://i.stack.imgur.com/kRXs9.png" rel="nofollow noreferrer">Dataframe</a></p> <p>I want to convert this to a feature matrix by passing it to a CountVectorizer.</p> <p>I define my countVectorizer as:</p> <pre><code>cv = CountVectorizer(input='content', encoding='iso-8859-1', decode_error='ignore', analyzer='word', ngram_range=(1), tokenizer=my_tokenizer, stop_words='english', binary=True) </code></pre> <p>Next I pass the entire dataframe to cv.fit_transform(df), which doesn't work. I get this error: cannot unpack non-iterable int object</p> <p>Next I convert each row of the dataframe to </p> <pre><code>sample = pdt_items["A"] + "," + pdt_items["C"].astype(str) + "," + pdt_items["B"] </code></pre> <p>Then I apply</p> <pre><code>cv_m = sample.apply(lambda row: cv.fit_transform(row)) </code></pre> <p>I still get an error: ValueError: Iterable over raw text documents expected, string object received.</p> <p>Please let me know where I am going wrong, or if I need to take some other approach? </p>
<p>Try this:</p> <pre><code>import pandas as pd from sklearn.feature_extraction.text import CountVectorizer A = ['very good day', 'a random thought', 'maybe like this'] B = ['so fast and slow', 'the meaning of this', 'here you go'] C = [1, 2, 3] pdt_items = pd.DataFrame({'A':A,'B':B,'C':C}) cv = CountVectorizer() # use pd.DataFrame here to avoid your error and add your column name sample = pd.DataFrame(pdt_items['A']+','+pdt_items['B']+','+pdt_items['C'].astype('str'), columns=['Output']) vectorized = cv.fit_transform(sample['Output']) </code></pre>
python|pandas|dataframe|scikit-learn|countvectorizer
1
6,523
69,077,375
How to identify dummy data in pandas and delete?
<p>Is there a way to identify the dummy data in a dataframe and delete them? In my data below, there are random characters in each column that I need to delete.</p> <pre><code>import pandas as pd import numpy as np data = {'Name' : ['Tom', 'AABBCC', 'Joseph', 'Krish', 'XXXX', 'John', 'U'], 'Address1': ['High Street', 'uwdfjfuf', '00000', 'Green Lane', 'Kingsway', 'Church Street', 'iwefwfn'], 'Address2': ['Park Avenue', 'The Crescent', 'ABCXYZ', 'Highfield Road', 'Stanley Road', 'New Street', '1ca2s597']} contact_details = pd.DataFrame(data) #Code to identify and delete dummy data print(contact_details) </code></pre> <p>Output of the above code:</p> <pre><code> Name Address1 Address2 0 Tom High Street Park Avenue 1 AABBCC uwdfjfuf The Crescent 2 Joseph 00000 ABCXYZ 3 Krish Green Lane Highfield Road 4 XXXX Kingsway Stanley Road 5 John Church Street New Street 6 U iwefwfn 1ca2s597 </code></pre>
<p>Have you investigated your data? Is the &quot;good data&quot; always a combination of lowercase and uppercase characters? If so, you could write a function to flag those dummy values, for example:</p> <pre><code>if text.lower() == text or text.upper() == text: # text is dummy </code></pre>
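<p>Building on that idea, here is a minimal, self-contained sketch that applies the heuristic to the example frame from the question. It assumes every legitimate value mixes upper- and lower-case characters, which is only a heuristic, so verify it against your real data before dropping rows.</p> <pre><code>import pandas as pd

data = {'Name': ['Tom', 'AABBCC', 'Joseph', 'Krish', 'XXXX', 'John', 'U'],
        'Address1': ['High Street', 'uwdfjfuf', '00000', 'Green Lane',
                     'Kingsway', 'Church Street', 'iwefwfn'],
        'Address2': ['Park Avenue', 'The Crescent', 'ABCXYZ', 'Highfield Road',
                     'Stanley Road', 'New Street', '1ca2s597']}
contact_details = pd.DataFrame(data)

def looks_dummy(text):
    # treat all-lowercase or all-uppercase strings as dummy entries
    return text.lower() == text or text.upper() == text

# drop any row where at least one column looks like dummy data
mask = contact_details.applymap(looks_dummy).any(axis=1)
cleaned = contact_details[~mask]
print(cleaned)
</code></pre> <p>On the sample data this keeps only the Tom, Krish and John rows.</p>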
python|pandas
0
6,524
44,566,744
Pandas DataFrame: convert WKT into GeoJSON in a new column using Lambda function
<p>I have some data in this format:</p> <pre><code>Dep Dest geom ---- ---- ----- EDDF KIAD LINESTRING(3.961389 43.583333, 3.968056 43.580.... </code></pre> <p>Which contains flight trajectories. The geom column contains the coordinates in WKT format. It is possible to convert them via the library <a href="https://pypi.python.org/pypi/geomet/0.1.0" rel="nofollow noreferrer">geomet</a> to GeoJSON format, which I want to do in a new column.</p> <p>In order to do this efficiently with Pandas, I am trying to do: </p> <pre><code>from geomet import wkt import json df = .... #load data to df df['geojson'] = df['geom'].apply(lambda x: json.dumps(wkt.loads(x['geom'] ))) </code></pre> <p>Which does not work. Any way to make it happen?</p>
<p>Try changing the following line: </p> <pre><code>df['geojson'] = df['geom'].apply(lambda x: json.dumps(wkt.loads(x['geom'] ))) </code></pre> <p>into this one:</p> <pre><code>df['geojson'] = df['geom'].apply(lambda x: json.dumps(wkt.loads(x))) </code></pre> <p>This produce the desired results:</p> <pre><code>from geomet import wkt import json #Generate dataframe df = pd.DataFrame({"Dep":["EDDf"], "Dest": ["KIAD"], "geom": ["LINESTRING(3.961389 43.583333, 3.968056 43.580)"]}) #Apply function to create new column df["geojson"] = df["geom"].apply(lambda x: json.dumps(wkt.loads(x))) </code></pre> <p>This creates:</p> <pre><code> Dep Dest geom geojson 0 EDDf KIAD LINESTRING(3.961389 43.583333, 3.968056 43.580) {"type": "LineString", "coordinates": [[3.9613... </code></pre>
python|pandas|lambda|geojson|wkt
3
6,525
44,492,753
How to use pandas in google cloud data flow?
<p>Are there any methods to use pandas, numpy for doing transformations in google cloud data flow?</p> <p><a href="https://cloud.google.com/blog/big-data/2016/03/google-announces-cloud-dataflow-with-python-support" rel="nofollow noreferrer">https://cloud.google.com/blog/big-data/2016/03/google-announces-cloud-dataflow-with-python-support</a> In the above link it says having support for numpy, scipy and pandas, But there are no examples available</p>
<p>Dataflow or Beam do not currently have transforms that use Numpy or Pandas. Nonetheless, you should be able to use them without much trouble.</p> <p>If you give more info about your use case, we can help you figure it out.</p>
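<p>For illustration only, here is a minimal sketch of calling pandas from inside a Beam Python pipeline (run locally with the default DirectRunner). The element format and the aggregation are made up for the example; the point is simply that ordinary pandas/numpy code can run inside a transform.</p> <pre><code>import apache_beam as beam
import pandas as pd

def summarize(rows):
    # rows is the whole collection gathered into one list of (key, value) pairs
    df = pd.DataFrame(rows, columns=['key', 'value'])
    return [df.groupby('key')['value'].mean().to_dict()]

with beam.Pipeline() as p:
    records = p | beam.Create([('a', 1.0), ('a', 3.0), ('b', 2.0)])
    summary = records | beam.combiners.ToList() | beam.FlatMap(summarize)
    summary | beam.Map(print)
</code></pre> <p>Gathering everything into a single list obviously defeats Beam's parallelism, so in a real pipeline you would keep the pandas work inside per-element or per-window transforms instead.</p>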
pandas|google-cloud-dataflow|apache-beam
0
6,526
44,788,581
Find half of each group with Pandas GroupBy
<p>I need to select half of a dataframe using the <code>groupby</code>, where the size of each group is unknown and may vary across groups. For example:</p> <pre><code> index summary participant_id 0 130599 17.0 13 1 130601 18.0 13 2 130603 16.0 13 3 130605 15.0 13 4 130607 15.0 13 5 130609 16.0 13 6 130611 17.0 13 7 130613 15.0 13 8 130615 17.0 13 9 130617 17.0 13 10 86789 12.0 14 11 86791 8.0 14 12 86793 21.0 14 13 86795 19.0 14 14 86797 20.0 14 15 86799 9.0 14 16 86801 10.0 14 20 107370 1.0 15 21 107372 2.0 15 22 107374 2.0 15 23 107376 4.0 15 24 107378 4.0 15 25 107380 7.0 15 26 107382 6.0 15 27 107597 NaN 15 28 107384 14.0 15 </code></pre> <p>The size of groups from <code>groupyby('participant_id')</code> are 10, 7, 9 for <code>participant_id</code> 13, 14, 15 respectively. What I need is to take only the FIRST half (or floor(N/2)) of each group.</p> <p>From my (very limited) experience with Pandas <code>groupby</code>, it should be something like:</p> <pre><code>df.groupby('participant_id')[['summary','participant_id']].apply(lambda x: x[:k_i]) </code></pre> <p>where <code>k_i</code> is the half of the size of each group. Is there a simple solution to find the <code>k_i</code>?</p>
<p>IIUC, you can use index slicing with size //2 inside of lambda:</p> <pre><code>df.groupby('participant_id').apply(lambda x: x.iloc[:x.participant_id.size//2]) </code></pre> <p>Output:</p> <pre><code> index summary participant_id participant_id 13 0 130599 17.0 13 1 130601 18.0 13 2 130603 16.0 13 3 130605 15.0 13 4 130607 15.0 13 14 10 86789 12.0 14 11 86791 8.0 14 12 86793 21.0 14 15 20 107370 1.0 15 21 107372 2.0 15 22 107374 2.0 15 23 107376 4.0 15 </code></pre>
python|pandas|pandas-groupby|split-apply-combine
8
6,527
61,134,674
Is there any problem in performance if I use 'categorical_crossentropy' as loss function just to classify to objects?
<p>I'm training a CNN to classify dogs and cats and I'm using 'categorical_crossentropy' as loss function because at beginning I had three classes, but at the end I decided to use just two, and I didn't have the opportunity to change the loss function. My prolem here is that I don't have the computer where I was workinng to prove with 'binary_crossentropy'and I need to solve this quetion. So I don't know if it would have the same performance. </p> <p>Here the part where I compile</p> <pre><code>model.compile(optimizer = keras.optimizers.Adam(lr=lr), loss='categorical_crossentropy', metrics=['accuracy']) </code></pre>
<p>The answer is no, it is not a problem.</p> <p>You can use <code>binary_crossentropy + Dense(1,activation='sigmoid')</code> or <code>categorical_crossentropy + Dense(2,activation='softmax')</code>.</p> <p>The performance of your model should not be affected at all.</p>
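<p>As a rough sketch of what those two equivalent setups look like in Keras (everything apart from the output layer and the loss is a placeholder here):</p> <pre><code>from tensorflow import keras

# Option 1: two-unit softmax output + categorical_crossentropy (one-hot labels)
model_cat = keras.Sequential([
    keras.layers.Flatten(input_shape=(64, 64, 3)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(2, activation='softmax'),
])
model_cat.compile(optimizer=keras.optimizers.Adam(),
                  loss='categorical_crossentropy', metrics=['accuracy'])

# Option 2: single sigmoid output + binary_crossentropy (0/1 labels)
model_bin = keras.Sequential([
    keras.layers.Flatten(input_shape=(64, 64, 3)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model_bin.compile(optimizer=keras.optimizers.Adam(),
                  loss='binary_crossentropy', metrics=['accuracy'])
</code></pre> <p>The only thing to keep straight is the label encoding: one-hot vectors for the first option, plain 0/1 integers for the second.</p>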
tensorflow
0
6,528
61,088,788
How to speed up process when apply a function in Python
<p>I have a function peak_value which takes two iuputs area and data and returns a new column in data with potential peaks as output. I actually want to apply this peak value function on list of dataframes e.g data = [df1, df2, df3...dfn2] each dataframe has respective value of area e.g area = [a1, a2, a3.....an]. I have applied argrelextrema function to speed up the processing but not succeed so far. Is there any way to make it fast?</p> <pre><code>def peak_value(data,area): lag = np.round(5 + np.log10(area)) data_tmp = data.loc[data['loc_max']==1] data_sorted = data_tmp.sort_values(by='value',ascending=False) data_sorted['idx'] = data_sorted.index data_sorted = data_sorted.reset_index(drop = True) flag = 0 i = 0 updated = len(data_sorted) while i &lt; updated and flag == 0: lag_pre = np.arange(data_sorted['date'][i]-lag,data_sorted['date'][i]) lag_post = np.arange(data_sorted['date'][i]+1,data_sorted['date'][i]+lag+1) lag_interval = np.concatenate((lag_pre,lag_post)) ind_del = data_sorted.iloc[np.isin(data_sorted['date'],lag_interval)].index data_sorted = data_sorted.drop(data_sorted.index[ind_del]) data_sorted = data_sorted.reset_index(drop=True) updated = len(data_sorted) if i &lt; updated: flag = 0 else: flag = 1 i = i+1 #adds a column that says which are the potential peaks data['Potential_peaks'] = np.zeros(len(data)) data['Potential_peaks'].loc[data_sorted['idx']] = 1 return data def max_new(data): loc_opt_ind = argrelextrema(df, np.greater) Potential_peaks = np.zeros(len(data)) Potential_peaks[loc_opt_ind] = 1 data['Potential_peaks']= Potential_peaks return data new_max= [] for index, df in enumerate(data): max_values = max_new(df).Potential_peaks new_max.append(max_values) </code></pre>
<p>If it's possible to run your solution in parallel, then I think <a href="https://joblib.readthedocs.io/en/latest/" rel="nofollow noreferrer">Joblib</a> is a viable solution. </p> <p>I tried it myself and I like it a lot. The amount of modifications needed for this to work is really low.</p> <p>Here's an example of how it works:</p> <pre class="lang-py prettyprint-override"><code>from joblib import Parallel, delayed numbers = list(range(10)) def square(x): return x ** 2 result = Parallel(n_jobs=3)(delayed(square)(number) for number in numbers) print(result) # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] </code></pre> <p>If this solution doesn't work for you, please share some more details about your problems.</p>
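<p>Applied to the loop from the question, the change could look roughly like this. It relies on <code>max_new</code> and the <code>data</code> list defined in the question, so treat it as a sketch rather than a drop-in replacement:</p> <pre><code>from joblib import Parallel, delayed

def peaks_for(df):
    # same work as one iteration of the original loop
    return max_new(df).Potential_peaks

# data is the list of dataframes from the question
new_max = Parallel(n_jobs=4)(delayed(peaks_for)(df) for df in data)
</code></pre>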
python|python-3.x|pandas|python-2.7|scipy
-1
6,529
61,180,155
Add additional column in merged csv file
<p>My code merges csv files and removes duplicates with pandas. Is it possible to add an additional header with values to the single merged file?</p> <p>The additional header should be called <code>Host Alias</code> and should correspond to <code>Host Name</code></p> <p>E.g. <code>Host Name</code> is <code>dpc01n1</code> and the corresponding <code>Host Alias</code> should be <code>dev_dom1</code> <code>Host Name</code> is <code>dpc02n1</code> and the corresponding <code>Host Alias</code> should be <code>dev_dom2</code> etc.</p> <p>Here is my code</p> <pre><code>from glob import glob import pandas as pd class bcolors: HEADER = '\033[95m' OKBLUE = '\033[94m' OKGREEN = '\033[92m' WARNING = '\033[93m' FAIL = '\033[91m' ENDC = '\033[0m' BOLD = '\033[1m' UNDERLINE = '\033[4m' input_path = r'C:\Users\urale\Desktop\logs' output_path = r'C:\Users\urale\Desktop\logs' + '\\' output_name = 'output.csv' stock_files = sorted(glob(input_path + '\pc_dblatmonstat_*_*.log')) print(bcolors.OKBLUE + 'Getting .log files from', input_path) final_headers = [ 'Start Time', 'epoch', 'Host Name', 'Db Alias', 'Database', 'Db Host', 'Db Host IP', 'IP Port', 'Latency (us)' ] #read in files via list comprehension content = [pd.read_csv(f,usecols = final_headers, sep='[;]',engine='python') for f in stock_files] print(bcolors.OKBLUE + 'Reading files') #combine files into one dataframe combo = pd.concat(content,ignore_index = True) print(bcolors.OKBLUE + 'Combining files') #drop duplicates combo = combo.drop_duplicates() #combo = combo.drop_duplicates(final_headers, keep=False) print(bcolors.OKBLUE + 'Dropping duplicates') #write to csv: combo.to_csv(output_path + output_name, index = False) print(bcolors.OKGREEN + 'Merged file output to', output_path, 'as', output_name) </code></pre>
<p><strong>Something like this should work:</strong></p> <pre><code>import pandas as pd combo = pd.DataFrame({ 'Start Time' : [1,2,3], 'epoch' : [1,2,3], 'Host Name': ['dpc01n1','dpc02n1','dpc00103n1'], 'Db Alias' : [1,2,3], 'Database' : [1,2,3], 'Db Host' : [1,2,3], 'Db Host IP' : [1,2,3], 'IP Port' : [1,2,3], 'Latency (us)' : [1,2,3], }) h_num = combo['Host Name'].str.lstrip('dpc0').str[:-2] combo['Host Alias'] = 'dev_dom' + h_num print(combo) </code></pre> <p>It assumes all <code>'Host Name'</code>s don't start with anything other than <code>'dpc'</code> and the two trailing characters like <code>'n1'</code> are not needed. <a href="http://pythontutor.com/visualize.html#code=import%20pandas%20as%20pd%0A%0Acombo%20%3D%20pd.DataFrame%28%7B%0A%20%20%20%20%20%20%20%20&#39;Start%20Time&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;epoch&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Host%20Name&#39;%3A%20%5B&#39;dpc01n1&#39;,&#39;dpc02n1&#39;,&#39;dpc00103n1&#39;%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Db%20Alias&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Database&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Db%20Host&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Db%20Host%20IP&#39;%20%3A%20%5B1,2,3%5D,%0A%20%20%20%20%20%20%20%20&#39;IP%20Port&#39;%20%3A%20%5B1,2,3%5D,%0A%20%20%20%20%20%20%20%20&#39;Latency%20%28us%29&#39;%20%3A%20%5B1,2,3%5D,%0A%7D%29%0A%0Ah_num%20%3D%20combo%5B&#39;Host%20Name&#39;%5D.str.lstrip%28&#39;dpc0&#39;%29.str%5B%3A-2%5D%0A%0Acombo%5B&#39;Host%20Alias&#39;%5D%20%3D%20&#39;dev_dom&#39;%20%2B%20h_num%0A%0Aprint%28combo%29&amp;cumulative=true&amp;curInstr=11&amp;heapPrimitives=false&amp;mode=display&amp;origin=opt-frontend.js&amp;py=py3anaconda&amp;rawInputLstJSON=%5B%5D&amp;textReferences=false" rel="nofollow noreferrer">Example in python tutor</a></p> <p><strong>Follow up question asked in comments:</strong></p> <blockquote> <p>It assumes that my merged csv file already has Host Alias but it doesn't resulting in an error: Exception has occurred: ValueError Usecols do not match columns, columns expected but not found: ['Host Alias'] File "D:\OneDrive\python\merger.py", line 42, in content = [pd.read_csv(f,usecols = combo_headers, sep='[;]',engine='python') Other than dpc, I also have tpc. How can I add that too? – Trunks</p> </blockquote> <p><code>str.lstrip</code> will strip all characters provided in the argument regardless of order. 
Just add a <code>'t'</code>:</p> <pre><code>h_num = combo['Host Name'].str.lstrip('tdpc0').str[:-2] </code></pre> <p><a href="http://pythontutor.com/visualize.html#code=import%20pandas%20as%20pd%0A%0Acombo%20%3D%20pd.DataFrame%28%7B%0A%20%20%20%20%20%20%20%20&#39;Start%20Time&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;epoch&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Host%20Name&#39;%3A%20%5B&#39;dpc01n1&#39;,&#39;dpc02n1&#39;,&#39;tpc00103n1&#39;%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Db%20Alias&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Database&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Db%20Host&#39;%20%3A%20%5B1,2,3%5D,%20%0A%20%20%20%20%20%20%20%20&#39;Db%20Host%20IP&#39;%20%3A%20%5B1,2,3%5D,%0A%20%20%20%20%20%20%20%20&#39;IP%20Port&#39;%20%3A%20%5B1,2,3%5D,%0A%20%20%20%20%20%20%20%20&#39;Latency%20%28us%29&#39;%20%3A%20%5B1,2,3%5D,%0A%7D%29%0A%0Ah_num%20%3D%20combo%5B&#39;Host%20Name&#39;%5D.str.lstrip%28&#39;tdpc0&#39;%29.str%5B%3A-2%5D%0A%0Acombo%5B&#39;Host%20Alias&#39;%5D%20%3D%20&#39;dev_dom&#39;%20%2B%20h_num%0A%0Aprint%28combo%29&amp;cumulative=true&amp;curInstr=11&amp;heapPrimitives=false&amp;mode=display&amp;origin=opt-frontend.js&amp;py=py3anaconda&amp;rawInputLstJSON=%5B%5D&amp;textReferences=false" rel="nofollow noreferrer">python tutor example with t added</a></p> <p><a href="https://www.learnbyexample.org/python-string-strip-method/" rel="nofollow noreferrer">More reading on str.strip</a></p> <p>As for:</p> <blockquote> <p>It assumes that my merged csv file already has Host Alias </p> </blockquote> <p>I'm not sure what you mean by this. When you do </p> <pre><code>combo['Host Alias'] = 'dev_dom' + h_num </code></pre> <p>The <code>'Host Alias'</code> column will be created in the <code>pandas.DataFrame</code> should it not already exist. If it does exist then the column will be replaced with the new data returned by the operation. You can then use <code>pandas.DataFrame.to_csv</code> to save this DataFrame to a .csv file.</p>
python|pandas|csv
1
6,530
61,078,252
Comparing two data frames with a given tolerance range
<p>I have two dataFrame with same number of columns and same rows size. I'm comparing the the first row in df1 with the first row in df2, and the 2nd row in df1 with the 2nd row in df2 and so on, to see how many feature differences are there. This code is working ok but it cosiders the exact match.</p> <pre><code>Df1: var1 var2 var 3 1 30 65 100 2 40 32 200 3 25 64 500 Df2: var1 var2 var 3 1 30 65 100 2 80 77 50 3 22 60 499 </code></pre> <pre><code>In: differences = np.zeros(len(df1)) for i in df1: differences += np.where(df1[i]!=df2[i],1,0) print(differences) </code></pre> <p>the output is an array that returns the number of differences between each row :</p> <pre><code>In: print(differences) [0. 3. 3.] </code></pre> <p>All good but, i want to take into account the tolerance range when we comparing the value. So, the values not have to be exactly the same, i would add a tolerance range of 5. So, if the value in df1 is 25 and value in df2 is 22, so it should be the same. the desired output is:</p> <pre><code>In: print(differences) [0. 3. 0.] </code></pre> <p>because if we look at third row in df1 and df2, the values fall within a tolerance range if 5. Any idea to implement this? </p>
<p><strong>Try using <code>np.isclose()</code> :</strong></p> <pre><code>differences = np.zeros(len(df1)) for i in df1: differences += np.where(~np.isclose(df1[i],df2[i],atol = 5),1,0) print(differences) </code></pre> <p><strong>Output:</strong></p> <pre><code>[0. 3. 0.] </code></pre>
python|arrays|pandas|comparison
1
6,531
71,526,796
Cleaning data in Panda
<p><strong>Background</strong> I load data into Panda from a csv/xlsx file created by a text-to-data app. While saving time, the auto-read is only so accurate. Below I have simplified a load to illustrate a specific problem I struggle to sort:</p> <pre><code>import pandas as pd from tabulate import tabulate df_is = {&quot;Var&quot;:[&quot;Sales&quot;,&quot;Gogs&quot;,&quot;Op prof&quot;,&quot;Depreciation&quot;,&quot;Net fin&quot;,&quot;PBT&quot;,&quot;Tax&quot;,&quot;PAT&quot;], &quot;2021&quot;:[100,-50,50,-10,-5,35,&quot;&quot;,&quot;&quot;], &quot;2022&quot;:[125,-55,70,-15,-10,45,-10,25], &quot;&quot;:[&quot;&quot;,&quot;&quot;,&quot;&quot;,&quot;&quot;,&quot;&quot;,&quot;&quot;,-15,30]} df_want = {&quot;Var&quot;:[&quot;Sales&quot;,&quot;Gogs&quot;,&quot;Op prof&quot;,&quot;Depreciation&quot;,&quot;Net fin&quot;,&quot;PBT&quot;,&quot;Tax&quot;,&quot;PAT&quot;], &quot;2021&quot;:[100,-50,50,-10,-5,35,-10,25], &quot;2022&quot;:[125,-55,70,-15,-10,45,-15,30]} print(tabulate(df_is)) print() print(tabulate(df_want)) </code></pre> <p><strong>Problem</strong> As can be seen by running the code, the data in the first table has not been read properly by the app, resulting in the last two datapoints of the second and third column appearing in third and last column, respectively.</p> <p>Second table shows how I want it to appear. The real problem is more complex and general, so local solutions of over-writing values is not feasible. A solution, like in Excel, where I would delete the empty cells in the second column and simultaneously move all other data in the rows to the left/right (depending on task), would be good.</p> <p><strong>Tried</strong> Being a novice, I have tried to search for solutions, but none of my search criteria seem to lead to a relevant solution.</p> <p>I have also used df.iloc() to create a variable of the four data-cells that are out of line, then tried to append them to column 1 and 2. Than only added copies of the last two rows.</p> <p>Greatful for advise!</p> <p><strong>versions</strong> conda 4.11.0 Python 3.9.7</p> <p>Pandas 1.3.4</p>
<p>Please try this:</p> <pre><code>import pandas as pd import numpy as np f_is = {&quot;Var&quot;:[&quot;Sales&quot;,&quot;Gogs&quot;,&quot;Op prof&quot;,&quot;Depreciation&quot;,&quot;Net fin&quot;,&quot;PBT&quot;,&quot;Tax&quot;,&quot;PAT&quot;], &quot;2021&quot;:[100,-50,50,-10,-5,35,&quot;&quot;,&quot;&quot;], &quot;2022&quot;:[125,-55,70,-15,-10,45,-10,25], &quot;&quot;:[&quot;&quot;,&quot;&quot;,&quot;&quot;,&quot;&quot;,&quot;&quot;,&quot;&quot;,-15,30]} input_df = pd.DataFrame(f_is) output_df = input_df.T.replace('', np.nan).apply(lambda x: pd.Series(x.dropna().to_numpy())).T output_df.columns = ['Var','2021','2022'] output_df </code></pre>
pandas|database|data-cleaning
0
6,532
71,480,457
Pandas Column Split but ignore splitting on specific pattern
<p>I have a Pandas Series containing Several strings Patterns as below:</p> <pre><code>stringsToSplit = ['6 Wrap', '1 Salad , 2 Pepsi , 2 Chicken Wrap', '1 Kebab Plate [1 Bread ]', '1 Beyti Kebab , 1 Chicken Plate [1 Bread ], 1 Kebab Plate [1 White Rice ], 1 Tikka Plate [1 Bread ]', '1 Kebab Plate [1 Bread , 1 Rocca Leaves ], 1 Mountain Dew ' ] s = pd.Series(stringsToSplit) s 0 6 Wrap 1 1 Salad , 2 Pepsi , 2 Chicken Wrap 2 1 Kebab Plate [1 Bread ] 3 1 Beyti Kebab , 1 Chicken Plate [1 Bread ],... 4 1 Kebab Plate [1 Bread , 1 Rocca Leaves ], 1... dtype: object </code></pre> <p>I would like to split and explode it such that the result would be as follows:</p> <pre><code>0 6 Wrap 1 1 Salad 1 2 Pepsi 1 2 Chicken Wrap 2 1 Kebab Plate [1 Bread ] 3 1 Beyti Keba 3 1 Chicken Plate [1 Bread ] 3 1 Kebab Plate [1 White Rice ] 3 1 Tikka Plate [1 Bread ] 4 1 Kebab Plate [1 Bread , 1 Rocca Leaves ] 4 1 Mountain Dew </code></pre> <p>In order to do the <code>explode</code> I need to first <code>split</code>. However, if I use <code>split(',')</code> that also splits the items between <code>[]</code> which I do not want. I have tried using split using regex but was not able to find the correct pattern.</p> <p>I would appreciate the support.</p>
<p>You can use a regex with a negative lookahead:</p> <pre><code>s.str.split(r'\s*,(?![^\[\]]*\])').explode() </code></pre> <p>output:</p> <pre><code>0 6 Wrap 1 1 Salad 1 2 Pepsi 1 2 Chicken Wrap 2 1 Kebab Plate [1 Bread ] 3 1 Beyti Kebab 3 1 Chicken Plate [1 Bread ] 3 1 Kebab Plate [1 White Rice ] 3 1 Tikka Plate [1 Bread ] 4 1 Kebab Plate [1 Bread , 1 Rocca Leaves ] 4 1 Mountain Dew dtype: object </code></pre> <p><a href="https://regex101.com/r/XFjQn6/1" rel="nofollow noreferrer">regex demo</a></p>
python|pandas|string|split
1
6,533
71,531,749
Passing the name of a pandas dataframe column to a function
<p>I'm trying to write a function that takes the name of a column and splits the dataframe based on the values of that column. I have the following</p> <p><code>df_split = df[df.a == 1]</code></p> <p>I'm trying to implement the following idea</p> <pre><code>def f(df,column_name): df_split = df[df.column_name == 1] </code></pre> <p>Any help is highly appreciated.</p>
<p>Please change the function to the following:</p> <pre><code>def f(df,column_name): df_split = df[df[column_name] == 1] return df_split </code></pre> <p><code>df.column_name</code> will work only if the dataframe really has a column labelled <code>column_name</code>, so don't use it inside the function.</p>
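<p>A quick self-contained example of using it (the toy frame and column names are made up):</p> <pre><code>import pandas as pd

def f(df, column_name):
    df_split = df[df[column_name] == 1]
    return df_split

df = pd.DataFrame({'a': [1, 0, 1], 'b': [5, 6, 7]})
print(f(df, 'a'))
#    a  b
# 0  1  5
# 2  1  7
</code></pre>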
python|pandas|dataframe
2
6,534
42,158,198
R equivalent of Python's np.dot for 3D array
<p>I am translating some code from Python to R involving 3D matrices. Which is tricky as I know very little Python or matrix algebra. Anyhow in the Python code I have a matrix dot.product as follows: <code>np.dot(A, B)</code>. Matrix A has dimension (10, 4) and B is (2, 4, 2). (These dimensions may vary but always will match on the second dimension). So np.dot has no problem with this as from the documentation:</p> <blockquote> <p>"For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b:"</p> </blockquote> <p>Therefore it multiplies along the second axis of A=4, and the middle axis of B=4 and outputs a (10,2,2) matrix. => No problem. However in R, <code>%*%</code> does not have this behaviour and throws a 'non-conformable array' error.</p> <p>Toy example in r:</p> <pre><code>A &lt;- matrix( rnorm(10*4), nrow=10, ncol=4) B &lt;- array( rnorm(2*4*2), c(2,4,2)) A %*% B Error in A %*% B : non-conformable arrays </code></pre> <p>How can I resolve this to achieve the same calculation as <code>np.dot</code>?</p>
<p>We can do this with <code>aperm()</code> and <code>tensor::tensor</code>. Using @SandipanDey's example.</p> <p>Set up arrays (you need <code>aperm</code> to get the appropriate B, which I call B2 here):</p> <pre><code>A &lt;- matrix(0:39,ncol=4,byrow=TRUE) B &lt;- array(0:15,dim=c(2,4,2)) B2 &lt;- aperm(B,c(2,1,3),resize=TRUE) </code></pre> <p><code>tensor::tensor</code> does the right computation, but we need to reshape the result:</p> <pre><code>library(tensor) C &lt;- tensor(A,B2,2,1) aperm(C,c(3,2,1),resize=TRUE) </code></pre>
python|arrays|r|numpy|matrix
6
6,535
42,434,095
How to recover 3D image from its patches in Python?
<p>I have a 3D image with shape <code>DxHxW</code>. I was successful to extract the image into patches <code>pdxphxpw</code>(overlapping patches). For each patch, I do some processing. Now, I would like to generate the image from the processed patches such that the new image must be same shape with original image. Could you help me to do it. </p> <p><a href="https://i.stack.imgur.com/kQkfw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kQkfw.png" alt="enter image description here"></a></p> <p>This is my code to extract patch</p> <pre><code>def patch_extract_3D(input,patch_shape,xstep=1,ystep=1,zstep=1): patches_3D = np.lib.stride_tricks.as_strided(input, ((input.shape[0] - patch_shape[0] + 1) / xstep, (input.shape[1] - patch_shape[1] + 1) / ystep, (input.shape[2] - patch_shape[2] + 1) / zstep, patch_shape[0], patch_shape[1], patch_shape[2]), (input.strides[0] * xstep, input.strides[1] * ystep,input.strides[2] * zstep, input.strides[0], input.strides[1],input.strides[2])) patches_3D= patches_3D.reshape(patches_3D.shape[0]*patches_3D.shape[1]*patches_3D.shape[2], patch_shape[0],patch_shape[1],patch_shape[2]) return patches_3D </code></pre> <p>This is the processing the patches (just simple multiple with 2</p> <pre><code>for i in range(patches_3D.shape[0]): patches_3D[i]=patches_3D[i]; patches_3D[i]=patches_3D[i]*2; </code></pre> <p>Now, what I need is from patches_3D, I want to reshape it to the original image. Thanks</p> <p>This is example code</p> <pre><code>patch_shape=[2, 2, 2] input=np.arange(4*4*6).reshape(4,4,6) patches_3D=patch_extract_3D(input,patch_shape) print patches_3D.shape for i in range(patches_3D.shape[0]): patches_3D[i]=patches_3D[i]*2 print patches_3D.shape </code></pre>
<p>This will do the reverse, however, since your patches overlap this will only be well-defined if their values agree where they overlap</p> <pre><code>def stuff_patches_3D(out_shape,patches,xstep=12,ystep=12,zstep=12): out = np.zeros(out_shape, patches.dtype) patch_shape = patches.shape[-3:] patches_6D = np.lib.stride_tricks.as_strided(out, ((out.shape[0] - patch_shape[0] + 1) // xstep, (out.shape[1] - patch_shape[1] + 1) // ystep, (out.shape[2] - patch_shape[2] + 1) // zstep, patch_shape[0], patch_shape[1], patch_shape[2]), (out.strides[0] * xstep, out.strides[1] * ystep,out.strides[2] * zstep, out.strides[0], out.strides[1],out.strides[2])) patches_6D[...] = patches.reshape(patches_6D.shape) return out </code></pre> <p>Update: here is a safer version that averages overlapping pixels:</p> <pre><code>def stuff_patches_3D(out_shape,patches,xstep=12,ystep=12,zstep=12): out = np.zeros(out_shape, patches.dtype) denom = np.zeros(out_shape, patches.dtype) patch_shape = patches.shape[-3:] patches_6D = np.lib.stride_tricks.as_strided(out, ((out.shape[0] - patch_shape[0] + 1) // xstep, (out.shape[1] - patch_shape[1] + 1) // ystep, (out.shape[2] - patch_shape[2] + 1) // zstep, patch_shape[0], patch_shape[1], patch_shape[2]), (out.strides[0] * xstep, out.strides[1] * ystep,out.strides[2] * zstep, out.strides[0], out.strides[1],out.strides[2])) denom_6D = np.lib.stride_tricks.as_strided(denom, ((denom.shape[0] - patch_shape[0] + 1) // xstep, (denom.shape[1] - patch_shape[1] + 1) // ystep, (denom.shape[2] - patch_shape[2] + 1) // zstep, patch_shape[0], patch_shape[1], patch_shape[2]), (denom.strides[0] * xstep, denom.strides[1] * ystep,denom.strides[2] * zstep, denom.strides[0], denom.strides[1],denom.strides[2])) np.add.at(patches_6D, tuple(x.ravel() for x in np.indices(patches_6D.shape)), patches.ravel()) np.add.at(denom_6D, tuple(x.ravel() for x in np.indices(patches_6D.shape)), 1) return out/denom </code></pre>
python|python-2.7|numpy|image-processing
5
6,536
69,851,198
I get AttributeError: module 'pandas' has no attribute 'DataFrame' when using pd.DataFrame
<p>Im working in Jupyter and all of a sudden pandas won´t create a dataframe for me. The name of the notebook is &quot;Cálculo_Energía_gases&quot;, I don´t think there is a name conflict.</p> <p>This is the code:</p> <pre><code>df_energía_gases = pd.DataFrame(columns = [&quot;Presión (Pa)&quot;,&quot;rpm&quot;,&quot;Q (m3/s)&quot;,&quot;MWgases (MW)&quot;,&quot;ref_prod&quot;,&quot;m3_prod&quot;]) </code></pre> <p>Pandas won´t work in other Jupiter notebooks, but it does work in visualstudiocode. I don´t know what is happening. What can I try to solve it?</p> <p>Edit: I managed to solve the problem, pandas installation was corrupt, or at least it cound´t load the <code>__init__.py</code> (it had an extension c~). My solution was to reinstall pandas within the base environment using:</p> <pre><code>conda install pandas </code></pre> <p>Thank you all for your support &lt;3</p>
<p>You probably have a file called <code>pandas.py</code> in the same directory. When you import a module python will first search the directory you are in and then will check your python path for other modules. Delete or rename the <code>pandas.py</code> and everything should work.</p>
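<p>A quick way to confirm which module is actually being picked up is to print its file path:</p> <pre><code>import pandas as pd
print(pd.__file__)
# a path ending in .../site-packages/pandas/__init__.py means the real library
# is loaded; a path pointing at a pandas.py (or a stray/corrupt __init__ file)
# in your working directory means a local file is shadowing or breaking it
</code></pre>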
python|pandas|dataframe
0
6,537
69,829,101
Simple gradient descent optimizer in PyTorch not working
<p>I'm trying to implement a simple minimizer in PyTorch, here is the code (<code>v</code> and <code>q</code> and <code>v_trans</code> are tensors, and <code>eta</code> is 0.01):</p> <pre><code>for i in range(10): print('i =', i, ' q =', q) v_trans = forward(v, q) loss = error(v_trans, v_target) q.requires_grad = True loss.backward() grads = q.grad with torch.no_grad() q = q - eta * grads print('Final q = ', q) </code></pre> <p>On the second iteration of the loop, I get an error at the line &quot;loss.backward()&quot;:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Scripts\main.py&quot;, line 97, in &lt;module&gt; loss.backward() File &quot;C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\_tensor.py&quot;, line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File &quot;C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\autograd\__init__.py&quot;, line 154, in backward Variable._execution_engine.run_backward( RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn </code></pre> <p>I've tried several things and cannot get this simple example to work. Is there a tutorial/guide/documentation on how to make a simple optimizer for a project that doesn't involve neural networks? Or maybe, how to use the optimizers built in PyTorch for non-NN projects?</p>
<p>Here is a simple example of finding a zero (or local minimum) of a function (in this case <a href="https://i.stack.imgur.com/LL5p5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LL5p5.png" alt="function" /></a>).</p> <pre class="lang-py prettyprint-override"><code># the loss function def mse(Y, target): diff = target - Y return (diff * diff).sum() / 2 # this is the variable X = torch.rand(1, requires_grad=True) # this is our learning rate lr = 1e-3 # this is the learning loop for i in range(0, 1000): # here the actual function Y=X*X+3*X+1.0 # we are looking for 0 loss = mse(Y, 0) loss.backward() if i % 100 == 0: print(&quot;X&quot;, X.item(),&quot;loss&quot;, loss.item(), &quot;grad&quot;, X.grad.item()) with torch.no_grad(): X -= X.grad * lr X.grad.zero_() print(&quot;result found (could be local minimum): &quot;, X.item(), &quot;/&quot;, Y.item()) </code></pre> <p>It should give you an output similar to this:</p> <pre><code>X 0.45342570543289185 loss 3.2918500900268555 grad 10.024480819702148 X -0.048841409385204315 loss 0.3662492334842682 grad 2.483980894088745 X -0.21245644986629486 loss 0.08313754200935364 grad 1.0500390529632568 X -0.2880801260471344 loss 0.02392572909593582 grad 0.5302143096923828 X -0.3278542160987854 loss 0.007678795140236616 grad 0.2905181050300598 X -0.35011476278305054 loss 0.0026090284809470177 grad 0.16612648963928223 X -0.3629951477050781 loss 0.0009150659898295999 grad 0.09728223085403442 X -0.37058910727500916 loss 0.00032688589999452233 grad 0.0577557273209095 X -0.3751157820224762 loss 0.0001180334365926683 grad 0.0345664918422699 X -0.377831369638443 loss 4.289587013772689e-05 grad 0.020787909626960754 result found (could be local minimum): -0.37946680188179016 / 0.00562286376953125 </code></pre>
python|optimization|pytorch|gradient|gradient-descent
0
6,538
69,708,228
Numpy add one array to another one
<p>Let</p> <pre><code>A = np.array([]) B = np.array([1,2]) C = np.array([&quot;hey&quot;]) D = np.array([]) </code></pre> <p>I'm looking for a function which can append the arrays B, C, D to A. Not their values, but the whole arrays:</p> <p>So A should look like this:</p> <pre><code>A = np.array([[1,2],[&quot;hey&quot;],[]]) </code></pre> <p>Append doesn't work, and concatenate as well as stack etc. don't work either because the arrays don't necessarily have the same shape. Is there some way to, for example, specify the type when appending?</p>
<p>Appends are not done in-place in numpy, since it operates on fixed buffers. Since your lists are ragged and inhomogenous, you can do:</p> <pre><code>A = np.array([B, C, D]) </code></pre> <p>The dtype will automatically be <code>object</code> in this particular case, and the result will be an array of arrays.</p> <p>This essentially defeats the purpose of using numpy: arrays are slower than lists when it comes to append and delete operations. Instead, it may be better to use a list:</p> <pre><code>A.extend([B, C, D]) </code></pre> <p>Alternatively, if you are trying to describe a structured datatype, you can do that effectively with numpy:</p> <pre><code>dt = np.dtype([('b', float, 2), ('c', 'U3'), ('d', float, 0)]) A = np.array([(B, C.item(), D)], dtype=dt) </code></pre>
python|arrays|numpy|multidimensional-array
1
6,539
69,692,572
loop through a list and compare each item with another whole list
<p><strong>Table 1</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Date</th> <th style="text-align: center;">Prescribed</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">16-05-2017</td> <td style="text-align: center;">Amlodipine [ Amlodipine | 10 mg | Tablet | OD | For 60 Days ], Cetirizine [ Cetirizine | 10 mg | Tablet | OD | For 5 Days ]</td> </tr> <tr> <td style="text-align: left;">15-05-2017</td> <td style="text-align: center;">CEFUROXIME[ ZINNAT | 500MG | Tablet | BID | For 7 Days ]</td> </tr> <tr> <td style="text-align: left;">17-05-2017</td> <td style="text-align: center;">Cetirizine [Cetirizine | 5 mg/5 mL | Syrup | BID | For 5 Days]</td> </tr> </tbody> </table> </div> <p><strong>Table 2</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Name</th> <th style="text-align: center;">Category</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Lisinopril (Lisinopril | 10 mg | Tablet)</td> <td style="text-align: center;">CARDIOVASCULAR AGENT</td> </tr> <tr> <td style="text-align: left;">Amlodipine (Amlodipine | 10 mg | Tablet)</td> <td style="text-align: center;">CARDIOVASCULAR AGENT</td> </tr> <tr> <td style="text-align: left;">Enoxaparin Sodium (80mg)(clexane 8000 iu | 80mg | 0)</td> <td style="text-align: center;">CARDIOVASCULAR AGENT</td> </tr> </tbody> </table> </div> <p>I want to be able to compare each item or row within the column['Prescribed'] with the whole of Table 2 column['Name'] to be able to create a column['category']row each row in Table 1. Using pandas dataframes or any possible python method</p> <h2>More Clarification(or Example)</h2> <p><strong>Table 1(From above)</strong></p> <pre><code> test_text = &quot;Amlodipine [ Amlodipine | 10 mg | Tablet | OD | For 60 Days ], Cetirizine [ Cetirizine | 10 mg | Tablet | OD | For 5 Days ]&quot; </code></pre> <p><strong>Table 2(From above)</strong></p> <pre><code>comparison_list = [ 'Amlodipine (Amlodipine | 10 mg | Tablet)' , 'Acetaminophen(Tylenol | 500mg| Tablet)' , 'Ibuprofen(Advil | 400mg | Tablet)'] </code></pre> <p><strong>Expected outcome:</strong></p> <pre><code>Return True if 'Amlodipine' is in test_text </code></pre> <p><strong>To have a final table like below</strong> | Date | Prescribed | Result | |:---- |:------:|:------:| | 16-05-2017| Amlodipine [ Amlodipine | 10 mg | Tablet | OD | For 60 Days ], Cetirizine [ Cetirizine | 10 mg | Tablet | OD | For 5 Days ]|True| | 15-05-2017 | CEFUROXIME[ ZINNAT | 500MG | Tablet | BID | For 7 Days ] |False| | 17-05-2017 | Cetirizine [Cetirizine | 5 mg/5 mL | Syrup | BID | For 5 Days]|False|</p> <p><strong>Below are some methods i have tried.</strong></p> <pre><code>for i in table1['Prescribed']: split_data = i.split(&quot;,&quot;) for b in split_data: if any(str(b) in s for s in table2['Name']): print('true') elif str(b) in table2['Name']: print('perfect') else: print('false') </code></pre> <blockquote> <p>Output:</p> </blockquote> <pre><code>false false false false false false false false </code></pre> <p><strong>Without splitting text:</strong></p> <pre><code>for i in table1['Prescribed']: if any(str(i) in s for s in table2['Name']): print('true') elif str(i) in table2['Name']: print('perfect') else: print('false') </code></pre> <blockquote> <p>Outcome:</p> </blockquote> <pre><code>false false false false false false false </code></pre> <p>If there is any solution to this, i would be happy to know. 
Suggestions on how to do it more neatly are also appreciated. And if there is a link or book to read about how to go about this, I would be happy to know about it.</p>
<p>IIUC, you want to extract the drug names from the <code>table2['Name']</code> and then use that as a comparison list to find if ANY of those occur in the table1['Prescription'].</p> <p>If this is what you want, then try this -</p> <ol> <li>Use vectorized <code>str</code> functions like <code>replace</code>, <code>split</code> and <code>strip</code> to extract the unique drug names for your comparison list.</li> <li>Next use <code>'|'.join()</code> to connect these unique drugs with a <code>OR</code> connector to find if any of those exist in the <code>table1['Prescription]'</code> with the use of another vectorized <code>str</code> function <code>str.contains</code></li> </ol> <blockquote> <ol> <li>NOTE 1: Using apply functions for working with string is not as efficient as using <code>str</code> methods in pandas.</li> </ol> </blockquote> <blockquote> <ol start="2"> <li>NOTE 2: The regex <code>[\(\[].*?[\)\]]</code> is for removing the text inside the <code>()</code> or <code>[]</code> brackets and returning only the text outside, which is this case is the name of the drugs. Feel free to replace it with anything else.</li> </ol> </blockquote> <pre><code>#STEP 1: Get unique drugs from the table2 unique_drugs = table2['Name'].str.replace('[\(\[].*?[\)\]]','',regex=True)\ .str.split(',')\ .explode()\ .str.strip()\ .unique() ## unique_drugs : array(['Lisinopril', 'Amlodipine', 'Enoxaparin Sodium'], dtype=object) # STEP 2: FIND MATCHING DRUGS IN THE DATA table1['flag'] = table1['Prescribed'].str.contains('|'.join(unique_drugs)) print(table1) </code></pre> <pre><code> Date Prescribed flag 0 16-05-2017 Amlodipine [ Amlodipine | 10 mg | Tablet | OD ... True 1 15-05-2017 CEFUROXIME[ ZINNAT | 500MG | Tablet | BID | Fo... False 2 17-05-2017 Cetirizine [Cetirizine | 5 mg/5 mL | Syrup | B... False </code></pre>
python|pandas|loops
1
6,540
43,409,488
Custom Loss Function in TensorFlow for weighting training data
<p>I want to weight the training data based on a column in the training data set. Thereby giving more importance to certain training items than others. The weighting column should not be included as a feature for the input layer.</p> <p>The Tensorflow documentation holds an <a href="https://www.tensorflow.org/api_guides/python/contrib.losses" rel="nofollow noreferrer">example</a> how to use the label of the item to assign a custom loss and thereby assigning weight:</p> <pre><code># Ensures that the loss for examples whose ground truth class is `3` is 5x # higher than the loss for all other examples. weight = tf.multiply(4, tf.cast(tf.equal(labels, 3), tf.float32)) + 1 onehot_labels = tf.one_hot(labels, num_classes=5) tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=weight) </code></pre> <p>I am using this in a custom <strong>DNN</strong> with three hidden layers. In theory i simply need to replace <strong>labels</strong> in the example above with a tensor containing the weight column.</p> <p>I am aware that there are several threads that already discuss similar problems e.g. <a href="https://stackoverflow.com/questions/43273677/defined-loss-function-in-tensorflow">defined loss function in tensorflow?</a></p> <p>For some reason i am running into a lot of problems trying to bring my weight column in. It's probably two easy lines of code or maybe there is an easier way to achieve the same result. </p>
<p>I believe I found the answer:</p> <pre><code> weight_tf = tf.range(features.get_shape()[0]-1, features.get_shape()[0]) loss = tf.losses.softmax_cross_entropy(target, logits, weights=weight_tf) </code></pre> <p>The weight is the last column in the features tensor.</p>
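<p>For context, a TF 1.x-style sketch of that idea, carrying the per-example weight as the last input column, splitting it off, and passing it to the loss, might look like this (the shapes and the single dense layer are placeholders, not the original model):</p> <pre><code>import tensorflow as tf

n_features, n_classes = 4, 5
# last input column holds the per-example weight, the rest are real features
features = tf.placeholder(tf.float32, shape=[None, n_features + 1])
onehot_labels = tf.placeholder(tf.float32, shape=[None, n_classes])

inputs = features[:, :-1]          # feed only the real features to the network
example_weights = features[:, -1]  # weight column, excluded from the inputs

logits = tf.layers.dense(inputs, n_classes)
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels,
                                       logits=logits,
                                       weights=example_weights)
</code></pre>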
python|machine-learning|tensorflow
1
6,541
72,346,940
Pandas : Getting a cumulative sum for each month on the last friday
<p>I got a dataframe which look like this :</p> <pre><code>date_order date_despatch date_validation qty_ordered 2019-01-01 00:00:00 2019-11-01 00:00:00 2019-13-01 00:00:00 4.15 2019-01-01 00:00:00 2019-12-01 00:00:00 2019-14-01 00:00:00 5.9 2019-02-01 00:00:00 2019-16-01 00:00:00 2019-19-01 00:00:00 7.8 2019-03-01 00:00:00 2019-18-01 00:00:00 2019-20-01 00:00:00 9.6 2019-04-01 00:00:00 2019-22-01 00:00:00 2019-24-01 00:00:00 1.3 ... 2019-03-02 00:00:00 2019-22-02 00:00:00 2019-25-02 00:00:00 1.2 </code></pre> <p>My goal is to get, for each month, a cumulative sum of the quantity ordered from the start of the month to the last friday of this same month (e.g : 2019-01-01 to 2019-25-01 for January 2019)</p> <p>What would be expected :</p> <pre><code>date_order cumulative_ordered 2019-01-01 00:00:00 10.05 2019-02-01 00:00:00 17.85 ... ... 2019-24-01 00:00:00 150 2019-25-01 00:00:00 157 </code></pre> <p>Can anyone help me on this?</p>
<p>With an example df with <code>qty_ordered</code> always 1 (so we can easily keep track of the result):</p> <pre><code>import pandas as pd df = pd.DataFrame({'date_order': pd.date_range('2019-01-01', '2019-03-01')}) df['qty_ordered'] = 1 print(df) </code></pre> <pre><code> date_order qty_ordered 0 2019-01-01 1 1 2019-01-02 1 2 2019-01-03 1 3 2019-01-04 1 4 2019-01-05 1 5 2019-01-06 1 6 2019-01-07 1 7 2019-01-08 1 ... 59 2019-03-01 1 </code></pre> <p>Last Friday of Jan 2019 was the 2019-01-25, while in February it was 2019-02-22. We keep that in mind to verify the cumsums.</p> <p>You could do:</p> <pre><code># Make sure dates are sorted. df = df.sort_values('date_order') # Flag the Fridays. df['n_friday'] = df['date_order'].dt.dayofweek.eq(4) # Column to groupby. df['year_month'] = df['date_order'].dt.to_period(&quot;M&quot;) # EDITED LINE. Instead of 4th week, find number of Fridays. # Remove days past the last Friday in each year/month group. mask = df.groupby('year_month')['n_friday'].transform(lambda s: s.cumsum().shift().fillna(0).lt(s.sum())) res_df = df[mask].drop(columns=['n_friday']) # Calculate cumsum for each month. res_df['cumulative_ordered'] = res_df.groupby('year_month')['qty_ordered'].cumsum() print(res_df.drop(columns=['year_month'])) </code></pre> <pre><code> date_order qty_ordered ordered_cumusm 0 2019-01-01 1 1 1 2019-01-02 1 2 2 2019-01-03 1 3 3 2019-01-04 1 4 4 2019-01-05 1 5 5 2019-01-06 1 6 6 2019-01-07 1 7 7 2019-01-08 1 8 ... 52 2019-02-22 1 22 59 2019-03-01 1 1 </code></pre> <p>To check the cumsum and day selection worked:</p> <pre><code>print(res_df.groupby('year_month').last()) </code></pre> <pre><code> date_order qty_ordered cumulative_ordered year_month 2019/01 2019-01-25 1 25 2019/02 2019-02-22 1 22 2019/03 2019-03-01 1 1 </code></pre>
python|pandas|datetime
0
6,542
72,454,832
How to update column value of a data frame from another data frame matching 2 columns?
<p>I have 2 dataframes, and I want to update the score of rows with the same 2 column values.</p> <p>How can I do that?</p> <p>df 1:</p> <pre><code>DEP ID | Team ID | Group | Score 001 | 002 | A | 50 001 | 004 | A | 70 002 | 002 | A | 50 002 | 007 | A | 90 </code></pre> <p>df 2 (a subset of one department):</p> <pre><code>DEP ID | Team ID | Group | Result 001 | 002 | A | 80 001 | 003 | A | 60 001 | 004 | A | 70 </code></pre> <p><strong>OUTPUT:</strong> All columns with the same TeamID and Group update the score</p> <pre><code>DEP ID | Team ID | Group | Score 001 | 002 | A | 80 001 | 004 | A | 70 002 | 002 | A | 80 002 | 007 | A | 90 </code></pre> <p>I've tried doing pd merge left join but I'm not really getting the expected result.</p> <p>Any suggestions?</p>
<p>Here's a way to do it:</p> <pre class="lang-py prettyprint-override"><code>df1 = df1.join(df2.drop(columns='DEP ID').set_index(['Team ID', 'Group']), on=['Team ID', 'Group']) df1.loc[df1.Result.notna(), 'Score'] = df1.Result df1 = df1.drop(columns='Result') </code></pre> <p>Explanation:</p> <ul> <li>modify df2 so it has <code>Team ID, Group</code> as its index and its only column is <code>Result</code></li> <li>use <code>join</code> to bring the new scores from df2 into a <code>Result</code> column in df1</li> <li>use <code>loc</code> to update <code>Score</code> values for rows where <code>Result</code> is not null (i.e., rows for which an updated <code>Score</code> is available)</li> <li>drop the <code>Result</code> column.</li> </ul> <hr /> <p>Full test code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np df1 = pd.DataFrame({ 'DEP ID':['001','001','002','002'], 'Team ID':['002','004','002','007'], 'Group':['A','A','A','A'], 'Score':[50,70,50,90]}) df2 = pd.DataFrame({ 'DEP ID':['001','001','001'], 'Team ID':['002','003','004'], 'Group':['A','A','A'], 'Result':[80,60,70]}) print(df1) print(df2) df1 = df1.join(df2.drop(columns='DEP ID').set_index(['Team ID', 'Group']), on=['Team ID', 'Group']) df1.loc[df1.Result.notna(), 'Score'] = df1.Result df1 = df1.drop(columns='Result') print(df1) </code></pre> <p>Output:</p> <pre><code> index DEP ID Team ID Group Score 0 0 001 002 A 80 1 1 001 004 A 70 2 2 002 002 A 80 3 3 002 007 A 90 </code></pre> <hr /> <p><strong>UPDATE</strong>:</p> <p>If <code>Result</code> column in df2 is instead named <code>Score</code>, as asked by OP in a comment, then the code can be adjusted slightly as follows:</p> <pre class="lang-py prettyprint-override"><code>df1 = df1.join(df2.drop(columns='DEP ID').set_index(['Team ID', 'Group']), on=['Team ID', 'Group'], rsuffix='_NEW') df1.loc[df1.Score_NEW.notna(), 'Score'] = df1.Score_NEW df1 = df1.drop(columns='Score_NEW') </code></pre>
python|pandas
1
6,543
72,285,947
Python convert array dimensions with numpy
<br> I have a JPG image with a shape of (1, 48, 48, 3). <br> I want to convert it to the shape of (1, 48, 48, 1). How can I do it? <p>Please help.</p>
<p>Thank you all! I solved it with:</p> <pre><code>img = img.mean(axis=-1)
img = np.expand_dims(img, axis=3)
</code></pre>
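<p>A quick shape check on a dummy array (the image-loading step is omitted and the array is just zeros):</p> <pre><code>import numpy as np

img = np.zeros((1, 48, 48, 3))      # stand-in for the loaded JPG batch
img = img.mean(axis=-1)             # average the RGB channels, shape (1, 48, 48)
img = np.expand_dims(img, axis=3)   # add the channel axis back
print(img.shape)                    # (1, 48, 48, 1)
</code></pre>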
python|numpy|image-processing
-1
6,544
72,147,193
How to customize general error messages of numpy?
<p>I am writing a <code>Vector</code> and <code>Matrix</code> classes that use <code>numpy</code> in the backend in order to abstract some common methods and calculations (specifically, physics calculations, but it's irrelevant). I would like to intercept common errors that may occur with the usage of these classes in order to write more readable error messages and hide the usage of <code>numpy</code>.</p> <p>For example, assume we have <code>v = Vector([1, 2, 3])</code> defined. this code:</p> <pre class="lang-py prettyprint-override"><code>v[&quot;a&quot;] = 5 </code></pre> <p>generates the error:</p> <blockquote> <p>IndexError: only integers, slices (<code>:</code>), ellipsis (<code>...</code>), numpy.newaxis (<code>None</code>) and integer or boolean arrays are valid indices</p> </blockquote> <p>I would like something like:</p> <blockquote> <p>TypeError: Vector can be indexed by <code>int</code> or <code>slice</code> (<code>:</code>), not <code>str</code>.</p> </blockquote> <p>I am not sure why <code>numpy</code> raises <code>IndexError</code> here instead of <code>TypeError</code> but whatever. Another example is this code <code>v[6] = 0</code> which generates:</p> <blockquote> <p>IndexError: index 6 is out of bounds for axis 0 with size 3</p> </blockquote> <p>I would prefer something like:</p> <blockquote> <p>IndexError: index 6 is invalid for 3 dimensional vector</p> </blockquote> <p>Another example: <code>v[:2] = (4, 7, 12)</code> which generates:</p> <blockquote> <p>ValueError: could not broadcast input array from shape (3,) into shape (2,)</p> </blockquote> <p>I'd prefer something like:</p> <blockquote> <p>ValueError: Can't set 2d slice of vector with 3d data</p> </blockquote> <p>There are probably more examples I didn't come up with yet, but I think this illustrates the point. I want to customize the error messages for these different operations and I can't figure out how.</p> <p>I can catch the exceptions and raise new ones with proper messages, but the exception doesn't contain information about why it was raised. Was it because of a wrong type, out-of-bounds index, or wrong dimensions?</p> <p>There is no error code or something like that. The best option I came up with is parsing the error messages to understand what happened, but this feels like cheating, a hard work, and it relies on numpy not changing the format of the error messages. Is there a more reliable and clean way to do so?</p>
<p>If you mean you want to access the stack trace, referring to:</p> <blockquote> <p>but the exception doesn't contain information about why it was raised.</p> </blockquote> <p>you can use <code>print(traceback.format_exc())</code> inside the <code>except</code> block. You'll also have to <code>import traceback</code> first.</p>
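<p>A minimal sketch of that suggestion, using the indexing error from the question as the trigger (the custom message is just an example):</p> <pre><code>import traceback
import numpy as np

data = np.array([1, 2, 3])
try:
    data['a'] = 5   # raises numpy's IndexError about valid index types
except Exception:
    print(traceback.format_exc())   # full stack trace, including numpy's message
    raise TypeError('Vector can be indexed by int or slice, not str.')
</code></pre>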
python|numpy
0
6,545
50,379,132
Format x-axis tick labels to seams like the default pandas plot
<p>I'm trying to set my plot xticks to similar to the pandas dataframe default format.</p> <p>I've been trying to set using the plt.set_xticklabels functions, but did not succeed. </p> <pre><code>fig, axarr = plt.subplots(len(stations), 2, figsize=(10,11)) plt.subplots_adjust(bottom=0.05) hPc3.plot(use_index=True, subplots=True, ax=axarr[0:len(stations),0], for i in range(0,len(axarr)): axarr[i,0].set_ylabel('$nT$') axarr[len(stations)-1,0].set_xlabel('$(UT)$') for i in range(0,len(axarr)): plot4 = axarr[i,1].pcolormesh(tti, wPc3_period[i], np.log10(abs(wPc3_power[i])), cmap = 'jet') axarr[i,1].set_yscale('log', basey=2, subsy=None) axarr[i,1].set_xlabel('$(UT)$') axarr[i,1].set_ylabel('$Period$ $(s)$') axarr[i,1].set_ylim([np.min(wPc3_period[i]), np.max(wPc3_period[i])]) axarr[i,1].invert_yaxis() axarr[i,1].plot(tti, te_coi3, 'w') cbar_coord = replace_at_index1(make_axes_locatable(axarr[i,1]).get_position(), [0,2], [0.92, 0.01]) cbar_ax = fig.add_axes(cbar_coord) cbar = plt.colorbar(plot4, cax=cbar_ax, boundaries=np.linspace(-10, 10, 512), ticks=[-10, -5, 0, 5, 10], label='$log_{2}$') cbar.set_clim([-10,5]) </code></pre> <p>the left panel show the default label of pandas data frame plot. The right panel is how is my formatation</p> <p><img src="https://i.stack.imgur.com/Za2ok.png" alt="the left panel show the default label of pandas data frame plot. The right panel is how is my formatation"></p>
<p>Matplotlib <a href="https://matplotlib.org/api/dates_api.html" rel="nofollow noreferrer">dates</a> api provides plenty of convenience functions and classes to represent and convert date and time data.</p> <p>You can reproduce <code>pandas</code> style using a simple combination of <code>DateFormatter</code>, <code>DayLocator</code> and <code>HourLocator</code>. Here's an example on a dummy dataset given you didn't provide complete working code, but it shouldn't be hard to adapt to your use case.</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates # create toy dataset index = pd.date_range("2018-05-25 00:00:00", "2018-05-26 00:00:00", freq = "1min") series = pd.Series(np.random.random(len(index)), index=index) x = index.to_pydatetime() y = series # plot fig = plt.figure(figsize=(5,1)) ax = fig.gca() ax.xaxis.set_minor_formatter(mdates.DateFormatter("%H:%M")) ax.xaxis.set_minor_locator(mdates.HourLocator(interval=3)) ax.tick_params(which='minor', labelrotation=30) ax.xaxis.set_major_formatter(mdates.DateFormatter("%d-%b")) ax.xaxis.set_major_locator(mdates.DayLocator()) ax.tick_params(which='major', pad=10, labelrotation=30) ax.set_xlim(x.min(), x.max()) ax.plot(x, y) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/HIIwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HIIwj.png" alt="enter image description here"></a></p>
python|pandas|matplotlib
0
6,546
45,335,993
compare string got error ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
<p>I am trying to use <code>if</code> condition to update some values in a column using the following code:</p> <pre><code>if df['COLOR_DESC'] == 'DARK BLUE': df['NEW_COLOR_DESC'] = 'BLUE' </code></pre> <p>But I got the following error:</p> <pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>So what is wrong with this piece of code?</p>
<p>To answer your immediate question, the problem is that the expression <code>df['COLOR_DESC'] == 'DARK BLUE'</code> results in a Series of booleans. The error message is telling you that there is no one unambiguous way to convert that array to a single boolean value as <code>if</code> demands.</p> <p>The solution is actually not to use <code>if</code>, since you are not applying the <code>if</code> to each element that is <code>DARK_BLUE</code>. Use the boolean values directly as a mask instead:</p> <pre><code>rows = (df['COLOR_DESC'] == 'DARK BLUE') df.loc[rows, 'COLOR_DESC'] = 'BLUE' </code></pre> <p>You have to use <code>loc</code> to update the original <code>df</code> because if you index it as <code>df[rows]['COLOR_DESC']</code>, you will be getting a copy of the required subset. Setting the values in the copy will <em>not</em> propagate back to the original, and you will even get a warning about that.</p> <p>For example:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame(data={'COLOR_DESC': ['LIGHT_RED', 'DARK_BLUE', 'MEDUIM_GREEN', 'DARK_BLUE']}) &gt;&gt;&gt; df COLOR_DESC 0 LIGHT_RED 1 DARK_BLUE 2 MEDUIM_GREEN 3 DARK_BLUE &gt;&gt;&gt; rows = (df['COLOR_DESC'] == 'DARK BLUE') &gt;&gt;&gt; rows 0 False 1 True 2 False 3 True Name: COLOR_DESC, dtype: bool &gt;&gt;&gt; df.loc[rows, 'COLOR_DESC'] = 'BLUE' &gt;&gt;&gt; df COLOR_DESC 0 LIGHT_RED 1 BLUE 2 MEDUIM_GREEN 3 BLUE </code></pre>
python|pandas
1
6,547
45,527,853
Python pandas - Filter a data frame based on a pre-defined array
<p>I'm trying to filter a data frame based on the contents of a pre-defined array.</p> <p>I've looked up several examples on StackOverflow but simply get an empty output.</p> <p>I'm not able to figure out what I'm doing incorrectly. Could I please seek some guidance here?</p> <pre><code>import pandas as pd import numpy as np csv_path = 'history.csv' df = pd.read_csv(csv_path) pre_defined_arr = ["A/B", "C/D", "E/F", "U/Y", "R/E", "D/F"] distinct_count_column_headers = ['Entity'] distinct_elements= pd.DataFrame(df.drop_duplicates().Entity.value_counts(),columns=distinct_count_column_headers) filtered_data= distinct_elements[distinct_elements['Entity'].isin(pre_defined_arr)] print("Filtered data ... ") print(filtered_data) </code></pre> <p><strong>OUTPUT</strong></p> <pre><code>Filtered data ... Empty DataFrame Columns: [Entity] Index: [] </code></pre>
<p>Managed to do that using the <code>filter</code> function -&gt; <code>.filter(items=pre_defined_arr)</code></p> <pre><code>import pandas as pd import numpy as np csv_path = 'history.csv' df = pd.read_csv(csv_path) pre_defined_arr = ["A/B", "C/D", "E/F", "U/Y", "R/E", "D/F"] distinct_count_column_headers = ['Entity'] distinct_elements_filtered= pd.DataFrame(df.drop_duplicates().Entity.value_counts().filter(items=pre_defined_arr),columns=distinct_count_column_headers) </code></pre> <p>It's strange that there's just one answer I came across that suggests the <code>filter</code> function. Almost 9 out of 10 out there talk about the <code>.isin</code> function, which didn't work in my case. </p>
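<p>A small self-contained illustration of why <code>filter(items=...)</code> works here: <code>value_counts()</code> puts the entity names in the index, and <code>filter</code> selects by index label (the sample values are made up):</p> <pre><code>import pandas as pd

s = pd.Series(['A/B', 'A/B', 'C/D', 'X/Y'])   # stand-in for df.Entity
counts = s.value_counts()
pre_defined_arr = ['A/B', 'C/D', 'E/F']
print(counts.filter(items=pre_defined_arr))
# A/B    2
# C/D    1
</code></pre>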
python|pandas
0
6,548
62,861,470
Numpy 2D array indexing with indices out of bounds
<p>I found a substantial bottleneck in the following code:</p> <pre><code>def get_value(matrix, index): if (index[0] &gt;= 0 and index[1] &gt;= 0 and index[0] &lt; matrix.shape[0] and index[1] &lt; matrix.shape[1]): return matrix[index[0], index[1]] return DEFAULT_VAL </code></pre> <p>Given a 2D matrix and an index accessing the matrix, it checks for out-of-bounds indices and returns the value at the given index. Otherwise, it returns a DEFAULT_VAL in case of out-of-bounds indices.</p> <p>This method is called many times (even millions of calls), which is slow. So, I am trying to vectorize it using numpy. Unfortunately, I cannot find a way to do it.</p> <p>If I didn't have to care about out-of-bounds values, I'd do the following:</p> <pre><code>def get_values(matrix, indices): return matrix[indices[:,0], indices[:,1]] </code></pre> <p>I've been thinking of a way to utilize numpy to do this task, but I haven't found a way yet.</p> <p>Is there a way to accomplish this?</p>
<p>The code you have shown</p> <pre><code>def get_values(matrix, indices): return matrix[indices[:,0], indices[:,1]] </code></pre> <p>is the best you can do given that <code>indices</code> is a tuple with two values.</p> <p>You should rather look at the optimal way to call the above method. I suggest, if you can, rather than calling <code>get_values</code> with a single tuple, call it with a possibly large number of such tuples. Then you can at least try to write a vectorized version of <code>get_values</code>. With a single tuple there is nothing to vectorize.</p> <h2>Vectorized method</h2> <p>Assuming that your <code>indices</code> is a numpy array of size <code>n X 2</code>, where <code>n</code> is the number of indices and <code>2</code> corresponds to the two dimensions, you can use</p> <pre><code>index = np.random.randint(0,500, size=(10000,2)) matrix = np.random.randn(1000,1000) def get_value(matrix, index, default_value=-1): result = np.zeros(len(index))+default_value mask = (index[:,0] &gt;= 0) &amp; (index[:,1] &gt;= 0) &amp; (index[:,0] &lt; matrix.shape[0]) &amp; (index[:,1] &lt; matrix.shape[1]) valid = index[mask] result[mask] = matrix[valid[:, 0], valid[:, 1]] return result assert np.all(get_value(matrix, np.array(([0,1001],[1001,1001]))) == -1) </code></pre> <p>Note that the lower-bound check (<code>&gt;= 0</code>) is needed as well, otherwise negative indices would silently wrap around in NumPy instead of falling back to <code>default_value</code>, which would not match the behaviour of your original function.</p> <pre><code>%timeit get_value(matrix, index, -1) 1 loop, best of 3: 264 ms per loop </code></pre>
python|numpy|optimization|matrix-indexing
1
6,549
62,876,594
Pandas filter, group-by and then transform
<p>I have a pandas dataframe, which looks like the following:</p> <pre><code> df = a b a1. 1 a2 0 a1 0 a3 1 a2 1 a1 1 </code></pre> <p>I would like to first filter b on <code>1</code> and then, group by <code>a</code> and count number of times each group occurs (call this column <code>count</code>) and then attach this column with original df. <code>b</code> is guaranteed to be have at least one time <code>1</code> for each value of <code>a</code>.</p> <p>Expected output:</p> <pre><code> df = a b. count a1. 1 2 a2 0. 1 a1 0. 2 a3 1 1 a2 1. 1 a1 1 2 </code></pre> <p>I tried:</p> <pre><code> df['count] = df.groupby('a').b.transform('size') </code></pre> <p>But, this counts zeros as well. I want to filter for <code>b == 1</code> first.</p> <p>I also tried:</p> <pre><code>df['count'] = df[df['b' == 1].groupby('a').b.transform('size') </code></pre> <p>But, this introduces <code>nans</code> in the count column?</p> <p>How can I do this in one line?</p>
<p>Build the condition on <code>b</code> with <code>eq</code>, group it by <code>a</code>, then <code>transform</code> with <code>sum</code>:</p> <pre><code>df['b'].eq(1).groupby(df['a']).transform('sum') Out[103]: 0 2.0 1 1.0 2 2.0 3 1.0 4 1.0 5 2.0 Name: b, dtype: float64 </code></pre>
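<p>To attach it to the original frame as the integer <code>count</code> column from the question (a small sketch, same <code>df</code>):</p> <pre><code>df['count'] = df['b'].eq(1).groupby(df['a']).transform('sum').astype(int)
</code></pre>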
python-3.x|pandas|pandas-groupby
2
6,550
62,504,167
How to convert from frequency domain to time domain in python
<p>I know this is basic for signal processing, but, I am not sure what is wrong about my approach. I have a signal that behaves as damped sine signal with a sampling frequency of 5076Hz and 15,000 number of samples. I found from the following website how to convert a signal from a time domain to frequency domain and managed to get the FFT and frequency values. The code can be found below the link:</p> <p><a href="http://ataspinar.com/2018/04/04/machine-learning-with-signal-processing-techniques/" rel="nofollow noreferrer">Machine Learning with Signal Processing Techniques</a></p> <pre><code>def get_fft_values(y_values, T_s, N, f_s): f_values = np.linspace(0.0, 1.0/(2.0*T), N//2) fft_values_ = np.fft.rfft(y_values) fft_values = 2.0/N * np.abs(fft_values_[0:N//2]) return f_values, fft_values </code></pre> <p>I managed to get the frequency and FFT values. However, I need to implement filters to remove some noise from the signal, so, I created the following functions to implement the filter part:</p> <pre><code>def butter_bandpass(lowcut, highcut, fs, order): nyq = 0.5 * fs low = lowcut / nyq high = highcut / nyq b, a = butter(order, [low, high], btype='bandpass', output='ba') return b, a def butter_bandpass_filter(data, lowcut, highcut, fs, order): b, a = butter_bandpass(lowcut, highcut, fs, order=order) y = filtfilt(b=b, a=a, x=data) # y = lfilter(b=b, a=a, x=data) return y </code></pre> <p>I know that I would need to implement the following steps:</p> <ul> <li>Convert to the frequency domain</li> <li>apply a bandpass filter to get rid of frequencies you don't care about</li> <li>convert back to the time domain by inverse Fourier transform</li> </ul> <p>So, I created the following inverse transform function, but, I can't get the filtered signal back and the amplitudes don't almost match the original signal. (For my case, I need to resample)</p> <pre><code>def get_ifft_values(fft_values, T, N, f_s): # Time axis: N = 9903 S_T = 1 / S_F t_n = S_T * N # seconds of sampling # Obtaining data in order to plot the graph: x_time = np.linspace(0, t_n, N) ifft_val = np.fft.irfft(fft_values, n=N) y_s, x_time = scipy.signal.resample(x=ifft_val, num=N, t=x_time) return x_time, y_s </code></pre> <p>What am I doing wrong here?</p> <p><strong>Edit 1:</strong></p> <p>Based on the answer from @Han-Kwang Nienhuys. 
I edited the above code and applied it to the approach below:</p> <pre><code>##### Converting the signal into fft: f_val, fft_val = get_fft_values(y_values=y, T=S_T, N=N, f_s=S_F) # Applying bandpass filter: fft_filt_val = butter_bandpass_filter(data=fft_val, lowcut=50, highcut=600, fs=S_F, order=2) # Applying the inverse transform of the frequency domain: x_time, y = get_ifft_values(fft_values=fft_filt_val, T=S_T, N=N, f_s=S_F) </code></pre> <p>Here are the results from the signal:</p> <ul> <li>FFT of the original signal:</li> </ul> <p><a href="https://i.stack.imgur.com/uHZS8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uHZS8.png" alt="FFT of the original signal" /></a></p> <ul> <li>Filtered FFT of the original signal:</li> </ul> <p><a href="https://i.stack.imgur.com/Ltkui.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ltkui.png" alt="Filtered FFT from FFT values" /></a></p> <ul> <li>Converted Signal from Filtered FFT:</li> </ul> <p><a href="https://i.stack.imgur.com/zqmBa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zqmBa.png" alt="Converted Signal from Filtered FFT" /></a></p> <ul> <li>Without Applying the bandpass filter:</li> </ul> <p><a href="https://i.stack.imgur.com/ClNzh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ClNzh.png" alt="Without Applying the bandpass filter" /></a></p>
<p>There are several issues:</p> <ul> <li>You are using <code>np.fft.fft</code>, which is a complex-valued discrete Fourier transform, containing frequencies up to twice the Nyqvist frequency. The frequencies above the Nyqvist frequency can be interpreted equivalently as negative frequencies. You are indeed using a frequency slice <code>[:N//2]</code>, but if you want the inverse transform to work, you also need to deal with the other half of the spectrum.</li> <li>Don't take the absolute values of the FFT data. The filter must operate on the complex-valued coefficients.</li> <li>If you use <code>scipy.signal.filtfilt</code>: that function operates on time-domain data, not on frequency-domain data.</li> </ul> <p>For real-valued input data, it's much easier to use a real-valued FFT, which will behave more like you expect:</p> <pre><code>n = len(y) yf = np.fft.rfft(y) fstep = f_sampling / n freqs = np.arange(len(yf)) * fstep </code></pre> <p>To transform back, use <code>np.fft.irfft</code>.</p>
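<p>For illustration, a minimal sketch of the whole round trip done purely in the frequency domain (the band edges <code>lowcut</code>/<code>highcut</code> and the sampling rate <code>f_s</code> are assumed to be defined by the caller; a hard frequency mask is used here instead of the Butterworth filter from the question):</p> <pre><code>import numpy as np

def bandpass_rfft(y, f_s, lowcut, highcut):
    n = len(y)
    yf = np.fft.rfft(y)                       # complex coefficients
    freqs = np.fft.rfftfreq(n, d=1.0 / f_s)   # matching frequency axis in Hz
    mask = (freqs &gt;= lowcut) &amp; (freqs &lt;= highcut)
    yf_filtered = np.where(mask, yf, 0.0)     # zero everything outside the band
    return np.fft.irfft(yf_filtered, n=n)     # back to the time domain, same length
</code></pre>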
python|numpy|scipy|signal-processing|fft
0
6,551
62,754,767
TF.Keras SparseCategoricalCrossEntropy return nan on GPU
<p>Tried to train a UNet on GPU to create a binary classified image. Got nan loss on each epoch. Testing the loss function always returns nan.</p> <p>Test case:</p> <pre><code>import tensorflow as tf import tensorflow.keras.losses as ls true = [0.0, 1.0] pred = [[0.1,0.9],[0.0,1.0]] tt = tf.convert_to_tensor(true) tp = tf.convert_to_tensor(pred) l = ls.SparseCategoricalCrossentropy(from_logits = True) ret = l(tt,tp) print(ret) #tf.Tensor(nan, shape=(), dtype=float32) </code></pre> <p>If I force my TF to work on the CPU (<a href="https://stackoverflow.com/questions/40690598/can-keras-with-tensorflow-backend-be-forced-to-use-cpu-or-gpu-at-will">Can Keras with Tensorflow backend be forced to use CPU or GPU at will?</a>), everything works fine. And yes, my UNet fits and predicts correctly on CPU.</p> <p>I checked several posts on the Keras GitHub, but they all point to problems with the compiled ANN, such as using inappropriate optimizers for categorical crossentropy.</p> <p>Any workaround? Am I missing something?</p>
<p>I had the same issue. My loss was a real number if I trained on CPU. I tried upgrading the TF version, but it didn't fix the problem. I finally fixed my issue by reducing the y dimension. My model output was a 2D array. When I reduced it to 1D, I managed to get a real loss on GPU.</p>
python|tensorflow|keras
2
6,552
54,400,171
How to use list in Snakemake Tabular configuration, for describing of sequencing units for bioinformatic pipeline
<p>How to use a list in Snakemake tabular config.</p> <p>I use Snakemake Tabular (mapping with BWA mem) configuration to describe my sequencing units (libraries sequenced on separate lines). At the next stage of analysis I have to merge sequencing units (mapped .bed files) and take merged .bam files (one for each sample). Now I'm using YAML config for describing of what units belong to what samples. But I wish to use Tabular config for this purpose, </p> <p>I'm not clear how to write and recall a list (containing sample information) from cell of Tab separated file.</p> <p>This is how my Tablular config for units looks like:</p> <pre><code>Unit SampleSM LineID PlatformPL LibraryLB RawFileR1 RawFileR2 sample_001.lane_L1 sample_001 lane_L1 ILLUMINA sample_001 /user/data/sample_001.lane_L1.R1.fastq.gz /user/data/sample_001.lane_L1.R2.fastq.gz sample_001.lane_L2 sample_001 lane_L2 ILLUMINA sample_001 /user/data/sample_001.lane_L2.R1.fastq.gz /user/data/sample_001.lane_L2.R2.fastq.gz sample_001.lane_L8 sample_001 lane_L8 ILLUMINA sample_001 /user/data/sample_001.lane_L8.R1.fastq.gz /user/data/sample_001.lane_L8.R2.fastq.gz sample_002.lane_L1 sample_002 lane_L1 ILLUMINA sample_002 /user/data/sample_002.lane_L1.R1.fastq.gz /user/data/sample_002.lane_L1.R2.fastq.gz sample_002.lane_L2 sample_002 lane_L2 ILLUMINA sample_002 /user/data/sample_002.lane_L2.R1.fastq.gz /user/data/sample_002.lane_L2.R2.fastq.gz </code></pre> <p>This is how my YAML config for Samples looks like:</p> <pre><code>samples: "sample_001": ["sample_001.lane_L1", "sample_001.lane_L2", "sample_001.lane_L8"] "sample_002": ["sample_002.lane_L1", "sample_002.lane_L2"] </code></pre> <p>My Snakemake code:</p> <pre><code>import pandas as pd import os workdir: "/user/data/snakemake/" configfile: "Samples.yaml" units_table = pd.read_table("Units.tsv").set_index("Unit", drop=False) rule all: input: expand('map_folder/{unit}.bam', unit=units_table.Unit), expand('merge_bam_folder/{sample}.bam', sample=config["samples"]), rule map_paired_end: input: r1 = lambda wildcards: expand(units_table.RawFileR1[wildcards.unit]), r2 = lambda wildcards: expand(units_table.RawFileR2[wildcards.unit]) output: bam = 'map_folder/{unit}.bam' params: bai = 'map_folder/{unit}.bam.bai', ref='/user/data/human_g1k_v37.fasta.gz', SampleSM = lambda wildcards: units_table.SampleSM[wildcards.unit], LineID = lambda wildcards: units_table.LineID[wildcards.unit], PlatformPL = lambda wildcards: units_table.PlatformPL[wildcards.unit], LibraryLB = lambda wildcards: units_table.LibraryLB[wildcards.unit] threads: 16 shell: r""" seqtk mergepe {input.r1} {input.r2}\ | bwa mem -M -t {threads} -v 3 \ {params.ref} - \ -R "@RG\tID:{params.LineID}\tSM:{params.SampleSM}\tPL:{params.PlatformPL}\tLB:{params.LibraryLB}"\ | samtools view -u -Sb - \ | samtools sort - -m 4G -o {output.bam} samtools index {output.bam} """ rule samtools_merge_bam: input: lambda wildcards: expand('map_folder/{file}.bam', file=config['samples'][wildcards.sample]) output: bam = 'merge_bam_folder/{sample}.bam' threads: 1 shell: r""" samtools merge {output.bam} {input} samtools index {output.bam} """ </code></pre>
<p>What about this below? </p> <p>I have excluded the Samples.yaml as I think it is not necessary given your sample sheet. </p> <p>In rule <code>samtools_merge_bam</code> you collect all unit-bam files sharing the same SampleSM. These unit-bam files are created in <code>map_paired_end</code> where the lambda expression collects the fastq files for each unit. </p> <p>Note also that I have removed the unit-bam files from the all rule as (I think) these are just intermediate files and they could be marked as temporary using the <a href="https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#protected-and-temporary-files" rel="nofollow noreferrer">temp()</a> flag.</p> <pre><code>import pandas as pd import os workdir: "/output/dir" units_table = pd.read_table("Units.tsv") samples= list(units_table.SampleSM.unique()) rule all: input: expand('merge_bam_folder/{sample}.bam', sample= samples), rule map_paired_end: input: r1 = lambda wildcards: units_table.RawFileR1[units_table.Unit == wildcards.unit], r2 = lambda wildcards: units_table.RawFileR2[units_table.Unit == wildcards.unit], output: bam = 'map_folder/{unit}.bam' params: bai = 'map_folder/{unit}.bam.bai', ref='/user/data/human_g1k_v37.fasta.gz', SampleSM = lambda wildcards: list(units_table.SampleSM[units_table.Unit == wildcards.unit]), LineID = lambda wildcards: list(units_table.LineID[units_table.Unit == wildcards.unit]), PlatformPL = lambda wildcards: list(units_table.PlatformPL[units_table.Unit == wildcards.unit]), LibraryLB = lambda wildcards: list(units_table.LibraryLB[units_table.Unit == wildcards.unit]), threads: 16 shell: r""" seqtk mergepe {input.r1} {input.r2}\ | bwa mem -M -t {threads} -v 3 \ {params.ref} - \ -R "@RG\tID:{params.LineID}\tSM:{params.SampleSM}\tPL:{params.PlatformPL}\tLB:{params.LibraryLB}"\ | samtools view -u -Sb - \ | samtools sort - -m 4G -o {output.bam} samtools index {output.bam} """ rule samtools_merge_bam: input: lambda wildcards: expand('map_folder/{unit}.bam', unit= units_table.Unit[units_table.SampleSM == wildcards.sample]) output: bam = 'merge_bam_folder/{sample}.bam' threads: 1 shell: r""" samtools merge {output.bam} {input} samtools index {output.bam} """ </code></pre>
python|pandas|bioinformatics|pipeline|snakemake
0
6,553
54,652,103
Updating a pandas column with a dictionary lookup
<p>Have a dataframe, df:</p> <pre><code>import pandas as pd import numpy as np i = ['dog', 'cat', 'rabbit', 'elephant'] * 3 df = pd.DataFrame(np.random.randn(12, 2), index=i, columns=list('AB')) </code></pre> <p>...and a lookup dict for column B:</p> <p><code>b_dict = {'elephant': 2.0, 'dog': 5.0}</code></p> <p>How can column B of df be replaced for elephant and dog rows?</p> <p><code>df['B'].update(b_dict)</code> gives:</p> <blockquote> <p>AttributeError: 'dict' object has no attribute 'reindex_like'</p> </blockquote>
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where()</code></a> to replace only where the condition matches and retain the rest:</p> <pre><code>df['B'] = np.where(df.index.isin(b_dict.keys()), df.index.map(b_dict), df.B) </code></pre>
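<p>For reference, the <code>update</code> call from the question also works once the dict is wrapped in a Series, because pandas then aligns on the index labels (a small sketch):</p> <pre><code>df['B'].update(pd.Series(b_dict))
</code></pre>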
pandas
3
6,554
73,791,808
For loops through pandas dataframes for state and country
<p>I am trying to get the state location from zip codes when they are in the US. The code below is what I am using importing a csv with zip codes and country. I am getting repeating state for the first zip code in the dataframe. I tried .append on the state, country but am still just getting the return of the first row. How could I get it to run through each zip code to return that value?</p> <pre><code>import csv import pandas as pd from geopy.geocoders import Nominatim # Import CSV for ZipCodes filename='/filepath/test.csv' df=pd.read_csv(filename, dtype=object) df.head() #calling Nominatim tool geolocator = Nominatim(user_agent=&quot;user&quot;) state=[] country=[] for item in df[&quot;Zip&quot;]: geolocator = Nominatim(user_agent=&quot;user&quot;) location = geolocator.geocode(&quot;Zip&quot;, addressdetails=True) state.append(getLoc.raw['address']['state']) country.append(getLoc.raw['address']['country']) df['state']=state df['country']=country print(df.head()) </code></pre> <p>These are the results I am getting:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>Zip</th> <th>state</th> <th>country</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>65807, US</td> <td>Michigan</td> <td>United States</td> </tr> <tr> <td>1</td> <td>V92x4vr, IE</td> <td>Michigan</td> <td>United States</td> </tr> <tr> <td>2</td> <td>V92x4vr, IE</td> <td>Michigan</td> <td>United States</td> </tr> <tr> <td>3</td> <td>91010, US</td> <td>Michigan</td> <td>United States</td> </tr> </tbody> </table> </div>
<pre><code>df['state']=state df['country']=country </code></pre> <p>assign the whole column at once, so every row ends up with whatever the lists contain. Note also that the loop geocodes the literal string <code>&quot;Zip&quot;</code> instead of each item and appends from <code>getLoc</code> rather than <code>location</code>, which is why the same result repeats on every row.</p> <p>You can use .apply() instead. Put your code in a function and you can do something like this:</p> <pre><code>def zip_to_state_country(zip_code): # your logic here return zip_code[0], zip_code[1] # returns just for example df[&quot;state&quot;], df[&quot;country&quot;] = zip(*df[&quot;Zip&quot;].apply(zip_to_state_country)) </code></pre>
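<p>A sketch of what the function body might look like, reusing the Nominatim calls from the question (the exact keys available in <code>location.raw['address']</code> depend on the lookup, so <code>.get</code> is used defensively):</p> <pre><code>from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent=&quot;user&quot;)

def zip_to_state_country(zip_code):
    location = geolocator.geocode(zip_code, addressdetails=True)
    if location is None:                       # lookup failed
        return None, None
    address = location.raw.get('address', {})
    return address.get('state'), address.get('country')

df[&quot;state&quot;], df[&quot;country&quot;] = zip(*df[&quot;Zip&quot;].apply(zip_to_state_country))
</code></pre>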
python|pandas|dataframe|geopy
0
6,555
71,166,405
Getting ValueError: All arrays must be of the same length while reshaping tensor for Bidirectional LSTM
y_pred length returns 13, y_test length returns 13 as well however y_pred.reshape(-1) returns 130. y_pred argument expects one argument, how do I reshape it back to 13? <pre><code>from keras.layers import Dense, Dropout,Activation, LSTM,Bidirectional from keras.models import Sequential import tensorflow as tf BLSTM = Sequential() BLSTM.add(Bidirectional(LSTM(100,return_sequences=True, input_shape=(10,1), activation='gelu'))) BLSTM.add(Dense(1)) BLSTM.compile(optimizer = 'adam', loss = 'mean_squared_error') BLSTM.build(input_shape=(10,1,1)) BLSTM.summary() history = BLSTM.fit(X_train_t, y_train, epochs=10, batch_size=128) BLSTM.evaluate(X_test_t, y_test, batch_size=32) y_pred = BLSTM.predict(X_test_t, batch_size=32) como = pd.DataFrame({'testdata' : y_test.Price.values,'predictions' : example}) </code></pre>
<p>Look at your model's summary and the parameters of your <code>LSTM</code> layer. You are using an input shape of <code>(10,1)</code> meaning you have 10 timesteps and for each timestep you have 1 feature. At least that is what you are telling this layer. Note that this has nothing to do with number of samples you have in your dataset. The full shape would be <code>(samples, timesteps, features)</code>. And then you have set the <code>return_sequences</code> parameter to <code>True</code>, which means you will get all timesteps from your input resulting in the output shape <code>(None, 10, 200)</code>, where 200 is the output space is due to the <code>Bidirectional</code> layer and <code>None</code> is some variable batch size.</p> <p>Now based on this information, you have to ask yourself if your data does really have the shape <code>(samples, 10, 1)</code>, because if this is the case, you are building your model with the wrong shape: <code>input_shape=(10,1)</code> != <code>input_shape=(10,1,1)</code>. The timesteps dimension is different and that is probably causing the issues with your model. Here is one example of what your model would look like if <code>x_train</code> and <code>y_train</code> both have the shape <code>(samples, 10, 1)</code>:</p> <pre><code>import tensorflow as tf timesteps = 10 features = 1 BLSTM = tf.keras.Sequential() BLSTM.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(100,return_sequences=True, input_shape=(timesteps, features)))) BLSTM.add(tf.keras.layers.Dense(1)) BLSTM.compile(optimizer = 'adam', loss = 'mean_squared_error') BLSTM.build(input_shape=(1, timesteps, features)) BLSTM.summary() samples = 500 X_train_t = tf.random.normal((samples, timesteps, features)) y_train = tf.random.normal((samples, timesteps, features)) X_test_t = tf.random.normal((samples, timesteps, features)) y_test = tf.random.normal((samples, timesteps, features)) history = BLSTM.fit(X_train_t, y_train, epochs=1, batch_size=128) BLSTM.evaluate(X_test_t, y_test, batch_size=32) y_pred = BLSTM.predict(X_test_t, batch_size=32) print(y_pred.shape) </code></pre> <pre><code>Model: &quot;sequential_10&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= bidirectional_10 (Bidirecti (1, 10, 200) 81600 onal) dense_10 (Dense) (1, 10, 1) 201 ================================================================= Total params: 81,801 Trainable params: 81,801 Non-trainable params: 0 _________________________________________________________________ 4/4 [==============================] - 8s 16ms/step - loss: 0.9933 16/16 [==============================] - 2s 7ms/step - loss: 0.9771 (500, 10, 1) </code></pre> <p><code>X_test_t</code> and <code>y_test</code> also have to have the same shape.</p>
python|tensorflow|keras|lstm|bidirectional
0
6,556
71,252,199
Tricky distance matrix calculation and how to cleverly avoid out of bounds
<p>I have the layout of a warehouse with racks and aisles where items are picked from locations on the racks while traversing the aisles:</p> <p><a href="https://i.stack.imgur.com/5EPFv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EPFv.png" alt="" /></a></p> <p>I want to compute a distance matrix between each pair of nodes/locations in the following way. Let <code>x</code> be the across aisle coordinate and <code>z</code> be the coordinate along the aisle as shown in the figure above. If <code>x_1 = x_2</code> the two nodes are located on the same aisle and the distance is easily computed as <code>t_12 = z_2-z_1</code> where <code>t_12</code> is the distance between node 1 and 2. Otherwise, the distance between the two nodes can be calculated as follows:</p> <pre><code>t_12 = min( |z_1 - B| + |x_2 - x_1| + |z_2 - B|, |z_1 - M| + |x_2 - x_1| + |z_2 - M|, |z_1 - T| + |x_2 - x_1| + |z_2 - T|) </code></pre> <p>since there are 3 ways to go from node 1 to node 2 and the shortest is chosen. <code>T</code>, <code>M</code> and <code>B</code> are the z-coordinate of top cross aisle, middle cross aisle and the bottom cross aisle.</p> <p>This is what I've done so far:</p> <pre><code>import numpy as np x_coordinates = np.linspace(0, 320, 33).astype(int) z_coordinates = np.linspace(0, 30, 31).astype(int) print(&quot;x coordinates = &quot;) print(x_coordinates) print(&quot;z coordinates = &quot;) print(z_coordinates) B = 0 M = 15 T = 31 def dist_calc(x, z): mat = np.zeros((len(x), len(z))) for i in range(len(x_coordinates)): for j in range(len(z_coordinates)): mat[i][j] = min(np.abs(z[i] - B) + np.abs(x[j] - x[j + 1]) + np.abs(z[i + 1] - B), np.abs(z[i] - M) + np.abs(x[j] - x[j + 1]) + np.abs(z[i + 1] - M), np.abs(z[i] - T) + np.abs(x[j] - x[j + 1]) + np.abs(z[i + 1] - T)) return mat Distances = dist_calc(x_coordinates, z_coordinates) </code></pre> <p>I have two questions:</p> <ol> <li><p>I don't see how I should incorporate the case when <code>x_1 = x_2</code>. I think that the way I've defined the coordinates makes it impossible to specify the condition that two nodes are on the same aisle, i.e have the same <code>x</code>-coordinate.</p> </li> <li><p>Of course I understand that I will get an index out of bounds error since I'm trying to access element <code>i+1</code> and <code>j+1</code> in arrays of length <code>i</code> and <code>j</code> respectively. Is there a way to circumvent this problem? Do I need to append an extra zero to the <code>x</code> and <code>z</code> coordinate arrays in the beginning?</p> </li> </ol>
<p>In your comment, you stated that the last point is (330,31), which is inconsistent with your linspace. The biggest mistake is that you are iterating over the individual axes, whilst you should iterate over all combinations of the coordinates. I separated out the distance calculation and the matrix generation, because generating a 1088x1088 matrix is a bit expensive.</p> <pre><code>import numpy as np x_c = np.linspace(0, 330, 34).astype(int) z_c = np.linspace(0, 31, 32).astype(int) print(&quot;x coordinates = &quot;) print(x_c,len(x_c)) print(&quot;z coordinates = &quot;) print(z_c,len(z_c)) print() B = 0 M = 15 T = 31 def dist_calc(p1,p2,B,M,T): return min( abs(p1[1] - B) + abs(p2[0] - p1[0]) + abs(p2[1] - B) , abs(p1[1] - M) + abs(p2[0] - p1[0]) + abs(p2[1] - M) , abs(p1[1] - T) + abs(p2[0] - p1[0]) + abs(p2[1] - T) ) * (p1 != p2) def gen_mat(x,z,B,M,T): c = [(i,j) for i in x for j in z] mat = np.zeros((len(c), len(c))) for i,p1 in enumerate(c): for j,p2 in enumerate(c): mat[i,j] = dist_calc(p1,p2,B,M,T) return mat print('Distance between (200,31) and (150,2) should be 200-150=50;31-2=29;50+29=79') print() D = gen_mat(x_c,z_c,B,M,T) c = [(i,j) for i in x_c for j in z_c] print('By using a distance matrix:') print(D[c.index((200,31)),c.index((150,2))]) print() print('By using the raw distance function:') print(dist_calc((200,31),(150,2),B,M,T)) </code></pre> <p>Output:</p> <pre><code>x coordinates = [ 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 210 220 230 240 250 260 270 280 290 300 310 320 330] 34 z coordinates = [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31] 32 Distance between (200,31) and (150,2) should be 200-150=50;31-2=29;50+29=79 By using a distance matrix: 79.0 By using the raw distance function: 79 </code></pre>
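<p>If the full 1088x1088 matrix is needed, a broadcasting-based sketch of the same computation may be faster than the double loop (it mirrors <code>dist_calc</code> exactly, so same-aisle pairs are still routed via the nearest cross aisle and only identical points get distance 0):</p> <pre><code>import numpy as np

def gen_mat_vec(x, z, B, M, T):
    # same (x, z) ordering as the list comprehension in gen_mat
    c = np.array([(i, j) for i in x for j in z], dtype=float)
    dx = np.abs(c[:, None, 0] - c[None, :, 0])     # |x2 - x1| for every pair
    z1, z2 = c[:, None, 1], c[None, :, 1]
    cross = np.min([np.abs(z1 - a) + np.abs(z2 - a) for a in (B, M, T)], axis=0)
    mat = dx + cross
    np.fill_diagonal(mat, 0)                       # a point to itself
    return mat
</code></pre>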
python|numpy
1
6,557
71,226,303
Interpolating data for missing values pandas python
<p><a href="https://i.stack.imgur.com/OzCUQ.png" rel="nofollow noreferrer">enter image description here</a>[enter image description here][2]I am having trouble interpolating my missing values. I am using the following code to interpolate</p> <pre><code>df=pd.read_csv(filename, delimiter=',') #Interpolating the nan values df.set_index(df['Date'],inplace=True) df2=df.interpolate(method='time') Water=(df2['Water']) Oil=(df2['Oil']) Gas=(df2['Gas']) </code></pre> <p>Whenever I run my code I get the following message: &quot;time-weighted interpolation only works on Series or DataFrames with a DatetimeIndex&quot;</p> <p>My Data consist of several columns with a header. The first column is named Date and all the rows look similar to this 12/31/2009. I am new to python and time series in general. Any tips will help.</p> <p><a href="https://i.stack.imgur.com/OzCUQ.png" rel="nofollow noreferrer">Sample of CSV file</a></p>
<p>Try this, assuming the <strong>first</strong> column of your csv is the one with date strings:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv(filename, index_col=0, parse_dates=[0], infer_datetime_format=True) df2 = df.interpolate(method='time', limit_direction='both') </code></pre> <p>It theoretically should 1) convert your first column into actual <code>datetime</code> objects, and 2) set the index of the dataframe to that <code>datetime</code> column, all in one step. You can optionally include the <code>infer_datetime_format=True</code> argument. If your datetime format is a standard format, it can help speed up parsing by quite a bit.</p> <p>The <code>limit_direction='both'</code> should back fill any <code>NaN</code>s in the first row, but because you haven't provided a copy-paste-able sample of your data, I cannot confirm on my end.</p> <p>Reading <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">the documentation</a> can be incredibly helpful and can usually answer questions faster than you'll get answers from Stack Overflow!</p>
python|pandas|dataframe|interpolation|nan
0
6,558
52,283,402
H5 Hexadecimal data
<p>When I load a .h5 file into the spyder environment using h5py, I no longer see the hexadecimal data that was in the original file. </p> <p>Does Python convert the hex information to uint8 automatically?</p>
<p>HDF5 does not store hexadecimal data, only numbers and characters. The <a href="https://support.hdfgroup.org/HDF5/doc/RM/PredefDTypes.html" rel="nofollow noreferrer">documentation of HDF5</a> lists the supported datatypes.</p> <p>What you interpret as hexadecimal data is <em>very likely</em> integer data. You can have a look at the datatypes in your file by typing</p> <pre><code>h5dump -A filename.h5 </code></pre> <p>The <code>-A</code> flag means: list the attributes (i.e. the metadata). You can look at a part of the file with</p> <pre><code>h5dump -A -g name_of_a_group filename.h5 </code></pre>
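<p>If you prefer to check this from Python instead of the command line, a small sketch with h5py that prints each object in the file and its dtype (the filename is a placeholder):</p> <pre><code>import h5py

with h5py.File('filename.h5', 'r') as f:
    # groups have no dtype, so fall back to an empty string for them
    f.visititems(lambda name, obj: print(name, getattr(obj, 'dtype', '')))
</code></pre>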
python|python-2.7|numpy|hdf5|h5py
1
6,559
60,646,621
How to use tensorflow's ncf model to predict?
<p>Hi I'm new to tensorflow and neural networks. Trying to understand the <a href="https://github.com/tensorflow/models/tree/master/official/recommendation" rel="nofollow noreferrer">ncf recommendation model</a> in tensorflow's official models repo. </p> <p>My understanding is that you build a model with input layers and learning layers. Then you create batches of data to train the model, and then you use test data to evaluate the model. This is done in this <a href="https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py" rel="nofollow noreferrer">file</a>.</p> <p>However, I'm having trouble to understand the input layers. </p> <p>It shows in the code </p> <pre><code>user_input = tf.keras.layers.Input( shape=(1,), name=movielens.USER_COLUMN, dtype=tf.int32) </code></pre> <p>Which to my understanding you can input one parameter at a time.</p> <p>However I'm only able to use the following dummy data to call predict_on_batch</p> <pre><code>user_input = np.full(shape=(256,),fill_value=1, dtype=np.int32) item_input = np.full(shape=(256,),fill_value=1, dtype=np.int32) valid_pt_mask_input = np.full(shape=(256,),fill_value=True, dtype=np.bool) dup_mask_input = np.full(shape=(256,),fill_value=1, dtype=np.int32) label_input = np.full(shape=(256,),fill_value=True, dtype=np.bool) test_input_list = [user_input,item_input,valid_pt_mask_input,dup_mask_input,label_input] tf.print(keras_model.predict_on_batch(test_input_list)) </code></pre> <p>When I run the following code:</p> <pre><code> user_input = np.full(shape=(1,),fill_value=1, dtype=np.int32) item_input = np.full(shape=(1,),fill_value=1, dtype=np.int32) valid_pt_mask_input = np.full(shape=(1,),fill_value=True, dtype=np.bool) dup_mask_input = np.full(shape=(1,),fill_value=1, dtype=np.int32) label_input = np.full(shape=(1,),fill_value=True, dtype=np.bool) test_input_list = [user_input,item_input,valid_pt_mask_input,dup_mask_input,label_input] classes = _model.predict(test_input_list) tf.print(classes) </code></pre> <p>I got this error:</p> <pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1 values, but the requested shape requires a multiple of 256 [[{{node model_1/metric_layer/StatefulPartitionedCall/StatefulPartitionedCall/Reshape_1}}]] [Op:__inference_predict_function_2828] </code></pre> <p>Can somebody help me with how to use this model to predict with single inputs? And also why is item_id required with user_id when making the prediction? Shouldn't it be you provide a list of users the model returns a list of items? </p>
<p>If you're new to TensorFlow and deep learning, this recommendations project is probably not a good place to start. The code is not documented and the architecture might be a confusing to follow along.</p> <p>Anyway, to answer your questions, the model does not take single inputs to make predictions. Looking at the code, there are 5 inputs (user_id, item_id, duplicate_mask, valid_pt_mask, label) but this is really meant for training. If you just want to make predictions, you actually only need the user_id and item_id. This model is making predictions based on user_id and item_id interactions and this is why you need both. However, you can't directly do that unless you cut the unnecessary parts of the model off when making predictions. Below is the code on how to do that after your model object called <code>keras_model</code> is trained (I used tf-model-official version 2.5.0 and it worked fine):</p> <pre><code>from tensorflow.keras import Model import tensorflow as tf inputUserIds = keras_model.input['user_id'] inputItemIds = keras_model.input['item_id'] # Cut off the unnecessary parts of the model for predictions. # Specifically, we're removing the loss and metric layers in the architecture. # Note: we are not training a new model, just taking the parts of the model we need. outputs = keras_model.get_layer('rating').output newModel = Model(inputs=[inputUserIds, inputItemIds], outputs=outputs) ## Make predictions for user 1 with items ids 1, 2, 3, 4, 5 # Make a user. Each row will be user_id 1. The shape of this tensor is (5,1) userIds = tf.constant([1, 1, 1, 1, 1])[:, tf.newaxis] # Make a tensor of items. Each row will be different item ids. The shape of this tensor is (5,1) itemIds = tf.constant([1,2,3,4,5])[:, tf.newaxis] # Make preds. This predicts for user id 1 and items ids 1,2,3,4,5. preds = newModel.predict(x=[userIds, itemIds]) </code></pre> <p>So, if you make predictions, you want to create all item-user combinations and then sort the predictions in descending order, while keeping track of the indices, for each user. The top item for that user will be the models prediction for most likely to be interacted with for that user, the 2nd item will be models predictions for 2nd most likely to be interacted with, and so forth.</p> <p>There is a folder called &quot;summaries&quot; which gets created when running this ncf_keras_main.py file. If you point tensorboard at that folder, you can explore the model architecture under the graphs tab on the top left. It might help understand the code a little bit better. To run tensorboard, you open a terminal and type</p> <pre><code>tensorboard --logdir location_of_summaries_folder_here </code></pre> <p><a href="https://i.stack.imgur.com/cQ1Cq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cQ1Cq.png" alt="enter image description here" /></a></p>
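<p>For the ranking step described above, a small sketch (assuming <code>preds</code> from the snippet, with one score per candidate item):</p> <pre><code>import numpy as np

item_ids = np.array([1, 2, 3, 4, 5])
order = np.argsort(-preds.ravel())      # highest score first
ranked_items = item_ids[order]          # recommendation order for user 1
</code></pre>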
python|tensorflow|neural-network|recommendation-engine|tensorflow-model-garden
0
6,560
60,429,490
Dataframe groupby into one column
<p>I would like to know how can i do this simply :</p> <p>i have a <code>DataFrame</code> with <code>4</code> columns, i want to <code>group by</code> the <code>3</code> first columns, get the result of the 4 based on the 3 others and create a column with a <code>''.join</code> of the 3 columns name.</p> <p>An example will be easier :</p> <p><code>a. |b. |c. |d.<br> name1|name2|name3|result.<br> name4|name5|name6|result2.</code> </p> <p>I want to have a new dataframe that look like that : UPDATE :</p> <p><code> |name1-name2-name3|name4-name5-name6|. |result1 |result2 | |result1 |result2 | |result1 |result2 | |result1 |result2 | |result1 |result2 |</code> </p> <p>NB_CLIENTS is my result I group by periodicity and country and for each month i have a result.</p> <p>EXAMPLE :</p> <p><code>MONTH|PERIODICITY|NB_CLIENTS|COUNTRY 2019-05| monthly| 872| NL 2019-02| monthly| 361| IT 2019-02| monthly| 214| NL 2019-05| monthly| 737| IT</code></p> <p>Will become :</p> <p><code>MONTH|monthly-NL. |monthly-IT 2019-05|872. |737 2019-02|214. |361</code></p> <p>I tried this :</p> <pre><code>grouped = test.groupby([name1,name2,name3]).RESULT tmp = pd.DataFrame() for name_of_the_group, group in grouped: tmp[' '.join(name_of_the_group)] = group </code></pre> <p>but i get all <code>Nan</code> values, I guess it's about a copy or something , I need to <code>reset_index</code> maybe ? but where </p> <p>Thanks</p>
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a> with flatten columns by <code>f-string</code>s:</p> <pre><code>df = df.pivot_table(index='MONTH', columns=['PERIODICITY','COUNTRY'], values='NB_CLIENTS', aggfunc='sum') df.columns = df.columns.map(lambda x: f'{x[0]}-{x[1]}') df = df.reset_index() print (df) MONTH monthly-IT monthly-NL 0 2019-02 361 214 1 2019-05 737 872 </code></pre>
python|pandas|dataframe
0
6,561
72,674,054
lambda function with if statement to return mode
<p>I want to get the most repeated value of a column but I also want to add a condition that picks the 2nd highest value if the most repeated item is 'None'. For example:</p> <p>item_name = [apple, orange, orange, None, None, None] In this case I want the code to return &quot;orange&quot; as the most used item.</p> <p>I came up with this code but I get an invalid syntax error - any idea how I can get it fixed?</p> <pre><code>df['most_used_item']= df.groupby('user_id')['item_name'].transform(lambda x: x.mode().iat[1] if x =='None' else x: x.mode().iat[0]) </code></pre>
<p><code>mode</code> ignores null value by default:</p> <blockquote> <p><strong>dropna</strong> : bool, default True</p> <p>Don't consider counts of NaN/NaT.</p> </blockquote> <p>You can use:</p> <pre><code>df['most_used_item'] = (df.groupby('user_id')['item_name'] .transform(lambda x: x.mode()[0])) print(df) # Output user_id item_name most_used_item 0 A apple orange 1 A orange orange 2 A orange orange 3 A None orange 4 A None orange 5 A None orange </code></pre> <p>If <code>None</code> is a string, replace <code>'None'</code> by <code>NaN</code>:</p> <pre><code>df['most_used_item'] = (df.replace('None', np.nan) .groupby('user_id')['item_name'] .transform(lambda x: x.mode()[0])) # OR df.replace({'item_name': {'None': np.nan}})... </code></pre>
python|pandas|function|lambda|mode
0
6,562
59,702,785
what does dim=-1 or -2 mean in torch.sum()?
<p>Let me take a 2D matrix as an example:</p> <pre><code>mat = torch.arange(9).view(3, -1) tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) torch.sum(mat, dim=-2) tensor([ 9, 12, 15]) </code></pre> <p>I find the result of <code>torch.sum(mat, dim=-2)</code> is equal to <code>torch.sum(mat, dim=0)</code> and <code>dim=-1</code> is equal to <code>dim=1</code>. My question is how to understand the negative dimension here. What if the input matrix has 3 or more dimensions?</p>
<p>A tensor has multiple dimensions, ordered as in the following figure. There is a forward and backward indexing. Forward indexing uses positive integers, backward indexing uses negative integers.</p> <p>Example:</p> <p>-1 will be the last one, in our case it will be dim=2</p> <p>-2 will be dim=1</p> <p>-3 will be dim=0</p> <p><a href="https://i.stack.imgur.com/V3qfN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/V3qfN.png" alt="enter image description here" /></a></p>
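<p>You can check this directly: for an n-dimensional tensor, <code>dim=-k</code> is the same as <code>dim=n-k</code> (a small sketch for the 2D case from the question):</p> <pre><code>import torch

mat = torch.arange(9).view(3, -1)   # n = 2 dimensions
assert torch.equal(torch.sum(mat, dim=-1), torch.sum(mat, dim=1))  # per-row sums
assert torch.equal(torch.sum(mat, dim=-2), torch.sum(mat, dim=0))  # per-column sums
</code></pre>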
python|pytorch
22
6,563
59,872,711
How can I use two pandas dataframes to create a new dataframe with specific rows from one dataframe?
<p>I am currently working with two sets of dataframes. Each set contains 60 dataframes. They are sorted to line up for mapping (eg. set1 df1 corresponds with set2 df1). First set is about 27 rows x 2 columns; second set is over 25000 rows x 8 columns. I want to create a new dataframe that contains rows from the 2nd dataframe according to the values in the 1st dataframe. </p> <p>For simplicity I've created a shorten example of the first df of each set to illustrate. I want to use the 797 to take the first 796 rows (indexes 0 - 795) from df2 and add them to a new dataframe, and then rows 796 to 930 and filter them to a 2nd new dataframe. Any suggestions how I could that do for all 60 pairs of dataframes? </p> <pre><code> 0 1 0 797.0 930.0 1 1650.0 1760.0 2 2500.0 2570.0 3 3250.0 3333.0 4 3897.0 3967.0 0 -1 -2 -1 -3 -2 -1 2 0 1 0 0 0 -2 0 -1 0 0 2 -3 0 0 -1 -2 -1 -1 -1 3 0 1 -1 -1 -3 -2 -1 0 4 0 -3 -3 0 0 0 -4 -2 </code></pre> <p>edit to add:</p> <pre><code>import pandas as pd df1 = pd.DataFrame([(3, 5), (8, 11)]) df2 = pd.DataFrame([(1, 0, 2, 3, 1, 0, 1, 2), (2, 0.5, 1, 3, 1, 0, 1, 2), (3, 0, 2, 3, 1, 0, 1, 2), (4, 0, 2, 3, 1, 0, 1, 2), (5, 0, 2, 3, 1, 0, 1, 2), (6, 0, 2, 3, 1, 0, 1, 2), (7, 0, 2, 3, 1, 0, 1, 2), (8, 0, 2, 3, 1, 0, 1, 2), (9, 0, 2, 3, 1, 0, 1, 2), (10, 0, 2, 3, 1, 0, 1, 2), (11, 0, 2, 3, 1, 0, 1, 2), (12, 0, 2, 3, 1, 0, 1, 2), (13, 0, 2, 3, 1, 0, 1, 2), (14, 0, 0, 1, 2, 5, 2, 3), (15, 0.5, 1, 3, 1.5, 2, 3, 1)]) #expected output will be two dataframes containing rows from df2 output1 = pd.DataFrame([(1, 0, 2, 3, 1, 0, 1, 2), (2, 0.5, 1, 3, 1, 0, 1, 2), (6, 0, 2, 3, 1, 0, 1, 2), (7, 0, 2, 3, 1, 0, 1, 2), (12, 0, 2, 3, 1, 0, 1, 2), (13, 0, 2, 3, 1, 0, 1, 2), (14, 0, 0, 1, 2, 5, 2, 3), (15, 0.5, 1, 3, 1.5, 2, 3, 1)]) output2 = pd.DataFrame([(3, 0, 2, 3, 1, 0, 1, 2), (4, 0, 2, 3, 1, 0, 1, 2), (5, 0, 2, 3, 1, 0, 1, 2), (8, 0, 2, 3, 1, 0, 1, 2), (9, 0, 2, 3, 1, 0, 1, 2), (10, 0, 2, 3, 1, 0, 1, 2), (11, 0, 2, 3, 1, 0, 1, 2)]) </code></pre>
<p>You can use a list comprehension to flatten the ranges from the first DataFrame into row indices:</p> <pre><code>rng = [x for a, b in df1.values for x in range(int(a)-1, int(b))] print (rng) [2, 3, 4, 7, 8, 9, 10] </code></pre> <p>And then filter by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.difference.html" rel="nofollow noreferrer"><code>Index.difference</code></a>:</p> <pre><code>output1 = df2.iloc[df2.index.difference(rng)] print (output1) 0 1 2 3 4 5 6 7 0 1 0.0 2 3 1.0 0 1 2 1 2 0.5 1 3 1.0 0 1 2 5 6 0.0 2 3 1.0 0 1 2 6 7 0.0 2 3 1.0 0 1 2 11 12 0.0 2 3 1.0 0 1 2 12 13 0.0 2 3 1.0 0 1 2 13 14 0.0 0 1 2.0 5 2 3 14 15 0.5 1 3 1.5 2 3 1 output2 = df2.iloc[rng] print (output2) 0 1 2 3 4 5 6 7 2 3 0.0 2 3 1.0 0 1 2 3 4 0.0 2 3 1.0 0 1 2 4 5 0.0 2 3 1.0 0 1 2 7 8 0.0 2 3 1.0 0 1 2 8 9 0.0 2 3 1.0 0 1 2 9 10 0.0 2 3 1.0 0 1 2 10 11 0.0 2 3 1.0 0 1 2 </code></pre> <p>EDIT:</p> <pre><code>#list of DataFrames L1 = [df11, df21, df31] L2 = [df12, df22, df32] #if necessary output lists out1 = [] out2 = [] #loop with zipped lists and apply solution for df1, df2 in zip(L1, L2): print (df1) print (df2) rng = [x for a, b in df1.values for x in range(int(a)-1, int(b))] output1 = df2.iloc[df2.index.difference(rng)] output2 = df2.iloc[rng] #if necessary append output df to lists out1.append(output1) out2.append(output2) </code></pre>
python|pandas|dataframe
1
6,564
59,799,890
utf-8 and skipinitialspace within a line
<p>I'm using pandas to read a dataframe from a CSV file. When reading the CSV file I also have to include 'utf-8' in order not to get the following error:</p> <pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x84 in position 19: invalid start byte </code></pre> <p>So I go and modify the code like this:</p> <pre><code>df = pd.read_csv('file.csv', 'utf-8' ) </code></pre> <p>This gets rid of the error. Is there a way to not get this error without including 'utf-8'?</p> <p>I have to include <em>skipinitialspace=True</em> in my code and I'm not sure how to include it in the line. The following code gives me an error:</p> <pre><code>df = pd.read_csv('file.csv', skipinitialspace=True, usecols=fields, 'utf-8') </code></pre>
<p>I had a similar problem when working with lots of different <code>txt</code> files. I developed a small program to read the start of each file and detect the encoding using chardet.</p> <p>It's better to use more data, as suggested by anky_91, so read 1000 bytes or more.</p> <pre><code>from pathlib import Path import chardet files = [f for f in Path(your_path).glob('*.csv')] # change ext as you wish encodings = {} for file in files: with open(file, 'rb') as f: data = f.read(1000) encoding = chardet.detect(data).get("encoding") encodings[f'{file}'] = encoding </code></pre> <p>This will give you a dictionary of file paths and encodings that you can pass into <code>read_csv</code>.</p>
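<p>For example, together with any other options such as <code>skipinitialspace</code> (a sketch using the <code>encodings</code> dict built above):</p> <pre><code>import pandas as pd

dfs = {path: pd.read_csv(path, encoding=enc, skipinitialspace=True)
       for path, enc in encodings.items()}
</code></pre>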
python|pandas|utf-8
1
6,565
32,235,229
Pandas: Drop leading rows with NaN threshold in dataframe
<p>I have a Pandas Dataframe with intermittent NaN values:</p> <pre><code>Index Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8 1991-12-31 100.000 100.000 NaN NaN NaN NaN NaN NaN 1992-01-31 98.300 101.530 NaN NaN NaN NaN NaN NaN 1992-02-29 97.602 100.230 98.713 NaN NaN NaN NaN NaN 1992-03-31 93.473 NaN 102.060 NaN NaN NaN NaN NaN 1992-04-30 94.529 102.205 107.755 NaN NaN NaN NaN NaN </code></pre> <p>I'd like to drop leading rows with 6 NaNs or more. Specifically, in this case, I'd only like to drop row with Indices '1991-12-31' and '1992-01-31'.</p> <p>Using df.dropna(thresh = 6) doesn't work because it drops row '1992-03-31' as well.</p> <p>One solution would be to count the NaNs in each row and stop at the first row when the number of NaNs is less than 6.</p> <p>Any faster/cleaner solution?</p> <p>EDIT: Edited for clarity and @Alexander's comment</p>
<p>You just need <code>df[(df.iloc[0].isnull().sum()&gt;5):]</code></p> <p>When the 1st row has more than 5 <code>nan</code>, <code>df.iloc[0].isnull().sum()&gt;5</code> is <code>True</code> and <code>df[(df.iloc[0].isnull().sum()&gt;5):]</code> is simply <code>df[1:]</code>: the 1st row is omitted.</p> <p>To address @DSM's point, we may consider:</p> <pre><code>df.iloc[np.argwhere(df.isnull().sum(1)&lt;=5).ravel()[0]:] </code></pre> <p>Basically this slices the DataFrame from the first row that has 5 or fewer <code>nan</code>s onwards. This way, if the 1st row has 6, the 2nd row has 7 and the 3rd has 8 <code>nan</code>s, the resultant dataframe will start from the 4th row. If the 1st row has only 1 <code>nan</code>, the result will be <code>df[0:]</code>, no rows skipped.</p>
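<p>An equivalent way to write this with a boolean mask (a sketch; it assumes at least one row has 5 or fewer <code>nan</code>s):</p> <pre><code>mask = df.isnull().sum(axis=1) &lt;= 5     # rows with at most 5 NaNs
df_trimmed = df.loc[mask.idxmax():]     # slice from the first such row onward
</code></pre>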
python|numpy|pandas|dataframe
3
6,566
40,390,068
Python pandas: Find values in one column that fall within range in another column
<p>I have a pandas dataframe that contains payroll information from the years 2013 through 2016. Each row describes the amount of money the employee earned in a single year. It looks like this:</p> <p><strong>Name, Year, Amount</strong></p> <p>"Bill Smith", "2014", "$20,000"</p> <p>"John Jones", "2014", "$10,000"</p> <p>"Bill Smith", "2015", "$21,000"</p> <p>"John Jones", "2015", "$12,000"</p> <p>"Sam Stone", "2015", "$15,000"</p> <p>I need to filter the dataframe to select workers who were hired after 2014 (for example, Sam Stone, but not Bill Smith or John Jones). Any suggestions? My guess is to use groupby() and then try to use conditions to filter the list.</p>
<p>This should work:</p> <pre><code>workers = df[df.Year&lt;2015].Name.unique() new_workers_data = df[~df.Name.isin(workers)] </code></pre> <p>If <code>Year</code> is stored as a string, as in the sample data, convert it to an integer first with <code>df['Year'] = df['Year'].astype(int)</code> so the comparison works.</p>
python|pandas
0
6,567
40,394,874
How to find common elements inside a list
<p>I have a list l1 that looks like [1,2,1,0,1,1,0,3..]. I want to find, for each element, the indexes of the elements which have the same value as that element.</p> <p>For example, for the first value in the list, 1, it should list out all indexes where 1 is present in the list, and it should repeat the same for every element in the list. I could write a function to do that by iterating through the list, but wanted to check if there is any predefined function.</p> <p>I am getting the list from Pandas dataframe columns, so it would be good to know if the Series/DataFrame library offers any such functions.</p>
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>numpy.unique</code></a>, which can return the inverse too. This can be used to reconstruct the indices using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p> <pre><code>In [49]: a = [1,2,1,0,1,1,0,3,8,10,6,7] In [50]: uniq, inv = numpy.unique(a, return_inverse=True) In [51]: r = [(uniq[i], numpy.where(inv == i)[0]) for i in range(uniq.size)] In [52]: print(r) [(0, array([3, 6])), (1, array([0, 2, 4, 5])), (2, array([1])), (3, array([7])), (6, array([10])), (7, array([11])), (8, array([8])), (10, array([9]))] </code></pre>
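<p>Since the data comes from a pandas column, the same lookup can be done directly on a Series as well (a small sketch; <code>groups</code> maps each distinct value to the positions where it occurs):</p> <pre><code>import pandas as pd

s = pd.Series([1, 2, 1, 0, 1, 1, 0, 3, 8, 10, 6, 7])
positions = s.groupby(s).groups
print(positions[1])   # positions of the value 1: indices 0, 2, 4, 5
</code></pre>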
python|pandas|dataframe
1
6,568
18,400,955
Python and Pandas - column that "count direction" and show "average until now"
<p>I have a DataFrame that contain price (of a stock) at the end of specific minute.</p> <p><em>DF columns are:</em></p> <ul> <li>minute_id: 0-1440, 0 for midnight, 480 for 8:00 AM (60*8) </li> <li>price:stock price at the end of the minute </li> <li>change: price change from prev. minute </li> <li>direction: direction of the change</li> </ul> <blockquote> <pre><code>import numpy.random as nprnd from pandas import DataFrame n = 10 # Number of samples # Starting at 8:00 AM, set some (n) random prices between 4-5 df = DataFrame({'minute_id': range(480,480+n), 'price':(5-4) * nprnd.random(n) + 4 }) df['change'] = df.price - df.price.shift(1) df['direction'] = df.change.map(lambda x: 0 if x == 0 else x/abs(x)) df = df.dropna() df </code></pre> </blockquote> <p>I want to add few columns to this DF.</p> <ol> <li>Average price until now for the first row, it will have the price. for the 2nd row, it will have the average price of the 2 first rows for the n-th row, it will have the average price of the first n rows</li> <li>Sum of the 'change' column while in the current direction (Will be zeroed every time 'direction' switched)</li> <li>count in the current direction, until now For every row, what is the number of this row in the current direction run.</li> <li>Average price of the last 4 rows</li> </ol> <p>I can create all of those columns by iterating through the DF row at a time. But am sure there is a more (pythonic|pandastic) way for doing it.</p> <p>I'm also not sure how to handle missing data (If i have gaps within the minute_id)</p> <hr> <p>EDIT:</p> <p>out of the 4 columns I wanted to add, 1 and 4 are easy...</p> <p>C4: this is just a rolling mean with a period of 4</p> <p>C1: rolling mean can get another parameter for the <strong>minimum period</strong>.</p> <p>setting it to 1 and setting the windows size to the length of the df will give a running mean for every row in the set.</p> <blockquote> <p>df['rolling_avg'] = pd.rolling_mean(df.price, n, 1) </p> </blockquote> <p>For the other 2 columns, I'm still trying to find the best way to get it.</p>
<p>OK, After a lot of "playing around" I've got something that works for me.</p> <p>It might be done in a little more "Pandastic" way, but this is a reasonable way to get it done.</p> <p>I want to thanks <em>Andy Hayden</em>, <em>Jeff</em> and <em>Phillip Cloud</em> for pointing out to the "10 minutes to pandas" It didn't contain the direct answers, but was very helpful. Also, <em>Andy Hayden</em> send me to create rolling mean, which helped me much as a direction.</p> <hr> <p><strong>So lets do it column by column</strong></p> <ul> <li><p>Adding col 1: Average price until now</p> <pre><code># Rolling avg, windows size is the size of the entire DataFrame, with minimum of 1 df['rolling_avg'] = pd.rolling_mean(df.price, n, 1) </code></pre></li> <li><p>Adding col 4: Avarage price of the last 4 rows</p> <pre><code>df['RA_wnd_4'] = pd.rolling_mean(df.price, 4, 1) </code></pre></li> <li><p>Adding col 2: CumSum() of the 'change' column while in the current "blcok" (direction)</p> <pre><code># Adding Helper column that shows when direction have been changed df['dir_change'] = (df.direction.shift(1) != df.direction).astype(int) # Identify the DF "blocks" for every direction change df['block'] = df.dir_change.cumsum() # Split the DF based on those bolcks grouped = df.groupby('block') # Add Function that will cumsum() for a block, and call it def f1(group): return DataFrame({'rolling_count' : group.cumsum()}) df['rolling_count'] = grouped.change.apply(f1) </code></pre></li> <li><p>Adding col 3: Row number in the current "block" (Direction)</p> <pre><code>df['one'] = 1 df['rolling_count'] = grouped.one.apply(f1) df = df.drop('one', axis=1) </code></pre></li> </ul> <hr> <p><strong>The full code:</strong></p> <pre><code>import numpy.random as nprnd from pandas import DataFrame import pandas as pd n = 10 # Number of samples # Starting at 8:00 AM, set some (n) random prices between 4-5 df = DataFrame({'minute_id': range(480,480+n), 'price':(5-4) * nprnd.random(n) + 4 }) df['change'] = df.price - df.price.shift(1) df['direction'] = df.change.map(lambda x: 0 if x == 0 else x/abs(x)) df = df.dropna() #------------------------------------------ # Col 1, rolling Avg over the entire DF df['rolling_avg'] = pd.rolling_mean(df.price, n, 1) #------------------------------------------ # Col 4, rolling Avg windows size of 4 df['RA_wnd_4'] = pd.rolling_mean(df.price, 4, 1) #------------------------------------------ # Helper code for cols 2, 3 # Adding Helper column that shows when direction have been changed df['dir_change'] = (df.direction.shift(1) != df.direction).astype(int) # Identify the DF "blocks" for every direction change df['block'] = df.dir_change.cumsum() # Split the DF based on those bolcks grouped = df.groupby('block') # Add Function that will cumsum() for a block, and call it def f1(group): return DataFrame({'rolling_count' : group.cumsum()}) df['one'] = 1 #------------------------------------------ # Col 2, CumSum() of the 'change' column while in the current "blcok" (direction) df['rolling_count'] = grouped.change.apply(f1) #------------------------------------------ # Col 3, Count in the current "block" (Direction) df['rolling_count'] = grouped.one.apply(f1) df = df.drop('one', axis=1) print df </code></pre> <hr> <p>Output:</p> <pre><code> minute_id price change direction rolling_avg RA_wnd_4 dir_change block rolling_count 1 481 4.771701 0.474349 1 4.771701 4.771701 1 1 1 2 482 4.300078 -0.471623 -1 4.535889 4.535889 1 2 1 3 483 4.946744 0.646666 1 4.672841 4.672841 1 3 1 4 484 4.529403 -0.417340 -1 
4.636981 4.636981 1 4 1 5 485 4.434598 -0.094805 -1 4.596505 4.552706 0 4 2 6 486 4.171169 -0.263429 -1 4.525616 4.520479 0 4 3 7 487 4.416980 0.245810 1 4.510096 4.388038 1 5 1 8 488 4.727078 0.310098 1 4.537219 4.437456 0 5 2 9 489 4.049097 -0.677981 -1 4.482983 4.341081 1 6 1 </code></pre>
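<p>In recent pandas versions <code>pd.rolling_mean</code> has been removed; the two rolling averages above can be written with the <code>.rolling()</code> accessor instead (a sketch with the same window sizes):</p> <pre><code>df['rolling_avg'] = df.price.rolling(window=n, min_periods=1).mean()
df['RA_wnd_4'] = df.price.rolling(window=4, min_periods=1).mean()
</code></pre>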
python|pandas
4
6,569
62,014,890
how to count positive and negative numbers of a column after applying groupby in pandas
<p>have the following dataframe:</p> <pre><code> token name ltp change 0 12345.0 abc 2.0 NaN 1 12345.0 abc 5.0 1.500000 2 12345.0 abc 3.0 -0.400000 3 12345.0 abc 9.0 2.000000 4 12345.0 abc 5.0 -0.444444 5 12345.0 abc 16.0 2.200000 6 6789.0 xyz 1.0 NaN 7 6789.0 xyz 5.0 4.000000 8 6789.0 xyz 3.0 -0.400000 9 6789.0 xyz 13.0 3.333333 10 6789.0 xyz 9.0 -0.307692 11 6789.0 xyz 20.0 1.222222 </code></pre> <p>I need to count of positive and negative number for each category of the name column. in above example </p> <pre><code>abc:pos_count: 3 abc:neg_count:2 xyz:pos_count:2 xyz:neg_count:2 </code></pre> <hr> <pre><code>count=df.groupby('name')['change'].count() count </code></pre> <p>however, this gives me only the total count by group but not the positive &amp; negative count separately.</p>
<p>Use:</p> <pre><code>g = df.groupby('name')['change'] counts = g.agg( pos_count=lambda s: s.gt(0).sum(), neg_count=lambda s: s.lt(0).sum(), net_count=lambda s: s.gt(0).sum()- s.lt(0).sum()).astype(int) </code></pre> <p>Result:</p> <pre><code># print(counts) pos_count neg_count net_count name abc 3 2 1 xyz 3 2 1 </code></pre>
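<p>If you only need one row per name rather than a column repeated on every row, a crosstab on the sign of <code>change</code> is a compact alternative (a sketch; NaN changes are dropped automatically):</p> <pre><code>import numpy as np

counts = pd.crosstab(df['name'], np.sign(df['change']).rename('sign'))
# columns -1.0 / 1.0 (and 0.0, if present) hold the counts per name
counts['net_count'] = counts.get(1.0, 0) - counts.get(-1.0, 0)
</code></pre>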
python|pandas
3
6,570
61,989,827
Is there a way to use apply() to create two columns in pandas dataframe?
<p>I have a function returning a tuple of values, as an example:</p> <pre><code>def dumb_func(number): return number+1,number-1 </code></pre> <p>I'd like to apply it to a pandas DataFrame</p> <pre><code>df=pd.DataFrame({'numbers':[1,2,3,4,5,6,7]}) test=df['numbers'].apply(dumb_func) </code></pre> <p>The result is that <code>test</code> is a pandas series containing tuples. Is there a way to use the variable <code>test</code> or to replace it to assign the results of the function to two distinct columns <code>'number_plus_one'</code> and <code>'number_minus_one'</code> of the original DataFrame?</p>
<pre><code>df[['number_plus_one', 'number_minus_one']] = pd.DataFrame(zip(*df['numbers'].apply(dumb_func))).transpose() </code></pre> <p>To understand, try taking it apart piece by piece. Have a look at <code>zip(*df['numbers'].apply(dumb_func))</code> in isolation (you'll need to convert it to a list). You'll see how it unpacks the tuples one by one and creates two separate lists out of them. Then have a look what happens when you create a dataframe out of it - you'll see why the <code>transpose</code> is necessary. For more on zip, see here : docs.python.org/3.8/library/functions.html#zip</p>
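<p>A slightly different sketch of the same idea (assuming the <code>df</code> and <code>dumb_func</code> from the question): build a two-column frame from the Series of tuples and keep the original index, which avoids the transpose step:</p> <pre><code>res = df['numbers'].apply(dumb_func)          # Series of (n+1, n-1) tuples
df[['number_plus_one', 'number_minus_one']] = pd.DataFrame(res.tolist(), index=df.index)
</code></pre>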
python|pandas|apply
1
6,571
61,688,764
Keras CNN: Incompatible shapes [batch_size*2,1] vs. [batch_size,1] with any batch_size > 1
<p>I am Fitting a Siamese CNN with the following structure: </p> <pre><code>def get_siamese_model(input_shape): """ Model architecture """ # Define the tensors for the three input images A_input = Input(input_shape) B_input = Input(input_shape) C_input = Input(input_shape) # Convolutional Neural Network #Initialzers initializer = 'random_uniform' initializer0 = 'zeros' model = Sequential() model.add(Conv2D(64, (10,10), activation='relu', input_shape=input_shape, kernel_initializer=initializer , kernel_regularizer=l2(2e-4))) model.add(MaxPooling2D()) model.add(Conv2D(128, (7,7), activation='relu', kernel_initializer=initializer , bias_initializer=initializer0, kernel_regularizer=l2(2e-4))) model.add(MaxPooling2D()) model.add(Conv2D(128, (4,4), activation='relu', kernel_initializer=initializer , bias_initializer=initializer0, kernel_regularizer=l2(2e-4))) print("C3 shape: ", model.output_shape) model.add(MaxPooling2D()) print("P3 shape: ", model.output_shape) model.add(Conv2D(256, (4,4), activation='relu', kernel_initializer=initializer , bias_initializer=initializer0, kernel_regularizer=l2(2e-4))) model.add(Flatten()) model.add(Dense(4096, activation='sigmoid', kernel_regularizer=l2(1e-3), kernel_initializer=initializer, bias_initializer=initializer0)) # Generate the encodings (feature vectors) for the three images encoded_A = model(A_input) encoded_B = model(B_input) encoded_C = model(C_input) #Custom Layer for L1-norm L1_layer = Lambda(lambda tensors: K.sum(K.abs(tensors[0] - tensors[1]), axis=1,keepdims=True)) L_layerAB = L1_layer([encoded_A, encoded_B]) L2_layer = Lambda(lambda tensors: K.sum(K.abs(tensors[0] - tensors[1]), axis=1,keepdims=True)) L_layerAC = L2_layer([encoded_A, encoded_C]) merge6 = concatenate([L_layerAB, L_layerAC], axis = 0) prediction = Dense(1,activation='sigmoid')(merge6) siamese_net = Model(inputs=[A_input,B_input, C_input],outputs= prediction) # return the model return siamese_net </code></pre> <p>The training data is are triplets of pictures in array form with following dimensions: (128,128,3). And the target data is a label (0,1). 
</p> <p>Then we fit the model: </p> <pre><code>model = siam.get_siamese_model((128,128,3)) model.fit([tripletA,tripletB, tripletC], targets , epochs=2, verbose=1, batch_size = 1) </code></pre> <p><strong>This works for batch_size = 1 but anything over batchsize >1 produces the following error:</strong> </p> <pre><code>Epoch 1/5 Traceback (most recent call last): File "&lt;ipython-input-147-8959bad9406a&gt;", line 2, in &lt;module&gt; batch_size = 2) File "C:\Users\valan\Anaconda3\lib\site-packages\keras\engine\training.py", line 1239, in fit validation_freq=validation_freq) File "C:\Users\valan\Anaconda3\lib\site-packages\keras\engine\training_arrays.py", line 196, in fit_loop outs = fit_function(ins_batch) File "C:\Users\valan\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3727, in _call_ outputs = self._graph_fn(*converted_inputs) File "C:\Users\valan\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1551, in _call_ return self._call_impl(args, kwargs) File "C:\Users\valan\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1591, in _call_impl return self._call_flat(args, self.captured_inputs, cancellation_manager) File "C:\Users\valan\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "C:\Users\valan\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call ctx=ctx) File "C:\Users\valan\Anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute six.raise_from(core._status_to_exception(e.code, message), None) File "&lt;string&gt;", line 3, in raise_from InvalidArgumentError: Incompatible shapes: [4,1] vs. [2,1] [[node loss_16/dense_47_loss/binary_crossentropy/logistic_loss/mul (defined at C:\Users\valan\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_33258] </code></pre> <p><strong>Does anybody know where the Problem is with batch_size > 1?</strong></p> <p><strong>EDIT1:</strong> </p> <p><strong>We found out that the following lines caused the error</strong>: </p> <pre><code> L1_layer = Lambda(lambda tensors: K.sum(K.abs(tensors[0] - tensors[1]), axis=1,keepdims=True)) L_layerAB = L1_layer([encoded_A, encoded_B]) L2_layer = Lambda(lambda tensors: K.sum(K.abs(tensors[0] - tensors[1]), axis=1,keepdims=True)) L_layerAC = L2_layer([encoded_A, encoded_C]) </code></pre> <p>Removing these lines and just using sigmoid on encoded A and thus, making the model simpler makes it work for batchsizes >1 .</p> <p><strong>But does anybody know how to re-add those customized layers properly?</strong></p>
<p>Mentioning the solution in this (Answer) section even though it is present in the Comments section, for the benefit of the community.</p> <p>For the above code, with <code>batch_size &gt; 1</code>, it is resulting in error, </p> <pre><code>InvalidArgumentError: Incompatible shapes: [4,1] vs. [2,1] [[node loss_16/dense_47_loss/binary_crossentropy/logistic_loss/mul (defined at C:\Users\valan\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_33258] </code></pre> <p>Changing the code from </p> <pre><code>merge6 = concatenate([L_layerAB, L_layerAC], axis = 0) </code></pre> <p>to </p> <pre><code>merge6 = Concatenate()([L_layerAB, L_layerAC]) </code></pre> <p>has resolved the error.</p>
python-3.x|tensorflow|image-processing|keras|conv-neural-network
0
6,572
58,091,879
What's the difference between frombuffer and fromiter in numpy? Why and when to use these?
<p>frombuffer and fromiter are both used for numpy array creation. But what is the difference between these functions, and why/when should each be used?</p>
<p><strong>frombuffer -:</strong> this is used to <strong>interpret</strong> a buffer as a 1-dimensional array.</p> <p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html" rel="nofollow noreferrer">Full explanation</a></p> <p>e.g.:</p> <pre><code>&gt;&gt;&gt; s = b'hello world' &gt;&gt;&gt; np.frombuffer(s, dtype='S1', count=5, offset=6) output -: array([b'w', b'o', b'r', b'l', b'd'], dtype='|S1') &gt;&gt;&gt; np.frombuffer(b'\x01\x02', dtype=np.uint8) output -: array([1, 2], dtype=uint8) </code></pre> <p><strong>fromiter -:</strong> this is used to <strong>create</strong> a new 1-dimensional array from an iterable object.</p> <p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromiter.html" rel="nofollow noreferrer">Full explanation</a></p> <p>e.g.:</p> <pre><code>&gt;&gt;&gt; iterable = (x*x for x in range(5)) &gt;&gt;&gt; np.fromiter(iterable, float) output -: array([ 0., 1., 4., 9., 16.]) </code></pre>
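<p>A small sketch of the practical difference: <code>frombuffer</code> creates a <em>view</em> on existing memory (no copy), while <code>fromiter</code> <em>consumes</em> an iterator and builds a brand new array:</p> <pre><code>import numpy as np

buf = bytearray(b'\x01\x02\x03\x04')
view = np.frombuffer(buf, dtype=np.uint8)                 # shares memory with buf
copy = np.fromiter(iter([1, 2, 3, 4]), dtype=np.uint8)    # new, independent array

buf[0] = 9
print(view)   # [9 2 3 4]  reflects the change to the underlying buffer
print(copy)   # [1 2 3 4]  unaffected
</code></pre>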
numpy|numpy-ndarray
2
6,573
58,114,048
How to import tensorflow in javascript? Import in file, served by local http-server
<p>I'm following this tutorial: <a href="https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/index.html#2" rel="noreferrer">https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/index.html#2</a></p> <p>I have set up a local HTTP server, and that works and the app is running. However, when I try to execute step 3 (Load the data), I get the following error when loading the data:</p> <pre><code>Uncaught TypeError: Failed to resolve module specifier "@tensorflow/tfjs". Relative references must start with either "/", "./", or "../". </code></pre> <p>If I comment out the import statement:</p> <pre><code>import * as tf from '@tensorflow/tfjs'; </code></pre> <p>the page actually loads and shows the sidebar with 16 handwritten digits.</p> <ol> <li>Does this mean that TensorFlow is not loaded?</li> <li><p>Is TensorFlow loaded, and I do not need this import statement?</p></li> <li><p>Or, maybe most important, why does the import not work?</p></li> </ol>
<p>You don't need to import the tfjs libraries if they are loaded as modules.</p> <p>In the HTML file, put:</p> <pre><code>&lt;html&gt; &lt;meta content="text/html;charset=utf-8" http-equiv="Content-Type"&gt; &lt;head&gt; ... &lt;script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.5.2"&gt;&lt;/script&gt; &lt;script type="module" src="YOUR_CODE.js"&gt;&lt;/script&gt; ... &lt;/head&gt; &lt;body&gt; ... &lt;/body&gt; &lt;/html&gt; </code></pre> <p>And in YOUR_CODE.js, simply put:</p> <pre><code>async function testModel() { const model = await tf.loadGraphModel(...); ... model.execute(pic); ... } testModel(); </code></pre>
javascript|node.js|tensorflow|tensorflow.js
2
6,574
58,072,185
Pytorch:Apply cross entropy loss with custom weight map
<p>I am solving multi-class segmentation problem using u-net architecture in pytorch. As specified in <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-NET</a> paper, I am trying to implement custom weight maps to counter class imbalances.</p> <p>Below is the opertion which I want to apply - <a href="https://i.stack.imgur.com/Erp6K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Erp6K.png" alt="image"></a></p> <p>Also, I reduced the <code>batch_size=1</code> so that I can remove that dimension while passing it to <code>precompute_to_masks</code> function. I tried the below approach-</p> <pre><code>def precompute_for_image(masks): masks = masks.cpu() cls = masks.unique() res = torch.stack([torch.where(masks==cls_val, torch.tensor(1), torch.tensor(0)) for cls_val in cls]) return res def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path): ################### # train the model # ################### model.train() for batch_idx, (data, target) in enumerate(final_train_loader): # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() optimizer.zero_grad() output = model(data) temp_target = precompute_for_image(target) w = weight_map(temp_target) loss = criterion(output,target) loss = w*loss loss.backward() optimizer.step() train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss)) return model </code></pre> <p>where weight_map is the function to calculate weight mask which I got from <a href="https://jaidevd.github.io/posts/weighted-loss-functions-for-instance-segmentation/" rel="nofollow noreferrer">here</a> The issue, I am facing is I am getting <code>memory error</code> when I apply the following method. <br>I am using 61gb RAM and Tesla V100 GPU. I really think I am applying it in incorrect way. How to do it? <br>I am omitting the non-essential details from the training loop. Below is my <code>weight_map</code> function:</p> <pre><code>from skimage.segmentation import find_boundaries w0 = 10 sigma = 5 def make_weight_map(masks): """ Generate the weight maps as specified in the UNet paper for a set of binary masks. Parameters ---------- masks: array-like A 3D array of shape (n_masks, image_height, image_width), where each slice of the matrix along the 0th axis represents one binary mask. 
Returns ------- array-like A 2D array of shape (image_height, image_width) """ nrows, ncols = masks.shape[1:] masks = (masks &gt; 0).astype(int) distMap = np.zeros((nrows * ncols, masks.shape[0])) X1, Y1 = np.meshgrid(np.arange(nrows), np.arange(ncols)) X1, Y1 = np.c_[X1.ravel(), Y1.ravel()].T for i, mask in enumerate(masks): # find the boundary of each mask, # compute the distance of each pixel from this boundary bounds = find_boundaries(mask, mode='inner') X2, Y2 = np.nonzero(bounds) xSum = (X2.reshape(-1, 1) - X1.reshape(1, -1)) ** 2 ySum = (Y2.reshape(-1, 1) - Y1.reshape(1, -1)) ** 2 distMap[:, i] = np.sqrt(xSum + ySum).min(axis=0) ix = np.arange(distMap.shape[0]) if distMap.shape[1] == 1: d1 = distMap.ravel() border_loss_map = w0 * np.exp((-1 * (d1) ** 2) / (2 * (sigma ** 2))) else: if distMap.shape[1] == 2: d1_ix, d2_ix = np.argpartition(distMap, 1, axis=1)[:, :2].T else: d1_ix, d2_ix = np.argpartition(distMap, 2, axis=1)[:, :2].T d1 = distMap[ix, d1_ix] d2 = distMap[ix, d2_ix] border_loss_map = w0 * np.exp((-1 * (d1 + d2) ** 2) / (2 * (sigma ** 2))) xBLoss = np.zeros((nrows, ncols)) xBLoss[X1, Y1] = border_loss_map # class weight map loss = np.zeros((nrows, ncols)) w_1 = 1 - masks.sum() / loss.size w_0 = 1 - w_1 loss[masks.sum(0) == 1] = w_1 loss[masks.sum(0) == 0] = w_0 ZZ = xBLoss + loss return ZZ </code></pre> <p>Traceback of the error-</p> <pre><code>MemoryError Traceback (most recent call last) &lt;ipython-input-30-f0a595b8de7e&gt; in &lt;module&gt; 1 # train the model 2 model_scratch = train(20, final_train_loader, unet, optimizer, ----&gt; 3 criterion, train_on_gpu, 'model_scratch.pt') &lt;ipython-input-29-b481b4f3120e&gt; in train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path) 24 loss = criterion(output,target) 25 target.requires_grad = False ---&gt; 26 w = make_weight_map(target) 27 loss = W*loss 28 loss.backward() &lt;ipython-input-5-e75a6281476f&gt; in make_weight_map(masks) 33 X2, Y2 = np.nonzero(bounds) 34 xSum = (X2.reshape(-1, 1) - X1.reshape(1, -1)) ** 2 ---&gt; 35 ySum = (Y2.reshape(-1, 1) - Y1.reshape(1, -1)) ** 2 36 distMap[:, i] = np.sqrt(xSum + ySum).min(axis=0) 37 ix = np.arange(distMap.shape[0]) MemoryError: </code></pre>
<p>Your <code>final_train_loader</code> provides you with an input image <code>data</code> and the expected pixel-wise labeling <code>target</code>. I assume (following pytorch's conventions) that <code>data</code> is of shape B-3-H-W and of <code>dtype=torch.float</code>.<br> More importantly, <code>target</code> is of shape B-H-W and of <code>dtype=torch.long</code>.</p> <p>On the other hand <code>make_weight_map</code> expects its input to be C-H-W (with <code>C</code> = number of classes, NOT batch size), of type numpy array. </p> <p>Try providing <code>make_weight_map</code> the input mask <em>as it expects it</em> and see if you get similar errors.<br> I also recommend that you <em>visualize</em> the resulting weight map - to make sure your function does what you expect it to do.</p>
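<p>For illustration, a minimal sketch of one way to build the <code>(n_masks, H, W)</code> input that <code>make_weight_map</code> expects (names follow the question's code; this assumes <code>batch_size == 1</code> and an integer-labelled <code>target</code> of shape <code>(1, H, W)</code>):</p> <pre><code>import numpy as np
import torch

target_np = target.squeeze(0).cpu().numpy()                # (H, W) integer class labels
classes = np.unique(target_np)
# optionally drop the background class, e.g. classes = classes[classes != 0]
masks = np.stack([(target_np == c).astype(int) for c in classes])   # (C, H, W) binary masks

w = make_weight_map(masks)                                 # (H, W) numpy weight map
w = torch.from_numpy(w).float().to(output.device)          # tensor, ready to scale the loss
</code></pre>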
deep-learning|pytorch|image-segmentation|semantic-segmentation|unet-neural-network
2
6,575
34,362,193
How to explicitly broadcast a tensor to match another's shape in tensorflow?
<p>I have three tensors, <code>A, B and C</code> in tensorflow, <code>A</code> and <code>B</code> are both of shape <code>(m, n, r)</code>, <code>C</code> is a binary tensor of shape <code>(m, n, 1)</code>.</p> <p>I want to select elements from either A or B based on the value of <code>C</code>. The obvious tool is <code>tf.select</code>, however that does not have broadcasting semantics, so I need to first explicitly broadcast <code>C</code> to the same shape as A and B.</p> <p>This would be my first attempt at how to do this, but it doesn't like me mixing a tensor (<code>tf.shape(A)[2]</code>) into the shape list. </p> <pre><code>import tensorflow as tf A = tf.random_normal([20, 100, 10]) B = tf.random_normal([20, 100, 10]) C = tf.random_normal([20, 100, 1]) C = tf.greater_equal(C, tf.zeros_like(C)) C = tf.tile(C, [1,1,tf.shape(A)[2]]) D = tf.select(C, A, B) </code></pre> <p>What's the correct approach here?</p>
<p><strong>EDIT:</strong> In all versions of TensorFlow since 0.12rc0, the code in the question works directly. TensorFlow will automatically stack tensors and Python numbers into a tensor argument. The solution below using <code>tf.pack()</code> is only needed in versions prior to 0.12rc0. Note that <code>tf.pack()</code> was renamed to <a href="https://www.tensorflow.org/api_docs/python/tf/stack" rel="noreferrer"><code>tf.stack()</code></a> in TensorFlow 1.0.</p> <hr> <p>Your solution is very close to working. You should replace the line:</p> <pre><code>C = tf.tile(C, [1,1,tf.shape(C)[2]]) </code></pre> <p>...with the following:</p> <pre><code>C = tf.tile(C, tf.pack([1, 1, tf.shape(A)[2]])) </code></pre> <p>(The reason for the issue is that TensorFlow won't implicitly convert a list of tensors and Python literals into a tensor. <a href="https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#pack" rel="noreferrer"><code>tf.pack()</code></a> takes a list of tensors, so it will convert each of the elements in its input (<code>1</code>, <code>1</code>, and <code>tf.shape(C)[2]</code>) to a tensor. Since each element is a scalar, the result will be a vector.)</p>
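<p>For readers on newer TensorFlow versions, a sketch of the same selection without <code>tile</code>/<code>pack</code> (assuming TF 2.x, where <code>tf.where</code> replaces <code>tf.select</code> and <code>tf.broadcast_to</code> is available):</p> <pre><code>import tensorflow as tf

A = tf.random.normal([20, 100, 10])
B = tf.random.normal([20, 100, 10])
C = tf.random.normal([20, 100, 1])

mask = tf.greater_equal(C, tf.zeros_like(C))      # (20, 100, 1) boolean
mask = tf.broadcast_to(mask, tf.shape(A))         # (20, 100, 10)
D = tf.where(mask, A, B)
</code></pre>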
tensorflow
15
6,576
34,212,429
Convert a long string into a matrix, cleaning it
<p>I have a large text file which is just a very long string. It is a huge block of text. </p> <p>The original maker of this file tried to make this a "matrix" by setting <code>\n</code> tabs after a certain count of letters. </p> <pre><code>string = "adfajdslfkajsddf&amp;&amp;adfadfladfsjdfl\nadk...fhaldkfjahsdf" </code></pre> <p>Using regular expressions with the module <code>re</code> (possibly), how can I input each character of this massive string into the matrix it was originally supposed to be? </p> <p>Also, there are certain garbage characters in the string, like "&amp;" and "#" and "{". Is there a standard module to use? </p> <p>I imagine one could take the original string, break it up into a couple of strings based on where the '\n' marker is, and then place these strings somehow into a numpy ndarray by each individual character of the string. </p>
<p>You can do this a couple of way, you can check each char seeing if it is alphanumeric:</p> <pre><code>stg = "abcde123]\nefghi456}\njk{lmn789" import numpy as np arr = np.array([ch for line in stg for ch in line if ch.isdigit() or ch.isalpha()]) </code></pre> <p>Or if all the junk is punctuation you can <code>str.translate</code>:</p> <pre><code>from string import punctuation junk = {ord(ch):"" for ch in punctuation + "\n"} arr = np.array(list(stg.translate(junk))) </code></pre> <p>Both will give you a flat list:</p> <pre><code>['a' 'b' 'c' 'd' 'e' '1' '2' '3' 'e' 'f' 'g' 'h' 'i' '4' '5' '6' 'j' 'k' 'l' 'm' 'n' '7' '8' '9'] </code></pre> <p>If you want multidimensional arrays, you can split on the newline:</p> <pre><code>arr = np.array([[ch for ch in line ] for line in stg.translate(junk).split()]) arr = np.array([[ch for ch in line if ch.isdigit() or ch.isalpha()] for line in stg.split()]) </code></pre> <p>Which will give you:</p> <pre><code>[['a' 'b' 'c' 'd' 'e' '1' '2' '3'] ['e' 'f' 'g' 'h' 'i' '4' '5' '6'] ['j' 'k' 'l' 'm' 'n' '7' '8' '9']] </code></pre> <p>For python2 the <code>translate</code> is a little different:</p> <pre><code>from string import punctuation import numpy as np stg = "abcde123]\nefghi456}\njk{lmn789" arr = np.array([[ch for ch in line ] for line in stg.translate(None, punctuation).split()]) print(arr) </code></pre>
python|regex|string|numpy|matrix
2
6,577
37,111,877
creating a variable within tf.variable_scope(name), initialized from another variable's initialized_value
<p>Hey tensorflow community,</p> <p>I am experiencing unexpected naming conventions when using variable_scope in the following setup:</p> <pre><code>with tf.variable_scope("my_scope"): var = tf.Variable(initial_value=other_var.initialized_value()) </code></pre> <p>In the above, it holds that </p> <pre><code>other_var.name = 'outer_scope/my_scope/other_var_name:0' </code></pre> <p>I am therefore "reusing" the same scope at this point in the code. Intuitively I do not see an issue with this, but the following happens:</p> <pre><code>var.name = 'outer_scope/my_scope_1/var_name:0' </code></pre> <p>So apparently, tf isn't happy with "my_scope" and needs to append the "_1". The "outer_scope" remains the same, though.</p> <p>If I do not initialize with "other_var", this behaviour does not come up.</p> <p>An explanation would be much appreciated! Thx</p> <p>Mat</p>
<p>You might want to use <code>tf.get_variable()</code> instead of 'tf.Variable`.</p> <pre><code>with tf.variable_scope('var_scope', reuse=False) as var_scope: var = tf.get_variable('var', [1]) var2 = tf.Variable([1], name='var2') print var.name # var_scope/var:0 print var2.name # var_scope/var2:0 with tf.variable_scope('var_scope', reuse=True) as var_scope: var = tf.get_variable('var', [1]) var2 = tf.Variable([1], name='var2') print var.name # var_scope/var:0 print var2.name # var_scope_1/var2:0 </code></pre> <p>The reason behind this I think is that in your example, although you have successfully "re-entered" the variable_scope you want, what really affects your variable name is another scope named <code>name_scope</code> intead of <code>variable_scope</code> as you might guess. From the official document <a href="https://www.tensorflow.org/versions/r0.8/how_tos/variable_scope/index.html#names-of-ops-in-tf-variable-scope" rel="nofollow">here</a> you can see that:</p> <blockquote> <p>when we do with tf.variable_scope("name"), this implicitly opens a tf.name_scope("name").</p> </blockquote> <p><code>name_scope</code> is originally used for managing operation names(such as <code>add</code>, <code>matmul</code>), because <code>tf.Variable</code> is actually an operation and its operation name will be "inherited" by variables created by it, so the name of <code>name_scope</code> rather than <code>variable_scope</code> is used as prefix.</p> <p>But if you want to use tf.Variable, you can also directly use <code>name_scope</code> in <code>with</code> statement:</p> <pre><code>with tf.name_scope('n_scope') as n_scope: var = tf.Variable([1], name='var') print var.name #n_scope/var_1:0 with tf.name_scope(n_scope) as n_scope: var = tf.Variable([1], name='var') print var.name #n_scope/var_1:0 </code></pre> <p>One thing to pay attention to is that you should pass as argument the scope varible previously captured from a <code>with</code> statement when you want to "re-enter" a name scope, rather than using <code>str</code> scope name:</p> <pre><code> with tf.name_scope('n_scope') as n_scope: var = tf.Variable([1], name='var') print var.name #n_scope/var_1:0 with tf.name_scope('n_scope') as n_scope: var = tf.Variable([1], name='var') print var.name #n_scope_1/var_1:0 </code></pre> <p>Pay attention to the argument passed to <code>tf.name_scope</code>. This behavior is again described in doc string of <code>name_scope</code>:</p> <blockquote> <p>The name argument will be interpreted as follows:</p> <ol> <li><p>A string (not ending with ‘/’) will create a new name scope, in which name is appended to the prefix of all operations created in the context. If name has been used before, it will be made unique by calling self.unique_name(name). </p></li> <li><p>A scope previously captured from a with g.name_scope(...) as scope: statement will be treated as an “absolute” name scope, which makes it possible to re-enter existing scopes. </p></li> <li><p>A value of None or the empty string will reset the current name scope to the top-level (empty) name scope.</p></li> </ol> </blockquote>
tensorflow
3
6,578
36,806,745
Data transformation for machine learning
<p>I have a dataset with SKU IDs and their counts. I need to feed this data into a machine learning algorithm, in a way that SKU IDs become columns and COUNTs are at the intersection of transaction ID and SKU ID. Can anyone suggest how to achieve this transformation?</p> <p>CURRENT DATA</p> <pre><code>TransID SKUID COUNT 1 31 1 1 32 2 1 33 1 2 31 2 2 34 -1 </code></pre> <p>DESIRED DATA</p> <pre><code>TransID 31 32 33 34 1 1 2 1 0 2 2 0 0 -1 </code></pre>
<p>In <code>R</code>, we can use either <code>xtabs</code></p> <pre><code>xtabs(COUNT~., df1) # SKUID #TransID 31 32 33 34 # 1 1 2 1 0 # 2 2 0 0 -1 </code></pre> <p>Or <code>dcast</code></p> <pre><code>library(reshape2) dcast(df1, TransID~SKUID, value.var="COUNT", fill=0) # TransID 31 32 33 34 #1 1 1 2 1 0 #2 2 2 0 0 -1 </code></pre> <p>Or <code>spread</code></p> <pre><code>library(tidyr) spread(df1, SKUID, COUNT, fill=0) </code></pre>
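<p>Since the question is also tagged pandas, an equivalent sketch in Python (assuming the data sits in a DataFrame with the same column names):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'TransID': [1, 1, 1, 2, 2],
                   'SKUID':   [31, 32, 33, 31, 34],
                   'COUNT':   [1, 2, 1, 2, -1]})

wide = (df.pivot_table(index='TransID', columns='SKUID',
                       values='COUNT', aggfunc='sum', fill_value=0)
          .reset_index())
#    TransID  31  32  33  34
# 0        1   1   2   1   0
# 1        2   2   0   0  -1
</code></pre>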
r|python-2.7|numpy|pandas|graphlab
4
6,579
36,721,673
Pandas dataframe add columns with automatic adding missing indices
<p>I have the following 2 simple dataframes.</p> <p>df1:</p> <p><a href="https://i.stack.imgur.com/nwlQT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nwlQT.png" alt="df1"></a></p> <p>df2:</p> <p><a href="https://i.stack.imgur.com/IjM0r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IjM0r.png" alt="df2"></a></p> <p>I want to add df2 to df1 by using something like:</p> <pre><code>df1["CF 0.3"]=df2 </code></pre> <p>However, this only adds values where indexes in df1 and df2 are the same. I would like a way to add a column so that missing indexes are automatically added, and if there is no associated value for that index, it is filled with NaN. Something like this:</p> <p><a href="https://i.stack.imgur.com/yZOAU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yZOAU.png" alt="enter image description here"></a></p> <p>The way I did this is by writing df1=df1.add(df2)</p> <p>This automatically adds the missing indexes, but all values are NaN. Then I manually populated values by writing:</p> <pre><code>df1["CF 0.1"]=dummyDF1 df1["CF 0.3"]=dummyDF2 </code></pre> <p>Is there an easier way to do this? I have a feeling I am missing something.</p> <p>I hope you understand my question :)</p>
<p>Use <code>concat</code> refer to this <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">documentation</a> for detailed help.</p> <p>And here is an example based on the documentation:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3]) df2 = pd.DataFrame({'X': ['A4', 'A5', 'A6', 'A7'], 'XB': ['B4', 'B5', 'B6', 'B7'], 'XC': ['C4', 'C5', 'C6', 'C7'], 'XD': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6, 7]) df3 = pd.DataFrame({'YA': ['A8', 'A9', 'A10', 'A11'], 'YB': ['B8', 'B9', 'B10', 'B11'], 'YC': ['C8', 'C9', 'C10', 'C11'], 'YD': ['D8', 'D9', 'D10', 'D11']}, index=[8, 9, 10, 11]) #To get the desired result you are looking for you need to reset the index. #With the dataframes you have you may not be able to merge as well #Since merge would need a common index or column frames = [df1.reset_index(drop=True), df2.reset_index(drop=True), df3.reset_index(drop=True)] df4 = pd.concat(frames, axis=1) print df4 </code></pre>
python|pandas|dataframe
1
6,580
54,754,927
Find index of all elements in numpy arrays
<p>I am using the <code>numpy</code> <code>searchsorted()</code> function to find indices in a <code>numpy</code> array. It is working only for some arrays. What is it that is going wrong (implementation below)?</p> <pre><code>import numpy as np #specify the dtype in RV RV = np.array([ np.array([0.23, 2.5, 5.0, 7.1]), np.array(['a1', 'a2']), np.array(['b2', 'b1']) ], dtype=object) print(RV) def Rules(): global r r = np.array(np.meshgrid(*RV), dtype=object).T.reshape(-1,len(RV)) return r Rules() print(r) print(RV[0].searchsorted(r[:,0])) #working print(RV[1].searchsorted(r[:,1])) #working print(RV[2].searchsorted(r[:,2])) #not working </code></pre>
<p>By default, the <code>array</code> argument of <code>searchsorted()</code> must be a <strong>sorted</strong> one. Therefore the solution is:</p> <p>Use <code>numpy.sort()</code> to sort it in advance:</p> <pre><code>np.sort(RV[2]).searchsorted(r[:,2]) </code></pre> <p>Or use the <code>sorter</code> argument:</p> <pre><code>RV[2].searchsorted(r[:,2],sorter=np.argsort(RV[2])) </code></pre>
python|python-3.x|numpy|indexing
0
6,581
54,778,734
Remove partial duplicate row using column value
<p>I'm trying to clean data where there are a lot of partial duplicates, keeping only the first row of data when the key in col A is duplicated.</p> <pre><code> A B C D 0 foo bar lor ips 1 foo bar 2 test do kin ret 3 test do 4 er ed ln pr </code></pre> <p>Expected output after cleaning:</p> <pre><code> A B C D 0 foo bar lor ips 1 test do kin ret 2 er ed ln pr </code></pre> <p>I have been looking at methods such as drop_duplicates or even group_by, but they don't really help in my case: the duplicates are partial, since some rows contain empty data and only have similar values in col A and B. Grouping by partially works, but doesn't return the transformed data; it just filters through.</p> <p>I'm very new to pandas and pointers are appreciated. I could probably do it outside pandas, but I'm thinking there might be a better way to do it.</p> <p>Edit: sorry, I just noticed a mistake I made in the provided example ("test" had become "tes").</p>
<p>In your case, what exactly counts as a partial duplicate? Please provide a more complicated example. In the above example, instead of deduplicating on col A you could try col B.</p> <p>The expected output can be obtained with the following snippet:</p> <pre><code>print (df.drop_duplicates(subset=['B'])) </code></pre> <p><strong>Note: the suggested solution only works for the above sample; it won't work when rows have different col A values but the same col B value.</strong> </p>
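<p>If the key really is column A, as described in the question, a sketch that keeps the first row per key regardless of what the other columns contain:</p> <pre><code>out = df.drop_duplicates(subset=['A'], keep='first').reset_index(drop=True)
</code></pre>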
python|pandas
0
6,582
54,919,376
How to convert csv into nested json in python pandas?
<p>I have a csv like this:</p> <pre><code> Art Category LEVEL 2 LEVEL 3 LEVEL 4 LEVEL 5 Location 0 PRINTMAKING VISUAL CONTEMPORARY 2D NaN NaN NaN 1 PAINTING VISUAL CONTEMPORARY 2D NaN NaN NaN 2 AERIAL VISUAL CONTEMPORARY 2D PHOTOGRAPHY AERIAL NaN 3 WILDLIFE VISUAL CONTEMPORARY 2D PHOTOGRAPHY WILDLIFE NaN 4 NATURE VISUAL CONTEMPORARY 2D PHOTOGRAPHY NATURE NaN </code></pre> <p>The art and category will be there, but the levels from l1 to l6 can be null. What I want to achieve is like so:</p> <pre><code>art: PRINTMAKING category: VISUAL tags: [CONTEMPORARY, 2D] </code></pre> <p>The levels are basically tags for a particular art which are to be stored in an array.</p> <p>I am new to python and so far I have written the following code. How can I achieve this?</p> <pre><code>import pandas as pd import json data = pd.read_excel("C:\\Users\\Desktop\\visual.xlsx") rec = {} rec['art'] = data['Art'] rec['category'] = data['Category'] rec['tags'] = data['LEVEL 2'] + ',' + data['LEVEL 3'] + ',' + data['LEVEL 4'] + ',' + data['LEVEL 5'] </code></pre> <p>I guess this is not the correct way to do it.</p>
<p>for convert values of <code>tags</code> to lists without <code>NaN</code>s use:</p> <pre><code>df['tags'] = df.filter(like='LEVEL').apply(lambda x: x.dropna().tolist(), axis=1) #alternative, should be faster #df['tags'] = [[y for y in x if isinstance(y, str)] for x in # df.filter(like='LEVEL').values] d = df[['Art','Category','tags']].to_dict(orient='records') [{ 'Art': 'PRINTMAKING', 'Category': 'VISUAL', 'tags': ['CONTEMPORARY', '2D'] }, { 'Art': 'PAINTING', 'Category': 'VISUAL', 'tags': ['CONTEMPORARY', '2D'] }, { 'Art': 'AERIAL', 'Category': 'VISUAL', 'tags': ['CONTEMPORARY', '2D', 'PHOTOGRAPHY', 'AERIAL'] }, { 'Art': 'WILDLIFE', 'Category': 'VISUAL', 'tags': ['CONTEMPORARY', '2D', 'PHOTOGRAPHY', 'WILDLIFE'] }, { 'Art': 'NATURE', 'Category': 'VISUAL', 'tags': ['CONTEMPORARY', '2D', 'PHOTOGRAPHY', 'NATURE'] }] </code></pre>
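<p>If the end goal is an actual JSON file rather than a list of dicts, the result can be serialized directly (a sketch, assuming the <code>d</code> built above; the filename is just an example):</p> <pre><code>import json

with open('arts.json', 'w') as f:
    json.dump(d, f, indent=2)
</code></pre>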
python|json|pandas|csv
2
6,583
54,806,726
Flatten nested dictionary using Pandas
<p>I would like to flatten a nested dictionary. A solution for such a problem was suggested here: <a href="https://stackoverflow.com/a/41801708/8443371">https://stackoverflow.com/a/41801708/8443371</a>. Problem: I would like to obtain keys identical to the keys in the last layer only. For an input:</p> <pre><code>d = {'a': 1, 'c': {'b': {'x': 5, 'y' : 10}}, 'd': [1, 2, 3]} </code></pre> <p>I would like to have an output:</p> <pre><code>{'a': 1, 'x': 5, 'y': 10, 'd': [1, 2, 3]} </code></pre> <p>Suggestions using pure Python will probably be slower than Pandas, which is based on a C implementation. </p> <p>Note: assuming a max two-layer dictionary, I have a Python solution, which seems to be very slow:</p> <pre><code>for key in dict.keys(): if '.' in key: dict[key.split('.')[-1]] = dict.pop(key) </code></pre>
<p>Here's my solution in pure python for the dictionary you've provided:</p> <pre><code>d = {'a': 1, 'c': {'b': {'x': 5, 'y' : 10}}, 'd': [1, 2, 3]} def flatten_dict(dic): result = {} for key in dic.keys(): if isinstance(dic[key], dict): result.update(flatten_dict(dic[key])) else: result[key] = dic[key] return result flatten_dict(d) {'a': 1, 'x': 5, 'y': 10, 'd': [1, 2, 3]} %%timeit flatten_dict(d) 2.45 µs ± 72.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></pre>
python|pandas|python-2.7
2
6,584
27,923,768
Optimizing pairwise calculation of distances in an array for a given shift
<p>I have an array containing millions of entries. I would like to calculate another vector containing all of the distances for pairs of entries that are shifted by a certain number delta in the array.</p> <p>Currently I'm using this:</p> <pre><code>for i in range(0, len(a) - delta): difs = numpy.append(difs, a[i + delta] - a[i]) </code></pre> <p>Does anyone know how to do this faster?</p> <p>There's a similar question here: <a href="https://stackoverflow.com/questions/20277982/fastest-pairwise-distance-metric-in-python">Fastest pairwise distance metric in python</a></p> <p>But I don't want to calculate the distance for every pair.</p> <p>Example: </p> <pre><code>&gt;&gt;&gt; a = [1,5,7,7,2,6] &gt;&gt;&gt; delta = 2 &gt;&gt;&gt; print difs array([ 6., 2., -5., -1.]) </code></pre>
<p>You could just slice <code>a</code> using <code>delta</code> and then subtract the two subarrays:</p> <pre><code>&gt;&gt;&gt; a = np.array([1,5,7,7,2,6]) &gt;&gt;&gt; delta = 2 &gt;&gt;&gt; a[delta:] - a[:-delta] array([ 6, 2, -5, -1]) </code></pre> <p>This slicing operation is likely to be very quick for large arrays as no additional indexes or copies of the data in <code>a</code> needs to be created. The subtraction creates a new array with the required values in.</p>
python|arrays|performance|python-2.7|numpy
2
6,585
28,227,147
Numpy matrix row stacking
<p>I have 4 arrays (all the same length) which I am trying to stack together to create a new array, with each of the 4 arrays being a row. </p> <p>My first thought was this:</p> <pre><code>B = -np.array([[x1[i]],[x2[j]],[y1[i]],[y2[j]]]) </code></pre> <p>However the shape of that is <code>(4,1,20)</code>.</p> <p>To get the 2D output I expected I resorted to this:</p> <pre><code>B = -np.vstack((np.vstack((np.vstack(([x1[i]],[x2[j]])),[y1[i]])),[y2[j]])) </code></pre> <p>Where the shape is <code>(4,20)</code>.</p> <p>Is there a better way to do this? And why would the first method not work?</p> <p><strong>Edit</strong></p> <p>For clarity, the shapes of <code>x1[i], x2[j], y1[i], y2[j]</code> are all <code>(20,)</code>.</p>
<p><code>np.vstack</code> takes a sequence of equal-length arrays to stack, one on top of the other, as long as they have compatible shapes. So in your case, a tuple of the one-dimensional arrays would do:</p> <pre><code>np.vstack((x1[i], x2[j], y1[i], y2[j])) </code></pre> <p>would do what you want. If this statement is part of a loop building many such 4x20 arrays, however, that may be a different matter.</p>
python|arrays|numpy
2
6,586
73,347,474
Pandas - Sum for each unique word
<blockquote> <p>Updated. Instead of <code>dict</code> data, I change for a <code>dataframe</code> as input</p> </blockquote> <p>I'm analyzing a DataFrame with approximately 10,000 rows and 2 columns.</p> <p>The criteria of my analysis is based on whether certain words appear in a certain cell.</p> <p>I believe I will be more successful if I know which words are most relevant in terms of values...</p> <h5>Foo data to be used as an example:</h5> <pre class="lang-py prettyprint-override"><code>data = { 'product': ['Dell Notebook I7', 'Dell Notebook I3', 'Logitech mx keys', 'Logitech mx 2'], 'cost': [1000,1200,300,100]} df_data = pd.DataFrame(data) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>product</th> <th>cost</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>Dell Notebook I7</td> <td>1000</td> </tr> <tr> <td>1</td> <td>Dell Notebook I3</td> <td>1200</td> </tr> <tr> <td>2</td> <td>Logitech mx keys</td> <td>300</td> </tr> <tr> <td>3</td> <td>Logitech mx 2</td> <td>100</td> </tr> </tbody> </table> </div> <p>Basically, the column <code>product</code> shows the product an description. In the column <code>cost</code> shows the product cost.</p> <h5>What I want:</h5> <p>I would like to create another dataframe like this:</p> <h5>Desired Output:</h5> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>unique_words</th> <th>total_cost_for_unique_word</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Dell</td> <td>2200</td> </tr> <tr> <td>4</td> <td>Logitech</td> <td>2200</td> </tr> <tr> <td>5</td> <td>Notebook</td> <td>2200</td> </tr> <tr> <td>2</td> <td>I3</td> <td>1200</td> </tr> <tr> <td>3</td> <td>I7</td> <td>1000</td> </tr> <tr> <td>7</td> <td>mx</td> <td>400</td> </tr> <tr> <td>6</td> <td>keys</td> <td>300</td> </tr> <tr> <td>0</td> <td>2</td> <td>100</td> </tr> </tbody> </table> </div> <ul> <li>Column <code>unique_words</code> with the list of each word that appears in the column <code>product</code>.</li> <li>Column <code>total_cost_for_unique_word</code> with the sum of the values of products that contain that word.</li> </ul> <p>I've tried searching for posts here from StackOverflow... Also, I've done google research, but I haven't found a solution. Maybe I still don't have the knowledge to find the answer.</p> <p>If by any chance it has already been answered, please let me know and I will delete the post.</p> <p>Thank you all.</p>
<p>You can <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>explode</code></a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html" rel="nofollow noreferrer"><code>groupby.agg</code></a>:</p> <pre><code>df_data = pd.DataFrame(data) new_df = (df_data .assign(unique_words=df_data['product'].str.split()) .explode('unique_words') .groupby('unique_words', as_index=False) .agg(**{'total cost': ('cost', 'sum')}) .sort_values('total cost', ascending=False, ignore_index=True) ) </code></pre> <p>Output:</p> <pre><code> unique_words total cost 0 Dell 2200 1 Notebook 2200 2 I3 1200 3 I7 1000 4 Logitech 400 5 mx 400 6 keys 300 7 2 100 </code></pre>
python|pandas|group-by
2
6,587
73,463,070
np.fromfile returns blank ndarray
<p>I'm supposed to troubleshoot an open-source Python 2.7 code from GitHub that for some reason, cannot import the binary file correctly. I traced the problem to the following definition, which supposedly imports a binary file (.dat) into an ndarray (rawsnippet) with the following code</p> <pre><code> def update_rawsnippet(self): self.rawfile_samp = self.rawfile.tell()/self.datatype.itemsize self.rawfile_time = self.rawfile_samp/self.fs rawsnippet = np.fromfile(self.rawfile, self.datatype, self.S) self.rawsnippet = self.format_rawsnippet(rawsnippet) return </code></pre> <p>where self.rawfile is an open file object defined in the following</p> <pre><code> def open_rawfile(self): if (not hasattr(self, 'rawfile')) or (hasattr(self, 'rawfile') and self.rawfile.closed): self.rawfile = open(self.abspath, 'rb') print('Opened rawfile = \'%s\'.\n'%self.abspath) else: print('Rawfile = \'%s\' is already opened.\n'%self.rawfile.name) return </code></pre> <p>and self.datatype and self.S are</p> <pre><code>self.datatype=np.dtype([('i', np.int16), ('q', np.int16)]) self.S = 2500 </code></pre> <p>But for some reason, <em><strong>rawsnippet is just an empty array</strong></em>. When I print rawsnippet, it simply returns <em>[]</em>.</p> <p>I've tried testing the function <code>np.fromfile</code> and checked the binary file with the following code, and it works.</p> <pre><code>&gt;&gt;&gt;rawfile=open(abspath,'rb') &gt;&gt;&gt;rawsnippet = np.fromfile(rawfile,datatype,s) &gt;&gt;&gt;print rawsnippet [( 1280, 0) ( 1280, 0) ( 1280, 0) ... (-1537, 1280) ( -769, -1025) ( -769, -1025)] </code></pre> <p>So really don't know why it don't work in the full code.</p> <p>Interestingly, when I change the datatype to the following (in the full code)</p> <pre><code>datatype = np.dtype([('i', np.int8), ('q', np.int8)]) </code></pre> <p>rawsnippet is no longer empty, but reads the binary file wrongly (The values are wrong).</p> <p>Any clues as to why this occurs?? I'm using numpy ver 1.16.6</p> <p><strong>Note:</strong></p> <p>The full code can be found at the following link.</p> <p><a href="https://github.com/mjp5578/GGRG-GPS-SDR" rel="nofollow noreferrer">https://github.com/mjp5578/GGRG-GPS-SDR</a></p> <p>I'm trying to run the <em>PyGNSS</em> code, and <code>def update_rawsnippet(self)</code> as well as <code>def open_rawfile(self)</code> can be found in rawfile.py</p>
<pre><code>In [225]: datatype=np.dtype([('i', np.int16), ('q', np.int16)]) ...: S = 2500 </code></pre> <p>Make an empty file:</p> <pre><code>In [226]: !touch empty.txt </code></pre> <p>Load with this, or any dtype, produces an (0,) shape file:</p> <pre><code>In [227]: np.fromfile('empty.txt', dtype=datatype) Out[227]: array([], dtype=[('i', '&lt;i2'), ('q', '&lt;i2')]) In [228]: np.fromfile('empty.txt', dtype=datatype, count=S) Out[228]: array([], dtype=[('i', '&lt;i2'), ('q', '&lt;i2')]) </code></pre> <p><code>np.frombuffer</code> complains if there aren't enough bytes, but <code>fromfile</code> does not.</p> <p>Write a small array:</p> <pre><code>In [229]: arr = np.zeros(3, dtype=datatype) In [230]: arr Out[230]: array([(0, 0), (0, 0), (0, 0)], dtype=[('i', '&lt;i2'), ('q', '&lt;i2')]) In [231]: arr.tofile('test.txt') </code></pre> <p>Read it (again no complaints if there aren't <code>S</code> elements):</p> <pre><code>In [234]: np.fromfile('test.txt', dtype=datatype, count=S) Out[234]: array([(0, 0), (0, 0), (0, 0)], dtype=[('i', '&lt;i2'), ('q', '&lt;i2')]) </code></pre> <p>Read with context:</p> <pre><code>In [235]: with open('test.txt','rb') as f: ...: print(np.fromfile(f,dtype=datatype)) ...: [(0, 0) (0, 0) (0, 0)] </code></pre> <p>Read, but after reading to the end, the result is empty:</p> <pre><code>In [236]: with open('test.txt','rb') as f: ...: f.read() ...: print(np.fromfile(f,dtype=datatype)) ...: [] </code></pre> <p>I suspect this last is what's happening in your code.</p> <p>Or the use a py2.7 might be causing problems. It's been a long time since I used it, and I don't recall what migration problems there were, especially regarding file reading.</p>
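<p>If that is indeed the cause, a minimal sketch of a guard before the read (variable names roughly follow the question's code; how to recover - rewind, stop, or raise - depends on what the application should do at end of file):</p> <pre><code>import os

pos = rawfile.tell()
size = os.fstat(rawfile.fileno()).st_size
if pos &gt;= size:
    # already at (or past) the end of the file: nothing left to read
    rawfile.seek(0)                 # or: return an empty snippet / raise
rawsnippet = np.fromfile(rawfile, dtype=datatype, count=S)
</code></pre>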
python|numpy|fromfile
1
6,588
73,188,883
How to filter and process on a dataframe for some different conditions and combine them without using for loop?
<p>I have a dataframe of logs of orders for anything and a list of times. at moment t, each order can be active or deactive and it can calculate such as below:</p> <pre><code>create_time &lt;= t &lt; response time </code></pre> <p>I want to calculate sum of active orders value for each item at each moment that are in my moment list and store all of them in a dataframe.</p> <p><strong>orders dataframe:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>create_time</th> <th>response_time</th> <th>value</th> <th>item</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>09:00:00</td> <td>10:00:00</td> <td>100</td> <td>y</td> </tr> <tr> <td>2</td> <td>08:00:00</td> <td>09:00:00</td> <td>200</td> <td>x</td> </tr> <tr> <td>3</td> <td>10:00:00</td> <td>11:45:00</td> <td>300</td> <td>x</td> </tr> <tr> <td>4</td> <td>09:40:00</td> <td>09:43:00</td> <td>400</td> <td>y</td> </tr> <tr> <td>5</td> <td>12:00:00</td> <td>13:00:00</td> <td>500</td> <td>w</td> </tr> <tr> <td>6</td> <td>11:15:00</td> <td>14:00:00</td> <td>250</td> <td>x</td> </tr> <tr> <td>7</td> <td>07:00:00</td> <td>07:12:00</td> <td>10</td> <td>z</td> </tr> <tr> <td>8</td> <td>05:00:00</td> <td>15:00:00</td> <td>350</td> <td>y</td> </tr> </tbody> </table> </div> <p><strong>moment list:</strong></p> <p><strong>08:00:00 - 08:30:00 - 9:00:00 - 09:30:00 - 10:00:00 - ...</strong></p> <p><strong>my desired output:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>moment</th> <th>item</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>08:00:00</td> <td>x</td> <td>200</td> </tr> <tr> <td>08:00:00</td> <td>y</td> <td>350</td> </tr> <tr> <td>08:30:00</td> <td>x</td> <td>200</td> </tr> <tr> <td>08:30:00</td> <td>y</td> <td>350</td> </tr> <tr> <td>09:00:00</td> <td>y</td> <td>100+350=450</td> </tr> <tr> <td>09:30:00</td> <td>y</td> <td>100+350=450</td> </tr> <tr> <td>10:00:00</td> <td>x</td> <td>300</td> </tr> <tr> <td>10:00:00</td> <td>y</td> <td>350</td> </tr> </tbody> </table> </div> <p><strong>code for create order dataframe and moment list:</strong></p> <pre class="lang-py prettyprint-override"><code>id = [1, 2, 3, 4, 5, 6, 7, 8] create_time = ['09:00:00', '08:00:00', '10:00:00', '09:40:00', '12:00:00', '11:15:00', '07:00:00', '05:00:00'] response_time = ['10:00:00', '09:00:00', '11:45:00', '09:43:00', '13:00:00', '14:00:00', '07:12:00', '15:00:00'] value = [100, 200, 300, 400, 500, 250, 10, 350] item = ['y', 'x', 'x', 'y', 'w', 'x', 'z', 'y'] df = pd.DataFrame({'id': id, 'create_time': create_time, 'response_time': response_time, 'value': value, 'item': item}) moment_list = ['08:00:00', '08:30:00', '9:00:00', '09:30:00', '10:00:00'] </code></pre> <p>my sulotion is using for such as below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd a = pd.DataFrame() for moment in moment_list: moment_df = df[(df.create_time &lt;= moment) &amp; (df.response_time &gt; moment)] moment_df = df.groupby('item')['value'].sum().reset_index() a = pd.concat([moment_df, a]) print(a) </code></pre> <p>this is the simple form of my problem. I have a large dataframe and big list of moments also I need to apply some many filters to create my moment dataframe in for loop.</p> <p>Can anyone help me and introduce a faster way for this because my for loop is a litlle slow.</p>
<p>Preprocessing data, make sure the comparability of time-related columns.</p> <pre><code>moment_list = ['08:00:00', '08:30:00', '9:00:00', '09:30:00', '10:00:00'] moment_series = pd.Series(moment_list) moment_series = pd.to_datetime(moment_series, format='%H:%M:%S').dt.time moment_series ### 0 08:00:00 1 08:30:00 2 09:00:00 3 09:30:00 4 10:00:00 dtype: object </code></pre> <pre><code>df['create_time'] = pd.to_datetime(df['create_time'], format='%H:%M:%S').dt.time df['response_time'] = pd.to_datetime(df['response_time'], format='%H:%M:%S').dt.time df ### id create_time response_time value item 0 1 09:00:00 10:00:00 100 y 1 2 08:00:00 09:00:00 200 x 2 3 10:00:00 11:45:00 300 x 3 4 09:40:00 09:43:00 400 y 4 5 12:00:00 13:00:00 500 w 5 6 11:15:00 14:00:00 250 x 6 7 07:00:00 07:12:00 10 z 7 8 05:00:00 15:00:00 350 y </code></pre> <hr /> <br/> <br/> <br/> <pre><code>df_array = df[['create_time', 'response_time']].to_numpy() df_3d_array = np.tile(df_array, (len(moment_series),1,1)) </code></pre> <p><code>create_time ≤ t &lt; response_time</code></p> <pre><code>cross_boolean = ((df_3d_array[:,:,0] &lt;= np.tile(moment_series, (len(df),1)).T) &amp; (np.tile(moment_series, (len(df),1)).T &lt; df_3d_array[:,:,1])).T cross_boolean ### [[False False True True False] [ True True False False False] [False False False False True] [False False False False False] [False False False False False] [False False False False False] [False False False False False] [ True True True True True]] </code></pre> <br/> <br/> <br/> <p><strong>You may wanna check after <code>df[moment_series] = cross_boolean</code>, how <code>df</code> looks like.</strong></p> <pre><code>df[moment_series] = cross_boolean output = df.melt(id_vars=['item','value'], value_vars=moment_series, var_name='moment', value_name='boolean') output = output[output['boolean'] == True] output.drop(columns=['boolean'], inplace=True) </code></pre> <p>Output</p> <pre><code>output_group = output.groupby(['moment','item']).sum().reset_index() output_group ### moment item value 0 08:00:00 x 200 1 08:00:00 y 350 2 08:30:00 x 200 3 08:30:00 y 350 4 09:00:00 y 450 5 09:30:00 y 450 6 10:00:00 x 300 7 10:00:00 y 350 </code></pre>
python|pandas|dataframe|performance|group-by
1
6,589
67,473,229
How to compute 2D cumulative sum efficiently
<p>Given a two-dimensional numerical array <code>X</code> of shape <code>(m,n)</code>, I would like to compute an array <code>Y</code> of the same shape, where <code>Y[i,j]</code> is the cumulative sum of <code>X[i_,j_]</code> for <code>0&lt;=i_&lt;=i, 0&lt;=j_&lt;=j</code>. If <code>X</code> describes a 2D probability distribution, <code>Y</code> could be thought of as the 2D cumulative distribution function (CDF).</p> <p>I can obviously compute all entries of <code>Y</code> in a double <code>for</code> loop. However, there is a recursive aspect to this computation, as <code>Y[i,j] = X[i,j] + Y[i-1,j] + Y[i,j-1] - Y[i-1,j-1]</code> (where negative indexing means 0).</p> <p>I was looking for &quot;2d Python cumsum&quot;, and I've found that NumPy's <code>cumsum</code> merely flattens the array.</p> <p><strong>My Questions:</strong></p> <ol> <li>Is there a standard Python function for computing <code>Y</code> efficiently?</li> <li>If not, is the recursive idea above optimal?</li> </ol> <p>Thanks.</p>
<p>A <strong>kernel splitting</strong> method can be applied here to solve this problem very efficiently with only two <code>np.cumsum</code> calls: one vertical and one horizontal (or the other way around, since the operation is symmetric).</p> <p>Here is an example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np x = np.random.randint(0, 10, (4, 5)) print(x) y = np.cumsum(np.cumsum(x, axis=0), axis=1) print(y) </code></pre> <p>Here is the result:</p> <pre class="lang-none prettyprint-override"><code>[[1 9 8 1 7] [0 6 8 2 3] [1 3 6 4 4] [0 8 1 2 9]] [[ 1 10 18 19 26] [ 1 16 32 35 45] [ 2 20 42 49 63] [ 2 28 51 60 83]] </code></pre>
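<p>A quick sanity check of the one-liner against the recursive definition in the question (brute-force double loop over the same small <code>x</code>):</p> <pre><code>y_loop = np.zeros_like(y)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        y_loop[i, j] = x[:i + 1, :j + 1].sum()

assert np.array_equal(y, y_loop)
</code></pre>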
python|performance|numpy|probability|cdf
3
6,590
67,369,710
Fill up columns in dataframe based on condition
<p>I have a dataframe that looks as follows:</p> <pre><code>id cyear month datadate fyear 1 1988 3 nan nan 1 1988 4 nan nan 1 1988 5 1988-05-31 1988 1 1988 6 nan nan 1 1988 7 nan nan 1 1988 8 nan nan 1 1988 9 nan nan 1 1988 12 nan nan 1 1989 1 nan nan 1 1989 2 nan nan 1 1989 3 nan nan 1 1989 4 nan nan 1 1989 5 1989-05-31 1989 1 1989 6 nan nan 1 1989 7 nan nan 1 1989 8 nan nan 1 1990 8 nan nan 4 2000 1 nan nan 4 2000 2 nan nan 4 2000 3 nan nan 4 2000 4 nan nan 4 2000 5 nan nan 4 2000 6 nan nan 4 2000 7 nan nan 4 2000 8 nan nan 4 2000 9 nan nan 4 2000 10 nan nan 4 2000 11 nan nan 4 2000 12 2000-12-31 2000 5 2000 11 nan nan </code></pre> <p>More specifically, I have a dataframe consisting of monthly (month) data on firms (id) per calendar year (cyear). If the respective row, i.e. month, represents the end of a fiscal year of the firm, the datadate column will denote the respective months end as a date variable and the fyear column will denote the respective fiscal year that just ended.</p> <p>I now want the fyear value to indicate the respective fiscal year not just in the last month of the respective companies fiscal year, but in every month within the respective fiscal year:</p> <pre><code>id cyear month datadate fyear 1 1988 3 nan 1988 1 1988 4 nan 1988 1 1988 5 1988-05-31 1988 1 1988 6 nan 1989 1 1988 7 nan 1989 1 1988 8 nan 1989 1 1988 9 nan 1989 1 1988 12 nan 1989 1 1989 1 nan 1989 1 1989 2 nan 1989 1 1989 3 nan 1989 1 1989 4 nan 1989 1 1989 5 1989-05-31 1989 1 1989 6 nan 1990 1 1989 7 nan 1990 1 1989 8 nan 1990 1 1990 8 nan 1991 4 2000 1 nan 2000 4 2000 2 nan 2000 4 2000 3 nan 2000 4 2000 4 nan 2000 4 2000 5 nan 2000 4 2000 6 nan 2000 4 2000 7 nan 2000 4 2000 8 nan 2000 4 2000 9 nan 2000 4 2000 10 nan 2000 4 2000 11 nan 2000 4 2000 12 2000-12-31 2000 5 2000 11 nan nan </code></pre> <p>Note that months may be missing, as evident in case of id 1, and fiscal years may end on different months in fyear=cyear or fyear=cyear+1 (I have included only the former example, one could construct the latter example by adding 1 to the current fyear values of e.g. id 1). Also, the last row(s) of a given firm may not necessarily be its fiscal year end month, as evident in case of id 1. Lastly, there may exist firms for which no information on fiscal years is available.</p> <p>I appreciate any help on this.</p>
<p>Do you want this?</p> <pre><code>def backword_fill(x): x = x.bfill() x = x.ffill() + x.isna().astype(int) return x df.fyear = df.groupby('id')['fyear'].transform(backword_fill) </code></pre> <p><strong>Output</strong></p> <pre><code> id cyear month datadate fyear 0 1 1988 3 &lt;NA&gt; 1988 1 1 1988 4 &lt;NA&gt; 1988 2 1 1988 5 1988-05-31 1988 3 1 1988 6 &lt;NA&gt; 1989 4 1 1988 7 &lt;NA&gt; 1989 5 1 1988 8 &lt;NA&gt; 1989 6 1 1988 9 &lt;NA&gt; 1989 7 1 1988 12 &lt;NA&gt; 1989 8 1 1989 1 &lt;NA&gt; 1989 9 1 1989 2 &lt;NA&gt; 1989 10 1 1989 3 &lt;NA&gt; 1989 11 1 1989 4 &lt;NA&gt; 1989 12 1 1989 5 1989-05-31 1989 13 1 1989 6 &lt;NA&gt; 1990 14 4 2000 1 &lt;NA&gt; 2000 15 4 2000 2 &lt;NA&gt; 2000 16 4 2000 3 &lt;NA&gt; 2000 17 4 2000 4 &lt;NA&gt; 2000 18 4 2000 5 &lt;NA&gt; 2000 19 4 2000 6 &lt;NA&gt; 2000 20 4 2000 7 &lt;NA&gt; 2000 21 4 2000 8 &lt;NA&gt; 2000 22 4 2000 9 &lt;NA&gt; 2000 23 4 2000 10 &lt;NA&gt; 2000 24 4 2000 11 &lt;NA&gt; 2000 25 4 2000 12 2000-12-31 2000 </code></pre>
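<p>For reference, the same helper restated with comments (a sketch of why it produces the output above): <code>bfill</code> gives every month up to a fiscal-year end that year, the trailing months after the last known year end get the previous fiscal year plus one, and groups with no <code>fyear</code> at all (like id 5 in the question) stay NaN:</p> <pre><code>def backword_fill(x):
    x = x.bfill()                       # months up to (and including) a year end get that fyear
    # after the last year end, bfill leaves NaN; ffill repeats the last fyear,
    # and isna() marks exactly those trailing rows, so +1 bumps them to the next fyear
    return x.ffill() + x.isna().astype(int)
</code></pre>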
python|pandas
2
6,591
67,503,869
Pandas: Holding on to an output until a change happens in parameter X
<p>I am trying to identify different phases in a process. What I basically need to create is the following:</p> <ul> <li>When Parameter A &gt; certain value: Output = Phase 1; keep this value until:</li> <li>Parameter B reaches a certain value, then Output = Phase 2</li> </ul> <p>This is of course quite easy to program with a generator, however, the tricky part here is that sometimes it can go back from phase 2 to 1 or it can also skip a phase.</p> <p>I am not quite sure how to do this. Ideally the code would look at a parameter, and when it changes decide to go back or forward in the phases.</p> <p>I came up with some sample code below:</p> <ul> <li>Give an output for Phase 1 when Parameter A reaches 1.</li> <li>Hold on to Phase 1 until Parameter B changes to &gt;120 or parameter A &gt;= 2.</li> <li>Hold on until parameter A &lt; 1.5 --&gt; go back to Phase 1 or Hold on until parameter A &gt; 3 --&gt; go forward to Phase 3.</li> </ul> <p>I hope this question is clear. The real dataset has 36 parameters so I simplified the case a bit to not make it any more complicated than necessary!</p> <p>I hope you can help me out!</p> <pre><code>import pandas as pd data = { &quot;Date and Time&quot;: [&quot;2020-06-07 00:00&quot;, &quot;2020-06-07 00:01&quot;, &quot;2020-06-07 00:02&quot;, &quot;2020-06-07 00:03&quot;, &quot;2020-06-07 00:04&quot;, &quot;2020-06-07 00:05&quot;, &quot;2020-06-07 00:06&quot;, &quot;2020-06-07 00:07&quot;, &quot;2020-06-07 00:08&quot;, &quot;2020-06-07 00:09&quot;, &quot;2020-06-07 00:10&quot;], &quot;Parameter A&quot;: [1, 1, 1, 1, 1.5, 2, 2.1, 2.2, 2.3, 1.6, 1.2], &quot;Parameter B&quot;: [100, 101, 99, 102, 101, 105, 120, 125, 122, 123, 99], &quot;Required output&quot;: [&quot;Phase 1&quot;, &quot;Phase 1&quot;,&quot;Phase 1&quot;,&quot;Phase 1&quot;,&quot;Phase 1&quot;,&quot;Phase 2&quot;,&quot;Phase 2&quot;,&quot;Phase 2&quot;,&quot;Phase 2&quot;,&quot;Phase 2&quot;,&quot;Phase 1&quot;] } df = pd.DataFrame(data) </code></pre>
<p>The basic problem you are trying to solve is to implement a <a href="https://en.wikipedia.org/wiki/Hysteresis" rel="nofollow noreferrer">hysteresis</a> (i.e. where a state depends on history).</p> <p>Aside from that, the logic to capture intervals of <code>a</code> and <code>b</code> can be expressed using <code>pd.cut()</code>.</p> <pre class="lang-py prettyprint-override"><code>a = df['Parameter A'] b = df['Parameter B'] cat_a = pd.cut(a, [-np.inf, 1, 1.5, 2, 3, np.inf], labels=[0,1,1.5,2,3], right=False) cat_b = pd.cut(b, [-np.inf, 120, np.inf], labels=[0,2], right=False) </code></pre> <p>For <code>cat_a</code>, we have a bin (labeled <code>1.5</code>) that corresponds to the &quot;uncertain&quot; zone between 1.5 and 2, where the hysteresis takes place (in that area, if the previous phase was <code>&gt;= 2</code>, use <code>2</code>, otherwise use <code>1</code>).</p> <p>We use <code>max</code> between <code>cat_a</code> and <code>cat_b</code> to establish a history-independent (<code>tmp</code>) value:</p> <pre class="lang-py prettyprint-override"><code>tmp = pd.concat([cat_a, cat_b], axis=1).max(axis=1) &gt;&gt;&gt; df.assign(tmp=tmp) Date and Time Parameter A Parameter B Required output tmp 0 2020-06-07 00:00 1.0 100 Phase 1 1.0 1 2020-06-07 00:01 1.0 101 Phase 1 1.0 2 2020-06-07 00:02 1.0 99 Phase 1 1.0 3 2020-06-07 00:03 1.0 102 Phase 1 1.0 4 2020-06-07 00:04 1.5 101 Phase 1 1.5 5 2020-06-07 00:05 2.0 105 Phase 2 2.0 6 2020-06-07 00:06 2.1 120 Phase 2 2.0 7 2020-06-07 00:07 2.2 125 Phase 2 2.0 8 2020-06-07 00:08 2.3 122 Phase 2 2.0 9 2020-06-07 00:09 1.6 123 Phase 2 2.0 10 2020-06-07 00:10 1.2 99 Phase 1 1.0 </code></pre> <p>Now, to implement the hysteresis, we use <a href="https://stackoverflow.com/a/23291658/758174">this SO answer</a> which uses <code>numpy</code>. It is slightly adapted to include the left side of intervals:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np def hyst(x, th_lo, th_hi, initial = False): hi = x &gt;= th_hi lo_or_hi = (x &lt; th_lo) | hi ind = np.nonzero(lo_or_hi)[0] if not ind.size: # prevent index error if ind is empty return np.zeros_like(x, dtype=bool) | initial cnt = np.cumsum(lo_or_hi) # from 0 to len(x) return np.where(cnt, hi[ind[cnt-1]], initial) </code></pre> <p>This returns a boolean value that indicates whether the phase should be &quot;high&quot; (<code>True</code>) or &quot;low&quot; (<code>False</code>). We then replace the uncertain values (1.5) with <code>1</code> or <code>2</code> depending on the hysteresis. 
Finally, we assign the numerical value of <code>phase</code> into a string:</p> <pre class="lang-py prettyprint-override"><code>phase = tmp.where(tmp != 1.5, np.where(hyst(tmp.values, 1.5, 2), 2, 1)) df = df.assign(phase='Phase ' + phase.astype(int).astype(str)) &gt;&gt;&gt; df Date and Time Parameter A Parameter B Required output phase 0 2020-06-07 00:00 1.0 100 Phase 1 Phase 1 1 2020-06-07 00:01 1.0 101 Phase 1 Phase 1 2 2020-06-07 00:02 1.0 99 Phase 1 Phase 1 3 2020-06-07 00:03 1.0 102 Phase 1 Phase 1 4 2020-06-07 00:04 1.5 101 Phase 1 Phase 1 5 2020-06-07 00:05 2.0 105 Phase 2 Phase 2 6 2020-06-07 00:06 2.1 120 Phase 2 Phase 2 7 2020-06-07 00:07 2.2 125 Phase 2 Phase 2 8 2020-06-07 00:08 2.3 122 Phase 2 Phase 2 9 2020-06-07 00:09 1.6 123 Phase 2 Phase 2 10 2020-06-07 00:10 1.2 99 Phase 1 Phase 1 </code></pre> <h2>In summary</h2> <p>The full code is (in addition to the <code>hyst()</code> function above):</p> <pre class="lang-py prettyprint-override"><code>a = df['Parameter A'] b = df['Parameter B'] cat_a = pd.cut(a, [-np.inf, 1, 1.5, 2, 3, np.inf], labels=[0,1,1.5,2,3], right=False) cat_b = pd.cut(b, [-np.inf, 120, np.inf], labels=[0,2], right=False) tmp = pd.concat([cat_a, cat_b], axis=1).max(axis=1) phase = tmp.where(tmp != 1.5, np.where(hyst(tmp.values, 1.5, 2), 2, 1)) df = df.assign(tmp=tmp, phase='Phase ' + phase.astype(int).astype(str)) </code></pre> <p>Hopefully, you can adapt and extend this logic for your 36-parameter case.</p> <h2>Another example</h2> <p>To better illustrate the phase transitions and the logic, here is another example:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([ [0, 0], [1, 100], [1.2, 100], [1.5, 100], [1.6, 100], [2, 100], [2.1, 100], [1.6, 100], [1.5, 100], [1.4, 100], [1.4, 120], [1.5, 100], [3, 100], [1.5, 100], [1.6, 100], [1.4, 100], ], columns=['Parameter A', 'Parameter B']) </code></pre> <p>Running the code above, and adding <code>tmp</code> to the <code>df</code> for inspection, we see (with comments added by hand):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df.assign(tmp=tmp, phase='Phase ' + phase.astype(int).astype(str)) Parameter A Parameter B tmp phase 0 0.0 0 0.0 Phase 0 1 1.0 100 1.0 Phase 1 2 1.2 100 1.0 Phase 1 3 1.5 100 1.5 Phase 1 # in hyst., but prev was low 4 1.6 100 1.5 Phase 1 5 2.0 100 2.0 Phase 2 6 2.1 100 2.0 Phase 2 7 1.6 100 1.5 Phase 2 # in hyst. but prev was high 8 1.5 100 1.5 Phase 2 9 1.4 100 1.0 Phase 1 10 1.4 120 2.0 Phase 2 # goes to 2 bc b &gt;= 120 11 1.5 100 1.5 Phase 2 12 3.0 100 3.0 Phase 3 13 1.5 100 1.5 Phase 2 # note: not 3, even though prev was 3 14 1.6 100 1.5 Phase 2 15 1.4 100 1.0 Phase 1 </code></pre>
python|pandas
0
6,592
34,570,177
Tensorflow: List of Tensors for Cost
<p>I am trying to work with LSTMs in TensorFlow. I found a tutorial online where a set of sequences is taken in and the objective function is composed of the last output of the LSTM and the known values. However, I would like to have my objective function use information from each output. Specifically, I am trying to have the LSTM learn the set of sequences (i.e. learn all the letters in words in a sentence):</p> <pre><code>cell = rnn_cell.BasicLSTMCell(num_units)
inputs = [tf.placeholder(tf.float32,shape=[batch_size,input_size]) for _ in range(seq_len)]
result = [tf.placeholder(tf.float32, shape=[batch_size,input_size]) for _ in range(seq_len)]

W_o = tf.Variable(tf.random_normal([num_units,input_size], stddev=0.01))
b_o = tf.Variable(tf.random_normal([input_size], stddev=0.01))

outputs, states = rnn.rnn(cell, inputs, dtype=tf.float32)

losses = []
for i in xrange(len(outputs)):
    final_transformed_val = tf.matmul(outputs[i],W_o) + b_o
    losses.append(tf.nn.softmax(final_transformed_val))

cost = tf.reduce_mean(losses)
</code></pre> <p>Doing this results in the error:</p> <pre><code>TypeError: List of Tensors when single Tensor expected
</code></pre> <p>How should I fix this issue? Does <code>tf.reduce_mean()</code> take in a list of tensor values, or is there some special tensor object that takes them?</p>
<p>In your code, <code>losses</code> is a Python list. TensorFlow's <a href="https://www.tensorflow.org/versions/master/api_docs/python/math_ops.html#reduce_mean" rel="nofollow"><code>reduce_mean()</code></a> expects a single tensor, not a Python list.</p> <pre><code>losses = tf.reshape(tf.concat(1, losses), [-1, size])
</code></pre> <p>where <code>size</code> is the number of values you're taking a softmax over, should do what you want. See <a href="https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#concat" rel="nofollow">concat()</a>.</p> <p>One thing I notice in your code that seems a bit odd is that you have a list of placeholders for your inputs, whereas the code in <a href="https://www.tensorflow.org/versions/master/tutorials/recurrent/index.html" rel="nofollow">the TensorFlow Tutorial</a> uses an order-3 tensor for inputs. Your input is a list of order-2 tensors. I recommend looking over the code in the tutorial, because it does almost exactly what you're asking about.</p> <p>One of the main files in that tutorial is <a href="https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/rnn/ptb/ptb_word_lm.py" rel="nofollow" title="here">here</a>. In particular, line 139 is where they create their cost. Regarding your input, lines 90 and 91 are where the input and target placeholders are set up. The main takeaway in those two lines is that an entire sequence is passed in via a single placeholder rather than a list of placeholders.</p> <p>See line 120 in the ptb_word_lm.py file to see where they do their concatenation.</p>
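<p>For illustration, here is a minimal, untested sketch of the other obvious route: collapsing the Python list into a single tensor before averaging. It assumes <code>losses</code> is the list of per-timestep <code>[batch_size, input_size]</code> tensors built in your loop:</p> <pre><code># Sketch only: sum the per-timestep tensors element-wise into one tensor,
# then reduce to a single scalar. reduce_mean() now receives a Tensor,
# not a list, which avoids the TypeError.
total_loss = tf.add_n(losses)
cost = tf.reduce_mean(total_loss)
</code></pre>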
python|list|machine-learning|tensorflow
3
6,593
34,564,906
Calculating the number of consecutive periods that match a condition
<p>Given the data in the <code>Date</code> and <code>Close</code> columns, I'd like to calculate the values in the <code>ConsecPeriodsUp</code> column. This column gives the number of consecutive two-week periods that the <code>Close</code> value has increased.</p> <pre><code>Date Close UpThisPeriod ConsecPeriodsUp 23/12/2015 3 1 1 16/12/2015 2 0 0 09/12/2015 1 0 0 02/12/2015 3 1 1 25/11/2015 2 0 0 18/11/2015 1 0 0 11/11/2015 7 1 3 04/11/2015 6 1 3 28/10/2015 5 1 2 21/10/2015 4 1 2 14/10/2015 3 1 1 07/10/2015 2 NaN NaN 30/09/2015 1 NaN NaN </code></pre> <p>I've written the following code to give the <code>UpThisPeriod</code> column but I can't see how I would aggregate that to get the <code>ConsecPeriodsUp</code> column, or whether there is way to do it in a single calculation that I'm missing.</p> <pre><code>import pandas as pd def up_over_period(s): return s[0] &gt;= s[-1] df = pd.read_csv("test_data.csv") period = 3 # one more than the number of weeks df['UpThisPeriod'] = pd.rolling_apply( df['Close'], window=period, func=up_over_period, ).shift(-period + 1) </code></pre>
<p>This can be done by adapting the <code>groupby</code>, <code>shift</code> and <code>cumsum</code> trick described in the Pandas Cookbook, <a href="http://pandas.pydata.org/pandas-docs/stable/cookbook.html#grouping" rel="nofollow">Grouping like Python’s itertools.groupby</a>. The main change is in dividing by the length of the period - 1 and then using the <code>ceil</code> function to round up to the next integer.</p> <pre><code>from math import ceil ... s = df['UpThisPeriod'][::-1] df['ConsecPeriodsUp'] = (s.groupby((s != s.shift()).cumsum()).cumsum() / (period - 1)).apply(ceil) </code></pre>
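<p>For reference, here is a small self-contained sketch of the same run-grouping idea, using a plain 0/1 &quot;up&quot; indicator rather than the full rolling-window setup above (so the final scaling by <code>period - 1</code> and <code>ceil</code> is left out):</p> <pre><code>import pandas as pd

# 1 where the period was up, 0 otherwise; most recent row first, as in the question
up = pd.Series([1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1])

s = up[::-1]                               # oldest first
runs = (s != s.shift()).cumsum()           # label each consecutive run
consec = s.groupby(runs).cumsum()[::-1]    # running count within each run
consec = consec * up                       # zero out the down periods
print(consec.tolist())                     # [1, 0, 0, 1, 0, 0, 5, 4, 3, 2, 1]
</code></pre> <p>Dividing that running count by <code>period - 1</code> and applying <code>ceil</code>, as in the snippet above, then converts the week counts into two-week-period counts.</p>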
python|pandas
0
6,594
60,271,690
Why does my test accuracy fall when I use more epochs to train my CNN?
<p>I have a problem with my CNN. I trained my model for 50 epochs (BN, Dropouts used) and i got test accuracy 92%. After that i trained my exact same network again but for 100 epochs, with just the same tuning and generalization techniques, and my test set's accuracy fell to 79%. Due to my small data set i used data augmentation (horizontal and vertical flip). I cannot explain this, can somebody help?</p> <pre><code>import numpy as np import tensorflow as tf from numpy.random import seed seed(1) tf.compat.v1.set_random_seed(2) from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) import tensorflow as tf sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True)) gpu_options = tf.GPUOptions(allow_growth=True) session = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options)) import os os.environ['KERAS_BACKEND']='tensorflow' import keras from tensorflow.keras.models import Sequential from tensorflow.keras.callbacks import Callback, ModelCheckpoint, ReduceLROnPlateau from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization from tensorflow.keras.layers import Conv2D,MaxPooling2D from keras.utils import np_utils from tensorflow.keras.optimizers import SGD,Adam from tensorflow.keras.metrics import categorical_crossentropy from keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.layers import BatchNormalization import matplotlib as plt from matplotlib import pyplot as plt from sklearn.metrics import confusion_matrix import itertools keras.initializers.glorot_normal(seed=42) train_path='C:/Users/Panagiotis Gkanos/Desktop/dataset/40X/train' train_batches=ImageDataGenerator(rescale=1./255,horizontal_flip=True, vertical_flip=True).flow_from_direct ory(train_path, target_size=[400,400], classes=['malignant','benign'], class_mode='categorical',batch_size=40) valid_path='C:/Users/Panagiotis Gkanos/Desktop/dataset/40X/valid' valid_batches=ImageDataGenerator(rescale=1./255).flow_from_directory(valid_path, target_size=[400,400], classes=['malignant','benign'], class_mode='categorical',batch_size=20) test_path='C:/Users/Panagiotis Gkanos/Desktop/dataset/40X/test' test_batches=ImageDataGenerator(rescale=1./255).flow_from_directory(test_path, target_size=[400,400], classes=['malignant','benign'], class_mode='categorical',batch_size=20) model=Sequential() model.add(Conv2D(16,(3,3),strides=2,padding='same',input_shape=(400,400,3))) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(Conv2D(16,(3,3),strides=1,padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(Conv2D(16,(3,3),strides=1,padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2),strides=2)) model.add(Conv2D(32,(3,3),strides=1,padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(Conv2D(32,(3,3),strides=1,padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(Conv2D(32,(3,3),strides=1,padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2),strides=2)) model.add(Conv2D(64,(3,3),padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2),strides=2)) model.add(Conv2D(128,(3,3),padding='same')) model.add(Activation('relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2),strides=2)) 
model.add(Flatten()) model.add(Dense(256,activation='relu')) model.add(Dropout(0.3)) model.add(Dense(128,activation='relu')) model.add(Dropout(0.3)) model.add(Dense(2,activation='softmax')) model.summary() #learn_control = ReduceLROnPlateau(monitor='val_acc', patience=5, # verbose=1,factor=0.2, min_lr=1e-7) model.compile(optimizer=Adam(lr=0.001),loss='categorical_crossentropy',metrics=['accuracy']) history=model.fit_generator(train_batches,steps_per_epoch=20 ,validation_data=valid_batches, validation_steps=8 ,epochs=100) #,callbacks=[learn_control]) model.evaluate(test_batches) def plot_loss(history): train_loss=history.history['loss'] val_loss=history.history['val_loss'] x=list(range(1,len(val_loss)+1)) plt.plot(x,val_loss,color='red',label='validation loss') plt.plot(x,train_loss,label='training loss') plt.xlabel('Epoch') plt.ylabel('Loss') plt.title('Loss vs. Epoch') plt.legend() plt.show() def plot_accuracy(history): train_acc=history.history['acc'] val_acc=history.history['val_acc'] x=list(range(1,len(val_acc)+1)) plt.plot(x,val_acc,color='red',label='validation acc') plt.plot(x,train_acc,label='training acc') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.title('Accuracy vs. Epoch') plt.legend() plt.show() plot_loss(history) plot_accuracy(history) </code></pre>
<p>This is what is referred to as "overfitting." If your network had better test performance at 50 epochs, you may want to stop there.</p> <p>In this case it is likely due to having a small data set, for which the network is unable to find general patterns that fit all cases. Instead it is fitting to small recurring details in your training data.</p> <p>For instance, suppose you were to train a CNN to classify animals as bird or dog, using only blue birds and dogs of various colors in your training set. If you then test your network with a picture of a bird that is any color other than blue, it will likely be classified as a dog, because your network learned that everything blue is a bird and everything else is a dog, instead of learning which features are unique to birds and which are unique to dogs.</p> <p>The short of it is, you likely just need a bigger, more varied data set. You could also implement early stopping, which will stop the network from training before it overfits to the data. Otherwise you can try other forms of regularization, but this is a difficult problem to overcome.</p>
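<p>As a rough, untested sketch (reusing the <code>model</code>, <code>train_batches</code> and <code>valid_batches</code> from the question), early stopping in Keras could look something like this:</p> <pre><code>from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 10 epochs and
# roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

history = model.fit_generator(train_batches,
                              steps_per_epoch=20,
                              validation_data=valid_batches,
                              validation_steps=8,
                              epochs=100,
                              callbacks=[early_stop])
</code></pre> <p>With this in place you can request many epochs and let the validation loss decide when training should stop.</p>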
python|tensorflow|keras|conv-neural-network
1
6,595
59,992,439
Pandas: subtract column values only when non-NaN
<p>I have a dataframe <code>df</code> as follows with about 200 columns:</p> <pre><code>Date Run_1 Run_295 Prc 2/1/2020 3 2/2/2020 2 6 2/3/2020 5 2 </code></pre> <p>I want to subtract column <code>Prc</code> from columns <code>Run_1 Run_295 Run_300</code> only when they are non-Nan or non empty, to get the following:</p> <pre><code>Date Run_1 Run_295 2/1/2020 2/2/2020 -4 2/3/2020 3 </code></pre> <p>I am not sure how to proceed with the above.</p> <p>Code to reproduce the dataframe:</p> <pre><code>import pandas as pd from io import StringIO s = """Date,Run_1,Run_295,Prc 2/1/2020,,,3 2/2/2020,2,,6 2/3/2020,,5,2""" df = pd.read_csv(StringIO(s)) print(df) </code></pre>
<p>Three steps: <code>melt</code> to unpivot your dataframe, <code>loc</code> to handle the assignment, and <code>GroupBy</code> to remake your original df.</p> <p>I'm sure there is a better way to do this, but it avoids loops and <code>apply</code>.</p> <pre><code>cols = df.columns

s = pd.melt(df,id_vars=['Date','Prc'],value_name='Run Rate')

s.loc[s['Run Rate'].isnull()==False,'Run Rate'] = s['Run Rate'] - s['Prc']

df_new = s.groupby([s[&quot;Date&quot;], s[&quot;Prc&quot;], s[&quot;variable&quot;]])[&quot;Run Rate&quot;].first().unstack(-1)
</code></pre> <hr> <pre><code>print(df_new[cols])

variable      Date  Run_1  Run_295  Prc
0         2/1/2020    NaN      NaN    3
1         2/2/2020   -4.0      NaN    6
2         2/3/2020    NaN      3.0    2
</code></pre>
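<p>As a side note (just a sketch, assuming all the run columns share the <code>Run_</code> prefix), plain subtraction already propagates NaN, so you could also subtract <code>Prc</code> column-wise directly:</p> <pre><code># NaN - x stays NaN, so the empty cells remain empty automatically.
run_cols = [c for c in df.columns if c.startswith('Run_')]   # the ~200 Run_* columns
df[run_cols] = df[run_cols].sub(df['Prc'], axis=0)
</code></pre>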
pandas|python-3.5
1
6,596
60,327,350
Is there a way to append a list in a pandas dataframe?
<p>I have a column in a pandas <code>dataframe</code> <code>dfr</code> in which there is an empty list. When I try to append to it, the entire column is changed.</p> <p>Below is the code attached.</p> <pre><code>N = 10
Nr = list(range(10))
dfr = pd.DataFrame(Nr,columns = ['ID'])
dfr['Assignment'] = [[]] * dfr.shape[0]

for i in range(N):
    dfr.loc[i][1].append(i)
dfr
</code></pre> <p>Now when I run this, the whole assignment column changes. Can anyone help me here? I just need to have 1 value of <code>i</code> in the list in each row.</p> <p><img src="https://i.stack.imgur.com/zJ57X.png" alt="enter image description here"></p>
<p>As mentioned by @AMC, the reason this is happening is that the lists in your dataframe cells are identical. As a result, every time you iterate over the dataframe cells you are appending a number to the same list. Therefore, I suggest you create one list per cell, as follows:</p> <pre><code>for i in range(N):
    dfr.at[i,'Assignment'] = [i]

   ID Assignment
0   0        [0]
1   1        [1]
2   2        [2]
3   3        [3]
4   4        [4]
5   5        [5]
6   6        [6]
7   7        [7]
8   8        [8]
9   9        [9]
</code></pre> <p>Then you can update these cells independently:</p> <pre><code>for i in range(N):
    dfr.at[i,'Assignment'].append(i+1)

   ID Assignment
0   0     [0, 1]
1   1     [1, 2]
2   2     [2, 3]
3   3     [3, 4]
4   4     [4, 5]
5   5     [5, 6]
6   6     [6, 7]
7   7     [7, 8]
8   8     [8, 9]
9   9    [9, 10]
</code></pre>
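<p>To see why the original assignment misbehaves, here is a tiny sketch in plain Python (nothing pandas-specific): <code>[[]] * n</code> repeats the <em>same</em> list object <code>n</code> times, so appending through any one cell mutates all of them.</p> <pre><code>lists = [[]] * 3
lists[0].append(1)
print(lists)    # [[1], [1], [1]]  &lt;- one shared list

# Independent lists need one object per cell, e.g. via a comprehension
# (dfr is the dataframe from the question):
dfr['Assignment'] = [[] for _ in range(dfr.shape[0])]
</code></pre>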
python|pandas
0
6,597
65,067,446
Custom loss function not improving with epochs
<p>I have created a custom loss function to deal with binary class imbalance, but my loss function does not improve per epoch. For metrics, I'm using precision and recall.</p> <p>Is this a design issue where I'm not picking good hyper-parameters?</p> <pre><code>weights = [np.array([.10,.90]), np.array([.5,.5]), np.array([.1,.99]), np.array([.25,.75]), np.array([.35,.65])] for weight in weights: print('Model with weights {a}'.format(a=weight)) model = keras.models.Sequential([ keras.layers.Flatten(), #input_shape=[X_train.shape[1]] keras.layers.Dense(32, activation='relu'), keras.layers.Dense(32, activation='relu'), keras.layers.Dense(1, activation='sigmoid')]) model.compile(loss=weighted_loss(weight),metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()]) n_epochs = 10 history = model.fit(X_train.astype('float32'), y_train.values.astype('float32'), epochs=n_epochs, validation_data=(X_test.astype('float32'), y_test.values.astype('float32')), batch_size=64) model.evaluate(X_test.astype('float32'), y_test.astype('float32')) pd.DataFrame(history.history).plot(figsize=(8, 5)) plt.grid(True); plt.gca().set_ylim(0, 1); plt.show() </code></pre> <p>Custom loss function to deal with class imbalance issue:</p> <pre><code>def weighted_loss(weights): weights = K.variable(weights) def loss(y_true, y_pred): y_pred /= K.sum(y_pred, axis=-1, keepdims=True) y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon()) loss = y_true * K.log(y_pred) * weights loss = -K.sum(loss, -1) return loss return loss </code></pre> <p>Output:</p> <pre><code>Model with weights [0.1 0.9] Epoch 1/10 274/274 [==============================] - 1s 2ms/step - loss: 1.1921e-08 - precision_24: 0.1092 - recall_24: 0.4119 - val_loss: 1.4074e-08 - val_precision_24: 0.1247 - val_recall_24: 0.3953 Epoch 2/10 274/274 [==============================] - 0s 1ms/step - loss: 1.1921e-08 - precision_24: 0.1092 - recall_24: 0.4119 - val_loss: 1.4074e-08 - val_precision_24: 0.1247 - val_recall_24: 0.3953 Epoch 3/10 274/274 [==============================] - 0s 1ms/step - loss: 1.1921e-08 - precision_24: 0.1092 - recall_24: 0.4119 - val_loss: 1.4074e-08 - val_precision_24: 0.1247 - val_recall_24: 0.3953 Epoch 4/10 274/274 [==============================] - 0s 969us/step - loss: 1.1921e-08 - precision_24: 0.1092 - recall_24: 0.4119 - val_loss: 1.4074e-08 - val_precision_24: 0.1247 - val_recall_24: 0.3953 [...] </code></pre> <p>Image of the input data set and the true y variable class designation: Input Dataset a <code>(17480 X 20)</code> matrix: <a href="https://i.stack.imgur.com/ABZKP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ABZKP.png" alt="enter image description here" /></a></p> <p><code>y</code> is the output array (2 classes) with dimensions <code>(17480 x 1)</code> and total number of 1's is: 1748 (the class that I want to predict)</p>
<p>Since there is no MWE present, it's rather difficult to be sure. In order to be as educative as possible, I'll lay out some observations and remarks.</p> <p>The first observation is that your custom loss function has really small values, i.e. <code>~10e-8</code>, throughout training. This seems to tell your model that performance is already really good, while in fact, when looking at the metrics you chose, it isn't. It indicates that the problem resides near the output or has something to do with the loss function. My recommendation here, since you have a classification problem, is to have a look at this post regarding weighted cross-entropy [1].</p> <p>The second observation is that you don't seem to have a benchmark for the performance of your model. In general, an ML workflow goes from very simple to more complex models. I would recommend trying a simple Logistic Regression [2] to get an idea of minimal performance. After this I would try some more complex models such as a tree booster (XGBoost/LightGBM/...) or a random forest. This is especially relevant considering you are using a full-blown neural network for <em>tabular</em> data with only about 20 numerical features, which tends to still be traditional machine learning territory.</p> <p>Once you have obtained a baseline and perhaps improved performance using a standard machine learning technique, you can look towards a neural network again. Some other recommendations, depending on the results of the traditional approaches, are:</p> <ul> <li><p>Try several optimizers and cross-validate them over different learning rates.</p> </li> <li><p>Try, as mentioned by @TyQuangTu, some simpler and shallower architectures.</p> </li> <li><p>Try an activation function that does not have the &quot;dying neuron&quot; problem, such as LeakyReLU or ELU.</p> </li> </ul> <p>Hopefully this answer can help you, and if you have any more questions I am glad to help.</p> <p>[1] <a href="https://stackoverflow.com/questions/44560549/unbalanced-data-and-weighted-cross-entropy">Unbalanced data and weighted cross entropy</a></p> <p>[2] <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html</a></p>
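<p>To make the baseline suggestion concrete, here is an untested sketch using scikit-learn on the same <code>X_train</code>/<code>y_train</code> split from the question; <code>class_weight='balanced'</code> reweights the classes inversely to their frequency, which addresses the imbalance without a custom loss:</p> <pre><code>from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Simple weighted baseline to benchmark the neural network against.
clf = LogisticRegression(max_iter=1000, class_weight='balanced')
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print('precision:', precision_score(y_test, pred))
print('recall:   ', recall_score(y_test, pred))
</code></pre>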
tensorflow|machine-learning|keras|deep-learning
2
6,598
65,191,690
How to speed up the groupby.filter() operation?
<p>I have the following code with the groupby function:</p> <pre><code>order_diff = order.groupby(&quot;CD_QTY&quot;).filter(
    lambda x: x[&quot;BOX_ID&quot;].nunique() &gt; 1
)
</code></pre> <p>But the filter function is very slow.</p> <p>I would like to use the transform or map function instead, but the results with both the transform and map functions are different from the filter function's.</p> <p>I have tried transform() and map():</p> <pre><code># transform
order_diff = order[order.groupby(&quot;CD_QTY&quot;)[&quot;BOX_ID&quot;].transform('nunique').gt(1)]
</code></pre> <pre><code># using map()
order[order[&quot;CD_QTY&quot;].map(order.groupby(&quot;CD_QTY&quot;)[&quot;BOX_ID&quot;].nunique()).gt(1)]
</code></pre> <p>I am a bit confused about why the results are different from those of filter(). I would appreciate your feedback on this. Thank you.</p>
<p>My rule of thumb is that any pandas function that takes a custom lambda will be slowed down significantly due to looping and hence performance-limited by the GIL.</p> <p>If you care about speed, use vectorized functions as much as possible:</p> <pre><code># Some mock data
n = 10_000
np.random.seed(42)
cd_qty = np.random.randint(1, n, n)
box_id = np.random.randint(1, 5, n)
order = pd.DataFrame({
    'CD_QTY': cd_qty,
    'BOX_ID': box_id
})

# The original solution. 1.59s
order_diff_1 = order.groupby(&quot;CD_QTY&quot;).filter(
    lambda x: x[&quot;BOX_ID&quot;].nunique() &gt; 1
)

# Vectorized. 8.5 ms -&gt; 187x faster
count = order.groupby(['CD_QTY'])['BOX_ID'].nunique()
cond = order['CD_QTY'].isin(count[count &gt; 1].index)
order_diff_2 = order.loc[cond]

# Check if they produce the same outputs
all(order_diff_1 == order_diff_2)   # --&gt; True
</code></pre>
python|pandas|pandas-groupby
3
6,599
50,219,044
Why are variables and constants operations in TensorFlow?
<p>Intuitively, I expected an operation to be something that takes an input and modifies it (add, subtract, divide, square root...). In fact, that's the definition of an operation I found on the Internet. Then <strong>why are variables and constants also operations in TensorFlow?</strong></p>
<p>TensorFlow generalizes your definition of operation as <em>something that takes zero or more inputs and produces zero or more outputs</em>. Concretely, a <a href="https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/Operation" rel="nofollow noreferrer">TensorFlow Operation</a> is defined as:</p> <blockquote> <p>An Operation is a node in a TensorFlow Graph that takes zero or more Tensor objects as input, and produces zero or more Tensor objects as output.</p> </blockquote> <p>Therefore:</p> <ul> <li>A constant is an operation without inputs that produces a single <code>Tensor</code> as output.</li> <li>A variable is a special (stateful) operation that takes one <code>Tensor</code> (initial value) as input and produces another <code>Tensor</code> as output.</li> </ul>
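<p>A quick way to see this for yourself (a sketch, assuming the TF 1.x graph API the question refers to) is to list the operations that a constant and a variable add to a graph:</p> <pre><code>import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(3.0, name='my_const')
    v = tf.Variable(1.0, name='my_var')

# Both show up as nodes (operations) in the graph; the variable
# actually contributes several ops (its initial value, Assign, read, ...).
for op in g.get_operations():
    print(op.name, op.type)
</code></pre>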
tensorflow
2