Columns: Unnamed: 0 (int64, 0 to 378k); id (int64, 49.9k to 73.8M); title (string, lengths 15 to 150); question (string, lengths 37 to 64.2k); answer (string, lengths 37 to 44.1k); tags (string, lengths 5 to 106); score (int64, -10 to 5.87k)
9,400
68,159,213
Python/Pandas dataframe: Finish writing to file when program stops
<p>I am appending data to dataframes in python in a parallel fashion for multiple CSV-files using the to_csv() function for pandas dataframes.</p> <p>However when I stop the program while it runs, some files are completely emptied. When I stop the program unexpectedly, I want python to either finish writing to the file or leave it as it is.</p> <p>Do you know how to implement this?</p> <p>Thanks for any help :)</p>
<pre><code>csv.flush() csv.close() </code></pre> <p>The flush() method in Python file handling forces the contents of the internal buffer to be written out to the file. In Python, files are flushed automatically when they are closed, but a programmer can also flush a file explicitly before closing it by calling flush().</p>
python|pandas|file
0
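The flush/close answer covers normal shutdown, but for the interruption case described in the question a write-to-temp-then-rename pattern is more robust: a process killed mid-write never leaves a half-written CSV behind. This is only an illustrative sketch; the helper name, column names and file paths are invented, not taken from the original post.

```python
import os
import tempfile
import pandas as pd

def atomic_to_csv(df: pd.DataFrame, path: str) -> None:
    """Write df to path so readers only ever see a complete file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # write to a temporary file on the same filesystem as the target
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".csv.tmp")
    try:
        with os.fdopen(fd, "w", newline="") as f:
            df.to_csv(f, index=False)
            f.flush()
            os.fsync(f.fileno())        # push the bytes to disk before renaming
        os.replace(tmp_path, path)      # atomic rename on POSIX and Windows
    except BaseException:
        os.remove(tmp_path)             # drop the partial temporary file
        raise

# hypothetical usage
atomic_to_csv(pd.DataFrame({"a": [1, 2, 3]}), "results.csv")
```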
9,401
68,359,147
Obtaining total seconds from a datetime.time object
<p>I have a datetime.time object as 02:00:00 i would like to convert it to the total seconds which should be 7200 seconds.</p>
<p>You can combine the time with a reference date to get a datetime object. If you then subtract that reference date, you get a timedelta object, from which you can take the <code>total_seconds</code>:</p> <pre><code>from datetime import datetime, time t = time(2,0,0) ts = (datetime.combine(datetime.min, t) - datetime.min).total_seconds() print(ts) # 7200.0 </code></pre> <hr /> <p>With <code>pandas</code>, I'd use the string representation of the time object column (Series) and convert it to timedelta datatype - which then allows you to use the <code>dt</code> accessor to get the total seconds:</p> <pre><code>import pandas as pd df = pd.DataFrame({'time': [time(2,0,0)]}) df['totalseconds'] = pd.to_timedelta(df['time'].astype(str)).dt.total_seconds() # df['totalseconds'] # 0 7200.0 # Name: totalseconds, dtype: float64 </code></pre>
python|pandas|datetime|timedelta
3
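For a single datetime.time value there is also a purely arithmetic route that avoids building an intermediate datetime; this is an extra illustration rather than part of the accepted answer.

```python
from datetime import time

def time_to_seconds(t: time) -> float:
    # convert hours, minutes and seconds by hand; microseconds kept for completeness
    return t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6

print(time_to_seconds(time(2, 0, 0)))  # 7200.0
```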
9,402
68,091,936
Grouping Nearby Contours/Bounding Rectangles
<p>I have an image containing obscure rectangular shapes: <a href="https://i.stack.imgur.com/0JTGP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0JTGP.png" alt="enter image description here" /></a></p> <p>Using opencv I would like to group nearby rectangles to have an expected output as: <a href="https://i.stack.imgur.com/UWI1b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UWI1b.png" alt="enter image description here" /></a></p> <p>I've used the Dilate Morphological Transformation to enlarge the shapes so that they would be joined to create a larger shape which produces: <a href="https://i.stack.imgur.com/KFpd3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KFpd3.png" alt="enter image description here" /></a></p> <p>It doesn't join the larger rectangles to right very well, with a kernel size (40,40) any larger the smaller rectangles join to be one big one instead of separates.</p> <p>Possible to use cv2.minAreaRect(c) and group by similar angles of the rectangles? or any feature based detection in getting the number of rectangles in a certain area?</p>
<p>A thin vertical kernel should do what you want. Just make it taller than the maximum of the minimum 1/2 gaps over all objects you want to connect. Looks like about 65 pixels should work. Here is the morphology close result in Python/OpenCV that seems to connect the parts you want.</p> <p>Input:</p> <p><a href="https://i.stack.imgur.com/MfeMa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MfeMa.png" alt="enter image description here" /></a></p> <pre><code>import cv2 import numpy as np # read image as grayscale img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE) # threshold to binary thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)[1] # apply morphology kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,65)) morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel) # write results cv2.imwrite(&quot;lines_morphology.png&quot;, morph) # show results cv2.imshow(&quot;thresh&quot;, thresh) cv2.imshow(&quot;morph&quot;, morph) cv2.waitKey(0) </code></pre> <p>Result:</p> <p><a href="https://i.stack.imgur.com/plRGU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/plRGU.png" alt="enter image description here" /></a></p>
python|numpy|opencv|image-processing
2
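The morphology answer stops at the merged mask; since the question ultimately asks for grouped bounding rectangles, contour extraction is the usual next step. A hedged sketch (file names are placeholders, and note that cv2.findContours returns two values in OpenCV 4 but three in OpenCV 3):

```python
import cv2

# mask produced by the morphology-close step above (placeholder file name)
morph = cv2.imread("lines_morphology.png", cv2.IMREAD_GRAYSCALE)

# external contours of the merged blobs, one per group of shapes
contours, _ = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]
print(len(boxes), "groups found")

# draw the grouped rectangles for inspection
vis = cv2.cvtColor(morph, cv2.COLOR_GRAY2BGR)
for x, y, w, h in boxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("grouped_boxes.png", vis)
```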
9,403
59,077,221
Merging DFs from two different lists in python
<p>There are two lists where elements are DFs and having <code>datetimeindex</code>:</p> <pre><code>lst_1 = [ df1, df2, df3, df4] #columns are same here 'price' lst_2 = [df1, df2, df3, df4] #columns are same here 'quantity' </code></pre> <p>I am doing it with one by one using the pandas merge function. I tried to do something where i add the two list and make function like this:</p> <pre><code>def df_merge(df1 ,df1): p_q_df1 = pd.merge(df1,df1, on='Dates') return p_q_df1 #this merged df has now price and quantity representing df1 from list! and list_2 </code></pre> <p>still i have to apply to every pair again. Is there a better way, maybe in loop to automate this? </p>
<p>Consider elementwise looping with <code>zip</code>, which can be handled in a list comprehension.</p> <pre><code># DATES AS INDEX final_lst = [pd.concat([i, j], axis=1) for i, j in zip(lst_1, lst_2)] # DATES AS COLUMN final_lst = [pd.merge(i, j, on='Dates') for i, j in zip(lst_1, lst_2)] </code></pre>
python|pandas|dataframe|merge
2
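A small self-contained run of the accepted one-liners, with an invented date index and values, may make the pairing behaviour of zip clearer:

```python
import pandas as pd

dates = pd.date_range("2019-01-01", periods=3, name="Dates")
lst_1 = [pd.DataFrame({"price": [1.0, 2.0, 3.0]}, index=dates) for _ in range(2)]
lst_2 = [pd.DataFrame({"quantity": [10, 20, 30]}, index=dates) for _ in range(2)]

# dates as index: concatenate each (price, quantity) pair side by side
final_lst = [pd.concat([p, q], axis=1) for p, q in zip(lst_1, lst_2)]
print(final_lst[0])
#             price  quantity
# Dates
# 2019-01-01    1.0        10
# 2019-01-02    2.0        20
# 2019-01-03    3.0        30
```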
9,404
59,319,729
Finding min, max, avg in Pandas, Python for all rows with the same first column
<p>Is it possible to find minimum, maximum and average value of all data with a same first column?</p> <p>For example, for first column <code>1_204192587</code>:</p> <ol> <li><p>take into account all rows and columns from 4 to n</p> </li> <li><p>find min, max and avg of all entries in columns 4+ and all rows with <code>**1_204192587**</code> value in first column.</p> <p>Meaning, to do kind of describing data for every unique Start value shown below.</p> </li> </ol> <blockquote> <pre><code> `In: data.groupby([&quot;Start&quot;]).groups.keys() out: dict_keys(['1_204192587', '1_204197200'])` </code></pre> </blockquote> <p><a href="https://i.stack.imgur.com/LJOvc.png" rel="nofollow noreferrer">This is how data frame looks like </a></p> <p>I tried</p> <pre><code>df=data.groupby([&quot;Start&quot;]).describe() </code></pre> <p>But This is not what I want.</p> <p>I also try to specify axis while describing,</p> <pre><code>data.apply.(pd.DataFrame.describe, axis=1) </code></pre> <p>but I got error.</p> <p>Desired output</p> <pre><code>unique key/first column value MIN MAX AVG 1_204192587 * * * 1_204197200 * * * </code></pre> <p>I am a beginner, thank you in advance for any response.</p>
<p>You can use the below:</p> <pre class="lang-py prettyprint-override"><code>df.loc[4:].describe() </code></pre> <p><code>df</code> is your dataframe <br/> <code>[4:]</code> chooses the 5th row and on <br/> <code>.describe()</code> gives you a statistical summary (count, mean, min, max ...)</p> <p>You can also add <code>.transpose()</code> at the end to get the output you asked for.</p> <p>And if you want to assign it to another variable (dataframe), it will look like:</p> <pre class="lang-py prettyprint-override"><code>new_df = df.loc[4:].describe().transpose() </code></pre>
python|pandas|max|average|min
2
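The loc-slicing answer summarises rows, while the question asks for one min/max/avg row per unique Start value; a groupby-based alternative is sketched below on an invented frame (the real data has many more value columns).

```python
import pandas as pd

df = pd.DataFrame({
    "Start": ["1_204192587", "1_204192587", "1_204197200"],
    "v1": [1.0, 3.0, 10.0],
    "v2": [2.0, 4.0, 20.0],
})

# melt the value columns so min/max/mean run over every entry of each group
long = df.melt(id_vars="Start", value_name="value")
summary = long.groupby("Start")["value"].agg(MIN="min", MAX="max", AVG="mean")
print(summary)
#                MIN   MAX   AVG
# Start
# 1_204192587   1.0   4.0   2.5
# 1_204197200  10.0  20.0  15.0
```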
9,405
59,193,132
Time difference between two columns in Pandas
<p>How can I subtract the time between two columns and convert it to minutes </p> <pre><code> Date Time Ordered Time Delivered 0 1/11/19 9:25:00 am 10:58:00 am 1 1/11/19 10:16:00 am 11:13:00 am 2 1/11/19 10:25:00 am 10:45:00 am 3 1/11/19 10:45:00 am 11:12:00 am 4 1/11/19 11:11:00 am 11:47:00 am </code></pre> <p>I want to subtract the Time_delivered - Time_ordered to get the minutes the delivery took.</p> <pre><code>df.time_ordered = pd.to_datetime(df.time_ordered) </code></pre> <p>This doesn't output the correct time instead it adds today's date the time</p>
<p>Convert both time columns to datetimes, get difference, convert to seconds by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.total_seconds.html" rel="noreferrer"><code>Series.dt.total_seconds</code></a> and then to minutes by division by <code>60</code>:</p> <pre><code>df['diff'] = (pd.to_datetime(df.time_ordered, format='%I:%M:%S %p') .sub(pd.to_datetime(df.time_delivered, format='%I:%M:%S %p')) .dt.total_seconds() .div(60)) </code></pre>
pandas|numpy|dataframe
5
9,406
45,109,115
compare all couples of numeric columns of a dataframe
<p>I have the following csv file:</p> <blockquote> <pre><code>C1,C2,C3,C4,C5,C6,C7 0,1,1,1,1,1,1 1,1,1,1,1,1,1 0,1,1,1,0,0,1 0,1,0,1,0,0,1 0,1,1,1,1,1,1 1,1,1,1,1,1,1 </code></pre> </blockquote> <p>I would like to create a dataframe comparing columns pairs. I would like to count the number of times each pair of column share the value of 1. So, for the data showed at the beginning of the question, I would like to generate the following dataframe:</p> <pre><code> C1 C2 C3 C4 C5 C6 C7 C1 C2 C3 C4 C5 C6 C7 </code></pre> <p><strong>[C1,C1]</strong> will contain the number of times C1 is equal to 1:</p> <blockquote> <p>awk -F',' '$1==1' f.csv | wc -l</p> </blockquote> <p><strong>[C1,C2]</strong> will contain the number of times C1 is equal to C2 and equal to 1.</p> <blockquote> <p>awk -F',' '$1==1 &amp;&amp; $1==$2' f.csv | wc -l</p> </blockquote> <p>Is there any easier way to calculate this? Maybe using <code>pandas</code>?</p>
<p>If the data frame contains only 1 and 0, you can use matrix multiplication <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dot.html" rel="nofollow noreferrer">dot</a>:</p> <pre><code>df = pd.read_csv("/path/to/csvfile") df.T.dot(df) </code></pre> <p><a href="https://i.stack.imgur.com/BF42V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BF42V.png" alt="enter image description here"></a></p>
python|pandas|dataframe
4
9,407
44,931,216
TensorFlow image reading queue empty
<p>I'm trying to use the pipeline for reading images to the CNN. I used <code>string_input_producer()</code> to obtain the queue of file names, but it seems to hang there without doing anything. Below is my code, please give me some advise of how to make it work.</p> <pre><code>def read_image_file(filename_queue, labels): reader = tf.WholeFileReader() key, value = reader.read(filename_queue) image = tf.image.decode_png(value, channels=3) image = tf.cast(image, tf.float32) resized_image = tf.image.resize_images(image, [224, 112]) with tf.Session() as sess: label = getLabel(labels, key.eval()) return resized_image, label def input_pipeline(filename_queue, queue_names, batch_size, num_epochs, labels): image, label = read_image_file(filename_queue, labels) min_after_dequeue = 10 * batch_size capacity = 20 * batch_size image_batch, label_batch = tf.train.shuffle_batch( [image, label], batch_size=batch_size, num_threads=1, capacity=capacity, min_after_dequeue=min_after_dequeue) return image_batch, label_batch train_queue = tf.train.string_input_producer(trainnames, shuffle=True, num_epochs=epochs) train_batch, train_label = input_pipeline(train_queue, trainnames, batch_size, epochs, labels) prediction = AlexNet(x) #Training with tf.name_scope("cost_function") as scope: cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=train_label, logits=prediction(train_batch))) tf.summary.scalar("cost_function", cost) train_step = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(cost) #Accuracy with tf.name_scope("accuracy") as scope: correct_prediction = tf.equal(tf.argmax(prediction,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) tf.summary.scalar("accuracy", accuracy) merged = tf.summary.merge_all() #Session with tf.Session() as sess: print('started') sess.run(tf.global_variables_initializer()) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord, start=True) sess.run(threads) try: for step in range(steps_per_epch * epochs): print('step: %d' %step) sess.run(train_step) except tf.errors.OutOfRangeError as ex: pass coord.request_stop() coord.join(threads) </code></pre>
<p>Your code is not completely self-contained as the <code>get_label</code> method is not defined.</p> <p>But it is very likely that the issue you have comes from these lines in the <code>read_image_file</code> method:</p> <pre><code>with tf.Session() as sess: label = getLabel(labels, key.eval()) </code></pre> <p>The <code>key.eval</code> part tries to dequeue an element of a queue which has not started yet. You shouldn't create any session before your input pipeline is defined (nor try to eval <code>key</code> (and possibly <code>labels</code>)). The <code>get_label</code> method should only perform tensor operations on <code>labels</code> and <code>key</code> and return a <code>label</code> tensor..</p> <p>For example, you can use these <a href="https://www.tensorflow.org/api_guides/python/string_ops" rel="nofollow noreferrer"><code>tensor</code> string operations</a> so they will be part of the graph.</p>
machine-learning|tensorflow|computer-vision
0
9,408
45,020,672
Convert PyQt5 QPixmap to numpy ndarray
<p>I have pixmap:</p> <pre><code>pixmap = self._screen.grabWindow(0, self._x, self._y, self._width, self._height) </code></pre> <p>I want to convert it to OpenCV format. I tried to convert it to <code>numpy.ndarray</code> as described <a href="https://stackoverflow.com/questions/37552924/convert-qpixmap-to-numpy">here</a> but I got error <code>sip.voidptr object has an unknown size</code></p> <p>Is there any way to get numpy array (same format as <code>cv2.VideoCapture</code> <code>read</code> method returns)?</p>
<p>I got numpy array using this code:</p> <pre><code>channels_count = 4 pixmap = self._screen.grabWindow(0, self._x, self._y, self._width, self._height) image = pixmap.toImage() s = image.bits().asstring(self._width * self._height * channels_count) arr = np.fromstring(s, dtype=np.uint8).reshape((self._height, self._width, channels_count)) </code></pre>
python|numpy|pyqt|pyqt5|qpixmap
7
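np.fromstring is deprecated in recent NumPy releases, and sizing the sip.voidptr explicitly also avoids the "unknown size" error quoted in the question. A variant of the accepted snippet, wrapped in a helper (the BGRA byte order noted in the comment is typical on little-endian platforms, not guaranteed everywhere):

```python
import numpy as np
from PyQt5.QtGui import QImage, QPixmap

def qpixmap_to_array(pixmap: QPixmap) -> np.ndarray:
    """Return an (H, W, 4) uint8 array (usually BGRA) from a QPixmap."""
    image = pixmap.toImage().convertToFormat(QImage.Format_ARGB32)
    width, height = image.width(), image.height()
    ptr = image.bits()
    ptr.setsize(height * width * 4)   # avoids "sip.voidptr object has an unknown size"
    arr = np.frombuffer(ptr, dtype=np.uint8).reshape((height, width, 4))
    return arr.copy()                 # copy so the array outlives the QImage buffer
```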
9,409
44,875,888
Running a for loop through 2 arrays of different lengths and index manipulation
<p>I need to run a for loop through 2 arrays of different lengths. One array is 8760 by 1 and the other is 10 by 1. If a value in the short array is equal to to the index of a value in the long array, I don't want to change anything. If the index of a value in the long array isn't equal to a value in the short array, I want to set it equal to zero. I know the code I have is wrong, but it's a start. I couldn't attach the longer array, but it could be random values for now.</p> <p>I = np.array([4993, 4994, 4995, 5016, 5017, 5018, 5019, 5042, 5043, 5066])</p> <pre><code>import numpy as np A = np.loadtxt('A.txt') I = np.loadtxt('I.txt') for i in A: for j in I: if A[j] != I[j]: i = 0 </code></pre>
<p><strong>Main principle</strong></p> <p>When you need a <code>for</code> loop to walk across the indices of an array, use a "counted" loop -- a loop that iterates across a set of integers. Use <code>for index in range(len(your_list))</code>.</p> <p><strong>Your specific problem</strong></p> <p>Given your <code>I</code>, I think you are asking to set all values of <code>A</code> to <code>0</code> (e.g. assign <code>A[5] = 0</code>) except, for example, <code>A[4993]</code> will be unchanged, and so on for indices in <code>I</code>.</p> <pre><code>good_elements_indices = I all_elements = A for all_elements_index in range(len(all_elements)): if all_elements_index not in good_elements_indices: A[all_elements_index] = 0 </code></pre> <p><strong>Additional comments</strong></p> <ul> <li>Python style uses variable names that are lower case with underscores between words and no abbreviations. Hence I renamed <code>I</code> and <code>A</code>. See <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow noreferrer">PEP 8: Python Style Guide</a></li> <li>The <code>in</code> operator is core Python. Because you're already importing <code>numpy</code> and your <code>good_elements_indices</code> aka <code>I</code> is already a <code>numpy.array</code> object, it is faster (though less general) to use the <code>numpy.isin</code> function as suggested by <code>mrcl</code>.</li> <li>Your question talked about "running a for loop through 2 arrays." That suggests not one <code>for</code> loop but two <code>for</code> loops, nested, as your question's code shows. The <code>in</code> operator actually iterates like a <code>for</code> loop across the list, testing for existence within the set.</li> </ul>
python|arrays|numpy|for-loop|indexing
1
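Since numpy is already imported, the same zeroing can be done without any Python-level loop; np.isin is the vectorised counterpart of the in test mentioned in the answer's closing comments. A sketch with a stand-in for the long array:

```python
import numpy as np

I = np.array([4993, 4994, 4995, 5016, 5017, 5018, 5019, 5042, 5043, 5066])
A = np.random.rand(8760)               # stand-in for the 8760-element array from A.txt

keep = np.isin(np.arange(len(A)), I)   # True exactly at the indices listed in I
A[~keep] = 0                           # zero out every other element
```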
9,410
56,981,256
Python: append() and extend()
<p>I have a .txt file of 3 million rows. The file contains data that looks like this:</p> <pre><code># RSYNC: 0 1 1 0 512 0 #$SOA 5m localhost. hostmaster.localhost. 1906022338 1h 10m 5d 1s # random_number_ofspaces_before_this text $TTL 60s #more random information :127.0.1.2:https://www.spamhaus.org/query/domain/$ test :127.0.1.2:https://www.spamhaus.org/query/domain/$ .0-0m5tk.com .0-1-hub.com .zzzy1129.cn :127.0.1.4:https://www.spamhaus.org/query/domain/$ .0-il.ml .005verf-desj.com .01accesfunds.com </code></pre> <p>I am trying to parse it such that it looks like:</p> <pre><code>+--------------------+--------------+-------------+-----------------------------------------------------+ | domain_name | period_count | parsed_code | raw_code | +--------------------+--------------+-------------+-----------------------------------------------------+ | test | 0 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ | | .0-0m5tk.com | 2 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ | | .0-1-hub.com | 2 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ | | .zzzy1129.cn | 2 | 127.0.1.2 | :127.0.1.2:https://www.spamhaus.org/query/domain/$ | | .0-il.ml | 2 | 127.0.1.4 | :127.0.1.4:https://www.spamhaus.org/query/domain/$ | | .005verf-desj.com | 2 | 127.0.1.4 | :127.0.1.4:https://www.spamhaus.org/query/domain/$ | | .01accesfunds.com | 2 | 127.0.1.4 | :127.0.1.4:https://www.spamhaus.org/query/domain/$ | +--------------------+--------------+-------------+-----------------------------------------------------+ </code></pre> <p>To that end, I have come up with the following:</p> <pre><code>rows = [] raw_code = None parsed_code = None with open('dbl-sr-2019-06-02T23_38_27Z.txt', 'r') as f: # assumes the file name is input.txt for line in f: line = line.rstrip('\n') if line.startswith(':127'): raw_code = line parsed_code = re.split(":", line)[1] continue if line.startswith('#'): continue rows.append((line, parsed_code)) # rows.append((raw_code)) # rows.extend((line, parsed_code, raw_code)) # rows.extend((raw_code)) import pandas as pd df = pd.DataFrame(rows, columns=['domain_name', "parsed_code" 'raw_spamhaus_return_code']) print(df) </code></pre> <p>The commented out lines in the code chunk above either, did not produce the output I wanted, or gave an error. I am struggling to build a Pandas dataframe with more than 2 columns. I can get <code>domain_name</code> and one other column. It seems I am unable to get the code down to correctly use the <code>.append</code> and <code>.extend</code> functions. Can someone please provide guidance?</p>
<p>The likely source of your problem is a missing comma.</p> <p>This:</p> <pre><code>df = pd.DataFrame(rows, columns=[ 'domain_name', 'parsed_code', 'raw_spamhaus_return_code']) </code></pre> <p>is not the same as:</p> <pre><code>df = pd.DataFrame(rows, columns=[ 'domain_name', "parsed_code" 'raw_spamhaus_return_code']) </code></pre> <p>because (note the missing comma):</p> <pre><code>"parsed_code" 'raw_spamhaus_return_code' </code></pre> <p>becomes one string.</p> <h3>Test Code:</h3> <pre><code>import re data = [x.strip() for x in """ # RSYNC: 0 1 1 0 512 0 #$SOA 5m localhost. hostmaster.localhost. 1906022338 1h 10m 5d 1s # random_number_ofspaces_before_this text $TTL 60s #more random information :127.0.1.2:https://www.spamhaus.org/query/domain/$ test :127.0.1.2:https://www.spamhaus.org/query/domain/$ .0-0m5tk.com .0-1-hub.com .zzzy1129.cn :127.0.1.4:https://www.spamhaus.org/query/domain/$ .0-il.ml .005verf-desj.com .01accesfunds.com """.split('\n')[1:-1]] rows = [] raw_code = None parsed_code = None for line in data: line = line.rstrip('\n') if line.startswith(':127'): raw_code = line parsed_code = re.split(":", line)[1] continue if line.startswith('#'): continue rows.append((line, line.count('.'), parsed_code, raw_code)) import pandas as pd df = pd.DataFrame(rows, columns=[ 'domain_name', 'period_count ', 'parsed_code', 'raw_spamhaus_return_code']) print(df) </code></pre> <h3>Results:</h3> <pre><code> domain_name period_count parsed_code \ 0 test 0 127.0.1.2 1 .0-0m5tk.com 2 127.0.1.2 2 .0-1-hub.com 2 127.0.1.2 3 .zzzy1129.cn 2 127.0.1.2 4 .0-il.ml 2 127.0.1.4 5 .005verf-desj.com 2 127.0.1.4 6 .01accesfunds.com 2 127.0.1.4 raw_spamhaus_return_code 0 :127.0.1.2:https://www.spamhaus.org/query/doma... 1 :127.0.1.2:https://www.spamhaus.org/query/doma... 2 :127.0.1.2:https://www.spamhaus.org/query/doma... 3 :127.0.1.2:https://www.spamhaus.org/query/doma... 4 :127.0.1.4:https://www.spamhaus.org/query/doma... 5 :127.0.1.4:https://www.spamhaus.org/query/doma... 6 :127.0.1.4:https://www.spamhaus.org/query/doma... </code></pre>
python|regex|pandas
5
9,411
57,181,569
Random colors by default in matplotlib
<p>I had a look at Kaggle's <a href="https://www.kaggle.com/residentmario/univariate-plotting-with-pandas" rel="nofollow noreferrer">univariate-plotting-with-pandas</a>. There's this line which generates bar graph. </p> <p><code>reviews['province'].value_counts().head(10).plot.bar()</code></p> <p>I don't see any color scheme defined specifically. I tried plotting it using <code>jupyter notebook</code> but could see only one color instead of all multiple colors as at Kaggle.</p> <p>I tried reading the document and online help but couldn't get any method to generate these colors just by the line above.</p> <p>How do we do that? Is there a config to set this randomness by default?</p> <p><a href="https://i.stack.imgur.com/65U05.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/65U05.png" alt="At Kaggle:"></a></p> <p><a href="https://i.stack.imgur.com/72jc9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/72jc9.png" alt="Jupyter Notebook:"></a></p>
<p>In seaborn this is not a problem:</p> <pre><code>import seaborn as sns sns.countplot(x='province', data=reviews) </code></pre> <p>In plain matplotlib it is not built in, but it is possible if you convert the values to a one-row DataFrame:</p> <pre><code>reviews['province'].value_counts().head(10).to_frame(0).T.plot.bar() </code></pre> <p>Or use some <a href="https://matplotlib.org/3.1.1/tutorials/colors/colormaps.html#qualitative" rel="nofollow noreferrer">qualitative colormap</a>:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt N = 10 reviews['province'].value_counts().head(N).plot.bar(color=plt.cm.Paired(np.arange(N))) </code></pre> <hr> <pre><code>reviews['province'].value_counts().head(N).plot.bar(color=plt.cm.Pastel1(np.arange(N))) </code></pre>
python|pandas|matplotlib
4
9,412
57,065,878
Split pandas column and create new columns that count the split values
<p>I have a goofy data where one column contains multiple values slammed together with a comma:</p> <pre class="lang-py prettyprint-override"><code>In [62]: df = pd.DataFrame({'U': ['foo', 'bar', 'baz'], 'V': ['a,b,a,c,d', 'a,b,c', 'd,e']}) In [63]: df Out[63]: U V 0 foo a,b,a,c,d 1 bar a,b,c 2 baz d,e </code></pre> <p>Now I want to split column <code>V</code>, drop it, and add columns <code>a</code> through <code>e</code>. Columns <code>a</code> through <code>e</code> should contains the count of the occurrences of that letter in that row:</p> <pre><code>In [62]: df = pd.DataFrame({'U': ['foo', 'bar', 'baz'], 'V': ['a,b,a,c,d', 'a,b,c', 'd,e']}) In [63]: df Out[63]: U a b c d e 0 foo 2 1 1 1 0 1 bar 1 1 1 0 0 2 baz 0 0 0 1 1 </code></pre> <p>Maybe some combination of <code>df['V'].str.split(',')</code> and <code>pandas.get_dummies</code> but I can't quite work it out. </p> <p>Edit: apparently I have to justify why my question is not a duplicate. I think why is intuitively obvious to the most casual observer.</p>
<p>This is <code>str.get_dummies</code></p> <pre><code>pd.concat([df,df.pop('V').str.split(',',expand=True).stack().str.get_dummies().sum(level=0)],1) Out[602]: U a b c d e 0 foo 2 1 1 1 0 1 bar 1 1 1 0 0 2 baz 0 0 0 1 1 </code></pre>
python|pandas
5
9,413
57,257,620
How can I loop the columns of a dataframe and then calling them as df.col without having python think col is a method?
<p>I'm trying to loop all the columns of a dataframe but I can't call them inside the loop as df.col, apparently it reads col as a method and gives me an error because no such method exists. </p> <pre><code>for bin in bins: for col in app_train.columns: if app_train[col].isnull().any(): app_train.loc[(app_train.col.isnull())&amp;(app_train.YEARS_BINNED == bin),col] = app_train.col.mean() else: continue </code></pre> <p>What I was trying to do was filtering the dataframe by bins, and then imputing the missing values using the mean of each column of the filtered dataframe.</p> <p>This is what it tells me: AttributeError: 'DataFrame' object has no attribute 'col'</p>
<p>The for loops are not needed since pandas can deal with this kind of operation:</p> <pre><code>df.fillna(df.groupby(['YEARS_BINNED']).transform('mean'), inplace=True) </code></pre> <p>This line will fill the NaN values of your dataframe with the mean of the respective column grouped by <code>YEARS_BINNED</code>.</p>
python-3.x|pandas|loops
0
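Because the accepted answer is a single line, a tiny reproducible example may help show what the transform('mean') fill does; the column names follow the question, the values are invented.

```python
import numpy as np
import pandas as pd

app_train = pd.DataFrame({
    "YEARS_BINNED": ["0-5", "0-5", "0-5", "5-10", "5-10"],
    "INCOME":       [100.0, np.nan, 300.0, 50.0, np.nan],
})

# per-bin column means, broadcast back to the original shape, used only where values are missing
app_train.fillna(app_train.groupby(["YEARS_BINNED"]).transform("mean"), inplace=True)
print(app_train)
#   YEARS_BINNED  INCOME
# 0          0-5   100.0
# 1          0-5   200.0   <- mean of 100 and 300 within the 0-5 bin
# 2          0-5   300.0
# 3         5-10    50.0
# 4         5-10    50.0   <- only one non-missing value in the 5-10 bin
```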
9,414
57,118,106
ValueError: Input contains NaN, infinity or a value too large for dtype('float32'). Why?
<p>I have gone through all the similar questions but none of them answer my query. I am using random forest classifier as follows:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0) clf.fit(X_train, y_train) clf.predict(X_test) </code></pre> <p>It's giving me this error:</p> <p><code>ValueError: Input contains NaN, infinity or a value too large for dtype('float32').</code></p> <p>However, when I do <code>X_train.describe()</code> I don't see any missing values. In fact, actually, I already took care of the missing values before even splitting my data. </p> <p>When I do the following:</p> <p><code>np.where(X_train.values &gt;= np.finfo(np.float32).max)</code></p> <p>I get:</p> <p><code>(array([], dtype=int64), array([], dtype=int64))</code></p> <p>And for these commands:</p> <pre class="lang-py prettyprint-override"><code>np.any(np.isnan(X_train)) #true np.all(np.isfinite(X_train)) #false </code></pre> <p>And after getting the above results, I also tried this:</p> <p><code>X_train.fillna(X_train.mean())</code></p> <p>but I get the same error and it doesn't fix anything.</p> <p>Please tell me where I'm going wrong. Thank you!</p>
<p><strong>Solution</strong><br/><code>X_train = X_train.fillna(X_train.mean())</code></p> <p><strong>Explanation</strong><br/> <code>np.any(np.isnan(X_train))</code> evals to <code>True</code>, therefore <code>X_train</code> contains some <code>nan</code> values. Per pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer">fillna() docs</a>, DataFrame.fillna() returns a copy of the DataFrame with missing values filled. You must reassign X_train to the return value of fillna(), like <code>X_train = X_train.fillna(X_train.mean())</code></p> <p><strong>Example</strong></p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; &gt;&gt;&gt; a = pd.DataFrame(np.arange(25).reshape(5, 5)) &gt;&gt;&gt; a[2][2] = np.nan &gt;&gt;&gt; &gt;&gt;&gt; a 0 1 2 3 4 0 0 1 2.0 3 4 1 5 6 7.0 8 9 2 10 11 NaN 13 14 3 15 16 17.0 18 19 4 20 21 22.0 23 24 &gt;&gt;&gt; &gt;&gt;&gt; a.fillna(1) 0 1 2 3 4 0 0 1 2.0 3 4 1 5 6 7.0 8 9 2 10 11 1.0 13 14 3 15 16 17.0 18 19 4 20 21 22.0 23 24 &gt;&gt;&gt; &gt;&gt;&gt; a 0 1 2 3 4 0 0 1 2.0 3 4 1 5 6 7.0 8 9 2 10 11 NaN 13 14 3 15 16 17.0 18 19 4 20 21 22.0 23 24 &gt;&gt;&gt; &gt;&gt;&gt; a = a.fillna(1) &gt;&gt;&gt; a 0 1 2 3 4 0 0 1 2.0 3 4 1 5 6 7.0 8 9 2 10 11 1.0 13 14 3 15 16 17.0 18 19 4 20 21 22.0 23 24 &gt;&gt;&gt; </code></pre>
python|pandas|numpy|scikit-learn|jupyter
1
9,415
57,231,943
How can I randomly shuffle the labels of a Pytorch Dataset?
<p>I am new to Pytorch, and I am having troubles with some technicalities. I have downloaded the MNIST dataset, using the following command:</p> <pre><code>train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) </code></pre> <p>I now need to run some experiments on this dataset, but using random labels. How can I shuffle/reassign them randomly? </p> <p>I am trying to do it manually, but it tells me that " 'tuple' object does not support item assignment". How can I do it then? </p> <p>Second question: How can I remove a training point from the dataset? It gives me the same error, when I try to do it. </p> <p>Thank you!!</p>
<p>If you only want to shuffle the targets, you can use <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#mnist" rel="nofollow noreferrer"><code>target_transform</code></a> argument. For example:</p> <pre class="lang-py prettyprint-override"><code>train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), target_transform=lambda y: torch.randint(0, 10, (1,)).item(), download=True) </code></pre> <p>If you want some more elaborate tweaking of the dataset, you can wrap <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#mnist" rel="nofollow noreferrer"><code>mnist</code></a> completely</p> <pre class="lang-py prettyprint-override"><code>class MyTwistedMNIST(torch.utils.data.Dataset): def __init__(self, my_args): super(MyTwistedMNIST, self).__init__() self.orig_mnist = dset.MNIST(...) def __getitem__(self, index): x, y = self.orig_mnist[index] # get the original item my_x = # change input digit image x ? my_y = # change the original label y ? return my_x, my_y def __len__(self): return self.orig_mnist.__len__() </code></pre> <p>If there are elements of the original mnist you want to completely discard, than by wrapping around the original mnist, your <code>MyTwistedMNIST</code> class can return <code>len</code> smaller than <code>self.orig_mnist.__len__()</code> reflecting the amount of actual mnist examples you want to handle. Moreover, you will need to map the new <code>index</code> of examples to the original mnist index.</p>
machine-learning|computer-vision|dataset|pytorch
2
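One possible way to fill in the wrapper template for the random-label experiment is sketched below; the labels are drawn once at construction so each image keeps the same (random) label across epochs. This is an illustration, not the answerer's code.

```python
import torch
from torch.utils.data import Dataset
from torchvision import datasets, transforms

class RandomLabelMNIST(Dataset):
    """MNIST with every target replaced by a fixed random label in 0..9."""
    def __init__(self, root="./data", train=True):
        self.mnist = datasets.MNIST(root=root, train=train,
                                    transform=transforms.ToTensor(), download=True)
        # one random label per example, drawn once so the assignment is stable
        self.random_targets = torch.randint(0, 10, (len(self.mnist),))

    def __getitem__(self, index):
        x, _ = self.mnist[index]                  # discard the true label
        return x, int(self.random_targets[index])

    def __len__(self):
        return len(self.mnist)

train_dataset = RandomLabelMNIST()
```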
9,416
57,062,108
Converting Items from Pandas Series to Date Time
<p>I have a Pandas Series ("timeSeries") that includes a time of day. Some of the items are blank, some are actual times (08:00; 13:00), some are indications of time (morning, early afternoon). </p> <p>As the time of day I have is New York, I would like to convert the items in the time format to London time. Using <code>pd.to_datetime(timeSeries, error='ignore')</code> does not work when I also have the addition of <code>timedelta(hours=5)</code>. So I attempted to add a if condition but it does not work. </p> <p>Sample Initial DataFrame:</p> <pre><code>dfNY = pd.DataFrame({'TimeSeries': [13:00, nan, 06:00, 'Morning', 'Afternoon', nan, nan, 01:30]) </code></pre> <p>Desired Result: </p> <pre><code>dfLondon = pd.DataFrame({'TimeSeries': [18:00, nan, 11:00, 'Morning', 'Afternoon', nan, nan, 06:30]) </code></pre> <p>Any help or simplification of my code would be great.</p> <pre><code>london = dt.datetime.now(timezone("America/New_York")) newYork = dt.datetime.now(timezone("Europe/London")) timeDiff = (london - dt.timedelta(hours = newYork.hour)).hour for dayTime in timeSeries: if dayTime == "%%:%%": print(dayTime) dayTime = pd.to_datetime(dayTime) + dt.timedelta(hours=timeDiff) return timeSeries </code></pre> <p>Update: using pytz method in comment below yields a timezone that is off my 5min. How do we fix this?</p>
<p>Using the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/indexing.html#id3" rel="nofollow noreferrer"><code>.dt</code> accessor</a>, you can set a timezone on your values and then convert it to another one, using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.tz_localize.html" rel="nofollow noreferrer"><code>tz_localize</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.tz_convert.html" rel="nofollow noreferrer"><code>tz_convert</code></a>.</p> <pre><code>import pandas as pd import numpy as np pd.options.display.max_columns = 5 df = pd.DataFrame({'TimeSeries': ["13:00", np.nan, "06:00", 'Morning', 'Afternoon', np.nan, np.nan, "01:30"]}) # Convert your data to datetime; errors appear, but we do not care about them. # We also explicitly note that the datetime is in a specific timezone. df['TimeSeries_TZ'] = pd.to_datetime(df['TimeSeries'], errors='coerce', format='%H:%M')\ .dt.tz_localize('America/New_York') print(df['TimeSeries_TZ']) # 0 1900-01-01 13:00:00-04:56 # 1 NaT # 2 1900-01-01 06:00:00-04:56 # 3 NaT # 4 NaT # 5 NaT # 6 NaT # 7 1900-01-01 01:30:00-04:56 # Then, we can use the datetime accessor to convert the timezone. df['Converted_time'] = df['TimeSeries_TZ'].dt.tz_convert('Europe/London').dt.strftime('%H:%M') print(df['Converted_time']) # 0 17:55 # 1 NaT # 2 10:55 # 3 NaT # 4 NaT # 5 NaT # 6 NaT # 7 06:25 # If you want to convert the original values that CAN be converted, while keeping the values that # raised errors, you can copy the original data, and change the data that is not equal to the value # that means an error was raised, e.g.: NaT (not a timestamp). df['TimeSeries_result'] = df['TimeSeries'].copy() df['TimeSeries_result'] = df['TimeSeries'].where(~df['Converted_time'].ne('NaT'), df['Converted_time']) print(df[['TimeSeries', 'TimeSeries_result']]) # TimeSeries TimeSeries_result # 0 13:00 17:55 # 1 NaN NaN # 2 06:00 10:55 # 3 Morning Morning # 4 Afternoon Afternoon # 5 NaN NaN # 6 NaN NaN # 7 01:30 06:25 </code></pre>
python|pandas|datetime
0
9,417
56,890,105
How to keep NaN in pivot table?
<p>Looking to preserve NaN values when changing the shape of the dataframe.</p> <p>These two questions may be related:</p> <ul> <li><a href="https://stackoverflow.com/questions/56742166/">How to preserve NaN instead of filling with zeros in pivot table?</a></li> <li><a href="https://stackoverflow.com/questions/55626176/">How to make two NaN as NaN after the operation instead of making it zero?</a></li> </ul> <p>but not been able to use the answers provided - can I set a min count for np.sum somehow?</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np df = pd.DataFrame([['Y1', np.nan], ['Y2', np.nan], ['Y1', 6], ['Y2',8]], columns=['A', 'B'], index=['1988-01-01','1988-01-01', '1988-01-04', '1988-01-04']) df.index.name = 'Date' df pivot_df = pd.pivot_table(df, values='B', index=['Date'], columns=['A'],aggfunc=np.sum) pivot_df </code></pre> <p>The output is:</p> <pre><code>A Y1 Y2 Date 1988-01-01 0.0 0.0 1988-01-04 6.0 8.0 </code></pre> <p>and the desired output is:</p> <pre><code>A Y1 Y2 Date 1988-01-01 NaN NaN 1988-01-04 6.0 8.0 </code></pre>
<p>From the helpful comments the following solution meets my requirements:</p> <pre class="lang-py prettyprint-override"><code> pivot_df_2 = pd.pivot_table(df, values='B', index=['Date'], columns=['A'],aggfunc=min, dropna=False) pivot_df_2 </code></pre> <p>Values are supposed to be unique per slot so replacing the sum function with a min function shouldn't make a difference (in my case)</p>
python|pandas|numpy
2
9,418
46,066,685
Rename the column inside csv file
<p>Can anyone please check for me what's wrong with my renaming command. It changes nothing on the csv file. The code that i have tried below renaming header.</p> <pre><code>df = pandas.read_csv('C:/JIRA Excel File.csv') df.rename(columns=({'Custom field (Implemented Date)':'Custom field (Verified Date)'})) df.set_index('Custom field (Verified Date)').to_csv("C:/JIRA Excel File/Done.csv", index=None) </code></pre> <p>I want column Custom field (Implemented Date) CHANGE to Custom field (verified Date), but the column still doesn't change.</p> <p><strong>Original CSV.file</strong></p> <p><a href="https://i.stack.imgur.com/5F1v3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5F1v3.png" alt="Click Here"></a></p> <p>Now the KeyError: 'Custom field (Implemented Date)' is not execute anymore. Just after I run this code.</p> <p>The output will display as below.</p> <p><a href="https://i.stack.imgur.com/ybaNe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ybaNe.png" alt="enter image description here"></a></p>
<p>you can call rename function with external parameter <code>inplace=True</code> </p> <pre><code>df.rename(columns={'Custom field (Implemented Date)': 'Custom field (Verified Date)'}, inplace=True) </code></pre> <p>For more see <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="nofollow noreferrer">pandas.DataFrame.rename</a> and <a href="https://stackoverflow.com/questions/11346283/renaming-columns-in-pandas">Renaming columns in pandas</a></p> <p><strong>Update:</strong> from your comment and updated question</p> <pre><code># considering a sample csv from your description and the df is. ''' Issue Type Custom field (Verified Date) Custom field (Implemented Date) 0 issue-1 varified-date1 Implemented-Date1 1 issue-2 varified-date2 Implemented-Date2 ''' # first delete the 'Custom field (Verified Date)' column del df['Custom field (Verified Date)'] ''' Issue Type Custom field (Implemented Date) 0 issue-1 Implemented-Date1 1 issue-2 Implemented-Date2 ''' # rename the column 'Custom field (Implemented Date)' to 'Custom field (Verified Date)' df.rename(columns={'Custom field (Implemented Date)': 'Custom field (Verified Date)'}, inplace=True) ''' Issue Type Custom field (Verified Date) 0 issue-1 Implemented-Date1 1 issue-2 Implemented-Date2 ''' df.set_index('Custom field (Verified Date)').to_csv("Done.csv", index=None) </code></pre> <p>And after all this I get the output in file as you describe above with out any error. </p>
python|pandas
4
9,419
46,044,618
Count of values between other values in a pandas DataFrame
<p>I have a column of a pandas DataFrame that looks like this:</p> <pre><code>1 False 2 False 3 False 4 True 5 True 6 False 7 False 8 False 9 False 10 False 11 True 12 False </code></pre> <p>I would like to get the count of False between the True. Something like this:</p> <pre><code>1 3 2 0 3 5 4 1 </code></pre> <p>This is what I've done:</p> <pre><code>counts = [] count = 0 for k in df['result'].index: if df['result'].loc[k] == False: count += 1 else: counts.append(count) count = 0 </code></pre> <p>where counts would be the result. Is there a simpler way?</p>
<p>Group by the cumulative sum of itself and then count the <code>False</code> with <code>sum</code>:</p> <pre><code>s = pd.Series([False, False, False, True, True, False, False, False, False, False, True, False]) (~s).groupby(s.cumsum()).sum() #0 3.0 #1 0.0 #2 5.0 #3 1.0 #dtype: float64 </code></pre>
python|pandas
5
9,420
45,971,533
Slicing array returns strange shape
<p>Suppose I do the following in Ipython:</p> <pre><code>import numpy as np test = np.zeros([3,2]) test test.shape test[:,0] test[:,0].shape </code></pre> <p>The results will be:</p> <pre><code>array([[ 0., 0.], [ 0., 0.], [ 0., 0.]]) (3,2) array([ 0., 0., 0.]) (3,) </code></pre> <p>Why is the last result here not <code>(3,1)</code>? I have a work-around: The <code>reshape</code> command, but that seems silly. </p>
<p>I use a different array for visualization:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; test = np.arange(6).reshape(3, 2) &gt;&gt;&gt; test array([[0, 1], [2, 3], [4, 5]]) </code></pre> <p>Slicing like this:</p> <pre><code>&gt;&gt;&gt; test[:,0] array([0, 2, 4]) </code></pre> <p>tells NumPy to keep the first dimension but only take the first element of the second dimension. By definition this will reduce the number of dimensions by 1.</p> <p>Just like:</p> <pre><code>&gt;&gt;&gt; test[0, 0] 0 </code></pre> <p>would take the first element in the first dimension and the first element of the second dimension. Thus reducing the number of dimensions by 2.</p> <p>In case you wanted to get the first column as actual column (not changing the number of dimensions) you need to use slicing:</p> <pre><code>&gt;&gt;&gt; test[:, 0:1] # remember that the stop is exlusive so this will get the first column only array([[0], [2], [4]]) </code></pre> <p>or similarly</p> <pre><code>&gt;&gt;&gt; test[:, 1:2] # for the second column array([[1], [3], [5]]) &gt;&gt;&gt; test[0:1, :] # first row array([[0, 1]]) </code></pre>
python|python-3.x|numpy|slice|reshape
2
9,421
23,026,037
Pandas div using index
<p>I am sometimes struggling a bit to understand pandas datastructures and it seems to be the case again. Basically, I've got:</p> <ul> <li>1 pivot table, major axis being a serial number</li> <li>a Serie using the same index</li> </ul> <p>I would like to divide each column of my pivot table by the value in the Serie using index to match the lines. I've tried plenty of combinations... without being successful so far :/</p> <pre><code>import pandas as pd df = pd.DataFrame([['123', 1, 1, 3], ['456', 2, 3, 4], ['123', 4, 5, 6]], columns=['A', 'B', 'C', 'D']) pt = pd.pivot_table(df, rows=['A', 'B'], cols='C', values='D', fill_value=0) serie = pd.Series([5, 5, 5], index=['123', '678', '345']) pt.div(serie, axis='index') </code></pre> <p>But I am only getting NaN. I guess it's because columns names are not matching but that's why I was using index as the axis. Any ideas on what I am doing wrong?</p> <p>Thanks</p>
<p>You say "using the same index", but they're not the same: <code>pt</code> has a multiindex, and <code>serie</code> only an index:</p> <pre><code>&gt;&gt;&gt; pt.index MultiIndex(levels=[[u'123', u'456'], [1, 2, 4]], labels=[[0, 0, 1], [0, 2, 1]], names=[u'A', u'B']) </code></pre> <p>And you haven't told the division that you want to align on the <code>A</code> part of the index. You can pass that information using <code>level</code>:</p> <pre><code>&gt;&gt;&gt; pt.div(serie, level='A', axis='index') C 1 3 5 A B 123 1 0.6 0 0.0 4 0.0 0 1.2 456 2 NaN NaN NaN [3 rows x 3 columns] </code></pre>
python|pandas
1
9,422
35,611,786
Select one cell when pandas DataFrame has hierarchical index
<p>I would expect to be able to do </p> <p><code>dat.loc['label_row1', 'label_row2', 'label_col']</code></p> <p>However, it does not work and require </p> <p><code>dat.loc['label_row1', 'label_row2'].loc['label_col']</code></p> <p>To me, this is rather unintuitive, because when there isn't hierarchical index, I can select one cell with </p> <p><code>dat.loc['label_row', 'label_col']</code></p> <p>Can anyone explain the reasoning or suggest a way to remember this quirk?</p> <p>Example:</p> <pre><code>import pandas as pd from pandas_datareader import wb dat = wb.download( indicator=['BX.KLT.DINV.WD.GD.ZS'], country='CN', start=2005, end=2011) dat.loc["China", "2003"].loc["BX.KLT.DINV.WD.GD.ZS"] </code></pre>
<p>If your index is first sorted, you can do this which selects all countries and the year 2009:</p> <pre><code>dat.sort_index().loc[(slice(None), '2009'), :] BX.KLT.DINV.WD.GD.ZS country year China 2009 2.590357 </code></pre> <p>Here is a link to <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#advanced-indexing-with-hierarchical-index" rel="nofollow">indexing with hierarchical data</a> in the docs.</p> <p>Because your index is a MultiIndex is a tuple, your .loc indexing needs to be a tuple as well. Note the difference between the two methods below. One returns a series, the other a dataframe:</p> <pre><code>&gt;&gt;&gt; dat.sort_index().loc[('China', '2009'), :] BX.KLT.DINV.WD.GD.ZS 2.590357 Name: (China, 2009), dtype: float64 &gt;&gt;&gt; dat.sort_index().loc[[('China', '2009')], :] BX.KLT.DINV.WD.GD.ZS country year China 2009 2.590357 </code></pre>
python|pandas
1
9,423
28,598,035
Pandas: 52 week high from yahoo or google finance
<p>Does anyone know if you can get the 52 week high in pandas from either yahoo or google finance? Thanks. </p>
<p>It is possible, please check out <a href="http://pandas.pydata.org/pandas-docs/dev/remote_data.html" rel="nofollow">pandas documentation</a>. Here's an example:</p> <pre><code>import pandas.io.data as web import datetime symbol = 'aapl' end = datetime.datetime.now() start = end - datetime.timedelta(weeks=52) df = web.DataReader(symbol, 'yahoo', start, end) highest_high = df['High'].max() </code></pre>
python|pandas|yahoo|finance
2
9,424
50,925,783
error with DataType when training CNN
<p>I have a problem with the training of a CNN. I based it on the example that can be found at <a href="https://www.tensorflow.org/tutorials/layers" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/layers</a>. The difference between my network and the one in the example is that i am using my own data instead of a dataset. I have them in numpy arrays I created before and saved. This is how a treat them in my code:</p> <pre><code>train_data = np.load("train_data.npy") train_labels = np.load("train_labels.npy") eval_data = np.load("test_data.npy") eval_labels = np.load("test_labels.npy") </code></pre> <p>This is at the beginning of my main, for the rest it follows the example (of course changing the number of nodes according to my problem). When I try to run, I get a pretty long error:</p> <pre><code> python cnn_mnist.py/home/someone/tensorflow/local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters INFO:tensorflow:Using default config. INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_train_distribute': None, '_is_chief':True, '_cluster_spec': &lt;tensorflow.python.training.server_lib.ClusterSpec object at 0x7fcd75bd6090&gt;, '_evaluation_master': '', '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tf_random_seed': None,'_master': '', '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_model_dir': '/tmp/mnist_convnet_model', '_global_id_in_cluster': 0, '_save_summary_steps': 100} INFO:tensorflow:Calling model_fn. 
Traceback (most recent call last): File "cnn_mnist.py", line 133, in &lt;module&gt; tf.app.run() File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run _sys.exit(main(argv)) File "cnn_mnist.py", line 121, in main hooks=[logging_hook]) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 363, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 843, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 856, in _train_model_default features, labels, model_fn_lib.ModeKeys.TRAIN, self.config) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 831, in _call_model_fn model_fn_results = self._model_fn(features=features, **kwargs) File "cnn_mnist.py", line 24, in cnn_model_fn activation=tf.nn.relu) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/layers/convolutional.py", line 621, in conv2d return layer.apply(inputs) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 828, in apply return self.__call__(inputs, *args, **kwargs) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 717, in __call__ outputs = self.call(inputs, *args, **kwargs) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/layers/convolutional.py", line 168, in call outputs = self._convolution_op(inputs, self.kernel) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 868, in __call__ return self.conv_op(inp, filter) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 520, in __call__ return self.call(inp, filter) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 204, in __call__ name=self.name) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 956, in conv2d data_format=data_format, dilations=dilations, name=name) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 609, in _apply_op_helper param_name=input_name) File "/home/someone/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint ", ".join(dtypes.as_dtype(x).name for x in allowed_list))) TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, bfloat16, float32, float64. </code></pre> <p>What I understand from this is that something has the wrong data type, but I can't understand what is wrong. Too many functions are called and I can't track back where the error is. Can someone help me?</p>
<p>Even though there are too many function calls and errors in the log, the main reason for the error can always be found in the last statement. In this case, "<strong>TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, bfloat16, float32, float64.</strong>"</p> <p>As the error clearly mentions, the 'input' (most probably a placeholder), expects a float object (i.e., one of float16, bfloat16, float32, float64 values as mentioned in the list of allowed values in the error), but the input you're giving is in integer format (uint8). Generally, when you read an image using PIL or cv2, you'd get an array in uint8 format. So you need to convert your numpy arrays and pass it to the input. </p> <p>Just do:</p> <pre><code>data = np.array(data).astype(np.float32) </code></pre> <p>Here, data can be any numpy array of your data that you load.</p>
python|tensorflow|convolutional-neural-network
3
9,425
20,485,139
Multiple AggFun in Pandas
<p>If you think about pivot tables in <code>Excel</code>, you can add additional columns and change from sum to mean to min or max. Is it possible to get the multiple values in a <code>pivot</code> in <code>Pandas</code>? </p> <p>Here is a working example (lifted from the pandas documentation):</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 6, ....: 'B' : ['A', 'B', 'C'] * 8, ....: 'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 4, ....: 'D' : np.random.randn(24), ....: 'E' : np.random.randn(24), ....: 'F' : np.random.randn(24)}) </code></pre> <p>Here is a pivot example: </p> <pre><code>pd.pivot_table(df, values=['D', 'E'], rows=['B'], aggfunc=np.mean) </code></pre> <p>Which returns: </p> <pre><code> D E B A -0.083449 -0.242955 B 0.826492 -0.058596 C 0.124266 -0.197583 </code></pre> <p>Is there a way I could take <code>np.sum</code> to the <code>pivot</code> example here?</p>
<p>You can pass a list to <code>pivot_table</code>'s <code>aggfunc</code> keyword argument:</p> <pre><code>&gt;&gt;&gt; pd.pivot_table(df, values=['D', 'E'], rows=['B'], aggfunc=[np.mean, np.sum]) mean sum D E D E B A -0.102403 0.854174 -0.819224 6.833389 B 0.426928 -0.177344 3.415428 -1.418754 C -0.159123 -0.071418 -1.272980 -0.571341 [3 rows x 4 columns] </code></pre> <p>(PS: you can also use the method version, i.e. <code>df.pivot_table(stuff)</code>.)</p>
python|pandas
5
9,426
9,070,306
numpy/scipy/ipython:Failed to interpret file as a pickle
<p>I have the file in following format: </p> <pre><code>0,0.104553357966 1,0.213014562052 2,0.280656379048 3,0.0654249076288 4,0.312223429689 5,0.0959008911106 6,0.114207780917 7,0.105294501195 8,0.0900673766572 9,0.23941317105 10,0.0598239513149 11,0.541701803956 12,0.093929580526 </code></pre> <p>I want to plot these point using ipython plot function doing the following: </p> <pre><code> In [40]: mean_data = load("/Users/daydreamer/data/mean") </code></pre> <p>But it fails saying the following:</p> <pre><code>--------------------------------------------------------------------------- IOError Traceback (most recent call last) /Users/daydreamer/&lt;ipython-input-40-8f1329559411&gt; in &lt;module&gt;() ----&gt; 1 mean_data = load("/Users/daydreamer/data/mean") /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy-1.6.1-py2.7-macosx-10.5-fat3.egg/numpy/lib/npyio.pyc in load(file, mmap_mode) 354 except: 355 raise IOError, \ --&gt; 356 "Failed to interpret file %s as a pickle" % repr(file) 357 finally: 358 if own_fid: IOError: Failed to interpret file '/Users/daydreamer/data/mean' as a pickle </code></pre> <p>How do I fix this error?</p>
<p>The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html" rel="noreferrer"><code>numpy.load</code></a> routine is for loading pickled <code>.npy</code> or <code>.npz</code> binary files, which can be created using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html" rel="noreferrer"><code>numpy.save</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html" rel="noreferrer"><code>numpy.savez</code></a>, respectively. Since you have text data, these are not the routines you want.</p> <p>You can load your comma-separated values with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="noreferrer"><code>numpy.loadtxt</code></a>.</p> <pre><code>import numpy as np mean_data = np.loadtxt("/Users/daydreamer/data/mean", delimiter=',') </code></pre> <h2>Full Example</h2> <p>Here's a complete example (using <a href="http://docs.python.org/library/stringio.html" rel="noreferrer"><code>StringIO</code></a> to simulate the file I/O).</p> <pre><code>import numpy as np import StringIO s = """0,0.104553357966 1,0.213014562052 2,0.280656379048 3,0.0654249076288 4,0.312223429689 5,0.0959008911106 6,0.114207780917 7,0.105294501195 8,0.0900673766572 9,0.23941317105 10,0.0598239513149 11,0.541701803956 12,0.093929580526""" st = StringIO.StringIO(s) a = np.loadtxt(st, delimiter=',') </code></pre> <p>Now we have:</p> <pre><code>&gt;&gt;&gt; a array([[ 0. , 0.10455336], [ 1. , 0.21301456], [ 2. , 0.28065638], [ 3. , 0.06542491], [ 4. , 0.31222343], [ 5. , 0.09590089], [ 6. , 0.11420778], [ 7. , 0.1052945 ], [ 8. , 0.09006738], [ 9. , 0.23941317], [ 10. , 0.05982395], [ 11. , 0.5417018 ], [ 12. , 0.09392958]]) </code></pre>
python|numpy|matplotlib|scipy|ipython
23
9,427
66,374,254
Creating a new matrix from a matrix of index in numpy
<p>I have a 3D numpy array <code>A</code> with shape(k, l, m) and a 2D numpy array <code>B</code> with shape (k,l) with the indexes (between 0 and m-1) of particular items that I want to create a new 2D array <code>C</code> with shape (k,l), like this:</p> <pre><code>import numpy as np A = np.random.random((2,3,4)) B = np.array([[0,0,0],[2,2,2])) C = np.zeros((2,3)) for i in range(2): for j in range(3): C[i,j] = A[i, j, B[i,j]] </code></pre> <p>Is there a more efficient way of doing this?</p>
<p>Use the inbuilt routine <code>fromfunction</code> of the NumPy library, and turn your code into</p> <pre><code>C = np.fromfunction(lambda i, j: A[i, j, B[i, j]], (2, 3), dtype=int) </code></pre>
python|arrays|numpy
2
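np.fromfunction builds float index grids unless dtype=int is passed, which is why the one-liner needs that argument; a more direct vectorised route for this per-cell gather is np.take_along_axis or plain integer fancy indexing, sketched here with the shapes from the question.

```python
import numpy as np

A = np.random.random((2, 3, 4))
B = np.array([[0, 0, 0], [2, 2, 2]])

# pick A[i, j, B[i, j]] for every (i, j) without a Python loop
C = np.take_along_axis(A, B[..., None], axis=2)[..., 0]

# the same thing with explicit integer index grids
i, j = np.indices(B.shape)
assert np.allclose(C, A[i, j, B])
```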
9,428
66,387,970
Pandas read_csv does not separate values after comma
<p>I am trying to load some .csv data in the Jupyter notebook but for some reason, it does not separate my data but puts everything in a single column.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.read_csv(r'C:\Users\leonm\Documents\Fontys\Semester 4\GriefBot_PracticeChallenge\DummyDataGriefbot.csv') df.head() </code></pre> <p><a href="https://i.stack.imgur.com/rVH8l.png" rel="nofollow noreferrer">My csv data</a> <a href="https://i.stack.imgur.com/bdGJH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bdGJH.png" alt="enter image description here" /></a></p> <p>In this picture there is the data I am using.</p> <p>And now I do not understand why my values all come in single column and are not separated where the comas are. I have also tried both spe=',' and sep=';' but they do not change anything.</p> <p><a href="https://i.stack.imgur.com/2XkxW.png" rel="nofollow noreferrer">This is what I am getting</a> <a href="https://i.stack.imgur.com/W4L7t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W4L7t.png" alt="enter image description here" /></a> I would really appreciate your help.</p>
<p>If that's how your data looks in a CSV reader like Excel, then each row likely looks like one big string in a text editor.</p> <pre><code>&quot;ID,PERSON,DATE&quot; &quot;1,A. Molina,1593147221&quot; &quot;2,A. Moran, 16456&quot; &quot;3,Action Marquez,15436&quot; </code></pre> <p>You could of course do &quot;text to columns&quot; within Excel and resave your file, or if you have many of these files, you can use the Pandas <code>split</code> function.</p> <pre class="lang-py prettyprint-override"><code>df[df.columns[0].split(',')] = df.iloc[:,0].str.split(',', expand=True) # ^ split header by comma ^ ^ create list split by comma, and expand # | each list entry into a new column # | select first column of data df.head() &gt; ID,PERSON,DATE ID PERSON DATE &gt; 0 1,A. Molina,1593147221 1 A. Molina 1593147221 &gt; 1 2,A. Moran, 16456 2 A. Moran 16456 &gt; 2 3,Action Marquez,15436 3 Action Marquez 15436 </code></pre> <p>You can then use <code>pd.drop</code> to drop that first column if you have no use for it</p> <pre class="lang-py prettyprint-override"><code>df.drop(df.columns[0], axis=1, inplace=True) </code></pre>
python|pandas|csv
2
9,429
57,297,569
Mapping data to ground truth list
<p>I have ground truth data in the following Python list:</p> <pre><code>ground_truth = [(A,16), (B,18), (C,36), (A,59), (C,77)] </code></pre> <p>So any value from:</p> <pre><code>0-16 gets mapped to A, 17-18 maps to B, 19-36 maps to C, 37-59 maps to A 60-77 maps to C and so on </code></pre> <p>I am trying to map a time series input say from numbers like</p> <pre><code>[9,15,29,32,49,56, 69] to its respective classes like: [A, A, C, C, A, A, C] </code></pre> <p>Assuming my input is a Pandas series like:</p> <pre><code>in = pd.Series([9,15,29,32,49,56, 69]) </code></pre> <p>How do I get to the series <code>[A, A, C, C, A, A, C]</code> ?</p>
<p>Here's my approach:</p> <pre><code>gt = pd.DataFrame(ground_truth) # bins for cut bins = [0] + list(gt[1]) # categories cats = pd.cut(pd.Series([9,15,29,32,49,56, 69]), bins=bins, labels=False) # labels gt.loc[cats, 0] </code></pre> <p>gives</p> <pre><code>0 A 0 A 2 C 2 C 3 A 3 A 4 C Name: 0, dtype: object </code></pre> <p>Or, without creating new dataframe:</p> <pre><code>labels = np.array([x for x,_ in ground_truth]) bins = [0] + [y for _,y in ground_truth] cats = pd.cut(pd.Series([9,15,29,32,49,56, 69]), bins=bins, labels=False) labels[cats] </code></pre> <p>which gives:</p> <pre><code>array(['A', 'A', 'C', 'C', 'A', 'A', 'C'], dtype='&lt;U1') </code></pre>
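<p>Another alternative (a sketch, assuming the upper bounds in <code>ground_truth</code> are sorted and inclusive) uses <code>np.searchsorted</code> directly, without building bins:</p>

<pre><code>import numpy as np
import pandas as pd

labels = np.array([x for x, _ in ground_truth])
uppers = np.array([y for _, y in ground_truth])

values = pd.Series([9, 15, 29, 32, 49, 56, 69])
# side='left' keeps the upper bounds inclusive, e.g. 16 still maps to 'A'
result = labels[np.searchsorted(uppers, values, side='left')]
print(result)  # ['A' 'A' 'C' 'C' 'A' 'A' 'C']
</code></pre>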
pandas
7
9,430
57,503,000
How to compare a single cell to an entire column of a dataframe in python using pandas
<p>I'm getting this error: "The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()." despite my best efforts to check the intermediate outputs (getting bools where expected, and the numbers are the correct data type I think, numpy.float64) I'm also using bit-wise operators.</p> <p>I am attempting to count the number of times that each cell of a given column (M-1 m/z) is about equal to all the values of another column (observed M0 m/z) and then write that count to the row corresponding to M-1 m/z in a new column called "M-1 MSMS existence". I have the checked all the intermediate outputs and everything as far as I can tell is as expected (see the #print statements in the code).I'm also using bit-wise operators to avoid the error that persists. The if statement appears to be the issue and I've everything I can think up to this point (including reading docs and looking for similar issues on stack overflow). There's something else going on that alludes me. Thanks for any help.</p> <p>Here's an abbreviated version of the csv I'm using: <a href="https://i.stack.imgur.com/8tWVZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8tWVZ.png" alt="enter image description here"></a></p> <p>Here's the code:</p> <pre><code> for i in range(len(df)): # print('i=', i) # print("(df.at[i, 'M-1 m/z'] - (df.at[i, 'M-1 m/z']/10**6)*100)", (df.at[i, 'M-1 m/z'] - (df.at[i, 'M-1 m/z']/10**6)*100)) # print("df['observed M0 m/z']", (df['observed M0 m/z'])) # print("bool", (((df.at[i, 'M-1 m/z'] - (df.at[i, 'M-1 m/z']/10**6)*100) &lt;= df['observed M0 m/z']) &amp; ((df.at[i, 'M-1 m/z'] + (df.at[i, 'M-1 m/z']/10**6)*100) &gt;= df['observed M0 m/z']))) count = 0 if (((df.at[i, 'M-1 m/z'] - (df.at[i, 'M-1 m/z']/10**6)*100) &lt;= df['observed M0 m/z']) &amp; ((df.at[i, 'M-1 m/z'] + (df.at[i, 'M-1 m/z']/10**6)*100) &gt;= df['observed M0 m/z'])): count += 1 df.at[i, 'M-1 MSMS existence?'] = count </code></pre> <p>i expect that the "M-1 MSMS existence" column will be populated with a number that corresponds the number of times that number was observed in the other columns rows. 0 if there were no values within the range (shown in the if statement) and 3 if there were 3 rows where "m-1 m/z" and "observed M0 m/z" were the same.</p>
<p>I believe the solution is: </p>

<pre><code>for i in range(len(df)): #include RT too
    counter = 0
    counter = np.count_nonzero(((df.at[i, 'M-1 m/z'] - (df.at[i, 'M-1 m/z']/10**6)*100) &lt;= df['observed M0 m/z']) &amp; ((df.at[i, 'M-1 m/z'] + (df.at[i, 'M-1 m/z']/10**6)*100) &gt;= df['observed M0 m/z']))
    df.at[i, 'M-1 MSMS existence?'] = counter
</code></pre>

<p>I think the issue was that the if statement couldn't take the Series of bools. So instead, we just count the trues as ones and use that number. </p>
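<p>If the loop over rows is still too slow, a fully vectorized sketch (assuming the same 100 ppm tolerance on the 'M-1 m/z' values as above) builds the whole comparison at once with NumPy broadcasting. Note that it allocates an n x n boolean matrix, so it trades memory for speed:</p>

<pre><code>import numpy as np

m1 = df['M-1 m/z'].to_numpy()
m0 = df['observed M0 m/z'].to_numpy()
tol = (m1 / 10**6) * 100                     # same tolerance as in the loop

# within[i, j] is True when observed M0 m/z j falls inside the window around M-1 m/z i
within = (m0 &gt;= (m1 - tol)[:, None]) &amp; (m0 &lt;= (m1 + tol)[:, None])
df['M-1 MSMS existence?'] = within.sum(axis=1)
</code></pre>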
pandas|dataframe
0
9,431
57,362,679
Python - Mirror a data set
<p>I have a data set with XYZ coordinate values and a scalar value at each X Y Z coordinate. I am looking to mirror this on the YZ plane, and then the XY plane. </p> <p>I was able to do this by manually reading everything into a list or numpy array, creating a new array for each mirror. However, that is not efficient. I am wondering if pandas can be used, or if there is another library that would be useful to get this job done. </p> <p>As an example, if I have a dataset as follows</p>

<pre><code>Scalar X Y Z
123 1 1 1
</code></pre>

<p>My result should be the following, in a csv or excel: </p>

<pre><code>Scalar X Y Z
123 1 1 1
123 -1 1 1
123 -1 1 -1
123 1 1 -1
</code></pre>
<p>If you have a dataframe (from what I understand), you can <code>append</code> (multiple times) modified copies of the dataframe, changing each time one or more columns of your choosing by using <code>df.assign</code>:</p> <pre><code>df = pd.DataFrame({'scalar': [123] * 3, 'sym': 'None', 'x': np.ones(3), 'y': np.ones(3),'z': np.ones(3)}) df.head() &gt;&gt;scalar sym x y z 0 123 None 1.0 1.0 1.0 1 123 None 1.0 1.0 1.0 2 123 None 1.0 1.0 1.0 df.append(df.assign(x=-df['x'], sym='yz')).append(df.assign(y=-df['y'], sym='xz')) &gt;&gt; scalar sym x y z 0 123 None 1.0 1.0 1.0 1 123 None 1.0 1.0 1.0 2 123 None 1.0 1.0 1.0 0 123 yz -1.0 1.0 1.0 1 123 yz -1.0 1.0 1.0 2 123 yz -1.0 1.0 1.0 0 123 xz 1.0 -1.0 1.0 1 123 xz 1.0 -1.0 1.0 2 123 xz 1.0 -1.0 1.0 </code></pre>
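<p>In recent pandas versions <code>DataFrame.append</code> has been removed (deprecated in 1.4, dropped in 2.0), so the same idea can be written with <code>pd.concat</code>; this is a sketch assuming the same columns as above:</p>

<pre><code>mirrored = pd.concat([
    df,
    df.assign(x=-df['x'], sym='yz'),   # mirror across the YZ plane
    df.assign(y=-df['y'], sym='xz'),   # mirror across the XZ plane
], ignore_index=True)

mirrored.to_csv('mirrored.csv', index=False)
</code></pre>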
python|arrays|pandas|numpy
0
9,432
57,572,695
I have to compare data from each row of a Pandas DataFrame with data from the rest of the rows, is there a way to speed up the computation?
<p>Let's say I have a pandas DataFrame (loaded from a csv file) with this structure (the number of var and err columns is not fixed, and it varies from file to file):</p> <pre><code>var_0; var_1; var_2; 32; 9; 41; 47; 22; 41; 15; 12; 32; 3; 4; 4; 10; 9; 41; 43; 21; 45; 32; 14; 32; 51; 20; 40; </code></pre> <p>Let's discard the err_ds_j and the err_mean columns for the sake of this question. I have to perform an automatic comparison of the values of each row, with the values of the other rows; as an example: I have to compare the first row with the second row, then with the third, then with the fourth, and so on, then I have to take the second row and compare it with the first one, then with the third one, and so on for the rest of the DataFrame.</p> <p>Going deeper into problem, I want to see if for each couple of rows, all the "var_i" values from one of them are higher or equal than the correspondent values of the other row. If this is satisfied, the row with higher values is called DOMINANT, and I add a row in another DataFrame, with this structure:</p> <pre><code>SET_A; SET_B; DOMINANT_SET 0; 1; B ... </code></pre> <p>Where SET_A and SET_B values are indices from the csv DataFrame, and DOMINANT_SET tells me which one of the two is the dominant set (or if there's none, it's just assigned as "none"). I found the third column to be useful since it helps me avoiding the comparison of rows I've already compared in the opposite way (e.g.: comparing row 1 with row 0 is useless, since I've already compared 0 and 1 previously).</p> <p>So, for that csv file, the output produced should be (and actually is, with my code):</p> <pre><code> SET_A SET_B DOMINANT_SET 1 0 1 B 2 0 2 none 3 0 3 A 4 0 4 A 5 0 5 B 6 0 6 none 7 0 7 none 8 1 2 A 9 1 3 A 10 1 4 A 11 1 5 none 12 1 6 A 13 1 7 none 14 2 3 A 15 2 4 none 16 2 5 B 17 2 6 B 18 2 7 B 19 3 4 B 20 3 5 B 21 3 6 B 22 3 7 B 23 4 5 B 24 4 6 none 25 4 7 none 26 5 6 A 27 5 7 none 28 6 7 B </code></pre> <p>I've already written all of the code for this particular problem, and it works just fine with some test datasets (100 rows sampled from an actual dataset).</p> <p>Here's a snippet of the relevant code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd def couple_already_tested(index1, index2, dataframe): return (((dataframe['SET_A'] == index1) &amp; (dataframe['SET_B'] == index2)).any()) | (((dataframe['SET_A'] == index2) &amp; (dataframe['SET_B'] == index1)).any()) def check_dominance(set_a, set_b, index_i, index_j, dataframe): length = dataframe.shape[0] if np.all(set_a &gt;= set_b): print("FOUND DOMINANT CONFIGURATION A &gt; B") dataframe.loc[length+1] = [index_i,index_j,'A'] elif np.all(set_b &gt;= set_a): print("FOUND DOMINANT CONFIGURATION B &gt; A") dataframe.loc[length+1] = [index_i,index_j,'B'] else: dataframe.loc[length+1] = [index_i,index_j,'none'] df = pd.read_csv('test.csv', sep=';') dom_table_df = pd.DataFrame(columns=['SET_A','SET_B','DOMINANT_SET']) df_length = df.shape[0] var_num = df.shape[1]-1 a = None b = None for i in range(0, df_length): a = df.iloc[i, 0:var_num].values for j in range(0, df_length): if j == i: continue b = df.iloc[j, 0:var_num].values if couple_already_tested(i,j,dom_table_df): print("WARNING: configuration", i, j, "already compared, skipping") else: print("Comparing configuration at row", i, "with configuration at row", j) check_dominance(a, b, i, j, dom_table_df) print(dom_table_df) </code></pre> <p>The issue is that, being not so proficient in both python and pandas (I've been 
learning them for about one and a half months), this code is of course terribly slow (for datasets with, like, 1000 to 10000 rows) because I'm using iterations in my algorithm. I know I can use something called vectorization, but reading about it I'm not entirely sure that's good for my use case.</p> <p>So, how could I speed up the calculations?</p>
<p>Another speedup can be accomplished by replacing <code>.iloc[].values</code> as well as <code>.loc[]</code> with <code>.values[]</code>, but with <code>.loc[]</code> we have to adjust the subscript, because <code>.values</code> takes a zero-based subscript, which is different from our 1-based <code>dom_table_df.index</code>.</p> <pre><code>dom_table_df = pd.DataFrame(index=np.arange(1, 1+(df_length**2-df_length)/2).astype('i'), columns=['SET_A', 'SET_B', 'DOMINANT_SET']) length = 0 # counter of already filled rows for i in range(0, df_length): a = df.values[i, 0:var_num] for j in range(i+1, df_length): # we can skip the range from 0 to i b = df.values[j, 0:var_num] #print("Comparing configuration at row", i, "with configuration at row", j) if np.all(a &gt;= b): #print("FOUND DOMINANT CONFIGURATION A &gt; B") dom_table_df.values[length] = [i, j, 'A'] elif np.all(b &gt;= a): #print("FOUND DOMINANT CONFIGURATION B &gt; A") dom_table_df.values[length] = [i, j, 'B'] else: dom_table_df.values[length] = [i, j, 'none'] length += 1 </code></pre>
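<p>If memory allows, the pairwise comparison itself can also be vectorized over all row pairs at once. This is only a sketch (it assumes the variable columns are the first <code>var_num</code> columns, as above), and it materializes arrays for all n*(n-1)/2 pairs, so it trades a lot of memory for speed on very large inputs:</p>

<pre><code>import numpy as np
import pandas as pd

vals = df.values[:, :var_num].astype(float)
i_idx, j_idx = np.triu_indices(len(vals), k=1)      # all pairs with i &lt; j

a_ge_b = (vals[i_idx] &gt;= vals[j_idx]).all(axis=1)   # row i dominates row j
b_ge_a = (vals[j_idx] &gt;= vals[i_idx]).all(axis=1)   # row j dominates row i

dominant = np.where(a_ge_b, 'A', np.where(b_ge_a, 'B', 'none'))
dom_table_df = pd.DataFrame({'SET_A': i_idx, 'SET_B': j_idx,
                             'DOMINANT_SET': dominant})
</code></pre>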
python|pandas|performance|dataframe|optimization
1
9,433
43,632,476
InvalidArgumentError on softmax in tensorflow
<p>I have the following function:</p> <pre><code>def forward_propagation(self, x): # The total number of time steps T = len(x) # During forward propagation we save all hidden states in s because need them later. # We add one additional element for the initial hidden, which we set to 0 s = tf.zeros([T+1, self.hidden_dim]) # The outputs at each time step. Again, we save them for later. o = tf.zeros([T, self.word_dim]) a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) c = tf.placeholder(tf.float32) s_t = tf.nn.tanh(a + tf.reduce_sum(tf.multiply(b, c))) o_t = tf.nn.softmax(tf.reduce_sum(tf.multiply(a, b))) # For each time step... with tf.Session() as sess: s = sess.run(s) o = sess.run(o) for t in range(T): # Note that we are indexing U by x[t]. This is the same as multiplying U with a one-hot vector. s[t] = sess.run(s_t, feed_dict={a: self.U[:, x[t]], b: self.W, c: s[t-1]}) o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]}) return [o, s] </code></pre> <p>self.U, self.V, and self.W are numpy arrays. I try to get softmax on</p> <pre><code>o_t = tf.nn.softmax(tf.reduce_sum(tf.multiply(a, b))) </code></pre> <p>graph, and it gives me error on this line:</p> <pre><code>o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]}) </code></pre> <p>The error is:</p> <blockquote> <p>InvalidArgumentError (see above for traceback): Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0<br> [[Node: Slice = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_1, Slice/begin, Slice/size)]]</p> </blockquote> <p>How I am supposed to get softmax in tensorflow?</p>
<p>The problem arises because you call <code>tf.reduce_sum</code> on the argument of <code>tf.nn.softmax</code>. As a result, the softmax function fails because a scalar is not a valid input argument. Did you mean to use <code>tf.matmul</code> instead of the combination of <code>tf.reduce_sum</code> and <code>tf.multiply</code>?</p> <p>Edit: Tensorflow does not provide an equivalent of <code>np.dot</code> out of the box. If you want to compute the dot product of a matrix and a vector, you need to sum over indices explicitly:</p> <pre><code># equivalent to np.dot(a, b) if a.ndim == 2 and b.ndim == 1 c = tf.reduce_sum(a * b, axis=1) </code></pre>
python|numpy|tensorflow
2
9,434
43,615,727
wrong exception handling in feed_dict in tensorflow
<p>I create my tensorflow graph as following:</p> <pre><code>s = tf.zeros([T+1, self.hidden_dim]) o = tf.zeros([T, self.word_dim]) a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) c = tf.placeholder(tf.float32) d = tf.placeholder(tf.float32) dot_product = tf.reduce_sum(tf.multiply(a, b)) s_t = tf.nn.tanh(c + d) o_t = dot_product </code></pre> <p>Then afterwards run it as following:</p> <pre><code>with tf.Session() as sess: sess.run(s) sess.run(o) print type(self.W) # For each time step... for t in range(T): product = sess.run(dot_product, feed_dict={a: self.W, b: s[t-1]}) s[t] = sess.run(s_t, feed_dict={c: self.U[:, x[t]], d: product}) o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]}) </code></pre> <p>Fome reason, I get the following exception:</p> <pre><code>TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays. </code></pre> <p>This eror occures on</p> <pre><code>product = sess.run(dot_product, feed_dict={a: self.W, b: s[t-1]}) </code></pre> <p>But "W" is type of numpy.ndarray. Where is the problem? How can I fix it?</p>
<p>TF complains because your variable <code>s</code> is a <code>tf.Tensor</code> (it has no problems with your 'W' variable).</p> <p>If it were not a tensor, this part of the code <code>sess.run(s)</code> would complain with something like this: <code>Fetch argument XX has invalid type &lt;type 'YY'&gt;, must be a string or Tensor. (Can not convert a YY into a Tensor or Operation.)</code></p>
python|numpy|tensorflow
1
9,435
43,485,230
Python Pandas - Find the elements ( substring ) in the same column
<p>I have a string column ('b') and would like to get the strings which are like substring in the same column. Example, in the below dataframe column 'b' , world is a substring of helloworld and ness is a substring of greatness. I would like to get the strings world and ness in a list. Can you please suggest a solution.</p> <pre><code> a b 0 test world 1 teat helloworld 2 gor bye 3 jhr greatness 4 fre ness </code></pre> <p>desired output in a list</p> <pre><code>listofsubstrings Out[353]: ['world', 'ness'] </code></pre>
<p>You can use:</p> <pre><code>from itertools import product #get unique values only b = df.b.unique() #create all combination df1 = pd.DataFrame(list(product(b, b)), columns=['a', 'b']) #filtering df1 = df1[df1.apply(lambda x: x.a in x.b, axis=1) &amp; (df1.a != df1.b)] print (df1) a b 1 world helloworld 23 ness greatness print (df1.a.tolist()) ['world', 'ness'] </code></pre> <p>Alternative solution with cross join:</p> <pre><code>b = df.b.unique() df['tmp'] = 1 df1 = pd.merge(df[['b','tmp']],df[['b','tmp']], on='tmp') df1 = df1[df1.apply(lambda x: x.b_x in x.b_y, axis=1) &amp; (df1.b_x != df1.b_y)] print (df1) b_x tmp b_y 1 world 1 helloworld 23 ness 1 greatness print (df1.b_x.tolist()) ['world', 'ness'] </code></pre>
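<p>For a small column, a plain Python sketch with a list comprehension gives the same result without building the cross-product DataFrame:</p>

<pre><code>b = df.b.unique()
listofsubstrings = [x for x in b for y in b if x != y and x in y]
print(listofsubstrings) # ['world', 'ness']
</code></pre>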
python|python-2.7|pandas|dataframe
2
9,436
43,632,925
Efficient loop through pandas dataframe
<p>I have the following problem I need help with. I have 310 records in a csv file that contains some information about bugs. In another csv file I have 800 thousand records containing statistics about the bugs (events that possibly led to the bugs).</p> <p>With the script below, I am trying to </p> <ol> <li>Loop through the bugs and select one.</li> <li>Loop through the statistics records and check some conditions.</li> <li>If there is a match, add a column from the bugs records to the statistics records.</li> <li>Save the new file.</li> </ol> <p>My question is whether I could achieve this in a more efficient way using numpy or anything else. The current method is taking forever to run because of the size of the statistics file.</p> <p>Any help or tips in the right direction will be appreciated. Thanks in advance.</p>

<pre><code>dataset = pd.read_csv('310_records.csv')
dataset1 = pd.read_csv('800K_records.csv')

cols_error = dataset.iloc[:, [0, 1, 2, 3, 4, 5, 6]]
cols_stats = dataset1.iloc[:, [1, 2, 3, 4, 5, 6, 7, 8, 9]]
cols_stats['Fault'] = ''
cols_stats['Created'] = ''

for i, error in cols_error.iterrows():
    fault_created = error[0]
    fault_ucs = error[1]
    fault_dn = error[2]
    fault_epoch_end = error[3]
    fault_epoch_begin = error[4]
    fault_code = error[6]

    for index, stats in cols_stats.iterrows():
        stats_epoch = stats[0]
        stats_ucs = stats[5]
        stats_dn = stats[7]
        print("error:", i, " Stats:", index)
        if(stats_epoch &gt;= fault_epoch_begin and stats_epoch &lt;= fault_epoch_end):
            if(stats_dn == fault_dn):
                if(stats_ucs == fault_ucs):
                    cols_stats.iloc[index, 9] = fault_code
                    cols_stats.iloc[index, 10] = fault_created
                else:
                    cols_stats.iloc[index, 9] = 0
                    cols_stats.iloc[index, 10] = fault_created

cols_stats.to_csv('datasets/dim_stats_error.csv', sep=',', encoding='utf-8')
</code></pre>
<p>First of all: are you sure that your code does what you want it to do? As I see it, you keep looping over your statistics, so if you found a matching bug with bug #1, you can later overwrite the corresponding appendix to the statistics data with bug #310. It is unclear what you should be doing with statistics events that don't have a matching bug event, but currently you're storing <code>fault_created</code> columns for these data points somewhat arbitrarily. Not to mention the extra work done for checking every event for every bug every time.</p> <p>The reason for the slowness is that you're note making use of the power of pandas at all. Both in numpy and in pandas part of the performance comes from memory management, and the rest from vectorization. By pushing most of your work from native python loops to vectorized functions (running compiled code), you start seeing huge speed improvements.</p> <p>I'm unsure whether there's an advanced way to vectorize <em>all</em> of your work, but since you're looking at 310 vs 800k items, it seems perfectly reasonable to keep the loop over your bugs and vectorize the inner loop. The key is logical indexing, using which you can address all 800k items at once:</p> <pre><code>for i, error in cols_error.iterrows(): created, ucs, dn, epoch_end, epoch_begin, _, code = error inds = ( (epoch_begin &lt;= cols_stats['epoch']) &amp; (cols_stats['epoch'] &lt;= epoch_end) &amp; (cols_stats['dn'] == dn) &amp; (cols_stats['ucs'] == ucs) ) cols_stats['Fault'][inds] = code cols_stats['Created'][inds] = created cols_stats.to_csv('datasets/dim_stats_error.csv', sep=',', encoding='utf-8') </code></pre> <p>Note that the above will not set the unmatched columns to something non-trivial, because I don't think you have a reasonable example in your question. Whatever defaults you want to set should be independent from the list of bugs, so you should set these values before the whole matching ordeal.</p> <p>Note that I made some facelifts for your code. You can use an unpacking assignment to set all those values from <code>error</code>, and removing the prefix of those variables makes this clearer. We can dispose of the prefix since we don't define separate variables for the statistics dataframe.</p> <p>As you can see, your conditions for finding all the matching statistics items for a given bug can be defined in terms of a vectorized logical indexing operation. The resulting pandas <code>Series</code> called <code>inds</code> has a bool for each row of your statistics dataframe. This can be used to assign to a subset of your columns named <code>'Fault'</code> and <code>'Created'</code>. Note that you can (and probably should) index your columns by name, at least I find this <em>much</em> more clear and convenient.</p> <p>Since for each bug your <code>code</code> and <code>created</code> are scalars (probably strings), the vectorized assignments <code>cols_stats['Fault'][inds] = code</code> and <code>cols_stats['Created'][inds] = created</code> set every indexed item of <code>cols_stats</code> to these scalars.</p> <p>I believe the result should be the same as before, but much faster, at the cost of increased memory use.</p> <p>Further simplifications could be made in your initialization, but without an MCVE it's hard to say specifics. 
At the very least you can use slice notation:</p> <pre><code>cols_error = dataset.iloc[:, :7] cols_stats = dataset1.iloc[:, 1:10] </code></pre> <p>But odds are you're only ignoring a few columns, in which case it's probably clearer to <code>drop</code> those instead. For instance, if in <code>dataset</code> you have a single seventh column called 'junk' that you're ignoring, you can just set</p> <pre><code>cols_error = dataset.drop('junk', axis=1) </code></pre>
python|performance|pandas|numpy|vectorization
1
9,437
2,231,842
Numpy with python 3.0
<p>NumPy installer can't find python path in the registry.</p> <blockquote> <p>Cannot install Python version 2.6 required, which was not found in the registry.</p> </blockquote> <p>Is there a numpy build which can be used with python 3.0?</p>
<p>Guido van Rossum (creator of Python) says he is <a href="http://neopythonic.blogspot.com/2009/11/python-in-scientific-world.html" rel="noreferrer">keen to see NumPy work in Python 3.x</a>, because it would enable many dependent libraries to move to 3.x.</p> <p><strong>Update 2010-08-05:</strong> <a href="http://www.mail-archive.com/numpy-discussion@scipy.org/msg26524.html" rel="noreferrer">NumPy version 1.5 supports Python 3.x, and SciPy will soon.</a> NumPy 1.5 beta is <a href="http://sourceforge.net/projects/numpy/" rel="noreferrer">available for download</a> now.</p> <p><strong>Update 2012-05-31:</strong> <a href="http://sourceforge.net/projects/scipy/files/scipy/" rel="noreferrer">SciPy</a> <a href="http://sourceforge.net/projects/scipy/files/scipy/0.10.0/" rel="noreferrer">0.10.0 added support for Python 3.1</a> in November 2011.</p>
python|numpy|python-3.x
46
9,438
72,929,810
Manipulate string values in pandas
<p>I have a pandas dataframe with different formats for one column like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Values</th> </tr> </thead> <tbody> <tr> <td>First</td> <td>5-9</td> </tr> <tr> <td>Second</td> <td>7</td> </tr> <tr> <td>Third</td> <td>-</td> </tr> <tr> <td>Fourth</td> <td>12-16</td> </tr> </tbody> </table> </div> <p>I need to iterate over all Values column, and if the format is like the first row <code>5-9</code> or like fourth row <code>12-16</code> replace it with the mean between the 2 numbers in string. For first row replace <code>5-9</code> to <code>7</code>, or for fourth row replace <code>12-16</code> to <code>14</code>. And if the format is like third row <code>-</code> replace it to 0</p> <p>I have tried</p> <pre class="lang-py prettyprint-override"><code>if df[&quot;Value&quot;].str.len() &gt; 1: df[&quot;Value&quot;] = df[&quot;Value&quot;].str.split('-') df[&quot;Value&quot;] = (df[&quot;Value&quot;][0] + df[&quot;Value&quot;][1]) / 2 elif df[&quot;Value&quot;].str.len() == 1: df[&quot;Value&quot;] = df[&quot;Value&quot;].str.replace('-', 0) </code></pre> <p>Expected output</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Values</th> </tr> </thead> <tbody> <tr> <td>First</td> <td>7</td> </tr> <tr> <td>Second</td> <td>7</td> </tr> <tr> <td>Third</td> <td>0</td> </tr> <tr> <td>Fourth</td> <td>14</td> </tr> </tbody> </table> </div>
<p>Let us <code>split</code> and <code>expand</code> the column then cast values to <code>float</code> and calculate <code>mean</code> along column axis:</p> <pre><code>s = df['Values'].str.split('-', expand=True) df['Values'] = s[s != ''].astype(float).mean(1).fillna(0) </code></pre> <hr /> <pre><code> Name Values 0 First 7.0 1 Second 7.0 2 Third 0.0 3 Fourth 14.0 </code></pre>
python|pandas
3
9,439
73,072,920
Finding perimeter coordinates of a mask
<pre><code> [[ 0, 0, 0, 0, 255, 0, 0, 0, 0], [ 0, 0, 255, 255, 255, 255, 255, 0, 0], [ 0, 255, 255, 255, 255, 255, 255, 255, 0], [ 0, 255, 255, 255, 255, 255, 255, 255, 0], [255, 255, 255, 255, 255, 255, 255, 255, 255], [ 0, 255, 255, 255, 255, 255, 255, 255, 0], [ 0, 255, 255, 255, 255, 255, 255, 255, 0], [ 0, 0, 255, 255, 255, 255, 255, 0, 0], [ 0, 0, 0, 0, 255, 0, 0, 0, 0]] </code></pre> <p>I have a mask array like the one above. I would like to get the x and y coordinates belonging to the perimeter of the mask. The perimeter points are the ones shown in the array below:</p> <pre><code> [[ 0, 0, 0, 0, 255, 0, 0, 0, 0], [ 0, 0, 255, 255, 0, 255, 255, 0, 0], [ 0, 255, 0, 0, 0, 0, 0, 255, 0], [ 0, 255, 0, 0, 0, 0, 0, 255, 0], [255, 0, 0, 0, 0, 0, 0, 0, 255], [ 0, 255, 0, 0, 0, 0, 0, 255, 0], [ 0, 255, 0, 0, 0, 0, 0, 255, 0], [ 0, 0, 255, 255, 0, 255, 255, 0, 0], [ 0, 0, 0, 0, 255, 0, 0, 0, 0]] </code></pre> <p>In the array above, I could just use numpy.nonzero() but I was unable to apply this logic to the original array because it returns a tuple of arrays each containing all the x or y values of all non zero elements without partitioning by row.</p> <p>I wrote the code below which works but seems inefficient:</p> <pre><code>height = mask.shape[0] width = mask.shape[1] y_coords = [] x_coords = [] for y in range(1,height-1,1): for x in range(0,width-1,1): val = mask[y,x] prev_val = mask[y,(x-1)] next_val = mask[y, (x+1)] top_val = mask[y-1, x] bot_val = mask[y+1, x] if (val != 0 and prev_val == 0) or (val != 0 and next_val == 0) or (val != 0 and top_val == 0) or (val != 0 and bot_val == 0): y_coords.append(y) x_coords.append(x) </code></pre> <p>I am new to python and would like to learn a better way to do this. Perhaps using Numpy?</p>
<p>I played a bit with your problem and found a solution and I realized you could use convolutions to count the number of neighboring 255s for each cell, and then perform a filtering of points based on the appropriate values of neighbors.</p> <p>I am giving a detailed explanation below, although one part was trial and error and you could potentially skip it and get directly to the code if you understand that convolutions can count neighbors in binary images.</p> <hr /> <p><strong>First observation:</strong> <em>When does a point belong to the perimeter of the mask?</em><br /> Well, that point has to have a value of 255 and &quot;around&quot; it, there must be at least one (and possibly more) 0.</p> <p><strong>Next:</strong> <em>What is the definition of &quot;around&quot;?</em><br /> We could consider all four cardinal (i.e. North, East, South, West) neighbors. In this case, a point of the perimeter must have at least <strong>one</strong> cardinal neighbor which is 0.<br /> <em>You have already done that, and truly, I cannot thing of a faster way by this definition</em>.</p> <p><strong>What if we extended the definition of &quot;around&quot;?</strong><br /> Now, let's consider the neighbors of a point at <code>(i,j)</code> all points along an <code>N x N</code> square centered on <code>(i,j)</code>. I.e. all points <code>(x,y)</code> such that <code>i-N/2 &lt;= x &lt;= i+N/2</code> and <code>j-N/2 &lt;= y &lt;= j+N/2</code> (where <code>N</code> is odd and ignoring out of bounds for the moment).<br /> This is more useful from a performance point of view, because the operation of sliding &quot;windows&quot; along the points of 2D arrays is called a &quot;convolution&quot; operation. There are built in functions to perform such operations on numpy arrays really fast. The <code>scipy.ndimage.convolve</code> works great.<br /> I won't attempt to fully explain convolutions here (the internet is ful of nice visuals), but the main idea is that the convolution essentially replaces the value of each cell with the <em>weighted</em> sum of the values of all its neighboring cells. Depending on what weight matrix, (or <em>kernel</em>) you specify, the convolution does different things.</p> <p>Now, if your mask was 1s and 0s, to count the number of neighboring ones around a cell, you would need a kernel matrix of 1s everywhere (since the weighted sum will simply add the original ones of your mask and cancel the 0s). So we will scale the values from [0, 255] to [0,1].</p> <p>Great, we know how to quickly count the neighbors of a point within an area, but the two questions are</p> <ol> <li>What area size should we choose?</li> <li>How many neighbors do the points in the perimeter have, now that we are including diagonal and more faraway neighbors?</li> </ol> <p>I suppose there is an explicit answer to that, but I did some trial and error. It turns out, we need <code>N=5</code>, at which case the number of neighbors being one for each point in the original mask is the following:</p> <pre><code>[[ 3 5 8 10 11 10 8 5 3] [ 5 8 12 15 16 15 12 8 5] [ 8 12 17 20 21 20 17 12 8] [10 15 20 24 25 24 20 15 10] [11 16 21 25 25 25 21 16 11] [10 15 20 24 25 24 20 15 10] [ 8 12 17 20 21 20 17 12 8] [ 5 8 12 15 16 15 12 8 5] [ 3 5 8 10 11 10 8 5 3]] </code></pre> <p>Comparing that matrix with your original mask, the points on the perimeter are the ones having values between <code>11</code> and <code>15</code> (inclusive) [1]. 
So we simply filter out the rest using <code>np.where()</code>.</p> <p>A final caveat: We need to explicilty say to the <code>convolve</code> function how to treat points near the edges, where an <code>N x N</code> window won't fit. In those cases, we tell it to treat out of bounds values as 0s.</p> <p>The full code is following:</p> <pre class="lang-py prettyprint-override"><code>from scipy import ndimage as ndi mask //= 255 kernel = np.ones((5,5)) C = ndi.convolve(mask, kernel, mode='constant', cval=0) #print(C) # The C matrix contains the number of neighbors for each cell. outer = np.where( (C&gt;=11) &amp; (C&lt;=15 ), 255, 0) print(outer) [[ 0 0 0 0 255 0 0 0 0] [ 0 0 255 255 0 255 255 0 0] [ 0 255 0 0 0 0 0 255 0] [ 0 255 0 0 0 0 0 255 0] [255 0 0 0 0 0 0 0 255] [ 0 255 0 0 0 0 0 255 0] [ 0 255 0 0 0 0 0 255 0] [ 0 0 255 255 0 255 255 0 0] [ 0 0 0 0 255 0 0 0 0]] </code></pre> <p>[1] Note that we are also counting the point itself as one of its own neighbors. That's alright.</p>
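<p>For completeness, a shorter sketch built on the same &quot;boundary = mask minus its interior&quot; idea uses binary erosion (it works whether <code>mask</code> holds 0/255 or the 0/1 values after the division above; the default structuring element looks at the four cardinal neighbors):</p>

<pre><code>from scipy import ndimage as ndi
import numpy as np

mask_bool = mask.astype(bool)
eroded = ndi.binary_erosion(mask_bool)   # interior points survive the erosion
perimeter = mask_bool &amp; ~eroded          # boundary points = mask minus interior

ys, xs = np.nonzero(perimeter)           # y and x coordinates of the perimeter
print(np.where(perimeter, 255, 0))
</code></pre>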
python|numpy
2
9,440
73,139,560
Adding rows using timestamp
<p>I saw this code <a href="https://stackoverflow.com/questions/46074742/combine-rows-and-add-up-value-in-dataframe">combine rows and add up value in dataframe</a>,</p> <p>but I want to add the values in cells for the same day, i.e. add all data for a day. how do I modify the code to achieve this?</p>
<p>Check below code:</p> <pre><code>import pandas as pd df = pd.DataFrame({'Price':[10000,10000,10000,10000,10000,10000], 'Time':['2012.05','2012.05','2012.05','2012.06','2012.06','2012.07'], 'Type':['Q','T','Q','T','T','Q'], 'Volume':[10,20,10,20,30,10] }) df.assign(daily_volume = df.groupby('Time')['Volume'].transform('sum')) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/5jZbS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5jZbS.png" alt="enter image description here" /></a></p>
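<p>If the goal is instead to collapse the rows so that each day appears only once, a sketch with the same example data would be:</p>

<pre><code>daily = df.groupby('Time', as_index=False)['Volume'].sum()
print(daily)
#       Time  Volume
# 0  2012.05      40
# 1  2012.06      50
# 2  2012.07      10
</code></pre>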
python|pandas|dataframe|for-loop|timestamp
0
9,441
72,933,196
Python Pandas Converting Dataframe to Tidy Format
<pre><code>dt = {'ID': [1, 1, 1, 1, 2, 2, 2, 2],
      'Test': ['Math', 'Math', 'Writing', 'Writing', 'Math', 'Math', 'Writing', 'Writing'],
      'Year': ['2008', '2009', '2008', '2009', '2008', '2009', '2008', '2009'],
      'Fall': [15, 12, 22, 10, 12, 16, 13, 23],
      'Spring': [16, 13, 22, 14, 13, 14, 11, 20],
      'Winter': [19, 27, 24, 20, 25, 21, 29, 26]}

mydt = pd.DataFrame(dt, columns = ['ID', 'Test', 'Year', 'Fall', 'Spring', 'Winter'])
</code></pre>

<p>So I have the above dataset. How can I convert it so that it looks like the following? Please let me know.</p>

<p><a href="https://i.stack.imgur.com/HSpPg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HSpPg.png" alt="enter image description here" /></a></p>
<p>You can try with <code>set_index</code> with <code>stack</code> + <code>unstack</code></p> <pre><code>out = (df.set_index(['ID','Test','Year']). stack().unstack(level=1). add_suffix('_Score').reset_index()) out Out[271]: Test ID Year level_2 Math_Score Writing_Score 0 1 2008 Fall 15 22 1 1 2008 Spring 16 22 2 1 2008 Winter 19 24 3 1 2009 Fall 12 10 4 1 2009 Spring 13 14 5 1 2009 Winter 27 20 6 2 2008 Fall 12 13 7 2 2008 Spring 13 11 8 2 2008 Winter 25 29 9 2 2009 Fall 16 23 10 2 2009 Spring 14 20 11 2 2009 Winter 21 26 </code></pre>
python|pandas
0
9,442
73,127,093
Stack dataframes in Pandas vertically and horizontally
<p>I have a dataframe that looks like this:</p> <pre><code> country region region_id year doy variable_a num_pixels 0 USA Iowa 12345 2022 1 32.2 100 1 USA Iowa 12345 2022 2 12.2 100 2 USA Iowa 12345 2022 3 22.2 100 3 USA Iowa 12345 2022 4 112.2 100 4 USA Iowa 12345 2022 5 52.2 100 </code></pre> <p>The year in the dataframe above is 2022. I have more dataframes for other years starting from 2010 onwards. I have also dataframes for other variables: <code>variable_b</code>, <code>variable_c</code>.</p> <p>I want to combine all these dataframes into a single dataframe such that</p> <ol> <li>The years are listed vertically, one below the other</li> <li>the data for the different variables is listed horizontally. The output should look like this:</li> </ol> <pre><code> country region region_id year doy variable_a variable_b variable_c 0 USA Iowa 12345 2010 1 32.2 44 101 1 USA Iowa 12345 2010 2 12.2 76 2332 .......................................................................... n-1 USA Iowa 12345 2022 1 321.2 444 501 n USA Iowa 12345 2022 2 122.2 756 32 </code></pre> <p>What is the most efficient way to achieve this? Please note that there will be overlap in years in the other dataframes so the solution needs to take that into account and not leave NaN values.</p>
<p>IIUC, this should work for you:</p> <pre><code>data1 = { 'country': {0: 'USA', 1: 'USA', 2: 'USA', 3: 'USA', 4: 'USA'}, 'region': {0: ' Iowa', 1: ' Iowa', 2: ' Iowa', 3: ' Iowa', 4: ' Iowa'}, 'region_id': {0: 12345, 1: 12345, 2: 12345, 3: 12345, 4: 12345}, 'year': {0: 2022, 1: 2022, 2: 2022, 3: 2022, 4: 2022}, 'doy': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}, 'variable_a': {0: 32.2, 1: 12.2, 2: 22.2, 3: 112.2, 4: 52.2}, 'num_pixels': {0: 100, 1: 100, 2: 100, 3: 100, 4: 100} } data2 = { 'country': {0: 'USB', 1: 'USB', 2: 'USB', 3: 'USB', 4: 'USB'}, 'region': {0: ' Iowb', 1: ' Iowb', 2: ' Iowb', 3: ' Iowb', 4: ' Iowb'}, 'region_id': {0: 12345, 1: 12345, 2: 12345, 3: 12345, 4: 12345}, 'year': {0: 2021, 1: 2021, 2: 2021, 3: 2021, 4: 2021}, 'doy': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}, 'variable_b': {0: 32.2, 1: 12.2, 2: 22.2, 3: 112.2, 4: 52.2}, 'num_pixels': {0: 100, 1: 100, 2: 100, 3: 100, 4: 100} } data3 = { 'country': {0: 'USC', 1: 'USC', 2: 'USC', 3: 'USC', 4: 'USC'}, 'region': {0: ' Iowc', 1: ' Iowc', 2: ' Iowc', 3: ' Iowc', 4: ' Iowc'}, 'region_id': {0: 12345, 1: 12345, 2: 12345, 3: 12345, 4: 12345}, 'year': {0: 2020, 1: 2020, 2: 2020, 3: 2020, 4: 2020}, 'doy': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}, 'variable_c1': {0: 32.2, 1: 12.2, 2: 22.2, 3: 112.2, 4: 52.2}, 'variable_c2': {0: 32.2, 1: 12.2, 2: 22.2, 3: 112.2, 4: 52.2}, 'num_pixels': {0: 100, 1: 100, 2: 100, 3: 100, 4: 100} } df1 = pd.DataFrame(data1) df2 = pd.DataFrame(data2) df3 = pd.DataFrame(data3) dfn = [df1, df2, df3] pd.concat(dfn, axis=0).sort_values(['year', 'country', 'region']).reset_index(drop=True) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/xvUrf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xvUrf.png" alt="enter image description here" /></a></p>
python|pandas
3
9,443
70,590,276
Shifting a value and creating a new index using pandas
<p>I have a <code>df</code></p> <pre><code> 2019 2020 2021 2022 A 10 20 30 40 </code></pre> <p>I am trying to create 2 new indexes <code>A-1</code> and <code>A-2</code> so that the output would look like this:</p> <pre><code> 2019 2020 2021 2022 A 10 20 30 40 A-1 nan 10 20 40 A-2 nan nan 10 20 </code></pre> <p>I tried:</p> <pre><code>s = df.loc['A',:].shift(1, axis=0) s = s.rename({'A': 'A-1'}, axis = 0) df = df.combine_first(s) </code></pre> <p>But I get an error at <code>----&gt; 3 df= df.combine_first(s)</code></p> <blockquote> <p>ValueError: Must specify axis=0 or 1</p> </blockquote> <p>When I add <code>axis = 0</code> I get:</p> <blockquote> <p>TypeError: combine_first() got an unexpected keyword argument 'axis'</p> </blockquote> <p>So I am not sure where is my mistake.</p>
<p>Just use <code>loc</code> to create your new indexes</p>

<pre><code>&gt;&gt;&gt; df = pd.DataFrame({2019:[10], 2020:[20], 2021:[30], 2022:[40]}, index=[&quot;A&quot;])
&gt;&gt;&gt; df.loc[&quot;A-1&quot;] = df.loc[&quot;A&quot;].shift()
&gt;&gt;&gt; df.loc[&quot;A-2&quot;] = df.loc[&quot;A-1&quot;].shift()
&gt;&gt;&gt; df
     2019  2020  2021  2022
A    10.0  20.0  30.0  40.0
A-1   NaN  10.0  20.0  30.0
A-2   NaN   NaN  10.0  20.0
</code></pre>
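<p>If you need more than two shifted rows, the same pattern generalizes with a small loop (a sketch, assuming you want rows <code>A-1</code> through <code>A-n</code>, each shifted one step further):</p>

<pre><code>n = 2
for k in range(1, n + 1):
    df.loc[f&quot;A-{k}&quot;] = df.loc[&quot;A&quot;].shift(k)
</code></pre>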
python|pandas
1
9,444
70,631,807
Python Pandas Pivot Of Two columns (ColumnName and Value)
<p>I have a pandas dataframe that contains two columns, as well as a default index. The first column is the intended 'Column Name' and the second column the required value for that column.</p>

<pre><code>   name           returnattribute
0  Customer Name  Customer One Name
1  Customer Code  CGLOSPA
2  Customer Name  Customer Two Name
3  Customer Code  COTHABA
4  Customer Name  Customer Three Name
5  Customer Code  CGLOADS
6  Customer Name  Customer Four Name
7  Customer Code  CAPRCANBRA
8  Customer Name  Customer Five Name
9  Customer Code  COTHAMO
</code></pre>

<p>I would like to pivot this so that instead of 10 rows, I have 5 rows with two columns ('Customer Name' and 'Customer Code'). The hoped for result is as below:</p>

<pre><code>   Customer Code  Customer Name
0  CGLOSPA        Customer One Name
1  COTHABA        Customer Two Name
2  CGLOADS        Customer Three Name
3  CAPRCANBRA     Customer Four Name
4  COTHAMO        Customer Five Name
</code></pre>

<p>I have tried to use the pandas pivot function:</p>

<pre><code>df.pivot(columns='name', values='returnattribute')
</code></pre>

<p>But this results in ten rows still with alternate blanks:</p>

<pre><code>   Customer Code  Customer Name
0  NaN            Customer One Name
1  CGLOSPA        NaN
2  NaN            Customer Two Name
3  COTHABA        NaN
4  NaN            Customer Three Name
5  CGLOADS        NaN
6  NaN            Customer Four Name
7  CAPRCANBRA     NaN
8  NaN            Customer Five Name
9  COTHAMO        NaN
</code></pre>

<p>How do I pivot the dataframe to get just 5 rows of two columns?</p>
<p>In <code>df.pivot</code> when <code>index</code> parameter is not passed <code>df.index</code> is used as default. Hence, the output.</p> <ul> <li><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">From Docs <code>DataFrame.pivot</code>:</a></li> </ul> <blockquote> <p><code>index</code>: str or object or a list of str, optional</p> <ul> <li>Column to use to make new frame’s index. If <code>None</code>, uses existing index.</li> </ul> </blockquote> <p>To get the desired output. You'd have to create a new index column like below.</p> <pre><code>df.assign(idx=df.index // 2).pivot( index=&quot;idx&quot;, columns=&quot;name&quot;, values=&quot;returnattribute&quot; ) # name Customer Code Customer Name # idx # 0 CGLOSPA Customer One Name # 1 COTHABA Customer Two Name # 2 CGLOADS Customer Three Name # 3 CAPRCANBRA Customer Four Name # 4 COTHAMO Customer Five Name </code></pre> <hr /> <p>Since every two rows represent one data point. You can <a href="https://numpy.org/doc/stable/reference/generated/numpy.reshape.html" rel="nofollow noreferrer"><code>reshape</code></a> the data and build the required dataframe.</p> <pre><code>reshaped = df['returnattribute'].to_numpy().reshape(-1, 2) # array([['Customer One Name', 'CGLOSPA'], # ['Customer Two Name', 'COTHABA'], # ['Customer Three Name', 'CGLOADS'], # ['Customer Four Name', 'CAPRCANBRA'], # ['Customer Five Name', 'COTHAMO']], dtype=object) col_names = pd.unique(df.name) # array(['Customer Name', 'Customer Code'], dtype=object) out = pd.DataFrame(reshaped, columns=col_names) # Customer Name Customer Code # 0 Customer One Name CGLOSPA # 1 Customer Two Name COTHABA # 2 Customer Three Name CGLOADS # 3 Customer Four Name CAPRCANBRA # 4 Customer Five Name COTHAMO # we can reorder the columns using reindex. </code></pre>
python|pandas|dataframe|pivot
2
9,445
70,599,088
Row sums of dataframe with variable column indexes (Python)
<p>I have a dataframe that has a few million rows. I need to calculate the sum of each row from a particular column index up until the last column. The column index for each row is unique. An example of this, with the desired output, will be:</p> <pre><code>import pandas as pd df = pd.DataFrame({'col1': [1, 2, 2, 5, None, 4], 'col2': [4, 2, 4, 2, None, 1], 'col3': [6, 3, 8, 6, None, 4], 'col4': [9, 8, 9, 3, None, 5], 'col5': [1, 3, 0, 1, None, 7], }) df_ind = pd.DataFrame({'ind': [1, 0, 3, 4, 3, 5]}) for i in df.index.to_list(): df.loc[i, &quot;total&quot;] = df.loc[i][(df_ind.loc[i, &quot;ind&quot;]).astype(int):].sum() print(df) &gt;&gt; col1 col2 col3 col4 col5 total 0 1.0 4.0 6.0 9.0 1.0 20.0 1 2.0 2.0 3.0 8.0 3.0 18.0 2 2.0 4.0 8.0 9.0 0.0 9.0 3 5.0 2.0 6.0 3.0 1.0 1.0 4 NaN NaN NaN NaN NaN 0.0 5 4.0 1.0 4.0 5.0 7.0 0.0 </code></pre> <p>How can I achieve this efficiently with pandas without using a for loop. Thanks</p>
<p>You can create a like-Indexed DataFrame that lists all of the column positions and then by comparing this DataFrame with <code>df_ind</code> you can create a mask for the entire original DataFrame.</p> <p>Then <code>mask</code> the original DataFrame and <code>sum</code> to get the row sums based on the appropriate index positions that vary by row.</p> <pre><code>import pandas as pd mask = pd.DataFrame({col: df.columns.get_loc(col) for col in df.columns}, index=df.index) # col1 col2 col3 col4 col5 #0 0 1 2 3 4 #1 0 1 2 3 4 #2 0 1 2 3 4 #3 0 1 2 3 4 #4 0 1 2 3 4 #5 0 1 2 3 4 mask = mask.ge(df_ind['ind'], axis=0) # col1 col2 col3 col4 col5 #0 False True True True True #1 True True True True True #2 False False False True True #3 False False False False True #4 False False False True True #5 False False False False False df['total'] = df[mask].sum(1) </code></pre> <hr /> <pre><code>print(df) col1 col2 col3 col4 col5 total 0 1.0 4.0 6.0 9.0 1.0 20.0 1 2.0 2.0 3.0 8.0 3.0 18.0 2 2.0 4.0 8.0 9.0 0.0 9.0 3 5.0 2.0 6.0 3.0 1.0 1.0 4 NaN NaN NaN NaN NaN 0.0 5 4.0 1.0 4.0 5.0 7.0 0.0 </code></pre>
python|pandas|sum|row
2
9,446
70,669,850
pandas merge ignore duplicate merged rows
<p>I am trying to merge below two data frames but I am not getting the expected result.</p> <pre><code>import pandas as pd previous_dict = [{&quot;category1&quot;:&quot;Home&quot;, &quot;category2&quot;:&quot;Power&quot;,&quot;usage&quot;:&quot;15&quot;,&quot;amount&quot;:&quot;65&quot;}, {&quot;category1&quot;:&quot;Home&quot;, &quot;category2&quot;:&quot;Power&quot;,&quot;usage&quot;:&quot;2&quot;,&quot;amount&quot;:&quot;15&quot;}, {&quot;category1&quot;:&quot;Home&quot;, &quot;category2&quot;:&quot;Vehicle&quot;,&quot;usage&quot;:&quot;6&quot;,&quot;amount&quot;:&quot;5&quot;} ] current_dict = [{&quot;category1&quot;:&quot;Home&quot;, &quot;category2&quot;:&quot;Power&quot;,&quot;usage&quot;:&quot;16&quot;,&quot;amount&quot;:&quot;79&quot;}, {&quot;category1&quot;:&quot;Home&quot;, &quot;category2&quot;:&quot;Power&quot;,&quot;usage&quot;:&quot;0.5&quot;,&quot;amount&quot;:&quot;2&quot;}, {&quot;category1&quot;:&quot;Home&quot;, &quot;category2&quot;:&quot;Vehicle&quot;,&quot;usage&quot;:&quot;3&quot;,&quot;amount&quot;:&quot;4&quot;} ] df_previous = pd.DataFrame.from_dict(previous_dict) print(df_previous) df_current = pd.DataFrame.from_dict(current_dict) print(df_current) df_merge = pd.merge(df_previous, df_current, on=['category1','category2'], how='outer',indicator=True, suffixes=('', '_y')) print(df_merge) </code></pre> <p>A previous year data frame</p> <pre><code> category1 category2 usage amount 0 Home Power 15 65 1 Home Power 2 15 2 Home Vehicle 6 5 </code></pre> <p>A current year data frame</p> <pre><code> category1 category2 usage amount 0 Home Power 16 79 1 Home Power 0.5 2 2 Home Vehicle 3 4 </code></pre> <p>Current result:</p> <pre><code> category1 category2 usage amount usage_y amount_y _merge 0 Home Power 15 65 16 79 both 1 Home Power 15 65 0.5 2 both 2 Home Power 2 15 16 79 both 3 Home Power 2 15 0.5 2 both 4 Home Vehicle 6 5 3 4 both </code></pre> <p>But my expected result is,</p> <pre><code> category1 category2 usage amount usage_y amount_y _merge 0 Home Power 15 65 16 79 both 3 Home Power 2 15 0.5 2 both 4 Home Vehicle 6 5 3 4 both </code></pre> <p>When category 1 and category2 have the same values a couple of times in both tables, I just want to match it with the correct order. How can I get the values as my expectation?</p>
<p>I think this is occurring due to the duplication in the columns you are joining on. One way to fix this is to also use the index as follows:</p> <pre><code>df_merge = pd.merge(df_previous.reset_index(), df_current.reset_index(), on=['category1','category2', 'index'], how='outer',indicator=True, suffixes=('', '_y')) index category1 category2 usage amount usage_y amount_y _merge 0 0 Home Power 15 65 16 79 both 1 1 Home Power 2 15 0.5 2 both 2 2 Home Vehicle 6 5 3 4 both </code></pre>
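<p>Note that this relies on the duplicated key rows being in the same overall positions in both frames. If that is not guaranteed, a sketch that matches the n-th occurrence of each <code>(category1, category2)</code> pair (an assumption about what &quot;correct order&quot; means here) numbers the duplicates with <code>groupby.cumcount</code> and merges on that as well:</p>

<pre><code>df_previous['occ'] = df_previous.groupby(['category1', 'category2']).cumcount()
df_current['occ'] = df_current.groupby(['category1', 'category2']).cumcount()

df_merge = pd.merge(df_previous, df_current,
                    on=['category1', 'category2', 'occ'],
                    how='outer', indicator=True, suffixes=('', '_y'))
</code></pre>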
python|pandas|merge
2
9,447
70,562,007
How to use interpn to interpolate in a dataframe?
<p>I am trying to interpolate a dataframe but am having no luck. I have a dataframe with a distance header and a wind component header that I am working with.</p> <p>The wind components are split with a <code>20</code> unit difference and the distance by <code>10</code>. I would like to be able to interpolate to within <code>1</code> of each unit but I'm stuck.</p> <p>I haven't used Scipy before this and I can't see much in the way of explanations in their docs (that I can understand).</p> <p>I have a table that I converted <code>to_dict</code> and use that for the dataframe:</p> <pre><code>data = {'dist': [100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400, 410, 420], '-60': [520, 600, 670, 740, 810, 880, 950, 1020, 1100, 1170, 1240, 1310, 1380, 1450, 1520, 1600, 1670, 1740, 1810, 1880, 1950, 2020, 2100, 2170, 2240, 2310, 2380, 2450, 2530, 2600, 2670, 2740, 2810], '-40': [440, 500, 570, 630, 690, 760, 820, 880, 950, 1010, 1070, 1140, 1200, 1260, 1330, 1390, 1450, 1510, 1580, 1640, 1700, 1770, 1830, 1890, 1960, 2020, 2080, 2150, 2210, 2270, 2340, 2400, 2460], '-20': [380, 430, 490, 550, 600, 660, 720, 770, 830, 880, 940, 1000, 1050, 1110, 1170, 1220, 1280, 1340, 1390, 1450, 1510, 1560, 1620, 1680, 1730, 1790, 1850, 1900, 1960, 2020, 2070, 2130, 2190], '0': [320, 370, 420, 480, 530, 580, 630, 680, 730, 780, 830, 890, 940, 990, 1040, 1090, 1140, 1190, 1240, 1300, 1350, 1400, 1450, 1500, 1550, 1600, 1650, 1710, 1760, 1810, 1860, 1910, 1960], '20': [280, 320, 370, 420, 470, 510, 560, 610, 650, 700, 750, 790, 840, 890, 930, 980, 1030, 1070, 1120, 1170, 1210, 1260, 1310, 1350, 1400, 1450, 1500, 1540, 1590, 1640, 1680, 1730, 1780], '40': [240, 280, 330, 370, 410, 460, 500, 540, 590, 630, 670, 720, 760, 800, 840, 890, 930, 970, 1020, 1060, 1100, 1150, 1190, 1230, 1280, 1320, 1360, 1400, 1450, 1490, 1530, 1580, 1620], '60': [210, 250, 290, 330, 370, 410, 450, 490, 530, 570, 610, 650, 690, 730, 770, 810, 850, 890, 930, 970, 1010, 1050, 1090, 1130, 1170, 1210, 1250, 1290, 1330, 1370, 1410, 1450, 1490]} df = pd.DataFrame(data).set_index(['dist']) df.columns = df.columns.map(float) df.columns.name = 'wind' print(df) </code></pre> <p>Printing this gives me:</p> <pre><code>wind -60.0 -40.0 -20.0 0.0 20.0 40.0 60.0 dist 100 520 440 380 320 280 240 210 110 600 500 430 370 320 280 250 120 670 570 490 420 370 330 290 130 740 630 550 480 420 370 330 140 810 690 600 530 470 410 370 150 880 760 660 580 510 460 410 160 950 820 720 630 560 500 450 170 1020 880 770 680 610 540 490 180 1100 950 830 730 650 590 530 190 1170 1010 880 780 700 630 570 200 1240 1070 940 830 750 670 610 210 1310 1140 1000 890 790 720 650 220 1380 1200 1050 940 840 760 690 230 1450 1260 1110 990 890 800 730 240 1520 1330 1170 1040 930 840 770 250 1600 1390 1220 1090 980 890 810 260 1670 1450 1280 1140 1030 930 850 270 1740 1510 1340 1190 1070 970 890 280 1810 1580 1390 1240 1120 1020 930 290 1880 1640 1450 1300 1170 1060 970 300 1950 1700 1510 1350 1210 1100 1010 310 2020 1770 1560 1400 1260 1150 1050 320 2100 1830 1620 1450 1310 1190 1090 330 2170 1890 1680 1500 1350 1230 1130 340 2240 1960 1730 1550 1400 1280 1170 350 2310 2020 1790 1600 1450 1320 1210 360 2380 2080 1850 1650 1500 1360 1250 370 2450 2150 1900 1710 1540 1400 1290 380 2530 2210 1960 1760 1590 1450 1330 390 2600 2270 2020 1810 1640 1490 1370 400 2670 2340 2070 1860 1680 1530 1410 410 2740 2400 2130 1910 1730 1580 1450 420 2810 2460 2190 1960 1780 1620 1490 </code></pre> 
<p>Which is all fine so far. Now what I'm stuck on is how to interpolate so that I can get accurate figures from it. I'm trying to use <code>interpn</code> but I'm obviously doing it wrong. Here is what I'm doing to try and get an interpolated figure for a wind component of <code>-35</code> and a distance of <code>103</code>:</p> <pre><code>arr = np.dstack(np.array_split(df.to_numpy(), 1)) wind = df.columns.to_numpy() dist = df.index.get_level_values(0).unique().to_numpy() print(interpn((wind, dist), arr, [float(-35), int(103)])) </code></pre> <p>To which I get an error of:</p> <pre><code>ValueError: There are 7 points and 33 values in dimension 0 </code></pre> <p>I have tried reading through the docs but can't seem to get my head around it and all the examples I find elsewhere are for graphical data.</p> <p>Can someone please help me figure this out, I'm pretty new to this kind of work. Thank you :)</p>
<p>There's no need to transform your data; you already have a 2D array and can use it as-is. You got the axes wrong: the first axis (axis 0) is the rows of the dataframe, the second axis (axis 1) the columns.</p>

<pre><code>arr = df.to_numpy()
dist = df.index.to_numpy()
wind = df.columns.to_numpy()

print(interpn((dist, wind), arr, [103, -35]))
# array([442.25])
</code></pre>

<p>As an alternative, you can also use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html" rel="nofollow noreferrer"><code>interp2d</code></a>; here the axes are just the other way round:</p>

<pre><code>f = interp2d(wind, dist, arr)
print(f(-35, 103))
#array([442.25])
</code></pre>
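<p>Note that <code>interp2d</code> has been deprecated in newer SciPy releases (and later removed). A forward-compatible sketch uses <code>RegularGridInterpolator</code>, which takes the axes in the same <code>(dist, wind)</code> order as <code>interpn</code>:</p>

<pre><code>from scipy.interpolate import RegularGridInterpolator

f = RegularGridInterpolator((dist, wind), arr)
print(f([[103, -35]]))
# array([442.25])
</code></pre>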
python|pandas|dataframe|scipy|interpolation
1
9,448
70,714,317
TypeError: the first argument must be callable when calling tensorflow optimizer `apply_gradients`
<p>I hope someone can help me resolve this issue which has been driving me crazy for days. I am building something somehow inspired to <a href="https://keras.io/examples/rl/actor_critic_cartpole/" rel="nofollow noreferrer">this</a> keras example. I am trying to manually calculate the gradient of a network but I can't figure out what I am doing wrong. Here is the model definition</p> <pre><code>inputs = layers.Input(shape=(state_dim,)) layer1 = layers.Dense(l1_dim, activation=&quot;relu&quot;)(inputs) ayer2 = layers.Dense(l2_dim, activation=&quot;relu&quot;)(layer1) action = layers.Dense(num_actions, activation=&quot;softmax&quot;)(layer2) critic = layers.Dense(1, activation=None)(layer2) model = keras.Model(inputs=inputs, outputs=[critic, action]) # model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate)) optimizer = keras.optimizers.Adam(learning_rate=learning_rate) </code></pre> <p>then I have my training loop, given set (state, action, reward, terminal, state_):</p> <pre><code>state = tf.convert_to_tensor([state], dtype=tf.float32) state_ = tf.convert_to_tensor([state_], dtype=tf.float32) reward = tf.convert_to_tensor(reward, dtype=tf.float32) # not fed to NN with tf.GradientTape(persistent=True) as tape: state_value, probs = model(state) state_value_, _ = model(state_) state_value = tf.squeeze(state_value) state_value_ = tf.squeeze(state_value_) action_probs = tfp.distributions.Categorical(probs=probs) log_prob = action_probs.log_prob(action) delta = reward + self.gamma * state_value_ * (1 - int(terminal)) - state_value actor_loss = -log_prob * delta critic_loss = delta ** 2 total_loss = actor_loss + critic_loss gradient = tape.gradient(total_loss, model.trainable_variables) optimizer.apply_gradients(zip(gradient, model.trainable_variables)) </code></pre> <p>However on my last line of code, when calling <code>optimizer.apply_gradients</code> I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/maccheroni/.virtualenvs/rl_gym/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py&quot;, line 639, in apply_gradients self._create_all_weights(var_list) File &quot;/Users/maccheroni/.virtualenvs/rl_gym/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py&quot;, line 829, in _create_all_weights self._create_hypers() File &quot;/Users/maccheroni/.virtualenvs/rl_gym/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py&quot;, line 977, in _create_hypers self._hyper[name] = self.add_weight( File &quot;/Users/maccheroni/.virtualenvs/rl_gym/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py&quot;, line 1192, in add_weight variable = self._add_variable_with_custom_getter( File &quot;/Users/maccheroni/.virtualenvs/rl_gym/lib/python3.9/site-packages/tensorflow/python/training/tracking/base.py&quot;, line 816, in _add_variable_with_custom_getter new_variable = getter( File &quot;/Users/maccheroni/.virtualenvs/rl_gym/lib/python3.9/site-packages/keras/engine/base_layer_utils.py&quot;, line 106, in make_variable init_val = functools.partial(initializer, shape, dtype=dtype) TypeError: the first argument must be callable </code></pre> <p>and I really don't understand why, because I have read so many tutorials, followed so many examples and they seem all to use this function in this way.</p>
<p>I also had the same error and found the solution. In my case, the initialization of the optimizer:</p>

<pre><code>optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
</code></pre>

<p>was using the variable <code>learning_rate</code>, which was <code>None</code>. Initializing with a number, or simply:</p>

<pre><code>optimizer = keras.optimizers.Adam()
</code></pre>

<p>solved the issue. In your case, it is not clear what <code>learning_rate</code> is, but you might check this out.</p>
python|tensorflow|keras|deep-learning|reinforcement-learning
0
9,449
27,322,876
Trouble plotting pandas DataFrame
<p>I have a pandas DataFrame that has 2 columns one of the columns is a list of dates in this format: '4-Dec-14' the other column is a list of numbers. I want to plot a graph with the dates on the x-axis and numbers that correspond with that date on the y-axis. Either a scatter plot or line graph. I was trying to follow the tutorial on Matplotlib <a href="http://matplotlib.org/users/recipes.html" rel="nofollow">http://matplotlib.org/users/recipes.html</a> 'Fixing common date annoyances' but it will not accept my date format. When I try using the <code>plot</code> function it returns an error: <code>ValueError: invalid literal for float(): 4-Dec-14</code>. How would I change my date format so it will work. Or is there some other way I can plot this DataFrame. Thanks</p>
<p>You should first convert the strings in the date column, to actual datetime values:</p> <pre><code>df['date'] = pd.to_datetime(df['date']) </code></pre> <p>Then you can plot it, either by setting the date as the index:</p> <pre><code>df = df.set_index('date') df['y'].plot() </code></pre> <p>or by specifying x and y in plot:</p> <pre><code>df.plot(x='date', y='y') </code></pre>
python|python-2.7|numpy|matplotlib|pandas
4
9,450
27,238,924
Converting List of Numpy Arrays to Numpy Matrix
<p>I have a list of lists, <code>lists</code> which I would like to convert to a numpy matrix (which I would usually do by <code>matrixA = np.matrix(lists)</code>. The len of each list in <code>lists</code> is 7000, and the <code>len(lists)</code> is 10000.</p> <p>So when I perform <code>matrixA = np.matrix(lists)</code>, I would expect that <code>np.shape(matrixA)</code> to return <code>(10000, 7000)</code>. However it instead returns <code>(10000, 1)</code> where each element is an ndarray.</p> <p>This has never happened to me before, but I absolutely need this to be in the form of <code>(10000, 7000)</code>. Might anyone have a suggestion about how to get this in the proper format?</p>
<p>I tried to recreate, but I can't:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; arrs = np.random.randn(10000, 7000) &gt;&gt;&gt; arrs array([[ 1.07575627, 0.16139542, 1.92732122, ..., -0.26905029, 0.73061849, -0.61021016], [-0.61298112, 0.58251565, -1.0204561 , ..., 1.73095028, 0.25763494, 0.03769834], [ 1.08827523, 1.67841947, -0.08118218, ..., -0.4315941 , 1.41509082, 0.59479981], ..., [ 0.7457839 , 0.20886401, 1.07463208, ..., 0.79508743, 0.15184803, -0.34028477], [-0.25272939, 0.17744917, -1.45035157, ..., -0.54263528, 0.04489259, -0.41222399], [ 1.58635482, 2.2273889 , 1.1803809 , ..., 0.8501827 , -0.43804703, 0.78975036]]) &gt;&gt;&gt; lists = [list(arr) for arr in arrs] &gt;&gt;&gt; len(lists) 10000 &gt;&gt;&gt; all(len(lis) == 7000 for lis in lists) True &gt;&gt;&gt; mat = np.matrix(lists) </code></pre> <p>and <code>mat</code> is now:</p> <pre><code>&gt;&gt;&gt; mat matrix([[ 1.07575627, 0.16139542, 1.92732122, ..., -0.26905029, 0.73061849, -0.61021016], [-0.61298112, 0.58251565, -1.0204561 , ..., 1.73095028, 0.25763494, 0.03769834], [ 1.08827523, 1.67841947, -0.08118218, ..., -0.4315941 , 1.41509082, 0.59479981], ..., [ 0.7457839 , 0.20886401, 1.07463208, ..., 0.79508743, 0.15184803, -0.34028477], [-0.25272939, 0.17744917, -1.45035157, ..., -0.54263528, 0.04489259, -0.41222399], [ 1.58635482, 2.2273889 , 1.1803809 , ..., 0.8501827 , -0.43804703, 0.78975036]]) &gt;&gt;&gt; mat.shape (10000, 7000) </code></pre>
python|arrays|numpy|matrix
1
9,451
14,364,203
How do you install numpy when you're not a superuser?
<p>I've downloaded the python binary file and then opened it in my home folder using</p> <pre><code>tar xzvf Python-2.7.3.tgz </code></pre> <p>This seems to work and when I run </p> <pre><code>~/Python-2.7.3/python </code></pre> <p>it works great. However when I try to import numpy, apparently it is not included. I've downloaded the numpy binary and opened it in my home directory but python still will not find it. Is there somewhere special that I have to put the numpy folder so that Python will find it?</p>
<p>Use <a href="https://github.com/utahta/pythonbrew" rel="nofollow">pythonbrew</a> to install Python into your $HOME folder:</p> <pre><code>$ pythonbrew install 2.7.2 </code></pre> <p>Then switch your current shell to use your local Python install:</p> <pre><code>$ pythonbrew use 2.7.2 </code></pre> <p>Now you should be able to install NumPy:</p> <pre><code>$ pip install numpy </code></pre>
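<p>Alternatively, if a system Python with <code>pip</code> is already available, a per-user install also avoids needing superuser rights (a sketch, assuming <code>pip</code> is on your PATH):</p> <pre><code>$ pip install --user numpy
</code></pre> <p>This installs the package under your home directory (the user site-packages location), which Python searches automatically for that user.</p>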
python|numpy|installation
1
9,452
14,885,660
Numpy loadtxt rounding off numbers
<p>I'm using numpy loadtxt function to read in a large set of data. The data appears to be rounded off. for example: The number in the text file is -3.79000000000005E+01 but numpy reads the number in as -37.9. I've set the dypte to np.float64 in the loadtxt call. Is there anyway to keep the precision of the original data file?</p>
<p><code>loadtxt</code> is not rounding the number. What you are seeing is the way NumPy chooses to <em>print</em> the array:</p> <pre><code>In [80]: import numpy as np In [81]: x = np.loadtxt('test.dat', dtype = np.float64) In [82]: print(x) -37.9 </code></pre> <p>The actual value is the np.float64 closest to the value inputted.</p> <pre><code>In [83]: x Out[83]: array(-37.9000000000005) </code></pre> <hr> <p>Or, in the more likely instance that you have a higher dimensional array,</p> <pre><code>In [2]: x = np.loadtxt('test.dat', dtype = np.float64) </code></pre> <p>If the <code>repr</code> of <code>x</code> looks truncated:</p> <pre><code>In [3]: x Out[3]: array([-37.9, -37.9]) </code></pre> <p>you can use <code>np.set_printoptions</code> to get higher precision:</p> <pre><code>In [4]: np.get_printoptions() Out[4]: {'edgeitems': 3, 'infstr': 'inf', 'linewidth': 75, 'nanstr': 'nan', 'precision': 8, 'suppress': False, 'threshold': 1000} In [5]: np.set_printoptions(precision = 17) In [6]: x Out[6]: array([-37.90000000000050306, -37.90000000000050306]) </code></pre> <p>(Thanks to @mgilson for pointing this out.)</p>
python|numpy
7
9,453
14,395,678
How to drop extra copy of duplicate index of Pandas Series?
<p>I have a Series <code>s</code> with duplicate index :</p> <pre><code>&gt;&gt;&gt; s STK_ID RPT_Date 600809 20061231 demo_str 20070331 demo_str 20070630 demo_str 20070930 demo_str 20071231 demo_str 20060331 demo_str 20060630 demo_str 20060930 demo_str 20061231 demo_str 20070331 demo_str 20070630 demo_str Name: STK_Name, Length: 11 </code></pre> <p>And I just want to keep the unique rows and only one copy of the duplicate rows by:</p> <pre><code>s[s.index.unique()] </code></pre> <p><code>Pandas 0.10.1.dev-f7f7e13</code> give the below error msg</p> <pre><code>&gt;&gt;&gt; s[s.index.unique()] Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "d:\Python27\lib\site-packages\pandas\core\series.py", line 515, in __getitem__ return self._get_with(key) File "d:\Python27\lib\site-packages\pandas\core\series.py", line 558, in _get_with return self.reindex(key) File "d:\Python27\lib\site-packages\pandas\core\series.py", line 2361, in reindex level=level, limit=limit) File "d:\Python27\lib\site-packages\pandas\core\index.py", line 2063, in reindex limit=limit) File "d:\Python27\lib\site-packages\pandas\core\index.py", line 2021, in get_indexer raise Exception('Reindexing only valid with uniquely valued Index ' Exception: Reindexing only valid with uniquely valued Index objects &gt;&gt;&gt; </code></pre> <p>So how to drop extra duplicate rows of series, keep the unique rows and only one copy of the duplicate rows in an efficient way ? (better in one line)</p>
<p>You can groupby the index and apply a function that returns one value per index group. Here, I take the first value:</p> <pre><code>In [1]: s = Series(range(10), index=[1,2,2,2,5,6,7,7,7,8]) In [2]: s Out[2]: 1 0 2 1 2 2 2 3 5 4 6 5 7 6 7 7 7 8 8 9 In [3]: s.groupby(s.index).first() Out[3]: 1 0 2 1 5 4 6 5 7 6 8 9 </code></pre> <p><strong>UPDATE</strong></p> <p>Addressing BigBug's comment about crashing when passing a MultiIndex to Series.groupby():</p> <pre><code>In [1]: s Out[1]: STK_ID RPT_Date 600809 20061231 demo 20070331 demo 20070630 demo 20070331 demo In [2]: s.reset_index().groupby(s.index.names).first() Out[2]: 0 STK_ID RPT_Date 600809 20061231 demo 20070331 demo 20070630 demo </code></pre>
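<p>In more recent pandas versions (assuming <code>Index.duplicated</code> is available in the version you are running), you can get the same result without a groupby by masking out repeated index labels directly:</p> <pre><code>In [4]: s[~s.index.duplicated(keep='first')]
Out[4]:
1    0
2    1
5    4
6    5
7    6
8    9
</code></pre> <p>This keeps the first occurrence of each index label and drops the rest, and it also works on a MultiIndex.</p>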
python|pandas
25
9,454
25,359,658
Get row-index of the last non-NaN value in each column of a pandas data frame
<p>How can I return the row index location of the last non-nan value for each column of the pandas data frame and return the locations as a pandas dataframe?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.notnull.html#pandas.notnull" rel="noreferrer"><code>notnull</code></a> and specifically <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html#pandas.DataFrame.idxmax" rel="noreferrer"><code>idxmax</code></a> to get the index values of the non <code>NaN</code> values</p> <pre><code>In [22]: df = pd.DataFrame({'a':[0,1,2,NaN], 'b':[NaN, 1,NaN, 3]}) df Out[22]: a b 0 0 NaN 1 1 1 2 2 NaN 3 NaN 3 In [29]: df[pd.notnull(df)].idxmax() Out[29]: a 2 b 3 dtype: int64 </code></pre> <p><strong>EDIT</strong></p> <p>Actually as correctly pointed out by @Caleb you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.last_valid_index.html" rel="noreferrer"><code>last_valid_index</code></a> which is designed for this:</p> <pre><code>In [3]: df = pd.DataFrame({'a':[3,1,2,np.NaN], 'b':[np.NaN, 1,np.NaN, -1]}) df Out[3]: a b 0 3 NaN 1 1 1 2 2 NaN 3 NaN -1 In [6]: df.apply(pd.Series.last_valid_index) Out[6]: a 2 b 3 dtype: int64 </code></pre>
python-2.7|pandas|numpy|scipy|nan
8
9,455
25,064,506
What scipy statistical test do I use to compare sample means?
<p>Assuming <em>sample sizes are not equal</em>, what test do I use to compare sample means under the following circumstances (please correct if any of the following are incorrect):</p> <p><strong>Normal Distribution = True</strong> and <strong>Homogeneity of Variance = True</strong></p> <pre><code>scipy.stats.ttest_ind(sample_1, sample_2) </code></pre> <p><strong>Normal Distribution = True</strong> and <strong>Homogeneity of Variance = False</strong></p> <pre><code>scipy.stats.ttest_ind(sample_1, sample_2, equal_var = False) </code></pre> <p><strong>Normal Distribution = False</strong> and <strong>Homogeneity of Variance = True</strong></p> <pre><code>scipy.stats.mannwhitneyu(sample_1, sample_2) </code></pre> <p><strong>Normal Distribution = False</strong> and <strong>Homogeneity of Variance = False</strong></p> <pre><code>??? </code></pre>
<h2>Fast answer:</h2> <p><strong>Normal Distribution = True</strong> and <strong>Homogeneity of Variance = False</strong> and <strong>sample sizes > 30-50</strong></p> <pre><code>scipy.stats.ttest_ind(sample1, sample2, equal_var=False) </code></pre> <h2>Good answer:</h2> <p>If you check the Central limit theorem, it says (from Wikipedia): "In probability theory, the central limit theorem (CLT) states that, given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined (finite) expected value and finite variance, will be approximately normally distributed, regardless of the underlying distribution"</p> <p>So, although you do not have a normal distributed population, if your sample is big enough (greater than 30 or 50 samples), then the mean of the samples will be normally distributed. So, you can use:</p> <pre><code>scipy.stats.ttest_ind(sample1, sample2, equal_var=False) </code></pre> <p>This is a two-sided test for the null hypothesis that 2 independent samples have identical average (expected) values. With the option equal_var = False it performs a Welch’s t-test, which does not assume equal population variance.</p>
python|numpy|statistics|scipy
10
9,456
25,128,537
Creating data histograms/visualizations using ipython and filtering out some values
<p>I posted a question earlier ( <a href="https://stackoverflow.com/questions/25107975/pandas-ipython-how-to-create-new-data-frames-with-drill-down-capabilities">Pandas-ipython, how to create new data frames with drill down capabilities</a> ) and it was pointed out that it is possibly too broad so I have some more specific questions that may be easier to respond to and help me get a start with graphing data.</p> <p>I have decided to try creating some visualizations of my data using Pandas (or any package accessible through ipython). The first, obvious, problem I run into is how can I filter on certain conditions. For example I type the command:</p> <pre><code>df.Duration.hist(bins=10) </code></pre> <p>but get an error due to unrecognized dtypes (there are some entries that aren't in datetime format). How can I exclude these in the original command?</p> <p>Also, what if I want to create the same histogram but filtering to keep only records that have id's (in an account id field) starting with the integer (or string?) '2'? </p> <p>Ultimately, I want to be able to create histograms, line plots, box plots and so on but filtering for certain months, user id's, or just bad 'dtypes'.</p> <p>Can anyone help me modify the above command to add filters to it. (I'm decent with python-new to data)</p> <p>tnx</p> <p><strong>update:</strong> a kind user below has been trying to help me with this problem. I have a few developments to add to the question and a more specific problem.</p> <p>I have columns in my data frame for Start Time and End Time and created a 'Duration' column for time lapsed.</p> <p>The Start Time/End Time columns have fields that look like: </p> <pre><code>2014/03/30 15:45 </code></pre> <p>and when I apply the command pd.to_datetime() to these columns I get fields resulting that look like:</p> <pre><code>2014-03-30 15:45:00 </code></pre> <p>I changed the format to datetime and created a new column which is the 'Duration' or time lapsed in one command:</p> <pre><code>df['Duration'] = pd.to_datetime(df['End Time'])-pd.to_datetime(df['Start Time']) </code></pre> <p>The format of the fields in the duration column is:</p> <pre><code>01:14:00 </code></pre> <p>or hh:mm:ss</p> <p>to indicate time lapsed or 74 mins in the above example.</p> <p>the dtype of the duration column fields (hh:mm:ss) is:</p> <pre><code>dtype('&lt;m8[ns]') </code></pre> <p>The question is, how can I convert these fields to just integers? </p>
<p>I think you need to convert duration (timedelta64) to int (assuming you have a duration). Then the .hist method will work.</p> <pre><code>from pandas import Series from numpy.random import rand from numpy import timedelta64 In [21]: a = (rand(3) *10).astype(int) a Out[21]: array([3, 3, 8]) In [22]: b = [timedelta64(x, 'D') for x in a] # This is a duration b Out[22]: [numpy.timedelta64(3,'D'), numpy.timedelta64(3,'D'), numpy.timedelta64(8,'D')] In [23]: c = Series(b) # This is a duration c Out[23]: 0 3 days 1 3 days 2 8 days dtype: timedelta64[ns] In [27]: d = c.apply(lambda x: x / timedelta64(1,'D')) # convert duration to int d Out[27]: 0 3 1 3 2 8 dtype: float64 In [28]: d.hist() </code></pre> <p>I converted the duration to days ('D'), but you can convert it to any <a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html#datetime-units" rel="nofollow">legal unit</a>.</p>
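<p>As a shorter route (a sketch, assuming the Duration column is already a timedelta64 column as described in the updated question), the <code>.dt</code> accessor exposes <code>total_seconds()</code> directly, which you can then scale and cast:</p> <pre><code># total elapsed time in seconds (float), then whole minutes as an integer
df['duration_seconds'] = df['Duration'].dt.total_seconds()
df['duration_minutes'] = (df['duration_seconds'] // 60).astype(int)
</code></pre>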
python|pandas|histogram
0
9,457
29,012,212
Implementing common random numbers in a simulation
<p>I am building a small simulation in Python and I would like to use <a href="http://en.wikipedia.org/wiki/Variance_reduction">Common Random Numbers</a> to reduce variation. I know that I must achieve synchronization for CRN to work:</p> <blockquote> <p>CRN requires synchronization of the random number streams, which ensures that in addition to using the same random numbers to simulate all configurations, a specific random number used for a specific purpose in one configuration is used for exactly the same purpose in all other configurations.</p> </blockquote> <p>I was wondering if the way I wanted to implement it in my simulation was valid or if I should be using a different approach.</p> <p>My simulation has three different classes (ClassA, ClassB, ClassC), and ClassA objects have random travel times, ClassB objects have random service times and random usage rates, and ClassC objects have random service times. Of course there can be multiple instances of each class of object.</p> <p>At the start of the simulation I specify a single random number seed (<code>replication_seed</code>) so that I can use a different seed for each simulation replication.</p> <pre><code>import numpy.random as npr rep_rnd_strm = npr.RandomState().seed(replication_seed) </code></pre> <p>Then in the constructor for each Class, I use <code>rep_rnd_strm</code> to generate a seed that is used to initialize the random number stream for the instance of the class:</p> <pre><code>self.class_rnd_strm = npr.RandomState().seed(rep_rnd_strm.randint(10000000)) </code></pre> <p>I then use <code>self.class_rnd_strm</code> to generate a seed for each random number stream needed for the class instance. For example the constructor of ClassA has:</p> <pre><code>self.travel_time_strm = npr.RandomState().seed(self.class_rnd_strm.randint(10000000)) </code></pre> <p>while the constructor of ClassB has:</p> <pre><code>self.service_time_strm = npr.RandomState().seed(self.class_rnd_strm.randint(10000000)) self.usage_rate_strm = npr.RandomState().seed(self.class_rnd_strm.randint(10000000)) </code></pre> <p>Is what I am doing here a valid approach to getting synchronization to work, or should I be doing things differently?</p>
<p>Yes. That is a valid approach to make it replicable, <strong>but only if</strong> you can <strong>guarantee</strong> that there is no randomness in the order in which the various instances of the various classes are instantiated. This is because if they are instantiated in a different order, then they will get a different seed for their random number generator. </p>
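<p>To illustrate that point, a minimal sketch of the seeding scheme with a fixed instantiation order could look like this (it uses the <code>npr.RandomState(seed)</code> constructor form to build each stream, and the stream names are just placeholders):</p> <pre><code>import numpy.random as npr

replication_seed = 12345
rep_rnd_strm = npr.RandomState(replication_seed)  # master stream for this replication

# create every stream in a FIXED, deterministic order so that each purpose
# draws exactly the same child seed in every configuration being compared
class_a_travel_strm  = npr.RandomState(rep_rnd_strm.randint(10000000))
class_b_service_strm = npr.RandomState(rep_rnd_strm.randint(10000000))
class_b_usage_strm   = npr.RandomState(rep_rnd_strm.randint(10000000))
class_c_service_strm = npr.RandomState(rep_rnd_strm.randint(10000000))
</code></pre> <p>If the construction order ever depends on simulated randomness, the mapping from purpose to seed changes between runs and the synchronization is lost, which is exactly the caveat above.</p>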
python|python-2.7|numpy|random|simulation
2
9,458
29,294,983
How to calculate correlation between all columns and remove highly correlated ones using pandas?
<p>I have a huge data set, and prior to machine learning modeling it is always suggested that you first remove highly correlated descriptors (columns). How can I calculate the column-wise correlation and remove the columns above a threshold value, say drop all the columns or descriptors having &gt;0.8 correlation? The reduced data should also retain the headers.</p> <p>Example data set:</p> <pre><code> GA PN PC MBP GR AP 0.033 6.652 6.681 0.194 0.874 3.177 0.034 9.039 6.224 0.194 1.137 3.4 0.035 10.936 10.304 1.015 0.911 4.9 0.022 10.11 9.603 1.374 0.848 4.566 0.035 2.963 17.156 0.599 0.823 9.406 0.033 10.872 10.244 1.015 0.574 4.871 0.035 21.694 22.389 1.015 0.859 9.259 0.035 10.936 10.304 1.015 0.911 4.5 </code></pre> <p>Please help.</p>
<p>The method here worked well for me, only a few lines of code: <a href="https://chrisalbon.com/machine_learning/feature_selection/drop_highly_correlated_features/" rel="noreferrer">https://chrisalbon.com/machine_learning/feature_selection/drop_highly_correlated_features/</a></p> <pre><code>import numpy as np # Create correlation matrix corr_matrix = df.corr().abs() # Select upper triangle of correlation matrix upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) # Find features with correlation greater than 0.95 to_drop = [column for column in upper.columns if any(upper[column] &gt; 0.95)] # Drop features df.drop(to_drop, axis=1, inplace=True) </code></pre>
python|pandas|correlation
59
9,459
29,155,745
Speed up function using cython
<p>I am trying to speed up one of my functions.</p> <pre><code>def get_scale_local_maximas(cube_coordinates, laplacian_cube): """ Check provided cube coordinate for scale space local maximas. Returns only the points that satisfy the criteria. A point is considered to be a local maxima if its value is greater than the value of the point on the next scale level and the point on the previous scale level. If the tested point is located on the first scale level or on the last one, then only one inequality should hold in order for this point to be local scale maxima. Parameters ---------- cube_coordinates : (n, 3) ndarray A 2d array with each row representing 3 values, ``(y,x,scale_level)`` where ``(y,x)`` are coordinates of the blob and ``scale_level`` is the position of a point in scale space. laplacian_cube : ndarray of floats Laplacian of Gaussian scale space. Returns ------- output : (n, 3) ndarray cube_coordinates that satisfy the local maximum criteria in scale space. Examples -------- &gt;&gt;&gt; one = np.array([[1, 2, 3], [4, 5, 6]]) &gt;&gt;&gt; two = np.array([[7, 8, 9], [10, 11, 12]]) &gt;&gt;&gt; three = np.array([[0, 0, 0], [0, 0, 0]]) &gt;&gt;&gt; check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]]) &gt;&gt;&gt; lapl_dummy = np.dstack([one, two, three]) &gt;&gt;&gt; get_scale_local_maximas(check_coords, lapl_dummy) array([[1, 0, 1]]) """ amount_of_layers = laplacian_cube.shape[2] amount_of_points = cube_coordinates.shape[0] # Preallocate index. Fill it with False. accepted_points_index = np.ones(amount_of_points, dtype=bool) for point_index, interest_point_coords in enumerate(cube_coordinates): # Row coordinate y_coord = interest_point_coords[0] # Column coordinate x_coord = interest_point_coords[1] # Layer number starting from the smallest sigma point_layer = interest_point_coords[2] point_response = laplacian_cube[y_coord, x_coord, point_layer] # Check the point under the current one if point_layer != 0: lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1] if lower_point_response &gt;= point_response: accepted_points_index[point_index] = False continue # Check the point above the current one if point_layer != (amount_of_layers-1): upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1] if upper_point_response &gt;= point_response: accepted_points_index[point_index] = False continue # Return only accepted points return cube_coordinates[accepted_points_index] </code></pre> <p>This is my attempt to speed it up using Cython:</p> <pre><code># cython: cdivision=True # cython: boundscheck=False # cython: nonecheck=False # cython: wraparound=False import numpy as np cimport numpy as cnp def get_scale_local_maximas(cube_coordinates, cnp.ndarray[cnp.double_t, ndim=3] laplacian_cube): """ Check provided cube coordinate for scale space local maximas. Returns only the points that satisfy the criteria. A point is considered to be a local maxima if its value is greater than the value of the point on the next scale level and the point on the previous scale level. If the tested point is located on the first scale level or on the last one, then only one inequality should hold in order for this point to be local scale maxima. Parameters ---------- cube_coordinates : (n, 3) ndarray A 2d array with each row representing 3 values, ``(y,x,scale_level)`` where ``(y,x)`` are coordinates of the blob and ``scale_level`` is the position of a point in scale space. laplacian_cube : ndarray of floats Laplacian of Gaussian scale space. 
Returns ------- output : (n, 3) ndarray cube_coordinates that satisfy the local maximum criteria in scale space. Examples -------- &gt;&gt;&gt; one = np.array([[1, 2, 3], [4, 5, 6]]) &gt;&gt;&gt; two = np.array([[7, 8, 9], [10, 11, 12]]) &gt;&gt;&gt; three = np.array([[0, 0, 0], [0, 0, 0]]) &gt;&gt;&gt; check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]]) &gt;&gt;&gt; lapl_dummy = np.dstack([one, two, three]) &gt;&gt;&gt; get_scale_local_maximas(check_coords, lapl_dummy) array([[1, 0, 1]]) """ cdef Py_ssize_t y_coord, x_coord, point_layer, point_index cdef cnp.double_t point_response, lower_point_response, upper_point_response cdef Py_ssize_t amount_of_layers = laplacian_cube.shape[2] cdef Py_ssize_t amount_of_points = cube_coordinates.shape[0] # amount_of_layers = laplacian_cube.shape[2] # amount_of_points = cube_coordinates.shape[0] # Preallocate index. Fill it with False. accepted_points_index = np.ones(amount_of_points, dtype=bool) for point_index in range(amount_of_points): interest_point_coords = cube_coordinates[point_index] # Row coordinate y_coord = interest_point_coords[0] # Column coordinate x_coord = interest_point_coords[1] # Layer number starting from the smallest sigma point_layer = interest_point_coords[2] point_response = laplacian_cube[y_coord, x_coord, point_layer] # Check the point under the current one if point_layer != 0: lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1] if lower_point_response &gt;= point_response: accepted_points_index[point_index] = False continue # Check the point above the current one if point_layer != (amount_of_layers-1): upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1] if upper_point_response &gt;= point_response: accepted_points_index[point_index] = False continue # Return only accepted points return cube_coordinates[accepted_points_index] </code></pre> <p>But I can see no gain in the speed. And also I tried to replace <code>cnp.ndarray[cnp.double_t, ndim=3]</code> with memoryview <code>cnp.double_t[:, :, ::1]</code> but it only slowed down the whole code. I will appreciate any hints or corrections to my code. I am relatively new to Cython and I may have done something wrong.</p> <p><strong>Edit:</strong></p> <p>I fully rewrote my function in Cython:</p> <pre><code>def get_scale_local_maximas(cnp.ndarray[cnp.int_t, ndim=2] cube_coordinates, cnp.ndarray[cnp.double_t, ndim=3] laplacian_cube): """ Check provided cube coordinate for scale space local maximas. Returns only the points that satisfy the criteria. A point is considered to be a local maxima if its value is greater than the value of the point on the next scale level and the point on the previous scale level. If the tested point is located on the first scale level or on the last one, then only one inequality should hold in order for this point to be local scale maxima. Parameters ---------- cube_coordinates : (n, 3) ndarray A 2d array with each row representing 3 values, ``(y,x,scale_level)`` where ``(y,x)`` are coordinates of the blob and ``scale_level`` is the position of a point in scale space. laplacian_cube : ndarray of floats Laplacian of Gaussian scale space. Returns ------- output : (n, 3) ndarray cube_coordinates that satisfy the local maximum criteria in scale space. 
Examples -------- &gt;&gt;&gt; one = np.array([[1, 2, 3], [4, 5, 6]]) &gt;&gt;&gt; two = np.array([[7, 8, 9], [10, 11, 12]]) &gt;&gt;&gt; three = np.array([[0, 0, 0], [0, 0, 0]]) &gt;&gt;&gt; check_coords = np.array([[1, 0, 1], [1, 0, 0], [1, 0, 2]]) &gt;&gt;&gt; lapl_dummy = np.dstack([one, two, three]) &gt;&gt;&gt; get_scale_local_maximas(check_coords, lapl_dummy) array([[1, 0, 1]]) """ cdef Py_ssize_t y_coord, x_coord, point_layer, point_index cdef cnp.double_t point_response, lower_point_response, upper_point_response cdef Py_ssize_t amount_of_layers = laplacian_cube.shape[2] cdef Py_ssize_t amount_of_points = cube_coordinates.shape[0] # Preallocate index. Fill it with False. accepted_points_index = np.ones(amount_of_points, dtype=bool) for point_index in range(amount_of_points): interest_point_coords = cube_coordinates[point_index] # Row coordinate y_coord = interest_point_coords[0] # Column coordinate x_coord = interest_point_coords[1] # Layer number starting from the smallest sigma point_layer = interest_point_coords[2] point_response = laplacian_cube[y_coord, x_coord, point_layer] # Check the point under the current one if point_layer != 0: lower_point_response = laplacian_cube[y_coord, x_coord, point_layer-1] if lower_point_response &gt;= point_response: accepted_points_index[point_index] = False continue # Check the point above the current one if point_layer != (amount_of_layers-1): upper_point_response = laplacian_cube[y_coord, x_coord, point_layer+1] if upper_point_response &gt;= point_response: accepted_points_index[point_index] = False continue # Return only accepted points return cube_coordinates[accepted_points_index] </code></pre> <p>And after that I made some benchmarks with my function and with suggested function that was vectorized:</p> <pre><code>%timeit compiled.get_scale_local_maximas_np(coords, lapl_dummy) %timeit compiled.get_scale_local_maximas(coords, lapl_dummy) %timeit dynamic.get_scale_local_maximas_np(coords, lapl_dummy) %timeit dynamic.get_scale_local_maximas(coords, lapl_dummy) 10000 loops, best of 3: 101 µs per loop 1000 loops, best of 3: 328 µs per loop 10000 loops, best of 3: 103 µs per loop 1000 loops, best of 3: 1.6 ms per loop </code></pre> <p>The <code>compiled</code> namespace represents these two functions compiled using Cython.</p> <p>The <code>dynamic</code> namespace represents usual Python file.</p> <p>So, I made a conclusion that in this case the numpy approach is better.</p>
<p>Your Python code could still be improved as you're not "already doing 98% in numpy": you're still iterating over the rows of the coordinate array and performing 1-2 checks per row.</p> <p>You could use numpy's "fancy indexing" and masks to get it <em>fully</em> in a vectorized form:</p> <pre><code>def get_scale_local_maximas_full_np(coords, cube): x, y, z = [ coords[:, ind] for ind in range(3) ] point_responses = cube[x, y, z] lowers = point_responses.copy() uppers = point_responses.copy() not_layer_0 = z &gt; 0 lower_responses = cube[x[not_layer_0], y[not_layer_0], z[not_layer_0]-1] lowers[not_layer_0] = lower_responses not_max_layer = z &lt; (cube.shape[2] - 1) upper_responses = cube[x[not_max_layer], y[not_max_layer], z[not_max_layer]+1] uppers[not_max_layer] = upper_responses lo_check = np.ones(z.shape, dtype=np.bool) lo_check[not_layer_0] = (point_responses &gt; lowers)[not_layer_0] hi_check = np.ones(z.shape, dtype=np.bool) hi_check[not_max_layer] = (point_responses &gt; uppers)[not_max_layer] return coords[lo_check &amp; hi_check] </code></pre> <p>I've generated a set of somewhat larger data to test performance with:</p> <pre><code>lapl_dummy = np.random.rand(100,100,100) coords = np.random.random_integers(0,99, size=(1000,3)) </code></pre> <p>I get the following timing results:</p> <pre><code>In [146]: %timeit get_scale_local_maximas_full_np(coords, lapl_dummy) 10000 loops, best of 3: 175 µs per loop In [147]: %timeit get_scale_local_maximas(coords, lapl_dummy) 100 loops, best of 3: 2.24 ms per loop </code></pre> <p>But of course, be careful with performance tests, because it depends often on the data used.</p> <p>I have little experience with Cython, can't help you there.</p>
python|c|numpy|cython
4
9,460
33,651,243
plotting seismic wiggle traces using matplotlib
<p><a href="https://i.stack.imgur.com/XMhoQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XMhoQ.png" alt="enter image description here"></a></p> <p>I'm trying to recreate the above style of plotting using matplotlib.</p> <p>The raw data is stored in a 2D numpy array, where the fast axis is time.</p> <p>Plotting the lines is the easy bit. I'm trying to get the shaded-in areas efficiently.</p> <p>My current attempt looks something like:</p> <pre><code>import numpy as np from matplotlib import collections import matplotlib.pyplot as pylab #make some oscillating data panel = np.meshgrid(np.arange(1501), np.arange(284))[0] panel = np.sin(panel) #generate coordinate vectors. panel[:,-1] = np.nan #lazy prevents polygon wrapping x = panel.ravel() y = np.meshgrid(np.arange(1501), np.arange(284))[0].ravel() #find indexes of each zero crossing zero_crossings = np.where(np.diff(np.signbit(x)))[0]+1 #calculate scalars used to shift "traces" to plotting corrdinates trace_centers = np.linspace(1,284, panel.shape[-2]).reshape(-1,1) gain = 0.5 #scale traces #shift traces to plotting coordinates x = ((panel*gain)+trace_centers).ravel() #split coordinate vectors at each zero crossing xpoly = np.split(x, zero_crossings) ypoly = np.split(y, zero_crossings) #we only want the polygons which outline positive values if x[0] &gt; 0: steps = range(0, len(xpoly),2) else: steps = range(1, len(xpoly),2) #turn vectors of polygon coordinates into lists of coordinate pairs polygons = [zip(xpoly[i], ypoly[i]) for i in steps if len(xpoly[i]) &gt; 2] #this is so we can plot the lines as well xlines = np.split(x, 284) ylines = np.split(y, 284) lines = [zip(xlines[a],ylines[a]) for a in range(len(xlines))] #and plot fig = pylab.figure() ax = fig.add_subplot(111) col = collections.PolyCollection(polygons) col.set_color('k') ax.add_collection(col, autolim=True) col1 = collections.LineCollection(lines) col1.set_color('k') ax.add_collection(col1, autolim=True) ax.autoscale_view() pylab.xlim([0,284]) pylab.ylim([0,1500]) ax.set_ylim(ax.get_ylim()[::-1]) pylab.tight_layout() pylab.show() </code></pre> <p>and the result is <a href="https://i.stack.imgur.com/HOLMG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HOLMG.png" alt="enter image description here"></a></p> <p>There are two issues:</p> <ol> <li><p>It does not fill perfectly because I am splitting on the array indexes closest to the zero crossings, not the exact zero crossings. I'm assuming that calculating each zero crossing will be a big computational hit.</p></li> <li><p>Performance. It's not that bad, given the size of the problem - around a second to render on my laptop, but i'd like to get it down to 100ms - 200ms.</p></li> </ol> <p>Because of the usage case I am limited to python with numpy/scipy/matplotlib. Any suggestions?</p> <p><strong>Followup:</strong></p> <p>Turns out linearly interpolating the zero crossings can be done with very little computational load. By inserting the interpolated values into the data, setting negative values to nans, and using a single call to pyplot.fill, 500,000 odd samples can be plotted in around 300ms.</p> <p>For reference, Tom's method below on the same data took around 8 seconds.</p> <p>The following code assumes an input of a numpy recarray with a dtype that mimics a seismic unix header/trace definition. 
</p> <pre><code>def wiggle(frame, scale=1.0): fig = pylab.figure() ax = fig.add_subplot(111) ns = frame['ns'][0] nt = frame.size scalar = scale*frame.size/(frame.size*0.2) #scales the trace amplitudes relative to the number of traces frame['trace'][:,-1] = np.nan #set the very last value to nan. this is a lazy way to prevent wrapping vals = frame['trace'].ravel() #flat view of the 2d array. vect = np.arange(vals.size).astype(np.float) #flat index array, for correctly locating zero crossings in the flat view crossing = np.where(np.diff(np.signbit(vals)))[0] #index before zero crossing #use linear interpolation to find the zero crossing, i.e. y = mx + c. x1= vals[crossing] x2 = vals[crossing+1] y1 = vect[crossing] y2 = vect[crossing+1] m = (y2 - y1)/(x2-x1) c = y1 - m*x1 #tack these values onto the end of the existing data x = np.hstack([vals, np.zeros_like(c)]) y = np.hstack([vect, c]) #resort the data order = np.argsort(y) #shift from amplitudes to plotting coordinates x_shift, y = y[order].__divmod__(ns) ax.plot(x[order] *scalar + x_shift + 1, y, 'k') x[x&lt;0] = np.nan x = x[order] *scalar + x_shift + 1 ax.fill(x,y, 'k', aa=True) ax.set_xlim([0,nt]) ax.set_ylim([ns,0]) pylab.tight_layout() pylab.show() </code></pre> <p><a href="https://i.stack.imgur.com/vkyFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vkyFU.png" alt="enter image description here"></a></p> <p>The full code is published at <a href="https://github.com/stuliveshere/PySeis" rel="nofollow noreferrer">https://github.com/stuliveshere/PySeis</a></p>
<p>You can do this easily with <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.fill_betweenx" rel="noreferrer"><code>fill_betweenx</code></a>. From the docs:</p> <blockquote> <p>Make filled polygons between two horizontal curves.</p> <p>Call signature:</p> <p>fill_betweenx(y, x1, x2=0, where=None, **kwargs) Create a PolyCollection filling the regions between x1 and x2 where where==True</p> </blockquote> <p>The important part here is the <code>where</code> argument.</p> <p>So, you want to have <code>x2 = offset</code>, and then have <code>where = x&gt;offset</code></p> <p>For example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt fig,ax = plt.subplots() # Some example data y = np.linspace(700.,900.,401) offset = 94. x = offset+10*(np.sin(y/2.)* 1/(10. * np.sqrt(2 * np.pi)) * np.exp( - (y - 800)**2 / (2 * 10.**2)) ) # This function just gives a wave that looks something like a seismic arrival ax.plot(x,y,'k-') ax.fill_betweenx(y,offset,x,where=(x&gt;offset),color='k') ax.set_xlim(93,95) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/VVUgJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VVUgJ.png" alt="enter image description here"></a></p> <p>You need to do <code>fill_betweenx</code> for each of your offsets. For example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt fig,ax = plt.subplots() # Some example data y = np.linspace(700.,900.,401) offsets = [94., 95., 96., 97.] times = [800., 790., 780., 770.] for offset, time in zip(offsets,times): x = offset+10*(np.sin(y/2.)* 1/(10. * np.sqrt(2 * np.pi)) * np.exp( - (y - time)**2 / (2 * 10.**2)) ) ax.plot(x,y,'k-') ax.fill_betweenx(y,offset,x,where=(x&gt;offset),color='k') ax.set_xlim(93,98) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/IJHZp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IJHZp.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|plot
7
9,461
33,941,797
creating new column of values from parse function using pandas
<p>I have a csv containing a column 'start' with values of:</p> <pre><code>2015-09-28T12:58:42.831+03 2015-09-28T13:37:43.669+03 2015-09-28T14:11:31.383+03 2015-09-28T15:25:34.710+03 2015-09-28T18:06:02.106+03 </code></pre> <p>I want to create a new column in the dataframe with the parsed version of the time. So for one value it would be:</p> <pre><code>import pandas as pd from dateutil.parser import parse parse(time_Test.start[1]) datetime.datetime(2015, 9, 28, 13, 37, 43, 669000, tzinfo=tzoffset(None, 10800)) </code></pre> <p>I can iterate through and parse all of the values:</p> <pre><code>for i in time_Test.start: x = parse(i) print x 2015-09-28 12:58:42.831000+03:00 2015-09-28 13:37:43.669000+03:00 2015-09-28 14:11:31.383000+03:00 2015-09-28 15:25:34.710000+03:00 2015-09-28 18:06:02.106000+03:00 2015-09-28 18:33:19.217000+03:00 </code></pre> <p>How would I alter this to place the calculated values into a new column? </p>
<p>You can create a new column in your data frame with the following command.</p> <pre><code>time_Test['parsed_datetime'] = [parse(i) for i in time_Test.start] </code></pre> <p>However, as suggested by EdChum, I would recommend using the <code>parse_dates=[the column index where your dates are]</code> flag when you read your file. Your dates will be parsed automatically. You can find the full documentation here: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html</a></p>
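<p>If you prefer to stay within pandas, <code>pd.to_datetime</code> parses the whole column in a vectorized way, without a Python-level loop. This is only a sketch, assuming the column is called <code>start</code> as in the question and that your pandas version accepts the trailing <code>+03</code> offset:</p> <pre><code>time_Test['parsed_datetime'] = pd.to_datetime(time_Test['start'])
</code></pre>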
python|pandas
0
9,462
33,641,358
Can someone help me with TensorFlow?
<p>Google just opened up TensorFlow as opened source. I read it a bit but looks like you can only train it with their given MNIST data.</p> <p>I am looking for example code where i can train with my own data, and output results for my test file.</p> <p>where I have .csv file (like a sample per line) as training data (with id,output,+72 more columns)</p> <p>and have another .csv file for test data where i'd to predict output(1 or 0).</p> <p>Anyone understand that TensorFlow enough to give me some sample code?</p>
<p>The best solution I have found is:</p> <p><a href="https://github.com/google/skflow">https://github.com/google/skflow</a></p> <p>Charles</p>
tensorflow
5
9,463
23,796,191
Calculate rolling time difference in pandas efficiently
<p>I have a panel in pandas and am trying to calculate the amount of time that an individual spends in each stage. To give a better sense of this my dataset is as follows:</p> <pre><code>group date stage A 2014-01-01 one A 2014-01-03 one A 2014-01-04 one A 2014-01-05 two B 2014-01-02 four B 2014-01-06 five B 2014-01-10 five C 2014-01-03 two C 2014-01-05 two </code></pre> <p>I'm looking to calculate stage duration to give:</p> <pre><code> group date stage dur A 2014-01-01 one 0 A 2014-01-03 one 2 A 2014-01-04 one 3 A 2014-01-05 two 0 B 2014-01-02 four 0 B 2014-01-06 five 0 B 2014-01-10 five 4 C 2014-01-03 two 0 C 2014-01-05 two 2 </code></pre> <p>The method that I'm using below is extremely slow. Any ideas on a quicker method?</p> <pre><code>df['stage_duration'] = df.groupby(['group', 'stage']).date.apply(lambda y: (y - y.iloc[0])).apply(lambda y:y / np.timedelta64(1, 'D'))) </code></pre>
<p>Based your code (your <code>groupby/apply</code>), it looks like (despite your example ... but maybe I misunderstand what you want and then what Andy did would be the best idea) that you're working with a 'date' column that is a <code>datetime64</code> dtype and not an <code>integer</code> dtype in your actual data. Also it looks like you want compute the change in days as measured from the first observation of a given <code>group/stage</code>. I think this is a better set of example data (if I understand your goal correctly):</p> <pre><code>&gt;&gt;&gt; df group date stage dur 0 A 2014-01-01 one 0 1 A 2014-01-03 one 2 2 A 2014-01-04 one 3 3 A 2014-01-05 two 0 4 B 2014-01-02 four 0 5 B 2014-01-06 five 0 6 B 2014-01-10 five 4 7 C 2014-01-03 two 0 8 C 2014-01-05 two 2 </code></pre> <p>Given that you should get some speed-up from just modifying your apply (as Jeff suggests in his comment) by dividing through by the <code>timedelta64</code> in a vectorized way after the apply (or you could do it in the apply):</p> <pre><code>&gt;&gt;&gt; df['dur'] = df.groupby(['group','stage']).date.apply(lambda x: x - x.iloc[0]) &gt;&gt;&gt; df['dur'] /= np.timedelta64(1,'D') &gt;&gt;&gt; df group date stage dur 0 A 2014-01-01 one 0 1 A 2014-01-03 one 2 2 A 2014-01-04 one 3 3 A 2014-01-05 two 0 4 B 2014-01-02 four 0 5 B 2014-01-06 five 0 6 B 2014-01-10 five 4 7 C 2014-01-03 two 0 8 C 2014-01-05 two 2 </code></pre> <p>But you can also avoid the <code>groupby/apply</code> given your data is in group,stage,date order. The first date for every <code>['group','stage']</code> grouping happens when either the group changes or the stage changes. So I think you can do something like the following:</p> <pre><code>&gt;&gt;&gt; beg = (df.group != df.group.shift(1)) | (df.stage != df.stage.shift(1)) &gt;&gt;&gt; df['dur'] = (df['date'] - df['date'].where(beg).ffill())/np.timedelta64(1,'D') &gt;&gt;&gt; df group date stage dur 0 A 2014-01-01 one 0 1 A 2014-01-03 one 2 2 A 2014-01-04 one 3 3 A 2014-01-05 two 0 4 B 2014-01-02 four 0 5 B 2014-01-06 five 0 6 B 2014-01-10 five 4 7 C 2014-01-03 two 0 8 C 2014-01-05 two 2 </code></pre> <p>Explanation: Note what <code>df['date'].where(beg)</code> creates:</p> <pre><code>&gt;&gt;&gt; beg = (df.group != df.group.shift(1)) | (df.stage != df.stage.shift(1)) &gt;&gt;&gt; df['date'].where(beg) 0 2014-01-01 1 NaT 2 NaT 3 2014-01-05 4 2014-01-02 5 2014-01-06 6 NaT 7 2014-01-03 8 NaT </code></pre> <p>And then I <code>ffill</code> the values and take the difference with the 'date' column.</p> <p><strong>Edit:</strong> As Andy points out you could also use <code>transform</code>:</p> <pre><code>&gt;&gt;&gt; df['dur'] = df.date - df.groupby(['group','stage']).date.transform(lambda x: x.iloc[0]) &gt;&gt;&gt; df['dur'] /= np.timedelta64(1,'D') group date stage dur 0 A 2014-01-01 one 0 1 A 2014-01-03 one 2 2 A 2014-01-04 one 3 3 A 2014-01-05 two 0 4 B 2014-01-02 four 0 5 B 2014-01-06 five 0 6 B 2014-01-10 five 4 7 C 2014-01-03 two 0 8 C 2014-01-05 two 2 </code></pre> <p><strong>Speed</strong>: I timed the two method using a similar dataframe with 400,000 observations:</p> <p>Apply method:</p> <pre><code>1 loops, best of 3: 18.3 s per loop </code></pre> <p>Non-apply method:</p> <pre><code>1 loops, best of 3: 1.64 s per loop </code></pre> <p>So I think avoiding the apply could give some significant speed-ups</p>
python|pandas
7
9,464
29,678,166
Pandas: Weighted median of grouped observations
<p>I have a dataframe that contains number of observations per group of income:</p> <pre><code>INCAGG 1 6.561681e+08 3 9.712955e+08 5 1.658043e+09 7 1.710781e+09 9 2.356979e+09 </code></pre> <p>I would like to compute the median income group. What do I mean? Let's start with a simpler series:</p> <pre><code>INCAGG 1 6 3 9 5 16 7 17 9 23 </code></pre> <p>It represents this set of numbers:</p> <pre><code>1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 </code></pre> <p>Which I can reorder to</p> <pre><code>1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 </code></pre> <p>which visually is what I mean - the median here would be <code>7</code>.</p>
<p>After glancing at a numpy example <a href="https://stackoverflow.com/questions/20601872/numpy-or-scipy-to-calculate-weighted-median">here</a>, I think <code>cumsum()</code> provides a good approach. Assuming your column of counts is called 'wt', here's a simple solution that will work most of the time (and see below for a more general solution):</p> <pre><code>df = df.sort('incagg') df['tmp'] = df.wt.cumsum() &lt; ( df.wt.sum() / 2. ) df['med_grp'] = (df.tmp==False) &amp; (df.tmp.shift()==True) </code></pre> <p>The second code line above is dividing into rows above and below the median. The median observation will be in the first <code>False</code> group.</p> <pre><code> incagg wt tmp med_grp 0 1 656168100 True False 1 3 971295500 True False 2 5 1658043000 True False 3 7 1710781000 False True 4 9 2356979000 False False df.ix[df.med_grp,'incagg'] 3 7 Name: incagg, dtype: int64 </code></pre> <p>This will work fine when the median is unique and often when it isn't. The problem can only occur if the median is non-unique AND it falls on the edge of a group. In this case (with 5 groups and weights in the millions/billions), it's really not a concern but nevertheless here's a more general solution:</p> <pre><code>df['tmp1'] = df.wt.cumsum() == (df.wt.sum() / 2.) df['tmp2'] = df.wt.cumsum() &lt; (df.wt.sum() / 2.) df['med_grp'] = (df.tmp2==False) &amp; (df.tmp2.shift()==True) df['med_grp'] = df.med_grp | df.tmp1.shift() incagg wt tmp1 tmp2 med_grp 0 1 1 False True False 1 3 1 False True False 2 5 1 True False True 3 7 2 False False True 4 9 1 False False False df.ix[df.med_grp,'incagg'] 2 5 3 7 df.ix[df.med_grp,'incagg'].mean() 6.0 </code></pre>
python|pandas|scipy
1
9,465
62,255,125
Trying to understand why none of my tensorflow-gpu import statements are working
<p>I'm using anaconda to run through a google code lab with tensorflow on my Windowsx64 machine. Followed directions and got the model trained nicely, all was good. Then I decided to try again with tensorflow-gpu. </p> <p>So I uninstalled tensorflow, and installed tensorflow-gpu using anaconda. (conda uninstall tensorflow -> conda install tensorflow-gpu). Supposedly, anaconda is supposed to take care of cuDNN versions and so forth. In my terminal, I can run python and in the interpreter run:</p> <pre><code>&gt;&gt;&gt; import tensorflow as tf &gt;&gt;&gt; print(tf.__version__) 1.8.0 </code></pre> <p>Looking great so far. I try</p> <pre><code>&gt;&gt;&gt; sess = tf.Session() </code></pre> <p>and get a nice long output with my GeForce GTX 1050 driver listed as it should be, with compute capability 6.1, well above the 3.5 threshold.</p> <p>Awesome.</p> <p>But then when I try to go back and run my code, every other statement I use with tf fails. For example, from <a href="https://www.tensorflow.org/guide/gpu" rel="nofollow noreferrer">Tensorflow's GPU documentation</a>, they recommend running the following code to check that the GPU is working:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU'))) </code></pre> <p>and my result is:</p> <pre><code>&gt;&gt;&gt; print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU'))) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: module 'tensorflow' has no attribute 'config' </code></pre> <p>Further down they recommend enabling debugging with:</p> <pre><code>tf.debugging.set_log_device_placement(True) </code></pre> <p>My result:</p> <pre><code>&gt;&gt;&gt; tf.debugging.set_log_device_placement(True) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: module 'tensorflow' has no attribute 'debugging' </code></pre> <p>Needless to say, when I go back to my original code from my codelab, it fails on the first line that uses tf aside from the original import statement itself. Am I missing something basic here? Why are none of my tf commands recognized?</p> <p>Any ideas what's up?</p>
<p>Your TensorFlow version is probably too old. I'm on 2.1 and that line works for me, so I'm guessing <code>tf.config</code> was introduced in 2.0. Until you upgrade, try the older (now deprecated) API instead: <a href="https://www.tensorflow.org/api_docs/python/tf/test/is_gpu_available" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/test/is_gpu_available</a></p>
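<p>For example, on a 1.x install something like this should report whether the GPU is visible (a minimal sketch using the older, since-deprecated API):</p> <pre><code>import tensorflow as tf

print(tf.test.is_gpu_available())   # True if a CUDA-capable device is usable
print(tf.test.gpu_device_name())    # e.g. '/device:GPU:0', or '' if none found
</code></pre>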
python|tensorflow|anaconda
0
9,466
62,062,231
Is it possible to involve a search bar on a DataFrame in Pandas?
<p>I have a DataFrame in Pandas which collects some data from an Excel document. I created a GUI with PyQt5 in order to make it look more interesting, but here is the thing.</p> <p>Is it possible to make a dynamic search bar in order to search through that DataFrame? For example, my DataFrame has over 3k+ rows and I want to search for John Doe and have the results come up on the GUI. As far as I know, QLineEdit is used for this, but I can't seem to implement it in my code.</p> <p>Am I doing something wrong, or is it not possible to do this on a DataFrame? If anyone wants to help me, just let me know; I would be very grateful, and I guess it will only take 10-15 minutes. I can also post the code here, but talking on Discord, explaining it in detail and sharing screens would be a lot easier.</p>
<p>This can be done by subclassing <code>QAbstractTableModel</code> to create a custom table model that uses the underlying dataframe for supplying data to a <code>QTableView</code>. This custom model can then be combined with a <code>QProxyFilterSortModel</code> to filter the data in the table. To create a custom non-editable model from <code>QAbstractTableModel</code> you need to implement <code>rowCount</code>, <code>columnCount</code>, <code>data</code>, and <code>headerData</code> at the very least. In this case, minimal implemetation could be something like this:</p> <pre><code>class DataFrameModel(QtCore.QAbstractTableModel): def __init__(self, data_frame, parent = None): super().__init__(parent) self.data_frame = data_frame def rowCount(self, index): if index.isValid(): return 0 return self.data_frame.shape[0] def columnCount(self, index): if index.isValid(): return 0 return self.data_frame.shape[1] def data(self, index, role): if not index.isValid() or role != QtCore.Qt.DisplayRole: return None return str(self.data_frame.iloc[index.row(), index.column()]) def headerData(self, section, orientation, role=None): if role != QtCore.Qt.DisplayRole: return None if orientation == QtCore.Qt.Vertical: return self.data_frame.index[section] else: return self.data_frame.columns[section] </code></pre> <p>To show and filter the data in a table you could do something like this:</p> <pre><code>class MyWidget(QtWidgets.QWidget): def __init__(self, parent = None): super().__init__(parent) self.table_view = QtWidgets.QTableView() self.proxy_model = QtCore.QSortFilterProxyModel() # by default, the QSortFilterProxyModel will search for keys in the first column only # setting QSortFilterProxyModel.filterKeyColumn to -1 will match values in all columns self.proxy_model.setFilterKeyColumn(-1) self.table_view.setModel(self.proxy_model) # line edit for entering (part of) the key that should be searched for self.line_edit = QtWidgets.QLineEdit() self.line_edit.textChanged.connect(self.filter_text) vlayout = QtWidgets.QVBoxLayout(self) vlayout.addWidget(self.line_edit) vlayout.addWidget(self.table_view) def filter_text(self, text): self.proxy_model.setFilterFixedString(text) def set_data(self, data): self.model = DataFrameModel(pd.DataFrame(data)) self.proxy_model.setSourceModel(self.model) if __name__ == "__main__": app = QtWidgets.QApplication([]) win = MyWidget() win.set_data({'a':['apple', 'banana', 'cherry'], 'b':[4,5,6], 'c':['green', 'yellow', 'red']}) win.show() app.exec() </code></pre> <p>Of course this is a very basic implementation but I might help you get started.</p>
python-3.x|pandas|dataframe|user-interface|pyqt5
1
9,467
62,248,037
How to get the ImageNet dataset on which PyTorch models are trained
<p><strong><em>Can anyone please tell me how to download the complete ImageNet dataset on which the PyTorch torchvision models are trained and on which their Top-1 error is reported?</em></strong></p> <p>I have downloaded Tiny-ImageNet from the ImageNet website and used the pretrained resnet-101 model, which gives only 18% Top-1 accuracy. </p>
<p>Download the ImageNet dataset from <a href="http://www.image-net.org/" rel="nofollow noreferrer">http://www.image-net.org/</a> (you have to sign in)</p> <p>Then, you should move validation images to labeled subfolders, which could be done automatically using the following shell script: <a href="https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh" rel="nofollow noreferrer">https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh</a></p>
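<p>Once the validation images are sorted into labeled subfolders, a minimal loading setup with torchvision could look like this (a sketch only; the path is a placeholder and the transforms are the usual 224x224 centre-crop pipeline used with the pretrained models):</p> <pre><code>import torch
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms

# standard ImageNet normalization statistics
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

val_dataset = datasets.ImageFolder(
    '/path/to/imagenet/val',   # placeholder: the prepared validation folder
    transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ]))

val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=64, shuffle=False)
model = models.resnet101(pretrained=True).eval()
</code></pre>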
pytorch|torchvision|imagenet
1
9,468
62,217,844
Plotly doesn't draw barchart from pivot
<p>I am trying to draw a bar chart from the CSV data I transform using pivot_table. The bar chart should have the count on the y-axis and companystatus along the x-axis. </p> <p>I am getting this instead: <a href="https://i.stack.imgur.com/IVqpr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IVqpr.png" alt="Bar chart not showing"></a></p> <p>Ultimately, I want to stack the bar by CompanySizeId.</p> <p>I have been following <a href="https://www.youtube.com/watch?v=oM8NV-y6wmE&amp;list=PLH6mU1kedUy9HTC1n9QYtVHmJRHQ97DBa&amp;index=11&amp;t=0s" rel="nofollow noreferrer">this video</a>.</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go import plotly.offline as pyo import pandas as pd countcompany = pd.read_csv( 'https://raw.githubusercontent.com/redbeardcr/Plotly/master/Data/countcompany.csv') df = pd.pivot_table(countcompany, index='CompanyStatusLabel', values='n', aggfunc=sum) print(df) data = [go.Bar( x=df.index, y=df.values, )] layout = go.Layout(title='Title') fig = go.Figure(data=data, layout=layout) pyo.plot(fig) </code></pre> <p>Code can be found <a href="https://github.com/redbeardcr/Plotly" rel="nofollow noreferrer">here</a></p> <p>Thanks for any help</p>
<p>If you flatten the array with the <code>y</code> values, i.e. if you replace <code>y=df.values</code> with <code>y=df.values.flatten()</code>, your code will work as expected.</p> <pre><code>import plotly.graph_objects as go import plotly.offline as pyo import pandas as pd countcompany = pd.read_csv('https://raw.githubusercontent.com/redbeardcr/Plotly/master/Data/countcompany.csv') df = pd.pivot_table(countcompany, index='CompanyStatusLabel', values='n', aggfunc=sum) data = [go.Bar( x=df.index, y=df.values.flatten(), )] layout = go.Layout(title='Title') fig = go.Figure(data=data, layout=layout) pyo.plot(fig) </code></pre>
python|pandas|plotly
2
9,469
62,359,392
Error when I try to sort values in descending order
<p>Im trying to sort the values in descending order but the code throws an error every time I run it. Im trying to run ANOVA and sort F Statistic values for every key value, and then sort the F statistic values in the descending oreder, everything until the sorting part seems to work fine.</p> <pre><code>def PAregression() : m_p_values = result values=m_p_values.iloc[:,0].str.split('_', expand=True) m_p_values=pd.concat([values,m_p_values], axis=1) m_p_values.columns=['Parent','Sub','Texture','Orig','Values'] m_p_values['Child']=m_p_values['Sub'].astype(str)+'_'+m_p_values['Texture'].astype(str) m_p_values=m_p_values[['Parent','Child','Values']] m_p_values.columns=['Parent','Child',Final1.iloc[0,-2]] Parent=[] Parent=pd.DataFrame(Parent) for group in m_p_values.groupby('Parent'): Child=group[1] Child=pd.DataFrame(Child) Child=Child.fillna(0) Child=Child.sort_values(by=[Child.iloc[:,2],Child.iloc[:,1]], ascending=[False,True]) Child=Child.iloc[0,:] Parent=pd.concat([Parent,Child],axis=1) print(Parent) Finals=[] Finals2=[] Finals=pd.DataFrame(Finals) Finals2=pd.DataFrame(Finals) for group in Final_check.groupby('Key'): # group is a tuple where the first value is the Key and the second is the dataframe Final1=group[1] Final1=pd.DataFrame(Final1) result=regression() result2=PAregression() Finals=pd.concat([Finals, result], axis=1) Finals2=pd.concat([Finals2, result2], axis=1) # do xyz with result print(Finals2) </code></pre> <p>Traceback</p> <pre><code>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-109-fa78de049ae7&gt; in &lt;module&gt; 8 Final1=pd.DataFrame(Final1) 9 result=regression() ---&gt; 10 result2=PAregression() 11 Finals=pd.concat([Finals, result], axis=1) 12 Finals2=pd.concat([Finals2, result2], axis=1) &lt;ipython-input-108-f3d7cc3edd81&gt; in PAregression() 14 Child=pd.DataFrame(Child) 15 Child=Child.fillna(0) ---&gt; 16 Child=Child.sort_values(by=[Child.iloc[:,2],Child.iloc[:,1]], ascending=[False,True]) 17 Child=Child.iloc[0,:] 18 Parent=pd.concat([Parent,Child],axis=1) ~\anaconda3\lib\site-packages\pandas\core\frame.py in sort_values(self, by, axis, ascending, inplace, kind, na_position, ignore_index) 4918 from pandas.core.sorting import lexsort_indexer 4919 -&gt; 4920 keys = [self._get_label_or_level_values(x, axis=axis) for x in by] 4921 indexer = lexsort_indexer(keys, orders=ascending, na_position=na_position) 4922 indexer = ensure_platform_int(indexer) ~\anaconda3\lib\site-packages\pandas\core\frame.py in &lt;listcomp&gt;(.0) 4918 from pandas.core.sorting import lexsort_indexer 4919 -&gt; 4920 keys = [self._get_label_or_level_values(x, axis=axis) for x in by] 4921 indexer = lexsort_indexer(keys, orders=ascending, na_position=na_position) 4922 indexer = ensure_platform_int(indexer) ~\anaconda3\lib\site-packages\pandas\core\generic.py in _get_label_or_level_values(self, key, axis) 1690 values = self.axes[axis].get_level_values(key)._values 1691 else: -&gt; 1692 raise KeyError(key) 1693 1694 # Check for duplicates KeyError: 0 0.00 1 0.00 2 0.00 3 1.04 4 0.55 5 0.00 6 0.00 7 0.00 8 0.00 9 0.00 10 1.00 11 1.00 12 0.00 13 0.00 14 0.00 15 0.00 16 0.00 17 0.00 18 0.00 19 0.00 20 0.00 21 2.89 22 0.00 23 0.00 24 0.00 25 0.00 26 0.00 27 0.00 28 0.00 29 1.83 Name: 11003_LP SHIRT GODS &amp; KINGS, dtype: float64 </code></pre> <p>I cant figure out why this is happening.</p>
<p>You pass a whole series as value of the <code>by</code> argument. Instead you need to pass the column <strong>name</strong>, not the column itself:</p> <pre><code>Child = Child.sort_values(by=[Child.columns[2],Child.columns[1]], ascending=[False,True]) </code></pre> <p>Alternative: <code>by=Child.columns[2:0:-1].tolist()</code>.</p>
python|pandas|numpy|regression
0
9,470
62,307,458
AttributeError - remove multiple whitespaces from multiple columns in dataframe
<p>I do this:</p> <pre><code>df[['InfoType', 'InfoLabel1', 'InfoLabel2']] = df[['InfoType', 'InfoLabel1', 'InfoLabel2']].apply(lambda x: ' '.join(x.split())) </code></pre> <p>and I receive this error:</p> <p>AttributeError: ("'Series' object has no attribute 'split'", 'occurred at index InfoType')</p> <p>the columns <code>['InfoType', 'InfoLabel1', 'InfoLabel2']</code> simply have strings in their cells.</p> <p>My goal is to remove multiple whitespaces and just put one whitespace in their place.</p> <p>How can I fix this?</p>
<p>The <code>x</code> in your apply won't be the value of the individual cells, but rather a whole Series: with the default <code>axis=0</code>, <code>apply</code> passes each column, which is why the error reports <code>occurred at index InfoType</code>.</p> <p>Luckily for you, there is a much easier way to collapse all whitespace into a single space: use regex and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer"><code>replace</code></a>:</p> <pre class="lang-py prettyprint-override"><code>text_features = [ 'InfoType', 'InfoLabel1', 'InfoLabel2', ] df[text_features] = df[text_features].replace(regex=r"\s+", value=" ") </code></pre>
pandas
3
9,471
51,509,939
Funnel restrictive analysis with groupby
<p>I am trying to create a funnel analysis with the following conditions. I want to know how many people which enter to the home a page make a search (number of people which make search / number of people which make a home search) and the number of people which make a buy from those which make a search but must be necessary from those people which make first a home (event).</p> <p>And after this i want to add a new condition that each individual must complete the funnel (or until the further step for each person) in an hour. </p> <p>For example:</p> <pre><code>df id day time event 1 20 16:00 home 1 20 16:20 search 1 20 16:25 buy 2 20 17:00 home 2 20 17:02 home 2 20 17:03 home 2 20 17:06 search 2 20 17:06 search 3 21 9:00 search 4 20 8:00 home 4 21 8:00 search 5 22 7:00 home 5 22 7:15 buy </code></pre> <p>result must be</p> <pre><code> result home 4 search 3 buy 1 </code></pre> <p>explanation of result: home: id1,id2,id4,id5 are ids which make the first step of the funnel that is home (that why the 4 in home) search: id1,id2,id4 are ids which make first home as event and also make a search in less than an hour buy: only id1 make the funnel complete in order in less than an hour</p>
<p>I've used groupby and str.contains to match your events and count them. Hopefully this will help with the first part of your question.</p> <p>First I created a sample dataframe:</p> <pre><code>df = pd.DataFrame({'id': [1, 1, 1, 2, 3, 3, 3, 3], 'event': ['home', 'home', 'search', 'home', 'home', 'search', 'search', 'buy']}) df = df.set_index('id') event id 1 home 1 home 1 search 2 home 3 home 3 search 3 search 3 buy </code></pre> <p>Then we group by the ids and sum, which concatenates each id's event strings in order:</p> <pre><code>df = df.groupby(level=0).sum() </code></pre> <p>Then we can use <code>str.contains</code> to match your cases:</p> <pre><code>ps = ['home', 'homesearch', r'home(search)+buy'] print(pd.DataFrame([df.event.str.contains(p).sum() for p in ps], index=ps)) </code></pre> <p>which creates the following (you can use whatever you like for the index; I just used the search string):</p> <pre><code> 0 home 3 homesearch 2 home(search)+buy 1 </code></pre> <p>Note that I had to use regex to match the more complicated case when multiple searches happened before a buy.</p>
python|python-3.x|pandas|numpy|pandas-groupby
0
9,472
51,132,201
Pandas IndexError: single positional indexer is out-of-bounds, with strange manner
<p>Hi I am trying to get a sql query output in a format to keep the file stats.</p> <p>I am checking if sql query has all the dates or not. If not adding a dataframe with date and zero values and then doing the concating them into one.</p> <p>df.iloc[0,1] prints the value 2018-07-02 but when checking in if statement, it returns error 'IndexError: single positional indexer is out-of-bounds':</p> <p>Please help.</p> <pre><code> import ibm_db import ibm_db_dbi import datetime import pandas as pd con = ibm_db.pconnect("DATABASE=####;HOSTNAME=####;PORT=####;PROTOCOL=TCPIP;UID=#####;PWD=#######;","","") conn = ibm_db_dbi.Connection(con) today = datetime.date.today() today1 = today - datetime.timedelta(days = 1) today2 = today1 - datetime.timedelta(days = 1) today3 = today2 - datetime.timedelta(days = 1) today4 = today3 - datetime.timedelta(days = 1) today5 = today4 - datetime.timedelta(days = 1) DT1 = today5.strftime("%d-%b-%Y") DT2 = today4.strftime("%d-%b-%Y") DT3 = today3.strftime("%d-%b-%Y") DT4 = today2.strftime("%d-%b-%Y") DT5 = today1.strftime("%d-%b-%Y") sql1 = "select substr(load_date,0,10) load_date, count (distinct file_name) file_count, count(1) record_count from ###### where load_date BETWEEN '" + today5.strftime("%Y-%m-%d") +" 00:00:00' and '" + today.strftime("%Y-%m-%d") + " 23:59:59' group by substr(load_date,0,10) ORDER BY substr(load_date,0,10) WITH UR" df1 = pd.read_sql(sql1, conn) #print df1 df1t = df1.T df = df1t #print df1t[0] #index_list = df1t[(df1t.iloc[0,0] == today2.strftime("%Y-%m-%d") )].index.tolist() #print today2.strftime("%Y-%m-%d") #print list(df1t) df1t.columns = df1t.iloc[0] df1t = df1t.drop(df1t.index[0]) df.rename(columns=df.iloc[0]).drop(df.index[0]) #print df if df1t.iloc[0,0] != today5.strftime("%Y-%m-%d"): print("Mismatch Found For " + today5.strftime("%Y-%m-%d")) dfDT5 = pd.DataFrame([today5.strftime("%Y-%m-%d"),0,0], index=['LOAD_DATE', 'FILE_COUNT', 'RECORD_COUNT']) dfDT5.columns = [today5.strftime("%Y-%m-%d")] dfc = pd.concat([dfDT5, df], axis=1, sort=False) print "-------------------" print dfc #print dfDT5 print "-------------------" #df1t.add(dfDT5, fill_value=0) print dfc.iloc[0,1] print "-------------------" if df1t.iloc[0,1] != today4.strftime("%Y-%m-%d"): print("Mismatch Found For " + today4.strftime("%Y-%m-%d")) </code></pre> <p>The Error I am getting is below with the output:</p> <pre><code> Mismatch Found For 2018-06-27 ------------------- 2018-06-27 2018-07-02 LOAD_DATE 2018-06-27 2018-07-02 FILE_COUNT 0 2 RECORD_COUNT 0 8100999 ------------------- 2018-07-02 ------------------- Traceback (most recent call last): File ".\AIR_CS5.py", line 113, in &lt;module&gt; if df1t.iloc[0,1] != today4.strftime("%Y-%m-%d"): File "C:\Python27\lib\site-packages\pandas\core\indexing.py", line 1472, in __getitem__ return self._getitem_tuple(key) File "C:\Python27\lib\site-packages\pandas\core\indexing.py", line 2013, in _getitem_tuple self._has_valid_tuple(tup) File "C:\Python27\lib\site-packages\pandas\core\indexing.py", line 222, in _has_valid_tuple self._validate_key(k, i) File "C:\Python27\lib\site-packages\pandas\core\indexing.py", line 1957, in _validate_key self._validate_integer(key, axis) File "C:\Python27\lib\site-packages\pandas\core\indexing.py", line 2009, in _validate_integer raise IndexError("single positional indexer is out-of-bounds") IndexError: single positional indexer is out-of-bounds </code></pre>
<p>Your dataframe <code>df1t</code> has only one column. Therefore, slicing <code>df1t.iloc[0,1]</code> will fail with <code>IndexError</code>.</p> <p>Make sure your dataframe has the data you require for your logic, or change your logic to accommodate your data.</p> <p>The line <code>print dfc.iloc[0,1]</code>, which outputs <code>2018-07-02</code>, does not represent <code>df1t</code>. It may help to print <code>df1t</code> beforehand to see what you're working with.</p>
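<p>For illustration, here is a minimal sketch that avoids positional indexing altogether, so a missing date cannot push you out of bounds. It assumes that, after the transpose/rename steps in your code, the columns of <code>df1t</code> are the load-date strings; adjust if your frame ends up shaped differently:</p> <pre><code>expected_dates = [d.strftime("%Y-%m-%d") for d in (today5, today4, today3, today2, today1)]
for day in expected_dates:
    if day not in df1t.columns:
        print("Mismatch Found For " + day)
        # add a zero-filled column for the missing date
        filler = pd.DataFrame({day: [0, 0]}, index=['FILE_COUNT', 'RECORD_COUNT'])
        df1t = pd.concat([df1t, filler], axis=1, sort=False)
</code></pre>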
python|python-2.7|pandas
1
9,473
51,376,941
Google cloud TPU: NotImplementedError: Non-resource Variables are not supported inside TPU computations
<p>I am trying to train my model using google cloud's TPUs. The model works fine on CPU and GPU, and I can run the TPU tutorials without any problems (so it is not a problem of connecting to TPUs). However, when I run my program on the TPU cloud I get an error. The most important line is probably the following:</p> <pre><code>NotImplementedError: Non-resource Variables are not supported inside TPU computations (operator name: training_op/update_2nd_caps/primary_to_first_fc/W/ApplyAdam/RefEnter) </code></pre> <p>And here is the full error in case there is something important there:</p> <pre><code>Traceback (most recent call last): File "TPU_playground.py", line 85, in &lt;module&gt; capser.train(input_fn=train_input_fn_tpu, steps=n_steps) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 366, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1119, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1132, in _train_model_default features, labels, model_fn_lib.ModeKeys.TRAIN, self.config) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1992, in _call_model_fn features, labels, mode, config) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1107, in _call_model_fn model_fn_results = self._model_fn(features=features, **kwargs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2223, in _model_fn _train_on_tpu_system(ctx, model_fn_wrapper, dequeue_fn)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2537, in _train_on_tpu_system device_assignment=ctx.device_assignment) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu.py", line 733, in shard name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu.py", line 394, in replicate device_assignment, name)[1] File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu.py", line 546, in split_compile_and_replicate outputs = computation(*computation_inputs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2530, in multi_tpu_train_steps_on_single_shard single_tpu_train_step, [_INITIAL_LOSS]) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/training_loop.py", line 207, in repeat cond, body_wrapper, inputs=inputs, infeed_queue=infeed_queue, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/training_loop.py", line 169, in while_loop name="") File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 3209, in while_loop result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2941, in BuildLoop pred, body, original_loop_vars, loop_vars, shape_invariants) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2878, in _BuildLoop body_result = body(*packed_vars_for_body) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/training_loop.py", line 120, in body_wrapper 
outputs = body(*(inputs + dequeue_ops)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/training_loop.py", line 203, in body_wrapper return [i + 1] + _convert_to_list(body(*args)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1166, in train_step self._call_model_fn(features, labels)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1337, in _call_model_fn estimator_spec = self._model_fn(features=features, **kwargs) File "/home/adrien_doerig/capser/capser_7_model_fn.py", line 100, in model_fn_tpu **output_decoder_deconv_params) File "/home/adrien_doerig/capser/capser_model.py", line 341, in capser_model loss_training_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step(), name="training_op") File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 409, in minimize name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_optimizer.py", line 114, in apply_gradients return self._opt.apply_gradients(summed_grads_and_vars, global_step, name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 602, in apply_gradients update_ops.append(processor.update_op(self, grad)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 113, in update_op update_op = optimizer._apply_dense(g, self._v) # pylint: disable=protected-access File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/adam.py", line 148, in _apply_dense grad, use_locking=self._use_locking).op File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/gen_training_ops.py", line 293, in apply_adam use_nesterov=use_nesterov, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3414, in create_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1782, in __init__ self._control_flow_post_processing() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1793, in _control_flow_post_processing self._control_flow_context.AddOp(self) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2430, in AddOp self._AddOpInternal(op) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2451, in _AddOpInternal real_x = self.AddValue(x) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2398, in AddValue self._outer_context.AddInnerOp(enter.op) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu.py", line 310, in AddInnerOp self._AddOpInternal(op) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu.py", line 287, in _AddOpInternal "(operator name: %s)" % op.name) NotImplementedError: Non-resource Variables are not supported inside TPU computations (operator name: training_op/update_2nd_caps/primary_to_first_fc/W/ApplyAdam/RefEnter) </code></pre> <p>It seems that the forward pass of the graph is built fine, but the backprop using AdamOptimizer is not supported by the TPUs in this case. 
I tried using more standard optimizers (GradientDescentOptimizer and MomentumOptimizer) but it doesn't help. All the tensors in the feedforward pass are in formats compatible with TPUs (i.e. tf.float32).</p> <p>Does anyone have suggestions as to what I should try?</p> <p>Thank you!</p>
<p>I have found a way to use the TPUs without using the <code>ctpu up</code> command, which solves the problem. I simply do everything exactly as I would do it to run my code on cloud GPUs: </p> <p>-- see documentation here: <a href="https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction" rel="nofollow noreferrer">https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction</a> -- a simple explanatory video here: <a href="https://www.youtube.com/watch?v=J_d4bEKUG2Q" rel="nofollow noreferrer">https://www.youtube.com/watch?v=J_d4bEKUG2Q</a></p> <p>BUT, the ONLY DIFFERENCE is that I use <code>--scale-tier 'BASIC_TPU'</code> instead of <code>--scale-tier 'STANDARD_1'</code> when I run my job. So the command to run the job is</p> <pre><code>gcloud ml-engine jobs submit training $JOB_NAME --module-name capser.capser_7_multi_gpu --package-path ./capser --job-dir=gs://capser-data/$JOB_NAME --scale-tier 'BASIC_TPU' --stream-logs --runtime-version 1.9 --region us-central1 </code></pre> <p>(I previously define the variable $JOB_NAME: <code>export JOB_NAME=&lt;input your job name&gt;</code>)</p> <p>Also, make sure you choose a region which has TPUs! us-central1 works for example.</p> <p>So maybe it is a small bug when using ctpu up, but it seems not to be a problem when using the above method. I hope that helps!</p>
tensorflow|google-cloud-platform|google-cloud-tpu
0
9,474
51,452,246
Repeat columns as rows in python?
<pre><code> Fruit January Shipments January Sales February Shipments February Sales ------------ ------------------- --------------- -------------------- ---------------- Apple 30 11 18 31 Banana 12 49 39 14 Pear 25 50 44 21 Kiwi 41 25 10 25 Strawberry 11 33 35 50 </code></pre> <p>I'm trying to achieve the following result:</p> <pre><code> Fruit Month Shipments Sales ------------ ---------- ----------- ------- Apple January 30 11 Banana January 12 49 Pear January 25 50 Kiwi January 41 25 Strawberry January 11 33 Apple February 18 31 Banana February 39 14 Pear February 44 21 Kiwi February 10 25 Strawberry February 35 50 </code></pre> <p>I've tried pandas.pivot and pandas.pivot_table and had no luck. I'm in the process of creating two dataframes (Fruit/Month/Shipments) and (Fruit/Month/Sales), and concatenating the two into one with a loop, but I was hoping for a easier way to do this.</p>
<p>One way is to convert the columns to a MultiIndex and then use <code>stack</code>. Let's suppose your dataframe is called df. First set the column Fruit as index, then define the multilevel columns:</p> <pre><code>df = df.set_index('Fruit') # manual way to create the multiindex columns #df.columns = pd.MultiIndex.from_product([['January','February'], # ['Shipments','Sales']], names=['Month',None]) # more general way to create the multiindex columns thanks to @Scott Boston df.columns = df.columns.str.split(expand=True) df.columns.names = ['Month',None] </code></pre> <p>Your data now looks like:</p> <pre><code>Month January February Shipments Sales Shipments Sales Fruit Apple 30 11 18 31 Banana 12 49 39 14 Pear 25 50 44 21 Kiwi 41 25 10 25 Strawberry 11 33 35 50 </code></pre> <p>Now you can use <code>stack</code> on level 0 and <code>reset_index</code>:</p> <pre><code>df_output = df.stack(0).reset_index() </code></pre> <p>which gives</p> <pre><code> Fruit Month Sales Shipments 0 Apple February 31 18 1 Apple January 11 30 2 Banana February 14 39 3 Banana January 49 12 4 Pear February 21 44 5 Pear January 50 25 6 Kiwi February 25 10 7 Kiwi January 25 41 8 Strawberry February 50 35 9 Strawberry January 33 11 </code></pre> <p>Finally, if you want a specific order for values in the column Month you can use <code>pd.Categorical</code>:</p> <pre><code>df_output['Month'] = pd.Categorical(df_output['Month'].tolist(), ordered=True, categories=['January','February']) </code></pre> <p>which sets January before February when sorting. Now, doing</p> <pre><code>df_output = df_output.sort_values(['Month']) </code></pre> <p>gives the result:</p> <pre><code> Fruit Month Sales Shipments 1 Apple January 11 30 3 Banana January 49 12 5 Pear January 50 25 7 Kiwi January 25 41 9 Strawberry January 33 11 0 Apple February 31 18 2 Banana February 14 39 4 Pear February 21 44 6 Kiwi February 25 10 8 Strawberry February 50 35 </code></pre> <p>I see it's not exactly the expected output (order in the Fruit column and order of columns), but both can easily be changed if needed.</p>
python|pandas|dataframe
2
9,475
51,158,467
Trying to implement experience replay in Tensorflow
<p>I am trying to implement experience replay in Tensorflow. The problem I am having is in storing outputs for the models trial and then updating the gradient simultaneously. A couple approaches I have tried are to store the resulting values from sess.run(model), however, these are not tensors and cannot be used for gradient descent as far as tensorflow is concerned. I am currently trying to use tf.assign(), however, The difficulty I am having is best shown through this example.</p> <pre><code>import tensorflow as tf import numpy as np def get_model(input): return input a = tf.Variable(0) b = get_model(a) d = tf.Variable(0) for i in range(10): assign = tf.assign(a, tf.Variable(i)) b = tf.Print(b, [assign], "print b: ") c = b d = tf.assign_add(d, c) e = d with tf.Session() as sess: tf.global_variables_initializer().run() print(sess.run(e)) </code></pre> <p>The issue I have with the above code is as follows: -It prints different values on each run which seems odd -It does not correctly update at each step in the for loop Part of why I am confused is the fact that I understand you have to run the assign operation to update the prior reference, however, I just can't figure out how to correctly do that in each step of the for loop. If there is an easier way I am open to suggestions. This example is the same as how I am currently trying to feed in an array of inputs and get a sum based on each prediction the model makes. If clarification on any of the above would help I will be more than happy to provide it. </p> <p>The following is the results from running the code above three times.</p> <pre><code>$ python test3.py 2018-07-03 13:35:08.380077: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 print b: [8] print b: [8] print b: [8] print b: [8] print b: [8] print b: [8] print b: [8] print b: [8] print b: [8] print b: [8] 80 $ python test3.py 2018-07-03 13:35:14.055827: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 print b: [7] print b: [6] print b: [6] print b: [6] print b: [6] print b: [6] print b: [6] print b: [6] print b: [6] print b: [6] 60 $ python test3.py 2018-07-03 13:35:20.120661: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 print b: [9] print b: [9] print b: [9] print b: [9] print b: [9] print b: [9] print b: [9] print b: [9] print b: [9] print b: [9] 90 </code></pre> <p>The result I am expecting is as follows:</p> <pre><code>print b: [0] print b: [1] print b: [2] print b: [3] print b: [4] print b: [5] print b: [6] print b: [7] print b: [8] print b: [9] 45 </code></pre> <p>The main reason I am confused is that sometimes it provides all nines which makes me think that it loads the last value assigned 10 times, however, sometimes it loads different values which seems to contrast this theory.</p> <p>What I would like to do is to feed in an array of input examples and compute the gradient for all examples at the same time. It needs to be concurrently because the reward used is dependent on the outputs of the model, so if the model changes the resulting rewards would also change.</p>
<p>When you call <code>tf.assign(a, tf.Variable(i))</code> this does not actually immediately assign the value of the second variable to the first one. It just create an operation in the NN to do the assignment when <code>sess.run(...)</code> is called. </p> <p>When it is called all 10 assignments try to do their assignment at the same time. One of them randomly wins and then gets passed to the 10 <code>assign_add</code> operations which in effect multiplies it 10 times.</p> <p>As to your motivating problem of implementing experience replay, most approaches I came across use <code>tf.placeholder()</code> to feed the experience buffer content into the network on training.</p>
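<p>To make the placeholder approach concrete, here is a minimal sketch (the tiny network, the toy replay buffer and the batch size are my own placeholders, not the asker's model). The whole sampled batch is fed in a single <code>sess.run</code>, so the gradient is computed over all sampled experiences at once against the current weights:</p> <pre><code>import numpy as np
import tensorflow as tf

state_ph = tf.placeholder(tf.float32, shape=[None, 4], name="states")
target_ph = tf.placeholder(tf.float32, shape=[None, 1], name="targets")

pred = tf.layers.dense(state_ph, 1)                      # stand-in for the real model
loss = tf.reduce_mean(tf.square(pred - target_ph))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# toy replay buffer of (state, target) pairs; in practice it is filled
# by interacting with the environment
replay_buffer = [(np.random.rand(4).astype(np.float32),
                  np.random.rand(1).astype(np.float32)) for _ in range(100)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    idx = np.random.randint(len(replay_buffer), size=32)   # sample a mini-batch
    states = np.stack([replay_buffer[i][0] for i in idx])
    targets = np.stack([replay_buffer[i][1] for i in idx])
    sess.run(train_op, feed_dict={state_ph: states, target_ph: targets})
</code></pre>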
tensorflow|assign|policy-gradient-descent
0
9,476
51,125,969
loading EMNIST-letters dataset
<p>I have been trying to find a way to load the EMNIST-letters dataset but without much success. I have found interesting stuff in the structure and can't wrap my head around what is happening. Here is what I mean:</p> <p>I downloaded the .mat format <a href="https://www.nist.gov/itl/iad/image-group/emnist-dataset" rel="noreferrer">in here</a></p> <p>I can load the data using </p> <pre><code>import scipy.io mat = scipy.io.loadmat('letter_data.mat') # renamed for conveniance </code></pre> <p>it is a dictionnary with the keys as follow:</p> <pre><code>dict_keys(['__header__', '__version__', '__globals__', 'dataset']) </code></pre> <p>the only key with interest is dataset, which I havent been able to gather data from. printing the shape of it give this:</p> <pre><code>&gt;&gt;&gt;print(mat['dataset'].shape) (1, 1) </code></pre> <p>I dug deeper and deeper to find a shape that looks somewhat like a real dataset and came across this:</p> <pre><code>&gt;&gt;&gt;print(mat['dataset'][0][0][0][0][0][0].shape) (124800, 784) </code></pre> <p>which is exactly what I wanted but I cant find the labels nor the test data, I tried many things but cant seem to understand the structure of this dataset.</p> <p>If someone could tell me what is going on with this I would appreciate it</p>
<p>Because of the way the dataset is structured, the array of image arrays can be accessed with <code>mat['dataset'][0][0][0][0][0][0]</code> and the array of label arrays with <code>mat['dataset'][0][0][0][0][0][1]</code>. For instance, <code>print(mat['dataset'][0][0][0][0][0][0][0])</code> will print out the pixel values of the first image, and <code>print(mat['dataset'][0][0][0][0][0][1][0])</code> will print the first image's label. </p> <p>For a less...<em>convoluted</em> dataset, I'd actually recommend using the CSV version of the EMNIST dataset on Kaggle: <a href="https://www.kaggle.com/crawford/emnist" rel="noreferrer">https://www.kaggle.com/crawford/emnist</a>, where each row is a separate image, there are 785 columns where the first column = class_label and each column after represents one pixel value (784 total for a 28 x 28 image). </p>
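<p>For convenience, here is a small loading sketch built on the indexing above. The test-split indices and the column-major reshape are assumptions about the EMNIST <code>.mat</code> layout, so double-check them against your file:</p> <pre><code>import numpy as np
import scipy.io

mat = scipy.io.loadmat('letter_data.mat')

train = mat['dataset'][0][0][0][0][0]
test = mat['dataset'][0][0][1][0][0]              # assumed: test split sits next to train

train_images, train_labels = train[0], train[1]   # (124800, 784), (124800, 1)
test_images, test_labels = test[0], test[1]

# EMNIST stores pixels column-major, so reshape to 28x28 and swap the last
# two axes to view the letters upright
train_images = train_images.reshape(-1, 28, 28).transpose(0, 2, 1)
test_images = test_images.reshape(-1, 28, 28).transpose(0, 2, 1)

print(train_images.shape, train_labels.shape)
</code></pre>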
python|python-3.x|numpy|scipy|mnist
6
9,477
51,331,721
writing pandas dataframe with timedeltas to parquet
<p>I can't seem to write a pandas dataframe containing timedeltas to a parquet file through pyarrow.</p> <p>The pyarrow documentation specifies that it can handle numpy <code>timedeltas64</code> with <code>ms</code> precision. However, when I build a dataframe from numpy's <code>timedelta64[ms]</code> the datatype of that column is <code>timedelta64[ns]</code>.</p> <p>Pyarrow then throws an error because of this.</p> <p>Is this a bug in pandas or pyarrow? Is there an easy fix for this?</p> <p>The following code:</p> <pre><code>df = pd.DataFrame({ 'timedelta': np.arange(start=0, stop=1000, step=10, dtype='timedelta64[ms]') }) print(df.timedelta.dtypes) df.to_parquet('test.parquet', engine='pyarrow', compression='gzip') </code></pre> <p>produces the following output: <code>timedelta64[ns]</code> and error:</p> <pre><code>--------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) &lt;ipython-input-41-7df28b306c1e&gt; in &lt;module&gt;() 3 step=10, 4 dtype='timedelta64[ms]') ----&gt; 5 }).to_parquet('test.parquet', engine='pyarrow', compression='gzip') ~/miniconda3/envs/myenv/lib/python3.6/site-packages/pandas/core/frame.py in to_parquet(self, fname, engine, compression, **kwargs) 1940 from pandas.io.parquet import to_parquet 1941 to_parquet(self, fname, engine, -&gt; 1942 compression=compression, **kwargs) 1943 1944 @Substitution(header='Write out the column names. If a list of strings ' ~/miniconda3/envs/myenv/lib/python3.6/site-packages/pandas/io/parquet.py in to_parquet(df, path, engine, compression, **kwargs) 255 """ 256 impl = get_engine(engine) --&gt; 257 return impl.write(df, path, compression=compression, **kwargs) 258 259 ~/miniconda3/envs/myenv/lib/python3.6/site-packages/pandas/io/parquet.py in write(self, df, path, compression, coerce_timestamps, **kwargs) 116 117 else: --&gt; 118 table = self.api.Table.from_pandas(df) 119 self.api.parquet.write_table( 120 table, path, compression=compression, table.pxi in pyarrow.lib.Table.from_pandas() ~/miniconda3/envs/myenv/lib/python3.6/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads) 369 arrays = [convert_column(c, t) 370 for c, t in zip(columns_to_convert, --&gt; 371 convert_types)] 372 else: 373 from concurrent import futures ~/miniconda3/envs/myenv/lib/python3.6/site-packages/pyarrow/pandas_compat.py in &lt;listcomp&gt;(.0) 368 if nthreads == 1: 369 arrays = [convert_column(c, t) --&gt; 370 for c, t in zip(columns_to_convert, 371 convert_types)] 372 else: ~/miniconda3/envs/myenv/lib/python3.6/site-packages/pyarrow/pandas_compat.py in convert_column(col, ty) 364 365 def convert_column(col, ty): --&gt; 366 return pa.array(col, from_pandas=True, type=ty) 367 368 if nthreads == 1: array.pxi in pyarrow.lib.array() array.pxi in pyarrow.lib._ndarray_to_array() error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Unsupported numpy type 22 </code></pre>
<p><a href="https://fastparquet.readthedocs.io/en/latest/" rel="noreferrer">fastparquet</a> supports the timedelta type.</p> <p>First <a href="https://fastparquet.readthedocs.io/en/latest/install.html" rel="noreferrer">install</a> fastparquet, eg.:</p> <pre><code>pip install fastparquet </code></pre> <p>Then you can use this:</p> <pre><code>df.to_parquet('test.parquet.gzip', engine='fastparquet', compression='gzip') </code></pre>
python|pandas|parquet|pyarrow
11
9,478
48,418,468
Tensorflow keras frozen .pb model returns bad results on android
<p>I created my model using Keras with transfer learning on IncpetionV3, and exported it to a <em>.pb</em> file using the following python code:</p> <pre><code>MODEL_NAME = 'Model_all1' def export_model(saver, model, input_node_names, output_node_name): tf.train.write_graph(K.get_session().graph_def, 'out_all2', MODEL_NAME + '_graph.pbtxt') saver.save(K.get_session(), 'out_all2/' + MODEL_NAME + '.chkp') freeze_graph.freeze_graph('out_all2/' + MODEL_NAME + '_graph.pbtxt', None, False, 'out_all2/' + MODEL_NAME + '.chkp', output_node_name, "save/restore_all", "save/Const:0", 'out_all2/final_' + MODEL_NAME + '.pb', True, "") print("graph saved!") export_model(tf.train.Saver(), model, ["input_3"], "dense_6/Softmax") </code></pre> <p>I then attempt to load my model into my Android application. For my application I have used the following codes to preprocess my image before sending it to <em>.pb</em> model. The Bitmap comes from the camera on my phone.</p> <pre><code>//scaled the bitmap down Bitmap bitmap = Bitmap.createScaledBitmap(imageBitmap, PIXEL_WIDTH, PIXEL_WIDTH, true); float pixels[] = getPixelData(bitmap); public static float[] getPixelData(Bitmap imageBitmap) { if (imageBitmap == null) { return null; } int width = imageBitmap.getWidth(); int height = imageBitmap.getHeight(); int inputSize = 299; int imageMean = 155; float imageStd = 255.0f; int[] pixels = new int[width * height]; float[] floatValues = new float[inputSize * inputSize * 3]; imageBitmap.getPixels(pixels, 0, imageBitmap.getWidth(), 0, 0, imageBitmap.getWidth(), imageBitmap.getHeight()); for (int i = 0; i &lt; pixels.length; ++i) { final int val = pixels[i]; floatValues[i * 3 + 0] = (((val &gt;&gt; 16) &amp; 0xFF) - imageMean) / imageStd; floatValues[i * 3 + 1] = (((val &gt;&gt; 8) &amp; 0xFF) - imageMean) / imageStd; floatValues[i * 3 + 2] = ((val &amp; 0xFF) - imageMean) / imageStd; } return floatValues; } </code></pre> <p>Below shows my recognise image code to link to my loaded .pb file on Android</p> <pre><code>public ArrayList&lt;Classification&gt; recognize(final float[] pixels) { //using the interface //input size tfHelper.feed(inputName, pixels, 1, inputSize, inputSize, 3); //get the possible outputs tfHelper.run(outputNames, logStats); //get the output tfHelper.fetch(outputName, outputs); // Find the best classifications. PriorityQueue&lt;Recognition&gt; pq = new PriorityQueue&lt;Recognition&gt;( 3, new Comparator&lt;Recognition&gt;() { @Override public int compare(Recognition lhs, Recognition rhs) { // Intentionally reversed to put high confidence at the head of the queue. return Float.compare(rhs.getConfidence(), lhs.getConfidence()); } }); for (int i = 0; i &lt; outputs.length; ++i) { if (outputs[i] &gt; THRESHOLD) { pq.add( new Classifier.Recognition( "" + i, labels.size() &gt; i ? 
labels.get(i) : "unknown", outputs[i], null)); } } final ArrayList&lt;Recognition&gt; recognitions = new ArrayList&lt;Recognition&gt;(); int recognitionsSize = Math.min(pq.size(), MAX_RESULTS); for (int i = 0; i &lt; recognitionsSize; ++i) { recognitions.add(pq.poll()); } Trace.endSection(); // "recognizeImage" //fit into classification list ArrayList&lt;Classification&gt; anslist = new ArrayList&lt;&gt;(); for (int i = 0; i &lt; recognitions.size(); i++) { Log.d("classification",recognitions.get(i).getTitle() +" confidence : "+ recognitions.get(i).getConfidence()); Classification ans = new Classification(); ans.update(recognitions.get(i).getConfidence(),recognitions.get(i).getTitle()); anslist.add(ans); } return anslist; } </code></pre> <p>From my testing, before I generated my frozen graph model, <em>.pb</em> file. The accuracy of my model is quite high. However, when I load it unto my Android app, the prediction results return from my model on Android are all over the place. </p> <p>I have been testing for a long time and I am unable to find my problem. Does anyone have any insights? Did I generate the wrong <em>.pb</em> file? Or did I send the image wrongly to the frozen graph? I am stumped.</p>
<p>If you are sure about your image pre-processing steps, then the problem might be the same as mine. I faced the same problem and found an answer; see <a href="https://stackoverflow.com/questions/49474467/export-keras-model-to-pb-file-and-optimize-for-inference-gives-random-guess-on/49582776#49582776">here</a> for more detail. I guess you did all the same steps as I did, except the following.</p> <p>Your problem might be in color channel reversal. <code>imageBitmap.getPixels</code> reverses color channels from BGR to RGB. You just need to convert them back to BGR if this is part of the image preprocessing.</p> <p>Use the following code to do so:</p> <pre><code> floatValues[i * 3 + 0] = (((val &gt;&gt; 16) &amp; 0xFF) - imageMean) / imageStd; floatValues[i * 3 + 1] = (((val &gt;&gt; 8) &amp; 0xFF) - imageMean) / imageStd; floatValues[i * 3 + 2] = ((val &amp; 0xFF) - imageMean) / imageStd; // reverse the color orderings to BGR. floatValues[i*3 + 2] = Color.red(val); floatValues[i*3 + 1] = Color.green(val); floatValues[i*3] = Color.blue(val); </code></pre> <p>Hope it helps.</p>
android|python|tensorflow|keras|protocol-buffers
0
9,479
48,204,584
Rename Pandas DataFrame inside function does not work
<p>I want to implement this function in order to create new columns with new names. If I apply line by line the code works perfectly. If I run the function, the line lag.columns = [rename] does not work. </p> <p>What is happening?</p> <pre><code>T = [50, 48, 47, 49, 51, 53, 54, 52] v1 = [1, 3, 2, 4, 5, 5, 6, 2] v2 = [2, 5, 4, 2, 3, 1, 6, 9] dataframe = pd.DataFrame({'T': T, 'v1': v1, 'v2': v2}) def timeseries_to_supervised(data, ts=1, dropnan=True): ''' Helper function to convert a timeseries dataframe to supervised The response must be placed as the first column Arguments: :data --&gt; dataframe to transform into supervised :timesteps --&gt; number of timesteps we want to shift Returns: :final --&gt; numpy array transformed ''' # n_vars = 1 if type(data) is list else data.shape[1] # y = data.loc[1] # Create lags for i, col in enumerate(list(data)): name = col rename = name + '(t-1)' lag = pd.DataFrame(data.iloc[:, i]).shift(1) lag.colums = [rename] data = pd.concat([data, lag], axis=1) return data reframed = timeseries_to_supervised(dataframe, 1) </code></pre> <p>So, it is returning the data frame with the new columns but the names of the columns don't include the changing part.</p> <p>Thanks in advance!</p>
<p>this works for me:</p> <pre><code>import pandas as pd T = [50, 48, 47, 49, 51, 53, 54, 52] v1 = [1, 3, 2, 4, 5, 5, 6, 2] v2 = [2, 5, 4, 2, 3, 1, 6, 9] dataframe = pd.DataFrame({'T': T, 'v1': v1, 'v2': v2}) def timeseries_to_supervised(data, ts=1, dropnan=True): # n_vars = 1 if type(data) is list else data.shape[1] # y = data.loc[1] # Create lags for i, col in enumerate(list(data)): name = col rename = name + '(t-1)' lag = pd.DataFrame(data.iloc[:, i].shift(1).values, columns=[rename], index=data.index) data = pd.concat([data, lag], axis=1) return data reframed = timeseries_to_supervised(dataframe, 1) print reframed </code></pre> <p>only changed the way you create the new lag. This gives me:</p> <pre><code> T v1 v2 T(t-1) v1(t-1) v2(t-1) 0 50 1 2 NaN NaN NaN 1 48 3 5 50.0 1.0 2.0 2 47 2 4 48.0 3.0 5.0 3 49 4 2 47.0 2.0 4.0 4 51 5 3 49.0 4.0 2.0 5 53 5 1 51.0 5.0 3.0 6 54 6 6 53.0 5.0 1.0 7 52 2 9 54.0 6.0 6.0 </code></pre>
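<p>As a side note, the loop in the question assigns <code>lag.colums</code> (missing the "n"), which silently creates a new attribute instead of renaming the column, so the concatenated columns keep their old names. If you would rather keep the original shift-based loop, a minimal corrected version might look like this (a sketch, not tested against your full helper):</p> <pre><code>for i, col in enumerate(list(data)):
    rename = col + '(t-1)'
    lag = data.iloc[:, [i]].shift(1)   # keep the slice as a one-column DataFrame
    lag.columns = [rename]             # "columns", not "colums"
    data = pd.concat([data, lag], axis=1)
</code></pre>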
python|pandas|dataframe
1
9,480
48,163,352
Error running Object Detection training in google ML engine - grpc epoll fd: 3
<p>I'm trying to train an Object Detection model with gcloud ml-engine, following the official documentation <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_cloud.md</a>, setting runtime-version=1.4, and following this issue <a href="https://github.com/tensorflow/models/issues/2739" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/2739</a> to modify the setup.py, but I get this error:</p> <blockquote> <p>worker-replica-3 2018-01-09 06:32:39.416080: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX</p> </blockquote> <p>worker-replica-3 grpc epoll fd: 3</p> <pre><code>{ insertId: "1fwigqcg5k37j2o" jsonPayload: { created: 1515479559.41658 levelname: "ERROR" lineno: 1051 message: " grpc epoll fd: 3" pathname: "ev_epoll1_linux.c" thread: 917 } </code></pre> <p>The last error message is:</p> <pre><code>The replica master 0 ran out-of-memory and exited with a non-zero status of 247. </code></pre> <p>I start the training job on Cloud ML Engine using the following command:</p> <pre><code>gcloud ml-engine jobs submit training object_detection_training_date +%s \ --job-dir=gs://mybucket/train \ --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \ --module-name object_detection.train \ --region asia-east1 \ --config object_detection/samples/cloud/cloud.yml \ -- \ --train_dir=gs://mybucket/train \ --pipeline_config_path=gs://mybucket/data/ssd_mobilenet_v1_coco.config \ --runtime-version 1.4 </code></pre>
<p>Only runtime version 1.2 is supported currently. We are working on other versions.</p>
tensorflow|grpc|google-cloud-ml
1
9,481
48,170,398
How do I change taking the mean based on the length of the list? (Python)
<p>I have a list containing lists containing arrays, something like this:</p> <pre><code>A = [ [ array([[ 1., 4.3, 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) ], [ array([[ 5., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) ] ] </code></pre> <p>So I basically have two 4x4 matrices inside of this thing. Now, the goal is to take the average of these two which I managed to do with:</p> <pre><code>np.mean([A[0][0],A[1][0]],axis=0) </code></pre> <p>I also have another matrix B which consists of three 4x4 matrices, and the average would then be something like</p> <pre><code>np.mean([B[0][0],B[1][0]],B[2][0],axis=0) </code></pre> <p>I want to generalize this so that I dont have to rewrite the np.mean part each time. So I would probably use the length of A (2) or length of B (3) to construct that, but I'm not sure how to get something like</p> <pre><code>np.mean(C[0][0],C[1][0],[...][0],[n-1][0],axis=0) </code></pre> <p>where n is len(C).</p> <p>How can I implement this? Thanks!</p>
<p>You could just use a list comprehension:</p> <pre><code>&gt;&gt;&gt; np.mean([A[i][0] for i in range(len(A))], axis=0) </code></pre> <p>Or shorter, more readable and "pythonic":</p> <pre><code>&gt;&gt;&gt; np.mean([a[0] for a in A], axis=0) array([[ 3. , 2.15, 0. , 0. ], [ 0. , 0. , 0. , 0. ], [ 0. , 0. , 0. , 0. ], [ 0. , 0. , 0. , 0. ]]) </code></pre>
python|numpy
1
9,482
48,254,436
Python Pandas Dataframe - compute difference between rows and take the minimum one
<p>I have a Pandas <code>Dataframe</code> <code>D</code> storing a large database. I also have a smaller <code>DataFrame</code> <code>C</code> with 10 rows, containing exactly the same columns as the main one, including column '<code>price</code>'. For each row <code>r</code> in the main dataframe <code>D</code> I want to find a row <code>t</code> in <code>C</code> which is closest, i.e., the difference between <code>r.price</code> and <code>t.price</code> is minimal. As an output I want to have:</p> <p>I have a function computing the difference </p> <pre><code>def difference(self, row1, row2): d = np.abs(row1["price"] - row2["price"]) return d </code></pre> <p>And I want to use apply function to apply the difference function to each row in C, for each row v in D:</p> <pre><code>for _, v in D.iterrows(): C.apply(self.difference, axis=1, args=(v,)) </code></pre> <p>But I don't know how I should find the row of <code>C</code> for which the difference was minimal. I was thinkking about the <code>min</code> build-in function from Python, but I don't know how to apply it correctly for dataframes.</p> <p>An example: Let say I have a data D</p> <pre><code> id | name | price 1. bdb | AAA | 2.34 2. 441 | BBB | 3.56 3. ae9 | CCC | 1.27 4. fb9 | DDD | 9.78 5. e6b | EEE | 5.13 6. bx4 | FFF | 6.23 7. a9a | GGG | 9.56 8. 847 | HHH | 9.23 9. e4c | III | 0.45 ... 200. eb3 | XYZ | 10.34 </code></pre> <p>And C (for simplicity with just 5 rows) as below</p> <pre><code> id | name | price 1. xyh | AA1 | 0.34 2. y5h | BB1 | 9.77 3. af6 | CC1 | 3.24 4. op9 | DD1 | 6.34 5. 23h | EE1 | 0.20 </code></pre> <p>So, the output of my function should be as follows:</p> <pre><code>Row bdb in D should be matched with row af6 Row 441 in D should be matched with row af6 Row ae9 in D should be matched with row xyh Row fb9 in D should be matched with row y5h Row e6b in D should be matched with row op9 etc. </code></pre>
<p>Use <code>np.searchsorted</code> to index into a sorted version of <code>C.price</code>.</p> <pre><code>p1 = D.price.values v = np.sort(C.price.values) p2 = v[np.searchsorted(v, p1) - 1] p2 array([ 0.34, 3.24, 0.34, 9.77, 3.24, 3.24, 6.34, 6.34, 0.34]) </code></pre> <p>Now, subtract <code>p2</code> from <code>p1</code> to get each row's distance to its matched price.</p> <pre><code>pd.Series(p1 - p2) 0 2.00 1 0.32 2 0.93 3 0.01 4 1.89 5 2.99 6 3.22 7 2.89 8 0.11 dtype: float64 </code></pre>
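<p>If you also need the matching described in the expected output (which row of <code>C</code> each row of <code>D</code> maps to), a simple, if less clever, alternative is to take the argmin over the full distance matrix. A sketch, assuming both frames fit comfortably in memory:</p> <pre><code>import numpy as np

# |D| x |C| matrix of absolute price differences
diff = np.abs(D['price'].values[:, None] - C['price'].values[None, :])
best = diff.argmin(axis=1)                  # index into C for each row of D

D = D.assign(matched_id=C['id'].values[best],
             matched_name=C['name'].values[best])
</code></pre>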
python|pandas|dataframe
1
9,483
48,875,447
Pandas: Convert Rare Entries in Column to Common Value
<p>I have a Pandas DataFrame, <code>df</code>, with a column called <code>LocationNormalized</code>. I've examined how often each value occurs with <code>value_counts()</code> and there are values that occur very rarely.</p> <p>So I want to convert these rare values to "RARE". Specifically, if a value occurs in the column less than 10 times, I want to replace it with "RARE".</p> <p>I was thinking something like:</p> <p><code>df["LocationNormalized"].apply(lambda x: "RARE" if df.value_counts()[x] &lt; 10 else x)</code></p> <p>But I know the <code>df.value_counts()[x]</code> syntax isn't correct because <code>value_counts()</code> returns a Series.</p> <p>How would I go about this?</p> <p>Thanks!</p>
<p>Use <code>groupby</code> + <code>transform</code> instead of <code>value_counts</code>, it becomes easier to create a mask to set values accordingly.</p> <pre><code>m = df.groupby("LocationNormalized").transform('count').lt(10) df.loc[m, "LocationNormalized"] = "RARE" </code></pre> <hr> <p>Demo with a <code>Series</code> -</p> <pre><code>s 0 a 1 a 2 a 3 a 4 b 5 b 6 c 7 d dtype: object m = s.groupby(s).transform('count').lt(2) # find objects occurring once s[m] = 'RARE' s 0 a 1 a 2 a 3 a 4 b 5 b 6 RARE 7 RARE dtype: object </code></pre>
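<p>If you would rather stay close to your original <code>value_counts</code> idea, an equivalent approach (assuming you only want to modify that one column) is to collect the rare labels first and use <code>isin</code>:</p> <pre><code>counts = df["LocationNormalized"].value_counts()
rare = counts[counts &lt; 10].index
df.loc[df["LocationNormalized"].isin(rare), "LocationNormalized"] = "RARE"
</code></pre>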
python|pandas
2
9,484
48,721,620
What is the purpose of thread in tensorflow queue operation?
<pre><code>images, labels = tf.train.batch([image, label], batch_size=32, num_threads=4) </code></pre> <p>I often see a queue created with <code>num_threads</code> and the threads are said to be for enqueue operation. I don't quite understand the purpose of setting multiple threads for enqueue, because the way I see it, enqueue is just putting a value at the end of the queue. </p>
<p>From <a href="https://www.tensorflow.org/versions/r1.1/programmers_guide/threading_and_queues" rel="nofollow noreferrer">Threading and Queues tutorial</a>:</p> <blockquote> <p>For example, a typical input architecture is to use a <code>RandomShuffleQueue</code> to prepare inputs for training a model:</p> <ul> <li>Multiple threads prepare training examples and push them in the queue.</li> <li>A training thread executes a training op that dequeues mini-batches from the queue.</li> </ul> <p>The TensorFlow Session object is multithreaded, so multiple threads can easily use the same session and run ops in parallel.</p> </blockquote> <p>The idea is that data pipeline is usually I/O intensive: the data may be fetched from the disk or even streamed from the network. It is quite possible for a GPU <em>not to be</em> a bottleneck in computation, simply because the data isn't fed fast enough to saturate it.</p> <p>Reading in multiple threads solves this problem: while one thread is waiting for I/O task, the other thread already has some data for the GPU. When this data is processed, the first thread hopefully received and prepared its batch, and so on. That's why <code>tf.train.batch</code>, <code>tf.train.shuffle_batch</code> and other functions, support multi-thread data processing. Setting <code>num_threads = 1</code> makes the batching deterministic, but if there are multiple threads, the order of data in the queue is not guaranteed.</p>
python|multithreading|tensorflow|queue|python-multithreading
1
9,485
48,511,285
Python pandas conditional column creation with aggregates
<p>I am trying to create a column that returns True if the P/E is in the lower quartile of all P/E in my dataframe. Below is what i have done so far.</p> <p>First i defined a function:</p> <pre><code>def pe_cond(df): if df['P/E'] &lt;= df['P/E'].quantile(0.1): return 1 else: return 0 </code></pre> <p>Secondly, i tried to apply it to my dataframe</p> <pre><code>df['pe_cond'] = df.apply(pe_cond, axis=1) </code></pre> <p>However i get the below error:</p> <pre><code>AttributeError: ("'float' object has no attribute 'quantile'", u'occurred at index 0') </code></pre> <p>Any help will be appreciated.</p> <p>Thanks</p>
<p>I can't do this in a repl quite yet but this may work for you</p> <pre><code>mask = df['P/E'] &lt;= df['P/E'].quantile(0.1) df.loc[mask, 'pe_cond'] = 1 df.loc[~mask, 'pe_cond'] = 0 </code></pre> <p>This is using <code>.loc</code> to find the subset of the <code>DataFrame</code> that fulfills the logic defined in the <code>mask</code> variable. The <code>~</code> (unary) operator negates that. So essentially, where the condition is true give it a 1. Where the condition is false give it a 0.</p> <p>-- Edit -- You should totally use @djk47463 comment. It's very concise.</p> <pre><code>df['pe_cond'] = (df['P/E'] &lt;= df['P/E'].quantile(0.1)).astype(int) </code></pre>
python|pandas|dataframe
0
9,486
70,875,761
Python : Split string every three words in dataframe
<p>I've been searching around for a while now, but I can't seem to find the answer to this small problem.</p> <p>I have this code that is supposed to split the string after every three words:</p> <pre><code>import pandas as pd import numpy as np df1 = { 'State':['Arizona AZ asdf hello abc','Georgia GG asdfg hello def','Newyork NY asdfg hello ghi','Indiana IN asdfg hello jkl','Florida FL ASDFG hello mno']} df1 = pd.DataFrame(df1,columns=['State']) df1 def splitTextToTriplet(df): text = df['State'].str.split() n = 3 grouped_words = [' '.join(str(text[i:i+n]) for i in range(0,len(text),n))] return grouped_words splitTextToTriplet(df1) </code></pre> <p>Currently the output is as such:</p> <pre><code>['0 [Arizona, AZ, asdf, hello, abc]\n1 [Georgia, GG, asdfg, hello, def]\nName: State, dtype: object 2 [Newyork, NY, asdfg, hello, ghi]\n3 [Indiana, IN, asdfg, hello, jkl]\nName: State, dtype: object 4 [Florida, FL, ASDFG, hello, mno]\nName: State, dtype: object'] </code></pre> <p>But I am actually expecting this output in 5 rows, one column on dataframe:</p> <pre><code>['Arizona AZ asdf', 'hello abc'] ['Georgia GG asdfg', 'hello def'] ['Newyork NY asdfg', 'hello ghi'] ['Indiana IN asdfg', 'hello jkl'] ['Florida FL ASDFG', 'hello mno'] </code></pre> <p>how can I change the regex so it produces the expected output?</p>
<p>For efficiency, you can use a regex and <code>str.extractall</code> + <code>groupby</code>/<code>agg</code>:</p> <pre><code>(df1['State'] .str.extractall(r'((?:\w+\b\s*){1,3})')[0] .groupby(level=0).agg(list) ) </code></pre> <p>output:</p> <pre><code>0 [Arizona AZ asdf , hello abc] 1 [Georgia GG asdfg , hello def] 2 [Newyork NY asdfg , hello ghi] 3 [Indiana IN asdfg , hello jkl] 4 [Florida FL ASDFG , hello mno] </code></pre> <p>regex:</p> <pre><code>( # start capturing (?:\w+\b\s*) # words {1,3} # the maximum, up to three ) # end capturing </code></pre>
python-3.x|pandas|tokenize
1
9,487
51,997,022
Fast way to apply custom function to every pixel in image
<p>I'm looking for a faster way to apply a custom function to an image which I use to remove a blue background. I have a function that calculates the distance each pixel is from approximately the blue colour in the background. The original code with a loop looked like this:</p> <pre><code>def dist_to_blue(pix): rdist = 76 - pix[0] gdist = 150 - pix[1] bdist = 240 - pix[2] return rdist*rdist + gdist*gdist + bdist*bdist imgage.shape #outputs (576, 720, 3) for i, row in enumerate(image): for j, pix in enumerate(row): if dist_to_blue(pix) &lt; 12000: image[i,j] = [255,255,255] </code></pre> <p>However this code takes around 8 seconds to run for this relatively small image. I've been trying to use numpy's "vectorize" function but that applies the function to every value individually. However I want to do it to every pixel aka not expand the z/rgb dimension</p> <p>the only improvements I've come up with is replacing the for loops with the following:</p> <p><code>m = np.apply_along_axis(lambda pix: (255,255,255) if dist_to_blue(pix) &lt; 12000 else pix, 2, image)</code></p> <p>Which runs in about 7 seconds which is still painfully slow. Is there something I'm missing that could speed this up to a reasonable execution time</p>
<p>This should be quite a bit faster ... ;)</p> <pre><code>import numpy as np blue = np.full_like(image, [76,150,240]) mask = np.sum((image-blue)**2,axis=-1) &lt; 12000 image[mask] = [255,255,255] </code></pre> <p>Here you're generating the ideal blue image, squaring the difference of the images pixel by pixel, then summing over the last axis (the rgb vectors) before generating a mask and using it to modify values in the original image. </p>
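<p>If you want to convince yourself the vectorised version matches the loop, here is a quick self-contained check on a small random float image (purely a test harness; for a real <code>uint8</code> image you would want to cast to a wider dtype before squaring to avoid overflow):</p> <pre><code>import numpy as np

def dist_to_blue(pix):
    rdist = 76 - pix[0]
    gdist = 150 - pix[1]
    bdist = 240 - pix[2]
    return rdist * rdist + gdist * gdist + bdist * bdist

rng = np.random.RandomState(0)
image = rng.randint(0, 256, size=(60, 80, 3)).astype(np.float64)
reference = image.copy()

# original per-pixel loop
for i, row in enumerate(reference):
    for j, pix in enumerate(row):
        if dist_to_blue(pix) &lt; 12000:
            reference[i, j] = [255, 255, 255]

# vectorised version
blue = np.full_like(image, [76, 150, 240])
mask = np.sum((image - blue) ** 2, axis=-1) &lt; 12000
image[mask] = [255, 255, 255]

print(np.array_equal(image, reference))   # expected: True
</code></pre>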
python|image|numpy
2
9,488
51,818,047
How to create an object of type numpy.int8 in a C numpy extension?
<p>I have created my own class in a C Python extension. I want it to behave like a numpy array.</p> <p>Let's say my class is a myarray. I can index it using the slice notation. This means that my implementation of the <code>mp_subscript</code> function of the <code>mapping_methods</code> looks correct. I can further index it, and it returns the element I want, of the correct type.</p> <pre><code># Create a new myarray object of 10 elements a = myarray( 10 ) # Get a slice of this myarray, as a numpy.ndarray object # returns new one created with PyArray_NewFromDescr (no copy) b = a[2:4] # What is the class of an indexed item in the slice? print( b[0].__class__ ) &lt;class 'numpy.int8'&gt; </code></pre> <p>I also have implemented the direct index into my own type with <code>a[0]</code>. For this, I've tried calling <code>PyArray_GETITEM</code>. But the object I have in return is of type <code>int</code>.</p> <pre><code># Create a new myarray object of 10 elements a = myarray( 10 ) # What is the class of an indexed item in the slice? # returns the result of calling PyArray_GETITEM. print( a[0].__class__ ) &lt;class 'int'&gt; </code></pre> <p>How can I, within my C extension, create this object of type <code>numpy.int8</code>?</p>
<p>Found out I have to call <code>PyArray_Scalar</code> for this.</p>
python|numpy|cpython|python-c-api
0
9,489
51,586,114
How can i extract day of week from timestamp in pandas
<p>I have a timestamp column in a dataframe as below, and I want to create another column called <code>day of week</code> from that. How can do it?</p> <p>Input:</p> <pre><code>Pickup date/time 07/05/2018 09:28:00 14/05/2018 17:00:00 15/05/2018 17:00:00 15/05/2018 17:00:00 23/06/2018 17:00:00 29/06/2018 17:00:00 </code></pre> <p>Expected Output:</p> <pre><code>Pickup date/time Day of Week 07/05/2018 09:28:00 Monday 14/05/2018 17:00:00 Monday 15/05/2018 17:00:00 Tuesday 15/05/2018 17:00:00 Tuesday 23/06/2018 17:00:00 Saturday 29/06/2018 17:00:00 Friday </code></pre>
<p>You can use weekday_name</p> <pre><code>df['date/time'] = pd.to_datetime(df['date/time'], format = '%d/%m/%Y %H:%M:%S') df['Day of Week'] = df['date/time'].dt.weekday_name </code></pre> <p>You get</p> <pre><code> date/time Day of Week 0 2018-05-07 09:28:00 Monday 1 2018-05-14 17:00:00 Monday 2 2018-05-15 17:00:00 Tuesday 3 2018-05-15 17:00:00 Tuesday 4 2018-06-23 17:00:00 Saturday 5 2018-06-29 17:00:00 Friday </code></pre> <p>Edit:</p> <p>For the newer versions of Pandas, use day_name(),</p> <pre><code>df['Day of Week'] = df['date/time'].dt.day_name() </code></pre>
python|python-3.x|pandas
11
9,490
51,812,095
Is pandas a local-only library
<p>I recently started coding, but took a brief stint. I started a new job and I’m under some confidential restrictions. I need to make sure python and pandas are secure before I do this—I’ll also be talking with IT on Monday </p> <p>I was wondering if pandas in python was a local library, or does the data get sent to or from elsewhere? If I write something in pandas—will the data be stored somewhere under pandas?</p> <p>The best example of what I’m doing is best found on a medium article about stripping data from tables that don’t have csv Exports. </p> <p><a href="https://medium.com/@ageitgey/quick-tip-the-easiest-way-to-grab-data-out-of-a-web-page-in-python-7153cecfca58" rel="nofollow noreferrer">https://medium.com/@ageitgey/quick-tip-the-easiest-way-to-grab-data-out-of-a-web-page-in-python-7153cecfca58</a></p>
<p>Creating a <code>DataFrame</code> out of a dict, doing vectorized operations on its rows, printing out slices of it, etc. are all completely local. I'm not sure why this matters. Is your IT department going to say, "Well, this looks fishy—but some random guy on the internet says it's safe, so forget our policies, we'll allow it"? But, for what it's worth, you have this random guy on the internet saying it's safe.</p> <p>However, Pandas <em>can</em> be used to make network requests. Some of <a href="https://pandas.pydata.org/pandas-docs/stable/io.html" rel="nofollow noreferrer">the IO functions</a> can take a URL instead of a filename or file object. Some of them can also use another library that does so—e.g., if you have <code>lxml</code> installed, <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow noreferrer"><code>read_html</code></a>, will pass the filename to <code>lxml</code> to open, and if that filename is an HTTP URL, <code>lxml</code> will go fetch it.</p> <p>This is rarely a concern, but if you want to get paranoid, you could imagine ways in which it might be.</p> <p>For example, let's say your program is parsing user-supplied CSV files and doing some data processing on them. That's safe; there's no network access at all.</p> <p>Now you add a way for the user to specify CSV files by URL, and you pass them into <code>read_csv</code> and go fetch them. Still safe; there is network access, but it's transparent to the end user and obviously needed for the user's task; if this weren't appropriate, your company wouldn't have asked you to add this feature.</p> <p>Now you add a way for CSV files to reference other CSV files: if column 1 is <code>@path/to/other/file</code>, you recursively read and parse <code>path/to/other/file</code> and embed it in place of the current row. Now, what happens if I can give one of your users a CSV file where, buried at line 69105, there's <code>@http://example.com/evilendpoint?track=me</code> (an endpoint which does something evil, but then returns something that looks like a perfectly valid thing to insert at line 69105 of that CSV)? Now you may be facilitating my hacking of your employees, without even realizing it.</p> <p>Of course this is a more limited version of exactly the same functionality that's in every web browser with HTML pages. But maybe your IT department has gotten paranoid and clamped down security on browsers and written an application-level sniffer to detect suspicious followup requests from HTML, and haven't thought to do the same thing for references in CSV files.</p> <p>I don't think that's a problem a sane IT department should worry about. If your company doesn't trust you to think about these issues, they shouldn't hire you and assign you to write software that involves scraping the web. But then not every IT department is sane about what they do and don't get paranoid about. ("Sure, we can forward this under-1024 port to your laptop for you… but you'd better not install a newer version of Firefox than 16.0…")</p>
python|pandas
1
9,491
42,127,156
tensorflow TF-slim inceptionv3 training loss curve is strange
<p>TF-slim inceptionv3 train from scratch</p> <p>I use slim/train_image_classifier.py to train an inception_v3 model on my own dataset:</p> <pre><code>python train_image_classifier.py --train_dir=${TRAIN_DIR} --dataset_name=mydataset --dataset_split_name=train --dataset_dir=${DATASET_DIR} --model_name=inception_v3 --num_clones=2
</code></pre> <p>The loss curve looks strange in TensorBoard: it is a linearly decreasing straight line with a small bump. <a href="https://i.stack.imgur.com/NWnm4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NWnm4.png" alt="enter image description here"></a></p> <p>The following are the last outputs; the loss decreases by 0.0001 every 20 or 30 steps:</p>

<pre><code>INFO:tensorflow:global step 34590: loss = 0.5359 (1.17 sec/step)
INFO:tensorflow:global step 34600: loss = 0.5358 (1.15 sec/step)
INFO:tensorflow:global step 34590: loss = 0.5359 (1.17 sec/step)
INFO:tensorflow:global step 34600: loss = 0.5358 (1.15 sec/step)
INFO:tensorflow:global step 34610: loss = 0.5358 (1.17 sec/step)
INFO:tensorflow:global step 34620: loss = 0.5357 (1.12 sec/step)
INFO:tensorflow:global step 34630: loss = 0.5357 (1.16 sec/step)
INFO:tensorflow:global step 34640: loss = 0.5356 (1.16 sec/step)
INFO:tensorflow:global step 34650: loss = 0.5356 (1.16 sec/step)
INFO:tensorflow:global step 34660: loss = 0.5355 (1.15 sec/step)
INFO:tensorflow:global step 34670: loss = 0.5355 (1.15 sec/step)
INFO:tensorflow:global step 34680: loss = 0.5355 (1.18 sec/step)
INFO:tensorflow:global step 34690: loss = 0.5354 (1.17 sec/step)
INFO:tensorflow:global step 34700: loss = 0.5354 (1.15 sec/step)
INFO:tensorflow:global step 34710: loss = 0.5353 (1.15 sec/step)
INFO:tensorflow:global step 34720: loss = 0.5353 (2.25 sec/step)
INFO:tensorflow:global step 34730: loss = 0.5353 (2.22 sec/step)
INFO:tensorflow:global step 34740: loss = 0.5352 (1.16 sec/step)
INFO:tensorflow:global step 34750: loss = 0.5352 (1.16 sec/step)
INFO:tensorflow:global step 34760: loss = 0.5351 (1.18 sec/step)
INFO:tensorflow:global step 34770: loss = 0.5351 (1.15 sec/step)
INFO:tensorflow:global step 34780: loss = 0.5350 (1.17 sec/step)
INFO:tensorflow:global step 34790: loss = 0.5350 (1.15 sec/step)
INFO:tensorflow:global step 34800: loss = 0.5349 (1.12 sec/step)
INFO:tensorflow:global step 34810: loss = 0.5349 (1.12 sec/step)
INFO:tensorflow:global step 34820: loss = 0.5349 (1.16 sec/step)
INFO:tensorflow:global step 34830: loss = 0.5348 (1.16 sec/step)
INFO:tensorflow:global step 34840: loss = 0.5348 (1.18 sec/step)
INFO:tensorflow:global step 34850: loss = 0.5347 (1.12 sec/step)
INFO:tensorflow:global step 34860: loss = 0.5347 (1.12 sec/step)
INFO:tensorflow:global step 34870: loss = 0.5347 (1.18 sec/step)
INFO:tensorflow:global step 34880: loss = 0.5346 (1.13 sec/step)
INFO:tensorflow:global step 34890: loss = 0.5346 (1.18 sec/step)
INFO:tensorflow:global step 34900: loss = 0.5345 (1.16 sec/step)
INFO:tensorflow:global step 34910: loss = 0.5345 (1.15 sec/step)
INFO:tensorflow:global step 34920: loss = 0.5344 (1.17 sec/step)
INFO:tensorflow:global step 34930: loss = 0.5344 (1.14 sec/step)
INFO:tensorflow:global step 34940: loss = 0.5344 (1.15 sec/step)
INFO:tensorflow:global step 34950: loss = 0.5343 (1.14 sec/step)
INFO:tensorflow:global step 34960: loss = 0.5343 (1.17 sec/step)
</code></pre>

<p>mydataset.py is the same as flowers.py except:</p>

<pre><code>SPLITS_TO_SIZES = {'train': 18000000, 'validation': 400000}
_NUM_CLASSES = 4
</code></pre>

<p>Is this normal? Thanks for any help.</p>
<p>You are plotting the graph of the total loss after n steps (which is probably number_of_steps if you are using the tf.contrib.slim train method), while the loss that's logged to the console is reported every 10 steps, so the two views can look different. Hope this helps!</p>
tensorflow|tf-slim
0
9,492
64,577,273
Pandas getting unique index after concatenating list of dataframes
<p><strong>Problem:</strong> I have the following pandas dataframe, which was initially concatenated from a list of dataframes (in which each dataframe <code>df_*</code> carries the information for one <code>check_*</code> column). The dataframe below is only an example; the real one carries more (stage, unit) combinations, and I don't know a priori how many.</p>

<p><strong>Aim</strong>: stage and unit shall be the index carrying the values for <code>check_*</code>. So essentially, for each (stage, unit) combination I want one unique row carrying the information for all <code>check_*</code> columns.</p>

<p>Any idea how to do that? Many thanks!</p>

<pre><code># Current Situation
 stage unit  check_1  check_2  check_3  check_4
     A  min      NaN      NaN      120      NaN
     B  min      NaN      NaN      210      NaN
     A  sec      NaN      NaN        3      NaN
     B  sec      NaN      NaN        3      NaN
     B  min      NaN      NaN      NaN      0.8
     A  min      NaN      NaN      NaN      0.3

# Target
 stage unit  check_1  check_2  check_3  check_4
     A  min      NaN      NaN      120      0.3
     B  min      NaN      NaN      210      0.8
     A  sec      NaN      NaN        3      NaN
     B  sec      NaN      NaN        3      NaN
</code></pre>
<p>Try</p>

<pre><code>df = df.groupby(['stage', 'unit'], as_index=False).first()
</code></pre>

<p><code>GroupBy.first()</code> returns the first non-null value of each column within every (stage, unit) group, so the partially filled duplicate rows collapse into a single row per combination.</p>
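<p>As a quick check, here is a minimal sketch built from the values in the example above (the all-NaN <code>check_1</code>/<code>check_2</code> columns are omitted for brevity, and the row order of the result may differ):</p>

<pre><code>import numpy as np
import pandas as pd

# rebuild the example frame from the question
df = pd.DataFrame({
    'stage':   ['A', 'B', 'A', 'B', 'B', 'A'],
    'unit':    ['min', 'min', 'sec', 'sec', 'min', 'min'],
    'check_3': [120, 210, 3, 3, np.nan, np.nan],
    'check_4': [np.nan, np.nan, np.nan, np.nan, 0.8, 0.3],
})

out = df.groupby(['stage', 'unit'], as_index=False).first()
print(out)
#   stage unit  check_3  check_4
# 0     A  min    120.0      0.3
# 1     A  sec      3.0      NaN
# 2     B  min    210.0      0.8
# 3     B  sec      3.0      NaN
</code></pre>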
python|pandas
0
9,493
47,846,320
Python - How to save spectrogram output in a text file?
<p>My code calculates the spectrogram for <code>x</code>, <code>y</code> and <code>z</code>. </p> <p>I calculate the magnitude of the three axis first, then calculate the spectrogram.</p> <p>I need to take the spectrogram output and save it as one column in an array to use it as an input for a deep learning model.</p> <p>This is my code:</p> <pre><code>dataset = np.loadtxt("trainingdatasetMAG.txt", delimiter=",") X = dataset[:,0:6] Y = dataset[:,6] fake_size = 1415684 time = np.arange(fake_size)/1000 # 1kHz base_freq = 2 * np.pi * 100 magnitude = dataset[:,5] plt.title('xyz_magnitude') ls=(plt.specgram(magnitude, Fs=1000)) </code></pre> <p>This is my dataset, whose headers are <code>(patientno, time/Msecond, x-axis, y-axis, z-axis, xyz_magnitude, label)</code></p> <pre><code>1,15,70,39,-970,947321,0 1,31,70,39,-970,947321,0 1,46,60,49,-960,927601,0 1,62,60,49,-960,927601,0 1,78,50,39,-960,925621,0 1,93,50,39,-960,925621,0 </code></pre> <p>and this is the output of the spectrogram that needs to be more efficient</p> <pre><code>(array([[ 1.52494154e+11, 1.52811638e+11, 1.52565040e+11, ..., 1.47778892e+11, 1.46781213e+11, 1.46678951e+11], [ 7.69589176e+10, 7.73638333e+10, 7.76935891e+10, ..., 7.48498747e+10, 7.40088248e+10, 7.40343108e+10], [ 6.32683585e+04, 1.58170271e+06, 6.11287648e+06, ..., 5.06690834e+05, 3.31360693e+05, 7.04757400e+05], ..., [ 7.79589127e+05, 8.09843763e+04, 2.52907491e+05, ..., 2.48520301e+05, 2.11734697e+05, 2.50917758e+05], [ 9.41199946e+05, 4.98371406e+05, 1.29328139e+06, ..., 2.56729806e+05, 3.45253951e+05, 3.51932417e+05], [ 4.36846676e+05, 1.24123764e+06, 9.20694394e+05, ..., 8.35807658e+04, 8.36986905e+05, 3.57807267e+04]]), array([ 0. , 3.90625, 7.8125 , 11.71875, 15.625 , 19.53125, 23.4375 , 27.34375, 31.25 , 35.15625, 39.0625 , 42.96875, 46.875 , 50.78125, 54.6875 , 58.59375, 62.5 , 66.40625, 70.3125 , 74.21875, 78.125 , 82.03125, 85.9375 , 89.84375, 93.75 , 97.65625, 101.5625 , 105.46875, 109.375 , 113.28125, 117.1875 , 121.09375, 125. , 128.90625, 132.8125 , 136.71875, 140.625 , 144.53125, 148.4375 , 152.34375, 156.25 , 160.15625, 164.0625 , 167.96875, 171.875 , 175.78125, 179.6875 , 183.59375, 187.5 , 191.40625, 195.3125 , 199.21875, 203.125 , 207.03125, 210.9375 , 214.84375, 218.75 , 222.65625, 226.5625 , 230.46875, 234.375 , 238.28125, 242.1875 , 246.09375, 250. , 253.90625, 257.8125 , 261.71875, 265.625 , 269.53125, 273.4375 , 277.34375, 281.25 , 285.15625, 289.0625 , 292.96875, 296.875 , 300.78125, 304.6875 , 308.59375, 312.5 , 316.40625, 320.3125 , 324.21875, 328.125 , 332.03125, 335.9375 , 339.84375, 343.75 , 347.65625, 351.5625 , 355.46875, 359.375 , 363.28125, 367.1875 , 371.09375, 375. , 378.90625, 382.8125 , 386.71875, 390.625 , 394.53125, 398.4375 , 402.34375, 406.25 , 410.15625, 414.0625 , 417.96875, 421.875 , 425.78125, 429.6875 , 433.59375, 437.5 , 441.40625, 445.3125 , 449.21875, 453.125 , 457.03125, 460.9375 , 464.84375, 468.75 , 472.65625, 476.5625 , 480.46875, 484.375 , 488.28125, 492.1875 , 496.09375, 500. ]), array([1.28000000e-01, 2.56000000e-01, 3.84000000e-01, ..., 1.41529600e+03, 1.41542400e+03, 1.41555200e+03]), &lt;matplotlib.image.AxesImage object at 0x000002161A78F898&gt;) </code></pre>
<p>The Matplotlib function <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.specgram.html?highlight=matplotlib%20pyplot%20specgram#matplotlib.pyplot.specgram" rel="nofollow noreferrer">specgram</a> has 4 outputs: </p>

<blockquote>
  <p><strong>spectrum</strong> : 2-D array</p>
  
  <p>Columns are the periodograms of successive segments.</p>
  
  <p><strong>freqs</strong> : 1-D array</p>
  
  <p>The frequencies corresponding to the rows in spectrum.</p>
  
  <p><strong>t</strong> : 1-D array</p>
  
  <p>The times corresponding to midpoints of segments (i.e., the columns in spectrum).</p>
  
  <p><strong>im</strong> : instance of class AxesImage</p>
</blockquote>

<p>From your code: </p>

<pre><code>ls = plt.specgram(magnitude, Fs=1000)
</code></pre>

<p>So <code>ls[0]</code> contains the spectrum that you want to export to txt. You can write it to a file with this piece of code (note the file must be opened in text mode <code>'w'</code>, not binary mode, since we write strings): </p>

<pre><code>with open('spectrogram.txt', 'w') as ffile:
    for row in ls[0]:          # one row per frequency bin
        for value in row:      # one value per time segment
            ffile.write(str(value) + ' \t')
        # one row written
        ffile.write(' \n')
</code></pre>

<p>Note that, by default, <code>ls[0]</code> contains the power spectral density of <code>NFFT=256</code>-sample segments with 128 samples of overlap, so you'll have <code>NFFT/2 + 1 = 129</code> rows. Each column contains the <code>PSD</code> at one time T, and each row contains the time series of one frequency bin. To get the spectrum at instant <code>T</code>, slice it: </p>

<pre><code>T_idx = 10
ls[0][:, T_idx]
</code></pre>
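<p>A simpler alternative, assuming you just want the raw matrix on disk and that <code>numpy</code> is already imported as <code>np</code> (as in the question), is to let <code>np.savetxt</code> do the formatting; <code>'spectrogram.txt'</code> is only an example filename:</p>

<pre><code>spectrum, freqs, times, im = plt.specgram(magnitude, Fs=1000)

# one row per frequency bin, one tab-separated column per time segment
np.savetxt('spectrogram.txt', spectrum, delimiter='\t')

# later, reload it as a 2-D array to feed the deep learning model
loaded = np.loadtxt('spectrogram.txt', delimiter='\t')
</code></pre>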
python-3.x|numpy|signal-processing|spectrum|spectrogram
1
9,494
49,109,053
FailedPreconditionError: Attempting to use uninitialized value
<p>I am trying to put together a neural network, and I want to save the weights with which I initialise the network for later use. </p> <p>This is the code that creates the network:</p> <pre><code>def neural_network_model(data, layer_sizes): num_layers = len(layer_sizes) - 1 # hidden and output layers layers = [] # hidden and output layers # initialise the weights weights = {} for i in range(num_layers): w_name = 'W' + str(i+1) b_name = 'b' + str(i+1) w = tf.get_variable(w_name, [layer_sizes[i], layer_sizes[i+1]], initializer = tf.contrib.layers.xavier_initializer(), dtype=tf.float32) b = tf.get_variable(b_name, [layer_sizes[i+1]], initializer = tf.zeros_initializer(), dtype=tf.float32) layers.append({'weights': w, 'biases': b}) weights[w_name] = w # save the weights dictionary saver = tf.train.Saver(var_list=weights) with tf.Session() as sess: sess.run(init) save_path = saver.save(sess, path + 'weights.ckpt') # path is set elsewhere </code></pre> <p>The errors that I get is as follows:</p> <pre><code>FailedPreconditionError: Attempting to use uninitialized value b2 </code></pre> <p>(I also got <code>W1</code> and <code>W3</code> as uninitialised values).</p> <p>How is this possible? Should the <code>sess.run(init)</code> (I specify <code>init</code> as <code>tf.global_variables_initializer()</code> earlier in the code) not take care of all variable initialisation?</p>
<p>You need to create the <code>variable initializer</code> at the end, when you are done creating the graph.<br> When you call <code>tf.global_variables_initializer()</code>, it collects all the variables that have been created up until that point. So, if you define this before creating your layers (and variables), those new variables won't be added to this initializer.</p>
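<p>A minimal sketch of the fix, assuming the TF 1.x API used in the question (the variable shapes and checkpoint path are just placeholders):</p>

<pre><code>import tensorflow as tf

# build the graph first: every tf.get_variable call happens here
w = tf.get_variable('W1', [784, 64], initializer=tf.contrib.layers.xavier_initializer())
b = tf.get_variable('b1', [64], initializer=tf.zeros_initializer())

# only now create the initializer, so it sees W1 and b1
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)              # W1 and b1 are now initialized
    saver = tf.train.Saver()
    saver.save(sess, './weights.ckpt')
</code></pre>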
python-3.x|serialization|tensorflow
1
9,495
58,996,519
Populate Pandas Dataframe with normal distribution
<p>I would like to populate a dataframe with numbers that follow a normal distribution. Currently I'm populating it randomly, but the distribution is flat. Column a has mean and sd of 5 and 1 respectively, and column b has mean and sd of 15 and 1.</p> <pre><code>import pandas as pd import numpy as np n = 10 df = pd.DataFrame(dict( a=np.random.randint(1,10,size=n), b=np.random.randint(100,110,size=n) )) </code></pre>
<p>Try this. <code>randint</code> does not draw from a normal distribution; <code>np.random.normal</code> does. It is also unclear where the 100 and 110 in the <code>low</code> and <code>high</code> arguments for <code>b</code> came from, given the stated mean of 15.</p>

<pre><code>n = 10
a_bar = 5; a_sd = 1
b_bar = 15; b_sd = 1

df = pd.DataFrame(dict(a=np.random.normal(a_bar, a_sd, size=n),
                       b=np.random.normal(b_bar, b_sd, size=n)),
                  columns=['a', 'b'])
</code></pre>
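<p>On recent NumPy versions you may prefer the <code>Generator</code> API, which also makes the draw reproducible; this is just an equivalent sketch of the same idea:</p>

<pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)   # seed is optional, shown for reproducibility
n = 10

df = pd.DataFrame({
    'a': rng.normal(loc=5, scale=1, size=n),
    'b': rng.normal(loc=15, scale=1, size=n),
})
</code></pre>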
python|pandas|numpy
4
9,496
58,966,588
why np.concatenate() can be used for class 'PIL.Image.Image'?
<p>I'm reading someone's transform.py, and there is a piece of code that really confuses me. Here it is:</p>

<pre><code>np.concatenate(img_group, axis=2)
</code></pre>

<p>However, the <code>img_group</code> here is a sequence of <code>&lt;class 'PIL.Image.Image'&gt;</code>, and I've looked through <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer">the docs of np.concatenate()</a>, which tell me that:</p>

<blockquote>
  <p>numpy.concatenate((a1, a2, ...), axis=0, out=None) Join a sequence of arrays along an existing axis. Parameters: a1, a2, … : <strong>sequence of array_like</strong> The arrays must have the same shape, except in the dimension corresponding to axis (the first, by default).</p>
</blockquote>

<p>I have tried some samples like:</p>

<pre><code>x = Image.open('flows/v_ApplyEyeMakeup_g08_c01/frame/img_00001.jpg').convert('RGB')
y = Image.open('flows/v_ApplyEyeMakeup_g08_c01/frame/img_00002.jpg').convert('RGB')
z = np.concatenate([x,y], axis=2)
</code></pre>

<p>Then it worked! <code>z</code> is a <code>numpy.ndarray</code> of shape (240, 320, 6). However, <code>&lt;class 'PIL.Image.Image'&gt;</code> does not seem to be the kind of array that the <code>np.concatenate()</code> parameters require, so I wonder how this works?</p>
<p>Numpy operates on <a href="https://docs.scipy.org/doc/numpy/reference/arrays.interface.html" rel="nofollow noreferrer">array-like</a> objects. Here's a <a href="https://stackoverflow.com/questions/40378427/numpy-formal-definition-of-array-like-objects">link</a> to a question regarding what constitutes array-likeness. One way for a Python object to be array-like is to expose the <code>__array_interface__</code> attribute. That is precisely what <a href="https://github.com/python-pillow/Pillow/blob/3.1.x/PIL/Image.py#L618-L628" rel="nofollow noreferrer"><code>PIL.Image</code></a> does.</p>
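<p>A small sketch of what NumPy sees, assuming any RGB image file on disk (the filename is just a placeholder):</p>

<pre><code>import numpy as np
from PIL import Image

img = Image.open('some_frame.jpg').convert('RGB')

# PIL exposes the array interface, so NumPy can treat the image as array-like
print(hasattr(img, '__array_interface__'))   # True

arr = np.asarray(img)
print(arr.shape)                             # (height, width, 3)

# concatenating two such images along axis=2 stacks their channels: (H, W, 6)
both = np.concatenate([img, img], axis=2)
print(both.shape)
</code></pre>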
python|numpy|python-imaging-library
1
9,497
58,687,266
The best way to expand the dimension of a numpy array so as to meet different requirements
<p>I have a mask <code>m</code> of shape <code>[bz]</code> and a dictionary <code>d</code> that contains many different ndarrays, e.g. <code>d['s'].shape=(bz, 84, 84, 4)</code>, <code>d['r'].shape=(bz, 1)</code>, etc. All have the same first dimension of size <code>bz</code>, but the rest may vary. I want to expand the dimensions of <code>m</code> properly so that I can multiply <code>m</code> by the values in <code>d</code>. For example, I may want <code>m</code> to be of shape <code>[bz, 1, 1, 1]</code> in order to multiply <code>d['s']</code>, and <code>[bz, 1]</code> for <code>d['r']</code>. I can think of a nasty while-loop solution as follows.</p>

<pre class="lang-py prettyprint-override"><code>for k, v in d.items():
    if m.shape != v.shape:
        reshaped_m = m.copy()
        while len(reshaped_m.shape) &lt; len(v.shape):
            reshaped_m = reshaped_m[..., None]
        d[k] = v * reshaped_m
</code></pre>

<p>I'm looking for a better solution, thanks.</p>
<p>You can simply do -</p>

<pre><code>for k, v in d.items():
    d[k] = (v.T*m).T
</code></pre>

<p>So, we are basically pushing the first axis to the end, so that <code>v</code> becomes broadcastable against <code>m</code>. Then, we multiply with <code>m</code>. Finally, we push the last axis back to the front.</p>

<p>There could be different ways to permute those axes, but <code>transposing</code> is the simplest one.</p>

<p>If the mask has more dimensions than <code>1</code>, we would need to transpose <code>m</code> too. Hence, for that case, it would be <code>d[k] = (v.T*m.T).T</code>.</p>

<hr>

<p>Another way would be to reshape <code>m</code> and then multiply with <code>v</code> -</p>

<pre><code>for k, v in d.items():
    d[k] = v*m.reshape(m.shape + (1,)*(v.ndim-m.ndim))
</code></pre>

<hr>

<p>And another with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a> -</p>

<pre><code>for k, v in d.items():
    d[k] = np.einsum('i...,i-&gt;i...',v,m)
</code></pre>
python|numpy
1
9,498
58,696,183
generate a column delta based on a constraint
<p>I have a dataframe: </p>

<pre><code>date_1        Count
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
01/09/2019    21
</code></pre>

<p>I want to generate a column <code>delta</code> such that 60% of the count values (rounded off) have the value 2 and the rest have 4. </p>

<p>For example, date_1 = 01/09/2019 has 21 entries, so 0.6 * 21 = 12.6 ~ 13 values get delta = 2 and the remaining get delta = 4.</p>

<p><strong>Expected output:</strong> </p>

<pre><code>date_1        Count    delta
01/09/2019    21        2
01/09/2019    21        2
01/09/2019    21        2
01/09/2019    21        2
01/09/2019    21        2
01/09/2019    21        2
.             .         .
.             .         .
01/09/2019    21        4
01/09/2019    21        4
01/09/2019    21        4
</code></pre>

<p>Can anyone help in achieving this? </p>
<p>Use <a href="https://pandas-docs.github.io/pandas-docs-travis/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>Groupby.transform</code></a> to turn the <code>Count</code> column into a Series of 2s and 4s, using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> to fill roughly 60% of each group's length with 2 and the remaining 40% with 4; the final <code>.sample(frac=1)</code> shuffles the values within the column:</p>

<pre><code>df['delta']=df.groupby('date_1')['Count'].transform(lambda x: np.where ( (x.reset_index(drop=True).index+1 &lt; round(len(x)*0.6)),2,4) ).sample(frac=1).reset_index(drop=True)

print(df)
</code></pre>

<hr>

<pre><code>print(df)
        date_1  Count  delta
0   01/09/2019     21      2
1   01/09/2019     21      4
2   01/09/2019     21      2
3   01/09/2019     21      2
4   01/09/2019     21      2
5   01/09/2019     21      2
6   01/09/2019     21      4
7   01/09/2019     21      2
8   01/09/2019     21      4
9   01/09/2019     21      2
10  01/09/2019     21      2
11  01/09/2019     21      4
12  01/09/2019     21      4
13  01/09/2019     21      2
14  01/09/2019     21      4
15  01/09/2019     21      4
16  01/09/2019     21      4
17  01/09/2019     21      2
18  01/09/2019     21      4
19  01/09/2019     21      2
20  01/09/2019     21      2
</code></pre>
python|pandas|numpy
1
9,499
70,199,004
Check if condition is True with .loc and if is set 1
<pre><code>varv = -1

df:
                      Open      VarC  Position
Date
2019-11-25 10:00:00  30.38 -0.325098       0.0
2019-11-25 16:00:00  30.59 -1.538955       0.0
2019-11-26 10:00:00  30.56 -2.244309       0.0
2019-11-26 16:00:00  30.53 -3.584000       0.0
2019-11-27 10:00:00  30.20 -0.640000       0.0

df.loc[df['VarC'] &lt;= varv, 'Position'] = 1
</code></pre>

<p>It doesn't work; <code>df['Position']</code> is still 0.0. Any suggestions on how to set 'Position' to 1? I already tried using an <code>if</code> statement and it still doesn't work.</p>
<p>Try this:</p> <pre><code>df['Position'] = df['Position'].where(df['VarC'] &gt; varv, 1) </code></pre> <p>Output:</p> <pre><code>print(df) Open VarC Position Date 2019-11-25 10:00:00 30.38 -0.325098 0.0 2019-11-25 16:00:00 30.59 -1.538955 1.0 2019-11-26 10:00:00 30.56 -2.244309 1.0 2019-11-26 16:00:00 30.53 -3.584000 1.0 2019-11-27 10:00:00 30.20 -0.640000 0.0 </code></pre>
python|pandas
0