column      dtype          min     max
Unnamed: 0  int64          0       378k
id          int64          49.9k   73.8M
title       stringlengths  15      150
question    stringlengths  37      64.2k
answer      stringlengths  37      44.1k
tags        stringlengths  5       106
score       int64          -10     5.87k
7,800
53,638,825
Introduction to Data Science in Python problem
<p>Can anyone tell me what this part (town = thisLine[:thisLine.index('(')-1]) does exactly?</p> <pre><code>def get_list_of_university_towns(): '''Returns a DataFrame of towns and the states they are in from the university_towns.txt list. The format of the DataFrame should be: DataFrame( [ ["Michigan", "Ann Arbor"], ["Michigan", "Yipsilanti"] ], columns=["State", "RegionName"] ) The following cleaning needs to be done: 1. For "State", removing characters from "[" to the end. 2. For "RegionName", when applicable, removing every character from " (" to the end. 3. Depending on how you read the data, you may need to remove newline character '\n'. ''' data = [] state = None state_towns = [] with open('university_towns.txt') as file: for line in file: thisLine = line[:-1] if thisLine[-6:] == '[edit]': state = thisLine[:-6] continue if '(' in line: town = thisLine[:thisLine.index('(')-1] state_towns.append([state,town]) else: town = thisLine state_towns.append([state,town]) data.append(thisLine) df = pd.DataFrame(state_towns,columns = ['State','RegionName']) return df </code></pre> <p>get_list_of_university_towns()</p>
<p>It performs this step:</p> <pre><code>2. For "RegionName", when applicable, removing every character from " (" to the end. </code></pre> <p><code>thisLine.index('(')</code> finds the position of the first <code>'('</code>; subtracting 1 backs up over the space before it, and the slice <code>[: ...]</code> keeps everything up to, but not including, that position.</p>
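<p>A minimal sketch of what that slice does on one line from the file:</p> <pre><code>thisLine = 'Ann Arbor (University of Michigan)'
thisLine.index('(')   # 10, the position of the first '('
thisLine[:10 - 1]     # 'Ann Arbor'; the -1 also drops the space before '('
</code></pre>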
python|pandas|numpy|data-science
0
7,801
53,436,301
How can I calculate the 3 most frequent genres in Python pandas?
<p>I have a dataframe with one column and I need to return 3 most frequent genres.</p> <blockquote> <p><strong>INPUT</strong></p> </blockquote> <pre><code> genres 0 Drama 1 Animation|Children's|Musical 2 Musical|Romance 3 Drama 4 Animation|Children's|Comedy 5 Action|Adventure|Comedy|Romance 6 Action|Adventure|Drama 7 Comedy|Drama 8 Animation|Children's|Musical 9 Adventure|Children's|Drama|Musical 10 Animation|Children's|Musical 11 Musical 12 Drama 13 Comedy </code></pre> <p>Drama 6 Musical 6 Children's 5 Animation 4 Comedy 4 Adventure 3 Action 2</p> <blockquote> <p><strong>OUTPUT</strong> - A dataframe with:</p> </blockquote> <pre><code> genres 0 Drama 1 Musical 2 Children's </code></pre>
<p>You need <code>split</code> first, then <code>stack</code>, then <code>value_counts</code>:</p> <pre><code>df.genres.str.split('|',expand=True).stack().value_counts().head(3) Drama 6 Musical 6 Children's 5 dtype: int64 </code></pre>
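<p>If the output needs to be a DataFrame as requested, a minimal wrapper around the same result:</p> <pre><code>top3 = df.genres.str.split('|', expand=True).stack().value_counts().head(3)
out = pd.DataFrame({'genres': top3.index})
</code></pre>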
python|pandas|data-science
1
7,802
71,928,043
Subtract one column from another in pandas - with a condition
<p>I have this code that will subtract, for each person (AAC or AAB), timepoint 1 from time point 2 data.</p> <p>i.e this is the original data:</p> <pre><code> pep_seq AAC-T01 AAC-T02 AAB-T01 AAB-T02 0 0 1 2.0 NaN 4.0 1 4 3 2.0 6.0 NaN 2 4 3 NaN 6.0 NaN 3 4 5 2.0 6.0 NaN </code></pre> <p>This is the code:</p> <pre><code>import sys import numpy as np from sklearn.metrics import auc import pandas as pd from numpy import trapz #read in file df = pd.DataFrame([[0,1,2,np.nan,4],[4,3,2,6,np.nan],[4,3,np.nan,6,np.nan],[4,5,2,6,np.nan]],columns=['pep_seq','AAC-T01','AAC-T02','AAB-T01','AAB-T02']) #standardise the data by taking T0 away from each sample df2 = df.drop(['pep_seq'],axis=1) df2 = df2.apply(lambda x: x.sub(df2[x.name[:4]+&quot;T01&quot;])) df2.insert(0,'pep_seq',df['pep_seq']) print(df) print(df2) </code></pre> <p>This is the output (i.e. df2)</p> <pre><code> pep_seq AAC-T01 AAC-T02 AAB-T01 AAB-T02 0 0 0 1.0 NaN NaN 1 4 0 -1.0 0.0 NaN 2 4 0 NaN 0.0 NaN 3 4 0 -3.0 0.0 NaN </code></pre> <p>...but what I actually wanted was to subtract the T01 columns from all the others EXCEPT for when the T01 value is NaN in which case keep the original value, so the desired output was (see the 4.0 in AAB-T02):</p> <pre><code> pep_seq AAC-T01 AAC-T02 AAB-T01 AAB-T02 0 0 0 1.0 NaN 4.0 1 4 0 -1.0 0 NaN 2 4 0 NaN 0 NaN 3 4 0 -3.0 0 NaN </code></pre> <p>Could someone show me where I went wrong? Note that in real life, there are ~100 timepoints per person, not just two.</p>
<p>I hope that I understand you correctly, but <code>numpy.where()</code> should do it for you.</p> <p>Have a look here: <a href="https://stackoverflow.com/questions/64031732/subtract-two-date-columns-given-condition-in-another-column">condition-based subtraction</a></p>
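<p>A hedged sketch of that conditional idea applied to the question's own frame, here via pandas' <code>Series.where</code>: keep the original value wherever the person's T01 baseline is NaN, otherwise subtract it:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 2, np.nan, 4],
                   [4, 3, 2, 6, np.nan],
                   [4, 3, np.nan, 6, np.nan],
                   [4, 5, 2, 6, np.nan]],
                  columns=['pep_seq', 'AAC-T01', 'AAC-T02', 'AAB-T01', 'AAB-T02'])

df2 = df.drop(['pep_seq'], axis=1)
# keep x where the baseline is NaN, otherwise subtract the baseline
df2 = df2.apply(lambda x: x.where(df2[x.name[:4] + 'T01'].isna(),
                                  x - df2[x.name[:4] + 'T01']))
df2.insert(0, 'pep_seq', df['pep_seq'])
</code></pre>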
python|pandas
0
7,803
72,107,118
Converting simple returns to monthly log returns
<p>I have a pandas DataFrame with simple daily returns. I need to convert it to monthly log returns and add a column to the current DataFrame. I have to use <code>np.log</code> to compute the monthly return, but I can only compute the daily log return. Below is my code.</p> <pre><code>df[‘return_monthly’]= np.log(data([‘Simple Daily Returns’]+1) </code></pre> <p>The code only produces daily log returns. Are there any particular methods I should be using in the above code to get the monthly return? <a href="https://i.stack.imgur.com/acEiP.png" rel="nofollow noreferrer">Please see my input for the pandas DataFrame; the third column in the Excel sheet is the expected output.</a></p>
<p>The question is a little confusing, but it seems like you want to group the rows by month. This can be done with pandas.resample if you have a datetime index, pandas.groupby, or pandas.pivot.</p> <p>Here is a simple implementation, let us know if this isn't what you're looking for. Furthermore, your values are less than 1, so the log is negative. You can adjust as needed. I aggregated the months with sum, but there are many other aggregation functions such as mean(), median(), size() and <a href="https://cmdlinetips.com/2019/10/pandas-groupby-13-functions-to-aggregate/" rel="nofollow noreferrer">many more</a>. See the link for a list of aggregating functions.</p> <pre><code>#create dataframe with 1220 values that match your dataset df = pd.DataFrame({ 'Date':pd.date_range(start = '1/1/2019' , end ='5/4/2022' , freq='1D'), 'Return':np.random.uniform(low=1e-6, high=1.0, size=1220) #avoid log 0 which returns NAN }).set_index('Date') #set the index to the date so we can use resample df['Log_return'] = np.log(df['Return']) #daily log return df = df.resample('M').sum() #monthly aggregation with sum, as described above, producing the output below Return Log_return Date 2019-01-31 14.604863 -33.950987 2019-02-28 13.118111 -32.025086 2019-03-31 14.541947 -32.962914 2019-04-30 14.212689 -33.684422 2019-05-31 14.154918 -33.347081 2019-06-30 10.710209 -43.474120 2019-07-31 12.358001 -43.051723 2019-08-31 17.932673 -30.328784 ... </code></pre>
python-3.x|pandas|numpy|return|finance
0
7,804
71,884,092
selection on multiple conditions doesn't work correctly in pandas dataframe
<p>I have a dataframe <code>df</code> which I create by loading a csv file and appending another df to (I know that appending is not done in place, so I assign the result of this operation to <code>df</code>). The dataframe has columns: stimulus (contains strings), syllable (contains numbers 1 or 2), response (contains strings). If I do</p> <pre><code>df[df['syllable']==1] </code></pre> <p>or</p> <pre><code>df[df['syllable']==2] </code></pre> <p>it selects the rows correctly. But if I do:</p> <pre><code>df[(df['stimulus'].str.contains(&quot;bearded_guy&quot;))&amp;(df['syllable']==1)] </code></pre> <p>it selects rows where <code>syllable</code> is equal to 2 instead of 1. <a href="https://i.stack.imgur.com/7BeMF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7BeMF.png" alt="enter image description here" /></a></p>
<p>Try this:</p> <pre><code>df.loc[(df['stimulus'].str.contains(&quot;bearded_guy&quot;))&amp;(df['syllable']==1), :] </code></pre>
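<p>If the combined mask still returns the wrong rows, one cheap thing to rule out (given the frame was built with <code>append</code>) is a duplicated index, which can make the two boolean masks line up incorrectly; reset it before masking:</p> <pre><code>df = df.reset_index(drop=True)
df[(df['stimulus'].str.contains('bearded_guy')) &amp; (df['syllable'] == 1)]
</code></pre>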
python|pandas
0
7,805
22,413,561
using numpy to import multiple text files
<p>I've been importing multiple txt files and using them to create plots. The code is the same as before but it isn't seeming to work this time. I've taken it back to basics and I have no idea what's going wrong.</p> <pre><code>import numpy close('all') data = [] pixels = [] for i in range(0,92): data.append(genfromtxt('filename_'+str(i+1)+'.txt', usecols=4)) pixels.append(genfromtxt('filename_'+str(i+1)+'.txt', usecols=5)) </code></pre> <p>I only need the columns stated in the loop as the txt files have multiple values. This returns:</p> <pre><code> raise ValueError(errmsg) ValueError: Some errors were detected ! Line #1 (got 2 columns instead of 1) Line #3 (got 1 columns instead of 1) Line #5 (got 3 columns instead of 1) Line #6 (got 3 columns instead of 1) Line #8 (got 4 columns instead of 1) Line #10 (got 2 columns instead of 1) Line #11 (got 2 columns instead of 1) Line #12 (got 1 columns instead of 1) Line #35 (got 1 columns instead of 1) </code></pre> <p>Any help, would be fantastic!</p>
<p>The problem is how you are passing the <code>usecols</code> parameter, it must be a sequence (<code>list</code> or <code>tuple</code>, for example), with <code>0</code> being the first column. Perhaps you wanted this:</p> <pre><code>for i in range(0,92): data.append(genfromtxt('filename_'+str(i+1)+'.txt', usecols=(0,1,2,3))) pixels.append(genfromtxt('filename_'+str(i+1)+'.txt', usecols=(0,1,2,3,4))) </code></pre>
python|file-io|numpy|genfromtxt
0
7,806
18,021,056
Reading GPS RINEX data with Pandas
<p>I am reading a [RINEX-3.02] (page 60) Observation Data file to do some timed based satellite ID filtering, and will eventually reconstruct it latter. This would give me more control over the selection of satellites I allow to contribute to a position solution over time with RTK post processing.</p> <p>Specifically for this portion though, I'm just using:</p> <ul> <li>[python-3.3]</li> <li>[pandas]</li> <li>[numpy]</li> </ul> <p>Here is an sample with the first three time stamped observations.<br> Note: It is not necessary for me to parse data from the header.</p> <pre><code> 3.02 OBSERVATION DATA M: Mixed RINEX VERSION / TYPE CONVBIN 2.4.2 20130731 223656 UTC PGM / RUN BY / DATE log: /home/ruffin/Documents/Data/in/FlagStaff_center/FlagStaCOMMENT format: u-blox COMMENT MARKER NAME MARKER NUMBER MARKER TYPE OBSERVER / AGENCY REC # / TYPE / VERS ANT # / TYPE 808673.9171 -4086658.5368 4115497.9775 APPROX POSITION XYZ 0.0000 0.0000 0.0000 ANTENNA: DELTA H/E/N G 4 C1C L1C D1C S1C SYS / # / OBS TYPES R 4 C1C L1C D1C S1C SYS / # / OBS TYPES S 4 C1C L1C D1C S1C SYS / # / OBS TYPES 2013 7 28 0 27 28.8000000 GPS TIME OF FIRST OBS 2013 7 28 0 43 43.4010000 GPS TIME OF LAST OBS G SYS / PHASE SHIFT R SYS / PHASE SHIFT S SYS / PHASE SHIFT 0 GLONASS SLOT / FRQ # C1C 0.000 C1P 0.000 C2C 0.000 C2P 0.000 GLONASS COD/PHS/BIS END OF HEADER &gt; 2013 7 28 0 27 28.8000000 0 10 G10 20230413.601 76808.847 -1340.996 44.000 G 4 20838211.591 171263.904 -2966.336 41.000 G12 21468211.719 105537.443 -1832.417 43.000 S38 38213212.070 69599.2942 -1212.899 45.000 G 5 22123924.655 -106102.481 1822.942 46.000 G25 23134484.916 -38928.221 656.698 40.000 G17 23229864.981 232399.788 -4048.368 41.000 G13 23968536.158 6424.1143 -123.907 28.000 G23 24779333.279 103307.5703 -1805.165 29.000 S35 39723655.125 69125.5242 -1209.970 44.000 &gt; 2013 7 28 0 27 29.0000000 0 10 G10 20230464.937 77077.031 -1341.254 44.000 G 2 20684692.905 35114.399 -598.536 44.000 G12 21468280.880 105903.885 -1832.592 43.000 S38 38213258.255 69841.8772 -1212.593 45.000 G 5 22123855.354 -106467.087 1823.084 46.000 G25 23134460.075 -39059.618 657.331 40.000 G17 23230018.654 233209.408 -4048.572 41.000 G13 23968535.044 6449.0633 -123.060 28.000 G23 24779402.809 103668.5933 -1804.973 29.000 S35 39723700.845 69367.3942 -1208.954 44.000 &gt; 2013 7 28 0 27 29.2000000 0 9 G10 20230515.955 77345.295 -1341.436 44.000 G12 21468350.548 106270.372 -1832.637 43.000 S38 38213304.199 70084.4922 -1212.840 45.000 G 5 22123786.091 -106831.642 1822.784 46.000 G25 23134435.278 -39190.987 657.344 40.000 G17 23230172.406 234019.092 -4048.079 41.000 G13 23968534.775 6473.9923 -125.373 28.000 G23 24779471.004 104029.6643 -1805.983 29.000 S35 39723747.025 69609.2902 -1209.259 44.000 </code></pre> <p>If I do have to make a custom parser,<br> The other tricky thing is satellite IDs come and go over time,<br> (as shown with satellites "G 2" and "G 4")<br> (plus they have spaces in the IDs too)<br> So as I read them into a DataFrame,<br> I need to make new column labels (or row labels for MultiIndex?) 
as I find them.</p> <p>I was initially thinking this could be considered a MultiIndex problem,<br> but I'm not so sure pandas read_csv could do everything<br> <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#reading-dataframe-objects-with-multiindex" rel="noreferrer">Jump to Reading DataFrame objects with MultiIndex</a></p> <p>Any suggestions?</p> <p>Relevant sources if interested:</p> <ul> <li>[python-3.3]: <a href="http://www.python.org/download/releases/3.3.0/" rel="noreferrer">http://www.python.org/download/releases/3.3.0/</a></li> <li>[numpy]: <a href="http://www.numpy.org/" rel="noreferrer">http://www.numpy.org/</a></li> <li>[pandas]: <a href="http://pandas.pydata.org/" rel="noreferrer">http://pandas.pydata.org/</a></li> <li>[RINEX-3.02]: <a href="http://igscb.jpl.nasa.gov/igscb/data/format/rinex302.pdf" rel="noreferrer">http://igscb.jpl.nasa.gov/igscb/data/format/rinex302.pdf</a></li> <li>[ephem]: <a href="https://pypi.python.org/pypi/ephem/" rel="noreferrer">https://pypi.python.org/pypi/ephem/</a></li> <li>[RTKLIB]: <a href="http://www.rtklib.com/" rel="noreferrer">http://www.rtklib.com/</a></li> <li>[NOAA CORS]: <a href="http://geodesy.noaa.gov/CORS/" rel="noreferrer">http://geodesy.noaa.gov/CORS/</a></li> </ul>
<p>Here is what I ended up doing</p> <pre><code>df, header = readObs(indir, filename) df = df.set_index(['%_GPST', 'satID']) </code></pre> <p>Note that I just set the new MultiIndex at the end after building it. <img src="https://i.stack.imgur.com/yC78L.png" alt="enter image description here"></p> <pre><code>def readObs(dir, file): df = pd.DataFrame() #Grab header header = '' with open(dir + file) as handler: for i, line in enumerate(handler): header += line if 'END OF HEADER' in line: break #Grab Data with open(dir + file) as handler: for i, line in enumerate(handler): #Check for a Timestamp label if '&gt; ' in line: #Grab Timestamp links = line.split() index = datetime.strptime(' '.join(links[1:7]), '%Y %m %d %H %M %S.%f0') #Identify number of satellites satNum = int(links[8]) #For every sat for j in range(satNum): #just save the data as a string for now satData = handler.readline() #Fix the names satdId = satData.replace("G ", "G0").split()[0] #Make a dummy dataframe dff = pd.DataFrame([[index,satdId,satData]], columns=['%_GPST','satID','satData']) #Tack it on the end df = df.append(dff) return df, header </code></pre> <p>Using a dummy data-frame just doesn't seem the most elegant though.</p>
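<p>A hedged alternative that avoids growing a DataFrame row by row: collect plain rows in a list and build the frame once at the end, which is usually much faster than repeated <code>append</code>:</p> <pre><code>rows = []
# ... inside the per-satellite loop, instead of building a dummy DataFrame:
rows.append([index, satdId, satData])
# ... and once the file has been read:
df = pd.DataFrame(rows, columns=['%_GPST', 'satID', 'satData'])
</code></pre>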
python|python-3.x|gps|pandas
3
7,807
55,543,004
How to resample the dataframe without changing it's core?
<p>How to resample the dataframe without changing it's core?</p> <pre><code>import pandas as pd import sys if sys.version_info[0] &lt; 3: from StringIO import StringIO else: from io import StringIO csvdata = StringIO("""date,LASTA,LASTB,LASTC 1999-03-15,2.5597,8.20145,16.900 1999-03-16,2.6349,8.03439,17.150 1999-03-17,2.6375,8.12431,17.125 1999-03-18,2.6375,8.27908,16.950 1999-03-19,2.6634,8.54914,17.325 1999-03-22,2.6721,8.32183,17.195 1999-03-23,2.6998,8.21218,16.725 1999-03-24,2.6773,8.15284,16.350 1999-03-25,2.6807,8.08378,17.030 1999-03-26,2.7802,8.14038,16.725 1999-03-29,2.8139,8.07832,16.800 1999-03-30,2.8105,8.10124,16.775 1999-03-31,2.7724,7.73057,16.955 1999-04-01,2.8321,7.63714,17.500 1999-04-06,2.8537,7.63703,17.750""") df = pd.read_csv(csvdata, sep=",", index_col="date", parse_dates=True, infer_datetime_format=True) </code></pre> <p>This is my code...</p> <pre><code># Join 3 stock DataFrame together full_df = pd.concat([AAAA, BBBB, CCCC], axis=1).dropna() # Resample the full DataFrame to monthly timeframe monthly_df = full_df.resample('BMS').first() # Calculate daily returns of stocks returns_daily = full_df.pct_change() # Calculate monthly returns of the stocks returns_monthly = monthly_df.pct_change().dropna() print(returns_monthly.tail()) </code></pre> <p>this is the error that I get...</p> <pre><code>TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index' </code></pre> <p>I've already tried the <code>pd.Dataframe</code> then <code>DataTimeIndex</code> even <code>pd.to_datetime</code>, but somehow I only make things worse</p>
<p><code>resample</code> needs a <code>DatetimeIndex</code>, and the concatenated frame's index has come through as a plain <code>Index</code>, hence the error. Convert it first:</p> <pre><code>full_df.index = pd.to_datetime(full_df.index) </code></pre>
python|pandas|dataframe|indexing|datetimeindex
1
7,808
9,651,218
TypeError: unorderable types: float() < function()
<p>I have a code comprised of two functions one that reads data and the other that counts it. Both functions run properly when run separately, but I get the error when I try to have the counter call the file reader. I would appreciate it if some one could tell me where I am goofing up. Thanks in advance</p> <p>Error</p> <pre><code>File &quot;C:\Documents and Settings\Read_File.py&quot;, line 50, in counter Sx = ((25. &lt; Xa) &amp; (Xa &lt; 100.)).sum() #count what is in x range TypeError: unorderable types: float() &lt; function() </code></pre> <p>Code</p> <pre><code>for line in f: #Loop Strips empty lines as well as replaces tabs with space if line !='': line = line.strip() line = line.replace('\t',' ') columns = line.split() for line in range(N): #Loop number of lines to be counted x = columns[8] # assigns variable to columns y = columns[18] z = columns[19] #vx = columns[] #vy = columns[] #vz = columns[] X.append(x) Y.append(y) #appends data in list Z.append(z) Xa = numpy.array(X, dtype=float) #Converts lists to NumPy arrays Ya = numpy.array(Y, dtype=float) Za = numpy.array(Z, dtype=float) return(Xa,Ya,Za) #returns arrays/print statement to test def counter(Xa): Sx = ((25. &lt; Xa) &amp; (Xa &lt; 100.)).sum() #count what is in x range Sy = ((25. &lt; Ya) &amp; (Ya &lt; 100.)).sum() #count what is in y range Sz = ((25. &lt; Za) &amp; (Za &lt; 100.)).sum() #count what is in z range return(print(Sx,Sy,Sz)) read_file(F) #function calls counter(read_file) </code></pre> <p><strong>EDit</strong></p> <p>With the help of Lev and James the first problem was fixed now I get this error</p> <pre><code> Sx = ((2. &lt; Xa) &amp; (Xa &lt; 10.)).sum() #count what is in x range TypeError: unorderable types: float() &lt; tuple() </code></pre> <p>Is this because of the commas in the arrays? And if so how can I get around this?</p>
<p>You are trying to call <code>counter()</code> on the <em>function <code>read_file()</code></em>, not on the results of calling <code>read_file(F)</code>. You don't include source for <code>read_file()</code>, but you almost certainly want to do:</p> <pre><code>counter(read_file(F)) </code></pre> <p>instead of the last two lines. (By the way, the <code>return(print(...))</code> in <code>counter()</code> probably doesn't need the <code>return</code> wrapping round the rest of it.)</p>
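<p>For the follow-up <code>TypeError: unorderable types: float() &lt; tuple()</code> in the edit: <code>read_file(F)</code> returns the tuple <code>(Xa, Ya, Za)</code>, so passing it straight to <code>counter</code> makes <code>Xa</code> a tuple. A minimal sketch of the unpacking, assuming <code>counter</code> is changed to take all three arrays:</p> <pre><code>def counter(Xa, Ya, Za):
    Sx = ((25. &lt; Xa) &amp; (Xa &lt; 100.)).sum()
    Sy = ((25. &lt; Ya) &amp; (Ya &lt; 100.)).sum()
    Sz = ((25. &lt; Za) &amp; (Za &lt; 100.)).sum()
    print(Sx, Sy, Sz)

Xa, Ya, Za = read_file(F)
counter(Xa, Ya, Za)
</code></pre>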
python|function|numpy
2
7,809
56,694,234
Can anyone tell me how to use tensorflow iou function?
<p>I want to use the tensorflow mean_iou function and wrote sample code as follows, but it gives me the error message:</p> <p>Attempting to use uninitialized value mean_iou_5/total_confusion_matrix [[{{node mean_iou_5/total_confusion_matrix/read}}]]</p> <p>Can anyone tell me how to use the mean_iou function of tensorflow?</p> <p>Thanks.</p> <pre><code>labels1 = tf.convert_to_tensor([[3,1,2],[2,3,1]],tf.int32) pred = tf.convert_to_tensor ([[3,1,2],[2,3,1]],tf.int32) test,conf_mat = tf.metrics.mean_iou(labels = labels1, predictions = pred, num_classes = 3) init_op = tf.global_variables_initializer() with tf.Session() as sess: init_op.run() print('test',sess.run(test)) </code></pre>
<p>Taken from the StackOverflow answer here: <a href="https://stackoverflow.com/a/49326455/9820369">https://stackoverflow.com/a/49326455/9820369</a>. The key point is that <code>tf.metrics.mean_iou</code> creates its running confusion matrix as a <em>local</em> variable, so you must run <code>tf.local_variables_initializer()</code>; <code>tf.global_variables_initializer()</code> alone leaves it uninitialized, which is exactly the error you saw.</p> <pre><code># y_pred and y_true are np.arrays of shape [1, size, channels] with tf.Session() as sess: ypredT = tf.constant(np.argmax(y_pred, axis=-1)) ytrueT = tf.constant(np.argmax(y_true, axis=-1)) iou,conf_mat = tf.metrics.mean_iou(ytrueT, ypredT, num_classes=3) sess.run(tf.local_variables_initializer()) sess.run([conf_mat]) miou = sess.run([iou]) print(miou) </code></pre>
python|tensorflow|deep-learning
2
7,810
56,710,490
plot a normal distribution curve and histogram
<p>Please, I want to know how I can plot a normal distribution plot.</p> <p>Here is my code:</p> <pre><code>import numpy as np import scipy.stats as stats import pylab as pl h=[27.3,27.6,27.5,27.6,27.3,27.6,27.9,27.5,27.4,27.5,27.5,27.4,27.1,27.0,27.3,27.4] fit = stats.norm.pdf(h, np.mean(h), np.std(h)) #this is a fitting indeed pl.plot(h,fit,'-o') pl.hist(h,density=True) #use this to draw histogram of your data pl.show() #use may also need add this </code></pre> <p>I tried this but the curve is very rough.</p>
<p>Simply sort your list <code>h</code>.</p> <p>Using sorted like this:</p> <pre><code>h = sorted([27.3,27.6,27.5,27.6,27.3,27.6,27.9,27.5,27.4,27.5,27.5,27.4,27.1,27.0,27.3,27.4]) </code></pre> <p>Alternatively, you can also use <code>h.sort()</code>.</p> <pre><code>h =[27.3,27.6,27.5,27.6,27.3,27.6,27.9,27.5,27.4,27.5,27.5,27.4,27.1,27.0,27.3,27.4] h.sort() </code></pre> <p>Output: <a href="https://i.stack.imgur.com/dlajU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dlajU.png" alt="enter image description here"></a></p> <p>In order to get a smooth distribution curve, you can use <code>seaborn.distplot()</code>:</p> <pre><code>import seaborn as sns import scipy h=[27.3,27.6,27.5,27.6,27.3,27.6,27.9,27.5,27.4,27.5,27.5,27.4,27.1,27.0,27.3,27.4] ax = sns.distplot(h,fit=scipy.stats.norm, kde=False, hist=True, color='r') ax.plot() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/iImrQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iImrQ.png" alt="enter image description here"></a></p> <p>For more information on <code>seaborn.distplot()</code>, check <a href="https://seaborn.pydata.org/generated/seaborn.distplot.html" rel="nofollow noreferrer">this</a> official documentation.</p>
python|numpy|matplotlib|scipy
1
7,811
56,577,834
How to do a loop scrape from a pandas dataframe
<p>So, I have a data frame with a lot of URLs, but each row contains only the second part of the link... I want to loop-scrape every URL, but I don't know how to do the loop. I already know what I want to scrape from each page.</p> <p>This is the main: <a href="https://www.brewersfriend.com" rel="nofollow noreferrer">https://www.brewersfriend.com</a></p> <p>And the second part is in the row of data frame['URL']</p> <p><a href="https://i.stack.imgur.com/MbTrC.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I tried this:</p> <pre><code>base = 'https://www.brewersfriend.com' links = [base + df['URL'] for r in df['URL']]</code></pre> <p>But it doesn't work, because it doesn't take every row...</p> <p>Can someone help me?</p>
<pre><code>df['links'] = df['URL'].apply(lambda x : 'https://www.brewersfriend.com' + x ) </code></pre>
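<p>For what it's worth, plain vectorized string concatenation does the same thing without <code>apply</code>, assuming the column holds strings:</p> <pre><code>df['links'] = 'https://www.brewersfriend.com' + df['URL']
</code></pre>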
python|pandas|dataframe|screen-scraping
0
7,812
56,798,635
Random selection with conditional probabilities
<p>I have a list, say <code>y = [1, 2, 3, 4, 6, 7, 8, 9, 5, 23, 12, 24, 43, 10]</code>, and I want to make a random selection from it with conditional probability: a number greater than 10 in the list has a probability of, say, 0.8 of being selected, while the rest have a probability of 0.2 of being selected.</p>
<p>Since random.choice provides a uniform distribution, you will have to work in two steps. First select between the groups of values (below 10 and above 10). Then select a value within the group.</p> <p>To get different probabilities between groups, you can create a list with the appropriate number of repetitions of each group. For example, for 0.2 and 0.8 you would have 2 instances of the "below10" group and 8 instances of the "above10" group in the list. This will transform the regular distribution into a weighted distribution relative to each group.</p> <pre><code>import random threshold = 10 y = [1, 2, 3, 4, 6, 7, 8, 9, 5, 23, 12, 24, 43, 10] group1 = [v for v in y if v &lt; threshold] group2 = [v for v in y if v &gt;= threshold] def getValue(): group = random.choice([group1]*2 + [group2]*8) return random.choice(group) </code></pre> <p>To test if the distribution is as required, you can use the function a large number of times and calculate how many times a value in each group was selected.</p> <pre><code>lowCount = 0 highCount = 0 N = 10000 for _ in range(N): v = getValue() if v &lt; threshold: lowCount += 1 else: highCount += 1 print(round(lowCount/N,2),round(highCount/N,2)) # 0.2 0.8 </code></pre> <p>If you only ever have 2 groups, you could use a simple if-else statement for the selection:</p> <pre><code>def getValue(): return random.choice(group1) if random.random() &lt;= 0.2 else random.choice(group2) </code></pre> <p><strong>EDIT</strong> For a single value (let's say 23) with a probability of 0.9, the approach is the same:</p> <pre><code>y = [1, 2, 3, 4, 6, 7, 8, 9, 5, 23, 12, 24, 43, 10] group1 = [23] group2 = [v for v in y if v not in group1] def getValue(): return random.choice(group1) if random.random() &lt;= 0.9 else random.choice(group2) lowCount = 0 highCount = 0 N = 10000 for _ in range(N): v = getValue() if v == 23: # &lt;== same condition as the grouping rule. lowCount += 1 else: highCount += 1 print(round(lowCount/N,2),round(highCount/N,2)) # 0.9 0.1 </code></pre> <p><em>You have to adjust your testing loop accordingly, however.</em></p>
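<p>On Python 3.6+, <code>random.choices</code> accepts weights directly, which avoids building the repeated list; a minimal sketch:</p> <pre><code>import random

def getValue():
    group = random.choices([group1, group2], weights=[0.2, 0.8])[0]
    return random.choice(group)
</code></pre>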
python|numpy|random|choice
3
7,813
25,460,028
Use numpy to get the positions of all objects in 3D space relative to one another
<p>I want get the differences between all permutations of pairs of <em>vectors</em> in a numpy array.</p> <p>In my specific use case these vectors are the 3D position vectors of a list of objects.</p> <p>So, if I have an array <code>r = [r1, r2, r3]</code> where <code>r1</code>, <code>r2</code> and <code>r3</code> are 3-dimensional vectors, I want the following:</p> <pre><code>[[r1-r1 r1-r2 r1-r3] [r2-r1 r2-r2 r2-r3] [r3-r1 r3-r2 r3-r3]] </code></pre> <p>Where the <code>-</code> op is applied element-wise to the vectors.</p> <p>Basically, the vector equivalent of this:</p> <pre><code>&gt;&gt;&gt; scalars = np.arange(3) &gt;&gt;&gt; print(scalars) [0 1 2] &gt;&gt;&gt; result = np.subtract.outer(scalars, scalars) &gt;&gt;&gt; print(result) [[ 0 -1 -2] [ 1 0 -1] [ 2 1 0]] </code></pre> <p>However, the <code>outer</code> function seems to flatten my array of vectors before subtraction and then reshapes it. For example:</p> <pre><code>&gt;&gt;&gt; vectors = np.arange(6).reshape(2, 3) # Two 3-dimensional vectors &gt;&gt;&gt; print(vectors) [[0 1 2] [3 4 5]] &gt;&gt;&gt; results = np.subtract.outer(vectors, vectors) &gt;&gt;&gt; print(results.shape) (2, 3, 2, 3) </code></pre> <p>The result that I'm expecting is:</p> <pre><code>&gt;&gt;&gt; print(result) [[[ 0 0 0] [-3 -3 -3]] [[ 3 3 3] [ 0 0 0]]] &gt;&gt;&gt; print(result.shape) (2, 2, 3) </code></pre> <p>Can I achieve the above without iterating over the array?</p>
<p><strong>Short answer:</strong></p> <p>An (almost) pure Python way to do a "pair-wise outer subtraction" of vectors <code>r</code> would be as follows:</p> <pre class="lang-py prettyprint-override"><code>np.array(map(operator.sub, *zip(*product(r, r)))).reshape((2, 2, -1)) </code></pre> <p>So you basically can use the <code>product</code> function to get all possible pairs of list items, un<code>zip</code> them to get two separate lists and <code>map</code> them to the subtraction <code>operator</code>. Finally you can <code>reshape</code> it as usual.</p> <p><strong>Step-by-step:</strong></p> <p>Here is a step-by-step example with all required libraries and outputs of intermediate results:</p> <pre><code>import numpy as np from itertools import product import operator r = np.arange(6).reshape(2, 3) print "Vectors:\n", r print "Product:\n", list(product(r, r)) print "Zipped:\n", zip(*product(r, r)) print "Mapped:\n", map(operator.sub, *zip(*product(r, r))) print "Reshaped:\n", np.array(map(operator.sub, *zip(*product(r, r)))).reshape((2, 2, -1)) </code></pre> <p>Output:</p> <pre><code>Vectors: [[0 1 2] [3 4 5]] Product: [(array([0, 1, 2]), array([0, 1, 2])), (array([0, 1, 2]), array([3, 4, 5])), (array([3, 4, 5]), array([0, 1, 2])), (array([3, 4, 5]), array([3, 4, 5]))] Zipped: [(array([0, 1, 2]), array([0, 1, 2]), array([3, 4, 5]), array([3, 4, 5])), (array([0, 1, 2]), array([3, 4, 5]), array([0, 1, 2]), array([3, 4, 5]))] Mapped: [array([0, 0, 0]), array([-3, -3, -3]), array([3, 3, 3]), array([0, 0, 0])] Reshaped: [[[ 0 0 0] [-3 -3 -3]] [[ 3 3 3] [ 0 0 0]]] </code></pre> <p>(Note that I need to switch the dimensions <code>2</code> and <code>3</code> in order to create your example array.)</p>
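<p>For completeness, NumPy broadcasting produces the same <code>(2, 2, 3)</code> result without any Python-level iteration, by subtracting a <code>(1, 2, 3)</code> view from a <code>(2, 1, 3)</code> view:</p> <pre><code>import numpy as np

vectors = np.arange(6).reshape(2, 3)
result = vectors[:, None, :] - vectors[None, :, :]
# array([[[ 0,  0,  0],
#         [-3, -3, -3]],
#        [[ 3,  3,  3],
#         [ 0,  0,  0]]])
</code></pre>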
python|arrays|numpy|vector|array-broadcasting
3
7,814
26,273,512
Numpy's asarray() doesn't work with csr_matrix
<p>I'm in big trouble. I wrote code in Python using <code>Numpy</code> and <code>Networkx</code> 6 months ago:</p> <pre><code>import numpy as np import networkx as nx G = nx.Graph() #add node and edges to G ... A = nx.adj_matrix(G) A = np.asarray(A) </code></pre> <p>Now I need to run this on a computing cluster with the latest version of Numpy. But when I run this code it fails, because <code>A = np.asarray(A)</code> returns <code>()</code></p> <p>I have no idea what to do, since this code is everywhere. Is this a bug in Numpy or what?</p> <p>This question is related to my <a href="https://stackoverflow.com/questions/26273200/installing-the-same-python-environment-on-another-machine">earlier question</a></p>
<p>Judging from this pull request:</p> <p><a href="https://github.com/networkx/networkx/commit/67bf6c1b4d2844a859b21057a63a72b36a45906b" rel="nofollow">https://github.com/networkx/networkx/commit/67bf6c1b4d2844a859b21057a63a72b36a45906b</a></p> <p>In Nov 2013, <code>networkx</code> changed <code>adjacency_matrix</code> (syn for <code>adj_matrix</code>) from producing a dense matrix to producing a sparse one. In a number of cases they had to add <code>.todense()</code> when calling this function.</p> <p>So the change may have been in <code>networkx</code> rather than <code>numpy</code>. I don't think <code>np.asarray</code> has ever been <code>sparse</code> aware. Usually it is used to convert a <code>np.matrix</code> to <code>np.ndarray</code>.</p> <hr> <p>Using <code>adj_matrix().A</code> should work in both environments. Both <code>np.matrix</code> and <code>csr_matrix</code> have this property.</p>
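<p>A minimal sketch of that portable form, using the graph <code>G</code> from the question:</p> <pre><code>A = nx.adj_matrix(G).A  # .A yields a plain ndarray for both np.matrix and scipy sparse matrices
</code></pre>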
python|numpy
1
7,815
67,004,312
Multi-output regression using skorch & sklearn pipeline gives runtime error due to dtype
<p>I want to use skorch to do multi-output regression. I've created a small toy example as can be seen below. In the example, the NN should predict 5 outputs. I also want to use a preprocessing step that is incorporated using sklearn pipelines (in this example PCA is used, but it could be any other preprocessor). When executing this example I get the following error in the Variable._execution_engine.run_backward step of torch:</p> <pre><code>RuntimeError: Found dtype Double but expected Float </code></pre> <p>Am I forgetting something? I suspect, somewhere something has to be cast, but as skorch handles a lot of the pytorch stuff, I don't see what and where.</p> <p>Example:</p> <pre><code>import torch import skorch from sklearn.datasets import make_classification, make_regression from sklearn.pipeline import Pipeline, make_pipeline from sklearn.decomposition import PCA X, y = make_regression(n_samples=1000, n_features=40, n_targets=5) X = X.astype('float32') class RegressionModule(torch.nn.Module): def __init__(self, input_dim=80): super().__init__() self.l0 = torch.nn.Linear(input_dim, 10) self.l1 = torch.nn.Linear(10, 5) def forward(self, X): y = self.l0(X) y = self.l1(y) return y class InputShapeSetter(skorch.callbacks.Callback): def on_train_begin(self, net, X, y): net.set_params(module__input_dim=X.shape[-1]) net = skorch.NeuralNetRegressor( RegressionModule, callbacks=[InputShapeSetter()], ) pipe = make_pipeline(PCA(n_components=10), net) pipe.fit(X, y) print(pipe.predict(X)) </code></pre> <p><strong>Edit 1:</strong></p> <p>Casting X to float32 at the start won't work for every preprocessor as can be seen from this example:</p> <pre><code>import torch import skorch from sklearn.datasets import make_classification, make_regression from sklearn.pipeline import Pipeline from sklearn.decomposition import PCA from category_encoders import OneHotEncoder X, y = make_regression(n_samples=1000, n_features=40, n_targets=5) X = pd.DataFrame(X,columns=[f'feature_{i}' for i in range(X.shape[1])]) X['feature_1'] = pd.qcut(X['feature_1'], 3, labels=[&quot;good&quot;, &quot;medium&quot;, &quot;bad&quot;]) y = y.astype('float32') class RegressionModule(torch.nn.Module): def __init__(self, input_dim=80): super().__init__() self.l0 = torch.nn.Linear(input_dim, 10) self.l1 = torch.nn.Linear(10, 5) def forward(self, X): y = self.l0(X) y = self.l1(y) return y class InputShapeSetter(skorch.callbacks.Callback): def on_train_begin(self, net, X, y): net.set_params(module__input_dim=X.shape[-1]) net = skorch.NeuralNetRegressor( RegressionModule, callbacks=[InputShapeSetter()], ) pipe = make_pipeline(OneHotEncoder(cols=['feature_1'], return_df=False), net) pipe.fit(X, y) print(pipe.predict(X)) </code></pre>
<p>By default <code>OneHotEncoder</code> returns numpy array of <code>dtype=float64</code>. So one could simply cast the input-data <code>X</code> when being fed into <code>forward()</code> of the model:</p> <pre><code>class RegressionModule(torch.nn.Module): def __init__(self, input_dim=80): super().__init__() self.l0 = torch.nn.Linear(input_dim, 10) self.l1 = torch.nn.Linear(10, 5) def forward(self, X): X = X.to(torch.float32) y = self.l0(X) y = self.l1(y) return y </code></pre>
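<p>An alternative sketch that keeps the cast inside the sklearn pipeline instead of the model, via a <code>FunctionTransformer</code>:</p> <pre><code>from sklearn.preprocessing import FunctionTransformer

pipe = make_pipeline(
    OneHotEncoder(cols=['feature_1'], return_df=False),
    FunctionTransformer(lambda x: x.astype('float32')),  # cast whatever the encoder emits
    net,
)
</code></pre>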
python|pytorch|torch|dtype|skorch
3
7,816
66,775,321
Training accuracy decreases and loss increases when using pack_padded_sequence - pad_packed_sequence
<p>I'm trying to train a bidirectional lstm with pack_padded_sequence and pad_packed_sequence, but the accuracy keeps decreasing while the loss increasing.</p> <p>This is my data loader:</p> <pre><code>X1 (X[0]): tensor([[1408, 1413, 43, ..., 0, 0, 0], [1452, 1415, 2443, ..., 0, 0, 0], [1434, 1432, 2012, ..., 0, 0, 0], ..., [1408, 3593, 1431, ..., 0, 0, 0], [1408, 1413, 1402, ..., 0, 0, 0], [1420, 1474, 2645, ..., 0, 0, 0]]), shape: torch.Size([64, 31]) len_X1 (X[3]): [9, 19, 12, 7, 7, 15, 4, 13, 9, 8, 14, 19, 7, 23, 7, 13, 7, 12, 10, 12, 13, 11, 31, 8, 20, 17, 8, 9, 9, 29, 8, 5, 5, 13, 9, 8, 10, 17, 13, 8, 8, 11, 7, 29, 15, 10, 6, 7, 10, 9, 10, 10, 4, 16, 11, 10, 16, 8, 13, 8, 8, 20, 7, 12] X2 (X[1]): tensor([[1420, 1415, 51, ..., 0, 0, 0], [1452, 1415, 2376, ..., 1523, 2770, 35], [1420, 1415, 51, ..., 0, 0, 0], ..., [1408, 3593, 1474, ..., 0, 0, 0], [1408, 1428, 2950, ..., 0, 0, 0], [1474, 1402, 3464, ..., 0, 0, 0]]), shape: torch.Size([64, 42]) len_X2 (X[4]): [14, 42, 13, 18, 12, 31, 8, 19, 5, 7, 15, 19, 7, 17, 6, 11, 12, 16, 8, 8, 19, 8, 12, 10, 11, 9, 9, 9, 9, 21, 7, 5, 8, 13, 14, 8, 15, 8, 8, 8, 12, 13, 7, 14, 4, 10, 6, 11, 12, 7, 8, 11, 9, 13, 30, 10, 15, 9, 9, 7, 9, 8, 7, 20] t (X[2]): tensor([0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1]), shape: torch.Size([64]) </code></pre> <p>This is my model class:</p> <pre><code>class BiLSTM(nn.Module): def __init__(self, n_vocabs, embed_dims, n_lstm_units, n_lstm_layers, n_output_classes): super(BiLSTM, self).__init__() self.v = n_vocabs self.e = embed_dims self.u = n_lstm_units self.l = n_lstm_layers self.o = n_output_classes self.padd_idx = tokenizer.get_vocab()['[PAD]'] self.embed = nn.Embedding( self.v, self.e, self.padd_idx ) self.bilstm = nn.LSTM( self.e, self.u, self.l, batch_first = True, bidirectional = True, dropout = 0.5 ) self.linear = nn.Linear( self.u * 4, self.o ) def forward(self, X): # initial_hidden h0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device) c0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device) # embedding out1 = self.embed(X[0].to(device)) out2 = self.embed(X[1].to(device)) # # pack_padded_sequence out1 = nn.utils.rnn.pack_padded_sequence(out1, X[3], batch_first=True, enforce_sorted=False) out2 = nn.utils.rnn.pack_padded_sequence(out2, X[4], batch_first=True, enforce_sorted=False) # NxTxh, lxNxh out1, _ = self.bilstm(out1, (h0, c0)) out2, _ = self.bilstm(out2, (h0, c0)) # # pad_packed_sequence out1, _ = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True) out2, _ = nn.utils.rnn.pad_packed_sequence(out2, batch_first=True) # take only the final time step out1 = out1[:, -1, :] out2 = out2[:, -1, :] # concatenate out1&amp;2 out = torch.cat((out1, out2), 1) # linear layer out = self.linear(out) iout = torch.max(out, 1)[1] return iout, out </code></pre> <p>And if I remove pack_padded_sequence - pad_packed_sequence, the model training works just fine:</p> <pre><code>class BiLSTM(nn.Module): def __init__(self, n_vocabs, embed_dims, n_lstm_units, n_lstm_layers, n_output_classes): super(BiLSTM, self).__init__() self.v = n_vocabs self.e = embed_dims self.u = n_lstm_units self.l = n_lstm_layers self.o = n_output_classes self.padd_idx = tokenizer.get_vocab()['[PAD]'] self.embed = nn.Embedding( self.v, self.e, self.padd_idx ) self.bilstm = nn.LSTM( self.e, self.u, self.l, batch_first = True, bidirectional = True, dropout = 0.5 ) self.linear = nn.Linear( self.u * 
4, self.o ) def forward(self, X): # initial_hidden h0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device) c0 = torch.zeros(self.l * 2, X[0].size(0), self.u).to(device) # embedding out1 = self.embed(X[0].to(device)) out2 = self.embed(X[1].to(device)) # pack_padded_sequence # out1 = nn.utils.rnn.pack_padded_sequence(out1, X[3], batch_first=True, enforce_sorted=False) # out2 = nn.utils.rnn.pack_padded_sequence(out2, X[4], batch_first=True, enforce_sorted=False) # NxTxh, lxNxh out1, _ = self.bilstm(out1, (h0, c0)) out2, _ = self.bilstm(out2, (h0, c0)) # pad_packed_sequence # out1, _ = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True) # out2, _ = nn.utils.rnn.pad_packed_sequence(out2, batch_first=True) # take only the final time step out1 = out1[:, -1, :] out2 = out2[:, -1, :] # concatenate out1&amp;2 out = torch.cat((out1, out2), 1) # linear layer out = self.linear(out) iout = torch.max(out, 1)[1] return iout, out </code></pre>
<p>These lines of your code are wrong.</p> <pre><code># take only the final time step out1 = out1[:, -1, :] out2 = out2[:, -1, :] </code></pre> <p>You say you are taking the final time step, but you are forgetting that each sequence has a different length.</p> <p><code>nn.utils.rnn.pad_packed_sequence</code> will <strong>pad</strong> the output of each sequence until its length equals that of the longest, so that they all have the same length.</p> <p>In other words, you are slicing out vectors of zeros (the padding) for most sequences.</p> <p>This should do what you want.</p> <pre><code># take only the final time step out1 = out1[range(out1.shape[0]), X3 - 1, :] out2 = out2[range(out2.shape[0]), X4 - 1, :] </code></pre> <p>This is assuming <code>X3</code> and <code>X4</code> are <strong>tensors</strong>.</p>
python|pytorch|bilstm
1
7,817
67,135,327
What is the difference between tensorflow-gpu and tensorflow?
<p>When I see tutorials regarding TensorFlow with GPU, it seems that the tutorials use tensorflow-gpu instead of tensorflow.<br /> The only info I found is the <a href="https://pypi.org/project/tensorflow-gpu/#:%7E:text=TensorFlow%20is%20an%20open%20source%20software%20library%20for%20high%20performance,to%20mobile%20and%20edge%20devices." rel="nofollow noreferrer">pypi page</a>, which doesn't cover much.<br /> Meanwhile the <a href="https://www.tensorflow.org/install/gpu" rel="nofollow noreferrer">official site</a> says that tensorflow already comes packed with GPU support.</p> <p>So are there any differences between the two libraries?</p> <p>My hypothesis is that early versions of tensorflow didn't have native GPU support, so a separate library was created, and tensorflow-gpu is still updated for older users who already use it.</p> <p>[Update]<br /> Thanks to the comment and the answers. I just finished installing several prerequisites related to Nvidia to use plain TensorFlow, and now I need to reinstall CUDA since the latest CUDA is not compatible with the latest tensorflow. It's true that the setup can be a pain in the arse.</p>
<p>The main difference is that you need the GPU-enabled version of TensorFlow for your system. However, before you install TensorFlow into this environment, you need to set up your computer to be GPU enabled with CUDA and CuDNN.</p> <pre><code>| Support for TensorFlow libraries | tensorflow | tensorflow-gpu |
| for hardware type:               | tf         | tf-gpu         |
|----------------------------------|------------|----------------|
| cpu-only                         | yes        | no (~tf-like)  |
| gpu with cuda+cudnn installed    | yes        | yes            |
| gpu without cuda+cudnn installed | yes        | no (~tf-like)  |
</code></pre> <p><a href="https://stackoverflow.com/questions/52624703/difference-between-installation-libraries-of-tensorflow-gpu-vs-cpu">More info</a></p> <p><a href="https://databricks.com/tensorflow/using-a-gpu" rel="nofollow noreferrer">More info</a></p>
python|tensorflow
1
7,818
67,044,621
Pandas dataframe column forward fill from first non-zero value
<p>I am looking to forward fill specific dataframe columns from first non-zero value and I further want to do this for each group.</p> <pre><code>df = pd.DataFrame(np.array([[1, 0, 0], [1, 5, 1], [1, 8, 0],[2, 4, 0],[2, 8, 1],[2, 81, 0]]), columns=['ID', 'b', 'c']) </code></pre> <p>The result I want is:</p> <pre><code>df2 = pd.DataFrame(np.array([[1, 0, 0], [1, 5, 1], [1, 8, 1],[2, 4, 0],[2, 8, 1],[2, 81, 1]]), columns=['ID', 'b', 'c']) </code></pre> <p>Attempt:</p> <pre><code>df2 = df.groupby('ID',as_index = False)['c'].apply(lambda x: x.replace(to_replace=0, method='ffill')) </code></pre> <p>The problem is the original dataframe is not returned. Any help with this would be much appreciated!</p>
<p>Use the <code>.values</code> attribute, so the group-wise result (which comes back with its own group index rather than the original one) is assigned back positionally:</p> <pre><code>df['c']=df.groupby('ID',as_index = False)['c'].apply(lambda x: x.replace(to_replace=0, method='ffill')).values </code></pre> <p>Now if you print <code>df</code> you will get your desired output:</p> <pre><code> ID b c 0 1 0 0 1 1 5 1 2 1 8 1 3 2 4 0 4 2 8 1 5 2 81 1 </code></pre>
python|pandas
2
7,819
47,232,779
How to extract and save images from tensorboard event summary?
<p>Given a tensorflow event file, how can I extract images corresponding to a specific tag, and then save them to disk in a common format e.g. <code>.png</code>?</p>
<p>You could extract the images like so. The output format may depend on how the image is encoded in the summary, so the resulting write to disk may need to use another format besides <code>.png</code></p> <pre><code>import os import scipy.misc import tensorflow as tf def save_images_from_event(fn, tag, output_dir='./'): assert(os.path.isdir(output_dir)) image_str = tf.placeholder(tf.string) im_tf = tf.image.decode_image(image_str) sess = tf.InteractiveSession() with sess.as_default(): count = 0 for e in tf.train.summary_iterator(fn): for v in e.summary.value: if v.tag == tag: im = im_tf.eval({image_str: v.image.encoded_image_string}) output_fn = os.path.realpath('{}/image_{:05d}.png'.format(output_dir, count)) print("Saving '{}'".format(output_fn)) scipy.misc.imsave(output_fn, im) count += 1 </code></pre> <p>And then an example invocation may look like:</p> <p><code>save_images_from_event('path/to/event/file', 'tag0')</code></p> <p>Note that this assumes the event file is fully written -- in the case that it's not, some error handling is probably necessary.</p>
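<p>A small caveat not in the original answer: <code>scipy.misc.imsave</code> was removed in SciPy 1.2, so on newer environments a drop-in substitute is needed for the save call:</p> <pre><code>import imageio

imageio.imwrite(output_fn, im)  # replaces scipy.misc.imsave(output_fn, im)
</code></pre>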
python|tensorflow|tensorboard
17
7,820
47,444,011
Python 2.7- convert array of strings to csv
<p>I have a set of strings that I want to save as a data frame (one column, each string in a separate cell). Each string has the following structure: </p> <pre><code>u'word word word\n word word\nword word word word word word \nword word word word' </code></pre> <p>I tried to use <code>np.savetxt("dataframe.csv", strings, fmt='%s', delimiter='\t')</code> but due to the newline <code>\n</code> character, each CSV cell contains one line instead of the whole string. Any ideas how to solve this easily? </p>
<p>You can convert the array to a <code>pandas.DataFrame</code> and use <code>DataFrame.to_csv(path_or_buf=..., sep=',')</code> to write it out as a .csv file.</p>
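<p>A minimal sketch of why this solves the newline problem: <code>to_csv</code> quotes any field containing a newline, so the embedded <code>\n</code> stays inside a single cell instead of breaking the row:</p> <pre><code>import pandas as pd

strings = [u'word word word\n word word\nword word word word word word \nword word word word']
pd.DataFrame({'text': strings}).to_csv('dataframe.csv', index=False)
</code></pre>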
string|python-2.7|csv|numpy|set
0
7,821
47,212,464
Implement shortcut with Keras Sequential model
<p>I have implemented a shortcut with the Keras functional model this way:</p> <pre class="lang-py prettyprint-override"><code>inputs = ... # shortcut path shortcut = ShortcutLayer()(inputs) # main path outputs = MainLayer()(inputs) # add main and shortcut together outputs = Add()([outputs, shortcut]) </code></pre> <p>Is it possible to convert this to a Sequential model, so that I don't need to know <code>inputs</code> in advance?</p> <p>Basically, what I want to achieve looks like:</p> <pre class="lang-py prettyprint-override"><code>def my_model_with_shortcut(): # returns a Sequential model equivalent to the functional one above model = my_model_with_shortcut() inputs = ... outputs = model(inputs) </code></pre>
<p>I would try the following:</p> <pre><code>def my_model_with_shortcut(): def _create_shortcut(inputs): # build the model here as if you knew the inputs, e.g.: aux = Dense(10, activation='relu')(inputs) output = Dense(10, activation='relu')(aux) return output return _create_shortcut </code></pre> <p>Now your scenario should be possible: the returned closure can be applied to the inputs once they are known.</p>
python|tensorflow|machine-learning|keras|deep-learning
0
7,822
68,299,303
numpy sum of each array in a list of arrays of different size
<p>Given a list of numpy arrays, each of different length, as that obtained by doing <code>lst = np.array_split(arr, indices)</code>, how do I get the sum of every array in the list? (I know how to do it using list-comprehension but I was hoping there was a pure-numpy way to do it).</p> <p>I thought that this would work:</p> <pre><code>np.apply_along_axis(lambda arr: arr.sum(), axis=0, arr=lst) </code></pre> <p>But it doesn't, instead it gives me this error which I don't understand:</p> <blockquote> <p>ValueError: operands could not be broadcast together with shapes (0,) (12,)</p> </blockquote> <p><em>NB: It's an array of sympy objects.</em></p>
<p>There's a faster way which avoids <code>np.split</code>, and utilizes <a href="https://numpy.org/doc/stable/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow noreferrer"><code>np.ufunc.reduceat</code></a>. We create an ascending array of indices where you want to sum elements with <code>np.append([0], np.cumsum(indices)[:-1])</code>. For proper indexing we need to put a zero in front (and discard the last element if it covers the full range of the original array; otherwise just delete the <code>[:-1]</code> indexing). Then we use the <code>np.add</code> ufunc with <code>reduceat</code>:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

arr = np.arange(1, 11)
indices = np.array([2, 4, 4])
# this should split like this
# [1 2 | 3 4 5 6 | 7 8 9 10]

np.add.reduceat(arr, np.append([0], np.cumsum(indices)[:-1]))
# array([ 3, 18, 34])
</code></pre>
python|numpy
3
7,823
1,053,928
Very large matrices using Python and NumPy
<p><a href="http://en.wikipedia.org/wiki/NumPy" rel="noreferrer">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p> <p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) in some way (without having several terabytes of RAM)?</p>
<p>PyTables and NumPy are the way to go.</p> <p>PyTables will store the data on disk in HDF format, with optional compression. My datasets often get 10x compression, which is handy when dealing with tens or hundreds of millions of rows. It's also very fast; my 5 year old laptop can crunch through data doing SQL-like GROUP BY aggregation at 1,000,000 rows/second. Not bad for a Python-based solution!</p> <p>Accessing the data as a NumPy recarray again is as simple as:</p> <pre><code>data = table[row_from:row_to] </code></pre> <p>The HDF library takes care of reading in the relevant chunks of data and converting to NumPy.</p>
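<p>A minimal PyTables sketch of the disk-backed approach (the file and node names are illustrative):</p> <pre><code>import tables
import numpy as np

with tables.open_file('big.h5', mode='w') as h5:
    atom = tables.Float64Atom()
    filters = tables.Filters(complevel=5, complib='zlib')
    # a chunked, compressed array on disk; only the chunks you touch hit RAM
    arr = h5.create_carray(h5.root, 'matrix', atom,
                           shape=(1000000, 1000000), filters=filters)
    arr[0, :1000] = np.random.rand(1000)
</code></pre>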
python|matrix|numpy
93
7,824
59,123,663
Getting top 100 words with highest document frequency in a pandas series
<p>Suppose I have a pandas series like this:</p> <pre><code>0 "sun moon earth moon" 1 "sun saturn mercury saturn" 2 "sun earth mars" 3 "sun earth saturn sun saturn" </code></pre> <p>I want to get the top 3 words with the highest row ("document") frequency <strong>irrespective</strong> of the frequency within a single row ("document").</p> <p>For overall frequency I can just collect all the words from all rows in a string, do a split, convert back to series and use <code>value_counts</code>. In that case, the top 3 frequencies would be:</p> <pre><code>1. sun: 5 2. saturn: 4 3. earth: 3 </code></pre> <p>But the document frequencies, i.e. the <strong>number of rows in which a word occurs</strong>, would be</p> <pre><code>1. sun: 4 2. earth: 3 3. saturn: 2 </code></pre> <p>A way I can think of off the top of my head is to apply a lambda function to the series, splitting each string, making a set out of it, then combining all words into a single set, making a series out of that and then using <code>value_counts</code>. Is there a more efficient way of achieving the same thing?</p>
<p>Because performance is important use <code>Counter</code>:</p> <pre><code>from collections import Counter a = Counter([y for x in s for y in x.split()]).most_common(3) print (a) [('sun', 5), ('saturn', 4), ('earth', 3)] b = Counter([y for x in s for y in set(x.split())]).most_common(3) print (b) [('sun', 4), ('earth', 3), ('saturn', 2)] df1 = pd.DataFrame(a, columns=['val','count']) #df2 = pd.DataFrame(b, columns=['val','count']) print (df1) val count 0 sun 5 1 saturn 4 2 earth 3 </code></pre> <p>Pandas alternatives:</p> <pre><code>a = s.str.split(expand=True).stack().value_counts().head(3) print (a) sun 5 saturn 4 earth 3 dtype: int64 b = (s.str.split(expand=True) .stack() .reset_index(name='val') .drop_duplicates(['val', 'level_0'])['val'] .value_counts() .head(3)) print (b) sun 4 earth 3 saturn 2 Name: val, dtype: int64 </code></pre>
python|pandas|word-frequency
2
7,825
59,359,097
Optimizing a dataframe subset operation in Python
<p>Summarize the Problem</p> <p>I am trying to optimize some code I have written. In its current form it works as intended; however, because of the sheer number of loops required, the script takes a very long time to run.</p> <p>I'm looking for a method of speeding up the below-described code.</p> <p>Detail the problem</p> <p>Within this data frame called master, there are 3,936,192 rows. The Position column represents a genomic window, each of which is present in this data frame 76 times, such that <code>master[master['Position'] == 300]</code> returns a dataframe of 76 rows, and similarly for each unique value of Position. I do some operations on each of these subsets of the data frame.</p> <p>The data can be found <a href="https://www.dropbox.com/s/3sksk3vp63bnf61/master.csv?dl=0" rel="nofollow noreferrer">here</a></p> <p>My current code takes the form:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd master = pd.read_csv(data_location) windows = sorted(set(master['Position'])) window_factor = [] # loop through all the windows, look at the cohort of samples, ignore anything not CNV == 2 # if that means ignore all, then drop the window entirely # else record the 1/2 mean of that window's normalised coverage across all samples. for window in windows: current_window = master[master['Position'] == window] t = current_window[current_window['CNV'] == 2] if t.shape[0] == 0: window_factor.append('drop') else: window_factor.append( np.mean(current_window[current_window['CNV'] == 2]['Normalised_coverage'])/2) </code></pre> <p>However, this takes an exceptionally long time to run and I can't figure out a way to speed this up, though I know there must be one.</p>
<p>Your <code>df</code> is not that big, and in your code there are a few problems:</p> <ul> <li>If you use <code>np.mean</code> and one value is <code>np.nan</code>, it returns <code>np.nan</code>.</li> <li>You can divide by 2 after calculating the mean.</li> <li>This seems to me a perfect case for <code>groupby</code>.</li> <li>You return a string while the other results are <code>float</code>; you might consider using <code>np.nan</code> instead.</li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_csv("master.csv") def fun(x): t = x[x["CNV"]==2] return t["Normalised_coverage"].mean()/2 # returns np.nan when len(t)==0 out = df.groupby('Position').apply(fun) CPU times: user 34.7 s, sys: 72.5 ms, total: 34.8 s Wall time: 34.7 s </code></pre> <p>Or, even faster, filter before the <code>groupby</code>:</p> <pre class="lang-py prettyprint-override"><code>%%time out = df[df["CNV"]==2].groupby("Position")["Normalised_coverage"].mean()/2 CPU times: user 82.5 ms, sys: 8.03 ms, total: 90.5 ms Wall time: 87.8 ms </code></pre> <p><strong>UPDATE:</strong> In the last case, if you really need to keep track of groups where <code>df["CNV"]!=2</code>, you can use this code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np bad = df[df["CNV"]!=2]["Position"].unique() bad = list(set(bad)-set(out.index)) out = out.reset_index(name="value") out1 = pd.DataFrame({"Position":bad, "value":[np.nan]*len(bad)}) out = pd.concat([out,out1], ignore_index=True)\ .sort_values("Position")\ .reset_index(drop=True) </code></pre> <p>Which is going to add <code>160ms</code> to your computation.</p>
python|pandas|optimization|bioinformatics
2
7,826
59,223,772
Columns display weird naming after resetting multi index back to columns
<p>So I have been working with a dataframe and converted it from long to wide, setting a multi-index.</p> <p><code>df_wide = df.pivot_table(index = ["StationId", "day", "month", "year", "hour", "dayofweek"], columns = "minute", values = ["StationTotalFlow"])</code></p> <p>I then used reset_index to reuse the columns I originally multi indexed.</p> <pre><code>df_wide = df_wide.reset_index() </code></pre> <p>My dataframe now looks like this (screenshot):</p> <p><a href="https://i.stack.imgur.com/ov6nU.png" rel="nofollow noreferrer">dataframe screenshot</a></p> <p>I would like to remove that minute index and... Using </p> <pre><code>df_wide.info() </code></pre> <p>I notice my column names are wrapped in parentheses.</p> <p><a href="https://i.stack.imgur.com/a7V9q.png" rel="nofollow noreferrer">.info screenshot</a></p> <p>Does anyone know what is going on?</p>
<p>It's not that they're "wrapped in parentheses" really, it's that you went from a MultiIndex to a single-level list of names, so the first level of the MultiIndex became the first element of each tuple, and the second level became the second element of each tuple.</p>
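<p>If the goal is to get rid of those tuples, a hedged sketch that flattens them back into plain strings, assuming each name is a <code>(column, minute)</code> pair as in the screenshot:</p> <pre><code>df_wide.columns = ['_'.join(str(part) for part in col if part != '')
                   if isinstance(col, tuple) else col
                   for col in df_wide.columns]
</code></pre>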
python-3.x|pandas
0
7,827
59,063,138
run pyspark date column thru datetime/pandas function
<p>I have a pyspark dataframe where one column is a date column.</p> <p>I need to run this column thru a pandas/datetime function to calculate business hours.</p> <p>However, I can't seem to get the conversion right:</p> <pre><code>df3 = df2.withColumn('test_date', add_one(df2.AssignedDate.toPandas())) </code></pre> <p>produces the error:</p> <p>'Column' object is not callable</p> <p>I'm trying to run <code>df2.AssignedDate</code> thru the following function:</p> <pre><code>def add_one(pd_date): if pd_date.isoweekday() == 6: pd_date = pd_date.replace(hour = 7 , minute=0) return pd_date </code></pre>
<p>You could use regular pyspark.sql.functions to parse the timestamp and manipulate it directly:</p> <pre class="lang-py prettyprint-override"><code>In [1]: from datetime import datetime ...: from pyspark.sql.functions import col, date_format, to_timestamp, when, dayofweek ...: ...: frame = spark.createDataFrame( ...: [(1, datetime(2019, 11, 4, 7, 15, 21)), ...: (2, datetime(2019, 11, 9, 6, 2, 4))], ...: schema=(&quot;id&quot;, &quot;time&quot;)) ...: ...: replaced_as_string = frame.withColumn( ...: &quot;trunc&quot;, ...: when( ...: dayofweek(col(&quot;time&quot;)) == 7, # different convention ...: date_format(col(&quot;time&quot;), &quot;yyyy-MM-dd 07:00:ss&quot;) ...: ).otherwise( ...: date_format(col(&quot;time&quot;), &quot;yyyy-MM-dd HH:mm:ss&quot;)) ...: ) ...: replaced_as_timestamp = replaced_as_string.withColumn( ...: &quot;trunc&quot;, ...: to_timestamp(col(&quot;trunc&quot;))) ...: replaced_as_timestamp.show() ...: +---+-------------------+-------------------+ | id| time| trunc| +---+-------------------+-------------------+ | 1|2019-11-04 07:15:21|2019-11-04 07:15:21| | 2|2019-11-09 06:02:04|2019-11-09 07:00:04| +---+-------------------+-------------------+ </code></pre> <p>This has the advantage of staying entirely with Java objects for the internals, so you don't lose time transforming to and from Python objects.</p> <p>Note that the function <code>dayofweek</code> uses a different day-numbering than Python's <code>datetime.datetime.isoweekday()</code>.</p>
pandas|datetime|pyspark
2
7,828
59,142,527
np.linalg.solve() but when A (first matrix) is unknown
<p><code>np.linalg.solve()</code> works great when you have an equation in the form of <code>Ax = b</code>. My problem is that I actually have an equation in the form of <code>xC = D</code>, where x is a 2x2 matrix I want to find out, and C and D are 2x2 matrices I'm given.</p> <p>And because matrix multiplication is generally not commutative, I can't just swap the two around.</p> <p>Is there an efficient way to solve this in numpy (or another library in Python)?</p>
<p><code>x @ C = D</code> is the same as <code>D^-1 @ x @ C @ C^-1 = D^-1 @ D @ C^-1</code>, which is <code>D^-1 @ x = C^-1</code>, which is in the form Ax = b where A is <code>np.linalg.pinv(D)</code> and b is <code>np.linalg.pinv(C)</code>.</p> <p>This boils down to</p> <pre><code>x = D @ np.linalg.pinv(C)
</code></pre> <p>which you could have gotten by just multiplying both sides of the equation by the inverse of C.</p>
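<p>A quick numeric check (a sketch; since <code>x @ C = D</code> transposes to <code>C.T @ x.T = D.T</code>, <code>np.linalg.solve</code> also works when <code>C</code> is invertible):</p> <pre><code>import numpy as np

C = np.array([[2., 1.], [0., 3.]])
D = np.array([[4., 5.], [6., 7.]])

x = D @ np.linalg.pinv(C)           # via the pseudo-inverse
x2 = np.linalg.solve(C.T, D.T).T    # via the transposed system

assert np.allclose(x @ C, D)
assert np.allclose(x, x2)
</code></pre>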
python|numpy|linear-algebra|matrix-multiplication
2
7,829
59,060,862
Easy way to do this in numpy?
<p>Suppose I have a BxNxL array, M. In other words, M is composed of B NxL matrices. In addition, I have a LxB column vector, Q. Is there any easy way (without for loops) to broadcast (sum) the ith column of Q to the ith matrix in M? </p>
<p>So your iterative code would be something like this?</p> <pre><code>for i in range(...):
    res[i,:,:] = M[i,:,:] + Q[:,i]   # NxL + L
</code></pre> <p>With the whole array:</p> <pre><code>res = M + Q.T[:,None,:]   # BxNxL + (Bx1xL)
</code></pre> <p>(I wrote this without a test example, so there might be an error, but the basic idea should be right.)</p>
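<p>A quick sanity check of the broadcasting (a sketch with small random shapes):</p> <pre><code>import numpy as np

B, N, L = 3, 4, 5
M = np.random.rand(B, N, L)    # B matrices of shape NxL
Q = np.random.rand(L, B)       # LxB: one column vector per matrix

res = M + Q.T[:, None, :]      # BxNxL + Bx1xL broadcasts over N

expected = np.empty_like(M)    # compare against the loop version
for i in range(B):
    expected[i] = M[i] + Q[:, i]

assert np.allclose(res, expected)
</code></pre>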
python|numpy
0
7,830
59,107,448
AttributeError: 'numpy.ndarray' object has no attribute 'strip'
<p>i tried to make a training model with multiple inputs and outputs.</p> <p>This model worked very well with single input and output but i got an error message.</p> <p>AttributeError: 'numpy.ndarray' object has no attribute 'strip'</p> <p>I guess the problem is that the fit_generator can't process the numpy array.</p> <p>my fit generator looks like that:</p> <pre><code>def train_model(model, X_train, X_valid, y_train, y_valid): """ Train the model """ checkpoint = ModelCheckpoint('model-{epoch:03d}-Hunet-LSTM-Canny_Final_bc50.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='auto') model.compile(loss='mse', optimizer=Adam(lr=0.0001)) X=X_train print(X) y=y_train print(y) history = model.fit_generator(batcher(data_dir, X_train, y_train, batch_size, True), samples_per_epoch, nb_epoch, max_q_size=1, validation_data=batcher(data_dir, X_valid, y_valid, batch_size, False), nb_val_samples=len(X_valid), callbacks=[checkpoint], verbose=1) </code></pre> <p>and the results of the Print(X_train) </p> <pre><code>[['images/photo6190.jpg' 0.119999997318] ['images/photo8791.jpg' 0.10000000149] ['images/photo12711.jpg' 0.060000006109499994] ... ['images/photo9846.jpg' 0.0700000077486] ['images/photo10800.jpg' 0.109999999404] ['images/photo2733.jpg' 0.10000000149]] </code></pre> <p>and print(y_train)</p> <pre><code>[[ 0.20000002 0.12 ] [ 0.30000001 0.1 ] [-0.19999999 0.06000001] ... [ 0.30000001 0.07000001] [ 0.5 0.11 ] [ 0.40000001 0.1 ]] </code></pre> <p>Any idea to fixing it?</p>
<p>From the <a href="https://keras.io/models/sequential/#fit_generator" rel="nofollow noreferrer">docs</a>, you cannot simply pass a numpy array to the <code>fit_generator()</code> function. As the name suggests, <code>fit_generator()</code> takes a Python generator as an argument. You can use the Keras <code>ImageDataGenerator()</code> class to get a generator, or you can make your own custom generator function which <code>yields</code> your <code>x</code> and <code>y</code> training pairs.</p> <p>This sample example might help: <a href="https://keras.io/examples/cifar10_cnn/" rel="nofollow noreferrer">https://keras.io/examples/cifar10_cnn/</a></p>
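<p>A minimal custom generator might look like this (a sketch; the array names and batch size are placeholders, and since your <code>X_train</code> holds image paths, a real generator would additionally load each image inside the loop):</p> <pre><code>import numpy as np

def batch_generator(x, y, batch_size=32):
    n = len(x)
    while True:  # fit_generator expects a generator that never ends
        idx = np.random.randint(0, n, batch_size)
        yield x[idx], y[idx]

model.fit_generator(batch_generator(x_train, y_train, 32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=10)
</code></pre>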
numpy|neural-network|multiple-input
0
7,831
59,347,325
Python recursive quadtree issues
<p>I've been writing a recursive quadtree constructor to use for some n-body simulations, but my current implementation doesn't seem to be working properly, and after a lot of debugging, I'm stumped. The results that it gives are clearly incorrect, although all the debugging checks seem to give the results they should. Could anyone help? Here's my code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from numba import jit, autojit import matplotlib.animation as ani from mpl_toolkits.mplot3d import Axes3D from time import sleep mu, sigma = 0, 0.5 size = 50 X = np.array([np.random.normal(mu, sigma, size),np.random.normal(mu, sigma, size)]) A = np.zeros([size,2],dtype='float64') V = np.zeros([size,2],dtype='float64') M = np.random.normal(1, 0, size) quad_list = np.zeros([4,1]) def quadtree(p,n,x,y,w,h): global quad_list L = len(p[0]) if L&gt;1: print(len(p[0])) midx = (w/2+x) midy = (h/2+y) px = p[0] py = p[1] plx, ply = px[px&lt;midx], py[px&lt;midx] prx, pry = px[px&gt;midx], py[px&gt;midx] p1 = np.array([plx[ply&gt;midy],ply[ply&gt;midy]]) p2 = np.array([prx[pry&gt;midy],pry[pry&gt;midy]]) p3 = np.array([prx[pry&lt;midy],pry[pry&lt;midy]]) p4 = np.array([plx[ply&lt;midy],ply[ply&lt;midy]]) quad_list = np.append(quad_list,np.array([x,y,w,h])) quadtree(p1,n+1,x,y+h/2,w/2,h/2) quadtree(p2,n+1,x+w/2,y+h/2,w/2,h/2) quadtree(p3,n+1,x,y,w/2,h/2) quadtree(p4,n+1,x,y,w/2,h/2) else: quad_list = np.append(quad_list,np.array([x,y,w,h])) quadtree(X,0,-2,-2,4,4) plt.scatter(X[0],X[1],c='black') out = np.zeros([4,int(len(quad_list)/4)]) for i in range(0,int(len(quad_list)),4): for j in range(4): out[j,int(i/4)] = quad_list[i+j] for n in range(int(len(out[0,:]))): print(n) plt.plot([out[0,n],out[0,n]], [out[1,n],out[1,n]+out[3,n]]) plt.plot([out[0,n]+out[2,n],out[0,n]+out[2,n]], [out[1,n],out[1,n]+out[3,n]]) plt.plot([out[0,n],out[0,n]+out[2,n]], [out[1,n],out[1,n]]) plt.plot([out[0,n],out[0,n]+out[2,n]], [out[1,n]+out[3,n],out[1,n]+out[3,n]]) plt.show() </code></pre> <p>Thank in advance! (Footnote: I'm making a more optimised version where <code>quad_list</code> is a list)</p>
<p>Try changing the line:</p> <pre><code>quadtree(p3,n+1,x,y,w/2,h/2)
</code></pre> <p>to</p> <pre><code>quadtree(p3,n+1,x+w/2,y,w/2,h/2)
</code></pre> <p><code>p3</code> holds the points in the lower-right quadrant, so its x-origin must be offset by <code>w/2</code>; the original call reused the lower-left origin, which is why the drawn cells don't line up with the points.</p>
python|numpy|quadtree
1
7,832
59,045,410
How can I create a new series by using specific rows and columns of a pandas data frame?
<p>I am working with a pandas data frame which looks like as follows:</p> <pre><code> title view_count comment_count like_count dislike_count dog_tag cat_tag bird_tag other_tag 0 Great Dane Loves 299094 752.0 15167 58 [dog] [] [] [] 1 Guy Loves His Cat 181320 1283.0 13254 262 [] [cat] [] [] </code></pre> <p>Basically, title represents the name of the YouTube video. If the video is about dogs, you can see [dog] under dog_tag category. If it is not about dogs, you see an empty list [] under dog_tag.</p> <p>I need to do create a new series which include title, view_count, comment_count, like_count and dislike_count for every row of dog_tag <strong>if the value of dog_tag is [dog]</strong>. I should not include any information for the rows where the value of dog_tag is []. </p> <p>So, my new series should seem like this:</p> <pre><code> title view_count comment_count like_count dislike_count dog_tag 0 Great Dane Loves 299094 752.0 15167 58 [dog] 1 Dogs are Soo Great!! 181320 1283.0 13254 262 [dog] 2 Dog and Little Girl 562585 5658.3 46589 121 [dog] </code></pre> <p>Is there any <strong><em>genius person</em></strong> who can solve this problem? I tried the following solutions that I found on Stack Overflow but I could not get what I need :(</p> <pre><code>only_dog = [dodo_data.loc[:, dodo_data.loc[0,:].eq(s)] for s in ['dog_tag', 'view_count', 'comment_count', 'like_count', 'dislike_count','ratio_of_comments_per_view', 'ratio_of_likes_per_view']] dodo_data.loc[:,dodo_data.iloc[0, :] == "dog_tag"] dodo_data.loc[:,dodo_data.iloc[0, :] == "view_count"] dodo_data.loc[:,dodo_data.iloc[0, :] == "comment_count"] </code></pre>
<p>Because an empty list converts to <code>False</code> when cast to boolean, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> together with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> to filter by the condition and, at the same time, select the list of column names:</p> <pre><code>cols = ['title', 'view_count', 'comment_count', 'like_count',
        'dislike_count', 'dog_tag']
df = df.loc[df['dog_tag'].astype(bool), cols]
</code></pre>
python|pandas|dataframe|series
3
7,833
13,871,466
Irregular results from numpy and scipy
<p>I am creating finite element code in Python that relies on numpy and scipy for array, matrix and linear algebra calculations. The initial generated code seems to be working and I am getting the results I need.</p> <p>However, for some other feature I need to call a function that performs the analysis more than one time and when I review the results they differ completely from the first call although both are called using the same inputs. The only thing I can think of is that the garbage collection is not working and the memory is being corrupted.</p> <p>Here is the procedure used:</p> <ol> <li>call setup file to generate model database: mDB = F0(inputs)</li> <li>call first analysis with some variable input: r1 = F1(mDB, v1)</li> <li>repeat first analysis with the same variable from step2: r2 = F1(mDB, v1)</li> </ol> <p>Since nothing has changed, I would expect that the results from step#2 and step#3 would be the same, however, my code produces different results (verified using matplotlib). </p> <p>I am using:</p> <p>Python 2.7 (32bit) on Windows 7 with numpy-1.6.2 and scipy-0.11.0</p>
<p>If your results are sensitive to rounding error (e.g. you have some programming error in your code), then in general floating point results are not reproducible. This can occur simply because of the way modern compilers optimize code, so it does not require e.g. accessing uninitialized memory.</p> <p>Please see: <a href="http://www.nccs.nasa.gov/images/FloatingPoint_consistency.pdf" rel="nofollow">http://www.nccs.nasa.gov/images/FloatingPoint_consistency.pdf</a></p> <p>Another likely possibility is that your computation function modifies the input data. The point you mention in the comment above does not exclude this possibility: Python passes object references, so a function can silently mutate a mutable argument such as a NumPy array in place.</p>
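<p>A minimal illustration of the second point (a sketch): if the analysis function modifies its input in place, a second call with the "same" input actually sees different data.</p> <pre><code>import numpy as np

def analyze(a):
    a *= 2.0           # in-place: mutates the caller's array
    return a.sum()

data = np.ones(4)
print(analyze(data))   # 8.0
print(analyze(data))   # 16.0 -- same call, different result
</code></pre>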
python|numpy|scipy
2
7,834
44,929,823
Unexpectedly needed to give input value to an irrelevant placeholder in the graph
<p><a href="https://gist.github.com/Wermarter/466e9585579ef65927fa934fe4e0ffd4" rel="nofollow noreferrer">https://gist.github.com/Wermarter/466e9585579ef65927fa934fe4e0ffd4</a> Here I'm trying to implement Variational AutoEncoder in Tensorflow with TFLearn.</p> <p>I build the computations for training, encoding, generating in one big graph in <code>self.training_model.session</code>. The <code>self.generating_model</code> and <code>self.recognition_model</code> share the same session as <code>self.training_model</code>.</p> <p>Everything went fine when I ran the <code>generating_model</code> to generate MNIST 2D Latent space. But error appeared when I ran <code>self.recognition_model</code> to encode the given input_data, it required me to given input value to the <code>self.train_data</code> which belongs to the <code>self.training_model</code>.</p> <p>Here's the full error:</p> <pre><code>Traceback (most recent call last): File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call return fn(*args) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1021, in _run_fn status, run_metadata) File "/home/wermarter/anaconda3/lib/python3.5/contextlib.py", line 66, in __exit__ next(self.gen) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'train_data/X' with dtype float [[Node: train_data/X = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]] [[Node: add_5/_47 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_8_add_5", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wermarter/Desktop/vae.py", line 178, in &lt;module&gt; main() File "/home/wermarter/Desktop/vae.py", line 172, in main vae.img_transition(trainX[4], trainX[100]) File "/home/wermarter/Desktop/vae.py", line 130, in img_transition enc_A = self.encode(A)[0] File "/home/wermarter/Desktop/vae.py", line 121, in encode return self.recognition_model.predict({self.input_data: input_data}) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tflearn/models/dnn.py", line 257, in predict return self.predictor.predict(feed_dict) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tflearn/helpers/evaluator.py", line 69, in predict return self.session.run(self.tensors[0], feed_dict=feed_dict) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 778, in run run_metadata_ptr) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 982, in _run feed_dict_string, options, run_metadata) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run target_list, options, run_metadata) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1052, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for 
placeholder tensor 'train_data/X' with dtype float [[Node: train_data/X = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]] [[Node: add_5/_47 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_8_add_5", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op 'train_data/X', defined at: File "/home/wermarter/Desktop/vae.py", line 178, in &lt;module&gt; main() File "/home/wermarter/Desktop/vae.py", line 169, in main vae = VAE() File "/home/wermarter/Desktop/vae.py", line 28, in __init__ self._build_training_model() File "/home/wermarter/Desktop/vae.py", line 78, in _build_training_model self.train_data = tflearn.input_data(shape=[None, *self.img_shape], name='train_data') File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tflearn/layers/core.py", line 81, in input_data placeholder = tf.placeholder(shape=shape, dtype=dtype, name="X") File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1507, in placeholder name=name) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1997, in _placeholder name=name) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op op_def=op_def) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "/home/wermarter/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1228, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'train_data/X' with dtype float [[Node: train_data/X = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]] [[Node: add_5/_47 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_8_add_5", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] </code></pre>
<p>This is a code-specific error. My <code>self.recognition_model</code> is actually linked to the placeholder <code>self.train_data</code> through <code>self.curr_batch_size</code> in <code>self._sample_z()</code>. My solution is to re-link <code>self.curr_batch_size</code> to the size of <code>self.input_data</code>.</p> <p>That's it. Happy Coding</p>
graph|tensorflow|tflearn
0
7,835
44,940,057
why can we use variable name to get data stored in it?
<p>When using Python, I am confronted with a problem that has confused me for a long time. Say I use numpy to define an array <code>x = np.array([1, 2])</code>. </p> <p>This, I think, means that <code>x</code> is an instance of class <code>array</code>. Moreover, the tutorial also says that <code>[1,2]</code> is actually stored in <code>x.data</code>. But I get the data <code>[1,2]</code> through the instance name <code>x</code> instead of <code>x.data</code> in Python. </p> <p>How does this happen? Is there a link between the instance name <code>x</code> and <code>x.data</code>? </p>
<p><code>x</code> and <code>x.data</code> are different types, though they are interpreting data from the same location in memory.</p> <pre><code>In [1]: import numpy as np

In [2]: x = np.array([1,2])

In [3]: type(x)
Out[3]: numpy.ndarray

In [4]: type(x.data)
Out[4]: buffer
</code></pre> <p><code>x.data</code> is a pointer to the underlying buffer of bytes that composes the array object in memory, referenced here in the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.data.html#numpy.ndarray.data" rel="nofollow noreferrer"><code>numpy</code> docs</a>.</p> <p>When we check the underlying datatype (<code>dtype</code>) the array is storing the data as, we see the following:</p> <pre><code>In [5]: x.dtype
Out[5]: dtype('int64')
</code></pre> <p>An <code>int64</code> is composed of 64 bits, or 8 bytes (8 bits in a byte). This means the underlying buffer of <code>x</code>, <code>x.data</code>, should be a <code>buffer</code> of length 16. We confirm that here:</p> <pre><code>In [6]: len(x.data)
Out[6]: 16
</code></pre> <p>Lastly, we can peek into the actual values of the buffer to see how Python is storing the values in memory:</p> <pre><code>In [7]: for i in range(len(x.data)):
            print ord(x.data[i])
1
0
0
0
0
0
0
0   # first 8 bytes above, second 8 below
2
0
0
0
0
0
0
0
</code></pre> <p>We use <a href="https://docs.python.org/2/library/functions.html#ord" rel="nofollow noreferrer"><code>ord</code></a> to return the value of the byte since <code>numpy</code> is storing the value as an 8 bit (1 byte) string.</p> <p>Since each of these bytes only stores 8 bits of information, none of the above values printed by the loop will ever exceed 255, <a href="https://stackoverflow.com/questions/4986486/why-does-a-byte-only-have-0-to-255">the maximum value of a byte</a>.</p> <p>The link between <code>x</code> and <code>x.data</code> is that <code>x.data</code> points to the location in memory of the values you see when you inspect <code>x</code>. <code>numpy</code> uses the <code>ndarray</code> type as an abstraction on top of this lower level storage in memory to make it easy to deal with arrays at a high level, like getting the value of <code>x</code> at index one:</p> <pre><code>In [8]: x[1]
Out[8]: 2
</code></pre> <p>instead of needing to implement the correct offsetting and binary to integer conversion yourself.</p>
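<p>To see that link directly, note that a view made from <code>x</code> shares the very same buffer (a small sketch):</p> <pre><code>import numpy as np

x = np.array([1, 2])
y = x[:]               # a view, not a copy
y[0] = 99
print(x)               # [99  2] -- both names read the same memory

# both report the same buffer address
print(x.__array_interface__['data'][0] ==
      y.__array_interface__['data'][0])   # True
</code></pre>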
python|arrays|numpy
4
7,836
44,962,794
How to Integrate Arc Lengths using python, numpy, and scipy?
<p>On another <a href="https://math.stackexchange.com/questions/433094/how-to-determine-the-arc-length-of-ellipse">thread</a>, I saw someone manage to integrate the length of an arc using Mathematica. They wrote: </p> <pre><code>In[1]:= ArcTan[3.05*Tan[5Pi/18]/2.23]
Out[1]= 1.02051

In[2]:= x=3.05 Cos[t];
In[3]:= y=2.23 Sin[t];
In[4]:= NIntegrate[Sqrt[D[x,t]^2+D[y,t]^2],{t,0,1.02051}]
Out[4]= 2.53143
</code></pre> <p>How exactly could this be transferred to Python using numpy and scipy? In particular, I am stuck on the fourth input in his code, the "NIntegrate" function. Thanks for the help! </p> <p>Also, if I already have the arc length and the vertical axis length, how would I be able to reverse the program to spit out the original parameters from the known values? Thanks!</p>
<p>To my knowledge <code>scipy</code> cannot perform symbolic computations (such as symbolic differentiation). You may want to have a look at <a href="http://www.sympy.org" rel="nofollow noreferrer">http://www.sympy.org</a> for a symbolic computation package. Therefore, in the example below, I compute derivatives analytically (the <code>Dx(t)</code> and <code>Dy(t)</code> functions).</p> <pre><code>&gt;&gt;&gt; from scipy.integrate import quad &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; Dx = lambda t: -3.05 * np.sin(t) &gt;&gt;&gt; Dy = lambda t: 2.23 * np.cos(t) &gt;&gt;&gt; quad(lambda t: np.sqrt(Dx(t)**2 + Dy(t)**2), 0, 1.02051) (2.531432761012828, 2.810454936566873e-14) </code></pre> <h1>EDIT: Second part of the question - inverting the problem</h1> <p>From the fact that you know the value of the integral (arc) you can now solve for <em>one</em> of the parameters that determine the arc (semi-axes, angle, etc.) Let's assume you want to solve for the angle. Then you can use one of the non-linear solvers in <a href="https://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html" rel="nofollow noreferrer"><code>scipy</code></a>, to revert the equation <code>quad(theta) - arcval == 0</code>. You can do it like this:</p> <pre><code>&gt;&gt;&gt; from scipy.integrate import quad &gt;&gt;&gt; from scipy.optimize import broyden1 &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = 3.05 &gt;&gt;&gt; b = 2.23 &gt;&gt;&gt; Dx = lambda t: -a * np.sin(t) &gt;&gt;&gt; Dy = lambda t: b * np.cos(t) &gt;&gt;&gt; arc = lambda theta: quad(lambda t: np.sqrt(Dx(t)**2 + Dy(t)**2), 0, np.arctan((a / b) * np.tan(np.deg2rad(theta))))[0] &gt;&gt;&gt; invert = lambda arcval: float(broyden1(lambda x: arc(x) - arcval, np.rad2deg(arcval / np.sqrt((a**2 + b**2) / 2.0)))) </code></pre> <p>Then:</p> <pre><code>&gt;&gt;&gt; arc(50) 2.531419526553662 &gt;&gt;&gt; invert(arc(50)) 50.000031008458365 </code></pre>
python|numpy|scipy|automatic-ref-counting|ellipse
4
7,837
45,151,742
unable to turn a simple text file into pandas dataframe
<p>this is what my file looks like:</p> <p><code>raw_file</code> --> </p> <pre><code>'Date\tValue\tSeries\tLabel\n07/01/2007\t687392\t31537611\tThis home\n08/01/2007\t750624\t31537611\tThis home\n09/01/2007\t769358\t31537611\tThis home\n10/01/2007\t802014\t31537611\tThis home\n11/01/2007\t815973\t31537611\tThis home\n12/01/2007\t806853\t31537611\tThis home\n01/01/2008\t836318\t31537611\tThis home\n02/01/2008\t856792\t31537611\tThis home\n03/01/2008\t854411\t31537611\tThis home\n04/01/2008\t826354\t31537611\tThis home\n05/01/2008\t789017\t31537611\tThis home\n06/01/2008\t754162\t31537611\tThis home\n07/01/2008\t749522\t31537611\tThis home\n08/01/2008\t757577\t31537611\tThis home\n' </code></pre> <p><code>type(raw_file)</code> --> <code>&lt;type 'str'&gt;</code></p> <p>for some reason, <code>I can't use pd.read_csv(raw_file)</code> or I would get the error: </p> <pre><code>File "pandas\_libs\parsers.pyx", line 710, in pandas._libs.parsers.TextReader._setup_parser_source (pandas\_libs\parsers.c:8873) IOError: File Date Value Series Label 07/01/2007 687392 31537611 This home 08/01/2007 750624 31537611 This home does not exist </code></pre> <p>the best I can think of is :</p> <pre><code>for row in raw_file.split('\n'): print(row.split('\t')) </code></pre> <p>which is quite slow. is there a better way?</p>
<p>Why don't you use the <code>csv</code> module and set the delimiter to <code>\t</code>?</p> <p><a href="https://docs.python.org/3.4/library/csv.html" rel="nofollow noreferrer">https://docs.python.org/3.4/library/csv.html</a></p> <p>Note that <code>csv.reader</code> is not a context manager, and since your data is already a string you need to wrap it in a file-like object first:</p> <pre><code>import csv
import io

reader = csv.reader(io.StringIO(raw_file), delimiter='\t')
for row in reader:
    ...  # do stuff with each row
</code></pre> <p>(The same <code>StringIO</code> trick also makes your original call work: <code>pd.read_csv(io.StringIO(raw_file), sep='\t')</code>, since <code>read_csv</code> expects a path or file-like object, not the raw contents.)</p>
python|pandas|dataframe|text|error-handling
0
7,838
45,265,254
Using tf.contrib.learn to solve basic logistic classifier
<p>I am learning about tf.contrib.learn in Tensorflow, and am using a self-made exercise. The exercise is to classify three regions as follows, with x1 and x2 as inputs, and the labels are triangles/circles/crosses: <a href="https://i.stack.imgur.com/5Amou.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Amou.jpg" alt="enter image description here"></a></p> <p>My code is able to fit the data, and evaluate it. However, I cannot seem to get predictions to work. Code is below. Any ideas?</p> <pre><code>from __future__ import absolute_import from __future__ import division from __future__ import print_function import argparse import sys import tempfile from six.moves import urllib import pandas as pd import tensorflow as tf import numpy as np FLAGS = None myImportedDatax1_np = np.array([[.1],[.1],[.2],[.2],[.4],[.4],[.5],[.5],[.1],[.1],[.2],[.2]],dtype=float) myImportedDatax2_np = np.array([[.1],[.2],[.1],[.2],[.1],[.2],[.1],[.2],[.4],[.5],[.4],[.5]],dtype=float) combined_Imported_Data_x = np.append(myImportedDatax1_np, myImportedDatax2_np, axis=1) myImportedDatay_np = np.array([[0],[0],[0],[0],[1],[1],[1],[1],[2],[2],[2],[2]],dtype=int) def build_estimator(model_dir, model_type): x1 = tf.contrib.layers.real_valued_column("x1") x2 = tf.contrib.layers.real_valued_column("x2") wide_columns = [x1, x2] m = tf.contrib.learn.LinearClassifier(model_dir=model_dir, feature_columns=wide_columns) return m def input_fn(input_batch, output_batch): inputs = {"x1": tf.constant(input_batch[:,0]), "x2": tf.constant(input_batch[:,1])} label = tf.constant(output_batch) print(inputs) print(label) print(input_batch) # Returns the feature columns and the label. return inputs, label def train_and_eval(model_dir, model_type, train_steps, train_data, test_data): model_dir = tempfile.mkdtemp() if not model_dir else model_dir print("model directory = %s" % model_dir) m = build_estimator(model_dir, model_type) m.fit(input_fn=lambda: input_fn(combined_Imported_Data_x, myImportedDatay_np), steps=train_steps) results = m.evaluate(input_fn=lambda: input_fn(np.array([[.4, .1],[.4, .2]], dtype=float), np.array([[0], [0]], dtype=int)), steps=1) for key in sorted(results): print("%s: %s" % (key, results[key])) predictions = list(m.predict(input_fn=({"x1": tf.constant([[.1]]),"x2": tf.constant([[.1]])}))) # print(predictions) def main(_): train_and_eval(FLAGS.model_dir, FLAGS.model_type, FLAGS.train_steps, FLAGS.train_data, FLAGS.test_data) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.register("type", "bool", lambda v: v.lower() == "true") parser.add_argument( "--model_dir", type=str, default="", help="Base directory for output models." ) parser.add_argument( "--model_type", type=str, default="wide_n_deep", help="Valid model types: {'wide', 'deep', 'wide_n_deep'}." ) parser.add_argument( "--train_steps", type=int, default=200, help="Number of training steps." ) parser.add_argument( "--train_data", type=str, default="", help="Path to the training data." ) parser.add_argument( "--test_data", type=str, default="", help="Path to the test data." ) FLAGS, unparsed = parser.parse_known_args() tf.app.run(main=main, argv=[sys.argv[0]] + unparsed) </code></pre>
<p>To fix this concrete issue you can add the following input function which is similar to the existing one, except that it returns None as a second element in the tuple</p> <pre><code>def input_fn_predict(): inputs = {"x1": tf.constant([0.1]), "x2": tf.constant([0.2])} print(inputs) return inputs, None </code></pre> <p>In a next phase you can invoke it with:</p> <pre><code>predictions = list(m.predict(input_fn=lambda: input_fn_predict())) </code></pre> <p>And if you comment out your print, then this should work.</p>
python|tensorflow|logistic-regression
1
7,839
44,873,026
Sparse matrix with fast access
<p>While working with large SciPy CSR sparse matrices I noticed that slicing the matrix to get a single row from the matrix was very slow as it seems to make a copy.</p> <p>Is there any way to make a sparse matrix that takes a reference of the existing row instead of copying it, perhaps there is a more fitting implementation than CSR matrix?</p> <p>What I need for my implementation is fast lookup for elements and rows and fast lookup of all non zero indices of a vector. I never need to change the matrix in any way or perform other operations on the matrix.</p>
<p>You can take advantage of the CSR representation to slice the underlying arrays directly and share the data with a new CSR matrix:</p> <pre><code>mat = # some CSR matrix i = # the index of whatever row you want start, stop = mat.indptr[i], mat.indptr[i+1] noncopy_row_i = scipy.sparse.csr_matrix((mat.data[start:stop], mat.indices[start:stop], numpy.array([0, stop-start])), shape=(1, mat.shape[1])) </code></pre>
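<p>A quick check (a sketch with a random matrix) that the shared-buffer row matches an ordinary slice:</p> <pre><code>import numpy as np
import scipy.sparse

mat = scipy.sparse.random(1000, 1000, density=0.01, format='csr')
i = 5
start, stop = mat.indptr[i], mat.indptr[i+1]
row = scipy.sparse.csr_matrix(
    (mat.data[start:stop], mat.indices[start:stop],
     np.array([0, stop - start])),
    shape=(1, mat.shape[1]))

assert (row.toarray() == mat[i].toarray()).all()
</code></pre>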
python|numpy|scipy
0
7,840
57,280,472
How to plot correlation matrix/heatmap with categorical and numerical variables
<p>I have 4 variables of which 2 variables are nominal (dtype=object) and 2 are numeric(dtypes=int and float). </p> <pre><code>df.head(1) OUT: OS_type|Week_day|clicks|avg_app_speed iOS|Monday|400|3.4 </code></pre> <p>Now, I want to throw the dataframe into a seaborn heatmap visualization.</p> <pre><code>import numpy as np import seaborn as sns ax = sns.heatmap(df) </code></pre> <p>But I get an error indicating I cannot use categorical variables, only numbers. How do I process this correctly and then feed it back into the heatmap? </p>
<p>For a single heatmap you want all the association measures on a comparable scale: Pearson's r (between numerical variables) lies in [-1, 1], while the (bias-corrected) Cramér's V (between categorical variables) and the correlation ratio (between categorical and numerical variables) lie in [0, 1].</p> <p>As for creating numerical representations of categorical variables, there are a number of ways to do that:</p> <pre><code>import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv('some_source.csv')  # has categorical var 'categ_var'

# method 1: uses pandas
df['numerized1'] = df['categ_var'].astype('category').cat.codes

# method 2: uses pandas, sorts values descending by frequency
df['numerized2'] = df['categ_var'].apply(lambda x: df['categ_var'].value_counts().index.get_loc(x))

# method 3: uses sklearn, result is the same as method 1
lbl = LabelEncoder()
df['numerized3'] = lbl.fit_transform(df['categ_var'])

# method 4: uses pandas; xyz captures a list of the unique values
df['numerized4'], xyz = pd.factorize(df['categ_var'])
</code></pre>
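<p>For the heatmap you also need the association measures themselves. A minimal bias-corrected Cramér's V, for instance, can be computed from a contingency table (a sketch; the correlation ratio can be written analogously):</p> <pre><code>import numpy as np
import pandas as pd
import scipy.stats as ss

def cramers_v(x, y):
    """Bias-corrected Cramer's V between two categorical series."""
    confusion = pd.crosstab(x, y)
    chi2 = ss.chi2_contingency(confusion)[0]
    n = confusion.values.sum()
    phi2 = chi2 / n
    r, k = confusion.shape
    phi2corr = max(0, phi2 - (k - 1) * (r - 1) / (n - 1))
    rcorr = r - (r - 1) ** 2 / (n - 1)
    kcorr = k - (k - 1) ** 2 / (n - 1)
    return np.sqrt(phi2corr / min(kcorr - 1, rcorr - 1))

# e.g. cramers_v(df['OS_type'], df['Week_day'])
</code></pre>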
python|pandas|statistics|seaborn
0
7,841
57,048,157
Summing an array along different dim each time with different slice range
<p>Suppose I have an array <code>b</code> of shape <code>(3, 10, 3)</code> and another array <code>v = [8, 9, 4]</code> of shape <code>(3,)</code>, see below. For each of the 3 arrays of shape <code>(10, 3)</code> in <code>b</code>, I need to sum a number of rows as determined by <code>v</code>, i.e. for <code>i = 0, 1, 2</code> I need to get <code>np.sum(b[i, 0:v[i]], axis=0)</code>. My solution (shown below) uses a for loop which is inefficient I guess. I wonder if there is an efficient (vectorized) way to do what I have described above.</p> <p>NB: my actual arrays have more dimension, these arrays are for illustration.</p> <pre class="lang-py prettyprint-override"><code>v = np.array([8,9,4]) b = np.array([[[0., 1., 0.], [0., 0., 1.], [0., 0., 1.], [0., 0., 1.], [1., 0., 0.], [1., 0., 0.], [0., 0., 1.], [1., 0., 0.], [0., 1., 0.], [1., 0., 0.]], [[0., 0., 1.], [0., 1., 0.], [1., 0., 0.], [0., 0., 1.], [1., 0., 0.], [1., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [0., 1., 0.]], [[1., 0., 0.], [1., 0., 0.], [1., 0., 0.], [0., 0., 1.], [0., 1., 0.], [0., 1., 0.], [1., 0., 0.], [1., 0., 0.], [0., 0., 1.], [1., 0., 0.]]]) n=v.shape[0] vv=np.zeros([n, n]) for i in range(n): vv[i]=np.sum( b[i,0:v[i]],axis=0) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>vv array([[3., 1., 4.], [4., 2., 3.], [3., 0., 1.]]) </code></pre> <p><strong>Edit</strong>: Below is more an actual example of the arrays v and b. </p> <pre class="lang-py prettyprint-override"><code>v= np.random.randint(0,300, size=(32, 98,3)) b = np.zeros([98, 3, 300, 3]) for i in range(3): for j in range(98): b[j,i] = np.random.multinomial(1,[1./3, 1./3, 1./3], 300) v.shape Out[292]: (32, 98, 3) b.shape Out[293]: (98, 3, 300, 3) </code></pre> <p>I need to do the same thing as before, so the final result is an array of shape <code>(32,98,3,3)</code>. Note that I have to do the above at each iteration that is why I'm looking for an efficient implementation.</p>
<p>The following function allows for reducing a given axis with varying slices indicated by start and stop arrays. It uses <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow noreferrer"><code>np.ufunc.reduceat</code></a> under the hood together with appropriately reshaped versions of the input array and the indices. It avoids unnecessary computations but allocates an intermediary array two times the size of the final output array (the computation of the discarded values are however no-ops).</p> <pre><code>def sliced_reduce(a, i, j, ufunc, axis=None): """Reduce an array along a given axis for varying slices `a[..., i:j, ...]` where `i` and `j` are arrays themselves. Parameters ---------- a : array The array to be reduced. i : array Start indices for the reduced axis. Must have the same shape as `j`. j : array Stop indices for the reduced axis. Must have the same shape as `i`. ufunc : function The function used for reducing the indicated axis. axis : int, optional Axis to be reduced. Defaults to `len(i.shape)`. Returns ------- array Shape `i.shape + a.shape[axis+1:]`. Notes ----- The shapes of `a` and `i`, `j` must match up to the reduced axis. That means `a.shape[:axis] == i.shape[len(i.shape) - axis:]``. `i` and `j` can have additional leading dimensions and `a` can have additional trailing dimensions. """ if axis is None: axis = len(i.shape) indices = np.tile( np.repeat( np.arange(np.prod(a.shape[:axis])) * a.shape[axis], 2 # Repeat two times to have start and stop indices next to each other. ), np.prod(i.shape[:len(i.shape) - axis]) # Perform summation for each element of additional axes. ) # Add `a.shape[axis]` to account for negative indices. indices[::2] += (a.shape[axis] + i.ravel()) % a.shape[axis] indices[1::2] += (a.shape[axis] + j.ravel()) % a.shape[axis] # Now indices are sorted in ascending order but this will lead to unnecessary computation when reducing # from odd to even indices (since we're only interested in even to odd indices). # Hence we reverse the order of index pairs (need to reverse the result as well then). indices = indices.reshape(-1, 2)[::-1].ravel() result = ufunc.reduceat(a.reshape(-1, *a.shape[axis+1:]), indices)[::2] # Select only even to odd. # In case start and stop index are equal (i.e. empty slice) `reduceat` will select the element # corresponding to the start index. Need to supply the correct default value in this case. result[indices[::2] == indices[1::2]] = ufunc.reduce([]) return result[::-1].reshape(*(i.shape + a.shape[axis+1:])) # Reverse order and reshape. </code></pre> <p>For the examples in the OP it can be used in the following way:</p> <pre><code># 1. example: b = np.random.randint(0, 1000, size=(3, 10, 3)) v = np.random.randint(-9, 10, size=3) # Indexing into `b.shape[1]`. result = sliced_reduce(b, np.zeros_like(v), v, np.add) # 2. example: b = np.random.randint(0, 1000, size=(98, 3, 300, 3)) v = np.random.randint(-299, 300, size=(32, 98, 3)) # Indexing into `b.shape[2]`; one additional leading dimension for `v`. result = sliced_reduce(b, np.zeros_like(v), v, np.add, axis=2) </code></pre> <h1>Notes</h1> <ul> <li>Reversing the order of flat index pairs in order to have <code>even &lt; odd</code> and thus shortcut every second computation with a no-op doesn't seem to be a good idea (probably because the flattened array is not traversed in memory layout order anymore). 
Removing this part and using the flat indices in ascending order gives a performance increase of about 30% (also for the <a href="https://stackoverflow.com/a/57091481/3767239">perfplots</a>, though not included there).</li> </ul>
python|arrays|numpy|indexing
1
7,842
45,956,139
resetting a Tensorflow graph after OutOfRangeError when using Dataset
<p>I am trying to use <a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/data/Dataset#from_generator" rel="nofollow noreferrer">the <code>from_generator</code> interface for the Dataset API</a> to inject multiple "rounds" of input into a graph.</p> <p>On my <a href="https://gist.github.com/samwhitlock/93c955b26a329cf2e34c932abff86199" rel="nofollow noreferrer">first attempt</a>, I used the <a href="https://gist.github.com/samwhitlock/93c955b26a329cf2e34c932abff86199#file-repeated_offset_feed-py-L35" rel="nofollow noreferrer"><code>repeat()</code> function</a> to cause the generator to be run 3 consecutive times. However, the <a href="https://gist.github.com/samwhitlock/93c955b26a329cf2e34c932abff86199#file-repeated_offset_feed-py-L41" rel="nofollow noreferrer"><code>batch_join</code> call</a> with a batch size that is <em>not</em> an even multiple of the number of iterations per round (10 iterations with a batch size of 3), data from different "rounds" / "epochs" end up in the same batch (depending on the order the tensors are processed; there is some parallelism in the graph).</p> <p>On my <a href="https://gist.github.com/samwhitlock/9bbe3b91cccb1d1c565302e1e9789ef6" rel="nofollow noreferrer">second attempt</a>, I tried to re-run the iterator after each epoch was done. However, as soon as <a href="https://gist.github.com/samwhitlock/9bbe3b91cccb1d1c565302e1e9789ef6#file-dataset_feed_reset-py-L64" rel="nofollow noreferrer"><code>tf.errors.OutOfRangeError</code> is thrown</a>, all subsequent calls to <a href="https://gist.github.com/samwhitlock/9bbe3b91cccb1d1c565302e1e9789ef6#file-dataset_feed_reset-py-L59" rel="nofollow noreferrer"><code>sess.run()</code> on the output of the batch call</a> throw <code>OutOfRangeError</code> again, even after <a href="https://gist.github.com/samwhitlock/9bbe3b91cccb1d1c565302e1e9789ef6#file-dataset_feed_reset-py-L67" rel="nofollow noreferrer">rerunning the iterator's initializer</a>.</p> <p>I would like to inject multiple rounds of input in succession into a graph and not have them overlap like the first example (e.g. using <code>allow_smaller_final_batch</code> on the batching options). Some of the kernels I instantiate in my custom Tensorflow fork are very expensive to restart, e.g. <code>mmap</code>ing a file of O(10gb), so I'd like to somehow get the best of both of these worlds.</p>
<p>I think the problem stems from using <code>tf.contrib.data.Dataset</code> (which supports reinitialization) with <code>tf.train.batch_join()</code> (which uses TensorFlow queues and queue-runners, and hence does not support reinitialization).</p> <p>I'm not completely clear what your code is doing, but I think you can implement the entire pipeline as a <code>Dataset</code>. Replace the following fragment of code:</p> <pre><code>my_iterator = MyIterator(iterations=iterations)
dataset = ds.Dataset.from_generator(my_iterator,
                                    output_types=my_iterator.output_types,
                                    output_shapes=my_iterator.output_shapes)
#dataset = dataset.repeat(count=repetitions)
iterator = dataset.make_initializable_iterator()
next_elem = iterator.get_next()

#change constant to 1 or 2 or something to see that the batching is more predictable
ripple_adds = [(tf.stack((next_elem[0], next_elem[1] + constant)),)
               for constant in ripple_add_coefficients]
batch = tf.train.batch_join(ripple_adds,
                            batch_size=batch_size,
                            enqueue_many=False,
                            name="sink_queue")
</code></pre> <p>...with something like the following (note it is <code>tf.contrib.data.Dataset.from_generator</code>, not <code>tf.contrib.data.from_generator</code>):</p> <pre><code>my_iterator = MyIterator(iterations=iterations)
dataset = tf.contrib.data.Dataset.from_generator(my_iterator,
                                                 output_types=my_iterator.output_types,
                                                 output_shapes=my_iterator.output_shapes)

def ripple_add_map_func(x, y):
  return (tf.contrib.data.Dataset.range(num_ripples)
          .map(lambda r: tf.stack([x, y + r])))

dataset = dataset.flat_map(ripple_add_map_func).batch(batch_size)
iterator = dataset.make_initializable_iterator()
batch = iterator.get_next()
</code></pre>
tensorflow
2
7,843
46,121,067
Datetime upsampling
<p>I have a dataframe like such:</p> <pre><code>rows = [['bob', '01/2017', 12], ['bob', '02/2017', 14], ['bob', '03/2017', 16], ['julia', '01/2017', 18], ['julia', '02/2017', 16], ['julia', '03/2017', 24]] df = pd.DataFrame(rows, columns = ['name','date','val']) </code></pre> <p>Assuming that each month has four weeks (i will use a lookup to match month to num weeks, but for simplicity assume 4), I want to create a row for each person for each week of the month where the value is the months value divided by 4 (or n_weeks).</p> <p>I tried using <code>.resample()</code> and <code>.asfreq()</code> but they told me I needed a unique index.</p> <p>How can I do this in pandas?</p> <p><strong>EDIT</strong> </p> <p>Ok so i got this:</p> <pre><code>weekly = df.groupby('name').apply(lambda g: g.set_index('date').resample('w').pad().reset_index()).reset_index(drop=True) weekly.val/4 date name val 0 2017-01-01 bob 3 1 2017-01-08 bob 3 2 2017-01-15 bob 3 3 2017-01-22 bob 3 4 2017-01-29 bob 3 5 2017-02-05 bob 3.5 6 2017-02-12 bob 3.5 7 2017-02-19 bob 3.5 8 2017-02-26 bob 3.5 9 2017-03-05 bob 4 10 2017-01-01 julia 4.5 11 2017-01-08 julia 4.5 12 2017-01-15 julia 4.5 13 2017-01-22 julia 4.5 14 2017-01-29 julia 4.5 15 2017-02-05 julia 4 16 2017-02-12 julia 4 17 2017-02-19 julia 4 18 2017-02-26 julia 4 19 2017-03-05 julia 6 </code></pre> <p>My problem is still that it's not distributing the last month of each group.</p>
<p>So someone answered this partially but then deleted it before I could copy it, but I think i figured out what they were going for:</p> <p>So from this dataframe (created in the question)</p> <pre><code> name date val 0 bob 01/2017 12 1 bob 02/2017 14 2 bob 03/2017 16 3 julia 01/2017 18 4 julia 02/2017 16 5 julia 03/2017 24 </code></pre> <p>I can do:</p> <pre><code>from pandas.tseries.offsets import * df['date']=pd.to_datetime(df.date) min_date = df.date.min()+MonthBegin(0) max_date = df.date.max()+MonthEnd(0) dr = pd.date_range(min_date, max_date,freq='w') weekly = df.groupby('name').apply(lambda g: g.set_index('date') .reindex(dr,method='pad').reset_index()).reset_index(drop=True) </code></pre> <p>and get</p> <pre><code> index name val 0 2017-01-01 bob 12 1 2017-01-08 bob 12 2 2017-01-15 bob 12 3 2017-01-22 bob 12 4 2017-01-29 bob 12 5 2017-02-05 bob 14 6 2017-02-12 bob 14 7 2017-02-19 bob 14 8 2017-02-26 bob 14 9 2017-03-05 bob 16 10 2017-03-12 bob 16 11 2017-03-19 bob 16 12 2017-03-26 bob 16 13 2017-01-01 julia 18 14 2017-01-08 julia 18 15 2017-01-15 julia 18 16 2017-01-22 julia 18 17 2017-01-29 julia 18 18 2017-02-05 julia 16 19 2017-02-12 julia 16 20 2017-02-19 julia 16 21 2017-02-26 julia 16 22 2017-03-05 julia 24 23 2017-03-12 julia 24 24 2017-03-19 julia 24 25 2017-03-26 julia 24 </code></pre>
python|pandas|datetime|resampling
0
7,844
22,946,139
Python class to convert all tables in a database to pandas dataframes
<p>I'm trying to achieve the following. I want to create a python Class that transforms all tables in a database to pandas dataframes. </p> <p>This is how I do it, which is not very generic... </p> <pre><code>class sql2df(): def __init__(self, db, password='123',host='127.0.0.1',user='root'): self.db = db mysql_cn= MySQLdb.connect(host=host, port=3306,user=user, passwd=password, db=self.db) self.table1 = psql.frame_query('select * from table1', mysql_cn) self.table2 = psql.frame_query('select * from table2', mysql_cn) self.table3 = psql.frame_query('select * from table3', mysql_cn) </code></pre> <p>Now I can access all tables like so:</p> <pre><code>my_db = sql2df('mydb') my_db.table1 </code></pre> <p>I want something like:</p> <pre><code>class sql2df(): def __init__(self, db, password='123',host='127.0.0.1',user='root'): self.db = db mysql_cn= MySQLdb.connect(host=host, port=3306,user=user, passwd=password, db=self.db) tables = (""" SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA = '%s' """ % self.db) &lt;some kind of iteration that gives back all the tables in df as class attributes&gt; </code></pre> <p>Suggestions are most welcome...</p>
<p>I would use SQLAlchemy for this:</p> <pre><code>engine = sqlalchemy.create_engine("mysql+mysqldb://root:123@127.0.0.1/%s" % db)
</code></pre> <p>Note the <a href="http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html#database-urls" rel="nofollow">syntax</a> is dialect+driver://username:password@host:port/database.</p> <pre><code>def db_to_frames_dict(engine):
    meta = sqlalchemy.MetaData()
    meta.reflect(bind=engine)
    tables = meta.sorted_tables
    # key by t.name (a string) so attribute access below works
    return {t.name: pd.read_sql('SELECT * FROM %s' % t.name, engine.raw_connection())
            for t in tables}
# Note: frame_query is deprecated in favor of read_sql
</code></pre> <p><em>This returns a dictionary keyed by table name, but you could equally well have these as class attributes (e.g. by updating the class dict and <code>__getitem__</code>):</em></p> <pre><code>class SQLAsDataFrames:
    def __init__(self, engine):
        self.__dict__ = db_to_frames_dict(engine)  # allows .table_name access

    def __getitem__(self, key):  # allows [table_name] access
        return self.__dict__[key]
</code></pre> <p><em>In pandas 0.14 the sql code has been rewritten to take engines, and IIRC there are helpers for all tables and for reading all of a table (using <code>read_sql(table_name)</code>).</em></p>
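<p>Usage would then look like this (a sketch; the database and table names are placeholders):</p> <pre><code>engine = sqlalchemy.create_engine("mysql+mysqldb://root:123@127.0.0.1/mydb")
db = SQLAsDataFrames(engine)

df1 = db.table1      # attribute access
df2 = db['table2']   # item access
</code></pre>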
python|mysql|pandas
5
7,845
35,518,308
All possible permutations columns Pandas Dataframe within the same column
<p>I had a similar question using Postgres SQL, but I figured that this kind of task is really hard to do in Postgres, and I think python/pandas would make this a lot easier, although I still can't quite come up with the solution.</p> <p>I now have a Pandas Dataframe which looks like this:</p> <pre><code>df={'planid' : ['A', 'A', 'B', 'B', 'C', 'C'], 'x' : ['a1', 'a2', 'b1', 'b2', 'c1', 'c2']} df=pd.DataFrame(df) df planid x 0 A a1 1 A a2 2 B b1 3 B b2 4 C c1 5 C c2 </code></pre> <p>I want to get all possible permutations where planid are not equal to each other. In other words, think of each value in planid as a "bucket" and I want all possible combinations if I were to draw values from <code>x</code> from each "bucket" in <code>planid</code>. In this particular example, there are 8 total permutations {(a1, b1, c1), (a1, b2, c1), (a1, b1, c2), (a1, b2, c2), (a2, b1, c1), (a2, b2, c1), (a2, b1, c2), (a2, b2, c2)}.</p> <p>However, I want my resulting data frame to be three columns, <code>planid</code>, <code>x</code> and another column, perhaps named <code>permutation_counter</code>. The final data frame has all the different permutations labeled with <code>permutation_counter</code>. In other words, I want my final dataframe to look like</p> <pre><code> planid x permutation_counter 0 A a1 1 1 B b1 1 2 C c1 1 3 A a1 2 4 B b2 2 5 C c1 2 6 A a1 3 7 B b1 3 8 C c2 3 9 A a1 4 10 B b2 4 11 C c2 4 12 A a2 5 13 B b1 5 14 C c1 5 15 A a2 6 16 B b2 6 17 C c1 6 18 A a2 7 19 B b1 7 20 C c2 7 21 A a2 8 22 B b2 8 23 C c2 8 </code></pre> <p>Any help would be greatly appreciated!</p>
<p>I was trying to chain as many steps together as possible. Break them down to see what each step does :)</p> <pre><code>df2 = pd.DataFrame(index=pd.MultiIndex.from_product([subdf['x'] for p, subdf in df.groupby('planid')], names=df.planid.unique())).reset_index().stack().reset_index() df2.columns = ['permutation_counter', 'planid', 'x'] df2['permutation_counter'] += 1 print df2[['planid', 'x', 'permutation_counter']] planid x permutation_counter 0 A a1 1 1 B b1 1 2 C c1 1 3 A a1 2 4 B b1 2 5 C c2 2 6 A a1 3 7 B b2 3 8 C c1 3 9 A a1 4 10 B b2 4 11 C c2 4 12 A a2 5 13 B b1 5 14 C c1 5 15 A a2 6 16 B b1 6 17 C c2 6 18 A a2 7 19 B b2 7 20 C c1 7 21 A a2 8 22 B b2 8 23 C c2 8 </code></pre>
python|pandas|permutation
3
7,846
20,572,749
mapreduce to find multiple max values
<p>Trying to understand how to do this with map_reduce. Currently, I do a find to pull a whole collection into one big pandas dataframe. That df contains something like this:</p> <pre><code>project ep seq shot layers totalframes showA sh18 17120 10 cnt_chr_set 128 showA sh18 17040 70 shd_chr_set 288 showA sh18 80 460 chr_rim 131 showA sh18 17120 20 chr_vol_lgt 120 showA sh18 17120 10 set_all 128 showA sh18 17120 20 cnt_chr_set 120 showA sh18 17120 20 cnt_chr_set 130 showA sh18 17120 20 cnt_chr_set 3 showA sh18 17120 20 cnt_chr_set 1 showA sh18 17120 10 set_all_ani 128 showA sh18 17120 20 set_all_ani 120 showA sh18 17040 70 set_all 288 showA sh18 17120 10 shd_chr_set 128 showA sh18 17120 20 shd_chr_set 120 showA sh18 18150 20 chr_ben_steam 3 showA sh18 18150 20 chr_whi_steam 3 showA sh18 18150 20 chr_bil_steam 3 showA sh18 17040 70 chr_sal_steam 288 </code></pre> <p>What I actually need to do, is find the MAX totalframes for each layer of a shot. The resulting dataframe, should contain only one of each layer for a shot. eg:</p> <pre><code>showA sh18 17120 20 chr_vol_lgt 120 showA sh18 17120 20 cnt_chr_set 130 showA sh18 17120 20 set_all_ani 120 </code></pre> <p>I've been actually trying to get to this point just with pandas, but it seems like it's too much data to work with. Pulling only exactly the info I need from mongodb into the dataframe seems like the right way to go, but I don't know where to start with map_reduce.</p> <p>Pointers appreciated.</p>
<p>MapReduce is unnecessary here, most likely; just use the aggregation framework:</p> <pre><code>{
    "$group" : {
        "_id" : {
            "l": "$layers",
            "s": "$shot"
        },
        "maxframes" : {"$max" : "$totalframes"}
    }
}
</code></pre> <p>(Note the field is <code>shot</code>, singular, in your sample data.) Not sure if you care about the other fields; if so, you can add them to the "_id" grouping. You can use <code>$project</code> to rename fields in another stage, if that matters.</p>
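<p>From Python this would run through <code>pymongo</code>'s <code>aggregate()</code> (a sketch; the database and collection names are assumptions):</p> <pre><code>from pymongo import MongoClient

coll = MongoClient()['mydb']['shots']

pipeline = [
    {"$group": {
        "_id": {"l": "$layers", "s": "$shot"},
        "maxframes": {"$max": "$totalframes"},
    }}
]
results = list(coll.aggregate(pipeline))
</code></pre> <p>If you still want to work in pandas afterwards, the result can go straight into a small DataFrame with <code>pd.DataFrame(results)</code>.</p>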
mongodb|pandas|pymongo
1
7,847
66,378,763
from ._nnls import nnls ImportError: DLL load failed: The specified module could not be found
<p>While running a UNet traning code I found DLL load failed error. Here is the code:</p> <pre><code>''' import torch import scipy import albumentations as A from ._nnls import nnls from albumentations.pytorch import ToTensorV2 from tqdm import tqdm import torch.nn as nn import torch.optim as optim from unet_model import UNet from utilscar import ( load_checkpoint, save_checkpoint, get_loaders, check_accuracy, save_predictions_as_imgs, ) # Hyperparameters etc. LEARNING_RATE = 1e-4 DEVICE = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; BATCH_SIZE = 16 NUM_EPOCHS = 3 NUM_WORKERS = 2 IMAGE_HEIGHT = 160 # 1280 originally IMAGE_WIDTH = 240 # 1918 originally PIN_MEMORY = True LOAD_MODEL = False TRAIN_IMG_DIR = &quot;Dataset/train_images/&quot; TRAIN_MASK_DIR = &quot;Dataset/train_masks/&quot; VAL_IMG_DIR = &quot;Dataset/val_images/&quot; VAL_MASK_DIR = &quot;Dataset/val_masks/&quot; def train_fn(loader, model, optimizer, loss_fn, scaler): loop = tqdm(loader) for batch_idx, (data, targets) in enumerate(loop): data = data.to(device=DEVICE) targets = targets.float().unsqueeze(1).to(device=DEVICE) # forward with torch.cuda.amp.autocast(): predictions = model(data) loss = loss_fn(predictions, targets) # backward optimizer.zero_grad() scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() # update tqdm loop loop.set_postfix(loss=loss.item()) def main(): train_transform = A.Compose( [ A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH), A.Rotate(limit=35, p=1.0), A.HorizontalFlip(p=0.5), A.VerticalFlip(p=0.1), A.Normalize( mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0], max_pixel_value=255.0, ), ToTensorV2(), ], ) val_transform = A.Compose( [ A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH), A.Normalize( mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0], max_pixel_value=255.0, ), ToTensorV2(), ], ) model = UNet(in_channels=3, out_channels=1).to(DEVICE) loss_fn = nn.BCEWithLogitsLoss() optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE) train_loader, val_loader = get_loaders( TRAIN_IMG_DIR, TRAIN_MASK_DIR, VAL_IMG_DIR, VAL_MASK_DIR, BATCH_SIZE, train_transform, val_transform, NUM_WORKERS, PIN_MEMORY, ) if LOAD_MODEL: load_checkpoint(torch.load(&quot;my_checkpoint.pth.tar&quot;), model) check_accuracy(val_loader, model, device=DEVICE) scaler = torch.cuda.amp.GradScaler() for epoch in range(NUM_EPOCHS): train_fn(train_loader, model, optimizer, loss_fn, scaler) # save model checkpoint = { &quot;state_dict&quot;: model.state_dict(), &quot;optimizer&quot;:optimizer.state_dict(), } save_checkpoint(checkpoint) # check accuracy check_accuracy(val_loader, model, device=DEVICE) # print some examples to a folder save_predictions_as_imgs( val_loader, model, folder=&quot;saved_images/&quot;, device=DEVICE ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>'''</p> <p>I found this error: from ._nnls import nnls</p> <p>ImportError: DLL load failed: The specified module could not be found.</p> <p>Here I want to mention that, my python is 64 bit and all the libraries are also in 64 bit. I updated everything using conda update.</p>
<p>The below solution worked for me (note the package names are space-separated, with no commas):</p> <blockquote> <p>conda remove --force numpy scipy</p> </blockquote> <blockquote> <p>pip install -U numpy scipy</p> </blockquote> <p>Successfully installed numpy-1.19.5 scipy-1.5.4</p> <p>Reference: <a href="https://github.com/conda/conda/issues/6396#issuecomment-350254762" rel="nofollow noreferrer">https://github.com/conda/conda/issues/6396#issuecomment-350254762</a></p>
python|pytorch
1
7,848
66,570,293
How to color a dataframe to a conditional heatmap with same color across whole row based on a single column
<p>So I have a dataframe which looks like:</p> <pre><code>Target, Achieved, Goal, Remaining 10, 5, 50, 5 4, 5, 125, 0 3, 3, 100, 0 8, 2, 25, 6 </code></pre> <p>I want to display this dataframe with visible info based on colors, Under this criteria:</p> <ol> <li>If goal is achieved I just wanted row to be green regardless of how surpassed goal actually is. So the whole 2nd and 3rd rows would be the same green color</li> <li>If goal is not achieved, I want to color them based on heatmap. So here I want 4th row to be a darker shade (of lets say red) than 1st row since I am missing more on that row's goal.</li> </ol> <p>For single color, following function works perfectly: Thanks to the answer <a href="https://stackoverflow.com/questions/47469478/how-to-color-whole-rows-in-python-pandas-based-on-value">here</a></p> <pre><code>def highlight_col(x): #copy df to new - original data are not changed df = x.copy() #set by condition mask = df['Goal Completion (%)'] &gt;= 100 df.loc[mask, :] = 'background-color: lightgreen' df.loc[~mask,:] = 'background-color: pink' return df </code></pre> <p>For a simple heatmap without excluding Goal completion condition, its possible via:</p> <pre><code>df.style.background_gradient(cmap='Reds') </code></pre> <p>But it:</p> <ol> <li>Includes whole dataframe</li> <li>Color by each column separately</li> <li>Cannot exclude rows with goals achieved</li> <li>Could not be used inside my above function (tried using: <code>df.loc[~mask,:] = 'background_gradient: Reds'</code> in last line but didn't work either.</li> </ol> <p>P.S. My Dataframe isn't very large so I prefer table coloring itself from where I can select rows instead of having a whole new viz. Any suggestions bettering the situation are highly welcomed!</p> <h2>Sample output:</h2> <p><a href="https://i.stack.imgur.com/uLTkc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uLTkc.png" alt="enter image description here" /></a></p>
<p>Perhaps you are looking for something like this (using @QuangHoang methods):</p> <pre><code>import pandas as pd import numpy as np import matplotlib as mpl df = pd.read_clipboard(sep=',\s+') cmap = mpl.cm.get_cmap('RdYlGn') norm = mpl.colors.Normalize(df['Goal'].min(), 100.) def colorRow(s): return [f'background-color: {mpl.colors.to_hex(cmap(norm(s[&quot;Goal&quot;])))}' for _ in s] df.style.apply(colorRow, axis=1) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/mKU1A.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mKU1A.jpg" alt="enter image description here" /></a></p>
python|pandas|dataframe|heatmap
1
7,849
57,562,496
TypeError: 'module' object is not callable. keras
<p><strong>System information</strong><br> - Windows 10<br> - TensorFlow backend (yes / no): yes<br> - TensorFlow version: 1.14.0<br> - Keras version: 2.24<br> - Python version: 3.6<br> - CUDA/cuDNN version: 10<br> - GPU model and memory: gtx 1050 ti </p> <p><strong>Describe the current behavior</strong><br> I installed tensorflow and keras via conda. Then I tried to run this code:</p> <pre><code>import tensorflow as tf
import keras
import numpy as np

model = keras.Sequential([keras.layers(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")

x = np.array([-1, 0, 1, 2, 3, 4])
y = np.array([-3, -1, 1, 3, 5, 7])

model.fit(x, y, epochs=500)

print(model.predict([10]))
</code></pre> <p>When I run this code I get the error: </p> <pre><code>Using TensorFlow backend.
Traceback (most recent call last):
  File "C:/Users/xxx/PycharmProjects/Workspace/tensorflow/hello_world_of_nn.py", line 5, in &lt;module&gt;
    model = keras.Sequential([keras.layers(units=1, input_shape=[1])])
TypeError: 'module' object is not callable
</code></pre> <p>When I try this:<br> <code>python -c 'import keras as k; print(k.__version__)'</code></p> <p>I get the error: </p> <pre><code>C:\Users\xxx&gt;python -c 'import keras as k; print(k.__version__)'
  File "&lt;string&gt;", line 1
    'import
          ^
SyntaxError: EOL while scanning string literal
</code></pre>
<p>This should be fine:</p> <pre><code>import tensorflow as tf import keras import numpy as np model = keras.models.Sequential([keras.layers.Dense(units=1, input_shape=[1])]) model.compile(optimizer="sgd", loss="mean_squared_error") x = np.array([-1, 0, 1, 2, 3, 4]) y = np.array([-3, -1, 1, 3, 5, 7]) model.fit(x, y, epochs=500) print(model.predict([10])) </code></pre> <p>Please note the usage of <code>keras.models.Sequential</code> and <code>keras.layers.Dense</code>.</p>
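<p>The second error is unrelated to Keras: the Windows <code>cmd</code> shell does not treat single quotes as string delimiters, so the one-liner needs double quotes:</p> <pre><code>python -c &quot;import keras as k; print(k.__version__)&quot;
</code></pre>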
python|tensorflow|keras|anaconda|conda
3
7,850
57,485,856
When using np.linalg.eigvals, I am getting the first eigenvalue with negative value systematically. Why is this?
<p>I am working on an implementation of the Expectation-Maximization algorithm with missing data for mixtures of MVNs. You don't have to know anything about this algorithm to help me with my issue.</p> <p>Let's say that my dataset has shape <code>D x N</code> with <code>D = 6</code>.</p> <p>I compute the estimate of sigma (the covariance matrix). The result is a <code>6 x 6</code> matrix (everything OK).</p> <p>Then I have to compute the PDF of some samples for the distribution with the mean and covariance matrix that I just computed (estimated parameters). In order to compute the PDF, I use <code>scipy.stats.multivariate_normal</code> because it has a pdf() method.</p> <p>But I always get the following error:</p> <pre><code>ValueError: the input matrix must be positive semidefinite
</code></pre> <p>This error is due to the covariance matrix. I have checked its eigenvalues, and the first eigenvalue is always negative. Some real examples that I got:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
eigvals = np.linalg.eigvals(sigma)  # shape (6,)

'''
Three different results that I got in different executions
(the parameters are randomly generated)

[-406.73080893   94.43623712   57.06170498   73.75668673   69.21443981
   70.60878445]

[-324.74509361  104.75315794   50.43203113   67.92014983   63.35285505
   65.41071698]

[-277.14957619   98.17501755   69.8623394    54.49958827   59.4808295
   57.28174734]
'''
</code></pre> <p><strong>Does anyone know why this is happening?</strong></p> <p>I am not very familiar with eigenvalues, so I would greatly appreciate your help.</p> <h1>==================== MORE INFO ====================</h1> <p>Here I include some code showing how I compute sigma:</p> <pre class="lang-py prettyprint-override"><code># This is how I compute the covariance matrix
def estimate_sigma_with_nan(xx_est, gamma_k, mu):
    sigma = (xx_est / gamma_k) - np.dot(mu, np.transpose(mu))
    return sigma  # = (d, d)

# This is how I compute xx_est. I will include an image of the maths behind this
exp_prod = np.zeros_like(xx_est_k)
x_i_norm[m] = estimate_x_norm(x_i_norm, mu_k, sigma_k, m, o)
exp_prod[np.ix_(m, m)] = estimate_xx_norm(reshape_(x[:d, i], keep_axes=range(2)), mu_k, sigma_k, m, o)
exp_prod[np.ix_(o, o)] = np.dot(x_i_norm[o], np.transpose(x_i_norm[o]))
exp_prod[np.ix_(o, m)] = np.dot(x_i_norm[o], np.transpose(x_i_norm[m]))
exp_prod[np.ix_(m, o)] = np.dot(x_i_norm[m], np.transpose(x_i_norm[o]))

xx_est = xx_est + (exp_prod * gamma[k, i])

def estimate_xx_norm(x_i, mu_k, sigma_k, m, o):
    assert x_i.ndim == 2 and mu_k.ndim == 2 and sigma_k.ndim == 2
    est_xx = sigma_k[np.ix_(m, m)] - np.dot(np.dot(sigma_k[np.ix_(m, o)], np.linalg.inv(sigma_k[np.ix_(o, o)])), np.transpose(sigma_k[np.ix_(m, o)]))
    est_xx = est_xx + np.dot(estimate_x_norm(x_i, mu_k, sigma_k, m, o), np.transpose(estimate_x_norm(x_i, mu_k, sigma_k, m, o)))
    assert est_xx.ndim == 2
    return est_xx
</code></pre> <p>This work is based on <a href="http://mlg.eng.cam.ac.uk/zoubin/papers/nips93.pdf" rel="nofollow noreferrer">this paper</a>:</p> <p><a href="https://i.stack.imgur.com/WSnmH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WSnmH.png" alt="enter image description here"></a></p>
<p>I think the way the covariance matrix is computed is wrong. If X is an (N, m) matrix, with N the sample size and m the feature size, then</p> <pre class="lang-py prettyprint-override"><code>cov = (X - X_mean).T.dot((X - X_mean)) / (X.shape[0] - 1)
</code></pre> <p>The <code>(X.shape[0] - 1)</code> denominator is because this is a sample covariance (Bessel's correction).</p>
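<p>An eigenvalue as large as -400 points to a genuine bug in the estimator, but even a correct estimator can come out marginally non-PSD from floating-point error. A common workaround in that case (an addition of mine, not part of the original formulas) is to symmetrize the matrix and clip negative eigenvalues:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def nearest_psd(sigma, eps=1e-8):
    sigma = (sigma + sigma.T) / 2       # enforce symmetry first
    vals, vecs = np.linalg.eigh(sigma)  # eigh is for symmetric matrices
    vals = np.clip(vals, eps, None)     # floor the eigenvalues at eps
    return (vecs * vals) @ vecs.T       # rebuild V diag(vals) V^T
</code></pre>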
python|numpy|math|statistics|algebra
0
7,851
57,631,364
Concatenate multiple pandas groupby outputs
<p>I would like to make multiple <code>.groupby()</code> operations on different subsets of a given dataset and bind them all together. For example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({"ID":[1,1,2,2,2,3],"Subset":[1,1,2,2,2,3],"Value":[5,7,4,1,7,8]})
print(df)

   ID  Subset  Value
0   1       1      5
1   1       1      7
2   2       2      4
3   2       2      1
4   2       2      7
5   3       3      8
</code></pre> <p>I would then like to concatenate the following objects and store the result in a pandas data frame:</p> <pre class="lang-py prettyprint-override"><code>gr1 = df[df["Subset"] == 1].groupby(["ID","Subset"]).mean()
gr2 = df[df["Subset"] == 2].groupby(["ID","Subset"]).mean()
# Why do gr1 and gr2 have column names in different rows?
</code></pre> <p>I realize that <code>df.groupby(["ID","Subset"]).mean()</code> would give me the concatenated object I'm looking for. Just bear with me, this is a reduced example of what I'm actually dealing with.</p> <p><a href="https://stackoverflow.com/questions/10373660/converting-a-pandas-groupby-output-from-series-to-dataframe">I think the solution</a> could be to transform <code>gr1</code> and <code>gr2</code> to pandas data frames and then concatenate them like I normally would.</p> <p>In essence, my questions are the following:</p> <ol> <li>How do I convert a <code>groupby</code> result to a data frame object?</li> <li>In case this can be done without transforming the series to data frames, how do you bind two <code>groupby</code> results together and then transform that to a pandas data frame?</li> </ol> <p>PS: I come from an R background, so to me it's odd to group a data frame by something and have the output return as a different type of object (series or multi index data frame). This is part of my question too: why does <code>.groupby</code> return a series? What kind of series is this? How come a series can have multiple columns and an index?</p>
<p>The result in your example is a pandas object indexed by a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.html#pandas.MultiIndex" rel="nofollow noreferrer">MultiIndex</a>, which is what carries the two grouping levels. To return a dataframe with a single transformation function for a single value, you can use the following. Note the inclusion of <code>as_index=False</code>.</p> <hr> <pre><code>&gt;&gt;&gt; gr1 = df[df["Subset"] == 1].groupby(["ID","Subset"], as_index=False).mean()
&gt;&gt;&gt; gr1
   ID  Subset  Value
0   1       1      6
</code></pre> <p><br/> This, however, won't work if you wish to aggregate multiple functions like <a href="https://stackoverflow.com/questions/26323926/pandas-groupby-agg-how-to-return-results-without-the-multi-index">here</a>. If you wish to avoid using <code>df.groupby(["ID","Subset"]).mean()</code>, then you can use the following for your example.</p> <hr> <pre><code>&gt;&gt;&gt; gr1 = df[df["Subset"] == 1].groupby(["ID","Subset"], as_index=False).mean()
&gt;&gt;&gt; gr2 = df[df["Subset"] == 2].groupby(["ID","Subset"], as_index=False).mean()
&gt;&gt;&gt; pd.concat([gr1, gr2]).reset_index(drop=True)
   ID  Subset  Value
0   1       1      6
1   2       2      4
</code></pre> <p><br/></p> <p>If you're only concerned with dealing with a specific subset of rows, the following could be applicable, since it removes the necessity to concatenate results.</p> <hr> <pre><code>&gt;&gt;&gt; values = [1,2]
&gt;&gt;&gt; df[df['Subset'].isin(values)].groupby(["ID","Subset"], as_index=False).mean()
   ID  Subset  Value
0   1       1      6
1   2       2      4
</code></pre>
python|pandas|concatenation|pandas-groupby
1
7,852
23,950,658
Python Pandas operate on row
<p>Hi, my dataframe looks like:</p> <pre><code>Store,Dept,Date,Sales
1,1,2010-02-05,245
1,1,2010-02-12,449
1,1,2010-02-19,455
1,1,2010-02-26,154
1,1,2010-03-05,29
1,1,2010-03-12,239
1,1,2010-03-19,264
</code></pre> <p>Simply, I need to add another column called '_id' as a concatenation of Store, Dept and Date, like "1_1_2010-02-05". I assumed I could do it through <code>df['_id'] = df['Store'] + '_' + df['Dept'] + '_' + df['Date']</code>, but it turned out that I can't.</p> <p>Similarly, I also need to add a new column as the log of Sales. I tried <code>df['logSales'] = math.log(df['Sales'])</code>; again, it did not work.</p>
<p>You can first convert it to strings (the integer columns) before concatenating with <code>+</code>:</p> <pre><code>In [25]: df['id'] = df['Store'].astype(str) +'_' +df['Dept'].astype(str) +'_'+df['Date'] In [26]: df Out[26]: Store Dept Date Sales id 0 1 1 2010-02-05 245 1_1_2010-02-05 1 1 1 2010-02-12 449 1_1_2010-02-12 2 1 1 2010-02-19 455 1_1_2010-02-19 3 1 1 2010-02-26 154 1_1_2010-02-26 4 1 1 2010-03-05 29 1_1_2010-03-05 5 1 1 2010-03-12 239 1_1_2010-03-12 6 1 1 2010-03-19 264 1_1_2010-03-19 </code></pre> <p>For the <code>log</code>, you better use the <code>numpy</code> function. This is vectorized (<code>math.log</code> can only work on single scalar values):</p> <pre><code>In [34]: df['logSales'] = np.log(df['Sales']) In [35]: df Out[35]: Store Dept Date Sales id logSales 0 1 1 2010-02-05 245 1_1_2010-02-05 5.501258 1 1 1 2010-02-12 449 1_1_2010-02-12 6.107023 2 1 1 2010-02-19 455 1_1_2010-02-19 6.120297 3 1 1 2010-02-26 154 1_1_2010-02-26 5.036953 4 1 1 2010-03-05 29 1_1_2010-03-05 3.367296 5 1 1 2010-03-12 239 1_1_2010-03-12 5.476464 6 1 1 2010-03-19 264 1_1_2010-03-19 5.575949 </code></pre> <p>Summarizing the comments, for a dataframe of this size, using <code>apply</code> will not differ much in performance compared to using vectorized functions (working on the full column), but when your real dataframe becomes larger, it will.<br> Apart from that, I think the above solution is also easier syntax.</p>
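<p>For reference, the row-wise <code>apply</code> version mentioned above would look like this (same result, just not vectorized):</p> <pre><code>df['id'] = df.apply(lambda r: '{}_{}_{}'.format(r['Store'], r['Dept'], r['Date']), axis=1)
</code></pre>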
python|pandas|dataframe
3
7,853
43,510,589
How to find Specific values in Pandas Dataframe
<p>I have imported the data in csv format into pandas. Can anybody tell me how I can find the values above 280 in one of my columns and put them into another dataframe? I have done the below code so far:</p> <pre><code>import numpy as np
import pandas as pd
df = pd.read_csv('...csv')
</code></pre> <p>Part of the data looks like the attached picture: <a href="https://i.stack.imgur.com/q47pw.jpg" rel="nofollow noreferrer">enter image description here</a></p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>df1 = df[df[2] &gt; 280]
</code></pre> <p>If you also need to select only that column, add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="noreferrer"><code>loc</code></a>:</p> <pre><code>s = df.loc[df[2] &gt; 280, 2]
</code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({0:[1,2,3],
                   1:[4,5,6],
                   2:[107,800,300],
                   3:[1,3,5]})

print (df)
   0  1    2  3
0  1  4  107  1
1  2  5  800  3
2  3  6  300  5

df1 = df[df[2] &gt; 280]
print (df1)
   0  1    2  3
1  2  5  800  3
2  3  6  300  5

s = df.loc[df[2] &gt; 280, 2]
print (s)
1    800
2    300
Name: 2, dtype: int64

#one column df
df2 = df.loc[df[2] &gt; 280, [2]]
print (df2)
     2
1  800
2  300
</code></pre>
python|pandas|dataframe
5
7,854
43,547,032
Python Pandas groupby multiple counts
<p>I have a dataframe that looks like:</p> <pre><code>    id  email           domain      created_at              company
0   1   son@mail.com    old.com     2017-01-21 18:19:00     company_a
1   2   boy@mail.com    new.com     2017-01-22 01:19:00     company_b
2   3   girl@mail.com   nadda.com   2017-01-22 01:19:00     no_company
</code></pre> <p>I need to summarize the data by year, month, and whether the company column has a value other than "no_company":</p> <p>Desired output:</p> <pre><code>year    month   company         count
2017    1       has_company     2
                no_company      1
</code></pre> <p>The following works great, but gives me the count for each value in the company column:</p> <pre><code>new_df = test_df['created_at'].groupby([test_df.created_at.dt.year, test_df.created_at.dt.month, test_df.company]).agg('count')

print(new_df)
</code></pre> <p>result:</p> <pre><code>year   month   company
2017   1       company_a    1
               company_b    1
               no_company   1
</code></pre>
<p>Map a new series for <code>has_company</code>/<code>no_company</code> then <code>groupby</code>:</p> <pre><code>c = df.company.map(lambda x: x if x == 'no_company' else 'has_company') y = df.created_at.dt.year.rename('year') m = df.created_at.dt.month.rename('month') df.groupby([y, m, c]).size() year month company 2017 1 has_company 2 no_company 1 dtype: int64 </code></pre>
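<p>If you then want a flat table rather than a MultiIndexed Series, a small follow-up step (a sketch on top of the answer above) is:</p> <pre><code>out = df.groupby([y, m, c]).size().rename('count').reset_index()
</code></pre>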
python|pandas
4
7,855
43,846,481
Compute mean if two conditions are met
<p><strong>Set-up</strong></p> <p>I am scraping housing ads using Scrapy and subsequently analyse the data with pandas.</p> <p>I use pandas to compute the means and medians of several housing characteristics. </p> <p>The dataframe <code>df</code> looks like,</p> <pre><code>district | rent | rooms | …
----------------------------
North    | 200  |  3    | …
South    | 300  |  1    | …
South    | 300  |  1    | …
   ⋮         ⋮      ⋮      ⋮
</code></pre> <hr> <p><strong>Problem</strong></p> <p>I would like to compute the average rent for an <em>n</em>-room apartment per district. </p> <p>I found an answer <a href="https://stackoverflow.com/a/28236391/7326714">here</a> which brings me close, e.g.</p> <pre><code>df.loc[df['rooms'] == 1, 'rent'].mean()
</code></pre> <p>but this computes the average rent for one-bedroom apartments for the whole city. </p> <p>To do it per district, I'd like to do something like,</p> <pre><code>for d in district_set:
    df.loc[(df['rooms'] == 1) &amp; (df['district'] == d), 'rent'].mean()
</code></pre> <p>where <code>district_set</code> contains all possible districts (note the <code>&amp;</code>: pandas boolean indexing does not accept the plain <code>and</code>). </p> <p>Any suggestions?</p> <p>I'd like to obtain the following table,</p> <pre><code>district | avg rent 1R | avg rent 2R | …
----------------------------------------
North    | 200         | 400         | …
South    | 300         | 500         | …
   ⋮            ⋮             ⋮
</code></pre>
<p><code>df.groupby(['district', 'rooms'])['rent'].mean().unstack()</code> should work. <code>unstack()</code> turns the MultiIndex returned by the previous expression to a table with <code>district</code> as rows and <code>rooms</code> as the columns.</p>
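<p>A sketch of getting from there to the exact column labels in the question (assuming the sample frame above):</p> <pre><code>out = df.groupby(['district', 'rooms'])['rent'].mean().unstack()
out.columns = ['avg rent {}R'.format(r) for r in out.columns]
out = out.reset_index()
</code></pre>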
python|pandas|conditional|mean
1
7,856
72,936,816
Unable to split the column into multiple columns based on the first column value
<p>I've a data frame which contains one column. Below is an example:</p> <pre><code>Questionsbysortorder
Q1-4,Q2-3,Q3-2,Q4-3,Q5-3
Q1-1,Q2-2,Q3-1,Q4-1
Q1-5,Q2-3,Q5-3
</code></pre> <p>I'm trying to expand the column into multiple columns using the question keys already present in the row values. Below is an example of what I'm after:</p> <pre><code>Questionsbysortorder        Q1 Q2 Q3 Q4 Q5
Q1-4,Q2-3,Q3-2,Q4-3,Q5-3    4  3  2  3  3
Q1-1,Q2-2,Q3-1,Q4-1         1  2  1  1  NA
Q1-5,Q2-3,Q5-3              5  3  NA NA 3
</code></pre> <p>Below is the code which I tried, but it's returning an error:</p> <pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'Questionsbysortorder': ['Q1-4,Q2-3,Q3-2,Q4-3,Q5-3', 'Q1-1,Q2-2,Q3-1,Q4-1','Q1-5,Q2-3,Q5-3']})

df['Questionsbysortorder'] = df['Questionsbysortorder'].str.split(',')
df = df.explode('Questionsbysortorder')

df['Questionsbysortorder'] = df['Questionsbysortorder'].str.split('-')
df = df.explode('Questionsbysortorder')

df = df.set_index('Questionsbysortorder').unstack().reset_index()

df.columns = ['Questionsbysortorder', 'value']

df = df.pivot(index='Questionsbysortorder', columns='value', values='Questionsbysortorder')

df.columns.name = None

print(df)
</code></pre> <p>Error is:</p> <pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-6-30dd8b8d4f59&gt; in &lt;module&gt;()
     14 df = df.set_index('Questionsbysortorder').unstack().reset_index()
     15 
---&gt; 16 df.columns = ['Questionsbysortorder', 'value']
     17 
     18 df = df.pivot(index='Questionsbysortorder', columns='value', values='Questionsbysortorder')

4 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/internals/base.py in _validate_set_axis(self, axis, new_labels)
     56         elif new_len != old_len:
     57             raise ValueError(
---&gt; 58                 f&quot;Length mismatch: Expected axis has {old_len} elements, new &quot;
     59                 f&quot;values have {new_len} elements&quot;
     60             )

ValueError: Length mismatch: Expected axis has 3 elements, new values have 2 elements
</code></pre> <p>Can anyone please help me with this?</p>
<p>You are very close. You want to</p> <ul> <li>split by <code>','</code>,</li> <li>explode the list,</li> <li>split again by <code>'-'</code> to get the different fields</li> <li>finally pivot the data</li> </ul> <p>In code:</p> <pre><code>df.join(df.Questionsbysortorder.str.split(',')
        .explode()
        .str.split('-', expand=True)
        .set_index(0, append=True)[1]
        .unstack()
       )
</code></pre> <p>Output:</p> <pre><code>       Questionsbysortorder Q1 Q2   Q3   Q4   Q5
0  Q1-4,Q2-3,Q3-2,Q4-3,Q5-3  4  3    2    3    3
1       Q1-1,Q2-2,Q3-1,Q4-1  1  2    1    1  NaN
2            Q1-5,Q2-3,Q5-3  5  3  NaN  NaN    3
</code></pre>
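<p>One caveat (an addition, not in the original answer): the extracted Q-values are strings after the split. If you need numbers, convert them afterwards:</p> <pre><code>out = df.join(df.Questionsbysortorder.str.split(',')
              .explode()
              .str.split('-', expand=True)
              .set_index(0, append=True)[1]
              .unstack()
             )
qcols = [c for c in out.columns if c.startswith('Q')]
out[qcols] = out[qcols].apply(pd.to_numeric)
</code></pre>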
python|python-3.x|pandas|dataframe|pandas-groupby
2
7,857
73,024,958
Panda python textfile processing into xlsx
<p>I have a .txt file that looks something like this:</p> <pre><code>&lt;Location /git&gt;
    AuthType Basic
    AuthName &quot;Please enter your CVC username and password.&quot;
    AuthBasicProvider LOCAL_authfile LDAP_CVCLAB LDAP_CVC007
    AuthGroupFile /data/conf/git_group
    #Require valid-user
    Require group admins
    Require group m4rtcdd
&lt;/Location&gt;

# TEST
# SPACE

&lt;Location /git/industry2go&gt;
    AuthType Basic
    AuthName &quot;Please enter your CVC username and password.&quot;
    AuthBasicProvider LOCAL_authfile LDAP_CVCLAB LDAP_CVC007
    AuthGroupFile /data/conf/git_group
    #Require valid-user
    Require group admins
    Require group space_mobile_app
&lt;/Location&gt;

&lt;Location /git/sales2go-core&gt;
    AuthType Basic
    AuthName &quot;Please enter your CVC username and password.&quot;
    AuthBasicProvider LOCAL_authfile LDAP_CVCLAB LDAP_CVC007
    AuthGroupFile /data/conf/git_group
    #Require valid-user
    Require group admins
    Require group space_mobile_app
&lt;/Location&gt;
</code></pre> <p>I need to make a .xlsx file from this that looks like this:</p> <p><a href="https://i.stack.imgur.com/xWIIc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWIIc.png" alt=".xlsx file" /></a></p> <p>So basically what should happen:</p> <ul> <li><p>the column &quot;groupName&quot; should contain the group names taken from lines like &quot;Require group admins&quot; and &quot;Require group m4rtcdd&quot;</p> </li> <li><p>the column &quot;repoName&quot; should contain the repository name taken from the Location line: &quot;&lt;Location /git&gt;&quot; means &quot;/&quot; (root), and for example in &quot;&lt;Location /git/industry2go&gt;&quot;, &quot;industry2go&quot; is the repoName</p> </li> </ul> <p>This is my script at the moment:</p> <pre><code>import pandas as pd
from svnscripts.timestampdirectory import createdir,path_dir
import time
import os

def gitrepoaccess():
    timestr = time.strftime(&quot;%Y-%m-%d&quot;)
    pathdest = path_dir()
    df = pd.read_csv(rf&quot;{pathdest}\{timestr}-rawGitData-conf.bck.txt&quot;,sep=';',lineterminator='\n',header=None)
    # this drops lines 0-32, because the file starts with a big header chunk
    df=df.drop(df.index[range(0,32)])
    dest=createdir()
    df.to_excel(os.path.join(dest, &quot;GitRepoGroupAccess.xlsx&quot;), index=False)
    print(df)

gitrepoaccess()
</code></pre> <p>I tried using a delimiter, but that doesn't work, partly because the &quot;admins&quot; group always appears: it needs to be displayed only once at the beginning, but as you can see in the raw .txt file, it appears under each &quot;repository name&quot;.</p>
<p>If the suggestion in my comment is fine, then this is your solution.</p> <p>If it is not, please point out how you would prefer it and I will try to help you do that.</p> <p>Either way, this can get you on the right path for a vectorized solution.</p> <pre><code>def extract(gits): # get repo names in a colum gits = gits.assign(repoName= gits[0][gits[0].str.startswith(&quot;&lt;Location /git&quot;)]) # use ffill to match the reponame to the following rows of information gits[&quot;repoName&quot;] = gits.repoName.str.split(&quot;&lt;Location /git&quot;).str[-1].str[1:-1].ffill() # keep only rows with the group information gits = gits[gits[0].str.startswith(&quot;Require group&quot;)].copy() # set root repo to &quot;/&quot; gits.loc[gits.repoName.eq(&quot;&quot;), &quot;repoName&quot;] = &quot;/&quot; # keep only the group name gits[&quot;groupName&quot;] = gits[0].str.split(&quot; &quot;).str[-1] return gits[[&quot;groupName&quot;, &quot;repoName&quot;]].reset_index(drop= True) </code></pre> <p>In order for this function to work, you need a dataframe that has a row for every row in the text file, and only one column with name 0, which contains all the rows of the text file.</p> <p>I saved your sample text in a file called &quot;gits.txt&quot;. I assumed that there are no commas, which is why pd.read_csv works. You might have to use a different way to read it into the dataframe-format that I described.</p> <p>From the console:</p> <pre><code># pandas should not consider the first row a header gits = pd.read_csv(&quot;gits.txt&quot;, header= None) gits &gt;&gt;&gt; 0 0 &lt;Location /git&gt; 1 AuthType Basic 2 AuthName &quot;Please enter your CVC username and password.&quot; 3 AuthBasicProvider LOCAL_authfile LDAP_CVCLAB LDAP_CVC007 4 AuthGroupFile /data/conf/git_group 5 #Require valid-user 6 Require group admins 7 Require group m4rtcdd 8 &lt;/Location&gt; 9 # TEST 10 # SPACE 11 &lt;Location /git/industry2go&gt; 12 AuthType Basic 13 AuthName &quot;Please enter your CVC username and password.&quot; 14 AuthBasicProvider LOCAL_authfile LDAP_CVCLAB LDAP_CVC007 15 AuthGroupFile /data/conf/git_group 16 #Require valid-user 17 Require group admins 18 Require group space_mobile_app 19 &lt;/Location&gt; 20 &lt;Location /git/sales2go-core&gt; 21 AuthType Basic 22 AuthName &quot;Please enter your CVC username and password.&quot; 23 AuthBasicProvider LOCAL_authfile LDAP_CVCLAB LDAP_CVC007 24 AuthGroupFile /data/conf/git_group 25 #Require valid-user 26 Require group admins 27 Require group space_mobile_app 28 &lt;/Location&gt; done = extract(gits) done &gt;&gt;&gt; groupName repoName 0 admins / 1 m4rtcdd / 2 admins industry2go 3 space_mobile_app industry2go 4 admins sales2go-core 5 space_mobile_app sales2go-core </code></pre> <p>The output is just a dataframe, save it however you like.</p>
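<p>To then produce the .xlsx the question asks for, the result is an ordinary dataframe, so (reusing your existing <code>createdir()</code> destination, which is an assumption based on your script) something like:</p> <pre><code>done = extract(gits)
dest = createdir()
done.to_excel(os.path.join(dest, &quot;GitRepoGroupAccess.xlsx&quot;), index=False)
</code></pre>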
python|pandas|dataframe|numpy|pycharm
2
7,858
73,130,599
Tensorflow Fused conv implementation does not support grouped convolutions
<p>I trained a neural network on color images (3 channels) and it worked. Now I want to try it in grayscale to see if I can improve accuracy. Here is the code:</p> <pre><code>train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=True)

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    color_mode='grayscale',
    class_mode='binary',
    shuffle=True)

model = tf.keras.Sequential()
input_shape = (img_width, img_height, 1)
model.add(Conv2D(32, 2, input_shape=input_shape, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(32, 2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(64, 2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Flatten())
model.add(Dense(128))
model.add(Dense(len(classes)))

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_generator, validation_data=validation_generator, epochs=EPOCHS)
</code></pre> <p>You can see that I have changed the input_shape to have 1 single channel for grayscale. I'm getting an error:</p> <p><code>Node: 'sequential_26/conv2d_68/Relu' Fused conv implementation does not support grouped convolutions for now. [[{{node sequential_26/conv2d_68/Relu}}]] [Op:__inference_train_function_48830]</code></p> <p>Any idea how to fix this?</p>
<p>Your <code>train_generator</code> is missing <code>color_mode='grayscale'</code> (note the keyword is <code>color_mode</code>, exactly as in your validation generator). Try:</p> <pre><code>train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    color_mode='grayscale',
    shuffle=True)
</code></pre>
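<p>A quick sanity check that the generator now yields single-channel batches (a sketch; the exact shape depends on your settings):</p> <pre><code>x, y = next(train_generator)
print(x.shape)  # the last dimension should now be 1
</code></pre>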
python|tensorflow|keras|deep-learning
2
7,859
73,004,748
TypeError: unsupported operand type(s) for *: 'builtin_function_or_method' and 'int' when multiplying using asterisk (*)
<p>In the <code>HuBMAPDataset</code> class, I set <code>sz</code> using <code>self.sz = reduce*sz</code>. My code raised unsupported operand type error.</p> <pre><code>reduce = 4 class HuBMAPDataset: def __init__(self, idx, fold, train=True, tfms=None): self.data = rasterio.open(os.path.join(DATA,idx+'.tiff'), transform = identity,num_threads='all_cpus') ids = pd.read_csv(LABELS).id.astype(str).values kf = KFold(n_splits=nfolds,random_state=SEED,shuffle=True) ids = set(ids[list(kf.split(ids))[fold][0 if train else 1]]) self.fnames = [fname for fname in os.listdir(TRAIN) if fname.split('_')[0] in ids] self.train = train self.tfms = tfms if self.data.count != 3: subdatasets = self.data.subdatasets self.layers = [] if len(subdatasets) &gt; 0: for i, subdataset in enumerate(subdatasets, 0): self.layers.append(rasterio.open(subdataset)) self.shape = self.data.shape self.reduce = reduce self.sz = reduce*sz self.pad0 = (self.sz - self.shape[0]%self.sz)%self.sz self.pad1 = (self.sz - self.shape[1]%self.sz)%self.sz self.n0max = (self.shape[0] + self.pad0)//self.sz self.n1max = (self.shape[1] + self.pad1)//self.sz def __len__(self): return len(self.fnames), self.n0max*self.n1max def __getitem__(self, idx): fname = self.fnames[idx] img = cv2.cvtColor(cv2.imread(os.path.join(TRAIN,fname)), cv2.COLOR_BGR2RGB) mask = cv2.imread(os.path.join(MASKS,fname),cv2.IMREAD_GRAYSCALE) if self.tfms is not None: augmented = self.tfms(image=img,mask=mask) img,mask = augmented['image'],augmented['mask'] n0,n1 = idx//self.n1max, idx%self.n1max x0,y0 = -self.pad0//2 + n0*self.sz, -self.pad1//2 + n1*self.sz p00,p01 = max(0,x0), min(x0+self.sz,self.shape[0]) p10,p11 = max(0,y0), min(y0+self.sz,self.shape[1]) img = np.zeros((self.sz,self.sz,3),np.uint8) if self.data.count == 3: img[(p00-x0):(p01-x0),(p10-y0):(p11-y0)] = np.moveaxis(self.data.read([1,2,3], window=Window.from_slices((p00,p01),(p10,p11))), 0, -1) else: for i,layer in enumerate(self.layers): img[(p00-x0):(p01-x0),(p10-y0):(p11-y0),i] =\ layer.read(1,window=Window.from_slices((p00,p01),(p10,p11))) if self.reduce != 1: img = cv2.resize(img,(self.sz//reduce,self.sz//reduce), interpolation = cv2.INTER_AREA) #check for empty imges hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) h,s,v = cv2.split(hsv) if (s&gt;s_th).sum() &lt;= p_th or img.sum() &lt;= p_th: #images with -1 will be skipped return img2tensor((img/255.0 - mean)/std), -1, img2tensor(mask) else: return img2tensor((img/255.0 - mean)/std), idx, img2tensor(mask) for fold in range(nfolds): for idx,row in tqdm(df_sample.iterrows(),total=len(df_sample)): idx = str(row['id']) ds_t = HuBMAPDataset(idx, fold=fold, train=True, tfms=get_aug()) </code></pre> <p>Traceback:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_33/1067530021.py in &lt;module&gt; 3 for idx,row in tqdm(df_sample.iterrows(),total=len(df_sample)): 4 idx = str(row['id']) ----&gt; 5 ds_t = HuBMAPDataset(idx, fold=fold, train=True, tfms=get_aug()) 6 ds_v = HuBMAPDataset(idx, fold=fold, train=False) 7 data = ImageDataLoaders.from_dsets(ds_t,ds_v,bs=bs,num_workers=NUM_WORKERS,pin_memory=True).cuda() /tmp/ipykernel_33/3421423662.py in __init__(self, idx, fold, train, tfms) 25 self.shape = self.data.shape 26 self.reduce = reduce ---&gt; 27 self.sz = reduce*sz 28 self.pad0 = (self.sz - self.shape[0]%self.sz)%self.sz 29 self.pad1 = (self.sz - self.shape[1]%self.sz)%self.sz TypeError: unsupported operand type(s) for *: 'builtin_function_or_method' and 'int' 
</code></pre> <p><code>df_sample</code> (as dictionary)</p> <pre><code>df_sample.to_dict() {'id': {0: 10078}, 'rle': {0: '12 34'}} </code></pre>
<p><code>reduce</code> is a function from functools. You are multiplying a function by an int; that's why you got the error (your <code>reduce = 4</code> assignment has evidently been shadowed by a functools import somewhere in the session). You can reproduce the error with the following simple code:</p> <pre><code>from functools import reduce
c = reduce * 3
</code></pre>
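<p>The simplest fix is to give the variable a name that cannot collide with the function; a sketch (<code>REDUCE_FACTOR</code> is just an illustrative name):</p> <pre><code>REDUCE_FACTOR = 4  # instead of reduce = 4

# inside __init__:
self.reduce = REDUCE_FACTOR
self.sz = REDUCE_FACTOR * sz
</code></pre>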
python|algorithm|numpy|machine-learning
0
7,860
70,555,307
find words out of vocabulary
<p>I have some texts in a pandas dataframe <code>df['mytext']</code>. I have also got a vocabulary <code>vocab</code> (a list of words).</p> <p>I am trying to list and count the words out of vocabulary for each document.</p> <p>I have tried the following, but it is quite slow for 10k documents.</p> <p>How can I quickly and efficiently quantify the out-of-vocabulary tokens in a collection of texts in pandas?</p> <pre><code>OOV_text=df['mytext'].apply(lambda s: ' '.join([ word for word in s.split() if (word not in vocab) ]))
OOV=df['mytext'].apply(lambda s: sum([(word in vocab) for word in s.split()])/len(s.split()))
</code></pre> <p>df.shape[0] is quite large; len(vocab) is large; len(unique words in df.mytext) &lt;&lt; len(vocab)</p>
<p>You can use</p> <pre><code>from collections import Counter

vocab = ['word1', 'word2', 'word3', '2021']
df['mytext_list'] = df['mytext'].str.split(' ')

def vocab_count(words):
    cnt = Counter(words)  # build the Counter once per row
    return sum(cnt[w] for w in vocab)

df['count'] = df['mytext_list'].apply(vocab_count)
</code></pre> <p>It should be faster than your solution because it uses pandas vectorization for the split and then a single Counter per row.</p> <p>You can skip saving the helper column "mytext_list" to save memory usage.</p>
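<p>Since the question asks for the out-of-vocabulary count specifically, a set gives O(1) membership tests (a sketch, assuming whitespace tokenization as in the question):</p> <pre><code>vocab_set = set(vocab)
df['oov_count'] = df['mytext'].str.split().apply(
    lambda ws: sum(w not in vocab_set for w in ws))
</code></pre>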
python|pandas|nlp|vocabulary
1
7,861
70,497,233
How to transform 2D array using values as another array's indices?
<p>I have a 2D array with indices referring to another array:</p> <pre><code>indexarray = np.array([[0,0,1,1],
                       [1,2,3,0]])
</code></pre> <p>The array which these indices refer to is:</p> <pre><code>valuearray = np.array([8,7,6,5])
</code></pre> <p>I would like to get an array with the numbers from <code>valuearray</code> in the shape of <code>indexarray</code>; each item in this array corresponds to the value in <code>valuearray</code> at the index found in the same location of <code>indexarray</code>, i.e.:</p> <pre><code>targetarray = np.array([[8,8,7,7],
                        [7,6,5,8]])
</code></pre> <p>How can I do this without iteration?</p> <hr /> <p>What I do now to achieve this is:</p> <pre><code>np.apply_along_axis(func1d = lambda row: valuearray[row], axis=0, arr = indexarray)
</code></pre> <p>If there is a simpler way, I am interested.</p>
<p>One way is to flatten the index array and get the values and reshape it back as follows.</p> <pre><code>targetarray = valuearray[indexarray.flatten()].reshape(indexarray.shape) </code></pre>
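<p>In fact NumPy's fancy indexing already broadcasts over the shape of the index array, so the flatten/reshape round trip can be dropped entirely:</p> <pre><code>targetarray = valuearray[indexarray]
</code></pre>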
python|numpy
1
7,862
70,395,851
Showing gps points on altair world map
<p>I'm building (for learning purposes) a Python program that extracts GPS data from *.jpg files in a directory and displays the GPS coordinates from the photos on a world map.</p> <p>I managed to extract the latitude and longitude values into a pandas dataframe and display them on an Altair world map.</p> <p>But my problem is that the points don't show up in the right GPS positions. Also, when I pan the map, the points move around while the map itself doesn't move.</p> <p>How can I get the points fixed on the world map in the right GPS positions?</p> <p>This is my code for the Altair map:</p> <pre><code>world = alt.topo_feature(data.world_110m.url, feature='countries')

# US states background
background = alt.Chart(world).mark_geoshape(
    fill='lightgray',
    stroke='white'
).properties(
    width=1300,
    height=900
)

# airport positions on background
points = alt.Chart(df).mark_circle().encode(
    x='latitude',
    y='longitude',
    color=alt.value('steelblue'),
    tooltip=['naam']
).interactive()

st.altair_chart(background + points, use_container_width=True)
</code></pre> <p>This is a part of my dataframe:</p> <pre><code>   naam    latitude    longitude
1  photo1  46.073822   6.109725
2  photo2  46.123119   6.205319
3  photo3  46.1232     6.205728
</code></pre> <p><a href="https://i.stack.imgur.com/35YAR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35YAR.png" alt="This is my map-display in streamlit with gps-data individual photos" /></a></p> <p>Thanks in advance!!</p>
<p>Solved in the comments, adding as an answer to mark this as solved:</p> <blockquote> <p>Try changing x='latitude', y='longitude' to latitude='latitude', longitude='longitude'. This may require you to delete .interactive(), because Altair produces Vega-Lite, which apparently does not support geographic chart interactivity: github.com/altair-viz/altair/issues/1555</p> </blockquote>
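<p>For reference, the corrected encoding would look something like this (a sketch; geographic charts use the dedicated <code>latitude</code>/<code>longitude</code> channels instead of <code>x</code>/<code>y</code>, which is also why <code>.interactive()</code> has to go):</p> <pre><code>points = alt.Chart(df).mark_circle().encode(
    longitude='longitude:Q',
    latitude='latitude:Q',
    color=alt.value('steelblue'),
    tooltip=['naam']
)

st.altair_chart(background + points, use_container_width=True)
</code></pre>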
python|pandas|altair
0
7,863
70,626,934
How to compute pandas dataframe of pairwise string-similarities in parallel using dask?
<p>I have a list of strings, and I want to build a dataframe which gives the Jaro-Winkler normalized similarity between each pair of strings. There is a function in the package <a href="https://github.com/life4/textdistance" rel="nofollow noreferrer">textdistance</a> to compute it. Loosely, similar strings have a score close to 1, and different strings have a score close to 0. My actual list of strings has about 4000 strings, so there are nearly 8 million pairs of strings to compare.</p> <p>This seems like an &quot;embarassingly parallel&quot; computation to me. Is there some way to do this in <code>dask</code>? A bonus would be to have a <code>tqdm</code>-style progressbar with an ETA.</p> <pre><code>from itertools import combinations import pandas as pd import textdistance strings = [&quot;adsf&quot;, &quot;apple&quot;, &quot;apples&quot;, &quot;banana&quot;] def similarity(left: str, right: str) -&gt; float: &quot;&quot;&quot; Computes Jaro-Winkler normalized_similarity, which is between 0 and 1. More similar strings have a score closer to 1. &quot;&quot;&quot; return textdistance.jaro_winkler.normalized_similarity(left, right) generator = ( (left, right, similarity(left, right)) for left, right in combinations(strings, 2) ) df = pd.DataFrame(generator, columns=[&quot;left&quot;, &quot;right&quot;, &quot;sim_score&quot;]) </code></pre> <p>Sample of <code>df</code>:</p> <pre><code> left right sim_score 0 adsf apple 0.483333 1 adsf apples 0.472222 2 adsf banana 0.472222 3 apple apples 0.966667 4 apple banana 0.455556 5 apples banana 0.444444 </code></pre>
<p>There's lots of ways, but here's another one, using dask.dataframes...</p> <pre class="lang-py prettyprint-override"><code>In [1]: import dask, dask.distributed, dask.dataframe as dd, pandas as pd, itertools In [2]: client = dask.distributed.Client() In [3]: futures = client.scatter(strings) In [4]: def similarity(df, left_col: str, right_col: str) -&gt; pd.Series: ...: &quot;&quot;&quot; ...: Return 0 or 1 if first char of strings are different or equal. ...: &quot;&quot;&quot; ...: return ( ...: df[left_col].str[0] == df[right_col].str[0] ...: ).astype('int64') ...: In [5]: def make_df(s, others): ...: return pd.DataFrame({ ...: 'left': [s]*len(others), ...: 'right': others, ...: }) ...: In [6]: dfs = client.map(make_df, futures, others=strings) </code></pre> <pre class="lang-py prettyprint-override"><code>In [7]: df = dd.from_delayed(dfs) </code></pre> <p>The function can then be applied using <a href="https://docs.dask.org/en/stable/generated/dask.dataframe.DataFrame.map_partitions.html" rel="nofollow noreferrer"><code>ddf.map_partitions</code></a></p> <pre class="lang-py prettyprint-override"><code>In [8]: df['similarity'] = df.map_partitions( ...: similarity, left_col='left', right_col='right', meta='int64', ...: ) </code></pre> <p>The result is a dask.dataframe with the properties you want</p> <pre class="lang-py prettyprint-override"><code>In [9]: df Out[9]: Dask DataFrame Structure: left right similarity npartitions=4 object object int64 ... ... ... ... ... ... ... ... ... ... ... ... Dask Name: assign, 16 tasks In [10]: df.compute() Out[10]: left right similarity 0 adsf adsf 1 1 adsf apple 1 2 adsf apples 1 3 adsf banana 0 0 apple adsf 1 1 apple apple 1 2 apple apples 1 3 apple banana 0 0 apples adsf 1 1 apples apple 1 2 apples apples 1 3 apples banana 0 0 banana adsf 0 1 banana apple 0 2 banana apples 0 3 banana banana 1 </code></pre>
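<p>For the tqdm-style progress bar asked about in the question, <code>dask.distributed</code> ships one (an addition to the answer above):</p> <pre class="lang-py prettyprint-override"><code>from dask.distributed import progress

fut = client.compute(df)  # kick off the computation asynchronously
progress(fut)             # live progress bar while it runs
result = fut.result()     # the finished pandas DataFrame
</code></pre>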
python|pandas|dask
0
7,864
42,945,649
Removing subarray from array
<p>I have two numpy arrays, A and B. </p> <pre><code>A = [ 1, 2, 5, 9.8, 55, 3]
B = [ 3, 4]
</code></pre> <p>Now, how do I remove A[3] and A[4] (that is, the elements at whatever indices array B contains) and then put them at the start of array A? So I want my output to be </p> <pre><code>A = [9.8, 55, 1, 2, 5, 3]
</code></pre> <p>Note: both A and B are numpy arrays. </p> <p>Any help is highly appreciated. </p>
<p>One approach with <code>boolean-indexing</code> would be -</p> <pre><code>mask = np.in1d(np.arange(A.size),B) out = np.r_[A[mask], A[~mask]] </code></pre> <p>Sample run -</p> <pre><code>In [26]: A = np.array([ 1, 2, 5, 9.8, 55, 3]) In [27]: B = np.array([ 3, 4]) In [28]: mask = np.in1d(np.arange(A.size),B) In [29]: np.r_[A[mask], A[~mask]] Out[29]: array([ 9.8, 55. , 1. , 2. , 5. , 3. ]) </code></pre> <p>Another with integer-indexing -</p> <pre><code>idx = np.setdiff1d(np.arange(A.size),B) out = np.r_[A[B], A[idx]] </code></pre> <p>Sample run -</p> <pre><code>In [36]: idx = np.setdiff1d(np.arange(A.size),B) In [37]: np.r_[A[B], A[idx]] Out[37]: array([ 9.8, 55. , 1. , 2. , 5. , 3. ]) </code></pre>
python|numpy
1
7,865
42,739,557
Code runs much faster in C than in NumPy
<p>I wrote physics simulation code in Python using numpy and then rewrote it in C++. In C++ it takes only 0.5 seconds, while in Python it takes around 40 s. Can someone please help me find what I did horribly wrong?</p> <pre><code>import numpy as np

def myFunc(i):
    uH = np.copy(u)
    for j in range(1, xmax-1):
        u[i][j] = a*uH[i][j-1]+(1-2*a)*uH[i][j]+a*uH[i][j+1]
    u[i][0] = u[i][0]/b
    for x in range(1, xmax):
        u[i][x] = (u[i][x]+a*u[i][x-1])/(b+a*c[x-1])
    for x in range(xmax-2,-1,-1):
        u[i][x]=u[i][x]-c[x]*u[i][x+1]

xmax = 101
tmax = 2000
#All other variables are defined here but I removed that for visibility

uH = np.zeros((xmax,xmax))
u = np.zeros((xmax,xmax))
c = np.full(xmax,-a)

uH[50][50] = 10000
for t in range(1, tmax):
    if t % 2 == 0:
        for i in range(0,xmax):
            myFunc(i)
    else:
        for i in range(0, xmax):
            myFunc(i)
</code></pre> <p>In case someone wants to run it, here is the whole code: <a href="http://pastebin.com/20ZSpBqQ" rel="nofollow noreferrer">http://pastebin.com/20ZSpBqQ</a> EDIT: all variables are defined in the whole code, which can be found on pastebin. Sorry for the confusion; I thought removing all the clutter would make the code easier to understand.</p>
<p>Fundamentally, C is a compiled language while Python is an interpreted one: speed against ease of use.</p> <p>Numpy can fill the gap, but you must avoid for-loops over individual items, which often takes some skill.</p> <p>For example,</p> <pre><code>def block1():
    for i in range(xmax):
        for j in range(1, xmax-1):
            u[i][j] = a*uH[i][j-1]+(1-2*a)*uH[i][j]+a*uH[i][j+1]
</code></pre> <p>in numpy style is:</p> <pre><code>def block2():
    u[:,1:-1] += a*np.diff(u,2)
</code></pre> <p>which is shorter and faster (and easier to read and understand?):</p> <pre><code>In [37]: %timeit block1()
10 loops, best of 3: 25.8 ms per loop

In [38]: %timeit block2()
10000 loops, best of 3: 123 µs per loop
</code></pre> <p>Finally, you can speed up numpy code with Just-In-Time compilation, which Numba provides. Just change the beginning of your code like:</p> <pre><code>import numba

@numba.jit
def myFunc(u,i):
    ...
</code></pre> <p>and the calls to <code>myFunc(u,i)</code> at the end of the script (<code>u</code> must be a parameter for automatic type determination) and you will reach the same performance (0.4 s on my PC).</p>
python|numpy
2
7,866
27,098,762
How to create a data frame from a deeply nested dictionary?
<p>I have a nested dictionary that has 5 levels <code>masterdict = {a : {b: {c: {d : { e: }}}}}</code> and I am trying to create a flat data frame. </p> <p>When I run the following code: </p> <pre><code>masterDF = pd.DataFrame() for a in masterdict: for b in masterdict[a]: for c in masterdict[a][b]: for d in masterdict[a][b][c]: eDF = pd.DataFrame(masterdict[a][b][c][d]) eDF['level4'] = str(d) eDF['level3'] = str(c) eDF['level2'] = str(b) eDF['level1'] = str(a) masterDF = masterDF.append(eDF) </code></pre> <p>I get the following error:</p> <pre><code>--------------------------------------------------------------------------- PandasError Traceback (most recent call last) &lt;ipython-input-13-cc9930935c64&gt; in &lt;module&gt;() 5 for c in masterdict[a][b]: 6 for d in masterdict[a][b][c]: ----&gt; 7 eDF = pd.DataFrame(masterdict[a][b][c][d]) 8 eDF['level4'] = str(d) 9 eDF['level3'] = str(c) C:\Anaconda\lib\site-packages\pandas\core\frame.pyc in __init__(self, data, index, columns, dtype, copy) 275 copy=False) 276 else: --&gt; 277 raise PandasError('DataFrame constructor not properly called!') 278 279 NDFrame.__init__(self, mgr, fastpath=True) PandasError: DataFrame constructor not properly called! </code></pre> <p>I am not sure if this is the right code to flatten a nested dictionary but if it is, could you please suggest what I might be doing wrong?</p> <p>Thank you.</p>
<p>I wrote this to flatten nested dictionaries; it might help you too. <code>pk</code> accumulates the previous keys joined with <code>'</code>, and <code>a</code> becomes a flat list of the leaf values (with the matching key paths in <code>heading</code>).</p> <pre><code>a=[]
heading=[]
def flat_dict(dic,pk=None):
    for k,v in dic.items():
        if isinstance(v, dict):
            try:
                flat_dict(v,pk+"'"+k)   # descend, extending the key path
            except:
                flat_dict(v,k)          # pk is None at the top level
        else:
            a.append(v)                 # leaf value
            try:
                heading.append(pk+"'"+k)
            except:
                heading.append(k)
</code></pre>
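<p>A usage sketch for turning the collected lists into a flat dataframe (note that <code>a</code> and <code>heading</code> are module-level, so reset them before each call):</p> <pre><code>a[:] = []
heading[:] = []
flat_dict(masterdict)
flat_df = pd.DataFrame({'path': heading, 'value': a})
</code></pre>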
python|dictionary|pandas
0
7,867
14,591,855
pandas HDFStore - how to reopen?
<p>I created a file by using:</p> <pre><code>store = pd.HDFStore('/home/.../data.h5') </code></pre> <p>and stored some tables using:</p> <pre><code>store['firstSet'] = df1 store.close() </code></pre> <p>I closed down python and reopened in a fresh environment.</p> <p>How do I reopen this file?</p> <p>When I go:</p> <pre><code>store = pd.HDFStore('/home/.../data.h5') </code></pre> <p>I get the following error.</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/io/pytables.py", line 207, in __init__ self.open(mode=mode, warn=False) File "/misc/apps/linux/python-2.6.1/lib/python2.6/site-packages/pandas-0.10.0-py2.6-linux-x86_64.egg/pandas/io/pytables.py", line 302, in open self.handle = _tables().openFile(self.path, self.mode) File "/apps/linux/python-2.6.1/lib/python2.6/site-packages/tables/file.py", line 230, in openFile return File(filename, mode, title, rootUEP, filters, **kwargs) File "/apps/linux/python-2.6.1/lib/python2.6/site-packages/tables/file.py", line 495, in __init__ self._g_new(filename, mode, **params) File "hdf5Extension.pyx", line 317, in tables.hdf5Extension.File._g_new (tables/hdf5Extension.c:3039) tables.exceptions.HDF5ExtError: HDF5 error back trace File "H5F.c", line 1582, in H5Fopen unable to open file File "H5F.c", line 1373, in H5F_open unable to read superblock File "H5Fsuper.c", line 334, in H5F_super_read unable to find file signature File "H5Fsuper.c", line 155, in H5F_locate_signature unable to find a valid file signature End of HDF5 error back trace Unable to open/create file '/home/.../data.h5' </code></pre> <p>What am I doing wrong here? Thank you.</p>
<p>In my hands, the following approach works best:</p> <pre><code>df = pd.DataFrame(...)

# write
with pd.HDFStore('test.h5', mode='w') as store:
    store.append('df', df, data_columns=df.columns, format='table')

# read
with pd.HDFStore('test.h5', mode='r') as newstore:
    df_restored = newstore.select('df')
</code></pre>
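<p>If reopening still fails with the "unable to find a valid file signature" trace from the question, the file on disk is most likely empty or not a valid HDF5 file at all, i.e. it was never successfully written. Once a store does open, a quick way to see what it contains:</p> <pre><code>with pd.HDFStore('test.h5', mode='r') as store:
    print(store.keys())
</code></pre>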
python|pandas
10
7,868
30,413,714
python pandas time series select day of year
<p>I want to select data from a dataframe for a particular day of the year. Here is what I have so far as a minimal example.</p> <pre><code>import pandas as pd from datetime import datetime from datetime import timedelta import numpy.random as npr rng = pd.date_range('1/1/1990', periods=365*10, freq='D') df1 = pd.DataFrame(npr.randn(len(rng)), index=rng) print df1 </code></pre> <p>That generates:</p> <pre><code> 0 1990-01-01 -0.032601 1990-01-02 -0.496401 1990-01-03 0.444490 </code></pre> <p>etc. Now I make a list of dates that I want to extract. I have used this before in pandas, but I suspect this is not the best way to get values for a particular date. Anyway,</p> <pre><code>td = timedelta(days=31) dr = pd.date_range(datetime(1990,12,31)+td,datetime(2000,12,31), freq=pd.DateOffset(months=12, days=0)) print dr </code></pre> <p>This, of course, generates:</p> <pre><code>DatetimeIndex(['1991-01-31', '1992-01-31', '1993-01-31', '1994-01-31', '1995-01-31', '1996-01-31', '1997-01-31', '1998-01-31', '1999-01-31', '2000-01-31'], dtype='datetime64[ns]', freq='&lt;DateOffset: kwds={'months': 12, 'days': 0}&gt;', tz=None) </code></pre> <p>When I try to slice the dataframe by the list of dates, I generate an error:</p> <pre><code>monthly_df1 = df1[dr] </code></pre> <p>Output:</p> <pre><code>KeyError: "['1991-01-30T16:00:00.000000000-0800' '1992-01-30T16:00:00.000000000-0800'\n '1993-01-30T16:00:00.000000000-0800' '1994-01-30T16:00:00.000000000-0800'\n '1995-01-30T16:00:00.000000000-0800' '1996-01-30T16:00:00.000000000-0800'\n '1997-01-30T16:00:00.000000000-0800' '1998-01-30T16:00:00.000000000-0800'\n '1999-01-30T16:00:00.000000000-0800' '2000-01-30T16:00:00.000000000-0800'] not in index" </code></pre> <p>I think that I have two fundamental problems here: (1) there is a better way to extract yearly data for a particular date; and (2) the time series in the dataframe and date_range list are different. I would appreciate information on both problems. Thanks, community.</p>
<p>You could use <code>.ix</code> to filter <code>dr</code> dates from <code>df1</code></p> <pre><code>In [107]: df1.ix[dr] Out[107]: 0 1991-01-31 -1.239096 1992-01-31 0.153730 1993-01-31 -0.685778 1994-01-31 0.132170 1995-01-31 0.154965 1996-01-31 1.800437 1997-01-31 2.725209 1998-01-31 -0.084751 1999-01-31 1.604511 2000-01-31 NaN </code></pre> <p>Even <code>df1.loc[dr]</code> works.</p> <hr> <p>Also, for this case, you can just pass these conditions to extract the dates</p> <pre><code>In [108]: df1[(df1.index.month==1) &amp; (df1.index.day==31)] Out[108]: 0 1990-01-31 -0.362652 1991-01-31 -1.239096 1992-01-31 0.153730 1993-01-31 -0.685778 1994-01-31 0.132170 1995-01-31 0.154965 1996-01-31 1.800437 1997-01-31 2.725209 1998-01-31 -0.084751 1999-01-31 1.604511 </code></pre>
python|date|pandas|dataframe
3
7,869
26,783,719
Efficiently get indices of histogram bins in Python
<h1>Short Question</h1> <p>I have a large 10000x10000 elements image, which I bin into a few hundred different sectors/bins. I then need to perform some iterative calculation on the values contained within each bin.</p> <p>How do I extract the indices of each bin to efficiently perform my calculation using the bins values?</p> <p>What I am looking for is a solution which avoids the bottleneck of having to select every time <code>ind == j</code> from my large array. Is there a way to obtain directly, in one go, the indices of the elements belonging to every bin?</p> <h1>Detailed Explanation</h1> <h2>1. Straightforward Solution</h2> <p>One way to achieve what I need is to use code like the following (see e.g. <a href="https://stackoverflow.com/a/6163403/3753826">THIS</a> related answer), where I digitize my values and then have a j-loop selecting digitized indices equal to j like below</p> <pre><code>import numpy as np # This function func() is just a placemark for a much more complicated function. # I am aware that my problem could be easily sped up in the specific case of # of the sum() function, but I am looking for a general solution to the problem. def func(x): y = np.sum(x) return y vals = np.random.random(1e8) nbins = 100 bins = np.linspace(0, 1, nbins+1) ind = np.digitize(vals, bins) result = [func(vals[ind == j]) for j in range(1, nbins)] </code></pre> <p>This is not what I want as it selects every time <code>ind == j</code> from my large array. This makes this solution very inefficient and slow.</p> <h2>2. Using binned_statistics</h2> <p>The above approach turns out to be the same implemented in <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binned_statistic.html" rel="noreferrer">scipy.stats.binned_statistic</a>, for the general case of a user-defined function. Using Scipy directly an identical output can be obtained with the following</p> <pre><code>import numpy as np from scipy.stats import binned_statistics vals = np.random.random(1e8) results = binned_statistic(vals, vals, statistic=func, bins=100, range=[0, 1])[0] </code></pre> <h2>3. Using labeled_comprehension</h2> <p>Another Scipy alternative is to use <a href="https://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.ndimage.measurements.labeled_comprehension.html" rel="noreferrer">scipy.ndimage.measurements.labeled_comprehension</a>. Using that function, the above example would become</p> <pre><code>import numpy as np from scipy.ndimage import labeled_comprehension vals = np.random.random(1e8) nbins = 100 bins = np.linspace(0, 1, nbins+1) ind = np.digitize(vals, bins) result = labeled_comprehension(vals, ind, np.arange(1, nbins), func, float, 0) </code></pre> <p>Unfortunately also this form is inefficient and in particular, it has no speed advantage over my original example.</p> <h2>4. Comparison with IDL language</h2> <p>To further clarify, what I am looking for is a functionality equivalent to the <code>REVERSE_INDICES</code> keyword in the <code>HISTOGRAM</code> function of the IDL language <a href="http://www.exelisvis.com/docs/HISTOGRAM.html" rel="noreferrer">HERE</a>. 
Can this very useful functionality be efficiently replicated in Python?</p> <p>Specifically, using the IDL language the above example could be written as</p> <pre><code>vals = randomu(s, 1e8) nbins = 100 bins = [0:1:1./nbins] h = histogram(vals, MIN=bins[0], MAX=bins[-2], NBINS=nbins, REVERSE_INDICES=r) result = dblarr(nbins) for j=0, nbins-1 do begin jbins = r[r[j]:r[j+1]-1] ; Selects indices of bin j result[j] = func(vals[jbins]) endfor </code></pre> <p>The above IDL implementation is about 10 times faster than the Numpy one, due to the fact that the indices of the bins do not have to be selected for every bin. And the speed difference in favour of the IDL implementation increases with the number of bins.</p>
<p>I found that a particular sparse matrix constructor can achieve the desired result very efficiently. It's a bit obscure but we can abuse it for this purpose. The function below can be used in nearly the same way as <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binned_statistic.html" rel="noreferrer">scipy.stats.binned_statistic</a> but can be orders of magnitude faster</p> <pre><code>import numpy as np from scipy.sparse import csr_matrix def binned_statistic(x, values, func, nbins, range): '''The usage is nearly the same as scipy.stats.binned_statistic''' N = len(values) r0, r1 = range digitized = (float(nbins)/(r1 - r0)*(x - r0)).astype(int) S = csr_matrix((values, [digitized, np.arange(N)]), shape=(nbins, N)) return [func(group) for group in np.split(S.data, S.indptr[1:-1])] </code></pre> <p>I avoided <code>np.digitize</code> because it doesn't use the fact that all bins are equal width and hence is slow, but the method I used instead may not handle all edge cases perfectly.</p>
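<p>A quick usage sketch matching the question's example (same call signature as the function above):</p> <pre><code>vals = np.random.random(10**6)
result = binned_statistic(vals, vals, np.sum, 100, (0, 1))
</code></pre>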
python|numpy|scipy
11
7,870
26,472,215
Pandas to calculate rolling aggregate rate
<p>I'm trying to calculate a rolling aggregate rate for a time series.</p> <p>The way to think about the data is that it is the results of a bunch of multigame series against a different teams. We don't know who wins the series until the last game. I'm trying to calculate the win rate as it evolves against each of the opposing teams.</p> <pre><code>series_id date opposing_team won_series 1 1/1/2000 a 0 1 1/3/2000 a 0 1 1/5/2000 a 1 2 1/4/2000 a 0 2 1/7/2000 a 0 2 1/9/2000 a 0 3 1/6/2000 b 0 </code></pre> <p>Becomes:</p> <pre><code>series_id date opposing_team won_series percent_win_against_team 1 1/1/2000 a 0 NA 1 1/3/2000 a 0 NA 1 1/5/2000 a 1 100 2 1/4/2000 a 0 NA 2 1/7/2000 a 0 100 2 1/9/2000 a 0 50 3 1/6/2000 b 0 0 </code></pre>
<p>I still don't feel like I understand the rule for how you decide when a series is over. Is 3 over? Why is it NA, I would have thought 1/3rd. Still, here is a way to keep track of the number of completed series and (a) win rate.</p> <p>Define 26472215table.csv:</p> <pre><code>series_id,date,opposing_team,won_series 1,1/1/2000,a,0 1,1/3/2000,a,0 1,1/5/2000,a,1 2,1/4/2000,a,0 2,1/7/2000,a,0 2,1/9/2000,a,0 3,1/6/2000,b,0 </code></pre> <p>Code:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv('26472215table.csv') grp2 = df.groupby(['series_id']) sr = grp2['date'].max() sr.name = 'LastGame' df2 = df.join( sr, on=['series_id'], how='left') df2.sort('date') df2['series_comp'] = df2['date'] == df2['LastGame'] df2['running_sr_cnt'] = df2.groupby(['opposing_team'])['series_comp'].cumsum() df2['running_win_cnt'] = df2.groupby(['opposing_team'])['won_series'].cumsum() winrate = lambda x: x[1]/ x[0] if (x[0] &gt; 0) else None df2['winrate'] = df2[['running_sr_cnt','running_win_cnt']].apply(winrate, axis = 1 ) </code></pre> <p>Results df2[['date', 'winrate']]:</p> <pre><code> date winrate 0 1/1/2000 NaN 1 1/3/2000 NaN 2 1/5/2000 1.0 3 1/4/2000 1.0 4 1/7/2000 1.0 5 1/9/2000 0.5 6 1/6/2000 0.0 </code></pre>
python|pandas
1
7,871
39,122,955
Pyspark error for java heap space error
<p>I am new to Spark, using <strong>Spark 1.6.1</strong> with <strong>two workers</strong>, each with <strong>1 GB of memory</strong> and <strong>5 cores</strong> assigned, running this code on a 33 MB file. </p> <p>This code is used to index words in Spark.</p> <pre><code>from textblob import TextBlob as tb
from textblob_aptagger import PerceptronTagger
import numpy as np
import nltk.data
import Constants
from pyspark import SparkContext,SparkConf
import nltk

TOKENIZER = nltk.data.load('tokenizers/punkt/english.pickle')
TAGGER = PerceptronTagger()

def word_tokenize(x):
    return nltk.word_tokenize(x)

def pos_tag (s):
    global TAGGER
    return TAGGER.tag(s)

def wrap_words (pair):
    ''' associate each word with an index '''
    index = pair[0]
    result = []
    for word, tag in pair[1]:
        word = word.lower()
        result.append({ "index": index, "word": word, "tag": tag})
        index += 1
    return result

if __name__ == '__main__':
    conf = SparkConf().setMaster(Constants.MASTER_URL).setAppName(Constants.APP_NAME)
    sc = SparkContext(conf=conf)
    data = sc.textFile(Constants.FILE_PATH)
    sent = data.flatMap(word_tokenize).map(pos_tag).map(lambda x: x[0]).glom()
    num_partition = sent.getNumPartitions()
    base = list(np.cumsum(np.array(sent.map(len).collect())))
    base.insert(0, 0)
    base.pop()
    RDD = sc.parallelize(base,num_partition)
    tagged_doc = RDD.zip(sent).map(wrap_words).cache()
</code></pre> <p>For smaller files (&lt; 25 MB) the code works fine, but it gives the error for files larger than 25 MB.<br> Please help me resolve this issue or suggest an alternative approach.</p>
<p>That's because of the <code>.collect()</code>. When you transform your RDD into a plain Python variable (or an <code>np.array</code>), all of the data is collected in one place, on the driver, so you lose the benefit of distributed processing, and a large enough file will exhaust the driver's heap.</p>
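<p>If you only need the global word indices, one way to avoid the <code>collect()</code> entirely is to let Spark assign them with <code>zipWithIndex</code>. This is a hedged sketch adapted from the code above (the helper name is ours, not from the question):</p> <pre><code>def wrap_word(pair):
    (word, tag), index = pair
    return {"index": index, "word": word.lower(), "tag": tag}

tagged_doc = (data.flatMap(word_tokenize)
                  .map(pos_tag)
                  .map(lambda x: x[0])   # the (word, tag) pair, as in the original
                  .zipWithIndex()        # global index, computed without collecting
                  .map(wrap_word)
                  .cache())
</code></pre>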
python|numpy|optimization|pyspark
0
7,872
33,709,598
TensorFlow cholesky decomposition
<p>From reading the TensorFlow documentation I see that there is a method for computing the <a href="http://tensorflow.org/api_docs/python/math_ops.md#cholesky" rel="noreferrer">Cholesky decomposition of a square matrix</a>. However, usually when I want to use Cholesky decomposition, I do it for the purposes of solving a linear system where direct matrix inversion might be unstable. </p> <p>Therefore, I am looking for a method similar to the one implemented in <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.linalg.cho_solve.html" rel="noreferrer">Scipy</a>. Does anyone know if this exists in TensorFlow or if there is a way it could be incorporated?</p>
<p>user19..8: The way to do this for now, if you want to keep things "mostly" in tensorflow, would be to do what you and Berci were discussing in the comments: run the tensorflow graph up to the point where you need to solve the linear system, and then feed the results back in with a feed_dict. In pseudocode:</p> <pre><code>saved_tensor1 = tf.Variable(...)
saved_tensor2 = tf.Variable(...)

start_of_model...
tensor1, tensor2 = various stuff...
do_save_tensor1 = saved_tensor1.assign(tensor1)
do_save_tensor2 = saved_tensor2.assign(tensor2)
your_cholesky = tf.cholesky(your_other_tensor, ...)

## THIS IS THE SPLIT POINT

# Second half of your model starts here
solved_system = tf.placeholder(...)  # You'll feed this in with feed_dict
final_answer = do_something_with(saved_tensor1, saved_tensor2, solved_system)
</code></pre> <p>Then to run the whole thing, do:</p> <pre><code>sess = tf.Session()
_, _, cho = sess.run([do_save_tensor1, do_save_tensor2, your_cholesky])
solution = ... solve your linear system with scipy ...
feed_dict = {solved_system: solution}
answer = sess.run(final_answer, feed_dict=feed_dict)
</code></pre> <p>The key here is stashing your intermediate results in tf.Variables so that you can resume the computation afterwards.</p> <p>(I'm not promising that what you get out of tf.cholesky is in the right format to feed directly to scipy, or that you shouldn't just pull out the matrix in an earlier step and feed it to scipy, but this overall workflow should work for you.)</p> <p>Note that this will create a performance bottleneck if you're doing heavily multicore or GPU operations and then have to serialize on spitting the matrix out to scipy, but it might also be just fine - depends a lot on your setting.</p>
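<p>For the "solve your linear system with scipy" placeholder, one hedged concrete option is <code>scipy.linalg.cho_solve</code>, assuming the factor comes back as a plain lower-triangular array and <code>b</code> is your right-hand side (check the shape and layout you actually get):</p> <pre><code>from scipy.linalg import cho_solve

# cho: the Cholesky factor fetched above; b: right-hand side (both assumptions here)
solution = cho_solve((cho, True), b)  # True marks a lower-triangular factor
</code></pre>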
scipy|tensorflow
3
7,873
33,872,129
Python - Retrieving last 30 days data from dataframe pandas
<p>I have a dataframe containing six months of error logs, collected every day. I want to retrieve the last 30 days of records, counted back from the last date. The last date isn't today.<br> For example: I have data for May, June, July and up to <code>August 15</code>; I want to retrieve the data from <code>July 15</code> to <code>August 15</code>, making 30 days of records.<br> Is there a way to do this in Python Pandas? </p> <p>This is the sample dataframe:</p> <pre><code>Error_Description Date Weekend Type
N17739 Limit switch X- 5/1/2015 5/3/2015 Critical
N17739 Limit switch Y- 5/1/2015 5/3/2015 Critical
N938 Key non-functional 5/1/2015 5/3/2015 Non-Critical
P124 Magazine is running 5/1/2015 5/3/2015 Non-Critical
N17738 Limit switch Z+ 5/1/2015 5/3/2015 Critical
N938 Key non-functional 5/1/2015 5/3/2015 Non-Critical
... ... ... ...
P873 ENCLOSURE DOOR 8/24/2015 8/30/2015 Non-Critical
N3065 Reset M114 8/24/2015 8/30/2015 Non-Critical
N3065 Reset M114, 8/24/2015 8/30/2015 Non-Critical
N2853 Synchronization 8/24/2015 8/30/2015 Critical
P152 ENCLOSURE 8/24/2015 8/30/2015 Non-Critical
N6236 has stopped 8/24/2015 8/30/2015 Critical
</code></pre>
<p>The date <code>lastdayfrom</code> is used to select the last 30 days of the <code>DataFrame</code> with the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="noreferrer">loc</a>. </p> <pre><code>lastdayfrom = pd.to_datetime('8/24/2015')
print lastdayfrom
#2015-08-24 00:00:00

print df
#   Error_Description       Date    Weekend          Type
#0   N17739 Limit switch X- 2015-05-01 2015-05-03      Critical
#1   N17739 Limit switch Y- 2015-05-01 2015-05-03      Critical
#2   N938 Key non-functional 2015-05-01 2015-05-03  Non-Critical
#3   P124 Magazine is running 2015-05-01 2015-05-03  Non-Critical
#4   N17738 Limit switch Z+ 2015-02-01 2015-05-03      Critical
#5   N938 Key non-functional 2015-07-25 2015-05-03  Non-Critical
#6   P873 ENCLOSURE DOOR 2015-07-24 2015-08-30  Non-Critical
#7   N3065 Reset M114 2015-07-21 2015-08-21  Non-Critical
#8   N3065 Reset M114, 2015-08-22 2015-08-22  Non-Critical
#9   N2853 Synchronization 2015-08-23 2015-08-30      Critical
#10  P152 ENCLOSURE 2015-08-24 2015-08-30  Non-Critical
#11  N6236 has stopped 2015-08-24 2015-08-30      Critical

print df.dtypes
#Error_Description            object
#Date                 datetime64[ns]
#Weekend              datetime64[ns]
#Type                         object
#dtype: object

#set index from column Date
df = df.set_index('Date')
#if the DatetimeIndex isn't sorted, sort it
df= df.sort_index()

#last 30 days up to lastdayfrom
df = df.loc[lastdayfrom - pd.Timedelta(days=30):lastdayfrom].reset_index()
</code></pre> <pre><code>print df
#        Date  Error_Description    Weekend          Type
#0 2015-07-25   N3065 Reset M114 2015-08-21  Non-Critical
#1 2015-08-22  N3065 Reset M114, 2015-08-22  Non-Critical
#2 2015-08-23  N2853 Synchronization 2015-08-30      Critical
#3 2015-08-24     P152 ENCLOSURE 2015-08-30  Non-Critical
#4 2015-08-24  N6236 has stopped 2015-08-30      Critical
</code></pre>
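<p>If you would rather not set the index, a boolean mask over the <code>Date</code> column does the same job; a small sketch with the same <code>lastdayfrom</code>:</p> <pre><code>mask = (df['Date'] &gt; lastdayfrom - pd.Timedelta(days=30)) &amp; (df['Date'] &lt;= lastdayfrom)
last30 = df.loc[mask]
</code></pre>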
python|pandas|dataframe
11
7,874
33,697,810
Fastest way to group by ID for a really big numpy array
<p>I am trying to find the best way to group 'rows' with similar IDs.</p> <p>My best guess: <code>np.array([test[test[:,0] == ID] for ID in List_IDs])</code></p> <p>result: array of arrays of arrays</p> <pre><code>[ array([['ID_1', 'col1','col2',...,'coln'],
       ['ID_1', 'col1','col2',...,'coln'],...,
       ['ID_1', 'col1','col2',...,'coln']],dtype='|S32')
 array([['ID_2', 'col1','col2',...,'coln'],
       ['ID_2', 'col1','col2',...,'coln'],...,
       ['ID_2', 'col1','col2',...,'coln']],dtype='|S32')
 ....
 array([['ID_k', 'col1','col2',...,'coln'],
       ['ID_k', 'col1','col2',...,'coln'],...,
       ['ID_K', 'col1','col2',...,'coln']],dtype='|S32')]
</code></pre> <p>Can anyone suggest something more efficient?</p> <p>Reminder: The <code>test</code> array is huge, and the 'rows' are not ordered.</p>
<p>I am assuming <code>List_IDs</code> is a list of all unique IDs from the first column. With that assumption, here's a Numpy-based solution -</p> <pre><code># Sort input array test w.r.t. first column that are IDs test_sorted = test[test[:,0].argsort()] # Convert the string IDs to numeric IDs _,numeric_ID = np.unique(test_sorted[:,0],return_inverse=True) # Get the indices where shifts (IDs change) occur _,cut_idx = np.unique(numeric_ID,return_index=True) # Use the indices to split the input array into sub-arrays with common IDs out = np.split(test_sorted,cut_idx)[1:] </code></pre> <p>Sample run -</p> <pre><code>In [305]: test Out[305]: array([['A', 'A', 'B', 'E', 'A'], ['B', 'E', 'A', 'E', 'B'], ['C', 'D', 'D', 'A', 'C'], ['B', 'D', 'A', 'C', 'A'], ['B', 'A', 'E', 'A', 'E'], ['C', 'D', 'C', 'E', 'D']], dtype='|S32') In [306]: test_sorted Out[306]: array([['A', 'A', 'B', 'E', 'A'], ['B', 'E', 'A', 'E', 'B'], ['B', 'D', 'A', 'C', 'A'], ['B', 'A', 'E', 'A', 'E'], ['C', 'D', 'D', 'A', 'C'], ['C', 'D', 'C', 'E', 'D']], dtype='|S32') In [307]: out Out[307]: [array([['A', 'A', 'B', 'E', 'A']], dtype='|S32'), array([['B', 'E', 'A', 'E', 'B'], ['B', 'D', 'A', 'C', 'A'], ['B', 'A', 'E', 'A', 'E']], dtype='|S32'), array([['C', 'D', 'D', 'A', 'C'], ['C', 'D', 'C', 'E', 'D']], dtype='|S32')] </code></pre>
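<p>If depending on pandas is acceptable (an assumption; the question only mentions numpy), <code>groupby</code> gets you the same grouping in one line:</p> <pre><code>import pandas as pd

# group the rows of the 2D string array by their first column
out = [g.values for _, g in pd.DataFrame(test).groupby(0)]
</code></pre>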
python|arrays|numpy
2
7,875
22,763,279
Optimize loop for changepoint test
<p>I'm trying to write a simple change point finder in Python. Below, the function loglike(xs) returns the maximized log-likelihood for an iid normal sample xs. The function most_probable_cp(xs) loops through each point in the middle ~75% of xs, and uses a likelihood ratio to find the most likely change point in xs. </p> <p>I'm using binary segmentation, and I'm bootstrapping to get critical values for the likelihood ratio, so I'll need to call most_probable_cp() thousands of times. Is there any way to speed it up? Would Cython help at all? I've never used it.</p> <pre><code>import numpy as np def loglike(xs): n = len(xs) mean = np.sum(xs)/n sigSq = np.sum((xs - mean)**2)/n return -0.5*n*np.log(2*np.pi*sigSq) - 0.5*n def most_probable_cp(xs, left=None, right=None): """ Finds the most probable changepoint location and corresponding likelihood for xs[left:right] """ if left is None: left = 0 if right is None: right = len(xs) OFFSETPCT = 0.125 MINNOBS = 12 ys = xs[left:right] offset = min(int(len(ys)*OFFSETPCT), MINNOBS) tLeft, tRight = left + offset, right - offset if tRight &lt;= tLeft: raise ValueError("left and right are too close together.") maxLike = -1e9 cp = None dataLike = loglike(ys) # Bottleneck is below. for t in xrange(tLeft, tRight): profLike = loglike(xs[left:t]) + loglike(xs[t:right]) lr = 2*(profLike - dataLike) if lr &gt; maxLike: cp = t maxLike = lr return cp, maxLike </code></pre>
<p>The first thing: use Numpy's implementation of the standard deviation. That will not only be faster, but also more stable. Note that your <code>sigSq</code> is a variance, so the log has to see the standard deviation squared:</p> <pre><code>def loglike(xs):
    n = len(xs)
    return -0.5 * n * np.log(2 * np.pi * np.std(xs)**2) - 0.5 * n
</code></pre> <p>If you really want to squeeze milliseconds, you could use bottleneck's <code>nanstd</code> function instead, because it is faster. And if you want to shave off microseconds, you could replace <code>np.log</code> by <code>math.log</code>, as you are only operating on a single number, and if <code>xs</code> is an array, you can use <code>xs.std()</code> instead. But before going down that road, I advise you to use this version, and profile the results to see where time is being spent.</p> <p><strong>Edit</strong></p> <p>If you profile loglike <code>python -m cProfile -o output yourprogram.py; runsnake output</code>, you will see that most (around 80%) of the time is being spent computing <code>np.std</code>. That is our first target. As I said before, the best call is to use <code>bottleneck.nanstd</code>. </p> <pre><code>import bottleneck as bn

def loglike(xs):
    n = len(xs)
    return -0.5 * n * np.log(2 * np.pi * bn.nanstd(xs)**2) - 0.5 * n
</code></pre> <p>In my benchmark, that gives an 8x speedup, leaving the standard deviation at only about 30% of the total time. <code>len</code> is around 5%, so there is no point in looking into it further. Replacing np.log and np.pi by their math counterparts, writing log(std**2) as 2*log(std), and taking out the common factor, I can cut the time in half again.</p> <pre><code>return -0.5 * n * (math.log(2 * math.pi) + 2 * math.log(bn.nanstd(xs)) + 1)
</code></pre> <p>I can squeeze an additional 10% by hurting readability a bit:</p> <pre><code>factor = math.log(2 * math.pi)

def loglike(xs):
    n = len(xs)
    return -0.5 * n * (factor + 2 * math.log(bn.nanstd(xs)) + 1)
</code></pre> <p><strong>Edit 2</strong></p> <p>If you want to really push it, you can replace bn.nanstd with the specialized function. Before your loop, define <code>std, _ = bn.func.nanstd_selector(xs, axis=None)</code> and use it instead of bn.nanstd, or just <code>func.nanstd_1d_float64_axisNone</code> if you are not going to change the dtype.</p> <p>And I think this is as fast as it gets in Python. Still, half of the time is being spent on number operations, and perhaps Cython would be able to optimise this, but then calling in and out of Python would add an overhead that could eat into the gains.</p>
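<p>Beyond micro-optimising <code>loglike</code>, the whole bottleneck loop can be vectorised with cumulative sums, since the mean and population variance of every prefix and suffix of <code>ys</code> fall out of running sums of <code>x</code> and <code>x**2</code>. A sketch (our own helper, not part of your code; the split indices here are relative to <code>ys</code>):</p> <pre><code>def split_loglikes(ys, tLeft, tRight):
    # loglike(ys[:t]) + loglike(ys[t:]) for every split t in [tLeft, tRight)
    n = len(ys)
    c1 = np.cumsum(ys)
    c2 = np.cumsum(ys**2)
    t = np.arange(tLeft, tRight)
    nl = t.astype(float)
    nr = n - nl
    varl = c2[t - 1]/nl - (c1[t - 1]/nl)**2
    varr = (c2[-1] - c2[t - 1])/nr - ((c1[-1] - c1[t - 1])/nr)**2
    return (-0.5*nl*(np.log(2*np.pi*varl) + 1)
            - 0.5*nr*(np.log(2*np.pi*varr) + 1))
</code></pre> <p>Taking <code>np.argmax</code> of that array (and adding <code>tLeft</code>) replaces the Python-level loop, which should dwarf any constant-factor tweak to <code>loglike</code>.</p>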
python|numpy|statistics
2
7,876
22,775,371
How to iterate through a numpy array and select neighboring cells
<p>I am converting a USGS elevation raster data set to a Numpy array and then trying to select a position in the array at random. From this position I would like to create a method that identifies the eight surrounding cells to see if the elevations of these cells are within one meter of the randomly selected cell. </p> <p>This is where it gets a little more complex...if a neighbor is within one meter, then the same method will be called on it, and the process repeats until there are no longer cells within a meter of elevation or the number of cells selected reaches a prescribed limit.</p> <p>If this is unclear, hopefully the 2d array example below will make more sense. The bold/italicized cell (<strong><em>35</em></strong>) was randomly selected, the method was called on it (selecting all eight of its neighbors), and then the method was called on all neighbors until no more cells could be selected (all bold numbers were selected). </p> <p>33 33 33 37 38 37 43 40</p> <p>33 33 33 38 38 38 44 40</p> <p><strong>36 36 36 36</strong> 38 39 44 41</p> <p><strong>35 36 <em>35</em> 35 34</strong> 30 40 41</p> <p><strong>36 36 35 35 34</strong> 30 30 41</p> <p>38 38 <strong>35 35 34</strong> 30 30 41</p> <p>I am fairly good at Java and know how to write a method to achieve this purpose, however GIS is primarily python based. I am in the process of learning python and have generated some code, but am having major problems adapting python to the GIS scripting interface. </p> <p>Thanks for any help!</p> <p><strong>Question continued...</strong></p> <p>Thanks for the answer Bas Swinckels. I tried to incorporate your code into the code I have written up so far and ended up getting an infinite loop. Below is what I have written up. There are two main steps I need to overcome to make this work. This is an example of my array generated from my raster (-3.40e+38 is the no-data value).</p> <pre><code>&gt;&gt;&gt; [[ -3.40282306e+38 -3.40282306e+38 -3.40282306e+38 ..., -3.40282306e+38
   -3.40282306e+38 -3.40282306e+38]
 [ -3.40282306e+38 -3.40282306e+38 -3.40282306e+38 ..., -3.40282306e+38
   -3.40282306e+38 -3.40282306e+38]
 [ -3.40282306e+38 -3.40282306e+38 -3.40282306e+38 ..., -3.40282306e+38
   -3.40282306e+38 -3.40282306e+38]
 ...,
 [ -3.40282306e+38 -3.40282306e+38 -3.40282306e+38 ..., -3.40282306e+38
   -3.40282306e+38 -3.40282306e+38]
 [ -3.40282306e+38 -3.40282306e+38 -3.40282306e+38 ..., -3.40282306e+38
   -3.40282306e+38 -3.40282306e+38]
 [ -3.40282306e+38 -3.40282306e+38 -3.40282306e+38 ..., -3.40282306e+38
   -3.40282306e+38 -3.40282306e+38]]
The script took 0.457999944687seconds.
&gt;&gt;&gt;
</code></pre> <p>What I need to do is <strong>randomly</strong> select a position (cell) within this array and then run the code you generated on this point, letting the flood fill algorithm grow until it maxes out like in the example above or until it reaches a prescribed number of cells (the user can set this, e.g. so that no flood-fill selection is over 25 cells). Then, ideally, the newly selected cells would be output as a single raster maintaining its georeferenced structure.
</p> <pre><code>#import modules
from osgeo import gdal
import numpy as np
import os, sys, time, random

#start timing
startTime = time.time()

#register all of drivers
gdal.AllRegister()

#get raster
geo = gdal.Open("C:/Users/Harmon_work/Desktop/Python_Scratch/all_fill.img")

#read raster as array
arr = geo.ReadAsArray()
data = geo.ReadAsArray(0, 0, geo.RasterXSize, geo.RasterYSize).astype(np.float)
print data

#get image size
rows = geo.RasterYSize
cols = geo.RasterXSize
bands = geo.RasterCount

#get georeference info
transform = geo.GetGeoTransform()
xOrgin = transform[0]
yOrgin = transform[3]
pixelWidth = transform[1]
pixelHeight = transform[5]

#get array dimensions
row = data.shape[0]
col = data.shape[1]

#get random position in array (randint is inclusive on both ends, so use 0..n-1)
randx = random.randint(0, row - 1)
randy = random.randint(0, col - 1)
print randx, randy

neighbours = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]

mask = np.zeros_like(data, dtype = bool)

#start coordinate
stack = [(randx,randy)]

while stack:
    x, y = stack.pop()
    mask[x, y] = True
    for dx, dy in neighbours:
        nx, ny = x + dx, y + dy
        if (0 &lt;= nx &lt; data.shape[0] and 0 &lt;= ny &lt; data.shape[1]
                and not mask[nx, ny] and abs(data[nx, ny] - data[x, y]) &lt;= 1):
            stack.append((nx, ny))

for line in mask:
    print ''.join('01'[i] for i in line)

#run time
endTime = time.time()
print 'The script took ' + str(endTime-startTime) + 'seconds.'
</code></pre> <p>Thanks again for your help. Please ask me questions if anything is unclear. </p>
<p>This can be done with an algorithm similar to <a href="http://en.wikipedia.org/wiki/Flood_fill" rel="nofollow">flood fill</a>, using a stack:</p> <pre><code>import numpy as np z = '''33 33 33 37 38 37 43 40 33 33 33 38 38 38 44 40 36 36 36 36 38 39 44 41 35 36 35 35 34 30 40 41 36 36 35 35 34 30 30 41 38 38 35 35 34 30 30 41''' z = np.array([[int(i) for i in line.split()] for line in z.splitlines()]) neighbours = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)] mask = np.zeros_like(z, dtype = bool) stack = [(3,2)] # push start coordinate on stack while stack: x, y = stack.pop() mask[x, y] = True for dx, dy in neighbours: nx, ny = x + dx, y + dy if (0 &lt;= nx &lt; z.shape[0] and 0 &lt;= ny &lt; z.shape[1] and not mask[nx, ny] and abs(z[nx, ny] - z[x, y]) &lt;= 1): stack.append((nx, ny)) for line in mask: print ''.join('01'[i] for i in line) </code></pre> <p>Result:</p> <pre><code>00000000 00000000 11110000 11111000 11111000 00111000 </code></pre>
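<p>One extra guard worth adding for the USGS raster described in the question: never grow into no-data cells, otherwise the fill can sweep across the whole (constant-valued) no-data region and look like an infinite loop. A hedged tweak to the neighbour test, using the no-data value quoted in the question:</p> <pre><code>NODATA = -3.40282306e+38  # value taken from the question's raster

if (0 &lt;= nx &lt; z.shape[0] and 0 &lt;= ny &lt; z.shape[1]
        and not mask[nx, ny]
        and z[nx, ny] != NODATA
        and abs(z[nx, ny] - z[x, y]) &lt;= 1):
    stack.append((nx, ny))
</code></pre>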
python|arrays|numpy|gis
4
7,877
13,288,889
Installing numpy using port (Python default version issue)
<p>I'm using Mountain Lion now and I've installed python27 and numpy using macports. The problem is that I cannot import numpy from the MacPorts Python. As far as I know, the default python of Mountain Lion is python 2.7.</p> <p>I've tried "import numpy" with both Pythons (default 2.7.2 and MacPorts 2.7.3). It worked with the default one but not with python 2.7.3.</p> <p>I've already selected 2.7.3 using "port select".</p> <p>These are the results of some port commands:</p> <pre><code>$ port installed|grep python
python24 @2.4.6_10 (active)
python27 @2.7.3_0
python27 @2.7.3_1 (active)
python_select @0.3_1 (active)

$ port installed|grep numpy
py24-numpy @1.6.2_0 (active)
py27-numpy @1.6.2_0 (active)
</code></pre> <p>I really need to use numpy with python 2.7.3, which is installed using macports.</p> <p>Does anyone know about this?</p>
<p>Looks like you missed a step. Did you do port select like this?</p> <pre><code>sudo port select --set python python27 </code></pre> <p>If py27-numpy is installed, then you must be able to import it from the MacPorts version of python 2.7. To make sure which version of python you're running, do a <code>which python</code> in the command line. If the command <code>python2.7 -c 'import numpy'</code> does not give an error, then numpy is installed for the 2.7 version in MacPorts.</p>
python|numpy|port|macports
0
7,878
13,293,731
ValueError: object too deep for desired array
<pre><code>""" ___ """ from scipy.optimize import root import numpy as np LENGTH = 3 def process(x): return x[0, 0] + x[0, 1] * 5 def draw(process, length): """ """ X = np.matrix(np.random.normal(0, 10, (length, 2))) y = np.matrix([process(x) for x in X]) y += np.random.normal(3, 1, len(y)) return y.T, X.T def maximum_likelyhood(y, X): def objective(b): return (X.T * (y - X * b.T)) x0 = np.matrix([0, 0]) res = root(objective, x0=x0) return res.x y, X = draw(process, LENGTH) X = X.transpose() b = np.matrix([[0], [1]]) print maximum_likelyhood(y, X) </code></pre> <p>produces a</p> <pre><code> Traceback (most recent call last): File "ml.py", line 33, in &lt;module&gt; maximum_likelyhood(y, X) File "ml.py", line 26, in maximum_likelyhood res = root(objective, x0=x0) File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/_root.py", line 168, in root sol = _root_hybr(fun, x0, args=args, jac=jac, **options) File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 193, in _root_hybr ml, mu, epsfcn, factor, diag) ValueError: object too deep for desired array </code></pre> <p>I can't even gasp what the problem is is it in the b which goes into the objective function? or is it in its output?</p>
<p>The problem is that fsolve and root do not accept matrices as the return value of the objective function.</p> <p>For example, this is a solution to the above problem:</p> <pre><code>def maximum_likelyhood(y, X):
    def objective(b):
        b = np.matrix(b).T
        return np.transpose(np.array((X.T * (y - X * b))))[0]

    x0 = (1, 1)
    res = root(objective, x0=x0)
    return res.x
</code></pre>
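<p>An alternative sketch that sidesteps the issue by dropping <code>np.matrix</code> entirely and working with 1-d arrays (our rewrite of the same function, not the asker's code):</p> <pre><code>def maximum_likelyhood(y, X):
    y = np.asarray(y).ravel()
    X = np.asarray(X)

    def objective(b):
        return X.T.dot(y - X.dot(b))  # already a flat ndarray

    res = root(objective, x0=np.zeros(X.shape[1]))
    return res.x
</code></pre>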
python|numpy|scipy
11
7,879
29,751,462
Pandas Yahoo Datareader RemoteDataError when start date or end date is current date
<p>I am running the below program to extract the stock information:</p> <pre><code>import datetime
import pandas as pd
from pandas import DataFrame
from pandas.io.data import DataReader

symbols_list = ['AAPL', 'TSLA', 'YHOO','GOOG', 'MSFT','ALTR','WDC','KLAC']
symbols=[]

for ticker in symbols_list:
    r = DataReader(ticker, "yahoo",
                   start=datetime.datetime(2015, 4, 17))
    # add a symbol column
    r['Symbol'] = ticker
    symbols.append(r)

# concatenate all the dfs
df = pd.concat(symbols)

#define cell with the columns that i need
cell= df[['Symbol','Open','High','Low','Adj Close','Volume']]

#changing sort of Symbol (ascending) and Date(descending) setting Symbol as first column and changing date format
cell.reset_index().sort(['Symbol', 'Date'], ascending=[1,0]).set_index('Symbol').to_csv('stock.csv', date_format='%d/%m/%Y')
</code></pre> <p>This runs perfectly. But when I change the start date to today, i.e. (2015, 4, 20), the program errors out. I have tried giving an end date as well, but it made no difference. Below is the error that I get:</p> <pre><code>UnboundLocalError                         Traceback (most recent call last)
&lt;ipython-input-38-a05c721d551a&gt; in &lt;module&gt;()
      8 for ticker in symbols_list:
      9     r = DataReader(ticker, "yahoo",
---&gt; 10                    start=datetime.datetime(2015, 4, 20))
     11     # add a symbol column
     12     r['Symbol'] = ticker

/usr/local/lib/python2.7/site-packages/pandas/io/data.pyc in DataReader(name, data_source, start, end, retry_count, pause)
     75         return get_data_yahoo(symbols=name, start=start, end=end,
     76                               adjust_price=False, chunksize=25,
---&gt; 77                               retry_count=retry_count, pause=pause)
     78     elif data_source == "google":
     79         return get_data_google(symbols=name, start=start, end=end,

/usr/local/lib/python2.7/site-packages/pandas/io/data.pyc in get_data_yahoo(symbols, start, end, retry_count, pause, adjust_price, ret_index, chunksize, interval)
    418         raise ValueError("Invalid interval: valid values are 'd', 'w', 'm' and 'v'")
    419     return _get_data_from(symbols, start, end, interval, retry_count, pause,
--&gt; 420                           adjust_price, ret_index, chunksize, 'yahoo')
    421
    422

/usr/local/lib/python2.7/site-packages/pandas/io/data.pyc in _get_data_from(symbols, start, end, interval, retry_count, pause, adjust_price, ret_index, chunksize, source)
    359     # If a single symbol, (e.g., 'GOOG')
    360     if isinstance(symbols, (compat.string_types, int)):
--&gt; 361         hist_data = src_fn(symbols, start, end, interval, retry_count, pause)
    362     # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
    363     elif isinstance(symbols, DataFrame):

/usr/local/lib/python2.7/site-packages/pandas/io/data.pyc in _get_hist_yahoo(sym, start, end, interval, retry_count, pause)
    206            '&amp;g=%s' % interval +
    207            '&amp;ignore=.csv')
--&gt; 208     return _retry_read_url(url, retry_count, pause, 'Yahoo!')
    209
    210

/usr/local/lib/python2.7/site-packages/pandas/io/data.pyc in _retry_read_url(url, retry_count, pause, name)
    175     #Get rid of unicode characters in index name.
    176     try:
--&gt; 177         rs.index.name = rs.index.name.decode('unicode_escape').encode('ascii', 'ignore')
    178     except AttributeError:
    179         #Python 3 string has no decode method.

UnboundLocalError: local variable 'rs' referenced before assignment
</code></pre>
<p>Putting together the suggestions by @JohnE, the code below seems to do the job:</p> <pre><code>import pandas as pd

symbols_list = ['AAPL', 'TSLA', 'YHOO','GOOG', 'MSFT','ALTR','WDC','KLAC']
result = []
for ticker in symbols_list:
    # intraday quotes for the current day, straight from Yahoo's chart API
    url = 'http://chartapi.finance.yahoo.com/instrument/1.0/%s/chartdata;type=quote;range=1d/csv' % ticker.lower()
    data = pd.read_csv(url, skiprows=17)
    data.columns = ['timestamp', 'close', 'high', 'low', 'open', 'volume']
    data['ticker'] = ticker
    result.append(data)
pd.concat(result)
</code></pre> <p>The result looks like this:</p> <pre><code>     timestamp     close      high       low      open  volume ticker
0   1429536719  125.5500  125.5700  125.4170  125.5100  183600   AAPL
1   1429536772  125.5900  125.6399  125.4600  125.5200  215000   AAPL
2   1429536835  125.7500  125.8000  125.5600  125.5901  348500   AAPL
...
367 1429559941   58.5700   58.5800   58.5400   58.5800  119100   KLAC
368 1429559946   58.5700   58.5700   58.5700   58.5700       0   KLAC
369 1429560000   58.5600   58.5600   58.5600   58.5600       0   KLAC
</code></pre>
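<p>If you need real datetimes rather than raw epoch seconds, a small follow-up (assuming the <code>timestamp</code> column holds Unix seconds, which is what the chart API appears to return):</p> <pre><code>df = pd.concat(result)
df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
</code></pre>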
python|pandas|stocks|pandas-datareader
0
7,880
29,438,585
Element-wise average and standard deviation across multiple dataframes
<p>Data: Multiple dataframes of the same format (same columns, an equal number of rows, and no points missing).</p> <p>How do I create a &quot;summary&quot; dataframe that contains an element-wise mean for every element? How about a dataframe that contains an element-wise standard deviation?</p> <pre><code> A B C 0 -1.624722 -1.160731 0.016726 1 -1.565694 0.989333 1.040820 2 -0.484945 0.718596 -0.180779 3 0.388798 -0.997036 1.211787 4 -0.249211 1.604280 -1.100980 5 0.062425 0.925813 -1.810696 6 0.793244 -1.860442 -1.196797 A B C 0 1.016386 1.766780 0.648333 1 -1.101329 -1.021171 0.830281 2 -1.133889 -2.793579 0.839298 3 1.134425 0.611480 -1.482724 4 -0.066601 -2.123353 1.136564 5 -0.167580 -0.991550 0.660508 6 0.528789 -0.483008 1.472787 </code></pre>
<p>You can create a panel of your DataFrames and then compute the mean and SD along the items axis:</p> <pre><code>df1 = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) df2 = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) df3 = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) p = pd.Panel({n: df for n, df in enumerate([df1, df2, df3])}) &gt;&gt;&gt; p.mean(axis=0) A B C 0 -0.024284 -0.622337 0.581292 1 0.186271 0.596634 -0.498755 2 0.084591 -0.760567 -0.334429 3 -0.833688 0.403628 0.013497 4 0.402502 -0.017670 -0.369559 5 0.733305 -1.311827 0.463770 6 -0.941334 0.843020 -1.366963 7 0.134700 0.626846 0.994085 8 -0.783517 0.703030 -1.187082 9 -0.954325 0.514671 -0.370741 &gt;&gt;&gt; p.std(axis=0) A B C 0 0.196526 1.870115 0.503855 1 0.719534 0.264991 1.232129 2 0.315741 0.773699 1.328869 3 1.169213 1.488852 1.149105 4 1.416236 1.157386 0.414532 5 0.554604 1.022169 1.324711 6 0.178940 1.107710 0.885941 7 1.270448 1.023748 1.102772 8 0.957550 0.355523 1.284814 9 0.582288 0.997909 1.566383 </code></pre>
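<p>If you want to avoid <code>Panel</code> (it was deprecated in later pandas versions), a hedged equivalent is to stack the frames and group on the shared row index:</p> <pre><code>all_df = pd.concat([df1, df2, df3])
mean_df = all_df.groupby(level=0).mean()
std_df = all_df.groupby(level=0).std()
</code></pre>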
python|python-3.x|pandas
6
7,881
62,389,358
Extract value from column containing dicts in a few rows
<p>I have a nested json file that I convert to a Pandas Dataframe:</p> <pre><code>tabell = pd.DataFrame.from_records(r.response['trades'])
</code></pre> <p>It looks like this:</p> <pre><code>  id  instrument  price  initialUnits  takeProfitOrder
0     AUD_CAD     0.90   10000         NaN
1     AUD_CAD     0.89   10000         {'id': '379895', 'createTime': '2020-06-15T12:...
</code></pre> <p>I want to extract the 'id' field from the inner dict, and keep that as the value in that column. </p> <p>If I write this, it works:</p> <pre><code>tabell.loc[1]['takeProfitOrder'] = tabell.loc[1]['takeProfitOrder']['id']
</code></pre> <p>However, I do not know which rows contain dicts, and there are thousands of them, so I do not want to iterate with a loop. </p> <p>But if I just write the following, which is what I want, it fails: </p> <pre><code>tabell['takeProfitOrder'] = tabell['takeProfitOrder']['id']
</code></pre> <p>Obviously it fails at the first row, as it contains <code>NaN</code> instead of a dict.</p> <p>What is the most efficient way to achieve this? This operation needs to be done many times on relatively large datasets, so I need an efficient way of accomplishing it.</p> <p>Any suggestions?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.get.html" rel="nofollow noreferrer"><code>Series.str.get</code></a> for possible processing missing values:</p> <pre><code>tabell['takeProfitOrder'] = tabell['takeProfitOrder'].str.get('id') </code></pre>
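<p>On the sample frame from the question, this should give something like the following; the row with a missing dict simply stays <code>NaN</code>:</p> <pre><code>0       NaN
1    379895
Name: takeProfitOrder, dtype: object
</code></pre>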
python|pandas|dataframe
1
7,882
62,161,367
How to group by key and retrieve keys from the grouped elements?
<p>I'm trying to <code>group</code> and <code>sum</code> dicts from a <code>DataFrame</code> like this:</p> <pre><code>dt = [
    {'discount_value': 10, 'is_cumulative': True, 'code': 'x'},
    {'discount_value': 10, 'is_cumulative': True, 'code': 'x1'},
    {'discount_value': 10, 'is_cumulative': False, 'code': 'x2'}
]
df = pandas.DataFrame(dt).groupby('is_cumulative')
result = df.sum()
</code></pre> <p>The result is:</p> <pre><code>               discount_value
is_cumulative
False                      10
True                       20
</code></pre> <p>But I also need all the "code" keys that went into each summed result, e.g.:</p> <pre><code>result['discount_value'][True] # 20
# how to get the codes "x" and "x1"?
</code></pre>
<p>Here is what you want:</p> <pre class="lang-py prettyprint-override"><code>import pandas dt = [ {'discount_value': 10, 'is_cumulative': True, 'code': 'x'}, {'discount_value': 10, 'is_cumulative': True, 'code': 'x1'}, {'discount_value': 10, 'is_cumulative': False, 'code': 'x2'} ] df = pandas.DataFrame(dt).groupby('is_cumulative') result = df.agg({'discount_value':sum, 'code':list}) print(result) </code></pre> <p>The output:</p> <pre><code> discount_value code is_cumulative False 10 [x2] True 20 [x, x1] </code></pre>
python|pandas
2
7,883
62,458,837
Groupby transform to list in pandas does not work
<p>Best described with an example</p> <pre><code>import pandas as pd
df = pd.DataFrame({
        'a' : ['A','B','C','A','B','C','A','B','C'],
        'b': [1,2,3,4,5,6,7,8,9]}
)
</code></pre> <p>And I want to create a column that contains, as a <code>list</code>, the elements of column <code>b</code> for each group of column <code>a</code>,</p> <p>resulting in the following</p> <pre><code>  a  b          c
0  A  1  [1, 4, 7]
1  A  4  [1, 4, 7]
2  A  7  [1, 4, 7]
3  B  2  [2, 5, 8]
4  B  5  [2, 5, 8]
5  B  8  [2, 5, 8]
6  C  3  [3, 6, 9]
7  C  6  [3, 6, 9]
8  C  9  [3, 6, 9]
</code></pre> <p>I can do this with <code>groupby</code> and <code>apply</code> or <code>agg</code> and then merging the dataframes, like so</p> <pre><code>df_tmp = df.groupby('a')['b'].agg(list).reset_index()
df.merge(df_tmp, on='a')
</code></pre> <p>But I would also expect to be able to do the same with <code>transform</code></p> <pre><code>df['c'] = df.groupby('a')['b'].transform(list)
</code></pre> <p>but the column <code>c</code> is the same as column <code>b</code>.</p> <p>Also, the following</p> <pre><code>df.groupby('a')['b'].transform(lambda x: len(x))
</code></pre> <p>returns a Series with the value <code>3</code>, i.e. the length of each group is 3 (as expected).</p> <p>But this</p> <pre><code>df.groupby('a')['b'].transform(lambda x: list(x))
</code></pre> <p>does not produce the expected result either. </p> <p>So, to my question: how can I obtain the desired result with groupby and transform?</p> <p><code>pandas</code> version is 1.0.5</p>
<p>I came up with the fix below. PS: something seems to go wrong inside <code>transform</code> when the returned object type is <code>list</code>, <code>tuple</code>, or <code>set</code>.</p> <pre><code>df.groupby('a')['b'].transform(lambda x : [x.tolist()]*len(x))
Out[226]:
0    [1, 4, 7]
1    [1, 4, 7]
2    [1, 4, 7]
3    [2, 5, 8]
4    [2, 5, 8]
5    [2, 5, 8]
6    [3, 6, 9]
7    [3, 6, 9]
8    [3, 6, 9]
Name: b, dtype: object
</code></pre>
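<p>Another route that avoids the quirk entirely: build the lists once with <code>agg</code> and then map them back onto the column (a sketch on the same toy frame):</p> <pre><code>df['c'] = df['a'].map(df.groupby('a')['b'].agg(list))
</code></pre>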
pandas|pandas-groupby
7
7,884
62,256,536
Most performant data structure in Python to handle live streaming market data
<p>I am about to handle live streaming stock market data, hundreds of "ticks" (<code>dict</code>s) per second, store them in an in-memory data structure and analyze the data.</p> <p>I was reading up on <code>pandas</code> and got pretty excited about it, only to learn that pandas' <code>append</code> function is not recommended because it copies the whole data frame on each individual append. So it seems <code>pandas</code> is pretty much unusable for real-time handling and analysis of high-frequency streaming data, e.g. financial or sensor data.</p> <p>So I'm back to native Python, which is quite OK. To save RAM, I am thinking about storing the last 100,000 data points or so on a rolling basis.</p> <p>What would be the most performant Python data structure to use?</p> <p>I was thinking of using a list, and inserting data point number 100,001, then deleting the first element, as in <code>del list[0]</code>. That way, I can keep a rolling history of the last 100,000 data points, but my indices will get larger and larger. A native "rolling" data structure (as in C with a 16bit index and increments without overflow checks) does not seem possible in Python?</p> <p>What would be the best way to implement my real-time data analysis in Python?</p>
<p>The workflow you describe makes me think of a <code>deque</code>, basically a list that allows extending on one end (e.g. right) while popping (fetching/removing) items off the other end (e.g. left). The reference even has a short list of <a href="https://docs.python.org/3/library/collections.html?highlight=deque#deque-recipes" rel="nofollow noreferrer">deque recipes</a> to illustrate such common use cases as implementing <code>tail</code> or maintaining a moving average (as a generator).</p>
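<p>In particular, <code>deque</code> takes a <code>maxlen</code> argument that gives you the rolling window for free: once the deque is full, appending on the right silently drops the oldest item on the left, with O(1) appends and no index bookkeeping. A minimal sketch for the 100,000-tick buffer:</p> <pre><code>from collections import deque

ticks = deque(maxlen=100000)

def on_tick(tick):           # called for each incoming dict
    ticks.append(tick)       # the oldest tick falls off automatically when full
</code></pre>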
python|pandas|performance|data-science|real-time
2
7,885
62,101,315
Validation accuracy increases then suddenly decreases
<p>I am training an LSTM model on the <a href="http://alt.qcri.org/semeval2017/task4/index.php?id=data-and-tools" rel="nofollow noreferrer">SemEval 2017 task 4A dataset</a>. I observe that at first, validation accuracy increases along with training accuracy, but then it suddenly decreases by a significant amount. The loss decreases, but the validation loss increases by a significant amount.</p> <p><a href="https://i.stack.imgur.com/3aHbz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3aHbz.png" alt="The training sample"></a></p> <p>Here is the code of my model </p> <pre><code>model = Sequential()
model.add(Embedding(max_words, 30, input_length=max_len))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Bidirectional(LSTM(32)))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
</code></pre> <p>And here is the model summary</p> <pre><code>Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_2 (Embedding)      (None, 300, 30)           60000
_________________________________________________________________
batch_normalization_3 (Batch (None, 300, 30)           120
_________________________________________________________________
activation_3 (Activation)    (None, 300, 30)           0
_________________________________________________________________
dropout_3 (Dropout)          (None, 300, 30)           0
_________________________________________________________________
bidirectional_2 (Bidirection (None, 64)                16128
_________________________________________________________________
batch_normalization_4 (Batch (None, 64)                256
_________________________________________________________________
activation_4 (Activation)    (None, 64)                0
_________________________________________________________________
dropout_4 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
=================================================================
Total params: 76,569
Trainable params: 76,381
Non-trainable params: 188
</code></pre> <p>I am using GloVe for word embeddings, the Adam optimizer, and the binary crossentropy loss function. </p>
<p>You have a few choices:</p> <ol> <li>keep training and see what happens</li> <li>if the val_loss becomes worse, you're overfitting -- look into how to deal with that: increase the amount of data, use a simpler network, or do whatever seems to work in your particular case.</li> <li>if the val_loss gets better again, you're on the right path.</li> </ol> <p>And, yeah, share the results with us: what happens if you run training for a couple more epochs?</p>
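<p>One concrete way to act on point 2 is early stopping on the validation loss; a hedged sketch with standard Keras callbacks (the fit arguments below are placeholders, not the asker's variables):</p> <pre><code>from tensorflow.keras.callbacks import EarlyStopping

es = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=100, callbacks=[es])
</code></pre>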
python|tensorflow|keras|deep-learning|glove
2
7,886
62,404,451
pytorch versus autograd.numpy
<p>What are the big differences between pytorch and numpy, in particular the autograd.numpy package? (Since both of them can compute gradients automatically for you.) I know that pytorch can move tensors to the GPU, but is this the only reason for choosing pytorch over numpy? While pytorch is well known for deep learning, obviously it can be used for almost any machine learning algorithm; its nn.Module structure is very flexible and we don't have to confine ourselves to neural networks. (Although I've never seen any neural network model written in numpy.) So I'm wondering what the biggest difference is between pytorch and numpy. </p>
<p>I'm not sure if this question can be objectively answered, but besides the GPU functionality, it offers</p> <ul> <li>Parallelisation across GPUs</li> <li>Parallelisation across Machines</li> <li>DataLoaders / Manipulators incl. asynchronous pre-fetching</li> <li>Optimizers</li> <li>Predefined/Pretrained Models (can save you a lot of time)</li> <li>...</li> </ul> <p>But as you said, it's built around deep/machine learning, so that is what it's good at, while numpy (together with scipy) is much more general and can be used to solve a large range of other engineering problems (possibly using methods that are not en vogue at the moment).</p>
numpy|pytorch|autograd
0
7,887
62,093,584
keras load_model cannot recognize new AUC metric tf.keras.metrics.AUC()
<p>I am using the new TensorFlow version, which has the AUC metric defined as tf.keras.metrics.AUC(). The model compiles and runs fine, but when I load the model it cannot recognize the AUC metric function, even though I have added the required imports. The code is given below:</p> <pre><code>import keras
import tensorflow as tf
from tensorflow.keras import backend as K
from keras.optimizers import SGD, Adam
from keras.models import Model, load_model
from kerao.callbacks import Plotter
from keras.callbacks import Callback, ModelCheckpoint

optimizer = SGD(lr=1e-3, decay=1e-4, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=[tf.keras.metrics.AUC()])

out_path = "../model_test.h5"
checkpoint = ModelCheckpoint(out_path, monitor='val_loss', save_best_only=True, period=1, verbose=1)

model.fit_generator(generatortrain, steps_per_epoch=100, epochs=30, validation_data=generatortest, validation_steps=len(generatortest), initial_epoch=0, callbacks=[Plotter(), checkpoint], workers=7, max_queue_size=20, class_weight=class_weight)

model_new = load_model('../model_test.h5', custom_objects={'AUC': tf.keras.metrics.AUC()})
</code></pre> <blockquote> <p>ValueError: Unknown metric function:auc</p> </blockquote> <p>I have also tried the following way:</p> <pre><code>def auc(y_true, y_pred):
    return tf.keras.metrics.AUC()

optimizer = SGD(lr=1e-3, decay=1e-4, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=[auc, f1])
</code></pre> <p>This gives me the following error:</p> <blockquote> <p>Failed to convert object of type &lt;class 'tensorflow.python.keras.metrics.AUC'&gt; to Tensor. Contents: &lt;tensorflow.python.keras.metrics.AUC object at 0x7fd6f0ea7350&gt;. Consider casting elements to a supported type.</p> </blockquote>
<pre><code>from keras.metrics import AUC ... model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=[AUC(name='auc')]) </code></pre>
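<p>When loading, the same rule applies: register the metric under the name it was saved with, and use one Keras implementation consistently (mixing <code>keras</code> and <code>tf.keras</code>, as in the question, is a common source of this error). A hedged sketch assuming <code>tf.keras</code> throughout; the key 'auc' matches the name reported in the error message:</p> <pre><code>from tensorflow.keras.models import load_model
from tensorflow.keras.metrics import AUC

model_new = load_model('../model_test.h5',
                       custom_objects={'auc': AUC(name='auc')})
</code></pre>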
tensorflow|keras|model|metrics|auc
1
7,888
51,369,727
Convert pandas DataFrame into string to be written to a cfg file
<p><strong><em>Target</em></strong></p> <p>I have a Pandas data frame, as shown below, and would like to join the columns <code>command</code> and <code>value</code> while also converting the result back into its raw string format to be written to a .cfg file.</p> <hr> <p><strong><em>Data Frame</em></strong> - <code>df</code>:</p> <p><em>Before</em> joined:</p> <pre><code>   command  value
0     bind    "0"    "slot10"
1     bind    "1"     "slot1"
2     bind    "2"     "slot2"
3     bind    "3"     "slot3"
4     bind    "4"     "slot4"
5     bind    "5"     "slot5"
6     bind    "6"     "slot6"
7     bind    "7"     "slot7"
8     bind    "8"     "slot8"
9     bind    "9"     "slot9"
10    bind    "a"     "+moveleft"
11    bind    "b"     "buymenu"
12    bind    "d"     "+moveright"
</code></pre> <p><em>After</em> joined:</p> <pre><code>0     bind "0" "slot10"
1     bind "1" "slot1"
2     bind "2" "slot2"
3     bind "3" "slot3"
4     bind "4" "slot4"
5     bind "5" "slot5"
6     bind "6" "slot6"
7     bind "7" "slot7"
8     bind "8" "slot8"
9     bind "9" "slot9"
10    bind "a" "+moveleft"
11    bind "b" "buymenu"
12    bind "d" "+moveright"
13    bind "e" "+use"
14    bind "f" "+lookatweapon"
... etc.
dtype: object
</code></pre> <hr> <p><strong><em>My attempt</em></strong>:</p> <p>I have managed to combine the two columns to get the output above using the following code:</p> <pre><code>df = df['command'].astype(str)+' '+df['value'].astype(str)
</code></pre> <p>However, this still hasn't helped convert the dataframe into a raw string so that it can be written to a <em>.cfg</em> file.</p> <p>I have also attempted to use <code>df.to_string()</code>, but this doesn't seem to make a difference.</p> <hr> <p><strong><em>Expected Output</em></strong></p> <p>I would like to get raw string output like so, assigned to a variable to be written to a .cfg file:</p> <pre><code>bind "0" "slot10"
bind "1" "slot1"
bind "2" "slot2"
bind "3" "slot3"
bind "4" "slot4"
bind "5" "slot5"
bind "6" "slot6"
bind "7" "slot7"
bind "8" "slot8"
bind "9" "slot9"
bind "a" "+moveleft"
bind "b" "buymenu"
bind "d" "+moveright"
bind "e" "+use"
bind "f" "+lookatweapon"
... etc.
</code></pre>
<p>After you call</p> <pre><code>df = df['command'].astype(str)+' '+df['value'].astype(str)
</code></pre> <p>you're actually left with a <code>Series</code> object, so you can call <code>df.tolist()</code> and then join the elements of the list with a newline. Something like this:</p> <pre><code>s = df['command'].astype(str)+' '+df['value'].astype(str)
cfg_output = '\n'.join(s.tolist()) # joins each command with a newline
</code></pre> <p>You're getting a None value appended to the last line because of how you're reading in the original config file; this line</p> <pre><code>df = pd.DataFrame(df[0].str.split(' ',1).tolist(),columns = ['command','value'])
</code></pre> <p>has trouble accounting for a config setting without a value. Consider the last three lines of your original conf file</p> <pre><code>sensitivity "0.9"
cl_teamid_overhead_always 1
host_writeconfig
</code></pre> <p>The settings for <code>sensitivity</code> and <code>cl_teamid_overhead_always</code> have values following them, but <code>host_writeconfig</code> doesn't, so when pandas tries to split on whitespace (which doesn't exist in that line), the first value is <code>host_writeconfig</code> and the second is the None object.</p>
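<p>To finish the job and write it out, a small follow-up (plain file I/O; the filename is a placeholder):</p> <pre><code>with open('config.cfg', 'w') as f:
    f.write(cfg_output + '\n')
</code></pre>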
python|python-2.7|pandas|dataframe
2
7,889
51,313,530
getting wrong result while merging pandas dataframe
<p>I have two dataframes like-</p> <pre><code>  identity    time       Date      matched_time
0   197_$    21:21:21   9/11/2015   21:21:30
0   197_$    21:21:51   9/11/2015   21:22:00
0   197_$    21:22:21   9/11/2015   21:22:30
0   197_$    21:22:51   9/11/2015   21:23:00
0   197_$    21:23:21   9/11/2015   21:23:30
0   197_$    21:23:51   9/11/2015   21:24:00

 identity   Line   Epoch   Day   Seconds    Date       Time
  197_$     9344   11203    4    280290   9/11/2015   1/1/1900 21:21
  197_$     9345   11204    4    280320   9/11/2015   1/1/1900 21:22
  197_$     9346   11205    4    280350   9/11/2015   1/1/1900 21:22
  197_$     9347   11206    4    280380   9/11/2015   1/1/1900 21:23
  197_$     9348   11207    4    280410   9/11/2015   1/1/1900 21:23
  197_$     9349   11208    4    280440   9/11/2015   1/1/1900 21:24
</code></pre> <p>Now I want to merge the columns to create a new dataframe- I did-</p> <pre><code>df2=pd.merge(df,out,how='outer')
</code></pre> <p>but the desired output was not obtained. I just wanted to create a dataframe which has all the columns.</p> <p>So the dataframe should look like this-</p> <pre><code>  identity    time       Date      matched_time   Line   Epoch ....
0   197_$    21:21:21   9/11/2015   21:21:30      9344   11203 ....
0   197_$    21:21:51   9/11/2015   21:22:00      9345   11204
0   197_$    21:22:21   9/11/2015   21:22:30      9346   11205
0   197_$    21:22:51   9/11/2015   21:23:00      9347   11206
0   197_$    21:23:21   9/11/2015   21:23:30      9348   11207
0   197_$    21:23:51   9/11/2015   21:24:00      9349   11208
</code></pre>
<p>In general you should not use <code>merge()</code> unless you have unique keys on at least one side (left or right). Instead, use <code>concat()</code> if you have identical columns in both dataframes. I omitted the <code>Time</code> column of your 2nd dataframe for simplicity.</p> <p><code>df1</code>:</p> <pre><code>  identity      time       Date matched_time
0    197_$  21:21:21  9/11/2015     21:21:30
1    197_$  21:21:51  9/11/2015     21:22:00
2    197_$  21:22:21  9/11/2015     21:22:30
3    197_$  21:22:51  9/11/2015     21:23:00
4    197_$  21:23:21  9/11/2015     21:23:30
5    197_$  21:23:51  9/11/2015     21:24:00
</code></pre> <p><code>df2</code>:</p> <pre><code>  identity  Line  Epoch  Day  Seconds       Date
0    197_$  9344  11203    4   280290  9/11/2015
1    197_$  9345  11204    4   280320  9/11/2015
2    197_$  9346  11205    4   280350  9/11/2015
3    197_$  9347  11206    4   280380  9/11/2015
4    197_$  9348  11207    4   280410  9/11/2015
5    197_$  9349  11208    4   280440  9/11/2015
</code></pre> <p>Combine the 2 dataframes using <code>concat()</code>:</p> <pre><code>df3 = (pd.concat([df1.set_index(['identity', 'Date']),
                  df2.set_index(['identity', 'Date'])], axis=1).reset_index(drop=False))
</code></pre> <p>Output(<code>df3</code>):</p> <pre><code>  identity       Date      time matched_time  Line  Epoch  Day  Seconds
0    197_$  9/11/2015  21:21:21     21:21:30  9344  11203    4   280290
1    197_$  9/11/2015  21:21:51     21:22:00  9345  11204    4   280320
2    197_$  9/11/2015  21:22:21     21:22:30  9346  11205    4   280350
3    197_$  9/11/2015  21:22:51     21:23:00  9347  11206    4   280380
4    197_$  9/11/2015  21:23:21     21:23:30  9348  11207    4   280410
5    197_$  9/11/2015  21:23:51     21:24:00  9349  11208    4   280440
</code></pre> <p>Hope this helps.</p>
python|pandas
1
7,890
51,308,340
Cannot override matplotlib format_coord
<p>I'm creating a contour plot with a list of V values as the x-axis and a list of T values as the y-axis (the V and T values are float numbers with 2 digits after the decimal point but all sorted of course). I created a data matrix and populated it with the data correlating with the V-T coordinates. </p> <p>If it helps anything, this is what the contour plot would look like:</p> <p><a href="https://i.stack.imgur.com/GYfly.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GYfly.png" alt="enter image description here"></a></p> <p>I'm trying to override the format_coord method to also display the data along with the x-y (V-T) coordinates when the cursor moves</p> <p>I can't post all my code here, but here are the relevant parts:</p> <pre><code>fig= Figure() a = fig.add_subplot(111) contour_plot = a.contourf(self.pre_formating[0],self.pre_formating[1],datapoint) #Plot the contour on the axes def fmt(x, y): '''Overrides the original matplotlib method to also display z value when moving the cursor ''' V_lst = self.pre_formating[0] #List of V points T_lst = self.pre_formating[1] #List of T points Zflat = datapoint.flatten() #Flatten out the data matrix print 'first print line' # get closest point with known v,t values dist = distance.cdist([x,y],np.stack([V_lst, T_lst],axis=-1)) print 'second print line' closest_idx = np.argmin(dist) z = Zflat[closest_idx] return 'x={x:.5f} y={y:.5f} z={z:.5f}'.format(x=x, y=y, z=z) a.format_coord = fmt </code></pre> <p>The above code does not work (when I move the cursor nothing shows up, even the x,y value. The 'first print line' gets printed but the 'second print line' doesn't, so I think the problem is with the 'dist' line).</p> <p>But when I change the 'dist' line to </p> <pre><code>dist = np.linalg.norm(np.vstack([V_lst - x, T_lst - y]), axis=0) </code></pre> <p>Everything works (x,y,data shows up) for a 37x37 matrix but not for a 37x46 matrix (37T, 46V) and I don't know why. What should I do to make my code work? Thank you for helping!</p>
<p>Hi guys, I found out how to solve this, in case anyone needs it. There were two problems: (1) I was generating the list of V,T coordinates the wrong way, and (2) <code>distance.cdist</code> requires everything in the form of a 2d array. So this is the final solution:</p> <pre><code>def fmt(x, y):
    '''Overrides the original matplotlib method to also display z value when moving the cursor
    '''
    V_lst = self.pre_formating[0] #List of V points
    T_lst = self.pre_formating[1] #List of T points
    coordinates = [[v,t] for t in T_lst for v in V_lst] # List of coordinates (V,T)
    Zflat = datapoint.flatten() #Flatten out the data matrix

    # get closest point with known v,t values
    dist = distance.cdist([[x,y]],coordinates)
    closest_idx = np.argmin(dist)
    z = Zflat[closest_idx]
    return 'x={x:.5f}  y={y:.5f}  z={z:.5f}'.format(x=x, y=y, z=z)
</code></pre>
python-2.7|numpy|matplotlib|scipy
0
7,891
51,312,815
Taking a specific character in the string for a list of strings in python
<p>I have a list of 22,000 strings like abc.wav. I want to extract a specific character from each string in Python, for example the character just before .wav in all the file names. How can I do that in Python?</p>
<p>Finding the pieces around a character can be done with .split(), but if you want to pull up a specific spot in a string, you can index it directly, as in list[stringNum][letterNum]. And list[stringNum].split("a") would give you two or more separate strings from either side of the letter "a". Using the lengths of those pieces versus the length of the whole string, you can work out where the splits happened. Just a simple algorithm idea; you'd have to play around with it.</p>
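<p>To make that concrete for the ".wav" case, a minimal sketch (assuming every name really ends in ".wav"):</p> <pre><code>files = ["abc.wav", "xyz.wav"]
chars = [name[-5] for name in files]  # the character just before ".wav"
# with pandas (the question is tagged pandas): pd.Series(files).str[-5]
</code></pre>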
python|pandas|numpy
0
7,892
70,817,269
Longest continuous streaks of multiple users
<p>I want to find the solution for this,</p> <blockquote> <p>Provided a table with user_id and the dates they visited the platform, find the top 100 users with the longest continuous streak of visiting the platform as of yesterday.</p> </blockquote> <p>I found these <a href="https://stackoverflow.com/questions/48897265/how-to-count-longest-uninterrupted-sequence-in-pandas">links</a> that explain how to do this for one user. However, I am not sure how to do it for multiple users.</p> <p>One naive approach might be to get all unique users and, using a for loop and the answer above, find each user's maximum continuous visiting streak. However, I am interested in a vectorised way if possible.</p> <p>If needed, this is the code I used,</p> <pre><code>date_series = pd.Series(np.random.randint(0,10, 400), index=pd.to_datetime(np.random.randint(0,20, 400)*1e9*24*3600), name=&quot;uid&quot;)
df = date_series.reset_index().rename({&quot;index&quot;:&quot;date_val&quot;}, axis=1).drop_duplicates().reset_index(drop=True)
</code></pre> <p>For a given user id (say uid = 1), I can use the following to find the max streak,</p> <pre><code>sub_df = df[df.uid==1].sort_values(&quot;date_val&quot;)
(sub_df.date_val+pd.Timedelta(days=1) != sub_df.date_val.shift(-1)).cumsum().value_counts().max()
</code></pre> <p>But I don't understand how to do a similar thing for all users in the original dataframe (df) with a vectorized (not for loop) approach.</p>
<p>I have gone the long way round; there may be a shorter way out there. Let's try:</p> <pre><code>df = df.sort_values(by=['uid','date_val'])  # sort df

# check the day-to-day sequence in both directions
df = (df.assign(diff=df['date_val'].diff().dt.days,
                diff1=df['date_val'].diff(-1).dt.days))

# create a grouper that starts a new group at each streak boundary
s = (((df['diff'].isna())&amp;(df['diff1']==-1))|((df['diff'].gt(1))&amp;(df['diff1']==-1))).cumsum()

# get streak length
df['streak'] = df.groupby([s,'uid'])['date_val'].transform('count')

# isolate the max streak per user
new = df[df['streak'] == df.groupby('uid')['streak'].transform('max')].drop(columns=['diff','diff1']).sort_values(by=['uid','date_val'])
</code></pre>
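<p>For the "top 100" part of the question, a hedged final step on top of <code>new</code> (this ranks by longest streak overall; restricting to streaks still running as of yesterday would need one more filter on each streak's last date):</p> <pre><code>top100 = new.groupby('uid')['streak'].first().nlargest(100)
</code></pre>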
python|pandas|dataframe|series
1
7,893
41,693,371
odeint floating point arithmetic
<p>I am interested in understanding the floating-point arithmetic behind the <code>scipy.integrate.odeint</code> function.</p> <p>The case I am working with is the following</p> <pre><code># data
omega = 136   # rad/s
d = 75        # Nm/s
k = 390000    # N/m
m = 4         # kg
n = 1000      #
t_0 = 1       # s
t_1 = 5.5     # s
Y = 0.05      # m

# time
t = np.linspace(t_0, t_1, n)

# initial condition
x_0 = np.array([0, 0])

# first function
def fun(x, t, k, d, m, Y, omega):
    y = Y*np.sin(omega*t)
    return np.array([x[1], (y - k*x[0] - d*x[1]) / m])

# second function
def fun2(x, t, k, d, m, Y, omega):
    y = Y*np.sin(omega*t)
    return np.array([x[1], (-k*x[0] - d*x[1] + y)/m])

# results
res = odeint(fun, x_0, t, args=(m, k, d, Y, omega))
res2 = odeint(fun2, x_0, t, args=(m, k, d, Y, omega))
</code></pre> <p>Note that both functions are mathematically the same. The only difference is the order of the numerical operations.</p> <p>I would like to better understand the difference in the result <code>res - res2</code>, which is:</p> <pre><code>array([[  0.00000000e+00,   0.00000000e+00],
       [ -1.95628215e-22,   1.91508855e-19],
       [  6.33676391e-19,  -2.16307730e-17],
       ...,
       [ -8.50849113e-10,   3.04613004e-09],
       [ -8.49843242e-10,  -9.43460353e-10],
       [ -1.00314946e-09,   4.45237878e-09]])
</code></pre> <p>but it should be an array of zeros.</p>
<p>What you are seeing is the result of a different order of the floating-point operations (addition and multiplication). See a classic paper <a href="http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html" rel="nofollow noreferrer">http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html</a> or wikipedia <a href="https://en.wikipedia.org/wiki/Floating_point#Floating-point_arithmetic_operations" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Floating_point#Floating-point_arithmetic_operations</a></p> <p>Even though the difference can be minor, since odeint solves to a given accuracy you are left with a difference that can be up to that threshold.</p> <p>I don't know much about duplicates across SE sites, but this is a close problem: <a href="https://scicomp.stackexchange.com/questions/10506/number-of-equations-and-precision-of-scipys-integrate-odeint">https://scicomp.stackexchange.com/questions/10506/number-of-equations-and-precision-of-scipys-integrate-odeint</a> or this one <a href="https://stackoverflow.com/questions/40515064/scipy-integrate-odeint-fails-depending-on-time-steps">scipy.integrate.odeint fails depending on time steps</a> with a reply by a SciPy core developer.</p>
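<p>The non-associativity is easy to see on its own, without odeint in the picture:</p> <pre><code>&gt;&gt;&gt; 0.1 + (0.2 + 0.3) == (0.1 + 0.2) + 0.3
False
</code></pre> <p>Each of <code>fun</code> and <code>fun2</code> evaluates such sums in a different order, so their outputs can differ in the last bits, and the adaptive stepping in odeint then amplifies that into the ~1e-9 differences you see.</p>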
python|numpy|floating-point|scipy|odeint
2
7,894
64,580,235
How to use tf.data.Dataset.from_generator() to load only one batch at a time from the dataset?
<p>I want to train a CNN and I am trying to feed the model one batch at a time, directly from a <code>numpy</code> memmap, without having to load the whole dataset into memory, using <code>tf.data.Dataset.from_generator()</code>. I am using <code>tf2.2</code> and the GPU for fitting. The dataset is a sequence of 3D matrices (NCHW format). The label of each case is the next 3D matrix. The problem is that it still loads the whole dataset into memory.</p> <p>Here is a short reproducible example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from numpy.lib.format import open_memmap
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

tf.config.list_physical_devices(&quot;GPU&quot;)

# create and initialize the memmap
ds_shape = (20000, 3, 50, 50)
ds_mmap = open_memmap(&quot;ds.npy&quot;,
                      mode='w+',
                      dtype=np.dtype(&quot;float64&quot;),
                      shape=ds_shape)
ds_mmap[:] = np.random.rand(*ds_shape)  # fill in place; rebinding the name would discard the memmap

len_ds = len(ds_mmap)          # 20000
len_train = int(0.6 * len_ds)  # 12000
len_val = int(0.2 * len_ds)    # 4000
len_test = int(0.2 * len_ds)   # 4000
batch_size = 32
epochs = 50
</code></pre> <p>I tried 2 ways of generating train-val-test datasets (also, if anyone could comment on pros and cons, it would be more than welcome)</p> <p>1.</p> <pre class="lang-py prettyprint-override"><code>def gen(ds_mmap, start, stop):
  for i in range(start, stop):
    yield (ds_mmap[i], ds_mmap[i + 1])

tvt = {&quot;train&quot;: None, &quot;val&quot;: None, &quot;test&quot;: None}
tvt_limits = {
  &quot;train&quot;: (0, len_train),
  &quot;val&quot;: (len_train, len_train + len_val),
  &quot;test&quot;: (len_train + len_val, len_ds -1) # -1 because the last case does not have a label
}

for ds_type in tvt:
  start, stop = tvt_limits[ds_type]
  # assign back into the dict; rebinding a loop variable would be lost
  tvt[ds_type] = tf.data.Dataset.from_generator(
    generator=gen,
    output_types=(tf.float64, tf.float64),
    output_shapes=(ds_shape[1:], ds_shape[1:]),
    args=[ds_mmap, start, stop]
  )

train_ds = (
  tvt[&quot;train&quot;]
  .shuffle(len_ds, reshuffle_each_iteration=False)
  .batch(batch_size)
)
val_ds = tvt[&quot;val&quot;].batch(batch_size)
test_ds = tvt[&quot;test&quot;].batch(batch_size)
</code></pre> <ol start="2"> <li></li> </ol> <pre class="lang-py prettyprint-override"><code>def gen(ds_mmap):
  for i in range(len(ds_mmap) - 1):
    yield (ds_mmap[i], ds_mmap[i + 1])

ds = tf.data.Dataset.from_generator(
  generator=gen,
  output_types=(tf.float64, tf.float64),
  output_shapes=(ds_shape[1:], ds_shape[1:]),
  args=[ds_mmap]
)

train_ds = (
  ds
  .take(len_train)
  .shuffle(len_ds, reshuffle_each_iteration=False)
  .batch(batch_size)
)
val_ds = ds.skip(len_train).take(len_val).batch(batch_size)
test_ds = ds.skip(len_train + len_val).take(len_test - 1).batch(batch_size)
</code></pre> <p>Both ways work, but they bring the whole dataset into memory.</p> <pre class="lang-py prettyprint-override"><code>model = keras.Sequential([
  layers.Conv2D(64, (3, 3),
                input_shape=ds_shape[1:],
                activation=&quot;relu&quot;,
                data_format=&quot;channels_first&quot;),
  layers.MaxPooling2D(data_format=&quot;channels_first&quot;),
  layers.Conv2D(128, (3, 3),
                activation=&quot;relu&quot;,
                data_format=&quot;channels_first&quot;),
  layers.MaxPooling2D(data_format=&quot;channels_first&quot;),
  layers.Flatten(),
  layers.Dense(8182, activation=&quot;relu&quot;),
  layers.Dense(np.prod(ds_shape[1:])),
  layers.Reshape(ds_shape[1:])
])

model.compile(loss=&quot;mean_absolute_error&quot;,
              optimizer=&quot;adam&quot;,
              metrics=[tf.keras.metrics.MeanSquaredError()])

hist = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs,
  # steps_per_epoch=len_train // batch_size,
  # validation_steps=len_val // batch_size,
  shuffle=True
)
</code></pre>
steps_per_epoch=len_train // batch_size, # validation_steps=len_val // batch_size, shuffle=True ) </code></pre>
<p>An alternative was to subclass <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence" rel="nofollow noreferrer">keras.utils.Sequence</a>. The idea is to generate a whole batch at a time.</p> <p>Quoting the docs:</p> <blockquote> <p>Sequence are a safer way to do multiprocessing. This structure guarantees that the network will only train once on each sample per epoch which is not the case with generators.</p> </blockquote> <p>To do so, you need to provide the <code>__len__()</code> and <code>__getitem__()</code> methods.</p> <p>For the current example:</p> <pre class="lang-py prettyprint-override"><code>class DS(keras.utils.Sequence):
    def __init__(self, ds_mmap, start, stop, batch_size):
        self.ds = ds_mmap[start: stop]  # slicing a memmap keeps it lazy
        self.batch_size = batch_size

    def __len__(self):
        # divide-ceil
        return -(-len(self.ds) // self.batch_size)

    def __getitem__(self, idx):
        start = idx * self.batch_size
        stop = (idx + 1) * self.batch_size
        batch_y = self.ds[start + 1: stop + 1]
        batch_x = self.ds[start: stop][: len(batch_y)]
        return batch_x, batch_y
</code></pre> <pre class="lang-py prettyprint-override"><code>for ds_type in tvt:
    start, stop = tvt_limits[ds_type]
    # assign into the dict; rebinding a loop variable would not update it
    tvt[ds_type] = DS(ds_mmap, start, stop, batch_size)
</code></pre> <p>In that case, you need to define the number of steps explicitly and NOT pass a <code>batch_size</code>:</p> <pre class="lang-py prettyprint-override"><code>hist = model.fit(
    tvt[&quot;train&quot;],
    validation_data=tvt[&quot;val&quot;],
    epochs=epochs,
    steps_per_epoch=len_train // batch_size,
    validation_steps=len_val // batch_size,
    shuffle=True
)
</code></pre> <p>Still, I didn't get <code>from_generator()</code> to work and I would like to know how.</p>
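<p>As for why <code>from_generator()</code> still materializes everything: a likely culprit is the <code>args</code> parameter, since its values are converted to <code>tf.Tensor</code> objects before being handed to the generator, so passing the memmap through <code>args</code> turns the whole array into an in-memory constant. A sketch of a workaround (untested, assuming the memmap lives in <code>ds.npy</code>) is to pass only the file path and the offsets, and reopen the memmap lazily inside the generator; note that string <code>args</code> arrive as <code>bytes</code>:</p> <pre class="lang-py prettyprint-override"><code>def gen(path, start, stop):
    # args are delivered as numpy values; decode the byte string back to a path
    path = path.decode() if isinstance(path, bytes) else path
    ds = np.load(path, mmap_mode='r')  # lazy view, nothing is read yet
    for i in range(start, stop):
        yield (ds[i], ds[i + 1])       # only the touched slices hit memory

train = tf.data.Dataset.from_generator(
    generator=gen,
    output_types=(tf.float64, tf.float64),
    output_shapes=(ds_shape[1:], ds_shape[1:]),
    args=[&quot;ds.npy&quot;, 0, len_train]
)
</code></pre>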
python|tensorflow|keras|deep-learning
-2
7,895
48,934,658
mysterious Python Pandas lambda function error
<p>I have a pandas dataframe with a column called 'email'. I have verified that the dtype is object. It contains normally formatted emails such as xxx@yyy.com.</p> <p>When I do this:</p> <pre><code>df['emaillower'] = df['email'].apply(lambda x: x.upper())
</code></pre> <p>I get this:</p> <pre><code>Traceback (most recent call last):
  File "&lt;ipython-input-153-e951d53133eb&gt;", line 1, in &lt;module&gt;
    df['emaillower'] = df['email'].apply(lambda x: x.upper())
  File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\core\series.py", line 2355, in apply
    mapped = lib.map_infer(values, f, convert=convert_dtype)
  File "pandas\_libs\src\inference.pyx", line 1569, in pandas._libs.lib.map_infer (pandas\_libs\lib.c:66440)
  File "&lt;ipython-input-153-e951d53133eb&gt;", line 1, in &lt;lambda&gt;
    df['emaillower'] = df['email'].apply(lambda x: x.upper())
AttributeError: 'float' object has no attribute 'upper'
</code></pre> <p>What is going on?</p>
<p>One of the entries in the 'email' column is a float, not a string, and a float has no <code>upper()</code> method. This is common when an entry is empty: pandas reads it as NaN, which is a float, and that is the source of your error. Something like this may fix the problem:</p> <pre><code>df['emaillower'] = df['email'].apply(lambda x: x.upper() if type(x) is str else 'empty')
</code></pre> <p>Also note that you call the column emaillower but you are actually making it upper case; this might cause some confusion in the future.</p>
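<p>A vectorized alternative (a sketch, assuming the offending entries really are NaN from empty cells): the pandas <code>.str</code> accessor skips NaN instead of raising, so no type check is needed:</p> <pre><code>df['emaillower'] = df['email'].str.lower()  # NaN entries stay NaN
# or, to mirror the placeholder used above:
df['emaillower'] = df['email'].str.lower().fillna('empty')
</code></pre>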
python|pandas|lambda
3
7,896
58,827,917
AttributeError: module 'keras.backend' has no attribute '_BACKEND'
<p>I am following a book on building chat bots and keep running into this error when attempting to start interactive learning.</p> <p>The full error is this:</p> <blockquote> <p>Traceback (most recent call last):
File "train_initialize.py", line 18, in &lt;module&gt;
agent = Agent("horoscope_domain.yml", policies = [MemoizationPolicy(), KerasPolicy()])
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages\rasa_core\policies\keras_policy.py", line 31, in __init__
if KerasPolicy.is_using_tensorflow() and not graph:
File "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages\rasa_core\policies\keras_policy.py", line 48, in is_using_tensorflow
return keras.backend._BACKEND == "tensorflow"
AttributeError: module 'keras.backend' has no attribute '_BACKEND'</p> </blockquote> <p>My code looks like this:</p> <pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from rasa_core import utils
import tensorflow.keras.backend
from rasa_core.agent import Agent
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy
from rasa_core.policies.sklearn_policy import SklearnPolicy

if __name__ == '__main__':
    utils.configure_colored_logging(loglevel="DEBUG")
    training_data_file = './data/stories.md'
    model_path = './models/dialogue'
    agent = Agent("horoscope_domain.yml", policies = [MemoizationPolicy(), KerasPolicy()])
    training_data = agent.load_data(training_data_file)
    agent.train(training_data, augmentation_factor = 50, epochs = 500,
                batch_size = 10, validation_split = 0.2)
    agent.persist(model_path)
</code></pre>
<p>Looks like outdated API code; open the files in the error trace, and replace <code>._BACKEND</code> with the public <code>.backend()</code> accessor:</p> <pre class="lang-py prettyprint-override"><code># In "C:\Users\Max\AppData\Local\Programs\Python\Python37\lib\site-packages
#     \rasa_core\policies\keras_policy.py", line 48:

# return keras.backend._BACKEND == "tensorflow"  # &lt;-- DELETE
return keras.backend.backend() == "tensorflow"   # &lt;-- PASTE
</code></pre>
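<p>A quick way to confirm the patch took effect (just a sanity check, not rasa-specific):</p> <pre class="lang-py prettyprint-override"><code>import keras
print(keras.backend.backend())  # expected: 'tensorflow'
</code></pre> <p>Keep in mind that edits under site-packages are lost on reinstall; pinning mutually compatible keras and rasa_core versions (whichever the book targets) is the more durable fix.</p>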
python|tensorflow|keras|rasa-nlu|rasa
0
7,897
70,338,783
Double "melt" in a pandas dataframe from Excel file
<p>I am reading an Excel file in pandas with two levels for the columns. I am using Python 3.7.</p> <p><a href="https://i.stack.imgur.com/aqAUd.png" rel="nofollow noreferrer">Example Excel file</a></p> <pre><code>   Unnamed: 0  Unnamed: 1  Unnamed: 2  2021-01-01  2021-01-02  2021-01-03  2021-01-04  2021-01-05
0   ProjectNr        Name      Sector   categorya   categoryb   categoryc   categoryd   categorye
1           1         aaa          A1      14.995      14.995      14.995      14.995      14.995
2           2         aaa          A2      7.4975      7.4975      7.4975         NaN         NaN
3           3         aaa          A3         NaN      11.996      11.996      11.996         NaN
</code></pre> <p>I would like to transform the &quot;category&quot; and &quot;date&quot; rows into separate columns of the data frame. I tried <code>melt</code>, but I do not know how to do the second melt, or how to melt over the combined row headers.</p> <p>I would like to get something like:</p> <pre><code>ProjectNr  Name  Sector  Category   date        Price
1          aaa   A1      categorya  01/01/2021  € 15,00
1          aaa   A1      categoryb  02/01/2021  € 15,00
1          aaa   A1      categoryc  03/01/2021  € 15,00
1          aaa   A1      categoryd  04/01/2021  € 15,00
1          aaa   A1      categorye  05/01/2021  € 15,00
2          aaa   A2      categorya  01/01/2021  € 7,50
2          aaa   A2      categoryb  02/01/2021  € 7,50
2          aaa   A2      categoryc  03/01/2021  € 7,50
2          aaa   A2      categoryd  04/01/2021
2          aaa   A2      categorye  05/01/2021
3          aaa   A3      categorya  01/01/2021
3          aaa   A3      categoryb  02/01/2021  € 12,00
3          aaa   A3      categoryc  03/01/2021  € 12,00
3          aaa   A3      categoryd  04/01/2021  € 12,00
3          aaa   A3      categorye  05/01/2021
</code></pre> <p>If I create the df with <code>header=[0, 1]</code>, melt breaks on the column names. Without headers, melt only works on one column level. Example:</p> <pre><code>   Unnamed: 0  Unnamed: 1  Unnamed: 2          dt      value
0   ProjectNr        Name      Sector  2021-01-01  categorya
1           1         aaa          A1  2021-01-01     14.995
2           2         aaa          A2  2021-01-01     7.4975
3           3         aaa          A3  2021-01-01        NaN
4   ProjectNr        Name      Sector  2021-01-02  categoryb
5           1         aaa          A1  2021-01-02     14.995
6           2         aaa          A2  2021-01-02     7.4975
</code></pre> <p>How can I melt over both levels of the headers?</p>
<p>First of all, we need to read the Excel file properly:</p> <pre><code>df = pd.read_excel('~/test.xlsx', header=[0, 1], index_col=[0, 1, 2])
</code></pre> <p>Stack the <code>MultiIndex</code> column levels that you need, keeping the <code>NaN</code>s, and then reset the index:</p> <pre><code>df = df.stack(level=[1, 0], dropna=False).reset_index()
</code></pre> <p>Finally, rename the columns:</p> <pre><code>df.columns = ['ProjectNr', 'Name', 'Sector', 'Category', 'date', 'Price']
</code></pre>
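<p>Since the title asks about melt: the same reshape can be done with a single <code>melt</code>, because melting a frame with MultiIndex columns emits one variable column per level. A sketch (assuming pandas &gt;= 1.1 for <code>ignore_index</code>, and the same file layout as above):</p> <pre><code>df = pd.read_excel('~/test.xlsx', header=[0, 1], index_col=[0, 1, 2])
# melt both column levels at once; keep the row index so it can be restored
long = df.melt(ignore_index=False).reset_index()
# melt names the level columns variable_0 (date) and variable_1 (category)
long.columns = ['ProjectNr', 'Name', 'Sector', 'date', 'Category', 'Price']
</code></pre>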
python-3.x|excel|pandas|melt
2
7,898
70,359,235
Calculate Weights of a Column in Pandas
<p>This is a basic question and easy to do in Excel, but I have no idea how to do it in Python, and every example online uses groupby with repeated names in the name column. All I need is each row's weight within a single column. Suppose I have data that looks like this:</p> <pre><code>  name  value
0    A     45
1    B     76
2    C    320
3    D    210
</code></pre> <p>The answer should look like this:</p> <pre><code>  name  value   weights
0    A     45  0.069124
1    B     76  0.116743
2    C    320  0.491551
3    D    210  0.322581
</code></pre> <p>thank you,</p>
<p>Since every row here is its own entry, just divide each value by the column total:</p> <pre><code>df['weights'] = df['value'] / df['value'].sum()
</code></pre> <p>Output:</p> <pre><code>  name  value   weights
0    A     45  0.069124
1    B     76  0.116743
2    C    320  0.491551
3    D    210  0.322581
</code></pre> <p>If the same name can appear on several rows and you want the weights within each name instead, you can also group by 'name' and divide each value by its group sum:</p> <pre><code>df['weights'] = df['value'] / df.groupby('name')['value'].transform('sum')
</code></pre>
pandas|calculated-columns|weighted
2
7,899
56,148,015
How can I change the order of the data record with regular expression and put it together in one single dataframe?
<p>What I want to know is how I can use the data frame below with regular expressions to put the data rows in the right order. As you can see at, for example, index 2 and 4, Quantity and Piece are in the wrong order. Does anyone have any idea how I can fix this?</p> <pre class="lang-py prettyprint-override"><code>data = [['Total 8\r\r\nQuantity 2\r\r\nPiece 4'], ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'],
        ['Total 8\r\r\nPiece 2\r\r\nQuantity 4'], ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'],
        ['Total 8\r\r\nPiece 2\r\r\nQuantity 4'], ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'],
        ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'], ['Total 8\r\r\nPiece 2\r\r\nQuantity 4'],
        ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'], ['Total 8\r\r\nPiece 2\r\r\nQuantity 4']]
df = pd.DataFrame(data, columns = ['Information'])
df

+-------+--------------------------------------+
| index |              Information             |
+-------+--------------------------------------+
|     0 | Total 8\r\r\nQuantity 2\r\r\nPiece 4 |
|     1 | Total 8\r\r\nQuantity 2\r\r\nPiece 4 |
|     2 | Total 8\r\r\nPiece 2\r\r\nQuantity 4 |
|     3 | Total 8\r\r\nQuantity 2\r\r\nPiece 4 |
|     4 | Total 8\r\r\nPiece 2\r\r\nQuantity 4 |
|     5 | Total 8\r\r\nQuantity 2\r\r\nPiece 4 |
|     6 | Total 8\r\r\nQuantity 2\r\r\nPiece 4 |
|     7 | Total 8\r\r\nPiece 2\r\r\nQuantity 4 |
|     8 | Total 8\r\r\nQuantity 2\r\r\nPiece 4 |
|     9 | Total 8\r\r\nPiece 2\r\r\nQuantity 4 |
+-------+--------------------------------------+

dt = pd.DataFrame(df)
data = []
for item in dt['Information']:
    regex = re.findall(r"(\d+)\D+(\d+)\D+(\d+)", item)
    quantity = re.findall(r"\bTotal\s?\d\D+(\bQuantity)", item)
    piece = re.findall(r"\bTotal\s?\d\D+(\bPiece)", item)
    regex = (map(list, regex))
    data.append(list(map(int, list(regex)[0])))

dftotal = pd.DataFrame(data, columns=['Total','Quantity','Piece'])
print(dftotal)
</code></pre> <p>With this code I get a dataframe like the one below:</p> <pre><code>+-------+----------+-------+
| Total | Quantity | Piece |
+-------+----------+-------+
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
+-------+----------+-------+
</code></pre> <p>How can I get a dataframe like the one below, where the values that my code pulled out in the wrong order are switched, so the right variables end up in a single dataframe?</p> <pre><code>+-------+----------+-------+
| Total | Quantity | Piece |
+-------+----------+-------+
|     8 |        2 |     4 |
|     8 |        4 |     2 |
|     8 |        2 |     4 |
|     8 |        4 |     2 |
|     8 |        2 |     4 |
|     8 |        2 |     4 |
|     8 |        4 |     2 |
|     8 |        2 |     4 |
|     8 |        4 |     2 |
+-------+----------+-------+
</code></pre>
<p>This is one approach using <code>str.extract</code>.</p> <p><strong>Ex:</strong></p> <pre><code>import pandas as pd

data = [['Total 8\r\r\nQuantity 2\r\r\nPiece 4'], ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'],
        ['Total 8\r\r\nPiece 2\r\r\nQuantity 4'], ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'],
        ['Total 8\r\r\nPiece 2\r\r\nQuantity 4'], ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'],
        ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'], ['Total 8\r\r\nPiece 2\r\r\nQuantity 4'],
        ['Total 8\r\r\nQuantity 2\r\r\nPiece 4'], ['Total 8\r\r\nPiece 2\r\r\nQuantity 4']]
df = pd.DataFrame(data, columns=['Information'])

df["Total"] = df["Information"].str.extract(r"Total (\d+)")
df["Quantity"] = df["Information"].str.extract(r"Quantity (\d+)")
df["Piece"] = df["Information"].str.extract(r"Piece (\d+)")
df.drop("Information", inplace=True, axis=1)
print(df)
</code></pre> <p><strong>Output:</strong></p> <pre><code>  Total Quantity Piece
0     8        2     4
1     8        2     4
2     8        4     2
3     8        2     4
4     8        4     2
5     8        2     4
6     8        2     4
7     8        4     2
8     8        2     4
9     8        4     2
</code></pre>
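<p>One small addition: <code>str.extract</code> returns strings, so if you need numeric columns, cast them afterwards:</p> <pre><code>df = df.astype({'Total': int, 'Quantity': int, 'Piece': int})
</code></pre>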
python|regex|pandas|dataframe
2