Unnamed: 0   int64           0 – 378k
id           int64           49.9k – 73.8M
title        stringlengths   15 – 150
question     stringlengths   37 – 64.2k
answer       stringlengths   37 – 44.1k
tags         stringlengths   5 – 106
score        int64           -10 – 5.87k
2,300
60,193,013
Does Pandas release the GIL for string comparison?
<p>I'm thinking of using Pandas to compare the values in certain columns between two large CSVs. The comparison is just a simple Pandas compare. Something like below:</p> <pre><code>flagged_cars = cars.loc[cars.name_L != cars.name_R].copy() </code></pre> <p>Does Pandas string comparison require the GIL? Will Pandas use the additional cores in my laptop's CPU?</p> <p>I'm aware that I could write this without using pandas, but it would be very convenient if Pandas worked like this.</p>
<p>I know I'm 8 months late to your question, but no, pandas does not release the GIL for string comparison.</p>
python|pandas|large-files
1
2,301
59,912,850
Autoencoder MaxUnpool2d missing 'Indices' argument
<p>The following model returns the error: <strong>TypeError: forward() missing 1 required positional argument: 'indices'</strong></p> <p>I've exhausted many online examples and they all look similar to my code. My maxpool layer returns both the input and the indices for the unpool layer. Any ideas on what's wrong?</p> <pre><code>class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() self.encoder = nn.Sequential( ... nn.MaxPool2d(2, stride=1, return_indices=True) ) self.decoder = nn.Sequential( nn.MaxUnpool2d(2, stride=1), ... ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x </code></pre>
<p>Similar to the question <a href="https://stackoverflow.com/questions/53858626/pytorch-convolutional-autoencoders?rq=1">here</a>, the solution seems to be to separate the maxunpool layer from the decoder and explicitly pass its required parameters. The layers inside <code>nn.Sequential</code> are called with only one argument, so the unpool layer cannot receive the indices there.</p> <pre><code>class SimpleConvAE(nn.Module): def __init__(self): super().__init__() # input: batch x 3 x 32 x 32 -&gt; output: batch x 16 x 16 x 16 self.encoder = nn.Sequential( ... nn.MaxPool2d(2, stride=2, return_indices=True), ) self.unpool = nn.MaxUnpool2d(2, stride=2, padding=0) self.decoder = nn.Sequential( ... ) def forward(self, x): encoded, indices = self.encoder(x) out = self.unpool(encoded, indices) out = self.decoder(out) return (out, encoded) </code></pre>
pytorch
5
2,302
60,003,512
Any special algorithm for pedestrian detection alone?
<p>I need to detect the pedestrians who are using a zebra crossing. I implemented it using the YOLO algorithm, but it detects everyone, not only the pedestrians. So is there any method or special algorithm for pedestrians alone? If not, how can I train my new model?</p>
<p>From YOLO you should not only get the detections but also the classes. Your model was most likely trained on the COCO dataset, which has a fixed list of object classes it can detect.</p> <p>You can find that list here: <a href="https://github.com/pjreddie/darknet/blob/master/data/coco.names" rel="nofollow noreferrer">https://github.com/pjreddie/darknet/blob/master/data/coco.names</a></p> <p>You can see that the class "person" is at the 0th position in the list, so if you only use the detections from that class, you will only get the bounding boxes with persons in them!</p>
opencv|machine-learning|computer-vision|video-processing|tensorflow2.0
0
2,303
60,024,262
Error converting object (string) to Int32: TypeError: object cannot be converted to an IntegerDtype
<p>I get the following error while trying to convert an object (string) column in Pandas to <code>Int32</code>, which is an integer type that allows for <code>NA</code> values.</p> <pre><code>df.column = df.column.astype('Int32') </code></pre> <blockquote> <p>TypeError: object cannot be converted to an IntegerDtype</p> </blockquote> <p>I'm using pandas version: 0.25.3</p>
<p>It's a known bug, as explained <a href="https://github.com/pandas-dev/pandas/issues/25472" rel="noreferrer">here</a>.</p> <p>The workaround is to convert the column first to <code>float</code> and then to <code>Int32</code>.</p> <p>Make sure you strip whitespace from your column before you do the conversion:</p> <pre><code>df.column = df.column.str.strip() </code></pre> <p>Then do the conversion:</p> <pre><code>df.column = df.column.astype('float') # first convert to float before int df.column = df.column.astype('Int32') </code></pre> <p>or, more simply:</p> <pre><code> df.column = df.column.astype('float').astype('Int32') # or Int64 </code></pre>
python|pandas
35
2,304
65,406,327
How to create a boolean array where the value is based on an array of indices?
<p>Let's say I have a numpy array <code>A</code> as follows:</p> <pre><code>A = array([[0, 2], [1, 2], [0, 1]]) </code></pre> <p>I created a boolean array <code>B</code> using <code>np.zeros</code> as follows:</p> <pre><code>B = array([[False, False, False], [False, False, False], [False, False, False]]) </code></pre> <p>Now, I want to create an array <code>C</code> that is True at the column indices given by each row of <code>A</code>.</p> <p>So,</p> <pre><code>C = array([[True, False, True], [False, True, True], [True, True, False]]) </code></pre>
<p>You can do this using some of NumPy's relatively advanced indexing techniques:</p> <pre><code>In [27]: B[np.arange(A.shape[0])[:,None], A] = True In [28]: B Out[28]: array([[ True, False, True], [False, True, True], [ True, True, False]]) </code></pre> <p>The <code>np.arange(A.shape[0])[:,None]</code> creates the following array</p> <pre><code>array([[0], [1], [2]]) </code></pre> <p>to be used as indices for the first axis of the <code>B</code> array. The <code>[:,None]</code> here transforms the one-dimensional <code>range</code> object into a two-dimensional array so it broadcasts against the second-axis indices (the <code>A</code> array), which are 2-dimensional.</p>
python|numpy
3
2,305
49,950,261
searching a word in the column pandas dataframe python
<p>I have two text columns and I would like to find whether a word from one column is present in another. I wrote the below code, which works very well, but it detects if a word is present anywhere in the string. For example, it will find "ha" in "ham". I want to use regex expression instead, but I am stuck. I came across this <a href="https://stackoverflow.com/questions/5319922/python-check-if-word-is-in-a-string">post</a> and looked at the second answer, but I haven't been able to modify it for my purpose. I would like to do something similar. </p> <p>I would appreciate help and/or any pointers</p> <pre><code>d = {'emp': ['abc d. efg', 'za', 'sdfadsf '], 'vendor': ['ABCD enterprise', 'za industries', '' ]} df = pd.DataFrame(data=d) df['clean_empy_name']=df["emp"].str.lower().str.replace('\W', ' ') def check_subset(vendor, employee): s = [] for n in employee.split(): # n=" " + n +"[^a-zA-Z\d:]" if ((str(n) in vendor.lower()) &amp; (len(str(n))&gt;1)): s.append(n) return s check_subset("ABC-xy 54", "54 xy") df['emp_name_find_in_vendor'] = df.apply(lambda row: check_subset(row['vendor'],row['clean_empy_name']), axis=1) df </code></pre> #########update 2 <p>i updated my dataframe as below</p> <pre><code>d = {'emp': ['abc d. efg', 'za', 'sdfadsf ','abc','yuma'], 'vendor': ['ABCD enterprise', 'za industries', '','Person Vue\Cisco','U OF M CONTLEARNING' ]} df = pd.DataFrame(data=d) df['clean_empy_name']=df["emp"].str.lower().str.replace('\W', ' ') </code></pre> <p>I used code provided by first answer and it fails</p> <ol> <li>in case of <code>'Person Vue\Cisco'</code> it throws the error <code>error: bad escape \c</code>. If i remove \ in <code>'Person Vue\Cisco'</code>, code runs fine</li> <li>in case of <code>'U OF M CONTLEARNING'</code> it return <code>u</code> and <code>m</code> when clearly they are not a match </li> </ol>
<p>Yes, you can! It is going to be a little bit messy so let me construct in a few steps:</p> <p>First, let's just create a regular expression for the single case of <code>check_subset("ABC-xy 54", "54 xy")</code>: </p> <ul> <li>We will use <code>re.findall(pattern, string)</code> to find all the occurrences of <code>pattern</code> in <code>string</code></li> <li>The regex pattern will basically say "any of the words": <ul> <li>for the "any" we use the <code>|</code> (or) operator</li> <li>for constructing words we need to use the parenthesis to group together... However, parenthesis <code>(word)</code> create a group that keeps track, so we could later call reuse these groups, since we are not interested we can create a non-capturing group by adding <code>?:</code> as follows: <code>(?:word)</code></li> </ul></li> </ul> <pre class="lang-py prettyprint-override"><code>import re re.findall('(?:54)|(?:xy)', 'ABC-xy 54') # -&gt; ['xy', '54'] </code></pre> <p>Now, we have to construct the <code>pattern</code> each time: </p> <ul> <li>Split into words</li> <li>Wrap each word inside a non-capturing group <code>(?:)</code></li> <li>Join all of these groups by <code>|</code></li> </ul> <pre class="lang-py prettyprint-override"><code>re.findall('|'.join(['(?:'+x+')' for x in '54 xy'.split()]), 'ABC-xy 54') </code></pre> <p>One minor thing, since the last row's vendor is empty and you seem to want no matches (technically, the empty string matches with everything) we have to add a minor check. So we can rewrite your function to be:</p> <pre><code>def check_subset_regex(vendor, employee): if vendor == '': return [] pattern = '|'.join(['(?:'+x+')' for x in vendor.lower().split(' ')]) return re.findall(pattern, employee) </code></pre> <p>And then we can apply the same way:</p> <pre><code>df['emp_name_find_in_vendor_regex'] = df.apply(lambda row: check_subset_regex(row['vendor'],row['clean_empy_name']), axis=1) </code></pre> <p>One final comment is that your solution matches partial words, so employee Tom Sawyer would match "Tom" to the vendor "Atomic S.A.". The regex function I provided here will not give this as a match, should you want to do this the regex would become a little more complicated.</p> <hr> <p><strong>EDIT:</strong> Removing punctuation marks from vendors </p> <p>You could either add a new column as you did with clean_employee, or simply add the removal to the function, as so (you will need to <code>import string</code> to get the <code>string.punctuation</code>, or just add in there a string with all the symbols you want to substitute):</p> <pre><code>def check_subset_regex(vendor, employee): if vendor == '': return [] clean_vnd = re.sub('[' + string.punctuation + ']', '', vendor) pattern = '|'.join(['(?:'+x+')' for x in clean_vnd.lower().split(' ')]) return re.findall(pattern, employee) </code></pre> <p>In the spirit of teaching to fish :), in regex the <code>[]</code> denote any of these characters... So <code>[abc]</code> would be the same to <code>a|b|c</code>. 
</p> <p>So the <code>re.sub</code> line will substitute any occurrence of the <code>string.punctuation</code> (which evaluates to <code>!"#$%&amp;\'()*+,-./:;&lt;=&gt;?@[\\]^_`{|}~</code>) characters by a <code>''</code> (removing them).</p> <hr> <p><strong>EDIT2:</strong> Adding the possibility of a single non-alphanumeric character at the end of each searchword: </p> <pre><code>def check_subset_regex(vendor, employee): if vendor == '': return [] clean_vnd = re.sub('[' + string.punctuation + ']', '', vendor) pattern = '|'.join(['(?:'+x+'[^a-zA-Z0-9]?)' for x in clean_vnd.lower().split(' ')]) return re.findall(pattern, employee) </code></pre> <p>In this case we are using:<br> - <code>^</code> as the first character inside a <code>[]</code> (called character class), denotes any character except for those specified in the character class, e.g. <code>[^abc]</code> would match <em>anything</em> that is not <code>a</code> or <code>b</code> or <code>c</code> (so <code>d</code>, or a white space, or <code>@</code>) - and the <code>?</code>, which means the previous symbol is optional...</p> <p>So, <code>[^a-zA-Z0-9]?</code> means an optional single non-alphanumeric character.</p>
python|regex|pandas
3
2,306
63,799,693
pandas DatetimeIndex to matplotlib x-ticks
<p>I have a pandas DataFrame with a date index looking like this:</p> <pre><code>Date 2020-09-03 2020-09-04 2020-09-07 2020-09-08 </code></pre> <p>The dates are missing a few entries, since it's only data for weekdays.</p> <p>The thing I want to do is: plot the figure and set an x tick on every Monday of the week.</p> <p>So far I've tried:</p> <pre><code>date_form = DateFormatter(&quot;%d. %b %Y&quot;) ax4.xaxis.set_major_formatter(date_form) ax4.xaxis.set_major_locator(mdates.WeekdayLocator(byweekday=MO)) </code></pre> <p>But it will start with 1970 and not with the actual date index.</p> <p>Then I tried:</p> <pre><code>mdates.set_epoch('First day of my data') </code></pre> <p>But this won't help since Saturday and Sunday are skipped in my original index.</p> <p>Any ideas what I could do?</p>
<p>If you draw a line plot with one axis of <em>datetime</em> type, the most natural solution is to use <em>plot_date</em>.</p> <p>I created an example DataFrame like below:</p> <pre><code> Amount Date 2020-08-24 210 2020-08-25 220 2020-08-26 240 2020-08-27 215 2020-08-28 243 ... </code></pre> <p><em>Date</em> (the index) is of <em>datetime</em> type.</p> <p>The code to draw is:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.dates as mdates fig, ax = plt.subplots() plt.xticks(rotation=30) ax.plot_date(df.index, df.Amount, linestyle='solid') ax.xaxis.set_major_locator(mdates.WeekdayLocator(byweekday=0)) plt.grid() plt.show() </code></pre> <p>The picture I got is:</p> <p><a href="https://i.stack.imgur.com/Rjav5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rjav5.png" alt="enter image description here" /></a></p> <p>As you can see, there is absolutely no problem with <em>x</em> ticks and they are just on Mondays, as you want.</p>
python-3.x|pandas|datetime|matplotlib
1
2,307
63,787,848
Find row value based on one column in another column and do calculation
<p>I have a dataframe:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd data = pd.DataFrame({'start':['2020-08-01','2020-08-02','2020-08-03','2020-08-04','2020-08-05','2020-08-06','2020-08-07','2020-08-08'], 'end':['2020-08-03','2020-08-03','2020-08-06','2020-08-06','2020-08-06','2020-08-08','2020-08-08','2020-08-08'], 'score':[74, 81, 38, 49, 79, 17, 53, 69]}) </code></pre> <p>that I need to compute the <code>score</code> difference between <code>start</code> date and its corresponding <code>end</code> date as:</p> <pre class="lang-python prettyprint-override"><code> start end score result 0 2020-08-01 2020-08-03 74 36 # 74-38 as score on 08/03 is 38 1 2020-08-02 2020-08-03 81 43 # 81-38 2 2020-08-03 2020-08-06 38 21 # 38-17 as score on 08/06 is 17 3 2020-08-04 2020-08-06 49 32 # 49-17 4 2020-08-05 2020-08-06 79 62 # 79-17 5 2020-08-06 2020-08-08 17 -52 # 17-69 as score on 08/08 is 69 6 2020-08-07 2020-08-08 53 -16 # 53-69 7 2020-08-08 2020-08-08 69 0 # 69-69 </code></pre> <p>Is there a good <code>pandas</code> way to do this? Many thanks!</p>
<p>If all <code>start</code> values are unique, subtract the mapped values:</p> <pre><code>data['result'] = data['score'].sub(data['end'].map(data.set_index('start')['score'])) print (data) start end score result 0 2020-08-01 2020-08-03 74 36 1 2020-08-02 2020-08-03 81 43 2 2020-08-03 2020-08-06 38 21 3 2020-08-04 2020-08-06 49 32 4 2020-08-05 2020-08-06 79 62 5 2020-08-06 2020-08-08 17 -52 6 2020-08-07 2020-08-08 53 -16 7 2020-08-08 2020-08-08 69 0 </code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (data['end'].map(data.set_index('start')['score'])) 0 38 1 38 2 17 3 17 4 17 5 69 6 69 7 69 Name: end, dtype: int64 </code></pre>
python|pandas|dataframe
3
2,308
64,015,999
plt.plot draws multiple curves instead of a single curve
<p>here is the link to the dataset I used: <a href="https://drive.google.com/file/d/1p7OsIq9koVC9gpreNjBiHia4MKh10fP4/view?usp=sharing" rel="nofollow noreferrer">Dataset</a></p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pandas as pd #Lets begin with polynomial regression df = pd.read_excel('enes.xlsx', index='hacim') X=pd.DataFrame(df['hacim']) Y=pd.DataFrame(df['delay']) from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures poly_reg = PolynomialFeatures(degree = 4) X_poly = poly_reg.fit_transform(X) lin_reg_2 = LinearRegression() lin_reg_2.fit(X_poly, Y) plt.scatter(X, Y, color = 'red') plt.plot(X, lin_reg_2.predict(poly_reg.fit_transform(X)), color = 'blue') plt.title('X Vs Y') plt.xlabel('hacim') plt.ylabel('delay') plt.show() </code></pre> <p>Last plt.show shows a graph where there are many lines instead of a 1 lined polynomial regression i desired. what wrong and how can ı fix this?</p> <h2>Data</h2> <pre class="lang-py prettyprint-override"><code>,hacim,delay 0,815,1.44 1,750,1.11 2,321,2.37 3,1021,1.44 4,255,1.09 5,564,1.61 6,1455,15.27 7,525,2.7 8,1118,106.98 9,1036,3.47 10,396,1.34 11,1485,21.49 12,1017,12.22 13,1345,2.72 14,312,1.71 15,742,33.79 16,1100,39.62 17,1445,4.88 18,847,1.55 19,991,1.82 20,1296,10.77 21,854,1.81 22,1198,61.9 23,1162,8.22 24,1463,42.25 25,1272,4.31 26,745,2.36 27,521,2.14 28,1247,94.33 29,732,12.55 30,489,1.05 31,1494,12.78 32,591,3.18 33,257,1.18 34,602,4.24 35,335,2.06 36,523,3.63 37,752,7.61 38,349,1.76 39,771,0.79 40,855,39.08 41,948,3.95 42,1378,97.28 43,598,2.69 44,558,1.67 45,634,34.69 46,1146,12.22 47,1087,1.74 48,628,1.03 49,711,3.34 50,1116,7.27 51,748,1.09 52,1212,14.16 53,434,1.42 54,1046,8.25 55,568,1.33 56,894,2.61 57,1041,4.79 58,801,1.84 59,1387,11.5 60,1171,161.21 61,734,2.43 62,1471,17.42 63,461,1.42 64,751,2.36 65,898,2.4 66,593,1.74 67,942,3.39 68,825,1.09 69,715,20.23 70,725,5.43 71,1128,7.57 72,1348,4.49 73,1393,9.77 74,1379,97.76 75,859,2.59 76,612,15.98 77,1495,8.22 78,887,1.85 79,867,38.65 80,1353,1.6 81,851,60.25 82,1079,24.05 83,1100,25.58 84,638,1.23 85,1115,1.94 86,1443,4.79 87,1421,10.33 88,1279,7.29 89,1176,173.44 90,315,1.53 91,1019,34.03 92,1337,48.67 93,576,28.83 94,919,2.88 95,361,1.5 96,989,1.47 97,1286,32.11 </code></pre>
<p>Let's use pandas plot, it is much easier:</p> <pre><code>X=pd.DataFrame(df['hacim']) Y=pd.DataFrame(df['delay']) from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures poly_reg = PolynomialFeatures(degree = 4) X_poly = poly_reg.fit_transform(X) lin_reg_2 = LinearRegression() lin_reg_2.fit(X_poly, Y) df['y_pred'] = lin_reg_2.predict(poly_reg.fit_transform(X)) df = df.sort_values('hacim') ax = df.plot.scatter('hacim','delay') df.plot('hacim', 'y_pred', ax=ax, color='r') plt.title('X Vs Y') plt.xlabel('hacim') plt.ylabel('delay') plt.show() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/qisoq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qisoq.png" alt="enter image description here" /></a></p> <p>The root cause of the multiple lines was unsorted data when plotting the line graph.</p> <p>You could do this:</p> <pre><code>plt.plot(X, lin_reg_2.predict(poly_reg.fit_transform(X)), color = 'blue', marker='o', linestyle='none') </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/xCtHR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xCtHR.png" alt="enter image description here" /></a></p>
python|pandas|numpy|matplotlib
1
2,309
63,986,764
How to use Fiscal Year values in Python?
<p>I am working with some historical data on fiscal transfers in Canada. The downloaded data is in the format of fiscal year i.e.</p> <pre><code>Year Quebec Alberta 1980-1981 2000 4000 1981-1982 3000 6000 </code></pre> <p>I am using the pandas library. However, when I try to make any visualizations using either matplot or sns, it generates an error either not recognizing 'Year' as a numerical value or ('DataFrame' object has no attribute 'Year'). However, when I change the values in the csv to a single year i.e.</p> <pre><code>Year Quebec Alberta 1980 2000 4000 1981 3000 6000 </code></pre> <p>it works perfectly fine. Is there a way for Python to treat fiscal year values like 1980-1981 the same as normal year. Any advice would be much appreciated.</p>
<p>You can use 2-year <a href="https://pandas.pydata.org/docs/user_guide/timeseries.html#time-span-representation" rel="nofollow noreferrer">periods</a>, but if you print the DataFrame column you cannot see the end year:</p> <pre><code>print (df) Year Quebec Alberta 0 1980 2000 4000 1 1981 3000 6000 df['Year'] = df['Year'].apply(lambda x: pd.Period(x, freq='2A-DEC')) </code></pre> <hr /> <pre><code>print (df['Year']) 0 1980 1 1981 Name: Year, dtype: period[2A-DEC] print (df['Year'].dt.to_timestamp('A', how='s')) 0 1980-12-31 1 1981-12-31 Name: Year, dtype: datetime64[ns] print (df['Year'].dt.to_timestamp('A', how='e')) 0 1981-12-31 23:59:59.999999999 1 1982-12-31 23:59:59.999999999 Name: Year, dtype: datetime64[ns] </code></pre> <p>But I think the easiest approach is to create 2 columns for the start and end year:</p> <pre><code>print (df) Year Quebec Alberta 0 1980-1981 2000 4000 1 1981-1982 3000 6000 df[['StartYear','EndYear']] = df['Year'].str.split('-', expand=True).astype(int) print (df) Year Quebec Alberta StartYear EndYear 0 1980-1981 2000 4000 1980 1981 1 1981-1982 3000 6000 1981 1982 </code></pre>
python|pandas|dataframe|data-visualization
0
2,310
63,895,190
How to update selected datetime64 values in a pandas dataframe?
<p>I am trying to update selected datetime64 values in a pandas data frame using the loc method to select rows satisfying a condition. However, instead of assigning the new date-time value it results in NaT.</p> <p>Here is a simplification of my code that shows the problem:</p> <pre><code>import pandas as pd import numpy as np datetime = (np.datetime64('1899-12-30'), np.datetime64('1989-12-30'), np.datetime64('2199-12-30')) select = (0, 1, 0) df = pd.DataFrame(list(zip(datetime, select)), columns=['date_time', 'select']) # create a new column by subtracting 180 days df['new_date'] = df['date_time'] - pd.Timedelta(180, unit='d') # replace datetime with new date where select is true df.loc[(df['select'] == 1), ['date_time']] = df.loc[(df['select'] == 1), ['new_date']] print(df) # the second element of the date_time column is &quot;NaT&quot;, but this is not the desired outcome. # the desired behaviour is for it to be the same as the second element in the new_date column. </code></pre> <p>I've also put the code here: <a href="http://tpcg.io/vOoi87Gb" rel="nofollow noreferrer">http://tpcg.io/vOoi87Gb</a></p> <p>Any ideas on how this should be done or why this is not working as intended?</p> <p>Thanks for reading.</p>
<p>You should drop <code>[]</code> around the column name:</p> <pre><code>df.loc[(df['select'] == 1), 'date_time'] = df.loc[(df['select'] == 1), 'new_date'] </code></pre> <p>You can also drop the second boolean index:</p> <pre><code>df.loc[(df['select'] == 1), 'date_time'] = df['new_date'] </code></pre> <p>Also, <code>np.where</code>:</p> <pre><code>df['date_time'] = np.where(df['select']==1, df['new_date'], df['date_time']) </code></pre> <p><strong>Explanation</strong>: <code>df.loc[s, ['col_name']]</code> slices a <strong>dataframe</strong>, while <code>df.loc[s, 'col_name']</code> slices a series. When you do:</p> <pre><code>dataframe_slice = another_dataframe_slice </code></pre> <p>Pandas will try to align the index/columns of the two dataframe. In this case, the two slices have no common columns, so the updated dataframe has <code>NaN</code> values where <code>select==1</code>.</p>
python|pandas|numpy|dataframe
1
2,311
46,812,804
How to do a second interpolation in python
<p>I did my first interpolation with <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.polyfit.html" rel="nofollow noreferrer">numpy.polyfit()</a> and <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.polyval.html#numpy.polyval" rel="nofollow noreferrer">numpy.polyval()</a> for 50 longitude values for a full satellite orbit.</p> <p>Now, I just want to look at a window of 0-4.5 degrees longitude and do a second interpolation so that I have 6,000 points for longitude in the window.</p> <p>I need to use the equation/curve from the first interpolation to create the second one because there is only one point in the window range. I'm not sure how to do the second interpolation.</p> <p><strong>Inputs:</strong></p> <pre><code>lon = [-109.73105744378498, -104.28690174554579, -99.2435132929552, -94.48533149079628, -89.91054414962821, -85.42671400689177, -80.94616150449806, -76.38135021210172, -71.6402674905218, -66.62178379632216, -61.21120467960157, -55.27684029674759, -48.66970878028004, -41.23083703244677, -32.813881865289346, -23.332386757370532, -12.832819226213942, -1.5659455609661785, 10.008077792630402, 21.33116444634303, 31.92601575632583, 41.51883213364072, 50.04498630545507, 57.58103957109249, 64.26993028992476, 70.2708323505337, 75.73441871754586, 80.7944079829813, 85.56734813043659, 90.1558676264546, 94.65309120129724, 99.14730128118617, 103.72658922048785, 108.48349841714494, 113.51966824008079, 118.95024882101737, 124.9072309203375, 131.5395221402974, 139.00523971191907, 147.44847902856114, 156.95146022590976, 167.46163867248032, 178.72228750873975, -169.72898181991064, -158.44642409799974, -147.8993300787564, -138.35373014113995, -129.86955508919888, -122.36868103811106, -115.70852432245486] myOrbitJ2000Time = [ 20027712., 20027713., 20027714., 20027715., 20027716., 20027717., 20027718., 20027719., 20027720., 20027721., 20027722., 20027723., 20027724., 20027725., 20027726., 20027727., 20027728., 20027729., 20027730., 20027731., 20027732., 20027733., 20027734., 20027735., 20027736., 20027737., 20027738., 20027739., 20027740., 20027741., 20027742., 20027743., 20027744., 20027745., 20027746., 20027747., 20027748., 20027749., 20027750., 20027751., 20027752., 20027753., 20027754., 20027755., 20027756., 20027757., 20027758., 20027759., 20027760., 20027761.] </code></pre> <p><strong>Code:</strong> </p> <pre><code>deg = 30 #polynomial degree for fit fittime = myOrbitJ2000Time - myOrbitJ2000Time[0] 'Longitude Interpolation' fitLon = np.polyfit(fittime, lon, deg) #gets fit coefficients polyval_lon = np.polyval(fitLon,fittime) #interp.s to get actual values 'Get Longitude values for a window of 0-4.5 deg Longitude' lonwindow =[] for i in range(len(polyval_lon)): if 0 &lt; polyval_lon[i] &lt; 4.5: # get lon vals in window lonwindow.append(polyval_lon[i]) #append lon vals lonwindow = np.array(lonwindow) </code></pre>
<p>First, generate the polynomial fit coefficients using the old time (x-axis) values and interpolated longitude (y-axis) values.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt poly_deg = 3 #degree of the polynomial fit polynomial_fit_coeff = np.polyfit(original_times, interp_lon, poly_deg) </code></pre> <p>Next, use np.linspace() to generate arbitrary time values based on the number of desired points in the window.</p> <pre><code>start = 0 stop = 4 num_points = 6000 arbitrary_time = np.linspace(start, stop, num_points) </code></pre> <p>Finally, use the fit coefficients and the arbitrary time to get the actual interpolated longitude (y-axis) values and plot.</p> <pre><code>lon_intrp_2 = np.polyval(polynomial_fit_coeff, arbitrary_time) plt.plot(arbitrary_time, lon_intrp_2, 'r') #interpolated window as a red curve plt.plot(myOrbitJ2000Time, lon, '.') #original data plotted as points </code></pre>
python|arrays|python-2.7|numpy|interpolation
0
2,312
63,136,885
Constructing a highly customized neural network in keras (weight sharing, custom connectivity)
<p>I'm trying to create a NN model for a specific problem in physical sciences, and my motivation is to reduce the number of weights and share weights based on physical insights. The neural net looks something like:</p> <p><a href="https://i.stack.imgur.com/ZIzJ6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZIzJ6.jpg" alt="enter image description here" /></a></p> <p>The input size is 2n, where the first n inputs (X<sub>1</sub> .. X<sub>n</sub>) are fed into the first hidden layer - however, the connectivity of the neural net is unique in that only one input is fed into each unit of the first hidden layer. Moreover, all the weights are shared among the inputs.</p> <p>Each unit in the second layer has 2 inputs - one from the previous layer and one directly from the raw input (X<sub>n+1</sub> .. X<sub>2n</sub>). The weights and biases are shared accordingly.</p> <p>Finally, the output layer has 1 unit with 100 inputs (all outputs from the 2nd hidden layer), and the same weight and bias is applied to each unit.</p>
<p>This code will solve your issue (under the assumption that you wanted to add up all the inputs to the output at the end) BTW, if you don't have any activation, the operation you described is linear and can be easily simplified.</p> <pre><code>import tensorflow.keras as keras import tensorflow.keras.layers as layers input_shape = [1] num_inputs = 4 inputs = [layers.Input(shape=input_shape, name=f&quot;input_{i}&quot;) for i in range(num_inputs)] x = [i for i in inputs] dense_1 = layers.Dense(units=1, use_bias=False, name=&quot;1&quot;) dense_21 = layers.Dense(units=1, use_bias=True, name=&quot;21&quot;) dense_22 = layers.Dense(units=1, use_bias=False, name=&quot;22&quot;) dense_3 = layers.Dense(units=1, use_bias=True, name=&quot;3&quot;) for i in range(num_inputs//2): # First hidden layer x[i] = dense_1(x[i]) # Second hidden layer x[i] = dense_21(x[i]) # Connect with the other inputs x[i + num_inputs // 2] = dense_22(x[i + num_inputs // 2]) x[i] = layers.Add()([x[i], x[i + num_inputs // 2]]) # Last one x[i] = dense_3(x[i]) # Add all x = layers.Add()(x[:num_inputs//2]) model = keras.Model(inputs=inputs, outputs=x) keras.utils.plot_model(model=model, to_file=&quot;model.png&quot;, show_shapes=True) </code></pre> <p>The plot of the above code is: <a href="https://i.stack.imgur.com/PTYUu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PTYUu.png" alt="enter image description here" /></a></p>
tensorflow|keras|neural-network|physics|biological-neural-network
1
2,313
63,039,838
How to make intervals of data after 6 rows using pandas
<p>Hi I have a scenario where I have to maintain a number after every 6 six row.</p> <p>For example here is my dataframe</p> <pre><code>client_id patient_id Total Clinic Clinic Number 172 6021 1 Clinic 1 172 6021 1 Clinic 1 172 6021 1 Clinic 1 172 6021 1 Clinic 1 172 6021 1 Clinic 1 172 6021 1 Clinic 1 172 6137 1 Clinic 1 172 6137 1 Clinic 1 172 6137 1 Clinic 1 172 6137 1 Clinic 1 172 6137 1 Clinic 1 172 6137 1 Clinic 1 187 5658 5 Clinic 1 187 5658 5 Clinic 1 187 5658 5 Clinic 1 187 5658 5 Clinic 1 187 5658 5 Clinic 1 187 5658 5 Clinic 1 187 5658 5 Clinic 2 187 5658 5 Clinic 2 187 5658 5 Clinic 2 187 5658 5 Clinic 2 187 5658 5 Clinic 2 187 5658 5 Clinic 2 </code></pre> <p>I want to achieve below results so that after every six rows index count will be updated</p> <pre><code>client_id patient_id Total Clinic Clinic Number Index_Number 172 6021 1 Clinic 1 1 172 6021 1 Clinic 1 1 172 6021 1 Clinic 1 1 172 6021 1 Clinic 1 1 172 6021 1 Clinic 1 1 172 6021 1 Clinic 1 1 172 6137 1 Clinic 1 2 172 6137 1 Clinic 1 2 172 6137 1 Clinic 1 2 172 6137 1 Clinic 1 2 172 6137 1 Clinic 1 2 172 6137 1 Clinic 1 2 187 5658 5 Clinic 1 3 187 5658 5 Clinic 1 3 187 5658 5 Clinic 1 3 187 5658 5 Clinic 1 3 187 5658 5 Clinic 1 3 187 5658 5 Clinic 1 3 187 5658 5 Clinic 2 4 187 5658 5 Clinic 2 4 187 5658 5 Clinic 2 4 187 5658 5 Clinic 2 4 187 5658 5 Clinic 2 4 187 5658 5 Clinic 2 4 </code></pre> <p>Need Help Thanks.</p>
<p>Create a column of ones; compute the cumulative sum and subtract 1 (to start from zero); and compute floordiv by 6 (i.e., integer division).</p> <pre><code> df['Index'] = 1 df['Index'] = df['Index'].cumsum() - 1 df['Index'] = df['Index'].floordiv(6) </code></pre>
python|pandas|data-cleaning|sklearn-pandas
1
2,314
63,303,505
How do I remove rows in a list containing numpy arrays based on a condition?
<p>I have the following numpy array <code>arr_split</code>:</p> <pre><code>import numpy as np arr1 = np.array([[1.,2,3], [4,5,6], [7,8,9]]) arr_split = np.array_split(arr1, indices_or_sections = 4, axis = 0) arr_split </code></pre> <p>Output:</p> <pre><code>[array([[1., 2., 3.]]), array([[4., 5., 6.]]), array([[7., 8., 9.]]), array([], shape=(0, 3), dtype=float64)] </code></pre> <p>How do I remove rows which are &quot;empty&quot; (ie. in the above eg., it's the last row). The array <code>arr_split</code> can have any number of &quot;empty&quot; rows. The above eg. just so happens to have only one row which is &quot;empty&quot;.</p> <p>I have tried using list comprehension, as per below:</p> <pre><code>arr_split[[(arr_split[i].shape[0] != 0) for i in range(len(arr_split))]] </code></pre> <p>but this doesn't work because the list comprehension <code>[(arr_split[i].shape[0] != 0) for i in range(len(arr_split))]</code> part returns a list, when I actually just need the elements in the list to feed into <code>arr_split[]</code> as indices.</p> <p>Anyone know how I could fix this or is there another way of doing this? If possible, looking for the easiest way of doing this without too many loops or if statements.</p>
<p>You can change the <code>indices_or_sections</code> value to the length of the first axis; this will prevent any empty arrays from being produced.</p> <pre><code>import numpy as np arr1 = np.array([[1.,2,3], [4,5,6], [7,8,9]]) arr_split = np.array_split(arr1, indices_or_sections = arr1.shape[0], axis = 0) arr_split &gt;&gt;&gt; [ array([[1., 2., 3.]]), array([[4., 5., 6.]]), array([[7., 8., 9.]]) ] </code></pre>
python|list|numpy|list-comprehension
3
2,315
67,782,893
Adding column to Pandas DataFrame based on dynamic indexing condition
<p>I have a dataframe with a column that randomly starts a &quot;count&quot; back at 1. My goal is to produce a new_col that divides my current column by the the last value in a count. See below for an example.</p> <p>This is my current DataFrame:</p> <pre><code> col 0 1.0 1 2.0 2 3.0 3 1.0 4 2.0 5 1.0 6 2.0 7 3.0 8 4.0 9 5.0 10 1.0 11 2.0 12 3.0 </code></pre> <p>Trying to get an output like so:</p> <pre><code> col new_col 0 1.0 0.333 1 2.0 0.667 2 3.0 1.000 3 1.0 0.500 4 2.0 1.000 5 1.0 0.200 6 2.0 0.400 7 3.0 0.600 8 4.0 0.800 9 5.0 1.000 10 1.0 0.333 11 2.0 0.667 12 3.0 1.000 </code></pre> <p>This is what I have tried so far:</p> <pre><code>df['col_bool'] = pd.DataFrame(df['col'] == 1.0) idx_lst = [x - 2 for x in df.index[df['col_bool']].tolist()] idx_lst = idx_lst[1:] mask = (df['col'] != 1.0) df_valid = df[mask] for i in idx_lst: df['new_col'] = 1.0 / df_valid.iloc[i]['col'] df.loc[mask, 'new_col'] = df_valid['col'] / df_valid.iloc[i]['col'] </code></pre> <p>This understandably results in an index error. Maybe I need to make a copy of a DataFrame each time and concat. I believe this would work but I want to ask if I am missing any shortcuts here?</p>
<p>Try:</p> <pre><code>df['new_col'] = df['col'].div(df.groupby((df['col'] == 1).cumsum()).transform('last')) </code></pre> <p>Output:</p> <pre><code> col new_col 0 1.0 0.333333 1 2.0 0.666667 2 3.0 1.000000 3 1.0 0.500000 4 2.0 1.000000 5 1.0 0.200000 6 2.0 0.400000 7 3.0 0.600000 8 4.0 0.800000 9 5.0 1.000000 10 1.0 0.333333 11 2.0 0.666667 12 3.0 1.000000 </code></pre>
python|pandas|dataframe
7
2,316
67,619,155
Create "denser" np.linspace with same points as original np.linspace
<p>I have a <code>base</code> array of equally spaced values <code>[0, 1, ..., 511]</code>. I need to create a <code>target</code> array over <code>[0 to 511]</code> that consists of approximately 4096 values. It must <strong>also</strong> contain all the values <code>0, 1, 2, ...</code> that are in <code>base</code>.</p> <pre><code>base = [0, 1, 2, 3, ..., 511] target = [0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1, ..., 511] </code></pre> <p>I have:</p> <pre class="lang-py prettyprint-override"><code>base = np.linspace(0, 511, 512) target = np.linspace(0, 511, 4096) </code></pre> <p>Unfortunately, <code>target</code> seems to be incorrect:</p> <pre><code>[0;0.124786;0.249573;0.374359;0.499145;0.623932;0.748718;0.873504;0.998291;...] </code></pre> <p>I need it to contain the numbers from <code>base</code>.</p>
<p>To construct an equally spaced sequence <code>(0, ..., b)</code> containing all integers within that interval, where <code>b</code> is an integer, choose any integer <code>k</code> and then:</p> <pre><code>np.linspace(0, b, k * b + 1) </code></pre> <p>In your case,</p> <pre><code>np.linspace(0, 511, 8 * 511 + 1) </code></pre>
python|arrays|numpy|linspace
0
2,317
61,463,925
how to print a table for a pandas data frame in Pyscripter?
<p>Is there a way to show a pandas data frame in PyScripter in a table form? Sure, a data frame shows up on the Python interface, but I could not find an option to print it in a more graphical, eye-friendly table form... Any help would be much appreciated.</p>
<p>Assuming you are using SQL to wrangle your data for your tabular analysis.</p> <p><strong>Visit</strong> <a href="https://mode.com/example-gallery/python_dataframe_styling/" rel="nofollow noreferrer">https://mode.com/example-gallery/python_dataframe_styling/</a></p> <p>It's a great place to learn dataframe styling.</p>
python|pandas|pyscripter
0
2,318
61,371,732
Bokeh - legend outside the plot
<p>There is plenty of issues like this one, but I couldn't find one with the approach I'm looking to solve it.</p> <p>In bokeh we cannot move a legend outside the plot, we have to create one. If we try nowadays to move the legend from inside to outside the legend dissapears. In the <a href="https://docs.bokeh.org/en/latest/docs/user_guide/styling.html#outside-the-plot-area" rel="nofollow noreferrer">documentation</a> (and in the solutions provided I have found in SO like <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwjGxuKVwvzoAhWZHLkGHYXJCEkQrAIoATAAegQIARAL&amp;url=https%3A%2F%2Fstackoverflow.com%2Fquestions%2F42646672%2Fbokeh-pandas-legend-outside-plot&amp;usg=AOvVaw3stH6AlHzmtJfEb4eDaXSh" rel="nofollow noreferrer">1</a>, <a href="https://stackoverflow.com/questions/53215062/bokeh-position-legend-outside-plot-area-for-stacked-vbar">2</a>, and <a href="https://stackoverflow.com/questions/48240867/how-can-i-make-legend-outside-plot-area-with-stacked-bar">this</a> solution does the copy but seems to use outdated fuctions), to plot an outside legend you need to create that legend from your data, not from the plot.</p> <p>However, is it possible to access the existing legend inside the plot, copying it, and creating the outside legend.. with that copy? </p> <p>I'm looking for this approach because I developed a function that creates this plot, with two x-axis (date and category per date) and two y-axis (percentage and integers), so creating a legend that fits with every color, every line-style, every time the categories changes, for every plot.. it's kind of complicated. So, as the inside legend is just perfect, I thought that just copying it to the outside new legend would be pretty straightforward, however I have not been able to do so.</p> <p>Any suggestions?</p>
<p>Something like this?</p> <pre class="lang-py prettyprint-override"><code>from bokeh.io import show from bokeh.models import Legend from bokeh.plotting import figure p = figure(tools=[]) p.circle(x=[0, 1], y=[0, 1], size=10, legend_label='Circle') legend = p.legend[0] p.center = [item for item in p.center if not isinstance(item, Legend)] p.add_layout(legend, 'right') show(p) </code></pre>
python|pandas|bokeh
1
2,319
61,278,924
Pandas plot countries total and newcol
<p>I am having an issue plotting multiple columns into a histogram plot</p> <pre><code>x1 = list(df[df['newcol'] == 0]['Country/Region']) x2 = list(df[df['newcol'] == 1]['Country/Region']) colors = ['r', 'c'] names = ['warm','cool'] plt.hist([x1, x2], bins = 1, normed=True, color = colors, label=names) </code></pre> <pre><code>Country/Region Total newcol USA 450 0 Andorra 225 1 Bahamas 300 1 Uk 150 0 Nigeria 189 0 </code></pre> <p>I want to have the countries on the x axis the Total on the y axis and then the bars be colored based on the newcol value for example USA will be colored green since it is associated with 0 and Bahamas would be colored blue because of 1. The code I am using above is giving me the color but since there are so many countries in the Country/Region Column the graph is scrunched and also the y axis is not giving me the correct numbers </p>
<p>I think you might need a bar plot instead of a histogram:</p> <pre><code>x1 = list(df[df['newcol'] == 0]['Country/Region']) x2 = list(df[df['newcol'] == 1]['Country/Region']) y1 = list(df[df['newcol'] == 0]['Total']) y2 = list(df[df['newcol'] == 1]['Total']) plt.bar(x1, y1, color='g') plt.bar(x2, y2, color='b') </code></pre> <p><a href="https://i.stack.imgur.com/4wywz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4wywz.png" alt="enter image description here"></a></p> <p>Is this what you needed?</p>
python|pandas
0
2,320
61,391,919
Loading image data from pandas to pytorch
<p>I am completely new to pytorch and have previously worked with keras and fastai. I am currently trying an image regression task and the challenge is that I have to load the data from a pandas dataframe. Data frame structure:</p> <pre><code>ID Path Score fig1 /folder/fig1.jpg 2 fig2 /folder/fig2.jpg 3 ..... </code></pre> <p>I have previously worked on loading images into pytorch directly from folders because it was a simple classification task, but I'm kind of stuck now.</p> <p>I looked into the pytorch forums but didn't quite understand how to implement this. Any help would be appreciated.</p>
<h2>Datasets</h2> <p>You have to use <code>torch.utils.data.Dataset</code> structure to define it. Here is how you can do it in plain <code>pytorch</code> (I'm using <code>pillow</code> to load the images and <code>torchvision</code> to transform them to <code>torch.Tensor</code> objects):</p> <pre><code>import torch import torchvision from PIL import Image class MyDataset(torch.utils.data.Dataset): def __init__(self, dataframe): self.dataframe = dataframe def __len__(self): return len(self.dataframe) def __getitem__(self, index): row = self.dataframe.iloc[index] return ( torchvision.transforms.functional.to_tensor(Image.open(row["Path"])), row["Score"], ) dataset = MyDataset(dataframe) </code></pre> <p>Alternatively, you can use <a href="https://github.com/szymonmaszke/torchdata" rel="noreferrer"><code>torchdata</code></a> (<strong>disclaimer: shameless self-promotion as I'm the author...</strong>) which allows you to decouple <code>Path</code> and <code>Scores</code> like this:</p> <pre><code>import torchvision from PIL import Image import torchdata class ImageDataset(torchdata.datasets.FilesDataset): def __getitem__(self, index): return Image.open(self.files[index]) class Labels(torchdata.Dataset): def __init__(self, scores): super().__init__() self.scores = scores def __len__(self): return len(self.scores) def __getitem__(self, index): return self.scores[index] # to_numpy for convenience # I assume all your images are in /folder and have *.jpg extension dataset = ImageDataset.from_folder("/folder", regex="*.jpg").map( torchvision.transforms.ToTensor() ) | Labels(dataframe["Score"].to_numpy()) </code></pre> <p>(or you could implement it just like in regular <code>pytorch</code> but inheriting from <code>torchdata.Dataset</code> and calling <code>super().__init__()</code> in the constructor).</p> <p><code>torchdata</code> allows you to cache your images easily or apply some other transformations via <code>.map</code> as shown there, <a href="https://github.com/szymonmaszke/torchdata" rel="noreferrer">check github repository for more info</a> or ask in the comment.</p> <h2>DataLoader</h2> <p>Either way you choose you should wrap your dataset in <code>torch.utils.data.DataLoader</code> to create batches and iterate over them, like this:</p> <pre><code>dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True) for images, scores in dataloader: # Rest of your code to train neural network or smth ... </code></pre> <p>Do with those images and scores what you want in the loop.</p>
python|pandas|deep-learning|pytorch
9
2,321
68,841,671
How to add a new column based on different conditions on other columns pandas
<p>This is my dataframe:</p> <pre><code>Date Month 04/21/2019 April 07/03/2019 July 01/05/2018 January 09/23/2019 September </code></pre> <p>I want to add a column called fiscal year. A new fiscal year starts on 1st of July every year and ends on the last day of June. So for example if the year is 2019 and month is April, it is still fiscal year 2019. However, if the year is 2019 but month is anything after June, it will be fiscal year 2020. The resulting data frame should look like this:</p> <pre><code> Date Month FY 04/21/2019 April FY19 07/03/2019 July FY20 01/05/2019 January FY19 09/23/2019 September FY20 </code></pre> <p>How do I achieve this?</p>
<p>Try via <code>pd.PeriodIndex()</code>+<code>pd.to_datetime()</code>:</p> <pre><code>df['Date']=pd.to_datetime(df['Date']) df['FY']=pd.PeriodIndex(df['Date'],freq='A-JUN').strftime(&quot;FY%y&quot;) </code></pre> <p>Output:</p> <pre><code> Date Month FY 0 2019-04-21 April FY19 1 2019-07-03 July FY20 2 2019-01-05 January FY19 3 2019-09-23 September FY20 </code></pre> <p><strong>Note:</strong> I suggest you convert your <code>'Date'</code> to datetime first and then do any operations on it, or, if you don't want to convert the <code>'Date'</code> column, use the above code in a single step:</p> <pre><code>df['FY']=pd.PeriodIndex(pd.to_datetime(df['Date']),freq='A-JUN').strftime(&quot;FY%y&quot;) </code></pre>
python|pandas
-1
2,322
53,305,040
How to make a column with lists from columns of list elements in a pandas dataframe?
<p>I have a pandas dataframe like </p> <pre><code> test = pd.DataFrame([[['P','N'], ['Z', 'P']],[['N','N'], ['Z', 'P']]], columns=['c1', 'c2']) </code></pre> <p>I want to add another column <code>c3</code> to test whose elements are</p> <pre><code>['PZ', 'NP'] ['NZ', 'NP'] </code></pre> <p>How can I do this?</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>assign</code></a>:</p> <pre><code>df = test.assign(c3 = [[x[0]+y[0], x[1]+y[1]] for x,y in test.values.tolist()]) </code></pre> <p>Or:</p> <pre><code>df = test.assign(c3 = list(map(list,zip(test.c1.str[0]+test.c2.str[0],test.c1.str[1]+test.c2.str[1])))) print(df) c1 c2 c3 0 [P, N] [Z, P] [PZ, NP] 1 [N, N] [Z, P] [NZ, NP] </code></pre> <hr> <pre><code>print([[x[0]+y[0], x[1]+y[1]] for x,y in test.values.tolist()]) [['PZ', 'NP'], ['NZ', 'NP']] print(list(map(list,zip(test.c1.str[0]+test.c2.str[0],test.c1.str[1]+test.c2.str[1])))) [['PZ', 'NP'], ['NZ', 'NP']] </code></pre>
python-3.x|pandas|dataframe
2
2,323
65,557,947
Learning a Categorical Variable with TensorFlow Probability
<p>I would like to use TFP to write a neural network where the output are the probabilities of a categorical variable with 3 classes, and train it using the negative log-likelihood.</p> <p>As I'm moving my first steps with TF and TFP, I started with a toy model where the input layer has only 1 unit receiving a null input, and the output layer has 3 units with softmax activation function. The idea is that the biases should learn (up to an additive constant) the log of the probabilities.</p> <p>Here below is my code, <code>true_p</code> are the true parameters I use to generate the data and I would like to learn, while <code>learned_p</code> is what I get from the NN.</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow import keras from functions import nll from tensorflow.keras.optimizers import SGD import tensorflow.keras.layers as layers import tensorflow_probability as tfp tfd = tfp.distributions # params true_p = np.array([0.1, 0.7, 0.2]) n_train = 1000 # training data x_train = np.array(np.zeros(n_train)).reshape((n_train,)) y_train = np.array(np.random.choice(len(true_p), size=n_train, p=true_p)).reshape((n_train,)) # model input_layer = layers.Input(shape=(1,)) p_layer = layers.Dense(len(true_p), activation=tf.nn.softmax)(input_layer) p_y = tfp.layers.DistributionLambda(tfd.Categorical)(p_layer) model_p = keras.models.Model(inputs=input_layer, outputs=p_y) model_p.compile(SGD(), loss=nll) # training hist_p = model_p.fit(x=x_train, y=y_train, batch_size=100, epochs=3000, verbose=0) # check result learned_p = np.round(model_p.layers[1].call(tf.constant([0], shape=(1, 1))).numpy(), 3) learned_p </code></pre> <p>With this setup, I get the result:</p> <pre><code>&gt;&gt;&gt; learned_p array([[0.005, 0.989, 0.006]], dtype=float32) </code></pre> <p>I over-estimate the second category, and can't really distinguish between the first and the third one. What's worst, if I plot the probabilities at the end of each epoch, it looks like they are converging monotonically to the vector [0,1,0], which doesn't make sense (it seems to me the gradient should push in the opposite direction once I start to over-estimate).</p> <p>I really can't figure out what's going on here, but have the feeling I'm doing something plain wrong. Any idea? Thank you for your help!</p> <p>For the record, I also tried using other optimizers like Adam or Adagrad playing with the hyper-params, but with no luck.</p> <p>I'm using Python 3.7.9, TensorFlow 2.3.1 and TensorFlow probability 0.11.1</p>
<p>I believe the default argument to Categorical is not the vector of probabilities, but the vector of logits (values you'd take softmax of to get probabilities). This is to help maintain precision in internal Categorical computations like log_prob. I think you can simply eliminate the softmax activation function and it <em>should</em> work. Please update if it doesn't!</p> <p>EDIT: alternatively you can replace the tfd.Categorical with</p> <p><code>lambda p: tfd.Categorical(probs=p)</code></p> <p>but you'll lose the aforementioned precision gains. Just wanted to clarify that passing <code>probs</code> <em>is</em> an option, just not the default.</p>
tensorflow2.0|tensorflow-probability
1
2,324
65,566,794
How to compute the penalty for invariant risk minimization in Tensorflow?
<p>I am trying to implement the technique called &quot;Invariant risk minimization,&quot; which adds a penalty term to the loss function in training machine learning models. The new penalty term's technical definition is the <strong>squared gradient norm with respect to a constant classifier.</strong> There is an implementation of this &quot;penalty&quot; function with PyTorch <a href="https://github.com/facebookresearch/InvariantRiskMinimization/blob/master/code/colored_mnist/main.py" rel="nofollow noreferrer">here</a>.</p> <p>I was wondering how I can implement this function in Tensorflow 2.</p> <p>More specifically, I want to implement the function below, which is also in the code I shared its link.</p> <pre><code> def penalty(logits, y): scale = torch.tensor(1.).cuda().requires_grad_() loss = mean_nll(logits * scale, y) grad = autograd.grad(loss, [scale], create_graph=True)[0] return torch.sum(grad**2) </code></pre>
<p>In a similar fashion :</p> <pre><code>def penalty(y_true, y_pred): scale = tf.constant(1.) with tf.GradientTape() as tape: tape.watch(scale) loss = tf.losses.binary_crossentropy(y_true, y_pred*scale, from_logits=True) grad = tape.gradient(loss, [scale])[0] return tf.reduce_sum(grad**2) </code></pre> <p>Note that the order of the parameter logits and ground truth are reversed compared to the PyTorch version, to respect TensorFlow's convention.</p> <p>To calculate a Gradient, you just need <code>tf.GradientTape</code>, you can read more in the guide : <a href="https://www.tensorflow.org/guide/autodiff?hl=en#computing_gradients" rel="nofollow noreferrer">Introduction to Gradients and Automatic Differentiation</a></p> <hr /> <p>Comparing that the 2 versions produces the same results :</p> <p>PyTorch version:</p> <pre><code>&gt;&gt;&gt; penalty(torch.tensor(1.),torch.tensor(0.)) tensor(0.5344, grad_fn=&lt;SumBackward0&gt;) &gt;&gt;&gt; penalty(torch.tensor(1.),torch.tensor(1.)) tensor(0.0723, grad_fn=&lt;SumBackward0&gt;) </code></pre> <p>TensorFlow version:</p> <pre><code>&gt;&gt;&gt; penalty([0.],[1.]) &lt;tf.Tensor: shape=(), dtype=float32, numpy=0.53444666&gt; &gt;&gt;&gt; penalty([1.],[1.]) &lt;tf.Tensor: shape=(), dtype=float32, numpy=0.0723295&gt; </code></pre>
python|tensorflow|machine-learning|loss-function
1
2,325
65,823,942
Append data from one pandas dataframe into other one
<p>I'm trying to append latitude and longitude data from df table:</p> <pre><code>dict = {'city':['Wien', 'Prague','Berlin','London','Rome'], 'latitude': [48.20849, 50.08804, 52.52437, 51.50853, 41.89193 ], 'longitude': [16.37208, 14.42076, 13.41053, -0.12574, 12.51133] } df = pd.DataFrame(dict) # creating non duplicated pairs of cities df_pair = pd.DataFrame(list(combinations(df.city, 2)), columns=['start_city', 'end_city']) </code></pre> <p>into df_pair's columns start_latitude, start_longitude, end_latitude, end_longitude (which will be created while appending):</p> <pre><code> start_city end_city 0 Wien Prague 1 Wien Berlin 2 Wien London 3 Wien Rome 4 Prague Berlin 5 Prague London 6 Prague Rome 7 Berlin London 8 Berlin Rome 9 London Rome </code></pre> <p>so the final dataframe (lets call df_pair_geo) look like this:</p> <pre><code> start_city end_city start_latitude start_longitude end_latitude end_longitude 0 Wien Prague 48.20849 16.37208 50.08804 14.42076 1 Wien Berlin 48.20849 16.37208 52.52437 13.41053 2 Wien London 48.20849 16.37208 51.50853 -0.12574 3 Wien Rome 48.20849 16.37208 41.89193 12.51133 4 Prague Berlin 50.08804 14.42076 52.52437 13.41053 5 Prague London 50.08804 14.42076 51.50853 -0.12574 6 Prague Rome 50.08804 14.42076 41.89193 12.51133 7 Berlin London 52.52437 13.41053 51.50853 -0.12574 8 Berlin Rome 52.52437 13.41053 41.89193 12.51133 9 London Rome 51.50853 -0.12574 41.89193 12.51133 </code></pre> <p>But so far I was not able to do that. Is there a way how to do this? Thank you.</p>
<p>use merge.</p> <pre><code>df1 = df_pair.merge(df.set_index('city'), left_on='start_city', right_index=True, how='left') df2 = df1.merge(df.set_index('city'), left_on='end_city', right_index=True, how='left', suffixes=['_start', '_end']) # result print(df2) start_city end_city latitude_start longitude_start latitude_end \ 0 Wien Prague 48.20849 16.37208 50.08804 1 Wien Berlin 48.20849 16.37208 52.52437 2 Wien London 48.20849 16.37208 51.50853 3 Wien Rome 48.20849 16.37208 41.89193 4 Prague Berlin 50.08804 14.42076 52.52437 5 Prague London 50.08804 14.42076 51.50853 6 Prague Rome 50.08804 14.42076 41.89193 7 Berlin London 52.52437 13.41053 51.50853 8 Berlin Rome 52.52437 13.41053 41.89193 9 London Rome 51.50853 -0.12574 41.89193 longitude_end 0 14.42076 1 13.41053 2 -0.12574 3 12.51133 4 13.41053 5 -0.12574 6 12.51133 7 -0.12574 8 12.51133 9 12.51133 </code></pre>
python-3.x|pandas|dataframe
3
2,326
65,624,468
What is the proper configuration for Quadro RTX3000 to run tensorflow with GPU?
<p>My laptop system is Win10, with an NVIDIA Quadro RTX3000 GPU. While trying to set up TensorFlow with the GPU, it never recognizes my GPU. What is the proper configuration for CUDA/cuDNN/TensorFlow etc.?</p>
<p>I did struggle for a while before making it work. Here is my configuration:</p> <ul> <li>Win10</li> <li>RTX 3000</li> <li>Nvidia driver version 456.71</li> <li>cuda_11.0.3_451.82_win10 (doesn't work with the 11.1 version, not sure why)</li> <li>cudnn -v8.0.4.30</li> <li>Python 3.8.7</li> <li>Tensorflow 2.5.0-dev20210106 (2.4 doesn't support cuda 11.x)</li> </ul>
tensorflow|gpu
0
2,327
53,415,751
Count occurences of True/False in column of dataframe
<p>Is there a way to count the number of occurrences of boolean values in a column without having to loop through the DataFrame?</p> <p>Doing something like </p> <pre><code>df[df["boolean_column"]==False]["boolean_column"].sum() </code></pre> <p>Will not work because False has a value of 0, hence a sum of zeroes will always return 0.</p> <p>Obviously you could count the occurrences by looping over the column and checking, but I wanted to know if there's a pythonic way of doing this.</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="noreferrer"><code>pd.Series.value_counts()</code></a>:</p> <pre><code>&gt;&gt; df = pd.DataFrame({'boolean_column': [True, False, True, False, True]}) &gt;&gt; df['boolean_column'].value_counts() True 3 False 2 Name: boolean_column, dtype: int64 </code></pre> <p>If you want to count <code>False</code> and <code>True</code> separately you can use <code>pd.Series.sum()</code> + <code>~</code>:</p> <pre><code>&gt;&gt; df['boolean_column'].values.sum() # True 3 &gt;&gt; (~df['boolean_column']).values.sum() # False 2 </code></pre>
python|pandas|boolean|counter|series
40
2,328
72,104,594
Pandas Dataframe read_json for list values
<p>I have a file with record json strings like:</p> <pre><code>{&quot;foo&quot;: [-0.0482006893, 0.0416476727, -0.0495583452]} {&quot;foo&quot;: [0.0621534586, 0.0509529933, 0.122285351]} {&quot;foo&quot;: [0.0169468746, 0.00475309044, 0.0085169]} </code></pre> <p>When I call <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_json.html" rel="nofollow noreferrer"><code>read_json</code></a> on this file I get a dataframe where the column <code>foo</code> is an object. Calling <code>.to_numpy()</code> on this dataframe gives me an numpy array in the form of:</p> <pre><code>array([list([-0.050888903400000005, -0.00733460533, -0.0595958121]), list([0.10726073400000001, -0.0247702841, -0.0298063811]), ..., list([-0.10156482500000001, -0.0402663834, -0.0609775148])], dtype=object) </code></pre> <p>I want to parse the values of <code>foo</code> as numpy array instead of <code>list</code>. Anyone have any ideas?</p>
<p>The easiest way is to create your DataFrame using <code>.from_dict()</code>.</p> <p>See a minimal example with one of your dicts.</p> <pre><code>d = {&quot;foo&quot;: [-0.0482006893, 0.0416476727, -0.0495583452]} df = pd.DataFrame().from_dict(d) &gt;&gt;&gt; df foo 0 -0.048201 1 0.041648 2 -0.049558 &gt;&gt;&gt; df.dtypes foo float64 dtype: object </code></pre>
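<p>If the goal is a single 2-D numpy array rather than a column of Python lists, one option (a hedged sketch, assuming every list in the column has the same length; the file name below is hypothetical) is to stack the lists after reading the file:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.read_json('records.jsonl', lines=True)   # one JSON object per line
arr = np.array(df['foo'].tolist())               # shape (n_rows, list_length), dtype float64

print(arr.shape, arr.dtype)
</code></pre>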
pandas|dataframe
0
2,329
72,086,983
Repeated values in pyspark
<p>I have a dataframe in pyspark where i have three columns</p> <pre><code>df1 = spark.createDataFrame([ ('a', 3, 4.2), ('a', 7, 4.2), ('b', 7, 2.6), ('c', 7, 7.21), ('c', 11, 7.21), ('c', 18, 7.21), ('d', 15, 9.0), ], ['model', 'number', 'price']) df1.show() +-----+------+-----+ |model|number|price| +-----+------+-----+ | a| 3| 4.2| | a| 7| 4.2| | b| 7| 2.6| | c| 7| 7.21| | c| 11| 7.21| | c| 18| 7.21| | d| 15| 9.0| +-----+------+-----+ </code></pre> <p>Is there a way in pyspark to display only the values that are repeated in the column 'price'?</p> <p>like in df2 :</p> <pre><code>df2 = spark.createDataFrame([ ('a', 3, 4.2), ('a', 7, 4.2), ('c', 7, 7.21), ('c', 11, 7.21), ('c', 18, 7.21), ], ['model', 'number', 'price']) df2.show() +-----+------+-----+ |model|number|price| +-----+------+-----+ | a| 3| 4.2| | a| 7| 4.2| | c| 7| 7.21| | c| 11| 7.21| | c| 18| 7.21| +-----+------+-----+ </code></pre> <p>I tried to do this, but didn't work</p> <pre><code>df = df1.groupBy(&quot;model&quot;,&quot;price&quot;).count().filter(&quot;count &gt; 1&quot;) df2 = df1.where((df.model == df1.model) &amp; (df.price == df1.price)) df2.show() </code></pre> <p>it included the values that are not repeated too</p> <pre><code> +-----+------+-----+ |model|number|price| +-----+------+-----+ | a| 3| 4.2| | a| 7| 4.2| | b| 7| 2.6| | c| 7| 7.21| | c| 11| 7.21| | c| 18| 7.21| | d| 15| 9.0| +-----+------+-----+ </code></pre>
<p>You can do so with a window function. We partition by price, take a count and filter <code>count &gt; 1</code>.</p> <pre><code>from pyspark.sql import Window from pyspark.sql import functions as f w = Window().partitionBy('price') df1.withColumn('_c', f.count('price').over(w)).filter('_c &gt; 1').drop('_c').show() +-----+------+-----+ |model|number|price| +-----+------+-----+ | a| 3| 4.2| | a| 7| 4.2| | c| 7| 7.21| | c| 11| 7.21| | c| 18| 7.21| +-----+------+-----+ </code></pre>
pyspark|apache-spark-sql|pyspark-pandas
0
2,330
72,104,902
Loop over 2 lists to create seperate dfs from multiple excel worksheets
<p>I've read in the below excel workbook which has 40 sheets. The below reads in all the worksheets:</p> <pre><code>df = pd.read_excel(file_path, sheet_name = None) </code></pre> <p>All the worksheets have identical columns, but the relevant columns start at different rows in each worksheet, so I'm writing the below to create a df from each sheet:</p> <pre><code>df2 = df[Sheet_Name] df2.columns = df2.iloc[20] </code></pre> <p>I could replicate this code 40 times with the relevant row index, but there has to be a function with a loop that can clean the code.</p> <p>I was thinking of having 2 lists, sheet name &amp; row index, which the function iterates over to create a separate df for each worksheet.</p> <p>Is this possible?</p> <p>Thanks for your help</p>
<p>It should be possible using next()</p> <pre><code>new_dfs = list()
indexes = iter([20, 12, 20])
for sheet in df:
    new_df = df[sheet]
    new_df.columns = new_df.iloc[next(indexes)]
    new_dfs.append(new_df)
</code></pre> <p>I did not run this code, but the idea is worth trying</p>
python|pandas|function|loops|xlsxwriter
0
2,331
56,646,482
How to assign values to a column using multiindex filter?
<p>I can't update the values on a column when I filter using a multiindex.</p> <pre><code>features_complete_new_index['ev_2'] = 1 features_complete_new_index.loc[true_positives_indexes,:].ev_2 = True features_complete_new_index.loc[false_negatives_indexes,:].ev2 = False features_complete_new_index.ev_2.value_counts() </code></pre> <p><strong>Output</strong></p> <pre><code>Out[20]: 1 8176700 Name: ev_2, dtype: int64 </code></pre> <p><strong>Expected output</strong> </p> <pre><code>1 7000000 True 1000000 False 17670000 </code></pre>
<p>I suspect Pandas is giving you a <strong>SettingWithCopyWarning</strong>. There is a <a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="nofollow noreferrer">very good article</a> that explains the risk of doing "chained assignment".</p> <p>The core problem is that when you write:</p> <p><code>features_complete_new_index.loc[true_positives_indexes,:]</code></p> <p><strong>you don't know if Pandas is working with the original data or a copy of it</strong>.</p> <p>So when writing:</p> <pre><code>features_complete_new_index.loc[true_positives_indexes,:].ev_2 = True </code></pre> <p>You might be assigning True to a <em>copy</em> of your dataframe.</p> <p>The solution is to do it in a <strong>single loc operation</strong>:</p> <pre><code>features_complete_new_index.loc[true_positives_indexes,'ev_2'] = True </code></pre> <p>It is very well explained in the article.</p>
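<p>A toy example (made-up data, just to illustrate the point) showing why the chained form can silently do nothing while the single <code>.loc</code> call updates the frame:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ev_2': [1, 1, 1, 1]})
mask = df.index.isin([0, 2])

# chained assignment: may operate on a temporary copy, df is left unchanged
df.loc[mask, :].ev_2 = True

# single .loc operation: rows and column selected in one call, df is updated
df.loc[mask, 'ev_2'] = True
df.loc[~mask, 'ev_2'] = False

print(df['ev_2'].value_counts())
</code></pre>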
python|pandas|multi-index
0
2,332
56,659,181
GoogLeNet Inception v4 is different from the paper?
<p>paper: <a href="https://arxiv.org/pdf/1602.07261.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1602.07261.pdf</a></p> <p>code: <a href="https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py</a></p> <hr> <p>line 91:</p> <pre><code>branch_2 = slim.conv2d(branch_2, 224, [7, 1], scope='Conv2d_0d_7x1') </code></pre> <p>in the paper it is [1, 7]</p> <hr> <p>line 216:</p> <pre><code>branch_0 = slim.conv2d(net, 192, [3, 3], stride=2, padding='VALID', </code></pre> <p>in the paper the stride is not equal to 2</p>
<p>This one is a minor mismatch:</p> <pre><code> branch_2 = slim.conv2d(branch_2, 224, [7, 1], scope='Conv2d_0d_7x1') </code></pre> <p>instead of using sequence of convolutions {[1, 7], [7, 1], [1, 7], [7, 1]} it has {[7, 1], [1, 7], [7, 1], [1, 7]}. I would be surprised if it had an effect on the accuracy, but might still be worth creating an issue on github.</p> <p>Regarding the second one:</p> <pre><code>branch_0 = slim.conv2d(net, 192, [3, 3], stride=2, padding='VALID', </code></pre> <p>They clearly forgot or omitted (for the sake of getting a better picture) the stride on the illustration. That layer is supposed to reduce the spatial resolution since it takes 71x71x192 as input and produces an output of 35x35.</p>
tensorflow
0
2,333
56,688,744
unable to import Metric from tensorflow.keras.metrics
<p>I want to write a custom metric evaluator for which I am following <a href="https://www.tensorflow.org/beta/guide/keras/training_and_evaluation#specifying_a_loss_metrics_and_an_optimizer" rel="nofollow noreferrer">this link</a>. my dummy code is </p> <pre><code>import tensorflow as tf from tensorflow import keras class DummyMetric(keras.metrics.Metric): def __init__(self, name='categorical_true_positives', **kwargs): super(DummyMetric, self).__init__(name=name, **kwargs) self.true_positives = self.add_weight(name='tp', initializer='zeros') def update_state(self, y_true, y_pred, sample_weight=None): print("Evaluating tensor of shape {} against gt of shape {}".format(y_pred.shape, y_true.shape)) self.true_positives.assign_add(1.0) def result(self): return self.true_positives def reset_states(self): # The state of the metric will be reset at the start of each epoch. self.true_positives.assign(0.) </code></pre> <p><strong>my tensorflow version is 1.13.1 installed from source</strong>.</p> <p><code>keras.metrics.Metric</code> throws </p> <blockquote> <p>AttributeError: module 'tensorflow._api.v1.keras.metrics' has no attribute 'Metric'. </p> </blockquote> <p>When I do <code>pip install tensorflow-gpu==1.14</code> then this error goes away. </p> <p>please suggest any solution/hack if possible which will make it work without upgrading to 1.14</p>
<p>It seems like this was probably left out of an <code>__init__.py</code> and they fixed that in 1.14 I guess. I was able to import it this way:</p> <pre><code>from tensorflow.python.keras.metrics import Metric </code></pre> <p>It is defined in file:</p> <pre><code>tensorflow/python/keras/metrics.py </code></pre>
python|tensorflow|keras
7
2,334
56,698,566
How to reuse existing variable in TensorFlow when dtypes differ?
<p>Minimal code example:</p> <pre><code>with tf.variable_scope("initializer_test"): s = tf.get_variable("scalar", initializer=tf.constant(2)) with tf.variable_scope("initializer_test", reuse=True): s = tf.get_variable("scalar") # ValueError: Trying to share variable initializer_test/scalar, but specified dtype float32 and found dtype int32_ref. </code></pre> <hr> <p>My solution:</p> <p>Just reading the error message gives an easy solution:</p> <pre><code>with tf.variable_scope("initializer_test"): s = tf.get_variable("scalar", initializer=tf.constant(2)) with tf.variable_scope("initializer_test", reuse=True): s = tf.get_variable("scalar", dtype=tf.int32) # Just add the required dtype </code></pre> <p>Is there a better way to do this? I would prefer to not have to (look at the error message to find out the dtype) or (manually set the dtype for <code>s</code> the first time I declare it).</p>
<p>TensorFlow added <code>AUTO_REUSE</code> as a reuse mode for variable scopes. This mode modifies the behavior of <code>get_variable()</code> to create requested variables if they do not exist, or return them if they do exist.</p> <p>It is now possible to write the following code:</p> <pre><code>def call_f():
    with tf.variable_scope("initializer_test", reuse=tf.AUTO_REUSE):
        v = tf.get_variable("scalar", initializer=tf.constant(2))
    return v

v1 = call_f()  # Creates v.
v2 = call_f()  # Gets the same, existing v.
print(v1)
print(v2)
</code></pre> <p>output:</p> <pre><code>&lt;tf.Variable 'initializer_test/scalar:0' shape=() dtype=int32, numpy=2&gt;
&lt;tf.Variable 'initializer_test/scalar:0' shape=() dtype=int32, numpy=2&gt;
</code></pre>
python|tensorflow
0
2,335
56,665,409
What happens when you transform the test set using MinMaxScaler
<p>I am currently in the process of pre-processing my data and I understand that I have to use the same scaling parameters I have used on my training set on my test set. However, when I applied the <code>transform</code> method from the <code>sklearn</code> library, I noticed something weird.</p> <p>I first used <code>preprocessing.MinMaxScaler(feature_range=(0,1))</code> on my training set, which sets the maximum to be 1 and the minimum to be 0. Next, I used <code>minmax_scaler.transform(data)</code> on my test set and noticed, when I printed out the data frame, that I have values greater than 1. What can this possibly mean?</p>
<p>For a given feature <code>x</code>, your <code>minmax</code> scaling to <code>(0,1)</code> will effectively map:</p> <p><code>x to (x- min_train_x)/(max_train_x - min_train_x)</code></p> <p>where <code>min_train_x</code> and <code>max_train_x</code> are the minimum and maximum value of <code>x</code> in the <strong>training set</strong>.</p> <p>If a value of <code>x</code> in the <strong>testing set</strong> is larger than the <code>max_train_x</code> the scaling transformation will return a value <code>&gt; 1</code>.</p> <p>It usually is not a big problem except if the input has to be in the <code>(0,1)</code> range.</p>
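<p>A small numeric illustration (made-up numbers, not taken from the question) of how a test value above the training maximum maps to a result greater than 1:</p> <pre><code>import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0], [5.0], [10.0]])
X_test = np.array([[12.0]])                # larger than anything seen in training

scaler = MinMaxScaler(feature_range=(0, 1)).fit(X_train)

print(scaler.transform(X_train).ravel())   # [0.         0.44444444 1.        ]
print(scaler.transform(X_test).ravel())    # [1.22222222]  i.e. a value above 1
</code></pre>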
python|scikit-learn|sklearn-pandas
2
2,336
56,764,048
How to train the original U-Net model with PyTorch?
<p>I’m trying to implement and train the <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">original U-Net model</a>, but I’m stuck in when I’m trying to train the model using the <a href="http://brainiac2.mit.edu/isbi_challenge/" rel="nofollow noreferrer">ISBI Challenge Dataset</a>.</p> <p>According with the original U-Net model, the network outputs an image with 2 channels and size of 388 x 388. So, my data loader for training generates a tensor with size of <em>[batch, channels=1, width=572, height=572]</em> for the input images and <em>[batch, channels=2, width=388, width=388]</em> for target/output images.</p> <p>My problem actually is that when I’m trying to use the nn.CrossEntropyLoss() the following error is raised: </p> <blockquote> <p>RuntimeError: invalid argument 3: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4 at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THNN/generic/SpatialClassNLLCriterion.c:59</p> </blockquote> <p>I’m just starting with PyTorch (newbie here)… so, I’ll really appreciate if someone could help me to overcome this problem.</p> <p>The sourcecode is available on GitHub:</p> <p><a href="https://github.com/dalifreire/cnn_unet_pytorch" rel="nofollow noreferrer">https://github.com/dalifreire/cnn_unet_pytorch</a> <a href="https://github.com/dalifreire/cnn_unet_pytorch/blob/master/unet_pytorch.ipynb" rel="nofollow noreferrer">https://github.com/dalifreire/cnn_unet_pytorch/blob/master/unet_pytorch.ipynb</a></p> <p>Best regards!</p> <p><strong>UPDATE</strong></p> <p>I just remove the channel dimension from my masks and everything works well… now I’m generating masks with the shape 1 [width=388, height=388].</p> <p>After that, I’m working with input images (X), target masks (y) and predicted output masks (y_hat) as follow:</p> <pre><code>X --&gt; torch.Size([10, 1, 572, 572]) y --&gt; torch.Size([10, 388, 388]) y_hat --&gt; torch.Size([10, 2, 388, 388]) </code></pre> <p>But, I don’t understand why target masks (y) and predicted masks (y_hat) must have different shapes? It’s so weird for me…</p>
<p>From the CrossEntropyLoss docstring of PyTorch:</p> <pre><code>Shape: - Input: :math:`(N, C)` where `C = number of classes`, or :math:`(N, C, d_1, d_2, ..., d_K)` with :math:`K \geq 1` in the case of `K`-dimensional loss. - Target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`, or :math:`(N, d_1, d_2, ..., d_K)` with :math:`K \geq 1` in the case of K-dimensional loss. - Output: scalar. If :attr:`reduction` is ``'none'``, then the same size as the target: :math:`(N)`, or :math:`(N, d_1, d_2, ..., d_K)` with :math:`K \geq 1` in the case of K-dimensional loss. </code></pre> <p>If your targets contain the class indices already, you should remove the channel dimension.</p> <p><a href="https://discuss.pytorch.org/t/only-batches-of-spatial-targets-supported-non-empty-3d-tensors-but-got-targets-of-size-1-1-256-256/49134" rel="nofollow noreferrer">Source</a></p>
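<p>A minimal sketch (the tensor shapes are made up to mirror the question; this is not code from the original post) of dropping that channel dimension so the target has shape (N, H, W) with class indices:</p> <pre><code>import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

y_hat = torch.randn(10, 2, 388, 388)        # (N, C, H, W) raw scores from the network
y = torch.randint(0, 2, (10, 1, 388, 388))  # mask with an extra channel dimension

y = y.squeeze(1).long()                     # now (N, H, W), holding class indices
loss = criterion(y_hat, y)
print(loss)
</code></pre>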
python|conv-neural-network|pytorch|image-segmentation
1
2,337
67,097,110
Convert elements in lists to separate rows
<p>My data frame has approximately 30 columns. Some of these columns have lists of items, for instance</p> <pre><code> Student Subject \ 0 J.M. [mathematics, history, literature] 1 M.G. [physics, mathematics, geography, history] 2 L.D. [latin, literature, mathematics] Score # + other 27 columns 0 [10, 8, 8.5] 1 [5, 4, 8, 8.5] 2 [4, 5, 5] </code></pre> <p>How can I convert elements in lists to separate rows to have subjects and scores in rows and not in lists?</p> <p>Generally,</p> <pre><code>Student Subject Score student 1 student 1's subject 1 student 1' score for subject 1 student 1 student 1's subject 2 student 1' score for subject 2 </code></pre>
<p>Assuming <code>df</code> to be:</p> <pre><code>In [2893]: df = pd.DataFrame({'Student':['J.M.', 'M.G.', 'L.D.'], 'Subject':[['mathematics', 'history', 'literature'], ['physics', 'mathematics', 'geography', 'history'], ['latin', 'literature', 'mathematics']], 'Score':[[10, 8, 8.5], [5, 4, 8, 8.5], [4, ...: 5, 5]]}) In [2894]: df Out[2894]: Student Subject Score 0 J.M. [mathematics, history, literature] [10, 8, 8.5] 1 M.G. [physics, mathematics, geography, history] [5, 4, 8, 8.5] 2 L.D. [latin, literature, mathematics] [4, 5, 5] </code></pre> <p>Use <code>df.explode</code> with <code>df.apply</code>:</p> <pre><code>In [2898]: df = df.apply(pd.Series.explode) In [2899]: df Out[2899]: Student Subject Score 0 J.M. mathematics 10 1 J.M. history 8 2 J.M. literature 8.5 3 M.G. physics 5 4 M.G. mathematics 4 5 M.G. geography 8 6 M.G. history 8.5 7 L.D. latin 4 8 L.D. literature 5 9 L.D. mathematics 5 </code></pre>
python|pandas
5
2,338
66,918,779
Getting hr count in pandas
<p>I have a pandas dataframe like below :</p> <pre><code> | Date | +-------------------+ |2009-11-01 00:00:08| |2009-11-01 00:00:40| |2009-11-01 01:00:20| |2009-11-01 01:50:08| |2009-11-01 02:22:00| |2009-11-01 02:45:50| |2009-11-01 03:10:20| |2009-11-01 03:20:30| +-------------------+ </code></pre> <p>I want to get the hr count like below :</p> <pre><code> | Hr | Count | +-------------------+--------+ |00:00:00 - 00:59:59| 2 | |01:00:00 - 01:59:59| 2 | |02:00:00 - 02:59:59| 2 | |03:00:00 - 03:59:59| 2 | +-------------------+--------+ </code></pre> <p>So how can I get the count using pandas dataframe ?</p>
<pre><code>df[&quot;Date&quot;] = pd.to_datetime(df[&quot;Date&quot;]) df[&quot;hour&quot;] = df.Date.dt.hour df_out = ( df.groupby(&quot;hour&quot;) .agg( { &quot;Date&quot;: lambda x: &quot;{h:02d}:00:00 - {h:02d}:59:59&quot;.format( h=x.iat[0].hour ), &quot;hour&quot;: &quot;size&quot;, } ) .rename(columns={&quot;Date&quot;: &quot;Hr&quot;, &quot;hour&quot;: &quot;Count&quot;}) .reset_index(drop=True) ) print(df_out) </code></pre> <p>Prints:</p> <pre><code> Hr Count 0 00:00:00 - 00:59:59 2 1 01:00:00 - 01:59:59 2 2 02:00:00 - 02:59:59 2 3 03:00:00 - 03:59:59 2 </code></pre>
python-3.x|pandas|dataframe
2
2,339
47,454,219
Apply set_index over groupby object in order to apply asfreq per group
<p>I'm looking to apply <code>padding</code> over each group of my data frame.</p> <p>Notice that for a single group ('element_id') I have no problem with padding.</p> <p>First group (group1):</p> <pre><code>{'date': {88: datetime.date(2017, 10, 3),
  43: datetime.date(2017, 9, 26),
  159: datetime.date(2017, 11, 8)},
 u'element_id': {88: 122, 43: 122, 159: 122},
 u'VALUE': {88: '8.0', 43: '2.0', 159: '5.0'}}
</code></pre> <p>So I'm applying padding over it (which works great):</p> <pre><code>print group1.set_index('date').asfreq('D', method='pad').head()
</code></pre> <p>I'm looking to apply this logic over several groups through <code>groupby</code>.</p> <p>Another group (group2):</p> <pre><code>{'date': {88: datetime.date(2017, 10, 3),
  43: datetime.date(2017, 9, 26),
  159: datetime.date(2017, 11, 8)},
 u'element_id': {88: 122, 43: 122, 159: 122},
 u'VALUE': {88: '8.0', 43: '2.0', 159: '5.0'}}

group_data=pd.concat([group1,group2],axis=0)

group_data.groupby(['element_id']).set_index('date').resample('D').asfreq()
</code></pre> <p>And I'm getting the following error:</p> <pre><code>AttributeError: Cannot access callable attribute 'set_index' of 'DataFrameGroupBy' objects, try using the 'apply' method
</code></pre>
<p>First there is problem your <code>date</code> column has <code>dtype</code> object, not datetime, so first is necessary convert it by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="noreferrer"><code>to_datetime</code></a>.</p> <p>Then is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="noreferrer"><code>GroupBy.apply</code></a>:</p> <pre><code>group_data['date'] = pd.to_datetime(group_data['date']) df = (group_data.groupby(['element_id']) .apply(lambda x: x.set_index('date').resample('D').ffill())) print (df.head()) VALUE element_id element_id date 122 2017-09-26 2.0 122 2017-09-27 2.0 122 2017-09-28 2.0 122 2017-09-29 2.0 122 2017-09-30 2.0 122 </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.resample.html" rel="noreferrer"><code>DataFrameGroupBy.resample</code></a>:</p> <pre><code> df = group_data.set_index('date').groupby(['element_id']).resample('D').ffill() print (df.head()) VALUE element_id element_id date 122 2017-09-26 2.0 122 2017-09-27 2.0 122 2017-09-28 2.0 122 2017-09-29 2.0 122 2017-09-30 2.0 122 </code></pre> <p>EDIT:</p> <p>If problem with duplicates values solution is add new column for subgroups with unique <code>dates</code>. If use <code>concat</code> there is parameter <code>keys</code> for it:</p> <pre><code>group1 = pd.DataFrame({'date': {88: datetime.date(2017, 10, 3), 43: datetime.date(2017, 9, 26), 159: datetime.date(2017, 11, 8)}, u'element_id': {88: 122, 43: 122, 159: 122}, u'VALUE': {88: '8.0', 43: '2.0', 159: '5.0'}}) d = {'level_0':'g'} group_data=pd.concat([group1,group1], keys=('a','b')).reset_index(level=0).rename(columns=d) print (group_data) g VALUE date element_id 43 a 2.0 2017-09-26 122 88 a 8.0 2017-10-03 122 159 a 5.0 2017-11-08 122 43 b 2.0 2017-09-26 122 88 b 8.0 2017-10-03 122 159 b 5.0 2017-11-08 122 group_data['date'] = pd.to_datetime(group_data['date']) df = (group_data.groupby(['g','element_id']) .apply(lambda x: x.set_index('date').resample('D').ffill())) print (df.head()) g VALUE element_id g element_id date a 122 2017-09-26 a 2.0 122 2017-09-27 a 2.0 122 2017-09-28 a 2.0 122 2017-09-29 a 2.0 122 2017-09-30 a 2.0 122 </code></pre>
pandas
6
2,340
68,379,553
pandas: groupby two columns and get random selection of groups such that each value in the first column will be represented by a single group
<p>It's similar to <a href="https://stackoverflow.com/questions/50004641/select-sample-random-groups-after-groupby-in-pandas">this question</a>, but with an additional level of complexity.<br /> In my case, I have a the following <em>dataframe</em>:</p> <pre><code>import pandas as pd df = pd.DataFrame({'col1': list('aaabbbabababbaaa'), 'col2': list('cdddccdsssssddcd'), 'val': range(0, 16)}) </code></pre> <p>output:</p> <pre><code> col1 col2 val 0 a c 0 1 a d 1 2 a d 2 3 b d 3 4 b c 4 5 b c 5 6 a d 6 7 b s 7 8 a s 8 9 b s 9 10 a s 10 11 b s 11 12 b d 12 13 a d 13 14 a c 14 15 a d 15 </code></pre> <p>My goal is to select random groups of <code>groupby(['col1', 'col2'])</code> such that each value of <code>col1</code> will be selected only once. This can be executed by the following code:</p> <pre><code>g = df.groupby('col1') indexes = [] for _, group in g: g_ = group.groupby('col2') a = np.arange(g_.ngroups) np.random.shuffle(a) indexes.extend(group[g_.ngroup().isin(a[:1])].index.tolist()) </code></pre> <p>output:</p> <pre><code>print(df[df.index.isin(indexes)]) col1 col2 val 4 b c 4 5 b c 5 8 a s 8 10 a s 10 </code></pre> <p>However, I'm looking for a more concise and pythonic way to solve this.</p>
<p>Another option is to shuffle your two columns with <code>sample</code> and <code>drop_duplicates</code> by col1, so that you keep only one couple per col1 value. Then <code>merge</code> the result with df to select all the rows with these couples.</p> <pre><code>print(df.merge(df[['col1','col2']].sample(frac=1).drop_duplicates('col1')))

  col1 col2  val
0    b    s    7
1    b    s    9
2    b    s   11
3    a    s    8
4    a    s   10
</code></pre> <p>Or with <code>groupby</code> and <code>sample</code>, a bit the same idea, but selecting only one row per col1 value, with <code>merge</code> after:</p> <pre><code>df.merge(df[['col1','col2']].groupby('col1').sample(n=1))
</code></pre> <p>EDIT: to get both the selected rows and the other rows, you can use the parameter indicator in the merge and do a left merge, then <code>query</code> each separately:</p> <pre><code>m = df.merge(df[['col1','col2']].groupby('col1').sample(1), how='left', indicator=True)
print(m)

select_ = m.query('_merge==&quot;both&quot;')[df.columns]
print(select_)

comp_ = m.query('_merge==&quot;left_only&quot;')[df.columns]
print(comp_)
</code></pre>
python|pandas|pandas-groupby
1
2,341
68,108,089
pandas series to unique binary indicators
<p>Here is what I have</p> <pre><code>&gt;&gt;&gt; s = pd.Series([0,2,1,0], ['t1','t2','t3','t4']) &gt;&gt;&gt; s t1 0 t2 2 t3 1 t4 0 dtype: int64 </code></pre> <p>I want an output which looks like this with binary indicators for each value as pandas dataframe:</p> <pre><code>value t1 t2 t3 t4 0 1 0 0 1 1 0 0 1 0 2 0 1 0 0 </code></pre>
<p>Looks like you need <code>get_dummies</code>:</p> <pre><code>pd.get_dummies(s) # 0 1 2 #t1 1 0 0 #t2 0 0 1 #t3 0 1 0 #t4 1 0 0 </code></pre> <p>You can further transpose it if you need it in the other way:</p> <pre><code>pd.get_dummies(s).T # t1 t2 t3 t4 #0 1 0 0 1 #1 0 0 1 0 #2 0 1 0 0 </code></pre>
pandas|binary|indicator
2
2,342
68,083,588
Softmax Output Layer. Which dimension?
<p>I have a question regarding neural nets used for image segmentation. I am using a 3D implementation of DeepLab that can be found <a href="https://github.com/ChoiDM/pytorch-deeplabv3plus-3D" rel="nofollow noreferrer">here</a></p> <p>I am using <code>softmax</code>, so the output layer is the following:</p> <pre class="lang-py prettyprint-override"><code>elif self.last_activation.lower() == 'softmax':
    output = nn.Softmax()(output)
</code></pre> <p>No dimension is defined, so I want to define it manually. But I am not sure which dimension I need to set. The dimension of the output tensor is the following:</p> <pre><code>[batch_size, num_classes, width, height, depth]
</code></pre> <p>So I would think that <code>dim=1</code> would be correct. Is that correct?</p> <p>Thanks!</p>
<p>Indeed it should be 1, as you want this axis to sum to 1.<br /> Be careful if you train your network with <code>CrossEntropyLoss</code>, as the latter already includes a softmax.</p>
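<p>A quick check with made-up shapes (mirroring the 5-D output described in the question) that <code>dim=1</code> is indeed the class axis:</p> <pre><code>import torch
import torch.nn as nn

output = torch.randn(2, 3, 4, 4, 4)   # [batch_size, num_classes, width, height, depth]
probs = nn.Softmax(dim=1)(output)     # normalize over the class axis

print(probs.sum(dim=1))               # every entry is ~1.0
</code></pre>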
python|neural-network|pytorch|image-segmentation|deeplab
0
2,343
59,388,739
How to extract the column with only False condition without apply
<pre><code>df[['uid','verified','is_duplicate']].head(2) </code></pre> <p>How to get only where <code>is_duplicate=False</code></p> <pre><code> uid verified is_duplicate 0 2355954 True True 1 2626002 True False </code></pre>
<p>Since the values are already booleans, you can just negate the condition:</p> <pre><code>df[~df.is_duplicate] </code></pre> <p>Which gives:</p> <pre><code> uid verified is_duplicate 1 2626002 True False </code></pre>
pandas
1
2,344
51,027,435
How to detect and filter peaks over time series data?
<p>I have a pandas dataframe of user logins like this:</p> <pre><code> id datetime_login
 646 2017-03-15 15:30:25
 611 2017-04-14 11:38:30
 611 2017-05-15 08:49:01
 651 2017-03-15 15:30:25
 611 2017-03-15 15:30:25
 652 2017-03-08 14:03:56
 652 2017-03-08 14:03:56
 652 2017-03-15 15:30:25
 654 2017-03-15 15:30:25
 649 2017-03-15 15:30:25
 902 2017-09-09 15:00:00
 902 2017-02-13 16:39:53
 902 2017-11-15 12:00:00
 902 2017-11-15 12:00:00
 902 2017-09-09 15:00:00
 902 2017-05-15 08:48:47
 902 2017-11-15 12:00:00
</code></pre> <p>After plotting the logins:</p> <pre><code>df.datetime_login = df.datetime_login.apply(lambda x: str(x)[:10])
df.datetime_login = df.datetime_login.apply(lambda x: date(int(x[:4]), int(x[5:7]), int(x[8:10])))
fig, ax = subplots()
df.datetime_login.value_counts().sort_index().plot(figsize=(25,10), colormap='jet',fontsize=20)
</code></pre> <ol> <li><p>How can I detect in my plot the peaks in the time series data?</p></li> <li><p>How can I filter into an array the peaks in my time series data?</p></li> </ol> <p>I tried to:</p> <pre><code>import peakutils
indices = peakutils.indexes(df, thres=0.4, min_dist=1000)
print(indices)
</code></pre> <p>However, I got:</p> <pre><code>TypeError: unsupported operand type(s) for -: 'datetime.date' and 'int'
</code></pre>
<p>Where <code>df.datetime_login.value_counts().sort_index().plot(figsize=(25,10), colormap='jet',fontsize=20)</code> plots:</p> <p><a href="https://i.stack.imgur.com/L190d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L190d.png" alt="enter image description here"></a></p> <p>Let's try the following: you need to pass the series returned by <code>value_counts</code> to <code>peakutils.indexes</code> instead of your original df:</p> <pre><code>df_counts = df.datetime_login.value_counts().sort_index()
df_counts[peakutils.indexes(df_counts, thres=0.4, min_dist=1000)]
</code></pre> <p>Output:</p> <pre><code>2017-03-15 15:30:25    6
Name: datetime_login, dtype: int64
</code></pre>
python|python-3.x|pandas|time-series
1
2,345
50,868,100
Finding and ranking intervals of data
<p>Every time I ride my bike a gather second by second data on a number of metrics. For simplicity, lets pretend that I have a csv file that looks something like:</p> <pre><code>secs, watts, 1,150 2,151 3,149 4,135 . . . 7000,160 </code></pre> <p>So, every second of my ride has an associated power value, in watts.</p> <p>I want to know "If I break my ride into N second blocks, which blocks have the realize in the highest average power?"</p> <p>I am using a pandas dataframe to manage my data, and this is the code I have been using to answer my question:</p> <pre><code>def bestEffort(ride_data, metric='watts', interval_length=5, sort_descending=True): seconds_in_ride = len(ride_data[metric]) average_interval_list = [[i+1, np.average( [ride_data[metric][i+j] for j in range(interval_length)]) ] for i in range(0, seconds_in_ride - interval_length)] average_interval_list.sort(key=lambda x: x[1], reverse=sort_descending) return average_interval_list </code></pre> <p>Seems simple? Right? Given an index, compute the average value of the interval_length subsequent entries. Keep track of this in a list of the form</p> <pre><code>[[second 1, avg val of metric over the interval starting that second], [second 2, avg val of metric over the interval starting that second], [second 3, avg val of metric over the interval starting that second], . . . [second 7000-interval_length, avg val of metric over the interval starting that second]] </code></pre> <p>Then, I sort the resulting list by the average values. So the first entry is of the form</p> <pre><code>[second_n, avg val of metric over the interval starting in second n] </code></pre> <p>telling me that my strongest effort over the given interval length started at second_n in my workout.</p> <p>The problem is that if I set "interval_length" to anything higher than 30, this computation takes forever (read: over two minutes on a decent machine). Please, help me find where my code is hitting a bottleneck, this seems like it should be way faster.</p>
<p>If you put your data in a numpy array, say <code>watts</code>, you can compute the mean power using convolve:</p> <pre><code>mean_power = np.convolve(watts, np.ones(interval_length)/interval_length, mode='valid') </code></pre> <p>As you can see in <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow noreferrer">the reference of np.convolve</a>, this function computes a local mean of the first argument, smoothed with a window defined by the second argument. Here we smooth with a "top-hat" function--i.e. an "on/off" function which is constant over an interval of length <code>interval_length</code>, and zero otherwise. This is rudimentary but gives a first estimate.</p> <p>Then the time of your strongest effort is:</p> <pre><code>time_strongest_effort = np.argmax(mean_power) </code></pre>
python|performance|pandas|numpy
1
2,346
51,070,933
Combine rows based on index or column
<p>I have three dataframes: df1, df2, df3. I am trying to add a list of ART_UNIT do df1.</p> <p>df1 is 260846 rows x 4 columns:</p> <pre><code>Index SYMBOL level not-allocatable additional-only 0 A 2 True False 1 A01 4 True False 2 A01B 5 True False 3 A01B1/00 7 False False 4 A01B1/02 8 False False 5 A01B1/022 9 False False 6 A01B1/024 9 False False 7 A01B1/026 9 False False </code></pre> <p>df2 is 941516 rows x 2 columns:</p> <pre><code>Index CLASSIFICATION_SYMBOL_CD ART_UNIT 0 A44C27/00 3715 1 A44C27/001 2015 2 A44C27/001 3715 3 A44C27/001 2615 4 A44C27/005 2815 5 A44C27/006 3725 6 A44C27/007 3215 7 A44C27/008 3715 8 F41A33/00 3715 9 F41A33/02 3715 10 F41A33/04 3715 11 F41A33/06 3715 12 G07C13/00 3715 13 G07C13/005 3715 14 G07C13/02 3716 </code></pre> <p>And df3 is the same format as df2, but has 673023 rows x 2 columns</p> <p>The <code>'CLASSIFICATION_SYMBOL_CD'</code> in df2 and df3 are not unique. </p> <p>For each <code>'CLASSIFICATION_SYMBOL_CD'</code> in df2 and df3, I want to find the same string in df1 <code>'SYMBOL'</code> and add a new column to df1 <code>'ART_UNIT'</code> that contains all of the <code>'ART_UNIT'</code> from df2 and df3.</p> <p>For example, in df2, <code>'CLASSIFICATION_SYMBOL_CD'</code> A44C27/001 has <code>ART_UNIT</code> 2015, 3715, and 2615.</p> <p><b>I want to write those <code>ART_UNIT</code> to the correct row in df1 so that is reads:</p> <pre><code>Index SYMBOL level not-allocatable additional-only ART_UNIT 211 A44C27/001 2 True False [2015, 3715, 2615] </code></pre> <p></b> So far, I've tried to group df2/df3 by <code>'CLASSIFICATION_SYMBOL_CD'</code></p> <pre><code>gp = df2.groupby(['CLASSIFICATION_SYMBOL_CD']) for x in df2['CLASSIFICATION_SYMBOL_CD'].unique(): df2_g = gp.get_group(x) </code></pre> <p>Which gives me:</p> <pre><code>Index CLASSIFICATION_SYMBOL_CD ART_UNIT 1354 A61N1/3714 3762 117752 A61N1/3714 3766 347573 A61N1/3714 3736 548026 A61N1/3714 3762 560771 A61N1/3714 3762 566120 A61N1/3714 3766 566178 A61N1/3714 3762 799486 A61N1/3714 3736 802408 A61N1/3714 3736 </code></pre>
<p>Since <code>df2</code> and <code>df3</code> have the same format concatentate them first.</p> <pre><code>import pandas as pd df = pd.concat([df2, df3]) </code></pre> <p>Then to get the lists of all art units, <code>groupby</code> and apply list.</p> <pre><code>df = df.groupby('CLASSIFICATION_SYMBOL_CD').ART_UNIT.apply(list).reset_index() # CLASSIFICATION_SYMBOL_CD ART_UNIT #0 A44C27/00 [3715] #1 A44C27/001 [2015, 3715, 2615] #2 A44C27/005 [2815] #3 A44C27/006 [3725] #... </code></pre> <p>Finally, bring this information to <code>df1</code> with a merge (you could map or something else too). Rename the column first to have less to clean up after the merge.</p> <pre><code>df = df.rename(columns={'CLASSIFICATION_SYMBOL_CD': 'SYMBOL'}) df1 = df1.merge(df, on='SYMBOL', how='left') </code></pre> <p>Output:</p> <pre><code> Index SYMBOL level not-allocatable additional-only ART_UNIT 0 0 A 2 True False NaN 1 1 A01 4 True False NaN 2 2 A01B 5 True False NaN 3 3 A01B1/00 7 False False NaN 4 4 A01B1/02 8 False False NaN 5 5 A01B1/022 9 False False NaN 6 6 A01B1/024 9 False False NaN 7 7 A01B1/026 9 False False NaN </code></pre> <p>Sadly, you didn't provide any overlapping SYMBOLs in <code>df1</code>, so nothing merged. But this will work with your full data. </p>
python-3.x|pandas|pandas-groupby
1
2,347
51,038,503
TensorFlow: List index out of range with conv2d_transpose
<p>I would want to use a convolution transpose to obtain a tensor of 2700 values with the following input:</p> <pre><code>input = tf.placeholder(tf.float32, shape=(batch_size, 1 , 1 ,1)) </code></pre> <p>To do that, I used the <a href="https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose" rel="nofollow noreferrer">tf.nn.conv2d_transpose</a> function.</p> <p>Here is my code:</p> <pre><code>import tensorflow as tf import numpy as np sess = tf.Session() batch_size = 20 input = tf.placeholder(tf.float32, shape=(batch_size, 1 , 1 ,1)) logits = tf.nn.conv2d_transpose(input, [batch_size,1,2700,1],[batch_size, 1, 2700, 1],[1,1,3,1],'SAME') </code></pre> <p>When I run this program, I have the following error on the last line :</p> <pre><code>IndexError: list index out of range </code></pre> <p>Here is the complete error returned by Python:</p> <pre><code>IndexError Traceback (most recent call last) &lt;ipython-input-34-724f7880c01d&gt; in &lt;module&gt;() 9 input = tf.placeholder(tf.float32, shape=(batch_size, 1 , 1 ,1)) 10 ---&gt; 11 logits = tf.nn.conv2d_transpose(input, [batch_size,1,2700,1],[batch_size, 1, 2700, 1],[1,1,3,1],'SAME') /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py in conv2d_transpose(value, filter, output_shape, strides, padding, data_format, name) 1223 filter = ops.convert_to_tensor(filter, name="filter") # pylint: disable=redefined-builtin 1224 axis = 3 if data_format == "NHWC" else 1 -&gt; 1225 if not value.get_shape()[axis].is_compatible_with(filter.get_shape()[3]): 1226 raise ValueError("input channels does not match filter's input channels, " 1227 "{} != {}".format(value.get_shape()[axis], /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_shape.py in __getitem__(self, key) 610 return TensorShape(self._dims[key]) 611 else: --&gt; 612 return self._dims[key] 613 else: 614 if isinstance(key, slice): IndexError: list index out of range </code></pre> <p>Some help would be welcome</p>
<p>From the documentation of <a href="https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose" rel="nofollow noreferrer">tf.nn.conv2d_transpose</a> you can see that you need to <strong>define placeholders</strong> for <code>filter</code> and <code>output_shape</code> similar to how you did for <code>input</code>. </p> <p>The following test code ran for me without returning an error. Make the necessary changes for the output size you need:</p> <pre><code>import tensorflow as tf import numpy as np sess = tf.Session() batch_size = 20 input = tf.placeholder(tf.float32, shape=(batch_size, 1 , 1 ,1)) filter = tf.placeholder(tf.float32, shape=(batch_size, 1 , 2700 ,1)) out = tf.placeholder(tf.int32, shape=(4,)) logits = tf.nn.conv2d_transpose(input, filter,out,[1,1,3,1],'SAME') </code></pre>
python|tensorflow|neural-network
1
2,348
71,048,521
How to freeze parts of T5 transformer model
<p>I know that T5 has K, Q and V vectors in each layer. It also has a feedforward network. I would like to freeze K, Q and V vectors and only train the feedforward layers on each layer of T5. I use Pytorch library. The model could be a wrapper for huggingface T5 model or a modified version of it. I know how to freeze all parameters using the following code:</p> <pre class="lang-py prettyprint-override"><code>tokenizer = AutoTokenizer.from_pretrained(underlying_model_name) model = T5ForConditionalGeneration.from_pretrained(underlying_model_name) for p in model.parameters(): p.requires_grad = False # freezing </code></pre> <p>Could you please guide me how can I do this?</p> <p>This <a href="https://github.com/microsoft/LoRA" rel="nofollow noreferrer">github project</a> probably could be helpful but it's for Roberta and GPT, could I adapt it for T5?</p>
<p>I've adapted a solution based on <a href="https://discuss.huggingface.co/t/how-to-freeze-some-layers-of-bertmodel/917" rel="nofollow noreferrer">this discussion</a> from the Huggingface forums. Basically, you have to specify the names of the modules/pytorch layers that you want to freeze.</p> <p>In your particular case of T5, I started by looking at the model summary:</p> <pre class="lang-py prettyprint-override"><code>from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained(&quot;t5-small&quot;)
print(model)
</code></pre> <p>This gives the following (abbreviated output):</p> <pre><code>T5ForConditionalGeneration(
  (shared): Embedding(32128, 512)
  (encoder): T5Stack(
    (embed_tokens): Embedding(32128, 512)
    (block): ModuleList(
      (0): T5Block(
        (layer): ModuleList(
          (0): T5LayerSelfAttention(
            (SelfAttention): T5Attention(
              (q): Linear(in_features=512, out_features=512, bias=False)
              (k): Linear(in_features=512, out_features=512, bias=False)
              (v): Linear(in_features=512, out_features=512, bias=False)
              (o): Linear(in_features=512, out_features=512, bias=False)
              (relative_attention_bias): Embedding(32, 8)
            )
            (layer_norm): T5LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): T5LayerFF(
            (DenseReluDense): T5DenseReluDense(
              (wi): Linear(in_features=512, out_features=2048, bias=False)
              (wo): Linear(in_features=2048, out_features=512, bias=False)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (layer_norm): T5LayerNorm()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
[...] # abbreviated output
</code></pre> <p>With this, we can then generate a list of modules that we want to freeze. In particular, I decided to freeze the entire <code>T5LayerSelfAttention</code> block for the encoder (and, additionally, the <code>T5LayerCrossAttention</code> for the decoder):</p> <pre class="lang-py prettyprint-override"><code># All encoder SelfAttention modules (layer[0] of every encoder block)
modules_to_freeze = [model.encoder.block[i].layer[0] for i in range(len(model.encoder.block))]
# And the decoder modules, which have both a SelfAttention (layer[0])
modules_to_freeze.extend([model.decoder.block[i].layer[0] for i in range(len(model.decoder.block))])
# and CrossAttention (layer[1]) block
modules_to_freeze.extend([model.decoder.block[i].layer[1] for i in range(len(model.decoder.block))])
</code></pre> <p>And then simply freeze all the parameters in the respective modules:</p> <pre class="lang-py prettyprint-override"><code>for module in modules_to_freeze:
    for param in module.parameters():
        param.requires_grad = False  # Actual freezing operation
</code></pre> <p>You can verify that these are actually frozen in your model by running the following:</p> <pre class="lang-py prettyprint-override"><code>for param in model.parameters():
    print(param.requires_grad)
</code></pre> <p>which should print quite a few <code>False</code> as well. If you really only want to freeze K, Q and V, you can adapt the above process to just sub-select the modules you want.</p>
huggingface-transformers|t5-transformer
2
2,349
51,846,141
Tensorboard/tensorflow with s3 logdir - curl returned error code 6
<p>Been trying with numerous settings/env-vars/tf-versions but won't work..</p> <p>On my local machine this <strong>works</strong>: <code>AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXX AWS_REGION=eu-west-1 tensorboard --logdir="s3://my-bucket/tflogs/"</code></p> <p>On a AWS instance this will throw:</p> <pre><code>I tensorflow/core/platform/s3/aws_logging.cc:54] Creating HttpClient with max connections2 and scheme http I tensorflow/core/platform/s3/aws_logging.cc:54] Initializing CurlHandleContainer with size 2 I tensorflow/core/platform/s3/aws_logging.cc:54] Creating Instance with default EC2MetadataClient and refresh rate 900000 I tensorflow/core/platform/s3/aws_logging.cc:54] Found secret key I tensorflow/core/platform/s3/aws_logging.cc:54] Initializing CurlHandleContainer with size 25 I tensorflow/core/platform/s3/aws_logging.cc:54] Found secret key I tensorflow/core/platform/s3/aws_logging.cc:54] Pool grown by 2 I tensorflow/core/platform/s3/aws_logging.cc:54] Connection has been released. Continuing. E tensorflow/core/platform/s3/aws_logging.cc:60] Curl returned error code 6 W tensorflow/core/platform/s3/aws_logging.cc:57] If the signature check failed. This could be because of a time skew. Attempting to adjust the signer. W tensorflow/core/platform/s3/aws_logging.cc:57] Request failed, now waiting 0 ms before attempting again. I tensorflow/core/platform/s3/aws_logging.cc:54] Found secret key 2018-08-14 16:32:18.725199: I tensorflow/core/platform/s3/aws_logging.cc:54] Connection has been released. Continuing. E tensorflow/core/platform/s3/aws_logging.cc:60] Curl returned error code 6 W tensorflow/core/platform/s3/aws_logging.cc:57] If the signature check failed. This could be because of a time skew. Attempting to adjust the signer. </code></pre> <p><br> Didn't find any hints in <a href="https://github.com/tensorflow/tensorflow/issues/16397" rel="noreferrer">https://github.com/tensorflow/tensorflow/issues/16397</a></p> <p>And no definitive clue where the diff could be, I have made sure to have the same tensorflow/tensorboard version (1.8.0). Also happens running tensorflow with s3 tensorboard logdir specified.</p>
<p>Solved it like this:</p> <pre><code>export AWS_ACCESS_KEY_ID=&lt;access key id&gt; export AWS_SECRET_ACCESS_KEY=&lt;secret access key&gt; export AWS_REGION=us-west-2 export S3_REGION=us-west-2 export S3_ENDPOINT=s3.us-west-2.amazonaws.com export S3_USE_HTTPS=1 export S3_VERIFY_SSL=0 tensorboard --logdir=s3://&lt;path&gt; </code></pre>
tensorflow|amazon-s3|tensorboard
3
2,350
36,166,103
Remove columns that have NA values for rows - Python
<p>Suppose I have a dataframe as follows,</p> <pre><code>import pandas as pd columns=['A','B','C','D', 'E', 'F'] index=['1','2','3','4','5','6'] df = pd.DataFrame(columns=columns,index=index) df['D']['1'] = 1 df['E'] = 1 df['F']['1'] = 1 df['A']['2'] = 1 df['B']['3'] = 1 df['C']['4'] = 1 df['A']['5'] = 1 df['B']['5'] = 1 df['C']['5'] = 1 df['D']['6'] = 1 df['F']['6'] = 1 df A B C D E F 1 NaN NaN NaN 1 1 1 2 1 NaN NaN NaN 1 NaN 3 NaN 1 NaN NaN 1 NaN 4 NaN NaN 1 NaN 1 NaN 5 1 1 1 NaN 1 NaN 6 NaN NaN NaN 1 1 1 </code></pre> <p>My condition is, I want to remove the columns which have value only when A,B,C(together) don't have a value. I want to find which column is mutually exclusive to A,B,C columns together. I am interested in finding the columns that have values only when A or B or C has values. The output here would be to remove D,F columns. But my dataframe has 400 columns and I want a way to check this for A,B,C vs rest of the columns.</p> <p>One way I can think is, </p> <p>Remove NA rows from A,B,C</p> <pre><code>df = df[np.isfinite(df['A'])] df = df[np.isfinite(df['B'])] df = df[np.isfinite(df['C'])] </code></pre> <p>and get NA count of all columns and check with the total number of rows,</p> <pre><code>df.isnull().sum() </code></pre> <p>and remove the counts that match.</p> <p>Is there a better and efficient way to do this?</p> <p>Thanks</p>
<p>Rather than delete rows, just select the others that don't have A, B, C equal to NaN at the same time.</p> <pre><code>mask = df[["A", "B", "C"]].isnull().all(axis=1) df = df[~mask] </code></pre>
python|python-2.7|numpy|data-cleaning
0
2,351
36,119,783
How to efficiently compute orientation of 3D normals in large pointclouds
<p>I'm working (more like learn by doing) on a Python's library for managing pointclouds in Python.</p> <p>I've writed a function to compute the orientation of every Normal in a pointcloud stored as a numpy structured array, but I'm not happy enought with the final function (thought it works and pretty fast enought) and I was wondering if there is another more efficient/pythonic approach to compute the orientation in large pointclouds.</p> <p>This is how the pointcloud is structured:</p> <pre><code>esfera = PyntCloud.from_ply('Sphere.ply') esfera.vertex Out[3]: array([ (0.2515081465244293, 0.05602749437093735, 1.9830318689346313, 0.12660565972328186, 0.02801010198891163, 0.9915575981140137, 7.450349807739258, 77.52488708496094), (0.09723527729511261, 0.02066999115049839, 1.9934484958648682, 0.048643846064805984, 0.011384730227291584, 0.9987513422966003, 2.863548517227173, 76.82744598388672), (0.17640848457813263, 0.028193067759275436, 1.9881943464279175, 0.08916780352592468, 0.01611466333270073, 0.9958862066268921, 5.198856830596924, 79.75591278076172), ..., (0.17817874252796173, -0.046098098158836365, -1.9879237413406372, 0.08992616087198257, -0.02275240235030651, -0.9956884980201721, 5.322407245635986, 284.19854736328125), (0.2002459168434143, -0.002330917865037918, -1.986855149269104, 0.09960971027612686, -0.0010710721835494041, -0.9950260519981384, 5.717002868652344, 270.6160583496094), (0.12885123491287231, -0.03245270624756813, -1.9912745952606201, 0.06637085974216461, -0.01580258458852768, -0.9976698756217957, 3.912114381790161, 283.3924865722656)], dtype=[('x', '&lt;f4'), ('y', '&lt;f4'), ('z', '&lt;f4'), ('nx', '&lt;f4'), ('ny', '&lt;f4'), ('nz', '&lt;f4'), ('scalar_Dip_(degrees)', '&lt;f4'), ('scalar_Dip_direction_(degrees)', '&lt;f4')]) esfera.vertex['nx'] Out[4]: array([ 0.12660566, 0.04864385, 0.0891678 , ..., 0.08992616, 0.09960971, 0.06637086], dtype=float32) esfera.vertex[-1]['nx'] Out[5]: 0.06637086 </code></pre> <p>And this is the orientation function:</p> <pre><code>def add_orientation(self, degrees=True): """ Adds orientation (with respect to y-axis) values to PyntCloud.vertex This function expects the PyntCloud to have a numpy structured array with normals x,y,z values (correctly named) as the corresponding vertex atribute. Args: degrees (Optional[bool]): Set the oputput orientation units. If True(Default) set units to degrees. If False set units to radians. """ #: set copy to False for efficience in large pointclouds nx = self.vertex['nx'].astype(np.float64, copy=False) ny = self.vertex['ny'].astype(np.float64, copy=False) #: get orientations angle = np.arctan(np.absolute(nx / ny)) #: mask for every quadrant q2 = np.logical_and((self.vertex['nx']&gt;0),(self.vertex['ny']&lt;0)) q3 = np.logical_and((self.vertex['nx']&lt;0),(self.vertex['ny']&lt;0)) q4 = np.logical_and((self.vertex['nx']&lt;0),(self.vertex['ny']&gt;0)) #: apply modification for every quadrant angle[q2] = np.pi - angle[q2] angle[q3] = np.pi + angle[q3] angle[q4] = (2*np.pi) - angle[q4] if degrees == False: orientation = np.array(angle, dtype=[('orir', 'f4')]) else: orientation = np.array((180 * angle / np.pi), dtype=[('orid', 'f4')]) #: merge the structured arrays and replace the old vertex attribute self.vertex = join_struct_arrays([self.vertex, orientation]) </code></pre> <p>And the results visualized in CloudCompare (dont have enoght rep. 
for post images):</p> <p><a href="https://raw.githubusercontent.com/daavoo/sa/master/Captura%20de%20pantalla%20de%202016-03-21%2013%3A28%3A39.png" rel="nofollow">https://raw.githubusercontent.com/daavoo/sa/master/Captura%20de%20pantalla%20de%202016-03-21%2013%3A28%3A39.png</a></p> <p>Thanks for your help.</p>
<p>Well, I'm ashamed of myself. xD</p> <p>Those numpy built-in function were exactly what I was looking for.</p> <p>Thanks @Dan.</p> <p>Here is the new function:</p> <pre><code> def add_orientation(self, degrees=True): """ Adds orientation (with respect to y-axis) values to PyntCloud.vertex This function expects the PyntCloud to have a numpy structured array with normals x,y,z values (correctly named) as the corresponding vertex atribute. Args: degrees (Optional[bool]): Set the oputput orientation units. If True(Default) set units to degrees. If False set units to radians. """ #: set copy to False for efficience in large pointclouds nx = self.vertex['nx'].astype(np.float64, copy=False) ny = self.vertex['ny'].astype(np.float64, copy=False) #: get orientations angle = np.arctan2(nx,ny) #: convert (-180 , 180) to (0 , 360) angle[(np.where(angle &lt; 0))] = (2*np.pi) + angle[(np.where(angle &lt; 0))] if degrees: orientation = np.array(np.rad2deg(angle), dtype=[("orid2",'f4')]) else: orientation = np.array(angle, dtype=[("orir2",'f4')]) self.vertex = join_struct_arrays([self.vertex, orientation]) </code></pre> <p>Wich is simpler and faster.</p> <pre><code>t0 = t.time() esfera.add_orientation() t1 = t.time() dif = t1-t0 dif Out[18]: 0.34514379501342773 t0 = t.time() esfera.add_orientation2() t1 = t.time() dif = t1-t0 dif Out[20]: 0.291456937789917 </code></pre> <p>Now I'm as happy as ashamed.</p> <p>Next time I'll take a deeper look to the numpy docs before posting a question.</p> <p>Thanks.</p> <pre><code>comp = esfera.vertex['orid'] == esfera.vertex['orid2'] np.all(comp) Out[15]: True </code></pre>
python|numpy|3d|point-clouds
1
2,352
35,799,709
pandas dataframe find nth non isnull row
<p>I want to know how many points in a pandas dataframe where index is a series of dates that I need to have in order to end up with X points after doing a dropna(). I want the latest points. Example:</p> <pre><code>window = 504 s1 = pd.DataFrame(stuff) len(s1.index) --&gt; 600 dropped_series = s1.dropna() len(dropped_series.index) --&gt; 480 diff_points_count = len(s1.index) - len(dropped_series.index) final_series = s1.tail(window + diff_points_count).dropna() </code></pre> <p>--> len(final_series.index) does not necessarily equal the window. Depends on where the NaN's are.</p> <p>I need it to work where s1 is either a pandas.Series or a pandas.DataFrame</p>
<p>Here is my solution, but I'm sure there's a more elegant way to do it:</p> <pre><code> all_series_df = pd.concat([harmonized_series_set[i] for i in series_indices], axis=1) all_series_df['is_valid'] = all_series_df.apply(lambda x: 0 if np.any(np.isnan(x)) else 1, raw=True, axis=1) valid_point_count = all_series_df['is_valid'].sum() all_series_df['count_valid'] = valid_point_count - all_series_df['is_valid'].cumsum() + 1 matching_row_array = all_series_df.loc[all_series_df['count_valid'] == (window + output_length - 1)] matching_row_index = 0 if isinstance(matching_row_array, pd.DataFrame) and len(matching_row_array.index) &gt; 0: matching_row_index = all_series_df.index.get_loc(matching_row_array.index[0]) tail_amount = len(all_series_df.index) - matching_row_index for i, arg in enumerate(args): if i in series_indices: tailed_series = harmonized_series_set[i].tail(tail_amount) harmonized_args.append(tailed_series) else: harmonized_args.append(arg) return tuple(harmonized_args) </code></pre>
python|numpy|pandas
0
2,353
35,916,378
Create empty csv file with pandas
<p>I am interacting through a number of csv files and want to append the mean temperatures to a blank csv file. How do you create an empty csv file with pandas?</p> <pre><code>for EachMonth in MonthsInAnalysis: TheCurrentMonth = pd.read_csv('MonthlyDataSplit/Day/Day%s.csv' % EachMonth) MeanDailyTemperaturesForCurrentMonth = TheCurrentMonth.groupby('Day')['AirTemperature'].mean().reset_index(name='MeanDailyAirTemperature') with open('my_csv.csv', 'a') as f: df.to_csv(f, header=False) </code></pre> <p>So in the above code how do I create the <code>my_csv.csv</code> prior to the <code>for</code> loop?</p> <p>Just a note I know you can create a data frame then save the data frame to csv but I am interested in whether you can skip this step.</p> <p>In terms of context I have the following csv files:</p> <p><a href="https://i.stack.imgur.com/2aU1f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2aU1f.png" alt="enter image description here"></a></p> <p>Each of which have the following structure:</p> <p><a href="https://i.stack.imgur.com/tHsy2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tHsy2.png" alt="enter image description here"></a></p> <p>The Day column reads up to 30 days for each file. </p> <p>I would like to output a csv file that looks like this:</p> <p><a href="https://i.stack.imgur.com/C7Fff.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7Fff.png" alt="enter image description here"></a></p> <p>But obviously includes all the days for all the months. </p> <p>My issue is that I don't know which months are included in each analysis hence I wanted to use a for loop that used a list that has that information in it to access the relevant csvs, calculate the mean temperature then save it all into one csv.</p> <p>Input as text: </p> <pre><code> Unnamed: 0 AirTemperature AirHumidity SoilTemperature SoilMoisture LightIntensity WindSpeed Year Month Day Hour Minute Second TimeStamp MonthCategorical TimeOfDay 6 6 18 84 17 41 40 4 2016 1 1 6 1 1 10106 January Day 7 7 20 88 22 92 31 0 2016 1 1 7 1 1 10107 January Day 8 8 23 1 22 59 3 0 2016 1 1 8 1 1 10108 January Day 9 9 23 3 22 72 41 4 2016 1 1 9 1 1 10109 January Day 10 10 24 63 23 83 85 0 2016 1 1 10 1 1 10110 January Day 11 11 29 73 27 50 1 4 2016 1 1 11 1 1 10111 January Day </code></pre>
<p>Just open the file in write mode to create it.</p> <pre><code>with open('my_csv.csv', 'w'):
    pass
</code></pre> <p>Anyway, I do not think you should be opening and closing the file so many times. You'd better open the file once and write to it several times.</p> <pre><code>with open('my_csv.csv', 'w') as f:
    for EachMonth in MonthsInAnalysis:
        TheCurrentMonth = pd.read_csv('MonthlyDataSplit/Day/Day%s.csv' % EachMonth)
        MeanDailyTemperaturesForCurrentMonth = TheCurrentMonth.groupby('Day')['AirTemperature'].mean().reset_index(name='MeanDailyAirTemperature')
        MeanDailyTemperaturesForCurrentMonth.to_csv(f, header=False)
</code></pre>
python|csv|pandas|is-empty
4
2,354
37,885,014
tensorflow evaluate on test set with queques
<p>TensorFlow's customer/producer prefetching mechanism is awesome for training.</p> <p>However, I am not able to find a way to use it for evaluation on test data. We want to go through the test data only and exactly once. But test data is always not dividable by batch size. How should I deal with the remainder?</p> <p>Thanks!</p>
<p>See <code>eval_in_batches</code> from <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py#L265" rel="nofollow">convolutional.py</a> official example. It does most <code>session.run</code> calls on regular batch size, while last <code>session.run</code> is done on smaller sized batch. This works when you don't hard-code batch-size into your graph.</p>
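<p>For illustration, here is a minimal sketch of that pattern (not the exact code from the example, and <code>predict_fn</code> is a placeholder standing in for the <code>session.run</code> call on the eval graph): step through the test set in fixed-size batches, and for the final, smaller remainder run one more full-size batch that ends at the last sample, keeping only the predictions that were not already computed. Per-sample outputs are assumed scalar for simplicity.</p>
<pre><code>import numpy as np

def eval_in_batches(data, batch_size, predict_fn):
    """Evaluate `data` batch by batch; assumes len(data) &gt;= batch_size."""
    size = len(data)
    predictions = np.zeros(size)
    for begin in range(0, size, batch_size):
        end = begin + batch_size
        if end &lt;= size:
            predictions[begin:end] = predict_fn(data[begin:end])
        else:
            # last chunk is smaller: run a full-size batch ending at `size`
            # and keep only the tail that was not covered yet
            batch_predictions = predict_fn(data[-batch_size:])
            predictions[begin:] = batch_predictions[begin - size:]
    return predictions
</code></pre>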
tensorflow|deep-learning
0
2,355
37,897,527
get python pandas to_dict with orient='records' but without float cast
<p>I have a dataframe with one col int one col floats:</p> <pre><code>df # a b # 0 3 42.00 # 1 2 3.14 df.dtypes # a int64 # b float64 # dtype: object </code></pre> <p>I want a list of dicts like the one provide by <code>df.to_dict(orient='records')</code></p> <pre><code>df.to_dict(orient='records') [{'a': 3.0, 'b': 42.0}, {'a': 2.0, 'b': 3.1400000000000001}] </code></pre> <p>But with <code>a</code> as <code>int</code>, not casted as float</p>
<p>Currently (as of Pandas version 0.18), <code>df.to_dict('records')</code> accesses the NumPy array <code>df.values</code>. This property upcasts the dtype of the <code>int</code> column to <code>float</code> so that the array can have a single common dtype. After this point there is no hope of returning the desired result -- all the ints have been converted to floats.</p> <p>So instead, building on <a href="https://stackoverflow.com/questions/37897527/get-python-pandas-to-dict-with-orient-records-but-without-float-cast#comment63248425_37897527">ayhan's</a> and <a href="https://github.com/pydata/pandas/issues/12859#issuecomment-208319535" rel="noreferrer">Tom Augspurger's suggestion</a> you could use a list and dict comprehension:</p> <pre><code>import pandas as pd df = pd.DataFrame({'a':[3,2], 'b':[42.0,3.14]}) result = [{col:getattr(row, col) for col in df} for row in df.itertuples()] print(result) # [{'a': 3, 'b': 42.0}, {'a': 2, 'b': 3.1400000000000001}] </code></pre>
python|pandas
11
2,356
64,596,935
Tensorflow js: Your application contains ops that are small enough to be executed on the CPU backend, however the CPU backend cannot be found
<p>I use handpose tensorflow model for hand detection in browser using tfjs with webgl backend. However, i see warning in console <code>Your application contains ops that are small enough to be executed on the CPU backend, however the CPU backend cannot be found. Consider importing the CPU backend (@tensorflow/tfjs-backend-cpu) for better performance.</code></p> <p>If i try to add it to my project, I see a lot of warnings about that CPU backend is already registered.</p> <p>Is there any way to use CPU and GPU (webgl) together in tfjs?</p>
<p>By default, Tensorflow JS does some of the inference (forward propagation as opposed to backprop) on the CPU, since some CPUs have vector arithmetic units that speed up matrix multiplication.</p> <p>You can check the <code>tf.ENV.flags</code> variable to see the various state variables that tfjs sets by default. One of them is the <code>WEBGL_CPU_FORWARD</code> flag, which you can set to true like this in the console or in a javascript function.</p> <pre><code>tf.ENV.set(&quot;WEBGL_CPU_FORWARD&quot;, true) </code></pre> <p>This should activate the CPU backend without you having to import it again in a <code>script</code> tag. Or, you can just ignore the warnings since they don't impact your program at all.</p>
javascript|tensorflow|tensorflow.js
1
2,357
64,610,841
BERT-based NER model giving inconsistent prediction when deserialized
<p>I am trying to train an NER model using the HuggingFace transformers library on Colab cloud GPUs, pickle it and load the model on my own CPU to make predictions.</p> <p><strong>Code</strong></p> <p>The model is the following:</p> <pre><code>from transformers import BertForTokenClassification model = BertForTokenClassification.from_pretrained( &quot;bert-base-cased&quot;, num_labels=NUM_LABELS, output_attentions = False, output_hidden_states = False ) </code></pre> <p>I am using this snippet to save the model on Colab</p> <pre><code>import torch torch.save(model.state_dict(), FILENAME) </code></pre> <p>Then load it on my local CPU using</p> <pre><code># Initiating an instance of the model type model_reload = BertForTokenClassification.from_pretrained( &quot;bert-base-cased&quot;, num_labels=len(tag2idx), output_attentions = False, output_hidden_states = False ) # Loading the model model_reload.load_state_dict(torch.load(FILENAME, map_location='cpu')) model_reload.eval() </code></pre> <p>The code snippet used to tokenize the text and make actual predictions is the same both on the Colab GPU notebook instance and my CPU notebook instance.</p> <p><strong>Expected Behavior</strong></p> <p>The GPU-trained model behaves correctly and classifies the following tokens perfectly:</p> <pre><code>O [CLS] O Good O morning O , O my O name O is B-per John I-per Kennedy O and O I O am O working O at B-org Apple O in O the O headquarters O of B-geo Cupertino O [SEP] </code></pre> <p><strong>Actual Behavior</strong></p> <p>When loading the model and use it to make predictions on my CPU, the predictions are totally wrong:</p> <pre><code>I-eve [CLS] I-eve Good I-eve morning I-eve , I-eve my I-eve name I-eve is I-geo John B-eve Kennedy I-eve and I-eve I I-eve am I-eve working I-eve at I-gpe Apple I-eve in I-eve the I-eve headquarters I-eve of B-org Cupertino I-eve [SEP] </code></pre> <p>Does anyone have ideas why it doesn't work? Did I miss something?</p>
<p>I fixed it; there were two problems:</p> <ol> <li><p>The index-label mapping for tokens was wrong; for some reason the <code>list()</code> function worked differently on the Colab GPU instance than on my CPU (??)</p> </li> <li><p>The snippet used to save the model was not correct. For models based on the huggingface-transformers library you can't just save the <code>state_dict()</code> with <code>torch.save</code> and load it later; you need to use the <code>save_pretrained()</code> method of your model class, and load it later using <code>from_pretrained()</code>.</p> </li> </ol>
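<p>For illustration, a minimal sketch of the second fix (the directory name is a placeholder, not taken from the original code):</p>
<pre><code># On the machine that trained the model (e.g. Colab)
model.save_pretrained("ner_model_dir")   # writes config.json plus the model weights

# On the machine doing inference (e.g. local CPU)
from transformers import BertForTokenClassification
model_reload = BertForTokenClassification.from_pretrained("ner_model_dir")
model_reload.eval()
</code></pre>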
python|pytorch|bert-language-model|huggingface-transformers
3
2,358
64,428,886
Matplotlib - Skipping xticks while maintaining correct x value
<p>I'm trying to plot two separate things from two pandas dataframes but the x-axis is giving some issues. When using matplotlib.ticker to skip x-ticks, the date doesn't get skipped. The result is that the x-axis values doesn't match up with what is plotted.</p> <p>For example, when the x-ticks are set to a base of 2, you'll see that the dates are going up by 1.</p> <p><a href="https://i.stack.imgur.com/H22L1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H22L1.png" alt="base of 2" /></a></p> <p>But the graph has the same spacing when the base is set to 4, which you can see here:</p> <p><a href="https://i.stack.imgur.com/WGPvy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WGPvy.png" alt="base of 4" /></a></p> <p>For the second image, the goal is for the days to increase by 4 each tick, so it should read 22, 26, 30, etc.</p> <p>Here is the code that I'm working with:</p> <pre><code>ax = plot2[['Date','change value']].plot(x='Date',color='red',alpha=1,linewidth=1.5) plt.ylabel('Total Change') plot_df[['Date','share change daily']].plot(x='Date',secondary_y=True,kind='bar',ax=ax,alpha=0.4,color='black',figsize=(6,2),label='Daily Change') plt.ylabel('Daily Change') ax.legend(['Total Change (L)','Daily Change']) plt.xticks(plot_df.index,plot_df['Date'].values) myLocator = mticker.MultipleLocator(base=4) ax.xaxis.set_major_locator(myLocator) </code></pre> <p>Any help is appreciated! Thanks :)</p>
<p>First off, I suggest you set the date as the index of your dataframe. This lets pandas automatically format the date labels nicely when you create line plots and it lets you conveniently create a custom format with the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>strftime</code></a> method.</p> <p>This second point is relevant to this example, seeing as plotting a bar plot over a line plot prevents you from getting the pandas line plot date labels because the x-axis units switch to integer units starting at 0 (note that this is also the case when you use the dates as strings instead of <a href="https://docs.python.org/3/library/datetime.html" rel="nofollow noreferrer"><code>datetime</code></a> objects, aka <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Timestamp.html" rel="nofollow noreferrer"><code>timestamp</code></a> objects in pandas). You can check this for yourself by running <code>ax.get_xticks()</code> after creating the line plot (with a DatetimeIndex) and again after creating the bar plot.</p> <p>There are too many peculiarities regarding the tick locators and formatters, the pandas plotting defaults, and the various ways in which you could define your custom ticks and tick labels for me to go into more detail here. So let me suggest you refer to the documentation for more information (though for your case you don't really need any of this): <a href="https://matplotlib.org/gallery/ticks_and_spines/major_minor_demo.html" rel="nofollow noreferrer">Major and minor ticks</a>, <a href="https://matplotlib.org/gallery/text_labels_and_annotations/date.html" rel="nofollow noreferrer">Date tick labels</a>, <a href="https://matplotlib.org/gallery/text_labels_and_annotations/date_index_formatter.html" rel="nofollow noreferrer">Custom tick formatter for time series</a>, <a href="https://matplotlib.org/gallery/index.html#ticks-and-spines" rel="nofollow noreferrer">more examples using ticks</a>, and the ticker module which contains the <a href="https://matplotlib.org/api/ticker_api.html" rel="nofollow noreferrer">list of tick locators and formatters</a> and their parameters.</p> <p>Furthermore, you can identify the default tick locators and formatters used by the plotting functions with <code>ax.get_xaxis().get_major_locator()</code> or <code>ax.get_xaxis().get_major_formatter()</code> (you can do the same for the y-axis, and for minor ticks) to get an idea of what is happening under the hood.</p> <p>On to solving your problem. Seeing as you want a fixed frequency of ticks for a predefined range of dates, I suggest that you avoid explicitly selecting a ticker locator and formatter and that instead you simply create the list of ticks and tick labels you want. 
First, here is some sample data similar to yours:</p> <pre><code>import numpy as np # v 1.19.2 import pandas as pd # v 1.1.3 import matplotlib.pyplot as plt # v 3.3.2 rng = np.random.default_rng(seed=1) # random number generator dti = pd.bdate_range(start='2020-07-22', end='2020-09-03') daily = rng.normal(loc=0, scale=250, size=dti.size) total = -1900 + np.cumsum(daily) df = pd.DataFrame({'Daily Change': daily, 'Total Change': total}, index=dti) df.head() </code></pre> <pre><code> Daily Change Total Change 2020-07-22 86.396048 -1813.603952 2020-07-23 205.404536 -1608.199416 2020-07-24 82.609269 -1525.590147 2020-07-27 -325.789308 -1851.379455 2020-07-28 226.338967 -1625.040488 </code></pre> <p>The date is set as the index, which will simplify the code for creating the plots (no need to specify <code>x</code>). I use the same formatting arguments as in the example you gave, except for the figure size. Note that for setting the ticks and tick labels I do not use <code>plt.xticks</code> because this refers to the secondary Axes containing the bar plot and for some reason, the <code>rotation</code> and <code>ha</code> arguments get ignored.</p> <pre><code>label_daily, label_total = df.columns # Create pandas line plot: note the 'use_index' parameter ax = df.plot(y=label_total, color='red', alpha=1, linewidth=1.5, use_index=False, ylabel=label_total) # Create pandas bar plot: note that the second ylabel must be created # after, else it overwrites the previous label on the left df.plot(kind='bar', y=label_daily, color='black', alpha=0.4, ax=ax, secondary_y=True, mark_right=False, figsize=(9, 4)) plt.ylabel(label_daily, labelpad=10) # Place legend in a better location: note that because there are two # Axes, the combined legend can only be edited with the fig.legend # method, and the ax legend must be removed ax.legend().remove() plt.gcf().legend(loc=(0.11, 0.15)) # Create custom x ticks and tick labels freq = 4 # business days xticks = ax.get_xticks() xticklabels = df.index[::freq].strftime('%b-%d') ax.set_xticks(xticks[::freq]) ax.set_xticks(xticks, minor=True) ax.set_xticklabels(xticklabels, rotation=0, ha='center') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/veSNl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/veSNl.png" alt="pandas_twinax_line_bar" /></a></p> <br> <p>The codes for formatting the dates can be found <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes" rel="nofollow noreferrer">here</a>.</p> <hr /> <p>For the sake of completeness, here are two alternative ways of creating exactly the same ticks but this time by making explicit use of matplotlib tick locators and formatters.</p> <p>This first alternative uses lists of ticks and tick labels like before, but this time passing them to <a href="https://matplotlib.org/api/ticker_api.html#matplotlib.ticker.FixedLocator" rel="nofollow noreferrer"><code>FixedLocator</code></a> and <a href="https://matplotlib.org/api/ticker_api.html#matplotlib.ticker.FixedFormatter" rel="nofollow noreferrer"><code>FixedFormatter</code></a>:</p> <pre><code>import matplotlib.ticker as mticker # Create custom x ticks and tick labels freq = 4 # business days maj_locator = mticker.FixedLocator(ax.get_xticks()[::freq]) min_locator = mticker.FixedLocator(ax.get_xticks()) ax.xaxis.set_major_locator(maj_locator) ax.xaxis.set_minor_locator(min_locator) maj_formatter = mticker.FixedFormatter(df.index[maj_locator.locs].strftime('%b-%d')) ax.xaxis.set_major_formatter(maj_formatter) 
plt.setp(ax.get_xticklabels(), rotation=0, ha='center') </code></pre> <p>This second alternative makes use of the option to create a tick at every nth position of the index when using <a href="https://matplotlib.org/api/ticker_api.html#matplotlib.ticker.IndexLocator" rel="nofollow noreferrer"><code>IndexLocator</code></a>, combining it with <a href="https://matplotlib.org/api/ticker_api.html#matplotlib.ticker.FuncFormatter" rel="nofollow noreferrer"><code>FuncFormatter</code></a> (instead of <code>IndexFormatter</code> which is deprecated):</p> <pre><code>import matplotlib.ticker as mticker # Create custom x ticks and tick labels maj_freq = 4 # business days min_freq = 1 # business days maj_locator = mticker.IndexLocator(maj_freq, 0) min_locator = mticker.IndexLocator(min_freq, 0) ax.xaxis.set_major_locator(maj_locator) ax.xaxis.set_minor_locator(min_locator) maj_formatter = mticker.FuncFormatter(lambda x, pos=None: df.index[int(x)].strftime('%b-%d')) ax.xaxis.set_major_formatter(maj_formatter) plt.setp(ax.get_xticklabels(), rotation=0, ha='center') </code></pre> <p>As you can see, both of these alternatives are more verbose than the initial example.</p>
python|pandas|matplotlib
1
2,359
47,897,790
Python How to fill a customized value (such as "#NA####') with bfill method?
<p>I have a data frame containing "#NA####". I want to back-fill this value with group mean.</p> <p>I know I can first replace "#NA####" with np.NAN, then use pd.fillna, but are there any more convenient ways?</p>
<p><strong>Setup</strong></p> <pre><code>df Group Value 0 1 10 1 1 #NA### 2 3 5 3 2 10 4 2 #NA### 5 3 #NA### 6 1 40 7 2 #NA### 8 3 100 9 1 20 </code></pre> <p>Call <code>pd.to_numeric</code>, to coerce those strings to NaNs.</p> <pre><code>df.Value = pd.to_numeric(df.Value, errors='coerce') </code></pre> <p>Now, group by <code>Group</code>, and call <code>fillna</code> with the <code>mean</code> - </p> <pre><code>df = df.set_index('Group').Value\ .fillna(df.groupby('Group').mean().Value)\ .reset_index() df Group Value 0 1 10.000000 1 1 23.333333 2 3 5.000000 3 2 10.000000 4 2 10.000000 5 3 52.500000 6 1 40.000000 7 2 10.000000 8 3 100.000000 9 1 20.000000 </code></pre> <hr> <p>An alternative fill method (from a now deleted answer) which I thought was pretty good involves <code>groupby</code> + <code>transform</code> - </p> <pre><code>df.Value = df.Value.fillna(df.groupby('Group')['Value'].transform('mean')) df Group Value 0 1 10.000000 1 1 23.333333 2 3 5.000000 3 2 10.000000 4 2 10.000000 5 3 52.500000 6 1 40.000000 7 2 10.000000 8 3 100.000000 9 1 20.000000 </code></pre>
python|pandas|missing-data
0
2,360
47,985,005
Finetuning DNN with continuous outputs in the last layer
<p>Greatly appreciate it if someone could help me out here:</p> <p>I'm trying to do some finetuning on a regression task --- my inputs are <code>200X200</code> RGB images and my prediction output/label is a set of real values (let's say, within <code>[0,10]</code>, though scaling is not a big deal here...?) --- on top of <code>InceptionV3</code> architecture. Here are my functions that take a pretrained <code>Inception</code> model, remove the last layer and add a a new layer, set up for finetuning...</p> <pre><code>""" Fine-tuning functions """ IM_WIDTH, IM_HEIGHT = 299, 299 #fixed size for InceptionV3 NB_EPOCHS = 3 BAT_SIZE = 32 FC_SIZE = 1024 NB_IV3_LAYERS_TO_FREEZE = 172 def eucl_dist(inputs): x, y = inputs return ((x - y)**2).sum(axis=-1) def add_new_last_continuous_layer(base_model): """Add last layer to the convnet Args: base_model: keras model excluding top, for instance: base_model = InceptionV3(weights='imagenet',include_top=False) Returns: new keras model with last layer """ x = base_model.output x = GlobalAveragePooling2D()(x) x = Dense(FC_SIZE, activation='relu')(x) predictions = Lambda(eucl_dist, output_shape=(1,))(x) model = Model(input=base_model.input, output=predictions) return model def setup_to_finetune_continuous(model): """Freeze the bottom NB_IV3_LAYERS and retrain the remaining top layers. note: NB_IV3_LAYERS corresponds to the top 2 inception blocks in the inceptionv3 architecture Args: model: keras model """ for layer in model.layers[:NB_IV3_LAYERS_TO_FREEZE]: layer.trainable = False for layer in model.layers[NB_IV3_LAYERS_TO_FREEZE:]: layer.trainable = True model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='eucl_dist') </code></pre> <p>Here are my implementations:</p> <pre><code>base_model = InceptionV3(weights = "imagenet", include_top=False, input_shape=(3,200,200)) model0 = add_new_last_continuous_layer(base_model) setup_to_finetune_continuous(model0) history=model0.fit(train_x, train_y, validation_data = (test_x, test_y), nb_epoch=epochs, batch_size=32) scores = model0.evaluate(test_x, test_y, verbose = 0) features = model0.predict(X_train) </code></pre> <p>where <code>train_x</code> is a <code>(168435, 3, 200, 200)</code> <code>numpy</code> array and <code>train_y</code> is a <code>(168435,)</code> <code>numpy</code> array. The same goes for <code>test_x</code> and <code>test_y</code> except the number of observations is <code>42509</code>.</p> <p>I got the <code>TypeError: Tensor object is not iterable</code> bug which occurred at <code>predictions = Lambda(eucl_dist, output_shape=(1,))(x)'' when going through the</code>add_new_last_continuous_layer()`` function. Could you anyone kindly give me some guidance to get around that and what the problem is? Greatly appreciated and happy holidays!</p> <p>EDIT: Changed the functions to:</p> <pre><code>def eucl_dist(inputs): x, y = inputs return ((x - y)**2).sum(axis=-1) def add_new_last_continuous_layer(base_model): """Add last layer to the convnet Args: base_model: keras model excluding top, for instance: base_model = InceptionV3(weights='imagenet',include_top=False) Returns: new keras model with last layer """ x = base_model.output x = GlobalAveragePooling2D()(x) x1 = Dense(FC_SIZE, activation='relu')(x) x2 = Dense(FC_SIZE, activation='relu')(x) predictions = Lambda(eucl_dist, output_shape=eucl_dist_shape)([x1,x2]) model = Model(input=base_model.input, output=predictions) return model </code></pre>
<p>Your output shape for the lambda layer is wrong. Define your functions like this:</p> <pre><code>from keras import backend as K def euclidean_distance(vects): x, y = vects return K.sqrt(K.maximum(K.sum(K.square(x - y), axis=1, keepdims=True), K.epsilon())) def eucl_dist_output_shape(shapes): shape1, shape2 = shapes return (shape1[0], 1) predictions = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([input1, input2]) </code></pre>
tensorflow|computer-vision|deep-learning|keras|conv-neural-network
2
2,361
58,709,538
Rounding hours in data frame using Python -Pandas
<p>I have a data frame that I created in python containing data about plants that were measured once every hour. The problem is that the original intention was to measure them at the same hour every day - 10:00, 11:00, 12:00... but in real life the plants were measured at slightly different times, so now I have too many rows.</p> <p><a href="https://i.stack.imgur.com/W1fqd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W1fqd.png" alt="enter image description here"></a></p> <p>As you can see here, plant D10A was measured one day at 10:02, one day at 10:09, 10:14, 10:17... I want all of those to fall under "10:00" and then to have fewer rows.</p> <p><strong>My end goal is to have the same table but with rounded hours instead of the exact time</strong></p>
<pre><code># here is the piece of your dataframe: 6/17/2019 6/18/2019 plant Hour D10A 10:02 NaN NaN 10:09 NaN 0.33 10:14 NaN NaN 10:17 0.777 NaN 10:19 NaN NaN col = df.columns df = df.reset_index() df['hr'] = pd.to_datetime(df['Hour']).apply(lambda x: x.hour) df.fillna(0).groupby(['plant','hr'])[col].max() Out[1]: 6/17/2019 6/18/2019 plant hr D10A 10 0.777 0.33 </code></pre> <h2>Upd: for just rounding Hours, here is the code:</h2> <pre><code>col = df.columns df = df.reset_index() df['Hour'] = pd.to_datetime(df['Hour']).apply(lambda x: str(x.hour) + ':00') df.set_index(['plant', 'Hour'])[col] Out[2]: 6/17/2019 6/18/2019 plant Hour D10A 10:00 NaN NaN 10:00 NaN 0.33 10:00 NaN NaN 10:00 0.777 NaN 10:00 NaN NaN </code></pre>
python|pandas|pandas-groupby
0
2,362
70,261,404
Iterate over rows and subtract values in pandas df
<p>I have the following table:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">ID</th> <th style="text-align: center;">Qty_1</th> <th style="text-align: center;">Qty_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">1</td> <td style="text-align: center;">10</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">2</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">3</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">3</td> <td style="text-align: center;">29</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">2</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> </tr> </tbody> </table> </div> <p>I want to iterate based on the ID, and subtract Qty_2 - Qty_1 and update the next row with that result.</p> <p>The result would be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">ID</th> <th style="text-align: center;">Qty_1</th> <th style="text-align: center;">Qty_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">1</td> <td style="text-align: center;">10</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">2</td> <td style="text-align: center;">8</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">3</td> <td style="text-align: center;">5</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">3</td> <td style="text-align: center;">29</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">2</td> <td style="text-align: center;">27</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">1</td> <td style="text-align: center;">26</td> </tr> </tbody> </table> </div> <p>Ideally, I would also like to start by subtracting the first row end a new ID appears and only after that start the loop:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">ID</th> <th style="text-align: center;">Qty_1</th> <th style="text-align: center;">Qty_2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">1</td> <td style="text-align: center;">9</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">2</td> <td style="text-align: center;">7</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">3</td> <td style="text-align: center;">4</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">3</td> <td style="text-align: center;">26</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">2</td> <td style="text-align: center;">24</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">1</td> <td style="text-align: center;">23</td> </tr> </tbody> </table> </div> <p>Each of the solutions is ok! Thank you!</p>
<p>First compute the difference between 'Qty_1' and 'Qty_2' row by row, then group by 'ID' and compute cumulative sum:</p> <pre><code>df['Qty_2'] = df.assign(Qty_2=df['Qty_2'].sub(df['Qty_1'])) \ .groupby('ID')['Qty_2'].cumsum() print(df) # Output: ID Qty_1 Qty_2 0 A 1 9 1 A 2 7 2 A 3 4 3 B 3 26 4 B 2 24 5 B 1 23 </code></pre> <p>Setup:</p> <pre><code>data = {'ID': ['A', 'A', 'A', 'B', 'B', 'B'], 'Qty_1': [1, 2, 3, 3, 2, 1], 'Qty_2': [10, 0, 0, 29, 0, 0]} df = pd.DataFrame(data) </code></pre>
pandas|loops
1
2,363
56,268,769
Loading .npz with Python 3.5 always crashes
<p>In this simple <a href="https://homes.cs.washington.edu/~thickstn/intro.html" rel="nofollow noreferrer">tutorial</a> written in Python 2.7, they have a line loading the numpy array.</p> <pre><code>train_data = np.load(open('../musicnet.npz','rb')) </code></pre> <p>Then, they get the data by calling different keys</p> <pre><code>X,Y = train_data['2494'] </code></pre> <p>Everything works well in python 2.7</p> <p>Data type of <code>train_data</code> is <code>numpy.lib.npyio.NpzFile</code></p> <h2>My problem</h2> <p>However, whenever I try to do the same in Python 3.5, most of the lines work fine, except when it comes to the line of <code>X,Y = train_data['2494']</code>, it just freezes there forever. I would like to use Python 3.5 because my other projects are written in python 3.5.</p> <p>How to rewrite this line so that it runs with Python 3.5?</p> <h2>Error Message</h2> <p>I finally managed to get the error message in terminal <a href="https://i.stack.imgur.com/ZcW8C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZcW8C.png" alt="enter image description here"></a></p> <p>It freezes there because there's tons of output right after the error message, my jupyter notebook just cannot handle that much information. <a href="https://i.stack.imgur.com/CHlSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CHlSM.png" alt="enter image description here"></a></p> <h2>Solution</h2> <p>Change the encoding to 'bytes'</p> <pre><code>train_data = np.load('../musicnet.npz', encoding='bytes') </code></pre> <p>Then everything works fine.</p>
<p>You first said things crashed, now you say it freezes when trying to access a specific array. <code>numpy</code> has the same syntax in 3.5 compared to 2.7. You shouldn't have to rewrite anything.</p> <p><code>np.load</code> does have a couple of parameters that deal with differences between Py2 and Py3. But I'm not sure these are an issue for you.</p> <pre><code>fix_imports : bool, optional Only useful when loading Python 2 generated pickled files on Python 3, which includes npy/npz files containing object arrays. If `fix_imports` is True, pickle will try to map the old Python 2 names to the new names used in Python 3. encoding : str, optional What encoding to use when reading Python 2 strings. Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays. Values other than 'latin1', 'ASCII', and 'bytes' are not allowed, as they can corrupt numerical data. Default: 'ASCII' </code></pre> <p>Try</p> <pre><code>print(list(train_data.keys())) </code></pre> <p>This should show the array names that were saved to the <code>zip</code> archive. Do they match the names in the Py2 load? Do they include the '2494' name?</p> <p>A couple of things are unusual about:</p> <pre><code>X,Y = train_data['2494'] </code></pre> <p>Naming an array in the zip archive by a string number, and unpacking the load into two variables.</p> <p>Do you know anything about how this was <code>savez</code>? What was saved?</p> <p>Another question - are you loading this file from the same machine that Py2 worked on? Or has the file been transferred from another machine, and possibly corrupted?</p> <p>As those parameters indicate, there are differences in the <code>pickle</code> code between Py2 and Py3. If the original save included object dtype arrays, or non-array objects, then they will be <code>pickled</code> and there might be incompatibilities in the pickle versions.</p>
python-3.x|python-2.7|numpy
1
2,364
56,384,938
Pandas.Series.dtype.kind is None for pd.interval
<p>Test code:</p> <pre class="lang-py prettyprint-override"><code>s = pd.Series(pd.array([pd.Interval(0,1.2), pd.Interval(5,123)])) s.dtype s.dtype.kind is None &gt;&gt;&gt; interval[float64] &gt;&gt;&gt; True </code></pre> <p>Is it some bug or made intentionally? If latter - for what reason?</p>
<p>The reason this is appearing as <code>None</code> is simply because the implementation of <code>IntervalDtype</code> <a href="https://github.com/pandas-dev/pandas/blob/7f318658b92155678b31780722277d1f8c8df569/pandas/core/dtypes/dtypes.py#L910" rel="nofollow noreferrer">explicitly sets <code>kind = None</code></a>. This should probably be updated to <code>'O'</code>, though some care is needed here as it will result in unintended side effects, e.g. this would cause <code>is_string_dtype</code> to return <code>True</code> (see <a href="https://stackoverflow.com/a/56401142/10711194">here</a>).</p>
python|pandas|intervals|series
1
2,365
55,816,447
cumulative return of savings plan with deposits in python pandas
<p>I am trying to calculate a monthly savings plan using financial time series with python pandas. </p> <p>I need to calculate the cumulative return given the percent changes in the time series BUT also take into account monthly deposits.</p> <p>The standard way to calculate the cumulative return from a dataframe is:</p> <pre><code>df['cum_ret'] = (1 + df.monthly_rets).cumprod() - 1 </code></pre> <p>But now, I have monthly deposits, say, each month 100$ are saved and added to the portfolio.</p> <p>How do I calculate the final value of the portfolio?</p> <p>I think the answer is to loop over the dataframe and summ the cumulative return from i to n where i increases in the loop..</p> <p>But how do I do that?</p> <p>Any help is appreciated!</p> <p>Many thanks in advance!</p>
<p>Difficult to answer without some more of the inputs and outputs. But have you tried:</p> <pre><code>df['cum_ret'].sum() </code></pre>
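<p>For illustration, a minimal sketch of the deposit-plus-return accumulation the question describes (the return values and the fixed monthly deposit below are made-up placeholders): each month the deposit is added and the whole balance then grows by that month's return, which is equivalent to letting every deposit compound from its own month to the end.</p>
<pre><code>import pandas as pd

monthly_rets = pd.Series([0.01, -0.02, 0.03, 0.015])  # hypothetical monthly returns
deposit = 100.0                                        # hypothetical monthly deposit

# explicit loop: add the deposit, then apply the month's return
value = 0.0
for r in monthly_rets:
    value = (value + deposit) * (1 + r)
print(value)

# equivalent vectorised form: each deposit compounds from its month to the end
growth = (1 + monthly_rets)[::-1].cumprod()[::-1]
print((deposit * growth).sum())
</code></pre>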
python|pandas|time-series|finance|cumsum
0
2,366
55,856,340
How does Python convert date value from excel
<p>I am reading a csv file with a CDATE column. The structure of the column is:</p> <pre><code>|CDATE | |08/28/2018| |08/28/2018| |08/29/2018| |08/30/2018| |09/02/2018| |09/04/2018| ... |04/10/2019| </code></pre> <p>As you can see there is duplicate date as well as missing dates in this column, and I would like to find the missing dates and add them to my dataframe.</p> <p>My code is:</p> <pre><code>import matplotlib.pyplot as plt warnings.filterwarnings("ignore") plt.style.use('fivethirtyeight') import pandas as pd df = pd.read_csv("XXX.csv") dateCol = df['CDATE'].values.tolist() dates = pd.to_datetime(dateCol, format='%m/%d/%Y') startDate = dates.min() endDate = dates.max() df = df.sort_values('CDATE') df_plastic = df['PLASTIC'].unique() dateRange = pd.date_range(startDate, endDate) df_date = df['CDATE'].unique() for cursorDate in dateRange: if (cursorDate in df_date) is False: print('Data is missing date {} from range {}'.format(cursorDate, df_date)) </code></pre> <p>But the output is:</p> <pre><code>Data is missing date 2019-02-21 00:00:00 from ['01/01/2019' '01/02/2019' '01/03/2019' '01/04/2019' '01/05/2019' '01/07/2019' '01/08/2019' '01/09/2019' '01/10/2019' '01/11/2019' '01/12/2019' '01/14/2019' '01/15/2019' '01/16/2019' '01/17/2019' '01/18/2019' '01/19/2019' '01/21/2019' '01/22/2019' '01/23/2019' '01/24/2019' '01/25/2019' '01/26/2019' '01/28/2019' '01/29/2019' '01/30/2019' '01/31/2019' '02/01/2019' '02/02/2019' '02/04/2019' '02/05/2019' '02/06/2019' '02/07/2019' '02/08/2019' '02/09/2019' '02/11/2019' '02/12/2019' '02/13/2019' '02/14/2019' '02/15/2019' '02/16/2019' '02/19/2019' '02/20/2019' '02/21/2019' '02/22/2019' '02/23/2019' '02/25/2019' '02/26/2019' '02/27/2019' '02/28/2019' '03/01/2019' '03/02/2019' '03/03/2019' '03/04/2019' '03/05/2019' '03/06/2019' '03/07/2019' '03/08/2019' '03/09/2019' '03/11/2019' '03/12/2019' '03/13/2019' '03/14/2019' '03/15/2019' '03/16/2019' '03/18/2019' '03/19/2019' '03/20/2019' '03/21/2019' '03/22/2019' '03/23/2019' '03/25/2019' '03/26/2019' '03/27/2019' '03/28/2019' '03/29/2019' '03/30/2019' '04/01/2019' '04/02/2019' '04/03/2019' '04/04/2019' '04/05/2019' '04/06/2019' '04/08/2019' '04/09/2019' '04/10/2019' '05/29/2018' '05/30/2018' '05/31/2018' '06/01/2018' '06/02/2018' '06/04/2018' '06/05/2018' '06/06/2018' '06/07/2018' '06/08/2018' '06/09/2018' '06/11/2018' '06/12/2018' '06/13/2018' '06/14/2018' '06/15/2018' '06/16/2018' '06/18/2018' '06/19/2018' '06/20/2018' '06/21/2018' '06/22/2018' '06/23/2018' '06/25/2018' '06/26/2018' '06/27/2018' '06/28/2018' '06/29/2018' '06/30/2018' '07/03/2018' '07/04/2018' '07/05/2018' '07/06/2018' '07/07/2018' '07/09/2018' '07/10/2018' '07/11/2018' '07/12/2018' '07/13/2018' '07/14/2018' '07/16/2018' '07/17/2018' '07/18/2018' '07/19/2018' '07/20/2018' '07/21/2018' '07/23/2018' '07/24/2018' '07/25/2018' '07/26/2018' '07/27/2018' '07/28/2018' '07/30/2018' '07/31/2018' '08/01/2018' '08/02/2018' '08/03/2018' '08/04/2018' '08/07/2018' '08/08/2018' '08/09/2018' '08/10/2018' '08/11/2018' '08/13/2018' '08/14/2018' '08/15/2018' '08/16/2018' '08/17/2018' '08/18/2018' '08/20/2018' '08/21/2018' '08/22/2018' '08/23/2018' '08/24/2018' '08/25/2018' '08/27/2018' '08/28/2018' '08/29/2018' '08/30/2018' '08/31/2018' '09/01/2018' '09/04/2018' '09/05/2018' '09/06/2018' '09/07/2018' '09/08/2018' '09/10/2018' '09/11/2018' '09/12/2018' '09/13/2018' '09/14/2018' '09/15/2018' '09/17/2018' '09/18/2018' '09/19/2018' '09/20/2018' '09/21/2018' '09/22/2018' '09/24/2018' '09/25/2018' '09/26/2018' '09/27/2018' '09/28/2018' '09/29/2018' '10/01/2018' 
'10/02/2018' '10/03/2018' '10/04/2018' '10/05/2018' '10/06/2018' '10/09/2018' '10/10/2018' '10/11/2018' '10/12/2018' '10/13/2018' '10/15/2018' '10/16/2018' '10/17/2018' '10/18/2018' '10/19/2018' '10/20/2018' '10/22/2018' '10/23/2018' '10/24/2018' '10/25/2018' '10/26/2018' '10/29/2018' '10/30/2018' '10/31/2018' '11/01/2018' '11/02/2018' '11/03/2018' '11/05/2018' '11/06/2018' '11/07/2018' '11/08/2018' '11/09/2018' '11/10/2018' '11/13/2018' '11/14/2018' '11/15/2018' '11/16/2018' '11/18/2018' '11/19/2018' '11/20/2018' '11/21/2018' '11/22/2018' '11/23/2018' '11/24/2018' '11/26/2018' '11/27/2018' '11/28/2018' '11/29/2018' '11/30/2018' '12/01/2018' '12/03/2018' '12/04/2018' '12/05/2018' '12/06/2018' '12/07/2018' '12/08/2018' '12/09/2018' '12/10/2018' '12/11/2018' '12/12/2018' '12/13/2018' '12/14/2018' '12/15/2018' '12/17/2018' '12/18/2018' '12/19/2018' '12/20/2018' '12/21/2018' '12/22/2018' '12/24/2018' '12/25/2018' '12/27/2018' '12/28/2018' '12/29/2018' '12/31/2018'] </code></pre> <p>Somehow the data type of cursorDate is changed to Timestamp, making the value comparison not work.</p> <p>How is it converting the datetime formats?</p>
<p>Building on my comment above. Change the last line before your loop to this:</p> <pre><code>df_date = df['CDATE'].apply(pd.to_datetime).unique() </code></pre>
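<p>For illustration, a small sketch of how the loop's check then behaves, reusing the variable names from the question: with <code>df_date</code> holding datetime values, <code>cursorDate in df_date</code> compares like with like, so the missing dates can be collected directly.</p>
<pre><code>df_date = df['CDATE'].apply(pd.to_datetime).unique()
missing_dates = [d for d in dateRange if d not in df_date]
print(missing_dates)
</code></pre>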
python|pandas|numpy
0
2,367
64,663,456
Generate unique key from multiple dataframes based on name
<p>I have two data frames. As you can see below, <code>pd.concat</code> joins them mechanically, but the result is wrong, because a carid must be unique and must not be assigned to two different car names. How can I solve this problem? A carid can appear several times within one data frame, but it must stay unique across both data frames. So <code>Carid = 1 = Mercedes-Benz</code> across all data frames and <strong>not</strong> <code>Carid = 1 = Mercedes-Benz &amp; Citroen</code>.</p> <pre><code>import pandas as pd d = {'Carid ': [1, 2, 3, 1], 'Carname': ['Mercedes-Benz', 'Audi', 'BMW', 'Mercedes-Benz'], 'model': ['S-Klasse AMG 63s', 'S6', 'X6 M-Power', 'Maybach']} df = pd.DataFrame(data=d) display(df.head()) </code></pre> <p><a href="https://i.stack.imgur.com/1YSTU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1YSTU.png" alt="enter image description here" /></a></p> <pre><code>d2 = {'Carid ': [4, 1, 5], 'Carname': ['VW', 'Citroen', 'Opel'], 'model': ['GTI', 'S', 'Corsa']} df2 = pd.DataFrame(data=d2) display(df2.head()) </code></pre> <p><a href="https://i.stack.imgur.com/ilyT2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ilyT2.png" alt="enter image description here" /></a></p> <pre><code>dfs = [] dfs.append(df) dfs.append(df2) pd.concat(dfs) </code></pre> <p><a href="https://i.stack.imgur.com/vXa0d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vXa0d.png" alt="enter image description here" /></a></p> <p>What I want:</p> <p><a href="https://i.stack.imgur.com/EQPmr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EQPmr.png" alt="enter image description here" /></a></p>
<h2>Method 1 Pandas Approach</h2> <p>First method if you don't mind changing your keys to floats is to increment using <code>cumcount</code></p> <pre><code>df3 = pd.concat([df,df2]) s = df3.groupby('Carname',sort=False)['Carid'].first().to_frame() s['Carid'] = s['Carid'] + s.groupby('Carid').cumcount() / 10 new_ids = s.to_dict(orient='dict')['Carid'] df3['Carid'] = df3['Carname'].map(new_ids) Carid Carname model 0 1.0 Mercedes-Benz S-Klasse AMG 63s 1 2.0 Audi S6 2 3.0 BMW X6 M-Power 3 1.0 Mercedes-Benz Maybach 0 4.0 VW GTI 1 1.1 Citroen S 2 5.0 Opel Corsa </code></pre> <h1>Method 2 Functional Approach using Dictionaries.</h1> <h2>Assumptions.</h2> <p>The logic of the function is predicated upon having a a unique <code>carid</code> per dataframe.</p> <p>Your IDs are in a sequential order so using the <code>max</code> <code>carid</code> to generate the numbers makes the most sense. This may generate non sequential numbers if you have a list of Carids <code>[1,2,3,200]</code></p> <p>this would generate a new unique <code>Carid</code> of <code>201</code> for Citroen given that an ID of <code>200</code> already exists and is owned by a car make.</p> <h2>Function</h2> <pre><code>import pandas as pd import numpy as np from collections import ChainMap def generate_new_keys(*args,key='Carid',name='Carname'): &quot;&quot;&quot; Takes in a number of dataframes and returns any duplicates with a new unique id. groupby columns fixed to CarID and CarName. &quot;&quot;&quot; # adds dictionaries into a single list. dicts_ = [arg.groupby(key)[name].first().to_dict() for arg in args] #merges dicts on unique key, this will exclude duplicates. merged_dicts = dict(ChainMap(*dicts_)) #get the duplicate and pass the name into a list. delta = [v for each_dict in dicts_ for k,v in each_dict.items() if v not in merged_dicts.values()] # get the max sequence key start_key = max(merged_dicts.keys()) + 1 # create a new sequence sequence = range(start_key, start_key + len(delta) + 1) # return a dictionary. return {name : number for name,number in zip(delta,sequence)} </code></pre> <h2>In Action</h2> <pre><code>new_keys = generate_new_keys(df,df2) print(new_keys) {'Citroen': 6} df3 = pd.concat([df,df2]) df3['Carid'] = np.where(df3['Carname'].isin(new_keys.keys()), df3['Carname'].map(new_keys), df3['Carid']) print(df3) Carid Carname model 0 1.0 Mercedes-Benz S-Klasse AMG 63s 1 2.0 Audi S6 2 3.0 BMW X6 M-Power 0 4.0 VW GTI 1 6.0 Citroen S 2 5.0 Opel Corsa </code></pre> <h2>Testing additional dataframe.</h2> <pre><code>new_df = pd.DataFrame({'Carid' : [1,2,3], 'Carname' : ['Mercedes-Benz', 'Toyota','BMW'] }) new_keys = generate_new_keys(df,df2,new_df) {'Citroen': 6, 'Toyota': 7} df3 = pd.concat([df1,df2,new_df]) df3['Carid'] = np.where(df3['Carname'].isin(new_keys.keys()), df3['Carname'].map(new_keys), df3['Carid']) print(df3) Carid Carname model 0 1.0 Mercedes-Benz S-Klasse AMG 63s 1 2.0 Audi S6 2 3.0 BMW X6 M-Power 0 4.0 VW GTI 1 6.0 Citroen S #&lt; new id 2 5.0 Opel Corsa 0 1.0 Mercedes-Benz NaN 1 7.0 Toyota NaN #&lt; new id 2 3.0 BMW NaN </code></pre>
python|pandas|dataframe
1
2,368
39,683,153
Is there a good way to find the rank of a matrix in a field of characteristic p>0?
<p>I need an efficient algorithm or a known way to determine <a href="https://en.wikipedia.org/wiki/Rank_(linear_algebra)" rel="nofollow">the mathematical rank</a> of a matrix A with coefficients in a field of positive characteristic. </p> <p>For example, in the finite field of 5 elements I have the following matrix:</p> <pre><code>import numpy A=[[2,3],[3,2]] print numpy.linalg.matrix_rank(A) </code></pre> <p>This method gives me the result of 2, but in characteristic 5 this matrix has rank 1 since <code>[2,3]+[3,2]=[0,0]</code>.</p>
<p>Numpy doesn't have built-in support for finite fields. The matrix <code>A</code> in your code is treated as a matrix of real numbers, and hence has rank 2. </p> <p>If you really need to support finite fields with Numpy, you'll have to define your own data type along with the arithmetic operations yourself, as shown <a href="https://stackoverflow.com/questions/17044064/how-to-calculate-numpy-arrays-on-galois-field">here</a>. There are of course the concerns about proper error handling (like divide by zero).</p> <p>Even then, many common routines will have to be rewritten to support your field data types. For example, from the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.matrix_rank.html" rel="nofollow noreferrer">numpy.linalg.matrix_rank</a> documentation, the routine uses Singular Value Decomposition (SVD), which is not well defined for finite fields, so you'll have to code the rank finding algorithm yourself.</p> <p>As for the algorithm itself, you could try implementing plain old Gaussian Elimination along <a href="https://stackoverflow.com/questions/16254654/test-if-matrix-is-invertible-over-finite-field">these lines</a>, but this can be a pain in the neck and really slow, so you will likely be better off with other tools/packages like <strong>Sage</strong>.</p>
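<p>For illustration, a minimal sketch of the Gaussian-elimination approach over GF(p), assuming <code>p</code> is prime so every nonzero pivot has a modular inverse (computed here with <code>pow(x, p - 2, p)</code> by Fermat's little theorem):</p>
<pre><code>def rank_mod_p(matrix, p):
    # work on a copy with every entry reduced mod p
    m = [[x % p for x in row] for row in matrix]
    rank = 0
    rows, cols = len(m), len(m[0])
    for col in range(cols):
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(rank, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)          # modular inverse of the pivot
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(rows):
            if r != rank and m[r][col]:
                factor = m[r][col]
                m[r] = [(a - factor * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

print(rank_mod_p([[2, 3], [3, 2]], 5))  # 1, since [2,3] + [3,2] == [0,0] mod 5
</code></pre>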
python|numpy|matrix
2
2,369
39,755,742
pandas histogram with by: possible to make axes uniform?
<p>I am using the option to generate a separate histogram of a value for each group in a data frame like so (example code from documentation)</p> <pre><code>data = pd.Series(np.random.randn(1000)) data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4)) </code></pre> <p>This is great, but what I am not seeing is a way to set and standardize the axes. Is this possible?</p> <p>To be specific, I would like to specify the x and y axes of the plots so that the y axis in particular has the same range for all plots. Otherwise it can be hard to compare distributions to one another.</p>
<p>you can pass <code>kwds</code> to hist and it will pass them along to appropriate sub processes. The relevant ones here are <code>sharex</code> and <code>sharey</code></p> <pre><code>data = pd.Series(np.random.randn(1000)) data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4), sharex=True, sharey=True) </code></pre> <p><a href="https://i.stack.imgur.com/q86d4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q86d4.png" alt="enter image description here"></a></p>
python|pandas
2
2,370
39,785,661
ValueError: Filter must not be larger than the input
<p>I am pretty new to machine learning so I am playing around with examples and such. The image size specified in the code is (28,28) But for some reason I keep getting the same ValueError I cant figure out why this is happening.</p> <p>Here's the code:</p> <pre><code>import pandas as pd import numpy as np np.random.seed(1337) # for reproducibility from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.utils import np_utils # input image dimensions img_rows, img_cols = 28, 28 batch_size = 128 # Number of images used in each optimization step nb_classes = 10 # One class per digit nb_epoch = 35 # Number of times the whole data is used to learn # Read the train and test datasets train = pd.read_csv("../input/train.csv").values test = pd.read_csv("../input/test.csv").values # Reshape the data to be used by a Theano CNN. Shape is # (nb_of_samples, nb_of_color_channels, img_width, img_heigh) X_train = train[:, 1:].reshape(train.shape[0], 1, img_rows, img_cols) X_test = test.reshape(test.shape[0], 1, img_rows, img_cols) y_train = train[:, 0] # First data is label (already removed from X_train) # Make the value floats in [0;1] instead of int in [0;255] X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 # convert class vectors to binary class matrices (ie one-hot vectors) Y_train = np_utils.to_categorical(y_train, nb_classes) #Display the shapes to check if everything's ok print('X_train shape:', X_train.shape) print('Y_train shape:', Y_train.shape) print('X_test shape:', X_test.shape) model = Sequential() # For an explanation on conv layers see http://cs231n.github.io/convolutional-networks/#conv # By default the stride/subsample is 1 # border_mode "valid" means no zero-padding. # If you want zero-padding add a ZeroPadding layer or, if stride is 1 use border_mode="same" model.add(Convolution2D(12, 5, 5, border_mode='valid',input_shape=(1,img_rows, img_cols))) model.add(Activation('relu')) # For an explanation on pooling layers see http://cs231n.github.io/convolutional-networks/#pool model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.15)) model.add(Convolution2D(24, 5, 5)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.15)) # Flatten the 3D output to 1D tensor for a fully connected layer to accept the input model.add(Flatten()) model.add(Dense(180)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(100)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(nb_classes)) #Last layer with one output per class model.add(Activation('softmax')) #We want a score simlar to a probability for each class # The function to optimize is the cross entropy between the true label and the output (softmax) of the model # We will use adadelta to do the gradient descent see http://cs231n.github.io/neural-networks-3/#ada model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"]) # Make the model learn model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1) # Predict the label for X_test yPred = model.predict_classes(X_test) # Save prediction in file for Kaggle submission np.savetxt('mnist-pred.csv', np.c_[range(1,len(yPred)+1),yPred], delimiter=',', header = 'ImageId,Label', comments = '', fmt='%d') </code></pre>
<p>So the problem is with the convolution sizes used. Convolution operations usually <strong>reduce</strong> the dimensions of the image. Similarly - each pooling operation reduces the size. You have very small images yet applied a model architecture that was designed for bigger ones, thus at some point, after one of the convolutions/poolings, the output image is actually smaller than the following filter size, and this is an ill-defined operation.</p> <p>To temporarily fix the problem - remove the second convolution and maxpooling layers, since these operations (with the parameters provided) cannot be performed on such small data. In general you should first understand how convolution works, and not apply someone else's model, since the parameters are crucial for good performance - if you apply transformations which reduce the resolution too much, you will be unable to learn anything. Thus once you have some intuition about how convolution works you can go back and try different architectures, but there is no single "magical" equation to figure out the architecture, thus I cannot provide you with parameters that will "just work" - start by removing this additional convolution and pooling, and then go back and try other possibilities once you have a better understanding of your data and model.</p>
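<p>For illustration, a sketch of that temporary fix applied to the model from the question (everything else is left unchanged; only the first convolution/pooling block is kept):</p>
<pre><code>model = Sequential()
model.add(Convolution2D(12, 5, 5, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.15))
# second Convolution2D / MaxPooling2D block removed here
model.add(Flatten())
model.add(Dense(180))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(100))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
</code></pre>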
python|pandas|machine-learning|keras
1
2,371
44,150,472
Tensorflow neural network has high error even in really easy dataset
<p>I'm trying to implement a 1 hidden layer NN for a regression problem. The loss function improves for a few iterations than it gets stuck on a really high error even for a very easy data. Could someone help me find the bug? Here is my code:</p> <pre><code>import tensorflow as tf import scipy.io as sio import numpy as np reuse_weights = 1 n_nodes_hl1 = 10 batch_size = 200 hm_epochs = 20 # load input from matlab input_training = sio.loadmat('xMat.mat') input_training = input_training['xMat'] input_test = sio.loadmat('xMat.mat') input_test = input_test['xMat'] # find number of measurements and input length n_measurements = input_training.shape[0] input_length = input_training.shape[1] # current input data_y = input_training[:, input_length - 1].astype(float) data_x = input_training[:, 0 : input_length - 1].astype(float) test_data_y = input_test[:, input_length - 1].astype(float) test_data_x = input_test[:, 0 : input_length - 1].astype(float) x = tf.placeholder('float32',[None, input_length - 1]) y = tf.placeholder('float32') # place holder for Dropout algorithm drop probability keep_prob = tf.placeholder('float32') def next_batch(data): """ Return a total of `batch_size` samples from the array `data`. """ if len(data.shape) == 2: idx = np.arange(0, len(data[:,0])) # get all possible indexes else: idx = np.arange(0, len(data)) # get all possible indexes np.random.shuffle(idx) # shuffle indexes idx = idx[0:batch_size] # use only `batch_size` random indexes if len(data.shape) == 2: data_shuffle = [data[i,:] for i in idx] # get list of `batch_size` random samples else: data_shuffle = [data[i] for i in idx] # get list of `batch_size` random samples data_shuffle = np.asarray(data_shuffle) # get back numpy array return data_shuffle def neural_network_model(data, weights, biases, keep_prob): layer1 = tf.add(tf.matmul(data, weights['h1']), biases['b1']) layer1 = tf.nn.sigmoid(layer1) output = tf.add(tf.matmul(layer1, weights['out']), biases['out']) return output if reuse_weights: weights = { 'h1': tf.Variable(sio.loadmat('weights_h1.mat')['weights_h1'], name="weights_h1"), 'out': tf.Variable(sio.loadmat('weights_out.mat')['weights_out'], name="weights_out") } biases = { 'b1': tf.Variable(sio.loadmat('biases_b1.mat')['biases_b1'], name="biases_b1"), 'out': tf.Variable(sio.loadmat('biases_out.mat')['biases_out'], name="biases_out") } else: # initialize weights weights = { 'h1': tf.Variable(tf.random_normal([input_length - 1, n_nodes_hl1]), name="weights_h1"), 'out': tf.Variable(tf.random_normal([n_nodes_hl1, 1]), name="weights_out") } biases = { 'b1': tf.Variable(tf.random_normal([n_nodes_hl1]), name="biases_b1"), 'out': tf.Variable(tf.random_normal([1]), name="biases_out") } def train_neural_network(x): prediction = neural_network_model(x, weights, biases, keep_prob)[:,0] cost = tf.reduce_mean(tf.abs(prediction - y)) optimizer = tf.train.AdamOptimizer() opt = optimizer.minimize(cost) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(weights['h1']) for epoch in range(hm_epochs): #training epoch_loss = 0 for _ in range(int(n_measurements/batch_size)): _, c, p = sess.run([opt, cost, prediction], feed_dict = {x:next_batch(data_x),\ y:next_batch(data_y) , keep_prob : 1.0}) epoch_loss += c print('Epoch', epoch, 'completed out of', hm_epochs, 'Average loss:', epoch_loss/int(n_measurements/batch_size)) # prediction accuracy = tf.reduce_mean(tf.abs(prediction - y)) # Feed 1.0 for keep prob during testing print("Training data accuracy:", accuracy.eval({x: data_x, y: data_y, keep_prob 
: 1.0})) print("Training data predictions:", prediction.eval({x: data_x[0:5,:], keep_prob : 1.0})) print("Training data:",data_y[0:5]) #print("Test data accuracy:", accuracy.eval({x: test_data_x, y: test_data_y, keep_prob : 1.0})) # save numpy arrays sio.savemat('weights_h1.mat', {'weights_h1': weights['h1'].eval()}) sio.savemat('biases_b1.mat', {'biases_b1': biases['b1'].eval()}) sio.savemat('weights_out.mat', {'weights_out': weights['out'].eval()}) sio.savemat('biases_out.mat', {'biases_out': biases['out'].eval()}) train_neural_network(x) </code></pre>
<p>Figured it out, the problem was with the data shuffling. The input and response were shuffled differently (two times random shuffle for each epoch) and thus the input data in each epoch did not correspond to the response data. </p>
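<p>For illustration, a minimal sketch of one way to keep inputs and targets aligned (a hypothetical replacement for the two separate <code>next_batch</code> calls in the question): draw a single set of random indices per batch and apply it to both arrays.</p>
<pre><code>import numpy as np

def next_batch_pair(data_x, data_y, batch_size):
    # one shared set of indices keeps each input row matched with its target
    idx = np.random.choice(len(data_y), batch_size, replace=False)
    return data_x[idx, :], data_y[idx]

batch_x, batch_y = next_batch_pair(data_x, data_y, batch_size)
# feed_dict = {x: batch_x, y: batch_y, keep_prob: 1.0}
</code></pre>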
tensorflow|neural-network
0
2,372
69,394,576
ImportError: cannot import name 'resnet' from 'tensorflow.python.keras.applications'
<p>I've been trying to implement <a href="https://colab.research.google.com/drive/11ko0DBnI1QLxVoJQR8gt9b4JDcvbCrtU#scrollTo=PhAuO2-1ZBnv" rel="nofollow noreferrer">this colab code</a> but encountered this error in training part of the code.</p> <p>(python version <strong>Python 3.7.12</strong> and tensorflow version <strong>1.14</strong>.)</p> <pre><code>!python3 /content/gun_detection/models/research/object_detection/model_main.py \ --pipeline_config_path={config_path + pipeline_file} \ --model_dir={model_dir} \ --alsologtostderr \ --num_train_steps={num_steps} \ --num_eval_steps={num_eval_steps} </code></pre> <p>I get this ImportError message:</p> <pre><code>/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([(&quot;qint8&quot;, np.int8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([(&quot;quint8&quot;, np.uint8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([(&quot;qint16&quot;, np.int16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([(&quot;quint16&quot;, np.uint16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([(&quot;qint32&quot;, np.int32, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([(&quot;resource&quot;, np.ubyte, 1)]) /usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([(&quot;qint8&quot;, np.int8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([(&quot;quint8&quot;, np.uint8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
_np_qint16 = np.dtype([(&quot;qint16&quot;, np.int16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([(&quot;quint16&quot;, np.uint16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([(&quot;qint32&quot;, np.int32, 1)]) /usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([(&quot;resource&quot;, np.ubyte, 1)]) Traceback (most recent call last): File &quot;/content/gun_detection/models/research/object_detection/model_main.py&quot;, line 25, in &lt;module&gt; from object_detection import model_lib File &quot;/content/gun_detection/models/research/object_detection/model_lib.py&quot;, line 30, in &lt;module&gt; from object_detection import exporter as exporter_lib File &quot;/content/gun_detection/models/research/object_detection/exporter.py&quot;, line 24, in &lt;module&gt; from object_detection.builders import model_builder File &quot;/content/gun_detection/models/research/object_detection/builders/model_builder.py&quot;, line 37, in &lt;module&gt; from object_detection.meta_architectures import deepmac_meta_arch File &quot;/content/gun_detection/models/research/object_detection/meta_architectures/deepmac_meta_arch.py&quot;, line 19, in &lt;module&gt; from object_detection.models.keras_models import resnet_v1 File &quot;/content/gun_detection/models/research/object_detection/models/keras_models/resnet_v1.py&quot;, line 22, in &lt;module&gt; from tensorflow.python.keras.applications import resnet ImportError: cannot import name 'resnet' from 'tensorflow.python.keras.applications' (/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/applications/__init__.py) </code></pre>
<p>You can check the <a href="https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/keras/applications/resnet50.py" rel="nofollow noreferrer">TensorFlow 1.14 source</a> here.</p> <p><strong>TensorFlow 1.14</strong> does not have a <code>resnet</code> module in <code>keras.applications</code>; it only ships <code>resnet50</code>. The right import is:</p> <pre><code>from tensorflow.keras.applications import resnet50 </code></pre>
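<p>Note that the failing import is inside the Object Detection API itself (<code>object_detection/models/keras_models/resnet_v1.py</code>), not in your notebook, so changing your own imports is not enough. Two options, stated as suggestions rather than tested fixes: upgrade to TensorFlow 2.x, which the current Object Detection API code (the <code>deepmac_meta_arch</code> module in your traceback) is written against, or check out an older commit of the <code>models</code> repo that still targets TF 1.14. A one-line patch of the library file may also work if it only needs ResNet-50, but that is an assumption:</p> <pre><code># hypothetical patch in models/research/object_detection/models/keras_models/resnet_v1.py
from tensorflow.python.keras.applications import resnet50 as resnet
</code></pre>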
python|python-3.x|tensorflow|image-processing|object-detection
0
2,373
69,619,523
How to return the correlation value from pandas dataframe
<p>I am working on a method for calculating the correlation between two columns of data from a dataset. The dataset is constructed of 4 columns: A1, A2, A3, and Class. My goal is to remove A3 if the correlation between A1 &amp; A3 is greater than 0.6 or if the correlation between A1 &amp; A3 is less than 0.6.</p> <p>A sample of the data set is given below:</p> <pre><code>A1,A2,A3,Class 2,0.4631338,1.5,3 8,0.7460648,3.0,3 6,0.264391038,2.5,2 5,0.4406713,2.3,1 2,0.410438159,1.5,3 2,0.302901816,1.5,2 6,0.275869396,2.5,3 8,0.084782428,3.0,3 </code></pre> <p>The Python program that I am using for this project is written like so:</p> <pre><code>from numpy.core.defchararray import count import pandas as pd import numpy as np import numpy as np def main(): s = pd.read_csv('A1-dm.csv') print(calculate_correlation(s)) def calculate_correlation(s): # if correlation &gt; 0.6 or correlation &lt; 0.6 remove A3 s = s[['A1','A3']] return s.corr()[1,0] main() </code></pre> <p>When I run my code I get the following error:</p> <pre><code>File &quot;C:\Users\physe\AppData\Roaming\Python\Python36\site-packages\pandas\core\indexes\base.py&quot;, line 2897, in get_loc raise KeyError(key) from err KeyError: (1, 0) </code></pre> <p>I've reviewed the documentation <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.corr.html" rel="nofollow noreferrer">here</a>. The issue that I'm facing is selecting the (1, 0) element from the correlation matrix that is returned by .corr(). Any help with this would be greatly appreciated.</p>
<p>Here is my example. Note that <code>df.corr()['A3']</code> is a whole column of the correlation matrix, so it has to be reduced to the single A1/A3 value before it can be used in an <code>if</code>:</p> <pre><code>cor = df[['A1', 'A3']].corr()
if cor.loc['A1', 'A3'] &gt; 0.6:
    df.drop(columns='A3', inplace=True)
</code></pre>
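<p>If what you specifically want is the off-diagonal element of the matrix returned by <code>.corr()</code> (the source of the original <code>KeyError: (1, 0)</code>), use positional indexing instead of <code>corr()[1,0]</code>:</p> <pre><code>corr_value = s[['A1', 'A3']].corr().iloc[1, 0]
</code></pre>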
python|pandas
1
2,374
69,503,993
How to read in a semicolon delimited file in Pandas and normalize
<p>I am trying to read in a wine quality dataset and normalize the data before I move on. I've read in the csv as semicolon delimited, but when I try to drop the target variable, quality, I'm getting an error that says that attribute isn't found in axis.</p> <pre><code>import pandas as pd df = pd.read_csv('gdrive/My Drive/whitewine.csv', delimiter=&quot;\s&quot;) x = df.drop(['quality'], axis=0).values </code></pre> <p><strong>Error</strong>:</p> <blockquote> <p>KeyError: &quot;['quality'] not found in axis&quot;</p> </blockquote>
<p>Two fixes are needed: read the file with a semicolon separator (with <code>delimiter=&quot;\s&quot;</code> the whole semicolon-delimited row ends up in a single column, so 'quality' never exists as a column name), and drop along <code>axis=1</code>, since 'quality' is a column rather than a row label:</p> <pre><code>import pandas as pd

df = pd.read_csv('gdrive/My Drive/whitewine.csv', sep=';')
X = df.drop(['quality'], axis=1)
y = df['quality']
</code></pre>
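<p>For the normalization step you mention, one option (a sketch, assuming scikit-learn is available and min-max scaling is what you want) is:</p> <pre><code>from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)   # scale the features only, leave y alone
</code></pre>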
python|pandas|csv|keyerror
0
2,375
69,403,613
How to early-stop autoregressive model with a list of stop words?
<p>I am using GPT-Neo model from <code>transformers</code> to generate text. Because the prompt I use starts with <code>'{'</code>, so I would like to stop the sentence once the paring <code>'}'</code> is generated. I found that there is a <code>StoppingCriteria</code> method in the source code but without further instructions on how to use it. Does anyone have found a way to early-stop the model generation? Thanks!</p> <p>Here is what I've tried:</p> <pre class="lang-py prettyprint-override"><code>from transformers import StoppingCriteria, AutoModelForCausalLM, AutoTokenizer model_name = 'gpt2' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id, torch_dtype=dtype).eval() class KeywordsStoppingCriteria(StoppingCriteria): def __init__(self, keywords_ids:list): self.keywords = keywords_ids def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -&gt; bool: if input_ids in self.keywords: return True return False stop_words = ['}', ' }', '\n'] stop_ids = [tokenizer.encode(w) for w in stop_words] stop_ids.append(tokenizer.eos_token_id) stop_criteria = KeywordsStoppingCriteria(stop_ids) model.generate( text_inputs='some text:{', StoppingCriteria=stop_criteria ) </code></pre>
<p>I've been able to adapt your code to work. Additionally, make sure you're using a recent version of transformers, you may have to upgrade.</p> <pre><code>import torch from transformers import StoppingCriteria, AutoModelForCausalLM, AutoTokenizer, StoppingCriteriaList model_name = 'gpt2' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id).eval() class KeywordsStoppingCriteria(StoppingCriteria): def __init__(self, keywords_ids:list): self.keywords = keywords_ids def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -&gt; bool: if input_ids[0][-1] in self.keywords: return True return False stop_words = ['}', ' }', '\n'] stop_ids = [tokenizer.encode(w)[0] for w in stop_words] stop_criteria = KeywordsStoppingCriteria(stop_ids) inputs = tokenizer.encode('some text: {', add_special_tokens=False, return_tensors='pt') output = model.generate( inputs, do_sample=True, stopping_criteria=StoppingCriteriaList([stop_criteria]), ) print(tokenizer.decode(*output)) </code></pre>
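<p>One caveat with the snippet above: <code>tokenizer.encode(w)[0]</code> keeps only the first token of each stop word, and the criterion compares just the last generated id, so stop words that encode to more than one token are matched only approximately. A sketch of an exact multi-token check (inside <code>__call__</code>, assuming <code>self.keywords</code> holds full token-id lists instead):</p> <pre><code>for kw in self.keywords:
    if input_ids[0][-len(kw):].tolist() == kw:
        return True
return False
</code></pre>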
python|huggingface-transformers|autoregressive-models|gpt-2
2
2,376
69,555,244
Returning of the sum of units in the particular year
<p>I have a simple task however struggling with it in python.</p> <p>I have a df with &quot;Freq&quot; column (the sum at the beginning) every year some units will be removed from this, could you help me to build a for loop to return the amount for a particular year:</p> <pre><code>df = pd.DataFrame({'Delivery Year' : [1976,1977,1978,1979], &quot;Freq&quot; : [120,100,80,60], &quot;1976&quot; : [10,float('nan'),float('nan'),float('nan')], &quot;1977&quot; : [5,3,float('nan'),float('nan')], &quot;1978&quot; : [10,float('nan'),8,float('nan')], &quot;1979&quot; : [13,10,5,14] }) df </code></pre> <p>My attempt, however not working..</p> <pre><code># Remaining in use for i in df.columns[2:len(df.columns)]: df[i] = df[i-1] - df[i] </code></pre> <p>Desired output:</p> <pre><code> df = pd.DataFrame({'Delivery Year' : [1976,1977,1978,1979], &quot;Freq&quot; : [120,100,80,60], &quot;1976&quot; : [110,100,80,60], &quot;1977&quot; : [105,97,80,60], &quot;1978&quot; : [95,97,72,60], &quot;1979&quot; : [82,87,67,46] }) df </code></pre>
<p>You can calculate the cumulative sum along the columns axis then subtract this sum from the <code>Freq</code> column to get available amounts for each year</p> <pre><code>s = df.iloc[:, 2:].fillna(0).cumsum(1).rsub(df['Freq'], axis=0) df.assign(**s) </code></pre> <hr /> <pre><code> Delivery Year Freq 1976 1977 1978 1979 0 1976 120 110.0 105.0 95.0 82.0 1 1977 100 100.0 97.0 97.0 87.0 2 1978 80 80.0 80.0 72.0 67.0 3 1979 60 60.0 60.0 60.0 46.0 </code></pre>
python|pandas
1
2,377
69,582,900
Using Pytorch model trained on RTX2080 on RTX3060
<p>I try to run my PyTorch model (trained on a Nvidia RTX2080) on the newer Nvidia RTX3060 with CUDA support. It is possible to load the model and to execute it. If I run it on the CPU with the <code>--no_cuda</code> flag it runs smootly and gives back the correct predictions, but if I want to run it with CUDA, it only returns wrong predictions which make no sense. Does the different GPU-architecture of the cards affect the prediction?</p>
<p>OK, it turned out the problem was the different floating-point behaviour of the two architectures: Ampere cards such as the RTX 3060 use TF32 for matmuls by default, which the RTX 2080 does not. Setting <code>torch.backends.cuda.matmul.allow_tf32 = False</code> restores full FP32 matmuls and gives stable predictions for a model trained on the older architecture.</p>
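<p>A minimal sketch of where to put it (run before any forward pass; the cuDNN line is optional but covers convolutions too):</p> <pre><code>import torch

# make Ampere GPUs (RTX 30xx) compute matmuls/convolutions in full FP32,
# matching the behaviour the model saw on the RTX 2080
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
</code></pre>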
neural-network|pytorch|gpu|nvidia
1
2,378
69,361,178
My training and validation loss suddenly increased in power of 3
<p><a href="https://i.stack.imgur.com/CCIO8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CCIO8.png" alt="Training" /></a></p> <p><strong>train function</strong></p> <pre><code>def train(model, iterator, optimizer, criterion, clip): model.train() epoch_loss = 0 for i, batch in enumerate(iterator): optimizer.zero_grad() output = model(batch.text) loss = criterion(output, torch.unsqueeze(batch.labels, 1)) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) </code></pre> <p><strong>main_script</strong></p> <pre><code>def main( train_file, test_file, config_file, checkpoint_path, best_model_path ): device = 'cuda' if torch.cuda.is_available() else 'cpu' with open(config_file, 'r') as j: config = json.loads(j.read()) for k,v in config['model'].items(): v = float(v) if v &lt; 1.0: config['model'][k] = float(v) else: config['model'][k] = int(v) for k,v in config['training'].items(): v = float(v) if v &lt; 1.0: config['training'][k] = float(v) else: config['training'][k] = int(v) train_itr, val_itr, test_itr, vocab_size = data_pipeline( train_file, test_file, config['training']['max_vocab'], config['training']['min_freq'], config['training']['batch_size'], device ) model = CNNNLPModel( vocab_size, config['model']['emb_dim'], config['model']['hid_dim'], config['model']['model_layer'], config['model']['model_kernel_size'], config['model']['model_dropout'], device ) optimizer = optim.Adam(model.parameters()) criterion = nn.CrossEntropyLoss() num_epochs = config['training']['n_epoch'] clip = config['training']['clip'] is_best = False best_valid_loss = float('inf') model = model.to(device) for epoch in tqdm(range(num_epochs)): train_loss = train(model, train_itr, optimizer, criterion, clip) valid_loss = evaluate(model, val_itr, criterion) if (epoch + 1) % 2 == 0: print(&quot;training loss {}, validation_loss{}&quot;.format(train_loss,valid_loss)) </code></pre> <p>I was training a Convolution Neural Network for binary Text classification. Given a sentence, it detects its a hate speech or not. Training loss and validation loss was fine till 5 epoch after that suddenly the training loss and validation loss shot up suddenly from <strong>0.2 to 10,000</strong>.</p> <p>What could be the reason for such huge increase is loss suddenly?</p>
<p>The default learning rate of Adam is 0.001, which, depending on the task, might be too high.</p> <p>It looks like instead of converging, your neural network diverged (it left the previous ~0.2 loss minimum and fell into a different region).</p> <p>Lowering your learning rate at some point (after 50% or 70% of training) would probably fix the issue.</p> <p>Usually people divide the learning rate by 10 (0.0001 in your case) or halve it (0.0005 in your case). Try halving it first and see if the issue persists; in general you want to keep the learning rate as high as possible until divergence occurs, as is probably the case here.</p> <p>This is what <a href="https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html#torch.optim.lr_scheduler.StepLR" rel="nofollow noreferrer">schedulers</a> are for (gamma specifies the learning rate multiplier; you might want to set that to 0.5 first).</p> <p>One can think of the lower-learning-rate phase as fine-tuning an already found solution (placing the weights in a better region of the loss valley), and it might require some patience.</p>
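<p>A minimal sketch of wiring a scheduler into your training loop (the <code>step_size</code> of 5 epochs is just an example value):</p> <pre><code>from torch.optim.lr_scheduler import StepLR

optimizer = optim.Adam(model.parameters(), lr=1e-3)
# halve the learning rate every 5 epochs
scheduler = StepLR(optimizer, step_size=5, gamma=0.5)

for epoch in range(num_epochs):
    train_loss = train(model, train_itr, optimizer, criterion, clip)
    valid_loss = evaluate(model, val_itr, criterion)
    scheduler.step()
</code></pre>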
deep-learning|nlp|pytorch|conv-neural-network|text-classification
1
2,379
53,950,652
Dask categorize() won't work after using .loc
<p>I'm having a serious issue using dask (dask version: 1.00, pandas version: 0.23.3). I am trying to load a dask dataframe from a CSV file, filter the results into two separate dataframes, and perform operations on both. </p> <p>However, after the split the dataframes and try to set the category columns as 'known', they remain 'unknown'. Thus I cannot continue with my operations (which require category columns to be 'known'.)</p> <p><strong>NOTE:</strong> I have created a minimum example as suggested using pandas instead of read_csv().</p> <pre><code>import pandas as pd import dask.dataframe as dd # Specify dtypes b_dtypes = { 'symbol': 'category', 'price': 'float64', } i_dtypes = { 'symbol': 'category', 'price': 'object' } # Specify a function to quickly set dtypes def to_dtypes(df, dtypes): for column, dtype in dtypes.items(): if column in df.columns: df[column] = df.loc[:, column].astype(dtype) return df # Set up our test data data = [ ['B', 'IBN', '9.9800'], ['B', 'PAY', '21.5000'], ['I', 'PAY', 'seventeen'], ['I', 'SPY', 'ten'] ] # Create pandas dataframe pdf = pd.DataFrame(data, columns=['type', 'symbol', 'price'], dtype='object') # Convert into dask df = dd.from_pandas(pdf, npartitions=3) # ## At this point 'df' simulates what I get when I read the mixed-type CSV file via dask # # Split the dataframe by the 'type' column b_df = df.loc[df['type'] == 'B', :] i_df = df.loc[df['type'] == 'I', :] # Convert columns into our intended dtypes b_df = to_dtypes(b_df, b_dtypes) i_df = to_dtypes(i_df, i_dtypes) # Let's convert our 'symbol' column to known categories b_df = b_df.categorize(columns=['symbol']) i_df['symbol'] = i_df['symbol'].cat.as_known() # Is our symbol column known now? print(b_df['symbol'].cat.known, flush=True) print(i_df['symbol'].cat.known, flush=True) # ## print() returns 'False' for both, this makes me want to kill myself. ## (Please help...) # </code></pre> <p><strong>UPDATE:</strong> So it seems that if I shift the 'npartitions' parameters to 1, then print() returns True in both cases. So this appears to be an issue with the partitions containing different categories. However loading both dataframes into only two partitions is not feasible, so is there a way I can tell dask to do some sort of re-sorting to make the categories consistent across partitions?</p>
<p>The answer for your problem is basically contained in <a href="http://docs.dask.org/en/latest/dataframe-design.html" rel="nofollow noreferrer">doc</a>. I'm referring to the part code commented by <code># categorize requires computation, and results in known categoricals</code> I'll expand here because it seems to me you're misusing <code>loc</code> </p> <pre><code>import pandas as pd import dask.dataframe as dd # Set up our test data data = [['B', 'IBN', '9.9800'], ['B', 'PAY', '21.5000'], ['I', 'PAY', 'seventeen'], ['I', 'SPY', 'ten'] ] # Create pandas dataframe pdf = pd.DataFrame(data, columns=['type', 'symbol', 'price'], dtype='object') # Convert into dask ddf = dd.from_pandas(pdf, npartitions=3) # Split the dataframe by the 'type' column # reset_index is not necessary b_df = ddf[ddf["type"] == "B"].reset_index(drop=True) i_df = ddf[ddf["type"] == "I"].reset_index(drop=True) # Convert columns into our intended dtypes b_df = b_df.categorize(columns=['symbol']) b_df["price"] = b_df["price"].astype('float64') i_df = i_df.categorize(columns=['symbol']) # Is our symbol column known now? YES print(b_df['symbol'].cat.known, flush=True) print(i_df['symbol'].cat.known, flush=True) </code></pre>
python|pandas|dataframe|dask
1
2,380
54,164,085
How can I make this command run on a stored variable?
<p><strong>I want to run a command in my program using a value stored in a variable.</strong></p> <p>At the moment it works like this:</p> <p>You need to write the value into the command itself, so if I want to filter by the 'Americas' region, I need to do this:</p> <pre><code>wine.loc[wine['Region'] == 'Americas'] </code></pre> <p>But what I want is to have a line somewhere else in the code like:</p> <pre><code>abc = 'Americas' </code></pre> <p>and have the loc line run on whatever is stored in the abc variable.</p>
<p>If you want a separate dataframe to be created for every region, loop over the unique regions and store each dataframe in a dictionary of dataframes, like below:</p> <pre><code>regions = list(wine['Region'].unique())
dicdf = {}
for region in regions:
    dicdf['df' + str(region)] = wine[wine['Region'] == region]
print(dicdf)
</code></pre> <p>This will print a dictionary of the dataframes. You can print whichever dataframe you like to see, for example the data for <code>'Americas'</code>: <code>print(dicdf['dfAmericas'])</code></p> <p>Let me know if you need anything else.</p>
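<p>If all you need is the filter driven by the stored variable from your question, you can also just pass the variable straight into <code>loc</code>; the comparison does not care whether the right-hand side is a literal or a name:</p> <pre><code>abc = 'Americas'
flagged = wine.loc[wine['Region'] == abc]
</code></pre>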
python|pandas|dataframe
1
2,381
38,238,215
python (pandas): recombine groupby statements
<p>I have a some data that represents results in time at many different sites. I want to find the quartile break down of my results and also the max and min dates for each site.</p> <p>Finding each of these is easy enough:</p> <pre><code>#quartiles q = df.groupby(['site_id', 'datum']).quantile([0.25,0.5,0.75]) #max and min vlaues d_max = df.groupby(['site_id', 'datum']).max() d_min = df.groupby(['site_id', 'datum']).min() </code></pre> <p>The results being multi index dataframes. How can I join these back together to get all 3 values for each combination of site_id and datum? </p> <p>Some sample data:</p> <pre><code>from io import StringIO import pandas as pd TESTDATA=StringIO(u'''date site_id datum result 1968-01-10 RN004481 SWL 61.23 1977-06-07 RN004481 SWL 60.16 1979-12-12 RN004481 SWL 58.76 1971-04-24 RN004482 SWL 79.93 1971-09-29 RN004482 SWL 79.97 1995-09-19 RN004482 SWL 92.91 1996-02-08 RN004482 SWL 93.15 1964-10-29 RN00448411 SWL 67.87 1965-03-04 RN004687 SWL 74.90 1993-03-16 RN02528611 SWL 7.50 2011-10-24 RN029429 SWL 2.59 2011-11-05 RN029429 SWL 2.68 1992-06-24 RN004464 SWL 52.24 1986-08-11 RN004482 SWL 86.84 1998-01-29 RN004482 SWL 94.33 1966-11-24 RN004687 DTW 75.16 1978-08-30 RN004687 SWL 78.24 1983-02-22 RN004687 DTW 81.00 1984-07-24 RN004687 SWL 81.26 1993-07-07 RN004687 SWL 87.18 1994-04-08 RN004687 DTW 87.53 1994-08-11 RN004687 SWL 87.41 2001-01-10 RN004687 SWL 92.04 2010-11-15 RN004687 SWL 97.06 1964-10-01 RN004693 SWL 59.56 1965-06-03 RN004693 SWL 59.74 1967-05-19 RN004693 SWL 59.58 1967-06-23 RN004693 RSWL 59.61 1967-09-22 RN004693 RSWL 59.69 1970-12-16 RN004693 DTW 59.54 ''') df = pd.read_csv(TESTDATA, delim_whitespace=True) </code></pre>
<p>This is one way to do it:</p> <pre><code>pd.concat([d_max, d_min, q.unstack().result], axis=1, keys=['max', 'min', 'quantiles']) </code></pre> <p><a href="https://i.stack.imgur.com/vzgDQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vzgDQ.png" alt="enter image description here"></a></p>
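<p>On newer pandas versions (0.25+) you can also get everything in one pass with named aggregation, which avoids stitching separate groupbys together (the output column names below are just examples):</p> <pre><code>out = df.groupby(['site_id', 'datum']).agg(
    q25=('result', lambda s: s.quantile(0.25)),
    median=('result', 'median'),
    q75=('result', lambda s: s.quantile(0.75)),
    first_date=('date', 'min'),
    last_date=('date', 'max'),
)
</code></pre>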
python|pandas
2
2,382
38,293,605
pandas filter if name appears in column more than n times
<p>This is my dataframe:</p> <pre><code>df = pd.DataFrame({'Col1':['Joe','Bob','Joe','Joe'], 'Col2':[55,25,88,80]}) </code></pre> <p>I only want the names that appear more than once in 'Col1'.</p> <p>I can do it like this:</p> <pre><code>grouped = df.groupby("Col1") grouped.filter(lambda x: x["Col1"].count()&gt;2)['Col1'].unique() </code></pre> <p>However, that is ugly-looking code.</p> <p>Is there a simpler, cleaner way?</p>
<p>Use <code>value_counts</code> and <code>isin</code></p> <pre><code>vc = df.Col1.value_counts() &gt; 2 vc = vc[vc] df.loc[df.Col1.isin(vc.index)] </code></pre> <p><a href="https://i.stack.imgur.com/fTVzR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fTVzR.png" alt="enter image description here"></a></p>
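<p>Since the stated goal is just the names, a more direct route is to keep the counts and pull the index out in one go:</p> <pre><code>counts = df['Col1'].value_counts()
names = counts[counts &gt; 2].index.tolist()   # ['Joe'] for the sample data
</code></pre> <p>Or, to filter the rows themselves without building an intermediate boolean Series: <code>df[df.groupby('Col1')['Col1'].transform('size') &gt; 2]</code></p>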
python|python-3.x|pandas
3
2,383
38,426,385
Splitting a 1-d numpy array from tdms file, and plot shorter time series/intervals from the original array
<p>I need help pulling out a specific interval from a 1-D numpy array read from a TDMS file. I'm able to plot the file but unable to specify the sample interval that I want to plot. As you can see in the picture, I want to plot the interval that is in green.</p> <p><a href="https://i.stack.imgur.com/d4Fem.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d4Fem.png" alt="enter image description here"></a></p> <p>There are about 35,000 samples at 1000 samples a second, and I want to split the signal into 3 parts and plot the green areas. Let's say I want to plot the interval [6000, 13000], and so on. This is taken from a column of a TDMS file. I could use <code>numpy.split</code>, but I don't want to split into many parts and then have to put those arrays back together again just to get the areas I want to plot or average.</p>
<p>You should be able to use the Array Subset function: give it your array, a start index, and a length, and you will get your sub-array.</p>
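<p>Since the question is tagged numpy as well, the equivalent in Python is plain slicing; a minimal sketch, assuming the samples live in a 1-D array called <code>data</code>:</p> <pre><code>import matplotlib.pyplot as plt

segment = data[6000:13000]             # samples 6000..12999
plt.plot(range(6000, 13000), segment)  # keep the original sample index on the x-axis
plt.show()
print(segment.mean())                  # average of the interval
</code></pre>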
python|arrays|numpy|split|labview
1
2,384
38,287,696
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 19: ordinal not in range(128)
<p>Posting again as the previous post had the API token in it. I am scraping data from a website: Here is the code:</p> <pre><code>reload(sys) sys.setdefaultencoding('utf-8-sig') def __unicode__(self): return unicode(self.some_field) or u'' def daterange(start_date, end_date): for n in range(int ((end_date - start_date).days)): yield start_date + timedelta(n+1) #def is_ascii(s): #return all(ord(c) &lt; 128 for c in s) date='' min_date='' max_date='' if sys.argv[1] == 'today': min_date = datetime.today() - timedelta(1) max_date = datetime.today() elif sys.argv[1] == 'yesterday': min_date = datetime.today() - timedelta(2) max_date = datetime.today() - timedelta(1) else: min_date = datetime.strptime(sys.argv[1], "%Y-%m-%d") - timedelta(1) max_date = datetime.strptime(sys.argv[2], "%Y-%m-%d") siteIDs = [37] for id in siteIDs: for date in daterange(min_date, max_date): response_data = {} url = 'http://survey.modul.ac.at/piwikAnalytics/?module=API&amp;method=Live.getLastVisitsDetails&amp;idSite=' + str(id) + '&amp;format=csv&amp;token_auth=' + token_auth + '&amp;period=day&amp;date=' + date.strftime('%Y-%m-%d') + '&amp;filter_limit=2000' try: response=requests.get(url,timeout=100) response_url=response.url response_data=urllib.urlopen(url) except (requests.exceptions.Timeout,requests.exceptions.RequestException,requests.exceptions.HTTPError,requests.exceptions.ConnectionError,socket.error) as e : response_data="error" with codecs.open('raw_csv/piwik_'+ str(id) + '_' + date.strftime('%Y-%m-%d')+ '.csv', 'wb',encoding='utf-8-sig') as fp: fp.write(response.text) </code></pre> <p>In the output a column 'idSite' is being shown as 'idSite'. I tried to remove it by the following code:</p> <pre><code>import pandas as pd df = pd.read_csv("piwik_37_2016-07-08.csv", dtype = "unicode", encoding="utf-8-sig") df.to_csv("abc.csv") </code></pre> <p>But i am getting the above mentioned Unicode error</p>
<p>One brute-force way to remove all non-ASCII characters from a string is:</p> <pre><code>import re

# substitute each sequence of non-ASCII characters with a single space
text = re.sub(r'[^\x00-\x7F]+', ' ', text)
</code></pre> <p>Hope that helps in your case.</p>
python|python-2.7|pandas|unicode
0
2,385
66,290,958
What is the use of pd.concat's copy=True?
<p>I read the following about the copy-parameter in pandas.concat function <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p><strong>copy</strong>: bool, default True<br>If False, do not copy data unnecessarily.</p> </blockquote> <p>Questions:</p> <ol> <li>Why would anyone want to copy data unnecessarily? (which appearantly happens because the default is <code>True</code>)</li> <li>Are there any downsides setting <code>copy=False</code>?</li> </ol>
<blockquote> <p>There are two kinds of copies in play here:</p> <p>1. A deep copy (the data itself is duplicated)</p> <p>2. A shallow copy (the result just references the original data)</p> </blockquote> <p>The <code>copy</code> parameter of <code>pd.concat()</code> chooses between them: by default (<code>copy=True</code>) the data is copied, while with <code>copy=False</code> pandas will reuse the original data blocks where it can, so the result may share memory with the inputs.</p> <p><strong>Note: with a deep copy, later changes to the original frames are not reflected in the concatenated result; with a shallow copy they can be, because the result is referencing the original data. That potential aliasing is the downside of <code>copy=False</code>; the upside is saving time and memory when concatenating large frames.</strong></p>
python|copy|concatenation|pandas
1
2,386
66,324,740
Copy a column to multiple columns of a DataFrame with Pandas
<p>I have a DataFrame with multiple columns, a few columns being NaN. The dataframe is quite big having around 5,000 columns. Below is a sample from it:</p> <pre><code> GeoCode ESP FIN USA EZ19 PRT 1 Geography Spain Finland USA EZ Portugal 2 31-Mar-15 NaN NaN 0.26 0.89 NaN 3 30-Jun-15 NaN NaN NaN 0.90 NaN 4 30-Sep-15 NaN NaN 0.31 0.90 NaN 5 31-Dec-15 NaN NaN 0.41 0.91 NaN </code></pre> <p>I want to copy the value of column '<em>EZ19</em>' to all columns where all values for row 2 and below are <em>NaN</em>. I tried the following code and it works:</p> <pre><code>nan_cols = df.columns[df_macro[2:].isnull().all()].to_list() for c in nan_cols: df.loc[2:,c]= df.loc[2:,'EZ19'] </code></pre> <p>But I was thinking there should be a way to assign value of column '<em>EZ19</em>' to the target columns without using a loop and am surprised that there didn't seem to be a straight forward way to do this. Other questions here don't seem to handle the exact issue I have and couldn't find a solution that worked for me.</p> <p>Given the size of my dataframe(and it is expected to grow larger overtime) I really want to avoid using a loop in my final code so any help with this will be greatly appreciated.</p>
<p>If you're interested in replacing values of columns that contain only nulls, you can take a shortcut and simply overwrite all values from row 2 down after identifying which columns are entirely null there.</p> <pre><code># Identify columns that contain only null values from row 2 onwards
all_null_cols = df.loc[2:].isnull().all()

# overwrite row 2 onwards in only those columns with values from &quot;EZ19&quot;
df.loc[2:, all_null_cols] = df.loc[2:, [&quot;EZ19&quot;]].values
print(df)

     GeoCode    ESP      FIN   USA  EZ19       PRT
1  Geography  Spain  Finland   USA    EZ  Portugal
2  31-Mar-15   0.89     0.89  0.26  0.89      0.89
3  30-Jun-15   0.90     0.90   NaN  0.90      0.90
4  30-Sep-15   0.90     0.90  0.31  0.90      0.90
5  31-Dec-15   0.91     0.91  0.41  0.91      0.91
</code></pre>
python|pandas|dataframe
2
2,387
66,159,175
How to condense rows in Pandas by removing everything between two conditions
<p>I have a keyboard log that is telling me when keys are being press/released:</p> <pre><code>key state time z 1 0.133 d 1 0.298 d 0 0.36 a 1 0.522 a 1 0.6455 a 1 0.7744 a 1 0.9033 a 1 1.0322 a 1 1.1611 a 1 1.29 a 1 1.4189 a 1 1.5478 a 1 1.6767 a 1 1.8056 a 1 1.9345 a 1 2.0634 z 0 2.1923 a 0 2.3212 </code></pre> <p>When a key is pressed (state == 1), it continues to write that key until it returns to an up state (state = 0). How would I condense such a table so that it only includes rows where the key is first pressed and when it was let go? This form would make it easier to calculate the keypress duration.</p> <pre><code>key state time z 1 0.133 d 1 0.298 d 0 0.36 a 1 0.522 z 0 2.1923 a 0 2.3212 </code></pre> <p>My first thought is to use what I know, i.e. an ugly loop that would repeat for each key:</p> <p>(1) Detect first instance of keypress and add row to new dataframe, (2) Go through rows until we see that key has been released, then add that to dataframe, (3) Append everything into one dataframe and then sort by time</p> <p>I'm new to Pandas, but I know there must be a better way that properly takes advantage of the dataframe. I've discoevered dataframe.shift(), but can't quite wrap my head around how to deal with the non-constant distance in rows between the key presses/releases.</p> <p>Any suggestions would be appreciated :)</p>
<p>It's a simple application of <code>first()</code></p> <pre><code>dfu = df.groupby([&quot;key&quot;,&quot;state&quot;], as_index=False).first().sort_values(&quot;time&quot;) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">key</th> <th style="text-align: right;">state</th> <th style="text-align: right;">time</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">5</td> <td style="text-align: left;">z</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0.133</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">d</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0.298</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">d</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0.36</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">a</td> <td style="text-align: right;">1</td> <td style="text-align: right;">0.522</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">z</td> <td style="text-align: right;">0</td> <td style="text-align: right;">2.1923</td> </tr> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">a</td> <td style="text-align: right;">0</td> <td style="text-align: right;">2.3212</td> </tr> </tbody> </table> </div>
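<p>With the condensed frame it is then straightforward to get the press durations the question is after; a sketch, assuming each key appears in at most one press/release pair, as in the sample:</p> <pre><code>pressed  = dfu[dfu['state'] == 1].set_index('key')['time']
released = dfu[dfu['state'] == 0].set_index('key')['time']
durations = (released - pressed).rename('duration')
print(durations)
</code></pre>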
python|pandas
3
2,388
65,966,888
Very high latency running Python Flask app on gcloud app engine
<p>I have this small Python Flask app that gets a file posted as input and runs this file through a tensforflow keras model to come back with a prediction.</p> <p>On my old laptop, running this locally it is superfast. The app consumes around 450MB or ram.</p> <p>Now I have deployed this app to gcloud app engine, and I experience extremely high lantency, ranging from 1,900 to 3,500 ms. 1000x slower than on my own laptop! Not only is it slow, but it starts to much instances as well because of it.</p> <p>I have tried with F2 and F4 instances (F1 doesn't provide enough memory), but it doesn't make a difference.</p> <p><strong>app.yaml</strong></p> <pre><code>runtime: python37 env: standard instance_class: F2 entrypoint: gunicorn -b :$PORT main:app </code></pre> <p><strong>main.py</strong></p> <pre><code>from flask import Flask from flask_cors import CORS from flask_restful import Api, Resource, reqparse, abort from firebase import verifyToken, log from model_manager import predict import werkzeug, os import tempfile app = Flask(__name__) CORS(app) api = Api(app) post_args = reqparse.RequestParser() post_args.add_argument('file', type=werkzeug.datastructures.FileStorage, location='files', help=&quot;No file provided.&quot;, required=True) post_args.add_argument('Authorization', type=str, location='headers', help=&quot;No auth token provided.&quot;, required=True) class Analyze(Resource): def post(self): data = post_args.parse_args() if not verifyToken(data['Authorization']): abort(401, message=&quot;The user is not authorized to use this &quot;) try: tf = os.path.join('tmp', tempfile.NamedTemporaryFile().name) data['file'].save(tf) result = predict(tf) except Exception as ex: abort(400, message=ex) finally: if tf: os.remove(tf) return result, 200 api.add_resource(Analyze, &quot;/&quot;) if __name__ == &quot;__main__&quot;: app.run(debug=False) </code></pre> <p>Am I doing something wrong here that causes the high latency?</p>
<p>The best practice is to use TensorFlow Serving. Read <a href="https://www.tensorflow.org/tfx/guide/serving" rel="nofollow noreferrer">https://www.tensorflow.org/tfx/guide/serving</a> for more details.</p>
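<p>Independently of serving, two things that often account for a gap like this on App Engine standard (stated as suggestions, since they depend on what <code>model_manager</code> does): the instances are CPU-only and much weaker than a laptop CPU, and freshly started instances reload the Keras model from scratch. Loading the model once at module import time instead of per request, and keeping a warm instance around, usually helps:</p> <pre><code># app.yaml (sketch)
inbound_services:
  - warmup
automatic_scaling:
  min_instances: 1
</code></pre>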
python-3.x|tensorflow2.0|gcloud|flask-restful
0
2,389
58,288,670
Merging two dataframes on the same type column gives me wrong result
<p>I have two dataframes, assume A and B, which have been created after reading the sheets of an Excel file and performing some basic functions. I need to <code>merge right</code> the two dataframes on a column named ID which has first been converted to <code>astype(str)</code> for both dataframes.</p> <p>The ID column of the left Dataframe (A) is:</p> <pre><code>0 5815518813016 1 5835503994014 2 5835504934023 3 5845535359006 4 5865520960012 5 5865532845006 6 5875531550008 7 5885498289039 8 5885498289039_A2 9 5885498289039_A3 10 5885498289039_X2 11 5885498289039_X3 12 5885509768698 13 5885522349999 14 5895507791025 Name: ID, dtype: object </code></pre> <p>The ID column of the right Dataframe (B) is:</p> <pre><code>0 5835503994014 1 5845535359006 2 5835504934023 3 5815518813016 4 5885498289039_A1 5 5885498289039_A2 6 5885498289039_A3 7 5885498289039_X1 8 5885498289039_X2 9 5885498289039_X3 10 5885498289039 11 5865532845006 12 5875531550008 13 5865520960012 14 5885522349998 15 5895507791025 16 5885509768698 Name: ID, dtype: object </code></pre> <p>However, when I merge the two, the rest of the columns of the left (A) dataframe become "empty" (np.nan) except for the rows where the ID does not contain only numbers but letters too. This is the <code>pd.merge()</code> I do:</p> <pre><code>A_B=A.merge(B[['ID','col_B']], left_on='ID', right_on='ID', how='right') </code></pre> <p>Do you have any ideas what might be so wrong? Your input is valuable.</p>
<p>Try turning all values in both columns into strings: <code>A['ID'] = A['ID'].astype(str)</code> <code>B['ID'] = B['ID'].astype(str)</code></p> <p>Generally, when a merge like this doesn't work, I would try to debug by printing out the unique values in each column to check if anything pops out (usually dtype issues).</p>
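<p>A small debugging sketch along those lines (using the column names from your frames):</p> <pre><code>print(A['ID'].dtype, B['ID'].dtype)            # both should be object/str
print(set(A['ID']) - set(B['ID']))             # IDs in A with no match in B
print((A['ID'] != A['ID'].str.strip()).sum())  # rows with stray whitespace
</code></pre> <p>If the column was ever read as numeric, <code>astype(str)</code> can also leave a trailing '.0' on one side only, which would make otherwise identical IDs fail to match.</p>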
pandas|merge
1
2,390
58,174,267
Computing age from to_timedelta is weird, and DateOffset is not scalable over a Series
<p>I have two columns:</p> <pre><code> date age 0 2016-01-05 47.0 1 2016-01-05 43.0 2 2016-01-05 28.0 3 2016-01-05 46.0 4 2016-01-04 39.0 </code></pre> <p>What I want is another column with the difference between the date and age:</p> <pre><code> date age dob 0 2016-01-05 47.0 1969-01-05 1 2016-01-05 43.0 1973-01-05 2 2016-01-05 28.0 1988-01-05 3 2016-01-05 46.0 1970-01-05 4 2016-01-04 39.0 1977-01-04 </code></pre> <p>Seems simple enough, but the simple <code>df['date'] - df['age'].astype('timedelta64[Y]')</code> gives:</p> <pre><code>0 1969-01-04 14:27:36 1 1973-01-04 13:44:24 2 1988-01-05 05:02:24 3 1970-01-04 20:16:48 4 1977-01-03 13:01:12 </code></pre> <p>Why the additional time stamp? Even <code>pd.to_timedelta(df['age'], unit='Y')</code> gives the same result, with an additional warning that <code>unit='Y'</code> is deprecated.</p> <p>Further, <code>df['date'] - pd.DateOffset(years=df['age'])</code> throws (understandably):</p> <pre><code>TypeError: cannot convert the series to &lt;class 'int'&gt; </code></pre> <p>I can use <code>apply</code> in the second option, <code>df['date'] - df['age'].apply(lambda a: pd.DateOffset(years=a))</code>, to circuitously get the correct result, and (understandably) <code>PerformanceWarning: Adding/subtracting array of DateOffsets to DatetimeArray not vectorized</code>.</p> <p>What is a good (pythonic and vectorized) solution here?</p>
<p>If you need to specify a different non-standard offset (i.e. months or years) for every row it can save time to <strong>loop over the unique offsets instead of rows</strong>. Accomplish this with a <code>groupby</code>.</p> <p>This will be especially true when the number of unique offsets is &lt;&lt; the number of rows in your DataFrame. This is very likely the case with realistic values for integer ages and a very long DataFrame.</p> <pre><code>pd.concat([gp.assign(dob = gp.date - pd.offsets.DateOffset(years=age)) for age, gp in df.groupby('age', sort=False)]) date age dob 0 2016-01-05 47.0 1969-01-05 1 2016-01-05 43.0 1973-01-05 2 2016-01-05 28.0 1988-01-05 3 2016-01-05 46.0 1970-01-05 4 2016-01-04 39.0 1977-01-04 </code></pre> <hr /> <p>Some timings:</p> <pre><code>import perfplot import pandas as pd import numpy as np def with_groupby(df): s = pd.concat([gp.date - pd.offsets.DateOffset(years=idx) for idx, gp in df.groupby('age', sort=False)]) return s def with_apply(df): s = df.apply(lambda x: x['date'] - pd.DateOffset(years=int(x['age'])), axis=1) return s perfplot.show( setup=lambda n: pd.DataFrame({'date': np.random.choice(pd.date_range('1980-01-01', freq='50D', periods=100), n), 'age': np.random.choice(range(100), n)}), kernels=[lambda df: with_groupby(df), lambda df: with_apply(df)], labels=[&quot;groupby&quot;, &quot;apply&quot;], n_range=[2 ** k for k in range(1, 20)], equality_check=lambda x,y: x.sort_index().compare(y.sort_index()).empty, xlabel='len(df)' ) </code></pre> <p><a href="https://i.stack.imgur.com/UpIdo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UpIdo.png" alt="enter image description here" /></a></p>
python|pandas
2
2,391
58,341,147
How to find a shape inside a Numpy 2D array having an contour?
<p>I have a shape contour <code>cnt</code>, I need to find it inside a 2D array, I have a target_index variable, it is used to find the required zone, but I need to look for the <code>cnt</code> contour in it.</p> <pre><code>import numpy as np x = np.linspace(0,1000, int(1000/50)) y = np.linspace(0,1000, int(1000/50)) X,Y = np.meshgrid(x,y) source = np.column_stack([X.ravel(), Y.ravel()]).astype(int) destination = source.copy() cnt = [[550, 42], [600, 42], [690, 273], [640, 273]] # Need to use cnt here target_index = np.where(np.logical_and(destination[:,1]==789,destination[:,0]&gt;=421)) destination[target_index] scope = destination[target_index] scope[:,0] = scope[:,0] + 10 destination[target_index] = scope destination[target_index] # Remap grid_x, grid_y = np.mgrid[0:800, 0:800] grid_z = griddata(source, destination, (grid_x, grid_y), method='cubic') map_x = np.append([], [ar[:,1] for ar in grid_z]).reshape(800,800).astype('float32') map_y = np.append([], [ar[:,0] for ar in grid_z]).reshape(800,800).astype('float32') warped_image = cv2.remap(img, map_x, map_y, cv2.INTER_CUBIC) cv2.drawContours(warped_image,[cnt],0,(0,0,0),2) </code></pre> <p>Other methods can be used, but <code>np.where</code> is preferred.</p>
<p>Unless you limit yourself to certain polygons, I think it's going to be very hard to use <code>np.where</code> to do this.</p> <p>Here's how to use <code>matplotlib</code>'s <code>Path</code> object to solve the problem (adapting <a href="https://stackoverflow.com/questions/21339448/how-to-get-list-of-points-inside-a-polygon-in-python">this solution</a>):</p> <pre><code>import numpy as np from matplotlib.path import Path x = np.linspace(0,1000, int(1000/50)) y = np.linspace(0,1000, int(1000/50)) X,Y = np.meshgrid(x,y) source = np.column_stack([X.ravel(), Y.ravel()]).astype(int) cnt = [[550, 42], [600, 42], [690, 273], [640, 273]] p = Path(cnt) grid = p.contains_points(source) mask = grid.reshape(20, 20) </code></pre> <p>Then look at the result:</p> <pre><code>import matplotlib.pyplot as plt plt.imshow(mask) </code></pre> <p>Which gives:</p> <p><a href="https://i.stack.imgur.com/7BunK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7BunK.png" alt="mask plot in matplotlib"></a></p> <p>Use more points in the <code>linspace</code> to get a higher-resolution result.</p>
python|arrays|numpy
1
2,392
58,506,163
Bigquery query result to dataframe with Airflow
<p>I am trying to query the data from bigquery and write it to dataframe with Airflow. But either it is giving <code>file not found</code> (service account key) or <code>file name is too long</code> or <code>eof line read</code> error.</p> <p>I have tried with hooks as well but I am not able to do put key file as json as it is saying it is too long.</p> <p>Any tips on how I can achieve it? </p> <pre><code>def get_data_from_GBQ(): global customer_data ofo_cred = Variable.get("ofo_cred") logging.info(ofo_cred) logging.info("Variable is here") customer_data_query = """ SELECT FirstName, LastName, Organisation FROM `bigquery-bi.ofo.Customers` LIMIT 2 """ logging.info("test") # Creating a connection to the google bigquery client = bigquery.Client.from_service_account_json(ofo_cred) logging.info("after client") customer_data = client.query(customer_data_query).to_dataframe() logging.info("after client") print(customer_data) dag = DAG( 'odoo_gbq_connection', default_args=default_args, description='A connection between ', schedule_interval=timedelta(days=1),) </code></pre> <p>And the error is:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: '{\r\n "type": "service_account",\r\n "project_id":... </code></pre>
<p>The <code>bigquery.Client.from_service_account_json</code> function expects the file name of the service account key file, but you are passing it the contents of that file, so it tries to find a file whose path starts with <code>{\r\n "type": "servi...</code> and fails with <code>FileNotFound</code>.</p> <p>Potential fix:</p> <pre class="lang-py prettyprint-override"><code>client = bigquery.Client.from_service_account_json(path_to_ofo_cred) </code></pre> <p><a href="https://googleapis.dev/python/google-api-core/latest/auth.html#service-accounts" rel="nofollow noreferrer">https://googleapis.dev/python/google-api-core/latest/auth.html#service-accounts</a></p>
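<p>If you would rather keep the key in the Airflow Variable (which seems to be what <code>ofo_cred</code> holds) instead of shipping a file, you can build the credentials from the JSON string directly; a sketch, assuming the Variable contains the raw key JSON:</p> <pre><code>import json
from google.oauth2 import service_account

info = json.loads(Variable.get("ofo_cred"))
creds = service_account.Credentials.from_service_account_info(info)
client = bigquery.Client(credentials=creds, project=info["project_id"])
</code></pre>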
sql|pandas|dataframe|google-bigquery|airflow
2
2,393
68,972,366
Comparing previous row values in Pandas DataFrame in different column
<p>My input:</p> <pre><code>first=pd.Series([0,1680,5000,14999,17000]) last =pd.Series([4999,7501,10000,16777,21387]) dd=pd.concat([first, last], axis=1) </code></pre> <p>I am trying to take each value in the first column (e.g. <code>1680</code>, the second one) and compare it against the &quot;range&quot; of the previous row, spanning from the first column's value to the second column's value (e.g. from <code>0</code> to <code>4999</code>). So under this condition the value <code>1680</code> falls in the previous row's range <code>0</code> to <code>4999</code>, and the 3rd value in the first column, <code>5000</code>, falls in the previous row's range <code>1680</code> to <code>7501</code>, but the other values (e.g. <code>14999</code>, <code>17000</code>) are not in the range of their previous rows.<br /> My expected output is something like this:<br /> <code>[1680]</code>, <code>[5000]</code>, i.e. show only the values that satisfy the condition.<br /> I tried with <code>diff()</code>: <code>dd[0].diff().gt(dd[1])</code> and with <code>reshape</code>/<code>shift</code>, but without much success.</p>
<p>Use <code>shift</code> and <code>between</code> to compare a row with the previous one:</p> <pre><code>&gt;&gt;&gt; df[0].loc[df[0].between(df[0].shift(), df[1].shift())] 1 1680 2 5000 Name: 0, dtype: int64 </code></pre> <p>Details of <code>shift</code>:</p> <pre><code>&gt;&gt;&gt; pd.concat([df[0], df.shift()], axis=1) 0 0 1 0 0 NaN NaN 1 1680 0.0 4999.0 2 5000 1680.0 7501.0 3 14999 5000.0 10000.0 4 17000 14999.0 16777.0 </code></pre>
python|pandas
1
2,394
44,572,926
Graphing with rolling mean data not smoothing properly
<p>When trying to plot a rolling mean in pandas to smooth my data using the following code I get a strange appearing graph</p> <pre><code>data['mean_Kincaid'] = pd.rolling_mean(data.Kincaid,30, min_periods=1) data['Year']= data['Date'].dt.year data.plot(x='Date', y='mean_Kincaid') </code></pre> <p>Which yields the following graph: <a href="https://i.stack.imgur.com/0RBhi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0RBhi.png" alt="enter image description here"></a></p> <p>I would like the graph to be 'smoother' (my goal in using the rolling_mean function to begin with). </p> <p>Any help would be much appreciated :)</p> <p>Update: Image with suggested code<a href="https://i.stack.imgur.com/JT0A1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JT0A1.png" alt="enter image description here"></a></p> <p>Update 2: With the following code I was able to produce the following image -- any idea on how to fix the x-axis to just the year? </p> <pre><code>data['mean_Kincaid'] = data.Kincaid.rolling(75, min_periods=1).mean() data.plot(x='Date', y='mean_Kincaid') </code></pre> <p><a href="https://i.stack.imgur.com/3byn1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3byn1.png" alt="enter image description here"></a></p> <p>When I run it with the following code I get the error "AttributeError: Can only use .dt accessor with datetimelike values" Thanks!</p> <p>Update 3: </p> <pre><code>data['mean_Kincaid'] = data.Kincaid.rolling(10000, min_periods=1).mean() data.Date = pd.to_datetime(data.Date) data.plot(x='Date', y='mean_Kincaid', legend=False, title="Kincaid scores over time") </code></pre> <p><a href="https://i.stack.imgur.com/6jSdI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6jSdI.png" alt="enter image description here"></a></p>
<p>This is insufficient smoothing.</p> <pre><code>n = 8001 df = pd.DataFrame(dict( Kincaid=np.sin(np.linspace(-4, 4, n)) + np.random.rand(n) * 2, Date=pd.date_range('2010-03-31', periods=n) )) df['mean_Kincaid'] = df.Kincaid.rolling(30, min_periods=1).mean() df.plot(x='Date', y=['Kincaid', 'mean_Kincaid']) </code></pre> <p><a href="https://i.stack.imgur.com/XaRuh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XaRuh.png" alt="enter image description here"></a></p> <p>This is better</p> <pre><code>df['mean_Kincaid'] = df.Kincaid.rolling(360, min_periods=1).mean() df.plot(x='Date', y=['Kincaid', 'mean_Kincaid']) </code></pre> <p><a href="https://i.stack.imgur.com/ApN2F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ApN2F.png" alt="enter image description here"></a></p> <p>Notice the larger window parameter.</p>
python|pandas|datetime|plot
1
2,395
44,476,957
How to fill "column B" based on value in "column A" when the column has object dtype in python pandas?
<p>I have a CSV file which I imported as a pandas dataframe. I want to create and fill up a column based on some specific terms in another column. The column that has all those values is an <strong>object</strong> dtype. It has values like:</p> <pre><code>ABC|MNO - 2017 - Trial|1|Random|xyz|RUN|Google|1x1|A10001-21|SD|GH|PRIME - 2017 - Big - This is For Example </code></pre> <p>The code I was using is:</p> <pre><code>def new(row): if row.str.contains("PRIME"): return 'A' if row.str.contains("Random"): return 'B' if row.str.contains("Google"): return 'C' df['X'] = df['Y'].apply (lambda row: new (row)) </code></pre> <p>This code is giving me following error:</p> <pre><code>AttributeError: 'str' object has no attribute 'str' </code></pre> <p>I think it is because <strong>Column X</strong> has <strong>Object</strong> dtype.</p> <p>I tried converting it to a string using the code:</p> <pre><code>df['Y'] = df['Y'].astype('str') </code></pre> <p>but it doesn't work. Then I tried splitting it using the following code:</p> <pre><code>df['Y_new'] = df['Y'].str.split(r'([A-Z][^\.!?]*[\.!?])') </code></pre> <p>But it converted all the values to <strong>NaN</strong>. How should I do this?</p>
<p>Try doing this. Each <code>row</code> passed to the function is a plain Python string, so it has neither a <code>.str</code> accessor nor a <code>.contains</code> method; use the <code>in</code> operator instead:</p> <pre><code>def new(row):
    if "PRIME" in row:
        return 'A'
    if "Random" in row:
        return 'B'
    if "Google" in row:
        return 'C'
</code></pre>
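<p>A vectorised alternative that skips <code>apply</code> entirely (a sketch using the same keywords and the same order of precedence):</p> <pre><code>import numpy as np

conditions = [
    df['Y'].str.contains('PRIME',  na=False),
    df['Y'].str.contains('Random', na=False),
    df['Y'].str.contains('Google', na=False),
]
df['X'] = np.select(conditions, ['A', 'B', 'C'], default=None)
</code></pre>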
python|csv|pandas|numpy
0
2,396
60,951,491
Google cloud, ubuntu ERROR: Could not install packages due to an EnvironmentError: [Errno 28] No space left on device
<p>I am trying to install tensorflow on Google Cloud, Ubuntu 16.04.6 LTS with enough disk space but still i am getting the error "No Space left on device"</p> <p>df:</p> <pre><code>Filesystem 1K-blocks Used Available Use% Mounted on udev 15426012 0 15426012 0% /dev tmpfs 3087448 8784 3078664 1% /run /dev/sda1 9983268 7066636 2900248 71% / tmpfs 15437224 0 15437224 0% /dev/shm tmpfs 5120 0 5120 0% /run/lock tmpfs 15437224 0 15437224 0% /sys/fs/cgroup /dev/sda15 106858 3686 103172 4% /boot/efi tmpfs 3087448 0 3087448 0% /run/user/1001 </code></pre> <p>df- i : </p> <pre><code>Filesystem Inodes IUsed IFree IUse% Mounted on udev 3856503 398 3856105 1% /dev tmpfs 3859306 510 3858796 1% /run /dev/sda1 1290240 180504 1109736 14% / tmpfs 3859306 1 3859305 1% /dev/shm tmpfs 3859306 9 3859297 1% /run/lock tmpfs 3859306 17 3859289 1% /sys/fs/cgroup /dev/sda15 0 0 0 - /boot/efi tmpfs 3859306 5 3859301 1% /run/user/1001 </code></pre> <p>Any ideas how to solve this error?</p>
<p>Try <code>pip install --no-cache-dir tensorflow</code>, which stops pip from keeping a cached copy of the (large) wheel on the same disk.</p> <p>Refer to the following thread: <a href="https://github.com/pypa/pip/issues/5816#issuecomment-587302775" rel="nofollow noreferrer">https://github.com/pypa/pip/issues/5816#issuecomment-587302775</a></p>
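<p>If the install still runs out of space, it is often the temporary build directory (usually under <code>/tmp</code> or the user cache) that fills up rather than the target environment; pointing it at a directory on a disk with room can help (the path below is just an example):</p> <pre><code>mkdir -p /home/$USER/tmp
TMPDIR=/home/$USER/tmp pip install --no-cache-dir tensorflow
</code></pre>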
tensorflow|ubuntu|google-cloud-platform
3
2,397
60,922,759
Tensorflow version mismatch on conda environments
<p>I had initially installed tf-nightly by mistake and later uninstalled it. Now, I have installed two different versions of tensorflow on two different conda environments (tf1.14-gpu and tf2.0-gpu). When I execute the command </p> <p><code>conda list -n tf1.14-gpu tensorflow</code> it shows the following output</p> <pre class="lang-none prettyprint-override"><code># Name Version Build Channel tensorflow 1.14.0 gpu_py36h3fb9ad6_0 tensorflow-base 1.14.0 gpu_py36he45bfe2_0 tensorflow-estimator 1.14.0 py_0 tensorflow-gpu 1.14.0 h0d30ee6_0 </code></pre> <p>When I execute the command <code>conda list -n tf2.0-gpu tensorflow</code> it shows the following output</p> <pre class="lang-none prettyprint-override"><code># Name Version Build Channel tensorflow 2.1.0 gpu_py36h2e5cdaa_0 tensorflow-base 2.1.0 gpu_py36h6c5654b_0 tensorflow-estimator 2.1.0 pyhd54b08b_0 tensorflow-gpu 2.1.0 h0d30ee6_0 </code></pre> <p>But in both the environments when i import tensorflow and check for its version, it gives the same output as <code>'2.2.0-dev20200218'</code> which I assume is the version for tensorflow nightly build. I am not able to use this version for my existing models. I tried uninstalling anaconda and reinstalling the two environments with tensorflow 1.14 and tensorflow 2.0, but it tensorflow version still shows the same as <code>'2.2.0-dev20200218'</code>. Any idea how to overcome this ?</p>
<p>I ran into the same problem. Could it be that you installed <code>tf-nightly</code> using pip and not conda? In that case, when you run <code>import tensorflow as tf; print(tf.__version__)</code> it picks up the pip-installed version (possibly a user-level or global install), which is troublesome to get rid of.</p> <p>p.s. Sorry that I'm posting instead of commenting. I don't have 50 reputation points yet.</p>
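<p>A quick way to confirm which install is shadowing the conda packages (run inside each activated environment):</p> <pre><code>python -c "import tensorflow as tf; print(tf.__version__, tf.__file__)"
pip list | grep -i "tensorflow\|tf-nightly"
</code></pre> <p>If <code>tf.__file__</code> points outside the environment's <code>site-packages</code> (for example to <code>~/.local/lib/...</code>), uninstalling that copy with <code>pip uninstall tf-nightly</code> from the same location should let each conda environment's own TensorFlow take over.</p>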
python|tensorflow|jupyter-notebook|anaconda|conda
0
2,398
71,609,817
How can I make each key in a dictionary a string and make the value of that key a Python list
<p>I am trying to convert each dictionary key to a string and make the values a list.</p> <p>This is where I am, and I don't know what to do next:</p> <pre><code>dict_from_csv = pd.read_csv('Emissions.csv', header=None, index_col=0, squeeze=True).to_dict() keys = list(dict_from_csv.keys()) values = list(dict_from_csv.values()) keys values </code></pre>
<p>Do you want a single string to represent all the keys? If so, you can do this:</p> <pre><code>keys = &quot; &quot;.join(str(k) for k in dict_from_csv.keys()) </code></pre> <p>(The <code>str()</code> is there in case the keys read from the CSV are not strings already.)</p> <p>And do you want a single list with all the values? If so, you can do this, iterating over <code>dict_from_csv</code> itself rather than an undefined <code>df</code>:</p> <pre><code>values = [val for key in dict_from_csv for val in dict_from_csv[key].values()] </code></pre>
python|pandas|csv|dictionary|key
0
2,399
71,612,541
Redis not working. __init__() got an unexpected keyword argument 'username'
<p>i am trying to run the celery -A project worker -l info. But each time it returns an error like <strong>init</strong> got unexpected error. Kindly Help. Thanks in advance.</p> <p>my settings file:</p> <pre><code>CELERY_BROKER_URL = 'redis://localhost:6379' CELERY_RESULT_BACKEND = 'redis://localhost:6379' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_TIMEZONE = 'Africa/Nairobi' </code></pre> <p>celery file:</p> <pre><code>from __future__ import absolute_import, unicode_literals import os from celery import Celery # setting the Django settings module. os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings') app = Celery('project') app.config_from_object('django.conf:settings', namespace='CELERY') # Looks up for task modules in Django applications and loads them app.autodiscover_tasks() @app.task(bind=True) def debug_task(): print('Request') </code></pre> <p><strong>init</strong> file</p> <pre><code>from __future__ import absolute_import # This will make sure the app is always imported when # Django starts so that shared_task will use this app. from .celery import app as celery_app </code></pre>
<p>I just solved the problem by installing a Redis bundle.</p> <pre><code>pip install &quot;celery[redis]&quot; </code></pre>
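<p>For context (this is an inference about the root cause, not something verified against your exact environment): that <code>TypeError</code> usually means the installed <code>redis-py</code> client is older than what kombu/celery expect; the <code>username</code> keyword was only added to redis-py's connection classes in version 3.4.0, and installing the bundle pulls in a compatible client. Upgrading the client directly should therefore also work:</p> <pre><code>pip install --upgrade "redis&gt;=3.4.0"
</code></pre>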
python|django|pandas|redis|celery
3