| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
374,400
| 62,114,529
|
changing range causes a distribution not normal
|
<p><a href="https://stackoverflow.com/a/37412692">A post</a> gives some code to plot this figure</p>
<pre><code>import scipy.stats as ss
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-10, 11)
xU, xL = x + 0.5, x - 0.5
prob = ss.norm.cdf(xU, scale = 3) - ss.norm.cdf(xL, scale = 3)
prob = prob / prob.sum() #normalize the probabilities so their sum is 1
nums = np.random.choice(x, size = 10000, p = prob)
plt.hist(nums, bins = len(x))
</code></pre>
<p><a href="https://i.stack.imgur.com/YREbg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YREbg.png" alt="enter image description here"></a></p>
<p>I modified this line</p>
<pre><code>x = np.arange(-10, 11)
</code></pre>
<p>to this line</p>
<pre><code>x = np.arange(10, 31)
</code></pre>
<p>I got this figure</p>
<p><a href="https://i.stack.imgur.com/xQZyE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xQZyE.png" alt="enter image description here"></a></p>
<p>How to fix that?</p>
|
<p>Given what you're asking Python to do, there's no error in this plot: it's a histogram of 10,000 samples from the tail (anything that rounds to between 10 and 31) of a normal distribution with mean 0 and standard deviation 3. Since probabilities drop off steeply in the tail of a normal, it happens that none of the 10,000 exceeded 17, which is why you didn't get the full range up to 31.</p>
<p>If you just want the x-axis of the plot to cover your full intended range, you could add <code>plt.xlim(9.5, 31.5)</code> after <code>plt.hist</code>.</p>
<p>If you want a histogram with support over this entire range, then you'll need to adjust the mean and/or variance of the distribution. For instance, if you specify that your normal distribution has mean 20 rather than mean 0 when you obtain <code>prob</code>, i.e.</p>
<pre><code>prob = ss.norm.cdf(xU, loc=20, scale=3) - ss.norm.cdf(xL, loc=20, scale=3)
</code></pre>
<p>then you'll recover a similar-looking histogram, just translated to the right by 20.</p>
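<p>For reference, a complete sketch of that second fix: this is just the question's snippet with the <code>loc</code> parameter added.</p>
<pre><code>import scipy.stats as ss
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10, 31)
xU, xL = x + 0.5, x - 0.5
# center the normal on the new range (mean 20) so the probabilities cover 10..30
prob = ss.norm.cdf(xU, loc=20, scale=3) - ss.norm.cdf(xL, loc=20, scale=3)
prob = prob / prob.sum()  # normalize so the probabilities sum to 1
nums = np.random.choice(x, size=10000, p=prob)
plt.hist(nums, bins=len(x))
plt.show()
</code></pre>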
|
numpy
| 0
|
374,401
| 62,281,249
|
Python Merge Join with multiple join columns
|
<p>I have two tables with following structure:</p>
<pre><code> Table 1
ID City Country
1 India
2 Delhi
3 America
4 New York
5 Germany
Table 2
ID Country City
1 India
2 India Delhi
3 America
4 America New York
5 Germany
Select * from table1
Left outer join table2
on citycountry = city or citycountry = country
</code></pre>
<p>My task is to implement the same in pandas for multiple join conditions <code>"citycountry = city or citycountry = country"</code>. How should I do it in pandas?</p>
|
<p>Once you've stored your data as <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">DataFrames</a>, you can use pandas' <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> function. One caveat: passing lists to <code>left_on</code>/<code>right_on</code> joins on <em>all</em> key pairs at once (an AND condition), so an OR condition like yours needs two merges whose results are combined:</p>
<pre><code>import pandas as pd

# merge on each condition separately, then stack the matches
by_country = pd.merge(table1, table2, left_on='citycountry', right_on='country', how='left')
by_city = pd.merge(table1, table2, left_on='citycountry', right_on='city', how='left')
result = pd.concat([by_country, by_city]).drop_duplicates()
</code></pre>
|
python|pandas|dataframe|merge
| 0
|
374,402
| 62,104,731
|
Pie chart with non labelled data?
|
<p>How can I produce a pie chart when the data is not labelled as usual? See the example dataframe below:</p>
<pre><code> First_Half Second_Half
Div
Bundesliga 0.438 0.562
EPL 0.434 0.566
La Liga 0.441 0.559
</code></pre>
<p>This just shows the proportion of first half goals vs second half goals for the divisions listed. I want to create a separate pie chart for each division with the proportion of First Half vs Second Half goals, how do I do this with this data?</p>
|
<p>Given your dataframe, you can also do something like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt

df = # your data

# draw one pie chart per division, i.e. per row of the dataframe
for i in range(len(df)):
    df.iloc[i].plot.pie()
    plt.show()
</code></pre>
|
python|pandas|matplotlib|seaborn
| 0
|
374,403
| 62,116,002
|
Referencing index from a column and adding count
|
<p>Following my project below, I would like Python to show the date on which a trade is made. Currently I have done a backtest that shows whether the price is above or below average. Is there a way to specify in the result on which date each trade (buy and sell) is done? For example, "Buying now at 5.86999 on 2019-5-20". Also, is it possible to add a count of the trades that I have done?</p>
<pre><code>### Purpose of this code is to backtest MACD strategy using 12,26,9. This is a long only strategy
## Below is to import the relevant code and set pandas option
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas_datareader.data as web
import datetime as dt
%matplotlib inline
plt.style = 'ggplot'
pd.set_option('display.max_rows', None)
## Identifying which ticker, start date, end date, defining MACD calcs and appending them into dataframe
stock = "FRO" #Input stock ticker
Start_date = '2019-1-1' #Input Start date for analysis
End_date = dt.datetime.today()
Quick_EMA = 12
Slow_EMA = 26
Signal_EMA = 9
df = web.DataReader(stock,'yahoo',Start_date,End_date)
df['12EMA'] = df['Close'].ewm(span = Quick_EMA, min_periods = Quick_EMA).mean()
df['26EMA'] = df['Close'].ewm(span = Slow_EMA, min_periods = Slow_EMA).mean()
df['MACDLine'] = df['12EMA'] - df['26EMA']
df['Signal_Line'] = df['MACDLine'].ewm(span = Signal_EMA, min_periods = Signal_EMA).mean()
##setting rules for backtest, main strategy - MACD Line crossover signal to buy/sell and risk tolerance of 1% based on Adj Close
pos = 0
realized_pnl = []
buytrades = 0
selltrades = 0
percentchange = []
cut_loss_percent = 0.96
for i in df.index:
    closing_price = df['Close'][i]
    emin = df['MACDLine'][i]
    emax = df['Signal_Line'][i]
    openprice = df['Open'][i]
    if (emin>emax):
        print('MACD higher than Signal')
        if pos == 0:
            bp = closing_price
            pos = 1
            buytrades =+1
            cut_loss = cut_loss_percent*bp
            print('Buying now at ' + str(bp))
            print('Will cut loss at ' + str(cut_loss))
    elif (closing_price<cut_loss):
        if pos == 1:
            print('Cut Loss at ' + str(cut_loss))
            pos = 0
            selltrades =+1
            print(cut_loss - bp)
            pc = (cut_loss - bp)
            realized_pnl.append(pc)
            return_onper = pc/bp
            percentchange.append(return_onper)
    elif (emax>emin):
        print('MACD is lower than Signal')
        if pos == 1:
            cp = closing_price
            pos = 0
            selltrades =+1
            print('Selling now at ' + str(cp))
            print(cp-bp)
            pc = (cp - bp)
            realized_pnl.append(pc)
            return_onper = pc/bp
            percentchange.append(return_onper)

if pos == 1:
    print('Still has position in place')
    mtm = (closing_price - bp)
    mtmgainloss = mtm/bp
    percentchange.append(mtmgainloss)

total_realizedpnl = sum(realized_pnl)
print('Realized Pnl '+ str(total_realizedpnl))
All_pnl = total_realizedpnl + mtm
print('Sum of total Pnl ' + str(All_pnl))
sum(percentchange)
</code></pre>
<p>Below is a snippet of the results</p>
<pre><code>MACD higher than Signal
Buying now at 5.869999885559082
Will cut loss at 5.635199890136718
MACD higher than Signal
...
MACD higher than Signal
Still has position in place
Realized Pnl 4.460401210784912
Sum of total Pnl 5.5204016304016115
Out[21]:
0.6354297951321523
</code></pre>
|
<p>Managed to find a workaround for this; it might be quite hacky, but somehow it works :D</p>
<pre><code>### Purpose of this code is to backtest MACD strategy using 12,26,9. This is a long only strategy
## Below is to import the relevant code and set pandas option
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas_datareader.data as web
import datetime as dt
%matplotlib inline
plt.style = 'ggplot'
pd.set_option('display.max_rows', None)
## Identifying which ticker, start date, end date, defining MACD calcs and appending them into dataframe
stock = "FRO" #Input stock ticker
Start_date = '2019-1-1' #Input Start date for analysis
End_date = dt.datetime.today()
Quick_EMA = 12 #Amend as you like
Slow_EMA = 26 #Amend as you like
Signal_EMA = 9 #Amend as you like
df = web.DataReader(stock,'yahoo',Start_date,End_date)
df['12EMA'] = df['Close'].ewm(span = Quick_EMA, min_periods = Quick_EMA).mean()
df['26EMA'] = df['Close'].ewm(span = Slow_EMA, min_periods = Slow_EMA).mean()
df['MACDLine'] = df['12EMA'] - df['26EMA']
df['Signal_Line'] = df['MACDLine'].ewm(span = Signal_EMA, min_periods = Signal_EMA).mean()
df['MACD_Sig_diff'] = df['MACDLine'] - df['Signal_Line']
df['MACD_Sig_diff_10MA'] = df['MACD_Sig_diff'].ewm(span = 10, min_periods = 10).mean()
df['Date'] = df.index
##setting rules for backtest, main strategy - MACD Line crossover signal to buy/sell and risk tolerance of 4% based on Adj Close
pos = 0
realized_pnl = []
buytrades = []
selltrades = []
percentchange = []
cut_loss_percent = 0.96 #Stop loss level, if its 5% it will be 100% - 5% = 0.95
for i in df.index:
    closing_price = df['Close'][i]
    emin = df['MACDLine'][i]
    emax = df['Signal_Line'][i]
    openprice = df['Open'][i]
    tradedate = df['Date'][i]
    if (emin>emax):
        print('MACD higher than Signal')
        if pos == 0:
            bp = closing_price
            pos = 1
            buytrades.append(1)
            cut_loss = cut_loss_percent*bp
            print('Buying now at ' + str(bp) + ' at ' + str(tradedate))
            print('Will cut loss at ' + str(cut_loss))
    elif (closing_price<cut_loss):
        if pos == 1:
            print('Cut Loss at ' + str(cut_loss) + ' at ' + str(tradedate))
            pos = 0
            selltrades.append(1)
            print('Pnl is ' + str(cut_loss - bp))
            pc = (cut_loss - bp)
            realized_pnl.append(pc)
            return_onper = pc/bp
            percentchange.append(return_onper)
    elif (emax>emin):
        print('MACD is lower than Signal')
        if pos == 1:
            cp = closing_price
            pos = 0
            selltrades.append(1)
            print('Selling now at ' + str(cp) + ' at ' + str(tradedate))
            print('Pnl is ' + str(cp-bp))
            pc = (cp - bp)
            realized_pnl.append(pc)
            return_onper = pc/bp
            percentchange.append(return_onper)

if pos == 1:
    print('Still has position in place')
    mtm = (closing_price - bp)
    mtmgainloss = mtm/bp
    percentchange.append(mtmgainloss)

print('Realized Pnl ' + str(sum(realized_pnl)))
All_pnl = sum(realized_pnl) + mtm  # realized_pnl is a list, so sum it before adding the open PnL
print('Sum of total Pnl ' + str(All_pnl))
print('Pnl in % ' + str(sum(percentchange) * 100))
print('The number of buy trades are ' + str(sum(buytrades)))
print('The number of sell trades are ' + str(sum(selltrades)))
</code></pre>
|
python|pandas|dataframe
| 0
|
374,404
| 62,154,915
|
Error message trying to change string to year
|
<p>I'm trying to combine three fields to make a date; the year is currently a string. Code to change the string to datetime:</p>
<pre><code>f2[:,'frt_eli_year'] = pd.to_datetime(f2['frt_eli_year'].astype(str), format='%Y',errors='coerce',utc=True)
</code></pre>
<p>error message:</p>
<pre><code>TypeError: unhashable type: 'slice'
</code></pre>
<p>Then the code to join year, month and day:</p>
<pre><code>f2[:,'test'] = pd.to_datetime(f2['frt_eli_year'].dt.year,f2['frt_eli_year'].dt.month,f2['frt_eli_year'].dt.day)
</code></pre>
<p>Appreciate the help, thanks!</p>
|
<p>The mistake is in <code>f2[:,'frt_eli_year']</code>. Check <a href="https://stackoverflow.com/a/43291257/7891326">this answer</a>. Basically it'll work if you change it to <code>f2.loc[:,'frt_eli_year']</code>, or, since you want all rows, just <code>f2['frt_eli_year']</code>.</p>
<p>In total:</p>
<pre><code>f2['frt_eli_year'] = pd.to_datetime(f2['frt_eli_year'],
format='%Y', errors='coerce', utc=True)
</code></pre>
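<p>For the second part of the question (joining year, month and day into one date), a minimal sketch, assuming hypothetical <code>frt_eli_month</code> and <code>frt_eli_day</code> columns and keeping all three components numeric rather than converting the year to a datetime first; <code>pd.to_datetime</code> can assemble dates from a dict of year/month/day columns:</p>
<pre><code># frt_eli_month and frt_eli_day are illustrative column names
f2['test'] = pd.to_datetime({'year': f2['frt_eli_year'],
                             'month': f2['frt_eli_month'],
                             'day': f2['frt_eli_day']},
                            errors='coerce')
</code></pre>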
|
python|pandas
| 0
|
374,405
| 62,044,768
|
Delete Multiple Columns
|
<p>How can I delete columns from index 2 onwards in a dataframe that contains 10 columns? The dataframe looks like this:</p>
<pre><code>column1 column2 column3 column4 ...
</code></pre>
<p>The task is to delete column3-column10</p>
|
<p>Invert the logic - select the first 2 columns by position with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>:</p>
<pre><code>df = df.iloc[:, :2]
</code></pre>
<p>If you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>DataFrame.drop</code></a>, select the column names to drop by indexing:</p>
<pre><code>df = df.drop(df.columns[2:], axis=1)
</code></pre>
|
python|pandas
| 2
|
374,406
| 62,084,464
|
XLA-able dynamic slicing
|
<p>Is there any way to dynamically slice a tensor according to a random number generator in an XLA-compiled function? For example:</p>
<pre class="lang-py prettyprint-override"><code>@tf.function(experimental_compile=True)
def random_slice(input, max_slice_size):
    offset = tf.squeeze(tf.random.uniform([1], minval=0, maxval=input.shape[0]-max_slice_size, dtype=tf.int32))
    sz = tf.squeeze(tf.random.uniform([1], minval=1, maxval=max_slice_size, dtype=tf.int32))
    indices = tf.range(offset, offset+sz)  # Non-XLA-able due to non-static bounds
    return tf.gather(input, indices)

x = tf.ones([50, 50])
y = random_slice(x, 4)
</code></pre>
<p>This code fails to compile because XLA requires that the arguments to <code>tf.range</code> are known at compile time. Is there a recommended workaround?</p>
|
<p>The underlying issue here is that XLA needs to know, statically, the shapes of all <code>Tensor</code>s in the program. In this case it complains about <code>tf.range</code> because its output is not knowable given the random inputs. You might instead be able to get away with generating a masked version (zeroing out the elements you don't need, using something like tensor_scatter_nd_update) and using that masked version downstream (hard to say exactly how, without seeing more context on how <code>y</code> is to be used).</p>
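<p>A minimal sketch of that masking idea (my own illustration, so treat the details as assumptions): take a fixed <code>max_slice_size</code> window at the random offset, which keeps all shapes static, and zero out the rows beyond the randomly drawn size:</p>
<pre><code>import tensorflow as tf

@tf.function(experimental_compile=True)
def random_slice_masked(x, max_slice_size):
    offset = tf.random.uniform([], minval=0, maxval=x.shape[0] - max_slice_size, dtype=tf.int32)
    sz = tf.random.uniform([], minval=1, maxval=max_slice_size, dtype=tf.int32)
    # static-size window at a dynamic offset: XLA handles this as a dynamic-slice
    window = tf.slice(x, [offset, 0], [max_slice_size, x.shape[1]])
    # zero out the tail instead of shrinking the tensor, so the shape stays static
    mask = tf.cast(tf.range(max_slice_size) < sz, x.dtype)[:, tf.newaxis]
    return window * mask

x = tf.ones([50, 50])
y = random_slice_masked(x, 4)  # always shape [4, 50], trailing rows zeroed
</code></pre>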
|
python|tensorflow|tensorflow-xla
| 0
|
374,407
| 62,305,841
|
regex multiple lines and store results in an array iteratively
|
<p>I have a bank statement and have used Regex to extract all the items in a table.
The list is </p>
<pre><code>['15-10-2019 BIL/INFT/001823982708/Block2B5/ MAHAK JUNEJA 5,130.00 5,19,319.08',
'15-10-2019 BIL/INFT/001824120963/watermaintoct/ AAANKSHA AGRAWA 3,895.00 5,23,214.08',
'15-10-2019 MOBILE BANKING MMT/IMPS/928820560895/VURIMI UMA/AXIS BANK LTD 5,201.00 5,28,415.08',
'15-10-2019 MOBILE BANKING MMT/IMPS/928820342293/B1H2/KAVURI KIS/HDFC BANK LTD 3,401.00 5,31,816.08',
'15-10-2019 SE EER TRS 2 Malntenen eee guna. Shula HEEGBAME 3,732.00 5,35,548.08',
'16-10-2019 CHEQUE 7048 CLG/ZAP POWER SYSTEMS/UBI 16,815.00 5,18,733.08',
'17-10-2019 MOBILE BANKING NANCE ee osnesiBers GGRA fee/VOONA SRIN/HDFC 500.00 5,19,233.08',
'18-10-2019 CHEQUE 7049 CLG/BANGALORE APARTMENTS FED/SBI 3,500.00 5,15,733.08',
'21-10-2019 CHEQUE 7054 CASH PAID:mohan 1075 BANGALORE-BELLANDUR VILLAGE 20,000.00 4,95,733.08',
'24-10-2019 CHEQUE 7052 CLG/V PRAVEEN RAM/YES 14,000.00 4,81,733.08',
'25-10-2019 CHEQUE 7051 CLG/BESCOM S/UTI 17,385.00 4,64,348.08',
'30-10-2019 107510010791I0 Int on FD/RD XXX0791 Tds:0.Int:8625 and TAX:0. 8,625.00 4,72,973.08',
'31-10-2019 CHEQUE 7055 CLG/ADVANCE ENGINEERING CORPORATION/HSB 14,337.00 4,58,636.08']
</code></pre>
<p>I need to store them in this format: </p>
<pre><code>Date Item Name Amount Total
15-10-2019 BIL/INFT/001823982708/Block2B5 MAHAK JUNEJA 5,130.00 5,19,319.08
</code></pre>
<p>for every line in the list</p>
|
<p>I am not sure how you want to store those values, but you can use the <code>split</code> method as follows:</p>
<pre><code>l1=['15-10-2019 BIL/INFT/001823982708/Block2B5/ MAHAK JUNEJA 5,130.00 5,19,319.08',
'15-10-2019 BIL/INFT/001824120963/watermaintoct/ AAANKSHA AGRAWA 3,895.00 5,23,214.08',
'15-10-2019 MOBILE BANKING MMT/IMPS/928820560895/VURIMI UMA/AXIS BANK LTD 5,201.00 5,28,415.08',
'15-10-2019 MOBILE BANKING MMT/IMPS/928820342293/B1H2/KAVURI KIS/HDFC BANK LTD 3,401.00 5,31,816.08',
'15-10-2019 SE EER TRS 2 Malntenen eee guna. Shula HEEGBAME 3,732.00 5,35,548.08',
'16-10-2019 CHEQUE 7048 CLG/ZAP POWER SYSTEMS/UBI 16,815.00 5,18,733.08',
'17-10-2019 MOBILE BANKING NANCE ee osnesiBers GGRA fee/VOONA SRIN/HDFC 500.00 5,19,233.08',
'18-10-2019 CHEQUE 7049 CLG/BANGALORE APARTMENTS FED/SBI 3,500.00 5,15,733.08',
'21-10-2019 CHEQUE 7054 CASH PAID:mohan 1075 BANGALORE-BELLANDUR VILLAGE 20,000.00 4,95,733.08',
'24-10-2019 CHEQUE 7052 CLG/V PRAVEEN RAM/YES 14,000.00 4,81,733.08',
'25-10-2019 CHEQUE 7051 CLG/BESCOM S/UTI 17,385.00 4,64,348.08',
'30-10-2019 107510010791I0 Int on FD/RD XXX0791 Tds:0.Int:8625 and TAX:0. 8,625.00 4,72,973.08',
'31-10-2019 CHEQUE 7055 CLG/ADVANCE ENGINEERING CORPORATION/HSB 14,337.00 4,58,636.08']
l2=[]
#splitting values based on '/'
for i in l1:
    l2.append(i.split('/'))
#printing values from sublists of l2
for j in l2:
    for k in j:
        print(k)
</code></pre>
<p>output:</p>
<pre><code>15-10-2019 BIL
INFT
001823982708
Block2B5
MAHAK JUNEJA 5,130.00 5,19,319.08
15-10-2019 BIL
INFT
001824120963
watermaintoct
AAANKSHA AGRAWA 3,895.00 5,23,214.08
15-10-2019 MOBILE BANKING MMT
IMPS
928820560895
VURIMI UMA
AXIS BANK LTD 5,201.00 5,28,415.08
15-10-2019 MOBILE BANKING MMT
IMPS
928820342293
B1H2
KAVURI KIS
HDFC BANK LTD 3,401.00 5,31,816.08
15-10-2019 SE EER TRS 2 Malntenen eee guna. Shula HEEGBAME 3,732.00 5,35,548.08
16-10-2019 CHEQUE 7048 CLG
ZAP POWER SYSTEMS
UBI 16,815.00 5,18,733.08
17-10-2019 MOBILE BANKING NANCE ee osnesiBers GGRA fee
VOONA SRIN
HDFC 500.00 5,19,233.08
18-10-2019 CHEQUE 7049 CLG
BANGALORE APARTMENTS FED
SBI 3,500.00 5,15,733.08
21-10-2019 CHEQUE 7054 CASH PAID:mohan 1075 BANGALORE-BELLANDUR VILLAGE 20,000.00 4,95,733.08
24-10-2019 CHEQUE 7052 CLG
V PRAVEEN RAM
YES 14,000.00 4,81,733.08
25-10-2019 CHEQUE 7051 CLG
BESCOM S
UTI 17,385.00 4,64,348.08
30-10-2019 107510010791I0 Int on FD
RD XXX0791 Tds:0.Int:8625 and TAX:0. 8,625.00 4,72,973.08
31-10-2019 CHEQUE 7055 CLG
ADVANCE ENGINEERING CORPORATION
HSB 14,337.00 4,58,636.08
</code></pre>
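<p>If the goal is the tabular format shown in the question, here is a sketch (my own addition) that assumes every line starts with a <code>dd-mm-yyyy</code> date and ends with two comma-formatted numbers (amount and running total); splitting the middle part further into separate Item and Name columns is ambiguous with this OCR-noisy data, so it is kept as a single column:</p>
<pre><code>import re
import pandas as pd

row_re = re.compile(r'^(\d{2}-\d{2}-\d{4})\s+(.*\S)\s+([\d,]+\.\d{2})\s+([\d,]+\.\d{2})$')
rows = [row_re.match(line).groups() for line in l1]
df = pd.DataFrame(rows, columns=['Date', 'Item', 'Amount', 'Total'])
print(df.head())
</code></pre>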
|
arrays|python-3.x|regex|pandas|loops
| 0
|
374,408
| 62,465,980
|
Machine Learning model to identify grammatical errors in a sentence?
|
<p>Is there any machine learning model for identifying grammatical errors in a sentence? Please note that I've already tried BERT, which is a classification-based model; it is useful to tell us whether a sentence has any errors or not. But what I want is a model that can identify exactly which word in a sentence violates SVA (Subject Verb Agreement) or otherwise causes an error in the sentence.</p>
|
<p>Hi, I just tried testing the repo <a href="https://github.com/grammarly/gector" rel="nofollow noreferrer">GECToR</a>; it was able to spot grammatical errors in a sentence, and identifying SVA errors worked as well.</p>
<p>Building a sequence tagger model can also help you, as described in this <a href="https://arxiv.org/abs/2005.12592" rel="nofollow noreferrer">paper</a>.</p>
|
python|tensorflow|machine-learning|deep-learning|statistics
| 0
|
374,409
| 62,118,771
|
"mask cannot be scalar" in keras.max() function
|
<p>When I used <code>K.max(box_scores, keepdims=False)</code> in an assignment, I got the error "mask cannot be scalar".
But when I used <code>K.max(box_scores, axis=-1, keepdims=False)</code>, I got the result, though I don't understand why. What is the purpose of <code>axis=-1</code> in this function, and how does it fix the error?</p>
<pre><code>box_scores = box_confidence * box_class_probs
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores,keepdims=False)
filtering_mask = ((box_class_scores)>=threshold)
scores = tf.boolean_mask(box_class_scores,filtering_mask ,name="filtering_scores")
boxes = tf.boolean_mask(boxes,filtering_mask ,name="filtering_boxes")
classes = tf.boolean_mask(box_classes,filtering_mask ,name="filtering_classes")
</code></pre>
<p>Here, box_confidence = tensor of shape (19, 19, 5, 1),
boxes -- tensor of shape (19, 19, 5, 4),
box_class_probs -- tensor of shape (19, 19, 5, 80),
and threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box.</p>
<p><a href="https://i.stack.imgur.com/4peU7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4peU7.png" alt="enter image description here"></a></p>
|
<p>For the <code>max</code> function, the <code>axis</code> parameter specifies a list of dimensions (or one dimension, or <code>None</code> for all dimensions) over which the max is computed. When negative integers are used they are interpreted like Python's negative array indices (i.e. <code>-1</code> means the last dimension, <code>-2</code> the second from last, etc.).</p>
<p>So when you're not specifying <code>axis</code> argument the default value <code>None</code> is used resulting in scalar output (i.e. maximum of all values in the tensor). When you are specifying <code>axis=-1</code> only the last dimension is reduced so from tensor of shape <code>(a,b,c,d)</code> you'll get a tensor of shape <code>(a,b,c)</code>. </p>
<p>Strangely, <code>keras</code> documentation doesn't specify it here <a href="https://www.tensorflow.org/api_docs/python/tf/keras/backend/max" rel="nofollow noreferrer">max reference</a>.</p>
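<p>A small sketch of the shape behaviour (my own illustration, using the shapes from the question):</p>
<pre><code>import numpy as np
import tensorflow.keras.backend as K

x = K.constant(np.ones((19, 19, 5, 80)))
print(K.max(x, axis=-1).shape)  # (19, 19, 5): only the last dimension is reduced
print(K.max(x).shape)           # (): axis=None reduces everything to a scalar
</code></pre>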
|
python|tensorflow|keras|neural-network|conv-neural-network
| 1
|
374,410
| 62,063,085
|
How to build this simple vector in NumPy
|
<p>I'm new to Python and I want to create an array which has 0.00 as its first element and then adds 0.01 for each subsequent element until the last one, which should be less than or equal to a given number (in my case 0.55).</p>
<p>In Matlab the code for it would be <code>(0: 0.01: 0.55)</code></p>
<p>And the result would be: <code>[0.00, 0.01, 0.02, ... , 0.55]</code></p>
<p>Now of course I think it can be done really easily in Python with a loop, but I'm wondering if there is a direct way to achieve this with a NumPy function</p>
<p>I tried arange but failed, maybe it's not the right one.</p>
<p>Thanks</p>
|
<p>I would go with</p>
<pre><code>np.arange(0, 0.555, 0.01)
</code></pre>
<p>I just took a look at the numpy docs:</p>
<blockquote>
<p>End of interval. The interval does not include this value, except in some cases where step is not an integer and floating point round-off affects the length of out.
<a href="https://numpy.org/doc/stable/reference/generated/numpy.arange.html" rel="noreferrer">numpy-docs</a></p>
</blockquote>
<p>so the strange behaviour is caused by some float-rounding issue. See <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems" rel="noreferrer">https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems</a> for more info.</p>
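<p>If you want to sidestep the float endpoint issue entirely, a small alternative sketch: ask <code>np.linspace</code> for the number of points instead of the step size.</p>
<pre><code>import numpy as np

# 56 evenly spaced points: 0.00, 0.01, ..., 0.55 (the endpoint is included)
values = np.linspace(0.0, 0.55, 56)
</code></pre>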
|
python|numpy
| 5
|
374,411
| 62,272,424
|
Removing tensors for optimising a for loop in python
|
<p>I am working on a large codebase that I'm trying to optimise. The code below contains a for loop that returns encodings in a tensor. How do I output these numbers in a regular list instead, without going through tensors?</p>
<pre><code>def _make_batches(self, lines):
    tokens = [self._tokenize(line) for line in lines]
    lengths = np.array([t.numel() for t in tokens])
    indices = np.argsort(-lengths, kind=self.sort_kind)  # pylint: disable=invalid-unary-operand-type

    def batch(tokens, lengths, indices):
        toks = tokens[0].new_full((len(tokens), tokens[0].shape[0]),
                                  self.pad_index)
        for i in range(len(tokens)):
            toks[i, -tokens[i].shape[0]:] = tokens[i]
        return Batch(srcs=None,
                     tokens=toks,
                     lengths=torch.LongTensor(lengths)), indices

    batch_tokens, batch_lengths, batch_indices = [], [], []
    ntokens = nsentences = 0
    for i in indices:
        if nsentences > 0 and ((self.max_tokens is not None
                                and ntokens + lengths[i] > self.max_tokens)
                               or (self.max_sentences is not None
                                   and nsentences == self.max_sentences)):
            yield batch(batch_tokens, batch_lengths, batch_indices)
            ntokens = nsentences = 0
            batch_tokens, batch_lengths, batch_indices = [], [], []
        batch_tokens.append(tokens[i])
        batch_lengths.append(lengths[i])
        batch_indices.append(i)
        ntokens += tokens[i].shape[0]
        nsentences += 1
    if nsentences > 0:
        yield batch(batch_tokens, batch_lengths, batch_indices)
</code></pre>
<p>This is how I call this function:</p>
<pre><code>if __name__ == '__main__':
    s = SentenceEncoder("data/model.pt")
    input = [args.string_enc]
    make_batches = s._make_batches
    print([batch[1] for batch, indexes in make_batches(input)])
</code></pre>
<p>The output is:</p>
<pre><code>[tensor([[29733, 20720, 2]])]
</code></pre>
<p>The desired output is:</p>
<pre><code>[29733, 20720, 2]
</code></pre>
|
<p>You mean this?</p>
<pre><code>a=[torch.tensor([[29733, 20720, 2]])]
b=a[0].squeeze(0).tolist()
print(b)
</code></pre>
|
python|for-loop|pytorch|tensor
| 0
|
374,412
| 62,409,126
|
(Python) How can I compare 2 or more columns with Pandas?
|
<p>I have been using the pandas module for data scraping, and although I understood how to (), I'm still unsure how to compare 2 or more columns of a CSV. Taking the code below as an example, I wanted to find, e.g., the 3 publishers who published the most Action, Shooter and Platform games, separately. I wrote the code below, but the output shows "False" instead of the name of the Genre. At least I believe the top 3 publishers are correct, but I'm not sure. Could anyone have a look?</p>
<pre><code>import pandas as pd
data = pd.read_csv("https://sites.google.com/site/dr2fundamentospython/arquivos/Video_Games_Sales_as_at_22_Dec_2016.csv")
a = data['Publisher'].groupby((data['Genre'] == 'Action')).value_counts().head(3)
print(a)
s = data['Publisher'].groupby((data['Genre'] == 'Shooter')).value_counts().head(3)
print(s)
p = data['Publisher'].groupby((data['Genre'] == 'Platform')).value_counts().head(3)
print(p)
</code></pre>
<p>Also, I should find out the top 3 publishers who sold the most Action, Shooter and Platform games altogether. I tried writing this, but it didn't work. How can I use 3 items of the same column at the same time, and compare them with another 2 columns? And what if I want to include a time frame, e.g. compare all these columns for the past 10 years?</p>
<pre><code>import pandas as pd
data = pd.read_csv("https://sites.google.com/site/dr2fundamentospython/arquivos/Video_Games_Sales_as_at_22_Dec_2016.csv")
a = ((data['Genre'] == 'Action') & (data['Genre'] == 'Shooter') & (data['Genre'] == 'Platform')).groupby((data['Publisher']) & (data['Global_Sales'])).value_counts().head(3)
print(a)
</code></pre>
|
<p>These are a lot of questions at once:</p>
<ol>
<li><blockquote>
<p><code>a = data['Publisher'].groupby((data['Genre'] == 'Action')).value_counts().head(3)
print(a)</code></p>
</blockquote></li>
</ol>
<p>In a groupby you do not specify a concrete Genre like 'Action'. That is what query is for. The point of groupby is to perform the following calculation for <em>every</em> Genre</p>
<pre><code>In [11]: number_of_games = data.groupby('Genre')['Publisher'].value_counts()
Out[11]:
Genre Publisher
Action Activision 311
Namco Bandai Games 251
Ubisoft 198
THQ 194
Electronic Arts 183
...
Strategy Time Warner Interactive 1
Titus 1
Trion Worlds 1
Westwood Studios 1
Zoo Digital Publishing 1
Name: Publisher, dtype: int64
</code></pre>
<p>Note that the selection of Publisher is after the grouping, so internally pandas loops over all values in Genre and does a value_count of the Publisher</p>
<ol start="2">
<li><blockquote>
<p>I should find out the top 3 publishers who sold the most Action, Shooter and Platform Games</p>
</blockquote></li>
</ol>
<p>Simply filter for the categories you want like this</p>
<pre><code>In [25]: number_of_games.loc[['Action', 'Shooter', 'Platform'], :]
Out[25]:
Genre Publisher
Action Activision 311
Namco Bandai Games 251
Ubisoft 198
THQ 194
Electronic Arts 183
...
Shooter Visco 1
Warashi 1
Wargaming.net 1
Xseed Games 1
id Software 1
Name: Publisher, dtype: int64
</code></pre>
<p>Then again you want the largest 3 Publishers <em>per Genre</em> and therefore you use another groupby</p>
<pre><code>In [30]: number_of_games.loc[['Action', 'Shooter', 'Platform'], :].groupby(['Genre']).head(3)
Out[30]:
Genre Publisher
Action Activision 311
Namco Bandai Games 251
Ubisoft 198
Platform Nintendo 112
THQ 85
Ubisoft 70
Shooter Activision 162
Electronic Arts 145
Ubisoft 92
Name: Publisher, dtype: int64
</code></pre>
<p>The function <code>head</code> implicitly relies on the values being sorted. Alternatively you could use <code>nlargest</code></p>
<pre><code>In [31]: number_of_games.loc[['Action', 'Shooter', 'Platform'], :].groupby(['Genre']).nlargest(3).droplevel(0)
Out[31]:
Genre Publisher
Action Activision 311
Namco Bandai Games 251
Ubisoft 198
Platform Nintendo 112
THQ 85
Ubisoft 70
Shooter Activision 162
Electronic Arts 145
Ubisoft 92
Name: Publisher, dtype: int64
</code></pre>
<p>The result is the same, but you would need to clean up the index with <code>droplevel</code>, as the Genre level would otherwise appear twice.</p>
<ol start="4">
<li><blockquote>
<p>And what if I want to include a time frame, e.g., compare all these columns for the past 10 years?</p>
</blockquote></li>
</ol>
<p>You would obviously need data for the timeframe. If you just want games published in the last 10 years, filter the original data for games newer than 10 years. If you want to resolve which publishers published the most every year, create a column with the year of publication and group by it as well. With Genre and Publisher you have already seen that you can group by a list of features.</p>
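<p>As a sketch of that idea (assuming the CSV's year column is named <code>Year_of_Release</code>; adjust to your file):</p>
<pre><code># keep only games from the last 10 years in the data, then also group by year
recent = data[data['Year_of_Release'] >= data['Year_of_Release'].max() - 10]
per_year = recent.groupby(['Year_of_Release', 'Genre'])['Publisher'].value_counts()
print(per_year.groupby(['Year_of_Release', 'Genre']).head(3))
</code></pre>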
|
python|pandas|csv
| 1
|
374,413
| 62,332,343
|
TF Lite Retraining on Mobile
|
<p>Let's assume I made an app that has machine learning in it using a tflite file.</p>
<p>Is it possible that I could retrain this model right inside the app?</p>
<p>I have tried to use the <a href="https://www.tensorflow.org/lite/tutorials/model_maker_image_classification" rel="nofollow noreferrer">Model Maker</a> provided by TensorFlow but, apart from this, I don't think there's any other way to retrain the model with just the app I made.</p>
|
<p>Do you mean training on the device when the app is deployed? If yes, TFLite currently doesn't support training in general. But there's some experimental work in this direction with limited support as shown by <a href="https://github.com/tensorflow/examples/blob/master/lite/examples/model_personalization" rel="nofollow noreferrer">https://github.com/tensorflow/examples/blob/master/lite/examples/model_personalization</a>.</p>
<p>Currently, the retraining of a TFLite model, as you found out w/ Model Maker, has to happen offline w/ TF before the app is deployed.</p>
|
tensorflow|tensorflow-lite
| 1
|
374,414
| 62,323,443
|
Getting a predictable Pandas DataFrame from (sparse) JSON
|
<p>I'm getting JSON from an API. This API omits <code>null</code> values (properties which are <code>null</code> are not sent over the wire), thus the data can be sparse. The properties contain a mix of, string, numeric, boolean, unix-timestamps, ISO8601-timestamps and ISO8601-durations. </p>
<p>Here's an example JSON (as a Python list/dict) with all data types</p>
<pre><code> data_full = [
{'name': 'alice', 'lastname': 'foo', 'value': 1.11, 'unix_ts': 1591848156000, 'iso_ts': '2020-05-17T12:33:44Z',
'iso_dur': 'PT1H11M', 'bool_val': True},
{'name': 'clair', 'lastname': 'bar', 'value': 3.33, 'unix_ts': 1591648156000, 'iso_ts': '2020-03-17T12:33:44Z',
'iso_dur': 'PT3H33M', 'bool_val': True},
]
</code></pre>
<p>Sparse data can be lacking fields on any row, or for all rows, or the API result can also be completely empty. Examples</p>
<pre><code> some_fields_missing_in_some_rows = [
{'name': 'alice', 'lastname': 'foo', 'value': 1.23, 'unix_ts': 1591848156000,
'iso_ts': '2020-05-17T12:33:44Z',
'iso_dur': 'PT1H11M', 'bool_val': True},
{'name': 'clair', }
]
some_fields_missing_in_all_rows = [
{'name': 'alice'},
{'name': 'clair'}
]
no_data = []
</code></pre>
<p>I convert this to a Pandas DataFrame using <code>json_normalize</code>. To allow for predictable further processing, I want the output dtypes in all sparse cases to be the same as if the data were full, with the correct <code>NA</code> inserted in the missing places. I struggle to get these missing values to have the right type (np.nan or other).</p>
<p>The fully contained test-case below shows the problem (aka if you get the 4 tests to pass, I believe it's doing what I expect).</p>
<p>One explicit problem is how to create an empty column of type <code>str</code> and populate it with NaN.
Any feedback is appreciated.</p>
<pre><code>import datetime
from typing import List, Tuple
from unittest import TestCase

import isodate
import numpy as np
import pandas as pd


class TestDFNormalization(TestCase):

    def test_full_fields(self):
        jsList = [
            {'name': 'alice', 'lastname': 'foo', 'value': 1.11, 'unix_ts': 1591848156000,
             'iso_ts': '2020-05-17T12:33:44Z', 'iso_dur': 'PT1H11M', 'bool_val': True},
            {'name': 'clair', 'lastname': 'bar', 'value': 3.33, 'unix_ts': 1591648156000,
             'iso_ts': '2020-03-17T12:33:44Z', 'iso_dur': 'PT3H33M', 'bool_val': True},
        ]
        df = extract_df(js=jsList)
        print(df.dtypes)
        print(df)
        self.assert_dtypes_conform(df)
        self.assert_correct_NaNs(df, 2)  # no NaN, so all rows (=2) kept

    def test_sparse_fields(self):
        some_fields_missing_in_some_rows = [
            {'name': 'alice', 'lastname': 'foo', 'value': 1.23, 'unix_ts': 1591848156000,
             'iso_ts': '2020-05-17T12:33:44Z', 'iso_dur': 'PT1H11M', 'bool_val': True},
            {'name': 'clair', }
        ]
        df = extract_df(js=some_fields_missing_in_some_rows)
        print(df.dtypes)
        print(df)
        self.assert_dtypes_conform(df)
        self.assert_correct_NaNs(df, 1)  # some NaN, only 1 row kept

    def test_lacking_fields(self):
        some_fields_missing_in_all_rows = [
            {'name': 'alice'},
            {'name': 'clair'}
        ]
        df = extract_df(js=some_fields_missing_in_all_rows)
        print(df.dtypes)
        print(df)
        self.assert_dtypes_conform(df)
        self.assert_correct_NaNs(df, 0)  # all NaN, no rows

    def test_no_data(self):
        no_data = []
        df = extract_df(js=no_data)
        print(df.dtypes)
        print(df)
        self.assert_dtypes_conform(df)
        self.assert_correct_NaNs(df, 0)  # no rows

    def assert_dtypes_conform(self, df: pd.DataFrame) -> None:
        self.assertEqual("object", df['name'].dtype)
        self.assertEqual("object", df['lastname'].dtype)
        self.assertEqual("float", df['value'].dtype)
        self.assertEqual("datetime64[ns, UTC]", df['unix_ts'].dtype)
        self.assertEqual("datetime64[ns, UTC]", df['iso_ts'].dtype)
        self.assertEqual("timedelta64[ns]", df['iso_dur'].dtype)
        self.assertEqual("boolean", df['bool_val'].dtype)

    def assert_correct_NaNs(self, df: pd.DataFrame, expectedNumRowsAfterDropNA: int) -> None:
        self.assertEqual(expectedNumRowsAfterDropNA, len(df.dropna(subset=['lastname']).index))
        self.assertEqual(expectedNumRowsAfterDropNA, len(df.dropna(subset=['value']).index))
        self.assertEqual(expectedNumRowsAfterDropNA, len(df.dropna(subset=['unix_ts']).index))
        self.assertEqual(expectedNumRowsAfterDropNA, len(df.dropna(subset=['iso_ts']).index))
        self.assertEqual(expectedNumRowsAfterDropNA, len(df.dropna(subset=['iso_dur']).index))
        self.assertEqual(expectedNumRowsAfterDropNA, len(df.dropna(subset=['bool_val']).index))


def extract_df(js: List) -> pd.DataFrame:
    df = pd.json_normalize(js)
    create_cols_if_absent(df=df,
                          expected_cols=('name', 'lastname', 'value', 'unix_ts', 'iso_ts', 'iso_dur', 'bool_val'))
    # astype_per_column(df=df, column='name', dtype='str')
    # astype_per_column(df=df, column='lastname', dtype='str')
    # astype_per_column(df=df, column='value', dtype='float')
    parse_unix_ms(df=df, column='unix_ts')
    parse_iso(df=df, column='iso_ts')
    parse_dur(df=df, column='iso_dur')
    astype_per_column(df=df, column='bool_val', dtype='boolean')
    return df


def create_cols_if_absent(df: pd.DataFrame, expected_cols: Tuple) -> None:
    for col in expected_cols:
        if col not in df.columns:
            df[col] = np.nan  # or None or pd.NA or np.nan ?


def parse_unix_ms(df, column):
    df[column] = pd.to_datetime(df[column], unit='ms', origin='unix', utc=True)


def parse_iso(df, column):
    df[column] = pd.to_datetime(df[column], utc=True)


def parse_iso_duration(durationstring: str) -> datetime.timedelta:
    if not durationstring or pd.isna(durationstring):
        return None
    return isodate.parse_duration(durationstring)


def parse_dur(df, column) -> None:
    df[column] = pd.to_timedelta(
        df[column].apply(parse_iso_duration))  # why does to_timedelta() not support ISO8601 notation?


def astype_per_column(df: pd.DataFrame, column: str, dtype) -> None:
    df[column] = df[column].astype(dtype)
</code></pre>
|
<p>Oh, well it was very close.
It should use the new (pandas >= 1.0) string type (<code>StringDtype</code>) in the <code>astype</code> call.</p>
<pre><code>def extract_df(js: List) -> pd.DataFrame:
    df = pd.json_normalize(js)
    create_cols_if_absent(df=df,
                          expected_cols=('name', 'lastname', 'value', 'unix_ts', 'iso_ts', 'iso_dur', 'bool_val'))
    astype_per_column(df=df, column='name', dtype='string')
    astype_per_column(df=df, column='lastname', dtype='string')
    astype_per_column(df=df, column='value', dtype='float')
    parse_unix_ms(df=df, column='unix_ts')
    parse_iso(df=df, column='iso_ts')
    parse_dur(df=df, column='iso_dur')
    astype_per_column(df=df, column='bool_val', dtype='boolean')
    return df
</code></pre>
|
python|python-3.x|pandas|dataframe|missing-data
| 0
|
374,415
| 62,109,293
|
Sometimes Unable to Import NumPy
|
<p>When I work in Jupyter Notebooks everything works fine, and I can import numpy and pandas successfully. However, when I try to download the script and then run it in an editor such as PyCharm or Atom, I get an import error: no module named numpy, and the same for pandas. How do I fix this? Is this due to the packages being installed in a different location than where I am downloading the code? Everything is installed with Anaconda, and when I try to do <code>conda install numpy</code> it tells me that all packages have already been installed.</p>
|
<p>This may be because Pycharm and Atom are using your default python install rather than your anaconda python environment.</p>
<p>You can configure Pycharm to use your conda environment via (<a href="https://www.jetbrains.com/help/pycharm/conda-support-creating-conda-virtual-environment.html" rel="nofollow noreferrer">https://www.jetbrains.com/help/pycharm/conda-support-creating-conda-virtual-environment.html</a>).</p>
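<p>A quick way to confirm the mismatch is to print the interpreter path in both environments (in a Jupyter cell and from PyCharm/Atom) and compare:</p>
<pre><code>import sys
print(sys.executable)  # the Python binary this environment is actually running
</code></pre>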
|
python|pandas|numpy|installation|anaconda
| 1
|
374,416
| 62,433,465
|
How to plot 3D point clouds from an npy file?
|
<p>I have a few Numpy binary files created by LIDAR readings containing 3D point clouds. I want to be able to plot a top-down (orthogonal) view for every point cloud by reading them from a file. I looked up various 3D point cloud libraries such as Open3d, pyntcloud, etc but none of them work with NPY files. How can I plot them? </p>
<p>I am not asking for a library recommendation here. I am just looking for a possible direction in which I can proceed because I have not found a way to plot point clouds by reading them from NPY files. </p>
<p>EDIT: When I read the data from one of the files using <code>np.load()</code>, it looks like this:</p>
<pre><code>array([[(-0. , 0. , 0. , 0. , 857827240, 1579782324),
(-0. , 0. , 0. , 0. , 857882120, 1579782324),
(-0. , 0. , 0. , 0. , 857937680, 1579782324),
...,
(-0. , -0. , 0. , 0. , 957653240, 1579782324),
(-0. , -0. , 0. , 0. , 957709120, 1579782324),
(-0. , -0. , 0. , 0. , 957764680, 1579782324)],
[(15.622366 , -8.086195 , 5.7023315 , 0.00392157, 857828544, 1579782324),
(16.292194 , -8.503972 , 5.8512874 , 0.07843138, 857883424, 1579782324),
(15.855744 , -8.374023 , 5.767106 , 0.02352941, 857938984, 1579782324),
...,
(16.500275 , -9.402869 , 6.0786157 , 0.01568628, 957654544, 1579782324),
(16.197226 , -9.334285 , 6.023082 , 0.00392157, 957710424, 1579782324),
(16.260717 , -9.463429 , 6.0455737 , 0.00392157, 957765984, 1579782324)],
[(16.526688 , -8.541684 , 4.6792016 , 0.00392157, 857829848, 1579782324),
(15.844723 , -8.292216 , 4.5818253 , 0. , 857884728, 1579782324),
(15.915991 , -8.414634 , 4.5984206 , 0.00392157, 857940288, 1579782324),
...,
(15.649654 , -8.954793 , 4.6751213 , 0.01176471, 957655848, 1579782324),
(17.318968 , -9.951033 , 4.9357953 , 0.01176471, 957711728, 1579782324),
(16.125185 , -9.398413 , 4.7603803 , 0.00392157, 957767288, 1579782324)],
...,
[( 2.5268526, -1.6420269 , -0.24141277, 0.02745098, 857780808, 1579782324),
( 2.529189 , -1.6714373 , -0.24518971, 0.03137255, 857836368, 1579782324),
( 2.5140662, -1.6922294 , -0.24403782, 0.03137255, 857891248, 1579782324),
...,
( 1.7650445, -1.4837685 , -0.2509078 , 0.02745098, 957606808, 1579782324),
( 1.742465 , -1.5004072 , -0.24779865, 0.02352941, 957662368, 1579782324),
( 1.7232444, -1.5187881 , -0.245681 , 0.02745098, 957718248, 1579782324)],
[(-2.7442074, 0.9481321 , 1.1273874 , 0. , 857786024, 1579782324),
(-2.7466307, 0.94417626, 1.1274364 , 0. , 857841584, 1579782324),
(-2.749064 , 0.94022495, 1.1274853 , 0. , 857896464, 1579782324),
...,
(-3.4345033, 1.3002251 , 1.1344001 , 0. , 957612024, 1579782324),
(-3.4270716, 1.2909878 , 1.1304668 , 0. , 957667584, 1579782324),
(-3.4362614, 1.2907308 , 1.1331499 , 0. , 957723464, 1579782324)],
[(-3.1056237, 1.1257029 , 1.1556424 , 0. , 857782112, 1579782324),
(-3.1041813, 1.1214051 , 1.1539782 , 0. , 857837672, 1579782324),
(-3.102756 , 1.1170869 , 1.1523142 , 0. , 857892552, 1579782324),
...,
(-3.779868 , 1.4852207 , 1.1581781 , 0. , 957608112, 1579782324),
(-3.8071766, 1.4963622 , 1.1718962 , 0. , 957663672, 1579782324),
(-3.7931492, 1.4851598 , 1.163371 , 0. , 957719552, 1579782324)]],
dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4'), ('intensity', '<f4'), ('t_low', '<u4'), ('t_high', '<u4')])
</code></pre>
<p>When I try to plot it as @Dorian suggested:</p>
<pre class="lang-py prettyprint-override"><code>x = data[:, 0]
y = data[:, 1]
z = data[:, 2]

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z)
plt.show()
</code></pre>
<p>I get the following error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-20-d6d9ea7be681> in <module>
1 fig = plt.figure(figsize=(8, 8))
2 ax = fig.add_subplot(111, projection='3d')
----> 3 ax.scatter(x, y, z)
4 plt.show()
~/anaconda3/envs/pointclouds/lib/python3.8/site-packages/mpl_toolkits/mplot3d/axes3d.py in scatter(self, xs, ys, zs, zdir, s, c, depthshade, *args, **kwargs)
2325 xs, ys, zs, s, c = cbook.delete_masked_points(xs, ys, zs, s, c)
2326
-> 2327 patches = super().scatter(xs, ys, s=s, c=c, *args, **kwargs)
2328 art3d.patch_collection_2d_to_3d(patches, zs=zs, zdir=zdir,
2329 depthshade=depthshade)
~/anaconda3/envs/pointclouds/lib/python3.8/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
1597 def inner(ax, *args, data=None, **kwargs):
1598 if data is None:
-> 1599 return func(ax, *map(sanitize_sequence, args), **kwargs)
1600
1601 bound = new_sig.bind(ax, *args, **kwargs)
~/anaconda3/envs/pointclouds/lib/python3.8/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, verts, edgecolors, plotnonfinite, **kwargs)
4459 else:
4460 x, y, s, c, colors, edgecolors, linewidths = \
-> 4461 cbook._combine_masks(
4462 x, y, s, c, colors, edgecolors, linewidths)
4463
~/anaconda3/envs/pointclouds/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _combine_masks(*args)
1122 x = safe_masked_invalid(x)
1123 seqlist[i] = True
-> 1124 if np.ma.is_masked(x):
1125 masks.append(np.ma.getmaskarray(x))
1126 margs.append(x) # Possibly modified.
~/anaconda3/envs/pointclouds/lib/python3.8/site-packages/numpy/ma/core.py in is_masked(x)
6520 if m is nomask:
6521 return False
-> 6522 elif m.any():
6523 return True
6524 return False
~/anaconda3/envs/pointclouds/lib/python3.8/site-packages/numpy/core/_methods.py in _any(a, axis, dtype, out, keepdims)
43
44 def _any(a, axis=None, dtype=None, out=None, keepdims=False):
---> 45 return umr_any(a, axis, dtype, out, keepdims)
46
47 def _all(a, axis=None, dtype=None, out=None, keepdims=False):
TypeError: cannot perform reduce with flexible type
</code></pre>
<p>A small sample of data is <a href="https://drive.google.com/file/d/1YDM7Ti7mPJpb6lxP3tryiwvsTql-vL67/view?usp=sharing" rel="nofollow noreferrer">here</a>.</p>
|
<p><code>matplotlib.pyplot</code> would be my personal go-to option.</p>
<p>You did not supply any data or say how the data is saved, so I assume that the points of the point cloud are saved in an <code>Nx3</code>-dimensional <code>numpy</code> array:</p>
<pre><code>data = np.load('file.npy')
x = data[:, 0]
y = data[:, 1]
z = data[:, 2]
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import proj3d
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z)
plt.show()
</code></pre>
<p>If you only want to have the 2D (top-down view), don't use the 3D projection and ignore your z value:</p>
<pre><code>fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.scatter(x, y)
plt.show()
</code></pre>
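<p>Note: given the structured dtype shown in the question's edit (named fields <code>'x'</code>, <code>'y'</code>, <code>'z'</code>, ...), the columns have to be accessed by field name rather than by position, which is what triggers the <code>TypeError</code> above. A sketch:</p>
<pre><code>data = np.load('file.npy')
# structured arrays are indexed by field name; ravel() flattens the 2D layout for scatter
x = data['x'].ravel()
y = data['y'].ravel()
z = data['z'].ravel()
</code></pre>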
|
python-3.x|numpy|plot|point-clouds
| 3
|
374,417
| 62,380,480
|
Pandas Type error while graphing pairplot in Seaborn
|
<p>I have a pandas DataFrame created from a file with the columns ['Time','Q1','Q2','T1','T2']. This works when I try to plot a lineplot: </p>
<pre class="lang-py prettyprint-override"><code>sns.lineplot(x=data4['Time'], y=data4['Q1'], label='Q1')
</code></pre>
<p>However when I do a pairplot:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data4, columns=data4.columns)
sns.pairplot(df['Q1'], df['T1'])
</code></pre>
<p>I get the following error:
<code>'data' must be pandas DataFrame object, not: &lt;class 'pandas.core.series.Series'&gt;</code></p>
|
<p>This is a usage error: <code>pairplot</code> expects a DataFrame, not two Series. It is solved by formatting the pairplot call as follows:</p>
<pre class="lang-py prettyprint-override"><code>sns.pairplot(df[['Q1', 'T1']])
</code></pre>
<p>Selecting with a list of column names returns a DataFrame, which is the object type the graph expects.</p>
|
python|pandas|seaborn
| 1
|
374,418
| 62,284,954
|
Flip image left right in tensorflow js
|
<p>I am new to tfjs and stuck on finding mirror of image. Is there any way to flip tensor in tensorflowjs, similar to - <a href="https://www.tensorflow.org/api_docs/python/tf/image/flip_left_right" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/image/flip_left_right</a>? </p>
|
<p><code>tf.reverse</code> can be used; mirroring an image means reversing along the width axis (axis 1 of an <code>[height, width, channels]</code> tensor):</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> flip = tf.tensor(
[[[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]]).reverse(1)
flip.print()</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><html>
<head>
<!-- Load TensorFlow.js -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest"> </script>
</head>
<body>
</body>
</html></code></pre>
</div>
</div>
</p>
<p>Since version 2.3 there is also <code>tf.image.flipLeftRight</code>, but it expects a 4D tensor (a batch of images), so a single image needs a leading batch dimension added first, e.g. with <code>expandDims(0)</code>.</p>
|
javascript|browser|tensorflow.js
| 2
|
374,419
| 62,092,199
|
Pandas groupby apply strange behavior when NaN's in group column
|
<p>I'm encountering some unexpected Pandas groupby-apply results, and I can't figure out the exact cause. </p>
<p>Below I have two dataframes that are equal except for the ordering of 2 values. df1 produces results as I expect them, but df2 produces a completely different result.</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'group_col': [0.0, np.nan, 0.0, 0.0], 'value_col': [2,2,2,2]})
df2 = pd.DataFrame({'group_col': [np.nan, 0.0, 0.0, 0.0], 'value_col': [2,2,2,2]})
</code></pre>
<p>df1:</p>
<pre><code> group_col value_col
0 0.0 2
1 NaN 2
2 0.0 2
3 0.0 2
</code></pre>
<p>df2: </p>
<pre><code> group_col value_col
0 NaN 2
1 0.0 2
2 0.0 2
3 0.0 2
</code></pre>
<p>When I groupby the <code>group_col</code> and do a value_counts of the <code>value_col</code> per group, including a reindex to include all possible values in the result I get the following for df1:</p>
<pre><code>df1.groupby('group_col').value_col.apply(lambda x: x.value_counts().reindex(index=[1,2,3]))
group_col
0.0 1 NaN
2 3.0
3 NaN
Name: value_col, dtype: float64
</code></pre>
<p>It correctly finds 1 group and returns a multi-index series with the value_counts for each possible value. But when I run the same on df2, I get a completely different result:</p>
<pre><code>0 NaN
1 NaN
2 3.0
3 NaN
Name: value_col, dtype: float64
</code></pre>
<p>Here the result contains an index matching the original DataFrame instead of the multi-index I would expect. I thought it might have something to do with the group column starting with np.nan, but then I tried dropping the last row and I get the expected result again, so apparently the cause is something else.</p>
<pre><code>df2.head(3).groupby('group_col').value_col.apply(lambda x: x.value_counts().reindex(index=[1,2,3]))
group_col
0.0 1 NaN
2 2.0
3 NaN
Name: value_col, dtype: float64
</code></pre>
<p>What could be causing this? </p>
|
<p>Let's begin with looking at some simple grouping calculations to understand how pandas works on it.</p>
<p>In the following case, grouping keys are used as index in the resulting <code>Series</code> object. The original index was dropped.</p>
<pre><code>In [4]: df1.groupby('group_col')['value_col'] \
...: .apply(lambda x: {'sum': x.sum(), 'mean': x.mean()})
Out[4]:
group_col
0.0 sum 6.0
mean 2.0
Name: value_col, dtype: float64
In [5]: df2.groupby('group_col')['value_col'] \
...: .apply(lambda x: {'sum': x.sum(), 'mean': x.mean()})
Out[5]:
group_col
0.0 sum 6.0
mean 2.0
Name: value_col, dtype: float64
</code></pre>
<p>In the next case, the index of the original <code>DataFrame</code> is preserved. Grouping keys are not contained in the result <code>Series</code>.</p>
<pre><code>In [6]: df1.groupby('group_col')['value_col'].apply(lambda x: x / len(x))
Out[6]:
0 0.666667
1 NaN
2 0.666667
3 0.666667
Name: value_col, dtype: float64
In [7]: df2.groupby('group_col')['value_col'].apply(lambda x: x / len(x))
Out[7]:
0 NaN
1 0.666667
2 0.666667
3 0.666667
Name: value_col, dtype: float64
</code></pre>
<p>What makes pandas behave differently when it produces the index of combined object?</p>
<p>Actually, this is based on <strong>whether the index was mutated by the aggregation or not</strong>. When the index is the same between the original object and the resulting object, it chooses to reuse the original index. On the other hand, when the index is different from the original object, it uses the group key in the index to form a <code>MultiIndex</code>.</p>
<p>Now, going back to the question, please notice that the index was changed for <code>df1</code>. For group key <code>0.0</code>, the index of the original chunk was <code>[0, 2, 3]</code>, whereas it is <code>[1, 2, 3]</code> after aggregation. However, for <code>df2</code>, the original index was <code>[1, 2, 3]</code>, and coincidentally it was not changed by the aggregation.</p>
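<p>A small sketch to make this concrete:</p>
<pre><code>chunk1 = df1[df1['group_col'] == 0.0]['value_col']
print(list(chunk1.index))                                    # [0, 2, 3]
print(list(chunk1.value_counts().reindex([1, 2, 3]).index))  # [1, 2, 3]: the index changed

chunk2 = df2[df2['group_col'] == 0.0]['value_col']
print(list(chunk2.index))                                    # [1, 2, 3]: same as after aggregation
</code></pre>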
|
python|pandas|numpy|dataframe|pandas-groupby
| 1
|
374,420
| 62,051,800
|
LSTM Neural Network for temperature time series predictions
|
<p>I'm learning to work with neural networks applied to time series, so I tuned an LSTM example that I found to make predictions on daily temperature data. However, I found that the results are extremely poor, as shown in the image. (I only predict the last 92 days in order to save time for now.)</p>
<p><a href="https://i.stack.imgur.com/yvpGh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yvpGh.png" alt="LSTM daily minimum temperature prediction"></a></p>
<p>This is the code I implemented. The data are a 3-column dataframe (minimum, maximum and mean daily temperatures), but I only employ one of the columns at a time.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tools.eval_measures import rmse
from sklearn.preprocessing import MinMaxScaler
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
import warnings
warnings.filterwarnings("ignore")
input_file2 = "TemperaturasCampillos.txt"
seriesT = pd.read_csv(input_file2,sep = "\t", decimal = ".", names = ["Minimas","Maximas","Medias"])
seriesT[seriesT==-999]=np.nan
date1 = '2010-01-01'
date2 = '2010-09-01'
date3 = '2020-05-17'
date4 = '2020-12-31'
mydates = pd.date_range(date2, date3).tolist()
seriesT['Fecha'] = mydates
seriesT.set_index('Fecha',inplace=True) # so that the index holds dates and they are used on the x axis by default
seriesT.index = seriesT.index.to_pydatetime()
df = seriesT.drop(seriesT.columns[[1, 2]], axis=1) # df.columns is zero-based pd.Index
n_input = 92
train, test = df[:-n_input], df[-n_input:]
scaler = MinMaxScaler()
scaler.fit(train)
train = scaler.transform(train)
test = scaler.transform(test)
#n_input = 365
n_features = 1
generator = TimeseriesGenerator(train, train, length=n_input, batch_size=1)
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_input, n_features)))
model.add(Dropout(0.15))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit_generator(generator,epochs=150)
#create an empty list for each of our 12 predictions
#create the batch that our model will predict off of
#save the prediction to our list
#add the prediction to the end of the batch to be used in the next prediction
pred_list = []
batch = train[-n_input:].reshape((1, n_input, n_features))
for i in range(n_input):
    pred_list.append(model.predict(batch)[0])
    batch = np.append(batch[:,1:,:],[[pred_list[i]]],axis=1)

df_predict = pd.DataFrame(scaler.inverse_transform(pred_list),
                          index=df[-n_input:].index, columns=['Prediction'])
df_test = pd.concat([df,df_predict], axis=1)
plt.figure(figsize=(20, 5))
plt.plot(df_test.index, df_test['Minimas'])
plt.plot(df_test.index, df_test['Prediction'], color='r')
plt.legend(loc='best', fontsize='xx-large')
plt.xticks(fontsize=18)
plt.yticks(fontsize=16)
plt.show()
</code></pre>
<p>As you can see if you click the image link, I get a prediction that is too smoothed: good for seeing the seasonality, but not what I am looking for.
In addition, I tried to add more layers to the network shown, so it looks something like this:</p>
<pre><code>#n_input = 365
n_features = 1
generator = TimeseriesGenerator(train, train, length=n_input, batch_size=1)
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_input, n_features)))
model.add(LSTM(128, activation='relu'))
model.add(LSTM(256, activation='relu'))
model.add(LSTM(128, activation='relu'))
model.add(LSTM(64, activation='relu'))
model.add(LSTM(n_features, activation='relu'))
model.add(Dropout(0.15))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit_generator(generator,epochs=100)
</code></pre>
<p>but I get this error:</p>
<p><strong>ValueError</strong>: Input 0 is incompatible with layer lstm_86: expected ndim=3, found ndim=2</p>
<p>Of course, as the model performs badly, I cannot be sure that out-of-sample predictions would be accurate.
Why can't I add more layers to the network? How could I improve the performance?</p>
|
<p>You are missing one argument: <code>return_sequences</code>.</p>
<p>When you stack more than one LSTM layer, set it to <code>True</code> on every LSTM layer except the last one. Otherwise a layer only outputs its last hidden state, so the next LSTM receives 2D input, which is exactly the ndim error you saw:</p>
<pre><code>model.add(LSTM(128, activation='relu', return_sequences=True))
</code></pre>
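<p>A sketch of how the stacked version could look (layer sizes illustrative; every LSTM except the last returns sequences):</p>
<pre><code>model = Sequential()
model.add(LSTM(200, activation='relu', return_sequences=True,
               input_shape=(n_input, n_features)))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(64, activation='relu'))  # last LSTM: return_sequences defaults to False
model.add(Dropout(0.15))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
</code></pre>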
<p>About the poor performance: my guess is that it's because you have a small amount of data for this application (and the data seems pretty noisy), so adding layers won't help much.</p>
|
python|pandas|tensorflow|datetime|keras
| 0
|
374,421
| 62,090,888
|
Trying to convert a .npy file (float64) to uint8 or uint16
|
<p>I'm trying to display a 3D Numpy Array using VTK (python 3) but it needs the type to be uint8 or uint16.</p>
<p>I don't know how to do this and any help would be greatly appreciated. </p>
<p>In case, there's nothing I can do, I just want to display my .npy file using VTK. Any suggestions will be highly appreciated. </p>
|
<p>You can use the <code>astype()</code> array method to change the data type. Here's an example:</p>
<pre><code>>>> arr = np.array([10., 20., 30., 40., 50.])
>>> print(arr)
[10. 20. 30. 40. 50.]
>>> print(arr.dtype)
float64
>>> arr = arr.astype('uint16')
>>> print(arr)
[10 20 30 40 50]
>>> print(arr.dtype)
uint16
</code></pre>
<p>To address your further questions, you want to normalize your data when converting float64 to uint8 or uint16. Basically you want to map the [min, max] data range of your array to [0, 255] for uint8 or [0, 65535] for uint16. You'll still lose information, but normalization minimizes the data loss.</p>
<p>Here's how you'd map float64 to uint8:</p>
<pre><code>dmin = np.min(arr)
dmax = np.max(arr)
drange = dmax-dmin
dscale = 255.0/drange
new_arr = (arr-dmin)*dscale
new_arr_uint8 = new_arr.astype('uint8')
</code></pre>
<p>And, yes, the astype function will work on a loaded array.</p>
|
python|python-3.x|numpy|vtk
| 0
|
374,422
| 62,390,746
|
How to speed up the Grad-CAM calculation within a loop in Keras?
|
<p>I'm coding a hand-crafted solution to compute the Grad-CAM for each image contained in a given dataset.
Here is the code:</p>
<pre><code>def grad_cam(model, image, _class, layer_name, channel_index):
class_output = model.output[:, _class]
conv_output_layer = model.get_layer(layer_name).output
gradients = K.gradients(class_output, conv_output_layer)[0]
grad_function = K.function([model.input], [conv_output_layer, gradients])
output, grad_val = grad_function([image]) # <== Execution time bottleneck
#code...
return grad_cam
</code></pre>
<p>Given an image, a convolutional layer and a channel index, the aim is to understand where a CNN classifier is looking, which is why the Grad-CAM representation has to be computed for all of the dataset's images. Since the dataset provides tens of thousands of images, the <code>grad_cam()</code> function is called within a loop:</p>
<pre><code>def visual_attention(df):
#code...
for i in range(0, len(df)):
#code...
heatmap = grad_cam(model, df[i], _class, layer_name, channel_index)
#code...
#code...
</code></pre>
<p>where <code>df</code> stands for the entire dataset. Even though the code above works, I noticed a slowdown in the <code>grad_cam()</code> function, since <code>K.gradients()</code> builds a new back-propagation graph each time it's called. I made many attempts to manage this, but TensorFlow keeps adding new nodes to its graph on every iteration. After a few hundred iterations the code becomes embarrassingly slow. How can I deal with this situation? Thank you very much in advance!</p>
|
<p>You should split out the creation of the <code>grad_function</code> and reuse it independently, for example:</p>
<pre><code>def grad_cam(model, _class, layer_name, channel_index):
class_output = model.output[:, _class]
conv_output_layer = model.get_layer(layer_name).output
gradients = K.gradients(class_output, conv_output_layer)[0]
grad_function = K.function([model.input], [conv_output_layer, gradients])
    return grad_function
</code></pre>
<p>Then you create an instance of the grad function only once:</p>
<pre><code>grad_fn = grad_cam(model, _class, layer_name, channel_index)
</code></pre>
<p>And then call this function in a loop:</p>
<pre><code>def visual_attention(df):
#code...
for i in range(0, len(df)):
#code...
        output, grad_val = grad_fn([df[i]])  # K.function expects a list of inputs; build the heatmap from these
#code...
#code...
</code></pre>
|
python|tensorflow|keras|neural-network|heatmap
| 0
|
374,423
| 62,458,141
|
Using Python pandas, how do I create a function to calculate the proportion of rows that represent a lower value than the previous row?
|
<p>Using Python pandas, how do I create a function to calculate the proportion of rows that represent a lower value than the previous row?
So in other words, I need a function to iterate through the values under a particular series column of a Pandas Data Frame and only count those values where the next row's value (under say column called 'Mileage') is less than the current row's value. Like say you have this:
Mileage:
row 1: 30
row 2: 20
row 3: 40
row 4: 50
row 5: 60
row 6: 55
row 7: 75</p>
<p>If the counter is working correctly, it would spot that row 2's value of 20 is less than row 1's value of 30 and so add +1 to the counter.<br>
In the example above, the other row it should count is row 6 (55), which is less than its previous row 5 (60).
So the final count would be 2,
and then I can divide that final count by the total row count to get a proportion.</p>
<p>Thank you in advance for any help! </p>
|
<p>You can do this using the <code>series.shift</code> function:</p>
<pre><code>proportion = len(df[df['Mileage'] < df['Mileage'].shift(1)])/len(df)
print(proportion)
</code></pre>
<p>output:</p>
<pre><code>0.2857142857142857
</code></pre>
<p>the part of the code:</p>
<pre><code>df[df['Mileage'] < df['Mileage'].shift(1)]
</code></pre>
<p>Uses masking to only select rows that meet that condition (in this case 2), and so we take the <code>len</code> of that divided by the total <code>len</code> of the df and get the proportion.
<code>.shift(1)</code> allows you to access the next rows value so you can compare to the current row in this way. </p>
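<p>Putting it together with the mileage values from the question, as a quick self-contained check (the column name <code>Mileage</code> is taken from the question):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Mileage': [30, 20, 40, 50, 60, 55, 75]})
mask = df['Mileage'] < df['Mileage'].shift(1)  # True for row 2 (20 < 30) and row 6 (55 < 60)
print(mask.sum())            # 2
print(mask.sum() / len(df))  # 0.2857142857142857
</code></pre>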
|
python|pandas|dataframe|counter
| 0
|
374,424
| 62,073,781
|
Convert a dictionary of strings to a dictionary of numpy arrays
|
<p>I have a dictionary structured similar to below.</p>
<pre><code>test_dict = {1: 'I run fast', 2: 'She runs', 3: 'How are you?'}
</code></pre>
<p>What I'm trying to do is convert all the strings to 4x4 numpy arrays where each word is in its own row and each letter occupies one cell of the array, padded with blanks for words that don't fill an entire row, and a whole row of blanks for sentences that are less than 4 words long. I also need to be able to tie it back to the ID, so the result needs to be in some format that allows referencing each array by its ID later on.</p>
<p>I don't know of any pre built functions that can handle something like this, but I would be happy to be wrong. For now I've been trying to write a loop to handle it. Below is obviously incomplete because I'm stuck at the point of creating an array in the structure I would like.</p>
<pre><code>for k in test_dict.keys():
    sentence = test_dict[k]
sentence_ascii = [ord(c) for c in sentence]
sentence_array = np.array(sentence_ascii)
</code></pre>
|
<p>Is this what you mean?</p>
<pre><code>{
key: np.array([list(word.ljust(4)) for word in val.split()])
for key, val in test_dict.items()
}
</code></pre>
<p>output:</p>
<pre><code>{1: array([['I', ' ', ' ', ' '],
['r', 'u', 'n', ' '],
['f', 'a', 's', 't']], dtype='<U1'),
2: array([['S', 'h', 'e', ' '],
['r', 'u', 'n', 's']], dtype='<U1'),
3: array([['H', 'o', 'w', ' '],
['a', 'r', 'e', ' '],
['y', 'o', 'u', '?']], dtype='<U1')}
</code></pre>
|
python|arrays|numpy|for-loop|numpy-ndarray
| 1
|
374,425
| 62,110,186
|
concatenate numpy 1D array in columns
|
<p>I have two numpy arrays:</p>
<pre><code>a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
</code></pre>
<p>and I want to concatenate them into two columns like, </p>
<pre><code>1 4
2 5
3 6
</code></pre>
<p>is there any way to do this without transposing or reshaping the arrays?</p>
|
<p>You can try:</p>
<pre><code>a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
c = np.concatenate((a[np.newaxis, :], b[np.newaxis, :]), axis = 0).T
</code></pre>
<p>And you get :</p>
<p><code>c = array([[1, 4],
[2, 5],
[3, 6]])
</code></p>
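<p>Since the question asks to avoid transposing or reshaping, note that <code>np.column_stack</code> does this in one call (a small alternative sketch):</p>
<pre><code>c = np.column_stack((a, b))  # same result, no explicit transpose or reshape
</code></pre>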
<p>Best, </p>
|
arrays|numpy|concatenation
| 1
|
374,426
| 62,134,357
|
type Error : 'dict' object is not callable
|
<p>I can't find the problem.
It's ALL about the <strong>Williams Accumulation Distribution financial analysis</strong> indicator. It "supposedly" represents the amount of buying and selling in the market.</p>
<pre><code>def wadl(prices, periods):
results = holder()
dict = {}
for i in range(0,len(periods)):
WAD = []
for j in range(periods[i],len(prices)-periods[i]):
CC = prices.close.iloc[j]
CL = prices.close.iloc[j-1]
# TR high = max(current high,previous close)
TRH = np.array([prices.high.iloc[j],CL]).max()
TRL = np.array([prices.low.iloc[j],CL]).min()
if CC > CL :
PM = CC - TRL
elif CC < CL :
PM = CC - TRH
elif CC == CL :
PM = 0
else :
                print('unknown error occurred, see administrator')
AD = PM*prices.AskVol.iloc[j]
WAD = np.append(WAD,AD)
WAD = WAD.cumsum()
WAD = pd.DataFrame(WAD, index=prices.iloc[periods[i]:-periods[i]].index)
WAD.columns = ['close']
dict[periods[i]] = WAD
results.wadl= dict
return results
</code></pre>
|
<p>The culprit is the line <code>results.wadl = dict</code>: here <code>wadl</code> is (presumably) a method of the <code>holder</code> object, and you are replacing it with a dictionary, so any later call such as <code>results.wadl(...)</code> tries to call a dict and raises <code>TypeError: 'dict' object is not callable</code>. Store the dictionary under a different attribute name (e.g. <code>results.wadl_dict</code>). Also avoid naming the local variable <code>dict</code>; it shadows the built-in <code>dict</code> type and causes the same error anywhere you later write <code>dict(...)</code>.</p>
|
python-3.x|pandas
| 0
|
374,427
| 62,173,658
|
Implement an Encoder and Decoder architecture with attention mechanism
|
<p>I want to implement Encoder-Decoder with attention mechanism from scratch. Can anyone please help me with the code?</p>
|
<p>This <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">tutorial</a> on neural machine translation with attention should be a good place to start.</p>
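<p>To give a flavour of what that tutorial builds, here is a minimal sketch of a Bahdanau-style (additive) attention layer in TF 2 / Keras; the class and variable names are my own, not a fixed API:</p>
<pre><code>import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects the decoder state
        self.W2 = tf.keras.layers.Dense(units)  # projects the encoder outputs
        self.V = tf.keras.layers.Dense(1)       # scores each encoder position

    def call(self, query, values):
        # query:  decoder hidden state, shape (batch, hidden)
        # values: encoder outputs,      shape (batch, src_len, hidden)
        query_t = tf.expand_dims(query, 1)
        score = self.V(tf.nn.tanh(self.W1(query_t) + self.W2(values)))
        weights = tf.nn.softmax(score, axis=1)             # attention weights
        context = tf.reduce_sum(weights * values, axis=1)  # weighted sum of encoder outputs
        return context, weights
</code></pre>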
|
python|tensorflow|keras|attention-model|encoder-decoder
| 0
|
374,428
| 62,225,024
|
Handling object column with mixed data types by fixing floating point categories
|
<p><a href="https://i.stack.imgur.com/Wb1jI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wb1jI.png" alt="column with mixed dtype"></a></p>
<p>I have an object column with mixed data type and there are categories such as 57.0 and 57 that are being treated differently. Is it possible to convert the categories such as 57.0 to 57 so it is treated as same category without affecting the data type of the column? </p>
|
<p>Apply <code>int()</code> to the objects of type <code>float</code>;</p>
<pre><code>df['category'] = df['category'].map(lambda x: int(x) if isinstance(x, float) else x)
</code></pre>
<p>Note that this truncates non-integer floats (e.g. <code>57.9</code> would become <code>57</code>) and raises on <code>NaN</code>, so it assumes the float entries are whole numbers.</p>
<p><code>isinstance()</code> example:</p>
<pre><code>>>> x = 12
>>> isinstance(x, int)
True
>>> y = 12.0
>>> isinstance(y, float)
True
</code></pre>
|
python|pandas
| 1
|
374,429
| 62,213,054
|
How to exclude certain rows in a pandas dataframe in Python
|
<p>I have an Excel sheet which has a list of folder names. I have to read the Excel sheet and create those folders on my drive. However, if the process breaks during creation, or if there is an exception, then when I rerun the process it should exclude the folders which have already been created.</p>
<p>Below is my current Python code:</p>
<pre><code>data = pd.read_excel(r'C://Users//file1//Desktop//folderlist.xls')
print(data["producttype"])#folder list is in producttype column name
print(data.head())
data.drop("Unnamed: 0",axis=1,inplace=True)
root=(r'C://Users//file1//Desktop//google//')
dirlist =pd.DataFrame( [ item for item in os.listdir(root) if os.path.isdir(os.path.join(root, item)) ])
df=pd.DataFrame([x[0] for x in os.walk(r'C://Users//file1//Desktop//google//')])
print(dirlist)
for i in dirlist:
for k,j in enumerate(data["producttype"]):
if i==j:
data.drop(data.producttype.index[k],axis=0,inplace=True)
</code></pre>
<p>While this executes, it does not exclude the already-created folders.</p>
<p>Can someone help me fix the issue?</p>
|
<p>This question boils down to safely creating a (nested) directory, which is answered here:
<a href="https://stackoverflow.com/questions/273192/how-can-i-safely-create-a-nested-directory">How can I safely create a nested directory?</a></p>
<p>This code should do the trick, taken from the linked question:</p>
<pre><code>import pandas as pd
from pathlib import Path
df_folders = pd.read_excel('file.xlsx', sheet_name='info', header=0)
for folder in df_folders['producttype']:
Path(folder).mkdir(parents=True, exist_ok=True)
</code></pre>
|
python|pandas
| 2
|
374,430
| 62,136,127
|
df.loc using index in slicer return nan
|
<p>I am trying to iterate over a DataFrame and get the max of a column between certain rows. The problem is that when the index value goes over a certain number I get NaN:</p>
<pre><code>for index, row in df.iterrows():
if index >= 51:
print(df.loc[index:(index - 51), 'close'].max())
</code></pre>
<p>For this, I get a NaN value,<br>
but if I use literal numbers in the slicer like this:</p>
<pre><code>for index, row in df.iterrows():
if index >= 51:
print(df.loc[0:51, 'close'].max())
</code></pre>
<p>I get a result, though not the one I need, because I need a moving window; this is just to show the problem.</p>
<p>any ideas why it won't accept the index as a valid slicer?</p>
|
<p>I think the issue is you have the <code>loc</code> index slice backwards, causing nothing to be returned; on the first iteration, your slice is <code>df.loc[51:0, 'close'].max()</code>. Instead:</p>
<pre><code>for index, row in df.iterrows():
if index >= 51:
print(df.loc[index-51:index,'close'].max())
#first iteration: df.loc[0:51,'close']
</code></pre>
<p>I'm assuming your index is integers/numbers, which is why you can mix integers and column names using <code>loc</code>; otherwise I think <code>iloc</code> could work.</p>
<hr>
<p>Amendment: this is what I was thinking by using <code>iloc</code>, but it is (in my eyes) no different than your method:</p>
<pre><code>close_iloc = df.columns.get_loc('close') #gets the integer number needed for iloc to reference 'close'
for i in range(len(df.index)):
if i >= 51:
print(df.iloc[i-51:i,close_iloc].max())
</code></pre>
|
python|pandas
| 1
|
374,431
| 62,467,822
|
Implement ConvND in Tensorflow
|
<p>So I need an N-D convolutional layer that also supports complex numbers, and I decided to code it myself.</p>
<p>I tested this code on numpy alone and it worked. Tested with several channels, 2D and 1D and complex. However, I have problems when I do it on TF.</p>
<p>This is my code so far:</p>
<pre><code>def call(self, inputs):
with tf.name_scope("ComplexConvolution_" + str(self.layer_number)) as scope:
inputs = self._verify_inputs(inputs) # Check inputs are of expected shape and format
inputs = self.apply_padding(inputs) # Add zeros if needed
output_np = np.zeros( # I use np because tf does not support the assigment
(inputs.shape[0],) + # Per each image
self.output_size, # Image out size
dtype=self.input_dtype # To support complex numbers
)
img_index = 0
for image in inputs:
for filter_index in range(self.filters):
for i in range(int(np.prod(self.output_size[:-1]))): # for each element in the output
index = np.unravel_index(i, self.output_size[:-1])
start_index = tuple([a * b for a, b in zip(index, self.stride_shape)])
end_index = tuple([a+b for a, b in zip(start_index, self.kernel_shape)])
# set_trace()
sector_slice = tuple(
[slice(start_index[ind], end_index[ind]) for ind in range(len(start_index))]
)
sector = image[sector_slice]
new_value = tf.reduce_sum(sector * self.kernels[filter_index]) + self.bias[filter_index]
# I use Tied Bias https://datascience.stackexchange.com/a/37748/75968
output_np[img_index][index][filter_index] = new_value # The complicated line
img_index += 1
output = apply_activation(self.activation, output_np)
return output
</code></pre>
<p><code>input_size</code> is a tuple of shape (dim1, dim2, ..., dimN, channels). A 2D RGB conv, for example, will be (32, 32, 3), and <code>inputs</code> will have shape (None, 32, 32, 3).</p>
<p>The output size is calculated from an equation I found in this paper: <a href="https://arxiv.org/abs/1603.07285" rel="nofollow noreferrer">A guide to convolution arithmetic for deep learning</a></p>
<pre><code>out_list = []
for i in range(len(self.input_size) - 1): # -1 because the number of input channels is irrelevant
out_list.append(int(np.floor((self.input_size[i] + 2 * self.padding_shape[i] - self.kernel_shape[i]) / self.stride_shape[i]) + 1))
out_list.append(self.filters)
</code></pre>
<p>Basically, I use <code>np.zeros</code> because if I use <code>tf.zeros</code> I cannot assign the <code>new_value</code> and I get:
<code>TypeError: 'Tensor' object does not support item assignment</code></p>
<p>However, in this current state I am getting:<br>
<code>NotImplementedError: Cannot convert a symbolic Tensor (placeholder_1:0) to a numpy array.</code></p>
<p>On that same assignment. I don't see an easy fix, I think I should change the strategy of the code completely.</p>
|
<p>In the end, I did it in a very inefficient way based on this <a href="https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-483002522" rel="nofollow noreferrer">comment</a> (also discussed <a href="https://towardsdatascience.com/how-to-replace-values-by-index-in-a-tensor-with-tensorflow-2-0-510994fe6c5f" rel="nofollow noreferrer">here</a>), but at least it works:</p>
<pre><code>new_value = tf.reduce_sum(sector * self.kernels[filter_index]) + self.bias[filter_index]
indices = (img_index,) + index + (filter_index,)
mask = tf.Variable(tf.fill(output_np.shape, 1))
mask = mask[indices].assign(0)
mask = tf.cast(mask, dtype=self.input_dtype)
output_np = output_np * mask + (1 - mask) * new_value
</code></pre>
<p>I say inefficient because I create a whole new array for each assignment. My code is taking ages to compute for the moment so I will keep looking for improvements and post here if I get something better.</p>
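<p>A possibly faster alternative I'm still considering (an untested sketch; it assumes <code>tf.tensor_scatter_nd_update</code> supports the tensor's dtype, including complex) is to scatter the single value directly instead of building a full mask for every assignment:</p>
<pre><code>indices = tf.constant([[img_index, *index, filter_index]])  # one (image, spatial..., filter) coordinate
updates = tf.reshape(tf.cast(new_value, output_np.dtype), [1])
output_np = tf.tensor_scatter_nd_update(output_np, indices, updates)
</code></pre>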
|
python|tensorflow|conv-neural-network|convolution
| 0
|
374,432
| 62,251,957
|
How to get a centred rolling mean?
|
<p>I want to compute the rolling mean of data taken on successive days. If I just use <code>dataframe.rolling(7)</code>, the mean is over the previous week. Instead, I would like each day to be at the centre of the window the mean is computed over, not right at its end. Is there an easy/fast way to get such a centred rolling mean of a pandas Series?</p>
|
<pre class="lang-py prettyprint-override"><code>df.rolling(7, center=True).mean()
</code></pre>
<p>The rolling mean can be centered by setting the <code>center</code> argument to <code>True</code>.</p>
<p><a href="https://i.stack.imgur.com/Fe5ZU.png" rel="noreferrer">Example plot</a></p>
|
python|pandas
| 6
|
374,433
| 62,316,979
|
linear regression on time series in python
|
<p>How to show dates on the chart for linear regression?</p>
<p>My data in csv file:</p>
<pre><code>"date","bat"
"2020-05-13 00:00:00",84
"2020-05-14 00:00:00",83
"2020-05-15 00:00:00",81
"2020-05-16 00:00:00",81
</code></pre>
<p>I'm able to generate the chart with a linear regression, but I don't know how to make the x-axis show dates.</p>
<p>my code:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
df = pd.read_csv('battery.csv', parse_dates=['date'])
x=np.array(pd.to_datetime(df['bat'].index.values, format='%Y-%m-%d'), dtype=float)
x=x.reshape(-1, 1)
y=np.array(df['bat'].values, dtype=float)
lm = linear_model.LinearRegression()
model = lm.fit(x,y)
predictions = lm.predict(x)
f, ax = plt.subplots(1, 1)
ax.plot(x, predictions,label='Linear fit', lw=3)
ax.scatter(x, y,label='value', marker='o', color='r')
plt.ylabel('bat')
ax.legend();
plt.show()
</code></pre>
|
<p>try this:</p>
<pre><code>df = pd.read_csv('battery.csv', parse_dates=['date'])
x=pd.to_datetime(df['date'], format='%Y-%m-%d')
y=df['bat'].values.reshape(-1, 1)
lm = linear_model.LinearRegression()
model = lm.fit(x.values.astype(float).reshape(-1, 1), y)  # use the same numeric form as in predict below
predictions = lm.predict(x.values.astype(float).reshape(-1, 1))
f, ax = plt.subplots(1, 1)
ax.plot(x, predictions,label='Linear fit', lw=3)
ax.scatter(x, y,label='value', marker='o', color='r')
plt.ylabel('bat')
ax.legend();
plt.show()
</code></pre>
|
python|pandas|scikit-learn
| 4
|
374,434
| 62,286,166
|
Merge DataFrame with many-to-many
|
<p>I have 2 DataFrames containing examples, I would like to see if a example of DataFrame 1 is present in DataFrame 2.</p>
<p>Normally I would aggregate the rows per example and simply merge the DataFrames. Unfortunately the merging has to be done with a "matching table" which has a many-to-many relationship between the keys (id_low vs. id_high).</p>
<p><strong>Simplified example</strong></p>
<p><em>Matching Table:</em></p>
<p><a href="https://i.stack.imgur.com/Vq13r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vq13r.png" alt="enter image description here"></a></p>
<p><em>Input DataFrames</em></p>
<p><a href="https://i.stack.imgur.com/EKthc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EKthc.png" alt="enter image description here"></a></p>
<p><em>They are therefore matchable like this:</em></p>
<p><a href="https://i.stack.imgur.com/5V8dk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5V8dk.png" alt="enter image description here"></a></p>
<p>Expected Output:</p>
<p><a href="https://i.stack.imgur.com/3j2Qu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3j2Qu.png" alt="enter image description here"></a></p>
<p><strong>Simplified example (for Python)</strong></p>
<pre><code>import pandas as pd
# Dataframe 1 - containing 1 Example
d1 = pd.DataFrame.from_dict({'Example': {0: 'Example 1', 1: 'Example 1', 2: 'Example 1'},
'id_low': {0: 1, 1: 2, 2: 3}})
# DataFrame 2 - containing 1 Example
d2 = pd.DataFrame.from_dict({'Example': {0: 'Example 2', 1: 'Example 2', 2: 'Example 2'},
'id_low': {0: 1, 1: 4, 2: 6}})
# DataFrame 3 - matching table
dm = pd.DataFrame.from_dict({'id_low': {0: 1, 1: 2, 2: 2, 3: 3, 4: 3, 5: 4, 6: 5, 7: 6, 8: 6},
'id_high': {0: 'A',
1: 'B',
2: 'C',
3: 'D',
4: 'E',
5: 'B',
6: 'B',
7: 'E',
8: 'F'}})
</code></pre>
<p>d1 and d2 are matchable as you can see above.</p>
<p><strong>Expected Output (or similar):</strong></p>
<pre><code>df_output = pd.DataFrame.from_dict({'Example': {0: 'Example 1'}, 'Example_2': {0: 'Example 2'}})
</code></pre>
<p><strong>Failed attemps</strong></p>
<p>Aggregating the values after translating them via the matching table, then merging. I also considered using regex with the OR operator.</p>
|
<p>IIUC, translate each example's <code>id_low</code> to <code>id_high</code> via the matching table, then merge on <code>id_high</code>:</p>
<pre><code>d2.merge(dm)\
    .merge(d1.merge(dm), on='id_high')\
    .groupby(['Example_x','Example_y'])['id_high'].agg(list)\
    .reset_index()
</code></pre>
<p>Output:</p>
<pre><code> Example_x Example_y id_high
0 Example 2 Example 1 [A, B, E]
</code></pre>
|
python|pandas
| 3
|
374,435
| 62,410,758
|
Pandas sqlalchemy create_engine connection with SQL server windows authentication
|
<p>I am trying to upload a Pandas DataFrame to SQL server table. From reading, the sqlalchemy to_sql method seems like a great option. However, I am not able to get the create_engine to make the connection. </p>
<p>I am able to connect to the database to retrieve data with Windows authentication. Here is the connection string I am using:</p>
<pre><code>cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
"Server={server_name};"
"Database={database_name};"
"Trusted_Connection=yes;")
</code></pre>
<p>I have tried several different ways to use my login information to connect, here is the most recent version:</p>
<pre><code>engine = create_engine(
"mssql+pyodbc://{network_user_name}:{network_pw}@{server_name}//{database_name}"
)
engine.connect()
</code></pre>
<p>Here is the error I am getting:</p>
<pre><code>InterfaceError: (pyodbc.InterfaceError) ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
(Background on this error at: http://sqlalche.me/e/rvf5)
</code></pre>
|
<p>If you are going to use Windows authentication then you simply omit the username/password part of the connection URI. This works fine for me:</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy as sa

connection_uri = (
    "mssql+pyodbc://@192.168.0.179:49242/mydb?driver=ODBC+Driver+17+for+SQL+Server"
)
engine = sa.create_engine(connection_uri)
</code></pre>
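<p>With the engine in place, the upload the question is after is then simply (the table name here is a placeholder):</p>
<pre class="lang-py prettyprint-override"><code>df.to_sql('my_table', engine, if_exists='append', index=False)
</code></pre>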
|
python|sql-server|pandas|sqlalchemy
| 4
|
374,436
| 62,357,679
|
How to iterate over multiple datasets in TensorFlow 2
|
<p>I use <strong>TensorFlow 2.2.0</strong>. In my data pipeline, I use multiple datasets to train a neural net. Something like:</p>
<pre><code># these are all tf.data.Dataset objects:
paired_data = get_dataset(id=0, repeat=False, shuffle=True)
unpaired_images = get_dataset(id=1, repeat=True, shuffle=True)
unpaired_masks = get_dataset(id=2, repeat=True, shuffle=True)
</code></pre>
<p><strong>In the training loop</strong>, I want to iterate over <code>paired_data</code> to define one epoch. But I also want to iterate over <code>unpaired_images</code> and <code>unpaired_masks</code> to optimize other objectives (classic semi-supervised learning for semantic segmentation, with a mask discriminator).</p>
<p>In order to do this, my current code looks like:</p>
<pre><code>def train_one_epoch(self, writer, step, paired_data, unpaired_images, unpaired_masks):
unpaired_images = unpaired_images.as_numpy_iterator()
unpaired_masks = unpaired_masks.as_numpy_iterator()
for images, labels in paired_data:
with tf.GradientTape() as sup_tape, \
tf.GradientTape() as gen_tape, \
tf.GradientTape() as disc_tape:
# paired data (supervised cost):
predictions = segmentor(images, training=True)
sup_loss = weighted_cross_entropy(predictions, labels)
# unpaired data (adversarial cost):
pred_real = discriminator(next(unpaired_masks), training=True)
pred_fake = discriminator(segmentor(next(unpaired_images), training=True), training=True)
gen_loss = generator_loss(pred_fake)
disc_loss = discriminator_loss(pred_real, pred_fake)
gradients = sup_tape.gradient(sup_loss, self.segmentor.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients, self.segmentor.trainable_variables))
gradients = gen_tape.gradient(gen_loss, self.segmentor.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients, self.segmentor.trainable_variables))
gradients = disc_tape.gradient(disc_loss, self.discriminator.trainable_variables)
discriminator_optimizer.apply_gradients(zip(gradients, self.discriminator.trainable_variables))
</code></pre>
<p>However, this results in the error:</p>
<pre><code>main.py:275 train_one_epoch *
unpaired_images = unpaired_images.as_numpy_iterator()
/home/venvs/conda/miniconda3/envs/tf-gpu/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py:476 as_numpy_iterator **
raise RuntimeError("as_numpy_iterator() is not supported while tracing "
RuntimeError: as_numpy_iterator() is not supported while tracing functions
</code></pre>
<p>Any idea what is wrong with this? Is this the correct way of optimizing over multiple losses/datasets in tensorflow 2?</p>
<hr />
<p><strong>I added my current solution to the problem in the comments. Any suggestion for more optimized ways is more than welcome! :)</strong></p>
|
<p>My current solution: zip the three datasets into one. Since <code>Dataset.zip</code> stops at the shortest dataset and the two unpaired datasets repeat indefinitely, one pass over the zipped dataset still corresponds to one epoch of <code>paired_data</code>:</p>
<pre><code>def train_one_epoch(self, writer, step, paired_data, unpaired_images, unpaired_masks):
# create a new dataset zipping the three original dataset objects
dataset = tf.data.Dataset.zip((paired_data, unpaired_images, unpaired_masks))
for (images, labels), unpaired_images, unpaired_masks in dataset:
# go ahead and train:
with tf.GradientTape() as tape:
#[...]
</code></pre>
|
python|tensorflow|iterator|dataset
| -1
|
374,437
| 62,269,487
|
Keras: dimension mismatch between input and output after UpSampling2D
|
<p>I'm trying to implement RDN from here <a href="https://arxiv.org/pdf/1802.08797.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.08797.pdf</a></p>
<p>As input I specify (64, 64, 3) and I expect (128, 128, 3) as output, but after compiling the model Keras says those dimensions do not match and that both tensors must be (64, 64, 3). What should I do?</p>
<p>How code looks like:</p>
<pre><code>lr = Input(shape=(64, 64, 3)) # low res
... bunch of residual blocks here, assigned to model var ...
model = Conv2D(12, 4, padding='same')(model)
# Shape at this point is (None, 64, 64, 12)
model = UpSampling2D(size=2)(model)
sr = Conv2D(3, 4, padding='same')(model) # super res
final = Model(lr, sr)
final.compile(Adam(), loss='mse')
</code></pre>
<p>After I call <code>train_on_batch</code> on <code>final</code>, I get:</p>
<pre><code>ValueError: Dimensions must be equal, but are 128 and 64 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](model_10/sr/BiasAdd, IteratorGetNext:1)' with input shapes: [32,128,128,3], [32,64,64,3].
</code></pre>
<p>Here is the model summary:</p>
<pre><code>Model: "model_10"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_14 (InputLayer) [(None, 64, 64, 3)] 0
__________________________________________________________________________________________________
conv2d_183 (Conv2D) (None, 64, 64, 64) 3136 input_14[0][0]
__________________________________________________________________________________________________
conv2d_184 (Conv2D) (None, 64, 64, 32) 32800 conv2d_183[0][0]
__________________________________________________________________________________________________
conv2d_185 (Conv2D) (None, 64, 64, 32) 16416 conv2d_184[0][0]
__________________________________________________________________________________________________
activation_106 (Activation) (None, 64, 64, 32) 0 conv2d_185[0][0]
__________________________________________________________________________________________________
concatenate_197 (Concatenate) (None, 64, 64, 64) 0 conv2d_184[0][0]
activation_106[0][0]
__________________________________________________________________________________________________
conv2d_186 (Conv2D) (None, 64, 64, 32) 32800 concatenate_197[0][0]
__________________________________________________________________________________________________
activation_107 (Activation) (None, 64, 64, 32) 0 conv2d_186[0][0]
__________________________________________________________________________________________________
concatenate_198 (Concatenate) (None, 64, 64, 64) 0 conv2d_184[0][0]
activation_107[0][0]
__________________________________________________________________________________________________
concatenate_199 (Concatenate) (None, 64, 64, 128) 0 concatenate_197[0][0]
concatenate_198[0][0]
__________________________________________________________________________________________________
conv2d_187 (Conv2D) (None, 64, 64, 32) 65568 concatenate_199[0][0]
__________________________________________________________________________________________________
activation_108 (Activation) (None, 64, 64, 32) 0 conv2d_187[0][0]
__________________________________________________________________________________________________
concatenate_200 (Concatenate) (None, 64, 64, 64) 0 conv2d_184[0][0]
activation_108[0][0]
__________________________________________________________________________________________________
concatenate_201 (Concatenate) (None, 64, 64, 128) 0 concatenate_197[0][0]
concatenate_200[0][0]
__________________________________________________________________________________________________
concatenate_202 (Concatenate) (None, 64, 64, 256) 0 concatenate_199[0][0]
concatenate_201[0][0]
__________________________________________________________________________________________________
conv2d_188 (Conv2D) (None, 64, 64, 32) 8224 concatenate_202[0][0]
__________________________________________________________________________________________________
add_41 (Add) (None, 64, 64, 32) 0 conv2d_184[0][0]
conv2d_188[0][0]
__________________________________________________________________________________________________
conv2d_189 (Conv2D) (None, 64, 64, 32) 16416 add_41[0][0]
__________________________________________________________________________________________________
activation_109 (Activation) (None, 64, 64, 32) 0 conv2d_189[0][0]
__________________________________________________________________________________________________
concatenate_203 (Concatenate) (None, 64, 64, 64) 0 add_41[0][0]
activation_109[0][0]
__________________________________________________________________________________________________
conv2d_190 (Conv2D) (None, 64, 64, 32) 32800 concatenate_203[0][0]
__________________________________________________________________________________________________
activation_110 (Activation) (None, 64, 64, 32) 0 conv2d_190[0][0]
__________________________________________________________________________________________________
concatenate_204 (Concatenate) (None, 64, 64, 64) 0 add_41[0][0]
activation_110[0][0]
__________________________________________________________________________________________________
concatenate_205 (Concatenate) (None, 64, 64, 128) 0 concatenate_203[0][0]
concatenate_204[0][0]
__________________________________________________________________________________________________
conv2d_191 (Conv2D) (None, 64, 64, 32) 65568 concatenate_205[0][0]
__________________________________________________________________________________________________
activation_111 (Activation) (None, 64, 64, 32) 0 conv2d_191[0][0]
__________________________________________________________________________________________________
concatenate_206 (Concatenate) (None, 64, 64, 64) 0 add_41[0][0]
activation_111[0][0]
__________________________________________________________________________________________________
concatenate_207 (Concatenate) (None, 64, 64, 128) 0 concatenate_203[0][0]
concatenate_206[0][0]
__________________________________________________________________________________________________
concatenate_208 (Concatenate) (None, 64, 64, 256) 0 concatenate_205[0][0]
concatenate_207[0][0]
__________________________________________________________________________________________________
conv2d_192 (Conv2D) (None, 64, 64, 32) 8224 concatenate_208[0][0]
__________________________________________________________________________________________________
add_42 (Add) (None, 64, 64, 32) 0 add_41[0][0]
conv2d_192[0][0]
__________________________________________________________________________________________________
conv2d_193 (Conv2D) (None, 64, 64, 32) 16416 add_42[0][0]
__________________________________________________________________________________________________
activation_112 (Activation) (None, 64, 64, 32) 0 conv2d_193[0][0]
__________________________________________________________________________________________________
concatenate_209 (Concatenate) (None, 64, 64, 64) 0 add_42[0][0]
activation_112[0][0]
__________________________________________________________________________________________________
conv2d_194 (Conv2D) (None, 64, 64, 32) 32800 concatenate_209[0][0]
__________________________________________________________________________________________________
activation_113 (Activation) (None, 64, 64, 32) 0 conv2d_194[0][0]
__________________________________________________________________________________________________
concatenate_210 (Concatenate) (None, 64, 64, 64) 0 add_42[0][0]
activation_113[0][0]
__________________________________________________________________________________________________
concatenate_211 (Concatenate) (None, 64, 64, 128) 0 concatenate_209[0][0]
concatenate_210[0][0]
__________________________________________________________________________________________________
conv2d_195 (Conv2D) (None, 64, 64, 32) 65568 concatenate_211[0][0]
__________________________________________________________________________________________________
activation_114 (Activation) (None, 64, 64, 32) 0 conv2d_195[0][0]
__________________________________________________________________________________________________
concatenate_212 (Concatenate) (None, 64, 64, 64) 0 add_42[0][0]
activation_114[0][0]
__________________________________________________________________________________________________
concatenate_213 (Concatenate) (None, 64, 64, 128) 0 concatenate_209[0][0]
concatenate_212[0][0]
__________________________________________________________________________________________________
concatenate_214 (Concatenate) (None, 64, 64, 256) 0 concatenate_211[0][0]
concatenate_213[0][0]
__________________________________________________________________________________________________
conv2d_196 (Conv2D) (None, 64, 64, 32) 8224 concatenate_214[0][0]
__________________________________________________________________________________________________
add_43 (Add) (None, 64, 64, 32) 0 add_42[0][0]
conv2d_196[0][0]
__________________________________________________________________________________________________
concatenate_215 (Concatenate) (None, 64, 64, 96) 0 add_41[0][0]
add_42[0][0]
add_43[0][0]
__________________________________________________________________________________________________
conv2d_197 (Conv2D) (None, 64, 64, 64) 6208 concatenate_215[0][0]
__________________________________________________________________________________________________
conv2d_198 (Conv2D) (None, 64, 64, 64) 65600 conv2d_197[0][0]
__________________________________________________________________________________________________
add_44 (Add) (None, 64, 64, 64) 0 conv2d_198[0][0]
conv2d_183[0][0]
__________________________________________________________________________________________________
conv2d_199 (Conv2D) (None, 64, 64, 64) 65600 add_44[0][0]
__________________________________________________________________________________________________
activation_115 (Activation) (None, 64, 64, 64) 0 conv2d_199[0][0]
__________________________________________________________________________________________________
conv2d_200 (Conv2D) (None, 64, 64, 32) 32800 activation_115[0][0]
__________________________________________________________________________________________________
activation_116 (Activation) (None, 64, 64, 32) 0 conv2d_200[0][0]
__________________________________________________________________________________________________
conv2d_201 (Conv2D) (None, 64, 64, 12) 6156 activation_116[0][0]
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D) (None, 128, 128, 12) 0 conv2d_201[0][0]
__________________________________________________________________________________________________
sr (Conv2D) (None, 128, 128, 3) 579 up_sampling2d_3[0][0]
==================================================================================================
Total params: 581,903
Trainable params: 581,903
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
|
<p>If we look at the error message again:</p>
<pre><code>ValueError: Dimensions must be equal, but are 128 and 64 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](model_10/sr/BiasAdd, IteratorGetNext:1)' with input shapes: [32,128,128,3], [32,64,64,3].
</code></pre>
<p>We'll realize that it's not saying the input and output shapes of your model must be the same; it says you are passing an incorrect label (target) size to the model's loss function (<code>mean_squared_error</code>). <strong>The output shape of your model is (128, 128, 3), but you are passing target data with shape (64, 64, 3).</strong> So the problem is not in the provided model architecture, but in the data you pass to <code>model.train_on_batch()</code>. Maybe by mistake you are passing x data instead of y as the target?</p>
<p><strong>So to fix the problem, you should pass correct y data, with shape (128, 128, 3), to the model.</strong></p>
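<p>For instance, a minimal shape check with random data (a sketch; it assumes the compiled model from the question, named <code>final</code>):</p>
<pre><code>import numpy as np

x = np.random.rand(32, 64, 64, 3).astype('float32')    # low-res input
y = np.random.rand(32, 128, 128, 3).astype('float32')  # high-res target
loss = final.train_on_batch(x, y)  # works; fails if y has shape (32, 64, 64, 3)
</code></pre>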
|
python|tensorflow|keras
| 0
|
374,438
| 62,112,343
|
Problem in creating a new column using for loop
|
<p>I have to create a new column 'Action' in a dataframe whose values are:
1 if the next day's Close Price is greater than the present day's,
-1 if the next day's Close Price is less than the present day's.
That is,
Action[i] = 1 if Close Price[i+1] > Close Price[i],
Action[i] = -1 if Close Price[i+1] < Close Price[i].</p>
<p>I have used the following code: </p>
<pre><code>dt = pd.read_csv("C:\Subhro\ML_Internship\HDFC_Test.csv", sep=',',header=0)
df = pd.DataFrame(dt)
for i in df.index:
if(df['Close Price'][i+1]>df['Close Price'][i]):
df['Action'][i]=1
elif(df['Close Price'][i+1]<df['Close Price'][i]):
df['Action'][i]=-1
</code></pre>
<p>followed by <code>print(df)</code>. But I am getting an error,
<code>KeyError: 'Action'</code>,
at the line
<code>df['Action'][i]=1</code>.</p>
<p>Please help me out</p>
|
<p>You are getting the key error because you don't have a column called <code>action</code>. Any of the following before the loop will resolve the error:</p>
<pre><code>df['Action'] = 0
</code></pre>
<p>or</p>
<pre><code>df['Action'] = np.nan
</code></pre>
<p>However, you will get warnings because of the way you are assigning the cell values. <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">(See here)</a></p>
<p>It is recommended that you instead use e.g.</p>
<pre><code>df.loc[i, "Action"] = 1
</code></pre>
<p>Note that with this method, you won't even need to create an empty "Action" column before the loop.</p>
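<p>As an aside, this particular task doesn't need a loop at all. A vectorized sketch (assuming the column is named <code>Close Price</code>, as in the question):</p>
<pre><code>import numpy as np

# +1 where the next close is higher, -1 where lower,
# 0 where equal (and NaN for the last row, which has no next day)
df['Action'] = np.sign(df['Close Price'].shift(-1) - df['Close Price'])
</code></pre>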
|
pandas|dataframe
| 0
|
374,439
| 62,343,093
|
Remove empty string from a list of strings
|
<p>I have a column in my dataframe that contains a list of names with such a structure:</p>
<pre><code>df['name'] = [
    ['anna', 'karen', ''],
    ['', 'peter', 'mark', 'john'],
]
</code></pre>
<p>I want to get rid of the empty strings. I tried it with a list comprehension:</p>
<pre><code>[[name for name in df['name'] if name.strip()] for df['name'] in df]
</code></pre>
<p>But that doesn't work at all; is there a way to do it? I also used the pandas <code>replace</code> method
to replace the empty strings with NaN, but that doesn't work either, as it always throws a key error...</p>
<pre><code>df['name'].replace('', np.nan)
</code></pre>
|
<p>You can do this using the filter function</p>
<pre><code>df['name'] = [list(filter(None, sublist)) for sublist in df['name']]
</code></pre>
<p>Note that <code>filter(None, ...)</code> drops only empty strings; if some entries might be whitespace-only, keep the <code>name.strip()</code> test from your attempt in a nested comprehension instead.</p>
|
python|pandas
| 3
|
374,440
| 62,471,054
|
Conda environment does not include pandas
|
<p>I have created a conda environment:</p>
<pre><code>conda create -n nnlibs
</code></pre>
<p>and installed some libraries in it. While trying to run some code, I found out there's no pandas installed in this environment.</p>
<pre><code>>>>import pandas
No module named 'pandas'
</code></pre>
<p>and running <code>conda list | grep pandas</code> returns nothing. My <code>base</code> environment however seems to have a working pandas. Other libraries like Numpy and Scipy are intact on both environments.</p>
<p>Is there a way to fix this without re-building the environment or reinstalling the packages? I have installed a lot of libraries and I'd rather fix this environment if possible.</p>
|
<p>I managed to fix the issue by installing the library manually:</p>
<pre><code>conda install -c anaconda pandas
</code></pre>
<p>As @cel explained in the comments, this happened because <code>conda create -n nnlibs</code> creates an empty environment. While I managed to solve the problem by installing each package manually, a better way (as @cel suggested) would be to run:</p>
<pre><code>conda install -n nnlibs anaconda
</code></pre>
|
python|pandas|conda
| 0
|
374,441
| 62,339,563
|
Find the highest peak between width range and window integral
|
<p>I have frequency and PSD arrays in which I want to find the highest peak within a certain Hz range. From the highest peak, I construct a window of 0.015 Hz and integrate the area under it.</p>
<p><strong>Find Peak and Integral</strong></p>
<pre><code>freq=np.array([0.0, 0.0009765625, 0.001953125, 0.0029296875, 0.00390625, 0.0048828125, 0.005859375, 0.0068359375, 0.0078125, 0.0087890625, 0.009765625, 0.0107421875, 0.01171875, 0.0126953125, 0.013671875, 0.0146484375, 0.015625, 0.0166015625, 0.017578125, 0.0185546875, 0.01953125, 0.0205078125, 0.021484375, 0.0224609375, 0.0234375, 0.0244140625, 0.025390625, 0.0263671875, 0.02734375, 0.0283203125, 0.029296875, 0.0302734375, 0.03125, 0.0322265625, 0.033203125, 0.0341796875, 0.03515625, 0.0361328125, 0.037109375, 0.0380859375, 0.0390625, 0.0400390625, 0.041015625, 0.0419921875, 0.04296875, 0.0439453125, 0.044921875, 0.0458984375, 0.046875, 0.0478515625, 0.048828125, 0.0498046875, 0.05078125, 0.0517578125, 0.052734375, 0.0537109375, 0.0546875, 0.0556640625, 0.056640625, 0.0576171875, 0.05859375, 0.0595703125, 0.060546875, 0.0615234375, 0.0625, 0.0634765625, 0.064453125, 0.0654296875, 0.06640625, 0.0673828125, 0.068359375, 0.0693359375, 0.0703125, 0.0712890625, 0.072265625, 0.0732421875, 0.07421875, 0.0751953125, 0.076171875, 0.0771484375, 0.078125, 0.0791015625, 0.080078125, 0.0810546875, 0.08203125, 0.0830078125, 0.083984375, 0.0849609375, 0.0859375, 0.0869140625, 0.087890625, 0.0888671875, 0.08984375, 0.0908203125, 0.091796875, 0.0927734375, 0.09375, 0.0947265625, 0.095703125, 0.0966796875, 0.09765625, 0.0986328125, 0.099609375, 0.1005859375, 0.1015625, 0.1025390625, 0.103515625, 0.1044921875, 0.10546875, 0.1064453125, 0.107421875, 0.1083984375, 0.109375, 0.1103515625, 0.111328125, 0.1123046875, 0.11328125, 0.1142578125, 0.115234375, 0.1162109375, 0.1171875, 0.1181640625, 0.119140625, 0.1201171875, 0.12109375, 0.1220703125, 0.123046875, 0.1240234375, 0.125, 0.1259765625, 0.126953125, 0.1279296875, 0.12890625, 0.1298828125, 0.130859375, 0.1318359375, 0.1328125, 0.1337890625, 0.134765625, 0.1357421875, 0.13671875, 0.1376953125, 0.138671875, 0.1396484375, 0.140625, 0.1416015625, 0.142578125, 0.1435546875, 0.14453125, 0.1455078125, 0.146484375, 0.1474609375, 0.1484375, 0.1494140625, 0.150390625, 0.1513671875, 0.15234375, 0.1533203125, 0.154296875, 0.1552734375, 0.15625, 0.1572265625, 0.158203125, 0.1591796875, 0.16015625, 0.1611328125, 0.162109375, 0.1630859375, 0.1640625, 0.1650390625, 0.166015625, 0.1669921875, 0.16796875, 0.1689453125, 0.169921875, 0.1708984375, 0.171875, 0.1728515625, 0.173828125, 0.1748046875, 0.17578125, 0.1767578125, 0.177734375, 0.1787109375, 0.1796875, 0.1806640625, 0.181640625, 0.1826171875, 0.18359375, 0.1845703125, 0.185546875, 0.1865234375, 0.1875, 0.1884765625, 0.189453125, 0.1904296875, 0.19140625, 0.1923828125, 0.193359375, 0.1943359375, 0.1953125, 0.1962890625, 0.197265625, 0.1982421875, 0.19921875, 0.2001953125, 0.201171875, 0.2021484375, 0.203125, 0.2041015625, 0.205078125, 0.2060546875, 0.20703125, 0.2080078125, 0.208984375, 0.2099609375, 0.2109375, 0.2119140625, 0.212890625, 0.2138671875, 0.21484375, 0.2158203125, 0.216796875, 0.2177734375, 0.21875, 0.2197265625, 0.220703125, 0.2216796875, 0.22265625, 0.2236328125, 0.224609375, 0.2255859375, 0.2265625, 0.2275390625, 0.228515625, 0.2294921875, 0.23046875, 0.2314453125, 0.232421875, 0.2333984375, 0.234375, 0.2353515625, 0.236328125, 0.2373046875, 0.23828125, 0.2392578125, 0.240234375, 0.2412109375, 0.2421875, 0.2431640625, 0.244140625, 0.2451171875, 0.24609375, 0.2470703125, 0.248046875, 0.2490234375, 0.25, 0.2509765625, 0.251953125, 0.2529296875, 0.25390625, 0.2548828125, 0.255859375, 0.2568359375, 0.2578125, 0.2587890625, 0.259765625, 0.2607421875, 0.26171875, 0.2626953125, 0.263671875, 
0.2646484375, 0.265625, 0.2666015625, 0.267578125, 0.2685546875, 0.26953125, 0.2705078125, 0.271484375, 0.2724609375, 0.2734375, 0.2744140625, 0.275390625, 0.2763671875, 0.27734375, 0.2783203125, 0.279296875, 0.2802734375, 0.28125, 0.2822265625, 0.283203125, 0.2841796875, 0.28515625, 0.2861328125, 0.287109375, 0.2880859375, 0.2890625, 0.2900390625, 0.291015625, 0.2919921875, 0.29296875, 0.2939453125, 0.294921875, 0.2958984375, 0.296875, 0.2978515625, 0.298828125, 0.2998046875, 0.30078125, 0.3017578125, 0.302734375, 0.3037109375, 0.3046875, 0.3056640625, 0.306640625, 0.3076171875, 0.30859375, 0.3095703125, 0.310546875, 0.3115234375, 0.3125, 0.3134765625, 0.314453125, 0.3154296875, 0.31640625, 0.3173828125, 0.318359375, 0.3193359375, 0.3203125, 0.3212890625, 0.322265625, 0.3232421875, 0.32421875, 0.3251953125, 0.326171875, 0.3271484375, 0.328125, 0.3291015625, 0.330078125, 0.3310546875, 0.33203125, 0.3330078125, 0.333984375, 0.3349609375, 0.3359375, 0.3369140625, 0.337890625, 0.3388671875, 0.33984375, 0.3408203125, 0.341796875, 0.3427734375, 0.34375, 0.3447265625, 0.345703125, 0.3466796875, 0.34765625, 0.3486328125, 0.349609375, 0.3505859375, 0.3515625, 0.3525390625, 0.353515625, 0.3544921875, 0.35546875, 0.3564453125, 0.357421875])
psd=np.array([2117.8273051302376, 4236.140291493467, 4237.59724589046, 4240.0252055507635, 4243.423723990633, 4247.792176169334, 4253.129758525505, 4259.43548902374, 4266.708207211219, 4274.946574284208, 4284.149073164268, 4294.314008583894, 4305.439507181355, 4317.523517604426, 4330.563810622718, 4344.557979248252, 4359.503438863938, 4375.397427359547, 4392.237005274798, 4410.019055949134, 4428.740285677746, 4448.397223873376, 4468.986223233491, 4490.503459912274, 4512.944933697013, 4536.306468188381, 4560.583710984068, 4585.772133865365, 4611.867032986107, 4638.863529063539, 4666.756567570628, 4695.540918929281, 4725.211178704107, 4755.761767796138, 4787.186932636233, 4819.480745377575, 4852.637104087019, 4886.649732934814, 4921.512182382421, 4957.2178293680645, 4993.75987748977, 5031.131357185633, 5069.325125911084, 5108.333868313019, 5148.150096400635, 5188.766149712915, 5230.174195482698, 5272.3662287973875, 5315.3340727562945, 5359.069378624859, 5403.563625985769, 5448.808122887352, 5494.794005989443, 5541.512240707146, 5588.953621352853, 5637.108771277072, 5685.968143008554, 5735.522018394388, 5785.760508740738, 5836.673554954971, 5888.250927690054, 5940.482227492049, 5993.356884951789, 6046.864160861696, 6100.993146378917, 6155.732763195987, 6211.071763720252, 6266.998731263484, 6323.502080242999, 6380.570056395907, 6438.190737007983, 6496.352031158834, 6555.041679985071, 6614.247256963332, 6673.9561682149115, 6734.15565283404, 6794.832783241735, 6855.974465567324, 6917.567440059729, 6979.59828153076, 7042.0533998326055, 7104.919040371838, 7168.181284662325, 7231.826050919431, 7295.8390946979425, 7360.206009576277, 7424.91222788951, 7489.9430215137345, 7555.283502704499, 7620.918624991892, 7686.83318413497, 7753.0118191382635, 7819.43901333309, 7886.099095526382, 7952.976241219845, 8020.054473902169, 8087.317666417102, 8154.749542410103, 8222.333677856454, 8290.053502673405, 8357.892302419297, 8425.833220082226, 8493.85925796106, 8561.953279641362, 8630.098012068998, 8698.276047723873, 8766.469846896473, 8834.661740069614, 8902.833930407936, 8970.968496357436, 9039.047394357443, 9107.05246166722, 9174.96541930941, 9242.767875132417, 9310.441326993718, 9377.967166066008, 9445.326680268068, 9512.501057821986, 9579.471390938439, 9646.218679631478, 9712.72383566423, 9778.967686626835, 9844.930980147667, 9910.59438823897, 9975.938511777736, 10040.943885122573, 10105.590980867222, 10169.860214731156, 10233.731950587562, 10297.186505628928, 10360.204155670208, 10422.765140589436, 10484.849669905441, 10546.43792849227, 10607.510082429611, 10668.046284988406, 10728.02668275076, 10787.431421862855, 10846.240654419666, 10904.434544979895, 10961.993277209445, 11018.897060651627, 11075.126137621917, 11130.660790225245, 11185.481347493138, 11239.56819263836, 11292.901770424143, 11345.462594645065, 11397.231255716446, 11448.18842836896, 11498.314879444895, 11547.591475792453, 11595.999192254161, 11643.519119745413, 11690.13247341888, 11735.820600910438, 11780.564990662067, 11824.347280316962, 11867.149265182015, 11908.952906752595, 11949.740341294459, 11989.493888477404, 12028.196060055176, 12065.829568585987, 12102.377336187863, 12137.822503322863, 12172.148437604206, 12205.338742619902, 12237.377266766951, 12268.248112089314, 12297.935643113355, 12326.424495674115, 12353.699585725537, 12379.74611812801, 12404.549595406235, 12428.095826470408, 12450.3709352937, 12471.361369538983, 12491.053909127475, 12509.435674742223, 12526.49413625906, 12542.217121097825, 12556.59282248638, 
12569.609807630239, 12581.257025780276, 12591.523816191331, 12600.399915964148, 12607.875467763452, 12613.941027404748, 12618.587571302576, 12621.806503772925, 12623.589664182578, 12623.92933393827, 12622.81824330844, 12620.249578070656, 12616.216985977637, 12610.71458303502, 12603.736959584136, 12595.279186183037, 12585.33681927916, 12573.905906667287, 12560.982992726325, 12546.56512342876, 12530.6498511167, 12513.235239038597, 12494.319865640842, 12473.902828608632, 12451.983748650691, 12428.562773022584, 12403.640578783443, 12377.218375781365, 12349.297909362664, 12319.881462800484, 12288.971859438554, 12256.572464545938, 12222.687186878935, 12187.320479946487, 12150.477342975766, 12112.16332157456, 12072.384508087836, 12031.147541645474, 11988.459607898922, 11944.328438444576, 11898.762309931815, 11851.770042854136, 11803.361000021965, 11753.545084715939, 11702.332738519755, 11649.734938832122, 11595.76319605723, 11540.42955047392, 11483.746568783563, 11425.727340337262, 11366.38547304298, 11305.735088953863, 11243.790819538764, 11180.567800636869, 11116.081667098035, 11050.348547111098, 10983.385056222602, 10915.208291048511, 10845.835822681987, 10775.285689800387, 10703.576391474995, 10630.726879687287, 10556.756551555709, 10481.685241277257, 10405.533211788425, 10328.321146150294, 10250.070138662817, 10170.801685713559, 10090.537676366457, 10009.300382696316, 9927.112449875054, 9843.996886015937, 9759.97705178216, 9675.07664976651, 9589.319713648907, 9502.730597138887, 9415.333962710305, 9327.154770135674, 9238.218264827734, 9148.549965996126, 9058.175654627017, 8967.121361293937, 8875.413353807926, 8783.078124715606, 8690.14237865355, 8596.633019567777, 8502.577137807053, 8408.00199709902, 8312.93502141808, 8217.403781754238, 8121.435982792007, 8025.059449508735, 7928.302113701657, 7831.1920004531075, 7733.757214543326, 7636.025926820366, 7538.026360536673, 7439.786777661854, 7341.335465181225, 7242.700721389741, 7143.910842190874, 7044.994107410017, 6945.978767131936, 6846.893028071855, 6747.765039989605, 6648.622882156285, 6549.494549882854, 6450.407941119925, 6351.390843138042, 6252.470919297591, 6153.675695917459, 6055.032549251374, 5956.56869258081, 5858.311163433282, 5760.286810934598, 5662.522283303618, 5565.044015497904, 5467.87821701848, 5371.050859881796, 5274.587666766838, 5178.514099345106, 5082.855346801102, 4987.636314550716, 4892.881613164738, 4798.6155475045625, 4704.862106076914, 4611.64495061424, 4518.987405887232, 4426.912449755688, 4335.442703463726, 4244.600422185165, 4154.407485824646, 4064.885390079761, 3976.0552377694416, 3887.937730433306, 3800.553160206767, 3713.921401976139, 3628.0619058180105, 3542.9936897266894, 3458.7353326334432, 3375.304967720854, 3292.720276035498, 3210.9984804017736, 3130.156339639574, 3050.2101430880944, 2971.1757054379686, 2893.0683618735297, 2815.9029635268207, 2739.693873244689, 2664.4549616700324, 2590.1996036380365, 2516.9406748879455, 2444.690549090702, 2373.4610951925006, 2303.263675074042, 2234.109141525068, 2166.0078365334807, 2098.969589888064, 2033.003718093685, 1968.1190235974905, 1904.323794324486, 1841.6258035205565, 1780.032309900832, 1719.55005810103, 1660.1852794291933, 1601.943692915033, 1544.8305066538571, 1488.8504194418663, 1434.0076226993917, 1380.3058026784188, 1327.7481429506104, 1276.3373271717776, 1226.0755421186095, 1176.9644809932681, 1129.0053469913032, 1082.1988571281443, 1036.545246319275, 992.0442717090511, 948.6952172429552, 906.4968984779233, 865.4476676252818, 825.5454188206523, 
786.7875936150785, 749.1711866815149])
peaks, _ = find_peaks(psd)
intervalPeak = []
for peak in peaks:
f = freq[peak]
if f >= 0.04 and f <= 0.26:
intervalPeak.append(peak)
print('intervalPeak: ', intervalPeak)
</code></pre>
<p><a href="https://i.stack.imgur.com/aizp4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aizp4.png" alt="Find the highest peak and integral"></a></p>
<p><strong>Problems</strong></p>
<blockquote>
<p>(Solved by @tstanisl) Find peaks using the PSD array, then restrict to the frequency interval afterwards.</p>
</blockquote>
<hr>
<p><strong>My Solution</strong></p>
<pre><code>import numpy as np
from scipy.signal import find_peaks

hzWindow = 0.015 / 2  # half of the 0.015 Hz window, applied on each side of the peak
peaks, _ = find_peaks(psd)
intervalPeak = []
totalPeakPower = 0
for peakIndex in peaks:
f = freq[peakIndex]
if f >= 0.04 and f <= 0.26:
        beginFreq = freq[peakIndex] - hzWindow
        endFreq = freq[peakIndex] + hzWindow
rangePeakIndex = np.where(np.logical_and(freq>=beginFreq, freq<=endFreq))
rangePeakY = []
rangePeakX = []
for i in rangePeakIndex:
rangePeakY.append(psd[i])
rangePeakX.append(freq[i])
peakPower = np.trapz(rangePeakY, x=rangePeakX)[0]
totalPeakPower = totalPeakPower + peakPower
</code></pre>
|
<p>First, you should look for a peak in <code>psd</code>, not <code>freq</code>.
Next, the <code>psd</code> from your (original) example was a growing sequence, thus there was no peak.
Please update your question.</p>
<p>After adding a dummy value <code>0</code> at the end I got:</p>
<pre><code>psd = np.array([26.88687233, 82.36241905, 168.44720179, 0])
peaks, _ = find_peaks(psd)
print(peaks)
</code></pre>
<p>got</p>
<pre><code>[2]
</code></pre>
<p>which nicely corresponds to the peak at 168.44...</p>
|
python|numpy|scipy
| 1
|
374,442
| 62,411,635
|
Cannot load model weights in TensorFlow 2
|
<p>I cannot load model weights after saving them in TensorFlow 2.2. Weights appear to be saved correctly (I think), however, I fail to load the pre-trained model.</p>
<p>My current code is:</p>
<pre><code>segmentor = sequential_model_1()
discriminator = sequential_model_2()
def save_model(ckp_dir):
# create directory, if it does not exist:
utils.safe_mkdir(ckp_dir)
# save weights
segmentor.save_weights(os.path.join(ckp_dir, 'checkpoint-segmentor'))
discriminator.save_weights(os.path.join(ckp_dir, 'checkpoint-discriminator'))
def load_pretrained_model(ckp_dir):
try:
segmentor.load_weights(os.path.join(ckp_dir, 'checkpoint-segmentor'), skip_mismatch=True)
discriminator.load_weights(os.path.join(ckp_dir, 'checkpoint-discriminator'), skip_mismatch=True)
print('Loading pre-trained model from: {0}'.format(ckp_dir))
except ValueError:
print('No pre-trained model available.')
</code></pre>
<p>Then I have the training loop:</p>
<pre><code># training loop:
for epoch in range(num_epochs):
for image, label in dataset:
train_step()
# save best model I find during training:
if this_is_the_best_model_on_validation_set():
save_model(ckp_dir='logs_dir')
</code></pre>
<p>And then, at the end of the training "for loop", I want to load the best model and do a test with it. Hence, I run:</p>
<pre><code># load saved model and do a test:
load_pretrained_model(ckp_dir='logs_dir')
test()
</code></pre>
<p>However, this results in a <code>ValueError</code>. I checked the directory where the weights should be saved, and there they are!</p>
<p>Any idea what is wrong with my code? Am I loading the weights incorrectly? </p>
<p>Thank you!</p>
|
<p>Ok here is your problem - the <code>try-except</code> block you have is obscuring the real issue. Removing it gives the <code>ValueError</code>:</p>
<p><code>ValueError: When calling model.load_weights, skip_mismatch can only be set to True when by_name is True.</code></p>
<p>There are two ways to mitigate this - you can either call <code>load_weights</code> with <code>by_name=True</code>, or remove <code>skip_mismatch=True</code> depending on your needs. Either case works for me when testing your code.</p>
<p>Another consideration is that when you store both the discriminator and segmentor checkpoints in the same log directory, you overwrite the <code>checkpoint</code> file each time. This file contains two strings that give the path to the specific model checkpoint files. Since you save the discriminator second, every time this file will point to the discriminator with no reference to the segmentor. You can mitigate this by storing each model in two subdirectories of the log directory instead, i.e.</p>
<pre><code>logs_dir/
+ discriminator/
+ checkpoint
+ ...
+ segmentor/
+ checkpoint
+ ...
</code></pre>
<p>Although, in its current state, your code would still work even in this case.</p>
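<p>As a rough sketch (reusing the <code>segmentor</code>, <code>discriminator</code> and <code>utils.safe_mkdir</code> names from your code), saving into per-model subdirectories could look like:</p>
<pre><code>import os

def save_model(ckp_dir):
    seg_dir = os.path.join(ckp_dir, 'segmentor')
    disc_dir = os.path.join(ckp_dir, 'discriminator')
    # one subdirectory per model, so each keeps its own 'checkpoint' bookkeeping file
    utils.safe_mkdir(seg_dir)
    utils.safe_mkdir(disc_dir)
    segmentor.save_weights(os.path.join(seg_dir, 'checkpoint-segmentor'))
    discriminator.save_weights(os.path.join(disc_dir, 'checkpoint-discriminator'))
</code></pre>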
|
tensorflow|save|load
| 3
|
374,443
| 62,376,476
|
Combine multiple rows into one row based on Column values in pandas
|
<p>I am trying to parse a csv file, which I have almost done, but I am stuck at one point. <strong>I want to combine each row with the previous row where Column 1 of the previous row is not null</strong>. I have data in this format.</p>
<pre><code>C1 C2 C3 C4 C5
1001 1S30 5:00:00 MP GL
NaN 1M94 9:06:00 GL MP
1101 1P1 6:35:00 MP Vic
NaN 9E06 07:02:00 Vic N
NaN 9M08 10:02:00 N Liv
NaN 9E13 13:26:00 Liv Vic
NaN 1P26 4:40:00 Vic MP
</code></pre>
<p><strong>I want to combine rows like in the below given format</strong></p>
<p><a href="https://i.stack.imgur.com/2pU75.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2pU75.png" alt="enter image description here"></a></p>
<p>I am stuck here; any help would be appreciated.</p>
|
<p>Update:</p>
<pre><code>df.groupby(df['C1'].ffill()).apply(lambda x: x.stack().reset_index())[0].unstack().reset_index()
</code></pre>
<p>Output:</p>
<pre><code> C1 0 1 2 3 4 5 6 7 8 ... 11 \
0 1001.0 1001 1S30 5:00:00 MP GL 1M94 9:06:00 GL MP ... NaN
1 1101.0 1101 1P1 6:35:00 MP Vic 9E06 07:02:00 Vic N ... N
12 13 14 15 16 17 18 19 20
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 Liv 9E13 13:26:00 Liv Vic 1P26 4:40:00 Vic MP
[2 rows x 22 columns]
</code></pre>
<hr>
<p>Try:</p>
<pre><code>df.groupby(df['C1'].ffill()).apply(pd.melt, id_vars='C1')['value'].unstack().reset_index()
</code></pre>
<p>Output:</p>
<pre><code> C1 0 1 2 3 4 5 6 7 \
0 1001.0 1S30 1M94 5:00:00 9:06:00 MP GL GL MP
1 1101.0 1P1 9E06 9M08 9E13 1P26 6:35:00 07:02:00 10:02:00
8 ... 10 11 12 13 14 15 16 17 18 19
0 NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 13:26:00 ... MP Vic N Liv Vic Vic N Liv Vic MP
[2 rows x 21 columns]
</code></pre>
|
python|pandas
| 0
|
374,444
| 62,202,915
|
How to convert Pandas read excel dataframe to a list in Python?
|
<p>I'm reading a single column from an Excel file using Pandas:</p>
<pre><code>df = pandas.read_excel(file_location, usecols=columnA)
</code></pre>
<p>and I want to convert that dataframe (df) into a list. I'm trying to do the following:</p>
<pre><code>listA = df.values()
</code></pre>
<p>but I'm getting the following error: <em>TypeError: 'numpy.ndarray' object is not callable</em>. What can I do to solve this error or is there any other way I can convert that dataframe into a list? Thank you!</p>
|
<p>Remove the parentheses from your statement. With the parens there, Python treats <code>values</code> like a function call, but it is an attribute:</p>
<pre><code>listA = df.values # note no parenthesis after values
</code></pre>
<p>Here are a couple ideas. You should probably access the column by name</p>
<pre><code>In [2]: import pandas as pd
In [3]: df = pd.DataFrame({'A':[1,5,99]})
In [4]: df
Out[4]:
A
0 1
1 5
2 99
In [5]: df.values
Out[5]:
array([[ 1],
[ 5],
[99]])
In [6]: my_list = list(df['A'])
In [7]: my_list
Out[7]: [1, 5, 99]
</code></pre>
|
python|excel|pandas
| 2
|
374,445
| 62,339,302
|
Last day of customer activity (Python DataFrame)
|
<p>I have a DataFrame like this:</p>
<pre><code>Customer_id Date Turnover
1 2020.6.1 123
1 2020.6.2 434
1 2020.6.3 2656
1 2020.6.4 121
1 2020.6.5 2412421
2 2020.6.1 2312
2 2020.6.2 213
2 2020.6.3 5787
3 2020.6.1 237
3 2020.6.2 223
3 2020.6.3 999
3 2020.6.4 0
</code></pre>
<p>And I need to get the last <code>Date</code> for each customer. I feel there is must be something like <code>df.groupby</code> and <code>df.max()</code>, but I haven't still figured out here. Help, please :)</p>
|
<p>Using <code>pandas.DataFrame.groupby</code> with <code>max</code>:</p>
<pre><code>new_df = df.groupby("Customer_id")["Date"].max()
print(new_df)
</code></pre>
<p>Output:</p>
<pre><code>Customer_id
1 2020.6.5
2 2020.6.3
3 2020.6.4
Name: Date, dtype: object
</code></pre>
<p>To be extra careful, use <code>pandas.to_datetime</code> beforehand, (such as to avoid <code>max("2020.06.10", "2020.6.1") == "2020.6.1"</code>):</p>
<pre><code>df["Date"] = pd.to_datetime(df["Date"])
new_df = df.groupby("Customer_id")["Date"].max()
print(new_df)
</code></pre>
<p>Output:</p>
<pre><code>Customer_id
1 2020-06-05
2 2020-06-03
3 2020-06-04
Name: Date, dtype: datetime64[ns]
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
374,446
| 62,230,798
|
How to measure accuracy for each target when some of the targets are NaNs in a TensorFlow model
|
<p>I have a dataset about 400 variables and 5 target columns. In many of the rows, only a few of the Y values are present, i.e. I have some unknown (NaNs) in the targets. I'm applying a custom loss function through TF to make sure that loss is only applied to predictions of Y values where there is a Y value to compare to. </p>
<pre><code>def nan_friendly_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
valids = tf.math.is_finite(
y_true
)
#Only use y's that aren't NaN.
y_true = y_true[valids] #tf.print(y_true)
y_pred = y_pred[valids] #tf.print(y_pred)
return K.sum(K.square(y_pred - y_true))
</code></pre>
<p>For instance, if y_true is [1, 2, NaN, NaN, 5] and y_pred is [2, 2, 3, 4, 3], the loss would be 5 (1 + 0 + 4, with the NaN positions skipped).</p>
<p>Now I'm trying to get a grasp on how well the model is performing on each of the targets. How can I make an accuracy function (mse for instance) so that, for each target Y, it skips the row when y_true is NaN when calculating average accuracies for the dataset? So far, the built in functions are just getting nan results, leading me to believe that it is unable to disregard NaN values in y_true.
For example, the Ys below</p>
<pre><code>y_true = [[1, 2, NaN, NaN, 5], [3, 4, 5, 6, 7]]
y_pred = [[2, 2, 3, 4, 4], [3, 4, 3, 5, 5]]
</code></pre>
<p>should give the following result:</p>
<pre><code>mse_accuracies = [0.5, 0, 4, 1, 2.5]
</code></pre>
|
<p>When we need to use a loss function (or metric) other than the ones available , we can construct our own custom function and pass to <code>model.compile</code>.</p>
<p>So define the custom metric function as below. It is essentially your <code>nan_friendly_loss</code>, except that it takes the <code>mean</code> instead of the <code>sum</code>:</p>
<pre><code>def custom_metric(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
valids = tf.math.is_finite(y_true)
#Only use y's that aren't NaN.
y_true = y_true[valids] #tf.print(y_true)
y_pred = y_pred[valids] #tf.print(y_pred)
return K.mean(K.square(y_pred - y_true))
</code></pre>
<p>Modify your compile statement as below -</p>
<pre><code>model.compile(optimizer='rmsprop',
loss = nan_friendly_loss,
metrics=[custom_metric])
</code></pre>
<p>Now, your model's <code>custom_metric</code> will serve your requirement.</p>
<p>Hope this answers your question. Happy Learning.</p>
|
python|tensorflow|keras
| 0
|
374,447
| 62,360,455
|
ModuleNotFoundError: No module named 'sklearn.svm._classes'
|
<p>I have created a model for breast cancer prediction. Now I want to deploy my model on a UI, for that I am using flask. To connect the model, I made .pkl file of the model but when I am trying to read the file through my app.py, it is giving me an error: ModuleNotFoundError: No module named 'sklearn.svm._classes'
What should I do in order to run my app.py?</p>
<p>Here is my app.py:</p>
<pre><code>from flask import Flask,send_from_directory,render_template, request, url_for, redirect
from flask_restful import Resource, Api
from package.patient import Patients, Patient
from package.doctor import Doctors, Doctor
from package.appointment import Appointments, Appointment
from package.common import Common
import json
import pickle
import numpy as np
with open('config.json') as data_file:
config = json.load(data_file)
app = Flask(__name__, static_url_path='')
api = Api(app)
api.add_resource(Patients, '/patient')
api.add_resource(Patient, '/patient/<int:id>')
api.add_resource(Doctors, '/doctor')
api.add_resource(Doctor, '/doctor/<int:id>')
api.add_resource(Appointments, '/appointment')
api.add_resource(Appointment, '/appointment/<int:id>')
api.add_resource(Common, '/common')
model_breast=pickle.load(open('model_breast.pkl','rb'))
# Routes
@app.route('/')
def index():
return app.send_static_file('index.html')
@app.route('/predict',methods=['POST','GET'])
def predict():
int_features=[int(x) for x in request.form.values()]
final=[np.array(int_features)]
print(int_features)
print(final)
prediction=model_breast.predict(final)
output='{0:.{1}f}'.format(prediction[0][1], 2)
if output==str(4):
return render_template('../static/form.html',pred='The cancer type is MALIGNANT'
'\n This particular cell is cancerous. You belong to class: {}'.format(output))
else:
return render_template('../static/form.html',pred='The cancer type is BENIGN'
'\n This particular cell is NOT cancerous. You belong to class: {}'.format(output))
if __name__ == '__main__':
app.run(debug=True,host=config['host'],port=config['port'])
</code></pre>
<p><a href="https://i.stack.imgur.com/obqAW.png" rel="nofollow noreferrer">Error</a></p>
|
<p><code>sklearn.svm._classes</code> is a private module that only exists in scikit-learn 0.22 and later, so this error means <code>model_breast.pkl</code> was created with a newer scikit-learn than the one installed in the environment running your Flask app.</p>
<p>You can install a version that matches the one used when the model was pickled (0.22 or newer), e.g. by upgrading:</p>
<pre><code>pip install --upgrade scikit-learn
</code></pre>
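<p>If you are unsure which version produced the pickle, you can check what is currently installed in each environment (training and Flask) with, for instance:</p>
<pre><code>python -c "import sklearn; print(sklearn.__version__)"
</code></pre>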
|
python|flask|scikit-learn|pickle|sklearn-pandas
| 1
|
374,448
| 62,192,337
|
Rebalancing portfolio creates a Singular Matrix
|
<p>I am trying to create a minimum variance portfolio based on 1 year of data. I then want to rebalance the portfolio every month recomputing thus the covariance matrix. (my dataset starts in 1992 and finishes in 2017).</p>
<p>I did the following code which works when it is not in a loop. But when put in the loop the inverse of the covariance matrix is Singular. I don't understand why this problem arises since I reset every variable at the end of the loop.</p>
<pre><code>### Importing the necessary libraries ###
import pandas as pd
import numpy as np
from numpy.linalg import inv
### Importing the dataset ###
df = pd.read_csv("UK_Returns.csv", sep = ";")
df.set_index('Date', inplace = True)
### Define varibales ###
stocks = df.shape[1]
returns = []
vol = []
weights_p =[]
### for loop to compute portfolio and rebalance every 30 days ###
for i in range (0,288):
a = i*30
b = i*30 + 252
portfolio = df[a:b]
mean_ret = ((1+portfolio.mean())**252)-1
var_cov = portfolio.cov()*252
inv_var_cov = inv(var_cov)
doit = 0
weights = np.dot(np.ones((1,stocks)),inv_var_cov)/(np.dot(np.ones((1,stocks)),np.dot(inv_var_cov,np.ones((stocks,1)))))
ret = np.dot(weights, mean_ret)
std = np.sqrt(np.dot(weights, np.dot(var_cov, weights.T)))
returns.append(ret)
vol.append(std)
weights_p.append(weights)
weights = []
var_cov = np.zeros((stocks,stocks))
inv_var_cov = np.zeros((stocks,stocks))
i+=1
</code></pre>
<p>Does anyone has an idea to solve this issue? </p>
<p>The error it yields is the following:</p>
<pre><code>---------------------------------------------------------------------------
LinAlgError Traceback (most recent call last)
<ipython-input-17-979efdd1f5b2> in <module>()
21 mean_ret = ((1+portfolio.mean())**252)-1
22 var_cov = portfolio.cov()*252
---> 23 inv_var_cov = inv(var_cov)
24 doit = 0
25 weights = np.dot(np.ones((1,stocks)),inv_var_cov)/(np.dot(np.ones((1,stocks)),np.dot(inv_var_cov,np.ones((stocks,1)))))
<__array_function__ internals> in inv(*args, **kwargs)
1 frames
/usr/local/lib/python3.6/dist-packages/numpy/linalg/linalg.py in _raise_linalgerror_singular(err, flag)
95
96 def _raise_linalgerror_singular(err, flag):
---> 97 raise LinAlgError("Singular matrix")
98
99 def _raise_linalgerror_nonposdef(err, flag):
LinAlgError: Singular matrix
</code></pre>
<p>Thank you so much for any help you can provide me with!</p>
<p>The data is shared in the following google drive: <a href="https://drive.google.com/file/d/1-Bw7cowZKCNU4JgNCitmblHVw73ORFKR/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1-Bw7cowZKCNU4JgNCitmblHVw73ORFKR/view?usp=sharing</a></p>
|
<p>It would be better to identify what is causing the singularity of the matrix
but there are means of living with singular matrices.</p>
<p>Try using the pseudoinverse via <code>np.linalg.pinv()</code>. It is guaranteed to always exist.
See <a href="https://numpy.org/doc/1.18/reference/generated/numpy.linalg.pinv.html" rel="nofollow noreferrer">pinv</a></p>
<p>Another way around it is to avoid computing the inverse matrix at all.
Just find the least-squares solution of the system. See <a href="https://numpy.org/doc/1.18/reference/generated/numpy.linalg.lstsq.html#numpy.linalg.lstsq" rel="nofollow noreferrer">lstsq</a></p>
<p>Just replace <code>np.dot(X,inv_var_cov)</code> with</p>
<pre><code>np.linalg.lstsq(var_cov, X, rcond=None)[0]
</code></pre>
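<p>As a minimal sketch, reusing the variable names from your loop, only the inversion line needs to change for the pseudoinverse route:</p>
<pre><code>var_cov = portfolio.cov() * 252
inv_var_cov = np.linalg.pinv(var_cov)   # pseudoinverse: defined even when var_cov is singular
weights = np.dot(np.ones((1, stocks)), inv_var_cov) / (
    np.dot(np.ones((1, stocks)), np.dot(inv_var_cov, np.ones((stocks, 1)))))
</code></pre>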
|
python|numpy|matrix|portfolio|singular
| 2
|
374,449
| 62,273,005
|
Compositing images by blurred mask in Numpy
|
<p>I have two images and a mask, all of same dimensions, as Numpy arrays: </p>
<p><a href="https://i.stack.imgur.com/O48QE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O48QE.jpg" alt="enter image description here"></a><br>
<a href="https://i.stack.imgur.com/P7J0p.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P7J0p.jpg" alt="enter image description here"></a><br>
<a href="https://i.stack.imgur.com/eovoO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eovoO.jpg" alt="enter image description here"></a></p>
<h1>Desired output</h1>
<p>I would like to merge them in such a way that the output will be like this:</p>
<p><a href="https://i.stack.imgur.com/vEubE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vEubE.jpg" alt="enter image description here"></a></p>
<h1>Current code</h1>
<pre class="lang-py prettyprint-override"><code>def merge(lena, rocket, mask):
'''Mask init and cropping'''
mask = np.zeros(lena.shape[:2], dtype='uint8')
cv2.fillConvexPoly(mask, circle, 255) # might be polygon
'''Bitwise operations'''
lena = cv2.bitwise_or(lena, lena, mask=mask)
mask_inv = cv2.bitwise_not(mask) # mask inverting
rocket = cv2.bitwise_or(rocket, rocket, mask=mask_inv)
output = cv2.bitwise_or(rocket, lena)
return output
</code></pre>
<h1>Current result</h1>
<p>This code gives me this result: </p>
<p><a href="https://i.stack.imgur.com/qpxSx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qpxSx.jpg" alt="enter image description here"></a> </p>
<p>Applying <code>cv2.GaussianBlur(mask, (51,51), 0)</code> distorts colors of overlayed image in different ways.<br>
Other SO questions relate to similar problems but not solving exactly this type of blurred compositing. </p>
<h2>Update: this gives same result as a current one</h2>
<pre class="lang-py prettyprint-override"><code>mask = np.zeros(lena.shape[:2], dtype='uint8')
mask = cv2.GaussianBlur(mask, (51,51), 0)
mask = mask[..., np.newaxis]
cv2.fillConvexPoly(mask, circle, 1)
output = mask * lena + (1 - mask) * rocket
</code></pre>
<h1>Temporal solution</h1>
<h3>Possibly this is not optimal due to many conversions, please advise</h3>
<pre><code>mask = np.zeros(generated.shape[:2])
polygon = np.array(polygon, np.int32) # 2d array of x,y coords
cv2.fillConvexPoly(mask, polygon, 1)
mask = cv2.GaussianBlur(mask, (51, 51), 0)
mask = mask.astype('float32')
mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
foreground = cv2.multiply(lena, mask, dtype=cv2.CV_8U)
background = cv2.multiply(rocket, (1 - mask), dtype=cv2.CV_8U)
output = cv2.add(foreground, background)
</code></pre>
<p>Please advise how can I blur a mask, properly merge it with foreground and then overlay on background image?</p>
|
<p>You need to renormalize the mask before blending:</p>
<pre><code>def blend_merge(lena, rocket, mask):
mask = cv2.GaussianBlur(mask, (51, 51), 0)
mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
mask = mask.astype('float32') / 255
foreground = cv2.multiply(lena, mask, dtype=cv2.CV_8U)
background = cv2.multiply(rocket, (1 - mask), dtype=cv2.CV_8U)
output = cv2.add(foreground, background)
return output
</code></pre>
<p>A full working example is <a href="https://github.com/shortcipher3/stackoverflow/blob/master/compositing_images_by_blurred_mask_in_numpy.ipynb" rel="nofollow noreferrer">here</a>.</p>
|
python|numpy|opencv|python-imaging-library|bitwise-or
| 2
|
374,450
| 51,476,568
|
Using GPU with Keras
|
<p>I have actually been facing a problem since last Friday and haven't found a solution so far.</p>
<p>First of all, you need to know that I'm a beginner on Linux. I'm trying to do some deep learning in my internship, and I discovered that even though my company has a 1080 Ti, Keras wasn't using it, so I have the job of correcting this.</p>
<p>I am trying to use Keras with the GPU. I installed TensorFlow by following these steps: <a href="https://www.tensorflow.org/install/install_linux" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_linux</a>
I also installed CUDA and cuDNN.</p>
<p>I found an older installation of CUDA (version 7.5) on my machine. I installed version 9.2 without uninstalling version 7.5. I added the PATH variables, but it seems like they are not being taken into account: <a href="https://i.stack.imgur.com/B3Pqm.png" rel="nofollow noreferrer">https://i.stack.imgur.com/B3Pqm.png</a></p>
<p>I tried to uninstall CUDA version 7.5 but I don't know how to do it, since there is no cuda-7.5 folder in the /usr/local folder.</p>
<p>When I enter nvidia-smi in the prompt, it works correctly. I installed tensorflow and tensorflow-gpu, but it does not work: <a href="https://i.stack.imgur.com/78gPd.png" rel="nofollow noreferrer">https://i.stack.imgur.com/78gPd.png</a></p>
<p>Does anyone know how to help me? I guess the solution to my problem is not really complicated for someone who knows Ubuntu, and I feel like I'm losing a lot of time doing something I don't really understand.</p>
<p>If someone needs further information in order to help me, feel free to ask.</p>
<p>Thank you</p>
|
<p>Uninstall tensorflow and install only tensorflow-gpu. You should not install both. If you are using keras, then install keras-gpu. </p>
<p>Let's say you are working with conda and you want to tidy up all this. Do </p>
<pre><code>conda remove keras
conda remove tensorflow*
conda install keras-gpu
</code></pre>
<p>If you are not, then I highly recommend <a href="https://anaconda.org/anaconda/python" rel="nofollow noreferrer">Anaconda</a> for dealing, stress-free, with the kind of issues you seem to be having.</p>
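<p>To verify that TensorFlow can actually see the GPU afterwards, a quick check (using TensorFlow 1.x APIs, which match your setup) is:</p>
<pre><code>import tensorflow as tf
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())   # the output should include a '/device:GPU:0' entry
print(tf.test.is_gpu_available())        # True if TensorFlow can use the GPU
</code></pre>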
|
python|tensorflow|cuda|keras
| 2
|
374,451
| 51,423,017
|
Python pandas to compare 2 Microsoft Excel and output the changes
|
<p>I am trying to use Python pandas to determine the changes need to make on a certain rows.</p>
<p>data1</p>
<pre><code>name contract id unit qty location
siteA 00012345 A001 pcs 1 M.K.141.1
siteA 00012345 A002 pcs 2 M.K.141.1
siteA 00012345 A003 pcs 3 M.K.141.1
siteA 00012345 A004 pcs 12 M.K.141.1
siteA 00012345 A005 pcs 26 M.K.141.1
siteA 00012345 A006 pcs 2 M.K.141.1
siteB 00012345 A001 pcs 2 M.K.285.1
siteB 00012345 A003 pcs 3 M.K.285.1
siteB 00012345 A004 pcs 5 M.K.285.1
siteB 00012345 A005 pcs 10 M.K.285.1
siteB 00012345 A006 pcs 11 M.K.285.1
</code></pre>
<p>data2</p>
<pre><code>name id unit qty
siteA A001 pcs 1
siteA A002 pcs 4
siteA A003 pcs 6
siteA A004 pcs 12
siteA A005 pcs 28
siteB A001 pcs 2
siteB A003 pcs 6
siteB A004 pcs 5
siteB A005 pcs 33
siteB A006 pcs 11
</code></pre>
<p>What I am trying to figure out is to compare data2 with data1, and check the difference of the qty between both siteA and siteB respectively, and modify the qty in data1</p>
<p>need some head start as looking into pandas documentation takes me too long to able to understand what to do..</p>
<p>thanks!</p>
<p>code snippet I current have:</p>
<pre><code>import pandas as pd
df1 = pd.read_excel(r'D:\data1.xlsx', 'Sheet1')
df2 = pd.read_excel(r'D:\data2.xlsx', 'Sheet1')
for index, row in df1.iterrows():
pass
</code></pre>
<p>too bad i am too new to pandas and trying to learn how to use it.</p>
|
<p>I think I'd use merge to join those to datasets together then look for differences.</p>
<pre><code>data1.merge(data2, on=['name','id','unit']).query('qty_x != qty_y')
</code></pre>
<p>Output:</p>
<pre><code> name contract id unit qty_x location qty_y
1 siteA 12345 A002 pcs 2 M.K.141.1 4
2 siteA 12345 A003 pcs 3 M.K.141.1 6
4 siteA 12345 A005 pcs 26 M.K.141.1 28
6 siteB 12345 A003 pcs 3 M.K.285.1 6
8 siteB 12345 A005 pcs 10 M.K.285.1 33
</code></pre>
<p>Where _x and _y are the default suffixes given to commonly named columns from each dataframe. You can redefine these suffixes using the <code>suffixes</code> parameter of merge.</p>
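<p>If you then want to write the changed quantities back into data1, as the question asks, one possible sketch (assuming the frames are named data1 and data2 as above) is:</p>
<pre><code>merged = data1.merge(data2, on=['name', 'id', 'unit'], how='left')
# take the quantity from data2 where a match exists, otherwise keep the original qty
data1['qty'] = merged['qty_y'].fillna(merged['qty_x']).values
</code></pre>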
|
python|pandas
| 0
|
374,452
| 51,158,934
|
unable to get the details on the x-axis using plot method in pandas
|
<pre><code>import pandas as pd
from pandas import Series,DataFrame
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
poll_df=pd.read_csv('http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv')
poll_df.plot(x='End Date',y=['Obama','Romney','Undecided'],linestyle='',marker='o')
</code></pre>
<p>I am getting only 'End Date' written below the x-axis, but i want all the dates present inside the column End date to be mentioned.</p>
|
<p>You need to change the dtype of End Date: in poll_df it is a string, and converting it to a datetime dtype allows the pandas plot to correctly format the x-axis with the date labels:</p>
<pre><code>import pandas as pd
from pandas import Series,DataFrame
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
poll_df=pd.read_csv('http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv')
poll_df['End Date'] = pd.to_datetime(poll_df['End Date'])
poll_df.plot(x='End Date',y=['Obama','Romney','Undecided'],linestyle='',marker='o')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/zhY3N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zhY3N.png" alt="enter image description here"></a></p>
<p>OR you can use <code>parse_dates</code> parameter in read_csv:</p>
<pre><code>poll_df=pd.read_csv('http://elections.huffingtonpost.com/pollster/2012-general-election-romney-vs-obama.csv',
parse_dates=['End Date'])
</code></pre>
|
python-3.x|pandas|numpy|dataframe|data-science
| 0
|
374,453
| 51,333,050
|
How to group by three column using conditions in Pandas(Python)?
|
<p>Hi so I am currently working with a data frame which has the following Columns:</p>
<p>User_id(has more than 30 types of repeated user id's):1,22,33,3,1,222,1,3 and so on</p>
<p>Column1(has two categories):A,B,A,B and so on</p>
<p>Column2(has two categories):BB,CC,BB,CC and so on..</p>
<p>Date: 2010-01-09,2010-01-03 and so on..</p>
<p>Now what I am trying to do is get the minimum Date where Column1=A and Column2=BB for a particular user id (say 1),
and then do the same thing for all combinations, e.g. Column1=B and Column2=BB, etc.</p>
<p>PS:This is using Python(Pandas,Numpy).
Thanks and looking forward to your help.</p>
|
<p>what you want to do is group by <code>Column2</code>, <code>Column1</code> and <code>id</code> and get the min of the date column:</p>
<pre><code>mins = df.groupby(['Column2', 'Column1','id']).Date.min()
</code></pre>
<p>if you want to get the info for only one particular user id you can filter the df beforehand</p>
<pre><code>df = df[df.id==1]
</code></pre>
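<p>As a small illustration, reusing the column names from the snippet above, the minimum date for a single combination and a single user could be obtained with:</p>
<pre><code>mask = (df.id == 1) & (df.Column1 == 'A') & (df.Column2 == 'BB')
min_date = df.loc[mask, 'Date'].min()
</code></pre>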
|
python|python-3.x|pandas|pandas-groupby
| 0
|
374,454
| 51,130,264
|
Pandas - Storing the results of df.apply() to only select rows
|
<p>I have a rather convoluted <code>df.apply()</code> to calculate the business hours between two dates.</p>
<p>I have it working with no issues for a single row/example, however I'm now trying to apply it across the entire df.</p>
<p>Example code:
<code>df.apply(lambda row: calc_bus_hrs(row['Created Date'], row['T1 - Date']) if not (pd.isnull(row['T1 - Date'])) else np.nan, axis=1)</code></p>
<p>The df.apply is not relevant for every row and returns some <code>nan</code> outputs <strong>which is fine</strong>.</p>
<p>Output:</p>
<p><code>40171 NaN
40172 NaN
40173 0.399722
40174 NaN
40175 NaN
40176 NaN
40177 NaN
40178 NaN
40179 0.017222
40180 NaN</code></p>
<p>Now I want to save to my df using another columns value like so:</p>
<pre><code>df[df['T1 - From'].values[0] + " Time"]
</code></pre>
<p>Now the problem is the above code fails when <code>df[df['T1 - From'].values[0]</code> contains a <code>nan</code>.</p>
<ul>
<li>How can I save the output to only the rows that are not nan?</li>
</ul>
<p>Full code: </p>
<pre><code>df[df['T1 - From'].values[0] + " Time"] = df.apply(lambda row: calc_bus_hrs(row['Created Date'], row['T1 - Date']) if not (pd.isnull(row['T1 - Date'])) else np.nan, axis=1)
</code></pre>
<p>Error:</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'float' and 'str'
</code></pre>
|
<p>You can achieve it by defining a separate named function that handles the row logic:</p>
<pre><code>def row_func(row):
    # only compute business hours when this row actually has a 'T1 - Date'
    if pd.notnull(row['T1 - Date']):
        return calc_bus_hrs(row['Created Date'], row['T1 - Date'])
    else:
        return np.nan

df[df['T1 - From'].values[0] + " Time"] = df.apply(row_func, axis=1)
</code></pre>
|
python|pandas
| 1
|
374,455
| 51,278,422
|
Interpreting the FLOPs profile result of tensorflow
|
<p>I want to profile the FLOPs of a very simple neural network model, which is used to classify the MNIST dataset, and the batch size is 128. As I followed the official tutorials, I got the result of the following model, but I cannot understand some parts of the output.</p>
<pre><code>w1 = tf.Variable(tf.random_uniform([784, 15]), name='w1')
w2 = tf.Variable(tf.random_uniform([15, 10]), name='w2')
b1 = tf.Variable(tf.zeros([15, ]), name='b1')
b2 = tf.Variable(tf.zeros([10, ]), name='b2')
hidden_layer = tf.add(tf.matmul(images_iter, w1), b1)
logits = tf.add(tf.matmul(hidden_layer, w2), b2)
loss_op = tf.reduce_sum(\
tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=labels_iter))
opetimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = opetimizer.minimize(loss_op)
</code></pre>
<p>The <code>images_iter</code> and the <code>labels_iter</code> are the iterators of tf.data, which are similar to the placeholder. </p>
<pre><code>tf.profiler.profile(
tf.get_default_graph(),
options=tf.profiler.ProfileOptionBuilder.float_operation())
</code></pre>
<p>I used this code, which is equivalent to <code>scope -min_float_ops 1 -select float_ops -account_displayed_op_only</code> in the tfprof command-line tool, to profile the FLOPs and got the result below.</p>
<pre><code>Profile:
node name | # float_ops
_TFProfRoot (--/23.83k flops)
random_uniform (11.76k/23.52k flops)
random_uniform/mul (11.76k/11.76k flops)
random_uniform/sub (1/1 flops)
random_uniform_1 (150/301 flops)
random_uniform_1/mul (150/150 flops)
random_uniform_1/sub (1/1 flops)
Adam/mul (1/1 flops)
Adam/mul_1 (1/1 flops)
softmax_cross_entropy_with_logits_sg/Sub (1/1 flops)
softmax_cross_entropy_with_logits_sg/Sub_1 (1/1 flops)
softmax_cross_entropy_with_logits_sg/Sub_2 (1/1 flops)
</code></pre>
<p>My questions are </p>
<ol>
<li>What do the numbers in the parentheses mean? For example, <code>random_uniform_1 (150/301 flops)</code>, what are 150 and 301?</li>
<li>Why is the first number in the parentheses of _TFProfRoot "--"?</li>
<li>Why are the flops of Adam/mul and softmax_cross_entropy_with_logits_sg/Sub 1?</li>
</ol>
<p>I know it is daunting to read such a long question, but a desperate guy who cannot find related information in the official documentation needs your help.</p>
|
<p>I'll give it a try:</p>
<p>(1) From this example, it looks like the first number is the "self" flops and the second number is the "total" flops under that naming scope. For example: the 3 nodes named random_uniform (if there is such a node), random_uniform/mul, and random_uniform/sub respectively take 11.76k, 11.76k, and 1 flops, i.e. roughly 23.52k flops in total. </p>
<p>For another example: 23.83k = 23.52k + 300.</p>
<p>Does this make sense?</p>
<p>(2) The root node is a "virtual" top-level node added by the profiler, which doesn't have a "self" flops , or in other words, it has zero self flops.</p>
<p>(3) Not sure why it is 1. It would help if you can print the GraphDef and find out what this node really is, with print(sess.graph_def)</p>
<p>Hope this helps.</p>
|
tensorflow|profiler|flops
| 1
|
374,456
| 51,375,616
|
Dask groupby date performance
|
<p>Given the following dask dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
import dask.dataframe as dd
N = int(1e4)
df = pd.DataFrame(np.random.randn(N, 3), columns=list('abc'),
index=pd.date_range(datetime.now(), periods=N, freq='1min'))
df['dt'] = pd.to_datetime(df.index.date)
ddf = dd.from_pandas(df, npartitions=5)
ddf
</code></pre>
<p>and this slow function:</p>
<pre><code>def f(grp, M=5):
#A slow function
x = 0
for n in range(M):
for idx1, row in grp[list('abc')].items():
for idx2, v in row.items():
x += v
return x
</code></pre>
<p>I am surprised that pandas is faster than dask for a groupby + aggregate operation, e.g.:</p>
<pre><code>%%timeit
res = ddf.groupby('dt').apply(f).compute()
#310 ms ± 3.08 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>versus:</p>
<pre><code>%%timeit
res = df.groupby('dt').apply(f)
#149 ms ± 3.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>Am I missing something here? I thought that dask would parallelize this computation? My real use case has millions of rows and my aggregation function is very slow.</p>
|
<p>When the data fits in memory, <code>pandas</code> is faster than <code>dask</code>. I'm wondering which version of <code>dask</code> you are using, because if you don't declare the metadata for the apply it should return a warning. (I edited your question to add the metadata.)</p>
<p>You could try to run these experiments for a bigger <code>N</code>, using a different number of partitions and using multiprocessing.</p>
<pre><code>%%timeit -n10
# dask <= 0.17.5
res = ddf.groupby('dt').apply(f, meta=('x', 'f8'))\
.compute(get=dask.multiprocessing.get)
%%timeit -n10
# dask >= 0.18.0
res = ddf.groupby('dt').apply(f, meta=('x', 'f8'))\
.compute(scheduler='processes')
</code></pre>
<p>For <code>N=int(1e5)</code> and <code>npartitions=4</code> on my laptop the <code>dask</code> version is faster than the <code>pandas</code> one. The next step would be try to improve your function <code>f</code>.</p>
|
python|pandas|dask
| 0
|
374,457
| 51,447,460
|
Location of documentation on special methods recognized by numpy
|
<p>One of the differences between <code>math.exp</code> and <code>numpy.exp</code> is that, if you have a custom class <code>C</code> that has a <code>C.exp</code> method, <code>numpy.exp</code> will notice and delegate to this method whereas <code>math.exp</code> will not:</p>
<pre><code>class C:
def exp(self):
return 'hey!'
import math
math.exp(C()) # raises TypeError
import numpy
numpy.exp(C()) # evaluates to 'hey!'
</code></pre>
<p>However, if you go to the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html" rel="nofollow noreferrer">web documentation of <code>numpy.exp</code></a>, this seems to be taken for granted. It isn't explicitly stated anywhere. Is there a place where this functionality is documented?</p>
<p>More generally, is there a place with a list of <em>all</em> such methods recognized by numpy?</p>
|
<p>This isn't a special behavior of the <code>np.exp</code> function; it's just a consequence of how object dtype arrays are evaluated.</p>
<p><code>np.exp</code> like many numpy functions tries to convert non-array inputs into arrays before acting.</p>
<pre><code>In [227]: class C:
...: def exp(self):
...: return 'hey!'
...:
In [228]: np.exp(C())
Out[228]: 'hey!'
In [229]: np.array(C())
Out[229]: array(<__main__.C object at 0x7feb7154fa58>, dtype=object)
In [230]: np.exp(np.array(C()))
Out[230]: 'hey!'
</code></pre>
<p>So <code>C()</code> is converted into an array with object dtype (<code>C()</code> isn't an iterable like <code>[1,2,3]</code>). Typically a numpy function, if given an object dtype array, iterates over the elements, asking each to perform the corresponding method. That explains how [228] ends up evaluating <code>C().exp()</code>.</p>
<pre><code>In [231]: np.exp([C(),C()])
Out[231]: array(['hey!', 'hey!'], dtype=object)
In [232]: np.exp([C(),C(),2])
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-232-5010b59d525d> in <module>()
----> 1 np.exp([C(),C(),2])
AttributeError: 'int' object has no attribute 'exp'
</code></pre>
<p><code>np.exp</code> can work on an array object dtype provided all the elements have a <code>exp</code> method. Integers don't. <code>ndarray</code> doesn't either.</p>
<pre><code>In [233]: np.exp([C(),C(),np.array(2)])
AttributeError: 'numpy.ndarray' object has no attribute 'exp'
</code></pre>
<p><code>math.exp</code> expects a number, an Python scalar (or something that can convert into a scalar such as <code>np.array(3)</code>.</p>
<p>I expect this behavior is common to all <code>ufunc</code>s. I don't know of other numpy functions that don't follow this protocol.</p>
<p>In some cases the <code>ufunc</code> delegates to <code>__</code> methods:</p>
<pre><code>In [242]: class C:
...: def exp(self):
...: return 'hey!'
...: def __abs__(self):
...: return 'HEY'
...: def __add__(self, other):
...: return 'heyhey'
...:
In [243]:
In [243]: np.abs(C())
Out[243]: 'HEY'
In [244]: np.add(C(),C())
Out[244]: 'heyhey'
</code></pre>
|
python|numpy
| 4
|
374,458
| 51,491,248
|
do not let matplotlib automatically adjust the order of x axis
|
<p>Here is my little data:</p>
<pre><code>aa3=pd.DataFrame({'OfficeName':['Narre Warren','Cannington','Chadstone','1_Mean',
'Traralgon','Bondi Junction','Hobart','2_Mean'],
'Ratio':[0.1,0.2,0.4,0.1,0.43,0.4,0.15,0.32]})
</code></pre>
<p>The order of OfficeName is exactly what I want. But, when I try to draw a bar chart:</p>
<pre><code>plt.bar(aa3.loc[:,'OfficeName'],aa3.loc[:,'Ratio'])
</code></pre>
<p>The chart looks like this:</p>
<p><a href="https://i.stack.imgur.com/iEfFH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iEfFH.jpg" alt="enter image description here"></a></p>
<p>You can see that the order of the x axis is automatically changed. This is really bad for my work. What should I do to make the chart show the bars in the same order as my data?</p>
|
<p>Try this code:</p>
<pre><code>a3=pd.DataFrame({'OfficeName':['Narre Warren', 'Cannington', 'Chadstone', '1_Mean',
'Traralgon', 'Bondi Junction', 'Hobart', '2_Mean'],
'Ratio':[0.1, 0.2, 0.4, 0.1, 0.43, 0.4, 0.15, 0.32]})
fig, ax = plt.subplots()
ind = np.arange(a3.loc[:, 'OfficeName'].nunique()) #Creates an array for indices on x-axis
width = 0.35 #Width of the bar plots
p1 = ax.bar(ind, a3.loc[:, 'Ratio'], width) #Creates the bar plot for plotting
plt.xticks(ind) #Sets ticks(positions) for the labels to appear. Default starts from -1(we want it to start from 0)
ax.set_xticklabels(a3.loc[:, 'OfficeName'], ha = 'center') #Write the x labels for each value
ax.set_xlabel('x Group')
ax.set_ylabel('Ratio')
plt.show()
</code></pre>
|
python|pandas|matplotlib|indexing|axis
| 1
|
374,459
| 51,463,341
|
numpy masking does not work with bounded range on both sides
|
<p>Suppose we have:</p>
<pre><code>>>> x
array([-1. , -1.3, 0. , 1.3, 0.2])
</code></pre>
<p>We can choose select elements with a range:</p>
<pre><code>>>> x[x <= 1]
array([-1. , -1.3, 0. , 0.2])
</code></pre>
<p>And we can bound it below too:</p>
<pre><code>>>> x[-1 <= x]
array([-1. , 0. , 1.3, 0.2])
</code></pre>
<p>Is there some rationale for not being able to use the following:</p>
<pre><code>>>> x[-1 <= x <= 1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I know that all it does is create a mask of <code>True</code> and <code>False</code>s upon doing these inequality operations, and so I can do the following:</p>
<pre><code>>>> x[(-1 <= x) * (x <= 1)]
array([-1. , 0. , 0.2])
</code></pre>
<p>as a broadcasting operation on booleans. It's not too inconvenient, but why doesn't the previous range inequality work?</p>
|
<p>Although regular Python types do support "chained comparison" like <code>1 < x < 5</code>, NumPy arrays do not. Perhaps that's unfortunate, but you can work around it easily:</p>
<pre><code>x[(-1 <= x) & (x <= 1)]
</code></pre>
|
python|numpy
| 1
|
374,460
| 51,364,416
|
CNN weights getting stuck
|
<p>This is a slightly theoretical question. Below is a graph the plots the loss as the CNN is being trained. Y axis is MSE and X axis is number of Epochs.<a href="https://i.stack.imgur.com/VlnUn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VlnUn.png" alt="enter image description here"></a></p>
<p>Description of CNN: </p>
<pre><code>class Net(nn.Module):
def __init__ (self):
super(Net, self).__init__()
self.conv1 = nn.Conv1d(in_channels = 1, out_channels = 5, kernel_size = 9) #.double
self.pool1 = nn.MaxPool1d(3)
self.fc1 = nn.Linear(5*30, 200)
#self.dropout = nn.Dropout(p = 0.5)
self.fc2 = nn.Linear(200, 99)
def forward(self, x):
x = self.pool1(F.relu(self.conv1(x)))
x = x.view(-1, 5 * 30)
#x = self.dropout(F.relu(self.fc1(x)))
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
def init_weights(m):
if type(m) == nn.Linear:
nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.01)
net = Net()
net.apply(init_weights)
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.01-0.0001) # depends
</code></pre>
<p>Both the input and the output are arrays of numbers. It is a multi-output regression problem. </p>
<p>This issue where the loss/weights get stuck in a poor place doesn't happen as much if I use a lower learning rate. However, it still happens. In some sense that means that the hyper-dimensional space created by the parameters of the CNN is jagged, with a lot of local minima. This could be true because the CNN's inputs are very similar. Would increasing the number of layers of the CNN (both conv layers and fully connected linear layers) help solve this problem, as the hyper-dimensional space might then be smoother? Or is this intuition completely incorrect?
A broader question: when should you be inclined to add more convolutional layers? I know that in practice you should almost never start from scratch and should instead reuse another model's first few layers. However, the inputs I am using are very different from anything I have found online, and therefore I cannot do this. </p>
|
<p>Is this a multiclass classification problem? If so you could try using <a href="https://pytorch.org/docs/stable/nn.html#crossentropyloss" rel="nofollow noreferrer">cross entropy loss</a>. And a softmax layer before output maybe? I'm not sure because I don't know what's the model's input and output.</p>
|
conv-neural-network|pytorch|gradient|hyperparameters
| 0
|
374,461
| 51,328,272
|
Running Growing Self-Organizing Map(GSOM) GitHub Implementation failed with AttributeError: 'numpy.ndarray' object has no attribute 'iteritems'
|
<p>I have got some codes of Growing Self-Organizing Map(GSOM) from <a href="https://github.com/philippludwig/pygsom" rel="nofollow noreferrer">GitHub</a>
(All required information for understanding the Mechanism of GSOM has described in the implementation's Documentation).</p>
<p>I tried to run it in <strong>PyCharm version 2018.1.4</strong> with the <strong>Python 3.6</strong> as Project Interpreter, but I came across this error:</p>
<blockquote>
<p>ValueError: too many values to unpack (expected 2)</p>
</blockquote>
<p>The above error is related to the constructor of the GSOM class and specifically in the below loop:</p>
<pre><code> for fn,t in dataset:
arr = scipy.array(t)
self.data.append([fn,arr])
</code></pre>
<p>I know that this error is a common error in loops and I have to say that I tried most of solutions that i have found in stack overflow.</p>
<p>For example I used the functions like <strong>iteritems()</strong> ,but I have confronted with the following error:</p>
<blockquote>
<p>AttributeError: 'numpy.ndarray' object has no attribute 'iteritems'</p>
</blockquote>
<p>The <strong>Python</strong> Program I have developed for applying this implementation is:</p>
<pre><code>from gsom import GSOM
import numpy as np
dataset = np.array([
[1., 0., 0.],
[1., 0., 1.],
[0., 0., 0.5],
[0.125, 0.529, 1.0],
[0.33, 0.4, 0.67],
[0.6, 0.5, 1.0],
[0., 1., 0.],
[1., 0., 0.],
[0., 1., 1.],
[1., 0., 1.],
[1., 1., 0.],
[1., 1., 1.],
[.33, .33, .33],
[.5, .5, .5],
[.66, .66, .66]])
SF = 0.5
Test = GSOM(dataset, SF)
</code></pre>
<p>I'm going to apply this implementation to visualize <strong>High-Dimensional Data</strong> with a <strong>2-Dimension Grid</strong>.</p>
<p>The dataset I used is 3-Dimensional (has three attribute) and is a simple example to understand the performance of the <strong>GSOM</strong>'s functionality.</p>
<p>The original dataset that I will use, has more than 20 attributes.</p>
|
<p>After a full 4-hour search for a way to solve this error, I found that I have to use the code below:</p>
<pre><code>for fn, t in np.ndenumerate(dataset):
arr = scipy.array(t)
self.data.append([fn, arr])
</code></pre>
<p><em>ndenumerate()</em> is the key numpy function for looping over a numpy ndarray in the right way.</p>
<p>Thanks Me!
:)</p>
|
python|numpy|pycharm|som
| 0
|
374,462
| 51,180,102
|
DataFrame, apply, lambda, list comprehension
|
<p>I'm trying to do a bit of cleansing on some data sets. I can accomplish the task with some for loops, but I wanted a more pythonic/pandorable way to do this.</p>
<p>This is the code I came up with, the data is not real..but it should work</p>
<pre><code>import pandas as pd
# This is a dataframe containing the correct values
correct = pd.DataFrame([{"letters":"abc","data":1},{"letters":"ast","data":2},{"letters":"bkgf","data":3}])
# This is the dataframe containing source data
source = pd.DataFrame([{"c":"ab"},{"c":"kh"},{"c":"bkg"}])
for i,word in source["c"].iteritems():
for j,row in correct.iterrows():
if word in row["letters"]:
source.at[i,"c"] = row["data"]
break
</code></pre>
<hr>
<p>This is my attempt to a pandorable way but it fails because of the list comprehension returning a generator:</p>
<pre><code>source["c"] = source["c"].apply(
lambda x: row["data"] if x in row["letters"] else x for row in
correct.iterrows()
)
</code></pre>
|
<p>Here's one solution using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow noreferrer"><code>pd.Series.apply</code></a> with <code>next</code> and a generator expression:</p>
<pre><code>def update_value(x):
return next((k for k, v in correct.set_index('data')['letters'].items() if x in v), x)
source['c'] = source['c'].apply(update_value)
print(source)
c
0 1
1 kh
2 3
</code></pre>
|
python|pandas|dataframe|lambda|series
| 0
|
374,463
| 51,195,654
|
Converting a CNN from classification to log regression?
|
<p>I put together a CNN using tflearn that classifies images in terms of their scaling from some original resolution (I.e. 50%, 70%, etc.) just to see what kind of accuracy I could get for this problem. I’m new to machine learning so I figured it would be a good way to start towards the overall goal of having the network determine the scaling at any level, not just the few I generated for classification.</p>
<p>After getting a reasonable level of accuracy I decided to convert to model to do logistical regression instead of classification, but I’m having a few issues both in theory and application. First, what should the labels even look like? Before I was using a one-hot array for the 5 different classes, but obviously that’s not applicable anymore. Should the labels be the scale factor (I.e. 0.5 for 50%, etc.) or something else? Then, should the model itself really look any different? It’s my understanding that I should really only be tweaking the cost function and backprop/optimization portions, as well as changing the output to be one value instead of five. Again, I’m rather new so I’d appreciate any advice on this. </p>
<p>Thank you!</p>
<p>(Also, I didn't include any code here because I feel like these questions are pretty general and my code is not really that special or involved, but if anyone needs to see it in order to help with answering/giving advice, just ask and I will post some of it here.)</p>
|
<p>Usually in classification problems, your output will use the softmax function to generate "probabilities". In neural networks, to convert this to a regression approach, remove the softmax layer and change the output dimension from the number of classes you had previously (5) to 1. Training this should yield a vector of shape [batchsize x 1] that can be fed to your loss function with labels of the same shape containing the real-valued scale factor, assuming this is indeed what the model is trying to predict. </p>
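<p>A hedged sketch of that change in Keras (the conv layer and input shape below are placeholders, not the asker's actual architecture):</p>
<pre><code>from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),  # stand-in conv stack
    layers.Flatten(),
    # a classification head would have ended with: layers.Dense(5, activation='softmax')
    layers.Dense(1),   # single linear output predicting the real-valued scale factor
])
model.compile(optimizer='adam', loss='mse')
# labels are now the scale factors themselves, e.g. 0.5 for "50%", shape (batch, 1)
</code></pre>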
|
python|tensorflow|conv-neural-network|logistic-regression|tflearn
| 0
|
374,464
| 51,355,366
|
Invalid syntax when using apply with conditional lambda with pandas
|
<p>I am trying to add a <code>cost</code> column to the <code>w1_weekdays</code>. I want to multiply 'kwh_usage' by <code>onP_price</code> when the <code>end_time_hour</code> is equal to <code>0, 1, 2</code>. For all other hours, I want to multiply by <code>offP_price</code>.</p>
<p><a href="https://i.stack.imgur.com/DNLFs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DNLFs.png" alt="enter image description here"></a></p>
<p>I used apply and lambda to do this.</p>
<pre><code>w1_weekdays['cost'] = w1_weekdays['end_time_hour'].apply(lambda (onP_price, offP_price):\
(onP_price * w1_weekdays['kwh_usage'])\
if w1_weekdays['end_time_hour'] in (0,1,2)\
else (offP_price * w1_weekdays['kwh_usage']))
</code></pre>
<p>However, I get this error</p>
<p><a href="https://i.stack.imgur.com/9g1q7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9g1q7.png" alt="enter image description here"></a></p>
<p>Is the code even correct? And why the invalid syntax error? Thanks!</p>
|
<p>I think better here is use vectorized solution with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> by boolean mask created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a> and last multiple column <code>kwh_usage</code>:</p>
<pre><code>mask = w1_weekdays['end_time_hour'].isin([0,1,2])
w1_weekdays['cost'] = np.where(mask, onP_price , offP_price) * w1_weekdays['kwh_usage']
</code></pre>
|
python|pandas
| 0
|
374,465
| 51,541,358
|
How to create logic to do value mapping in pandas?
|
<p>I have a df that looks like this (except more columns and rows): </p>
<pre><code>p_id
1
2
3
</code></pre>
<p>How do I create logic that is scalable to map certain values to certain numbers in the p_id column? </p>
<p>example df should look like: </p>
<pre><code>p_id:
a
b
c
</code></pre>
<p>in other words how do I make logic that says for every <code>1</code> in column <code>p_id</code>, change to <code>a</code></p>
|
<p>you <em>could</em> use <code>Series.map</code> & pass in a dictionary. </p>
<pre><code>df = pd.DataFrame({'p_id': [1,2,3]})
df.p_id.map({1: 'a', 2: 'b', 3: 'c'})
#output:
0 a
1 b
2 c
Name: p_id, dtype: object
</code></pre>
<p>However, if your mapping an integer to an letter, you could use the <code>chr</code> function</p>
<pre><code># 97 is the ascii code for `a`
(df.p_id+96).map(chr)
#outputs:
0 a
1 b
2 c
Name: p_id, dtype: object
</code></pre>
|
python|python-3.x|pandas
| 1
|
374,466
| 51,432,992
|
Keras: what does class_weight actually try to balance?
|
<p>My data has extreme class imbalance. About 99.99% of samples are negatives; the positives are (roughly) equally divided among three other classes. I think the models I'm training are just predicting the majority class basically all the time. For this reason, I'm trying to weight the classes. </p>
<p><strong>Model</strong></p>
<pre><code>model = Sequential()
#Layer 1
model.add(Conv1D( {{choice([32, 64, 90, 128])}}, {{choice([3, 4, 5, 6, 8])}}, activation='relu', kernel_initializer=kernel_initializer, input_shape=X_train.shape[1:]))
model.add(BatchNormalization())
#Layer 2
model.add(Conv1D({{choice([32, 64, 90, 128])}}, {{choice([3, 4, 5, 6])}}, activation='relu',kernel_initializer=kernel_initializer))
model.add(Dropout({{uniform(0, 0.9)}}))
#Flatten
model.add(Flatten())
#Output
model.add(Dense(4, activation='softmax'))
</code></pre>
<p>(The <code>{{...}}</code> are for use with <a href="https://github.com/maxpumperla/hyperas" rel="noreferrer">Hyperas</a>.)</p>
<p><strong>How I've tried to weight it</strong></p>
<p>1. Using <code>class_weight</code> in <code>model.fit()</code></p>
<pre><code>model.fit(X_train, Y_train, batch_size=64, epochs=10, verbose=2, validation_data=(X_test, Y_test), class_weight={0: 9999, 1:9999, 2: 9999, 3:1})
</code></pre>
<p>2. Using <code>class_weight</code> in <code>model.fit()</code> with <code>sklearn</code> <code>compute_class_weight()</code></p>
<pre><code>model.fit(..., class_weight=class_weight.compute_class_weight("balanced", np.unique(Y_train), Y_train)
</code></pre>
<p>3. With a custom loss function</p>
<pre><code>from keras import backend as K
def custom_loss(weights):
#gist.github.com/wassname/ce364fddfc8a025bfab4348cf5de852d
def loss(Y_true, Y_pred):
Y_pred /= K.sum(Y_pred, axis=-1, keepdims=True)
Y_pred = K.clip(Y_pred, K.epsilon(), 1 - K.epsilon())
loss = Y_true * K.log(Y_pred) * weights
loss = -K.sum(loss, -1)
return loss
return loss
extreme_weights = np.array([9999, 9999, 9999, 1])
model.compile(loss=custom_loss(extreme_weights),
metrics=['accuracy'],
optimizer={{choice(['rmsprop', 'adam', 'sgd','Adagrad','Adadelta'])}}
)
#(then fit *without* class_weight)
</code></pre>
<p><strong>Results</strong></p>
<p>Poor. Accuracy across all classes is ~<code>.99</code>, and unbalanced accuracy for all classes is ~<code>.5</code>. But more meaningful metrics, like auPRC, tell a different story. The auPRC is nearly <code>1</code> for the majority class, and nearly <code>0</code> for the rest. </p>
<p>Is this how Keras balances classes? It just makes sure that the accuracy is the same across them—or should either metrics be equal or comparable too? Or am I specifying the weights wrong?</p>
|
<p>Keras uses the class weights during training, but the accuracy is not reflective of that. Accuracy is calculated across all samples irrespective of the weights between classes. This is because you're using the metric 'accuracy' in compile(). You can define a custom, more meaningful weighted accuracy and use that, or use the sklearn metrics (e.g. f1_score(), which can be 'binary', 'weighted', etc.). </p>
<p>Example:</p>
<pre><code>from sklearn.metrics import f1_score

def macro_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average='macro')

model.compile(loss=custom_loss(extreme_weights),
metrics=['accuracy', macro_f1],
optimizer={{choice(['rmsprop', 'adam', 'sgd','Adagrad','Adadelta'])}}
)
</code></pre>
|
python|tensorflow|neural-network|keras|loss-function
| 2
|
374,467
| 51,371,528
|
Reading Images from TFrecord using Dataset API and showing them on Jupyter notebook
|
<p>I created a tfrecord from a folder of images, now I want to iterate over entries in TFrecord file using Dataset API and show them on Jupyter notebook. However I'm facing problems with reading tfrecord file.</p>
<p>Code I used to create TFRecord</p>
<pre><code>def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def generate_tfr(image_list):
with tf.python_io.TFRecordWriter(output_path) as writer:
for image in images:
image_bytes = open(image,'rb').read()
image_array = imread(image)
image_shape = image_array.shape
image_x, image_y, image_z = image_shape[0],image_shape[1], image_shape[2]
data = {
'image/bytes':_bytes_feature(image_bytes),
'image/x':_int64_feature(image_x),
'image/y':_int64_feature(image_y),
'image/z':_int64_feature(image_z)
}
features = tf.train.Features(feature=data)
example = tf.train.Example(features=features)
serialized = example.SerializeToString()
writer.write(serialized)
</code></pre>
<p>Code to read TFRecord</p>
<pre><code>#This code is incomplete and has many flaws.
#Please give some suggestions in correcting this code if you can
def parse(serialized):
features = \
{
'image/bytes': tf.FixedLenFeature([], tf.string),
'image/x': tf.FixedLenFeature([], tf.int64),
'image/y': tf.FixedLenFeature([], tf.int64),
'image/z': tf.FixedLenFeature([], tf.int64)
}
parsed_example = tf.parse_single_example(serialized=serialized,features=features)
image = parsed_example['image/bytes']
image = tf.decode_raw(image,tf.uint8)
x = parsed_example['image/x'] # breadth
y = parsed_example['image/y'] # height
z = parsed_example['image/z'] # depth
image = tf.cast(image,tf.float32)
# how can I reshape image tensor here? tf.reshape throwing some weird errors.
return {'image':image,'x':x,'y':y,'z':z}
dataset = tf.data.TFRecordDataset([output_path])
dataset.map(parse)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
epoch = 1
with tf.Session() as sess:
for _ in range(epoch):
img = next_element.eval()
print(img)
# when I print image, it shows byte code.
# How can I convert it to numpy array and then show image on my jupyter notebook ?
</code></pre>
<p>I've never worked with any of this before and I'm stuck at reading TFRecords. Please answer how to iterate over the contents of TFrecords and show them on Jupyter notebook. Feel free to correct/optimize both pieces of code. That would help me a lot. </p>
|
<p>Is this what you may be looking for? I think once you convert to a numpy array you can show it in the Jupyter notebook using PIL.Image.</p>
<p>convert tf records to numpy => <a href="https://stackoverflow.com/questions/36026892/how-can-i-convert-tfrecords-into-numpy-arrays">How can I convert TFRecords into numpy arrays?</a></p>
<p>show numpy array as image
<a href="https://gist.github.com/kylemcdonald/2f1b9a255993bf9b2629" rel="nofollow noreferrer">https://gist.github.com/kylemcdonald/2f1b9a255993bf9b2629</a></p>
|
tensorflow|tensorflow-datasets|tfrecord
| 1
|
374,468
| 51,151,436
|
Extra 'b' preceeding the actual output in TensorFlow
|
<p>So, I am new to TensorFlow and will be just starting to learn it. I installed TensorFlow on the IDE Canopy using 'pip' command. </p>
<p>While confirming if it had been installed correctly, I entered the following code :</p>
<pre><code>import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
</code></pre>
<p>This should have given an output :</p>
<pre><code>Hello, TensorFlow!
</code></pre>
<p>Instead I get an extra letter 'b', preceeding this, like :</p>
<pre><code>b'Hello, TensorFlow!'
</code></pre>
<p>Is this a problem to be sorted or is it fine and would be ok if I don't do anything about this ?
Thanks a lot.</p>
|
<p>The 'b' indicates that it is a bytestring (i.e. a sequence of octets rather than a text string). Use decode() to get the string.</p>
<pre><code>print(sess.run(hello).decode())
</code></pre>
|
python|tensorflow|canopy
| 5
|
374,469
| 51,284,161
|
coding values in DataFrame using table with interval description in python
|
<p>I have a table in pandas df1</p>
<pre><code>id value
1 1500
2 -1000
3 0
4 50000
5 50
</code></pre>
<p>also I have another table in dataframe df2, that contains upper boundaries of groups, so essentially every row represents an interval from the previous boundary to the current one (the first interval is "<0"):</p>
<pre><code>group upper
0 0
1 1000
2 NaN
</code></pre>
<p>How should I get the relevant groups for each value from df1, using the intervals from df2? I can't use join, merge etc., because the rule for this join should be like "if value is between previous upper and current upper" and not "if value equals something". The only way that I've found is using a predefined function with df.apply() (there is also a case of categorical values in it, with interval_flag==False):</p>
<pre><code>def values_to_group(x, interval_flag, groups_def):
if interval_flag==True:
for ind, gr in groups_def.sort_values(by='group').iterrows():
if x<gr[1]:
return gr[0]
elif math.isnan(gr[1]) == True:
return gr[0]
else:
for ind, gr in groups_def.sort_values(by='group').iterrows():
if x in gr[1]:
return gr[0]
</code></pre>
<p>Is there an easier/more optimal way to do it?</p>
<p>The expected output should be this:</p>
<pre><code>id value group
1 1500 2
2 -1000 0
3 0 1
4 50000 2
5 50 1
</code></pre>
|
<p>I suggest using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow noreferrer"><code>cut</code></a> after sorting <code>df2</code> by <code>upper</code> and replacing the last <code>NaN</code> with <code>np.inf</code>:</p>
<pre><code>df2 = pd.DataFrame({'group':[0,1,2], 'upper':[0,1000,np.nan]})
df2 = df2.sort_values('upper')
df2['upper'] = df2['upper'].replace(np.nan, np.inf)
print (df2)
group upper
0 0 0.000000
1 1 1000.000000
2 2 inf
#added first bin -np.inf
bins = np.insert(df2['upper'].values, 0, -np.inf)
df1['group'] = pd.cut(df1['value'], bins=bins, labels=df2['group'], right=False)
print (df1)
id value group
0 1 1500 2
1 2 -1000 0
2 3 0 1
3 4 50000 2
4 5 50 1
</code></pre>
|
python|pandas|binning
| 0
|
374,470
| 51,548,794
|
Compare 2 maps together with matplotlib
|
<p>@Julien: I see no point in downvoting a question that could be useful to many beginners. It's ridiculous to see so much hate; your comment (which I respected) was more than enough.</p>
<p>I am working on geopandas and I try to compare 2 maps of NYC, based on their BoroCode (BoroCode & Borocode2). </p>
<p>Please find the code that you can reproduce at home : </p>
<pre><code>import pandas as pd
import geopandas
# We import the database of NYC and we plot it :
df = geopandas.read_file(geopandas.datasets.get_path('nybb'))
ax1 = df.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
ax1
# Then I want to make another dataframe which has also a BoroCode column :
df_tmp1 = pd.DataFrame([[1.1, 5], [2.7, 4], [5.3, 3], [7, 1], [20, 2]], index = ['0', '1', '2', '3', '4'], columns = ['BoroCode2', 'BoroCode'])
df_tmp1
Out [4] :
BoroCode2 BoroCode
0 1.1 5
1 2.7 4
2 5.3 3
3 7.0 1
4 20.0 2
# Now I merge both dataframes :
df1 = pd.merge(df, df_tmp1, on = ['BoroCode'])
# And I can make two maps based on their BoroCode :
map1 = df1.plot(column='BoroCode', cmap='tab10', figsize=(15, 5), legend=True)
</code></pre>
<p><img src="https://image.ibb.co/bSsUb8/index.png" alt="map1"></p>
<pre><code>map2 = df1.plot(column='BoroCode2', cmap='tab10', figsize=(15, 5), legend=True)
</code></pre>
<p><img src="https://image.ibb.co/m6M0io/index2.png" alt="map2"></p>
<p>And then I want to show <code>map1</code> and <code>map2</code> side by side in the same row, just to compare them.</p>
<p>I have tested with <code>subplot</code>, but I think I don't master it well yet. </p>
<p>I am not familiar with the tutorials, as they build plots and scatters from scratch with <code>ax</code>, <code>x</code>, <code>y</code>, <code>axes</code>, etc. As a beginner, I haven't managed to adapt that code to my case.</p>
<p>I just want to show those maps together. Nothing more. </p>
<p>Could somebody help me to show such a way ? Thank you very much. </p>
|
<p>Subplots in matplotlib are created e.g. as </p>
<pre><code>fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15,5))
</code></pre>
<p>Geopandas' <a href="http://geopandas.org/reference.html#geopandas.GeoDataFrame.plot" rel="nofollow noreferrer"><code>plot</code></a> accepts an argument <code>ax</code> to which to supply the axes to plot to. </p>
<pre><code>df1.plot(column='BoroCode', cmap='tab10', legend=True, ax=ax1)
df1.plot(column='BoroCode2', cmap='tab10', legend=True, ax=ax2)
</code></pre>
|
python|pandas|matplotlib
| 0
|
374,471
| 51,222,493
|
Circular shift of a string element
|
<p>Basically I have converted an integer into binary representation and after that it is stored in string format. </p>
<p>I want to circularly rotate the number. </p>
<p>How should I proceed? </p>
<p>I have used <code>np.roll()</code> but it is not working.</p>
|
<p>You can just create a new string as follows to circularly shift it</p>
<pre><code>bin_str = bin_str[-1] + bin_str[:-1]
</code></pre>
<p>If that's no good, you can use <code>collections.deque</code> (which has a <code>rotate</code> method) to get a circular shift effect:</p>
<pre><code>from collections import deque
bin_str = "{0:b}".format(10)
print (bin_str)
1010
d = deque(bin_str, maxlen=len(bin_str))
print (d)
# deque(['1', '0', '1', '0'], maxlen=4)
d.rotate()
print (d)
# deque(['0', '1', '0', '1'], maxlen=4)
</code></pre>
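<p>To turn the rotated deque back into a string, use <code>"".join(d)</code>.</p>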
|
python|numpy
| 2
|
374,472
| 51,291,470
|
Slice pandas dataframe columns with an array?
|
<p>This question refers to the <a href="https://stackoverflow.com/questions/51271709/interpolate-pandas-df">previous post</a>:</p>
<p>Where I have a dataframe and an array of values on which I want to interpolate:</p>
<pre><code>df_new = pd.DataFrame(np.random.randn(5,7), columns=[402.3, 407.2, 412.3, 415.8, 419.9, 423.5, 428.3])
wl = np.array([400.0, 408.2, 412.5, 417.2, 420.5, 423.3, 425.0])
</code></pre>
<p>The additional question I want to ask is HOW to slice the new dataframe:</p>
<pre><code>df_int = df_new.reindex(columns=df_new.columns.union(wl)).interpolate(axis=1, limit_direction='both')
</code></pre>
<p>So it would contain ONLY the columns from the array <strong>wl</strong>?</p>
<p>Note that the real dataset I'm using contains 480 columns, so I need something done automatically and not just assign separate values of each column.</p>
<p>I haven't found any example of such slicing in Stack Overflow, but perhaps I'm missing something</p>
|
<p>IIUC, you could simply do column filtering like this.</p>
<pre><code>df_int[wl]
</code></pre>
<p>Output:</p>
<pre><code> 400.0 408.2 412.5 417.2 420.5 423.3 425.0
0 0.293797 0.383745 0.424941 0.707308 0.793880 -0.233975 0.175342
1 1.306332 -0.872758 -0.301987 -0.683450 -0.534648 0.001957 0.940651
2 -0.477284 -0.076156 -0.268190 0.370769 0.434909 -0.235272 0.285097
3 -1.317292 -0.588243 -0.036146 1.169727 0.665479 0.831551 0.839762
4 -0.075600 0.166476 0.318865 0.128501 -1.167822 -1.533821 -0.795002
</code></pre>
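<p>This works because <code>reindex(columns=df_new.columns.union(wl))</code> guarantees that every value of <code>wl</code> is present as a column of <code>df_int</code>, so <code>df_int[wl]</code> selects exactly those interpolated columns.</p>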
|
python|pandas
| 1
|
374,473
| 51,209,169
|
Matplotlib pdf Output
|
<p>I'm new to matplotlib and want to use the graphics in LaTeX.
There is a visual output as a graphic, but:</p>
<p>Why is there no PDF output?</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os #to remove a file
import datetime
from matplotlib.backends.backend_pdf import PdfPages
#######################
Val1 = [1,2,3,4,5,6,7,8,9,9,5,5] # in kWh
Val2 = [159,77,1.716246,2,4,73,128,289,372,347,354,302] #in m³
index = ['Apr', 'Mai', 'Jun', 'Jul','Aug','Sep','Okt','Nov','Dez','Jan', 'Feb', 'Mrz']
df = pd.DataFrame({'Val1': Val1,'Val2': Val2}, index=index)
with PdfPages('aas2s.pdf') as pdf:
plt.rc('text', usetex=True)
params = {'text.latex.preamble' : [r'\usepackage{siunitx}', r'\usepackage{amsmath}']}
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Liberation'
plt.rcParams.update(params)
plt.figure(figsize=(8, 6))
plt.rcParams.update({'font.size': 12})
ax = df[['Val1','Val2']].plot.bar(color=['navy','maroon'])
plt.xlabel('X Achse m')
plt.ylabel('Y Achse Taxi quer ')
plt.legend(loc='upper left', frameon=False)
plt.title('Franz jagt im komplett verwahrlosten Taxi quer durch Bayern')
plt.show()
pdf.savefig()
plt.close()
</code></pre>
<p>The error is called: ValueError: No such figure: None</p>
<p>And how do I get a second y-axis for the second value?</p>
|
<p>In general, <code>savefig</code> should be called <em>before</em> <code>show</code>. See e.g.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/9012487/matplotlib-pyplot-savefig-outputs-blank-image">Matplotlib (pyplot) savefig outputs blank image</a></li>
<li><a href="https://stackoverflow.com/questions/51178853/how-come-pyplot-from-matplotlib-doesnt-allow-you-to-save-an-image-after-you-sho">How come pyplot from Matplotlib doesn't allow you to save an image after you show it?</a> (with some more explanation)</li>
</ul>
<p>Second, you want to produce the plot inside the created figure, not create a new one, hence use </p>
<pre><code>fig, ax = plt.subplots(figsize=...)
df.plot(..., ax=ax)
</code></pre>
<p>and later call the methods of the axes (object-oriented style).</p>
<p>In total,</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
#######################
Val1 = [1,2,3,4,5,6,7,8,9,9,5,5] # in kWh
Val2 = [159,77,1.716246,2,4,73,128,289,372,347,354,302] #in m³
index = ['Apr', 'Mai', 'Jun', 'Jul','Aug','Sep','Okt','Nov','Dez','Jan', 'Feb', 'Mrz']
df = pd.DataFrame({'Val1': Val1,'Val2': Val2}, index=index)
with PdfPages('aas2s.pdf') as pdf:
plt.rc('text', usetex=True)
params = {'text.latex.preamble' : [r'\usepackage{siunitx}', r'\usepackage{amsmath}']}
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times New Roman'
plt.rcParams.update(params)
fig, ax = plt.subplots(figsize=(8, 6))
plt.rcParams.update({'font.size': 12})
df[['Val1','Val2']].plot.bar(color=['navy','maroon'], ax=ax)
ax.set_xlabel('X Achse m')
ax.set_ylabel('Y Achse Taxi quer ')
ax.legend(loc='upper left', frameon=False)
ax.set_title('Franz jagt im komplett verwahrlosten Taxi quer durch Bayern')
pdf.savefig()
plt.show()
plt.close()
</code></pre>
<p>Now if you still need to save the figure after is it being shown, you can do so by specifically using it as argument to <code>savefig</code></p>
<pre><code>plt.show()
pdf.savefig(fig)
</code></pre>
|
pandas|pdf|matplotlib
| 2
|
374,474
| 51,212,158
|
How to find angle between GPS coordinates in pandas dataframe Python
|
<p>I have dataframe with measurements coordinates and cell coordinates.</p>
<p>I need to find for each row angle (azimuth angle) between a line that connects these two points and the north pole.</p>
<p>df:</p>
<pre><code>id cell_lat cell_long meas_lat meas_long
1 53.543643 11.636235 53.44758 11.03720
2 52.988823 10.0421645 53.03501 9.04165
3 54.013442 9.100981 53.90384 10.62370
</code></pre>
<p>I have found some code online, but none if that really helps me get any closer to the solution.</p>
<p>I have used <a href="https://gist.github.com/jeromer/2005586" rel="nofollow noreferrer">this</a> function but not sure if get it right and I guess there is simplier solution.</p>
<p>Any help or hint is welcomed, thanks in advance.</p>
|
<p>The trickiest part of this problem is converting geodetic (latitude, longitude) coordinates to Cartesian (x, y, z) coordinates. If you look at <a href="https://en.wikipedia.org/wiki/Geographic_coordinate_conversion" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Geographic_coordinate_conversion</a> you can see how to do this, which involves choosing a reference system. Assuming we choose ECEF (<a href="https://en.wikipedia.org/wiki/ECEF" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/ECEF</a>), the following code calculates the angles you are looking for:</p>
<pre><code>def vector_calc(lat, long, ht):
'''
Calculates the vector from a specified point on the Earth's surface to the North Pole.
'''
a = 6378137.0 # Equatorial radius of the Earth
b = 6356752.314245 # Polar radius of the Earth
e_squared = 1 - ((b ** 2) / (a ** 2)) # e is the eccentricity of the Earth
n_phi = a / (np.sqrt(1 - (e_squared * (np.sin(lat) ** 2))))
x = (n_phi + ht) * np.cos(lat) * np.cos(long)
y = (n_phi + ht) * np.cos(lat) * np.sin(long)
z = ((((b ** 2) / (a ** 2)) * n_phi) + ht) * np.sin(lat)
# ECEF coordinates of the North Pole: it lies on the z-axis at the polar radius
x_npole = 0.0
y_npole = 0.0
z_npole = 6356752.314245
v = ((x_npole - x), (y_npole - y), (z_npole - z))
return v
def angle_calc(lat1, long1, lat2, long2, ht1=0, ht2=0):
'''
Calculates the angle between the vectors from 2 points to the North Pole.
'''
# Convert from degrees to radians
lat1_rad = (lat1 / 180) * np.pi
long1_rad = (long1 / 180) * np.pi
lat2_rad = (lat2 / 180) * np.pi
long2_rad = (long2 / 180) * np.pi
v1 = vector_calc(lat1_rad, long1_rad, ht1)
v2 = vector_calc(lat2_rad, long2_rad, ht2)
# The angle between two vectors, vect1 and vect2 is given by:
# arccos[vect1.vect2 / |vect1||vect2|]
dot = np.dot(v1, v2) # The dot product of the two vectors
v1_mag = np.linalg.norm(v1) # The magnitude of the vector v1
v2_mag = np.linalg.norm(v2) # The magnitude of the vector v2
theta_rad = np.arccos(dot / (v1_mag * v2_mag))
# Convert radians back to degrees
theta = (theta_rad / np.pi) * 180
return theta
angles = []
for row in range(df.shape[0]):
cell_lat = df.iloc[row]['cell_lat']
cell_long = df.iloc[row]['cell_long']
meas_lat = df.iloc[row]['meas_lat']
meas_long = df.iloc[row]['meas_long']
angle = angle_calc(cell_lat, cell_long, meas_lat, meas_long)
angles.append(angle)
</code></pre>
<p>This will read each row out of your dataframe, calculate the angle and append it to the list angles. Obviously you can do what you like with those angles after they've been calculated.</p>
<p>Hope that helps!</p>
|
python|pandas|dataframe|angle|azimuth
| 4
|
374,475
| 51,136,836
|
separating each stock data into individual dataframe
|
<p>I took historical data of 100 stocks. It is a single file with all tickers and their corresponding data. How do I loop so that each ticker's data gets separated into a dataframe with its own name? I've tried this but it doesn't work.</p>
<pre><code>for ticker in stocks:
print(ticker)
tick=pd.DataFrame(data.loc[(data.ticker==ticker)])
tick['returns']=tick.close.pct_change()
value='daily_returns_'+str(ticker)
value=tick[['date']]
value['returns']=tick['returns']
print(value)
ex=str(value)+'.csv'
value.to_csv(ex)
</code></pre>
|
<p>Creating variables with arbitrary names is considered poor practice. Instead, you can use a dictionary to hold a variable number of DataFrames:</p>
<pre><code>dfs = dict(tuple(data.groupby('ticker')))
</code></pre>
<p>Then, if you wish, export to csv via iterating dictionary items:</p>
<pre><code>for k, v in dfs.items():
v.to_csv(k+'.csv', index=False)
</code></pre>
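<p>Each ticker's DataFrame can then be looked up directly, e.g. <code>dfs['AAPL']</code> (the ticker symbol here is just an example).</p>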
|
python|pandas|dataframe|stocks|ticker
| 1
|
374,476
| 51,535,357
|
How to interpolate data and angles with PANDAS
|
<p>I have a simple dataframe <code>df</code> that contains three columns:</p>
<ul>
<li><code>Time</code>: expressed in seconds</li>
<li><code>A</code>: set of values that can vary between <em>-inf</em> to <em>+inf</em></li>
<li><code>B:</code> set of angles (degrees) which range between <em>0</em> and <em>359</em> </li>
</ul>
<p>Here is the dataframe</p>
<pre><code>df = pd.DataFrame({'Time':[0,12,23,25,44,50], 'A':[5,7,9,8,11,6], 'B':[300,358,4,10,2,350]})
</code></pre>
<p>And it looks like this:</p>
<pre><code> Time A B
0 0 5 300
1 12 7 358
2 23 9 4
3 25 8 10
4 44 11 2
5 50 6 350
</code></pre>
<p>My idea is to interpolate the data from 0 to 50 seconds and I was able to achieve my goal using the following lines of code:</p>
<pre><code>y = pd.DataFrame({'Time':list(range(df['Time'].iloc[0], df['Time'].iloc[-1]))})
df = pd.merge(left=y, right=df, on='Time', how='left').interpolate()
</code></pre>
<p><strong>Problem</strong>: even though column A is interpolated correctly, column B is wrong because the interpolation of an angle between <em>360</em> degrees is not performed! Here is an example:</p>
<pre><code> Time A B
12 12 7.000000 358.000000
13 13 7.181818 325.818182
14 14 7.363636 293.636364
15 15 7.545455 261.454545
16 16 7.727273 229.272727
17 17 7.909091 197.090909
18 18 8.090909 164.909091
19 19 8.272727 132.727273
20 20 8.454545 100.545455
21 21 8.636364 68.363636
22 22 8.818182 36.181818
23 23 9.000000 4.000000
</code></pre>
<p><strong>Question</strong>: can you suggest me a smart and efficient way to solve this issue and being able to interpolate correctly the angles between <em>0/360</em> degrees?</p>
|
<p>You should be able to use the method described in <a href="https://stackoverflow.com/questions/27295494/bounded-circular-interpolation-in-python">this question</a> for the angle column:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'Time':[0,12,23,25,44,50], 'A':[5,7,9,8,11,6], 'B':[300,358,4,10,2,350]})
df['B'] = np.rad2deg(np.unwrap(np.deg2rad(df['B'])))
y = pd.DataFrame({'Time':list(range(df['Time'].iloc[0], df['Time'].iloc[-1]))})
df = pd.merge(left=y, right=df, on='Time', how='left').interpolate()
df['B'] %= 360
print(df)
</code></pre>
<p>Output:</p>
<pre><code> Time A B
0 0 5.000000 300.000000
1 1 5.166667 304.833333
2 2 5.333333 309.666667
3 3 5.500000 314.500000
4 4 5.666667 319.333333
5 5 5.833333 324.166667
6 6 6.000000 329.000000
7 7 6.166667 333.833333
8 8 6.333333 338.666667
9 9 6.500000 343.500000
10 10 6.666667 348.333333
11 11 6.833333 353.166667
12 12 7.000000 358.000000
13 13 7.181818 358.545455
14 14 7.363636 359.090909
15 15 7.545455 359.636364
16 16 7.727273 0.181818
17 17 7.909091 0.727273
18 18 8.090909 1.272727
19 19 8.272727 1.818182
20 20 8.454545 2.363636
21 21 8.636364 2.909091
22 22 8.818182 3.454545
23 23 9.000000 4.000000
24 24 8.500000 7.000000
25 25 8.000000 10.000000
26 26 8.157895 9.578947
27 27 8.315789 9.157895
28 28 8.473684 8.736842
29 29 8.631579 8.315789
30 30 8.789474 7.894737
31 31 8.947368 7.473684
32 32 9.105263 7.052632
33 33 9.263158 6.631579
34 34 9.421053 6.210526
35 35 9.578947 5.789474
36 36 9.736842 5.368421
37 37 9.894737 4.947368
38 38 10.052632 4.526316
39 39 10.210526 4.105263
40 40 10.368421 3.684211
41 41 10.526316 3.263158
42 42 10.684211 2.842105
43 43 10.842105 2.421053
44 44 11.000000 2.000000
45 45 11.000000 2.000000
46 46 11.000000 2.000000
47 47 11.000000 2.000000
48 48 11.000000 2.000000
49 49 11.000000 2.000000
</code></pre>
|
python|pandas|dataframe|interpolation|angle
| 3
|
374,477
| 51,281,477
|
Looping through dates
|
<p>I am trying to loop through some dates I created, but I get an error. This is the code:</p>
<pre><code>q3_2018 = datetime.date(2018,9,30)
q4_2018 = datetime.date(2018,12,31)
q1_2019 = datetime.date(2019,3,31)
q2_2019 = datetime.date(2018,6,30)
dates = [q3_2018, q4_2018,q1_2019,q2_2019]
values = []
for d in dates:
v = fin[fin['Date of Completion 1 payment']<d]['1st payment amount:\n(70%)'].sum()
values.append(v)
</code></pre>
<p>where fin['Date of Completion 1 payment'] is a pandas column with payment dates and fin['1st payment amount:\n(70%)'] is a pandas column with amount paid.</p>
<p>I get the following error</p>
<blockquote>
<p>TypeError: type object 2018-09-30</p>
</blockquote>
<p>where's the mistake?</p>
|
<p>I suggest convert <code>date</code>s to <code>datetimes</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> and then for select column use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>dates = pd.to_datetime([q3_2018, q4_2018,q1_2019,q2_2019])
print (dates)
DatetimeIndex(['2018-09-30', '2018-12-31', '2019-03-31', '2018-06-30'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>Or compare by <code>string</code>s:</p>
<pre><code>dates = pd.to_datetime([q3_2018, q4_2018,q1_2019,q2_2019]).strftime('%Y-%m-%d')
print (dates)
Index(['2018-09-30', '2018-12-31', '2019-03-31', '2018-06-30'], dtype='object')
</code></pre>
<p>Or:</p>
<pre><code>dates = ['2018-09-30', '2018-12-31' '2019-03-31','2018-06-30']
</code></pre>
<hr>
<pre><code>values = []
for d in dates:
v = fin.loc[fin['Date of Completion 1 payment']<d, '1st payment amount:\n(70%)'].sum()
values.append(v)
</code></pre>
<p>List comprehension solution:</p>
<pre><code>values = [fin.loc[fin['Date of Completion 1 payment']<d, '1st payment amount:\n(70%)'].sum()
for d in dates]
</code></pre>
<hr>
<p>Or upgrade to last version of pandas for compare with dates, check <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#whatsnew-0231-fixed-regressions" rel="nofollow noreferrer">here</a>:</p>
<pre><code># 0.22.0... Silently coerce the datetime.date
>>> Series(pd.date_range('2017', periods=2)) == datetime.date(2017, 1, 1)
0 True
1 False
dtype: bool
# 0.23.0... Do not coerce the datetime.date
>>> Series(pd.date_range('2017', periods=2)) == datetime.date(2017, 1, 1)
0 False
1 False
dtype: bool
# 0.23.1... Coerce the datetime.date with a warning
>>> Series(pd.date_range('2017', periods=2)) == datetime.date(2017, 1, 1)
/bin/python:1: FutureWarning: Comparing Series of datetimes with 'datetime.date'. Currently, the
'datetime.date' is coerced to a datetime. In the future pandas will
not coerce, and the values not compare equal to the 'datetime.date'.
To retain the current behavior, convert the 'datetime.date' to a
datetime with 'pd.Timestamp'.
#!/bin/python3
0 True
1 False
dtype: bool
</code></pre>
|
python|pandas|loops|date|datetime
| 0
|
374,478
| 51,352,699
|
Pairwise Euclidean distance with pandas ignoring NaNs
|
<p>I start with a dictionary, which is the way my data was already formatted:</p>
<pre><code>import pandas as pd
dict2 = {'A': {'a':1.0, 'b':2.0, 'd':4.0}, 'B':{'a':2.0, 'c':2.0, 'd':5.0},
'C':{'b':1.0,'c':2.0, 'd':4.0}}
</code></pre>
<p>I then convert it to a pandas dataframe:</p>
<pre><code>df = pd.DataFrame(dict2)
print(df)
A B C
a 1.0 2.0 NaN
b 2.0 NaN 1.0
c NaN 2.0 2.0
d 4.0 5.0 4.0
</code></pre>
<p>Of course, I can get the difference one at a time by doing this:</p>
<pre><code>df['A'] - df['B']
Out[643]:
a -1.0
b NaN
c NaN
d -1.0
dtype: float64
</code></pre>
<p>I figured out how to loop through and calculate A-A, A-B, A-C:</p>
<pre><code>for column in df:
print(df['A'] - df[column])
a 0.0
b 0.0
c NaN
d 0.0
Name: A, dtype: float64
a -1.0
b NaN
c NaN
d -1.0
dtype: float64
a NaN
b 1.0
c NaN
d 0.0
dtype: float64
</code></pre>
<p>What I would like to do is iterate through the columns so as to calculate |A-B|, |A-C|, and |B-C| and store the results in another dictionary.</p>
<p>I want to do this so as to calculate the Euclidean distance between all combinations of columns later on. If there is an easier way to do this I would like to see it as well. Thank you.</p>
|
<p>You can use numpy broadcasting to compute vectorised Euclidean distance (L2-norm), ignoring NaNs using <code>np.nansum</code>. </p>
<pre><code>i = df.values.T
j = np.nansum((i - i[:, None]) ** 2, axis=2) ** .5
</code></pre>
<p>If you want a DataFrame representing a distance matrix, here's what that would look like:</p>
<pre><code>df = (lambda v, c: pd.DataFrame(v, c, c))(j, df.columns)
df
A B C
A 0.000000 1.414214 1.0
B 1.414214 0.000000 1.0
C 1.000000 1.000000 0.0
</code></pre>
<p><code>df.iloc[i, j]</code> represents the distance between the i<sup>th</sup> and j<sup>th</sup> column in the original DataFrame.</p>
|
python|pandas|numpy|dataframe|euclidean-distance
| 6
|
374,479
| 51,544,356
|
how to replace a dataframe row with NaN if it doesn't contain a specific string
|
<p><a href="https://i.stack.imgur.com/3HDdv.png" rel="nofollow noreferrer">dataframe</a></p>
<p>I am working with the data frame shown in the link above. I want to make all rows that do not contain the words 'Yes' or 'No' be replaced with NaN.</p>
|
<pre><code>df.Met = np.where(~df.Met.isin(['Yes', 'No']), np.nan, df.Met)
</code></pre>
<p>Try this: <code>np.where</code> keeps the original value wherever it is 'Yes' or 'No' and substitutes <code>NaN</code> everywhere else.</p>
|
python|pandas|dataframe|replace|nan
| 0
|
374,480
| 51,219,358
|
pandas,read_excel, usecols with list input generating an empty dataframe
|
<p>Actually I want to read only a specific column from Excel into a Python dataframe.
My code is:</p>
<pre><code>import pandas as pd
file = pd.read_excel("3_Plants sorted on PLF age cost.xlsx",sheet_name="Age>25",index_col="Developer",usecols="Name of Project")
</code></pre>
<p>but i am getting an empty dataframe as output, however when i use </p>
<pre><code>import pandas as pd
file = pd.read_excel("3_Plants sorted on PLF age cost.xlsx",sheet_name="Age>25",index_col="Developer",usecols=2)
</code></pre>
<p>I get the desired result.</p>
<p>As I have to do it for many files in a loop, and the location of the column keeps changing, I have to go by its name and not its location.</p>
<p>Further, I can't load the full file into a dataframe and use <code>df["column_name"]</code>, as my Excel file is too large (150 MB); this makes my process very slow and sometimes gives a memory error.</p>
<p>Thanks in advance. </p>
|
<p>As mentioned by Tomas Farias, <code>usecols</code> doesn't take column names. A possible approach is to read a few rows to find the location of the column, and then read the file a second time.</p>
<pre><code>import pandas as pd
col = pd.read_excel("3_Plants sorted on PLF age cost.xlsx",sheet_name="Age>25", nrows=2).columns
k=col.get_loc('Name of Project')+1
file = pd.read_excel("3_Plants sorted on PLF age cost.xlsx", sheet_name="Age>25", index_col="Developer", usecols=k)
</code></pre>
|
python|pandas
| 0
|
374,481
| 51,373,222
|
Pandas dataframe.set_index() deletes previous index and column
|
<p>I just came across a strange phenomenon with Pandas DataFrames, when setting index using DataFrame.set_index('some_index') the old column that was also an index is deleted! Here is an example:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'month': [1, 4, 7, 10],'year': [2012, 2014, 2013, 2014],'sale':[55, 40, 84, 31]})
df_mn=df.set_index('month')
>>> df_mn
sale year
month
1 55 2012
4 40 2014
7 84 2013
10 31 2014
</code></pre>
<p>Now I change the index to year:</p>
<pre><code>df_mn.set_index('year')
sale
year
2012 55
2014 40
2013 84
2014 31
</code></pre>
<p>.. and the month column was removed with the index. This is very irritating because I just wanted to swap the DataFrame index.</p>
<p>Is there a way to not have the previous column that was an index from being deleted? Maybe through something like: DataFrame.set_index('new_index',delete_previous_index=False)</p>
<p>Thanks for any advice</p>
|
<p>You can do the following</p>
<pre><code>>>> df_mn.reset_index().set_index('year')
month sale
year
2012 1 55
2014 4 40
2013 7 84
2014 10 31
</code></pre>
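<p><code>reset_index()</code> first moves the current index (<code>month</code>) back into an ordinary column, which is why it survives the subsequent <code>set_index('year')</code>.</p>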
|
python|pandas|dataframe|indexing
| 7
|
374,482
| 51,291,382
|
Python Pandas: Breaking a list or series into columns of different sizes
|
<p>I have a single series with 2 columns that looks like </p>
<pre><code>1 5.3
2 2.5
3 1.6
4 3.8
5 2.8
</code></pre>
<p>...and so on. I would like to take this series and break it into 6 columns of different sizes. So (for example) the first column would have 30 items, the next 31, the next 28, and so on. I have seen plenty of examples for same-sized columns but have not seen a way to make multiple custom-sized columns. </p>
|
<p>Based on the comments, you can try using the index of the series to fill your dataframe:</p>
<pre><code>s = pd.Series([5, 2, 1, 3, 2])
df = pd.DataFrame([], index=s.index)
df['col1'] = s.loc[:2]
df['col2'] = s.loc[3:3]
df['col3'] = s.loc[4:]
</code></pre>
<p>Result: </p>
<pre><code> col1 col2 col3
0 5.0 NaN NaN
1 2.0 NaN NaN
2 1.0 NaN NaN
3 NaN 3.0 NaN
4 NaN NaN 2.0
</code></pre>
|
python|pandas|dataframe|series
| 1
|
374,483
| 51,557,344
|
Extract nominal and standard deviation from ufloat inside a panda dataframe
|
<p>For convenience I am using pandas dataframes to perform an uncertainty propagation on a large set of data.</p>
<p>I then wish to plot the nominal value of my data set, but something like <code>myDF['colLabel'].n</code> won't work. How do I extract the nominal values and standard deviations from a dataframe in order to plot the nominal values with error bars?</p>
<p>Here is a MWE to be more consistent:</p>
<pre><code>#%% MWE
import pandas as pd
from uncertainties import ufloat
import matplotlib.pyplot as plt
# building of a dataframe filled with ufloats
d = {'value1': [ufloat(1,.1),ufloat(3,.2),ufloat(5,.6),ufloat(8,.2)], 'value2': [ufloat(10,5),ufloat(50,2),ufloat(30,3),ufloat(5,1)]}
df = pd.DataFrame(data = d)
# plot of value2 vs. value1 with errobars.
plt.plot(x = df['value1'].n, y = df['value2'].n)
plt.errorbar(x = df['value1'].n, y = df['value2'].n, xerr = df['value1'].s, yerr = df['value2'].s)
# obviously .n and .s won't work.
</code></pre>
<p>I get as an error <code>AttributeError: 'Series' object has no attribute 'n'</code> which suggest to extract the values from each series, is there a shorter way to do it than going through a loop which would separate the nominal and std values into two separated vectors?</p>
<p>Thanks.</p>
<p>EDIT: Using those functions from <a href="https://pythonhosted.org/uncertainties/user_guide.html" rel="nofollow noreferrer">the package</a> won't work either: <code>uncertainties.nominal_value(df['value2'])</code> and <code>uncertainties.std_dev(df['value2'])</code></p>
|
<p>Actually solved it with the
<code>unumpy.nominal_values(arr)</code> and <code>unumpy.std_devs(arr)</code> functions from uncertainties.</p>
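<p>For reference, a minimal sketch (based on the MWE above) of pulling out plain float arrays and feeding them to <code>errorbar</code>:</p>
<pre><code>from uncertainties import unumpy
import matplotlib.pyplot as plt
# plain float arrays extracted from the columns of ufloats
x_nom = unumpy.nominal_values(df['value1'].values)
x_err = unumpy.std_devs(df['value1'].values)
y_nom = unumpy.nominal_values(df['value2'].values)
y_err = unumpy.std_devs(df['value2'].values)
plt.errorbar(x_nom, y_nom, xerr=x_err, yerr=y_err, fmt='o')
plt.show()
</code></pre>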
|
python|pandas|uncertainty
| 3
|
374,484
| 51,403,274
|
Merging two dataframes with multiples in one of them
|
<p>I am merging two dataframes under the common header, "COUNTERPARTYNAME". So below is an example of my df5:</p>
<pre><code> CONTRACT COUNTERPARTYNAME TERM
0 450 A 300
1 400 A 350
2 270 B 600
3 360 C 300
...
</code></pre>
<p>And df6:</p>
<pre><code> COUNTERPARTYNAME CBA DAN
0 A 500 10
1 B 300 3
2 C 400 9
3 D 650 10
...
</code></pre>
<p>But essentially both dataframes share the COUNTERPARTYNAME, but there are multiples of certain cpty's in df5. I'm trying to merge the two, such that they are merged in a new df, and for every cpty, the CBA and DAN will show up next to it, including for the multiples. </p>
<p>My expected result would be like so:</p>
<pre><code> CONTRACT COUNTERPARTYNAME TERM CBA DAN
0 450 A 300 500 10
1 400 A 350 500 10
2 270 B 600 300 3
3 360 C 300 400 9
...
</code></pre>
<p>I understand how to merge it for one on one's, like if there was only one A, B, C, etc... in df5, just like there in df6.</p>
<p>However, when I've tried: </p>
<pre><code>df7=pd.merge(df5, df6),
</code></pre>
<p>hoping they would merge on the COUNTERPARTYNAME, and then print it, a lot of my data on certain cpty's disappears, while other cpty's pop up more than they actually showed up in df5. For example, I have 2 A's and 2 B's in df5, but when I merge, I for some reason now have 0 A's and like 6 B's. The CBA and DAN are right, and corresponding, but I feel like I've lost some of my data for some reason. Is there a way to right this? Am I doing the wrong type of merge?</p>
|
<p>You need to explicitly specify which column you want to merge on. You can use <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a> as below.</p>
<pre><code>df7 = df5.merge(df6, on=['COUNTERPARTYNAME'])
</code></pre>
<p>Output:</p>
<pre><code> CONTRACT COUNTERPARTYNAME TERM CBA DAN
0 450 A 300 500 10
1 400 A 350 500 10
2 270 B 600 300 3
3 360 C 300 400 9
</code></pre>
|
python|pandas|merge
| 0
|
374,485
| 51,349,215
|
pandas read_html clean up before or after read
|
<p>I'm trying to get the last table in this html into a data table. </p>
<p>Here is the code:</p>
<pre><code>import pandas as pd
a=pd.read_html('https://www.sec.gov/Archives/edgar/data/1303652/000130365218000016/a991-01q12018.htm')
print (a[23])
</code></pre>
<p>As you can see it reads it in, but needs to be cleaned up. My question is for someone who has experience with using this function. Is it better to read it in and then try to clean it up afterwards or before? And if anybody knows how to do it, please post some code. Thanks.</p>
|
<p><code>Code</code> below extracts the table using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow noreferrer"><code>pd.read_html()</code></a> from a website. Additional parameters could be tuned further depending on the <code>table format</code>.</p>
<pre><code># Import libraries
import pandas as pd
# Read table
link = 'https://www.sec.gov/Archives/edgar/data/1303652/000130365218000016/a991-01q12018.htm'
a=pd.read_html(link, header=None, skiprows=1)
# Save the dataframe
df = a[23]
# Remove NaN rows/columns
col_list = df.iloc[1]
df = df.loc[4:,[0,1,3,5,7,9,11]] # adjusted column names
df.columns = col_list[:len(df.columns)]
df.head(7)
</code></pre>
<p>Note: Empty cells in the original table are replaced with NaN's</p>
<p><a href="https://i.stack.imgur.com/ZXH1B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZXH1B.png" alt="enter image description here"></a></p>
<p>Top rows from the original table from website:
<a href="https://i.stack.imgur.com/m3BQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m3BQ3.png" alt="enter image description here"></a></p>
|
python|html|pandas
| 1
|
374,486
| 51,287,665
|
Get index of where group starts and ends pandas
|
<p>I grouped my data by month. Now I need to know at which observation/index my group starts and ends.
What I have is the following output, where the second column represents the number of observations in each month:</p>
<pre><code>date
01 145
02 2232
03 12785
04 16720
Name: date, dtype: int64
</code></pre>
<p>with this code: </p>
<pre><code>leave.groupby([leave['date'].dt.strftime('%m')])['date'].count()
</code></pre>
<p>What I want, though, is an index range I could access later. Something like this (the format doesn't really matter and I don't mind if it returns a list or a data frame):</p>
<pre><code>date
01 0 - 145
02 146 - 2378
03 2378 - 15163
04 15164 - 31884
</code></pre>
|
<p>Try the following, using <code>shift</code>:</p>
<pre><code>df['data'] = df['data'].shift(1).add(1).fillna(0).apply(int).apply(str) + ' - ' + df['data'].apply(str)
</code></pre>
<p>OUTPUT:</p>
<pre><code> data
date
1 0 - 145
2 146 - 2232
3 2233 - 12785
4 12786 - 16720
5 16721 - 30386
6 30387 - 120157
</code></pre>
|
python|pandas|date|indexing|grouping
| 1
|
374,487
| 51,429,286
|
Generating new column of data by aggregating specific columns of dataset
|
<p>My dataset has a few interesting columns that I want to aggregate, and hence create a metric that I can use to do some more analysis. </p>
<p>The algorithm I wrote takes around ~3 seconds to finish, so I was wondering if there is a more efficient way to do this.</p>
<pre><code>def financial_score_calculation(df, dictionary_of_parameters):
for parameter in dictionary_of_parameters:
for i in dictionary_of_parameters[parameter]['target']:
index = df.loc[df[parameter] == i].index
for i in index:
old_score = df.at[i, 'financialliteracyscore']
new_score = old_score + dictionary_of_parameters[parameter]['score']
df.at[i, 'financialliteracyscore'] = new_score
for i in df.index:
old_score = df.at[i, 'financialliteracyscore']
new_score = (old_score/27.0)*100 #converting score to percent value
df.at[i, 'financialliteracyscore'] = new_score
return df
</code></pre>
<p>Here is a truncated version of the dictionary_of_parameters:</p>
<pre><code>dictionary_of_parameters = {
# money management parameters
"SatisfactionLevelCurrentFinances": {'target': [8, 9, 10], 'score': 1},
"WillingnessFinancialRisk": {'target': [8, 9, 10], 'score': 1},
"ConfidenceLevelToEarn2000WithinMonth": {'target': [1], 'score': 1},
"DegreeOfWorryAboutRetirement": {'target': [1], 'score': 1},
"GoodWithYourMoney?": {'target': [7], 'score': 1}
}
</code></pre>
<p>EDIT: generating toy data for df</p>
<pre><code>df = pd.DataFrame(columns = dictionary_of_parameters.keys())
df['financialliteracyscore'] = 0
for i in range(10):
df.loc[i] = dict(zip(df.columns,2*i*np.ones(6)))
</code></pre>
|
<p>Note that in Pandas you can index in ways other than element-wise with <code>at</code>. In the four-liner below, <code>index</code> is an Index object which can then be used for indexing with <code>loc</code>.</p>
<pre><code>for parameter in dictionary_of_parameters:
index = df[df[parameter].isin(dictionary_of_parameters[parameter]['target'])].index
df.loc[index,'financialliteracyscore'] += dictionary_of_parameters[parameter]['score']
df['financialliteracyscore'] = df['financialliteracyscore'] /27.0*100
</code></pre>
<p>Here's a reference although I personally never found it useful in my earlier days of programming... <a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/indexing.html</a></p>
|
python|pandas|dataframe
| 0
|
374,488
| 51,342,528
|
Hyperas: 'List' object has no attribute 'shape'
|
<p>I'm trying to read in some data from a TSV file for use with <a href="https://github.com/maxpumperla/hyperas" rel="nofollow noreferrer">Hyperas</a>, but any way I do it, I seem to get the same error:</p>
<pre><code>Traceback (most recent call last):
File "/path/to/cnn_search.py", line 233, in <module>
trials=trials)
File "~/miniconda3/lib/python3.6/site-packages/hyperas/optim.py", line 67, in minimize
verbose=verbose)
File "~/miniconda3/lib/python3.6/site-packages/hyperas/optim.py", line 133, in base_minimizer
return_argmin=True),
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 312, in fmin
return_argmin=return_argmin,
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/base.py", line 635, in fmin
return_argmin=return_argmin)
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 325, in fmin
rval.exhaust()
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 204, in exhaust
self.run(self.max_evals - n_done, block_until_done=self.async)
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 178, in run
self.serial_evaluate()
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/fmin.py", line 97, in serial_evaluate
result = self.domain.evaluate(spec, ctrl)
File "~/miniconda3/lib/python3.6/site-packages/hyperopt/base.py", line 840, in evaluate
rval = self.fn(pyll_rval)
File "~/temp_model.py", line 218, in keras_fmin_fnct
AttributeError: 'list' object has no attribute 'shape'
</code></pre>
<p>From other questions that I've seen, this error is caused by using regular arrays where NumPy arrays should be used. So, I've tried to convert the TSV I'm reading to NumPy arrays at every step:</p>
<pre><code>from hyperas import optim
...
import numpy as np
import csv
def data():
dataPath="/path/to/fm.labeled.10m.txt"
X = []
Y = []
with open(dataPath) as dP:
reader = csv.reader(dP, delimiter="\t")
for row in reader:
#skip the first two columns, and the last column is labels
X.append(np.array(row[2:-1]))
#labels
Y.append(row[-1])
encoder = LabelBinarizer()
Y_categorical = encoder.fit_transform(Y)
#split data into test and train
X_train, X_test, Y_train, Y_test = train_test_split(X, Y_categorical, test_size=0.25)
X_train_np = np.array(X_train)
X_test_np = np.array(X_test)
Y_train_np = np.array([np.array(y) for y in Y_train])
Y_test_np = np.array([np.array(y) for y in Y_test])
return X_train_np, Y_train_np, X_test_np, Y_test_np
...
trials = Trials()
best_run, best_model = optim.minimize(model=model_name,
data=data,
algo=tpe.suggest,
max_evals=numRuns,
trials=trials)
</code></pre>
<p>I also imagine that there's a more efficient way to do this, without creating so many intermediate arrays—and that'd be great, because I'll be reading in millions of rows of data.</p>
<p>What am I doing wrong?</p>
<p><em>Edit</em>: <a href="https://github.com/hyperopt/hyperopt/wiki/FMin" rel="nofollow noreferrer">Hyperopt wiki</a> describes <code>Trials</code>. </p>
|
<p>Have you considered using <code>np.genfromtxt('your_file.tsv')</code>?
It works wonders for reading in csv and tsv data, and I have had great experiences with it lately. Also, you should probably supply more information on your specific problem (kind of data, layout etc.) if you need a more detailed answer.</p>
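<p>A minimal sketch of what that could look like for the tab-separated file in the question (the column layout is assumed from the original csv loop: first two columns skipped, last column holding the labels); note the <code>delimiter="\t"</code> argument. With the default float dtype, string columns simply become <code>nan</code> and are sliced away, so the label column would still be collected with the existing csv loop or a separate pass:</p>
<pre><code>import numpy as np
# read the whole TSV as a float ndarray; non-numeric cells become nan
data = np.genfromtxt(dataPath, delimiter="\t")
# features: drop the first two columns and the trailing label column
X = data[:, 2:-1]
</code></pre>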
|
python|csv|tensorflow|machine-learning|keras
| 1
|
374,489
| 51,313,086
|
Solving non-linear coupled differential equations in python
|
<p>I am working on simulation of a system that contains coupled differential equations. My main aim is to solve the mass balance in steady condition and feed the solution of steady state as initial guess for the dynamic simulation.
There are basically three state variables Ss,Xs and Xbh. The rate equations look like this:</p>
<blockquote>
<p>r1=µH(Ss/(Ks+Ss))(So/(Koh+So))Xbh+Kh(
(Xs⁄Xbh)/(Xs⁄Xbh+Kx))(So/(Koh+So))Xbh</p>
<p>r2=(1-fp)bH*Xbh-Kh( (Xs⁄Xbh)/(Xs⁄Xbh+Kx))(So/(Koh+So))Xbh </p>
<p>r3=µH(Ss/(Ks+Ss))(So/(Koh+So))Xbh-bH*Xbh</p>
</blockquote>
<p>And the main differential equations derived from mole balance for CSTR are:</p>
<blockquote>
<p>dSs/dt = Q(Ss_in-Ss)+r1*V</p>
<p>dXs/dt= Q(Xs_in-Xs)+r2*V</p>
<p>dXbh/dt= Q(Xbh_in-Xbh)+r2*V</p>
</blockquote>
<p>Here is my code till now:</p>
<pre><code>import numpy as np
from scipy.optimize import fsolve
parameter=dict()
parameter['u_h']=6.0
parameter['k_oh']=0.20
parameter['k_s']=20.0
parameter['k_h']=3.0
parameter['k_x']=0.03
parameter['Y_h']=0.67
parameter['f_p']=0.08
parameter['b_h']=0.62
Bulk_DO=2.0 #mg/L
#influent components:
infcomp=[56.53,182.9,16.625] #mgCOD/l
Q=684000 #L/hr
V=1040000 #l
def steady(z,*args):
Ss=z[0]
Xs=z[1]
Xbh=z[2]
def monod(My_S,My_K):
return My_S/(My_S+My_K)
#Conversion rates
#Conversion of Ss
r1=((-1/parameter['Y_h'])*parameter['u_h']*monod(Ss,parameter['k_s'])\
+parameter['k_h']*monod(Xs/Xbh,parameter['k_x'])*monod(Bulk_DO,parameter['k_oh']))\
*Xbh*monod(Bulk_DO,parameter['k_oh'])
#Conversion of Xs
r2=((1-parameter['f_p'])*parameter['b_h']-parameter['k_h']*monod(Xs/Xbh,parameter['k_x']))*Xbh
#Conversion of Xbh
r3=(parameter['u_h']*monod(Ss,parameter['k_s'])*monod(Bulk_DO,parameter['k_oh'])-parameter['b_h'])*Xbh
f=np.zeros(3)
f[0]=Q*(infcomp[0]-Ss)+r1*V
f[1]=Q*(infcomp[1]-Xs)+r2*V
f[2]=Q*(infcomp[2]-Xbh)+r3*V
return f
initial_guess=(0.1,0.1,0.1)
soln=fsolve(steady,initial_guess,args=parameter)
print (soln)
</code></pre>
<p>How can I plot steady condition like this?
<a href="https://i.stack.imgur.com/e5or9.jpg" rel="nofollow noreferrer">steady state plot</a>
The solution is also not what I want, since the equations imply a reduction in Ss and Xs and an increase in Xbh values over time. Also, one solution has a negative value, which is practically impossible.
Any suggestions would be highly appreciated. Thanks in advance !!</p>
|
<p>Here is a fix for the negative values in your solution: instead of using fsolve, use least_squares, which allows you to set bounds on the possible values.</p>
<p>In the top, import:</p>
<pre><code>from scipy.optimize import least_squares
</code></pre>
<p>And replace the fsolve statement with:</p>
<pre><code>soln = least_squares(steady, initial_guess, bounds=[(0,0,0),(np.inf,np.inf,np.inf)], args=parameter)
</code></pre>
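<p>Note that, unlike <code>fsolve</code>, <code>least_squares</code> returns an <code>OptimizeResult</code> object, so the solution vector is accessed as <code>soln.x</code>.</p>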
|
python|numpy|scipy|simulation
| 0
|
374,490
| 51,452,562
|
pandas Dataframe Consistently falling column values
|
<p>I am trying to find the <strong>number of consistently falling values</strong> in a column of my Pandas dataframe (df); the column name is <strong>"Values"</strong> and a data snippet is given below:</p>
<pre><code># sample data
time values
11:55 0.940353
12:00 0.919144
12:05 0.909454
12:10 0.904968
12:15 0.867957
12:20 0.801426
12:25 0.794733
12:30 0.770106
12:35 0.741985
12:40 0.671444
12:45 0.558297
12:50 0.496972
12:55 0.457803
13:00 0.446388
13:05 0.430217
13:10 0.379902
13:15 0.321828
13:20 0.298304
13:25 0.442079
13:30 0.634764
</code></pre>
<p>Note: in the above sample data, the values from 11:55 to 13:20 are of interest to me.
I need to report the number of falling values (18 in this case), and then the percentage that the lowest value represents of the starting/highest value.</p>
<p>I have tried to iterate my df using iterrows
"for index, row in df.iloc[20:].iterrows():" # my starting row being 20.
<br>I then tried to use a temp variable for comparison, but this is not giving me the desired results.</p>
<p>TIA</p>
|
<p>Here is a reproducible example with a function that accomplishes your goal. I'm assuming you might have multiple sequences of decreasing records, so the function returns the count of decreasing records and the percent of the total decrease from highest to lowest values for each decreasing sequence.</p>
<pre><code>import pandas as pd
def get_decreasing_count_and_percentage(df, min_count=7):
is_dec = df["value"].diff().lt(0).values
cnt = 0
starting_value = df["value"].values[0]
result = []
for i in range(len(is_dec)):
if is_dec[i]:
cnt += 1
else:
if cnt > 0:
percent = round((df["value"].values[i-1] / starting_value) * 100., 1)
result.append((cnt+1, percent))
cnt = 0
starting_value = df["value"].values[i]
result_df = pd.DataFrame.from_records(result, columns=['count', 'percentage'])
return result_df[result_df["count"] >= min_count]
</code></pre>
<p><strong>Original Data</strong></p>
<pre><code>times = ['11:55', '12:00', '12:05', '12:10', '12:15', '12:20', '12:25', '12:30', '12:35', '12:40', '12:45', '12:50', '12:55', '13:00', '13:05', '13:10', '13:15', '13:20', '13:25', '13:30']
values = [0.940353, 0.919144, 0.909454, 0.904968, 0.867957, 0.801426, 0.794733, 0.770106, 0.741985, 0.671444, 0.558297, 0.496972, 0.457803, 0.446388, 0.430217, 0.379902, 0.321828, 0.298304, 0.442079, 0.634764, ]
my_df = pd.DataFrame(data={'time': times, 'value': values})
print(get_decreasing_count_and_percentage(my_df))
</code></pre>
<p><em>Result</em></p>
<pre><code> count percentage
0 18 31.7
</code></pre>
<p><strong>Modified Data:</strong></p>
<pre><code>values2 = [0.940353, 0.919144, 0.909454, 0.904968, 0.867957, 0.801426, 0.894733, 0.770106, 0.741985, 0.671444, 0.558297, 0.496972, 0.457803, 0.446388, 0.430217, 0.379902, 0.321828, 0.298304, 0.442079, 0.634764]
my_df2 = pd.DataFrame(data={'time': times, 'value': values2})
print(get_decreasing_count_and_percentage(my_df2))
print(get_decreasing_count_and_percentage(my_df2, min_count=6))
</code></pre>
<p><em>Result</em></p>
<pre><code> count percentage
1 12 33.3
count percentage
0 6 85.2
1 12 33.3
</code></pre>
<hr>
<p><strong>UPDATE:</strong> Updated the code to address percent value requirement.</p>
<p><strong>UPDATE 2:</strong> The function now returns a data frame with a summary for all decreasing sequences. Also added a modified data set to show multiple decreasing sequences.</p>
<p><strong>UPDATE 3:</strong> Added min_count=7 default parameter to satisfy OP's requirement on a comment (i.e. report on sequences of length >= 7).</p>
|
python|pandas|dataframe|iteration
| 1
|
374,491
| 51,299,618
|
pandas groupby hour and calculate stockout time
|
<p>I have a time series like below:</p>
<pre><code>| datetime_create | quantity_old | quantity_new | quantity_diff | is_stockout |
| 2018-02-15 08:12:54.289 | 16 | 15 | -1 | False |
| 2018-02-15 08:14:10.619 | 15 | 13 | -2 | False |
| 2018-02-15 08:49:15.962 | 13 | 9 | -4 | False |
| 2018-02-15 08:51:04.740 | 9 | 8 | -1 | False |
| 2018-02-15 08:56:37.086 | 8 | 7 | -1 | False |
| 2018-02-15 09:23:22.858 | 7 | 5 | -2 | False |
| 2018-02-15 10:16:50.324 | 5 | 4 | -1 | False |
| 2018-02-15 10:19:25.071 | 4 | 3 | -1 | False |
| 2018-02-15 10:33:22.788 | 3 | 2 | -1 | False |
| 2018-02-15 10:33:34.125 | 2 | 0 | -2 | True |
| 2018-02-15 16:45:24.747 | 0 | 1 | 1 | False |
| 2018-02-15 16:48:29.996 | 1 | 0 | -1 | True |
| 2018-02-17 10:42:58.325 | 0 | 42 | 42 | False |
| 2018-02-17 10:47:07.380 | 42 | 41 | -1 | False |
| 2018-02-17 11:42:31.008 | 41 | 40 | -1 | False |
| 2018-02-17 11:48:31.070 | 40 | 39 | -1 | False |
| 2018-02-17 12:39:13.681 | 39 | 38 | -1 | False |
| 2018-02-17 12:48:00.286 | 38 | 37 | -1 | False |
| 2018-02-17 12:56:59.203 | 37 | 36 | -1 | False |
| 2018-02-17 13:18:12.285 | 36 | 35 | -1 | False |
| 2018-02-17 13:29:53.465 | 35 | 34 | -1 | False |
| 2018-02-17 14:54:55.810 | 34 | 33 | -1 | False |
| 2018-02-17 15:53:38.816 | 33 | 32 | -1 | False |
| 2018-02-17 16:28:08.076 | 32 | 31 | -1 | False |
| 2018-02-17 16:45:18.965 | 31 | 30 | -1 | False |
| 2018-02-17 16:59:11.111 | 30 | 29 | -1 | False |
| 2018-02-17 17:18:53.646 | 29 | 27 | -2 | False |
| 2018-02-17 17:44:43.508 | 27 | 26 | -1 | False |
| 2018-02-17 19:34:49.701 | 26 | 25 | -1 | False |
| 2018-02-17 20:49:00.205 | 25 | 24 | -1 | False |
| 2018-02-18 07:14:22.207 | 24 | 22 | -2 | False |
| 2018-02-18 08:35:41.560 | 22 | 20 | -2 | False |
| 2018-02-18 10:22:18.825 | 20 | 19 | -1 | False |
| 2018-02-18 10:28:33.909 | 19 | 18 | -1 | False |
| 2018-02-18 10:37:30.427 | 18 | 17 | -1 | False |
| 2018-02-18 10:50:55.265 | 17 | 16 | -1 | False |
| 2018-02-18 11:17:53.359 | 16 | 15 | -1 | False |
| 2018-02-18 11:42:29.214 | 0 | 30 | 30 | False |
| 2018-02-18 11:58:19.113 | 15 | 14 | -1 | False |
| 2018-02-18 11:58:56.432 | 14 | 13 | -1 | False |
| 2018-02-18 12:06:48.438 | 13 | 12 | -1 | False |
| 2018-02-18 12:21:43.634 | 12 | 11 | -1 | False |
| 2018-02-18 12:44:46.288 | 11 | 9 | -2 | False |
| 2018-02-18 13:26:01.952 | 9 | 8 | -1 | False |
| 2018-02-18 13:26:40.940 | 8 | 9 | 1 | False |
| 2018-02-18 13:27:34.090 | 9 | 8 | -1 | False |
| 2018-02-18 13:27:52.443 | 8 | 9 | 1 | False |
| 2018-02-18 13:28:58.832 | 9 | 8 | -1 | False |
| 2018-02-18 14:56:49.105 | 8 | 7 | -1 | False |
| 2018-02-18 16:00:32.212 | 7 | 6 | -1 | False |
| 2018-02-18 16:28:20.175 | 6 | 5 | -1 | False |
| 2018-02-18 16:31:48.741 | 5 | 3 | -2 | False |
| 2018-02-18 16:40:33.922 | 3 | 2 | -1 | False |
| 2018-02-18 16:56:17.864 | 2 | 1 | -1 | False |
| 2018-02-18 17:15:01.065 | 1 | 2 | 1 | False |
| 2018-02-18 17:40:43.062 | 2 | 1 | -1 | False |
| 2018-02-18 17:55:50.520 | 1 | 0 | -1 | True |
| 2018-02-18 18:20:21.664 | 30 | 29 | -1 | False |
| 2018-02-18 21:38:10.645 | 29 | 28 | -1 | False |
| 2018-02-19 06:36:04.564 | 28 | 27 | -1 | False |
| 2018-02-19 08:49:23.080 | 27 | 26 | -1 | False |
</code></pre>
<p>I want to calculate the total stockout time in every hour of each day, like:</p>
<pre><code>| date | 0 | 1 | 2 | 3 | ... | 23 |
| ---------- | --- | --- | --- | --- | --- | --- |
| 2018-02-15 | 10 | 0 | 0 | 10 | ... | 13 |
| 2018-02-16 | 6 | 0 | 7 | 10 | ... | 20 |
| 2018-02-17 | 6 | 0 | 0 | 10 | ... | 20 |
</code></pre>
<p>The rule:</p>
<ol>
<li>group by hour</li>
<li>I can access all rows in an hour. </li>
<li><p>calculate time between </p>
<ul>
<li>start point: <code>is_stockout</code> from <code>False</code> to <code>True</code> </li>
<li>end point: <code>is_stockout</code> from <code>True</code> to <code>False</code></li>
</ul>
<p>within an hour.
There may be many <code>start point</code> / <code>end point</code> pairs.</p></li>
<li>change index to day, and column to 24 hour.</li>
</ol>
<p>It looks a little like <a href="https://pandas.pydata.org/pandas-docs/stable/groupby.html#new-syntax-to-window-and-resample-operations" rel="nofollow noreferrer">new-syntax-to-window-and-resample-operations</a> </p>
<p>I think I need use</p>
<pre><code>df.resample('H').apply(caluclate_time_in_hour)
</code></pre>
<p>But this seems not enough:</p>
<ol>
<li><code>df.resample('H')</code> result index be hour, not column</li>
<li><p>How to write proper <code>caluclate_time_in_hour</code> ? I think <code>apply</code> can't do this.</p>
<p>I wrote a pseudo-code:</p>
<pre><code>def caluclate_time_in_hour(item):
# note: item here is stockcount . not just True or False
global last_time
global is_stockout
global data
cur_time = item.name
# I need pandas return every row even that hour doesn't have data
# so that no need to check the how many hours elasped.
if item is np.nan:
if is_stockout:
data[cur_time.hour] = 60*60
else:
data[cur_time.hour] = 0
if is_stockout:
if item > 0:
data[cur_time.hour] += cur_time - last_time
else:
is_stockout = False
else:
if item = 0:
is_stockout = True
last_time = item.name
return data.copy()
</code></pre>
<p>How do I know this item is the last one in this hour, so that I can return the <code>data</code>? This is the <code>apply</code> problem. Maybe I need pandas to return all rows grouped by hour to do the apply.</p></li>
</ol>
<p>I just wonder whether I can do the above with pandas built-in functions, without looping over all rows to construct the new DataFrame.</p>
<hr>
<p>For example, <code>2018-02-15 ~ 2018-02-16</code> has the records below:</p>
<pre><code>| datetime_create | quantity_old | quantity_new | quantity_diff | is_stockout |
| 2018-02-14 00:45:00 | 40 | 10 | -30 | False |
| 2018-02-15 12:45:00 | 10 | 2 | -8 | False |
| 2018-02-15 13:45:00 | 2 | 1 | -1 | False |
| 2018-02-15 16:45:00 | 1 | 0 | -1 | True |
| 2018-02-16 10:42:00 | 0 | 42 | 42 | False |
| 2018-02-16 13:42:00 | 42 | 40 | -2 | False |
| 2018-02-16 19:42:00 | 40 | 38 | -2 | False |
| 2018-02-17 20:42:00 | 38 | 40 | 2 | False |
# duplicate above
| 2018-02-18 00:45:00 | 40 | 10 | -30 | False |
| 2018-02-19 12:45:00 | 10 | 2 | -8 | False |
| 2018-02-19 13:45:00 | 2 | 1 | -1 | False |
| 2018-02-19 16:45:00 | 1 | 0 | -1 | True |
| 2018-02-20 10:42:00 | 0 | 42 | 42 | False |
| 2018-02-20 13:42:00 | 42 | 40 | -2 | False |
| 2018-02-20 19:42:00 | 40 | 38 | -2 | False |
| 2018-02-21 20:42:00 | 38 | 40 | 2 | False |
</code></pre>
<p>csv:</p>
<pre><code>datetime_create,quantity_old,quantity_new,quantity_diff,is_stockout
2018-02-14 00:45:00,40,10,-30,False
2018-02-15 12:45:00,10,2,-8,False
2018-02-15 13:45:00,2,1,-1,False
2018-02-15 16:45:00,1,0,-1,True
2018-02-16 10:42:00,0,42,42,False
2018-02-16 13:42:00,42,40,-2,False
2018-02-16 19:42:00,40,38,-2,False
2018-02-17 20:42:00,38,40,2,False
2018-02-18 00:45:00,40,10,-30,False
2018-02-19 12:45:00,10,2,-8,False
2018-02-19 13:45:00,2,1,-1,False
2018-02-19 16:45:00,1,0,-1,True
2018-02-20 10:42:00,0,42,42,False
2018-02-20 13:42:00,42,40,-2,False
2018-02-20 19:42:00,40,38,-2,False
2018-02-21 20:42:00,38,40,2,False
</code></pre>
<p>This would result in (the time unit here is minutes, for readability):</p>
<pre><code>date,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
2018-02-14,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
2018-02-15,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,15.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0
2018-02-16,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,42.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
2018-02-17,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
2018-02-18,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
2018-02-19,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,15.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0
2018-02-20,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,60.0,42.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
2018-02-21,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
</code></pre>
<p><a href="https://i.stack.imgur.com/OTQiN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OTQiN.png" alt="enter image description here"></a></p>
|
<p>I think you need to first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>resample</code></a> by minutes with forward-filling of <code>NaN</code>s, convert to <code>integer</code>s and, to get a <code>Series</code>, add <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.squeeze.html" rel="nofollow noreferrer"><code>DataFrame.squeeze</code></a>.</p>
<p>Then aggregate by <code>date</code>s and <code>hour</code>s with <code>sum</code> and last reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a>:</p>
<pre><code>s = df[['is_stockout']].resample('T').ffill().astype(int).squeeze()
df1 = s.groupby([s.index.date, s.index.hour]).sum().unstack(fill_value=0)
print (df1)
datetime_create 0 1 2 3 4 5 6 7 8 9 ... 14 15 16 17 \
2018-02-14 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0
2018-02-15 0 0 0 0 0 0 0 0 0 0 ... 0 0 15 60
2018-02-16 60 60 60 60 60 60 60 60 60 60 ... 0 0 0 0
2018-02-17 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0
2018-02-18 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0
2018-02-19 0 0 0 0 0 0 0 0 0 0 ... 0 0 15 60
2018-02-20 60 60 60 60 60 60 60 60 60 60 ... 0 0 0 0
2018-02-21 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0
datetime_create 18 19 20 21 22 23
2018-02-14 0 0 0 0 0 0
2018-02-15 60 60 60 60 60 60
2018-02-16 0 0 0 0 0 0
2018-02-17 0 0 0 0 0 0
2018-02-18 0 0 0 0 0 0
2018-02-19 60 60 60 60 60 60
2018-02-20 0 0 0 0 0 0
2018-02-21 0 0 0 0 0 0
</code></pre>
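<p>If you then want to visualise <code>df1</code>, a heatmap fits this date-by-hour layout well (a minimal sketch, assuming <code>seaborn</code> and <code>matplotlib</code> are available):</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns

# rows = dates, columns = hours of day, values = stockout minutes in that hour
sns.heatmap(df1, cmap='Reds', cbar_kws={'label': 'stockout minutes'})
plt.xlabel('hour of day')
plt.ylabel('date')
plt.show()
</code></pre>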
|
python|pandas|pandas-groupby
| 1
|
374,492
| 51,205,502
|
Convert a black and white image to array of numbers?
|
<p><a href="https://i.stack.imgur.com/gC1x7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gC1x7.png" alt="The image is 28 pixels by 28 pixels. They can interpret this as a big array of numbers:"></a>
Like the image above suggests, how can I convert the image on the left into an array that represents the darkness of the image, between <code>0</code> for white and decimals closer to <code>1</code> for darker colours, as shown in the image, using <code>python 3</code>?</p>
<p>Update:
I have tried to work a bit more on this. There are good answers below too.</p>
<pre><code>import tensorflow as tf
import numpy
from IPython.display import Image

# Load image
filename = tf.constant("one.png")
image_file = tf.read_file(filename)
# Show Image
Image("one.png")
#convert method
def convertRgbToWeight(rgbArray):
arrayWithPixelWeight = []
for i in range(int(rgbArray.size / rgbArray[0].size)):
for j in range(int(rgbArray[0].size / 3)):
lum = 255-((rgbArray[i][j][0]+rgbArray[i][j][1]+rgbArray[i][j][2])/3) # Reversed luminosity
arrayWithPixelWeight.append(lum/255) # Map values from range 0-255 to 0-1
return arrayWithPixelWeight
# Convert image to numbers and print them
image_decoded_png = tf.image.decode_png(image_file,channels=3)
image_as_float32 = tf.cast(image_decoded_png, tf.float32)
numpy.set_printoptions(threshold=numpy.nan)
sess = tf.Session()
squeezedArray = sess.run(image_as_float32)
convertedList = convertRgbToWeight(squeezedArray)
print(convertedList) # This will give me an array of numbers.
</code></pre>
|
<p>I would recommend to read in images with opencv. The biggest advantage of opencv is that it supports multiple image formats and it automatically transforms the image into a numpy array. For example: </p>
<pre><code>import cv2
import numpy as np
img_path = '/YOUR/PATH/IMAGE.png'
img = cv2.imread(img_path, 0) # read image as grayscale. Set second parameter to 1 if rgb is required
</code></pre>
<p>Now <code>img</code> is a numpy array with values between <code>0 - 255</code>. By default 0 equals black and 255 equals white. To change this you can use the opencv built in function <code>bitwise_not</code>: </p>
<pre><code>img_reverted= cv2.bitwise_not(img)
</code></pre>
<p>We can now scale the array with: </p>
<pre><code>new_img = img_reverted / 255.0  # now all values range from 0 to 1, where white equals 0.0 and black equals 1.0
</code></pre>
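<p>If you prefer not to depend on opencv, a similar array can be built with Pillow and numpy (a sketch, assuming the file is <code>one.png</code>):</p>
<pre><code>import numpy as np
from PIL import Image

img = Image.open('one.png').convert('L')   # 'L' = 8-bit grayscale
arr = np.asarray(img, dtype=np.float32)
arr = 1.0 - arr / 255.0                    # invert so white -> 0.0, black -> 1.0
print(arr.shape, arr.min(), arr.max())
</code></pre>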
|
python|image|numpy|opencv|image-processing
| 8
|
374,493
| 51,387,915
|
Error with RandomGridSearchCV in Sklearn MLPRegressor
|
<p>I found similar issues around the internet but with slight differences, and none of the solutions worked for me.
I have a set of explanatory variables X (2085,12) and an explained variable y (2085,1) on which I have to do some work, including the use of the sklearn classes in the title. In order to get the right hyperparameters I have arranged the code as follows:</p>
<pre><code>#solver: sgd
mlpsgd = MLPRegressor(max_iter = 1000, solver='sgd')
alpha = [float(x) for x in np.logspace(start = -6, stop = 3, num = 100)]
hidden_layer_sizes = [(int(x),int(y),int(z)) for x in np.logspace(start = 0, stop = 2.2, num = 8) for y in np.logspace(start = 0, stop = 2.2, num = 8) for z in np.logspace(start = 0, stop = 2.2, num = 8)]
hidden_layer_sizes.extend((int(x),int(y)) for x in np.logspace(start = 0, stop = 2, num = 25) for y in np.logspace(start = 0, stop = 2, num = 25))
hidden_layer_sizes.extend((int(x),) for x in np.logspace(start = 1, stop = 2, num = 1000))
activation = ['logistic', 'tanh', 'relu']
learning_rate = ['constant', 'invscaling','adaptive']
learning_rate_init = [float(x) for x in np.logspace(start = -5, stop = 0, num = 20)]
random_grid3 = {'learning_rate': learning_rate,'activation': activation,'learning_rate_init': learning_rate_init, 'hidden_layer_sizes': hidden_layer_sizes, 'alpha': alpha}
mlp_random3 = RandomizedSearchCV(estimator = mlpsgd, param_distributions = random_grid3, n_iter = 350, n_jobs=-1)
mlp_random3.fit(X, y)
</code></pre>
<p>Now I know that the whole random grid is insanely huge, but I tried even with a very small one and that is not the issue (and this way it is a better match for the type of research I am supposed to do). I should also mention that I use Windows and the program starts with</p>
<pre><code>if __name__ == '__main__':
</code></pre>
<p>as I understand (hopefully correctly) it is needed for the multiprocessing I request in the second-to-last line of the first block of code I attached.
The thing is that when I run the code, some of the 350 iterations are processed correctly, but then it stops and prints this error:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py", line 164, in <module>
perc = mlpottimizzata(x_train,y_train[:,i])
File "c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py", line 72, in mlpottimizzata
mlp_random3.fit(x_train, y_train)
File "C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\model_selection\_search.py", line 639, in fit
cv.split(X, y, groups)))
File "C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\externals\joblib\parallel.py", line 789, in __call__
self.retrieve()
File "C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\externals\joblib\parallel.py", line 740, in retrieve
raise exception
sklearn.externals.joblib.my_exceptions.JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
c:\Users\mat\.vscode\extensions\ms-python.python-2018.6.0\pythonFiles\PythonTools\visualstudio_py_launcher.py in <module>()
86 del sys, os
87
88 # and start debugging
89 ## Begin modification by Don Jayamanne
90 # Pass current Process id to pass back to debugger
---> 91 vspd.debug(filename, port_num, debug_id, debug_options, currentPid, run_as)
92 ## End Modification by Don Jayamanne
...........................................................................
c:\Users\mat\.vscode\extensions\ms-python.python-2018.6.0\pythonFiles\PythonTools\visualstudio_py_debugger.py in debug(file=r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py', port_num=58990, debug_id='34806ad9-833a-4524-8cd6-18ca4aa74f14', debug_options={'RedirectOutput'}, currentPid=10548, run_as='script')
2620 if run_as == 'module':
2621 exec_module(file, globals_obj)
2622 elif run_as == 'code':
2623 exec_code(file, '<string>', globals_obj)
2624 else:
-> 2625 exec_file(file, globals_obj)
file = r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py'
globals_obj = {'__name__': '__main__'}
2626 finally:
2627 sys.settrace(None)
2628 THREADS_LOCK.acquire()
2629 del THREADS[cur_thread.id]
...........................................................................
c:\Users\mat\.vscode\extensions\ms-python.python-2018.6.0\pythonFiles\PythonTools\visualstudio_py_util.py in exec_file(file=r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py', global_variables={'__name__': '__main__'})
114 f = open(file, "rb")
115 try:
116 code = f.read().replace(to_bytes('\r\n'), to_bytes('\n')) + to_bytes('\n')
117 finally:
118 f.close()
--> 119 exec_code(code, file, global_variables)
code = b'import pandas as p\nimport numpy as np\nimport....score(x_train, y_train[:,i]))\n print(err)\n'
file = r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py'
global_variables = {'__name__': '__main__'}
120
121 def exec_module(module, global_variables):
122 '''Executes the provided module as if it were provided as '-m module'. The
123 functionality is implemented using `runpy.run_module`, which was added in
...........................................................................
c:\Users\mat\.vscode\extensions\ms-python.python-2018.6.0\pythonFiles\PythonTools\visualstudio_py_util.py in exec_code(code=b'import pandas as p\nimport numpy as np\nimport....score(x_train, y_train[:,i]))\n print(err)\n', file=r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py', global_variables={'MLPRegressor': <class 'sklearn.neural_network.multilayer_perceptron.MLPRegressor'>, 'RandomForestRegressor': <class 'sklearn.ensemble.forest.RandomForestRegressor'>, 'RandomizedSearchCV': <class 'sklearn.model_selection._search.RandomizedSearchCV'>, '__builtins__': {'ArithmeticError': <class 'ArithmeticError'>, 'AssertionError': <class 'AssertionError'>, 'AttributeError': <class 'AttributeError'>, 'BaseException': <class 'BaseException'>, 'BlockingIOError': <class 'BlockingIOError'>, 'BrokenPipeError':
<class 'BrokenPipeError'>, 'BufferError': <class 'BufferError'>, 'BytesWarning': <class 'BytesWarning'>, 'ChildProcessError': <class 'ChildProcessError'>, 'ConnectionAbortedError': <class 'ConnectionAbortedError'>, ...}, '__cached__': None, '__doc__': None, '__file__': r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py', '__loader__': None, '__name__': '__main__', '__package__': None, ...})
90 if os.path.isdir(sys.path[0]):
91 sys.path.insert(0, os.path.split(file)[0])
92 else:
93 sys.path[0] = os.path.split(file)[0]
94 code_obj = compile(code, file, 'exec')
---> 95 exec(code_obj, global_variables)
code_obj = <code object <module> at 0x02BC45F8, file "c:\Us...at\OneDrive\Desktop\TES\Analisi\Tesi.py", line 1>
global_variables = {'MLPRegressor': <class 'sklearn.neural_network.multilayer_perceptron.MLPRegressor'>, 'RandomForestRegressor': <class 'sklearn.ensemble.forest.RandomForestRegressor'>, 'RandomizedSearchCV': <class 'sklearn.model_selection._search.RandomizedSearchCV'>, '__builtins__': {'ArithmeticError': <class 'ArithmeticError'>, 'AssertionError': <class 'AssertionError'>, 'AttributeError': <class 'AttributeError'>, 'BaseException': <class 'BaseException'>, 'BlockingIOError': <class 'BlockingIOError'>, 'BrokenPipeError': <class 'BrokenPipeError'>, 'BufferError': <class 'BufferError'>, 'BytesWarning': <class 'BytesWarning'>, 'ChildProcessError': <class 'ChildProcessError'>, 'ConnectionAbortedError': <class 'ConnectionAbortedError'>, ...}, '__cached__': None, '__doc__': None, '__file__': r'c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py', '__loader__': None, '__name__': '__main__', '__package__': None, ...}
96
97 def exec_file(file, global_variables):
98 '''Executes the provided script as if it were the original script provided
99 to python.exe. The functionality is similar to `runpy.run_path`, which was
...........................................................................
c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py in <module>()
159 # print("Mean squared error: {}".format(rndf_err))
160 # print('Variance score: %.2f \n \n' % rndf.fit(x_train, y_train[:,i]).score(x_test, y_test[:,i]))
161
162 #multilayer perceptron
163 print("Multilayer Perceptron \n")
--> 164 perc = mlpottimizzata(x_train,y_train[:,i])
165 y_perc = perc.predict(x_test)
166 perc_err = mean_squared_error(y_test[:,i], y_perc)
167 err[2,i]=r2_score(y_test[:,i],y_perc)
168 print("Mean squared error: {}".format(perc_err))
...........................................................................
c:\Users\mat\OneDrive\Desktop\TES\Analisi\Tesi.py in mlpottimizzata(x_train=array([[ 0.06 , 2.13 , 4.47
, .... 0.00125208,
0.00505016, 0.0039683 ]]), y_train=array([0.00827529, 0.00318743, 0.00103558, ..., 0.00064697, 0. ,
0.00333603]))
67 activation = ['logistic', 'tanh', 'relu']
68 learning_rate = ['constant', 'invscaling','adaptive']
69 learning_rate_init = [float(x) for x in np.logspace(start = -5, stop = 0, num = 20)]
70 random_grid3 = {'learning_rate': learning_rate,'activation': activation,'learning_rate_init': learning_rate_init, 'hidden_layer_sizes': hidden_layer_sizes, 'alpha': alpha}
71 mlp_random3 = RandomizedSearchCV(estimator = mlpsgd, param_distributions = random_grid3, n_iter = 350, n_jobs=-1)
---> 72 mlp_random3.fit(x_train, y_train)
mlp_random3.fit = <bound method BaseSearchCV.fit of RandomizedSear...urn_train_score='warn', scoring=None, verbose=0)>
x_train = array([[ 0.06 , 2.13 , 4.47 , .... 0.00125208,
0.00505016, 0.0039683 ]])
y_train = array([0.00827529, 0.00318743, 0.00103558, ..., 0.00064697, 0. ,
0.00333603])
73
74 if mlp_random3.best_score_ is max(mlp_random1.best_score_,mlp_random2.best_score_,mlp_random3.best_score_):
75 return mlp_random3.best_estimator_
76 if mlp_random1.best_score_ >= mlp_random2.best_score_:
...........................................................................
C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\model_selection\_search.py in fit(self=RandomizedSearchCV(cv=None, error_score='raise',...turn_train_score='warn', scoring=None, verbose=0), X=array([[ 0.06 , 2.13 , 4.47 , .... 0.00125208,
0.00505016, 0.0039683 ]]), y=array([0.00827529, 0.00318743, 0.00103558, ..., 0.00064697, 0. ,
0.00333603]), groups=None, **fit_params={})
634 return_train_score=self.return_train_score,
635 return_n_test_samples=True,
636 return_times=True, return_parameters=False,
637 error_score=self.error_score)
638 for parameters, (train, test) in product(candidate_params,
--> 639 cv.split(X, y, groups)))
cv.split = <bound method _BaseKFold.split of KFold(n_splits=3, random_state=None, shuffle=False)>
X = array([[ 0.06 , 2.13 , 4.47 , .... 0.00125208,
0.00505016, 0.0039683 ]])
y = array([0.00827529, 0.00318743, 0.00103558, ..., 0.00064697, 0. ,
0.00333603])
groups = None
640
641 # if one choose to see train score, "out" will contain train score info
642 if self.return_train_score:
643 (train_score_dicts, test_score_dicts, test_sample_counts, fit_time,
...........................................................................
C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\externals\joblib\parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object BaseSearchCV.fit.<locals>.<genexpr>>)
784 if pre_dispatch == "all" or n_jobs == 1:
785 # The iterable was consumed all at once by the above for loop.
786 # No need to wait for async callbacks to trigger to
787 # consumption.
788 self._iterating = False
--> 789 self.retrieve()
self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
790 # Make sure that we get a last message telling us we are done
791 elapsed_time = time.time() - self._start_time
792 self._print('Done %3i out of %3i | elapsed: %s finished',
793 (len(self._output), len(self._output),
---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError Tue Jul 17 19:33:23 2018
PID: 9280Python 3.6.5: C:\Users\mat\AppData\Local\Programs\Python\Python36-32\python.exe
...........................................................................
C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\externals\joblib\parallel.py in __call__(self=<sklearn.externals.joblib.parallel.BatchedCalls object>)
126 def __init__(self, iterator_slice):
127 self.items = list(iterator_slice)
128 self._size = len(self.items)
129
130 def __call__(self):
--> 131 return [func(*args, **kwargs) for func, args, kwargs in self.items]
self.items = [(<function _fit_and_score>, (MLPRegressor(activation='relu', alpha=811.130830...tion=0.1,
verbose=False, warm_start=False), array([[ 6.00000000e-02, 2.13000000e+00, 4.470...25207638e-03, 5.05016074e-03, 3.96830145e-03]]), array([0.00827529, 0.00318743, 0.00103558, ..., 0.00064697, 0. ,
0.00333603]), {'score': <function _passthrough_scorer>}, array([ 629, 630, 631, ..., 1882, 1883, 1884]), array([ 0, 1, 2, 3, 4, 5, 6, 7, ..., 621, 622, 623,
624, 625, 626, 627, 628]), 0, {'activation': 'relu', 'alpha': 811.130830789689, 'hidden_layer_sizes': (24,),
'learning_rate': 'adaptive', 'learning_rate_init': 0.5455594781168515}), {'error_score': 'raise', 'fit_params': {},
'return_n_test_samples': True, 'return_parameters': False, 'return_times': True, 'return_train_score': 'warn'})]
132
133 def __len__(self):
134 return self._size
135
...........................................................................
C:\Users\mat\AppData\Local\Programs\Python\Python36-32\lib\site-packages\sklearn\externals\joblib\parallel.py in <listcomp>(.0=<list_iterator object>)
126 def __init__(self, iterator_slice):
127 self.items = list(iterat
</code></pre>
<p>There is nothing missing, it ends like this. I should also mention that mlpottimizzata, which is cited in the error, is the function that contains the first block of code I attached.
I am really out of options, any help is appreciated. Thank you all in advance :)</p>
<p>NB. Another part of the code does roughly the same thing but with solver:'lbfgs' and it works smoothly, but this only confuses me even further.</p>
|
<p>The problem is caused when you define the grid parameters using list comprehension and the <strong>float argument</strong>.</p>
<p>This works fine for me:</p>
<pre><code>from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RandomizedSearchCV
import pandas as pd
import numpy as np
from sklearn.model_selection import GridSearchCV
X = pd.read_csv('X.csv')
Y = pd.read_csv('y.csv')
X = X.iloc[1:,1:].values
Y = Y.iloc[1:,1].values
mlpsgd = MLPRegressor(max_iter = 1000, solver='sgd')
alpha = np.arange(0.01, 0.1, 0.01)
hidden_layer_sizes = [(int(x),int(y),int(z)) for x in np.logspace(start = 0, stop = 2.2, num = 8) for y in np.logspace(start = 0, stop = 2.2, num = 8) for z in np.logspace(start = 0, stop = 2.2, num = 8)]
hidden_layer_sizes.extend((int(x),int(y)) for x in np.logspace(start = 0, stop = 2, num = 25) for y in np.logspace(start = 0, stop = 2, num = 25))
hidden_layer_sizes.extend((int(x),) for x in np.logspace(start = 1, stop = 2, num = 1000))
activation = ['logistic', 'tanh', 'relu']
learning_rate = ['constant', 'invscaling','adaptive']
learning_rate_init = np.arange(0.01, 0.1, 0.01)
random_grid3 = {'learning_rate': learning_rate,'activation': activation,'learning_rate_init': learning_rate_init, 'hidden_layer_sizes': hidden_layer_sizes, 'alpha': alpha}
mlp_random3 = RandomizedSearchCV(estimator = mlpsgd, param_distributions = random_grid3, n_iter = 350, n_jobs=-1)
mlp_random3.fit(X, Y)
print(mlp_random3.best_estimator_)
</code></pre>
<blockquote>
<p>MLPRegressor(activation='relu', alpha=0.03, batch_size='auto',
beta_1=0.9,
beta_2=0.999, early_stopping=False, epsilon=1e-08,
hidden_layer_sizes=(4, 18, 1), learning_rate='adaptive',
learning_rate_init=0.05, max_iter=1000, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=None,
shuffle=True, solver='sgd', tol=0.0001, validation_fraction=0.1,
verbose=False, warm_start=False)</p>
</blockquote>
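<p>As an alternative to discretising <code>alpha</code> and <code>learning_rate_init</code>, <code>RandomizedSearchCV</code> also accepts <code>scipy.stats</code> distributions, which keeps log-scale sampling without building large lists (a sketch, reusing the <code>hidden_layer_sizes</code> list from above):</p>
<pre><code>from scipy.stats import reciprocal  # reciprocal(a, b) samples log-uniformly between a and b

random_grid3 = {'learning_rate': ['constant', 'invscaling', 'adaptive'],
                'activation': ['logistic', 'tanh', 'relu'],
                'learning_rate_init': reciprocal(1e-5, 1.0),
                'alpha': reciprocal(1e-6, 1e3),
                'hidden_layer_sizes': hidden_layer_sizes}
mlp_random3 = RandomizedSearchCV(estimator=mlpsgd, param_distributions=random_grid3,
                                 n_iter=350, n_jobs=-1)
</code></pre>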
|
python|python-3.x|numpy|scikit-learn|joblib
| 2
|
374,494
| 48,428,859
|
How do I reinstate the GPU version of tensorflow that was running 2 days ago on my host?
|
<p>Just two days ago, after much work on my part downloading and installing the latest stable GPU version of tensorflow, my tensorflow installation was behaving correctly as I wanted, and it reported this:</p>
<pre><code>$ source activate tensorflowgpu
(tensorflowgpu) ga@ga-HP-Z820:~$ python
Python 3.5.4 |Anaconda, Inc.| (default, Nov 20 2017, 18:44:38)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant("Hello tensorflow")
>>> sess = tf.Session()
2018-01-22 12:37:32.119300: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2018-01-22 12:37:33.339324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.797
pciBusID: 0000:41:00.0
totalMemory: 7.92GiB freeMemory: 7.80GiB
2018-01-22 12:37:33.339414: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:41:00.0, compute capability: 6.1)
>>> print(sess.run(hello))
b'Hello tensorflow'
</code></pre>
<p>Sadly however, to my complete surprise today my tensorflow installation is misbehaving because it is reporting the following, that is, it's using the CPU version. What in the world happened to it, and how do I make it behave correctly again, that is, reinstate the GPU version of tensorflow? This is my privately owned workstation of which I am the sole user.</p>
<pre><code>ga@ga-HP-Z820:~$ source activate tensorflowgpu
(tensorflowgpu) ga@ga-HP-Z820:~$ ipython
Python 3.5.4 |Anaconda custom (64-bit)| (default, Nov 20 2017, 18:44:38)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.
...
In [2]: import tensorflow as tf
...: hello = tf.constant("Hello tensorflow")
...: sess = tf.Session()
...:
2018-01-24 12:34:56.792676: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-01-24 12:34:56.792719: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-01-24 12:34:56.792729: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
</code></pre>
<p>Now looking closer, today my python (not ipython) does the following, which is the good behavior again. That's strange -- the two pythons load different tensorflows. So how do I compel ipython and python to both use the GPU version of tensorflow? I really use ipython more, especially for jupyter notebooks.</p>
<pre><code>(tensorflowgpu) ga@ga-HP-Z820:~$ python
Python 3.5.4 |Anaconda, Inc.| (default, Nov 20 2017, 18:44:38)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant("Hello tensorflow")
>>> sess = tf.Session()
2018-01-24 12:50:35.846985: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2018-01-24 12:50:37.222662: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.797
pciBusID: 0000:41:00.0
totalMemory: 7.92GiB freeMemory: 7.80GiB
2018-01-24 12:50:37.222721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:41:00.0, compute capability: 6.1)
</code></pre>
<p>I had followed the installation instructions for tensorflow GPU on linux ubuntu at the tensorflow website. It did not say to remove the CPU version first.</p>
<p>Edit -- As follows, I explicitly queried the tf versions. It confirms that the two different versions that are installed are being loaded by python vs ipython. I would want to cause both pythons to use only the newer version of tf, somehow.</p>
<pre><code>(tensorflowgpu) ga@ga-HP-Z820:~$ python
Python 3.5.4 |Anaconda, Inc.| (default, Nov 20 2017, 18:44:38)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.4.1'
>>>
(tensorflowgpu) ga@ga-HP-Z820:~$ ipython
Python 3.5.4 |Anaconda custom (64-bit)| (default, Nov 20 2017, 18:44:38)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import tensorflow as tf
In [2]: tf.__version__
Out[2]: '1.3.0'
</code></pre>
|
<p>You'll need to install the kernel in ipython, so that it knows about your environment. Currently, ipython is picking up the default python on your system, not the environment one. You can install the kernel by (make sure your environment is active)</p>
<pre><code>pip install ipykernel
python -m ipykernel install --user --name tensorflowgpu
</code></pre>
<p>Now, just select this kernel when running ipython</p>
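<p>To confirm which interpreter and TensorFlow build ipython actually picks up, a quick sanity check is:</p>
<pre><code>import sys
import tensorflow as tf

print(sys.executable)   # should point inside the tensorflowgpu environment
print(tf.__version__)   # should match the GPU build, e.g. 1.4.1
print(tf.__file__)      # shows which site-packages the module was loaded from
</code></pre>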
|
tensorflow
| 1
|
374,495
| 47,992,947
|
Drop line in pandas df when string type cell's right characters don't match condition
|
<p>I am working on a dataframe containing demographic data for every single U.S state and county.</p>
<pre><code>FIPS State Area_Name CENSUS_2010_POP ESTIMATES_BASE_2010 ...
01000 AL Alabama 4779736 4780131 ...
01001 AL Autauga County 54571 54571 ...
01003 AL Baldwin County 182265 182265 ...
01005 AL Barbour County 27457 27457 ...
... ... ... ... ... ...
</code></pre>
<p>I would like to drop all the lines regarding counties in order to keep only lines regarding U.S states (that's a lot of lines to drop indeed!).
My idea was to focus on the FIPS column and keep only FIPS codes ending with '000', which correspond to states.
After converting the FIPS into strings, I tried the following:</p>
<pre><code>for k in df.index:
if df.iloc[k,0][-3:] != '000':
df=df.drop(df.index[k])
</code></pre>
<p>I am getting the following error: <code>single positional indexer is out-of-bounds</code>. </p>
|
<p>Select the rows with boolean indexing, where the boolean mask is obtained by a <code>str</code> slicing comparison, i.e.</p>
<pre><code>df[df['FIPS'].astype(str).str[-3:] == '000']
FIPS State Area_Name CENSUS_2010_POP ESTIMATES_BASE_2010 ...
0 1000 AL Alabama 4779736 4780131 ...
</code></pre>
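<p>If the FIPS codes are stored as integers, an arithmetic equivalent avoids the string conversion, since state codes are multiples of 1000 (a sketch):</p>
<pre><code>states = df[df['FIPS'] % 1000 == 0]  # keep only state-level rows
</code></pre>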
|
python|string|pandas|if-statement
| 0
|
374,496
| 47,993,087
|
convert json into pandas dataframe
|
<p>I have this json data:</p>
<pre><code>{
"current": [
[
0,
"2017-01-15T00:08:36Z"
],
[
0,
"2017-01-15T00:18:36Z"
]
],
"voltage": [
[
12.891309987,
"2017-01-15T00:08:36Z"
],
[
12.8952162966,
"2017-01-15T00:18:36Z"
]
]
}
</code></pre>
<p>and I am trying to get into it into a pandas dataframe in this format (time series):</p>
<pre><code>time current voltage
2017-01-15T00:08:36Z 0 12.891309987
2017-01-15T00:18:36Z 0 12.8952162966
</code></pre>
<p>I have tried:</p>
<pre><code>t = pd.read_json(q)
</code></pre>
<p>but this gives me:</p>
<pre><code> current voltage
0 [0, 2017-01-15T00:08:36Z] [12.891309987, 2017-01-15T00:08:36Z]
1 [0, 2017-01-15T00:18:36Z] [12.8952162966, 2017-01-15T00:18:36Z]
</code></pre>
<p>how can I get this into the correct format?</p>
|
<p>If the time values in both columns are the same, then after reading the json we can select the values and concat them:</p>
<pre><code>ndf = pd.read_json(q)
ndf = pd.concat([ndf.apply(lambda x: x.str[0]), ndf['current'].str[1].rename('time')], axis=1)
current voltage time
0 0 12.891310 2017-01-15T00:08:36Z
1 0 12.895216 2017-01-15T00:18:36Z
</code></pre>
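<p>Alternatively, if the JSON is already available as a Python dict (e.g. via <code>json.loads</code>), you can build one frame per key and merge on the timestamp (a sketch, where <code>q</code> is the JSON string from the question):</p>
<pre><code>import json
import pandas as pd

data = json.loads(q)
current = pd.DataFrame(data['current'], columns=['current', 'time'])
voltage = pd.DataFrame(data['voltage'], columns=['voltage', 'time'])
df = current.merge(voltage, on='time')[['time', 'current', 'voltage']]
</code></pre>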
|
python|json|pandas|dataframe
| 2
|
374,497
| 48,390,559
|
Slicing multiple values into single column
|
<p>The dataframe below is populated by pd.read_sql. How do I select the wf value for every unique Group / SubGroup pair where the book_date == start_date and store it in the column "new". </p>
<p>*I have asterisked the rows for additional clarity; the asterisk is not in the dataset.</p>
<pre><code>| | Group | SubGroup | book_date | start_date | wf | co2 | new |
|-------|-------|-----------|-----------|------------|------|----------|-----|
| 236 | Virgo | Milkyway | 3/1/1985 | 5/1/1985 | 0.04 | NaN | |
| 239 | Virgo | Milkyway | 4/1/1985 | 5/1/1985 | 0.05 | NaN | |
| 1178 | Virgo*| Milkyway* | 5/1/1985* | 5/1/1985* | 0.06*| 0.004179*| |
| 535 | Virgo | Milkyway | 6/1/1985 | 5/1/1985 | 0.07 | 0.008245 | |
| 1056 | Virgo | Andromeda | 6/1/1993 | 8/1/1993 | 1.57 | NaN | |
| 1046 | Virgo | Andromeda | 7/1/1993 | 8/1/1993 | 1.58 | NaN | |
| 956 | Virgo*| Andromeda*| 8/1/1993* | 8/1/1993* | 1.59*| 0.006688*| |
| 776 | Virgo | Andromeda | 9/1/1993 | 8/1/1993 | 1.60 | 0.012917 | |
</code></pre>
<p>This is the expected result. </p>
<pre><code>| | Group | SubGroup | book_date | start_date | wf | co2 | new |
|-------|-------|-----------|-----------|------------|------|----------|------|
| 236 | Virgo | Milkyway | 3/1/1985 | 5/1/1985 | 0.04 | NaN | 0.06 |
| 239 | Virgo | Milkyway | 4/1/1985 | 5/1/1985 | 0.05 | NaN | 0.06 |
| 1178 | Virgo*| Milkyway* | 5/1/1985* | 5/1/1985* | 0.06*| 0.004179*| 0.06 |
| 535 | Virgo | Milkyway | 6/1/1985 | 5/1/1985 | 0.07 | 0.008245 | 0.06 |
| 1056 | Virgo | Andromeda | 6/1/1993 | 8/1/1993 | 1.57 | NaN | 1.59 |
| 1046 | Virgo | Andromeda | 7/1/1993 | 8/1/1993 | 1.58 | NaN | 1.59 |
| 956 | Virgo*| Andromeda*| 8/1/1993* | 8/1/1993* | 1.59*| 0.006688*| 1.59 |
| 776 | Virgo | Andromeda | 9/1/1993 | 8/1/1993 | 1.60 | 0.012917 | 1.59 |
</code></pre>
|
<p>Taking your data</p>
<pre><code>df = pd.read_clipboard()
df.head()
df.replace({'\*': ''}, regex=True, inplace=True)
def gen_new_col(frame):
if frame['book_date'] == frame['start_date']:
return frame['wf']
else:
return 'ignore'
df['new_col'] = df.apply(gen_new_col, axis=1)
df['g_subg'] = df['Group'] + "|" + df['SubGroup']
df
Group SubGroup book_date start_date wf co2 new_col g_subg
0 Virgo Milkyway 3/1/1985 5/1/1985 0.04 NaN ignore Virgo|Milkyway
1 Virgo Milkyway 4/1/1985 5/1/1985 0.05 NaN ignore Virgo|Milkyway
2 Virgo Milkyway 5/1/1985 5/1/1985 0.06 0.004179 0.06 Virgo|Milkyway
3 Virgo Milkyway 6/1/1985 5/1/1985 0.07 0.008245 ignore Virgo|Milkyway
4 Virgo Andromeda 6/1/1993 8/1/1993 1.57 NaN ignore Virgo|Andromeda
5 Virgo Andromeda 7/1/1993 8/1/1993 1.58 NaN ignore Virgo|Andromeda
6 Virgo Andromeda 8/1/1993 8/1/1993 1.59 0.006688 1.59 Virgo|Andromeda
7 Virgo Andromeda 9/1/1993 8/1/1993 1.6 0.012917 ignore Virgo|Andromeda
# Get a lookup
valid = df[df['new_col'] != 'ignore']
lookup = dict(zip(valid['g_subg'], valid['new_col']))
lookup
{'Virgo|Andromeda': '1.59', 'Virgo|Milkyway': '0.06'}
# Bring it back in
df['final_value'] = df['g_subg'].map(lambda x: lookup[x])
df
Group SubGroup book_date start_date wf co2 new_col g_subg final_value
0 Virgo Milkyway 3/1/1985 5/1/1985 0.04 NaN ignore Virgo|Milkyway 0.06
1 Virgo Milkyway 4/1/1985 5/1/1985 0.05 NaN ignore Virgo|Milkyway 0.06
2 Virgo Milkyway 5/1/1985 5/1/1985 0.06 0.004179 0.06 Virgo|Milkyway 0.06
3 Virgo Milkyway 6/1/1985 5/1/1985 0.07 0.008245 ignore Virgo|Milkyway 0.06
4 Virgo Andromeda 6/1/1993 8/1/1993 1.57 NaN ignore Virgo|Andromeda 1.59
5 Virgo Andromeda 7/1/1993 8/1/1993 1.58 NaN ignore Virgo|Andromeda 1.59
6 Virgo Andromeda 8/1/1993 8/1/1993 1.59 0.006688 1.59 Virgo|Andromeda 1.59
7 Virgo Andromeda 9/1/1993 8/1/1993 1.6 0.012917 ignore Virgo|Andromeda 1.59
</code></pre>
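<p>A more compact alternative uses a groupby transform on the masked <code>wf</code> column (a sketch, assuming the dates compare equal as given):</p>
<pre><code>mask = df['book_date'] == df['start_date']
df['new'] = (df['wf'].where(mask)                       # keep wf only where the dates match
               .groupby([df['Group'], df['SubGroup']])
               .transform('first'))                     # broadcast it to the whole group
</code></pre>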
|
python-3.x|pandas|numpy|dataframe
| 0
|
374,498
| 48,192,836
|
Pandas line plot suppresses half of the xticks, how to stop it?
|
<p>I am trying to make a line plot in which every one of the elements from the index appears as an xtick.</p>
<pre><code>import pandas as pd
ind = ['16-12', '17-01', '17-02', '17-03', '17-04',
'17-05','17-06', '17-07', '17-08', '17-09', '17-10', '17-11']
data = [1,3,5,2,3,6,4,7,8,5,3,8]
df = pd.DataFrame(data,index=ind)
df.plot(kind='line',x_compat=True)
</code></pre>
<p>however the resultant plot skips every second element of the index like so:
<a href="https://i.stack.imgur.com/m0Mr3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m0Mr3.png" alt="enter image description here"></a></p>
<p>My code to call the plot includes the <code>x_compat=True</code> parameter, which the pandas documentation suggests should stop the automatic tick configuration, but it seems to have no effect.</p>
|
<p>You need to use a ticker object on the axis and then pass that axis when plotting.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
ind = ['16-12', '17-01', '17-02', '17-03', '17-04',
'17-05','17-06', '17-07', '17-08', '17-09', '17-10', '17-11']
data = [1,3,5,2,3,6,4,7,8,5,3,8]
df = pd.DataFrame(data,index=ind)
ax2 = plt.axes()
ax2.xaxis.set_major_locator(ticker.MultipleLocator(1))
df.plot(kind='line', ax=ax2)
</code></pre>
<p><a href="https://i.stack.imgur.com/IrhoB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IrhoB.png" alt="enter image description here"></a></p>
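<p>If you would rather avoid the ticker module, setting the ticks and labels directly on the returned axes also works (a sketch):</p>
<pre><code>ax = df.plot(kind='line')
ax.set_xticks(range(len(df)))     # one tick per row
ax.set_xticklabels(df.index)      # label each tick with the index value
</code></pre>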
|
python|pandas|matplotlib|plot
| 3
|
374,499
| 48,323,486
|
Create a tf.contrib.learn Estimator serving that takes JSON input
|
<p>I am after some code that I can use to export a model from a tensorflow <code>Estimator</code> that would take JSON as an input. I could make this work with <code>tf.Estimator</code> using <code>tf.estimator.export.ServingInputReceiver</code>, but for models built in <code>tf.contrib.learn</code> I could not find any documentation. There is one example <a href="https://github.com/MtDersvan/tf_playground/blob/master/wide_and_deep_tutorial/wide_and_deep_basic_serving.md" rel="nofollow noreferrer">here</a> that creates an export with <code>tf.Example</code> serving, but <code>Example</code> is a bit tricky to construct.</p>
|
<p>To use contrib estimator, you have to look at earlier versions of the samples. Here is an example:</p>
<p><a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/85c57e4da2e7edeffbb6652636e3c65b313c568f/blogs/babyweight/babyweight/trainer/model.py" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/training-data-analyst/blob/85c57e4da2e7edeffbb6652636e3c65b313c568f/blogs/babyweight/babyweight/trainer/model.py</a></p>
<p>Note that you are returning input function ops. Having said that, I would recommend migrating to tf.estimator if you can.</p>
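<p>For reference, once on <code>tf.estimator</code> the JSON serving path looks roughly like this (a sketch, assuming two float features named <code>f1</code> and <code>f2</code>; adapt to your own feature columns):</p>
<pre><code>import tensorflow as tf

def json_serving_input_fn():
    # Each placeholder key becomes a field of the JSON instances sent at prediction time.
    inputs = {
        'f1': tf.placeholder(tf.float32, [None]),
        'f2': tf.placeholder(tf.float32, [None]),
    }
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# estimator.export_savedmodel('export_dir', json_serving_input_fn)
</code></pre>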
|
tensorflow|google-cloud-ml
| 1
|