markdown | code | path | repo_name | license
Group talks by session code | # sort by and group by session
keys = ('start', 'session')
sorted_talks = sorted(talks, key=itemgetter(*keys))
talk_sessions = DefaultOrderedDict(list)
for talk in sorted_talks:
talk_sessions[talk['session']].append(talk) | notebooks/session_instructions_toPDF.ipynb | EuroPython/ep-tools | mit |
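`DefaultOrderedDict` is not a standard-library class; it is presumably defined elsewhere in the notebook. A minimal sketch of such a class (behavior inferred from its usage above) could be:

```python
from collections import OrderedDict

class DefaultOrderedDict(OrderedDict):
    """Minimal OrderedDict variant that creates missing values with a factory."""
    def __init__(self, default_factory=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_factory = default_factory

    def __missing__(self, key):
        # called by __getitem__ when the key is absent
        if self.default_factory is None:
            raise KeyError(key)
        self[key] = value = self.default_factory()
        return value

# usage: group items by key while preserving first-seen order
sessions = DefaultOrderedDict(list)
sessions["A1"].append("talk 1")
sessions["A1"].append("talk 2")
sessions["B2"].append("talk 3")
```

Unlike `collections.defaultdict`, this keeps insertion order of the session keys, which matters when the sessions are later rendered in order.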
Create the HTML texts for each session | session_texts = OrderedDict()
for session, talks in talk_sessions.items():
text = ['<h1>' + session + '</h1>']
for talk in talks:
text += [show_talk(talk, show_duration=False, show_link_to_admin=False)]
session_texts[session] = '\n'.join(text) | notebooks/session_instructions_toPDF.ipynb | EuroPython/ep-tools | mit |
Export to PDF
You need to have pandoc, wkhtmltopdf and xelatex installed on your computer. | import os
import os.path as op
import subprocess
os.makedirs('session_pdfs', exist_ok=True)
def pandoc_html_to_pdf(html_file, out_file, options):
cmd = 'pandoc {} {} -o {}'.format(options, html_file, out_file)
print(cmd)
subprocess.check_call(cmd, shell=True)
# pandoc options DIN-A6
# options = ' -V '.join(['-V geometry:paperwidth=6cm',
# 'geometry:paperheight=8cm',
# 'geometry:width=5.5cm',
# 'geometry:height=7.5cm',
# 'geometry:left=.25cm',
# ])
# pandoc options DIN-A4
options = ' -V '.join(['-V geometry:paperwidth=210mm',
'geometry:paperheight=297mm',
'geometry:left=2cm',
'geometry:top=2cm',
'geometry:bottom=2cm',
'geometry:right=2cm',
])
options += ' --latex-engine=xelatex'
for session, text in session_texts.items():
html_file = op.join('session_pdfs', '{}.html'.format(session))
out_file = html_file.replace('.html', '.pdf')
with open(html_file, mode='w') as fh:
    fh.write(text)
pandoc_html_to_pdf(html_file, out_file, options)
os.remove(html_file) | notebooks/session_instructions_toPDF.ipynb | EuroPython/ep-tools | mit |
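Because the pandoc command above is assembled by string formatting and run with `shell=True`, a session name containing spaces or shell metacharacters would break it. A hedged alternative (a sketch, not the notebook's actual code) builds an argument list instead:

```python
import shlex

def build_pandoc_cmd(html_file, out_file, options):
    # tokenize the option string and build an argument list; no shell parsing involved
    return ["pandoc"] + shlex.split(options) + [html_file, "-o", out_file]

# subprocess.check_call(build_pandoc_cmd(html_file, out_file, options))
# would then run pandoc safely even if a file name contains spaces or quotes
```

`shlex.split` handles the option string the way a POSIX shell would, so the existing `' -V '.join(...)` options string can be reused unchanged.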
Trace analysis
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb. | # Load traces in memory (can take several minutes)
platform_file = os.path.join(te.res_dir, 'platform.json')
with open(platform_file, 'r') as fh:
platform = json.load(fh)
trace_file = os.path.join(te.res_dir, 'trace.dat')
trace = Trace(trace_file, my_conf['ftrace']['events'], platform, normalize_time=False)
# Find exact task name & PID
for pid, name in trace.getTasks().items():  # items() replaces the Python 2 iteritems()
if "GLRunner" in name:
glrunner = {"pid" : pid, "name" : name}
print("name=\"" + glrunner["name"] + "\"" + " pid=" + str(glrunner["pid"])) | ipynb/deprecated/examples/android/workloads/Android_Gmaps.ipynb | ARM-software/lisa | apache-2.0 |
This query returns many columns (the preview above omits some). Let's check which ones they are: | list(df_contratos)
df_contratos.to_excel('contratos.xls') | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
By contracting modality | df_contratos.groupby(['txtDescricaoModalidade'])[['valEmpenhadoLiquido', 'valPago']].sum().sort_values(['valEmpenhadoLiquido'], ascending=False) | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
We now have a DataFrame that joins all 2017 commitments ("Empenhos") of the Environment Department with the Contract information, enriching the dataset -- and fixing the missing Razão Social (company name) and CNPJ in the contracts query! | df_empenhos_c_contratos.head() | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
The table above will have many "NaN" values for codContrato -- since there are 493 commitments but only 80 contracts. As we are now interested only in the contracts, let's drop those cases and build a new DataFrame containing only contracts with at least one related commitment: | df_empenhos_c_contratos = df_empenhos_c_contratos.dropna(axis=0).reset_index(drop=True) | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
Converting the number format from decimal (removing that trailing .0) to integer: | df_empenhos_c_contratos['codContrato'] = df_empenhos_c_contratos.loc[:,'codContrato'].astype(int) | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
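Note that `astype(int)` only works here because the NaN rows were dropped first; on a column that still contains NaN it raises an error. A sketch using pandas' nullable integer dtype (toy data, assumed column name) keeps missing values instead:

```python
import pandas as pd

df_demo = pd.DataFrame({"codContrato": [1.0, float("nan"), 3.0]})
# 'Int64' (capital I) is pandas' nullable integer dtype: NaN becomes pd.NA
df_demo["codContrato"] = df_demo["codContrato"].astype("Int64")
print(df_demo["codContrato"].dtype)
```

This keeps the integer look without forcing you to drop rows first.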
'Top 10' Contracts of 2017
Now that we have the list of all contracts for the year, let's build a table and sort the data by the contract's principal value. The API Manual explains that the 'valPrincipal' field means the "contract value without any adjustments or amendments". | top10 = df_contratos_empenhados[['txtDescricaoModalidade',
'txtObjetoContrato',
'txtRazaoSocial',
'numCpfCnpj',
'valPrincipal']].sort_values(['valPrincipal'], ascending=False)[:10]
top10 | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
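The same top-10 selection can be written more directly with `nlargest`, which avoids sorting the entire frame. A small sketch with toy data and the same (assumed) column names:

```python
import pandas as pd

demo = pd.DataFrame({
    "txtRazaoSocial": ["A", "B", "C"],
    "valPrincipal": [100.0, 300.0, 200.0],
})
# keep the 2 rows with the largest valPrincipal, in descending order
top2 = demo.nlargest(2, "valPrincipal")
print(top2["txtRazaoSocial"].tolist())  # ['B', 'C']
```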
Step 3 - Just want to save to Excel or CSV? | df_contratos_empenhados.to_excel('exemplos/contratos_empenhados.xls')
df_contratos_empenhados.to_csv('exemplos/contratos_empenhados.csv') | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
"Bonus" Step - Comparing with the Transparency Portal
As I mentioned above, São Paulo's Transparency Portal does something very important: it publishes the full text of contracts on this page. I won't go into the details of the caveats that arise here -- but be aware that publishing a contract depends on people uploading the right "attachment" when sending the extract to the Official Gazette; in practice, about 700 users do this across the whole city government and mistakes are frequent (e.g. uploading the extract file in place of the full text; choosing the wrong modality; typing the value metadata incorrectly, etc.).
I have already downloaded (and fixed a few fields of) a file from there with the same SVMA example. Let's compare it with what comes from the SOF API: | df_contratos_portal = pd.read_excel('exemplos/contratos_portal.xls')
df_contratos_portal.sort_values('Valor (R$)', ascending=False).head()
df_contratos_portal.groupby('Modalidade')['Valor (R$)'].sum() | SOF_Contratos.ipynb | campagnucci/api_sof | gpl-3.0 |
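To see which contracts appear in only one of the two sources (the SOF API vs the portal file), a `merge` with `indicator=True` is convenient. A sketch with toy data; the key column name is an assumption:

```python
import pandas as pd

api = pd.DataFrame({"codContrato": [1, 2, 3]})
portal = pd.DataFrame({"codContrato": [2, 3, 4]})
both = api.merge(portal, on="codContrato", how="outer", indicator=True)
# _merge is 'both', 'left_only' (API only) or 'right_only' (portal only)
counts = both["_merge"].value_counts()
print(counts)
```

This gives a quick count of contracts present in both sources versus in only one of them.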
Download the example data files if we don't already have them. | targdir = 'a1835_xmm'
if not os.path.isdir(targdir):
    os.mkdir(targdir)
filenames = ('P0098010101M2U009IMAGE_3000.FTZ',
'P0098010101M2U009EXPMAP3000.FTZ',
'P0098010101M2X000BKGMAP3000.FTZ')
remotedir = 'http://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/'
for filename in filenames:
path = os.path.join(targdir, filename)
url = os.path.join(remotedir, filename)
if not os.path.isfile(path):
urllib.urlretrieve(url, path)
imagefile, expmapfile, bkgmapfile = [os.path.join(targdir, filename) for filename in filenames]
for filename in os.listdir(targdir):
print('{0:>10.2f} KB {1}'.format(os.path.getsize(os.path.join(targdir, filename))/1024.0, filename)) | examples/XrayImage/FirstLook.ipynb | hungiyang/StatisticalMethods | gpl-2.0 |
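`urllib.urlretrieve` is the Python 2 spelling; it no longer exists in Python 3. A sketch of the same download loop for Python 3 (same logic, hypothetical helper name):

```python
import os
import urllib.request

def fetch(remotedir, targdir, filenames):
    os.makedirs(targdir, exist_ok=True)
    for filename in filenames:
        path = os.path.join(targdir, filename)
        # URLs should be joined with '/', not os.path.join (which uses '\\' on Windows)
        url = remotedir.rstrip("/") + "/" + filename
        if not os.path.isfile(path):
            urllib.request.urlretrieve(url, path)
```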
Use simple stock market daily data as features | #get stock basic data from quandl
df = quandl.get('WIKI/AAPL',start_date="1996-9-26",end_date='2017-12-31')
df = df[['Adj. Open','Adj. High','Adj. Low','Adj. Close','Adj. Volume']]
#calculate highest and lowest price change
df['HL_PCT']=(df['Adj. High']-df['Adj. Low'])/df['Adj. Close'] *100.0
#calculate return of stock price
df['PCT_change']= (df['Adj. Close']-df['Adj. Open'])/df['Adj. Open'] *100.0
df = df[['Adj. Close','HL_PCT','PCT_change','Adj. Volume']]
df_orig=df
date = df.index
df.head()
#plot heat map of correlation
corr_stocks=df.corr()
corr_stocks=np.absolute(corr_stocks)
print(corr_stocks)
plt.figure(figsize=(12, 10))
plt.imshow(corr_stocks, cmap='RdYlGn', interpolation='none', aspect='auto')
plt.xticks(range(len(corr_stocks)), corr_stocks.columns, rotation='vertical')
plt.yticks(range(len(corr_stocks)), corr_stocks.columns);
plt.suptitle('Stock Correlations Heat Map', fontsize=15, fontweight='bold')
plt.show()
print('-------------------------------------------------')
print('From the correlation heat map, we can tell that the correlation between the percentage change column and the price is')
print('very low. So we need to drop this column before predicting.')
#get rid of feature have least correlation
df = df[['Adj. Close','HL_PCT','Adj. Volume']]
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
#use high low price change and volume as two features
predictor=df[['HL_PCT','Adj. Volume']]
#normalize the predictor
predictor=preprocessing.scale(predictor)
price=df['Adj. Close']
predictor=np.array(predictor)
price=np.array(price)
#using 90% as training data and 10% as testing data
X_train, X_test, y_train, y_test =train_test_split(predictor , price, test_size=0.1,shuffle= False)
clf = linear_model.LinearRegression(n_jobs=-1)
clf.fit(X_train, y_train)
y_pred1 = clf.predict(X_test)
print('the coefficient of determination R^2 of the prediction:',clf.score(X_test, y_test))
print("Mean squared error:",mean_squared_error(y_test, y_pred1)) | 2018-04_Stock_prediction/linear regression stock prediction project.ipynb | NorfolkDataSci/presentations | mit |
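A useful sanity check for any price model is the naive persistence baseline: predict tomorrow's price as today's. Any model worth keeping should beat it. A self-contained sketch on toy data (not the notebook's series):

```python
import numpy as np

# toy price series standing in for the test set
prices = np.array([10.0, 10.5, 10.2, 10.8, 11.0])
persistence_pred = prices[:-1]              # predict each day as the previous day's price
actual = prices[1:]
mse_baseline = np.mean((actual - persistence_pred) ** 2)
print(mse_baseline)  # 0.185
```

Comparing the model's MSE against this baseline tells you whether the features add anything beyond inertia.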
The R² score printed above can be negative because the model can be arbitrarily worse than predicting the mean | forecast_set = clf.predict(X_test)
num_samples = df.shape[0]
#add Forecast column to dataframe
df['Forecast'] = np.nan
df.iloc[int(0.9*num_samples):, df.columns.get_loc('Forecast')] = forecast_set
#plot graph for actual stock price and
style.use('ggplot')
df['Adj. Close'].plot()
df['Forecast'].plot()
plt.legend(loc=4)
plt.xlabel('Date')
plt.ylabel('Price')
plt.rcParams['figure.figsize'] = (20,20)
plt.show()
print('-------------------------')
print('from the prediction graph we can see that the prediction does not work well') | 2018-04_Stock_prediction/linear regression stock prediction project.ipynb | NorfolkDataSci/presentations | mit |
Use price as one of the features to predict price
We saw from the previous prediction that even highly correlated features did not predict well. However, the most highly correlated feature is the price itself. This time I want to use price as one of the features to predict price. | predictor2=df[['Adj. Close','HL_PCT','Adj. Volume']]
predictor2=preprocessing.scale(predictor2)
clf2 = linear_model.LinearRegression(n_jobs=-1)
X_train2, X_test2, y_train2, y_test2 =train_test_split(predictor2 , price, test_size=0.1,shuffle= False)
clf2.fit(X_train2, y_train2)
forecast_set2 = clf2.predict(X_test2)
print('the coefficient of determination R^2 of the prediction:',clf2.score(X_test2, y_test2))
print("Mean squared error:",mean_squared_error(y_test2, forecast_set2))
print('Mean squared error is almost 0, the prediction is very good.')
num_samples = df.shape[0]
#add Forecast column to dataframe
df['Forecast'] = np.nan
df.iloc[int(0.9*num_samples):, df.columns.get_loc('Forecast')] = forecast_set2
style.use('ggplot')
df['Adj. Close'].plot()
df['Forecast'].plot()
plt.legend(loc=4)
plt.xlabel('Date')
plt.ylabel('Price')
plt.rcParams['figure.figsize'] = (20,20)
plt.show()
print('-------------------------')
print('from the prediction graph we can see that the prediction works well.') | 2018-04_Stock_prediction/linear regression stock prediction project.ipynb | NorfolkDataSci/presentations | mit |
Use 30 days of stock prices to predict the 31st day's price
Because using the price itself as a feature is 100% correlated with the target, we got an almost perfect prediction.
I think using previous prices to predict future prices is the best way to predict. | from sklearn.linear_model import LinearRegression
price_data=pd.DataFrame(df_orig['Adj. Close'])
price_data.columns = ['values']
index=price_data.index
Date=index[60:5350]
x_data = []
y_data = []
for d in range(30,price_data.shape[0]):
x = price_data.iloc[d-30:d].values.ravel()
y = price_data.iloc[d].values[0]
x_data.append(x)
y_data.append(y)
x_data=np.array(x_data)
y_data=np.array(y_data)
y_pred = []
y_pred_last = []
y_pred_ma = []
y_true = []
end = y_data.shape[0]
for i in range(30,end):
x_train = x_data[:i,:]
y_train = y_data[:i]
x_test = x_data[i,:]
y_test = y_data[i]
model = LinearRegression()
model.fit(x_train,y_train)
y_pred.append(model.predict(x_test.reshape(1, -1)))
y_true.append(y_test)
#Transforms the lists into numpy arrays
y_pred = np.array(y_pred)
y_true = np.array(y_true)
from sklearn.metrics import mean_absolute_error
print ('\nMean Absolute Error')
print ('MAE Linear Regression', mean_absolute_error(y_pred,y_true))
print("Mean squared error:",mean_squared_error(y_true, y_pred))
plt.title('AAPL stock price ')
plt.ylabel('Price')
plt.xlabel(u'date')
reg_val, = plt.plot(y_pred,color='b',label=u'Linear Regression')
true_val, = plt.plot(y_true,color='g', label='True Values', alpha=0.5,linewidth=1)
plt.legend(handles=[true_val,reg_val])
plt.show()
print('-------------------------')
print('from the prediction graph we can see that the prediction works well') | 2018-04_Stock_prediction/linear regression stock prediction project.ipynb | NorfolkDataSci/presentations | mit |
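The window-building loop above can also be vectorized with NumPy's `sliding_window_view` (available since NumPy 1.20). A sketch on a toy series:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

series = np.arange(10.0)                       # stands in for the price column
window = 3
X = sliding_window_view(series, window)[:-1]   # each row holds 3 past values
y = series[window:]                            # the value right after each window
print(X.shape, y.shape)  # (7, 3) (7,)
```

Each row of `X` is aligned with the next value in `y`, replacing the explicit `for d in range(...)` accumulation.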
Use financial fundamental data to predict stock price
I try to collect more fundamental financial data to predict the stock price. To compare with the previous 3 predictions, I collect Apple's quarterly revenue, yearly total assets, yearly gross profit and equity as the key features to forecast the stock price. | #get apple revenue
revenue=quandl.get("SF1/AAPL_REVENUE_MRQ",start_date="1996-9-26",end_date='2017-12-31', authtoken="_1LjZZVx4HVVTwzCmqxg")
#get apple total assets
total_assets=quandl.get("SF1/AAPL_ASSETS_MRY",start_date="1996-9-26",end_date='2017-12-31', authtoken="_1LjZZVx4HVVTwzCmqxg")
#get apple gross profit
gross_profit=quandl.get("SF1/AAPL_GP_MRY",start_date="1996-9-26",end_date='2017-12-31', authtoken="_1LjZZVx4HVVTwzCmqxg")
#get apple shareholders equity
equity=quandl.get("SF1/AAPL_EQUITY_MRQ",start_date="1996-9-26",end_date='2017-12-31', authtoken="_1LjZZVx4HVVTwzCmqxg")
#change name of columns
revenue.columns = ['revenue']
total_assets.columns = ['total_assets']
gross_profit.columns = ['gross_profit']
equity.columns = ['equity']
fin_data=pd.concat([revenue,total_assets,gross_profit,equity],axis=1)
fin_data['date']=fin_data.index
#create quarter column and indicate the quarter of the data
fin_data['quarter'] = pd.to_datetime(fin_data['date']).dt.to_period('Q')
fin_data.drop('date', axis=1, inplace=True)
fin_data.head()
##handle NaN values in the data
while fin_data['total_assets'].isnull().any():
fin_data.loc[fin_data['total_assets'].isnull(),'total_assets'] = fin_data['total_assets'].shift(1)
while fin_data['gross_profit'].isnull().any():
fin_data.loc[fin_data['gross_profit'].isnull(),'gross_profit'] = fin_data['gross_profit'].shift(1)
while fin_data['equity'].isnull().any():
fin_data.loc[fin_data['equity'].isnull(),'equity'] = fin_data['equity'].shift(1)
fin_data=fin_data.fillna(method='bfill')
fin_data.head()
fin_price=pd.DataFrame(df['Adj. Close'])
fin_price.columns=['price']
fin_price['quarter'] = pd.to_datetime(fin_price.index,errors='coerce').to_period('Q')
fin_price2=fin_price
index=fin_price2.index
fin_price.head()
#combine the two dataframes, using the quarter column as the join key
fin_price1=fin_price.set_index('quarter').join(fin_data.set_index('quarter'))
fin_price1=fin_price1.dropna(axis=0)
fin_price1.head()
print('check NAN in data\n',fin_price1.isnull().any())
#set up index to date.
fin_price1.set_index(index).head()
##correlation heat map.
corr_other=fin_price1.corr()
print(corr_other)
plt.figure(figsize=(12, 10))
plt.imshow(corr_other, cmap='RdYlGn', interpolation='none', aspect='auto')
plt.xticks(range(len(corr_other)), corr_other.columns, rotation='vertical')
plt.yticks(range(len(corr_other)), corr_other.columns);
plt.suptitle('financial data Correlations Heat Map', fontsize=15, fontweight='bold')
plt.show()
print('-------------------------')
print('Surprisingly, the fundamental financial data show a high correlation with price; the correlations are even')
print('higher than for the daily market data.')
##linear regression with all features
predictor3=fin_price1[['revenue','total_assets','gross_profit','equity']]
#normalize predictor
predictor3=preprocessing.scale(predictor3)
#print(predictor3)
clf3 = linear_model.LinearRegression(n_jobs=-1)
X_train3, X_test3, y_train3, y_test3 =train_test_split(predictor3 , fin_price1['price'], test_size=0.1,shuffle= False)
clf3.fit(X_train3, y_train3)
forecast_set3 = clf3.predict(X_test3)
print('the coefficient of determination R^2 of the prediction:',clf3.score(X_test3, y_test3))
print("Mean squared error:",mean_squared_error(y_test3, forecast_set3))
print('Mean squared error is acceptable.')
num_samples3 = fin_price1.shape[0]
#add Forecast column to dataframe
fin_price1['Forecast'] = np.nan
fin_price1.iloc[int(0.9*num_samples3):, fin_price1.columns.get_loc('Forecast')] = forecast_set3
style.use('ggplot')
fin_price1['price'].plot()
fin_price1['Forecast'].plot()
plt.legend(loc=4)
plt.xlabel('Date')
plt.ylabel('Price')
plt.rcParams['figure.figsize'] = (20,20)
plt.show()
print('-------------------------')
print('our prediction fit the major trend of stock price') | 2018-04_Stock_prediction/linear regression stock prediction project.ipynb | NorfolkDataSci/presentations | mit |
Use PCA to reduce the number of features to two
Trying to use PCA to preprocess the features and test the accuracy. | # Use PCA to reduce the number of features to two, and test.
from sklearn.decomposition import PCA
#reduce 4 featrues to 2
pca = PCA(n_components=2)
predictor3=pca.fit_transform(predictor3)
print(predictor3.shape)
clf4 = linear_model.LinearRegression(n_jobs=-1)
X_train4, X_test4, y_train4, y_test4 =train_test_split(predictor3 , fin_price1['price'], test_size=0.1,shuffle= False)
clf4.fit(X_train4, y_train4)
forecast_set4 = clf4.predict(X_test4)
confidence3=clf4.score(X_test4, y_test4)
print('the coefficient of determination R^2 of the prediction:',confidence3)
print("Mean squared error:",mean_squared_error(y_test4, forecast_set4))
print('After using PCA, the prediction is worse.')
num_samples4 = fin_price1.shape[0]
#add Forecast column to dataframe
fin_price1['Forecast2'] = np.nan
fin_price1.iloc[int(0.9*num_samples4):, fin_price1.columns.get_loc('Forecast2')] = forecast_set4
style.use('ggplot')
fin_price1['price'].plot()
fin_price1['Forecast2'].plot()
plt.legend(loc=4)
plt.xlabel('Date')
plt.ylabel('Price')
plt.rcParams['figure.figsize'] = (20,20)
plt.show() | 2018-04_Stock_prediction/linear regression stock prediction project.ipynb | NorfolkDataSci/presentations | mit |
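Before fixing `n_components=2`, it is worth checking how much variance two components actually retain; `explained_variance_ratio_` reports this directly. A sketch on synthetic data (not the notebook's dataset):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 4)
X[:, 1] = X[:, 0] * 2.0               # make two features perfectly correlated
pca = PCA(n_components=2).fit(X)
kept = pca.explained_variance_ratio_.sum()
print(kept)  # fraction of total variance retained by 2 components
```

If `kept` is far below 1, the two retained components are discarding information, which is one plausible reason the post-PCA prediction got worse.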
Visualize the Architecture of a Neural Network | import graphviz as gv | Python/7 Neural Networks/NN-Architecture.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The function $\texttt{generateNN}(\texttt{Topology})$ takes a
network topology Topology as its argument and draws a graph of the
resulting fully connected feed-forward neural net. A network topology is a list of numbers specifying the number of neurons of each layer.
For example, the network topology [3, 8, 6, 2] specifies a neural network with three layers of neurons.
The network has $3$ input nodes, the first hidden layer has $8$ neurons, the second hidden layer has $6$ neurons, and
the output layer has $2$ neurons. | def generateNN(Topology):
L = len(Topology)
input_layer = ['i' + str(i) for i in range(1, Topology[0]+1)]
hidden_layers = [['h' + str(k+1) + ',' + str(i) for i in range(1, s+1)]
for (k, s) in enumerate(Topology[1:-1])]
output_layer = ['o' + str(i) for i in range(1, Topology[-1]+1)]
nng = gv.Graph()
nng.attr(rankdir='LR', splines='false')
# create nodes for input layer
for n in input_layer:
nng.node(n, label='', shape='point', width='0.05')
# create nodes for hidden layers
for NodeList in hidden_layers:
for n in NodeList:
nng.node(n, label='', shape='circle', width='0.1')
# create nodes for output layer
for n in output_layer:
nng.node(n, label='', shape='circle', width='0.1')
# connect input layer to first hidden layer
for n1 in input_layer:
for n2 in hidden_layers[0]:
nng.edge(n1, n2)
# connect hidden layers d to hidden layer d+1
for d in range(0, L-3):
for n1 in hidden_layers[d]:
for n2 in hidden_layers[d+1]:
nng.edge(n1, n2)
# connect output layer
for n1 in hidden_layers[L-3]:
for n2 in output_layer:
nng.edge(n1, n2)
return nng
Topology = [3, 6, 4, 2]
nn1 = generateNN(Topology)
nn1
Topology = [8, 12, 8, 6, 3]
nn2 = generateNN(Topology)
nn2
Topology = [12, 9, 10, 8, 7, 8, 6, 5, 4, 8, 5, 6, 7, 5, 4, 4, 4, 7, 8, 9]
nn3 = generateNN(Topology)
nn3 | Python/7 Neural Networks/NN-Architecture.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
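As a complement to the drawings, the number of trainable parameters of a fully connected network follows directly from the topology: consecutive layers of sizes $n_k$ and $n_{k+1}$ contribute $n_k \cdot n_{k+1}$ weights plus $n_{k+1}$ biases. A small sketch:

```python
def count_parameters(topology):
    """Weights + biases of a fully connected feed-forward net."""
    return sum(a * b + b for a, b in zip(topology, topology[1:]))

print(count_parameters([3, 6, 4, 2]))  # (3*6+6) + (6*4+4) + (4*2+2) = 62
```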
This idea comes from a Google Code evening organized by Google, in which students from ENSAE took part. We have a description of the streets of Paris (which we will treat as straight line segments). We want to determine routes for eight cars such that they cover the whole city as quickly as possible. We will consider two cases:
The cars can be placed anywhere in the city.
The cars start from and return to the same starting point, the same one for all of them.
This notebook describes how to retrieve the data and proposes a solution. This problem is better known as the Chinese postman problem or Route inspection problem, for which an optimal polynomial-time algorithm exists. The problem is therefore not NP-complete. | from jyquickhelper import add_notebook_menu
add_notebook_menu() | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
A version of this problem is offered as a challenge: City Tour.
The data
We download the data from the Internet. | import pyensae.datasource
data = pyensae.datasource.download_data("paris_54000.zip")
data | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
From the file we extract the set of intersections (vertices) and the streets or street segments (edges). | name = data[0]
with open(name, "r") as f : lines = f.readlines()
vertices = []
edges = [ ]
for i,line in enumerate(lines) :
spl = line.strip("\n\r").split(" ")
if len(spl) == 2 :
vertices.append ( (float(spl[0]), float(spl[1]) ) )
elif len(spl) == 5 and i > 0:
v1,v2 = int(spl[0]),int(spl[1])
ways = int(spl[2]) # two-way street or not
p1 = vertices[v1]
p2 = vertices[v2]
edges.append ( (v1,v2,ways,p1,p2) )
elif i > 0 :
raise Exception("unable to interpret line {0}: ".format(i) + line)
print("#E=",len(edges), "#V=",len(vertices), ">",max (max( _[0] for _ in edges), max( _[1] for _ in edges))) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
We plot a sample of the intersections. We assume the city of Paris is small enough and far enough from the poles to treat the coordinates as Cartesian (rather than as longitude/latitude). | import matplotlib.pyplot as plt
import random
sample = [ vertices[random.randint(0,len(vertices)-1)] for i in range(0,1000)]
plt.plot( [_[0] for _ in sample], [_[1] for _ in sample], ".") | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
Then we also draw a sample of the streets. | sample = [ edges[random.randint(0,len(edges)-1)] for i in range(0,1000)]
for edge in sample:
plt.plot( [_[0] for _ in edge[-2:]], [_[1] for _ in edge[-2:]], "b-") | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
A quick remark: there are no streets connecting an intersection to itself: | len ( list(e for e in edges if e[0]==e[1] ))
A first solution to the first problem
This problem is very similar to the Chinese postman problem. The solution that follows is not necessarily the best one, but it gives an idea of what a somewhat experimental approach to the subject can look like.
Each node represents an intersection and each street is an edge connecting two intersections. The goal is to traverse all the edges of the graph with 8 cars.
First remark: the statement does not say that every street must be traversed exactly once. It is easy to see that this would be ideal, but we do not know whether it is possible. Nevertheless, if such a solution (a path going through every street exactly once) exists, it is necessarily optimal.
Second remark: one-way streets make the problem more complex. We will ignore them at first and see later how to add this constraint. There is also the issue of dead ends; we can work around those by removing them from the graph, since a U-turn is unavoidable there and there is no better option.
With these two remarks made, this problem is reminiscent of the seven bridges of Königsberg: how to cross each of the city's seven bridges exactly once. The mathematician Euler answered this question: it is simple, it suffices that every node of the graph be joined by an even number of edges (= bridges), except for at most 2 (the start and end nodes). That way, every time you reach a node, there is always a way to leave it. | import networkx as nx
g = nx.Graph()
for i,j in [(1,2),(1,3),(1,4),(2,3),(3,4),(4,5),(5,2),(2,4) ]:
g.add_edge( i,j )
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(6,3))
nx.draw(g, ax = ax) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
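networkx can check Euler's condition directly (this assumes networkx >= 2.4 for `has_eulerian_path`). A sketch on the same small graph:

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (1, 3), (1, 4), (2, 3), (3, 4), (4, 5), (5, 2), (2, 4)])
odd = sorted(n for n, d in g.degree() if d % 2 == 1)
print(odd)                      # [1, 3]: exactly two odd-degree nodes
print(nx.is_eulerian(g))        # False: no closed Eulerian circuit
print(nx.has_eulerian_path(g))  # True: an open Eulerian path exists
```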
In the previous graph, exactly two nodes (1 and 3) have odd degree, so an open path traversing every edge exactly once exists, but no closed circuit does. What about the graph of the city of Paris? We count the nodes joined by an even or odd number of edges (this number is called the degree). | nb_edge = { }
for edge in edges :
v1,v2 = edge[:2]
nb_edge[v1] = nb_edge.get(v1,0)+1
nb_edge[v2] = nb_edge.get(v2,0)+1
parite = { }
for k,v in nb_edge.items():
parite[v] = parite.get(v,0) + 1
[ sorted(parite.items()) ] | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
We notice that most intersections are the starting point of 3 streets. No matter: why not add edges between odd-degree nodes until at most 2 of them remain? That way, it will be easy to build a single path traversing all the streets. How do we add these edges? This is done in two steps:
Use the Bellman-Ford algorithm to build a matrix of shortest paths between all nodes.
Take inspiration from Kruskal's minimum-weight algorithm: sort the candidate edges by increasing distance and add, in that order, those that reduce the number of odd-degree nodes.
A few justifications: the best route cannot be shorter than the sum of the street lengths, since all of them must be traversed. Moreover, if a path traversing all the streets exists, then by duplicating every street traversed more than once, it is easy to make this path Eulerian in a graph slightly modified from the original one.
Step 1: the Bellman matrix
I won't go into too much detail, the Wikipedia page is clear enough. First we compute the length of each edge (as a Cartesian distance). Another distance (Haversine) would not change the reasoning. | def distance(p1,p2):
return ((p1[0]-p2[0])**2+(p1[1]-p2[1])**2)**0.5
edges = [ edge + (distance( edge[-2],edge[-1]),) for edge in edges] | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
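For reference, since the text mentions that a Haversine distance would not change the reasoning, here is a sketch of that formula (points as (longitude, latitude) in degrees, Earth radius assumed to be 6371 km):

```python
import math

def haversine(p1, p2, radius_km=6371.0):
    # great-circle distance between two (lon, lat) points in degrees
    lon1, lat1, lon2, lat2 = map(math.radians, (p1[0], p1[1], p2[0], p2[1]))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# distance from a point to itself is zero
print(haversine((2.35, 48.85), (2.35, 48.85)))
```

Swapping `distance` for `haversine` would only rescale the edge weights; the graph reasoning stays the same.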
Next, we implement the Bellman-Ford algorithm. | import datetime
init = { (e[0],e[1]) : e[-1] for e in edges }
init.update ( { (e[1],e[0]) : e[-1] for e in edges } )
edges_from = { }
for e in edges :
if e[0] not in edges_from : edges_from[e[0]] = []
if e[1] not in edges_from : edges_from[e[1]] = []
edges_from[e[0]].append(e)
edges_from[e[1]].append( (e[1], e[0], e[2], e[4], e[3], e[5] ) )
modif = 1
total_possible_edges = len(edges_from)**2
it = 0
while modif > 0 :
modif = 0
initc = init.copy() # to avoid RuntimeError: dictionary changed size during iteration
s = 0
for i,d in initc.items() :
fromi2 = edges_from[i[1]]
s += d
for e in fromi2 :
if i[0] == e[1] : # be careful not to add a loop on the same node
continue
new_e = i[0], e[1]
new_d = d + e[-1]
if new_e not in init or init[new_e] > new_d :
init[new_e] = new_d
modif += 1
print(datetime.datetime.now(), "iteration ", it, " modif ", modif, " # ", len(initc),"/",total_possible_edges,"=",
"%1.2f" %(len(initc)*100 / total_possible_edges) + "%")
it += 1
if it > 6 : break | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
We quickly realize that this will take a very, very long time. We therefore decide to consider only the pairs of nodes whose straight-line distance is smaller than the longest street segment, or smaller than that distance multiplied by some coefficient. | max_segment = max( e[-1] for e in edges )
max_segment | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
We compute the admissible edges (hoping that all the odd-degree nodes will be covered). This step takes a few minutes: | possibles = { (e[0],e[1]) : e[-1] for e in edges }
possibles.update ( { (e[1],e[0]) : e[-1] for e in edges } )
initial = possibles.copy()
for i1,v1 in enumerate(vertices) :
for i2 in range(i1+1,len(vertices)):
v2 = vertices[i2]
d = distance(v1,v2)
if d < max_segment / 2 : # we adjust the threshold
possibles [ i1,i2 ] = d
possibles [ i2,i1 ] = d
print("original",len(initial),"/",total_possible_edges,"=", len(initial)/total_possible_edges)
print("addition",len(possibles)-len(initial),"/",total_possible_edges,"=", (len(possibles)-len(initial))/total_possible_edges) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
We check that the odd-degree nodes all belong to the set of nodes receiving new edges. At worst, the Bellman matrix will consider 2.2% of all possible distances. | allv = { p[0]:True for p in possibles if p not in initial } # possibles is symmetric
for v,p in nb_edge.items() :
if p % 2 == 1 and v not in allv :
raise Exception("problem with node: {0}".format(v))
print("if you see this line, everything is fine")
We continue with the modified Bellman-Ford algorithm: | import datetime
init = { (e[0],e[1]) : e[-1] for e in edges }
init.update ( { (e[1],e[0]) : e[-1] for e in edges } )
edges_from = { }
for e in edges :
if e[0] not in edges_from : edges_from[e[0]] = []
if e[1] not in edges_from : edges_from[e[1]] = []
edges_from[e[0]].append(e)
edges_from[e[1]].append( (e[1], e[0], e[2], e[4], e[3], e[5] ) )
modif = 1
total_possible_edges = len(edges_from)**2
it = 0
while modif > 0 :
modif = 0
initc = init.copy() # to avoid RuntimeError: dictionary changed size during iteration
s = 0
for i,d in initc.items() :
if i not in possibles :
continue # we skip undesired edges ------------------- addition
fromi2 = edges_from[i[1]]
s += d
for e in fromi2 :
if i[0] == e[1] : # be careful not to add a loop on the same node
continue
new_e = i[0], e[1]
new_d = d + e[-1]
if new_e not in init or init[new_e] > new_d :
init[new_e] = new_d
modif += 1
print(datetime.datetime.now(), "iteration ", it, " modif ", modif, " # ", len(initc),"/",total_possible_edges,"=",
"%1.2f" %(len(initc)*100 / total_possible_edges) + "%")
it += 1
if it > 20 :
break | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
The algorithm looks at paths $a \rightarrow b \rightarrow c$ and checks whether such a path is shorter than $a \rightarrow c$. 2.6% > 2.2% because the filter is applied only to $a \rightarrow b$. Finally, we keep the added edges and remove the original ones. | original = { (e[0],e[1]) : e[-1] for e in edges }
original.update ( { (e[1],e[0]) : e[-1] for e in edges } )
additions = { k:v for k,v in init.items() if k not in original }
additions.update( { (k[1],k[0]):v for k,v in additions.items() } ) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
Kruskal
We sort the edges by increasing distance, remove those that do not connect two odd-degree nodes, then add them one by one until no odd-degree node remains. | degre = { }
for k,v in original.items() : # original is symmetric
degre[k[0]] = degre.get(k[0],0) + 1
tri = [ (v,k) for k,v in additions.items() if degre[k[0]] %2 == 1 and degre[k[1]] %2 == 1 ]
tri.extend( [ (v,k) for k,v in original.items() if degre[k[0]] %2 == 1 and degre[k[1]] %2 == 1 ] )
tri.sort()
impairs = sum ( v%2 for k,v in degre.items() )
added_edges = []
for v,a in tri :
if degre[a[0]] % 2 == 1 and degre[a[1]] % 2 == 1 :
        # the test must be redone because degre can change at each iteration
degre[a[0]] += 1
degre[a[1]] += 1
added_edges.append ( a + (v,) )
impairs -= 2
if impairs <= 0 :
break
# check the result
print("number of odd-degree nodes", impairs, "number of added edges", len(added_edges))
print("added length ", sum( v for a,b,v in added_edges ))
print("initial length ", sum( e[-1] for e in edges )) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
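The greedy pairing step can be sketched on a toy example: sort the candidate edges by length and add one only while both of its endpoints still have odd degree (the function name and the numbers below are illustrative):

```python
# Sketch of the parity-fixing idea: greedily add the shortest candidate
# edges between odd-degree nodes until every node has even degree.
def fix_parity(degree, candidates):
    """degree: {node: degree}; candidates: [(length, (u, v)), ...]."""
    degree = dict(degree)
    added = []
    for length, (u, v) in sorted(candidates):
        # re-test at every step: degrees change as edges are added
        if degree[u] % 2 == 1 and degree[v] % 2 == 1:
            degree[u] += 1
            degree[v] += 1
            added.append((u, v, length))
        if all(d % 2 == 0 for d in degree.values()):
            break
    return added, degree

added, degree = fix_parity({0: 1, 1: 2, 2: 1}, [(5.0, (0, 2)), (9.0, (0, 1))])
print(added)  # [(0, 2, 5.0)]
```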
The number of odd-degree nodes left at the end must be at most 2 to be sure a path exists (we will aim for 0 so as to get an Eulerian circuit). My first attempt was unsatisfactory (92 odd-degree nodes remaining) because I had chosen too small a threshold (max_segment / 4) when selecting the edges to add. I later increased the threshold, but 22 odd-degree nodes still remain. We can either increase this threshold further (but the algorithm is already slow) or look in another direction, such as letting Bellman's algorithm explore the odd-degree nodes. It does not necessarily mean that edges are missing, but perhaps that they are badly chosen. If edge $i \rightarrow j$ is selected, edge $j \rightarrow k$ will not be, since $j$ will then have even degree. But in that case, if edge $j \rightarrow k$ was the last edge available to fix $k$, we are stuck. We could raise the threshold again, but that would take time, and it would not always work on every dataset.
We could then write a kind of iterative algorithm that runs Bellman's algorithm, then the one that adds edges, and then goes back to the first step while adding more edges around the nodes that were problematic in the second step. The whole thing is a bit long to fit in a notebook, but the code is that of the function eulerien_extension. I also recommend reading this article: Efficient Algorithms for Eulerian Extension (see also On Making Directed Graphs Eulerian). The run below takes about twenty minutes. | from ensae_teaching_cs.special.rues_paris import eulerien_extension, distance_paris,get_data
print("data")
edges = get_data()
print("start, nb edges", len(edges))
added = eulerien_extension(edges, distance=distance_paris)
print("end, nb added", len(added)) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
We save the result in case we want to restart from this point later. | with open("added.txt","w") as f : f.write(str(added))
And if you want to retrieve it: | from ensae_teaching_cs.data import added
data = added()
from ensae_teaching_cs.special.rues_paris import eulerien_extension, distance_paris, get_data
edges = get_data()
with open(data, "r") as f: text = f.read()
added_edges = eval(text) | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
Eulerian Path
At this point, we do not really need to know the length of the Eulerian path going through all the edges: it is the sum of the initial and added edges (about 334 + 1511). We assume there is only one connected component. Building the Eulerian path reveals a few difficulties, such as the following: while walking the graph in one direction, we can leave part of the path aside and thereby create a second connected component. | from pyquickhelper.helpgen import NbImage
NbImage("euler.png") | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
A few algorithms are described on this page: Eulerian_path. Hierholzer's algorithm starts an Eulerian circuit which may come back to its first node before having traversed everything. In that case, we walk through the nodes of the graph to find one with an unused loop that leaves and comes back to that same node, and we insert this loop into the initial path. First of all, we build a structure which maps each node to its following nodes. The function euler_path does this. | from ensae_teaching_cs.special.rues_paris import euler_path
path = euler_path(edges, added_edges)
path[:5] | _doc/notebooks/expose/ml_rue_paris_parcours.ipynb | sdpython/ensae_teaching_cs | mit |
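As an illustration of Hierholzer's idea (this is not the implementation behind `euler_path`, just a minimal stack-based sketch for an undirected graph where every vertex has even degree):

```python
# Minimal Hierholzer sketch on an undirected graph given as adjacency
# lists; assumes every vertex has even degree (Eulerian circuit exists).
def hierholzer(adj, start):
    adj = {u: list(vs) for u, vs in adj.items()}  # local copy, edges get consumed
    stack, circuit = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)  # consume the undirected edge u-v
            stack.append(v)
        else:
            circuit.append(stack.pop())  # dead end: emit and backtrack
    return circuit[::-1]

# Square 0-1-2-3-0:
path = hierholzer({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}, 0)
print(path)  # a closed walk using each edge exactly once
```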
We want to be sure that this solution is correct, so we substitute known values for $E$, $I$ and $q$ to check it.
Cantilever beam with end load | sub_list = [(q(x), 0), (EI(x), E*I)]
w_sol1 = w_sol.subs(sub_list).doit()
L, F = symbols('L F')
# Fixed end
bc_eq1 = w_sol1.subs(x, 0)
bc_eq2 = diff(w_sol1, x).subs(x, 0)
# Free end
bc_eq3 = diff(w_sol1, x, 2).subs(x, L)
bc_eq4 = diff(w_sol1, x, 3).subs(x, L) + F/(E*I)
[bc_eq1, bc_eq2, bc_eq3, bc_eq4]
constants = solve([bc_eq1, bc_eq2, bc_eq3, bc_eq4], [C1, C2, C3, C4])
constants
w_sol1.subs(constants).simplify() | Euler_Bernoulli_beams.ipynb | nicoguaro/notebooks_examples | mit |
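As a sanity check, the expression above should match the textbook cantilever deflection $w(x) = \frac{F x^2 (3L - x)}{6EI}$ (up to the sign convention for the load); a quick numeric spot-check with illustrative values:

```python
# Numeric spot-check of the closed form w(x) = F*x**2*(3L - x)/(6EI);
# the values of E, I, F and L here are arbitrary illustrative numbers.
E, I, F, L = 1.0, 1.0, 1.0, 2.0

def w(x):
    return F * x**2 * (3 * L - x) / (6 * E * I)

tip = w(L)
print(tip)  # equals F*L**3 / (3*E*I), i.e. 8/3 for these values
print(w(0.0))  # clamped end: w(0) = 0
```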
Cantilever beam with uniformly distributed load | sub_list = [(q(x), 1), (EI(x), E*I)]
w_sol1 = w_sol.subs(sub_list).doit()
L = symbols('L')
# Fixed end
bc_eq1 = w_sol1.subs(x, 0)
bc_eq2 = diff(w_sol1, x).subs(x, 0)
# Free end
bc_eq3 = diff(w_sol1, x, 2).subs(x, L)
bc_eq4 = diff(w_sol1, x, 3).subs(x, L)
constants = solve([bc_eq1, bc_eq2, bc_eq3, bc_eq4], [C1, C2, C3, C4])
w_sol1.subs(constants).simplify() | Euler_Bernoulli_beams.ipynb | nicoguaro/notebooks_examples | mit |
Cantilever beam with exponential loading | sub_list = [(q(x), exp(x)), (EI(x), E*I)]
w_sol1 = w_sol.subs(sub_list).doit()
L = symbols('L')
# Fixed end
bc_eq1 = w_sol1.subs(x, 0)
bc_eq2 = diff(w_sol1, x).subs(x, 0)
# Free end
bc_eq3 = diff(w_sol1, x, 2).subs(x, L)
bc_eq4 = diff(w_sol1, x, 3).subs(x, L)
constants = solve([bc_eq1, bc_eq2, bc_eq3, bc_eq4], [C1, C2, C3, C4])
w_sol1.subs(constants).simplify() | Euler_Bernoulli_beams.ipynb | nicoguaro/notebooks_examples | mit |
Load written as a Taylor series and constant EI
We can prove that the general function is written as | k = symbols('k', integer=True)
C = symbols('C1:4')
D = symbols('D', cls=Function)
w_sol1 = 6*(C1 + C2*x) - 1/(E*I)*(3*C3*x**2 + C4*x**3 -
6*Sum(D(k)*x**(k + 4)/((k + 1)*(k + 2)*(k + 3)*(k + 4)),(k, 0, oo)))
w_sol1 | Euler_Bernoulli_beams.ipynb | nicoguaro/notebooks_examples | mit |
Uniform load and varying cross-section | Q, alpha = symbols("Q alpha")
sub_list = [(q(x), Q), (EI(x), E*x**3/12/tan(alpha))]
w_sol1 = w_sol.subs(sub_list).doit()
M_eq = -diff(M(x), x, 2) - Q
M_eq
M_sol = dsolve(M_eq, M(x)).rhs.subs([(C1, C3), (C2, C4)])
M_sol
w_eq = f(x) + diff(w(x),x,2)
w_eq
w_sol1 = dsolve(w_eq, w(x)).subs(f(x), M_sol/(E*x**3/tan(alpha)**3)).rhs
w_sol1 = w_sol1.doit()
expand(w_sol1)
limit(w_sol1, x, 0)
L = symbols('L')
# Fixed end
bc_eq1 = w_sol1.subs(x, L)
bc_eq2 = diff(w_sol1, x).subs(x, L)
# Finite solution
bc_eq3 = C3
constants = solve([bc_eq1, bc_eq2, bc_eq3], [C1, C2, C3, C4])
simplify(w_sol1.subs(constants).subs(C4, 0)) | Euler_Bernoulli_beams.ipynb | nicoguaro/notebooks_examples | mit |
The bending moment (and, by differentiation, the shear force) would be | M = -E*x**3/tan(alpha)**3*diff(w_sol1.subs(constants).subs(C4, 0), x, 2)
M
diff(M, x)
w_plot = w_sol1.subs(constants).subs({C4: 0, L: 1, Q: -1, E: 1, alpha: pi/9})
plot(w_plot, (x, 1e-6, 1));
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling() | Euler_Bernoulli_beams.ipynb | nicoguaro/notebooks_examples | mit |
Quiz Question: How many predicted values in the test set are false positives? | print '1443' | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Computing the cost of mistakes
Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)
Suppose you know the costs involved in each kind of mistake:
1. \$100 for each false positive.
2. \$1 for each false negative.
3. Correctly classified reviews incur no cost.
Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set? | false_positive = confusion_matrix[(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == 1) ]['count'][0]
false_negative = confusion_matrix[(confusion_matrix['target_label'] == 1) & (confusion_matrix['predicted_label'] == -1) ]['count'][0]
print 100 * false_positive + 1 * false_negative | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?
Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?
Precision-recall tradeoff
In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve.
Varying the threshold
False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold.
Write a function called apply_threshold that accepts two things
* probabilities (an SArray of probability values)
* threshold (a float between 0 and 1).
The function should return an SArray, where each element is set to +1 or -1 depending whether the corresponding probability exceeds threshold. | from graphlab import SArray
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
array = map(lambda propability: +1 if propability > threshold else -1, probabilities)
return SArray(array) | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
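The same thresholding can be written without a Python-level loop; a numpy sketch (`apply_threshold_np` is our name, not part of the assignment):

```python
import numpy as np

def apply_threshold_np(probabilities, threshold):
    # np.where yields the same +1/-1 labels as the loop version above
    return np.where(np.asarray(probabilities) > threshold, 1, -1)

print(apply_threshold_np([0.2, 0.5, 0.9], 0.5))  # [-1 -1  1]
```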
For each of the values of threshold, we compute the precision and recall scores. | precision_all = []
recall_all = []
best_threshold = None
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
if(best_threshold is None and precision >= 0.965):
best_threshold = threshold
print best_threshold | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places. | 0.838 | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.) | threshold = 0.98
probabilities = model.predict(test_data, output_type='probability')
predictions = apply_threshold(probabilities, threshold)
graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions) | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below. | precision_all = []
recall_all = []
best_threshold = None
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = apply_threshold(probabilities, threshold)
# Calculate the precision.
# YOUR CODE HERE
precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions)
# YOUR CODE HERE
recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions)
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
    if(best_threshold is None and precision >= 0.965):
best_threshold = threshold
print best_threshold | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places. | best_threshold | course-3-classification/module-9-precision-recall-assignment-blank.ipynb | kgrodzicki/machine-learning-specialization | mit |
Explain what the cell below will produce and why. Can you change it so the answer is correct? | # Produces 0 in Python 2 (integer division) and 0.666... in Python 3 because of "true" division
2/3
# the following import will change the outcome
from __future__ import division
2/3 | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
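For reference, Python 3 keeps both behaviours available: `/` is always true division, while `//` is floor division:

```python
print(2 / 3)    # 0.6666... (true division)
print(2 // 3)   # 0 (floor division)
print(-7 // 2)  # -4 (floors toward negative infinity, not toward zero)
```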
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5 | # 4 * (6 + 5) = 44
4 * (6 + 5)
# 4 * 6 + 5 = 29
4 * 6 + 5
# 4 + 6 * 5 = 34
4 + 6 * 5 | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
What is the type of the result of the expression 3 + 1.5 + 4?
Floating point number, 8.5 | 3 + 1.5 + 4 | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
What would you use to find a number’s square root, as well as its square? | #x**y for square, x**0.5 for square root
print(2**2)
print(4**0.5) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Strings
Given the string 'hello' give an index command that returns 'e'. Use the code below: | s = 'hello'
# Print out 'e' using indexing
# Code here
print(s[1]) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Reverse the string 'hello' using indexing: | s ='hello'
# Reverse the string using indexing
# Code here
print(s[::-1]) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Given the string hello, give two methods of producing the letter 'o' using indexing. | s ='hello'
# Print out the
# Code here
print(s[-1:])
print(s[len(s) - 1]) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Lists
Build this list [0,0,0] two separate ways. | print([0,0,0])
print([0] * 3)
| Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Reassign 'hello' in this nested list to say 'goodbye' item in this list: | l = [1,2,[3,4,'hello']]
l[2][2] = "goodbye"
print(l) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Sort the list below: | l = [3,4,5,5,6]
l.sort()  # list.sort() sorts in place and returns None
print(l) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
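A related point worth remembering: `sorted()` returns a new list, while `list.sort()` works in place and returns `None`:

```python
l = [3, 1, 2]
print(sorted(l))  # [1, 2, 3]  (new list; l is untouched)
print(l)          # [3, 1, 2]
l.sort()          # in place; the return value is None
print(l)          # [1, 2, 3]
```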
Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries: | d = {'simple_key':'hello'}
# Grab 'hello'
print(d["simple_key"])
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
print(d["k1"]["k2"])
# Getting a little tricker
d = {'k1':[ {'nest_key':['this is deep',['hello']]} ]}
#Grab hello
print(d["k1"][0]["nest_key"][1][0])
# This will be hard and annoying!
d = {'k1':[
1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}
]}
]
}
print(d["k1"][2]["k2"][1]["tough"][2][0]) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Can you sort a dictionary? Why or why not?
No. A dictionary is a mapping, not a sequence, so it has no positional order to rearrange (though you can always sort its keys or items into a list).
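You can, however, always produce a sorted list from a dictionary's keys or items:

```python
d = {'b': 2, 'a': 1, 'c': 3}
print(sorted(d))          # ['a', 'b', 'c']  (keys, sorted)
print(sorted(d.items()))  # [('a', 1), ('b', 2), ('c', 3)]
```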
Tuples
What is the major difference between tuples and lists?
mutability: lists are, tuples aren't
How do you create a tuple? | tup = (2, "yes", 3.0)
print(tup) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Sets
What is unique about a set?
non-repeating elements
Use a set to find the unique values of the list below: | l = [1,2,2,33,4,4,11,22,3,3,2]
s = set(l)
print(s) | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Booleans
For the following quiz questions, we will get a preview of comparison operators:
<table class="table table-bordered">
<tr>
<th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th>
</tr>
<tr>
<td>==</td>
<td>If the values of two operands are equal, then the condition becomes true.</td>
<td> (a == b) is not true.</td>
</tr>
<tr>
<td>!=</td>
<td>If values of two operands are not equal, then condition becomes true.</td>
<td> (a != b) is true.</td>
</tr>
<tr>
<td>&lt;&gt;</td>
<td>If values of two operands are not equal, then condition becomes true.</td>
<td> (a &lt;&gt; b) is true. This is similar to the != operator.</td>
</tr>
<tr>
<td>></td>
<td>If the value of left operand is greater than the value of right operand, then condition becomes true.</td>
<td> (a > b) is not true.</td>
</tr>
<tr>
<td>&lt;</td>
<td>If the value of left operand is less than the value of right operand, then condition becomes true.</td>
<td> (a &lt; b) is true.</td>
</tr>
<tr>
<td>>=</td>
<td>If the value of left operand is greater than or equal to the value of right operand, then condition becomes true.</td>
<td> (a >= b) is not true. </td>
</tr>
<tr>
<td>&lt;=</td>
<td>If the value of left operand is less than or equal to the value of right operand, then condition becomes true.</td>
<td> (a &lt;= b) is true. </td>
</tr>
</table>
What will be the resulting Boolean of the following pieces of code? (Answer first, then check by typing it in!) | # Answer before running cell
2 > 3 # False
# Answer before running cell
3 <= 2 # False
# Answer before running cell
3 == 2.0 # False
# Answer before running cell
3.0 == 3 # True
# Answer before running cell
4**0.5 != 2 # False | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Final Question: What is the boolean output of the cell block below? | # two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
#True or False?
l_one[2][0] >= l_two[2]['k1'] # False, 3 >= 4 | Objects and Data Structures Assessment Test-Copy1.ipynb | spacedrabbit/PythonBootcamp | mit |
Some notes
should rename the tables consistently
e.g. dfsummary, dfdata, dfinfo, dfsteps, dffid
have to take care that it can also read "old" cellpy-files
should make (or check if one already exists) an option for giving a "custom" config-file when starting the session | my_data.make_step_table()
filename2 = Path("/Users/jepe/Arbeid/Data/celldata/20171120_nb034_11_cc.nh5")
my_data.save(filename2)
print(f"size: {filename2.stat().st_size/1_048_576} MB")
my_data2 = cellreader.CellpyData()
my_data2.load(filename2)
dataset2 = my_data2.dataset
print(dataset2.steps.columns)
del my_data2
del dataset2
# next: dont load the full hdf5-file, only get datapoints for a cycle from step_table
# then: query the hdf5-file for the data (and time it)
# ex: store.select('/CellpyData/dfdata', "data_point>20130104 & data_point<20130104 & columns=['A', 'B']")
infoname = "/CellpyData/info"
dataname = "/CellpyData/dfdata"
summaryname = "/CellpyData/dfsummary"
fidname = "/CellpyData/fidtable"
stepname = "/CellpyData/step_table"
store = pd.HDFStore(filename2)
store.select("/CellpyData/dfdata", where="index>21 and index<32")
store.select(
"/CellpyData/dfdata", "index>21 & index<32 & columns=['Test_Time', 'Step_Index']"
) | dev_utils/lookup/cellpy_check_hdf5_queries.ipynb | jepegit/cellpy | mit |
Querying cellpy file (hdf5)
load steptable
get the stepnumbers for given cycle
create query and run it
scale the charge (1_000_000 / mass) | steptable = store.select(stepname)
s = my_data.get_step_numbers(
steptype="charge",
allctypes=True,
pdtype=True,
cycle_number=None,
steptable=steptable,
)
cycle_mask = (
s["cycle"] == 2
) # also possible to give cycle_number in get_step_number instead
s.head()
a = s.loc[cycle_mask, ["point_first", "point_last"]].values[0]
v_hdr = "Voltage"
c_hdr = "Charge_Capacity"
d_hdr = "Discharge_Capacity"
i_hdr = "Current"
q = f"index>={ a[0] } & index<={ a[1] }"
q += f"& columns = ['{c_hdr}', '{v_hdr}']"
mass = dataset.mass
print(f"mass from dataset.mass = {mass:5.4} mg")
%%timeit
my_data.get_ccap(2)
%%timeit
c2 = store.select("/CellpyData/dfdata", q)
c2[c_hdr] = c2[c_hdr] * 1000000 / mass
5.03 / 3.05 | dev_utils/lookup/cellpy_check_hdf5_queries.ipynb | jepegit/cellpy | mit |
Result
65% penalty for using "hdf5" query lookup
5.03 vs 3.05 ms | plt.plot(c2[c_hdr], c2[v_hdr])
store.close() | dev_utils/lookup/cellpy_check_hdf5_queries.ipynb | jepegit/cellpy | mit |
You can see here the various ingredients going into each variety of concrete. We'll see in a moment how adding some additional synthetic features derived from these can help a model to learn important relationships among them.
We'll first establish a baseline by training the model on the un-augmented dataset. This will help us determine whether our new features are actually useful.
Establishing baselines like this is good practice at the start of the feature engineering process. A baseline score can help you decide whether your new features are worth keeping, or whether you should discard them and possibly try something else. | X = df.copy()
y = X.pop("CompressiveStrength")
# Train and score baseline model
baseline = RandomForestRegressor(criterion="mae", random_state=0)
baseline_score = cross_val_score(
baseline, X, y, cv=5, scoring="neg_mean_absolute_error"
)
baseline_score = -1 * baseline_score.mean()
print(f"MAE Baseline Score: {baseline_score:.4}") | notebooks/feature_engineering_new/raw/tut1.ipynb | Kaggle/learntools | apache-2.0 |
If you ever cook at home, you might know that the ratio of ingredients in a recipe is usually a better predictor of how the recipe turns out than their absolute amounts. We might reason then that ratios of the features above would be a good predictor of CompressiveStrength.
The cell below adds three new ratio features to the dataset. | X = df.copy()
y = X.pop("CompressiveStrength")
# Create synthetic features
X["FCRatio"] = X["FineAggregate"] / X["CoarseAggregate"]
X["AggCmtRatio"] = (X["CoarseAggregate"] + X["FineAggregate"]) / X["Cement"]
X["WtrCmtRatio"] = X["Water"] / X["Cement"]
# Train and score model on dataset with additional ratio features
model = RandomForestRegressor(criterion="mae", random_state=0)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_absolute_error"
)
score = -1 * score.mean()
print(f"MAE Score with Ratio Features: {score:.4}") | notebooks/feature_engineering_new/raw/tut1.ipynb | Kaggle/learntools | apache-2.0 |
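A hypothetical generic helper in the same spirit (the name `add_ratios` and the column-naming scheme are ours, not part of the course code):

```python
import pandas as pd

# Add a ratio column for each (numerator, denominator) pair of columns.
def add_ratios(df, pairs):
    out = df.copy()
    for num, den in pairs:
        out[num + "_per_" + den] = out[num] / out[den]
    return out

demo = pd.DataFrame({"a": [2.0, 4.0], "b": [1.0, 2.0]})
out = add_ratios(demo, [("a", "b")])
print(out["a_per_b"].tolist())  # [2.0, 2.0]
```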
Naming convention
The table that defines the references used for each atom is called the
construction table.
The construction table of the zmatrix of the water dimer can be seen here: | zwater.loc[:, ['b', 'a', 'd']]
The absolute references are indicated by magic strings: ['origin', 'e_x', 'e_y', 'e_z'].
The atom which is to be set in the reference of three other atoms, is denoted $i$.
The bond-defining atom is represented by $b$.
The angle-defining atom is represented by $a$.
The dihedral-defining atom is represented by $d$.
Mathematical introduction
It is advantageous to treat a zmatrix simply as recursive spherical coordinates.
The $(n + 1)$-th atom uses three of the previous $n$ atoms as reference.
Those three atoms ($b, a, d$) span a coordinate system, provided we require right-handedness.
If we express the position of atom $i$ with respect to this locally spanned coordinate system using
spherical coordinates, we arrive at the usual definition of a zmatrix.
PS: The question about right- or lefthandedness is equivalent to specifying a direction of rotation.
Chemcoord uses of course the IUPAC definition.
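A minimal sketch of the spherical-to-cartesian step (in a real zmatrix the local frame would first be built from $b$, $a$ and $d$; here we only apply the standard textbook formulas):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Standard spherical -> cartesian conversion; in a zmatrix these
    would be the bond length, angle and dihedral in the local b-a-d frame."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

print(spherical_to_cartesian(1.0, math.pi / 2, 0.0))  # ~(1, 0, 0)
```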
Ideal case
The ideal (and luckily most common) case is that $\vec{ib}$, $\vec{ba}$, and $\vec{ad}$ are linearly independent.
In this case there exists a bijective mapping between spherical and cartesian coordinates, and all angles, positions, etc. are well defined.
Linear angle
One pathologic case appears if $\vec{ib}$ and $\vec{ba}$ are linearly dependent.
This means, that the angle in the zmatrix is either $0^\circ$ or $180^\circ$.
In this case there are infinitely many dihedral angles for the same configuration in cartesian space.
Or to say it in a more analytical way:
The transformation from spherical coordinates to cartesian coordinates is surjective, but not injective.
For nearly all cases (e.g. expressing the potential hyper surface in terms of internal coordinates), the surjectivity property is sufficient.
A lot of other problematic cases can be automatically solved by assigning a default value to the dihedral angle by definition ($0^\circ$ in the case of chemcoord).
Usually the user does not need to think about this case, which is automatically handled by chemcoord.
Linear reference
The real pathologic case appears if the three reference atoms are collinear.
It is important to note that this is not a problem in the spherical coordinates of i.
The coordinate system spanned by b, a and d is itself undefined.
This means that it is not directly visible from the values in the Zmatrix whether i uses an invalid reference.
I will use the term valid Zmatrix if all atoms i have valid references. In this case the transformation to cartesian coordinates is well defined.
Now there are two cases:
Creation of a valid Zmatrix
Chemcoord asserts that the Zmatrix created from cartesian coordinates using get_zmat is a valid Zmatrix (or raises an explicit exception if it fails to find valid references). This is always done by choosing other references (instead of introducing dummy atoms).
Manipulation of a valid Zmatrix
If a valid Zmatrix is manipulated after creation, an assignment may move b, a, and d into a linear arrangement. In this case a dummy atom is inserted which lies in the plane that was spanned by b, a, and d before the assignment. It uses the same references as the atom d, so changes in the references of b, a and d are also reflected in the position of the dummy atom X.
This is done using the safe assignment methods of chemcoord.
Example | water = water - water.loc[5, ['x', 'y', 'z']]
zmolecule = water.get_zmat()
c_table = zmolecule.loc[:, ['b', 'a', 'd']]
c_table.loc[6, ['a', 'd']] = [2, 1]
zmolecule1 = water.get_zmat(construction_table=c_table)
zmolecule2 = zmolecule1.copy()
zmolecule3 = water.get_zmat(construction_table=c_table) | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
Modifications on zmolecule1 | angle_before_assignment = zmolecule1.loc[4, 'angle']
zmolecule1.safe_loc[4, 'angle'] = 180
zmolecule1.safe_loc[5, 'dihedral'] = 90
zmolecule1.safe_loc[4, 'angle'] = angle_before_assignment
xyz1 = zmolecule1.get_cartesian()
xyz1.view() | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
Contextmanager
With the following context manager we can switch off the automatic insertion of dummy atoms and look at the cartesian that is built after the assignment .safe_loc[4, 'angle'] = 180. It is obvious from the structure that the coordinate system spanned by O - H - O is undefined. This was the second pathological case in the mathematical introduction. | with cc.DummyManipulation(False):
try:
zmolecule3.safe_loc[4, 'angle'] = 180
except cc.exceptions.InvalidReference as e:
e.already_built_cartesian.view() | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
Symbolic evaluation
It is possible to use symbolic expressions from sympy. | import sympy
sympy.init_printing()
d = sympy.Symbol('d')
symb_water = zwater.copy()
symb_water.safe_loc[4, 'bond'] = d
symb_water
symb_water.subs(d, 2)
cc.xyz_functions.view([symb_water.subs(d, i).get_cartesian() for i in range(2, 5)])
# If your viewer cannot open molden files you have to uncomment the following lines
# for i in range(2, 5):
# symb_water.subs(d, i).get_cartesian().view()
# time.sleep(1) | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
Definition of the construction table
The construction table in chemcoord is represented by a pandas DataFrame with the columns ['b', 'a', 'd'] which can be constructed manually. | pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=['b', 'a', 'd']) | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
It is possible to specify only the first $i$ rows of a Zmatrix in order to compute rows $i + 1$ to $n$ automatically.
If the molecule consists of unconnected fragments, the construction tables are created independently for each fragment and connected afterwards.
It is important to note that an unfragmented, monolithic molecule is treated in the same way:
it just consists of one fragment.
This means that in several methods where a list of fragments is returned or taken,
a one-element list appears.
If the Zmatrix is automatically created, the oxygen 1 is the first atom.
Let's assume that we want to change the order of the fragments. | water.get_zmat()
Let's fragment the water: | fragments = water.fragmentate()
c_table = water.get_construction_table(fragment_list=[fragments[1], fragments[0]])
water.get_zmat(c_table) | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
If we want to specify the order in the second fragment so that it connects via the oxygen 1, it is important to note that we have to specify the full row. It is not possible to define just the order without the references. | frag_c_table = pd.DataFrame([[4, 6, 5], [1, 4, 6], [1, 2, 4]], columns=['b', 'a', 'd'], index=[1, 2, 3])
c_table2 = water.get_construction_table(fragment_list=[fragments[1], (fragments[0], frag_c_table)])
water.get_zmat(c_table2) | Tutorial/Transformation.ipynb | mcocdawc/chemcoord | lgpl-3.0 |
Linear Regression | # seaborn library
import seaborn as sns
# Hr
sns.lmplot(x='Hr',y='HrWRF',data=data, col='Month', aspect=0.6, size=8)
# Tpro
sns.lmplot(x='Tpro',y='TproWRF',data=data, col='Month', aspect=0.6, size=8)
# Rain
sns.lmplot(x='Rain',y='RainWRF',data=data, col='Month', aspect=0.6, size=8)
# Rain polynomial regression
sns.lmplot(x='Rain',y='RainWRF',data=data, col='Month', aspect=0.6, size=8, order=2) | algoritmos/Validacion App Movil climMAPcore.ipynb | jorgemauricio/INIFAP_Course | mit |
Linear regression with p-value and Pearson r | # Hr
sns.jointplot("Hr", "HrWRF", data=data, kind="reg")
# Tpro
sns.jointplot("Tpro", "TproWRF", data=data, kind="reg")
# Rain
sns.jointplot("Rain", "RainWRF", data=data, kind="reg") | algoritmos/Validacion App Movil climMAPcore.ipynb | jorgemauricio/INIFAP_Course | mit |
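`jointplot` with `kind="reg"` annotates the Pearson correlation coefficient; for reference, it can be computed by hand (a small pure-Python sketch):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance divided by the product of norms
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # ~1.0 (perfectly linear)
```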
OLS Regression | # HR
result = sm.ols(formula='HrWRF ~ Hr', data=data).fit()
print(result.params)
print(result.summary())
# Tpro
result = sm.ols(formula='TproWRF ~ Tpro', data=data).fit()
print(result.params)
print(result.summary())
# Rain
result = sm.ols(formula='RainWRF ~ Rain', data=data).fit()
print(result.params)
print(result.summary()) | algoritmos/Validacion App Movil climMAPcore.ipynb | jorgemauricio/INIFAP_Course | mit |
Seaborn histograms | # Hr
sns.distplot(data['diffHr'])
# Tpro
sns.distplot(data['diffTpro'])
# Rain
sns.distplot(data['diffRain']) | algoritmos/Validacion App Movil climMAPcore.ipynb | jorgemauricio/INIFAP_Course | mit |
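The histograms above show the spread of the station-minus-WRF differences; those same columns can also be summarized numerically. A minimal sketch in plain Python, assuming a list of per-record differences (the helper name is made up, not part of the original notebook):

```python
import math

def bias_and_rmse(diffs):
    """Mean bias and root-mean-square error of a list of
    observed-minus-modelled differences."""
    n = len(diffs)
    bias = sum(diffs) / float(n)
    rmse = math.sqrt(sum(d * d for d in diffs) / float(n))
    return bias, rmse

bias, rmse = bias_and_rmse([1.0, -1.0, 2.0, -2.0])
print(bias, rmse)  # 0.0 and about 1.58
```

In the notebook itself, a column such as data['diffHr'] could be passed in as list(data['diffHr']).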
Model summary
Run done with a model with three convolutional layers, two fully connected layers and a final softmax layer, with 64 channels in each of the first two convolutional layers and 48 in the final one. The fully connected layers have 512 units each. Dropout is applied in the first (larger) fully connected layer (dropout probability 0.5), and the dataset is randomly augmented with uniform random rotations, shunting and scaling. | print('## Model structure summary\n')
print(model)
params = model.get_params()
n_params = {p.name : p.get_value().size for p in params}
total_params = sum(n_params.values())
print('\n## Number of parameters\n')
print(' ' + '\n '.join(['{0} : {1} ({2:.1f}%)'.format(k, v, 100.*v/total_params)
for k, v in sorted(n_params.items(), key=lambda x: x[0])]))
print('\nTotal : {0}'.format(total_params)) | notebooks/3 convolutional layers (96-96-48 channels) 2 fully connected (512-512 units).ipynb | Neuroglycerin/neukrill-net-work | mit |
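The per-layer percentages printed above can be reproduced by hand from the layer shapes. A sketch in plain Python — the shapes used here are illustrative assumptions, not values read out of the trained model:

```python
def conv_params(in_ch, out_ch, k):
    # k x k kernel weights for every in/out channel pair,
    # plus one bias per output channel
    return in_ch * out_ch * k * k + out_ch

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

# hypothetical layer shapes, for illustration only
n_params = {
    'conv1': conv_params(1, 64, 3),
    'conv2': conv_params(64, 64, 3),
    'conv3': conv_params(64, 48, 3),
}
total_params = sum(n_params.values())
for name, v in sorted(n_params.items()):
    print('{0} : {1} ({2:.1f}%)'.format(name, v, 100. * v / total_params))
```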
Load LendingClub dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is available here. Make sure you download the dataset before running the following command. | loans = graphlab.SFrame('lending-club-data.gl/')
loans.head() | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using .show earlier in the assignment: | print "Percentage of safe loans : %.2f" % ((float(len(safe_loans_raw)) / len(loans)) * 100)
print "Percentage of risky loans : %.2f" % ((float(len(risky_loans_raw)) / len(loans)) * 100) | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
Quiz Question: What percentage of the predictions on sample_validation_data did decision_tree_model get correct? | float((sample_validation_data['safe_loans'] == decision_tree_model.predict(sample_validation_data)).sum()) / len(sample_validation_data) | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
Explore probability predictions
For each row in the sample_validation_data, what is the probability (according to decision_tree_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using decision_tree_model on sample_validation_data: | decision_tree_model.predict(sample_validation_data, output_type='probability') | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
Checkpoint: You should see that the small_model performs worse than the decision_tree_model on the training data.
Now, let us evaluate the accuracy of the small_model and decision_tree_model on the entire validation_data, not just the subsample considered above. | print small_model.evaluate(validation_data)['accuracy']
print round(decision_tree_model.evaluate(validation_data)['accuracy'],2) | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
False positives are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives: | false_positives = (predictions == +1) == (validation_data['safe_loans'] == -1)
print false_positives.sum()
print len(predictions)
fp = 0
for i in xrange(len(predictions)):
if predictions[i] == 1 and validation_data['safe_loans'][i] == -1:
fp += 1
print fp | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
False negatives are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives: | false_negatives = (predictions == -1) == (validation_data['safe_loans'] == +1)
print false_negatives.sum()
print len(predictions)
fn = 0
for i in xrange(len(predictions)):
if predictions[i] == -1 and validation_data['safe_loans'][i] == 1:
fn += 1
print fn | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
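Both loops follow the same counting pattern. As a sketch with plain Python lists standing in for the SFrame columns, each count is an AND of the two conditions — not an equality between them, since an == of two boolean masks is also true where both are False:

```python
predictions = [+1, -1, +1, -1, +1]
true_labels = [-1, -1, +1, +1, +1]

# predicted +1 while the true label is -1
false_positives = sum(1 for p, t in zip(predictions, true_labels)
                      if p == +1 and t == -1)
# predicted -1 while the true label is +1
false_negatives = sum(1 for p, t in zip(predictions, true_labels)
                      if p == -1 and t == +1)
print(false_positives, false_negatives)  # 1 1
```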
Quiz Question: Let us assume that each mistake costs money:
* Assume a cost of \$10,000 per false negative.
* Assume a cost of \$20,000 per false positive.
What is the total cost of mistakes made by decision_tree_model on validation_data? | cost = fp * 20000 + fn * 10000
cost | ml-classification/week-3/module-5-decision-tree-assignment-1-blank.ipynb | zomansud/coursera | mit |
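The dollar figure above is a weighted sum of the two mistake counts; as a sketch, a small helper makes the weights explicit (the per-mistake prices come from the quiz statement):

```python
def cost_of_mistakes(fp, fn, fp_cost=20000, fn_cost=10000):
    """Total cost: $20,000 per false positive, $10,000 per false negative."""
    return fp * fp_cost + fn * fn_cost

print(cost_of_mistakes(3, 2))  # 3*20000 + 2*10000 = 80000
```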
Starting with the Philips EL34 data sheet, create a PNG of the anode characteristics plot.
Import this image into engauge.
Create 9 curves, then use the 'curve point tool' to add points to each curve.
Change the export options to "Raw Xs and Ys" and "One curve on each line", otherwise engauge will do some interpolation of your points.
Export a csv file | %cat data/el34-philips-1958-360V.csv
| experiments/02-modeling/pentode/pentode-modeling.ipynb | holla2040/valvestudio | mit |
We need to create scipy arrays like these
x = scipy.array( [[360, -0.0, 9.66], [360, -0.0, 22.99], ...
y = scipy.array( [0.17962, 0.26382, 0.3227, 0.37863, ...
Vaks = scipy.array( [9.66, 22.99, 41.49, 70.55, 116.61, ...
from the extracted curves | fname = "data/el34-philips-1958-360V.csv"
f = open(fname,'r').readlines()
deltaVgk = -4.0
n = 1.50
VgkVak = []
Iak = []
Vaks = []
vg2k = 360
for l in f:
l = l.strip()
if len(l): # skip blank lines
if l[0] == 'x':
vn = float(l.split("Curve")[1]) - 1.0
Vgk = vn * deltaVgk
continue
else:
(Vak,i) = l.split(',')
VgkVak.append([vg2k,float(Vgk),float(Vak)])
Iak.append(float(i))
Vaks.append(float(Vak))
x = scipy.array(VgkVak)
y = scipy.array(Iak)
Vaks = scipy.array(Vaks)
%matplotlib inline
def func(x,K,Da,Dg2,a0,n):
rv = []
for VV in x:
Vg2k = VV[0]
Vg1k = VV[1]
Vak = VV[2]
t = Vg1k + Dg2 * Vg2k + Da * Vak
if t > 0:
a = a0 * ((2/pi) * atan(Vak/Vg2k))**(1/n)
Ia = a * K * t**n
else:
Ia = 0
# print "func",Vg2k,Vg1k,Vak,t,K,Da,Dg2,a0,n
rv.append(Ia)
return rv
popt, pcov = curve_fit(func, x, y,p0=[0.5,0.05,0.05,0.02,5])
#print popt,pcov
(K,Da,Dg2,a0,n) = popt
print "K =",K
print "Da =",Da
print "Dg2 =",Dg2
print "a0 =",a0
print "n =",n
Vg2k = x[0][0]
def IaCalc(Vg1k,Vak):
t = Vg1k + Dg2 * Vg2k + Da * Vak
if t > 0:
a = a0 * ((2/pi) * atan(Vak/Vg2k))**(1/n)
Ia = a * K * t**n
else:
Ia = 0
# print "IaCalc",Vgk,Vak,t,Ia
return Ia
Vgk = np.linspace(0,-32,9)
Vak = np.linspace(0,400,201)
vIaCalc = np.vectorize(IaCalc,otypes=[np.float])
Iavdv = vIaCalc(Vgk[:,None],Vak[None,:])
plt.figure(figsize=(14,6))
for i in range(len(Vgk)):
plt.plot(Vak,Iavdv[i],label=Vgk[i])
plt.scatter(Vaks,y,marker="+")
plt.legend(loc='upper left')
plt.suptitle('EL34@%dV Child-Langmuir-Compton-VanDerVeen Curve-Fit K/Da/Dg2 Model (Philips 1949)'%Vg2k, fontsize=14, fontweight='bold')
plt.grid()
plt.ylim((0,0.5))
plt.xlim((0,400))
plt.show()
| experiments/02-modeling/pentode/pentode-modeling.ipynb | holla2040/valvestudio | mit |
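The fitted expression used in both func and IaCalc above — Ia = a0 * ((2/pi) * atan(Vak/Vg2k))**(1/n) * K * (Vg1k + Dg2*Vg2k + Da*Vak)**n, clamped to zero below cut-off — can be exercised in isolation. A self-contained sketch; the parameter values passed in below are made-up placeholders, not the fitted ones:

```python
from math import atan, pi

def pentode_ia(vg1k, vg2k, vak, K, Da, Dg2, a0, n):
    """Anode current Ia = a * K * t**n with
    t = Vg1k + Dg2*Vg2k + Da*Vak and
    a = a0 * ((2/pi) * atan(Vak/Vg2k))**(1/n);
    clamped to zero below cut-off (t <= 0)."""
    t = vg1k + Dg2 * vg2k + Da * vak
    if t <= 0:
        return 0.0
    a = a0 * ((2.0 / pi) * atan(vak / vg2k)) ** (1.0 / n)
    return a * K * t ** n

# far below cut-off the model clamps the current to zero
print(pentode_ia(-40.0, 360.0, 100.0, K=0.5, Da=0.05, Dg2=0.05, a0=0.02, n=1.5))
```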