Trying Koren's phenomenological triode model. $$E_1 = \frac{E_{G2}}{k_P} \log\left(1 + \exp\left(k_P \left(\frac{1}{\mu} + \frac{E_{G1}}{E_{G2}}\right)\right)\right)$$ $$I_P = \frac{E_1^X}{k_{G1}} \left(1+\operatorname{sgn}(E_1)\right)\arctan\left(\frac{E_P}{k_{VB}}\right)$$ We need to fit $X, k_{G1}, k_P, k_{VB}$.
from math import log, exp, atan
from scipy.optimize import curve_fit

mu = 11.0

def sgn(val):
    # sign function: +1 for non-negative values, -1 otherwise
    return 1 if val >= 0 else -1

def funcKoren(x, X, kG1, kP, kVB):
    rv = []
    for VV in x:
        EG2, EG1, EP = VV[0], VV[1], VV[2]
        if kP < 0:
            kP = 0  # clamp; note kP == 0 would divide by zero below
        #print EG2, EG1, EP, kG1, kP, kVB, exp(kP*(1/mu + EG1/EG2))
        E1 = (EG2 / kP) * log(1 + exp(kP * (1 / mu + EG1 / EG2)))
        if E1 > 0:
            IP = (pow(E1, X) / kG1) * (1 + sgn(E1)) * atan(EP / kVB)
        else:
            IP = 0
        rv.append(IP)
    return rv

popt, pcov = curve_fit(funcKoren, x, y, p0=[1.3, 1000, 40, 20])
#print popt, pcov
(X, kG1, kP, kVB) = popt
print("X=%.8f kG1=%.8f kP=%.8f kVB=%.8f" % (X, kG1, kP, kVB))
# Koren's values for a 12AX7: mu=100 X=1.4 kG1=1060 kP=600 kVB=300
experiments/02-modeling/pentode/pentode-modeling.ipynb
holla2040/valvestudio
mit
<pre>
SPICE model
see http://www.normankoren.com/Audio/Tubemodspice_article_2.html#Appendix_A

.SUBCKT 6550 1 2 3 4 ; P G1 C G2 (PENTODE)
+ PARAMS: MU=7.9 EX=1.35 KG1=890 KG2=4200 KP=60 KVB=24
E1 7 0 VALUE={V(4,3)/KP*LOG(1+EXP((1/MU+V(2,3)/V(4,3))*KP))}
G1 1 3 VALUE={(PWR(V(7),EX)+PWRS(V(7),EX))/KG1*ATAN(V(1,3)/KVB)}
G2 4 3 VALUE={(EXP(EX*(LOG((V(4,3)/MU)+V(2,3)))))/KG2}
</pre>
EG2 = x[0][0]

def IaCalcKoren(EG1, EP):
    global X, kG1, kP, kVB, mu
    E1 = (EG2 / kP) * log(1 + exp(kP * (1 / mu + EG1 / EG2)))
    if E1 > 0:
        IP = (pow(E1, X) / kG1) * (1 + sgn(E1)) * atan(EP / kVB)
    else:
        IP = 0
    return IP

Vgk = np.linspace(0, -32, 9)
Vak = np.linspace(0, 400, 201)

vIaCalcKoren = np.vectorize(IaCalcKoren, otypes=[float])  # np.float is deprecated
Iakoren = vIaCalcKoren(Vgk[:, None], Vak[None, :])

plt.figure(figsize=(14, 6))
for i in range(len(Vgk)):
    plt.plot(Vak, Iakoren[i], label=Vgk[i])
plt.scatter(Vaks, y, marker="+")
plt.legend(loc='upper left')
plt.suptitle('EL34@%dV Child-Langmuir-Compton-Koren Curve-Fit Model (Philips 1949)' % Vg2k,
             fontsize=14, fontweight='bold')
plt.grid()
plt.ylim((0, 0.5))
plt.xlim((0, 400))
plt.show()

plt.figure(figsize=(14, 6))
for i in range(len(Vgk)):
    plt.plot(Vak, Iavdv[i], label=Vgk[i], color='red')
    plt.plot(Vak, Iakoren[i], label=Vgk[i], color='blue')
plt.scatter(Vaks, y, marker="+")
plt.legend(loc='upper left')
plt.suptitle('EL34@%dV CLCVDV & CLCK Curve-Fit Model (Philips 1949)' % Vg2k,
             fontsize=14, fontweight='bold')
plt.grid()
plt.ylim((0, 0.5))
plt.xlim((0, 400))
plt.show()
experiments/02-modeling/pentode/pentode-modeling.ipynb
holla2040/valvestudio
mit
The graph gives a basic idea of what to use.
ratingsNum = list()
for number in np.arange(1, 10):
    ratingsNum.append(len(data[data[:, 2] == number, 2]))

plt.figure()
plt.bar(np.arange(1, 10), ratingsNum, 0.8, color="blue")
plt.show()
BaseClass/Porazdelitve.ipynb
sorter43/PR2017LSBOLP
apache-2.0
Since we have a fixed interval known in advance, which a Gaussian does not fit well, I decided to use the beta distribution.
from scipy.stats import beta

a = 8
b = 2
n = 1000
sample = beta.rvs(a, b, size=n)

xr = np.linspace(0, 1, 100)          # the X interval
P = [beta.pdf(x, a, b) for x in xr]  # the density function

# Histogram: distribution of the random SAMPLES x under P(x)
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.title("Sample")
plt.hist(sample, color="red")
plt.xlabel("X")
plt.ylabel("Number of cases")

# Plot of the density function
plt.subplot(1, 2, 2)
plt.title("Distribution plot")
plt.plot(xr, P, color="red")  # draw P(x)
plt.ylabel("P(x)")
plt.xlabel("X")
plt.show()
BaseClass/Porazdelitve.ipynb
sorter43/PR2017LSBOLP
apache-2.0
Here I actually wanted to output (1, 2, 3), but in IPython `_` is automatically bound to the most recent result (the previous value), much like `ans` in MATLAB. Of course, this multiple-variable assignment in Python also works on strings, and on any iterable object.
s = 'acfun'
a, b, c, d, e = s
a
e
data_structure_and_algorithm_py2_1.ipynb
zlxs23/Python-Cookbook
apache-2.0
Any iterable can be unpacked, as long as the variables on the left line up with the values on the right. 1.2 Unpacking an iterable into multiple variables. Python's star expression can be used to solve this problem: if an iterable object holds more elements than there are variables, a ValueError is thrown. So how do we unpack N elements from such an iterable? Here the * is the same star used for variadic parameters in Python.
record = ('maz', 18, '13679259627', '62627')
name, age, *tel = record
len(record)
name, age, *tel = record
# name, age, **tel = record  # SyntaxError: ** is not valid in unpacking
data_structure_and_algorithm_py2_1.ipynb
zlxs23/Python-Cookbook
apache-2.0
That's strange: why did the star stop working?
2**4
# *ta = record  # SyntaxError: a starred assignment target must be in a list or tuple
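The star does work; it just cannot stand alone as a bare assignment target. A minimal sketch of the valid forms, reusing the `record` tuple from above:

```python
record = ('maz', 18, '13679259627', '62627')

# A bare starred name is not a valid target:
# *ta = record   # SyntaxError: starred assignment target must be in a list or tuple

# Making the left-hand side a tuple (note the trailing comma) fixes it:
*ta, = record
print(ta)        # every element is collected into a list

# The star may also sit anywhere among several targets:
first, *middle, last = record
print(middle)
```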
data_structure_and_algorithm_py2_1.ipynb
zlxs23/Python-Cookbook
apache-2.0
2. Using pandas to download closing-price data. Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, though Google is also possible), and the dates of interest. With these, the DataReader function from the pandas_datareader package will download the requested prices. Note: Python distributions usually do not ship with the pandas_datareader package by default, so it must be installed separately. The following command installs the package in Anaconda: `conda install -c conda-forge pandas-datareader`
assets = ['AAPL', 'MSFT', 'AA', 'AMZN', 'KO', 'QAI']
closes = portfolio_func.get_historical_closes(assets, '2016-01-01', '2017-09-22')
02. Parte 2/15. Clase 15/.ipynb_checkpoints/11Class NB-checkpoint.ipynb
jdsanch1/SimRC
mit
For example, if we are interested in the evolution of the share prices (values) of different companies this year, we will see that most of the tools already exist and are available. Tickers of interest: IBM YELP GOOGLE BRUKER
d = {}  # container for the downloaded series
symbols_list = ['IBM', 'YELP', 'GOOG']
for ticker in symbols_list:
    d[ticker] = DataReader(ticker, "yahoo", '2016-01-01')  # running this requires Internet access

pan = pandas.Panel(d)  # note: pandas.Panel is deprecated in recent pandas versions
df1 = pan.minor_xs('Adj Close')
px = df1.asfreq('B', method='pad')
rets = px.pct_change()
((1 + rets).cumprod() - 1).plot()
Cours13-DILLMANN-ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2) Geometric visualisation
from mpl_toolkits.mplot3d import *
import matplotlib.pyplot as plt
import numpy as np
from random import random, seed
from matplotlib import cm

# --- Drawing a bubble ---
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)

x = 10 * np.outer(np.cos(u), np.sin(v))
y = 10 * np.outer(np.sin(u), np.sin(v))
z = 10 * np.outer(np.ones(np.size(u)), np.cos(v))

ax.plot_surface(x, y, z, rstride=2, cstride=2, linewidth=0, alpha=1,
                color='y', antialiased=True, edgecolor=(0, 0, 0, 0))
plt.show()
Cours13-DILLMANN-ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
Visualising a physical phenomenon: the Biot-Savart law
import numpy as np

# Constants
my0 = 4 * np.pi * 1e-7   # permeability of free space
I0 = -1                  # current amplitude; the current flows from left to right

# Dimensions
d = 25                              # loop diameter (mm)
segments = 100                      # discretization of the loop
alpha = 2 * np.pi / (segments - 1)  # angular step

# Initialise the loop
x = [i * 0 for i in range(segments)]
y = [d / 2 * np.sin(i * alpha) for i in range(segments)]
z = [-d / 2 * np.cos(i * alpha) for i in range(segments)]

# Characteristic distance between filaments
distance_char = np.sqrt((z[2] - z[1])**2 + (y[2] - y[1])**2)

# Positive current direction: left -> right
# For the computation, lengths are expressed in metres
x_spire = np.array([x]) * 1e-3
y_spire = np.array([y]) * 1e-3
z_spire = np.array([z]) * 1e-3

# --- Display the loop ---
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.gca(projection='3d')
plt.plot(([0, 0]), ([0, 0]), ([-0.05, 0.05]), 'b-', label='ligne', linewidth=2)
plt.plot(y_spire, z_spire, 'g.', label='spire 1', linewidth=2)
plt.show()

# --- Compute the magnetic field using Biot-Savart ---
ndp = 50  # number of points
xmin, xmax = -0.05, 0.05  # x limits
ymin, ymax = -0.05, 0.05  # y limits
zmin, zmax = -0.05, 0.05  # z limits
dx = (xmax - xmin) / (ndp - 1)  # x increment
dy = (ymax - ymin) / (ndp - 1)  # y increment
dz = (zmax - zmin) / (ndp - 1)  # z increment

# --- Magnetostatic computation ---
bxf = np.zeros(ndp)  # Bx component of the field
byf = np.zeros(ndp)  # By component of the field
bzf = np.zeros(ndp)  # Bz component of the field

I0f1 = my0 * I0 / (4 * np.pi)  # magnetostatics: the current is multiplied by $\mu_0/(4\pi)$

# Integrate the field induced at each point of the blue line
# by the current flowing in each segment of the green loop
bfx = 0
bfy = 0
bfz = 0
nseg = np.size(z_spire) - 1

for i in range(ndp):
    # Initialise the positions
    xM = (xmin + i * dx)
    yM = 0
    zM = 0
    # Initialise the local fields
    bfx = 0
    bfy = 0
    bfz = 0
    R = np.array([xM, yM, zM])  # position vector of the point at which the field
                                # is computed, by integrating the contribution of
                                # all currents along the green loop
    for wseg in range(nseg):
        xs = x_spire[0][wseg]
        ys = y_spire[0][wseg]
        zs = z_spire[0][wseg]
        Rs = np.array([xs, ys, zs])
        drsx = (x_spire[0][wseg + 1] - x_spire[0][wseg])
        drsy = (y_spire[0][wseg + 1] - y_spire[0][wseg])
        drsz = (z_spire[0][wseg + 1] - z_spire[0][wseg])
        drs = np.array([drsx, drsy, drsz])  # current direction
        Delta_R = Rs - R  # vector from the loop element to the point
                          # where the field is computed
        Delta_Rdrs = sum(Delta_R * drs)
        Delta_Rdist = np.sqrt(Delta_R[0]**2 + Delta_R[1]**2 + Delta_R[2]**2)
        #Delta_Rdis2 = Delta_Rdist**2
        Delta_Rdis3 = Delta_Rdist**3
        b2 = 1.0 / Delta_Rdis3
        b12 = I0f1 * b2 * (-1)
        # Cross product
        Delta_Rxdrs_x = Delta_R[1] * drsz - Delta_R[2] * drsy
        Delta_Rxdrs_y = Delta_R[2] * drsx - Delta_R[0] * drsz
        Delta_Rxdrs_z = Delta_R[0] * drsy - Delta_R[1] * drsx
        # Integration
        bfx = bfx + b12 * Delta_Rxdrs_x
        bfy = bfy + b12 * Delta_Rxdrs_y
        bfz = bfz + b12 * Delta_Rxdrs_z
    # The field is stored as three lists, one per abscissa
    bxf[i] += bfx
    byf[i] += bfy
    bzf[i] += bfz

# --- Theoretical model ---
r = d / 2     # loop radius in mm
r = r * 1e-3  # loop radius in m
bx_analytique = [abs(my0 * I0) * r**2 / (2 * (r**2 + x**2)**(3 / 2))
                 for x in np.linspace(xmin, xmax, ndp, endpoint=True)]
Cours13-DILLMANN-ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
$$B(x)=\frac{\mu_0 I_0\, r^2}{2\,(r^2 + x^2)^{3/2}}$$
# --- Visualisation ---
plt.plot(np.linspace(xmin, xmax, ndp, endpoint=True), bxf, 'bo')
plt.plot(np.linspace(xmin, xmax, ndp, endpoint=True), bx_analytique, 'r-')
Cours13-DILLMANN-ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
Another way to define functions: lambdas
def my_funct(f, arg):
    return f(arg)

my_funct(lambda x: 2 * x * x, 5)
Cours13-DILLMANN-ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
Lambda is a shorthand for creating anonymous functions. They are not necessarily any easier to write.
a = (lambda x: x * x)(8)
print(a)

def polynome(x):
    return x**2 + 5 * x + 4

racine = -4
print("The root of a polynomial is the value at which it evaluates to {0}"
      .format(polynome(racine)))
print("With a lambda it is simpler: ", end="")
print((lambda x: x**2 + 5 * x + 4)(-4))

X = np.linspace(-10, 10, 50, endpoint=True)
plt.plot(X, (lambda x: x**2 + 5 * x + 4)(X))
plt.show()
Cours13-DILLMANN-ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
It is critical to note that this ignores queuing and assumes that xx people are processed at the counter in each time interval. This will be used in conjunction with the scanner output to choose the bottleneck at each point in time.
EXIT_NUMER = zip(FRISK_PATTERN, SCAN_PATTERN)
EXIT_NUMBER = [min(k) for k in EXIT_NUMER]
#plot(EXIT_NUMBER, 'o')
#show()

EXIT_PATTERN = []
for index, item in enumerate(EXIT_NUMBER):
    EXIT_PATTERN += [index] * item
Blog Post Content/Airport Waiting Time.ipynb
akshayrangasai/akshayrangasai.github.io
mit
The minimum of the number of people processed by the scanners and by frisking is the bottleneck at any given time, and this determines the exit rate.
RESIDUAL_ARRIVAL_PATTERN = ARRIVAL_LIST[0:len(EXIT_PATTERN)]
WAIT_TIMES = [m - n for m, n in zip(EXIT_PATTERN, RESIDUAL_ARRIVAL_PATTERN)]
#print EXIT_PATTERN
'''
for i, val in EXIT_PATTERN:
    WAIT_TIMES += [ARRIVAL_PATTERN(i) - val]
'''
plot(WAIT_TIMES, 'r-')
ylabel('Wait times for people entering the queue')
xlabel("Order of entering the queue")
ylim([0, 40])
show()
Blog Post Content/Airport Waiting Time.ipynb
akshayrangasai/akshayrangasai.github.io
mit
Building predictive models First we will split our data into features and the target:
data.head()
X_train = data.drop(columns='species')
y_train = data['species'].values
rampwf/tests/kits/iris/iris_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
A basic predictive model using the scikit-learn random forest classifier is presented below:
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=1, max_leaf_nodes=2, random_state=61)
rampwf/tests/kits/iris/iris_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
We can cross-validate our classifier (clf) using cross_val_score. Below we specify cv=8, meaning KFold cross-validation splitting is used, with 8 folds. The accuracy classification score is calculated for each split. The output is an array of 8 scores, one per fold. The mean and standard deviation of the 8 scores are printed at the end.
from sklearn.model_selection import cross_val_score

scores = cross_val_score(clf, X_train, y_train, cv=8, scoring='accuracy')
print("mean: %e (+/- %e)" % (scores.mean(), scores.std()))
rampwf/tests/kits/iris/iris_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
RAMP submissions For submitting to the RAMP site, you will need to write a submission.py file that defines a get_estimator function that returns a scikit-learn estimator. For example, to submit our basic example above, we would define our classifier clf within the function and return clf at the end. Remember to include all the necessary imports at the beginning of the file.
from sklearn.ensemble import RandomForestClassifier

def get_estimator():
    clf = RandomForestClassifier(n_estimators=1, max_leaf_nodes=2, random_state=61)
    return clf
rampwf/tests/kits/iris/iris_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
If you take a look at the sample submission in the directory submissions/starting_kit, you will find a file named classifier.py, which has the above code in it. You can test that the sample submission works by running ramp_test_submission in your terminal (ensure that ramp-workflow has been installed and you are in the iris ramp kit directory). Alternatively, within this notebook you can run:
!ramp_test_submission
rampwf/tests/kits/iris/iris_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
Path to configuration file with login information to the AAS SQL server
config_filename = "/Users/adrian/projects/aas-abstract-sorter/sql_login.yml"

with open(config_filename) as f:
    config = yaml.load(f.read())
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Establish a database connection
engine = create_engine('mysql+pymysql://{user}:{password}@{server}/{database}'.format(**config))
engine.connect()

_presentation_cache = dict()
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Get all presentations and sessions from AAS 227
query = """
SELECT session.so_id, presentation.title, presentation.abstract, presentation.id
FROM session, presentation
WHERE session.meeting_code = 'aas227'
  AND session.so_id = presentation.session_so_id
  AND presentation.status IN ('Sessioned', '')
  AND session.type IN ('Oral Session', 'Special Session', 'Splinter Meeting')
ORDER BY presentation.id;
"""
result = engine.execute(query)
all_results = result.fetchall()

presentation_df = pd.DataFrame(all_results, columns=all_results[0].keys())
presentation_df['abstract'] = presentation_df['abstract'].str.replace('<[^<]+?>', '')

query = """
SELECT session.title, session.start_date_time, session.end_date_time, session.so_id
FROM session
WHERE session.meeting_code = 'aas227'
  AND session.type IN ('Oral Session', 'Special Session', 'Splinter Meeting')
ORDER BY session.so_id;
"""
result = engine.execute(query)
session_results = result.fetchall()

session_df = pd.DataFrame(session_results, columns=session_results[0].keys())
session_df['start_date_time'] = pd.to_datetime(session_df['start_date_time'])
session_df['end_date_time'] = pd.to_datetime(session_df['end_date_time'])
session_df = session_df[1:]  # zero-th entry has a corrupt date
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Define a scikit-learn count vectorizer with a custom word tokenizer
# based on http://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html
stemmer = PorterStemmer()

def stem_tokens(tokens, stemmer):
    stemmed = []
    for item in tokens:
        stemmed.append(stemmer.stem(item))
    return stemmed

def tokenize(text):
    # remove non-letters
    text = re.sub("[^a-zA-Z]", " ", text)
    # tokenize
    tokens = nltk.word_tokenize(text)
    # stem
    stems = stem_tokens(tokens, stemmer)
    return stems

vectorizer = text.CountVectorizer(
    analyzer='word',
    tokenizer=tokenize,
    lowercase=True,
    stop_words='english',
)
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Fit the count vectorizer to all AAS abstracts from AAS 227
count_matrix = vectorizer.fit_transform(presentation_df['abstract']).toarray()
count_matrix.shape
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
As a quick check, what are the 10 most common words in AAS abstracts?
ten_most_common_idx = count_matrix.sum(axis=0).argsort()[::-1][:10]
feature_words = np.array(vectorizer.get_feature_names())
print(feature_words[ten_most_common_idx])
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
For each pair of abstracts, compute the cosine similarity
similiarity_matrix = np.zeros((count_matrix.shape[0], count_matrix.shape[0]))
for ix1 in range(count_matrix.shape[0]):
    for ix2 in range(count_matrix.shape[0]):
        num = count_matrix[ix1].dot(count_matrix[ix2])
        denom = np.linalg.norm(count_matrix[ix1]) * np.linalg.norm(count_matrix[ix2])

        if num < 1:  # no common words: the vectors are orthogonal
            v = 0.
        else:
            v = num / denom

        similiarity_matrix[ix1, ix2] = v
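The double loop above can be collapsed into a single matrix product by normalising each row to unit length first. A minimal sketch on a hypothetical small count matrix (standing in for `count_matrix`):

```python
import numpy as np

# Hypothetical 3x3 term-count matrix standing in for `count_matrix`.
counts = np.array([[1., 0., 2.],
                   [0., 3., 0.],
                   [2., 0., 4.]])

# Normalise each row to unit length; all-zero rows are guarded so the
# division does not produce NaNs (they stay orthogonal to everything).
norms = np.linalg.norm(counts, axis=1)
norms[norms == 0.] = 1.
unit = counts / norms[:, None]

# A single matrix product yields every pairwise cosine similarity at once.
similarity = unit @ unit.T
```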
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Find the top ten most similar abstracts
similiarity_matrix_1d = np.triu(similiarity_matrix).ravel()
top_ten = sorted(np.unique(similiarity_matrix_1d[~np.isclose(similiarity_matrix_1d, 1.)]),
                 reverse=True)[:10]

# indices of the pairs whose similarity is among the top ten values
ix = np.where(np.isin(np.triu(similiarity_matrix), top_ten))

for ix1, ix2 in zip(list(ix[0]), list(ix[1])):
    pres1 = get_presentation(presentation_ids[ix1])
    pres2 = get_presentation(presentation_ids[ix2])
    print(pres1['title'])
    print(pres2['title'])
    print()
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Those seem pretty similar! Looks like the code is working... Now we'll predict which simultaneous sessions have the most overlap For now, we'll start with the first day of conference talks, 5 Jan. We'll also only check for sessions that have the same start time (of course, we should really be looking at any overlapping sessions, but this is fine as a first pass...).
def session_similarity(so_id1, so_id2):
    """
    Compute the similarity between two sessions by getting the sub-matrix
    of the similarity matrix for all pairs of presentations from each session.
    """
    presentations_session1 = presentation_df[presentation_df['so_id'] == so_id1]
    presentations_session2 = presentation_df[presentation_df['so_id'] == so_id2]

    if len(presentations_session1) == 0 or len(presentations_session2) == 0:
        # no presentations in session
        return np.array([])

    index_pairs = cartesian((presentations_session1.index,
                             presentations_session2.index)).T
    sub_matrix = similiarity_matrix[(index_pairs[0], index_pairs[1])]

    shape = (len(presentations_session1), len(presentations_session2))
    sub_matrix = sub_matrix.reshape(shape)

    return sub_matrix

for name, group in session_df[session_df['start_date_time'] >= datetime(2016, 1, 5)].groupby('start_date_time'):
    for title1, so_id1 in zip(group['title'], group['so_id']):
        for title2, so_id2 in zip(group['title'], group['so_id']):
            if so_id1 >= so_id2:
                continue

            scores = session_similarity(so_id1, so_id2)
            if len(scores) == 0:  # no presentations in one of the sessions
                continue

            if scores.max() > 0.5:  # totally arbitrary threshold
                print(title1)
                print(title2)
                print(scores.max(), np.median(scores))
                print()
notebooks/AAS abstract similarity.ipynb
adrn/AASAbstractSorter
mit
Toy data from HTF, p. 339
def htf_p339(n_samples=2000, p=10, random_state=None):
    random_state = check_random_state(random_state)

    ## Inputs
    X = random_state.normal(size=(n_samples, max(10, p)))

    ## Response: \chi^2_10, 0.5-probability outliers
    y = (np.sum(X[:, :10]**2, axis=1) > 9.34).astype(int).reshape(-1)

    return X, y
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
Fix the RNG
random_state = np.random.RandomState(0xC01DC0DE)
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
Generate four samples
X_train, y_train = htf_p339(2000, 10, random_state)
X_test, y_test = htf_p339(10000, 10, random_state)
X_valid_1, y_valid_1 = htf_p339(2000, 10, random_state)
X_valid_2, y_valid_2 = htf_p339(2000, 10, random_state)
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
<hr/> Ensemble methods In general, any ensemble method can be broken down into the following two, possibly overlapping, stages: 1. Populate a dictionary of base learners; 2. Combine them to get a composite predictor. Many ML estimators can be considered ensemble methods: 1. Regression is a linear ensemble of basis functions: predictors $x\in \mathbb{R}^{p\times 1}$; 2. Any model with additive structure, like regression/classification trees; 3. A feedforward neural network is a bunch of layers of nonlinear predictors stacked one atop the other, in a specific DAG-like manner. Trees A regression tree is a piecewise constant function $T:\mathcal{X} \mapsto \mathbb{R}$ having the following expression $$ T(x) = \sum_{j=1}^J w_j 1_{R_j}(x) \,, $$ where $(R_j)_{j=1}^J$, $J\geq 1$, is a tree-partition of the input space, and $(w_j)_{j=1}^J$ are the estimated values at the terminal nodes. In a multiclass problem, a classification tree is the composition of a majority-voting decision function $$ \mathtt{MAJ}(y) = \mathop{\text{argmax}}_{k=1,\ldots,K} y_k \,, $$ with a scoring function $T:\mathcal{X} \mapsto \mathbb{R}^K$ of the same structure as in the regression case $$ T(x) = \sum_{j=1}^J w_j 1_{R_j}(x) \,, $$ where $(w_j)_{j=1}^J\in\mathbb{R}^K$ are vectors of class likelihoods (probabilities) at the terminal nodes. The tree-partition $(R_j)_{j=1}^J$ and node values $(w_j)_{j=1}^J$ result from running a variant of the standard greedy top-down tree-induction algorithm (CART, C4.5, etc.).
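The two formulas above can be sketched in a few lines. The threshold and node values here are hypothetical, purely for illustration: `tree_predict` is a one-split piecewise-constant tree $T(x) = \sum_j w_j 1_{R_j}(x)$, and `maj` is the $\mathtt{MAJ}$ argmax over class scores.

```python
import numpy as np

# Piecewise-constant regression tree on a single feature: one split at a
# hypothetical threshold, with node values w = (w_1, w_2).
def tree_predict(x, threshold=0.0, w=(1.5, -2.0)):
    return np.where(x <= threshold, w[0], w[1])

# The majority-vote decision function MAJ is simply an argmax over scores.
def maj(scores):
    return int(np.argmax(scores))

print(tree_predict(np.array([-1.0, 0.5])))  # one constant value per region
print(maj([0.2, 0.5, 0.3]))                 # class 1 has the top score
```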
from sklearn.tree import DecisionTreeClassifier

clf1_ = DecisionTreeClassifier(max_depth=1, random_state=random_state).fit(X_train, y_train)
clf2_ = DecisionTreeClassifier(max_depth=3, random_state=random_state).fit(X_train, y_train)
clf3_ = DecisionTreeClassifier(max_depth=7, random_state=random_state).fit(X_train, y_train)
clf4_ = DecisionTreeClassifier(max_depth=None, random_state=random_state).fit(X_train, y_train)

print("Decision tree (1 levels) error:", 1 - clf1_.score(X_test, y_test))
print("Decision tree (3 levels) error:", 1 - clf2_.score(X_test, y_test))
print("Decision tree (7 levels) error:", 1 - clf3_.score(X_test, y_test))
print("Decision tree (max levels) error:", 1 - clf4_.score(X_test, y_test))
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
Bagging Bagging is a meta-algorithm that aims at constructing an estimator by averaging many noisy, but approximately unbiased, models. The general idea is that averaging a set of unbiased estimates yields an estimate with much reduced variance (provided the base estimates are uncorrelated). Bagging works poorly on models that depend linearly on the data (like linear regression), and performs best on nonlinear base estimators (like trees). In other terms, bagging succeeds in building a better combined estimator if the base estimator is unstable. Indeed, if the learning procedure is stable, and random perturbations of the train dataset do not affect it by much, the bagging estimator will not differ much from a single predictor, and may even weaken its performance somewhat. Bootstrapping Consider a train sample $Z = (X, y) = (x_i, y_i)_{i=1}^n \in \mathcal{X}\times \mathcal{Y}$, sampled from a distribution $P$. A bootstrap sample $Z^* = (z^*_i)_{i=1}^n$ is a subsample of $Z = (z_j)_{j=1}^n$ with each element drawn with replacement from $Z$. More technically, a bootstrap sample of size $l$ is a sample from the empirical distribution of the training data $Z$, denoted by $\hat{P}$. So $Z^*\sim \hat{P}^l$ means that $(z^*_i)_{i=1}^l \sim \hat{P}$ iid, or, equivalently, $$ z^*_i = z_j \text{ w. prob. } \tfrac{1}{n}\,,\quad j=1, \ldots, n\,. $$ An interesting property of a bootstrapped sample is that on average $36.79\%$ of the original sample are left out of each $Z^{*b}$. Indeed, the probability that a given sample is present in $Z^*$ is $$ 1 - \bigl(1 - \tfrac{1}{n}\bigr)^n = 1 - e^{-1} + o(n) \approx 63.21\%\,. $$ This means that the observations not selected for the $b$-th bootstrap sample $Z^{*b}$, denoted by $Z\setminus Z^{*b}$, $b=1,\ldots,B$, can be used as an independent test set. The out-of-bag sample, $Z\setminus Z^{*b}$, can be used both for estimating the generalization error and for defining an *OOB predictor*. For a given collection of bootstrap samples $(Z^{*b})_{b=1}^B$, define the set of samples the $i$-th observation does not belong to as $\Gamma_i = \{b=1,\ldots, B\,:\, z_i \notin Z^{*b} \}$, $i=1,\ldots, n$. For a fixed observation $i$, the event that $\Gamma_i$ is empty, meaning that $z_i$ is never out-of-bag, occurs with probability $\bigl(1 - (1-n^{-1})^n\bigr)^B \approx (1-e^{-1})^B$, which is negligible for $B \geq 65$. Regression Let $\mathcal{A}$ be a learning algorithm that, given a learning sample, learns regression models $\hat{f}:\mathcal{X} \mapsto \mathbb{R}$, like a regression tree, $k$-NN, a multi-layer neural network, etc. The bagged regression estimator is constructed as follows: 1. Draw $B$ independent bootstrap samples $(Z^{*b})_{b=1}^B$; 2. On each bootstrap sample $Z^{*b}$ learn an estimator $\hat{f}^{*b} = \hat{f}^{*b}(\cdot; Z^{*b}) = \mathcal{A}(Z^{*b})(\cdot)$; 3. Construct the bagged estimator: $$ \hat{f}^{\text{bag}}_B(x) = B^{-1} \sum_{b=1}^B \hat{f}^{*b}(x) \,. $$ The bagged estimator $\hat{f}^{\text{bag}}_B$ is different from the original-sample estimator $\hat{f}=\hat{f}(\cdot; Z)$ if the ML algorithm is nonlinear in the data, or adaptive. The bagged estimator $\hat{f}^{\text{bag}}_B$ is a Monte-Carlo approximation of the ideal bagging estimator, given by the function $$ \hat{f}^{\text{bag}}(x) = \mathop{\mathbb{E}}\nolimits_{Z^*} \hat{f}^*(x; Z^*) \,. $$ By the law of large numbers, $\hat{f}^{\text{bag}}_B \to \hat{f}^{\text{bag}}$ with probability one (over the empirical distribution $\hat{P}$) as $B\to \infty$. OOB samples can be used to construct the OOB predictor, an estimator defined only for the training samples: $$ \hat{f}^{\text{oob}}_B(x_i) = \frac{1}{|\Gamma_i|} \sum_{b\in \Gamma_i} \hat{f}^{*b}(x_i) \,, $$ and based on it the OOB mean squared error: $$ \text{oob-MSE} = n^{-1} \sum_{i=1}^n \bigl(y_i - \hat{f}^{\text{oob}}_B(x_i)\bigr)^2 \,, $$ where observations with $\Gamma_i=\emptyset$ are omitted. Classification In the case of classification, the bagging estimator is constructed similarly, but there are important caveats. Here the ML algorithm learns a class-score function $\hat{f}:\mathcal{X} \mapsto \mathbb{R}^K$, and the class label is then predicted by applying $\mathtt{MAJ}$ (majority voting) to $\hat{f}(x)$. The majority vote over $K$ candidates with weights $(w_k)_{k=1}^K\in \mathbb{R}$ is defined as $$ \mathtt{MAJ}(w) = \mathop{\text{argmax}}_{k=1,\ldots, K} w_k \,. $$ One option is to define the bagged estimator as $$ \hat{g}^{\text{bag}}_B(x) = \mathtt{MAJ}\Bigl( B^{-1}\sum_{b=1}^B e_{k^{*b}(x)} \Bigr) \,, $$ where $e_k$ is the $k$-th unit vector in $\{0,1\}^{K\times 1}$, and $k^{*b}(x)=\mathtt{MAJ}\bigl(\hat{f}^{*b}(x)\bigr)$. Basically, this ensemble classifies according to the voting proportions of the population of bootstrapped classifiers. However, when most classifiers within the population classify some class correctly, its voting proportion will overestimate the class probability. A better option, especially for well-calibrated classifiers, is to use their scores directly: $$ \hat{g}^{\text{bag}}_B(x) = \mathtt{MAJ}\bigl( B^{-1}\sum_{b=1}^B \hat{f}^{*b}(x) \bigr) \,. $$ One can construct an OOB classifier (or, more generally, an OOB predictor) using the following idea: $$ \hat{g}^{\text{oob}}_B(x_i) = \mathtt{MAJ}\Bigl( \frac{1}{|\Gamma_i|} \sum_{b\in \Gamma_i} e_{k^{*b}(x_i)} \Bigr)\,, $$ or $$ \hat{g}^{\text{oob}}_B(x_i) = \mathtt{MAJ}\Bigl( \frac{1}{|\Gamma_i|} \sum_{b\in \Gamma_i} \hat{f}^{*b}(x_i) \Bigr)\,. $$ Obviously, this classifier is defined only for the observed samples, and only for those examples for which $\Gamma_i\neq\emptyset$. Bagging a good classifier (one with misclassification rate less than $0.5$) can improve its accuracy, while bagging a poor one (with higher than $0.5$ error rate) can seriously degrade predictive accuracy. Usage
from sklearn.ensemble import BaggingClassifier, BaggingRegressor
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
Both Bagging Classifier and Regressor have similar parameters: - n_estimators -- the number of estimators in the ensemble; - base_estimator -- the base estimator from which the bagged ensemble is built; - max_samples -- the fraction of samples used to train each individual base estimator. Choosing max_samples < 1.0 leads to a reduction of variance and an increase in bias; - max_features -- the number of features to draw from X to train each base estimator; - bootstrap -- determines whether samples are drawn with replacement; - bootstrap_features -- determines whether features are drawn with replacement; - oob_score -- determines whether to use out-of-bag samples to estimate the generalization error. Example
clf1_ = BaggingClassifier(n_estimators=10, base_estimator=DecisionTreeClassifier(max_depth=3),
                          random_state=random_state).fit(X_train, y_train)
clf2_ = BaggingClassifier(n_estimators=10, base_estimator=DecisionTreeClassifier(max_depth=None),
                          random_state=random_state).fit(X_train, y_train)
clf3_ = BaggingClassifier(n_estimators=100, base_estimator=DecisionTreeClassifier(max_depth=3),
                          random_state=random_state).fit(X_train, y_train)
clf4_ = BaggingClassifier(n_estimators=100, base_estimator=DecisionTreeClassifier(max_depth=None),
                          random_state=random_state).fit(X_train, y_train)

print("Bagged (10) decision tree (3 levels) error:", 1 - clf1_.score(X_test, y_test))
print("Bagged (10) decision tree (max levels) error:", 1 - clf2_.score(X_test, y_test))
print("Bagged (100) decision tree (3 levels) error:", 1 - clf3_.score(X_test, y_test))
print("Bagged (100) decision tree (max levels) error:", 1 - clf4_.score(X_test, y_test))
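The roughly $36.79\% \approx 1/e$ out-of-bag share derived in the bootstrapping discussion above can be checked numerically with a quick simulation (sample size and number of bootstraps here are arbitrary):

```python
import numpy as np

# Empirically estimate the out-of-bag share of a size-n bootstrap draw.
rng = np.random.RandomState(0)
n, B = 1000, 200

oob_fractions = []
for _ in range(B):
    boot = rng.randint(0, n, size=n)           # bootstrap draw with replacement
    in_bag = np.zeros(n, dtype=bool)
    in_bag[boot] = True
    oob_fractions.append(1.0 - in_bag.mean())  # share of samples never drawn

oob_share = np.mean(oob_fractions)
print(oob_share)  # close to 1/e
```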
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
<hr/> Random Forest Essentially, a random forest is a bagging ensemble constructed from a large collection of decorrelated regression/decision trees. The algorithm specifically modifies the tree-induction procedure to produce trees with as low correlation as possible. 1. for $b=1,\ldots, B$ do: 1. Draw a bootstrap sample $Z^{*b} = (z^{*b}_i)_{i=1}^P$, of size $P = \lfloor \eta n\rfloor$, from $Z$; 2. Grow a tree $T^{*b}$ in a specialized manner: the greedy recursive algorithm is the same, but each time split candidates are chosen from a random subset of features, and the tree is grown until a minimum node size is reached; 2. Take the tree ensemble $(\hat{T}^{*b})_{b=1}^B$ and return the bagged estimator. Trees benefit the most from bagging and random forest ensembles due to their high nonlinearity. Usage
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
As with Bagging, the Random Forest Classifier and Regressor accept similar parameters:
- criterion -- the function to measure the quality of a split. Supported criteria are:
    * "gini" -- Gini impurity (classification only);
    * "entropy" -- the information gain (classification only);
    * "mse" -- mean squared error (regression only);
- max_features -- the number of features to consider when looking for the best split: "sqrt", "log2" and a share in $(0,1]$ are accepted (choosing max_features < n_features leads to a reduction of variance and an increase in bias);
- max_depth -- maximum depth of the individual regression tree estimators (the maximum depth limits the number of nodes in the tree; the best value depends on the interaction of the input variables);
- min_samples_split -- the minimum number of samples required to split an internal node;
- min_samples_leaf -- the minimum number of samples required to be at a leaf node;
- min_weight_fraction_leaf -- the minimum weighted fraction of the input samples required to be at a leaf node;
- max_leaf_nodes -- grow trees with max_leaf_nodes in best-first fashion, determined by the relative reduction in impurity;
- bootstrap -- determines whether samples are drawn with replacement;
- oob_score -- determines whether to use out-of-bag samples to estimate the generalization error.

Note that in Scikit-learn the bootstrap sample size is the same as the original sample ($\eta=1$).

RandomForestClassifier also handles imbalanced classification problems via the class_weight parameter:
- class_weight -- weights associated with classes given in the form of a dictionary with elements {class_label: weight}, or a rebalancing mode:
    * "balanced" -- uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data;
    * "balanced_subsample" -- the same as "balanced", except that weights are re-computed based on the bootstrap sample for every tree grown.

These weights will be used to adjust the sample weight (passed through the fit method).

Example
clf1_ = RandomForestClassifier(n_estimators=10, max_depth=3,
                               random_state=random_state).fit(X_train, y_train)
clf2_ = RandomForestClassifier(n_estimators=100, max_depth=3,
                               random_state=random_state).fit(X_train, y_train)
clf3_ = RandomForestClassifier(n_estimators=10, max_depth=None,
                               random_state=random_state).fit(X_train, y_train)
clf4_ = RandomForestClassifier(n_estimators=100, max_depth=None,
                               random_state=random_state).fit(X_train, y_train)

print "Random Forest (10, 3 levels) error:", 1 - clf1_.score(X_test, y_test)
print "Random Forest (100, 3 levels) error:", 1 - clf2_.score(X_test, y_test)
print "Random Forest (10, max levels) error:", 1 - clf3_.score(X_test, y_test)
print "Random Forest (100, max levels) error:", 1 - clf4_.score(X_test, y_test)
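As an aside, the "balanced" class_weight mode discussed earlier follows the documented formula $w_k = n / (K \cdot n_k)$ for class $k$ with $n_k$ samples out of $n$ and $K$ classes; a tiny sketch (the helper is illustrative, not Scikit-learn API):

```python
from collections import Counter

def balanced_class_weights(y):
    """'balanced' mode: weight_k = n_samples / (n_classes * n_k),
    i.e. inversely proportional to the class frequencies in y."""
    counts = Counter(y)
    n, K = float(len(y)), len(counts)
    return {c: n / (K * n_c) for c, n_c in counts.items()}

weights = balanced_class_weights([0, 0, 0, 1])  # {0: 0.666..., 1: 2.0}
```

The minority class gets a proportionally larger weight, which is what rebalances the impurity computations inside each tree.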
<hr/>

Boosting

Classification

The underlying idea of boosting is to combine a collection of weak predictors into one strong, powerful committee model. Most commonly, a dictionary of nonlinear base predictors, like decision trees (regression/classification), serves as the source of weak predictors in boosting.

Consider the following classification problem: learn a hypothesis (algorithm) $h:\mathcal{X}\mapsto \{-1,+1\}$ that is able to generalize well beyond the given learning sample $Z = (X, y) = (x_i, y_i)_{i=1}^n$, $(x_i, y_i) \in \mathcal{X}\times \{-1, +1\}$.

The empirical risk is the sample average loss
$$ \hat{\mathcal{R}}_Z(h(\cdot)) = n^{-1} \sum_{i=1}^n L(h(x_i), y_i) = \mathbb{E}_{(x,y)\sim Z} L(h(x), y) \,, $$
where $\mathbb{E}_Z$ denotes the expectation over the empirical measure induced by $Z$.

Theoretically, it would be great to learn a classifier $g:\mathcal{X}\mapsto\{-1,+1\}$ that minimizes the theoretical risk
$$ \mathcal{R}(h(\cdot)) = \mathbb{E}_{(x, y)\sim D} 1_{\{y\neq h(x)\}} \,, $$
where $D$ is the true but unknown distribution of the data on $\mathcal{X} \times \{-1, +1\}$. The ideal classifier is the Bayes classifier $g^*(x) = \mathop{\text{sign}}\bigl( 2\,\mathbb{P}_D(y=1\,|\,X=x) - 1 \bigr)$. However, this distribution is unavailable in real life, and thus we have to get by with minimizing the empirical risk, which is known to approximate the theoretical risk due to the Law of Large Numbers. We do this hoping that
$$ \hat{h} \in \mathop{\text{argmin}}_{g\in \mathcal{F}} \hat{\mathcal{R}}_Z(g(\cdot)) \,, $$
also more-or-less minimizes the theoretical risk. Furthermore, for a general class of hypotheses $h:\mathcal{X}\mapsto \{-1,+1\}$, the empirical risk minimization problem cannot be solved efficiently due to the non-convexity of the objective function.

FSAM

Forward Stagewise Additive Modelling is a general greedy approach to modelling additive ensembles (generalized additive models). The basic idea of this approach is to construct a suboptimal model incrementally in a greedy fashion.
The goal is to minimize $\sum_{i=1}^n L(y_i, f(x_i)) + \Omega(f)$ over some class $f\in \mathcal{F}$, where $\Omega(\cdot)$ is an additive complexity regularizer.

Algorithm:
1. set $F_0 = 0$;
2. for $k = 1,\ldots, K$ do:
    1. using some efficient method, find at least a good approximation to the following:
$$ f_k \leftarrow \mathop{\mathtt{argmin}}\limits_{f\in \mathcal{F}} \sum_{i=1}^n L\bigl( y_i, F_{k-1}(x_i) + f(x_i)\bigr) + \Omega(F_{k-1}) + \Omega(f) \,; $$
    2. set $F_k = F_{k-1} + f_k$;
3. Return $F_K$.

AdaBoost

The AdaBoost algorithm is based on the Forward Stagewise Additive Modelling approach, which implements a greedy strategy of constructing an additive model, such as an ensemble (or even a tree), from a rich dictionary of basis functions. In classification, it is a particular example of a convex relaxation of the empirical risk minimization problem: AdaBoost dominates the $0-1$ loss $(y, p)\mapsto 1_{\{y p < 0\}}$ with the exp-loss $(y,p)\mapsto e^{-yp}$, and minimizes a convex upper bound of the classification error.

AdaBoost.M1

1. initialize $\omega_{1i} \leftarrow \frac{1}{n}$, $i=1,\ldots, n$;
2. for $m=1,\ldots, M$ do:
    1. fit a classifier $\hat{g}_m$ to $(X, y)$ with sample weights $(\omega_{mi})_{i=1}^n$;
    2. get the misclassification error $\epsilon_m = W_m^{-1} \sum_{i\,:\,y_i\neq \hat{g}_m(x_i)} \omega_{mi}$, for $W_m = \sum_{i=1}^n \omega_{mi}$;
    3. compute the log-odds ratio $\alpha_m = \log \frac{1-\epsilon_m}{\epsilon_m}$;
    4. update the weights: $\omega_{m+1,i} \leftarrow \omega_{mi} \exp\bigl( \alpha_m 1_{\{y_i\neq \hat{g}_m(x_i)\}} \bigr)$;
3. Output the ensemble $\hat{g} = \mathop{\text{sign}}\Bigl( \sum_{m=1}^M \alpha_m \hat{g}_m \Bigr)$.

The AdaBoost.M1 algorithm employs an adversarial teaching approach to strengthen the ensemble. As is visible from the algorithm, the teacher tries to maximize the classification error of the learner by amplifying the weights of the difficult to classify examples.
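The AdaBoost.M1 loop is short enough to implement directly; a minimal pure-Python sketch on 1-D inputs with decision stumps as the weak learners (all names are illustrative, not library API):

```python
from math import exp, log

def fit_stump(X, y, w):
    """Weighted-error-minimizing decision stump on 1-D inputs, y in {-1, +1}."""
    best = None
    for t in sorted(set(X)):
        for s in (+1, -1):
            err = sum(wi for wi, xi, yi in zip(w, X, y)
                      if (s if xi <= t else -s) != yi) / sum(w)
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

def adaboost_m1(X, y, M=10):
    n = len(X)
    w = [1.0 / n] * n                   # omega_{1i} = 1/n
    ensemble = []
    for _ in range(M):
        err, t, s = fit_stump(X, y, w)  # fit g_m under the current weights
        if err >= 0.5:                  # no better than chance: stop
            break
        alpha = log((1 - err) / max(err, 1e-12))  # log-odds ratio
        ensemble.append((alpha, t, s))
        # amplify the weights of the misclassified examples
        w = [wi * (exp(alpha) if (s if xi <= t else -s) != yi else 1.0)
             for wi, xi, yi in zip(w, X, y)]
    return ensemble

def ada_predict(ensemble, x):
    score = sum(a * (s if x <= t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

model = adaboost_m1([1.0, 2.0, 3.0, 4.0], [1, 1, -1, -1], M=5)
```

On this toy sample a single stump already separates the classes, so the weights stay uniform; on harder data the misclassified points accumulate weight round after round, exactly as described above.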
The size of the ensemble $M$ serves as a regularization parameter: the greater the $M$, the more boosting overfits. An optimal $M$ can be chosen by cross-validation (preferably on a single common validation set).

A recent development, called DeepBoost (Mohri et al., 2014), proposes a new ensemble learning algorithm similar in spirit to AdaBoost. Its key feature is that the algorithm incorporates a complexity penalty for convex combinations of models into the convex relaxation of the loss criterion. This enables the selection of better hypotheses that minimize the upper bound on the theoretical risk.

Usage
from sklearn.ensemble import AdaBoostClassifier, AdaBoostRegressor
Common parameters:
- n_estimators -- the maximum number of estimators at which boosting is terminated (in case of a perfect fit, the learning procedure is stopped early);
- base_estimator -- the base estimator, which supports sample weighting, from which the boosted ensemble is built;
- learning_rate -- shrinks the contribution of each classifier by learning_rate.

AdaBoostClassifier only:
- algorithm -- the AdaBoost version to use:
    * "SAMME.R" -- the SAMME.R real boosting algorithm;
    * "SAMME" -- the SAMME (M1) discrete boosting algorithm;

The SAMME.R algorithm typically converges faster than SAMME, achieving a lower test error with fewer boosting iterations.

AdaBoostRegressor only:
- loss -- the loss function to use when updating the weights after each boosting iteration:
    * "linear" -- absolute loss $L(y, p) = |y-p|$;
    * "square" -- squared loss $L(y, p) = |y-p|^2$;
    * "exponential" -- exponential loss $L(y, p) = 1-e^{-|y-p|}$.

Examples
clf1_ = AdaBoostClassifier(n_estimators=10, base_estimator=DecisionTreeClassifier(max_depth=1),
                           random_state=random_state).fit(X_train, y_train)
clf2_ = AdaBoostClassifier(n_estimators=100, base_estimator=DecisionTreeClassifier(max_depth=1),
                           random_state=random_state).fit(X_train, y_train)
clf3_ = AdaBoostClassifier(n_estimators=10, base_estimator=DecisionTreeClassifier(max_depth=3),
                           random_state=random_state).fit(X_train, y_train)
clf4_ = AdaBoostClassifier(n_estimators=100, base_estimator=DecisionTreeClassifier(max_depth=3),
                           random_state=random_state).fit(X_train, y_train)

print "AdaBoost.M1 (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "AdaBoost.M1 (100, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "AdaBoost.M1 (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "AdaBoost.M1 (100, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
<hr/>

Gradient boosting

In certain circumstances, in order to minimize a convex twice-differentiable function $f:\mathbb{R}^p \mapsto \mathbb{R}$, one uses the Newton-Raphson iterative procedure, which repeats this update step until convergence:
$$ x_{m+1} \leftarrow x_m - \bigl(\nabla^2 f(x_m)\bigr)^{-1} \nabla f(x_m) \,, $$
where $\nabla^2 f(x_m)$ is the hessian of $f$ at $x_m$ and $\nabla f(x_m)$ is its gradient. In a more general setting, if the function is not twice differentiable, or if the hessian is expensive to compute, one resorts to a gradient descent procedure, which moves in the direction of the steepest descent and updates according to
$$ x_{m+1} \leftarrow x_m - \eta \nabla f(x_m) \,,$$
for some step $\eta > 0$.

Gradient Boosting is, to a certain extent, a gradient descent procedure aimed at minimizing an expected loss functional $\mathcal{L}: \mathcal{F}\mapsto \mathbb{R}$ on some function space $\mathcal{F} \subset \mathbb{R}^{\mathcal{X}}$. In particular, if the underlying distribution of the data were known, Gradient Boosting would attempt to find a minimizer $x\mapsto \hat{f}(x)$ such that for all $x\in \mathcal{X}$
$$ \hat{f}(x) = \mathop{\text{argmin}}_{f\in\mathcal{F}} \mathbb{E}_{y \sim P|x} L(y, f(x)) \,. $$
At each iteration it would update the current estimate of the minimizer $\hat{f}_m$ in the direction of the steepest descent towards $\hat{f}$:
$$ \hat{f}_{m+1} \leftarrow \hat{f}_m - \rho \hat{g}_m \,, $$
where $\hat{g}_m \in \mathcal{F}$ is given by
$$ \hat{g}_m(x) = \biggl. \frac{\partial}{\partial f(x)} \Bigl( \mathbb{E}_{y \sim P|x} L\bigl(y, f(x)\bigr) \Bigr) \biggr\rvert_{f=\hat{f}_m} = \biggl. \mathbb{E}_{y \sim P|x} \frac{\partial}{\partial f(x)} L\bigl(y, f(x)\bigr) \biggr\rvert_{f=\hat{f}_m} \,, $$
(under some regularity conditions it is possible to interchange the expectation and differentiation operations).
In turn $\rho$ is determined by
$$ \rho = \mathop{\text{argmin}}_\rho \mathbb{E}_{(x,y) \sim P} L(y, \hat{f}_m(x) - \rho \hat{g}_m(x)) \,. $$
Since in practice the expectations are not known, one approximates them with their empirical counterparts, which makes the gradient undefined outside the observed sample points. That is why one needs a class of basis functions which can generalize the gradient from a point to its neighbourhood.

Gradient Boosting procedure

1. Initialize the ensemble with $\hat{f}_0 \leftarrow \mathop{\text{argmin}}_\gamma \sum_{i=1}^n L(y_i, \gamma)$;
2. for $m=1,\ldots, M$ do:
    1. Gradient approximation: compute the current sample descent direction (negative gradient) using the current ensemble:
$$ r_{mi} = \biggl. - \frac{\partial}{\partial f(x_i)} L\bigl(y_i, f(x_i)\bigr) \biggr\rvert_{f=f_{m-1}} \,, $$
this can be thought of as a finite-dimensional approximation of the functional gradient $\delta \mathcal{L}$ of the loss functional $\mathcal{L}$;
    2. Fit an MSE-minimizing parametric basis function $h(x;\theta)$ to the approximation of the gradient $(r_{mi})_{i=1}^n$:
$$ (\theta_m, \beta_m) \leftarrow \mathop{\text{argmin}}_{\theta, \beta} \sum_{i=1}^n \bigl(r_{mi} - \beta h(x_i;\theta) \bigr)^2\,; $$
basically, we hope that $h(\cdot;\theta)$ approximates the functional gradient well enough and extrapolates beyond the point estimates to their immediate neighbourhoods;
    3. Line search: determine the optimal step in the direction of the functional gradient that minimizes the loss functional:
$$ \gamma_m \leftarrow \mathop{\text{argmin}}_\gamma \sum_{i=1}^n L\bigl(y_i, f_{m-1}(x_i) + \gamma h(x_i;\theta_m)\bigr)\,;$$
    4. Update the ensemble: $f_m = f_{m-1} + \eta \, \gamma_m h(\cdot;\theta_m)$;
3. Return $\hat{f}(x) = f_M(x)$.

Here $\eta > 0$ is the learning rate.

Gradient Boosted Regression Trees

The Gradient Boost algorithm uses basis functions $h(\cdot; \theta)$ from some class to approximate the gradient.
For example, one can use regression splines, or more generally fit a kernel ridge regression for gradient interpolation, or use regression trees. Regression trees do not assume a predetermined parametric form, and instead are constructed according to information derived from the data.

Algorithm

With a given tree-partition structure $(R_j)_{j=1}^J$, it is straightforward to find the optimal estimates $(w_j)_{j=1}^J \in \mathbb{R}$. Finding an optimal partition $(R_j)_{j=1}^J$ is an entirely different matter: exhaustive search is out of the question, so the algorithm to go with is the greedy top-down recursive partitioning procedure.

Boosted trees form an ensemble $\hat{f}(x) = \sum_{m=1}^M \hat{f}_m(x)$, with weights incorporated in each base estimator.

GBRT

1. Initialize the ensemble with $\hat{f}_0 \leftarrow \mathop{\text{argmin}}_\gamma \sum_{i=1}^n L(y_i, \gamma)$;
2. for $m=1,\ldots, M$ do:
    1. Compute the current sample descent direction (negative gradient) using the current ensemble:
$$ r_{mi} = \biggl. - \frac{\partial}{\partial f(x_i)} L\bigl(y_i, f(x_i)\bigr) \biggr\rvert_{f=f_{m-1}} \,, $$
this is a finite-dimensional version of the first variation $\delta J$ of a functional $J:\mathbb{R}^{\mathcal{X}}\mapsto \mathbb{R}$ on some function space;
    2. Fit an MSE-minimizing regression tree $\hat{T}_m = \sum_{j=1}^J \beta_j 1_{R_{mj}}(x)$ to the current gradient $(r_{mi})_{i=1}^n$ and keep its partition structure; basically, we want to generalize the point estimates of the variation to some neighbourhood of each sample point (here the neighbourhoods are the tree partitions);
    3. Line search: determine the optimal node weights
$$ w_{mj} \leftarrow \mathop{\text{argmin}}_w \sum_{i\,:\,x_i\in R_{mj}} L(y_i, f_{m-1}(x_i) + w)\,;$$
    4. Update the ensemble: $f_m = f_{m-1} + \sum_{j=1}^J w_{mj} 1_{R_{mj}}$;
3. Return $\hat{f}(x) = f_M(x)$.

Usage
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
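Before the Scikit-learn parameters, the boosting loop itself can be sketched from scratch: for the squared loss $L(y, f) = \frac{1}{2}(y - f)^2$ the negative gradient is simply the residual $y - f(x)$, so each round fits a 1-D regression stump to the current residuals. A pure-Python illustration, independent of the notebook's data (names are illustrative):

```python
def fit_reg_stump(X, r):
    """MSE-minimizing 1-D regression stump fitted to the residuals r."""
    best = None
    for t in sorted(set(X))[:-1]:                 # keep both sides non-empty
        left = [ri for xi, ri in zip(X, r) if xi <= t]
        right = [ri for xi, ri in zip(X, r) if xi > t]
        wl, wr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - wl) ** 2 for ri in left)
               + sum((ri - wr) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, wl, wr)
    return best[1:]

def gradient_boost(X, y, M=25, eta=0.5):
    f0 = sum(y) / float(len(y))          # argmin_gamma sum_i L(y_i, gamma)
    F = [f0] * len(X)
    stumps = []
    for _ in range(M):
        r = [yi - Fi for yi, Fi in zip(y, F)]     # negative gradient
        t, wl, wr = fit_reg_stump(X, r)
        stumps.append((t, wl, wr))
        F = [Fi + eta * (wl if xi <= t else wr) for xi, Fi in zip(X, F)]
    return f0, eta, stumps

def gb_predict(model, x):
    f0, eta, stumps = model
    return f0 + eta * sum(wl if x <= t else wr for t, wl, wr in stumps)

model = gradient_boost([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 1.0])
```

With shrinkage $\eta = 0.5$ each round removes half of the remaining residual, so after $M$ rounds the fit is accurate to within $(1-\eta)^M$ on this toy sample.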
Both Gradient boosting ensembles in scikit accept the following parameters:
- loss -- the loss function to be optimized:
    * Classification:
        * 'deviance' -- refers to logistic regression with probabilistic outputs;
        * 'exponential' -- gradient boosting recovers the AdaBoost algorithm;
    * Regression:
        * 'ls' -- refers to least squares regression;
        * 'lad' -- (least absolute deviation) a highly robust loss function solely based on the order information of the input variables;
        * 'huber' -- a combination of the two;
        * 'quantile' -- allows quantile regression (use alpha to specify the quantile);
- learning_rate -- shrinks the contribution of each tree by learning_rate;
- n_estimators -- the number of boosting stages to perform (gradient boosting is fairly robust to over-fitting, so a large number usually results in better performance);
- max_depth -- maximum depth of the individual regression tree estimators (the maximum depth limits the number of nodes in the tree; the best value depends on the interaction of the input variables);
- min_samples_split -- the minimum number of samples required to split an internal node;
- min_samples_leaf -- the minimum number of samples required to be at a leaf node;
- min_weight_fraction_leaf -- the minimum weighted fraction of the input samples required to be at a leaf node;
- subsample -- the fraction of samples to be used for fitting the individual base learners (choosing subsample < 1.0 results in Stochastic Gradient Boosting and leads to a reduction of variance and an increase in bias);
- max_features -- the number of features to consider when looking for the best split: "sqrt", "log2" and a share in $(0,1]$ are accepted (choosing max_features < n_features leads to a reduction of variance and an increase in bias);
- max_leaf_nodes -- grow trees with max_leaf_nodes in best-first fashion, with the best nodes defined by the relative reduction in impurity;
- alpha -- the alpha-quantile of the huber loss function and the quantile loss function (only if loss='huber' or loss='quantile').

Examples

High learning rate, small ensemble
clf1_ = GradientBoostingClassifier(n_estimators=10, max_depth=1, learning_rate=0.75,
                                   random_state=random_state).fit(X_train, y_train)
clf2_ = GradientBoostingClassifier(n_estimators=100, max_depth=1, learning_rate=0.75,
                                   random_state=random_state).fit(X_train, y_train)
clf3_ = GradientBoostingClassifier(n_estimators=10, max_depth=3, learning_rate=0.75,
                                   random_state=random_state).fit(X_train, y_train)
clf4_ = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.75,
                                   random_state=random_state).fit(X_train, y_train)

print "GBRT (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "GBRT (100, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "GBRT (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "GBRT (100, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
Large ensemble, small learning rate
clf1_ = GradientBoostingClassifier(n_estimators=100, max_depth=1, learning_rate=0.1,
                                   random_state=random_state).fit(X_train, y_train)
clf2_ = GradientBoostingClassifier(n_estimators=1000, max_depth=1, learning_rate=0.1,
                                   random_state=random_state).fit(X_train, y_train)
clf3_ = GradientBoostingClassifier(n_estimators=100, max_depth=3, learning_rate=0.1,
                                   random_state=random_state).fit(X_train, y_train)
clf4_ = GradientBoostingClassifier(n_estimators=1000, max_depth=3, learning_rate=0.1,
                                   random_state=random_state).fit(X_train, y_train)

print "GBRT (100, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "GBRT (1000, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "GBRT (100, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "GBRT (1000, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
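Rather than refitting the ensemble for every candidate size, staged_predict yields predictions after each boosting round, so the entire test-error curve costs a single fit. A sketch on a synthetic dataset (it deliberately does not reuse the notebook's split):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                 max_depth=3, random_state=0).fit(X_tr, y_tr)
# one prediction per boosting stage: test error after 1, 2, ..., 200 rounds
errors = [np.mean(pred != y_te) for pred in clf.staged_predict(X_te)]
best_m = int(np.argmin(errors)) + 1
```

The argmin of this curve is the cross-validation-style choice of n_estimators mentioned earlier for AdaBoost; on a proper validation set it should be picked the same way.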
<hr/>

XGBoost

Briefly, XGBoost is a highly streamlined open-source gradient boosting library, which supports many useful loss functions and uses a second order loss approximation both to increase the ensemble accuracy and the speed of convergence:
1. a learning rate $\eta>0$ to regulate the convergence;
2. $l_1$ and $l_2$ regularization on the node weights, for the bias-variance tradeoff and sparsity;
3. cost-complexity pruning of the grown trees;
4. specialized regression and classification tree growth algorithms with random projections and bagging.

It is important to note that XGBoost implements binary trees, which does not restrict the model in any way. However, this adds the need for an extra preprocessing step for categorical features. Specifically, the binary structure requires that such features be $0-1$ encoded, which is likely to use excessive volumes of memory, especially when the set of possible categories is of the order of thousands.

In order to permit the use of arbitrary convex loss functions --
$$ \sum_{i=1}^n L( y_i, \hat{y}_i ) + \sum_{k=1}^K \Omega(f_k) \rightarrow \mathop{\mathtt{min}}_{f_k\in\mathcal{M}} \,,$$
with prediction $\hat{y}_i = \sum_{k=1}^K f_k(x_i)$, the loss $L(y, \hat{y})$, and the additive complexity regularizer $\Omega(\cdot)$ -- and still achieve high performance during learning, the author of XGBoost implemented a clever trick: he uses the general FSAM approach, but the minimization with respect to the increment $f(\cdot)$ is performed on the second order Taylor series approximation of the loss $L$ at $(x_i, y_i)$ and $F(\cdot)$. In particular, the minimization over $f(\cdot)$ is done on the quadratic approximation
$$ q_{y, x} = L(y, F(x)) + \frac{\partial L}{\partial \hat{y}}\bigg\vert_{(y,F(x))} f(x) + \frac{1}{2} \frac{\partial^2 L}{\partial \hat{y}^2}\bigg\vert_{(y,F(x))} f(x)^2 \,, $$
rather than on $L(y, F(x) + f(x))$.
Since $\Omega(F_{k-1})$ and $L( y_i, F_{k-1}(x_i) )$ are unaffected by the choice of $f\in\mathcal{F}$ at iteration $k$, the greedy step can be reduced to:
$$ f_k \leftarrow \mathop{\mathtt{argmin}}\limits_{f\in \mathcal{F}} \sum_{i=1}^n \Bigl( g^{k-1}_i f(x_i) + \frac{1}{2} h^{k-1}_i f(x_i)^2 \Bigr) + \Omega(f) \,, $$
where $g^{k-1}_i = \frac{\partial L(y, \hat{y})}{\partial \hat{y}}$ and $h^{k-1}_i = \frac{\partial^2 L(y, \hat{y})}{\partial \hat{y}^2}$, both evaluated at $y=y_i$ and $\hat{y}=F_{k-1}(x_i)$.

The values $g^{k-1}_i$ and $h^{k-1}_i$ are the gradient and hessian statistics of the $i$-th observation, respectively. These statistics have to be recomputed at each stage for the new $\hat{y}$. The statistics $g^{0}_i$ and $h^{0}_i$ are initialized to the values of the first and second derivatives of $L(y_i, c)$ for some fixed $c$ at each $i=1,\ldots, n$ ($c$ is the sample average in the case of regression, or the log-odds of the class ratio).

Optimizing the objective

XGBoost uses criteria derived from the objective function that permit automatic tree pruning. Consider some tree $f$ with structure
$$ f = \sum_{j=1}^J w_j 1_{R_j} \,,$$
where $(R_j)_{j=1}^J \subseteq \mathcal{X}$ is its partition and $w\in\mathbb{R}^J$ are the leaf predicted values. For this tree the complexity regularization is
$$ \Omega(f) = \gamma J + \frac{\lambda}{2} \sum_{j=1}^J w_j^2 + \alpha \sum_{j=1}^J \bigl|w_j\bigr| \,. $$
As one can see, both excessively large leaf values and tree depths are penalized.
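For instance, for the logistic objective with $p = \sigma(\hat{y})$ the per-observation statistics take the well-known closed form $g = p - y$ and $h = p(1-p)$; a quick pure-Python check, including a finite-difference test of $g$ (helper names are illustrative):

```python
from math import exp, log

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def logistic_loss(y, margin):
    """L(y, y_hat) on the raw margin y_hat, labels y in {0, 1}."""
    p = sigmoid(margin)
    return -y * log(p) - (1.0 - y) * log(1.0 - p)

def grad_hess(y, margin):
    """g = dL/dy_hat and h = d^2L/dy_hat^2 for the logistic objective."""
    p = sigmoid(margin)
    return p - y, p * (1.0 - p)

g, h = grad_hess(1.0, 0.0)     # p = 0.5, so g = -0.5, h = 0.25
# central finite-difference check of g at margin = 0
eps = 1e-6
g_fd = (logistic_loss(1.0, eps) - logistic_loss(1.0, -eps)) / (2 * eps)
```

These are exactly the quantities XGBoost accumulates per leaf when computing $G$ and $H$ below.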
Stage $k\geq 1$

Using the map $x\mapsto j(x)$, which gives the unique leaf index $j=1,\ldots,J$ such that $x\in R_j$, the objective function minimized at each stage $k\geq 1$ is given by
\begin{align}
\mathtt{Obj}_k(R, w)
  &= \sum_{i=1}^n \Bigl( g^{k-1}_i w_{j(x_i)} + \frac{1}{2} h^{k-1}_i w_{j(x_i)}^2 \Bigr)
     + \frac{\lambda}{2} \sum_{j=1}^J w_j^2 + \alpha \sum_{j=1}^J \bigl|w_j\bigr| + \gamma J \\
  &= \sum_{j=1}^J \Bigl( w_j G_{k-1}(R_j) + \frac{1}{2} \bigl( H_{k-1}(R_j) + \lambda \bigr) w_j^2
     + \alpha \bigl|w_j\bigr| + \gamma \Bigr) \,,
\end{align}
where for any $P\subseteq \mathcal{X}$ the values $G_{k-1}(P) = \sum_{i\,:\,x_i\in P} g^{k-1}_i$ and $H_{k-1}(P) = \sum_{i\,:\,x_i\in P} h^{k-1}_i$ are called the first and the second order gradient scores, respectively. When $P = R_j$ these are the $j$-th leaf gradient statistics, which depend only on the ensemble $F_{k-1}$ and are constant relative to the increment $f$.

The structural score of an XGBoost regression tree is the minimal value of the objective function for a fixed partition structure $R = (R_j)_{j=1}^J$:
$$ \mathtt{Obj}^*(R) = \min_{w} \mathtt{Obj}_k(R, w) \,. $$
This is not an intermediate value of the objective function, but rather its difference against $\sum_{i=1}^n L(y_i, F_{k-1}(x_i))$.

It is worth noting that, since there are no cross interactions between the scores $w_j$ of different leaves, this minimization problem equivalently reduces to $J$ univariate optimization problems:
$$ w_j G_{k-1}(R_j) + \frac{1}{2} \bigl( H_{k-1}(R_j) + \lambda \bigr) w_j^2 + \alpha \bigl|w_j\bigr| + \gamma \to \min_{w_j}\,,$$
for $j=1,\ldots, J$. Let's assume that $H_{k-1}(R_j) + \lambda > 0$, since otherwise this problem has no solution.
The optimal leaf value $w_j^*$ in the general case is given by
$$ w^*_j = - \frac{1}{H_{k-1}(R_j) + \lambda}
\begin{cases}
G_{k-1}(R_j) + \alpha & \text{ if } G_{k-1}(R_j) \leq -\alpha\,;\\
0 & \text{ if } G_{k-1}(R_j) \in [-\alpha, \alpha]\,;\\
G_{k-1}(R_j) - \alpha & \text{ if } G_{k-1}(R_j) \geq \alpha\,.
\end{cases} $$

Tree construction process

Trees in XGBoost employ a greedy algorithm for recursive tree construction, outlined below:
1. every region $R_j$ in the partition $R$ is probed for the optimal binary split $R_j\to R_{j_1}\!\| R_{j_2}$ according to the structural gain score
$$ \mathtt{Gain}\bigl( R_j\to R_{j_1}\!\| R_{j_2} \bigr) = \mathtt{Obj}^*( R ) - \mathtt{Obj}^*( R' ) \,, $$
where the partition $R'$ is constructed from $R$ by splitting $R_j\to R_{j_1}\!\| R_{j_2}$;
2. the region $R_j$ with the highest gain from the optimal split is split into $R_{j_1}$ and $R_{j_2}$;
3. the tree growth process continues until no more splits are possible.

The first step is the most computationally intensive, since it requires $O( J d n\log n )$ operations. This step is performed by XGBoost in parallel, since FSAM and tree induction are serial by nature.

Tree growth gain

For simplicity, let's consider the case of $\alpha = 0$, i.e. pure $L^2$ regularization. In this case the optimal leaf scores are given by the weights
$$ w^*_j = -\frac{G_{k-1}(R_j)}{H_{k-1}(R_j) + \lambda}\,.$$
The structural score becomes
$$ \mathtt{Obj}^*(R) = \gamma J - \frac{1}{2}\sum_{j=1}^J \frac{G_{k-1}^2(R_j)}{H_{k-1}(R_j) + \lambda} \,. $$
Any split $R_j \rightarrow R_{j_1}\!\| R_{j_2}$ yields the following gain:
$$ \mathtt{Gain} = \frac{1}{2}\Biggl( \frac{G_{k-1}^2(R_{j_1})}{H_{k-1}(R_{j_1}) + \lambda} + \frac{G_{k-1}^2(R_{j_2})}{H_{k-1}(R_{j_2}) + \lambda} - \frac{G_{k-1}^2(R_j)}{ H_{k-1}(R_j) + \lambda} \Biggr) - \gamma\,.$$
Note that $G_{k-1}(\cdot)$ and $H_{k-1}(\cdot)$ are additive by construction:
$$G_{k-1}(R_j) = G_{k-1}(R_{j_1}) + G_{k-1}(R_{j_2}) \,,$$
and
$$H_{k-1}(R_j) = H_{k-1}(R_{j_1}) + H_{k-1}(R_{j_2}) \,.$$

Usage
import xgboost as xg

seed = random_state.randint(0x7FFFFFFF)
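The closed-form leaf values and the split gain derived above are easy to verify numerically without xgboost itself; a pure-Python sketch in terms of the leaf statistics $G$ and $H$ (illustrative helpers):

```python
def leaf_value(G, H, lam=1.0, alpha=0.0):
    """Optimal leaf weight w*: soft-thresholding of G by alpha,
    shrunk by H + lambda (assumes H + lambda > 0)."""
    if G < -alpha:
        return -(G + alpha) / (H + lam)
    if G > alpha:
        return -(G - alpha) / (H + lam)
    return 0.0

def split_gain(G_l, H_l, G_r, H_r, lam=1.0, gamma=0.0):
    """Structural gain of splitting a leaf (alpha = 0 case); the parent
    statistics are the sums G_l + G_r, H_l + H_r by additivity."""
    score = lambda G, H: G * G / (H + lam)
    return 0.5 * (score(G_l, H_l) + score(G_r, H_r)
                  - score(G_l + G_r, H_l + H_r)) - gamma

w = leaf_value(4.0, 3.0)              # -4 / (3 + 1) = -1.0
w0 = leaf_value(0.5, 3.0, alpha=1.0)  # |G| <= alpha: pruned to 0
gain = split_gain(-2.0, 2.0, 2.0, 2.0)  # children's gradients cancel
```

The $L^1$ term literally zeroes out leaves with small gradient mass, and a positive gamma turns the gain test into automatic pre-pruning.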
Scikit-Learn interface
clf_ = xg.XGBClassifier(
    ## Boosting:
    n_estimators=50, learning_rate=0.1,
    objective="binary:logistic", base_score=0.5,
    ## Regularization: tree growth
    max_depth=3, gamma=0.5, min_child_weight=1.0, max_delta_step=0.0,
    subsample=1.0, colsample_bytree=1.0, colsample_bylevel=1.0,
    ## Regularization: leaf weights
    reg_alpha=0.0, reg_lambda=1.0,
    ## Class balancing
    scale_pos_weight=1.0,
    ## Service parameters: missing=None makes use of np.nan as missing.
    seed=seed, missing=None, nthread=2, silent=False)

clf_.fit(X_train, y_train, early_stopping_rounds=5,
         eval_set=[(X_valid_1, y_valid_1), (X_valid_2, y_valid_2),])

y_pred_ = clf_.predict(X_test)
Internally XGBoost relies heavily on a custom dataset format, DMatrix. The interface exposed into python has three capabilities:
- load datasets in libSVM compatible format;
- load SciPy's sparse matrices;
- load Numpy's ndarrays.

The DMatrix class is constructed with the following parameters:
- data -- the data source of DMatrix: when data is of string type, it represents the path to a libSVM format txt file, or a binary file that xgboost can read from; otherwise, a matrix of observed features $X$ in a numpy or scipy matrix;
- label -- the observation labels $y$ (could be categorical or numeric);
- missing -- a vector of values that encode missing observations; if None, defaults to np.nan;
- feature_names -- the column names of $X$;
- feature_types -- defines the python types of each column of $X$, in case of heterogeneous data;
- weight -- the vector of nonnegative weights of each observation in the dataset.
dtrain = xg.DMatrix(X_train, label=y_train, missing=np.nan)
dtest = xg.DMatrix(X_test, missing=np.nan)

dvalid1 = xg.DMatrix(X_valid_1, label=y_valid_1, missing=np.nan)
dvalid2 = xg.DMatrix(X_valid_2, label=y_valid_2, missing=np.nan)
The same XGBoost classifier as in the Scikit-learn example.
param = dict(
    ## Boosting:
    eta=0.1, objective="binary:logistic", base_score=0.5,
    ## Regularization: tree growth
    max_depth=3, gamma=0.5, min_child_weight=1.0, max_delta_step=0.0,
    subsample=1.0, colsample_bytree=1.0, colsample_bylevel=1.0,
    ## Regularization: leaf weights
    reg_alpha=0.0, reg_lambda=1.0,
    ## Class balancing
    scale_pos_weight=1.0,
    ## Service parameters:
    seed=seed, nthread=2, silent=1)

evals_result = dict()
xgb_ = xg.train(
    ## XGboost settings
    param,
    ## Train dataset
    dtrain,
    ## The size of the ensemble
    num_boost_round=50,
    ## Early-stopping
    early_stopping_rounds=5,
    evals=[(dvalid1, "v1"), (dvalid2, "v2"),],
    evals_result=evals_result)

pred_ = xgb_.predict(dtest)
Both the sklearn-compatible and basic python interfaces take similar parameters; they are just passed slightly differently.

Gradient boosting parameters:
- eta, learning_rate ($\eta$) -- step size shrinkage factor;
- n_estimators, num_boost_round ($M$) -- the size of the ensemble, i.e. the number of boosting rounds;
- objective -- objective functions:
    * "reg:linear" -- linear regression: $(x_i, y_i)_{i=1}^n \in \mathcal{X} \times \mathbb{R}$, $\hat{p}:\mathcal{X} \mapsto \mathbb{R}$;
    * "reg:logistic" -- logistic regression for probability regression tasks: $(x_i, y_i)_{i=1}^n \in \mathcal{X} \times [0, 1]$, $\hat{p}:\mathcal{X} \mapsto [0, 1]$;
    * "binary:logistic" -- logistic regression for binary classification: $(x_i, y_i)_{i=1}^n \in \mathcal{X} \times \{0, 1\}$, $\hat{p}:\mathcal{X} \mapsto \{0, 1\}$;
    * "binary:logitraw" -- logistic regression for binary classification, outputs the score before the logistic transformation: $\hat{p}:\mathcal{X} \mapsto \mathbb{R}$;
    * "multi:softmax" -- softmax for multi-class classification, outputs the class index: $\hat{p}:\mathcal{X} \mapsto \{1,\ldots,K\}$;
    * "multi:softprob" -- softmax for multi-class classification, outputs a probability distribution: $\hat{p}:\mathcal{X} \mapsto \{\omega\in [0,1]^K\,:\, \sum_{k=1}^K \omega_k = 1 \}$;
- base_score -- global bias of the model: in linear regression ("reg:linear") it sets the bias of the regression function; in binary classification ("reg:logistic", "binary:logistic" and "binary:logitraw") it sets the base class ratio (transformed to log-odds and added to the logistic score).
Regularization -- tree growth and decorrelation:
- max_depth -- limits the size of the tree by setting a hard bound on the number of tree layers (limits the recursion depth);
- min_child_weight -- the minimal value of the hessian statistic of a leaf required for it to be considered a candidate for splitting;
- gamma ($\gamma$) -- the complexity cost parameter, imposes a minimal structural score gain for splitting a leaf of the current tree;
- subsample -- the share of the training data to use for growing a tree: determines the size of the bootstrap samples $Z^{b}$;
- colsample_bytree -- the size of the random subset of features that can be used in the growth of the whole tree (accessible features);
- colsample_bylevel -- subsample ratio of features when considering a split: determines the size of the random subset of accessible features considered as candidates for node splitting at each level of every tree.

Regularization -- tree leaf weights:
- reg_alpha ($\alpha$) -- the importance of the $L^1$ regularizer;
- reg_lambda ($\lambda$) -- the weight of the $L^2$ regularization term;
- max_delta_step -- clips the absolute value of each leaf's score, thereby making the tree growth step more conservative.

Class balancing (not used in multiclass problems as of commit c9a73fe2a99300aec3041371675a8fa6bc6a8a72):
- scale_pos_weight -- a uniform upscale/downscale factor for the weights of positive examples ($y=+1$); useful in imbalanced binary classification problems.

Early-stopping:
- early_stopping_rounds -- the validation error on the last validation dataset needs to decrease at least every early_stopping_rounds round(s) to continue training; if None, early stopping is deactivated;
- eval_set -- validation datasets given as a list of tuples (DMatrix, name);
- evals_result -- a dictionary to store the validation results; the keys are the names of the validation datasets, and the values are dictionaries of key-value pairs: loss -- list of scores.

Examples
clf1_ = xg.XGBClassifier(n_estimators=10, max_depth=1, learning_rate=0.1, seed=seed).fit(X_train, y_train)
clf2_ = xg.XGBClassifier(n_estimators=1000, max_depth=1, learning_rate=0.1, seed=seed).fit(X_train, y_train)
clf3_ = xg.XGBClassifier(n_estimators=10, max_depth=3, learning_rate=0.1, seed=seed).fit(X_train, y_train)
clf4_ = xg.XGBClassifier(n_estimators=1000, max_depth=3, learning_rate=0.1, seed=seed).fit(X_train, y_train)
print "XGBoost (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "XGBoost (1000, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "XGBoost (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "XGBoost (1000, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
clf1_ = xg.XGBClassifier(n_estimators=10, max_depth=1, learning_rate=0.5, seed=seed).fit(X_train, y_train)
clf2_ = xg.XGBClassifier(n_estimators=1000, max_depth=1, learning_rate=0.5, seed=seed).fit(X_train, y_train)
clf3_ = xg.XGBClassifier(n_estimators=10, max_depth=3, learning_rate=0.5, seed=seed).fit(X_train, y_train)
clf4_ = xg.XGBClassifier(n_estimators=1000, max_depth=3, learning_rate=0.5, seed=seed).fit(X_train, y_train)
print "XGBoost (10, stumps) error:", 1 - clf1_.score(X_test, y_test)
print "XGBoost (1000, stumps) error:", 1 - clf2_.score(X_test, y_test)
print "XGBoost (10, 3 levels) error:", 1 - clf3_.score(X_test, y_test)
print "XGBoost (1000, 3 levels) error:", 1 - clf4_.score(X_test, y_test)
clf1_ = xg.XGBClassifier(n_estimators=1000, max_depth=1, learning_rate=0.5, seed=seed).fit(X_train, y_train, early_stopping_rounds=20, eval_set=[(X_valid_1, y_valid_1), (X_valid_2, y_valid_2),])
year_15_16/machine_learning_course/ensemble_practicum/ensemble_methods_scikit.ipynb
ivannz/study_notes
mit
<hr/>

Other methods

Stacking

Every ensemble method consists essentially of two phases:
1. population of a dictionary of base learners (models, like classification trees in AdaBoost, or regression trees in GBRT);
2. aggregation of the dictionary into a single estimator.

These phases are not necessarily separated: in Bagging and Random Forests they are (and so these can be done in parallel); in GBRT and AdaBoost they are not. In the latter, the procedure is path-dependent (serial), i.e. the dictionary is populated sequentially, so that each successive base estimator is learnt conditional on the current dictionary.

Stacking is a method which allows one to correctly construct second-level meta-features using ML models atop the first-level inputs. By correctly we mostly mean that there is little train-test leakage: the resulting meta-features, though not i.i.d., can still to a certain degree comply with the standard ML assumptions, and allow one to focus on the aggregation step of ensemble methods.

General pipeline

Let $Z = (X, y) = (x_i, y_i)_{i=1}^n$ be a dataset. The model construction and verification pipeline goes as follows:
1. Split the dataset into non-overlapping train and test datasets: $Z^{\text{train}}$ and $Z^{\text{test}}$;
2. Apply stacking to get meta-features, $\mathcal{P}^{\text{train}}$ (it is possible to include the first-level features as well);
3. Split the meta-features, $\mathcal{P}^{\text{train}}$, into train and validation sets: fit on the former, test and select models on the latter;
4. Use regularization at each stage to choose the best strategy against overfitting.

For the final prediction:
1. learn a regularized model on the whole $Z^{\text{train}}$;
2. get the meta-features, $\mathcal{P}^{\text{test}}$, on the inputs of $Z^{\text{test}}$;
3. fit a regularized aggregation model on the whole train sample of meta-features $\mathcal{P}^{\text{train}}$;
4. use the fitted aggregation model to compute the final prediction on $\mathcal{P}^{\text{test}}$.
Leave-one-out stacking

The idea is to compute the meta-feature of each example based on a base estimator learnt on the sample with that observation knocked out. Let $\hat{f}^{-i}_m$ be the $m$-th base estimator learnt on the sample $Z_{-i}$ (without observation $z_i$). Then the meta-features $(\hat{p}_{mi})_{i=1}^n$ are given by
$$ \hat{p}_{mi} = \hat{f}^{-i}_m(x_i) \,.$$

$K$-fold stacking

Leave-one-out stacking is in general computationally intensive, unless the base estimator is linear in the targets, in which case it can be done quite fast. A possible solution is inspired by the $K$-fold cross-validation technique.

Let $C_k\subset\{1,\ldots, n\}$ be the $k$-th fold in $K$-fold, and let $C_{-k}$ be the rest of the dataset $Z$: $C_{-k} = \{i\,:\,i\notin C_k\}$. $C_k$ has approximately $\frac{n}{K}$ observations. The dataset is randomly shuffled before being partitioned into $K$ folds.

Define $\hat{f}^{-k}_m$ as the $m$-th base estimator learnt on $Z^{-k}$ given by $(z_i)_{i\in C_{-k}}$. Then the meta-features are computed using
$$ \hat{p}_{mi} = \hat{f}^{-k_i}_m(x_i) \,, $$
where $k_i$ is the unique index $k$ in the $K$-fold such that $i\in C_k$. Basically, we use the data outside the $k$-th fold, $C_{-k}$, to construct the meta-features inside the $k$-th fold, $C_k$.

Using the meta-features

For example, if we want to compute a linear combination of the regression estimators, we must solve the following optimization problem (unrestricted LS):
$$ \min_\beta \sum_{i=1}^n \bigl(y_i - \beta'\hat{p}_i \bigr)^2\,, $$
where $\hat{p}_i = (\hat{p}_{mi})_{m=1}^M$ and $\beta, \hat{p}_i \in \mathbb{R}^{M\times1}$ for all $i$. If a convex combination is required ($\beta_m\geq 0$ and $\sum_{m=1}^M\beta_m = 1$), one solves a constrained optimization problem. If pruning is desirable, then one should use either the lasso ($L_1$ regularization) or subset-selection methods.

Usage

Below is a simple $K$-fold stacking procedure.
It estimates each model on the $K-1$ remaining folds and predicts (with the specified method) on the $K$-th fold.
from sklearn.base import clone
from sklearn.cross_validation import KFold

def kfold_stack(estimators, X, y=None, predict_method="predict", n_folds=3,
                shuffle=False, random_state=None, return_map=False):
    """Splits the dataset into `n_folds` (K) consecutive folds (without
    shuffling by default). Predictions are made on each fold while the
    K - 1 remaining folds form the training set for the predictor.

    Parameters
    ----------
    estimators : list of estimators
        The dictionary of estimators used to construct meta-features on
        the dataset (X, y). A cloned copy of each estimator is fitted on
        the remaining data of each fold.

    X : {array-like, sparse matrix}, shape = [n_samples, n_features]
        Training vectors, where n_samples is the number of samples and
        n_features is the number of features.

    y : array-like, shape = [n_samples], optional
        Target values.

    predict_method : string, default="predict"
        The method of each estimator to be used for predicting the
        meta-features.

    n_folds : int, default=3
        Number of folds. Must be at least 2.

    shuffle : boolean, optional
        Whether to shuffle the data before splitting into batches.

    random_state : None, int or RandomState
        When shuffle=True, pseudo-random number generator state used for
        shuffling. If None, use default numpy RNG for shuffling.

    return_map : boolean, default=False
        Whether to also return a map identifying the estimator each
        meta-feature column came from.

    Returns
    ----------
    meta : array-like, shape = [n_samples, ...]
        Computed meta-features of each estimator.

    map : array-like
        The map, identifying which estimator each column of `meta` came
        from.
""" stacked_, index_ = list(), list() folds_ = KFold(X.shape[0], n_folds=n_folds, shuffle=shuffle, random_state=random_state) for rest_, fold_ in folds_: fitted_ = [clone(est_).fit(X[rest_], y[rest_]) for est_ in estimators] predicted_ = [getattr(fit_, predict_method)(X[fold_]) for fit_ in fitted_] stacked_.append(np.stack(predicted_, axis=1)) index_.append(fold_) stacked_ = np.concatenate(stacked_, axis=0) meta_ = stacked_[np.concatenate(index_, axis=0)] if not return_map: return meta_ map_ = np.repeat(np.arange(len(estimators)), [pred_.shape[1] for pred_ in predicted_]) return meta_, map_
Examples

Combining base classifiers using Logistic Regression is a typical example of how first-level features $x\in \mathcal{X}$ are transformed by $\hat{f}_m:\mathcal{X}\mapsto \mathbb{R}$ into second-level meta-features $(\hat{f}_m(x))_{m=1}^M \in \mathbb{R}^M$, which are finally fed into a logistic regression that makes the ultimate prediction. Here $K$-fold stacking allows proper estimation of the second-level model for a classification task.
seed = random_state.randint(0x7FFFFFFF)
Define the first-level predictors.
from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC estimators_ = [ RandomForestClassifier(n_estimators=200, max_features=0.5, n_jobs=-1, random_state=seed), GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.75, random_state=seed), BaggingClassifier(n_estimators=200, base_estimator=DecisionTreeClassifier(max_depth=None), max_samples=0.5, n_jobs=-1, random_state=seed), xg.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.5, nthread=-1, seed=seed), ## Both SVM and AdaBoost (on stumps) are very good here SVC(kernel="rbf", C=1.0, probability=True, gamma=1.0), AdaBoostClassifier(n_estimators=200, base_estimator=DecisionTreeClassifier(max_depth=1), random_state=seed), ] estimator_names_ = [est_.__class__.__name__ for est_ in estimators_]
Create meta-features for the train set: using $K$-fold stacking, estimate the class-1 probabilities $\hat{p}_i = (\hat{p}_{mi})_{m=1}^M = (\hat{f}^{-k_i}_m(x_i))_{m=1}^M$ for every $i=1,\ldots, n$.
meta_train_ = kfold_stack(estimators_, X_train, y_train, n_folds=5, predict_method="predict_proba")[..., 1]
Now, using the whole train set, create the test-set meta-features: $\hat{p}_j = (\hat{f}_m(x_j))_{m=1}^M$ for $j=1,\ldots, n_{\text{test}}$. Each $\hat{f}_m$ is estimated on the whole train set.
fitted_ = [clone(est_).fit(X_train, y_train) for est_ in estimators_] meta_test_ = np.stack([fit_.predict_proba(X_test) for fit_ in fitted_], axis=1)[..., 1]
The prediction error of each individual classifier (trained on the whole train dataset).
base_scores_ = pd.Series([1 - fit_.score(X_test, y_test) for fit_ in fitted_], index=estimator_names_) base_scores_
Now, using $5$-fold cross-validation on the train dataset $(\hat{p}_i, y_i)_{i=1}^n$, find the best $L_1$ regularization coefficient $C$.
from sklearn.grid_search import GridSearchCV grid_cv_ = GridSearchCV(LogisticRegression(penalty="l1"), param_grid=dict(C=np.logspace(-3, 3, num=7)), n_jobs=-1, cv=5).fit(meta_train_, y_train) log_ = grid_cv_.best_estimator_ grid_cv_.grid_scores_
The weights chosen by logistic regression are:
from math import exp print "Intercept:", log_.intercept_, "\nBase probability:", 1.0/(1+exp(-log_.intercept_)) pd.Series(log_.coef_[0], index=estimator_names_)
Let's see how well the final model works on the test set:
print "Logistic Regression (l1) error:", 1 - log_.score(meta_test_, y_test)
and the best model
log_
<hr/>

Voting Classifier

This is a very basic method of constructing an aggregated classifier from a finite dictionary. Let $\mathcal{V}$ be the set of classifiers (voters), with each classifier's class probabilities given by $\hat{f}_v:\mathcal{X}\mapsto[0,1]^K$ and prediction $\hat{g}_v(x) = \mathtt{MAJ}(\hat{f}_v(x))$. The majority vote over $K$ candidates with weights $(w_k)_{k=1}^K\in \mathbb{R}$ is defined as $$ \mathtt{MAJ}(w) = \mathop{\text{argmax}}_{k=1,\ldots, K} w_k \,. $$ Hard voting collects the label prediction of each voter, counts the voting proportions, and then predicts the label with the most votes. Mathematically, the following aggregation is used: $$ \hat{g}^{\text{hard}}_\mathcal{V}(x) = \mathtt{MAJ}\Bigl( W^{-1} \sum_{v\in \mathcal{V}} w_v e_{\hat{g}_v(x)} \Bigr) \,, $$ where $e_k$ is the $k$-th unit vector in $\{0,1\}^{K\times 1}$, and $W = \sum_{v\in \mathcal{V}} w_v$. Soft voting uses the class-probability functions directly: it computes the weighted average probability of each class over all voters, and then selects the class with the highest posterior probability. Namely, $$ \hat{g}^{\text{soft}}_\mathcal{V}(x) = \mathtt{MAJ}\bigl( W^{-1} \sum_{v\in \mathcal{V}} w_v \hat{f}_v(x) \bigr) \,. $$ As in Bagging, if the base classifiers are well calibrated, then hard voting will overestimate probabilities.

Usage
from sklearn.ensemble import VotingClassifier
VotingClassifier options:
- estimators -- the list of classifiers;
- voting -- vote aggregation strategy:
    * "hard" -- use predicted class labels for majority voting;
    * "soft" -- use sums of the predicted probabilities to determine the most likely class;
- weights -- weights the occurrences of predicted class labels (hard voting) or the class probabilities while averaging (soft voting).

Examples

Combine the estimators from the stacking example
clf1_ = VotingClassifier(list(zip(estimator_names_, estimators_)), voting="hard", weights=None).fit(X_train, y_train) clf2_ = VotingClassifier(list(zip(estimator_names_, estimators_)), voting="soft", weights=None).fit(X_train, y_train) print "Hard voting classifier error:", 1 - clf1_.score(X_test, y_test) print "Soft voting classifier error:", 1 - clf2_.score(X_test, y_test)
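The two aggregation rules can also be reproduced by hand with NumPy; the probability table below is made up to show a case where hard and soft voting disagree, because one confident voter is outvoted by two weakly confident ones:

```python
import numpy as np

# Hypothetical class-probability outputs f_v(x) of 3 voters for K = 2 classes.
probs = np.array([[0.9, 0.1],    # one confident voter for class 0
                  [0.4, 0.6],    # two mildly confident voters for class 1
                  [0.4, 0.6]])
w = np.ones(3)                   # equal voter weights w_v
W = w.sum()

# Soft voting: argmax of the weighted average class probabilities.
soft = int(np.argmax(w.dot(probs) / W))       # average is [0.567, 0.433] -> class 0

# Hard voting: each voter casts its argmax label, then a weighted majority.
votes = np.argmax(probs, axis=1)              # per-voter labels: [0, 1, 1]
counts = np.bincount(votes, weights=w, minlength=2) / W
hard = int(np.argmax(counts))                 # two votes to one -> class 1
```

The disagreement illustrates why soft voting is usually preferred when the voters' probabilities are well calibrated.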
Let's use LASSO Least Angle Regression (LARS, HTF p. 73) to select the weights of the base classifiers.
from sklearn.linear_model import Lars lars_ = Lars(fit_intercept=False, positive=True).fit(meta_train_, y_train) weights_ = lars_.coef_ pd.Series(weights_, index=estimator_names_)
Show the $R^2$ of LARS and the error rates of the base classifiers.
print "LARS prediction R2: %.5g"%(lars_.score(meta_test_, y_test),) base_scores_
Let's see if there is improvement.
clf1_ = VotingClassifier(list(zip(estimator_names_, estimators_)), voting="soft", weights=weights_.tolist()).fit(X_train, y_train) print "Soft voting ensemble with LARS weights:", 1 - clf1_.score(X_test, y_test)
Indeed, this illustrates that clever selection of classifier weights might be profitable.

Another example of the Voting Classifier (from the Scikit guide)
from sklearn.datasets import make_gaussian_quantiles def scikit_example(n_samples, random_state=None): X1, y1 = make_gaussian_quantiles(cov=2., n_samples=int(0.4*n_samples), n_features=2, n_classes=2, random_state=random_state) X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5, n_samples=int(0.6*n_samples), n_features=2, n_classes=2, random_state=random_state) return np.concatenate((X1, X2)), np.concatenate((y1, 1 - y2))
Get a train set, a test set, and a $2$-d mesh for plotting.
from sklearn.cross_validation import train_test_split X2, y2 = scikit_example(n_samples=1500, random_state=random_state) X2_train, X2_test, y2_train, y2_test = \ train_test_split(X2, y2, test_size=1000, random_state=random_state) min_, max_ = np.min(X2, axis=0) - 1, np.max(X2, axis=0) + 1 xx, yy = np.meshgrid(np.linspace(min_[0], max_[0], num=51), np.linspace(min_[1], max_[1], num=51))
Make a dictionary of simple classifiers
from sklearn.neighbors import KNeighborsClassifier classifiers_ = [ ("AdaBoost (100) DTree (3 levels)", AdaBoostClassifier(n_estimators=100, base_estimator=DecisionTreeClassifier(max_depth=3), random_state=random_state)), ("KNN (k=3)", KNeighborsClassifier(n_neighbors=3)), ("Kernel SVM", SVC(kernel='rbf', C=1.0, gamma=1.0, probability=True)),] estimators_ = classifiers_ + [("Soft-voting ensemble", VotingClassifier(estimators=classifiers_, voting="soft", weights=[2,1,2])),]
Show the decision boundary.
from itertools import product fig, axes = plt.subplots(2, 2, figsize=(12, 10)) for i, (name_, clf_) in zip(product([0, 1], [0, 1]), estimators_): clf_.fit(X2_train, y2_train) prob_ = clf_.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1].reshape(xx.shape) axes[i[0], i[1]].contourf(xx, yy, prob_, alpha=0.4, cmap=plt.cm.coolwarm_r, levels=np.linspace(0,1, num=51), lw=0) axes[i[0], i[1]].scatter(X2_train[:, 0], X2_train[:, 1], c=y2_train, alpha=0.8, lw=0) axes[i[0], i[1]].set_title(name_) plt.show()
Let's see if this simple soft-voting ensemble improved the test error.
for name_, clf_ in estimators_: print name_, " error:", 1-clf_.score(X2_test, y2_test)
<hr/>

Example from HTF pp. 339 - 340

Now let's inspect the test error as a function of the size of the ensemble.
stump_ = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train) t224_ = DecisionTreeClassifier(max_depth=None, max_leaf_nodes=224).fit(X_train, y_train) ada_ = AdaBoostClassifier(n_estimators=400, random_state=random_state).fit(X_train, y_train) bag_ = BaggingClassifier(n_estimators=400, random_state=random_state, n_jobs=-1).fit(X_train, y_train) rdf_ = RandomForestClassifier(n_estimators=400, random_state=random_state, n_jobs=-1).fit(X_train, y_train)
Get the prediction as a function of the number of members in the ensemble.
def get_staged_accuracy(ensemble, X, y): prob_ = np.stack([est_.predict_proba(X) for est_ in ensemble.estimators_], axis=1).astype(float) pred_ = np.cumsum(prob_[..., 1] > 0.5, axis=1).astype(float) pred_ /= 1 + np.arange(ensemble.n_estimators).reshape((1, -1)) return np.mean((pred_ > .5).astype(int) == y[:, np.newaxis], axis=0) bag_scores_ = get_staged_accuracy(bag_, X_test, y_test) rdf_scores_ = get_staged_accuracy(rdf_, X_test, y_test) ada_scores_ = np.array(list(ada_.staged_score(X_test, y_test)))
Plot the test error.
fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111) ax.set_ylim(0, 0.50) ; ax.set_xlim(-10, ada_.n_estimators) ax.plot(1+np.arange(ada_.n_estimators), 1-ada_scores_, c="k", label="AdaBoost") ax.plot(1+np.arange(bag_.n_estimators), 1-bag_scores_, c="m", label="Bagged DT") ax.plot(1+np.arange(bag_.n_estimators), 1-rdf_scores_, c="c", label="RF") ax.axhline(y=1 - stump_.score(X_test, y_test), c="r", linestyle="--", label="stump") ax.axhline(y=1 - t224_.score(X_test, y_test), c="b", linestyle="--", label="DT $J=224$") ax.legend(loc="best") ax.set_xlabel("Iterations") ax.set_ylabel("Test error")
u > 0
def f1_numpy(r, u, c): return (r*c**2)/2 + (u*c**4)/4 n = 2 T = np.linspace(-n,n,101)
Smectic/SmAtoSmC.ipynb
brettavedisian/Liquid-Crystals-Summer-2015
mit
f vs c

r(T) > 0
fig1 = plt.figure(figsize=(11,8)) ax1 = fig1.gca() plt.plot(T, f1_numpy(1, 1, T)) plt.xlabel('c', fontsize=14) plt.ylabel('f', rotation='horizontal',verticalalignment='center', fontsize=14) ax1.yaxis.set_label_coords(0.53,1) ax1.xaxis.set_label_coords(1.03,0.22) ax1.spines['left'].set_position('zero') ax1.spines['right'].set_color('none') ax1.spines['top'].set_color('none') ax1.spines['bottom'].set_position('zero') ax1.xaxis.set_ticks_position('bottom') ax1.yaxis.set_ticks_position('left') plt.ylim(-0.5,2) yticks1 = ax1.yaxis.get_major_ticks() yticks1[1].label1.set_visible(False);
r(T) < 0
fig2 = plt.figure(figsize=(11,8)) ax2 = fig2.gca() # ax2.get_yticklabels()[0].set_visible(False) plt.plot(T, f1_numpy(-1, 1, T)) plt.xlabel('c', fontsize=16) plt.ylabel('f', rotation='horizontal',verticalalignment='center', fontsize=16) ax2.yaxis.set_label_coords(0.53,1) ax2.xaxis.set_label_coords(1.03,0.22) ax2.spines['left'].set_position('zero') ax2.spines['bottom'].set_position('zero') ax2.spines['right'].set_color('none') ax2.spines['top'].set_color('none') ax2.xaxis.set_ticks_position('bottom') ax2.yaxis.set_ticks_position('left') yticks2 = ax2.yaxis.get_major_ticks() yticks2[1].label1.set_visible(False);
c vs T (or r(T)) Solution for $c_{min}$: $c_{min} = \pm\sqrt{\dfrac{-r(T)}{u}}$
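As a quick symbolic sanity check of this minimizer (a sketch using SymPy, which is not otherwise used in this notebook): setting $df/dc = 0$ for $f = \frac{r}{2}c^2 + \frac{u}{4}c^4$ gives $c = 0$ or $c = \pm\sqrt{-r/u}$.

```python
import sympy

r, u, c = sympy.symbols('r u c', real=True)
f = r*c**2/2 + u*c**4/4        # free energy for u > 0
dfdc = sympy.diff(f, c)        # r*c + u*c**3

# c = 0 is always a critical point; spot-check the nontrivial minimizer
# c = sqrt(-r/u) at the illustrative values r = -2, u = 1 (any r < 0 < u works):
check = dfdc.subs(c, sympy.sqrt(-r/u)).subs({r: -2, u: 1})
```

`check` evaluates to zero, confirming that $\pm\sqrt{-r/u}$ solves the stationarity condition when $r < 0$.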
def c1(r,u): a = [] for i in r: if i < 0: a.append(np.sqrt(-i/u)) else: a.append(0) return np.array(a) x = np.linspace(0,2,100) plt.figure(figsize=(11,8)) plt.plot(x, c1(x-0.5,1),label='+') plt.plot(x, -c1(x-0.5,1),label='-') plt.xlabel('T',fontsize=16) plt.ylabel('c',fontsize=16,rotation='horizontal') plt.gca().spines['right'].set_color('none') plt.gca().spines['top'].set_color('none') plt.gca().spines['bottom'].set_position('zero') plt.gca().xaxis.set_ticks_position('bottom') plt.gca().yaxis.set_ticks_position('left') plt.legend(fontsize=18) plt.gca().annotate('$\mathregular{T_{C}}$', xy=(0.51, 0.01), xytext=(0.6, 0.15), arrowprops=dict(facecolor='black',width=1,headwidth=7),fontsize=16) xticks3 = plt.gca().xaxis.get_major_ticks() xticks3[0].label1.set_visible(False) plt.gca().xaxis.set_label_coords(1.03,0.515) plt.gca().yaxis.set_label_coords(0.025,0.99);
u < 0
def f2_numpy(r, u, v, c): return (r*c**2)/2 - (abs(u)*c**4)/4 + (abs(v)*c**6)/6
Large r
fig3 = plt.figure(figsize=(11,8)) ax3 = fig3.gca() plt.plot(T, f2_numpy(1, -1, 1, T)) plt.xlabel('c', fontsize=14) plt.ylabel('f', rotation='horizontal',verticalalignment='center', fontsize=14) ax3.yaxis.set_label_coords(0.53,1) ax3.xaxis.set_label_coords(1.03,0.22) ax3.spines['left'].set_position('zero') ax3.spines['right'].set_color('none') ax3.spines['top'].set_color('none') ax3.spines['bottom'].set_position('zero') ax3.xaxis.set_ticks_position('bottom') ax3.yaxis.set_ticks_position('left') plt.ylim(-0.05,0.2), plt.xlim(-1.5,1.5) yticks3 = ax3.yaxis.get_major_ticks() yticks3[1].label1.set_visible(False);
Small r
fig4 = plt.figure(figsize=(11,8)) ax4 = fig4.gca() plt.plot(T, f2_numpy(0.23, -1, 1, T)) plt.xlabel('c', fontsize=14) plt.ylabel('f', rotation='horizontal',verticalalignment='center', fontsize=14) ax4.yaxis.set_label_coords(0.53,1) ax4.xaxis.set_label_coords(1.03,0.22) ax4.spines['left'].set_position('zero') ax4.spines['bottom'].set_position('zero') ax4.spines['right'].set_color('none') ax4.spines['top'].set_color('none') ax4.xaxis.set_ticks_position('bottom') ax4.yaxis.set_ticks_position('left') plt.ylim(-0.05,0.2), plt.xlim(-1.5,1.5) yticks4 = ax4.yaxis.get_major_ticks() yticks4[1].label1.set_visible(False);
Smaller r
fig5 = plt.figure(figsize=(11,8)) ax5 = fig5.gca() plt.plot(T, f2_numpy(0.187302, -1, 1, T)) plt.xlabel('c', fontsize=14) plt.ylabel('f', rotation='horizontal',verticalalignment='center', fontsize=14) ax5.yaxis.set_label_coords(0.53,1) ax5.xaxis.set_label_coords(1.03,0.22) ax5.spines['left'].set_position('zero') ax5.spines['bottom'].set_position('zero') ax5.spines['right'].set_color('none') ax5.spines['top'].set_color('none') ax5.xaxis.set_ticks_position('bottom') ax5.yaxis.set_ticks_position('left') plt.ylim(-0.05,0.2), plt.xlim(-1.5,1.5) yticks5 = ax5.yaxis.get_major_ticks() yticks5[1].label1.set_visible(False);
Even smaller r
fig6 = plt.figure(figsize=(11,8)) ax6 = fig6.gca() plt.plot(T, f2_numpy(0.15, -1, 1, T)) plt.xlabel('c', fontsize=14) plt.ylabel('f', rotation='horizontal',verticalalignment='center', fontsize=14) ax6.yaxis.set_label_coords(0.53,1) ax6.xaxis.set_label_coords(1.03,0.22) ax6.spines['left'].set_position('zero') ax6.spines['bottom'].set_position('zero') ax6.spines['right'].set_color('none') ax6.spines['top'].set_color('none') ax6.xaxis.set_ticks_position('bottom') ax6.yaxis.set_ticks_position('left') plt.ylim(-0.05,0.2), plt.xlim(-1.5,1.5) yticks6 = ax6.yaxis.get_major_ticks() yticks6[1].label1.set_visible(False);
c vs r(T) (general) The solutions for $c_{min}$: $c_{min} = \pm\sqrt{\dfrac{|u| \pm \sqrt{|u|^{2} - 4r(T)|v|}}{2|v|}}$ Conditions for the following cell: $c_{min,+} = \pm\sqrt{\dfrac{|u| + \sqrt{|u|^{2} - 4r(T)|v|}}{2|v|}}$ for $\sqrt{|u| + \sqrt{|u|^{2} - 4r(T)|v|}}, \sqrt{|u|^{2} - 4r(T)|v|} > 0$ $c_{min,-} = \pm\sqrt{\dfrac{|u| - \sqrt{|u|^{2} - 4r(T)|v|}}{2|v|}}$ for $\sqrt{|u| - \sqrt{|u|^{2} - 4r(T)|v|}}, \sqrt{|u|^{2} - 4r(T)|v|} > 0$ $c_{min} = 0$ otherwise
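The two branches of $c_{min}^2$ can be verified symbolically (a SymPy sketch; here the symbols $u$ and $v$ stand in for $|u|$ and $|v|$): the nonzero critical points of $f$ satisfy $r - |u|c^2 + |v|c^4 = 0$, a quadratic in $c^2$.

```python
import sympy

r = sympy.Symbol('r', real=True)
u, v = sympy.symbols('u v', positive=True)   # stand in for |u| and |v|

# Nonzero critical points of f = r*c**2/2 - u*c**4/4 + v*c**6/6 satisfy
# r - u*c**2 + v*c**4 = 0, a quadratic in s = c**2:
s = sympy.Symbol('s')
roots = sympy.solve(r - u*s + v*s**2, s)     # the two branches of c_min**2
```

Each root should reproduce $(|u| \pm \sqrt{|u|^2 - 4r|v|})/(2|v|)$ stated above.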
#might not be the best code to use def c2(r, u, v): a = [] for i in r: if (abs(u)-np.sqrt(abs(u)**2-4*i*abs(v)) > 0) and (np.sqrt(abs(u)**2-4*i*abs(v)) > 0): a.append(np.sqrt((abs(u)-np.sqrt(abs(u)**2-4*i*abs(v)))/(2*abs(v)))) elif (abs(u)+np.sqrt(abs(u)**2-4*i*abs(v)) > 0) and (np.sqrt(abs(u)**2-4*i*abs(v)) > 0): a.append(np.sqrt((abs(u)+np.sqrt(abs(u)**2-4*i*abs(v)))/(2*abs(v)))) else: a.append(np.NaN) return np.array(a) s = np.linspace(-1,5,1000) plt.figure(figsize=(11,8)) plt.scatter(s, c2(s,-1,1),label='+',facecolors='none', edgecolors='b') plt.scatter(s, -c2(s,-1,1),label='-',facecolors='none', edgecolors='g') plt.xlabel('T',fontsize=18) plt.ylabel('c',fontsize=18,rotation='horizontal') plt.xlim(-1,0.5) plt.gca().spines['right'].set_color('none') plt.gca().spines['top'].set_color('none') plt.gca().spines['bottom'].set_position('zero') plt.gca().xaxis.set_ticks_position('bottom') plt.gca().yaxis.set_ticks_position('left') plt.legend(fontsize=18) xticks7 = plt.gca().xaxis.get_major_ticks() xticks7[0].label1.set_visible(False) plt.gca().xaxis.set_label_coords(1.05,0.52) plt.gca().yaxis.set_label_coords(0.03,0.99);
c vs r(T) (specific) $r_{1} = \dfrac{|u|^{2}}{4|v|}$ for $c_{+} = \sqrt{\dfrac{|u| + \sqrt{|u|^{2} - 4r|v|}}{2|v|}}$ $r_{2} = 0$ for $c_{-} = \sqrt{\dfrac{|u| - \sqrt{|u|^{2} - 4r|v|}}{2|v|}}$ after solving for $\dfrac{dc}{dr} = \infty$ for both cases.
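The plot below also marks $r_T = \frac{3|u|^2}{16|v|}$, the first-order transition point where the nontrivial minimum touches $f = 0$. It can be derived by solving $f = 0$ and $df/dc = 0$ simultaneously; a SymPy sketch (with $u$, $v$ again standing in for $|u|$, $|v|$ and $s = c^2$):

```python
import sympy

r, s = sympy.symbols('r s')                  # s stands for c**2
u, v = sympy.symbols('u v', positive=True)   # |u| and |v|

# At the first-order transition the nontrivial minimum touches f = 0:
eq1 = r/2 - u*s/4 + v*s**2/6                 # f/c**2 = 0
eq2 = r - u*s + v*s**2                       # f'(c)/c = 0
sols = sympy.solve([eq1, eq2], [r, s], dict=True)
# besides the trivial r = 0, s = 0, this yields r_T = 3*u**2/(16*v)
# at c**2 = 3*u/(4*v)
```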
plt.figure(figsize=(11,8)) plt.plot(s,np.sqrt((abs(-1)+np.sqrt(abs(-1)**2-4*s*abs(1)))/(2*abs(1))),c='b',label='$\mathregular{c_{min,+}}$') plt.plot(s,np.sqrt((abs(-1)-np.sqrt(abs(-1)**2-4*s*abs(1)))/(2*abs(1))),c='g',label='$\mathregular{c_{min,-}}$') # plt.plot(s,-np.sqrt((abs(-1)+np.sqrt(abs(-1)**2-4*s*abs(1)))/(2*abs(1))),c='b') # plt.plot(s,-np.sqrt((abs(-1)-np.sqrt(abs(-1)**2-4*s*abs(1)))/(2*abs(1))),c='g') plt.plot((1/4)*np.ones(10),np.linspace(-1.5,1.5,10),'--k') plt.plot(np.zeros(10),np.linspace(-1.5,1.5,10),'--k') plt.plot((3/16)*np.ones(10),np.linspace(-1.5,1.5,10),'--k') plt.xlabel('r',fontsize=18) plt.ylabel('c',fontsize=18,rotation='horizontal') plt.xlim(-0.5,0.5), plt.ylim(0,1.5) plt.gca().spines['right'].set_color('none') plt.gca().spines['top'].set_color('none') plt.gca().spines['bottom'].set_position('zero') plt.gca().xaxis.set_ticks_position('bottom') plt.gca().yaxis.set_ticks_position('left') plt.legend(fontsize=18) xticks8 = plt.gca().xaxis.get_major_ticks() xticks8[0].label1.set_visible(False) plt.gca().xaxis.set_label_coords(1.05,0.01) plt.gca().yaxis.set_label_coords(0.03,0.99) plt.gca().annotate('$\mathregular{r_{2} = \ 0}$', xy=(-0.01, 0.5), xytext=(-0.2, 0.5), arrowprops=dict(facecolor='black',width=1,headwidth=4),fontsize=16) plt.gca().annotate('$\mathregular{r_{1} = \ \\frac{|u|^{2}}{4|v|}}$', xy=(0.26, 0.5), xytext=(0.36, 0.5), arrowprops=dict(facecolor='black',width=1,headwidth=4),fontsize=16) plt.gca().annotate('$\mathregular{r_{T} = \ \\frac{3|u|^{2}}{16|v|}}$', xy=(0.18, 1.2), xytext=(0.01, 1.3), arrowprops=dict(facecolor='black',width=1,headwidth=4),fontsize=16);
Here are some examples of basic symbol operations:
x = sympy.Symbol('x') y = x x, x*2+1, y, type(y), x == y try: x*y+z except NameError as e: print(e) sympy.symbols('x5:10'), sympy.symbols('x:z') X = sympy.numbered_symbols('variable') [ next(X) for i in range(5) ]
files/Process.ipynb
jimaples/jimaples.github.io
mit
SymPy also handles expressions:
e = sympy.sympify('x*(x-1)+(x-1)') e, sympy.factor(e), sympy.expand(e)
But most work of interest is more than single expressions, so here is a helper function to handle systems of equations.
from process import parseExpr help(parseExpr) import inspect print(inspect.getsource(parseExpr))
Example Use Case

Let's use a simple example to explore the additional functionality. Performing a 2-D rotation involves multiple dependent variables, independent variables, and functions.

$x' = x \cos \theta - y \sin \theta$

$y' = x \sin \theta + y \cos \theta$
inputs='x y theta' outputs="x' y'" expr=''' x' = x*cos(theta) - y*sin(theta) y' = x*sin(theta) + y*cos(theta) ''' ins = sympy.symbols(inputs) outs = sympy.symbols(outputs) eqn = dict(parseExpr(expr)) # No quote marks, the dictionary keys are SymPy symbols ins, outs, eqn
Expression Trees

SymPy maintains a tree for every expression. Everything in SymPy has .args and .func attributes that allow the expression (at that point in the tree) to be reconstructed. The .func attribute is essentially the same as calling type and specifies whether the node is an add, a multiply, a cosine, some other function, or a symbol. As you might expect, the .args attribute of leaves is empty.
expr_inputs = set() expr_functs = set() for arg in sympy.preorder_traversal(eqn[outs[0]]): print(arg.func, '\t', arg, '\t', arg.args) if arg.is_Symbol: expr_inputs.add(arg) elif arg.is_Function: expr_functs.add(arg.func) expr_inputs, expr_functs
Adding Functionality Before we go on, let's create a class around parseExpr so we can add object-oriented functionality.
from process import Block print(inspect.getdoc(Block)) b = Block(expr, '2-D Rotate', inputs, outputs) # spoiler alert! print(inspect.getsource(Block.__init__))
Convert SymPy equations to text
print('\n'.join( str(k)+' = '+sympy.pretty(v) for k,v in eqn.items() )) print() print('\n'.join( str(k)+' = '+str(v) for k,v in eqn.items() ))
So that our Block instance can do the same thing, it needs to have pretty and __str__ methods defined. A __repr__ method could also be used to return a separate representation of the object.
b.pretty()
print(b)

print(inspect.getsource(b.pretty))
print(inspect.getsource(b.__str__))
Convert SymPy equations to LaTeX

Strings are well and good, but don't quite cut it for publications and presentations.

Links: Top Intro Text LaTeX Solver Evaluating Designs Help
# Generate a LaTeX string for the Jupyter notebook to render
print(' \\\\\n'.join([str(k) + ' = ' + sympy.latex(v) for k, v in eqn.items()]))

%%latex
$ x' = x \cos{\left (\theta \right )} - y \sin{\left (\theta \right )} \\
y' = x \sin{\left (\theta \right )} + y \cos{\left (\theta \right )} $
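sympy.latex is an ordinary function, so the string it produces can be inspected directly (exact spacing in the output varies slightly between SymPy versions):

```python
import sympy

theta = sympy.Symbol('theta')
tex = sympy.latex(sympy.cos(theta))
print(tex)  # something like \cos{\left(\theta \right)}

# The string contains LaTeX commands, not Python source
assert r'\cos' in tex and r'\theta' in tex
```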
For our Block instance, the latex function doesn't need any arguments, so it can be handled as an attribute.
print(b.latex)

%%latex
$ \underline{\verb;Block: 2-D Rotate;} \\
x' = x \cos{\left (\theta \right )} - y \sin{\left (\theta \right )} \\
y' = x \sin{\left (\theta \right )} + y \cos{\left (\theta \right )} $

f = inspect.getsource(Block).split('def ')
for i, s in enumerate(f):
    if s.startswith('latex'):
        # grab the last 2 lines from the previous code block, suppress the final newline
        print('\n'.join(f[i-1].rsplit('\n', 2)[-2:]), end="")
        print('def ' + s.strip())
As a property, the latex function above is implicitly called instead of returning the function itself. Attempting to use inspect.getsource results in a TypeError since the LaTeX output isn't source code.
try:
    inspect.getsource(getattr(Block, 'latex'))
except TypeError as e:
    print(type(e), ' : ', e)
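A toy stand-in makes this behavior easy to reproduce (MiniBlock is hypothetical, not part of the process module): accessing a property returns its computed value — here a string — and inspect.getsource on that string raises the same TypeError:

```python
import inspect
import sympy

class MiniBlock:
    """Hypothetical stand-in for Block, just to show property behavior."""
    def __init__(self, expr):
        self.expr = sympy.sympify(expr)

    @property
    def latex(self):
        # Accessing .latex runs this method and returns a string
        return '$' + sympy.latex(self.expr) + '$'

mb = MiniBlock('x**2/2')
print(mb.latex)

# The property returned a string, and strings have no source code
try:
    inspect.getsource(mb.latex)
    caught = False
except TypeError:
    caught = True
assert caught
```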
We've seen a couple SymPy output formats. The init_printing function provides a lot of additional control over how symbols and expressions are shown, including LaTeX.
b.eqn

sympy.init_printing(use_latex=True)
#sympy.init_printing(use_latex=False)
b.eqn

sympy.Matrix(b.eqn)

help(sympy.init_printing)
Rearranging Equations

SymPy can also work with sets of equations (sympy.Eq instances) to capture intermediate values or to solve equations in terms of desired variables. Block can catch up later.

Links: Top Intro Text LaTeX Solver Evaluating Designs Help
inputs = 'x y theta'
outputs = "x' y'"
expr = '''
x' = x*c - y*s
y' = x*s + y*c
c = cos(theta)
s = sin(theta)
'''

ins2 = sympy.symbols(inputs)
outs2 = sympy.symbols(outputs)
hidden2 = sympy.symbols('c s')
eqn2 = tuple(sympy.Eq(k, v) for k, v in parseExpr(expr))
ins2, outs2, eqn2

sympy.solve(eqn2, outs2 + hidden2)

help(sympy.solve)
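The same solve call can be reproduced standalone, with the intermediate symbols written out by hand. For this system, linear in the unknowns, SymPy returns a dictionary mapping each unknown to an expression in the remaining symbols — so the intermediates c and s get eliminated:

```python
import sympy

x, y, theta = sympy.symbols('x y theta')
xp, yp, c, s = sympy.symbols("x' y' c s")

eqs = (sympy.Eq(xp, x*c - y*s),
       sympy.Eq(yp, x*s + y*c),
       sympy.Eq(c, sympy.cos(theta)),
       sympy.Eq(s, sympy.sin(theta)))

# Solving for all four unknowns substitutes the intermediates away
sol = sympy.solve(eqs, (xp, yp, c, s))
assert sympy.simplify(sol[xp] - (x*sympy.cos(theta) - y*sympy.sin(theta))) == 0
```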
Evaluating Expressions

SymPy can evaluate equations symbolically (the .subs function) or numerically (the .evalf function) at a specified level of precision.

Links: Top Intro Text LaTeX Solver Evaluating Designs Help
x = sympy.Symbol("x'")
eqn[x].subs(zip(ins, (1, 1, 45)))
eqn[x].evalf(4, subs=dict(zip(ins, (1, 1, 45))))

# sanity check with NumPy
import numpy as np
-1*np.sin(45) + np.cos(45)

e = sympy.sympify('sqrt(x)')
print(e.evalf(subs={'x': 2}))
print(e.evalf(60, subs={'x': 2}))

print(sympy.pi.evalf())
print(sympy.pi.evalf(100))

help(eqn[x].subs)
help(eqn[x].evalf)
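The distinction is worth pinning down with a minimal self-contained example: .subs stays exact and symbolic, while .evalf produces a numeric value at the requested number of digits:

```python
import sympy

xs = sympy.Symbol('x')
expr = sympy.sqrt(xs) + sympy.pi

# Symbolic substitution keeps the result exact: sqrt(2) + pi
exact = expr.subs(xs, 2)
assert exact == sympy.sqrt(2) + sympy.pi

# evalf returns a floating-point value at the requested precision
approx = expr.evalf(20, subs={xs: 2})
assert abs(float(approx) - (2**0.5 + 3.141592653589793)) < 1e-12
```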
Compiling Expressions

For efficiency, sympy.lambdify is preferred for numerical analysis. It maps symbolic functions onto numerical implementations from math, mpmath, or numpy. Since these library functions are compiled Python, C, or even Fortran, they are significantly faster than sympy.evalf.
rad = np.linspace(0, np.pi, 8+1)
f = sympy.lambdify(ins, eqn[x], 'numpy')
%timeit f(1, 0, rad)

%%timeit
for i in rad:
    # evalf doesn't support arrays!
    eqn[x].evalf(subs={'x': 1.0, 'y': 0.0, 'theta': i})
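A self-contained version of the comparison (the expression written out by hand, matching the rotation example) shows why lambdify wins: the compiled function broadcasts over a whole NumPy array in a single call:

```python
import numpy as np
import sympy

x, y, theta = sympy.symbols('x y theta')
expr = x*sympy.cos(theta) - y*sympy.sin(theta)

# One compiled function call handles the entire array at once
f = sympy.lambdify((x, y, theta), expr, 'numpy')
rad = np.linspace(0, np.pi, 9)
out = f(1.0, 0.0, rad)

assert out.shape == rad.shape
assert abs(out[0] - 1.0) < 1e-12    # cos(0) = 1
assert abs(out[-1] + 1.0) < 1e-12   # cos(pi) = -1
```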
SymPy also supports ufuncify for generating binary functions (using f2py and Cython) and Theano for GPU support. These options are discussed in the SymPy documentation.
help(sympy.lambdify)
Note that sympy.lambdify also supports custom functions (e.g. conditional operations or reshaping arrays). These custom functions can also be compiled for speed as easily as adding a @jit decorator from the numba library.
def my_sample(x):
    r = len(x) >> 1
    return np.reshape(x[:2*r], (r, 2))

x = np.linspace(0, np.pi, 9+1)
print(x*180/np.pi)  # degrees

e = sympy.sympify('1+sample(x)')
print(e)
f = sympy.lambdify(sympy.Symbol('x'), e, {'sample': my_sample})
f(x)

f = sympy.lambdify(sympy.Symbol('x'),               # inputs
                   sympy.sympify('sample(cos(x))'), # expressions
                   ('numpy', {'sample': my_sample}))# functions
# Note: Can also use an empty function dictionary for consistency
f(x)
However, the documentation could be better. (Earlier versions didn't even include the expression.)
help(f)