# Unix Generator

Using the UNIX random number generator, but with the Visual Basic generator's coefficients, program a series of 60 random numbers in a spreadsheet, verifying that equal seeds produce equal series. As a control ("blank"), use a series of the same size generated with a Visual Basic for Applications macro and the same seed (NOTE: the series will not yield the same values even with the same seed, because the algorithms are different).

1. Show that with random seeds the series does not repeat.
2. Using the chi-squared test, show that the series is uniform.
3. Run both previous tests on the "VBA" series.
4. Conclusions. _Draw conclusions: for example, compute the mean and standard deviation of both samples and run an analysis of variance to determine whether the means are equal._

Generator parameters:

- m = 2^24
- a = 1140671485
- b = 12820163

```
import numpy as np
from scipy.stats import chi2
import matplotlib.pyplot as plt

np.set_printoptions(formatter={'float': lambda x: "{0:0.20f}".format(x)})

a = 1140671485
b = 12820163
m = 2**24


def semillar(X, tantos):
    """Generate `tantos` independent series of `X` uniform(0, 1) numbers."""
    r = np.zeros((X, tantos), dtype=np.float64)
    for j in range(tantos):
        oldSeed = np.random.randint(0, m)  # a fresh random seed per series
        for i in range(X):
            oldSeed = (a * oldSeed + b) % m
            r[i, j] = oldSeed / m
    return r


def agrupar(N, Q):
    """Count how many values of each series fall into each of N equal-width bins."""
    g = np.zeros((N, Q.shape[1]))
    incremento = 1.0 / N
    for i in range(Q.shape[1]):
        for j in range(Q.shape[0]):
            aux = 0.0
            for k in range(N):
                aux += incremento
                if (aux - incremento) < Q[j, i] <= aux:
                    g[k, i] += 1
    return g


def chiCuadrado(r):
    """Chi-squared statistic of each series' bin counts vs. the uniform expectation."""
    FE = serie / float(r.shape[0])  # expected frequency per bin
    chi = ((FE - r) ** 2) / FE
    return chi.sum(0)
```

# The program

```
serie = 60
ensayos = 5000
resultados = semillar(serie, ensayos)
# divIn = int(round(np.sqrt(serie)))
divIn = 10
grupos = agrupar(divIn, resultados)
resultados.shape
```

# Tests

## Means

```
av = resultados.mean(0).mean()
print('Mean:', av)
print('Error:', 0.5 - av)
```

## Variance and standard deviation

```
print('Mean variance:', resultados.var(0).mean())
print('Mean standard deviation:', resultados.std(0).mean())
```

## Chi-squared test

Rather than measuring the point-by-point difference between the sample and the true distribution, the chi-squared test checks the deviation from the expected counts:

$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$$

where $O_i$ is the observed count in class $i$, $E_i$ is the expected count in class $i$, and $n$ is the number of classes. _For a uniform distribution with equally spaced classes, $E_i = N/n$, where $N$ is the total number of observations. It can be shown that the sampling distribution of this statistic is approximately chi-squared with $n - 1$ degrees of freedom._

```
p = 0.95
gradosDeLibertad = divIn - 1
print('Observed Chi2 | Inverse Chi2')
print(' {0:0.05f} | {1:0.09f} '.format(chiCuadrado(grupos).mean(), chi2.ppf(p, gradosDeLibertad)))
print('\nConfidence (%):', p)
print('Degrees of freedom:', gradosDeLibertad)
```

**Since the calculated χ² is smaller than the tabulated value χ²(0.95, 9), we accept the null hypothesis that there is no difference between the sample distribution and the uniform distribution.**

_The chi-squared statistic quantifies how much the observed distribution of counts deviates from the hypothesized distribution._ The inverse chi-squared function gives, for a chi-squared distribution with k degrees of freedom, the value of x that leaves probability p to its left.
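The acceptance decision can also be cross-checked with `scipy.stats.chisquare`, which computes the same statistic directly from the bin counts. A minimal self-contained sketch (the seed value and bin count here are illustrative, not from the exercise):

```python
import numpy as np
from scipy.stats import chisquare, chi2

a, b, m = 1140671485, 12820163, 2**24

def lcg_series(seed, n):
    """n uniform(0, 1) values from the linear congruential generator."""
    out = np.empty(n)
    for i in range(n):
        seed = (a * seed + b) % m
        out[i] = seed / m
    return out

serie = lcg_series(seed=12345, n=60)
observed, _ = np.histogram(serie, bins=10, range=(0.0, 1.0))

stat, pvalue = chisquare(observed)      # expected count defaults to 60/10 = 6 per bin
critical = chi2.ppf(0.95, df=10 - 1)
print("chi2 =", stat, "critical =", critical, "uniform at 95%:", stat < critical)
```

The same seed always reproduces the same series, which is the reproducibility property the exercise asks to verify.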
```
x = np.linspace(0, serie, serie)
obtenido = resultados[:, np.random.randint(0, ensayos)] * serie
fig, ax = plt.subplots(1, 1)
obtenido.sort()
linestyles = ['--', '-.']
deg_of_freedom = divIn - 1
comparar = [obtenido, x]
for comp, ls in zip(comparar, linestyles):
    ax.plot(comp, chi2.pdf(comp, deg_of_freedom),
            linestyle=ls, label=r'$df=%i$' % deg_of_freedom)
plt.xlim(0, serie)
plt.ylim(0, 0.15)
plt.axvline(x=chi2.ppf(p, gradosDeLibertad), linestyle='-.', color='orange')
plt.xlabel(r'$\chi^2$')
plt.ylabel(r'$f(\chi^2)$')
plt.title(r'$\chi^2\ \mathrm{Distribution}$')
plt.legend()
plt.show()

mediaDeGrupos = grupos[:, :].mean(axis=1)
plt.hist(resultados[:, np.random.randint(0, ensayos)])
plt.plot(np.repeat(6, serie), linewidth=2)  # expected count per bin: 60/10 = 6
plt.xlabel('Count')
plt.ylabel('Groups')
plt.title('Histogram')
plt.axis([0, 1, 0, 12])
plt.grid(True)
plt.show()

plt.plot(mediaDeGrupos, 'ro')
plt.plot(np.repeat(6, divIn), linewidth=2, color='red')
plt.show()
```

## Export

```
import pandas as pd
df = pd.DataFrame(resultados)
df.to_csv("exportarPython.csv", sep=';', header=None)
```
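For the conclusions step, the mean comparison between the Python and VBA series can be done with a one-way ANOVA. This sketch uses synthetic stand-ins for the two series; in the actual exercise they would be read back from the spreadsheet or the exported CSV:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
serie_python = rng.random(60)  # stand-in for the Python-generated series
serie_vba = rng.random(60)     # stand-in for the VBA-generated series

print("means:", serie_python.mean(), serie_vba.mean())
print("stds: ", serie_python.std(ddof=1), serie_vba.std(ddof=1))

fstat, pvalue = f_oneway(serie_python, serie_vba)
# A p-value above 0.05 means we cannot reject that both generators share the same mean.
print("F =", fstat, "p =", pvalue)
```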
---
## 2. Multi Layer Perceptron

### 1) import modules

```
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
from modules import multi_layer_perceptron
```

### 2) define placeholder for INPUT & LABELS

```
INPUT = tf.placeholder(tf.float32, [None, 28*28])
LABELS = tf.placeholder(tf.int32, [None])
```

### 3) define mlp model with multi_layer_perceptron function

<img src="./images/mlp.png" alt="mlp model" width=1000 align="left"/>

```
# def multi_layer_perceptron(input=None, output_dim=None):
#     input_dim = input.shape[1].value
#     # your code start here
#     return output

prediction = multi_layer_perceptron(INPUT, output_dim=10)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=LABELS,
    logits=prediction
)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
```

### 4) load data

```
mnist = input_data.read_data_sets("./data/", one_hot=True)
```

### 5) start training

#### - set training parameters : batch size, learning rate, total loop

```
BATCH_SIZE = 100
LEARNING_RATE = 0.01
TOTAL_LOOP = 10000
```

- arrA = [[0,0,0,0,1],[0,1,0,0,0]]
- np.where(arrA) => ([0,1], [4,1])
- ref) https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.where.html?highlight=numpy%20where#numpy.where

```
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for loop in range(1, TOTAL_LOOP + 1):
    train_images, train_labels = mnist.train.next_batch(BATCH_SIZE)
    train_labels = np.where(train_labels)[1]  # one-hot rows -> class indices
    _, loss = sess.run(
        [optimizer, cost],
        feed_dict={
            INPUT: train_images,
            LABELS: train_labels
        }
    )
    if loop % 500 == 0:
        print("loop: %05d," % (loop), "loss:", loss)

print("Training Finished! (loss: " + str(loss) + ")")
```

### 6) test performance

- test image shape: (100, 784)
- test label shape: (100, 10)
- arrB = [[0, 1, 2],[3, 4, 5]]
- np.argmax(arrB) => 5
- np.argmax(arrB, axis=0) => [1, 1, 1]
- np.argmax(arrB, axis=1) => [2, 2]
- ref) https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.argmax.html

```
TEST_SAMPLE_SIZE = 100
TEST_NUMBER = 5
accuracy_save = dict()

for number in range(1, 1 + TEST_NUMBER):
    test_images, test_labels = mnist.test.next_batch(TEST_SAMPLE_SIZE)
    pred_result = sess.run(
        prediction,
        feed_dict={INPUT: test_images}
    )
    pred_number = np.argmax(pred_result, axis=1)  # 100x1
    label_number = np.where(test_labels)[1]       # 100x1
    accuracy_save[number] = np.sum(pred_number == label_number)

print("Accuracy:", accuracy_save)
print("Total mean Accuracy:", np.mean(list(accuracy_save.values())))
```
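The `np.where` / `np.argmax` bullets above can be seen in a standalone snippet: both decode one-hot rows into class indices, which is exactly what the training loop does before feeding the sparse softmax loss and what the test cell does to score predictions.

```python
import numpy as np

one_hot = np.array([[0, 0, 0, 0, 1],
                    [0, 1, 0, 0, 0]])

# np.where on a one-hot matrix returns (row indices, column indices);
# the column indices are the class labels.
labels_where = np.where(one_hot)[1]

# np.argmax along axis=1 gives the same decoding.
labels_argmax = np.argmax(one_hot, axis=1)

print(labels_where)   # [4 1]
print(labels_argmax)  # [4 1]
```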
---
# Subpockets to target residue(s) We explore the distance of the `kissim` subpocket centers to their target residues. ``` %load_ext autoreload %autoreload 2 from pathlib import Path import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from opencadd.databases.klifs import setup_remote from kissim.encoding import FingerprintGenerator from src.paths import PATH_DATA, PATH_RESULTS plt.style.use("seaborn") HERE = Path(_dh[-1]) # noqa: F821 DATA = PATH_DATA RESULTS = PATH_RESULTS / "all" REMOTE = setup_remote() ``` ## Load subpocket center coordinates ``` fingerprint_generator = FingerprintGenerator.from_json(RESULTS / "fingerprints_clean.json") print(f"Number of fingerprints: {len(fingerprint_generator.data)}") subpocket_centers = fingerprint_generator.subpocket_centers subpocket_centers = subpocket_centers.stack(0) subpocket_centers.index.names = ("structure.klifs_id", "subpocket") subpocket_centers = subpocket_centers.reset_index() subpocket_centers["residue.ix"] = 0 subpocket_centers ``` ## Define target residues ``` subpocket_to_residue_ixs = { "hinge_region": [46, 47, 48], "dfg_region": [81, 82, 83], "front_pocket": [4, 5, 6, 7, 8, 9], } subpocket_to_residue_ixs residue_ix_to_subpocket = { residue_ix: subpocket for subpocket, residue_ixs in subpocket_to_residue_ixs.items() for residue_ix in residue_ixs } residue_ix_to_subpocket ``` ## Load pocket residue CA atom coordinates ``` ca_atoms = pd.read_csv( DATA / "processed/pocket_residue_ca_atom_coordinates.csv.gz", header=0, index_col=[0, 1] ) # Rename a few columns and reset index ca_atoms = ca_atoms.rename(columns={"atom.x": "x", "atom.y": "y", "atom.z": "z"}) ca_atoms = ca_atoms.reset_index() # Keep only target residues ca_atoms = ca_atoms[ca_atoms["residue.ix"].isin(residue_ix_to_subpocket.keys())] # Add subpocket name ca_atoms["subpocket"] = ca_atoms.apply(lambda x: residue_ix_to_subpocket[x["residue.ix"]], axis=1) # Keep only structures that we have subpocket centers for 
ca_atoms = ca_atoms[ ca_atoms["structure.klifs_id"].isin(subpocket_centers["structure.klifs_id"].unique()) ] ca_atoms ``` ## Concatenate CA atom and subpocket center data ``` coordinates = ( pd.concat([ca_atoms, subpocket_centers]) .sort_values(["structure.klifs_id", "subpocket"]) .reset_index(drop=True) ) coordinates.index.name = "ix" coordinates ``` ## Get vector between subpocket centers and their target residue CA atoms ``` vectors = coordinates.groupby(["structure.klifs_id", "subpocket"]).apply( lambda group: group[group["residue.ix"] != 0][["x", "y", "z"]] - group[group["residue.ix"] == 0][["x", "y", "z"]].squeeze() ) vectors = ( vectors.reset_index() .merge(coordinates.reset_index()[["ix", "residue.ix"]], how="left", on="ix") .set_index(["structure.klifs_id", "subpocket", "residue.ix"]) .drop("ix", axis=1) ) vectors ``` ## Get vector length (distance) ``` distances = vectors.apply(lambda x: np.linalg.norm(x), axis=1) distances = distances.unstack(0).transpose() distances ``` ## Plot distance distributions ``` plt.figure(figsize=(8.5, 6)) ax = sns.boxplot( x="residue.ix", y="distance", hue="subpocket", data=distances.melt().sort_values("residue.ix").rename(columns={"value": "distance"}), ) ``` ## Plot distance distributions split by DFG conformation ``` structures = REMOTE.structures.all_structures() structure_klifs_ids_by_dfg = { name: group["structure.klifs_id"].to_list() for name, group in structures.groupby("structure.dfg") } distances_dict = { "All": distances, "DFG-in": distances[distances.index.isin(structure_klifs_ids_by_dfg["in"])], "DFG-out": distances[distances.index.isin(structure_klifs_ids_by_dfg["out"])], } fig, axes = plt.subplots(1, 3, figsize=(20, 5), sharey=True) fig.suptitle("Distance between subpocket centers and target residues") for i, (title, data) in enumerate(distances_dict.items()): sns.boxplot( x="residue.ix", y="distance", hue="subpocket", data=data.melt().sort_values("residue.ix").rename(columns={"value": "distance"}), ax=axes[i], 
) axes[i].set_title(f"{title} structures (#{len(data)})") ```
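The groupby/apply pipeline above boils down to subtracting a subpocket center from each target residue's CA coordinates and taking the vector norm. A toy version with made-up coordinates (the residue indices mimic the hinge-region definition):

```python
import numpy as np
import pandas as pd

# CA atoms of three hinge-region residues (toy coordinates)
ca_atoms = pd.DataFrame(
    {"x": [1.0, 2.0, 3.0], "y": [0.0, 0.0, 0.0], "z": [0.0, 0.0, 0.0]},
    index=pd.Index([46, 47, 48], name="residue.ix"),
)
center = pd.Series({"x": 0.0, "y": 0.0, "z": 0.0})  # subpocket center

vectors = ca_atoms[["x", "y", "z"]] - center        # center-to-residue vectors
distances = vectors.apply(np.linalg.norm, axis=1)   # vector lengths
print(distances.tolist())  # [1.0, 2.0, 3.0]
```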
---
```
# import os
# os.environ['LIBRARY_PATH'] = os.environ['LD_LIBRARY_PATH'] = '/home/apanin/cuda-8.0/lib64'
# os.environ['PATH'] = "/usr/local/cuda-8.0/bin/:/home/apanin/cuda-8.0/lib64:" + os.environ['PATH']
# %env THEANO_FLAGS=device=cuda0,gpuarray.preallocate=0.5,floatX=float32
# import theano
# import theano.tensor as T
# from lasagne import *
# from lasagne.layers import *

import os
import numpy as np
import pandas as pd
from tqdm import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
```

## Loading Data

```
subsystemNames = ['L1tcalo', 'L1tmu', 'Hlt', 'Pix', 'Strip', 'Ecal', 'Hcal', 'Dt',
                  'Rpc', 'Es', 'Csc', 'Track', 'Egamma', 'Muon', 'Jetmet', 'Lumi']
Ids_labels = ['runId', 'lumiId', 'lumi', 'isSig']

file_name_merged = '/home/fedor/notebook/ml4dc/ok_files/merged.pickle'
data = pd.read_pickle(file_name_merged)
data = data.dropna(axis=0, how='any')

plt.hist(data["lumi"][data['isSig'] == 0], label="bad", density=True, alpha=0.5, bins=40)
plt.hist(data["lumi"][data['isSig'] == 1], label="good", density=True, alpha=0.5, bins=40)
plt.xlabel("lumi")
plt.legend()
plt.show()

data[(data["lumi"] < 0.01) & (data['isSig'] == 1)][['runId', 'lumiId']]

nonempty = np.where(data["lumi"] >= 0.01)[0]
data = data.iloc[nonempty]

features = [x for x in data.columns if x not in subsystemNames + ['runId', 'lumiId', 'isSig']]
for f in features:
    xs = data[f].values
    if np.std(xs) > 0.0:
        data[f] = (xs - np.mean(xs)) / np.std(xs)

labels = data["isSig"]
data_features = data[features]
np.mean(labels)
data_features.shape

num_good = np.sum(labels)
num_bad = len(labels) - np.sum(labels)
weights = 0.5 / np.where(labels == 1.0, num_good, num_bad)
weights *= len(labels)
num_good, num_bad

from sklearn.model_selection import train_test_split
indx_train, indx_test = train_test_split(np.arange(len(labels), dtype='int32'),
                                         stratify=labels, test_size=0.1, random_state=1)
y_train = np.array(labels.iloc[indx_train], 'float32')
y_test = np.array(labels.iloc[indx_test], 'float32')
X_train = np.array(data_features.iloc[indx_train], 'float32')
X_test = np.array(data_features.iloc[indx_test], 'float32')
weights_train = weights[indx_train]
weights_test = weights[indx_test]
len(y_test) - np.sum(y_test)

from sklearn.metrics import roc_curve, auc, roc_auc_score, recall_score, confusion_matrix
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold, KFold
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

names = ["LogisticRegression", "RBF SVM", "Random Forest", "AdaBoost", "Naive Bayes"]
classifiers = [
    LogisticRegression(penalty='l1', C=0.02),
    SVC(kernel='rbf', C=100, gamma=0.001, probability=True),
    RandomForestClassifier(max_depth=15, n_estimators=40, max_leaf_nodes=50, min_samples_leaf=5),
    AdaBoostClassifier(n_estimators=60),
    GaussianNB()]

plt.figure(figsize=(8, 8))
plt.plot([0, 1], [0, 1], '--', color='black')

# iterate over classifiers
for name, clf in zip(names, classifiers):
    clf.fit(X_train, y_train, sample_weight=weights_train)
    probas = clf.predict_proba(X_test)
    fpr, tpr, _ = roc_curve(y_test, probas[:, 1], sample_weight=weights_test)
    auc_score = roc_auc_score(y_test, probas[:, 1])
    rec_score = recall_score(1 - y_test, 1 - np.round(probas[:, 1]))
    plt.plot(fpr, tpr, label=name + ' AUC = %.3lf' % auc_score + ', recall = %.3lf' % rec_score)
    print(name)
    print('confusion_matrix for train set')
    print(confusion_matrix(y_train, np.round(clf.predict_proba(X_train)[:, 1])))
    print('confusion_matrix for test set')
    print(confusion_matrix(y_test, np.round(probas[:, 1])))
    print(len(np.where(probas[:, 1][y_test == 0] <= 0.6)[0]),
          len(np.where(probas[:, 1][y_test == 1] <= 0.6)[0]))

plt.legend(loc='lower right', fontsize=20)
plt.title('ROC curve', fontsize=24)
plt.xlabel('FPR', fontsize=20)
plt.ylabel('TPR', fontsize=20)
plt.show()

from sklearn import svm

# fit the model on good lumisections only (novelty detection)
clf = svm.OneClassSVM(nu=0.05)
clf.fit(X_train[y_train == 1])
y_pred_train = clf.predict(X_train[y_train == 1])
y_pred_test = clf.predict(X_test[y_test == 1])
y_pred_outliers = clf.predict(np.vstack((X_test[y_test == 0], X_train[y_train == 0])))
n_error_train = y_pred_train[y_pred_train == -1].size
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size
y_pred_train[y_pred_train == 1].size, y_pred_test[y_pred_test == 1].size
n_error_train
n_error_test
n_error_outliers
```
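The one-class setup at the end trains only on "good" samples and treats everything the model rejects as anomalous. A self-contained sketch of the same pattern on synthetic data (`nu` and `gamma` values here are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_good = 0.3 * rng.randn(200, 2)                       # inliers clustered at the origin
X_anom = rng.uniform(low=4.0, high=5.0, size=(20, 2))  # far-away anomalies

clf = OneClassSVM(nu=0.05, gamma="auto")
clf.fit(X_good)                                        # fit on good samples only

pred_good = clf.predict(X_good)  # +1 = inlier, -1 = outlier
pred_anom = clf.predict(X_anom)

print("train error rate:", (pred_good == -1).mean())   # roughly bounded by nu
print("anomalies caught:", (pred_anom == -1).mean())
```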
---
```
import skimage.io as sio
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import diff_classifier.aws as aws
from skimage.filters import roberts, sobel, scharr, prewitt, median, rank
from skimage import img_as_ubyte
from skimage.morphology import (erosion, dilation, opening, closing,
                                white_tophat, disk, reconstruction)
import diff_register.im_process as imp
from scipy.ndimage.morphology import distance_transform_edt

filename = 'BF_cells_PEG_1_XY1.tif'
folder = 'Cell_Studies/10_16_18_cell_study'
bucket = 'ccurtis.data'
longname = folder + '/' + filename

aws.download_s3(longname, filename, bucket_name=bucket)
bfimage = plt.imread(filename)

fig, ax = plt.subplots(figsize=(7, 7))
ax.imshow(bfimage, cmap='gray')
plt.axis('off')

# Building an overall function:
def binary_BF(image, meanse=disk(10), edgefilt='prewitt', opense=disk(10),
              fill_first=False, bi_thresh=0.000025, tophatse=disk(20)):
    # convertim = img_as_ubyte(image)
    meanim = rank.mean(image, meanse)
    if edgefilt == 'prewitt':
        edgeim = prewitt(meanim)
    elif edgefilt == 'sobel':
        edgeim = sobel(meanim)
    elif edgefilt == 'scharr':
        edgeim = scharr(meanim)
    elif edgefilt == 'roberts':
        edgeim = roberts(meanim)

    closeim = closing(edgeim, opense)
    openim = opening(closeim, opense)
    if fill_first:
        seed = np.copy(openim)
        seed[1:-1, 1:-1] = openim.max()
        mask = openim
        filledim = reconstruction(seed, mask, method='erosion')
        binarim = filledim > bi_thresh
    else:
        binarim = openim > bi_thresh * np.mean(openim)
        seed = np.copy(binarim)
        seed[1:-1, 1:-1] = binarim.max()
        mask = binarim
        filledim = reconstruction(seed, mask, method='erosion')
    tophim = filledim - closing(white_tophat(filledim, tophatse), opense) > 0.01

    fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(16, 8))
    ax[0][0].imshow(image, cmap='gray')
    ax[0][1].imshow(meanim, cmap='gray')
    ax[0][2].imshow(edgeim, cmap='gray', vmax=4*np.mean(edgeim))
    ax[0][3].imshow(closeim, cmap='gray', vmax=4*np.mean(closeim))
    ax[1][0].imshow(openim, cmap='gray', vmax=4*np.mean(openim))
    ax[1][1].imshow(binarim, cmap='gray')
    ax[1][2].imshow(filledim, cmap='gray')
    ax[1][3].imshow(tophim, cmap='gray')
    for axes in ax:
        for axe in axes:
            axe.axis('off')
    fig.tight_layout()

    return tophim

tophim = binary_BF(bfimage, bi_thresh=1.5, tophatse=disk(20))
euim = distance_transform_edt(~tophim.astype(bool))

plt.figure(figsize=(7, 7))
plt.imshow(euim)
plt.axis('off')

plt.figure(figsize=(7, 7))
plt.imshow(tophim, cmap='gray')
plt.axis('off')

eu10 = euim < 20
ten = eu10.astype(int) + tophim.astype(int)
palette = np.array([[0, 0, 0],         # black: background
                    [255, 0, 0],       # red: within 20 px of a cell
                    [255, 255, 255]])  # white: cell
ten = palette[ten]

plt.figure(figsize=(7, 7))
plt.imshow(ten)
plt.axis('off')

eu10 = euim < 20
ten = eu10.astype(int) + tophim.astype(int)
plt.figure(figsize=(7, 7))
plt.imshow(ten, cmap='gray', vmax=2)
plt.axis('off')
```
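The `distance_transform_edt` call is what builds the "within 20 pixels of a cell" band: it assigns every background pixel its Euclidean distance to the nearest cell pixel, which is then thresholded. A minimal illustration on a 5×5 mask with one cell pixel:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                      # a single "cell" pixel

dist = distance_transform_edt(~mask)   # distance of every pixel to the cell
near = dist < 2                        # band of pixels within 2 px of the cell

print(dist[2, 2], dist[2, 4])  # 0.0 2.0
print(near.sum())              # 9: the cell, its 4 neighbours and 4 diagonals
```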
---
```
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
import fastreg as fr
from fastreg import I, R, C
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
```

### Generate Data

```
models = ['linear', 'poisson', 'negbin', 'zinf_poisson', 'zinf_negbin']
data = fr.dataset(N=1_000_000, K1=10, K2=100, models=models, seed=89320432)
data_wide = fr.dataset(N=1_000_000, K1=10, K2=10_000, models=models, seed=89320433)
data.head()
```

### Normal OLS

```
%time smf.ols('y0 ~ 1 + x1 + x2', data=data).fit().params
%time fr.ols(y='y0', x=I+R('x1')+R('x2'), data=data)
%time fr.ols(y='y', x=I+R('x1')+R('x2')+C('id1')+C('id2'), data=data)
%time fr.ols(y='y', x=I+R('x1')+R('x2')+C('id1')+C('id2'), cluster=C('id1')*C('id2'), data=data)
```

### High Dimensional

```
%time fr.ols(y='y', x=I+R('x1')+R('x2')+C('id1')+C('id2'), data=data_wide)
%time fr.ols(y='y', x=I+R('x1')+R('x2')+C('id1'), hdfe=C('id2'), data=data_wide)
%time fr.ols(y='y', x=I+R('x1')+R('x2')+C('id1'), absorb=C('id2'), data=data_wide)
```

### Poisson

```
%time param, sigma = fr.poisson(y='p0', x=I+R('x1')+R('x2'), data=data, epochs=3)
print(param['real'])
%time param, sigma = fr.poisson(y='p', x=I+R('x1')+R('x2')+C('id1'), data=data, epochs=3)
print(param['real'])

bid1 = np.arange(10)/10
beta = param['categ']['id1'] - param['categ']['id1'][0]
berr = beta - bid1
stdv = np.sqrt(np.diagonal(sigma['categ']['id1']['categ']['id1']))
line, = plt.plot(bid1, berr, marker='o')
# plt.plot(bid1, bid1, linestyle='--', linewidth=1, color='k');
plt.plot(bid1, np.zeros_like(bid1), linestyle='--', linewidth=1, color='k');
plt.errorbar(bid1, berr, yerr=stdv, capsize=4, color=line.get_color());
```

### Negative Binomial

```
%time param, sigma = fr.zinf_negbin(y='nb0', x=I+R('x1')+R('x2'), data=data, epochs=10)
print(param['real'])
%time param, sigma = fr.zinf_negbin(y='nb', x=I+R('x1')+R('x2')+C('id1')+C('id2'), data=data, epochs=10)

bid1 = np.arange(100)/100
beta = param['categ']['id2'] - param['categ']['id2'][0]
stdv = np.sqrt(np.diagonal(sigma['categ']['id2']['categ']['id2']))
line, = plt.plot(bid1, beta, marker='o')
plt.plot(bid1, bid1, linestyle='--', linewidth=1, color='k');
plt.errorbar(bid1, beta, yerr=stdv, capsize=4, color=line.get_color());
```

### Ultra Wide

```
N = 2_000_000
df = pd.DataFrame({
    'x1': np.random.rand(N),
    'x2': np.random.rand(N),
    'id1': np.ceil(10*np.arange(N)/N + 1e-7).astype(int),
    'id2': np.random.randint(1, 10001, size=N)
})
df['y'] = 1 + 2*df['x1'] + 3*df['x2'] + np.log10(df['id1']) + np.log10(df['id2']) + np.random.randn(N)
print(df[['id1', 'id2']].nunique())
%time fr.ols(y='y', x=I+R('x1')+R('x2')+C('id1'), hdfe=C('id2'), data=df)
%time β, Σ = fr.gols(y='y', x=I+R('x1')+R('x2')+C('id1'), hdfe=C('id2'), data=df, epochs=10)
print(β['real'])
print(np.sqrt(np.exp(β['lsigma2'])))
print(np.sqrt(Σ['lsigma2']['lsigma2']))
np.sqrt(Σ['categ']['id2']['categ']['id2'])
```
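As a sanity check on the "Ultra Wide" synthetic design, the slopes on `x1` and `x2` (2 and 3) can be recovered with plain least squares, independently of fastreg. A smaller N is used here for speed:

```python
import numpy as np
import pandas as pd

N = 50_000
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.random(N), "x2": rng.random(N)})
df["y"] = 1 + 2*df["x1"] + 3*df["x2"] + rng.standard_normal(N)

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones(N), df["x1"], df["x2"]])
beta, *_ = np.linalg.lstsq(X, df["y"], rcond=None)
print(np.round(beta, 2))  # close to [1. 2. 3.]
```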
---
# Chapter 8

## Question 10

Using boosting to predict `Salary` in the `Hitters` data set

```
import statsmodels.api as sm
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn.model_selection
import sklearn.ensemble
import sklearn.linear_model
import sklearn.tree
import sklearn.metrics
import sklearn.neighbors
from collections import namedtuple
from tqdm import tqdm_notebook

sns.set(style="whitegrid")

hitters = sm.datasets.get_rdataset("Hitters", "ISLR").data
hitters.head()

print(hitters.dtypes)
for column in ["League", "Division", "NewLeague"]:
    print(hitters[column].value_counts())
    print()
```

### (a) Remove the observations for whom the salary information is unknown, and then log-transform the salaries

```
hitters = hitters.dropna(subset=["Salary"])
hitters["LogSalary"] = np.log(hitters["Salary"])
```

### (b) Create a training set consisting of the first 200 observations, and a test set consisting of the remaining observations

```
# I'm not doing this, I'll use train_test_split instead
X = hitters.drop(columns=["Salary", "LogSalary"])
y = hitters.LogSalary

# Encode categoricals
for col in ["League", "Division", "NewLeague"]:
    X = pd.concat([X, pd.get_dummies(X[col], prefix=col, drop_first=True)], axis=1).drop(columns=col)

X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=len(hitters)-200)
```

### (c) Perform boosting on the training set with 1,000 trees for a range of values of the shrinkage parameter $\lambda$. Produce a plot of shrinkage against training MSE.
``` DataPoint = namedtuple('DataPoint', 'shrinkage_lambda, train_mse, test_mse') data = [] for shrinkage_lambda in tqdm_notebook(np.linspace(0.001, 3, 100)): boost_regressor = sklearn.ensemble.AdaBoostRegressor(base_estimator=sklearn.tree.DecisionTreeRegressor(max_depth=3), n_estimators=1000, learning_rate=shrinkage_lambda ) boost_regressor.fit(X_train, y_train) y_pred_train = boost_regressor.predict(X_train) training_mse = sklearn.metrics.mean_squared_error(y_train, y_pred_train) y_pred_test = boost_regressor.predict(X_test) test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred_test) data.append(DataPoint(shrinkage_lambda, training_mse, test_mse)) ``` ### (d) Produce a plot of shrinkage vs test set MSE ``` lambdas = [x.shrinkage_lambda for x in data] train_mses = [x.train_mse for x in data] test_mses = [x.test_mse for x in data] plt.plot(lambdas, train_mses) plt.show() plt.plot(lambdas, test_mses) plt.show() ``` ### (e) Compare the test MSE of boosting to the test set MSE resulting from two of the regression approaches in Chapters 3 and 6 ``` parameters = {'base_estimator__max_depth': range(1,4), 'n_estimators': [100,1000,10000], 'learning_rate': np.linspace(0.001,1,100) } tree = sklearn.tree.DecisionTreeRegressor() boosted_regressor = sklearn.ensemble.AdaBoostRegressor(base_estimator=tree) clf = sklearn.model_selection.GridSearchCV(boosted_regressor, parameters, n_jobs=4, cv=5) clf.fit(X=X_train, y=y_train) best_tree = clf.best_estimator_ print (clf.best_score_, clf.best_params_) # Chapter 3: Linear Regression and KNN boosted_tree_test_mse = sklearn.metrics.mean_squared_error(y_test, best_tree.predict(X_test)) linear_regression = sklearn.linear_model.LinearRegression() linear_regression.fit(X_train,y_train) y_pred = linear_regression.predict(X_test) linear_regression_test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred) knn_model = sklearn.neighbors.KNeighborsRegressor(n_neighbors=5) knn_model.fit(X_train, y_train) # reshape required to cast the 
training data to a 2d array y_pred = knn_model.predict(X_test) knn_test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred) # Chapter 6: Ridge Regression/Lasso # Use the LassoCV lasso_model = sklearn.linear_model.LassoLarsCV(cv=5, max_iter=1e6) lasso_model.fit(X_train,y_train) # Don't really need to separate the data as we are using CV y_pred = lasso_model.predict(X_test) lasso_test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred) ridge_regression = sklearn.linear_model.RidgeCV(cv=5) ridge_regression.fit(X_train,y_train) # Don't really need to separate the data as we are using CV y_pred = ridge_regression.predict(X_test) ridge_test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred) print(f"Boosting: {boosted_tree_test_mse:.2f}") print(f"Linear Regression: {linear_regression_test_mse:.2f}") print(f"KNN: {knn_test_mse:.2f}") print(f"Lasso: {lasso_test_mse:.2f}") print(f"Ridge Regression: {ridge_test_mse:.2f}") ``` ### (f) Which variables appear to be the most important predictors in the boosted model? ``` def getFeatureImportance(random_forest, X): """Given a trained classifier, plot the feature importance""" importances = random_forest.feature_importances_ std = np.std([tree.feature_importances_ for tree in random_forest.estimators_], axis=0) indices = np.argsort(importances)[::-1] text_indices = [X.columns[i] for i in indices] # Print the feature ranking print("Feature ranking:") for f in range(X.shape[1]): print(f"{f+1}. {text_indices[f]}: {importances[indices[f]] :.2f}") # Plot the feature importances of the forest plt.figure() plt.title("Feature importances") plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center") plt.xticks(range(X.shape[1]), text_indices, rotation=90) plt.xlim([-1, X.shape[1]]) plt.show() getFeatureImportance(best_tree, X) ``` ### (g) Now apply bagging to the training set. What is the test set MSE for this approach? 
``` # A random forest, when all predictors are consider, is equivalent to using bagging parameters = {'max_depth': range(1,10), 'n_estimators': [100,1000,10000], 'max_leaf_nodes': [5,10,50,100,500] } random_forest = sklearn.ensemble.RandomForestRegressor(max_features=None) random_forest_cv = sklearn.model_selection.GridSearchCV(random_forest, parameters, n_jobs=4, cv=5) random_forest_cv.fit(X=X_train, y=y_train) best_random_forest = random_forest_cv.best_estimator_ print (random_forest_cv.best_score_, random_forest_cv.best_params_) y_pred = random_forest_cv.predict(X_test) bagging_test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred) print(f"Bagging: {bagging_test_mse:.2f}") ```
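The comment at the top of the cell above, that a random forest considering all predictors is equivalent to bagging, can be checked directly: `BaggingRegressor` over decision trees and `RandomForestRegressor(max_features=None)` implement the same procedure up to random-number details.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)

bagging = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
forest = RandomForestRegressor(n_estimators=50, max_features=None, random_state=0)

bagging.fit(X, y)
forest.fit(X, y)

# Both average fully grown trees over bootstrap samples, so their predictions
# should agree closely (not exactly: the bootstrap draws differ).
corr = np.corrcoef(bagging.predict(X), forest.predict(X))[0, 1]
print(round(corr, 3))
```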
---
``` #Import Library #Preprocessing import os import cv2 import pandas as pd import numpy as np import tensorflow as tf import scipy.ndimage as ndi from random import shuffle from scipy.misc import imread, imresize from scipy.io import loadmat #Model from keras.preprocessing.image import ImageDataGenerator from sklearn.model_selection import train_test_split from keras.layers import Activation, Convolution2D, Dropout, Conv2D from keras.layers import AveragePooling2D, BatchNormalization from keras.layers import GlobalAveragePooling2D from keras.models import Sequential from keras.layers import Flatten from keras.models import Model from keras.layers import Input from keras.layers import MaxPooling2D from keras.layers import SeparableConv2D from keras import layers from keras.regularizers import l2 #Save Model from keras.callbacks import CSVLogger, ModelCheckpoint, EarlyStopping from keras.callbacks import ReduceLROnPlateau #Dataset Parameter dataset_name = "imdb" images_path = '.../imdb/imdb_crop/' #Model parameters batch_size = 32 num_epochs = 10 input_shape = (48, 48, 3) validation_split = .2 verbose = 1 num_classes = 2 patience = 50 #path to save the models base_path = '.../pretrained_models/' ``` # 1) Preprocessing (Define the Dataset/Train,Validation Split) ``` class DataManager(object): """Class for loading imdb gender classification dataset.""" def __init__(self, dataset_name='imdb', dataset_path=None, image_size=(48, 48)): self.dataset_name = dataset_name self.dataset_path = dataset_path self.image_size = image_size if self.dataset_path is not None: self.dataset_path = dataset_path elif self.dataset_name == 'imdb': self.dataset_path = '.../imdb/imdb_crop/imdb.mat' else: raise Exception( 'Incorrect dataset name, please input imdb dataset') def get_data(self): if self.dataset_name == 'imdb': ground_truth_data = self._load_imdb() return ground_truth_data def _load_imdb(self): face_score_treshold = 3 dataset = loadmat(self.dataset_path) image_names_array = 
dataset['imdb']['full_path'][0, 0][0] gender_classes = dataset['imdb']['gender'][0, 0][0] face_score = dataset['imdb']['face_score'][0, 0][0] second_face_score = dataset['imdb']['second_face_score'][0, 0][0] face_score_mask = face_score > face_score_treshold second_face_score_mask = np.isnan(second_face_score) unknown_gender_mask = np.logical_not(np.isnan(gender_classes)) mask = np.logical_and(face_score_mask, second_face_score_mask) mask = np.logical_and(mask, unknown_gender_mask) image_names_array = image_names_array[mask] gender_classes = gender_classes[mask].tolist() image_names = [] for image_name_arg in range(image_names_array.shape[0]): image_name = image_names_array[image_name_arg][0] image_names.append(image_name) return dict(zip(image_names, gender_classes)) #Preprocessing functions to read the imdb facedataset def get_labels(dataset_name): if dataset_name == 'imdb': return {0: 'woman', 1: 'man'} else: raise Exception('Invalid dataset name') def get_class_to_arg(dataset_name='imdb'): if dataset_name == 'imdb': return {'woman': 0, 'man': 1} else: raise Exception('Invalid dataset name') def split_imdb_data(ground_truth_data, validation_split=.2, do_shuffle=False): ground_truth_keys = sorted(ground_truth_data.keys()) if do_shuffle is not False: shuffle(ground_truth_keys) training_split = 1 - validation_split num_train = int(training_split * len(ground_truth_keys)) train_keys = ground_truth_keys[:num_train] validation_keys = ground_truth_keys[num_train:] return train_keys, validation_keys # Initialize object data_loader = DataManager(dataset_name) ground_truth_data = data_loader.get_data() #get_data (image_name_path , gender) ground_truth_data train_keys, val_keys = split_imdb_data(ground_truth_data, validation_split) train_keys val_keys print('Number of training samples:', len(train_keys)) print('Number of validation samples:', len(val_keys)) ``` # 2) Preprocessing (Encode the Dataset) ``` x_train = [] y_train = [] for key in train_keys: image_path = 
images_path + key image_array = cv2.imread(image_path) image_array = imresize(image_array, (48, 48)) num_image_channels = len(image_array.shape) #image_array = cv2.cvtColor(image_array,cv2.COLOR_RGB2GRAY).astype('float32') ground_truth = ground_truth_data[key] image_array = image_array.astype('float32') x_train.append(image_array) y_train.append(ground_truth) #validation x_valid = [] y_valid = [] for key in val_keys: image_path = images_path + key image_array = cv2.imread(image_path) image_array = imresize(image_array, (48, 48)) num_image_channels = len(image_array.shape) #image_array = cv2.cvtColor(image_array,cv2.COLOR_RGB2GRAY).astype('float32') #image_array = np.expand_dims(image_array, -1) ground_truth = ground_truth_data[key] image_array = image_array.astype('float32') x_valid.append(image_array) y_valid.append(ground_truth) #Preprocess Train & Valid Data #Normalize the input features (x_train,x_valid) xtrain = np.array(x_train) / 255 xvalid = np.array(x_valid) / 255 #Reshape the input features(x_train) w, h = 48, 48 x_train = xtrain.reshape(xtrain.shape[0], w, h, 3) x_valid = xvalid.reshape(xvalid.shape[0], w, h, 3) print("Shape of x_train: ", x_train.shape) print("Shape of x_valid: ", x_valid.shape) #Categorical encode the input label(y_train) ytrain = np.array(y_train) yvalid = np.array(y_valid) y_train = tf.keras.utils.to_categorical(ytrain, 2) y_valid = tf.keras.utils.to_categorical(yvalid, 2) print("Shape of y_train: ", y_train.shape) print("Shape of y_valid: ", y_valid.shape) ``` # 3) Train the Model ``` #Data Augmentation data_generator = ImageDataGenerator( featurewise_center=False, featurewise_std_normalization=False, rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, zoom_range=.1, horizontal_flip=True) def cnn(input_shape, num_classes, l2_regularization=0.01): regularization = l2(l2_regularization) # base img_input = Input(input_shape) x = Conv2D(8, (3, 3), strides=(1, 1), kernel_regularizer=regularization, 
use_bias=False)(img_input) x = BatchNormalization()(x) x = Activation('relu')(x) x = Conv2D(8, (3, 3), strides=(1, 1), kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = Activation('relu')(x) # module 1 residual = Conv2D(16, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x) residual = BatchNormalization()(residual) x = SeparableConv2D(16, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(16, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x) x = layers.add([x, residual]) # module 2 residual = Conv2D(32, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x) residual = BatchNormalization()(residual) x = SeparableConv2D(32, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(32, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x) x = layers.add([x, residual]) # module 3 residual = Conv2D(64, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x) residual = BatchNormalization()(residual) x = SeparableConv2D(64, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(64, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x) x = layers.add([x, residual]) # module 4 residual = Conv2D(128, (1, 1), strides=(2, 2), padding='same', use_bias=False)(x) residual = BatchNormalization()(residual) x = SeparableConv2D(128, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = 
BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(128, (3, 3), padding='same', kernel_regularizer=regularization, use_bias=False)(x) x = BatchNormalization()(x) x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x) x = layers.add([x, residual]) x = Conv2D(num_classes, (3, 3), #kernel_regularizer=regularization, padding='same')(x) x = GlobalAveragePooling2D()(x) output = Activation('softmax',name='predictions')(x) model = Model(img_input, output) return model # model parameters/compilation model = cnn(input_shape, num_classes) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() # callbacks log_file_path = base_path + '_emotion_training.log' csv_logger = CSVLogger(log_file_path, append=False) early_stop = EarlyStopping('val_loss', patience=patience) reduce_lr = ReduceLROnPlateau('val_loss', factor=0.1, patience=int(patience/4), verbose=1) trained_models_path = base_path + '_cnn' model_names = trained_models_path + '.{epoch:02d}-{val_acc:.2f}.hdf5' model_checkpoint = ModelCheckpoint(model_names, 'val_loss', verbose=1, save_best_only=True) callbacks = [model_checkpoint, csv_logger, early_stop, reduce_lr] num_epochs = 20 #Train the model model.fit_generator(data_generator.flow(x_train, y_train, batch_size), steps_per_epoch=len(x_train) / batch_size, epochs=num_epochs, verbose=1, callbacks=callbacks, validation_data=(x_valid,y_valid)) ```
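The label encoding in step 2 relies on `tf.keras.utils.to_categorical`. As a sanity check, the same one-hot transform can be sketched in plain NumPy — `to_one_hot` is a hypothetical helper written for illustration, not a Keras function:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Map integer class labels to one-hot row vectors (illustrative stand-in
    for tf.keras.utils.to_categorical)."""
    labels = np.asarray(labels, dtype=int)
    encoded = np.zeros((labels.shape[0], num_classes), dtype=np.float32)
    encoded[np.arange(labels.shape[0]), labels] = 1.0
    return encoded

# 0 = woman, 1 = man, matching get_labels() above
y = to_one_hot([0, 1, 1, 0], num_classes=2)
print(y.shape)  # (4, 2)
```

Each row has exactly one 1.0 in the column of its class, which is the shape `categorical_crossentropy` expects.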
# SIT742: Modern Data Science **(Week 01: Programming Python)** --- - Materials in this module include resources collected from various open-source online repositories. - You are free to use, change and distribute this package. Prepared by **SIT742 Teaching Team** --- # Session 1B - Control Flow, File usage, and Advanced data ## Introduction Normally, *Python* executes a series of statement in exact top-down order. What if you want to change the flow how it works? As you might have guessed, in this prac, we will first look at **control flow** statements. We are going to practice on three control flow statements, i.e., **if**, **for** and **while**, to see how they can determine what statement is to be executed next in a program. Second, we will look at how to create, read, and write files in *Python*. We have obtained data via interation with users in previous prac. Now let us explore how to deal with files to get input of a program and write output that can be used later. Finally, we will learn about advanced data types in addition to the **strings** and **number** learnt before. We will examine **lists**, **tuples** and **dictionarys** used for a collection of related data. <a id = "cell_tuple"></a> ## Table of Content ### Part 1 Control Flow 1.1 [**If** statements](#cell_if) 1.2 [**For** statements](#cell_for) 1.3 [**While** statements](#cell_while) 1.4 [**Break** statements](#cell_break) 1.5 [Notes on Python 2](#cell_note) ### Part2 Using files 2.1 [Reading files](#cell_read) 2.2 [Writing files](#cell_write) ### Part3 Advanced data 3.1 [List](#cell_list) 3.2 [Tuple](#cell_tuple) 3.3 [Dictionary](#cell_dict) ## Part 1 Control flow <a id = "cell_if"></a> ### 1.1 **If** statements The **if** statement is used to check a condition: **if** the condition is true, we run a block of statements(**if-block**), **else** we process another block of statement(**else-block**). The **else** clause is optional. 
The condition is usually a boolean expression, which can have value of **True** or **False**. Often we have a block of statement inside either **if-block** or **else-block**. In this case, you need to especially pay attention to the indentation of the statements in the block. Indenting starts a block and unindenting ends it. As a general practice, Python style guide recommend using 4 white spaces for indenting a block, and not using tabs. If you get indentation error message from Python interpretor, you will need to check your code carefully. ``` x = 15 if x % 2 == 0: print('%d is even' % x) else: print('%d is odd' % x) print('This is always printed') ``` Try change **x** to even number and run the program again. The else-block in **if** statement is optional. If the **else** block is omitted, the statements in **if**-block are executed when the condition equal to **True**. Otherwise, the flow of execution continues to the statement after the **if** structure. Try the following code: ``` x = -2 if x < 0: print("The negative number %d is not valid here." % x) print("This is always printed") ``` What will be printed if the value of **x** is negative? **If** statement can also be nested within another. Further **if** structure can either be nested in **if-block** or **else-block**. Here is an example with another **if** structure nested in **else-block**. This example assume we have two integer variables, **x** and **y**. The code shows how we might decide how they are related to each other. ``` x = 10 y = 10 if x < y: print("x is less than y") else: if x > y: print("x is greater than y") else: print("x and y must be equal") ``` Here we can see that the indentation pattern can tell the Python interpretor exactly which **else** belong to which **if**. Python also provides an alternative way to write nested **if** statement. We need to use keyword **elif**. 
The above example is equivalent to:

```
x = 10
y = 10
if (x < y):
    print("x is less than y")
elif (x > y):
    print("x is greater than y")
else:
    print("x and y must be equal")
```

**elif** is an abbreviation of **else if**. With the above structure, each condition is checked in order. If one of them is **True**, the corresponding branch executes. Even if more than one condition is **True**, only the first **True** branch executes. There is no limit on the number of **elif** statements, but only a single final **else** statement is allowed, and it must be the last branch in the statement.

<a id = "cell_for"></a>

### 1.2 **For** statements

Computers are often used to automate repetitive tasks. Repeated execution of a sequence of statements is called iteration. Two language features provided by Python for iteration are the **while** and **for** statements. We first take a look at an example of a **for** statement:

```
for name in ["Joe", "Amy", "Brad", "Angelina", "Zuki"]:
    print("Hi %s Please come to my party on Saturday!" % name)
```

This example assumes we have some friends, and we would like to send them an invitation to our party. With all the names in the list, we can print a message for each friend. This is how the **for** statement works:

1. **name** in the **for** statement is called the loop variable, and the names in the square brackets form a **list** in Python. We will cover more details on lists later in this prac. For now, you just need to know how to use a simple list in a **for** loop.
2. The second line in the program is the **loop body**. All the statements in the loop body are indented.
3. On each iteration of the loop, the loop variable is updated to refer to the next item in the list. In the above case, the loop body is executed 5 times, and each time **name** refers to a different friend.
4. At the end of each execution of the body of the loop, Python returns to the **for** statement to handle the next item. This continues until there are no items left.
Then program execution continues at the next statement after the loop body.

One function commonly used in loop statements is **range()**. Let us first have a look at the following example:

```
for i in [0, 1, 2, 3, 4 ]:
    print( 'The count is %d' % i)
```

Actually, generating a sequence with a specific number of integers is a very common task, especially in **for** loops. For this purpose, Python provides the built-in **range()** function to generate a sequence of values. An alternative way of performing the above counting using **range()** is as follows.

```
for i in range(5):
    print( 'The count is %d' % i)
print('Good bye!')
```

Notice that **range(5)** generates a sequence of $5$ values starting with 0 instead of 1. In addition, 5 is not included in the sequence. Here is a note on the **range()** function: a strange thing happens if you just print a range:

```
range(5)
print(range(5))
```

In many ways the object returned by **range()** behaves as if it is a list, but in fact it isn't. It is an object which returns the successive items of the desired sequence when you iterate over it, but it doesn't really make the list, thus saving space. We say such an object is *iterable*. In the following example, **rangeA** is *iterable*. There are functions and constructs that draw successive items from such objects until the supply is exhausted. The **list()** function can be used to create lists from iterables:

```
rangeA = range(5)
list(rangeA)
```

In this way, we can print the list generated by **range(5)** to check the values closely. To count from 1 to 5, we need the following:

```
list(range(1, 6))
```

We can also add another parameter, **step**, to the **range()** function. For example, a step of **2** can be used to produce a list of even numbers. Look at the following examples. Think about what the output will be before you run the code to check your understanding.
```
list(range(1, 6))
list(range(0, 19, 2))
list(range(0, 20, 2))
list(range(10, 0, -1))
```

Let us return to the previous counting example. When **range()** generates the sequence of numbers, each number is assigned to the loop variable **i** in each iteration. Then the block of statements is executed for each value of **i**. In the above example, we just print the value in the block of statements.

<a id = "cell_while"></a>

### 1.3 **While** statements

The **while** statement provides a much more general mechanism for iteration. It allows you to repeatedly execute a block of statements as long as a condition is **True**. Similar to the **if** statement, the **while** statement uses a boolean expression to control the flow of execution. The body of **while** is repeated as long as the boolean expression evaluates to **True**. Let us see how the previous counting program can be implemented with a **while** statement.

```
i = 0
while (i < 6):
    print('The count is %d' % i)
    i = i + 1
print('Good bye!')
```

How the while statement works:

1. The **while** block consists of the print and increment statements. They are executed repeatedly until the count is no longer less than $6$. With each iteration, the current value of the index count is displayed and then increased by $1$.
2. Same as the **for** loop, this type of flow is also called a loop, since the **while** statement is executed repeatedly. Notice that if the condition is **False** the first time through the loop, the statements inside the loop are never executed. Try changing the first line into **i = 6**. What is the output?
3. In this example, we can prove that the loop terminates because **i** starts from $0$ and increases by $1$. Eventually, **i** will be greater than 5. When the condition becomes **False**, the loop stops.
4. Sometimes, we will have a loop that repeats forever. This is called an infinite loop.
Although this kind of loop might be useful sometimes, it is often caused by a programming mistake. Try changing the first two lines of the previous program into the following code. See what happens? Note that if you run the following cell, the code will run indefinitely. You will need to go to the menu: **Kernel->Restart**, and then restart the kernel. Also note that if you run such code in Python at command line or script mode, you will need to use **CTRL-C** to terminate the program.

```
i = 6
while i > 5 :
    print('The count is %d' % i)
    i = i + 1
print('Good bye!')
```

<a id = "cell_break"></a>

### 1.4 **Break** statements

The **break** statement can be used to break out of a loop statement. It can be used in both **while** loops and **for** loops. An alternative way of writing the previous counting example with a **break** statement is as follows:

```
i = 0
while True :
    print( 'The count is %d' % i)
    i = i + 1
    if i > 6:
        break
print('Good bye!')
```

In this program, we repeatedly print the value of **i** and increase it by 1 each time. We provide a special condition to stop the program by checking whether **i** is greater than 6. We then break out of the loop and continue executing the statement after the loop. Note that it is important to decide what the termination condition should be. We can see from the previous counting examples that the termination condition might be different in different loop structures.

<a id = "cell_note"></a>

### 1.5 Notes on **Python 2**

Use of the **range()** function:

1. One thing to be noted is: the use of **range()** in a for-loop looks the same in both *Python* 2 and 3.
2. While the function **range()** creates an *iterable*, which produces the sequence dynamically in *Python 3*, **range()** in *Python 2* creates a list. The use of iterable objects in *Python 3* is more memory-efficient. This is especially useful when you need a gigantic range.
3. As you have noticed, the statement **print(range(5))** in *Python 3* will not produce a list.
However, this statement is valid in *Python 2*. For example, the following can be used to check values of a range in \emph{Python} 2. Please also note that **range()** can be safely used in both versions in a **for**-loop. ## Part 2 Using files ### 2.1 Reading files You can open and use files for reading or writing by creating an object of the *file class*. The *mode* that is specified for the file opening decides what you can do with the file: read, write or both. Then the file object's **read()** or **write()** method can be used to read from or write to the file. Finally, when you are finished with the file, you call the **close()** method to tell Python that you are done using the file. Here is an example. You can download the data file **score.txt**, which includes data on students' score. The format of the data file is as follows: For Online platforms such as IBM Cloud, it is important for you to get familiar with the provided data storage or cloud data storage function. Alternatively, you might want to directly access the file, and load into your Notebook. ``` !pip install wget ``` Then you can download the file into GPFS file system. ``` import wget link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/score.txt' DataSet = wget.download(link_to_data) print(DataSet) ``` The following example read from the **.txt** file and display information on the screen. Please type the code and run it under script mode. Make sure **score.txt** are saved under your **data** folder. ``` # scorefile = open('https://raw.githubusercontent.com/tulip-lab/sit742/master/Jupyter/data/score.txt','r') scorefile = open('score.txt','r') for line in scorefile: value = line.split() name = value[0] id = value[1] score = value[2] print('%s with ID %s has a score of %s' % (name, id, score)) ``` How the program works: 1. The **open()** function is used to open a file. You need to specify the name of the file and the mode in which you want to open the file. 
The mode can be read mode ('r'), write mode ('w') or append mode ('a'). There are actually many more modes available. You can get more details by creating a cell and typing "open?". Please try this in your notebook. When we finish working on the file, we need to close the file using the **close()** method.
2. To process all of the data, we use a **for** loop to iterate over the lines of the file. The **line** variable is a string that is used to store the characters in each line.
3. We use the **split()** method to break each line into a list containing all the fields of interest. We can then take the values corresponding to **name**, **id** and **score** and print them in the sentence. To get each data item in a list, we use an index with the list, e.g. values[0] will return the item at position 0 in the list. Note that in Python the positions of items in a list start from $0$.

### 2.2 Writing to files

One of the most commonly performed data processing tasks is to read data from a file, manipulate it in some way and then write the resulting data out to a new data file to be used for other purposes later. For creating a new file used for writing, the same **open()** function is used. Instead of the 'r' mode, the 'w' mode is used as the parameter. When we open a file for writing, a new, empty file with the specified name is created and ready to accept our data.

As an example, consider the **score.txt** data again. Assume we have a request to remove the name information in the file for privacy reasons. Therefore, the output file needs to contain the student ID and the score, separated by a comma. Here is how we can generate the required file.

```
infile = open('score.txt', 'r')
outfile = open('id.txt', 'w')

for line in infile:
    values = line.split()
    id = values[1]
    score = values[2]
    dataline = id + ',' + score
    outfile.write(dataline + '\n')

infile.close()
outfile.close()
```

How the program works:

1. We add another **open()** call with 'w' mode.
The filename **id.txt** is chosen to store the data. If the file does not exist, it will be created. However, if the file does exist, it will be reinitialized and emptied, and any previous contents will be lost.
2. We have the variable **dataline** to store what needs to be written to the file. If you like, you can add a line **print(dataline)** to check the string value. We then call the **write()** method to write **dataline** into the file.
3. There is one additional part we need to add when writing to a file. The newline character **\n** needs to be concatenated to the end of the line. Otherwise, the text will be all in one continuous line.
4. The file needs to be closed at the end.

<a id = "cell_list"></a>

## Part 3 Advanced Data

### 3.1 List

**List is a sequence**

Like a string, a **list** is a sequence of values. The values in a list can be any type. We call the values **items** of the list. To create a list, enclose the items in square brackets. For example,

```
shoplist = ['apple', 'mango', 'carrot', 'banana']
l = [10, 20, 30, 40]
empty = [ ]   # Initialize an empty list
shoplist
l
empty
```

The elements of a list do not have to be the same type. Another list can also be nested inside a list. To access an element of a list, use the bracket operator to obtain the value available at that index. Note that the indices of a list start at $0$. You can also use a negative value as an index if you count from the right. For example, the negative index of the last item is $-1$. Try the following examples:

```
l = [10, 20, 30, 40]
l[2]
l[-1]
l[-3]
```

Here is an example of a nested list:

```
l = ['apple', 2.0, 5, [10, 20]]
l[1]
l[3]
l[3][1]
```

Unlike strings, lists are mutable, which means they can be altered. We use the bracket operator on the left side of an assignment to assign a value to a list item.

```
l = [10, 20, 30, 40]
l[1] = 200
l
```

**List operations**

**in** is used to perform the membership operation.
The expression evaluates to **True** if the value exists in the list, and to **False** otherwise.

```
shoplist = ['apple', 'mango', 'carrot', 'banana']
'apple' in shoplist
'rice' in shoplist
```

Similarly, the **in** operator also applies to the **string** type. Here are some examples:

```
'a' in 'banana'
'seed' in 'banana'   # Test if 'seed' is a substring of 'banana'
```

'**+**' is used for the concatenation operation, while '**\***' repeats a list a given number of times.

```
[10, 20, 30, 40 ] + [50, 60]
[50, 60]*3
```

**List slices**

The slicing operation allows you to retrieve a slice of the list, i.e. a part of the sequence. The slicing operation uses square brackets to enclose an optional pair of numbers separated by a colon. Again, to count the position of items from the left (first item), start from $0$. If you count the position from the right (last item), start from $-1$.

```
l = [1, 2, 3, 4]
l[1:3]   # From position 1 to position 3 (excluded)
l[:2]    # From the beginning to position 2 (excluded)
l[-2:]   # From the second-to-last position to the end
```

If you omit both the first and the second indices, the slice is a copy of the whole list.

```
l[:]
```

Since lists are mutable, the above expression is often useful to make a copy before modifying the original list.

```
l = [1, 2, 3, 4]
l_org = l[:]
l[0] = 8
l
l_org   # the original list is unchanged
```

**List methods**

The methods most often applied to a list include:

- append()
- len()
- sort()
- split()
- join()

The **append()** method adds a new element to the end of a list.

```
l = [1, 2, 3, 4]
l
l.append(5)
l.append([6, 7])   # list [6, 7] is nested in list l
l
```

The **len()** function returns the number of items in a list.

```
l = [1, 2, 3, 4, 5]
len(l)
# A list nested in another list is counted as a single item
l = [1, 2, 3, 4, 5, [6, 7]]
len(l)
```

**sort()** arranges the elements of the list from low to high.
```
shoplist = ['apple', 'mango', 'carrot', 'banana']
shoplist.sort()
shoplist
```

It is worth noting that the **sort()** method modifies the list in place and does not return any value. Please try the following:

```
shoplist = ['apple', 'mango', 'carrot', 'banana']
shoplist_sorted = shoplist.sort()
shoplist_sorted   # No value is returned
```

There is an alternative way of sorting a list. The built-in function **sorted()** returns a sorted list and keeps the original one unchanged.

```
shoplist = ['apple', 'mango', 'carrot', 'banana']
shoplist_sorted = sorted(shoplist)   # sorted() returns a new list
shoplist_sorted
shoplist
```

There are two frequently-used string methods that convert between lists and strings. First, the **split()** method is used to break a string into words:

```
s = 'I love apples'
s.split(' ')
s = 'spam-spam-spam'
# A delimiter '-' is specified here. It is used as the word boundary
s.split('-')
```

Second, **join()** is the inverse of **split()**. It takes a list of strings and concatenates the elements.

```
l = ['I', 'love', 'apples']
s = ' '.join(l)
s
```

How it works: since **join()** is a string method, you have to invoke it on the *delimiter*. In this case, the delimiter is a space character, so **' '.join()** puts a space between words. The list **l** is passed to **join()** as a parameter. For more information on list methods, type "help(list)" in your notebook.

**Traverse a list**

The most common way to traverse the items of a list is with a **for** loop. Try the following code:

```
shoplist = ['apple', 'mango', 'carrot', 'banana']
for item in shoplist:
    print(item)
```

This works well if you only need to read the items of the list. However, you will need to use indices if you want to update the elements. In this case, you need to combine the functions **range()** and **len()**.
``` l = [2, 3, 5, 7] for i in range(len(l)): l[i] = l[i] * 2 print(l) ``` How it works: **len()** returns the number of items in the list, while **range(n)** returns a list from 0 to n - 1. By combining function **len()** and **range()**, **i** gets the index of the next element in each pass through the loop. The assignment statement then uses **i** to perform the operation. <a id = "cell_tuple"></a> ### 3.2 Tuple ** Tuple are immutable ** A **tuple** is also a sequence of values, and can be any type. Tuples and lists are very similar. The important difference is that tuples are immutable, which means they can not be changed. Tuples is typically used to group and organizing data into a single compound value. For example, ``` year_born = ('David Hilton', 1995) year_born ``` To define a tuple, we use a list of values separated by comma. Although it is not necessary, it is common to enclose tuples in parentheses. Most list operators also work on tuples. The bracket operator indexes an item of tuples, and the slice operator works in similar way. Here is how to define a tuple: ``` t = ( ) # Empty tuple t t = (1) type(t) # Its type is int, since no comma is following t = (1,) # One item tuple; the item needs to be followed by a comma type(t) ``` Here is how to access elements of a tuple: ``` t = ('a', 'b', 'c', 'd') t[0] t[1:3] ``` But if you try to modify the elements of the tuple, you get an error. ``` t = ('a', 'b', 'c', 'd') t[1] = 'B' ``` ### Tuple assignment Tuple assignment allows a tuple of variables on the left of an assignment to be assigned values from a tuple on the right of the assignment. (We already saw this type of statements in the previous prac) For example, ``` t = ('David', '0233', 78) (name, id, score) = t name id score ``` Naturally, the number of variables on the left and the number of values on the right have to be the same.Otherwise, you will have a system error. 
```
(a, b, c, d) = (1, 2, 3)
```

### Lists and tuples

It is common to have a list of tuples. A **for** loop can be used to traverse the data. For example,

```
t = [('David', 90), ('John', 88), ('James', 70)]
for (name, score) in t:
    print(name, score)
```

<a id = "cell_dict"></a>

### 3.3 Dictionary

A **dictionary** is like an address book where you can find the address or contact details of a person by knowing only his/her name. The way of achieving this is to associate **keys** (names) with **values** (details). Note that the keys in a dictionary must be unique. Otherwise we are not able to locate the correct information through the key. Also worth noting is that we can only use immutable objects (strings, tuples) for the keys, but we can use either immutable or mutable objects for the values of the dictionary. This means we can use either a string, a tuple or a list for dictionary values. The following example defines a dictionary:

```
dict = {'David': 70, 'John': 60, 'Mike': 85}
dict['David']
dict['Anne'] = 92   # add a new item to the dictionary
dict
```

**Traverse a dictionary**

The key-value pairs in a dictionary are **not** guaranteed to be in any particular order. The following example uses a **for** loop to traverse a dictionary.

```
dict = {'David': 70, 'John': 60, 'Amy': 85}
for key in dict:
    print(key, dict[key])
```

However, we can sort the keys of the dictionary before using it if necessary. The following example sorts the keys and stores the result in a list **sortedKeys**. The **for** loop then iterates through the list **sortedKeys**. The items in the dictionary can then be accessed via the names in alphabetical order. Note that the dictionary's **keys()** method is used to return all the available keys in the dictionary.

```
dict = {'David': 70, 'John': 60, 'Amy': 85}
sortedKeys = sorted(dict.keys())
for key in sortedKeys:
    print(key, dict[key])
```
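As a small exercise combining **split()** and dictionaries, the following sketch counts how often each word occurs in a string. This is an illustrative example in the spirit of the prac, not one of its original cells:

```python
text = 'the quick brown fox jumps over the lazy dog the end'

# Build a word -> count dictionary
counts = {}
for word in text.split(' '):
    if word in counts:
        counts[word] = counts[word] + 1
    else:
        counts[word] = 1

# Print the counts with the keys in alphabetical order
for key in sorted(counts.keys()):
    print(key, counts[key])
```

Here the words themselves are the (immutable, unique) keys, and the mutable integer counts are the values, matching the key/value rules described above.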
# Periodic Signals *This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* ## Spectrum Periodic signals are an import class of signals. Many practical signals can be approximated reasonably well as periodic functions. The latter holds often when considering only a limited time-interval. Examples for periodic signals are a superposition of harmonic signals, signals captured from vibrating structures or rotating machinery, as well as speech signals or signals from musical instruments. The spectrum of a periodic signal exhibits specific properties which are derived in the following. ### Representation A [periodic signal](https://en.wikipedia.org/wiki/Periodic_function) $x(t)$ is a signal that repeats its values in regular periods. It has to fulfill \begin{equation} x(t) = x(t + n \cdot T_\text{p}) \end{equation} for $n \in \mathbb{Z}$ where its period is denoted by $T_\text{p} > 0$. A signal is termed *aperiodic* if is not periodic. One period $x_0(t)$ of a periodic signal is given as \begin{equation} x_0(t) = \begin{cases} x(t) & \text{for } 0 \leq t < T_\text{p} \\ 0 & \text{otherwise} \end{cases} \end{equation} A periodic signal can be represented by [periodic summation](https://en.wikipedia.org/wiki/Periodic_summation) of one period $x_0(t)$ \begin{equation} x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t - \mu T_\text{p}) \end{equation} which can be rewritten as convolution \begin{equation} x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t) * \delta(t - \mu T_\text{p}) = x_0(t) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \end{equation} using the sifting property of the Dirac impulse. It can be concluded that a periodic signal can be represented by one period $x_0(t)$ of the signal convolved with a series of Dirac impulses. 
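The periodic-summation representation can also be illustrated numerically: sample one period $x_0(t)$ and extend it by indexing modulo the period length, so that $x(t) = x_0(t \bmod T_\text{p})$. This is an illustrative sketch added here, not a cell from the original notebook:

```python
import numpy as np

Tp = 1.0                       # period T_p
fs = 8                         # samples per period
t0 = np.arange(fs) / fs        # sample times of one period
x0 = np.cos(2 * np.pi * t0)    # one period of cos(omega_0 t) with omega_0 = 2*pi/Tp

# Periodic extension over 3 periods: x[n] = x0[n mod fs]
n = np.arange(3 * fs)
x = x0[n % fs]

# The signal repeats with period Tp: x(t) == x(t + Tp)
assert np.allclose(x[:fs], x[fs:2 * fs])
```

Indexing modulo the period length is the discrete counterpart of convolving one period with the series of Dirac impulses.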
**Example**

The cosine signal $x(t) = \cos (\omega_0 t)$ has a periodicity of $T_\text{p} = \frac{2 \pi}{\omega_0}$. One period is given as

\begin{equation}
x_0(t) = \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{1}{2} \right)
\end{equation}

Introducing this into the above representation of a periodic signal yields

\begin{align}
x(t) &= \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{1}{2} \right) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \\
&= \cos (\omega_0 t) \sum_{\mu = - \infty}^{\infty} \text{rect} \left( \frac{t}{T_\text{p}} - \frac{1}{2} - \mu \right) \\
&= \cos (\omega_0 t)
\end{align}

since the sum over the shifted rectangular signals is equal to one.

### The Dirac Comb

The sum of shifted Dirac impulses, as used above to represent a periodic signal, is known as the [*Dirac comb*](https://en.wikipedia.org/wiki/Dirac_comb). The Dirac comb is defined as

\begin{equation}
{\bot \!\! \bot \!\! \bot}(t) = \sum_{\mu = - \infty}^{\infty} \delta(t - \mu)
\end{equation}

It is used for the representation of periodic signals and for the modeling of ideal sampling. In order to compute the spectrum of a periodic signal, the Fourier transform of the Dirac comb $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ is derived in the following. Fourier transformation of the left- and right-hand side of the above definition yields

\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} e^{-j \mu \omega}
\end{equation}

The exponential function $e^{-j \mu \omega}$ for $\mu \in \mathbb{Z}$ is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb is also periodic with a period of $2 \pi$. Convolving a [rectangular signal](../notebooks/continuous_signals/standard_signals.ipynb#Rectangular-Signal) with the Dirac comb results in

\begin{equation}
{\bot \!\! \bot \!\! \bot}(t) * \text{rect}(t) = 1
\end{equation}

Fourier transformation of the left- and right-hand side yields

\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} \cdot \text{sinc}\left(\frac{\omega}{2}\right) = 2 \pi \delta(\omega)
\end{equation}

For $\text{sinc}( \frac{\omega}{2} ) \neq 0$, which is equivalent to $\omega \neq 2 n \cdot \pi$ with $n \in \mathbb{Z} \setminus \{0\}$, this can be rearranged as

\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = 2 \pi \, \delta(\omega) \cdot \frac{1}{\text{sinc}\left(\frac{\omega}{2}\right)} = 2 \pi \, \delta(\omega)
\end{equation}

Note that the [multiplication property](../continuous_signals/standard_signals.ipynb#Dirac-Impulse) of the Dirac impulse and $\text{sinc}(0) = 1$ have been used to derive the last equality. The Fourier transform is now known for the interval $-2 \pi < \omega < 2 \pi$. It has already been concluded that the Fourier transform is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb can be derived by periodic continuation as

\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \delta(\omega - 2 \pi \mu) = \sum_{\mu = - \infty}^{\infty} \delta \left( \frac{\omega}{2 \pi} - \mu \right)
\end{equation}

The last equality follows from the scaling property of the Dirac impulse. The Fourier transform can now be rewritten in terms of the Dirac comb

\begin{equation}
\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)
\end{equation}

The Fourier transform of a Dirac comb with unit distance between the Dirac impulses is a Dirac comb with a distance of $2 \pi$ between the Dirac impulses, which are weighted by $2 \pi$.

**Example**

The following example computes the truncated series

\begin{equation}
X(j \omega) = \sum_{\mu = -M}^{M} e^{-j \mu \omega}
\end{equation}

as an approximation of the Fourier transform $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ of the Dirac comb. For this purpose the sum is defined and plotted in `SymPy`.

```
%matplotlib inline
import sympy as sym
sym.init_printing()

mu = sym.symbols('mu', integer=True)
w = sym.symbols('omega', real=True)

M = 20
X = sym.Sum(sym.exp(-sym.I*mu*w), (mu, -M, M)).doit()
sym.plot(X, xlabel='$\omega$', ylabel='$X(j \omega)$', adaptive=False, nb_of_points=1000);
```

**Exercise**

* Change the summation limit $M$. How does the approximation change? Note: Increasing $M$ above a certain threshold may lead to numerical instabilities.

### Fourier Transform

In order to derive the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$ with period $T_\text{p}$, the signal is represented by one period $x_0(t)$ and the Dirac comb. Rewriting the above representation of a periodic signal in terms of a sum of Dirac impulses by noting that $\delta(t - \mu T_\text{p}) = \frac{1}{T_\text{p}} \delta(\frac{t}{T_\text{p}} - \mu)$ yields

\begin{equation}
x(t) = x_0(t) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)
\end{equation}

The Fourier transform is derived by application of the [convolution theorem](../fourier_transform/theorems.ipynb#Convolution-Theorem)

\begin{align}
X(j \omega) &= X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \\
&= \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)
\end{align}

where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period of the periodic signal. From the last equality it can be concluded that the Fourier transform of a periodic signal consists of a series of weighted Dirac impulses. These Dirac impulses are equally distributed on the frequency axis $\omega$ at an interval of $\frac{2 \pi}{T_\text{p}}$.
The weights of the Dirac impulses are given by the values of the spectrum $X_0(j \omega)$ of one period at the locations $\omega = \mu \frac{2 \pi}{T_\text{p}}$. Such a spectrum is termed a *line spectrum*.

### Parseval's Theorem

[Parseval's theorem](../fourier_transform/theorems.ipynb#Parseval%27s-Theorem) relates the energy of a signal in the time domain to its spectrum. The energy of a periodic signal is in general not defined, because its energy is unlimited whenever the energy of one period is non-zero. As an alternative, the average power of a periodic signal $x(t)$ is used. It is defined as

\begin{equation}
P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt
\end{equation}

Introducing the Fourier transform of a periodic signal into [Parseval's theorem](../fourier_transform/theorems.ipynb#Parseval%27s-Theorem) yields

\begin{equation}
\frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt = \frac{1}{T_\text{p}^2} \sum_{\mu = - \infty}^{\infty} \left| X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \right|^2
\end{equation}

The average power of a periodic signal can hence be calculated either in the time domain, by integrating over the squared magnitude of one period, or in the frequency domain, by summing up the squared magnitudes of the weights of the Dirac impulses of its Fourier transform.

### Fourier Transform of the Pulse Train

The [pulse train](https://en.wikipedia.org/wiki/Pulse_wave) is commonly used for power control using [pulse-width modulation (PWM)](https://en.wikipedia.org/wiki/Pulse-width_modulation). It is constructed from a periodic summation of a rectangular signal $x_0(t) = \text{rect} (\frac{t}{T} - \frac{1}{2})$

\begin{equation}
x(t) = \text{rect} \left( \frac{t}{T} - \frac{1}{2} \right) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)
\end{equation}

where $0 < T < T_\text{p}$ denotes the width of the pulse and $T_\text{p}$ its periodicity.
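As a consistency check, Parseval's theorem can be evaluated numerically for this pulse train: summing the squared magnitudes of the Fourier series coefficients $X_0(j \mu \frac{2 \pi}{T_\text{p}}) / T_\text{p}$ over $\mu$ must reproduce the average power obtained by integrating $|x(t)|^2$ directly, which is the duty cycle $\frac{T}{T_\text{p}}$. A NumPy sketch ($T$, $T_\text{p}$ and the truncation limit are illustrative choices):

```python
import numpy as np

T, Tp = 2.0, 5.0                 # pulse width and period (illustrative values)
mu = np.arange(-100000, 100001)  # truncated summation index

# Fourier series coefficient X0(j mu 2 pi / Tp) / Tp has magnitude
# (T / Tp) * |sinc(mu pi T / Tp)|; note np.sinc(x) = sin(pi x) / (pi x)
P_freq = np.sum((T / Tp * np.sinc(mu * T / Tp)) ** 2)

# average power from direct integration of |x(t)|^2 over one period
P_time = T / Tp

print(P_freq, P_time)  # both approximately 0.4
```

The truncated sum converges slowly (the terms decay like $1/\mu^2$), which is why a large truncation limit is used.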
Its usage for power control becomes evident when calculating the average power of the pulse train

\begin{equation}
P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} | x(t) |^2 dt = \frac{T}{T_\text{p}}
\end{equation}

The Fourier transform of one period $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ is derived by applying the scaling and shift theorems of the Fourier transform to the [Fourier transform of the rectangular signal](../fourier_transform/definition.ipynb#Transformation-of-the-Rectangular-Signal)

\begin{equation}
X_0(j \omega) = e^{-j \omega \frac{T}{2}} \cdot T \, \text{sinc} \left( \frac{\omega T}{2} \right)
\end{equation}

from which the spectrum of the pulse train follows by application of the above formula for the Fourier transform of a periodic signal

\begin{equation}
X(j \omega) = \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} e^{-j \mu \pi \frac{T}{T_\text{p}}} \cdot T \, \text{sinc} \left( \mu \pi \frac{T}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)
\end{equation}

The weights of the Dirac impulses are defined in `SymPy` for fixed values $T$ and $T_\text{p}$

```
mu = sym.symbols('mu', integer=True)

T = 2
Tp = 5
X_mu = sym.exp(-sym.I * mu * sym.pi * T/Tp) * T * sym.sinc(mu * sym.pi * T/Tp)
X_mu
```

The weights of the Dirac impulses are plotted with [`matplotlib`](http://matplotlib.org/index.html), a Python plotting library. The library expects the values of the function to be plotted at a series of sampling points. In order to create these, the function [`sympy.lambdify`](http://docs.sympy.org/latest/modules/utilities/lambdify.html?highlight=lambdify#sympy.utilities.lambdify) is used, which numerically evaluates a symbolic function at given sampling points. The resulting plot illustrates the positions and weights of the Dirac impulses.
``` import numpy as np import matplotlib.pyplot as plt Xn = sym.lambdify(mu, sym.Abs(X_mu), 'numpy') n = np.arange(-15, 15) plt.stem(n*2*np.pi/Tp, Xn(n)) plt.xlabel('$\omega$') plt.ylabel('$|X(j \omega)|$'); ``` **Exercise** * Change the ratio $\frac{T}{T_\text{p}}$. How does the spectrum of the pulse train change? * Can you derive the periodicity $T_\text{p}$ of the signal from its spectrum? * Calculate the average power of the pulse train in the frequency domain by applying Parseval's theorem. **Copyright** The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
# Homework 07

### Preparation...

Run this code from the lecture to be ready for the exercises below!

```
import glob
import os.path

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model, ensemble, neural_network
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from pathlib import Path

CONFIG_FILE = '../entsoe-data.config'

if not os.path.exists(CONFIG_FILE):
    download_dir = input('Path to ENTSO-E data folder: ')

    if not os.path.isdir(download_dir):
        raise RuntimeError(f'Invalid download_dir, please run cell again: {download_dir}')

    with open(CONFIG_FILE, 'w') as f:
        f.write(download_dir)
else:
    with open(CONFIG_FILE) as f:
        download_dir = f.read()

# Clear the output after this cell if you want to avoid having your path in the notebook (or execute it twice)!

def read_single_csv_entso_e(file):
    return pd.read_csv(file, sep='\t', encoding='utf-16', parse_dates=["DateTime"])

def load_complete_entso_e_data(directory):
    pattern = Path(directory) / '*.csv'
    files = glob.glob(str(pattern))

    if not files:
        raise ValueError(f"No files found when searching in {pattern}, wrong directory?")

    print(f'Concatenating {len(files)} csv files...')

    each_csv_file = [read_single_csv_entso_e(file) for file in files]
    data = pd.concat(each_csv_file, ignore_index=True)

    data = data.sort_values(by=["AreaName", "DateTime"])
    data = data.set_index("DateTime")

    print("Loading done.")

    return data

power_demand = load_complete_entso_e_data(download_dir)

def get_hourly_country_data(data, country):
    ret_data = data[data["AreaName"] == country].interpolate()
    #ret_data = ret_data.set_index("DateTime")
    ret_data = ret_data.resample("1h").mean().interpolate()
    return ret_data

power_demand_at_hourly = get_hourly_country_data(power_demand, "Austria")["2015-01-01":"2019-12-31"]
```

## Exercise 1

Explain the following terms:

**Input feature:** < your explanation goes here >

**Output feature:** < your explanation goes here >

**Fit a function to data:** < your explanation goes here >

**Training data:** < your explanation goes here >

**Test data:** < your explanation goes here >

## Exercise 2

In lecture07 we created a plot of the ratio of actual load and predicted load for Austria step by step (Exercise 04). Now put all of this together in one function which takes one parameter `country` as input and calculates and plots the figure of Exercise 04 for this country! The model should be trained on 2015-2019 data and then you should predict for 2020 and compare it to observations. Also do a training/test split and print the R2 for both datasets.

Apply the function to the following countries and show the results in one plot: Austria, Germany, Switzerland, Italy, Spain, Sweden, United Kingdom.

(1) Print the country name. Get the data for the specific country using ```get_hourly_country_data``` from the lecture and extract two periods, i.e. 2015-2019 and 2020, in two separate dataframes.

(2) Define X (the input features, i.e. the indicators for time) and Y (i.e. the output feature, the electricity load). Observe that for training, we use the period 2015-2019.

(3) Do a train/test split.

(4) Fit the input features to the output feature using a ```RandomForestRegressor```.

(5) Predict the output with the training data and the test data and compute the R^2 for both!

(6) Print the R^2.

(7) Create a new variable ```X_2020``` which contains the input features for the year 2020.

(8) Predict with your model the load for 2020.

(9) Assign your prediction back to the dataframe in a new column and calculate the monthly mean for prediction and for observed load. You might need to copy the dataframe first by doing something like `power_demand_hourly = power_demand_hourly.copy()` (otherwise it might be just a slice of the complete time range and then you can't add a column for some rows only).

(10) Plot the ratio of load and prediction. With ```label=country```, you can add a label to your plot for making a legend.

(11) Execute the function for the following countries: Austria, Germany, Switzerland, Italy, Spain, Sweden, United Kingdom.

(12) After calling the functions, use ```plt.legend()``` to show a legend.

## Exercise 3

Answer the following questions:

(1) Which country had the strongest decline in electricity consumption?

< your answer goes here >

(2) For which country does the fit work best?

< your answer goes here >

(3) Where is the difference of R2 between training data and test data the largest? What does that mean?

< your answer goes here >

(4) Look into the data of the country with the largest difference in the R2 of the training and the test data. Can you explain what is happening there? Might this affect our model?

< your answer goes here >

## Exercise 4

The difference between model prediction and actual observation may help understanding how people behaved during the lockdown. In this exercise, you should come up with your own hypothesis of how people behaved and how this affected power consumption. You may, e.g., look into demand on different weekdays or in different hours. Once you have a hypothesis and a theory why this hypothesis may be valid, test it with the model: is your hypothesis covered by what you observe in the load data?

## Exercise 5

Download ERA5 temperature data for the next lecture. First install the necessary dependencies `xarray` and `cdsapi`:

```
conda install --yes xarray
conda install --yes -c conda-forge cdsapi
```

The [Copernicus Climate Data Store](https://cds.climate.copernicus.eu/) provides [reanalysis climate data](https://cds.climate.copernicus.eu/cdsapp#!/search?type=dataset&keywords=((%20%22Product%20type:%20Reanalysis%22%20))). We are going to download [ERA5](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-land?tab=form) data and use the [temperature 2m above ground values](https://apps.ecmwf.int/codes/grib/param-db?id=167).

Register for the CDS API and install the API key by following [this guide](https://cds.climate.copernicus.eu/api-how-to). You don't need to run `pip install cdsapi`, this has been done in the cell above already using conda.

```
import cdsapi

c = cdsapi.Client()

# Add the path to the lecture repository here:
PATH_TO_LECTURE_REPO = '../..'

if not os.path.isdir(Path(PATH_TO_LECTURE_REPO) / 'lecture00-introduction'):
    raise RuntimeError(f"Wrong path to lecture repository: PATH_TO_LECTURE_REPO = {PATH_TO_LECTURE_REPO}")
```

We'll download data from 2015 to 2020 in a bounding box which covers all countries we used so far for our analysis. To make the download a bit faster, we'll use a [0.5° grid](https://confluence.ecmwf.int/display/CKB/ERA5%3A+Web+API+to+CDS+API) instead of the 0.1° grid. This will download approximately 500MB. The download might take a couple of hours, because the data is prepared on their servers before it can be downloaded.

```
filename = Path(PATH_TO_LECTURE_REPO) / 'data' / 'temperatures_era5.nc'

north, west, south, east = 70., -13.5, 35.5, 24.5

c.retrieve(
    'reanalysis-era5-land',
    {
        'format': 'netcdf',
        'variable': '2m_temperature',
        'area': [north, west, south, east],
        'grid': [0.5, 0.5],  # grid in 0.5deg steps in longitude/latitude
        'day': [f"{day:02d}" for day in range(1, 32)],
        'time': [f"{hour:02d}:00" for hour in range(24)],
        'month': [f"{month:02d}" for month in range(1, 13)],
        'year': [str(year) for year in range(2015, 2021)],
    },
    f"{filename}.part")

# this prevents you from accidentally using broken files:
os.rename(f"{filename}.part", filename)
```

## Exercise 6

Load the file downloaded in Exercise 5 and plot the temperature for one location. This is also a test if the download was successful.

To load the file import the library `xarray`. Typically it is imported by using `import xarray as xr`. Then load the file using the command `xr.load_dataset(filename)`. Check the type of the return value.

Then select the data variable `t2m` (temperature at 2 m) and select the values for `longitude=16.5` and `latitude=48` by using `temperatures_dataset.t2m.sel(longitude=16.5, latitude=48.)`. Then plot the result by calling `.plot()` on the resulting object. Does the result look reasonable?
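The selection pattern described in this exercise can be tried out without the large ERA5 download on a tiny synthetic dataset with the same layout; the dimension names match the real file, but all coordinate and temperature values below are illustrative:

```python
import numpy as np
import xarray as xr

# tiny stand-in for the ERA5 file: dims (time, latitude, longitude), t2m in Kelvin
times = np.arange("2015-01-01T00", "2015-01-01T04", dtype="datetime64[h]")
lats = np.array([47.5, 48.0, 48.5])
lons = np.array([16.0, 16.5, 17.0])
t2m = 273.15 + 5.0 * np.ones((len(times), len(lats), len(lons)))

temperatures_dataset = xr.Dataset(
    {"t2m": (("time", "latitude", "longitude"), t2m)},
    coords={"time": times, "latitude": lats, "longitude": lons},
)

# select one grid cell, exactly as described in the exercise
series = temperatures_dataset.t2m.sel(longitude=16.5, latitude=48.0)
print(series.values)  # temperature time series at that location
# series.plot()  # in a notebook this draws the time series
```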
``` import torch import numpy as np import matplotlib.pyplot as plt import os import sys; sys.path.append("../src") from models.cgans import AirfoilAoACEGAN, AirfoilAoAGenerator from train_final_cebgan import read_configs, assemble_new_gan from utils.dataloader import AirfoilDataset, NoiseGenerator from torch.utils.data import DataLoader ``` # Load Trained CEBGAN ``` device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') save_dir = '../saves/final/' checkpoint = 'cebgan14999.tar' base_dir = "../src/" dis_cfg, gen_cfg, egan_cfg, cz, noise_type = read_configs('cebgan', base_dir=base_dir) batch = 4096 egan = assemble_new_gan(dis_cfg, gen_cfg, egan_cfg, device=device) egan.load(os.path.join(save_dir, checkpoint), train_mode=False) ### LOAD TEST DATA ### device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') inp_paras = np.load('../data/inp_paras_995.npy').astype(np.float32) mean_std = (inp_paras.mean(0), inp_paras.std(0)) noise_gen = NoiseGenerator(batch, sizes=cz, device=device, noise_type=noise_type, output_prob=True) ``` # Test SLL on Predictions Correlation between Surrogate Log Likelihood and Airfoil Performance ## Surrogate Log Likelihood ``` inp_paras_org = np.load('../data/inp_paras_347.npy') inp_paras_250 = np.load('../data/inp_paras_250.npy') index = [] for each in inp_paras_250: index.append(np.where((inp_paras_org == each).prod(-1))) index = np.concatenate(index).squeeze() print(index.shape) airfoils_pred = np.load('../data/pred_cebgan/batch/new/airfoils_pred.npy')[index] aoas_pred = np.load('../data/pred_cebgan/batch/new/aoas_pred.npy')[index] inp_paras_pred = np.repeat(np.expand_dims(np.load('../data/inp_paras_250.npy'), 1), 10, axis=1) predictions = AirfoilDataset( inp_paras_pred.reshape(-1, 3), airfoils_pred.reshape(-1, 192, 2), aoas_pred.reshape(-1, 1), inp_mean_std=mean_std, device=device) pred_dataloader = DataLoader(predictions, batch_size=1, shuffle=False) ll = egan.surrogate_ll(pred_dataloader, 
noise_gen).cpu().detach().numpy() ``` ## Performance Data ``` ll_pred = ll.reshape(250, -1) rs_s = [np.load('../results/efys_vad_cebgan_mul{}.npy'.format(i+1)) for i in range(10)] rs = np.hstack(rs_s) plt.hist(ll_pred.reshape(-1), bins=30) plt.show() plt.hist(rs.reshape(-1), bins=20, range=[0, 2000]) plt.show() ``` ## Evaluate Correlation Coefficient ``` cre = [] for l, r in zip(ll_pred, rs): cre.append(np.corrcoef(l[r>0], r[r>0])[0,1]) plt.hist(cre, range=(-1., 1.), bins=16) plt.title('Correlation between SLL and $C_l/C_d$', fontsize='large') plt.xlabel('Pearson Correlation Coefficient') plt.ylabel('Number of Trials') plt.savefig('cor.svg') plt.show() for i in range(6): print((np.array(cre) >= i*0.1).sum() / 250) ``` # Scatter with Normalization ``` def normalize(x): mean = np.mean(x, axis=1, keepdims=True) std = np.std(x, axis=1, keepdims=True) return (x - mean) / std sll_n = normalize(ll_pred) prf_n = normalize(rs) for i in range(250): plt.scatter(sll_n[i], prf_n[i], c='black', s=10, alpha=0.4) plt.title('Standardized SLL vs Standardized $C_l/C_d$', fontsize='large') plt.xlabel('Standardized Surrogate Log Likelihood') plt.ylabel('Standardized $C_l/C_d$') plt.plot([-3.5, 3.5],[-3.5, 3.5], 'r--') plt.xlim(-3.5, 3.5) plt.ylim(-3.5, 3.5) plt.gca().set_aspect('equal', adjustable='box') plt.savefig('scatter.svg') plt.show() ```
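The `normalize` helper used before the scatter plot standardizes each trial (row) to zero mean and unit variance; a minimal self-contained check of that behavior (the sample array is illustrative):

```python
import numpy as np

def normalize(x):
    # per-row standardization, as used for the scatter plot above
    mean = np.mean(x, axis=1, keepdims=True)
    std = np.std(x, axis=1, keepdims=True)
    return (x - mean) / std

x = np.array([[1.0, 2.0, 3.0, 4.0],
              [10.0, 10.0, 30.0, 30.0]])
z = normalize(x)

print(z.mean(axis=1))  # each row now has zero mean
print(z.std(axis=1))   # ... and unit standard deviation
```

Standardizing per trial makes the SLL and $C_l/C_d$ values comparable across designs before they are pooled into one scatter plot.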
# Single Shot Object Detection

SSD (Single Shot Multi-box Detection) detects objects in images using a single deep neural network. This tutorial uses a model provided by [TensorFlow](https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/android/README.md).

```
import (
    "log"

    "github.com/mattn/go-tflite"
)
```

Load the model.

```
model := tflite.NewModelFromFile("detect.tflite")
if model == nil {
    log.Fatal("cannot load model")
}
```

Create the interpreter and allocate tensors.

```
interpreter := tflite.NewInterpreter(model, nil)
interpreter.AllocateTensors()

input := interpreter.GetInputTensor(0)
wanted_height := input.Dim(1)
wanted_width := input.Dim(2)
wanted_channels := input.Dim(3)
```

Check the dimensions.

```
import "fmt"

fmt.Println(wanted_width, wanted_height, wanted_channels)
```

Then, fill the input tensor.

```
import (
    "os"
    "image"
    _ "image/jpeg"
    _ "image/png"

    "github.com/nfnt/resize"
)

f, err := os.Open("example.jpg")
if err != nil {
    log.Fatal(err)
}
img, _, err := image.Decode(f)
if err != nil {
    log.Fatal(err)
}

qp := input.QuantizationParams()
log.Printf("width: %v, height: %v, type: %v, scale: %v, zeropoint: %v", wanted_width, wanted_height, input.Type(), qp.Scale, qp.ZeroPoint)
log.Printf("input tensor count: %v, output tensor count: %v", interpreter.GetInputTensorCount(), interpreter.GetOutputTensorCount())
if qp.Scale == 0 {
    qp.Scale = 1
}

bb := make([]byte, wanted_width*wanted_height*wanted_channels)
resized := resize.Resize(uint(wanted_width), uint(wanted_height), img, resize.NearestNeighbor)
for y := 0; y < wanted_height; y++ {
    for x := 0; x < wanted_width; x++ {
        r, g, b, _ := resized.At(x, y).RGBA()
        bb[(y*wanted_width+x)*3+0] = byte(float64(int(b)-qp.ZeroPoint) * qp.Scale)
        bb[(y*wanted_width+x)*3+1] = byte(float64(int(g)-qp.ZeroPoint) * qp.Scale)
        bb[(y*wanted_width+x)*3+2] = byte(float64(int(r)-qp.ZeroPoint) * qp.Scale)
    }
}
copy(input.UInt8s(), bb)

interpreter.Invoke()
```
The output consists of 4 tensors:

| Index | Name      | Description |
|-------|-----------|:------------|
| 0     | Locations | Multidimensional array of `[10][4]` floating point values between 0 and 1, the inner arrays representing bounding boxes in the form `[top, left, bottom, right]` |
| 1     | Classes   | Array of 10 integers (output as floating point values), each indicating the index of a class label from the labels file |
| 2     | Scores    | Array of 10 floating point values between 0 and 1 representing the probability that a class was detected |
| 3     | Number    | Number of detections: array of length 1 containing a floating point value |

```
output := interpreter.GetOutputTensor(0)
var loc [10][4]float32
var clazz [10]float32
var score [10]float32
var nums [1]float32

output.CopyToBuffer(&loc[0])
interpreter.GetOutputTensor(1).CopyToBuffer(&clazz[0])
interpreter.GetOutputTensor(2).CopyToBuffer(&score[0])
interpreter.GetOutputTensor(3).CopyToBuffer(&nums[0])
num := int(nums[0])
```

Collect the results and sort them by score.

```
import "sort"

type ssdClass struct {
    loc   [4]float32
    score float64
    index int
}

classes := make([]ssdClass, 0, len(clazz))
var i int
for i = 0; i < num; i++ {
    idx := int(clazz[i] + 1)
    score := float64(score[i])
    classes = append(classes, ssdClass{loc: loc[i], score: score, index: idx})
}
sort.Slice(classes, func(i, j int) bool {
    return classes[i].score > classes[j].score
})
classes = classes[:num]
```

Draw boxes around the detected objects.
``` import ( "bufio" "fmt" "image/color" "golang.org/x/image/colornames" "github.com/llgcode/draw2d" "github.com/llgcode/draw2d/draw2dimg" ) func loadLabels(filename string) ([]string, error) { labels := []string{} f, err := os.Open(filename) if err != nil { return nil, err } defer f.Close() scanner := bufio.NewScanner(f) for scanner.Scan() { labels = append(labels, scanner.Text()) } return labels, nil } labels, err := loadLabels("labelmap.txt") if err != nil { log.Fatal(err) } canvas := image.NewRGBA(img.Bounds()) gc := draw2dimg.NewGraphicContext(canvas) draw2d.SetFontFolder("/etc/alternatives") draw2d.SetFontNamer(func(fontData draw2d.FontData) string { return "fonts-japanese-gothic.ttf" }) gc.DrawImage(img) gc.SetFontSize(25) size := img.Bounds() for i, class := range classes { label := "unknown" if class.index < len(labels) { label = labels[class.index] } gc.BeginPath() c := colornames.Map[colornames.Names[class.index]] gc.SetStrokeColor(c) gc.SetLineWidth(1) gc.MoveTo(float64(size.Dx())*float64(class.loc[1]), float64(size.Dy())*float64(class.loc[0])) gc.LineTo(float64(size.Dx())*float64(class.loc[3]), float64(size.Dy())*float64(class.loc[0])) gc.LineTo(float64(size.Dx())*float64(class.loc[3]), float64(size.Dy())*float64(class.loc[2])) gc.LineTo(float64(size.Dx())*float64(class.loc[1]), float64(size.Dy())*float64(class.loc[2])) gc.Close() gc.Stroke() s := fmt.Sprintf("%d %.5f %s", i, class.score, label) gc.StrokeStringAt(s, float64(size.Dx())*float64(class.loc[1]), float64(size.Dy())*float64(class.loc[0])) } resize.Resize(600, 0, canvas, resize.NearestNeighbor) ```
```
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

import warnings
warnings.filterwarnings("ignore")

from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score

os.chdir("E:/DATA SCIENCE/Capstone Project/Win Prediction")

fullraw = pd.read_csv("Win_Prediction_Data.csv")
fullraw

fullraw.describe(include = 'object')
```

### Missing Value Treatment

```
fullraw.isnull().sum()
```

#### Replacing Missing Values with Mode_Value in Client Category

```
fullraw['Client Category'] = fullraw['Client Category'].fillna(fullraw['Client Category'].mode()[0])
fullraw.isnull().sum()
```

#### CorrPlot

```
df = fullraw.apply( lambda x : pd.factorize(x)[0]).corr(method = "pearson", min_periods = 1); df

plt.figure (figsize = (20, 10))
ax = sns.heatmap(
    df,
    vmin=-1, vmax=1, center=0,
    cmap=sns.diverging_palette(20, 220, n=200),
    square=True
)
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, horizontalalignment='right');
```

### Data PreProcessing

#### Deal_Cost Column

```
%matplotlib inline
plt.figure(figsize = (20,10))
sns.histplot(fullraw['Deal Cost'])
```

###### The Deal_Cost column is Right_Skewed...
``` fullraw['Deal Cost'].value_counts() ``` ###### Handling "0" values in Deal_Cost with Median Value ``` tempmedian = fullraw['Deal Cost'].median(); tempmedian fullraw['Deal Cost'].replace(0.00, tempmedian, inplace = True) fullraw['Deal Cost'].value_counts() ``` #### Log value of Deal Cost to normalise the Skewed Data ``` fullraw['Log Deal Cost'] = np.log(fullraw['Deal Cost']) plt.figure (figsize = (20,10)) sns.histplot (fullraw["Log Deal Cost"]) ``` #### Combine VP Name and Manager Name¶ ``` fullraw['Vp_Manager'] = fullraw["VP Name"] + " " + fullraw["Manager Name"] ``` #### Dropping Column's from the data, which are not going to assist us in our model ``` fullraw = fullraw.drop(["Deal Cost"], axis = 1) fullraw = fullraw.drop(['Deal Date'], axis = 1) fullraw = fullraw.drop(["VP Name"], axis = 1) fullraw = fullraw.drop(["Manager Name"], axis = 1) fullraw ``` ### Feature Engineering #### Recoding Dependant Variable ``` fullraw['Deal Status Code'] = np.where(fullraw["Deal Status Code"] == "Won", 1, 0) fullraw['Deal Status Code'].value_counts() fullraw.to_csv("final.csv", index = False) ``` #### Independant Variables ``` X = fullraw.drop("Deal Status Code", axis = 1);X ``` #### Target Variable ``` y = fullraw["Deal Status Code"];y ``` ### Target Encoding ``` cols = ['Client Category', 'Solution Type', 'Sector', 'Location', 'VP_Manager'] from sklearn.base import BaseEstimator, TransformerMixin class TargetEncoder(BaseEstimator, TransformerMixin): def __init__(self, cols=None): if isinstance(cols, str): self.cols = [cols] else: self.cols = cols def fit(self, X, y): if self.cols is None: self.cols = [col for col in X if str(X[col].dtype)=='object'] for col in self.cols: if col not in X: raise ValueError('Column \''+col+'\' not in X') self.maps = dict() for col in self.cols: tmap = dict() uniques = X[col].unique() for unique in uniques: tmap[unique] = y[X[col]==unique].mean() self.maps[col] = tmap return self def transform(self, X, y=None): Xo = X.copy() for col, tmap in 
self.maps.items(): vals = np.full(X.shape[0], np.nan) for val, mean_target in tmap.items(): vals[X[col]==val] = mean_target Xo[col] = vals return Xo def fit_transform(self, X, y=None): return self.fit(X, y).transform(X, y) ``` #### Target Encode the categorical Data ``` te = TargetEncoder() X_te = te.fit_transform(X, y) X_te.sample(10) ``` ### Sampling ``` fullraw2 = pd.concat([X_te, y], axis = 1); fullraw2 from sklearn.model_selection import train_test_split Train, Test = train_test_split(fullraw2, train_size = 0.7, random_state = 137) Train.shape Test.shape ``` #### Sampling into X & Y ##### Dividing each dataset into Independant Variables & Dependant Variables(depvar) ``` depvar = "Deal Status Code" TrainX = Train.drop(depvar, axis = 1).copy() TrainY = Train[depvar].copy() TrainX.shape TestX = Test.drop(depvar, axis= 1).copy() TestY = Test[depvar].copy() TestX.shape TestY ``` ## Decision Tree Model ``` from sklearn.tree import DecisionTreeClassifier, plot_tree from matplotlib.pyplot import figure M1 = DecisionTreeClassifier(random_state = 143).fit(TrainX, TrainY) plt.figure(figsize = (20,10)) DT_plot = plot_tree(M1, fontsize = 10, feature_names = TrainX.columns, filled = True, class_names = ["0", "1"]) ``` #### Prediction & Validation on Test set ``` Test_pred = M1.predict(TestX) Confu_mat = pd.crosstab(TestY, Test_pred);Confu_mat ``` #### Visualize Confusion_Matrix ``` Cm = confusion_matrix(TestY, Test_pred, labels = M1.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm, display_labels = M1.classes_) disp.plot() print(classification_report(TestY, Test_pred)) a = accuracy_score(TestY, Test_pred) print("Accuracy Score:", a) ``` #### ROC_ AUC Score ``` dt_score = M1.predict_proba(TrainX) dt_score = dt_score[ :, 1] M1.score(TrainX, TrainY) r = roc_auc_score(TrainY, dt_score) print('ROC_AUC Score:', r) ``` #### Plot ROC_Curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(TrainY, dt_score) def plot_roc_curve(false_positive_rate, 
true_positive_rate, label=None): plt.plot(false_positive_rate, true_positive_rate, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'r', linewidth=4) plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate (FPR)', fontsize=18) plt.ylabel('True Positive Rate (TPR)', fontsize=18) plt.figure (figsize = (20,10)) plot_roc_curve(false_positive_rate, true_positive_rate) plt.show() ``` #### True Loss ``` report = TestY.copy() report = pd.DataFrame(report) report["Prediction"]= Test_pred report["Deal Cost"] = np.exp(TestX["Log Deal Cost"]) True_Loss = report[(report['Deal Status Code'] == 1) & (report['Prediction'] == 0)].sum() True_Loss['Deal Cost'] ``` ### DT Model-2 ``` M2 = DecisionTreeClassifier(random_state = 666, min_samples_leaf = 420).fit(TrainX, TrainY) plt.figure(figsize = (20,10)) DT_plot1 = plot_tree(M2, fontsize = 10, feature_names = TrainX.columns, filled = True, class_names = ["0", "1"]) ``` #### Prediction & Validation on Test set ``` Test_Pred = M2.predict(TestX) Confu_Mat = pd.crosstab(TestY, Test_Pred);Confu_Mat ``` #### Visualize Confusion_Matrix ``` Cm1 = confusion_matrix(TestY, Test_Pred, labels = M2.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm1, display_labels = M2.classes_) disp.plot() print(classification_report(TestY, Test_Pred)) a = accuracy_score(TestY, Test_Pred) print("Accuracy Score:", a) ``` #### ROC_ AUC Score ``` dt_score = M2.predict_proba(TrainX) dt_score = dt_score[ :, 1] M2.score(TrainX, TrainY) r = roc_auc_score(TrainY, dt_score) print('ROC_AUC Score:', r) ``` #### Plot ROC_Curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(TrainY, dt_score) def plot_roc_curve(false_positive_rate, true_positive_rate, label=None): plt.plot(false_positive_rate, true_positive_rate, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'r', linewidth=4) plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate (FPR)', fontsize=18) plt.ylabel('True Positive Rate (TPR)', fontsize=18) plt.figure (figsize = (20,10))
plot_roc_curve(false_positive_rate, true_positive_rate) plt.show() ``` #### True Loss ``` report = TestY.copy() report = pd.DataFrame(report) report["Prediction"]= Test_Pred report["Deal Cost"] = np.exp(TestX["Log Deal Cost"]) True_Loss = report[(report['Deal Status Code'] == 1) & (report['Prediction'] == 0)].sum() True_Loss['Deal Cost'] ``` ## Random Forest Model ``` from sklearn.ensemble import RandomForestClassifier M_rf1 = RandomForestClassifier(random_state = 777).fit(TrainX, TrainY) ``` #### Prediction & Validation on Test set ``` Test_Pred = M_rf1.predict(TestX) Confu_mat = pd.crosstab(TestY, Test_Pred);Confu_mat ``` #### Visualize Confusion_Matrix ``` Cm1 = confusion_matrix(TestY, Test_Pred, labels = M_rf1.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm1, display_labels = M_rf1.classes_) disp.plot() print(classification_report(TestY, Test_Pred)) a = accuracy_score(TestY, Test_Pred) print("Accuracy Score:", a) ``` #### ROC_ AUC Score ``` dt_score = M_rf1.predict_proba(TrainX) dt_score = dt_score[ :, 1] M_rf1.score(TrainX, TrainY) r = roc_auc_score(TrainY, dt_score) print('ROC_AUC Score:', r) ``` #### Plot ROC_Curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(TrainY, dt_score) def plot_roc_curve(false_positive_rate, true_positive_rate, label=None): plt.plot(false_positive_rate, true_positive_rate, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'r', linewidth=4) plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate (FPR)', fontsize=18) plt.ylabel('True Positive Rate (TPR)', fontsize=18) plt.figure (figsize = (20,10)) plot_roc_curve(false_positive_rate, true_positive_rate) plt.show() ``` #### True Loss ``` report = TestY.copy() report = pd.DataFrame(report) report["Prediction"]= Test_Pred report["Deal Cost"] = np.exp(TestX["Log Deal Cost"]) True_Loss = report[(report['Deal Status Code'] == 1) & (report['Prediction'] == 0)].sum() True_Loss['Deal Cost'] ``` ### Variable Importance ``` Var_imp_df = 
pd.concat([pd.DataFrame(M_rf1.feature_importances_), pd.DataFrame(TrainX.columns)], axis = 1) Var_imp_df.columns = ["Value", "Variable_Name"] Var_imp_df.sort_values("Value", ascending = False, inplace = True);Var_imp_df Var_imp_df.to_csv("Var_imp_df2.csv", index = False) ``` ### Visualization ``` plt.figure(figsize = (20,10)) plt.xticks(rotation = 0) plot= sns.barplot(x = "Variable_Name", y = "Value", data = Var_imp_df) ``` #### RF_Model with Tuning Parameters ``` M_rf2 = RandomForestClassifier( random_state = 143, n_estimators = 25, max_features = 5, min_samples_leaf = 500) M_rf2 = M_rf2.fit(TrainX, TrainY) ``` #### Prediction & Validation on Test set ``` Test_Pred2 = M_rf2.predict(TestX) Confu_mat2 = pd.crosstab(TestY, Test_Pred2);Confu_mat2 ``` #### Visualize Confusion_Matrix ``` Cm2 = confusion_matrix(TestY, Test_Pred2, labels = M_rf2.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm2, display_labels = M_rf2.classes_) disp.plot() ``` ### RandomForest Grid Searching ``` from sklearn.model_selection import GridSearchCV my_para_grid = {"n_estimators" : [25, 50, 75], "max_features" : [5, 7, 9], "min_samples_leaf" : [100, 200] } ``` Grid_search_Model(Gsm) ``` Gsm = GridSearchCV(estimator = RandomForestClassifier(random_state = 13), param_grid = my_para_grid, scoring = "accuracy", cv = 3).fit(TrainX, TrainY) ``` #### Prediction & Validation on Test set ``` M_Val_df = pd.DataFrame.from_dict(Gsm.cv_results_) Rf_Final = RandomForestClassifier(random_state = 127, n_estimators = 50, max_features = 5, min_samples_leaf = 100).fit(TrainX, TrainY) Tp_Final = Rf_Final.predict(TestX) Cm_final = pd.crosstab(TestY, Tp_Final);Cm_final ``` #### Visualize Confusion_Matrix ``` Cm = confusion_matrix(TestY, Tp_Final, labels = Rf_Final.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm, display_labels = Rf_Final.classes_) disp.plot() print(classification_report(TestY, Tp_Final)) a = accuracy_score(TestY, Tp_Final) print("Accuracy Score:", a) ``` #### ROC_ AUC 
Score ``` dt_score = Rf_Final.predict_proba(TrainX) dt_score = dt_score[ :, 1] Rf_Final.score(TrainX, TrainY) r = roc_auc_score(TrainY, dt_score) print('ROC_AUC Score:', r) ``` #### Plot ROC_Curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(TrainY, dt_score) def plot_roc_curve(false_positive_rate, true_positive_rate, label=None): plt.plot(false_positive_rate, true_positive_rate, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'r', linewidth=4) plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate (FPR)', fontsize=18) plt.ylabel('True Positive Rate (TPR)', fontsize=18) plt.figure (figsize = (20,10)) plot_roc_curve(false_positive_rate, true_positive_rate) plt.show() ``` #### True Loss ``` report = TestY.copy() report = pd.DataFrame(report) report["Prediction"]= Tp_Final report["Deal Cost"] = np.exp(TestX["Log Deal Cost"]) True_Loss = report[(report['Deal Status Code'] == 1) & (report['Prediction'] == 0)].sum() True_Loss['Deal Cost'] ``` ## XG- Boost ``` import xgboost as xgb Xgb_cl = xgb.XGBClassifier() ``` #### Prediction & Validation on Test set ``` Xgb_cl.fit(TrainX, TrainY) t_pred = Xgb_cl.predict(TestX) ``` #### Confusion Matrix ``` Conf_mat = pd.crosstab(TestY, t_pred);Conf_mat ``` #### Visualize Confusion_Matrix ``` Cm = confusion_matrix(TestY, t_pred, labels = Xgb_cl.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm, display_labels = Xgb_cl.classes_) disp.plot() from sklearn.metrics import accuracy_score print(classification_report(TestY, t_pred)) a = accuracy_score(TestY, t_pred) print("Accuracy Score:", a) ``` #### ROC_ AUC Score ``` dt_score = Xgb_cl.predict_proba(TrainX) dt_score = dt_score[ :, 1] Xgb_cl.score(TrainX, TrainY) r = roc_auc_score(TrainY, dt_score) print('ROC_AUC Score:', r) ``` ### Plot ROC_Curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(TrainY, dt_score) def plot_roc_curve(false_positive_rate, true_positive_rate, label=None): plt.plot(false_positive_rate, 
true_positive_rate, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'r', linewidth=4) plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate (FPR)', fontsize=18) plt.ylabel('True Positive Rate (TPR)', fontsize=18) plt.figure (figsize = (20,10)) plot_roc_curve(false_positive_rate, true_positive_rate) plt.show() ``` #### True Loss ``` report = TestY.copy() report = pd.DataFrame(report) report["Prediction"]= t_pred report["Deal Cost"] = np.exp(TestX["Log Deal Cost"]) True_Loss = report[(report['Deal Status Code'] == 1) & (report['Prediction'] == 0)].sum() True_Loss['Deal Cost'] ``` ## Logistic Regression ``` from sklearn.linear_model import LogisticRegression M_1= LogisticRegression() M_1.fit(TrainX, TrainY) ``` #### Prediction & Validation on Test set ``` Test_Pred = M_1.predict(TestX) Confu_mat = pd.crosstab(TestY, Test_Pred); Confu_mat ``` #### Visualize Confusion_Matrix ``` Cm1 = confusion_matrix(TestY, Test_Pred, labels = M_1.classes_ ) disp = ConfusionMatrixDisplay( confusion_matrix = Cm1, display_labels = M_1.classes_) disp.plot() print(classification_report(TestY, Test_Pred)) a = accuracy_score(TestY,Test_Pred) print("Accuracy Score:", a) ``` #### ROC_ AUC Score ``` dt_score = M_1.predict_proba(TrainX) dt_score = dt_score[ :, 1] M_1.score(TrainX, TrainY) r = roc_auc_score(TrainY, dt_score) print('ROC_AUC Score:', r) ``` #### Plot ROC_Curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(TrainY, dt_score) def plot_roc_curve(false_positive_rate, true_positive_rate, label=None): plt.plot(false_positive_rate, true_positive_rate, linewidth=2, label=label) plt.plot([0, 1], [0, 1], 'r', linewidth=4) plt.axis([0, 1, 0, 1]) plt.xlabel('False Positive Rate (FPR)', fontsize=18) plt.ylabel('True Positive Rate (TPR)', fontsize=18) plt.figure (figsize = (20,10)) plot_roc_curve(false_positive_rate, true_positive_rate) plt.show() ``` #### True Loss ``` report = TestY.copy() report = pd.DataFrame(report) report["Prediction"]= Test_Pred report["Deal 
Cost"] = np.exp(TestX["Log Deal Cost"]) True_Loss = report[(report['Deal Status Code'] == 1) & (report['Prediction'] == 0)].sum() True_Loss['Deal Cost'] ```
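Each model section above ends by computing a "True Loss": the total Deal Cost of false negatives, i.e. deals that were actually won but that the model predicted as lost. The repeated cells can be condensed into one helper; the sketch below runs on made-up numbers, not the notebook's data.

```python
import pandas as pd

def true_loss(actual, predicted, deal_cost):
    """Sum the deal cost of false negatives: deals actually won (1)
    but predicted as lost (0)."""
    report = pd.DataFrame({"actual": actual,
                           "prediction": predicted,
                           "cost": deal_cost})
    missed = report[(report["actual"] == 1) & (report["prediction"] == 0)]
    return missed["cost"].sum()

# Hypothetical example: rows 0 and 3 are false negatives (100 + 250)
print(true_loss([1, 0, 1, 1, 0],
                [0, 0, 1, 0, 1],
                [100.0, 50.0, 75.0, 250.0, 30.0]))  # 350.0
```

The same helper could be called once per model instead of repeating the filter-and-sum cell four times.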
<a href="https://colab.research.google.com/github/loosak/pysnippets/blob/master/Graphs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Exploring the World of Graphs John Paul Mueller and Luca Massaron: Algorithms For Dummies®, 2nd Edition Graphs are discrete mathematical structures used to model pairwise relations between objects. ## Undirected simple graph. An ordered pair of vertices and edges: $$G=(V,E)$$ - $V$, a set of vertices (also called **nodes** or **points**); - $E \subseteq \{\{x,y\}\mid x,y\in V\ \text{and}\ x\neq y\}$, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with two distinct vertices). ## Undirected multigraph. An ordered triple: $$G=(V,E,\phi)$$ - $V$, a set of vertices (also called nodes or points); - $E$, a set of edges (also called links or lines); - $\phi : E \to \{\{x,y\}\mid x,y\in V\ \text{and}\ x\neq y\}$, an incidence function mapping every edge to an unordered pair of vertices (that is, an edge is associated with two distinct vertices). ## Loops A **loop** is an edge that joins a vertex to itself. Relaxing the condition $x \neq y$ in the definitions above gives an undirected simple graph permitting loops and an undirected multigraph permitting loops (sometimes also called an undirected pseudograph), respectively. A **tuple** is a finite ordered list (sequence) of elements.
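The set-theoretic definition of a simple graph translates directly into Python: the vertices are a set, and each edge is a `frozenset` of two distinct vertices, which makes the "unordered pair" property literal. The vertex numbers below are illustrative.

```python
# G = (V, E): vertices are a set; each edge is an unordered pair of
# distinct vertices, modelled as a two-element frozenset.
V = {1, 2, 3, 4, 5}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 5), (1, 3), (1, 5)]}

# The constraint x != y (no loops) means every edge has exactly two
# elements, and every endpoint must be a known vertex (e <= V).
assert all(len(e) == 2 and e <= V for e in E)
print(len(V), len(E))  # 5 6
```

Because `frozenset({1, 2}) == frozenset({2, 1})`, listing an edge in either order produces the same element of `E`.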
In mathematics, an ordered n-tuple denotes an ordered list of a finite number n of objects. ``` import sys sys.version import networkx as nx Graph = nx.Graph() Nodes = range(1,6) # nodes 1..5; the edges below reference node 5 Edges = [(1,2), (2,3), (3,4), (4,5), (1,3), (1,5)] Graph.add_nodes_from(Nodes) Graph.add_edges_from(Edges) nx.draw(Graph, with_labels=True) import networkx as nx Graph = nx.Graph() Nodes = ['A','B','C','D','E'] Edges = [('A', 'B'), ('B', 'D'), ('D', 'C'), ('C', 'E'), ('C', 'A'), ('E', 'A')] Graph.add_nodes_from(Nodes) Graph.add_edges_from(Edges) nx.draw(Graph, with_labels=True) import networkx as nx import matplotlib.pyplot as plt graph = { 'A': {'B':2, 'C':3}, 'B': {'C':2, 'D':2}, 'C': {'D':3, 'E':2}, 'D': {'F':3}, 'E': {'D':1,'F':1}, 'F': {} } Graph = nx.DiGraph() for node in graph: Graph.add_node(node) # add_node, not add_nodes_from: a string would be iterated character by character for edge, weight in graph[node].items(): Graph.add_edge(node, edge, weight=weight) labels = nx.get_edge_attributes(Graph,'weight') pos = { 'A': [0.00, 0.50], 'B': [0.25, 0.75], 'C': [0.25, 0.25], 'D': [0.75, 0.75], 'E': [0.75, 0.25], 'F': [1.00, 0.50] } draw_params = { 'with_labels': True, 'arrows': True, 'node_color':'skyblue', 'node_size':700, 'width':2, 'font_size':14 } nx.draw(Graph, with_labels=True, arrows=True) #nx.draw(Graph, pos, **draw_params) #nx.draw_networkx_edge_labels(Graph, pos, font_size=14, edge_labels=labels) [ {node: graph[node]} for node in graph] [item for node in graph for item in graph[node]] I_Graph = nx.DiGraph() I_Graph.__dict__ ``` # Building graphs A dictionary makes building the graph easy because each key is a node name and its value lists the connections for that node. ``` graph = { 'A': ['B', 'F'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D', 'F'], 'F': ['E', 'A'] } graph Graph = nx.DiGraph(graph) # build the digraph directly from the adjacency dict nx.draw(Graph, with_labels=True) import numpy as np a = np.array([1, 2, 3, 4]) b = np.array([2, 2, 4, 4]) a == b import pandas as pd df = pd.DataFrame({ 'A': [1, 2, 3, 4], 'B': [2, 2, 4, 4] }) df df.A == df.B ```
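Beyond drawing, the adjacency-dict form also makes graph algorithms easy to write by hand. A minimal breadth-first traversal over a dict like the one above (pure Python, no NetworkX):

```python
from collections import deque

# Adjacency dict: key = node, value = list of its neighbours.
graph = {
    'A': ['B', 'F'],
    'B': ['A', 'C'],
    'C': ['B', 'D'],
    'D': ['C', 'E'],
    'E': ['D', 'F'],
    'F': ['E', 'A'],
}

def bfs_order(graph, start):
    """Return the nodes in breadth-first visiting order from `start`."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order

print(bfs_order(graph, 'A'))  # ['A', 'B', 'F', 'C', 'E', 'D']
```

The traversal visits nodes in rings of increasing distance from the start, which is why 'B' and 'F' (the direct neighbours of 'A') come before 'C' and 'E'.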
<h1 align="center">COMPUTER PROGRAMMING </h1> <h2 align="center">UNIVERSIDAD EAFIT</h2> <h3 align="center">MEDELLÍN - COLOMBIA </h3> <h2 align="center">Session 12 - Python Ecosystem - Matplotlib</h2> ## Instructor: > <strong> *Carlos Alberto Álvarez Henao, I.C. Ph.D.* </strong> ## Matplotlib > First, install the [matplotlib](https://anaconda.org/conda-forge/matplotlib) package > - The de facto standard for visualization in Python > - Intended to be similar to Matlab's plotting functions In Jupyter you need to activate the inline option to display the plots inside the notebook ![Matplotlib_Install_Conda.PNG](attachment:Matplotlib_Install_Conda.PNG) ``` %matplotlib inline ``` > Now import all the required packages ``` import numpy as np import matplotlib.pyplot as plt ``` > The matplotlib library is enormous, and it is hard to get a global picture of everything it can do on first contact. It is advisable to keep the documentation and the gallery at hand: ``` from IPython.display import HTML HTML('<iframe src="http://matplotlib.org/gallery.html#pylab_examples" width="800" height="600"></iframe>') ``` > Clicking on any of the images takes us to the source code that generated it ``` HTML('<iframe src="http://matplotlib.org/examples/pylab_examples/annotation_demo.html" width="800" height="600"></iframe>') ``` ### The pyplot interface > The *pyplot* interface provides a series of functions that operate on a global state. > - We do not specify which figure or axes we are acting on. > - It is a quick and convenient way to create plots, but we give up some control. > - The pyplot package is usually imported under the alias *plt*, so all its functions are accessed as *plt.$<function>$*.
> - The most basic function is the plot function: ``` plt.plot([0, 0.1, 0.2, 0.5], [1, -1, 0.5, 2]) #SINE PLOT # coding=utf-8 # Load the required modules (np.linspace/np.sin replace the removed scipy aliases) import numpy as np import matplotlib.pyplot as plt # Create the array x from 0 to 10 with one hundred points x = np.linspace(0, 10, 100) # Create the array y with the indicated function y = np.sin(x) # Create a figure plt.figure() # draw the figure plt.plot(x,y) # Show on screen plt.show() #SINE PLOT - on repl.it import numpy as np import matplotlib.pyplot as plt # Create the array x from 0 to 10 with one hundred points x = np.linspace(0, 10, 100) # Create the array y with the indicated function y = np.sin(x) # Create a figure #plt.figure() fig = plt.figure(figsize=(10, 8)) ax = fig.add_subplot(111) # draw the figure plt.plot(x,y) # Show on screen fig.savefig('seno.png') ``` > The *plot* function takes a single list (if we only want to specify the y values) or two lists (if we specify both $x$ and $y$). > - Naturally, if we specify two lists, both must have the same length. > The most common task when working with *matplotlib* is plotting a function. > - What we have to do is define a domain and evaluate the function on it. For example: $$f(x)=e^{-x^2}$$ ``` def f(x): return np.exp(-x ** 2) a = f(2) print(a) plt.plot(2,f(2)) ``` > We define the domain with the *np.linspace* function, which creates a vector of equally spaced points: ``` x = np.linspace(-1, 5, num=100) x ``` > And we plot the function: ``` plt.plot(x, f(x), label="Function f(x)") plt.xlabel("$x$ axis") plt.ylabel("$f(x)$") plt.legend() plt.title("Function $f(x)=e^{-x^2}$") ``` ### Customization > The *plot* function accepts a series of arguments to customize the plot's appearance. > - With a letter we can specify the color, and with a symbol the line style.
``` plt.plot(x, f(x), 'g-') plt.plot(x, 1 - f(x), 'b--') ``` > These are actually shorthand codes that correspond to arguments of the *plot* function: ``` #color = 'green' plt.plot(x, f(x), color='black', linestyle='', marker='o') plt.plot(x, 1 - f(x), c='g', ls='--') ``` > The list of possible arguments and abbreviations is available in the documentation of the *plot* function, which you can find at this [link](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot). ### Other kinds of plots > The *scatter* function shows a cloud of points, with the option of varying the size and the color as well. ``` N = 100 x = np.random.randn(N) y = np.random.randn(N) plt.scatter(x, y) ``` > With *s* and *c* we can modify the size and the color, respectively. > - For the color, each numeric value is assigned a color through a colormap; that map can be changed with the *cmap* argument. The correspondence can be visualized by calling the *colorbar* function. ``` s = 50 + 50 * np.random.randn(N) c = np.random.randn(N) plt.scatter(x, y, s=s, c=c, cmap=plt.cm.Blues) plt.colorbar() plt.scatter(x, y, s=s, c=c, cmap=plt.cm.Oranges) plt.colorbar() ``` > *matplotlib* ships with many colormaps by default. The *SciPy Lecture Notes* give a list of all of them: ![plot_colormaps_1.png](attachment:plot_colormaps_1.png) > The *contour* function is used to visualize the level curves of functions of two variables and is closely tied to the *np.meshgrid* function. Let's see an example: $$f(x,y) = \cos(x) + \sin^2(y)$$ ``` def f(x, y): return np.cos(x) + np.sin(y) ** 2 x = np.linspace(-2, 2) y = np.linspace(-2, 2) xx, yy = np.meshgrid(x, y) plt.contour(xx, yy, f(xx, yy)) plt.colorbar() ``` > The *contourf* function is almost identical, but it fills the space between levels.
> - We can manually specify these levels using the fourth argument: ``` zz = f(xx, yy) plt.contourf(xx, yy, zz, np.linspace(-0.5, 2.0),cmap=plt.cm.rainbow) plt.colorbar() ``` ### Object-Oriented Interface ``` x = np.linspace(-1, 5, num=50) x def f(x): return np.exp(-x ** 2) f(x) fig, axes = plt.subplots() axes.plot(x,f(x),'ro', label='Function f(x)') axes.set_xlim(-1,4) axes.set_ylim(0,1) fig.savefig('Grafica1.png') ``` > Now we will use the *subplots* function to create a figure with several sets of axes. ``` fig, axes = plt.subplots(2,2, sharey=True) axes = axes.flatten() # subplots(2,2) returns a 2x2 array of axes; flatten it so axes[0], axes[1], axes[2] work axes[0].plot(x,f(x)) axes[0].set_xlabel('Axis x0') axes[1].plot(x,-f(x),'r') axes[1].set_xlabel('Axis x1') axes[2].plot(x,1-f(x),'r') axes[2].set_xlabel('Axis x2') x = np.random.rand(100) y = np.random.rand(100) s = 200*np.random.randn(100) c = np.random.randn(100) plt.scatter(x,y,s,c,cmap=plt.cm.seismic) ``` $$g(x,y) = \cos(x) + \sin^2(y)$$ ``` x = np.linspace(-2,2) y = np.linspace(-2,2) xx,yy = np.meshgrid(x,y) def g(x,y): return np.cos(x) + np.sin(y)**2 zz = g(xx,yy) fig, axes = plt.subplots() axes.contourf(xx,yy,zz, np.linspace(-1,1),cmap=plt.cm.autumn) ``` ### Example with real data ``` !type temperaturas.csv datos = np.loadtxt("temperaturas.csv",usecols=(1,2,3),skiprows=1,delimiter=",") fig, axes = plt.subplots() x = np.arange(len(datos[:,1])) temp_media = (datos[:,1] + datos[:,2]) / 2 axes.plot(x,datos[:,1],'r') axes.plot(x,datos[:,2],'b') axes.plot(x,temp_media,'k') ```
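The last cell depends on a local `temperaturas.csv` that is not included here. When the file is not available, the same min/mean/max plot can be reproduced with simulated data (the temperature values below are made up):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulate 30 days of minimum and maximum temperatures
rng = np.random.default_rng(0)
dias = np.arange(30)
t_min = 10 + 3 * rng.standard_normal(30)
t_max = t_min + 8 + 2 * rng.standard_normal(30)
t_media = (t_min + t_max) / 2  # midpoint, as in the cell above

fig, ax = plt.subplots()
ax.plot(dias, t_max, 'r', label='max')
ax.plot(dias, t_min, 'b', label='min')
ax.plot(dias, t_media, 'k', label='mean')
ax.legend()
fig.savefig('temperaturas_simuladas.png')
```

Using a seeded `default_rng` keeps the "data" reproducible, so the figure looks the same on every run.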
# Air Quality Monitoring ### Import/Install packages ``` #!pip install git+https://github.com/datakaveri/iudx-python-sdk #!pip install geojsoncontour #!pip install voila #!pip install voila-gridstack # Use !voila airQualityMonitoring.ipynb --enable_nbextensions=True --template=gridstack to launch dashboard or use jupyter notebook extensions from iudx.entity.Entity import Entity import pandas as pd import numpy as np import json from datetime import date, datetime, timedelta import matplotlib.pyplot as plt import plotly.express as px import plotly.graph_objects as go import folium from folium import plugins from scipy.interpolate import griddata import geojsoncontour import ipywidgets as widgets from ipywidgets import Layout import warnings ``` ### Defining variables and widgets for interaction ``` # ids of each resource group city_ids={ "vadodara": "vmc.gov.in/ae95ac0975a80bd4fd4127c68d3a5b6f141a3436/rs.iudx.org.in/vadodara-env-aqm", "varanasi": "varanasismartcity.gov.in/62d1f729edd3d2a1a090cb1c6c89356296963d55/rs.iudx.org.in/varanasi-env-aqm", "pune": "datakaveri.org/04a15c9960ffda227e9546f3f46e629e1fe4132b/rs.iudx.org.in/pune-env-aqm" } # types of values measured in each city - instantaneous or average value_type={ 'vadodara': 'instValue', 'varanasi': 'avgOverTime', 'pune':'avgOverTime' } # list of properties common to all cities column_choice=['ambientNoise','so2','uv','co','co2','illuminance','no2','o3','pm10','pm2p5','relativeHumidity'] # widgets for interaction prompt1=widgets.HTML(value="") prompt2=widgets.HTML(value="") gif_address = 'https://www.uttf.com.ua/assets/images/loader2.gif' select_ndays=widgets.IntSlider( value=1, min=1, max=14, step=1, description='Days: ', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='d' ) select_city=widgets.Dropdown( options=city_ids.keys(), value='pune', description='City:', disabled=False, ) select_col=widgets.Dropdown( options=column_choice, value='pm10', 
description='Property:', disabled=False, ) mywidgets=[select_city,select_ndays,select_col] ui=widgets.VBox([select_city,select_ndays,prompt1,select_col,prompt2]) ``` ### Functions to fetch, prepare and visualize data ##### Fetch data ``` # fetch latest data in the past n days for a city and add/modify required columns def get_data(selected_city,ndays): for widget in mywidgets: widget.disabled=True prompt1.value=f'<img src="{gif_address}" height=150 width=150> Fetching data' global entity,measures,latest_measures,start_time,end_time,city city=selected_city entity=Entity(entity_id=city_ids[city]) latest_measures=entity.latest().reset_index(drop=True) end_time = latest_measures['observationDateTime'].sort_values(ascending=False).reset_index(drop=True)[0] start_time = (end_time - timedelta(days=ndays,hours=6)) measures = entity.during_search( start_time=start_time.strftime("%Y-%m-%dT%H:%M:%SZ"), end_time=end_time.strftime("%Y-%m-%dT%H:%M:%SZ"), ) measures['observationDateTime']=measures['observationDateTime'].apply(lambda x:x.tz_localize(None)) latest_measures['observationDateTime']=latest_measures['observationDateTime'].apply(lambda x:x.tz_localize(None)) rs_coordinates={} rs_label={} for res in entity.resources: rs_coordinates[res['id']]=res['location']['geometry']['coordinates'] rs_label[res['id']]=res['name'] latest_measures['x_co']=latest_measures['id'].apply(lambda id:rs_coordinates[id][0]) latest_measures['y_co']=latest_measures['id'].apply(lambda id:rs_coordinates[id][1]) measures['x_co']=measures['id'].apply(lambda id:rs_coordinates[id][0]) measures['y_co']=measures['id'].apply(lambda id:rs_coordinates[id][1]) measures['label']=measures['id'].apply(lambda id:rs_label[id]) latest_measures['label']=measures['id'].apply(lambda id:rs_label[id]) for widget in mywidgets: widget.disabled=False prompt1.value=f'Fetched {measures.shape[0]} records from {len(entity.resources)} resources' ``` ##### Spatial Visualization ``` # plot contours over a map for a given property 
def spatialVis1(selected_city, col): global city,units prop_desc=entity._data_descriptor[col][value_type[city]] units=prop_desc["unitText"] prompt2.value=f'{prop_desc["description"]}<br> Unit: {units}' city=selected_city column_name=col+"."+value_type[city] x_orig = [] y_orig = [] zs = [] for res in entity.resources: try: val = latest_measures[latest_measures["id"] == res["id"]][column_name].values[0] if val is not None and val>0: zs.append(val) x_orig.append(res["location"]["geometry"]["coordinates"][0]) y_orig.append(res["location"]["geometry"]["coordinates"][1]) except: pass x_orig = np.array(x_orig) y_orig = np.array(y_orig) zs = np.array(zs) # Initialize the map geomap1 = folium.Map([y_orig.mean(), x_orig.mean()], zoom_start=11, tiles="cartodbpositron") for res in entity.resources: entity_id = res["id"] try: val=latest_measures[latest_measures['id']==entity_id][column_name].values[0] if val is not None and val>0: folium.Marker([res["location"]["geometry"]["coordinates"][1], res["location"]["geometry"]["coordinates"][0]], tooltip=f'{col.upper()}: {str(val)}').add_to(geomap1) except: pass # Make lat and lon linspace y_arr = np.linspace(np.min(y_orig), np.max(y_orig), 100) x_arr = np.linspace(np.min(x_orig), np.max(x_orig), 100) # Make mesh grid x_mesh, y_mesh = np.meshgrid(x_arr, y_arr) # Perform cubic interpolation z_mesh = griddata((x_orig, y_orig), zs, (x_mesh, y_mesh), method='cubic') # Number of levels of colors levels = 20 contourf=plt.contourf(x_mesh, y_mesh, z_mesh, levels, alpha=0.5, cmap="bwr", linestyles='None', vmin=0, vmax=100) plt.close() # Convert matplotlib contourf to geojson geojson = geojsoncontour.contourf_to_geojson( contourf=contourf, min_angle_deg=3.0, ndigits=5, stroke_width=1, fill_opacity=0.5) # Plot the contour plot on folium folium.GeoJson( geojson, style_function=lambda x: { 'color': x['properties']['stroke'], 'weight': x['properties']['stroke-width'], 'fillColor': x['properties']['fill'], 'opacity': 0.6, }).add_to(geomap1) # Show 
map display(geomap1) # plot bubbles over a map for a given property def spatialVis2(selected_city, col): city=selected_city column_name=col+"."+value_type[city] maxval=max(list(filter(None,latest_measures[column_name]))) minval=min(list(filter(None,latest_measures[column_name]))) geomap2 = folium.Map([latest_measures['y_co'].mean(), latest_measures['x_co'].mean()], zoom_start=12, tiles="cartodbpositron") for res in entity.resources: entity_id = res["id"] try: val=latest_measures[latest_measures['id']==entity_id][column_name].values[0] if val is not None and val>0: folium.Circle( [res["location"]["geometry"]["coordinates"][1], res["location"]["geometry"]["coordinates"][0]], radius=2000*(val-minval)/(maxval-minval), popup = f'{col.upper()}: {str(val)}', color='b', fill_color=('red' if ((val-minval)/(maxval-minval))>0.6 else 'blue'), fill=True, fill_opacity=0.4 ).add_to(geomap2) except: pass display(geomap2) ``` ##### Time Series Visualization ``` # plot the measures of a proprty over ndays for the resource with the latest recording def timeSeriesVis1(selected_city, col, ndays): column_name=col+"."+value_type[city] sensor_id = measures.sort_values(by='observationDateTime',ascending=False).reset_index(drop=True)['id'][0] single_resource_data = measures.query(f"id == '{sensor_id}'") sensor_coordinates=[] for res in entity.resources: if res['id']==sensor_id: sensor_coordinates=res['location']['geometry']['coordinates'] fig = px.line( single_resource_data, x="observationDateTime", y=column_name ) display(widgets.HTML(f'<center style="font-size:14px">Temporal sensor reading for \n {col.upper()} from {start_time.date()} to {end_time.date()} for resource at {sensor_coordinates}<center>')) fig.update_layout( xaxis_title="Observed Timestamp", yaxis_title="Sensor reading for "+col.upper()+" ("+units+")", font=dict( size=12 ) ) fig.update_xaxes(rangeslider_visible=True) fig.show() # plot the measures of a proprty over ndays for all resources def timeSeriesVis2(selected_city, 
col, ndays): column_name=col+"."+value_type[city] fig = px.line( measures, x="observationDateTime", y=column_name, color='label' ) display(widgets.HTML(f'<center style="font-size:14px">Temporal sensor reading for {col.upper()} from {start_time.date()} to {end_time.date()} of all sensors<center>')) fig.update_layout( xaxis_title="Observed Timestamp", yaxis_title="Sensor reading for "+col.upper()+" ("+units+")", font=dict( size=12 ) ) fig.update_xaxes(rangeslider_visible=True) fig.show() # plot a box plot over each day of the week for the resource with the latest recording def timeSeriesVis3(selected_city, col, ndays): column_name=col+"."+value_type[city] sensor_id = measures.sort_values(by='observationDateTime',ascending=False).reset_index(drop=True)['id'][0] single_resource_data = measures.query(f"id == '{sensor_id}'") warnings.filterwarnings('ignore') sensor_coordinates=[] single_resource_data['day']=single_resource_data['observationDateTime'].apply(lambda x:x.strftime('%A')) for res in entity.resources: if res['id']==sensor_id: sensor_coordinates=res['location']['geometry']['coordinates'] fig = px.box( single_resource_data, x="day", y=column_name, points="all" ) display(widgets.HTML(f'<center style="font-size:14px">Box plots for \n {col.upper()} from {start_time.date()} to {end_time.date()} for resource at {sensor_coordinates}<center>')) fig.update_layout( #title=f'', xaxis_title="Day", yaxis_title="Sensor reading for "+col.upper()+" ("+units+")", font=dict( size=12 ) ) fig.show() # plot a histogram showing the average measurements over observed time for the resource with the latest recording def timeSeriesVis4(selected_city, col, ndays): column_name=col+"."+value_type[city] sensor_id = measures.sort_values(by='observationDateTime',ascending=False).reset_index(drop=True)['id'][0] single_resource_data = measures.query(f"id == '{sensor_id}'") warnings.filterwarnings('ignore') sensor_coordinates=[] 
single_resource_data['day']=single_resource_data['observationDateTime'].apply(lambda x:x.strftime('%A')) for res in entity.resources: if res['id']==sensor_id: sensor_coordinates=res['location']['geometry']['coordinates'] fig = px.histogram( single_resource_data, x="observationDateTime", y=column_name, histfunc="avg" ) display(widgets.HTML(f'<center style="font-size:14px">Histogram for \n {col.upper()} from {start_time.date()} to {end_time.date()} for resource at {sensor_coordinates}<center>')) fig.update_layout( #title=f'', xaxis_title="Day", yaxis_title="Sensor reading for "+col.upper()+" ("+units+")", font=dict( size=12 ) ) fig.show() ``` ##### Basic Visualization ``` # plot a bar chart for the latest measures of a property at all active resources def simpleVis1(selected_city, col): column_name=col+"."+value_type[city] display(widgets.HTML(f'<center style="font-size:14px">Latest temporal sensor reading for {col.upper()} of all sensors<center>')) fig = px.bar(latest_measures, x='label', y=column_name) fig.update_layout( xaxis_title="Sensor Id", yaxis_title="Sensor reading for "+col.upper()+" ("+units+")", font=dict( size=12 ) ) fig.show() ``` ### Interactive outputs for dashboard ``` # display the widgets ui # fetch data widgets.interactive_output(get_data,{'selected_city':select_city,'ndays':select_ndays}) # contour map widgets.interactive_output(spatialVis1,{'selected_city':select_city, 'col':select_col}) # time series (single resource) widgets.interactive_output(timeSeriesVis1,{'selected_city':select_city, 'col':select_col, 'ndays':select_ndays}) # bubble map widgets.interactive_output(spatialVis2,{'selected_city':select_city, 'col':select_col}) # time series (for all resources) widgets.interactive_output(timeSeriesVis2,{'selected_city':select_city, 'col':select_col, 'ndays':select_ndays}) # bar chart widgets.interactive_output(simpleVis1,{'selected_city':select_city, 'col':select_col}) # box plots 
widgets.interactive_output(timeSeriesVis3, {'selected_city': select_city, 'col': select_col, 'ndays': select_ndays})
# histogram
widgets.interactive_output(timeSeriesVis4, {'selected_city': select_city, 'col': select_col, 'ndays': select_ndays})
```
# Complex Numbers as Vectors

We saw that a complex number $z = a + bi$ is equivalent to (and therefore can be represented as) the ordered tuple $(a; b)$, which can be plotted in a 2D space. So, complex numbers and 2D points are equivalent. What is more, we can draw a vector from the origin of the coordinate plane to our point. This is called a point's **radius-vector**.

Let's try plotting complex numbers as radius vectors. Don't forget to label the real and imaginary axes. Also, move the axes to the origin. Hint: These are called "spines"; you'll need to move 2 of them to the origin and remove the other 2 completely. Hint 2: You already did this in the previous lab.

We can use `plt.quiver()` to plot the vector. It can behave a bit strangely, so we'll need to set the scale of the vectors to be the same as the scale on the graph axes:

```python
plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1)
```

Other than that, the main parameters are: $x_{begin}$, $y_{begin}$, $x_{length}$, $y_{length}$ in that order.

Now, set the aspect ratio of the axes to be equal. Also, add grid lines. Set the axis numbers (called ticks) to be something like `range(-3, 4)` for now.

```python
plt.xticks(range(-3, 4))
plt.yticks(range(-3, 4))
```

If you wish to, you can be a bit more clever with the tick marks. Find the minimal and maximal $x$ and $y$ values and set the ticks according to them. It's a good practice not to jam the plot too much, so leave a little bit of space. That is, if the actual x-range is $[-2; 2]$, set the plotting to be $[-2.5; 2.5]$ for example. Otherwise, the vector heads (arrows) will be "jammed" into a corner or side of the plot.
```
%matplotlib inline
import numpy as np
import numpy.polynomial.polynomial as p
import matplotlib.pyplot as plt

def plot_complex_number(z):
    plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1)
    plt.xticks(range(-3, 4))
    plt.yticks(range(-3, 4))
    plt.xlabel("R")
    plt.ylabel("i")
    ax = plt.gca()
    ax.xaxis.set_label_coords(1, 0.4)
    ax.yaxis.set_label_coords(0.4, 1)
    ax.spines["bottom"].set_position("zero")
    ax.spines["left"].set_position("zero")
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)

plot_complex_number(2 + 3j)
```

How about many numbers? We'll need to get a little bit more creative. First, we need to create a 2D array, each element of which will be a 4-element array: `[0, 0, z.real, z.imag]`. Next, `plt.quiver()` can accept a range of values. Look at [this StackOverflow post](https://stackoverflow.com/questions/12265234/how-to-plot-2d-math-vectors-with-matplotlib) for details and adapt your code.

```
def plot_complex_numbers(numbers, colors):
    for key, number in enumerate(numbers):
        plt.quiver(0, 0, number.real, number.imag, color = colors[key], angles = "xy", scale_units = "xy", scale = 1)
    plt.xticks(range(-5, 6))
    plt.yticks(range(-5, 6))
    plt.xlabel("R")
    plt.ylabel("i")
    ax = plt.gca()
    ax.xaxis.set_label_coords(1, 0.4)
    ax.yaxis.set_label_coords(0.4, 1)
    ax.spines["bottom"].set_position("zero")
    ax.spines["left"].set_position("zero")
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)
    plt.show()

plot_complex_numbers([2 + 3j, -2 - 1j, -3, 2j], ["green", "red", "blue", "orange"])
```

Now let's see what the operations look like. Let's add two numbers and plot the result.

```
z1 = 2 + 3j
z2 = 1 - 1j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```

We can see that adding the complex numbers is equivalent to adding vectors (remember the "parallelogram rule").
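The equivalence can also be checked numerically, without any plotting. The following quick check (mine, not part of the original lab) uses only Python's built-in `complex` type to confirm that complex addition is exactly componentwise 2D vector addition:

```python
# Complex addition is componentwise, i.e. vector addition of the radius-vectors.
z1 = 2 + 3j
z2 = 1 - 1j
s = z1 + z2

# The sum's components are the sums of the components.
assert (s.real, s.imag) == (z1.real + z2.real, z1.imag + z2.imag)
print(s)  # (3+2j)
```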
As special cases, let's try adding pure real and pure imaginary numbers:

```
z1 = 2 + 3j
z2 = 2 + 0j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])

z1 = 2 + 3j
z2 = 0 + 2j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```

How about multiplication? First, we know that multiplying by 1 gives us the same vector and multiplying by -1 gives us the reversed version of the same vector. How about multiplication by $\pm i$?

```
z = 2 + 3j
plot_complex_numbers([z, z * 1], ["red", "blue"])
plot_complex_numbers([z, z * -1], ["red", "blue"])
plot_complex_numbers([z, z * 1j], ["red", "blue"])
plot_complex_numbers([z, z * -1j], ["red", "blue"])
```

So, multiplication by $i$ is equivalent to a 90-degree rotation. We can actually see the following equivalence relationships between multiplying numbers and rotation about the origin:

| Real | Imaginary | Result rotation |
|------|-----------|-----------------|
| 1    | 0         | $0^\circ$       |
| 0    | 1         | $90^\circ$      |
| -1   | 0         | $180^\circ$     |
| 0    | -1        | $270^\circ$     |

Once again, we see the power of abstraction and algebra in practice. We know that complex numbers and 2D vectors are equivalent. Now we see something more: addition and multiplication are equivalent to translation (movement) and rotation!

Let's test the multiplication some more. We can see that the resulting vector is the first vector *scaled and rotated* by the second: its length is the product of the two lengths, and its angle is the sum of the two angles.

```
z1 = 2 + 3j
z2 = 1 - 2j
# (2 + 3j) * (1 - 2j) = 2 - 4j + 3j + 6 = 8 - 1j
plot_complex_numbers([z1, z2, z1 * z2], ["red", "blue", "green"])
```
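The scale-and-rotate behavior of multiplication can be verified with the standard-library `cmath` module (this check is mine, not from the original lab):

```python
import cmath
import math

z1 = 2 + 3j
z2 = 1 - 2j
product = z1 * z2  # (8-1j), as computed by hand above

# Lengths multiply...
assert math.isclose(abs(product), abs(z1) * abs(z2))
# ...and angles add (cmath.phase returns the angle in radians).
assert math.isclose(cmath.phase(product), cmath.phase(z1) + cmath.phase(z2))

# Multiplying by i rotates by exactly 90 degrees with no scaling.
rotated = z1 * 1j
assert math.isclose(cmath.phase(rotated) - cmath.phase(z1), math.pi / 2)
assert math.isclose(abs(rotated), abs(z1))
```

Note that phase addition only matches directly when the sum stays within $(-\pi; \pi]$; otherwise it wraps around by $2\pi$.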
# Optimizing Performance Using Numba & Cython

## Numba & Cython: What are they?

At a high level, Numba and Cython are both modules that make your Python code run faster. This means we can have the quick prototyping and iteration that Python is known for, while getting the speed we expect from programs written in C. This is great--we can have our cake and eat it too!

## Use Case: Cholesky Decomposition

Matrix computations are a standard benchmark for speed. In that vein, we'll be examining the execution time of various implementations of Cholesky Decomposition, a method of matrix decomposition that we used in a previous homework assignment:

* A pure Python implementation
* A Numba-fied implementation
* A Cython-ized implementation
* The provided SciPy implementation

We'll begin with a pure Python implementation and work our way towards the Numba-fied and Cython-ized versions. Then, we'll see how those fare against SciPy.

## Some mathematical formalism:

This isn't too important for implementing a way of calculating the Cholesky decomposition, but it might provide some intuition. Feel free to skip ahead to the next section if you don't want to deal with the linear algebra details.

Formally, the Cholesky decomposition (factorization) is the decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. There are a lot of terms here that might be foreign if you've never taken a course in linear algebra before, so I'll try to break them down:

1. A square matrix $A$ is said to be Hermitian if for every entry $a_{i,j}$ in $A$, it's true that $a_{i,j} = \overline{a}_{j,i}$, where $\overline{z}$ denotes the complex conjugate of $z$.
   - The complex conjugate of a complex number $z = a + bi\,$ is defined to be $\overline{z} := a - bi$.
   - Symmetric, real, square matrices are Hermitian
2.
A Hermitian matrix $A$ is said to be positive-definite if the scalar $\overline{z}^T A z$ is real and positive for all non-zero column vectors $z$ of complex numbers.
3. A lower triangular matrix is a matrix $L$ of the form

$$\begin{bmatrix}\ell_{11} & 0 & \cdots & 0 \\ \ell_{21} & \ell_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \ell_{n1} & \ell_{n2} & \cdots & \ell_{nn}\end{bmatrix}$$

So, given a symmetric, positive-definite matrix $A$, the Cholesky decomposition of $A$ gives us a lower triangular matrix $L$ such that

$$ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = \begin{bmatrix} \ell_{11} & 0 & \cdots & 0 \\ \ell_{21} & \ell_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \ell_{n1} & \ell_{n2} & \cdots & \ell_{nn} \end{bmatrix} \begin{bmatrix} \overline{\ell}_{11} & \overline{\ell}_{21} & \cdots & \overline{\ell}_{n1} \\ 0 & \overline{\ell}_{22} & \cdots & \overline{\ell}_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \overline{\ell}_{nn} \end{bmatrix} = L\overline{L}^T $$

## The Cholesky-Banachiewicz & Cholesky-Crout Algorithms

###### (Wikipedia)

Let $A$ be a real, symmetric, positive-definite matrix. Then, the Cholesky factor $L$ of $A$ is the lower triangular matrix such that $A = LL^T$ and

$$ L_{ii} = \sqrt{A_{ii} - \sum_{k=1}^{i-1}L_{ik}^2} \\ L_{ij} = \frac{1}{L_{jj}}\left(A_{ij} - \sum_{k=1}^{j-1}L_{ik}L_{jk}\right) \;\;\; \text{for $i > j$} $$

This means that we can compute $L_{ij}$ if we know the entries to the left and above.

- The Cholesky-Banachiewicz algorithm starts from the upper left corner of the matrix $L$ and proceeds to calculate the matrix row by row.
- The Cholesky-Crout algorithm starts from the upper left corner of the matrix $L$ and proceeds to calculate the matrix column by column.

Now we can move on to implementation.
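As a quick sanity check of the recurrences above, here is a tiny $2 \times 2$ example (my own, not from the original tutorial). Take

$$A = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix}: \qquad
\ell_{11} = \sqrt{a_{11}} = 2, \qquad
\ell_{21} = \frac{a_{21}}{\ell_{11}} = 1, \qquad
\ell_{22} = \sqrt{a_{22} - \ell_{21}^2} = \sqrt{2}$$

and indeed

$$LL^T = \begin{bmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 0 & \sqrt{2} \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} = A.$$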
```
def cholesky_banachiewicz_pure(A):
    n = len(A)
    L = [[0.0] * n for _ in xrange(n)]
    for i in xrange(n):
        for j in xrange(i + 1):  # Build row i
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in xrange(j))
            L[i][j] = temp**0.5 if i == j else temp / L[j][j]
    return L

def cholesky_crout_pure(A):
    n = len(A)
    L = [[0.0] * n for _ in xrange(n)]
    for j in xrange(n):
        for i in xrange(j, n):  # Build column j
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in xrange(j))
            L[i][j] = temp**0.5 if i == j else temp / L[j][j]
    return L
```

Let's consider a small example just to verify that our implementation is working as intended.

```
import scipy
import scipy.linalg

A = [[6, 3, 4, 8], [3, 6, 5, 1], [4, 5, 10, 7], [8, 1, 7, 25]]
A_array = scipy.array(A)

L_banachiewicz = cholesky_banachiewicz_pure(A)
L_crout = cholesky_crout_pure(A)
L_scipy = scipy.linalg.cholesky(A_array, lower=True).tolist()

assert L_banachiewicz == L_crout == L_scipy
print "Looks good!"
```

Using the ``timeit`` module, let's write a small function that'll let us profile our various implementations:

```
import timeit

def profile(func, n=100000):
    def profiled_func(*args, **kwargs):
        total = 0.0
        worst = 0.0
        best = 999999.999  # a sufficiently large amount of time
        for _ in xrange(n):
            start_time = timeit.default_timer()
            func(*args, **kwargs)
            end_time = timeit.default_timer()
            duration = end_time - start_time
            if duration > worst:
                worst = duration
            if duration < best:
                best = duration
            total += duration
        avg = total / n
        print "%s:" % (func.__name__)
        print "    average execution time = %f" % avg
        print "    fastest execution time = %f" % best
        print "    slowest execution time = %f" % worst
    return profiled_func
```

Alternatively, we could make use of the ``%timeit`` line magic:

```
%timeit -r 10 cholesky_banachiewicz_pure(A)
```

For the sake of clarity, however, we'll be using our small ``profile`` function to benchmark our code for the rest of the tutorial.
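The tutorial's code above targets Python 2 (`xrange`, `print` statements). As a self-contained sketch of the same row-by-row recurrence, here is a Python 3 port of mine that checks the result by reconstructing $A = LL^T$ without needing SciPy at all:

```python
import math

# Python 3 port of the Cholesky-Banachiewicz recurrence from the tutorial.
def cholesky_lower(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):  # build row i, left to right
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(temp) if i == j else temp / L[j][j]
    return L

A = [[6, 3, 4, 8], [3, 6, 5, 1], [4, 5, 10, 7], [8, 1, 7, 25]]
L = cholesky_lower(A)
n = len(A)

# L must be lower triangular...
assert all(L[i][j] == 0.0 for i in range(n) for j in range(i + 1, n))

# ...and L L^T must reproduce A (up to floating-point roundoff).
reconstructed = [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
                 for i in range(n)]
assert all(abs(reconstructed[i][j] - A[i][j]) < 1e-9
           for i in range(n) for j in range(n))
```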
```
profile(cholesky_banachiewicz_pure)(A)
profile(cholesky_crout_pure)(A)
profile(scipy.linalg.cholesky)(A_array, lower=True)
```

It looks like the SciPy implementation is a few microseconds faster than our pure Python implementation. As the matrices get larger, so does the difference in execution time between pure Python and SciPy. To illustrate, I've included some graphs of what the execution time looks like as the size of $A$ grows:

![SciPy vs. Pure Python](figures/scipy_vs_pure.png)

Let's see how these performance graphs change when we optimize our code using Numba.

```
from numba import jit
import numpy as np

@jit
def cholesky_banachiewicz_numba(A):
    n = len(A)
    L = np.zeros(A.shape)
    for i in xrange(n):
        for j in xrange(i + 1):
            temp = A[i,j]
            for k in xrange(j):
                temp -= L[i,k] * L[j,k]
            L[i,j] = temp**0.5 if i == j else temp / L[j,j]
    return L

@jit
def cholesky_crout_numba(A):
    n = len(A)
    L = np.zeros(A.shape)
    for j in xrange(n):
        for i in xrange(j, n):
            temp = A[i,j]
            for k in xrange(j):
                temp -= L[i,k] * L[j,k]
            L[i,j] = temp**0.5 if i == j else temp / L[j,j]
    return L

profile(cholesky_banachiewicz_numba)(A_array)
profile(cholesky_crout_numba)(A_array)
profile(scipy.linalg.cholesky)(A_array, lower=True)
```

Now we're beating SciPy's implementation by a longshot. That's great! But wait, our code has changed. Why?

Numba only accelerates code that uses scalars or (N-dimensional) arrays. You can't use built-in types like ``list`` or ``dict`` or your own custom classes, you can't allocate new arrays in accelerated code, and you can't even use recursion. This means that Numba is only useful in certain cases. Let's see how the performance changes as we increase the size of the matrix:

![Numba vs. SciPy](figures/numba_vs_scipy.png)

For additional information, examples, and documentation, check out the [Numba](http://numba.pydata.org/) website.
In general though, to Numba-fy your code, apply the ``@jit`` decorator, remove built-in types and custom classes, translate recursive functions to iterative ones, and don't allocate new arrays if possible.

## The Cython Language

###### (Cython docs)

Cython is a programming language that makes writing C extensions for the Python language as easy as Python itself. It aims to become a superset of the Python language, which gives it high-level, object-oriented, functional, and dynamic programming. Its main feature on top of these is support for optional static type declarations as part of the language. The source code gets translated into optimized C/C++ code and compiled as Python extension modules. This allows for both very fast program execution and tight integration with external C libraries, while keeping up the high programmer productivity for which the Python language is well known.

Using Cython in IPython notebooks is fairly straightforward. First, we load the Cython extension within our notebook:

```
%load_ext Cython
```

Now, we can use the cell magic ``%%cython`` to compile our original pure Python solution in the next cell:

```
%%cython
def cholesky_banachiewicz_cython_v1(A):
    n = len(A)
    L = [[0.0] * n for _ in xrange(n)]
    for i in xrange(n):
        for j in xrange(i + 1):
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in xrange(j))
            L[i][j] = temp**0.5 if i == j else temp / L[j][j]
    return L

def cholesky_crout_cython_v1(A):
    n = len(A)
    L = [[0.0] * n for _ in xrange(n)]
    for j in xrange(n):
        for i in xrange(j, n):
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in xrange(j))
            L[i][j] = temp**0.5 if i == j else temp / L[j][j]
    return L
```

```
profile(cholesky_banachiewicz_cython_v1)(A)
profile(cholesky_crout_cython_v1)(A)
profile(scipy.linalg.cholesky)(A_array, lower=True)
```

Notice how we only needed to use the ``%%cython`` cell magic to gain this speedup.
Unlike with Numba, we didn't need to make any changes to our code to see improvements; however, the speedup we get from Cython isn't quite as good as the one we get from Numba. In fact, this approach yields almost no improvement for small matrices, and can actually worsen our performance as the dimensions of our input grow:

![Cython (v1) vs. Pure Python](figures/cython-v1_vs_pure.png)
![Cython (v1) vs. Numba](figures/cython-v1_vs_numba.png)
![Cython (v1) vs. SciPy](figures/cython-v1_vs_scipy.png)

Not all hope is lost for Cython, though! We can do better using what are called *typed memoryviews* and learning a little bit more about the Cython language.

#### C vs. Python Functions/Variables in Cython

In Cython, we can declare both C variables/functions and Python variables/functions. To declare a C variable or function, we use the ``cdef`` keyword with type definitions. Python variables and functions can be declared just as they are in Python. If we wanted to declare integers ``i, j, k`` and floats ``f, g[42], *h`` as C variables, we would do the following:

```cython
cdef int i, j, k
cdef float f, g[42], *h
```

C functions written in Cython, like C variables, are declared with the ``cdef`` keyword. C functions in Cython can take in either Python objects or C values as arguments, and can return either Python objects or C values. The scope of C functions written in Cython is limited, however, to the module in which they were written: "Within a Cython module, Python functions and C functions can call each other freely, but only Python functions can be called from outside the module by interpreted Python code. So any functions that you want to "export" from your Cython module must be declared as Python functions using ``def``."

To learn more about the differences between C functions and Python functions in Cython, check out [Cython Language Basics](http://cython.readthedocs.io/en/latest/src/userguide/language_basics.html).
#### Buffers and MemoryViews in Python

Before we continue, let's take a moment to consider how Python does operations on things like Strings, DataFrames, and Series. These objects (except for DataFrames and Series when inplace operations are applied) are immutable. To perform calculations and transformations on them requires that we first make a copy of the object and then apply our operations. Whenever we index into a String, DataFrame, or Series by slicing, we're making a copy of the original object. This is why you'll notice that your program runs a lot slower when you, for example, define an empty DataFrame and iteratively insert all the rows one-by-one, than when you use a mutable class (like a dictionary) to iteratively build your object *and then* convert it to a DataFrame.

Python objects implemented in C can export a group of functions called the "buffer interface." These functions can be used by an object to expose its data in a raw byte-oriented format. Clients of the object can use the buffer interface to access the object data directly, without needing to copy it first. Since the release of Python 2.7, MemoryViews and Buffers provide an efficient way to deal with the general copying behavior when dealing with objects like Strings, DataFrames, and Series. A MemoryView is just like a buffer, except you can also write to it, not just read. To learn more about MemoryViews in Python 2.7, check out the [documentation](https://docs.python.org/2/c-api/buffer.html).

#### Typed MemoryViews in Cython

From the Cython documentation page on [Typed MemoryViews](http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html): "Typed MemoryViews allow efficient access to memory buffers, such as those underlying NumPy arrays, without incurring any Python overhead."
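The copy-free access described above can be seen directly with Python's built-in `memoryview` (a small standalone sketch of mine, not from the tutorial):

```python
# Slicing a bytes-like object copies; slicing a memoryview does not.
data = bytearray(b"hello world")
view = memoryview(data)

prefix = view[:5]          # a view into `data` -- no bytes are copied
data[0] = ord("H")         # mutate the underlying buffer

# The view observes the change, because it shares memory with `data`.
print(prefix.tobytes())    # b'Hello'

# By contrast, a plain slice of the bytearray is an independent copy.
snapshot = bytes(data[:5])
data[1] = ord("E")
print(snapshot)            # b'Hello' (unchanged)
```

This is the same idea that Cython's typed memoryviews exploit: operate on the buffer in place instead of paying for a copy on every slice.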
Here are some examples of using typed MemoryViews in Cython (taken from the documentation):

```cython
# Create a complete view on a one-dimensional int buffer:
cdef int[:] view1d = oneD_obj

# A complete 3D view:
cdef int[:, :, :] view3D = threeD_obj
```

Using typed MemoryViews in our Cython code will provide the compiler more information about the desired behavior, enabling it to make further optimizations at compile time. Armed with this information, we can now create an improved version of our Cython implementation:

```
%%cython
import numpy as np

def cholesky_banachiewicz_cython_v2(long[:, :] A):
    cdef int i, j, k
    cdef int n = len(A)
    cdef double[:, :] L = np.zeros(shape=(n, n))
    for i in xrange(n):
        for j in xrange(i + 1):
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in xrange(j))
            L[i][j] = temp**0.5 if i == j else temp / L[j][j]
    return np.asarray(L)

def cholesky_crout_cython_v2(long[:, :] A):
    cdef int i, j, k
    cdef int n = len(A)
    cdef double[:, :] L = np.zeros(shape=(n, n))
    for j in xrange(n):
        for i in xrange(j, n):
            temp = A[i][j] - sum(L[i][k] * L[j][k] for k in xrange(j))
            L[i][j] = temp**0.5 if i == j else temp / L[j][j]
    return np.asarray(L)
```

```
profile(cholesky_banachiewicz_cython_v2)(A_array)
profile(cholesky_crout_cython_v2)(A_array)
profile(scipy.linalg.cholesky)(A_array, lower=True)
```

Now, the performance graphs look a little different:

![Cython (v2) vs. Pure Python](figures/cython-v2_vs_pure.png)
![Cython (v2) vs. Numba](figures/cython-v2_vs_numba.png)
![Cython (v2) vs. SciPy](figures/cython-v2_vs_scipy.png)

We managed to dramatically improve our performance relative to our ``_cython_v1`` implementations; however, we didn't beat out Numba or SciPy. In any case, though, the difference between our two Cython implementations should provide sufficient evidence to convince you that simply using the ``%%cython`` magic isn't sufficient to make full use of Cython.
Using Cython in your terminal takes a little more work, but [this](http://cython.readthedocs.io/en/latest/src/quickstart/build.html) should show you how to get up and running.

## Conclusion

First and foremost, we should walk away from this reassured of the efficiency of the existing SciPy implementations. Unless you're tackling something very specific, it's almost always a good idea to use the SciPy implementation if it's available to you.

However, what do we do when we don't have SciPy available to us, or it's not exactly what we need? That's where Numba and Cython come in. Due to the limitations of Numba (e.g., no lists, dicts, recursion, custom classes, etc.), it's not always the appropriate solution. In the cases where we can't use Numba, we can use Cython, which allows us to gain some noticeable speedups in comparison to a pure Python implementation.

We've also seen that Cython's relative generalizability comes at a cost: it takes significantly more effort on behalf of the coder to reach Numba- or SciPy-like levels of efficiency. At a certain point, the amount of effort spent optimizing a Cython function might be better spent writing an actual C implementation, with considerations being made to memory layout and caching.
# Jupyter

We'll be using Jupyter for all of our examples&mdash;this allows us to run python in a web-based notebook, keeping a history of input and output, along with text and images. For Jupyter help, visit: https://jupyter.readthedocs.io/en/latest/content-quickstart.html

We interact with python by typing into _cells_ in the notebook. By default, a cell is a _code_ cell, which means that you can enter any valid python code into it and run it. Another important type of cell is a _markdown_ cell. This lets you put text, with different formatting (italics, bold, etc.) that describes what the notebook is doing. You can change the cell type via the menu at the top, or using the shortcuts:

* ctrl-m m : markdown cell
* ctrl-m y : code cell

Some useful shortcuts:

* shift+enter = run cell and jump to the next (creating a new cell if there is no other new one)
* ctrl+enter = run cell in place
* alt+enter = run cell and insert a new one below

```
print("hello")
```

A "markdown cell" enables you to typeset LaTeX equations right in your notebook. Just put them in `$` or `$$`:

$$e^{ix} = \cos x + i \sin x$$

<div class="alert alert-block alert-warning">
<b>Important</b>: When you work through a notebook, everything you did in previous cells is still in memory and <em>known</em> by python, so you can refer to functions and variables that were previously defined. Even if you go up to the top of a notebook and insert a cell, everything done earlier in your notebook session is still defined&mdash;it doesn't matter where physically you are in the notebook. If you want to reset things, you can use the options under the _Kernel_ menu.
</div>

<div class="alert alert-block alert-info"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3>
Create a new cell below this one.
Make sure that it is a _code_ cell, and enter the following code and run it:

```
print("Hello, World")
```
</div>

`print()` is a _function_ in python that takes arguments (in the `()`) and outputs to the screen. You can print multiple quantities at once like:

```
print(1, 2, 3)
```

# Basic Datatypes

Now we'll look at some of the basic datatypes in python&mdash;these are analogous to what you will find in most programming languages, including numbers (integers and floating point), and strings. Some examples come from the python tutorial: http://docs.python.org/3/tutorial/

## integers

Integers are numbers without a decimal point. They can be positive or negative. Most programming languages use a finite amount of memory to store a single integer, but python will expand the amount of memory as necessary to store large integers. The basic operators, `+`, `-`, `*`, and `/`, work with integers.

```
2+2+3
2*-4
```

<div class="alert alert-block alert-warning">
Note: integer division is one place where python 2 and python 3 differ.

In python 3.x, dividing 2 integers results in a float. In python 2.x, dividing 2 integers results in an integer. The latter is consistent with many strongly-typed programming languages (like Fortran or C), since the data type of the result is the same as the inputs, but the former is more in line with our expectations.
</div>

```
1/2
```

To get an integer result, we can use the `//` operator.

```
1//2
```

Python is a _dynamically-typed language_&mdash;this means that we do not need to declare the datatype of a variable before initializing it. Here we'll create a variable (think of it as a descriptive label that can refer to some piece of data). The `=` operator assigns a value to a variable.

```
a = 1
b = 2
```

Functions operate on variables and return a result. Here, `print()` will output to the screen.
```
print(a+b)
print(a*b)
```

Note that variable names are case sensitive, so `a` and `A` are different.

```
A = 2048
print(a, A)
```

Here we initialize 3 variables all to `0`, but these are still distinct variables, so we can change one without affecting the others.

```
x = y = z = 0
print(x, y, z)

z = 1
print(x, y, z)
```

Python has some built-in help (and Jupyter/ipython has even more).

```
help(x)
x?
```

Another function, `type()`, returns the data type of a variable.

```
print(type(x))
```

Note in languages like Fortran and C, you specify the amount of memory an integer can take (usually 2 or 4 bytes). This puts a restriction on the largest size integer that can be represented. Python will adapt the size of the integer so you don't *overflow*.

```
a = 12345678901234567890123456789012345123456789012345678901234567890
print(a)
print(a.bit_length())
print(type(a))
```

## floating point

When operating with both floating point and integers, the result is promoted to a float. This is true of both python 2.x and 3.x.

```
1./2
```

but note the special integer division operator

```
1.//2
```

It is important to understand that since there are infinitely many real numbers between any two bounds, on a computer we have to approximate this by a finite number. There is an IEEE standard for floating point that pretty much all languages and processors follow.

This means two things:

* not every real number will have an exact representation in floating point
* there is a finite precision to numbers -- below this we lose track of differences (this is usually called *roundoff* error)

This paper: [What every computer scientist should know about floating-point arithmetic](https://dl.acm.org/doi/10.1145/103162.103163) is a great reference on understanding how a computer stores numbers.

Consider the following expression, for example:

```
0.3/0.1 - 3
```

Here's another example: The number 0.1 cannot be exactly represented on a computer.
In our print, we use a format specifier (the stuff inside of the {}) to ask for more precision to be shown:

```
a = 0.1
print(f"{a:30.20}")
```

we can ask python to report the limits on floating point

```
import sys
print(sys.float_info)
```

Note that this says that we can only store numbers between 2.2250738585072014e-308 and 1.7976931348623157e+308

We also see that the precision is 2.220446049250313e-16 (this is commonly called _machine epsilon_). To see this, consider adding a small number to 1.0. We'll use the equality operator (`==`) to test if two numbers are equal:

<div class="alert alert-block alert-info">
<h3><span class="fa fa-flash"></span> Quick Exercise:</h3>

1. Define two variables, $a = 1$, and $e = 10^{-16}$.
2. Now define a third variable, `b = a + e`
3. We can use the python `==` operator to test for equality. What do you expect `b == a` to return? Run it and see if it agrees with your guess.

</div>

## modules

The core python language is extended by a standard library that provides additional functionality. These added pieces are in the form of modules that we can _import_ into our python session (or program).

The `math` module provides functions that do the basic mathematical operations as well as provide constants (note there is a separate `cmath` module for complex numbers).

In python, you `import` a module. The functions are then defined in a separate _namespace_&mdash;this is a separate region that defines names and variables, etc. A variable in one namespace can have the same name as a variable in a different namespace, and they don't clash. You use the "`.`" operator to access a member of a namespace.

By default, when you type stuff into the python interpreter or here in the Jupyter notebook, or in a script, it is in its own default namespace, and you don't need to prefix any of the variables with a namespace indicator.
```
import math
```

`math` provides the value of pi.

```
print(math.pi)
```

This is distinct from any variable `pi` we might define here.

```
pi = 3
print(pi, math.pi)
```

Note here that `pi` and `math.pi` are distinct from one another&mdash;they are in different namespaces.

### floating point operations

The same operators, `+`, `-`, `*`, `/`, work as usual for floating point numbers. To raise a number to a power, we use the `**` operator (this is the same as Fortran).

```
R = 2.0
print(math.pi*R**2)
```

Operator precedence follows that of most languages. See https://docs.python.org/3/reference/expressions.html#operator-precedence

In order of precedence:

* quantities in `()`
* slicing, calls, subscripts
* exponentiation (`**`)
* `+x`, `-x`, `~x`
* `*`, `@`, `/`, `//`, `%`
* `+`, `-`

(after this are bitwise operations and comparisons)

Parentheses can be used to override the precedence.

<div class="alert alert-block alert-info">
<h3><span class="fa fa-flash"></span> Quick Exercise:</h3>

Consider the following expressions. Using the ideas of precedence, think about what value will result, then try it out in the cell below to see if you were right.

* `1 + 3*2**2`
* `1 + (3*2)**2`
* `2**3**2`

</div>

The `math` module provides a lot of the standard math functions we might want to use.
For the trig functions, the expectation is that the argument to the function is in radians&mdash;you can use `math.radians()` to convert from degrees to radians, ex:

```
print(math.cos(math.radians(45)))
```

Notice that in that statement we are feeding the output of one function (`math.radians()`) into a second function, `math.cos()`.

When in doubt, ask for help to discover all of the things a module provides:

```
help(math)
```

## complex numbers

python uses '`j`' to denote the imaginary unit.

```
print(1.0 + 2j)

a = 1j
b = 3.0 + 2.0j
print(a + b)
print(a*b)
```

we can use `abs()` to get the magnitude and separately get the real or imaginary parts.

```
print(abs(b))
print(a.real)
print(a.imag)
```

## strings

python doesn't care if you use single or double quotes for strings:

```
a = "this is my string"
b = 'another string'
print(a)
print(b)
```

Many of the usual mathematical operators are defined for strings as well. For example, to concatenate or duplicate:

```
print(a+b)
print(a + ". " + b)
print(a*2)
```

There are several escape codes that are interpreted in strings. These start with a backwards-slash, `\`. E.g., you can use `\n` for a new line.

```
a = a + "\n\n"
print(a)
```

<div class="alert alert-block alert-info">
<h3><span class="fa fa-flash"></span> Quick Exercise:</h3>

The `input()` function can be used to ask the user for input.

* Use `help(input)` to see how it works.
* Write code to ask for input and store the result in a variable. `input()` will return a string.
* Use the `float()` function to convert a number entered as input to a floating point variable.
* Check to see if the conversion worked using the `type()` function.

</div>

`"""` can enclose multiline strings. This is useful for docstrings at the start of functions (more on that later...)

```
c = """
Lorem ipsum dolor sit amet, consectetur adipisicing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."""
print(c)
```

a raw string does not replace escape sequences (like \n). Just put an `r` before the first quote:

```
d = r"this is a raw string\n"
print(d)
```

slicing is used to access a portion of a string. slicing a string can seem a bit counterintuitive if you are coming from Fortran. The trick is to think of the index as representing the left edge of a character in the string. When we do arrays later, the same will apply.

Also note that python (like C) uses 0-based indexing. Negative indices count from the right.

```
a = "this is my string"
print(a)
print(a[5:7])
print(a[0])
print(d)
print(d[-2])
```

<div class="alert alert-block alert-info">
<h3><span class="fa fa-flash"></span> Quick Exercise:</h3>

Strings have a lot of _methods_ (functions that know how to work with a particular datatype, in this case strings). A useful method is `.find()`. For a string `a`, `a.find(s)` will return the index of the first occurrence of `s`.

For our string `c` above, find the first `.` (identifying the first full sentence), and print out just the first sentence in `c` using this result.

</div>

There are also a number of methods and functions that work with strings. Here are some examples:

```
print(a.replace("this", "that"))
print(len(a))
print(a.strip())    # Also notice that strip removes the \n
print(a.strip()[-1])
```

Note that our original string, `a`, has not changed. In python, strings are *immutable*. Operations on strings return a new string.

```
print(a)
print(type(a))
```

As usual, ask for help to learn more:

```
help(str)
```

We can format strings when we are printing to insert quantities in particular places in the string.
A `{}` serves as a placeholder for a quantity and is replaced using the `.format()` method: ``` a = 1 b = 2.0 c = "test" print("a = {}; b = {}; c = {}".format(a, b, c)) ```
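As an aside (a quick sketch, not in the original notes), a format specifier after a colon controls width and precision, and Python 3.6+ f-strings give a terser way to do the same interpolation:

```python
a = 1
b = 2.0
c = "test"

# format specifiers after a colon control width and precision
print("b = {:8.3f}".format(b))        # right-justified in 8 characters, 3 decimals

# f-strings (Python 3.6+) interpolate expressions directly
print(f"a = {a}; b = {b}; c = {c}")
print(f"pi to 4 digits: {3.14159265:.4f}")
```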
``` #default_exp nn_utils #export import torchvision import torch import torch.nn as nn import torch.nn.functional as F import numpy as np import matplotlib.pyplot as plt #export def c_imshow(img): npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() #export nn_utils class Flatten(nn.Module): def __init__(self): super(Flatten, self).__init__() def forward(self, x): return x.view(x.shape[0], -1) #export def conv3x3(in_channels, out_channels, return_indices=False, **kwargs): return nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, **kwargs), nn.BatchNorm2d(out_channels), nn.ReLU(), nn.MaxPool2d(2) if return_indices == False else nn.MaxPool2d(2, return_indices=True) ) #export nn_utils def get_proto_accuracy(prototypes, embeddings, targets): """Compute the accuracy of the prototypical network on the test/query points. Parameters ---------- prototypes : `torch.FloatTensor` instance A tensor containing the prototypes for each class. This tensor has shape `(meta_batch_size, num_classes, embedding_size)`. embeddings : `torch.FloatTensor` instance A tensor containing the embeddings of the query points. This tensor has shape `(meta_batch_size, num_examples, embedding_size)`. targets : `torch.LongTensor` instance A tensor containing the targets of the query points. This tensor has shape `(meta_batch_size, num_examples)`. Returns ------- accuracy : `torch.FloatTensor` instance Mean accuracy on the query points. """ sq_distances = torch.sum((prototypes.unsqueeze(1) - embeddings.unsqueeze(2)) ** 2, dim=-1) _, predictions = torch.min(sq_distances, dim=-1) return torch.mean(predictions.eq(targets).float()) #export def get_accuracy(logits, targets): """Compute the accuracy (after adaptation) of MAML on the test/query points Parameters ---------- logits : `torch.FloatTensor` instance Outputs/logits of the model on the query points. This tensor has shape `(num_examples, num_classes)`. 
targets : `torch.LongTensor` instance A tensor containing the targets of the query points. This tensor has shape `(num_examples,)`. Returns ------- accuracy : `torch.FloatTensor` instance Mean accuracy on the query points """ _, predictions = torch.max(logits, dim=-1) return torch.mean(predictions.eq(targets).float()) from nbdev.export import notebook2script; notebook2script() ```
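As a sanity check on the logic (a NumPy sketch mirroring `get_accuracy`, not part of the exported module): the argmax of each row of logits is compared against the targets and averaged.

```python
import numpy as np

def get_accuracy_np(logits, targets):
    """NumPy mirror of get_accuracy: mean of (argmax(logits) == targets)."""
    predictions = np.argmax(logits, axis=-1)
    return np.mean(predictions == targets)

logits = np.array([[2.0, 0.1, 0.3],   # predicts class 0
                   [0.2, 1.5, 0.1],   # predicts class 1
                   [0.4, 0.3, 0.2]])  # predicts class 0
targets = np.array([0, 1, 2])
print(get_accuracy_np(logits, targets))  # 2 of 3 correct
```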
In this lab session we will learn how to pre-process feature vectors using numpy. For this purpose, let's create 10 feature vectors that have 5 features. We use numpy.random for generating these examples.

```
import numpy
X = numpy.random.randn(10, 5)
```

Let's print this matrix X where each row is a feature vector.

```
print(X)
```

We can access the i-th row by X[i,:]. Likewise, the j-th column of X can be accessed by X[:,j]

```
print(X[1,:])
print(X[:,1])
```

Next, let's $\ell_1$ normalize each feature vector. For this purpose we must compute the sum of the absolute values in each feature vector and divide each element in the vector by the norm. The $\ell_1$ norm is defined as follows: $\ell_1 (\mathbf{x}) = \sum_i |x_i|$

Let us compute the $\ell_1$ norm of each feature vector in X. We can use the abs function that gives the absolute value of a number. This function operates on each element of an array as well, which is very convenient. The sum function gives the sum, obviously!

```
for i in range(0, 10):
    print(i, numpy.sum(numpy.abs(X[i,:])))
```

Now let's compute $\ell_2$ norms instead. We need to compute the squares, add them and take the square root for this.

```
for i in range(0, 10):
    print(i, numpy.sqrt(numpy.sum(X[i,:] * X[i,:])))
```

If you wanted to $\ell_2$ normalize X then this can be done as follows.

```
for i in range(0,10):
    norm = numpy.sqrt(numpy.sum(X[i,:] * X[i,:]))
    X[i,:] = X[i,:] / norm
print(X)
```

Just to make sure that X is indeed $\ell_2$ normalized let's print the norms again.

```
for i in range(0,10):
    print(i, numpy.sqrt(numpy.sum(X[i,:] * X[i,:])))
```

OK! That looks fine. Now try to $\ell_1$ normalize X as well by yourself.

Let us assume that we further wish to scale each feature (dimension) to [0,1] range using the (x - min) / (max - min) method (see the lecture notes for details). We need to find the min and max for each feature across all feature vectors. This turns out to be computing the min and max for each column in X.
Guess what, numpy has min and max functions that return the min and max values of an array. How convenient...

```
print(X[:,0])
print(numpy.min(X[:,0]))
print(numpy.max(X[:,0]))
```

Let's use these functions to perform the [0,1] scaling on X.

```
for j in range(0, 5):
    minVal = numpy.min(X[:,j])
    maxVal = numpy.max(X[:,j])
    for i in range(0, 10):
        X[i,j] = (X[i,j] - minVal) / (maxVal - minVal)
print(X)
```

OK! Everything is in [0,1] now. One thing to remember is that if min and max are the same then the division during the scaling will be illegal. If this is the case then it means all values of that feature are the same. So you can either set it to 0 or 1, as you wish, as long as it is consistent. Of course, if a feature has the same value across all train instances then it is not a useful feature because it does not discriminate the different classes. So you can even remove that feature from your train data and be happy about it (one less feature to worry about).

Let us assume that we wanted to do Gaussian scaling (see lecture notes) on this X. Here, we would use (x - mean) / sd, where sd is the standard deviation of the feature values. Not very surprisingly numpy has numpy.mean and numpy.std functions that do exactly this. I guess at this point I can convince you why you should use python+numpy for data mining and machine learning.

```
for j in range(0, 5):
    mean = numpy.mean(X[:,j])
    sd = numpy.std(X[:,j])
    for i in range(0, 10):
        X[i,j] = (X[i,j] - mean) / sd
print(X)
```
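The explicit loops above are good for learning, but the idiomatic numpy style is to do the same scaling as whole-array (vectorized) operations. A quick sketch, assuming no constant columns:

```python
import numpy

X = numpy.random.randn(10, 5)

# [0,1] scaling: per-column min/max (axis=0), broadcast across the rows
X01 = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Gaussian scaling: per-column (x - mean) / sd
Xg = (X - X.mean(axis=0)) / X.std(axis=0)

print(X01.min(axis=0), X01.max(axis=0))  # columns of zeros and ones
print(Xg.mean(axis=0), Xg.std(axis=0))   # means ~0, standard deviations ~1
```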
```
from __future__ import division, print_function, unicode_literals
%matplotlib inline
%config InlineBackend.print_figure_kwargs = {'dpi' : 150}
import numpy as np
import qinfer as qi
import matplotlib.pyplot as plt
plt.style.use('ggplot-rq')
plt.rcParams['savefig.frameon'] = False
```

## Example: Impoverishment ##

```
import scipy.stats as stats
stats.norm.pdf([1, 1]).prod()

particles_good = np.random.randn(1200, 2)
particles_bad = np.random.uniform(-4, 4, (400, 2))
wts_bad = np.prod(stats.norm.pdf(particles_bad), axis=1)
wts_bad /= wts_bad.sum()

try:
    style_cycle = plt.rcParams['axes.prop_cycle']()
except:
    from cycler import cycler
    style_cycle = iter(cycler('color', plt.rcParams['axes.color_cycle']))

plt.figure(figsize=(12, 6))
ax = plt.subplot(1, 2, 1)
plt.scatter(particles_bad[:, 0], particles_bad[:, 1], s=1200 * wts_bad, **next(style_cycle))
plt.legend(['400 Samples'], bbox_to_anchor=(1, 1.1), scatterpoints=1)
plt.gca().set_aspect('equal')
plt.subplot(1, 2, 2, sharex=ax, sharey=ax)
plt.scatter(particles_good[:, 0], particles_good[:, 1], **next(style_cycle))
plt.legend(['1200 Samples'], bbox_to_anchor=(1, 1.1), scatterpoints=1)
plt.gca().set_aspect('equal')
plt.savefig('figures/impovrishment.png', format='png', dpi=300, frameon=False, transparent=False)
```

## Example: Rabi/Ramsey Estimation ##

```
w = 70.3
w_max = 100.0
ts = np.pi * (1 + np.arange(100)) / (2 * w_max)
ideal_signal = np.sin(w * ts / 2) ** 2
n_shots = 100
counts = np.random.binomial(n=n_shots, p=ideal_signal)

plt.plot(ts, ideal_signal, label='Signal', lw=1)
plt.plot(ts, counts / n_shots, '.', label='Data')
plt.xlabel(u'Time (µs)')
plt.ylabel(r'Population')
plt.ylim(-0.01, 1.01)
plt.legend(ncol=2, bbox_to_anchor=(1, 1.15), numpoints=3)
plt.savefig('figures/rabi-example-signal.png', format='png', dpi=300, frameon=False, transparent=True)

ideal_spectrum = np.abs(np.fft.fftshift(np.fft.fft(ideal_signal - ideal_signal.mean())))**2
spectrum = np.abs(np.fft.fftshift(np.fft.fft((counts -
counts.mean()) / n_shots)))**2 ft_freq = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n=len(counts), d=ts[1] - ts[0])) plt.plot(ft_freq, ideal_spectrum, lw=1, label='Signal') plt.plot(ft_freq, spectrum, '.', label='Data') ylim = plt.ylim() # plt.vlines(w, *ylim) plt.xlim(xmin=0, xmax=100) # plt.ylim(*ylim) plt.legend(ncol=2, bbox_to_anchor=(1, 1.15), numpoints=3) plt.xlabel('$\omega$ (MHz)') plt.savefig('figures/rabi-example-spectrum.png', format='png', dpi=300, frameon=False, transparent=True) data = np.column_stack([counts, ts, n_shots * np.ones_like(counts)]) mean, cov, extra = qi.simple_est_prec(data, freq_min=0, freq_max=w_max, return_all=True) print("Error: {:0.2e}. Estimated error: {:0.2e}.".format(abs(mean - w) / w, np.sqrt(cov) / w)) mask = np.logical_and(ft_freq >= 60, ft_freq <= 80) plt.plot(ft_freq[mask], spectrum[mask] / np.trapz(spectrum[mask], ft_freq[mask])) xlim = plt.xlim(60, 80) extra['updater'].plot_posterior_marginal(range_min=xlim[0], range_max=xlim[1], res=500) plt.xlim(*xlim) plt.ylim(ymin=-0.01) plt.xlabel('$\omega$ (MHz)') plt.legend(['Spectrum', 'Bayesian'], ncol=2, bbox_to_anchor=(1, 1.15)) plt.savefig('figures/rabi-example-posterior.png', format='png', dpi=300, frameon=False, transparent=True) true_omega = 70.3 omega_min = 0 omega_max = 99.1 n_shots = 100 ts = np.pi * np.arange(1, 101) / (2 * omega_max); signal = np.sin(true_omega * ts / 2) ** 2; counts = np.random.binomial(n=n_shots, p=ideal_signal) data = np.column_stack([counts, ts, n_shots * np.ones(len(ts))]) est = qi.simple_est_prec(data, freq_min=0, freq_max=100) est outcomes = np.array([1]) modelparams = np.array([w]) expparams = ts L = qi.SimplePrecessionModel().likelihood(outcomes, modelparams, expparams) plt.plot(ts, L[0, 0, :]) plt.savefig('figures/rabi-example-liketens.png', format='png', dpi=300, frameon=False, transparent=True) ``` ## Distribution Example ``` from qinfer import * from qinfer.tomography import * plt.figure(figsize=(8, 2)) plt.hist(NormalDistribution(0, 
2).sample(n=100000), bins=20) plt.savefig('figures/normal-dist.png', format='png', dpi=300, frameon=False, transparent=True) plot_rebit_prior(GinibreReditDistribution(pauli_basis(1)), rebit_axes=[1, 3]) plt.legend(bbox_to_anchor=(1, 1.15), scatterpoints=1) plt.savefig('figures/rebit-dist.png', format='png', dpi=400, frameon=False, transparent=True) ``` ## Performance Testing Example ``` from qinfer import * from functools import partial performance = perf_test_multiple( n_trials=400, model=BinomialModel(SimplePrecessionModel()), n_particles=2000, prior=UniformDistribution([0, 1]), n_exp=100, heuristic_class=partial( ExpSparseHeuristic, t_field='x', other_fields={'n_meas': 40} ) ) class UniformSamplingHeuristic(Heuristic): def __init__(self, updater, dt=np.pi / 2, t_field=None, other_fields=None ): super(UniformSamplingHeuristic, self).__init__(updater) self._dt = dt self._t_field = t_field self._other_fields = other_fields def __call__(self): n_exps = len(self._updater.data_record) t = self._dt * (1 + n_exps) dtype = self._updater.model.expparams_dtype if self._t_field is None: return np.array([t], dtype=dtype) else: eps = np.empty((1,), dtype=dtype) for field, value in self._other_fields.items(): eps[field] = value eps[self._t_field] = t return eps performance_uniform = perf_test_multiple( n_trials=400, model=BinomialModel(SimplePrecessionModel()), n_particles=2000, prior=UniformDistribution([0, 1]), n_exp=100, heuristic_class=partial( UniformSamplingHeuristic, t_field='x', other_fields={'n_meas': 40} ) ) plt.semilogy(np.sqrt(np.median(performance['loss'], axis=0)), label='Exp. 
Sparse') plt.semilogy(np.sqrt(np.median(performance_uniform['loss'], axis=0)), label='Uniform') plt.legend(ncol=2, bbox_to_anchor=(1, 1.15)) plt.xlabel('# of Rabi Measurements (40 shots/ea.)') plt.ylabel('Median Error') plt.savefig('figures/rabi-performance.png', format='png', dpi=300, frameon=False, transparent=False) import matplotlib as mpl def log_formatter(n_digits): return mpl.ticker.StrMethodFormatter( "$\mathregular{{{{10^{{{{{{x:.{n_digits}f}}}}}}}}}}$".format(n_digits=n_digits) ) style_cycle = plt.rcParams['axes.prop_cycle']() ax = plt.subplot(2, 1, 1) plt.hist(np.log10(np.sqrt(performance['loss'][:, -1])), normed=True, **style_cycle.next()) plt.tick_params(labelbottom='off') plt.ylabel('Density') plt.legend(['Exp Sparse'], loc='upper left') ax = plt.subplot(2, 1, 2, sharex=ax) plt.hist(np.log10(np.sqrt(performance_uniform['loss'][:, -1])), normed=True, **style_cycle.next()) ax.get_xaxis().set_major_formatter(log_formatter(0)) plt.legend(['Uniform'], loc='upper left') plt.xlabel('Error') plt.ylabel('Density') plt.savefig('figures/rabi-performance-hist.png', format='png', dpi=300, frameon=False, transparent=False) prior = UniformDistribution([0, 1]) true_params = np.array([[0.5]]) model = RandomWalkModel(BinomialModel(SimplePrecessionModel()), NormalDistribution(0, 0.01**2)) updater = SMCUpdater(model, 2000, prior) expparams = np.array([(np.pi / 2, 40)], dtype=model.expparams_dtype) data_record = [] trajectory = [] estimates = [] for idx in range(1000): datum = model.simulate_experiment(true_params, expparams) true_params = np.clip(model.update_timestep(true_params, expparams)[:, :, 0], 0, 1) updater.update(datum, expparams) data_record.append(datum) trajectory.append(true_params[0, 0]) estimates.append(updater.est_mean()[0]) ts = 40 * np.pi / 2 * np.arange(len(data_record)) / 1e3 plt.plot(ts, trajectory, label='Actual') plt.plot(ts, estimates, label='Estimated') plt.xlabel(u't (ms)') plt.ylabel(r'$\omega$ (MHz)') plt.legend(ncol=2, bbox_to_anchor=(1, 
1.15)) plt.savefig('figures/rabi-random-walk.png', format='png', dpi=300, frameon=False, transparent=False) ``` ## RB Example ## ``` p = 0.995 A = 0.5 B = 0.5 ms = np.linspace(1, 800, 201).astype(int) signal = A * p ** ms + B n_shots = 40 counts = np.random.binomial(p=signal, n=n_shots) data = np.column_stack([counts, ms, n_shots * np.ones_like(counts)]) mean, cov, extra = qi.simple_est_rb(data, p_min=0.8, return_all=True) print(mean[0], "±", np.sqrt(cov)[0, 0]) plt.figure(figsize=(12, 5)) plt.subplot(1, 2, 1) extra['updater'].plot_posterior_marginal(idx_param=0) plt.subplot(1, 2, 2) extra['updater'].plot_covariance(corr=True) plt.savefig('figures/rb-combined.png', format='png', dpi=300, frameon=False, transparent=False) extra['updater'].plot_covariance(corr=True) plt.savefig('figures/rb-corr.png', format='png', dpi=300, frameon=False, transparent=True) from qinfer.tomography import * from ipyparallel import * c = Client() load_balanced_view = c.load_balanced_view() basis = pauli_basis(1) performance = perf_test_multiple( n_trials=400, model=BinomialModel(TomographyModel(basis)), n_particles=2000, prior=GinibreDistribution(basis), n_exp=100, heuristic_class=partial( RandomPauliHeuristic, other_fields={'n_meas': 40} ), apply=load_balanced_view.apply, # ← parallel! progressbar=IPythonProgressBar ) plt.semilogy(performance['loss'].mean(axis=0), label='Mean') plt.semilogy(np.median(performance['loss'], axis=0), label='Median') plt.legend(ncol=2, bbox_to_anchor=(1, 1.15)) plt.ylabel('Quadratic Loss') plt.xlabel('# of Random Paulis (40 shots/ea)') plt.savefig('figures/tomo-rand-paulis.png', format='png', dpi=400) ```
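The RB section above uses the zeroth-order decay model `signal = A * p ** m + B`. As a quick sketch (not from the original notebook), with `A` and `B` known, the decay parameter `p` can be read off noiseless data by a log-linear fit through the origin:

```python
import numpy as np

p, A, B = 0.995, 0.5, 0.5
ms = np.linspace(1, 800, 201).astype(int)
signal = A * p ** ms + B              # noiseless RB survival probabilities

# log((signal - B)/A) = m * log(p), so the least-squares slope through
# the origin, sum(m*y)/sum(m^2), recovers log(p) exactly here
log_decay = np.log((signal - B) / A)
p_est = np.exp(np.sum(ms * log_decay) / np.sum(ms ** 2))
print(p_est)  # recovers 0.995 on noiseless data
```

With shot noise, the full Bayesian treatment via `qi.simple_est_rb` above is the more robust route; this sketch just makes the decay model concrete.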
``` import os import pandas as pd import numpy as np import warnings import pickle import math from collections import OrderedDict, Counter from copy import deepcopy from Bio.PDB import PDBParser, ResidueDepth, PDBIO, Superimposer, Select from Bio.SeqUtils import seq3 from Bio.PDB.vectors import calc_angle from Bio import BiopythonWarning warnings.simplefilter('ignore', BiopythonWarning) from sklearn.ensemble import ExtraTreesRegressor from sklearn.model_selection import KFold, LeaveOneGroupOut, GroupKFold from sklearn.model_selection import cross_val_score, cross_validate, cross_val_predict from sklearn.metrics import r2_score, make_scorer, roc_auc_score, precision_recall_curve, auc from sklearn.pipeline import Pipeline from sklearn.feature_selection import SelectFromModel from scipy.stats import pearsonr, sem from mlxtend.feature_selection import SequentialFeatureSelector from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs from spl_function import * from measure_function import * import matplotlib as mpl %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns kB = 1.9872036*(10**(-3)) # kcal mol^-1 K^-1 # ====================================== # Function we will use for scatter plots # ====================================== def plot_corr(ax, x, y, xerr=None, yerr=None, xlim=[-5,+5], ylim=[-5,+5], title='', legendloc=None, fit=True, diagonal=True, labelsize=16, msize=90, yax=1.36, xax=1.36, colorbar=False, vmin=0.0, vmax=2.8, cbarlabel='cbar', cbar_shrink=1.0, cbar_pad=0.15): # the absolute error for each data point diff = np.abs(x-y) cmap = plt.cm.coolwarm SC = ax.scatter(x=x, y=y, c=diff, cmap=cmap, s=msize, edgecolors='k', linewidths=1.2, zorder=10, vmin=0.0, vmax=2.8, label='_nolegend_') if yerr is None and xerr is not None: ax.errorbar(x=x, y=y, xerr=xerr, fmt=None, marker=None, color='k', linewidth=1.2, zorder=0, label='_nolegend_') elif yerr is not None and xerr is not None: ax.errorbar(x=x, y=y, xerr=xerr, 
yerr=yerr, fmt='none', marker=None, color='k', linewidth=1.2, zorder=0, label='_nolegend_') # Make ColorBar if colorbar is True: cbarticks = [0.0, 0.7, 1.4, 2.1, 2.8] cbar = fig.colorbar(SC, ax=ax, shrink=cbar_shrink, pad=cbar_pad, ticks=cbarticks) cbar.set_label(cbarlabel, fontsize=labelsize) cax = plt.gcf().axes[-1] cax.tick_params(labelsize=labelsize) # Ticks and labels ax.set_xlabel(r'Experimental $\Delta\Delta G$, kcal/mol', fontsize=labelsize) ax.set_ylabel(r'Calculated $\Delta\Delta G$, kcal/mol', fontsize=labelsize) ax.tick_params(axis='x', labelsize=labelsize) ax.tick_params(axis='y', labelsize=labelsize) xmin = min(xlim) xmax = max(xlim) ymin = min(ylim) ymax = max(ylim) if title != '': ax.set_title(title, fontsize=labelsize*1.2) # add diagonal if diagonal is True: ax.plot([xmin,xmax], [xmin,xmax], '--', color='gray') # add zero axes ax.axvline(x=xax, color='k', linestyle='-', linewidth=1.2) ax.axhline(y=yax, color='k', linestyle='-', linewidth=1.2) # shaded area indicating 1,2 kcal/mol errors a = [xmin,xmax] b = [j+1.4 for j in a] c = [j-1.4 for j in a] ax.fill_between(a, b, c, alpha=0.1, interpolate=True, color='k') # Linear fit if fit is True: fit = np.polyfit(x, y, 1) fit_fn = np.poly1d(fit) x_fit = np.linspace(xmin, xmax, len(x)) y_fit = fit_fn(x_fit) ax.plot(x_fit, y_fit, '-', color='k', zorder=1, label='$\Delta\Delta G_{calc} = %.2f \cdot \Delta\Delta G_{exp} %+.2f$' %(fit[0],fit[1])) # grid ax.grid(b=True, which='major', color='0.5',linestyle=':') ax.set_xlim([xmax,xmin]) ax.set_ylim([ymax,ymin]) # Make box square x0,x1 = ax.get_xlim() y0,y1 = ax.get_ylim() ax.set_aspect(aspect=(x1-x0)/(y1-y0)) # make legend if legendloc is not None: legend = ax.legend(loc=legendloc, prop={'size':labelsize*0.8}) # ==================== # Performance measures # ==================== def get_rmse(x,y): return np.sqrt((np.mean((x-y)**2))) def get_pearson(x,y): return pearsonr(x,y)[0] def get_auc_roc(y,y_pred): true_bool = np.array([i > 0.0 for i in y]) scores = 
np.array(y_pred) auc = roc_auc_score(true_bool, scores) return auc def get_auc_prc(exp, calc): true_bool = np.array([x > 1.36 for x in exp]) scores = np.array(calc) precision, recall, thresholds = precision_recall_curve(true_bool, scores) auc_score = auc(recall, precision) return auc_score # define additional sklearn scores my_pearson = make_scorer(get_pearson, greater_is_better=True) my_rmse = make_scorer(get_rmse, greater_is_better=False) my_roc = make_scorer(get_auc_roc, greater_is_better=True) my_prc = make_scorer(get_auc_prc, greater_is_better=True) tki = pd.read_csv("../Data/tki_total_features_ref15.csv") Y_tki = tki['LOGK_FOLDCHG'] todrop = ['PDB_ID', 'TKI','LIG_ID','SMILES','MUTATION','CHAIN','WT_IC50','LOGK_FOLDCHG', 'DDG.EXP','ligs', 'ligs3D', 'pdbs'] X_tki = tki.drop(todrop, axis=1) pTest = deepcopy(tki[['PDB_ID', 'MUTATION']]) # lists to store predictions and exp data xval_y_exp = [] xval_y_pred = [] # initialize the self-paced rate lambd = 25 lambd_add = 2 # ======== # run xval # ======== for train_idx, test_idx in LeaveOneGroupOut().split(X=tki, groups=tki['TKI']): print('Testing on %s.' 
% tki.iloc[test_idx[0]]['TKI']) # -------------------------------------------- # define inner cv to perform feature selection # -------------------------------------------- tki_inner = tki.iloc[train_idx] X_tki_inner = X_tki.iloc[train_idx] Y_tki_inner = Y_tki.iloc[train_idx] innercv = LeaveOneGroupOut().split(X=tki_inner, groups=tki_inner['TKI']) select = ExtraTreesRegressor(n_estimators=200, max_depth = None, min_samples_split = 2, bootstrap = True, oob_score = True, n_jobs = -1) # innercv will return only the entries for the 7 ligands used in the inner cv (excluding X_test) select.fit(X_tki_inner, Y_tki_inner) clf = SelectFromModel(select, prefit=True) # ---------------------------- # test on 8th independent fold # ---------------------------- X_train = clf.transform(X_tki.iloc[train_idx]) # select subset of features according to validation Y_train = Y_tki.iloc[train_idx].reset_index(drop=True) X_test = clf.transform(X_tki.iloc[test_idx]) # select subset of features according to validation Y_test = Y_tki.iloc[test_idx] # Self-paced learning model = ExtraTreesRegressor(n_estimators=200, max_depth = None, min_samples_split = 2, bootstrap = True, oob_score = True, n_jobs = -1) loss_s = [] # a array to record the loss value of each sample. 
    reg = model.fit(X_train, Y_train)
    y_pred = reg.predict(X_train)
    loss_s = (Y_train - y_pred)**2

    # running spl stage
    iter = math.ceil(X_train.shape[0]/lambd_add)
    loss_train_iter = []
    loss_iter = []
    selectedidx_iter = []
    y_pred_train_iter = []
    y_pred_test_iter = []
    for i in range(iter):
        # step 1: select high-confidence samples
        selectedidx = spl(loss_s, lambd)
        selectedidx_iter.append(selectedidx)
        x_train = X_train[selectedidx]
        y_train = Y_train.loc[selectedidx]
        # step 2: training the model
        reg = model.fit(x_train, y_train)
        Y_pred = reg.predict(X_train)
        loss_train = ((Y_train - Y_pred)**2).sum()
        Y_evl = reg.predict(X_test)
        loss_test = ((Y_test - Y_evl)**2).sum()
        loss_s = (Y_train - Y_pred)**2
        # step 3: grow the sample size
        lambd = lambd + lambd_add
        # store the intermediate values
        loss_train_iter.append(loss_train)
        loss_iter.append(loss_test)
        y_pred_train_iter.append(Y_pred)
        y_pred_test_iter.append(Y_evl)

    index = loss_iter.index(min(loss_iter))
    # store predicted and exp data
    Y_test_pred = y_pred_test_iter[index]
    xval_y_exp.extend(Y_test)
    xval_y_pred.extend(Y_test_pred)

T = 300
xval_y_pred = kB*T*np.log(10**(np.array(xval_y_pred)))
xval_y_exp = kB*T*np.log(10**(np.array(xval_y_exp)))
pTest.loc[:, 'DDG.ML3'] = xval_y_pred
pTest.loc[:, 'DDG.EXP'] = xval_y_exp
RMSE = get_rmse(xval_y_exp, xval_y_pred)
Pears = pearsonr(xval_y_exp, xval_y_pred)[0]
PRC = get_auc_prc(xval_y_exp, xval_y_pred)
print("----Prediction Performance----")
print("RMSE: %.2f" % RMSE, "\nPearson: %.2f" % Pears, "\nAUPRC: %.2f" % PRC)

# ============
# PLOT
# ============
T = 300
ddg_exp = pTest['DDG.EXP']
ddg_calc = pTest['DDG.ML3']
# save predictions in kcal/mol into dataframe
# pTest.loc[:, 'ML1.DDG'] = ddg_calc
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5,5), sharex=False, sharey=False)
xlim = [-4, 6]
ylim = [-4, 6]
RMSE = get_rmse(ddg_exp, ddg_calc)
Pears = pearsonr(ddg_exp, ddg_calc)[0]
PRC = get_auc_prc(ddg_exp, ddg_calc)
plot_corr(ax, ddg_exp, ddg_calc, title='ExtraTree_SPL', fit=False, xlim=xlim,
ylim=ylim) annx = 5.8 anny = -3.3 asep = -0.6 fs=16 _ = ax.annotate('$RMSE = %.2f\ kcal/mol$' % (RMSE), xy=(annx, anny), zorder=10, fontsize=fs) _ = ax.annotate('$Pears = %.2f$' % (Pears), xy=(annx, anny-asep), zorder=10, fontsize=fs) _ = ax.annotate('$PRC = %.2f$' % (PRC), xy=(annx, anny-2*asep), zorder=10, fontsize=fs) # -------- # Fix look # -------- ax.set_xticks([-4, -2, 0, 2, 4, 6]) plt.tight_layout() ```
# Tanzanian Ministry of Water Dataset Modeling

**Import libraries**

```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import KNNImputer, IterativeImputer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
import plotly.express as px
import plotly.io as pio
pio.renderers.default = "notebook_connected"
```

**Import datasets**

```
y = pd.read_csv('../assets/data/dependent_vars.csv')
X = pd.read_csv('../assets/data/independent_vars.csv')
X_test = pd.read_csv('../assets/data/independent_test.csv')
SF = pd.read_csv('../assets/data/SubmissionFormat.csv')
```

**Creating a baseline model**

```
y['status_group'].value_counts(normalize=True)
```

**Create training, validation, and final test datasets**

```
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=42)
```

**TMW analysis pipeline**

```
def data_preprocesser(X, y):
    # Transforming Target
    y.drop('id', axis=1, inplace=True)
    le = LabelEncoder()
    y = le.fit_transform(y)
    print(le.classes_)
    y = pd.DataFrame(y, columns=['status_group'])

    # Transforming Features
    # also dropping permit and public meeting due to feature importance
    drop_features = ['extraction_type', 'extraction_type_group',
                     'waterpoint_type_group', 'source', 'source_type',
                     'quantity_group', 'water_quality', 'payment_type',
                     'management', 'region', 'district_code', 'num_private',
                     'wpt_name', 'ward', 'recorded_by', 'funder', 'installer',
                     'subvillage', 'scheme_management', 'permit',
                     'public_meeting', 'scheme_name']
    X.drop(drop_features,
axis=1, inplace=True)

    # revealing the nan values
    X.replace(0, np.nan, inplace=True)
    X.replace(-2.000000e-08, np.nan, inplace=True)
    X.replace('unknown', np.nan, inplace=True)

    # Imputing numeric features
    numeric_features = ['amount_tsh', 'gps_height', 'longitude', 'latitude',
                        'region_code', 'population', 'construction_year']
    imputer = KNNImputer(n_neighbors=5)
    X[numeric_features] = imputer.fit_transform(X[numeric_features])

    # Imputing categorical variables
    categorical_features = ['basin', 'lga', 'extraction_type_class',
                            'management_group', 'payment', 'quality_group',
                            'quantity', 'source_class', 'waterpoint_type']
    # Label encoding with a trick to keep nan values
    le = LabelEncoder()
    X[categorical_features] = X[categorical_features].apply(lambda series: pd.Series(
        le.fit_transform(series[series.notnull()]),
        index=series[series.notnull()].index
    ))
    imputer = IterativeImputer()
    X[categorical_features] = imputer.fit_transform(X[categorical_features])

    # Feature Engineering DateTime
    X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
    X['year_recorded'] = X['date_recorded'].dt.year
    X['month_rec'] = X['date_recorded'].dt.month
    X['day_rec'] = X['date_recorded'].dt.day
    days_in_a_month = 31  # can potentially be done better (28, 30, 31)
    months_in_a_year = 12
    # Sin
    X['sin_day'] = np.sin((X.day_rec-1)*(2*np.pi/days_in_a_month))
    X['sin_month'] = np.sin((X.month_rec-1)*(2*np.pi/months_in_a_year))
    # Cosine
    X['cos_day'] = np.cos((X.day_rec-1)*(2*np.pi/days_in_a_month))
    X['cos_month'] = np.cos((X.month_rec-1)*(2*np.pi/months_in_a_year))
    # Engineering years in service
    X['years_in_service'] = X['year_recorded'] - X['construction_year']

    # Dropping unneeded features
    X.drop(['id'], axis=1, inplace=True)
    X.drop('date_recorded', axis=1, inplace=True)
    X.drop('construction_year', axis=1, inplace=True)
    X.drop('year_recorded', axis=1, inplace=True)
    X.drop('month_rec', axis=1, inplace=True)
    X.drop('day_rec', axis=1, inplace=True)

    return X, y
```

**Processing data for
modeling**

```
X_train, y_train = data_preprocesser(X_train, y_train)
X_val, y_val = data_preprocesser(X_val, y_val)
```

### Hyperparameter tuning

**Decision Tree**

Tuned Decision Tree Parameters: {'splitter': 'best', 'min_samples_leaf': 3, 'max_leaf_nodes': None, 'max_features': None, 'max_depth': 21, 'criterion': 'entropy'} Best score is 0.7624469693152618

```
# Parameter Distributions
param_dist = {'criterion': ['gini', 'entropy'],
              'splitter': ['best', 'random'],
              'max_depth': [*range(2, 100, 5), None],
              'min_samples_leaf': [*range(2, 10), None],
              'max_features': ['auto', 'sqrt', 'log2', None],
              'max_leaf_nodes': [*range(2, 10, 1), None]
              }

model = DecisionTreeClassifier()
model_cv = RandomizedSearchCV(model, param_dist, cv=100)

# Fit it to the data
model_cv.fit(X_train, y_train)

# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(model_cv.best_params_))
print("Best score is {}".format(model_cv.best_score_))
```

**Random Forest**

Tuned Random Forest Parameters: {'warm_start': False, 'n_jobs': 2, 'n_estimators': 31, 'max_samples': None, 'max_features': 'sqrt', 'criterion': 'entropy'} Best score is 0.8004041613493289

```
param_dist = {'n_estimators': [*range(1, 100, 5), None],
              'criterion': ['gini', 'entropy'],
              'max_features': ['auto', 'sqrt', 'log2'],
              'n_jobs': [*range(0, 5, 1), None],
              'warm_start': [True, False],
              'max_samples': [2, 4, 6, 8, 10, None]
              }

model = RandomForestClassifier()
model_cv = RandomizedSearchCV(model, param_dist, cv=20)

# Fit it to the data
model_cv.fit(X_train, y_train)

# Print the tuned parameters and score
print("Tuned Random Forest Parameters: {}".format(model_cv.best_params_))
print("Best score is {}".format(model_cv.best_score_))
```

**Logistic Regression**

Tuned Logistic Regression Parameters: {'penalty': 'l2'} Best score is 0.594702581369248

```
model = LogisticRegression()
model_cv = RandomizedSearchCV(model, param_dist, cv=10)

# Fit it to the data
model_cv.fit(X_train, y_train)

# Print the tuned
parameters and score
print("Tuned Logistic Regression Parameters: {}".format(model_cv.best_params_))
print("Best score is {}".format(model_cv.best_score_))
```

**AdaBoost**

Tuned AdaBoost Parameters: {'n_estimators': 11, 'learning_rate': 1.5, 'algorithm': 'SAMME.R'} Best score is 0.6995735129068462

```
param_dist = {'n_estimators': [*range(1, 100, 5), None],
              'learning_rate': [0.5, 1.0, 1.5, 2.0, 2.5],
              'algorithm': ['SAMME', 'SAMME.R']
              }

model = AdaBoostClassifier()
model_cv = RandomizedSearchCV(model, param_dist, cv=10)

# Fit it to the data
model_cv.fit(X_train, y_train)

# Print the tuned parameters and score
print("Tuned adaboost Parameters: {}".format(model_cv.best_params_))
print("Best score is {}".format(model_cv.best_score_))
```

**Gradient Boost**

Tuned Gradient Boost Parameters: {'n_estimators': 150, 'learning_rate': 0.5} Best score is 0.7760718294051626

```
param_dist = {
    'learning_rate': [0.5, 1.0, 1.5],
    'n_estimators': [*range(50, 250, 50), None],
}

model = GradientBoostingClassifier()
model_cv = RandomizedSearchCV(model, param_dist, cv=10)

# Fit it to the data
model_cv.fit(X_train, y_train)

# Print the tuned parameters and score
print("Tuned Gradient Boost Parameters: {}".format(model_cv.best_params_))
print("Best score is {}".format(model_cv.best_score_))
```

**Final fit and analysis on random forest model**

```
# Initiate the model
model = RandomForestClassifier(warm_start=False, n_jobs=2, n_estimators=200,
                               max_samples=None, max_features='sqrt', criterion='entropy')

# Fit the model
model.fit(X_train,y_train)

# Accuracy score
print("model score: %.3f" % model.score(X_val, y_val))
```

**Feature importance**

```
rf_importances = pd.DataFrame(model.feature_importances_, X_train.columns, columns=['value'])
rf_importances.reset_index(inplace=True)
rf_importances
= rf_importances.sort_values(by='value', ascending=True) fig = px.bar(y=rf_importances['index'], x=rf_importances['value'], width=600, height=1000, title="Random Forest Feature Importance") fig.update_xaxes(range=[0, 0.5]) fig.show() ``` **Graphing confusion matrix** ``` # Plot the confusion matrix import matplotlib.pyplot as plt fig, ax = plt.subplots(1,1,figsize=(8,8)) plot_confusion_matrix(model, X_val, y_val, display_labels=['functional', 'functional needs repair', 'non-functional'], cmap=plt.cm.Blues, ax=ax) ax.set_title('Confusion Matrix: Random forest model') plt.plot(); roc_yval = y_val.copy() roc_xval = X_val.copy() ``` **ROC AUC Score** ``` from sklearn.metrics import roc_curve, roc_auc_score #predicting the data y_prob_pred = model.predict_proba(roc_xval) #roc auc score roc_auc_score(roc_yval, y_prob_pred, multi_class='ovr') ``` **Plotting ROC Curve** ``` # roc curve for classes fpr = {} tpr = {} thresh = {} n_class = 3 for i in range(n_class): fpr[i], tpr[i], thresh[i] = roc_curve(y_val, y_prob_pred[:,i], pos_label=i) # plotting plt.plot(fpr[0], tpr[0], linestyle='--',color='orange', label='Class 0 vs Rest') plt.plot(fpr[1], tpr[1], linestyle='--',color='green', label='Class 1 vs Rest') plt.plot(fpr[2], tpr[2], linestyle='--',color='blue', label='Class 2 vs Rest') plt.title('Multiclass ROC curve') plt.xlabel('False Positive Rate') plt.ylabel('True Positive rate') plt.legend(loc='best') plt.savefig('Multiclass ROC',dpi=300); ``` **Submitting to DataDriven** ``` X_test, _ = data_preprocesser(X_test, y) SF['status_group'] = model.predict(X_test) SF.replace(0, 'functional', inplace=True) SF.replace(1, 'functional needs repair', inplace=True) SF.replace(2, 'non functional', inplace=True) SF.to_csv('TMW_Predicted.csv', index=False) ``` **Joblibing (pickling) model for use in web-app** ``` pkl_features = ['gps_height', 'longitude', 'latitude', 'quantity', 'years_in_service'] pkl_X = X_train[pkl_features]
pkl_y = y_train pkl_X.to_csv('pkl_X.csv') pkl_y.to_csv('pkl_y.csv') # Initiate the model model = RandomForestClassifier(warm_start=False, n_jobs=2, n_estimators=10, max_samples=None, max_features='sqrt', criterion='entropy') # Fit the model model.fit(pkl_X, pkl_y) from joblib import dump, load dump(model, 'TMWRandomForest.joblib', compress=True) ```
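The dump/load round trip above can be illustrated with the standard-library `pickle` module, which works the same way (joblib is simply better tuned for models holding large NumPy arrays). This is a dependency-free sketch: `ThresholdModel` is a made-up stand-in for the fitted `RandomForestClassifier`, not part of the project.

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for the fitted model: a trivial classifier with predict()
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, xs):
        return [int(x >= self.threshold) for x in xs]

model = ThresholdModel(0.5)
path = os.path.join(tempfile.gettempdir(), "toy_model.pkl")

with open(path, "wb") as f:
    pickle.dump(model, f)       # analogous to dump(model, 'TMWRandomForest.joblib')
with open(path, "rb") as f:
    restored = pickle.load(f)   # analogous to load(...) inside the web app

# The restored object behaves identically to the original
assert restored.predict([0.2, 0.7]) == model.predict([0.2, 0.7]) == [0, 1]
```

The web app only needs the class definition importable at load time; the serialized file carries the fitted state.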
# Missing data

Special care should be taken with missing data on this problem. Missing data must never be filled in the target variable, or the evaluation of the results would be corrupted. That is a real risk here if things are done carelessly, because the target variable and the features are the same series, only time-shifted. Filling forward first and then backward is the best way to preserve causality as much as possible. Some filtering of symbols that have a lot of missing data could help; otherwise the predictor may find itself full of constant data. Filling missing data and dropping "bad samples" can be done at two or three levels: the whole-dataset level, the training-time level, or the base-samples level. The differences are probably small for the filling part, but may be significant when dropping samples. ``` import os import math import pandas as pd import matplotlib.pyplot as plt import numpy as np import datetime as dt import scipy.optimize as spo import sys from time import time from sklearn.metrics import r2_score, median_absolute_error %matplotlib inline %pylab inline pylab.rcParams['figure.figsize'] = (20.0, 10.0) %load_ext autoreload %autoreload 2 sys.path.append('../../') from utils import preprocessing as pp data_df = pd.read_pickle('../../data/data_train_val_df.pkl') print(data_df.shape) data_df.head() data_df.columns.nlevels ``` ## Let's first filter at the symbol level ``` data_df['Close'].shape good_ratios = 1.0 - (data_df['Close'].isnull().sum()/ data_df['Close'].shape[0]) good_ratios.sort_values(ascending=False).plot() filtered_data_df = pp.drop_irrelevant_symbols(data_df['Close'], good_data_ratio=0.99) good_ratios = 1.0 - (filtered_data_df.isnull().sum()/ filtered_data_df.shape[0]) good_ratios.sort_values(ascending=False).plot() filtered_data_df.shape filtered_data_df.head() filtered_data_df.isnull().sum().sort_values(ascending=False) ``` ### Let's try to filter the whole dataset using only the 'Close' values ``` good_data_ratio = 0.99 FEATURE_OF_INTEREST = 'Close'
filtered_data_df = data_df[FEATURE_OF_INTEREST].dropna(thresh=math.ceil(good_data_ratio*data_df[FEATURE_OF_INTEREST].shape[0]), axis=1) filtered_data_df.head() filtered_data_df.columns fdata_df = data_df.loc[:,(slice(None),filtered_data_df.columns.tolist())] new_cols = fdata_df.columns.get_level_values(1) np.setdiff1d(new_cols, filtered_data_df.columns) np.setdiff1d(filtered_data_df.columns, new_cols) np.intersect1d(filtered_data_df.columns, new_cols).shape filtered_data_df.columns.shape ``` ### Looks good to me... Let's test it on the full dataset ``` filtered_data_df = pp.drop_irrelevant_symbols(data_df, good_data_ratio=0.99) good_ratios = 1.0 - (filtered_data_df['Close'].isnull().sum()/ filtered_data_df['Close'].shape[0]) good_ratios.sort_values(ascending=False).plot() ``` ## Now, let's filter at the sample level ``` import predictor.feature_extraction as fe train_time = -1 # In real time days base_days = 7 # In market days step_days = 30 # market days ahead_days = 1 # market days today = data_df.index[-1] # Real date tic = time() x, y = fe.generate_train_intervals(data_df, train_time, base_days, step_days, ahead_days, today, fe.feature_close_one_to_one) toc = time() print('Elapsed time: %i seconds.' % (toc-tic)) x.shape y.shape x_y_df = pd.concat([x, y], axis=1) x_y_df.shape x_y_df.head() x_y_df.isnull().sum(axis=1) ```
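The forward-then-backward filling policy argued for at the top can be sketched on a toy frame (a hypothetical two-symbol example, not the project's data): forward filling patches gaps using past values only, and the subsequent backward fill touches nothing but leading NaNs that have no past to draw from.

```python
import numpy as np
import pandas as pd

prices = pd.DataFrame({
    "AAA": [10.0, np.nan, 11.0, np.nan],
    "BBB": [np.nan, 20.0, np.nan, 21.0],
})

# Forward fill first, so interior gaps are filled with past values only;
# backward fill afterwards only affects leading NaNs with no history.
filled = prices.ffill().bfill()

assert not filled.isnull().any().any()
assert filled.loc[1, "AAA"] == 10.0  # filled from the past, preserving causality
assert filled.loc[0, "BBB"] == 20.0  # leading NaN: backfill is the only option
```

Had the order been reversed (`bfill().ffill()`), the interior gaps would be filled with *future* values, leaking information across the time shift between features and target.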
``` # Connect to Google Drive from google.colab import drive drive.mount('/content/drive') # ls /content/drive/MyDrive/ # Copy the dataset from Google Drive to local !cp "/content/drive/MyDrive/CBIS_DDSM.zip" . !unzip -qq CBIS_DDSM.zip !rm CBIS_DDSM.zip cbis_path = 'CBIS_DDSM' # Import libraries %tensorflow_version 1.x import os import numpy as np import matplotlib.pyplot as plt import seaborn as sns import itertools from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras import models from tensorflow.keras import optimizers from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler, Callback from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.optimizers import RMSprop, SGD, Adam, Adadelta, Adagrad, Adamax, Nadam from tensorflow.keras.regularizers import l2 from tensorflow.keras.utils import plot_model ``` # Data pre-processing **0->mass 1->calcification** ``` def load_training(): """ Load the training set (excluding baseline patches) """ images = np.load(os.path.join(cbis_path, 'numpy data', 'train_tensor.npy'))[1::2] labels = np.load(os.path.join(cbis_path, 'numpy data', 'train_labels.npy'))[1::2] return images, labels def load_testing(): """ Load the test set (abnormalities patches and labels, no baseline) """ images = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_tensor.npy'))[1::2] labels = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_labels.npy'))[1::2] return images, labels def remap_label(l): """ Remap the labels to 0->mass 1->calcification """ if l == 1 or l == 2: return 0 elif l == 3 or l == 4: return 1 else: print("[WARN] Unrecognized label (%d)" % l) return None ``` The data is prepared following these steps: 1. Import the training and testing data from numpy arrays 2. 
Remove the images and labels related to baseline patches (even indices in the arrays) 3. Adjust the labels for the binary classification problem, so that 0 corresponds to 'mass' and 1 maps to 'calcification' 4. Normalize the pixels to floating point values in the range (0-1) 5. Shuffle the training set (and labels accordingly, of course) 6. Split the training data into 'training' and 'validation' subsets 7. Build Keras generators for training and validation data ``` # Load training and test images (abnormalities only, no baseline) train_images, train_labels = load_training() test_images, test_labels = load_testing() # Number of images n_train_img = train_images.shape[0] n_test_img = test_images.shape[0] print("Train size: %d \t Test size: %d" % (n_train_img, n_test_img)) # Compute width and height of images img_w = train_images.shape[1] img_h = train_images.shape[2] print("Image size: %dx%d" % (img_w, img_h)) # Remap labels train_labels = np.array([remap_label(l) for l in train_labels]) test_labels = np.array([remap_label(l) for l in test_labels]) # Create a new dimension for color in the images arrays train_images = train_images.reshape((n_train_img, img_w, img_h, 1)) test_images = test_images.reshape((n_test_img, img_w, img_h, 1)) # Convert from 16-bit integers (0-65535) to floats (0-1) train_images = train_images.astype('float32') / 65535 test_images = test_images.astype('float32') / 65535 # Shuffle the training set (originally sorted by label) perm = np.random.permutation(n_train_img) train_images = train_images[perm] train_labels = train_labels[perm] # Create a generator for training images train_datagen = ImageDataGenerator( validation_split=0.2 ) # Fit the generator with some images train_datagen.fit(train_images) # Split train images into actual training and validation train_generator = train_datagen.flow(train_images, train_labels, batch_size=32, subset='training') validation_generator = train_datagen.flow(train_images, train_labels, batch_size=32, subset='validation') # 
Visualize one image from the dataset and its label, just to make sure the data format is correct idx = 0 plt.imshow(train_images[idx][:,:,0], cmap='gray') plt.show() print("Label: " + str(train_labels[idx])) ``` # Classification The first step is to discover approximately how large the model should be. A network that is too small won't be flexible enough to fit the data well; on the other hand, a model with too many parameters may learn slowly and overfit. A good way to find the appropriate size is to start from a small naive model, then gradually increase its size until it starts overfitting while learning; at that point it is flexible enough to fit the training data, and potentially generalizable to other data with proper training. Of course, the model can be refined later, by adding new layers, modifying existing ones, including regularization techniques or tuning the hyperparameters, in order to achieve (hopefully) better performance. ## Experiment 0 Let's start with a very small CNN, made of 2 convolutional layers interleaved with max-pooling. At the end, after a fully connected layer, a single neuron with sigmoid activation generates the output (binary classification). As already mentioned, the aim is to get a rough idea of the required model complexity. ``` # Build a simple model # Model 0 model_0 = models.Sequential() model_0.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 1))) model_0.add(layers.MaxPooling2D((2, 2))) model_0.add(layers.Conv2D(64, (3, 3), activation='relu')) model_0.add(layers.MaxPooling2D((2, 2))) model_0.add(layers.Flatten()) model_0.add(layers.Dense(16, activation='relu')) model_0.add(layers.Dense(1, activation='sigmoid')) model_0.summary() ``` The loss function is the binary cross-entropy, which is particularly suitable for this kind of problem (binary classification). The optimizer is RMSprop, an adaptive optimization algorithm which is considered quite efficient.
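As a side note, the binary cross-entropy loss can be written out in plain NumPy (a sketch for intuition, not the Keras implementation): it is small when predicted probabilities match the labels, and it punishes confident mistakes heavily.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Mean of -[y*log(p) + (1-y)*log(1-p)], clipped for numerical stability
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))))

y_true = np.array([0.0, 1.0, 1.0, 0.0])
confident = np.array([0.05, 0.95, 0.90, 0.10])
uncertain = np.full(4, 0.5)

# Confident, correct predictions score a much lower loss than coin flips
assert binary_crossentropy(y_true, confident) < binary_crossentropy(y_true, uncertain)
# At p = 0.5 everywhere the loss is exactly ln(2) per sample
assert abs(binary_crossentropy(y_true, uncertain) - np.log(2)) < 1e-9
```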
During the training, we monitor how the loss evolves on the validation set too and save the corresponding model weights when that loss is at its minimum, since that is where the model usually performs best. ``` # mkdir /content/drive/MyDrive/models # Callback for checkpointing checkpoint = ModelCheckpoint('model_0_2cl_best.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True, save_freq='epoch' ) # Compile the model model_0.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) # Train history_0 = model_0.fit_generator( train_generator, steps_per_epoch=int(0.8*n_train_img) // 32, epochs=100, validation_data=validation_generator, callbacks=[checkpoint], shuffle=True, verbose=1, initial_epoch=0 ) # Save models.save_model(model_0, 'model_0_2cl_end.h5') !cp model* "/content/drive/MyDrive/models/" # History of accuracy and loss tra_loss_0 = history_0.history['loss'] tra_acc_0 = history_0.history['acc'] val_loss_0 = history_0.history['val_loss'] val_acc_0 = history_0.history['val_acc'] # Total number of training epochs epochs_0 = range(1, len(tra_acc_0)+1) end_epoch_0 = len(tra_acc_0) # Epoch at which the validation loss reached its minimum opt_epoch_0 = val_loss_0.index(min(val_loss_0)) + 1 # Loss and accuracy on the validation set end_val_loss_0 = val_loss_0[-1] end_val_acc_0 = val_acc_0[-1] opt_val_loss_0 = val_loss_0[opt_epoch_0-1] opt_val_acc_0 = val_acc_0[opt_epoch_0-1] # Loss and accuracy on the test set opt_model_0 = models.load_model('model_0_2cl_best.h5') test_loss_0, test_acc_0 = model_0.evaluate(test_images, test_labels, verbose=False) opt_test_loss_0, opt_test_acc_0 = opt_model_0.evaluate(test_images, test_labels, verbose=False) opt_pred_0 = opt_model_0.predict(test_images) pred_classes_0 = np.rint(opt_pred_0) print("Model 0\n") print("Epoch [end]: %d" % end_epoch_0) print("Epoch [opt]: %d" % opt_epoch_0) print("Valid accuracy [end]: %.4f" % end_val_acc_0) print("Valid accuracy [opt]: %.4f" % opt_val_acc_0) print("Test
accuracy [end]: %.4f" % test_acc_0) print("Test accuracy [opt]: %.4f" % opt_test_acc_0) print("Valid loss [end]: %.4f" % end_val_loss_0) print("Valid loss [opt]: %.4f" % opt_val_loss_0) print("Test loss [end]: %.4f" % test_loss_0) print("Test loss [opt]: %.4f" % opt_test_loss_0) print(classification_report(test_labels, pred_classes_0, digits=4)) # Model accuracy plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k') plt.title('Model 0 accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.plot(epochs_0, tra_acc_0, 'r', label='Training set') plt.plot(epochs_0, val_acc_0, 'g', label='Validation set') plt.plot(opt_epoch_0, val_acc_0[opt_epoch_0-1], 'go') plt.vlines(opt_epoch_0, min(val_acc_0), opt_val_acc_0, linestyle="dashed", color='g', linewidth=1) plt.hlines(opt_val_acc_0, 1, opt_epoch_0, linestyle="dashed", color='g', linewidth=1) plt.legend(loc='lower right') # Model loss plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k') plt.title('Model 0 loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.plot(epochs_0, tra_loss_0, 'r', label='Training set') plt.plot(epochs_0, val_loss_0, 'g', label='Validation set') plt.plot(opt_epoch_0, val_loss_0[opt_epoch_0-1], 'go') plt.vlines(opt_epoch_0, min(val_loss_0), opt_val_loss_0, linestyle="dashed", color='g', linewidth=1) plt.hlines(opt_val_loss_0, 1, opt_epoch_0, linestyle="dashed", color='g', linewidth=1) plt.legend(); ``` # Experiment 6 ``` # Model 6 model_6 = models.Sequential() model_6.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 1))) model_6.add(layers.MaxPooling2D((2, 2))) model_6.add(layers.Conv2D(64, (3, 3), activation='relu')) model_6.add(layers.MaxPooling2D((2, 2))) model_6.add(layers.Conv2D(128, (3, 3), activation='relu')) model_6.add(layers.MaxPooling2D((2, 2))) model_6.add(layers.Conv2D(256, (3, 3), activation='relu')) model_6.add(layers.MaxPooling2D((2, 2))) model_6.add(layers.Flatten()) model_6.add(layers.Dense(48, activation='relu')) 
model_6.add(layers.Dropout(0.5)) model_6.add(layers.Dense(1, activation='sigmoid')) model_6.summary() # Early stopping (stop training after the validation loss reaches the minimum) earlystopping = EarlyStopping(monitor='val_loss', mode='min', patience=80, verbose=1) # Callback for checkpointing checkpoint = ModelCheckpoint('model_6_2cl_best.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True, save_freq='epoch' ) # Compile the model (note the decay!) model_6.compile(optimizer=RMSprop(learning_rate=0.001, decay=1e-3), loss='binary_crossentropy', metrics=['accuracy']) # Train history_6 = model_6.fit_generator( train_generator, steps_per_epoch=n_train_img // 128, epochs=500, validation_data=validation_generator, callbacks=[checkpoint, earlystopping], shuffle=True, verbose=1, initial_epoch=0 ) # Save models.save_model(model_6, 'model_6_2cl_end.h5') !cp model* "/content/drive/MyDrive/models/" # History of accuracy and loss tra_loss_6 = history_6.history['loss'] tra_acc_6 = history_6.history['acc'] val_loss_6 = history_6.history['val_loss'] val_acc_6 = history_6.history['val_acc'] # Total number of training epochs epochs_6 = range(1, len(tra_acc_6)+1) end_epoch_6 = len(tra_acc_6) # Epoch at which the validation loss reached its minimum opt_epoch_6 = val_loss_6.index(min(val_loss_6)) + 1 # Loss and accuracy on the validation set end_val_loss_6 = val_loss_6[-1] end_val_acc_6 = val_acc_6[-1] opt_val_loss_6 = val_loss_6[opt_epoch_6-1] opt_val_acc_6 = val_acc_6[opt_epoch_6-1] # Loss and accuracy on the test set opt_model_6 = models.load_model('model_6_2cl_best.h5') test_loss_6, test_acc_6 = model_6.evaluate(test_images, test_labels, verbose=False) opt_test_loss_6, opt_test_acc_6 = opt_model_6.evaluate(test_images, test_labels, verbose=False) opt_pred_6 = opt_model_6.predict(test_images) pred_classes_6 = np.rint(opt_pred_6) print("Model 6\n") print("Epoch [end]: %d" % end_epoch_6) print("Epoch [opt]: %d" % opt_epoch_6) print("Valid accuracy [end]: %.4f" %
end_val_acc_6) print("Valid accuracy [opt]: %.4f" % opt_val_acc_6) print("Test accuracy [end]: %.4f" % test_acc_6) print("Test accuracy [opt]: %.4f" % opt_test_acc_6) print("Valid loss [end]: %.4f" % end_val_loss_6) print("Valid loss [opt]: %.4f" % opt_val_loss_6) print("Test loss [end]: %.4f" % test_loss_6) print("Test loss [opt]: %.4f" % opt_test_loss_6) print(classification_report(test_labels, pred_classes_6, digits=4)) # Model accuracy plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k') plt.title('Model 6 accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.plot(epochs_6, tra_acc_6, 'r', label='Training set') plt.plot(epochs_6, val_acc_6, 'g', label='Validation set') plt.plot(opt_epoch_6, val_acc_6[opt_epoch_6-1], 'go') plt.vlines(opt_epoch_6, min(val_acc_6), opt_val_acc_6, linestyle="dashed", color='g', linewidth=1) plt.hlines(opt_val_acc_6, 1, opt_epoch_6, linestyle="dashed", color='g', linewidth=1) plt.legend(loc='lower right') # Model loss plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k') plt.title('Model 6 loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.plot(epochs_6, tra_loss_6, 'r', label='Training set') plt.plot(epochs_6, val_loss_6, 'g', label='Validation set') plt.plot(opt_epoch_6, val_loss_6[opt_epoch_6-1], 'go') plt.vlines(opt_epoch_6, min(val_loss_6), opt_val_loss_6, linestyle="dashed", color='g', linewidth=1) plt.hlines(opt_val_loss_6, 1, opt_epoch_6, linestyle="dashed", color='g', linewidth=1) plt.legend(); ```
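The figures in `classification_report` come from simple counts; here is a dependency-free sketch of per-class precision and recall (toy labels for illustration, not the model's predictions):

```python
def precision_recall(y_true, y_pred, positive=1):
    # Count true positives, false positives and false negatives for one class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
prec, rec = precision_recall(y_true, y_pred)
# tp=2, fp=1, fn=1 -> precision = recall = 2/3
assert abs(prec - 2 / 3) < 1e-9 and abs(rec - 2 / 3) < 1e-9
```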
# The Pasta Production Problem This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model with IBM ILOG CPLEX Optimizer. Table of contents: - [Describe the business problem](#Describe-the-business-problem) * [How decision optimization (prescriptive analytics) can help](#How--decision-optimization-can-help) * [Use decision optimization](#Use-decision-optimization) - [Step 1: Model the data](#Step-1:-Model-the-data) * [Step 2: Prepare the data](#Step-2:-Prepare-the-data) - [Step 3: Set up the prescriptive model](#Step-3:-Set-up-the-prescriptive-model) * [Define the decision variables](#Define-the-decision-variables) * [Express the business constraints](#Express-the-business-constraints) * [Express the objective](#Express-the-objective) * [Solve with Decision Optimization](#Solve-with-Decision-Optimization) * [Step 4: Investigate the solution and run an example analysis](#Step-4:-Investigate-the-solution-and-then-run-an-example-analysis) * [Summary](#Summary) **** ## Describe the business problem This notebook describes how to use CPLEX Modeling for Python to manage the production of pasta to meet demand with your resources. The model aims at minimizing the production cost for a number of products while satisfying customer demand. * Each product can be produced either inside the company or outside, at a higher cost. * The inside production is constrained by the company's resources, while outside production is considered unlimited. The model first declares the products and the resources. The data consists of the description of the products (the demand, the inside and outside costs, and the resource consumption) and the capacity of the various resources. The variables for this problem are the inside and outside production for each product. 
## How decision optimization can help * Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. <br/> <u>With prescriptive analytics, you can:</u> * Automate the complex decisions and trade-offs to better manage your limited resources. * Take advantage of a future opportunity or mitigate a future risk. * Proactively update recommendations based on changing events. * Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. ## Use decision optimization ### Step 1: Model the data ``` # products are tuples (name, demand, inside cost, outside cost) products = [("kluski", 100, 0.6, 0.8), ("capellini", 200, 0.8, 0.9), ("fettucine", 300, 0.3, 0.4)] # resources are a list of simple tuples (name, capacity) resources = [("flour", 20), ("eggs", 40)] consumptions = {("kluski", "flour"): 0.5, ("kluski", "eggs"): 0.2, ("capellini", "flour"): 0.4, ("capellini", "eggs"): 0.4, ("fettucine", "flour"): 0.3, ("fettucine", "eggs"): 0.6} ``` ### Step 2: Prepare the data The data used is very simple and is ready to use without any cleaning, massaging, or refactoring. ### Step 3: Set up the prescriptive model Set up the prescriptive model using the Mathematical Programming (docplex.mp) modeling package.
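Before building the model, a quick back-of-the-envelope check on the Step 1 data (copied below into plain Python) shows why outside production is unavoidable: meeting all demand inside would require far more flour and eggs than the available capacities of 20 and 40.

```python
# Data copied from Step 1 above
demand = {"kluski": 100, "capellini": 200, "fettucine": 300}
capacity = {"flour": 20, "eggs": 40}
consumptions = {("kluski", "flour"): 0.5, ("kluski", "eggs"): 0.2,
                ("capellini", "flour"): 0.4, ("capellini", "eggs"): 0.4,
                ("fettucine", "flour"): 0.3, ("fettucine", "eggs"): 0.6}

# Resource required if every unit of demand were produced inside
needed = {res: sum(consumptions[p, res] * demand[p] for p in demand)
          for res in capacity}

assert abs(needed["flour"] - 220) < 1e-9  # vs. a capacity of 20
assert abs(needed["eggs"] - 280) < 1e-9   # vs. a capacity of 40
```

Both resources are oversubscribed by an order of magnitude, so the solver's job is to decide which products are worth the scarce inside capacity.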
``` from docplex.mp.environment import Environment env = Environment() env.print_information() ``` #### Create the DOcplex model The model contains all the business constraints and defines the objective. We now use CPLEX Modeling for Python to build a Linear Programming (LP) model for this problem (all decision variables here are continuous). ``` from docplex.mp.model import Model mdl = Model(name="pasta") ``` #### Define the decision variables ``` inside_vars = mdl.continuous_var_dict(products, name='inside') outside_vars = mdl.continuous_var_dict(products, name='outside') ``` #### Express the business constraints * Each product can be produced either inside the company or outside, at a higher cost. * The inside production is constrained by the company's resources, while outside production is considered unlimited. ``` # --- constraints --- # demand satisfaction mdl.add_constraints((inside_vars[prod] + outside_vars[prod] >= prod[1], 'ct_demand_%s' % prod[0]) for prod in products) # --- resource capacity --- mdl.add_constraints((mdl.sum(inside_vars[p] * consumptions[p[0], res[0]] for p in products) <= res[1], 'ct_res_%s' % res[0]) for res in resources) mdl.print_information() ``` #### Express the objective Minimize the production cost for all products while satisfying customer demand. ``` total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products) total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products) mdl.minimize(total_inside_cost + total_outside_cost) ``` #### Solve with Decision Optimization Now solve the model, using `Model.solve()`. The following cell solves using your local CPLEX (if any, and provided you have added it to your `PYTHONPATH` variable).
``` mdl.solve() ``` ### Step 4: Investigate the solution and then run an example analysis ``` obj = mdl.objective_value print("* Production model solved with objective: {:g}".format(obj)) print("* Total inside cost=%g" % total_inside_cost.solution_value) for p in products: print("Inside production of {product}: {ins_var}".format(product=p[0], ins_var=inside_vars[p].solution_value)) print("* Total outside cost=%g" % total_outside_cost.solution_value) for p in products: print("Outside production of {product}: {out_var}".format(product=p[0], out_var=outside_vars[p].solution_value)) ``` ## Summary You have learned how to set up and use IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and solve it with IBM Decision Optimization. ## References * <a href="https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html" target="_blank" rel="noopener noreferrer">CPLEX Modeling for Python documentation</a> * <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html" target="_blank" rel="noopener noreferrer">Watson Studio documentation</a> <hr> Copyright © 2017-2021. This notebook and its source code are released under the terms of the MIT License. <div style="background:#F5F7FA; height:110px; padding: 2em; font-size:14px;"> <span style="font-size:18px;color:#152935;">Love this notebook? </span> <span style="font-size:15px;color:#152935;float:right;margin-right:40px;">Don't have an account yet?</span><br> <span style="color:#5A6872;">Share it with your colleagues and help them discover the power of Watson Studio!</span> <span style="border: 1px solid #3d70b2;padding:8px;float:right;margin-right:40px; color:#3d70b2;"><a href="https://ibm.co/wsnotebooks" target="_blank" style="color: #3d70b2;text-decoration: none;">Sign Up</a></span><br> </div>
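For readers without a CPLEX installation, the same LP can be cross-checked with SciPy's `linprog` (assuming SciPy is available; the variable ordering below is our own choice, not part of the notebook). Working the numbers by hand, flour is the binding resource: it limits inside production to 40 units of kluski, which yields a minimum total cost of 372.

```python
from scipy.optimize import linprog

# x = [inside_kluski, inside_capellini, inside_fettucine,
#      outside_kluski, outside_capellini, outside_fettucine]
c = [0.6, 0.8, 0.3, 0.8, 0.9, 0.4]

# linprog expects A_ub @ x <= b_ub, so demand constraints (>=) are negated
A_ub = [
    [-1, 0, 0, -1, 0, 0],        # kluski:    inside + outside >= 100
    [0, -1, 0, 0, -1, 0],        # capellini: inside + outside >= 200
    [0, 0, -1, 0, 0, -1],        # fettucine: inside + outside >= 300
    [0.5, 0.4, 0.3, 0, 0, 0],    # flour capacity: 20 (inside production only)
    [0.2, 0.4, 0.6, 0, 0, 0],    # eggs capacity: 40 (inside production only)
]
b_ub = [-100, -200, -300, 20, 40]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
assert res.status == 0
assert abs(res.fun - 372.0) < 1e-6  # matches the hand-computed optimum
```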
# Cell Tower Coverage ## Objective and Prerequisites In this example, we'll solve a simple covering problem: how to build a network of cell towers to provide signal coverage to the largest number of people possible. We'll construct a mathematical model of the business problem, implement this model in the Gurobi Python interface, and compute an optimal solution. This modeling example is at the beginner level, where we assume that you know Python and that you have some knowledge about building mathematical optimization models. **Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*. ## Motivation Over the last ten years, smartphones have revolutionized our lives in ways that go well beyond how we communicate. Besides calling, texting, and emailing, more than two billion people around the world now use these devices to navigate, to book cab rides, to compare product reviews and prices, to follow the news, to watch movies, to listen to music, to play video games, to take photographs, to participate in social media, and for numerous other applications. A cellular network is a network of handheld smartphones in which each phone communicates with the telephone network by radio waves through a local antenna at a cellular base station (cell tower). One important problem is the placement of cell towers to provide signal coverage to the largest number of people.
## Problem Description A telecom company needs to build a set of cell towers to provide signal coverage for the inhabitants of a given city. A number of potential locations where the towers could be built have been identified. The towers have a fixed range and, due to budget constraints, only a limited number of them can be built. Given these restrictions, the company wishes to provide coverage to the largest percentage of the population possible. To simplify the problem, the company has split the area it wishes to cover into a set of regions, each of which has a known population. The goal is then to choose on which of the potential locations the company should build cell towers, in order to provide coverage to as many people as possible. The Cell Tower Coverage Problem is an instance of the Maximal Covering Location Problem [1]. It is also related to the Set Cover Problem. Set covering problems occur in many different fields, and very important applications come from the airline industry: for example, crew scheduling and the Tail Assignment Problem [2]. ## Solution Approach Mathematical programming is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science. A mathematical optimization model has five components, namely: * Sets and indices. * Parameters. * Decision variables. * Objective function(s). * Constraints. We now present a mixed-integer programming (MIP) formulation for the Cell Tower Coverage Problem. ## Model Formulation ### Sets and Indices $i \in T$: Index and set of potential sites to build a tower. $j \in R$: Index and set of regions.
$G(T,R,E)$: A bipartite graph defined over the set $T$ of potential sites to build a tower, the set of regions $R$ that we want to cover, and the set of edges $E$, where we have an edge $(i,j) \in E$ if region $j \in R$ can be covered by a tower at location $i \in T$. ### Parameters $c_{i} \in \mathbb{R}^+$: The cost of setting up a tower at site $i$. $p_{j} \in \mathbb{N}$: The population at region $j$. ### Decision Variables $covered_{j} \in \{0, 1 \}$: This variable is equal to 1 if region $j$ is covered; and 0 otherwise. $build_{i} \in \{0, 1 \}$: This variable is equal to 1 if tower $i$ is built; and 0 otherwise. ### Objective Function(s) - **Population covered**. We seek to maximize the total population covered by the towers. \begin{equation} \text{Max} \quad Z = \sum_{j \in R} p_{j} \cdot covered_{j} \tag{0} \end{equation} ### Constraints - **Coverage**. For each region $j \in R$, region $j$ can only be counted as covered if at least one tower that covers it is selected. \begin{equation} \sum_{(i,j) \in E} build_{i} \geq covered_{j} \quad \forall j \in R \tag{1} \end{equation} - **Budget**. We need to ensure that the total cost of building towers does not exceed the allocated budget. \begin{equation} \sum_{i \in T} c_{i} \cdot build_{i} \leq \text{budget} \tag{2} \end{equation} ## Python Implementation This example considers a bipartite graph for 6 towers and 9 regions. The following table illustrates which regions (columns) are covered by each cell tower site (rows).
| <i></i> | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Tower 0 | 1 | 1 | - | - | - | 1 | - | - | - | | Tower 1 | 1 | - | - | - | - | - | - | 1 | 1 | | Tower 2 | - | - | 1 | 1 | 1 | - | 1 | - | - | | Tower 3 | - | - | 1 | - | - | 1 | 1 | - | - | | Tower 4 | 1 | - | 1 | - | - | - | 1 | 1 | 1 | | Tower 5 | - | - | - | 1 | 1 | - | - | - | 1 | The population at each region is stated in the following table. | <i></i> | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Population | 523 | 690 | 420 | 1010 | 1200 | 850 | 400 | 1008 | 950 | The cost to build a cell tower at each location site is stated in the following table. | <i></i> | Cost (millions of USD) | | --- | --- | | Tower 0 | 4.2 | | Tower 1 | 6.1 | | Tower 2 | 5.2 | | Tower 3 | 5.5 | | Tower 4 | 4.8 | | Tower 5 | 9.2 | The allocated budget is $\$20,000,000$. We now import the Gurobi Python Module. Then, we initialize the data structures with the given data. ``` import gurobipy as gp from gurobipy import GRB # tested with Gurobi v9.0.0 and Python 3.7.0 # Parameters budget = 20 regions, population = gp.multidict({ 0: 523, 1: 690, 2: 420, 3: 1010, 4: 1200, 5: 850, 6: 400, 7: 1008, 8: 950 }) sites, coverage, cost = gp.multidict({ 0: [{0,1,5}, 4.2], 1: [{0,7,8}, 6.1], 2: [{2,3,4,6}, 5.2], 3: [{2,5,6}, 5.5], 4: [{0,2,6,7,8}, 4.8], 5: [{3,4,8}, 9.2] }) ``` ### Model Deployment We now formulate the model for the Cell Tower Coverage Problem by defining the decision variables, constraints, and objective function. Next, we start the optimization process and Gurobi finds the plan to build towers that maximizes the coverage of the population given the budget allocated.
``` # MIP model formulation m = gp.Model("cell_tower") build = m.addVars(len(sites), vtype=GRB.BINARY, name="Build") is_covered = m.addVars(len(regions), vtype=GRB.BINARY, name="Is_covered") m.addConstrs((gp.quicksum(build[t] for t in sites if r in coverage[t]) >= is_covered[r] for r in regions), name="Build2cover") m.addConstr(build.prod(cost) <= budget, name="budget") m.setObjective(is_covered.prod(population), GRB.MAXIMIZE) m.optimize() ``` ## Analysis The result of the optimization model shows that the maximum population that can be covered with the $\$20,000,000$ budget is 7,051 people. Let's see the solution that achieves that optimal result. ### Cell Tower Build Plan This plan determines at which site locations to build a cell tower. ``` # display optimal values of decision variables for tower in build.keys(): if (abs(build[tower].x) > 1e-6): print(f"\n Build a cell tower at location Tower {tower}.") ``` ### Demand Fulfillment Metrics - **Coverage**: Percentage of the population covered by the cell towers built. ``` # Percentage of the population covered by the cell towers built is computed as follows. total_population = 0 for region in range(len(regions)): total_population += population[region] coverage = round(100*m.objVal/total_population, 2) print(f"\n The population coverage associated to the cell towers build plan is: {coverage} %") ``` ### Resources Utilization Metrics - **Budget consumption**: Percentage of the budget allocated to build the cell towers. 
```
# Percentage of budget consumed to build cell towers
total_cost = 0
for tower in range(len(sites)):
    if (abs(build[tower].x) > 0.5):
        total_cost += cost[tower]*int(build[tower].x)

budget_consumption = round(100*total_cost/budget, 2)
print(f"\n The percentage of budget consumed associated with the cell towers build plan is: {budget_consumption} %")
```

## Conclusion

In this example, we address the problem of building cell towers to provide signal coverage to the largest number of people while satisfying a budget constraint. We learned how to formulate the problem as a MIP model. Also, we learned how to implement the MIP model formulation and solve it using the Gurobi Python API.

## References

[1] Richard Church and Charles ReVelle. "The Maximal Covering Location Problem". Papers in Regional Science, 1974, vol. 32, issue 1, 101-118.

[2] Tail Assignment Problem. https://www.gurobi.com/case_study/air-france-tail-assignment-optimization/

Copyright © 2020 Gurobi Optimization, LLC
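Since the instance is tiny (6 candidate sites), the MIP optimum can be cross-checked by brute force over all $2^6$ build plans — a minimal pure-Python sketch of that check (no Gurobi required), using the data from the tables above:

```python
from itertools import product

# Problem data copied from the tables above
population = [523, 690, 420, 1010, 1200, 850, 400, 1008, 950]
coverage = [{0, 1, 5}, {0, 7, 8}, {2, 3, 4, 6}, {2, 5, 6}, {0, 2, 6, 7, 8}, {3, 4, 8}]
cost = [4.2, 6.1, 5.2, 5.5, 4.8, 9.2]
budget = 20

best_value, best_plan = 0, ()
for plan in product([0, 1], repeat=len(cost)):  # all 2^6 = 64 build plans
    total_cost = sum(c for c, b in zip(cost, plan) if b)
    if total_cost > budget:
        continue  # infeasible plan
    covered = set().union(*(coverage[i] for i, b in enumerate(plan) if b)) if any(plan) else set()
    value = sum(population[j] for j in covered)
    if value > best_value:
        best_value, best_plan = value, plan

print(best_value)  # 7051 — matches the MIP optimum reported above
```

Full coverage (7,051 people, the entire population) is attainable within budget here because, for example, towers 0, 2 and 4 jointly cover all nine regions at a cost of 14.2 million.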
# Thesis thoughts and tests, week 1

This is a summary of my work to date. The first cell contains all the helper functions and such, and can be safely skipped.

```
import skimage.io
from matplotlib import pyplot as plt
import cairocffi as cairo
import math, random
import numpy as np
from IPython.display import Image
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
import seaborn as sns
%matplotlib inline

## Cairo stuff - offload to somewhere else

# Get a point as a pixel location
def point_to_pixel(x, y, w, h):
    xp = int(min(w-1, (x*(w/2) + w/2))) # Avoid going out of bounds
    yp = int(min(h-1, (y*(h/2) + h/2)))
    return xp, yp

# Floating point version (for interpolation methods later)
def point_to_pixelf(x, y, w, h):
    xp = (min(w-1, (x*(w/2) + w/2))) # Avoid going out of bounds - a little less accurate but hey
    yp = (min(h-1, (y*(h/2) + h/2)))
    return xp, yp

# Make a point on a unit circle given an angle
def point_on_circle(angle):
    x, y = math.cos(angle), math.sin(angle)
    return x, y

# Draw a set of emitters and detectors
def draw_des(ds, es, width, height):
    ## Cairo STUFF
    width, height = 600, 600
    surface = cairo.ImageSurface (cairo.FORMAT_ARGB32, width, height)
    ctx = cairo.Context (surface)
    ctx.set_source_rgb(1,1,1)
    ctx.rectangle(0,0,width,height)
    ctx.fill()

    def circle(ctx, x, y, size):
        ctx.arc(x, y, size, 0, 2 * math.pi)

    ## Back to the good stuff
    # Connect detectors to emitters
    ctx.set_line_width(2)
    ctx.set_source_rgb(0, 0.5, 0)
    for e in es:
        for d in ds:
            ctx.move_to(*point_to_pixel(e[0], e[1], width, height)) # Wow that's a nifty trick!!
            ctx.line_to(*point_to_pixel(d[0], d[1], width, height))
            ctx.stroke()

    # Draw detectors
    ctx.set_source_rgb(0, 0, 1)
    for d in ds:
        cx, cy = point_to_pixel(d[0], d[1], width, height)
        circle(ctx, cx, cy, 20)
        ctx.fill()
        ctx.stroke()

    # Draw Emitters
    ctx.set_source_rgb(1, 0, 1)
    for e in es:
        cx, cy = point_to_pixel(e[0], e[1], width, height)
        circle(ctx, cx, cy, 10)
        ctx.fill()
        ctx.stroke()
    return surface

def random_test_image(w, h, r, max_displacement):
    # A random circle within the unit circle
    x, y = 1, 1
    while (x**2 + y**2 > max_displacement**2): # Some margin to avoid overlapping the circles circumference
        x, y = random.random(), random.random() # Shouldn't take too long
    surface = cairo.ImageSurface (cairo.FORMAT_ARGB32, w, h)
    ctx = cairo.Context (surface)
    ctx.set_source_rgb(0, 0, 0)
    ctx.rectangle(0,0,w,h)
    ctx.fill()
    xp, yp = point_to_pixel(x, y, w, h)
    xp -= r
    yp -= r # Subtract the radius
    ctx.set_source_rgb(1, 1, 1)
    ctx.arc(xp, yp, r, 0, 2 * math.pi)
    ctx.fill()
    ctx.stroke()
    buf = np.frombuffer(surface.get_data(), np.uint8)
    img = buf.reshape(w, h, 4)[:,:,0]
    return surface, img, x, y

def test_image(w, h, r, x, y):
    surface = cairo.ImageSurface (cairo.FORMAT_ARGB32, w, h)
    ctx = cairo.Context (surface)
    ctx.set_source_rgb(0, 0, 0)
    ctx.rectangle(0,0,w,h)
    ctx.fill()
    xp, yp = point_to_pixel(x, y, w, h)
    xp -= r
    yp -= r # Subtract the radius
    ctx.set_source_rgb(1, 1, 1)
    ctx.arc(xp, yp, r, 0, 2 * math.pi)
    ctx.fill()
    ctx.stroke()
    buf = np.frombuffer(surface.get_data(), np.uint8)
    img = buf.reshape(w, h, 4)[:,:,0]
    return surface, img, x, y

def get_paths(img, ds, es, width, height):
    # Does interpolation along all paths from emitters to detectors, given an image, detectors and emitters
    lines = []
    for e in es:
        for d in ds:
            x0, y0 = point_to_pixelf(*e, width, height)
            x1, y1 = point_to_pixelf(*d, width, height)
            # Make sampling points
            length = int(np.hypot(x1-x0, y1-y0))
            x, y = np.linspace(x0, x1, length), np.linspace(y0, y1, length)
            # Extract the values along the line (np.int is deprecated; plain int works)
            zi = img[x.astype(int),
                     y.astype(int)]
            lines.append(sum(zi))
    return lines

def get_random_des(nd, ne):
    ds, es = [], []
    for i in range(nd):
        ds.append(point_on_circle((2*i/nd+(random.random()-0.5)*0.3)*3.14159265))
    for i in range(ne):
        es.append(point_on_circle((2*i/ne+(random.random()-0.5)*0.3)*3.14159265))
    return ds, es

def interp(start, end, width, height, img):
    x0, y0 = point_to_pixelf(*start, width, height)
    x1, y1 = point_to_pixelf(*end, width, height)
    # Make sampling points
    length = int(np.hypot(x1-x0, y1-y0))
    x, y = np.linspace(x0, x1, length), np.linspace(y0, y1, length)
    # Extract the values along the line
    zi = img[x.astype(int), y.astype(int)]
    return sum(zi), zi

def e_vs_t_pos():
    X, y, Xt, yt = [], [], [], []
    # 100 tests
    for i in range(100):
        s, img, xi, yi = random_test_image(256, 256, 60, 0.8)
        paths = get_paths(img, ds, es, 256, 256)
        Xt.append(paths)
        yt.append([xi,yi])
    # 1024 training samples
    for i in range(1024):
        s, img, xi, yi = random_test_image(256, 256, 60, 0.8)
        paths = get_paths(img, ds, es, 256, 256)
        X.append(paths)
        y.append([xi,yi])

    # Instantiate the model
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor
    # Scale the inputs (important)
    scaler = StandardScaler()
    scaler.fit(X)
    X = scaler.transform(X)
    Xt = scaler.transform(Xt)

    # for i in range(11): # 2^10 = 1024
    #     n = 2**i
    #     print(n)
    #     mlp = MLPRegressor(hidden_layer_sizes=(20,20,20))
    #     # Train the neural network
    #     mlp.fit(X[:n], y[:n])
    #     # Predict the positions of the test images from the test inputs
    #     predictions = mlp.predict(Xt)
    #     e = predictions.clip(0, 1)-yt
    #     errors = [(d[0]**2 + d[1]**2)**0.5 for d in e ]
    #     print(sum(errors))

    ses = []
    ns = range(10, 1000, 20)
    for n in ns: # 2^10 = 1024
        mlp = MLPRegressor(hidden_layer_sizes=(20,20,20))
        # Train the neural network
        mlp.fit(X[:n], y[:n])
        # Predict the positions of the test images from the test inputs
        predictions = mlp.predict(Xt)
        e = predictions.clip(0, 1)-yt
        errors = [(d[0]**2 + d[1]**2)**0.5 for d in e ]
        ses.append(sum(errors))
    plt.plot(ns, ses, label='Error vs N training samples')
```

## Step 1 - Defining the sensor layout

We specify the locations of emitters and detectors. I've been specifying them in a coordinate system that goes from -1 to 1 on both axes, and usually place them on the unit circle. Here, I make 8 emitters and 8 detectors and arrange them semi-randomly.

```
# GENERATE EMITTERS AND DETECTORS in ds and es (coordinate space from -1 to 1 x and y)
ds, es = get_random_des(8, 8) # Each one is a (x,y) pair
# Show them:
s = draw_des(ds, es, 500, 500)
s.write_to_png('t4.png')
Image('t4.png')
```

## Step 2 - Representing an object being scanned

We represent an object as a greyscale image. The intensity reflects how much light the material lets through. Since this is all hypothetical at the moment, I make some simplifying assumptions. For the tests in this notebook, I've been generating small (256x256) images with a circle somewhere in the unit circle. For example, to make one in the center:

```
s, img, x, y = test_image(256, 256, 60, 0.5, 0.5) # Central
# s, img, x, y = random_test_image(256, 256, 60, 0.8) # w, h, radius of circle, max dist from center
print("Circle location:",x, y)
s.write_to_png('t1.png')
Image('t1.png')
```

## Step 3 - Tracing along a path

To see how much light will get from an emitter to a detector, we sum the pixel values along a path in the image from the emitter location to the detector location.

```
s, values = interp((-1, -1), (1, 1), 256, 256, img)
print("The sum along the line: ", s)
print("Plotting the values along the line:")
plt.plot(values)
```

## Step 4 - Tracing along all paths from emitters to detectors

This represents the scan in full. We can optionally rotate all the emitters, detectors or both to get more readings. For example, for the semi-random arrangement shown in step 1, with 8 es and 8 ds:

```
lines = get_paths(img, ds, es, 256, 256)
plt.plot(lines) # 64 values (nd*ne) as expected.
```

## Step 5 - Going from these readings back to an image

TODO: backprojection

We need some way to get back to the object being scanned. Most of the work to date has been done on well-defined, regular arrangements of sensors and many readings (for example, filtered backprojection). Here, I decided to try out neural networks and see if I could train a network to reconstruct an image from the readings.

### 5.1 - Inferring position

I decided to start with a fairly simple task - getting the position of the test circle from the sensor readings. We generate some training data and some test data, and try to predict the positions of the circles in the test data based on the readings returned by get_paths(). As you can see, the model does fairly well, getting the position within ~10% (1 sd) with only 100 training images.

```
# Generating training data:
X_train = []
Y_train = []
X_test = []
Y_test = []
for i in range(100):
    s, img, xi, yi = random_test_image(256, 256, 60, 0.8)
    paths = get_paths(img, ds, es, 256, 256)
    X_train.append(paths) # The simulated readings
    Y_train.append([xi,yi]) # The position
for i in range(100):
    s, img, xi, yi = random_test_image(256, 256, 60, 0.8)
    paths = get_paths(img, ds, es, 256, 256)
    X_test.append(paths) # The simulated readings
    Y_test.append([xi,yi]) # The position

# Scale the inputs (NNs sensitive to input scaling) - without this we get far worse performance
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# Creating and training the neural network
mlp = MLPRegressor(hidden_layer_sizes=(20,20,20))
mlp.fit(X_train, Y_train)

# Predicting the positions for the test data
predictions = mlp.predict(X_test)

# Plot predicted X values vs actual:
plt.scatter(predictions[:,0].clip(0,1), np.asarray(Y_test)[:,0]) # A couple of outliers sometimes

# View the distribution of the errors
e = predictions.clip(0, 1)-Y_test
errors = [(d[0]**2 + d[1]**2)**0.5 for d in e ]
print("Standard deviation
of errors:",np.asarray(errors).std())
sns.set(color_codes=True)
sns.distplot(errors);
```

Let's see how the errors change with more training data.

```
e_vs_t_pos() # Plots the total error vs the number of training images.
```

It converges to ~10% error fairly quickly. This makes sense to me in light of the fact that the effective resolution is going to be governed by the 'effective sensor radius', which for 16 es or ds is ~0.1 (do maths and find ref)

### 5.2 - Image inference

Now the fun part - going back to the actual images from the scans. This shouldn't be possible, but the fact that we know we can always expect a circle of radius 60 helps. This is an example where for a specialized use case we can get good returns by knowing what to expect and doing adequate simulation.

```
X, y, Xt, yt = [], [], [], []

def downsample(img):
    out = []
    for r in range(len(img)):
        if r%16 == 0:
            #row = []
            for c in range(len(img[r])):
                if c%16 == 0:
                    out.append(img[r][c])
            #out.append(row)
    return np.asarray(out)

# 100 tests
for i in range(100):
    s, img, xi, yi = random_test_image(256, 256, 60, 0.8)
    paths = get_paths(img, ds, es, 256, 256)
    Xt.append(paths)
    yt.append(downsample(img))
# 1000 training samples
for i in range(1000):
    s, img, xi, yi = random_test_image(256, 256, 60, 0.8)
    paths = get_paths(img, ds, es, 256, 256)
    X.append(paths)
    y.append(downsample(img))

# Instantiate the model
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
# Scale the inputs (important)
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
Xt = scaler.transform(Xt)

mlp = MLPRegressor(hidden_layer_sizes=(20,20,20))
mlp.fit(X, y)

plt.imshow(mlp.predict(Xt)[0].reshape(16, 16))
plt.imshow(yt[0].reshape(16, 16))

from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets

def test_image(w, h, x, y):
    surface = cairo.ImageSurface (cairo.FORMAT_ARGB32, w, h)
    ctx = cairo.Context (surface)
ctx.set_source_rgb(0, 0, 0) ctx.rectangle(0,0,w,h) ctx.fill() xp, yp = point_to_pixel(x, y, w, h) xp -= 60 yp -= 60 # Subtract the radius ctx.set_source_rgb(1, 1, 1) ctx.arc(xp, yp, 60, 0, 2 * math.pi) ctx.fill() ctx.stroke() buf = np.frombuffer(surface.get_data(), np.uint8) img = buf.reshape(256, 256, 4)[:,:,0] return surface, img, x, y def f(x, y): s, img, a, b = test_image(256, 256, x, y) d = downsample(img) paths = get_paths(img, ds, es, 256, 256) pred = mlp.predict(np.asarray(paths).reshape(1, -1)) ax1 = plt.subplot(2, 1, 1) ax1.plot(paths) ax2 = plt.subplot(2, 2, 3) ax2.imshow(img)#d.reshape(16, 16)) ax3 = plt.subplot(2, 2, 4) ax3.imshow(pred.reshape(16, 16)) interactive_plot = interactive(f, x=0.5, y=0.5) output = interactive_plot.children[-1] output.layout.height = '400px' interactive_plot ```
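The core operation used throughout — summing image values along an emitter→detector line — can be isolated in a small numpy-only sketch (a simplified stand-in for `get_paths`/`interp`, working directly in pixel coordinates rather than the -1..1 space):

```python
import numpy as np

def line_integral(img, p0, p1):
    """Sum pixel values along the straight line from p0 to p1 (pixel coordinates)."""
    (x0, y0), (x1, y1) = p0, p1
    # One sample per pixel of line length, matching the notebook's linspace approach
    length = max(int(np.hypot(x1 - x0, y1 - y0)), 1)
    xs = np.linspace(x0, x1, length).astype(int)
    ys = np.linspace(y0, y1, length).astype(int)
    return img[xs, ys].sum()

# A 5x5 image with a bright diagonal: sampling along that diagonal picks up all of it
img = np.eye(5)
print(line_integral(img, (0, 0), (4, 4)))  # 5.0
```

Nearest-pixel sampling like this is crude; bilinear interpolation along the same sample points would be the natural refinement.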
## General Exploratory Data Analysis

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import os.path
from datetime import datetime
from datetime import date
from dateutil import parser
#import pickle
#import asyncio
from datetime import timedelta
import dateutil.parser
import imp
import json
import statistics
#import random
#from binance.client import Client
#import api
#import get_uptodate_binance_data
#import generate_random_file
#import track_pnl

%%time
filename = 'BTCUSDT-1h-binance.csv'
timeframe = '1h'
OHLC_directory = '/root/OResearch/Data/Binance_OHLC/'
complete_file_path = OHLC_directory + filename
df = pd.read_csv(complete_file_path)
df = df.drop(columns=['Unnamed: 0'])
```

Adding log-return

```
df['closeprice_log_return'] = np.log(df.close) - np.log(df.close.shift(1))
df = df.iloc[1: , :] # Remove first row which contains NA due to log-return
df['datetime'] = pd.to_datetime(df['timestamp'], errors='coerce')
df['day'] = df['datetime'].dt.day_name()
df['week'] = df['datetime'].dt.isocalendar().week # .dt.week is deprecated in recent pandas
df['month'] = df['datetime'].dt.month_name()
df
```

# We plot the average and median log_return by day, by week, and by month.
# by day

```
days=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']

fig = df[['day', 'closeprice_log_return']].groupby('day', sort=True).mean().reindex(days).plot(kind='bar', title='Average hourly log-return for BTCUSDT per day', legend=True).get_figure()
fig.savefig('Images/Average hourly log-return for BTCUSDT per day.png')

fig = df[['day', 'closeprice_log_return']].groupby('day', sort=False).median().reindex(days).plot(kind='bar', title='Median hourly log-return for BTCUSDT per day', legend=True).get_figure()
fig.savefig('Images/Median hourly log-return for BTCUSDT per day.png')
```

# by week

```
fig = df[['week', 'closeprice_log_return']].groupby('week', sort=True).mean().plot(kind='bar', title='Average hourly log-return for BTCUSDT per week number', legend=True).get_figure()
fig.savefig('Images/Average hourly log-return for BTCUSDT per week number.png')
```

We can notice quite a pattern in the 53rd calendar week. This is likely an artifact rather than a real effect: the 53rd ISO week only exists in some years (2020 in this sample), so the sample is too small to draw conclusions from.

```
fig = df[['week', 'closeprice_log_return']].groupby('week').median().plot(kind='bar', title='Median hourly log-return for BTCUSDT per week number', legend=True).get_figure()
fig.savefig('Images/Median hourly log-return for BTCUSDT per week number.png')
```

Next, let's look at extreme values and outliers among these returns.

# by month

```
months=['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']

fig = df[['month', 'closeprice_log_return']].groupby('month', sort=False).mean().reindex(months).plot(kind='bar', title='Average hourly log-return for BTCUSDT per month', legend=True).get_figure()
fig.savefig('Images/Average hourly log-return for BTCUSDT per month.png')

fig = df[['month', 'closeprice_log_return']].groupby('month').median().reindex(months).plot(kind='bar', title='Median hourly log-return for BTCUSDT per month', legend=True).get_figure()
fig.savefig('Images/Median
hourly log-return for BTCUSDT per month.png') ```
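The log-return/grouping pipeline above depends on a private CSV; the same steps can be sketched on synthetic data (prices with a constant hourly log-return of 0.01, so every group mean should come out to exactly 0.01):

```python
import numpy as np
import pandas as pd

# Synthetic hourly closes standing in for the Binance OHLC file:
# price grows by a constant log-return of 0.01 per hour, starting Mon 2021-01-04
idx = pd.date_range("2021-01-04", periods=48, freq=pd.Timedelta(hours=1))
df = pd.DataFrame({"close": 100 * np.exp(0.01 * np.arange(48))}, index=idx)

df["closeprice_log_return"] = np.log(df.close) - np.log(df.close.shift(1))
df = df.iloc[1:]                      # drop the NA created by the shift
df["day"] = df.index.day_name()

mean_by_day = df.groupby("day")["closeprice_log_return"].mean()
print(mean_by_day)                    # 0.01 for both days covered
```

This also makes the week-53 caveat concrete: a group mean is only as trustworthy as the number of observations behind it, which `df.groupby("day").size()` would report alongside the means.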
*This notebook is part of course materials for CS 345: Machine Learning Foundations and Practice at Colorado State University. Original versions were created by Ben Sattelberg and Asa Ben-Hur. The content is available [on GitHub](https://github.com/asabenhur/CS345).*

*The text is released under the [CC BY-SA license](https://creativecommons.org/licenses/by-sa/4.0/), and code is released under the [MIT license](https://opensource.org/licenses/MIT).*

<img style="padding: 10px; float:right;" alt="CC-BY-SA icon.svg in public domain" src="https://upload.wikimedia.org/wikipedia/commons/d/d0/CC-BY-SA_icon.svg" width="125">

<a href="https://colab.research.google.com/github//asabenhur/CS345/blob/master/notebooks/module08_03_neural_networks_mnist.ipynb">
<img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

```
%autosave 0
%matplotlib inline
import numpy as np
import keras
import matplotlib.pyplot as plt
```

# Neural Networks

### Preface: enabling GPUs on google colab

Until now we ran our neural networks on a CPU. If you are running this notebook on google colab, you are in luck - google colab will allow you to run your code on a GPU. Enabling a GPU is very simple: all you need to do is navigate to Edit→Notebook Settings and select GPU from the Hardware Accelerator drop-down. This [colab notebook](https://colab.research.google.com/notebooks/gpu.ipynb) has instructions for verifying that you are using a GPU and seeing the resulting speedup.

## The MNIST dataset

In the previous notebooks we used Keras to solve toy problems and built some understanding of how the networks are solving those problems. In this notebook, we'll look at the real (but still relatively easy) problem of handwritten digit recognition. We will be using the MNIST (modified National Institute of Standards and Technology) database which has images taken from a NIST database of handwritten digits and modified by Yann Lecun, Corinna Cortes, and Christopher J.C.
Burges to be more easily used in machine learning.

The first thing we need to do is to load the dataset. Fortunately, Keras does this work for us:

```
# This will download an 11.5 MB file to ~/.keras/datasets/
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
```

Let's get some information about the dataset:

```
print(X_train.shape, y_train.shape)
print(min(y_train), max(y_train))
```

This tells us that we have 60,000 input images, each of which is 28x28 pixels. The labels are, unsurprisingly for a database of digits, the numbers 0 through 9, corresponding to which digit the image represents. Now let's look at the test set:

```
print(X_test.shape)
print(y_test.shape)
```

Here we have 10,000 samples with the same format as the training set. Let's look at one of the images:

```
fig, ax = plt.subplots(1,1,figsize=(6,6))
im = ax.imshow(X_train[0, :, :].reshape(28,28), cmap='Greys')
ax.set_title(y_train[0])
cbar = fig.colorbar(im)
cbar.set_ticks([0, 128, 255])
ax.set_xticks([0, 14, 27])
ax.set_yticks([0, 14, 27]);
```

Here we can see that the image is a grayscale 28x28 image with pixel values between 0 and 255. We can also look at a few other images in the dataset:

```
fig, axes = plt.subplots(5, 5, figsize=(10,10))
for i in range(5):
    for j in range(5):
        axes[i,j].imshow(X_train[i*5 + j, :, :].reshape(28,28), cmap='Greys')
        axes[i,j].set_title(y_train[i*5+j])
        axes[i,j].axis('off')
```

There are a few things we want to do to the input data before we use it.
The first is to convert it to 32-bit floats:

```
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
```

We also want to change the range of the data from integers between 0 and 255 to numbers between 0 and 1 to help with training:

```
X_train /= 255
X_test /= 255
```

The last step, which is less obvious, is to reshape the actual data to have an extra dimension:

```
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)
```

This dimension corresponds to the number of "channels" in the image. This data is grayscale, but color images are typically stored in RGB (red, green, blue) format, where the three channels describe the amount of red, green, and blue at each pixel. Keras is designed to handle images as a native data format without needing to "flatten" the images into vectors as a preprocessing step.

We will also convert the `y_train` and `y_test` to a one-hot encoding:

```
y_train_one_hot = keras.utils.to_categorical(y_train, 10)
y_test_one_hot = keras.utils.to_categorical(y_test, 10)
```

The first experiment we will do with this dataset is to test a simple linear model to get a baseline for how good we can expect our models to be. We will use Keras for this, and simply not have any hidden layers in our network.

In addition to the accuracy on the training set, we want to keep track of the accuracy on the testing set. One way to do this with Keras is with a callback function that keeps track of the accuracy on the testing set as we progress through training. It isn't necessary to understand the code here, but it is good to be aware of the goal of this structure.
``` # Structure based on https://github.com/keras-team/keras/issues/2548 class EvaluateCallback(keras.callbacks.Callback): def __init__(self, test_data): self.test_data = test_data def on_epoch_end(self, epoch, logs): x, y = self.test_data loss, acc = self.model.evaluate(x, y, verbose=0) if 'test loss' not in logs: logs['test loss'] = [] logs['test acc'] = [] logs['test loss'] += [loss] logs['test acc'] += [acc] print('Testing loss: {}, acc: {}\n'.format(round(loss, 4), round(acc, 4))) ``` We can now train our model. One layer to notice is the ```Flatten()``` layer. This layer converts the data from a 28x28x1 dimensional image to a 784=28\*28\*1 dimensional vector. ``` linear_model = keras.Sequential() linear_model.add(keras.layers.Flatten()) linear_model.add(keras.layers.Dense(10, activation='softmax')) loss_fn = keras.losses.CategoricalCrossentropy() opt = keras.optimizers.Adam() linear_model.compile(loss=loss_fn, optimizer=opt, metrics=['accuracy']) linear_history = linear_model.fit(X_train, y_train_one_hot, batch_size=100, epochs=20, verbose=1, callbacks=[EvaluateCallback((X_test, y_test_one_hot))]) print('Final loss: {}, test accuracy: {}'.format(*map(lambda x: round(x, 4), linear_model.evaluate(X_test, y_test_one_hot, verbose=0)))) ``` We can look at the summary of this model - the main thing to note here is the number of parameters: ``` linear_model.summary() ``` We can also look at the accuracy over the epochs of the network: ``` plt.plot(linear_history.history['accuracy'], label='Train') plt.plot(linear_history.history['test acc'], label='Test') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend() plt.ylim([0.85, 1]); ``` We can see that even a simple linear classifier gets about 92% accuracy. MNIST is commonly used as a tutorial dataset, and one of the reasons for that is that basically anything will be successful on it. 
The dataset is also solved - methods exist that do better than human accuracy and will reach around 99.9% accuracy (i.e., 10 samples out of the 10,000 misclassified).

## An MNIST network

Now let's train a more interesting neural network on this problem. We'll start with a single hidden layer with 100 nodes:

```
network = keras.Sequential()
network.add(keras.layers.Flatten())
network.add(keras.layers.Dense(100, activation='relu'))
network.add(keras.layers.Dense(10, activation='softmax'))

loss_fn = keras.losses.CategoricalCrossentropy()
opt = keras.optimizers.Adam()
network.compile(loss=loss_fn, optimizer=opt, metrics=['accuracy'])
history = network.fit(X_train, y_train_one_hot, batch_size=100, epochs=20, verbose=1,
                      callbacks=[EvaluateCallback((X_test, keras.utils.to_categorical(y_test, 10)))])
print('Final loss: {}, test accuracy: {}'.format(*map(lambda x: round(x, 4),
      network.evaluate(X_test, y_test_one_hot, verbose=0))))
```

The total number of parameters for this network is more than an order of magnitude higher than for the linear model. However, it does improve on the linear model's accuracy from 92% to about 97.5%.

```
network.summary()
```

The network also reaches nearly 100% accuracy on the training set, and continues improving on the training set after it plateaus in accuracy on the test set. This is a sign that the network has begun to overfit, and that further training can potentially *reduce* the test accuracy.

```
plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['test acc'], label='Test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.ylim([0.85, 1]);
```

Since there's the potential to overfit, and a large number of parameters, simply increasing the depth or the width of this network could potentially lead to issues.
Instead of going that route, we will use a different kind of layer in our next notebook that works well for images and introduce convolutional networks, which have become the standard architecture for image data. For reference, and to convince ourselves that we obtained good accuracy with our neural network let's try random forests: ``` X_train_flat = X_train.reshape(-1, 784) X_test_flat = X_test.reshape(-1, 784) from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier(n_estimators=500) rf.fit(X_train_flat, y_train); from sklearn.metrics import accuracy_score y_pred = rf.predict(X_test_flat) accuracy_score(y_pred, y_test) ``` ### Comments There are major issues in using feed-forward neural networks for image classification: * Fully connected networks can have very large numbers of parameters with increasing image sizes. Consider for example images of size 228x228x3, which is standard in this field. Using the network architecture we have here would result in 228\*228\*3\*100 parameters from the input to the hidden layer - about 15,000,000. This network would also not be successful - we would need to significantly increase the width and depth, compounding the issue. It is likely that billions of parameters would be necessary to achieve good accuracy. * If we take an image that represents the number seven, and shift the seven over a few pixels, we would expect it to still be classified as a seven. However, fully connected networks are not able to represent this invariance. Some of these concerns apply to other standard machine learning approaches as well. Convolutional networks which are introduced next, address these shortcomings, and have led to major improvements in accuracy in image classification tasks. Their success has led to a renaissance of the field of neural networks.
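The parameter counts behind this discussion — reported by `model.summary()` for the two networks, and the "about 15,000,000" estimate for a 228x228x3 input — all follow from one dense-layer formula, which is easy to check by hand:

```python
def dense_params(n_in, n_out):
    # A fully connected layer has one weight per (input, output) pair plus one bias per output
    return n_in * n_out + n_out

linear = dense_params(28 * 28, 10)                            # Flatten -> Dense(10)
hidden = dense_params(28 * 28, 100) + dense_params(100, 10)   # Flatten -> Dense(100) -> Dense(10)
big_input = dense_params(228 * 228 * 3, 100)                  # first layer alone, 228x228x3 input

print(linear, hidden, big_input)  # 7850 79510 15595300
```

So the hidden-layer network has roughly 10x the parameters of the linear model, and a single 100-unit layer on a 228x228x3 image already costs ~15.6 million parameters — the scaling problem that motivates convolutional layers.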
# Enter State Farm ``` from __future__ import division, print_function %matplotlib inline #path = "data/state/" path = "data/state/sample/" from importlib import reload # Python 3 import utils; reload(utils) from utils import * from IPython.display import FileLink batch_size=64 ``` ## Setup batches ``` batches = get_batches(path+'train', batch_size=batch_size) val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False) steps_per_epoch = int(np.ceil(batches.samples/batch_size)) validation_steps = int(np.ceil(val_batches.samples/(batch_size*2))) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) ``` Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.) ``` trn = get_data(path+'train') val = get_data(path+'valid') save_array(path+'results/val.dat', val) save_array(path+'results/trn.dat', trn) val = load_array(path+'results/val.dat') trn = load_array(path+'results/trn.dat') ``` ## Re-run sample experiments on full dataset We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models. 
### Single conv layer ``` def conv1(batches): model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Conv2D(32,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Conv2D(64,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dense(10, activation='softmax') ]) model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches, validation_steps=validation_steps) model.optimizer.lr = 0.001 model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches, validation_steps=validation_steps) return model model = conv1(batches) ``` Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results. ### Data augmentation ``` gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path+'train', gen_t, batch_size=batch_size) model = conv1(batches) model.optimizer.lr = 0.0001 model.fit_generator(batches, steps_per_epoch, epochs=15, validation_data=val_batches, validation_steps=validation_steps) ``` I'm shocked by *how* good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation. ### Four conv/pooling pairs + dropout Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help. 
``` gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path+'train', gen_t, batch_size=batch_size) model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Conv2D(32,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Conv2D(64,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Conv2D(128,(3,3), activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(10, activation='softmax') ]) model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches, validation_steps=validation_steps) model.optimizer.lr=0.001 model.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches, validation_steps=validation_steps) model.optimizer.lr=0.00001 model.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches, validation_steps=validation_steps) ``` This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however... ### Imagenet conv features Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.) 
``` vgg = Vgg16() model=vgg.model last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1] conv_layers = model.layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) test_batches = get_batches(path+'test', batch_size=batch_size*2, shuffle=False) conv_feat = conv_model.predict_generator(batches, int(np.ceil(batches.samples/batch_size))) conv_val_feat = conv_model.predict_generator(val_batches, int(np.ceil(val_batches.samples/(batch_size*2)))) conv_test_feat = conv_model.predict_generator(test_batches, int(np.ceil(test_batches.samples/(batch_size*2)))) save_array(path+'results/conv_val_feat.dat', conv_val_feat) save_array(path+'results/conv_test_feat.dat', conv_test_feat) save_array(path+'results/conv_feat.dat', conv_feat) conv_feat = load_array(path+'results/conv_feat.dat') conv_val_feat = load_array(path+'results/conv_val_feat.dat') conv_val_feat.shape ``` ### Batchnorm dense layers on pretrained conv layers Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers. 
``` def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=2, validation_data=(conv_val_feat, val_labels)) bn_model.save_weights(path+'models/conv8.h5') ``` Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model. ### Pre-computed data augmentation + dropout We'll use our usual data augmentation parameters: ``` gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False) ``` We use those to create a dataset of convolutional features 5x bigger than the training set. ``` da_conv_feat = conv_model.predict_generator(da_batches, 5*int(np.ceil((da_batches.samples)/(batch_size))), workers=3) save_array(path+'results/da_conv_feat2.dat', da_conv_feat) da_conv_feat = load_array(path+'results/da_conv_feat2.dat') ``` Let's include the real training data as well in its non-augmented form. ``` da_conv_feat = np.concatenate([da_conv_feat, conv_feat]) ``` Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too. ``` da_trn_labels = np.concatenate([trn_labels]*6) ``` Based on some experiments the previous model works well, with bigger dense layers. 
``` def get_bn_da_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_da_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) ``` Now we can train the model as usual, with pre-computed augmented data. ``` bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.0001 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) ``` Looks good - let's save those weights. ``` bn_model.save_weights(path+'models/da_conv8_1.h5') ``` ### Pseudo labeling We're going to try using a combination of [pseudo labeling](http://deeplearning.net/wp-content/uploads/2013/03/pseudo_label_final.pdf) and [knowledge distillation](https://arxiv.org/abs/1503.02531) to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set. To do this, we simply calculate the predictions of our model... ``` val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size) ``` ...concatenate them with our training labels... ``` comb_pseudo = np.concatenate([da_trn_labels, val_pseudo]) comb_feat = np.concatenate([da_conv_feat, conv_val_feat]) ``` ...and fine-tune our model using that data. 
``` bn_model.load_weights(path+'models/da_conv8_1.h5') bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=1, validation_data=(conv_val_feat, val_labels)) bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.00001 bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4, validation_data=(conv_val_feat, val_labels)) ``` That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set. ``` bn_model.save_weights(path+'models/bn-ps8.h5') ``` ### Submit We'll find a good clipping amount using the validation set, prior to submitting. ``` def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx) val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size*2) np.mean(keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()) conv_test_feat = load_array(path+'results/conv_test_feat.dat') preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2) subm = do_clip(preds,0.93) subm_name = path+'results/subm.gz' classes = sorted(batches.class_indices, key=batches.class_indices.get) submission = pd.DataFrame(subm, columns=classes) submission.insert(0, 'img', [a[4:] for a in test_filenames]) submission.head() submission.to_csv(subm_name, index=False, compression='gzip') FileLink(subm_name) ``` This gets 0.534 on the leaderboard. ## The "things that didn't really work" section You can safely ignore everything from here on, because these experiments didn't really help. ### Finetune some conv layers too ``` #for l in get_bn_layers(p): conv_model.add(l) # this choice would give a weight shape error for l in get_bn_da_layers(p): conv_model.add(l) # ...
so probably this is the right one for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]): l2.set_weights(l1.get_weights()) for l in conv_model.layers: l.trainable =False for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True comb = np.concatenate([trn, val]) # not knowing what the experiment was about, added this to avoid a shape match error with comb using gen_t.flow comb_pseudo = np.concatenate([trn_labels, val_pseudo]) gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04, shear_range=0.03, channel_shift_range=10, width_shift_range=0.08) batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size) val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False) conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, steps_per_epoch, epochs=1, validation_data=val_batches, validation_steps=validation_steps) conv_model.optimizer.lr = 0.0001 conv_model.fit_generator(batches, steps_per_epoch, epochs=3, validation_data=val_batches, validation_steps=validation_steps) for l in conv_model.layers[16:]: l.trainable =True #- added compile instruction in order to avoid Keras 2.1 warning message conv_model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.optimizer.lr = 0.00001 conv_model.fit_generator(batches, steps_per_epoch, epochs=8, validation_data=val_batches, validation_steps=validation_steps) conv_model.save_weights(path+'models/conv8_ps.h5') #conv_model.load_weights(path+'models/conv8_da.h5') # conv8_da.h5 was not saved in this notebook val_pseudo = conv_model.predict(val, batch_size=batch_size*2) save_array(path+'models/pseudo8_da.dat', val_pseudo) ``` ### Ensembling ``` drivers_ds = pd.read_csv(path+'driver_imgs_list.csv') drivers_ds.head() img2driver = drivers_ds.set_index('img')['subject'].to_dict() driver2imgs = {k: g["img"].tolist() for k,g in drivers_ds[['subject', 'img']].groupby("subject")} # It 
seems this function is not used in this notebook def get_idx(driver_list): return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list] # drivers = driver2imgs.keys() # Python 2 drivers = list(driver2imgs) # Python 3 rnd_drivers = np.random.permutation(drivers) ds1 = rnd_drivers[:len(rnd_drivers)//2] ds2 = rnd_drivers[len(rnd_drivers)//2:] # The following cells seem to require some preparation code not included in this notebook models=[fit_conv([d]) for d in drivers] models=[m for m in models if m is not None] all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models]) avg_preds = all_preds.mean(axis=0) avg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1) keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval() keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval() ```
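The averaging-and-renormalizing step in the ensembling cell above is just row-wise arithmetic; a minimal pure-Python sketch of the same operation (the two models' predictions below are invented values):

```python
# Average per-class probabilities across two hypothetical models, then
# renormalize each row so it sums to 1 (mirrors avg_preds / avg_preds.sum(axis=1)).
preds_a = [[0.9, 0.1], [0.2, 0.8]]  # model 1 predictions (made up)
preds_b = [[0.7, 0.3], [0.4, 0.6]]  # model 2 predictions (made up)

avg = [[(a + b) / 2 for a, b in zip(ra, rb)] for ra, rb in zip(preds_a, preds_b)]
avg = [[p / sum(row) for p in row] for row in avg]
print(avg)
```

The renormalization guards against rows drifting away from summing to 1 after clipping or floating-point accumulation.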
Basics --- In this example, we'll go over the basics of atom and residue selection in MDTraj. First let's load up an example trajectory. ``` from __future__ import print_function import mdtraj as md traj = md.load('ala2.h5') print(traj) ``` We can also more directly find out how many atoms or residues there are by using `traj.n_atoms` and `traj.n_residues`. ``` print('How many atoms? %s' % traj.n_atoms) print('How many residues? %s' % traj.n_residues) ``` We can also manipulate the atom positions by working with `traj.xyz`, which is a NumPy array containing the xyz coordinates of each atom, with dimensions (n_frames, n_atoms, 3). Let's find the 3D coordinates of the tenth atom in frame 5. ``` frame_idx = 4 # zero indexed frame number atom_idx = 9 # zero indexed atom index print('Where is the tenth atom in the fifth frame?') print('x: %s\ty: %s\tz: %s' % tuple(traj.xyz[frame_idx, atom_idx,:])) ``` Topology Object --- As mentioned previously in the introduction, every `Trajectory` object contains a `Topology`. The `Topology` of a `Trajectory` contains all the connectivity information of your system and specific chain, residue, and atom information. ``` topology = traj.topology print(topology) ``` With the topology object we can select a certain `atom`, or loop through them all. (Note: everything is zero-indexed) ``` print('Fifth atom: %s' % topology.atom(4)) print('All atoms: %s' % [atom for atom in topology.atoms]) ``` The same goes for residues. ``` print('Second residue: %s' % traj.topology.residue(1)) print('All residues: %s' % [residue for residue in traj.topology.residues]) ``` Additionally, every `atom` and `residue` is also an object, and has its own set of properties. Here is a simple example that showcases just a few. ``` atom = topology.atom(10) print('''Hi! I am the %sth atom, and my name is %s. I am a %s atom with %s bonds.
I am part of an %s residue.''' % ( atom.index, atom.name, atom.element.name, atom.n_bonds, atom.residue.name)) ``` There are also more complex properties, like `atom.is_sidechain` or `residue.is_protein`, which allow for more powerful selections. Putting Everything Together --- Hopefully, you can see how these properties can be combined with Python's filtered list functionality. Let's say we want the indices of all carbon atoms in the sidechains of our molecule. We could try something like this. ``` print([atom.index for atom in topology.atoms if atom.element.symbol == 'C' and atom.is_sidechain]) ``` Or maybe we want all even-indexed residues in the first chain (although this example only has one chain). ``` print([residue for residue in topology.chain(0).residues if residue.index % 2 == 0]) ``` Atom Selection Language --- If you're hesitant about programming filtered lists like the ones above, MDTraj also features a rich atom selection language, similar to that of PyMol and VMD. You can access it by using `topology.select`. Let's find all atoms in the last two residues. More information about the atom selection syntax is available in the main documentation. ``` print(topology.select('resid 1 to 2')) ``` You can also do more complex operations. Here, we're looking for all nitrogen atoms in the backbone. ``` print(topology.select('name N and backbone')) ``` If you ever want to see the code that generates these results you can use `select_expression`, which will yield a string representation of the atom selection code. ``` selection = topology.select_expression('name CA and resid 1 to 2') print(selection) ```
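The selection language and the filtered-list approach are interchangeable; a rough pure-Python sketch of what a query like `name N and backbone` reduces to (the toy `Atom` records below are illustrative stand-ins, not the MDTraj API):

```python
from collections import namedtuple

# Toy stand-ins for topology.atoms; is_backbone mimics a backbone flag.
Atom = namedtuple('Atom', ['index', 'name', 'is_backbone'])
atoms = [
    Atom(0, 'N',  True),
    Atom(1, 'CA', True),
    Atom(2, 'CB', False),
    Atom(3, 'N',  False),   # e.g. a sidechain nitrogen
]

# Rough equivalent of topology.select('name N and backbone'):
selected = [a.index for a in atoms if a.name == 'N' and a.is_backbone]
print(selected)  # [0]
```

`select_expression` essentially shows you the generated version of exactly this kind of list comprehension.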
# Continuous Control --- Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** ### 1. Start the Environment We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/). ``` from unityagents import UnityEnvironment import numpy as np ``` Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded. - **Mac**: `"path/to/Crawler.app"` - **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"` - **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"` - **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"` - **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"` - **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"` - **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"` For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows: ``` env = UnityEnvironment(file_name="Crawler.app") ``` ``` env = UnityEnvironment(file_name='../../crawler/Crawler.app') ``` Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. 
``` # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] ``` ### 2. Examine the State and Action Spaces Run the code cell below to print some information about the environment. ``` # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of agents num_agents = len(env_info.agents) print('Number of agents:', num_agents) # size of each action action_size = brain.vector_action_space_size print('Size of each action:', action_size) # examine the state space states = env_info.vector_observations state_size = states.shape[1] print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size)) print('The state for the first agent looks like:', states[0]) ``` ### 3. Take Random Actions in the Environment In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment! 
``` env_info = env.reset(train_mode=False)[brain_name] # reset the environment states = env_info.vector_observations # get the current state (for each agent) scores = np.zeros(num_agents) # initialize the score (for each agent) while True: actions = np.random.randn(num_agents, action_size) # select an action (for each agent) actions = np.clip(actions, -1, 1) # all actions between -1 and 1 env_info = env.step(actions)[brain_name] # send all actions to the environment next_states = env_info.vector_observations # get next state (for each agent) rewards = env_info.rewards # get reward (for each agent) dones = env_info.local_done # see if episode finished scores += env_info.rewards # update the score (for each agent) states = next_states # roll over states to next time step if np.any(dones): # exit loop if episode finished break print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores))) ``` When finished, you can close the environment. ``` env.close() ``` ### 4. It's Your Turn! Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following: ```python env_info = env.reset(train_mode=True)[brain_name] ```
``` import sys sys.path.append("functions/") from datastore import DataStore from searchgrid import SearchGrid from crossvalidate import CrossValidate from sklearn.dummy import DummyClassifier from sklearn.metrics import f1_score from sklearn.metrics import roc_auc_score from sklearn.linear_model import LogisticRegression from sklearn.metrics import balanced_accuracy_score from sampleddatastore import SampledDataStore ``` First, we will load up our predefined functions for loading the data and running the model. Due to the high class imbalance, F1 score is a much better metric to use than accuracy alone (since 99% of the data belongs to class 0). We will also report ROC-AUC for comparison. We collect the true positives, false positives and false negatives to calculate the F1 score across all folds, rather than using the built-in functionality. This is because the averaged F1 score returned by Sklearn is slightly biased for imbalanced class problems (for cross validation). This doesn't matter when evaluating the test set. All the relevant functions are in their respective Python files (same folder as the notebook). Reference: https://www.hpl.hp.com/techreports/2009/HPL-2009-359.pdf ``` #Load object for CrossValidation crossvalidate = CrossValidate() #Load object for GridSearchCV GridSpace = SearchGrid() ``` Let's establish a baseline model that simply predicts the minority class: ``` classifier = DummyClassifier(strategy="constant", constant=1) crossvalidate.setClassifier(classifier) crossvalidate.run() f1, roc = crossvalidate.getMetrics().getScores() print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") ``` A good model is one that performs better than this baseline in terms of F1 score. Anything below is worse than a model that simply predicts the minority class. Note that a ROC-AUC score of 0.5 indicates a random classifier.
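The fold-pooling point above can be made concrete: summing TP/FP/FN over the folds and computing a single F1 is not the same as averaging the per-fold F1 scores, and the gap widens when some folds contain almost no positives. A standalone sketch with invented per-fold counts:

```python
def f1(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN); define F1 = 0 when the denominator is zero.
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Hypothetical confusion counts from three CV folds of an imbalanced problem.
folds = [(2, 1, 7), (0, 0, 10), (5, 3, 5)]  # (tp, fp, fn) per fold

mean_f1 = sum(f1(*f) for f in folds) / len(folds)   # average of per-fold F1s
tp, fp, fn = (sum(col) for col in zip(*folds))      # pool counts across folds
pooled_f1 = f1(tp, fp, fn)

print(round(mean_f1, 4), round(pooled_f1, 4))       # the two estimates disagree
```

The fold with zero true positives drags the averaged estimate down to 0 for that fold, while pooling lets its counts contribute proportionally — which is why the notebook aggregates counts instead of scores.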
``` classifier = LogisticRegression() crossvalidate.setClassifier(classifier) crossvalidate.run() f1, roc = crossvalidate.getMetrics().getScores() print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") ``` Looks like it's slightly better than a random classifier; this means our model is learning some relationships in the underlying data, albeit weak ones. The low score is to be expected, especially given the class imbalance. Let's try the class weight functionality, which assigns a weight to each class based on its frequency. ``` classifier = LogisticRegression(class_weight='balanced') crossvalidate.setClassifier(classifier) crossvalidate.run() f1, roc = crossvalidate.getMetrics().getScores() print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") ``` Looks like the balanced class weight performs worse in terms of F1 score (probably because it results in a lot more false positives). Let's test different parameters using GridSearchCV. We will be using our custom objects. ``` parameters = {'class_weight':[{0:1,1:1}, {0:1,1:10}, {0:1,1:100}, {0:10,1:1}]} GridSpace.setGridParameters(parameters) GridSpace.setClassifier(LogisticRegression()) GridSpace.run() parameters, scores = GridSpace.getMetrics().getBestResults() f1 = scores[0] roc = scores[1] print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") print(f"Best Parameters: {parameters}") ``` We are making progress, but can we do even better? Adjusting the weights was not enough, so we will have to try different sampling techniques. The Imbalanced-learn library will come in handy here. We will start with RandomOverSampler to duplicate records from the minority class, using a sampling ratio of 0.1 (i.e. increasing the gilded class to ~10%).
Read more: https://imbalanced-learn.readthedocs.io/en/stable/over_sampling.html#a-practical-guide ``` SampledDataStore = SampledDataStore() SampledDataStore.initializeSamplers() #Using RandomOverSampler to duplicate records belonging to class 1 (gilded) random = SampledDataStore.getRandomSampled X_resampled, y_resampled = random() classifier = LogisticRegression(class_weight={0: 1, 1: 10}) crossvalidate.getDataStore().setxTrain(X_resampled) crossvalidate.getDataStore().setyTrain(y_resampled) crossvalidate.setClassifier(classifier) crossvalidate.run() f1, roc = crossvalidate.getMetrics().getScores() print("Random Over Sampling:") print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") crossvalidate.getDataStore().revertToOriginal() ``` We can also generate new samples with SMOTE and ADASYN based on existing samples. We will keep the sampling ratio the same for comparison. ``` smote = SampledDataStore.getSMOTESampled ada = SampledDataStore.getADASYNSampled samplers = [smote, ada] sampler_names = ["SMOTE", "ADASYN"] for i in range(len(samplers)): X_resampled, y_resampled = samplers[i]() crossvalidate.getDataStore().setxTrain(X_resampled) crossvalidate.getDataStore().setyTrain(y_resampled) classifier = LogisticRegression(class_weight={0: 1, 1: 10}) crossvalidate.setClassifier(classifier) crossvalidate.run() f1, roc = crossvalidate.getMetrics().getScores() print(f"{sampler_names[i]}: ") print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") print("\n") crossvalidate.getDataStore().revertToOriginal() ``` Imbalanced-learn also recommends combining oversampling with undersampling of the majority class. Ref: https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html SMOTE can generate noisy samples (e.g. when classes cannot be well separated), and undersampling helps clean up that noise.
``` smote_tomek = SampledDataStore.getSMOTETOMEKSampled smote_enn = SampledDataStore.getSMOTEENNSampled samplers = [smote_tomek, smote_enn] sampler_names = ["SMOTE TOMEK", "SMOTE ENN"] for i in range(len(samplers)): X_resampled, y_resampled = samplers[i]() crossvalidate.getDataStore().setxTrain(X_resampled) crossvalidate.getDataStore().setyTrain(y_resampled) classifier = LogisticRegression(class_weight={0: 1, 1: 10}) crossvalidate.setClassifier(classifier) crossvalidate.run() f1, roc = crossvalidate.getMetrics().getScores() print(f"{sampler_names[i]}: ") print(f"F1 score is: {f1}") print(f"ROC-AUC is: {roc}") print("\n") crossvalidate.getDataStore().revertToOriginal() ``` SMOTE, SMOTE ENN and RandomOverSampler produce the best results so far. Let's evaluate them on our test set. ``` random = SampledDataStore.getRandomSampled smote = SampledDataStore.getSMOTESampled smote_enn = SampledDataStore.getSMOTEENNSampled samplers = [random, smote, smote_enn] sampler_names = ["Random OverSampler", "SMOTE", "SMOTE ENN"] classifier = LogisticRegression() for i in range(len(samplers)): parameters = {'class_weight':[{0:1,1:10}]} X_resampled, y_resampled = samplers[i]() GridSpace.getDataStore().setxTrain(X_resampled) GridSpace.getDataStore().setyTrain(y_resampled) GridSpace.setGridParameters(parameters) GridSpace.setClassifier(classifier) grid = GridSpace.run() y_preds = grid.predict(GridSpace.getDataStore().getxTest()) print(f"{sampler_names[i]} on test set:") print(f"F1 score: {f1_score(GridSpace.getDataStore().getyTest(), y_preds)}") print(f"ROC_AUC score: {roc_auc_score(GridSpace.getDataStore().getyTest(), y_preds)}") print(f"Balanced accuracy score: {balanced_accuracy_score(GridSpace.getDataStore().getyTest(), y_preds)}") print("\n") GridSpace.getDataStore().revertToOriginal() ``` Logistic regression predicts class probabilities for each sample and decides the class based on a threshold (default: 0.5).
We can also check if a different threshold value produces better results. Ref: https://machinelearningmastery.com/threshold-moving-for-imbalanced-classification/ Let's define the relevant functions first. ``` import numpy as np from sklearn.model_selection import RepeatedStratifiedKFold from sklearn.linear_model import LogisticRegressionCV def trainAndgetProbabilities(xTrain, yTrain, xTest): rskf = RepeatedStratifiedKFold(n_splits=10, n_repeats=2, random_state=42) model = LogisticRegressionCV(cv=rskf, class_weight=[{0: 1, 1: 10}]) model.fit(xTrain, yTrain) return model.predict_proba(xTest)[:,1] def convert_probs(probs, threshold): return (probs >= threshold).astype('int') from datastore import DataStore Data = DataStore() random = SampledDataStore.getRandomSampled smote = SampledDataStore.getSMOTESampled smote_enn = SampledDataStore.getSMOTEENNSampled samplers = [random, smote, smote_enn] sampler_names = ["Random Oversampling", "SMOTE", "SMOTE ENN"] thresholds = np.arange(0, 1, 0.001) for i in range(len(samplers)): X_resampled, y_resampled = samplers[i]() probs = trainAndgetProbabilities(X_resampled, y_resampled, Data.getxTest()) f1_scores = [f1_score(Data.getyTest(), convert_probs(probs, t)) for t in thresholds] roc_scores = [roc_auc_score(Data.getyTest(), convert_probs(probs, t)) for t in thresholds] maxf1Index = np.argmax(f1_scores) maxrocIndex = np.argmax(roc_scores) print(f"\n{sampler_names[i]} on test set:") print("Maximizing F1 Score:") print(f"Threshold: {thresholds[maxf1Index]}, F1 Score: {f1_scores[maxf1Index]}, ROC AUC: {roc_scores[maxf1Index]}") print("Maximizing ROC-AUC Score:") print(f"Threshold: {thresholds[maxrocIndex]}, F1 Score: {f1_scores[maxrocIndex]}, ROC AUC: {roc_scores[maxrocIndex]}") ``` Better, but not ideal.
The difference in ROC_AUC score points to the problem: the higher threshold value causes the model to predict a smaller number of samples as positive (true positives or false positives), resulting in a lower ROC AUC and a higher F1 score. Overall, our results are better than the baseline model, but not ideal. Perhaps we can achieve better results with a more complex (non-linear) model. Let's try SVM next.
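The threshold trade-off described above can be reproduced by hand — raising the threshold shrinks the set of predicted positives, which can raise F1 via precision while hurting recall-sensitive metrics. A standalone sketch with invented scores and labels:

```python
def f1_at(probs, labels, t):
    # Apply a decision threshold, then compute F1 from the confusion counts.
    preds = [1 if p >= t else 0 for p in probs]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    denom = 2 * tp + fp + fn
    return (2 * tp / denom if denom else 0.0), sum(preds)

probs  = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]  # invented scores
labels = [1,    1,    0,    0,    1,    0,    0,    0   ]  # invented labels

low,  n_low  = f1_at(probs, labels, 0.5)   # many positives predicted
high, n_high = f1_at(probs, labels, 0.7)   # fewer, more precise positives
print(low, n_low, high, n_high)
```

At the higher threshold the model predicts half as many positives yet scores a better F1 — the same pattern the notebook observes between its F1 and ROC-AUC columns.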
# Rank Classification using BERT on Amazon Review dataset ## Introduction In this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBert model to train on the Amazon review dataset. ## About the dataset and model The [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of all valid reviews from amazon.com. We will use the "Digital_software" category, which consists of 102k valid reviews. As for the pre-trained model, we use the DistilBERT[[1]](https://arxiv.org/abs/1910.01108) model. It's a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of text passages. DistilBERT serves as the base layer, and we will add classification layers on top to output rankings (1 - 5). <img src="https://djl-ai.s3.amazonaws.com/resources/images/amazon_review.png" width="500"> <center>Amazon Review example</center> We will use the review body as our input and the star rating as the label. ## Pre-requisites This tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with: 1. How to set up and run a [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md) 2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). ## Getting started Load the Deep Java Library and its dependencies from Maven. Here you can choose between MXNet and PyTorch. MXNet is enabled by default; you can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
``` // %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/ %maven ai.djl:api:0.15.0 %maven ai.djl:basicdataset:0.15.0 %maven org.slf4j:slf4j-simple:1.7.32 %maven ai.djl.mxnet:mxnet-model-zoo:0.15.0 // PyTorch // %maven ai.djl.pytorch:pytorch-model-zoo:0.15.0 ``` Now let's import the necessary modules: ``` import ai.djl.*; import ai.djl.basicdataset.tabular.*; import ai.djl.basicdataset.utils.*; import ai.djl.engine.*; import ai.djl.inference.*; import ai.djl.metric.*; import ai.djl.modality.*; import ai.djl.modality.nlp.*; import ai.djl.modality.nlp.bert.*; import ai.djl.ndarray.*; import ai.djl.ndarray.types.*; import ai.djl.nn.*; import ai.djl.nn.core.*; import ai.djl.nn.norm.*; import ai.djl.repository.zoo.*; import ai.djl.training.*; import ai.djl.training.dataset.*; import ai.djl.training.evaluator.*; import ai.djl.training.listener.*; import ai.djl.training.loss.*; import ai.djl.training.util.*; import ai.djl.translate.*; import java.io.*; import java.nio.file.*; import java.util.*; import org.apache.commons.csv.*; System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine"); ``` ## Prepare Dataset The first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For the BERT model, the input data must be tokenized and mapped to vocabulary indices. In DJL, the Featurizer interface is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle customer review sentences.
``` final class BertFeaturizer implements CsvDataset.Featurizer { private final BertFullTokenizer tokenizer; private final int maxLength; // the cut-off length public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) { this.tokenizer = tokenizer; this.maxLength = maxLength; } /** {@inheritDoc} */ @Override public void featurize(DynamicBuffer buf, String input) { Vocabulary vocab = tokenizer.getVocabulary(); // convert sentence to tokens (toLowerCase for uncased model) List<String> tokens = tokenizer.tokenize(input.toLowerCase()); // trim the tokens to maxLength tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens; // BERT embedding convention "[CLS] Your Sentence [SEP]" buf.put(vocab.getIndex("[CLS]")); tokens.forEach(token -> buf.put(vocab.getIndex(token))); buf.put(vocab.getIndex("[SEP]")); } } ``` Once we got this part done, we can apply the `BertFeaturizer` into our Dataset. We take `review_body` column and apply the Featurizer. We also pick `star_rating` as our label set. Since we go for batch input, we need to tell the dataset to pad our data if it is less than the `maxLength` we defined. `PaddingStackBatchifier` will do the work for you. 
``` CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) { String amazonReview = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz"; float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]"); return CsvDataset.builder() .optCsvUrl(amazonReview) // load from Url .setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format .setSampling(batchSize, true) // set batch size and enable random sampling .optLimit(limit) .addFeature( new CsvDataset.Feature( "review_body", new BertFeaturizer(tokenizer, maxLength))) .addLabel( new CsvDataset.Feature( "star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f))) .optDataBatchifier( PaddingStackBatchifier.builder() .optIncludeValidLengths(false) .addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken)) .build()) // define how to pad the dataset to a fixed length .build(); } ``` ## Construct your model We will load our pretrained model and prepare the classification. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back to the model using `setBlock`. 
### Load the word embedding We will download our word embedding and load it into memory (this may take a while). ``` // MXNet base model String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip"; if ("PyTorch".equals(Engine.getInstance().getEngineName())) { modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip"; } Criteria<NDList, NDList> criteria = Criteria.builder() .optApplication(Application.NLP.WORD_EMBEDDING) .setTypes(NDList.class, NDList.class) .optModelUrls(modelUrls) .optProgress(new ProgressBar()) .build(); ZooModel<NDList, NDList> embedding = criteria.loadModel(); ``` ### Create classification layers Then let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get the predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding to the front. In our case, we just create a Lambda function that does the following: 1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices) 2. 
generate embedding ``` Predictor<NDList, NDList> embedder = embedding.newPredictor(); Block classifier = new SequentialBlock() // text embedding layer .add( ndList -> { NDArray data = ndList.singletonOrThrow(); NDList inputs = new NDList(); long batchSize = data.getShape().get(0); float maxLength = data.getShape().get(1); if ("PyTorch".equals(Engine.getInstance().getEngineName())) { inputs.add(data.toType(DataType.INT64, false)); inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64)); inputs.add(data.getManager().arange(maxLength) .toType(DataType.INT64, false) .broadcast(data.getShape())); } else { inputs.add(data); inputs.add(data.getManager().full(new Shape(batchSize), maxLength)); } // run embedding try { return embedder.predict(inputs); } catch (TranslateException e) { throw new IllegalArgumentException("embedding error", e); } }) // classification layer .add(Linear.builder().setUnits(768).build()) // pre classifier .add(Activation::relu) .add(Dropout.builder().optRate(0.2f).build()) .add(Linear.builder().setUnits(5).build()) // 5 star rating .addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head Model model = Model.newInstance("AmazonReviewRatingClassification"); model.setBlock(classifier); ``` ## Start Training Finally, we can start building our training pipeline to train the model. ### Creating Training and Testing dataset First, we need to create a vocabulary that is used to map tokens to indices, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset with a given ratio. Note: we set the cut-off length to 64, which means only the first 64 tokens from each review will be used. You can increase this value to achieve better accuracy. 
``` // Prepare the vocabulary DefaultVocabulary vocabulary = DefaultVocabulary.builder() .addFromTextFile(embedding.getArtifact("vocab.txt")) .optUnknownToken("[UNK]") .build(); // Prepare dataset int maxTokenLength = 64; // cutoff tokens length int batchSize = 8; int limit = Integer.MAX_VALUE; // int limit = 512; // uncomment for quick testing BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true); CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit); // split data with 7:3 train:valid ratio RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3); RandomAccessDataset trainingSet = datasets[0]; RandomAccessDataset validationSet = datasets[1]; ``` ### Setup Trainer and training config Then we need to set up our trainer with the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`. ``` SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model"); listener.setSaveModelCallback( trainer -> { TrainingResult result = trainer.getTrainingResult(); Model model = trainer.getModel(); // track for accuracy and loss float accuracy = result.getValidateEvaluation("Accuracy"); model.setProperty("Accuracy", String.format("%.5f", accuracy)); model.setProperty("Loss", String.format("%.5f", result.getValidateLoss())); }); DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type .addEvaluator(new Accuracy()) .optDevices(Engine.getInstance().getDevices(1)) // train using single GPU .addTrainingListeners(TrainingListener.Defaults.logging("build/model")) .addTrainingListeners(listener); ``` ### Start training We will start our training process. Training on a GPU takes approximately 10 minutes. On a CPU, it will take more than 2 hours to finish. 
``` int epoch = 2; Trainer trainer = model.newTrainer(config); trainer.setMetrics(new Metrics()); Shape encoderInputShape = new Shape(batchSize, maxTokenLength); // initialize trainer with proper input shape trainer.initialize(encoderInputShape); EasyTrain.fit(trainer, epoch, trainingSet, validationSet); System.out.println(trainer.getTrainingResult()); ``` ### Save the model ``` model.save(Paths.get("build/model"), "amazon-review.param"); ``` ## Verify the model We can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking. ``` class MyTranslator implements Translator<String, Classifications> { private BertFullTokenizer tokenizer; private Vocabulary vocab; private List<String> ranks; public MyTranslator(BertFullTokenizer tokenizer) { this.tokenizer = tokenizer; vocab = tokenizer.getVocabulary(); ranks = Arrays.asList("1", "2", "3", "4", "5"); } @Override public Batchifier getBatchifier() { return Batchifier.STACK; } @Override public NDList processInput(TranslatorContext ctx, String input) { List<String> tokens = tokenizer.tokenize(input); float[] indices = new float[tokens.size() + 2]; indices[0] = vocab.getIndex("[CLS]"); for (int i = 0; i < tokens.size(); i++) { indices[i+1] = vocab.getIndex(tokens.get(i)); } indices[indices.length - 1] = vocab.getIndex("[SEP]"); return new NDList(ctx.getNDManager().create(indices)); } @Override public Classifications processOutput(TranslatorContext ctx, NDList list) { return new Classifications(ranks, list.singletonOrThrow().softmax(0)); } } ``` Finally, we can create a `Predictor` to run the inference. 
Let's try with a random customer review: ``` String review = "It works great, but it takes too long to update itself and slows the system"; Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer)); predictor.predict(review) ```
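For intuition, the classification head assembled above can be sketched framework-agnostically in NumPy. The weights and shapes below are made up for illustration (not DJL code or trained parameters); taking the [CLS] position first is equivalent to the trailing `nd.get(":,0")` above, because the Linear layers act per token:

```python
import numpy as np

# Assumed shapes for illustration: a batch of 2 sequences, 16 tokens, 768-dim DistilBERT output.
rng = np.random.default_rng(0)
embedded = rng.standard_normal((2, 16, 768))

# Hypothetical weights standing in for the trained Linear blocks.
w_pre, b_pre = rng.standard_normal((768, 768)) * 0.02, np.zeros(768)
w_out, b_out = rng.standard_normal((768, 5)) * 0.02, np.zeros(5)

def classify(batch):
    cls = batch[:, 0, :]                      # take the [CLS] position
    h = np.maximum(cls @ w_pre + b_pre, 0.0)  # pre-classifier Linear + ReLU (dropout is off at inference)
    return h @ w_out + b_out                  # final Linear with 5 units, one per star rating

logits = classify(embedded)
print(logits.shape)  # (2, 5)
```

Each row of `logits` would then be softmaxed into the five star-rating probabilities, as `MyTranslator.processOutput` does.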
``` import numpy as np import pandas as pd import seaborn as sns import nibabel as nib import matplotlib.pyplot as plt from nilearn import plotting from os.path import join from glob import glob from matplotlib.colors import LinearSegmentedColormap sns.set_context('talk') def grab_corr(subjects, nodes, task, condition, session, atlas): errors = pd.Series(index=subjects, dtype=str) edges = pd.Series(index=subjects, name='edge', dtype=np.float64) node1 = nodes[0] node2 = nodes[1] for subject in subjects: try: if condition != None: corrmat = np.genfromtxt(join(data_dir, 'output/corrmats', '{0}-session-{1}_{2}-{3}_{4}-corrmat.csv'.format(subject, session, task, condition, atlas)), delimiter=' ') else: corrmat = np.genfromtxt(join(data_dir, 'output/corrmats', '{0}-session-{1}-{2}_network_corrmat_{3}.csv'.format(subject, session, task, atlas)), delimiter=',') edges[subject] = corrmat[node1][node2] #post_retr_conn.at[subject] = np.ravel(corrmat, order='F') except Exception as e: errors[subject] = e return edges, errors fig_dir = '/Users/kbottenh/Dropbox/Projects/physics-retrieval/figures' data_dir = '/Users/kbottenh/Dropbox/Projects/physics-retrieval/data/' roi_dir = '/Users/kbottenh/Dropbox/Data/templates/shen2015/' masks = {'shen2015': '/Users/kbottenh/Dropbox/Projects/physics-retrieval/shen2015_2mm_268_parcellation.nii.gz', 'craddock2012': '/Users/kbottenh/Dropbox/Projects/physics-retrieval/craddock2012_tcorr05_2level_270_2mm.nii.gz'} bx_dir = '/Users/kbottenh/Dropbox/Projects/physics-retrieval/data/rescored' b_df = pd.read_csv(join(data_dir, 'rescored', 'physics_learning-local_efficiency-BayesianImpute.csv'), index_col=0, header=0) drop = b_df.filter(regex='.*lEff.*').columns b_df.drop(drop, axis=1, inplace=True) subgraphs = glob(join(data_dir, 'output', '*sig_edges.csv')) subgraphs b_df.head() iqs = ['VCI', 'WMI', 'PSI', 'PRI', 'FSIQ'] for iq in iqs: b_df['delta{0}'.format(iq)] = b_df['{0}2'.format(iq)] - b_df['{0}1'.format(iq)] b_df['delta{0}XSex'.format(iq)] = 
b_df['delta{0}'.format(iq)] * b_df['F'] b_df['{0}2XSex'.format(iq)] = b_df['{0}2'.format(iq)] * b_df['F'] b_df['delta{0}XClass'.format(iq)] = b_df['delta{0}'.format(iq)] * b_df['Mod'] b_df['{0}2XClass'.format(iq)] = b_df['{0}2'.format(iq)] * b_df['Mod'] b_df['SexXClass'] = b_df['F'] * b_df['Mod'] b_df['delta{0}XSexXClass'.format(iq)] = b_df['delta{0}'.format(iq)] * b_df['SexXClass'] b_df['{0}2XSexXClass'.format(iq)] = b_df['{0}2'.format(iq)] * b_df['SexXClass'] husl_pal = sns.husl_palette(h=0, n_colors=268) crayons_l = sns.crayon_palette(['Vivid Tangerine', 'Cornflower']) crayons_d = sns.crayon_palette(['Brick Red', 'Midnight Blue']) grays = sns.light_palette('#999999', n_colors=3, reverse=True) f_2 = sns.crayon_palette(['Red Orange', 'Vivid Tangerine']) m_2 = sns.crayon_palette(['Cerulean', 'Cornflower']) node_size = 15 plt.tight_layout(pad=1.5) for subgraph in subgraphs: regression = subgraph[63:-4] print(regression) keys = regression.split('-') mask = keys[0] condition = 'Physics' task = keys[1].split('_')[0] iq = keys[2].split('_')[0] cov = keys[2].split('_')[1] print(iq) coordinates = plotting.find_parcellation_cut_coords(labels_img=masks[mask]) conns = pd.read_csv(subgraph, index_col=0, header=0) for i in conns.index: conns.at[i, 'x'] = coordinates[i-1][0] conns.at[i, 'y'] = coordinates[i-1][1] conns.at[i, 'z'] = coordinates[i-1][2] conns.set_index([conns.index, 'x', 'y', 'z'], inplace=True) q = plotting.plot_connectome(conns, coordinates, node_size=node_size, edge_threshold=2., edge_vmax=7., edge_cmap='PuOr_r', ) q.savefig( join(fig_dir, '{0}.png'.format(regression)), dpi=300) q.close() conns.dropna(how='all', axis=0, inplace=True) conns.dropna(how='all', axis=1, inplace=True) conns.fillna(0, inplace=True) conns.sort_index(axis=0, ascending=True, inplace=True) conns.sort_index(axis=1, ascending=True, inplace=True) conns.index.rename(['idx', 'x', 'y', 'z'], inplace=True) nodes = list(conns.index.get_level_values('idx').astype(str)) hmap_mask = 
np.triu(np.ones_like(conns.values, dtype=np.bool)) if conns.max().max() > 0: cmap = 'Oranges' one = conns.idxmax(axis=0) two = conns.idxmax(axis=1) else: cmap = 'Purples_r' one = conns.idxmin(axis=0) two = conns.idxmin(axis=1) edges = [] for column in conns.columns: ind = conns[conns[column] != 0].index for i in ind: #print(column, i[0]) edges.append((int(column), i[0])) for edge in edges: print(regression, edge) sns.axes_style('ticks') edges, error = grab_corr(b_df.index, edge, task, 'Physics', '1', mask) scatter_df = pd.concat([edges, b_df[[iq, 'F', 'Mod']]], axis=1) if 'iq' in cov: cov = cov.replace('iq', iq) if 'Sex' in cov: if not 'Class' in cov: print('Sex interaction:', regression) h = sns.lmplot(iq, 'edge', data=scatter_df, hue_order=[0.,1.], hue='F', palette=crayons_d, height=6) h.savefig(join(fig_dir, '{0}-{1}-scatter.png'.format(regression, edge)), dpi=300) plt.close() else: print('SexXClass interaction:', regression) h = sns.lmplot(iq, 'edge', data=scatter_df[scatter_df['F'] == 1], hue_order=[0.,1.], hue='Mod', palette=f_2, height=6) h.savefig(join(fig_dir, '{0}-{1}-scatter-f.png'.format(regression, edge)), dpi=300) h = sns.lmplot(iq, 'edge', data=scatter_df[scatter_df['F'] == 0], hue_order=[0.,1.], hue='Mod', palette=m_2, height=6) h.savefig(join(fig_dir, '{0}-{1}-scatter-m.png'.format(regression, edge)), dpi=300) plt.close() elif 'Class' in cov: if not 'Sex' in cov: print('Class interaction:', regression) h = sns.lmplot(iq, 'edge', data=scatter_df, hue='Mod', hue_order=[0.,1.], palette=grays, height=6) h.savefig(join(fig_dir, '{0}-{1}-scatter.png'.format(regression, edge)), dpi=300) plt.close() else: print('No interaction:', regression) fig1,ax1 = plt.subplots(figsize=(6,6)) sns.regplot(scatter_df[iq], scatter_df['edge'], color='0.5') sns.despine() ax1.set_title('{0}X{1} connectivity'.format(regression, edge)) plt.tight_layout() fig1.savefig(join(fig_dir, '{0}-{1}-scatter.png'.format(regression, edge)), dpi=300) plt.close() sns.axes_style("white") 
fig, ax = plt.subplots() g = sns.heatmap(conns[nodes], mask=hmap_mask, square=True, vmax=7., cmap='PuOr_r', linecolor='1.0', linewidths=1, center=0, cbar_kws={"shrink": .75}) g.set_yticklabels(labels=nodes, rotation=0) g.set_title(regression) fig.savefig(join(fig_dir, '{0}-heatmap.png'.format(regression)), dpi=300) plt.close() rois = None roi_nifti = nib.load(join(roi_dir, 'roi{0}.nii.gz'.format(int(nodes[0])))) roi = roi_nifti.get_fdata() rois = (roi * float(nodes[0])) if len(nodes) > 1: for node in nodes[1:]: roi_nifti = nib.load( join(roi_dir, 'roi{0}.nii.gz'.format(int(node)))) roi = roi_nifti.get_fdata() rois += (roi * int(node)) else: pass rois_nifti = nib.Nifti1Image(rois, roi_nifti.affine) rois_nifti.to_filename(join(data_dir, '{0}_nodes.nii.gz'.format(regression))) h = plotting.plot_glass_brain(rois_nifti, cmap=LinearSegmentedColormap.from_list( husl_pal, husl_pal, N=268), vmin=0, vmax=268) h.savefig(join(fig_dir, '{0}_ROIs.png'.format(regression)), dpi=300) plt.close() h.close() ```
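The `grab_corr` helper above reduces each subject's correlation matrix to a single edge value. The core indexing pattern can be sketched on in-memory matrices (hypothetical data, not the study's corrmat files):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
subjects = ['sub-01', 'sub-02', 'sub-03']

# Hypothetical 5x5 correlation matrices, one per subject, built from random data.
corrmats = {subject: np.corrcoef(rng.standard_normal((5, 5))) for subject in subjects}

# One "edge" is a single cell of each subject's matrix.
node1, node2 = 1, 3
edges = pd.Series({s: corrmats[s][node1][node2] for s in subjects}, name='edge')
print(edges)
```

This yields one value per subject, which is what gets concatenated with the behavioral columns into `scatter_df` above.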
``` import pandas as pd from sklearn import model_selection from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score from sklearn.metrics import cohen_kappa_score from sklearn.svm import SVC from sklearn.ensemble import VotingClassifier # voting ensemble col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label'] # Load dataset pima = pd.read_csv('pima-indians-diabetes.csv') feature_cols = ['pregnant', 'insulin', 'bmi', 'age', 'glucose', 'bp', 'pedigree'] X = pima.iloc[:, 0:7] # Features Y = pima.iloc[:, 8] # Target variable estimators = [] model1 = KNeighborsClassifier(n_neighbors=3) estimators.append(('KNN', model1)) model2 = DecisionTreeClassifier() estimators.append(('cart', model2)) model3 = SVC() estimators.append(('svm', model3)) ensemble = VotingClassifier(estimators) # Voting ensemble eclf1 = ensemble.fit(X, Y) y1 = eclf1.predict(X) accuracy_score(Y, y1) ``` ## Adaboost used in Iris Dataset ``` # Load libraries from sklearn.ensemble import AdaBoostClassifier from sklearn import datasets # Import train_test_split function from sklearn.model_selection import train_test_split # Import scikit-learn metrics module for accuracy calculations from sklearn import metrics # Load data iris = datasets.load_iris() x = iris.data # Features y = iris.target # Class labels # # Split dataset into training set and test set # X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) # Create AdaBoost classifier object abc = AdaBoostClassifier(n_estimators = 50) # Train AdaBoost classifier model = abc.fit(x, y) # Predict the response y_pred = model.predict(x) # Model Accuracy, how often is the classifier correct? 
print("Accuracy", metrics.accuracy_score(y, y_pred)) ``` ## Adaboost ``` # Load libraries from sklearn.ensemble import AdaBoostClassifier # Getting the features and labels col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label'] # Load dataset pima = pd.read_csv('pima-indians-diabetes.csv') feature_cols = ['pregnant', 'insulin', 'bmi', 'age', 'glucose', 'bp', 'pedigree'] X = pima.iloc[:, 0:7] # Features Y = pima.iloc[:, 8] # Target variable # Import Support Vector Classifier from sklearn.svm import SVC # from sklearn import metrics svc = SVC(probability = True, kernel = 'linear') # Create AdaBoost classifier object with SVC as the base estimator abc = AdaBoostClassifier(n_estimators = 50, base_estimator = svc) # Train AdaBoost classifier model = abc.fit(X, Y) # Predict the response y_pred = model.predict(X) # Model Accuracy, how often is the classifier correct? print("Accuracy", metrics.accuracy_score(Y, y_pred)) ```
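Note that the scripts above score each model on the same data it was trained on, which overstates accuracy. A hedged sketch of held-out evaluation, with `make_classification` standing in for the Pima CSV (synthetic data, not the diabetes dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in with 7 features, mirroring the shape of the Pima feature set.
X, y = make_classification(n_samples=400, n_features=7, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

abc = AdaBoostClassifier(n_estimators=50, random_state=0)
model = abc.fit(X_train, y_train)  # fit on the training split only

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(train_acc, test_acc)  # the test-set score is the honest estimate
```

The gap between the two numbers is a quick read on overfitting; reporting only the training score hides it.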
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/distance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/distance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=FeatureCollection/distance.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/distance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium. ``` import subprocess try: import geehydro except ImportError: print('geehydro package not installed. 
Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro']) ``` Import libraries ``` import ee import folium import geehydro ``` Authenticate and initialize the Earth Engine API. You only need to authenticate the Earth Engine API once. ``` try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ``` ## Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ``` Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ``` ## Add Earth Engine Python script ``` # Collection.distance example. # Computes the distance to the nearest feature in a collection. # Construct a FeatureCollection from a list of geometries. fc = ee.FeatureCollection([ ee.Geometry.Point(-72.94411, 41.32902), ee.Geometry.Point(-72.94411, 41.33402), ee.Geometry.Point(-72.94411, 41.33902), # The geometries do not need to be the same type. ee.Geometry.LineString( -72.93411, 41.30902, -72.93411, 41.31902, -72.94411, 41.31902) ]) # Compute distance from the features, to a max of 1000 meters. distance = fc.distance(1000, 100) Map.setCenter(-72.94, 41.32, 13) Map.addLayer(distance, {'min': 0, 'max': 1000, 'palette': ['yellow', 'red']}, 'distance') Map.addLayer(fc, {}, 'Features') ``` ## Display Earth Engine data layers ``` Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ```
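The `fc.distance` call rasterizes, for each pixel, the distance to the nearest feature in the collection. For intuition, the nearest-feature distance at a single location can be sketched in plain Python with a haversine search over the point features above (a conceptual sketch, not the Earth Engine algorithm, and ignoring the LineString geometry):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# The three point features from the collection above.
points = [(-72.94411, 41.32902), (-72.94411, 41.33402), (-72.94411, 41.33902)]

def distance_to_nearest(lon, lat):
    return min(haversine_m(lon, lat, plon, plat) for plon, plat in points)

d = distance_to_nearest(-72.94, 41.32)
print(round(d), 'meters to the closest of the three points')
```

Earth Engine does this per pixel over the whole raster (capped at the 1000 m `searchRadius`), which is what the yellow-to-red layer visualizes.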
# Resample Data ## Pandas Resample You've learned about bucketing to different periods of time, like months. Let's see how it's done. We'll start with an example series of days. ``` import numpy as np import pandas as pd dates = pd.date_range('10/10/2018', periods=11, freq='D') close_prices = np.arange(len(dates)) close = pd.Series(close_prices, dates) close ``` Let's say we want to bucket these days into 3-day periods. To do that, we'll use the [DataFrame.resample](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.resample.html) function. The first parameter in this function is a string called `rule`, which is a representation of how to resample the data. This string representation is made using an offset alias. You can find a list of them [here](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases). To create 3-day periods, we'll set `rule` to "3D". ``` close.resample('3D') ``` This returns a `DatetimeIndexResampler` object. It's an intermediate object similar to the `GroupBy` object. Just like group by, it breaks the original data into groups. That means we'll have to apply an operation to these groups. Let's make it simple and get the first element from each group. ``` close.resample('3D').first() ``` You might notice that this is the same as `.iloc[::3]` ``` close.iloc[::3] ``` So, why use the `resample` function instead of `.iloc[::3]` or the `groupby` function? The `resample` function shines when handling time and/or date specific tasks. In fact, you can't use this function if the index isn't a [time-related class](https://pandas.pydata.org/pandas-docs/version/0.21/timeseries.html#overview). ``` try: # Attempt resample on a series without a time index pd.Series(close_prices).resample('W') except TypeError: print('It threw a TypeError.') else: print('It worked.') ``` One of the resampling tasks it can help with is resampling on periods, like weeks. Let's resample `close` from its daily frequency to a weekly frequency. 
We'll use the "W" offset alias, which stands for weeks. ``` pd.DataFrame({ 'days': close, 'weeks': close.resample('W').first()}) ``` The weeks offset considers the start of a week on a Monday. Since 2018-10-10 is a Wednesday, the first group only looks at the first 5 items. There are offsets that handle more complicated problems, like filtering for holidays. For now, we'll only worry about resampling for days, weeks, months, quarters, and years. The frequency you want the data to be in will depend on how often you'll be trading. If you're making trade decisions based on reports that come out at the end of the year, we might only care about a frequency of years or months. ## OHLC Now that you've seen how Pandas resamples time series data, we can apply this to Open, High, Low, and Close (OHLC). Pandas provides the [`Resampler.ohlc`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.ohlc.html#pandas.core.resample.Resampler.ohlc) function, which will convert any resampling frequency to OHLC data. Let's get the weekly OHLC. ``` close.resample('W').ohlc() ``` Can you spot a potential problem with that? It has to do with resampling data that has already been resampled. We're getting the OHLC from close data. If we want OHLC data from already resampled data, we should resample the first price from the open data, resample the highest price from the high data, etc. To get the weekly closing prices from `close`, you can use the [`Resampler.last`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.last.html#pandas.core.resample.Resampler.last) function. ``` close.resample('W').last() ``` ## Quiz Implement the `days_to_weeks` function to resample OHLC price data to weekly OHLC price data. You can find more Resampler functions [here](https://pandas.pydata.org/pandas-docs/version/0.21.0/api.html#id44) for calculating high and low prices. 
``` import quiz_tests def days_to_weeks(open_prices, high_prices, low_prices, close_prices): """Converts daily OHLC prices to weekly OHLC prices. Parameters ---------- open_prices : DataFrame Daily open prices for each ticker and date high_prices : DataFrame Daily high prices for each ticker and date low_prices : DataFrame Daily low prices for each ticker and date close_prices : DataFrame Daily close prices for each ticker and date Returns ------- open_prices_weekly : DataFrame Weekly open prices for each ticker and date high_prices_weekly : DataFrame Weekly high prices for each ticker and date low_prices_weekly : DataFrame Weekly low prices for each ticker and date close_prices_weekly : DataFrame Weekly close prices for each ticker and date """ open_prices_weekly = open_prices.resample('W').first() high_prices_weekly = high_prices.resample('W').max() low_prices_weekly = low_prices.resample('W').min() close_prices_weekly = close_prices.resample('W').last() return open_prices_weekly, high_prices_weekly, low_prices_weekly, close_prices_weekly quiz_tests.test_days_to_weeks(days_to_weeks) ``` ## Quiz Solution If you're having trouble, you can check out the quiz solution [here](resample_data_solution.ipynb).
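The course's `quiz_tests` module isn't available outside the classroom, so here is a hedged, self-contained check of the same per-component resampling, with hypothetical open/high/low spreads built around `close`:

```python
import numpy as np
import pandas as pd

dates = pd.date_range('10/10/2018', periods=11, freq='D')
close = pd.Series(np.arange(len(dates), dtype=float), dates)

# Hypothetical intraday spreads around the daily close.
open_ = close + 0.1
high = close + 0.5
low = close - 0.5

# The same per-component resampling as days_to_weeks.
weekly = pd.DataFrame({
    'open': open_.resample('W').first(),
    'high': high.resample('W').max(),
    'low': low.resample('W').min(),
    'close': close.resample('W').last(),
})
print(weekly)
```

Each weekly bar satisfies the OHLC invariants (high is the bar's maximum, low its minimum), which the naive `close.resample('W').ohlc()` shortcut cannot guarantee for true intraday highs and lows.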
# Linear Support Vector Regressor with PowerTransformer This code template is for a regression task using the Linear Support Vector Regressor (LinearSVR), based on the Support Vector Machine algorithm, with PowerTransformer as the feature transformation technique in a pipeline. ### Required Packages ``` import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as se from sklearn.preprocessing import PowerTransformer from sklearn.pipeline import make_pipeline from sklearn.model_selection import train_test_split from imblearn.over_sampling import RandomOverSampler from sklearn.svm import LinearSVR from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error warnings.filterwarnings('ignore') ``` ### Initialization Filepath of the CSV file ``` #filepath file_path="" ``` List of features required for model training. ``` #x_values features=[] ``` Target feature for prediction. ``` #y_values target='' ``` ### Data fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use pandas to read the CSV file from its storage path, and the head function to display the initial rows. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y. ``` X=df[features] Y=df[target] ``` ### Data preprocessing Since the majority of the machine learning models in the Sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist. 
They also encode string categorical data into integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) # performing data splitting ``` ### Model Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies inputted new cases based on the hyperplane. In 2-dimensional space, this hyperplane is a line separating a plane into two segments, with each class or group on either side. LinearSVR is similar to SVR with kernel=’linear’. 
It has more flexibility in the choice of tuning parameters and is suited for large samples. #### Model Tuning Parameters 1. epsilon : float, default=0.0 > Epsilon parameter in the epsilon-insensitive loss function. 2. loss : {‘epsilon_insensitive’, ‘squared_epsilon_insensitive’}, default=’epsilon_insensitive’ > Specifies the loss function. ‘epsilon_insensitive’ is the standard epsilon-insensitive (L1) loss, while ‘squared_epsilon_insensitive’ is its square (L2 loss). 3. C : float, default=1.0 > Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. 4. tol : float, default=1e-4 > Tolerance for stopping criteria. 5. dual : bool, default=True > Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features. #### Feature Transformation PowerTransformer applies a power transform featurewise to make data more Gaussian-like. Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html) ``` model=make_pipeline(PowerTransformer(),LinearSVR()) model.fit(x_train, y_train) ``` #### Model Accuracy We will use the trained model to make predictions on the test set. Then we use the predicted values for measuring the accuracy of our model. > **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction. 
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```

> **r2_score**: The **r2_score** function computes the percentage of variability in the target that is explained by our model.

> **mae**: The **mean absolute error** function calculates the total error (the absolute average distance between the real data and the predicted data) of our model.

> **mse**: The **mean squared error** function squares the errors (penalizing the model for large errors).

```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```

#### Prediction Plot

We plot the first 20 actual test observations (in green) against the model's predictions for the same records (in red).

```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```

#### Creator: Anu Rithiga B, Github: [Profile](https://github.com/iamgrootsh7)
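As a quick sanity check on what these three metrics compute, the following sketch derives R², MAE and MSE by hand and confirms they match scikit-learn. The numbers are made up for illustration, not taken from this dataset.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# Hypothetical ground-truth values and model predictions, for illustration only.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

# R^2 = 1 - SS_res / SS_tot: the fraction of target variance explained.
ss_res = np.sum((y_true - y_hat) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_manual = 1 - ss_res / ss_tot

# MAE: average absolute distance between truth and prediction.
mae_manual = np.mean(np.abs(y_true - y_hat))

# MSE: average squared distance, penalizing large errors more heavily.
mse_manual = np.mean((y_true - y_hat) ** 2)

# The hand-derived values agree with the library functions.
assert np.isclose(r2_manual, r2_score(y_true, y_hat))
assert np.isclose(mae_manual, mean_absolute_error(y_true, y_hat))
assert np.isclose(mse_manual, mean_squared_error(y_true, y_hat))
print(r2_manual, mae_manual, mse_manual)
```

Note how MSE (0.375 here) weights the single error of 1.0 more heavily than MAE (0.5) does, which is why MSE is preferred when large errors are especially costly.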
##### Copyright 2021 The TensorFlow Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Simple TFX Pipeline for Vertex Pipelines

<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
<td><a href="https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Run in Google Cloud AI Platform Notebook</a></td>
</table></div>

This notebook-based tutorial will create a simple TFX pipeline and run it using Google Cloud Vertex Pipelines. This notebook is based on the TFX pipeline we built in [Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple). If you are not familiar with TFX and you have not read that tutorial yet, you should read it before proceeding with this notebook.

Google Cloud Vertex Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow in a serverless manner. You can define your ML pipelines using Python with TFX, and then execute your pipelines on Google Cloud. See [Vertex Pipelines introduction](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) to learn more about Vertex Pipelines.

This notebook is intended to be run on [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) or on [AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks). If you are not using one of these, you can simply click the "Run in Google Colab" button above.

## Set up

Before you run this notebook, ensure that you have the following:
- A [Google Cloud Platform](http://cloud.google.com/) project.
- A [Google Cloud Storage](https://cloud.google.com/storage) bucket. See [the guide for creating buckets](https://cloud.google.com/storage/docs/creating-buckets).
- Enable [Vertex AI and Cloud Storage API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,storage-component.googleapis.com).

Please see [Vertex documentation](https://cloud.google.com/vertex-ai/docs/pipelines/configure-project) to configure your GCP project further.

### Install python packages

We will install required Python packages including TFX and KFP to author ML pipelines and submit jobs to Vertex Pipelines.

```
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade tfx==0.30.0 kfp==1.6.1
```

#### Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime by clicking the "RESTART RUNTIME" button above or using the "Runtime > Restart runtime ..." menu. This is because of the way that Colab loads packages.

If you are not on Colab, you can restart the runtime with the following cell.

```
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
  # Automatically restart kernel after installs
  import IPython
  app = IPython.Application.instance()
  app.kernel.do_shutdown(True)
```

### Log in to Google for this notebook

If you are running this notebook on Colab, authenticate with your user account:

```
import sys
if 'google.colab' in sys.modules:
  from google.colab import auth
  auth.authenticate_user()
```

**If you are on AI Platform Notebooks**, authenticate with Google Cloud before running the next section, by running

```sh
gcloud auth login
```

**in the Terminal window** (which you can open via **File** > **New** in the menu). You only need to do this once per notebook instance.

Check the package versions.

```
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
```

### Set up variables

We will set up some variables used to customize the pipelines below. The following information is required:

* GCP Project id. See [Identifying your project id](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects).
* GCP Region to run pipelines. For more information about the regions that Vertex Pipelines is available in, see the [Vertex AI locations guide](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
* Google Cloud Storage Bucket to store pipeline outputs.

**Enter required values in the cell below before running it**.
```
GOOGLE_CLOUD_PROJECT = ''     # <--- ENTER THIS
GOOGLE_CLOUD_REGION = ''      # <--- ENTER THIS
GCS_BUCKET_NAME = ''          # <--- ENTER THIS

if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
    from absl import logging
    logging.error('Please set all required parameters.')
```

Set `gcloud` to use your project.

```
!gcloud config set project {GOOGLE_CLOUD_PROJECT}

PIPELINE_NAME = 'penguin-vertex-pipelines'

# Path to various pipeline artifacts.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(
    GCS_BUCKET_NAME, PIPELINE_NAME)

# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(
    GCS_BUCKET_NAME, PIPELINE_NAME)

# Paths for input data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)

# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
    GCS_BUCKET_NAME, PIPELINE_NAME)

print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
```

### Prepare example data

We will use the same [Palmer Penguins dataset](https://allisonhorst.github.io/palmerpenguins/articles/intro.html) as the [Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple).

There are four numeric features in this dataset which were already normalized to have range [0,1]. We will build a classification model which predicts the `species` of penguins.

We need to make our own copy of the dataset. Because TFX ExampleGen reads inputs from a directory, we need to create a directory and copy the dataset to it on GCS.

```
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
```

Take a quick look at the CSV file.

```
!gsutil cat {DATA_ROOT}/penguins_processed.csv | head
```

## Create a pipeline

TFX pipelines are defined using Python APIs. We will define a pipeline which consists of three components: CsvExampleGen, Trainer and Pusher.
The pipeline and model definition is almost the same as the [Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple). The only difference is that we don't need to set `metadata_connection_config`, which is used to locate the [ML Metadata](https://www.tensorflow.org/tfx/guide/mlmd) database. Because Vertex Pipelines uses a managed metadata service, users don't need to take care of it, and we don't need to specify the parameter.

Before actually defining the pipeline, we need to write the model code for the Trainer component first.

### Write model code.

We will use the same model code as in the [Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple).

```
_trainer_module_file = 'penguin_trainer.py'
```

```
%%writefile {_trainer_module_file}

# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple

from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils

from tfx import v1 as tfx
from tfx_bsl.public import tfxio

from tensorflow_metadata.proto.v0 import schema_pb2

_FEATURE_KEYS = [
    'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'

_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10

# Since we're not generating or creating a schema, we will instead create
# a feature spec.  Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
    **{
        feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
        for feature in _FEATURE_KEYS
    },
    _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}


def _input_fn(file_pattern: List[str],
              data_accessor: tfx.components.DataAccessor,
              schema: schema_pb2.Schema,
              batch_size: int) -> tf.data.Dataset:
  """Generates features and label for training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    schema: schema of the input data.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
      dictionary of Tensors, and indices is a single Tensor of label indices.
  """
  return data_accessor.tf_dataset_factory(
      file_pattern,
      tfxio.TensorFlowDatasetOptions(
          batch_size=batch_size, label_key=_LABEL_KEY),
      schema=schema).repeat()


def _make_keras_model() -> tf.keras.Model:
  """Creates a DNN Keras model for classifying penguin data.

  Returns:
    A Keras Model.
  """
  # The model below is built with Functional API, please refer to
  # https://www.tensorflow.org/guide/keras/overview for all API options.
  inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
  d = keras.layers.concatenate(inputs)
  for _ in range(2):
    d = keras.layers.Dense(8, activation='relu')(d)
  outputs = keras.layers.Dense(3)(d)

  model = keras.Model(inputs=inputs, outputs=outputs)
  model.compile(
      optimizer=keras.optimizers.Adam(1e-2),
      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      metrics=[keras.metrics.SparseCategoricalAccuracy()])

  model.summary(print_fn=logging.info)
  return model


# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """

  # This schema is usually either an output of SchemaGen or a manually-curated
  # version provided by the pipeline author. A schema can also be derived from a
  # TFT graph if a Transform component is used. In the case when either is missing,
  # `schema_from_feature_spec` could be used to generate schema from very simple
  # feature_spec, but the schema returned would be very primitive.
  schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)

  train_dataset = _input_fn(
      fn_args.train_files,
      fn_args.data_accessor,
      schema,
      batch_size=_TRAIN_BATCH_SIZE)
  eval_dataset = _input_fn(
      fn_args.eval_files,
      fn_args.data_accessor,
      schema,
      batch_size=_EVAL_BATCH_SIZE)

  model = _make_keras_model()
  model.fit(
      train_dataset,
      steps_per_epoch=fn_args.train_steps,
      validation_data=eval_dataset,
      validation_steps=fn_args.eval_steps)

  # The result of the training should be saved in `fn_args.serving_model_dir`
  # directory.
  model.save(fn_args.serving_model_dir, save_format='tf')
```

Copy the module file to GCS, from where it can be accessed by the pipeline components. Because model training happens on GCP, we need to upload this model definition. Otherwise, you might want to build a container image including the module file and use the image to run the pipeline.

```
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
```

### Write a pipeline definition

We will define a function to create a TFX pipeline.

```
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified because we don't need `metadata_path` argument.

def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                     module_file: str, serving_model_dir: str,
                     ) -> tfx.dsl.Pipeline:
  """Creates a three component penguin pipeline with TFX."""
  # Brings data into the pipeline.
  example_gen = tfx.components.CsvExampleGen(input_base=data_root)

  # Uses user-provided Python function that trains a model.
  trainer = tfx.components.Trainer(
      module_file=module_file,
      examples=example_gen.outputs['examples'],
      train_args=tfx.proto.TrainArgs(num_steps=100),
      eval_args=tfx.proto.EvalArgs(num_steps=5))

  # Pushes the model to a filesystem destination.
  pusher = tfx.components.Pusher(
      model=trainer.outputs['model'],
      push_destination=tfx.proto.PushDestination(
          filesystem=tfx.proto.PushDestination.Filesystem(
              base_directory=serving_model_dir)))

  # Following three components will be included in the pipeline.
  components = [
      example_gen,
      trainer,
      pusher,
  ]

  return tfx.dsl.Pipeline(
      pipeline_name=pipeline_name,
      pipeline_root=pipeline_root,
      components=components)
```

## Run the pipeline on Vertex Pipelines.

We used `LocalDagRunner`, which runs in a local environment, in the [Simple TFX Pipeline Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple). TFX provides multiple orchestrators to run your pipeline. In this tutorial we will use Vertex Pipelines together with the Kubeflow V2 dag runner.

We need to define a runner to actually run the pipeline. You will compile your pipeline into the pipeline definition format using TFX APIs.

```
import os

PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'

runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename=PIPELINE_DEFINITION_FILE)
# Following function will write the pipeline definition to PIPELINE_DEFINITION_FILE.
_ = runner.run(
    _create_pipeline(
        pipeline_name=PIPELINE_NAME,
        pipeline_root=PIPELINE_ROOT,
        data_root=DATA_ROOT,
        module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
        serving_model_dir=SERVING_MODEL_DIR))
```

The generated definition file can be submitted using the kfp client.

```
# docs_infra: no_execute
from kfp.v2.google import client

pipelines_client = client.AIPlatformClient(
    project_id=GOOGLE_CLOUD_PROJECT,
    region=GOOGLE_CLOUD_REGION,
)

_ = pipelines_client.create_run_from_job_spec(PIPELINE_DEFINITION_FILE)
```

Now you can visit the link in the output above or visit 'Vertex AI > Pipelines' in [Google Cloud Console](https://console.cloud.google.com/) to see the progress.
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png)

### Egeria Hands-On Lab
# Welcome to the Understanding Server Configuration Lab

## Introduction

Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.

Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.

As each configuration call is made, the server platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.

The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.

In this hands-on lab you will learn about the contents of configuration documents.

## The scenario

[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).

![Gary Geeke](https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png)

Gary's userId is `garygeeke`.
```
adminUserId = "garygeeke"
```

In the [Egeria Server Configuration](../egeria-server-config.ipynb) lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:

![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)
> **Figure 1:** Coco Pharmaceuticals' OMAG Server Platforms

The following command checks that the platforms and servers are running.

```
%run ../common/environment-check.ipynb
```

----
If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/).

Once the platform is running you are ready to proceed.

In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.

```
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
```

----
What follows are descriptions and coded requests to extract different parts of the configuration.

## Retrieve configuration for cocoMDS1 - Data Lake Operations metadata server

The command below retrieves the configuration document for `cocoMDS1`. It's a big document so we will not display its full contents at this time.
```
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId

print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)

response = requests.get(url)

if response.status_code == 200:
    print("Server configuration for " + mdrServerName + " has been retrieved")
else:
    print("Server configuration for " + mdrServerName + " is unavailable")

serverConfig=response.json().get('omagserverConfig')
```

----
The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.

```
auditTrail=serverConfig.get('auditTrail')

print (" ")
if auditTrail == None:
    print ("Empty configuration - no audit trail - configure the server before continuing")
else:
    print ("Audit Trail: ")
    for x in range(len(auditTrail)):
        print (auditTrail[x])
```

----
The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server.

----
### Server names and identifiers

A server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events to help locate the origin of metadata.
```
print (" ")

serverName=serverConfig.get('localServerName')
if serverName != None:
    print ("Server name: " + serverName)

serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
    print ("Server GUID: " + serverGUID)

serverType=serverConfig.get('localServerType')
if serverType != None:
    print ("Server Type: " + serverType)

organization=serverConfig.get('organizationName')
if organization != None:
    print ("Organization: " + organization)
```

----
In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.

```
print (" ")

repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
    repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
    if repositoryConfig != None:
        localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
        if localMetadataCollectionId != None:
            print ("Local metadata collection id: " + localMetadataCollectionId)
        localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
        if localMetadataCollectionName != None:
            print ("Local metadata collection name: " + localMetadataCollectionName)
```

----
Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.

By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.

The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
```
print (" ")

serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
    print ("Server URL root: " + serverURLRoot)

if repositoryConfig != None:
    localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
    if localRepositoryRemoteConnection != None:
        endpoint = localRepositoryRemoteConnection.get('endpoint')
        if endpoint != None:
            fullURLEndpoint = endpoint.get('address')
            if fullURLEndpoint != None:
                print ("Full URL endpoint: " + fullURLEndpoint)

print (" ")
```

You will notice that the platform's specific network address is used in both values.

Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting the server URL root to this name before the local repository is configured, or by updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.

The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.

```
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
```

----
The repository services running in a metadata server use a number of connectors to access the resources they need. The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake; the connector to this repository is defined in the **local repository's local connection**. ODPi Egeria supports two types of repositories.
One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.

The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.

```
print (" ")

if repositoryServicesConfig != None:
    auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
    enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
    cohortConfigList = repositoryServicesConfig.get('cohortConfigList')

if auditLogConnections != None:
    print ("Audit Log Destinations: ")
    for logDestCount in range(len(auditLogConnections)):
        auditLogConnection = auditLogConnections[logDestCount]
        if auditLogConnection != None:
            connectorType = auditLogConnection.get('connectorType')
            if connectorType != None:
                description = connectorType.get('description')
                if description != None:
                    print (str(logDestCount+1) + ". description: " + description)
                connectorProviderClassName = connectorType.get('connectorProviderClassName')
                if connectorProviderClassName != None:
                    print ("   className: " + connectorProviderClassName)

if repositoryConfig != None:
    localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')

    print (" ")
    if localRepositoryLocalConnection != None:
        print ("Local Repository's Local Connection: ")
        connectorType = localRepositoryLocalConnection.get('connectorType')
        if connectorType != None:
            description = connectorType.get('description')
            if description != None:
                print ("  description: " + description)
            connectorProviderClassName = connectorType.get('connectorProviderClassName')
            if connectorProviderClassName != None:
                print ("  className: " + connectorProviderClassName)
```

----
### Configuring security

There are two levels of security to set up for an ODPi Egeria server: authentication and authorization.

#### Authentication of servers and people

ODPi Egeria recommends that each server has its own identity and that it is embedded with each request as part of the transport level security (TLS). The members of the cohort (and the event topic) then grant access to each other and no-one else.

The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab, as Gary's userId `garygeeke` appears on each request.

The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
```
print (" ")

localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
    print ("local Server UserId: " + localServerUserId)

localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
    print ("local Server Password: " + localServerPassword)
```

----
#### Authorization of metadata requests

ODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request. This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector on the following call ...

```
print (" ")

serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
    print ("Server's Security Connection:")
    prettyResponse = json.dumps(serverSecurityConnection, indent=4)
    print (prettyResponse)

print (" ")
```

----
### Setting up the event bus

The server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.

The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.

```
print (" ")

eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
    print ("Event Bus Configuration:")
    prettyResponse = json.dumps(eventBusConfig, indent=4)
    print (prettyResponse)

print (" ")
```

----
### Extracting the descriptions of the open metadata repository cohorts for the server

An open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. For Coco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.
![Figure 2](../images/coco-pharmaceuticals-systems-cohorts.png)
> **Figure 2:** Membership of Coco Pharmaceuticals' cohorts

You can see this in the configuration below.

```
print (" ")

if cohortConfigList != None:
    print ("Cohort(s) that this server is a member of: ")
    for cohortCount in range(len(cohortConfigList)):
        cohortConfig = cohortConfigList[cohortCount]
        if cohortConfig != None:
            cohortName = cohortConfig.get('cohortName')
            print (str(cohortCount+1) + ". name: " + cohortName)
            cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
            if cohortRegistryConnection != None:
                print ("   Cohort Registry Connection: ")
                connectorType = cohortRegistryConnection.get('connectorType')
                if connectorType != None:
                    description = connectorType.get('description')
                    if description != None:
                        print ("     description: " + description)
                    connectorProviderClassName = connectorType.get('connectorProviderClassName')
                    if connectorProviderClassName != None:
                        print ("     className: " + connectorProviderClassName)
            topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
            if topicConnection != None:
                print ("   Cohort Topic Connection: ")
                connectorType = topicConnection.get('connectorType')
                if connectorType != None:
                    description = connectorType.get('description')
                    if description != None:
                        print ("     description: " + description)
                    connectorProviderClassName = connectorType.get('connectorProviderClassName')
                    if connectorProviderClassName != None:
                        print ("     className: " + connectorProviderClassName)
```

----
### Reviewing the configured access services

Open Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.
To query the choice of access services available in the platform, use the following command:

```
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)

response = requests.get(url)

prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
```

----
The `cocoMDS1` server is for the data lake operations. It needs the access services that support the onboarding and decommissioning of assets, along with the access services that support the different engines that maintain the data lake.

```
print (" ")

accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
    print ("Configured Access Services: ")
    print (" ")
    for accessServiceCount in range(len(accessServiceConfig)):
        accessServiceDefinition = accessServiceConfig[accessServiceCount]
        if accessServiceDefinition != None:
            accessServiceName = accessServiceDefinition.get('accessServiceName')
            accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
            if accessServiceName != None:
                print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))

print (" ")
```

----
### Listing the topics used by a server

Both the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
```
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
    for cohortCount in range(len(cohortConfigList)):
        cohortConfig = cohortConfigList[cohortCount]
        if cohortConfig != None:
            topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
            if topicConnection != None:
                embeddedConnections = topicConnection.get('embeddedConnections')
                if embeddedConnections != None:
                    for connCount in range(len(embeddedConnections)):
                        embeddedConnection = embeddedConnections[connCount]
                        if embeddedConnection != None:
                            eventBusConnection = embeddedConnection.get('embeddedConnection')
                            if eventBusConnection != None:
                                endpoint = eventBusConnection.get('endpoint')
                                if endpoint != None:
                                    topicName = endpoint.get('address')
                                    if topicName != None:
                                        print ("  " + topicName)
if accessServiceConfig != None:
    for accessServiceCount in range(len(accessServiceConfig)):
        accessService = accessServiceConfig[accessServiceCount]
        if accessService != None:
            eventBusConnection = accessService.get('accessServiceInTopic')
            if eventBusConnection != None:
                endpoint = eventBusConnection.get('endpoint')
                if endpoint != None:
                    topicName = endpoint.get('address')
                    if topicName != None:
                        print ("  " + topicName)
            eventBusConnection = accessService.get('accessServiceOutTopic')
            if eventBusConnection != None:
                endpoint = eventBusConnection.get('endpoint')
                if endpoint != None:
                    topicName = endpoint.get('address')
                    if topicName != None:
                        print ("  " + topicName)
print (" ")
```

----

### Controlling the volume of metadata exchange in a single REST call

To ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested.

The variable below defines the value that will be added to the configuration document for each server.
```
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
    print ("Maximum records returned on a REST call: " + str(maxPageSize))
```

----

Finally, here is the complete configuration document.

```
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
```

----
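The deeply nested topic-extraction logic shown earlier can also be condensed into one small recursive helper. This is only a sketch, not part of the Egeria client APIs — `collect_topic_names` is an illustrative name — but it gathers the same endpoint `address` values from any nested configuration structure:

```python
def collect_topic_names(config):
    """Recursively walk a nested dict/list configuration structure
    and collect every endpoint 'address' value (i.e. topic name)."""
    names = []
    if isinstance(config, dict):
        for key, value in config.items():
            if key == 'endpoint' and isinstance(value, dict):
                address = value.get('address')
                if address is not None:
                    names.append(address)
            # recurse into every value to reach embedded connections
            names.extend(collect_topic_names(value))
    elif isinstance(config, list):
        for item in config:
            names.extend(collect_topic_names(item))
    return names
```

Calling `collect_topic_names(serverConfig)` would list every topic in one pass instead of hand-coding each nesting level.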
# Obtaining priority data from WIPO PatentScope

**Version**: Dec 16 2020

Reference: [Web Scraping using Selenium and Python](https://www.scrapingbee.com/blog/selenium-python/)

## Import the package.

```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException
```

## Set options for Google Chrome and create a Chrome WebDriver.

```
# set chrome options
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# create a chrome webdriver
driver = webdriver.Chrome('/usr/bin/chromedriver', options=chrome_options)

# wait for the requested element x seconds before throwing an error
# tried to debug the NoSuchElementException on obtaining priority data (see below)
#driver.implicitly_wait(10)
```

## Navigate to the target webpage.

```
driver.get('https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2001029057')
```

## Search for the priority data based on the HTML identifier.

You can use various HTML element identifiers such as tag name, class name, ID, XPath, etc. To find this:

1. Open the target page in a browser,
2. Inspect webpage elements (access on Windows with Ctrl-Shift-C), and
3. Locate in the HTML code the class name (`class="X"`), ID name (`id="X"`), or other identifier corresponding to the data of interest.

### Here's a search by id.

```
search_id = "detailMainForm:PCTBIBLIO_content"
try:
    mydata = driver.find_element_by_id(search_id)
    print(mydata.text)
except NoSuchElementException as e:
    print(e)
    print("The request is invalid, or there is no biblio data")
```

### Here's a search by class name.

The `div` tag of this class is within the `div` tag of the above id search. The output is nearly identical. The only thing this output does NOT have is the last line: "Latest bibliographic data on file with the International Bureau".
```
search_class = "ps-biblio-data"
try:
    mydata = driver.find_element_by_class_name(search_class)
    print(mydata.text)
except NoSuchElementException as e:
    print(e)
    print("The request is invalid, or there is no biblio data")
```

### Here's a further subset of the data.

```
search_class = "ps-biblio-data--biblio-card"
try:
    mydata = driver.find_element_by_class_name(search_class)
    print(mydata.text)
except NoSuchElementException as e:
    print(e)
    print("The request is invalid, or there is no biblio data")
```

### We then home in on the "Priority Data" and search by its corresponding id.

This part doesn't seem to work consistently. Sometimes it returns the expected output of:

```
09/418,640 15.10.1999 US
```

Most times, however, I get a `NoSuchElementException` from Selenium.

```
search_id = "detailMainForm:pctBiblio:j_idt3405"
try:
    mydata = driver.find_element_by_id(search_id)
    print(mydata.text)
except NoSuchElementException as e:
    print(e)
    print("The request is invalid, or there is no biblio data")
```

## When finished, exit the browser session.

```
driver.quit()
```
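One way to make the intermittent priority-data lookup above more robust is to retry the element search a few times before giving up (Selenium's `WebDriverWait` with `expected_conditions` is the idiomatic alternative, since the element is probably rendered asynchronously). The helper below is a generic sketch — `retry` is an illustrative name, not a Selenium API — and needs no browser to demonstrate:

```python
import time

def retry(fn, attempts=5, delay=0.5, exceptions=(Exception,)):
    """Call fn repeatedly until it succeeds or attempts run out.
    Re-raises the last exception if every attempt fails."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as e:
            last_error = e
            time.sleep(delay)
    raise last_error
```

Usage against the flaky lookup might then look like `retry(lambda: driver.find_element_by_id(search_id), exceptions=(NoSuchElementException,))`.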
# Derived generators

```
import tohu
from tohu.v6.primitive_generators import *
from tohu.v6.derived_generators import *
from tohu.v6.generator_dispatch import *
from tohu.v6.utils import print_generated_sequence, make_dummy_tuples
from datetime import datetime

#tohu.v6.logging.logger.setLevel('DEBUG')

print(f'Tohu version: {tohu.__version__}')
```

## Apply

```
def add(x, y):
    return (100 * x) + y

g1 = Integer(10, 99).set_tohu_name('g1')
g2 = Integer(10, 99).set_tohu_name('g2')
h = Apply(add, g1, g2).set_tohu_name('h')

g1.reset(seed=11111)
g2.reset(seed=22222)
h.reset(seed=33333)

print_generated_sequence(g1, num=20)
print_generated_sequence(g2, num=20)
print_generated_sequence(h, num=20)
```

## GetAttribute

```
some_tuples = make_dummy_tuples(chars='abcdefghijklmnopqrstuvwxyz')
some_tuples[:5]

g = SelectOne(some_tuples)
print_generated_sequence(g, num=10, sep='\n', seed=12345)

h1 = GetAttribute(g, 'x')
h2 = GetAttribute(g, 'y')

g.reset(seed=12345)
print_generated_sequence(g, num=10, sep='\n', seed=12345)
print_generated_sequence(h1, num=10)
print_generated_sequence(h2, num=10)
```

## Lookup

```
mapping = {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i'}

g = Integer(1, 6)
h = Lookup(g, mapping)

g.reset(seed=12345)
print_generated_sequence(g, num=20)
print_generated_sequence(h, num=20)
```

## SelectOne

```
values = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
g = SelectOne(values)
print_generated_sequence(g, num=20, seed=12345)
```

By default, all values are chosen with equal probability. This can be changed by passing the argument `p`.
```
g = SelectOne(values, p=[0.05, 0.05, 0.05, 0.05, 0.7, 0.05, 0.05])
print_generated_sequence(g, num=20, seed=12345)
```

## SelectMultiple

```
values = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
n_vals = Integer(1, 5)
g = SelectMultiple(values, n_vals)

n_vals.reset(seed=11111)
g.reset(seed=99999)
print_generated_sequence(g, num=10, sep='\n')
```

## Integer

```
aa = Constant(10)
bb = Integer(100, 200)
g = Integer(low=aa, high=bb)

aa.reset(seed=11111)
bb.reset(seed=22222)
print_generated_sequence(g, num=20, seed=99999)
```

## Cumsum

```
aa = Incremental(start=100, step=4)
print_generated_sequence(aa, num=20, seed=11111)

g = Cumsum(aa, start_with_zero=True)
g.reset_input_generators(seed=None)
g.reset()
print_generated_sequence(g, num=20)

g = Cumsum(aa, start_with_zero=False)
g.reset_input_generators(seed=None)
g.reset()
print_generated_sequence(g, num=20)
```

## Timestamp

```
g_start = Constant(datetime(2018, 1, 1, 11, 22, 33))
g_end = Timestamp(start="2018-02-10", end="2018-02-20")
g = Timestamp(start=g_start, end=g_end)
print(type(next(g)))

g_start.reset(seed=11111)
g_end.reset(seed=22222)
print_generated_sequence(g, num=10, sep='\n', seed=99999)

g = Timestamp(start=g_start, end=g_end).strftime("%-d %b %Y, %H:%M (%a)")
type(next(g))

g_start.reset(seed=11111)
g_end.reset(seed=22222)
print_generated_sequence(g, num=10, sep='\n', seed=99999)
```

## Tee

```
aa = Integer(100, 200)
bb = Integer(300, 400)
cc = Integer(low=aa, high=bb)
nn = Integer(1, 3)
g = Tee(cc, num=nn)

g.reset_input_generators(seed=11111)
print_generated_sequence(g, num=10, seed=99999, sep='\n')
```
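The `Apply` pattern demonstrated at the top of this notebook can be sketched in plain Python, independent of tohu. The class names below are illustrative stand-ins, not tohu internals; they just show the core idea of a derived generator pulling one value from each input generator and combining them:

```python
import random

class Primitive:
    """Minimal stand-in for a tohu primitive generator: yields
    random integers, reproducible via reset(seed)."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.rng = random.Random()

    def reset(self, seed):
        self.rng.seed(seed)
        return self

    def __next__(self):
        return self.rng.randint(self.low, self.high)

class Apply:
    """Derived generator: applies fn to the next value of each input."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def __next__(self):
        return self.fn(*(next(g) for g in self.inputs))

g1 = Primitive(10, 99).reset(seed=11111)
g2 = Primitive(10, 99).reset(seed=22222)
h = Apply(lambda x, y: 100 * x + y, g1, g2)
print([next(h) for _ in range(5)])
```

Resetting the inputs with the same seeds regenerates the same derived sequence, mirroring tohu's reproducibility guarantee.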
```
!git clone https://github.com/PhysicsTeacher13/NFT-Image-Generator.git
cd NFT-Image-Generator/

from PIL import Image
from IPython.display import display
import random
import json

# Each image is made up of a series of traits
# The weightings for each trait drive the rarity and add up to 100%

background = ["Blue", "Orange", "Purple", "Red", "Yellow"]
background_weights = [30, 40, 15, 5, 10]

circle = ["Blue", "Green", "Orange", "Red", "Yellow"]
circle_weights = [30, 40, 15, 5, 10]

square = ["Blue", "Green", "Orange", "Red", "Yellow"]
square_weights = [30, 40, 15, 5, 10]

# Dictionary variable for each trait.
# Each trait corresponds to its file name

background_files = {
    "Blue": "blue",
    "Orange": "orange",
    "Purple": "purple",
    "Red": "red",
    "Yellow": "yellow",
}

circle_files = {
    "Blue": "blue-circle",
    "Green": "green-circle",
    "Orange": "orange-circle",
    "Red": "red-circle",
    "Yellow": "yellow-circle"
}

square_files = {
    "Blue": "blue-square",
    "Green": "green-square",
    "Orange": "orange-square",
    "Red": "red-square",
    "Yellow": "yellow-square"
}

## Generate Traits

TOTAL_IMAGES = 30 # Number of random unique images we want to generate

all_images = []

# A recursive function to generate unique image combinations
def create_new_image():
    new_image = {}

    # For each trait category, select a random trait based on the weightings
    new_image["Background"] = random.choices(background, background_weights)[0]
    new_image["Circle"] = random.choices(circle, circle_weights)[0]
    new_image["Square"] = random.choices(square, square_weights)[0]

    if new_image in all_images:
        return create_new_image()
    else:
        return new_image

# Generate the unique combinations based on trait weightings
for i in range(TOTAL_IMAGES):
    new_trait_image = create_new_image()
    all_images.append(new_trait_image)

# Returns true if all images are unique
def all_images_unique(all_images):
    seen = list()
    return not any(i in seen or seen.append(i) for i in all_images)

print("Are all images unique?", all_images_unique(all_images))

# Add token Id to each image
i = 0
for item in all_images:
    item["tokenId"] = i
    i = i + 1
print(all_images)

# Get Trait Counts
background_count = {}
for item in background:
    background_count[item] = 0

circle_count = {}
for item in circle:
    circle_count[item] = 0

square_count = {}
for item in square:
    square_count[item] = 0

for image in all_images:
    background_count[image["Background"]] += 1
    circle_count[image["Circle"]] += 1
    square_count[image["Square"]] += 1

print(background_count)
print(circle_count)
print(square_count)

#### Generate Metadata for all Traits
METADATA_FILE_NAME = './metadata/all-traits.json'
with open(METADATA_FILE_NAME, 'w') as outfile:
    json.dump(all_images, outfile, indent=4)

#### Generate Images
for item in all_images:
    im1 = Image.open(f'./trait-layers/backgrounds/{background_files[item["Background"]]}.jpg').convert('RGBA')
    im2 = Image.open(f'./trait-layers/circles/{circle_files[item["Circle"]]}.png').convert('RGBA')
    im3 = Image.open(f'./trait-layers/squares/{square_files[item["Square"]]}.png').convert('RGBA')

    # Create each composite
    com1 = Image.alpha_composite(im1, im2)
    com2 = Image.alpha_composite(com1, im3)

    # Convert to RGB
    rgb_im = com2.convert('RGB')
    file_name = str(item["tokenId"]) + ".png"
    rgb_im.save("./images/" + file_name)

#### Generate Metadata for each Image
f = open('./metadata/all-traits.json')
data = json.load(f)

IMAGES_BASE_URI = "ADD_IMAGES_BASE_URI_HERE"
PROJECT_NAME = "ADD_PROJECT_NAME_HERE"

def getAttribute(key, value):
    return {
        "trait_type": key,
        "value": value
    }

for i in data:
    token_id = i['tokenId']
    token = {
        "image": IMAGES_BASE_URI + str(token_id) + '.png',
        "tokenId": token_id,
        "name": PROJECT_NAME + ' ' + str(token_id),
        "attributes": []
    }
    token["attributes"].append(getAttribute("Background", i["Background"]))
    token["attributes"].append(getAttribute("Circle", i["Circle"]))
    token["attributes"].append(getAttribute("Square", i["Square"]))

    with open('./metadata/' + str(token_id), 'w') as outfile:
        json.dump(token, outfile, indent=4)
f.close()

ls
cd images
ls
cd ..

import os
files_targets = os.listdir('images/')
print(files_targets)

import shutil
from google.colab import files
shutil.make_archive('images', 'zip', 'images')
files.download('images.zip')
print("File images.zip Downloaded!")
```
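The recursive uniqueness check in `create_new_image` above can also be written iteratively with a set of seen combinations, which avoids deep recursion when many duplicates are drawn. This is only a sketch — `generate_unique_images` is an illustrative helper, and it would loop forever if `total` exceeded the number of possible trait combinations:

```python
import random

background = ["Blue", "Orange", "Purple", "Red", "Yellow"]
background_weights = [30, 40, 15, 5, 10]
circle = ["Blue", "Green", "Orange", "Red", "Yellow"]
circle_weights = [30, 40, 15, 5, 10]

def generate_unique_images(total, rng=random):
    """Draw weighted trait combinations until `total` unique ones exist."""
    seen = set()
    images = []
    while len(images) < total:
        combo = (rng.choices(background, background_weights)[0],
                 rng.choices(circle, circle_weights)[0])
        if combo in seen:
            continue  # duplicate draw: just try again
        seen.add(combo)
        images.append({"Background": combo[0], "Circle": combo[1]})
    return images

images = generate_unique_images(10)
```

Passing a seeded `random.Random` instance as `rng` makes the generated collection reproducible.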
<div align="center"> <h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png">&nbsp;<a href="https://madewithml.com/">Made With ML</a></h1> Applied ML · MLOps · Production <br> Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML. <br> </div> <br> <div align="center"> <a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>&nbsp; <a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>&nbsp; <a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>&nbsp; <a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a> <br> 🔥&nbsp; Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub </div> <br> <hr> # Attention In this lesson we will learn how to incorporate attention mechanisms to create more context-aware representations. 
<div align="left"> <a target="_blank" href="https://madewithml.com/courses/foundations/attention/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>&nbsp; <a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/14_Attention.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&amp;message=View%20On%20GitHub&amp;color=586069&amp;logo=github&amp;labelColor=2f363d"></a>&nbsp; <a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/14_Attention.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> </div> # Overview In the <a target="_blank" href="https://madewithml.com/courses/foundations/recurrent-neural-networks/">RNN lesson</a>, we were constrained to using the representation at the very end but what if we could give contextual weight to each encoded input ($h_i$) when making our prediction? This is also preferred because it can help mitigate the vanishing gradient issue which stems from processing very long sequences. Below is attention applied to the outputs from an RNN. In theory, the outputs can come from anywhere where we want to learn how to weight amongst them but since we're working with the context of an RNN from the previous lesson , we'll continue with that. <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/attention/attention.png" width="500"> </div> $ \alpha = softmax(W_{attn}h) $ $c_t = \sum_{i=1}^{n} \alpha_{t,i}h_i $ *where*: * $ h $ = RNN outputs (or any group of outputs you want to attend to) $\in \mathbb{R}^{NXMXH}$ ($N$ is the batch size, $M$ is the max sequence length in the batch, $H$ is the hidden dim) * $ \alpha_{t,i} $ = alignment function for output $ y_t $ using input $ h_i $ (we also concatenate other useful representations with $h_i$ here). In our case, this would be the attention value to attribute to each input $h_i$. 
* $W_{attn}$ = attention weights to learn $\in \mathbb{R}^{HX1}$. We can also apply activation functions, transformations, etc. here * $c_t$ = context vector that accounts for the different inputs with attention. We can pass this context vector to downstream processes. * **Objective:** At its core, attention is about learning how to weigh a group of encoded representations to produce a context-aware representation to use for downstream tasks. This is done by learning a set of attention weights and then using softmax to create attention values that sum to 1. * **Advantages:** * Learn how to account for the appropriate encoded representations regardless of position. * **Disadvantages:** * Another compute step that involves learning weights. * **Miscellaneous:** * Several state-of-the-art approaches extend basic attention to deliver highly context-aware representations (ex. self-attention). # Set up ``` import numpy as np import pandas as pd import random import torch import torch.nn as nn SEED = 1234 def set_seeds(seed=1234): """Set seeds for reproducibility.""" np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) # multi-GPU # Set seeds for reproducibility set_seeds(seed=SEED) # Set device cuda = True device = torch.device('cuda' if ( torch.cuda.is_available() and cuda) else 'cpu') torch.set_default_tensor_type('torch.FloatTensor') if device.type == 'cuda': torch.set_default_tensor_type('torch.cuda.FloatTensor') print (device) ``` ## Load data We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`). ``` import numpy as np import pandas as pd import re import urllib # Load data url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv" df = pd.read_csv(url,
header=0) # load df = df.sample(frac=1).reset_index(drop=True) # shuffle df.head() ``` ## Preprocessing We're going to clean up our input data first with operations such as lowercasing text, removing stop (filler) words, and filtering with regular expressions. ``` import nltk from nltk.corpus import stopwords from nltk.stem import PorterStemmer import re nltk.download('stopwords') STOPWORDS = stopwords.words('english') print (STOPWORDS[:5]) porter = PorterStemmer() def preprocess(text, stopwords=STOPWORDS): """Conditional preprocessing on our text unique to our task.""" # Lower text = text.lower() # Remove stopwords pattern = re.compile(r'\b(' + r'|'.join(stopwords) + r')\b\s*') text = pattern.sub('', text) # Remove words in parentheses text = re.sub(r'\([^)]*\)', '', text) # Spacing and filters text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text) text = re.sub('[^A-Za-z0-9]+', ' ', text) # remove non alphanumeric chars text = re.sub(' +', ' ', text) # remove multiple spaces text = text.strip() return text # Sample text = "Great week for the NYSE!"
preprocess(text=text) # Apply to dataframe preprocessed_df = df.copy() preprocessed_df.title = preprocessed_df.title.apply(preprocess) print (f"{df.title.values[0]}\n\n{preprocessed_df.title.values[0]}") ``` ## Split data ``` import collections from sklearn.model_selection import train_test_split TRAIN_SIZE = 0.7 VAL_SIZE = 0.15 TEST_SIZE = 0.15 def train_val_test_split(X, y, train_size): """Split dataset into data splits.""" X_train, X_, y_train, y_ = train_test_split(X, y, train_size=TRAIN_SIZE, stratify=y) X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_) return X_train, X_val, X_test, y_train, y_val, y_test # Data X = preprocessed_df["title"].values y = preprocessed_df["category"].values # Create data splits X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split( X=X, y=y, train_size=TRAIN_SIZE) print (f"X_train: {X_train.shape}, y_train: {y_train.shape}") print (f"X_val: {X_val.shape}, y_val: {y_val.shape}") print (f"X_test: {X_test.shape}, y_test: {y_test.shape}") print (f"Sample point: {X_train[0]} → {y_train[0]}") ``` ## LabelEncoder Next we'll define a `LabelEncoder` to encode our text labels into unique indices ``` import itertools class LabelEncoder(object): """Label encoder for tag labels.""" def __init__(self, class_to_index={}): self.class_to_index = class_to_index self.index_to_class = {v: k for k, v in self.class_to_index.items()} self.classes = list(self.class_to_index.keys()) def __len__(self): return len(self.class_to_index) def __str__(self): return f"<LabelEncoder(num_classes={len(self)})>" def fit(self, y): classes = np.unique(y) for i, class_ in enumerate(classes): self.class_to_index[class_] = i self.index_to_class = {v: k for k, v in self.class_to_index.items()} self.classes = list(self.class_to_index.keys()) return self def encode(self, y): encoded = np.zeros((len(y)), dtype=int) for i, item in enumerate(y): encoded[i] = self.class_to_index[item] return encoded def decode(self, y): classes 
= [] for i, item in enumerate(y): classes.append(self.index_to_class[item]) return classes def save(self, fp): with open(fp, 'w') as fp: contents = {'class_to_index': self.class_to_index} json.dump(contents, fp, indent=4, sort_keys=False) @classmethod def load(cls, fp): with open(fp, 'r') as fp: kwargs = json.load(fp=fp) return cls(**kwargs) # Encode label_encoder = LabelEncoder() label_encoder.fit(y_train) NUM_CLASSES = len(label_encoder) label_encoder.class_to_index # Convert labels to tokens print (f"y_train[0]: {y_train[0]}") y_train = label_encoder.encode(y_train) y_val = label_encoder.encode(y_val) y_test = label_encoder.encode(y_test) print (f"y_train[0]: {y_train[0]}") # Class weights counts = np.bincount(y_train) class_weights = {i: 1.0/count for i, count in enumerate(counts)} print (f"counts: {counts}\nweights: {class_weights}") ``` ## Tokenizer We'll define a `Tokenizer` to convert our text input data into token indices. ``` import json from collections import Counter from more_itertools import take class Tokenizer(object): def __init__(self, char_level, num_tokens=None, pad_token='<PAD>', oov_token='<UNK>', token_to_index=None): self.char_level = char_level self.separator = '' if self.char_level else ' ' if num_tokens: num_tokens -= 2 # pad + unk tokens self.num_tokens = num_tokens self.pad_token = pad_token self.oov_token = oov_token if not token_to_index: token_to_index = {pad_token: 0, oov_token: 1} self.token_to_index = token_to_index self.index_to_token = {v: k for k, v in self.token_to_index.items()} def __len__(self): return len(self.token_to_index) def __str__(self): return f"<Tokenizer(num_tokens={len(self)})>" def fit_on_texts(self, texts): if not self.char_level: texts = [text.split(" ") for text in texts] all_tokens = [token for text in texts for token in text] counts = Counter(all_tokens).most_common(self.num_tokens) self.min_token_freq = counts[-1][1] for token, count in counts: index = len(self) self.token_to_index[token] = index 
self.index_to_token[index] = token return self def texts_to_sequences(self, texts): sequences = [] for text in texts: if not self.char_level: text = text.split(' ') sequence = [] for token in text: sequence.append(self.token_to_index.get( token, self.token_to_index[self.oov_token])) sequences.append(np.asarray(sequence)) return sequences def sequences_to_texts(self, sequences): texts = [] for sequence in sequences: text = [] for index in sequence: text.append(self.index_to_token.get(index, self.oov_token)) texts.append(self.separator.join([token for token in text])) return texts def save(self, fp): with open(fp, 'w') as fp: contents = { 'char_level': self.char_level, 'oov_token': self.oov_token, 'token_to_index': self.token_to_index } json.dump(contents, fp, indent=4, sort_keys=False) @classmethod def load(cls, fp): with open(fp, 'r') as fp: kwargs = json.load(fp=fp) return cls(**kwargs) # Tokenize tokenizer = Tokenizer(char_level=False, num_tokens=5000) tokenizer.fit_on_texts(texts=X_train) VOCAB_SIZE = len(tokenizer) print (tokenizer) # Sample of tokens print (take(5, tokenizer.token_to_index.items())) print (f"least freq token's freq: {tokenizer.min_token_freq}") # use this to adjust num_tokens # Convert texts to sequences of indices X_train = tokenizer.texts_to_sequences(X_train) X_val = tokenizer.texts_to_sequences(X_val) X_test = tokenizer.texts_to_sequences(X_test) preprocessed_text = tokenizer.sequences_to_texts([X_train[0]])[0] print ("Text to indices:\n" f" (preprocessed) → {preprocessed_text}\n" f" (tokenized) → {X_train[0]}") ``` ## Padding We'll need to do 2D padding to our tokenized text. 
``` def pad_sequences(sequences, max_seq_len=0): """Pad sequences to max length in sequence.""" max_seq_len = max(max_seq_len, max(len(sequence) for sequence in sequences)) padded_sequences = np.zeros((len(sequences), max_seq_len)) for i, sequence in enumerate(sequences): padded_sequences[i][:len(sequence)] = sequence return padded_sequences # 2D sequences padded = pad_sequences(X_train[0:3]) print (padded.shape) print (padded) ``` ## Datasets We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits. ``` class Dataset(torch.utils.data.Dataset): def __init__(self, X, y): self.X = X self.y = y def __len__(self): return len(self.y) def __str__(self): return f"<Dataset(N={len(self)})>" def __getitem__(self, index): X = self.X[index] y = self.y[index] return [X, len(X), y] def collate_fn(self, batch): """Processing on a batch.""" # Get inputs batch = np.array(batch, dtype=object) X = batch[:, 0] seq_lens = batch[:, 1] y = np.stack(batch[:, 2], axis=0) # Pad inputs X = pad_sequences(sequences=X) # Cast X = torch.LongTensor(X.astype(np.int32)) seq_lens = torch.LongTensor(seq_lens.astype(np.int32)) y = torch.LongTensor(y.astype(np.int32)) return X, seq_lens, y def create_dataloader(self, batch_size, shuffle=False, drop_last=False): return torch.utils.data.DataLoader( dataset=self, batch_size=batch_size, collate_fn=self.collate_fn, shuffle=shuffle, drop_last=drop_last, pin_memory=True) # Create datasets train_dataset = Dataset(X=X_train, y=y_train) val_dataset = Dataset(X=X_val, y=y_val) test_dataset = Dataset(X=X_test, y=y_test) print ("Datasets:\n" f" Train dataset:{train_dataset.__str__()}\n" f" Val dataset: {val_dataset.__str__()}\n" f" Test dataset: {test_dataset.__str__()}\n" "Sample point:\n" f" X: {train_dataset[0][0]}\n" f" seq_len: {train_dataset[0][1]}\n" f" y: {train_dataset[0][2]}") # Create dataloaders batch_size = 64 train_dataloader = train_dataset.create_dataloader( batch_size=batch_size) val_dataloader = 
val_dataset.create_dataloader( batch_size=batch_size) test_dataloader = test_dataset.create_dataloader( batch_size=batch_size) batch_X, batch_seq_lens, batch_y = next(iter(train_dataloader)) print ("Sample batch:\n" f" X: {list(batch_X.size())}\n" f" seq_lens: {list(batch_seq_lens.size())}\n" f" y: {list(batch_y.size())}\n" "Sample point:\n" f" X: {batch_X[0]}\n" f" seq_len: {batch_seq_lens[0]}\n" f" y: {batch_y[0]}") ``` ## Trainer Let's create the `Trainer` class that we'll use to facilitate training for our experiments. ``` class Trainer(object): def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None): # Set params self.model = model self.device = device self.loss_fn = loss_fn self.optimizer = optimizer self.scheduler = scheduler def train_step(self, dataloader): """Train step.""" # Set model to train mode self.model.train() loss = 0.0 # Iterate over train batches for i, batch in enumerate(dataloader): # Step batch = [item.to(self.device) for item in batch] # Set device inputs, targets = batch[:-1], batch[-1] self.optimizer.zero_grad() # Reset gradients z = self.model(inputs) # Forward pass J = self.loss_fn(z, targets) # Define loss J.backward() # Backward pass self.optimizer.step() # Update weights # Cumulative Metrics loss += (J.detach().item() - loss) / (i + 1) return loss def eval_step(self, dataloader): """Validation or test step.""" # Set model to eval mode self.model.eval() loss = 0.0 y_trues, y_probs = [], [] # Iterate over val batches with torch.no_grad(): for i, batch in enumerate(dataloader): # Step batch = [item.to(self.device) for item in batch] # Set device inputs, y_true = batch[:-1], batch[-1] z = self.model(inputs) # Forward pass J = self.loss_fn(z, y_true).item() # Cumulative Metrics loss += (J - loss) / (i + 1) # Store outputs y_prob = torch.sigmoid(z).cpu().numpy() y_probs.extend(y_prob) y_trues.extend(y_true.cpu().numpy()) return loss, np.vstack(y_trues), np.vstack(y_probs) def predict_step(self, dataloader): 
"""Prediction step.""" # Set model to eval mode self.model.eval() y_probs = [] # Iterate over val batches with torch.no_grad(): for i, batch in enumerate(dataloader): # Forward pass w/ inputs inputs, targets = batch[:-1], batch[-1] y_prob = self.model(inputs, apply_softmax=True) # Store outputs y_probs.extend(y_prob) return np.vstack(y_probs) def train(self, num_epochs, patience, train_dataloader, val_dataloader): best_val_loss = np.inf for epoch in range(num_epochs): # Steps train_loss = self.train_step(dataloader=train_dataloader) val_loss, _, _ = self.eval_step(dataloader=val_dataloader) self.scheduler.step(val_loss) # Early stopping if val_loss < best_val_loss: best_val_loss = val_loss best_model = self.model _patience = patience # reset _patience else: _patience -= 1 if not _patience: # 0 print("Stopping early!") break # Logging print( f"Epoch: {epoch+1} | " f"train_loss: {train_loss:.5f}, " f"val_loss: {val_loss:.5f}, " f"lr: {self.optimizer.param_groups[0]['lr']:.2E}, " f"_patience: {_patience}" ) return best_model ``` # Attention Attention applied to the outputs from an RNN. In theory, the outputs can come from anywhere where we want to learn how to weight amongst them but since we're working with the context of an RNN from the previous lesson , we'll continue with that. <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/attention/attention.png" width="500"> </div> $ \alpha = softmax(W_{attn}h) $ $c_t = \sum_{i=1}^{n} \alpha_{t,i}h_i $ *where*: * $ h $ = RNN outputs (or any group of outputs you want to attend to) $\in \mathbb{R}^{NXMXH}$ ($N$ is the batch size, $M$ is the max length of each sequence in the batch, $H$ is the hidden dim) * $ \alpha_{t,i} $ = alignment function for output $ y_t $ using input $ h_i $ (we also concatenate other useful representations with $h_i$ here). In our case, this would be the attention value to attribute to each input $h_i$. 
* $W_{attn}$ = attention weights to learn $\in \mathbb{R}^{HX1}$. We can also apply activations functions, transformations, etc. here * $c_t$ = context vector that accounts for the different inputs with attention. We can pass this context vector to downstream processes. ``` import torch.nn.functional as F ``` The RNN will create an encoded representation for each word in our input resulting in a stacked vector that has dimensions $NXMXH$, where N is the # of samples in the batch, M is the max sequence length in the batch, and H is the number of hidden units in the RNN. ``` BATCH_SIZE = 64 SEQ_LEN = 8 EMBEDDING_DIM = 100 RNN_HIDDEN_DIM = 128 # Embed x = torch.rand((BATCH_SIZE, SEQ_LEN, EMBEDDING_DIM)) # Encode rnn = nn.RNN(EMBEDDING_DIM, RNN_HIDDEN_DIM, batch_first=True) out, h_n = rnn(x) # h_n is the last hidden state print ("out: ", out.shape) print ("h_n: ", h_n.shape) # Attend attn = nn.Linear(RNN_HIDDEN_DIM, 1) e = attn(out) attn_vals = F.softmax(e.squeeze(2), dim=1) c = torch.bmm(attn_vals.unsqueeze(1), out).squeeze(1) print ("e: ", e.shape) print ("attn_vals: ", attn_vals.shape) print ("attn_vals[0]: ", attn_vals[0]) print ("sum(attn_vals[0]): ", sum(attn_vals[0])) print ("c: ", c.shape) # Predict fc1 = nn.Linear(RNN_HIDDEN_DIM, NUM_CLASSES) output = F.softmax(fc1(c), dim=1) print ("output: ", output.shape) ``` > In a many-to-many task such as machine translation, our attentional interface will also account for the encoded representation of token in the output as well (via concatenation) so we can know which encoded inputs to attend to based on the encoded output we're focusing on. For more on this, be sure to explore <a target="_blank" href="https://arxiv.org/abs/1409.0473">Bahdanau's attention paper</a>. ## Model Now let's create our RNN based model but with the addition of the attention layer on top of the RNN's outputs. 
``` RNN_HIDDEN_DIM = 128 DROPOUT_P = 0.1 HIDDEN_DIM = 100 class RNN(nn.Module): def __init__(self, embedding_dim, vocab_size, rnn_hidden_dim, hidden_dim, dropout_p, num_classes, padding_idx=0): super(RNN, self).__init__() # Initialize embeddings self.embeddings = nn.Embedding( embedding_dim=embedding_dim, num_embeddings=vocab_size, padding_idx=padding_idx) # RNN self.rnn = nn.RNN(embedding_dim, rnn_hidden_dim, batch_first=True) # Attention self.attn = nn.Linear(rnn_hidden_dim, 1) # FC weights self.dropout = nn.Dropout(dropout_p) self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim) self.fc2 = nn.Linear(hidden_dim, num_classes) def forward(self, inputs, apply_softmax=False): # Embed x_in, seq_lens = inputs x_in = self.embeddings(x_in) # Encode out, h_n = self.rnn(x_in) # Attend e = self.attn(out) attn_vals = F.softmax(e.squeeze(2), dim=1) c = torch.bmm(attn_vals.unsqueeze(1), out).squeeze(1) # Predict z = self.fc1(c) z = self.dropout(z) y_pred = self.fc2(z) if apply_softmax: y_pred = F.softmax(y_pred, dim=1) return y_pred # Simple RNN cell model = RNN( embedding_dim=EMBEDDING_DIM, vocab_size=VOCAB_SIZE, rnn_hidden_dim=RNN_HIDDEN_DIM, hidden_dim=HIDDEN_DIM, dropout_p=DROPOUT_P, num_classes=NUM_CLASSES) model = model.to(device) # set device print (model.named_parameters) ``` ## Training ``` from torch.optim import Adam NUM_LAYERS = 1 LEARNING_RATE = 1e-4 PATIENCE = 10 NUM_EPOCHS = 50 # Define Loss class_weights_tensor = torch.Tensor(list(class_weights.values())).to(device) loss_fn = nn.CrossEntropyLoss(weight=class_weights_tensor) # Define optimizer & scheduler optimizer = Adam(model.parameters(), lr=LEARNING_RATE) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( optimizer, mode='min', factor=0.1, patience=3) # Trainer module trainer = Trainer( model=model, device=device, loss_fn=loss_fn, optimizer=optimizer, scheduler=scheduler) # Train best_model = trainer.train( NUM_EPOCHS, PATIENCE, train_dataloader, val_dataloader) ``` ## Evaluation ``` import json from 
sklearn.metrics import precision_recall_fscore_support def get_performance(y_true, y_pred, classes): """Per-class performance metrics.""" # Performance performance = {"overall": {}, "class": {}} # Overall performance metrics = precision_recall_fscore_support(y_true, y_pred, average="weighted") performance["overall"]["precision"] = metrics[0] performance["overall"]["recall"] = metrics[1] performance["overall"]["f1"] = metrics[2] performance["overall"]["num_samples"] = np.float64(len(y_true)) # Per-class performance metrics = precision_recall_fscore_support(y_true, y_pred, average=None) for i in range(len(classes)): performance["class"][classes[i]] = { "precision": metrics[0][i], "recall": metrics[1][i], "f1": metrics[2][i], "num_samples": np.float64(metrics[3][i]), } return performance # Get predictions test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader) y_pred = np.argmax(y_prob, axis=1) # Determine performance performance = get_performance( y_true=y_test, y_pred=y_pred, classes=label_encoder.classes) print (json.dumps(performance['overall'], indent=2)) # Save artifacts from pathlib import Path dir = Path("rnn") dir.mkdir(parents=True, exist_ok=True) label_encoder.save(fp=Path(dir, "label_encoder.json")) tokenizer.save(fp=Path(dir, "tokenizer.json")) torch.save(best_model.state_dict(), Path(dir, "model.pt")) with open(Path(dir, "performance.json"), "w") as fp: json.dump(performance, indent=2, sort_keys=False, fp=fp) ``` ## Inference ``` def get_probability_distribution(y_prob, classes): """Create a dict of class probabilities from an array.""" results = {} for i, class_ in enumerate(classes): results[class_] = np.float64(y_prob[i]) sorted_results = {k: v for k, v in sorted( results.items(), key=lambda item: item[1], reverse=True)} return sorted_results # Load artifacts device = torch.device("cpu") label_encoder = LabelEncoder.load(fp=Path(dir, "label_encoder.json")) tokenizer = Tokenizer.load(fp=Path(dir, "tokenizer.json")) model = RNN( 
embedding_dim=EMBEDDING_DIM, vocab_size=VOCAB_SIZE, rnn_hidden_dim=RNN_HIDDEN_DIM, hidden_dim=HIDDEN_DIM, dropout_p=DROPOUT_P, num_classes=NUM_CLASSES) model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device)) model.to(device) # Initialize trainer trainer = Trainer(model=model, device=device) # Dataloader text = "The final tennis tournament starts next week." X = tokenizer.texts_to_sequences([preprocess(text)]) print (tokenizer.sequences_to_texts(X)) y_filler = label_encoder.encode([label_encoder.classes[0]]*len(X)) dataset = Dataset(X=X, y=y_filler) dataloader = dataset.create_dataloader(batch_size=batch_size) # Inference y_prob = trainer.predict_step(dataloader) y_pred = np.argmax(y_prob, axis=1) label_encoder.decode(y_pred) # Class distributions prob_dist = get_probability_distribution(y_prob=y_prob[0], classes=label_encoder.classes) print (json.dumps(prob_dist, indent=2)) ``` # Interpretability Let's use the attention values to see which encoded tokens were most useful in predicting the appropriate label. ``` import collections import seaborn as sns class InterpretAttn(nn.Module): def __init__(self, embedding_dim, vocab_size, rnn_hidden_dim, hidden_dim, dropout_p, num_classes, padding_idx=0): super(InterpretAttn, self).__init__() # Initialize embeddings self.embeddings = nn.Embedding( embedding_dim=embedding_dim, num_embeddings=vocab_size, padding_idx=padding_idx) # RNN self.rnn = nn.RNN(embedding_dim, rnn_hidden_dim, batch_first=True) # Attention self.attn = nn.Linear(rnn_hidden_dim, 1) # FC weights self.dropout = nn.Dropout(dropout_p) self.fc1 = nn.Linear(rnn_hidden_dim, hidden_dim) self.fc2 = nn.Linear(hidden_dim, num_classes) def forward(self, inputs, apply_softmax=False): # Embed x_in, seq_lens = inputs x_in = self.embeddings(x_in) # Encode out, h_n = self.rnn(x_in) # Attend e = self.attn(out) # could add optional activation function (ex. 
tanh) attn_vals = F.softmax(e.squeeze(2), dim=1) return attn_vals # Initialize model interpretable_model = InterpretAttn( embedding_dim=EMBEDDING_DIM, vocab_size=VOCAB_SIZE, rnn_hidden_dim=RNN_HIDDEN_DIM, hidden_dim=HIDDEN_DIM, dropout_p=DROPOUT_P, num_classes=NUM_CLASSES) interpretable_model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device)) interpretable_model.to(device) # Initialize trainer interpretable_trainer = Trainer(model=interpretable_model, device=device) # Get attention values attn_vals = interpretable_trainer.predict_step(dataloader) print (attn_vals.shape) # (N, max_seq_len) # Visualize attention values over the input tokens sns.set(rc={"figure.figsize":(10, 1)}) tokens = tokenizer.sequences_to_texts(X)[0].split(' ') sns.heatmap(attn_vals, xticklabels=tokens) ``` The word `tennis` was attended to the most to result in the `Sports` label. # Types of attention We'll briefly look at the different types of attention and when to use each of them. ## Soft (global) attention Soft attention is the type of attention we've implemented so far, where we attend to all encoded inputs when creating our context vector. - **advantages**: we always have the ability to attend to all inputs in case something we saw much earlier or see later is crucial for determining the output. - **disadvantages**: if our input sequence is very long, this can lead to expensive compute. ## Hard attention Hard attention focuses on a specific set of the encoded inputs at each time step. - **advantages**: we can save a lot of compute on long sequences by only focusing on a local patch each time. - **disadvantages**: non-differentiable, so we need to use more complex techniques (variance reduction, reinforcement learning, etc.) to train. 
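To make the contrast concrete, here's a toy NumPy sketch (our own illustration, not from the lesson; the helper names are ours): soft attention forms a differentiable, softmax-weighted sum over every timestep, while hard attention makes a discrete, non-differentiable selection.

```
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(scores, h):
    # differentiable: softmax weights over ALL timesteps, then a weighted sum
    alpha = softmax(scores)
    return alpha @ h, alpha

def hard_attention(scores, h):
    # non-differentiable: a discrete choice of a single timestep
    idx = int(np.argmax(scores))
    return h[idx], idx

M, H = 8, 4  # timesteps, hidden dim
rng = np.random.default_rng(0)
h = rng.normal(size=(M, H))   # encoded inputs
scores = rng.normal(size=M)   # unnormalized alignment scores

c_soft, alpha = soft_attention(scores, h)  # blends all timesteps
c_hard, idx = hard_attention(scores, h)    # picks one timestep
print(c_soft.shape, alpha.sum(), idx)
```

Local attention (next section) sits between the two: it restricts the softmax to a window of timesteps around a predicted position, keeping differentiability at a fraction of the cost.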
<div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/attention/soft_attention.png" width="700"> </div> <div align="left"> <small><a href="https://arxiv.org/abs/1502.03044" target="_blank">Show, Attend and Tell: Neural Image Caption Generation with Visual Attention</a></small> </div> ## Local attention [Local attention](https://arxiv.org/abs/1508.04025) blends the advantages of soft and hard attention. It involves learning an aligned position vector and empirically determining a local window of encoded inputs to attend to. - **advantages**: apply attention to a local patch of inputs yet remain differentiable. - **disadvantages**: need to determine the alignment vector for each output but it's a worthwhile trade off to determine the right window of inputs to attend to in order to avoid attending to all of them. <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/attention/local_attention.png" width="700"> </div> <div align="left"> <small><a href="https://arxiv.org/abs/1508.04025" target="_blank">Effective Approaches to Attention-based Neural Machine Translation </a></small> </div> ## Self-attention We can also use attention within the encoded input sequence to create a weighted representation that based on the similarity between input pairs. This will allow us to create rich representations of the input sequence that are aware of the relationships between each other. For example, in the image below you can see that when composing the representation of the token "its", this specific attention head will be incorporating signal from the token "Law" (it's learned that "its" is referring to the "Law"). 
<div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/attention/self_attention.png" width="300"> </div> <div align="left"> <small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small> </div> # Transformers Transformers are a very popular architecture that leverage and extend the concept of self-attention to create very useful representations of our input data for a downstream task. ## Scaled dot-product attention The most popular type of self-attention is scaled dot-product attention from the widely-cited [Attention is all you need](https://arxiv.org/abs/1706.03762) paper. This type of attention involves projecting our encoded input sequences onto three matrices, queries (Q), keys (K) and values (V), whose weights we learn. $ inputs \in \mathbb{R}^{NXMXH} $ ($N$ = batch size, $M$ = sequence length, $H$ = hidden dim) $ Q = XW_q $ where $ W_q \in \mathbb{R}^{HXd_q} $ $ K = XW_k $ where $ W_k \in \mathbb{R}^{HXd_k} $ $ V = XW_v $ where $ W_v \in \mathbb{R}^{HXd_v} $ $ attention (Q, K, V) = softmax( \frac{Q K^{T}}{\sqrt{d_k}} )V \in \mathbb{R}^{MXd_v} $ ## Multi-head attention Instead of applying self-attention only once across the entire encoded input, we can also separate the input and apply self-attention in parallel (heads) to each input section and concatenate them. This allows the different head to learn unique representations while maintaining the complexity since we split the input into smaller subspaces. $ MultiHead(Q, K, V) = concat({head}_1, ..., {head}_{h})W_O $ * ${head}_i = attention(Q_i, K_i, V_i) $ * $h$ = # of self-attention heads * $W_O \in \mathbb{R}^{hd_vXH} $ * $H$ = hidden dim. (or dimension of the model $d_{model}$) ## Positional encoding With self-attention, we aren't able to account for the sequential position of our input tokens. 
To address this, we can use positional encoding to create a representation of the location of each token with respect to the entire sequence. This can either be learned (with weights) or we can use a fixed function that extends to sequence lengths at inference time that were not observed during training. $ PE_{(pos,2i)} = sin({pos}/{10000^{2i/H}}) $ $ PE_{(pos,2i+1)} = cos({pos}/{10000^{2i/H}}) $ where: * $pos$ = position of the token $(1...M)$ * $i$ = hidden dim $(1...H)$ This effectively allows us to represent each token's relative position using a fixed function for very large sequences. And because we've constrained the positional encodings to have the same dimensions as our encoded inputs, we can simply concatenate them before feeding them into the multi-head attention heads. ## Architecture And here's how it all fits together! It's an end-to-end architecture that creates these contextual representations and uses an encoder-decoder structure to predict the outcomes (one-to-one, many-to-one, many-to-many, etc.) Due to the complexity of the architecture, Transformers require massive amounts of data to train without overfitting; however, they can be leveraged as pretrained models and fine-tuned with smaller datasets that are similar to the larger set they were initially trained on. <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/attention/transformer.png" width="800"> </div> <div align="left"> <small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small> </div> > We're not going to implement the Transformer [from scratch](https://nlp.seas.harvard.edu/2018/04/03/attention.html) but we will use the [Hugging Face library](https://github.com/huggingface/transformers) to do so in the [baselines](https://madewithml.com/courses/mlops/baselines/#transformers-w-contextual-embeddings) lesson!
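The two building blocks above can be sketched in NumPy (our own illustrative code: a single head, no learned $W_q$, $W_k$, $W_v$ projections, purely for shape intuition):

```
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    weights = softmax(Q @ K.swapaxes(-2, -1) / np.sqrt(d_k))
    return weights @ V, weights

def positional_encoding(M, H):
    # PE[pos, 2i] = sin(pos / 10000^(2i/H)), PE[pos, 2i+1] = cos(pos / 10000^(2i/H))
    pos = np.arange(M)[:, None]
    two_i = np.arange(0, H, 2)[None, :]
    angles = pos / np.power(10000.0, two_i / H)
    pe = np.zeros((M, H))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

N, M, H = 2, 5, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(N, M, H))                        # encoded inputs
out, weights = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X
pe = positional_encoding(M, H)                        # one (M, H) encoding per position
print(out.shape, weights.shape, pe.shape)
```

Each row of `weights` sums to 1 (one attention distribution per query position), and `pe` has the same trailing dimensions as the encoded inputs, which is what lets us combine the two before the attention heads.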
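The notebook below repeats the same fetch/print/DataFrame sequence for every coin and year. A small helper along these lines (our own sketch; `periods_to_frame` is a hypothetical name, and we assume the period dicts carry the keys the notebook prints) would remove the duplication and make slips like building one coin's frame from another coin's periods much harder:

```
import pandas as pd

def periods_to_frame(periods, prefix):
    # Flatten a list of OHLCV period dicts into a frame with a per-coin date column
    df = pd.DataFrame(periods).drop(
        columns=["time_open", "time_close", "time_period_end"])
    return df.rename({"time_period_start": f"{prefix}_start_date"}, axis="columns")

# Synthetic period shaped like the API rows used in the notebook
periods = [{
    "time_period_start": "2016-01-01T00:00:00", "time_period_end": "2016-01-06T00:00:00",
    "time_open": "2016-01-01T00:01:00", "time_close": "2016-01-05T23:59:00",
    "price_open": 430.0, "price_close": 433.0, "price_low": 425.0, "price_high": 436.0,
    "volume_traded": 1200.5, "trades_count": 300,
}]
btc = periods_to_frame(periods, "btc")
print(btc.columns.tolist())
```

With a helper like this, each coin/year block collapses to one API call plus one `periods_to_frame(...)` call before the yearly `pd.concat`.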
``` import os import requests import json import numpy as np from dotenv import load_dotenv import pandas as pd from pycoingecko import CoinGeckoAPI cg = CoinGeckoAPI() from coinapi_rest_v1.restapi import CoinAPIv1 import datetime, sys load_dotenv() coin_api_key = os.getenv("COIN_API_KEY2") coin_api_key2 = os.getenv("COIN_API_KEY") coin_api_key3 = os.getenv("COIN_API_KEY3") type(coin_api_key) type(coin_api_key2) type(coin_api_key3) api = CoinAPIv1(coin_api_key) api2 = CoinAPIv1(coin_api_key2) api3 = CoinAPIv1(coin_api_key3) # Pulling daily data for rolling visualizations. #Bitcoin 2016 daily pricing from Kraken beg_of_2016 = datetime.date(2016, 1, 1).isoformat() end_of_2016 = datetime.date(2016, 12, 31).isoformat() bitcoin_ohlcv_2016 = api.ohlcv_historical_data( 'KRAKEN_SPOT_BTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2016, 'time_end': end_of_2016}) for period in bitcoin_ohlcv_2016: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) btc_2016 = pd.DataFrame(bitcoin_ohlcv_2016).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'btc_start_date'}, axis='columns') btc_2016 #Ethereum 2016 daily pricing from Kraken beg_of_2016 = datetime.date(2016, 1, 1).isoformat() end_of_2016 = datetime.date(2016, 12, 31).isoformat() ethereum_ohlcv_2016 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ETH_USD', {'period_id': '5DAY', 'time_start': beg_of_2016, 'time_end': end_of_2016}) for period in ethereum_ohlcv_2016: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % 
period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) eth_2016 = pd.DataFrame(ethereum_ohlcv_2016).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'eth_start_date'}, axis='columns') eth_2016 #LiteCoin 2016 daily pricing from Kraken beg_of_2016 = datetime.date(2016, 1, 1).isoformat() end_of_2016 = datetime.date(2016, 12, 31).isoformat() litecoin_ohlcv_2016 = api.ohlcv_historical_data( 'KRAKEN_SPOT_LTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2016, 'time_end': end_of_2016}) for period in litecoin_ohlcv_2016: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) ltc_2016 = pd.DataFrame(litecoin_ohlcv_2016).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'ltc_start_date'}, axis='columns') ltc_2016 #ZCash 2016 daily pricing from Kraken beg_of_2016 = datetime.date(2016, 1, 1).isoformat() end_of_2016 = datetime.date(2016, 12, 31).isoformat() zcash_ohlcv_2016 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ZEC_USD', {'period_id': '5DAY', 'time_start': beg_of_2016, 'time_end': end_of_2016}) for period in zcash_ohlcv_2016: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % 
period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) zec_2016 = pd.DataFrame(zcash_ohlcv_2016).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'zec_start_date'}, axis='columns') zec_2016 # Combine the dataframes into one. merged_2016 = pd.concat( [btc_2016, eth_2016, ltc_2016, zec_2016], axis=1).drop( columns=['price_high', 'price_low', 'trades_count', 'price_open']) merged_2016 # Convert 2016 dataframe to csv file merged_2016.to_csv('2016_5day_data.csv') #Bitcoin 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 31).isoformat() bitcoin_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_BTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in bitcoin_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) btc_2017 = pd.DataFrame(bitcoin_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'btc_start_date'}, axis='columns') btc_2017 #Ethereum 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 
31).isoformat() ethereum_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ETH_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in ethereum_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) eth_2017 = pd.DataFrame(ethereum_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'eth_start_date'}, axis='columns') eth_2017 #Litecoin 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 31).isoformat() litecoin_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_LTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in litecoin_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) ltc_2017 = pd.DataFrame(litecoin_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'ltc_start_date'}, axis='columns') ltc_2017 #Tether 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 
12, 31).isoformat() tether_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_USDT_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in tether_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) usdt_2017 = pd.DataFrame(tether_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'USDT_start_date'}, axis='columns') usdt_2017 #Stellar 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 31).isoformat() stellar_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_XLM_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in stellar_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) xlm_2017 = pd.DataFrame(stellar_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'XLM_start_date'}, axis='columns') xlm_2017 #Ripple 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 
31).isoformat() ripple_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_XRP_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in ripple_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) xrp_2017 = pd.DataFrame(ripple_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'XRP_start_date'}, axis='columns') xrp_2017 #ZCash 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 31).isoformat() zcash_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ZEC_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in zcash_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) zec_2017 = pd.DataFrame(zcash_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'ZEC_start_date'}, axis='columns') zec_2017 #Dash 2017 daily pricing from Kraken beg_of_2017 = datetime.date(2017, 1, 1).isoformat() end_of_2017 = datetime.date(2017, 12, 31).isoformat() 
dash_ohlcv_2017 = api.ohlcv_historical_data( 'KRAKEN_SPOT_DASH_USD', {'period_id': '5DAY', 'time_start': beg_of_2017, 'time_end': end_of_2017}) for period in dash_ohlcv_2017: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) dash_2017 = pd.DataFrame(dash_ohlcv_2017).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'DASH_start_date'}, axis='columns') dash_2017 merged_2017 = pd.concat( [btc_2017, eth_2017, ltc_2017, zec_2017, xlm_2017, xrp_2017, usdt_2017, dash_2017], axis=1).drop( columns=['price_high', 'price_low', 'trades_count', 'price_open']) merged_2017 merged_2017.to_csv('2017_5day_data.csv') #Bitcoin 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() bitcoin_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_BTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in bitcoin_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) btc_2018 = pd.DataFrame(bitcoin_ohlcv_2018).drop( columns=['time_open', 'time_close', 
'time_period_end']).rename( {'time_period_start': 'btc_start_date'}, axis='columns') btc_2018 #Ethereum 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() ethereum_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ETH_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in ethereum_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) eth_2018 = pd.DataFrame(ethereum_ohlcv_2018).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'eth_start_date'}, axis='columns') eth_2018 #Litecoin 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() litecoin_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_LTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in litecoin_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) ltc_2018 = pd.DataFrame(litecoin_ohlcv_2018).drop( columns=['time_open', 
'time_close', 'time_period_end']).rename( {'time_period_start': 'ltc_start_date'}, axis='columns') ltc_2018 #Tether 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() tether_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_USDT_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in tether_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) usdt_2018 = pd.DataFrame(tether_ohlcv_2018).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'USDT_start_date'}, axis='columns') usdt_2018 #Stellar 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() stellar_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_XLM_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in stellar_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) xlm_2018 = pd.DataFrame(stellar_ohlcv_2018).drop( columns=['time_open', 
'time_close', 'time_period_end']).rename( {'time_period_start': 'XLM_start_date'}, axis='columns') xlm_2018 #Ripple 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() ripple_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_XRP_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in ripple_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) xrp_2018 = pd.DataFrame(ripple_ohlcv_2018).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'XRP_start_date'}, axis='columns') xrp_2018 #ZCash 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() zcash_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ZEC_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in zcash_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) zec_2018 = pd.DataFrame(zcash_ohlcv_2018).drop( columns=['time_open', 'time_close', 
'time_period_end']).rename( {'time_period_start': 'ZEC_start_date'}, axis='columns') zec_2018 #Dash 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() dash_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_DASH_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in dash_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) dash_2018 = pd.DataFrame(dash_ohlcv_2018).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'DASH_start_date'}, axis='columns') dash_2018 #Cardano 2018 daily pricing from Kraken beg_of_2018 = datetime.date(2018, 1, 1).isoformat() end_of_2018 = datetime.date(2018, 12, 31).isoformat() cardano_ohlcv_2018 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ADA_USD', {'period_id': '5DAY', 'time_start': beg_of_2018, 'time_end': end_of_2018}) for period in cardano_ohlcv_2018: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) ada_2018 = pd.DataFrame(cardano_ohlcv_2018).drop( columns=['time_open', 'time_close', 
'time_period_end']).rename( {'time_period_start': 'ADA_start_date'}, axis='columns') ada_2018 merged_2018 = pd.concat( [btc_2018, eth_2018, ltc_2018, zec_2018, xlm_2018, xrp_2018, usdt_2018, dash_2018, ada_2018], axis=1).drop( columns=['price_high', 'price_low', 'trades_count', 'price_open']) merged_2018 merged_2018.to_csv('2018_5day_data.csv') #Bitcoin 2019 daily pricing from Kraken beg_of_2019 = datetime.date(2019, 1, 1).isoformat() end_of_2019 = datetime.date(2019, 12, 31).isoformat() bitcoin_ohlcv_2019 = api.ohlcv_historical_data( 'KRAKEN_SPOT_BTC_USD', {'period_id': '5DAY', 'time_start': beg_of_2019, 'time_end': end_of_2019}) for period in bitcoin_ohlcv_2019: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) btc_2019 = pd.DataFrame(bitcoin_ohlcv_2019).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'btc_start_date'}, axis='columns') btc_2019 #Ethereum 2019 daily pricing from Kraken beg_of_2019 = datetime.date(2019, 1, 1).isoformat() end_of_2019 = datetime.date(2019, 12, 31).isoformat() ethereum_ohlcv_2019 = api.ohlcv_historical_data( 'KRAKEN_SPOT_ETH_USD', {'period_id': '5DAY', 'time_start': beg_of_2019, 'time_end': end_of_2019}) for period in ethereum_ohlcv_2019: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % 
period['price_low'])
    print('Price high: %s' % period['price_high'])
    print('Volume traded: %s' % period['volume_traded'])
    print('Trades count: %s' % period['trades_count'])
eth_2019 = pd.DataFrame(ethereum_ohlcv_2019).drop(
    columns=['time_open', 'time_close', 'time_period_end']).rename(
    {'time_period_start': 'eth_start_date'}, axis='columns')
eth_2019

#Litecoin, Tether, Stellar, Ripple and ZCash 2019 5-day pricing from Kraken,
#fetched in one loop so each DataFrame is built from its own API response
beg_of_2019 = datetime.date(2019, 1, 1).isoformat()
end_of_2019 = datetime.date(2019, 12, 31).isoformat()
frames_2019 = {}
for prefix, symbol_id in [('ltc', 'KRAKEN_SPOT_LTC_USD'),
                          ('USDT', 'KRAKEN_SPOT_USDT_USD'),
                          ('XLM', 'KRAKEN_SPOT_XLM_USD'),
                          ('XRP', 'KRAKEN_SPOT_XRP_USD'),
                          ('ZEC', 'KRAKEN_SPOT_ZEC_USD')]:
    ohlcv = api.ohlcv_historical_data(
        symbol_id, {'period_id': '5DAY',
                    'time_start': beg_of_2019, 'time_end': end_of_2019})
    for period in ohlcv:
        print('Period start: %s' % period['time_period_start'])
        print('Period end: %s' % period['time_period_end'])
        print('Time open: %s' % period['time_open'])
        print('Time close: %s' % period['time_close'])
        print('Price open: %s' % period['price_open'])
        print('Price close: %s' % period['price_close'])
        print('Price low: %s' % period['price_low'])
        print('Price high: %s' % period['price_high'])
        print('Volume traded: %s' % period['volume_traded'])
        print('Trades count: %s' % period['trades_count'])
    frames_2019[prefix] = pd.DataFrame(ohlcv).drop(
        columns=['time_open', 'time_close', 'time_period_end']).rename(
        {'time_period_start': '%s_start_date' % prefix}, axis='columns')

ltc_2019 = frames_2019['ltc']
usdt_2019 = frames_2019['USDT']
xlm_2019 = frames_2019['XLM']
xrp_2019 = frames_2019['XRP']
zec_2019 = frames_2019['ZEC']

#Dash 2019 5-day pricing from Kraken
dash_ohlcv_2019 = api2.ohlcv_historical_data(
    'KRAKEN_SPOT_DASH_USD', {'period_id': '5DAY',
                             'time_start': beg_of_2019,
                             'time_end': end_of_2019})
for period in dash_ohlcv_2019:
    print('Period start: %s' % period['time_period_start'])
    print('Period end: %s' % period['time_period_end'])
    print('Time open: %s' % period['time_open'])
    print('Time close: %s' % period['time_close'])
    print('Price open: %s' % period['price_open'])
    print('Price close: %s' % period['price_close'])
    print('Price low: %s' %
period['price_low'])
    print('Price high: %s' % period['price_high'])
    print('Volume traded: %s' % period['volume_traded'])
    print('Trades count: %s' % period['trades_count'])
dash_2019 = pd.DataFrame(dash_ohlcv_2019).drop(
    columns=['time_open', 'time_close', 'time_period_end']).rename(
    {'time_period_start': 'DASH_start_date'}, axis='columns')
dash_2019

#Cardano, Doge, Lisk, Waves and Siacoin 2019 5-day pricing from Kraken,
#fetched in one loop so each DataFrame is built from its own API response
beg_of_2019 = datetime.date(2019, 1, 1).isoformat()
end_of_2019 = datetime.date(2019, 12, 31).isoformat()
frames_2019 = {}
for prefix, symbol_id in [('ADA', 'KRAKEN_SPOT_ADA_USD'),
                          ('DOGE', 'KRAKEN_SPOT_DOGE_USD'),
                          ('LISK', 'KRAKEN_SPOT_LSK_USD'),
                          ('WAVES', 'KRAKEN_SPOT_WAVES_USD'),
                          ('SC', 'KRAKEN_SPOT_SC_USD')]:
    ohlcv = api2.ohlcv_historical_data(
        symbol_id, {'period_id': '5DAY',
                    'time_start': beg_of_2019, 'time_end': end_of_2019})
    for period in ohlcv:
        print('Period start: %s' % period['time_period_start'])
        print('Period end: %s' % period['time_period_end'])
        print('Time open: %s' % period['time_open'])
        print('Time close: %s' % period['time_close'])
        print('Price open: %s' % period['price_open'])
        print('Price close: %s' % period['price_close'])
        print('Price low: %s' % period['price_low'])
        print('Price high: %s' % period['price_high'])
        print('Volume traded: %s' % period['volume_traded'])
        print('Trades count: %s' % period['trades_count'])
    frames_2019[prefix] = pd.DataFrame(ohlcv).drop(
        columns=['time_open', 'time_close', 'time_period_end']).rename(
        {'time_period_start': '%s_start_date' % prefix}, axis='columns')

ada_2019 = frames_2019['ADA']
doge_2019 = frames_2019['DOGE']
lisk_2019 = frames_2019['LISK']
waves_2019 = frames_2019['WAVES']
sc_2019 = frames_2019['SC']

#Link 2019 5-day pricing from Kraken
link_ohlcv_2019 = api2.ohlcv_historical_data(
    'KRAKEN_SPOT_LINK_USD', {'period_id': '5DAY',
                             'time_start': beg_of_2019,
                             'time_end': end_of_2019})
for period in link_ohlcv_2019:
    print('Period start: %s' % period['time_period_start'])
    print('Period end: %s' % period['time_period_end'])
    print('Time open: %s' % period['time_open'])
    print('Time close: %s' % period['time_close'])
    print('Price open: %s' % period['price_open'])
    print('Price close: %s' % period['price_close'])
    print('Price low: %s' %
period['price_low'])
    print('Price high: %s' % period['price_high'])
    print('Volume traded: %s' % period['volume_traded'])
    print('Trades count: %s' % period['trades_count'])
link_2019 = pd.DataFrame(link_ohlcv_2019).drop(
    columns=['time_open', 'time_close', 'time_period_end']).rename(
    {'time_period_start': 'LINK_start_date'}, axis='columns')
link_2019

merged_2019 = pd.concat(
    [btc_2019, eth_2019, ltc_2019, ada_2019, doge_2019, xlm_2019, xrp_2019,
     lisk_2019, waves_2019, zec_2019, sc_2019, usdt_2019, dash_2019,
     link_2019], axis=1).drop(
    columns=['price_high', 'price_low', 'trades_count', 'price_open'])
merged_2019
merged_2019.to_csv('2019_5day_data.csv')

#Bitcoin 2020 5-day pricing from Kraken
beg_of_2020 = datetime.date(2020, 1, 1).isoformat()
end_of_2020 = datetime.date(2020, 12, 31).isoformat()
bitcoin_ohlcv_2020 = api.ohlcv_historical_data(
    'KRAKEN_SPOT_BTC_USD', {'period_id': '5DAY',
                            'time_start': beg_of_2020,
                            'time_end': end_of_2020})
for period in bitcoin_ohlcv_2020:
    print('Period start: %s' % period['time_period_start'])
    print('Period end: %s' % period['time_period_end'])
    print('Time open: %s' % period['time_open'])
    print('Time close: %s' % period['time_close'])
    print('Price open: %s' % period['price_open'])
    print('Price close: %s' % period['price_close'])
    print('Price low: %s' % period['price_low'])
    print('Price high: %s' % period['price_high'])
    print('Volume traded: %s' % period['volume_traded'])
    print('Trades count: %s' % period['trades_count'])
btc_2020 = pd.DataFrame(bitcoin_ohlcv_2020).drop(
    columns=['time_open', 'time_close', 'time_period_end']).rename(
    {'time_period_start': 'btc_start_date'}, axis='columns')
btc_2020

#Ethereum, Litecoin, Tether, Stellar, Ripple and ZCash 2020 5-day pricing
#from Kraken, fetched in one loop so each DataFrame is built from its own
#API response
frames_2020 = {}
for prefix, symbol_id in [('eth', 'KRAKEN_SPOT_ETH_USD'),
                          ('ltc', 'KRAKEN_SPOT_LTC_USD'),
                          ('USDT', 'KRAKEN_SPOT_USDT_USD'),
                          ('XLM', 'KRAKEN_SPOT_XLM_USD'),
                          ('XRP', 'KRAKEN_SPOT_XRP_USD'),
                          ('ZEC', 'KRAKEN_SPOT_ZEC_USD')]:
    ohlcv = api.ohlcv_historical_data(
        symbol_id, {'period_id': '5DAY',
                    'time_start': beg_of_2020, 'time_end': end_of_2020})
    for period in ohlcv:
        print('Period start: %s' % period['time_period_start'])
        print('Period end: %s' % period['time_period_end'])
        print('Time open: %s' % period['time_open'])
        print('Time close: %s' % period['time_close'])
        print('Price open: %s' % period['price_open'])
        print('Price close: %s' % period['price_close'])
        print('Price low: %s' % period['price_low'])
        print('Price high: %s' % period['price_high'])
        print('Volume traded: %s' % period['volume_traded'])
        print('Trades count: %s' % period['trades_count'])
    frames_2020[prefix] = pd.DataFrame(ohlcv).drop(
        columns=['time_open', 'time_close', 'time_period_end']).rename(
        {'time_period_start': '%s_start_date' % prefix}, axis='columns')

eth_2020 = frames_2020['eth']
ltc_2020 = frames_2020['ltc']
usdt_2020 = frames_2020['USDT']
xlm_2020 = frames_2020['XLM']
xrp_2020 = frames_2020['XRP']
zec_2020 = frames_2020['ZEC']

#Dash 2020 5-day pricing from Kraken
dash_ohlcv_2020 = api2.ohlcv_historical_data(
    'KRAKEN_SPOT_DASH_USD', {'period_id': '5DAY',
                             'time_start': beg_of_2020,
                             'time_end': end_of_2020})
for period in dash_ohlcv_2020:
    print('Period start: %s' %
period['time_period_start'])
    print('Period end: %s' % period['time_period_end'])
    print('Time open: %s' % period['time_open'])
    print('Time close: %s' % period['time_close'])
    print('Price open: %s' % period['price_open'])
    print('Price close: %s' % period['price_close'])
    print('Price low: %s' % period['price_low'])
    print('Price high: %s' % period['price_high'])
    print('Volume traded: %s' % period['volume_traded'])
    print('Trades count: %s' % period['trades_count'])
dash_2020 = pd.DataFrame(dash_ohlcv_2020).drop(
    columns=['time_open', 'time_close', 'time_period_end']).rename(
    {'time_period_start': 'DASH_start_date'}, axis='columns')
dash_2020

#Cardano, Doge, Lisk, Waves, Siacoin and Link 2020 5-day pricing from
#Kraken, fetched in one loop so each DataFrame is built from its own
#API response
beg_of_2020 = datetime.date(2020, 1, 1).isoformat()
end_of_2020 = datetime.date(2020, 12, 31).isoformat()
frames_2020 = {}
for prefix, symbol_id in [('ADA', 'KRAKEN_SPOT_ADA_USD'),
                          ('DOGE', 'KRAKEN_SPOT_DOGE_USD'),
                          ('LISK', 'KRAKEN_SPOT_LSK_USD'),
                          ('WAVES', 'KRAKEN_SPOT_WAVES_USD'),
                          ('SC', 'KRAKEN_SPOT_SC_USD'),
                          ('LINK', 'KRAKEN_SPOT_LINK_USD')]:
    ohlcv = api2.ohlcv_historical_data(
        symbol_id, {'period_id': '5DAY',
                    'time_start': beg_of_2020, 'time_end': end_of_2020})
    for period in ohlcv:
        print('Period start: %s' % period['time_period_start'])
        print('Period end: %s' % period['time_period_end'])
        print('Time open: %s' % period['time_open'])
        print('Time close: %s' % period['time_close'])
        print('Price open: %s' % period['price_open'])
        print('Price close: %s' % period['price_close'])
        print('Price low: %s' % period['price_low'])
        print('Price high: %s' % period['price_high'])
        print('Volume traded: %s' % period['volume_traded'])
        print('Trades count: %s' % period['trades_count'])
    frames_2020[prefix] = pd.DataFrame(ohlcv).drop(
        columns=['time_open', 'time_close', 'time_period_end']).rename(
        {'time_period_start': '%s_start_date' % prefix}, axis='columns')

ada_2020 = frames_2020['ADA']
doge_2020 = frames_2020['DOGE']
lisk_2020 = frames_2020['LISK']
waves_2020 = frames_2020['WAVES']
sc_2020 = frames_2020['SC']
link_2020 = frames_2020['LINK']

merged_2020 = pd.concat(
    [btc_2020, eth_2020, ltc_2020, ada_2020, doge_2020, xlm_2020, xrp_2020,
     lisk_2020, waves_2020, zec_2020, sc_2020, usdt_2020, dash_2020,
     link_2020], axis=1).drop(
    columns=['price_high', 'price_low', 'trades_count', 'price_open'])
merged_2020
merged_2020.to_csv('2020_5day_data.csv')

#Bitcoin 2021 5-day pricing from Kraken
beg_of_2021 = datetime.date(2021, 1, 1).isoformat()
end_of_2021 = datetime.date(2021, 12, 31).isoformat()
bitcoin_ohlcv_2021 = api3.ohlcv_historical_data(
    'KRAKEN_SPOT_BTC_USD', {'period_id': '5DAY',
                            'time_start': beg_of_2021,
                            'time_end': end_of_2021})
for period in bitcoin_ohlcv_2021:
    print('Period start: %s' % period['time_period_start'])
    print('Period end: %s' % period['time_period_end'])
    print('Time open: %s' % period['time_open'])
    print('Time close: %s' % period['time_close'])
    print('Price open: %s' % period['price_open'])
    print('Price close: %s' % period['price_close'])
    print('Price low: %s' % period['price_low'])
    print('Price high: %s' % period['price_high'])
    print('Volume traded: %s' % period['volume_traded'])
    print('Trades count: %s' % period['trades_count'])
btc_2021 = pd.DataFrame(bitcoin_ohlcv_2021).drop(
    columns=['time_open', 'time_close', 'time_period_end']).rename(
    {'time_period_start': 'btc_start_date'}, axis='columns')
btc_2021

#Ethereum 2021 5-day pricing from Kraken
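Every per-coin cell in this notebook repeats the same fetch, print, and reshape steps, and copy-paste at this scale invites wrong-variable slips. A minimal sketch of a reusable helper, assuming only pandas; `fetch_frame` and `StubClient` are illustrative names (not part of the notebook), and the stub stands in for the CoinAPI client so the sketch runs offline:

```python
import pandas as pd

def fetch_frame(client, symbol_id, prefix, time_start, time_end):
    # Same shaping as the per-coin cells: drop the time columns and
    # rename time_period_start to '<prefix>_start_date'.
    ohlcv = client.ohlcv_historical_data(
        symbol_id, {'period_id': '5DAY',
                    'time_start': time_start, 'time_end': time_end})
    return pd.DataFrame(ohlcv).drop(
        columns=['time_open', 'time_close', 'time_period_end']).rename(
        {'time_period_start': '%s_start_date' % prefix}, axis='columns')

class StubClient:
    # Mimics one OHLCV row so the sketch runs without an API key.
    def ohlcv_historical_data(self, symbol_id, params):
        return [{'time_period_start': '2021-01-01T00:00:00.0000000Z',
                 'time_period_end': '2021-01-06T00:00:00.0000000Z',
                 'time_open': '', 'time_close': '',
                 'price_open': 1.0, 'price_close': 2.0,
                 'price_low': 0.5, 'price_high': 2.5,
                 'volume_traded': 10.0, 'trades_count': 3}]

btc = fetch_frame(StubClient(), 'KRAKEN_SPOT_BTC_USD', 'btc',
                  '2021-01-01', '2021-12-31')
print(list(btc.columns))
```

With a real client, `fetch_frame(api3, 'KRAKEN_SPOT_BTC_USD', 'btc', beg_of_2021, end_of_2021)` should reproduce the shape of `btc_2021`.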
beg_of_2021 = datetime.date(2021, 1, 1).isoformat()
end_of_2021 = datetime.date(2021, 12, 31).isoformat()

#Ethereum, Litecoin, Tether, Stellar, Ripple and ZCash 2021 5-day pricing
#from Kraken in one loop; the first three use the api3 client, the rest
#api2, and each DataFrame is built from its own API response
frames_2021 = {}
for prefix, symbol_id, client in [('eth', 'KRAKEN_SPOT_ETH_USD', api3),
                                  ('ltc', 'KRAKEN_SPOT_LTC_USD', api3),
                                  ('USDT', 'KRAKEN_SPOT_USDT_USD', api3),
                                  ('XLM', 'KRAKEN_SPOT_XLM_USD', api2),
                                  ('XRP', 'KRAKEN_SPOT_XRP_USD', api2),
                                  ('ZEC', 'KRAKEN_SPOT_ZEC_USD', api2)]:
    ohlcv = client.ohlcv_historical_data(
        symbol_id, {'period_id': '5DAY',
                    'time_start': beg_of_2021, 'time_end': end_of_2021})
    for period in ohlcv:
        print('Period start: %s' % period['time_period_start'])
        print('Period end: %s' % period['time_period_end'])
        print('Time open: %s' % period['time_open'])
        print('Time close: %s' % period['time_close'])
        print('Price open: %s' % period['price_open'])
        print('Price close: %s' % period['price_close'])
        print('Price low: %s' % period['price_low'])
        print('Price high: %s' % period['price_high'])
        print('Volume traded: %s' % period['volume_traded'])
        print('Trades count: %s' % period['trades_count'])
    frames_2021[prefix] = pd.DataFrame(ohlcv).drop(
        columns=['time_open', 'time_close', 'time_period_end']).rename(
        {'time_period_start': '%s_start_date' % prefix}, axis='columns')

eth_2021 = frames_2021['eth']
ltc_2021 = frames_2021['ltc']
usdt_2021 = frames_2021['USDT']
xlm_2021 = frames_2021['XLM']
xrp_2021 = frames_2021['XRP']
zec_2021 = frames_2021['ZEC']

#Dash 2021 5-day pricing from Kraken
beg_of_2021 = datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() dash_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_DASH_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in dash_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) dash_2021 = pd.DataFrame(dash_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'DASH_start_date'}, axis='columns') dash_2021 #Cardano 2021 daily pricing from Kraken beg_of_2021 = datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() cardano_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_ADA_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in cardano_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) ada_2021 = pd.DataFrame(cardano_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'ADA_start_date'}, axis='columns') ada_2021 # Doge 2021 daily pricing from Kraken 
beg_of_2021 = datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() doge_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_DOGE_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in doge_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) doge_2021 = pd.DataFrame(doge_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'DOGE_start_date'}, axis='columns') doge_2021 # Lisk 2021 daily pricing from Kraken beg_of_2021 = datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() lisk_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_LSK_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in lisk_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) lisk_2021 = pd.DataFrame(lisk_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'LISK_start_date'}, axis='columns') lisk_2021 # WAVES 2021 daily pricing from Kraken beg_of_2021 = 
datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() waves_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_WAVES_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in waves_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) waves_2021 = pd.DataFrame(waves_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'WAVES_start_date'}, axis='columns') waves_2021 # Siacoin 2021 daily pricing from Kraken beg_of_2021 = datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() siacoin_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_SC_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in siacoin_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) sc_2021 = pd.DataFrame(waves_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'SC_start_date'}, axis='columns') sc_2021 #Solana 2021 daily pricing from Kraken beg_of_2021 = 
datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() solana_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_SOL_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in solana_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) sol_2021 = pd.DataFrame(solana_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'SOL_start_date'}, axis='columns') sol_2021 # Link 2021 daily pricing from Kraken beg_of_2021 = datetime.date(2021, 1, 1).isoformat() end_of_2021 = datetime.date(2021, 12, 31).isoformat() link_ohlcv_2021 = api2.ohlcv_historical_data( 'KRAKEN_SPOT_LINK_USD', {'period_id': '5DAY', 'time_start': beg_of_2021, 'time_end': end_of_2021}) for period in link_ohlcv_2021: print('Period start: %s' % period['time_period_start']) print('Period end: %s' % period['time_period_end']) print('Time open: %s' % period['time_open']) print('Time close: %s' % period['time_close']) print('Price open: %s' % period['price_open']) print('Price close: %s' % period['price_close']) print('Price low: %s' % period['price_low']) print('Price high: %s' % period['price_high']) print('Volume traded: %s' % period['volume_traded']) print('Trades count: %s' % period['trades_count']) link_2021 = pd.DataFrame(link_ohlcv_2021).drop( columns=['time_open', 'time_close', 'time_period_end']).rename( {'time_period_start': 'LINK_start_date'}, axis='columns') link_2021 merged_2021 = pd.concat( [btc_2021, eth_2021, ltc_2021, ada_2021, 
doge_2021, xlm_2021, xrp_2021, lisk_2021, waves_2021, zec_2021, sc_2021, usdt_2021, dash_2021, link_2021, sol_2021], axis=1).drop( columns=['price_high', 'price_low', 'trades_count', 'price_open']) merged_2021 merged_2021.to_csv('2021_5day_data.csv') ```
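One caveat worth noting: `pd.concat(..., axis=1)` on frames with default integer indexes aligns rows *positionally*, so it silently assumes every coin has a candle for every 5-day period. A minimal sketch (with made-up prices, and only two toy coins) of the safer alternative, merging on the period start date instead:

```python
import pandas as pd

# Toy stand-ins for two of the per-coin frames above (values are made up).
btc = pd.DataFrame({'btc_start_date': ['2021-01-01', '2021-01-06'],
                    'price_close': [29300.0, 36800.0]})
eth = pd.DataFrame({'ETH_start_date': ['2021-01-06'],  # first period missing
                    'price_close': [1100.0]})

# Positional concat would pair ETH's only row with BTC's first row.
# Merging on the period start date keeps each candle on its own row instead.
merged = pd.merge(btc.rename(columns={'btc_start_date': 'start_date'}),
                  eth.rename(columns={'ETH_start_date': 'start_date'}),
                  on='start_date', how='outer', suffixes=('_btc', '_eth'))
print(merged)
```

With a full candle history for every coin the two approaches give the same result; the date-keyed merge just fails loudly (with NaNs) instead of mis-aligning when a period is missing.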
# Exploring violations related to farming activity

To run this notebook, load SDWIS csv data files into the folder ``../../../data/sdwis/SDWIS``

```
import os
import numpy as np
import pandas as pd

%matplotlib inline
import matplotlib.pyplot as plt

STATE_CODE = 'VT'
DATA_DIR = '../../../../data'
SDWIS_DIR = os.path.join(DATA_DIR, 'sdwis')

# These contaminants are typically associated with farming activity.
FARM_CONTAMINANTS = [
    'ALACHLOR ESA', 'Atrazine', 'Carbofuran', '2,4-D', 'Dalapon',
    '1,2-Dibromo-3-chloropropane', 'Dinoseb', 'Diquat', 'Endothall',
    'Glyphosate', 'Lindane', 'Methoxychlor', 'Nitrate', 'Nitrite',
    'Oxamyl', 'Picloram', 'Simazine', 'Toxaphene', '2,4,5-TP'
]

# Label data with full year, e.g., 2012 for 01-JUL-12
def get_year_from_mmddyy_series(ser, last_year_in_2000s=pd.Timestamp('now').year):
    """
    Expected input will be in the form 01-JUL-12.
    Output will be the year of the data.
    Assumes years will never be greater than the current year.
    """
    # calculate last two digits of the cutoff year
    last_two_digits_year_cutoff = int(str(last_year_in_2000s)[-2:])
    return_series = ser.str.split('-').str[2].astype(int)
    # add first two digits
    return_series += (
        (2000 * (return_series <= last_two_digits_year_cutoff))
        + (1900 * (return_series > last_two_digits_year_cutoff))
    )
    return return_series

def print_water_system_violations(water_system_df, viol_df):
    viol_df = viol_df.merge(water_system_df, left_on='VIOLATION.PWSID',
                            right_on='WATER_SYSTEM.PWSID')
    print('# water systems: ' + str(water_system_df.shape[0]))
    print('# violations: ' + str(viol_df.shape[0]))
    print('# reporting violations: '
          + str(viol_df[viol_df['VIOLATION.VIOLATION_CATEGORY_CODE'] == 'MR'].shape[0]))
    print('# health violations: '
          + str(viol_df[viol_df['VIOLATION.IS_HEALTH_BASED_IND'] == 'Y'].shape[0]))

# read input files (np.str is removed in modern NumPy; use the builtin str)
viol = pd.read_csv(os.path.join(SDWIS_DIR, 'VIOLATION.csv'), sep=',',
                   dtype={'VIOLATION.CONTAMINANT_CODE': str}, low_memory=False)
ws = pd.read_csv(os.path.join(SDWIS_DIR, 'WATER_SYSTEM.csv'), low_memory=False)
wsf = pd.read_csv(os.path.join(SDWIS_DIR, 'WATER_SYSTEM_FACILITY.csv'), low_memory=False)

# this file currently only contains entries for VT, can be expanded to include other states
# source: https://www.nass.usda.gov/Quick_Stats/CDQT/chapter/2/table/1/state/VT/county/027
farms = pd.read_csv(os.path.join(DATA_DIR, 'usda/farm_operations.csv'))
contaminants = pd.read_csv(os.path.join(SDWIS_DIR, 'contaminant-codes.csv'),
                           sep=',', dtype={'CODE': str})

last_two_digits_current_year = int(str(pd.Timestamp('now').year)[-2:])
viol['VIOLATION.YEAR'] = get_year_from_mmddyy_series(viol['VIOLATION.COMPL_PER_BEGIN_DATE'])

# violations in 2017
viol_2017 = viol[viol['VIOLATION.YEAR'] == 2017]
viol_2017.head()

# Filter only to active systems in Vermont
ws = ws.loc[
    (ws['WATER_SYSTEM.PRIMACY_AGENCY_CODE'] == STATE_CODE)
    & (ws['WATER_SYSTEM.PWS_ACTIVITY_CODE'] == 'A')
]

farms = farms.drop(['state_fips', 'county_code', 'commodity', 'domain_category'], axis=1)
farms['county'] = farms['county'].str.capitalize()
farms.head()

farms[['county', '2017']].plot.bar(x='county', y='2017')

viol_2017_county = pd.merge(viol_2017, ws, left_on='VIOLATION.PWSID',
                            right_on='WATER_SYSTEM.PWSID')
viol_2017_county = viol_2017_county[['VIOLATION.PWSID', 'VIOLATION.CONTAMINANT_CODE',
                                     'WATER_SYSTEM.COUNTIES_SERVED']]
viol_2017_county.head()

viol_2017_county_contaminant = pd.merge(viol_2017_county, contaminants,
                                        left_on='VIOLATION.CONTAMINANT_CODE',
                                        right_on='CODE')
viol_2017_county_contaminant_subset = viol_2017_county_contaminant[
    ['VIOLATION.PWSID', 'NAME', 'WATER_SYSTEM.COUNTIES_SERVED']]
viol_2017_county_contaminant_subset.head()

viol_2017_county_contaminant_subset = viol_2017_county_contaminant_subset.loc[
    viol_2017_county_contaminant_subset['NAME'].isin(
        pd.Series(FARM_CONTAMINANTS).str.upper())
]
```

## Possible farm-related contaminant violations by county

```
viol_2017_county_contaminant_subset.groupby(['WATER_SYSTEM.COUNTIES_SERVED', 'NAME'])\
    .size().unstack().plot.bar(stacked=True)
plt.title('Safe drinking water violations by county\nand contaminant in Vermont (2017)')
plt.xlabel('County')
plt.ylabel('Number of violations')
plt.show()
```
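The two-digit-year pivot inside `get_year_from_mmddyy_series` is the one subtle piece of logic here, so it is worth checking on a tiny example. The sketch below repeats the function with the cutoff pinned to 2021 (an assumption, for reproducibility; the notebook defaults to the current year):

```python
import pandas as pd

def get_year_from_mmddyy_series(ser, last_year_in_2000s=2021):
    """Map a 'DD-MON-YY' series to four-digit years, pivoting on the cutoff."""
    cutoff = int(str(last_year_in_2000s)[-2:])
    yy = ser.str.split('-').str[2].astype(int)
    # two-digit years at or below the cutoff land in 2000-20xx, the rest in 19xx
    return yy + 2000 * (yy <= cutoff) + 1900 * (yy > cutoff)

dates = pd.Series(['01-JUL-12', '15-JAN-99', '30-SEP-21'])
print(get_year_from_mmddyy_series(dates).tolist())  # → [2012, 1999, 2021]
```

So '99' is read as 1999 while '12' and '21' are read as 2012 and 2021, matching the SDWIS compliance-period dates the notebook parses.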
```
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
import scipy.stats as stats

%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
```

#### Code 2.1

```
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
```

#### Code 2.2

$$Pr(w \mid n, p) = \frac{n!}{w!(n - w)!} p^w (1 - p)^{n - w}$$

The probability of observing six W's in nine tosses, under a value of p=0.5:

```
stats.binom.pmf(6, n=9, p=0.5)
```

#### Code 2.3 and 2.5

Computing the posterior using a grid approximation. In the book the following code is not inside a function, but this way it is easier to play with different parameters.

```
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
    """Grid approximation of the posterior for the globe-tossing model."""
    # define grid
    p_grid = np.linspace(0, 1, grid_points)

    # define prior
    prior = np.repeat(5, grid_points)  # uniform
    # prior = (p_grid >= 0.5).astype(int)  # truncated
    # prior = np.exp(-5 * abs(p_grid - 0.5))  # double exp

    # compute likelihood at each point in the grid
    likelihood = stats.binom.pmf(success, tosses, p_grid)

    # compute product of likelihood and prior
    unstd_posterior = likelihood * prior

    # standardize the posterior, so it sums to 1
    posterior = unstd_posterior / unstd_posterior.sum()
    return p_grid, posterior
```

#### Code 2.3

```
points = 20
w, n = 6, 9
p_grid, posterior = posterior_grid_approx(points, w, n)
plt.plot(p_grid, posterior, 'o-', label=f'success = {w}\ntosses = {n}')
plt.xlabel('probability of water', fontsize=14)
plt.ylabel('posterior probability', fontsize=14)
plt.title(f'{points} points')
plt.legend(loc=0);
```

#### Code 2.6

Computing the posterior using the quadratic approximation.

```
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mean_q = pm.find_MAP()
    std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
mean_q['p'], std_q

norm = stats.norm(mean_q, std_q)
prob = .89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q['p'] + std_q * z
pi
```

#### Code 2.7

```
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label='True posterior')

# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q['p'], std_q), label='Quadratic approximation')
plt.legend(loc=0, fontsize=13)
plt.title(f'n = {n}', fontsize=14)
plt.xlabel('Proportion water', fontsize=14)
plt.ylabel('Density', fontsize=14);

import platform
import sys
import IPython
import matplotlib
import scipy

print("""This notebook was created using:\nPython {}\nIPython {}\nPyMC {}\nArviZ {}\nNumPy {}\nSciPy {}\nMatplotlib {}\n""".format(
    sys.version[:5], IPython.__version__, pm.__version__, az.__version__,
    np.__version__, scipy.__version__, matplotlib.__version__))
```
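A quick numerical check (not part of the book's code) that the grid approximation is doing the right thing: with a flat prior, the exact posterior for 6 waters in 9 tosses is Beta(w+1, n−w+1), whose mean is (w+1)/(n+2). The grid posterior's mean should match it closely once the grid is fine enough:

```python
import numpy as np
from scipy import stats

def posterior_grid_approx(grid_points=5, success=6, tosses=9):
    """Grid approximation with a flat prior (the constant 5 cancels on normalizing)."""
    p_grid = np.linspace(0, 1, grid_points)
    prior = np.repeat(5, grid_points)
    likelihood = stats.binom.pmf(success, tosses, p_grid)
    unstd_posterior = likelihood * prior
    return p_grid, unstd_posterior / unstd_posterior.sum()

p_grid, posterior = posterior_grid_approx(grid_points=1000, success=6, tosses=9)

# Discrete posterior mean vs. the exact Beta(7, 4) mean 7/11 ≈ 0.636
grid_mean = np.sum(p_grid * posterior)
exact_mean = (6 + 1) / (9 + 2)
print(grid_mean, exact_mean)
```

At 1000 grid points the two means agree to several decimal places, which is why the 20-point plot above already looks very close to the analytic curve in Code 2.7.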
# Final Prediction Model and Results

Now that we have evaluated our model, we can use all the data and build a model to predict future values. In this case, we predict electricity consumption and generation in Germany for the year 2020.

## Import Libraries

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time
from matplotlib.dates import date2num
%matplotlib inline
sns.set_style('darkgrid')

from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.losses import mean_squared_error
from tensorflow.keras.models import Model
from tensorflow.keras.models import load_model
import h5py
```

## Load Data

```
consumption = pd.read_pickle('../Data_Cleaned/consumption_ready_for_forcast.pkl')
generation = pd.read_pickle('../Data_Cleaned/generation_ready_for_forcast.pkl')
```

## Batch Size and Timesteps

```
# defining the batch size and timesteps
batch_size = 64
timesteps = 24 * 7
```

## Prepare Prediction Indexes

```
idx = pd.date_range(start='2020-01-01 00:00:00', end='2020-12-31 23:00:00', freq='H')
con_idx = pd.DataFrame(index=idx, data={consumption.columns.values[0]: 0})
gen_idx = pd.DataFrame(index=idx, data={generation.columns.values[0]: 0})
consumption_full = pd.concat([consumption, con_idx])
generation_full = pd.concat([generation, gen_idx])
```

## Prepare Train Set

```
# Function to calculate the training size with respect to batch_size
def get_train_length(dataset, timesteps, batch_size):
    # subtract 2.1 * timesteps, excluded from training and reserved for the prediction set
    length = len(dataset) - 2.1 * timesteps
    train_length_values = []
    for x in range(int(length) - 200, int(length)):
        modulo = x % batch_size
        if modulo == 0:
            train_length_values.append(x)
    return max(train_length_values)

length = get_train_length(consumption, timesteps, batch_size)
upper_train = length + timesteps * 2
print('\nDatasets length:', len(consumption))
print('Last divisible index:', upper_train)
print('Train Sets length:', length, '\n')

# Set y_train variable for consumption df
consumption_train_df = consumption[0:upper_train]
consumption_y_train = consumption_train_df.iloc[:, ].values
print('\nTraining Sets Shapes after Adding Timesteps:', consumption_y_train.shape)

# Set y_train variable for generation df
generation_train_df = generation[0:upper_train]
generation_y_train = generation_train_df.iloc[:, ].values
```

## Feature Scaling

```
# scale between 0 and 1; the weights are easier to find.
sc_con = MinMaxScaler(feature_range=(0, 1))
sc_gen = MinMaxScaler(feature_range=(0, 1))
consumption_y_train_scaled = sc_con.fit_transform(np.float64(consumption_y_train))
generation_y_train_scaled = sc_gen.fit_transform(np.float64(generation_y_train))
```

## Creating a data structure with n timesteps

```
# Empty lists to store X_train and y_train
consumption_X_train_matrix = []
consumption_y_train_matrix = []

# Creating a data structure with n timesteps
for i in range(timesteps, length + timesteps):
    # create X_train matrix: 24*7 items per array (timestep)
    consumption_X_train_matrix.append(consumption_y_train_scaled[i - timesteps:i, 0])
    # create y_train matrix: 24*7 items per array (timestep)
    consumption_y_train_matrix.append(consumption_y_train_scaled[i:i + timesteps, 0])

# repeat all of these steps for the generation dataframe
generation_X_train_matrix = []
generation_y_train_matrix = []
for i in range(timesteps, length + timesteps):
    generation_X_train_matrix.append(generation_y_train_scaled[i - timesteps:i, 0])
    generation_y_train_matrix.append(generation_y_train_scaled[i:i + timesteps, 0])

# Check shape
print()
print('X_train sets shape:', np.array(consumption_X_train_matrix).shape)
print('y_train sets shape:', np.array(consumption_y_train_matrix).shape)
print()
```

## Reshape

```
# Turn lists into numpy arrays
consumption_X_train_matrix = np.array(consumption_X_train_matrix)
consumption_y_train_matrix = np.array(consumption_y_train_matrix)

# reshape arrays to (samples, timesteps, features)
consumption_X_train_reshaped = np.reshape(
    consumption_X_train_matrix,
    (consumption_X_train_matrix.shape[0], consumption_X_train_matrix.shape[1], 1))
consumption_y_train_reshaped = np.reshape(
    consumption_y_train_matrix,
    (consumption_y_train_matrix.shape[0], consumption_y_train_matrix.shape[1], 1))

# Repeat the same steps for the generation dataframe
generation_X_train_matrix = np.array(generation_X_train_matrix)
generation_y_train_matrix = np.array(generation_y_train_matrix)
generation_X_train_reshaped = np.reshape(
    generation_X_train_matrix,
    (generation_X_train_matrix.shape[0], generation_X_train_matrix.shape[1], 1))
generation_y_train_reshaped = np.reshape(
    generation_y_train_matrix,
    (generation_y_train_matrix.shape[0], generation_y_train_matrix.shape[1], 1))

# Check shapes
print()
print('X_train sets shape:', generation_X_train_reshaped.shape)
print('y_train sets shape:', generation_y_train_reshaped.shape)
print()
```

## Building the LSTM

```
# Initialising the LSTM model with MSE loss function.
# Using the Functional API, each layer's output is the input of the next layer.

# Input
inputs = Input(batch_shape=(batch_size, timesteps, 1))

# Layer 1: LSTM
lstm_1 = LSTM(12, activation='tanh', recurrent_activation='sigmoid',
              stateful=True, return_sequences=True)(inputs)

# Layer 2: LSTM
lstm_2 = LSTM(12, activation='tanh', recurrent_activation='sigmoid',
              stateful=True, return_sequences=True)(lstm_1)

# Output
output = Dense(units=1)(lstm_2)

# Sticking all layers into a Model
regressor = Model(inputs=inputs, outputs=output)

# Adam is fast starting off and then gets slower and more precise
regressor.compile(optimizer='adam', loss=mean_squared_error)

# Check the model summary
regressor.summary()
```

## Run LSTM

We run the code on the cloud.

```python
epochs = 5

# start time
start = time.time()

# Stateful training: run through all data; the cell and hidden states
# are carried over to the next batch.
for i in range(epochs):
    print("\nEpoch: " + str(i))
    regressor.fit(consumption_X_train_reshaped, consumption_y_train_reshaped,
                  shuffle=False, epochs=1, batch_size=batch_size)
    # resets only the states; the weights are kept.
    regressor.reset_states()

# duration of training the model
duration = time.time() - start
```

**Load Models**:

```
# Both models were trained with a batch size of 128 and 10 epochs
regressor_con = load_model(filepath="../Models/LSTM_Model_Consumption_128.h5")
regressor_gen = load_model(filepath="../Models/LSTM_Model_Generation_128.h5")
```

## Prepare Prediction Function

```
def predict_lstm(dataframe, lstm_model, scaler, batch_size, timesteps):
    """
    Forecast the next values of a time-series dataframe with an LSTM model.
    The predicted values start from the last timestamp index plus one time
    interval; the total number of predicted values is batch_size.

    INPUT:
    dataframe: pandas DataFrame with timestamps as index and one metric column
    lstm_model: trained Keras LSTM model
    scaler: fitted sklearn scaler, used for transform and inverse transform
    batch_size: int, batch size the model was trained with
        (also the number of sequences)
    timesteps: int, number of values in a batch (or sequence)

    OUTPUT:
    numpy array of shape (batch_size,) containing the predicted values
    """
    # subsetting
    df_pred = dataframe[-(batch_size + timesteps):]
    df_pred_set = np.float64(df_pred)

    # scaling
    X_pred_scaled = scaler.fit_transform(df_pred_set)

    # creating input data
    X_pred_matrix = []
    for i in range(batch_size):
        X_pred_matrix.append(X_pred_scaled[i:i + timesteps, 0])

    # turn list into array
    X_pred_matrix = np.array(X_pred_matrix)

    # reshaping
    X_pred_reshaped = np.reshape(X_pred_matrix,
                                 (X_pred_matrix.shape[0], X_pred_matrix.shape[1], 1))

    # prediction
    y_hat = lstm_model.predict(X_pred_reshaped, batch_size=batch_size)

    # reshaping
    y_hat = np.reshape(y_hat, (y_hat.shape[0], y_hat.shape[1]))

    # inverse transform
    y_hat = scaler.inverse_transform(y_hat)

    # creating y_pred data
    y_pred = []
    for i in range(batch_size):
        y_pred = np.append(y_pred, y_hat[i, -1])
    return y_pred
```

## Results

**Electricity Consumption:**

```
final_pred_con = predict_lstm(dataframe=consumption,
                              lstm_model=regressor_con,
                              scaler=sc_con,
                              batch_size=128,
                              timesteps=timesteps)

# Visualising the results
x_index_pred = pd.date_range(consumption.index[-1], freq='H', periods=129)[1:]

fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(111)
plt.plot(consumption[-256:], color='blue', label='Real Values')
plt.plot(x_index_pred, final_pred_con, color='red', label='Predicted Values')
plt.title("\nGermany's Electricity Consumption Forecast using LSTM\n",
          fontsize=20, fontweight='bold')
ax.xaxis.set_major_locator(plt.IndexLocator(base=2, offset=0))
plt.xlabel('')
plt.ylabel('Electricity Unit [MWh]', fontsize=15)
plt.legend()
plt.show()
```

**Electricity Generation:**

```
final_pred_gen = predict_lstm(dataframe=generation,
                              lstm_model=regressor_gen,
                              scaler=sc_gen,
                              batch_size=128,
                              timesteps=timesteps)

# Visualising the results
x_index_pred = pd.date_range(generation.index[-1], freq='H', periods=129)[1:]

fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(111)
plt.plot(generation[-256:], color='blue', label='Real Values')
plt.plot(x_index_pred, final_pred_gen, color='red', label='Predicted Values')
plt.title("\nGermany's Electricity Generation Forecast using LSTM\n",
          fontsize=20, fontweight='bold')
ax.xaxis.set_major_locator(plt.IndexLocator(base=2, offset=0))
plt.xlabel('')
plt.ylabel('Electricity Unit [MWh]', fontsize=15)
plt.legend()
plt.show()
```
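The sliding-window construction used in "Creating a data structure with n timesteps" above can be hard to picture at full scale, so here is the same pattern on a toy series (the notebook uses `timesteps = 24*7`; this sketch uses 3 to keep the windows readable):

```python
import numpy as np

# Toy stand-in for the scaled (n, 1) training array
series = np.arange(10, dtype=float).reshape(-1, 1)
timesteps, length = 3, 4

X, y = [], []
for i in range(timesteps, length + timesteps):
    X.append(series[i - timesteps:i, 0])  # past window (model input)
    y.append(series[i:i + timesteps, 0])  # next window (sequence-to-sequence target)

X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # (4, 3) (4, 3)
print(X[0], y[0])        # [0. 1. 2.] [3. 4. 5.]
```

Each input window of `timesteps` past values is paired with the window of `timesteps` values that immediately follows it, which is exactly why the LSTM above is built with `return_sequences=True` and a `Dense(1)` head per timestep.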
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-use-adla-as-compute-target.png) # AML Pipeline with AdlaStep This notebook is used to demonstrate the use of AdlaStep in AML Pipelines. [AdlaStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py) is used to run U-SQL scripts using Azure Data Lake Analytics service. ## AML and Pipeline SDK-specific imports ``` import os from msrest.exceptions import HttpOperationError import azureml.core from azureml.exceptions import ComputeTargetException from azureml.core import Workspace, Experiment from azureml.core.compute import ComputeTarget, AdlaCompute from azureml.core.datastore import Datastore from azureml.data.data_reference import DataReference from azureml.pipeline.core import Pipeline, PipelineData from azureml.pipeline.steps import AdlaStep # Check core SDK version number print("SDK version:", azureml.core.VERSION) ``` ## Initialize Workspace Initialize a workspace object from persisted configuration. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure the config file is present at .\config.json ``` ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n') ``` ## Attach ADLA account to workspace To submit jobs to Azure Data Lake Analytics service, you must first attach your ADLA account to the workspace. You'll need to provide the account name and resource group of ADLA account to complete this part. 
``` adla_compute_name = 'testadl' # Name to associate with new compute in workspace # ADLA account details needed to attach as compute to workspace adla_account_name = "<adla_account_name>" # Name of the Azure Data Lake Analytics account adla_resource_group = "<adla_resource_group>" # Name of the resource group which contains this account try: # check if already attached adla_compute = AdlaCompute(ws, adla_compute_name) except ComputeTargetException: print('attaching adla compute...') attach_config = AdlaCompute.attach_configuration(resource_group=adla_resource_group, account_name=adla_account_name) adla_compute = ComputeTarget.attach(ws, adla_compute_name, attach_config) adla_compute.wait_for_completion() print("Using ADLA compute:{}".format(adla_compute.cluster_resource_id)) print("Provisioning state:{}".format(adla_compute.provisioning_state)) print("Provisioning errors:{}".format(adla_compute.provisioning_errors)) ``` ## Register Data Lake Storage as Datastore To register Data Lake Storage as Datastore in workspace, you'll need account information like account name, resource group and subscription Id. > AdlaStep can only work with data stored in the **default** Data Lake Storage of the Data Lake Analytics account provided above. If the data you need to work with is in a non-default storage, you can use a DataTransferStep to copy the data before training. You can find the default storage by opening your Data Lake Analytics account in Azure portal and then navigating to 'Data sources' item under Settings in the left pane. ### Grant Azure AD application access to Data Lake Storage You'll also need to provide an Active Directory application which can access Data Lake Storage. [This document](https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory) contains step-by-step instructions on how to create an AAD application and assign to Data Lake Storage. 
Couple of important notes when assigning permissions to AAD app: - Access should be provided at root folder level. - In 'Assign permissions' pane, select Read, Write, and Execute permissions for 'This folder and all children'. Add as 'An access permission entry and a default permission entry' to make sure application can access any new files created in the future. ``` datastore_name = 'MyAdlsDatastore' # Name to associate with data store in workspace # ADLS storage account details needed to register as a Datastore subscription_id = os.getenv("ADL_SUBSCRIPTION_62", "<my-subscription-id>") # subscription id of ADLS account resource_group = os.getenv("ADL_RESOURCE_GROUP_62", "<my-resource-group>") # resource group of ADLS account store_name = os.getenv("ADL_STORENAME_62", "<my-datastore-name>") # ADLS account name tenant_id = os.getenv("ADL_TENANT_62", "<my-tenant-id>") # tenant id of service principal client_id = os.getenv("ADL_CLIENTID_62", "<my-client-id>") # client id of service principal client_secret = os.getenv("ADL_CLIENT_62_SECRET", "<my-client-secret>") # the secret of service principal try: adls_datastore = Datastore.get(ws, datastore_name) print("found datastore with name: %s" % datastore_name) except HttpOperationError: adls_datastore = Datastore.register_azure_data_lake( workspace=ws, datastore_name=datastore_name, subscription_id=subscription_id, # subscription id of ADLS account resource_group=resource_group, # resource group of ADLS account store_name=store_name, # ADLS account name tenant_id=tenant_id, # tenant id of service principal client_id=client_id, # client id of service principal client_secret=client_secret) # the secret of service principal print("registered datastore with name: %s" % datastore_name) ``` ## Setup inputs and outputs For purpose of this demo, we're going to execute a simple U-SQL script that reads a CSV file and writes portion of content to a new text file. 
First, let's create our sample input, which contains 3 columns: employee Id, name, and department Id.

```
# create a folder to store files for our job
sample_folder = "adla_sample"
if not os.path.isdir(sample_folder):
    os.mkdir(sample_folder)

%%writefile $sample_folder/sample_input.csv
1, Noah, 100
3, Liam, 100
4, Emma, 100
5, Jacob, 100
7, Jennie, 100
```

Upload this file to Data Lake Storage at location `adla_sample/sample_input.csv` and create a DataReference to refer to this file.

```
sample_input = DataReference(
    datastore=adls_datastore,
    data_reference_name="employee_data",
    path_on_datastore="adla_sample/sample_input.csv")
```

Create a PipelineData object to store the output produced by AdlaStep.

```
sample_output = PipelineData("sample_output", datastore=adls_datastore)
```

## Write your U-SQL script

Now let's write a U-SQL script that reads the above CSV file and writes the name column to a new file. Instead of hard-coding paths in your script, you can use the `@@name@@` syntax to refer to inputs, outputs, and parameters.

- If `name` is the name of an input or output port binding, any occurrences of `@@name@@` in the script are replaced with the actual data path of the corresponding port binding.
- If `name` matches any key in the `params` dictionary, any occurrences of `@@name@@` are replaced with the corresponding value in the dictionary.

Note the use of the `@@` syntax in the script below. Before the job is submitted to the Data Lake Analytics service, `@@employee_data@@` is replaced with the actual path of `sample_input.csv` in Data Lake Storage. Similarly, `@@sample_output@@` is replaced with a path in Data Lake Storage that will be used to store the intermediate output produced by the step.
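The `@@name@@` substitution rule described above can be illustrated with a tiny, hypothetical re-implementation (this is only a sketch of the replacement behaviour, not the SDK's actual code, and the paths below are made up):

```
def substitute_placeholders(script, bindings):
    """Replace each @@name@@ token with the value bound to `name`,
    mimicking how port bindings and `params` entries are resolved."""
    for name, value in bindings.items():
        script = script.replace("@@{}@@".format(name), value)
    return script

# Hypothetical paths; the real values are generated by the service at submission time.
usql = 'EXTRACT ... FROM "@@employee_data@@" ... TO "@@sample_output@@"'
resolved = substitute_placeholders(usql, {
    "employee_data": "adla_sample/sample_input.csv",
    "sample_output": "adla_sample/output/part-0.txt",
})
print(resolved)
```

Any token without a matching binding is left untouched, which is why a typo in a placeholder name surfaces as a literal `@@...@@` string in the submitted script.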
``` %%writefile $sample_folder/sample_script.usql // Read employee information from csv file @employees = EXTRACT EmpId int, EmpName string, DeptId int FROM "@@employee_data@@" USING Extractors.Csv(); // Export employee names to text file OUTPUT ( SELECT EmpName FROM @employees ) TO "@@sample_output@@" USING Outputters.Text(); ``` ## Create an AdlaStep **[AdlaStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py)** is used to run U-SQL script using Azure Data Lake Analytics. - **name:** Name of module - **script_name:** name of U-SQL script file - **inputs:** List of input port bindings - **outputs:** List of output port bindings - **compute_target:** the ADLA compute to use for this job - **params:** Dictionary of name-value pairs to pass to U-SQL job *(optional)* - **degree_of_parallelism:** the degree of parallelism to use for this job *(optional)* - **priority:** the priority value to use for the current job *(optional)* - **runtime_version:** the runtime version of the Data Lake Analytics engine *(optional)* - **source_directory:** folder that contains the script, assemblies etc. *(optional)* - **hash_paths:** list of paths to hash to detect a change (script file is always hashed) *(optional)* ``` adla_step = AdlaStep( name='extract_employee_names', script_name='sample_script.usql', source_directory=sample_folder, inputs=[sample_input], outputs=[sample_output], compute_target=adla_compute) ``` ## Build and Submit the Experiment ``` pipeline = Pipeline(workspace=ws, steps=[adla_step]) pipeline_run = Experiment(ws, 'adla_sample').submit(pipeline) pipeline_run.wait_for_completion() ``` ### View Run Details ``` from azureml.widgets import RunDetails RunDetails(pipeline_run).show() ```
# Data Mining (Bing News)

### Required Packages

```
#!pip install selenium
#!pip install beautifulsoup4
#!pip install webdriver_manager
#!pip install pandas
#!pip install numpy
#!pip install matplotlib

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import re, csv
import time
from urllib.request import Request, urlopen
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
```

### Metadata for crawler

```
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.5 Safari/605.1.15"
}

"""
Change the query to search for different keywords
"""
NEWS_URL = 'https://www.bing.com/news/search?q=omnicron+booster/'
```

### Crawler

```
def receive_html(url):
    """
    This function receives the html from the url, scrolls down the page and returns the html
    ----
    Parameters:
        url (string): The url to be scraped
    Returns:
        html (string): The html of the url
    """
    request = Request(url, headers=HEADERS)
    response = urlopen(request).read()
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
    driver.get(url)
    # Scroll down to the end of the page
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    height = driver.execute_script("return document.body.scrollHeight")
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(3)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == height:
            break
        height = new_height
    # Return the fully loaded page source
    return driver.page_source

def html_parser(html):
    """
    This function parses the html and extracts the titles of the articles
    ----
    Parameters:
        html (string): The html of the url
    Returns:
        titles (list): The titles of the articles
    """
    soup = BeautifulSoup(html, "html.parser")
    titles = []
    for result in soup.select(".card-with-cluster"):
        # Extract the title of each article.
        # Modify the selectors to extract different information.
        title = result.select_one(".title").text
        titles.append(title)
    print(len(titles))
    return titles

def save_results(articles):
    """
    This function saves the results in a csv file
    ----
    Parameters:
        articles (list): The titles of the articles
    """
    df = pd.DataFrame({'Title': articles})
    df.to_csv('data.csv')

def main():
    """
    This function calls the functions to scrape the data and save the results
    """
    html = receive_html(NEWS_URL)
    articles = html_parser(html)
    save_results(articles)

if __name__ == '__main__':
    main()
```

### Visualization

```
df = pd.read_csv('data.csv')
df.drop("Unnamed: 0", axis=1, inplace=True)
df = df[:50]

side_effect = df.loc[df['Title'].str.contains('side effect', flags=re.IGNORECASE)]
omicron = df.loc[df['Title'].str.contains('Omicron', flags=re.IGNORECASE)]
booster = df.loc[df['Title'].str.contains('Booster', flags=re.IGNORECASE)]
vaccine = df.loc[df['Title'].str.contains('Vaccine', flags=re.IGNORECASE)]

freq = np.array([len(side_effect), len(omicron), len(booster), len(vaccine)])
plt.bar(range(4), height=freq, tick_label=['Side Effect', 'Omicron', 'Booster', 'Vaccine'])
plt.ylabel('Frequency')
plt.show()
```
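The `str.contains` keyword counts above can be sketched without pandas as a plain-Python frequency count (the sample titles below are made up for illustration):

```
import re

def keyword_frequencies(titles, keywords):
    """Count how many titles mention each keyword, case-insensitively."""
    return {kw: sum(1 for t in titles if re.search(kw, t, re.IGNORECASE))
            for kw in keywords}

sample_titles = [
    "Omicron booster rollout begins",
    "Vaccine side effect questions answered",
    "New booster targets Omicron variant",
]
print(keyword_frequencies(sample_titles, ["side effect", "omicron", "booster", "vaccine"]))
# {'side effect': 1, 'omicron': 2, 'booster': 2, 'vaccine': 1}
```

Note that, as in the pandas version, a title mentioning several keywords is counted once per keyword, so the frequencies need not sum to the number of titles.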
```
import os
import time
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms
import matplotlib.pyplot as plt
from PIL import Image

os.environ['CUDA_VISIBLE_DEVICES'] = "2"
if torch.cuda.is_available():
    torch.backends.cudnn.deterministic = True

# Hyperparameters
RANDOM_SEED = 1
LEARNING_RATE = 0.001
BATCH_SIZE = 128
NUM_EPOCHS = 20

# Architecture
NUM_FEATURES = 28*28
NUM_CLASSES = 10

# Other
DEVICE = "cuda"
GRAYSCALE = False

train_dataset = datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='data', train=False, transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=False)
test_loader = DataLoader(dataset=test_dataset, batch_size=BATCH_SIZE, shuffle=False)

# Checking the dataset
for images, labels in train_loader:
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break

device = torch.device(DEVICE)
torch.manual_seed(0)
for epoch in range(2):
    for batch_idx, (x, y) in enumerate(train_loader):
        print('Epoch:', epoch+1, end='')
        print(' | Batch index:', batch_idx, end='')
        print(' | Batch size:', y.size()[0])
        x = x.to(device)
        y = y.to(device)
        break

from torch.utils import data
from torchvision import transforms
from torchvision.datasets import MNIST

class BiasedMNIST(MNIST):
    """A base class for Biased-MNIST.
    We manually select ten colours to synthesize colour bias.
    (See `COLOUR_MAP` for the colour configuration.)
    Usage is exactly the same as the torchvision MNIST dataset class.
    You have two parameters to control the level of bias.

    Parameters
    ----------
    root : str
        path to MNIST dataset.
    data_label_correlation : float, default=1.0
        Here, each class has the pre-defined colour (bias).
        data_label_correlation, or `rho`, controls the level of the dataset bias.
        A sample is coloured with
        - the pre-defined colour with probability `rho`,
        - one of the other colours with probability `1 - rho`.
        The number of ``other colours'' is controlled by `n_confusing_labels` (default: 9).
        Note that the colour is injected into the background of the image (see `_binary_to_colour`).
        Hence, we have
        - a perfectly biased dataset with rho=1.0
        - a perfectly unbiased one with rho=0.1 (1/10) ==> our ``unbiased'' setting at test time.
        In the paper, we explore high correlations but with small hints, e.g., rho=0.999.
    n_confusing_labels : int, default=9
        In real-world cases, biases are not equally distributed, but highly unbalanced.
        We mimic the unbalanced biases by changing the number of confusing colours for each class.
        In the paper, we use n_confusing_labels=9, i.e., during training, the model can observe all colours for each class.
        However, you can make the problem harder by setting a smaller n_confusing_labels, e.g., 2.
        We suggest researchers consider this benchmark for future research.
""" COLOUR_MAP = [[255, 0, 0], [0, 255, 0], [0, 0, 255], [225, 225, 0], [225, 0, 225], [0, 255, 255], [255, 128, 0], [255, 0, 128], [128, 0, 255], [128, 128, 128]] def __init__(self, root, train=True, transform=None, target_transform=None, download=False, data_label_correlation=1.0, n_confusing_labels=9): super().__init__(root, train=train, transform=transform, target_transform=target_transform, download=download) self.random = True self.data_label_correlation = data_label_correlation self.n_confusing_labels = n_confusing_labels self.data, self.targets, self.colored_bg, self.biased_targets = self.build_biased_mnist() indices = np.arange(len(self.data)) self._shuffle(indices) self.data = self.data[indices].numpy() self.colored_bg = self.colored_bg[indices].numpy() self.targets = self.targets[indices] self.biased_targets = self.biased_targets[indices] @property def raw_folder(self): return os.path.join(self.root, 'raw') @property def processed_folder(self): return os.path.join(self.root, 'processed') def _shuffle(self, iteratable): if self.random: np.random.shuffle(iteratable) def _make_biased_mnist(self, indices, label): raise NotImplementedError def _update_bias_indices(self, bias_indices, label): if self.n_confusing_labels > 9 or self.n_confusing_labels < 1: raise ValueError(self.n_confusing_labels) indices = np.where((self.targets == label).numpy())[0] self._shuffle(indices) indices = torch.LongTensor(indices) n_samples = len(indices) n_correlated_samples = int(n_samples * self.data_label_correlation) n_decorrelated_per_class = int(np.ceil((n_samples - n_correlated_samples) / (self.n_confusing_labels))) correlated_indices = indices[:n_correlated_samples] bias_indices[label] = torch.cat([bias_indices[label], correlated_indices]) decorrelated_indices = torch.split(indices[n_correlated_samples:], n_decorrelated_per_class) other_labels = [_label % 10 for _label in range(label + 1, label + 1 + self.n_confusing_labels)] self._shuffle(other_labels) for idx, _indices in 
enumerate(decorrelated_indices): _label = other_labels[idx] bias_indices[_label] = torch.cat([bias_indices[_label], _indices]) def build_biased_mnist(self): """Build biased MNIST. """ n_labels = self.targets.max().item() + 1 bias_indices = {label: torch.LongTensor() for label in range(n_labels)} for label in range(n_labels): self._update_bias_indices(bias_indices, label) data = torch.ByteTensor() targets = torch.LongTensor() colored_bg = torch.ByteTensor() biased_targets = [] for bias_label, indices in bias_indices.items(): (_data, _colored_bg), _targets = self._make_biased_mnist(indices, bias_label) data = torch.cat([data, _data]) colored_bg = torch.cat([colored_bg, _colored_bg]) targets = torch.cat([targets, _targets]) biased_targets.extend([bias_label] * len(indices)) biased_targets = torch.LongTensor(biased_targets) return data, targets, colored_bg, biased_targets def __getitem__(self, index): img, target = self.data[index], self.targets[index] colored_bg = self.colored_bg[index] img = Image.fromarray(img.astype(np.uint8), mode='RGB') bias = Image.fromarray(colored_bg.astype(np.uint8), mode='RGB') if self.transform is not None: img = self.transform(img) bias = self.transform(bias) if self.target_transform is not None: target = self.target_transform(target) img = np.asarray(img) bias = np.asarray(bias) label = torch.zeros(10) label.scatter_(0, target, 1) # return img, target, bias, int(self.biased_targets[index]) return img, label, bias, int(self.biased_targets[index]) class ColourBiasedMNIST(BiasedMNIST): def __init__(self, root, train=True, transform=None, target_transform=None, download=False, data_label_correlation=1.0, n_confusing_labels=9): super(ColourBiasedMNIST, self).__init__(root, train=train, transform=transform, target_transform=target_transform, download=download, data_label_correlation=data_label_correlation, n_confusing_labels=n_confusing_labels) def _binary_to_colour(self, data, colour): fg_data = torch.zeros_like(data) fg_data[data != 0] = 255 
fg_data[data == 0] = 0 fg_data = torch.stack([fg_data, fg_data, fg_data], dim=1) bg_data = torch.zeros_like(data) bg_data[data == 0] = 1 bg_data[data != 0] = 0 bg_data = torch.stack([bg_data, bg_data, bg_data], dim=3) bg_data = bg_data * torch.ByteTensor(colour) colored_bg = (bg_data + 1 - bg_data) * torch.ByteTensor(colour) bg_data = bg_data.permute(0, 3, 1, 2) data = fg_data + bg_data return data.permute(0, 2, 3, 1), colored_bg def _make_biased_mnist(self, indices, label): return self._binary_to_colour(self.data[indices], self.COLOUR_MAP[label]), self.targets[indices] def get_biased_mnist_dataloader(root, batch_size, data_label_correlation, n_confusing_labels=9, train=True, num_workers=4): transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]) dataset = ColourBiasedMNIST(root, train=train, transform=transform, download=True, data_label_correlation=data_label_correlation, n_confusing_labels=n_confusing_labels) dataloader = data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=True) return dataloader cmnist_train = get_biased_mnist_dataloader(root='data', batch_size=128, data_label_correlation=0.999, n_confusing_labels=9, train=True) cmnist_val_bias = get_biased_mnist_dataloader(root='data', batch_size=128, data_label_correlation=0.999, n_confusing_labels=9, train=False) cmnist_val_unbias = get_biased_mnist_dataloader(root='data', batch_size=128, data_label_correlation=0.1, n_confusing_labels=9, train=False) cmnist_val_origin = get_biased_mnist_dataloader(root='data', batch_size=128, data_label_correlation=0, n_confusing_labels=9, train=False) for batch_idx, (features, targets, bias, _) in enumerate(cmnist_train): features = features targets = targets break targets[3] print(features.shape) print(targets[3]) print(bias.shape) nhw_img = np.transpose(features[3], axes=(1, 2, 0)) plt.imshow(nhw_img); bg = np.transpose(bias[3], axes=(1, 2, 0)) 
plt.imshow(bg); ########################## ### MODEL ########################## def conv3x3(in_planes, out_planes, stride=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False) class BasicBlock(nn.Module): expansion = 1 def __init__(self, inplanes, planes, stride=1, downsample=None): super(BasicBlock, self).__init__() self.conv1 = conv3x3(inplanes, planes, stride) self.bn1 = nn.BatchNorm2d(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.bn2 = nn.BatchNorm2d(planes) self.downsample = downsample self.stride = stride def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) if self.downsample is not None: residual = self.downsample(x) out += residual out = self.relu(out) return out class ResNet(nn.Module): def __init__(self, block, layers, num_classes, grayscale): self.inplanes = 64 if grayscale: in_dim = 1 else: in_dim = 3 super(ResNet, self).__init__() self.conv1 = nn.Conv2d(in_dim, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2) self.layer3 = self._make_layer(block, 256, layers[2], stride=2) self.layer4 = self._make_layer(block, 512, layers[3], stride=2) self.avgpool = nn.AvgPool2d(7, stride=1) self.fc = nn.Linear(512 * block.expansion, num_classes) for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, (2. 
/ n)**.5) elif isinstance(m, nn.BatchNorm2d): m.weight.data.fill_(1) m.bias.data.zero_() def _make_layer(self, block, planes, blocks, stride=1): downsample = None if stride != 1 or self.inplanes != planes * block.expansion: downsample = nn.Sequential( nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(planes * block.expansion), ) layers = [] layers.append(block(self.inplanes, planes, stride, downsample)) self.inplanes = planes * block.expansion for i in range(1, blocks): layers.append(block(self.inplanes, planes)) return nn.Sequential(*layers) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) # because MNIST is already 1x1 here: # disable avg pooling #x = self.avgpool(x) x = x.view(x.size(0), -1) logits = self.fc(x) probas = F.softmax(logits, dim=1) return logits, x def resnet18(num_classes): """Constructs a ResNet-18 model.""" model = ResNet(block=BasicBlock, layers=[2, 2, 2, 2], num_classes=NUM_CLASSES, grayscale=GRAYSCALE) return model class SimpleConvNet(nn.Module): def __init__(self, num_classes=10, kernel_size=7, feature_pos='post'): super(SimpleConvNet, self).__init__() padding = kernel_size // 2 layers = [ nn.Conv2d(3, 16, kernel_size=kernel_size, padding=padding), nn.BatchNorm2d(16), nn.ReLU(inplace=True), nn.Conv2d(16, 32, kernel_size=kernel_size, padding=padding), nn.BatchNorm2d(32), nn.ReLU(inplace=True), nn.Conv2d(32, 64, kernel_size=kernel_size, padding=padding), nn.BatchNorm2d(64), nn.ReLU(inplace=True), nn.Conv2d(64, 128, kernel_size=kernel_size, padding=padding), nn.BatchNorm2d(128), nn.ReLU(inplace=True), ] self.extracter = nn.Sequential(*layers) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.fc = nn.Linear(128, 10) for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif isinstance(m, 
(nn.BatchNorm2d, nn.GroupNorm)): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) if feature_pos not in ['pre', 'post', 'logits']: raise ValueError(feature_pos) self.feature_pos = feature_pos def forward(self, x, logits_only=False): pre_gap_feats = self.extracter(x) post_gap_feats = self.avgpool(pre_gap_feats) post_gap_feats = torch.flatten(post_gap_feats, 1) logits = self.fc(post_gap_feats) if logits_only: return logits elif self.feature_pos == 'pre': feats = pre_gap_feats elif self.feature_pos == 'post': feats = post_gap_feats else: feats = logits return logits, feats torch.manual_seed(100) model = resnet18(NUM_CLASSES) model.to(DEVICE) # model = SimpleConvNet(kernel_size=7, feature_pos='post').to(DEVICE) model_c = SimpleConvNet(kernel_size=1, feature_pos='post').to(DEVICE) model_r = resnet18(NUM_CLASSES).to(DEVICE) # model_c = resnet18(NUM_CLASSES) # model_c.to(DEVICE) # optimizer = torch.optim.Adam([{'params': model.parameters()}, {'params': model_c.parameters()}], lr=LEARNING_RATE) optimizer = torch.optim.Adam([{'params': model.parameters()}], lr=LEARNING_RATE) optimizer_c = torch.optim.Adam(model_c.parameters(), lr=LEARNING_RATE) optimizer_r = torch.optim.Adam(model_r.parameters(), lr=LEARNING_RATE) def compute_accuracy(model, data_loader, device): correct_pred, num_examples = 0, 0 for i, (features, targets, bias, _) in enumerate(data_loader): features = features.to(device) targets = torch.argmax(targets, 1).to(device) bias = bias.to(device) logits, probas = model(features) # logits, probas = model(bias) _, predicted_labels = torch.max(logits, 1) num_examples += targets.size(0) correct_pred += (predicted_labels == targets).sum() return correct_pred.float()/num_examples * 100 import math ####Training####### celoss = nn.CrossEntropyLoss(reduction='none') start_time = time.time() base_ratio = [] gge_ratio = [] gge_gradient = [] base_gradient = [] gge_hard_loss = [] base_hard_loss = [] for epoch in range(NUM_EPOCHS): weight = math.sin(math.pi/2 * 
(epoch+10) / (NUM_EPOCHS+10)) model.train() model_c.train() base_cost_mask_sum = 0 base_cost_sum = 0 gge_cost_mask_sum = 0 gge_cost_sum = 0 gge_gradient_sum = 0 base_gradient_sum = 0 for batch_idx, (features, targets, biases, bias_label) in enumerate(cmnist_train): features = features.to(DEVICE).requires_grad_() targets = targets.to(DEVICE) biases = biases.to(DEVICE).requires_grad_() bias_label = bias_label.to(DEVICE) mask = 1. - (torch.argmax(targets, 1) == bias_label).byte() ## FORWARD AND BACK PROP bias_pred, feat_b = model_c(features) # cost_bias = F.binary_cross_entropy_with_logits(bias_pred, targets) cost_bias = -(targets * bias_pred.log_softmax(-1)).mean() optimizer_c.zero_grad() cost_bias.backward() optimizer_c.step() logits, feats = model(features) # cost = -(targets * logits.log_softmax(1)).mean() cost = -(targets * logits.log_softmax(1)).mean(-1) + weight * (targets * bias_pred.detach().softmax(1) * logits.log_softmax(1)).mean(-1) # cost = -(y_gradient * logits.log_softmax(1)).mean(-1) optimizer.zero_grad() cost.mean().backward() optimizer.step() logits_r, feat_r = model_r(features) cost_r = -(targets * logits_r.log_softmax(-1)).mean(-1) base_cost_mask_sum += (cost_r * mask).float().sum(0).item() base_cost_sum += cost_r.float().sum(0).item() base_grad=torch.autograd.grad(cost_r.mean(), feat_r, create_graph=True)[0].float().sum(0).detach().cpu().numpy() base_gradient_sum += base_grad optimizer_r.zero_grad() cost_r.mean().backward() optimizer_r.step() ### LOGGING if not batch_idx % 50: print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f' %(epoch+1, NUM_EPOCHS, batch_idx, len(cmnist_train), cost.mean())) # if not (batch_idx+1) % 118: # print(gge_cost_mask_sum/gge_cost_sum) # gge_ratio.append(gge_cost_mask_sum/gge_cost_sum) # base_ratio.append(base_cost_mask_sum/base_cost_sum) # base_gradient.append(base_gradient_sum) # gge_gradient.append(gge_gradient_sum) # gge_hard_loss.append(gge_cost_mask_sum / 124) # base_hard_loss.append(base_cost_mask_sum / 124) 
# base_cost_mask_sum = 0 # base_cost_sum = 0 # gge_cost_mask_sum = 0 # gge_cost_sum = 0 # gge_gradient_sum = 0 # base_gradient_sum = 0 model.eval() with torch.set_grad_enabled(False): # save memory during inference print('Epoch: %03d/%03d | Train: %.3f%%' % ( epoch+1, NUM_EPOCHS, compute_accuracy(model, cmnist_val_unbias, device=DEVICE))) print('Time elapsed: %.2f min' % ((time.time() - start_time)/60)) ####Test#### with torch.set_grad_enabled(False): # save memory during inference print('Test accuracy: %.2f%%' % (compute_accuracy(model, cmnist_val_unbias, device=DEVICE))) ####Test#### with torch.set_grad_enabled(False): # save memory during inference print('Test accuracy: %.2f%%' % (compute_accuracy(model, cmnist_val_origin, device=DEVICE))) ####Test#### with torch.set_grad_enabled(False): # save memory during inference print('Test accuracy: %.2f%%' % (compute_accuracy(model, cmnist_val_bias, device=DEVICE))) with torch.set_grad_enabled(False): # save memory during inference print('Test accuracy: %.2f%%' % (compute_accuracy(model_r, cmnist_val_origin, device=DEVICE))) for batch_idx, (features, targets) in enumerate(test_loader): features = features targets = targets break nhw_img = np.transpose(features[4].expand(3,28,28), axes=(1, 2, 0)) plt.imshow(nhw_img); ```
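The index bookkeeping in `_update_bias_indices` (keep a `rho` fraction of each class's samples for the class colour, then spread the remainder over the confusing colours) can be sketched in plain Python. This is a simplified illustration, not the tensor code above:

```
def split_biased_indices(indices, rho, n_confusing_labels=9):
    """Split sample indices into a correlated part (coloured with the class
    colour) and equally sized decorrelated chunks (one per confusing colour)."""
    n_correlated = int(len(indices) * rho)
    correlated = indices[:n_correlated]
    rest = indices[n_correlated:]
    # ceil division, matching np.ceil((n - n_correlated) / n_confusing_labels)
    chunk = -(-len(rest) // n_confusing_labels)
    decorrelated = [rest[i:i + chunk] for i in range(0, len(rest), chunk)] if chunk else []
    return correlated, decorrelated

correlated, decorrelated = split_biased_indices(list(range(100)), rho=0.9)
print(len(correlated), [len(c) for c in decorrelated])  # 90 [2, 2, 2, 2, 2]
```

With `rho=1.0` the decorrelated part is empty, which is the perfectly biased setting described in the docstring.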
# Introduction to data in NCEDC

1. seismic network data
2. earthquake catalog
3. earthquake focal mechanism catalog
4. earthquake phase/polarity data
5. earthquake waveform data

## Prepare modules, file directory

```
import pandas as pd
import urllib3

# urls for the different datasets
url_station = "https://ncedc.org/ftp/pub/doc/ncsn/ncsn.stations"  # station information
url_reversed = "https://ncedc.org/ftp/pub/doc/ncsn/ncsn.reversed"  # polarity flip information
url_channel = "https://ncedc.org/ftp/pub/doc/ncsn/ncsn.channel.summary.doy"  # station channel information
url_catalog = "https://ncedc.org/ftp/pub/catalogs/NCSS-catalogs/"
```

## 1. Load seismic network data

### 1.1 Load station information

```
# load data from url
http = urllib3.PoolManager()
response = http.request('GET', url_station)
sta_all = response.data.decode('utf-8').split('\n')
sta_all[0], sta_all[-1]

# Station file data format (note that the index starts from 1)
#   1-5    5  Station site code (within its own network)
#   6-7    2  Unique station net code (assigned by IRIS)
#   9-11   3  USGS (not SEED) component (channel) code.
#             These 3 fields comprise the "unique" station code (1994-).
#             See columns 119-122 for SEED
#  13-15   3  RATN Station name code
#  16      1  Network code ) Rationalized station code
#  18      1  C Component / (1986-94)
#  21-24   4  4LET "New" 4-letter code in use 1977-86
#  26-29   4  OLD "Old" or standard code (pre-1977 for USGS)
#  31-32   2  LAT North latitude, degrees
#  34-40   7  North latitude, minutes (F7.4)
#  42-44   3  LON West longitude, degrees
#  46-52   7  West longitude, minutes (F7.4)
#  55-58   4  ELEV Elevation, meters
#  60      1  R Region code
#  63-84  22  Full geographic name
#  85      1  Colon (:) if non-USGS station recorded in Menlo Park.
#             Semi-colon (;) if non-USGS station formerly recorded in MP
#  86-89   4  ORG Network owner/operator code
#  90-97   8  OPERATN Start date of operation of instrument (YYYYMMDD)
# 100-106  8  End date of operation of instrument (YYYYMMDD)
# 110-116  8  DATAUSE Start date of USGS-MP usage of data (YYYYMMDD)
# 119-125  8  End date of USGS-MP usage of data (YYYYMMDD)
# 127-130  4  SEED SEED channel designation
# 132-135  4  ALIAS Alias, erroneous or unofficial name.
# 137-140  4  Additional alias, erroneous or unofficial name.

# obtain all network and station names
net_all = {}
staname_all = {}
for ista in sta_all[:-1]:
    istaname = ista[:5]
    inet = ista[5:7]
    if inet in net_all:
        net_all[inet] = net_all[inet] + 1
    else:
        net_all[inet] = 1
    if istaname in staname_all:
        staname_all[istaname] = staname_all[istaname] + 1
        print(istaname)
    else:
        staname_all[istaname] = 1

print('Number of networks: ' + str(len(net_all)))
print('Number of stations: ' + str(len(staname_all)))
net_all
```

### 1.2 Load channel information

```
# load data from url
http = urllib3.PoolManager()
response = http.request('GET', url_channel)
chan_all = response.data.decode('utf-8').split('\n')
chan_all = chan_all[2:]
chan_all[0], chan_all[-1]

# Channel file data format (note that the index starts from 1)
#   1-5    5  Station site code (within its own network)
#   7-8    2  Unique station net code (assigned by IRIS)
#  10-12   3  USGS (not SEED) component (channel) code.
#             These 3 fields comprise the "unique" station code (1994-).
#             See columns 119-122 for SEED
#  15-16   2  Not sure
#  18-28  11  Sampling rate
#  30-46  17  DATAUSE Start date of USGS-MP usage of data (YYYY,JUL,HH:MM:SS)
#  50-66  17  End date of USGS-MP usage of data (YYYY,JUL,HH:MM:SS)
#  72-79   8  LAT North latitude, degrees
#  81-90  10  LON West longitude, degrees
#  92-97   6  ELEV Elevation, meters
# 101-104  4  Dip azimuth (all 0.0)

net_all_fchan = {}
staname_all_fchan = {}
for ichan in chan_all[:-1]:
    istaname = ichan[:5]
    inet = ichan[6:8]
    if inet in net_all_fchan:
        net_all_fchan[inet] = net_all_fchan[inet] + 1
    else:
        net_all_fchan[inet] = 1
    if istaname in staname_all_fchan:
        staname_all_fchan[istaname] = staname_all_fchan[istaname] + 1
        print(istaname)
    else:
        staname_all_fchan[istaname] = 1

net_all_fchan, len(staname_all_fchan)
```

**Notes**

<p style="color:red;">Station file contains more stations and info than the channel file</p>

### 1.3 Load station reversed information

```
# load data from url
http = urllib3.PoolManager()
response = http.request('GET', url_reversed)
rev_all = response.data.decode('utf-8').split('\n')
rev_all[0], rev_all[-1]
rev_all[0][9:14], rev_all[0][17:19], rev_all[0][21:29], rev_all[0][30:38]

# Reversed file data format (note that the index starts from 1)
#  10-14   5  Station site code (within its own network)
#  18-19   2  Unique station net code (assigned by IRIS)
#  22-29   8  Start date of reversal (YYYYMMDD)
#  31-38   8  End date of reversal (YYYYMMDD)
```

## 2.
Load Earthquake info

```
http = urllib3.PoolManager()
ev_all = []
for year in range(1966, 2022):
    response = http.request('GET', url_catalog + str(year) + '.ehpcsv')
    iyear_ev_all = response.data.decode('utf-8').split('\n')
    print(str(year) + ': ' + iyear_ev_all[0])  # check the format of each file
    iyear_ev_all = iyear_ev_all[1:-1]
    ev_all = ev_all + iyear_ev_all

# Earthquake catalog format
# (fields are separated by ','; for each line, use .split(',') to get all items;
#  note that the index here starts from 1)
#  1 event occurrence time (YYYY-MM-DDTHH:MM:SS.SSSZ)
#  2 latitude (%.5f degree)
#  3 longitude (%.5f degree)
#  4 depth (%.3f km)
#  5 mag (%.2f unit, from maximum S amplitude from NCSN stations)
#  6 magType (d=coda duration magnitude, x=S-wave maximum amplitude magnitude, l=local magnitude,
#    z=low gain coda duration magnitude, p=initial P-wave amplitude magnitude,
#    g=low-gain initial P-wave amplitude magnitude, a=S-wave maximum amplitude magnitude for which data are lost)
#  7 nst (number of P&S times with final weights greater than 0.1)
#  8 gap (maximum azimuthal gap, degrees)
#  9 dmin (distance to the nearest station (km))
# 10 rms (RMS travel time residual)
# 11 net (network: NC SC?)
# 12 id (event ID)
# 13 last time updated (YYYY-MM-DDTHH:MM:SS.SSSZ)
# 14 place
# 15 type (eq=earthquake, qb=quarry blast, r=seismic reflection explosion, n=nts blast,
#    f=felt, m=multiple event, b=blast, t=tremor associated, l=long period)
# 16 horizontalError (km)
# 17 depthError (km)
# 18 magError (unit, usually 0.00)
# 19 magNst (usually 0)
# 20 status (usually F)
# 21 locationSource (usually NC)
# 22 magSource (usually NC)

# count events by type (field 15)
evtype = {}
for iev in ev_all:
    ievtype = iev.split(',')[14]
    evtype[ievtype] = evtype.get(ievtype, 0) + 1
```
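Per the catalog format notes, each line is comma-separated with the event type in field 15 (index 14 after `split`). A minimal sketch of pulling a few fields out of one line (the sample line below is fabricated for illustration; real `place` fields may themselves contain commas):

```
def parse_catalog_line(line):
    """Extract a few catalog fields; indices follow the 1-based format notes minus one."""
    fields = line.split(',')
    return {
        "time": fields[0],              # field 1: event occurrence time
        "latitude": float(fields[1]),   # field 2
        "longitude": float(fields[2]),  # field 3
        "depth_km": float(fields[3]),   # field 4
        "mag": float(fields[4]),        # field 5
        "type": fields[14],             # field 15: event type (eq, qb, ...)
    }

fake_line = ("2020-01-01T00:00:00.000Z,38.80000,-122.80000,2.000,1.50,d,10,60,1.0,"
             "0.05,NC,12345678,2020-01-02T00:00:00.000Z,Northern California,eq,"
             "0.3,0.5,0.00,0,F,NC,NC")
print(parse_catalog_line(fake_line)["type"])  # eq
```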
github_jupyter
# Quick Start This tutorial shows how to create a scikit-criteria `Data` structure, and how to feed it into different multicriteria decision algorithms. ## Conceptual Overview Multicriteria data are complex, mostly because you need at least two completely disconnected vectors to describe your problem: an alternative matrix (`mtx`) and a vector that indicates the optimal sense of every criterion (`criteria`); you may also want to add weights to your criteria. The `skcriteria.Data` object needs at least the first two to be created, and also accepts the weights, the names of the criteria and the names of the alternatives as optional parameters. ## Your First Data object First we need to import the `Data` structure and the `MIN`, `MAX` constants from scikit-criteria: ``` %matplotlib inline from skcriteria import Data, MIN, MAX ``` Then we need to create the `mtx` and `criteria` vectors. The `mtx` must be a **2D array-like** where every column is a criterion, and every row is an alternative ``` # 2 alternatives by 3 criteria mtx = [ [1, 2, 3], # alternative 1 [4, 5, 6], # alternative 2 ] mtx ``` The `criteria` vector must be a **1D array-like** with as many elements as there are columns in the alternative matrix (`mtx`). Every component of the `criteria` vector represents the optimal sense of one criterion. ``` # let's say the first two criteria are # for maximization and the last one for minimization criteria = [MAX, MAX, MIN] criteria ``` As you can see, the `MAX` and `MIN` constants are only aliases for the numbers `1` (maximization) and `-1` (minimization); using the constants makes the code more readable. You can also use as aliases of minimization and maximization the built-in functions `min` and `max`, the numpy functions `np.min`, `np.max`, `np.amin`, `np.amax`, `np.nanmin`, `np.nanmax` and the strings `min`, `minimization`, `max` and `maximization`. Now we can combine these two vectors in our scikit-criteria data. 
``` # we use the built-in functions as aliases data = Data(mtx, [min, max, min]) data ``` As you can see, the output of the `Data` structure is much friendlier than the plain python lists. To change the generic names of the alternatives (A0 and A1) and the criteria (C0, C1 and C2), let's assume that our Data is about cars (*car 0* and *car 1*) and their evaluation criteria are *autonomy* (`MAX`), *comfort* (`MAX`) and *price* (`MIN`). To feed this information to our `Data` structure we have the params `anames`, which accepts the names of the alternatives (must have as many elements as `mtx` has rows), and `cnames`, the criteria names (must have as many elements as `mtx` has columns) ``` data = Data(mtx, criteria, anames=["car 0", "car 1"], cnames=["autonomy", "comfort", "price"]) data ``` As a final step, let's assume we know that in our case the importance of autonomy is *50%*, comfort only *5%* and price *45%*. The param to feed this to the structure is called `weights` and must be a vector with one element per criterion of your alternative matrix (one per column) ``` data = Data(mtx, criteria, weights=[.5, .05, .45], anames=["car 0", "car 1"], cnames=["autonomy", "comfort", "price"]) data ``` ## Manipulating the Data The data object is immutable; if you want to modify it you need to create a new one. All the numerical data (mtx, criteria, and weights) are stored as [numpy arrays](https://docs.scipy.org/doc/numpy/user/basics.creation.html), and the alternative and criteria names as python tuples. 
You can access the different parts of your data simply by typing `data.<your-parameter-name>`, for example: ``` data.mtx data.criteria data.weights data.anames, data.cnames ``` If you want (for example) to change the names of the cars from `car 0` and `car 1` to `VW` and `Ford`, you must copy from your original Data ``` data = Data(data.mtx, data.criteria, weights=data.weights, anames=["VW", "Ford"], cnames=data.cnames) data ``` <div class="alert alert-info"> **Note:** A more flexible data manipulation API will be released in future versions. </div> ## Plotting The Data structure supports some basic plotting routines. Currently 5 types of plots are supported: - [Radar Plot](http://www.datavizcatalogue.com/methods/radar_chart.html) (`radar`). - [Histogram](http://www.datavizcatalogue.com/methods/histogram.html) (`hist`). - [Violin Plot](http://www.datavizcatalogue.com/methods/violin_plot.html) (`violin`). - [Box Plot](http://www.datavizcatalogue.com/methods/box_plot.html) (`box`). - [Scatter Matrix](http://www.datavizcatalogue.com/methods/scatterplot.html) (`scatter`). By default scikit-criteria uses the radar plot to visualize all the data. Take into account that by default the radar plot converts all the minimization criteria to maximization and pushes all the values to be greater than 1 (all these options can be overridden). ``` data.plot(); ``` You can access the different plots by passing the name of the plot as the first parameter ``` data.plot("box"); ``` or by using the name as a method call on the `plot` attribute ``` data.plot.violin(); ``` Every plot has its own set of parameters, but at least every one can receive: - `ax`: The plot axis. - `cmap`: The color map ([More info](https://matplotlib.org/users/colormaps.html)). - `mnorm`: The normalization method for the alternative matrix as string (Default: `"none"`). - `wnorm`: The normalization method for the criteria array as string (Default: `"none"`). 
- `weighted`: If you want to weight the criteria (Default: `True`). - `show_criteria`: Show or not the criteria in the plot (Default: `True` in all except radar). - `min2max`: Convert the minimization criteria into maximization ones (Default: `False` in all except radar). - `push_negatives`: If a criterion has values lesser than 0, add the minimum value to all the criteria (Default: `False` in all except radar). - `addepsto0`: If a criterion has values equal to 0, add an $\epsilon$ value to all the criteria (Default: `False` in all except radar). Let's change the colors of the radar plot and hide the criteria optimization sense: ``` data.plot.radar(cmap="inferno", show_criteria=False); ``` ## Using this data to feed some MCDA methods Let's rank our dummy data by [Weighted Sum Model](https://en.wikipedia.org/wiki/Weighted_sum_model), [Weighted Product Model](https://en.wikipedia.org/wiki/Weighted_product_model) and [TOPSIS](https://en.wikipedia.org/wiki/TOPSIS) ``` from skcriteria.madm import closeness, simple ``` First you need to create the decision maker. Most methods accept hyper-parameters (parameters to configure the method), such as: 1. the method of normalization of the alternative matrix - in Weighted Sum and Weighted Product we use the divide-by-`sum` normalization - in `Topsis` we can also use the `vector` normalization 2. the method to normalize the weight array (normally `sum`). More complex methods have more. 
### Weighted Sum Model: ``` # first create the decision maker # (with the default hyper-parameters) dm = simple.WeightedSum() dm # Now let's decide the ranking dec = dm.decide(data) dec ``` The result says that the **VW** is better than the **FORD**; let's do the math: <div class="alert alert-info"> **Note:** The last criterion is for minimization, and because the WeightedSumModel only accepts maximization criteria by default, scikit-criteria inverts all the values to convert the criterion to maximization </div> ``` print("VW:", 0.5 * 1/5. + 0.05 * 2/7. + 0.45 * 1 / (3/9.)) print("FORD:", 0.5 * 4/5. + 0.05 * 5/7. + 0.45 * 1 / (6/9.)) ``` If you want to access these points, the `Decision` object stores all the particular information of every method in an attribute called `e_` ``` print(dec.e_) dec.e_.points ``` You can also access the type of the solution ``` print("Generate a ranking of alternatives?", dec.alpha_solution_) print("Generate a kernel of best alternatives?", dec.beta_solution_) print("Choose the best alternative?", dec.gamma_solution_) ``` The rank as numpy array (if this decision is a $\alpha$-solution / alpha solution) ``` dec.rank_ ``` The index of the row of the best alternative (if this decision is a $\gamma$-solution / gamma solution) ``` dec.best_alternative_, data.anames[dec.best_alternative_] ``` And the kernel of the non-dominated alternatives (if this decision is a $\beta$-solution / beta solution) ``` # this returns None because this # decision is not a beta-solution print(dec.kernel_) ``` ### Weighted Product Model ``` dm = simple.WeightedProduct() dm dec = dm.decide(data) dec ``` As before let's do the math (remember the weights are now exponents, and the terms are multiplied rather than added) ``` print("VW:", ((1/5.) ** 0.5) * ((2/7.) ** 0.05) * ((1 / (3/9.)) ** 0.45)) print("FORD:", ((4/5.) ** 0.5) * ((5/7.) ** 0.05) * ((1 / (6/9.)) ** 0.45)) ``` As we expected, the **Ford** is a little better than the **VW**. 
Now let's check the `e_` object ``` print(dec.e_) dec.e_.points ``` As you can note the points are different; this is because internally, to avoid [underflows](https://en.wikipedia.org/wiki/Arithmetic_underflow), Scikit-Criteria uses sums of logarithms instead of products. So let's check ``` import numpy as np print("VW:", 0.5 * np.log10(1/5.) + 0.05 * np.log10(2/7.) + 0.45 * np.log10(1 / (3/9.))) print("FORD:", 0.5 * np.log10(4/5.) + 0.05 * np.log10(5/7.) + 0.45 * np.log10(1 / (6/9.))) ``` ### TOPSIS ``` dm = closeness.TOPSIS() dm dec = dm.decide(data) dec ``` TOPSIS adds more information to the decision object. ``` print(dec.e_) print("Ideal:", dec.e_.ideal) print("Anti-Ideal:", dec.e_.anti_ideal) print("Closeness:", dec.e_.closeness) ``` Here the `ideal` and `anti_ideal` are the normalized synthetic best and worst alternatives created by TOPSIS, and the `closeness` measures how far from the *anti-ideal* and how close to the *ideal* the real alternatives are. Finally we can change the normalization method of the alternative matrix to `sum` (divide every value by the sum of its criterion column) and check the result: ``` dm = closeness.TOPSIS(mnorm="sum") dm dm.decide(data) ``` The ranking has changed, so we can compare the two normalizations by plotting ``` import matplotlib.pyplot as plt f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.set_title("Sum Norm") data.plot.violin(mnorm="sum", ax=ax1); ax2.set_title("Vector Norm") data.plot.violin(mnorm="vector", ax=ax2); f.set_figwidth(15) import datetime as dt import skcriteria print("Scikit-Criteria version:", skcriteria.VERSION) print("Running datetime:", dt.datetime.now()) ```
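For reference, the closeness computation TOPSIS performs can be sketched in plain numpy on the same toy matrix. This is a minimal sketch with vector normalization, not scikit-criteria's actual implementation:

```python
import numpy as np

# Toy data from the tutorial: rows = alternatives (VW, Ford), columns = criteria.
mtx = np.array([[1., 2., 3.],   # VW
                [4., 5., 6.]])  # Ford
sense = np.array([1, 1, -1])    # +1 = maximize, -1 = minimize
weights = np.array([.5, .05, .45])

norm = mtx / np.sqrt((mtx ** 2).sum(axis=0))          # vector norm per column
wnorm = norm * weights                                # weighted normalized matrix
ideal = np.where(sense == 1, wnorm.max(axis=0), wnorm.min(axis=0))
anti = np.where(sense == 1, wnorm.min(axis=0), wnorm.max(axis=0))
d_pos = np.sqrt(((wnorm - ideal) ** 2).sum(axis=1))   # distance to ideal
d_neg = np.sqrt(((wnorm - anti) ** 2).sum(axis=1))    # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)
print(closeness)  # higher is better
```

With this data the second row (Ford) gets the higher closeness, so it ranks first under vector normalization.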
<h1 style="text-align: center;">Data Mining Project 1: Frequent Pattern & Association Rule</h1> <p style="text-align:center;"> 呂伯駿<br> Q56074085<br> NetDB<br> National Cheng Kung University<br> pclu@netdb.csie.ncku.edu.tw </p> ## 1. Introduction Frequent pattern and association rule mining are important topics in data mining. This report implements two classic algorithms, Apriori [1] and FP-Growth [2], and compares their efficiency on IBM synthetic datasets and the Kaggle Random Shopping cart dataset. --- ## 2. Environment This section describes the experimental environment and the datasets used. ### 2.1 System Preferences The experimental environment is as follows: - <b>Operating System</b>: macOS High Sierra (10.13.6) - <b>CPU</b>: 1.3 GHz Intel Core i5 - <b>Memory</b>: 8 GB 1600 MHz DDR3 - <b>Programming Language</b>: Python 3.6.2 Since none of the datasets exceeds 4 MB, this amount of memory is sufficient for the experiments. ### 2.2 Dataset The experiments use the following datasets, from the IBM Quest Synthetic Data Generator and Kaggle. #### a. IBM Four datasets were synthesized with the IBM Quest Synthetic Data Generator in Lit mode, with the parameters listed below: | ntrans | tlength | nitems | |----------|:-------------:|------:| | 1 | 5 | 5 | | 10 | 5 | 5 | | 10 | 5 | 30 | | 20 | 5 | 30 | - <b>ntrans</b>: number of transactions in _000 - <b>tlength</b>: avg_items per transaction - <b>nitems</b>: number of different items in _000 #### b. Kaggle: Random Shopping cart [Random Shopping cart](https://www.kaggle.com/fanatiks/shopping-cart/home) contains randomly ordered shopping-cart data, which fits this experiment's goal of finding frequent patterns. <br> The dataset has 1499 records and 38 distinct items in total; since an item may appear more than once within a record, duplicates within each record still have to be removed in a later step. Sample records: 1. yogurt, pork, sandwich bags, lunch meat, all- purpose, flour, soda, butter, vegetables, beef, aluminum foil, all- purpose, dinner rolls, shampoo, all- purpose, mixes, soap, laundry detergent, ice cream, dinner rolls, 2. toilet paper, shampoo, hand soap, waffles, vegetables, cheeses, mixes, milk, sandwich bags, laundry detergent, dishwashing liquid/detergent, waffles, individual meals, hand soap, vegetables, individual meals, yogurt, cereals, shampoo, vegetables, aluminum foil, tortillas, mixes, 3. ... --- ## 3. Implementation ### 3.1 pre-processing #### a. IBM The data generated by IBM has one (id, item) pair per line. For easier processing, items with the same id are merged onto a single line, separated by ',', e.g. (1: 255,266,267...) 
#### b. Kaggle Since each record of Random Shopping cart may contain repeated items, they have to be filtered out before running the algorithms.<br> The filtered data is at: /data/Kaggle/cart_dataset_v2.txt <br> The raw data is at: /data/Kaggle/cart_dataset_v1.txt ### 3.2 Apriori <b>Idea:</b><br> Two steps: iteratively generate candidate sets, then prune the sets that do not reach the minimum support. <b>Pros:</b><br> Easy to implement; suitable for small datasets. <b>Cons:</b> Generates a large number of candidates and repeatedly scans the dataset, which becomes a performance bottleneck. ### 3.3 FP-growth <b>Idea:</b><br> Two steps: build an FP-tree, then mine frequent patterns from it. <b>Pros:</b><br> No candidate generation; the whole dataset is scanned only twice, roughly an order of magnitude faster than Apriori. <b>Cons:</b><br> An additional FP-tree has to be built to store the occurrence counts of the patterns. --- ## 4. Analysis The experiments use the datasets described in 2.2 and compare the running time of the two algorithms while varying minsup and the data size. <br> The FP-Growth results are averaged over ten runs; for time reasons, the Apriori results are averaged over two runs. ### a. IBM ``` %matplotlib inline import pandas as pd import matplotlib.pyplot as plt ibm = pd.read_csv('./results/ibm.csv') kaggle = pd.read_csv('./results/kaggle.csv') ibm.head() ``` The columns above are:<br> <b>data</b>: data parameter <br> <b>minsup</b>: minimum support <br> <b>fp_num</b>: number of frequent patterns <br> <b>alforithm</b>: chosen algorithm <br> <b>time</b>: running time <br> ``` # t1_l5_n5 plt.figure(figsize=(15, 15)) plt.subplot(421) x = [0.5, 0.6, 0.7, 0.8, 0.9] y_1 = ibm['time'][0:5] y_2 = ibm['time'][18:23] plt.plot(x, y_1, 'o-') plt.plot(x, y_2, 'o-') plt.xlabel('minsup (%)') plt.ylabel('time (sec)') plt.legend(["FP Growth", "Apriori"], loc=1); plt.title('Figure 1: t1_l5_n5'); plt.subplot(422) x = [0.5, 0.6, 0.7, 0.8, 0.9] y_1 = ibm['fp_num'][0:5] plt.plot(x, y_1, 'o-') plt.xlabel('minsup (%)') plt.ylabel('number of frequent pattern') plt.legend(["number of frequent pattern"], loc=1); plt.title('Figure 2: t1_l5_n5, fp number'); # t10_l5_n5 plt.subplot(423) x = [0.5, 0.6, 0.7, 0.8, 0.9] y_1 = ibm['time'][5:10] y_2 = ibm['time'][23:28] plt.plot(x, y_1, 'o-') plt.plot(x, y_2, 'o-') plt.xlabel('minsup (%)') plt.ylabel('time (sec)') plt.legend(["FP Growth", "Apriori"], loc=1); plt.title('Figure 3: t10_l5_n5'); plt.subplot(424) x = [0.5, 0.6, 0.7, 0.8, 0.9] y_1 = ibm['fp_num'][5:10] plt.plot(x, y_1, 'o-') 
plt.xlabel('minsup (%)') plt.ylabel('number of frequent pattern') plt.legend(["number of frequent pattern"], loc=1); plt.title('Figure 4: t10_l5_n5, fp number'); # t10_l5_n30 plt.subplot(425) x = [0.2, 0.3, 0.4, 0.5] y_1 = ibm['time'][10:14] y_2 = ibm['time'][28:32] plt.plot(x, y_1, 'o-') plt.plot(x, y_2, 'o-') plt.xlabel('minsup (%)') plt.ylabel('time (sec)') plt.legend(["FP Growth", "Apriori"], loc=1); plt.title('Figure 5: t10_l5_n30'); plt.subplot(426) x = [0.2, 0.3, 0.4, 0.5] y_1 = ibm['fp_num'][10:14] plt.plot(x, y_1, 'o-') plt.xlabel('minsup (%)') plt.ylabel('number of frequent pattern') plt.legend(["number of frequent pattern"], loc=1); plt.title('Figure 6: t10_l5_n30, fp number'); # t20_l5_n30 plt.subplot(427) x = [0.2, 0.3, 0.4, 0.5] y_1 = ibm['time'][14:18] y_2 = ibm['time'][32:36] plt.plot(x, y_1, 'o-') plt.plot(x, y_2, 'o-') plt.xlabel('minsup (%)') plt.ylabel('time (sec)') plt.legend(["FP Growth", "Apriori"], loc=1); plt.title('Figure 7: t20_l5_n30'); plt.subplot(428) x = [0.2, 0.3, 0.4, 0.5] y_1 = ibm['fp_num'][14:18] plt.plot(x, y_1, 'o-') plt.xlabel('minsup (%)') plt.ylabel('number of frequent pattern') plt.legend(["number of frequent pattern"], loc=1); plt.title('Figure 8: t20_l5_n30, fp number'); plt.tight_layout() plt.show() ``` On the IBM synthetic datasets, the experiments use four configurations with different item counts and data sizes; the results are shown in Figures 1 to 8. Because different datasets contain different numbers of frequent patterns, two minsup ranges are used: 0.5%–0.9% and 0.2%–0.5%. The experiments show that FP-growth is far more efficient than Apriori. As minsup increases the gap narrows, because a higher minsup means fewer frequent patterns, and therefore fewer candidates for Apriori to join and search. <br> <br> <b>Varying the number of item types</b><br> Comparing Figure 3 and Figure 5, increasing the number of item types (5000 to 30000) appears to slow Apriori down, but this is really because the latter dataset yields more frequent patterns and hence more candidates. <br> <b>Varying the data size</b><br> Figures 5 and 7 vary the number of transactions, from about 10000 records to 20000 records. Although Apriori repeatedly scans the whole dataset, at this scale its running time is still dominated by the number of frequent patterns: the more FPs, the longer it takes. FP-growth behaves differently: the difference looks small in the plots, but its time on 20000 records is slightly larger than on 10000 records. ``` x = [2, 4, 6, 8] y_1 = 
[0.014822602272033691,0.024219989776611328,0.04265165328979492,0.037372589111328125] y_2 = [1.559230923652649,3.5876978635787964,3.9564480781555176,5.6181875467300415] plt.plot(x, y_1, 'o-') plt.plot(x, y_2, 'o-') plt.xlabel('data size (10^3)') plt.ylabel('time (sec)') plt.legend(["FP Growth", "Apriori"], loc=2); plt.title('Figure 9: Scalability (minsup = 0.8%)'); ``` Figure 9 shows the result of varying the data size further. With minsup fixed at 0.8%, both running times grow with the data size, but FP-growth's increase is small, showing good scalability at larger data sizes. ### b. Kaggle ``` kaggle.head() plt.figure(figsize=(15, 10)) plt.subplot(121) x = [5, 6, 7, 8, 9, 10] y_1 = kaggle['time'][0:6] y_2 = kaggle['time'][6:12] plt.plot(x, y_1, 'o-') plt.plot(x, y_2, 'o-') plt.xlabel('minsup (%)') plt.ylabel('time (sec)') plt.legend(["FP Growth", "Apriori"], loc=1); plt.title('Figure 10: kaggle running time'); plt.subplot(122) x = [5, 6, 7, 8, 9, 10] y_1 = kaggle['fp_num'][0:6] plt.plot(x, y_1, 'o-') plt.xlabel('minsup (%)') plt.ylabel('number of frequent pattern') plt.legend(["number of frequent pattern"], loc=2); plt.title('Figure 11: kaggle fp number'); plt.show() ``` From Figures 10 and 11, Apriori is still much slower than FP-growth on the Kaggle dataset. Although there are only 1499 records, each transaction contains 15 items on average, so the algorithm spends a great deal of time on the join and prune steps; even with few frequent patterns, Apriori takes about 20 times as long as on the IBM dataset with 1000 records. ### c. Association rules Since the IBM data are synthetic, analyzing their association rules is of little interest; below are only the five most frequent rules of the Random Shopping Cart dataset. ``` rules = pd.read_csv('rules.csv') rules.head(10) ``` ## 5. Conclusion FP-Growth is significantly more efficient than Apriori because it reduces search time by building an FP-tree, whereas Apriori has to repeatedly scan the dataset to check the minsup condition. The FP-tree structure keeps its advantage on large datasets, but when the number of frequent patterns drops (i.e. minsup increases) the time gap shrinks, and the cost of building the FP-tree structure starts to show.<br> ## 6. References [1] R. Agrawal and R. Srikant: "Fast Algorithms for Mining Association Rules". In Proceedings of the 20th International Conference on Very Large Data Bases, VLDB ’94, pages 487–499, San Francisco, CA, USA, 1994. Morgan Kaufmann Publishers Inc.<br> [2] J. Han, J. Pei, and Y. Yin: “Mining frequent patterns without candidate generation”. In Proc. ACM-SIGMOD’2000, pp. 
1-12, Dallas, TX, May 2000.
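As an appendix to the description of Apriori in Section 3.2, the candidate-generation and pruning loop can be sketched in a few lines of Python. This is a toy illustration (it omits the subset-pruning step), not the implementation benchmarked above:

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Tiny Apriori sketch: returns {frozenset: support_count} for frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    candidates = {frozenset([i]) for t in transactions for i in t}  # 1-item candidates
    freq = {}
    while candidates:
        # count support of current candidates over the whole dataset
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        survivors = {c: n for c, n in counts.items() if n >= minsup}
        freq.update(survivors)
        # join step: build (k+1)-candidates from pairs of surviving k-sets
        prev = list(survivors)
        candidates = {a | b for a, b in combinations(prev, 2)
                      if len(a | b) == len(a) + 1}
    return freq

tx = [{'milk', 'bread'}, {'milk', 'bread', 'eggs'}, {'bread'}, {'milk', 'eggs'}]
freq = apriori(tx, minsup=2)
print(freq[frozenset({'milk', 'bread'})])  # 2
```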
# Dataset - US ``` import pandas as pd ``` ## Initialize ``` srcUS = "./time_series_covid19_confirmed_US.csv" dest = "./time_series_covid19_confirmed_US_transformed.csv" stateCoordinates = { "Wisconsin": (44.500000, -89.500000), "West Virginia": (39.000000, -80.500000), "Vermont": (44.000000, -72.699997), "Texas": (31.000000, -100.000000), "South Dakota": (44.500000, -100.000000), "Rhode Island": (41.700001, -71.500000), "Oregon": (44.000000, -120.500000), "New York": (43.000000, -75.000000), "New Hampshire": (44.000000, -71.500000), "Nebraska": (41.500000, -100.000000), "Kansas": (38.500000, -98.000000), "Mississippi": (33.000000, -90.000000), "Illinois": (40.000000, -89.000000), "Delaware": (39.000000, -75.500000), "Connecticut": (41.599998, -72.699997), "Arkansas": (34.799999, -92.199997), "Indiana": (40.273502, -86.126976), "Missouri": (38.573936, -92.603760), "Florida": (27.994402, -81.760254), "Nevada": (39.876019, -117.224121), "Maine": (45.367584, -68.972168), "Michigan": (44.182205, -84.506836), "Georgia": (33.247875, -83.441162), "Hawaii": (19.741755, -155.844437), "Alaska": (66.160507, -153.369141), "Tennessee": (35.860119, -86.660156), "Virginia": (37.926868, -78.024902), "New Jersey": (39.833851, -74.871826), "Kentucky": (37.839333, -84.270020), "North Dakota": (47.650589, -100.437012), "Minnesota": (46.392410, -94.636230), "Oklahoma": (36.084621, -96.921387), "Montana": (46.965260, -109.533691), "Washington": (47.751076, -120.740135), "Utah": (39.419220, -111.950684), "Colorado": (39.113014, -105.358887), "Ohio": (40.367474, -82.996216), "Alabama": (32.318230, -86.902298), "Iowa": (42.032974, -93.581543), "New Mexico": (34.307144, -106.018066), "South Carolina": (33.836082, -81.163727), "Pennsylvania": (41.203323, -77.194527), "Arizona": (34.048927, -111.093735), "Maryland": (39.045753, -76.641273), "Massachusetts": (42.407211, -71.382439), "California": (36.778259, -119.417931), "Idaho": (44.068203, -114.742043), "Wyoming": (43.075970, -107.290283), 
"North Carolina": (35.782169, -80.793457), "Louisiana": (30.391830, -92.329102), "Harris, Texas": (29.7752, -95.3103) } # Read data usDf = pd.read_csv(srcUS) usDf.head(20) ``` ## Data Manipulation ``` # Separate Harris from Texas harrisIndex = usDf[(usDf["Admin2"] == "Harris") & (usDf["Province_State"] == "Texas")].index usDf.at[harrisIndex, "Province_State"] = usDf.iloc[harrisIndex, :]["Admin2"] + ", " + usDf.iloc[harrisIndex, :]["Province_State"] # Drop unwanted columns droppedCols = [0, 1, 2, 3, 4, 5, 10] usDf = usDf.iloc[:, [col for col in range(len(usDf.columns)) if col not in droppedCols]] # Rename columns (inplace) usDf.rename(columns = { "Long_": "Long" }, inplace = True) # Group separately and combine # - 1. Group Lat and Long (aggregated by taking first) # - 2. Group Confirmed Cases (aggregated by taking sum) firstHalf = usDf.iloc[:, :4].groupby(["Province_State", "Country_Region"]).first().reset_index() secondHalf = usDf.drop(usDf.columns[[2, 3]], axis = 1).groupby(["Province_State", "Country_Region"]).sum().reset_index() usDf = pd.concat([firstHalf, secondHalf.iloc[:, 2:]], axis = 1) # Drop regions that are not in dictionary (stateCoordinates) usDf = usDf[usDf["Province_State"].isin(stateCoordinates)] # Update "Lat" and "Long" for index, row in usDf.iterrows(): if row["Province_State"] in stateCoordinates: usDf.at[index, "Lat"] = stateCoordinates[row["Province_State"]][0] usDf.at[index, "Long"] = stateCoordinates[row["Province_State"]][1] # Derive confirmed cases per day and attach back to the source dataframe locationsDf = usDf.iloc[:, :4] datesDf = usDf.iloc[:, 4:].diff(axis = 1) diffDf = pd.concat([locationsDf, datesDf], axis = 1) # Transform spread-out "date & confirmed cases" columns into "Date" and "Confirmed Cases" usDf = diffDf.melt( id_vars = ["Province_State", "Country_Region", "Lat", "Long"], var_name = "Date", value_name = "Confirmed Cases") usDf # Save transformed dataset usDf.to_csv(dest, index = False) ```
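The wide-to-long reshaping step above can be illustrated on a tiny made-up frame: `melt` turns the per-date columns into ("Date", "Confirmed Cases") rows, one per location and date.

```python
import pandas as pd

# Toy frame in the same wide layout as the dataset above (values are made up).
wide = pd.DataFrame({
    "Province_State": ["Texas", "Ohio"],
    "Country_Region": ["US", "US"],
    "Lat": [31.0, 40.4], "Long": [-100.0, -83.0],
    "1/22/20": [0, 1], "1/23/20": [2, 3],
})
long = wide.melt(
    id_vars=["Province_State", "Country_Region", "Lat", "Long"],
    var_name="Date", value_name="Confirmed Cases")
print(long.shape)  # (4, 6): 2 locations x 2 dates, 4 id columns + Date + value
```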
# Binary Image Denoising #### *Jiaolong Xu (GitHub ID: [Jiaolong](https://github.com/Jiaolong))* #### This notebook was written during GSoC 2014. Thanks Shell Hu and Thoralf Klein for taking time to help me on this project! This notebook illustrates how to use the shogun structured output learning framework for binary image denoising. The task is defined as a pairwise factor graph model with [Graph cuts](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGraphCut.html) inference, where model parameters are learned by SOSVM using a [SGD solver](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStochasticSOSVM.html). ## Introduction I recommend [1] for a nice introduction to structured learning and prediction in computer vision. One of the founding publications on the topic of learning structured models might be [4]. In the following, I will give an explicit example of structured output prediction for binary image denoising. Given a noisy black/white image $\textbf{x}$ of size $m \times n$, the task of denoising is to predict the original binary image $\textbf{y}$. We flatten the image into a long vector $\textbf{y} = [y_0, \dots, y_{m\times n}]$, where $y_i \in \{0, 1\}$ is the value for pixel $i$. In this work, we aim to learn a model from a bunch of noisy input images and their ground truth binary images, i.e., supervised learning. One may think about learning a binary classifier for each pixel. That has several drawbacks. First, we may not care about classifying every single pixel completely correctly, i.e., misclassifying a single pixel is not as bad as misclassifying a whole image. Second, we lose all context, e.g., pairwise pixels (one pixel and its neighbors). The structured prediction here is to predict an entire binary image $\textbf{y}$ or a grid graph of $m \times n$. 
Here, the output space $\mathcal{Y}$ is all possible binary images of size $m \times n$. It can be formulated as follows: $$ \hat{\textbf{y}} = \underset{\textbf{y} \in \mathcal{Y}}{\operatorname{argmax}} f(\textbf{x},\textbf{y}), (1) $$ where $f(\textbf{x},\textbf{y})$ is the compatibility function, which measures how well $\textbf{y}$ fits $\textbf{x}$. There are basically three challenges in doing structured learning and prediction: - choosing a parametric form of $f(\textbf{x},\textbf{y})$ - solving $\underset{\textbf{y} \in \mathcal{Y}}{\operatorname{argmax}} f(\textbf{x},\textbf{y})$ - learning parameters for $f(\textbf{x},\textbf{y})$ to minimize a loss In this work, our parameters are pairwise and unary potentials and they can be written as: $$ f(\textbf{x},\textbf{y}) = \sum_i \textbf{w}_i'\phi_i(\textbf{x}) + \sum_{i,j} \textbf{w}_{ij}'\phi_{ij}(\textbf{x}), (2) $$ where $\textbf{w}_i$ and $\textbf{w}_{ij}$ are unary and pairwise parameters, $\phi_i(\textbf{x})$ and $\phi_{ij}(\textbf{x})$ are unary and pairwise features respectively. Equation (2) is a linear function and can be written as a dot product of a global parameter $\textbf{w}$ and a joint feature vector $\Phi(\textbf{x},\textbf{y})$, i.e., $f(\textbf{x},\textbf{y}) = \textbf{w}'\Phi(\textbf{x}, \textbf{y})$. The global parameter $\textbf{w}$ is a collection of unary and pairwise parameters. The joint feature map $\Phi(\textbf{x}, \textbf{y})$ maps local features, e.g., pixel values from each location, to the corresponding location of the global feature vector according to $\textbf{y}$. In the factor graph model, parameters are associated with a set of factor types. As said before, the output space $\mathcal{Y}$ is usually finite but very large. In our case, it is all possible binary images of size $m \times n$. Finding the ${\operatorname{argmax}}$ in such a large space by exhaustive search is not practical. 
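To make the intractability concrete, for the $50 \times 50$ images used later in this notebook, the label space already contains $2^{2500}$ candidate binary images; the digit count of that number is easy to check:

```python
# Number of candidate labelings for an m-by-n binary image is 2**(m*n).
m, n = 50, 50
num_labelings = 2 ** (m * n)
print(len(str(num_labelings)))  # 753 decimal digits
```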
To do the maximization over $\textbf{y}$ efficiently, the most popular tools are energy functions or conditional random fields (CRFs). In this work, we implemented Graph cuts [5] for efficient inference. We also implemented max-product LP relaxation inference and tree max-product inference. However, the latter is limited to tree-structured graphs, while for image denoising we use a grid graph. The parameters are learned by regularized risk minimization, where the risk is defined by a user-provided loss function $\Delta(\mathbf{y},\mathbf{\hat{y}})$. We use the Hamming loss in this experiment. The empirical risk is defined in terms of the surrogate hinge loss $\mathcal{L}_i(\mathbf{w}) = \max_{\mathbf{y} \in \mathcal{Y}} \Delta(\mathbf{y}_i,\mathbf{y}) - \mathbf{w}' [\Phi(\mathbf{x}_i,\mathbf{y}_i) - \Phi(\mathbf{x}_i,\mathbf{y})]$. The training objective is given by $$ \min_{\mathbf{w}} \frac{\lambda}{2} ||\mathbf{w}||^2 + \frac{1}{N} \sum_{i=1}^N \mathcal{L}_i(\mathbf{w}). (3) $$ ## Create binary denoising dataset ``` import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') import numpy as np import numpy.random ``` We define an Example class for the training and testing examples. ``` class Example: """ Example class. Member variables: id: id of the example im_bw: original binary image im_noise: original image with noise feats: feature for each pixel labels: binary labels of each pixel """ def __init__(self, id, im_bw, im_noise, feats, labels): self.id = id self.im_bw = im_bw self.im_noise = im_noise self.feature = feats self.labels = labels ``` In the following, we create a toy dataset. Similar to [2], we make random noisy images, then smooth them to make the true (discrete) output values. 
``` import scipy.ndimage # generate random binary images im_size = np.array([50, 50], np.int32) num_images = 30 ims_bw = [] for i in range(num_images): im_rand = np.random.random_sample(im_size) im_bw = np.round(scipy.ndimage.gaussian_filter(im_rand, sigma=3)) ims_bw.append(im_bw) ``` Next, noise is added to the binary images. We apply the same strategy as in [3]: the noisy images are generated as $z_i = x_i(1-t_i^n) + (1-x_i)t_i^n$, where $x_i$ is the true binary label, and $t_i \in [0,1]$ is a random value. Here, $n \in (1, \infty)$ is the noise level, where lower values correspond to more noise. In this experiment, we use only two features as unary features: a constant of $1$ and the noisy input value at the pixel, i.e., $\textbf{u}(i) = [z_i, 1]$. ``` # define noise level noise_level = 2 # initialize an empty list example_list = [] for i in range(len(ims_bw)): im_bw = ims_bw[i] # add noise to the binary image t = np.random.random_sample(im_bw.shape) im_noise = im_bw*(1-t**noise_level) + (1-im_bw)*(t**noise_level) # create 2-d unary features c1 = np.ravel(im_noise) c2 = np.ones(im_noise.size, np.int32) feats = np.column_stack([c1, c2]) # we use pixel-level labels # so just flatten the original binary image into a vector labels = np.ravel(im_bw) example = Example(i, im_bw, im_noise, feats, labels) example_list.append(example) ``` Now we create a function to visualize our examples. 
``` import matplotlib.pyplot as plt %matplotlib inline def plot_example(example): """ Plot example.""" fig, plots = plt.subplots(1, 2, figsize=(12, 4)) plots[0].matshow(example.im_bw, cmap=plt.get_cmap('Greys')) plots[0].set_title('Binary image') plots[1].matshow(example.im_noise, cmap=plt.get_cmap('Greys')) plots[1].set_title('Noise image') for p in plots: p.set_xticks(()) p.set_yticks(()) plt.show() # plot an example plot_example(example_list[9]) ``` ## Build Factor Graph Model ``` from shogun import Factor, TableFactorType, FactorGraph from shogun import FactorGraphObservation, FactorGraphLabels, FactorGraphFeatures from shogun import FactorGraphModel, GRAPH_CUT, LP_RELAXATION from shogun import MAPInference ``` We define a 'make_grid_edges' function to compute the indices of the pairwise pixels. We use a grid graph with a neighborhood size of $4$ in our experiment. ``` def make_grid_edges(grid_w, grid_h, neighborhood=4): """ Create grid edge lists. Args: grid_w: width of the grid grid_h: height of the grid neighborhood: neighborhood of the node (4 or 8) Returns: edge list of the grid graph """ if neighborhood not in [4, 8]: raise ValueError("neighborhood can only be '4' or '8', got %s" % repr(neighborhood)) inds = np.arange(grid_w * grid_h).reshape([grid_w, grid_h]) inds = inds.astype(np.int64) right = np.c_[inds[:, :-1].ravel(), inds[:, 1:].ravel()] down = np.c_[inds[:-1, :].ravel(), inds[1:, :].ravel()] edges = [right, down] if neighborhood == 8: upright = np.c_[inds[1:, :-1].ravel(), inds[:-1, 1:].ravel()] downright = np.c_[inds[:-1, :-1].ravel(), inds[1:, 1:].ravel()] edges.extend([upright, downright]) return np.vstack(edges) # in this experiment, we use fixed image size im_w = example_list[0].im_bw.shape[1] im_h = example_list[0].im_bw.shape[0] # we compute the indices of the pairwise nodes edge_list = make_grid_edges(im_w, im_h) ``` For binary denoising, we define two types of factors: - unary factor: the unary factor type is used to define unary potentials that 
capture the appearance likelihood of each pixel. We use very simple unary features in this experiment: the pixel value and a constant value $1$. As we use binary labels, the size of the unary parameter is $4$. - pairwise factor: the pairwise factor type is used to define pairwise potentials between each pair of pixels. The features of the pairwise factors are a constant $1$; there are no additional edge features. For the pairwise factors, there are $2 \times 2$ parameters. Putting all parameters together, the global parameter vector $\mathbf{w}$ has length $8$. ``` def define_factor_type(num_status, dim_feat): """ Define factor type. Args: num_status: number of states dim_feat: dimension of the unary node feature Returns: ftype_unary: unary factor type ftype_pair: pairwise factor type """ # unary, type id = 0 cards_u = np.array([num_status], np.int32) # cardinalities w_u = np.zeros(num_status*dim_feat, np.float64) ftype_unary = TableFactorType(0, cards_u, w_u) # pairwise, type id = 1 cards_p = np.array([num_status, num_status], np.int32) w_p = np.zeros(num_status*num_status, np.float64) ftype_pair = TableFactorType(1, cards_p, w_p) return ftype_unary, ftype_pair # define factor types ftype_unary, ftype_pair = define_factor_type(num_status=2, dim_feat=2) def prepare_factor_graph_model(example_list, ftype_unary, ftype_pair, edge_list, num_status = 2, dim_feat = 2): """ Prepare factor graph model data. 
Args: example_list: the examples num_status: number of status dim_feat: dimension of the unary features """ num_samples = len(example_list) # Initialize factor graph features and labels feats_fg = FactorGraphFeatures(num_samples) labels_fg = FactorGraphLabels(num_samples) # Iterate over all the examples for i in range(num_samples): example = example_list[i] feats = example.feature num_var = feats.shape[0] dim_feat = feats.shape[1] # Initialize factor graph cards = np.array([num_status]*num_var, np.int32) # cardinalities fg = FactorGraph(cards) # add unary for u in range(num_var): data_u = np.array(feats[u,:], np.float64) inds_u = np.array([u], np.int32) factor_u = Factor(ftype_unary, inds_u, data_u) fg.add_factor(factor_u) # add pairwise for p in range(edge_list.shape[0]): data_p = np.array([1.0], np.float64) inds_p = np.array(edge_list[p,:], np.int32) factor_p = Factor(ftype_pair, inds_p, data_p) fg.add_factor(factor_p) # add factor graph feature feats_fg.add_sample(fg) # add factor graph label labels = example.labels.astype(np.int32) assert(labels.shape[0] == num_var) loss_weight = np.array([1.0/num_var]*num_var) f_obs = FactorGraphObservation(labels, loss_weight) labels_fg.add_label(f_obs) return feats_fg, labels_fg ``` We split the samples into training and testing sets. The features and labels are converted for the factor graph model. ``` num_train_samples = 10 examples_train = example_list[:num_train_samples] examples_test = example_list[num_train_samples:] # create features and labels for factor graph model (feats_train, labels_train) = prepare_factor_graph_model(examples_train, ftype_unary, ftype_pair, edge_list) (feats_test, labels_test) = prepare_factor_graph_model(examples_test, ftype_unary, ftype_pair, edge_list) ``` In this experiment, we use [Graph cuts](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGraphCut.html) as the approximate inference algorithm, i.e., to solve Eq. (1). Please refer to [4] for a comprehensive understanding. 
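Before building the model, the grid construction defined earlier can be sanity-checked on a tiny grid. The sketch below inlines the same 4-neighborhood logic as `make_grid_edges` (it is a standalone copy, not an import from the notebook): a $w \times h$ grid has $w(h-1)$ "right" edges and $(w-1)h$ "down" edges.

```python
import numpy as np

def make_grid_edges_4(grid_w, grid_h):
    """Same 4-neighborhood construction as above: right and down edges only."""
    inds = np.arange(grid_w * grid_h).reshape([grid_w, grid_h]).astype(np.int64)
    right = np.c_[inds[:, :-1].ravel(), inds[:, 1:].ravel()]
    down = np.c_[inds[:-1, :].ravel(), inds[1:, :].ravel()]
    return np.vstack([right, down])

# For a 3x3 grid: 3*2 = 6 right edges plus 2*3 = 6 down edges
edges = make_grid_edges_4(3, 3)
print(edges.shape)  # (12, 2)
```

Each row of `edges` is a pair of flat pixel indices, which is exactly what the pairwise-factor loop above consumes.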
``` # inference algorithm infer_alg = GRAPH_CUT #infer_alg = LP_RELAXATION # create model and register factor types model = FactorGraphModel(feats_train, labels_train, infer_alg) model.add_factor_type(ftype_unary) model.add_factor_type(ftype_pair) ``` ## Learning the parameter with structured output SVM We apply <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStochasticSOSVM.html">StochasticSOSVM</a> to learn the parameter $\textbf{w}$. ``` from shogun import StochasticSOSVM import time # Training with Stochastic Gradient Descent sgd = StochasticSOSVM(model, labels_train, True, True) sgd.put('num_iter', 300) sgd.put('lambda', 0.0001) # train t0 = time.time() sgd.train() t1 = time.time() w_sgd = sgd.get_w() print("SGD took", t1 - t0, "seconds.") def evaluation(labels_pr, labels_gt, model): """ Evaluation Args: labels_pr: predicted label labels_gt: ground truth label model: factor graph model Returns: ave_loss: average loss """ num_train_samples = labels_pr.get_num_labels() acc_loss = 0.0 ave_loss = 0.0 for i in range(num_train_samples): y_pred = labels_pr.get_label(i) y_truth = labels_gt.get_label(i) acc_loss = acc_loss + model.delta_loss(y_truth, y_pred) ave_loss = acc_loss / num_train_samples return ave_loss # training error labels_train_pr = sgd.apply() ave_loss = evaluation(labels_train_pr, labels_train, model) print('SGD: Average training error is %.4f' % ave_loss) def plot_primal_trainError(sosvm, name = 'SGD'): """ Plot primal objective values and training errors.""" primal_val = sosvm.get_helper().get_primal_values() train_err = sosvm.get_helper().get_train_errors() fig, plots = plt.subplots(1, 2, figsize=(12,4)) # primal vs passes plots[0].plot(range(primal_val.size), primal_val, label=name) plots[0].set_xlabel('effective passes') plots[0].set_ylabel('primal objective') plots[0].set_title('whole training progress') plots[0].legend(loc=1) plots[0].grid(True) # training error vs passes plots[1].plot(range(train_err.size), train_err, 
label=name) plots[1].set_xlabel('effective passes') plots[1].set_ylabel('training error') plots[1].set_title('effective passes') plots[1].legend(loc=1) plots[1].grid(True) # plot primal objective values and training errors at each pass plot_primal_trainError(sgd) ``` ## Testing results ``` # Testing error sgd.set_features(feats_test) sgd.set_labels(labels_test) labels_test_pr = sgd.apply() ave_loss = evaluation(labels_test_pr, labels_test, model) print('SGD: Average testing error is %.4f' % ave_loss) def plot_results(example, y_pred): """ Plot example.""" im_pred = y_pred.reshape(example.im_bw.shape) fig, plots = plt.subplots(1, 3, figsize=(12, 4)) plots[0].matshow(example.im_noise, cmap=plt.get_cmap('Greys')) plots[0].set_title('noise input') plots[1].matshow(example.im_bw, cmap=plt.get_cmap('Greys')) plots[1].set_title('ground truth labels') plots[2].matshow(im_pred, cmap=plt.get_cmap('Greys')) plots[2].set_title('predicted labels') for p in plots: p.set_xticks(()) p.set_yticks(()) plt.show() import matplotlib.pyplot as plt %matplotlib inline # plot one example i = 8 # get predicted output y_pred = FactorGraphObservation.obtain_from_generic(labels_test_pr.get_label(i)).get_data() # plot results plot_results(examples_test[i], y_pred) def plot_results_more(examples, labels_pred, num_samples=10): """ Plot example.""" fig, plots = plt.subplots(num_samples, 3, figsize=(12, 4*num_samples)) for i in range(num_samples): example = examples[i] # get predicted output y_pred = FactorGraphObservation.obtain_from_generic(labels_pred.get_label(i)).get_data() im_pred = y_pred.reshape(example.im_bw.shape) plots[i][0].matshow(example.im_noise, cmap=plt.get_cmap('Greys')) plots[i][0].set_title('noise input') plots[i][0].set_xticks(()) plots[i][0].set_yticks(()) plots[i][1].matshow(example.im_bw, cmap=plt.get_cmap('Greys')) plots[i][1].set_title('ground truth labels') plots[i][1].set_xticks(()) plots[i][1].set_yticks(()) plots[i][2].matshow(im_pred, cmap=plt.get_cmap('Greys')) 
plots[i][2].set_title('predicted labels') plots[i][2].set_xticks(()) plots[i][2].set_yticks(()) plt.show() plot_results_more(examples_test, labels_test_pr, num_samples=5) ``` ## References [1] Nowozin, S., & Lampert, C. H. Structured learning and prediction in computer vision. Foundations and Trends® in Computer Graphics and Vision, 6(3–4), 185-365, 2011. [2] http://users.cecs.anu.edu.au/~jdomke/JGMT/ [3] Justin Domke, Learning Graphical Model Parameters with Approximate Marginal Inference, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 10, pp. 2454-2467, 2013. [4] Tsochantaridis, I., Hofmann, T., Joachims, T., Altun, Y., Support Vector Machine Learning for Interdependent and Structured Output Spaces, ICML 2004. [5] Boykov, Y., Veksler, O., & Zabih, R. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(11), 1222-1239, 2001.
github_jupyter
# Text Clustering with Sentence-BERT ``` !pip3 install sentence-transformers !pip install datasets import pandas as pd, numpy as np import torch, os from datasets import load_dataset dataset = load_dataset("amazon_polarity",split="train") dataset corpus=dataset.shuffle(seed=42)[:10000]['content'] pd.Series([len(e.split()) for e in corpus]).hist() ``` ## Model Selection (source link: https://www.sbert.net/docs/pretrained_models.html) The best available models for STS are: * stsb-mpnet-base-v2 * stsb-roberta-base-v2 * stsb-distilroberta-base-v2 * nli-mpnet-base-v2 * nli-roberta-base-v2 * nli-distilroberta-base-v2 Paraphrase Identification Models * paraphrase-distilroberta-base-v1 - Trained on large scale paraphrase data. * paraphrase-xlm-r-multilingual-v1 - Multilingual version of paraphrase-distilroberta-base-v1, trained on parallel data for 50+ languages. (Teacher: paraphrase-distilroberta-base-v1, Student: xlm-r-base) ``` from sentence_transformers import SentenceTransformer model_path="paraphrase-distilroberta-base-v1" #paraphrase-distilroberta-base-v1 - Trained on large scale paraphrase data. 
model = SentenceTransformer(model_path) corpus_embeddings = model.encode(corpus) corpus_embeddings.shape from sklearn.cluster import KMeans K=5 kmeans = KMeans(n_clusters=K, random_state=0).fit(corpus_embeddings) import pandas as pd cls_dist=pd.Series(kmeans.labels_).value_counts() cls_dist import scipy.spatial distances = scipy.spatial.distance.cdist(kmeans.cluster_centers_ , corpus_embeddings) centers={} print("Cluster", "Size", "Center-idx", "Center-Example", sep="\t\t") for i,d in enumerate(distances): ind = np.argsort(d, axis=0)[0] centers[i]=ind print(i,cls_dist[i], ind, corpus[ind] ,sep="\t\t") ``` ## Visualization of the cluster points ``` !pip install umap-learn import matplotlib.pyplot as plt import umap X = umap.UMAP(n_components=2, min_dist=0.0).fit_transform(corpus_embeddings) labels= kmeans.labels_ fig, ax = plt.subplots(figsize=(12, 8)) plt.scatter(X[:,0], X[:,1], c=labels, s=1, cmap='Paired') for c in centers: plt.text(X[centers[c],0], X[centers[c], 1], "CLS-"+ str(c), fontsize=24) plt.colorbar() ``` ## Topic Modeling with BERT BERTopic Official NOTE: BERTopic is stochastic, which means that the topics might differ across runs. This is mostly due to the stochastic nature of UMAP. ``` !pip install bertopic ``` Official Note: Restart the Notebook. After installing BERTopic, some packages that were already loaded were updated, and in order to correctly use them, we should now restart the notebook. From the Menu: Runtime → Restart Runtime ``` len(corpus) from bertopic import BERTopic from sentence_transformers import SentenceTransformer sentence_model = SentenceTransformer("paraphrase-distilroberta-base-v1") topic_model = BERTopic(embedding_model=sentence_model) topics, _ = topic_model.fit_transform(corpus) topic_model.get_topic_info()[:6] topic_model.get_topic(2) ```
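The cluster-centre selection used earlier (`cdist` from each centroid to every embedding, then taking the index of the smallest distance) can be replayed in plain Python on toy 2-D vectors; the values below are illustrative, not real embeddings:

```python
import math

# Toy "embeddings" and two cluster centres (illustrative values only)
embeddings = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.2, 4.9)]
centers = [(0.02, 0.0), (5.1, 5.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# For each centre, the index of the closest embedding is the most
# representative example of that cluster (what cdist + argsort does above)
nearest = [min(range(len(embeddings)), key=lambda j: dist(c, embeddings[j]))
           for c in centers]
print(nearest)  # [0, 2]
```

Embedding 0 is closest to the first centre and embedding 2 to the second, so those texts would be printed as the "Center-Example" of their clusters.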
# Scientific Languages **Julia**, **Python**, and **R** are three open source languages used for scientific computing today (2020). They all come with a simplistic command line interface where you type in a statement and it is executed immediately; this is the **REPL**, short for read-evaluate-print loop. The REPL is convenient for testing a few simple statements, but it is not suitable for writing large chunks of code. For large projects, you should use an **IDE** or integrated development environment. ## Language Installation You can download: - Julia from https://julialang.org/ - Python from https://www.python.org/ - R from - CRAN R https://cran.r-project.org/ - MRAN R https://mran.microsoft.com/ (Microsoft's version of R with speed enhancements via Intel MKL) You should add them to the system **PATH** (or verify that they are there) so that they are accessible from the system command line. This makes using the REPL easy. ## IDE Everyone has their favourite editor and IDE; use what you are comfortable with. Here are two popular editors: - **Atom** https://atom.io/ is a very flexible open source editor/environment created by GitHub (bought by Microsoft in 2018) - **VS Code** https://code.visualstudio.com/ is a very flexible editor/environment created by Microsoft They are great at editing text, they can work with multiple languages, and they are very flexible tools that everyone should have. If you work in multiple languages, they are better than using different IDEs for different languages. After all, the tool you use the most is the IDE; you should pick a good one and stick with it, as muscle memory is indispensable for productivity. Install both if you like and try them out. 
## Julia Atom and VS Code both have extensions for Julia: - Atom's extension for Julia, **Uber-Juno** http://docs.junolab.org/stable/man/installation/index.html is developed by the folks that created Julia - If you like VS Code, there is a VSCode extension for Julia also ## Python Atom and VS Code both have extensions for Python, if you work in multiple languages, they are good choices. There are two other IDE specific to Python that you can consider: - **Pycharm** https://www.jetbrains.com/pycharm/ is purpose-built for Python. JetBrains also makes IDE for other languages. - **Spyder** https://www.spyder-ide.org/ is also popular, it looks like R Studio ## R Atom and VS Code both have extensions for R. However, the overwhelming preference of R IDE is **R Studio** https://rstudio.com/ ## Jupyter While the REPL and IDE lets you work efficiently with code, they don't help you with documenting and sharing your work with others. You need to switch to other software to type in text, copy/paste your program fragments and arrange output/plots, this is quite inconvenient and laborious. Where there are major pain points, there will be open source projects to make life (approximately) better. Jupyter was created to let you create computational **notebooks** that combine text, code and output together seamlessly for easy sharing of ideas with others, no more tedious copy/paste. The word **Jupyter** is coined from the contraction of Julia, Python, and R. Jupyter is a front-end that can work with many language backends via language **kernels**, a protocol for sending code and receiving output/plots. **JupyterLab** is a major re-release of Jupyter with many new and improved features. Each notebook uses one kernel, you cannot mix kernels in the same notebook. R Studio has a RMarkdown package that is similar to Jupyter in that it allows the intermixing of text, code and output. 
An RMarkdown file can run multiple kernels within the same document (but how often would you do this in practice?) ## Jupyter Installation Complete documentation is at https://jupyterlab.readthedocs.io/en/stable/. Here are simplified instructions: 1. Jupyter requires Python; install Python first (as of now, use Python 3.7.x, not 3.8) 2. Install JupyterLab via `pip install jupyterlab` on the command line. This will install JupyterLab along with a Python kernel. 3. Run JupyterLab in your default browser with `jupyter-lab` - Chrome or Firefox are fine, Internet Explorer doesn't work - The Launcher tab shows icons for all available kernels, Python should be there. 4. Create an icon shortcut on your desktop or taskbar or start menu, choose the directory to start in; all notebooks will be created under this location. Other useful pip commands: - `pip list` lists installed packages - `pip install pkg==` lists all available versions of that pkg - `pip install pkg==a.b.c` installs version a.b.c of pkg - `pip install --upgrade pkg1 pkg2 ...` upgrades listed packages to the most current version - `pip uninstall pkg1 pkg2 ...` uninstalls the listed packages Note: Jupyter is also included with Anaconda, which some people prefer. ## Julia Kernel 1. Create `~/.julia/config/startup.jl`; this is the Julia start-up file. Use some editor to put the following in it (Julia is case sensitive): ``` ENV["JUPYTER"] = "jupyter" ``` Start up a Julia session, run the following Julia statements: ``` ENV["JUPYTER"] = "jupyter" # or complete path of jupyter.exe if not on PATH using Pkg Pkg.add("IJulia") ``` Use the following Julia statement to update all Julia packages: ``` Pkg.update() ``` Detailed instructions are at https://github.com/JuliaLang/IJulia.jl. Note: these instructions avoid installing **Conda** (Anaconda, Miniconda), which are very big installs. You can have kernels for different versions of Julia. 
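Whichever language kernel you install, Jupyter records it as a small `kernel.json` definition (these files are what the Kernel List section below enumerates). For reference, the stock Python kernel's definition looks like the following; the `display_name` is whatever label you want the Launcher to show:

```json
{
  "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "Python 3",
  "language": "python"
}
```

The `{connection_file}` placeholder is filled in by Jupyter at launch time with the path to a file describing the ports the frontend and kernel use to talk to each other.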
## R Kernel Start up a R session, issue the following R statements within each installed R version: ``` install.packages('IRkernel') IRkernel::installspec(name = 'ir361', displayname = 'R 3.6.1') # Change strings to your liking ``` Instructions can be found at https://github.com/IRkernel/IRkernel You can have multiple R kernels for different versions of R. ## Kernel List Use `jupyter kernelspec list` at command prompt or terminal to list available kernels. On Windows, the kernels are placed at **C:\Users\\<userid\>\AppData\Roaming\jupyter\kernels**, each directory is a kernel. Inside each directory is **kernel.json** which is the kernel definition. ## Sas-kernel **sas-kernel** lets you create a *SAS notebook*, you write native SAS code in this notebook. This method uses JupyterLab as an interface to SAS just like SAS Display Manager, SAS Studio or SAS Enterprise Guide. You install sas-kernel with `pip install sas-kernel`. It will install **saspy** as it is a required dependency. Be sure to configure saspy properly, if saspy doesn't work, sas-kernel won't work. SAS has two streams of output: Log and Results. If your code produces only Log messages but no Results, sas-kernel shows Log messages as cell output. If your code produces Results without Log ERROR or WARNING messages, sas-kernel will show you Results as cell output. If you want to see the sas log, you need to use magics: - to see all available magics, use `%lsmagic` - to see log of last execution, use `%showLog` (case sensitive) - to see log of entire session, use `%showFullLog` (case sensitive) Line magics start with single %, cell magic starts with double %%. ## Saspy Kernel **saspy** is a kernel that lets a *Python notebook* access SAS as a computational engine. 
For example, to establish a connection between Python and SAS: ``` import saspy # import saspy package into Python session sas = saspy.SASsession() # establish a SAS backend session called "sas" ``` After establishing a connection, you write Python code to submit text strings to SAS via the **submit()** function: ``` rc = sas.submit(""" proc print data=sashelp.fish(obs=10) round; run; """) ``` You can also use cell magics if you are using Jupyter: ``` %%SAS proc print data=sashelp.cars(obs=10); run; ``` This method is meant for people who want to write Python and use SAS as a package. See https://sassoftware.github.io/saspy/getting-started.html# An architectural difference between saspy and other language kernels is that whereas other kernels run their sessions locally, saspy can access *remote SAS server sessions*. This means you can run Jupyter on your local machine and access SAS either locally or remotely on UNIX or Mainframes or a SAS Grid like SAS EG or SAS Studio. You install saspy with `pip install saspy`. It needs to be configured properly before it will run; the following is for a **winlocal** setup: 1. Make sure SAS is on PATH so that it executes from the command line 2. For winlocal, make sure sspiauth.dll is accessible (https://sassoftware.github.io/saspy/install.html#local) - e.g., add C:\Program Files\SASHome\SASFoundation\9.4\core\sasext to PATH 3. Navigate to pydir/Lib/site-packages/saspy - copy sascfg.py to sascfg_personal.py - edit sascfg_personal.py (see https://sassoftware.github.io/saspy/install.html#sascfg-personal-py) - use winlocal - 4 jar files are needed from the SAS installation (https://sassoftware.github.io/saspy/install.html#local) - 1 jar file is needed from saspy located at pydir/Lib/site-packages/saspy/java/saspyiom.jar - Because the locations of SAS and Python are hard-coded in this configuration file, anytime you update SAS or Python you must edit this file to update the directory path
# Converting RMarkdown files to SoS notebooks * **Difficulty level**: easy * **Time needed to learn**: 10 minutes or less * **Key points**: * `sos convert file.Rmd file.ipynb` converts an Rmarkdown file to a SoS notebook. A `markdown` kernel is used to render markdown text with in-line expressions. * `sos convert file.Rmd file.ipynb --execute` executes the resulting SoS notebook * `sos convert file.Rmd file.html --execute --template` converts an R Markdown file to a SoS notebook, executes it, and converts the resulting notebook to HTML format The RMarkdown format is a markdown format with embedded R expressions and code blocks, and is extremely popular among R users. SoS Notebook provides a utility to convert Rmarkdown files to a SoS Notebook with the command ``` sos convert input.Rmd output.ipynb ``` with the option to execute the resulting notebook ``` sos convert input.Rmd output.ipynb --execute ``` Example files and commands: * [Example Rmd file](../media/example.Rmd) copied from [jeromyanglim/example-r-markdown.rmd](https://gist.github.com/jeromyanglim/2716336) * [Converted ipynb file](../media/example.ipynb) generated by command ``` sos convert example.Rmd example.ipynb ``` * [Executed version of the notebook](../media/executed.ipynb) generated by command ``` sos convert example.Rmd executed.ipynb --execute ``` or ``` sos convert example.ipynb executed.ipynb --execute ``` from the converted notebook. * [Export HTML file using template `sos-report-toc-v2`](../media/example.html) generated by command ``` sos convert example.Rmd example.html --execute --template sos-report-toc-v2 ``` or ``` sos convert example.ipynb example.html --execute --template sos-report-toc-v2 ``` or ``` sos convert executed.ipynb example.html --template sos-report-toc-v2 ``` if we start from the intermediate results. ## Converting R Markdown to SoS Notebook Although there are already a number of Rmd to Jupyter converters (e.g. 
[notedown](https://github.com/aaren/notedown), [RMD-to-Jupyter](https://github.com/lecy/RMD-to-Jupyter) (uses rpy2)), they lack support for some of the Rmarkdown features due to limitations of the Jupyter notebook platform. Fortunately, SoS Notebook, especially its Jupyter Lab extension, addresses most of the limitations and offers an almost perfect conversion from R markdown to Jupyter notebook. The first Rmarkdown feature that is difficult to convert is its inline expressions, which are R expressions embedded in markdown texts. Jupyter cannot handle embedded expressions in its markdown cells because markdown cells are handled in its frontend and do not interact with the computing kernel. SoS Notebook addresses this problem with the use of a [markdown kernel](https://github.com/vatlab/markdown-kernel), a Jupyter kernel that renders markdown text. For example, the following Rmarkdown text ``` I counted `r sum(c(1,2,3))` blue cars on the highway. ``` is converted to a markdown cell that is evaluated in an R kernel as follows ``` %expand `r ` --in R I counted `r sum(c(1,2,3))` blue cars on the highway. ``` The second Rmarkdown feature is its support for multiple languages, which allows it to have [code blocks in a number of languages](https://bookdown.org/yihui/rmarkdown/language-engines.html). A Jupyter notebook with an `ir` kernel can only evaluate R scripts, but a SoS Notebook is able to include multiple kernels in one notebook. For example, code blocks such as ```{python} def f(x): return x + 2 f(2) ``` and ```{r engine="python"} def f(x): return x + 2 f(2) ``` are converted to cells with appropriate kernels such as ``` def f(x): return x + 2 f(2) ``` The last features that are not properly supported are options such as `echo=FALSE` and `include=FALSE` for Rmarkdown code blocks. There are no corresponding features in the classic Jupyter Notebook, but JupyterLab supports hiding of input and/or output of cells. 
Using these features, code blocks such as the following are converted to cells with collapsed inputs and/or outputs: ```{r echo=FALSE} arr <- rnorm(5) cat(arr) ``` ``` arr <- rnorm(5) cat(arr) ``` A related problem is that `jupyter nbconvert` does not respect the collapsing status of cells and renders input and output of all cells. SoS Notebook addresses this problem by providing templates that honor the show/hide status of cells. For example, template `sos-report-toc-v2` outputs all cells but hides collapsed inputs and outputs by default. The hidden content can be displayed via a dropdown box at the top right corner of the document. ## Option `--execute` Rmarkdown files do not contain outputs from inline expressions and code blocks, so `output.ipynb` generated from command ``` sos convert input.Rmd output.ipynb ``` only contains inputs. To obtain a notebook with embedded output, you can add option `--execute` to the `convert` command ``` sos convert input.Rmd output.ipynb --execute ``` This command converts `input.Rmd` to a SoS notebook and executes it to generate the resulting `output.ipynb`. It is basically a shortcut for commands ``` sos convert input.Rmd temp_output.ipynb papermill --engine sos temp_output.ipynb output.ipynb rm -f temp_output.ipynb ``` ## Generate an HTML report from an Rmarkdown file Command ``` sos convert input.Rmd output.html --execute ``` converts `input.Rmd` to a SoS notebook, executes it, and generates an HTML report using the specified template. It is basically a shortcut for commands ``` sos convert input.Rmd temp_output.ipynb papermill --engine sos temp_output.ipynb temp_executed.ipynb sos convert temp_executed.ipynb output.html rm -rf temp_output.ipynb temp_executed.ipynb ``` Note that SoS provides a few templates to generate reports that hide inputs and/or outputs of code blocks, corresponding to the `echo=FALSE`, `include=FALSE` options of Rmd code blocks. You can specify the use of templates with options such as `--template sos-report-toc-v2`. 
You can see a list of templates provided by SoS [here](magic_convert.html).
# Visualization PySwarms implements tools for visualizing the behavior of your swarm. These are built on top of `matplotlib`, thus rendering charts that are easy to use and highly customizable. In this example, we will demonstrate three plotting methods available in PySwarms: - `plot_cost_history`: for plotting the cost history of a swarm given a matrix - `plot_contour`: for plotting swarm trajectories of a 2D-swarm in two-dimensional space - `plot_surface`: for plotting swarm trajectories of a 2D-swarm in three-dimensional space ``` # Import modules import matplotlib.pyplot as plt import numpy as np from IPython.display import Image # Import PySwarms import pyswarms as ps from pyswarms.utils.functions import single_obj as fx from pyswarms.utils.plotters import (plot_cost_history, plot_contour, plot_surface) ``` The first step is to create an optimizer. Here, we're going to use Global-best PSO to find the minimum of the sphere function. As usual, we simply create an instance of its class `pyswarms.single.GlobalBestPSO` by passing the required parameters that we will use. Then, we'll call the `optimize()` method for 100 iterations. ``` options = {'c1':0.5, 'c2':0.3, 'w':0.9} optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options) cost, pos = optimizer.optimize(fx.sphere, iters=100) ``` ## Plotting the cost history To plot the cost history, we simply obtain the `cost_history` from the `optimizer` class and pass it to the `plot_cost_history` function. Furthermore, this method also accepts a keyword argument `**kwargs` similar to `matplotlib`. This enables us to further customize various artists and elements in the plot. 
In addition, we can obtain the following histories from the same class: - mean_neighbor_history: average local best history of all neighbors throughout optimization - mean_pbest_history: average personal best of the particles throughout optimization ``` plot_cost_history(cost_history=optimizer.cost_history) plt.show() ``` ## Animating swarms The `plotters` module offers two methods to perform animation, `plot_contour()` and `plot_surface()`. As their names suggest, these methods plot the particles in a 2-D or 3-D space. Each animation method returns a `matplotlib.animation.Animation` class that still needs to be animated by a `Writer` class (thus necessitating the installation of a writer module). For the following examples, we will convert the animations into a JS script. In such a case, we need to invoke some extra methods to do just that. Lastly, it would be nice to add meshes to our swarm plots to show the sphere function. This enables us to visually recognize where the particles are with respect to our objective function. We can accomplish that using the `Mesher` class. ``` from pyswarms.utils.plotters.formatters import Mesher # Initialize mesher with sphere function m = Mesher(func=fx.sphere) ``` There are different formatters available in the `pyswarms.utils.plotters.formatters` module to customize your plots and visualizations. Aside from `Mesher`, there is a `Designer` class for customizing font sizes, figure sizes, etc., and an `Animator` class to set delays and repeats during animation. ### Plotting in 2-D space We can obtain the swarm's position history using the `pos_history` attribute of the `optimizer` instance. To plot a 2D contour, simply pass this together with the `Mesher` to the `plot_contour()` function. In addition, we can also mark the global minimum of the sphere function, `(0,0)`, to visualize the swarm's "target". 
``` %%capture # Make animation animation = plot_contour(pos_history=optimizer.pos_history, mesher=m, mark=(0,0)) # Enables us to view it in a Jupyter notebook animation.save('plot0.gif', writer='imagemagick', fps=10) Image(url='plot0.gif') ``` ### Plotting in 3-D space To plot in 3D space, we need a position-fitness matrix with shape `(iterations, n_particles, 3)`. The first two columns indicate the x-y position of the particles, while the third column is the fitness of that given position. You need to set this up on your own, but we have provided a helper function to compute this automatically ``` # Obtain a position-fitness matrix using the Mesher.compute_history_3d() # method. It requires a cost history obtainable from the optimizer class pos_history_3d = m.compute_history_3d(optimizer.pos_history) # Make a designer and set the x,y,z limits to (-1,1), (-1,1) and (-0.1,1) respectively from pyswarms.utils.plotters.formatters import Designer d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)], label=['x-axis', 'y-axis', 'z-axis']) %%capture # Make animation animation3d = plot_surface(pos_history=pos_history_3d, # Use the cost_history we computed mesher=m, designer=d, # Customizations mark=(0,0,0)) # Mark minima animation3d.save('plot1.gif', writer='imagemagick', fps=10) Image(url='plot1.gif') ```
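For intuition about the trajectories being animated, the global-best position/velocity update that `GlobalBestPSO` performs can be sketched in plain Python. This is a simplified sketch with the same roles for `c1` (cognitive), `c2` (social), and `w` (inertia), not PySwarms' actual implementation:

```python
import random

def gbest_pso(f, n_particles=30, dim=2, iters=100, c1=0.5, c2=0.3, w=0.9, seed=0):
    """Minimal global-best PSO: every particle is pulled toward its own best
    position and toward the best position found by the whole swarm."""
    rng = random.Random(seed)
    x = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_cost = [f(p) for p in x]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (personal best) + social pull (global best)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            cost = f(x[i])
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = x[i][:], cost
                if cost < f(gbest):
                    gbest = x[i][:]
    return gbest, min(pbest_cost)

sphere = lambda p: sum(t * t for t in p)
best_pos, best_cost = gbest_pso(sphere)
```

On the sphere function the swarm contracts toward the origin, which is exactly what the contour and surface animations above visualize.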
## KNN Classifier The model predicts the severity of the landslide (or if there will even be one) within the next 2 days, based on weather data from the past 5 days. Binary Classification yielded a maximum accuracy of 77.53%. Severity Classification (multiple classes) was around 56%. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import sklearn from sklearn.utils import shuffle import pickle # df = pd.read_csv("full_dataset_v1.csv") df = pd.read_csv("dataset.csv") len(df) df['severity'].value_counts() # filter by severity. na is for non-landslide data # df = df[df['severity'].isin(["medium", "small", "large", "very_large", "na"])] # Remove -1 slopes # df = df.loc[~(df.slope == -1)] print(len(df)) print(df.forest.value_counts()) df['severity'].value_counts() df = shuffle(df) df.reset_index(inplace=True, drop=True) print(len(df)) df X = df.copy() y = X.landslide columns=[] for i in range(9, 4, -1): columns.append('humidity' + str(i)) columns.append('ARI' + str(i)) columns.append('slope') columns.append('forest2') columns.append('osm') X = X[columns] X ``` ## Scaling ``` from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2) scaler = StandardScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) cnt1 = 0 cnt2 = 0 for i in y_train: if i == 1: cnt1 += 1 else: cnt2 += 1 print(cnt1,cnt2) ``` ## Prediction ``` from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=17) knn.fit(X_train, y_train) from sklearn.metrics import accuracy_score pred = knn.predict(X_train) # class_probabilities = knn.predict_proba() print("ACCURACY:", accuracy_score(pred, y_train)) best = 1 highest = 0 for i in range(1, 130): knn = KNeighborsClassifier(n_neighbors=i) knn.fit(X_train, y_train) pred = knn.predict(X_test) score = round(accuracy_score(pred, 
y_test)*10000)/100 print("k =", i, " ACCURACY:", score) if score > highest: highest = score best = i # Binary: k = 87, 58.9 # 62.4 na/landslide print("Best k:", best, highest) ``` ## Confusion Matrix ``` knn = KNeighborsClassifier(n_neighbors=best) knn.fit(X_train, y_train) pred = knn.predict(X_test) print(accuracy_score(pred, y_test)) print("Best k:", best, highest) from sklearn.metrics import confusion_matrix array = confusion_matrix(y_test, pred) array import seaborn as sn import pandas as pd import matplotlib.pyplot as plt binary = True if binary: df_cm = pd.DataFrame(array, index = [i for i in ["No", "Yes"]], columns = [i for i in ["No", "Yes"]]) else: df_cm = pd.DataFrame(array, index = [i for i in ["None", "Small", "Medium", "Large", "Very Large"]], columns = [i for i in ["None", "Small", "Medium", "Large", "Very Large"]]) plt.figure(figsize = (10,7)) ax = sn.heatmap(df_cm, cmap="Blues", annot=True, annot_kws={"size":50}, fmt='g') ax.tick_params(axis='both', which='major', labelsize=27) plt.xlabel('Predicted', fontsize = 40) # plt.title("KNN Confusion Matrix", fontsize = 50) plt.ylabel('Actual', fontsize = 40) plt.savefig("KNN Matrix", bbox_inches="tight") plt.show() ```
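The classifier itself is simple enough to sketch without scikit-learn. Below is a minimal majority-vote KNN on toy 2-D points; the data is hypothetical, not the landslide feature set:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, point, k):
    """Majority vote among the k training points closest to `point`."""
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(point, train_X[i]))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters (illustrative data only)
train_X = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (3.0, 3.1), (3.2, 3.0), (2.9, 3.2)]
train_y = [0, 0, 0, 1, 1, 1]

print(knn_predict(train_X, train_y, (0.1, 0.1), k=3))  # 0
print(knn_predict(train_X, train_y, (3.0, 3.0), k=3))  # 1
```

The `n_neighbors` scan in the notebook is just this procedure repeated for each candidate `k`, keeping the value with the best held-out accuracy.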
# Machine Learning Using Random Forests

*Curtis Miller*

A **random forest** is a collection of decision trees that each individually make a prediction for an observation. Each tree is formed from a random subset of the training set. The majority decision among the trees is then the predicted value of an observation. Random forests are an example of **ensemble methods**, where the predictions of individual classifiers are used for decision making.

The **scikit-learn** class `RandomForestClassifier` can be used for training random forests. For random forests we may consider an additional hyperparameter besides tree depth: the number of trees to train. Each tree should individually be shallow, and having more trees should lead to less overfitting.

We will still be using the *Titanic* dataset.

```
import pandas as pd
import numpy as np
from pandas import DataFrame
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.metrics import classification_report

# Set the random seed for reproducible results; scikit-learn draws from
# NumPy's RNG, so we seed NumPy rather than the standard-library random module
np.random.seed(110717)

titanic = pd.read_csv("titanic.csv")
titanic_train, titanic_test = train_test_split(titanic)
```

## Growing a Random Forest

Let's generate a random forest where I cap the depth for each tree at $m = 5$ and grow 10 trees.

```
from sklearn.ensemble import RandomForestClassifier

forest1 = RandomForestClassifier(n_estimators=10,    # Number of trees to grow
                                 max_depth=5)        # Maximum depth of a tree
forest1.fit(X=titanic_train.replace({'Sex': {'male': 0, 'female': 1}}    # Replace strings with numbers
                                    ).drop(["Survived", "Name"], axis=1),
            y=titanic_train.Survived)

# Example prediction
forest1.predict([[2, 0, 26, 0, 0, 30]])

pred1 = forest1.predict(titanic_train.replace({'Sex': {'male': 0, 'female': 1}}
                                              ).drop(["Survived", "Name"], axis=1))
print(classification_report(titanic_train.Survived, pred1))
```

The random forest does not perform as well on the training data as a full-grown decision tree, but such a tree overfit.
The random forest, in comparison, seems to do as well as a better decision tree so far. ## Optimizing Multiple Hyperparameters We now have two hyperparameters to optimize: tree depth and the number of trees to grow. We have a few ways to proceed: 1. We could use cross-validation to see which combination of hyperparameters performs the best. Beware that there could be many combinations to check! 2. We could use cross-validation to optimize one hyperparameter first, then the next, and so on. While not necessarily producing a globally optimal solution this is less work and likely yields a "good enough" result. 3. We could randomly pick combinations of hyperparameters and use the results to guess a good combination. This is like 1 but less work. Here I will go with option 2. I will optimize the number of trees to use first, then maximum tree depth. ``` n_candidate = [10, 20, 30, 40, 60, 80, 100] # Candidate forest sizes res1 = dict() for n in n_candidate: pred3 = RandomForestClassifier(n_estimators=n, max_depth=5) res1[n] = cross_validate(pred3, X=titanic_train.replace({'Sex': {'male': 0, 'female': 1}} # Replace strings with numbers ).drop(["Survived", "Name"], axis=1), y=titanic_train.Survived, cv=10, return_train_score=False, scoring='accuracy') res1df = DataFrame({(i, j): res1[i][j] for i in res1.keys() for j in res1[i].keys()}).T res1df.loc[(slice(None), 'test_score'), :] res1df.loc[(slice(None), 'test_score'), :].mean(axis=1) ``` $n = 100$ seems to do well. Now let's pick optimal tree depth. 
```
m_candidate = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]    # Candidate depths

res2 = dict()

for m in m_candidate:
    pred3 = RandomForestClassifier(max_depth=m, n_estimators=100)
    res2[m] = cross_validate(pred3,
                             X=titanic_train.replace({'Sex': {'male': 0, 'female': 1}}    # Replace strings with numbers
                                                     ).drop(["Survived", "Name"], axis=1),
                             y=titanic_train.Survived,
                             cv=10,
                             return_train_score=False,
                             scoring='accuracy')

res2df = DataFrame({(i, j): res2[i][j] for i in res2.keys() for j in res2[i].keys()}).T

res2df.loc[(slice(None), 'test_score'), :]
res2df.loc[(slice(None), 'test_score'), :].mean(axis=1)
```

A maximum tree depth of $m = 7$ seems to work well. A way to try and combat the path-dependence of this approach would be to repeat the search for optimal forest size but with the new tree depth and so on, but I will not do so here.

Let's now see how the new random forest performs on the test set.

```
forest2 = RandomForestClassifier(max_depth=7, n_estimators=100)
forest2.fit(X=titanic_train.replace({'Sex': {'male': 0, 'female': 1}}    # Replace strings with numbers
                                    ).drop(["Survived", "Name"], axis=1),
            y=titanic_train.Survived)

survived_test_predict = forest2.predict(X=titanic_test.replace(
    {'Sex': {'male': 0, 'female': 1}}
).drop(["Survived", "Name"], axis=1))

print(classification_report(titanic_test.Survived, survived_test_predict))
```

The random forest does reasonably well, though it does not appear to be much of an improvement over the decision tree. Given the complexity of the random forest, a simple decision tree would be preferred.
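Option 1 above, cross-validating every combination of hyperparameters, is exactly what scikit-learn's `GridSearchCV` automates. A minimal sketch on synthetic data (standing in for the Titanic features, which are not loaded here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy classification data in place of the Titanic set
X, y = make_classification(n_samples=200, n_features=6, random_state=0)

# Search the full grid of (forest size, tree depth) combinations with 5-fold CV
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 40], "max_depth": [3, 5, 7]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

The cost is exactly the combinatorial blow-up the text warns about: the grid here fits 2 × 3 × 5 = 30 forests, and every extra hyperparameter multiplies that count.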
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

import pyqg

year = 24*60*60*360.

# Set up a model which will run for 20 years and start averaging after 10 years.
# There are lots of parameters that can be specified as keyword arguments
# but we are just using the defaults.
m = pyqg.QGModel(tmax=20*year, twrite=10000, tavestart=10*year)

# run it
m.run()
```

## Example of plots

## The QG potential vorticities in physical space

```
q1 = m.q[0] + m.Qy[0]*m.y
q2 = m.q[1] + m.Qy[1]*m.y

X, Y = m.x/1e3, m.y/1e3    # give units in km

pv_factor = 1.e4
factor_s = '%1.1e' % pv_factor

fig = plt.figure(figsize=(16., 5.))

ax1 = fig.add_subplot(121)
cf1 = ax1.contourf(X, Y, pv_factor*q1, 12, cmap='RdBu_r')
cb1 = fig.colorbar(cf1)
cb1.ax.text(.0, 1.05, factor_s, rotation=0)
ax1.set_xlabel('x [km]')
ax1.set_ylabel('y [km]')
ax1.set_title('Layer 1 QGPV')

ax2 = fig.add_subplot(122)
cf2 = ax2.contourf(X, Y, pv_factor*q2, 12, cmap='RdBu_r')
cb2 = fig.colorbar(cf2)
ax2.set_title('Layer 2 QGPV')
cb2.ax.text(.0, 1.05, factor_s, rotation=0)
ax2.set_xlabel('x [km]')
ax2.set_ylabel('y [km]')
```

## KE spectra and Energy budget

```
# some spectral plots
KE1spec = m.get_diagnostic('KEspec')[0].sum(axis=0)    # note that this is misleading for anisotropic flows...
KE2spec = m.get_diagnostic('KEspec')[1].sum(axis=0)    # we should sum azimuthally, and plot as a function of kappa

# factor for the energy budget plot
ebud_factor = 1.e4
ebud_factor_s = '%1.1e' % ebud_factor

# inertial range
ir = np.r_[10:20]

fig = plt.figure(figsize=(16., 7.))

ax1 = fig.add_subplot(121)
ax1.loglog(m.kk, KE1spec, '.-')
ax1.loglog(m.kk, KE2spec, '.-')
ax1.loglog(m.kk[10:20], 2*(m.kk[ir]**-3) * KE1spec[ir].mean() / (m.kk[ir]**-3).mean(), '0.5')
ax1.set_ylim([1e-9, 1e-3])
ax1.set_xlim([m.kk.min(), m.kk.max()])
ax1.grid()
ax1.legend(['upper layer', 'lower layer', r'$k^{-3}$'], loc='lower left')
ax1.set_xlabel(r'k (m$^{-1})$')
ax1.set_title('Kinetic Energy Spectrum')

# the spectral energy budget
ebud = [-m.get_diagnostic('APEgenspec').sum(axis=0),
        -m.get_diagnostic('APEflux').sum(axis=0),
        -m.get_diagnostic('KEflux').sum(axis=0),
        -m.rek*m.del2*m.get_diagnostic('KEspec')[1].sum(axis=0)*m.M**2]
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['APE gen', 'APE flux', 'KE flux', 'Diss.', 'Resid.']

ax2 = fig.add_subplot(122)
[ax2.semilogx(m.kk, term) for term in ebud]
ax2.legend(ebud_labels, loc='upper right')
ax2.grid()
ax2.set_xlim([m.kk.min(), m.kk.max()])
ax2.ticklabel_format(axis='y', style='sci', scilimits=(-2, 2))
ax2.set_title(r'Spectral Energy Budget')
ax2.set_xlabel(r'k (m$^{-1})$')
```
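The comment in the cell above notes that summing along one axis is misleading for anisotropic flows, and that an azimuthal average over the radial wavenumber $\kappa$ would be better. A sketch of such an isotropic average on a toy 2D spectrum (plain NumPy only, not pyqg's API):

```python
import numpy as np

n = 64
k = np.fft.fftfreq(n)
K, L = np.meshgrid(k, k)
kappa = np.sqrt(K**2 + L**2)        # radial wavenumber magnitude
spec2d = np.exp(-50.0 * kappa**2)   # toy 2D spectrum, decaying with kappa

# Bin the 2D spectrum by kappa and average within each annular bin
nbins = 16
edges = np.linspace(0.0, kappa.max() + 1e-12, nbins + 1)
idx = np.digitize(kappa.ravel(), edges) - 1          # bin index 0..nbins-1
counts = np.bincount(idx, minlength=nbins)
sums = np.bincount(idx, weights=spec2d.ravel(), minlength=nbins)
iso_spec = sums / np.maximum(counts, 1)              # mean energy per kappa bin
print(iso_spec.shape)
```

The resulting 1D array can then be plotted against the bin-center values of $\kappa$, which removes the directional bias of summing over a single axis.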
## Mergesort

Implement mergesort.

### Approach

Mergesort is a divide-and-conquer algorithm. We divide the array into two sub-arrays and recursively call the function on each half, until each sub-array has one element. Since each sub-array has only one element, they are all sorted. We then merge each sub-array until we form a sorted array. The merge function will be used to merge two sorted halves.

```
def merge_sort(array, left, right):
    if left < right:
        mid = (left + right) // 2    # midpoint (Python ints are arbitrary precision, so this cannot overflow)

        merge_sort(array, left, mid)
        merge_sort(array, mid + 1, right)
        merge(array, left, mid, right)
    return array


def merge(array, left, mid, right):
    n1 = mid - left + 1
    n2 = right - mid

    # create temp arrays
    left_array = [0] * n1
    right_array = [0] * n2

    # copy the data to the temp arrays
    for i in range(0, n1):
        left_array[i] = array[left + i]
    for j in range(0, n2):
        right_array[j] = array[mid + 1 + j]

    # merge the temp arrays into one array
    i = 0       # index for left_array
    j = 0       # index for right_array
    k = left    # index for the final array

    while i < n1 and j < n2:
        if left_array[i] <= right_array[j]:
            array[k] = left_array[i]
            i += 1
        else:
            array[k] = right_array[j]
            j += 1
        k += 1

    # copy remaining elements of left array if any
    while i < n1:
        array[k] = left_array[i]
        i += 1
        k += 1

    # copy remaining elements of right array if any
    while j < n2:
        array[k] = right_array[j]
        j += 1
        k += 1


A = [2, 4, 1, 6, 0, 3]
n = len(A)
merge_sort(A, 0, n - 1)
```

## Bottom-up approach

We can implement merge-sort iteratively in a bottom-up manner. We start by sorting all sub-arrays of size 1, then we merge them into sub-arrays of two elements. We perform successive merges until the array is completely sorted.
```
def merge(A, temp, frm, mid, to):
    k = frm
    i = frm
    j = mid + 1

    while i <= mid and j <= to:
        if A[i] <= A[j]:
            temp[k] = A[i]
            i += 1
        else:
            temp[k] = A[j]
            j += 1
        k += 1

    # copy remaining elements
    while i < len(A) and i <= mid:
        temp[k] = A[i]
        i += 1
        k += 1

    # no need to copy second half ...

    # copy back original list to reflect sorted order
    for i in range(frm, to + 1):
        A[i] = temp[i]


def mergeSort(array):
    left = 0
    right = len(array) - 1
    temp = array.copy()

    m = 1
    while m <= right - left:
        for i in range(left, right, 2 * m):
            frm = i
            mid = i + m - 1
            to = min(i + 2 * m - 1, right)
            merge(array, temp, frm, mid, to)    # fixed: pass the local array, not the global A
        m = 2 * m
    return array


A = [5, -4, 3, 2, 1]
mergeSort(A)
```
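The merge step is the heart of both versions above. When allocating a fresh output list is acceptable, it can be written more compactly; a simplified sketch (this standalone `merge(left, right)` helper is mine, not the notebook's in-place version):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])          # at most one of these two has elements left
    out.extend(right[j:])
    return out

print(merge([1, 4, 6], [0, 2, 3]))  # [0, 1, 2, 3, 4, 6]
```

The trade-off versus the in-place variants above is extra memory per merge, in exchange for shorter and less error-prone index bookkeeping.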
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
```

# Lecture 3B - Data Integration*

# Table of Contents
* [Lecture 3B - Data Integration*](#Lecture-3B---Data-Integration*)
	* [Content](#Content)
	* [Learning Outcomes](#Learning-Outcomes)
	* [Integration of data from multiple sources](#Integration-of-data-from-multiple-sources)
	* [Merging Datasets](#Merging-Datasets)
		* [Database-style DataFrame Merges](#Database-style-DataFrame-Merges)
		* [Merging on Index](#Merging-on-Index)
	* [Concatenating Data Frames](#Concatenating-Data-Frames)
		* [Concatenation on axes](#Concatenation-on-axes)
	* [Updating Columns](#Updating-Columns)
		* [Combining together values within Series or DataFrame columns from different sources](#Combining-together-values-within-Series-or-DataFrame-columns-from-different-sources)

---

### Content

1. dataset merging
2. dataset concatenation
3. dataset value updating

\* Content in this notebook is based on the material in the "Python for Data Analysis" book by Wes McKinney, chapter 7, and material from http://pandas.pydata.org/

### Learning Outcomes

At the end of this lecture, you should be able to:

* describe the inner, outer, left, right join types for merging dataframes
* merge different dataframes on indices or common columns
* concatenate dataframes horizontally or vertically
* update values in one dataframe based on values from a similar dataframe

---

```
from IPython.display import HTML, IFrame
IFrame("http://pandas.pydata.org/pandas-docs/dev/merging.html", width=1100, height=500)
```

## Integration of data from multiple sources

Much of the work in the overall analytics pipeline is spent on data preparation: loading, cleaning, transforming, and rearranging. The total time spent on these tasks can be up to 90% of an analytics project's time, before any actual useful 'analytics' work is done.
Increasingly, datasets from multiple sources must be integrated into a single dataset. This can be a difficult task, especially if done manually through Excel-type programs. In many cases it is impossible due to file size, and often undesirable due to the fact that it is difficult to document the process and also impossible to audit and repeat automatically.

Many analytics professionals choose to do ad hoc processing of data from one form to another using a general-purpose programming language, like Python, Perl, R, or Java, or UNIX text processing tools like sed or awk. Fortunately, pandas along with the Python standard library provide you with a high-level, flexible, and high-performance set of core manipulations and algorithms to enable you to integrate and wrangle data into a single source without much trouble.

## Merging Datasets

Data contained in pandas objects can be combined together in a number of built-in ways:

* `pandas.merge` connects rows in DataFrames based on one or more keys. This will be familiar to users of SQL or other relational databases, as it implements database join operations.
* `pandas.concat` glues or stacks together objects along an axis (`axis=1` columns, `axis=0` rows).

### Database-style DataFrame Merges

Merge or join operations combine data sets by linking rows using one or more **keys**. These operations are central to relational databases. The `merge` function in pandas is the main entry point for using these algorithms on your data.
We will begin with simple examples: ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np import seaborn as sns from pylab import rcParams # Set some Pandas options as you like pd.set_option('max_columns', 30) pd.set_option('max_rows', 30) rcParams['figure.figsize'] = 15, 10 rcParams['font.size'] = 20 %matplotlib inline df1 = pd.DataFrame( {'name': ['ben', 'ben', 'adam', 'cindy', 'adam', 'adam', 'ben'], 'transaction': np.random.randint(1, 50, 7)} ) df2 = pd.DataFrame( {'name': ['adam', 'ben', 'darren'], 'age': [33,25,40]} ) print(df1) print('---------------') print(df2) ``` Below is an example of a many-to-one merge situation using the `pandas.merge` method; the data in `df1` has multiple rows labelled 'adam' and 'ben', whereas `df2` has only one row for each value in the key column. Calling merge with these objects we obtain: ``` pd.merge(df1, df2) ``` Note that we **did not specify** which column to join on. If not specified, merge uses the **overlapping column names as the keys**. It is however good practice to specify explicitly: ``` pd.merge(df1, df2, on='name') ``` Notice that original indexes cannot be preserved when merging on columns. If the column names are different in each object, you can specify them separately: ``` df3 = pd.DataFrame( {'lkey': ['ben', 'ben', 'adam', 'cindy', 'adam', 'adam', 'ben'], 'data1': np.random.randint(1, 50, 7)} ) df4 = pd.DataFrame( {'rkey': ['adam', 'ben', 'darren'], 'age': [33,25,40]} ) print(df3) print('---------') print(df4) pd.merge(df3, df4, left_on='lkey', right_on='rkey') ``` You probably noticed that the 'cindy' and 'darren' values and associated data are missing from the result. **By default merge does an 'inner' join**; the keys in the result are the **intersection** of the two sets. Other possible options are 'left', 'right', and 'outer'. 
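The effect of each join type on the result can be compared side by side; a quick sketch with small hypothetical frames (these `left`/`right` frames are illustrative, not the lecture's df1/df2):

```python
import pandas as pd

left = pd.DataFrame({"name": ["adam", "ben", "cindy"], "x": [1, 2, 3]})
right = pd.DataFrame({"name": ["adam", "ben", "darren"], "y": [4, 5, 6]})

# 'inner' keeps the intersection of keys, 'outer' the union,
# and 'left'/'right' keep all keys from one side
for how in ["inner", "left", "right", "outer"]:
    print(how, len(pd.merge(left, right, on="name", how=how)))
```

With one overlapping key column ('adam' and 'ben' on both sides), the inner join returns 2 rows and the outer join 4.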
The outer join takes the union of the keys, combining the effect of applying both left and right joins: ``` print(df1) print('---------') print(df2) pd.merge(df1, df2, how='outer') ``` The above merges have been examples of **one-to-many and many-to-one merges**. Sometimes it is necessary to perform **one-to-one merges on indexes**, these we perform on indexes and we will see them later. Many-to-many merges have well-defined though not necessarily intuitive behaviour. Here’s an example: ``` df1 = pd.DataFrame( {'name': ['ben', 'ben', 'adam', 'cindy', 'adam', 'ben'], 'transaction_1': range(6)} ) df2 = pd.DataFrame( {'name': ['adam', 'ben', 'adam', 'ben', 'darren'], 'transaction_2': range(5)} ) print(df1) print('---------') print(df2) pd.merge(df1, df2, on='name', how='left') ``` Many-to-many joins form the **Cartesian product** of the rows. Since there were 3 'ben' rows in the left DataFrame and 2 in the right one, there are 6 'ben' rows in the result. The join method only affects the distinct key values appearing in the result. By this we mean that if there are unique keys in either left or right hand side, the type of join method will determine if rows with the unique values appear in the final result: ``` pd.merge(df1, df2, how='inner') ``` **Exercises**: For the following exercises, use the 3 data sets below (source http://www.goldpriceoz.com/). The datasets below represent the "Gold Price Annual End of Period" for a selection of currencies. Create dataframes from the datasets below by highlighting the dataset and right-clicking copy, followed by the execution of the following line: df = pd.read_clipboard() **Exercise**: Your first task is to merge the Year End Period data with each of the 2 datasets containing the period end price of gold. Call them df_USD and df_AUD. 
``` #df_y = pd.read_clipboard() df_y #df_USD = pd.read_clipboard() df_USD #df_AUD = pd.read_clipboard() df_AUD ``` **Exercise**: Merge df_USD and df_AUD so that only common data to both datasets is preserved in the result. **Exercise**: Merge df_USD and df_AUD so that all data in df_USD is preserved in the result. **Exercise**: Merge df_USD and df_AUD so that all data in df_AUD is preserved in the result. **Exercise**: Merge df_USD and df_AUD so that all data from both datasets is preserved in the result. **Exercise**: Plot the price of gold for one of the currencies, for each of the years in the dataset using an appropriate figure type. We can merge with multiple keys. To merge with multiple keys, pass a list of column names: ``` left = pd.DataFrame( {'key1': ['foo', 'foo', 'bar'], 'key2': ['one', 'two', 'one'], 'lval': [1, 2, 3]} ) right = pd.DataFrame( {'key1': ['foo', 'foo', 'bar', 'bar'], 'key2': ['one', 'one', 'one', 'two'], 'rval': [4, 5, 6, 7]} ) print(left) print('---------') print(right) pd.merge(left, right, on=['key1', 'key2'], how='outer') ``` To determine which key combinations will appear in the result depending on the choice of merge method, **think of the multiple keys as forming an array of tuples to be used as a single join key**. When joining columns-on-columns, the **indexes on the passed Data Frame objects are discarded**. A last issue to consider in merge operations is the treatment of overlapping column names. While you can address the overlap manually, merge has a suffixes option for specifying strings to append to overlapping names in the left and right DataFrame objects: ``` pd.merge(left, right, on='key1') ``` Notice the suffixes '_x' and '_y' above which are default. 
We can explicitly specify them: ``` pd.merge(left, right, on='key1', suffixes=('_left', '_right')) ``` **Exercise**: Given the following: ``` df5 = pd.DataFrame( {'key1': ['foo', 'foo', 'bar'], 'key2': ['one', 'two', 'one'], 'val': [1, 2, 3]} ) df6 = pd.DataFrame( {'key1': ['one', 'one', 'one', 'two'], 'key2': ['foo', 'foo', 'bar', 'bar'], 'val': [4, 5, 6, 7]} ) print(df5) print('---------') print(df6) ``` Your task is to merge on key1 from df5 and key2 from df6 using a merge type that preserves all unique keys, and renaming overlapping columns with the '_l' and '_r' suffixes. --- ### Merging on Index In some instances, the merge key or keys in a DataFrame will be found in its index. In this case, you can pass `left_index=True` or `right_index=True` (or both) to indicate that the index should be used as the merge key: ``` left1 = pd.DataFrame( {'key': ['a', 'b', 'a', 'a', 'b', 'c'], 'value': range(6)}) right1 = pd.DataFrame({'group_val': [3.5, 7]}, index=['a', 'b']) print(left1) print('---------') print(right1) pd.merge(left1, right1, left_on='key', right_index=True) ``` Once again, since the default merge method is to intersect the join keys, you can instead form the union of them with an outer join: ``` pd.merge(left1, right1, left_on='key', right_index=True, how='outer') ``` DataFrame has a more **convenient join method for merging by index**. It can also be used to combine together many DataFrame objects **having the same or similar indexes but non-overlapping columns**. In this example, by merging on unique indexes, we will be performing **one-to-one merge** operations. ``` right2 = pd.DataFrame( { 'group_val' : [10,20] }, index=[1,2] ) print(left1) print('---------') print(right2) left1.join(right2, how='outer') ``` ## Merge Exercises: **Exercise**: Read in the child_mortality_rates and adult_mortality_rates datasets and merge them on appropriate variables, using a meaningful merge technique. Perform data cleaning where necessary. 
``` cm = pd.read_csv('../datasets/child_mortality_rates.csv') ``` **Exercise**: Generate several plots on the above data. Is there anything interesting? **Exercise**: Read in the adult_mortality_rate_by_cause dataset and merge it with the above dataset on appropriate variables, using a meaningful merge technique. ``` amc = pd.read_csv('../datasets/adult_mortality_rate_by_cause.csv') ``` **Exercise**: Finally, read in the total_health_expenditure_by_country_per_year dataset. Attempt to merge it with the above dataset. What are the challenges? How might you work around them? ``` th = pd.read_csv('../datasets/total_health_expenditure_peercent_per_capita_of_gdp_by_country_per_year.csv') ``` --- ### Concatenating Data Frames Concatenation appends data frames and series objects using the `pandas.concat` method. Data frames can be appended either using the axis=0 option (default) whereby rows are added or using the axis=1, whereby columns are added. ``` np.random.randn(3, 4) df1 = pd.DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd']) df2 = pd.DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a']) print(df1) print('----------') print(df2) pd.concat([df1, df2], sort=False) ``` The concat method appends data frames and is not concerned with creating multiple indexes. If the indexes are relevant to the data frame and it is desirable to have unique indexes, then this can be achieved as follows: ``` pd.concat([df1, df2], ignore_index=True, sort=False) ``` **Exercise**: Create a Dataframe called df5 having 4 random float values, having a column called 'a', so that it can be appended with column 'a' from df1. Write code to concat df5 with df1. ### Concatenation on axes concat can be used to append on the **column axis**: ``` df3 = pd.DataFrame(np.random.randn(2, 3), columns=['e', 'f', 'g']) df3 df1 pd.concat([df1, df3], axis=1) ``` The `concat` method is as powerful as the merge, having a number of arguments that allow you produce custom made concatenation types. 
We can restrict the result to specified rows. Older pandas versions used a `join_axes` argument for this; it was removed in pandas 1.0 in favour of `reindex`:

```
pd.concat([df1, df3], axis=1).reindex(df1.index[1:3])    # join_axes was removed in pandas 1.0
```

### Updating Columns

#### Combining together values within Series or DataFrame columns from different sources

Another fairly common situation is to have two like-indexed (or similarly indexed) Series or DataFrame objects and needing to “patch” values in one dataframe with values from another dataframe based on matching indices. Here is an example:

```
df1 = pd.DataFrame([[np.nan, 3., 5.],
                    [-4.6, np.nan, 1],
                    [np.nan, 7., np.nan]])
df2 = pd.DataFrame([[-42.6, np.nan, -8.2],
                    [-5., 1.6, 4]], index=[1, 2])
df1
df2
```

Say we wanted to update the values in df1, column 2 with those of df2, column 2. Our intuition might be to do the following:

```
df1[2] = df2[2]
df1
```

From the result above you will notice that all the values from df2[2] have been copied over to df1[2], and that all the existing values in df1[2] have been overwritten. In cases where the index row in df1[2] was not found in df2[2], the new value was assigned as NaN. However, this is not what we wanted. We wanted to copy the values from df2[2], but preserve the values in df1[2] that did not exist in df2[2]. Let's try again

```
df1 = pd.DataFrame([[np.nan, 3., 5.],
                    [-4.6, np.nan, 1],
                    [np.nan, 7., np.nan]])
df1
```

The function that we need is called update.

```
df1[2].update(df2[2])
df1
```

Note that update performs its operation inplace. What if we now only wanted to update NaN values in df1 with the values in df2 and not just perform a blanket update? This can be achieved using the combine_first method.

```
df1
df2
df1[[0,1]].combine_first(df2[[0,1]])
```

Note that this method only takes values from the right DataFrame if they are missing in the left DataFrame.

**Exercise:** Use the datasets below and the command `df = pd.read_clipboard()` in order to construct dataframes for the exercises below:

**Exercise:** df_USD1 has missing values for the USD and GBP.
Populate the missing values with those from the dataframe df_USD2.

**Exercise:** df_USD1 has a combination of missing values and erroneous values for the EUR column. Replace all the values in this column with those that exist in dataframe df_USD2 for this column.
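When checking merges like the exercises above, pandas can also report where each row of the result came from via the `indicator` flag; a small sketch with hypothetical frames:

```python
import pandas as pd

l = pd.DataFrame({"key": ["a", "b", "c"], "lval": [1, 2, 3]})
r = pd.DataFrame({"key": ["b", "c", "d"], "rval": [4, 5, 6]})

merged = pd.merge(l, r, on="key", how="outer", indicator=True)
# The _merge column labels each row 'left_only', 'right_only', or 'both'
print(merged["_merge"].value_counts().to_dict())
```

This is a quick way to audit an integration step: rows tagged `left_only` or `right_only` are exactly the keys that failed to match.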
```
%matplotlib inline
import datetime    # needed by get_day below (missing from the original imports)

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics.pairwise import cosine_similarity
from surprise import Reader, Dataset, SVD

import warnings; warnings.simplefilter('ignore')
```

## Data Preprocessing and Visualization

```
df = pd.read_csv('C:/Users/Rohan Sharma/Desktop/movies_metadata.csv')
df.head()
df.shape
df.dtypes
df.isnull().sum()

df.drop(['belongs_to_collection', 'homepage', 'tagline'], axis='columns', inplace=True)
df.dropna(subset=['overview', 'release_date', 'title', 'video', 'imdb_id'], inplace=True)
df['original_language'] = df['original_language'].fillna(method='bfill')
df['poster_path'] = df['poster_path'].fillna(method='bfill')
df['status'] = df['status'].fillna(method='bfill')
df.isnull().sum()
df.shape

df['original_language'].drop_duplicates().shape[0]
lang_df = pd.DataFrame(df['original_language'].value_counts())
lang_df['language'] = lang_df.index
lang_df.columns = ['number', 'language']
lang_df.head()

plt.figure(figsize=(12,5))
sns.barplot(x='language', y='number', data=lang_df.iloc[1:11])
plt.show()

month_order = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
day_order = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']

def get_month(x):
    try:
        return month_order[int(str(x).split('-')[1]) - 1]
    except:
        return np.nan

def get_day(x):
    try:
        year, month, day = (int(i) for i in x.split('-'))
        answer = datetime.date(year, month, day).weekday()
        return day_order[answer]
    except:
        return np.nan

df['day'] = df['release_date'].apply(get_day)
df['month'] = df['release_date'].apply(get_month)

plt.figure(figsize=(12,6))
plt.title("Number of Movies released in a particular month.")
sns.countplot(x='month', data=df, order=month_order)

month_mean = pd.DataFrame(df[df['revenue'] > 1e8].groupby('month')['revenue'].mean())
month_mean['mon'] = month_mean.index
plt.figure(figsize=(12,6))
plt.title("Average Gross by the Month for Blockbuster Movies")
sns.barplot(x='mon', y='revenue', data=month_mean, order=month_order)

df['budget'] = pd.to_numeric(df['budget'], errors='coerce')
df['budget'].plot(logy=True, kind='hist')
```

## Weighted Average

```
vote_counts = df[df['vote_count'].notnull()]['vote_count'].astype('int')
vote_averages = df[df['vote_average'].notnull()]['vote_average'].astype('int')
C = vote_averages.mean()
C

m = vote_counts.quantile(0.95)
m

df['year'] = pd.to_datetime(df['release_date'], errors='coerce').apply(lambda x: str(x).split('-')[0] if x != np.nan else np.nan)

qualified = df[(df['vote_count'] >= m) & (df['vote_count'].notnull()) & (df['vote_average'].notnull())][['title', 'year', 'vote_count', 'vote_average', 'popularity', 'genres']]
qualified['vote_count'] = qualified['vote_count'].astype('int')
qualified['vote_average'] = qualified['vote_average'].astype('int')
qualified.shape

def weighted_rating(x):
    v = x['vote_count']
    R = x['vote_average']
    return (v/(v+m) * R) + (m/(m+v) * C)

qualified['wr'] = qualified.apply(weighted_rating, axis=1)
qualified = qualified.sort_values('wr', ascending=False)    # keep all rows so head(10) below can show ten movies
qualified.head(10)
```

## Correlation-Based Recommendation

```
movie_data = pd.read_csv('C:/Users/Rohan Sharma/Desktop/ml-latest-small/ratings.csv')
movie_data.head(10)

movies = pd.read_csv('C:/Users/Rohan Sharma/Desktop/ml-latest-small/movies.csv')
movies.head(10)

tags = pd.read_csv('C:/Users/Rohan Sharma/Desktop/ml-latest-small/tags.csv')
tags = tags[['movieId', 'tag']]
tags.head(10)

movie_data = movie_data.merge(movies, on='movieId', how='left')
movie_data = movie_data.merge(tags, on='movieId', how='left')
movie_data.head(10)

rating = pd.DataFrame(movie_data.groupby('title')['rating'].mean())
rating.head(10)
rating['Total Rating'] = pd.DataFrame(movie_data.groupby('title')['rating'].count())
rating.head(10)

movie_user = movie_data.pivot_table(index='userId', columns='title', values='rating')
correlation = movie_user.corrwith(movie_user['Iron Man (2008)'])
correlation.head(10)
recommandation = pd.DataFrame(correlation, columns=['correlation'])
recommandation.dropna(inplace=True)
recommandation = recommandation.join(rating['Total Rating'])
recommandation.head()

recc = recommandation[recommandation['Total Rating'] > 150].sort_values('correlation', ascending=False).reset_index()
recc = recc.merge(movies, on='title', how='left')
recc.head(10)
```

# Collaborative Filtering

### KNN With Means

```
from surprise import KNNWithMeans
from surprise import Dataset
from surprise import accuracy
from surprise.model_selection import train_test_split

# load the MovieLens-100k dataset (UserID :: MovieID :: Rating :: Timestamp)
data = Dataset.load_builtin('ml-100k')
trainset, testset = train_test_split(data, test_size=.20)

algo = KNNWithMeans(k=50, sim_options={'name': 'pearson_baseline', 'user_based': True})
algo.fit(trainset)

# We can now query for specific predictions
uid = str(196)
lid = str(302)

# get a prediction for specific users and items
pred = algo.predict(uid, lid, verbose=True)

# run the trained model against the testset
test_pred = algo.test(testset)
test_pred

accuracy.rmse(test_pred)
accuracy.mae(test_pred)
```

### SVD

```
data = Dataset.load_builtin('ml-100k')
trainset, testset = train_test_split(data, test_size=.25)

algo = SVD()
algo.fit(trainset)
predictions = algo.test(testset)
predictions

accuracy.rmse(predictions)
accuracy.mae(predictions)
```
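The `cosine_similarity` import at the top of this notebook points at another classic similarity measure for item-item recommendations, close in spirit to the Pearson correlation used above; a minimal sketch on a toy rating matrix (the values are hypothetical):

```python
import numpy as np

# Toy user-item rating matrix: rows = users, columns = items
R = np.array([[5.0, 4.0, 0.0],
              [4.0, 5.0, 1.0],
              [1.0, 0.0, 5.0]])

# Item-item cosine similarity: dot products of item columns,
# normalised by the column norms
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)
print(np.round(sim, 2))
```

Items 0 and 1 are rated similarly by the same users, so their similarity is high, while item 2 attracts a different audience and scores low against both.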
<a href="https://colab.research.google.com/github/anjali0503/Internship-Projects/blob/main/Iris_ML_DTC.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## ***ANJALI RAMLOLARAKH PANDEY***

**TSF GRIP SPARKS FOUNDATION**

Prediction using Decision Tree Algorithm

## TASK:06

'Create the Decision Tree classifier and visualize it graphically.'

```
# Import dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import train_test_split    # needed for the split below

data = pd.read_csv('Iris.csv')    # Reading dataset
data

X = data.drop(['Id', 'Species'], axis=1)
y = data['Species']
print(X.shape)
print(y.shape)

# Splitting dataset into testing and training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=5)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)

# Accuracy of model
# NOTE: run this cell only after tree_clf has been fitted in the cell further below
score = tree_clf.score(X_test, y_test)
accuracy = score*100
print("score:", score)
print("Accuracy", accuracy)

from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# Defining function - print_score to calculate test and train score
def print_score(clf, X_train, y_train, X_test, y_test, train=True):
    if train:
        pred = clf.predict(X_train)
        clf_report = pd.DataFrame(classification_report(y_train, pred, output_dict=True))
        print("Train Result:\n================================================")
        print(f"Accuracy Score: {accuracy_score(y_train, pred) * 100:.2f}%")
        print("_______________________________________________")
        print(f"CLASSIFICATION REPORT:\n{clf_report}")
        print("_______________________________________________")
        print(f"Confusion Matrix: \n {confusion_matrix(y_train, pred)}\n")
    elif train==False:
        pred = clf.predict(X_test)
        clf_report = pd.DataFrame(classification_report(y_test, pred, output_dict=True))
        print("Test Result:\n================================================")
        print(f"Accuracy Score: {accuracy_score(y_test, pred) * 100:.2f}%")
print("_______________________________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_______________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_test, pred)}\n") #Evaluating model by checking accuracy of testing and training model from sklearn.tree import DecisionTreeClassifier tree_clf = DecisionTreeClassifier(random_state=42) tree_clf.fit(X_train, y_train) print_score(tree_clf, X_train, y_train, X_test, y_test, train=True) print_score(tree_clf, X_train, y_train, X_test, y_test, train=False) #Storing results in a table test_score = accuracy_score(y_test, tree_clf.predict(X_test)) * 100 train_score = accuracy_score(y_train, tree_clf.predict(X_train)) * 100 results_df = pd.DataFrame(data=[["Decision Tree Classifier", train_score, test_score]], columns=['Model', 'Training Accuracy %', 'Testing Accuracy %']) results_df = results_df.append(results_df, ignore_index=True) results_df ```
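Because `Iris.csv` is not distributed with this notebook, here is a self-contained sketch of the same pipeline that substitutes scikit-learn's bundled copy of the iris data for the CSV file; the split sizes and random seeds mirror the cells above, but the data source is an assumption of this sketch:

```python
# Self-contained stand-in for the Iris.csv pipeline above, using sklearn's
# bundled iris data (an assumption: the CSV itself is not available here).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=5)

tree_clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
acc = accuracy_score(y_test, tree_clf.predict(X_test)) * 100
print(f"Test accuracy: {acc:.2f}%")
```

The column layout differs slightly from the CSV (no `Id` column), but the classifier, seeds, and split are the same.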
# State preparation with circuit optimization We want to create a circuit that produces the Bell state $\vert\Phi^+\rangle = \dfrac{\vert00\rangle + \vert11\rangle}{\sqrt 2}$. We already know that this state can be produced by a circuit containing a Hadamard gate on the first qubit followed by a CNOT gate [[1](https://en.wikipedia.org/wiki/Bell_state#Creating_Bell_states)], but we would like to replace the Hadamard gate with rotation gates. In the following we will use [PennyLane](https://pennylane.readthedocs.io/), a cross-platform Python library for quantum machine learning, because it abstracts away several implementation details (especially when it comes to using different quantum simulators and frameworks such as pyQuil/Forest). PennyLane ships with its own version of NumPy, enriched to include [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) - which lets the gradients on the circuits be computed in a more efficient way using clever transformations. In the following, I will also highlight my contributions to the library: the `SquaredErrorLoss` class and the ongoing work on a Hamiltonian decomposition method. Please check the README file for instructions on how to prepare the environment and execute this notebook. ### Importing the library ``` import pennylane as qml import requests from pennylane import numpy as np from pennylane.qnn.cost import SquaredErrorLoss ``` ### Circuit definition First of all we create the "ansatz" circuit, a base structure consisting of a list of gates applied to specific wires. By doing this we can reuse the same configuration in different circuits, for instance if all we want to change is the measurement. Since we want to tackle the general case, we put both kinds of the allowed rotation gates on both qubits; then, we also add a CNOT gate. We will therefore have 4 different parameters, each corresponding to a different angle as a parameter for each rotation gate. 
``` requests.post("http://localhost:5000/qvm") def circuit(angles, **kwargs): qml.RX(angles[0], wires=0) qml.RY(angles[1], wires=0) qml.RX(angles[2], wires=1) qml.RY(angles[3], wires=1) qml.CNOT(wires=[0, 1]) ``` In order to create a circuit, we need to include a _device_ and a _measurement_. For starting out we can use PennyLane's "default qubit" device, which can simulate a real quantum device with any number of qubits; we also set `analytic=False` and `shots=1000` to simulate a more realistic device. _Other devices can be used with the `pennylane-forest` plugin, which can be installed as explained in the README file. Once the plugin is installed and the QVM is running, any `forest.qvm` device can be uncommented and used in place of `default.qubit`. The `forest.qpu` device instead requires access to a real QPU._ ``` # dev = qml.device('default.qubit', wires=2, analytic=False, shots=1000) # For a more realistic QVM simulation (needs `pennylane-forest`, `qvm` and `quilc`, see README) #dev = qml.device('forest.qvm', device='2q-qvm', shots=10, noisy=True) # should be able to run with as is- just resource issues # For Aspen-8 simulation (needs `pennylane-forest`, `qvm` and `quilc`, see README) # dev = qml.device('forest.qvm', device='Aspen-8-qvm', shots=1, noisy=False) # For a real QPU - start with 1 shot up by factor of 10 each time - goal to 1000 dev = qml.device('forest.qpu', device='Aspen-8', shots=1000) # For a list of all the devices supported by the Forest SDK: # # from pyquil import list_quantum_computers # sorted(list_quantum_computers()) # To check the capabilities of a device: # # dev.capabilities() ``` We then create a list with one `PauliZ` _observable_ that we will use for the measurement. ``` observables = [qml.PauliZ(0)] ``` Now we can initialize some parameters (the gates' angles) and "execute" the circuit, using the expectation value of the observables as a measurement. 
The `measure` argument default for `qml.map` is already `'expval'`, so here it is added just for clarity.

```
params = [0.2, 0.8, 0.4, 0.1]
qnode = qml.map(circuit, observables, dev, measure='expval')
print(qnode(params))
```

The problem with measuring only one qubit is that, in our case where we have an entangled state, such a measurement would affect the other qubit as well by "assigning" it the same value. For this reason, we need to measure the 2 qubits together instead. We can use different combinations of observables:

- combined observables: `[qml.PauliZ(0), qml.PauliZ(1)]` (returns two values);
- tensor product observables: `[qml.PauliZ(0) @ qml.PauliZ(1)]` (returns one value);
- generic Hermitian observables: `[qml.Hermitian(observable, wires=[0, 1])]` (returns one value).

The latter is the most flexible, and since it is supported not only by [PennyLane](https://pennylane.readthedocs.io/en/stable/code/api/pennylane.Hermitian.html) but also by other frameworks (e.g. [Forest](http://docs.rigetti.com/en/stable/apidocs/pauli.html#pyquil.paulis.PauliSum) supports sums of Pauli operators, and [Qiskit](https://qiskit.org/textbook/ch-gates/fun-matrices.html#Pauli-decomposition) explains how to do it), we can make use of it.

We start with defining our target state $\vert\psi\rangle = \dfrac{1}{\sqrt 2} \left[\begin{matrix}1\\0\\0\\1\end{matrix}\right]$:

```
psi = 1. / np.sqrt(2) * np.array([[1, 0, 0, 1]]).T
```

Now we construct the observable as the outer product $O = \vert\psi\rangle \langle\psi\vert$ of the state with itself:

```
obs = psi @ psi.conj().T
print(obs)
```

Why do we "like" this matrix? Because, when used for measurements, it will "boost" the states that have close values for their first and last element. This matrix is actually the projector onto the $\psi$ state, with eigenvalues 0 and 1.

```
eigenvalues, eigenvectors = np.linalg.eig(obs)
print(eigenvalues)
```

The eigenstate corresponding to the eigenvalue 1 is our desired state:

```
eig_1 = np.array([eigenvectors[:, 0]]).T
print(eig_1)
print(np.allclose(eig_1, psi))
```

Therefore, we need to optimize its expectation value to be 1. In fact, we have that obviously $\langle\psi\vert O \vert\psi\rangle = 1$:

```
print(psi.T @ obs @ psi)
```

and a vector $\tilde\psi$ close to $\psi$ will have a measurement close to 1:

```
psi_tilde = psi - 0.001
# For normalization
psi_tilde[1:3] = np.sqrt(0.5 - psi_tilde[0] ** 2)
assert np.allclose(np.linalg.norm(psi_tilde), 1.0)
print(psi_tilde.T @ obs @ psi_tilde)
```

Let's also verify that the observable matrix is Hermitian by checking that it is equal to its own adjoint, i.e. $M = M^\dagger$:

```
print(np.allclose(obs, obs.conj().T))
```

We can also check whether the matrix can be decomposed as a linear combination of tensor products of Pauli operators:

$O = \dfrac{1}{4}((I \otimes I) + (X \otimes X) - (Y \otimes Y) + (Z \otimes Z))$

This step has to be performed manually, but my contribution for a decomposition method is [in progress](https://github.com/XanaduAI/pennylane/pull/671).

```
# Manually-defined Pauli operators. They can also be derived from PennyLane observables,
# e.g. PauliX = qml.PauliX(0).matrix
identity = np.eye(2)
pauliX = np.array([[0, 1], [1, 0]])
pauliY = np.array([[0, -1j], [1j, 0]])
pauliZ = np.array([[1, 0], [0, -1]])

decomp = 0.25 * np.sum([
    np.kron(identity, identity),
    np.kron(pauliX, pauliX),
    -np.kron(pauliY, pauliY),
    np.kron(pauliZ, pauliZ)
], axis=0)

# Is the decomposed matrix the same as the observable matrix we created before?
print(np.allclose(obs, decomp))
```

Now we are ready to run the circuit again using the new observable:

```
observables = [qml.Hermitian(obs, wires=[0, 1])]
qnode = qml.map(circuit, observables, dev, measure='expval')
print(qnode(params))
```

### Optimization

Since the circuit optimization has to be performed via gradient descent, we first of all need a good choice for the initial parameters. We can try a few different ones, including:

- all parameters equal to 0;
- all parameters randomly chosen;
- parameters initialized to some chosen "sensible" defaults.

```
params_init_method = 'chosen'

if params_init_method == 'zero':
    params = np.array([0.] * 4)
elif params_init_method == 'random':
    np.random.seed(0)
    params = np.random.normal(0., np.pi, 4)
elif params_init_method == 'chosen':
    params = np.array([np.pi / 4] * 4)
else:
    raise ValueError('{} initialization method does not exist'.format(params_init_method))
```

Then, we need to decide how many optimization steps we will run:

```
steps = 1000
```

We also create an additional array that will collect the learned parameters per round:

```
params_history = np.zeros((4, steps))
```

We define the cost function as the square of the difference between the value of the observable (which can be between 0 and 1) and 1; this means that the closer the observable gets to one, the smaller the cost becomes. My contribution is the `SquaredErrorLoss` class, which gives an easy way to calculate the loss given a target.

```
loss = SquaredErrorLoss(circuit, observables, dev)

def cost(params):
    return loss(params, target=[1])
```

Finally, we define the optimizer that we will use in the optimization process. We can start with the simplest `GradientDescentOptimizer`, and optionally try a more advanced optimizer such as the `AdamOptimizer`:

```
# opt = qml.GradientDescentOptimizer(stepsize=0.1)
opt = qml.AdamOptimizer(stepsize=0.1)
```

We are ready to kick off the experiments!
It might take a while with the default parameters :) ``` %%time for i in range(steps): params = opt.step(cost, params) if i == 0: print(f'\tCost after step {i:4d}: {cost(params)[0]: .7f} ({params})') elif (i + 1) % 50 == 0: print(f'\tCost after step {i+1:4d}: {cost(params)[0]: .7f} ({params})') params_history[:, i] = params result = qml.map(circuit, observables, dev)(params) print('Optimized parameters: {}'.format(params)) print('Result: {}'.format(result)) # The state cannot be seen when using a real device try: print('Output state: {}'.format(np.round(dev.state, decimals=3))) except NotImplementedError: print('Cannot see state when using device ' + dev.name) ``` ### Evaluation Let's check the results of the last step of the last optimization: ``` print(params_history[:, -1]) ``` We can see that each row shows values around $\left[\begin{matrix}0&\dfrac{\pi}{2}&0&0\end{matrix}\right]$, corresponding to the following rotations: - $R_x(\phi) = \left[\begin{matrix}cos(\phi/2)&-i sin(\phi/2)\\-i sin(\phi/2)&cos(\phi/2)\end{matrix}\right] \implies R_x(0) = \left[\begin{matrix}1&0\\0&1\end{matrix}\right]$ - $R_y(\phi) = \left[\begin{matrix}cos(\phi/2)&-sin(\phi/2)\\sin(\phi/2)&cos(\phi/2)\end{matrix}\right] \implies R_y(0) = \left[\begin{matrix}1&0\\0&1\end{matrix}\right], R_y(\pi/2) = \dfrac{1}{\sqrt{2}}\left[\begin{matrix}1&-1\\1&1\end{matrix}\right]$ This means that only a $\dfrac{\pi}{2}$ Y-rotation should be applied on the first qubit, while the second qubit should be left unchanged. ### Visualization In this section we will see the calculated values across runs for every parameter. We will plot the points using `matplotlib`. 
``` import matplotlib.pyplot as plt %matplotlib inline # TODO: can be simplified fig, axs = plt.subplots(2, 2, figsize=(15,15)) axs[0, 0].plot(params_history[0, :]) axs[0, 0].set_title('RX(0)') axs[0, 1].plot(params_history[1, :]) axs[0, 1].set_title('RY(0)') axs[1, 0].plot(params_history[2, :]) axs[1, 0].set_title('RX(1)') axs[1, 1].plot(params_history[3, :]) axs[1, 1].set_title('RY(1)') ```
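The evaluation above concluded that the learned circuit reduces to a single $R_y(\pi/2)$ on the first qubit followed by the CNOT. A quick framework-free sanity check with plain NumPy (no quantum simulator required) confirms that this sequence indeed maps $\vert00\rangle$ to $\vert\Phi^+\rangle$:

```python
import numpy as np

# Verify that RY(pi/2) on qubit 0 followed by CNOT prepares |Phi+> from |00>.
def ry(phi):
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
# CNOT with qubit 0 as control, in the |00>,|01>,|10>,|11> basis
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1., 0., 0., 0.])           # |00>
state = np.kron(ry(np.pi / 2), I2) @ state   # RY(pi/2) on the first qubit
state = CNOT @ state

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.allclose(state, phi_plus))  # True
```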
```
## Necessary packages
import numpy as np
import pandas as pd
import itertools
import math
import time
import os
import glob
import copy

## Signal Processing
from scipy import signal
import scipy.io.wavfile as wavfile
import scipy.io
import librosa
# from scipy.fftpack import fft
# import adaptfilt as adf

## Visualization
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import plotly.offline as py
import plotly.graph_objs as go
import plotly.tools as tls
import seaborn as sns

## Initialization
import IPython.display as ipd
import librosa.display
py.init_notebook_mode(connected=True)
%matplotlib inline

# audio_path = 'input/'
# # audio_transformation_path = 'output/20190905/'
# filename = 'v5.wav'

def read_audio(path, file_name):
    '''
    Read files from specified path (relative or absolute)

    Parameters:
        path (string): relative path to read file
        file_name (string): name of file located in path we want to read

    Returns:
        tuple: rate and data of wav file
    '''
    rate, data = wavfile.read(str(path) + str(file_name))
    # data, rate = librosa.load(str(path) + str(file_name))
    data = data.astype('float32')
    return rate, data

def write_audio(path, filename, rate, data, volume=1):
    '''
    Write files to specified path (relative or absolute) with volume transformation

    Parameters:
        path (string): relative path to write file
        filename (string): name of file we want to save to located path
        rate (int): audio rate
        data (nd.array): the data we want to save
        volume (int): set to 1 by default, which means no transformation

    Returns:
        Boolean: whether writing finished successfully
    '''
    data = copy.deepcopy(data)
    data *= volume
    data = data.astype('int16')
    wavfile.write(str(path) + str(filename), rate, data)
    return True

def compute_power(data, start, end):
    '''
    Compute power of DTS

    Parameters:
        data (nd.array): the data for which we want to calculate power
        start (int): start range
        end (int): end range

    Returns:
        float: the power of specified data
    '''
    data = data[start:end]
    power = np.mean(data**2)
    return power

def plot_waveform(data, start, end):
    '''
    Signal Visualization

    Parameters:
        data (nd.array): the data we want to visualize
        start (int): start range
        end (int): end range

    Returns:
        None: just shows the graph
    '''
    data = data[start:end]
    plt.plot(data)
    plt.ylabel('amplitude')
    plt.xlabel('samples')
    plt.show()
    return None

audio_path = 'inputs/'
# audio_transformation_path = 'output/20190905/'
filename = 'Sargis_aa_mono.wav'

audio_rate, audio_data = read_audio(audio_path, filename)
plot_waveform(audio_data, 16000, 24000)
ipd.Audio(audio_data, rate=audio_rate)

N = len(audio_data)
sampling_rate = 16000  # 8000 hz F_s
coef_no = int(N/2) + 1  # the +1 accounts for the zero-frequency bin of the one-sided FFT
freqs = np.array(range(coef_no)) * sampling_rate / N
freqs

coef = np.fft.rfft(audio_data)
len(coef)
amplitute_spectre = np.abs(coef)
db_spectr = np.log10(amplitute_spectre + 1)

plt.plot(freqs, amplitute_spectre)
plt.xlabel('freqs in hz')
plt.ylabel('amplituds')
plt.show()

plt.plot(freqs, db_spectr)
plt.xlabel('freqs in hz')
plt.ylabel('amplituds')
plt.show()

k_max = np.argmax(amplitute_spectre)
f_max = sampling_rate / N * k_max
print(f_max)  # dominant frequency, in hz
```
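The peak-picking recipe above (`k_max = argmax(|rfft|)`, then `f_max = sampling_rate / N * k_max`) can be sanity-checked on a synthetic tone where the answer is known in advance; a 440 Hz sine sampled at 16 kHz stands in for the recorded vowel:

```python
import numpy as np

# Sanity check of the dominant-frequency recipe on a synthetic 440 Hz tone
# (the tone stands in for the vowel recording, an assumption of this sketch).
sampling_rate = 16000
N = sampling_rate  # one second of signal -> 1 Hz frequency resolution
t = np.arange(N) / sampling_rate
tone = np.sin(2 * np.pi * 440 * t)

coef = np.fft.rfft(tone)
freqs = np.arange(len(coef)) * sampling_rate / N
amplitude_spectrum = np.abs(coef)

k_max = np.argmax(amplitude_spectrum)
f_max = freqs[k_max]
print(f_max)  # 440.0
```

Because 440 Hz falls exactly on an FFT bin here, there is no spectral leakage and the peak recovers the frequency exactly.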
``` %matplotlib inline ``` `Learn the Basics <intro.html>`_ || `Quickstart <quickstart_tutorial.html>`_ || `Tensors <tensorqs_tutorial.html>`_ || **Datasets & DataLoaders** || `Transforms <transforms_tutorial.html>`_ || `Build Model <buildmodel_tutorial.html>`_ || `Autograd <autogradqs_tutorial.html>`_ || `Optimization <optimization_tutorial.html>`_ || `Save & Load Model <saveloadrun_tutorial.html>`_ Datasets & Dataloaders =================== Code for processing data samples can get messy and hard to maintain; we ideally want our dataset code to be decoupled from our model training code for better readability and modularity. PyTorch provides two data primitives: ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset`` that allow you to use pre-loaded datasets as well as your own data. ``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around the ``Dataset`` to enable easy access to the samples. PyTorch domain libraries provide a number of pre-loaded datasets (such as FashionMNIST) that subclass ``torch.utils.data.Dataset`` and implement functions specific to the particular data. They can be used to prototype and benchmark your model. You can find them here: `Image Datasets <https://pytorch.org/vision/stable/datasets.html>`_, `Text Datasets <https://pytorch.org/text/stable/datasets.html>`_, and `Audio Datasets <https://pytorch.org/audio/stable/datasets.html>`_ Loading a Dataset ------------------- Here is an example of how to load the `Fashion-MNIST <https://research.zalando.com/project/fashion_mnist/fashion_mnist/>`_ dataset from TorchVision. Fashion-MNIST is a dataset of Zalando’s article images consisting of 60,000 training examples and 10,000 test examples. Each example comprises a 28×28 grayscale image and an associated label from one of 10 classes. 
We load the `FashionMNIST Dataset <https://pytorch.org/vision/stable/datasets.html#fashion-mnist>`_ with the following parameters: - ``root`` is the path where the train/test data is stored, - ``train`` specifies training or test dataset, - ``download=True`` downloads the data from the internet if it's not available at ``root``. - ``transform`` and ``target_transform`` specify the feature and label transformations ``` import torch from torch.utils.data import Dataset from torchvision import datasets from torchvision.transforms import ToTensor import matplotlib.pyplot as plt training_data = datasets.FashionMNIST( root="data", train=True, download=True, transform=ToTensor() ) test_data = datasets.FashionMNIST( root="data", train=False, download=True, transform=ToTensor() ) ``` Iterating and Visualizing the Dataset ----------------- We can index ``Datasets`` manually like a list: ``training_data[index]``. We use ``matplotlib`` to visualize some samples in our training data. ``` labels_map = { 0: "T-Shirt", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat", 5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle Boot", } figure = plt.figure(figsize=(8, 8)) cols, rows = 3, 3 for i in range(1, cols * rows + 1): sample_idx = torch.randint(len(training_data), size=(1,)).item() img, label = training_data[sample_idx] figure.add_subplot(rows, cols, i) plt.title(labels_map[label]) plt.axis("off") plt.imshow(img.squeeze(), cmap="gray") plt.show() type(training_data) img.shape, label ``` .. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist -------------- Creating a Custom Dataset for your files --------------------------------------------------- A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`. Take a look at this implementation; the FashionMNIST images are stored in a directory ``img_dir``, and their labels are stored separately in a CSV file ``annotations_file``. 
In the next sections, we'll break down what's happening in each of these functions. ``` import os import pandas as pd from torchvision.io import read_image class CustomImageDataset(Dataset): def __init__(self, annotations_file, img_dir, transform=None, target_transform=None): self.img_labels = pd.read_csv(annotations_file) self.img_dir = img_dir self.transform = transform self.target_transform = target_transform def __len__(self): return len(self.img_labels) def __getitem__(self, idx): img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0]) image = read_image(img_path) label = self.img_labels.iloc[idx, 1] if self.transform: image = self.transform(image) if self.target_transform: label = self.target_transform(label) return image, label ``` __init__ ^^^^^^^^^^^^^^^^^^^^ The __init__ function is run once when instantiating the Dataset object. We initialize the directory containing the images, the annotations file, and both transforms (covered in more detail in the next section). The labels.csv file looks like: :: tshirt1.jpg, 0 tshirt2.jpg, 0 ...... ankleboot999.jpg, 9 ``` def __init__(self, annotations_file, img_dir, transform=None, target_transform=None): self.img_labels = pd.read_csv(annotations_file) self.img_dir = img_dir self.transform = transform self.target_transform = target_transform ``` __len__ ^^^^^^^^^^^^^^^^^^^^ The __len__ function returns the number of samples in our dataset. Example: ``` def __len__(self): return len(self.img_labels) ``` __getitem__ ^^^^^^^^^^^^^^^^^^^^ The __getitem__ function loads and returns a sample from the dataset at the given index ``idx``. Based on the index, it identifies the image's location on disk, converts that to a tensor using ``read_image``, retrieves the corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the tensor image and corresponding label in a tuple. 
``` def __getitem__(self, idx): img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0]) image = read_image(img_path) label = self.img_labels.iloc[idx, 1] if self.transform: image = self.transform(image) if self.target_transform: label = self.target_transform(label) return image, label ``` -------------- Preparing your data for training with DataLoaders ------------------------------------------------- The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in "minibatches", reshuffle the data at every epoch to reduce model overfitting, and use Python's ``multiprocessing`` to speed up data retrieval. ``DataLoader`` is an iterable that abstracts this complexity for us in an easy API. ``` from torch.utils.data import DataLoader train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True) test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True) ``` Iterate through the DataLoader -------------------------- We have loaded that dataset into the ``Dataloader`` and can iterate through the dataset as needed. Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels respectively). Because we specified ``shuffle=True``, after we iterate over all batches the data is shuffled (for finer-grained control over the data loading order, take a look at `Samplers <https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler>`_). ``` # Display image and label. train_features, train_labels = next(iter(train_dataloader)) print(f"Feature batch shape: {train_features.size()}") print(f"Labels batch shape: {train_labels.size()}") img = train_features[0].squeeze() label = train_labels[0] plt.imshow(img, cmap="gray") plt.show() print(f"Label: {label}") ``` -------------- Further Reading -------------- - `torch.utils.data API <https://pytorch.org/docs/stable/data.html>`_
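The `Dataset`/`DataLoader` contract described above boils down to two dunder methods plus index shuffling and batching. Here is a framework-free sketch of that contract in plain Python (no torch; the class and helper names are illustrative, not part of any library):

```python
import random

class ToyDataset:
    """Minimal object honoring the Dataset contract: __len__ plus __getitem__."""
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

def iterate_minibatches(dataset, batch_size, shuffle=True, seed=0):
    """Conceptually what a DataLoader does: shuffle indices, then yield batches."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        batch = [dataset[i] for i in indices[start:start + batch_size]]
        xs, ys = zip(*batch)
        yield list(xs), list(ys)

ds = ToyDataset(features=[[i, i + 1] for i in range(10)], labels=list(range(10)))
for xs, ys in iterate_minibatches(ds, batch_size=4):
    print(len(xs), ys)  # batches of 4, 4, and 2 samples
```

A real `DataLoader` adds collation into tensors, multiprocessing workers, and pluggable samplers on top of this basic loop.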
# Saving a model trained with a tabular dataset in fast.ai

- Example of saving and reloading a model trained with a tabular dataset in fast.ai.
- This notebook extends the example adapted from the paper by Howard and Gugger: https://arxiv.org/pdf/2002.04688.pdf

# Prepare the notebook and ingest the dataset

The first section of this notebook is identical to the chapter 2 notebook for examining tabular curated datasets: https://github.com/PacktPublishing/Deep-Learning-with-fastai-Cookbook/blob/main/ch2/examining_tabular_datasets.ipynb

```
# imports for notebook boilerplate
!pip install -Uqq fastbook
import fastbook
from fastbook import *
from fastai.tabular.all import *

# set up the notebook for fast.ai
fastbook.setup_book()

# ingest the curated tabular dataset ADULT_SAMPLE
path = untar_data(URLs.ADULT_SAMPLE)

# examine the directory structure
path.ls()

# ingest the dataset into a Pandas dataframe
df = pd.read_csv(path/'adult.csv')

# examine the first few records in the dataframe
df.head()

# get the number of records in the dataset
df.shape

# get the count of unique values in each column of the dataset
df.nunique()

# count the number of missing values in each column of the dataset
df.isnull().sum()

# get the subset of the dataset where age <= 40
df_young = df[df.age <= 40]
df_young.head()
```

# Define transforms, dependent variable, continuous and categorical columns

In this section we define the transforms that will be applied to the dataset, along with the target, continuous, and categorical columns.

```
# define transforms to apply to the tabular dataset
procs = [FillMissing, Categorify]

# define the dependent variable (y value)
dep_var = 'salary'

# define columns that are continuous / categorical
cont, cat = cont_cat_split(df, 1, dep_var=dep_var)
```

# Define TabularDataLoaders object

```
# define TabularDataLoaders object
# valid_idx: the indices to use for the validation set
dls = TabularDataLoaders.from_df(df, path, procs=procs, cat_names=cat, cont_names=cont,
                                 y_names=dep_var, valid_idx=list(range(1024, 1260)), bs=64)

# use show_batch() to see a sample batch including x and y values
dls.show_batch()
```

# Define and train model

```
learn = tabular_learner(dls, layers=[200,100], metrics=accuracy)
learn.fit_one_cycle(3)

# show sample result, including transformed x, y and predicted transformed y
learn.show_results()
```

# Examine the structure of the trained model

Use the summary() function to see the structure of the trained model, including:

- the layers that make up the model
- total parameters
- loss function
- optimizer function
- callbacks

```
learn.summary()
```

# Save the trained model

```
learn.path
print(os.getcwd())

if 'google.colab' in str(get_ipython()):
    temp_path = Path('/content/gdrive/MyDrive/fastai_cookbook/Deep-Learning-with-fastai-Cookbook/ch3/')
else:
    temp_path = Path(os.getcwd())
learn.path = temp_path

# save the trained model as adult_sample_model.pkl under temp_path
learn.export('adult_sample_model.pkl')

learn2 = load_learner(os.path.join(temp_path, 'adult_sample_model.pkl'))

sample_URL = 'https://raw.githubusercontent.com/PacktPublishing/Deep-Learning-with-fastai-Cookbook/main/ch3/adult_sample_test.csv'
df_test = pd.read_csv(sample_URL)
df_test.shape
df_test.head()
df_test.iloc[1]
df_test.iloc[0].shape
df_test.dtypes

test_sample = df_test.iloc[1]
learn2.predict(test_sample)

pred_class, pred_idx, outputs = learn2.predict(test_sample)
print("pred class is: ", pred_class)
type(pred_class)
pred_class['workclass']
```
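The `cont_cat_split(df, 1, dep_var=dep_var)` call used earlier decides, roughly, that numeric columns with more than `max_card` (here 1) distinct values are continuous and everything else is categorical. Here is a pure-Python sketch of that rule — an approximation only, since fastai's real implementation also inspects pandas dtypes:

```python
# Rough sketch of the rule behind cont_cat_split: numeric columns with more
# than max_card distinct values count as continuous, the rest as categorical.
# (Illustrative approximation, not fastai's actual implementation.)
def cont_cat_split_sketch(rows, dep_var, max_card=1):
    cont, cat = [], []
    for col in rows[0]:
        if col == dep_var:
            continue
        values = [row[col] for row in rows]
        is_numeric = all(isinstance(v, (int, float)) and not isinstance(v, bool)
                         for v in values)
        if is_numeric and len(set(values)) > max_card:
            cont.append(col)
        else:
            cat.append(col)
    return cont, cat

sample = [
    {'age': 39, 'workclass': 'Private', 'salary': '<50k'},
    {'age': 50, 'workclass': 'State-gov', 'salary': '>=50k'},
]
print(cont_cat_split_sketch(sample, dep_var='salary'))  # (['age'], ['workclass'])
```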
# V0.1.6 - System Identification Using Adaptative Filters Example created by Wilson Rocha Lacerda Junior ## Generating 1 input 1 output sample data The data is generated by simulating the following model: $y_k = 0.2y_{k-1} + 0.1y_{k-1}x_{k-1} + 0.9x_{k-1} + e_{k}$ If *colored_noise* is set to True: $e_{k} = 0.8\nu_{k-1} + \nu_{k}$ where $x$ is a uniformly distributed random variable and $\nu$ is a gaussian distributed variable with $\mu=0$ and $\sigma=0.1$ In the next example we will generate a data with 1000 samples with white noise and selecting 90% of the data to train the model. ``` pip install sysidentpy import numpy as np import pandas as pd import matplotlib.pyplot as plt from sysidentpy.polynomial_basis import PolynomialNarmax from sysidentpy.metrics import root_relative_squared_error from sysidentpy.utils.generate_data import get_miso_data, get_siso_data from sysidentpy.parameter_estimation import Estimators x_train, x_valid, y_train, y_valid = get_siso_data(n=1000, colored_noise=False, sigma=0.001, train_percentage=90) ``` One can create a model object and access the Adaptative Filters available in SysIdentPy individually. If you want to build a regressor matrix and estimate the parameters using only the Adaptative Filter method (not applying FROLS + ERR algorithm), follow the steps bellow. ``` model = PolynomialNarmax() psi = model.build_information_matrix(x_train, y_train, xlag=2, ylag=1, non_degree=2) # creating the regressor matrix pd.DataFrame(psi).head() [regressor_code, max_lag] = model.regressor_space(2, 2, 1, 1) regressor_code # the entire regressor space is our input in this case. 
# But you can define specific subsets to use as an input
model.final_model = regressor_code  # defines the model representation
model.psi = psi
```

Here we are using the Affine Least Mean Squares method, but you can use any of the methods available on SysIdentPy:

- Least Mean Squares (LMS)
- Affine LMS
- LMS Sign Error
- Normalized LMS
- Normalized LSM Sign Error
- LSM Sign Regressor
- Normalized LMS Sign Regressor
- LMS Sign-Sign
- Normalized LMS Sign-Sign
- Normalized LMS Leaky
- LMS Leaky
- LMS Mixed Norm
- LMS Fourth

Also, you can use:

- Least Squares (LS)
- Total LS
- Recursive LS

## Building the model

```
model.theta = Estimators(mu=0.01).affine_least_mean_squares(model.psi, y_train[1:, 0].reshape(-1, 1))
```

## Simulating the model

```
yhat = model.predict(x_valid, y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)

model.n_terms = 10  # the number of terms we selected (necessary in the 'results' methods)
model.err = model.n_terms*[0]  # just to use the `results` method
results = pd.DataFrame(model.results(err_precision=8, dtype='dec'),
                       columns=['Regressors', 'Parameters', 'ERR'])
print(results)

ee, ex, extras, lam = model.residuals(x_valid, y_valid, yhat)
model.plot_result(y_valid, yhat, ee, ex)
```

## Final code

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sysidentpy.polynomial_basis import PolynomialNarmax
from sysidentpy.metrics import root_relative_squared_error
from sysidentpy.utils.generate_data import get_miso_data, get_siso_data
from sysidentpy.parameter_estimation import Estimators

x_train, x_valid, y_train, y_valid = get_siso_data(n=1000,
                                                   colored_noise=False,
                                                   sigma=0.001,
                                                   train_percentage=90)

model = PolynomialNarmax()
psi = model.build_information_matrix(x_train, y_train, xlag=2, ylag=1, non_degree=2)  # creating the regressor matrix
[regressor_code, max_lag] = model.regressor_space(2, 2, 1, 1)

# The entire regressor space is our input in this case,
# but you can define specific subsets to use as an input
model.final_model = regressor_code  # defines the model representation
model.psi = psi
model.theta = Estimators(mu=0.01).affine_least_mean_squares(model.psi, y_train[1:, 0].reshape(-1, 1))

yhat = model.predict(x_valid, y_valid)
rrse = root_relative_squared_error(y_valid, yhat)
print(rrse)

model.n_terms = 10  # the number of terms we selected (necessary in the 'results' methods)
model.err = model.n_terms*[0]  # just to use the `results` method
results = pd.DataFrame(model.results(err_precision=8, dtype='dec'),
                       columns=['Regressors', 'Parameters', 'ERR'])
print(results)

ee, ex, extras, lam = model.residuals(x_valid, y_valid, yhat)
model.plot_result(y_valid, yhat, ee, ex)
```
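For intuition about what the adaptive-filter estimators listed above do under the hood, here is a framework-free sketch of the plain LMS recursion identifying a known FIR system; the function name and setup are illustrative, not SysIdentPy's API:

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Identify an FIR system with the basic LMS update w <- w + mu * e * u."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]  # [x[k], x[k-1], ..., x[k-n_taps+1]]
        e = d[k] - w @ u                   # a-priori estimation error
        w = w + mu * e * u                 # LMS weight update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)       # white excitation signal
h = np.array([0.5, -0.3])           # "unknown" system to identify
d = np.convolve(x, h)[:len(x)]      # desired signal d[k] = 0.5 x[k] - 0.3 x[k-1]

w = lms_identify(x, d, n_taps=2, mu=0.05)
print(w)  # converges to h = [0.5, -0.3]
```

The affine and normalized variants above replace this fixed-step update with step sizes scaled by the input energy, which speeds convergence for correlated inputs.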
```
%matplotlib inline
import matplotlib.pyplot as plt

from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import regularizers

import numpy as np
```

### Reference :

* Blog : building autoencoders in keras : https://blog.keras.io/building-autoencoders-in-keras.html

### Load market data from Quandl

```
import quandl  # pip install quandl
import pandas as pd

def qData(tick='XLU'):
    # GOOG/NYSE_XLU.4
    # WIKI/MSFT.4
    qtck = "GOOG/NYSE_" + tick + ".4"
    return quandl.get(qtck, start_date="2003-01-01", end_date="2016-12-31", collapse="daily")

'''TICKERS = ['MSFT','JPM','INTC','DOW','KO',
           'MCD','CAT','WMT','MMM','AXP',
           'BA','GE','XOM','PG','JNJ']'''
TICKERS = ['XLU','XLF','XLK','XLY','XLV','XLB','XLE','XLP','XLI']

try:
    D.keys()
except NameError:
    print('create empty Quandl cache')
    D = {}

for tckr in TICKERS:
    if not (tckr in D.keys()):
        print(tckr)
        qdt = qData(tckr)
        qdt.rename(columns={'Close': tckr}, inplace=True)
        D[tckr] = qdt

for tck in D.keys():
    assert(list(D[tck].keys()) == [tck])

for tck in D.keys():
    print(D[tck].shape)

J = D[TICKERS[0]].join(D[TICKERS[1]])
for tck in TICKERS[2:]:
    J = J.join(D[tck])

J.head(5)
J.isnull().sum()

J2 = J.fillna(method='ffill')
#J2[J['WMT'].isnull()]

LogDiffJ = J2.apply(np.log).diff(periods=1, axis=0)
LogDiffJ.drop(LogDiffJ.index[0:1], inplace=True)
print(LogDiffJ.shape)

MktData = LogDiffJ.values  # as numpy.array (as_matrix is deprecated)
print(MktData.shape)

np.random.shuffle(MktData)
split_index = 3000
x_train = MktData[0:split_index, :] * 100
x_test = MktData[split_index:, :] * 100
np.std(x_train, axis=0)
```

## Linear auto-encoder : like PCA

### We get a linear model by removing activation functions

```
original_dim = 9
# this is the size of our encoded representations
encoding_dim = 3

# this is our input placeholder
input_data = Input(shape=(original_dim,))

if True:
    # no sparsity constraint
    encoded = Dense(encoding_dim, activation=None)(input_data)
else:
    encoded = Dense(encoding_dim, activation=None,
                    activity_regularizer=regularizers.l1(10e-5))(input_data)

# "decoded" is the lossy reconstruction of the input
decoded = Dense(original_dim, activation=None)(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(inputs=input_data, outputs=decoded)

# this model maps an input to its encoded representation
encoder = Model(inputs=input_data, outputs=encoded)

# create a placeholder for an encoded (3-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(inputs=encoded_input, outputs=decoder_layer(encoded_input))

# train the autoencoder to reconstruct stock returns, using an L2 loss
autoencoder.compile(optimizer='adadelta', loss='mean_squared_error')
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test, x_test))

# encode and decode some returns
# note that we take them from the *test* set
encoded_data = encoder.predict(x_test)
decoded_data = decoder.predict(encoded_data)

for i in range(original_dim):
    print(i, np.corrcoef(x_test[:, i].T, decoded_data[:, i].T)[0, 1])

decoding_error = x_test - decoded_data
for i in range(original_dim):
    print(i, np.corrcoef(decoded_data[:, i].T, decoding_error[:, i].T)[0, 1])
```
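The "like PCA" claim above can be checked independently of Keras: with no activations and an L2 loss, the optimal linear autoencoder spans the same subspace as the top principal components. A minimal NumPy sketch on synthetic data (the data generation is illustrative, not the Quandl series):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "returns": 3 latent factors mixed into 9 observed series, plus small noise
factors = rng.normal(size=(3000, 3))
mixing = rng.normal(size=(3, 9))
X = factors @ mixing + 0.01 * rng.normal(size=(3000, 9))
X = X - X.mean(axis=0)

# rank-3 PCA reconstruction via SVD (the optimum a linear autoencoder converges towards)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
X_hat = (U[:, :k] * s[:k]) @ Vt[:k]

# reconstruction error is small because the data is (nearly) rank 3
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(rel_err)
```

Comparing `rel_err` with the trained autoencoder's validation loss gives a floor on how well the Keras model can do.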
# 1. Python basics

This chapter only gives a short introduction to Python to make the explanations in the following chapters more understandable. A detailed description would be too extensive and would go beyond the scope of this tutorial. Take a look at https://docs.python.org/tutorial/.

Now let's take our first steps into the Python world.

## 1.1 Functions

You can see a function as a small subprogram that does a special job. Depending on what should be done, the code is executed and/or something will be returned to the calling part of the script. There is a set of built-in functions available, but you can create your own functions, too.

Each function name in Python 3.x is followed by parentheses (), and any input parameters or arguments are placed within them. Functions can perform calculations or actions and return None, a value or multiple values. For example

    function_name()
    function_name(parameter1, parameter2,...)
    ret = function_name(variable, axis=0)

### 1.1.1 Print

In our notebooks we use the function <font color='blue'><b>print</b></font> to print contents when running the cells. This gives us the possibility to export the notebooks as Python scripts, which can be run directly in a terminal.

Let's print the string <font color='red'>Hello World</font>. A string can be written in different ways, enclosed in single or double quotes.

```
print('Hello World')
```

This is the easiest way to use print. In order to produce a prettier output of the variable contents, format specifications can be used. But we will come to this later.

## 1.2 Data types

- Numeric
  - integer
  - float
  - complex
- Boolean
  - True or False
- Text
  - string
- and many others

We use the built-in function <font color='blue'><b>type</b></font> to retrieve the type of a variable.

Example: Define a variable x with value 5 and print the content of x.

```
x = 5
print(x)
```

Let's see what type the variable really has. You can use the function type as an argument to the function print.
```
print(type(x))
```

Change the value of variable x to a floating point value of 5.0.

```
x = 5.0
print(x)
```

Get the type of the changed variable x.

```
print(type(x))
```

Define variables of different types.

```
x = 1
y = 7.3
is_red = False
title = 'Just a string'

print(type(x), type(y), type(is_red), type(title))
```

## 1.3 Lists

A list is a compound data type, used to group different values which can have different data types. Lists are written as a list of comma-separated values (items) between square brackets.

```
names = ['Hugo', 'Charles', 'Janine']
ages = [72, 33, 16]

print(type(names), type(ages))
print(names)
print(ages)
```

To select single or multiple elements of a list you can use indexing. A negative value takes the element from the end of the list.

```
first_name = names[0]
last_name = names[-1]

print('First name: %-10s' % first_name)
print('Last name:  %-10s' % last_name)
print(type(names[0]))
```

To select a subset of a list you can use slicing, \[start_index<font color='red'><b>:</b></font>end_index\[<font color='red'><b>:</b></font>step\]\], where the selected part of the list includes the first element and all following elements up to the element **before** end_index. The next example will return the first two elements (index 0 and 1) and **not** the last element.

```
print(names[0:2])
```

What will be returned when doing the following?

```
print(names[0:3:2])
print(names[1:2])
print(names[1:3])
print(names[::-1])
```

The slicing with \[::-1\] reverses the order of the list.

Using only the colon without any indices for slicing creates a shallow copy of the list. Working with the new list will not affect the original list.

```
names_ln = names
names_cp = names[:]

names[0] = 'Ben'
print(names_ln)
print(names_cp)
```

To add elements to the end of a list use append or +=.

```
names.append('Paul')
print(names)

names += ['Liz']
print(names)
```

Well, how do we do an insertion of an element right after the first element?
```
names.insert(1, 'Merle')
print(names)
```

If you want to add more than one element to a list use extend.

```
names.extend(['Sophie', 'Sebastian', 'James'])
print(names)
```

If you decide to remove an element use remove.

```
names.remove('Janine')
print(names)
```

With pop you can remove an element, too. Remove the last element of the list.

```
names.pop()
print(names)
```

Remove an element by its index.

```
names.pop(2)
print(names)
```

Use reverse to - yupp - reverse the list.

```
names.reverse()
print(names)
```

## 1.4 Tuples

A tuple is like a list, but it's unchangeable (it's also called immutable). Once a tuple is created, you cannot change its values. To change a tuple you have to convert it to a list, change the content, and convert it back to a tuple.

Define the variable tup as tuple.

```
tup = (0, 1, 1, 5, 3, 8, 5)
print(type(tup))
```

Sometimes you have to make multiple variable assignments, which can be quite tedious. The tuple packing method makes this very easy. Here are some examples how to use tuples.

Standard definition of variables of type integer.

```
td = 15
tm = 12
ty = 2018
print(td, tm, ty)
```

Tuple packing

```
td, tm, ty = 15, 12, 2018
print(td, tm, ty)
print(type(td))
```

You can use tuple packing to assign the values to a single variable, too.

```
date = 31, 12, 2018
print(date)
print(type(date))

(day, month, year) = date
print(year, month, day)
```

Tuple packing makes an exchange of the content of variables much easier.

```
x, y = 47, 11
x, y = y, x
print(x, y)
```

Ok, now we've learned a lot about tuples, but not all. There is a very helpful way to unpack a tuple.

Unpacking example with a tuple of integers.

```
tup = (123, 34, 79, 133)

X, *Y = tup
print(X)
print(Y)

X, *Y, Z = tup
print(X)
print(Y)
print(Z)

X, Y, *Z = tup
print(X)
print(Y)
print(Z)
```

Unpacking example with a tuple of strings.
```
Name = 'Elizabeth'

A, *B, C = Name
print(A)
print(B)
print(C)

A, B, *C = Name
print(A)
print(B)
print(C)
```

## 1.5 Computations

To do computations you can use the algebraic operators on numeric values and lists.

```
m = 12
d = 8.1

s = m + d
print(s)
print(type(s))
```

The built-in functions max(), min(), and sum() for instance can be used to do computations for us.

```
data = [12.2, 16.7, 22.0, 9.3, 13.1, 18.1, 15.0, 6.8]

data_min = min(data)
data_max = max(data)
data_sum = sum(data)

print('Minimum %6.1f' % data_min)
print('Maximum %6.1f' % data_max)
print('Sum     %6.1f' % data_sum)
```

To do computations with lists is not that simple. Multiply the content of the list values by 10.

```
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

values10 = values * 10
print(values10)
```

That is not what you expected, is it? Multiplying a list by 10 does not multiply its elements, it repeats the list 10 times. We have to go through the list and multiply each single element by 10. There is a long and a short way to do it.

The long way:

```
values10 = values[:]
for i in range(0, len(values)):
    values10[i] = values[i] * 10
print(values10)
```

The more efficient way is to use Python's list comprehension:

```
values10 = [i * 10 for i in values]
print(values10)

# just to be sure that the original values list is not overwritten.
print(values)
```

Also worth noting are the in-place operators **+=** and ***=**.

```
ix = 1
print(ix)

ix += 3   # same as ix = ix + 3
print(ix)

ix *= 2   # same as ix = ix * 2
print(ix)
```
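The element-wise pattern above also extends to combining two lists, using Python's built-in zip inside a comprehension. A small sketch with illustrative data:

```python
prices = [12.2, 16.7, 22.0, 9.3]
quantities = [3, 1, 2, 5]

# element-wise product of two lists via zip in a list comprehension
totals = [p * q for p, q in zip(prices, quantities)]
print(totals)

# sum() then aggregates the element-wise results
print(sum(totals))
```

This avoids the manual index loop entirely and reads almost like the mathematical definition.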
```
x = 0

if(x>0):
    print('x is greater than 0')
elif(x<0):
    print('x is less than 0')
elif(x==0):
    print('x is equal 0')

user = 'George'

if(user):
    print('user is set')

if(user=='Dennis'):
    print('--> it is Dennis')
else:
    print('--> but it is not Dennis')
```

### 1.6.2 while statement

The body of a while loop is executed until the condition becomes False.

```
a = 0
b = 10

while(a < b):
    print('a =', a)
    a = a + 1
```

### 1.6.3 for statement

The use of the for statement differs from other programming languages because it iterates over the items of any sequence, e.g. a list or a string, in the order that they appear in the sequence.

```
s = 0
for x in [1, 2, 3, 4]:
    s = s + x
print('sum = ', s)

# Now, let us find the shortest name of the list names.
# Oh, by the way this is a comment line :), which will not be executed.
index = -99
length = 50
i = 0
for name in names:
    if(len(name) < length):
        length = len(name)
        index = i
    i += 1
print('--> shortest name in list names is', names[index])
```

## 1.7 Import Python modules

Usually you need to load some additional Python packages, so called modules, in your program in order to use their functionality. This can be done with the command **import**, whose usage may look different.

```python
import module_name
import module_name as short_name
from module_name import module_part
```

### 1.7.1 Module os

We start with a simple example. To get access to the operating system outside our program we have to import the module **os**.

```
import os
```

Take a look at the module.

```
print(help(os))
```

Ok, let's see in which directory we are.

```
pwd = os.getcwd()
print(pwd)
```

Go to the parent directory and let us see where we are then.

```
os.chdir('..')
print("Directory changed: ", os.getcwd())
```

Go back to the directory where we started (that's why we wrote the name of the directory to the variable pwd ;)).
```
os.chdir(pwd)
print("Directory changed: ", os.getcwd())
```

To retrieve the content of an environment variable the module os provides the os.environ.get function.

```
HOME = os.environ.get('HOME')
print('My HOME environment variable is set to ', HOME)
```

Concatenate path names with os.path.join.

```
datadir = 'data'

newpath = os.path.join(HOME, datadir)
print(newpath)
```

Now, we want to see if the directory really exists.

```
if os.path.isdir(newpath):
    print('--> directory %s exists' % newpath)
else:
    print('--> directory %s does not exist' % newpath)
```

Modify the datadir variable, run the cells and see what happens.

But how do we check whether a file exists? Well, there is a function os.path.isfile, who would have thought!

```
input_file = os.path.join('data', 'precip.nc')

if os.path.isfile(input_file):
    print('--> file %s exists' % input_file)
else:
    print('--> file %s does not exist' % input_file)
```

Add a cell and play with the os functions in your environment.

### 1.7.2 Module glob

In the last case we already know the name of the file we are looking for, but in most cases we don't know what is in a directory. To get the file names from a directory the **glob** module is very helpful. For example, after importing the glob module the glob function of glob (weird, isn't it?) will return a list of all netCDF files in the subdirectory data.

```
import glob

fnames = sorted(glob.glob('./data/*.nc'))
print(fnames)
```

Now, we can select a file, for instance the second one, of fnames.

```
print(fnames[1])
```

But how can we get rid of the leading path? And yes, the os module can help us again with its path.basename function.

```
print(os.path.basename(fnames[1]))
```

### 1.7.3 Module sys

The module sys provides access to system variables and functions. The module includes functions to read from stdin, write to stdout and stderr, and others.
Here we will take a closer look at the sys.path part of the module, which among other things allows us to extend the search path for loaded modules.

In the subdirectory **lib** is a file containing user defined Python functions called **dkrz_utils.py**. To load the file like a module we have to extend the system path before calling any function from it.

```
import sys
sys.path.append('./lib/')

import dkrz_utils

tempK = 286.5   #-- units Kelvin
print('Convert: %6.2f degK == %6.2f degC' % (tempK, (dkrz_utils.conv_K2C(tempK))))
```
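The file dkrz_utils.py itself is not shown in this tutorial; a Kelvin-to-Celsius conversion, however, reduces to subtracting 273.15, so a minimal stand-in (assumed reconstruction, not the actual course file) could look like this:

```python
def conv_K2C(temp_k):
    """Convert a temperature from Kelvin to degrees Celsius."""
    return temp_k - 273.15

tempK = 286.5   #-- units Kelvin
print('Convert: %6.2f degK == %6.2f degC' % (tempK, conv_K2C(tempK)))
```

Defining such a stand-in lets you run the cell above even without the lib directory in place.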
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it.

```
#! pip install datasets transformers[sentencepiece] sacrebleu
```

If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.

To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.

First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then execute the following cell and input your username and password:

```
from huggingface_hub import notebook_login

notebook_login()
```

Then you need to install Git-LFS. Uncomment the following instructions:

```
# !apt install git-lfs
```

Make sure your version of Transformers is at least 4.11.0 since the functionality was introduced in that version:

```
import transformers

print(transformers.__version__)
```

You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq).

# Fine-tuning a model on a translation task

In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a translation task. We will use the [WMT dataset](http://www.statmt.org/wmt16/), a machine translation dataset composed from a collection of various sources, including news commentaries and parliament proceedings.

![Widget inference on a translation task](images/translation.png)

We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using the `Trainer` API.
```
model_checkpoint = "Helsinki-NLP/opus-mt-en-ro"
```

This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`Helsinki-NLP/opus-mt-en-ro`](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) checkpoint.

## Loading the dataset

We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. We use the English/Romanian part of the WMT dataset here.

```
from datasets import load_dataset, load_metric

raw_datasets = load_dataset("wmt16", "ro-en")
metric = load_metric("sacrebleu")
```

The `dataset` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for each of the training, validation and test sets:

```
raw_datasets
```

To access an actual element, you need to select a split first, then give an index:

```
raw_datasets["train"][0]
```

To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.

```
import datasets
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=5):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)

    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, datasets.ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))

show_random_elements(raw_datasets["train"])
```

The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):

```
metric
```

You can call its `compute` method with your predictions and labels, which need to be lists of decoded strings (a list of lists for the labels):

```
fake_preds = ["hello there", "general kenobi"]
fake_labels = [["hello there"], ["general kenobi"]]
metric.compute(predictions=fake_preds, references=fake_labels)
```

## Preprocessing the data

Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that the model requires.

To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:

- we get a tokenizer that corresponds to the model architecture we want to use,
- we download the vocabulary used when pretraining this specific checkpoint.

That vocabulary will be cached, so it's not downloaded again the next time we run the cell.

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```

For a mBART tokenizer (if you swap in such a checkpoint), we need to set the source and target languages (so the texts are preprocessed properly).
You can check the language codes [here](https://huggingface.co/facebook/mbart-large-cc25) if you are using this notebook on a different pair of languages.

```
if "mbart" in model_checkpoint:
    tokenizer.src_lang = "en_XX"
    tokenizer.tgt_lang = "ro_RO"
```

By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.

You can directly call this tokenizer on one sentence or a pair of sentences:

```
tokenizer("Hello, this one sentence!")
```

Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.

Instead of one sentence, we can pass along a list of sentences:

```
tokenizer(["Hello, this one sentence!", "This is another sentence."])
```

To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:

```
with tokenizer.as_target_tokenizer():
    print(tokenizer(["Hello, this one sentence!", "This is another sentence."]))
```

If you are using one of the five T5 checkpoints that require a special prefix to put before the inputs, you should adapt the following cell.

```
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
    prefix = "translate English to Romanian: "
else:
    prefix = ""
```

We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model.
The padding will be dealt with later on (in a data collator) so that we pad examples to the longest length in the batch and not the whole dataset.

```
max_input_length = 128
max_target_length = 128
source_lang = "en"
target_lang = "ro"

def preprocess_function(examples):
    inputs = [prefix + ex[source_lang] for ex in examples["translation"]]
    targets = [ex[target_lang] for ex in examples["translation"]]
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)

    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=max_target_length, truncation=True)

    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```

This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:

```
preprocess_function(raw_datasets['train'][:2])
```

To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of the `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.

```
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
```

Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus that the cached data should not be reused). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.

Note that we passed `batched=True` to encode the texts by batches together.
This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.

## Fine-tuning the model

Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `AutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.

```
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```

Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case.

To instantiate a `Seq2SeqTrainer`, we will need to define three more things. The most important is the [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments), which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:

```
batch_size = 16
model_name = model_checkpoint.split("/")[-1]

args = Seq2SeqTrainingArguments(
    f"{model_name}-finetuned-{source_lang}-to-{target_lang}",
    evaluation_strategy = "epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=1,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
)
```

Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the `batch_size` defined at the top of the cell and customize the weight decay. Since the `Seq2SeqTrainer` will save the model regularly and our dataset is quite large, we tell it to make three saves maximum.
Lastly, we use the `predict_with_generate` option (to properly generate translations) and activate mixed precision training (to go a bit faster).

The last argument sets up everything so that we can push the model to the [Hub](https://huggingface.co/models) regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally under a name that is different from the name of the repository it will be pushed to, or if you want to push your model under an organization and not your name space, use the `hub_model_id` argument to set the repo name (it needs to be the full name, including your namespace: for instance `"sgugger/marian-finetuned-en-to-ro"` or `"huggingface/marian-finetuned-en-to-ro"`).

Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels:

```
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
```

The last thing to define for our `Seq2SeqTrainer` is how to compute the metrics from the predictions. We need to define a function for this, which will just use the `metric` we loaded earlier, and we have to do a bit of pre-processing to decode the predictions into texts:

```
import numpy as np

def postprocess_text(preds, labels):
    preds = [pred.strip() for pred in preds]
    labels = [[label.strip()] for label in labels]
    return preds, labels

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    # Some simple post-processing
    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}

    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    result = {k: round(v, 4) for k, v in result.items()}
    return result
```

Then we just need to pass all of this along with our datasets to the `Seq2SeqTrainer`:

```
trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)
```

We can now finetune our model by just calling the `train` method:

```
trainer.train()
```

You can now upload the result of the training to the Hub, just execute this instruction:

```
trainer.push_to_hub()
```

You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `"your-username/the-name-you-picked"` so for instance:

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("sgugger/my-awesome-model")
```
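The -100 replacement step inside compute_metrics above can be illustrated in isolation: label positions padded with the -100 sentinel (which the loss ignores but the tokenizer cannot decode) are mapped back to a real token id before decoding. A NumPy-only sketch, using a hypothetical pad id of 0 in place of `tokenizer.pad_token_id`:

```python
import numpy as np

pad_token_id = 0  # hypothetical; in the notebook this comes from tokenizer.pad_token_id

# two label sequences of different lengths, padded with the -100 sentinel
labels = np.array([[13, 7, 42, -100, -100],
                   [ 5, 9, -100, -100, -100]])

# np.where keeps real label ids and substitutes the pad id for the sentinel,
# so batch_decode can safely skip them as special tokens afterwards
clean = np.where(labels != -100, labels, pad_token_id)
print(clean)
```

After this substitution every entry is a valid vocabulary id, which is exactly what `tokenizer.batch_decode` requires.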
**Loading Data and creating benchmark model**

```
# Defining the path to the Github repository
file_url = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter17/Datasets/bank-full.csv'

# Loading data using pandas
import pandas as pd
bankData = pd.read_csv(file_url, sep=";")
bankData.head()

# Removing the target variable
Y = bankData.pop('y')

from sklearn.model_selection import train_test_split

# Splitting the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(bankData, Y, test_size=0.3, random_state=123)

# Using pipeline to transform categorical and numeric variables
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder

categorical_transformer = Pipeline(steps=[('onehot', OneHotEncoder(handle_unknown='ignore'))])
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])

# Defining data types for numeric and categorical features
numeric_features = bankData.select_dtypes(include=['int64', 'float64']).columns
categorical_features = bankData.select_dtypes(include=['object']).columns

# Defining preprocessor
from sklearn.compose import ColumnTransformer
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Defining the estimator for processing and classification
from sklearn.linear_model import LogisticRegression
estimator = Pipeline(steps=[('preprocessor', preprocessor),
                            ('classifier', LogisticRegression(random_state=123))])

# Fit the estimator on the training set
estimator.fit(X_train, y_train)
print("model score: %.2f" % estimator.score(X_test, y_test))

# Predict on the test set
pred = estimator.predict(X_test)

# Generating classification report
from sklearn.metrics import classification_report
print(classification_report(pred, y_test))
```

**Establishing entities and relationship**

```
# Creating the Ids for Demographic Entity
bankData['custID'] = bankData.index.values
bankData['custID'] = 'cust' + bankData['custID'].astype(str)

# Creating AssetId
bankData['AssetId'] = 0
bankData.loc[bankData.housing == 'yes', 'AssetId'] = 1

# Creating LoanId
bankData['LoanId'] = 0
bankData.loc[bankData.loan == 'yes', 'LoanId'] = 1

# Creating Financial behaviour ID
bankData['FinbehId'] = 0
bankData.loc[bankData.default == 'yes', 'FinbehId'] = 1

# Importing necessary libraries
import featuretools as ft
import numpy as np

# creating the entity set 'Bankentities'
Bankentities = ft.EntitySet(id='Bank')

# Mapping a dataframe to the entityset to form the parent entity
Bankentities.entity_from_dataframe(entity_id='Demographic Data', dataframe=bankData, index='custID')

# Mapping to parent entity and setting the relationship
Bankentities.normalize_entity(base_entity_id='Demographic Data', new_entity_id='Assets',
                              index='AssetId', additional_variables=['housing'])
Bankentities.normalize_entity(base_entity_id='Demographic Data', new_entity_id='Liability',
                              index='LoanId', additional_variables=['loan'])
Bankentities.normalize_entity(base_entity_id='Demographic Data', new_entity_id='FinBehaviour',
                              index='FinbehId', additional_variables=['default'])
```

**Feature Engineering**

```
# Creating aggregation and transformation primitives
aggPrimitives = [
    'std', 'min', 'max', 'mean',
    'last', 'count'
]
tranPrimitives = [
    'percentile',
    'subtract', 'divide'
]

# Defining the new set of features
feature_set, feature_names = ft.dfs(entityset=Bankentities,
                                    target_entity='Demographic Data',
                                    agg_primitives=aggPrimitives,
                                    trans_primitives=tranPrimitives,
                                    max_depth=2, verbose=1, n_jobs=1)

# Reindexing the feature_set
feature_set = feature_set.reindex(index=bankData['custID'])
feature_set = feature_set.reset_index()

# Displaying the feature set
feature_set.shape
```

**Cleaning na values and infinity values**

```
# Dropping all Ids
X = feature_set[feature_set.columns[~feature_set.columns.str.contains(
    'custID|AssetId|LoanId|FinbehId')]]

# Replacing all columns with infinity with nan
X = X.replace([np.inf, -np.inf], np.nan)

# Dropping all columns with nan
X = X.dropna(axis=1, how='any')
X.shape
```

**Modelling phase**

```
# Splitting train and test sets
from sklearn.model_selection import train_test_split

# Splitting the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=123)

# Creating the preprocessing pipeline
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder

categorical_transformer = Pipeline(steps=[('onehot', OneHotEncoder(handle_unknown='ignore'))])
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])

numeric_features = X.select_dtypes(include=['int64', 'float64']).columns
categorical_features = X.select_dtypes(include=['object']).columns

from sklearn.compose import ColumnTransformer
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Creating the estimator function and fitting the training set
estimator = Pipeline(steps=[('preprocessor', preprocessor),
                            ('classifier', LogisticRegression(random_state=123))])
estimator.fit(X_train, y_train)
print("model score: %.2f" % estimator.score(X_test, y_test))

# Predicting on the test set
pred = estimator.predict(X_test)

# Generating the classification report
from sklearn.metrics import classification_report
print(classification_report(pred, y_test))
```
![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true)

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Science/SourcesOfEnergy/resources-and-recycling.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>

```
import myMagics
%uiButtons
```

# Sources of Energy

## Learning Summary

Students will begin to understand the importance of managing waste properly. They will learn about the different sources of energy, and that these energy sources are not unlimited and can run out. By the end of this notebook, students will understand which energy sources are renewable and which are non-renewable. They will begin to think about the good and bad traits of each energy source as well as its lifespan, and will be prompted to consider the overall efficiency of each source after taking all factors into account.

## Solar Energy

Solar energy is energy that comes from the sun. The earth receives more energy from the sun in one hour than the whole world uses in one year. Solar energy is used to generate electricity using solar panels. A solar panel works by allowing particles of light to knock electrons free from atoms, generating a flow of electricity. However, solar panels are built from solar cells, which are made from existing materials such as silicon, a finite resource. Solar panels are also difficult to manufacture, which keeps solar energy use relatively low. Once they are made, solar panels do not release any pollution into the air. Energy from the sun arrives every day, which means it will never run out, making solar energy a renewable resource.
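To put some numbers on this, a short calculation cell can estimate the electricity one panel produces. The panel area, sunlight intensity, and efficiency below are assumed, typical-looking values for illustration only:

```python
# Rough estimate of one solar panel's electrical output (all values assumed)
panel_area_m2 = 1.6         # area of a typical residential panel, in square metres
sunlight_w_per_m2 = 1000.0  # bright midday sun delivers roughly 1000 W per square metre
efficiency = 0.20           # a 20% efficient panel

power_watts = panel_area_m2 * sunlight_w_per_m2 * efficiency
print("Panel output in full sun: %.0f W" % power_watts)  # prints "Panel output in full sun: 320 W"
```

Try changing the efficiency or panel area to see how the output responds.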
<img src="http://www.mountainjunkiegear.com/wp-content/uploads/2013/04/How-Solar-Panels-Work.jpg" style="margin: 0 auto; width: 1000px;"> #### Source Image: Solar Panels for your Home, Nov. 2015. Retrieved from http://www.mountainjunkiegear.com/wp-content/uploads/2013/04/How-Solar-Panels-Work ## Oil Energy and Natural Gas Energy The most used source of energy around the world is oil. Oil occurs naturally and takes over a million years to form from old plants and bacteria. Oil comes from far underground. It is brought to the surface by special pumps and pipes. Most electricity, fuel and everyday things such as plastic come from this source. The most common use of oil that you may know is gasoline for vehicles. Oil is a fossil fuel. A fossil fuel is a source of energy that does not reproduce at the rate it is being used making it a non-renewable resource. ``` from IPython.display import YouTubeVideo YouTubeVideo('WW8KfUJdTNY', width=800, height=300) ``` ##### Source Video: Crude Oil Extraction, July 2016. Retrieved from https://www.youtube.com/watch?time_continue=83&v=WW8KfUJdTNY Similarly, natural gas is found underground all over the world. It is often located around coal or oil pockets in the earth. The word 'natural' means that it is a gas formed by natural chemical processes that occur on earth. Natural gas is a by-product of decomposing biomass such as trees, grass, animals, wood and leaves. Your house uses natural gas every day for heating, cooking and electricity. How does natural gas get from deep within the ground all the way to your house to be used? Natural gas companies drill thousands of feet into the earth and use big pumps to bring it to the surface. Then they send the gas to your town through gas pipes buried underground. A gas company brings it to your house in smaller pipes. Each household in Alberta pays a natural gas bill each month. 
Since it is produced from the same processes as oil, natural gas is a fossil fuel too, making it a non-renewable source.

## Coal Energy

Another major source of energy around the world is coal. Most of the coal we use now was formed 300 million years ago. The energy in coal comes from energy that was stored in giant plants that lived hundreds of millions of years ago in swamp forests, even before the dinosaurs!

<img src="http://www.dynamicscience.com.au/tester/solutions1/electric/electricenergy.gif" style="margin: 0 auto; width: 1000px;">

#### Source Image: Energy Conservation, n.d. Retrieved from http://www.dynamicscience.com.au/tester/solutions1/electric/electricenergy

In the above diagram we see that plants are the only organisms on Earth that can harness solar energy and convert it into a more usable form called chemical energy. Chemical energy is trapped in the form of wood and other plant matter. Coal is also a store of chemical energy: when the plant matter died, it formed layers at the bottom of swamps. Water and dirt began to pile up on top of the dead plant remains, and years of pressure formed the rock we call coal.

Humans dig up coal and burn it. When coal is burned, its stored chemical energy is converted into heat energy. The heat energy is used to heat water into super-hot steam. Steam occurs when water heats up to such a high temperature that it changes from a liquid to a gas. Steam carries kinetic energy. Kinetic energy is the energy of motion, in other words the movement of particles. The steam (kinetic energy) spins a turbine, which drives a generator to produce electrical energy. Coal is a fossil fuel, making it non-renewable: humans use coal at a faster rate than it reproduces.

The law of conservation of energy states that energy cannot be created nor can it be destroyed. It can, however, change forms. Take the process outlined in the animation below. At every step there is a loss of energy. The efficiency of the process is given as an estimated percentage.
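The way these losses multiply can be made concrete in a short cell. The step efficiencies below are assumed, illustrative percentages, not the figures from the animation:

```python
# Overall efficiency of an energy conversion chain is the product of its step efficiencies
# (all percentages here are assumed for illustration)
step_efficiencies = {
    'chemical -> heat (burning coal)': 0.90,
    'heat -> kinetic (raising steam)': 0.45,
    'kinetic -> mechanical (turbine)': 0.95,
    'mechanical -> electrical (generator)': 0.95,
}

overall = 1.0
for step, eff in step_efficiencies.items():
    overall *= eff
    print('%-40s %3.0f%%  (running total %4.1f%%)' % (step, eff * 100, overall * 100))
```

Even with each step well above 40% efficient, the overall chain delivers only a fraction of the coal's original chemical energy as electricity.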
<img src="http://www.dynamicscience.com.au/tester/solutions1/electric/powerstation/Untitled-17.gif" style="margin: 0 auto; width: 1000px;"> #### Source Image: Energy Conservation, n.d. Retrieved from http://www.dynamicscience.com.au/tester/solutions1/electric/electricenergy ``` %%html <style> #list { margin: 20px 0; padding: 0; } #list li { list-style: none; margin: 5px 0; } .energy { font-family: 'Courier New', Courier, monospace; font-size: 15px; } .answerSelect { margin: 10px 10px; } .correct { color: green; font-size: 25px; display: none; } .wrong { color: red; font-size: 25px; display: none; } .ansBtn { cursor: pointer; border: solid black 1px; background: #d3d3d3; padding: 10px 5px; border-radius: 0px; font-family: arial; font-size: 20px; } .ansBtn:hover { background: #f3f3f3; } .redtext { color: red; } .greentext { color: green; } </style> <body> <div style="height: 300px"> <ul id="list"> <li> <label for="q1">1) What process captures solar energy?</label> <select name="q1" id="q1" class="answerSelect"> <option value="default" selected>Select an Answer</option> <option value="evaporation">Evaporation</option> <option value="photosynthesis">Photosynthesis</option> <option value="respiration">Respiration</option> <option value="condensation">Condensation</option> </select> <span class="correct" id="Q1C">&#10003</span> <span class="wrong" id="Q1W">&#10007</span> </li> <li> <label for="q2">2) Which is the most inefficient energy conversion step in the process outlined above?</label> <select name="q2" id="q2" class="answerSelect"> <option value="default" selected>Select an Answer</option> <option value="kinetic into mechanical">Kinetic into mechanical</option> <option value="chemical into heat">Chemical into heat</option> <option value="mechanical into electrical">Mechanical into electrical</option> <option value="solar into chemical">Solar into chemical</option> </select> <span class="correct" id="Q2C">&#10003</span> <span class="wrong" id="Q2W">&#10007</span> 
</li> <li> <label for="q3">3) The more steps in the process of generating electrical energy the</label> <select name="q3" id="q3" class="answerSelect"> <option value="default" selected>Select an Answer</option> <option value="less electrical energy that is generated">Less electrical energy that is generated</option> <option value="the more electrical energy that is generated">The more electrical energy that is generated</option> </select> <span class="correct" id="Q3C">&#10003</span> <span class="wrong" id="Q3W">&#10007</span> </li> <li> <label for="q4">4) The energy lost is in the form of</label> <select name="q4" id="q4" class="answerSelect"> <option value="default" selected>Select an Answer</option> <option value="Electrical">Electrical</option> <option value="Heat">Heat</option> <option value="Chemical">Chemical</option> <option value="Mechanical">Mechanical</option> </select> <span class="correct" id="Q4C">&#10003</span> <span class="wrong" id="Q4W">&#10007</span> </li> <li> <label for="q5">5) What type of energy is carried by steam</label> <select name="q5" id="q5" class="answerSelect"> <option value="default" selected>Select an Answer</option> <option value="Electrical">Electrical</option> <option value="Chemical">Chemical</option> <option value="Mechanical">Mechanical</option> <option value="Kinetic">Kinetic</option> </select> <span class="correct" id="Q5C">&#10003</span> <span class="wrong" id="Q5W">&#10007</span> </li> </ul> <span class="ansBtn" id="ansBtn" onclick="checkAns()">Check Answers!</span> </div> <script src="main.js"></script> </body> ``` ## Biomass Energy Perhaps one of the oldest forms of fuel known is biomass fuel. Biomass is any kind of biological matter that humans can burn in order to produce heat or energy. Biomass mainly consists of wood, leaves, and grass. All biomass has storages of carbon and when biomass is burned the carbon is released into the atmosphere as CO2 gas. 
Resources such as wood, leaves and grass are NOT fossil fuels because biomass is said to be carbon neutral. Carbon neutral means that the amount of carbon released when biomass is burned is equal to the amount of carbon taken back out of the atmosphere by growing plants through photosynthesis. As humans we must remember that trees take years to grow, so if we carelessly use wood it may not always be a resource immediately available for use. When someone in Alberta cuts down a tree to burn it for energy, it is said that one tree must be planted to replace it. Humans across Canada work to replace trees at the rate at which we use them, making biomass a renewable source.

##### Biomass Fun Facts!

1) If you’ve ever been near a campfire or a fireplace, you’ve witnessed biomass energy through the burning of wood.

2) Biomass has been used since the beginning of recorded history, when people burned wood for heating and cooking.

3) Wood was the biggest energy provider in the world in the 1800s.

4) Garbage can be burned to generate energy as well. This not only makes use of trash for energy, but reduces the amount of trash that goes into landfills. This process is called Waste-to-Energy.

## Wind Energy

Wind is a newer source of energy: its use for energy production, mainly electricity, has only been developed recently. Most wind power is converted to electricity by using giant machines called wind turbines. Wind is a natural resource that will never run out, making it a renewable resource. Naturally occurring wind moves the turbines, and the turbines power a generator. A generator is a device that converts mechanical energy to electrical energy; in this case the mechanical energy is the movement of the turbines created by the wind. This mechanical energy is changed into electrical energy that can be used in a home. Did you know that Alberta has a wind energy capacity of 1,483 megawatts?
Alberta's wind farms produce enough electricity each year to power 625,000 homes, which is 8 percent of Alberta's electricity demand.

<img src="https://i.gifer.com/so8.gif" style="margin: 0 auto; width: 1000px;">

#### Source Image: #Wind, n.d. Retrieved from https://gifer.com/en/so8

## Water Energy

The correct term for water energy is hydropower; 'hydro' means water. The first use of water for energy dates back to around 4000 B.C.; during Roman times, water wheels were used to water crops and supply drinking water to villages. Now, water creates energy in hydro-dams. A hydro-dam produces electricity when flowing water pushes a device called a turbine. The turbine spins a generator, which converts mechanical energy into electrical energy; in this case the mechanical energy is the spinning of the turbine as the water pushes it. Water is considered a resource that will never run out: it is plentiful and is replenished every time it rains.

<img src="http://www.wvic.com/images/stories/Hydroplants/hydroplant-animate.gif" style="margin: 0 auto; width: 1000px;">

#### Source Image: Wisconsin Valley Improvement Company, n.d. Retrieved from http://www.wvic.com/content/how_hydropower_works.cfm

## Nuclear Energy

Nuclear energy uses the power of the atom to create steam power. Atoms can release energy in two different ways: nuclear fission, in which the nucleus of an atom is split, or nuclear fusion, in which the nuclei of two atoms are joined. The energy produced by the Sun, for example, comes from nuclear fusion reactions. Hydrogen gas in the core of the Sun is squeezed together so tightly that four hydrogen particles combine to form one helium atom; this is called nuclear fusion. When either of these two physical reactions occurs, the atoms experience a slight loss of mass, and the mass that is lost becomes a large amount of heat energy and light. This is why the sun is so hot and shines brightly.
Did you know that Albert Einstein discovered his famous equation, E = mc², with the sun and stars in mind? The equation says that "energy equals mass times the speed of light squared." The heat generated in nuclear fusion and nuclear fission is used to heat up water and produce steam, which is then used to create electricity. The splitting and joining of atoms occur safely within the walls of a nuclear power station. Nuclear power generates nuclear waste that can be dangerous to human health and the environment.

```
from IPython.display import YouTubeVideo
YouTubeVideo('igf96TS3Els', width=800, height=300)
```

#### Source Video: Nuclear Power Station, July 2008. Retrieved from https://www.youtube.com/watch?v=igf96TS3Els

## Geothermal Energy

Geothermal energy is generated by the high temperature of earth's core heating water into steam. The term 'geo' means earth and 'thermal' means heat, so geothermal means 'heat from the earth.' Geothermal energy plants convert large amounts of steam (kinetic energy) into usable electricity. Geothermal energy plants are located in prime areas; Canada does not have any commercial geothermal energy plants.

Note that there is a device called a geothermal heat pump that can tap into geothermal energy to heat and cool buildings. A geothermal heat pump system consists of a heat pump, an air delivery system (ductwork), and a heat exchanger: a system of pipes buried in the shallow ground near the building. In the summer, the heat pump moves heat from the indoor air into the heat exchanger. In the winter, the heat pump removes heat from the heat exchanger and pumps it into the indoor air delivery system.

Why is geothermal energy a renewable resource? Because its source is the effectively unlimited amount of heat generated by the Earth's core. It is important to recognize that geothermal energy systems DO NOT get their heat directly from the core.
Instead, they pull heat from the crust, the rocky upper 20 miles of the planet's surface.

```
from IPython.display import YouTubeVideo
YouTubeVideo('y_ZGBhy48YI', width=800, height=300)
```

#### Source Video: Energy 101: Geothermal Heat Pumps, Jan. 2011.

## Renewable Energy Sources vs. Non-renewable

Renewable energy sources are energy sources that can be replaced at the same rate they are used. The source is plentiful and generally quite efficient. An example of a renewable energy source is wind: there is a limitless supply that is naturally produced.

Non-renewable energy sources are those that run out more quickly than they are naturally reproduced. These energy sources usually take millions of years to form, and they have a bigger negative impact on the earth than alternative sources. An example of a non-renewable energy source is oil: humans are using it faster than it is being replaced naturally on earth.

In order to get comfortable with the two types of energy sources, try identifying the renewable energy sources from the non-renewable ones in the activity below.

```
%%html
<!-- Question 1 -->
<div> &nbsp;&nbsp; &nbsp;&nbsp;
    <!-- Is Solar Energy a Renewable Energy Source? <p> tag below -->
    <p>Is Solar Energy a Renewable Energy Source?</p>
    <ul style="list-style-type: none">
        <li>
            <!-- Each <li> tag is a list item to represent a possible answer.
            q1c1 stands for "question 1 choice 1"; question 2 choice 3 would be q2c3, for example.
            You can change this convention if you want, it's just what I chose.
            Make sure all answers for a question have the same name attribute, in this case q1.
This makes it so only a single radio button can be selected at one time--> <input type="radio" name="q1" id="q1c1" value="right"> <label for="q1c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q1" id="q1c2" value="wrong"> <label for="q1c2">No, it is a Non-Renewable Energy Source.</label> </li> </ul> <!-- Give a unique id for the button, i chose q1Btn. Question 2 I I would choose q2Btn and so on. This is used to tell the script which question we are interested in. --> <button id="q1Btn">Submit</button> <!-- this is where the user will get feedback once answering the question, the text that will go in here will be generated inside the script --> <p id="q1AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 2 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Oil Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q2" id="q2c1" value="wrong"> <label for="q2c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q2" id="q2c2" value="right"> <label for="q2c2">No, it is a Non-Renewable Energy Source. </label> </li> </ul> <button id="q2Btn">Submit</button> <p id="q2AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 3 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Natural Gas a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q3" id="q3c1" value="wrong"> <label for="q3c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q3" id="q3c2" value="right"> <label for="q3c2">No, it is a Non-Renewable Energy Source. 
</label> </li> </ul> <button id="q3Btn">Submit</button> <p id="q3AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 4 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Coal Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q4" id="q4c1" value="wrong"> <label for="q4c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q4" id="q4c2" value="right"> <label for="q4c2">No, it is a Non-Renewable Energy Source. </label> </li> </ul> <button id="q4Btn">Submit</button> <p id="q4AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 5 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Biomass Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q5" id="q5c1" value="right"> <label for="q5c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q5" id="q5c2" value="wrong"> <label for="q5c2">No, it is a Non-Renewable Energy Source. </label> </li> </ul> <button id="q5Btn">Submit</button> <p id="q5AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 6 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Wind Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q6" id="q6c1" value="right"> <label for="q6c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q6" id="q6c2" value="wrong"> <label for="q6c2">No, it is a Non-Renewable Energy Source. 
</label> </li> </ul> <button id="q6Btn">Submit</button> <p id="q6AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 7 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Water Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q7" id="q7c1" value="right"> <label for="q7c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q7" id="q7c2" value="wrong"> <label for="q7c2">No, it is a Non-Renewable Energy Source. </label> </li> </ul> <button id="q7Btn">Submit</button> <p id="q7AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 8 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Nuclear Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q8" id="q8c1" value="wrong"> <label for="q8c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q8" id="q8c2" value="right"> <label for="q8c2">No, it is a Non-Renewable Energy Source. </label> </li> </ul> <button id="q8Btn">Submit</button> <p id="q8AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <!-- Question 9 --> <div> &nbsp;&nbsp; &nbsp;&nbsp; <p>Is Geothermal Energy a Renewable Energy Source?</p> <ul style="list-style-type: none"> <li> <input type="radio" name="q9" id="q9c1" value="right"> <label for="q9c1">Yes, it is Renewable.</label> </li> <li> <input type="radio" name="q9" id="q9c2" value="wrong"> <label for="q9c2">No, it is a Non-Renewable Energy Source. 
</label> </li> </ul> <button id="q9Btn">Submit</button> <p id="q9AnswerStatus"></p> &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; &nbsp;&nbsp; </div> <script> // Question 1 // This looks at which question is being checked, pass in the buttons id document.getElementById("q1Btn").onclick = function () { // This if statment is used for the correct answer, in this case choice 3 is correct if (document.getElementById("q1c1").checked) { // "Correct Answer" field is where you can add any text to be displayed when it is correct document.getElementById("q1AnswerStatus").innerHTML = "Correct Answer!"; } else { // "Wrong Answer" field is where you can add any text to be displayed when it is wrong document.getElementById("q1AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 2 document.getElementById("q2Btn").onclick = function () { if (document.getElementById("q2c2").checked) { document.getElementById("q2AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q2AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 3 document.getElementById("q3Btn").onclick = function () { if (document.getElementById("q3c2").checked) { document.getElementById("q3AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q3AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 4 document.getElementById("q4Btn").onclick = function () { if (document.getElementById("q4c2").checked) { document.getElementById("q4AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q4AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 5 document.getElementById("q5Btn").onclick = function () { if (document.getElementById("q5c1").checked) { document.getElementById("q5AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q5AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 6 document.getElementById("q6Btn").onclick = function () { if (document.getElementById("q6c1").checked) { 
document.getElementById("q6AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q6AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 7 document.getElementById("q7Btn").onclick = function () { if (document.getElementById("q7c1").checked) { document.getElementById("q7AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q7AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 8 document.getElementById("q8Btn").onclick = function () { if (document.getElementById("q8c2").checked) { document.getElementById("q8AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q8AnswerStatus").innerHTML = "Wrong Answer :("; } }; // Question 9 document.getElementById("q9Btn").onclick = function () { if (document.getElementById("q9c1").checked) { document.getElementById("q9AnswerStatus").innerHTML = "Correct Answer!"; } else { document.getElementById("q9AnswerStatus").innerHTML = "Wrong Answer :("; } }; </script> ``` ## The Good and Bad Traits of Energy Sources Now that we understand each of the energy sources, it is important to weigh the good and the bad traits of each energy source. Efficient means the energy technique is achieving maximum productivity with minimum wasted effort or expense. Note that the bad traits of an energy source are usually negative side effects that we are trying to lessen or prevent while gathering usable energy. <img src="https://thesolarscoop.com/wp-content/uploads/2018/03/Solar.jpg" style="margin: 0 auto; width: 1000px;"> #### Source Image: EcoFasten, March 2018. 
Retrieved from https://thesolarscoop.com/wp-content/uploads/2018/03/Solar ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Solar</h1> <p></p> <table style="width:100%" table align="left"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Solar energy has recently experienced decreasing costs and high public support. </td> <td style="text-align:left">Solar energy is intermittent, i.e. electricity production is dependent on sunlight.</td> </tr> <tr> <td style="text-align:left">Low CO2 emissions.</td> <td style="text-align:left">Expensive but in recent years the cost of solar energy equipment has decreased.</td> </tr> <tr> <td style="text-align:left">Easy to install, little operation and maintenance work.</td> <td style="text-align:left">Forecasts are more unpredictable in comparison to fossil fuels (but better than wind).</td> </tr> </table> <h3></h3> <p></p> </body> </html> from IPython.display import Image Image(url= "https://ak5.picdn.net/shutterstock/videos/17748445/thumb/5.jpg", width=1000, height=300) ``` #### Source Image: Shutterstock, n.d. Retrieved from https://www.shutterstock.com/video/clip-17748445-close-up-industrial-oil-pump-jack-working ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Oil</h1> <p></p> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Oil is cheap to produce and refine. </td> <td style="text-align:left">Burning oil for electricity is a major source of air pollution on Earth and leads to health concerns and environmental damage. 
</td> </tr> <tr> <td style="text-align:left">Unlike the renewable energy sources such as solar and wind energy that are weather dependent sources of power, Oil represents a reliable, ready-to-use source of energy.</td> <td style="text-align:left">Burning oil for energy releases harmful gases into the atmosphere such as carbon dioxide (CO2), carbon monoxide (CO), nitrogen oxides (NOx), and sulfur dioxide (SO2, causes acid rain). </td> </tr> <tr> <td></td> <td style="text-align:left">Despite the fact that oil energy can get jobs done in a less expensive way, it is not a renewable source of energy. There will come a time when we run out of supply.</td> </tr> </table> <h3></h3> <p></p> </body> </html> from IPython.display import Image Image(filename="images/gasmap.jpg", width=1000, height=300) ``` #### Source Image: Studentenergy, n.d. Retrieved from https://www.studentenergy.org/topics/natural-gas ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Natural Gas</h1> <p></p> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Emits the least CO2 compared to the other forms of non-renewable fossil fuels.</td> <td style="text-align:left">Gas drilling has a negative impact on the environment.</td> </tr> <tr> <td style="text-align:left"> Natural gas hot water heaters typically heat water twice as fast as electric heaters.</td> <td style="text-align:left">Some regions that sell natural gas face political instability. This usually occurs when a country is dependent on natural gas as their only source of income. 
</td> </tr> <tr> <td></td> <td style="text-align:left">Natural gas is the more expensive energy source in comparison to other fossil fuels.</td> </tr> </table> <h3></h3> <p></p> </body> </html> from IPython.display import Image Image(url= "https://images.theconversation.com/files/125332/original/image-20160606-26003-1hjtcr5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip", width=1000, height=100) ``` #### Source Image: The Conversation, June 2016. Retrieved from http://theconversation.com/is-coal-the-only-way-to-deal-with-energy-poverty-in-developing-economies-54163 ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Coal</h1> </p> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Coal provides stable and large-scale electricity generation.</td> <td style="text-align:left">Coal power plants emit high levels of CO2.</td> </tr> <tr> <td style="text-align:left">Coal power has a competitive production cost. Fuel costs are low and coal markets are well-functioning.</td> <td style="text-align:left">Technologies to reduce coal power plant CO2 emissions are expensive.</td> </tr> <tr> <td></td> <td style="text-align:left">Coal mining impacts the landscape and infrastructure leading to erosion and displacement of animals from their natural habitats.</td> </tr> </table> </body> </html> from IPython.display import Image Image(url= "https://media.nationalgeographic.org/assets/photos/000/317/31713.jpg", width=1000, height=100) ``` #### Source Image: National Geographic, Photographs by USDA, V. Zutshi, S. Beaugez, M. Hendrikx, S. Heydt, M. Oeltjenbruns, A. Munoraharjo, F. Choudhury, G. Upton, O. Siudak, M. Gunther, R. Singh. 
Retrieved from https://www.nationalgeographic.org/photo/2biomass-crops-dup/

```
%%html
<head>
<style>
table, th, td { border: 1px solid black; border-collapse: collapse; }
th, td { padding: 5px; text-align: left; }
</style>
</head>
<body>
<h1>Biomass</h1>
<p></p>
<table style="width:100%">
  <tr>
    <th style="text-align:center">Good Traits</th>
    <th style="text-align:center">Bad Traits</th>
  </tr>
  <tr>
    <td style="text-align:left">Biomass resources are abundant, cost-effective and political risk is limited.</td>
    <td style="text-align:left">Requires large storage space.</td>
  </tr>
  <tr>
    <td style="text-align:left">By using biomass in power production instead of fossil fuels, CO2 emissions are significantly reduced.</td>
    <td style="text-align:left">Burning biomass still emits a fair amount of CO2, and without proper management of biomass usage this CO2 becomes a net addition of greenhouse gas.</td>
  </tr>
  <tr>
    <td style="text-align:left">Properly managed biomass is carbon neutral over time. If not done in a sustainable way, biomass burning does more harm than good.</td>
    <td></td>
  </tr>
</table>
</body>
</html>

from IPython.display import Image
Image(url= "https://d32r1sh890xpii.cloudfront.net/article/718x300/1ffb18f07cf19289be69259800495f00.jpg", width=1000, height=300)
```

#### Source Image: Oilprice, n.d.
Retrieved from https://oilprice.com/Alternative-Energy/Wind-Power/US-Wind-Energy-Demand-Surges.html ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Wind</h1> <p></p> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Wind power emits essentially no CO2 across its life cycle.</td> <td style="text-align:left">Has an impact on the landscape, wildlife and also emits noise.</td> </tr> <tr> <td style="text-align:left">Has no fuel costs.</td> <td style="text-align:left">Dependent on available wind.</td> </tr> <tr> <td></td> <td style="text-align:left">Has significant investment costs.</td> </tr> </table> </body> </html> from IPython.display import Image Image(filename="images/hydroelectric.jpg") ``` #### Source Image: What is Hydroelectric Power Plant? How Does It Work?, Jul. 2020. Retrieved from https://www.usgs.gov/media/images/flow-water-produces-hydroelectricity ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Hydro</h1> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Hydro power has almost no emissions that impact the climate or the environment.</td> <td style="text-align:left">Hydro power plants are a significant encroachment on the landscape and impact river ecosystems.</td> </tr> <tr> <td style="text-align:left">Provides large-scale and stable electricity generation.</td> <td style="text-align:left">Constructing a new hydro power plant requires a substantial investment.</td> </tr> <tr> <td style="text-align:left">Has no fuel costs. 
Hydro power plants have a long economic life.</td> <td></td> </tr> </table> </body> </html> from IPython.display import Image Image(url= "https://images.theconversation.com/files/178921/original/file-20170719-13558-rs7g2s.jpg?ixlib=rb-1.1.0&rect=0%2C532%2C4000%2C2377&q=45&auto=format&w=496&fit=clip", width=1000, height=300) ``` #### Source Image: Harga, n.d. Retrieved from https://www.tokoonlineindonesia.id/small-nuclear-power-reactors-future-or-folly.html ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Nuclear</h1> <p></p> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">Nuclear power emits low levels of CO2 across its life cycle.</td> <td style="text-align:left">The management of high-level waste requires storage in secure facilities for a very long time.</td> </tr> <tr> <td style="text-align:left">Provides stable and large-scale electricity generation.</td> <td style="text-align:left">Construction of a new nuclear power plant requires major investments.</td> </tr> <tr> <td style="text-align:left">Costs for fuel, operation and maintenance are normally relatively low.</td> <td style="text-align:left">If nuclear waste spills or is handled incorrectly it has serious effects on the environment. </td> </tr> </table> </body> </html> from IPython.display import Image Image(url= "https://www.longrefrigeration.com/wp-content/uploads/2017/06/Depositphotos_59228621_s-2015.jpg", width=1000, height=100) ``` #### Source Image: Long Heating and Cooling Geothermal Heat Pumps, June 2017. 
Retrieved from https://www.longrefrigeration.com/how-geothermal-energy-works/depositphotos_59228621_s-2015/ ``` %%html <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style> </head> <body> <h1>Geothermal</h1> <p></p> <table style="width:100%"> <tr> <th style="text-align:center">Good Traits</th> <th style="text-align:center">Bad Traits</th> </tr> <tr> <td style="text-align:left">It only requires heat from the earth to work, a limitless supply.</td> <td style="text-align:left">High costs to construct geothermal plants.</td> </tr> <tr> <td style="text-align:left">It is simple and reliable, unlike the unpredictability of solar or wind energy.</td> <td style="text-align:left">Sites must be located in prime areas, requiring long-distance transportation of the resource through pipes, which is often costly.</td> </tr> <tr> <td style="text-align:left">It is a domestic source of energy found throughout the world. This means that geothermal energy is used in many households across the world, mainly for heating/cooling systems. </td> <td style="text-align:left">Emits some sulfur dioxide (SO2). </td> </tr> </table> </body> </html> ``` ## Conclusion In this notebook students learned about the 9 most popular sources of energy. The student should have a clearer understanding of the differences between renewable energy sources and non-renewable energy sources. Note that the good and bad traits of each energy source prompted the student to think about the efficiency of each energy source. ### And now a *"FeW FUn ENeRGy JokES"* to conclude! * What did Godzilla say when he ate the nuclear power plant? “Shocking!” * Why did the lights go out? Because they liked each other! * What would a barefooted man get if he steps on an electric wire? A pair of shocks! * Why is wind power popular? Because it has a lot of fans!
[![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
```
from pytorch_vision_classifier.pytorch_dataset_samplers import ImbalancedDatasetSampler
from pytorch_vision_classifier.pytorch_dataset_preparation import PytorchDatasetPreparation
from pytorch_vision_classifier.pytorch_device_manager import DeviceManager
from pytorch_vision_classifier.pytorch_model_training import ModelInitializer, ModelTraining
from classification_analysis.classification_analysis import MetricsVisualization

import os
import pickle as pk
import numpy as np
import random

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau, MultiStepLR
from torchvision import transforms, models

# Device information
dvc_mng = DeviceManager()
dvc_mng.available_gpus_info()
device = dvc_mng.get_gpu_device(0)

# Load the dataset and dataset description
dataset_dir = ['hymenoptera_data'] # Dataset directory
show_images_dims_summary = False

data_splitting_parameters = {
    'validation_ratio': 0.2,
    'splitting_random_state': 1,
}

data_loading_parameters = {
    'training_batch_size': 16,
    'validation_batch_size': 16,
    'training_shuffle': False,
    'validation_shuffle': False,
    'training_num_workers': 3,
    'validation_num_workers': 3,
    'training_pin_memory': True,
    'validation_pin_memory': True,
    'training_sampler_class': None, # Could be None
    'training_sampler_parameters': {'data_source': True, 'indices': None, 'num_samples': 24960},
}

if data_loading_parameters['training_sampler_class'] is None:
    sampler = 'Uniform'
else:
    sampler = data_loading_parameters['training_sampler_class'].__name__

# Data transformation
data_transforms = {
    'train': transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'validation': transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

# Load the dataset
dataset = PytorchDatasetPreparation(dataset_dir = dataset_dir,
                                    splitting_parameters = data_splitting_parameters,
                                    loading_parameters = data_loading_parameters,
                                    data_transforms = data_transforms,
                                    show_images_dims_summary = show_images_dims_summary,
                                    aws = False)
dataset

# Model
model_name = 'resnet18'
model_initializer = ModelInitializer(model_name = model_name,
                                     use_pretrained = True,
                                     update_head = {'update': True, 'init_mode': 'xavier_normal', 'val': None},
                                     num_classes = dataset.number_of_classes,
                                     dropout = {'add_dropout': False, 'prob': None})

# Get a model (all layers are non-trainable), then make the head trainable
model = model_initializer.get_model()
ModelInitializer.update_trainability([model.fc], trainable = True)

# Loss function
loss_function = nn.CrossEntropyLoss()

# Optimizer
# optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
optimizer = optim.Adam(model.parameters(), lr=0.0005)

# Scheduler
scheduler = MultiStepLR(optimizer, milestones=[5, 40, 80], gamma=0.1)

# Epochs
num_epochs = 10

model_training = ModelTraining(model = model,
                               model_name = model_name,
                               device = device,
                               loss_function = loss_function,
                               optimizer = optimizer,
                               scheduler = scheduler,
                               num_epochs = num_epochs,
                               input_type={'train': 'single_crop','validation': 'single_crop'},
                               save_model_rate = 5,
                               save_last_model = True)

# General memory info
model_training.model_memory_size(bits = 32)

# Get the memory consumption for each stage during the training
model_training.model_memory_utilization(batch_size=64, dim=(224, 224))

# For more information refer to https://github.com/davidtvs/pytorch-lr-finder
# If you want to find the best learning rate
model_training.find_best_learning_rate(dataset, num_iter=300, use_val_loss = False, end_lr = 10, step_mode='exp')

# Training loop (consider val_acc as the metric to be followed)
# This function will show an updated training dashboard during the model training
# This function will create a pickle file containing the best model and the evaluation metrics
model_training.loop(dataset, best='val_acc', reset_epoch_cnt=True)

# Extract features from your model
model_training.extract_features(dataset, os.getcwd(), features_layer = {'last_layer': True, 'classifier': True, 'softmax': True})

# Check how much time each stage consumes over a number of iterations
model_training.while_loop_timing(dataset, number_iters=10, show_time=False)
MetricsVisualization.steps_timing_visualization(model_training.steps_timing)

# If you want to loop using while
model_training.while_loop_timing(dataset, number_iters=2, show_time=True)

# If you want to extract the model only as a .pth file
ModelTraining.pth_model_save(model_training.model_name + '.pkl', 'test.pth')

# If you want to create a compressed version of your model for deployment purposes
ModelTraining.compress_model_file('test.pth')

# If you want to display the misclassified samples
model_training.display_misclassification(model_training.model_name + '.pkl', dataset, to_display = 15)

# Restore last model
model_pkl = f'{model_name}.pkl'
model_training = ModelTraining.restore_last_model_training(model_pkl)

# Last evaluation metrics
ModelTraining.last_model_metrics_visualization(model_pkl)

# Restore best model
model_pkl = f'{model_name}.pkl'
model_training = ModelTraining.restore_best_model_training(model_pkl)

# Best evaluation metrics
ModelTraining.best_model_metrics_visualization(model_pkl)
```
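For intuition, the class-balancing idea behind the `ImbalancedDatasetSampler` imported above can be sketched with plain inverse-frequency sample weights. This is an illustrative sketch only, not the library's actual implementation, and the function name is hypothetical:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each sample by 1 / (count of its class).

    Feeding these weights to a weighted random sampler makes every
    class equally likely to be drawn, regardless of its frequency.
    """
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    count_per_class = dict(zip(classes, counts))
    return np.array([1.0 / count_per_class[y] for y in labels])

# Imbalanced toy labels: class 0 appears 4 times, class 1 once
weights = inverse_frequency_weights([0, 0, 0, 0, 1])
# Total weight per class is equal: 4 * 0.25 == 1 * 1.0
```

Such a weight vector is what one would typically pass to a weighted sampler so that minority-class images are drawn as often as majority-class ones.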
# Welcome to jupyter notebooks! ### Congratulations, the hardest step is always the first one. This exercise is designed to help you get personal with the format of jupyter notebooks, as well as learn how data is accessed and manipulated in python. ``` name = "Liz" #type your name before the pound sign, make sure you put it in quotes! age = "135" #type your age before the pound sign, make sure you put it in quotes! More on this later. address = "462C Link Hall" #type your address before the pound sign, make sure you put it in quotes! story = "My name is " + name + ". I am " + age + " years old. I am a data scientist! Find me at " + address + "." print(story) ``` ## Variable types: Variables are names in your python environment where you store values. A variable can be a number, a character string, or a more complex data structure (such as a pandas spatial dataframe...more later). At the most basic level, variables representing single values can either be numbers or strings. Numeric values can be stored as integers, floating point numbers, or complex numbers. Strings represent strings of characters, like writing. We find out what type of variable we're dealing with by using the type() function. More info here: https://mindmajix.com/python-variable-types ``` from IPython.display import Image Image("https://cdn.mindmajix.com/blog/images/Screenshot_12-460x385.png") print(type(name)) print(type(age)) print(type(address)) print(type(story)) #We can convert numeric strings to numeric formats using either float() or int() age = float(age) print(age) print(type(age)) age=int(age) print(age) print(type(age)) #But we cannot input a numeric value into a string: story = "My name is " + name + ". I am " + age + " years old. Find me at " + address + "." print(story) ``` ## TASK 1: Edit the code in the above cell so that it will print your story without adding any additional lines of code. ``` #Type story here story = "My name is " + name + ". I am " + str(age) + " years old. Find me at " + address + "." print(story) ``` ## Mathematical operators in python: We're learning Python because it is a really powerful calculator. Below is a summary of common mathematical operators in Python. ``` Image("https://i0.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Python-Operators.png") Image("https://i0.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Relational-Operators-in-Python.png") Image("https://i0.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Bitwise-Operators_python.png") Image("https://i1.wp.com/makemeanalyst.com/wp-content/uploads/2017/06/Assignment-Operator.png") ``` We can apply mathematical operators to variables by writing equations, just like we do with a calculator. ## TASK 2: Using your "age" variable, how old will you be in five years (call this variable "a")? How many months have you been alive (we're going to estimate this, based on the fact that there are 12 months in a year, call this variable "b")? Are you older than 100 (call this variable "c")? ``` #first, convert "age" back to a numeric (float) variable age = float(age) #Complete your equations in python syntax below. Remember to use "age" as a variable. a = age + 5 b = age * 12 c = age >= 100 print(a,b,c) ``` ## Functions Often in life, we want to repeat the same sequence of operations on multiple values. To do this, we use functions. Functions are blocks of code that will repeat the same task over and over again (return some outputs) on different variables (inputs). https://docs.python.org/3/tutorial/controlflow.html#defining-functions ``` #For example, this function squares the input def fun1 (x): return x*x # note the indentation fun1(2) # what happens if you delete the indentation? #Functions can have more than one input: def fun2 (x, y): return x * y fun2(2,3) #experiment with different values ``` ### TASK 3: To simplify your future restaurant interactions, write a function that calculates a 15% tip and adds it to any check amount.
Name your function tip15. How much will you owe in total on a $42.75 check? ``` #write tip15 function here def tip15 (check): # check is the bill amount return check * 1.15 tip15(42.75) ``` ### TASK 4: To customize your future restaurant interactions to reward exemplary service, write a function that calculates a variable tip and adds it to any check amount. Name your function tip. How much will you owe in total if you're tipping 25% on a $42.75 check? ``` #write tip function here def tip (check, rate): # check is the bill amount # rate is the tipping rate expressed as a decimal, for example a 15% tip is .15 return check * (1 + rate) tip(42.75, .25) ``` ## Lists Using base python, you can store multiple variables as a list by wrapping them in square brackets. A list can accept any kind of variable: float, integer, or string: ``` names = ["Homer", "Marge", "Bart", "Maggie", "Lisa"] ages = [40,39,11,1,9] print(names) print(ages) type(names) type(ages) ``` We can make lists of lists, which are more complex data objects: ``` names_and_ages = [names, ages] print(names_and_ages) type(names_and_ages) ``` We can extract individual values from the list using **indexing**, which in python starts with zero. We'd use our python mathematical operator on this single variable the same way we would use a calculator. For example, the age of the first person in the list is: ``` first = ages[0] print(first) ``` And we can ask: How old will the first numeric element on our list be in five years? ``` ages[0]+5 ``` ### TASK 5: Find the name and age of the third person in your list ``` # type answer here print(names[2]+ ' is ' + str(ages[2]) + " years old.") ``` ### TASK 6: Add an extra person to the list, named Grandpa, who is 67 years old. *Hint: Google "add values to a list python" if you're not sure what to do!!!! ``` # type answer here names.append("Grandpa") ages.append(67) print(names_and_ages) ``` ### TASK 7: Maggie just had a birthday! Change her age accordingly in the list.
``` # type answer here names_and_ages[1][3] = names_and_ages[1][3]+1 print(names_and_ages) ``` But because lists can be either string or numeric, we cannot apply a mathematical operator to a list. For example, how old will everyone be in five years? ``` ages + 5 #this returns an error. But hey! Errors are opportunities! Google your error message, see if you can find a workaround! ``` ## For loops One way that we can run an entire operation on a list is to use a for loop. A for loop iterates through every element of a sequence, and completes a task or a set of tasks. Example: ``` #Measure some strings words = ['cat', 'window', 'defenestrate'] for w in words: print(w, len(w)) ``` A very handy function that folks use a lot when writing for loops is the range function. range() creates a range object, which generates a list of integers for each element in a series. ``` print(len(ages)) # What does len() do? print(range(len(ages))) for y in range(len(ages)): print(ages[y]) ``` ### TASK 8: Use the range function in a for loop to add 1 year to everyone's age in the names_and_ages list and print the result to the screen. ``` # type your for loop here for y in range(len(ages)): ages[y] = ages[y] + 1 print(names_and_ages) ``` ## if/else statements Another thing that is commonly used in for loops is an if/else statement. This allows us to select certain attributes that we want to use to treat elements that we're iterating over differently. ``` for y in range(len(ages)): if ages[y]>=18: print(names[y] + " is an adult") elif ages[y] >= 10: print(names[y]+" is an adolescent") else: print(names[y] + " is a child") ``` The above statement reads: if the age of the person is greater than or equal to 18, the person is an adult, or else if the age of the person is greater than or equal to 10, the person is an adolescent, or else the person is a child. Pay attention to the use of indentation in the above statement!
### TASK 9: Using if, elif, and/or else statements, write a for loop that adds 1 year to everyone's life except Maggie...since we already gave her an extra birthday. ``` # Write for loop here for y in range(len(ages)): if names[y] != "Maggie": ages[y] = ages[y] + 1 print(names_and_ages) ``` ## numpy arrays and pandas dataframes For loops can take a lot of time on large datasets. To deal with this issue, we can rely on two other packages for data structures, called numpy and pandas. *Packages* are collections of *modules*. *Modules* contain executables, functions, and object types that someone else has already written for you. Numpy arrays are standard gridded data formats (lists, matrices, and multi-dimensional arrays) where all items are of one data type, and have implicit indexing. Pandas data frames allow for mixed data types to be stored in different columns, and have explicit indexing. Pandas data frames work just like excel spreadsheets. Check out [https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/numpy-arrays/] and [https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/pandas-dataframes/] to learn more. Either numpy arrays or pandas data frames allow mathematical operators to be applied to all numeric elements. We're going to start off by loading the numpy and pandas packages. We're going to shorten their names to "np" and "pd", just so we don't have to type as many characters. These packages should have been downloaded automatically in Anaconda, but if you can't load them, raise your hand and share your screen. We'll talk about installing packages and everyone will be all the wiser. Note: if importing python packages is new to you, please check out [https://www.earthdatascience.org/courses/intro-to-earth-data-science/python-code-fundamentals/use-python-packages/] to learn more.
``` import numpy as np #import the numpy package, and call it np ages = np.array(ages) ages + 5 #Wait! Did you get an error? Read it carefully...and if it doesn't make sense, try Googling it. import pandas as pd #import the pandas package, and call it pd names_and_ages = pd.DataFrame({'names':names,'ages':ages}) print(names_and_ages) type(names_and_ages) #How does the "type" of variable change? ``` The cell below shows how to add a new column to a pandas data frame called 'ages_5yr', and how to calculate values for the new column: ``` names_and_ages['ages_5yr']=names_and_ages.ages + 5 print(names_and_ages) ``` There are three(?) ways to select a column in a pandas dataframe: *? = probably many more! ``` print(names_and_ages['names']) #index by column name print(names_and_ages.names) #call column name as an object attribute print(names_and_ages.iloc[:,0]) #use pandas built-in indexing function ``` The last option, iloc, is a pandas function. The first slot (before the comma) tells us which *rows* to pick. The second slot (after the comma) tells us which *columns* to pick. To find the value "ages_5yr" of the first row, for example, we could use iloc like this: ``` print(names_and_ages.iloc[0,2]) ``` ### TASK 10: Create a new column in names_and_ages called "age_months", and populate it with each individual's age in months (approximately age in years times 12). ``` #Complete task ten in the space below names_and_ages['age_months'] = names_and_ages.ages*12 names_and_ages ``` ### TASK 11: Create a new column in names_and_ages called "maturity", and populate it with "adult", "adolescent", and "child", based on the age cutoffs mentioned above.
``` # Create new column, fill it with empty values (np.NaN) names_and_ages['maturity']=np.NaN # Write for loop here (use .loc to avoid chained-assignment warnings) for i in range(names_and_ages.shape[0]): if names_and_ages.ages[i]>= 18: names_and_ages.loc[i, 'maturity'] = "adult" elif names_and_ages.ages[i]>=10: names_and_ages.loc[i, 'maturity'] = "adolescent" else: names_and_ages.loc[i, 'maturity'] = "child" print(names_and_ages) ``` # Read in data. You will often be using python to process files or data stored locally on your computer or server. Here we will learn how to use the package os to set a path to the files you want, called your working directory. ``` import os os.listdir() ``` This .csv file is a record of daily discharge from a USGS stream gage near campus for the past year. It contains two columns, "Date" and "Discharge". It was downloaded with some modifications from [https://maps.waterdata.usgs.gov/mapper/index.html]. This is a great source of data for surface water measurements! Now that you've got the filepath set up so that python can see your stream discharge data, let's open it as a pandas data frame using the pandas.read_csv() function. This is similar to what we would do opening new data in Excel. ``` stream = pd.read_csv('USGS_04240105_SYRACUSE_NY.csv', infer_datetime_format=True) print(type(stream)) #the "head" function is built into a pandas.DataFrame object, and it allows you to specify the number of rows you want to see stream.head(5) ``` ##### Here, we see the date (as "%m/%d/%Y" format) and the mean daily discharge in cubic feet/second measured by the streamgage for each day. ## Check documentation on an object The pandas dataframe is an object that is associated with many attributes and functions. How can we tell what other people have enabled their modules to do, and learn how to use them? Remember, we just read in our data (USGS_04240105_SYRACUSE_NY.csv") as a 'pandas.core.frame.DataFrame'. What does that mean?
``` #We can ask a question: ?stream #We can ask a deeper question: ??stream #This gives you full documentation #Or we can use the 'dot' 'tab' magic. stream. #Don't click Enter!!! Click 'Tab'. What happens? #We can use 'dot' 'tab' and question marks together! ?stream.min ``` ### TASK 12: use pandas.DataFrame built-in functions to determine the minimum, maximum, and mean daily discharge over the record for the station. ``` #use pandas.DataFrame functions to assign appropriate values to the variables here. min_dis = stream.Discharge.min() max_dis = stream.Discharge.max() mean_dis = stream.Discharge.mean() print(min_dis, max_dis, mean_dis) ``` ### TASK 13: create a new column called "Discharge_total" where total daily discharge is calculated (hint: there are 24 x 60 x 60 seconds in a day) ``` #enter calculations here stream['Discharge_total'] = stream.Discharge*24*60*60 stream.head() ``` ## EXTRA CREDIT: Use indexing to determine the DATE on which the maximum daily discharge occurred. (Hint! check out ?stream.iloc and ?stream.Discharge.idxmax, then try Googling "Find row where value of column is maximum in pandas DataFrame" or something similar) (Still stuck? Enter the last thing you tried in this box, and then post your unsolved code on the Questions and Answers page on blackboard). ``` #Type your answer here, see if you can get it in one line of code! stream.Date.iloc[stream['Discharge'].idxmax()] ``` # You're almost done for today! Last steps: ## * save your .ipynb ## * download it as .HTML ## * upload .HTML to today's assignment on Blackboard ## If this was brand new to you, please take time to work through Sections 4, 6, and 7 in [https://www.earthdatascience.org/courses/intro-to-earth-data-science/]
```
import cvxopt
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../../pyutils')
import metrics
```

# Introduction

The predictor $G(X)$ takes values in a discrete set $\mathbb{G}$. The input space is divided into a collection of regions labeled according to their classification. The boundaries of these regions are called decision boundaries, and they are linear for linear methods.

## Example 1

Example of $K$ classes with discriminant function: $$\delta_k(x) = \beta_{k0} + \beta_k^Tx$$ The assigned class is the one with the largest value of $\delta_k$. The decision boundary between class $k$ and $l$ is the set of points for which $\delta_k(x) = \delta_l(x)$. That is the hyperplane defined by $\{x: (\beta_{k0} - \beta_{l0}) + (\beta_k - \beta_l)^Tx = 0 \}$. Methods that model the posterior probabilities $P(G = k | X = x)$ are also in this class. If $\delta_k(x)$ or $P(G = k | X = x)$ is linear in $x$, then the decision boundaries will be linear.

## Example 2

Example of 2 classes with posterior probabilities: $$P(G = 1 | X = x) = \frac{\exp(\beta_0 + \beta^Tx)}{1 + \exp(\beta_0 + \beta^Tx)}$$ $$P(G = 2 | X = x) = \frac{1}{1 + \exp(\beta_0 + \beta^Tx)}$$ The decision boundary is the set of points for which the log-odds are zero: $$\log(\frac{p}{1-p}) = 0$$ $$\log \frac{P(G = 1 | X = x)}{P(G = 2 | X = x)} = \beta_0 + \beta^Tx$$ Thus the decision boundary is a hyperplane defined by $\{x | \beta_0 + \beta^Tx = 0\}$. Linear logistic regression and linear discriminant analysis have linear log-odds. Another solution is to explicitly model linear boundaries between the classes.

# Linear Regression of an Indicator Matrix

The output is a vector $y \in \mathbb{R}^{K}$, with $K$ the number of classes. If the example belongs to class $k$, $y_j = 1_{j=k}$. For a training set of size $N$, the output matrix is $Y \in \mathbb{R}^{N*K}$.
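As a minimal illustration of this indicator coding (hypothetical labels, $K = 3$):

```python
import numpy as np

# Each row of Y is the indicator vector for that example's class:
# y_j = 1 if j equals the class label k, else 0.
g = np.array([0, 2, 1])          # class labels for N = 3 examples
K = 3
Y = np.zeros((len(g), K))
Y[np.arange(len(g)), g] = 1
# Y == [[1, 0, 0],
#       [0, 0, 1],
#       [0, 1, 0]]
```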
The parameters are fitted using any multiple-output linear regression method on $X$ and $Y$, e.g. the normal equations: $$\hat{B} = (X^TX)^{-1} X^TY$$ Classify an example: $$\hat{y} = x^T \hat{B}$$ $$\hat{G}(x) = \arg \max_{k \in \mathbb{G}} \hat{y}_k$$ Regression gives an estimate of the conditional expectation: $$\hat{y}_k \approx \mathbb{E}(y_k | X = x) = P(G = k | X = x)$$

```
def gen_toy_class(N, noise=0.001):
    X = 2.8 * np.random.randn(N, 4) + 4.67
    v1 = 1.5*X[:, 0] + 2.3*X[:, 1] - 0.3*X[:, 2] + 4.5 + noise*np.random.randn(len(X))
    v2 = 1.7*X[:, 0] + 0.4*X[:, 1] + 2.3*X[:, 2] - 3.7 + noise*np.random.randn(len(X))
    v3 = -0.6*X[:, 0] + 5.8*X[:, 1] - 1.3*X[:, 2] + 0.1 + noise*np.random.randn(len(X))
    V = np.vstack((v1, v2, v3)).T
    g = np.argmax(V, axis=1)
    return X, g

def label2onehot(g, nclasses):
    Y = np.zeros((len(g), nclasses))
    Y[np.arange(len(g)), g] = 1
    return Y

def onehot2labels(Y):
    return np.argmax(Y, axis=1)

def add_col1(X):
    return np.append(np.ones((len(X), 1)), X, axis=1)

# Example with 3 classes
X, g = gen_toy_class(117, noise=1e-3)
Y = label2onehot(g, 3)
X2 = add_col1(X)
B = np.linalg.inv(X2.T @ X2) @ X2.T @ Y
Y_preds = X2 @ B
preds = onehot2labels(Y_preds)
print('error:', np.mean((Y - Y_preds)**2))
print('acc:', np.mean(g == preds))

def gen_toy_bin(N, noise=0.001):
    # Draw a large sample so the threshold m is stable, even for small N
    X = 2.8 * np.random.randn(max(N, 100000), 4)**2 + 4.67
    v = 1.5*X[:, 0] + 2.3*X[:, 1] - 4.7*X[:, 2] + 4.5 + noise*np.random.randn(len(X))
    m = v.mean()
    X = X[:N]
    g = (v[:N] > m).astype(int)
    return X, g

# Binary example
X, g = gen_toy_bin(117000, noise=0)
Y = label2onehot(g, 2)
X2 = add_col1(X)
B = np.linalg.inv(X2.T @ X2) @ X2.T @ Y
Y_preds = X2 @ B
preds = onehot2labels(Y_preds)
print('error:', np.mean((Y - Y_preds)**2))
print('acc:', np.mean(g == preds))
```

$\hat{y}_k$ doesn't behave like a probability. Even though $\sum_{k \in \mathbb{G}} \hat{y}_k = 1$, $\hat{y}_k$ might be negative or greater than $1$. Another approach is to construct a target $t_k$ for each class, with $t_k$ the $k$-th column of $I_K$.
Observations are $y_i = t_k$ if $g_i = k$. We fit the least-squares criterion: $$\hat{B} = \arg \min_{B} \sum_{i=1}^N ||y_i - x_i^TB||^2$$ Classify an example: $$\hat{G}(x) = \arg \min_{k} ||x^T\hat{B} - t_k||^2$$ Actually, this model yields exactly the same results as the previous one. This model doesn't work well when $K \geq 3$: because of the rigid nature of regression, classes can be masked by others. A general rule is that with $K \geq 3$ classes, polynomial terms up to degree $K - 1$ might be needed to resolve them. Masking usually occurs for large $K$ and small $p$. Other methods like logistic regression and linear discriminant analysis do not suffer from masking.

# Linear Discriminant Analysis

According to Bayes theorem: $$P(G = k | X = x) = \frac{P(X = x | G = k) P(G = k)}{P(X)}$$ $$P(G = k | X = x) = \frac{P(X = x | G = k) P(G = k)}{\sum_{l=1}^K P(X = x | G = l) P(G = l)}$$ Let $\pi_k$ be the prior probability of class $k$: $\pi_k = P(G = k)$. Let $f_k(x)$ be the class-conditional density of $X$ in class $G = k$: $P(X \in T | G = k) = \int_T f_k(x)dx$. Thus, the posterior probability is: $$P(G = k | X = x) = \frac{f_k(x) \pi_k}{\sum_{l=1}^K f_l(x) \pi_l}$$ Each class density is modeled as a multivariate Gaussian: $$f_k(x) = \frac{\exp(-\frac{1}{2} (x-\mu_k)^T \Sigma^{-1} (x - \mu_k) )}{\sqrt{(2\pi)^p |\Sigma|}}$$ with: - $\Sigma \in \mathbb{R}^{p*p}$ the covariance matrix shared by all class densities. - $\mu_k \in \mathbb{R}^p$ the mean vector for class density $k$. - $|\Sigma| = \det(\Sigma)$ $$\log \frac{P(G = k | X = x)}{P(G = l | X = x)} = \log \frac{\pi_k}{\pi_l} - \frac{1}{2}(\mu_k + \mu_l)^T\Sigma^{-1}(\mu_k - \mu_l) + x^T \Sigma^{-1}(\mu_k - \mu_l)$$ The log-odds function is linear in $x$, so the decision boundaries are linear.
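This linearity can be checked numerically with a small one-dimensional sketch (two Gaussian classes sharing a variance; all constants below are illustrative):

```python
import numpy as np

def gauss(x, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mu1, mu2, var = 0.0, 2.0, 1.5   # shared variance -> linear log-odds
pi1, pi2 = 0.3, 0.7

def log_odds(x):
    return np.log(pi1 * gauss(x, mu1, var)) - np.log(pi2 * gauss(x, mu2, var))

# On an equally spaced grid, an affine function has constant differences
xs = np.array([-1.0, 0.0, 1.0, 2.0])
diffs = np.diff(log_odds(xs))
# diffs entries are all equal, confirming the log-odds is affine in x
```

With unequal variances the quadratic terms would no longer cancel and the differences would not be constant, which is exactly the QDA case discussed below.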
The decision rule can be described with linear discriminant functions: $$\delta_k(x) = x^T \Sigma^{-1} \mu_k - \frac{1}{2}\mu_k^T\Sigma^{-1}\mu_k + \log \pi_k$$ $$G(x) = \arg \max_k \delta_k(x)$$ The parameters are estimated from the training data: $$\hat{\pi}_k = \frac{N_k}{N}$$ $$\hat{\mu}_k = \frac{\sum_{g_i = k} x_i}{N_k}$$ $$\hat{\Sigma} = \frac{\sum_{k=1}^K \sum_{g_i = k} (x_i - \mu_k)(x_i - \mu_k)^T}{N - K}$$

```
# Example with 3 classes
def lda(X, g, K):
    N = X.shape[0]
    p = X.shape[1]
    pis = []
    mus = []
    cov = np.zeros((p, p))
    for k in range(K):
        nk = np.sum(g == k)
        pi = nk / N
        mu = np.zeros((p,))
        for i in range(N):
            if g[i] == k:
                mu += X[i]
        mu /= nk
        pis.append(pi)
        mus.append(mu)
    for i in range(N):
        cov += np.outer(X[i] - mus[g[i]], X[i] - mus[g[i]])
    cov /= (N - K)
    icov = np.linalg.inv(cov)
    B = np.empty((p, K))
    intercept = np.empty((K,))
    for k in range(K):
        B[:, k] = icov @ mus[k]
        intercept[k] = - 1/2 * (mus[k] @ icov @ mus[k]) + np.log(pis[k])
    return B, intercept

X, g = gen_toy_class(11700, noise=1e-5)
B, intercept = lda(X, g, 3)
Y_preds = X @ B + intercept
preds = np.argmax(Y_preds, axis=1)
print('acc:', np.mean(g == preds))
```

## Quadratic Discriminant Analysis

If each $f_k(x)$ has its own covariance matrix $\Sigma_k$, the log-odds and the discriminant functions become quadratic: $$\delta_k(x) = - \frac{1}{2} \log | \Sigma_k| - \frac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) + \log \pi_k$$ When $p$ is large, QDA causes a dramatic increase in the number of parameters. There is little difference between LDA applied to a dataset augmented with polynomials of degree $2$, and QDA.
For QDA, the estimates of $\pi_k$ and $\mu_k$ stay the same, and the estimate of $\Sigma_k$ is:

$$\hat{\Sigma}_k = \frac{\sum_{g_i = k} (x_i - \hat{\mu}_k)(x_i - \hat{\mu}_k)^T}{N_k - 1}$$

```
# Example with 3 classes
def qda(X, g, K):
    N = X.shape[0]
    p = X.shape[1]
    pis = []
    mus = []
    dcovs = []
    icovs = []
    for k in range(K):
        nk = np.sum(g == k)
        pi = nk / N
        mu = np.zeros((p,))
        for i in range(N):
            if g[i] == k:
                mu += X[i]
        mu /= nk
        cov = np.zeros((p, p))
        for i in range(N):
            if g[i] == k:
                cov += np.outer(X[i] - mu, X[i] - mu)
        cov /= (nk - 1)
        pis.append(pi)
        mus.append(mu)
        dcovs.append(-1/2 * np.log(np.linalg.det(cov)))
        icovs.append(np.linalg.inv(cov))
    return pis, mus, dcovs, icovs

def qda_pred(x, pis, mus, dcovs, icovs):
    K = len(pis)
    y = np.empty((K,))
    for k in range(K):
        qt = -1/2 * (x - mus[k]) @ icovs[k] @ (x - mus[k])
        y[k] = dcovs[k] + qt + np.log(pis[k])
    return np.argmax(y)

X, g = gen_toy_class(11700, noise=1e-5)
pis, mus, dcovs, icovs = qda(X, g, 3)
preds = np.empty((len(X),))
for i in range(len(X)):
    preds[i] = qda_pred(X[i], pis, mus, dcovs, icovs)
print('acc:', np.mean(g == preds))
```

## Regularized Discriminant Analysis

RDA is a compromise between LDA and QDA, allowing one to shrink the separate covariances of QDA toward a common covariance as in LDA.

$$\hat{\Sigma}_k(\alpha) = \alpha \hat{\Sigma}_k + (1 - \alpha) \hat{\Sigma}$$

with $\alpha$ a hyperparameter that allows a continuum of models between LDA and QDA. Another modification allows $\hat{\Sigma}$ to be shrunk toward a scalar covariance:

$$\hat{\Sigma}(\gamma) = \gamma \hat{\Sigma} + (1 - \gamma) \hat{\sigma}^2I$$

## Computations for LDA

Computations can be simplified by diagonalization of $\Sigma$.
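As a quick illustration of the diagonalization idea, here is a sketch with a made-up covariance estimate: eigendecompose it, then sphere the data so that the sample covariance becomes the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up data whose sample covariance stands in for the pooled estimate
A = rng.standard_normal((200, 3))
Sigma = np.cov(A, rowvar=False)

# Eigendecomposition Sigma = U D U^T (Sigma is symmetric, so eigh applies)
D, U = np.linalg.eigh(Sigma)
print(np.allclose(U @ np.diag(D) @ U.T, Sigma))  # True

# Sphering the rows: after X* = X U D^{-1/2}, the sample covariance is I
Xs = (A - A.mean(axis=0)) @ U @ np.diag(D ** -0.5)
print(np.allclose(np.cov(Xs, rowvar=False), np.eye(3)))  # True
```

In the sphered space, Euclidean distances to the class centroids (plus the log-prior term) are all that LDA needs.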
For QDA:

$$\hat{\Sigma}_k = U_k D_k U^T_k$$

$$(x - \hat{\mu}_k)^T\hat{\Sigma}^{-1}_k (x - \hat{\mu}_k) = [U_k^T(x - \hat{\mu}_k)]^T D^{-1}_k [U_k^T(x - \hat{\mu}_k)]$$

$$\log |\hat{\Sigma}_k| = \sum_l \log d_{kl}$$

For LDA, we can project the data into a space where the common covariance estimate is $I$:

$$\hat{\Sigma} = UDU^T$$

$$X^* \leftarrow X U D^{-\frac{1}{2}}$$

## Reduced-Rank LDA

The $K$ centroids in $p$-dimensional input space lie in an affine subspace of dimension $\leq K - 1$. We might just as well project $X^*$ onto this centroid-spanning subspace $H_{K-1}$. We can also project $X^*$ onto a smaller subspace $H_L$ for $L < K$, chosen so that the projected centroids are spread out as much as possible in terms of variance.

### Algorithm

- Compute the matrix of class centroids $M \in \mathbb{R}^{K*p}$
$$M_k = \frac{1}{N_k} \sum_{g_i = k} x_i$$
- Compute the within-class covariance matrix $W \in \mathbb{R}^{p*p}$
$$W = \sum_{k=1}^K \sum_{g_i = k} (x_i - M_k) (x_i - M_k)^T$$
- Compute $M^* = MW^{-\frac{1}{2}}$, $M^* \in \mathbb{R}^{K*p}$
$$P_W^T W P_W = D_W$$
$$W^{-\frac{1}{2}} = P_W D_W^{-\frac{1}{2}}P_W^T$$
- Compute the between-class covariance matrix $B^*$ of $M^*$, $B^* \in \mathbb{R}^{p*p}$
$$\mu = \frac{1}{K} \sum_{k=1}^K M_k^*$$
$$B^* = \sum_{k=1}^K N_k (M^*_k - \mu) (M^*_k - \mu)^T$$
- Diagonalize $B^*$: $B^* = V^* D_B V^{*T}$
- Project the data:
$$v_l = W^{-\frac{1}{2}}v_l^*, \space v_l^* \in \mathbb{R}^p$$
$$z_l = Xv_l, \space z_l \in \mathbb{R}^N$$

### Fisher Method

This is a different method that gives the same results. Fisher LDA looks for a projection $Z = a^TX$ such that the between-class variance is maximized relative to the within-class variance. Let $B$ and $W$ be respectively the between-class and within-class covariance of $X$. Note that $T = B + W$, with $T$ the covariance matrix of $X$ ignoring class information. The between-class and within-class variances of $Z$ are respectively $a^TBa$ and $a^TWa$.
The objective is:

$$\max_a \frac{a^TBa}{a^TWa}$$

$a$ is the eigenvector corresponding to the largest eigenvalue of $W^{-1}B$

### Algorithm

- Compute the matrix of class centroids $M \in \mathbb{R}^{K*p}$
$$M_k = \frac{1}{N_k} \sum_{g_i = k} x_i$$
- Compute the within-class covariance matrix $W \in \mathbb{R}^{p*p}$
$$W = \sum_{k=1}^K \sum_{g_i = k} (x_i - M_k) (x_i - M_k)^T$$
- Compute the between-class covariance matrix $B \in \mathbb{R}^{p*p}$
$$\mu = \frac{1}{K} \sum_{k=1}^K M_k$$
$$B = \sum_{k=1}^K N_k (M_k - \mu) (M_k - \mu)^T$$
- Diagonalize $W^{-1}B$:
$$W^{-1}B = V D V^{-1}$$
- Project the data:
$$z_l = Xv_l, \space z_l \in \mathbb{R}^N$$
$$Z = XV_L, \space Z \in \mathbb{R}^{N*L}$$

with the columns of $V_L$ being the $L$ eigenvectors corresponding to the largest eigenvalues of $W^{-1}B$

```
N = 11700
K = 3
X, g = gen_toy_class(N, noise=1e-5)
p = X.shape[1]

# 1) Compute class centroids M
M = np.zeros((K, p))
for k in range(K):
    nk = np.sum(g == k)
    for i in range(N):
        if g[i] == k:
            M[k] += X[i]
    M[k] /= nk

# 2) Compute within-class covariance W
W = np.zeros((p, p))
for i in range(N):
    W += np.outer(X[i] - M[g[i]], X[i] - M[g[i]])

# 3) Compute between-class covariance B
mu = np.mean(M, axis=0)
B = np.zeros((p, p))
for k in range(K):
    nk = np.sum(g == k)
    B += nk * np.outer(M[k] - mu, M[k] - mu)

# 4) Diagonalize W^-1 B
eigvals, V = np.linalg.eig(np.linalg.inv(W) @ B)

# 5) Project the data
Vr = V[:, :2]
Z = X @ Vr

# 6) Make predictions: assign each point to the nearest projected centroid
MZ = M @ Vr
preds = np.empty((N,)).astype(int)
for i in range(N):
    min_k = None
    min_dist = float('inf')
    for k in range(K):
        dist = (Z[i] - MZ[k]) @ (Z[i] - MZ[k])
        if dist < min_dist:
            min_k = k
            min_dist = dist
    preds[i] = min_k
print('acc:', np.mean(g == preds))
```

# Logistic Regression

The model is defined by the log-odds of the posterior probabilities.
$$\log \frac{P(G = k | X = x)}{P(G = K | X = x)} = \beta_{k0} + \beta_{k}^T x, \space k=1\text{...}K-1$$

It can be deduced that:

$$P(G = k | X = x) = \frac{\exp (\beta_{k0} + \beta_{k}^T x)}{1 + \sum_{l=1}^{K-1} \exp (\beta_{l0} + \beta_{l}^T x)}, \space k=1\text{...}K-1$$

$$P(G = K | X = x) = \frac{1}{1 + \sum_{l=1}^{K-1} \exp (\beta_{l0} + \beta_{l}^T x)}$$

The log-likelihood for $N$ observations is:

$$l(\theta) = \sum_{i=1}^N \log P(G=g_i | X = x_i; \theta)$$

Let's focus on the case $K = 2$, coding the response as $y_i = 1$ when $g_i = 1$ and $y_i = 0$ when $g_i = 2$:

$$l(\beta) = \sum_{i=1}^N y_i \log p(x_i) + (1 - y_i) \log(1 - p(x_i))$$

$$\text{with } p(x_i) = P(G=1|X=x_i) = \frac{\exp(\beta^Tx_i)}{1 + \exp(\beta^Tx_i)}$$

$$l(\beta) = \sum_{i=1}^N y_i \beta^Tx_i - \log(1 + \exp(\beta^Tx_i))$$

To maximize the log-likelihood, we solve:

$$\frac{\partial l(\beta)}{\partial \beta} = 0$$

$$\frac{\partial l(\beta)}{\partial \beta} = \sum_{i=1}^N x_i(y_i - p(x_i))$$

This can be solved using the Newton-Raphson algorithm, with second derivatives:

$$\frac{\partial^2 l(\beta)}{\partial \beta \partial \beta^T} = - \sum_{i=1}^N x_ix_i^T p(x_i)(1-p(x_i))$$

The update is:

$$\beta \leftarrow \beta - (\frac{\partial^2 l(\beta)}{\partial \beta \partial \beta^T})^{-1} \frac{\partial l(\beta)}{\partial \beta}$$

It can be rewritten in matrix form as:

$$\frac{\partial l(\beta)}{\partial \beta} = X^T(y-p)$$

$$\frac{\partial^2 l(\beta)}{\partial \beta \partial \beta^T} = - X^TWX$$

with:
- $X \in \mathbb{R}^{N * p}$ the matrix of features
- $p \in \mathbb{R}^N$ the vector of fitted probabilities
- $y \in \mathbb{R}^N$ the vector of labels
- $W \in \mathbb{R}^{N*N}$ a diagonal matrix: $W_{ii} = p_i(1-p_i)$

The update becomes:

$$\beta \leftarrow \beta + (X^TWX)^{-1}X^T (y - p)$$

$$\beta \leftarrow (X^TWX)^{-1}X^T Wz$$

$$\text{with } z = X \beta + W^{-1} (y-p)$$

So each update is equivalent to solving a weighted least squares problem with output $z$ (iteratively reweighted least squares, IRLS):

$$\beta \leftarrow \arg \min_\beta (z - 
X\beta)^TW(z - X\beta)$$

```
X, y = gen_toy_bin(117, noise=1e-5)

def logreg(X, y):
    n = X.shape[0]
    p = X.shape[1]
    # works a lot better when initialized at 0
    beta = np.zeros((p,))
    #beta = np.random.randn(p)
    for i in range(5):
        # prob renamed from p to avoid shadowing the feature count
        prob = np.exp(X @ beta) / (1 + np.exp(X @ beta))
        l = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
        W = np.diag(prob * (1 - prob))
        # IRLS update
        z = X @ beta + np.linalg.inv(W) @ (y - prob)
        beta = np.linalg.inv(X.T @ W @ X) @ X.T @ W @ z
        '''
        # Newton update (equivalent)
        g = X.T @ (y - prob)
        H = - X.T @ W @ X
        beta = beta - np.linalg.inv(H) @ g
        '''
        print('loss:', l)
    return beta

Xc = np.mean(X, axis=0, keepdims=True)
Xs = np.std(X, axis=0, keepdims=True)
X2 = (X - Xc) / Xs
beta = logreg(X2, y)
y_hat = np.exp(X2 @ beta) / (1 + np.exp(X2 @ beta))
preds = np.round(y_hat).astype(int)
print('beta:', beta)
print('acc:', np.mean(y == preds))
```

## LDA vs Logistic Regression

Both models have a linear log-odds of exactly the same form, but the parameters are estimated in completely different ways. Logistic regression maximizes the conditional likelihood and totally ignores $P(X)$. LDA maximizes the log-likelihood of the joint density $P(X,G)$; it assumes that $P(X)$ comes from a mixture of Gaussians. This restriction adds more information and helps reduce variance. In theory, if the data really are Gaussian, LDA should get better results. But LDA is not robust to outliers, whereas logistic regression makes no assumption about the distribution of the inputs. In practice, logistic regression is safer and more robust than LDA, but the two often give similar results.
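To illustrate the "similar results in practice" point, here is a hedged sketch using scikit-learn (an external dependency not used elsewhere in these notes) on synthetic Gaussian data, where LDA's assumption holds; exact accuracies depend on the random draw:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Two Gaussian classes with a shared identity covariance
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), n),
               rng.multivariate_normal([2, 2], np.eye(2), n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

acc_lda = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
acc_lr = LogisticRegression().fit(X, y).score(X, y)
print(acc_lda, acc_lr)  # typically within a fraction of a percent of each other
```

On heavier-tailed or contaminated data the gap can widen in logistic regression's favor, which is the robustness point made above.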
# Separating Hyperplanes

A hyperplane or affine set $L$ is defined by the equation:

$$\beta_0 + \beta^Tx = 0, \space \beta_0 \in \mathbb{R}, \beta, x \in \mathbb{R}^p$$

$$x_0 \in L \implies \beta^Tx_0 = - \beta_0$$

$$x_1, x_2 \in L \implies \beta^T(x_1 - x_2) = 0$$

$\beta$ is a vector normal to the surface of $L$. The signed distance from any $x$ to $L$ is given by:

$$\frac{1}{||\beta||}(\beta^Tx + \beta_0)$$

It is positive if $x$ is on the side of the hyperplane directed by $\beta$, negative if $x$ is on the other side. We can use a hyperplane to perform a binary classification:
- if $x^T\beta + \beta_0 > 0$, $x$ is on the side of the hyperplane directed by $\beta$
- if $x^T\beta + \beta_0 < 0$, $x$ is on the other side of the hyperplane
- if $x^T\beta + \beta_0 = 0$, $x$ belongs to the hyperplane

Let $X \in \mathbb{R}^{n*p}$

$$ \hat{y_i} = \begin{cases} 1 & \text{if } x_i^T\beta + \beta_0 > 0\\ -1 & \text{otherwise} \end{cases} $$

The goal is to find a hyperplane that correctly separates all examples of the data set

``` b = np.array([2, 1]) b0 = 3 def plot_data(b, b0, X, y): #bo + b1x1 + b2x2 = 0 => x2 = -b1/b2*x1 - b0/b2 x1 = np.linspace(np.min(X[:,0]) - 10, np.max(X[:,0]) + 10, 1000) plt.plot(x1, -b[0]/b[1]*x1 -b0/b[1]) b10 = b / np.linalg.norm(b) * 10 plt.plot([0, b10[0]], [-b0/b[1], b10[1]-b0/b[1]], c='y') for i in range(len(X)): plt.scatter(X[i][0], X[i][1], c=('r' if y[i] == 1 else 'g')) plt.axis('equal') plt.grid() plt.show() X = np.array([ [10, 1], [4, 2], [-2, -1] ]) y = np.array([ 1, 1, -1 ]) plot_data(b, b0, X, y) preds = np.sign(X @ b + b0) print(metrics.tdist(preds, y)) def sep_dist(x): return (x @ b + b0) / np.linalg.norm(b) for i in range(len(X)): print(sep_dist(X[i])) def perceptron(X, y, lr, nepochs): b = np.random.randn(2) b0 = np.random.randn() def get_loss(): loss = 0 for i in range(len(X)): v = X[i] @ b + b0 if np.sign(v) != y[i]: loss -= y[i] * v return loss print('epoch -1: loss = {}'.format(get_loss())) for epoch in range(nepochs): for i in 
range(len(X)): v = X[i] @ b + b0 if np.sign(v) != y[i]: b += lr * y[i] * X[i] b0 += lr * y[i] print('epoch {}: loss = {}'.format(epoch, get_loss())) break return b, b0 b, b0 = perceptron(X, y, 1, 10) plot_data(b, b0, X, y) ```

## Optimal Separating Hyperplanes

An optimal separating hyperplane correctly separates the 2 classes and maximizes the distance between the hyperplane and the closest point:

$$\max_{\beta, \beta_0} M$$

$$\text{subject to } \frac{1}{||\beta||} y_i(x_i^T\beta + \beta_0) \geq M, i=1,\text{...},N$$

- $M$ is the minimum distance between the hyperplane and every point of $X$.
- $\frac{1}{||\beta||} y_i(x_i^T\beta + \beta_0)$ is the distance between $x_i$ and the hyperplane. It is positive if $x_i$ is correctly classified, negative otherwise.
- Let's fix $||\beta|| = \frac{1}{M}$. The problem becomes:

$$\min_{\beta, \beta_0} \frac{1}{2} ||\beta||^2$$

$$\text{subject to } y_i(x_i^T\beta + \beta_0) \geq 1, i=1,\text{...},N$$

Using the Lagrangian, the problem is turned into an unconstrained one:

$$L_P(\beta, \beta_0, \alpha) = \frac{1}{2} ||\beta||^2 - \sum_{i=1}^N \alpha_i [y_i(x_i^T\beta + \beta_0) - 1]$$

$$\min_{\beta, \beta_0} \max_{\alpha, \alpha_i \geq 0} L_P(\beta, \beta_0, \alpha)$$

Instead of solving the primal, we solve the dual, which gives the same result:

$$\min_{\beta, \beta_0} \max_{\alpha, \alpha_i \geq 0} L_P(\beta, \beta_0, \alpha) = \max_{\alpha, \alpha_i \geq 0} \min_{\beta, \beta_0} L_P(\beta, \beta_0, \alpha)$$

The solution of the primal $\min_{\beta, \beta_0} \max_{\alpha, \alpha_i \geq 0} L_P(\beta, \beta_0, \alpha)$ is the same as the solution of the dual $\max_{\alpha, \alpha_i \geq 0} \min_{\beta, \beta_0} L_P(\beta, \beta_0, \alpha)$ because the KKT conditions are satisfied.
Solving $\frac{\partial L_P(\beta, \beta_0, \alpha)}{\partial \beta} = 0$, we get:

$$\beta = \sum_{i=1}^N \alpha_i y_i x_i$$

Solving $\frac{\partial L_P(\beta, \beta_0, \alpha)}{\partial \beta_0} = 0$, we get:

$$\sum_{i=1}^N \alpha_i y_i = 0$$

Substituting back into the original equation, we get:

$$L_D(\alpha) = \sum_{i=1}^N\alpha_i - \frac{1}{2} \sum_{i=1}^N \sum_{k=1}^N \alpha_i \alpha_k y_i y_k x_i^Tx_k$$

The problem becomes:

$$\max_{\alpha} L_D(\alpha)$$

$$\text{s.t. } \alpha_i \geq 0 \space i=1,\text{...}, N$$

$$\text{s.t. } \sum_{i=1}^N \alpha_i y_i = 0$$

Another condition is satisfied:

$$\alpha_i[y_i(x_i^T\beta + \beta_0) - 1] = 0 \space i = 1,\text{...},N$$

- $\alpha_i > 0$: $y_i(x_i^T\beta + \beta_0) = 1$, which means $x_i$ is on the boundary of the slab
- $\alpha_i = 0$: $y_i(x_i^T\beta + \beta_0) > 1$, which means $x_i$ is outside of the slab

$\beta$ is defined as a linear combination of the support points $x_i$ (those with $\alpha_i > 0$). $\beta_0$ is obtained by solving $y_i(x_i^T\beta + \beta_0) = 1$ for any support point. Predictions are made using:

$$\hat{G}(x) = \text{sign}(x^T\hat{\beta} + \hat{\beta_0})$$

We now convert the problem into a standard quadratic programming problem. Let $H \in \mathbb{R}^{N*N}$: $H_{ij} = y_i y_j x_i^Tx_j$. The problem becomes:

$$\max_{\alpha} 1^T\alpha - \frac{1}{2} \alpha^TH\alpha$$

$$\text{s.t. } \alpha_i \geq 0$$

$$\text{s.t. } y^T \alpha = 0$$

We reverse the sign and turn it into a minimization:

$$\min_{\alpha} \frac{1}{2} \alpha^TH\alpha - 1^T\alpha$$

$$\text{s.t. } - \alpha_i \leq 0$$

$$\text{s.t. 
} y^T \alpha = 0$$ We can compute the parameters: $$\beta = (y \odot \alpha) X$$ $$\beta_0 = y_i - x_i \beta, \space \forall i: \alpha_i > 0$$ ``` np.random.seed(18) N = 25 P = 2 X1 = np.random.randn(int(N/2), P) * 2 - 3.4 X2 = np.random.randn(int(N/2), P) * (-2) + 4.2 X = np.concatenate((X1, X2), axis=0) rb = np.random.randn(P) rb0 = np.random.randn() y = np.sign(X @ rb + rb0) plot_data(rb, rb0, X, y) def svm_hard(X, y): n, p = X.shape X = X.astype(np.double) y = y.astype(np.double) H = (y.reshape(-1, 1)*X @ (y.reshape(-1, 1)*X).T) P = cvxopt.matrix(H) q = cvxopt.matrix(-np.ones((n, 1))) G = cvxopt.matrix(-np.eye(n)) h = cvxopt.matrix(np.zeros((n,))) A = cvxopt.matrix(y.reshape(1, -1)) b = cvxopt.matrix(np.zeros((1,))) cvxopt.solvers.options['show_progress'] = False cvxopt.solvers.options['abstol'] = 1e-10 cvxopt.solvers.options['reltol'] = 1e-10 cvxopt.solvers.options['feastol'] = 1e-10 sol = cvxopt.solvers.qp(P, q, G, h, A, b) alpha = np.array(sol['x']).flatten() beta = (y * alpha) @ X S = (alpha > 1e-4) beta0 = y[S] - (X[S] @ beta) beta0 = np.mean(beta0) return alpha, beta, beta0 alpha, beta, beta0 = svm_hard(X, y) plot_data(beta, beta0, X, y) def plot_data_support(b, b0, X, y): #bo + b1x1 + b2x2 = 0 => x2 = -b1/b2*x1 - b0/b2 x1 = np.linspace(np.min(X[:,0]) - 10, np.max(X[:,0]) + 10, 1000) plt.plot(x1, -b[0]/b[1]*x1 -b0/b[1]) plt.plot(x1, (1 - b0 - b[0] * x1)/b[1], c='orange') plt.plot(x1, (-1 - b0 - b[0] * x1)/b[1], c='orange') b10 = b / np.linalg.norm(b) * 10 plt.plot([0, b10[0]], [-b0/b[1], b10[1]-b0/b[1]], c='y') for i in range(len(X)): plt.scatter(X[i][0], X[i][1], c=('r' if y[i] == 1 else 'g')) plt.axis('equal') plt.grid() plt.show() plot_data_support(beta, beta0, X, y) S = (alpha > 1e-4) S_idxs = np.arange(len(X))[S] S_vecs = X[S_idxs] S_nc = np.array([ np.sum(y[S_idxs] == -1), np.sum(y[S_idxs] == +1), ]) y_pred = np.sign(X @ beta + beta0) acc = np.average(y == y_pred) print('beta:', beta) print('beta_0:', beta0) print('Indices of support vectors:', 
S_idxs) print('Support vectors:', S_vecs) print('Number of support vectors for each class:', S_nc) print('Accuracy:', acc) #Comparing with sklearn from sklearn.svm import SVC clf = SVC(C = 1e10, kernel = 'linear') clf.fit(X, y) y_pred = clf.predict(X) acc = np.average(y == y_pred) print('beta:', clf.coef_) print('beta_0:', clf.intercept_) print('Indices of support vectors:', clf.support_) print('Support vectors:', clf.support_vectors_) print('Number of support vectors for each class:', clf.n_support_) print('Accuracy:', acc) ```
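As a sanity check on the margin geometry above, here is a sketch with toy data chosen so the optimal hyperplane is obvious: with the normalization $y_i(x_i^T\beta + \beta_0) \geq 1$, the support points sit exactly on the slab boundary and the margin is $M = 1/||\beta||$.

```python
import numpy as np

# Two support points straddling the first axis; the optimal separating
# hyperplane is x1 = 0, i.e. beta = (1, 0), beta0 = 0 (solvable by hand)
X = np.array([[-1.0, 0.0], [1.0, 0.0]])
y = np.array([-1.0, 1.0])
beta, beta0 = np.array([1.0, 0.0]), 0.0

margins = y * (X @ beta + beta0)
print(margins)  # both equal 1: both points lie on the slab boundary

M = margins.min() / np.linalg.norm(beta)
print(M)        # geometric margin, here 1/||beta|| = 1.0
```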
<small><small><i> Introduction to Python for Bioinformatics - available at https://github.com/kipkurui/Python4Bioinformatics. </i></small></small>

## Dictionaries

Dictionaries are mappings between keys and the items stored against them. Unlike lists and tuples, dictionaries are indexed by keys rather than by position. Alternatively, one can think of dictionaries as sets in which something is stored against every element of the set. They can be defined as follows:

To define a dictionary, equate a variable to `{ }` or `dict()`

```
d = dict() # or equivalently d = {}
print(type(d))
d['abc'] = 3
d[4] = "A string"
print(d)
```

As can be guessed from the output above, dictionaries can be defined using the `{ key : value }` syntax. The following dictionary has three elements:

```
d = { 1: 'One', 2 : 'Two', 100 : 'Hundred'}
len(d)
d
```

Now you are able to access 'One' through its key, 1:

```
d[1]
```

There are a number of alternative ways of specifying a dictionary, including as a list of `(key,value)` tuples. To illustrate this we will start with two lists and form a set of tuples from them using the **zip()** function. Two lists which are related can be merged to form a dictionary.

```
names = ['Two', 'One', 'Three', 'Four', 'Five']
numbers = [1, 2, 3, 4, 5]
[ (name,number) for name,number in zip(names,numbers)] # create (name,number) pairs
d1 = {}
for name,number in zip(names,numbers):
    d1[name] = number
d1
```

Now we can create a dictionary that maps each name to its number as follows.

```
a1 = dict((name,number) for name,number in zip(names,numbers))
print(a1)
```

In older versions of Python, the ordering of a dictionary was not based on the order in which elements were added but on hash ordering, so it was best never to assume an ordering when iterating over the elements of a dictionary.

"Dictionaries are **insertion ordered[1]**. As of Python 3.6, for the CPython implementation of Python, dictionaries remember the order of items inserted. 
This is considered an implementation detail in Python 3.6; you need to use OrderedDict if you want insertion ordering that's guaranteed across other implementations of Python (and other ordered behavior[1]). As of Python 3.7, this is no longer an implementation detail and instead becomes a language feature."

By using tuples as indexes we can make a dictionary behave like a sparse matrix:

```
matrix = { (0,1): 3.5, (2,17): 0.1}
matrix[2,2] = matrix[0,1] + matrix[2,17]
print(matrix)
```

A dictionary can also be built using a dict comprehension:

```
a2 = { name : len(name) for name in names}
print(a2)
```

### Built-in Functions

The **len()** function and **in** operator have the obvious meaning:

```
print("a1 has",len(a1),"elements")
print("One is in a1 is",'One' in a1,", but Zero in a1 is", 'Zero' in a1)
```

The **clear( )** method is used to erase all elements.

```
a2.clear()
print(a2)
```

The **values( )** method returns all the assigned values in the dictionary. (Actually not quite a list, but a view that we can iterate over just like a list to construct a list, tuple or any other collection):

```
a2 = a1.values()
list(a2)
[ v for v in a1.values() ]
```

The **keys( )** method returns all the keys against which the values were assigned.

```
a1.keys()
{ k for k in a1.keys() }
```

The **items( )** method returns a view with each element of the dictionary inside a `(key, value)` tuple. This is the same as the result that was obtained when the zip function was used.

```
", ".join( "%s = %d" % (name,val) for name,val in a1.items())
```

The **pop( )** method removes the element with the given key and returns its value, which can be assigned to a new variable. Only the value is returned, not the key, because the key is just the index.
```
val = a1.pop('Four')
print(a1)
print("Removed",val)
a1['Two'] = ''
# a1.pop('Four') would now raise a KeyError, since the key is gone
for key in a1.keys():
    print(key, a1[key])
```

## Exercise

- Using strings, lists, tuples and dictionaries concepts, find the reverse complement of AAAAATCCCGAGGCGGCTATATAGGGCTCCGGAGGCGTAATATAAAA

Algorithm:
1. Store the DNA in a string
    - Use it as a string
    - Convert it to a list
2. Reverse the dna string
    - reverse method on lists
    - Slice methods on lists
    - use negative indexing (slicing) on a string or list
3. Complement:
    - For a string, we can use replace
    - Use conditionals to append to an empty list
    - use a DNA complement dictionary

### Using Strings

```
dna = 'AAAAATCCCGAGGCGGCTATATAGGGCTCCGGAGGCGTAATATAAAAG'
#reversed_dna = ''.join([dna[-i] for i in range(1,len(dna)+1)])
reversed_dna = dna[::-1]
# complement via lowercase placeholders, so already-replaced bases are not replaced again
reversed_dna = reversed_dna.replace('A','t').replace('C','g').replace('G','c').replace('T','a')
reversed_dna.upper()
```

### Using Conditionals

```
comp_dna = []
for nuc in dna:
    if nuc == 'A':
        comp_dna.append('T')
    elif nuc == 'T':
        comp_dna.append('A')
    elif nuc == 'C':
        comp_dna.append('G')
    elif nuc == 'G':
        comp_dna.append('C')
    else:
        comp_dna.append(nuc)
#comp_dna.reverse()
''.join(comp_dna)

revCompDNA = ''
for nuc in comp_dna:
    # prepend so that the complement is also reversed
    revCompDNA = nuc + revCompDNA
revCompDNA
```

### Using the dictionary

```
dna = 'AAAAATCCCGAGGCGGCTATATAGGGCTCCGGAGGCGTAATATAAAAG'
rev_dna_dict = {'A':'T', 'C':'G', 'G':'C', 'T':'A'}
comp_dna = ''
for nuc in dna:
    comp_dna = rev_dna_dict[nuc] + comp_dna
print(comp_dna)
```
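For comparison, a more compact variant of the same exercise: a sketch using the built-in `str.maketrans`/`str.translate`, which encode the same complement mapping as the dictionary above.

```python
dna = 'AAAAATCCCGAGGCGGCTATATAGGGCTCCGGAGGCGTAATATAAAAG'
complement = str.maketrans('ACGT', 'TGCA')  # same mapping as rev_dna_dict
rev_comp = dna.translate(complement)[::-1]  # complement, then reverse
print(rev_comp)
```

`translate` maps each character in one pass, so there is no risk of the double-replacement problem that the chained `replace` calls had to work around.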
``` # import numpy as np # import pandas as pd # import matplotlib.pyplot as plt # from laspy.file import File # from pickle import dump, load # import torch # import torch.nn as nn # import torch.nn.functional as F # import torch.optim as optim # import torch.utils.data as udata # from torch.autograd import Variable # from sklearn.preprocessing import MinMaxScaler # %matplotlib inline import argparse import logging import sys import matplotlib.pyplot as plt import numpy as np import torch import torchvision import torch.nn.functional as F from torch.utils.tensorboard import SummaryWriter from utils import data import models, utils import pandas as pd from laspy.file import File from pickle import dump, load import torch.nn as nn import torch.optim as optim import torch.utils.data as udata from torch.autograd import Variable from sklearn.preprocessing import MinMaxScaler %matplotlib inline ``` ### Inputs ``` # Training Data parameters scan_line_gap_break = 7000 # threshold over which scan_gap indicates a new scan line min_pt_count = 1700 # in a scan line, otherwise line not used max_pt_count = 2000 # in a scan line, otherwise line not used seq_len = 100 num_scan_lines = 150 # to use as training set val_split = 0.2 # LSTM Model parameters hidden_size = 100 # hidden features num_layers = 2 # Default is 1, 2 is a stacked LSTM output_dim = 3 # x,y,z # Training parameters num_epochs = 500 learning_rate = 0.01 # gpu or cpu device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') ``` ### Load the data first_return_df has been processed in the following ways: * Removed outliers outside of [0.01,0.99] percentile range * Normalized xyz values to [0,1] * Mapped each point to a scan line index ``` first_return_df = pd.read_pickle("../../Data/parking_lot/first_returns_modified_164239.pkl") # Note: x_scaled, y_scaled, and z_scaled MUST be the first 3 features feature_list = [ 'x_scaled', 'y_scaled', 'z_scaled', # 'scan_line_idx', # 'scan_angle_deg', # 
'abs_scan_angle_deg' ] ``` ## Mask the Data ``` # miss_pts_before is the count of missing points before the point in question (scan gap / 5 -1) first_return_df['miss_pts_before'] = round((first_return_df['scan_gap']/-5)-1) first_return_df['miss_pts_before'] = [max(0,pt) for pt in first_return_df['miss_pts_before']] # Add 'mask' column, set to one by default first_return_df['mask'] = [1]*first_return_df.shape[0] def add_missing_pts(first_return_df): # Create a series with the indices of points after gaps and the number of missing points (max of 5) miss_pt_ser = first_return_df[(first_return_df['miss_pts_before']>0)&\ (first_return_df['miss_pts_before']<6)]['miss_pts_before'] # miss_pts_arr is an array of zeros that is the dimensions [num_missing_pts,cols_in_df] miss_pts_arr = np.zeros([int(miss_pt_ser.sum()),first_return_df.shape[1]]) # Create empty series to collect the indices of the missing points indices = np.ones(int(miss_pt_ser.sum())) # Fill in the indices, such that they all slot in in order before the index i=0 for index, row in zip(miss_pt_ser.index,miss_pt_ser): new_indices = index + np.arange(row)/row-1+.01 indices[i:i+int(row)] = new_indices i+=int(row) # Create a Dataframe of the indices and miss_pts_arr miss_pts_df = pd.DataFrame(miss_pts_arr,index=indices,columns = first_return_df.columns) miss_pts_df['mask'] = [0]*miss_pts_df.shape[0] # Fill scan fields with NaN so we can interpolate them for col in ['scan_angle','scan_angle_deg']: miss_pts_df[col] = [np.NaN]*miss_pts_df.shape[0] # Concatenate first_return_df and new df full_df = first_return_df.append(miss_pts_df, ignore_index=False) # Resort so that the missing points are interspersed, and then reset the index full_df = full_df.sort_index().reset_index(drop=True) return full_df first_return_df = add_missing_pts(first_return_df) first_return_df[['scan_angle','scan_angle_deg']] = first_return_df[['scan_angle','scan_angle_deg']].interpolate() first_return_df['abs_scan_angle_deg'] = 
abs(first_return_df['scan_angle_deg']) first_return_df.iloc[9780:9790] ``` #### 2) Extract tensor of scan lines ``` # Number of points per scan line scan_line_pt_count = first_return_df.groupby('scan_line_idx').count()['gps_time'] # Identify the indices for points at end of scan lines scan_break_idx = first_return_df[(first_return_df['scan_gap']>scan_line_gap_break)].index # Create Tensor line_count = ((scan_line_pt_count>min_pt_count)&(scan_line_pt_count<max_pt_count)).sum() scan_line_tensor = torch.randn([line_count,min_pt_count,len(feature_list)]) # Collect the scan lines longer than min_pt_count # For each, collect the first min_pt_count points i=0 for line,count in enumerate(scan_line_pt_count): if (count>min_pt_count)&(count<max_pt_count): try: line_idx = scan_break_idx[line-1] scan_line_tensor[i,:,:] = torch.Tensor(first_return_df.iloc\ [line_idx:line_idx+min_pt_count][feature_list].values) i+=1 except RuntimeError: print("line: ",line) print("line_idx: ",line_idx) ``` Note: Setting all features to [0,1] overvalues the z coordinate in MSE Loss. ``` def min_max_tensor(tensor): # Function takes a 3-D tensor, performs minmax scaling to [0,1] along the third dimension. 
# First 2 dimensions are flattened a,b,c = tensor.shape # Flatten first two dimensions flat_tensor = tensor.view(-1,c) sc = MinMaxScaler() flat_norm_tensor = sc.fit_transform(flat_tensor) # Reshape to original output = flat_norm_tensor.reshape([a,b,c]) return torch.Tensor(output), sc scan_line_tensor_norm, sc = min_max_tensor(scan_line_tensor) ``` #### 3) Generate the data ``` def sliding_windows(data, seq_length, line_num, x, y): for i in range(len(data)-seq_length): # Index considers previous lines idx = i+line_num*(min_pt_count-seq_length) _x = data[i:(i+seq_length)] _y = data[i+seq_length,:3] # Assumes xyz are the first 3 features in scan_line_tensor x[idx,:,:] = _x y[idx,:,:] = _y return x,y def generate_samples(data,min_pt_count,seq_len,num_scan_lines,val_split,starting_line=1000): ''' Function generates training and validation samples for predicting the next point in the sequence. Inputs: data: 3-Tensor with dimensions: i) the number of viable scan lines in the flight pass, ii) the minimum number of points in the scan line, iii) 3 (xyz, or feature count) ''' # Create generic x and y tensors x = torch.ones([(min_pt_count-seq_len)*num_scan_lines,seq_len,len(feature_list)]) y = torch.ones([(min_pt_count-seq_len)*num_scan_lines,1,3]) i=0 # Cycle through the number of scan lines requested, starting somewhere in the middle for line_idx in range(starting_line,starting_line+num_scan_lines): x,y = sliding_windows(data[line_idx,:,:],seq_len,line_idx-starting_line, x, y) x_train,y_train,x_val,y_val = train_val_split(x,y,val_split) return x_train,y_train,x_val,y_val def train_val_split(x,y,val_split): # Training/Validation split # For now, we'll do the last part of the dataset as validation...shouldn't matter? 
train_val_split_idx = int(x.shape[0]*(1-val_split)) x_train = x[:train_val_split_idx,:,:] x_val = x[train_val_split_idx:,:,:] y_train = y[:train_val_split_idx,:,:] y_val = y[train_val_split_idx:,:,:] return x_train,y_train,x_val,y_val x_train,y_train,x_val,y_val = generate_samples(scan_line_tensor_norm,min_pt_count,seq_len,num_scan_lines,val_split) ``` ### 2: Train the model Borrowing a lot of code from here: https://github.com/spdin/time-series-prediction-lstm-pytorch/blob/master/Time_Series_Prediction_with_LSTM_Using_PyTorch.ipynb #### 1) Define the model ``` class LSTM(nn.Module): def __init__(self, output_dim, input_size, hidden_size, num_layers, seq_len): super(LSTM, self).__init__() # output_dim = 3: X,Y,Z self.output_dim = output_dim self.num_layers = num_layers # inputs_size = 3: X,Y,Z (could be larger in the future if we add features here) self.input_size = input_size # Not sure what to do here, larger than input size? self.hidden_size = hidden_size # Passes from above self.seq_len = seq_len self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, batch_first=True) self.fc = nn.Linear(hidden_size, output_dim) def forward(self, x): self.lstm.flatten_parameters() h_0 = Variable(torch.zeros( self.num_layers, x.size(0), self.hidden_size)).to(device) c_0 = Variable(torch.zeros( self.num_layers, x.size(0), self.hidden_size)).to(device) # Propagate input through LSTM ula, (h_out, _) = self.lstm(x, (h_0, c_0)) # In case multiple LSTM layers are used, this predicts using only the last layer h_out = h_out.view(num_layers,-1, self.hidden_size) out = self.fc(h_out[-1,:,:]) return out # Define a loss function that weights the loss according to coordinate ranges (xmax-xmin, ymax-ymin, zmax-zmin) def weighted_MSELoss(pred,true,sc): '''Assumes that x,y,z are the first 3 features in sc scaler''' ranges = torch.Tensor(sc.data_max_[:3]-sc.data_min_[:3]) raw_loss = torch.zeros(3,dtype=float) crit = torch.nn.MSELoss() for i in range(3): 
raw_loss[i] = crit(pred[:,:,i], true[:,:,i]) return (ranges * raw_loss).sum() def calculate_loss(lstm,x,y,ltype='Training'): # Training loss y_pred = lstm(x).detach().to('cpu') loss = weighted_MSELoss(y_pred.unsqueeze(1), y,sc) print("{} Loss: {:.4f}".format(ltype,loss)) return loss class LidarLstmDataset(udata.Dataset): def __init__(self, x, y): super(LidarLstmDataset, self).__init__() self.x = x self.y = y def __len__(self): return self.x.shape[0] def __getitem__(self,index): return self.x[index],self.y[index] ``` #### 2) Train the model ``` batch_loss,vl = [],[] lstm_local = LSTM(output_dim, len(feature_list), hidden_size, num_layers, seq_len) lstm = lstm_local.to(device) # Create the dataloader train_dataset = LidarLstmDataset(x_train,y_train) val_dataset = LidarLstmDataset(x_val,y_val) train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1024, num_workers=4, shuffle=True) valid_loader = torch.utils.data.DataLoader(val_dataset, batch_size=1, num_workers=4, shuffle=False) # criterion = torch.nn.MSELoss() # mean-squared error for regression optimizer = torch.optim.Adam(lstm.parameters(), lr=learning_rate) # Scheduler to reduce the learning rate scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 50, gamma=0.5) # Train the model for epoch in range(num_epochs): for x,y in train_loader: print(x.shape) outputs = lstm(x.to(device)) optimizer.zero_grad() # obtain the loss function loss = weighted_MSELoss(outputs.unsqueeze(1), y.to(device),sc) loss.backward() optimizer.step() batch_loss.append(loss) print("Epoch: %d, Training batch loss: %1.5f\n" % (epoch, loss.item())) scheduler.step() if epoch % 5 == 0: print("*"*30) val = calculate_loss(lstm,x_val.to(device),y_val,'Validation') print("*"*30) vl.append(val) for x,y in train_loader: print(x.shape) # Print loss plot plt.subplot(2,1,1) plt.plot(20*np.arange(len(batch_loss)-10),batch_loss[10:],label='Training') plt.xlabel("Batch") plt.ylabel("Weighted MSE") plt.subplot(2,1,2) 
plt.plot(20*np.arange(len(vl)-10),vl[10:],'+',label='Validation') plt.xlabel("Epoch") plt.ylabel("Weighted MSE") plt.legend() ``` #### 3) Evaluate the model ``` #Load the model dir_name = '8_31_20/' run_descriptor = 'seq_len_100_hidden_size_100' scaler = load(open("models/"+dir_name+"SCALER_"+run_descriptor+".pkl",'rb')) lstm = load(open("models/"+dir_name+run_descriptor+".pkl",'rb')) def print_results(x,y,lstm,sc,sample_num,transform=False): markersize,fontsize=12,14 if transform: in_seq = sc.inverse_transform(x[sample_num]) pred_norm = (lstm(x[sample_num].unsqueeze(0).to(device)).view(-1,3).detach()) pred_point = pred_norm.to('cpu')*(sc.data_max_[:3]-sc.data_min_[:3])+sc.data_min_[:3] true_point = y[sample_num]*(sc.data_max_[:3]-sc.data_min_[:3])+sc.data_min_[:3] else: in_seq = x[sample_num] pred_point = (lstm(x[sample_num].unsqueeze(0).to(device)).view(-1,3).detach()).to('cpu') true_point = y[sample_num] plt.figure(figsize=[12,12]) plt.subplot(2,1,1) plt.plot(in_seq[:,0],in_seq[:,1],'x',label='sequence') plt.plot(pred_point[0,0],pred_point[0,1],'ro',markersize=markersize,label='Prediction') plt.plot(true_point[0,0],true_point[0,1],'go',markersize=markersize,label='True') plt.xlabel("X",fontsize=fontsize) plt.ylabel("Y",fontsize=fontsize) plt.xticks(fontsize=fontsize) plt.yticks(fontsize=fontsize) plt.legend(fontsize=fontsize) plt.subplot(2,1,2) plt.plot(in_seq[:,0],in_seq[:,2],'x',label='sequence') plt.plot(pred_point[0,0],pred_point[0,2],'ro',markersize=markersize,label='Prediction') plt.plot(true_point[0,0],true_point[0,2],'go',markersize=markersize,label='True') plt.xlabel("X",fontsize=fontsize) plt.ylabel("Z",fontsize=fontsize) plt.xticks(fontsize=fontsize) plt.yticks(fontsize=fontsize) plt.legend(fontsize=fontsize) plt.show() for i in range(4120,4125): print_results(x_train,y_train,lstm,scaler,i) ``` ## Save the model ``` import os import json dir_name = '8_31_20/' run_descriptor = 'seq_len_100_hidden_size_100' os.mkdir('models/'+dir_name) class 
Args(object): def __init__(self): self.scan_line_gap_break = scan_line_gap_break self.min_pt_count = min_pt_count self.max_pt_count = max_pt_count self.seq_len = seq_len self.num_scan_lines = num_scan_lines self.val_split = val_split self.hidden_size = hidden_size self.num_layers = num_layers self.output_dim = output_dim self.num_epochs = num_epochs self.learning_rate = learning_rate args=Args() # Save the scaler dump(lstm, open('models/'+dir_name+run_descriptor+'.pkl','wb')) dump(sc, open('models/'+dir_name+'SCALER_'+run_descriptor+'.pkl', 'wb')) dump(args, open('models/'+dir_name+'args_'+run_descriptor+'.pkl', 'wb')) with open('models/'+dir_name+'args_'+run_descriptor+'.json', 'w') as json_file: json.dump(json.dumps(args.__dict__), json_file) ``` ### Create .pts file of predictions Include the actual and the predicted, indicated with a binary flag ``` def create_pts_file(y_val,x_val,lstm,sc): y_val_reinflate = np.concatenate((y_val[:,0,:]*(sc.data_max_[:3]-sc.data_min_[:3]) \ +sc.data_min_[:3],np.zeros((y_val.shape[0],1))),axis=1) out_df = pd.DataFrame(np.array(y_val_reinflate[:,:]),columns=['x','y','z','class']) pred_norm = (lstm(x_val).view(-1,3).detach()) pred_reinflate = pred_norm*(sc.data_max_[:3]-sc.data_min_[:3])+sc.data_min_[:3] pred_arr = np.concatenate((pred_reinflate,np.ones((pred_reinflate.shape[0],1))),axis=1) out_df = out_df.append(pd.DataFrame(pred_arr,columns = out_df.columns)).reset_index(drop=True) return out_df out_df = create_pts_file(y_val,x_val,lstm,sc) out_df.to_csv("output_test.pts") ``` ### Data Prep Already done, but this removes outliers and adds scan_line_idx to the first_return_df ``` # Adj GPS Time: Set both timestamps to zero for the first record def adjust_time(df,time_field): # Function adds adj_gps_time to points or pulse dataframe, set to zero at the minimum timestamp. 
df['adj_gps_time'] = df[time_field] - df[time_field].min() return df def label_returns(las_df): ''' Parses the flag_byte into number of returns and return number, adds these fields to las_df. Input - las_df - dataframe from .laz or .lz file Output - first_return_df - only the first return points from las_df. - las_df - input dataframe with num_returns and return_num fields added ''' las_df['num_returns'] = np.floor(las_df['flag_byte']/16).astype(int) las_df['return_num'] = las_df['flag_byte']%16 first_return_df = las_df[las_df['return_num']==1] first_return_df = first_return_df.reset_index(drop=True) return first_return_df, las_df def pull_first_scan_gap(df): # Separate return num, only keep the first returns, add scan_gap, sort df['num_returns'] = np.floor(df['flag_byte']/16).astype(int) df['return_num'] = df['flag_byte']%16 first_return_wall = df[df['return_num']==1] # Outliers # Remove outliers outside of [.01,.99] percentiles a = first_return_wall[['x_scaled','y_scaled','z_scaled']].quantile([.01,.99]) first_return_wall = first_return_wall[(first_return_wall['x_scaled']>a.iloc[0]['x_scaled'])&\ (first_return_wall['x_scaled']<a.iloc[1]['x_scaled'])&\ (first_return_wall['y_scaled']>a.iloc[0]['y_scaled'])&\ (first_return_wall['y_scaled']<a.iloc[1]['y_scaled'])&\ (first_return_wall['z_scaled']>a.iloc[0]['z_scaled'])&\ (first_return_wall['z_scaled']<a.iloc[1]['z_scaled'])] first_return_wall.sort_values(by=['gps_time'],inplace=True) first_return_wall.reset_index(inplace=True) first_return_wall.loc[1:,'scan_gap'] = [first_return_wall.loc[i+1,'scan_angle'] - first_return_wall.loc[i,'scan_angle'] for i in range(first_return_wall.shape[0]-1)] first_return_wall.loc[0,'scan_gap'] = 0 first_return_wall['scan_angle_deg'] = first_return_wall['scan_angle']*.006 return first_return_wall # Load LAS points las_df = pd.read_hdf("../../Data/parking_lot/las_points_164239.lz") # Separate out the first returns only las_df = adjust_time(las_df,'gps_time') # Sort records by timestamp 
las_df.sort_values(by=['adj_gps_time'],inplace=True) # TO DO: consider only last returns? # First returns only first_return_df = pull_first_scan_gap(las_df) # # Identify the indices for points at end of scan lines scan_break_idx = first_return_df[(first_return_df['scan_gap']>scan_line_gap_break)].index # # Concat adds index 0 as 0th scan line _right = pd.DataFrame(data=range(1,len(scan_break_idx)+1),index=scan_break_idx,columns=['scan_line_idx']) right = pd.concat([pd.DataFrame(data=[0],index=[0],columns=['scan_line_idx']),_right]) first_return_df = pd.merge_asof(first_return_df,right,left_index=True,right_index=True,direction='backward') ```
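The backward `merge_asof` at the end is the step that spreads each scan-line index across all of the points that follow its break. A toy sketch of that labeling step (hypothetical `scan_gap` values and threshold, not the parking-lot dataset):

```python
# Toy illustration of labeling points with a scan_line_idx via merge_asof:
# break indices mark the first point of each new scan line, and a backward
# merge fills every following point with the most recent index.
import pandas as pd

points = pd.DataFrame({'scan_gap': [0, 1, 1, 9, 1, 8, 1, 1]})
scan_line_gap_break = 5  # assumed threshold for this sketch

# Indices where a large scan-angle jump starts a new scan line
break_idx = points[points['scan_gap'] > scan_line_gap_break].index

# Index 0 is always the start of scan line 0
_right = pd.DataFrame(data=range(1, len(break_idx) + 1),
                      index=break_idx, columns=['scan_line_idx'])
right = pd.concat([pd.DataFrame(data=[0], index=[0],
                                columns=['scan_line_idx']), _right])

labeled = pd.merge_asof(points, right, left_index=True,
                        right_index=True, direction='backward')
print(labeled['scan_line_idx'].tolist())  # [0, 0, 0, 1, 1, 2, 2, 2]
```

With `direction='backward'`, each point picks up the scan-line index of the nearest break at or before it, which is exactly the behavior the full pipeline relies on.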
``` # This cell is added by sphinx-gallery !pip install mrsimulator --quiet %matplotlib inline import mrsimulator print(f'You are using mrsimulator v{mrsimulator.__version__}') ``` # MCl₂.2D₂O, ²H (I=1) Shifting-d echo ²H (I=1) 2D NMR CSA-Quad 1st order correlation spectrum simulation. The following is a static shifting-*d* echo NMR correlation simulation of $\text{MCl}_2\cdot 2\text{D}_2\text{O}$ crystalline solid, where $M \in [\text{Cu}, \text{Ni}, \text{Co}, \text{Fe}, \text{Mn}]$. The tensor parameters for the simulation and the corresponding spectrum are reported by Walder `et al.` [#f1]_. ``` import matplotlib.pyplot as plt from mrsimulator import Simulator, SpinSystem, Site from mrsimulator.methods import Method2D from mrsimulator import signal_processing as sp ``` Generate the site and spin system objects. ``` site_Ni = Site( isotope="2H", isotropic_chemical_shift=-97, # in ppm shielding_symmetric={ "zeta": -551, # in ppm "eta": 0.12, "alpha": 62 * 3.14159 / 180, # in rads "beta": 114 * 3.14159 / 180, # in rads "gamma": 171 * 3.14159 / 180, # in rads }, quadrupolar={"Cq": 77.2e3, "eta": 0.9}, # Cq in Hz ) site_Cu = Site( isotope="2H", isotropic_chemical_shift=51, # in ppm shielding_symmetric={ "zeta": 146, # in ppm "eta": 0.84, "alpha": 95 * 3.14159 / 180, # in rads "beta": 90 * 3.14159 / 180, # in rads "gamma": 0 * 3.14159 / 180, # in rads }, quadrupolar={"Cq": 118.2e3, "eta": 0.86}, # Cq in Hz ) site_Co = Site( isotope="2H", isotropic_chemical_shift=215, # in ppm shielding_symmetric={ "zeta": -1310, # in ppm "eta": 0.23, "alpha": 180 * 3.14159 / 180, # in rads "beta": 90 * 3.14159 / 180, # in rads "gamma": 90 * 3.14159 / 180, # in rads }, quadrupolar={"Cq": 114.6e3, "eta": 0.95}, # Cq in Hz ) site_Fe = Site( isotope="2H", isotropic_chemical_shift=101, # in ppm shielding_symmetric={ "zeta": -1187, # in ppm "eta": 0.4, "alpha": 122 * 3.14159 / 180, # in rads "beta": 90 * 3.14159 / 180, # in rads "gamma": 90 * 3.14159 / 180, # in rads }, quadrupolar={"Cq": 
114.2e3, "eta": 0.98}, # Cq in Hz ) site_Mn = Site( isotope="2H", isotropic_chemical_shift=145, # in ppm shielding_symmetric={ "zeta": -1236, # in ppm "eta": 0.23, "alpha": 136 * 3.14159 / 180, # in rads "beta": 90 * 3.14159 / 180, # in rads "gamma": 90 * 3.14159 / 180, # in rads }, quadrupolar={"Cq": 1.114e5, "eta": 1.0}, # Cq in Hz ) spin_systems = [ SpinSystem(sites=[s], name=f"{n}Cl$_2$.2D$_2$O") for s, n in zip( [site_Ni, site_Cu, site_Co, site_Fe, site_Mn], ["Ni", "Cu", "Co", "Fe", "Mn"] ) ] ``` Use the generic 2D method, `Method2D`, to generate a shifting-d echo method. The reported shifting-d 2D sequence is a correlation of the shielding frequencies to the first-order quadrupolar frequencies. Here, we create a correlation method using the :attr:`~mrsimulator.method.event.freq_contrib` attribute, which acts as a switch for including the frequency contributions from interaction during the event. In the following method, we assign the ``["Quad1_2"]`` and ``["Shielding1_0", "Shielding1_2"]`` as the value to the ``freq_contrib`` key. The *Quad1_2* is an enumeration for selecting the first-order second-rank quadrupolar frequency contributions. *Shielding1_0* and *Shielding1_2* are enumerations for the first-order shielding with zeroth and second-rank tensor contributions, respectively. See `freq_contrib_api` for details. ``` shifting_d = Method2D( name="Shifting-d", channels=["2H"], magnetic_flux_density=9.395, # in T spectral_dimensions=[ { "count": 512, "spectral_width": 2.5e5, # in Hz "label": "Quadrupolar frequency", "events": [ { "rotor_frequency": 0, "transition_query": {"P": [-1]}, "freq_contrib": ["Quad1_2"], } ], }, { "count": 256, "spectral_width": 2e5, # in Hz "reference_offset": 2e4, # in Hz "label": "Paramagnetic shift", "events": [ { "rotor_frequency": 0, "transition_query": {"P": [-1]}, "freq_contrib": ["Shielding1_0", "Shielding1_2"], } ], }, ], ) # A graphical representation of the method object. 
plt.figure(figsize=(5, 2.5)) shifting_d.plot() plt.show() ``` Create the Simulator object, add the method and spin system objects, and run the simulation. ``` sim = Simulator(spin_systems=spin_systems, methods=[shifting_d]) # Configure the simulator object. For non-coincidental tensors, set the value of the # `integration_volume` attribute to `hemisphere`. sim.config.integration_volume = "hemisphere" sim.config.decompose_spectrum = "spin_system" # simulate spectra per spin system sim.run() ``` Add post-simulation signal processing. ``` data = sim.methods[0].simulation processor = sp.SignalProcessor( operations=[ # Gaussian convolution along both dimensions. sp.IFFT(dim_index=(0, 1)), sp.apodization.Gaussian(FWHM="9 kHz", dim_index=0), # along dimension 0 sp.apodization.Gaussian(FWHM="9 kHz", dim_index=1), # along dimension 1 sp.FFT(dim_index=(0, 1)), ] ) processed_data = processor.apply_operations(data=data).real ``` The plot of the simulation. Because we configured the simulator object to simulate spectrum per spin system, the following data is a CSDM object containing five simulations (dependent variables). Let's visualize the first data corresponding to $\text{NiCl}_2\cdot 2 \text{D}_2\text{O}$. ``` data_Ni = data.split()[0] plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") cb = ax.imshow(data_Ni / data_Ni.max(), aspect="auto", cmap="gist_ncar_r") plt.title(None) plt.colorbar(cb) plt.tight_layout() plt.show() ``` The plot of the simulation after signal processing. ``` proc_data_Ni = processed_data.split()[0] plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") cb = ax.imshow(proc_data_Ni / proc_data_Ni.max(), cmap="gist_ncar_r", aspect="auto") plt.title(None) plt.colorbar(cb) plt.tight_layout() plt.show() ``` Let's plot all the simulated datasets. 
``` fig, ax = plt.subplots( 2, 5, sharex=True, sharey=True, figsize=(12, 5.5), subplot_kw={"projection": "csdm"} ) for i, data_obj in enumerate([data, processed_data]): for j, datum in enumerate(data_obj.split()): ax[i, j].imshow(datum / datum.max(), aspect="auto", cmap="gist_ncar_r") ax[i, j].invert_xaxis() ax[i, j].invert_yaxis() plt.tight_layout() plt.show() ``` .. [#f1] Walder B.J., Patterson A.M., Baltisberger J.H., and Grandinetti P.J., Hydrogen motional disorder in crystalline iron group chloride dihydrates, J. Chem. Phys. (2018) **149**, 084503. `DOI: 10.1063/1.5037151 <https://doi.org/10.1063/1.5037151>`_
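A small aside on the Euler-angle conversions in the site definitions above: they hard-code `3.14159 / 180`, which truncates π. NumPy's `deg2rad` performs the same degrees-to-radians conversion at full floating-point precision — a sketch:

```python
import numpy as np

alpha_truncated = 62 * 3.14159 / 180   # as written in the site definitions
alpha_exact = np.deg2rad(62.0)         # full-precision conversion

# The difference is tiny (~1e-6 rad), but deg2rad is exact and clearer.
print(alpha_exact - alpha_truncated)
```

The discrepancy is negligible for these simulations; the point is only that `np.deg2rad(62)` states the intent directly.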
``` try: import openmdao.api as om import dymos as dm except ImportError: !python -m pip install openmdao[notebooks] !python -m pip install dymos[docs] import openmdao.api as om import dymos as dm ``` # How do I run two phases parallel-in-time? Complex models sometimes encounter state variables which are best simulated on different time scales, with some state variables changing quickly (fast variables) and some evolving slowly (slow variables). For instance, an aircraft trajectory optimization which includes vehicle component temperatures might see relatively gradual changes in altitude over the course of a two-hour flight, while temperatures of some components exhibit step-function-like behavior on the same scale. To accommodate both fast and slow variables in the same ODE, one would typically need to use a _dense_ grid (with many segments or higher-order segments). This can be unnecessarily burdensome when there are many slow variables or evaluating their rates is particularly expensive. As a solution, Dymos allows the user to run two phases over the same range of times, where one phase may have a sparser grid to accommodate the slow variables, and one has a denser grid for the fast variables. To connect the two phases, state variable values are passed from the first (slow) phase to the second (fast) phase as non-optimal dynamic control variables. These values are then used to evaluate the rates of the fast variables. Since outputs from the first phase in general will not fall on the appropriate grid points to be used by the second phase, interpolation is necessary. This is one application of the interpolating timeseries component. In the following example, we solve the brachistochrone problem but do so to minimize the arclength of the resulting wire instead of the time required for the bead to travel along the wire. This is a trivial solution which should find a straight line from the starting point to the ending point.
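The resampling idea can be sketched with plain NumPy. This is just the concept, not the Dymos API (and it uses linear interpolation, whereas Dymos's interpolating timeseries uses the transcription's own polynomial interpolants): values known on a coarse grid are re-evaluated on the denser grid of a second phase before being used as controls.

```python
import numpy as np

coarse_t = np.linspace(0.0, 2.0, 11)    # slow phase grid (11 points)
coarse_v = coarse_t ** 2                # a slow state, sampled coarsely

dense_t = np.linspace(0.0, 2.0, 101)    # fast phase grid (101 points)
dense_v = np.interp(dense_t, coarse_t, coarse_v)  # resample onto dense grid

print(dense_v[50])  # value at t = 1.0, a coarse node, so ~1.0
```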
There are two phases involved, the first utilizes the standard ODE for the brachistochrone problem. The second integrates the arclength (𝑆) of the wire using the equation: \begin{align} S = \int v \sin \theta \sqrt{1 + \frac{1}{\tan^2 \theta}} dt \end{align} ## The ODE for the wire arclength ``` om.display_source('dymos.examples.brachistochrone.doc.test_doc_brachistochrone_tandem_phases.BrachistochroneArclengthODE') ``` The trick is that the bead velocity ($v$) is a state variable solved for in the first phase, and the wire angle ($\theta$) is a control variable "owned" by the first phase. In the second phase they are used as control variables with option ``opt=False`` so that their values are expected as inputs for the second phase. We need to connect their values from the first phase to the second phase, at the `control_input` node subset of the second phase. In the following example, we instantiate two phases and add an interpolating timeseries to the first phase which provides outputs at the `control_input` nodes of the second phase. Those values are then connected and the entire problem run. The result is that the position and velocity variables are solved on a relatively coarse grid while the arclength of the wire is solved on a much denser grid. 
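As a side note (an observation about the formula, not something the example depends on): for $0 < \theta < \pi$ the square-root factor simplifies, since $\sqrt{1 + \cot^2 \theta} = \csc \theta$, so

\begin{align}
\dot{S} = v \sin \theta \sqrt{1 + \frac{1}{\tan^2 \theta}} = v \sin \theta \, \csc \theta = v
\end{align}

i.e. the arclength accumulates at exactly the bead's speed. The ODE below evaluates the unsimplified form directly, and its analytic partials reduce accordingly on that interval ($\partial \dot{S} / \partial v = 1$, $\partial \dot{S} / \partial \theta = 0$).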
``` class BrachistochroneArclengthODE(om.ExplicitComponent): def initialize(self): self.options.declare('num_nodes', types=int) def setup(self): nn = self.options['num_nodes'] # Inputs self.add_input('v', val=np.zeros(nn), desc='velocity', units='m/s') self.add_input('theta', val=np.zeros(nn), desc='angle of wire', units='rad') self.add_output('Sdot', val=np.zeros(nn), desc='rate of change of arclength', units='m/s') # Setup partials arange = np.arange(nn) self.declare_partials(of='Sdot', wrt='v', rows=arange, cols=arange) self.declare_partials(of='Sdot', wrt='theta', rows=arange, cols=arange) def compute(self, inputs, outputs): theta = inputs['theta'] v = inputs['v'] outputs['Sdot'] = np.sqrt(1.0 + (1.0/np.tan(theta))**2) * v * np.sin(theta) def compute_partials(self, inputs, jacobian): theta = inputs['theta'] v = inputs['v'] cos_theta = np.cos(theta) sin_theta = np.sin(theta) tan_theta = np.tan(theta) cot_theta = 1.0 / tan_theta csc_theta = 1.0 / sin_theta jacobian['Sdot', 'v'] = sin_theta * np.sqrt(1.0 + cot_theta**2) jacobian['Sdot', 'theta'] = v * (cos_theta * (cot_theta**2 + 1) - cot_theta * csc_theta) / \ (np.sqrt(1 + cot_theta**2)) om.display_source('dymos.examples.brachistochrone.brachistochrone_ode') from dymos.examples.brachistochrone.brachistochrone_ode import BrachistochroneODE import numpy as np import matplotlib.pyplot as plt import openmdao.api as om import dymos as dm p = om.Problem(model=om.Group()) p.driver = om.pyOptSparseDriver() p.driver.options['optimizer'] = 'SLSQP' p.driver.declare_coloring() # The transcription of the first phase tx0 = dm.GaussLobatto(num_segments=10, order=3, compressed=False) # The transcription for the second phase (and the secondary timeseries outputs from the first phase) tx1 = dm.Radau(num_segments=20, order=9, compressed=False) # # First Phase: Integrate the standard brachistochrone ODE # phase0 = dm.Phase(ode_class=BrachistochroneODE, transcription=tx0) p.model.add_subsystem('phase0', phase0) 
phase0.set_time_options(fix_initial=True, duration_bounds=(.5, 10)) phase0.add_state('x', fix_initial=True, fix_final=False) phase0.add_state('y', fix_initial=True, fix_final=False) phase0.add_state('v', fix_initial=True, fix_final=False) phase0.add_control('theta', continuity=True, rate_continuity=True, units='deg', lower=0.01, upper=179.9) phase0.add_parameter('g', units='m/s**2', val=9.80665) phase0.add_boundary_constraint('x', loc='final', equals=10) phase0.add_boundary_constraint('y', loc='final', equals=5) # Add alternative timeseries output to provide control inputs for the next phase phase0.add_timeseries('timeseries2', transcription=tx1, subset='control_input') # # Second Phase: Integration of ArcLength # phase1 = dm.Phase(ode_class=BrachistochroneArclengthODE, transcription=tx1) p.model.add_subsystem('phase1', phase1) phase1.set_time_options(fix_initial=True, input_duration=True) phase1.add_state('S', fix_initial=True, fix_final=False, rate_source='Sdot', units='m') phase1.add_control('theta', opt=False, units='deg', targets='theta') phase1.add_control('v', opt=False, units='m/s', targets='v') # # Connect the two phases # p.model.connect('phase0.t_duration', 'phase1.t_duration') p.model.connect('phase0.timeseries2.controls:theta', 'phase1.controls:theta') p.model.connect('phase0.timeseries2.states:v', 'phase1.controls:v') # Minimize arclength at the end of the second phase phase1.add_objective('S', loc='final', ref=1) p.model.linear_solver = om.DirectSolver() p.setup(check=True) p['phase0.t_initial'] = 0.0 p['phase0.t_duration'] = 2.0 p.set_val('phase0.states:x', phase0.interp('x', ys=[0, 10])) p.set_val('phase0.states:y', phase0.interp('y', ys=[10, 5])) p.set_val('phase0.states:v', phase0.interp('v', ys=[0, 9.9])) p.set_val('phase0.controls:theta', phase0.interp('theta', ys=[5, 100])) p['phase0.parameters:g'] = 9.80665 p['phase1.states:S'] = 0.0 dm.run_problem(p) fig, (ax0, ax1) = plt.subplots(2, 1) fig.tight_layout() 
ax0.plot(p.get_val('phase0.timeseries.states:x'), p.get_val('phase0.timeseries.states:y'), '.') ax0.set_xlabel('x (m)') ax0.set_ylabel('y (m)') ax1.plot(p.get_val('phase1.timeseries.time'), p.get_val('phase1.timeseries.states:S'), '+') ax1.set_xlabel('t (s)') ax1.set_ylabel('S (m)') plt.show() from openmdao.utils.assert_utils import assert_near_equal expected = np.sqrt((10-0)**2 + (10 - 5)**2) assert_near_equal(p.get_val('phase1.timeseries.states:S')[-1], expected, tolerance=1.0E-3) ```
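The `expected` value in the final assertion is just the straight-line distance from the start point $(0, 10)$ to the end point $(10, 5)$. A quick check of that arithmetic outside the notebook:

```python
import math

# Straight line from (x0, y0) = (0, 10) to (xf, yf) = (10, 5):
# length = sqrt((10 - 0)**2 + (10 - 5)**2) = sqrt(125)
expected = math.hypot(10 - 0, 10 - 5)
print(round(expected, 4))  # 11.1803
```

Since the optimizer minimizes arclength, the converged $S$ at the end of the second phase should match this value to the stated tolerance.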
[View in Colaboratory](https://colab.research.google.com/github/ale93111/Unet_dsb2018/blob/master/Unet_weighted_dsb2018.ipynb) ``` !apt-get install -y -qq software-properties-common python-software-properties module-init-tools !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null !apt-get update -qq 2>&1 > /dev/null !apt-get -y install -qq google-drive-ocamlfuse fuse from google.colab import auth auth.authenticate_user() from oauth2client.client import GoogleCredentials creds = GoogleCredentials.get_application_default() import getpass !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL vcode = getpass.getpass() !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} !apt-get -qq install -y libsm6 libxext6 && pip install -q -U opencv-python !pwd #!fusermount -u drive !mkdir -p drive !google-drive-ocamlfuse drive import os os.chdir("drive/kaggle/Unet_dsb2018") #!pip install --upgrade pip !pip install tqdm !pip install keras !pip install imgaug import os NAME = 'bowl' # Root directory of the project ROOT_DIR = os.getcwd() print(ROOT_DIR) # Directory to save logs and trained model MODEL_DIR = os.path.join(ROOT_DIR, "logs") #Dataset directory dataset_path = os.path.join(ROOT_DIR, "RCNN_dataset_512_labels") import numpy as np from functools import partial, update_wrapper #partial name fix def wrapped_partial(func, *args, **kwargs): partial_func = partial(func, *args, **kwargs) update_wrapper(partial_func, func) return partial_func def pad2n(image,npad=32): h, w = image.shape[:2] if h % npad > 0: max_h = h - (h % npad) + npad top_pad = (max_h - h) // 2 bottom_pad = max_h - h - top_pad else: top_pad = bottom_pad = 0 # Width if w % npad > 0: max_w = w - (w % npad) + npad left_pad = (max_w - w) // 2 right_pad = max_w - w - left_pad else: left_pad = right_pad = 0 padding = [(top_pad, bottom_pad), (left_pad, right_pad), (0, 0)] image = np.pad(image, padding, 
mode='reflect') window = (slice(top_pad, h + top_pad),slice(left_pad, w + left_pad)) return image, window def label_to_masks(labels): h, w = labels.shape n_msk = labels.max() masks = np.empty((h,w,n_msk),dtype=np.bool) for i in range(n_msk): masks[:,:,i] = labels==i+1 return masks def masks_to_label(msk): h, w, _ = msk.shape labels = np.zeros((h, w), dtype=np.uint16) for index in range(0, msk.shape[-1]): labels[msk[:,:,index] > 0] = index + 1 return labels import tensorflow as tf import keras import keras.backend as K import keras.layers as KL import keras.models as KM import keras.utils as KU import keras.losses as KLO from keras.optimizers import Adam, SGD # Define IoU metric def mean_iou(y_true, y_pred): prec = [] for t in np.arange(0.5, 1.0, 0.05): y_pred_ = tf.to_int32(y_pred > t) score, up_opt = tf.metrics.mean_iou(y_true, y_pred_, 2, y_true) K.get_session().run(tf.local_variables_initializer()) with tf.control_dependencies([up_opt]): score = tf.identity(score) prec.append(score) return K.mean(K.stack(prec), axis=0) # ADDED axis=0 def dice_coef(y_true, y_pred): smooth = 1. y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) def bce_dice_loss(y_true, y_pred): return 0.5 * KLO.binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred) def weighted_loss(y_true,y_pred, weights): _epsilon = 10e-8 y_pred = tf.clip_by_value(y_pred, _epsilon, 1.
- _epsilon) #loss_map = K.binary_crossentropy(y_true, y_pred, from_logits=False) #change if softmax is present or not in the net #weighted_loss = loss_map*weights#[:,:,:,np.newaxis] #loss=K.mean(weighted_loss) return - tf.reduce_sum(y_true * weights * tf.log(y_pred) + (1 - y_true) * tf.log(1 - y_pred), len(y_pred.get_shape()) - 1) def Unet(img_size=None, GPU_COUNT=1): inputs = KL.Input((img_size, img_size, 3)) weights_tensor = KL.Input((img_size, img_size, 1)) s = KL.Lambda(lambda x: x/128.0 - 1.0)(inputs) #TODO: make more general c1 = KL.Conv2D(32, (3, 3), kernel_initializer='he_normal', padding='same')(s) n1 = KL.BatchNormalization(axis=3)(c1) a1 = KL.Activation("elu")(n1) c1 = KL.Dropout(0.2)(a1) c1 = KL.Conv2D(32, (3, 3), kernel_initializer='he_normal', padding='same')(c1) n1 = KL.BatchNormalization(axis=3)(c1) a1 = KL.Activation("elu")(n1) p1 = KL.MaxPooling2D((2, 2))(a1) c2 = KL.Conv2D(64, (3, 3), kernel_initializer='he_normal', padding='same')(p1) n2 = KL.BatchNormalization(axis=3)(c2) a2 = KL.Activation("elu")(n2) c2 = KL.Dropout(0.2)(a2) c2 = KL.Conv2D(64, (3, 3), kernel_initializer='he_normal', padding='same')(c2) n2 = KL.BatchNormalization(axis=3)(c2) a2 = KL.Activation("elu")(n2) p2 = KL.MaxPooling2D((2, 2))(a2) c3 = KL.Conv2D(128, (3, 3), kernel_initializer='he_normal', padding='same')(p2) n3 = KL.BatchNormalization(axis=3)(c3) a3 = KL.Activation("elu")(n3) c3 = KL.Dropout(0.3)(a3) c3 = KL.Conv2D(128, (3, 3), kernel_initializer='he_normal', padding='same')(c3) n3 = KL.BatchNormalization(axis=3)(c3) a3 = KL.Activation("elu")(n3) p3 = KL.MaxPooling2D((2, 2))(a3) c4 = KL.Conv2D(256, (3, 3), kernel_initializer='he_normal', padding='same')(p3) n4 = KL.BatchNormalization(axis=3)(c4) a4 = KL.Activation("elu")(n4) c4 = KL.Dropout(0.4)(a4) c4 = KL.Conv2D(256, (3, 3), kernel_initializer='he_normal', padding='same')(c4) n4 = KL.BatchNormalization(axis=3)(c4) a4 = KL.Activation("elu")(n4) p4 = KL.MaxPooling2D(pool_size=(2, 2))(a4) c5 = KL.Conv2D(512, (3, 3), 
kernel_initializer='he_normal', padding='same')(p4) n5 = KL.BatchNormalization(axis=3)(c5) a5 = KL.Activation("elu")(n5) c5 = KL.Dropout(0.4)(a5) c5 = KL.Conv2D(512, (3, 3), kernel_initializer='he_normal', padding='same')(c5) n5 = KL.BatchNormalization(axis=3)(c5) a5 = KL.Activation("elu")(n5) u6 = KL.Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(a5) u6 = KL.concatenate([u6, a4]) c6 = KL.Conv2D(256, (3, 3), kernel_initializer='he_normal', padding='same')(u6) n6 = KL.BatchNormalization(axis=3)(c6) a6 = KL.Activation("elu")(n6) c6 = KL.Dropout(0.4)(a6) c6 = KL.Conv2D(256, (3, 3), kernel_initializer='he_normal', padding='same')(c6) n6 = KL.BatchNormalization(axis=3)(c6) a6 = KL.Activation("elu")(n6) u7 = KL.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(a6) u7 = KL.concatenate([u7, a3]) c7 = KL.Conv2D(128, (3, 3), kernel_initializer='he_normal', padding='same')(u7) n7 = KL.BatchNormalization(axis=3)(c7) a7 = KL.Activation("elu")(n7) c7 = KL.Dropout(0.4)(a7) c7 = KL.Conv2D(128, (3, 3), kernel_initializer='he_normal', padding='same')(c7) n7 = KL.BatchNormalization(axis=3)(c7) a7 = KL.Activation("elu")(n7) u8 = KL.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(a7) u8 = KL.concatenate([u8, a2]) c8 = KL.Conv2D(64, (3, 3), kernel_initializer='he_normal', padding='same')(u8) n8 = KL.BatchNormalization(axis=3)(c8) a8 = KL.Activation("elu")(n8) c8 = KL.Dropout(0.2)(a8) c8 = KL.Conv2D(64, (3, 3), kernel_initializer='he_normal', padding='same')(c8) n8 = KL.BatchNormalization(axis=3)(c8) a8 = KL.Activation("elu")(n8) u9 = KL.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(a8) u9 = KL.concatenate([u9, a1], axis=3) c9 = KL.Conv2D(32, (3, 3), kernel_initializer='he_normal', padding='same')(u9) n9 = KL.BatchNormalization(axis=3)(c9) a9 = KL.Activation("elu")(n9) c9 = KL.Dropout(0.2)(a9) c9 = KL.Conv2D(32, (3, 3), kernel_initializer='he_normal', padding='same')(c9) n9 = KL.BatchNormalization(axis=3)(c9) a9 = 
KL.Activation("elu")(n9) outputs = KL.Conv2D(1, (1, 1), activation='sigmoid')(a9) model = KM.Model(inputs=[inputs,weights_tensor], outputs=[outputs]) weighted_crossentropy = wrapped_partial(weighted_loss, weights=weights_tensor) model.compile(optimizer=Adam(lr=0.001,clipvalue=5), loss=weighted_crossentropy) return model import cv2 import glob import skimage.io import numpy as np from tqdm import tqdm #Find paths and load images and labels(=compressed masks) img_paths = sorted(glob.glob(os.path.join(dataset_path,"*.png"))) msk_paths = sorted(glob.glob(os.path.join(dataset_path,"*.npy"))) img_list = [] lab_list = [] for i,image_path in tqdm(enumerate(img_paths), total=len(img_paths)): img_list.append(cv2.imread(img_paths[i])) lab_list.append(np.load(msk_paths[i])) from scipy.ndimage.morphology import binary_erosion from scipy.ndimage.morphology import distance_transform_edt from tqdm import tqdm def masks_to_gt(msk): h, w, _ = msk.shape gt = np.zeros((h, w), dtype=np.bool) for index in range(0, msk.shape[-1]): gt[msk[:,:,index] > 0] = True return gt[:,:,np.newaxis] def get_weights(masks_in, w0=10, sigma=5): """masks_in shape: (w,h,n_masks)""" masks = np.transpose(masks_in,axes=(2,0,1)) merged_masks = np.squeeze(masks_to_gt(masks_in)) distances = np.array([distance_transform_edt(m == 0) for m in masks]) shortest_dist = np.sort(distances, axis=0) # distance to the border of the nearest cell d1 = shortest_dist[0] # distance to the border of the second nearest cell d2 = shortest_dist[1] if len(shortest_dist) > 1 else np.zeros(d1.shape) weights = w0 * np.exp(-(d1 + d2) ** 2 / (2 * sigma ** 2)).astype(np.float32) weights = 1 + (merged_masks == 0) * weights return weights[..., np.newaxis] gt_list = [] weight_list = [] for i,label in tqdm(enumerate(lab_list), total=len(lab_list)): #Convert to masks masks = label_to_masks(label) #Mask erosion as preprocessing for j in range(masks.shape[-1]): masks[:,:,j] = binary_erosion(masks[:,:,j].astype(np.uint8), border_value=1, 
                                    iterations=1)
        masks = masks.astype(np.bool)
    #Get ground truths
    gt_list.append(masks_to_gt(masks))
    #Compute weights
    weight_list.append(get_weights(masks))

import imgaug
from imgaug import augmenters as iaa
import random
from random import shuffle
from sklearn.model_selection import train_test_split

def random_crop(image, mask, weights, crop_size=256):
    h, w = image.shape[:2]
    y = random.randint(0, (h - crop_size))
    x = random.randint(0, (w - crop_size))
    img_crop = image[y:y + crop_size, x:x + crop_size]
    msk_crop = mask[y:y + crop_size, x:x + crop_size]
    wgt_crop = weights[y:y + crop_size, x:x + crop_size]
    return img_crop, msk_crop, wgt_crop

def data_generator(img_list, msk_list, wgt_list, batch_size=2, crop_size=256, augmentation=None):
    batch_img = np.zeros((batch_size, crop_size, crop_size, 3))
    batch_msk = np.zeros((batch_size, crop_size, crop_size, 1))
    batch_wgt = np.zeros((batch_size, crop_size, crop_size, 1))
    image_index = -1
    while True:
        for i in range(batch_size):
            image_index = (image_index + 1) % len(img_list)
            batch_img[i], batch_msk[i], batch_wgt[i] = random_crop(img_list[image_index],
                                                                   msk_list[image_index],
                                                                   wgt_list[image_index])
        if augmentation:
            aug_det = augmentation.to_deterministic()
            batch_img = aug_det.augment_images(batch_img)
            batch_msk = aug_det.augment_images(batch_msk)
            batch_wgt = aug_det.augment_images(batch_wgt)
        yield [batch_img, batch_wgt], batch_msk

def val_data_generator(img_list, msk_list, wgt_list, batch_size=1, augmentation=None):
    image_index = -1
    while True:
        image_index = (image_index + 1) % len(img_list)
        image, _ = pad2n(img_list[image_index])
        mask, _ = pad2n(msk_list[image_index])
        weight, _ = pad2n(wgt_list[image_index])
        yield [image[np.newaxis], weight[np.newaxis]], mask[np.newaxis]

def shuffle_list(*ls):
    l = list(zip(*ls))
    shuffle(l)
    return zip(*l)

#Not used because stage2 images are too big to fit in GPU
def predict_generator(test_list, batch_size=8):
    image_index = -1
    while True:
        image_index = (image_index + 1) % len(test_list)
        batch_img = np.zeros((batch_size,) + test_list[image_index].shape)
        batch_img[0] = test_list[image_index]
        batch_img[1] = np.rot90(test_list[image_index], k=1)
        batch_img[2] = np.rot90(test_list[image_index], k=2)
        batch_img[3] = np.rot90(test_list[image_index], k=3)
        batch_img[4] = np.fliplr(test_list[image_index])
        batch_img[5] = np.flipud(test_list[image_index])
        batch_img[6] = np.rot90(np.fliplr(test_list[image_index]), k=1)
        batch_img[7] = np.rot90(np.flipud(test_list[image_index]), k=1)
        yield batch_img

batch_size = 16
crop_size = 256
test_split = 0.1
test_size = int(len(img_list)*test_split)
train_size = len(img_list) - test_size

augmentation = iaa.SomeOf((0, 2), [
    iaa.CropAndPad(percent=(-0.15, 0.15), pad_mode="reflect", keep_size=True, sample_independently=False),
    iaa.Fliplr(0.5),
    iaa.Flipud(0.5),
    iaa.OneOf([iaa.Affine(rotate=90),
               iaa.Affine(rotate=180),
               iaa.Affine(rotate=270)]),
    iaa.OneOf([iaa.Sequential([iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
                               iaa.WithChannels(0, iaa.Add((0, 100))),
                               iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
               iaa.Sequential([iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
                               iaa.WithChannels(1, iaa.Add((0, 100))),
                               iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
               iaa.Sequential([iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
                               iaa.WithChannels(2, iaa.Add((0, 100))),
                               iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
               iaa.WithChannels(0, iaa.Add((0, 100))),
               iaa.WithChannels(1, iaa.Add((0, 100))),
               iaa.WithChannels(2, iaa.Add((0, 100)))])
    #imgaug.augmenters.Multiply((0.8, 1.5)),
    #imgaug.augmenters.GaussianBlur(sigma=(0.0, 5.0))
])

#img_train, img_val, gt_train, gt_val = train_test_split(img_list, gt_list, test_size=0.1, random_state=7, shuffle=True)
img_list, gt_list, weight_list = shuffle_list(img_list, gt_list, weight_list)
weight_train = weight_list[:train_size]
img_train = img_list[:train_size]
gt_train = gt_list[:train_size]
weight_val = weight_list[-test_size:]
img_val = img_list[-test_size:]
gt_val = gt_list[-test_size:]

train_generator = data_generator(img_train, gt_train, weight_train,
                                 batch_size=batch_size, crop_size=crop_size,
                                 augmentation=augmentation)
val_generator = val_data_generator(img_val, gt_val, weight_val, batch_size=1)

import matplotlib.pyplot as plt

ix_ = np.random.randint(0, len(img_train))
fig = plt.figure(figsize=(16,16))
plt.subplot(1, 3, 1)
plt.imshow(img_train[ix_])
plt.subplot(1, 3, 2)
plt.imshow(gt_train[ix_][...,0])
plt.subplot(1, 3, 3)
plt.imshow(weight_train[ix_][...,0])
plt.show()
print(weight_train[ix_][...,0].max(), weight_train[ix_][...,0].min(), weight_train[ix_][...,0].mean())

import datetime
from keras.optimizers import Adam, SGD

now = datetime.datetime.now()
LOG_DIR = os.path.join(MODEL_DIR, "{}{:%Y%m%dT%H%M}".format(NAME.lower(), now))
checkpoint_path = os.path.join(LOG_DIR, "U_net_{epoch:04d}.h5")

#Model
model = Unet(None)  #crop_size
#model.summary()
#model.compile(optimizer=Adam(lr=0.001), loss=bce_dice_loss, metrics=[mean_iou, KLO.binary_crossentropy])
#model.compile(optimizer=Adam(lr=0.001, clipvalue=5), loss=KLO.binary_crossentropy, metrics=[mean_iou, bce_dice_loss])

# Callbacks
callbacks = [keras.callbacks.TensorBoard(log_dir=LOG_DIR, histogram_freq=0,
                                         write_graph=True, write_images=False),
             keras.callbacks.ModelCheckpoint(checkpoint_path, verbose=0, save_weights_only=True)]

model.fit_generator(train_generator,
                    steps_per_epoch=len(img_train)/batch_size,
                    epochs=100,
                    validation_data=val_generator,
                    validation_steps=len(img_val),
                    initial_epoch=0,
                    callbacks=callbacks)

model.fit_generator(train_generator,
                    steps_per_epoch=len(img_train)/batch_size,
                    epochs=600,
                    validation_data=val_generator,
                    validation_steps=len(img_val)/batch_size,
                    initial_epoch=100,
                    callbacks=callbacks)

import cv2
import pandas as pd
import numpy as np
from tqdm import tqdm

def rleToMask(rleString, height, width):
    rows, cols = height, width
    rleNumbers = [int(numstring) for numstring in rleString.split(' ')]
    rlePairs = np.array(rleNumbers).reshape(-1, 2)
    img = np.zeros(rows*cols, dtype=np.uint8)
    for index, length in rlePairs:
        index -= 1
        img[index:index+length] = 255
    img = img.reshape(cols, rows)
    img = img.T
    return img

TEST_DIR = "../DSB2018/stage1_test/"
test_ids = next(os.walk(TEST_DIR))[1]
print(test_ids[:3])
images_path = TEST_DIR + "{}/images/{}.png"
test_df = pd.read_csv('stage1_solution.csv')

test_list = []
test_gt = []
for i, id_ in tqdm(enumerate(test_ids), total=len(test_ids)):
    image_path = images_path.format(id_, id_)
    image = cv2.imread(image_path)
    df_id = test_df.loc[test_df['ImageId'] == id_]  #['EncodedPixels']
    masks = np.zeros(image.shape[:2]+(len(df_id),), dtype=np.uint8)
    for j in range(len(df_id)):
        rle = df_id.iloc[j]
        masks[:,:,j] = rleToMask(rle['EncodedPixels'], rle['Height'], rle['Width'])
    masks = masks.astype(np.bool)
    test_list.append(image)
    test_gt.append(masks)

#for i, test_image in tqdm(enumerate(test_list), total=len(test_list)):
#    if not (test_image[:,:,0]==test_image[:,:,1]).all():  #not gray img
#        temp = 255 - (cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY))  #convert to gray 1 channel + invert
#        test_list[i] = cv2.cvtColor(temp, cv2.COLOR_GRAY2BGR)  #grayscale 3 channels

shapes = [img.shape for img in test_list]
a, b = np.unique(shapes, axis=0, return_counts=True)
list(zip(a, b))

from skimage.morphology import label
from scipy.ndimage.morphology import binary_dilation

def tta_predict(image, model, cutoff):
    tta_n = 8
    voting_threshold = 5
    batch_tta = []
    #Do augmentations
    batch_tta.append(image)
    batch_tta.append(np.rot90(image, k=1))
    batch_tta.append(np.rot90(image, k=2))
    batch_tta.append(np.rot90(image, k=3))
    batch_tta.append(np.fliplr(image))
    batch_tta.append(np.flipud(image))
    batch_tta.append(np.rot90(np.fliplr(image), k=1))
    batch_tta.append(np.rot90(np.flipud(image), k=1))
    #Predict one augmented copy at a time, in case the image is too big for memory
    res_predict = []
    for tta_img in batch_tta:
        tta_weight = np.ones(tta_img.shape[:2]+(1,), dtype=np.float32)
        res_predict.append(np.squeeze(model.predict([tta_img[np.newaxis], tta_weight[np.newaxis]], verbose=0)) > cutoff)
    #print(res_predict[0].shape, res_predict[0].dtype, res_predict[0].max(), res_predict[0].min())
    res_predict = [res_pred.astype(np.bool) for res_pred in res_predict]
    res_predict = [res_pred.astype(np.uint8) for res_pred in res_predict]
    #Undo augmentations
    res_predict[0] = res_predict[0]
    res_predict[1] = np.rot90(res_predict[1], k=-1)
    res_predict[2] = np.rot90(res_predict[2], k=-2)
    res_predict[3] = np.rot90(res_predict[3], k=-3)
    res_predict[4] = np.fliplr(res_predict[4])
    res_predict[5] = np.flipud(res_predict[5])
    res_predict[6] = np.rot90(np.fliplr(res_predict[6]), k=1)
    res_predict[7] = np.rot90(np.flipud(res_predict[7]), k=1)
    res_predict = np.array(res_predict, dtype=np.uint8)
    #Voting
    tta_sum = np.sum(res_predict, axis=0)
    tta_sum = tta_sum > voting_threshold
    return tta_sum

def get_predictions(test_list, model, cutoff=0.5, tta=True, pad=16, dilation=True):
    test_predictions = []
    for i, image_test in tqdm(enumerate(test_list), total=len(test_list)):
        image = image_test
        #Pad to a multiple of 2**n to avoid max pool errors in unet
        if pad:
            image, window = pad2n(image, pad)
        if tta:
            raw_pred = tta_predict(image, model, cutoff)
        else:
            weight = np.ones(image.shape[:2]+(1,), dtype=np.float32)
            image = image[np.newaxis]
            weight = weight[np.newaxis]
            #Predict
            raw_pred = model.predict([image, weight], verbose=0)
            #Squeeze before removing the padding
            raw_pred = np.squeeze(raw_pred)
        raw_pred = raw_pred[window]
        #Label and thresholding
        pred = label(raw_pred > cutoff)
        #Mask conversion
        masks = label_to_masks(pred)
        #Mask dilation as post processing
        if dilation:
            for j in range(masks.shape[-1]):
                masks[:,:,j] = binary_dilation(masks[:,:,j].astype(np.uint8), iterations=2)
            masks = masks.astype(np.bool)
        test_predictions.append(masks)
    return test_predictions

def find_last(NAME, model_dir):
    """Finds the last checkpoint file of the last trained model in the model directory.

    Returns:
        log_dir: The directory where events and weights are saved
        checkpoint_path: the path to the last checkpoint file
    """
    # Get directory names. Each directory corresponds to a model
    dir_names = next(os.walk(model_dir))[1]
    key = NAME.lower()
    dir_names = filter(lambda f: f.startswith(key), dir_names)
    dir_names = sorted(dir_names)
    if not dir_names:
        return None, None
    # Pick last directory
    dir_name = os.path.join(model_dir, dir_names[-1])
    # Find the last checkpoint
    checkpoints = next(os.walk(dir_name))[2]
    checkpoints = filter(lambda f: f.startswith("U_net"), checkpoints)
    checkpoints = sorted(checkpoints)
    if not checkpoints:
        return dir_name, None
    checkpoint = os.path.join(dir_name, checkpoints[-1])
    return dir_name, checkpoint

#Model
model = Unet(None)
Unet_path = find_last(NAME, MODEL_DIR)[1]
print(Unet_path)
model.load_weights(Unet_path)
#model.load_weights('/content/drive/kaggle/Unet_dsb2018/logs/bowl20180430T2133/U_net_0104.h5')

#Evaluation
test_predictions = get_predictions(test_list, model, tta=True, pad=64)

import matplotlib.pyplot as plt
import random

def random_color_img(label):
    image = np.zeros(label.shape + (3,), dtype=np.uint8)
    for i in range(1, label.max()):
        r = random.randint(0, 255)
        g = random.randint(0, 255)
        b = random.randint(0, 255)
        color = np.array([r, g, b])
        image[label==i] = color
    return image

ix_ = np.random.randint(0, len(test_list))
fig = plt.figure(figsize=(21,21))
plt.subplot(1, 3, 1)
plt.imshow(test_list[ix_])
plt.subplot(1, 3, 2)
plt.imshow(random_color_img(masks_to_label(test_predictions[ix_])))
plt.show()

#Rewriting the mIOU function to account for the correct number of ground truth masks
def iou_metric(y_true_in, y_pred_in, print_table=False):
    labels = masks_to_label(y_true_in)
    y_pred = masks_to_label(y_pred_in)
    true_objects = len(np.unique(labels))
    pred_objects = len(np.unique(y_pred))
    intersection = np.histogram2d(labels.flatten(), y_pred.flatten(),
                                  bins=(true_objects, pred_objects))[0]
    # Compute areas (needed for finding the union between all objects)
    area_true = np.histogram(labels, bins=true_objects)[0]
    area_pred = np.histogram(y_pred, bins=pred_objects)[0]
    area_true = np.expand_dims(area_true, -1)
    area_pred = np.expand_dims(area_pred, 0)
    # Compute union
    union = area_true + area_pred - intersection
    # Exclude background from the analysis
    intersection = intersection[1:,1:]
    union = union[1:,1:]
    union[union == 0] = 1e-9
    # Compute the intersection over union
    iou = intersection / union

    # Precision helper function
    def precision_at(threshold, iou):
        matches = iou > threshold
        true_positives = np.sum(matches, axis=1) == 1   # Correct objects
        false_positives = np.sum(matches, axis=0) == 0  # Missed objects
        false_negatives = np.sum(matches, axis=1) == 0  # Extra objects
        tp, fp, fn = np.sum(true_positives), np.sum(false_positives), np.sum(false_negatives)
        return tp, fp, fn

    # Loop over IoU thresholds
    prec = []
    if print_table:
        print("Thresh\tTP\tFP\tFN\tPrec.")
    for t in np.arange(0.5, 1.0, 0.05):
        tp, fp, fn = precision_at(t, iou)
        if (tp + fp + fn) > 0:
            p = tp / (tp + fp + fn)
        else:
            p = 0
        if print_table:
            print("{:1.3f}\t{}\t{}\t{}\t{:1.3f}".format(t, tp, fp, fn, p))
        prec.append(p)
    if print_table:
        print("AP\t-\t-\t-\t{:1.3f}".format(np.mean(prec)))
    return np.mean(prec)

def iou_metric_batch(y_true_in, y_pred_in):
    batch_size = len(y_true_in)
    metric = []
    for batch in range(batch_size):
        value = iou_metric(y_true_in[batch], y_pred_in[batch])
        metric.append(value)
    # return np.array(np.mean(metric), dtype=np.float32)
    return metric

mIOU = np.array(iou_metric_batch(test_gt, test_predictions))
print('The mean IOU is {}'.format(np.mean(mIOU)))

import matplotlib.pyplot as plt

image = test_list[10]
res_tta = tta_predict(image, model, 0.5)
print(res_tta.shape, res_tta.dtype, res_tta.max(), res_tta.min())
simage = image[np.newaxis]
#Predict
raw_pred = model.predict(simage, verbose=0)
raw_pred = np.squeeze(raw_pred)

fig = plt.figure(figsize=(12,12))
plt.subplot(131)
plt.imshow(image)
plt.subplot(132)
plt.imshow(res_tta)
plt.subplot(133)
plt.imshow(raw_pred)
plt.show()
```
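A quick numpy sanity check (separate from the notebook's pipeline) that the eight dihedral transforms used in `tta_predict` are undone by exactly the inverse operations applied above. In particular, `np.rot90(np.fliplr(x))` equals the transpose, so that transform (and its `flipud` counterpart) is its own inverse, which is why the "undo" step applies the same composite again:

```
import numpy as np

x = np.arange(12).reshape(3, 4)

# each of the 8 transforms used in tta_predict, paired with the inverse used to undo it
pairs = [
    (lambda a: a,                          lambda a: a),
    (lambda a: np.rot90(a, 1),             lambda a: np.rot90(a, -1)),
    (lambda a: np.rot90(a, 2),             lambda a: np.rot90(a, -2)),
    (lambda a: np.rot90(a, 3),             lambda a: np.rot90(a, -3)),
    (np.fliplr,                            np.fliplr),
    (np.flipud,                            np.flipud),
    (lambda a: np.rot90(np.fliplr(a), 1),  lambda a: np.rot90(np.fliplr(a), 1)),
    (lambda a: np.rot90(np.flipud(a), 1),  lambda a: np.rot90(np.flipud(a), 1)),
]
for fwd, inv in pairs:
    assert (inv(fwd(x)) == x).all()

# rot90 of fliplr is the transpose, hence an involution
assert (np.rot90(np.fliplr(x)) == x.T).all()
```

This is why the voting step can simply sum the eight undone masks elementwise.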
# Performative Prediction: A Case Study in Strategic Classification

This notebook replicates the main experiments in [Performative Prediction](https://arxiv.org/abs/2002.06673):

- Juan C. Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt. "Performative Prediction." arXiv preprint 2002.06673, 2020.

Strategic classification is a two-player game between an institution which deploys a classifier and agents who selectively adapt their features in order to improve their outcomes. This process is "performative" in the sense that the classifier deployed by the institution *causes* a change in the distribution of the target variable. We use WhyNot to explore when repeatedly retraining the classifier on the new distributions *converges* to a stable point. For more details and theoretical calculations, see the original paper.

```
%load_ext autoreload
%autoreload 2

import matplotlib.pyplot as plt
import numpy as np

import whynot as wn
import whynot.gym as gym

import scripts.utils as utils

%matplotlib inline
```

## Setting up the strategic classification environment

We use the [credit simulator](https://whynot.readthedocs.io/en/latest/simulators.html#credit-simulator), which is a strategic classification simulator based on the Kaggle [*Give Me Some Credit* dataset](https://www.kaggle.com/c/GiveMeSomeCredit).

```
env = gym.make('Credit-v0')
env.seed(1)
```

## Training a baseline logistic regression classifier

The state of the environment is a dataset consisting of (1) financial features of individuals, e.g. `DebtRatio`, and (2) a binary label indicating whether an individual experienced financial distress in the subsequent two years.

```
base_dataset = env.initial_state.values()
base_features, base_labels = base_dataset["features"], base_dataset["labels"]
num_agents, num_features = base_features.shape
print(f"The dataset has {num_agents} agents and {num_features} features.")
```

Fit a logistic regression model to the data.

```
l2_penalty = 1.0 / num_agents
baseline_theta = utils.fit_logistic_regression(base_features, base_labels, l2_penalty)
baseline_acc = ((base_features.dot(baseline_theta) > 0) == base_labels).mean()
print(f"Baseline logistic regression model accuracy: {100 * baseline_acc:.2f}%")
```

## Running repeated risk minimization

Using the credit environment, we simulate repeated retraining of the logistic regression model in response to strategic distribution shift. We perform `num_iters` rounds. In each round:

1. We train a logistic regression classifier on the current set of features.
2. The classifier is deployed via `env.step(theta)`.
3. In the environment, individuals react strategically to the deployed classifier, and the environment returns a new set of features for the next round.

The parameter `epsilon` changes the strength of the strategic response. Higher values of epsilon correspond to proportionally more strategic distribution shift. Each run is warm-started with the `baseline_theta` classifier computed without strategic response.
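The retraining loop described above can be sketched on a toy one-dimensional performative problem before running the full WhyNot version. Everything in this sketch (the Gaussian shift model, the parameter values) is illustrative and not taken from the paper's code; the point is only that "deploy, let the distribution react, refit" can converge to a stable point:

```
import numpy as np

rng = np.random.default_rng(0)
mu, epsilon = 1.0, 0.5   # base outcome mean and shift strength (illustrative values)

theta, gaps = 0.0, []
for t in range(30):
    # deploying theta shifts the data: outcomes are drawn from N(mu + epsilon*theta, 1)
    sample = rng.normal(mu + epsilon * theta, 1.0, size=200_000)
    theta_new = sample.mean()          # "retraining": squared loss is minimized by the mean
    gaps.append(abs(theta_new - theta))
    theta = theta_new

# a performatively stable point solves theta = mu + epsilon*theta, i.e. theta = 2.0 here
print(round(theta, 2))   # ≈ 2.0
```

The gaps between consecutive iterates shrink geometrically (up to sampling noise), mirroring the convergence-in-domain behavior explored below with the credit simulator.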
```
def repeated_risk_minimization(base_theta, epsilon, num_steps):
    """Run repeated risk minimization for num_steps steps"""
    env.config.epsilon = epsilon
    env.config.l2_penalty = l2_penalty
    env.reset()

    # Track loss and accuracy before/after updating model on new distribution
    loss_start, loss_end, acc_start, acc_end, theta_gap = [], [], [], [], []

    # Warm-start with baseline classifier
    theta = np.copy(base_theta)
    for step in range(num_steps):
        # Deploy classifier and observe strategic response
        observation, _, _, _ = env.step(theta)
        features_strat, labels = observation["features"], observation["labels"]

        # Evaluate loss and accuracy on the new distribution
        loss_start.append(utils.evaluate_logistic_loss(features_strat, labels, theta, l2_penalty))
        acc_start.append(((features_strat.dot(theta) > 0) == labels).mean())

        # Learn a new model on the induced distribution
        theta_new = utils.fit_logistic_regression(features_strat, labels, l2_penalty,
                                                  theta_init=np.copy(theta))

        # Evaluate loss and accuracy on the strategic distribution after training
        loss_end.append(utils.evaluate_logistic_loss(features_strat, labels, theta_new, l2_penalty))
        acc_end.append(((features_strat.dot(theta_new) > 0) == labels).mean())

        # Track distance between iterates
        theta_gap.append(np.linalg.norm(theta_new - theta))
        theta = np.copy(theta_new)

    return loss_start, loss_end, acc_start, acc_end, theta_gap

epsilon_list = [1, 80, 150, 1000]
num_iters = 25

loss_starts, acc_starts, loss_ends, acc_ends, theta_gaps = [], [], [], [], []
for epsilon_idx, epsilon in enumerate(epsilon_list):
    print(f"Running retraining for epsilon {epsilon:.2f}")
    loss_start, loss_end, acc_start, acc_end, theta_gap = repeated_risk_minimization(
        baseline_theta, epsilon, num_iters)
    loss_starts.append(loss_start)
    loss_ends.append(loss_end)
    acc_starts.append(acc_start)
    acc_ends.append(acc_end)
    theta_gaps.append(theta_gap)
```

## Visualizing the results

We visualize the performative risk during the repeated risk minimization procedure. We plot the risk at the beginning and at the end of each round, connecting the two values with a blue line and indicating the change in risk due to strategic distribution shift with a dashed green line.

```
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(16, 12))
for idx, epsilon in enumerate(epsilon_list):
    ax = axes[idx // 2][idx % 2]
    offset = 0.8
    ax.set_title(f"Performative Risk, $\epsilon$={epsilon}", fontsize=20)
    for i in range(2, num_iters):
        ax.plot([i, i + offset], [loss_starts[idx][i], loss_ends[idx][i]], 'b*-')
        if i < num_iters - 1:
            ax.plot([i + offset, i + 1], [loss_ends[idx][i], loss_starts[idx][i + 1]], 'g--')
    ax.set_xlabel("Step", fontsize=16)
    ax.set_ylabel("Loss", fontsize=16)
    ax.set_yscale('log')
plt.subplots_adjust(hspace=0.25)
```

The performative risk is a surrogate for the underlying metric we care about, accuracy. We can similarly plot accuracy during retraining.

```
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(16, 12))
for idx, epsilon in enumerate(epsilon_list):
    ax = axes[idx // 2][idx % 2]
    offset = 0.8
    ax.set_title(f"Performative Accuracy, $\epsilon$={epsilon}", fontsize=20)
    for i in range(2, num_iters):
        ax.plot([i, i + offset], [acc_starts[idx][i], acc_ends[idx][i]], 'b*-')
        if i < num_iters - 1:
            ax.plot([i + offset, i + 1], [acc_ends[idx][i], acc_starts[idx][i + 1]], 'g--')
    ax.set_xlabel("Step", fontsize=16)
    ax.set_ylabel("Accuracy", fontsize=16)
    ax.set_ylim([0.5, 0.75])
plt.subplots_adjust(hspace=0.25)
```

Finally, we plot the distance between consecutive iterates. This is the quantity bounded by the theorems in Performative Prediction, which show that repeated risk minimization *converges in domain* to a stable point.
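On a semilog plot, geometric convergence appears as a straight line, and its contraction rate can be read off as the slope of the log gaps. A synthetic sketch of that estimate (the gap sequence here is made up, not taken from the runs above):

```
import numpy as np

# synthetic gap sequence shrinking geometrically at rate 0.6 (illustrative)
gaps = 0.5 * 0.6 ** np.arange(12)

# slope of log-gaps against iteration estimates log(rate)
rate = np.exp(np.polyfit(np.arange(len(gaps)), np.log(gaps), 1)[0])
print(round(rate, 3))   # ≈ 0.6
```

The same fit applied to `theta_gaps` gives a rough per-epsilon contraction rate for the real runs.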
```
processed_theta_gaps = [[x for x in tg if x != 0.0] for tg in theta_gaps]

_, ax = plt.subplots(figsize=(10, 8))
for idx, (gaps, eps) in enumerate(zip(processed_theta_gaps, epsilon_list)):
    label = '$\\varepsilon$ = {}'.format(eps)
    if idx == 0:
        ax.semilogy(gaps, label=label, linewidth=3, alpha=1, markevery=[-1],
                    marker='*', linestyle=(0, (1, 1)))
    elif idx == 1:
        ax.semilogy(gaps, label=label, linewidth=3, alpha=1, markevery=[-1],
                    marker='*', linestyle='solid')
    else:
        ax.semilogy(gaps, label=label, linewidth=3)

ax.set_title("Convergence in Domain for Repeated Risk Minimization", fontsize=18)
ax.set_xlabel('Iteration $t$', fontsize=18)
ax.set_ylabel(r'Distance Between Iterates: $\|\theta_{t+1} - \theta_{t}\|_2$', fontsize=14)
ax.tick_params(labelsize=18)
plt.legend(fontsize=18)
```
# Modeling and Simulation in Python

Case study.

Copyright 2017 Allen Downey

License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)

```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

# import functions from the modsim.py module
from modsim import *
```

### Unrolling

Let's simulate a kitten unrolling toilet paper. As reference material, see [this video](http://modsimpy.com/kitten).

The interactions of the kitten and the paper roll are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. Also, we will neglect the friction between the roll and the axle.

![](diagrams/kitten.png)

This figure shows the paper roll with $r$, $F$, and $\tau$. As a vector quantity, the direction of $\tau$ is into the page, but we only care about its magnitude for now.

We'll start by loading the units we need.

```
radian = UNITS.radian
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
```

And a few more parameters in the `Params` object.

```
params = Params(Rmin = 0.02 * m,
                Rmax = 0.055 * m,
                Mcore = 15e-3 * kg,
                Mroll = 215e-3 * kg,
                L = 47 * m,
                tension = 2e-4 * N,
                t_end = 120 * s)
```

`make_system` computes `rho_h`, which we'll need to compute moment of inertia, and `k`, which we'll use to compute `r`.

```
def make_system(params):
    """Make a system object.

    params: Params with Rmin, Rmax, Mcore, Mroll, L, tension, and t_end

    returns: System with init, k, rho_h, Rmin, Rmax, Mcore, Mroll, ts
    """
    unpack(params)

    init = State(theta = 0 * radian,
                 omega = 0 * radian/s,
                 y = L)

    area = pi * (Rmax**2 - Rmin**2)
    rho_h = Mroll / area
    k = (Rmax**2 - Rmin**2) / 2 / L / radian

    return System(init=init, k=k, rho_h=rho_h,
                  Rmin=Rmin, Rmax=Rmax,
                  Mcore=Mcore, Mroll=Mroll,
                  t_end=t_end)
```

Testing `make_system`.

```
system = make_system(params)
system.init
```

Here's how we compute `I` as a function of `r`:

```
def moment_of_inertia(r, system):
    """Moment of inertia for a roll of toilet paper.

    r: current radius of roll in meters
    system: System object with Mcore, rho, Rmin, Rmax

    returns: moment of inertia in kg m**2
    """
    unpack(system)
    Icore = Mcore * Rmin**2
    Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
    return Icore + Iroll
```

When `r` is `Rmin`, `I` is small.

```
moment_of_inertia(system.Rmin, system)
```

As `r` increases, so does `I`.

```
moment_of_inertia(system.Rmax, system)
```

## Exercises

Write a slope function we can use to simulate this system. Here are some suggestions and hints:

* `r` is no longer part of the `State` object. Instead, we compute `r` at each time step, based on the current value of `y`, using $y = \frac{1}{2k} (r^2 - R_{min}^2)$
* Angular velocity, `omega`, is no longer constant. Instead, we compute torque, `tau`, and angular acceleration, `alpha`, at each time step.
* I changed the definition of `theta` so positive values correspond to clockwise rotation, so `dydt = -r * omega`; that is, positive values of `omega` yield decreasing values of `y`, the amount of paper still on the roll.
* Your slope function should return `omega`, `alpha`, and `dydt`, which are the derivatives of `theta`, `omega`, and `y`, respectively.
* Because `r` changes over time, we have to compute moment of inertia, `I`, at each time step.

That last point might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\tau = I \alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out. Not all of my colleagues are convinced.

```
# Solution

def slope_func(state, t, system):
    """Computes the derivatives of the state variables.

    state: State object with theta, omega, y
    t: time
    system: System object with Rmin, k, Mcore, rho_h, tension

    returns: sequence of derivatives
    """
    theta, omega, y = state
    unpack(system)

    r = sqrt(2*k*y + Rmin**2)
    I = moment_of_inertia(r, system)
    tau = r * tension
    alpha = tau / I
    dydt = -r * omega

    return omega, alpha, dydt
```

Test `slope_func` with the initial conditions.

```
# Solution

slope_func(system.init, 0*s, system)
```

Run the simulation.

```
# Solution

results, details = run_ode_solver(system, slope_func, max_step=10*s)
details
```

And look at the results.

```
results.tail()
```

Check the results to see if they seem plausible:

* The final value of `theta` should be about 220 radians.
* The final value of `omega` should be near 4 radians/second, which is less than one revolution per second, so that seems plausible.
* The final value of `y` should be about 35 meters of paper left on the roll, which means the kitten pulls off 12 meters in two minutes. That doesn't seem impossible, although it is based on a level of consistency and focus that is unlikely in a kitten.
* Angular velocity, `omega`, should increase almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.
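The expression `r = sqrt(2*k*y + Rmin**2)` used in the slope function is just the inversion of $y = \frac{1}{2k}(r^2 - R_{min}^2)$. A quick check with the notebook's parameter values (plain floats, units dropped for the check):

```
from math import sqrt

Rmin, Rmax, L = 0.02, 0.055, 47.0          # meters, from the Params above
k = (Rmax**2 - Rmin**2) / 2 / L            # same definition as in make_system (per radian)

# full roll: y = L should give r = Rmax
r_full = sqrt(2 * k * L + Rmin**2)
assert abs(r_full - Rmax) < 1e-12

# empty roll: y = 0 should give r = Rmin
assert abs(sqrt(2 * k * 0 + Rmin**2) - Rmin) < 1e-12

# round trip for an arbitrary y
y = 23.5
r = sqrt(2 * k * y + Rmin**2)
assert abs((r**2 - Rmin**2) / (2 * k) - y) < 1e-12
```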
Plot `theta`.

```
def plot_theta(results):
    plot(results.theta, color='C0', label='theta')
    decorate(xlabel='Time (s)',
             ylabel='Angle (rad)')

plot_theta(results)
```

Plot `omega`.

```
def plot_omega(results):
    plot(results.omega, color='C2', label='omega')
    decorate(xlabel='Time (s)',
             ylabel='Angular velocity (rad/s)')

plot_omega(results)
```

Plot `y`.

```
def plot_y(results):
    plot(results.y, color='C1', label='y')
    decorate(xlabel='Time (s)',
             ylabel='Length (m)')

plot_y(results)
```
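The plausibility estimates above hang together: unrolling about 12 m at radii between `Rmax` and `r(y=35 m)` should take roughly $\Delta y / \bar{r}$ radians. A rough cross-check with the notebook's parameters (plain floats, a crude average radius):

```
from math import sqrt

Rmin, Rmax, L = 0.02, 0.055, 47.0
k = (Rmax**2 - Rmin**2) / 2 / L

r_end = sqrt(2 * k * 35.0 + Rmin**2)     # radius when 35 m remain
r_avg = (Rmax + r_end) / 2               # crude average radius over the run
theta_total = 12.0 / r_avg               # radians needed to pull off 12 m

print(round(r_end, 4), round(theta_total))   # ≈ 0.0485, ≈ 232 — near the ~220 rad above
```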
Use this utility to update the returns and std_dev fields within investment-options.csv.

```
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
    return false;
}
```

```
# imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
import brownbear as bb

# format price data
pd.options.display.float_format = '{:0.2f}'.format

%matplotlib inline

# set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib or %matplotlib inline

   %matplotlib notebook: will lead to interactive plots embedded within the notebook,
   you can zoom and resize the figure

   %matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
```

Globals

```
# set refresh_timeseries=True to download timeseries.  Otherwise /symbol-cache is used.
refresh_timeseries = True

# read in dow30.csv
dow30 = pd.read_csv('dow30.csv')

# remove the exchange from the beginning of the symbol
def _symbol(row):
    return row['Symbol'].split(':')[-1].strip()
dow30['Symbol'] = dow30.apply(_symbol, axis=1)

dow30.drop(columns=['Exchange', 'Date added', 'Notes', 'Index weighting'], inplace=True)
dow30.rename(columns={'Symbol': 'Symbol', 'Company': 'Description', 'Industry': 'Asset Class'},
             inplace=True)
dow30.set_index("Symbol", inplace=True)
dow30

# read in gics-2-asset-class.csv
gics2asset_class = pd.read_csv('gics-2-asset-class.csv', skip_blank_lines=True, comment='#')
gics2asset_class.set_index("GICS", inplace=True)
gics2asset_class = gics2asset_class['Asset Class'].to_dict()
gics2asset_class

# map dow30 GICS sectors to brownbear defined asset classes
def _asset_class(row):
    return gics2asset_class[row['Asset Class']]
dow30['Asset Class'] = dow30.apply(_asset_class, axis=1)

# yahoo finance uses '-' where '.' is used in symbol names
dow30.index = dow30.index.str.replace('.', '-')
dow30

# make symbols list
symbols = list(dow30.index)
#symbols

# get the timeseries for the symbols and compile into a single csv
bb.fetch_timeseries(symbols, refresh=refresh_timeseries)
bb.compile_timeseries(symbols)

# read symbols timeseries into a dataframe
df = pd.read_csv('symbols-timeseries.csv', skip_blank_lines=True, comment='#')
df.set_index("Date", inplace=True)
df

# sample symbol
symbol = 'MMM'

annual_returns = bb.annualize_returns(df, timeperiod='daily', years=1)
annual_returns[symbol]

# calculate annualized returns
annual_returns_1mo = bb.annualize_returns(df, timeperiod='daily', years=1/12)
annual_returns_3mo = bb.annualize_returns(df, timeperiod='daily', years=3/12)
annual_returns_1yr = bb.annualize_returns(df, timeperiod='daily', years=1)
annual_returns_3yr = bb.annualize_returns(df, timeperiod='daily', years=3)
annual_returns_5yr = bb.annualize_returns(df, timeperiod='daily', years=5)

# calculate volatility
daily_returns = df.pct_change()
years = bb.TRADING_DAYS_PER_MONTH / bb.TRADING_DAYS_PER_YEAR
vola = bb.annualized_standard_deviation(daily_returns, timeperiod='daily', years=years)
vola[symbol]

# calculate downside volatility
ds_vola = bb.annualized_standard_deviation(daily_returns, timeperiod='daily',
                                           years=years, downside=True)
ds_vola[symbol]

# resample df on a monthly basis
df.index = pd.to_datetime(df.index)
monthly = df.resample('M').ffill()
bb.print_full(monthly[symbol])

# calculate monthly returns
monthly_returns = monthly.pct_change()
monthly_returns[symbol]

# calculate standard deviation
std_dev = bb.annualized_standard_deviation(monthly_returns, timeperiod='monthly', years=3)
std_dev[symbol]

# read in the investment-options header
lines = []
with open('investment-options-in.csv', 'r') as f:
    lines = [line.strip() for line in f]
lines

# for each symbol, write out the 1 Yr, 3 Yr, 5 Yr, and std dev
out = lines.copy()
for i, (index, row) in enumerate(dow30.iterrows()):
    symbol = index
    description = row['Description']
    asset_class = row['Asset Class']
    ret_1mo = annual_returns_1mo[symbol]
    ret_3mo = annual_returns_3mo[symbol]
    ret_1yr = annual_returns_1yr[symbol]
    ret_3yr = annual_returns_3yr[symbol]
    ret_5yr = annual_returns_5yr[symbol]
    if np.isnan(ret_3yr):
        ret_3yr = ret_1yr
    if np.isnan(ret_5yr):
        ret_5yr = ret_3yr
    _vola = vola[symbol]*100
    _ds_vola = ds_vola[symbol]*100
    sd = std_dev[symbol]*100
    out.append(
        '"{}","{}","{}","{:0.2f}","{:0.2f}","{:0.2f}","{:0.2f}","{:0.2f}","{:0.2f}","{:0.2f}","{:0.2f}"'
        .format(symbol, description, asset_class, ret_1mo, ret_3mo, ret_1yr, ret_3yr,
                ret_5yr, _vola, _ds_vola, sd))

# write out investment-options.csv
with open('investment-options.csv', 'w') as f:
    for line in out:
        f.write(line + '\n')
```
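The annualization used above can be sketched in plain numpy. This is an assumption about what brownbear's helpers do internally (geometric compounding for returns, square-root-of-time scaling for volatility), shown here only to make the arithmetic concrete:

```
import numpy as np

TRADING_DAYS_PER_YEAR = 252

def annualize_return(prices, years):
    """Annualized return over the trailing `years`, geometric scaling."""
    n = int(TRADING_DAYS_PER_YEAR * years)
    total = prices[-1] / prices[-n - 1] - 1
    return (1 + total) ** (1 / years) - 1

def annualized_std(daily_returns):
    """Scale daily volatility to annual by sqrt(252)."""
    return np.std(daily_returns, ddof=1) * np.sqrt(TRADING_DAYS_PER_YEAR)

# a flat 0.04% daily gain compounds to about 10.6% a year
prices = 100 * (1.0004 ** np.arange(2 * TRADING_DAYS_PER_YEAR + 1))
r = annualize_return(prices, years=1)
print(round(r, 4))   # ≈ 0.1060
```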
**Data Description**

- age: The person's age in years
- sex: The person's sex (1 = male, 0 = female)
- cp: The chest pain experienced (Value 1: typical angina, Value 2: atypical angina, Value 3: non-anginal pain, Value 4: asymptomatic)
- trestbps: The person's resting blood pressure (mm Hg on admission to the hospital)
- chol: The person's cholesterol measurement in mg/dl
- fbs: The person's fasting blood sugar (> 120 mg/dl, 1 = true; 0 = false)
- restecg: Resting electrocardiographic measurement (0 = normal, 1 = having ST-T wave abnormality, 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria)
- thalach: The person's maximum heart rate achieved
- exang: Exercise induced angina (1 = yes; 0 = no)
- oldpeak: ST depression induced by exercise relative to rest ('ST' relates to positions on the ECG plot)
- slope: the slope of the peak exercise ST segment (Value 1: upsloping, Value 2: flat, Value 3: downsloping)
- ca: The number of major vessels (0-3)
- thal: A blood disorder called thalassemia (3 = normal; 6 = fixed defect; 7 = reversable defect)
- target: Heart disease (0 = no, 1 = yes)

```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

df = pd.read_csv('heart.csv')
df.head()

df.shape

import pandas_profiling as ppl
profile = ppl.ProfileReport(df)
profile
```

Check for null values in the dataset. Not strictly needed, as pandas_profiling has already done this job.

```
df.isnull().sum().sort_values(ascending=False)
```

Now, check for correlation in the data.

```
plt.figure(figsize=(12,10))
sns.heatmap(df.corr(), cmap='viridis', annot=True)
```

**Check the correlation of features with the target variable.**

```
df.corr()['target'].sort_values(ascending=False)
```

The following plot shows the distribution of age. This graph tells us that the highest number of people suffering from heart disease are in the age group of 55-65 years.

```
sns.set_style('whitegrid')
plt.figure(figsize=(10,5))
sns.distplot(df['age'], color='cyan', kde=False)
```

### Now, let's look at the target. It is quite balanced, with an almost equal number of both classes.

```
sns.countplot(df['target'], palette='rainbow')
```

## It's time to do some other plots.

```
plt.figure(figsize=(10,7))
sns.boxplot(df['target'], df['trestbps'], hue=df['sex'], palette='viridis')

sns.countplot(x='target', hue='sex', data=df)

sns.boxplot(x='target', y='age', hue='sex', data=df)
```

### The following function changes int-type categorical columns to object-type so that OneHotEncoding (using pd.get_dummies) works on them. If we don't change them to object-type, the values remain the same after OneHotEncoding. We then append each categorical column name to `categories`.

```
categories = []

def categorical(df):
    for column in df.drop('target', axis=1).columns:
        # the dtype != 'object' check is not strictly needed here
        if len(df[column].value_counts()) < 10 and df[column].dtype != 'object':
            df[column] = df[column].astype('object')
            categories.append(column)
    return df

df = categorical(df)
categories

df.head()

df.info()
```

### Creating dummy variables for those categorical columns. Make sure that drop_first=True to avoid the "Dummy Variable Trap".
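As a quick illustration of why `drop_first=True` matters, on a toy frame separate from the dataset: with k levels, the k dummy columns always sum to 1, so they are perfectly collinear with an intercept (the "dummy variable trap"); dropping one leaves k-1 independent columns.

```
import pandas as pd

toy = pd.DataFrame({'cp': pd.Series([0, 1, 2, 3, 1], dtype='object')})

full = pd.get_dummies(toy)                      # 4 columns: cp_0 .. cp_3
reduced = pd.get_dummies(toy, drop_first=True)  # 3 columns: cp_1 .. cp_3

print(list(full.columns))     # ['cp_0', 'cp_1', 'cp_2', 'cp_3']
print(list(reduced.columns))  # ['cp_1', 'cp_2', 'cp_3']

# the k dummies always sum to 1 -> perfectly collinear with an intercept
assert (full.sum(axis=1) == 1).all()
```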
```
onehot = pd.get_dummies(df[categories],drop_first = True)
onehot

df.drop(categories,axis=1,inplace=True)  # Removing those categorical columns
df

y = df['target']
df.drop('target',axis=1,inplace=True)
df = pd.concat([df,onehot],axis=1)
df.head()

X = df.values
X.shape

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state=0)
X_train.shape,X_test.shape

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train[:,0:5] = sc.fit_transform(X_train[:,0:5])
X_test[:,0:5] = sc.transform(X_test[:,0:5])

from sklearn.ensemble import RandomForestClassifier,VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix,accuracy_score,classification_report

rf = RandomForestClassifier()
rf.fit(X_train,y_train)
predictions = rf.predict(X_test)
confusion_matrix(y_test,predictions)
```

# Hyperparameter Tuning Starts...!

## Tuning Random Forest

```
n_estimators = [200,300,400,500,600,700]
max_depth = range(1,12)
criterions = ['gini', 'entropy']
parameters = {'n_estimators':n_estimators,
              'max_depth':max_depth,
              'criterion': criterions}

grid = GridSearchCV(estimator=RandomForestClassifier(max_features='auto',n_jobs=-1),
                    param_grid=parameters,
                    cv=5,
                    verbose=1,
                    n_jobs=-1)
grid.fit(X_train,y_train)

rf_grid = grid.best_estimator_
rf_grid.fit(X_train,y_train)
predictions = rf_grid.predict(X_test)
confusion_matrix(y_test,predictions)
```

## Let's look at some important features...!
```
feature_importances = pd.DataFrame(rf_grid.feature_importances_,
                                   index=df.columns,
                                   columns=['importance'])
feature_importances.sort_values(by='importance', ascending=False)
```

## Tuning Logistic Regression

```
C_vals = [0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1,2,3,3.2,3.6,4,5,6,7,8,9,10]
penalties = ['l1','l2']
solvers = ['liblinear', 'sag','lbfgs']
parameters = {'penalty': penalties, 'C': C_vals, 'solver':solvers}

grid = GridSearchCV(estimator=LogisticRegression(),
                    param_grid=parameters,
                    scoring='accuracy',
                    cv=5,
                    verbose=1,
                    n_jobs=-1)
grid.fit(X_train,y_train)

lr_grid = grid.best_estimator_
lr_grid.fit(X_train,y_train)
predictions = lr_grid.predict(X_test)
confusion_matrix(y_test,predictions)
```

## Tuning SVM

```
C = [0.01, 0.1, 1,1.2,1.5,2,2.5,3,3.2,3.5,4]
gamma = [0.0001,0.001,0.005, 0.01, 0.1, 1]
parameters = {'C': C, 'gamma' : gamma}

grid = GridSearchCV(estimator=SVC(kernel = 'rbf', probability=True),
                    param_grid=parameters,
                    scoring='accuracy',
                    verbose=1,
                    cv=5,
                    n_jobs=-1)
grid.fit(X_train,y_train)

svm_grid = grid.best_estimator_
svm_grid.fit(X_train,y_train)
predictions = svm_grid.predict(X_test)
confusion_matrix(y_test,predictions)
```

## Tuning Bagging Classifier

```
from sklearn.ensemble import BaggingClassifier

n_estimators = [200,300,330,370,400,430,470,500,600,700]
parameters = {'n_estimators':n_estimators}

grid = GridSearchCV(BaggingClassifier(base_estimator=None),
                    param_grid=parameters,
                    cv=5,
                    verbose=1,
                    n_jobs=-1)
grid.fit(X_train,y_train)

bag_grid = grid.best_estimator_
bag_grid.fit(X_train,y_train)
predictions = bag_grid.predict(X_test)
confusion_matrix(y_test,predictions)
```

## Tuning XGBClassifier

```
base_score = [0.1,0.3,0.5,0.7,0.9]
max_depth = range(4,15)
learning_rate = [0.01,0.1,0.2,0.3,0.4]
gamma = [0.001,0.01,0.1,0.3,0.5]
parameters = {'base_score':base_score,
              'max_depth':max_depth,
              'learning_rate': learning_rate,
              'gamma':gamma}

grid = GridSearchCV(estimator=XGBClassifier(n_jobs=-1),
                    param_grid=parameters,
                    cv=5,
                    verbose=1,
                    n_jobs=-1)
grid.fit(X_train,y_train)

xgb_grid = grid.best_estimator_
xgb_grid.fit(X_train,y_train)
predictions = xgb_grid.predict(X_test)
confusion_matrix(y_test,predictions)
```

## Now, combine all of them using a Voting Classifier...!

```
vot_clf = VotingClassifier(estimators=[('rf',rf_grid),
                                       ('lr',lr_grid),
                                       ('svc',svm_grid),
                                       ('bag',bag_grid),
                                       ('xgb',xgb_grid)],
                           voting='hard')
vot_clf.fit(X_train,y_train)
predictions = vot_clf.predict(X_test)
confusion_matrix(y_test,predictions)

vot_clf.score(X_test,y_test)
rf_grid.score(X_test,y_test)
bag_grid.score(X_test,y_test)
xgb_grid.score(X_test,y_test)
```

### Let's use an Artificial Neural Network (ANN)...!

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential()
model.add(Dense(units=30,activation = 'relu' ,input_shape=(22,)))
model.add(Dropout(0.2))
model.add(Dense(units=15,activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(units=7,activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(units=1,activation = 'sigmoid'))
model.compile(optimizer = 'adam',loss='binary_crossentropy',metrics=['accuracy'])

early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)

history = model.fit(x=X_train,
                    y=y_train,
                    epochs=200,
                    validation_data=(X_test, y_test),
                    verbose=1,
                    callbacks=[early_stop])

predictions = model.predict(X_test)
predictions = [1 if i>0.5 else 0 for i in predictions]
confusion_matrix(y_test,predictions)
```

## Tuning ANN Using GridSearch....!

```
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.layers import BatchNormalization
```

### Create a function to build our ANN model.
### Keras provides a wrapper class, KerasClassifier, that allows us to use our deep learning models with scikit-learn. This is especially useful when you want to tune hyperparameters using scikit-learn's RandomizedSearchCV or GridSearchCV.

```
def build_model(layers,dropout_rate=0):
    model = Sequential()
    for i,nodes in enumerate(layers):
        if i==0:
            model.add(Dense(nodes,activation='relu',input_dim=X_train.shape[1]))
        else:
            model.add(Dense(nodes,activation='relu'))
        model.add(BatchNormalization())
        if dropout_rate:
            model.add(Dropout(dropout_rate))
    model.add(Dense(1,activation='sigmoid'))
    model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=build_model,verbose=0)
```

### Define the parameters used when fitting the ANN (besides X and y), such as epochs, callbacks, etc.

```
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
callbacks = [early_stop]
fit_parameters = {'callbacks': callbacks,
                  'epochs': 200,
                  'validation_data': (X_test,y_test),
                  'verbose': 0}
```

### Define some of the hyperparameters of our model.

```
layers = [(15,1),(20,10,1),(30,15,7,1)]
parameters = dict(layers=layers,dropout_rate=[0,0.1,0.2,0.3],batch_size=[32,64,128,256])

grid = GridSearchCV(estimator=model,
                    param_grid=parameters,
                    cv=5,
                    verbose=1,
                    n_jobs=-1)
```

### To pass the fit parameters we have to unpack them with `**fit_parameters`.

```
grid.fit(X_train,y_train,**fit_parameters)
predictions = grid.predict(X_test)
confusion_matrix(y_test,predictions)
```

### `grid` was reused for every tuned model, so below it holds the tuned ANN model, which was the latest fit.

```
all_models = [rf_grid, lr_grid, svm_grid, bag_grid, xgb_grid, vot_clf, grid]

c = {}
for i in all_models:
    a = i.predict(X_test)
    b = accuracy_score(y_test,a)
    c[i] = b
c
```

## Final Prediction !!!
```
predictions = (max(c,key=c.get)).predict(X_test)
confusion_matrix(y_test,predictions)
print(classification_report(y_test,predictions))
```

## Save and Load the Model

```
import pickle
```

### The vot_clf model is saved here with pickle; the ANN (or any deep learning model) would instead be saved in the HDF5 (.h5) format.

```
filename = 'model.pkl'
pickle.dump(vot_clf, open(filename, 'wb'))

loaded_model = pickle.load(open(filename, 'rb'))
predictions = loaded_model.predict(X_test)
confusion_matrix(y_test,predictions)
```
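The pickle save/load round trip above follows a standard pattern; a minimal self-contained sketch of it, using a plain dictionary as a hypothetical stand-in for the fitted `vot_clf` (any picklable estimator works the same way):

```
import io
import pickle

# Hypothetical stand-in for the fitted vot_clf estimator.
model = {"name": "voting_clf", "weights": [0.2, 0.3, 0.5]}

# Serialize to an in-memory buffer (a file opened with 'wb' behaves the same).
buf = io.BytesIO()
pickle.dump(model, buf)

# Deserialize and confirm the round trip preserves the object.
buf.seek(0)
loaded_model = pickle.load(buf)
assert loaded_model == model
```

The same `dump`/`load` pair works against a real file handle, which is exactly what the notebook does with `open(filename, 'wb')` and `open(filename, 'rb')`.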
``` %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (12,8) import numpy as np import tensorflow as tf import keras import pandas as pd from keras_tqdm import TQDMNotebookCallback from keras.preprocessing.sequence import pad_sequences def data_generator(batch_size, tfrecord, start_frac=0, end_frac=1): ''' Shuffles the Audioset training data and returns a generator of training data and boolean laughter labels batch_size: batch size for each set of training data and labels tfrecord: filestring of the tfrecord file to train on start_frac: the starting point of the data set to use, as a fraction of total record length (used for CV) end_frac: the ending point of the data set to use, as a fraction of total record length (used for CV) ''' max_len=10 records = list(tf.python_io.tf_record_iterator(tfrecord)) records = records[int(start_frac*len(records)):int(end_frac*len(records))] rec_len = len(records) shuffle = np.random.permutation(range(rec_len)) num_batches = rec_len//batch_size - 1 j = 0 laugh_labels = [16, 17, 18, 19, 20, 21] while True: X = [] y = [] for idx in shuffle[j*batch_size:(j+1)*batch_size]: example = records[idx] tf_seq_example = tf.train.SequenceExample.FromString(example) example_label = list(np.asarray(tf_seq_example.context.feature['labels'].int64_list.value)) laugh_bin = any((True for x in example_label if x in laugh_labels)) y.append(laugh_bin) n_frames = len(tf_seq_example.feature_lists.feature_list['audio_embedding'].feature) audio_frame = [] for i in range(n_frames): audio_frame.append(np.frombuffer(tf_seq_example.feature_lists.feature_list['audio_embedding']. 
feature[i].bytes_list.value[0],np.uint8).astype(np.float32)) pad = [np.zeros([128], np.float32) for i in range(max_len-n_frames)] audio_frame += pad X.append(audio_frame) j += 1 if j >= num_batches: shuffle = np.random.permutation(range(rec_len)) j = 0 X = np.array(X) yield X, np.array(y) ``` ### Logistic Regression ``` from keras.models import Sequential from keras.layers import Dense, BatchNormalization, Flatten lr_model = Sequential() # lr_model.add(keras.Input((None, 128))) lr_model.add(BatchNormalization(input_shape=(10, 128))) lr_model.add(Flatten()) lr_model.add(Dense(1, activation='sigmoid')) # try using different optimizers and different optimizer configs lr_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) batch_size=32 CV_frac = 0.1 train_gen = data_generator(batch_size,'../Data/bal_laugh_speech_subset.tfrecord', 0, 1-CV_frac) val_gen = data_generator(128,'../Data/bal_laugh_speech_subset.tfrecord', 1-CV_frac, 1) rec_len = 18768 lr_h = lr_model.fit_generator(train_gen,steps_per_epoch=int(rec_len*(1-CV_frac))//batch_size, epochs=100, validation_data=val_gen, validation_steps=int(rec_len*CV_frac)//128, verbose=0, callbacks=[TQDMNotebookCallback()]) plt.plot(lr_h.history['acc'], label='train_acc') plt.plot(lr_h.history['val_acc'], label='val_acc') plt.legend() lr_model.save('../Models/LogisticRegression_100Epochs.h5') ``` ### Single Layer LSTM ``` from keras.models import Sequential from keras.layers import Dense, BatchNormalization, Dropout from keras.layers import LSTM from keras import regularizers lstm_model = Sequential() lstm_model.add(BatchNormalization(input_shape=(None, 128))) lstm_model.add(Dropout(0.5)) lstm_model.add(LSTM(128, activation='relu', kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l2(0.01))) lstm_model.add(Dense(1, activation='sigmoid')) # try using different optimizers and different optimizer configs lstm_model.compile(loss='binary_crossentropy', optimizer='adam', 
metrics=['accuracy']) batch_size=32 CV_frac = 0.1 train_gen = data_generator(batch_size,'../Data/bal_laugh_speech_subset.tfrecord', 0, 1-CV_frac) val_gen = data_generator(128,'../Data/bal_laugh_speech_subset.tfrecord', 1-CV_frac, 1) rec_len = 18768 lstm_h = lstm_model.fit_generator(train_gen,steps_per_epoch=int(rec_len*(1-CV_frac))//batch_size, epochs=100, validation_data=val_gen, validation_steps=int(rec_len*CV_frac)//128, verbose=0, callbacks=[TQDMNotebookCallback()]) plt.plot(lstm_h.history['acc'], label='train_acc') plt.plot(lstm_h.history['val_acc'], label='val_acc') plt.legend() lstm_model.save('../Models/LSTM_SingleLayer_100Epochs.h5') ``` ### 3 Layer LSTM ``` from keras.models import Sequential from keras.layers import Dense, BatchNormalization, Dropout from keras.layers import LSTM from keras import regularizers lstm3_model = Sequential() lstm3_model.add(BatchNormalization(input_shape=(None, 128))) lstm3_model.add(Dropout(0.5)) lstm3_model.add(LSTM(64, activation='relu', kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l2(0.01), return_sequences=True)) lstm3_model.add(BatchNormalization()) lstm3_model.add(Dropout(0.5)) lstm3_model.add(LSTM(64, activation='relu', kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l2(0.01), return_sequences=True)) lstm3_model.add(BatchNormalization()) lstm3_model.add(Dropout(0.5)) lstm3_model.add(LSTM(64, activation='relu', kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l2(0.01))) lstm3_model.add(Dense(1, activation='sigmoid')) # try using different optimizers and different optimizer configs lstm3_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) batch_size=32 CV_frac = 0.1 train_gen = data_generator(batch_size,'../Data/bal_laugh_speech_subset.tfrecord', 0, 1-CV_frac) val_gen = data_generator(128,'../Data/bal_laugh_speech_subset.tfrecord', 1-CV_frac, 1) rec_len = 18768 lstm3_h = 
lstm3_model.fit_generator(train_gen,steps_per_epoch=int(rec_len*(1-CV_frac))//batch_size, epochs=100, validation_data=val_gen, validation_steps=int(rec_len*CV_frac)//128, verbose=0, callbacks=[TQDMNotebookCallback()]) plt.plot(lstm3_h.history['acc'], label='train_acc') plt.plot(lstm3_h.history['val_acc'], label='val_acc') plt.legend() lstm3_model.save('../Models/LSTM_ThreeLayer_100Epochs.h5') keras.__version__ ```
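The zero-padding step inside `data_generator` above (padding each clip's 128-dimensional embedding frames out to `max_len` frames) can be sketched on its own with plain lists; `pad_frames` is a hypothetical helper name, not part of the notebook:

```
def pad_frames(audio_frame, max_len=10, emb_dim=128):
    """Pad a list of embedding frames with zero vectors up to max_len frames."""
    pad = [[0.0] * emb_dim for _ in range(max_len - len(audio_frame))]
    return audio_frame + pad

frames = [[1.0] * 128 for _ in range(7)]  # e.g. a clip with 7 one-second frames
padded = pad_frames(frames)

assert len(padded) == 10                 # padded out to max_len frames
assert padded[-1] == [0.0] * 128         # trailing frames are zero padding
assert padded[0] == [1.0] * 128          # original frames are preserved
```

This mirrors the generator's `pad = [np.zeros([128], np.float32) ...]` line, which guarantees every batch element has the same `(max_len, 128)` shape.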
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Upgrade code to TensorFlow 2.0 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/upgrade"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/upgrade.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/upgrade.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/upgrade.ipynb"> <img src="https://www.tensorflow.org/images/download_logo_32px.png" /> Download notebook</a> </td> </table> TensorFlow 2.0 includes many API changes, such as reordering arguments, renaming symbols, and changing default values for parameters. Manually performing all of these modifications would be tedious and prone to error. To streamline the changes, and to make your transition to TF 2.0 as seamless as possible, the TensorFlow team has created the `tf_upgrade_v2` utility to help transition legacy code to the new API. 
Note: `tf_upgrade_v2` is installed automatically for TensorFlow 1.13 and later (including all TF 2.0 builds).

Typical usage is like this:

<pre class="devsite-terminal devsite-click-to-copy prettyprint lang-bsh">
tf_upgrade_v2 \
  --intree my_project/ \
  --outtree my_project_v2/ \
  --reportfile report.txt
</pre>

It will accelerate your upgrade process by converting existing TensorFlow 1.x Python scripts to TensorFlow 2.0. The conversion script automates as much as possible, but there are still syntactical and stylistic changes that cannot be performed by the script.

## Compatibility modules

Certain API symbols can not be upgraded simply by using a string replacement. To ensure your code is still supported in TensorFlow 2.0, the upgrade script includes a `compat.v1` module. This module replaces TF 1.x symbols like `tf.foo` with the equivalent `tf.compat.v1.foo` reference. While the compatibility module is nice, we recommend that you manually proofread replacements and migrate them to new APIs in the `tf.*` namespace instead of the `tf.compat.v1` namespace as quickly as possible.

Because of TensorFlow 2.x module deprecations (for example, `tf.flags` and `tf.contrib`), some changes can not be worked around by switching to `compat.v1`. Upgrading this code may require using an additional library (for example, [`absl.flags`](https://github.com/abseil/abseil-py)) or switching to a package in [tensorflow/addons](http://www.github.com/tensorflow/addons).

## Recommended upgrade process

The rest of this guide demonstrates how to use the upgrade script. While the upgrade script is easy to use, it is strongly recommended that you use the script as part of the following process:

1. **Unit Test**: Ensure that the code you’re upgrading has a unit test suite with reasonable coverage. This is Python code, so the language won’t protect you from many classes of mistakes. Also ensure that any dependency you have has already been upgraded to be compatible with TensorFlow 2.0.
1.
**Install TensorFlow 1.14**: Upgrade your TensorFlow to the latest TensorFlow 1.x version, at least 1.14. This includes the final TensorFlow 2.0 API in `tf.compat.v2`.
1. **Test With 1.14**: Ensure your unit tests pass at this point. You’ll be running them repeatedly as you upgrade, so starting from green is important.
1. **Run the upgrade script**: Run `tf_upgrade_v2` on your entire source tree, tests included. This will upgrade your code to a format where it only uses symbols available in TensorFlow 2.0. Deprecated symbols will be accessed with `tf.compat.v1`. These will eventually require manual attention, but not immediately.
1. **Run the converted tests with TensorFlow 1.14**: Your code should still run fine in TensorFlow 1.14. Run your unit tests again. Any error in your tests here means there’s a bug in the upgrade script. [Please let us know](https://github.com/tensorflow/tensorflow/issues).
1. **Check the upgrade report for warnings and errors**: The script writes a report file that explains any conversions you should double-check, or any manual action you need to take. For example: Any remaining instances of contrib will require manual action to remove. Please consult [the RFC for more instructions](https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md).
1. **Install TensorFlow 2.0**: At this point it should be safe to switch to TensorFlow 2.0.
1. **Test with `v1.disable_v2_behavior`**: Re-running your tests with a `v1.disable_v2_behavior()` call in the tests' main function should give the same results as running under 1.14.
1. **Enable V2 Behavior**: Now that your tests work using the v2 API, you can start looking into turning on v2 behavior. Depending on how your code is written this may require some changes. See the [Migration Guide](migration_guide.ipynb) for details.

## Using the upgrade script

### Setup

Before getting started ensure that TensorFlow 2.0 is installed.
```
!pip install tensorflow==2.0.0-rc0

from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
print(tf.__version__)
```

Clone the [tensorflow/models](https://github.com/tensorflow/models) git repository so you have some code to test on:

```
!git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models
```

### Read the help

The script should be installed with TensorFlow. Here is the builtin help:

```
!tf_upgrade_v2 -h
```

### Example TF1 code

Here is a simple TensorFlow 1.0 script:

```
!head -n 65 models/samples/cookbook/regression/custom_regression.py | tail -n 10
```

With TensorFlow 2.0 installed it does not run:

```
!(cd models/samples/cookbook/regression && python custom_regression.py)
```

### Single file

The upgrade script can be run on a single Python file:

```
!tf_upgrade_v2 \
  --infile models/samples/cookbook/regression/custom_regression.py \
  --outfile /tmp/custom_regression_v2.py
```

The script will print errors if it can not find a fix for the code.

### Directory tree

Typical projects, including this simple example, will use much more than one file. Typically you want to upgrade an entire package, so the script can also be run on a directory tree:

```
# upgrade the .py files and copy all the other files to the outtree
!tf_upgrade_v2 \
  --intree models/samples/cookbook/regression/ \
  --outtree regression_v2/ \
  --reportfile tree_report.txt
```

Note the one warning about the `dataset.make_one_shot_iterator` function.

Now the script works with TensorFlow 2.0. Note that, because of the `tf.compat.v1` module, the converted script will also run in TensorFlow 1.14.

```
!(cd regression_v2 && python custom_regression.py 2>&1) | tail
```

## Detailed report

The script also reports a list of detailed changes.
In this example it found one possibly unsafe transformation and included a warning at the top of the file:

```
!head -n 20 tree_report.txt
```

Note again the one warning about the `Dataset.make_one_shot_iterator` function. In other cases the output will explain the reasoning for non-trivial changes:

```
%%writefile dropout.py
import tensorflow as tf

d = tf.nn.dropout(tf.range(10), 0.2)
z = tf.zeros_like(d, optimize=False)

!tf_upgrade_v2 \
  --infile dropout.py \
  --outfile dropout_v2.py \
  --reportfile dropout_report.txt > /dev/null

!cat dropout_report.txt
```

Here are the modified file contents; note how the script adds argument names to deal with moved and renamed arguments:

```
!cat dropout_v2.py
```

A larger project might contain a few errors. For example, convert the deeplab model:

```
!tf_upgrade_v2 \
  --intree models/research/deeplab \
  --outtree deeplab_v2 \
  --reportfile deeplab_report.txt > /dev/null
```

It produced the output files:

```
!ls deeplab_v2
```

But there were errors. The report will help you pinpoint what you need to fix before this will run:

```
!cat deeplab_report.txt | grep -i models/research/deeplab | grep -i error
```

## "Safety" mode

The conversion script also has a less invasive `SAFETY` mode that simply changes the imports to use the `tensorflow.compat.v1` module:

```
!cat dropout.py

!tf_upgrade_v2 --mode SAFETY --infile dropout.py --outfile dropout_v2_safe.py > /dev/null

!cat dropout_v2_safe.py
```

As you can see this doesn't upgrade your code, but it does allow TensorFlow 1 code to run in TensorFlow 2.

## Caveats

- Do not update parts of your code manually before running this script. In particular, functions that have had reordered arguments like `tf.argmax` or `tf.batch_to_space` cause the script to incorrectly add keyword arguments that mismap your existing code.
- The script assumes that `tensorflow` is imported using `import tensorflow as tf`.
- This script does not reorder arguments.
Instead, the script adds keyword arguments to functions that have their arguments reordered. - Check out [tf2up.ml](http://tf2up.ml) for a convenient tool to upgrade Jupyter notebooks and Python files in a GitHub repository. To report upgrade script bugs or make feature requests, please file an issue on [GitHub](https://github.com/tensorflow/tensorflow/issues). And if you’re testing TensorFlow 2.0, we want to hear about it! Join the [TF 2.0 Testing community](https://groups.google.com/a/tensorflow.org/forum/#!forum/testing) and send questions and discussion to [testing@tensorflow.org](mailto:testing@tensorflow.org).
``` %reload_ext autoreload %autoreload 2 import logging import numpy as np # Make analysis reproducible np.random.seed(0) # Enable logging logging.basicConfig(level=logging.INFO) from dask.distributed import Client Client(n_workers=2, threads_per_worker=2, processes=True, memory_limit='25GB') from track_linearization import make_track_graph, plot_track_graph import matplotlib.pyplot as plt from scipy.stats import multivariate_normal angle = np.linspace(-np.pi, np.pi, num=24, endpoint=False) radius = 30 node_positions = np.stack((radius * np.cos(angle), radius * np.sin(angle)), axis=1) node_ids = np.arange(node_positions.shape[0]) edges = np.stack((node_ids, np.roll(node_ids, shift=1)), axis=1) track_graph = make_track_graph(node_positions, edges) position_angles = np.linspace(-np.pi, 31 * np.pi, num=360_000, endpoint=False) position = np.stack((radius * np.cos(position_angles), radius * np.sin(position_angles)), axis=1) # position += multivariate_normal(mean=0, cov=.005).rvs(position.shape) fig, ax = plt.subplots(figsize=(10, 10)) plot_track_graph(track_graph, ax=ax) ax.tick_params(left=True, bottom=True, labelleft=True, labelbottom=True) ax.set_xlabel("x-position") ax.set_ylabel("y-position") ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.scatter(position[:, 0], position[:, 1], alpha=0.25, s=10, zorder=11, color="orange") from track_linearization import plot_graph_as_1D edge_spacing = 0 n_nodes = len(track_graph.nodes) edge_order = np.stack((np.roll(np.arange(n_nodes-1, -1, -1), 1), np.arange(n_nodes-1, -1, -1)), axis=1) fig, ax = plt.subplots(figsize=(n_nodes // 2, 1)) plot_graph_as_1D(track_graph, edge_spacing=edge_spacing, edge_order=edge_order, ax=ax) from track_linearization import get_linearized_position position_df = get_linearized_position(position, track_graph, edge_order=edge_order, edge_spacing=edge_spacing, use_HMM=False) position_df plt.figure(figsize=(20, 5)) sampling_frequency = 1000 time = 
np.arange(position_df.linear_position.size) / sampling_frequency plt.scatter(time, position_df.linear_position, clip_on=False, s=1) from replay_trajectory_classification.simulate import simulate_neuron_with_place_field angle = np.linspace(-np.pi, np.pi, num=24, endpoint=False) place_field_centers = np.stack((radius * np.cos(angle), radius * np.sin(angle)), axis=1) spikes = np.stack([simulate_neuron_with_place_field(center, position, sampling_frequency=sampling_frequency, variance=6.0**2) for center in place_field_centers], axis=1) spikes.shape spikes.sum(axis=0) fig, ax = plt.subplots(figsize=(10, 10)) for spike in spikes.T: spike_ind = np.nonzero(spike)[0] ax.scatter(position[spike_ind, 0], position[spike_ind, 1]) ax.axis("square") from replay_trajectory_classification import SortedSpikesDecoder from replay_trajectory_classification.state_transition import estimate_movement_var movement_var = estimate_movement_var(position, sampling_frequency) movement_var = np.mean(np.diag(movement_var)) movement_var = 0.25 place_bin_size = 0.5 decoder = SortedSpikesDecoder( replay_speed=1, movement_var=0.25, place_bin_size=place_bin_size) decoder decoder.fit(position_df.linear_position, spikes, track_graph=track_graph, edge_order=edge_order, edge_spacing=edge_spacing) fig, ax = plt.subplots(figsize=(20, 5)) (decoder.place_fields_ * sampling_frequency).plot(x="position", hue="neuron", add_legend=False, ax=ax); plt.figure(figsize=(10, 10)) bin1, bin2 = np.meshgrid(decoder.place_bin_edges_, decoder.place_bin_edges_) plt.pcolormesh(bin1, bin2, decoder.state_transition_.T, vmin=0.0) results = decoder.predict(spikes, time=time) fig, axes = plt.subplots(2, 1, figsize=(20, 5), sharex=True, constrained_layout=True) spike_time_ind, neuron_ind = np.nonzero(spikes) axes[0].scatter(time[spike_time_ind], neuron_ind, clip_on=False, s=1) results.acausal_posterior.plot(x="time", y="position", vmin=0.0, ax=axes[1], robust=True) # axes[1].scatter(time, position_df.linear_position, color="magenta", 
s=1) plt.subplots(1, 1, figsize=(20, 5), sharex=True, constrained_layout=True) results.likelihood.isel(time=spike_time_ind).plot(x="time", y="position") ``` # No Track Graph ``` decoder2 = SortedSpikesDecoder( replay_speed=1, movement_var=movement_var, place_bin_size=place_bin_size) decoder2.fit(position_df.linear_position, spikes) results2 = decoder.predict(spikes, time=time) fig, axes = plt.subplots(2, 1, figsize=(20, 5), sharex=True, constrained_layout=True) spike_time_ind, neuron_ind = np.nonzero(spikes) axes[0].scatter(time[spike_time_ind], neuron_ind, clip_on=False, s=1) results2.acausal_posterior.plot(x="time", y="position", vmin=0.0, ax=axes[1], robust=True) ```
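For the circular track simulated above, linearization has a closed form: arc length is just the radius times the unwrapped angle. A minimal stdlib sketch of that idea (this is illustrative only, not the graph-based algorithm `track_linearization.get_linearized_position` actually implements; `linearize_circular` is a hypothetical helper):

```
import math

RADIUS = 30.0  # track radius used in the notebook

def linearize_circular(x, y, radius=RADIUS):
    """Map a 2-D point on a circle to arc length, measured from angle -pi."""
    angle = math.atan2(y, x)            # angle in (-pi, pi]
    return radius * (angle + math.pi)   # arc length in [0, 2*pi*radius)

# A point a quarter turn past the start lies a quarter circumference along the track.
quarter = linearize_circular(0.0, -RADIUS)
assert abs(quarter - 2 * math.pi * RADIUS / 4) < 1e-9
```

The graph-based method generalizes this to arbitrary track shapes by projecting each position onto the nearest track-graph edge and accumulating distance along the ordered edges.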
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt df=pd.read_csv('FearData.csv') df.head(22) # Preprocessing : from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,confusion_matrix from itertools import product from sklearn.preprocessing import StandardScaler # Classifiers from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn import svm from sklearn import tree from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.manifold import TSNE from sklearn.decomposition import PCA X = df.drop(['Label'], axis = 1).values Y = df['Label'] X = StandardScaler().fit_transform(X) X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size = 0.30, random_state = 101) trainedmodel = LogisticRegression().fit(X_Train,Y_Train) predictions =trainedmodel.predict(X_Test) print(confusion_matrix(Y_Test,predictions)) print(classification_report(Y_Test,predictions)) # trainedforest = RandomForestClassifier().fit(X_Train,Y_Train) # predictionforest = trainedforest.predict(X_Test) # print(confusion_matrix(Y_Test,predictionforest)) # print(classification_report(Y_Test,predictionforest)) trainedsvm = svm.LinearSVC().fit(X_Train, Y_Train) predictionsvm = trainedsvm.predict(X_Test) print(confusion_matrix(Y_Test,predictionsvm)) print(classification_report(Y_Test,predictionsvm)) trainedtree = tree.DecisionTreeClassifier().fit(X_Train, Y_Train) predictionstree = trainedtree.predict(X_Test) print(confusion_matrix(Y_Test,predictionstree)) print(classification_report(Y_Test,predictionstree)) # predictionstree = trainedtree.predict_proba(X_Test) # print(predictionstree) trainedlda = LinearDiscriminantAnalysis().fit(X_Train, Y_Train) predictionlda = trainedlda.predict(X_Test) print(confusion_matrix(Y_Test,predictionlda)) 
print(classification_report(Y_Test,predictionlda)) trainednb = GaussianNB().fit(X_Train, Y_Train) predictionnb = trainednb.predict(X_Test) print(confusion_matrix(Y_Test,predictionnb)) print(classification_report(Y_Test,predictionnb)) pca = PCA(n_components=2,svd_solver='full') X_pca = pca.fit_transform(X) # print(pca.explained_variance_) X_reduced, X_test_reduced, Y_Train, Y_Test = train_test_split(X_pca, Y, test_size = 0.30, random_state = 101) # pca = PCA(n_components=2,svd_solver='full') # X_reduced = pca.fit_transform(X_Train) #X_reduced = TSNE(n_components=2).fit_transform(X_Train, Y_Train) trainednb = GaussianNB().fit(X_reduced, Y_Train) trainedtree = tree.DecisionTreeClassifier().fit(X_reduced, Y_Train) trainedforest = RandomForestClassifier(n_estimators=700).fit(X_reduced,Y_Train) trainedmodel = LogisticRegression().fit(X_reduced,Y_Train) # pca = PCA(n_components=2,svd_solver='full') # X_test_reduced = pca.fit_transform(X_Test) #X_test_reduced = TSNE(n_components=2).fit_transform(X_Test, Y_Test) print('Naive Bayes') predictionnb = trainednb.predict(X_test_reduced) print(confusion_matrix(Y_Test,predictionnb)) print(classification_report(Y_Test,predictionnb)) print('Decision Tree') predictionstree = trainedtree.predict(X_test_reduced) print(confusion_matrix(Y_Test,predictionstree)) print(classification_report(Y_Test,predictionstree)) print('Random Forest') predictionforest = trainedforest.predict(X_test_reduced) print(confusion_matrix(Y_Test,predictionforest)) print(classification_report(Y_Test,predictionforest)) print('Logistic Regression') predictions =trainedmodel.predict(X_test_reduced) print(confusion_matrix(Y_Test,predictions)) print(classification_report(Y_Test,predictions)) # Thanks to: https://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_decision_regions.html # Plotting decision regions reduced_data = X_reduced trainednb = GaussianNB().fit(reduced_data, Y_Train) trainedtree = tree.DecisionTreeClassifier().fit(X_reduced, Y_Train) 
trainedforest = RandomForestClassifier(n_estimators=700).fit(reduced_data, Y_Train)
trainedmodel = LogisticRegression().fit(reduced_data, Y_Train)

x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
                     np.arange(y_min, y_max, 0.1))

f, axarr = plt.subplots(2, 2, sharex='col', sharey='row', figsize=(10, 8))
for idx, clf, tt in zip(product([0, 1], [0, 1]),
                        [trainednb, trainedtree, trainedforest, trainedmodel],
                        ['Naive Bayes Classifier', 'Decision Tree',
                         'Random Forest', 'Logistic Regression']):
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    axarr[idx[0], idx[1]].contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.4)
    axarr[idx[0], idx[1]].scatter(reduced_data[:, 0], reduced_data[:, 1],
                                  c=Y_Train, s=20, edgecolor='k')
    axarr[idx[0], idx[1]].set_title(tt)
plt.show()
```
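Since `FearData.csv` is not included here, the head-to-head comparison of classifiers above can also be sketched on synthetic data with a cross-validation loop. The dataset and scores below are illustrative stand-ins, not results from this notebook:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn import tree

# Synthetic stand-in for FearData.csv (hypothetical: 2 classes, 10 features)
X, y = make_classification(n_samples=500, n_features=10, random_state=101)
X = StandardScaler().fit_transform(X)

models = {
    'Logistic Regression': LogisticRegression(),
    'Decision Tree': tree.DecisionTreeClassifier(),
    'Naive Bayes': GaussianNB(),
}
for name, clf in models.items():
    # 5-fold cross-validation gives a mean and spread per model
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Cross-validation avoids judging each model on a single train/test split, which matters when the splits are small.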
*This notebook contains material from [CBE40455-2020](https://jckantor.github.io/CBE40455-2020); content is available [on Github](https://github.com/jckantor/CBE40455-2020.git).*

# 1.0 CBE 40455/60455 Process Operations: Syllabus

August 11, 2020

## Catalog Description

This course introduces students to methods for the analysis and optimization of process operations. Topics will include process modeling, continuous and discrete optimization, scheduling, supply chains, scenario analysis, and financial analysis for process operations. Special emphasis will be given to practical implementation of methods for real world applications.

## Course Information

### Schedule and Locations

* **Class Meetings:** TTh 9:35-10:50 in 100 Jordan Hall
* **Course Management:** Course materials and assignments will be managed using [Sakai](), Notre Dame's Learning Management System.
* **Github Repository:** [jckantor.github.io/CBE40455](jckantor.github.io/CBE40455)
* **Instructor:** Jeffrey Kantor, Department of Chemical & Biomolecular Engineering.
* **Office:** 257 Nieuwland Hall
* **Office Hours:** Thursday afternoons are the regularly scheduled drop-in office hours for the course. Otherwise I'm generally available during regular business hours by appointment.
* **Email:** [jeff@nd.edu](jeff@nd.edu) This is generally the best way to reach me.
* **Text Message:** 574-532-4233

## Topics

### Weeks 1-2: Discrete Event Dynamics and Simulation

* **Discrete Event Simulation**. Implementing discrete event simulations in Python. Poisson processes, queues, evaluating simulation results.

Reading: \citep{Beamon1998} \citep{Christopher2000} \citep{Shah2005} \citep{You2008} \citep{Ferrio2008}

Project: Simulating a COVID outbreak.

### Weeks 3-4: Linear Optimization

* Blending Problems. Formulation and solution of 'blending' and 'diet' models. History and significance of linear programming. Comparison of single product and multi-product plants.
* Modeling Languages. Formulation and solution of linear programming problems using spreadsheets, algebraic modeling languages, and embedded modeling languages.
* Examples of modeling languages for mathematical programming:
    * AMPL
    * APMonitor
    * GLPK/MathProg
    * CMPL
    * [CPLEX/OPL](http://www-03.ibm.com/software/products/en/subcategory/decision-optimization)
* Examples of modeling languages embedded within programming languages:
    * Matlab/CVX
    * Matlab/YALMIP
    * Matlab|Python/APMonitor
    * Python/PuLP
    * Python/Pyomo
* Mathematical formulation and optimality conditions. Standard formulations of linear programming problems. Necessary conditions for optimality. Outline of an active set method for solution.
* Sensitivity Analysis. Weak and strong duality. Slack variables, shadow prices. Applications to process analysis and decision making.
* Transportation, Assignment, and Network Flow Problems
* Project Management. Critical Path Method, PERT and Gantt charts.

Project: Optimization of a simple model for refinery operations (Example 16.1 from \citep{Edgar2001}).

### Weeks 5-6: Scheduling

* Mixed Integer Linear Programming. Integer and binary variables. Using binary variables to model logical constraints and application to process decisions. Application to agricultural and stock cutting operations.
* Scheduling. Empirical methods for job scheduling.
* Machine and Job Scheduling. Modeling machines and jobs. Disjunctive constraints. Locating bottlenecks, tracking jobs and machine activities with Gantt charts. Practical implications of computational complexity.
* Short Term Scheduling of Batch Processes. Resource task networks, state task networks.

Reading: \citep{Mendez2006a} \citep{Floudas2005a}

Project: Scheduling production for a contract pharmaceutical manufacturer.

### Weeks 7-8: Logistics

* **Inventory Management**. Inventory, reorder levels, economic order quantity. Empirical models for inventory management.
* **Supply Chains**. [Beer Game Simulation](http://www.beergame.org/). Multi-echelon supply chains, processes, dynamics, and the 'bullwhip' effect. The role of information flow in supply chains.

#### Readings

Fisher, Marshall L. ["What is the right supply chain for your product?." Harvard Business Review 75 (1997): 105-117.](https://pdfs.semanticscholar.org/647a/c2ded3d69e41bb09ef5556aa942e01abd14d.pdf)

### Weeks 9-10: Optimization under Uncertainty

* Newsvendor Problem. Optimal order size for uncertain demand \citep{Petruzzi1999}.
* Scenario Analysis. Plant expansion. Introduction to stochastic programming. Expected values, optimization of the mean scenario, value of perfect information, value of the stochastic solution. 'Here and Now' decisions versus 'Wait and See'.
* Stochastic Linear Programming. Two stage decisions, recourse variables. Implementing stochastic linear programs with algebraic modeling languages. Solution by decomposition methods.
* Process Resiliency. Measures of process flexibility and resiliency to perturbations.

Readings: \citep{Sen1999a} \citep{Grossmann2014}

Project: Production planning for a consumer goods company.

### Weeks 11-12: Financial Modeling, Risk and Diversification

* **Time value of money**. Discounted cash flows. Replicating portfolio for a sequence of cash flows. Bonds and duration. Net present value and internal rate of return. Project valuation.
* **Stochastic Modeling of Asset Prices**. Statistical properties of prices for financial assets and commodity goods. Statistical distributions, discrete time models, model calibration, approximations with binomial trees. Correlation and copulas.
* **Commodity Markets**. Futures, forwards, options, and swaps. Hedging operational costs. Replicating portfolios.
* **Investment Objectives**. Kelly's criterion and log optimal growth. Logarithmic utility functions, certainty equivalence principle, coherent risk measures, first and second order stochastic dominance. Practical risk measures.
* **Portfolio Management**. Risk versus return. Markowitz analysis, diversification. Efficient frontier. Comparison of Markowitz to Mean Absolute Difference, and optimal portfolios. Effects of adding a risk-free asset, two-fund theorem, market price of risk.
* **Real Options**. European and American options on a financial asset. Extending the analysis to real options on tradable assets, and to decisions without tradable assets. \citep{Davis20xx}

Reading: \citep{Adner2004}, \citep{Anderson2013}

Project: (a) Valuation of a natural resource operation, or (b) pricing and managing an energy swap for a flex-fuel utility.

### Weeks 13-14: Capstone Project

The capstone experience will be a team-based, integrative, open-ended project. Student teams will propose projects of their own design or select from a suggested list of projects. Projects will consist of an initial proposal, an oral status report to the class, a final poster presentation and written report, and a Github repository memorializing the project.

#### Project Ideas

* **Networks Against Time.** Blood supplies, medical nuclear supplies, pharmaceuticals, food, and fast fashion are all examples of supply chains for perishable products.
The book [Networks Against Time](https://www.springer.com/gp/book/9781461462767) (available from [Hesburgh Library](https://link-springer-com.proxy.library.nd.edu/book/10.1007%2F978-1-4614-6277-4)) describes analytics suitable for supply chains of perishable goods. Using an example from this book, develop an analytical model, sample calculation, and simulation.

* **Model Predictive Control** is an optimization-based method for feedback control of dynamical systems. Implement an MPC controller for a system with discrete decisions.
* **Refinery Products Pooling Problem** is the task of finding a minimum-cost set of product pools from which final products can be blended. The project is to develop and solve an example problem.
* **Energy Swap**: design and price an energy swap for a flex-fuel utility located on a university campus.
* **Demand Response**: evaluate the potential for demand response for an aluminum smelter or a chlorine producer.
* **Process Resiliency** has been a topic discussed in the process systems engineering literature since the pioneering work of Morari and Grossmann in the early 1980s (Grossmann, 2014). The purpose of this project is to implement a measure of process resiliency and demonstrate its application to a chemical process.
* **Log Periodic Power Laws** are models that have been used to predict the collapse of speculative 'bubbles' in financial and commodity markets. For this project, develop a regression method to fit a log periodic power law model to commodity price data, and compare a recent data series of your choice to the price of gold in the 2007--2009 period.

Additional project ideas can be found at the following links:

* [AIMMS Application Examples](http://www.aimms.com/downloads/application-examples)

#### Project Github Repository

Your project should be memorialized in the form of a [Github](https://github.com/) repository.
The simplest way to manage your repository is to download the [Github Desktop application](https://desktop.github.com/) and follow the [tutorial and guides](https://help.github.com/en/articles/set-up-git).
# Classifying Bimodal triangular EA lattices

The Hamiltonian for the EA model on a 2d triangular lattice with spins $\{S\}$ is

$$H = \sum_{\langle i,j \rangle} J_{ij}(p)\, S_i S_j$$

where $\langle i,j \rangle$ denotes nearest-neighbour positions on a triangular lattice and $J_{ij}(p)$ takes the values +1 and -1 with probabilities p and 1-p respectively. We'll consider only values of p between 0.5 and 1 throughout this notebook.

## Classification between p=0.5 and p=0.7

In this subsection, we will supply a convolutional neural network (CNN) with properly labelled samples for p=0.5 and p=0.7, over multiple realisations of the couplings $J_{ij}$ in each category. Later, we'll analyse the network's output for intermediate values of p. Following this, we'll look at 2-point correlation functions of these lattices for the same values and compare the variation of the correlation functions with that of the neural network output.

### Imports and dependencies

```
import math
import numpy as np
import cupy as cp
import tensorflow.keras as tfk
import tensorflow as tf
import matplotlib.pyplot as plt
from google.colab import drive
from google.colab import output

drive.mount('/content/drive', force_remount=True)
folder_path = "/content/drive/My Drive/Project Presentations/Spin_glass_phase_classification/"
```

I've written Numba CUDA kernels (functions that perform calculations inside GPUs using CUDA) for simulating MCMC and parallel tempering algorithms for the triangular EA model, in a separate Python file.
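Before diving into those kernels, the coupling distribution described above can be checked with a quick, self-contained NumPy sketch (the names here are illustrative, independent of the CuPy/Numba code used later):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.7  # probability of a +1 (ferromagnetic) bond
L = 48   # lattice side, matching this notebook's 48x48 lattices

# Three bond directions per site on a triangular lattice
Jnn = rng.choice([-1, 1], size=(L, L, 3), p=[1 - p, p])

# The empirical fraction of +1 bonds should be close to p
frac_plus = (Jnn == 1).mean()
print(frac_plus)
```

With L*L*3 = 6912 independent bonds the sample fraction stays within a fraction of a percent of p.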
Let's import the file and its contents inside this notebook:

```
!curl -o TriEA_kernels.py https://raw.githubusercontent.com/dinesh110598/Spin_glass_NN/master/TriEA_kernels.py
from TriEA_kernels import *
!nvidia-smi -L
```

### Data generating function

Let's write a function that uses the imported kernels to generate our training data:

```
#Jnn = cp.random.choice ([-1,1], (1,48,48,3),
#p=[1.0, 0.]).astype(np.float32)
def generate_train_data (train_len, prob, lat_len=48, m=100):
    shape = (lat_len, lat_len)
    n_ens = train_len//m
    spin = cp.random.choice ([1,-1], (train_len,)+shape).astype(np.int8)
    seed = cp.random.randint (-10000,10000, size=(train_len,)+shape,
                              dtype=np.int32)
    Jnn = cp.random.choice ([-1,1], (n_ens,)+shape+(3,),
                            p=[1-prob, prob]).astype(np.float32)
    energy = cp.zeros ((n_ens,m), np.float32)
    tpb = (1,8,8)
    bpg = (train_len, lat_len//8, lat_len//8)
    perm = cp.arange (0, train_len, dtype=np.int32)
    temp = 0.5
    T = cp.full ((n_ens,m), 0.5, np.float32)
    for _ in range (3000):
        update_red [bpg,tpb] (spin, seed, T, Jnn, perm)
        update_blue [bpg,tpb] (spin, seed, T, Jnn, perm)
        update_green [bpg,tpb] (spin, seed, T, Jnn, perm)
    calc_energy [math.ceil(train_len/64),64] (spin, energy, Jnn)
    spin = 0.5*cp.asnumpy (spin)
    #return cp.asnumpy (energy)
    return spin[...,np.newaxis]  # Additional axis required for conv2d layer

energy = generate_train_data (1000, 0.5)
np.sort(energy[0]/(48**2))
```

Let's generate training data for p=0.5 and concatenate with that of p=0.7, with corresponding labels 0 and 1 respectively:

```
t_lattice = generate_train_data (8000, 0.8)
t_label = np.zeros (8000, np.int32)
t_lattice = np.concatenate ([t_lattice, generate_train_data (8000,0.9)])
t_label = np.concatenate ([t_label, np.ones (8000, np.int32)])
output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()')
```

Let's gather our numpy data in a single tf.data dataset:

```
train_data = tf.data.Dataset.from_tensor_slices ((t_lattice,t_label))
train_data = train_data.shuffle (buffer_size=16000)
```

Splitting the dataset into training and validation datasets:

```
val_data = train_data.take (4000)
train_data = train_data.skip (4000)
val_data = val_data.batch (8)
train_data = train_data.batch (8)
```

### Neural network initialization and training

```
brain = tfk.Sequential([
    tfk.layers.Conv2D(64, (2,2), activation='relu', input_shape=(48,48,1)),
    tfk.layers.MaxPool2D (),
    tfk.layers.Conv2D(64, (2,2), activation='relu'),
    tfk.layers.Flatten(),
    tfk.layers.Dense(64, activation='relu'),
    tfk.layers.Dropout(0.3),
    tfk.layers.Dense(2, activation='softmax')
])
brain.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
hist = brain.fit (train_data, epochs=1, validation_data=val_data)
brain.save (folder_path+"EA_8_9.h5")
```

### Neural network predictions for intermediate values

```
datax = []
lattice = []
p = 0.5
while (p < 0.601):
    lattice.append (generate_train_data (2000, p))
    datax.append (p)
    p += 0.01
lattice = np.concatenate (lattice)
output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()')

brain = tfk.models.load_model (folder_path+"EA_7_8.h5")
predictions = brain.predict (lattice[10000:16000])
predictions.shape

datax = np.arange (0.7, 0.801, 0.02)
datay = []
for i in range (len(datax)):
    datay.append (predictions[1000*i:1000*(i+1),0].mean())
plt.plot (datax, datay)
plt.grid()
```

### Correlation functions for intermediate values

Let's calculate 2-point correlation functions with distance, averaged over both translations and the three rotations possible on a triangular lattice, using the expression

$$C_2(r) = \frac{\langle S(x)\,S(x+r)\rangle_x}{\langle S(x)^2\rangle_x - \langle S(x)\rangle_x^2}$$

which assumes translational invariance between averaging over x as opposed to x+r, and that $\langle S_x \rangle = 0$ for all x. The assumptions are perfectly valid for the Edwards-Anderson model with periodic boundary conditions.
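A minimal sketch of the estimator behind $C_2(r)$, on fake ±1 data rather than the notebook's lattices, averages the product of spins separated by r along one axis (for ±1 spins with zero mean, the normalisation in the denominator is 1):

```python
import numpy as np

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(1000, 48))  # 1000 samples of a 48-site row

def c2(spins, r):
    # <S(x) S(x+r)>_x with periodic boundary conditions via np.roll
    return np.mean(spins * np.roll(spins, -r, axis=1))

print(c2(spins, 0))  # exactly 1 for +/-1 spins
print(c2(spins, 1))  # close to 0 for uncorrelated spins
```

The notebook computes the same quantity with `pearsonr` instead, which additionally subtracts sample means and divides by sample variances.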
On that note, we'll import the SciPy function `pearsonr`, which helps us calculate correlations between data without getting our hands dirty:

```
from scipy.stats import pearsonr
```

We'll sample Jnn's from the appropriate distribution for each p:

```
Jnn = []
p = 0.5
while (p < 0.9001):
    Jnn.append (np.random.choice ([-1,1], (100,48,48,3), p=[1-p,p]))
    p += 0.02
Jnn = np.concatenate (Jnn)
Jnn = cp.asarray (Jnn)
```

We've made changes to the previous generating function to run this:

```
def generate_train_data (train_len, lat_len=48, m=10):
    shape = (lat_len, lat_len)
    n_ens = train_len//m
    spin = cp.random.choice ([1,-1], (train_len,)+shape).astype(np.int8)
    seed = cp.random.randint (-10000,10000, size=(train_len,)+shape,
                              dtype=np.int32)
    #Jnn = cp.random.choice ([-1,1], (n_ens,)+shape+(3,),
    #p=[1-prob, prob]).astype(np.float32)
    energy = cp.zeros ((n_ens,m), np.float32)
    tpb = (1,8,8)
    bpg = (train_len, lat_len//8, lat_len//8)
    perm = cp.arange (0, train_len, dtype=np.int32)
    temp = 0.5
    T = cp.full ((n_ens,m), 0.5, np.float32)
    for _ in range (3000):
        update_red [bpg,tpb] (spin, seed, T, Jnn, perm)
        update_blue [bpg,tpb] (spin, seed, T, Jnn, perm)
        update_green [bpg,tpb] (spin, seed, T, Jnn, perm)
    calc_energy [math.ceil(train_len/64),64] (spin, energy, Jnn)
    spin = 0.5*cp.asnumpy (spin)
    #return cp.asnumpy (energy)
    return spin[...,np.newaxis]  # Additional axis required for conv2d layer

lattice = generate_train_data (Jnn.shape[0]*10)
np.save (folder_path+"Training Data/TriEAlattice.npy", lattice)

TriEA = []
for i in range (21):
    TriEA.append (lattice[1000*i:1000*(i+1)])

corr = np.zeros ((21,3,20,24,24), np.float32)
for n in range (21):
    for r1 in range (24):
        for r2 in range (24):
            x1 = np.ravel (TriEA[n][:,r1,r2,:])
            for r in range (1,21):
                x2 = np.ravel (TriEA[n][:,r1+r,r2,:])
                val, _ = pearsonr (x1,x2)
                corr [n,0,r-1,r1,r2] = (val)
                x2 = np.ravel(TriEA[n][:,r1+int(r-(r%2)*(-1**r2))//2, r2+r,:])
                val, _ = pearsonr (x1,x2)
                corr [n,1,r-1,r1,r2] = (val)
                x2 = np.ravel(TriEA[n][:,r1-int(r+(r%2)*(-1**r2))//2, r2+r,:])
                val, _ = pearsonr (x1,x2)
                corr [n,2,r-1,r1,r2] = (val)
corr = np.mean (corr, axis=(1,3,4))

corr2 = np.zeros ((3,20,24,24), np.float32)
for r1 in range (24):
    for r2 in range (24):
        x1 = np.ravel (TriEA2[:,r1,r2,:])
        for r in range (1,21):
            x2 = np.ravel (TriEA2[:,r1+r,r2,:])
            val, _ = pearsonr (x1,x2)
            corr2 [0,r-1,r1,r2] = (val)
            x2 = np.ravel(TriEA2[:,r1+int(r-(r%2)*(-1**r2))//2, r2+r,:])
            val, _ = pearsonr (x1,x2)
            corr2 [1,r-1,r1,r2] = (val)
            x2 = np.ravel(TriEA2[:,r1-int(r+(r%2)*(-1**r2))//2, r2+r,:])
            val, _ = pearsonr (x1,x2)
            corr2 [2,r-1,r1,r2] = (val)
corr2 = np.mean (corr2, axis=(0,2,3))

corr3 = np.zeros ((3,20,24,24), np.float32)
for r1 in range (24):
    for r2 in range (24):
        x1 = np.ravel (TriEA3[:,r1,r2,:])
        for r in range (1,21):
            x2 = np.ravel (TriEA3[:,r1+r,r2,:])
            val, _ = pearsonr (x1,x2)
            corr3 [0,r-1,r1,r2] = (val)
            x2 = np.ravel(TriEA3[:,r1+int(r-(r%2)*(-1**r2))//2, r2+r,:])
            val, _ = pearsonr (x1,x2)
            corr3 [1,r-1,r1,r2] = (val)
            x2 = np.ravel(TriEA3[:,r1-int(r+(r%2)*(-1**r2))//2, r2+r,:])
            val, _ = pearsonr (x1,x2)
            corr3 [2,r-1,r1,r2] = (val)
corr3 = np.mean (corr3, axis=(0,2,3))

output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()')

_, ax = plt.subplots()
ax.plot (range(1,21), list(corr1), label='p = 0.50')
ax.plot (range(1,21), list(corr2), label='p = 0.52')
ax.plot (range(1,21), list(corr3), label='p = 0.54')
ax.legend (loc='upper right')
ax.grid ()
plt.show ()

_, ax = plt.subplots()
ax.plot (range(1,21), list(corr1), label='p = 0.56')
ax.plot (range(1,21), list(corr2), label='p = 0.58')
ax.plot (range(1,21), list(corr3), label='p = 0.60')
ax.legend (loc='upper right')
ax.grid ()
plt.show ()

_, ax = plt.subplots()
ax.plot (range(1,21), list(corr[3]), label='p = 0.56')
ax.plot (range(1,21), list(corr[5]), label='p = 0.60')
ax.plot (range(1,21), list(corr[7]), label='p = 0.64')
ax.plot (range(1,21), list(corr[9]), label='p = 0.68')
ax.legend (loc='upper right')
ax.grid ()
plt.show ()

param = (corr[:,0])
probs = np.arange (0.5, 0.91, 0.02)
plt.plot (probs, param)
plt.xlabel ("p")
plt.ylabel ("")
plt.grid ()

mags = np.mean(t_lattice, axis=(2,3,4))
mags = np.abs (mags)
mags = np.mean (mags, axis=1)
mags
plt.plot (np.arange (0.4, 0.61, 0.02, float), mags)
```

# Bimodal EA lattice temperature evolution

```
import math
import numpy as np
import cupy as cp
import tensorflow.keras as tfk
import tensorflow as tf
import matplotlib.pyplot as plt
from numba import cuda
```

## Fixed couplings

Let's fix our couplings and broadcast the same to the entire ensemble:

```
Jnn = cp.random.choice ([-1,1], (48,48,2))
Jnn = cp.broadcast_to (Jnn, (10,48,48,2))

@cuda.jit
def update_sq (spin, seed, T, J_nn, is_black, perm):
    m = T.shape[1]
    z, x, y = cuda.grid (3)
    z = perm[z]
    n = int(math.floor (z / m))
    l = z % m
    p, q = x % 3, y % 2

    def random_uniform ():
        seed[z, x, y] = np.int32((seed[z, x, y]*1664525 + 1013904223) % 2**31)
        return seed[z, x, y] / (2**31)

    def bvc (x):
        if x == spin.shape[1]:
            x = 0
        return x

    def sum_nn():
        # This adds spins of six neighbours instead of 4, subject to
        # many constraints characteristic of triangular lattices
        value = 0.
        value += J_nn[n,x,y,0]*spin[z, bvc(x+1), y]
        value += J_nn[n,x,y,1]*spin[z, x, bvc(y+1)]
        value += J_nn[n,x-1,y,0]*spin[z, x-1, y]
        value += J_nn[n,x,y-1,1]*spin[z, x-1, y]
        return value

    def calc():
        probs = random_uniform()
        if (probs < math.exp(2*spin[z, x, y]*sum_nn()/T[n,l])):
            spin[z, x, y] *= np.int8(-1)

    if is_black==True:
        if (p == 0 and q == 0) or (p == 1 and q == 1):
            calc()
    else:
        if (p == 0 and q == 1) or (p == 1 and q == 0):
            calc()

@cuda.jit
def calc_energy_sq (spin, energy, J_nn):
    z = cuda.grid (1)
    n = int(math.floor (z / energy.shape[1]))
    l = z % energy.shape[1]

    def bvc (x):
        if x == spin.shape[1]:
            x = 0
        return x

    def sum_nn_part(x, y, z):
        # This adds spins of six neighbours instead of 4, subject to
        # many constraints characteristic of triangular lattices
        value = 0.
        value += J_nn[n,x,y,0]*spin[z, bvc(x+1), y]
        value += J_nn[n,x,y,1]*spin[z, x, bvc(y+1)]
        return value

    ener = 0
    if z < spin.shape[0]:
        for x in range (spin.shape[1]):
            for y in range (spin.shape[2]):
                ener += spin[z,x,y]*sum_nn_part(x,y,z)
        energy[n,l] = ener

@cuda.jit
def parallel_temper2 (T, seed, energy, perm):
    z = cuda.grid(1)
    m = T.shape[1]//2
    n = int(math.floor (z/m))
    l = z % m  # Takes values between 0 and m//2
    if z < seed.shape[0]//2:
        rand_n = 0 if np.float32(seed[n, 0, 0]/2**31) < 0.5 else 1
        ptr = 2*l + rand_n
        z = 2*z + rand_n
        if ptr < energy.shape[0]-1:
            val0 = perm[z]
            val1 = perm[z+1]
            e0 = energy[n,ptr]
            e1 = energy[n,ptr+1]
            rand_unif = np.float32(seed[z, 1, 0] / 2**31)
            arg = (e0 - e1)*((1./T[n,ptr]) - (1./T[n,ptr+1]))
            if (arg < 0):
                if rand_unif < math.exp(arg):
                    perm[z] = val1
                    perm[z+1] = val0
            else:
                perm[z] = val1
                perm[z+1] = val0

def generate_train_data (train_len, prob, lat_len=48, m=100):
    shape = (lat_len, lat_len)
    n_ens = train_len//m
    spin = cp.random.choice ([1,-1], (train_len,)+shape).astype(np.int8)
    seed = cp.random.randint (-10000,10000, size=(train_len,)+shape,
                              dtype=np.int32)
    #Jnn = cp.random.choice ([-1,1], (n_ens,)+shape+(3,),
    #p=[1-prob, prob]).astype(np.float32)
    energy = cp.zeros ((n_ens,m), np.float32)
    tpb = (1,8,8)
    bpg = (train_len, lat_len//8, lat_len//8)
    perm = cp.arange (0, train_len, dtype=np.int32)
    T = cp.linspace (0.5, 4.0, m, dtype=np.float32)
    T = cp.broadcast_to (T, (n_ens,m))
    for _ in range (500):
        for _ in range (4):
            update_sq [bpg,tpb] (spin, seed, T, Jnn, True, perm)
            update_sq [bpg,tpb] (spin, seed, T, Jnn, True, perm)
        calc_energy_sq [math.ceil(train_len/64),64] (spin, energy, Jnn)
        parallel_temper2 [math.ceil(train_len/128),64] (T, seed, energy, perm)
    T = cp.full ((n_ens,m), 0.5, np.float32)
    for _ in range (2000):
        update_sq [bpg,tpb] (spin, seed, T, Jnn, True, perm)
        update_sq [bpg,tpb] (spin, seed, T, Jnn, True, perm)
    calc_energy_sq [math.ceil(train_len/64),64] (spin, energy, Jnn)
    spin = 0.5*cp.asnumpy (spin)
    return cp.asnumpy (energy)
energy = generate_train_data (1000, 0.5)
np.sort(energy[0])

val = cp.asarray (True)
val == False
```
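The checkerboard kernels above implement single-site Metropolis updates on the GPU. A pure-NumPy sketch of the same update (serial rather than checkerboard-parallel, using the textbook exp(-ΔE/T) acceptance form with H = -Σ J S S, which differs from the kernel's sign convention; all names here are hypothetical):

```python
import numpy as np

def metropolis_sweep(spin, J_h, J_v, T, rng):
    """One random-site Metropolis sweep of a square-lattice EA model.

    spin: (L, L) array of +/-1; J_h, J_v: (L, L) horizontal/vertical bonds.
    Periodic boundaries come from modular and negative indexing.
    """
    L = spin.shape[0]
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        # Local field from the four neighbours and their bonds
        h = (J_h[x, y] * spin[(x + 1) % L, y] + J_h[x - 1, y] * spin[x - 1, y]
             + J_v[x, y] * spin[x, (y + 1) % L] + J_v[x, y - 1] * spin[x, y - 1])
        dE = 2.0 * spin[x, y] * h  # energy change of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spin[x, y] *= -1
    return spin

rng = np.random.default_rng(2)
L = 16
spin = rng.choice([-1, 1], size=(L, L))
J_h = rng.choice([-1, 1], size=(L, L))
J_v = rng.choice([-1, 1], size=(L, L))
spin = metropolis_sweep(spin, J_h, J_v, T=0.5, rng=rng)
```

The CUDA version gains its speed by updating all sites of one sublattice at once; the sketch trades that for an explicit, easy-to-audit accept/reject loop.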
# Linear Regression --- From Scratch

Powerful deep learning frameworks eliminate much repetitive work, but if you rely too heavily on the convenient abstractions they provide, you may never easily understand how deep learning actually works. So our first tutorial shows how to train a linear regression model using only `ndarray` and `autograd`.

## Linear Regression

Given a set of data points `X` and corresponding target values `y`, the goal of a linear model is to find a line, defined by a weight vector `w` and an offset `b`, that best approximates each sample `X[i]` and `y[i]`. In mathematical notation, we will learn `w` and `b` to predict

$$\boldsymbol{\hat{y}} = X \boldsymbol{w} + b$$

and minimize the squared error over all data points:

$$\sum_{i=1}^n (\hat{y}_i-y_i)^2.$$

You may find it strange that we use good old linear regression as an example for deep learning. In fact, the linear model is the simplest, yet perhaps the most useful, neural network. A neural network is a collection of nodes (neurons) and directed edges. We usually group nodes into layers; each layer uses the nodes of the layer below as input and passes its output to the layers above. To compute a node's value, we take a weighted sum of the input node values and then apply an activation function. For linear regression, this is a two-layer neural network: the first layer (orange dots in the figure below) is the input, with one node per dimension of the input data, and the second layer is a single output node (green dot), which uses the identity function ($f(x)=x$) as its activation function.

![](../img/simple-net-linear.png)

## Creating the Dataset

Here we keep things simple by using a synthetic dataset, so that we know what the true model looks like. Specifically, we generate the data as

`y[i] = 2 * X[i][0] - 3.4 * X[i][1] + 4.2 + noise`

where the noise follows a normal distribution with mean 0 and standard deviation 0.01.

```
from mxnet import ndarray as nd
from mxnet import autograd

num_inputs = 2
num_examples = 1000

true_w = [2, -3.4]
true_b = 4.2

X = nd.random_normal(shape=(num_examples, num_inputs))
y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b
y += .01 * nd.random_normal(shape=y.shape)
```

Note that each row of `X` is a vector of length 2, while each row of `y` is a vector of length 1 (a scalar).

```
print(X[0], y[0])
```

## Reading the Data

When we start training a neural network, we need to read blocks of data repeatedly. Here we define a function that returns `batch_size` random samples and their corresponding targets on each call. We build it as an iterator using Python's `yield`.

```
import random
batch_size = 10

def data_iter():
    # generate random indices
    idx = list(range(num_examples))
    random.shuffle(idx)
    for i in range(0, num_examples, batch_size):
        j = nd.array(idx[i:min(i+batch_size,num_examples)])
        yield nd.take(X, j), nd.take(y, j)
```

The following code reads the first random batch:

```
for data, label in data_iter():
    print(data, label)
    break
```

## Initializing Model Parameters

Next we randomly initialize the model parameters:

```
w = nd.random_normal(shape=(num_inputs, 1))
b = nd.zeros((1,))
params = [w, b]
```

During training we will need the gradients of these parameters to update their values, so we create their gradient buffers:

```
for param in params:
    param.attach_grad()
```

## Defining the Model

The linear model simply multiplies the input by the weights and adds the offset:

```
def net(X):
    return nd.dot(X, w) + b
```

## Loss Function

We use the common squared error to measure the gap between the predicted and the true targets:

```
def square_loss(yhat, y):
    # Note: we reshape y into yhat's shape to avoid automatic broadcasting
    return (yhat - y.reshape(yhat.shape)) ** 2
```

## Optimization
Although linear regression has a closed-form solution, most models do not. So here we solve it by stochastic gradient descent. At each step, we move the model parameters a certain distance in the direction opposite to the gradient; this distance is usually called the learning rate. (We will keep using this function later, so we save it in [utils.py](../utils.py).)

```
def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad
```

## Training

Now we can start training. Training usually requires several passes over the data; in each pass we repeatedly read a fixed number of random data points, compute the gradient, and update the model parameters.

```
epochs = 5
learning_rate = .001
for e in range(epochs):
    total_loss = 0
    for data, label in data_iter():
        with autograd.record():
            output = net(data)
            loss = square_loss(output, label)
        loss.backward()
        SGD(params, learning_rate)
        total_loss += nd.sum(loss).asscalar()
    print("Epoch %d, average loss: %f" % (e, total_loss/num_examples))
```

After training we can compare the learned parameters with the true parameters:

```
true_w, w
true_b, b
```

## Conclusion

We have now seen that, using only NDArray and autograd, we can easily implement a model.

## Exercises

Try different learning rates and observe how quickly the error decreases (the rate of convergence).

**Comments and discussion are welcome** [here](https://discuss.gluon.ai/t/topic/743)
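As the optimization section above notes, linear regression has a closed-form solution. A quick NumPy sketch (outside MXNet, reusing the same synthetic-data recipe) recovers `w` and `b` by least squares on a design matrix augmented with a column of ones:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.4])
true_b = 4.2

# Same synthetic data as the tutorial: y = X w + b + small noise
X = rng.normal(size=(1000, 2))
y = X @ true_w + true_b + 0.01 * rng.normal(size=1000)

# Augment X with a column of ones so the bias is learned as one more weight
Xa = np.hstack([X, np.ones((1000, 1))])
theta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
print(theta)  # approximately [2, -3.4, 4.2]
```

This is a useful sanity check: with enough data and small noise, SGD should converge to essentially the same parameters the normal equations give directly.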