<h2 align="center">INF575 - Fuzzy Logic</h2>
<h1 align="center">Segmentation of HER2 Overexpression in Histopathology Images with Fuzzy Decision Tree</h1>
<center> <img src="https://rochepacientes.es/content/dam/roche-pacientes-2/es/assets/images/que-es-her2.jpg" width="60%"/> </center>
<h2 align="center">Classic Decision Tree</h2>
<center> <i> Sebastián Bórquez G. - <a href="mailto://sebastian.borquez.g@gmail.com">sebastian.borquez.g@gmail.com</a> - DI UTFSM - August 2020.</i> </center>

```
%cd ..
import cv2
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns; sns.set(palette="muted")
from IPython.display import display, HTML

from load_features import *

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier

selected_features = [
    'mean_rawblue', 'mean_dab', 'mean_intentsity', 'mean_rawgreen', 'mean_eosin',
    'mean_vertical', 'mean_rawbred', 'homogeneity_vertical', 'mean_hematoxylin',
    'sobel_magnitud'
]

train_csv_files = [
    "./data/1+_2.csv", "./data/1+_20.csv",
    "./data/2+_1.csv", "./data/2+_8.csv",
    "./data/3+_16.csv", "./data/3+_15.csv",
]
train_features = merge_features([load_features(csv_file, selected_features=selected_features) for csv_file in train_csv_files])
(feature_names, target_col), (train_X, train_y) = split_features_target(train_features)

test_csv_files = [
    "./data/1+_25.csv",
    "./data/2+_9.csv",
    "./data/3+_15.csv",
]
test_features = merge_features([load_features(csv_file, selected_features=selected_features) for csv_file in test_csv_files])
test_X, test_y = split_features_target(test_features)

# Parameters
class_weight = {0: 1., 1: 20.}
min_samples_leaf = 15
max_depth = 5
```

## Train

```
# Train the decision tree.
clf = DecisionTreeClassifier(class_weight=class_weight,
                             min_samples_leaf=min_samples_leaf).fit(train_X, train_y)

# Train the random forest.
rf = RandomForestClassifier(n_estimators=30, class_weight=class_weight,
                            min_samples_leaf=min_samples_leaf).fit(train_X, train_y)

train_images = train_features.image.unique()
for train_image in train_images:
    image_features = train_features[train_features.image == train_image]
    X_i, y_i = split_features_target(image_features, True)
    predicted = clf.predict_proba(X_i)[:, 1]
    show_images_and_masks(train_image, image_features, predicted)

index = np.argsort(clf.feature_importances_)
plt.figure(figsize=(6, 8))
plt.title('DT - Feature Importance')
plt.barh(np.arange(len(clf.feature_importances_)), clf.feature_importances_[index],
         tick_label=np.array(feature_names)[index])

train_images = train_features.image.unique()
for train_image in train_images:
    image_features = train_features[train_features.image == train_image]
    X_i, y_i = split_features_target(image_features, True)
    predicted = rf.predict_proba(X_i)[:, 1]
    show_images_and_masks(train_image, image_features, predicted)

index = np.argsort(rf.feature_importances_)
plt.figure(figsize=(6, 8))
plt.title('RF - Feature Importance')
plt.barh(np.arange(len(rf.feature_importances_)), rf.feature_importances_[index],
         tick_label=np.array(feature_names)[index])
```

## Test

```
from time import time

test_images = test_features.image.unique()
true_targets = []
for test_image in test_images:
    image_features = test_features[test_features.image == test_image]
    _, test_y_i = split_features_target(image_features, True)
    true_targets.append(test_y_i)
true_targets = np.hstack(true_targets)

test_images = test_features.image.unique()
dt_predicted = []
start = time()
for test_image in test_images:
    image_features = test_features[test_features.image == test_image]
    test_X_i, test_y_i = split_features_target(image_features, True)
    #dt_predicted_i = clf.predict_proba(test_X_i)[:,1]
    dt_predicted_i = clf.predict(test_X_i)
    dt_predicted.append(dt_predicted_i)
    #show_images_and_masks(test_image, image_features, dt_predicted_i)
end = time()
dt_predicted = np.hstack(dt_predicted)
print(end - start)

test_images = test_features.image.unique()
rf_predicted = []
start = time()
for test_image in test_images:
    image_features = test_features[test_features.image == test_image]
    test_X_i, test_y_i = split_features_target(image_features, True)
    #predicted = rf.predict_proba(test_X_i)[:,1]
    rf_predicted_i = rf.predict(test_X_i)
    rf_predicted.append(rf_predicted_i)
    #show_images_and_masks(test_image, image_features, rf_predicted_i)
end = time()
rf_predicted = np.hstack(rf_predicted)
print(end - start)

# Rough per-sample timing estimates (seconds / 3e6 superpixels).
3 * 14.635354 * 60 / 3e6
1.215993881225586 / 3e6
21.681201219558716 / 3e6

results = pd.DataFrame({
    "target": true_targets,
    "decision tree": dt_predicted.astype(int),
    "random forest": rf_predicted.astype(int)
})
results.to_csv("crisp_results.csv", index=False)

from sklearn.metrics import classification_report
print("Decision Tree")
print(classification_report(results["target"], results["decision tree"],
                            target_names=["non-overexpression", "overexpression"]))
print("Random Forest")
print(classification_report(results["target"], results["random forest"],
                            target_names=["non-overexpression", "overexpression"]))
```
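The `class_weight = {0: 1., 1: 20.}` parameter above is what makes the trees pay attention to the rare overexpression class. As a minimal sketch (synthetic data, not the notebook's features), heavily weighting the positive class shifts a `DecisionTreeClassifier`'s split thresholds toward recall on that class:

```python
# Sketch: how class_weight={0: 1., 1: 20.} shifts a decision tree toward
# the rare positive class. Synthetic 1-D data: 95% negatives, 5% positives.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(0.0, 1.0, 950),
                    rng.normal(1.5, 1.0, 50)]).reshape(-1, 1)
y = np.array([0] * 950 + [1] * 50)

plain = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
weighted = DecisionTreeClassifier(max_depth=2, class_weight={0: 1., 1: 20.},
                                  random_state=0).fit(X, y)

# The weighted tree recovers more of the rare positives (higher recall),
# at the cost of more false positives on the majority class.
recall_plain = (plain.predict(X)[y == 1] == 1).mean()
recall_weighted = (weighted.predict(X)[y == 1] == 1).mean()
print(recall_plain, recall_weighted)
```

The same trade-off shows up in the classification reports at the end of the Test section: the weighted models trade precision on the majority class for recall on overexpressed regions.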
---
```
from sys import modules

IN_COLAB = 'google.colab' in modules
if IN_COLAB:
    !pip install -q ir_axioms[examples] python-terrier

# Start/initialize PyTerrier.
from pyterrier import started, init

if not started():
    init(tqdm="auto", no_download=True)

from pyterrier.datasets import get_dataset, Dataset

# Load dataset.
dataset_name = "msmarco-passage"
dataset: Dataset = get_dataset(f"irds:{dataset_name}")
dataset_train: Dataset = get_dataset(f"irds:{dataset_name}/trec-dl-2019/judged")
dataset_test: Dataset = get_dataset(f"irds:{dataset_name}/trec-dl-2020/judged")

from pathlib import Path

cache_dir = Path("cache/")
index_dir = cache_dir / "indices" / dataset_name.split("/")[0]

from pyterrier.index import IterDictIndexer

if not index_dir.exists():
    indexer = IterDictIndexer(str(index_dir.absolute()))
    indexer.index(dataset.get_corpus_iter(), fields=["text"])

from pyterrier.batchretrieve import BatchRetrieve

# BM25 baseline retrieval.
bm25 = BatchRetrieve(str(index_dir.absolute()), wmodel="BM25")

from ir_axioms.axiom import (
    ArgUC, QTArg, QTPArg, aSL, PROX1, PROX2, PROX3, PROX4, PROX5, TFC1, TFC3,
    RS_TF, RS_TF_IDF, RS_BM25, RS_PL2, RS_QL, AND, LEN_AND, M_AND, LEN_M_AND,
    DIV, LEN_DIV, M_TDC, LEN_M_TDC, STMC1, STMC1_f, STMC2, STMC2_f, LNC1,
    TF_LNC, LB1, REG, ANTI_REG, REG_f, ANTI_REG_f, ASPECT_REG, ASPECT_REG_f,
    ORIG
)

axioms = [
    ~ArgUC(), ~QTArg(), ~QTPArg(), ~aSL(), ~LNC1(), ~TF_LNC(), ~LB1(),
    ~PROX1(), ~PROX2(), ~PROX3(), ~PROX4(), ~PROX5(),
    ~REG(), ~REG_f(), ~ANTI_REG(), ~ANTI_REG_f(), ~ASPECT_REG(), ~ASPECT_REG_f(),
    ~AND(), ~LEN_AND(), ~M_AND(), ~LEN_M_AND(), ~DIV(), ~LEN_DIV(),
    ~RS_TF(), ~RS_TF_IDF(), ~RS_BM25(), ~RS_PL2(), ~RS_QL(),
    ~TFC1(), ~TFC3(), ~M_TDC(), ~LEN_M_TDC(),
    ~STMC1(), ~STMC1_f(), ~STMC2(), ~STMC2_f(),
    ORIG()
]

from sklearn.ensemble import RandomForestClassifier
from ir_axioms.modules.pivot import MiddlePivotSelection
from ir_axioms.backend.pyterrier.estimator import EstimatorKwikSortReranker

random_forest = RandomForestClassifier(
    max_depth=3,
)
kwiksort_random_forest = bm25 % 20 >> EstimatorKwikSortReranker(
    axioms=axioms,
    estimator=random_forest,
    index=index_dir,
    dataset=dataset_name,
    pivot_selection=MiddlePivotSelection(),
    cache_dir=cache_dir,
    verbose=True,
)
kwiksort_random_forest.fit(dataset_train.get_topics(), dataset_train.get_qrels())

from pyterrier.pipelines import Experiment
from ir_measures import nDCG, MAP, RR

experiment = Experiment(
    [bm25, kwiksort_random_forest ^ bm25],
    dataset_test.get_topics(),
    dataset_test.get_qrels(),
    [nDCG @ 10, RR, MAP],
    ["BM25", "KwikSort Random Forest"],
    verbose=True,
)
experiment.sort_values(by="nDCG@10", ascending=False, inplace=True)
experiment

random_forest.feature_importances_
```
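`EstimatorKwikSortReranker` reorders the top-20 BM25 results by quicksorting them with a learned pairwise preference (the random forest, trained on the axioms' pairwise votes). Stripped of PyTerrier and ir_axioms, the KwikSort idea can be sketched with a hand-written preference function standing in for the trained estimator (`toy_prefer` and the example documents below are illustrative assumptions, not part of the library):

```python
# Toy KwikSort sketch: quicksort a candidate list using only pairwise
# preferences, taking the middle element as pivot (cf. MiddlePivotSelection).
def kwiksort(docs, prefer):
    """Order docs so that preferred documents come first.

    prefer(a, b) returns +1 if a should rank above b, -1 if below, 0 if tied.
    """
    if len(docs) <= 1:
        return list(docs)
    mid = len(docs) // 2
    pivot = docs[mid]
    rest = [d for i, d in enumerate(docs) if i != mid]
    above = [d for d in rest if prefer(d, pivot) > 0]
    below = [d for d in rest if prefer(d, pivot) <= 0]
    return kwiksort(above, prefer) + [pivot] + kwiksort(below, prefer)

# Stand-in preference: more query-term overlap wins. A real reranker would
# ask the trained random forest over the axioms' preference features instead.
def toy_prefer(a, b, query=frozenset({"retrieval", "axioms"})):
    score = lambda d: len(query & set(d.split()))
    return (score(a) > score(b)) - (score(a) < score(b))

docs = ["other stuff", "axioms only", "retrieval with axioms"]
print(kwiksort(docs, toy_prefer))
# → ['retrieval with axioms', 'axioms only', 'other stuff']
```

KwikSort needs only O(n log n) preference queries on average, which is why it scales to reranking pools where comparing every pair would be too expensive.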
---
# DAT210x - Programming with Python for DS
## Module5 - Lab5

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

matplotlib.style.use('ggplot') # Look Pretty
```

### A Convenience Function

```
def plotDecisionBoundary(model, X, y):
    fig = plt.figure()
    ax = fig.add_subplot(111)

    padding = 0.6
    resolution = 0.0025
    colors = ['royalblue', 'forestgreen', 'ghostwhite']

    # Calculate the boundaries
    x_min, x_max = X[:, 0].min(), X[:, 0].max()
    y_min, y_max = X[:, 1].min(), X[:, 1].max()
    x_range = x_max - x_min
    y_range = y_max - y_min
    x_min -= x_range * padding
    y_min -= y_range * padding
    x_max += x_range * padding
    y_max += y_range * padding

    # Create a 2D Grid Matrix. The values stored in the matrix
    # are the predictions of the class at said location
    xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
                         np.arange(y_min, y_max, resolution))

    # What class does the classifier say?
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    # Plot the contour map
    cs = plt.contourf(xx, yy, Z, cmap=plt.cm.terrain)

    # Plot the original test points as well...
    for label in range(len(np.unique(y))):
        indices = np.where(y == label)
        plt.scatter(X[indices, 0], X[indices, 1], c=colors[label],
                    label=str(label), alpha=0.8)

    p = model.get_params()
    plt.axis('tight')
    plt.title('K = ' + str(p['n_neighbors']))
```

### The Assignment

Load up the dataset into a variable called `X`. Check `.head()` and `.dtypes` to make sure you're loading your data properly -- don't fail on the 1st step!

```
# .. your code here ..
X = pd.read_csv('C:/Users/mgavrilova/Desktop/DAT210x/Module5/Datasets/wheat.data')
print(X.head(5))
X.dtypes
```

Copy the `wheat_type` series slice out of `X`, and into a series called `y`. Then drop the original `wheat_type` column from `X`:

```
# .. your code here ..
y = X.wheat_type
X = X.drop(columns=['id', 'wheat_type'])
print(X.head(3))
```

Do a quick, "ordinal" conversion of `y`. In actuality our classification isn't ordinal, but just as an experiment...

```
# .. your code here ..
y = y.astype('category').cat.codes
```

Do some basic nan munging. Fill each row's nans with the mean of the feature:

```
# .. your code here ..
X = X.fillna(X.mean())
```

Split `X` into training and testing data sets using `train_test_split()`. Use `0.33` test size, and use `random_state=1`. This is important so that your answers are verifiable. In the real world, you wouldn't specify a random_state:

```
# .. your code here ..
# sklearn.cross_validation was removed in scikit-learn 0.20;
# train_test_split now lives in sklearn.model_selection.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
```

Create an instance of SKLearn's Normalizer class and then train it using its .fit() method against your _training_ data. The reason you only fit against your training data is that in a real-world situation, you'll only have your training data to train with! In this lab setting, you have both train+test data; but in the wild, you'll only have your training data, and then unlabeled data you want to apply your models to.

```
# .. your code here ..
from sklearn.preprocessing import Normalizer

norm = Normalizer()
norm.fit(X_train)
```

With your trained pre-processor, transform both your training AND testing data. Any testing data has to be transformed with the preprocessor that has been fit against your training data, so that it exists in the same feature-space as the original data used to train your models.

```
# .. your code here ..
X_train_norm = norm.transform(X_train)
X_test_norm = norm.transform(X_test)
```

Just like your preprocessing transformation, create a PCA transformation as well. Fit it against your training data, and then project your training and testing features into PCA space using the PCA model's `.transform()` method. This has to be done because the only way to visualize the decision boundary in 2D is if your KNN algo runs in 2D as well:

```
# .. your code here ..
from sklearn.decomposition import PCA

pca = PCA(n_components=2, svd_solver='randomized')
pca.fit(X_train_norm)
pca_train = pca.transform(X_train_norm)
pca_test = pca.transform(X_test_norm)
```

Create and train a KNeighborsClassifier. Start with `K=9` neighbors. Be sure to train your classifier against the pre-processed, PCA-transformed training data above! You do not, of course, need to transform your labels.

```
# .. your code here ..
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=9)  # the assignment asks for K=9
knn.fit(pca_train, y_train)

# I hope your KNeighbors classifier model from earlier was named 'knn'
# If not, adjust the following line:
plotDecisionBoundary(knn, pca_train, y_train)
```

Display the accuracy score of your test data/labels, computed by your KNeighbors model. You do NOT have to run `.predict` before calling `.score`, since `.score` will take care of running your predictions for you automatically.

```
# .. your code here ..
accuracy_score = knn.score(pca_test, y_test)
accuracy_score
```

### Bonus

Instead of the ordinal conversion, try and get this assignment working with a proper Pandas get_dummies for feature encoding. You might have to update some of the `plotDecisionBoundary()` code.

```
plt.show()
```
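Under the hood, `KNeighborsClassifier` is just "find the k closest training points, take a majority vote". A NumPy-only sketch (synthetic 2-D blobs, Euclidean distance, no tie-breaking subtleties) makes the lab's last step less magical:

```python
# NumPy-only sketch of what KNeighborsClassifier.predict does:
# Euclidean distances, k nearest labels, majority vote.
import numpy as np

def knn_predict(X_train, y_train, X_test, k=9):
    preds = []
    for x in X_test:
        # Distance from x to every training point.
        d = np.linalg.norm(X_train - x, axis=1)
        # Labels of the k nearest neighbours; majority vote.
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Two well-separated 2-D blobs as toy training data.
rng = np.random.RandomState(1)
X_train = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.1, 0.0], [2.1, 1.9]])
print(knn_predict(X_train, y_train, X_test, k=9))  # one point per blob
```

This also shows why the scaling step above matters: the vote is decided purely by distances, so features on large scales would dominate the neighbourhood unless you normalize first.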
---
``` # -*- coding: utf-8 -*- """ EVCのためのEV-GMMを構築します. そして, 適応学習する. 詳細 : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580aabf534c4dbb8bc.pdf This program make EV-GMM for EVC. Then, it make adaptation learning. Check detail : https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580abf534c4dbb8bc.pdf """ from __future__ import division, print_function import os from shutil import rmtree import argparse import glob import pickle import time import numpy as np from numpy.linalg import norm from sklearn.decomposition import PCA from sklearn.mixture import GMM # sklearn 0.20.0から使えない from sklearn.preprocessing import StandardScaler import scipy.signal import scipy.sparse %matplotlib inline import matplotlib.pyplot as plt import IPython from IPython.display import Audio import soundfile as sf import wave import pyworld as pw import librosa.display from dtw import dtw import warnings warnings.filterwarnings('ignore') """ Parameters __Mixtured : GMM混合数 __versions : 実験セット __convert_source : 変換元話者のパス __convert_target : 変換先話者のパス """ # parameters __Mixtured = 40 __versions = 'pre-stored0.1.1' __convert_source = 'input/EJM10/V01/T01/TIMIT/000/*.wav' __convert_target = 'adaptation/EJM04/V01/T01/ATR503/A/*.wav' # settings __same_path = './utterance/' + __versions + '/' __output_path = __same_path + 'output/EJM04/' # EJF01, EJF07, EJM04, EJM05 Mixtured = __Mixtured pre_stored_pickle = __same_path + __versions + '.pickle' pre_stored_source_list = __same_path + 'pre-source/**/V01/T01/**/*.wav' pre_stored_list = __same_path + "pre/**/V01/T01/**/*.wav" #pre_stored_target_list = "" (not yet) pre_stored_gmm_init_pickle = __same_path + __versions + '_init-gmm.pickle' pre_stored_sv_npy = __same_path + __versions + '_sv.npy' save_for_evgmm_covarXX = __output_path + __versions + '_covarXX.npy' save_for_evgmm_covarYX = __output_path + __versions + '_covarYX.npy' save_for_evgmm_fitted_source = __output_path + __versions + '_fitted_source.npy' save_for_evgmm_fitted_target = 
__output_path + __versions + '_fitted_target.npy' save_for_evgmm_weights = __output_path + __versions + '_weights.npy' save_for_evgmm_source_means = __output_path + __versions + '_source_means.npy' for_convert_source = __same_path + __convert_source for_convert_target = __same_path + __convert_target converted_voice_npy = __output_path + 'sp_converted_' + __versions converted_voice_wav = __output_path + 'sp_converted_' + __versions mfcc_save_fig_png = __output_path + 'mfcc3dim_' + __versions f0_save_fig_png = __output_path + 'f0_converted' + __versions converted_voice_with_f0_wav = __output_path + 'sp_f0_converted' + __versions EPSILON = 1e-8 class MFCC: """ MFCC() : メル周波数ケプストラム係数(MFCC)を求めたり、MFCCからスペクトルに変換したりするクラス. 動的特徴量(delta)が実装途中. ref : http://aidiary.hatenablog.com/entry/20120225/1330179868 """ def __init__(self, frequency, nfft=1026, dimension=24, channels=24): """ 各種パラメータのセット nfft : FFTのサンプル点数 frequency : サンプリング周波数 dimension : MFCC次元数 channles : メルフィルタバンクのチャンネル数(dimensionに依存) fscale : 周波数スケール軸 filterbankl, fcenters : フィルタバンク行列, フィルタバンクの頂点(?) 
""" self.nfft = nfft self.frequency = frequency self.dimension = dimension self.channels = channels self.fscale = np.fft.fftfreq(self.nfft, d = 1.0 / self.frequency)[: int(self.nfft / 2)] self.filterbank, self.fcenters = self.melFilterBank() def hz2mel(self, f): """ 周波数からメル周波数に変換 """ return 1127.01048 * np.log(f / 700.0 + 1.0) def mel2hz(self, m): """ メル周波数から周波数に変換 """ return 700.0 * (np.exp(m / 1127.01048) - 1.0) def melFilterBank(self): """ メルフィルタバンクを生成する """ fmax = self.frequency / 2 melmax = self.hz2mel(fmax) nmax = int(self.nfft / 2) df = self.frequency / self.nfft dmel = melmax / (self.channels + 1) melcenters = np.arange(1, self.channels + 1) * dmel fcenters = self.mel2hz(melcenters) indexcenter = np.round(fcenters / df) indexstart = np.hstack(([0], indexcenter[0:self.channels - 1])) indexstop = np.hstack((indexcenter[1:self.channels], [nmax])) filterbank = np.zeros((self.channels, nmax)) for c in np.arange(0, self.channels): increment = 1.0 / (indexcenter[c] - indexstart[c]) # np,int_ は np.arangeが[0. 1. 2. ..]となるのをintにする for i in np.int_(np.arange(indexstart[c], indexcenter[c])): filterbank[c, i] = (i - indexstart[c]) * increment decrement = 1.0 / (indexstop[c] - indexcenter[c]) # np,int_ は np.arangeが[0. 1. 2. ..]となるのをintにする for i in np.int_(np.arange(indexcenter[c], indexstop[c])): filterbank[c, i] = 1.0 - ((i - indexcenter[c]) * decrement) return filterbank, fcenters def mfcc(self, spectrum): """ スペクトルからMFCCを求める. """ mspec = [] mspec = np.log10(np.dot(spectrum, self.filterbank.T)) mspec = np.array(mspec) return scipy.fftpack.realtransforms.dct(mspec, type=2, norm="ortho", axis=-1) def delta(self, mfcc): """ MFCCから動的特徴量を求める. 現在は,求める特徴量フレームtをt-1とt+1の平均としている. 
""" mfcc = np.concatenate([ [mfcc[0]], mfcc, [mfcc[-1]] ]) # 最初のフレームを最初に、最後のフレームを最後に付け足す delta = None for i in range(1, mfcc.shape[0] - 1): slope = (mfcc[i+1] - mfcc[i-1]) / 2 if delta is None: delta = slope else: delta = np.vstack([delta, slope]) return delta def imfcc(self, mfcc, spectrogram): """ MFCCからスペクトルを求める. """ im_sp = np.array([]) for i in range(mfcc.shape[0]): mfcc_s = np.hstack([mfcc[i], [0] * (self.channels - self.dimension)]) mspectrum = scipy.fftpack.idct(mfcc_s, norm='ortho') # splrep はスプライン補間のための補間関数を求める tck = scipy.interpolate.splrep(self.fcenters, np.power(10, mspectrum)) # splev は指定座標での補間値を求める im_spectrogram = scipy.interpolate.splev(self.fscale, tck) im_sp = np.concatenate((im_sp, im_spectrogram), axis=0) return im_sp.reshape(spectrogram.shape) def trim_zeros_frames(x, eps=1e-7): """ 無音区間を取り除く. """ T, D = x.shape s = np.sum(np.abs(x), axis=1) s[s < 1e-7] = 0. return x[s > eps] def analyse_by_world_with_harverst(x, fs): """ WORLD音声分析合成器で基本周波数F0,スペクトル包絡,非周期成分を求める. 基本周波数F0についてはharvest法により,より精度良く求める. """ # 4 Harvest with F0 refinement (using Stonemask) frame_period = 5 _f0_h, t_h = pw.harvest(x, fs, frame_period=frame_period) f0_h = pw.stonemask(x, _f0_h, t_h, fs) sp_h = pw.cheaptrick(x, f0_h, t_h, fs) ap_h = pw.d4c(x, f0_h, t_h, fs) return f0_h, sp_h, ap_h def wavread(file): """ wavファイルから音声トラックとサンプリング周波数を抽出する. """ wf = wave.open(file, "r") fs = wf.getframerate() x = wf.readframes(wf.getnframes()) x = np.frombuffer(x, dtype= "int16") / 32768.0 wf.close() return x, float(fs) def preEmphasis(signal, p=0.97): """ MFCC抽出のための高域強調フィルタ. 波形を通すことで,高域成分が強調される. """ return scipy.signal.lfilter([1.0, -p], 1, signal) def alignment(source, target, path): """ タイムアライメントを取る. target音声をsource音声の長さに合うように調整する. 
""" # ここでは814に合わせよう(targetに合わせる) # p_p = 0 if source.shape[0] > target.shape[0] else 1 #shapes = source.shape if source.shape[0] > target.shape[0] else target.shape shapes = source.shape align = np.array([]) for (i, p) in enumerate(path[0]): if i != 0: if j != p: temp = np.array(target[path[1][i]]) align = np.concatenate((align, temp), axis=0) else: temp = np.array(target[path[1][i]]) align = np.concatenate((align, temp), axis=0) j = p return align.reshape(shapes) """ pre-stored学習のためのパラレル学習データを作る。 時間がかかるため、利用できるlearn-data.pickleがある場合はそれを利用する。 それがない場合は一から作り直す。 """ timer_start = time.time() if os.path.exists(pre_stored_pickle): print("exist, ", pre_stored_pickle) with open(pre_stored_pickle, mode='rb') as f: total_data = pickle.load(f) print("open, ", pre_stored_pickle) print("Load pre-stored time = ", time.time() - timer_start , "[sec]") else: source_mfcc = [] #source_data_sets = [] for name in sorted(glob.iglob(pre_stored_source_list, recursive=True)): print(name) x, fs = sf.read(name) f0, sp, ap = analyse_by_world_with_harverst(x, fs) mfcc = MFCC(fs) source_mfcc_temp = mfcc.mfcc(sp) #source_data = np.hstack([source_mfcc_temp, mfcc.delta(source_mfcc_temp)]) # static & dynamic featuers source_mfcc.append(source_mfcc_temp) #source_data_sets.append(source_data) total_data = [] i = 0 _s_len = len(source_mfcc) for name in sorted(glob.iglob(pre_stored_list, recursive=True)): print(name, len(total_data)) x, fs = sf.read(name) f0, sp, ap = analyse_by_world_with_harverst(x, fs) mfcc = MFCC(fs) target_mfcc = mfcc.mfcc(sp) dist, cost, acc, path = dtw(source_mfcc[i%_s_len], target_mfcc, dist=lambda x, y: norm(x - y, ord=1)) #print('Normalized distance between the two sounds:' + str(dist)) #print("target_mfcc = {0}".format(target_mfcc.shape)) aligned = alignment(source_mfcc[i%_s_len], target_mfcc, path) #target_data_sets = np.hstack([aligned, mfcc.delta(aligned)]) # static & dynamic features #learn_data = np.hstack((source_data_sets[i], target_data_sets)) learn_data = 
np.hstack([source_mfcc[i%_s_len], aligned]) total_data.append(learn_data) i += 1 with open(pre_stored_pickle, 'wb') as output: pickle.dump(total_data, output) print("Make, ", pre_stored_pickle) print("Make pre-stored time = ", time.time() - timer_start , "[sec]") """ 全事前学習出力話者からラムダを推定する. ラムダは適応学習で変容する. """ S = len(total_data) D = int(total_data[0].shape[1] / 2) print("total_data[0].shape = ", total_data[0].shape) print("S = ", S) print("D = ", D) timer_start = time.time() if os.path.exists(pre_stored_gmm_init_pickle): print("exist, ", pre_stored_gmm_init_pickle) with open(pre_stored_gmm_init_pickle, mode='rb') as f: initial_gmm = pickle.load(f) print("open, ", pre_stored_gmm_init_pickle) print("Load initial_gmm time = ", time.time() - timer_start , "[sec]") else: initial_gmm = GMM(n_components = Mixtured, covariance_type = 'full') initial_gmm.fit(np.vstack(total_data)) with open(pre_stored_gmm_init_pickle, 'wb') as output: pickle.dump(initial_gmm, output) print("Make, ", initial_gmm) print("Make initial_gmm time = ", time.time() - timer_start , "[sec]") weights = initial_gmm.weights_ source_means = initial_gmm.means_[:, :D] target_means = initial_gmm.means_[:, D:] covarXX = initial_gmm.covars_[:, :D, :D] covarXY = initial_gmm.covars_[:, :D, D:] covarYX = initial_gmm.covars_[:, D:, :D] covarYY = initial_gmm.covars_[:, D:, D:] fitted_source = source_means fitted_target = target_means """ SVはGMMスーパーベクトルで、各pre-stored学習における出力話者について平均ベクトルを推定する。 GMMの学習を見てみる必要があるか? 
""" timer_start = time.time() if os.path.exists(pre_stored_sv_npy): print("exist, ", pre_stored_sv_npy) sv = np.load(pre_stored_sv_npy) print("open, ", pre_stored_sv_npy) print("Load pre_stored_sv time = ", time.time() - timer_start , "[sec]") else: sv = [] for i in range(S): gmm = GMM(n_components = Mixtured, params = 'm', init_params = '', covariance_type = 'full') gmm.weights_ = initial_gmm.weights_ gmm.means_ = initial_gmm.means_ gmm.covars_ = initial_gmm.covars_ gmm.fit(total_data[i]) sv.append(gmm.means_) sv = np.array(sv) np.save(pre_stored_sv_npy, sv) print("Make pre_stored_sv time = ", time.time() - timer_start , "[sec]") """ 各事前学習出力話者のGMM平均ベクトルに対して主成分分析(PCA)を行う. PCAで求めた固有値と固有ベクトルからeigenvectorsとbiasvectorsを作る. """ timer_start = time.time() #source_pca source_n_component, source_n_features = sv[:, :, :D].reshape(S, Mixtured*D).shape # 標準化(分散を1、平均を0にする) source_stdsc = StandardScaler() # 共分散行列を求める source_X_std = source_stdsc.fit_transform(sv[:, :, :D].reshape(S, Mixtured*D)) # PCAを行う source_cov = source_X_std.T @ source_X_std / (source_n_component - 1) source_W, source_V_pca = np.linalg.eig(source_cov) print(source_W.shape) print(source_V_pca.shape) # データを主成分の空間に変換する source_X_pca = source_X_std @ source_V_pca print(source_X_pca.shape) #target_pca target_n_component, target_n_features = sv[:, :, D:].reshape(S, Mixtured*D).shape # 標準化(分散を1、平均を0にする) target_stdsc = StandardScaler() #共分散行列を求める target_X_std = target_stdsc.fit_transform(sv[:, :, D:].reshape(S, Mixtured*D)) #PCAを行う target_cov = target_X_std.T @ target_X_std / (target_n_component - 1) target_W, target_V_pca = np.linalg.eig(target_cov) print(target_W.shape) print(target_V_pca.shape) # データを主成分の空間に変換する target_X_pca = target_X_std @ target_V_pca print(target_X_pca.shape) eigenvectors = source_X_pca.reshape((Mixtured, D, S)), target_X_pca.reshape((Mixtured, D, S)) source_bias = np.mean(sv[:, :, :D], axis=0) target_bias = np.mean(sv[:, :, D:], axis=0) biasvectors = source_bias.reshape((Mixtured, D)), 
target_bias.reshape((Mixtured, D)) print("Do PCA time = ", time.time() - timer_start , "[sec]") """ 声質変換に用いる変換元音声と目標音声を読み込む. """ timer_start = time.time() source_mfcc_for_convert = [] source_sp_for_convert = [] source_f0_for_convert = [] source_ap_for_convert = [] fs_source = None for name in sorted(glob.iglob(for_convert_source, recursive=True)): print("source = ", name) x_source, fs_source = sf.read(name) f0_source, sp_source, ap_source = analyse_by_world_with_harverst(x_source, fs_source) mfcc_source = MFCC(fs_source) #mfcc_s_tmp = mfcc_s.mfcc(sp) #source_mfcc_for_convert = np.hstack([mfcc_s_tmp, mfcc_s.delta(mfcc_s_tmp)]) source_mfcc_for_convert.append(mfcc_source.mfcc(sp_source)) source_sp_for_convert.append(sp_source) source_f0_for_convert.append(f0_source) source_ap_for_convert.append(ap_source) target_mfcc_for_fit = [] target_f0_for_fit = [] target_ap_for_fit = [] for name in sorted(glob.iglob(for_convert_target, recursive=True)): print("target = ", name) x_target, fs_target = sf.read(name) f0_target, sp_target, ap_target = analyse_by_world_with_harverst(x_target, fs_target) mfcc_target = MFCC(fs_target) #mfcc_target_tmp = mfcc_target.mfcc(sp_target) #target_mfcc_for_fit = np.hstack([mfcc_t_tmp, mfcc_t.delta(mfcc_t_tmp)]) target_mfcc_for_fit.append(mfcc_target.mfcc(sp_target)) target_f0_for_fit.append(f0_target) target_ap_for_fit.append(ap_target) # 全部numpy.arrrayにしておく source_data_mfcc = np.array(source_mfcc_for_convert) source_data_sp = np.array(source_sp_for_convert) source_data_f0 = np.array(source_f0_for_convert) source_data_ap = np.array(source_ap_for_convert) target_mfcc = np.array(target_mfcc_for_fit) target_f0 = np.array(target_f0_for_fit) target_ap = np.array(target_ap_for_fit) print("Load Input and Target Voice time = ", time.time() - timer_start , "[sec]") """ 適応話者学習を行う. つまり,事前学習出力話者から目標話者の空間を作りだす. 適応話者文数ごとにfitted_targetを集めるのは未実装. 
""" timer_start = time.time() epoch=100 py = GMM(n_components = Mixtured, covariance_type = 'full') py.weights_ = weights py.means_ = target_means py.covars_ = covarYY fitted_target = None for i in range(len(target_mfcc)): print("adaptation = ", i+1, "/", len(target_mfcc)) target = target_mfcc[i] for x in range(epoch): print("epoch = ", x) predict = py.predict_proba(np.atleast_2d(target)) y = np.sum([predict[:, i: i + 1] * (target - biasvectors[1][i]) for i in range(Mixtured)], axis = 1) gamma = np.sum(predict, axis = 0) left = np.sum([gamma[i] * np.dot(eigenvectors[1][i].T, np.linalg.solve(py.covars_, eigenvectors[1])[i]) for i in range(Mixtured)], axis=0) right = np.sum([np.dot(eigenvectors[1][i].T, np.linalg.solve(py.covars_, y)[i]) for i in range(Mixtured)], axis = 0) weight = np.linalg.solve(left, right) fitted_target = np.dot(eigenvectors[1], weight) + biasvectors[1] py.means_ = fitted_target print("Load Input and Target Voice time = ", time.time() - timer_start , "[sec]") """ 変換に必要なものを残しておく. """ np.save(save_for_evgmm_covarXX, covarXX) np.save(save_for_evgmm_covarYX, covarYX) np.save(save_for_evgmm_fitted_source, fitted_source) np.save(save_for_evgmm_fitted_target, fitted_target) np.save(save_for_evgmm_weights, weights) np.save(save_for_evgmm_source_means, source_means) ```
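The `hz2mel`/`mel2hz` pair in the `MFCC` class above must be exact inverses, since `melFilterBank` places filter centers in mel space and maps them back to Hz. A standalone round-trip check with the same `1127.01048` and `700.0` constants:

```python
# Standalone check that the mel <-> Hz pair used by the MFCC class
# (constants 1127.01048 and 700.0) round-trips exactly.
import numpy as np

def hz2mel(f):
    # Hz -> mel
    return 1127.01048 * np.log(f / 700.0 + 1.0)

def mel2hz(m):
    # mel -> Hz
    return 700.0 * (np.exp(m / 1127.01048) - 1.0)

freqs = np.array([100.0, 440.0, 1000.0, 8000.0])
roundtrip = mel2hz(hz2mel(freqs))
print(np.max(np.abs(roundtrip - freqs)))  # numerically ~0
```

Because `hz2mel` is logarithmic, evenly spaced mel centers give filter banks that are dense at low frequencies and sparse at high ones, which is the whole point of the mel scale.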
---
Code:<a href="https://github.com/lotapp/BaseCode" target="_blank">https://github.com/lotapp/BaseCode</a> 多图旧排版:<a href="https://www.cnblogs.com/dunitian/p/9119986.html" target="_blank">https://www.cnblogs.com/dunitian/p/9119986.html</a> 在线编程:<a href="https://mybinder.org/v2/gh/lotapp/BaseCode/master" target="_blank">https://mybinder.org/v2/gh/lotapp/BaseCode/master</a> Python设计的目的就是 ==> **让程序员解放出来,不要过于关注代码本身** 步入正题:**欢迎提出更简单或者效率更高的方法** **基础系列**:(这边重点说说`Python`,上次讲过的东西我就一笔带过了) ## 1.基础回顾 ### 1.1.输出+类型转换 ``` user_num1=input("输入第一个数:") user_num2=input("输入第二个数:") print("两数之和:%d"%(int(user_num1)+int(user_num2))) ``` ### 1.2.字符串拼接+拼接输出方式 ``` user_name=input("输入昵称:") user_pass=input("输入密码:") user_url="192.168.1.121" #拼接输出方式一: print("ftp://"+user_name+":"+user_pass+"@"+user_url) #拼接输出方式二: print("ftp://%s:%s@%s"%(user_name,user_pass,user_url)) ``` ## 2.字符串遍历、下标、切片 ### 2.1.Python 重点说下`python`的 **下标**,有点意思,最后一个元素,我们一般都是`len(str)-1`,他可以直接用`-1`,倒2自然就是`-2`了 **最后一个元素:`user_str[-1]`** user_str[-1] user_str[len(user_str)-1] #其他编程语言写法 **倒数第二个元素:`user_str[-2]`** user_str[-1] user_str[len(user_str)-2] #其他编程语言写法 ``` user_str="七大姑曰:工作了吗?八大姨问:买房了吗?异性说:结婚了吗?" #遍历 for item in user_str: print(item,end=" ") # 不换行,以“ ”方式拼接 #长度:len(user_str) len(user_str) # #第一个元素:user_str[0] user_str[0] # 最后一个元素:user_str[-1] print(user_str[-1]) print(user_str[len(user_str)-1])#其他编程语言写法 #倒数第二个元素:user_str[-2] print(user_str[-2]) print(user_str[len(user_str)-2])#其他编程语言写法 ``` **python切片语法**:`[start_index:end_index:step]` (**end_index取不到**) eg:`str[1:4]` 取str[1]、str[2]、str[3] eg:`str[2:]` 取下标为2开始到最后的元素 eg:`str[2:-1]` 取下标为2~到倒数第二个元素(end_index取不到) eg:`str[1:6:2]` 隔着取~str[1]、str[3]、str[5](案例会详细说) eg:`str[::-1]` 逆向输出(案例会详细说) ``` it_str="我爱编程,编程爱它,它是程序,程序是谁?" 
# eg:取“编程爱它” it_str[5:9] print(it_str[5:9]) print(it_str[5:-11]) # end_index用-xx也一样 print(it_str[-15:-11])# start_index用-xx也可以 # eg:取“编程爱它,它是程序,程序是谁?” it_str[5:] print(it_str[5:])# 不写默认取到最后一个 # eg:一个隔一个跳着取("我编,程它它程,序谁") it_str[0::2] print(it_str[0::2])# step=△index(eg:0,1,2,3。这里的step=> 2-0 => 间隔1) # eg:倒序输出 it_str[::-1] # end_index不写默认是取到最后一个,是正取(从左往右)还是逆取(从右往左),就看step是正是负 print(it_str[::-1]) print(it_str[-1::-1])# 等价于上一个 ``` ### 2.2.CSharp 这次为了更加形象对比,一句一句翻译成C# 有没有发现规律,`user_str[user_str.Length-1]`==> -1是最后一个 `user_str[user_str.Length-2]`==> -2是最后第二个 python的切片其实就是在这方面简化了 ``` %%script csharp //# # 字符串遍历、下标、切片 //# user_str="七大姑曰:工作了吗?八大姨问:买房了吗?异性说:结婚了吗?" var user_str = "七大姑曰:工作了吗?八大姨问:买房了吗?异性说:结婚了吗?"; //# #遍历 //# for item in user_str: //# print(item,end=" ") foreach (var item in user_str) { Console.Write(item); } //# #长度:len(user_str) //# print(len(user_str)) Console.WriteLine(user_str.Length); //# #第一个元素:user_str[0] //# print(user_str[0]) Console.WriteLine(user_str[0]); //# #最后一个元素:user_str[-1] //# print(user_str[-1]) //# print(user_str[len(user_str)-1])#其他编程语言写法 Console.WriteLine(user_str[user_str.Length - 1]); // //# #倒数第二个元素:user_str[-2] //# print(user_str[-2]) Console.WriteLine(user_str[user_str.Length - 2]); ``` 其实你用`Pytho`n跟其他语言对比反差更大,`net`真的很强大了。 补充(对比看就清楚`Python`的`step`为什么是2了,i+=2==>2) ``` %%script csharp //# 切片:[start_index:end_index:step] (end_index取不到) //# eg:str[1:4] 取str[1]、str[2]、str[3] //# eg:str[2:] 取下标为2开始到最后的元素 //# eg:str[2:-1] 取下标为2~到倒数第二个元素(end_index取不到) //# eg:str[1:6:2] 隔着取~str[1]、str[3]、str[5](案例会详细说) //# eg:str[::-1] 逆向输出(案例会详细说,) // var it_str = "我爱编程,编程爱它,它是程序,程序是谁?"; // //#eg:取“编程爱它” it_str[5:9] // print(it_str[5:9]) // print(it_str[5:-11]) #end_index用-xx也一样 // print(it_str[-15:-11])#start_index用-xx也可以 //Substring(int startIndex, int length) Console.WriteLine(it_str.Substring(5, 4));//第二个参数是长度 // //#eg:取“编程爱它,它是程序,程序是谁?” it_str[5:] // print(it_str[5:])#不写默认取到最后一个 Console.WriteLine(it_str.Substring(5));//不写默认取到最后一个 
//#eg:一个隔一个跳着取("我编,程它它程,序谁") it_str[0::2] // print(it_str[0::2])#step=△index(eg:0,1,2,3。这里的step=> 2-0 => 间隔1) //这个我第一反应是用linq ^_^ for (int i = 0; i < it_str.Length; i += 2)//对比看就清除Python的step为什么是2了,i+=2==》2 { Console.Write(it_str[i]); } Console.WriteLine("\n倒序:"); //#eg:倒序输出 it_str[::-1] //# end_index不写默认是取到最后一个,是正取(从左往右)还是逆取(从右往左),就看step是正是负 // print(it_str[::-1]) // print(it_str[-1::-1])#等价于上一个 for (int i = it_str.Length - 1; i >= 0; i--) { Console.Write(it_str[i]); } //其实可以用Linq:Console.WriteLine(new string(it_str.ToCharArray().Reverse().ToArray())); ``` ## 3.Python字符串方法系列 ### 3.1.Python查找 `find`,`rfind`,`index`,`rindex` Python查找 **推荐**你用`find`和`rfind` ``` test_str = "ABCDabcdefacddbdf" # 查找:find,rfind,index,rindex # xxx.find(str, start, end) print(test_str.find("cd"))#从左往右 print(test_str.rfind("cd"))#从右往左 print(test_str.find("dnt"))#find和rfind找不到就返回-1 # index和rindex用法和find一样,只是找不到会报错(以后用find系即可) print(test_str.index("dnt")) ``` ### 3.2.Python计数 python:`xxx.count(str, start, end)` ``` # 计数:count # xxx.count(str, start, end) print(test_str.count("d"))#4 print(test_str.count("cd"))#2 ``` ### 3.3.Python替换 Python:`xxx.replace(str1, str2, 替换次数)` ``` # 替换:replace # xxx.replace(str1, str2, 替换次数) print(test_str) print(test_str.replace("b","B"))#并没有改变原字符串,只是生成了一个新的字符串 print(test_str) # replace可以指定替换几次 print(test_str.replace("b","B",1))#ABCDaBcdefacddbdf ``` ### 3.4.Python分割 `split`(按指定字符分割),`splitlines`(按行分割) `partition`(以str分割成三部分,str前,str和str后),`rpartition`(从右边开始) 说下 **split的切片用法**:`print(test_input.split(" ",3))` 在第三个空格处切片,后面的不切了 ``` # 分割:split(按指定字符分割),splitlines(按行分割),partition(以str分割成三部分,str前,str和str后),rpartition test_list=test_str.split("a")#a有两个,按照a分割,那么会分成三段,返回类型是列表(List),并且返回结果中没有a print(test_list) test_input="hi my name is dnt" print(test_input.split(" ")) #返回列表格式(后面会说)['hi', 'my', 'name', 'is', 'dnt'] print(test_input.split(" ",3))#在第三个空格处切片,后面的不管了 ``` 继续说说`splitlines`(按行分割),和`split("\n")`的区别: ``` # splitlines()按行分割,返回类型为List test_line_str="abc\nbca\ncab\n" 
print(test_line_str.splitlines())  # ['abc', 'bca', 'cab']
print(test_line_str.split("\n"))   # spot the difference: ['abc', 'bca', 'cab', '']

# one more example, in case the difference was not obvious
test_line_str2 = "abc\nbca\ncab\nLLL"
print(test_line_str2.splitlines())  # ['abc', 'bca', 'cab', 'LLL']
print(test_line_str2.split("\n"))   # note: when the string does not end with \n, the two agree
```

Extension: `split()` with no arguments splits on **whitespace** (spaces, `\t`, `\n`, and so on, with no `''` entries in the result):

```
# Extension: split() with no arguments splits on whitespace (space, \t, \n, ...)
# and never returns '' entries
print("hi my name is dnt\t\n m\n\t\n".split())
```

Finally, `partition` and `rpartition`: they return a tuple (covered later), and they search the way `find` does — the first match wins. [**Mind the not-found case.**]

```
# partition (three parts: before str, str, after str)
# returns a tuple (covered later); like find, it stops at the first match
# [mind the not-found case]
print(test_str.partition("cd"))        # ('ABCDab', 'cd', 'efacddbdf')
print(test_str.rpartition("cd"))       # ('ABCDabcdefa', 'cd', 'dbdf')
print(test_str.partition("感觉自己萌萌哒"))  # not found: ('ABCDabcdefacddbdf', '', '')
```

### 3.5. Joining Strings in Python

**join**: `"-".join(test_list)`

```
# Joining: join
# separator.join(xxx)
# wrong way around: xxx.join("-")
print("-".join(test_list))
```

### 3.6. Prefix and Suffix Checks in Python

`startswith` (does it start with ...), `endswith` (does it end with ...)

```
# Prefix/suffix checks: startswith, endswith
# test_str.startswith(prefix)
start_end_str = "http://www.baidu.net"
print(start_end_str.startswith("https://") or start_end_str.startswith("http://"))
print(start_end_str.endswith(".com"))
```

### 3.7. Case Conversion in Python

`lower` (convert to lowercase), `upper` (convert to uppercase), `title` (capitalize each word), `capitalize` (capitalize the first character, lowercase the rest)

```
# Case conversion: lower (to lowercase), upper (to uppercase)
# title (capitalize each word), capitalize (first character uppercase, the rest lowercase)
print(test_str)
print(test_str.upper())       # ABCDABCDEFACDDBDF
print(test_str.lower())       # abcdabcdefacddbdf
print(test_str.capitalize())  # first character uppercase, the rest lowercase
```

### 3.8. Whitespace and Padding in Python

`lstrip` (strip leading whitespace), `rstrip` (strip trailing whitespace), **`strip`** (strip both sides). Pretty-printing helpers: `ljust`, `rjust`, `center`.

`ljust`, `rjust`, and `center` need little discussion here; Python output often lands in a Linux terminal, which is where they get most of their use.

```
# Whitespace and padding: lstrip (leading), rstrip (trailing), strip (both sides)
# pretty-printing helpers: ljust, rjust, center
strip_str = " I Have a Dream "
print(strip_str.strip() + "|")   # the | only makes trailing spaces visible; it has no other purpose
print(strip_str.lstrip() + "|")
print(strip_str.rstrip() + "|")

# plain formatted output; nothing more to say about these
print(test_str.ljust(50))
print(test_str.rjust(50))
print(test_str.center(50))
```

### 3.9. Validation in Python

`isalpha` (letters only), `isalnum` (letters and/or digits), `isdigit` (digits only), `isspace` (whitespace only)

Note: `test_str5 = " \t \n "` # **isspace() ==> True**

```
# Validation: isalpha (letters only), isalnum (letters/digits), isdigit (digits only), isspace (whitespace only)
# note: test_str5 = " \t \n "  # isspace() ==> True
test_str2 = "Abcd123"
test_str3 = "123456"
test_str4 = " \t"      # isspace() ==> True
test_str5 = " \t \n "  # isspace() ==> True

test_str.isalpha()   # letters only?
test_str.isalnum()   # letters and/or digits?
test_str.isdigit()   # digits only?
test_str.isspace()   # whitespace only?
test_str2.isalnum()  # made of letters and digits?
test_str2.isdigit()  # digits only?
test_str3.isdigit()  # digits only?
test_str5.isspace()  # whitespace only?
test_str4.isspace()  # whitespace only?
```

### Python Extras

For practicing these methods, `ipython3` is all you need (`sudo apt-get install ipython3`). In an editor you have to `print` each result one by one, which is tedious (I only do it that way here because I am writing an article).

![screenshot](https://images2018.cnblogs.com/blog/1127869/201805/1127869-20180531091353949-747834264.png)

## 4. CSharp String Methods

### 4.1. Searching

`IndexOf` is the equivalent of Python's `find`, and `LastIndexOf` ==> `rfind`.

```
%%script csharp
var test_str = "ABCDabcdefacddbdf";
//# # Searching: find, rfind, index, rindex
//# # xxx.find(str, start, end)
//# print(test_str.find("cd"))  # left to right
Console.WriteLine(test_str.IndexOf('a'));      // 4
Console.WriteLine(test_str.IndexOf("cd"));     // 6
//# print(test_str.rfind("cd"))  # right to left
Console.WriteLine(test_str.LastIndexOf("cd")); // 11
//# print(test_str.find("dnt"))  # find and rfind return -1 when nothing is found
Console.WriteLine(test_str.IndexOf("dnt"));    // -1
```

### 4.2. Counting

Using only the basics, there are two approaches. The first is a small transformation: (original length - length after removing the substring) / substring length.

```csharp
//# # Counting: count
//# # xxx.count(str, start, end)
//# print(test_str.count("d"))   # 4
//# print(test_str.count("cd"))  # 2
// My first thoughts were a dictionary, a regex, or LINQ; then I looked for a basics-only
// solution: (original length - length after replacement) / substring length
Console.WriteLine(test_str.Length - test_str.Replace("d", "").Length);  // counting a single character is simple
Console.WriteLine((test_str.Length - test_str.Replace("cd", "").Length) / "cd".Length);
Console.WriteLine(test_str);  // no need to worry about the original string changing
                              // (strings are immutable in both Python and C#)
```
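Back in Python, the same length-difference trick can be checked against the built-in `str.count` (both count non-overlapping occurrences); this is a small demonstration, not something you would use instead of `count`:

```python
# Counting occurrences two ways: Python's built-in str.count, and the
# length-difference trick from the C# comparison.
test_str = "ABCDabcdefacddbdf"

# Built-in: counts non-overlapping occurrences.
builtin_count = test_str.count("cd")

# Length-difference trick: remove every occurrence, then divide the
# number of deleted characters by the substring's length.
trick_count = (len(test_str) - len(test_str.replace("cd", ""))) // len("cd")

print(builtin_count, trick_count)  # 2 2
```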
Another way to count substrings (<a href="https://github.com/dunitian/LoTCodeBase/tree/master/NetCode/2.面向对象/4.字符串" target="_blank">just use IndexOf</a>):

```csharp
int count = 0;
int index = input.IndexOf("abc");
while (index != -1)
{
    count++;
    index = input.IndexOf("abc", index + 3);  // move index past this occurrence of abc
}
```

### 4.3. Replacing

Replacing only a given number of occurrences is a niche need, so it is left for you to think through yourself.

```
%%script csharp
var test_str = "ABCDabcdefacddbdf";
Console.WriteLine(test_str.Replace("b", "B"));
```

### 4.4. Splitting

`Split` has many overloads worth looking up, eg: `Split("\n", StringSplitOptions.RemoveEmptyEntries)`.

Also note: `test_str.Split('a');` returns an array. To get a list like Python does, use `test_str.Split('a').ToList();` [this needs the LINQ namespace].

```csharp
var test_array = test_str.Split('a');  // returns an array (for a list: test_str.Split('a').ToList();)

var test_input = "hi my name is dnt";
//# print(test_input.split(" "))  # returns a list (covered later): ['hi', 'my', 'name', 'is', 'dnt']
test_input.Split(" ");

//# split on line boundaries, returning a list
var test_line_str = "abc\nbca\ncab\n";
//# print(test_line_str.splitlines())  # ['abc', 'bca', 'cab']
test_line_str.Split("\n", StringSplitOptions.RemoveEmptyEntries);
```

### 4.5. Joining

**`string.Join(separator, array)`**

```csharp
Console.WriteLine(string.Join("-", test_array));  // test_array is an array: ABCD-bcdef-cddbdf
```

### 4.6. Prefix and Suffix Checks

`StartsWith` (does it start with ...), `EndsWith` (does it end with ...)

```
%%script csharp
var start_end_str = "http://www.baidu.net";
//# print(start_end_str.startswith("https://") or start_end_str.startswith("http://"))
System.Console.WriteLine(start_end_str.StartsWith("https://") || start_end_str.StartsWith("http://"));
//# print(start_end_str.endswith(".com"))
System.Console.WriteLine(start_end_str.EndsWith(".com"));
```

### 4.7. Case Conversion

```csharp
//# print(test_str.upper())  # ABCDABCDEFACDDBDF
Console.WriteLine(test_str.ToUpper());
//# print(test_str.lower())  # abcdabcdefacddbdf
Console.WriteLine(test_str.ToLower());
```

### 4.8. Trimming and Formatting

`Trim` is quite capable: besides whitespace, it can strip any characters you choose. For formatted output, .NET's `string.Format` family covers everything, so it is not discussed here.

```
%%script csharp
var strip_str = " I Have a Dream ";
//# print(strip_str.strip()+"|")  # the | only makes trailing spaces visible; it has no other purpose
Console.WriteLine(strip_str.Trim() + "|");
//# print(strip_str.lstrip()+"|")
Console.WriteLine(strip_str.TrimStart() + "|");
//# print(strip_str.rstrip()+"|")
Console.WriteLine(strip_str.TrimEnd() + "|");
```

### 4.9. Validation

`string.IsNullOrEmpty` and `string.IsNullOrWhiteSpace` are built in.

```
%%script csharp
var test_str4 = " \t";
var test_str5 = " \t \n ";
//# isspace() ==> true
// string.IsNullOrEmpty and string.IsNullOrWhiteSpace are built in;
// anything beyond them needs a custom extension class
Console.WriteLine(string.IsNullOrEmpty(test_str4));      // false
Console.WriteLine(string.IsNullOrWhiteSpace(test_str4)); // true
Console.WriteLine(string.IsNullOrEmpty(test_str5));      // false
Console.WriteLine(string.IsNullOrWhiteSpace(test_str5)); // true
```

For everything else you need to build an extension class yourself (eg: <a href="https://github.com/dunitian/LoTCodeBase/blob/master/NetCode/5.逆天类库/LoTLibrary/Validation/ValidationHelper.cs" target="_blank">a simple one</a>):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static partial class ValidationHelper
{
    #region Common validation

    #region Collections
    /// <summary>
    /// Does the collection contain any data?
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="list"></param>
    /// <returns></returns>
    public static bool ExistsData<T>(this IEnumerable<T> list)
    {
        bool b = false;
        if (list != null && list.Count() > 0)
        {
            b = true;
        }
        return b;
    }
    #endregion

    #region Null checks
    /// <summary>
    /// Is the string null, empty, or whitespace?
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsNullOrWhiteSpace(this string objStr)
    {
        if (string.IsNullOrWhiteSpace(objStr))
        {
            return true;
        }
        else
        {
            return false;
        }
    }

    /// <summary>
    /// Is the type a nullable type?
    /// </summary>
    /// <param name="theType"></param>
    /// <returns></returns>
    public static bool IsNullableType(Type theType)
    {
        return (theType.IsGenericType && theType.GetGenericTypeDefinition().Equals(typeof(Nullable<>)));
    }
    #endregion

    #region Numeric string checks
    /// <summary>
    /// Is the string numeric (decimals included)?
    /// </summary>
    /// <param name="objStr">input string</param>
    /// <returns></returns>
    public
static bool IsNumber(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"^\d+(\.\d+)?$");
        }
        catch { return false; }
    }

    /// <summary>
    /// Is the string a (possibly negative) decimal number?
    /// </summary>
    /// <param name="objStr">input string</param>
    /// <returns></returns>
    public static bool IsDecimal(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"^(-?\d+)(\.\d+)?$");
        }
        catch { return false; }
    }
    #endregion

    #endregion

    #region Business validation

    #region Chinese character detection
    /// <summary>
    /// Does the string contain any Chinese characters?
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsZhCN(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, "[\u4e00-\u9fa5]");
        }
        catch { return false; }
    }
    #endregion

    #region Email validation
    /// <summary>
    /// Is the string a well-formed email address?
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsEmail(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"^([\w-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([\w-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$");
        }
        catch { return false; }
    }
    #endregion

    #region IP validation
    /// <summary>
    /// Is the string an IP address?
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsIP(this string objStr)
    {
        return Regex.IsMatch(objStr, @"^((2[0-4]\d|25[0-5]|[01]?\d\d?)\.){3}(2[0-4]\d|25[0-5]|[01]?\d\d?)$");
    }

    /// <summary>
    /// Does the input string represent an IPv4 address?
    /// </summary>
    /// <param name="objStr">string to check</param>
    /// <returns>True if it is an IP address</returns>
    public static bool IsIPv4(this string objStr)
    {
        string[] IPs = objStr.Split('.');
        for (int i = 0; i < IPs.Length; i++)
        {
            if (!Regex.IsMatch(IPs[i], @"^\d+$"))
            {
                return false;
            }
            if (Convert.ToUInt16(IPs[i]) > 255)
            {
                return false;
            }
        }
        return true;
    }

    /// <summary>
    /// Is the input string a valid IPv6 address?
    /// </summary>
    /// <param name="input"></param>
    /// <returns></returns>
    public static bool IsIPV6(string input)
    {
        string temp = input;
        string[] strs = temp.Split(':');
        if (strs.Length > 8)
        {
            return false;
        }
        int count = input.GetStrCount("::");  // GetStrCount: custom extension counting occurrences of "::"
        if (count > 1)
        {
            return false;
        }
        else if (count == 0)
        {
            return
Regex.IsMatch(input, @"^([\da-f]{1,4}:){7}[\da-f]{1,4}$");
        }
        else
        {
            return Regex.IsMatch(input, @"^([\da-f]{1,4}:){0,5}::([\da-f]{1,4}:){0,5}[\da-f]{1,4}$");
        }
    }
    #endregion

    #region URL validation
    /// <summary>
    /// Is the string a valid http: or https: URL? [protocol-relative // URLs to be added later]
    /// </summary>
    /// <param name="objStr">address</param>
    /// <returns></returns>
    public static bool IsWebUrl(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"http://([\w-]+\.)+[\w-]+(/[\w- ./?%&=]*)?|https://([\w-]+\.)+[\w-]+(/[\w- ./?%&=]*)?");
        }
        catch { return false; }
    }

    /// <summary>
    /// Is the input string a hyperlink?
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsURL(this string objStr)
    {
        string pattern = @"^[a-zA-Z]+://(\w+(-\w+)*)(\.(\w+(-\w+)*))*(\?\S*)?$";
        return Regex.IsMatch(objStr, pattern);
    }
    #endregion

    #region Postal code validation
    /// <summary>
    /// Is the string a valid postal code?
    /// </summary>
    /// <param name="objStr">input string</param>
    /// <returns></returns>
    public static bool IsZipCode(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"\d{6}");
        }
        catch { return false; }
    }
    #endregion

    #region Phone and mobile validation
    /// <summary>
    /// Is the string a valid mobile number?
    /// </summary>
    /// <param name="objStr">mobile number</param>
    /// <returns></returns>
    public static bool IsMobile(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"^13[0-9]{9}|15[012356789][0-9]{8}|18[0123456789][0-9]{8}|147[0-9]{8}$");
        }
        catch { return false; }
    }

    /// <summary>
    /// Matches phone numbers with a 3- or 4-digit area code; the area code may be wrapped in
    /// parentheses or not, and may be separated from the local number by a hyphen, a space, or nothing
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsPhone(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, "^\\(0\\d{2}\\)[- ]?\\d{8}$|^0\\d{2}[- ]?\\d{8}$|^\\(0\\d{3}\\)[- ]?\\d{7}$|^0\\d{3}[- ]?\\d{7}$");
        }
        catch { return false; }
    }
    #endregion

    #region Letters-or-digits validation
    /// <summary>
    /// Is the string made of only letters and digits?
    /// </summary>
    /// <param name="objStr"></param>
    /// <returns></returns>
    public static bool IsAbcOr123(this string objStr)
    {
        try
        {
            return Regex.IsMatch(objStr, @"^[0-9a-zA-Z\$]+$");
        }
        catch { return false; }
    }
    #endregion

    #endregion
}
```
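For comparison, a few of the C# helpers above translate to Python in just a handful of lines, reusing the same regular expressions. This is a minimal sketch, and the snake_case names below are chosen here for illustration, not taken from any library:

```python
import re

# Minimal Python counterparts to a few of the C# validation helpers,
# reusing the same regular expressions.

def is_number(s: str) -> bool:
    """Digits only, with an optional decimal part (like IsNumber)."""
    return re.fullmatch(r"\d+(\.\d+)?", s) is not None

def is_zh_cn(s: str) -> bool:
    """Contains at least one Chinese character (like IsZhCN)."""
    return re.search(r"[\u4e00-\u9fa5]", s) is not None

def is_ipv4(s: str) -> bool:
    """Valid dotted-quad IPv4 address (like IsIP)."""
    return re.fullmatch(r"((2[0-4]\d|25[0-5]|[01]?\d\d?)\.){3}(2[0-4]\d|25[0-5]|[01]?\d\d?)", s) is not None

print(is_number("3.14"), is_zh_cn("hello 编程"), is_ipv4("192.168.1.1"))  # True True True
```

Note that `re.fullmatch` anchors the pattern at both ends, so the explicit `^...$` of the C# versions is not needed.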
###### Text provided under a Creative Commons Attribution license, CC-BY. Code under MIT license. (c)2014 Lorena A. Barba, Pi-Yueh Chuang. Thanks: NSF for support via CAREER award #1149784.

# Source Distribution on an Airfoil

In [Lesson 3](03_Lesson03_doublet.ipynb) of *AeroPython*, you learned that it is possible to represent potential flow around a circular cylinder using the superposition of a doublet singularity and a free stream. But potential flow is even more powerful: you can represent the flow around *any* shape. How is that possible, you might ask? For non-lifting bodies, you can use a source distribution on the body surface, superposed with a free stream. In this assignment, you will build the flow around a NACA0012 airfoil, using a set of sources.

Before you start, take a moment to think: in flow around a symmetric airfoil at $0^{\circ}$ angle of attack,

* Where is the point of maximum pressure?
* What do we call that point?
* Will the airfoil generate any lift?

At the end of this assignment, come back to these questions, and see if it all makes sense.

## Problem Setup

You will read data files containing information about the location and the strength of a set of sources located on the surface of a NACA0012 airfoil. There are three data files: NACA0012_x.txt, NACA0012_y.txt, and NACA0012_sigma.txt. To load each file into a NumPy array, you need the function [`numpy.loadtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html). The files can be found in the `resources` folder of the lessons.

Using 51 mesh points in each direction, and a domain $[-1, 2]\times[-0.5, 0.5]$, compute the velocity due to the set of sources plus a free stream in the $x$-direction with $U_{\infty}=1$. Also compute the coefficient of pressure on your grid points.

## Questions:

1. What is the value of the maximum pressure coefficient, $C_p$?
2. What are the array indices for the maximum value of $C_p$?
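The computation can be sketched in a few lines of NumPy. The source locations and strengths below (`x_src`, `y_src`, `sigma`) are synthetic stand-ins for the actual NACA0012 data files, so only the structure of the calculation carries over, not the numbers:

```python
import numpy as np

# Synthetic source data standing in for numpy.loadtxt('resources/NACA0012_x.txt'), etc.
x_src = np.linspace(0.01, 0.99, 20)   # hypothetical source x-locations
y_src = np.zeros_like(x_src)          # hypothetical source y-locations
sigma = 0.01 * np.sin(np.pi * x_src)  # hypothetical source strengths

u_inf = 1.0
N = 51
X, Y = np.meshgrid(np.linspace(-1.0, 2.0, N), np.linspace(-0.5, 0.5, N))

# Superpose the free stream with the velocity induced by every point source.
u = u_inf * np.ones_like(X)
v = np.zeros_like(X)
for xs, ys, s in zip(x_src, y_src, sigma):
    r2 = (X - xs)**2 + (Y - ys)**2
    u += s / (2 * np.pi) * (X - xs) / r2
    v += s / (2 * np.pi) * (Y - ys) / r2

# Pressure coefficient and the grid location of its maximum.
cp = 1.0 - (u**2 + v**2) / u_inf**2
i_max, j_max = np.unravel_index(np.argmax(cp), cp.shape)
print(cp.max(), (i_max, j_max))
```

With the real airfoil data, the maximum of $C_p$ should sit at the stagnation point near the leading edge, where $C_p = 1$ in ideal flow.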
Make the following plots to visualize and inspect the resulting flow pattern:

* Stream lines in the domain and the profile of our NACA0012 airfoil, in one plot
* Distribution of the pressure coefficient and a single marker on the location of the maximum pressure

**Hint**: You might use the following NumPy functions: [`numpy.unravel_index`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.unravel_index.html) and [`numpy.argmax`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html)

##### Think

1. Do the stream lines look like you expected?
2. What does the distribution of pressure tell you about the lift generated by the airfoil?
3. Does the location of the point of maximum pressure seem right to you?

```
from IPython.core.display import HTML
def css_styling(filepath):
    styles = open(filepath, 'r').read()
    return HTML(styles)
css_styling('../styles/custom.css')
```
Copyright 2021 Google LLC. SPDX-License-Identifier: Apache-2.0 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # **Socratic Models: Image Captioning** Socratic Models (SMs) is a framework that composes multiple pre-existing foundation models (e.g., large language models, visual language models, audio-language models) to provide results for new multimodal tasks, without any model finetuning. This colab runs an example of SMs for image captioning. This is a reference implementation of one task demonstrated in the work: [Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language](https://socraticmodels.github.io/) **Disclaimer:** this colab uses CLIP and GPT-3 as foundation models, and may be subject to unwanted biases. This code should be used with caution (and checked for correctness) in downstream applications. ### **Quick Start:** **Step 1.** Register for an [OpenAI API key](https://openai.com/blog/openai-api/) to use GPT-3 (there's a free trial) and enter it below **Step 2.** Menu > Change runtime type > Hardware accelerator > "GPU" **Step 3.** Menu > Runtime > Run all ``` openai_api_key = "your-api-key" ``` ## **Setup** This installs a few dependencies: PyTorch, CLIP, GPT-3. ``` !pip install ftfy regex tqdm fvcore imageio imageio-ffmpeg openai pattern !pip install git+https://github.com/openai/CLIP.git !pip install -U --no-cache-dir gdown --pre !pip install profanity-filter !nvidia-smi # Show GPU info. 
import datetime import json import os import re import time import requests import clip import cv2 import matplotlib.pyplot as plt import numpy as np import openai from PIL import Image from profanity_filter import ProfanityFilter import torch openai.api_key = openai_api_key ``` ## **Foundation Models** Select which foundation models to use. **Defaults:** [CLIP](https://arxiv.org/abs/2103.00020) VIT-L/14 as the VLM, and [GPT-3](https://arxiv.org/abs/2005.14165) "Davinci" as the LM. ``` clip_version = "ViT-L/14" #@param ["RN50", "RN101", "RN50x4", "RN50x16", "RN50x64", "ViT-B/32", "ViT-B/16", "ViT-L/14"] {type:"string"} gpt_version = "text-davinci-002" #@param ["text-davinci-001", "text-davinci-002", "text-curie-001", "text-babbage-001", "text-ada-001"] {type:"string"} clip_feat_dim = {'RN50': 1024, 'RN101': 512, 'RN50x4': 640, 'RN50x16': 768, 'RN50x64': 1024, 'ViT-B/32': 512, 'ViT-B/16': 512, 'ViT-L/14': 768}[clip_version] ``` ## **Getting Started** Download CLIP model weights, and define helper functions. This might take a few minutes. ##### Download [CLIP](https://arxiv.org/abs/2103.00020) model weights. ``` # torch.cuda.set_per_process_memory_fraction(0.9, None) # Only needed if session crashes. model, preprocess = clip.load(clip_version) # clip.available_models() model.cuda().eval() def num_params(model): return np.sum([int(np.prod(p.shape)) for p in model.parameters()]) print("Model parameters (total):", num_params(model)) print("Model parameters (image encoder):", num_params(model.visual)) print("Model parameters (text encoder):", num_params(model.token_embedding) + num_params(model.transformer)) print("Input image resolution:", model.visual.input_resolution) print("Context length:", model.context_length) print("Vocab size:", model.vocab_size) img_size = model.visual.input_resolution ``` ##### Define CLIP helper functions (e.g., nearest neighbor search). 
``` def get_text_feats(in_text, batch_size=64): text_tokens = clip.tokenize(in_text).cuda() text_id = 0 text_feats = np.zeros((len(in_text), clip_feat_dim), dtype=np.float32) while text_id < len(text_tokens): # Batched inference. batch_size = min(len(in_text) - text_id, batch_size) text_batch = text_tokens[text_id:text_id+batch_size] with torch.no_grad(): batch_feats = model.encode_text(text_batch).float() batch_feats /= batch_feats.norm(dim=-1, keepdim=True) batch_feats = np.float32(batch_feats.cpu()) text_feats[text_id:text_id+batch_size, :] = batch_feats text_id += batch_size return text_feats def get_img_feats(img): img_pil = Image.fromarray(np.uint8(img)) img_in = preprocess(img_pil)[None, ...] with torch.no_grad(): img_feats = model.encode_image(img_in.cuda()).float() img_feats /= img_feats.norm(dim=-1, keepdim=True) img_feats = np.float32(img_feats.cpu()) return img_feats def get_nn_text(raw_texts, text_feats, img_feats): scores = text_feats @ img_feats.T scores = scores.squeeze() high_to_low_ids = np.argsort(scores).squeeze()[::-1] high_to_low_texts = [raw_texts[i] for i in high_to_low_ids] high_to_low_scores = np.sort(scores).squeeze()[::-1] return high_to_low_texts, high_to_low_scores ``` ##### Define [GPT-3](https://arxiv.org/abs/2005.14165) helper functions. ``` def prompt_llm(prompt, max_tokens=64, temperature=0, stop=None): response = openai.Completion.create(engine=gpt_version, prompt=prompt, max_tokens=max_tokens, temperature=temperature, stop=stop) return response["choices"][0]["text"].strip() ``` ##### Load scene categories from [Places365](http://places2.csail.mit.edu/download.html) and compute their CLIP features. ``` # Load scene categories from Places365. if not os.path.exists('categories_places365.txt'): ! 
wget https://raw.githubusercontent.com/zhoubolei/places_devkit/master/categories_places365.txt place_categories = np.loadtxt('categories_places365.txt', dtype=str) place_texts = [] for place in place_categories[:, 0]: place = place.split('/')[2:] if len(place) > 1: place = place[1] + ' ' + place[0] else: place = place[0] place = place.replace('_', ' ') place_texts.append(place) place_feats = get_text_feats([f'Photo of a {p}.' for p in place_texts]) ``` ##### Load object categories from [Tencent ML Images](https://arxiv.org/pdf/1901.01703.pdf) and compute their CLIP features. This might take a few minutes. ``` # Load object categories from Tencent ML Images. if not os.path.exists('dictionary_and_semantic_hierarchy.txt'): ! wget https://raw.githubusercontent.com/Tencent/tencent-ml-images/master/data/dictionary_and_semantic_hierarchy.txt with open('dictionary_and_semantic_hierarchy.txt') as fid: object_categories = fid.readlines() object_texts = [] pf = ProfanityFilter() for object_text in object_categories[1:]: object_text = object_text.strip() object_text = object_text.split('\t')[3] safe_list = '' for variant in object_text.split(','): text = variant.strip() if pf.is_clean(text): safe_list += f'{text}, ' safe_list = safe_list[:-2] if len(safe_list) > 0: object_texts.append(safe_list) object_texts = [o for o in list(set(object_texts)) if o not in place_texts] # Remove redundant categories. object_feats = get_text_feats([f'Photo of a {o}.' for o in object_texts]) ``` ## **Demo:** Image Captioning Run image captioning on an Internet image (linked via URL). **Note:** due to the non-zero temperature used for sampling from the generative language model, results from this approach are stochastic, but comparable results are producible. ``` # Download image. 
img_url = "https://github.com/rmokady/CLIP_prefix_caption/raw/main/Images/COCO_val2014_000000165547.jpg" #@param {type:"string"} fname = 'demo_img.png' with open(fname, 'wb') as f: f.write(requests.get(img_url).content) verbose = True #@param {type:"boolean"} # Load image. img = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2RGB) img_feats = get_img_feats(img) plt.imshow(img); plt.show() # Zero-shot VLM: classify image type. img_types = ['photo', 'cartoon', 'sketch', 'painting'] img_types_feats = get_text_feats([f'This is a {t}.' for t in img_types]) sorted_img_types, img_type_scores = get_nn_text(img_types, img_types_feats, img_feats) img_type = sorted_img_types[0] # Zero-shot VLM: classify number of people. ppl_texts = ['no people', 'people'] ppl_feats = get_text_feats([f'There are {p} in this photo.' for p in ppl_texts]) sorted_ppl_texts, ppl_scores = get_nn_text(ppl_texts, ppl_feats, img_feats) ppl_result = sorted_ppl_texts[0] if ppl_result == 'people': ppl_texts = ['is one person', 'are two people', 'are three people', 'are several people', 'are many people'] ppl_feats = get_text_feats([f'There {p} in this photo.' for p in ppl_texts]) sorted_ppl_texts, ppl_scores = get_nn_text(ppl_texts, ppl_feats, img_feats) ppl_result = sorted_ppl_texts[0] else: ppl_result = f'are {ppl_result}' # Zero-shot VLM: classify places. place_topk = 3 place_feats = get_text_feats([f'Photo of a {p}.' for p in place_texts ]) sorted_places, places_scores = get_nn_text(place_texts, place_feats, img_feats) # Zero-shot VLM: classify objects. obj_topk = 10 sorted_obj_texts, obj_scores = get_nn_text(object_texts, object_feats, img_feats) object_list = '' for i in range(obj_topk): object_list += f'{sorted_obj_texts[i]}, ' object_list = object_list[:-2] # Zero-shot LM: generate captions. num_captions = 10 prompt = f'''I am an intelligent image captioning bot. This image is a {img_type}. There {ppl_result}. 
I think this photo was taken at a {sorted_places[0]}, {sorted_places[1]}, or {sorted_places[2]}. I think there might be a {object_list} in this {img_type}. A creative short caption I can generate to describe this image is:''' caption_texts = [prompt_llm(prompt, temperature=0.9) for _ in range(num_captions)] # Zero-shot VLM: rank captions. caption_feats = get_text_feats(caption_texts) sorted_captions, caption_scores = get_nn_text(caption_texts, caption_feats, img_feats) print(f'{sorted_captions[0]}\n') if verbose: print(f'VLM: This image is a:') for img_type, score in zip(sorted_img_types, img_type_scores): print(f'{score:.4f} {img_type}') print(f'\nVLM: There:') for ppl_text, score in zip(sorted_ppl_texts, ppl_scores): print(f'{score:.4f} {ppl_text}') print(f'\nVLM: I think this photo was taken at a:') for place, score in zip(sorted_places[:place_topk], places_scores[:place_topk]): print(f'{score:.4f} {place}') print(f'\nVLM: I think there might be a:') for obj_text, score in zip(sorted_obj_texts[:obj_topk], obj_scores[:obj_topk]): print(f'{score:.4f} {obj_text}') print(f'\nLM generated captions ranked by VLM scores:') for caption, score in zip(sorted_captions, caption_scores): print(f'{score:.4f} {caption}') ```
## Trigger Word Detection

Welcome to the final programming assignment of this specialization!

In this week's videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wake-word detection). Trigger word detection is the technology that allows devices like Amazon Alexa, Google Home, Apple Siri, and Baidu DuerOS to wake up upon hearing a certain word.

For this exercise, our trigger word will be "Activate." Every time it hears you say "activate," it will make a "chiming" sound. By the end of this assignment, you will be able to record a clip of yourself talking, and have the algorithm trigger a chime when it detects you saying "activate."

After completing this assignment, perhaps you can also extend it to run on your laptop so that every time you say "activate" it starts up your favorite app, or turns on a network connected lamp in your house, or triggers some other event?

<img src="images/sound.png" style="width:1000px;height:150px;">

In this assignment you will learn to:
- Structure a speech recognition project
- Synthesize and process audio recordings to create train/dev datasets
- Train a trigger word detection model and make predictions

Let's get started! Run the following cell to load the packages you are going to use.

```
import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
from td_utils import *
%matplotlib inline
```

# 1 - Data synthesis: Creating a speech dataset

Let's start by building a dataset for your trigger word detection algorithm. A speech dataset should ideally be as close as possible to the application you will want to run it on. In this case, you'd like to detect the word "activate" in working environments (library, home, offices, open spaces ...).
You thus need to create recordings with a mix of positive words ("activate") and negative words (random words other than activate) on different background sounds. Let's see how you can create such a dataset.

## 1.1 - Listening to the data

One of your friends is helping you out on this project, and they've gone to libraries, cafes, restaurants, homes and offices all around the region to record background noises, as well as snippets of audio of people saying positive/negative words. This dataset includes people speaking in a variety of accents.

In the raw_data directory, you can find a subset of the raw audio files of the positive words, negative words, and background noise. You will use these audio files to synthesize a dataset to train the model. The "activate" directory contains positive examples of people saying the word "activate". The "negatives" directory contains negative examples of people saying random words other than "activate". There is one word per audio recording. The "backgrounds" directory contains 10 second clips of background noise in different environments.

Run the cells below to listen to some examples.

```
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")
```

You will use these three types of recordings (positives/negatives/backgrounds) to create a labelled dataset.

## 1.2 - From audio recordings to spectrograms

What really is an audio recording? A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound. You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone. We will use audio sampled at 44100 Hz (44100 Hertz). This means the microphone gives us 44100 numbers per second. Thus, a 10 second audio clip is represented by 441000 numbers (= $10 \times 44100$).
It is quite difficult to figure out from this "raw" representation of audio whether the word "activate" was said. In order to help your sequence model more easily learn to detect trigger words, we will compute a *spectrogram* of the audio. The spectrogram tells us how much different frequencies are present in an audio clip at a moment in time.

(If you've ever taken an advanced class on signal processing or on Fourier transforms, a spectrogram is computed by sliding a window over the raw audio signal and calculating the most active frequencies in each window using a Fourier transform. If you don't understand the previous sentence, don't worry about it.)

Let's see an example.

```
IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")
```

The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis).

<img src="images/spectrogram.png" style="width:500px;height:200px;">
<center> **Figure 1**: Spectrogram of an audio recording, where the color shows the degree to which different frequencies are present (loud) in the audio at different points in time. Green squares mean a certain frequency is more active or more present in the audio clip (louder); blue squares denote less active frequencies. </center>

The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input. In this notebook, we will be working with 10 second audio clips as the "standard length" for our training examples. The number of timesteps of the spectrogram will be 5511. You'll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$.
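The windowed-Fourier-transform idea can be sketched in a few lines of NumPy. This is an illustration of the concept only; `graph_spectrogram` uses its own window and overlap settings, and the window length, hop size, and test tone below are assumed values:

```python
import numpy as np

# A toy spectrogram: slide a window over the signal and take the magnitude of
# the FFT of each window.
fs = 44100                             # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)          # one second of audio
signal = np.sin(2 * np.pi * 440 * t)   # a pure 440 Hz tone

win, hop = 1024, 512                   # window length and hop size (assumed values)
n_windows = 1 + (len(signal) - win) // hop
spec = np.empty((win // 2 + 1, n_windows))
for j in range(n_windows):
    frame = signal[j * hop : j * hop + win] * np.hanning(win)  # taper the window edges
    spec[:, j] = np.abs(np.fft.rfft(frame))

# Each column is one time step; each row is one frequency bin.
peak_bin = spec[:, 0].argmax()
print(spec.shape, peak_bin * fs / win)  # the peak frequency should sit near 440 Hz
```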
```
from scipy.io import wavfile  # needed if td_utils does not already export wavfile

_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:,0].shape)
print("Time steps in input after spectrogram", x.shape)
```

Now, you can define:

```
Tx = 5511 # The number of time steps input to the model from the spectrogram
n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram
```

Note that even with 10 seconds being our default training example length, 10 seconds of time can be discretized into different numbers of values. You've seen 441000 (raw audio) and 5511 (spectrogram). In the former case, each step represents $10/441000 \approx 0.000023$ seconds. In the second case, each step represents $10/5511 \approx 0.0018$ seconds.

For the 10 sec of audio, the key values you will see in this assignment are:
- $441000$ (raw audio)
- $5511 = T_x$ (spectrogram output, and dimension of input to the neural network)
- $10000$ (used by the `pydub` module to synthesize audio)
- $1375 = T_y$ (the number of steps in the output of the GRU you'll build)

Note that each of these representations corresponds to exactly 10 seconds of time. It's just that they discretize it to different degrees. All of these are hyperparameters and can be changed (except the 441000, which is a function of the microphone). We have chosen values that are within the standard ranges used for speech systems.

Consider the $T_y = 1375$ number above. This means that for the output of the model, we discretize the 10 s into 1375 time intervals (each one of length $10/1375 \approx 0.0072$ s) and try to predict for each of these intervals whether someone recently finished saying "activate."

Consider also the 10000 number above. This corresponds to discretizing the 10 sec clip into $10/10000 = 0.001$ second intervals. 0.001 seconds is also called 1 millisecond, or 1 ms. So when we say we are discretizing according to 1 ms intervals, it means we are using 10,000 steps.
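The four discretizations above can be checked with a few lines of arithmetic; every representation covers the same 10 seconds, just at a different rate:

```python
# Seconds per step for each representation of the same 10-second clip.
clip_seconds = 10
steps = {"raw audio": 441000, "spectrogram (Tx)": 5511,
         "pydub (ms)": 10000, "model output (Ty)": 1375}
seconds_per_step = {name: clip_seconds / n for name, n in steps.items()}
for name, dt in seconds_per_step.items():
    print(f"{name}: {steps[name]} steps, {dt:.6f} s per step")
```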
```
Ty = 1375 # The number of time steps in the output of our model
```

## 1.3 - Generating a single training example

Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds. It is quite slow to record lots of 10 second audio clips with random "activates" in them. Instead, it is easier to record lots of positive and negative words, and record background noise separately (or download background noise from free online sources).

To synthesize a single training example, you will:

- Pick a random 10 second background audio clip
- Randomly insert 0-4 audio clips of "activate" into this 10sec clip
- Randomly insert 0-2 audio clips of negative words into this 10sec clip

Because you synthesized the word "activate" into the background clip, you know exactly when in the 10sec clip the "activate" makes its appearance. You'll see later that this makes it easier to generate the labels $y^{\langle t \rangle}$ as well.

You will use the pydub package to manipulate audio. Pydub converts raw audio files into lists of Pydub data structures (it is not important to know the details here). Pydub uses 1ms as the discretization interval (1ms is 1 millisecond = 1/1000 seconds), which is why a 10sec clip is always represented using 10,000 steps.
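Once the audio itself is abstracted away, the three bullet steps above are pure bookkeeping over (start, end) times in milliseconds. Here is a plain-Python sketch of that bookkeeping; `random_segment` and `overlaps` are simplified stand-ins for the helper functions you will implement below, not the graded code:

```python
import numpy as np

def random_segment(duration_ms, total_ms=10000):
    """Pick a random (start, end) window of the given duration inside the clip."""
    start = np.random.randint(0, total_ms - duration_ms)
    return (start, start + duration_ms - 1)

def overlaps(segment, existing):
    s, e = segment
    return any(s <= prev_end and e >= prev_start for prev_start, prev_end in existing)

np.random.seed(4)
segments = []
for duration in [900, 1100, 700]:      # e.g. two word clips of ~1 sec and one shorter one
    seg = random_segment(duration)
    while overlaps(seg, segments):     # re-draw until the clip fits without overlap
        seg = random_segment(duration)
    segments.append(seg)
print(segments)
```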
``` # Load audio segments using pydub activates, negatives, backgrounds = load_raw_audio() print("background len: " + str(len(backgrounds[0]))) # Should be 10,000, since it is a 10 sec clip print("activate[0] len: " + str(len(activates[0]))) # Maybe around 1000, since an "activate" audio clip is usually around 1 sec (but varies a lot) print("activate[1] len: " + str(len(activates[1]))) # Different "activate" clips can have different lengths ``` **Overlaying positive/negative words on the background**: Given a 10sec background clip and a short audio clip (positive or negative word), you need to be able to "add" or "insert" the word's short audio clip onto the background. To ensure audio segments inserted onto the background do not overlap, you will keep track of the times of previously inserted audio clips. You will be inserting multiple clips of positive/negative words onto the background, and you don't want to insert an "activate" or a random word somewhere that overlaps with another clip you had previously added. For clarity, when you insert a 1sec "activate" onto a 10sec clip of cafe noise, you end up with a 10sec clip that sounds like someone saying "activate" in a cafe, with "activate" superimposed on the background cafe noise. You do *not* end up with an 11 sec clip. You'll see later how pydub allows you to do this. **Creating the labels at the same time you overlay**: Recall also that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying "activate." Given a background clip, we can initialize $y^{\langle t \rangle}=0$ for all $t$, since the clip doesn't contain any "activates." When you insert or overlay an "activate" clip, you will also update labels for $y^{\langle t \rangle}$, so that 50 steps of the output now have target label 1. You will train a GRU to detect when someone has *finished* saying "activate". 
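The 50-step label update just described can be sketched with numpy slicing (with $T_y = 1375$ as defined earlier). The function name here is made up for illustration; the graded `insert_ones` below uses a loop instead, and numpy's automatic slice clipping replaces its explicit bounds check:

```python
import numpy as np

Ty = 1375  # number of output steps

def label_window(y, segment_end_ms):
    # Convert the end time from the 10,000-step (ms) scale to the 1,375-step scale
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
    # Label the 50 steps strictly after the word ends; slices clip at the array end
    y[0, segment_end_y + 1 : segment_end_y + 51] = 1
    return y

y = label_window(np.zeros((1, Ty)), 5000)  # "activate" ends at the 5 sec mark
print(int(y.sum()), int(y[0, 687]), int(y[0, 688]))  # 50 0 1
```

With the word ending at 5000 ms, `segment_end_y` is 687, so steps 688 through 737 get label 1 — exactly the example worked through in the next paragraph.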
For example, suppose the synthesized "activate" clip ends at the 5sec mark in the 10sec audio---exactly halfway into the clip. Recall that $T_y = 1375$, so timestep $687 = $ `int(1375*0.5)` corresponds to the moment at 5sec into the audio. So, you will set $y^{\langle 688 \rangle} = 1$. Further, you would be quite satisfied if the GRU detects "activate" anywhere within a short time-interval after this moment, so we actually set 50 consecutive values of the label $y^{\langle t \rangle}$ to 1. Specifically, we have $y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1$.

This is another reason for synthesizing the training data: It's relatively straightforward to generate these labels $y^{\langle t \rangle}$ as described above. In contrast, if you have 10sec of audio recorded on a microphone, it's quite time consuming for a person to listen to it and mark manually exactly when "activate" finished.

Here's a figure illustrating the labels $y^{\langle t \rangle}$, for a clip into which we have inserted "activate", "innocent", "activate", "baby". Note that the positive labels "1" are associated only with the positive words.

<img src="images/label_diagram.png" style="width:500px;height:200px;">
<center> **Figure 2** </center>

To implement the training set synthesis process, you will use the following helper functions. All of these functions will use a 1ms discretization interval, so the 10sec of audio is always discretized into 10,000 steps.

1. `get_random_time_segment(segment_ms)` gets a random time segment in our background audio
2. `is_overlapping(segment_time, existing_segments)` checks if a time segment overlaps with existing segments
3. `insert_audio_clip(background, audio_clip, existing_times)` inserts an audio segment at a random time in our background audio using `get_random_time_segment` and `is_overlapping`
4.
`insert_ones(y, segment_end_ms)` inserts 1's into our label vector y after the word "activate" The function `get_random_time_segment(segment_ms)` returns a random time segment onto which we can insert an audio clip of duration `segment_ms`. Read through the code to make sure you understand what it is doing. ``` def get_random_time_segment(segment_ms): """ Gets a random time segment of duration segment_ms in a 10,000 ms audio clip. Arguments: segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds") Returns: segment_time -- a tuple of (segment_start, segment_end) in ms """ segment_start = np.random.randint(low=0, high=10000-segment_ms) # Make sure segment doesn't run past the 10sec background segment_end = segment_start + segment_ms - 1 return (segment_start, segment_end) ``` Next, suppose you have inserted audio clips at segments (1000,1800) and (3400,4500). I.e., the first segment starts at step 1000, and ends at step 1800. Now, if we are considering inserting a new audio clip at (3000,3600) does this overlap with one of the previously inserted segments? In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here. For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200. However, (100,199) and (200,250) are non-overlapping. **Exercise**: Implement `is_overlapping(segment_time, existing_segments)` to check if a new time segment overlaps with any of the previous segments. You will need to carry out 2 steps: 1. Create a "False" flag, that you will later set to "True" if you find that there is an overlap. 2. Loop over the previous_segments' start and end times. Compare these times to the segment's start and end times. If there is an overlap, set the flag defined in (1) as True. You can use: ```python for ....: if ... <= ... and ... >= ...: ... 
```

Hint: There is overlap if the segment starts before the previous segment ends, and the segment ends after the previous segment starts.

```
# GRADED FUNCTION: is_overlapping

def is_overlapping(segment_time, previous_segments):
    """
    Checks if the time of a segment overlaps with the times of existing segments.

    Arguments:
    segment_time -- a tuple of (segment_start, segment_end) for the new segment
    previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments

    Returns:
    True if the time segment overlaps with any of the existing segments, False otherwise
    """
    segment_start, segment_end = segment_time

    ### START CODE HERE ### (≈ 4 lines)
    # Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
    overlap = False

    # Step 2: loop over the previous_segments start and end times.
    # Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            overlap = True
    ### END CODE HERE ###

    return overlap

overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)
print("Overlap 2 = ", overlap2)
```

**Expected Output**:

<table> <tr> <td> **Overlap 1** </td> <td> False </td> </tr> <tr> <td> **Overlap 2** </td> <td> True </td> </tr> </table>

Now, let's use the previous helper functions to insert a new audio clip onto the 10sec background at a random time, but making sure that any newly inserted segment doesn't overlap with the previous segments.

**Exercise**: Implement `insert_audio_clip()` to overlay an audio clip onto the background 10sec clip. You will need to carry out 4 steps:

1. Get a random time segment of the right duration in ms.
2. Make sure that the time segment does not overlap with any of the previous time segments. If it is overlapping, then go back to step 1 and pick a new time segment.
3. Add the new time segment to the list of existing time segments, so as to keep track of all the segments you've inserted. 4. Overlay the audio clip over the background using pydub. We have implemented this for you. ``` # GRADED FUNCTION: insert_audio_clip def insert_audio_clip(background, audio_clip, previous_segments): """ Insert a new audio segment over the background noise at a random time step, ensuring that the audio segment does not overlap with existing segments. Arguments: background -- a 10 second background audio recording. audio_clip -- the audio clip to be inserted/overlaid. previous_segments -- times where audio segments have already been placed Returns: new_background -- the updated background audio """ # Get the duration of the audio clip in ms segment_ms = len(audio_clip) ### START CODE HERE ### # Step 1: Use one of the helper functions to pick a random time segment onto which to insert # the new audio clip. (≈ 1 line) segment_time = get_random_time_segment(segment_ms) # Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep # picking new segment_time at random until it doesn't overlap. 
(≈ 2 lines) while is_overlapping(segment_time, previous_segments): segment_time = get_random_time_segment(segment_ms) # Step 3: Add the new segment_time to the list of previous_segments (≈ 1 line) previous_segments.append(segment_time) ### END CODE HERE ### # Step 4: Superpose audio segment and background new_background = background.overlay(audio_clip, position = segment_time[0]) return new_background, segment_time np.random.seed(5) audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)]) audio_clip.export("insert_test.wav", format="wav") print("Segment Time: ", segment_time) IPython.display.Audio("insert_test.wav") ``` **Expected Output** <table> <tr> <td> **Segment Time** </td> <td> (2254, 3169) </td> </tr> </table> ``` # Expected audio IPython.display.Audio("audio_examples/insert_reference.wav") ``` Finally, implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an "activate." In the code below, `y` is a `(1,1375)` dimensional vector, since $T_y = 1375$. If the "activate" ended at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ as well as for up to 49 additional consecutive values. However, make sure you don't run off the end of the array and try to update `y[0][1375]`, since the valid indices are `y[0][0]` through `y[0][1374]` because $T_y = 1375$. So if "activate" ends at step 1370, you would get only `y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1` **Exercise**: Implement `insert_ones()`. You can use a for loop. (If you are an expert in python's slice operations, feel free also to use slicing to vectorize this.) If a segment ends at `segment_end_ms` (using a 10000 step discretization), to convert it to the indexing for the outputs $y$ (using a $1375$ step discretization), we will use this formula: ``` segment_end_y = int(segment_end_ms * Ty / 10000.0) ``` ``` # GRADED FUNCTION: insert_ones def insert_ones(y, segment_end_ms): """ Update the label vector y. 
    The labels of the 50 output steps strictly after the end of the segment should be set to 1.
    By strictly we mean that the label of segment_end_y should be 0, while the 50 following labels should be ones.

    Arguments:
    y -- numpy array of shape (1, Ty), the labels of the training example
    segment_end_ms -- the end time of the segment in ms

    Returns:
    y -- updated labels
    """

    # duration of the background (in terms of spectrogram time-steps)
    segment_end_y = int(segment_end_ms * Ty / 10000.0)

    # Add 1 to the correct index in the background label (y)
    ### START CODE HERE ### (≈ 3 lines)
    for i in range(segment_end_y + 1, segment_end_y + 51):
        if i < Ty:
            y[0, i] = 1
    ### END CODE HERE ###

    return y

arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0,:])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])
```

**Expected Output**

<table> <tr> <td> **sanity checks**: </td> <td> 0.0 1.0 0.0 </td> </tr> </table>

<img src="images/ones_reference.png" style="width:320;height:240px;">

Finally, you can use `insert_audio_clip` and `insert_ones` to create a new training example.

**Exercise**: Implement `create_training_example()`. You will need to carry out the following steps:

1. Initialize the label vector $y$ as a numpy array of zeros and shape $(1, T_y)$.
2. Initialize the set of existing segments to an empty list.
3. Randomly select 0 to 4 "activate" audio clips, and insert them onto the 10sec clip. Also insert labels at the correct position in the label vector $y$.
4. Randomly select 0 to 2 negative audio clips, and insert them into the 10sec clip.

```
# GRADED FUNCTION: create_training_example

def create_training_example(background, activates, negatives):
    """
    Creates a training example with a given background, activates, and negatives.
Arguments: background -- a 10 second background audio recording activates -- a list of audio segments of the word "activate" negatives -- a list of audio segments of random words that are not "activate" Returns: x -- the spectrogram of the training example y -- the label at each time step of the spectrogram """ # Set the random seed np.random.seed(18) # Make background quieter background = background - 20 ### START CODE HERE ### # Step 1: Initialize y (label vector) of zeros (≈ 1 line) y = np.zeros((1, Ty)) # Step 2: Initialize segment times as empty list (≈ 1 line) previous_segments = [] ### END CODE HERE ### # Select 0-4 random "activate" audio clips from the entire list of "activates" recordings number_of_activates = np.random.randint(0, 5) random_indices = np.random.randint(len(activates), size=number_of_activates) random_activates = [activates[i] for i in random_indices] ### START CODE HERE ### (≈ 3 lines) # Step 3: Loop over randomly selected "activate" clips and insert in background for random_activate in random_activates: # Insert the audio clip on the background background, segment_time = insert_audio_clip(background, random_activate, previous_segments) # Retrieve segment_start and segment_end from segment_time segment_start, segment_end = segment_time # Insert labels in "y" y = insert_ones(y, segment_end_ms=segment_end) ### END CODE HERE ### # Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings number_of_negatives = np.random.randint(0, 3) random_indices = np.random.randint(len(negatives), size=number_of_negatives) random_negatives = [negatives[i] for i in random_indices] ### START CODE HERE ### (≈ 2 lines) # Step 4: Loop over randomly selected negative clips and insert in background for random_negative in random_negatives: # Insert the audio clip on the background background, _ = insert_audio_clip(background, random_negative, previous_segments) ### END CODE HERE ### # Standardize the volume of the audio clip 
background = match_target_amplitude(background, -20.0) # Export new training example file_handle = background.export("train" + ".wav", format="wav") print("File (train.wav) was saved in your directory.") # Get and plot spectrogram of the new recording (background with superposition of positive and negatives) x = graph_spectrogram("train.wav") return x, y x, y = create_training_example(backgrounds[0], activates, negatives) ``` **Expected Output** <img src="images/train_reference.png" style="width:320;height:240px;"> Now you can listen to the training example you created and compare it to the spectrogram generated above. ``` IPython.display.Audio("train.wav") ``` **Expected Output** ``` IPython.display.Audio("audio_examples/train_reference.wav") ``` Finally, you can plot the associated labels for the generated training example. ``` plt.plot(y[0]) ``` **Expected Output** <img src="images/train_label.png" style="width:320;height:240px;"> ## 1.4 - Full training set You've now implemented the code needed to generate a single training example. We used this process to generate a large training set. To save time, we've already generated a set of training examples. ``` # Load preprocessed training examples X = np.load("./XY_train/X.npy") Y = np.load("./XY_train/Y.npy") ``` ## 1.5 - Development set To test our model, we recorded a development set of 25 examples. While our training data is synthesized, we want to create a development set using the same distribution as the real inputs. Thus, we recorded 25 10-second audio clips of people saying "activate" and other random words, and labeled them by hand. This follows the principle described in Course 3 that we should create the dev set to be as similar as possible to the test set distribution; that's why our dev set uses real rather than synthesized audio. 
```
# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")
```

# 2 - Model

Now that you've built a dataset, let's write and train a trigger word detection model!

The model will use 1-D convolutional layers, GRU layers, and dense layers. Let's load the packages that will allow you to use these layers in Keras. This might take a minute to load.

```
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
```

## 2.1 - Build the model

Here is the architecture we will use. Take some time to look over the model and see if it makes sense.

<img src="images/model.png" style="width:600px;height:600px;">
<center> **Figure 3** </center>

One key step of this model is the 1D convolutional step (near the bottom of Figure 3). It inputs the 5511 step spectrogram, and outputs a 1375 step output, which is then further processed by multiple layers to get the final $T_y = 1375$ step output. This layer plays a role similar to the 2D convolutions you saw in Course 4, of extracting low-level features and then possibly generating an output of a smaller dimension. Computationally, the 1-D conv layer also helps speed up the model because now the GRU has to process only 1375 timesteps rather than 5511 timesteps. The two GRU layers read the sequence of inputs from left to right, and a dense+sigmoid layer then makes a prediction for $y^{\langle t \rangle}$. Because $y$ is binary valued (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said "activate."

Note that we use a uni-directional RNN rather than a bi-directional RNN.
This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said. If we used a bi-directional RNN, we would have to wait for the whole 10sec of audio to be recorded before we could tell if "activate" was said in the first second of the audio clip.

Implementing the model can be done in four steps:

**Step 1**: CONV layer. Use `Conv1D()` to implement this, with 196 filters, a filter size of 15 (`kernel_size=15`), and stride of 4. [[See documentation.](https://keras.io/layers/convolutional/#conv1d)]

**Step 2**: First GRU layer. To generate the GRU layer, use:
```
X = GRU(units = 128, return_sequences = True)(X)
```
Setting `return_sequences=True` ensures that all the GRU's hidden states are fed to the next layer. Remember to follow this with Dropout and BatchNorm layers.

**Step 3**: Second GRU layer. This is similar to the previous GRU layer (remember to use `return_sequences=True`), but has an extra dropout layer.

**Step 4**: Create a time-distributed dense layer as follows:
```
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)
```
This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step. [[See documentation](https://keras.io/layers/wrappers/).]

**Exercise**: Implement `model()` using the architecture presented in Figure 3.

```
# GRADED FUNCTION: model

def model(input_shape):
    """
    Function creating the model's graph in Keras.
Argument: input_shape -- shape of the model's input data (using Keras conventions) Returns: model -- Keras model instance """ X_input = Input(shape = input_shape) ### START CODE HERE ### # Step 1: CONV layer (≈4 lines) X = Conv1D(196, 15, strides=4)(X_input) # CONV1D X = BatchNormalization()(X) # Batch normalization X = Activation('relu')(X) # ReLu activation X = Dropout(0.8)(X) # dropout (use 0.8) # Step 2: First GRU Layer (≈4 lines) X = GRU(units = 128, return_sequences=True)(X) # GRU (use 128 units and return the sequences) X = Dropout(0.8)(X) # dropout (use 0.8) X = BatchNormalization()(X) # Batch normalization # Step 3: Second GRU Layer (≈4 lines) X = GRU(units = 128, return_sequences=True)(X) # GRU (use 128 units and return the sequences) X = Dropout(0.8)(X) # dropout (use 0.8) X = BatchNormalization()(X) # Batch normalization X = Dropout(0.8)(X) # dropout (use 0.8) # Step 4: Time-distributed dense layer (≈1 line) X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid) ### END CODE HERE ### model = Model(inputs = X_input, outputs = X) return model model = model(input_shape = (Tx, n_freq)) ``` Let's print the model summary to keep track of the shapes. ``` model.summary() ``` **Expected Output**: <table> <tr> <td> **Total params** </td> <td> 522,561 </td> </tr> <tr> <td> **Trainable params** </td> <td> 521,657 </td> </tr> <tr> <td> **Non-trainable params** </td> <td> 904 </td> </tr> </table> The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 at spectrogram to 1375. ## 2.2 - Fit the model Trigger word detection takes a long time to train. To save time, we've already trained a model for about 3 hours on a GPU using the architecture you built above, and a large training set of about 4000 examples. Let's load the model. 
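Before loading it, you can sanity-check the 5511 → 1375 reduction reported by `model.summary()` above: it follows directly from the output-length formula for a convolution with `padding='valid'` (the Keras `Conv1D` default). The helper name below is made up for illustration:

```python
def conv1d_output_steps(n_steps, kernel_size, stride):
    # 'valid' convolution output length: floor((n - k) / s) + 1
    return (n_steps - kernel_size) // stride + 1

print(conv1d_output_steps(5511, 15, 4))  # 1375, exactly the Ty the GRU layers see
```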
```
model = load_model('./models/tr_model.h5')
```

You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples.

```
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=1)
```

## 2.3 - Test the model

Finally, let's see how your model performs on the dev set.

```
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)
```

This looks pretty good! However, accuracy isn't a great metric for this task, since the labels are heavily skewed to 0's, so a neural network that just outputs 0's would get slightly over 90% accuracy. We could define more useful metrics such as F1 score or Precision/Recall. But let's not bother with that here, and instead just empirically see how the model does.

# 3 - Making Predictions

Now that you have built a working model for trigger word detection, let's use it to make predictions. This code snippet runs audio (saved in a wav file) through the network.

<!-- can use your model to make predictions on new audio clips. You will first need to compute the predictions for an input audio clip. **Exercise**: Implement predict_activates(). You will need to do the following: 1. Compute the spectrogram for the audio file 2. Use `np.swapaxes` and `np.expand_dims` to reshape your input to size (1, Tx, n_freqs) 3.
Use forward propagation on your model to compute the prediction at each output step !-->

```
def detect_triggerword(filename):
    plt.subplot(2, 1, 1)

    x = graph_spectrogram(filename)
    # the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) to input into the model
    x = x.swapaxes(0,1)
    x = np.expand_dims(x, axis=0)
    predictions = model.predict(x)

    plt.subplot(2, 1, 2)
    plt.plot(predictions[0,:,0])
    plt.ylabel('probability')
    plt.show()
    return predictions
```

Once you've estimated the probability of having detected the word "activate" at each output step, you can trigger a "chiming" sound to play when the probability is above a certain threshold. Further, $y^{\langle t \rangle}$ might be near 1 for many values in a row after "activate" is said, yet we want to chime only once. So we will insert a chime sound at most once every 75 output steps. This will help prevent us from inserting two chimes for a single instance of "activate". (This plays a role similar to non-max suppression from computer vision.)

<!-- **Exercise**: Implement chime_on_activate(). You will need to do the following: 1. Loop over the predicted probabilities at each output step 2.
When the prediction is larger than the threshold and more than 75 consecutive time steps have passed, insert a "chime" sound onto the original audio clip.

Use this code to convert from the 1,375 step discretization to the 10,000 step discretization and insert a "chime" using pydub:

` audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000) ` !-->

```
chime_file = "audio_examples/chime.wav"

def chime_on_activate(filename, predictions, threshold):
    audio_clip = AudioSegment.from_wav(filename)
    chime = AudioSegment.from_wav(chime_file)
    Ty = predictions.shape[1]
    # Step 1: Initialize the number of consecutive output steps to 0
    consecutive_timesteps = 0
    # Step 2: Loop over the output steps in the y
    for i in range(Ty):
        # Step 3: Increment consecutive output steps
        consecutive_timesteps += 1
        # Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
        if predictions[0,i,0] > threshold and consecutive_timesteps > 75:
            # Step 5: Superpose audio and background using pydub
            audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000)
            # Step 6: Reset consecutive output steps to 0
            consecutive_timesteps = 0

    audio_clip.export("chime_output.wav", format='wav')
```

## 3.3 - Test on dev examples

Let's explore how our model performs on two unseen audio clips from the development set. Let's first listen to the two dev set clips.

```
IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")
```

Now let's run the model on these audio clips and see if it adds a chime after "activate"!
```
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")

filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
```

# Congratulations

You've come to the end of this assignment!

Here's what you should remember:

- Data synthesis is an effective way to create a large training set for speech problems, specifically trigger word detection.
- Using a spectrogram and optionally a 1D conv layer is a common pre-processing step prior to passing audio data to an RNN, GRU or LSTM.
- An end-to-end deep learning approach can be used to build a very effective trigger word detection system.

*Congratulations* on finishing the final assignment!

Thank you for sticking with us through the end and for all the hard work you've put into learning deep learning. We hope you have enjoyed the course!

# 4 - Try your own example! (OPTIONAL/UNGRADED)

In this optional and ungraded portion of this notebook, you can try your model on your own audio clips!

Record a 10 second audio clip of you saying the word "activate" and other random words, and upload it to the Coursera hub as `myaudio.wav`. Be sure to upload the audio as a wav file. If your audio is recorded in a different format (such as mp3) there is free software that you can find online for converting it to wav. If your audio recording is not 10 seconds, the code below will either trim or pad it as needed to make it 10 seconds.
``` # Preprocess the audio to the correct format def preprocess_audio(filename): # Trim or pad audio segment to 10000ms padding = AudioSegment.silent(duration=10000) segment = AudioSegment.from_wav(filename)[:10000] segment = padding.overlay(segment) # Set frame rate to 44100 segment = segment.set_frame_rate(44100) # Export as wav segment.export(filename, format='wav') ``` Once you've uploaded your audio file to Coursera, put the path to your file in the variable below. ``` your_filename = "audio_examples/my_audio.wav" preprocess_audio(your_filename) IPython.display.Audio(your_filename) # listen to the audio you uploaded ``` Finally, use the model to predict when you say activate in the 10 second audio clip, and trigger a chime. If beeps are not being added appropriately, try to adjust the chime_threshold. ``` chime_threshold = 0.5 prediction = detect_triggerword(your_filename) chime_on_activate(your_filename, prediction, chime_threshold) IPython.display.Audio("./chime_output.wav") ```
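The trim-or-pad step in `preprocess_audio` above is easy to see on a raw sample array. This numpy stand-in (with a made-up helper name) mirrors in samples what the pydub code does in milliseconds:

```python
import numpy as np

def trim_or_pad(samples, target_len):
    """Clip to target_len samples, or right-pad with zeros (silence)."""
    out = np.zeros(target_len, dtype=samples.dtype)
    n = min(len(samples), target_len)
    out[:n] = samples[:n]
    return out

print(len(trim_or_pad(np.ones(12000), 10000)))  # 10000 (trimmed)
print(len(trim_or_pad(np.ones(7000), 10000)))   # 10000 (padded with 3000 zeros)
```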
# Module - 2: Data visualization and Technical Analysis

###### Loading required libraries

```
import pandas as pd # data loading tool
import matplotlib.pyplot as plt # plotting tool
import seaborn as sns
import numpy as np
```

## 2.1 Loading dataset and changing the Date format

```
mod2_data = pd.read_csv('week2.csv')
del mod2_data['Unnamed: 0'] # deleting Unnamed column
mod2_data.Date = pd.to_datetime(mod2_data['Date'])
mod2_data = mod2_data.set_index('Date')
print(mod2_data.index.dtype == "datetime64[ns]")
mod2_data.head()

fig, ax = plt.subplots(figsize=(10, 6))
sns.lineplot(data=mod2_data, x=mod2_data.index, y='Close Price', ax=ax)
plt.tight_layout()
plt.show()
```

###### News:

1. Between 2017-10 and 2018-02. ***Tata Power Company (TPCL) has posted an improved performance in 1QFY18 with its consolidated net profit rising by 126% YoY to Rs1.64bn (vs. Rs0.72bn in 1QFY17) due to strong performance by the coal subsidiaries, renewable business and better operational performance. Notably, renewable business generated Rs1.09bn PAT in 1QFY18 compared to Rs0.26bn in 1QFY17. Consolidated revenue rose by 2% YoY to Rs67.2bn mainly due to improved revenue from Welspun Renewable Energy (WREPL).*** Source: https://www.moneycontrol.com/news/business/stocks/buy-tata-power-target-of-rs-88-reliance-securities-2370965.html

2. Between 2018-02 and 2018-11. ***Tata Power Company's third quarter consolidated profit is expected to fall 22 percent to Rs 466 crore compared to Rs 599 crore in year-ago quarter. Revenue from operations may grow 11 percent to Rs 7,445 crore compared to Rs 6,684 crore in same quarter last fiscal, according to average of estimates of analysts polled by CNBC-TV18.
Operating profit is likely to increase 15 percent year-on-year to Rs 1,611 crore and margin may expand 70 basis points to 21.6 percent in Q3. Year-on-year profit comparison may not be valid due to (1) higher interest cost in Q3FY18 to fund renewable asset acquisition and (2) tax reversal in Q3FY17, despite stable operations in the core distribution business. Analysts expect generation volumes to remain sluggish and realisations to remain flattish. They further expect coal business and renewable business to maintain strong momentum. More than numbers, the Street will watch out for restructuring news. Tata Power had guided for simplification of group structure in FY18 at the beginning of the year.*** Source: https://www.moneycontrol.com/news/business/earnings/tata-power-q3-profit-seen-down-22-generation-volumes-may-remain-sluggish-2507829.html

3. Between 2018-10 and 2019-01. ***Tata Power, HPCL join hands to set up EV charging stations*** Source: https://www.moneycontrol.com/news/india/tata-power-hpcl-join-hands-to-set-up-ev-charging-stations-2991981.html

4. Between 2019-01 and 2019-03. ***Fuel cost of the company rose to Rs 3,189.87 crore from Rs 2,491.24 crore in the year-ago period. Similarly, the finance cost rose to Rs 1,013.96 crore from Rs 855.28 crore a year ago.*** Source: https://www.moneycontrol.com/news/business/tata-power-q3-profit-plunges-67-to-rs-205-cr-in-q3-3445841.html

5. After 2019-04. ***Tata Power Q4 net drops 92% to Rs 107.32 cr; declares dividend of Rs 1.30/share*** Source: https://www.moneycontrol.com/news/business/tata-power-q4-net-drops-92-to-rs-107-32-cr-declares-dividend-of-rs-1-30share-3924591.html

## 2.2 Stem plot

```
fig, ax = plt.subplots(figsize=(10, 6))
ax.stem(mod2_data.index, mod2_data.Day_Perc_Change, 'g', label='Percent Change')
plt.tight_layout()
plt.legend()
plt.show()
```

## 2.3 Daily volume and comparison with the % change stem plot

```
volume_scaled = mod2_data['No. of Trades'] - mod2_data['No. of Trades'].min()
volume_scaled = volume_scaled/volume_scaled.max()*mod2_data.Day_Perc_Change.max()

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(mod2_data.index, volume_scaled, label='Volume')
ax.set_xlabel('Date')
plt.legend(loc=2)
plt.tight_layout()
plt.show()

fig, ax = plt.subplots(figsize=(10, 6))
ax.stem(mod2_data.index, mod2_data.Day_Perc_Change, 'g', label='Percent Change')
ax.plot(mod2_data.index, volume_scaled, 'k', label='Volume')
ax.set_xlabel('Date')
plt.legend(loc=2)
plt.tight_layout()
plt.show()
```

###### Relationship between volume and daily percentage change

As the volume increases, the percentage change becomes positive, and vice versa.

## 2.4 Pie chart and Bar plot

```
gridsize = (2, 6)
fig = plt.figure(figsize=(14, 10))
ax1 = plt.subplot2grid(gridsize, (0, 0), colspan=2, rowspan=1)
ax2 = plt.subplot2grid(gridsize, (0, 3), colspan=3)
ax3 = plt.subplot2grid(gridsize, (1, 0), colspan=6)

mod2_data['ones'] = np.ones((mod2_data.shape[0]))
sums = mod2_data.ones.groupby(mod2_data.Trend).sum()
explod = [0.2, 0.2, 0.5, 0, 0, 0, 0, 0, 0]
ax1.pie(sums, labels=sums.index, autopct='%1.1f%%', explode=explod)
ax1.title.set_text('Trend')
mod2_data = mod2_data.drop(['ones'], axis=1)

bard1 = mod2_data[['Trend', 'Total Traded Quantity']].groupby(['Trend'], as_index=False).mean()
bar1 = sns.barplot("Trend", 'Total Traded Quantity', data=bard1, ci=None, ax=ax2)
for item in bar1.get_xticklabels():
    item.set_rotation(45)
ax2.set_ylabel('')
ax2.title.set_text('Trend to mean of Total Traded Quantity')

bard2 = mod2_data[['Trend', 'Total Traded Quantity']].groupby(['Trend'], as_index=False).median()
bar2 = sns.barplot("Trend", 'Total Traded Quantity', data=bard2, ci=None, ax=ax3)
for item in bar2.get_xticklabels():
    item.set_rotation(45)
ax3.set_ylabel('')
ax3.title.set_text('Trend to median of Total Traded Quantity')

plt.tight_layout()
plt.show()
```

## 2.5 Daily returns

```
fig, ax = plt.subplots(figsize=(10, 6))
ax.hist(mod2_data.Day_Perc_Change, bins=50)
ax.set_xlabel('Percent Change')  # daily returns on the x-axis, counts on the y-axis
plt.show()
```

## 2.6 Correlation

```
five_stocks = ['AMARAJABAT.csv', 'CUMMINSIND.csv', 'JINDALSTEL.csv', 'MRPL.csv', 'VOLTAS.csv']
dfs = {}
for i in five_stocks:
    stock = i.split('.')[0]
    temp_df = pd.read_csv(i)
    temp_df = temp_df[temp_df["Series"] == "EQ"]
    temp_df['Day_Perc_Change'] = temp_df['Close Price'].pct_change() * 100
    temp_df = temp_df['Day_Perc_Change']
    temp_df = temp_df.drop(temp_df.index[0])  # drop the NaN produced by pct_change
    dfs[stock] = temp_df
dfs = pd.DataFrame(dfs)

sns.pairplot(dfs)
plt.show()
```

There is almost no correlation among the stocks, which is a good thing: to profit from the stock market, each company's trend should be independent of the other stocks' trends.

## 2.7 Volatility

```
rolling1 = dfs.rolling(7).std()
rolling1 = rolling1.dropna()

fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(np.arange(len(rolling1.VOLTAS)), rolling1.VOLTAS, 'k')
plt.title('VOLTAS Volatility')
plt.show()
```

## 2.8 Comparing with Nifty50

```
nifty = pd.read_csv('Nifty50.csv')
nifty['Day_Perc_Change'] = nifty['Close'].pct_change() * 100
rolling2 = nifty['Day_Perc_Change'].rolling(7).std()
rolling2 = rolling2.dropna()

fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(np.arange(len(rolling1.VOLTAS)), rolling1.VOLTAS, label='VOLTAS')
ax.plot(np.arange(len(rolling2)), rolling2, 'k', label='Nifty')
plt.legend()
plt.tight_layout()
plt.show()
```

## 2.9 Trade calls

```
nifty['roll21'] = nifty['Close'].rolling(21).mean()
nifty['roll34'] = nifty['Close'].rolling(34).mean()
nifty = nifty.dropna()
nifty.Date = pd.to_datetime(nifty['Date'])

fig, ax = plt.subplots(figsize=(15, 7))

def cross(values):
    """Return 1 wherever the boolean series flips, 0 elsewhere."""
    l = []
    were = values[0]
    for ele in values:
        l.append(0 if were == ele else 1)
        were = ele
    return l

nifty['buy'] = nifty['roll21'] > nifty['roll34']
nifty['sell'] = nifty['roll21'] < nifty['roll34']
nifty['buy_change'] = np.array(cross(nifty.buy.values))    # .values is already 1-D
nifty['sell_change'] = np.array(cross(nifty.sell.values))

# keep only the crossover points, and place the markers on the 21-day SMA
nifty['buy'] = nifty['buy_change'].where(nifty['buy'] == True)
nifty['buy'] = nifty['roll21'].where(nifty['buy'] == 1)
nifty['sell'] = nifty['sell_change'].where(nifty['sell'] == True)
nifty['sell'] = nifty['roll21'].where(nifty['sell'] == 1)

ax.plot(nifty.Date, nifty.Close, 'r')
ax.plot(nifty.Date, nifty.roll34, 'b', label='34_SMA')
ax.plot(nifty.Date, nifty.roll21, 'g', label='21_SMA')
ax.plot(nifty.Date, nifty.buy, "g^")
ax.plot(nifty.Date, nifty.sell, "kv")
ax.set_xlabel('Date')
plt.legend(loc=2)
plt.tight_layout()
plt.show()
```

## 2.10 Trade call - using Bollinger band

```
nifty['roll14'] = nifty['Close'].rolling(14).mean()
std = nifty['Close'].rolling(14).std()
nifty['upper_band'] = nifty['roll14'] + 2*std
nifty['lower_band'] = nifty['roll14'] - 2*std

fig, ax = plt.subplots(figsize=(15, 7))
ax.plot(nifty.Date, nifty['Close'], 'k', label='close price')
ax.plot(nifty.Date, nifty.roll14, label='roll14')
ax.plot(nifty.Date, nifty.upper_band, 'r', label='upper_band')
ax.plot(nifty.Date, nifty.lower_band, 'g', label='lower_band')
ax.set_xlabel('Date')
plt.legend(loc=2)
plt.tight_layout()
plt.show()
```
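The Bollinger-band arithmetic above can be checked in isolation on a small synthetic series (the `closes` values below are made up for illustration, not market data):

```python
import numpy as np
import pandas as pd

# Synthetic close prices (hypothetical values, for illustration only)
closes = pd.Series(100 + np.sin(np.arange(60) / 5.0) * 4 + np.arange(60) * 0.1)

window = 14
mid = closes.rolling(window).mean()   # middle band = 14-period SMA
std = closes.rolling(window).std()    # 14-period rolling standard deviation
upper = mid + 2 * std                 # upper band
lower = mid - 2 * std                 # lower band

bands = pd.DataFrame({'mid': mid, 'upper': upper, 'lower': lower}).dropna()

# the middle band always lies between the two outer bands
print((bands['upper'] >= bands['mid']).all() and (bands['mid'] >= bands['lower']).all())
```

The first `window - 1` rows are NaN (not enough history for the rolling statistics), which is why the trade-call cell above calls `dropna()` before plotting.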
## Importing and mapping netCDF data with xarray and cartopy

- Read data from a netCDF file with xarray
- Select (index) and modify variables using xarray
- Create user-defined functions
- Set up map features with cartopy (lat/lon tickmarks, continents, country/state borders); create a function to automate these steps
- Overlay various plot types: contour lines, filled contours, vectors, and barbs
- Customize plot elements such as the colorbar and titles
- Save figure

```
## Imports
import os, sys
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.geoaxes import GeoAxes
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
```

### Load netCDF data with Xarray

This example demonstrates importing and mapping ERA5 reanalysis data for an AR-Thunderstorm event that occurred in Santa Barbara County on 6 March 2019. The data file created for this example can be found in the `sample-data` folder; it contains 6-hourly ERA5 Reanalysis data on a 0.5 x 0.5 deg lat-lon grid for 4-8 March. ERA5 data was retrieved from the Climate Data Store and subset to a regional domain over the Western US/N. Pacific.

The xarray package provides an easy interface for importing and analyzing multidimensional data. Because xarray was designed around the netCDF data model, it is an exceptionally powerful tool for working with weather and climate data. Xarray has two fundamental **data structures**:

**1)** a **`DataArray`**, which holds a single n-dimensional variable. Elements of a DataArray include:

- `values`: numpy array of data values
- `dims`: list of named dimensions (for example, `['time','lat','lon']`)
- `coords`: coordinate arrays (e.g., vectors of lat/lon values or datetime data)
- `attrs`: variable attributes such as `units` and `standard_name`

**2)** a **`Dataset`**, which holds multiple n-dimensional variables (shared coordinates).
Elements of a Dataset: data variables, dimensions, coordinates, and attributes.

In the cell below, we will load the ERA5 data (netCDF file) into an xarray dataset.

```
# Path to ERA5 data
filepath = "../sample-data/era5.6hr.AR-thunderstorm.20190304_08.nc"

# Read nc file into xarray dataset
ds = xr.open_dataset(filepath)

# Print dataset contents
print(ds)
```

### Selecting/Indexing data with xarray

We can always use regular numpy indexing and slicing on DataArrays and Datasets; however, it is often more powerful and easier to use xarray's `.sel()` method of label-based indexing.

```
# Select a single time
ds.sel(time='2019-03-05T18:00:00')  # 5 March 2019 at 18 UTC

# Select all times within a single day
ds.sel(time='2019-03-06')

# Select times at 06 UTC
idx = (ds['time.hour'] == 6)  # selection uses boolean indexing
hr06 = ds.sel(time=idx)       # statements could be combined into a single line
print(hr06)
print(hr06.time.values)       # check time coordinates in new dataset
```

In the previous block, we used `ds['time.hour']` to access the 'hour' component of a datetime object. Other datetime components include 'year', 'month', 'day', 'dayofyear', and 'season'. The 'season' component is unique to xarray; valid seasons include 'DJF', 'MAM', 'JJA', and 'SON'.

```
# Select a single grid point
ds.sel(latitude=40, longitude=-120)

# Select the grid point nearest to 34.4208° N, 119.6982°W
ds.sel(latitude=34.4208, longitude=-119.6982, method='nearest')

# Select range of lats (30-40 N)
# because ERA5 data latitudes are listed from 90N to 90S
# you have to slice from latmax to latmin
latmin = 30
latmax = 40
ds.sel(latitude=slice(latmax, latmin))
```

Select data at the peak of the AR-Thunderstorm event (06 March 2019 at 06 UTC).
``` # Select the date/time of the AR event (~06 March 2019 at 06 UTC); # assign subset selection to new dataset `dsAR` dsAR = ds.sel(time='2019-03-06T06:00:00') print(dsAR) # Select data on a single pressure level `plev` plev = '250' dsAR = dsAR.sel(level=plev) ``` In the following code block, we select the data and coordinate variables needed to create a map of 250-hPa heights and winds at the time of the AR-Thunderstorm event. ``` # coordinate arrays lats = dsAR['latitude'].values # .values extracts var as numpy array lons = dsAR['longitude'].values #print(lats.shape, lons.shape) #print(lats) # data variables uwnd = dsAR['u'].values vwnd = dsAR['v'].values hgts = dsAR['zg'].values # check the shape and values of print(hgts.shape) print(hgts) ``` ### Simple arithmetic Calculate the magnitude of horizontal wind (wind speed) from its u and v components. Convert wspd data from m/s to knots. ``` # Define a function to calculate wind speed from u and v wind components def calc_wspd(u, v): """Computes wind speed from u and v components""" wspd = np.sqrt(u**2 + v**2) return wspd # Use calc_wspd() function on uwnd & vwnd wspd = calc_wspd(uwnd, vwnd) # Define a function to convert m/s to knots # Hint: 1 m/s = 1.9438445 knots def to_knots(x): x_kt = x * 1.9438445 return x_kt # Convert wspd data to knots, save as separate array wspd_kt = to_knots(wspd) print(wspd_kt) ``` ### Plotting with Cartopy Map 250-hPa height lines, isotachs (in knots), and wind vectors or barbs. 
```
# Set up map properties

# Projection/Coordinate systems
datacrs = ccrs.PlateCarree()  # data/source
mapcrs = ccrs.PlateCarree()   # map/destination

# Map extent
lonmin = lons.min()
lonmax = lons.max()
latmin = lats.min()
latmax = lats.max()

# Tickmark Locations
dx = 10; dy = 10
xticks = np.arange(lonmin, lonmax+1, dx)  # np.arange(start, stop, interval) returns a 1d array
yticks = np.arange(latmin, latmax+1, dy)  # that ranges from `start` to `stop-1` by `interval`
print('xticks:', xticks)
print('yticks:', yticks)
```

First, we need to create a basemap to plot our data on. In creating the basemap, we will set the map extent, draw lat/lon tickmarks, and add/customize map features such as coastlines and country borders. Next, use the `contour()` function to draw lines of 250-hPa geopotential heights.

```
# Create figure
fig = plt.figure(figsize=(11,8))

# Add plot axes
ax = fig.add_subplot(111, projection=mapcrs)
ax.set_extent([lonmin,lonmax,latmin,latmax], crs=mapcrs)

# xticks (longitude tickmarks)
ax.set_xticks(xticks, crs=mapcrs)
lon_formatter = LongitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)

# yticks (latitude tickmarks)
ax.set_yticks(yticks, crs=mapcrs)
lat_formatter = LatitudeFormatter()
ax.yaxis.set_major_formatter(lat_formatter)

# format tickmarks
ax.tick_params(direction='out',  # draws ticks outside of plot ('out', 'in', 'inout')
               labelsize=8.5,    # font size of tick label
               length=5,         # length of tickmark in points
               pad=2,            # points between tickmark and label
               color='black')

# Add map features
ax.add_feature(cfeature.LAND, facecolor='0.9')                    # color fill land gray
ax.add_feature(cfeature.COASTLINE, edgecolor='k', linewidth=1.0)  # coastlines
ax.add_feature(cfeature.BORDERS, edgecolor='0.1', linewidth=0.7)  # country borders
ax.add_feature(cfeature.STATES, edgecolor='0.1', linewidth=0.7)   # state borders

# Create array of contour levels using np.arange(start,stop,interval)
clevs_hgts = np.arange(8400,12800,120)
#print(clevs_hgts)

# Draw contour lines for geopotential heights
cs = ax.contour(lons, lats, hgts, transform=datacrs,  # first line = required args
                levels=clevs_hgts,  # contour levels
                colors='blue',      # line color
                linewidths=1.2)     # line thickness (default=1.0)

# Add labels to contour lines
plt.clabel(cs, fmt='%d', fontsize=9, inline_spacing=5)

# Show
plt.show()
```

Create a function that will create and return a figure with a background map. This saves us from having to copy/paste the basemap setup in the previous block each time we create a new map.

```
def draw_basemap():
    # Create figure
    fig = plt.figure(figsize=(11,9))

    # Add plot axes and draw basemap
    ax = fig.add_subplot(111, projection=mapcrs)
    ax.set_extent([lonmin,lonmax,latmin,latmax], crs=mapcrs)

    # xticks
    ax.set_xticks(xticks, crs=mapcrs)
    lon_formatter = LongitudeFormatter()
    ax.xaxis.set_major_formatter(lon_formatter)

    # yticks
    ax.set_yticks(yticks, crs=mapcrs)
    lat_formatter = LatitudeFormatter()
    ax.yaxis.set_major_formatter(lat_formatter)

    # tick params
    ax.tick_params(direction='out', labelsize=8.5, length=5, pad=2, color='black')

    # Map features
    ax.add_feature(cfeature.LAND, facecolor='0.9')
    ax.add_feature(cfeature.COASTLINE, edgecolor='k', linewidth=1.0)
    ax.add_feature(cfeature.BORDERS, edgecolor='0.1', linewidth=0.7)
    ax.add_feature(cfeature.STATES, edgecolor='0.1', linewidth=0.7)

    return fig, ax
```

Use your `draw_basemap` function to create a new figure and background map. Plot height contours, then use `contourf()` to plot filled contours for wind speed (knots).
```
# Draw basemap
fig, ax = draw_basemap()

# Geopotential heights (contour lines)
clevs_hgts = np.arange(8400,12800,120)
cs = ax.contour(lons, lats, hgts, transform=datacrs,
                levels=clevs_hgts,  # contour levels
                colors='b',         # line color
                linewidths=1.2)     # line thickness

# Add labels to contour lines
plt.clabel(cs, fmt='%d', fontsize=8.5, inline_spacing=5)

# Wind speed - contour fill
clevs_wspd = np.arange(70,121,10)
cf = ax.contourf(lons, lats, wspd_kt, transform=datacrs,
                 levels=clevs_wspd,
                 cmap='BuPu',   # colormap
                 extend='max',  # shade values above the top level
                 alpha=0.8)     # transparency (0=transparent, 1=opaque)

# show
plt.show()
```

Add wind vectors using `quiver()`

```
# Draw basemap
fig, ax = draw_basemap()

# Geopotential height lines (hgts/10. converts to decameters)
clevs_hgts = np.arange(840,1280,12)
cs = ax.contour(lons, lats, hgts/10., transform=datacrs,
                levels=clevs_hgts,
                colors='b',      # line color
                linewidths=1.2)  # line thickness

# Add labels to contour lines
plt.clabel(cs, fmt='%d', fontsize=9, inline_spacing=5)

# Wind speed - contour fill
clevs_wspd = np.arange(70,131,10)
cf = ax.contourf(lons, lats, wspd_kt, transform=datacrs,
                 levels=clevs_wspd,
                 cmap='BuPu',
                 extend='max',  # shade values above the top level
                 alpha=0.8)     # transparency (0=transparent, 1=opaque)

# Wind vectors
ax.quiver(lons, lats, uwnd, vwnd, transform=datacrs,
          color='k', pivot='middle',
          regrid_shape=12)  # increasing regrid_shape increases the number/density of vectors

# show
plt.show()
```

Plot barbs instead of vectors using `barbs()`

```
# Draw basemap
fig, ax = draw_basemap()

# Geopotential height lines
clevs_hgts = np.arange(840,1280,12)
cs = ax.contour(lons, lats, hgts/10., transform=datacrs,
                levels=clevs_hgts,
                colors='b',       # line color
                linewidths=1.25)  # line thickness

# Add labels to contour lines
plt.clabel(cs, fmt='%d', fontsize=9, inline_spacing=5)

# Wind speed - contour fill
clevs_wspd = np.arange(70,131,10)
cf = ax.contourf(lons, lats, wspd_kt, transform=datacrs,
                 levels=clevs_wspd,
                 cmap='BuPu',
                 extend='max',
                 alpha=0.8)  # transparency (0=transparent, 1=opaque)

# Wind barbs
ax.barbs(lons, lats, uwnd, vwnd, transform=datacrs,  # uses the same args as quiver
         color='k', regrid_shape=12, pivot='middle')

# show
plt.show()
```

Add plot elements such as a colorbar and title. Option to save figure.

```
# Draw basemap
fig, ax = draw_basemap()

# Geopotential height lines
clevs_hgts = np.arange(8400,12800,120)
cs = ax.contour(lons, lats, hgts, transform=datacrs,
                levels=clevs_hgts,
                colors='b',      # line color
                linewidths=1.2)  # line thickness

# Add labels to contour lines
plt.clabel(cs, fmt='%d', fontsize=8.5, inline_spacing=5)

# Wind speed - contour fill
clevs_wspd = np.arange(70,131,10)
cf = ax.contourf(lons, lats, wspd_kt, transform=datacrs,
                 levels=clevs_wspd,
                 cmap='BuPu',
                 extend='max',
                 alpha=0.8)  # transparency (0=transparent, 1=opaque)

# Wind barbs
ax.barbs(lons, lats, uwnd, vwnd, transform=datacrs,  # uses the same args as quiver
         color='k', regrid_shape=12, pivot='middle')

# Add colorbar
cb = plt.colorbar(cf, orientation='vertical',  # 'horizontal' or 'vertical'
                  shrink=0.7,  # fraction to shrink cb by
                  pad=0.03)    # space between cb and plot
cb.set_label('knots')

# Plot title
titlestring = f"{plev}-hPa Hgts/Wind"  # uses new f-string formatting
ax.set_title(titlestring, loc='center', fontsize=13)  # loc: {'center','right','left'}

# Save figure
outfile = 'map-250hPa.png'
plt.savefig(outfile, bbox_inches='tight',  # trims excess whitespace from around figure
            dpi=300)                       # resolution in dots per inch

# show
plt.show()
```
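As a quick sanity check of the `calc_wspd` and `to_knots` helpers defined earlier, the 3-4-5 triangle gives a known magnitude (the u/v values below are made-up test numbers, not ERA5 data):

```python
import numpy as np

def calc_wspd(u, v):
    """Computes wind speed from u and v components."""
    return np.sqrt(u**2 + v**2)

def to_knots(x):
    """Convert m/s to knots (1 m/s = 1.9438445 knots)."""
    return x * 1.9438445

u = np.array([3.0, 0.0])
v = np.array([4.0, 10.0])
wspd = calc_wspd(u, v)      # magnitudes 5.0 and 10.0 m/s
wspd_kt = to_knots(wspd)    # same magnitudes expressed in knots
print(wspd, wspd_kt)
```

Because both helpers are element-wise numpy operations, they apply unchanged to the full 2-D `uwnd`/`vwnd` grids used in the maps above.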
# Add external catalog for source matching: allWISE catalog

This notebook will create a database containing the allWISE all-sky mid-infrared catalog. As the catalog grows (the allWISE catalog we are inserting contains of the order of hundreds of millions of sources), using an index on the geoJSON coordinate type to support the queries becomes impractical, as such an index does not compress well. In this case, a healpix-based indexing offers a good compromise. We will use a healpix grid of order 16, which has a resolution of ~ 3 arcseconds, similar to the FWHM of ZTF images.

References, data access, and documentation on the catalog can be found at:

http://wise2.ipac.caltech.edu/docs/release/allwise/

http://irsa.ipac.caltech.edu/data/download/wise-allwise/

This notebook is straight to the point, more like an actual piece of code than a demo. For an explanation of the various steps needed, see the 'insert_example' notebook in this same folder.

## 1) Inserting:

```
import numpy as np
from healpy import ang2pix
from extcats import CatalogPusher

# build the pusher object and point it to the raw files.
wisep = CatalogPusher.CatalogPusher( catalog_name = 'wise', data_source = '../testdata/AllWISE/', file_type = ".bz2") # read column names and types from schema file schema_file = "../testdata/AllWISE/wise-allwise-cat-schema.txt" names, types = [], {} with open(schema_file) as schema: for l in schema: if "#" in l or (not l.strip()): continue name, dtype = zip( [p.strip() for p in l.strip().split(" ") if not p in [""]]) name, dtype = name[0], dtype[0] #print (name, dtype) names.append(name) # convert the data type if "char" in dtype: types[name] = str elif "decimal" in dtype: types[name] = np.float64 elif "serial" in dtype or "integer" in dtype: types[name] = int elif "smallfloat" in dtype: types[name] = np.float16 elif "smallint" in dtype: types[name] = np.int16 elif dtype == "int8": types[name] = np.int8 else: print("unknown data type: %s"%dtype) # select the columns you want to use. use_cols = [] select = ["Basic Position and Identification Information", "Primary Photometric Information", "Measurement Quality and Source Reliability Information", "2MASS PSC Association Information"] with open(schema_file) as schema: blocks = schema.read().split("#") for block in blocks: if any([k in block for k in select]): for l in block.split("\n")[1:]: if "#" in l or (not l.strip()): continue name, dtype = zip( [p.strip() for p in l.strip().split(" ") if not p in [""]]) use_cols.append(name[0]) print("we will be using %d columns out of %d"%(len(use_cols), len(names))) # now assign the reader to the catalog pusher object import pandas as pd wisep.assign_file_reader( reader_func = pd.read_csv, read_chunks = True, names = names, usecols = lambda x : x in use_cols, #dtype = types, #this mess up with NaN values chunksize=5000, header=None, engine='c', sep='|', na_values = 'nnnn') # define the dictionary modifier that will act on the single entries def modifier(srcdict): srcdict['hpxid_16'] = int( ang2pix(2**16, srcdict['ra'], srcdict['dec'], lonlat = True, nest = True)) 
    #srcdict['_id'] = srcdict.pop('source_id')  # doesn't work: it seems source_id is not unique
    return srcdict

wisep.assign_dict_modifier(modifier)

# finally push it into the database
wisep.push_to_db(
    coll_name = 'srcs',
    index_on = "hpxid_16",
    overwrite_coll = True,
    append_to_coll = False)

# if needed print extensive info on database
#wisep.info()
```

## 2) Testing the catalog

At this stage, a simple test is run on the database, consisting of cross-matching with a set of randomly distributed points.

```
# now test the database for query performances. We use
# a sample of randomly distributed points on a sphere
# as targets.

# define the function to test coordinate based queries:
from healpy import ang2pix, get_all_neighbours
from astropy.table import Table
from astropy.coordinates import SkyCoord

return_fields = ['designation', 'ra', 'dec']
project = {}
for field in return_fields:
    project[field] = 1
print (project)

hp_order, rs_arcsec = 16, 30.

def test_query(ra, dec, coll):
    """query collection for points within rs of target ra, dec.
    The results are returned as an astropy Table."""

    # find the index of the target pixel and its neighbours
    target_pix = int( ang2pix(2**hp_order, ra, dec, nest = True, lonlat = True) )
    neighbs = get_all_neighbours(2**hp_order, ra, dec, nest = True, lonlat = True)

    # remove non-existing neighbours (in case of E/W/N/S) and add center pixel
    pix_group = [int(pix_id) for pix_id in neighbs if pix_id != -1] + [target_pix]

    # query the database for sources in these pixels
    qfilter = { 'hpxid_%d'%hp_order: { '$in': pix_group } }
    qresults = [o for o in coll.find(qfilter)]
    if len(qresults)==0:
        return None

    # then use astropy to find the closest match
    tab = Table(qresults)
    target = SkyCoord(ra, dec, unit = 'deg')
    matches_pos = SkyCoord(tab['ra'], tab['dec'], unit = 'deg')
    d2t = target.separation(matches_pos).arcsecond
    match_id = np.argmin(d2t)

    # if it's too far away don't use it
    if d2t[match_id]>rs_arcsec:
        return None
    return tab[match_id]

# run the test
wisep.run_test(test_query, npoints = 10000)
```

## 3) Adding metadata

Once the database is set up and the query performances are satisfactory, metadata describing the catalog content, contact person, and query strategies have to be added to the catalog database. If present, the keys and parameters for the healpix partitioning of the sources are also to be given, as well as the name of the compound geoJSON/legacy pair entry in the documents. This information will be added into the 'metadata' collection of the database which will be accessed by the CatalogQuery. The metadata will be stored in a dedicated collection so that the database containing a given catalog will have two collections:

- db['srcs'] : contains the sources.
- db['meta'] : describes the catalog.

```
mqp.healpix_meta(healpix_id_key = 'hpxid_16', order = 16, is_indexed = True, nest = True)
mqp.science_meta(
    contact = 'C. Norris',
    email = 'chuck.norris@desy.de',
    description = 'allWISE infrared catalog',
    reference = 'http://wise2.ipac.caltech.edu/docs/release/allwise/')
```
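The closest-match step in `test_query` relies on astropy's angular separation; the same selection can be sketched with plain numpy using the haversine formula (the target and candidate coordinates below are arbitrary illustrative values, not allWISE sources):

```python
import numpy as np

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation between points: degrees in, arcseconds out."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    sd = np.sin((dec2 - dec1) / 2.0) ** 2
    sr = np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2.0) ** 2
    return np.degrees(2.0 * np.arcsin(np.sqrt(sd + sr))) * 3600.0

# candidate sources returned by the healpix pixel-group query (made-up values)
cand_ra  = np.array([150.0010, 150.0100, 149.9500])
cand_dec = np.array([  2.2001,   2.2050,   2.1900])

# separations of every candidate from the target position
d2t = ang_sep_arcsec(150.0, 2.2, cand_ra, cand_dec)

rs_arcsec = 30.0
match_id = int(np.argmin(d2t))
# reject the best candidate if it is still outside the search radius
match = match_id if d2t[match_id] <= rs_arcsec else None
print(match, d2t)
```

This mirrors the `np.argmin(d2t)` / `d2t[match_id] > rs_arcsec` logic in `test_query`, just without the MongoDB and astropy machinery.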
```
# Update scikit-learn to prevent version mismatches
!pip install scikit-learn --upgrade

# install joblib. This will be used to save your model.
# Restart your kernel after installing
!pip install joblib

import pandas as pd
```

# Read the CSV and Perform Basic Data Cleaning

```
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
```

# Select your features (columns)

```
# Set features. This will also be used as your x values.
selected_features = df[['koi_period', 'koi_period_err1', 'koi_time0bk', 'koi_slogg', 'koi_srad']]
```

# Create a Train Test Split

Use `koi_disposition` for the y values

```
# import dependencies
from sklearn.model_selection import train_test_split

# assign x and y values
X = selected_features
y = df["koi_disposition"]

# split training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)
X_train.head()
```

# Pre-processing

Scale the data using the MinMaxScaler and perform some feature selection

```
# !pip install --upgrade tensorflow
!conda install tensorflow
import tensorflow

# Scale your data
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from tensorflow.keras.utils import to_categorical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)

X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)

# Step 1: Label-encode data set
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)

# Step 2: Convert encoded labels to one-hot-encoding
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
```

# Train the Model

```
# Create and fit your model (named `model2` here) before scoring it
print(f"Training Data Score: {model2.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {model2.score(X_test_scaled, y_test)}")
```

# Hyperparameter Tuning

Use `GridSearchCV` to tune the model's parameters

```
# Create the GridSearchCV model (named `grid2` here)

# Train the model with GridSearch

print(grid2.best_params_)
print(grid2.best_score_)
```

# Save the Model

```
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'your_name.sav'
joblib.dump(your_model, filename)
```
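The two preprocessing steps above (label encoding, then one-hot encoding) can be illustrated on a toy label array; the class names below mirror the `koi_disposition` values, but the array itself is made up:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

y = np.array(["CONFIRMED", "FALSE POSITIVE", "CANDIDATE", "CONFIRMED"])

# Step 1: label-encode the string classes to integers 0..K-1
le = LabelEncoder()
encoded = le.fit_transform(y)
print(list(le.classes_))  # classes are assigned in sorted order

# Step 2: one-hot encode the integers
# (equivalent to what keras' to_categorical does above)
onehot = np.eye(len(le.classes_))[encoded]
print(onehot)
```

Each row of `onehot` has exactly one 1, in the column of that sample's class, which is the target format the neural-network-style training expects.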
# Homework #3 Programming Assignment

CSCI567, Spring 2019<br>Victor Adamchik<br>**Due: 11:59 pm, March 3rd 2019**

### Before you start:

On Vocareum, when you submit your homework, it takes around 5-6 minutes to run the grading scripts and evaluate your code. So, please be patient regarding the same.<br>

## Office Hours for Programming Assignment 3

Office hours for Anirudh: <br>
February 15th, 2pm - 3pm<br>
February 22nd, 2pm - 3pm<br>
March 1st, 2pm - 4pm<br>
<br>
Office hours for Piyush:<br>
February 13th, 2pm - 3pm<br>
February 21st, 2pm - 3pm<br>
March 4th, 2pm - 4pm<br>
<br>
Also, you can post your questions on Piazza under the pa-3 folder. We will try our best to answer all questions as soon as possible. Please make sure you read previous posts before creating a new post in case your question has been answered before. However, if you have any urgent issue, please feel free to send an email to both of us. <br>
Anirudh, Kashi: kashia@usc.edu<br>
Piyush, Umate: pumate@usc.edu<br>

## Problem 1 Neural Networks (40 points)

![MLP_diagram.png](attachment:MLP_diagram.png)
<br><br>
For this assignment, you are asked to implement neural networks. We will be using this neural network to classify the MNIST database of handwritten digits (0-9). The architecture of the multi-layer perceptron (MLP, just another term for the fully connected feedforward networks we discussed in the lecture) you will be implementing is shown in figure 1. The following MLP is designed for a K-class classification problem. Let $(x\in\mathbb{R}^D, y\in\{1,2,\cdots,K\})$ be a labeled instance, such an MLP performs the following computations.
<br><br><br><br> $$ \begin{align} \textbf{input features}: \hspace{15pt} & x \in \mathbb{R}^D \\ \textbf{linear}^{(1)}: \hspace{15pt} & u = W^{(1)}x + b^{(1)} \hspace{2em}, W^{(1)} \in \mathbb{R}^{M\times D} \text{ and } b^{(1)} \in \mathbb{R}^{M} \label{linear_forward}\\ \textbf{tanh}:\hspace{15pt} & h =\cfrac{2}{1+e^{-2u}}-1 \label{tanh_forward}\\ \textbf{relu}: \hspace{15pt} & h = max\{0, u\} = \begin{bmatrix} \max\{0, u_1\}\\ \vdots \\ \max\{0, u_M\}\\ \end{bmatrix} \label{relu_forward}\\ \textbf{linear}^{(2)}: \hspace{15pt} & a = W^{(2)}h + b^{(2)} \hspace{2em}, W^{(2)} \in \mathbb{R}^{K\times M} \text{ and } b^{(2)} \in \mathbb{R}^{K} \label{linear2_forward}\\ \textbf{softmax}: \hspace{15pt} & z = \begin{bmatrix} \cfrac{e^{a_1}}{\sum_{k} e^{a_{k}}}\\ \vdots \\ \cfrac{e^{a_K}}{\sum_{k} e^{a_{k}}} \\ \end{bmatrix}\\ \textbf{predicted label}: \hspace{15pt} & \hat{y} = argmax_k z_k. %& l = -\sum_{k} y_{k}\log{\hat{y_{k}}} \hspace{2em}, \vy \in \mathbb{R}^{k} \text{ and } y_k=1 \text{ if } \vx \text{ belongs to the } k' \text{-th class}. \end{align} $$ For a $K$-class classification problem, one popular loss function for training (i.e., to learn $W^{(1)}$, $W^{(2)}$, $b^{(1)}$, $b^{(2)}$) is the cross-entropy loss. Specifically we denote the cross-entropy loss with respect to the training example $(x, y)$ by $l$: <br><br> $$ \begin{align} l = -\log (z_y) = \log \left( 1 + \sum_{k\neq y} e^{a_k - a_y} \right) \end{align} $$ <br><br> Note that one should look at $l$ as a function of the parameters of the network, that is, $W^{(1)}, b^{(1)}, W^{(2)}$ and $b^{(2)}$. For ease of notation, let us define the one-hot (i.e., 1-of-$K$) encoding of a class $y$ as \begin{align} y \in \mathbb{R}^K \text{ and } y_k = \begin{cases} 1, \text{ if }y = k,\\ 0, \text{ otherwise}. \end{cases} \end{align} so that \begin{align} l = -\sum_{k} y_{k}\log{z_k} = -y^T \begin{bmatrix} \log z_1\\ \vdots \\ \log z_K\\ \end{bmatrix} = -y^T\log{z}. 
\end{align} We can then perform error-backpropagation, a way to compute partial derivatives (or gradients) w.r.t the parameters of a neural network, and use gradient-based optimization to learn the parameters. Submission: All you need to submit is neural_networks.py ### Q1.1 Mini batch Gradient Descent First, You need to implement mini-batch gradient descent which is a gradient-based optimization to learn the parameters of the neural network. <br> $$ \begin{align} \upsilon = \alpha \upsilon - \eta \delta_t\\ w_t = w_{t-1} + \upsilon \end{align} $$ <br> You can use the formula above to update the weights using momentum. <br> Here, $\alpha$ is the discount factor such that $\alpha \in (0, 1)$ <br> $\upsilon$ is the velocity update<br> $\eta$ is the learning rate<br> $\delta_t$ is the gradient<br> You need to handle with as well without momentum scenario in the ```miniBatchGradientDescent``` function. * ```TODO 1``` You need to complete ```def miniBatchGradientDescent(model, momentum, _lambda, _alpha, _learning_rate)``` in ```neural_networks.py``` ### Q1.2 Linear Layer (10 points) Second, You need to implement the linear layer of MLP. In this part, you need to implement 3 python functions in ```class linear_layer```. In ```def __init__(self, input_D, output_D)``` function, you need to initialize W with random values using np.random.normal such that the mean is 0 and standard deviation is 0.1. You also need to initialize gradients to zeroes in the same function. $$ \begin{align} \text{forward pass:}\hspace{2em} & u = \text{linear}^{(1)}\text{.forward}(x) = W^{(1)}x + b^{(1)},\\ &\text{where } W^{(1)} \text{ and } b^{(1)} \text{ are its parameters.}\nonumber\\ \nonumber\\ \text{backward pass:}\hspace{2em} &[\frac{\partial l}{\partial x}, \frac{\partial l}{\partial W^{(1)}}, \frac{\partial l}{\partial b^{(1)}}] = \text{linear}^{(1)}\text{.backward}(x, \frac{\partial l}{\partial u}). 
\end{align} $$ You can use the above formula as a reference to implement the ```def forward(self, X)``` forward pass and ```def backward(self, X, grad)``` backward pass in class linear_layer. In backward pass, you only need to return the backward_output. You also need to compute gradients of W and b in backward pass. * ```TODO 2``` You need to complete ```def __init__(self, input_D, output_D)``` in ```class linear_layer``` of ```neural_networks.py``` * ```TODO 3``` You need to complete ```def forward(self, X)``` in ```class linear_layer``` of ```neural_networks.py``` * ```TODO 4``` You need to complete ```def backward(self, X, grad)``` in ```class linear_layer``` of ```neural_networks.py``` ### Q1.3 Activation function - tanh (10 points) Now, you need to implement the activation function tanh. In this part, you need to implement 2 python functions in ```class tanh```. In ```def forward(self, X)```, you need to implement the forward pass and you need to compute the derivative and accordingly implement ```def backward(self, X, grad)```, i.e. the backward pass. $$ \begin{align} \textbf{tanh}:\hspace{15pt} & h =\cfrac{2}{1+e^{-2u}}-1\\ \end{align} $$ You can use the above formula for tanh as a reference. * ```TODO 5``` You need to complete ```def forward(self, X)``` in ```class tanh``` of ```neural_networks.py``` * ```TODO 6``` You need to complete ```def backward(self, X, grad)``` in ```class tanh``` of ```neural_networks.py``` ### Q1.4 Activation function - relu (10 points) You need to implement another activation function called relu. In this part, you need to implement 2 python functions in ```class relu```. In ```def forward(self, X)```, you need to implement the forward pass and you need to compute the derivative and accordingly implement ```def backward(self, X, grad)```, i.e. the backward pass. 
$$
\begin{align}
\textbf{relu}: \hspace{15pt} & h = \max\{0, u\} = \begin{bmatrix} \max\{0, u_1\}\\ \vdots \\ \max\{0, u_M\}\\ \end{bmatrix}
\end{align}
$$

You can use the above formula for relu as a reference.

* ```TODO 7``` You need to complete ```def forward(self, X)``` in ```class relu``` of ```neural_networks.py```
* ```TODO 8``` You need to complete ```def backward(self, X, grad)``` in ```class relu``` of ```neural_networks.py```

### Q1.5 Dropout (10 points)

To prevent overfitting, we usually add regularization. Dropout is another way of handling overfitting. In this part, you will first read and understand ```def forward(self, X, is_train)```, i.e. the forward pass of ```class dropout```, and derive the partial derivatives accordingly to implement ```def backward(self, X, grad)```, i.e. the backward pass of ```class dropout```. We define the forward and the backward passes as follows.

\begin{align}
\text{forward pass:}\hspace{2em} & {s} = \text{dropout}\text{.forward}({q}\in\mathbb{R}^J) = \frac{1}{1-r}\times \begin{bmatrix} \textbf{1}[p_1 >= r] \times q_1\\ \vdots \\ \textbf{1}[p_J >= r] \times q_J\\ \end{bmatrix}, \\
\nonumber\\
&\text{where } p_j \text{ is sampled uniformly from }[0, 1), \forall j\in\{1,\cdots,J\}, \nonumber\\
&\text{and } r\in [0, 1) \text{ is a pre-defined scalar named dropout rate}.
\end{align}

\begin{align}
\text{backward pass:}\hspace{2em} &\frac{\partial l}{\partial {q}} = \text{dropout}\text{.backward}({q}, \frac{\partial l}{\partial {s}})= \frac{1}{1-r}\times \begin{bmatrix} \textbf{1}[p_1 >= r] \times \cfrac{\partial l}{\partial s_1}\\ \vdots \\ \textbf{1}[p_J >= r] \times \cfrac{\partial l}{\partial s_J}\\ \end{bmatrix}.
\end{align}

Note that $p_j, j\in\{1,\cdots,J\}$ and $r$ are not learned, so we do not need to compute the derivatives w.r.t. them. Moreover, $p_j, j\in\{1,\cdots,J\}$ are re-sampled every forward pass, and are kept for the following backward pass. The dropout rate $r$ is set to 0 during testing.
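The inverted-dropout behavior defined above can be sketched in NumPy as follows. Note that `dropout_forward`, `dropout_backward`, and the explicit `rng` argument are illustrative names for this sketch, not the `class dropout` API the assignment requires:

```python
import numpy as np

def dropout_forward(q, r, rng):
    # Sample p_j uniformly from [0, 1) once per forward pass;
    # the kept entries are scaled by 1/(1-r) (inverted dropout)
    p = rng.uniform(0.0, 1.0, size=q.shape)
    mask = (p >= r) / (1.0 - r)
    return q * mask, mask  # the mask is kept for the backward pass

def dropout_backward(grad_s, mask):
    # dl/dq = mask * dl/ds, elementwise, using the mask from the forward pass
    return grad_s * mask

rng = np.random.default_rng(0)
q = np.ones((4, 3))
s, mask = dropout_forward(q, r=0.5, rng=rng)
grad_q = dropout_backward(np.ones_like(s), mask)
```

With `r = 0` (the test-time setting mentioned above) the mask is all ones and the layer is the identity.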
* ```TODO 9``` You need to complete ```def backward(self, X, grad)``` in ```class dropout``` of ```neural_networks.py```

### Q1.6 Connecting the dots

In this part, you will combine the modules written in questions Q1.1 to Q1.5 by implementing the TODO snippets in the ```def main(main_params, optimization_type="minibatch_sgd")``` main function. After implementing the forward and backward passes of the MLP layers in Q1.1 to Q1.5, in the main function you will call the forward and backward methods of every layer in the model in an appropriate order based on the architecture.

* ```TODO 10``` You need to complete ```main(main_params, optimization_type="minibatch_sgd")``` in ```neural_networks.py```

### Grading

Your code will be graded on Vocareum with an autograding script. For your reference, the solution code takes around 5 minutes to execute. As long as your code can finish grading on Vocareum, you should be good. When you finish all ```TODO``` parts, please click the submit button on Vocareum. Sometimes you may need to come back later to check the grading report. Your code will be tested on the correctness of the modules you have implemented as well as certain custom testcases. 40 points are assigned for Question 1, while 60 points are assigned to custom testcases.

```
import numpy as np
print(np.__version__)  # np.version is a module; __version__ holds the version string

a = np.array([[1, -2], [3, -2], [2, -2]])
a.shape[0]
a * a
np.tanh(a)
np.tanh(a) ** 2

# Rebinding the loop variable does not modify the array, so a is unchanged afterwards
for row in a:
    for element in row:
        print(element)
        if element < 0:
            element = 0
print(a)

# A vectorized elementwise maximum does the clipping instead
print(np.maximum(a, 0))
a
x = a
x[x <= 0] = 0
x[x > 0] = 1
x
```
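As a sanity check for the Q1.1 update rule $\upsilon = \alpha \upsilon - \eta \delta_t$, $w_t = w_{t-1} + \upsilon$, here is a minimal sketch; `momentum_update` is an illustrative helper, not the required ```miniBatchGradientDescent``` signature:

```python
import numpy as np

def momentum_update(w, v, grad, alpha, learning_rate, momentum=True):
    # With momentum: v = alpha * v - eta * grad; without: plain SGD step
    if momentum:
        v = alpha * v - learning_rate * grad
    else:
        v = -learning_rate * grad
    return w + v, v

w = np.zeros(3)
v = np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])
w, v = momentum_update(w, v, grad, alpha=0.9, learning_rate=0.1)
# first step from v = 0: w == v == -0.1 * grad
```

The same helper with `momentum=False` reduces to vanilla mini-batch SGD, which is the without-momentum scenario the function must also handle.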
![FREYA Logo](https://github.com/datacite/pidgraph-notebooks-python/blob/master/images/freya_200x121.png?raw=true) | [FREYA](https://www.project-freya.eu/en) WP2 [User Story 10](https://github.com/datacite/freya/issues/45) | As a funder, we want to be able to find all the outputs related to our awarded grants, including block grants such as doctoral training grants, for management info and looking at impact. :------------- | :------------- | :------------- Funders are interested in monitoring the output of grants they award - while the grant is active as well as retrospectively. The quality, quantity and types of the grant's outputs are useful proxies for the value obtained as a result of the funder's investment.<p /> This notebook uses the [DataCite GraphQL API](https://api.datacite.org/graphql) to retrieve all outputs of [FREYA grant award](https://cordis.europa.eu/project/id/777523) from [European Union](https://doi.org/10.13039/501100000780) to date. **Goal**: By the end of this notebook you should be able to: - Retrieve all outputs of a grant award from a specific funder; - Plot number of outputs per year-quarter of the grant award duration; - Display de-duplicated outputs in tabular format, including the number of their citations, views and downloads; - Plot a pie chart of the number of outputs per resource type; - Display an interactive chord plot of co-authorship relationships across all outputs, e.g. 
<br> <img src="example_plot.png" width="318" height="309" /> ## Install libraries and prepare GraphQL client ``` %%capture # Install required Python packages !pip install gql requests chord numpy # Prepare the GraphQL client import requests from IPython.display import display, Markdown from gql import gql, Client from gql.transport.requests import RequestsHTTPTransport _transport = RequestsHTTPTransport( url='https://api.datacite.org/graphql', use_json=True, ) client = Client( transport=_transport, fetch_schema_from_transport=True, ) ``` ## Define and run GraphQL query Define the GraphQL query to find all outputs of [FREYA grant award](https://cordis.europa.eu/project/id/777523) from [European Union](https://doi.org/10.13039/501100000780) to date. ``` # Generate the GraphQL query: find all outputs of FREYA grant award (https://cordis.europa.eu/project/id/777523) from funder (EU) to date query_params = { "funderId" : "https://doi.org/10.13039/501100000780", "funderAwardQuery" : "fundingReferences.awardNumber:777523", "maxWorks" : 75 } query = gql("""query getGrantOutputsForFunderAndAward($funderId: ID!, $funderAwardQuery: String!, $maxWorks: Int!) { funder(id: $funderId) { name works(query: $funderAwardQuery, first: $maxWorks) { totalCount nodes { id formattedCitation(style: "vancouver") titles { title } descriptions { description } types { resourceType } dates { date dateType } versionOfCount creators { id name } fundingReferences { funderIdentifier funderName awardNumber awardTitle } citationCount viewCount downloadCount } } } } """) ``` Run the above query via the GraphQL client ``` import json data = client.execute(query, variable_values=json.dumps(query_params)) ``` ## Display total number of works Display the total number of [FREYA grant award](https://cordis.europa.eu/project/id/777523) outputs to date. 
```
# Get the total number of outputs to date
funder = data['funder']['works']
display(Markdown(str(funder['totalCount'])))
```

## Plot number of works per quarter
Display a bar plot of the number of [FREYA grant award](https://cordis.europa.eu/project/id/777523) outputs to date, per quarter of the project's duration.

```
# Plot the number of FREYA outputs to date, by year-quarter
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
import numpy as np

# Return quarter (number) given month (number)
def get_quarter(month):
    return (month - 1) // 3 + 1

# Return list of consecutive year-quarters between min_year_quarter and max_year_quarter inclusive
def get_consecutive_year_quarters(min_year_quarter, max_year_quarter):
    year_quarters = ["%d Q%d" % (min_year_quarter[0], min_year_quarter[1])]
    yq = min_year_quarter
    while yq != max_year_quarter:
        year = yq[0]
        quarter = yq[1]
        if quarter == 4:
            year += 1
            quarter = 1
        else:
            quarter += 1
        yq = (year, quarter)
        year_quarters.append("%d Q%d" % (year, quarter))
    # max_year_quarter is appended in the final loop iteration, so it must not be appended again here
    return year_quarters

plt.rcdefaults()

# Retrieve works counts by year-quarter from nodes
# Pick out date of type: 'Issued'; failing that use 'Created' date.
num_outputs_dict = {} funder = data['funder']['works'] for r in funder['nodes']: node_date = None for date_dict in r['dates']: ym = date_dict['date'].split('-')[0:2] if len(ym) < 2: continue yq = ym[0] + " Q" + str(get_quarter(int(ym[1]))) if node_date is None: if date_dict['dateType'] in ['Issued', 'Created']: node_date = yq else: if date_dict['dateType'] in ['Issued']: node_date = yq if node_date: if node_date not in num_outputs_dict: num_outputs_dict[node_date] = 0 num_outputs_dict[node_date] += 1; # Sort works counts by year-quarter in chronological order sorted_year_quarters = sorted(list(num_outputs_dict.keys())) # Get all consecutive year-quarters FREYA-specific start-end year-quarter year_quarters = get_consecutive_year_quarters((2017,4), (2020,4)) # Populate non-zero counts for year_quarters num_outputs = [] for yq in year_quarters: if yq in sorted_year_quarters: num_outputs.append(num_outputs_dict[yq]) else: num_outputs.append(0) # Generate a plot of number of grant outputs by year - quarter fig, ax = plt.subplots(1, 1, figsize = (10, 5)) x_pos = np.arange(len(year_quarters)) ax.bar(x_pos, num_outputs, align='center', color='blue', edgecolor='black', linewidth = 0.1, alpha=0.5) ax.set_xticks(x_pos) ax.set_xticklabels(year_quarters, rotation='vertical') ax.set_ylabel('Number of outputs') ax.set_xlabel('Year Quarter') ax.set_title('Number of Grant Award Outputs per Year-Quarter') plt.show() ``` ## Display de-duplicated works in tabular format Display the outputs of [FREYA grant award](https://cordis.europa.eu/project/id/777523) in a html table, including the number of their citations, views and downloads. Note that the outputs are de-duplicated, i.e. outputs that are versions of another output are excluded. 
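The date-selection rule that recurs in these cells (prefer a date of type `'Issued'`, otherwise fall back to `'Created'`) can be isolated into a small helper; `pick_publication_date` is an illustrative name, not part of the notebook:

```python
def pick_publication_date(dates):
    # Prefer the 'Issued' date; remember the first 'Created' date as a fallback
    chosen = None
    for d in dates:
        if d['dateType'] == 'Issued':
            return d['date']
        if chosen is None and d['dateType'] == 'Created':
            chosen = d['date']
    return chosen

date = pick_publication_date([{'date': '2019-03-01', 'dateType': 'Created'},
                              {'date': '2019-04-02', 'dateType': 'Issued'}])
# -> '2019-04-02'
```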
``` from IPython.core.display import display, HTML import textwrap xstr = lambda s: 'General' if s is None else str(s) # Get details for each output outputs = [['ID','Type','Publication Date','Formatted Citation','Descriptions', 'Number of Citations', 'Number of Views', 'Number of Downloads']] # Since there is scope for duplicates in Zenodo, versions of previously seen nodes are considered duplicates and stored in duplicate_versions so that # they can be excluded if seen later for r in funder['nodes']: id = '<a href="%s">%s</a></html>' % (r['id'], '/'.join(r['id'].split("/")[3:])) if r['versionOfCount'] > 0: # If the current output is a version of another one, exclude it continue # As Publication Date, pick out date of type: 'Issued'; failing that use 'Created' date. pub_date = None for date_dict in r['dates']: if pub_date is None: if date_dict['dateType'] in ['Issued', 'Created']: pub_date = date_dict['date']; else: if date_dict['dateType'] in ['Issued']: pub_date = date_dict['date']; titles = '; '.join([s['title'] for s in r['titles']]) creators = '; '.join(['<a href="%s">%s</a>' % (s['id'],s['name']) for s in r['creators']]) formatted_citation = "%s. %s. 
%s; Available from: %s" % (creators, titles, pub_date, id) resource_type = xstr(r['types']['resourceType']) descriptions = textwrap.shorten('; '.join([s['description'] for s in r['descriptions']]), width=200, placeholder="...") output = [id, resource_type, pub_date, formatted_citation, descriptions, str(r['citationCount']), str(r['viewCount']), str(r['downloadCount'])] outputs += [output] # Display outputs as html table html_table = '<html><table>' html_table += '<tr><th style="text-align:center;">' + '</th><th style="text-align:center;">'.join(outputs[0]) + '</th></tr>' for row in outputs[1:]: html_table += '<tr><td style="text-align:left;">' + '</td><td style="text-align:left;">'.join(row) + '</td></tr>' html_table += '</table></html>' display(HTML(html_table)) ``` ## Plot number of outputs per resource type Plot as a pie chart the number of [FREYA grant award](https://cordis.europa.eu/project/id/777523) outputs per resource type. ``` # Plot as a pie chart the number of outputs per resource type import matplotlib.pyplot as plt from matplotlib.ticker import FormatStrFormatter import numpy as np import operator xstr = lambda s: 'General' if s is None else str(s) plt.rcdefaults() # Retrieve works counts by resource type from nodes # Pick out date of type: 'Issued'; failing that use 'Created' date. 
funder = data['funder']['works'] num_outputs_dict = {} for r in funder['nodes']: resource_type = xstr(r['types']['resourceType']) if resource_type not in num_outputs_dict: num_outputs_dict[resource_type] = 0 num_outputs_dict[resource_type] += 1; # Sort resource types by count of work desc sorted_num_outputs = sorted(num_outputs_dict.items(),key=operator.itemgetter(1),reverse=True) # Populate lists needed for pie chart resource_types = [s[0] for s in sorted_num_outputs] num_outputs = [s[1] for s in sorted_num_outputs] # Generate a pie chart of number of grant outputs by resource type fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.set_title('Number of Grant Outputs per Resource Type') ax.axis('equal') ax.pie(num_outputs, labels = resource_types,autopct='%1.2f%%') plt.show() ``` ## Display an interactive plot of co-authorship relationships across all outputs Display an interactive chord plot representing co-authorship relationships across all [FREYA grant award](https://cordis.europa.eu/project/id/777523) outputs. 
``` # Generate a Chord plot representing co-authorship relationships across all grant award outputs from chord import Chord from IPython.display import IFrame all_creator_names_by_node = [] all_creator_names_set = set([]) funder = data['funder']['works'] for r in funder['nodes']: if r['versionOfCount'] > 0: # If the current output is a version of another one, exclude it continue # To minimise cropping of names in the below, retain just the first letter of the first name # if the author name is well formatted creator_names = [] for name in [s['name'] for s in r['creators'] if s['name']]: if name.find(",") > 0: creator_names.append(name[0:name.index(",") + 3]) elif name.find(",") == 0: creator_names.append(name[1:].strip()) else: creator_names.append(name) all_creator_names_by_node.append(creator_names) all_creator_names_set.update(creator_names) # Assemble data structures for the co-authorship chord diagram all_creator_names = sorted(list(all_creator_names_set)) # Initialise chord data matrix length = len(all_creator_names) coauthorship_matrix = [] for i in range(length): r = [] for j in range(length): r.append(0) coauthorship_matrix.append(r) # Populate chord data matrix for node_creators in all_creator_names_by_node: for creator in node_creators: c_pos = all_creator_names.index(creator) for co_creator in node_creators: co_pos = all_creator_names.index(co_creator) if c_pos != co_pos: coauthorship_matrix[c_pos][co_pos] += 1 # display co-authorship cord diagram plot = Chord(coauthorship_matrix, all_creator_names, padding=0.04, wrap_labels=False, margin=130, width=1000).to_html() IFrame(src="./out.html", width=1000, height=1000) ```
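The matrix-building loops above can also be written with an index lookup and `itertools.permutations`, which avoids the repeated `list.index` scans; this is an equivalent sketch for distinct author names, not a replacement used by the notebook:

```python
from itertools import permutations

def coauthorship_matrix(names_by_output):
    # Stable, de-duplicated author list and a name -> index lookup
    names = sorted({n for authors in names_by_output for n in authors})
    idx = {n: i for i, n in enumerate(names)}
    matrix = [[0] * len(names) for _ in range(len(names))]
    # Count each ordered co-author pair once per output
    for authors in names_by_output:
        for a, b in permutations(authors, 2):
            matrix[idx[a]][idx[b]] += 1
    return names, matrix

names, m = coauthorship_matrix([["A", "B"], ["B", "C", "A"]])
# "A" and "B" co-authored two outputs, every other pair one output
```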
<a id="title_ID"></a>
# JWST Pipeline Validation Testing Notebook: spec2, extract_2d step

<span style="color:red"> **Instruments Affected**</span>: NIRSpec

Tested on CV3 data

### Table of Contents

<div style="text-align: left"> <br> [Imports](#imports_ID) <br> [Introduction](#intro_ID) <br> [Testing Data Set](#data_ID) <br> [Run the JWST pipeline and assign_wcs validation tests](#pipeline_ID): [FS Full-Frame test](#FULLFRAME), [FS ALLSLITS test](#ALLSLITS), [MOS test](#MOS) <br> [About This Notebook](#about_ID)<br> [Results](#results) <br> </div>

<a id="imports_ID"></a>
# Imports

The library imports relevant to this notebook are already taken care of by importing PTT.

* astropy.io for opening fits files
* jwst.module.PipelineStep is the pipeline step being tested
* matplotlib.pyplot as plt to generate plots

NOTE: This notebook assumes that the pipeline version to be tested is already installed and its environment is activated.

To be able to run this notebook you need to install nptt. If all goes well you will be able to import PTT.

[Top of Page](#title_ID)

```
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory import os import shutil data_dir = TemporaryDirectory() os.chdir(data_dir.name) # Choose CRDS cache location use_local_crds_cache = False crds_cache_tempdir = False crds_cache_notebook_dir = True crds_cache_home = False crds_cache_custom_dir = False crds_cache_dir_name = "" if use_local_crds_cache: if crds_cache_tempdir: os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds") elif crds_cache_notebook_dir: try: os.environ['CRDS_PATH'] = os.path.join(orig_dir, "crds") except Exception as e: os.environ['CRDS_PATH'] = os.path.join(os.getcwd(), "crds") elif crds_cache_home: os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache') elif crds_cache_custom_dir: os.environ['CRDS_PATH'] = crds_cache_dir_name import warnings import psutil from astropy.io import fits # Only print a DeprecationWarning the first time it shows up, not every time. with warnings.catch_warnings(): warnings.simplefilter("once", category=DeprecationWarning) import jwst from jwst.pipeline.calwebb_detector1 import Detector1Pipeline from jwst.assign_wcs.assign_wcs_step import AssignWcsStep from jwst.msaflagopen.msaflagopen_step import MSAFlagOpenStep from jwst.extract_2d.extract_2d_step import Extract2dStep # The latest version of NPTT is installed in the requirements text file at: # /jwst_validation_notebooks/environment.yml # import NPTT import nirspec_pipe_testing_tool as nptt # To get data from Artifactory from ci_watson.artifactory_helpers import get_bigdata # Print the versions used for the pipeline and NPTT pipeline_version = jwst.__version__ nptt_version = nptt.__version__ print("Using jwst pipeline version: ", pipeline_version) print("Using NPTT version: ", nptt_version) ``` <a id="intro_ID"></a> # Test Description We compared Institute's pipeline product of the assign_wcs step with our benchmark files, or with the intermediary products from the ESA pipeline, which is completely independent from the Institute's. 
The comparison file is referred to as 'truth'. We calculated the relative difference and expected it to be equal to or less than computer precision: relative_difference = absolute_value( (Truth - ST)/Truth ) <= 1x10^-7. For the test to be considered PASSED, every single slit (for FS data), slitlet (for MOS data) or slice (for IFU data) in the input file has to pass. If there is any failure, the whole test will be considered FAILED.

The code for this test can be obtained at: https://github.com/spacetelescope/nirspec_pipe_testing_tool/blob/master/nirspec_pipe_testing_tool/calwebb_spec2_pytests/auxiliary_code/check_corners_extract2d.py. For Multi Object Spectroscopy (MOS) data, the code is in the same repository but is named ```compare_wcs_mos.py```, and for Integral Field Unit (IFU) data, the test is named ```compare_wcs_ifu.py```. The input file is defined in the variable ```input_file``` (see section [Testing Data Set and Variable Setup](#data_ID)).

Step description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/extract_2d/main.html

Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/extract_2d

### Results

If the test **PASSED**, this means that all slits, slitlets, or slices individually passed the test. However, if even one individual slit (for FS data), slitlet (for MOS data) or slice (for IFU data) test failed, the whole test will be reported as **FAILED**.

### Calibration WG Requested Algorithm:

A short description and link to the page: https://outerspace.stsci.edu/display/JWSTCC/Vanilla+Path-Loss+Correction

### Defining Terms

Acronyms used in this notebook:

pipeline: calibration pipeline
spec2: spectroscopic calibration pipeline level 2b
PTT: NIRSpec pipeline testing tool (https://github.com/spacetelescope/nirspec_pipe_testing_tool)

[Top of Page](#title_ID)

<a id="pipeline_ID"></a>
# Run the JWST pipeline and extract_2d validation tests

The pipeline can be run from the command line in two variants: full or per step.
To run the spec2 pipeline in full, use the command:

$ strun jwst.pipeline.Spec2Pipeline jwtest_rate.fits

To run only the extract_2d step, use the command:

$ strun jwst.extract_2d.Extract2dStep jwtest_previous_step_output.fits

These options are also callable from a script with the testing environment active. The Python calls for running the pipeline in full or by step are:

$\gt$ from jwst.pipeline.calwebb_spec2 import Spec2Pipeline
$\gt$ Spec2Pipeline.call(jwtest_rate.fits)

or

$\gt$ from jwst.extract_2d import Extract2dStep
$\gt$ Extract2dStep.call(jwtest_previous_step_output.fits)

PTT can run the spec2 pipeline either in full or per step, as well as the imaging pipeline in full. In this notebook we will use PTT to run the pipeline and the validation tests. To run PTT, follow the directions in the corresponding repo page.

[Top of Page](#title_ID)

<a id="data_ID"></a>
# Testing Data Set

All testing data is from the CV3 campaign. We chose these files because this is our most complete data set, i.e. all modes and filter-grating combinations. Data used for testing was only FS and MOS, since extract_2d is skipped for IFU.
Data sets are: - FS_PRISM_CLEAR - FS_FULLFRAME_G395H_F290LP - FS_ALLSLITS_G140H_F100LP - MOS_G140M_LINE1 - MOS_PRISM_CLEAR [Top of Page](#title_ID) ``` testing_data = {'fs_prism_clear':{ 'uncal_file_nrs1': 'fs_prism_nrs1_uncal.fits', 'uncal_file_nrs2': 'fs_prism_nrs2_uncal.fits', 'truth_file_nrs1': 'fs_prism_nrs1_extract_2d_truth.fits', 'truth_file_nrs2': 'fs_prism_nrs2_extract_2d_truth.fits', 'msa_shutter_config': None }, 'fs_fullframe_g395h_f290lp':{ 'uncal_file_nrs1': 'fs_fullframe_g35h_f290lp_nrs1_uncal.fits', 'uncal_file_nrs2': 'fs_fullframe_g35h_f290lp_nrs2_uncal.fits', 'truth_file_nrs1': 'fs_fullframe_g35h_f290lp_nrs1_extract_2d_truth.fits', 'truth_file_nrs2': 'fs_fullframe_g35h_f290lp_nrs2_extract_2d_truth.fits', 'msa_shutter_config': None }, 'fs_allslits_g140h_f100lp':{ 'uncal_file_nrs1': 'fs_allslits_g140h_f100lp_nrs1_uncal.fits', 'uncal_file_nrs2': 'fs_allslits_g140h_f100lp_nrs2_uncal.fits', 'truth_file_nrs1': 'fs_allslits_g140h_f100lp_nrs1_extract_2d_truth.fits', 'truth_file_nrs2': 'fs_allslits_g140h_f100lp_nrs2_extract_2d_truth.fits', 'msa_shutter_config': None }, # Commented out because the pipeline is failing with this file #'bots_g235h_f170lp':{ # 'uncal_file_nrs1': 'bots_g235h_f170lp_nrs1_uncal.fits', # 'uncal_file_nrs2': 'bots_g235h_f170lp_nrs2_uncal.fits', # 'truth_file_nrs1': 'bots_g235h_f170lp_nrs1_extract_2d_truth.fits', # 'truth_file_nrs2': 'bots_g235h_f170lp_nrs2_extract_2d_truth.fits', # 'msa_shutter_config': None }, 'mos_prism_clear':{ 'uncal_file_nrs1': 'mos_prism_nrs1_uncal.fits', 'uncal_file_nrs2': 'mos_prism_nrs2_uncal.fits', 'truth_file_nrs1': 'mos_prism_nrs1_extract_2d_truth.fits', 'truth_file_nrs2': None, 'msa_shutter_config': 'V0030006000104_msa.fits' }, 'mos_g140m_f100lp':{ 'uncal_file_nrs1': 'mos_g140m_line1_NRS1_uncal.fits', 'uncal_file_nrs2': 'mos_g140m_line1_NRS2_uncal.fits', 'truth_file_nrs1': 'mos_g140m_line1_nrs1_extract_2d_truth.fits', 'truth_file_nrs2': 'mos_g140m_line1_nrs2_extract_2d_truth.fits', 'msa_shutter_config': 
'V8460001000101_msa.fits' }, } # define function to pull data from Artifactory def get_artifactory_file(data_set_dict, detector): """This function creates a list with all the files needed per detector to run the test. Args: data_set_dict: dictionary, contains inputs for a specific mode and configuration detector: string, either nrs1 or nrs2 Returns: data: list, contains all files needed to run test """ files2obtain = ['uncal_file_nrs1', 'truth_file_nrs1', 'msa_shutter_config'] data = [] for file in files2obtain: data_file = None try: if '_nrs' in file and '2' in detector: file = file.replace('_nrs1', '_nrs2') data_file = get_bigdata('jwst_validation_notebooks', 'validation_data', 'nirspec_data', data_set_dict[file]) except TypeError: data.append(None) continue data.append(data_file) return data # Set common NPTT switches for NPTT and run the test for both detectors in each data set # define benchmark (or 'truth') file compare_assign_wcs_and_extract_2d_with_esa = False # accepted threshold difference with respect to benchmark files extract_2d_threshold_diff = 4 # define benchmark (or 'truth') file esa_files_path, raw_data_root_file = None, None compare_assign_wcs_and_extract_2d_with_esa = False # Get the data results_dict = {} detectors = ['nrs1', 'nrs2'] for mode_config, data_set_dict in testing_data.items(): for det in detectors: print('Testing files for detector: ', det) data = get_artifactory_file(data_set_dict, det) uncal_file, truth_file, msa_shutter_config = data print('Working with uncal_file: ', uncal_file) uncal_basename = os.path.basename(uncal_file) # Make sure that there is an assign_wcs truth product to compare to, else skip this data set if truth_file is None: print('No truth file to compare to for this detector, skipping this file. 
\n') skip_file = True else: skip_file = False if not skip_file: # Run the stage 1 pipeline rate_object = Detector1Pipeline.call(uncal_file) # Make sure the MSA shutter configuration file is set up correctly if msa_shutter_config is not None: msa_metadata = rate_object.meta.instrument.msa_metadata_file print(msa_metadata) if msa_metadata is None or msa_metadata == 'N/A': rate_object.meta.instrument.msa_metadata_file = msa_shutter_config # Run the stage 2 pipeline steps pipe_object = AssignWcsStep.call(rate_object) if 'mos' in uncal_basename.lower(): pipe_object = MSAFlagOpenStep.call(pipe_object) extract_2d_object = Extract2dStep.call(pipe_object) # Run the validation test %matplotlib inline if 'fs' in uncal_file.lower(): print('Running test for FS...') result, _ = nptt.calwebb_spec2_pytests.auxiliary_code.check_corners_extract2d.find_FSwindowcorners( extract_2d_object, truth_file=truth_file, esa_files_path=esa_files_path, extract_2d_threshold_diff=extract_2d_threshold_diff) if 'mos' in uncal_file.lower(): print('Running test for MOS...') result, _ = nptt.calwebb_spec2_pytests.auxiliary_code.check_corners_extract2d.find_MOSwindowcorners( extract_2d_object, msa_shutter_config, truth_file=truth_file, esa_files_path=esa_files_path, extract_2d_threshold_diff= extract_2d_threshold_diff) else: result = 'skipped' # Did the test passed print("Did assign_wcs validation test passed? ", result, "\n\n") rd = {uncal_basename: result} results_dict.update(rd) # close all open files psutil.Process().open_files() closing_files = [] for fd in psutil.Process().open_files(): if data_dir.name in fd.path: closing_files.append(fd) for fd in closing_files: try: print('Closing file: ', fd) open(fd.fd).close() except: print('File already closed: ', fd) # Quickly see if the test passed print('These are the final results of the tests: ') for key, val in results_dict.items(): print(key, val) ``` <a id="about_ID"></a> ## About this Notebook **Author:** Maria A. 
Pena-Guerrero, Staff Scientist II - Systems Science Support, NIRSpec <br>**Updated On:** Mar/24/2021 [Top of Page](#title_ID) <img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/>
# Pump Calculations

```
import numpy as np
```

## Power Input

```
#Constants and inputs
g = 32.174  #gravitational acceleration, ft/s^2
rho_LOx = 71.27  #density of Liquid Oxygen, lbm/ft^3
rho_LCH4 = 26.3  #density of Liquid Methane, lbm/ft^3
Differential =  #desired pressure differential (psi)
mLOx =  #mass flow of Liquid Oxygen (lb/s)
mLCH4 =  #mass flow of Liquid Methane (lb/s)

#Head calculations
HLOx = (((Differential)*144)/(rho_LOx * g))*32.174  #head of Liquid Oxygen, ft
HLCH4 = (((Differential)*144)/(rho_LCH4 * g))*32.174  #head of Liquid Methane, ft

#Power calculations - assume a 75% efficiency (minimum value that we can reach)
Power_LOx = (((mLOx * g * HLOx)/0.75)/32.174) * 1.36  #output is in Watts
Power_LCH4 = (((mLCH4 * g * HLCH4)/0.75)/32.174) * 1.36  #output is in Watts
```

## Impeller Calculations

### Constants

```
QLOx = mLOx/rho_LOx  #volumetric flow rate of Liquid Oxygen in ft^3/s
QLCH4 = mLCH4/rho_LCH4  #volumetric flow rate of Liquid Methane in ft^3/s
Eff_Vol =  #volumetric efficiency is a measure of how much fluid is lost due to leakages; estimate the value
QImp_LOx = QLOx/Eff_Vol  #impeller flow rate of Liquid Oxygen in ft^3/s
QImp_LCH4 = QLCH4/Eff_Vol  #impeller flow rate of Liquid Methane in ft^3/s
n =  #RPM of impeller, pick such that nq_LOx is low but not too low
nq_LOx = n * (QImp_LOx ** 0.5)/(HLOx ** 0.75)  #specific speed of Liquid Oxygen
nq_LCH4 = n * (QImp_LCH4 ** 0.5)/(HLCH4 ** 0.75)  #specific speed of Liquid Methane
omegas_LOx = nq_LOx/52.9  #universal specific speed
omegas_LCH4 = nq_LCH4/52.9  #universal specific speed
tau =  #shear stress of desired metal (Pa)
fq = 1  #number of impeller inlets, either 1 or 2
f_t = 1.1  #given earlier in the text
PC_LOx = 1.21*f_t*(np.exp(-0.408*omegas_LOx))*nq_LOx  #pressure coefficient of static pressure rise in impeller of Liquid Oxygen; the equation given uses nq_ref, but nq is used here because an nq_ref was not defined
PC_LCH4 = 1.21*f_t*(np.exp(-0.408*omegas_LCH4))*nq_LCH4  #pressure coefficient of static pressure rise in impeller of Liquid Methane
```

#### Shaft diameter

```
dw_LOx = 3.65*(Power_LOx)/(n*tau)  #shaft diameter of Liquid Oxygen impeller (n is the impeller RPM defined above)
dw_LCH4 = 3.65*(Power_LCH4)/(n*tau)  #shaft diameter of Liquid Methane impeller
```

#### Specific Speed

```
q_LOx = QLOx * 3600 * (.3048 ** 3)  #converts ft^3/s to m^3/h
q_LCH4 = QLCH4 * 3600 * (.3048 ** 3)  #converts ft^3/s to m^3/h
ps = 200  #static pressure in fluid close to impeller in psi
pv_LOX =  #vapor pressure of Oxygen at temperature, in psi
pv_LCH4 =  #vapor pressure of Methane at temperature, in psi
A_LOx =  #area of the LOx inlet pipe in ft^2
A_LCH4 =  #area of the LCH4 inlet pipe in ft^2
v_LOx = (mLOx / rho_LOx) / A_LOx
v_LCH4 = (mLCH4 / rho_LCH4) / A_LCH4
NPSH_LOx = ps/rho_LOx * (v_LOx ** 2)/(2*9.81) - pv_LOX/rho_LOx
NPSH_LCH4 = ps/rho_LCH4 * (v_LCH4 ** 2)/(2*9.81) - pv_LCH4/rho_LCH4  #note: uses the Methane density, velocity and vapor pressure, not the Oxygen values
nss_LOx = n*(q_LOx ** 0.5)/(NPSH_LOx ** 0.75)
nss_LCH4 = n*(q_LCH4 ** 0.5)/(NPSH_LCH4 ** 0.75)
```

#### Inlet diameter

```
#Note: the equation given in the book uses a (1+tan(Beta1)/tan(alpha1)) term, but since the impeller is radial, alpha1 is 90 degrees, so the term reduces to a multiplication by 1
#Beta1 is determined by finding the specific suction speed and reading off of the graph, or using the correlation below
kn =  #kn = 1 - (dn ** 2)/(d1 ** 2); just choose a value (inlet diameter assumed ~1.15x the hub diameter dn), since d1 depends on the value of kn and vice versa
tan_Beta1_LOx = (kn) ** 1.1 * (125/nss_LOx) ** 2.2 * (nq_LOx/27) ** 0.418  #calculates Beta with a 40% std deviation, so a large range of values is consistent with this formula
tan_Beta1_LCH4 = (kn) ** 1.1 * (125/nss_LCH4) ** 2.2 * (nq_LCH4/27) ** 0.418  #calculates Beta with a 40% std deviation, so a large range of values is consistent with this formula
d1_LOx = 2.9 * (QImp_LOx/(fq*n*kn*tan_Beta1_LOx)) ** (1/3)  #note: ** for the cube root; ^ is bitwise XOR in Python
d1_LCH4 = 2.9 * (QImp_LCH4/(fq*n*kn*tan_Beta1_LCH4)) ** (1/3)
```

#### Exit Diameter

```
d2_LOx = 60/(np.pi * n) * (2 * 9.81 * (HLOx * 0.3048)/(PC_LOx)) ** 0.5
d2_LCH4 = 60/(np.pi * n) * (2 * 9.81 * (HLCH4 * 0.3048)/(PC_LCH4)) ** 0.5
```

#### Blade Thickness

```
e_LOx = 0.022 * d2_LOx  #blade thickness for LOx; this number may have to go up for manufacturing purposes
e_LCH4 = 0.022 * d2_LCH4  #blade thickness for LCH4; this number may have to go up for manufacturing purposes
```

#### Leading and Trailing Edge Profiles

```
cp_min_sf = 0.155
Lp1_LOx = (2 + (4 + 4 * ((cp_min_sf/0.373)/e_LOx)*(0.373 * e_LOx)) ** 0.5)/(2 * (cp_min_sf/0.373)/e_LOx)  #leading edge profile, simplification of the formula in Centrifugal Pumps in terms of the quadratic formula
Lp2_LOx = (2 - (4 + 4 * ((cp_min_sf/0.373)/e_LOx)*(0.373 * e_LOx)) ** 0.5)/(2 * (cp_min_sf/0.373)/e_LOx)  #leading edge profile, the other quadratic-formula root
Lp1_LCH4 = (2 + (4 + 4 * ((cp_min_sf/0.373)/e_LCH4)*(0.373 * e_LCH4)) ** 0.5)/(2 * (cp_min_sf/0.373)/e_LCH4)  #leading edge profile, simplification of the formula in Centrifugal Pumps in terms of the quadratic formula
Lp2_LCH4 = (2 - (4 + 4 * ((cp_min_sf/0.373)/e_LCH4)*(0.373 * e_LCH4)) ** 0.5)/(2 * (cp_min_sf/0.373)/e_LCH4)  #leading edge profile, the other quadratic-formula root
#Take whichever value above comes out positive; an elliptical profile was assumed, for which cp,min,sf was given as 0.155. The formula changes if cp_min_sf changes
TE_LOx = e_LOx/2  #trailing edge for Liquid Oxygen using the simplest formula given
TE_LCH4 = e_LCH4/2  #trailing edge for Liquid Methane using the simplest formula given
```

# Impeller Calculations

```
#Reference values given on page 667 of Centrifugal Pumps and then converted to imperial from metric
nq_ref = 40  #unitless
Href = 3280.84  #reference head, converted from meters to feet
rho_ref = 62.428  #lb/ft^3
tau3 = 1  #given
epsilon_sp = np.pi  #radians.
Guessed from the fact that doube volutes are generally at 180 QLe_LOx = QImp_LOx/0.95 * 0.0283168 #m^3/s. Assume that the leakages due to the volute are really low QLe_LCH4 = QImp_LCH4/0.95 * 0.0283168 #m^3/s b3_LOx = 1 #Guess; Width of the diffuser inlet (cm) b3_LCH4 = 1 #Guess; Width of the diffuser inlet (cm) u2_LOX = (np.pi*d2_LOx*n)/60 #Circumferential speed at the outer diameter of the impeller for Liquid Oxygen u2_LCH4 = (np.pi*d2_LCH4*n)/60 #Circumferential speed at the outer diameter of the impeller for Liquid Methane u1m_LOx = (np.pi*d1_LOx*n)/60 #Circumferential speed at the inner diameter of the impeller for Liquid Oxygen u1m_LCH4 = (np.pi*d1_LOx*n)/60 #Circumferential speed at the inner diameter of the impeller for Liquid Methane c1u = 1 #Formula is c1m/tan(alpha1) but alpha1 is 90 degrees, so it simplifies to 1 Qref = 1 #Since Volumetric Flow was calculated absolutely, the "reference" value is 1 a = 1 #Taken from book for Q less than or equal to 1 m^3/s m_LOx = 0.08 * a * (Qref/QImp_LOx) ** 0.15 * (45/nq_LOx) ** 0.06 #Exponential to find hydraulic efficiency m_LCH4 = 0.08 * a * (Qref/QImp_LCH4) ** 0.15 * (45/nq_LCH4) ** 0.06 #Expoential to find hydraulic efficiency Eff_Hyd_LOx = 1 - 0.055 * (Qref/QImp_LOx) ** m_LOx - 0.2 * (0.26 - np.log10(nq_LOx/25)) ** 2 #Hydraulic Efficiency of LOx Pump Eff_Hyd_LCH4 = 1 - 0.055 * (Qref/QImp_LCH4) ** m_LCH4 - 0.2 * (0.26 - np.log10(nq_LCH4/25)) ** 2 #Hydraulic Efficiency of LCH4 Pump c2u_LOx = (g*HLOx)/(Eff_Hyd_LOx*u2_LOx)+(u1m_LOx*c1u)/u2_LOx #Circumferential component of absolute velocity at impeller outlet for Liquid Oxygen c2u_LCH4 = (g*HLCH4)/(Eff_Hyd_LCH4*u2_LCH4)+(u1m_LCH4*c1u)/u2_LCH4 #Circumferential component of absolute velocity at impeller outlet for Liquid Methane d3_LOx = d2_LOx * (1.03 + 0.1*(nq_LOx/nq_ref)*0.07(rho_LOx * HLOX)/(rho_ref*Href)) #distance of the gap bewteen the impeller and volute for Liquid Oxygen d3_LCH4 = d2_LCH4 * (1.03 + 0.1*(nq_LCH4/nq_ref)*0.07(rho_LCH4 * HLCH4)/(rho_ref*Href)) 
#distance of the gap bewteen the impeller and volute for Liquid Methane c3u_LOx = d2_LOx * c2u_LOx / d3_LOx #Circumferential component of absolute velocity at diffuser inlet for Liquid Oxygen c3u_LCH4 = d2_LCH4 * c2u_LCH4 / d3_LCH4 #Circumferential component of absolute velocity at diffuser inlet for Liquid Methane c3m_LOx = QLe_LOx*tau3/(np.pi*d3_LOx*b3_LOx) #Meridional component of absolute velocity at diffuser inlet for Liquid Oxygen c3m_LCH4 = QLe_LCH4 * tau3/(np.pi*d3_LCH4*b3_LCH4) #Meridional component of absolute velocity at diffuser inlet for Liquid Methane tan_alpha3_LOx = c3m_LOx/c3u_LOx #Flow angle at diffuser inlet with blockage for Liquid Oxygen tan_alpha3_LCH4 = c3m_LCH4/c3u_LCH4 #Flow angle at diffuser inlet with blockage for Liquid Methane alpha3b_LOx = np.degrees(np.arctan(tan_alpha3_LOx)) + 3 #Degrees. Diffuser vane inlet, can change the scalar 3 anywhere in the realm of real numbers of [-3,3] for Liquid Oxygen alpha3b_LCH4 = np.degrees(np.arctan(tan_alpha3_LCH4)) + 3 #Degrees. Diffuser vane inlet, can change the scalar 3 anywhere in the realm of real numbers of [-3,3] for Liquid Methane r2_LOx = d2_LOx/2 #Radius of the impeller outlet for Liquid Oxygen r2_LCH4 = d2_LCH4/2 #Radius of the impeller outlet for Liquid Methane #Throat area calculations, many variables are used that aren't entirely explained Xsp_LOx = (QLe_LOx * epsilon_sp)/(np.pi*c2u_LOx*r2_LOx * 2 * np.pi) Xsp_LCH4 = (QLe_LCH4 * epsilon_sp)/(np.pi*c2u_LCH4*r2_LCH4 * 2 * np.pi) d3q_LOx = Xsp_LOx + (2*d3_LOx*Xsp_LOx) ** 0.5 d3q_LCH4 = Xsp_LCH4 + (2*d3_LCH4*Xsp_LCH4) ** 0.5 A3q_LOx = np.pi*((d3q_LOx) ** 2)/4 A3q_LCH4 = np.pi*((d3q_LCH4) ** 2)/4 ```
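The NPSH and suction-specific-speed chain above is easy to get wrong because of mixed units. Below is a small self-contained sketch of the same two formulas in consistent SI units; the numeric inputs are illustrative placeholders, not the design values of this pump.

```python
import math

def npsh_available(p_static, p_vapor, rho, v, g=9.81):
    """NPSH_a = (p_static - p_vapor)/(rho*g) + v^2/(2*g), all SI (Pa, kg/m^3, m/s) -> meters."""
    return (p_static - p_vapor) / (rho * g) + v**2 / (2 * g)

def suction_specific_speed(n_rpm, q_m3s, npsh_m):
    """Dimensional suction specific speed nss = n * sqrt(Q) / NPSH^0.75 (rpm, m^3/s, m)."""
    return n_rpm * math.sqrt(q_m3s) / npsh_m**0.75

# Illustrative numbers only (roughly LOx-like density, arbitrary pressures):
npsh = npsh_available(p_static=1.4e6, p_vapor=2.0e5, rho=1140.0, v=5.0)
nss = suction_specific_speed(20000, 0.05, npsh)
```

Running the chain with known inputs like this makes it straightforward to check each intermediate value before substituting the real propellant properties.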
```
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from functools import partial

n_inputs = 28*28
n_hidden1 = 100
n_hidden2 = 100
n_hidden3 = 100
n_hidden4 = 100
n_hidden5 = 100
n_outputs = 5

# Let's define the placeholders for the inputs and the targets
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")

# Let's create the DNN
he_init = tf.contrib.layers.variance_scaling_initializer()

my_dense_layer = partial(
    tf.layers.dense,
    activation=tf.nn.elu,
    kernel_initializer=he_init)

with tf.name_scope("dnn"):
    hidden1 = my_dense_layer(X, n_hidden1, name="hidden1")
    hidden2 = my_dense_layer(hidden1, n_hidden2, name="hidden2")
    hidden3 = my_dense_layer(hidden2, n_hidden3, name="hidden3")
    hidden4 = my_dense_layer(hidden3, n_hidden4, name="hidden4")
    hidden5 = my_dense_layer(hidden4, n_hidden5, name="hidden5")
    logits = my_dense_layer(hidden5, n_outputs, activation=None, name="outputs")
    Y_proba = tf.nn.softmax(logits, name="Y_proba")

learning_rate = 0.01

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    training_op = optimizer.minimize(loss, name="training_op")

with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

init = tf.global_variables_initializer()
saver = tf.train.Saver()

mnist = input_data.read_data_sets("/tmp/data/")

X_train1 = mnist.train.images[mnist.train.labels < 5]
y_train1 = mnist.train.labels[mnist.train.labels < 5]
X_valid1 = mnist.validation.images[mnist.validation.labels < 5]
y_valid1 = mnist.validation.labels[mnist.validation.labels < 5]
X_test1 = mnist.test.images[mnist.test.labels < 5]
y_test1 = mnist.test.labels[mnist.test.labels < 5]

n_epochs = 1000
batch_size = 20
max_checks_without_progress = 20
checks_without_progress = 0
best_loss = np.infty

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        rnd_idx = np.random.permutation(len(X_train1))
        for rnd_indices in np.array_split(rnd_idx, len(X_train1) // batch_size):
            X_batch, y_batch = X_train1[rnd_indices], y_train1[rnd_indices]
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        # Calculate loss and acc on the validation set to do early stopping
        loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid1, y: y_valid1})
        if loss_val < best_loss:
            save_path = saver.save(sess, "./my_mnist_model_0_to_4.ckpt")
            best_loss = loss_val
            checks_without_progress = 0
        else:
            checks_without_progress += 1
            if checks_without_progress > max_checks_without_progress:
                print("Early stopping!")
                break
        print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format(
            epoch, loss_val, best_loss, acc_val * 100))

with tf.Session() as sess:
    saver.restore(sess, "./my_mnist_model_0_to_4.ckpt")
    acc_test = accuracy.eval(feed_dict={X: X_test1, y: y_test1})
    print("Final test accuracy: {:.2f}%".format(acc_test * 100))
```

<h1>DNNClassifier</h1>

```
import numpy as np
import tensorflow as tf
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.exceptions import NotFittedError

class DNNClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, n_hidden_layers=5, n_neurons=100, optimizer_class=tf.train.AdamOptimizer,
                 learning_rate=0.01, batch_size=20, activation=tf.nn.elu, initializer=he_init,
                 batch_norm_momentum=None, dropout_rate=None, random_state=None):
        """Initialize the DNNClassifier by simply storing all the hyperparameters."""
        self.n_hidden_layers = n_hidden_layers
        self.n_neurons = n_neurons
        self.optimizer_class = optimizer_class
        self.learning_rate = learning_rate
        self.batch_size = batch_size
        self.activation = activation
        self.initializer = initializer
        self.batch_norm_momentum = batch_norm_momentum
        self.dropout_rate = dropout_rate
        self.random_state = random_state
        self._session = None

    def _dnn(self, inputs):
        """Build the hidden layers, with support for batch normalization and dropout."""
        for layer in range(self.n_hidden_layers):
            if self.dropout_rate:
                inputs = tf.layers.dropout(inputs, self.dropout_rate, training=self._training)
            inputs = tf.layers.dense(inputs, self.n_neurons,
                                     kernel_initializer=self.initializer,
                                     name="hidden%d" % (layer + 1))
            if self.batch_norm_momentum:
                inputs = tf.layers.batch_normalization(inputs, momentum=self.batch_norm_momentum,
                                                       training=self._training)
            inputs = self.activation(inputs, name="hidden%d_out" % (layer + 1))
        return inputs

    def _build_graph(self, n_inputs, n_outputs):
        """Build the same model as earlier"""
        if self.random_state is not None:
            tf.set_random_seed(self.random_state)
            np.random.seed(self.random_state)

        X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
        y = tf.placeholder(tf.int32, shape=(None), name="y")

        if self.batch_norm_momentum or self.dropout_rate:
            self._training = tf.placeholder_with_default(False, shape=(), name='training')
        else:
            self._training = None

        dnn_outputs = self._dnn(X)

        logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init, name="logits")
        Y_proba = tf.nn.softmax(logits, name="Y_proba")

        xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
        loss = tf.reduce_mean(xentropy, name="loss")

        optimizer = self.optimizer_class(learning_rate=self.learning_rate)
        training_op = optimizer.minimize(loss)

        correct = tf.nn.in_top_k(logits, y, 1)
        accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

        init = tf.global_variables_initializer()
        saver = tf.train.Saver()

        # Make the important operations available easily through instance variables
        self._X, self._y = X, y
        self._Y_proba, self._loss = Y_proba, loss
        self._training_op, self._accuracy = training_op, accuracy
        self._init, self._saver = init, saver

    def close_session(self):
        if self._session:
            self._session.close()

    def _get_model_params(self):
        """Get all variable values (used for early stopping, faster than saving to disk)"""
        with self._graph.as_default():
            gvars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
        return {gvar.op.name: value for gvar, value in zip(gvars, self._session.run(gvars))}

    def _restore_model_params(self, model_params):
        """Set all variables to the given values (for early stopping, faster than loading from disk)"""
        gvar_names = list(model_params.keys())
        assign_ops = {gvar_name: self._graph.get_operation_by_name(gvar_name + "/Assign")
                      for gvar_name in gvar_names}
        init_values = {gvar_name: assign_op.inputs[1] for gvar_name, assign_op in assign_ops.items()}
        feed_dict = {init_values[gvar_name]: model_params[gvar_name] for gvar_name in gvar_names}
        self._session.run(assign_ops, feed_dict=feed_dict)

    def fit(self, X, y, n_epochs=100, X_valid=None, y_valid=None):
        """Fit the model to the training set. If X_valid and y_valid are provided, use early stopping."""
        self.close_session()

        # infer n_inputs and n_outputs from the training set.
        n_inputs = X.shape[1]
        self.classes_ = np.unique(y)
        n_outputs = len(self.classes_)

        # Translate the labels vector to a vector of sorted class indices, containing
        # integers from 0 to n_outputs - 1.
        # For example, if y is equal to [8, 8, 9, 5, 7, 6, 6, 6], then the sorted class
        # labels (self.classes_) will be equal to [5, 6, 7, 8, 9], and the labels vector
        # will be translated to [3, 3, 4, 0, 2, 1, 1, 1]
        self.class_to_index_ = {label: index for index, label in enumerate(self.classes_)}
        y = np.array([self.class_to_index_[label] for label in y], dtype=np.int32)

        self._graph = tf.Graph()
        with self._graph.as_default():
            self._build_graph(n_inputs, n_outputs)
            # extra ops for batch normalization
            extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

        # needed in case of early stopping
        max_checks_without_progress = 20
        checks_without_progress = 0
        best_loss = np.infty
        best_params = None

        # Now train the model!
        self._session = tf.Session(graph=self._graph)
        with self._session.as_default() as sess:
            self._init.run()
            for epoch in range(n_epochs):
                rnd_idx = np.random.permutation(len(X))
                for rnd_indices in np.array_split(rnd_idx, len(X) // self.batch_size):
                    X_batch, y_batch = X[rnd_indices], y[rnd_indices]
                    feed_dict = {self._X: X_batch, self._y: y_batch}
                    if self._training is not None:
                        feed_dict[self._training] = True
                    sess.run(self._training_op, feed_dict=feed_dict)
                    if extra_update_ops:
                        sess.run(extra_update_ops, feed_dict=feed_dict)
                if X_valid is not None and y_valid is not None:
                    loss_val, acc_val = sess.run([self._loss, self._accuracy],
                                                 feed_dict={self._X: X_valid, self._y: y_valid})
                    if loss_val < best_loss:
                        best_params = self._get_model_params()
                        best_loss = loss_val
                        checks_without_progress = 0
                    else:
                        checks_without_progress += 1
                    print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format(
                        epoch, loss_val, best_loss, acc_val * 100))
                    if checks_without_progress > max_checks_without_progress:
                        print("Early stopping!")
                        break
                else:
                    loss_train, acc_train = sess.run([self._loss, self._accuracy],
                                                     feed_dict={self._X: X_batch, self._y: y_batch})
                    print("{}\tLast training batch loss: {:.6f}\tAccuracy: {:.2f}%".format(
                        epoch, loss_train, acc_train * 100))
            # If we used early stopping then rollback to the best model found
            if best_params:
                self._restore_model_params(best_params)
            return self

    def predict_proba(self, X):
        if not self._session:
            raise NotFittedError("This %s instance is not fitted yet" % self.__class__.__name__)
        with self._session.as_default() as sess:
            return self._Y_proba.eval(feed_dict={self._X: X})

    def predict(self, X):
        class_indices = np.argmax(self.predict_proba(X), axis=1)
        return np.array([[self.classes_[class_index]]
                         for class_index in class_indices], np.int32)

    def save(self, path):
        self._saver.save(self._session, path)

dnn_clf = DNNClassifier(random_state=42)
dnn_clf.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)

from sklearn.metrics import accuracy_score

y_pred = dnn_clf.predict(X_test1)
accuracy_score(y_test1, y_pred)

from sklearn.model_selection import RandomizedSearchCV

def leaky_relu(alpha=0.01):
    def parametrized_leaky_relu(z, name=None):
        return tf.maximum(alpha * z, z, name=name)
    return parametrized_leaky_relu

param_distribs = {
    "n_neurons": [10, 30, 50, 70, 90, 100, 120, 140, 160],
    "batch_size": [10, 50, 100, 500],
    "learning_rate": [0.01, 0.02, 0.05, 0.1],
    "activation": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)],
    # you could also try exploring different numbers of hidden layers, different optimizers, etc.
    #"n_hidden_layers": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    #"optimizer_class": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)],
}

rnd_search = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50,
                                fit_params={"X_valid": X_valid1, "y_valid": y_valid1, "n_epochs": 1000},
                                random_state=42, verbose=2)
rnd_search.fit(X_train1, y_train1)

rnd_search.best_params_

y_pred = rnd_search.predict(X_test1)
accuracy_score(y_test1, y_pred)

rnd_search.best_estimator_.save("./my_best_mnist_model_0_to_4")

# Let's train the best model found, once again, to see how fast it converges
dnn_clf = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,
                        n_neurons=50, random_state=42)
dnn_clf.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)

y_pred = dnn_clf.predict(X_test1)
accuracy_score(y_test1, y_pred)

# The accuracy differs here because leaky_relu was used for training instead of the relu
# suggested by rnd_search.best_params_. Surprisingly, the accuracy actually comes out better.

# Let's try to add Batch Normalization
dnn_clf_bn = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,
                           n_neurons=50, random_state=42, batch_norm_momentum=0.95)
dnn_clf_bn.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)

y_pred = dnn_clf_bn.predict(X_test1)
accuracy_score(y_test1, y_pred)

# Batch Normalization did not improve the accuracy. We should run another hyperparameter
# search with BN enabled and try again.
# ...

# Now let's go back to our previous model and see how well it performs on the training set
y_pred = dnn_clf.predict(X_train1)
accuracy_score(y_train1, y_pred)

# Much better than on the test set, so the model is probably overfitting the training set.
# Let's try using dropout
dnn_clf_dropout = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,
                                n_neurons=50, random_state=42, dropout_rate=0.5)
dnn_clf_dropout.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)

y_pred = dnn_clf_dropout.predict(X_test1)
accuracy_score(y_test1, y_pred)

# Dropout doesn't seem to help either. As before, tuning the network with dropout enabled
# would be the next step.
# ...
```

<h1>Transfer Learning</h1>

<p>Let's try to reuse the previous model on digits from 5 to 9, using only 100 images per digit!</p>

```
restore_saver = tf.train.import_meta_graph("./my_best_mnist_model_0_to_4.meta")

X = tf.get_default_graph().get_tensor_by_name("X:0")
y = tf.get_default_graph().get_tensor_by_name("y:0")
loss = tf.get_default_graph().get_tensor_by_name("loss:0")
Y_proba = tf.get_default_graph().get_tensor_by_name("Y_proba:0")
logits = Y_proba.op.inputs[0]
accuracy = tf.get_default_graph().get_tensor_by_name("accuracy:0")

learning_rate = 0.01

output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="logits")
optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam2")
# Freeze all the hidden layers
training_op = optimizer.minimize(loss, var_list=output_layer_vars)

correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

init = tf.global_variables_initializer()
five_frozen_saver = tf.train.Saver()

X_train2_full = mnist.train.images[mnist.train.labels >= 5]
y_train2_full = mnist.train.labels[mnist.train.labels >= 5] - 5
X_valid2_full = mnist.validation.images[mnist.validation.labels >= 5]
y_valid2_full = mnist.validation.labels[mnist.validation.labels >= 5] - 5
X_test2 = mnist.test.images[mnist.test.labels >= 5]
y_test2 = mnist.test.labels[mnist.test.labels >= 5] - 5

def sample_n_instances_per_class(X, y, n=100):
    Xs, ys = [], []
    for label in np.unique(y):
        idx = (y == label)
        Xc = X[idx][:n]
        yc = y[idx][:n]
        Xs.append(Xc)
        ys.append(yc)
    return np.concatenate(Xs), np.concatenate(ys)

X_train2, y_train2 = sample_n_instances_per_class(X_train2_full, y_train2_full, n=100)
X_valid2, y_valid2 = sample_n_instances_per_class(X_valid2_full, y_valid2_full, n=30)

import time

n_epochs = 1000
batch_size = 20
max_checks_without_progress = 20
checks_without_progress = 0
best_loss = np.infty

with tf.Session() as sess:
    init.run()
    restore_saver.restore(sess, "./my_best_mnist_model_0_to_4")
    for var in output_layer_vars:
        var.initializer.run()

    t0 = time.time()

    for epoch in range(n_epochs):
        rnd_idx = np.random.permutation(len(X_train2))
        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):
            X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})
        if loss_val < best_loss:
            save_path = five_frozen_saver.save(sess, "./my_mnist_model_5_to_9_five_frozen")
            best_loss = loss_val
            checks_without_progress = 0
        else:
            checks_without_progress += 1
            if checks_without_progress > max_checks_without_progress:
                print("Early stopping!")
                break
        print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format(
            epoch, loss_val, best_loss, acc_val * 100))

    t1 = time.time()
    print("Total training time: {:.1f}s".format(t1 - t0))

with tf.Session() as sess:
    five_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_five_frozen")
    acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})
    print("Final test accuracy: {:.2f}%".format(acc_test * 100))
```

<p>As we can see, not so good...But of course, we're using 100 images per digit and we only changed the output layer.</p>

```
# Let's try to reuse only 4 hidden layers instead of 5
n_outputs = 5

restore_saver = tf.train.import_meta_graph("./my_best_mnist_model_0_to_4.meta")

X = tf.get_default_graph().get_tensor_by_name("X:0")
y = tf.get_default_graph().get_tensor_by_name("y:0")

hidden4_out = tf.get_default_graph().get_tensor_by_name("hidden4_out:0")
logits = tf.layers.dense(hidden4_out, n_outputs, kernel_initializer=he_init, name="new_logits")
Y_proba = tf.nn.softmax(logits)

xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)

correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

learning_rate = 0.01

output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="new_logits")
optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam2")
training_op = optimizer.minimize(loss, var_list=output_layer_vars)

init = tf.global_variables_initializer()
four_frozen_saver = tf.train.Saver()

n_epochs = 1000
batch_size = 20
max_checks_without_progress = 20
checks_without_progress = 0
best_loss = np.infty

with tf.Session() as sess:
    init.run()
    restore_saver.restore(sess, "./my_best_mnist_model_0_to_4")
    for epoch in range(n_epochs):
        rnd_idx = np.random.permutation(len(X_train2))
        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):
            X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})
        if loss_val < best_loss:
            save_path = four_frozen_saver.save(sess, "./my_mnist_model_5_to_9_four_frozen")
            best_loss = loss_val
            checks_without_progress = 0
        else:
            checks_without_progress += 1
            if checks_without_progress > max_checks_without_progress:
                print("Early stopping!")
                break
        print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format(
            epoch, loss_val, best_loss, acc_val * 100))

with tf.Session() as sess:
    four_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_four_frozen")
    acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})
    print("Final test accuracy: {:.2f}%".format(acc_test * 100))
```

<p>Well, a bit better...</p>

```
# Let's try now to unfreeze the last two layers
learning_rate = 0.01

unfrozen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="hidden[34]|new_logits")
optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam3")
training_op = optimizer.minimize(loss, var_list=unfrozen_vars)

init = tf.global_variables_initializer()
two_frozen_saver = tf.train.Saver()

n_epochs = 1000
batch_size = 20
max_checks_without_progress = 20
checks_without_progress = 0
best_loss = np.infty

with tf.Session() as sess:
    init.run()
    four_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_four_frozen")
    for epoch in range(n_epochs):
        rnd_idx = np.random.permutation(len(X_train2))
        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):
            X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})
        if loss_val < best_loss:
            save_path = two_frozen_saver.save(sess, "./my_mnist_model_5_to_9_two_frozen")
            best_loss = loss_val
            checks_without_progress = 0
        else:
            checks_without_progress += 1
            if checks_without_progress > max_checks_without_progress:
                print("Early stopping!")
                break
        print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format(
            epoch, loss_val, best_loss, acc_val * 100))

with tf.Session() as sess:
    two_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_two_frozen")
    acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})
    print("Final test accuracy: {:.2f}%".format(acc_test * 100))
```

<p>Not bad...And what if we unfreeze all the layers?</p>

```
learning_rate = 0.01

optimizer = tf.train.AdamOptimizer(learning_rate, name="Adam4")
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
no_frozen_saver = tf.train.Saver()

n_epochs = 1000
batch_size = 20
max_checks_without_progress = 20
checks_without_progress = 0
best_loss = np.infty

with tf.Session() as sess:
    init.run()
    two_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_two_frozen")
    for epoch in range(n_epochs):
        rnd_idx = np.random.permutation(len(X_train2))
        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):
            X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})
        if loss_val < best_loss:
            save_path = no_frozen_saver.save(sess, "./my_mnist_model_5_to_9_no_frozen")
            best_loss = loss_val
            checks_without_progress = 0
        else:
            checks_without_progress += 1
            if checks_without_progress > max_checks_without_progress:
                print("Early stopping!")
                break
        print("{}\tValidation loss: {:.6f}\tBest loss: {:.6f}\tAccuracy: {:.2f}%".format(
            epoch, loss_val, best_loss, acc_val * 100))

with tf.Session() as sess:
    no_frozen_saver.restore(sess, "./my_mnist_model_5_to_9_no_frozen")
    acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})
    print("Final test accuracy: {:.2f}%".format(acc_test * 100))

# Let's compare this result with a DNN trained from scratch
dnn_clf_5_to_9 = DNNClassifier(n_hidden_layers=4, random_state=42)
dnn_clf_5_to_9.fit(X_train2, y_train2, n_epochs=1000, X_valid=X_valid2, y_valid=y_valid2)

from sklearn.metrics import accuracy_score
y_pred = dnn_clf_5_to_9.predict(X_test2)
accuracy_score(y_test2, y_pred)
```

<p>Unfortunately in this case transfer learning did not help too much.</p>
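The frozen-layers idea above reduces to a simple pattern: treat the pretrained hidden layers as a fixed feature extractor and train only a new output layer. Here is a minimal framework-agnostic NumPy sketch of that pattern; the "pretrained" projection and the dataset are synthetic stand-ins, not MNIST or the TF model above.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Pretrained" hidden layer: a fixed random projection standing in for frozen weights.
W_frozen = rng.normal(size=(20, 10))
def frozen_features(X):
    return np.maximum(X @ W_frozen, 0.0)  # frozen ReLU layer -- never updated below

# Synthetic 3-class problem, made roughly separable by shifting the class means
X = rng.normal(size=(300, 20))
y = rng.integers(0, 3, size=300)
X[y == 1] += 1.0
X[y == 2] -= 1.0

F = frozen_features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)  # standardize the frozen features

# New softmax head: the ONLY trainable parameters, fit by plain gradient descent
W_head = np.zeros((10, 3))
Y = np.eye(3)[y]
for _ in range(300):
    logits = F @ W_head
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    grad = F.T @ (P - Y) / len(X)                     # cross-entropy gradient w.r.t. the head only
    W_head -= 0.1 * grad

acc = (np.argmax(F @ W_head, axis=1) == y).mean()
```

Because only `W_head` is updated, training is cheap and needs little data, which is exactly the trade-off explored in the five-frozen / four-frozen experiments above: more frozen layers mean fewer parameters to fit but less ability to adapt to the new classes.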
# LB-Colloids

Colloid particle tracking: LB-Colloids allows the user to perform colloid and nanoparticle tracking simulations on computational fluid dynamics domains. As the user, you supply the chemical and physical properties, and the code performs the mathematics and particle tracking!

Let's set up our workspace to begin. We will use the Synthetic5 example problem to parameterize and run LB-Colloids.

```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os

from lb_colloids import LBImage, LB2DModel
from lb_colloids import ColloidModel, cIO

workspace = os.path.join("..", "data")
domain = "Synth100_5.png"
lb_name = "s5.hdf5"
endpoint = "s5.endpoint"
```

First, let's run a lattice Boltzmann model to get our fluid domain. For more details see the LB2D notebook.

```
lbi = LBImage.Images(os.path.join(workspace, domain))
bc = LBImage.BoundaryCondition(lbi.arr, fluidvx=[253], solidvx=[0], nlayers=5)
lbm = LB2DModel(bc.binarized)
lbm.niters = 1000
lbm.run(output=os.path.join(workspace, lb_name), verbose=1000)
```

## Setting up a Colloids particle tracking model

We can begin setting up a Colloids model by using the `ColloidsConfig()` class. This class ensures that valid values are supplied to particle tracking variables, and it lets the user write an external particle tracking configuration file for documentation and later reuse.

Let's generate an empty `ColloidsConfig` instance:

```
io = cIO.ColloidsConfig()
```

`ColloidsConfig()` is parameterized through dictionary keys. Common parameters include:

- `lbmodel`: required parameter that points to the CFD fluid domain
- `ncols`: required parameter that describes the number of colloids released
- `iters`: number of time steps to simulate transport
- `lbres`: the lattice Boltzmann simulation resolution in meters
- `gridref`: optional grid refinement parameter; uses bi-linear interpolation
- `ac`: colloid radius in meters
- `timestep`: the timestep length in seconds. Very small timesteps are recommended!
- `continuous`: flag for continuous release. If 0, a single release of colloids occurs; if > 0, a release of colloids occurs every `continuous` timesteps
- `i`: fluid ionic strength in M
- `print_time`: how often iteration progress prints to the screen
- `endpoint`: endpoint file name to store breakthrough information
- `store_time`: internal option that can be used to reduce memory requirements; a higher `store_time` means less memory devoted to storing colloid positions (old positions are stripped every `store_time` timesteps)
- `zeta_colloid`: zeta potential of the colloid in V
- `zeta_solid`: zeta potential of the solid in V
- `plot`: boolean flag that generates a plot at the end of the model run
- `showfig`: boolean flag that determines whether to show the figure or save it to disk

A complete listing of these is available in the user guide.

```
# model parameters
io["lbmodel"] = os.path.join(workspace, lb_name)
io['ncols'] = 2000
io['iters'] = 50000
io['lbres'] = 1e-6
io['gridref'] = 10
io['ac'] = 1e-06
io['timestep'] = 1e-06  # should be less than or equal to colloid radius!
io['continuous'] = 0

# chemical parameters
io['i'] = 1e-03                 # molar ionic strength of solution
io['zeta_colloid'] = -49.11e-3  # zeta potential of Na-Kaolinite at 1e-03 M NaCl
io['zeta_solid'] = -61.76e-3    # zeta potential of Glass Beads at 1e-03 M NaCl

# output control
io['print_time'] = 10000
io['endpoint'] = os.path.join(workspace, endpoint)
io['store_time'] = 100
io['plot'] = True
io['showfig'] = True
```

We can now look at the parameter dictionaries `ColloidsConfig` creates!

```
io.model_parameters, io.chemical_parameters, io.physical_parameters, io.output_control_parameters
```

We can also write a config file for documentation and later runs, and inspect the information that will be written by using the `io.config` call:

```
io.write(os.path.join(workspace, "s2.config"))
io.config
```

The `ColloidsConfig` output can be used directly with the `Config` reader to instantiate an LB-Colloids model:

```
config = cIO.Config(io.config)
```

and we can run the model using the `ColloidModel.run()` call:

```
ColloidModel.run(config)
```

The output image shows the path of colloids which haven't yet broken through the model domain!

### For ColloidModel outputs please see the LB_Colloids_output_contol notebook
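The dictionary-key pattern that `ColloidsConfig` uses, where assignment itself validates the parameter, can be illustrated with a small stand-alone sketch. This is a generic illustration of key and type validation, not the actual `lb_colloids` implementation; the schema keys and types here are assumptions chosen to mirror the parameters listed above.

```python
class ValidatedConfig:
    """Dict-style config that rejects unknown keys and wrong types on assignment."""

    # Hypothetical schema: a subset of the parameters described above
    _schema = {
        "ncols": int,
        "iters": int,
        "lbres": float,
        "ac": float,
        "timestep": float,
        "plot": bool,
    }

    def __init__(self):
        self._params = {}

    def __setitem__(self, key, value):
        if key not in self._schema:
            raise KeyError("unknown parameter: %s" % key)
        if not isinstance(value, self._schema[key]):
            raise TypeError("%s expects %s" % (key, self._schema[key].__name__))
        self._params[key] = value

    def __getitem__(self, key):
        return self._params[key]

    def config(self):
        """Render the stored parameters as config-file lines."""
        return ["%s: %s" % (k, v) for k, v in sorted(self._params.items())]

cfg = ValidatedConfig()
cfg["ncols"] = 2000
cfg["timestep"] = 1e-6
```

Validating on assignment means a typo'd key or a mistyped value fails immediately at the notebook cell where it was set, rather than deep inside a 50,000-iteration particle tracking run.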
```
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
    return false;
}
```

```
import matplotlib.pyplot as plt
import numpy as np

dt = 0.1

def draw_plot(measurements, mlabel=None, estimates=None, estlabel=None,
              title=None, xlabel=None, ylabel=None):
    xvals = np.linspace(0, dt * len(measurements), len(measurements))
    plt.title(title, fontsize=12)
    xlabel and plt.xlabel("Time in seconds")
    ylabel and plt.ylabel("Distance to Wall in cm")
    ax = plt.subplot(111)
    ax.plot(measurements, label=mlabel)
    np.any(estimates) and estlabel and ax.plot(estimates, label=estlabel)
    box = ax.get_position()
    ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 1.1])
    ax.legend(loc='upper center', bbox_to_anchor=(.5, -0.05), ncol=3)
    plt.show()

def add_noise(data, size=20):
    noise = np.random.uniform(-1, 1, len(data)) * size
    return data + noise
```

### Part A

```
from kalman import predict, update, dot3

plt.rcParams["figure.figsize"] = [12, 9]
wall_file = "RBE500-F17-100ms-Constant-Vel.csv"
data = add_noise(np.loadtxt(wall_file, delimiter=","), 0)

# Setup initial variables and matrices
initial_pos = 2530
velocity = -10.0
variance = 10.0
dt = 0.1
F = np.array([[1., dt],
              [0., 1.]])
P = np.array([[100., 0],
              [0, 100.0]])
H = np.array([[1., 0.]])
R = np.array([[variance]])
Q = np.array([[0.1, 1],
              [1, 10.]])
x = np.array([initial_pos, velocity]).T
predicted_xs = []

def run(x, P, R, Q, dt, zs):
    # run the kalman filter and store the results
    xs, cov = [], []
    for z in zs:
        x, P = predict(x, P, F, Q)
        x, P = update(x, P, z, R, H)
        xs.append(x)
        cov.append(P)
    xs, cov = np.array(xs), np.array(cov)
    return xs, cov

time_values = np.linspace(0, dt * len(data), len(data))
est_x, est_P = run(x, P, R, Q, dt, data)
est_pos = [v[0] for v in est_x]
est_vel = [v[1] for v in est_x]

draw_plot(data, estimates=est_pos, title="Raw Data VS Kalman Estimation",
          mlabel="Measurements", estlabel="Kalman Estimates",
          xlabel="Time in Seconds", ylabel="Distance to Wall in CM")

plt.plot(time_values, est_vel)
plt.xlabel("Time in seconds")
plt.ylabel("Velocity in cm/sec")
plt.title("Velocity Over Time")
plt.show()

pos_std = [p[0][0]**0.5 for p in est_P]
vel_var = [p[1][1]**0.5 for p in est_P]
pos_vel_corr = [p[0][1] for p in est_P]

plt.plot(time_values, pos_std)
plt.ylabel("Position StdDev")
plt.xlabel("Time in Seconds")
plt.title("Position StdDev Over Time")
plt.show()

plt.plot(time_values, vel_var)
plt.ylabel("Velocity StdDev")
plt.xlabel("Time in Seconds")
plt.title("Velocity StdDev Over Time")
plt.show()

plt.plot(time_values, pos_vel_corr)
plt.ylabel("Velocity-Position Correlation")
plt.xlabel("Time in Seconds")
plt.title("Velocity-Position Correlation Over Time")
plt.show()
```

### Part B

```
data = add_noise(np.loadtxt(wall_file, delimiter=","), 0)

# Setup initial variables and matrices
initial_pos = 2530
velocity = -10.0
variance = 10.0
dt = 0.1
F = np.array([[1., dt],
              [0., 1.]])
P = np.array([[100., 0],
              [0, 100.0]])
H = np.array([[1., 0.]])
R = np.array([[variance]])
Q = np.array([[0.1, 1],
              [1, 10.]])
x = np.array([initial_pos, velocity]).T
predicted_xs = []

def run(x, P, R=0, Q=0, dt=0.1, zs=None):
    # run the kalman filter and store the results
    xs, cov = [], []
    for z in zs:
        x, P = predict(x, P, F, Q)
        S = dot3(H, P, H.T) + R
        n = z - np.dot(H, x)
        d = n*n / S
        if d < 9.0:
            x, P = update(x, P, z, R, H)
        xs.append(x)
        cov.append(P)
    xs, cov = np.array(xs), np.array(cov)
    return xs, cov

time_values = np.linspace(0, dt * len(data), len(data))
est_x, est_P = run(x, P, R, Q, dt, data)
est_pos = [v[0] for v in est_x]

draw_plot(data, estimates=est_pos, title="Raw Data VS Kalman Estimation",
          mlabel="Measurements", estlabel="Kalman Estimates",
          xlabel="Time in Seconds", ylabel="Distance to Wall in CM")

est_vel = [v[1] for v in est_x]
plt.plot(time_values, est_vel)
plt.xlabel("Time in seconds")
plt.ylabel("Speed in cm/sec")
plt.title("Velocity Over Time")
plt.show()

pos_std = [p[0][0]**0.5 for p in est_P]
vel_var = [p[1][1]**0.5 for p in est_P]
pos_vel_corr = [p[0][1] for p in est_P]

plt.plot(time_values, pos_std)
plt.ylabel("Position StdDev")
plt.xlabel("Time in Seconds")
plt.title("Position StdDev Over Time")
plt.show()

plt.plot(time_values, vel_var)
plt.ylabel("Velocity StdDev")
plt.xlabel("Time in Seconds")
plt.title("Velocity StdDev Over Time")
plt.show()

plt.plot(time_values, pos_vel_corr)
plt.ylabel("Velocity-Position Correlation")
plt.xlabel("Time in Seconds")
plt.title("Velocity-Position Correlation Over Time")
plt.show()
```

### Part C

```
data = add_noise(np.loadtxt(wall_file, delimiter=","), 0)

def gaussian(x, mu, sig):
    return (1/(np.sqrt(2 * np.pi * np.power(sig, 2.)))) * np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))

# Setup initial variables and matrices
initial_pos = 2530
velocity = -10.0
variance = 10.0
dt = 0.1
F = np.array([[1., dt],
              [0., 1.]])
P = np.array([[100., 0],
              [0, 100.0]])
H = np.array([[1., 0.]])
R = np.array([[variance]])
Q = np.array([[0.1, 1],
              [1, 10.]])
x = np.array([initial_pos, velocity]).T
xs = []

object_c = 0.2
wall_c = 0.8
lamb = 0.0005

def object_pdf(z):
    return object_c * lamb * np.exp(-lamb * z)

def wall_pdf(z, wall_mean):
    return wall_c * gaussian(z, wall_mean, variance)

def run(x, P, R, Q, dt, zs):
    # run the kalman filter and store the results
    xs, cov = [], []
    for z in zs:
        x, P = predict(x, P, F, Q)
        prob_wall = wall_pdf(x[0], z)
        prob_obj = object_pdf(z)
        if prob_obj < prob_wall:
            x, P = update(x, P, z, R, H)
        xs.append(x)
        cov.append(P)
    xs, cov = np.array(xs), np.array(cov)
    return xs, cov

old_est_x = est_x
est_x, est_P = run(x, P, R, Q, dt, data)
est_pos = [v[0] for v in est_x]

draw_plot(data, estimates=est_pos, title="Raw Data VS Kalman Estimation",
          mlabel="Measurements", estlabel="Kalman Estimates",
          xlabel="Time in Seconds", ylabel="Distance to Wall in CM")

est_vel = [v[1] for v in est_x]
plt.plot(time_values, est_vel)
plt.xlabel("Time in seconds")
plt.ylabel("Speed in cm/sec")
plt.title("Velocity Over Time")
plt.show()

pos_std = [p[0][0]**0.5 for p in est_P]
vel_var = [p[1][1]**0.5 for
p in est_P] pos_vel_corr = [p[0][1] for p in est_P] plt.plot(time_values, pos_std) plt.ylabel("Position StdDev") plt.xlabel("Time in Seconds") plt.title("Position StdDev Over Time") plt.show() plt.plot(time_values, vel_var) plt.ylabel("Velocity StdDev") plt.xlabel("Time in Seconds") plt.title("Velocity StdDev Over Time") plt.show() plt.plot(time_values, pos_vel_corr) plt.ylabel("Velocity-Position Correlation") plt.xlabel("Time in Seconds") plt.title("Velocity-Position Correlation Over Time") plt.show() ``` ### Part D 1. Were any anomalous points processed by mistake? - The Kalman filter without outlier detection processed the object readings. Both outlier-detection methods successfully removed the object readings. 2. Were any valid points rejected? - Neither outlier-detection method rejected valid points. 3. How can each of these methods fail? - The chi-square gate can fail if the filter's prediction drifts, for example after an unmodeled maneuver: once the estimate diverges, valid readings fall outside the gate and are discarded, so the filter never recovers. The likelihood comparison fails when the object and wall measurement distributions overlap, since readings near the crossover point can be misclassified in either direction. Both methods also inherit the Kalman filter's own assumptions, so strongly nonlinear motion or non-Gaussian noise degrades them as well.
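The chi-square validation gate used in Part B (`d < 9.0` on the normalized squared innovation) can be isolated into a small, testable helper. A minimal sketch for the scalar-measurement case; the 9.0 threshold (roughly a 3-sigma gate) is taken from the code above, and `passes_gate` is a hypothetical name, not part of the assignment code:

```python
def passes_gate(z, z_pred, S, threshold=9.0):
    """Chi-square validation gate: accept measurement z only if the
    normalized squared innovation is below the threshold (9.0 is
    roughly a 3-sigma gate for a scalar measurement)."""
    nu = z - z_pred       # innovation (measurement residual)
    d2 = nu * nu / S      # normalized squared innovation
    return d2 < threshold

# A reading 2 standard deviations from the prediction is accepted
# (d2 = 4 < 9), while one 4 standard deviations away is rejected
# (d2 = 16 >= 9).
print(passes_gate(12.0, 10.0, 1.0))  # True
print(passes_gate(14.0, 10.0, 1.0))  # False
```

In the filter loop above, a rejected measurement simply skips the `update` step, so the state evolves on the `predict` step alone until a plausible reading arrives.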
# Self-Driving Car Engineer Nanodegree ## Deep Learning ## Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
--- ## Step 0: Load The Data ``` # Load pickled data import pickle # TODO: Fill this in based on where you saved the training and testing data training_file = "traffic-signs-data/train.p" validation_file= "traffic-signs-data/valid.p" testing_file = "traffic-signs-data/test.p" with open(training_file, mode='rb') as f: train = pickle.load(f) with open(validation_file, mode='rb') as f: valid = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_train, y_train = train['features'], train['labels'] X_valid, y_validation = valid['features'], valid['labels'] X_test, y_test = test['features'], test['labels'] ``` --- ## Step 1: Dataset Summary & Exploration The pickled data is a dictionary with 4 key/value pairs: - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). - `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id. - `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image. - `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES** Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. ### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas ``` ### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results # TODO: Number of training examples n_train = X_train.shape[0] # TODO: Number of validation examples n_validation = X_valid.shape[0] # TODO: Number of testing examples. n_test = X_test.shape[0] # TODO: What's the shape of a traffic sign image? image_shape = X_train[0].shape # TODO: How many unique classes/labels are there in the dataset? n_classes = len(set(y_train)) # computed from the labels rather than hard coded print("Number of training examples =", n_train) print("Number of validation examples =", n_validation) print("Number of testing examples =", n_test) print("Image data shape =", image_shape) print("Number of classes =", n_classes) ``` ### Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python. **NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others? ``` import random import numpy as np import matplotlib.pyplot as plt %matplotlib inline index = random.randint(0, len(X_train) - 1) # randint is inclusive at both ends image = X_train[index].squeeze() plt.figure(figsize=(1,1)) plt.imshow(image) print(y_train[index]) ``` ---- ## Step 2: Design and Test a Model Architecture Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
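The class-distribution question raised in the exploration section above can be answered with `np.bincount`. A hedged sketch using small synthetic stand-ins for the real label arrays (in the notebook, `y_train` and `y_validation` come from the pickled files):

```python
import numpy as np

# Synthetic stand-ins for the real label arrays loaded from the pickles.
y_train_demo = np.array([0, 1, 1, 2, 2, 2])
y_valid_demo = np.array([0, 1, 2, 2])

n_classes = len(np.unique(y_train_demo))
# Per-class example counts; minlength keeps the arrays aligned even if a
# class is missing from one split.
train_counts = np.bincount(y_train_demo, minlength=n_classes)
valid_counts = np.bincount(y_valid_demo, minlength=n_classes)

print(train_counts)  # [1 2 3]
print(valid_counts)  # [1 1 2]
```

A bar chart of these counts (`plt.bar(range(n_classes), train_counts)`) makes any imbalance across the splits immediately visible.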
The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem: - Neural network architecture (is the network over or underfitting?) - Play around preprocessing techniques (normalization, rgb to grayscale, etc) - Number of examples per label (some have more than others). - Generate fake data. Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. ### Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. ``` ### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include ### converting to grayscale, etc. ### Feel free to use as many code cells as needed. 
#First shuffle the data from sklearn.utils import shuffle X_train, y_train = shuffle(X_train, y_train) #Now normalize the data. Shift pixel rgb. X_train = (X_train.astype(float)-128)/128 X_valid = (X_valid.astype(float)-128)/128 X_test = (X_test.astype(float)-128)/128 ``` ### Model Architecture ``` ### Setup Tensorflow import tensorflow as tf EPOCHS = 100 BATCH_SIZE = 64 #128 from tensorflow.contrib.layers import flatten # Aaron's idea: try dropout instead of max pooling? def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer mu = 0 sigma = 0.1 # SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6. conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma)) conv1_b = tf.Variable(tf.zeros(6)) conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b # SOLUTION: Activation. conv1 = tf.nn.relu(conv1) # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Layer 2: Convolutional. Output = 10x10x16. conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(16)) conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b # SOLUTION: Activation. conv2 = tf.nn.relu(conv2) # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Flatten. Input = 5x5x16. Output = 400. fc0 = flatten(conv2) # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120. fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma)) fc1_b = tf.Variable(tf.zeros(120)) fc1 = tf.matmul(fc0, fc1_W) + fc1_b # SOLUTION: Activation. fc1 = tf.nn.relu(fc1) # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1, fc2_W) + fc2_b # SOLUTION: Activation. fc2 = tf.nn.relu(fc2) # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43. fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma)) fc3_b = tf.Variable(tf.zeros(43)) logits = tf.matmul(fc2, fc3_W) + fc3_b return logits ``` ### Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. ``` ### Calculate and report the accuracy on the training and validation set. ### Once a final model architecture is selected, ### the accuracy on the test set should be calculated and reported as well. ### Feel free to use as many code cells as needed. ## Features and Lables. # x is a placeholder for a batch of input images. 
y is a placeholder for a batch of output labels x = tf.placeholder(tf.float32, (None, 32, 32, 3)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, 43) ### Training Pipeline rate = .0004 #0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) ### Model Evaluation # Evaluate the loss and accuracy of the model for a given dataset correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() prediction = tf.argmax(logits, 1) def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples def predictClass(X_data): sess = tf.get_default_session() predictions = sess.run(prediction, feed_dict={x: X_data}) return predictions def getLogits(X_data): sess = tf.get_default_session() return sess.run(logits, feed_dict={x: X_data}) ``` ## Run the training algorithm ``` ### Train model #Run the training data through the training pipeline to train the model. #Before each epoch, shuffle the training set. #After each epoch, measure the loss and accuracy of the validation set. #Save the model after training. 
with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) validation_accuracy = evaluate(X_valid, y_validation) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, './lenet') print("Model saved") ``` ## Check model on Test Images - only do this once if possible ``` with tf.Session() as sess: saver = tf.train.import_meta_graph('lenet.meta') saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) ``` --- ## Step 3: Test a Model on New Images To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. ### Load and Output the Images ``` ### Load the images and plot them here. ### Feel free to use as many code cells as needed. 
import cv2 import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np image1 = mpimg.imread('traffic-signs-web-images/BumpyRoad_22.jpg') image2 = mpimg.imread('traffic-signs-web-images/SlipperyRoad_23.jpg') image3 = mpimg.imread('traffic-signs-web-images/NoEntry_17.jpg') image4 = mpimg.imread('traffic-signs-web-images/WildAnimalsCrossing_31.jpg') image5 = mpimg.imread('traffic-signs-web-images/Yield_13.jpg') print('This image is:', type(image1), 'with dimensions:', image1.shape) #Put all the images in a list X_web = np.zeros([5,32,32,3],dtype=np.uint8) X_web[0] = image1 #Failed to classify X_web[1] = image2 #Failed to classify X_web[2] = image3 #Failed to classify X_web[3] = image4 #Classified! X_web[4] = image5 #Classified! #Put the labels in a list y_web = np.array([22,23,17,31,13]) plt.imshow(X_web[2]) ``` ### Predict the Sign Type for Each Image ``` ### Run the predictions here and use the model to output the prediction for each image. ### Make sure to pre-process the images with the same pre-processing pipeline used earlier. ### Feel free to use as many code cells as needed. #Normalize the data X_web = (X_web.astype(float)-128)/128 #Now make the predictions for each image with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver = tf.train.import_meta_graph('lenet-Copy1.meta') saver.restore(sess, tf.train.latest_checkpoint('.')) y_hat = predictClass(X_web) print("Predictions: "+str(y_hat)) print("Actual: "+str(y_web)) ``` ### Analyze Performance ``` ### Calculate the accuracy for these 5 new images. ### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess: saver = tf.train.import_meta_graph('lenet.meta') saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_web, y_web) print("Test Accuracy = {:.3f}".format(test_accuracy)) ``` ### Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. `tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes.
`tf.nn.top_k` is used to choose the three classes with the highest probability: ``` # (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]]) ``` Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces: ``` TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32)) ``` Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices. ``` ### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. ### Feel free to use as many code cells as needed. with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver = tf.train.import_meta_graph('lenet-Copy1.meta') saver.restore(sess, tf.train.latest_checkpoint('.')) theLogits = getLogits(X_web) probs = sess.run(tf.nn.softmax(theLogits)) print(sess.run(tf.nn.top_k(tf.constant(probs), k=5))) ``` ### Project Writeup Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. 
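For reference, the `tf.nn.top_k` behavior demonstrated above can be reproduced with plain numpy, which is convenient for sanity-checking results outside a TensorFlow session. A minimal sketch (`top_k` here is a hypothetical helper, not a project requirement):

```python
import numpy as np

def top_k(a, k):
    """Values and indices of the k largest entries per row, descending,
    mirroring what tf.nn.top_k returns."""
    idx = np.argsort(a, axis=1)[:, ::-1][:, :k]   # sort descending, keep k
    vals = np.take_along_axis(a, idx, axis=1)
    return vals, idx

# First row of the example array from the text.
a = np.array([[0.24879643, 0.07032244, 0.12641572,
               0.34763842, 0.07893497, 0.12789202]])
vals, idx = top_k(a, 3)
print(idx[0])   # [3 0 5], matching the tf.nn.top_k output shown above
```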
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- ## Step 4 (Optional): Visualize the Neural Network's State with Test Images This section is not required but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. <figure> <img src="visualize_cnn.png" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above)</p> </figcaption> </figure> <p></p> ``` ### Visualize your network's feature maps here. ### Feel free to use as many code cells as needed. 
# image_input: the test image being fed into the network to produce the feature maps # tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer # activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output # plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1): # Here make sure to preprocess your image_input in a way your network expects # with size, normalization, ect if needed # image_input = # Note: x should be the same name as your network's tensorflow data placeholder variable # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function activation = tf_activation.eval(session=sess,feed_dict={x : image_input}) featuremaps = activation.shape[3] plt.figure(plt_num, figsize=(15,15)) for featuremap in range(featuremaps): plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number if activation_min != -1 & activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray") elif activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray") elif activation_min !=-1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray") else: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray") ```
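As an alternative to the per-subplot loop in `outputFeatureMap`, the `(1, H, W, C)` activation array returned by `tf_activation.eval` can be tiled into a single image and shown with one `imshow` call. A minimal numpy sketch; `tile_feature_maps` is a hypothetical helper (not part of the project template), and the default `cols=8` mirrors the 6x8 subplot grid above:

```python
import numpy as np

def tile_feature_maps(activation, cols=8):
    """Tile a (1, H, W, C) activation array into a single
    (rows*H, cols*W) image, row-major by channel, with unused grid
    cells left as zeros."""
    _, H, W, C = activation.shape
    rows = int(np.ceil(C / cols))
    grid = np.zeros((rows * H, cols * W))
    for c in range(C):
        r, col = divmod(c, cols)
        grid[r * H:(r + 1) * H, col * W:(col + 1) * W] = activation[0, :, :, c]
    return grid

demo = np.ones((1, 2, 2, 3))           # 3 feature maps of size 2x2
grid = tile_feature_maps(demo, cols=2)
print(grid.shape)                      # (4, 4): a 2x2 grid of 2x2 maps
```

The resulting grid can then be displayed with `plt.imshow(grid, cmap="gray")`, which avoids the per-feature-map subplot overhead when a layer has many channels.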
``` import os path = '/home/yash/Desktop/tensorflow-adversarial/tf_example' os.chdir(path) # suppress tensorflow logging other than errors os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import numpy as np import tensorflow as tf from tensorflow.contrib.learn import ModeKeys, Estimator import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec from fgsm4 import fgsm import mnist img_rows = 28 img_cols = 28 img_chas = 1 input_shape = (img_rows, img_cols, img_chas) n_classes = 10 print('\nLoading mnist') (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.astype('float32') / 255. X_test = X_test.astype('float32') / 255. X_train = X_train.reshape(-1, img_rows, img_cols, img_chas) X_test = X_test.reshape(-1, img_rows, img_cols, img_chas) # one-hot encoding: build an (n, n_classes) indicator matrix def _to_categorical(x, n_classes): x = np.array(x, dtype=int).ravel() n = x.shape[0] ret = np.zeros((n, n_classes)) ret[np.arange(n), x] = 1 return ret def find_l2(X_test, X_adv): # squared L2 distance between each flattened image pair a=X_test.reshape(-1,28*28) b=X_adv.reshape(-1,28*28) l2_unsquared = np.sum(np.square(a-b),axis=1) return l2_unsquared y_train = _to_categorical(y_train, n_classes) y_test = _to_categorical(y_test, n_classes) print('\nShuffling training data') ind = np.random.permutation(X_train.shape[0]) X_train, y_train = X_train[ind], y_train[ind] # X_train = X_train[:1000] # y_train = y_train[:1000] # split training/validation dataset validation_split = 0.1 n_train = int(X_train.shape[0]*(1-validation_split)) X_valid = X_train[n_train:] X_train = X_train[:n_train] y_valid = y_train[n_train:] y_train = y_train[:n_train] class Dummy: pass env = Dummy() def model(x, logits=False, training=False): conv0 = tf.layers.conv2d(x, filters=32, kernel_size=[3, 3], padding='same', name='conv0', activation=tf.nn.relu) pool0 = tf.layers.max_pooling2d(conv0, pool_size=[2, 2], strides=2, name='pool0') conv1 = tf.layers.conv2d(pool0, filters=64, kernel_size=[3, 3], padding='same',
name='conv1', activation=tf.nn.relu) pool1 = tf.layers.max_pooling2d(conv1, pool_size=[2, 2], strides=2, name='pool1') flat = tf.reshape(pool1, [-1, 7*7*64], name='flatten') dense1 = tf.layers.dense(flat, units=1024, activation=tf.nn.relu, name='dense1') dense2 = tf.layers.dense(dense1, units=128, activation=tf.nn.relu, name='dense2') logits_ = tf.layers.dense(dense2, units=10, name='logits') #removed dropout y = tf.nn.softmax(logits_, name='ybar') if logits: return y, logits_ return y # We need a scope since the inference graph will be reused later with tf.variable_scope('model'): env.x = tf.placeholder(tf.float32, (None, img_rows, img_cols, img_chas), name='x') env.y = tf.placeholder(tf.float32, (None, n_classes), name='y') env.training = tf.placeholder(bool, (), name='mode') env.ybar, logits = model(env.x, logits=True, training=env.training) z = tf.argmax(env.y, axis=1) zbar = tf.argmax(env.ybar, axis=1) env.count = tf.cast(tf.equal(z, zbar), tf.float32) env.acc = tf.reduce_mean(env.count, name='acc') xent = tf.nn.softmax_cross_entropy_with_logits(labels=env.y, logits=logits) env.loss = tf.reduce_mean(xent, name='loss') extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(extra_update_ops): env.optim = tf.train.AdamOptimizer(beta1=0.9, beta2=0.999, epsilon=1e-08,).minimize(env.loss) with tf.variable_scope('model', reuse=True): env.x_adv, env.all_flipped = fgsm(model, env.x, step_size=.05, bbox_semi_side=10) #epochs is redundant now! 
sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) sess.run(tf.local_variables_initializer()) def save_model(label): saver = tf.train.Saver() saver.save(sess, './models/mnist/' + label) def restore_model(label): saver = tf.train.Saver() saver.restore(sess, './models/mnist/' + label) def _evaluate(X_data, y_data, env): print('\nEvaluating') n_sample = X_data.shape[0] batch_size = 128 n_batch = int(np.ceil(n_sample/batch_size)) loss, acc = 0, 0 ns = 0 for ind in range(n_batch): print(' batch {0}/{1}'.format(ind+1, n_batch), end='\r') start = ind*batch_size end = min(n_sample, start+batch_size) batch_loss, batch_count, batch_acc = sess.run( [env.loss, env.count, env.acc], feed_dict={env.x: X_data[start:end], env.y: y_data[start:end], env.training: False}) loss += batch_loss*batch_size # print('batch count: {0}'.format(np.sum(batch_count))) ns+=batch_size acc += batch_acc*batch_size loss /= ns acc /= ns # print (ns) # print (n_sample) print(' loss: {0:.4f} acc: {1:.4f}'.format(loss, acc)) return loss, acc def _predict(X_data, env): print('\nPredicting') n_sample = X_data.shape[0] batch_size = 128 n_batch = int(np.ceil(n_sample/batch_size)) yval = np.empty((X_data.shape[0], n_classes)) for ind in range(n_batch): print(' batch {0}/{1}'.format(ind+1, n_batch), end='\r') start = ind*batch_size end = min(n_sample, start+batch_size) batch_y = sess.run(env.ybar, feed_dict={ env.x: X_data[start:end], env.training: False}) yval[start:end] = batch_y return yval def train(label): print('\nTraining') n_sample = X_train.shape[0] batch_size = 128 n_batch = int(np.ceil(n_sample/batch_size)) n_epoch = 50 for epoch in range(n_epoch): print('Epoch {0}/{1}'.format(epoch+1, n_epoch)) for ind in range(n_batch): print(' batch {0}/{1}'.format(ind+1, n_batch), end='\r') start = ind*batch_size end = min(n_sample, start+batch_size) sess.run(env.optim, feed_dict={env.x: X_train[start:end], env.y: y_train[start:end], env.training: True}) if(epoch%5 == 0): model_label = 
label+ '{0}'.format(epoch) print("saving model " + model_label) save_model(model_label) save_model(label) def create_adv(X, Y, label): print('\nCrafting adversarial') n_sample = X.shape[0] batch_size = 1 n_batch = int(np.ceil(n_sample/batch_size)) n_epoch = 20 X_adv = np.empty_like(X) for ind in range(n_batch): print(' batch {0}/{1}'.format(ind+1, n_batch), end='\r') start = ind*batch_size end = min(n_sample, start+batch_size) tmp, all_flipped = sess.run([env.x_adv, env.all_flipped], feed_dict={env.x: X[start:end], env.y: Y[start:end], env.training: False}) # _evaluate(tmp, Y[start:end],env) X_adv[start:end] = tmp # print(all_flipped) print('\nSaving adversarial') os.makedirs('data', exist_ok=True) np.save('data/mnist/' + label + '.npy', X_adv) return X_adv label = "mnist_with_cnn" # train(label) # else #Assuming that you've started a session already else do that first! restore_model(label + '5') # restore_model(label + '10') # restore_model(label + '50') # restore_model(label + '100') _evaluate(X_train, y_train, env) def random_normal_func(X): X=X.reshape(-1,28*28) print(X.shape) mean, std = np.mean(X, axis=0), np.std(X,axis=0) randomX = np.zeros([10000,X[0].size]) print(randomX.shape) for i in range(X[0].size): randomX[:,i] = np.random.normal(mean[i],std[i],10000) randomX = randomX.reshape(-1,28,28,1) ans = sess.run(env.ybar, feed_dict={env.x: randomX,env.training: False}) labels = _to_categorical(np.argmax(ans,axis=1), n_classes) return randomX,labels test = "test_fs_exp1_0" train = "train_fs_exp1_0" random = "random_fs_exp1_0" random_normal= "random_normal_fs_exp1_0" X_train_sub = X_train[:10000] y_train_sub = sess.run(env.ybar, feed_dict={env.x: X_train_sub,env.training: False}) y_train_sub = _to_categorical(np.argmax(y_train_sub, axis=1), n_classes) y_test_sub = sess.run(env.ybar, feed_dict={env.x: X_test,env.training: False}) y_test_sub = _to_categorical(np.argmax(y_test_sub, axis=1), n_classes) X_random = np.random.rand(10000,28,28,1) X_random = 
X_random[:10000] y_random = sess.run(env.ybar, feed_dict={env.x: X_random,env.training: False}) y_random = _to_categorical(np.argmax(y_random, axis=1), n_classes) X_random_normal, y_random_normal = random_normal_func(X_train) X_adv_test = create_adv(X_test, y_test_sub, test) X_adv_train = create_adv(X_train_sub, y_train_sub, train) X_adv_random = create_adv(X_random,y_random, random) X_adv_random_normal = create_adv(X_random_normal, y_random_normal, random_normal) # X_adv_test = np.load('data/mnist/' + test + '.npy') # X_adv_train = np.load('data/mnist/' + train + '.npy') # X_adv_random = np.load('data/mnist/' + random + '.npy') # X_adv_random_normal = np.load('data/mnist/' + random_normal + '.npy') l2_test = find_l2(X_adv_test,X_test) l2_train = find_l2(X_adv_train, X_train_sub) l2_random = find_l2(X_adv_random,X_random) l2_random_normal = find_l2(X_adv_random_normal,X_random_normal) print(l2_train) print(X_adv_random_normal[0][3]) %matplotlib inline # evenly sampled time at 200ms intervals t = np.arange(1,10001, 1) # red dashes, blue squares and green triangles plt.plot(t, l2_test, 'r--', t, l2_train, 'b--', t, l2_random, 'y--', l2_random_normal, 'g--') plt.show() import matplotlib.patches as mpatches %matplotlib inline # evenly sampled time at 200ms intervals t = np.arange(1,101, 1) # red dashes, blue squares and green triangles plt.plot(t, l2_test[:100], 'r--', t, l2_train[:100], 'b--',t, l2_random[:100], 'y--',l2_random_normal[:100], 'g--') blue_patch = mpatches.Patch(color='blue', label='Train Data') plt.legend(handles=[blue_patch]) plt.show() %matplotlib inline plt.hist(l2_test,100) plt.title("L2 distance of test data") plt.xlabel("Distance") plt.ylabel("Frequency") plt.show() %matplotlib inline plt.hist(l2_train,100) plt.title("L2 distance of train data") plt.xlabel("Distance") plt.ylabel("Frequency") plt.show() %matplotlib inline plt.hist(l2_random,100) plt.title("L2 distance of random data") plt.xlabel("Distance") plt.ylabel("Frequency") plt.show() 
%matplotlib inline plt.hist(l2_random_normal,100) plt.title("L2 distance of random normal data") plt.xlabel("Distance") plt.ylabel("Frequency") plt.show() ```
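The notebook calls `find_l2(X_adv, X)` without defining it in this excerpt. Judging from how its output is plotted and histogrammed (one value per example), it returns the per-example L2 distance between the adversarial and original batches; a minimal sketch under that assumption (the name and signature come from the calls above, the body is mine):

```python
import numpy as np

def find_l2(X_adv, X):
    # Flatten each example (e.g. a 28x28x1 image) and take the Euclidean
    # norm of the per-example difference, giving a 1-D array with one
    # distance per sample.
    diff = (X_adv - X).reshape(X_adv.shape[0], -1)
    return np.linalg.norm(diff, axis=1)

# smoke test on two fake 2x2x1 "images": every pixel differs by 1,
# so each distance is sqrt(4) = 2
a = np.zeros((2, 2, 2, 1))
b = np.ones((2, 2, 2, 1))
print(find_l2(b, a))  # [2. 2.]
```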
``` ###### Applications Lab #1-- ATOC7500 Objective Analysis - bootstrapping ##### Originally coded by Prof. Kay (CU) with input from Vineel Yettella (CU ATOC Ph.D. 2018) ##### last updated September 2, 2020 ###LEARNING GOALS: ###1) Working in an ipython notebook: read in csv file, make histogram plot ###2) Assessing statistical significance using bootstrapping (and t-test) ### GENERAL SETUP %matplotlib inline # this enables plotting within notebook import matplotlib # library for plotting import matplotlib.pyplot as plt # later you will type plt.$COMMAND import numpy as np # basic math library you will type np.$STUFF e.g., np.cos(1) import pandas as pd # library for data analysis for text files (everything but netcdf files) import scipy.stats as stats # imports stats functions https://docs.scipy.org/doc/scipy/reference/stats.html ### Read in the data filename='snow_enso_data.csv' data=pd.read_csv(filename,sep=',') data.head() ### Print the data column names print(data.columns[0]) print(data.columns[1]) print(data.columns[2]) ### Print the data values - LOOK AT YOUR DATA. If new to Python - check out what happens when you remove .values. print(data['Year'].values) print(data['LovelandPass_April1SWE_inches'].values) print(data['Nino34_anomaly_prevDec'].values) ### Calculate the average snowfall on April 1 at Loveland Pass, Colorado SWE_avg=data['LovelandPass_April1SWE_inches'].mean() SWE_std=data['LovelandPass_April1SWE_inches'].std() N_SWE=len(data.LovelandPass_April1SWE_inches) print('Average SWE (inches):',np.str(np.round(SWE_avg,2))) print('Standard Deviation SWE (inches):',np.str(np.round(SWE_std,2))) print('N:',np.str(N_SWE)) ### Print to figure out how to condition and make sure it is working. Check out if new to Python. 
#print(data.Nino34_anomaly_prevDec>1) ## this gives True/False #print(data[data.Nino34_anomaly_prevDec>1]) ## where it is True, values will print ### Calculate the average SWE when it was an el nino year SWE_avg_nino=data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches'].mean() SWE_std_nino=data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches'].std() N_SWE_nino=len(data[data.Nino34_anomaly_prevDec>1.0].LovelandPass_April1SWE_inches) print('Average SWE El Nino (inches):',np.str(np.round(SWE_avg_nino,2))) print('Standard Deviation SWE El Nino (inches):',np.str(np.round(SWE_std_nino,2))) print('N El Nino:',np.str(N_SWE_nino)) ### Calculate the average SWE when it was an la nina year SWE_avg_nina=data[data.Nino34_anomaly_prevDec<-1.0]['LovelandPass_April1SWE_inches'].mean() SWE_std_nina=data[data.Nino34_anomaly_prevDec<-1.0]['LovelandPass_April1SWE_inches'].std() N_SWE_nina=len(data[data.Nino34_anomaly_prevDec<-1.0].LovelandPass_April1SWE_inches) print('Average SWE La Nina (inches):',np.str(np.round(SWE_avg_nina,2))) print('Standard Deviation SWE La Nina (inches):',np.str(np.round(SWE_std_nina,2))) print('N La Nina:',np.str(N_SWE_nina)) ### Bootstrap!! Generate random samples of size N_SWE_nino and N_SWE_nina. Do it once to see if it works. P_random=np.random.choice(data.LovelandPass_April1SWE_inches,N_SWE_nino) print(P_random) ## LOOK AT YOUR DATA print(len(P_random)) ### Now Bootstrap Nbs times to generate a distribution of randomly selected mean SWE. Nbs=1000 ## initialize array P_Bootstrap=np.empty((Nbs,N_SWE_nino)) print(P_Bootstrap.shape) ### Now Bootstrap Nbs times to generate a distribution of randomly selected mean SWE. Nbs=100000 ## initialize array P_Bootstrap=np.empty((Nbs,N_SWE_nino)) ## loop over to fill in array with randomly selected values for ii in range(Nbs): P_Bootstrap[ii,:]=np.random.choice(data.LovelandPass_April1SWE_inches,N_SWE_nino) ## Calculate the means of your randomly selected SWE values. 
P_Bootstrap_mean=np.mean(P_Bootstrap,axis=1) print(len(P_Bootstrap_mean)) ## check length to see if you averaged across the correct axis print(np.shape(P_Bootstrap_mean)) ## another option to look at the dimensions of a variable #print(P_Bootstrap_mean) P_Bootstrap_mean_avg=np.mean(P_Bootstrap_mean) print(P_Bootstrap_mean_avg) P_Bootstrap_mean_std=np.std(P_Bootstrap_mean) print(P_Bootstrap_mean_std) P_Bootstrap_mean_min=np.min(P_Bootstrap_mean) print(P_Bootstrap_mean_min) P_Bootstrap_mean_max=np.max(P_Bootstrap_mean) print(P_Bootstrap_mean_max) ### Use matplotlib to plot a histogram of the bootstrapped means to compare to the conditioned SWE mean binsize=0.1 min4hist=np.round(np.min(P_Bootstrap_mean),1)-binsize max4hist=np.round(np.max(P_Bootstrap_mean),1)+binsize nbins=int((max4hist-min4hist)/binsize) plt.hist(P_Bootstrap_mean,nbins,edgecolor='black') plt.xlabel('Mean SWE (inches)'); plt.ylabel('Count'); plt.title('Bootstrapped Randomly Selected Mean SWE Values'); ## What is the probability that the snowfall was lower during El Nino by chance? ## Using Barnes equation (83) on page 15 to calculate probability using z-statistic sample_mean=SWE_avg_nino sample_N=1 population_mean=np.mean(P_Bootstrap_mean) population_std=np.std(P_Bootstrap_mean) xstd=population_std/np.sqrt(sample_N) z_nino=(sample_mean-population_mean)/xstd print("sample_mean - El Nino: ",np.str(np.round(sample_mean,2))) ############ print("population_mean: ",np.str(np.round(population_mean,2))) print("population_std: ",np.str(np.round(population_std,2))) print("Z-statistic (number of standard errors that the sample mean deviates from the population mean:") print(np.round(z_nino,2)) prob=(1-stats.norm.cdf(np.abs(z_nino)))*100 ##this is a one-sided test print("Probability one-tailed test (percent):") print(np.round(prob,2)) ## What is the probability that the snowfall that the El Nino mean differs from the mean by chance? 
## Using Barnes equation (83) on page 15 to calculate probability using z-statistic sample_mean=SWE_avg_nino sample_N=1 population_mean=np.mean(P_Bootstrap_mean) population_std=np.std(P_Bootstrap_mean) xstd=population_std/np.sqrt(sample_N) z_nino=(sample_mean-population_mean)/xstd print("sample_mean - El Nino: ",np.str(np.round(sample_mean,2))) print("population_mean: ",np.str(np.round(population_mean,2))) print("population_std: ",np.str(np.round(population_std,2))) print("Z-statistic (number of standard errors that the sample mean deviates from the population mean):") print(np.round(z_nino,2)) prob=(1-stats.norm.cdf(np.abs(z_nino)))*2*100 ##this is a two-sided test print("Probability - two-tailed test (percent):") print(np.round(prob,2)) ## What is the probability that the snowfall was higher during La Nina just due to chance? ## Using Barnes equation (83) on page 15 to calculate probability using z-statistic sample_mean=SWE_avg_nina sample_N=1 population_mean=np.mean(P_Bootstrap_mean) population_std=np.std(P_Bootstrap_mean) xstd=population_std/np.sqrt(sample_N) z_nina=(sample_mean-population_mean)/xstd print("sample_mean - La Nina: ",np.str(np.round(sample_mean,2))) print("population_mean: ",np.str(np.round(population_mean,2))) print("population_std: ",np.str(np.round(population_std,2))) print("Z-statistic (number of standard errors that the sample mean deviates from the population mean:") print(np.round(z_nina,2)) prob=(1-stats.norm.cdf(np.abs(z_nina)))*100 ##this is a one-sided test print("Probability one-tailed test (percent):") print(np.round(prob,2)) ## What is the probability that the snowfall during La Nina differed just due to chance? 
## Using Barnes equation (83) on page 15 to calculate probability using z-statistic sample_mean=SWE_avg_nina sample_N=1 population_mean=np.mean(P_Bootstrap_mean) population_std=np.std(P_Bootstrap_mean) xstd=population_std/np.sqrt(sample_N) z_nina=(sample_mean-population_mean)/xstd print("sample_mean - La Nina: ",np.str(np.round(sample_mean,2))) print("population_mean: ",np.str(np.round(population_mean,2))) print("population_std: ",np.str(np.round(population_std,2))) print("Z-statistic (number of standard errors that the sample mean deviates from the population mean):") print(np.round(z_nina,2)) prob=(1-stats.norm.cdf(np.abs(z_nina)))*2*100 ##this is a two-sided test print("Probability - two-tailed test (percent):") print(np.round(prob,2)) ### Strategy #2: Forget bootstrapping, let's use a t-test... ## Apply a t-test to test the null hypothesis that the means of the two samples ## are the same at the 95% confidence level (alpha=0.025, two-sided test) ## If pvalue < alpha - reject null hypothesis. print('Null Hypothesis: ENSO snow years have the same mean as the full record.') t=stats.ttest_ind(data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches'],data['LovelandPass_April1SWE_inches'],equal_var=False) print(t) print('Cannot reject the null hypthesis.') #### Wait a second - What is that function doing??? Let's check it with the Barnes notes. ### Always code it yourself and understand what the function is doing. ### Word to the wise - do not use python functions without checking them!! ### Let's find out what stats.ttest_ind is doing - It doesn't look like it is calculating the t-statistic ### as the difference between the sample mean and the population mean. That calculation is below... ## Calculate the t-statistic using the Barnes Notes - Compare a sample mean and a population mean. ## Barnes Eq. 
(96) N=len(data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches']) print(N) sample_mean=np.mean(data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches']) print(sample_mean) sample_std=np.std(data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches']) print(sample_std) population_mean=np.mean(data['LovelandPass_April1SWE_inches']) ## Using Barnes equation (96) to calculate probability using the t-statistic print("T-statistic:") t=(sample_mean-population_mean)/(sample_std/(np.sqrt(N-1))) print(np.round(t,2)) print("Probability (percent):") prob=(1-stats.t.cdf(t,N-1))*100 print(np.round(prob,2)) ## Calculate the t-statistic using the Barnes Notes - Compare two sample means. Equation (110) ## This is also called Welch's t-test ## It doesn't look like the function is calculating the t-statistic using Welch's t-test! ## as the difference between the sample mean and the population mean. That calculation is below... ## Guess using the two sample means test (i.e., Eq. 100) vs sample/population means test (i.e., Barnes Eq. 
) sampledata1=data['LovelandPass_April1SWE_inches'] sampledata2=data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches'] N1=len(sampledata1) N2=len(sampledata2) print(N1) print(N2) sample_mean1=np.mean(sampledata1) sample_mean2=np.mean(sampledata2) print(sample_mean1) print(sample_mean2) sample_std1=np.std(sampledata1) sample_std2=np.std(sampledata2) print(sample_std1) print(sample_std2) ## Using Barnes equation (96) to calculate probability using the t-statistic print("T-statistic using Welch's t-test:") s=np.sqrt((N1*sample_std1**2+N2*sample_std2**2)/(N1+N2-2)) print(s) t=(sample_mean1-sample_mean2-0)/(s*np.sqrt(1/N1+1/N2)) print(np.round(t,2)) print("Probability (percent):") prob=(1-stats.t.cdf(t,N-1))*100 print(np.round(prob,2)) ### Strategy #3 (provided by Vineel Yettella) SWE = data['LovelandPass_April1SWE_inches'] SWE_nino = data[data.Nino34_anomaly_prevDec>1.0]['LovelandPass_April1SWE_inches'] #We start by setting up a null hypothesis H0. #Our H0 will be that the difference in means of the two populations that the samples came from is equal to zero. #We will use the bootstrap to test this null hypothesis. #We next choose a significance level for the hypothesis test alpha = 0.05 #All hypothesis tests need a test statistic. #Here, we'll use the difference in sample means as the test statistic.
#create array to hold bootstrapped test statistic values bootstrap_statistic = np.empty(10000) #bootstrap 10000 times for i in range(10000): #create a resample of SWE by sampling with replacement (same length as SWE) resample_original = np.random.choice(SWE, len(SWE), replace=True) #create a resample of SWE_nino by sampling with replacement (same length as SWE_nino) resample_nino = np.random.choice(SWE_nino, len(SWE_nino), replace=True) #Compute the test statistic from the resampled data, i.e., the difference in means bootstrap_statistic[i] = np.mean(resample_original) - np.mean(resample_nino) #Let's plot the distribution of the test statistic plt.hist(bootstrap_statistic,[-5,-4,-3,-2,-1,0,1,2,3,4,5],edgecolor='black') plt.xlabel('Difference in sample means') plt.ylabel('Count') plt.title('Bootstrap distribution of difference in sample means') #Create 95% CI from the bootstrapped distribution. The upper limit of the CI is defined as the 97.5% percentile #and the lower limit as the 2.5% percentile of the bootstrap distribution, so that 95% of the #distribution lies within the two limits CI_up = np.percentile(bootstrap_statistic, 100*(1 - alpha/2.0)) CI_lo = np.percentile(bootstrap_statistic, 100*(alpha/2.0)) print(CI_up) print(CI_lo) #We see that the confidence interval contains zero, so we fail to reject the null hypothesis that the difference #in means is equal to zero ```
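The resample-and-take-percentiles recipe used throughout this lab can be wrapped in a small reusable helper; a sketch with NumPy only (the function name and the toy sample below are mine, not part of the lab data):

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample with replacement, recompute the
    # statistic each time, then take the alpha/2 and 1-alpha/2
    # percentiles of the bootstrap distribution.
    rng = np.random.RandomState(seed)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        boot[i] = stat(rng.choice(data, size=len(data), replace=True))
    return (np.percentile(boot, 100 * (alpha / 2)),
            np.percentile(boot, 100 * (1 - alpha / 2)))

sample = np.array([10.2, 12.1, 9.8, 11.5, 10.9, 13.0, 9.5, 11.1, 12.4, 10.0])
lo, hi = bootstrap_ci(sample)
print(lo, hi)  # a 95% CI for the mean; the sample mean (11.05) lies inside it
```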
# Feature selection We will select a group of variables, the most predictive ones, to build our machine learning model. ## Why do we select variables? - For production: Fewer variables mean smaller client input requirements (e.g. customers filling out a form on a website or mobile app), and hence less code for error handling. This reduces the chances of introducing bugs. - For model performance: Fewer variables mean simpler, more interpretable, better generalizing models. We will select variables using Lasso regression: Lasso has the property of setting the coefficients of non-informative variables to zero. This way we can identify those variables and remove them from our final model. ## Imports ``` import pandas as pd import numpy as np # for plotting import matplotlib.pyplot as plt # to build the models from sklearn.linear_model import Lasso from sklearn.feature_selection import SelectFromModel # to visualise all the columns in the dataframe pd.set_option('display.max_columns', None) # load the train and test set with the engineered variables X_train = pd.read_csv('xtrain.csv') X_test = pd.read_csv('xtest.csv') X_train.head() # Capture the target (remember that the target is log transformed) y_train = X_train['SalePrice'] y_test = X_test['SalePrice'] # drop unnecessary variables from our training and testing sets X_train.drop(['Id', 'SalePrice'], axis=1, inplace=True) X_test.drop(['Id', 'SalePrice'], axis=1, inplace=True) ``` ## Feature Selection Select a subset of the most predictive features. There is an element of randomness in the Lasso regression, so remember to set the seed. ``` # We will do the model fitting and feature selection altogether in a few lines of code # first, we specify the Lasso regression model, and we select a suitable alpha (equivalent penalty). # The bigger the alpha, the fewer features will be selected.
# Then we use the SelectFromModel object from sklearn, which will automatically select the features whose coefficients are non-zero # remember to set the seed, the random state in this function sel_ = SelectFromModel(Lasso(alpha=0.005, random_state=0)) sel_.fit(X_train, y_train) # let's visualize those features that were selected. # (selected features marked with True) sel_.get_support() # let's print the number of total and selected features # this is how we can make a list of the selected features selected_feats = X_train.columns[sel_.get_support()] # let's print some stats print('total features: {}'.format(X_train.shape[1])) print('selected features: {}'.format(len(selected_feats))) print('features with coefficients shrunk to zero: {}'.format(np.sum(sel_.estimator_.coef_ == 0))) ## Identify the selected variables # this is an alternative way of identifying the selected features # based on the non-zero regularisation coefficients: selected_feats = X_train.columns[(sel_.estimator_.coef_ != 0).ravel().tolist()] selected_feats pd.Series(selected_feats).to_csv('selected_features.csv', index=False) ```
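As a quick sanity check on the claim that Lasso drives the coefficients of non-informative variables to exactly zero, here is a self-contained sketch on synthetic data (the data is made up; no house-price files are needed):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
# the target depends only on the first two columns; the other 8 are noise
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(200)

lasso = Lasso(alpha=0.1).fit(X, y)
print(lasso.coef_.round(2))
# the noise columns get coefficients of exactly 0, the informative
# columns survive (slightly shrunk by the penalty)
print('zeroed coefficients:', np.sum(lasso.coef_ == 0))
```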
# A Baseline Named Entity Recognizer for Twitter In this notebook I'll follow the example presented in [Named entities and random fields](http://www.orbifold.net/default/2017/06/29/dutch-ner/) to train a conditional random field to recognize named entities in Twitter data. The data and some of the code below are taken from a programming assignment in the amazing class [Natural Language Processing](https://www.coursera.org/learn/language-processing) offered by [Coursera](https://www.coursera.org/). In the assignment we were shown how to build a named entity recognizer using deep learning with a bidirectional LSTM, which is a pretty complicated approach and I wanted to have a baseline model to see what sort of accuracy should be expected on this data. ### 1. Preparing the Data First load the text and tags for training, validation and test data: ``` def read_data(file_path): tokens = [] tags = [] tweet_tokens = [] tweet_tags = [] for line in open(file_path, encoding='utf-8'): line = line.strip() if not line: if tweet_tokens: tokens.append(tweet_tokens) tags.append(tweet_tags) tweet_tokens = [] tweet_tags = [] else: token, tag = line.split() # Replace all urls with <URL> token # Replace all users with <USR> token if token.startswith("http://") or token.startswith("https://"): token = "<URL>" elif token.startswith("@"): token = "<USR>" tweet_tokens.append(token) tweet_tags.append(tag) return tokens, tags train_tokens, train_tags = read_data('data/train.txt') validation_tokens, validation_tags = read_data('data/validation.txt') test_tokens, test_tags = read_data('data/test.txt') ``` The CRF model uses part of speech tags as features so we'll need to add those to the datasets. 
``` %%time import nltk def build_sentence(tokens, tags): pos_tags = [item[-1] for item in nltk.pos_tag(tokens)] return list(zip(tokens, pos_tags, tags)) def build_sentences(tokens_set, tags_set): return [build_sentence(tokens, tags) for tokens, tags in zip(tokens_set, tags_set)] train_sents = build_sentences(train_tokens, train_tags) validation_sents = build_sentences(validation_tokens, validation_tags) test_sents = build_sentences(test_tokens, test_tags) ``` ### 2. Computing Features ``` def word2features(sent, i): word = sent[i][0] postag = sent[i][1] features = { 'bias': 1.0, 'word.lower()': word.lower(), 'word[-3:]': word[-3:], 'word[-2:]': word[-2:], 'word.isupper()': word.isupper(), 'word.istitle()': word.istitle(), 'word.isdigit()': word.isdigit(), 'postag': postag, 'postag[:2]': postag[:2], } if i > 0: word1 = sent[i - 1][0] postag1 = sent[i - 1][1] features.update({ '-1:word.lower()': word1.lower(), '-1:word.istitle()': word1.istitle(), '-1:word.isupper()': word1.isupper(), '-1:postag': postag1, '-1:postag[:2]': postag1[:2], }) else: features['BOS'] = True if i < len(sent) - 1: word1 = sent[i + 1][0] postag1 = sent[i + 1][1] features.update({ '+1:word.lower()': word1.lower(), '+1:word.istitle()': word1.istitle(), '+1:word.isupper()': word1.isupper(), '+1:postag': postag1, '+1:postag[:2]': postag1[:2], }) else: features['EOS'] = True return features def sent2features(sent): return [word2features(sent, i) for i in range(len(sent))] def sent2labels(sent): return [label for token, postag, label in sent] def sent2tokens(sent): return [token for token, postag, label in sent] X_train = [sent2features(s) for s in train_sents] y_train = [sent2labels(s) for s in train_sents] X_validation = [sent2features(s) for s in validation_sents] y_validation = [sent2labels(s) for s in validation_sents] X_test = [sent2features(s) for s in test_sents] y_test = [sent2labels(s) for s in test_sents] ``` ### 3. 
Train the Model ``` import sklearn_crfsuite crf = sklearn_crfsuite.CRF( algorithm='lbfgs', c1=0.12, c2=0.01, max_iterations=100, all_possible_transitions=True ) crf.fit(X_train, y_train) ``` ### 4. Evaluate the Model We evaluate the model using the CoNLL shared task evaluation script. ``` from evaluation import precision_recall_f1 def eval_conll(model, tokens, tags, short_report=True): """Computes NER quality measures using CONLL shared task script.""" tags_pred = model.predict(tokens) y_true = [y for s in tags for y in s] y_pred = [y for s in tags_pred for y in s] results = precision_recall_f1(y_true, y_pred, print_results=True, short_report=short_report) return results print('-' * 20 + ' Train set quality: ' + '-' * 20) train_results = eval_conll(crf, X_train, y_train, short_report=False) print('-' * 20 + ' Validation set quality: ' + '-' * 20) validation_results = eval_conll(crf, X_validation, y_validation, short_report=False) print('-' * 20 + ' Test set quality: ' + '-' * 20) test_results = eval_conll(crf, X_test, y_test, short_report=False) ``` ### 5. Tuning Parameters I tried tuning the parameters c1 and c2 of the model using [randomized grid search](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) but was not able to improve the results that way. I plan to try [GPyOpt](https://github.com/SheffieldML/GPyOpt) to see if that will do better but don't have time to do that here.
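Because refitting the CRF many times is slow, the mechanics of randomized search over regularization strengths are easier to see with a cheap stand-in estimator; the estimator, parameter name, and distribution below are illustrative, not what was actually tried on the CRF:

```python
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# sample the regularization strength from an exponential distribution,
# the same idea as drawing c1/c2 for sklearn_crfsuite.CRF
param_dist = {'C': stats.expon(scale=0.5)}

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions=param_dist,
    n_iter=20, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```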
``` !pip install pyclustering %load_ext autoreload %autoreload 2 from Clique.Clique import * from load_logs import * from evaluation import * from features import * from visualize import * import matplotlib.pyplot as plt import numpy as np logs, log_labels = read_logs_and_labels("./Saved/logs.txt", "./Saved/labels.txt") np.set_printoptions(threshold=np.inf) X = get_features(logs, 2, 70) X = X.toarray() np.count_nonzero(X[13456,:]) intervals = [3, 5, 10, 20, 30, 40, 50] thresholds = [0.1, 0.2, 0.3, 0.4, 0.5] def grid_search(logs, labels_, gram, min_df): X = get_features(logs, gram, min_df) X = X.toarray() idxs = np.where(np.all(X == 0, axis=1)) X = np.delete(X, idxs, axis=0) labels_ = np.delete(labels_, idxs) result_header = ["Interval", "Threshold", "VMeasure", "Fowlkes-Mallows"] results = [] for interval in intervals: for threshold in thresholds: clusters = run_clique(data=X, xsi=interval, tau=threshold) print("Clique eval for interval %2d, and threshold %2d" % (interval, threshold)) evaluate_clustering_performance(clusters, labels_) # vm = evaluate_vmeasure(labels_, labels) # fm = evaluate_fm(labels_, labels) # result = [interval, threshold, vm, fm] # results.append(result) # tab_results(result_header, results) # tab_results(result_header, results) # def grid_search(logs, labels_, gram, min_df): # X = get_features(logs, gram, min_df) # X = X.toarray() # idxs = np.where(np.all(X == 0, axis=1)) # X = np.delete(X, idxs, axis=0) # labels_ = np.delete(labels_, idxs) # result_header = ["Interval", "Threshold", "VMeasure", "Fowlkes-Mallows"] # results = [] # for interval in intervals: # for threshold in thresholds: # clique_instance = clique(X, 40, 0) # clique_instance.process() # labels = clique_instance.get_clusters() # #labels = labels.reshape # print(labels) # #print(labels.shape) # cells = clique_instance.get_cells() # #print(cells) # vm = evaluate_vmeasure(labels_, labels) # fm = evaluate_fm(labels_, labels) # result = [metric, linkage, vm, fm] # 
results.append(result) # tab_results(result_header, results) # tab_results(result_header, results) grid_search(logs, log_labels, 2, 70) grid_search(logs, log_labels, 3, 90) X = get_features(logs, 2, 100) X = X.toarray() idxs = np.where(np.all(X == 0, axis=1)) X = np.delete(X, idxs, axis=0) from pyclustering.cluster.clique import clique, clique_visualizer from pyclustering.utils import read_sample from pyclustering.samples.definitions import FCPS_SAMPLES # read two-dimensional input data 'Target' data = X # create CLIQUE algorithm for processing intervals = 1 # defines amount of cells in grid in each dimension threshold = 1 # lets consider each point as non-outlier clique_instance = clique(data, intervals, threshold) # start clustering process and obtain results clique_instance.process() clusters = clique_instance.get_clusters() # allocated clusters noise = clique_instance.get_noise() # points that are considered as outliers (in this example should be empty) cells = clique_instance.get_cells() # CLIQUE blocks that form the grid print("Amount of clusters:", len(clusters)) # visualize clustering results clique_visualizer.show_grid(cells, data[:,:2]) # show grid that has been formed by the algorithm clique_visualizer.show_clusters(data[:,:2], clusters, noise) # show clustering results data[:, :2] ``` ## Bigram Feature Vectorizer ``` eval_results = [] labels_ = log_labels X = get_features(logs, 2, 70) X = X.toarray() idxs = np.where(np.all(X == 0, axis=1)) X = np.delete(X, idxs, axis=0) labels_ = np.delete(labels_, idxs) clique_instance = clique(X, intervals, threshold) clique_instance.process() labels = clique_instance.get_clusters() plot_clusters("CLIQUE Bigram Clustering using UMAP", X, labels) results = evaluate_clustering('CLIQUE Bigram Clustering', X, labels_, labels) print(results) eval_results.append(results) ``` ## Trigram Feature Vectorizer ``` labels_ = log_labels X = get_features(logs, 3, 90) X = X.toarray() idxs = np.where(np.all(X == 0, axis=1)) X =
np.delete(X, idxs, axis=0) labels_ = np.delete(labels_, idxs) clique_instance = clique(X, intervals, threshold) clique_instance.process() labels = clique_instance.get_clusters() plot_clusters("CLIQUE Trigram Clustering using UMAP", X, labels) results = evaluate_clustering('CLIQUE Trigram Clustering', X, labels_, labels) print(results) eval_results.append(results) ```
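The `evaluate_vmeasure` and `evaluate_fm` helpers referenced above are project code not shown here, but both metrics exist directly in scikit-learn, and a tiny sketch shows the property that makes them suitable for clustering: they are invariant to how cluster ids are numbered. (Note that pyclustering's `get_clusters` returns clusters as lists of point indices, so those would first have to be flattened into one label per point.)

```python
from sklearn.metrics import v_measure_score, fowlkes_mallows_score

true_labels = [0, 0, 1, 1]
pred_labels = [1, 1, 0, 0]  # the same partition, with cluster ids swapped

# both metrics score a perfect match regardless of label numbering
print(v_measure_score(true_labels, pred_labels))       # 1.0
print(fowlkes_mallows_score(true_labels, pred_labels)) # 1.0
```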
``` import ipywidgets tabs = ipywidgets.Tab() tabs.children = [ipywidgets.Label(value='tab1'), ipywidgets.Label(value='tab2'), ipywidgets.Label(value='tab3'), ipywidgets.Label(value='tab4')] tabs.observe(lambda change: print(f"selected index: {change['new']}") , names='selected_index') def change_children(_): id = tabs.selected_index tabs.selected_index = None # Warning : this will emit a change event tabs.children = [ipywidgets.Label(value='tab1'), ipywidgets.Label(value='tab2'), ipywidgets.Label(value='tab3'), ipywidgets.Label(value='tab4')] tabs.selected_index = id btn = ipywidgets.Button(description='change_children') btn.on_click(change_children) ipywidgets.VBox([tabs, btn]) import ipywidgets as widgets tab_contents = ['P0', 'P1'] children = [widgets.Text(description=name) for name in tab_contents] tab = widgets.Tab() tab.children = children for i in range(len(children)): tab.set_title(i, str(i)) def tab_toggle_var(*args): global vartest if tab.selected_index ==0: vartest = 0 else: vartest = 1 tab.observe(tab_toggle_var) tab_toggle_var() print(children) metadata={} def _observe_test(change): print(change) def _observe_config(change): print('_observe_config') metadata[ widget_elec_config.description] = widget_elec_config.value metadata_json_raw = json.dumps(metadata, indent=4) export.value = "<pre>{}</pre>".format( html.escape(metadata_json_raw)) export = widgets.HTML() vbox_metadata = widgets.VBox( [ widgets.HTML(''' <h4>Preview of metadata export:</h4> <hr style="height:1px;border-width:0;color:black;background-color:gray"> '''), export ] ) for child in tab.children: print(child) child.observe(_observe_test) display(tab) w = widgets.Dropdown( options=['Addition', 'Multiplication', 'Subtraction', 'Division'], value='Addition', description='Task:', ) def on_change(change): if change['type'] == 'change' and change['name'] == 'value': print("changed to %s" % change['new']) w.observe(on_change) display(w) from IPython.display import display import ipywidgets as 
widgets int_range0_slider = widgets.IntSlider() int_range1_slider = widgets.IntSlider() output = widgets.Output() def interactive_function(inp0,inp1): with output: print('ie changed. int_range0_slider: '+str(inp0)+' int_range1_slider: '+str(inp1)) return def report_int_range0_change(change): with output: print('int_range0 change observed'+str(change)) return def report_ie_change(change): with output: print('ie change observed'+str(change)) return ie = widgets.interactive(interactive_function, inp0=int_range0_slider,inp1=int_range1_slider) # print(int_range0_slider.observe) # print(ie.observe) # int_range0_slider.observe(report_int_range0_change, names='value') for child in ie.children: child.observe(report_ie_change) display(int_range0_slider,int_range1_slider,output) ```
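The experiments above hinge on the difference between `observe(handler)`, which fires on every trait change, and `observe(handler, names='value')`, which fires only for that trait. ipywidgets delegates this to traitlets; here is a toy, pure-Python dispatcher that mimics the change dicts printed above (the class is an illustration, not the real implementation):

```python
# A minimal stand-in for traitlets-style change notification, to show why
# passing names='value' to .observe matters: without a filter the handler
# fires for every trait change, including ones you don't care about.
class Observable:
    def __init__(self):
        self._handlers = []   # list of (handler, names-or-None)
        self._traits = {}

    def observe(self, handler, names=None):
        self._handlers.append((handler, names))

    def set_trait(self, name, new):
        old = self._traits.get(name)
        self._traits[name] = new
        change = {'name': name, 'old': old, 'new': new, 'type': 'change'}
        for handler, names in self._handlers:
            if names is None or change['name'] == names:
                handler(change)

seen = []
w = Observable()
w.observe(lambda ch: seen.append(ch['name']), names='value')
w.set_trait('value', 1)
w.set_trait('selected_index', 2)  # filtered out by names='value'
print(seen)  # ['value']
```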
# Home Credit Default Risk Can you predict how capable each applicant is of repaying a loan? Many people struggle to get loans due to **insufficient or non-existent credit histories**. And, unfortunately, this population is often taken advantage of by untrustworthy lenders. Home Credit strives to broaden financial inclusion for the **unbanked population by providing a positive and safe borrowing experience**. In order to make sure this underserved population has a positive loan experience, Home Credit makes use of a variety of alternative data--including telco and transactional information--to predict their clients' repayment abilities. While Home Credit is currently using various statistical and machine learning methods to make these predictions, they're challenging Kagglers to help them unlock the full potential of their data. Doing so will ensure that clients capable of repayment are not rejected and that loans are given with a principal, maturity, and repayment calendar that will empower their clients to be successful. 
**Submissions are evaluated on area under the ROC curve between the predicted probability and the observed target.** # Dataset ``` # #Python Libraries import numpy as np import scipy as sp import pandas as pd import statsmodels import pandas_profiling %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import os import sys import time import requests import datetime import missingno as msno import math import sys import gc import os # #sklearn from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.model_selection import RandomizedSearchCV from sklearn.model_selection import GridSearchCV from sklearn.ensemble import RandomForestRegressor from sklearn import preprocessing # #sklearn - metrics from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.metrics import r2_score # #XGBoost & LightGBM import xgboost as xgb import lightgbm as lgb # #Missing value imputation from fancyimpute import KNN, MICE ``` ## Data Dictionary ``` !ls -l ../data/ ``` - application_{train|test}.csv This is the main table, broken into two files for Train (**with TARGET**) and Test (without TARGET). Static data for all applications. **One row represents one loan in our data sample.** - bureau.csv All client's previous credits provided by other financial institutions that were reported to Credit Bureau (for clients who have a loan in our sample). For every loan in our sample, there are as many rows as number of credits the client had in Credit Bureau before the application date. - bureau_balance.csv Monthly balances of previous credits in Credit Bureau. This table has one row for each month of history of every previous credit reported to Credit Bureau – i.e the table has (#loans in sample * # of relative previous credits * # of months where we have some history observable for the previous credits) rows. 
- POS_CASH_balance.csv Monthly balance snapshots of previous POS (point of sales) and cash loans that the applicant had with Home Credit. This table has one row for each month of history of every previous credit in Home Credit (consumer credit and cash loans) related to loans in our sample – i.e. the table has (#loans in sample * # of relative previous credits * # of months in which we have some history observable for the previous credits) rows. - credit_card_balance.csv Monthly balance snapshots of previous credit cards that the applicant has with Home Credit. This table has one row for each month of history of every previous credit in Home Credit (consumer credit and cash loans) related to loans in our sample – i.e. the table has (#loans in sample * # of relative previous credit cards * # of months where we have some history observable for the previous credit card) rows. - previous_application.csv All previous applications for Home Credit loans of clients who have loans in our sample. There is one row for each previous application related to loans in our data sample. - installments_payments.csv Repayment history for the previously disbursed credits in Home Credit related to the loans in our sample. There is a) one row for every payment that was made plus b) one row for each missed payment. One row is equivalent to one payment of one installment OR one installment corresponding to one payment of one previous Home Credit credit related to loans in our sample. - HomeCredit_columns_description.csv This file contains descriptions for the columns in the various data files. 
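Note that only application_{train|test}.csv is one row per loan; each of the child tables above has many rows per `SK_ID_CURR`, so before it can contribute features it must be aggregated back to one row per application and merged in. A minimal pandas sketch of that pattern on a toy stand-in for bureau.csv (`credit_amount` is an illustrative column name, not the real schema — see HomeCredit_columns_description.csv for the actual columns):

```python
import pandas as pd

# Toy stand-in for bureau.csv: several Credit Bureau credits per application.
# 'credit_amount' is an illustrative column name, not the real schema.
bureau = pd.DataFrame({
    "SK_ID_CURR": [1001, 1001, 1002],
    "credit_amount": [5000.0, 12000.0, 3000.0],
})

# Collapse to one row per application: count and total of prior credits
bureau_agg = bureau.groupby("SK_ID_CURR").agg(
    n_prior_credits=("credit_amount", "size"),
    total_prior_credit=("credit_amount", "sum"),
).reset_index()

print(bureau_agg)
```

The resulting frame can then be left-joined onto the application table on `SK_ID_CURR`; the same aggregate-then-merge pattern applies to the monthly balance tables.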
![](https://storage.googleapis.com/kaggle-media/competitions/home-credit/home_credit.png) # Data Pre-processing ``` df_application_train = pd.read_csv("../data/application_train.csv") df_application_train.head() df_application_test = pd.read_csv("../data/application_test.csv") df_application_test.head() ``` ## Missing Value Imputation ``` df_application_train_imputed = pd.read_csv("../transformed_data/application_train_imputed.csv") df_application_test_imputed = pd.read_csv("../transformed_data/application_test_imputed.csv") df_application_train.shape, df_application_test.shape df_application_train_imputed.shape, df_application_test_imputed.shape df_application_train.isnull().sum(axis = 0).sum(), df_application_test.isnull().sum(axis = 0).sum() df_application_train_imputed.isnull().sum(axis = 0).sum(), df_application_test_imputed.isnull().sum(axis = 0).sum() ``` # Model Building ## Encode categorical columns ``` # arr_categorical_columns = df_application_train.select_dtypes(['object']).columns # for var_col in arr_categorical_columns: # df_application_train[var_col] = df_application_train[var_col].astype('category').cat.codes # arr_categorical_columns = df_application_test.select_dtypes(['object']).columns # for var_col in arr_categorical_columns: # df_application_test[var_col] = df_application_test[var_col].astype('category').cat.codes ``` ## Train-Validation Split ``` input_columns = df_application_train_imputed.columns input_columns = input_columns[input_columns != 'TARGET'] target_column = 'TARGET' X = df_application_train_imputed[input_columns] y = df_application_train_imputed[target_column] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) xgb_params = { 'seed': 0, 'colsample_bytree': 0.8, 'silent': 1, 'subsample': 0.6, 'learning_rate': 0.01, 'objective': 'binary:logistic', 'eval_metric': 'auc', 'max_depth': 6, 'num_parallel_tree': 1, 'min_child_weight': 5, } watchlist = [(xgb.DMatrix(X_train, y_train), 'train'), (xgb.DMatrix(X_test, 
y_test), 'valid')] model = xgb.train(xgb_params, xgb.DMatrix(X_train, y_train), 270, watchlist, maximize=True, verbose_eval=100) df_predict = model.predict(xgb.DMatrix(df_application_test_imputed), ntree_limit=model.best_ntree_limit) submission = pd.DataFrame() submission["SK_ID_CURR"] = df_application_test["SK_ID_CURR"] submission["TARGET"] = df_predict submission.to_csv("../submissions/model_1_xgbstarter_missingdata_MICE_imputed.csv", index=False) submission.shape input_columns = df_application_train.columns input_columns = input_columns[input_columns != 'TARGET'] target_column = 'TARGET' X = df_application_train[input_columns] y = df_application_train[target_column] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) watchlist = [(xgb.DMatrix(X_train, y_train), 'train'), (xgb.DMatrix(X_test, y_test), 'valid')] model = xgb.train(xgb_params, xgb.DMatrix(X_train, y_train), 270, watchlist, maximize=True, verbose_eval=100) df_predict = model.predict(xgb.DMatrix(df_application_test), ntree_limit=model.best_ntree_limit) submission = pd.DataFrame() submission["SK_ID_CURR"] = df_application_test["SK_ID_CURR"] submission["TARGET"] = df_predict submission.to_csv("../submissions/model_1_xgbstarter_missingdata_MICE_nonimputed_hypothesis.csv", index=False) submission.shape ```
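As noted at the top, submissions are scored on area under the ROC curve. AUC has a useful interpretation for sanity-checking the validation numbers printed above: it is the probability that a randomly chosen positive (defaulting) application is scored above a randomly chosen negative one, with ties counting half. A minimal pure-Python sketch of that definition (in practice, `sklearn.metrics.roc_auc_score` applied to `y_test` and the model's predicted probabilities does the same job):

```python
def roc_auc(y_true, y_score):
    """AUC as P(score of a random positive > score of a random negative); ties count 0.5."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels and scores, standing in for y_test and the model's predictions
print(roc_auc([0, 0, 1, 1, 0, 1], [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]))
```

Because AUC depends only on the ranking of scores, any monotone transformation of the predicted probabilities leaves it unchanged.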
``` library(repr) ; options(repr.plot.width = 5, repr.plot.height = 6) # Change plot sizes (in cm) ``` # Bootstrapping using rTPC package ## Introduction In this chapter we will work through an example of model fitting using the rTPC package in R. This references the previous chapters' work, especially [Model Fitting the Bayesian way](https://www.youtube.com/watch?v=dQw4w9WgXcQ). Let's start with the requirements! ``` require('ggplot2') require('nls.multstart') require('broom') require('tidyverse') require('rTPC') require('dplyr') require('data.table') require('car') require('boot') require('patchwork') require('minpack.lm') require("tidyr") require('purrr') # update.packages(ask = FALSE) rm(list=ls()) graphics.off() setwd("/home/primuser/Documents/VByte/VecMismatchPaper1/code/") ``` Now that we have the background requirements going, we can start using the rTPC package. Let's look through the different models available! ``` #take a look at the different models available get_model_names() ``` There are 24 models to choose from. For our purposes in this chapter we will be using the sharpeschoolhigh_1981 model. More information on the model can be found [here](https://padpadpadpad.github.io/rTPC/reference/sharpeschoolhigh_1981.html). From here, let's load in our data from the overall repository. This will be called '../data/Final_Traitofinterest.csv'. ``` #read in the trait data final_trait_data <- read.csv('../data/Final_Traitofinterest.csv') ``` Let's reduce this to a single trait. This data comes from the [VectorBiTE database](https://legacy.vectorbyte.org/) and so has unique IDs. We will use these to isolate our species and trait of interest from the larger dataset. In this example we will be looking at Development Rate across temperatures for Aedes albopictus, an example of which is the record with ID csm7I. 
``` df1 <- final_trait_data %>% dplyr::select('originalid', 'originaltraitname', 'originaltraitunit', 'originaltraitvalue', 'interactor1', 'ambienttemp', 'citation') #filter to single species and trait df2 <- dplyr::filter(df1, originalid == 'csm7I') ``` Now let's visualize our data in ggplot. ``` #visualize ggplot(df2, aes(ambienttemp, originaltraitvalue))+ geom_point()+ theme_bw(base_size = 12) + labs(x = 'Temperature (ºC)', y = 'Development Rate', title = 'Development Rate across temperatures for Aedes albopictus') ``` We will need to specify which model we are using (sharpeschoolhigh_1981). From here we can actually build our fit. We will use `nls_multstart` to automatically find our starting values. This lets us skip the [starting value problem](https://mhasoba.github.io/TheMulQuaBio/notebooks/20-ModelFitting-NLLS.html#the-starting-values-problem). From here we build our predicted line. ``` # choose model mod = 'sharpeschoolhigh_1981' d <- df2 %>% rename(temp = ambienttemp, rate = originaltraitvalue) # fit Sharpe-Schoolfield model d_fit <- nest(d, data = c(temp, rate)) %>% mutate(sharpeschoolhigh = map(data, ~nls_multstart(rate~sharpeschoolhigh_1981(temp = temp, r_tref,e,eh,th, tref = 15), data = .x, iter = c(3,3,3,3), start_lower = get_start_vals(.x$temp, .x$rate, model_name = 'sharpeschoolhigh_1981') - 10, start_upper = get_start_vals(.x$temp, .x$rate, model_name = 'sharpeschoolhigh_1981') + 10, lower = get_lower_lims(.x$temp, .x$rate, model_name = 'sharpeschoolhigh_1981'), upper = get_upper_lims(.x$temp, .x$rate, model_name = 'sharpeschoolhigh_1981'), supp_errors = 'Y', convergence_count = FALSE)), # create new temperature data new_data = map(data, ~tibble(temp = seq(min(.x$temp), max(.x$temp), length.out = 100))), # predict over that data, preds = map2(sharpeschoolhigh, new_data, ~augment(.x, newdata = .y))) # unnest predictions d_preds <- select(d_fit, preds) %>% unnest(preds) ``` Let's visualize the line: ``` # plot data and predictions ggplot() + 
geom_line(aes(temp, .fitted), d_preds, col = 'blue') + geom_point(aes(temp, rate), d, size = 2, alpha = 0.5) + theme_bw(base_size = 12) + labs(x = 'Temperature (ºC)', y = 'Growth rate', title = 'Growth rate across temperatures') ``` This looks like a good fit! We can start exploring using bootstrapping. Let's start by refitting the model using nlsLM. ``` # refit model using nlsLM fit_nlsLM <- minpack.lm::nlsLM(rate~sharpeschoolhigh_1981(temp = temp, r_tref,e,eh,th, tref = 15), data = d, start = coef(d_fit$sharpeschoolhigh[[1]]), lower = get_lower_lims(d$temp, d$rate, model_name = 'sharpeschoolhigh_1981'), upper = get_upper_lims(d$temp, d$rate, model_name = 'sharpeschoolhigh_1981'), weights = rep(1, times = nrow(d))) ``` Now we can actually bootstrap. ``` # bootstrap using case resampling boot1 <- Boot(fit_nlsLM, method = 'case') ``` It is a good idea to explore the bootstrapped estimates now. ``` # look at the data head(boot1$t) hist(boot1, layout = c(2,2)) ``` Now we use the bootstrapped model to build predictions which we can explore visually. 
``` # create predictions of each bootstrapped model boot1_preds <- boot1$t %>% as.data.frame() %>% drop_na() %>% mutate(iter = 1:n()) %>% group_by_all() %>% do(data.frame(temp = seq(min(d$temp), max(d$temp), length.out = 100))) %>% ungroup() %>% mutate(pred = sharpeschoolhigh_1981(temp, r_tref, e, eh, th, tref = 15)) # calculate bootstrapped confidence intervals boot1_conf_preds <- group_by(boot1_preds, temp) %>% summarise(conf_lower = quantile(pred, 0.025), conf_upper = quantile(pred, 0.975)) %>% ungroup() # plot bootstrapped CIs p1 <- ggplot() + geom_line(aes(temp, .fitted), d_preds, col = 'blue') + geom_ribbon(aes(temp, ymin = conf_lower, ymax = conf_upper), boot1_conf_preds, fill = 'blue', alpha = 0.3) + geom_point(aes(temp, rate), d, size = 2, alpha = 0.5) + theme_bw(base_size = 12) + labs(x = 'Temperature (ºC)', y = 'Growth rate', title = 'Growth rate across temperatures') # plot bootstrapped predictions p2 <- ggplot() + geom_line(aes(temp, .fitted), d_preds, col = 'blue') + geom_line(aes(temp, pred, group = iter), boot1_preds, col = 'blue', alpha = 0.007) + geom_point(aes(temp, rate), d, size = 2, alpha = 0.5) + theme_bw(base_size = 12) + labs(x = 'Temperature (ºC)', y = 'Growth rate', title = 'Growth rate across temperatures') p1 + p2 ``` We can see here that when we bootstrap this data, the fit is not as good as we would expect from the initial exploration. We do not necessarily get a good thermal optimum from this data. However, this does show how to use these functions in the future. Please see Daniel Padfield's [GitHub page](https://padpadpadpad.github.io/rTPC/articles/rTPC.html) for more information on using the rTPC package. # Please go to the [landing page](https://www.youtube.com/watch?v=YddwkMJG1Jo) and proceed on to the next stage of the training!
``` # import libraries import numpy as np import pandas as pd from numpy import genfromtxt import math from scipy import optimize import matplotlib.pyplot as plt import csv import sqlite3 import os import urllib.request # Function for the SIR model with two levels of alpha and beta -- O(n) speed # # INPUTS # # S0 - initial number of susceptible people # I0 - initial number of infected people # R0 - initial number of recovered people (including those who died) # alpha1 - initial recovery rate # alpha2 - later recovery rate # beta1 - initial contact rate # beta2 - later contact rate # n1 - time when alpha transitions # n2 - time when beta transitions # n - number of days to simulate # # OUTPUTS # # SIR - a 3-by-(n+1) matrix storing the simulated paths for S(t), I(t), and R(t) for t = 0, 1, 2, ..., n def sir22(S0,I0,R0,alpha1,alpha2,beta1,beta2,n1,n2,n): SIR = np.zeros((3,n+1)) # Fill in initial data SIR[:,0] = np.array([S0, I0, R0]) alpha = alpha1 beta = beta1 for i in range(n): SIR[:,i+1] = SIR[:,i] + np.array([-beta*SIR[0,i]*SIR[1,i], beta*SIR[0,i]*SIR[1,i] - alpha*SIR[1,i], alpha*SIR[1,i]]) if i == n1: alpha = alpha2 if i == n2: beta = beta2 return SIR # Function for the standard SIR model -- O(n) speed # # INPUTS # # S0 - initial number of susceptible people # I0 - initial number of infected people # R0 - initial number of recovered people (including those who died) # alpha - recovery rate # beta - contact rate # n - number of days to simulate # # OUTPUTS # # SIR - a 3-by-(n+1) matrix storing the simulated paths for S(t), I(t), and R(t) for t = 0, 1, 2, ..., n def sir11(S0,I0,R0,alpha,beta,n): SIR = np.zeros((3,n+1)) # Fill in initial data SIR[:,0] = np.array([S0, I0, R0]) for i in range(n): SIR[:,i+1] = SIR[:,i] + np.array([-beta*SIR[0,i]*SIR[1,i], beta*SIR[0,i]*SIR[1,i] - alpha*SIR[1,i], alpha*SIR[1,i]]) return SIR # Function to compute the error between predicted data and real data # # INPUTS # # data - 3-by-(n+1) matrix of real data for S(t), I(t), R(t) at 
times t = 0, 1, ..., l # prediction - 3-by-(n+1) matrix of simulated data for S(t), I(t), and R(t) for t = 0, 1, ..., l # # OUTPUTS # # error - the square root of the sum of squared differences between real and simulated data (the Frobenius norm of the difference) def findError(data,prediction): return math.sqrt(np.sum((data - prediction)**2)) # Read CSV file into dataframe and clean it def cleanCSV(filename): # Read the csv file into a Pandas dataframe df = pd.read_csv(filename) # Replace slashes in header with _ df.columns =[column.replace("/", "_") for column in df.columns] return df def downloadDataIntoCleanRows(): # Download and clean confirmed cases by country/date url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv' filename = 'time_series_covid19_confirmed_global.csv' urllib.request.urlretrieve(url, filename) confirmed = cleanCSV(filename) # Download and clean recovered cases by country/date url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv' filename = 'time_series_covid19_recovered_global.csv' urllib.request.urlretrieve(url, filename) recovered = cleanCSV(filename) # Download and clean deaths by country/date url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv' filename = 'time_series_covid19_deaths_global.csv' urllib.request.urlretrieve(url, filename) deaths = cleanCSV(filename) return confirmed, recovered, deaths # Get data for queried country def queryForCountry(confirmed,recovered,deaths,country): popData = pd.read_csv('WorldDevelopmentIndicatorsApr92020.csv') popData.columns =[column.replace(" ", "_") for column in popData.columns] population = popData.query("Country_Name == @country").to_numpy()[0,1] # Query for country data confirmed = 
confirmed.query("Country_Region == @country") recovered = recovered.query("Country_Region == @country") dead = deaths.query("Country_Region == @country") # Prep data and drop columns when there are no cases confirmed = confirmed.drop(['Province_State', 'Country_Region', 'Lat', 'Long'], axis=1) confirmed = confirmed.to_numpy()[0] # Index of first case plus 9 days firstCase = np.nonzero(confirmed)[0][0] + 9 print(firstCase) # Cut dates before the first infected case confirmed = confirmed[firstCase:] recovered = recovered.drop(['Province_State', 'Country_Region', 'Lat', 'Long'], axis=1) recovered = recovered.to_numpy()[0] recovered = recovered[firstCase:] dead = dead.drop(['Province_State', 'Country_Region', 'Lat', 'Long'], axis=1) dead = dead.to_numpy()[0] dead = dead[firstCase:] # Turn the data into S I R data R = recovered + dead I = confirmed - R S = population - I - R # Create data array data = np.vstack((S, I, R)) # Convert data array to float data = np.array(list(data[:, :]), dtype=np.float64) # Find number of days of data lastData = np.size(data,1) - 1 return data, lastData, firstCase # Find values for the parameters minimizing the error confirmed, recovered, deaths = downloadDataIntoCleanRows() data, lastData, firstCase = queryForCountry(confirmed,recovered,deaths,'Jordan') # Read the initial data from the first column of data S0 = data[0,0] I0 = data[1,0] R0 = data[2,0] # Create a function of the parameters that runs the simulator and measures the error from the real data f = lambda x: findError(sir22(S0,I0,R0,x[0],x[1],x[2]/S0,x[3]/S0,18,10,lastData),data) # Optimize the parameters (via gradient descent) result = optimize.minimize(f,[0.1, 0.1, 0.2, 0.2],bounds = ((0,1),(0,1),(0,1),(0,1))) # Let x be the parameters found above x = result.x # Return details about the optimization result # Simulate SIR again with optimal parameters and plot it with the real data SIR = sir22(S0,I0,R0,x[0],x[1],x[2]/S0,x[3]/S0,18,10,200) fig, ax1 = plt.subplots() 
ax1.set_xlabel('Time') ax1.set_ylabel('Susceptible') ax1.plot(SIR[0,:],color='tab:blue') ax1.tick_params(axis='y') ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis ax2.set_ylabel('Infectious / Recovered') # we already handled the x-label with ax1 ax2.plot(SIR[1,:],color='tab:orange') ax2.plot(SIR[2,:],color='tab:green') ax2.tick_params(axis='y') ax1.plot(data[0,:],'.',color='tab:blue') ax2.plot(data[1,:],'.',color='tab:orange') ax2.plot(data[2,:],'.',color='tab:green') ax1.ticklabel_format(useOffset=False) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.show() plt.plot(SIR[0,:]) plt.plot(SIR[1,:]) plt.plot(SIR[2,:]) plt.gca().set_prop_cycle(None) plt.plot(data[0,:],'.') plt.plot(data[1,:],'.') plt.plot(data[2,:],'.') bestError = [0,0,0,0,0,0,1000000] # This code optimizes the parameters but also times when alpha and beta change # Note that it is brute force in the times, so this runs SLOWLY for n1 in range(int(lastData/3),lastData): mError = bestError[-1] print(n1) for n2 in range(int(lastData/5),lastData): # optimize the alpha and beta parameters for given n1 and n2 f = lambda x: findError(sir22(S0,I0,R0,x[0],x[1],x[2]/S0,x[3]/S0,n1,n2,lastData),data) result = optimize.minimize(f,[0.1, 0.1, 0.2, 0.2], bounds = ((0,1),(0,1),(0,1),(0,1))) # If we find a lower error than we found previously, record the parameters and error if result.fun < bestError[-1]: x = result.x bestError = [x[0], x[1], x[2], x[3], n1, n2, result.fun] if bestError[-1] == mError: break # Display the best parameters and error we found so far print(bestError) # plot with the optimal parameters SIR = sir22(S0,I0,R0,bestError[0],bestError[1],bestError[2]/S0,bestError[3]/S0,bestError[4],bestError[5],100) #fig = plt.figure(figsize=(10, 8), dpi= 80, facecolor='w', edgecolor='k') fig, ax1 = plt.subplots() ax1.set_xlabel('Time') ax1.set_ylabel('Susceptible (blue)') ax1.plot(SIR[0,:],color='tab:blue') ax1.tick_params(axis='y') ax2 = ax1.twinx() # instantiate 
a second axes that shares the same x-axis ax2.set_ylabel('Infectious (orange) / Recovered (green)') # we already handled the x-label with ax1 ax2.plot(SIR[1,:],color='tab:orange') ax2.plot(SIR[2,:],color='tab:green') ax2.tick_params(axis='y') ax1.plot(data[0,:],'.',color='tab:blue') ax2.plot(data[1,:],'.',color='tab:orange') ax2.plot(data[2,:],'.',color='tab:green') #ax1.set_ylim([9719000,9720000]) ax1.ticklabel_format(useOffset=False) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.show() #plt.plot(SIR[0,:]) #plt.plot(SIR[1,:]) #plt.plot(SIR[2,:]) #plt.gca().set_prop_cycle(None) #plt.plot(data[0,:],'.') #plt.plot(data[1,:],'.') #plt.plot(data[2,:],'.') ## FIND BEST PARAMETERS FOR COUNTRIES IN THE LIST # Download data and put into rows confirmed, recovered, deaths = downloadDataIntoCleanRows() countries = ['Algeria', 'Bahrain', 'Cyprus', 'Egypt', 'Iran', 'Iraq', 'Israel', 'Jordan', 'Lebanon', 'Morocco', 'Oman', 'Qatar', 'Saudi Arabia', 'Tunisia', 'Turkey', 'United Arab Emirates', 'West Bank and Gaza'] #countries = ['Algeria', 'Bahrain', 'Cyprus'] errorStore = [] for country in countries: print(country) data, lastData, firstCase = queryForCountry(confirmed,recovered,deaths,country) # Read the initial data from the first column of data S0 = data[0,0] I0 = data[1,0] R0 = data[2,0] # bestError = [country,firstCase, alpha1, alpha2, beta1, beta2, n1, n2, error] bestError = ['a',0,0,0,0,0,0,0,1000000] # This code optimizes the parameters but also times when alpha and beta change # Note that it is brute force in the times, so this runs SLOWLY for n1 in range(int(lastData/4),lastData): mError = bestError[-1] print(n1) for n2 in range(int(lastData/5),lastData): # optimize the alpha and beta parameters for given n1 and n2 f = lambda x: findError(sir22(S0,I0,R0,x[0],x[1],x[2]/S0,x[3]/S0,n1,n2,lastData),data) result = optimize.minimize(f,[0.1, 0.1, 0.2, 0.2], bounds = ((0,1),(0,1),(0,1),(0,1))) # If we find a lower error than we found previously, record 
the parameters and error if result.fun < bestError[-1]: x = result.x bestError = [firstCase, x[0], x[1], x[2], x[3], n1, n2, result.fun] if bestError[-1] == mError: break errorStore.append(bestError) df = pd.DataFrame(data=errorStore,index=countries,columns=['day of first case+10', 'initial alpha', 'final alpha', 'initial beta', 'final beta', 'alpha switchover', 'beta switchover', 'error']) df ```
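A quick property worth checking on the simulators above: each update in `sir11`/`sir22` only moves people between compartments, so the total S(t) + I(t) + R(t) should stay constant along the whole path. A self-contained check, restating a minimal plain-Python version of `sir11` so the sketch stands alone:

```python
def sir11(S0, I0, R0, alpha, beta, n):
    # Discrete SIR update: new infections beta*S*I and recoveries alpha*I per day
    path = [(S0, I0, R0)]
    for _ in range(n):
        S, I, R = path[-1]
        new_inf = beta * S * I
        new_rec = alpha * I
        path.append((S - new_inf, I + new_inf - new_rec, R + new_rec))
    return path

path = sir11(990.0, 10.0, 0.0, alpha=0.1, beta=0.0005, n=60)
# Every state along the path should sum to the initial population of 1000
print(all(abs(sum(state) - 1000.0) < 1e-6 for state in path))
```

The same invariant holds for `sir22`, since switching alpha or beta changes the flow rates but not the bookkeeping; checking it is a cheap guard against sign errors in the update.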
# Targeting Direct Marketing with Amazon SageMaker XGBoost _**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_ --- ## Background Direct marketing, either through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to only target the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem. This notebook presents an example problem to predict if a customer will enroll for a term deposit at a bank, after one or more phone calls. The steps include: * Preparing your Amazon SageMaker notebook * Downloading data from the internet into Amazon SageMaker * Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms * Estimating a model using the Gradient Boosting algorithm * Evaluating the effectiveness of the model * Setting the model up to make on-going predictions --- ## Preparation _This notebook was created and tested on an ml.m4.xlarge notebook instance._ Let's start by specifying: - The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting. - The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s). 
``` # cell 01 import sagemaker bucket=sagemaker.Session().default_bucket() prefix = 'sagemaker/DEMO-xgboost-dm' # Define IAM role import boto3 import re from sagemaker import get_execution_role role = get_execution_role() ``` Now let's bring in the Python libraries that we'll use throughout the analysis. ``` # cell 02 import numpy as np # For matrix operations and numerical processing import pandas as pd # For munging tabular data import matplotlib.pyplot as plt # For charts and visualizations from IPython.display import Image # For displaying images in the notebook from IPython.display import display # For displaying outputs in the notebook from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc. import sys # For writing outputs to notebook import math # For ceiling function import json # For parsing hosting outputs import os # For manipulating filepath names import sagemaker import zipfile # Amazon SageMaker's Python SDK provides many helper functions ``` --- ## Data Let's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data S3 bucket. \[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014 ``` # cell 03 !wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip with zipfile.ZipFile('bank-additional.zip', 'r') as zip_ref: zip_ref.extractall('.') ``` Now let's read this into a Pandas data frame and take a look. ``` # cell 04 data = pd.read_csv('./bank-additional/bank-additional-full.csv') pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns pd.set_option('display.max_rows', 20) # Keep the output on one page data ``` We will store this natively in S3 to then process it with SageMaker Processing. 
``` # cell 05 from sagemaker import Session sess = Session() input_source = sess.upload_data('./bank-additional/bank-additional-full.csv', bucket=bucket, key_prefix=f'{prefix}/input_data') input_source ``` # Feature Engineering with Amazon SageMaker Processing Amazon SageMaker Processing allows you to run steps for data pre- or post-processing, feature engineering, data validation, or model evaluation workloads on Amazon SageMaker. Processing jobs accept data from Amazon S3 as input and store data into Amazon S3 as output. ![processing](https://sagemaker.readthedocs.io/en/stable/_images/amazon_sagemaker_processing_image1.png) Here, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for prototyping and can be run on relatively inexpensive and less powerful instances, while processing, training and model hosting tasks are run on separate, more powerful SageMaker-managed instances. SageMaker Processing includes off-the-shelf support for Scikit-learn, as well as a Bring Your Own Container option, so it can be used with many different data transformation technologies and tasks. To use SageMaker Processing, simply supply a Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a minimal contract: input and output data must be placed in specified directories. If this is done, SageMaker Processing automatically loads the input data from S3 and uploads transformed data back to S3 when the job is complete. 
``` # cell 06 %%writefile preprocessing.py import pandas as pd import numpy as np import argparse import os from sklearn.preprocessing import OrdinalEncoder def _parse_args(): parser = argparse.ArgumentParser() # Data, model, and output directories # model_dir is always passed in from SageMaker. By default this is a S3 path under the default bucket. parser.add_argument('--filepath', type=str, default='/opt/ml/processing/input/') parser.add_argument('--filename', type=str, default='bank-additional-full.csv') parser.add_argument('--outputpath', type=str, default='/opt/ml/processing/output/') parser.add_argument('--categorical_features', type=str, default='y, job, marital, education, default, housing, loan, contact, month, day_of_week, poutcome') return parser.parse_known_args() if __name__=="__main__": # Process arguments args, _ = _parse_args() # Load data df = pd.read_csv(os.path.join(args.filepath, args.filename)) # Change the value . into _ df = df.replace(regex=r'\.', value='_') df = df.replace(regex=r'\_$', value='') # Add two new indicators df["no_previous_contact"] = (df["pdays"] == 999).astype(int) df["not_working"] = df["job"].isin(["student", "retired", "unemployed"]).astype(int) df = df.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1) # Encode the categorical features df = pd.get_dummies(df) # Train, test, validation split train_data, validation_data, test_data = np.split(df.sample(frac=1, random_state=42), [int(0.7 * len(df)), int(0.9 * len(df))]) # Randomly sort the data then split out first 70%, second 20%, and last 10% # Local store pd.concat([train_data['y_yes'], train_data.drop(['y_yes','y_no'], axis=1)], axis=1).to_csv(os.path.join(args.outputpath, 'train/train.csv'), index=False, header=False) pd.concat([validation_data['y_yes'], validation_data.drop(['y_yes','y_no'], axis=1)], axis=1).to_csv(os.path.join(args.outputpath, 'validation/validation.csv'), index=False, header=False) 
test_data['y_yes'].to_csv(os.path.join(args.outputpath, 'test/test_y.csv'), index=False, header=False) test_data.drop(['y_yes','y_no'], axis=1).to_csv(os.path.join(args.outputpath, 'test/test_x.csv'), index=False, header=False) print("## Processing complete. Exiting.") ``` Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. ``` # cell 07 train_path = f"s3://{bucket}/{prefix}/train" validation_path = f"s3://{bucket}/{prefix}/validation" test_path = f"s3://{bucket}/{prefix}/test" # cell 08 from sagemaker.sklearn.processing import SKLearnProcessor from sagemaker.processing import ProcessingInput, ProcessingOutput from sagemaker import get_execution_role sklearn_processor = SKLearnProcessor( framework_version="0.23-1", role=get_execution_role(), instance_type="ml.m5.large", instance_count=1, base_job_name='sm-immday-skprocessing' ) sklearn_processor.run( code='preprocessing.py', # arguments = ['arg1', 'arg2'], inputs=[ ProcessingInput( source=input_source, destination="/opt/ml/processing/input", s3_input_mode="File", s3_data_distribution_type="ShardedByS3Key" ) ], outputs=[ ProcessingOutput( output_name="train_data", source="/opt/ml/processing/output/train", destination=train_path, ), ProcessingOutput(output_name="validation_data", source="/opt/ml/processing/output/validation", destination=validation_path), ProcessingOutput(output_name="test_data", source="/opt/ml/processing/output/test", destination=test_path), ] ) # cell 09 !aws s3 ls $train_path/ ``` --- ## End of Lab 1 --- ## Training Now we know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, for targeting future prospects, good predictive accuracy is preferred to being able to explain why that prospect was targeted. 
Taken together, these aspects make gradient boosted trees a good candidate algorithm. There are several intricacies to understanding the algorithm, but at a high level, gradient boosting works by combining predictions from many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate on gradient boosted trees further and how they differ from similar algorithms. `xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. Let's start with a simple `xgboost` model, trained using Amazon SageMaker's managed, distributed training framework. First we'll need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost. ``` # cell 10 container = sagemaker.image_uris.retrieve(region=boto3.Session().region_name, framework='xgboost', version='latest') ``` Then, because we're training with the CSV file format, we'll create `s3_input`s that our training function can use as a pointer to the files in S3, which also specify that the content type is CSV. ``` # cell 11 s3_input_train = sagemaker.inputs.TrainingInput(s3_data=train_path.format(bucket, prefix), content_type='csv') s3_input_validation = sagemaker.inputs.TrainingInput(s3_data=validation_path.format(bucket, prefix), content_type='csv') ``` Next we'll specify training parameters to the estimator. This includes: 1. The `xgboost` algorithm container 1. The IAM role to use 1. Training instance type and count 1. S3 location for output data 1. Algorithm hyperparameters And then a `.fit()` function which specifies: 1. S3 location for training data. In this case we have both a training and validation set which are passed in. 
``` # cell 12 sess = sagemaker.Session() xgb = sagemaker.estimator.Estimator(container, role, instance_count=1, instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(bucket, prefix), sagemaker_session=sess) xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', num_round=100) xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ``` --- ## Hosting Now that we've trained the `xgboost` algorithm on our data, let's deploy a model that's hosted behind a real-time endpoint. ``` # cell 13 xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ``` --- ## Evaluation There are many ways to compare the performance of a machine learning model, but let's start by simply comparing actual to predicted values. In this case, we're simply predicting whether the customer subscribed to a term deposit (`1`) or not (`0`), which produces a simple confusion matrix. First we'll need to determine how we pass data into and receive data from our endpoint. Our data is currently stored as NumPy arrays in memory of our notebook instance. To send it in an HTTP POST request, we'll serialize it as a CSV string and then decode the resulting CSV. *Note: For inference with CSV format, SageMaker XGBoost requires that the data does NOT include the target variable.* ``` # cell 14 xgb_predictor.serializer = sagemaker.serializers.CSVSerializer() ``` Now, we'll use a simple function to: 1. Loop over our test dataset 1. Split it into mini-batches of rows 1. Convert those mini-batches to CSV string payloads (notice, we drop the target variable from our dataset first) 1. Retrieve mini-batch predictions by invoking the XGBoost endpoint 1. 
Collect predictions and convert from the CSV output our model provides into a NumPy array

```
# cell 15
!aws s3 cp $test_path/test_x.csv /tmp/test_x.csv
!aws s3 cp $test_path/test_y.csv /tmp/test_y.csv

# cell 16
def predict(data, predictor, rows=500):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = ''
    for array in split_array:
        predictions = ','.join([predictions, predictor.predict(array).decode('utf-8')])
    return np.fromstring(predictions[1:], sep=',')

test_x = pd.read_csv('/tmp/test_x.csv', names=[f'{i}' for i in range(59)])
test_y = pd.read_csv('/tmp/test_y.csv', names=['y'])
predictions = predict(test_x.drop(test_x.columns[0], axis=1).to_numpy(), xgb_predictor)
```

Now we'll check our confusion matrix to see how well we predicted versus actuals.

```
# cell 17
pd.crosstab(index=test_y['y'].values, columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
```

So, of the ~4000 potential customers, we predicted 136 would subscribe and 94 of them actually did. Another 389 customers subscribed even though we did not predict they would. This is less than desirable, but the model can (and should) be tuned to improve this. Most importantly, note that with minimal effort, our model produced accuracies similar to those published [here](http://media.salford-systems.com/video/tutorial/2015/targeted_marketing.pdf).

_Note that because there is some element of randomness in the algorithm's subsample, your results may differ slightly from the text written above._

## Automatic Model Tuning (optional)

Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. 
For example, suppose that you want to solve a binary classification problem on this marketing dataset. Your goal is to maximize the area under the curve (auc) metric of the algorithm by training an XGBoost Algorithm model. You don't know which values of the eta, alpha, min_child_weight, and max_depth hyperparameters to use to train the best model. To find the best values for these hyperparameters, you can specify ranges of values that Amazon SageMaker hyperparameter tuning searches to find the combination of values that results in the training job that performs the best as measured by the objective metric that you chose. Hyperparameter tuning launches training jobs that use hyperparameter values in the ranges that you specified, and returns the training job with highest auc. ``` # cell 18 from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner hyperparameter_ranges = {'eta': ContinuousParameter(0, 1), 'min_child_weight': ContinuousParameter(1, 10), 'alpha': ContinuousParameter(0, 2), 'max_depth': IntegerParameter(1, 10)} # cell 19 objective_metric_name = 'validation:auc' # cell 20 tuner = HyperparameterTuner(xgb, objective_metric_name, hyperparameter_ranges, max_jobs=20, max_parallel_jobs=3) # cell 21 tuner.fit({'train': s3_input_train, 'validation': s3_input_validation}) # cell 22 boto3.client('sagemaker').describe_hyper_parameter_tuning_job( HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus'] # cell 23 # return the best training job name tuner.best_training_job() # cell 24 # Deploy the best trained or user specified model to an Amazon SageMaker endpoint tuner_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') # cell 25 # Create a serializer tuner_predictor.serializer = sagemaker.serializers.CSVSerializer() # cell 26 # Predict predictions = predict(test_x.to_numpy(),tuner_predictor) # cell 27 # Collect predictions and convert from the CSV 
output our model provides into a NumPy array
pd.crosstab(index=test_y['y'].values, columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
```

---
## Extensions

This example analyzed a relatively small dataset, but utilized Amazon SageMaker features such as distributed, managed training and real-time model hosting, which could easily be applied to much larger problems. In order to improve predictive accuracy further, we could tweak the value at which we threshold our predictions to alter the mix of false-positives and false-negatives, or we could explore techniques like hyperparameter tuning. In a real-world scenario, we would also spend more time engineering features by hand and would likely look for additional datasets to include which contain customer information not available in our initial dataset.

### (Optional) Clean-up

If you are done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on.

```
# cell 28
xgb_predictor.delete_endpoint(delete_endpoint_config=True)

# cell 29
tuner_predictor.delete_endpoint(delete_endpoint_config=True)
```
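The threshold tweak mentioned in the Extensions section can be sketched with plain NumPy. This is a hedged, standalone illustration: `y_true` and `probs` below are synthetic stand-ins for the notebook's `test_y` labels and `predictions` probabilities, and the candidate thresholds are arbitrary choices, not part of the lab.

```python
import numpy as np

# Synthetic stand-ins for the notebook's test labels and predicted
# probabilities -- swap in the real arrays from the evaluation cells.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
probs = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)

# Sweep candidate thresholds and report the false-positive / false-negative mix.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (probs >= threshold).astype(int)
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    print(f"threshold={threshold:.1f}  false_positives={fp}  false_negatives={fn}")
```

Raising the threshold can only shrink the set of positive predictions, so false positives fall and false negatives rise; the "right" trade-off depends on the relative cost of a wasted contact versus a missed subscriber.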
# Redcard Exploratory Data Analysis

This dataset is taken from a fantastic paper that looks at how analytical choices made by different data science teams, each analyzing the same dataset to answer the same research question, affect the final outcome.

[Many analysts, one dataset: Making transparent how variations in analytical choices affect results](https://osf.io/gvm2z/)

The data can be found [here](https://osf.io/47tnc/).

## The Task

Do an Exploratory Data Analysis on the redcard dataset. Keeping in mind the question is the following:

**Are soccer referees more likely to give red cards to dark-skin-toned players than light-skin-toned players?**

```
!conda install -c conda-forge pandas-profiling -y
!pip install missingno

from __future__ import absolute_import, division, print_function
%matplotlib inline
%config InlineBackend.figure_format='retina'

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.pyplot import GridSpec
import seaborn as sns
import numpy as np
import pandas as pd
import os, sys
from tqdm import tqdm
import warnings
warnings.filterwarnings('ignore')
sns.set_context("poster", font_scale=1.3)

import missingno as msno
import pandas_profiling

from sklearn.datasets import make_blobs
import time
```

## About the Data

> The dataset is available as a list with 146,028 dyads of players and referees and includes details from players, details from referees and details regarding the interactions of player-referees. A summary of the variables of interest can be seen below. A detailed description of all variables included can be seen in the README file on the project website.

> From a company for sports statistics, we obtained data and profile photos from all soccer players (N = 2,053) playing in the first male divisions of England, Germany, France and Spain in the 2012-2013 season and all referees (N = 3,147) that these players played under in their professional career (see Figure 1). 
We created a dataset of player–referee dyads including the number of matches players and referees encountered each other and our dependent variable, the number of red cards given to a player by a particular referee throughout all matches the two encountered each other.

> -- https://docs.google.com/document/d/1uCF5wmbcL90qvrk_J27fWAvDcDNrO9o_APkicwRkOKc/edit

| Variable Name: | Variable Description: |
| -- | -- |
| playerShort | short player ID |
| player | player name |
| club | player club |
| leagueCountry | country of player club (England, Germany, France, and Spain) |
| height | player height (in cm) |
| weight | player weight (in kg) |
| position | player position |
| games | number of games in the player-referee dyad |
| goals | number of goals in the player-referee dyad |
| yellowCards | number of yellow cards player received from the referee |
| yellowReds | number of yellow-red cards player received from the referee |
| redCards | number of red cards player received from the referee |
| photoID | ID of player photo (if available) |
| rater1 | skin rating of photo by rater 1 |
| rater2 | skin rating of photo by rater 2 |
| refNum | unique referee ID number (referee name removed for anonymizing purposes) |
| refCountry | unique referee country ID number |
| meanIAT | mean implicit bias score (using the race IAT) for referee country |
| nIAT | sample size for race IAT in that particular country |
| seIAT | standard error for mean estimate of race IAT |
| meanExp | mean explicit bias score (using a racial thermometer task) for referee country |
| nExp | sample size for explicit bias in that particular country |
| seExp | standard error for mean estimate of explicit bias measure |

## What the teams found

### Choices in model features

The following are the covariates chosen for the respective models:

<img src="figures/covariates.png" width=80%;>

### Choices in modeling

Of the many choices made by the teams, here is a small selection of the models used to answer 
this question: <img src="figures/models.png" width=80%;> ## Final Results - 0 teams: negative effect - 9 teams: no significant relationship - 20 teams: finding a positive effect <img src="figures/results.png" width=80%;> Above image from: http://fivethirtyeight.com/features/science-isnt-broken/#part2 > …selecting randomly from the present teams, there would have been a 69% probability of reporting a positive result and a 31% probability of reporting a null effect. This raises the possibility that many research projects contain hidden uncertainty due to the wide range of analytic choices available to the researchers. -- Silberzahn, R., Uhlmann, E. L., Martin, D. P., Pasquale, Aust, F., Awtrey, E. C., … Nosek, B. A. (2015, August 20). Many analysts, one dataset: Making transparent how variations in analytical choices affect results. Retrieved from osf.io/gvm2z Images and data from: Silberzahn, R., Uhlmann, E. L., Martin, D. P., Pasquale, Aust, F., Awtrey, E. C., … Nosek, B. A. (2015, August 20). Many analysts, one dataset: Making transparent how variations in analytical choices affect results. Retrieved from osf.io/gvm2z ## General tips - Before plotting/joining/doing something, have a question or hypothesis that you want to investigate - Draw a plot of what you want to see on paper to sketch the idea - Write it down, then make the plan on how to get there - How do you know you aren't fooling yourself - What else can I check if this is actually true? - What evidence could there be that it's wrong? 
```
# Uncomment one of the following lines and run the cell:
df = pd.read_csv("/home/msr/git/pycon-2017-eda-tutorial/data/redcard/redcard.csv.gz", compression='gzip')
# df = pd.read_csv("https://github.com/cmawer/pycon-2017-eda-tutorial/raw/master/data/redcard/redcard.csv.gz",
#                  compression='gzip')

df.shape
df.head()
df.describe().T
df.dtypes
all_columns = df.columns.tolist()
all_columns
```

# Challenge

Before looking below, try to answer some high level questions about the dataset.

How do we operationalize the question of referees giving more red cards to dark skinned players?

* Counterfactual: if the player were lighter, a ref is more likely to have given a yellow or no card **for the same offense under the same conditions**
* Regression: accounting for confounding, darker players have positive coefficient on regression against proportion red/total card

Potential issues

* How to combine rater1 and rater2? Average them? What if they disagree? Throw it out?
* Is data imbalanced, i.e. red cards are very rare?
* Is data biased, i.e. players have different amounts of play time? Is this a summary of their whole career?
* How do I know I've accounted for all forms of confounding?

**First, is there systematic discrimination across all refs?**

Exploration/hypotheses:

* Distribution of games played
* red cards vs games played
* Reds per game played vs total cards per game played by skin color
* Distribution of # red, # yellow, total cards, and fraction red per game played for all players by avg skin color
* How many refs did players encounter?
* Do some clubs play more aggressively and get carded more? Or are more reserved and get fewer?
* Does carding vary by leagueCountry?
* Do high scorers get more slack (fewer cards) for the same position?
* Are there some referees that give more red/yellow cards than others?
* how consistent are raters? Check with Cohen's kappa.
* how do red cards vary by position? e.g. defenders get more? 
* Do players with more games get more cards, and is there difference across skin color?
* indication of bias depending on refCountry?

## Understand how the data's organized

The dataset is a single csv where it aggregated every interaction between referee and player into a single row. In other words: Referee A refereed Player B in, say, 10 games, and gave 2 redcards during those 10 games. Then there would be a unique row in the dataset that said:

Referee A, Player B, 2 redcards, ...

This has several implications that make this first step to understanding and dealing with this data a bit tricky. First is that the information about Player B is repeated each time -- meaning if we did a simple average of some metric, we would likely get a misleading result.

For example, asking "what is the average `weight` of the players?"

```
df.weight.mean()
df['weight'].mean()
np.mean(df.groupby('playerShort').weight.mean())
```

Doing a simple average over the rows will risk double-counting the same player multiple times, for a skewed average. The simple (incorrect) average is ~76.075 kg, but the average weight of the players is ~75.639 kg. There are multiple ways of doing this, but doing a groupby on player makes it so that each player gets counted exactly once. Not a huge difference in this case but already an illustration of some difficulty.

## Tidy Data

Hadley Wickham's concept of a **tidy dataset** summarized as:

> - Each variable forms a column
> - Each observation forms a row
> - Each type of observational unit forms a table

A longer paper describing this can be found in this [pdf](https://www.jstatsoft.org/article/view/v059i10/v59i10.pdf).

Having datasets in this form allows for much simpler analyses. So the first step is to try and clean up the dataset into a tidy dataset.

The first step that I am going to take is to break up the dataset into the different observational units. 
By that I'm going to have separate tables (or dataframes) for: - players - clubs - referees - countries - dyads ## Create Tidy Players Table ``` player_index = 'playerShort' player_cols = [#'player', # drop player name, we have unique identifier 'birthday', 'height', 'weight', 'position', 'photoID', 'rater1', 'rater2', ] # Count the unique variables (if we got different weight values, # for example, then we should get more than one unique value in this groupby) all_cols_unique_players = df.groupby('playerShort').agg({col:'nunique' for col in player_cols}) all_cols_unique_players.head() # If all values are the same per player then this should be empty (and it is!) all_cols_unique_players[all_cols_unique_players > 1].dropna().head() # A slightly more elegant way to test the uniqueness all_cols_unique_players[all_cols_unique_players > 1].dropna().shape[0] == 0 ``` Hooray, our data passed our sanity check. Let's create a function to create a table and run this check for each table that we create. ``` def get_subgroup(dataframe, g_index, g_columns): """Helper function that creates a sub-table from the columns and runs a quick uniqueness test.""" g = dataframe.groupby(g_index).agg({col:'nunique' for col in g_columns}) if g[g > 1].dropna().shape[0] != 0: print("Warning: you probably assumed this had all unique values but it doesn't.") return dataframe.groupby(g_index).agg({col:'max' for col in g_columns}) players = get_subgroup(df, player_index, player_cols) players.head() def save_subgroup(dataframe, g_index, subgroup_name, prefix='raw_'): save_subgroup_filename = "".join([prefix, subgroup_name, ".csv.gz"]) dataframe.to_csv(save_subgroup_filename, compression='gzip', encoding='UTF-8') test_df = pd.read_csv(save_subgroup_filename, compression='gzip', index_col=g_index, encoding='UTF-8') # Test that we recover what we send in if dataframe.equals(test_df): print("Test-passed: we recover the equivalent subgroup dataframe.") else: print("Warning -- equivalence test!!! 
Double-check.") save_subgroup(players, player_index, "players") ``` ## Create Tidy Clubs Table Create the clubs table. ``` club_index = 'club' club_cols = ['leagueCountry'] clubs = get_subgroup(df, club_index, club_cols) clubs.head() clubs['leagueCountry'].value_counts() save_subgroup(clubs, club_index, "clubs", ) ``` ## Create Tidy Referees Table ``` referee_index = 'refNum' referee_cols = ['refCountry'] referees = get_subgroup(df, referee_index, referee_cols) referees.head() referees.refCountry.nunique() referees.tail() referees.shape save_subgroup(referees, referee_index, "referees") ``` ## Create Tidy Countries Table ``` country_index = 'refCountry' country_cols = ['Alpha_3', # rename this name of country 'meanIAT', 'nIAT', 'seIAT', 'meanExp', 'nExp', 'seExp', ] countries = get_subgroup(df, country_index, country_cols) countries.head() rename_columns = {'Alpha_3':'countryName', } countries = countries.rename(columns=rename_columns) countries.head() countries.shape save_subgroup(countries, country_index, "countries") # Ok testing this out: test_df = pd.read_csv("raw_countries.csv.gz", compression='gzip', index_col=country_index) for (_, row1), (_, row2) in zip(test_df.iterrows(), countries.iterrows()): if not row1.equals(row2): print(row1) print() print(row2) print() break row1.eq(row2) row1.seIAT - row2.seIAT countries.dtypes test_df.dtypes countries.head() test_df.head() ``` Looks like precision error, so I'm not concerned. All other sanity checks pass. ``` countries.tail() test_df.tail() ``` ## Create separate (not yet Tidy) Dyads Table This is one of the more complex tables to reason about -- so we'll save it for a bit later. 
``` dyad_index = ['refNum', 'playerShort'] dyad_cols = ['games', 'victories', 'ties', 'defeats', 'goals', 'yellowCards', 'yellowReds', 'redCards', ] dyads = get_subgroup(df, g_index=dyad_index, g_columns=dyad_cols) dyads.head(10) dyads.shape dyads[dyads.redCards > 1].head(10) save_subgroup(dyads, dyad_index, "dyads") dyads.redCards.max() ```
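One of the exploration bullets above -- reds per game played by skin color -- can be sketched with a groupby once the tables are built. This is a hedged illustration on a tiny synthetic stand-in frame (not the real merged `players`/`dyads` tables), and averaging `rater1`/`rater2` is just one of the aggregation choices flagged in the "Potential issues" list.

```python
import pandas as pd

# Synthetic stand-in for a merged player/dyad table -- swap in the real
# tables built above. Each row is one player-referee dyad.
df = pd.DataFrame({
    'playerShort': ['a', 'a', 'b', 'b', 'c'],
    'rater1':      [0.25, 0.25, 0.75, 0.75, 1.0],
    'rater2':      [0.25, 0.25, 1.00, 1.00, 1.0],
    'games':       [10, 5, 8, 2, 4],
    'redCards':    [0, 1, 1, 0, 1],
})

# One possible choice (of several): average the two raters.
df['skin'] = df[['rater1', 'rater2']].mean(axis=1)

# Aggregate to one row per player before computing rates, so players
# appearing in many dyads are not double-counted.
per_player = df.groupby('playerShort').agg(
    skin=('skin', 'first'),
    games=('games', 'sum'),
    redCards=('redCards', 'sum'),
)
per_player['reds_per_game'] = per_player['redCards'] / per_player['games']
print(per_player)
```

Summing `games` and `redCards` before dividing avoids the same double-counting pitfall shown with the weight average earlier.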
# Machine Learning

## Types of learning

- Whether or not they are trained with human supervision (supervised, unsupervised, semisupervised, and Reinforcement Learning)
- Whether or not they can learn incrementally on the fly (online versus batch learning)
- Whether they work by simply comparing new data points to known data points, or instead detect patterns in the training data and build a predictive model, much like scientists do (instance-based versus model-based learning)

# Types Of Machine Learning Systems

## Supervised VS Unsupervised learning

### Supervised Learning

- In supervised learning, the training data you feed to the algorithm includes the desired solutions, called labels
- k-Nearest Neighbors
- Linear Regression
- Logistic Regression
- Support Vector Machines (SVMs)
- Decision Trees and Random Forests
- Neural networks

### Unsupervised Learning

- In unsupervised learning, as you might guess, the training data is unlabeled. The system tries to learn without a teacher
- Clustering
  - k-Means
  - Hierarchical Cluster Analysis (HCA)
  - Expectation Maximization
  - DBSCAN
- Visualization and dimensionality reduction
  - Principal Component Analysis (PCA)
  - Kernel PCA
  - Locally-Linear Embedding (LLE)
  - t-distributed Stochastic Neighbor Embedding (t-SNE)
- Association rule learning
  - Apriori
  - Eclat
- Anomaly detection and novelty detection
  - One-Class SVM
  - Isolation Forest

## Semisupervised Learning

- Some algorithms can deal with partially labeled training data, usually a lot of unlabeled data and a little bit of labeled data. This is called semisupervised learning
- Most semisupervised learning algorithms are combinations of unsupervised and supervised algorithms. For example, deep belief networks (DBNs) are based on unsupervised components called restricted Boltzmann machines (RBMs) stacked on top of one another. RBMs are trained sequentially in an unsupervised manner, and then the whole system is fine-tuned using supervised learning techniques. 
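The supervised/unsupervised split above is easy to see in code: the same points, fit once *with* labels and once *without*. A minimal sketch using scikit-learn -- the toy blobs, the two classifier choices, and `n_clusters=2` are all illustrative assumptions, not prescribed by these notes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Two small, well-separated blobs of 2-D points.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [3.0, 3.0], [3.2, 2.9], [2.9, 3.1]])
y = np.array([0, 0, 0, 1, 1, 1])  # labels: the "desired solutions"

# Supervised: the algorithm is given X *and* the labels y.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.1, 0.1], [3.1, 3.0]]))  # one point near each blob

# Unsupervised: the algorithm sees only X and must find structure itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```

Note that k-Means recovers the two groups but has no notion of which is "class 0"; its cluster IDs are arbitrary, which is exactly the teacher-free character of unsupervised learning.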
## Reinforcement Learning

- Reinforcement Learning is a very different beast. The learning system, called an agent in this context, can observe the environment, select and perform actions, and get rewards in return (or penalties in the form of negative rewards). It must then learn by itself what is the best strategy, called a policy, to get the most reward over time. A policy defines what action the agent should choose when it is in a given situation.

## Batch And Online Learning

### Batch Learning

- In batch learning, the system is incapable of learning incrementally: it must be trained using all the available data. This will generally take a lot of time and computing resources, so it is typically done offline. First the system is trained, and then it is launched into production and runs without learning anymore; it just applies what it has learned. This is called offline learning.
- If you want a batch learning system to know about new data (such as a new type of spam), you need to train a new version of the system from scratch on the full dataset (not just the new data, but also the old data), then stop the old system and replace it with the new one.

### Online Learning

- In online learning, you train the system incrementally by feeding it data instances sequentially, either individually or in small groups called mini-batches. Each learning step is fast and cheap, so the system can learn about new data on the fly, as it arrives
- Online learning is great for systems that receive data as a continuous flow (e.g., stock prices) and need to adapt to change rapidly or autonomously. It is also a good option if you have limited computing resources: once an online learning system has learned about new data instances, it does not need them anymore, so you can discard them (unless you want to be able to roll back to a previous state and “replay” the data). This can save a huge amount of space. 
- Online learning algorithms can also be used to train systems on huge datasets that cannot fit in one machine’s main memory (this is called out-of-core learning). The algorithm loads part of the data, runs a training step on that data, and repeats the process until it has run on all of the data
- This whole process is usually done offline (i.e., not on the live system), so online learning can be a confusing name. Think of it as incremental learning
- Importance of Learning Rate in Online Learning

## Instance-Based Vs Model-Based Learning

### Instance-Based Learning

- the system learns the examples by heart, then generalizes to new cases using a similarity measure

### Model-Based Learning

- Another way to generalize from a set of examples is to build a model of these examples, then use that model to make predictions. This is called model-based learning

# Challenges of Machine Learning

- Insufficient Quantity of Training Data
- Nonrepresentative Training Data
- Poor-Quality Data
- Irrelevant Features
- Overfitting the Training Data
- Underfitting the Training Data

The most common supervised learning tasks are classification (predicting classes) and regression (predicting values).
<img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left"> ## _*Superposition*_ The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial. *** ### Contributors Jay Gambetta, Antonio Córcoles, Andrew Cross, Anna Phan ### Qiskit Package Versions ``` import qiskit qiskit.__qiskit_version__ ``` ## Introduction Many people tend to think quantum physics is hard math, but this is not actually true. Quantum concepts are very similar to those seen in the linear algebra classes you may have taken as a freshman in college, or even in high school. The challenge of quantum physics is the necessity to accept counter-intuitive ideas, and its lack of a simple underlying theory. We believe that if you can grasp the following two Principles, you will have a good start: 1. A physical system in a definite state can still behave randomly. 2. Two systems that are too far apart to influence each other can nevertheless behave in ways that, though individually random, are somehow strongly correlated. In this tutorial, we will be discussing the first of these Principles, the second is discussed in [this other tutorial](entanglement_introduction.ipynb). 
``` # useful additional packages import matplotlib.pyplot as plt %matplotlib inline import numpy as np # importing Qiskit from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute from qiskit import BasicAer, IBMQ # import basic plot tools from qiskit.tools.visualization import plot_histogram backend = BasicAer.get_backend('qasm_simulator') # run on local simulator by default # Uncomment the following lines to run on a real device #IBMQ.load_accounts() #from qiskit.providers.ibmq import least_busy #backend = least_busy(IBMQ.backends(operational=True, simulator=False)) #print("the best backend is " + backend.name()) ``` ## Quantum States - Basis States and Superpositions<a id='section1'></a> The first Principle above tells us that the results of measuring a quantum state may be random or deterministic, depending on what basis is used. To demonstrate, we will first introduce the computational (or standard) basis for a qubit. The computational basis is the set containing the ground and excited state $\{|0\rangle,|1\rangle\}$, which also corresponds to the following vectors: $$|0\rangle =\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$ $$|1\rangle =\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$ In Python these are represented by ``` zero = np.array([[1],[0]]) one = np.array([[0],[1]]) ``` In our quantum processor system (and many other physical quantum processors) it is natural for all qubits to start in the $|0\rangle$ state, known as the ground state. 
To make the $|1\rangle$ (or excited) state, we use the operator $$ X =\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$ This $X$ operator is often called a bit-flip because it exactly implements the following: $$X: |0\rangle \rightarrow |1\rangle$$ $$X: |1\rangle \rightarrow |0\rangle.$$ In Python this can be represented by the following: ``` X = np.array([[0,1],[1,0]]) print(np.dot(X,zero)) print(np.dot(X,one)) ``` Next, we give the two quantum circuits for preparing and measuring a single qubit in the ground and excited states using Qiskit. ``` # Creating registers qr = QuantumRegister(1) cr = ClassicalRegister(1) # Quantum circuit ground qc_ground = QuantumCircuit(qr, cr) qc_ground.measure(qr[0], cr[0]) # Quantum circuit excited qc_excited = QuantumCircuit(qr, cr) qc_excited.x(qr) qc_excited.measure(qr[0], cr[0]) qc_ground.draw(output='mpl') qc_excited.draw(output='mpl') ``` Here we have created two jobs with different quantum circuits; the first to prepare the ground state, and the second to prepare the excited state. Now we can run the prepared jobs. ``` circuits = [qc_ground, qc_excited] job = execute(circuits, backend) result = job.result() ``` After the run has been completed, the data can be extracted from the API output and plotted. ``` plot_histogram(result.get_counts(qc_ground)) plot_histogram(result.get_counts(qc_excited)) ``` Here we see that the qubit is in the $|0\rangle$ state with 100% probability for the first circuit and in the $|1\rangle$ state with 100% probability for the second circuit. If we had run on a quantum processor rather than the simulator, there would be a difference from the ideal perfect answer due to a combination of measurement error, preparation error, and gate error (for the $|1\rangle$ state). Up to this point, nothing is different from a classical system of a bit. To go beyond, we must explore what it means to make a superposition. 
The operation in the quantum circuit language for generating a superposition is the Hadamard gate, $H$. Let's assume for now that this gate is like flipping a fair coin. The result of a flip has two possible outcomes, heads or tails, each occurring with equal probability. If we repeat this simple thought experiment many times, we would expect that on average we will measure as many heads as we do tails. Let heads be $|0\rangle$ and tails be $|1\rangle$. Let's run the quantum version of this experiment. First we prepare the qubit in the ground state $|0\rangle$. We then apply the Hadamard gate (coin flip). Finally, we measure the state of the qubit. Repeat the experiment 1024 times (shots). As you likely predicted, half the outcomes will be in the $|0\rangle$ state and half will be in the $|1\rangle$ state. Try the program below. ``` # Quantum circuit superposition qc_superposition = QuantumCircuit(qr, cr) qc_superposition.h(qr) qc_superposition.measure(qr[0], cr[0]) qc_superposition.draw() job = execute(qc_superposition, backend, shots = 1024) result = job.result() plot_histogram(result.get_counts(qc_superposition)) ``` Indeed, much like a coin flip, the results are close to 50/50 with some non-ideality due to errors (again due to state preparation, measurement, and gate errors). So far, this is still not unexpected. Let's run the experiment again, but this time with two $H$ gates in succession. If we consider the $H$ gate to be analog to a coin flip, here we would be flipping it twice, and still expecting a 50/50 distribution. ``` # Quantum circuit two Hadamards qc_twohadamard = QuantumCircuit(qr, cr) qc_twohadamard.h(qr) qc_twohadamard.barrier() qc_twohadamard.h(qr) qc_twohadamard.measure(qr[0], cr[0]) qc_twohadamard.draw(output='mpl') job = execute(qc_twohadamard, backend) result = job.result() plot_histogram(result.get_counts(qc_twohadamard)) ``` This time, the results are surprising. 
Unlike the classical case, with high probability the outcome is not random, but in the $|0\rangle$ state. *Quantum randomness* is not simply like a classical random coin flip. In both of the above experiments, the system (without noise) is in a definite state, but only in the first case does it behave randomly. This is because, in the first case, via the $H$ gate, we make a uniform superposition of the ground and excited state, $(|0\rangle+|1\rangle)/\sqrt{2}$, but then follow it with a measurement in the computational basis. The act of measurement in the computational basis forces the system to be in either the $|0\rangle$ state or the $|1\rangle$ state with an equal probability (due to the uniformity of the superposition). In the second case, we can think of the second $H$ gate as being a part of the final measurement operation; it changes the measurement basis from the computational basis to a *superposition* basis. The following equations illustrate the action of the $H$ gate on the computational basis states: $$H: |0\rangle \rightarrow |+\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}$$ $$H: |1\rangle \rightarrow |-\rangle=\frac{|0\rangle-|1\rangle}{\sqrt{2}}.$$ We can redefine this new transformed basis, the superposition basis, as the set {$|+\rangle$, $|-\rangle$}. We now have a different way of looking at the second experiment above. The first $H$ gate prepares the system into a superposition state, namely the $|+\rangle$ state. The second $H$ gate followed by the standard measurement changes it into a measurement in the superposition basis. If the measurement gives 0, we can conclude that the system was in the $|+\rangle$ state before the second $H$ gate, and if we obtain 1, it means the system was in the $|-\rangle$ state. In the above experiment we see that the outcome is mainly 0, suggesting that our system was in the $|+\rangle$ superposition state before the second $H$ gate. 
The math is best understood if we represent the quantum superposition state $|+\rangle$ and $|-\rangle$ by: $$|+\rangle =\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$$ $$|-\rangle =\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$ A standard measurement, known in quantum mechanics as a projective or von Neumann measurement, takes any superposition state of the qubit and projects it to either the state $|0\rangle$ or the state $|1\rangle$ with a probability determined by: $$P(i|\psi) = |\langle i|\psi\rangle|^2$$ where $P(i|\psi)$ is the probability of measuring the system in state $i$ given preparation $\psi$. We have written the Python function ```state_overlap``` to return this: ``` state_overlap = lambda state1, state2: np.absolute(np.dot(state1.conj().T,state2))**2 ``` Now that we have a simple way of going from a state to the probability distribution of a standard measurement, we can go back to the case of a superposition made from the Hadamard gate. The Hadamard gate is defined by the matrix: $$ H =\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$ The $H$ gate acting on the state $|0\rangle$ gives: ``` Hadamard = np.array([[1,1],[1,-1]],dtype=complex)/np.sqrt(2) psi1 = np.dot(Hadamard,zero) P0 = state_overlap(zero,psi1) P1 = state_overlap(one,psi1) plot_histogram({'0' : P0.item(0), '1' : P1.item(0)}) ``` which is the ideal version of the first superposition experiment. The second experiment involves applying the Hadamard gate twice. While matrix multiplication shows that the product of two Hadamards is the identity operator (meaning that the state $|0\rangle$ remains unchanged), here (as previously mentioned) we prefer to interpret this as doing a measurement in the superposition basis. Using the above definitions, you can show that $H$ transforms the computational basis to the superposition basis. 
``` print(np.dot(Hadamard,zero)) print(np.dot(Hadamard,one)) ``` This is just the beginning of how a quantum state differs from a classical state. Please continue to [Amplitude and Phase](amplitude_and_phase.ipynb) to explore further!
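As a final numerical check (an aside, not part of the original notebook, reusing the `zero`, `one`, `Hadamard`, and `state_overlap` definitions from above): two Hadamards compose to the identity, so the ideal two-Hadamard experiment returns outcome 0 with probability 1.

```python
import numpy as np

zero = np.array([[1], [0]], dtype=complex)
one = np.array([[0], [1]], dtype=complex)
Hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state_overlap = lambda state1, state2: np.absolute(np.dot(state1.conj().T, state2))**2

# Two Hadamards compose to the identity (up to floating-point error) ...
assert np.allclose(Hadamard @ Hadamard, np.eye(2))

# ... so |0> -> H -> H leaves the state at |0>, and a computational-basis
# measurement gives 0 with probability ~1 and 1 with probability ~0.
psi2 = Hadamard @ (Hadamard @ zero)
print("P(0) =", state_overlap(zero, psi2).item())
print("P(1) =", state_overlap(one, psi2).item())
```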
``` %matplotlib inline import numpy as np import pandas as pd from scipy import signal, ndimage, interpolate, stats from scipy.interpolate import CubicSpline from itertools import combinations import matplotlib.pyplot as plt import matplotlib.ticker as ticker from matplotlib.ticker import FormatStrFormatter from matplotlib.offsetbox import AnchoredText import statsmodels.api as sm from sklearn.model_selection import train_test_split import statsmodels.formula.api as smf import seaborn as sns font = {'family' : 'sans-serif', 'size' : 20} plt.rc('font', **font) plt.rc('text',usetex=False) from pathlib import Path import os,sys import h5py, json import sys import pickle as pkl import time import nept sys.path.append('../PreProcessing/') sys.path.append('../TrackingAnalyses/') sys.path.append('../Lib/') sys.path.append('../Analyses/') from filters_ag import * from importlib import reload # Python 3.4+ only. import pre_process_neuralynx as PPN import TreeMazeFunctions as TMF import spike_functions as SF import spatial_tuning as ST import stats_functions as StatsF import plot_functions as PF import zone_analyses_session as ZA animal = 'Li' task = 'T3g' date = '062718' session = animal+'_'+task+'_'+date sessionPaths = ZA.getSessionPaths(oakPaths,session) PosDat = TMF.getBehTrackData(sessionPaths,0) cell_FR, mua_FR = SF.getSessionFR(sessionPaths) import numpy as np from scipy.sparse.linalg import svds from functools import partial def emsvd(Y, k=None, tol=1E-3, maxiter=None): """ Approximate SVD on data with missing values via expectation-maximization Inputs: ----------- Y: (nobs, ndim) data matrix, missing values denoted by NaN/Inf k: number of singular values/vectors to find (default: k=ndim) tol: convergence tolerance on change in trace norm maxiter: maximum number of EM steps to perform (default: no limit) Returns: ----------- Y_hat: (nobs, ndim) reconstructed data matrix mu_hat: (ndim,) estimated column means for reconstructed data U, s, Vt: singular values and 
vectors (see np.linalg.svd and scipy.sparse.linalg.svds for details) """ if k is None: svdmethod = partial(np.linalg.svd, full_matrices=False) else: svdmethod = partial(svds, k=k) if maxiter is None: maxiter = np.inf # initialize the missing values to their respective column means mu_hat = np.nanmean(Y, axis=0, keepdims=1) valid = np.isfinite(Y) Y_hat = np.where(valid, Y, mu_hat) halt = False ii = 1 v_prev = 0 while not halt: # SVD on filled-in data U, s, Vt = svdmethod(Y_hat - mu_hat) # impute missing values Y_hat[~valid] = (U.dot(np.diag(s)).dot(Vt) + mu_hat)[~valid] # update bias parameter mu_hat = Y_hat.mean(axis=0, keepdims=1) # test convergence using relative change in trace norm v = s.sum() if ii >= maxiter or ((v - v_prev) / v_prev) < tol: halt = True ii += 1 v_prev = v return Y_hat, mu_hat, U, s, Vt def getPosSequence(PosZones,startID,endID): nSamps = len(PosZones) pos = [] samp = [] pos.append(PosZones[0]) samp.append(0) for p in np.arange(nSamps-1): p0 = PosZones[p] p1 = PosZones[p+1] if p0!=p1: pos.append(p1) samp.append(p+1) pos = np.array(pos) samp = np.array(samp) + startID nPos = len(pos) dur = np.zeros(nPos,dtype=int) for p in np.arange(nPos-1): dur[p] = samp[p+1]-samp[p] dur[-1] = endID-samp[-1] return pos, samp, dur def cmp(a,b): return (a>b)-(a<b) def getTrials(dat,**kwargs): nTr = dat.shape[0] trials = set(np.arange(1,nTr+1)) try: for k,v in kwargs.items(): trials = trials & set(np.where(dat[k]==v)[0]+1) except: print('Invalid Selection {} {}'.format(k,v)) pass return np.sort(np.array(list(trials))) def zscore(x,mu,sig): return (x-mu)/sig def getFR_TrZone(TrialInfo, FRMat): nCells = FRMat.shape[0] TrZnFR = {} # FR for every zone visited in that trial OTrZnFR = {} # FR for every zone visited in that trial for tr in TrialInfo['All']['Trials']: nPos = len(TrialInfo['TrSeq']['Pos'][tr]) trSpPos = np.zeros((nCells,nPos)) for p in np.arange(nPos): s=TrialInfo['TrSeq']['Samp'][tr][p] d=TrialInfo['TrSeq']['Dur'][tr][p] samps = np.arange(s,s+d) for cell 
in np.arange(nCells): trSpPos[cell, p]=np.mean(FRMat[cell][samps]) nPos = len(TrialInfo['OffTrSeq']['Pos'][tr]) otrSpPos = np.zeros((nCells,nPos)) for p in np.arange(nPos): s=TrialInfo['OffTrSeq']['Samp'][tr][p] d=TrialInfo['OffTrSeq']['Dur'][tr][p] samps = np.arange(s,s+d) for cell in np.arange(nCells): otrSpPos[cell, p]=np.mean(FRMat[cell][samps]) TrZnFR[tr] = trSpPos OTrZnFR[tr] = otrSpPos return TrZnFR, OTrZnFR def AICc(model): n = model.nobs llf = model.llf k = len(model.params) AIC = 2*(k-llf) c = 2*k*(k+1)/(n-k-1) return AIC+c def R2(x,y): return (np.corrcoef(x,y)**2)[0,1] def aR2(model,y,fit=[]): if fit==[]: fit = model.fittedvalues r2 = R2(fit,y) n = model.nobs p = len(model.params)-1 aR2 = 1-(1-r2)*(n-1)/(n-p-1) return aR2 def getParamSet(): ''' Returns a dictionary of parameter sets for modeling. ''' params = ['Loc','IO','Cue','Desc','Co'] combs = [] for i in np.arange(1, len(params)+1): combs+= [list(x) for x in combinations(params, i)] param_set = {} cnt=0 for c in combs: param_set[cnt] = c cnt+=1 for c in combs: if ('IO' in c) and ('Loc' in c): param_set[cnt] = ['Loc:IO']+c cnt+=1 return param_set def getModel_testR2(dat,formula='',params=[],mixedlm=True, verbose=False): ''' Obtains the test R2 based on even/odd splits of the data ''' if len(formula)==0 and len(params)>0: formula = getFormula(params) elif len(formula)==0: print('No method of selecting parameters provided.') return np.nan print('\nComputing mixedlm with formula: {}'.format(formula)) dat_even = dat[dat['EvenTrial']==True] dat_odd = dat[dat['EvenTrial']==False] if mixedlm: md_even = smf.mixedlm(formula, data=dat_even,groups=dat_even["trID"]) else: md_even = smf.ols(formula + '+trID', data=dat_even) mdf_even = md_even.fit() pred_odd = mdf_even.predict(dat_odd) if mixedlm: md_odd = smf.mixedlm(formula, data=dat_odd,groups=dat_odd["trID"]) else: md_odd = smf.ols(formula + '+trID', data=dat_odd) mdf_odd = md_odd.fit() pred_even = mdf_odd.predict(dat_even) if verbose: print('\nPerformance
Train-Even:Test-Odd') print("Train_aR2 = {0:.3f}".format(aR2(mdf_even,dat_even['zFR']))) print("Model_AICc = {0:.3f}".format(AICc(mdf_even))) print("Test_R2 = {0:.3f}".format(R2(pred_odd,dat_odd['zFR']))) print('\nPerformance Train-Odd:Test-Even') print("Train_aR2 = {0:.3f}".format(aR2(mdf_odd,dat_odd['zFR']))) print("Model_AICc = {0:.3f}".format(AICc(mdf_odd))) print("Test_R2 = {0:.3f}".format(R2(pred_even,dat_even['zFR']))) dat['Pred']=np.zeros(dat.shape[0]) dat.loc[dat['EvenTrial']==True,'Pred']=pred_even dat.loc[dat['EvenTrial']==False,'Pred']=pred_odd r2 = R2(dat['zFR'],dat['Pred']) print('\nOverall test R2: {0:.3f}'.format(r2)) return r2 def getFormula(params): formula = 'zFR ~ ' nP = len(params) cnt=1 for i in params: formula += i if cnt<nP: formula +='+' cnt+=1 return formula def getModelPerf(dat,formula='',params=[],mixedlm=True): ''' Obtains the train adjusted R2, and AIC for data. returns aR2, AIC, and the fitted model. ''' if len(formula)==0 and len(params)>0: formula = getFormula(params) elif len(formula)==0: print('No method of selecting parameters provided.') return np.nan,np.nan,[] print('\nComputing mixedlm with formula: {}'.format(formula)) if mixedlm: md = smf.mixedlm(formula, data=dat, groups=dat["trID"]) else: md = smf.ols(formula + '+trID', data=dat) mdf = md.fit() print('\n Model Performance:') train_aR2 = aR2(mdf,dat['zFR']) print("Train_aR2 = {0:.3f}".format(train_aR2)) aic = AICc(mdf) print("Model_AICc = {0:.3f}".format(aic)) return train_aR2, aic, mdf ValidTraj = {'R_S1':['Home','SegA','Center','SegB','I1','SegC','G1'], 'R_S2':['Home','SegA','Center','SegB','I1','SegD','G2'], 'R_L1':['Home','SegA','Center','SegB','I1','SegD','G2','SegD','I1','SegC','G1'], 'R_L2':['Home','SegA','Center','SegB','I1','SegC','G1','SegC','I1','SegD','G2'], 'L_S3':['Home','SegA','Center','SegE','I2','SegF','G3'], 'L_S4':['Home','SegA','Center','SegE','I2','SegG','G4'], 'L_L3':['Home','SegA','Center','SegE','I2','SegG','G4','SegG','I2','SegF','G3'],
'L_L4':['Home','SegA','Center','SegE','I2','SegF','G3','SegF','I2','SegG','G4'], } ValidOffTraj = {} for k,v in ValidTraj.items(): ValidOffTraj[k] = v[::-1] # get trial durations and samples TrialVec = PosDat['EventDat']['TrID'] nTr=TrialVec.max() startIDs = np.zeros(nTr,dtype=int) endIDs = np.zeros(nTr,dtype=int) for tr in np.arange(nTr): trIDs = np.where(TrialVec==(tr+1))[0] startIDs[tr]=trIDs[0] endIDs[tr] = trIDs[-1] TrialDurs = endIDs-startIDs OffTrialDurs=np.concatenate((startIDs[1:],[len(PosDat['t'])]))-endIDs OffTrialVec = np.full_like(TrialVec,0) for tr in np.arange(nTr): idx = np.arange(endIDs[tr],endIDs[tr]+OffTrialDurs[tr]) OffTrialVec[idx]=tr+1 # Pre allocated Trial Info structure. TrialInfo = {'All':{'Trials':[],'Co':[],'InCo':[]},'L':{'Trials':[],'Co':[],'InCo':[]}, 'R':{'Trials':[],'Co':[],'InCo':[]},'BadTr':[],'Cues':np.full(nTr,''),'Desc':np.full(nTr,''), 'DurThr':45,'TrDurs':TrialDurs, 'TrialVec':TrialVec,'TrStSamp':startIDs,'TrEnSamp':endIDs,'TrSeq':{'Pos':{},'Samp':{},'Dur':{}}, 'OffTrStSamp':endIDs,'OffTrEnSamp':endIDs+OffTrialDurs,'OffTrDurs':OffTrialDurs, 'OffTrialVec':OffTrialVec, 'OffTrSeq':{'Pos':{},'Samp':{},'Dur':{}}, 'ValidSeqTrials':[],'ValidSeqOffTrials':[],'ValidSeqTrID':[],'ValidSeqOffTrID':[], 'ValidSeqNames':ValidTraj,'ValidSeqOffNames':ValidOffTraj} TrialInfo['All']['Trials']=np.arange(nTr)+1 #get separate trials and allocate by correct/incorrect for tr in TrialInfo['All']['Trials']: idx= TrialVec==tr for s in ['L','R']: c = PosDat['EventDat']['C'+s][idx] d = PosDat['EventDat'][s+'Ds'][idx] if np.mean(d)>0.5: # descicion TrialInfo['Desc'][tr-1]=s if np.mean(c)>0.5: # cue TrialInfo[s]['Trials'].append(tr) TrialInfo['Cues'][tr-1]=s if np.mean(d&c)>0.5: # correct descicion TrialInfo[s]['Co'].append(tr) else: TrialInfo[s]['InCo'].append(tr) assert set(TrialInfo['R']['Trials']) & set(TrialInfo['L']['Trials']) == set(), 'Trial classified as both left and right.' 
ValidTrajNames = list(ValidTraj.keys())  # fix: ValidTrajNames is used in the sequence-matching loops below but was never defined
assert len(TrialInfo['Cues']) ==len(TrialInfo['Desc']), 'Number of trials mismatch' assert len(TrialInfo['Cues']) ==nTr, 'Number of trials mismatch' for trC in ['Co', 'InCo']: TrialInfo['All'][trC] = np.sort(TrialInfo['L'][trC]+TrialInfo['R'][trC]) for i in ['All','L','R']: for j in ['Trials','Co','InCo']: TrialInfo[i]['n'+j]=len(TrialInfo[i][j]) TrialInfo[i][j]=np.array(TrialInfo[i][j]) # determine if the trials are too long to be included. TrialInfo['BadTr'] = np.where(TrialInfo['TrDurs']*PosDat['step']>TrialInfo['DurThr'])[0] # get positions for each trial for tr in TrialInfo['All']['Trials']: idx = TrialInfo['TrialVec']==tr sID = TrialInfo['TrStSamp'][tr-1] eID = TrialInfo['TrEnSamp'][tr-1] p,s,d=getPosSequence(PosDat['PosZones'][idx],sID,eID) #p,s,d=getPosSequence(p4[idx],sID,eID) TrialInfo['TrSeq']['Pos'][tr]=p TrialInfo['TrSeq']['Samp'][tr]=s TrialInfo['TrSeq']['Dur'][tr]=d idx = TrialInfo['OffTrialVec']==tr sID = TrialInfo['OffTrStSamp'][tr-1] eID = TrialInfo['OffTrEnSamp'][tr-1] p,s,d=getPosSequence(PosDat['PosZones'][idx],sID,eID) TrialInfo['OffTrSeq']['Pos'][tr]=p TrialInfo['OffTrSeq']['Samp'][tr]=s TrialInfo['OffTrSeq']['Dur'][tr]=d # determine if the sequence of positions are valid for each trial TrSeqs = {} vTr = [] OffTrSeqs = {} vOTr = [] for tr in TrialInfo['All']['Trials']: seq = [TMF.Zones[a] for a in TrialInfo['TrSeq']['Pos'][tr]] match = 0 for vSeqN, vSeq in ValidTraj.items(): if cmp(seq,vSeq)==0: match = 1 vTr.append(tr) TrSeqs[tr]=vSeqN break if match==0: TrSeqs[tr]=[] seq = [TMF.Zones[a] for a in TrialInfo['OffTrSeq']['Pos'][tr]] match = 0 for vSeqN, vSeq in ValidOffTraj.items(): if cmp(seq,vSeq)==0: match = 1 vOTr.append(tr) OffTrSeqs[tr]=vSeqN break if match==0: OffTrSeqs[tr]=[] TrialInfo['ValidSeqTrials'] = vTr TrialInfo['ValidSeqOffTrials'] = vOTr TrialInfo['ValidSeqTrID'] = TrSeqs TrialInfo['ValidSeqOffTrID'] = OffTrSeqs conds = ['Cues','Desc','Co','Traj','OTraj','Dur','Good','Length','OLength'] TrCondMat = 
pd.DataFrame(np.full((nTr,len(conds)),np.nan),index=TrialInfo['All']['Trials'],columns=conds) TrCondMat['Cues'] = TrialInfo['Cues'] TrCondMat['Desc'] = TrialInfo['Desc'] TrCondMat['Dur'] = TrialDurs TrCondMat['Co'].loc[TrialInfo['All']['Co']]='Co' TrCondMat['Co'].loc[TrialInfo['All']['InCo']]='InCo' vseq=TrialInfo['ValidSeqTrials'] TrCondMat['Traj'].loc[vseq]=[TrialInfo['ValidSeqTrID'][s] for s in vseq] vseq=TrialInfo['ValidSeqOffTrials'] TrCondMat['OTraj'].loc[vseq]=[TrialInfo['ValidSeqOffTrID'][s] for s in vseq] TrCondMat['Good'] = (~TrCondMat['Traj'].isnull()) & (TrialDurs*PosDat['step']<TrialInfo['DurThr']) x=np.full(nTr,'') for k,v in TrialInfo['ValidSeqTrID'].items(): if len(v)>0: x[k-1]=v[2] TrCondMat['Length']= x x=np.full(nTr,'') for k,v in TrialInfo['ValidSeqOffTrID'].items(): if len(v)>0: x[k-1]=v[2] TrCondMat['OLength']= x # working version of long DF trialxPos matrix. this takes either short or long trajectories in the outbound but only short trajectories inbound nMaxPos = 11 nMinPos = 7 nTr =len(TrialInfo['All']['Trials']) nCells = cell_FR.shape[0] nMua = mua_FR.shape[0] nTotalUnits = nCells+nMua nUnits = {'cell':nCells,'mua':nMua} TrZn={'cell':[],'mua':[]} OTrZn={'cell':[],'mua':[]} TrZn['cell'],OTrZn['cell'] = getFR_TrZone(TrialInfo,cell_FR) TrZn['mua'],OTrZn['mua'] = getFR_TrZone(TrialInfo,mua_FR) cellCols = ['cell_'+str(i) for i in np.arange(nCells)] muaCols = ['mua_'+str(i) for i in np.arange(nMua)] unitCols = {'cell':cellCols,'mua':muaCols} allUnits = cellCols+muaCols mu = {'cell':np.mean(cell_FR,1),'mua':np.mean(mua_FR,1)} sig = {'cell':np.std(cell_FR,1),'mua':np.std(mua_FR,1)} Out=pd.DataFrame(np.full((nTr*nMaxPos,nTotalUnits),np.nan),columns=allUnits) In=pd.DataFrame(np.full((nTr*nMaxPos,nTotalUnits),np.nan),columns=allUnits) O_I=pd.DataFrame(np.full((nTr*nMaxPos,nTotalUnits),np.nan),columns=allUnits) Out = Out.assign(trID = np.tile(TrialInfo['All']['Trials'],nMaxPos)) In = In.assign(trID = np.tile(TrialInfo['All']['Trials'],nMaxPos)) O_I = 
O_I.assign(trID = np.tile(TrialInfo['All']['Trials'],nMaxPos)) Out = Out.assign(Pos = np.repeat(np.arange(nMaxPos),nTr)) In = In.assign(Pos = np.repeat(np.arange(nMaxPos),nTr)) O_I = O_I.assign(Pos = np.repeat(np.arange(nMaxPos),nTr)) Out = Out.assign(IO = ['Out']*(nTr*nMaxPos)) In = In.assign(IO = ['In']*(nTr*nMaxPos)) O_I = O_I.assign(IO = ['O_I']*(nTr*nMaxPos)) for ut in ['cell','mua']: for cell in np.arange(nUnits[ut]): X=pd.DataFrame(np.full((nTr,nMaxPos),np.nan),index=TrialInfo['All']['Trials'],columns=np.arange(nMaxPos)) Y=pd.DataFrame(np.full((nTr,nMaxPos),np.nan),index=TrialInfo['All']['Trials'],columns=np.arange(nMaxPos)) Z=pd.DataFrame(np.full((nTr,nMaxPos),np.nan),index=TrialInfo['All']['Trials'],columns=np.arange(nMaxPos)) m = mu[ut][cell] s = sig[ut][cell] for tr in TrialInfo['All']['Trials']: traj = TrialInfo['ValidSeqTrID'][tr] if traj in ValidTrajNames: if traj[2]=='S': X.loc[tr][0:nMinPos] = zscore(TrZn[ut][tr][cell],m,s) else: X.loc[tr] = zscore(TrZn[ut][tr][cell],m,s) otraj = TrialInfo['ValidSeqOffTrID'][tr] if otraj in ValidTrajNames: if otraj[2]=='S': Y.loc[tr][4:] = zscore(OTrZn[ut][tr][cell],m,s) else: Y.loc[tr] = zscore(OTrZn[ut][tr][cell],m,s) if (traj in ValidTrajNames) and (otraj in ValidTrajNames): if traj==otraj: Z.loc[tr] = X.loc[tr].values-Y.loc[tr][::-1].values elif traj[2]=='L' and otraj[2]=='S': # ambigous interserction position, skipping that computation Z.loc[tr][[0,1,2,3]] = X.loc[tr][[0,1,2,3]].values-Y.loc[tr][[10,9,8,7]].values Z.loc[tr][[5,6]] = X.loc[tr][[9,10]].values-Y.loc[tr][[5,4]].values elif traj[2]=='S' and otraj[2]=='L': Z.loc[tr][[0,1,2,3]] = X.loc[tr][[0,1,2,3]].values-Y.loc[tr][[10,9,8,7]].values Z.loc[tr][[5,6]] = X.loc[tr][[5,6]].values-Y.loc[tr][[1,0]].values Out[unitCols[ut][cell]]=X.melt(value_name='zFR')['zFR'] In[unitCols[ut][cell]]=Y.melt(value_name='zFR')['zFR'] O_I[unitCols[ut][cell]]=Z.melt(value_name='zFR')['zFR'] Data = pd.DataFrame() Data = pd.concat([Data,Out]) Data = pd.concat([Data,In]) Data = 
pd.concat([Data,O_I]) Data = Data.reset_index() Data nMaxPos = 11 nTr =len(TrialInfo['All']['Trials']) Cols = ['trID','Pos','IO','Cue','Desc','Traj','Loc','OTraj', 'Goal','ioMatch','Co','Valid'] nCols = len(Cols) nDatStack = 3 # Out, In, O-I TrialMatInfo = pd.DataFrame(np.full((nTr*nMaxPos*nDatStack,nCols),np.nan),columns=Cols) TrialMatInfo['trID'] = np.tile(np.tile(TrialInfo['All']['Trials'],nMaxPos),nDatStack) TrialMatInfo['Pos'] = np.tile(np.repeat(np.arange(nMaxPos),nTr),nDatStack) TrialMatInfo['IO'] = np.repeat(['Out','In','O_I'],nTr*nMaxPos) TrialMatInfo['Traj'] = np.tile(np.tile(TrCondMat['Traj'],nMaxPos),nDatStack) TrialMatInfo['OTraj'] = np.tile(np.tile(TrCondMat['OTraj'],nMaxPos),nDatStack) TrialMatInfo['Co'] = np.tile(np.tile(TrCondMat['Co'],nMaxPos),nDatStack) TrialMatInfo['Cue'] = np.tile(np.tile(TrCondMat['Cues'],nMaxPos),nDatStack) TrialMatInfo['Desc'] = np.tile(np.tile(TrCondMat['Desc'],nMaxPos),nDatStack) TrialMatInfo['ioMatch'] = [traj==otraj for traj,otraj in zip(TrialMatInfo['Traj'],TrialMatInfo['OTraj'])] TrialMatInfo['Goal'] = [traj[3] if traj==traj else '' for traj in TrialMatInfo['Traj']] TrialMatInfo['Len'] = [traj[2] if traj==traj else '' for traj in TrialMatInfo['Traj']] TrialMatInfo['OLen'] = [traj[2] if traj==traj else '' for traj in TrialMatInfo['OTraj']] # get true location in each trials sequence 'Loc' # note that 'Pos' is a numerical indicator of the order in a sequence outTrSeq = pd.DataFrame(np.full((nTr,nMaxPos),np.nan),index=TrialInfo['All']['Trials']) inTrSeq = pd.DataFrame(np.full((nTr,nMaxPos),np.nan),index=TrialInfo['All']['Trials']) oiTrSeq = pd.DataFrame(np.full((nTr,nMaxPos),np.nan),index=TrialInfo['All']['Trials']) #allTrValid = [] for tr in TrialInfo['All']['Trials']: traj = TrialInfo['ValidSeqTrID'][tr] if traj in ValidTrajNames: seq = TrialInfo['ValidSeqNames'][traj] if len(seq)==nMaxPos: outTrSeq.loc[tr]=seq else: outTrSeq.loc[tr]= seq + [np.nan]*4 else: outTrSeq.loc[tr] = [np.nan]*nMaxPos otraj = 
TrialInfo['ValidSeqOffTrID'][tr] if otraj in ValidTrajNames: oseq = TrialInfo['ValidSeqOffNames'][otraj] if len(oseq)==nMaxPos: inTrSeq.loc[tr]=oseq else: inTrSeq.loc[tr]=[np.nan]*4+oseq else: inTrSeq.loc[tr]=[np.nan]*nMaxPos if (traj in ValidTrajNames) and (otraj in ValidTrajNames): if traj==otraj: if len(seq)==nMaxPos: oiTrSeq.loc[tr]=seq else: oiTrSeq.loc[tr] = seq + [np.nan]*4 elif traj[2]=='L' and otraj[2]=='S': oiTrSeq.loc[tr] = seq[:4]+[np.nan]+seq[9:]+[np.nan]*4 elif traj[2]=='S' and otraj[2]=='L': oiTrSeq.loc[tr] = seq[:4]+[np.nan]+seq[5:]+[np.nan]*4 else: oiTrSeq.loc[tr] = [np.nan]*nMaxPos TrialMatInfo['Loc'] = pd.concat([pd.concat([outTrSeq.melt(value_name='Loc')['Loc'], inTrSeq.melt(value_name='Loc')['Loc']]), oiTrSeq.melt(value_name='Loc')['Loc']]).values TrialMatInfo['Valid'] = ~TrialMatInfo['Loc'].isnull() TrialMatInfo['EvenTrial'] = TrialMatInfo['trID']%2==0 TrialMatInfo traj = 'R_L1' goal = traj[3] nPos = 11 if traj[2]=='S': nOutPos = 7 elif traj[2]=='L': nOutPos = 11 outTrials = getTrials(TrCondMat,Good=True,Traj=traj) nOutTr = len(outTrials) if nOutTr > 1: # Valid outbound trials outLocs = ValidTraj[traj] x = pd.DataFrame(np.full((nOutTr,nPos),np.nan),columns=np.arange(nPos)) cnt = 0 for tr in outTrials: x.iloc[cnt][0:nOutPos]=X.loc[tr][outLocs].values cnt+=1 cTr = TrCondMat.loc[outTrials]['Co'] cues = TrCondMat.loc[outTrials]['Cues'] desc = TrCondMat.loc[outTrials]['Desc'] x = x.melt(var_name='Pos',value_name='zFR') x = x.assign(Traj = [traj]*(nOutTr*nPos) ) x = x.assign(IO = ['Out']*(nOutTr*nPos) ) x = x.assign(Goal = [goal]*(nOutTr*nPos) ) x = x.assign(Cue = np.tile(cues,nPos) ) x = x.assign(Desc = np.tile(desc,nPos) ) x = x.assign(trID = np.tile(outTrials, nPos).astype(int)) x = x.assign(Co = np.tile(cTr,nPos)) # y: subset of outbound trials that are directly 'short' inbound # z: difference in the overlapping positions between outbount/inbound badTrials = getTrials(TrCondMat,Good=True,Traj=traj,OLength='') inTrials = 
np.setdiff1d(outTrials,badTrials) nInTr = len(inTrials) y = pd.DataFrame(np.full((nInTr,nPos),np.nan),columns=np.arange(nPos)) z = pd.DataFrame(np.full((nInTr,nPos),np.nan),columns=np.arange(nPos)) cTr = TrCondMat.loc[inTrials]['Co'] cues = TrCondMat.loc[inTrials]['Cues'] desc = TrCondMat.loc[inTrials]['Desc'] cnt = 0 for tr in inTrials: inLocs = ValidOffTraj[TrCondMat.loc[tr,'OTraj']] if len(inLocs)==nPos: y.iloc[cnt]=Y.loc[tr][inLocs].values z.iloc[cnt]=Z.loc[tr][inLocs].values else: y.iloc[cnt][(nPos-len(inLocs)):]=Y.loc[tr][inLocs].values z.iloc[cnt][(nPos-len(inLocs)):]=Z.loc[tr][inLocs].values cnt+=1 y = y.melt(var_name='Pos',value_name='zFR') y = y.assign(Traj = [traj]*(nInTr*nPos) ) y = y.assign(IO = ['In']*(nInTr*nPos) ) y = y.assign(Goal = [goal]*(nInTr*nPos) ) y = y.assign(Cue = np.tile(cues, nPos)) y = y.assign(Desc = np.tile(desc,nPos) ) y = y.assign(trID = np.tile(inTrials, nPos).astype(int)) y = y.assign(Co = np.tile(cTr,nPos)) cellDat = pd.concat([cellDat,y]) z = z.melt(var_name='Pos',value_name='zFR') z = z.assign(Traj = [traj]*(nInTr*nPos) ) z = z.assign(IO = ['O-I']*(nInTr*nPos) ) z = z.assign(Goal = [goal]*(nInTr*nPos) ) z = z.assign(Cue = np.tile(cues, nPos)) z = z.assign(Desc = np.tile(desc,nPos) ) z = z.assign(trID = np.tile(inTrials, nPos).astype(int)) z = z.assign(Co = np.tile(cTr,nPos)) cellDat = pd.concat([cellDat,z]) cell = 10 cellDat = TrialMatInfo.copy() cellDat['zFR'] = Data[cellCols[cell]] sns.set() sns.set(style="whitegrid",context='notebook',font_scale=1.5,rc={ 'axes.spines.bottom': False, 'axes.spines.left': False, 'axes.spines.right': False, 'axes.spines.top': False, 'axes.edgecolor':'0.5'}) pal = sns.xkcd_palette(['green','purple']) f,ax = plt.subplots(2,3, figsize=(15,6)) w = 0.25 h = 0.43 ratio = 6.5/10.5 hsp = 0.05 vsp = 0.05 W = [w,w*ratio,w*ratio] yPos = [vsp,2*vsp+h] xPos = [hsp,1.5*hsp+W[0],2.5*hsp+W[1]+W[0]] xlims = [[-0.25,10.25],[3.75,10.25],[-0.25,6.25]] for i in [0,1]: for j in np.arange(3): 
ax[i][j].set_position([xPos[j],yPos[i],W[j],h]) ax[i][j].set_xlim(xlims[j]) xPosLabels = {} xPosLabels[0] = ['Home','SegA','Center','SegBE','Int','CDFG','Goals','CDFG','Int','CDFG','Goals'] xPosLabels[2] = ['Home','SegA','Center','SegBE','Int','CDFG','Goals'] xPosLabels[1] = xPosLabels[2][::-1] plotAll = False alpha=0.15 mlw = 1 with sns.color_palette(pal): coSets = ['InCo','Co'] for i in [0,1]: if i==0: leg=False else: leg='brief' if plotAll: subset = (cellDat['IO']=='Out') & (cellDat['Co']==coSets[i]) & (cellDat['Valid']) ax[i][0] = sns.lineplot(x='Pos',y='zFR',hue='Cue',style='Goal',ci=None,data=cellDat[subset], ax=ax[i][0],legend=False,lw=3,hue_order=['L','R'],style_order=['1','2','3','4']) ax[i][0] = sns.lineplot(x='Pos',y='zFR',hue='Desc',estimator=None,units='trID',data=cellDat[subset], ax=ax[i][0],legend=False,lw=mlw,alpha=alpha,hue_order=['L','R']) subset = (cellDat['IO']=='In') & (cellDat['Co']==coSets[i]) & (cellDat['Pos']>=4) & (cellDat['Valid']) ax[i][1] = sns.lineplot(x='Pos',y='zFR',hue='Cue',style='Goal',ci=None,data=cellDat[subset], ax=ax[i][1],legend=False,lw=3,hue_order=['L','R'],style_order=['1','2','3','4']) ax[i][1] = sns.lineplot(x='Pos',y='zFR',hue='Cue',estimator=None,units='trID',data=cellDat[subset], ax=ax[i][1],legend=False,lw=mlw,alpha=alpha,hue_order=['L','R']) subset = (cellDat['IO']=='O_I') & (cellDat['Co']==coSets[i])& (cellDat['Valid']) ax[i][2] = sns.lineplot(x='Pos',y='zFR',hue='Cue',style='Goal',ci=None,data=cellDat[subset], ax=ax[i][2],legend=leg,lw=3,hue_order=['L','R'],style_order=['1','2','3','4']) ax[i][2] = sns.lineplot(x='Pos',y='zFR',hue='Cue',estimator=None,units='trID',data=cellDat[subset], ax=ax[i][2],legend=False,lw=mlw,alpha=alpha,hue_order=['L','R']) else: subset = (cellDat['IO']=='Out') & (cellDat['Co']==coSets[i]) & (cellDat['Valid']) ax[i][0] = sns.lineplot(x='Pos',y='zFR',hue='Cue',style='Goal',data=cellDat[subset], ax=ax[i][0],lw=2,legend=False,hue_order=['L','R'],style_order=['1','2','3','4']) subset = 
(cellDat['IO']=='In') & (cellDat['Co']==coSets[i]) & (cellDat['Pos']>=4) & (cellDat['Valid']) ax[i][1] = sns.lineplot(x='Pos',y='zFR',hue='Cue',style='Goal',data=cellDat[subset], ax=ax[i][1],lw=2,legend=False,hue_order=['L','R'],style_order=['1','2','3','4']) subset = (cellDat['IO']=='O_I') & (cellDat['Co']==coSets[i])& (cellDat['Valid']) ax[i][2] = sns.lineplot(x='Pos',y='zFR',hue='Cue',style='Goal',data=cellDat[subset], ax=ax[i][2],legend=leg,lw=2,hue_order=['L','R'],style_order=['1','2','3','4']) ax[i][1].set_xticks(np.arange(4,nMaxPos)) ax[i][0].set_xticks(np.arange(nMaxPos)) ax[i][2].set_xticks(np.arange(nMinPos)) for j in np.arange(3): ax[i][j].set_xlabel('') ax[i][j].set_ylabel('') ax[i][j].tick_params(axis='x', rotation=60) ax[i][0].set_ylabel('{} zFR'.format(coSets[i])) ax[i][1].set_yticklabels('') if i==0: for j in np.arange(3): ax[i][j].set_xticklabels(xPosLabels[j]) else: ax[i][0].set_title('Out') ax[i][1].set_title('In') ax[i][2].set_title('O-I') for j in np.arange(3): ax[i][j].set_xticklabels('') l =ax[1][2].get_legend() plt.legend(bbox_to_anchor=(1.05, 0), loc=6, borderaxespad=0.,frameon=False) l.set_frame_on(False) # out/in limits lims = np.zeros((4,2)) cnt =0 for i in [0,1]: for j in [0,1]: lims[cnt]=np.array(ax[i][j].get_ylim()) cnt+=1 minY = np.floor(np.min(lims[:,0])*20)/20 maxY = np.ceil(np.max(lims[:,1]*20))/20 for i in [0,1]: for j in [0,1]: ax[i][j].set_ylim([minY,maxY]) # o-i limits lims = np.zeros((2,2)) cnt =0 for i in [0,1]: lims[cnt]=np.array(ax[i][2].get_ylim()) cnt+=1 minY = np.floor(np.min(lims[:,0])*20)/20 maxY = np.ceil(np.max(lims[:,1]*20))/20 for i in [0,1]: ax[i][2].set_ylim([minY,maxY]) f,ax = plt.subplots(1,2, figsize=(10,4)) sns.set(style="whitegrid",font_scale=1.5,rc={ 'axes.spines.bottom': False, 'axes.spines.left': False, 'axes.spines.right': False, 'axes.spines.top': False, 'axes.edgecolor':'0.5'}) pal = sns.xkcd_palette(['spring green','light purple']) subset = cellDat['Co']=='Co' dat =[] dat = 
cellDat[subset].groupby(['trID','IO','Cue','Desc']).mean()
dat = dat.reset_index()

with sns.color_palette(pal):
    ax[0] = sns.violinplot(y='zFR', x='IO', hue='Desc', data=dat, split=True, ax=ax[0],
                           scale='count', inner='quartile', hue_order=['L','R'],
                           saturation=0.5, order=['Out','In','O_I'])
pal = sns.xkcd_palette(['emerald green','medium purple'])
with sns.color_palette(pal):
    ax[0] = sns.swarmplot(y='zFR', x='IO', hue='Desc', data=dat, dodge=True,
                          hue_order=['L','R'], alpha=0.7, ax=ax[0],
                          edgecolor='gray', order=['Out','In','O_I'])
l = ax[0].get_legend()
l.set_visible(False)
ax[0].set_xlabel('Direction')

pal = sns.xkcd_palette(['spring green','light purple'])
subset = cellDat['IO'] == 'Out'
dat = cellDat[subset].groupby(['trID','Cue','Co','Desc']).mean()
dat = dat.reset_index()
with sns.color_palette(pal):
    ax[1] = sns.violinplot(y='zFR', x='Desc', hue='Cue', data=dat, split=True,
                           scale='width', ax=ax[1], inner='quartile',
                           order=['L','R'], hue_order=['L','R'], saturation=0.5)
pal = sns.xkcd_palette(['emerald green','medium purple'])
with sns.color_palette(pal):
    ax[1] = sns.swarmplot(y='zFR', x='Desc', hue='Cue', data=dat, dodge=True,
                          order=['L','R'], ax=ax[1], hue_order=['L','R'],
                          alpha=0.7, edgecolor='gray')
ax[1].set_xlabel('Decision')
ax[1].set_ylabel('')
l = ax[1].get_legend()
handles, labels = ax[1].get_legend_handles_labels()
l.set_visible(False)
plt.legend(handles[2:], labels[2:], bbox_to_anchor=(1.05, 0), loc=3,
           borderaxespad=0., frameon=False, title='Cue')

cell = 10
cellDat = TrialMatInfo.copy()
cellDat['zFR'] = Data[cellCols[cell]]
subset = (cellDat['Valid']) & ~(cellDat['IO']=='O_I')
dat = cellDat[subset]
dat = dat.reset_index()
#dat['Pos'] = dat['Pos'].astype('category')

md = smf.mixedlm("zFR ~ Loc*IO+Co+Cue+Desc", data=dat, groups=dat["trID"])
#md = smf.mixedlm("zFR ~ Loc*IO", data=dat, groups=dat["trID"])
#md = smf.mixedlm("zFR ~ IO", data=dat, groups=dat["trID"])
mdf = md.fit()
dat['Fit'] = mdf.fittedvalues
dat['Res'] = mdf.resid
print(mdf.summary())
print("R2 = {0:.3f}".format((np.corrcoef(dat['Fit'], dat['zFR'])**2)[0,1]))
print(mdf.wald_test_terms())

md = smf.mixedlm("Res ~ Cue+Desc+Co", data=dat, groups=dat["trID"])
mdf = md.fit()
print(mdf.summary())
print("R2 = {0:.3f}".format((np.corrcoef(mdf.fittedvalues, dat['Res'])**2)[0,1]))
print(mdf.wald_test_terms())

f, ax = plt.subplots(1, 2, figsize=(10,4))
sns.set(style="whitegrid", font_scale=1.5, rc={
    'axes.spines.bottom': False,
    'axes.spines.left': False,
    'axes.spines.right': False,
    'axes.spines.top': False,
    'axes.edgecolor': '0.5'})
pal = sns.xkcd_palette(['spring green','light purple'])
y = 'Fit'
subset = dat['Co']=='Co'
dat2 = []
dat2 = dat[subset].groupby(['trID','IO','Cue','Desc']).mean()
dat2 = dat2.reset_index()
with sns.color_palette(pal):
    ax[0] = sns.violinplot(y=y, x='IO', hue='Desc', data=dat2, split=True, ax=ax[0],
                           scale='count', inner='quartile', hue_order=['L','R'],
                           saturation=0.5, order=['Out','In'])
pal = sns.xkcd_palette(['emerald green','medium purple'])
with sns.color_palette(pal):
    ax[0] = sns.swarmplot(y=y, x='IO', hue='Desc', data=dat2, dodge=True,
                          hue_order=['L','R'], alpha=0.7, ax=ax[0],
                          edgecolor='gray', order=['Out','In'])
l = ax[0].get_legend()
l.set_visible(False)
ax[0].set_xlabel('Direction')

pal = sns.xkcd_palette(['spring green','light purple'])
subset = dat['IO']=='Out'
dat2 = []
dat2 = dat[subset].groupby(['trID','Cue','Co','Desc']).mean()
dat2 = dat2.reset_index()
with sns.color_palette(pal):
    ax[1] = sns.violinplot(y=y, x='Desc', hue='Cue', data=dat2, split=True,
                           scale='width', ax=ax[1], inner='quartile',
                           order=['L','R'], hue_order=['L','R'], saturation=0.5)
pal = sns.xkcd_palette(['emerald green','medium purple'])
with sns.color_palette(pal):
    ax[1] = sns.swarmplot(y=y, x='Desc', hue='Cue', data=dat2, dodge=True,
                          order=['L','R'], ax=ax[1], hue_order=['L','R'],
                          alpha=0.7, edgecolor='gray')
ax[1].set_xlabel('Decision')
ax[1].set_ylabel('')
l = ax[1].get_legend()
handles, labels = ax[1].get_legend_handles_labels()
l.set_visible(False)
plt.legend(handles[2:], labels[2:], bbox_to_anchor=(1.05, 0), loc=3,
           borderaxespad=0., frameon=False, title='Cue')

TrialInfo['TrDurs']

cell = 10
cellDat = TrialMatInfo.copy()
cellDat['zFR'] = Data[cellCols[cell]]
dat = []
dat = cellDat[~(cellDat['IO']=='O_I') & (cellDat['Valid'])].copy()
dat['trID'] = dat['trID'].astype('category')
dat = dat.reset_index()
dat_even = dat[dat['EvenTrial']==True]
dat_odd = dat[dat['EvenTrial']==False]

#md1 = smf.mixedlm("zFR ~ Loc*IO+Cue+Desc+Co", data=dat1, groups=dat1["trID"])
#md_even = sm.OLS.from_formula("zFR ~ Loc:IO+Loc+IO+Cue+Desc+Co+trID", data=dat_even)
md_even = smf.mixedlm("zFR ~ Loc:IO+Loc+IO+Cue+Desc+Co", data=dat_even, groups=dat_even["trID"])
mdf_even = md_even.fit()
print('\nPerformance Train-Even:Test-Odd')
print("Train_aR2 = {0:.3f}".format(aR2(mdf_even, dat_even['zFR'])))
print("Model_AICc = {0:.3f}".format(AICc(mdf_even)))
#print(mdf.wald_test_terms())
pred_odd = mdf_even.predict(dat_odd)
print("Test_R2 = {0:.3f}".format(R2(pred_odd, dat_odd['zFR'])))

print('\nPerformance Train-Odd:Test-Even')
#md2 = smf.mixedlm("zFR ~ Loc*IO+Cue+Desc+Co", data=dat2, groups=dat2["trID"])
#md_odd = sm.OLS.from_formula("zFR ~ Loc:IO+Loc+IO+Cue+Desc+Co+trID", data=dat_odd)
md_odd = smf.mixedlm("zFR ~ Loc:IO+Loc+IO+Cue+Desc+Co", data=dat_odd, groups=dat_odd["trID"])
mdf_odd = md_odd.fit()
print("Train_aR2 = {0:.3f}".format(aR2(mdf_odd, dat_odd['zFR'])))
print("Model_AICc = {0:.3f}".format(AICc(mdf_odd)))
#print(mdf.wald_test_terms())
pred_even = mdf_odd.predict(dat_even)
print("Test_R2 = {0:.3f}".format(R2(pred_even, dat_even['zFR'])))

dat['Pred'] = np.zeros(dat.shape[0])
dat.loc[dat['EvenTrial']==True, 'Pred'] = pred_even
dat.loc[dat['EvenTrial']==False, 'Pred'] = pred_odd

cell = 10
R2thr = 0.2
param_set = getParamSet()
nModels = len(param_set)
cellDat = TrialMatInfo.copy()
cellDat['zFR'] = Data[cellCols[cell]]
dat = []
dat = cellDat[~(cellDat['IO']=='O_I') & (cellDat['Valid'])].copy()
dat['trID'] = dat['trID'].astype('category')
dat = dat.reset_index()
params = ['Loc:IO','Loc','IO','Cue','Desc','Co']
form = getFormula(params)
t1 = time.time()
tR2 = getModel_testR2(form, dat)
if tR2 >= R2thr:
    for k, params in param_set.items():
        form = getFormula(params)
        t2 = time.time()
        trainR2[k], trainAIC[k], _ = getModelPerf(form, dat)
        t3 = time.time()
        print('Time to fit model {0} : {1} = {2:0.3f}s'.format(k, form, t3-t2))
    print('Fitting Completed for cell {0}, total time = {1:0.3f}s'.format(cell, t3-t1))

from joblib import Parallel, delayed

if not sys.warnoptions:
    import warnings
    warnings.simplefilter("ignore")

all_params = ['Loc:IO','Loc','IO','Cue','Desc','Co']
param_set = getParamSet()
nModels = len(param_set)
R2thr = 0.2
nTr = len(TrialInfo['All']['Trials'])
cellColIDs = [i for i, item in enumerate(Data.columns.values) if 'cell' in item]
nCells = len(cellColIDs)
muaColIDs = [i for i, item in enumerate(Data.columns.values) if 'mua' in item]
nMua = len(muaColIDs)  # fixed: original referenced undefined `musColIDs`
nTotalUnits = nCells + nMua
nUnits = {'cell': nCells, 'mua': nMua}
cellCols = Data.columns[cellColIDs]
muaCols = Data.columns[muaColIDs]
unitCols = {'cell': cellCols, 'mua': muaCols}
perfCols = ['FullMod_tR2','modelNum','trainR2','AICc','testR2']
Cols = ['ut'] + perfCols + all_params
nCols = len(Cols)
LM_Dat = pd.DataFrame(np.full((nTotalUnits, nCols), np.nan), columns=Cols)
LM_Dat.loc[:,'ut'] = ['cell']*nCells + ['mua']*nMua

datSubset = ~(TrialMatInfo['IO']=='O_I') & (TrialMatInfo['Valid'])
dat = []
dat = TrialMatInfo[datSubset].copy()
dat['trID'] = dat['trID'].astype('category')
dat = dat.reset_index()
N = dat.shape[0]
dat['zFR'] = np.zeros(N)

t0 = time.time()
with Parallel(n_jobs=16) as parallel:
    cnt = 0
    for ut in ['cell','mua']:
        for cell in np.arange(nUnits[ut]):
            print('\n\nAnalyzing {} {}'.format(ut, cell))
            dat.loc[:,'zFR'] = Data.loc[datSubset, unitCols[ut][cell]].values
            t1 = time.time()
            tR2 = getModel_testR2(dat, params=all_params)
            t2 = time.time()
            LM_Dat.loc[cnt,'FullMod_tR2'] = tR2
            print('Full Model Test Set Fit completed. Time = {}'.format(t2-t1))
            if tR2 >= R2thr:
                print('Full Model passed the threshold, looking for optimal submodel.')
                r = parallel(delayed(getModelPerf)(dat, params=params) for params in param_set.values())
                trainR2, trainAICc, _ = zip(*r)
                t3 = time.time()
                print('\nFitting Completed for {0} {1}, total time = {2:0.3f}s'.format(ut, cell, t3-t1))
                selMod = np.argmin(trainAICc)
                selMod_tR2 = getModel_testR2(dat, params=param_set[selMod])
                print('Selected Model = {}, AICc = {}, testR2 = {} '.format(selMod, trainAICc[selMod], selMod_tR2))
                LM_Dat.loc[cnt,'modelNum'] = selMod
                LM_Dat.loc[cnt,'trainR2'] = trainR2[selMod]
                LM_Dat.loc[cnt,'AICc'] = trainAICc[selMod]
                LM_Dat.loc[cnt,'testR2'] = selMod_tR2
                temp = r[selMod][2].wald_test_terms()
                LM_Dat.loc[cnt, param_set[selMod]] = np.sqrt(temp.summary_frame()['chi2'][param_set[selMod]])
            cnt += 1
print('Model Fitting completed. Time = {}s'.format(time.time()-t0))

LM_Dat

param_set

sns.lmplot(x='zFR', y='fit', hue='Cue', col='Desc', data=dat2, col_wrap=3,
           sharex=False, sharey=False, robust=True)
sns.set_style("whitegrid")
sns.jointplot(dat2['zFR'], dat2['fit'], kind='reg')
```
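The metric helpers `R2`, `aR2`, and `AICc` used above are defined elsewhere in the notebook. A minimal sketch consistent with how they are called here (my assumption of their definitions, not the notebook's actual code) might be:

```python
import numpy as np

def R2(pred, y):
    # squared Pearson correlation between predictions and observations
    return np.corrcoef(pred, y)[0, 1] ** 2

def aR2(mdf, y, n_params=None):
    # adjusted R2 for a fitted statsmodels result; penalizes model size
    n = len(y)
    p = n_params if n_params is not None else len(mdf.params)
    r2 = R2(mdf.fittedvalues, y)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def AICc(mdf):
    # small-sample corrected AIC: AIC + 2k(k+1)/(n-k-1)
    k = len(mdf.params)
    n = mdf.nobs
    return mdf.aic + (2 * k * (k + 1)) / (n - k - 1)
```

`aR2` and `AICc` only rely on the `params`, `fittedvalues`, `aic`, and `nobs` attributes that statsmodels results expose, so they work for both the `mixedlm` and `OLS` variants tried above.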
# BSM

## Assumptions
- Price of underlying asset follows a lognormal dist; return ~ normal
- $r_f^c$ is known and constant
- Volatility $\sigma$ of underlying asset is known and constant
- Frictionless market
- No cash flow* (dividend)
- European options

## Formula

### European Call
$$c_0 = S_0e^{-qT} N(d_1) - Xe^{-r_f^c T} N(d_2)$$
$$d_1 = \frac{\ln(S_0/X) + (r - q + \sigma^2/2)T}{\sigma \sqrt{T}}$$
$$d_2 = d_1 - \sigma \sqrt{T}$$

### European Put
$$p_0 = Xe^{-r_f^c T} N(-d_2) - S_0e^{-qT} N(-d_1)$$

## Notes
- Call: `S0*N(d1)` term: buy delta shares of stock & `-X` term: borrow money
- Put: `X` term: buy in a bond & `-S0*N(-d1)` term: short stock
- Roughly: N(d1) = Prob. ITM before T
- N(d2) = Prob(S_T > X) = Prob. exercise at T

## RFD quotes

My personal setup with options is to do non-directional short-term trades to benefit from the rising implied volatility, or simply sell expensive premium without betting on a single market direction. To make the best of options, one should implement complex strategies, which improves the risk/reward ratio. Butterflies, iron butterflies, double calendars, hedged straddles or strangles, and put/call ratio spreads are better vehicles for the short term, which can greatly benefit from the increasing volatility of a nervous market without making bets on a single direction.

If the goal is simplicity, I prefer to sell puts (get paid to buy the stock at the price you want) and, once assigned, sell calls (to increase income and sell for a guaranteed profit). Once in cash again, rinse and repeat. If the market tanks when holding the stock, sell calls further out and have the temperament for market gyrations - it's temporary. This works very well with Canadian bank stocks or companies like Costco. Sell a put to buy with a 10% discount from the current price. When it tanks enough and you are assigned, sell a call to have a 5% or 10% profit in a month or 2 out. Keep receiving the dividends and call premium meanwhile, until it's sold for your profit price.
Repeat again, sell puts (collecting premium) until the price drops again, and so forth. This works forever on quality companies - main Canadian banks and Costco are just some examples. Also works on BCE and AT&T. Again, a simple income strategy with fairly low risk.

https://forums.redflagdeals.com/day-trading-option-2259692/

I personally find swing trading with options (larger timeframe) a lot easier than day trading. Technical analysis and multiple indicators help to validate trend / support / resistance, where you can place directional or non-directional trades.

```
def newton(f, Df, x0, epsilon, max_iter):
    '''Approximate solution of f(x)=0 by Newton's method.

    https://www.math.ubc.ca/~pwalls/math-python/roots-optimization/newton/

    Parameters
    ----------
    f : function
        Function for which we are searching for a solution f(x)=0.
    Df : function
        Derivative of f(x).
    x0 : number
        Initial guess for a solution f(x)=0.
    epsilon : number
        Stopping criteria is abs(f(x)) < epsilon.
    max_iter : integer
        Maximum number of iterations of Newton's method.

    Returns
    -------
    xn : number
        Implement Newton's method: compute the linear approximation
        of f(x) at xn and find x intercept by the formula
            x = xn - f(xn)/Df(xn)
        Continue until abs(f(xn)) < epsilon and return xn.
        If Df(xn) == 0, return None. If the number of iterations
        exceeds max_iter, then return None.

    Examples
    --------
    >>> f = lambda x: x**2 - x - 1
    >>> Df = lambda x: 2*x - 1
    >>> newton(f,Df,1,1e-8,10)
    Found solution after 5 iterations.
    1.618033988749989
    '''
    xn = x0
    for n in range(0, max_iter):
        fxn = f(xn)
        if abs(fxn) < epsilon:
            print('Found solution after', n, 'iterations.')
            return xn
        Dfxn = Df(xn)
        if Dfxn == 0:
            print('Zero derivative. No solution found.')
            return None
        xn = xn - fxn/Dfxn
    print('Exceeded maximum iterations. No solution found.')
    return None

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from math import log, sqrt, exp
from scipy import stats
from functools import partial


class Bsm(object):

    def __init__(self, s0, k, t, r, sigma=0.5, q=0):
        self.s0 = s0        # current stock price
        self.k = k          # strike price
        self.t = t          # time to expiration in years
        self.r = r          # continuous risk-free rate
        self.q = q          # dividend rate
        self.sigma = sigma  # sd

    def d1(self, sigma_est=None):
        sigma = sigma_est or self.sigma
        return (log(self.s0/self.k) + (self.r - self.q + 0.5*sigma**2)*self.t)\
            / (sigma * sqrt(self.t))

    def d2(self, sigma_est=None):
        sigma = sigma_est or self.sigma
        return self.d1(sigma_est) - (sigma * sqrt(self.t))

    def call_value(self, sigma_est=None):
        return self.s0 * stats.norm.cdf(self.d1(sigma_est), 0., 1.) - \
            self.k * exp(-self.r*self.t) * stats.norm.cdf(self.d2(sigma_est), 0., 1.)

    def put_value(self, sigma_est=None):
        return self.k * exp(-self.r*self.t) * stats.norm.cdf(-self.d2(sigma_est), 0., 1.) - \
            self.s0 * stats.norm.cdf(-self.d1(sigma_est), 0., 1.)

    def vega(self, sigma_est=None):
        return self.s0 * stats.norm.pdf(self.d1(sigma_est), 0., 1.) * sqrt(self.t)

    def call_iv_newton(self, c0, sigma_est=1., n=100):
        self.sigma_est = sigma_est
        f = lambda x: self.call_value(x) - c0
        df = lambda x: self.vega(x)
        return newton(f, df, sigma_est, 0.001, 100)

    def call_iv_dichotomy(self, c0):
        c_est = 0
        high = 3
        low = 0
        sigma = (high + low) / 2
        while abs(c0 - c_est) > 1e-8:
            c_est = self.call_value(sigma)
            # print(f'c_est = {c_est}, sigma = {sigma}')
            if c0 - c_est > 0:
                low = sigma
                sigma = (sigma + high) / 2
            else:
                high = sigma
                sigma = (low + sigma) / 2
        return sigma

    def __repr__(self):
        return f'Base(s0={self.s0}, k={self.k}, t={self.t}, r={self.r})'


bsm1 = Bsm(s0=100, k=100, t=1.0, r=0.05, q=0, sigma=0.3)
n_d1 = stats.norm.cdf(bsm1.d1(), 0., 1.)
bsm1.call_value()

bsm2 = Bsm(s0=100, k=100, t=1.0, r=0.05, q=0,)
#bsm2.call_iv_newton(c0=14.2313, n=10)
bsm2.call_iv_dichotomy(c0=14.2313)


def iv(df, current_date, strike_date, s0, rf):
    k = df['Strike']
    call = (df['Bid'] + df['Ask']) / 2
    t = (pd.Timestamp(strike_date) - pd.Timestamp(current_date)).days / 365
    sigma_init = 1
    sigma_newton = []
    sigma_dichotomy = []
    for i in range(df.shape[0]):
        model = Bsm(s0, k[i], t, rf)
        try:
            sigma_newton.append(model.call_iv_newton(c0=call[i], sigma_est=sigma_init))
            #sigma_dichotomy.append(model.call_iv_dichotomy(c0=call[i]))
        except ZeroDivisionError as zde:
            print(f'{zde!r}: {model!r}')
            sigma_dichotomy.append(None)
    return sigma_newton, sigma_dichotomy


s0 = 1290.69
rf = 0.0248  # libor
pd_read_excel = partial(pd.read_excel, 'OEX1290.69.xlsx', skiprows=3)
df_05 = pd_read_excel(sheet_name='20190517')
df_06 = pd_read_excel(sheet_name='20190621')
df_07 = pd_read_excel(sheet_name='20190719')
df_09 = pd_read_excel(sheet_name='20190930')
iv_05 = iv(df_05, '20190422', '20190517', s0, rf)
iv_06 = iv(df_06, '20190422', '20190621', s0, rf)
iv_07 = iv(df_07, '20190422', '20190719', s0, rf)
iv_09 = iv(df_09, '20190422', '20190930', s0, rf)

dfs = [df_05, df_06, df_07, df_09]
ivs = [iv_05, iv_06, iv_07, iv_09]
color = (c for c in ['r', 'g', 'k', 'c'])
for result in zip(dfs, ivs):
    k = result[0]['Strike']
    iv_newton = result[1][0]
    print(len(k), len(iv_newton))
    plt.plot(k, iv_newton, lw=1.5)
    plt.plot(k, iv_newton, next(color))
plt.grid(True)
plt.xlabel('Strike')
plt.ylabel('Implied volatility')
plt.legend()
plt.xlim(1100, 1800)
plt.ylim(0, .4)
plt.show()
```

# Option Greeks

BSM 5 inputs:
1. Underlying asset price -> $\frac{\Delta C}{\Delta S}$ Delta ---> Gamma
1. Volatility -> $\frac{\Delta C}{\Delta \sigma}$ Vega
1. Risk-free rate -> $\frac{\Delta C}{\Delta r_f}$ Rho (rate)
1. Time to expiration -> $\frac{\Delta C}{\Delta t}$ Theta
1. Strike price -> $\frac{\Delta C}{\Delta X}$ X/K

- `S+` --> `C+` (0<=Delta<=1)
- `S-` --> `P+` (-1<=Delta<=0)
- $\sigma$+ --> `C+` & `P+` (Vega>0)
- $r_f$+ --> `C+` (Rho>0) (C=P+S-K/(1+__r__)^T)
- $r_f$- --> `P+` (Rho<0)
- Theta < 0; Time decay*
- `X-` --> `C+`
- `X+` --> `P+`
- Long: `Gamma>0`; Short: `Gamma<0`
- `Gamma` max at ATM, approaches 0 when deep ITM or OTM

## Delta
- Sensitivity of the option price to a change in the price of the underlying asset
- One option = delta stock
- $delta_{call} = \Delta C / \Delta S = N(d_1)$
- $delta_{put} = \Delta P / \Delta S = delta_{call} - 1 = N(d_1) - 1$
- 0 when deep OTM - option price not sensitive to $\Delta S$
- $\pm1$ when deep ITM - very sensitive; 1-1 ratio on stock price
- $\pm0.5$ around `X`
- When t -> T
  - ITM delta_c --> 1
  - OTM delta_c --> 0

### delta_p = delta_c - 1
- Forward on S = Call on S - Put on S
- FP = C - P --> $dC/dS - dP/dS = d{FP}/dS$
- FP and S relation: 1:1
- delta_p = delta_c - 1

### N(d1) = delta_c
- $\frac{dBSM}{dS} = N(d_1) + 0 = delta_c$

### Dynamic hedging
- TL;DR: adjusting # of call options to make a delta-neutral portfolio
- Delta-neutral portfolio: value of the portfolio remains unchanged
- `+ S - h*C`, h = 1/delta
- When S increases by n dollars, each call gains $delta \times n$; the 1/delta short calls lose n dollars in total, leaving the portfolio unchanged
- Hedge ratio depends on which hedges which
- Long stock & short call / long put
- Portfolio value unchanged:
  - Need: $\Delta_{portfolio} / \Delta S = 0$
  - $nS \times \Delta S - nC \times \Delta C = 0$
  - $\frac{\Delta C}{\Delta S} = \frac{nS}{nC} = delta_{call}$
- Dynamic
  - As t -> T, delta is changing (to OTM 0 or ITM 1)
  - $nS$ remains unchanged, need to change $nC$
- Maximum cost
  - Change of delta is fast (Gamma max) - max at ATM
  - Rebalance portfolio more frequently -> higher transaction cost

## Gamma
- Sensitivity of the option delta to a change in the price of the underlying asset
- $gamma = \Delta delta / \Delta S$
- Call and put options on the same stock with same T and X have equal gammas

## Vega
- Sensitivity of the option value to a change in the volatility of the underlying asset

## Theta
- Sensitivity of the option value to a change in the calendar time
- Time decay

## Rho
- Sensitivity of the option value to a change in the $r_f$
- Leverage effect: rho > 0 for calls
- Small impact compared to vega

## Volatility
1. Historical Volatility
   - Using historical data to calculate the variance and s.d. of the continuously compounded returns
   - $S_{R_i^c}^2 = \frac{\Sigma_{i=1}^N(R_i^c-\bar{R_i^c})^2}{N-1}$
   - $\sigma = \sqrt{S_{R_i^c}^2}$
1. Implied Volatility
   - Calculate $\sigma$ backwards
   - IV=20%, Historical=10% -> Option overvalued

# Strategies

## Synthetic
- Synthetic long/short asset
  - C - P = +S = Forward
  - P - C = -S
- Synthetic call/put
  - C = S + P
  - P = -S + C
- Synthetic Stock
  - Equity = rf + forward/futures
  - Synthetic Equity = risk-free asset + stock futures
- Synthetic Cash
  - rf = Stock - Forward
  - Synthetic risk-free asset = Stock - stock futures
- S0=10, 1yr FP, rf=10%, no div
  - Long stock - forward; $FP=S_0*(1+r_f)^T$
  - If S1=12: stock += 2, (-forward) -= 1 => portfolio += 1
  - If S1=8: stock -= 2, (-forward) += 3 => portfolio += 1

## Fiduciary call: C + bond
- C(X,T) + pure-discount bond that pays X in T years
- Payoff:
  - $S_T \le X$: $X$
  - $S_T > X$: $S_T$

## Covered call: $S - C$
- Neutral
- Call covered by a stock
- The stock price rises moderately without reaching X; capital gain + call premium
- Same as: short put
- Profit: $(S_T-S_0) - max\{0, (S_T-X)\} + C$
- Cost / break even: $S_T = S_0 - C$
- Max loss at $S_T=0$
  - All the cost: $-(S_0 - C)$
- Max profit at $S_T \ge X$
  - Constant: $X - (S_0 - C)$

## Protective put: $S + P_{atm}$
- Bullish
- Married put
- Same as: long call
- Pros:
  - Unlimited profit
- Cons:
  - Pay put premium - lower total return
  - Put will expire
- Deductible: $S_0 - X$
  - If you lower the cost, the deductible is higher - i.e., more OTM
- Profit: $(S_T-S_0) + max\{0,(X-S_T)\} - P$
- Cost: $P + S_0$
- Max loss at $S_T<X$
  - $X - (P + S_0)$: strike price - cost
- Gain at $S_T > P_0 + S_0$

## Bull Call Spread
- (Bullish) benefit from a stock's limited increase in price
- Long 1 call & short 1 call, the same expiry, __higher__ X
- Trade-off:
  - Reduce premium
  - Limit upside profit potential
- Max loss:
  - Net premium spent x 100
- Max upside profit potential:
  - (call spread width - premium spent) x 100
- Breakeven:
  - Lower X + net premium spent

## Bear Put Spread
- Bearish; moderate decline
- Long 1 put & short 1 put at lower strike

## Collar
- Protect against large losses & limit large gains
- Currently long 100 shares with a gain
- Sell 1 OTM call & buy 1 OTM put, same expiry
- Max loss: limited
  - Net debit: Put X - stock purchase price - net premium paid
  - Net credit: Put X - stock purchase price + net premium collected
- Max profit: limited
  - Net debit: Call X - stock purchase price - net premium paid
  - Net credit: Call X - stock purchase price + net premium collected
- Breakeven:
  - Net debit: stock purchase price + net premium paid
  - Net credit: stock purchase price - net premium collected

## Straddle
- Profit from a very strong move in either direction
- Move from low volatility to high volatility
- `\/`
- Long 1 call ATM & long 1 put ATM on same S, same T and same X
- X is very close to ATM
- Max loss: limited
  - At X: (call premium + put premium) x 100
- Max profit: unlimited
- Breakeven:
  - Up: call strike + call premium + put premium
  - Down: put strike - call premium - put premium

## Strangle
- Long 1 OTM call & long 1 OTM put, put X < call X
- Net debit
- `\_/`
- Breakeven:
  - Up: call strike + put premium + call premium
  - Down: put strike - put premium - call premium
- Max loss: limited
  - Put premium + call premium + commission
  - Between put X and call X

## Butterfly
- Neutral
- Net debit
- A bull spread + a bear spread
- Long 1 ITM call
- Short 2 ATM calls
- Long 1 OTM call
- `_/\_`
- Net premium = C_ITM - 2xC_ATM + C_OTM
- Breakeven:
  - Lower: long call lower X + net premium
  - Upper: long call higher X - net premium
- Max loss: limited
  - Net premium + commission
  - When underlying <= long call lower X OR underlying >= long call upper X
- Max profit: limited
  - Short call X - long call lower X - net premium - commission
  - When underlying = short call X

## Condor
- Limited profit: low IV
- Long outside X, short middle X
  - Long 1 ITM call (lower X)
  - Short 1 ITM call
  - Short 1 OTM call
  - Long 1 OTM call (higher X)
- Same T
- Long Condor: `_/T\_`. Cut-off butterfly
- Breakeven:
  - Lower: long call lower X + net premium
  - Upper: long call higher X - net premium
- Max loss: limited
  - Net premium + commission
  - When underlying <= long call lower X OR underlying >= long call upper X
- Max profit: limited
  - Short call X - long call lower X - net premium - commission
  - When underlying between 2 short calls

## Iron Condor
- Call bull spread + put bear spread
- Long 1 ITM call
- Short 1 ITM call
- Short 1 ITM put
- Long 1 ITM put
- Breakeven:
  - Lower: short call X + net premium
  - Upper: short put X - net premium
- Max profit: limited
  - Net premium + commission
  - When underlying between 2 short calls
- Max loss: limited
  - Long call X - short call X - net premium - commission
  - When underlying <= long call X OR underlying >= long put X

## Iron Butterfly
- When IV is extremely high
- Long 1 OTM put
- Short 1 ATM put
- Short 1 ATM call
- Long 1 OTM call
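The payoff arithmetic behind these strategies can be checked numerically. A small sketch for a long straddle at X = 100, using made-up premiums (5 for the call, 4 for the put - illustrative numbers, not market data):

```python
import numpy as np

X, call_prem, put_prem = 100.0, 5.0, 4.0   # hypothetical strike and premiums

S_T = np.linspace(70, 130, 121)            # terminal stock prices (step 0.5)
payoff = np.maximum(S_T - X, 0) + np.maximum(X - S_T, 0)
profit = payoff - (call_prem + put_prem)

# max loss is the total premium paid, hit when S_T lands exactly at X
assert profit.min() == -(call_prem + put_prem)
```

The two breakevens fall at X plus or minus the total premium (here 91 and 109), matching the "Up: call strike + call premium + put premium / Down: put strike - call premium - put premium" rules above.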
# Use case: Grouping texts by similar topics

**Author:** Data Scientists Unit (UCD)

---

This use case applies several features of the **ConTexto** library to process and vectorize news texts on different topics. t-SNE, a non-linear dimensionality-reduction technique, is then applied to these vectors to project them into a two-dimensional space. Once the vectors are in 2 dimensions, they are plotted with a color corresponding to the topic of each text. This makes it possible to see whether the points of each topic end up close together and separated from the other topics.

---

## 1. Load the necessary libraries and define important parameters for the use case

The first step is to load the modules and libraries needed to run the case study. From **ConTexto** we need functions from the `limpieza` module, for basic text processing and *stopword* removal, and the `vectorizacion` module, to generate the vector representations of the texts.

Additionally, the following external packages are imported:

- `pyplot`: to generate and display the plots
- `manifold`, from `sklearn`: to perform dimensionality reduction via t-SNE
- From `sklearn.datasets`, the `fetch_20newsgroups` function, which downloads English-language news on 20 different topics. For more information about this dataset, see <a href="https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html" target="_blank">its documentation</a>.

```
# Required ConTexto components
from contexto.limpieza import limpieza_texto, lista_stopwords
from contexto.vectorizacion import *
from contexto.utils.auxiliares import verificar_crear_dir

# Additional libraries
import matplotlib.pyplot as plt
from sklearn import manifold
from sklearn.datasets import fetch_20newsgroups
```

From the 20 topics available in the dataset, two groups of 4 topics each are formed. The following table shows the topics in each group:

| Group 1 | Group 2 |
|----------------------|------------------------|
|Computing - Graphics |Religion - Christian |
|Sports - Baseball |Alternative - Atheism |
|Science - Medicine |Religion - Miscellaneous|
|Politics - Guns |Politics - Middle East |

As can be seen, group 1 contains topics that are more different from each other, so the resulting vectors should be easier to see separated when plotted in two dimensions. Group 2, on the other hand, contains topics with more in common, so at first glance it may not be as easy to see them separated.

Finally, a list of vectorizers to use is defined. The following are used:

- BOW
- TF-IDF
- Hashing
- Doc2Vec
- Word2Vec
- Word2Vec - ignoring unknown words (for more information on this, see the vectorization example)

```
# Topics that in principle are quite different from each other, so they
# should be easier to separate
grupo_1 = ['comp.graphics', 'rec.sport.baseball', 'sci.med', 'talk.politics.guns']
# Topics much more similar to each other
grupo_2 = ['soc.religion.christian', 'alt.atheism', 'talk.religion.misc', 'talk.politics.mideast']

# Vectorizers to consider
vectorizadores = ['bow', 'tfidf', 'hash', 'doc2vec', 'word2vec', 'word2vec_conocidas']
```

---

## 2. Helper functions

Two functions are defined below, which do the following:

- `graficar_textos`: receives an array of (two-dimensional) vectors and their respective labels (the topic of each text). This function plots the two-dimensional vectors and assigns a color to each point according to its topic. The *dir_salida* parameter sets the folder where results are saved. The name of each plot depends on the topic group used, on whether the text vectors were normalized, and on a title assigned to the plot (*titulo* parameter).
- `comparacion_vectorizadores`: receives the number indicating which topic group to process, a list of vectorizers to use, and a *normalizar* parameter indicating whether min-max normalization should be applied to the vectors generated for the group of texts. This normalization can improve results for some vectorizers, particularly the frequency-based ones (except TF-IDF, which already performs a kind of normalization by taking the inverse document frequency of terms into account). This function carries out the whole process, which consists of:
  - Extracting the news for the topics of the given group, using the `fetch_20newsgroups` function
  - Pre-processing the texts, removing punctuation and *stopwords*, and lowercasing everything
  - Initializing and fitting (where applicable) the vectorizers on the news corpus
  - Applying the vectorizers to obtain the vector representations of the texts
  - Normalizing the vectors with min-max, only if requested via the function's *normalizar* parameter
  - Applying the dimensionality reduction to bring the vectors down to 2 dimensions. This step can take a while
  - Using the `graficar_textos` function to produce the plots

```
# Function to plot the points
def graficar_textos(X, y, titulo, num_grupo, norm, nombres, dir_salida='salida/caso_uso_vectores/'):
    num_cats = len(np.unique(y))
    # Up to 8 different categories
    colores = ['black', 'blue', 'yellow', 'red', 'green', 'orange', 'brown', 'purple']
    color_dict = {i: colores[i] for i in range(num_cats)}
    label_dict = {i: nombres[i] for i in range(num_cats)}
    fig, ax = plt.subplots(figsize=(10,10))
    for g in range(num_cats):
        ix = np.where(y == g)
        ax.scatter(X[ix,0], X[ix,1], c=color_dict[g], label=label_dict[g])
    # Legend
    plt.legend(loc="lower right", title="Clases")
    plt.xticks([]), plt.yticks([])
    plt.title(titulo)
    # Save the resulting image
    verificar_crear_dir(dir_salida)
    norm_str = '_norm' if norm else ''
    nombre_archivo = f'grupo_{num_grupo}_{titulo}{norm_str}.jpg'
    plt.savefig(dir_salida + nombre_archivo)
    plt.close()


def comparacion_vectorizadores(num_grupo, normalizar, vectorizadores=vectorizadores,
                               dir_salida='salida/caso_uso_vectores/'):
    grupo = grupo_1 if num_grupo == 1 else grupo_2
    # Get the dataset for the selected categories
    dataset = fetch_20newsgroups(subset='all', categories=grupo, shuffle=True, random_state=42)
    clases = dataset.target
    nombres_clases = dataset.target_names
    # Basic text cleaning to remove noise
    # Note that the texts are in English
    textos_limpios = [limpieza_texto(i, lista_stopwords('en')) for i in dataset.data]
    # Initialize the 5 vectorizers. All are configured with a fixed number
    # of elements, so they are on a comparable footing
    v_bow = VectorizadorFrecuencias(tipo='bow', max_elementos=500)
    v_tfidf = VectorizadorFrecuencias(tipo='tfidf', max_elementos=500)
    v_hash = VectorizadorHash(n_elementos=500)
    v_word2vec = VectorizadorWord2Vec('en')
    v_doc2vec = VectorizadorDoc2Vec(n_elementos=300)
    # Fit the models that need to be fitted on the corpus
    v_bow.ajustar(textos_limpios)
    v_tfidf.ajustar(textos_limpios)
    v_doc2vec.entrenar_modelo(textos_limpios)
    # Get the vectors for each vectorizer
    dict_vectores = {}
    for v in vectorizadores:
        print(f'Vectorizando con técnica {v}...')
        if 'conocidas' in v:
            v_mod = v.split('_')[0]
            dict_vectores[v] = eval(f'v_{v_mod}.vectorizar(textos_limpios, quitar_desconocidas=True)')
        else:
            dict_vectores[v] = eval(f'v_{v}.vectorizar(textos_limpios)')
    # Normalize the vectors
    if normalizar:
        for v in vectorizadores:
            min_v = dict_vectores[v].min(axis=0)
            max_v = dict_vectores[v].max(axis=0)
            dict_vectores[v] = (dict_vectores[v] - min_v) / (max_v - min_v)
    # Apply t-SNE to reduce the vectors to 2 dimensions
    dict_tsne = {}
    for v in vectorizadores:
        print(f'Reducción de dimensionalidad a vector {v}...')
        dict_tsne[v] = manifold.TSNE(n_components=2, init="pca").fit_transform(dict_vectores[v])
    # Plot the points for each technique
    for v in vectorizadores:
        graficar_textos(dict_tsne[v], clases, v, num_grupo, normalizar,
                        nombres_clases, dir_salida=dir_salida)
```

---

## 3. Run the sweep to generate the plots and compare

A sweep over both news groups (1 and 2) and both normalization options (with and without) is performed below, generating all the plots so we can determine which vectorizers produce more "separable" vectors in each case.

```
# Sweep to run the tests
for num_grupo in [1, 2]:
    for normalizar in [True, False]:
        print(f'\n -------------- Grupo: {num_grupo}, normalizar: {normalizar}')
        comparacion_vectorizadores(num_grupo, normalizar, vectorizadores=vectorizadores)
```

---

## 4. Results

The result images are saved in the folder specified by the *dir_salida* parameter. All combinations of vectorizer, group, and normalization produce 24 images.

The image below shows the result obtained for group 1 with the Word2Vec vectorizer, ignoring unknown words and without normalization. In this case the texts appear grouped by topic, with a few exceptions.

```
import matplotlib.image as mpimg

img = mpimg.imread('salida/caso_uso_vectores/grupo_1_word2vec_conocidas.jpg')
plt.figure(figsize=(10,10))
imgplot = plt.imshow(img)
plt.axis('off')
plt.show()
```

For group 2, on the other hand, the results are not as good. This was to be expected, since the topics in this group are closer to each other. The plot below was obtained with the Doc2Vec vectorizer without normalization. Although the texts come out much more mixed, the *Politics - Middle East* topic, which on paper was the most distinct of the 4, ends up more separated from the rest.

It is important to remember that this use case only performed vectorization followed by dimensionality reduction. Training other unsupervised models (*clustering*) or supervised models (multiclass classification) may lead to better results.

```
img = mpimg.imread('salida/caso_uso_vectores/grupo_2_doc2vec.jpg')
plt.figure(figsize=(10,10))
imgplot = plt.imshow(img)
plt.axis('off')
plt.show()
```
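One detail worth noting about the min-max step in `comparacion_vectorizadores`: it divides by `max_v - min_v`, so a feature column with a constant value would produce a division by zero. A small standalone sketch of the same normalization with a guard for that case (the guard is my addition, not part of the notebook's code):

```python
import numpy as np

def min_max_normalize(X):
    # scale each column of X to [0, 1]; constant columns are mapped to 0
    X = np.asarray(X, dtype=float)
    min_v = X.min(axis=0)
    rng = X.max(axis=0) - min_v
    rng[rng == 0] = 1.0  # avoid division by zero for constant columns
    return (X - min_v) / rng

X = np.array([[1.0, 5.0, 2.0],
              [3.0, 5.0, 4.0],
              [5.0, 5.0, 6.0]])
X_norm = min_max_normalize(X)
```

Here the middle column is constant, so without the guard the notebook's expression would yield NaNs for it; with the guard it simply maps to zeros.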
```
### imports
#
import pandas as pd
import numpy as np
#
import gzip
import csv
import json
import string
import warnings
warnings.filterwarnings('ignore')
#
from distutils.util import strtobool
# pickle
import pickle
#
from sklearn.neighbors import NearestNeighbors
```

EDA

```
# convert files
asheville = pd.read_csv('ashevillelisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
austin = pd.read_csv('austinlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
boston = pd.read_csv('bostonlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
broward = pd.read_csv('browardcountylisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
cambridge = pd.read_csv('cambridgelisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
chicago = pd.read_csv('chicagolisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
clarkcounty = pd.read_csv('clarkcountylisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
columbus = pd.read_csv('columbuslisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
dc = pd.read_csv('dclisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
denver = pd.read_csv('denverlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
hawaii = pd.read_csv('hawaiilisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
jerseycity = pd.read_csv('jerseycitylisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
losangeles = pd.read_csv('losangeleslisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
nashville = pd.read_csv('nashvillelisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
neworleans = pd.read_csv('neworleanslisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
nyc = pd.read_csv('nyclisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
oakland = pd.read_csv('oaklandlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
pacificgrove = pd.read_csv('pacificfgrovelisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
portland = pd.read_csv('portlandlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
ri = pd.read_csv('rhodeislandlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
salem = pd.read_csv('salemlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
sandiego = pd.read_csv('sandiegolisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
sanfran = pd.read_csv('sanfranciscolisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
sanmateo = pd.read_csv('sanmateocountylisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
santacruz = pd.read_csv('santacruzlisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
seattle = pd.read_csv('seattlelisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)
twincities = pd.read_csv('twincitieslisting.csv.gz', compression='gzip', header=0, sep=',', quotechar='"', error_bad_lines=False)

# get city
asheville['location'] = 'Asheville'
austin['location'] = 'Austin'
boston['location'] = 'Boston'
broward['location'] = 'Broward County'
cambridge['location'] = 'Cambridge'
chicago['location'] = 'Chicago'
clarkcounty['location'] = 'Clark County'
columbus['location'] = 'Columbus'
dc['location'] = 'Washington, D.C.'
denver['location'] = 'Denver'
hawaii['location'] = 'Anywhere in Hawaii'
jerseycity['location'] = 'Jersey City'
losangeles['location'] = 'Los Angeles'
nashville['location'] = 'Nashville'
neworleans['location'] = 'New Orleans'
nyc['location'] = 'New York City'
oakland['location'] = 'Oakland'
pacificgrove['location'] = 'Pacific Grove'
portland['location'] = 'Portland'
ri['location'] = 'Any City in Rhode Island'
salem['location'] = 'Salem'
sandiego['location'] = 'San Diego'
sanfran['location'] = 'San Francisco'
sanmateo['location'] = 'San Mateo County'
santacruz['location'] = 'Santa Cruz County'
seattle['location'] = 'Seattle'
twincities['location'] = 'Twin Cities'

# get states
asheville['state'] = 'NC'
austin['state'] = 'TX'
boston['state'] = 'MA'
broward['state'] = 'FL'
cambridge['state'] = 'MA'
chicago['state'] = 'IL'
clarkcounty['state'] = 'NV'
columbus['state'] = 'OH'
dc['state'] = 'DC'
denver['state'] = 'CO'
hawaii['state'] = 'HI'
jerseycity['state'] = 'NJ'
losangeles['state'] = 'CA'
nashville['state'] = 'TN'
neworleans['state'] = 'LA'
nyc['state'] = 'NY'
oakland['state'] = 'CA'
pacificgrove['state'] = 'CA'
portland['state'] = 'OR'
ri['state'] = 'RI'
salem['state'] = 'OR'
sandiego['state'] = 'CA'
sanfran['state'] = 'CA'
sanmateo['state'] = 'CA'
santacruz['state'] = 'CA'
seattle['state'] = 'WA'
twincities['state'] = 'MN'

# concat all the dfs
df_first = pd.concat([asheville, austin, boston, broward, cambridge,
                      chicago, clarkcounty, columbus, dc, denver,
                      hawaii, jerseycity, losangeles, nashville, neworleans,
                      nyc, oakland, pacificgrove, portland, ri,
                      salem, sandiego, sanfran, sanmateo,
                      santacruz, seattle, twincities])

# dropping cols
df_first = df_first.drop(['calculated_host_listings_count',
                          'calculated_host_listings_count_entire_homes',
                          'calculated_host_listings_count_private_rooms',
                          'calculated_host_listings_count_shared_rooms',
                          'first_review',
                          'minimum_minimum_nights',
                          'maximum_minimum_nights',
                          'minimum_maximum_nights',
                          'maximum_maximum_nights',
'has_availability', 'availability_30', 'availability_60', 'availability_90', 'availability_365', 'number_of_reviews_ltm', 'number_of_reviews_l30d', 'host_response_time', 'host_response_rate', 'host_acceptance_rate', 'host_listings_count', 'host_total_listings_count', 'calendar_updated', 'reviews_per_month', 'neighbourhood', 'neighbourhood_cleansed', 'neighbourhood_group_cleansed', 'latitude', 'longitude', 'license', 'name', 'neighborhood_overview', 'host_about', 'host_id', 'host_since', 'host_verifications', 'review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness', 'review_scores_checkin', 'review_scores_communication', 'review_scores_location', 'review_scores_value', 'host_location', 'last_scraped', 'last_review', 'minimum_nights', 'maximum_nights', 'minimum_nights_avg_ntm', 'maximum_nights_avg_ntm', 'calendar_last_scraped', 'scrape_id', 'description', 'picture_url', 'host_url', 'host_name', 'host_thumbnail_url', 'host_picture_url', 'host_neighbourhood', 'bathrooms', 'host_has_profile_pic', 'host_identity_verified', 'number_of_reviews' ], axis=1) df1 = df_first # replace nulls df1 = df1.replace(np.nan, 0) # cleaning on bathrooms_text df1['bathrooms_text'] = df1['bathrooms_text'].str.rstrip(string.ascii_letters) df1['bathrooms_text'] = df1['bathrooms_text'].str.strip() df1['bathrooms_text'] = df1['bathrooms_text'].str.lower() # create and apply subset df1 = df1[df1['bathrooms_text'].notna()] subset = df1['bathrooms_text'].str.contains('half') df1['bathrooms_text'] = df1['bathrooms_text'].where(~subset, other=0.5) # cleaning on bathrooms_text df1['bathrooms_text'] = df1['bathrooms_text'].str.rstrip(string.ascii_letters) df1['bathrooms_text'] = df1['bathrooms_text'].str.strip() df1['bathrooms_text'] = df1['bathrooms_text'].str.lower() # create and apply subset df1 = df1[df1['bathrooms_text'].notna()] subset = df1['bathrooms_text'].str.contains('half') df1['bathrooms_text'] = df1['bathrooms_text'].where(~subset, other=0.5) # for some reason you 
get an error trying to convert it to float if I don't run the above line twice # change bathroom_text to float df1['bathrooms_text'] = df1['bathrooms_text'].astype(float) df1.info() # lowcase other cols that besides state and city/location df1['amenities'] = df1['amenities'].str.lower() df1['property_type'] = df1['property_type'].str.lower() df1['room_type'] = df1['room_type'].str.lower() # feat engineer amenities if in coloumns to help model df1['hot_water'] = df1['amenities'].str.contains('hot water') df1['air_conditioning'] = df1['amenities'].str.contains('air conditioning') df1['parking'] = df1['amenities'].str.contains('parking') df1['refrigerator'] = df1['amenities'].str.contains('refrigerator') df1['patio_balcony'] = df1['amenities'].str.contains('patio') df1['wifi'] = df1['amenities'].str.contains('wifi') df1['breakfast'] = df1['amenities'].str.contains('breakfast') df1['hair_dryer'] = df1['amenities'].str.contains('hair dryer') df1['waterfront'] = df1['amenities'].str.contains('waterfront') df1['workspace'] = df1['amenities'].str.contains('workspace') df1['kitchen'] = df1['amenities'].str.contains('kitchen') df1['fireplace'] = df1['amenities'].str.contains('fireplace') df1['tv'] = df1['amenities'].str.contains('tv') df1['clothes_dryer'] = df1['amenities'].str.contains('dryer') df1.head() ``` ### load in models and apply knn ``` tfidf = pickle.load(open('tfidf.pkl', 'rb')) dtm = tfidf.fit_transform(df1['amenities']) dtm = pd.DataFrame(dtm.todense(), columns=tfidf.get_feature_names()) # knn knn = NearestNeighbors(n_neighbors=10, metric='cosine') knn.fit(dtm) def recommender(text): x = pd.DataFrame(columns=df1.columns) input_features = tfidf.transform(text) for i in knn.kneighbors(input_features, n_neighbors=5, return_distance=False)[0]: x = x.append(df1.iloc[[i]]) return x ``` Takes fourteen popular AirBnB amenities to help with accuracy along with other listings to assist in determining house listing price using nearest neighbor 1. hot water 2. 
air_conditioning 3. parking 4. refrigerator 5. patio_balcony 6. wifi 7. breakfast 8. hair_dryer 9. waterfront 10. workspace 11. kitchen 12. fireplace 13. tv 14. clothes_dryer ``` recommender(['studio', 'tv', 'kitchen']) ```
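The recommender above depends on `NearestNeighbors`, whose import is commented out near the top, and on the deprecated `DataFrame.append`. A minimal, self-contained sketch of the same idea — TF-IDF over amenity text plus cosine nearest neighbors — using made-up listing strings (the real notebook fits on the `amenities` column of `df1`):

```python
# Hypothetical mini-corpus standing in for df1['amenities'].
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

listings = [
    "wifi kitchen parking hot water",
    "wifi tv fireplace patio",
    "kitchen refrigerator breakfast wifi",
    "waterfront parking tv",
]

tfidf = TfidfVectorizer()
dtm = tfidf.fit_transform(listings)          # document-term matrix

knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(dtm)

def recommend(query, k=2):
    """Return indices of the k listings whose amenities best match the query."""
    vec = tfidf.transform([query])
    return knn.kneighbors(vec, n_neighbors=k, return_distance=False)[0].tolist()
```

A query like `recommend("wifi kitchen")` surfaces the two listings that mention both amenities, which is exactly how the notebook's `recommender` maps free-text amenity requests onto comparable listings.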
``` import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import torchvision from torchvision import datasets from torchvision import transforms from torchvision.utils import save_image from torchsummary import summary from matplotlib import pyplot as plt #from pushover import notify from random import randint from IPython.display import Image from IPython.core.display import Image, display import dataloader as dl import model as m import networks from networks import LeNet, ClassificationNet from testers import attack_test from resnet import ResNet import gmm as gmm import parameters as p import helper import misc from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt %load_ext autoreload %autoreload 2 torch.cuda.set_device(1) bs = 256 train_loader,test_loader,loader_list = misc.get_dataloaders("Lenet") fixed_x, _ = next(iter(loader_list[0])) save_image(fixed_x, 'real_image.png') Image('real_image.png') def in_top_k(targets, preds, k): topk = preds.topk(k,largest=False)[1] return (targets.unsqueeze(1) == topk).any(dim=1) def cross_corr(centers): c = centers.view(-1,10*centers.size(1)) corr =torch.matmul(c.T,c) loss = torch.norm(torch.triu(corr, diagonal=1, out=None)) return 2*loss/corr.size(0) class Proximity(nn.Module): def __init__(self, num_classes=100, feat_dim=1024, use_gpu=True, margin = 0.0 ): super(Proximity, self).__init__() self.num_classes = num_classes self.feat_dim = feat_dim self.use_gpu = use_gpu self.device = torch.device("cuda:1") self.margin = margin if self.use_gpu: self.centers = nn.Parameter(torch.randn(self.num_classes, self.feat_dim).cuda()) else: self.centers = nn.Parameter(torch.randn(self.num_classes, self.feat_dim)) def forward(self, x , labels): batch_size = x.size(0) distmat = torch.pow(x, 2).sum(dim=1, keepdim=True).expand(batch_size, self.num_classes) + \ torch.pow(self.centers, 2).sum(dim=1, keepdim=True).expand(self.num_classes, batch_size).t() distmat.addmm_(1, -2, 
x, self.centers.t()) classes = torch.arange(self.num_classes).long() if self.use_gpu: classes = classes.to(self.device) labels = labels.unsqueeze(1).expand(batch_size, self.num_classes) mask = labels.eq(classes.expand(batch_size, self.num_classes)) d_y = distmat[mask.clone()] values, indices = torch.topk(distmat,2, dim=1, largest=False, sorted=True, out=None) d_1 = values[:,0] d_2 = values[:,1] indicators = in_top_k(labels,distmat,1)[:,0] con_indicators = ~ indicators.clone() d_c = d_2*indicators + d_1*con_indicators loss = F.relu((d_y-d_c)/(d_y+d_c) + self.margin) mean_loss = loss.mean() return mean_loss, torch.argmin(distmat,dim=1) image_channels = fixed_x.size(1) embedding_net = LeNet() model = ClassificationNet(embedding_net, n_classes=p.n_classes).cuda() gmm = gmm.GaussianMixturePrior(p.num_classes, network_weights=list(model.embedding_net.layers.parameters()), pi_zero=0.99).cuda() criterion_prox_256 = Proximity(num_classes=10, feat_dim=256, use_gpu=True,margin=0.75) criterion_prox_1024 = Proximity(num_classes=10, feat_dim=1024, use_gpu=True, margin=0.75) optimizer_pre = torch.optim.Adam([{'params':model.parameters()}], lr=1e-3, weight_decay=1e-7) #optimizer_post = torch.optim.Adam([{'params':model.parameters()}, # {'params': gmm.means, 'lr': p.lr_mu}, # {'params': gmm.gammas, 'lr': p.lr_gamma}, # {'params': gmm.rhos, 'lr': p.lr_rho}], lr=p.lr_post) optimizer_post = torch.optim.Adam([{'params':model.parameters()}], lr=5e-3, weight_decay=1e-7) #optimizer_prox_1024 = torch.optim.SGD(criterion_prox_1024.parameters(), lr=0.1) #optimizer_conprox_1024 = torch.optim.SGD(criterion_conprox_1024.parameters(), lr=0.0001) optimizer_prox_256 = torch.optim.SGD(criterion_prox_256.parameters(), lr=0.01) optimizer_prox_1024 = torch.optim.SGD(criterion_prox_1024.parameters(), lr=0.01) criterion = nn.CrossEntropyLoss() !rm -rfr reconstructed !rm -rfr softmaxreconstructed !rm -rfr figs !mkdir reconstructed !mkdir softmaxreconstructed !mkdir figs epochs_0 = 50 epochs_1 = 60 import 
time import pandas as pd import matplotlib.patheffects as PathEffects %matplotlib inline import seaborn as sns import numpy as np import matplotlib import matplotlib.pyplot as plt sns.set_style('darkgrid') sns.set_palette('muted') sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.5}) RS = 123 from sklearn.manifold import TSNE from sklearn.decomposition import PCA colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf'] mnist_classes = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'] def t_sne_gen(data): fashion_tsne = TSNE(random_state=RS).fit_transform(data.numpy()) #fashion_pca = PCA(n_components=2, svd_solver='full').fit(data.numpy()) #x = fashion_pca.transform(data.numpy()) return fashion_tsne def fashion_scatter(x, colors,name,folder): # choose a color palette with seaborn. num_classes = len(np.unique(colors)) palette = np.array(sns.color_palette("hls", num_classes)) # create a scatter plot. f = plt.figure(figsize=(8, 8)) ax = plt.subplot(aspect='equal') sc = ax.scatter(x[:,0], x[:,1], lw=0, s=40, c=palette[colors.astype(np.int)]) plt.title(name) plt.xlim(-25, 25) plt.ylim(-25, 25) ax.axis('off') ax.axis('tight') # add the labels for each digit corresponding to the label txts = [] for i in range(num_classes): # Position of each label at median of data points. 
xtext, ytext = np.median(x[colors == i, :], axis=0) txt = ax.text(xtext, ytext, str(i), fontsize=24) txt.set_path_effects([ PathEffects.Stroke(linewidth=5, foreground="w"), PathEffects.Normal()]) txts.append(txt) plt.savefig(folder+name+'.png') return f, ax, sc, txts def plot_embeddings(embeddings, targets, xlim=None, ylim=None): fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111, projection='3d') for i in range(10): #ax = fig.add_subplot(111, projection='3d') inds = np.where(targets==i)[0] ax.scatter(embeddings[inds,0], embeddings[inds,1], embeddings[inds,2], alpha=0.5, color=colors[i]) if xlim: plt.xlim(xlim[0], xlim[1]) if ylim: plt.ylim(ylim[0], ylim[1]) plt.legend(mnist_classes) def extract_embeddings(dataloader, model, pretrain): with torch.no_grad(): model.eval() embeddings_1 = np.zeros((len(dataloader.dataset), networks.vis_size)) embeddings_2 = np.zeros((len(dataloader.dataset), networks.vis_size)) labels = np.zeros(len(dataloader.dataset)) k = 0 for images, target in dataloader: images = images.cuda() emb_1, emb_2= model.get_embedding(images, pretrain) emb_1, emb_2 = emb_1.cpu(), emb_2.cpu() embeddings_1[k:k+len(images)] = emb_1 embeddings_2[k:k+len(images)] = emb_2 labels[k:k+len(images)] = target.numpy() k += len(images) return embeddings_1, embeddings_2, labels import copy correct =0 num_example =0 test_loss_bce=0 test_correct=0 test_num_example =0 for epoch in range(epochs_0): model.train() for idx, (images, target) in enumerate(train_loader): images, target= images.cuda(), target.cuda() out, rep_1, rep_2 = model(images, test= False) loss_bce = criterion(out,target) #loss_prox_1024 = criterion_prox_1024(rep_1, target) #loss_conprox_1024 = criterion_conprox_1024(rep_1, target) #loss_prox_256 = criterion_prox_256(rep_2, target) #loss_conprox_256= criterion_conprox_256(rep_2, target) loss = loss_bce #+ loss_prox_1024 + loss_prox_256 - loss_conprox_1024*0.0001 - loss_conprox_256*0.0001 preds = out.data.max(1, keepdim=True)[1] correct += 
preds.eq(target.data.view_as(preds)).sum() num_example += len(target) optimizer_pre.zero_grad() loss.backward() optimizer_pre.step() to_print = "Epoch[{}/{}] Loss: {:.3f} Accuracy: {}".format(epoch+1,epochs_0, loss.item()/bs, correct.item()/num_example) if idx % 500 == 0: print(to_print) model.eval() with torch.no_grad(): for images, target in test_loader: images, target = images.cuda(), target.cuda() out, rep_1, rep_2= model(images, test=False) loss_bce = criterion(out,target) preds = out.data.max(1, keepdim=True)[1] test_correct += preds.eq(target.data.view_as(preds)).sum() test_num_example += len(target) test_loss_bce+=loss_bce.item() test_loss_bce /= len(test_loader.dataset) print( "test_Loss: {:.3f} Test accuracy: {}".format( test_loss_bce, test_correct.item()/test_num_example)) if epoch %10==0: val_embeddings_1, val_embeddings_2, val_labels_baseline = extract_embeddings(test_loader, model,False) plot_embeddings(val_embeddings_1, val_labels_baseline) plot_embeddings(val_embeddings_2, val_labels_baseline) #fashion_scatter(t_sne_gen(rep_2.cpu()), target.cpu().numpy(),"Clean_data: "+"VAE_"+str(epoch)+"softmax_rep2","./softmaxreconstructed/") #fashion_scatter(t_sne_gen(rep_1.cpu()), target.cpu().numpy(),"Clean_data: "+"VAE_"+str(epoch)+"softmax_rep1","./softmaxreconstructed/") attack_test(model, test_loader, nn.CrossEntropyLoss() ) import copy correct =0 num_example =0 test_loss_bce=0 test_correct=0 test_num_example =0 pre_wts= copy.deepcopy(list(model.embedding_net.layers.parameters())) for epoch in range(epochs_1): model.train() for idx, (images, target) in enumerate(train_loader): images, target= images.cuda(), target.cuda() out, rep_1, rep_2 = model(images,test=False) #loss_bce = criterion(out,target) loss_prox_1024, _ = criterion_prox_1024(rep_1, target) loss_prox_256, preds = criterion_prox_256(rep_2, target) loss = loss_prox_256 + loss_prox_1024 + 0.1 * cross_corr(criterion_prox_256.centers) #preds = out.data.max(1, keepdim=True)[1] correct += 
preds.eq(target.data.view_as(preds)).sum() num_example += len(target) optimizer_post.zero_grad() optimizer_prox_1024.zero_grad() optimizer_prox_256.zero_grad() loss.backward() optimizer_post.step() for param in criterion_prox_256.parameters(): param.grad.data *= (1. /1) optimizer_prox_256.step() for param in criterion_prox_1024.parameters(): param.grad.data *= (1. /1) optimizer_prox_1024.step() to_print = "Epoch[{}/{}] Loss: {:.3f} Accuracy: {}".format(epoch+1,epochs_1, loss.item()/bs, correct.item()/num_example) if idx % 500 == 0: print(to_print) #helper.plot_histogram(epoch,idx, pre_wts, list(model.embedding_net.layers.parameters()), list(gmm.parameters()), correct.item()/num_example,"./figs/") model.eval() with torch.no_grad(): for images, target in test_loader: images, target = images.cuda(), target.cuda() out, rep_1, rep_2= model(images, test=False) loss_bce = criterion(out,target) loss_prox_256, preds = criterion_prox_256(rep_2, target) #preds = out.data.max(1, keepdim=True)[1] test_correct += preds.eq(target.data.view_as(preds)).sum() test_num_example += len(target) test_loss_bce+=loss_bce.item() test_loss_bce /= len(test_loader.dataset) print( "test_Loss: {:.3f} Test accuracy: {}".format( test_loss_bce, test_correct.item()/test_num_example)) if epoch %10==0: val_embeddings_1, val_embeddings_2, val_labels_baseline = extract_embeddings(test_loader, model,True) plot_embeddings(val_embeddings_1, val_labels_baseline) plot_embeddings(val_embeddings_2, val_labels_baseline) #fashion_scatter(t_sne_gen(rep_2.cpu()), target.cpu().numpy(),"Clean_data: "+"VAE_"+str(epoch)+"softmax_rep2","./softmaxreconstructed/") #fashion_scatter(t_sne_gen(rep_1.cpu()), target.cpu().numpy(),"Clean_data: "+"VAE_"+str(epoch)+"softmax_rep1","./softmaxreconstructed/") attack_test(model, test_loader, nn.CrossEntropyLoss() ) ```
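The `Proximity` loss above hinges on the pairwise squared Euclidean distance between each embedding and each class center, computed via the expansion ||x − c||² = ||x||² + ||c||² − 2·x·c (the `distmat` / `addmm_` step). A NumPy sketch of just that computation, with toy values:

```python
import numpy as np

def squared_distances(x, centers):
    """x: (batch, dim) embeddings; centers: (classes, dim).
    Returns the (batch, classes) matrix of squared Euclidean distances."""
    x2 = (x ** 2).sum(axis=1, keepdims=True)          # ||x||^2, shape (batch, 1)
    c2 = (centers ** 2).sum(axis=1, keepdims=True).T  # ||c||^2, shape (1, classes)
    return x2 + c2 - 2.0 * x @ centers.T              # broadcast to (batch, classes)

x = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [3.0, 4.0]])
d = squared_distances(x, centers)
```

The margin term in the loss then compares, per sample, the distance to the true-class center (`d_y`) against the closest competing center (`d_c`), which is why the code takes a top-2 over each row of this matrix.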
``` import mdptoolbox import matplotlib.pyplot as plt import numpy as np import scipy.sparse as ss import seaborn as sns import warnings warnings.filterwarnings('ignore', category=ss.SparseEfficiencyWarning) # params alpha = 0.9 T = 8 state_count = (T+1) * (T+1) epsilon = 10e-5 # game action_count = 3 adopt = 0; override = 1; wait = 2 # mapping utils state_mapping = {} states = [] count = 0 for a in range(T+1): for h in range(T+1): state_mapping[(a, h)] = count states.append((a, h)) count += 1 # initialize matrices transitions = []; reward_selfish = []; reward_honest = [] for _ in range(action_count): transitions.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count)))) reward_selfish.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count)))) reward_honest.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count)))) # populate matrices for state_index in range(state_count): a, h = states[state_index] # adopt transitions transitions[adopt][state_index, state_mapping[1, 0]] = alpha transitions[adopt][state_index, state_mapping[0, 1]] = 1 - alpha # adopt rewards reward_honest[adopt][state_index, state_mapping[1, 0]] = h reward_honest[adopt][state_index, state_mapping[0, 1]] = h # override if a > h: transitions[override][state_index, state_mapping[a-h, 0]] = alpha reward_selfish[override][state_index, state_mapping[a-h, 0]] = h+1 transitions[override][state_index, state_mapping[a-h-1, 1]] = 1 - alpha reward_selfish[override][state_index, state_mapping[a-h-1, 1]] = h+1 else: transitions[override][state_index, 0] = 1 reward_honest[override][state_index, 0] = 10000 # wait transitions if (a < T) and (h < T): transitions[wait][state_index, state_mapping[a+1, h]] = alpha transitions[wait][state_index, state_mapping[a, h+1]] = 1 - alpha else: transitions[wait][state_index, 0] = 1 reward_honest[wait][state_index, 0] = 10000 low = 0; high = 1 while (high - low) > epsilon / 8: rho = (low + high) / 2 print(low, high, rho) total_reward = [] for i in 
range(action_count): total_reward.append((1-rho)*reward_selfish[i] - rho*reward_honest[i]) rvi = mdptoolbox.mdp.RelativeValueIteration(transitions, total_reward, epsilon/8) rvi.run() if rvi.average_reward > 0: low = rho else: high = rho policy = rvi.policy print('alpha: ', alpha, 'lower bound reward:', rho) f, ax = plt.subplots(figsize=(6,6)) ax.imshow(np.reshape(policy, (9,9))) ax = sns.heatmap(np.reshape(policy, (9,9)), annot=True, cmap='viridis') cb = ax.collections[-1].colorbar cb.remove() plt.xticks([]) plt.yticks([]) plt.show() ```
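The `while (high - low) > epsilon / 8` loop above is a bisection search: for each candidate `rho` it solves the MDP and narrows the bracket depending on the sign of the optimal average reward. The same search pattern, sketched with a stand-in reward function (the real notebook calls `RelativeValueIteration` here; the monotone-decreasing stand-in below is an assumption for illustration):

```python
def bisect_threshold(avg_reward, low=0.0, high=1.0, epsilon=1e-5):
    """Find the rho where avg_reward(rho) crosses zero, to within epsilon.
    Assumes avg_reward is decreasing in rho, as (1-rho)*selfish - rho*honest is."""
    while high - low > epsilon:
        rho = (low + high) / 2
        if avg_reward(rho) > 0:
            low = rho    # still profitable at this rho: the threshold is higher
        else:
            high = rho   # unprofitable: the threshold is lower
    return (low + high) / 2

# Stand-in for "solve the MDP and return its optimal average reward".
rho_star = bisect_threshold(lambda rho: 0.37 - rho)
```

Each iteration halves the bracket, so the search needs only about log2(1/epsilon) MDP solves rather than a dense sweep over rho.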
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Grid-Search" data-toc-modified-id="Grid-Search-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Grid Search</a></span></li><li><span><a href="#Best-params-result" data-toc-modified-id="Best-params-result-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Best params result</a></span></li></ul></div> ``` nb_name = 'Classification_svm_nb_baseline-augmentation' from classification_utils import * from clustering_utils import * from eda_utils import * from nn_utils_keras import * from feature_engineering_utils import * from data_utils import * import warnings warnings.filterwarnings("ignore") train, test = load_data() # train, upsampling_info = upsampling_train(train) train_text, train_label = train_augmentation(train, select_comb=[['text'], ['reply', 'reference_one']]) test_text, test_label = test['text'], test['label'] # test_text = test_text.apply(lambda x: normal_string(x)) # train_text = train_text.apply(lambda x: normal_string(x)) #################################### ### label mapper #################################### labels = sorted(train_label.unique()) label_mapper = dict(zip(labels, range(len(labels)))) train_label = train_label.map(label_mapper) test_label = test_label.map(label_mapper) y_train = train_label y_test = test_label print(train_text.shape) print(test_text.shape) print(train_label.shape) print(test_label.shape) set(train_label.tolist()) ``` # Grid Search ``` metric = "f1_macro" text_clf = Pipeline([('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())]) parameters = {'tfidf__min_df': [1, 3, 5], 'tfidf__stop_words': [None, 'english'], 'tfidf__use_idf': [True, False], 'tfidf__binary': [True, False], 'clf__alpha': [0.2, 0.4, 0.6, 0.8, 1]} gs_clf = GridSearchCV(text_clf, scoring=metric, param_grid=parameters, cv=4) gs_clf = gs_clf.fit(train_text, y_train) for param_name in gs_clf.best_params_: print("{0}:\t{1}".format(param_name, 
gs_clf.best_params_[param_name])) print("best f1 score: {:.3f}".format(gs_clf.best_score_)) cv_results = pd.DataFrame(gs_clf.cv_results_) cv_results.to_excel(f"NB_cv_result_{nb_name}.xlsx") metric = "f1_macro" text_clf = Pipeline([('tfidf', TfidfVectorizer()), ('clf', LinearSVC())]) parameters = {'tfidf__min_df': [1, 3, 5], 'tfidf__stop_words': [None, 'english'], 'tfidf__use_idf': [True, False], 'tfidf__binary': [True, False], 'clf__penalty':['l2'], 'clf__C':[1,2,3]} gs_clf = GridSearchCV(text_clf, scoring=metric, param_grid=parameters, cv=4) gs_clf = gs_clf.fit(train_text, y_train) for param_name in gs_clf.best_params_: print("{0}:\t{1}".format(param_name, gs_clf.best_params_[param_name])) print("best f1 score: {:.3f}".format(gs_clf.best_score_)) cv_results = pd.DataFrame(gs_clf.cv_results_) cv_results.to_excel(f"SVC_cv_result_{nb_name}.xlsx") ``` # Best params result ``` X_train, X_test, word_to_idx, tfidf_vect = tfidf_vectorizer(train_text, test_text, stop_words=None, binary=True, min_df=1) # tfidf_vectorizer(train_text, test_text, min_df=2, max_df=100) # X_train, transform_mapper = dimension_reduction(X_train, out_dim=500) # X_test = transform_mapper.transform(X_test) print('X_train.shape', X_train.shape) print('X_test.shape', X_test.shape) clf = LinearSVC(penalty="l2", multi_class='ovr', C=3.0, dual=True) clf.fit(X_train, y_train) pred = clf.predict(X_test) classification_report = evaluation_report(y_test, pred, labels=labels) # roc_auc(y_test, pred)
########################################## ## CV shows the stable result ########################################## # cv_metrics = ["precision_macro","accuracy", "f1_macro"] # cv = cross_validate(clf, X_train, y_train,scoring=cv_metrics, cv=4, return_train_score=True) # cv = pd.DataFrame(cv) # f1 = cv['test_f1_macro'].mean() # print("cv average f1 macro: ", f1) # cv ```
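`GridSearchCV` above fits one model per parameter combination per CV fold, so the grid size is the product of the option counts — easy to underestimate. A small sketch of how such a grid expands (mirroring what sklearn's `ParameterGrid` does internally), using the Naive Bayes grid from this notebook:

```python
from itertools import product

def expand_grid(parameters):
    """Yield one dict per parameter combination, like sklearn's ParameterGrid."""
    keys = sorted(parameters)
    for values in product(*(parameters[k] for k in keys)):
        yield dict(zip(keys, values))

nb_grid = {'tfidf__min_df': [1, 3, 5],
           'tfidf__stop_words': [None, 'english'],
           'tfidf__use_idf': [True, False],
           'tfidf__binary': [True, False],
           'clf__alpha': [0.2, 0.4, 0.6, 0.8, 1]}

combos = list(expand_grid(nb_grid))  # 3*2*2*2*5 = 120 combinations
```

With `cv=4`, those 120 combinations mean 480 pipeline fits for the NB search alone, which is worth knowing before adding more options to the grid.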
### Image Classification - Conv Nets - PyTorch > Classifying whether an image is a `bee` or an `ant` using `ConvNets` in PyTorch ### Imports ``` import cv2 import matplotlib.pyplot as plt import numpy as np from sklearn.model_selection import train_test_split import torch from torch import nn import torch.nn.functional as F import os ``` ### Data Preparation ``` class Insect: BEE = 'BEE' ANT = "ANT" BEES_IMAGES_PATH = 'data/colored/rgb/bees' ANTS_IMAGES_PATH = 'data/colored/rgb/ants' classes = {'bee': 0, 'ant' : 1} classes = dict([(i, j) for (j, i) in classes.items()]) classes os.path.exists(Insect.BEES_IMAGES_PATH) insects = [] for path in os.listdir(Insect.BEES_IMAGES_PATH): img_path = os.path.join(Insect.BEES_IMAGES_PATH, path) image = np.array(cv2.imread(img_path, cv2.IMREAD_UNCHANGED), dtype='float32') image = image / 255 insects.append([image, 0]) for path in os.listdir(Insect.ANTS_IMAGES_PATH): img_path = os.path.join(Insect.ANTS_IMAGES_PATH, path) image = np.array(cv2.imread(img_path, cv2.IMREAD_UNCHANGED), dtype='float32') image = image / 255 insects.append([image, 1]) insects = np.array(insects) np.random.shuffle(insects) ``` ### Visualization ``` plt.imshow(insects[7][0], cmap="gray"), insects[7][0].shape ``` > Separating labels and features ``` X = np.array([insect[0] for insect in insects]) y = np.array([insect[1] for insect in insects]) X[0].shape ``` > Splitting the data into training and test sets. ``` X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33, test_size=.2) X_train.shape, y_train.shape, y_test.shape, X_test.shape ``` > Converting the data into `torch` tensors.
``` X_train = torch.from_numpy(X_train.astype('float32')) X_test = torch.from_numpy(X_test.astype('float32')) y_train = torch.Tensor(y_train) y_test = torch.Tensor(y_test) ``` ### Model Creation ``` class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=(3, 3)) self.conv2 = nn.Conv2d(32, 64, (3, 3)) self.conv3 = nn.Conv2d(64, 64, (3, 3)) self._to_linear = None # inferred from a dummy forward pass below self.x = torch.randn(3, 200, 200).view(-1, 3, 200, 200) self.conv(self.x) self.fc1 = nn.Linear(self._to_linear, 64) self.fc2 = nn.Linear(64, 2) def conv(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) if self._to_linear is None: self._to_linear = x.shape[1] * x.shape[2] * x.shape[3] return x def forward(self, x): x = self.conv(x) x = x.view(-1, self._to_linear) x = F.relu(self.fc1(x)) x = self.fc2(x) # final layer produces the two class logits return x net = Net() net optimizer = torch.optim.SGD(net.parameters(), lr=1e-3) loss_function = nn.CrossEntropyLoss() EPOCHS = 10 BATCH_SIZE = 5 for epoch in range(EPOCHS): print(f'Epochs: {epoch+1}/{EPOCHS}') for i in range(0, len(y_train), BATCH_SIZE): X_batch = X_train[i: i+BATCH_SIZE].view(-1, 3, 200, 200) y_batch = y_train[i: i+BATCH_SIZE].long() net.zero_grad() ## or you can say optimizer.zero_grad() outputs = net(X_batch) loss = loss_function(outputs, y_batch) loss.backward() optimizer.step() print("Loss", loss) ``` ### Evaluating the model ### Test set ``` total, correct = 0, 0 with torch.no_grad(): for i in range(len(X_test)): correct_label = y_test[i].long() # labels are class indices, not one-hot vectors prediction = torch.argmax(net(X_test[i].view(-1, 3, 200, 200))[0]) if prediction == correct_label: correct+=1 total +=1 print(f"Accuracy: {correct/total}") torch.argmax(net(X_test[1].view(-1, 3, 200, 200))), y_test[1] ``` ### Train set ``` total, correct = 0, 0 with torch.no_grad(): for i in range(len(X_train)): correct_label = y_train[i].long()
prediction = torch.argmax(net(X_train[i].view(-1, 3, 200, 200))[0]) if prediction == correct_label: correct+=1 total +=1 print(f"Accuracy: {correct/total}") ``` ### Making Predictions ``` plt.imshow(X_test[12]) plt.title(classes[torch.argmax(net(X_test[12].view(-1, 3, 200, 200))).item()].title(), fontsize=16) plt.show() fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(10, 10)) for row in ax: for col in row: col.imshow(X_test[2]) plt.show() ```
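The `Net` above discovers `_to_linear` by pushing a dummy tensor through the conv stack. The same number can be derived by hand: each 'valid' 3x3 convolution shrinks height/width by 2, and each 2x2 max-pool halves them (flooring). A sketch for the 200x200x3 input used in this notebook:

```python
def conv_stack_features(size=200, channels=(32, 64, 64), kernel=3, pool=2):
    """Flattened feature count after a stack of conv(kernel)+maxpool(pool) blocks
    with no padding, matching Net.conv above."""
    for _ in channels:
        size = (size - (kernel - 1)) // pool  # conv shrinks by k-1, pool floors the half
    return channels[-1] * size * size

n = conv_stack_features()  # 200 -> 99 -> 48 -> 23, then 64 * 23 * 23
```

Computing the size analytically is a useful cross-check on the dummy-forward trick, and it makes clear why changing the input resolution forces `fc1` to be rebuilt.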
# Data Pre-Processing When training any sort of model with a machine learning algorithm, a large dataset is first needed to train that model. Data can be anything that helps train the model. In this case, the data consists of images of people facing the camera head on, wearing or not wearing a face mask. # Data preparation Raw data is first collected. These are just images of people wearing/not wearing face masks. This is not enough by itself, as the data must then be divided into two groups, i.e. into 'with_mask' and 'without_mask'. # Categorisation and labeling Next the data must be categorised and labeled as such. ``` import cv2,os data_path='dataset' categories=os.listdir(data_path) labels=[i for i in range(len(categories))] label_dict=dict(zip(categories,labels)) # maps each category name to an integer label print(label_dict) print(categories) print(labels) ``` # Resizing and reshaping the data When feeding any sort of data into an algorithm, it is important that we normalise the data. When working with many images in a dataset, the images must be resized and reshaped so that they all share a fixed, common size. For the purpose of this project, each image was resized to 100 pixels by 100 pixels (the `img_size` set in the code below) and converted to greyscale.
It is also important to note that each image was added to the data array, and its corresponding label is added to the target array ``` img_size=100 data=[] target=[] for category in categories: folder_path=os.path.join(data_path,category) img_names=os.listdir(folder_path) for img_name in img_names: img_path=os.path.join(folder_path,img_name) img=cv2.imread(img_path) try: gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) resized=cv2.resize(gray,(img_size,img_size)) data.append(resized) target.append(label_dict[category]) except Exception as e: print('Exception:',e) ``` # Serialising the resulting pre processed data Now that the dataset has been preprocessed and sorted into arrays, it must now be serialised so it can be used in the training process. To serialise the data, numpy is used as it is capable of serialising arrays and deserialising them later on for use (also known as flattening and unflattening) ``` import numpy as np data=np.array(data)/255.0 data=np.reshape(data,(data.shape[0],img_size,img_size,1)) target=np.array(target) from keras.utils import np_utils new_target=np_utils.to_categorical(target) np.save('data',data) np.save('target',new_target) ``` # Finishing up Now that the data has been preprocessed and serialised, it is now ready to be used in the training process. It is important to note that this must be done any time data needs to be added or removed from the dataset so it is not uncommon to have to use this multiple times throughout the development of the project
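The serialisation cell above scales pixels to [0, 1], adds a channel axis, and one-hot encodes the labels with `np_utils.to_categorical`. A self-contained NumPy sketch of those steps, with the one-hot encoding written out explicitly as an identity-matrix lookup:

```python
import numpy as np

def preprocess(images, labels, img_size, n_classes):
    """Scale images to [0,1], add a channel dimension, one-hot encode labels."""
    data = np.array(images, dtype=float) / 255.0
    data = data.reshape(data.shape[0], img_size, img_size, 1)  # (N, H, W, channels)
    target = np.eye(n_classes)[np.array(labels)]               # one-hot rows
    return data, target

# Tiny made-up example: one all-white and one all-black 4x4 greyscale image.
imgs = [np.full((4, 4), 255), np.zeros((4, 4))]
data, target = preprocess(imgs, [0, 1], img_size=4, n_classes=2)
```

The trailing channel dimension is what lets the arrays feed directly into a Keras `Conv2D` input layer later, and the one-hot targets pair with a categorical cross-entropy loss.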
# Using Models as Layers in Another Model

In this notebook, we show how you can use Keras models as layers within a larger model and still perform pruning on that model.

```
# Import required packages
import tensorflow as tf
import mann
from sklearn.metrics import confusion_matrix, classification_report

# Load the data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

# Convert images from grayscale to RGB
x_train = tf.image.grayscale_to_rgb(tf.Variable(x_train.reshape(-1, 28, 28, 1)))
x_test = tf.image.grayscale_to_rgb(tf.Variable(x_test.reshape(-1, 28, 28, 1)))
```

## Model Creation

In the following cells, we create two models and put them together into a larger model. The first model, called the `preprocess_model`, takes in images, divides the pixel values by 255 to ensure all values are between 0 and 1, and resizes the images to a height and width of 40 pixels. It then performs training-data augmentation by randomly flipping some images across the y-axis, randomly rotating images, and randomly translating them.

The second model, called the `true_model`, contains the logic for performing prediction on images. It consists of blocks of convolutional layers followed by max pooling and dropout layers. The output of these blocks is flattened and passed through fully-connected layers to output predicted class probabilities.

These two models are combined in the `training_model` to be trained.
``` preprocess_model = tf.keras.models.Sequential() preprocess_model.add(tf.keras.layers.Rescaling(1./255)) preprocess_model.add(tf.keras.layers.Resizing(40, 40, input_shape = (None, None, 3))) preprocess_model.add(tf.keras.layers.RandomFlip('horizontal')) preprocess_model.add(tf.keras.layers.RandomRotation(0.1)) preprocess_model.add(tf.keras.layers.RandomTranslation(0.1, 0.1)) true_model = tf.keras.models.Sequential() true_model.add(mann.layers.MaskedConv2D(16, padding = 'same', input_shape = (40, 40, 3))) true_model.add(mann.layers.MaskedConv2D(16, padding = 'same')) true_model.add(tf.keras.layers.MaxPool2D()) true_model.add(tf.keras.layers.Dropout(0.2)) true_model.add(mann.layers.MaskedConv2D(32, padding = 'same', activation = 'relu')) true_model.add(mann.layers.MaskedConv2D(32, padding = 'same', activation = 'relu')) true_model.add(tf.keras.layers.MaxPool2D()) true_model.add(tf.keras.layers.Dropout(0.2)) true_model.add(mann.layers.MaskedConv2D(64, padding = 'same', activation = 'relu')) true_model.add(mann.layers.MaskedConv2D(64, padding = 'same', activation = 'relu')) true_model.add(tf.keras.layers.MaxPool2D()) true_model.add(tf.keras.layers.Dropout(0.2)) true_model.add(tf.keras.layers.Flatten()) true_model.add(mann.layers.MaskedDense(256, activation = 'relu')) true_model.add(mann.layers.MaskedDense(256, activation = 'relu')) true_model.add(mann.layers.MaskedDense(10, activation = 'softmax')) training_input = tf.keras.layers.Input((None, None, 3)) training_x = preprocess_model(training_input) training_output = true_model(training_x) training_model = tf.keras.models.Model( training_input, training_output ) training_model.compile( loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'], optimizer = 'adam' ) training_model.summary() ``` ## Model Training In this cell, we create the `ActiveSparsification` object to continually sparsify the model as it trains, and train the model. 
``` callback = mann.utils.ActiveSparsification( 0.80, sparsification_rate = 5 ) training_model.fit( x_train, y_train, epochs = 200, batch_size = 512, validation_split = 0.2, callbacks = [callback] ) ``` ## Convert the model to not have masking layers In the following cell, we configure the model to remove masking layers and replace them with non-masking native TensorFlow layers. We then perform prediction on the resulting model and present the results. ``` model = mann.utils.remove_layer_masks(training_model) preds = model.predict(x_test).argmax(axis = 1) print(confusion_matrix(y_test, preds)) print(classification_report(y_test, preds)) ``` ## Save only the model that performs prediction Lastly, save only the part of the model that performs prediction ``` model.layers[2].save('ModelLayer.h5') ```
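The `ActiveSparsification` callback above targets 80% of weights pruned. The sparsity actually achieved by any weight matrix can be checked directly; a small illustrative sketch with a hand-made array (not tied to the `mann` API):

```python
import numpy as np

# Toy weight matrix after pruning: three of four entries zeroed out
w = np.array([[0.0, 0.5],
              [0.0, 0.0]])

sparsity = float(np.mean(w == 0))  # fraction of zeroed weights
print(sparsity)                    # 0.75
```

Applying the same one-liner to each layer's kernel after `remove_layer_masks` is a quick way to confirm the sparsification target was met.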
# Data Visualization Using Plotly ##### To visualize plots in this notebook please click [here](https://nbviewer.jupyter.org/github/hirenhk15/ga-code-alongs/blob/main/1_Introduction_to_plotly/notebook/Plotly_data_visualization.ipynb) ``` # Import packages import cufflinks import plotly import plotly.graph_objects as go from plotly.offline import iplot from plotly.offline import init_notebook_mode import pandas as pd import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') # Sample graph using plotly fig = go.Figure(data=go.Bar(y=[2, 3, 1])) fig.write_html('first_figure.html', auto_open=True) ``` # Loading Data ``` timesData = pd.read_csv('../data/timesData.csv') timesData.head() ``` # Bar plot using dataframe ## Citations of top 4 universities in 2015 ``` # Settings to enable dataframe plotting cufflinks.go_offline(connected=True) init_notebook_mode(connected=True) # Creating the dataframe df = timesData[timesData.year == 2015].iloc[:4,] df2 = df[['citations','university_name']].set_index('university_name') # Plotting the bar plot df2.iplot(kind='bar', xTitle='University', yTitle='Citation Score', title='University Citations') ``` # Bar plot using plotly graph objects ### Citations of top 4 universities in 2015 ``` df_2015 = timesData[timesData.year == 2015].iloc[:4,:] trace = go.Bar( x=df_2015.university_name, y=df_2015.citations, name='Citations', text=df_2015.country ) fig = go.Figure(data=[trace]) fig.show() #iplot(fig) #plotly.offline.plot(fig, filename='bar.html') ``` ### Citations and Research score of top 4 universities in 2015 - Vertical ``` trace1 = go.Bar( x=df_2015.university_name, y=df_2015.citations, name='Citations', marker=dict(color='rgba(26, 118, 255, 0.5)', line=dict(color='rgb(0, 0, 0)', width=1.5)), text=df_2015.country ) trace2 = go.Bar( x=df_2015.university_name, y=df_2015.research, name='Research', marker=dict(color='rgba(249, 6, 6, 0.5)', line=dict(color='rgb(0, 0, 0)', width=1.5)), text=df_2015.country ) data = 
[trace1, trace2] fig = go.Figure(data=data) fig.show() ``` ### Citations and Research score of top 4 universities in 2015 - Horizontal ``` trace1 = go.Bar( y=df_2015.university_name, x=df_2015.citations, name='Citations', orientation='h', marker=dict(color='rgba(26, 118, 255, 0.5)', line=dict(color='rgb(0, 0, 0)', width=1.5)), text=df_2015.country ) trace2 = go.Bar( y=df_2015.university_name, x=df_2015.research, name='Research', orientation='h', marker=dict(color='rgba(249, 6, 6, 0.5)', line=dict(color='rgb(0, 0, 0)', width=1.5)), text=df_2015.country ) data = [trace1, trace2] fig = go.Figure(data=data) fig.show() ``` # Scatter Plot ### Citation vs world rank of top 100 universities with 2014, 2015 and 2016 years ``` df_2014 = timesData[timesData.year==2014].iloc[:100,] df_2015 = timesData[timesData.year==2015].iloc[:100,] df_2016 = timesData[timesData.year==2016].iloc[:100,] trace1 = go.Scatter( x=df_2014.world_rank, y=df_2014.citations, name='2014', mode='markers', marker=dict(color='rgba(255, 128, 255, 0.8)'), text=df_2014.university_name ) trace2 = go.Scatter( x=df_2015.world_rank, y=df_2015.citations, name='2015', mode='markers', marker=dict(color='rgba(255, 128, 2, 0.8)'), text=df_2015.university_name ) trace3 = go.Scatter( x=df_2016.world_rank, y=df_2016.citations, name='2016', mode='markers', marker=dict(color='rgba(0, 255, 200, 0.8)'), text=df_2016.university_name ) data = [trace1, trace2, trace3] layout = dict(title='Citation vs world rank of top 100 universities with 2014, 2015 and 2016 years', xaxis=dict(title='World Rank'), yaxis=dict(title='Citation'), template='plotly_dark' ) fig = go.Figure(data=data, layout=layout) fig.show() ``` # Lineplot ### Citation vs world rank of top 100 universities with 2014, 2015 and 2016 years ``` df_2014 = timesData[timesData.year==2014].iloc[:100,] df_2015 = timesData[timesData.year==2015].iloc[:100,] df_2016 = timesData[timesData.year==2016].iloc[:100,] trace1 = go.Scatter( x=df_2014.world_rank, y=df_2014.citations, 
name='2014', mode='lines', marker=dict(color='rgba(255, 128, 255, 0.8)'), text=df_2014.university_name ) trace2 = go.Scatter( x=df_2015.world_rank, y=df_2015.citations, name='2015', mode='lines', marker=dict(color='rgba(255, 128, 2, 0.8)'), text=df_2015.university_name ) trace3 = go.Scatter( x=df_2016.world_rank, y=df_2016.citations, name='2016', mode='lines', marker=dict(color='rgba(0, 255, 200, 0.8)'), text=df_2016.university_name ) data = [trace1, trace2, trace3] layout = dict(title='Citation vs world rank of top 100 universities with 2014, 2015 and 2016 years', xaxis=dict(title='World Rank'), yaxis=dict(title='Citation'), template='plotly_dark' ) fig = go.Figure(data=data, layout=layout) fig.show() ``` # Bubble Plot ### University world rank (first 20) vs teaching score with number of students(size) and international score (color) in 2015 ``` df_2015 = timesData[timesData.year == 2015].iloc[:20,] # Converting following columns to float num_students_size = df_2015.num_students.apply(lambda x: float(x.replace(',', '.'))).tolist() international_color = df_2015.international.astype('float').tolist() # Plotting bubble plot trace = go.Scatter( x=df_2015.world_rank, y=df_2015.teaching, mode='markers', marker=dict(color=international_color, size=num_students_size, showscale=True), text= df_2015.university_name ) fig = go.Figure(data=[trace]) fig.show() ``` # Histogram ### Histogram of students-staff ratio in 2011 and 2016 year ``` x2011 = timesData.student_staff_ratio[timesData.year == 2011] x2016 = timesData.student_staff_ratio[timesData.year == 2016] # Creating histogram trace trace1 = go.Histogram( x=x2011, opacity=0.75, name='2011', marker=dict(color='rgba(171, 50, 96, 0.6)') ) trace2 = go.Histogram( x=x2016, opacity=0.75, name='2016', marker=dict(color='rgba(12, 50, 196, 0.6)') ) data = [trace1, trace2] layout = go.Layout( barmode='overlay', title='Student-Staff Ratio in 2011 and 2016', xaxis=dict(title='Student-Staff Ratio'), yaxis=dict(title='Count') ) fig = 
go.Figure(data=data, layout=layout) fig.show() ```
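In the bubble plot above, `num_students` uses a comma as decimal separator and has to be converted to float before it can drive the marker size. The conversion can be checked in isolation (the sample strings here are illustrative, not taken from the dataset):

```python
# Comma as decimal separator, e.g. '2,243' meaning 2.243 (thousands of students)
raw = ['2,243', '11,074']
sizes = [float(x.replace(',', '.')) for x in raw]
print(sizes)  # [2.243, 11.074]
```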
``` import pathlib import lzma import re import os import datetime import copy import functools import numpy as np import pandas as pd # Makes it so any changes in pymedphys is automatically # propagated into the notebook without needing a kernel reset. from IPython.lib.deepreload import reload %load_ext autoreload %autoreload 2 import pymedphys._icom.extract import pymedphys.mudensity import pymedphys patient_id = '008566' patients_dir = pathlib.Path(r'\\physics-server\iComLogFiles\patients') patient_data_paths = list(patients_dir.glob(f'{patient_id}_*/*.xz')) patient_data_paths patient_data_path = patient_data_paths[0] with lzma.open(patient_data_path, 'r') as f: patient_data = f.read() DATE_PATTERN = re.compile(rb"\d\d\d\d-\d\d-\d\d\d\d:\d\d:\d\d.") def get_data_points(data): date_index = [m.span() for m in DATE_PATTERN.finditer(data)] start_points = [span[0] - 8 for span in date_index] end_points = start_points[1::] + [None] data_points = [data[start:end] for start, end in zip(start_points, end_points)] return data_points patient_data_list = get_data_points(patient_data) len(patient_data_list) @functools.lru_cache() def get_coll_regex(label, number): header = rb"0\xb8\x00DS\x00R.\x00\x00\x00" + label + b"\n" item = rb"0\x1c\x01DS\x00R.\x00\x00\x00(-?\d+\.\d+)" regex = re.compile(header + b"\n".join([item] * number)) return regex def extract_coll(data, label, number): regex = get_coll_regex(label, number) match = regex.search(data) span = match.span() data = data[0 : span[0]] + data[span[1] + 1 : :] items = np.array([float(item) for item in match.groups()]) return data, items def get_delivery_data_items(single_icom_stream): shrunk_stream, mu = pymedphys._icom.extract.extract(single_icom_stream, "Delivery MU") shrunk_stream, gantry = pymedphys._icom.extract.extract(shrunk_stream, "Gantry") shrunk_stream, collimator = pymedphys._icom.extract.extract(shrunk_stream, "Collimator") shrunk_stream, mlc = extract_coll(shrunk_stream, b"MLCX", 160) mlc = 
mlc.reshape((80,2)) mlc = np.fliplr(np.flipud(mlc * 10)) mlc[:,1] = -mlc[:,1] mlc = np.round(mlc,10) # shrunk_stream, result["ASYMX"] = extract_coll(shrunk_stream, b"ASYMX", 2) shrunk_stream, jaw = extract_coll(shrunk_stream, b"ASYMY", 2) jaw = np.round(np.array(jaw) * 10, 10) jaw = np.flipud(jaw) return mu, gantry, collimator, mlc, jaw mu, gantry, collimator, mlc, jaw = get_delivery_data_items(patient_data_list[250]) gantry collimator mu jaw len(patient_data_list) delivery_raw = [ get_delivery_data_items(single_icom_stream) for single_icom_stream in patient_data_list ] mu = np.array([item[0] for item in delivery_raw]) diff_mu = np.concatenate([[0], np.diff(mu)]) diff_mu[diff_mu<0] = 0 mu = np.cumsum(diff_mu) gantry = np.array([item[1] for item in delivery_raw]) collimator = np.array([item[2] for item in delivery_raw]) mlc = np.array([item[3] for item in delivery_raw]) jaw = np.array([item[4] for item in delivery_raw]) icom_delivery = pymedphys.Delivery(mu, gantry, collimator, mlc, jaw) icom_delivery = icom_delivery._filter_cps() monaco_directory = pathlib.Path(r'\\monacoda\FocalData\RCCC\1~Clinical') tel_path = list(monaco_directory.glob(f'*~{patient_id}/plan/*/tel.1'))[-1] tel_path GRID = pymedphys.mudensity.grid() delivery_tel = pymedphys.Delivery.from_monaco(tel_path) mudensity_tel = delivery_tel.mudensity() pymedphys.mudensity.display(GRID, mudensity_tel) mudensity_icom = icom_delivery.mudensity() pymedphys.mudensity.display(GRID, mudensity_icom) icom_delivery.mu[16] delivery_tel.mu[1] delivery_tel.mlc[1] icom_delivery.mlc[16] delivery_tel.jaw[1] icom_delivery.jaw[16] # new_mlc = np.fliplr(np.flipud(np.array(icom_delivery.mlc[16]) * 10)) # new_mlc[:,1] = -new_mlc[:,1] # new_mlc ```
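The orientation and unit conversion applied to the iCom MLC array above (scaling by 10, presumably cm to mm, then reversing leaf order and banks and negating the second bank) can be illustrated on a toy 2x2 array in place of the real (80, 2) one:

```python
import numpy as np

# Toy stand-in for the (80, 2) MLC array returned by extract_coll
mlc = np.array([[1.0, 2.0],
                [3.0, 4.0]])

mlc = np.fliplr(np.flipud(mlc * 10))  # scale by 10, reverse rows and columns
mlc[:, 1] = -mlc[:, 1]                # negate the second bank
# Result: [[40., -30.], [20., -10.]]
```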
``` #|hide #|skip ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab #|all_slow #|export from __future__ import annotations from fastai.basics import * from fastai.callback.progress import * from fastai.text.data import TensorText from fastai.tabular.all import TabularDataLoaders, Tabular from fastai.callback.hook import total_params #|hide from nbdev.showdoc import * #|default_exp callback.wandb ``` # Wandb > Integration with [Weights & Biases](https://docs.wandb.com/library/integrations/fastai) First thing first, you need to install wandb with ``` pip install wandb ``` Create a free account then run ``` wandb login ``` in your terminal. Follow the link to get an API token that you will need to paste, then you're all set! ``` #|export import wandb #|export class WandbCallback(Callback): "Saves model topology, losses & metrics" remove_on_fetch,order = True,Recorder.order+1 # Record if watch has been called previously (even in another instance) _wandb_watch_called = False def __init__(self, log:str=None, # What to log (can be `gradients`, `parameters`, `all` or None) log_preds:bool=True, # Whether to log model predictions on a `wandb.Table` log_preds_every_epoch:bool=False, # Whether to log predictions every epoch or at the end log_model:bool=False, # Whether to save the model checkpoint to a `wandb.Artifact` model_name:str=None, # The name of the `model_name` to save, overrides `SaveModelCallback` log_dataset:bool=False, # Whether to log the dataset to a `wandb.Artifact` dataset_name:str=None, # A name to log the dataset with valid_dl:TfmdDL=None, # If `log_preds=True`, then the samples will be drawn from `valid_dl` n_preds:int=36, # How many samples to log predictions seed:int=12345, # The seed of the samples drawn reorder=True): store_attr() def after_create(self): # log model if self.log_model: if not hasattr(self, 'save_model'): # does not have the SaveModelCallback self.learn.add_cb(SaveModelCallback(fname=ifnone(self.model_name, 'model'))) 
else: # override SaveModelCallback if self.model_name is not None: self.save_model.fname = self.model_name def before_fit(self): "Call watch method to log model topology, gradients & weights" # Check if wandb.init has been called if wandb.run is None: raise ValueError('You must call wandb.init() before WandbCallback()') # W&B log step self._wandb_step = wandb.run.step - 1 # -1 except if the run has previously logged data (incremented at each batch) self._wandb_epoch = 0 if not(wandb.run.step) else math.ceil(wandb.run.summary['epoch']) # continue to next epoch self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") and rank_distrib()==0 if not self.run: return # Log config parameters log_config = self.learn.gather_args() _format_config(log_config) try: wandb.config.update(log_config, allow_val_change=True) except Exception as e: print(f'WandbCallback could not log config parameters -> {e}') if not WandbCallback._wandb_watch_called: WandbCallback._wandb_watch_called = True # Logs model topology and optionally gradients and weights if self.log is not None: wandb.watch(self.learn.model, log=self.log) # log dataset assert isinstance(self.log_dataset, (str, Path, bool)), 'log_dataset must be a path or a boolean' if self.log_dataset is True: if Path(self.dls.path) == Path('.'): print('WandbCallback could not retrieve the dataset path, please provide it explicitly to "log_dataset"') self.log_dataset = False else: self.log_dataset = self.dls.path if self.log_dataset: self.log_dataset = Path(self.log_dataset) assert self.log_dataset.is_dir(), f'log_dataset must be a valid directory: {self.log_dataset}' metadata = {'path relative to learner': os.path.relpath(self.log_dataset, self.learn.path)} log_dataset(path=self.log_dataset, name=self.dataset_name, metadata=metadata) if self.log_preds: try: if not self.valid_dl: #Initializes the batch watched wandbRandom = random.Random(self.seed) # For repeatability self.n_preds = min(self.n_preds, 
len(self.dls.valid_ds)) idxs = wandbRandom.sample(range(len(self.dls.valid_ds)), self.n_preds) if isinstance(self.dls, TabularDataLoaders): test_items = getattr(self.dls.valid_ds.items, 'iloc', self.dls.valid_ds.items)[idxs] self.valid_dl = self.dls.test_dl(test_items, with_labels=True, process=False) else: test_items = [getattr(self.dls.valid_ds.items, 'iloc', self.dls.valid_ds.items)[i] for i in idxs] self.valid_dl = self.dls.test_dl(test_items, with_labels=True) self.learn.add_cb(FetchPredsCallback(dl=self.valid_dl, with_input=True, with_decoded=True, reorder=self.reorder)) except Exception as e: self.log_preds = False print(f'WandbCallback was not able to prepare a DataLoader for logging prediction samples -> {e}') def before_batch(self): self.ti_batch = time.perf_counter() def after_batch(self): "Log hyper-parameters and training loss" if self.training: batch_time = time.perf_counter() - self.ti_batch self._wandb_step += 1 self._wandb_epoch += 1/self.n_iter hypers = {f'{k}_{i}':v for i,h in enumerate(self.opt.hypers) for k,v in h.items()} wandb.log({'epoch': self._wandb_epoch, 'train_loss': self.smooth_loss, 'raw_loss': self.loss, **hypers}, step=self._wandb_step) wandb.log({'train_samples_per_sec': len(self.xb[0]) / batch_time}, step=self._wandb_step) def log_predictions(self): try: inp,preds,targs,out = self.learn.fetch_preds.preds b = tuplify(inp) + tuplify(targs) x,y,its,outs = self.valid_dl.show_results(b, out, show=False, max_n=self.n_preds) wandb.log(wandb_process(x, y, its, outs, preds), step=self._wandb_step) except Exception as e: self.log_preds = False self.remove_cb(FetchPredsCallback) print(f'WandbCallback was not able to get prediction samples -> {e}') def after_epoch(self): "Log validation loss and custom metrics & log prediction samples" # Correct any epoch rounding error and overwrite value self._wandb_epoch = round(self._wandb_epoch) if self.log_preds and self.log_preds_every_epoch: self.log_predictions() wandb.log({'epoch': 
self._wandb_epoch}, step=self._wandb_step) wandb.log({n:s for n,s in zip(self.recorder.metric_names, self.recorder.log) if n not in ['train_loss', 'epoch', 'time']}, step=self._wandb_step) def after_fit(self): if self.log_preds and not self.log_preds_every_epoch: self.log_predictions() if self.log_model: if self.save_model.last_saved_path is None: print('WandbCallback could not retrieve a model to upload') else: metadata = {n:s for n,s in zip(self.recorder.metric_names, self.recorder.log) if n not in ['train_loss', 'epoch', 'time']} log_model(self.save_model.last_saved_path, name=self.save_model.fname, metadata=metadata) self.run = True if self.log_preds: self.remove_cb(FetchPredsCallback) wandb.log({}) # ensure sync of last step self._wandb_step += 1 ``` Optionally logs weights and or gradients depending on `log` (can be "gradients", "parameters", "all" or None), sample predictions if ` log_preds=True` that will come from `valid_dl` or a random sample of the validation set (determined by `seed`). `n_preds` are logged in this case. If used in combination with `SaveModelCallback`, the best model is saved as well (can be deactivated with `log_model=False`). Datasets can also be tracked: * if `log_dataset` is `True`, tracked folder is retrieved from `learn.dls.path` * `log_dataset` can explicitly be set to the folder to track * the name of the dataset can explicitly be given through `dataset_name`, otherwise it is set to the folder name * *Note: the subfolder "models" is always ignored* For custom scenarios, you can also manually use functions `log_dataset` and `log_model` to respectively log your own datasets and models. 
``` #|export @patch def gather_args(self:Learner): "Gather config parameters accessible to the learner" # args stored by `store_attr` cb_args = {f'{cb}':getattr(cb,'__stored_args__',True) for cb in self.cbs} args = {'Learner':self, **cb_args} # input dimensions try: n_inp = self.dls.train.n_inp args['n_inp'] = n_inp xb = self.dls.valid.one_batch()[:n_inp] args.update({f'input {n+1} dim {i+1}':d for n in range(n_inp) for i,d in enumerate(list(detuplify(xb[n]).shape))}) except: print(f'Could not gather input dimensions') # other useful information with ignore_exceptions(): args['batch size'] = self.dls.bs args['batch per epoch'] = len(self.dls.train) args['model parameters'] = total_params(self.model)[0] args['device'] = self.dls.device.type args['frozen'] = bool(self.opt.frozen_idx) args['frozen idx'] = self.opt.frozen_idx args['dataset.tfms'] = f'{self.dls.dataset.tfms}' args['dls.after_item'] = f'{self.dls.after_item}' args['dls.before_batch'] = f'{self.dls.before_batch}' args['dls.after_batch'] = f'{self.dls.after_batch}' return args #|export def _make_plt(img): "Make plot to image resolution" # from https://stackoverflow.com/a/13714915 my_dpi = 100 fig = plt.figure(frameon=False, dpi=my_dpi) h, w = img.shape[:2] fig.set_size_inches(w / my_dpi, h / my_dpi) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) return fig, ax #|export def _format_config_value(v): if isinstance(v, list): return [_format_config_value(item) for item in v] elif hasattr(v, '__stored_args__'): return {**_format_config(v.__stored_args__), '_name': v} return v #|export def _format_config(config): "Format config parameters before logging them" for k,v in config.items(): if isinstance(v, dict): config[k] = _format_config(v) else: config[k] = _format_config_value(v) return config #|export def _format_metadata(metadata): "Format metadata associated to artifacts" for k,v in metadata.items(): metadata[k] = str(v) #|export def log_dataset(path, name=None, metadata={}, 
description='raw dataset'): "Log dataset folder" # Check if wandb.init has been called in case datasets are logged manually if wandb.run is None: raise ValueError('You must call wandb.init() before log_dataset()') path = Path(path) if not path.is_dir(): raise f'path must be a valid directory: {path}' name = ifnone(name, path.name) _format_metadata(metadata) artifact_dataset = wandb.Artifact(name=name, type='dataset', metadata=metadata, description=description) # log everything except "models" folder for p in path.ls(): if p.is_dir(): if p.name != 'models': artifact_dataset.add_dir(str(p.resolve()), name=p.name) else: artifact_dataset.add_file(str(p.resolve())) wandb.run.use_artifact(artifact_dataset) #|export def log_model(path, name=None, metadata={}, description='trained model'): "Log model file" if wandb.run is None: raise ValueError('You must call wandb.init() before log_model()') path = Path(path) if not path.is_file(): raise f'path must be a valid file: {path}' name = ifnone(name, f'run-{wandb.run.id}-model') _format_metadata(metadata) artifact_model = wandb.Artifact(name=name, type='model', metadata=metadata, description=description) with artifact_model.new_file(name, mode='wb') as fa: fa.write(path.read_bytes()) wandb.run.log_artifact(artifact_model) #|export @typedispatch def wandb_process(x:TensorImage, y, samples, outs, preds): "Process `sample` and `out` depending on the type of `x/y`" res_input, res_pred, res_label = [],[],[] for s,o in zip(samples, outs): img = s[0].permute(1,2,0) res_input.append(wandb.Image(img, caption='Input_data')) for t, capt, res in ((o[0], "Prediction", res_pred), (s[1], "Ground_Truth", res_label)): fig, ax = _make_plt(img) # Superimpose label or prediction to input image ax = img.show(ctx=ax) ax = t.show(ctx=ax) res.append(wandb.Image(fig, caption=capt)) plt.close(fig) return {"Inputs":res_input, "Predictions":res_pred, "Ground_Truth":res_label} #export def _unlist(l): "get element of lists of lenght 1" if isinstance(l, 
(list, tuple)): if len(l) == 1: return l[0] else: return l #|export @typedispatch def wandb_process(x:TensorImage, y:(TensorCategory,TensorMultiCategory), samples, outs, preds): table = wandb.Table(columns=["Input image", "Ground_Truth", "Predictions"]) for (image, label), pred_label in zip(samples,outs): table.add_data(wandb.Image(image.permute(1,2,0)), label, _unlist(pred_label)) return {"Prediction_Samples": table} #|export @typedispatch def wandb_process(x:TensorImage, y:TensorMask, samples, outs, preds): res = [] codes = getattr(outs[0][0], 'codes', None) if codes is not None: class_labels = [{'name': name, 'id': id} for id, name in enumerate(codes)] else: class_labels = [{'name': i, 'id': i} for i in range(preds.shape[1])] table = wandb.Table(columns=["Input Image", "Ground_Truth", "Predictions"]) for (image, label), pred_label in zip(samples, outs): img = image.permute(1,2,0) table.add_data(wandb.Image(img), wandb.Image(img, masks={"Ground_Truth": {'mask_data': label.numpy().astype(np.uint8)}}, classes=class_labels), wandb.Image(img, masks={"Prediction": {'mask_data': pred_label[0].numpy().astype(np.uint8)}}, classes=class_labels) ) return {"Prediction_Samples": table} #|export @typedispatch def wandb_process(x:TensorText, y:(TensorCategory,TensorMultiCategory), samples, outs, preds): data = [[s[0], s[1], o[0]] for s,o in zip(samples,outs)] return {"Prediction_Samples": wandb.Table(data=data, columns=["Text", "Target", "Prediction"])} #|export @typedispatch def wandb_process(x:Tabular, y:Tabular, samples, outs, preds): df = x.all_cols for n in x.y_names: df[n+'_pred'] = y[n].values return {"Prediction_Samples": wandb.Table(dataframe=df)} ``` ## Example of use: Once your have defined your `Learner`, before you call to `fit` or `fit_one_cycle`, you need to initialize wandb: ``` import wandb wandb.init() ``` To use Weights & Biases without an account, you can call `wandb.init(anonymous='allow')`. 
Then you add the callback to your `learner` or call to `fit` methods, potentially with `SaveModelCallback` if you want to save the best model: ``` from fastai.callback.wandb import * # To log only during one training phase learn.fit(..., cbs=WandbCallback()) # To log continuously for all training phases learn = learner(..., cbs=WandbCallback()) ``` Datasets and models can be tracked through the callback or directly through `log_model` and `log_dataset` functions. For more details, refer to [W&B documentation](https://docs.wandb.com/library/integrations/fastai). ``` #|hide #|slow from fastai.vision.all import * import tempfile path = untar_data(URLs.MNIST_TINY) items = get_image_files(path) tds = Datasets(items, [PILImageBW.create, [parent_label, Categorize()]], splits=GrandparentSplitter()(items)) dls = tds.dataloaders(after_item=[ToTensor(), IntToFloatTensor()]) os.environ['WANDB_MODE'] = 'dryrun' # run offline with tempfile.TemporaryDirectory() as wandb_local_dir: wandb.init(anonymous='allow', dir=wandb_local_dir) learn = vision_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=WandbCallback(log_model=False)) learn.fit(1) # add more data from a new learner on same run learn = vision_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=WandbCallback(log_model=False)) learn.fit(1, lr=slice(0.005)) # save model learn = cnn_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=WandbCallback(log_model=True)) learn.fit(1, lr=slice(0.005)) # save model override name learn = cnn_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=[WandbCallback(log_model=True, model_name="good_name"), SaveModelCallback(fname="bad_name")]) learn.fit(1, lr=slice(0.005)) # finish writing files to temporary folder wandb.finish() #|export _all_ = ['wandb_process'] ``` ## Export - ``` #|hide from nbdev.export import * notebook2script() ```
``` import torch import torchvision import torch.nn as nn import torch.nn.functional as F from torchvision import transforms import torch.utils.data as data train_data_path = "./train" transform = transforms.Compose([ transforms.Resize((64, 64)), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) train_data = torchvision.datasets.ImageFolder(root=train_data_path, transform=transform) transform val_data_path = "./val/" val_data = torchvision.datasets.ImageFolder(root=val_data_path, transform=transform) test_data_path = "./test/" test_data = torchvision.datasets.ImageFolder(root=test_data_path, transform=transform) batch_size = 64 train_data_loader = data.DataLoader(train_data, batch_size=batch_size) val_data_loader = data.DataLoader(val_data, batch_size=batch_size) test_data_loader = data.DataLoader(test_data, batch_size=batch_size) image, label = next(iter(train_data_loader)) image.shape, label.shape class SimpleNet(nn.Module): def __init__(self): super(SimpleNet, self).__init__() self.fc1 = nn.Linear(12288, 84) self.fc2 = nn.Linear(84, 50) self.fc3 = nn.Linear(50, 2) def forward(self, x): x = x.view(-1, 12288) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x simplenet = SimpleNet() simplenet import torch.optim as optim optimizer = optim.Adam(simplenet.parameters(), lr=0.001) if torch.cuda.is_available(): device = torch.device("cuda") else: device = torch.device("cpu") # model.to(device) device def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"): print(device) for epoch in range(epochs): training_loss = 0.0 valid_loss = 0.0 model.train() for batch in train_loader: optimizer.zero_grad() input, target = batch input = input.to(device) target = target.to(device) output = model(input) loss = loss_fn(output, target) loss.backward() optimizer.step() training_loss += loss.data.item() training_loss /= len(train_loader) model.eval() num_correct = 0 num_examples = 0 
for batch in val_loader: input, target = batch input = input.to(device) output = model(input) target = target.to(device) loss = loss_fn(output, target) valid_loss += loss.data.item() correct = torch.eq(torch.max(F.softmax(output), dim=1)[1], target).view(-1) num_correct += torch.sum(correct).item() num_examples += correct.shape[0] valid_loss /= len(val_loader) print("Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}".format(epoch, training_loss, valid_loss, num_correct / num_examples)) device train(simplenet, optimizer, torch.nn.CrossEntropyLoss(),train_data_loader, val_data_loader, 20, device) torch.save(simplenet, "simplenet") img = test_data[0][0] img.numpy().shape img = Image.fromarray(img.numpy()) classes = train_data_loader.dataset.classes classes cat = "/Users/christian_acuna/workspace/beginners-pytorch-deep-learning/chapter2/test/cat/3156111_a9dba42579.jpg" fish = "/Users/christian_acuna/workspace/beginners-pytorch-deep-learning/chapter2/test/fish/35099721_fbc694fa1b.jpg" from PIL import Image def inference(filename, classes, model): labels = ['cat', 'fish'] img = Image.open(filename) img = transform(img) img = img.unsqueeze(0) prediction = model(img) prediction = prediction.argmax() print(classes[prediction]) inference(cat, classes, simplenet) inference(fish, classes, simplenet) net = torch.load("simplenet") net ```
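The accuracy computation in the validation loop (argmax of the softmax output compared against the targets) can be illustrated with plain numpy. Since softmax is monotonic, taking the argmax of the raw logits gives the same predicted classes:

```python
import numpy as np

logits = np.array([[2.0, 0.1],   # predicts class 0 (correct)
                   [0.2, 1.5],   # predicts class 1 (correct)
                   [0.4, 0.3]])  # predicts class 0 (wrong)
targets = np.array([0, 1, 1])

preds = logits.argmax(axis=1)                 # softmax not needed for argmax
accuracy = float((preds == targets).mean())   # 2 of 3 correct
```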
<figure> <IMG SRC="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/Fachhochschule_Südwestfalen_20xx_logo.svg/320px-Fachhochschule_Südwestfalen_20xx_logo.svg.png" WIDTH=250 ALIGN="right"> </figure>

# Skriptsprachen
### Summer Semester 2021
Prof. Dr. Heiner Giefers

# Visualizing Covid-19 Data

In this small exercise we visualize the worldwide Covid-19 figures. We want to download the data from a URL and load it into a *Pandas DataFrame*. This lets us preprocess the data and, finally, plot it.

```
import numpy as np
import csv
import requests
import io
import pandas as pd
import plotly.express as px
import geojson

url = "https://covid.ourworldindata.org/data/owid-covid-data.csv"
s = requests.get(url).content
df = pd.read_csv(io.StringIO(s.decode('utf-8')))
print(f"The total number of entries is {df.size}")
```

To get a first impression of the table, you can call a number of Pandas methods:

- `df.head(k)` shows the first `k` rows of the table. You will see that the data is sorted by country
- `df.info()` shows information about the columns of the table
- `df.describe()` prints some summary statistics of the data

```
df.head(10)
```

If you want to sort the data by a different column, use the `sort_values` method:

```
df_date = df.sort_values(by='date')
df_date
```

We now want to display the *7-day incidence*, a measure frequently used in Germany. It is not directly contained in the table, but we can compute it from the number of new cases per day. To normalize by population size, we use the column `new_cases_per_million`. This column may contain missing values, e.g. because no data was available for some countries on certain days. To *estimate* these missing values, we interpolate, i.e. we assume that within a *gap* the values continue linearly. So for the sequence `1, 2, 3, NaN, 7, 8, 9`, the `NaN` would be replaced by `5`.

There is one further problem: we sorted the table by date, so the rows of all countries are interleaved. When summing up the *7-day incidence*, however, only data from within a single country must be considered. We can achieve this with the `groupby` method; the subsequent `apply` receives a function that is applied to each group.

**Task:** Implement the function `berechne_inzidenz(x)`, which adds a column `Inzidenz` (incidence) to the DataFrame `x`. First, interpolate the column `new_cases_per_million` using the function `interpolate()`. Then compute the 7-day incidence. You can use the method `rolling(k)`, which provides a sliding *window* over `k` values of the column.

```
# Drop rows with invalid (negative) case counts
indexEntries = df_date[df_date['new_cases_per_million'] < 0 ].index
df_cleaned = df_date.drop(indexEntries)

def berechne_inzidenz(x):
    # YOUR CODE HERE
    raise NotImplementedError()
    return x

df_cleaned = df_cleaned.groupby('iso_code').apply(berechne_inzidenz)
```

Now we can display the incidence values. A world map works well for this; we can create one with the *Plotly* method `choropleth`:

```
fig = px.choropleth(df_cleaned, locations="iso_code",
                    color="Inzidenz",
                    #scope='europe',
                    range_color = [0,200],
                    hover_name="location",
                    animation_frame="date",
                    title = "Covid: worldwide 7-day incidence",
                    color_continuous_scale=px.colors.sequential.Jet)
fig["layout"].pop("updatemenus")
fig.show()
```

To filter out particular rows of a DataFrame, you can state conditions when selecting columns. For example, this is how we extract the rows for Germany:

```
df_de = df[df['iso_code']=='DEU']
print(f"The number of entries for Germany is {df_de.size}")
df_de.head()
```

**Task:** Plot the incidence values for Germany (`DEU`), Great Britain (`GBR`), and the USA (`USA`) in a single graph. Use the *Matplotlib* method `plot()` for this.

```
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
%matplotlib inline

fig, axes = plt.subplots(1,1,figsize=(16,8))
df_cleaned['dedate'] = pd.to_datetime(df_cleaned.date).dt.strftime('%d.%m.%Y')

# YOUR CODE HERE
raise NotImplementedError()

axes.xaxis.set_major_locator(MaxNLocator(15))
plt.xticks(rotation = 45)
plt.legend()
plt.show()
```

## Data source

The data used here comes from [_Our World in Data_](https://ourworldindata.org/) and was taken from the Git repository [https://github.com/owid/covid-19-data](https://github.com/owid/covid-19-data). Details about the dataset can be found in the following publication:

> Hasell, J., Mathieu, E., Beltekian, D. _et al._ A cross-country database of COVID-19 testing. _Sci Data_ **7**, 345 (2020). [https://doi.org/10.1038/s41597-020-00688-8](https://doi.org/10.1038/s41597-020-00688-8)
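As an aside, the `interpolate()` and `rolling(k)` building blocks from the exercise above can be tried on toy data (the values here are made up purely for illustration, not taken from the Covid table):

```python
import numpy as np
import pandas as pd

# A gap in a series: linear interpolation fills the NaN with 5
s = pd.Series([1, 2, 3, np.nan, 7, 8, 9], dtype=float)
filled = s.interpolate()
print(filled.tolist())  # → [1.0, 2.0, 3.0, 5.0, 7.0, 8.0, 9.0]

# A sliding 7-value window sum, as needed for a 7-day incidence
daily = pd.Series([10.0] * 10)
weekly_sum = daily.rolling(7).sum()
print(weekly_sum.iloc[-1])  # → 70.0
```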
# Load Necessary Libraries

```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```

# Hide warning messages in notebook

```
import warnings
warnings.filterwarnings('ignore')
```

# Load in data

```
mouse_drug_data = pd.read_csv('data/mouse_drug_data.csv')
mouse_drug_data.head()

clinical_trial_data = pd.read_csv('data/clinicaltrial_data.csv')
clinical_trial_data.head()
```

# Combine the data into a single dataset

```
mouse_drug_data.sort_values(by="Mouse ID", inplace=True)
clinical_trial_data.sort_values(by="Mouse ID", inplace=True)

mouse_drug_data = mouse_drug_data.reset_index(drop=True)
clinical_trial_data = clinical_trial_data.reset_index(drop=True)

df = pd.merge(mouse_drug_data, clinical_trial_data, on="Mouse ID", how="left")
df.head()
```

# Tumor Response to Treatment

Store the Mean Tumor Volume Data grouped by Drug and Timepoint

```
df_groupby = df.groupby(['Drug', 'Timepoint'])
mean_tumor_volume_df = df_groupby['Tumor Volume (mm3)'].mean()
mean_tumor_volume_df

tumor_response = pd.DataFrame(mean_tumor_volume_df).reset_index()
tumor_response.head()
```

Store the Standard Error of Tumor Volumes grouped by Drug and Timepoint

```
tumor_error = df_groupby['Tumor Volume (mm3)'].sem()
tumor_response_error = pd.DataFrame(tumor_error).reset_index()
tumor_response_error.head()
```

Minor Data Munging to Re-Format the Data Frames

```
reformat_df = tumor_response.pivot(index = 'Timepoint', columns = 'Drug', values = 'Tumor Volume (mm3)')
reformat_df.head()
```

# Generate the Plot (with Error Bars)

```
plt.figure(figsize=(12,8))
plt.title('Tumor Response to Treatment', fontdict={'fontweight':'bold', 'fontsize':18})

Time = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]

# (color, marker) for each drug; every series uses a dashed line
tumor_styles = {
    'Capomulin': ('b', 'o'),       'Ceftamin': ('r', '*'),
    'Infubinol': ('y', '^'),       'Ketapril': ('g', 's'),
    'Naftisol': ('#abcdef', 'D'),  'Placebo': ('#adefab', 'v'),
    'Propriva': ('#d0abef', '>'),  'Ramicane': ('#efabec', '<'),
    'Stelasyn': ('#c2abef', '3'),  'Zoniferol': ('grey', '4'),
}

for drug, (color, marker) in tumor_styles.items():
    err = tumor_response_error.loc[tumor_response_error['Drug'] == drug, 'Tumor Volume (mm3)']
    plt.errorbar(Time, reformat_df[drug], yerr=err, color=color,
                 linestyle='--', marker=marker, label=drug)

plt.xlabel('Time (Days)')
plt.ylabel('Tumor Volume (mm3)')
plt.legend()
plt.grid()
plt.savefig('Tumor_Response_to_Treatment', dpi=300)
plt.show()
```

# Metastatic Response to Treatment

Store the Mean Met. Site Data Grouped by Drug and Timepoint

```
df_groupby = df.groupby(['Drug', 'Timepoint'])
mean_met_sites_df = df_groupby['Metastatic Sites'].mean()
mean_met_sites_df

met_response = pd.DataFrame(mean_met_sites_df).reset_index()
met_response.head()
```

Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint

```
met_error = df_groupby['Metastatic Sites'].sem()
met_error = pd.DataFrame(met_error).reset_index()
met_error.head()
```

Minor Data Munging to Re-Format the Data Frames

```
metastatic_df = met_response.pivot(index = 'Timepoint', columns = 'Drug', values = 'Metastatic Sites')
metastatic_df.head()
```

Generate the Plot (with Error Bars)

```
plt.figure(figsize=(12,8))
plt.title('Metastatic Spread During Treatment', fontdict={'fontweight':'bold', 'fontsize':16})

# (color, linestyle, marker) for each drug
met_styles = {
    'Capomulin': ('#4287f5', 'solid', '*'), 'Ceftamin': ('#48f542', '-.', 'p'),
    'Infubinol': ('#42f5d4', '-', 'h'),     'Ketapril': ('#f542ec', '-', 's'),
    'Naftisol': ('#f56042', 'solid', 'X'),  'Placebo': ('#adefab', '-.', 'd'),
    'Propriva': ('#d0abef', 'solid', '>'),  'Ramicane': ('#efabec', '-', '<'),
    'Stelasyn': ('#c2abef', 'solid', '.'),  'Zoniferol': ('#f5425a', '-', '8'),
}

for drug, (color, linestyle, marker) in met_styles.items():
    err = met_error.loc[met_error['Drug'] == drug, 'Metastatic Sites']
    plt.errorbar(Time, metastatic_df[drug], yerr=err, color=color,
                 linestyle=linestyle, marker=marker, label=drug)

plt.xlabel('Treatment Duration (Days)')
plt.ylabel('Met Sites')
plt.legend()
plt.grid()
plt.savefig('Metastatic Spread During Treatment', dpi=300)
plt.show()
```

# Survival Rates

Store the Count of Mice Grouped by Drug and Timepoint (we can pass any metric)

```
df_groupby = df.groupby(['Drug', 'Timepoint'])
mice_count = df_groupby['Mouse ID'].count()
mice_df = pd.DataFrame(mice_count).reset_index()
rename_mice_df = mice_df.rename(columns={'Mouse ID': 'Mouse Count'})
rename_mice_df.head()
```

Minor Data Munging to Re-Format the Data Frames

```
mouse_df = mice_df.pivot(index = 'Timepoint', columns = 'Drug', values = 'Mouse ID')
mouse_df.head()
```

Generate the Plot (Accounting for percentages)

```
plt.figure(figsize=(8,5))
plt.title('Survival During Treatment', fontdict={'fontweight':'bold', 'fontsize':14})

# (color, marker) for each drug; counts are normalized by the initial 25 mice per group
survival_styles = {
    'Capomulin': ('b', '*'),      'Ceftamin': ('r', 'p'),
    'Infubinol': ('g', 'h'),      'Ketapril': ('y', 's'),
    'Naftisol': ('m', 'X'),       'Placebo': ('c', 'D'),
    'Propriva': ('#adefab', '>'), 'Ramicane': ('#d0abef', '<'),
    'Stelasyn': ('#f542ec', '.'), 'Zoniferol': ('#f5425a', '8'),
}

for drug, (color, marker) in survival_styles.items():
    plt.plot(Time, (mouse_df[drug] / 25) * 100, color=color,
             linestyle='-', marker=marker, label=drug)

plt.xlabel('Time (Days)')
plt.ylabel('Survival Rate (%)')
plt.legend()
plt.grid()
plt.savefig('Survival During Treatment', dpi=300)
plt.show()
```

# Summary Bar Graph

Calculate the percent changes for each drug

```
tumor_volume = 45
percent_changes = ((reformat_df.loc[45, :] - tumor_volume) / tumor_volume) * 100
percent_changes
```

Store all Relevant Percent Changes into a Tuple

```
tuple_changes = tuple(zip(percent_changes.index, percent_changes))
tuple_changes_list = list(tuple_changes)
tuple_changes_list
```

Splice the data between passing and failing drugs

```
passing_drugs = []
failing_drugs = []
indx_pass_drugs = []
indx_fail_drugs = []

for j, elements in tuple_changes_list:
    if elements > 0:
        passing_drugs.append(elements)
        indx_pass_drugs.append(j)
    else:
        failing_drugs.append(elements)
        indx_fail_drugs.append(j)

passingDrugs = list(zip(indx_pass_drugs, passing_drugs))
failingDrugs = list(zip(indx_fail_drugs, failing_drugs))
print(passingDrugs)
print(failingDrugs)
```

Orient widths. Add labels, tick marks, etc.

```
fig, fig_df = plt.subplots(figsize=(8, 5))
fig_df.set_title('Tumor Change Over 45 Day Treatment', fontdict={'fontweight':'bold', 'fontsize':12})

y = [percent_changes['Ceftamin'], percent_changes['Infubinol'], percent_changes['Ketapril'],
     percent_changes['Naftisol'], percent_changes['Placebo'], percent_changes['Propriva'],
     percent_changes['Stelasyn'], percent_changes['Zoniferol']]

x_axis = [0]
x_axis1 = [1]
x_axis2 = [2, 3, 4, 5, 6, 7, 8, 9]

bars = fig_df.bar(x_axis, percent_changes['Capomulin'], color='g', alpha=0.8, align='edge', width=-1)
bars1 = fig_df.bar(x_axis1, percent_changes['Ramicane'], color='g', alpha=0.8, align='edge', width=-1)
bars2 = fig_df.bar(x_axis2, y, color='m', alpha=0.8, align='edge', width=-1)

x_labels = ["Capomulin", "Ramicane", "Ceftamin", "Infubinol", "Ketapril",
            "Naftisol", "Placebo", "Propriva", "Stelasyn", "Zoniferol"]
plt.setp(fig_df, xticks=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
         xticklabels=x_labels,
         yticks=[-20, 0, 20, 40, 60])
plt.xticks(rotation='vertical')
plt.subplots_adjust(bottom=0.10)
fig_df.set_ylabel('% Tumor Volume Change')
fig_df.grid()

def autolabel(rects):
    """Annotate each bar with its percent change."""
    for rect in rects:
        height = rect.get_height()
        fig_df.text(rect.get_x() + rect.get_width()/2, .1*height, "%d" % int(height) + "%",
                    horizontalalignment='center', verticalalignment='top', color="black")

autolabel(bars)
autolabel(bars1)
autolabel(bars2)

fig.tight_layout()
fig.savefig('Tumor Change Over 45 Day Treatment', dpi=300)
plt.show()
```
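The summary bar graph above is driven by the percent-change formula `((final - 45) / 45) * 100`, where 45 mm3 is the tumor volume at the start of treatment. A quick sanity check on final volumes made up for illustration:

```python
def percent_change(final, initial=45.0):
    """Percent change of tumor volume relative to the 45 mm3 baseline."""
    return (final - initial) / initial * 100.0

# Hypothetical final volumes: one shrinking tumor, one growing tumor
print(round(percent_change(36.0), 2))   # → -20.0
print(round(percent_change(65.25), 2))  # → 45.0
```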
# Transform JD text files into an LDA model and pyLDAvis visualization ### Steps: 1. Use spaCy phrase matching to identify skills 2. Parse the job descriptions. A full, readable job description gets turned into a bunch of newline-delimited skills. 3. Create a Gensim corpus and dictionary from the parsed skills 4. Train an LDA model using the corpus and dictionary 5. Visualize the LDA model 6. Compare user input to the LDA model; get out a list of relevant skills ``` # Modeling and visualization import gensim from gensim.corpora import Dictionary, MmCorpus from gensim.models.ldamodel import LdaModel import pyLDAvis import pyLDAvis.gensim # Utilities import codecs import pickle import os import warnings # Black magic import spacy from spacy.matcher import Matcher from spacy.attrs import * nlp = spacy.load('en') ``` ### 1. Use spaCy phrase matching to ID skills in job descriptions **First, we read in a pickled dictionary that contains the word patterns we'll use to extract skills from JDs. Here's what the first few patterns look like:** ``` Python { 0 : [{"lower": "after"}, {"lower": "effects"}], 1 : [{"lower": "amazon"}, {"lower": "web"}, {"lower": "services"}], 2 : [{"lower": "angular"}, {"lower": "js"}], 3 : [{"lower": "ansible"}], 4 : [{"lower": "bash"}, {"lower": "shell"}], 5 : [{"lower": "business"}, {"lower": "intelligence"}] } ``` **We generated the pickled dictionary through some (rather heavy) preprocessing steps:** 1. Train a word2vec model on all of the job descriptions. Cluster the word embeddings, identify clusters associated with hard skills, and annotate all of the words in those clusters. Save those words as a "skill repository" (a text document that we'll use as the canonical list of hard tech skills). 2. Clean the skill repository. Inevitably, terms that are not hard skills made it into the word2vec "skill" clusters. Remove them. 
In this case, we defined a "skill" as "a tool, platform, or language that would make sense as a skill to learn or improve."
3. Use the skill repository to train a Named Entity Recognition model (in our case, using Prodigy). Use the training process to identify hard skills that we previously did not have in our repository. Add the new skills to the repository.
4. Create a Python dictionary of the skills. Format the dictionary so that the values can be ingested as spaCy language patterns. See spaCy's [matcher documentation](https://spacy.io/api/matcher#init) for more details.

```
# read pickled dict() object
with open('skill_dict.pkl', 'rb') as f:
    skill_dict = pickle.load(f)

%%time
# Read JDs into memory
import os
directory = os.fsencode('../local_data/')
jds = []

for file in os.listdir(directory):
    filename = os.fsdecode(file)
    path = '../local_data/' + filename
    with open(path, 'r') as infile:
        jds.append(infile.read())

print(len(jds), "JDs")

import sys
print(sys.getsizeof(jds)/1000000, "Megabytes")
```

### 2. Parse job descriptions

From each JD, generate a list of skills.

```
%%time
# Write skill-parsed JDs to file.
# This took about three hours for 106k jobs.
for idx, jd in enumerate(jds):
    out_path = '../skill_parsed/' + str(idx+1) + '.txt'
    with open(out_path, 'w') as outfile:
        # Creating a matcher object
        doc = nlp(jd)
        matcher = Matcher(nlp.vocab)
        for label, pattern in skill_dict.items():
            matcher.add(label, None, pattern)
        matches = matcher(doc)
        for match in matches:
            # match object returns a tuple with (id, startpos, endpos)
            output = str(doc[match[1]:match[2]]).replace(' ', '_').lower()
            outfile.write(output)
            outfile.write('\n')
```

### 3.
Generate a Gensim corpus and dictionary from the parsed skill documents ``` %%time # Load parsed items back into memory directory = os.fsencode('skill_parsed//') parsed_jds = [] for file in os.listdir(directory): filename = os.fsdecode(file) path = 'skill_parsed/' + filename # Ran into an encoding issue; changing to latin-1 fixed it with codecs.open(path, 'r', encoding='latin-1') as infile: parsed_jds.append(infile.read()) %%time ''' Gensim needs documents to be formatted as a list-of-lists, where the inner lists are simply lists including the tokens (skills) from a given document. It's important to note that any bigram or trigram skills are already tokenized with underscores instead of spaces to preserve them as tokens. ''' nested_dict_corpus = [text.split() for text in parsed_jds] print(nested_dict_corpus[222:226]) from gensim.corpora import Dictionary, MmCorpus gensim_skills_dict = Dictionary(nested_dict_corpus) # save the dict gensim_skills_dict.save('gensim_skills.dict') corpus = [gensim_skills_dict.doc2bow(text) for text in nested_dict_corpus] # Save the corpus gensim.corpora.MmCorpus.serialize('skill_bow_corpus.mm', corpus, id2word=gensim_skills_dict) # Load up the dictionary gensim_skills_dict = Dictionary.load('gensim_skills.dict') # Load the corpus bow_corpus = MmCorpus('skill_bow_corpus.mm') ``` ### 4. Create the LDA model using Gensim ``` %%time with warnings.catch_warnings(): warnings.simplefilter('ignore') lda_alpha_auto = LdaModel(bow_corpus, id2word=gensim_skills_dict, num_topics=20) lda_alpha_auto.save('lda/skills_lda') # load the finished LDA model from disk lda = LdaModel.load('lda/skills_lda') ``` ### 5. 
Visualize using pyLDAvis ``` LDAvis_data_filepath = 'lda/ldavis/ldavis' %%time LDAvis_prepared = pyLDAvis.gensim.prepare(lda, bow_corpus, gensim_skills_dict) with open(LDAvis_data_filepath, 'wb') as f: pickle.dump(LDAvis_prepared, f) # load the pre-prepared pyLDAvis data from disk with open(LDAvis_data_filepath, 'rb') as f: LDAvis_prepared = pickle.load(f) pyLDAvis.display(LDAvis_prepared) # Save the file as HTML pyLDAvis.save_html(LDAvis_prepared, 'lda/html/lda.html') ``` ### 6. Compare user input to the LDA model Output the skills a user has and does not have from various topics. ``` # Look at the topics def explore_topic(topic_number, topn=20): """ accept a topic number and print out a formatted list of the top terms """ print(u'{:20} {}'.format(u'term', u'frequency') + u'') for term, frequency in lda.show_topic(topic_number, topn=40): print(u'{:20} {:.3f}'.format(term, round(frequency, 3))) for i in range(20): # Same number as the types of jobs we scraped initially print("\n\nTopic %s" % i) explore_topic(topic_number=i) # A stab at naming the topics topic_names = {1: u'Data Engineering (Big Data Focus)', 2: u'Microsoft OOP Engineering (C, C++, .NET)', 3: u'Web Application Development (Ruby, Rails, JS, Databases)', 4: u'Linux/Unix, Software Engineering, and Scripting', 5: u'Database Administration', 6: u'Project Management (Agile Focus)', 7: u'Project Management (General Software)', 8: u'Product Management', 9: u'General Management & Productivity (Microsoft Office Focus)', 10: u'Software Program Management', 11: u'Project and Program Management', 12: u'DevOps and Cloud Computing/Infrastructure', 13: u'Frontend Software Engineering and Design', 14: u'Business Intelligence', 15: u'Analytics', 16: u'Quality Engineering, Version Control, & Build', 17: u'Big Data Analytics; Hardware & Scientific Computing', 18: u'Software Engineering', 19: u'Data Science, Machine Learning, and AI', 20: u'Design'} ``` #### Ingest user input & transform into list of skills ``` matcher 
= Matcher(nlp.vocab) user_input = ''' My skills are Postgresql, and Python. Experience with Chef Puppet and Docker required. I also happen to know Blastoise and Charzard. Also NeuRal neTwOrk. I use Git, Github, svn, Subversion, but not git, github or subversion. Additionally, I can program using Perl, Java, and Haskell. But not perl, java, or haskell.''' # Construct matcher object doc = nlp(user_input) for label, pattern in skill_dict.items(): matcher.add(label, None, pattern) # Compare input to pre-defined skill patterns user_skills = [] matches = matcher(doc) for match in matches: if match is not None: # match object returns a tuple with (id, startpos, endpos) output = str(doc[match[1]:match[2]]).lower() user_skills.append(output) print("*** User skills: *** ") for skill in user_skills: print(skill) ``` #### Compare user skills to the LDA model ``` def top_match_items(input_doc, lda_model, input_dictionary, num_terms=20): """ (1) parse input doc with spaCy, apply text pre-proccessing steps, (3) create a bag-of-words representation (4) create an LDA representation """ doc_bow = gensim_skills_dict.doc2bow(input_doc) # create an LDA representation document_lda = lda_model[doc_bow] # Sort in descending order sorted_doc_lda = sorted(document_lda, key=lambda review_lda: -review_lda[1]) topic_number, freq = sorted_doc_lda[0][0], sorted_doc_lda[0][1] highest_probability_topic = topic_names[topic_number+1] top_topic_skills = [] for term, term_freq in lda.show_topic(topic_number, topn=num_terms): top_topic_skills.append(term) return highest_probability_topic, round(freq, 3), top_topic_skills matched_topic, matched_freq, top_topic_skills = top_match_items(user_skills, lda, gensim_skills_dict) def common_skills(top_topic_skills, user_skills): return [item for item in top_topic_skills if item in user_skills] def non_common_skills(top_topic_skills, user_skills): return [item for item in top_topic_skills if item not in user_skills] print("**** User's matched topic and percent 
match:") print(matched_topic, matched_freq) print("\n**** Skills user has in common with topic:") for skill in common_skills(top_topic_skills, user_skills): print(skill) print("\n**** Skills user does NOT have in common with topic:") for skill in non_common_skills(top_topic_skills, user_skills): print(skill) ```
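The `common_skills`/`non_common_skills` helpers above are plain membership filters over lists. An equivalent (order-insensitive) formulation uses set intersection and difference; here on toy inputs, not the real topic output:

```python
topic_skills = ["python", "sql", "spark", "airflow"]
user_skills = ["python", "excel", "sql"]

# List-comprehension version, as in the notebook
common = [s for s in topic_skills if s in user_skills]
missing = [s for s in topic_skills if s not in user_skills]
print(common)   # → ['python', 'sql']
print(missing)  # → ['spark', 'airflow']

# Set version: same contents, but no guaranteed order
assert set(common) == set(topic_skills) & set(user_skills)
assert set(missing) == set(topic_skills) - set(user_skills)
```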
# Unconstrained optimization with NN models

In this tutorial we will go over the type 1 optimization problem, which entails an `nn.Module`-represented cost function and __no constraint__ at all. This type of problem is often written as follows:

$$ \min_{x} f_{\theta}(x) $$

We can find type 1 problems quite easily. For instance, assuming you are the manager of some manufacturing facilities, your primary objective would be to maximize the yield of the manufacturing process. In an industrial-grade manufacturing process, the model of the process is often __unknown__. Hence we may need to learn the model with your favorite differentiable model, such as a neural network, and perform gradient-based optimization to find the (local) optimum that maximizes the yield (equivalently, minimizes the negative yield).

### General problem-solving trick: cast your problem into a QP, approximately

Convex optimization is arguably the most general class of optimization problems for which we have algorithms that solve the problem optimally. Quadratic programming (QP) is a type of convex optimization problem that is well developed in terms of both theory and computation. We will heavily utilize QPs to solve the optimization problems that depend on `torch` models. Our general recipe is as follows:

1. Construct the cost or constraint models from the data.
2. Utilizing `torch`'s automatic differentiation functionality, compute the Jacobians or Hessians of the models.
3. Solve (possibly many times) a QP built from the estimated Jacobians and Hessians.

> It is noteworthy that even if we locally cast the problem into a QP, that doesn't mean our original problem is convex. Therefore, we cannot say that the approaches we will look at find the global optimum.
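The "cast it locally into a QP" idea can be seen in one dimension: replace `f` by its second-order Taylor model at `x` and jump to that quadratic's minimizer, which is exactly a Newton step `x ← x − f′(x)/f″(x)`. A dependency-free sketch on a known toy function (not the torch model used below, whose derivatives would come from autograd):

```python
def f(x):
    """Toy cost: a quartic with a unique minimum at x = 2."""
    return (x - 2.0) ** 4 + 1.0

def f_prime(x):
    return 4.0 * (x - 2.0) ** 3

def f_second(x):
    return 12.0 * (x - 2.0) ** 2

x = 5.0
for _ in range(40):
    # Minimizer of the local quadratic model q(d) = f(x) + f'(x) d + 0.5 f''(x) d^2
    x = x - f_prime(x) / f_second(x)
print(round(x, 3))  # → 2.0
```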
```
import torch
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import TensorDataset, DataLoader

from src.utils import generate_y
from src.nn.MLP import MLP
```

## Generate training dataset

```
x_min, x_max = -4.0, 4.0
xs_linspace = torch.linspace(-4, 4, 2000).view(-1, 1)
ys_linspace = generate_y(xs_linspace)

# samples to construct training dataset
x_dist = torch.distributions.uniform.Uniform(-4.0, 4.0)
xs = x_dist.sample(sample_shape=(500, 1))
ys = generate_y(xs)

BS = 64  # Batch size
ds = TensorDataset(xs, ys)
loader = DataLoader(ds, batch_size=BS, shuffle=True)

input_dim, output_dim = 1, 1
m = MLP(input_dim, output_dim, num_neurons=[128, 128])

mse_criteria = torch.nn.MSELoss()
opt = torch.optim.Adam(m.parameters(), lr=1e-3)

n_update = 0
print_every = 500
epochs = 200

for _ in range(epochs):
    for x, y in loader:
        y_pred = m(x)
        loss = mse_criteria(y_pred, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

        n_update += 1
        if n_update % print_every == 0:
            print(n_update, loss.item())

# save model for later use
torch.save(m.state_dict(), './model.pt')
```

## Solve the unconstrained optimization problem

Let's solve the unconstrained optimization problem with torch-estimated gradients and the simple gradient descent method.

```
def minimize_y(x_init, model, num_steps=15, step_size=1e-1):
    def _grad(model, x):
        return torch.autograd.functional.jacobian(model, x).squeeze()

    x = x_init
    xs = [x]
    ys = [model(x)]
    gs = [_grad(model, x)]

    for _ in range(num_steps):
        grad = _grad(model, x)
        x = (x - step_size * grad).clone()
        y = model(x)
        xs.append(x)
        ys.append(y)
        gs.append(grad)

    xs = torch.stack(xs).detach().numpy()
    ys = torch.stack(ys).detach().numpy()
    gs = torch.stack(gs).detach().numpy()
    return xs, ys, gs

x_min, x_max = -4.0, 4.0
n_steps = 40
x_init = torch.tensor(np.random.uniform(x_min, x_max, 1)).float()
opt_xs, opt_ys, grad = minimize_y(x_init, m, n_steps)

pred_ys = m(xs_linspace).detach()

fig, axes = plt.subplots(1, 1, figsize=(10, 5))
axes.grid()
axes.plot(xs_linspace, ys_linspace, label='Ground truth')
axes.plot(xs_linspace, pred_ys, label='Model prediction')
axes.scatter(opt_xs[0], opt_ys[0], label='Opt start', c='green', marker='*', s=100.0)
axes.scatter(opt_xs[1:], opt_ys[1:], label='NN opt', c='green')
_ = axes.legend()
```
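The descent loop in `minimize_y` can be cross-checked without torch: replace the autograd Jacobian with a central finite-difference gradient and run the same `x ← x − step_size · ∇f(x)` update on a known toy cost (the function and step count here are made up for illustration):

```python
def grad_fd(f, x, h=1e-5):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def minimize(f, x, num_steps=200, step_size=0.1):
    """Plain gradient descent, mirroring the update used in minimize_y."""
    for _ in range(num_steps):
        x = x - step_size * grad_fd(f, x)
    return x

# Toy cost with a single minimum at x = 1.5
f = lambda x: (x - 1.5) ** 2 + 0.25
x_star = minimize(f, x=-3.0)
print(round(x_star, 3))  # → 1.5
```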
```
import pdal
import io
import json
import sys
import logging
import warnings
warnings.filterwarnings('ignore')
# import geoplot as gplt
# import geoplot.crs as gcrs
import geopandas as gpd
import imageio
import pathlib
import mapclassify as mc
import numpy as np
import laspy
import rasterio
from rasterio import mask
import folium
import matplotlib.pyplot as plt
from shapely.geometry import Polygon, box

sys.path.append("/home/michael/USGS-LIDAR-on-AgriTech")
from fetch_data import FetchData
from script.visualize import Visualize

MINX, MINY, MAXX, MAXY = [-93.756155, 41.918015, -93.756055, 42.918115]
polygon = Polygon(((MINX, MINY), (MINX, MAXY), (MAXX, MAXY), (MAXX, MINY), (MINX, MINY)))
region = "IA_FullState"

data_fetcher = FetchData(polygon, region)
data = data_fetcher.run_pipeline()
print(type(data))

df = data_fetcher.get_elevation(data)
print(df.info())
print(df)

# Plot raster/tif image
# ---------------------
def plot_raster(rast_data, title=''):
    """Plot a raster tif image, both log-scaled (+1) and in its original version."""
    fig, (axlog, axorg) = plt.subplots(1, 2, figsize=(14, 7))
    im1 = axlog.imshow(np.log1p(rast_data))  # vmin=0, vmax=2.1)
    # im2 = axorg.imshow(rast_data)
    plt.title(title, fontdict={'fontsize': 15})
    plt.axis('off')
    plt.colorbar(im1, fraction=0.03)

# Read raster/tif file
# --------------------
iowa_tif = '../data/tif/iowa.tif'
raster_iowa = rasterio.open(iowa_tif)
iowa_data = raster_iowa.read(1)
type(iowa_data)

count = iowa_data[iowa_data > 0].sum()
count

title = 'Log scaled (+1) and No Scale Raster plots'
plot_raster(iowa_data, title)

# `ep` (presumably earthpy.plot) and `array` are never defined in this
# notebook, so the band-histogram call is left disabled:
# ep.hist(array, title=["Band 1", "Band 2", "Band 3", "Band 4"])
plt.show()

# Get a shapefile from a tif
from glob import glob

def get_shp_from_tif(tif_path: str, shp_file_path: str) -> gpd.GeoDataFrame:
    raster = rasterio.open(tif_path)
    bounds = raster.bounds
    df = gpd.GeoDataFrame({"id": 1, "geometry": [box(*bounds)]})
    # save to file
    df.to_file(shp_file_path)
    print('Saved..')
    return df

shp_df = get_shp_from_tif("../data/tif/iowa.tif", "../data/shp")
shp_df

def select_name(name: str):
    name_ls = []
    names_list = io.open('../data/filename.txt', encoding='UTF-8').read().strip().split('\n')
    if name in names_list:
        return name
    if name == 'all':
        return names_list
    for words in names_list:
        if name in words.split('_'):
            name_ls.append(words)
    if not name_ls:
        print(f"Name - ({name}) not found, input a valid name")
        return None
    return name_ls

y = select_name('all')

with open('../pipeline.json', 'r') as json_file:
    dict_ob = json.load(json_file)
dict_ob

pipeline = pdal.Pipeline(json.dumps(dict_ob))
pipeline
pipeline.execute()
metadata = pipeline.metadata
log = pipeline.log
```
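The bounding box at the top of this notebook is turned into a closed ring of coordinates. Its area can be sanity-checked without any GIS dependencies using the shoelace formula; this is a standalone sketch added for illustration, not part of the original pipeline:

```python
def shoelace_area(coords):
    """Area of a simple polygon given as a closed ring of (x, y) pairs."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(coords, coords[1:]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

MINX, MINY, MAXX, MAXY = [-93.756155, 41.918015, -93.756055, 42.918115]
ring = ((MINX, MINY), (MINX, MAXY), (MAXX, MAXY), (MAXX, MINY), (MINX, MINY))

# For an axis-aligned box the shoelace result must equal width * height
assert abs(shoelace_area(ring) - (MAXX - MINX) * (MAXY - MINY)) < 1e-9
```

The same check passes for any simple polygon, which makes it a cheap way to verify hand-built rings before handing them to Shapely.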
# Function Practice Exercises

Problems are arranged in increasing difficulty:

* Warmup - these can be solved using basic comparisons and methods
* Level 1 - these may involve if/then conditional statements and simple methods
* Level 2 - these may require iterating over sequences, usually with some kind of loop
* Challenging - these will take some creativity to solve

## WARMUP SECTION:

#### LESSER OF TWO EVENS: Write a function that returns the lesser of two given numbers *if* both numbers are even, but returns the greater if one or both numbers are odd

    lesser_of_two_evens(2,4) --> 2
    lesser_of_two_evens(2,5) --> 5

```
def lesser_of_two_evens(a,b):
    pass

# Check
lesser_of_two_evens(2,4)

# Check
lesser_of_two_evens(2,5)
```

#### ANIMAL CRACKERS: Write a function that takes a two-word string and returns True if both words begin with the same letter

    animal_crackers('Levelheaded Llama') --> True
    animal_crackers('Crazy Kangaroo') --> False

```
def animal_crackers(text):
    pass

# Check
animal_crackers('Levelheaded Llama')

# Check
animal_crackers('Crazy Kangaroo')
```

#### MAKES TWENTY: Given two integers, return True if the sum of the integers is 20 *or* if one of the integers is 20. If not, return False

    makes_twenty(20,10) --> True
    makes_twenty(12,8) --> True
    makes_twenty(2,3) --> False

```
def makes_twenty(n1,n2):
    pass

# Check
makes_twenty(20,10)

# Check
makes_twenty(2,3)
```

# LEVEL 1 PROBLEMS

#### OLD MACDONALD: Write a function that capitalizes the first and fourth letters of a name

    old_macdonald('macdonald') --> MacDonald

Note: `'macdonald'.capitalize()` returns `'Macdonald'`

```
def old_macdonald(name):
    pass

# Check
old_macdonald('macdonald')
```

#### MASTER YODA: Given a sentence, return a sentence with the words reversed

    master_yoda('I am home') --> 'home am I'
    master_yoda('We are ready') --> 'ready are We'

Note: The .join() method may be useful here. The .join() method allows you to join together strings in a list with some connector string. For example, some uses of the .join() method:

    >>> "--".join(['a','b','c'])
    >>> 'a--b--c'

This means if you had a list of words you wanted to turn back into a sentence, you could just join them with a single space string:

    >>> " ".join(['Hello','world'])
    >>> "Hello world"

```
def master_yoda(text):
    pass

# Check
master_yoda('I am home')

# Check
master_yoda('We are ready')
```

#### ALMOST THERE: Given an integer n, return True if n is within 10 of either 100 or 200

    almost_there(90) --> True
    almost_there(104) --> True
    almost_there(150) --> False
    almost_there(209) --> True

NOTE: `abs(num)` returns the absolute value of a number

```
def almost_there(n):
    pass

# Check
almost_there(104)

# Check
almost_there(150)

# Check
almost_there(209)
```

# LEVEL 2 PROBLEMS

#### FIND 33: Given a list of ints, return True if the array contains a 3 next to a 3 somewhere.

    has_33([1, 3, 3]) → True
    has_33([1, 3, 1, 3]) → False
    has_33([3, 1, 3]) → False

```
def has_33(nums):
    pass

# Check
has_33([1, 3, 3])

# Check
has_33([1, 3, 1, 3])

# Check
has_33([3, 1, 3])
```

#### PAPER DOLL: Given a string, return a string where for every character in the original there are three characters

    paper_doll('Hello') --> 'HHHeeellllllooo'
    paper_doll('Mississippi') --> 'MMMiiissssssiiippppppiii'

```
def paper_doll(text):
    pass

# Check
paper_doll('Hello')

# Check
paper_doll('Mississippi')
```

#### BLACKJACK: Given three integers between 1 and 11, if their sum is less than or equal to 21, return their sum. If their sum exceeds 21 *and* there's an eleven, reduce the total sum by 10. Finally, if the sum (even after adjustment) exceeds 21, return 'BUST'

    blackjack(5,6,7) --> 18
    blackjack(9,9,9) --> 'BUST'
    blackjack(9,9,11) --> 19

```
def blackjack(a,b,c):
    pass

# Check
blackjack(5,6,7)

# Check
blackjack(9,9,9)

# Check
blackjack(9,9,11)
```

#### SUMMER OF '69: Return the sum of the numbers in the array, except ignore sections of numbers starting with a 6 and extending to the next 9 (every 6 will be followed by at least one 9). Return 0 for no numbers.

    summer_69([1, 3, 5]) --> 9
    summer_69([4, 5, 6, 7, 8, 9]) --> 9
    summer_69([2, 1, 6, 9, 11]) --> 14

```
def summer_69(arr):
    pass

# Check
summer_69([1, 3, 5])

# Check
summer_69([4, 5, 6, 7, 8, 9])

# Check
summer_69([2, 1, 6, 9, 11])
```

# CHALLENGING PROBLEMS

#### SPY GAME: Write a function that takes in a list of integers and returns True if it contains 007 in order

    spy_game([1,2,4,0,0,7,5]) --> True
    spy_game([1,0,2,4,0,5,7]) --> True
    spy_game([1,7,2,0,4,5,0]) --> False

```
def spy_game(nums):
    pass

# Check
spy_game([1,2,4,0,0,7,5])

# Check
spy_game([1,0,2,4,0,5,7])

# Check
spy_game([1,7,2,0,4,5,0])
```

#### COUNT PRIMES: Write a function that returns the *number* of prime numbers that exist up to and including a given number

    count_primes(100) --> 25

By convention, 0 and 1 are not prime.

```
def count_primes(num):
    pass

# Check
count_primes(100)
```

### Just for fun:

#### PRINT BIG: Write a function that takes in a single letter, and returns a 5x5 representation of that letter

    print_big('a')

    out:
      *  
     * * 
    *****
    *   *
    *   *

HINT: Consider making a dictionary of possible patterns, and mapping the alphabet to specific 5-line combinations of patterns. <br>For purposes of this exercise, it's ok if your dictionary stops at "E".

```
def print_big(letter):
    pass

print_big('a')
```

## Great Job!
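If you want to check your work, here is one possible solution sketch for two of the problems above (FIND 33 and SPY GAME). These are illustrations only; many other implementations are equally valid:

```python
def has_33(nums):
    # True if the list contains two adjacent 3s
    return any(a == 3 and b == 3 for a, b in zip(nums, nums[1:]))

def spy_game(nums):
    # Scan for 0, 0, 7 in order (not necessarily adjacent)
    code = [0, 0, 7]
    for n in nums:
        if code and n == code[0]:
            code.pop(0)
    return not code

assert has_33([1, 3, 3]) and not has_33([1, 3, 1, 3]) and not has_33([3, 1, 3])
assert spy_game([1, 2, 4, 0, 0, 7, 5]) and not spy_game([1, 7, 2, 0, 4, 5, 0])
```

The `zip(nums, nums[1:])` idiom pairs each element with its successor, which is a common way to test adjacent elements without index arithmetic.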
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).* *The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* # A Quick Tour of Python Language Syntax ``` x = 1 y = 4 z = x + y z # set the midpoint midpoint = 5 # make two empty lists lower = []; upper = [] # split the numbers into lower and upper for i in range(10): if (i < midpoint): lower.append(i) else: upper.append(i) print("lower:", lower) print("upper:", upper) ``` ## Comments Are Marked by ``#`` The script starts with a comment: ``` python # set the midpoint ``` Comments in Python are indicated by a pound sign (``#``), and anything on the line following the pound sign is ignored by the interpreter. This means, for example, that you can have stand-alone comments like the one just shown, as well as inline comments that follow a statement. For example: ``` python x += 2 # shorthand for x = x + 2 ``` Python does not have any syntax for multi-line comments, such as the ``/* ... */`` syntax used in C and C++, though multi-line strings are often used as a replacement for multi-line comments (more on this in [String Manipulation and Regular Expressions](14-Strings-and-Regular-Expressions.ipynb)). ## End-of-Line Terminates a Statement The next line in the script is ``` python midpoint = 5 ``` This is an assignment operation, where we've created a variable named ``midpoint`` and assigned it the value ``5``. Notice that the end of this statement is simply marked by the end of the line. This is in contrast to languages like C and C++, where every statement must end with a semicolon (``;``). 
In Python, if you'd like a statement to continue to the next line, you can end the line with the "``\``" marker to indicate this.

## Semicolon Can Optionally Terminate a Statement

Sometimes it can be useful to put multiple statements on a single line. The next portion of the script is

``` python
lower = []; upper = []
```

This shows how the semicolon (``;``) familiar in C can be used optionally in Python to put two statements on a single line. Functionally, this is entirely equivalent to writing

``` python
lower = []
upper = []
```

Using a semicolon to put multiple statements on a single line is generally discouraged by most Python style guides, though occasionally it proves convenient.

## Indentation: Whitespace Matters!

Next, we get to the main block of code:

``` Python
for i in range(10):
    if i < midpoint:
        lower.append(i)
    else:
        upper.append(i)
```

This is a compound control-flow statement including a loop and a conditional; we'll look at these types of statements in a moment. For now, consider that this demonstrates what is perhaps the most controversial feature of Python's syntax: whitespace is meaningful!

## Parentheses Are for Grouping or Calling

In the previous code snippet, we see two uses of parentheses. First, they can be used in the typical way to group statements or mathematical operations:

```
2 * (3 + 4)
```

They can also be used to indicate that a *function* is being called. In the next snippet, the ``print()`` function is used to display the contents of a variable (see the sidebar).
The function call is indicated by a pair of opening and closing parentheses, with the *arguments* to the function contained within: ``` print('first value:', 1) ``` # Basic Python Semantics: Variables and Objects ## Python Variables Are Pointers Assigning variables in Python is as easy as putting a variable name to the left of the equals (``=``) sign: ```python # assign 4 to the variable x x = 4 ``` ``` x = 1 # x is an integer x = 'hello' # now x is a string x = [1, 2, 3] # now x is a list x = [1, 2, 3] y = x ``` We've created two variables ``x`` and ``y`` which both point to the same object. Because of this, if we modify the list via one of its names, we'll see that the "other" list will be modified as well: ``` print(y) x.append(4) # append 4 to the list pointed to by x print(y) # y's list is modified as well! print(y) x = 10 y = x x += 5 # add 5 to x's value, and assign it to x print("x =", x) print("y =", y) ``` When we call ``x += 5``, we are not modifying the value of the ``10`` object pointed to by ``x``; we are rather changing the variable ``x`` so that it points to a new integer object with value ``15``. For this reason, the value of ``y`` is not affected by the operation. ## Everything Is an Object Python is an object-oriented programming language, and in Python everything is an object. Let's flesh-out what this means. Earlier we saw that variables are simply pointers, and the variable names themselves have no attached type information. This leads some to claim erroneously that Python is a type-free language. But this is not the case! Consider the following: ``` x = 4 type(x) x.bit_length() x.real x = 'hello' type(x) x.upper() x = 3.14159 type(x) x.as_integer_ratio() ``` Python has types; however, the types are linked not to the variable names but *to the objects themselves*. In object-oriented programming languages like Python, an *object* is an entity that contains data along with associated metadata and/or functionality. 
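The pointer behavior described above can be made explicit with the built-in `id()` function, which returns an object's identity; this small sketch is added for illustration:

```python
# Two names bound to the same list share one identity
x = [1, 2, 3]
y = x
assert id(x) == id(y)          # same object, two names

y.append(4)
assert x == [1, 2, 3, 4]       # mutation is visible through both names

# Rebinding one name does not touch the other
x = [0]
assert y == [1, 2, 3, 4]
assert id(x) != id(y)
```

This is exactly why `x += 5` on an integer leaves `y` unchanged: rebinding a name never modifies the object the other name still points to.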
``` L = [1, 2, 3] L.append(100) print(L) ``` While it might be expected for compound objects like lists to have attributes and methods, what is sometimes unexpected is that in Python even simple types have attached attributes and methods. For example, numerical types have a ``real`` and ``imag`` attribute that returns the real and imaginary part of the value, if viewed as a complex number: ``` x = 4.5 print(x.real, "+", x.imag, 'i') ``` Methods are like attributes, except they are functions that you can call using opening and closing parentheses. For example, floating point numbers have a method called ``is_integer`` that checks whether the value is an integer: ``` x = 4.5 x.is_integer() x = 4.0 x.is_integer() ```
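Pushing the "everything is an object" point one step further than the text, even functions and the types themselves carry attributes; a brief illustrative sketch:

```python
def greet():
    return "hi"

# Functions are objects with introspectable attributes
assert greet.__name__ == "greet"

# Types are objects too
assert isinstance(int, object)

# And integer literals have methods: 42 is 0b101010, six bits
assert (42).bit_length() == 6
```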
<!-- :Author: Arthur Goldberg <Arthur.Goldberg@mssm.edu> --> <!-- :Date: 2020-08-02 --> <!-- :Copyright: 2020, Karr Lab --> <!-- :License: MIT --> # DE-Sim: Ordering simultaneous events DE-Sim makes it easy to build and simulate discrete-event models. This notebook discusses DE-Sim's methods for controlling the execution order of simultaneous messages. ## Installation Use `pip` to install `de_sim`. ## Scheduling events with equal simulation times A discrete-event simulation may execute multiple events simultaneously, that is, at a particular simulation time. To ensure that simulation runs are reproducible and deterministic, a simulator must provide mechanisms that deterministically control the execution order of simultaneous events. Two types of situations arise, *local* and *global*. A local situation arises when a simulation object receives multiple event messages simultaneously, while a global situation arises when multiple simulation objects execute events simultaneously. Separate *local* and *global* mechanisms ensure that these situations are simulated deterministically. The local mechanism ensures that simultaneous events are handled deterministically at a single simulation object, while the global mechanism ensures that simultaneous events are handled deterministically across all objects in a simulation. ### Local mechanism: simultaneous event messages at a simulation object The local mechanism, called *event superposition* after the [physics concept of superposition](https://en.wikipedia.org/wiki/Superposition_principle), involves two components: 1. When a simulation object receives multiple event messages at the same time, the simulator passes all of the event messages to the object's event handler in a list. (However, if simultaneous event messages have different handlers then the simulator raises a `SimulatorError` exception.) 2. The simulator sorts the events in the list so that any given list of events will always be arranged in the same order. 
Event messages are sorted by the pair (event message priority, event message content). Sorting costs O(n log n), but since simultaneous events are usually rare, sorting event lists is unlikely to slow down simulations. ``` """ This example illustrates the local mechanism that handles simultaneous event messages received by a simulation object """ import random import de_sim from de_sim.event import Event class Double(de_sim.EventMessage): 'Double value' class Increment(de_sim.EventMessage): 'Increment value' class IncrementThenDoubleSimObject(de_sim.SimulationObject): """ Execute Increment before Double, demonstrating superposition """ def __init__(self, name): super().__init__(name) self.value = 0 def init_before_run(self): self.send_events() def handle_superposed_events(self, event_list): """ Process superposed events in an event list Each Increment message increments value, and each Double message doubles value. Assumes that `event_list` contains an Increment event followed by a Double event. Args: event_list (:obj:`event_list` of :obj:`de_sim.Event`): list of events """ for event in event_list: if isinstance(event.message, Increment): self.value += 1 elif isinstance(event.message, Double): self.value *= 2 self.send_events() # The order of the message types in event_handlers, (Increment, Double), determines # the sort order of messages in `event_list` received by `handle_superposed_events` event_handlers = [(Increment, 'handle_superposed_events'), (Double, 'handle_superposed_events')] def send_events(self): # To show that the simulator delivers event messages to `handle_superposed_events` # sorted into the order (Increment, Double), send them in a random order. 
if random.randrange(2): self.send_event(1, self, Double()) self.send_event(1, self, Increment()) else: self.send_event(1, self, Increment()) self.send_event(1, self, Double()) # Register the message types sent messages_sent = (Increment, Double) class TestSuperposition(object): def increment_then_double_from_0(self, iterations): v = 0 for _ in range(iterations): v += 1 v *= 2 return v def test_superposition(self, max_time): simulator = de_sim.Simulator() simulator.add_object(IncrementThenDoubleSimObject('name')) simulator.initialize() simulator.simulate(max_time) for sim_obj in simulator.get_objects(): assert sim_obj.value, self.increment_then_double_from_0(max_time) print(f'Simulation to {max_time} executed all messages in the order (Increment, Double).') TestSuperposition().test_superposition(20) ``` This example shows how event superposition handles simultaneous events. An `IncrementThenDoubleSimObject` simulation object stores an integer value. It receives two events every time unit, one carrying an `Increment` message and another containing a `Double` message. Executing an `Increment` event increments the value, while executing a `Double` message event doubles the value. The design for `IncrementThenDoubleSimObject` requires that it increments before doubling. Several features of DE-Sim and `IncrementThenDoubleSimObject` ensure this behavior: 1. The mapping between event message types and event handlers, stored in the list `event_handlers`, contains `Increment` before `Double`. This gives events containing an `Increment` message a higher priority than events containing `Double`. 2. Under the covers, when DE-Sim passes superposed events to a subclass of [`SimulationObject`](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.simulation_object.SimulationObject), it sorts the messages by their (event message priority, event message content), which sorts events with higher priority message types earlier. 3. 
The message handler `handle_superposed_events` receives a list of events and executes them in order. To challenge and test this superposition mechanism, the `send_events()` method in `IncrementThenDoubleSimObject` randomizes the order in which it sends `Increment` and `Double` events. Finally, `TestSuperposition().test_superposition()` runs a simulation of `IncrementThenDoubleSimObject` and asserts that the value it computes equals the correct value for a sequence of increment and double operations. ### Global mechanism: simultaneous event messages at multiple simulation objects A *global* mechanism is needed to ensure that simultaneous events which occur at distinct objects in a simulation are executed in a deterministic order. Otherwise, the discrete-event simulator might execute simultaneous events at distinct simulation objects in a different order in different simulation runs that use the same input. When using a simulator that allows 0-delay event messages or global state shared between simulation objects -- both of which DE-Sim supports -- this can alter the simulation's predictions and thereby imperil debugging efforts, statistical analyses of predictions and other essential uses of simulation results. The global mechanism employed by DE-Sim conceives of the simulation time as a pair -- the event time, and a *sub-time* which breaks event time ties. Sub-time values within a particular simulation time must be distinct. Given that constraint, many approaches for selecting the sub-time would achieve the objective. DE-Sim creates a distinct sub-time from the state of the simulation object receiving an event. The sub-time is a pair composed of a priority assigned to the simulation class and a unique identifier for each class instance. Each simulation class defines a `class_priority` attribute that determines the relative execution order of simultaneous events by different simulation classes. 
Among multiple instances of a simulation class, the attribute `event_time_tiebreaker`, which defaults to a simulation instance's unique name, breaks ties. All classes have the same default priority of `LOW`. If class priorities are not set and `event_time_tiebreaker`s are not set for individual simulation objects, then an object's global priority is given by its name. ``` from de_sim.simulation_object import SimObjClassPriority class ExampleMsg(de_sim.EventMessage): 'Example message' class NoPrioritySimObj(de_sim.SimulationObject): def init_before_run(self): self.send_event(0., self, ExampleMsg()) # register the message types sent messages_sent = (ExampleMsg, ) class LowPrioritySimObj(NoPrioritySimObj): def handler(self, event): print(f"{self.time}: LowPrioritySimObj {self.name} running") self.send_event(1., self, ExampleMsg()) event_handlers = [(ExampleMsg, 'handler')] # have `LowPrioritySimObj`s execute at low priority class_priority = SimObjClassPriority.LOW class MediumPrioritySimObj(NoPrioritySimObj): def handler(self, event): print(f"{self.time}: MediumPrioritySimObj {self.name} running") self.send_event(1., self, ExampleMsg()) event_handlers = [(ExampleMsg, 'handler')] # have `MediumPrioritySimObj`s execute at medium priority class_priority = SimObjClassPriority.MEDIUM simulator = de_sim.Simulator() simulator.add_object(LowPrioritySimObj('A')) simulator.add_object(MediumPrioritySimObj('B')) simulator.initialize() print(simulator.simulate(2).num_events, 'events executed') ``` This example illustrates the scheduling of simultaneous event messages. `SimObjClassPriority` is an `IntEnum` that provides simulation object class priorities, including `LOW`, `MEDIUM`, and `HIGH`. We create two classes, `LowPrioritySimObj` and `MediumPrioritySimObj`, with `LOW` and `MEDIUM` priorities, respectively, and execute them simultaneously at simulation times 0, 1, 2, ... At each time, the `MediumPrioritySimObj` object runs before the `LowPrioritySimObj` one. 
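The (event time, class priority, instance tiebreaker) ordering described above can be mimicked with a plain `heapq` priority queue. This standalone sketch uses made-up priorities and names, not DE-Sim's internal representation:

```python
import heapq

# Each entry: (event_time, class_priority, tiebreaker, payload).
# Tuples compare element-wise, so ties on time fall through to the
# class priority, and ties on priority fall through to the name.
queue = []
heapq.heappush(queue, (1.0, 2, "B", "low-priority event"))
heapq.heappush(queue, (1.0, 1, "A", "medium-priority event"))
heapq.heappush(queue, (0.5, 2, "C", "earlier event"))

order = [heapq.heappop(queue)[3] for _ in range(len(queue))]
# The earlier event runs first; then the medium-priority class wins the tie
```

Any total order on the sub-time would give deterministic replays; the tuple comparison above is simply the cheapest way to get one in Python.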
#### Execution order of objects without an assigned `class_priority` The next example shows the ordering of simultaneous events executed by objects that don't have assigned priorities. ``` class DefaultPrioritySimObj(NoPrioritySimObj): def handler(self, event): print(f"{self.time}: DefaultPrioritySimObj {self.name} running") self.send_event(1., self, ExampleMsg()) event_handlers = [(ExampleMsg, 'handler')] simulator = de_sim.Simulator() for name in random.sample(range(10), k=3): sim_obj = DefaultPrioritySimObj(str(name)) print(f"{sim_obj.name} priority: {sim_obj.class_event_priority.name}") simulator.add_object(sim_obj) simulator.initialize() print(simulator.simulate(2).num_events, 'events executed') ``` In this example, the [`SimulationObject`s](https://docs.karrlab.org/de_sim/master/source/de_sim.html#de_sim.simulation_object.SimulationObject) have no priorities assigned, so their default priorities are `LOW`. (The `class_event_priority` attribute of a simulation object is a `SimObjClassPriority`) Three objects with names randomly selected from '0', '1', ..., '9', are created. When they execute simultaneously, events are ordered by the sort order of the objects' names. #### Execution order of instances of simulation object classes with relative priorities Often, a modeler wants to control the *relative* simultaneous priorities of simulation objects, but does not care about their absolute priorities. The next example shows how to specify relative priorities. 
``` class FirstNoPrioritySimObj(NoPrioritySimObj): def handler(self, event): print(f"{self.time}: FirstNoPrioritySimObj {self.name} running") self.send_event(1., self, ExampleMsg()) event_handlers = [(ExampleMsg, 'handler')] class SecondNoPrioritySimObj(NoPrioritySimObj): def handler(self, event): print(f"{self.time}: SecondNoPrioritySimObj {self.name} running") self.send_event(1., self, ExampleMsg()) event_handlers = [(ExampleMsg, 'handler')] # Assign decreasing priorities to classes in [FirstNoPrioritySimObj, SecondNoPrioritySimObj] SimObjClassPriority.assign_decreasing_priority([FirstNoPrioritySimObj, SecondNoPrioritySimObj]) simulator = de_sim.Simulator() simulator.add_object(SecondNoPrioritySimObj('A')) simulator.add_object(FirstNoPrioritySimObj('B')) for sim_obj in simulator.simulation_objects.values(): print(f"{type(sim_obj).__name__}: {sim_obj.name}; " f"priority: {sim_obj.class_event_priority.name}") simulator.initialize() print(simulator.simulate(2).num_events, 'events executed') ``` The `assign_decreasing_priority` method of `SimObjClassPriority` takes an iterator over `SimulationObject` subclasses, and assigns them decreasing simultaneous event priorities. The `FirstNoPrioritySimObj` instance therefore executes before the `SecondNoPrioritySimObj` instance at each discrete simulation time.
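Stepping back to the local mechanism, the (message priority, message content) sort that event superposition relies on can also be sketched without DE-Sim. The message types and priority values here are illustrative only:

```python
# Lower value = higher priority, mirroring the order of `event_handlers`
priorities = {"Increment": 0, "Double": 1}

# Simultaneous events as (message type, content) pairs, in arrival order
events = [("Double", "b"), ("Increment", "z"), ("Double", "a"), ("Increment", "a")]

# Sort by (message-type priority, content) so replays are deterministic
ordered = sorted(events, key=lambda e: (priorities[e[0]], e[1]))
# All Increment events now precede all Double events, ties broken by content
```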
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# statsmodels.api (not formula.api) provides the array-based sm.OLS used below
import statsmodels.api as sm
%matplotlib inline
```

# Build dataframe with data for plotting

http://koaning.io/radial-basis-functions.html

$\phi_{i} (x) = exp\left( \frac{-1}{2 \alpha} (x-m_i)^2 \right)$

```
def rbf(x, alpha, m):
    return np.exp(-1/(2*alpha)*(x-m)**2)

# Generate data with rbf
x = np.arange(-2, 5, 0.01)
df = pd.DataFrame(data=x, columns=['x']).set_index('x')

## make 5 radial functions, with different m
for i in range(5):
    df[str(i)] = rbf(x, .25, i)
df.rename_axis('m', axis='columns', inplace=True)
df.head()

df.plot();
```

# Example: pattern learning with rbf and linear regression

Assuming $\alpha=1$, then $\phi_{i} (x) = exp\left( \frac{-1}{2} (x-m_i)^2 \right)$

## Generate monthly data

```
# Generate rbf
x = np.arange(0, 12, 0.01)
df = pd.DataFrame(data=x, columns=['x'])
df['y'] = np.sin(x) + 2*np.cos(x/2) + np.random.normal(loc=0.0, scale=.2, size=len(df))
df.head()

df.plot(kind='scatter', x='x', y='y');
```

## With $\alpha =1$ add a radial function for each month

```
for i in range(1, 13):
    df[str(i)] = rbf(df['x'], 1, i)
df.head()

df.set_index('x').plot(figsize=(12,6));
```

## Plot two different fits: one with floor and the other with rbf

```
dfz = pd.DataFrame()
for i in range(1, 13):
    dfz[i] = np.floor(df['x'])
```

# Forecasting bridge data with rbf linear regression

```
# lin reg with z model
X = dfz
Y = df['y']
model = sm.OLS(Y, X).fit()
df['model_z'] = model.predict(X)
model.summary()

df.columns

# lin reg with rbf
X = df[['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12']]
Y = df['y']
model = sm.OLS(Y, X).fit()
df['model_rbf'] = model.predict(X)
model.summary()

df[['model_z', 'x', 'model_rbf', 'y']].plot(x='x', alpha=.5);
```

# Testing linear regression methods and results

```
x = np.arange(-1, 1.1, .1)
df = pd.DataFrame(data=x, columns=['x'])
df['intercept'] = np.ones(len(df))
df['y'] = 2*df['x'] + 3 + np.random.normal(loc=0.0, scale=.2, size=len(df))
df.head()

df.plot(kind='scatter', x='x', y='y');

# fit the simple line (x + intercept) so the 'model' column exists before plotting
X = df[['x', 'intercept']]
Y = df['y']
model = sm.OLS(Y, X).fit()
df['model'] = model.predict(X)

df.plot(x='x', y=['y', 'model']);
```

## add more independent variables

```
x = np.arange(-1, 1.1, .1)
df = pd.DataFrame(data=x, columns=['x'])
df['intercept'] = np.ones(len(df))
df['cos'] = 2*np.cos(df['x'])
df['y'] = 2*df['x'] + 3 + df['cos'] + np.random.normal(loc=0.0, scale=.2, size=len(df))
df.head()

df.plot(kind='scatter', x='x', y='y');

X = df[['x', 'intercept', 'cos']]
Y = df['y']
model = sm.OLS(Y, X).fit()
df['model'] = model.predict(X)
model.summary()

df[['x', 'y', 'model']].plot(x="x");
```

## add non-contributing column

```
x = np.arange(-1, 1.1, .1)
df = pd.DataFrame(data=x, columns=['x'])
df['intercept'] = np.ones(len(df))
df['cos'] = 2*np.cos(df['x'])
df['sin'] = 2*np.sin(df['x'])
df['y'] = 2*df['x'] + 3 + df['cos'] + np.random.normal(loc=0.0, scale=.2, size=len(df))
df.head()

X = df[['x', 'intercept', 'cos', 'sin']]
Y = df['y']
model = sm.OLS(Y, X).fit()
df['model'] = model.predict(X)
model.summary()

df[['x', 'y', 'model']].plot(x="x");
```
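As a quick sanity check on the radial basis function defined at the top of this section, the same formula can be exercised in isolation with the standard library (a standalone sketch):

```python
import math

def rbf(x, alpha, m):
    # phi_i(x) = exp(-(x - m)^2 / (2 * alpha))
    return math.exp(-1.0 / (2.0 * alpha) * (x - m) ** 2)

# The bump peaks at its center m and is symmetric about it
assert rbf(3.0, 0.25, 3.0) == 1.0
assert abs(rbf(2.5, 0.25, 3.0) - rbf(3.5, 0.25, 3.0)) < 1e-12

# Larger alpha gives a wider bump (slower decay away from m)
assert rbf(4.0, 1.0, 3.0) > rbf(4.0, 0.25, 3.0)
```

These properties (unit peak at the center, symmetry, width controlled by $\alpha$) are exactly what makes one basis function per month a reasonable feature set for the monthly pattern above.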
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
import pandas as pd
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
# mpl.rcParams['text.latex.unicode'] was removed in matplotlib >= 3.0;
# unicode is always supported when text.usetex is enabled.

blues = plt.get_cmap('Blues')
greens = plt.get_cmap('Greens')
reds = plt.get_cmap('Reds')
oranges = plt.get_cmap('Oranges')
purples = plt.get_cmap('Purples')
greys = plt.get_cmap('Greys')

from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

import warnings
warnings.filterwarnings('ignore')

des2a = pd.read_csv('../Data/design2a.csv')
des2b = pd.read_csv('../Data/design2b.csv')

def plot_design(des, ylim0=None, ylim1=None):
    """Plot one design: CPU/GPU mean-time bars (top row) and image-size
    histograms (bottom row) for each of the four nodes."""
    fig, axis = plt.subplots(nrows=2, ncols=4, figsize=(11, 5), sharex='row', sharey='row')
    if ylim0 is not None:
        axis[0, 0].set_ylim(ylim0)
    if ylim1 is not None:
        axis[1, 0].set_ylim(ylim1)
    for node in range(4):
        axis[0, node].bar(0, des['CpuMeanTime'][node], yerr=des['CpuStdTime'][node], color=blues(150))
        axis[0, node].bar(1, des['GpuMeanTime'][node], yerr=des['GpuStdTime'][node], color=reds(150))
        axis[0, node].set_xticks([0, 1])
        axis[0, node].set_xticklabels(['CPUs', 'GPUs'], fontsize=22)
        axis[0, node].set_title(f'Node {node + 1}', fontsize=24)
        axis[1, node].hist(eval(des['Images'][node]), bins=50)
        axis[1, node].set_xlabel('Image Size\nin MBs', fontsize=24)
        axis[1, node].set_xticklabels(axis[1, node].get_xticks().astype('int').tolist(), fontsize=22)
        for row in range(2):
            axis[row, node].grid('on', linestyle=':', linewidth=2)
    axis[0, 0].set_ylabel('Time', fontsize=24)
    axis[1, 0].set_ylabel('Number of\nImages', fontsize=24)
    axis[0, 0].set_yticklabels(axis[0, 0].get_yticks().astype('int').tolist(), fontsize=22)
    axis[1, 0].set_yticklabels(axis[1, 0].get_yticks().astype('int').tolist(), fontsize=22)
    return fig, axis

fig, axis = plot_design(des2a)
#fig.savefig('design2_timelines.pdf', dpi=800, bbox_inches='tight')

# Reuse the first figure's axis limits so both designs share a common scale
level0 = axis[0, 0].get_ylim()
level1 = axis[1, 0].get_ylim()

fig, axis = plot_design(des2b, ylim0=level0, ylim1=level1)
#fig.savefig('design2a_timelines.pdf', dpi=800, bbox_inches='tight')
```
# Hand Gesture Detection using OpenCV

This code template is for hand gesture detection in a video using the OpenCV and MediaPipe libraries.

### Required Packages

```
!pip install opencv-python
!pip install mediapipe
import cv2
import mediapipe as mp
import time
```

### Hand Detection

For detecting hands in the image, we use MediaPipe's `Hands` solution. It detects hand landmarks in an RGB frame; the detected landmarks can then be drawn on the hands.

#### Tuning Parameters:

**mode** - Whether to treat the input as a batch of static images (`True`) or as a video stream (`False`).

**maxHands** - Maximum number of hands to detect.

**detectionCon** - Minimum confidence value for the hand detection to be considered successful.

**trackCon** - Minimum confidence value for the hand landmarks to be considered tracked successfully.

```
class handDetector():
    def __init__(self, mode=False, maxHands=2, detectionCon=0.5, trackCon=0.5):
        self.mode = mode
        self.maxHands = maxHands
        self.detectionCon = detectionCon
        self.trackCon = trackCon
        self.mpHands = mp.solutions.hands
        # Keyword arguments avoid positional mismatches across mediapipe versions.
        self.hands = self.mpHands.Hands(static_image_mode=self.mode,
                                        max_num_hands=self.maxHands,
                                        min_detection_confidence=self.detectionCon,
                                        min_tracking_confidence=self.trackCon)
        self.mpDraw = mp.solutions.drawing_utils

    def findHands(self, img, draw=True):
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        self.results = self.hands.process(imgRGB)
        if self.results.multi_hand_landmarks:
            for handLms in self.results.multi_hand_landmarks:
                if draw:
                    self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS)
        return img

    def findPosition(self, img, handNo=0, draw=True):
        lmlist = []
        if self.results.multi_hand_landmarks:
            myHand = self.results.multi_hand_landmarks[handNo]
            for id, lm in enumerate(myHand.landmark):
                h, w, c = img.shape
                # Landmark coordinates are normalized to [0, 1]; scale to pixels.
                cx, cy = int(lm.x * w), int(lm.y * h)
                lmlist.append([id, cx, cy])
                if draw:
                    cv2.circle(img, (cx, cy), 3, (255, 0, 255), cv2.FILLED)
        return lmlist
```

To run the handDetector(), save this file with a .py extension and allow your webcam to take a video. The coordinates will be printed to the terminal.

```
pTime = 0
cTime = 0
cap = cv2.VideoCapture(0)
detector = handDetector()
while True:
    success, img = cap.read()
    if not success:
        break
    img = detector.findHands(img)
    lmlist = detector.findPosition(img)
    if len(lmlist) != 0:
        print(lmlist[4])   # landmark 4 is the thumb tip
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)
    cv2.imshow("Image", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```

#### Creator: Ayush Gupta, Github: [Profile](https://github.com/guptayush179)
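The landmark scaling inside `findPosition` can be checked in isolation: MediaPipe returns x/y normalized to [0, 1], and the code multiplies by the frame width/height. A minimal sketch follows (the helper name `to_pixel` is ours, not part of the template):

```python
# MediaPipe hand landmarks are normalized to [0, 1]; findPosition() above
# scales them by the frame dimensions to get integer pixel coordinates.
def to_pixel(lm_x, lm_y, img_shape):
    h, w = img_shape[:2]          # img.shape is (height, width, channels)
    return int(lm_x * w), int(lm_y * h)

# A landmark at the horizontal center, a quarter of the way down a 640x480 frame:
print(to_pixel(0.5, 0.25, (480, 640, 3)))  # → (320, 120)
```

Note that `img.shape` gives height first, so the width `w` multiplies `lm_x` and the height `h` multiplies `lm_y`; swapping them is a common source of off-frame circles.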
The purpose of this notebook is to convert the wide-format car data to long format. The car data comes from the mlogit package. The data description is reproduced below. Note the data originally comes from McFadden and Train (2000).

#### Description
- Cross-Sectional Dataset
- Number of Observations: 4,654
- Unit of Observation: Individual
- Country: United States

#### Format
A dataframe containing:
- choice: choice of a vehicle among 6 propositions
- college: college education?
- hsg2: size of household greater than 2?
- coml5: commute lower than 5 miles a day?
- typez: body type for proposition z (z from 1 to 6), one of regcar (regular car), sportuv (sport utility vehicle), sportcar, stwagon (station wagon), truck, van
- fuelz: fuel for proposition z, one of gasoline, methanol, cng (compressed natural gas), electric
- pricez: price of the vehicle divided by the logarithm of income
- rangez: hundreds of miles the vehicle can travel between refuelings/rechargings
- accz: acceleration, tens of seconds required to reach 30 mph from a stop
- speedz: highest attainable speed in hundreds of mph
- pollutionz: tailpipe emissions as a fraction of those for a new gas vehicle
- sizez: 0 for a mini, 1 for a subcompact, 2 for a compact and 3 for a mid-size or large vehicle
- spacez: fraction of luggage space relative to a comparable new gas vehicle
- costz: cost per mile of travel (tens of cents); either the cost of home recharging for an electric vehicle or the cost of station refueling otherwise
- stationz: fraction of stations that can refuel/recharge the vehicle

#### Source
McFadden, Daniel and Kenneth Train (2000) "Mixed MNL models for discrete response", Journal of Applied Econometrics, 15(5), 447–470.
Journal of Applied Econometrics data archive: http://jae.wiley.com/jae/

```
import pandas as pd
import numpy as np

import pylogit as pl
```

# Load the Car data

```
wide_car = pd.read_csv("../data/raw/car_wide_format.csv")
wide_car.head().T
```

# Convert the Car dataset to long-format

```
# Look at the columns of the car data
print(wide_car.columns.tolist())

# Create the list of individual specific variables
ind_variables = wide_car.columns.tolist()[1:4]

# Specify the variables that vary across individuals and some or all alternatives.
# The keys are the column names that will be used in the long format dataframe.
# The values are dictionaries whose key-value pairs are the alternative id and
# the column name of the corresponding column that encodes that variable for
# the given alternative. Examples below.
new_name_to_old_base = {'body_type': 'type{}',
                        'fuel_type': 'fuel{}',
                        'price_over_log_income': 'price{}',
                        'range': 'range{}',
                        'acceleration': 'acc{}',
                        'top_speed': 'speed{}',
                        'pollution': 'pollution{}',
                        'vehicle_size': 'size{}',
                        'luggage_space': 'space{}',
                        'cents_per_mile': 'cost{}',
                        'station_availability': 'station{}'}

alt_varying_variables =\
    {k: dict([(x, v.format(x)) for x in range(1, 7)])
     for k, v in list(new_name_to_old_base.items())}

# Specify the availability variables.
# Note that the keys of the dictionary are the alternative ids.
# The values are the columns denoting the availability for the
# given mode in the dataset.
availability_variables =\
    {x: 'avail_{}'.format(x) for x in range(1, 7)}
for col in availability_variables.values():
    wide_car[col] = 1

##########
# Determine the columns for: alternative ids, the observation ids and the choice
##########
# The 'custom_alt_id' is the name of a column to be created in the long-format data.
# It will identify the alternative associated with each row.
custom_alt_id = "alt_id"

# Create a custom id column that ignores the fact that this is a
# panel/repeated-observations dataset. Note the +1 ensures the ids start at one.
obs_id_column = "obs_id"
wide_car[obs_id_column] =\
    np.arange(1, wide_car.shape[0] + 1, dtype=int)

# Create a variable recording the choice column
choice_column = "choice"

# Store the original choice column in a new variable
wide_car['orig_choices'] = wide_car['choice'].values

# Alter the original choice column
choice_str_to_value = {'choice{}'.format(x): x for x in range(1, 7)}
wide_car[choice_column] =\
    wide_car[choice_column].map(choice_str_to_value)

# Convert the wide-format data to long format
long_car =\
    pl.convert_wide_to_long(wide_data=wide_car,
                            ind_vars=ind_variables,
                            alt_specific_vars=alt_varying_variables,
                            availability_vars=availability_variables,
                            obs_id_col=obs_id_column,
                            choice_col=choice_column,
                            new_alt_id_name=custom_alt_id)
long_car.head().T

# Save the long-format data
long_car.to_csv("../data/interim/car_long_format.csv", index=False)
```
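The reshaping that `pl.convert_wide_to_long` performs can be illustrated with a tiny pure-Python sketch. The two-alternative table below is made up for illustration (the column names `price1`/`price2` and the values are hypothetical, not the car data):

```python
# Toy wide-format records: one dict per chooser.
# 'income' is individual-specific; 'price1'/'price2' are alternative-specific.
wide = [
    {'obs_id': 1, 'income': 40, 'price1': 10.0, 'price2': 11.5, 'choice': 1},
    {'obs_id': 2, 'income': 55, 'price1': 12.0, 'price2': 9.0,  'choice': 2},
]

long_rows = []
for rec in wide:
    for alt in (1, 2):
        long_rows.append({
            'obs_id': rec['obs_id'],
            'alt_id': alt,                         # one row per alternative
            'income': rec['income'],               # copied onto every row
            'price': rec['price{}'.format(alt)],   # picked per alternative
            'choice': int(rec['choice'] == alt),   # 1 only on the chosen row
        })

print(len(long_rows))  # → 4 (2 choosers x 2 alternatives)
```

This is exactly the shape pylogit expects: each observation contributes one row per available alternative, individual-specific variables are repeated, and the choice column becomes a 0/1 indicator.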
# Score for the Fed's dual mandate

The U.S. Congress established three key objectives for monetary policy in the Federal Reserve Act: *maximum employment, stable prices*, and moderate long-term interest rates. The first two objectives are sometimes referred to as the Federal Reserve's **dual mandate**. Here we examine unemployment and inflation data to construct a time series which gives a numerical score to the Fed's performance on the dual mandate. (This notebook could be extended to studies of the **Phillips curve**, see Appendix 1.)

The key is to find comparable units to measure performance and a suitable scalar measure to show deviation from the dual mandate. Our visualization features *time-sequential* scatter plots using a color *heat* map.

Short URL: https://git.io/phillips

*Dependencies:*

- fecon235 repository https://github.com/rsvp/fecon235
- Python: matplotlib, pandas

*CHANGE LOG*

2016-11-14  Fix #2 by v5 and PREAMBLE-p6.16.0428 upgrades. Switch from fecon to fecon235 for main import module. Minor edits given additional year of data.

2015-12-15  Switch to yi_0sys dependencies. Phillips curve.

2015-11-18  First version.

```
from fecon235.fecon235 import *

#  PREAMBLE-p6.16.0428 :: Settings and system details
from __future__ import absolute_import, print_function
system.specs()
pwd = system.getpwd()   # present working directory as variable.
print(" ::  $pwd:", pwd)
#  If a module is modified, automatically reload it:
%load_ext autoreload
%autoreload 2
#       Use 0 to disable this feature.

#  Notebook DISPLAY options:
#      Represent pandas DataFrames as text; not HTML representation:
import pandas as pd
pd.set_option('display.notebook_repr_html', False)
from IPython.display import HTML  # useful for snippets
#  e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')
from IPython.display import Image
#  e.g. Image(filename='holt-winters-equations.png', embed=True)  # url= also works
from IPython.display import YouTubeVideo
#  e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)
from IPython.core import page
get_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)
#  Or equivalently in config file: "InteractiveShell.display_page = True",
#  which will display results in secondary notebook pager frame in a cell.

#  Generate PLOTS inside notebook, "inline" generates static png:
%matplotlib inline
#  "notebook" argument allows interactive zoom and resize.
```

## Comparable unit for comparison

A 1% change in inflation may have different economic significance than a 1% change in unemployment. We retrieve historical data, then let one standard deviation represent one unit for scoring purposes. Note that the past scores will thus be represented *ex-post*, which is to say, influenced by recent data.

The dual mandate is not required to be explicitly stated numerically by the Federal Reserve, so we assign explicit target values for unemployment and inflation (those frequently mentioned in Congressional testimonies and used as benchmarks by the market). These targets can be finely reset below so that the Fed performance can be re-evaluated.

Sidenote: "The natural rate of unemployment [**NAIRU**, *non-accelerating inflation rate of unemployment*] is the rate of unemployment arising from all sources except fluctuations in aggregate demand. Estimates of potential GDP are based on the long-term natural rate. The short-term natural rate is used to gauge the amount of current and projected slack in labor markets." See https://research.stlouisfed.org/fred2/series/NROU and Appendix 1 for details.

```
#  Set DUAL MANDATE, assumed throughout this notebook:
unem_target = 5.0
infl_target = 2.0
#  The Fed varies the targets over time,
#  sometimes only implicitly. So for example,
#  there may be disagreement among Board members
#  regarding NAIRU -- but we set it to what
#  seems to be the prevalent market assumption.
```

## Unemployment rate

```
unem = get( m4unemp )   # m4 implies monthly frequency.
#
#  Starts 1948-01-01, uncomment to view:
#  stats( unem )

#  Standard deviation for unemployment rate:
unem_std = unem.std()
unem_std

#  Uncomment to plot raw unemployment rate:
#  plot( unem )

#  Score unemployment as standard deviations from target:
unem_score = todf( (unem - unem_target) / unem_std )
```

## Inflation

```
#  Use synthetic inflation
#  which averages CPI and PCE for both headline and core versions:
infl_level = get( m4infl )

#  Get the YoY inflation rate:
infl = todf(pcent( infl_level, 12 ))
#
#  Starts 1960-01-01, uncomment to view:
#  stats(infl)

infl_std = infl.std()
infl_std

#  Uncomment to plot inflation rate:
#  plot( infl )

#  Score inflation as standard deviations from target:
infl_score = todf( (infl - infl_target) / infl_std )
```

## Expressing duality as complex number

We encode each joint score for unemployment and inflation into a single complex number. Let *u* be the unemployment score and *i* the inflation score. (Note: we follow the Python/engineering convention by letting **j** be the imaginary number $\sqrt{-1}$.) So let *z* be our dual encoding as follows:

$ z = u + i \mathbf{j} $

(In the history of mathematics, this was the precursor to the idea of a *vector*.)

```
#  Let's start constructing our 4-column dataframe:
scores = paste( [unem_score, infl_score, infl_score, infl_score] )
#  Third and fourth columns are dummy placeholders to be replaced later.

#  Give names to the scores columns:
scores.columns = ['z_unem', 'z_infl', 'z', 'z_norm']

#  Fill in THIRD column z as complex number per our discussion:
scores.z = scores.z_unem + (scores.z_infl * 1j)
#  The imaginary number in Python is written as 1j,
#  since j may be a variable elsewhere.
```

## Computing the Fed score

Each dual score can be interpreted as a vector in the complex plane. Its component parts, real for unemployment and imaginary for inflation, measure deviation from respective targets in units expressed as standard deviations.

*Our **key idea** is to use the length of this vector (from the origin, 0+0**j**, representing the dual mandate) as the Fed score* = |z|. Python, which natively handles complex numbers, can compute the *norm* of such a vector using abs(z).

Later we shall visualize the trajectory of the component parts using a color heat map.

```
#  Finally fill-in the FOURTH placeholder column:
scores.z_norm = abs( scores.z )

#  Tail end of recent scores:
tail( scores )
#  ... nicely revealing the data structure:
```

## Visualizing Fed scores

z_norm is expressed in standard deviation units, thus it truly represents deviation from the dual mandate on a Gaussian scale.

```
#  Define descriptive dataframe from our mathematical construct:
fed_score = todf( scores.z_norm )

#  FED SCORE
plot( fed_score )
```

*We can say that a score greater than 2 is definitively cause for concern. For example, between 1974 and 1984, there are two peaks extending into 4 standard deviations, mainly due to high inflation. The Fed score during the Great Recession hit 3, mainly due to severe unemployment.*

```
stats( fed_score )
```

## Remarks on fed_score

Our fed_score *handles both positive and negative deviations from the Fed's dual mandate*; moreover, it handles them jointly using a historically fair measuring unit: the standard deviation. This avoids using ad-hoc weights to balance the importance between unemployment and inflation.

The **fed_score is always a positive real number (since it is a norm) which is zero if and only if the Fed has achieved targets** (which we have explicitly specified). *That score can be interpreted as the number of standard deviations away from the dual mandate.*

Our **fed_score** can also be simply interpreted as an economic **crisis level** indicator (much like a *n-alarm* fire) when Fed monetary policy becomes crucial for the US economy.

Since 1960, ex-post fed_score averages around 1.31, where the mid-50% percentile range is approximately [0.47, 1.79]. But keep in mind that this computation relies on our current fixed targets, so fed_score is most useful in assessing recent performance from the perspective of historical variance. This means that the fed_score for a particular point in time may change as new incoming data arrives.

2015-12-15 notes: The current fed_score is 0.46, gravitating towards zero as the Fed is preparing its first rate hike in a decade.

2016-11-14 notes: The current fed_score is 0.14 as the Fed is expected to announce the second rate hike since the Great Recession. The labor market has exhibited sustained improvement.

## CONCLUSION: visualize fed_score components over time

When the FOMC makes its releases, the market becomes fixated on the so-called blue dot chart of expected future interest rates by its members. The next scatter plot helps us to understand the motivations and constraints behind the Federal Reserve's monetary policy.

We see that the key components of the U.S. economy, unemployment and inflation, came back "full circle" to target from 2005 to 2016, or more accurately "full figure eight" with *major* (3 standard) deviations seen in the unemployment component.

```
#  Scatter plot of recent data using color heat map:
scatter( scores['2005':], col=[0, 1] )
#  (Ignore FutureWarning from matplotlib/collections.py:590
#   regarding elementwise comparison due to upstream numpy.)
```

In this scatter plot: z_unem is shown along the x-axis, z_infl along the y-axis, such that the color *heat* map depicts chronological movement from *blue to green to red*.
The coordinate (0, 0) represents our Fed dual mandate, so it is easy to see deviations from target. Geometrically, a point's distance from the origin is what we have computed as fed_score (= z_norm, in the complex plane).

- - - -

## Appendix 1: Phillips curve

The Phillips curve purports to explain the relationship between inflation and unemployment; however, as it turns out, the relationship is one of mere correlation during certain periods in a given country. It is too simplistic to assert that decreased unemployment (i.e. increased levels of employment) will cause the inflation rate to increase.

"Most economists no longer use the Phillips curve in its original form because it was shown to be too simplistic. This can be seen in a cursory analysis of US inflation and unemployment data from 1953-92. There is no single curve that will fit the data, but there are three rough aggregations—1955–71, 1974–84, and 1985–92—each of which shows a general, downwards slope, but at three very different levels with the shifts occurring abruptly. The data for 1953-54 and 1972-73 do not group easily, and a more formal analysis posits up to five groups/curves over the period.

Modern versions distinguish between short-run and long-run effects on unemployment. The "short-run Phillips curve" is also called the "expectations-augmented Phillips curve," since it shifts up when inflationary expectations rise. In the long run, this implies that monetary policy cannot affect unemployment, which adjusts back to its "natural rate", also called the "NAIRU" or "long-run Phillips curve". However, this long-run "neutrality" of monetary policy does allow for short run fluctuations and the ability of the monetary authority to temporarily decrease unemployment by increasing permanent inflation, and vice versa.
In many recent stochastic general equilibrium models, with sticky prices, there is a positive relation between the rate of inflation and the level of demand, and therefore a negative relation between the rate of inflation and the rate of unemployment. This relationship is often called the "New Keynesian Phillips curve." Like the expectations-augmented Phillips curve, the New Keynesian Phillips curve implies that increased inflation can lower unemployment temporarily, but cannot lower it permanently."

For more details, see https://en.wikipedia.org/wiki/Phillips_curve

Curiously, according to Edmund Phelps, who won the 2006 Nobel Prize in Economics, the **long-run Phillips curve is *vertical*** such that the rate of inflation has no effect on unemployment at its NAIRU. The name "NAIRU" arises because with actual unemployment below it, inflation accelerates, while with unemployment above it, inflation decelerates. With the actual rate equal to it, inflation is stable, neither accelerating nor decelerating.

```
Image(url="https://upload.wikimedia.org/wikipedia/commons/e/e3/NAIRU-SR-and-LR.svg", embed=False)
```

We know from our data that economic reality is far more complicated than the ideal diagram above. For example, in the late 1990s, the unemployment rate fell below 4%, much lower than almost all estimates of the NAIRU. But inflation stayed very moderate rather than accelerating. Since our z data captures the normalized version of unemployment vs inflation rates, it is easy to visualize the actual paths using our scatter plots.

```
#  Uncomment for
#  Scatter plot of selected data using color heat map:
#  scatter( scores['1995':'2000'], col=[0, 1] )
```
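As a standalone check of the encoding used throughout this notebook: Python's built-in complex type gives the norm via abs(), so a toy pair of standardized scores (hypothetical values, not the FRED series above) reproduces the fed_score arithmetic.

```python
import math

u, i = 3.0, 4.0            # toy standardized deviations, not actual data
z = complex(u, i)          # z = u + i*j, the dual-mandate encoding
fed_score = abs(z)         # distance from the origin (the dual mandate itself)

# abs() of a complex number is exactly the Euclidean norm sqrt(u^2 + i^2):
assert fed_score == math.hypot(u, i)
print(fed_score)  # → 5.0 for this 3-4-5 toy case
```

The toy score of 5 would be an extreme reading; recall from the plot above that the historical series peaks around 4 standard deviations.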
# 1 Getting started – Python, Platform and Jupyter

```
print("HelloWorld!")
```

# 2 Numpy

### 2.1 Arrays

##### 1. Run the following:

```
import numpy as np

a = np.array([1, 2, 3])
print(type(a))
print(a.shape)
print(a[0], a[1], a[2])
a[0] = 5
print(a[0], a[1], a[2])
```

What are the rank, shape of a, and the current values of a[0], a[1], a[2]?

Answer:
1. The rank of a is 1.
2. The shape of a is (3,).
3. a[0], a[1], a[2] = 5, 2, 3

##### 2. Run the following:

```
b = np.array([[1, 2, 3], [4, 5, 6]])
print(b[0, 0], b[0, 1], b[1, 0])
```

What are the rank, shape of b, and the current values of b[0, 0], b[0, 1], b[1, 0]?

Answer:
1. The rank of b is 2.
2. The shape of b is (2, 3).
3. b[0, 0], b[0, 1], b[1, 0] = 1, 2, 4

##### 3. Numpy also provides many functions to create arrays. Assign the correct comment to each of the following instructions:

```
a = np.zeros((2,2))          # an array of all zeros
b = np.ones((1,2))           # an array of all ones
c = np.full((2,2), 7)        # a constant array
d = np.eye(2)                # a 2x2 identity matrix
e = np.random.random((2,2))  # an array filled with random values
```

### 2.2 Array Indexing

##### 4. Run the following

```
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
b = a[:2, 1:3]
```

The shapes of a and b are:

```
print(a.shape)
print(b.shape)
```

The values in b are:

```
b
```

##### 5. A slice of an array is a view into the same data. Follow the comments in the following snippet

```
print(a[0, 1])   # Prints "2"
b[0, 0] = 77
print(a[0, 1])
```

The last line prints 77: modifying b also modifies the original array.

##### 6. You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing.
Create the following rank 2 array with shape (3, 4):

```
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])

row_r1 = a[1, :]    # Rank 1 view of the second row of a
row_r2 = a[1:2, :]  # Rank 2 view of the second row of a
print(row_r1, row_r1.shape)  # Prints "[5 6 7 8] (4,)"
print(row_r2, row_r2.shape)  # Prints "[[5 6 7 8]] (1, 4)"

col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
```

What are the values and shapes of col_r1 and col_r2?

```
print(col_r1, col_r1.shape)
print(col_r2, col_r2.shape)
```

##### 7. Run the following:

```
a = np.array([[1,2], [3, 4], [5, 6]])
print(a[[0, 1, 2], [0, 1, 0]])
print(np.array([a[0, 0], a[1, 1], a[2, 0]]))
```

They are equivalent.

##### 8. When using integer array indexing, you can duplicate several times the same element from the source array. Compare the following instructions:

```
b = a[[0, 0], [1, 1]]
c = np.array([a[0, 1], a[0, 1]])
print(b)
print(c)
```

They are equivalent.

##### 9. One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix. Run the following code

```
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
b = np.array([0, 2, 0, 1])
print(a[np.arange(4), b])
a[np.arange(4), b] += 10
print(a)
```

Observation: np.arange(4) = [0, 1, 2, 3] and b = [0, 2, 0, 1], so a[np.arange(4), b] selects [a[0,0], a[1,2], a[2,0], a[3,1]], that is [1, 6, 7, 11]. Modifying this directly via a[np.arange(4), b] += 10 also modifies the original values in a.

##### 10. Run the following

```
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2)
```

What is the result stored in bool_idx? Run the following: What are the values printed? What is their shape?

```
print(a[bool_idx])
print(a[a > 2])
print(a[bool_idx].shape)
print(a[a > 2].shape)
```

Their shapes are the same, (4,).

### 2.3 Datatypes

##### 11. Here is an example. What are the datatypes of x, y, z?
```
x = np.array([1, 2])                  # Let numpy choose the datatype
print(x.dtype)
y = np.array([1.0, 2.0])              # Let numpy choose the datatype
print(y.dtype)
z = np.array([1, 2], dtype=np.int32)  # Force a particular datatype
print(z.dtype)
```

### 2.4 Array math

##### 12. Give the output for each print statement in the following code snippet

```
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)

# Elementwise sum; both produce the array
print(x + y)
print(np.add(x, y))

# Elementwise difference; both produce the array
print(x - y)
print(np.subtract(x, y))

# Elementwise product; both produce the array
print(x * y)
print(np.multiply(x, y))

# Elementwise division; both produce the array
print(x / y)
print(np.divide(x, y))

# Elementwise square root; produces the array
print(np.sqrt(x))
```

##### 13. What are the mathematical operations performed by the last two instructions?

```
v = np.array([9,10])
w = np.array([11, 12])
print(v.dot(w))
print(np.dot(v, w))
```

Both perform the inner product of the two vectors: 9\*11 + 10\*12 = 219.

##### 14. What are the mathematical operations performed in the snippet below?

```
x = np.array([[1,2],[3,4]])
print(x.dot(v))
print(np.dot(x, v))
```

Both perform the matrix-vector product.

##### 15. Write a code to compute the product of the matrix x with the following matrix y

```
y = np.array([[5,6,7],[7,8,9]])
print(x.shape)
print(y.shape)
print(np.dot(x, y))
print(np.dot(y.T, x))

x = np.array([[1,2],[3,4]])
print(np.sum(x))          # Compute sum of all elements; prints "10"
print(np.sum(x, axis=0))  # Compute sum of each column; prints "[4 6]"
print(np.sum(x, axis=1))  # Compute sum of each row; prints "[3 7]"
```

##### 16. In the following

```
x = np.array([[1,2,3], [3,4,5]])
print(x)
print(x.T)
```

What is the transpose of x? How is it different from x?

The transpose exchanges rows and columns, i.e. it reflects the array across its main diagonal, so x.T has shape (3, 2) instead of (2, 3).

### 2.5 Broadcasting

##### 17.
Complete the following code in order to add the vector v to each row of a matrix x:

```
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1,0,1])
y = np.zeros(x.shape)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        y[i,j] = x[i, j] + v[j]
print(y)
```

##### 18. The previous solution works well; however, when the matrix x is very large, computing an explicit loop in Python could be slow. An alternative is to use tile. Run the following and interpret the result.

```
vv = np.tile(v, (4, 1))
print(vv)
```

Next, complete the code to compute y from x and vv without explicit loops.

```
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1,0,1])
vv = np.tile(v, (4, 1))
y = x + vv
print(y)
```

##### 19. Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. This is simply obtained as follows

```
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v   # Add v to each row of x using broadcasting
print(y)
```

Compute the outer product of v and w using broadcasting

```
v = np.array([1,2,3])  # v has shape (3,)
w = np.array([4,5])    # w has shape (2,)
print(np.reshape(v, (3, 1)) * w)
```

##### 20. Add v to each row of x using broadcasting

```
x = np.array([[1,2,3], [4,5,6]])
print(x + v)
```

##### 21. Add w to each column of x using broadcasting.

```
print(x + np.reshape(w, (2, 1)))
```

##### 22. Numpy treats scalars as arrays of shape (). Write a code to multiply x with 2.

```
print(x * 2)
```

### 2.6 Numpy Documentation

# 3 Matplotlib

### 3.1 Plotting

##### 23. An important function in matplotlib is plot, which allows you to plot 2D graphs. Here is a simple example:

```
import numpy as np
import matplotlib.pyplot as plt

# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)

# Plot the points using matplotlib
plt.plot(x, y)
plt.show()  # You must call plt.show() to make graphics appear.
```

##### 24.
Modify the above example by adding these extra instructions before plt.show()

```
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)

# Plot the points using matplotlib
plt.plot(x, y)
plt.xlim(0, 3 * np.pi)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sinusoidal curve(s)')
plt.legend(['Sine'])
plt.show()  # You must call plt.show() to make graphics appear.
```

##### 25. Write a code to plot both sine and cosine curves in the same plot with a proper legend.

```
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y1 = np.sin(x)
y2 = np.cos(x)

# Plot the points using matplotlib
plt.plot(x, y1)
plt.plot(x, y2)
plt.xlim(0, 3 * np.pi)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sin(x) and Cos(x) curve(s)')
plt.legend(['Sine', 'Cosine'])
plt.show()
```

### 3.2 Subplots

```
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)

# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)

# Make the first plot
plt.plot(x, y_sin)
plt.xlim(0, 3 * np.pi)
plt.title('Sine')
plt.show()
```

##### 26. Complete the above code to create a second subplot in the same grid representing the cosine function.

```
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)

# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)

# Make the first plot
plt.plot(x, y_sin)
plt.xlim(0, 3 * np.pi)
plt.title('Sine')

plt.subplot(2, 1, 2)

# Make the second plot
plt.plot(x, y_cos)
plt.xlim(0, 3 * np.pi)
plt.title('Cosine')

plt.show()
```

### 3.3 Images

```
import numpy as np
import imageio
import matplotlib.pyplot as plt

img = imageio.imread('assets/cat.jpg')
img_tinted = img * [1, 0.95, 0.9]

# Show the original image
plt.subplot(1, 2, 1)
plt.imshow(img)

# Show the tinted image
plt.subplot(1, 2, 2)

# A slight gotcha with imshow is that it might give strange results
# if presented with data that is not uint8. To work around this, we
# explicitly cast the image to uint8 before displaying it.
plt.imshow(np.uint8(img_tinted))
plt.show()
```
# ¿Cómo medir rendimiento y riesgo en un portafolio? II <img style="float: right; margin: 0px 0px 15px 15px;" src="http://www.picpedia.org/clipboard/images/stock-portfolio.jpg" width="600px" height="400px" /> > La clase pasada y la presente, están dedicadas a obtener medidas de rendimiento y riesgo en un portafolio. > Vimos que podemos obtener los rendimientos de un portafolio mediante la relación $r_p=\sum_{i=1}^{n}w_ir_i$, y una vez teniendo los rendimientos del portafolio, lo podemos tratar como un activo individual. > Por otra parte, vimos que si conocemos los rendimientos esperados de cada activo que conforma el portafolio $E[r_i]$, podemos calcular el rendimiento esperado del portafolio como el promedio ponderado de los rendimientos esperados de los activos $E[r_p]=\sum_{i=1}^{n}w_iE[r_i]$. > Sin embargo, vimos que esto no es válido para la medida de riesgo (desviación estándar). Es decir, la varianza (o volatilidad, o desviación estándar) no es el promedio ponderado de las varianzas individuales. Anticipamos que esto es clave en el concepto de **diversificación**. **Objetivos:** - Medir el riesgo en un portafolio a partir del riesgo de cada uno de los activos que lo conforman. *Referencias* - Notas del curso "Portfolio Selection and Risk Management", Rice University, disponible en Coursera. ___ ## 1. Midiendo el riesgo en un portafolio ### 1.1. Volatilidad de un portafolio Retomamos el ejemplo qur veníamos trabajando la clase pasada... **Ejemplo.** Supongamos que tenemos inversión en activos de Toyota, Walmart y Pfizer. 
We have four possible economic states:

```
import numpy as np
import pandas as pd

# Build the scenario table
tabla = pd.DataFrame(columns=['Prob', 'Toyota', 'Walmart', 'Pfizer'],
                     index=['Expansion', 'Normal', 'Recesion', 'Depresion'])
tabla.index.name = 'Estado'
tabla['Prob'] = np.array([0.1, 0.4, 0.3, 0.2])
tabla['Toyota'] = np.array([0.06, 0.075, 0.02, -0.03])
tabla['Walmart'] = np.array([0.045, 0.055, 0.04, -0.01])
tabla['Pfizer'] = np.array([0.025, -0.005, 0.01, 0.13])
tabla.round(4)

## Expected returns
# Toyota
ErT = (tabla['Prob'] * tabla['Toyota']).sum()
# Walmart
ErW = (tabla['Prob'] * tabla['Walmart']).sum()
# Pfizer
ErP = (tabla['Prob'] * tabla['Pfizer']).sum()
# Display
ErT, ErW, ErP

## Volatility
# Toyota
sT = (tabla['Prob'] * (tabla['Toyota'] - ErT)**2).sum()**0.5
# Walmart
sW = (tabla['Prob'] * (tabla['Walmart'] - ErW)**2).sum()**0.5
# Pfizer
sP = (tabla['Prob'] * (tabla['Pfizer'] - ErP)**2).sum()**0.5
# Display
sT, sW, sP

# Portfolio 0.5 Toyota + 0.5 Pfizer
tabla['PortTP'] = 0.5 * tabla['Toyota'] + 0.5 * tabla['Pfizer']
tabla

# Portfolio return (treated as an individual asset)
ErTP = (tabla['Prob'] * tabla['PortTP']).sum()
# Portfolio return (as the weighted sum of individual returns)
ErTP2 = 0.5 * ErT + 0.5 * ErP
ErTP, ErTP2

# Portfolio volatility
sTP = (tabla['Prob'] * (tabla['PortTP'] - ErTP)**2).sum()**0.5
sTP

# Note that sTP < 0.5 * sT + 0.5 * sP:
# the portfolio volatility is never greater than the weighted sum of the
# individual volatilities (and is strictly smaller unless the assets are
# perfectly correlated)
sTP, 0.5 * sT + 0.5 * sP, sTP < 0.5 * sT + 0.5 * sP
```

**Activity.** Find the volatility of the portfolio formed by $0.5$ Toyota and $0.5$ Walmart.
```
# Portfolio returns in each economic state
tabla['PortTW'] = 0.5 * tabla['Toyota'] + 0.5 * tabla['Walmart']
tabla

# Expected portfolio return
ErTW = (tabla['Prob'] * tabla['PortTW']).sum()
ErTW

# Portfolio volatility (that of Toyota and Walmart was computed above)
sTW = (tabla['Prob'] * (tabla['PortTW'] - ErTW)**2).sum()**0.5
sTW

# Note that sTW < 0.5 * sT + 0.5 * sW:
# again, the portfolio volatility is smaller than the weighted
# sum of the individual volatilities
sTW, 0.5 * sT + 0.5 * sW, sTW < 0.5 * sT + 0.5 * sW
```

### 1.2. Measuring co-movement between instruments

- Once again, we conclude that the portfolio volatility (variance) is **NOT** the weighted average of the individual variances.
- On the contrary, the variance of portfolio returns is affected by the relative movement of one individual asset with respect to another.
- We therefore need the notions of **covariance** and **correlation**, which let us assess the relative fluctuations between assets.

#### Covariance:

A measure of the relative movement between two instruments. Mathematically, if we have two assets $A_1$ and $A_2$ with returns $r_1$ and $r_2$, respectively, then the covariance of the asset returns is

$$\text{cov}(r_1,r_2)=\sigma_{12}=\sum_{j=1}^{m}p_j(r_{1j}-E[r_1])(r_{2j}-E[r_2]).$$

$$\text{cov}(r_2,r_1)=\sigma_{21}=\sum_{j=1}^{m}p_j(r_{2j}-E[r_2])(r_{1j}-E[r_1]) = \sigma_{12}.$$

We can easily see that the covariance of an asset's returns with themselves is the variance:

$$\text{cov}(r_1,r_1)=\sigma_{11}=\sum_{j=1}^{m} p_j(r_{1j}-E[r_1])(r_{1j}-E[r_1])=\sigma_1^2=\text{var}(r_1).$$

**Example.** Compute the covariance between the returns of Toyota and Pfizer.
```
# Show the table
tabla

# Compute the covariance
covTP = (tabla['Prob'] * (tabla['Toyota'] - ErT) * (tabla['Pfizer'] - ErP)).sum()
covTP
```

**Activity.** Compute the covariance between the returns of Toyota and Walmart.

```
# Compute the covariance
covTW = (tabla['Prob'] * (tabla['Toyota'] - ErT) * (tabla['Walmart'] - ErW)).sum()
covTW
```

What does this number tell us?

- Its sign tells us the relative directions of the two assets' returns. For example, the covariance between the returns of Toyota and Pfizer is negative... look at the returns in the table.
- The magnitude of the covariance does not tell us much about how strongly (or weakly) the returns are related.

**Correlation:** One drawback of the covariance is that its magnitude says little about the strength of the relative movements. *Correlation* is a normalized measure of the relative movement between the returns of two assets. Mathematically,

$$\text{corr}(r_1,r_2)=\rho_{12}=\rho_{21}=\frac{\sigma_{12}}{\sigma_1\sigma_{2}}.$$

Properties:

- We can easily see that the correlation of an asset's returns with themselves is $1$:
$$\text{corr}(r_1,r_1)=\rho_{11}=\frac{\sigma_{11}}{\sigma_1\sigma_1}=\frac{\sigma_{1}^2}{\sigma_1\sigma_1}=1.$$
- The correlation has the same sign as the covariance.
- The correlation satisfies: $$-1\leq\rho_{12}\leq 1.$$

**Example.** Compute the correlation between the returns of Toyota and Pfizer.

```
rTP = covTP / (sT * sP)
rTP

sTP, 0.5 * sT + 0.5 * sP
```

**Activity.** Compute the correlation between the returns of Toyota and Walmart.

```
rTW = covTW / (sT * sW)
rTW

sTW, 0.5 * sT + 0.5 * sW
```

**Conclusion.**
- Correlation is a normalized measure of the relative fluctuation of the returns of two assets.
- In the examples we saw, it would be advantageous to invest in the Toyota-Pfizer portfolio, since their correlation is negative, which has a positive impact on risk diversification.
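The pairwise covariances and correlations computed above can be assembled into full matrices. A minimal sketch, rebuilding the same scenario table so the cell is self-contained:

```python
import numpy as np
import pandas as pd

# Same scenario table as above
probs = np.array([0.1, 0.4, 0.3, 0.2])
returns = pd.DataFrame({
    'Toyota':  [0.06, 0.075, 0.02, -0.03],
    'Walmart': [0.045, 0.055, 0.04, -0.01],
    'Pfizer':  [0.025, -0.005, 0.01, 0.13],
}, index=['Expansion', 'Normal', 'Recesion', 'Depresion'])

# Probability-weighted expected returns: E[r_i] = sum_j p_j r_ij
Er = returns.mul(probs, axis=0).sum()

# cov(r_i, r_k) = sum_j p_j (r_ij - E[r_i]) (r_kj - E[r_k])
dev = returns - Er
cov = dev.mul(probs, axis=0).T @ dev

# rho_ik = sigma_ik / (sigma_i * sigma_k)
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)

print(cov.round(6))
print(corr.round(4))
```

The diagonal of the covariance matrix contains the variances, and the Toyota-Pfizer entry comes out negative, matching the pairwise computation above.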
___
## 2. Putting it all together...

- We saw through examples that the risk of a portfolio is significantly affected by how the assets' returns move relative to one another.
- We measure this relative movement with the covariance or the correlation.
- If the returns move in a way that is not perfectly correlated ($\rho<1$), then the portfolio risk will always be less than the weighted average of the individual risks.

<img style="float: left; margin: 0px 0px 15px 15px;" src="https://www.publicdomainpictures.net/pictures/20000/velka/happy-child.jpg" width="300px" height="200px" />

## This is why combining assets in a portfolio allows us to diversify risk...

So, how can we incorporate this measure into the calculation of the portfolio variance?
- <font color=blue> See on the whiteboard...</font>
- What would it look like for two assets?

**Example.** Compute it by formula for the Toyota and Pfizer portfolio. Compare.

```
wT = 0.5
wP = 0.5
# Two-asset portfolio volatility:
# sigma_p = sqrt(wT^2 sT^2 + 2 wT wP cov(T,P) + wP^2 sP^2)
sTP2 = (wT**2 * sT**2 + 2 * wT * wP * covTP + wP**2 * sP**2)**0.5
sTP, sTP2
```

**Activity.** Compute it by formula for the Toyota and Walmart portfolio. Compare.

## 2.1. <font color=blue> See on the whiteboard...</font>

### Variance-covariance matrix.

### Correlation matrix.

# Announcements

## 1. Reminder: quiz next session. Topics: sessions 6 and 7.

## 2. Homework: we will go over the file "Tarea4_MidiendoRendimientoRiesgo" in class.

<footer id="attribution" style="float:right; color:#808080; background:#fff;"> Created with Jupyter by Esteban Jiménez Rodríguez. </footer>
# <center>Welcome to Supervised Learning</center>
## <center>Part 2: How to prepare your data for supervised machine learning</center>
## <center>Instructor: Andras Zsom</center>
### <center>https://github.com/azsom/Supervised-Learning</center>

## The topic of the course series: supervised Machine Learning (ML)
- how to build an ML pipeline from beginning to deployment
- we assume you already performed data cleaning
- this is the second course out of 6 courses
   - Part 1: Introduction to machine learning and the bias-variance tradeoff
   - **Part 2: How to prepare your data for supervised machine learning**
   - Part 3: Evaluation metrics in supervised machine learning
   - Part 4: SVMs, Random Forests, XGBoost
   - Part 5: Missing data in supervised ML
   - Part 6: Interpretability
- you can complete the courses in sequence or complete individual courses based on your interest

### Structured data

| X|feature_1|feature_2|...|feature_j|...|feature_m|<font color='red'>Y</font>|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|__data_point_1__|x_11|x_12|...|x_1j|...|x_1m|__<font color='red'>y_1</font>__|
|__data_point_2__|x_21|x_22|...|x_2j|...|x_2m|__<font color='red'>y_2</font>__|
|__...__|...|...|...|...|...|...|__<font color='red'>...</font>__|
|__data_point_i__|x_i1|x_i2|...|x_ij|...|x_im|__<font color='red'>y_i</font>__|
|__...__|...|...|...|...|...|...|__<font color='red'>...</font>__|
|__data_point_n__|x_n1|x_n2|...|x_nj|...|x_nm|__<font color='red'>y_n</font>__|

We focus on the feature matrix (X) in this course.
### Learning objectives of this course

By the end of the course, you will be able to
- describe why data splitting is necessary in machine learning
- summarize the properties of IID data
- list examples of non-IID datasets
- apply IID splitting techniques
- apply non-IID splitting techniques
- identify when a custom splitting strategy is necessary
- describe the two motivating concepts behind preprocessing
- apply various preprocessors to categorical and continuous features
- perform preprocessing with a sklearn pipeline and ColumnTransformer

# Module 1: Split IID data

### Learning objectives of this module:
- describe why data splitting is necessary in machine learning
- summarize the properties of IID data
- apply IID splitting techniques

## Why do we split the data?
- we want to find the best hyper-parameters of our ML algorithms
   - fit models to training data
   - evaluate each model on the validation set
   - we find hyper-parameter values that optimize the validation score
- we want to know how the model will perform on previously unseen data
   - the generalization error
   - apply our final model on the test set

### We need to split the data into three parts!

## Ask yourself these questions!
- What is the intended use of the model? What is it supposed to do/predict?
- What data/info do you have available at the time of prediction?
- Your split must mimic the intended use of the model; only then will you accurately estimate how well the model will perform on previously unseen points (the generalization error).
- two examples:
   - if you want to predict the outcome of a new patient's visit to the ER:
      - your test score must be based on patients not included in training and validation
      - your validation score must be based on patients not included in training
      - points of one patient should not be distributed over multiple sets because your generalization error will be off
   - if you want to predict stock prices:
      - this is time series data
      - when predicting the stock price at a certain time during development, make sure that you only use information predating that time

## How should we split the data into train/validation/test?
- data is **Independent and Identically Distributed** (iid)
   - all samples stem from the same generative process and the generative process is assumed to have no memory of past generated samples
   - identify cats and dogs in images
   - predict the house price
   - predict if someone's salary is above or below 50k
- examples of non-iid data:
   - data generated by time-dependent processes
   - data with group structure (samples collected from e.g., different subjects, experiments, measurement devices)

## Splitting strategies for iid data: basic approach
- 60% train, 20% validation, 20% test for small datasets
- 98% train, 1% validation, 1% test for large datasets
   - if you have 1 million points, you still have 10000 points in validation and test, which is plenty to assess model performance

### Let's work with the adult data!
https://archive.ics.uci.edu/ml/datasets/adult

```
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('data/adult_data.csv')
# let's separate the feature matrix X, and target variable y
y = df['gross-income'] # remember, we want to predict who earns more than 50k or less than 50k
X = df.loc[:, df.columns != 'gross-income'] # all other columns are features
print(y)
print(X.head())

help(train_test_split)

random_state = 42
# first split to separate out the training set
X_train, X_other, y_train, y_other = train_test_split(X,y,train_size = 0.6,random_state=random_state)
print('training set:',X_train.shape, y_train.shape) # 60% of points are in train
print(X_other.shape, y_other.shape) # 40% of points are in other
print(X_train.head())

# second split to separate out the validation and test sets
X_val, X_test, y_val, y_test = train_test_split(X_other,y_other,train_size = 0.5,random_state=random_state)
print('validation set:',X_val.shape, y_val.shape) # 20% of points are in validation
print('test set:',X_test.shape, y_test.shape) # 20% of points are in test
print(X_val.head())
print(X_test.head())
```

## Randomness due to splitting
- the model performance, validation, and test scores will change depending on which points are in train, val, and test
- this is the inherent randomness or uncertainty of the ML pipeline
- change the random state a couple of times and repeat the whole ML pipeline to assess how much the random splitting affects your test score
- you would expect a similar uncertainty when the model is deployed

## Splitting strategies for iid data: k-fold splitting

<center><img src="figures/grid_search_cross_validation.png" width="600"></center>

```
from sklearn.model_selection import KFold
help(KFold)

random_state = 42
# first split to separate out the test set
X_other, X_test, y_other, y_test = train_test_split(X,y,test_size = 0.2,random_state=random_state)
print(X_other.shape,y_other.shape)
print('test set:',X_test.shape,y_test.shape)

# do KFold split on other
kf = KFold(n_splits=5,shuffle=True,random_state=random_state)
for train_index, val_index in kf.split(X_other,y_other):
    X_train = X_other.iloc[train_index]
    y_train = y_other.iloc[train_index]
    X_val = X_other.iloc[val_index]
    y_val = y_other.iloc[val_index]
    print('   training set:',X_train.shape, y_train.shape)
    print('   validation set:',X_val.shape, y_val.shape)
    # the validation set contains different points in each iteration
    print(X_val[['age','workclass','education']].head())
```

## How many splits should I create?
- tough question, 3-5 is most common
- if you do $n$ splits, $n$ models will be trained, so the larger the $n$, the more computationally intensive it is to train the models
- KFold is usually better suited for small datasets
- KFold is good to estimate the uncertainty due to the random splitting of train and val, but it is not perfect
   - the test set remains the same

### Why is shuffling iid data important?
- by default, data is not shuffled by KFold, which can introduce errors!

<center><img src="figures/kfold.png" width="600"></center>

## Imbalanced data
- imbalanced data: only a small fraction of the points are in one of the classes, usually ~5% or less but there is no hard limit here
- examples:
   - people visit a bank's website. Do they sign up for a new credit card?
      - most customers just browse and leave the page
      - usually 1% or less of the customers get a credit card (class 1), the rest leave the page without signing up (class 0).
   - fraud detection
      - only a tiny fraction of credit card payments are fraudulent
   - rare disease diagnosis
- the issue with imbalanced data:
   - if you apply train_test_split or KFold, you might not have class 1 points in one of your sets by chance
   - this is what we need to fix

## Solution: stratified splits

```
random_state = 42

X_train, X_other, y_train, y_other = train_test_split(X,y,train_size = 0.6,random_state=random_state)
X_val, X_test, y_val, y_test = train_test_split(X_other,y_other,train_size = 0.5,random_state=random_state)
print('**balance without stratification:**')
# a variation on the order of 1% which would be too much for imbalanced data!
print(y_train.value_counts(normalize=True))
print(y_val.value_counts(normalize=True))
print(y_test.value_counts(normalize=True))

X_train, X_other, y_train, y_other = train_test_split(X,y,train_size = 0.6,stratify=y,random_state=random_state)
X_val, X_test, y_val, y_test = train_test_split(X_other,y_other,train_size = 0.5,stratify=y_other,random_state=random_state)
print('**balance with stratification:**')
# very little variation (in the 4th decimal point only) which is important if the problem is imbalanced
print(y_train.value_counts(normalize=True))
print(y_val.value_counts(normalize=True))
print(y_test.value_counts(normalize=True))
```

## Stratified folds

<center><img src="figures/stratified_kfold.png" width="600"></center>

```
from sklearn.model_selection import StratifiedKFold
help(StratifiedKFold)

# what we did before: variance in balance on the order of 1%
random_state = 42
X_other, X_test, y_other, y_test = train_test_split(X,y,test_size = 0.2,random_state=random_state)
print('test balance:',y_test.value_counts(normalize=True))
# do KFold split on other
kf = KFold(n_splits=5,shuffle=True,random_state=random_state)
for train_index, val_index in kf.split(X_other,y_other):
    X_train = X_other.iloc[train_index]
    y_train = y_other.iloc[train_index]
    X_val = X_other.iloc[val_index]
    y_val = y_other.iloc[val_index]
    print('train balance:')
    print(y_train.value_counts(normalize=True))
    print('val balance:')
    print(y_val.value_counts(normalize=True))

# stratified K Fold: variation in balance is very small (4th decimal point)
random_state = 42
# stratified train-test split
X_other, X_test, y_other, y_test = train_test_split(X,y,test_size = 0.2,stratify=y,random_state=random_state)
print('test balance:',y_test.value_counts(normalize=True))
# do StratifiedKFold split on other
kf = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state)
for train_index, val_index in kf.split(X_other,y_other):
    X_train = X_other.iloc[train_index]
    y_train = y_other.iloc[train_index]
    X_val = X_other.iloc[val_index]
    y_val = y_other.iloc[val_index]
    print('train balance:')
    print(y_train.value_counts(normalize=True))
    print('val balance:')
    print(y_val.value_counts(normalize=True))
```

# Module 2: Split non-IID data

### Learning objectives of this module:
- list examples of non-IID datasets
- apply non-IID splitting techniques
- identify when a custom splitting strategy is necessary

## Examples of non-iid data
- if there is any sort of time or group structure in your data, it is likely non-iid
- group structure:
   - each point is someone's visit to the ER and some people visited the ER multiple times
   - each point is a customer's visit to a website and customers tend to return regularly
- time structure:
   - each point is the stock price at a given time
   - each point is a person's health or activity status

## Group-based split: GroupShuffleSplit

<center><img src="figures/groupshufflesplit.png" width="600"></center>

```
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.ones(shape=(8, 2))
y = np.ones(shape=(8, 1))
groups = np.array([1, 1, 2, 2, 2, 3, 3, 3])
gss = GroupShuffleSplit(n_splits=10, train_size=.8, random_state=42)
for train_idx, test_idx in gss.split(X, y, groups):
    print("TRAIN:", train_idx, "TEST:", test_idx)
```

## Group-based split: GroupKFold

<center><img src="figures/groupkfold.png" width="600"></center>

```
from sklearn.model_selection import GroupKFold

group_kfold = GroupKFold(n_splits=3)
for train_index, test_index in group_kfold.split(X, y, groups):
    print("TRAIN:", train_index, "TEST:", test_index)

help(GroupKFold)
```

## Data leakage in time series data is similar!
- do NOT use information in validation or test which will not be available once your model is deployed
- don't use future information!

<center><img src="figures/timeseriessplit.png" width="600"></center>

```
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4, 5, 6])
tscv = TimeSeriesSplit()
for train_index, test_index in tscv.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
```

## When should you develop your own splitting function?
- there are certain splitting strategies sklearn can't handle at the moment
   - time series data with group structure is one example
   - if you want certain groups to be in certain sets
   - group structure in classification where all points in a group belong to a certain class
      - you might want a roughly equal number of groups of each class to be in each set
- check out the [model selection](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection) part of sklearn
   - if the splitting strategy you want to follow is not there, implement your own function

# Module 3: Preprocess continuous and categorical features

### Learning objectives of this module:
- describe the two motivating concepts behind preprocessing
- apply various preprocessors to categorical and continuous features
- perform preprocessing with a sklearn pipeline and ColumnTransformer

### Data almost never comes in a format that's directly usable in ML
- ML works with numerical data but some columns are text (e.g., home country, educational level, gender, race)
   - some ML algorithms accept (and prefer) a non-numerical feature matrix (like [CatBoost](https://catboost.ai/)) but that's not standard
   - sklearn throws an error message if the feature matrix contains non-numerical elements
- the order of magnitude of numerical features can vary greatly which is not good for most ML algorithms (e.g., salary in USD, age in years, time spent on the site in sec)
   - many ML algorithms are distance-based and they perform better and converge faster if the features are standardized (features have a mean of 0 and the same standard deviation, usually 1)
      - Lasso and Ridge regression because of the penalty term, K Nearest Neighbors, SVM, linear models if you want to use the coefficients to measure feature importance (more on this in part 6), neural networks
   - tree-based methods don't require standardization
- check out part 1 to learn more about linear and logistic regression, Lasso and Ridge
- check out part 4 to learn more about SVMs, tree-based methods, and K Nearest Neighbors

### scikit-learn transformers to the rescue!

Preprocessing is done with various transformers. All transformers have three methods:
- **fit** method: estimates parameters necessary to do the transformation,
- **transform** method: transforms the data based on the estimated parameters,
- **fit_transform** method: both steps are performed at once, this can be faster than doing the steps separately.
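As a minimal sketch of the fit / transform / fit_transform contract (using `StandardScaler`, covered in more detail below, on tiny illustrative arrays): the statistics are estimated from one dataset and then reused to transform any later data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test = np.array([[2.0], [10.0]])

scaler = StandardScaler()
scaler.fit(X_train)              # estimates mean_ and scale_ from X_train only
print(scaler.mean_, scaler.scale_)  # mean 2.5, std sqrt(1.25)

X_train_t = scaler.transform(X_train)  # uses the fitted statistics
X_test_t = scaler.transform(X_test)    # the SAME statistics applied to new data

# fit_transform combines both steps on the same data
same = scaler.fit_transform(X_train)
print(np.allclose(X_train_t, same))
```

Note that `X_test` is scaled with the training mean and standard deviation; this is exactly the behavior the splitting discussion below relies on.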
### Transformers we cover
- **OrdinalEncoder** - converts categorical features into an integer array
- **OneHotEncoder** - converts categorical features into dummy arrays
- **StandardScaler** - standardizes continuous features by removing the mean and scaling to unit variance

## Ordered categorical data: OrdinalEncoder

Let's assume we have a categorical feature with training and test sets. The categories can be ordered or ranked, e.g., educational level in the adult dataset.

```
import pandas as pd

train_edu = {'educational level':['Bachelors','Masters','Bachelors','Doctorate','HS-grad','Masters']}
test_edu = {'educational level':['HS-grad','Masters','Masters','College','Bachelors']}
X_train = pd.DataFrame(train_edu)
X_test = pd.DataFrame(test_edu)

from sklearn.preprocessing import OrdinalEncoder
help(OrdinalEncoder)

# initialize the encoder
cats = ['HS-grad','Bachelors','Masters','Doctorate']
enc = OrdinalEncoder(categories = [cats]) # The ordered list of categories needs to
# be provided. By default, the categories are alphabetically ordered!

# fit the training data
enc.fit(X_train)

# print the categories - not really important because we manually gave the ordered list of categories
print(enc.categories_)

# transform X_train. We could have used enc.fit_transform(X_train) to combine fit and transform
X_train_oe = enc.transform(X_train)
print(X_train_oe)

# transform X_test
X_test_oe = enc.transform(X_test) # OrdinalEncoder always throws an error message
# if it encounters an unknown category in test
print(X_test_oe)
```

## Unordered categorical data: one-hot encoder

Some categories cannot be ordered, e.g., workclass or relationship status. In the toy example below, the first feature is gender (male, female, unknown) and the second is the browser used; these categories cannot be ordered.

```
train = {'gender':['Male','Female','Unknown','Male','Female','Female'],
         'browser':['Safari','Safari','Internet Explorer','Chrome','Chrome','Internet Explorer']}
test = {'gender':['Female','Male','Unknown','Female'],
        'browser':['Chrome','Firefox','Internet Explorer','Safari']}
X_train = pd.DataFrame(train)
X_test = pd.DataFrame(test)
# How do we convert this to numerical features?

from sklearn.preprocessing import OneHotEncoder
help(OneHotEncoder)

# initialize the encoder
enc = OneHotEncoder(sparse=False) # by default, OneHotEncoder returns a sparse matrix. sparse=False returns a 2D array

# fit the training data
enc.fit(X_train)
print('categories:',enc.categories_)
print('feature names:',enc.get_feature_names())

# transform X_train
X_train_ohe = enc.transform(X_train)
#print(X_train_ohe)

# do all of this in one step
X_train_ohe = enc.fit_transform(X_train)
print(X_train_ohe)

# transform X_test
X_test_ohe = enc.transform(X_test)
print('X_test transformed')
print(X_test_ohe)
```

## Continuous features: StandardScaler

```
train = {'salary':[50_000,75_000,40_000,1_000_000,30_000,250_000,35_000,45_000]}
test = {'salary':[25_000,55_000,1_500_000,60_000]}
X_train = pd.DataFrame(train)
X_test = pd.DataFrame(test)

from sklearn.preprocessing import StandardScaler
help(StandardScaler)

scaler = StandardScaler()
print(scaler.fit_transform(X_train))
print(scaler.transform(X_test))
```

## How and when to do preprocessing in the ML pipeline?
- **SPLIT YOUR DATA FIRST!**
- **APPLY TRANSFORMER.FIT ONLY ON YOUR TRAINING DATA!** Then transform the validation and test sets.
- One of the most common mistakes practitioners make is leaking statistics!
   - fit_transform is applied to the whole dataset, then the data is split into train/validation/test
      - this is wrong because the test set statistics impact how the training and validation sets are transformed
      - but the test set must be separated from train and val, and val must be separated from train
   - or fit_transform is applied to the train, then fit_transform is applied to the validation set, and fit_transform is applied to the test set
      - this is wrong because the relative positions of the points change

<center><img src="figures/no_separate_scaling.png" width="1200"></center>

## Scikit-learn's pipelines
- Preprocessing and model training (not the splitting) can be chained together into a scikit-learn pipeline, which consists of transformers and one final estimator, usually your classifier or regression model.
- It neatly combines the preprocessing steps and it helps to avoid leaking statistics.

https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html

```
import pandas as pd
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder, OrdinalEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split

np.random.seed(0)

df = pd.read_csv('data/adult_data.csv')
# let's separate the feature matrix X, and target variable y
y = df['gross-income'] # remember, we want to predict who earns more than 50k or less than 50k
X = df.loc[:, df.columns != 'gross-income'] # all other columns are features

random_state = 42
# first split to separate out the training set
X_train, X_other, y_train, y_other = train_test_split(X,y,train_size = 0.6,random_state=random_state)
# second split to separate out the validation and test sets
X_val, X_test, y_val, y_test = train_test_split(X_other,y_other,train_size = 0.5,random_state=random_state)

# collect which encoder to use on each feature
# needs to be done manually
ordinal_ftrs = ['education']
ordinal_cats = [[' Preschool',' 1st-4th',' 5th-6th',' 7th-8th',' 9th',' 10th',' 11th',' 12th',' HS-grad',
                 ' Some-college',' Assoc-voc',' Assoc-acdm',' Bachelors',' Masters',' Prof-school',' Doctorate']]
onehot_ftrs = ['workclass','marital-status','occupation','relationship','race','sex','native-country']
std_ftrs = ['capital-gain','capital-loss','age','hours-per-week']

# collect all the encoders
preprocessor = ColumnTransformer(
    transformers=[
        ('ord', OrdinalEncoder(categories = ordinal_cats), ordinal_ftrs),
        ('onehot', OneHotEncoder(sparse=False,handle_unknown='ignore'), onehot_ftrs),
        ('std', StandardScaler(), std_ftrs)])

# for now we only preprocess, later on we will add other steps here
# note the final scaler which is a standard scaler
# the ordinal and one hot encoded features do not have a mean of 0 and an std of 1
# the final scaler standardizes those features
clf = Pipeline(steps=[('preprocessor', preprocessor),('final scaler',StandardScaler())])

X_train_prep = clf.fit_transform(X_train)
X_val_prep = clf.transform(X_val)
X_test_prep = clf.transform(X_test)

print(X_train.shape)
print(X_train_prep.shape)
print(np.mean(X_train_prep,axis=0))
print(np.std(X_train_prep,axis=0))
print(np.mean(X_val_prep,axis=0))
print(np.std(X_val_prep,axis=0))
print(np.mean(X_test_prep,axis=0))
print(np.std(X_test_prep,axis=0))
```
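The leakage pitfall described above can also be demonstrated numerically. A minimal sketch on synthetic data (not the adult dataset): fitting the scaler before splitting lets the test points influence the statistics used to transform the training set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=50, scale=10, size=(1000, 1))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

# WRONG: statistics computed on all points, including the 200 test points
leaky = StandardScaler().fit(X)

# RIGHT: statistics computed on the training set only
clean = StandardScaler().fit(X_train)

# close but not identical: the leaky mean is influenced by the test set
print(leaky.mean_, clean.mean_)
```

The difference here is small because the data is well-behaved; with skewed features or small datasets the leaked statistics can noticeably bias the estimated generalization error.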
<center> <img src="https://github.com/Yorko/mlcourse.ai/blob/master/img/ods_stickers.jpg?raw=true" />
<br>
<div style="font-weight:700;font-size:25px"> [mlcourse.ai](https://mlcourse.ai) - Open Machine Learning Course </div>
<br>
Authors: [Vitaliy Radchenko](https://www.linkedin.com/in/vitaliyradchenk0/) and [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/), [Anastasia Manokhina](https://www.linkedin.com/in/anastasiamanokhina/), [Anna Shirshova](http://linkedin.com/in/anna-shirshova-b908458b), [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/), and [Ousmane Cissé](https://github.com/oussou-dev). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for non-commercial purposes. </center>

# <center> Topic 5. Ensembles and Random Forests </center>

<center> <div style="font-weight:700;font-size:25px"> Part 1. Bagging (Bootstrap Aggregating) </div> </center>

<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Outline" data-toc-modified-id="Outline-1">Outline</a></span></li><li><span><a href="#1.-Ensembles" data-toc-modified-id="1.-Ensembles-2">1. Ensembles</a></span></li><li><span><a href="#2.-Bootstrapping" data-toc-modified-id="2.-Bootstrapping-3">2. Bootstrapping</a></span></li><li><span><a href="#3.-Bagging" data-toc-modified-id="3.-Bagging-4">3. Bagging</a></span></li><li><span><a href="#4.-L'erreur-Out-of-bag" data-toc-modified-id="4.-L'erreur-Out-of-bag-5">4. Out-of-bag error</a></span></li><li><span><a href="#5.-Mission-d'entraînement" data-toc-modified-id="5.-Mission-d'entraînement-6">5. Training exercise</a></span></li><li><span><a href="#6.-Ressources-utiles" data-toc-modified-id="6.-Ressources-utiles-7">6.
Useful resources</a></span></li></ul></div>

In previous articles, you explored various classification algorithms as well as techniques for properly validating and evaluating the quality of your models. Now suppose you have chosen the best possible model for a particular problem and are striving to further improve its accuracy. In this case, you need to apply more advanced machine learning techniques, collectively called **_ensembles_**.

An *ensemble* is a collection of elements that collectively contribute to a whole. A familiar example is a musical ensemble, which blends the sounds of several instruments to create harmony, or an architectural ensemble, a set of buildings designed as a unit. In ensembles, the harmonious overall result is more important than the performance of any individual part.

## 1. Ensembles

[Condorcet's jury theorem](https://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem) (1784) deals with an ensemble in a certain sense. It states that if each member of a jury makes an independent judgment and the probability of a correct decision by each juror is greater than 0.5, then the probability of a correct decision by the whole jury increases with the total number of jurors and tends to one. On the other hand, if the probability of being right is less than 0.5 for each juror, then the probability of a correct decision by the whole jury decreases with the number of jurors and tends to zero.

Let's write an analytic expression for this theorem:

- $\large N$ is the total number of jurors;
- $\large m$ is the minimal number of jurors that would form a majority, i.e., $\large m = \lfloor N/2 \rfloor + 1$;
- $\large {N \choose i}$ is the number of $\large i$-combinations from a set containing $\large N$ elements.
- $\large p$ is the probability that a juror makes the correct decision;
- $\large \mu$ is the probability that the whole jury makes the correct decision.

Then:

$$ \large \mu = \sum_{i=m}^{N}{N\choose i}p^i(1-p)^{N-i} $$

It can be seen that if $\large p > 0.5$, then $\large \mu > p$. In addition, if $\large N \rightarrow \infty $, then $\large \mu \rightarrow 1$.

Let's look at another example of ensembles: an observation known as [Wisdom of the crowd](https://en.wikipedia.org/wiki/Wisdom_of_the_crowd). <img src="https://github.com/Yorko/mlcourse.ai/blob/master/img/bull.png?raw=true" align="right" width=15% height=15%> In 1906, [Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton) visited a country fair in Plymouth where he saw a contest held for farmers. 800 participants tried to estimate the weight of a slaughtered bull. The real weight of the bull was 1198 pounds. Although none of the farmers could guess the exact weight of the animal, the average of their predictions was 1197 pounds.

A similar idea for error reduction was adopted in the field of machine learning.

## 2. Bootstrapping

*Bagging* (also known as [Bootstrap aggregation](https://en.wikipedia.org/wiki/Bootstrap_aggregating)) is one of the first and most basic ensemble techniques. It was proposed by [Leo Breiman](https://en.wikipedia.org/wiki/Leo_Breiman) in 1994. Bagging is based on the statistical method of [bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29), which makes it possible to evaluate many statistics of complex models.

The bootstrap method goes as follows. Let there be a sample $\large X$ of size $\large N$. We can make a new sample from the original one by drawing $\large N$ elements from it randomly and uniformly, with replacement.
In other words, we select a random element from the original sample of size $\large N$ and do this $\large N$ times. All elements are equally likely to be selected; thus, each element is drawn with probability $\large \frac{1}{N}$.

Let's say we are drawing balls from a bag one at a time. At each step, the selected ball is put back into the bag so that the next selection is made equiprobably, i.e. from the same number of balls $\large N$. Note that, because we put the balls back, there may be duplicates in the new sample. Let's call this new sample $\large X_1$.

By repeating this procedure $\large M$ times, we create $\large M$ *bootstrap samples* $\large X_1, \dots, X_M$. In the end, we have a sufficient number of samples and can compute various statistics of the original distribution.

![image](https://github.com/Yorko/mlcourse.ai/blob/master/img/bootstrap_eng.png?raw=true)

For our example, we'll use the well-known `telecom_churn` dataset. Earlier, when we discussed feature importance, we saw that one of the most important features in this dataset is the number of calls to customer service. Let's visualize the data and look at the distribution of this feature.

```
import numpy as np
import pandas as pd
import seaborn as sns

sns.set()
%matplotlib inline
from matplotlib import pyplot as plt

telecom_data = pd.read_csv("../../data/telecom_churn.csv")

telecom_data.loc[telecom_data["Churn"] == False, "Customer service calls"].hist(
    label="Loyal"
)
telecom_data.loc[telecom_data["Churn"] == True, "Customer service calls"].hist(
    label="Churn"
)
plt.xlabel("Number of calls")
plt.ylabel("Density")
plt.legend();
```

It looks like loyal customers make fewer calls to customer service than those who eventually churn.
Now, it makes sense to estimate the average number of customer service calls in each group. Since our dataset is small, we would not get a good estimate by simply computing the mean of the original sample. We are better off applying the bootstrap method. Let's generate 1000 new bootstrap samples from our original population and produce an interval estimate of the mean.

```
def get_bootstrap_samples(data, n_samples):
    """Generate bootstrap samples using the bootstrap method."""
    indices = np.random.randint(0, len(data), (n_samples, len(data)))
    samples = data[indices]
    return samples


def stat_intervals(stat, alpha):
    """Produce an interval estimate."""
    boundaries = np.percentile(stat, [100 * alpha / 2.0, 100 * (1 - alpha / 2.0)])
    return boundaries


# save the data about loyal and churned customers to split the dataset
loyal_calls = telecom_data.loc[
    telecom_data["Churn"] == False, "Customer service calls"
].values
churn_calls = telecom_data.loc[
    telecom_data["Churn"] == True, "Customer service calls"
].values

# fix the seed at 0 for reproducibility of the results
np.random.seed(0)

# generate the samples using bootstrapping and compute the mean for each of them
loyal_mean_scores = [
    np.mean(sample) for sample in get_bootstrap_samples(loyal_calls, 1000)
]
churn_mean_scores = [
    np.mean(sample) for sample in get_bootstrap_samples(churn_calls, 1000)
]

# print the resulting interval estimates
print(
    "Service calls from loyal: mean interval", stat_intervals(loyal_mean_scores, 0.05)
)
print(
    "Service calls from churn: mean interval", stat_intervals(churn_mean_scores, 0.05)
)
```

For the interpretation of confidence intervals, you can read this concise [note](https://www.graphpad.com/guides/prism/7/statistics/stat_more_about_confidence_interval.htm?toc=0&printWindow) or any course on
statistics. It is not correct to say that a confidence interval contains 95% of the values. Note that the interval for the loyal customers is narrower, which is reasonable since they make fewer calls (0, 1, or 2) than the churned customers, who call until they get fed up and decide to switch providers.

## 3. Bagging

Now that you've grasped the idea of bootstrapping, we can move on to *bagging*.

Suppose that we have a training set $\large X$. Using bootstrapping, we generate samples $\large X_1, \dots, X_M$. Now, for each bootstrap sample, we train its own classifier $\large a_i(x)$. The final classifier will average the outputs of all these individual classifiers. In the case of classification, this technique corresponds to voting: $$\large a(x) = \frac{1}{M}\sum_{i = 1}^M a_i(x).$$

The picture below illustrates this algorithm: <img src="https://github.com/Yorko/mlcourse.ai/blob/master/img/bagging.png?raw=true" alt="image"/>

Let's consider a regression problem with base algorithms $\large b_1(x), \dots , b_n(x)$. Suppose there exists an ideal target function of true answers $\large y(x)$ defined for all inputs and that the distribution $\large p(x)$ is defined.
We can then express the error of each regression function as

$$\large \varepsilon_i(x) = b_i(x) - y(x), \quad i = 1, \dots, n$$

and the expected value of the mean squared error as

$$\large \mathbb{E}_x\left[\left(b_i(x) - y(x)\right)^{2}\right] = \mathbb{E}_x\left[\varepsilon_i^{2}(x)\right].$$

Then, the mean error over all regression functions is

$$ \large E_1 = \frac{1}{n} \mathbb{E}_x\left[ \sum_{i=1}^n \varepsilon_i^{2}(x)\right].$$

We'll assume that the errors are unbiased and uncorrelated, that is:

$$\large \begin{array}{rcl} \mathbb{E}_x\left[\varepsilon_i(x)\right] &=& 0, \\ \mathbb{E}_x\left[\varepsilon_i(x)\varepsilon_j(x)\right] &=& 0, \quad i \neq j. \end{array}$$

Now, let's construct a new regression function that averages the values of the individual functions:

$$\large a(x) = \frac{1}{n}\sum_{i=1}^{n}b_i(x)$$

Let's find its mean squared error:

$$\large \begin{array}{rcl}E_n &=& \mathbb{E}_x\left[\left(\frac{1}{n}\sum_{i=1}^{n}b_i(x)-y(x)\right)^2\right] \\ &=& \mathbb{E}_x\left[\left(\frac{1}{n}\sum_{i=1}^{n}\varepsilon_i(x)\right)^2\right] \\ &=& \frac{1}{n^2}\mathbb{E}_x\left[\sum_{i=1}^{n}\varepsilon_i^2(x) + \sum_{i \neq j}\varepsilon_i(x)\varepsilon_j(x)\right] \\ &=& \frac{1}{n}E_1\end{array}$$

Thus, by averaging the individual answers, we reduced the mean squared error by a factor of $\large n$.
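The factor-of-$n$ reduction can be checked with a quick Monte-Carlo sketch. All numbers below (10 regressors, Gaussian errors with standard deviation 2) are illustrative assumptions, not part of the derivation:

```python
# Monte-Carlo check: averaging n regressors with unbiased, uncorrelated
# errors cuts the expected squared error by a factor of n.
import numpy as np

rng = np.random.default_rng(17)
n = 10               # number of base regressors
n_points = 200_000   # Monte-Carlo sample size

# each column is the error eps_i(x) of one base regressor: zero-mean,
# independent across regressors, so the assumptions of the derivation hold
eps = rng.normal(loc=0.0, scale=2.0, size=(n_points, n))

e1 = np.mean(eps ** 2)                  # E_1: mean squared error of one regressor
en = np.mean(eps.mean(axis=1) ** 2)     # E_n: mean squared error of the average

print(f"E_1 = {e1:.3f}, E_n = {en:.3f}, ratio = {e1 / en:.1f} (expected ~{n})")
```

With correlated errors the ratio drops below $n$, which is exactly the caveat discussed later in this lesson.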
Recall from our previous lesson the components that make up the total out-of-sample error:

$$\large \begin{array}{rcl} \text{Err}\left(\vec{x}\right) &=& \mathbb{E}\left[\left(y - \hat{f}\left(\vec{x}\right)\right)^2\right] \\ &=& \sigma^2 + f^2 + \text{Var}\left(\hat{f}\right) + \mathbb{E}\left[\hat{f}\right]^2 - 2f\mathbb{E}\left[\hat{f}\right] \\ &=& \left(f - \mathbb{E}\left[\hat{f}\right]\right)^2 + \text{Var}\left(\hat{f}\right) + \sigma^2 \\ &=& \text{Bias}\left(\hat{f}\right)^2 + \text{Var}\left(\hat{f}\right) + \sigma^2 \end{array}$$

Bagging reduces the variance of a classifier by decreasing the difference in error when we train the model on different datasets. In other words, bagging prevents overfitting. The efficiency of bagging comes from the fact that the individual models are quite different, due to the different training data, and that their errors cancel out during voting. Additionally, outliers are likely to be omitted from some of the training bootstrap samples.

The `scikit-learn` library supports bagging with the `BaggingRegressor` and `BaggingClassifier` meta-estimators. You can use most of the algorithms as a base. Let's examine how bagging works and compare it with a decision tree, using an example from [the sklearn documentation](http://scikit-learn.org/stable/auto_examples/ensemble/plot_bias_variance.html#sphx-glr-auto-examples-ensemble-plot-bias-variance-py).

![image](https://github.com/Yorko/mlcourse.ai/blob/master/img/tree_vs_bagging_eng.png?raw=true)

The error of the decision tree: $$ \large 0.0255 \, (\text{Err}) = 0.0003 \, (\text{Bias}^2) + 0.0152 \, (\text{Var}) + 0.0098 \, (\sigma^2) $$

The error when using bagging: $$ \large 0.0196 \, (\text{Err}) = 0.0004 \, (\text{Bias}^2) + 0.0092 \, (\text{Var}) + 0.0098 \, (\sigma^2) $$

As you can see from the graph above, the variance of the error is much lower for bagging.
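A minimal version of this tree-vs-bagging comparison can be run with `BaggingRegressor`. The synthetic 1-D sine problem below is an illustrative assumption, not the exact example from the sklearn documentation:

```python
# compare a single decision tree with 50 bagged trees on a noisy 1-D problem;
# a lower cross-validated MSE for bagging reflects its reduced variance
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)  # noisy target

tree = DecisionTreeRegressor(random_state=0)
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)

# negative MSE via cross-validation, flipped so that lower is better
mse_tree = -cross_val_score(tree, X, y, cv=5, scoring="neg_mean_squared_error").mean()
mse_bag = -cross_val_score(bagged, X, y, cv=5, scoring="neg_mean_squared_error").mean()
print(f"single tree MSE: {mse_tree:.3f}  bagged trees MSE: {mse_bag:.3f}")
```

Both errors stay above the irreducible noise floor $\sigma^2 = 0.09$; bagging mainly shaves off the variance term.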
Remember that we already proved this theoretically.

Bagging is effective on small datasets. Dropping even a small part of the training data leads to substantially different base classifiers. If you have a large dataset, you would generate bootstrap samples of a much smaller size.

The example above is unlikely to be applicable to real work, because we made the strong assumption that the individual errors are uncorrelated. More often than not, this is far too optimistic for real-world applications. When this assumption is false, the reduction in error will not be as significant. In later lectures, we will cover more sophisticated ensemble methods that allow more accurate predictions on real-world problems.

## 4. Out-of-bag error

Looking ahead: in the case of a random forest, there is no need for cross-validation or a hold-out sample to get an unbiased error estimate. Why? Because, in ensemble techniques, the error estimation takes place internally.

Random trees are built using different bootstrap samples of the original dataset. About 37% of the inputs are left out of a particular bootstrap sample and are not used to build the $\large k$-th tree.

This is easy to prove. Suppose there are $\large \ell$ examples in our dataset. At each step, every data point has an equal probability of being drawn into a bootstrap sample with replacement, namely $\large\frac{1}{\ell}$. The probability that a bootstrap sample does not contain a particular dataset element (i.e., the element was missed in all $\large \ell$ draws) is $\large (1 - \frac{1}{\ell})^\ell$.
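The probability just derived is easy to check numerically for growing $\ell$:

```python
# numeric check of (1 - 1/l)^l, the probability that a given example never
# enters a bootstrap sample of size l
for l in (10, 100, 10_000, 1_000_000):
    p_out = (1 - 1 / l) ** l
    print(f"l = {l:>9}: P(example left out) = {p_out:.4f}")
```

The printed values settle near $1/e \approx 0.3679$ already for moderate $\ell$, i.e. roughly 37% of the rows are out-of-bag.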
When $\large \ell \rightarrow +\infty$, this becomes equal to the [second remarkable limit](https://en.wikipedia.org/wiki/List_of_limits), $\large \frac{1}{e}$. Then, the probability of a specific example being selected is $\large \approx 1 - \frac{1}{e} \approx 63\%$.

Let's visualize how Out-of-Bag Error (OOBE) estimation works: ![image](https://github.com/Yorko/mlcourse.ai/blob/master/img/oob.png?raw=true)

The top part of the figure above represents our original dataset. We split it into training (left) and test (right) sets. In the left picture, we draw a grid that perfectly divides our dataset according to classes. Now, we use the same grid to estimate the share of correct answers on our test set. We can see that our classifier gave incorrect answers in the 4 cases that were not used during training (on the left). Hence, the accuracy of our classifier is $\large \frac{11}{15}*100\% = 73.33\%$.

To sum up, each base algorithm is trained on $\large \approx 63\%$ of the original examples. It can be validated on the remaining $\large \approx 37\%$. The Out-of-Bag estimate is nothing more than the mean estimate of the base algorithms on the $\large \approx 37\%$ of inputs that were left out of training.

## 5. Practice assignment

You can practice with [this assignment](https://www.kaggle.com/kashnitsky/a5-demo-logit-and-rf-for-credit-scoring) where you'll work with logistic regression and random forests in a credit scoring task. A [solution](https://www.kaggle.com/kashnitsky/a5-demo-logit-and-rf-for-credit-scoring-sol) is also available.

## 6.
Useful resources

- Main course [site](https://mlcourse.ai), [course repo](https://github.com/Yorko/mlcourse.ai), and [YouTube channel](https://www.youtube.com/watch?v=QKTuw4PNOsU&list=PLVlY_7IJCMJeRfZ68eVfEcu-UcN9BbwiX)
- mlcourse.ai [video](https://www.youtube.com/watch?v=neXJL-AqI_c) on Random Forest
- Medium ["story"](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-5-ensembles-of-algorithms-and-random-forest-8e05246cbba7) based on this notebook
- Course materials as a [Kaggle Dataset](https://www.kaggle.com/kashnitsky/mlcourse)
- If you read Russian: an [article](https://habrahabr.ru/company/ods/blog/324402/) on Habrahabr with roughly the same material, and a [video](https://youtu.be/G0DmuuFeC30) on YouTube
- Chapter 15 of the book "[Elements of Statistical Learning](https://statweb.stanford.edu/~tibs/ElemStatLearn/)" by Jerome H. Friedman, Robert Tibshirani, and Trevor Hastie
- More about practical applications of random forests and other algorithms can be found in the [official documentation](http://scikit-learn.org/stable/modules/ensemble.html) of `scikit-learn`
- For a more in-depth discussion of variance and decorrelation of random forests, see the [original paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf)
## 2. Random Forest

### a)

```
import pandas as pd

headers = ["Number of times pregnant",
           "Plasma glucose concentration a 2 hours in an oral glucose tolerance test",
           "Diastolic blood pressure (mm Hg)",
           "Triceps skinfold thickness (mm)",
           "2-Hour serum insulin (mu U/ml)",
           "Body mass index (weight in kg/(height in m)^2)",
           "Diabetes pedigree function",
           "Age (years)",
           "Class variable(0 or 1)"]
raw_df = pd.read_csv('Diabetes.csv', names=headers)
print(raw_df.shape)
raw_df.head(6)
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> First, we create a list of the header names and then read the csv file using it. </p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

### b)

#### Handling Missing Values

```
raw_df.isnull().sum()
raw_df['Class variable(0 or 1)'].value_counts()
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> As shown, there are no missing values in the dataset. Likewise, since the counts of the unique values sum to the total number of records, we can tell that the class column contains no nulls. The same analysis can be carried out for the other columns.
</p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

#### Normalizing Numerical Columns and Encoding Categorical Ones

```
raw_df.dtypes
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> There are no categorical variables, so no encoding is needed. </p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

```
from sklearn.preprocessing import normalize

numericals = pd.DataFrame(raw_df.loc[:, raw_df.columns != 'Class variable(0 or 1)'])
categoricals = pd.DataFrame(raw_df['Class variable(0 or 1)'])

# remove outliers
print("Before Removing Outliers: ", raw_df.shape)
Q1 = raw_df.loc[:, raw_df.columns != 'Class variable(0 or 1)'].quantile(0.25)
Q3 = raw_df.loc[:, raw_df.columns != 'Class variable(0 or 1)'].quantile(0.75)
IQR = Q3 - Q1
mask = ~((raw_df < (Q1 - 1.5 * IQR)) | (raw_df > (Q3 + 1.5 * IQR))).any(axis=1)
print("#Outliers = ", raw_df[~mask].dropna().shape[0])
print("#Not outliers = ", raw_df.shape[0] - raw_df[~mask].dropna().shape[0])
raw_df = raw_df[mask]
print("After Removing Outliers: ", raw_df.shape)
raw_df.head()

numericals = pd.DataFrame(raw_df.loc[:, raw_df.columns != 'Class variable(0 or 1)'])
categoricals = pd.DataFrame(raw_df['Class variable(0 or 1)'])

# normalize
raw_df.loc[:, raw_df.columns != 'Class variable(0 or 1)'] = normalize(numericals, norm='l2')
df = raw_df.copy()  # df was undefined in the original cell; copy the cleaned frame
df.head()
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> With details entirely similar to the previous question, outlier removal and normalization have been performed. </p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

### c)

```
Y = df['Class variable(0 or 1)']
X = df.drop('Class variable(0 or 1)', axis=1)
```

### d)

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
print('Distribution of different classes in training data: (%)')
y = pd.DataFrame(y_train)
((y.groupby('Class
variable(0 or 1)')['Class variable(0 or 1)'].count())/y_train.shape[0])*100
print('Distribution of different classes in test data: (%)')
y = pd.DataFrame(y_test)
((y.groupby('Class variable(0 or 1)')['Class variable(0 or 1)'].count())/y_test.shape[0])*100
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> Using the train_test_split function, we split the data into training and test sets with the requested ratio. Then we compute the percentage of each class in the training and test data. As shown, each class appears with nearly the same percentage in both splits. </p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

### e)

```
from sklearn.ensemble import RandomForestClassifier

myclassifier = RandomForestClassifier(max_depth=3, criterion='entropy').fit(X_train, y_train)
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> We build the random forest classifier with the parameters stated in the question, then fit it on the training data. </p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

### f)

```
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score

y_predicted = myclassifier.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted))
print("Mean Accuracy = ", myclassifier.score(X_test, y_test))  # score() takes only X and y
print("Accuracy Score = ", accuracy_score(y_test, y_predicted))
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> I will explain the confusion matrix in detail in the later sections. At this stage, I report the error and the accuracy as above; the accuracy is around 70%.
</p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">

### g)

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerLine2D

# ranging from 1 to 32
max_depths = np.linspace(1, 32, 32, endpoint=True)
train_results = []
test_results = []
for max_depth in max_depths:
    rf = RandomForestClassifier(max_depth=max_depth, criterion='entropy')
    rf.fit(X_train, y_train)
    train_pred = rf.predict(X_train)
    ac = accuracy_score(y_train, train_pred)
    train_results.append(ac)
    y_pred = rf.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    test_results.append(acc)

fig, axes = plt.subplots(1, 1, figsize=(9, 9))
trains, = plt.plot(max_depths, train_results, 'b', label='Train Accuracy')
tests, = plt.plot(max_depths, test_results, 'r', label='Test Accuracy')
plt.legend(handler_map={trains: HandlerLine2D(numpoints=2)})
plt.ylabel('Accuracy Score')
plt.xlabel('max_depth')
plt.show()

from sklearn.metrics import roc_curve, auc

max_depths = np.linspace(1, 32, 32, endpoint=True)
train_results = []
test_results = []
for max_depth in max_depths:
    dt = RandomForestClassifier(max_depth=max_depth, criterion='entropy')
    dt.fit(X_train, y_train)
    train_pred = dt.predict(X_train)
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)
    roc_auc = auc(false_positive_rate, true_positive_rate)
    train_results.append(roc_auc)
    y_pred = dt.predict(X_test)
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
    roc_auc = auc(false_positive_rate, true_positive_rate)
    test_results.append(roc_auc)

fig, axes = plt.subplots(1, 1, figsize=(9, 9))
# this sweep measures ROC AUC, so label the curves accordingly
trains, = plt.plot(max_depths, train_results, 'b', label='Train AUC')
tests, = plt.plot(max_depths, test_results, 'r', label='Test AUC')
plt.legend(handler_map={trains: HandlerLine2D(numpoints=2)})
plt.ylabel('AUC Score')
plt.xlabel('max_depth')
plt.show()

myclassifier = RandomForestClassifier(max_depth=5, criterion='entropy').fit(X_train, y_train)
y_predicted
= myclassifier.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted))
print("Mean Accuracy = ", myclassifier.score(X_test, y_test))  # score() takes only X and y
print("Accuracy Score = ", accuracy_score(y_test, y_predicted))
```

<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style=" direction:ltr;text-align:left;"> This parameter sets the maximum depth of the trees. If it is not set, nodes are expanded until all leaves are pure or contain fewer than min_samples_split samples. As the accuracy curves plotted for the train and test splits show, beyond a certain depth (around 10) the model runs the risk of overfitting: it produces very good results on the training data but can no longer predict new data well. We should therefore set this parameter so that overfitting does not occur, while also not choosing it so small that the model fails to learn properly. Accordingly, I set it to 5, which lies before the onset of overfitting and still gives good test accuracy. </p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
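The manual max_depth sweep above can also be automated with scikit-learn's GridSearchCV, which cross-validates each depth instead of relying on a single train/test split. The synthetic data below is a stand-in (an assumption, so the sketch runs on its own); with the notebook's X_train / y_train, the search is called the same way:

```python
# cross-validated search over max_depth for a random forest
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# synthetic stand-in for the diabetes features
X_train, y_train = make_classification(n_samples=400, n_features=8,
                                       n_informative=5, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(n_estimators=30, criterion='entropy', random_state=0),
    param_grid={'max_depth': list(range(1, 16))},
    cv=5,
    scoring='accuracy',
)
search.fit(X_train, y_train)
print('best max_depth:', search.best_params_['max_depth'])
print('best CV accuracy:', round(search.best_score_, 3))
```

Because each candidate depth is scored on held-out folds, the selected depth is less sensitive to one lucky split than the manual plot.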
<div> <h1 style="text-align: center;">Machine learning from scratch - Part II</h1> <h2 style="text-align: center;">EMBO practical course on population genomics 2019 @ Procida, Italy</h2> </div>

---

### Authors: Marco Chierici & Margherita Francescatto
### _FBK/MPBA_

---

**Recap.** We are using a subset of the SEQC neuroblastoma data set [Zhang et al, Genome Biology, 2015] consisting of 272 samples (136 training, 136 test). The data was preprocessed a bit to facilitate the progress of the tutorial.

We start by loading the modules we need to process the data.

```
import numpy as np
import pylab as pl  ## for plotting
import pandas as pd  ## for reading text files and manipulating data frames
from sklearn import neighbors  ## kNN classifier
from sklearn import svm  ## SVM classifier
from sklearn.ensemble import RandomForestClassifier  ## RF classifier
from sklearn.model_selection import cross_val_score  ## needed to train in CV
%matplotlib inline

np.random.seed(42)  ## set random seed just in case
```

Define files to read:

```
## for convenience, define the data directory as a variable
DATA_DIR = "../data/"  # "/path/to/your/data"
DATA_TR = DATA_DIR + "MAV-G_272_tr.txt.gz"
DATA_TS = DATA_DIR + "MAV-G_272_ts.txt.gz"
LABS_TR = DATA_DIR + "labels_tr.txt"
LABS_TS = DATA_DIR + "labels_ts.txt"
```

Read in the files as _pandas dataframes_ (they are conceptually like R data frames):

```
data_tr = pd.read_csv(DATA_TR, sep="\t")
data_ts = pd.read_csv(DATA_TS, sep="\t")
```

Since we already looked at the data in the first part of the tutorial, we move directly to the juicy stuff. We drop the first column from the train and test expression sets, since it's just the sample IDs...

```
data_tr = data_tr.drop('sampleID', axis=1)
data_ts = data_ts.drop('sampleID', axis=1)
```

...and store the data into Numpy arrays.

```
x_tr = data_tr.values
x_ts = data_ts.values
```

Now we read in the files containing labels and select the column with the CLASS target to do our first round of analyses.
```
labs_tr = pd.read_csv(LABS_TR, sep="\t")
labs_ts = pd.read_csv(LABS_TS, sep="\t")
class_lab_tr = labs_tr[['CLASS']]
class_lab_ts = labs_ts[['CLASS']]
y_tr = class_lab_tr.values.ravel()
y_ts = class_lab_ts.values.ravel()
```

In the previous part of the tutorial, we focused on the k-NN classifier. In the previous lecture, however, we explored theoretical aspects related to two other broadly used classifiers: Support Vector Machines (SVMs) and Random Forests (RFs). In this second part of the tutorial, the first thing we want to do is to assess how these two alternative classification methods perform on our neuroblastoma dataset.

We start with SVM. We first rescale the data, import the relevant model, and create an instance of the SVM classifier.

```
from sklearn.preprocessing import MinMaxScaler

## first you need to create a "scaler" object
scaler = MinMaxScaler(feature_range=(-1, 1))
## then you actually scale data by fitting the scaler object on the data
scaler.fit(x_tr)
x_tr = scaler.transform(x_tr)
x_ts = scaler.transform(x_ts)

## import support vector classifier (SVC) and create an instance
from sklearn.svm import SVC
svc = SVC(random_state=42, verbose=1, kernel='linear')
```

Note that the specification _kernel = 'linear'_ implies that a linear kernel will be used. If you remember from the lecture, this means that a linear function is used to define the decision boundaries of the classifier. Alternatives include _'poly'_ and _'rbf'_ for polynomial and Gaussian kernels, respectively. Scikit-learn offers an alternative implementation of linear SVMs; you can find more details in the scikit-learn User Guide.

As previously done with the k-NN classifier, we fit the SVM and get the predictions for the test data.

```
## fit the model and get the predictions
svc.fit(x_tr, y_tr)
class_pred_ts = svc.predict(x_ts)
```

Now we take a look at the classification metrics introduced in the first part of the tutorial. To access the functions, we need to load the metrics module.
```
from sklearn import metrics

print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
print('SENS = ', metrics.recall_score(class_lab_ts, class_pred_ts))
```

We can also take a look at the classification report.

```
print(metrics.classification_report(class_lab_ts, class_pred_ts))
```

Exercise: **one-shot Random Forest classification**. _Hint:_ the RF classifier is implemented in the scikit-learn class RandomForestClassifier, from the _sklearn.ensemble_ module.

```
## space for exercise
from sklearn.ensemble import RandomForestClassifier as RFC

clf = RFC(n_estimators=500)
clf.fit(x_tr, y_tr)
y_pred_rfc = clf.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, y_pred_rfc))
print('ACC = ', metrics.accuracy_score(class_lab_ts, y_pred_rfc))
print('SENS = ', metrics.recall_score(class_lab_ts, y_pred_rfc))
print(metrics.classification_report(class_lab_ts, y_pred_rfc))
```

## Parameter tuning

As mentioned in the lecture, scikit-learn offers a very useful and flexible tool for parameter tuning called _GridSearchCV_. While the tool is very sophisticated and efficient, it is useful to at least try an example _by hand_ to understand what is happening in the background.

For this example we use a linear SVM and try to tune the C parameter. You might remember from the lectures that the parameter C essentially controls how much we want to avoid misclassifying each training example. Large values of C result in smaller margins, i.e. closer fitting to the training data. As mentioned in the classes, the drawback is over-fitting, resulting in poor generalization.
```
## define the sequence of C values we want to use in the search of the best one
C_list = [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1]

for C in C_list:
    print('C = ', C)
    svc = svm.SVC(kernel='linear', C=C)
    svc.fit(x_tr, class_lab_tr.values.ravel())
    class_pred_ts = svc.predict(x_ts)
    print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
    print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
    print('SENS = ', metrics.recall_score(class_lab_ts, class_pred_ts), "\n")
```

From C = 1e-03 the classification performance reaches a plateau. C = 1e-04 yields the highest MCC on the test set: when tuning the parameters, we would consider this the best choice for the problem.

**Exercise:** as you already saw in the lectures, there are many parameters that can be tuned, even when considering only one simple classifier. For example, if you consider an SVM with the 'rbf' kernel, you could check how performance changes with different values of C **and** gamma, for example using two nested loops.

```
## space for exercise
```

As we mentioned, scikit-learn offers a fully automated parameter-tuning engine. We illustrate its power with a simple example on our data. We use GridSearchCV to search through a grid of C and gamma parameter options for an SVM with the 'rbf' kernel. In order to do this, we define a small function svc_param_selection that does the work for us.
```
from sklearn.model_selection import GridSearchCV

def svc_param_selection(X, y, nfolds):
    Cs = [0.001, 0.01, 0.1, 1, 10]
    gammas = [0.001, 0.01, 0.1, 1, 'auto']
    param_grid = {'C': Cs, 'gamma': gammas}
    grid_search = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=nfolds, n_jobs=4)
    grid_search.fit(X, y)
    return grid_search.best_params_

svc_param_selection(x_tr, y_tr, 5)
```

## Feature ranking

As mentioned in the lecture, random forests have a built-in tool for feature ranking.

```
# Build a forest and compute the feature importances
rf = RandomForestClassifier(n_estimators=250)
rf.fit(x_tr, y_tr)
```

For the sake of completeness, make the predictions and check the classification performance.

```
class_pred_ts = rf.predict(x_ts)
print('MCC = ', metrics.matthews_corrcoef(class_lab_ts, class_pred_ts))
print('ACC = ', metrics.accuracy_score(class_lab_ts, class_pred_ts))
print('SENS = ', metrics.recall_score(class_lab_ts, class_pred_ts))
```

Now extract the feature importances and display the first 10:

```
importances = rf.feature_importances_
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking (top 10 features):")
for f in range(10):
    print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
```

It would be nice to know which genes they actually correspond to. If you remember, the gene names are the column names of the pandas dataframe containing the training/test data.

```
columnsNamesArr = data_tr.columns.values
for i in range(10):
    print(columnsNamesArr[indices[i]])
```

## Extra exercises

The classification task considered so far (CLASS) is quite easy, since the classes reflect extreme disease outcomes (favorable vs unfavorable). A more interesting task could be the prediction of Event-Free Survival (EFS).
To do this, an extended version of the dataset is provided in the `/data/marco` directory:

```
DATA_TR = DATA_DIR + "MAV-G_498_tr.txt.gz"
DATA_TS = DATA_DIR + "MAV-G_498_ts.txt.gz"
LABS_TR = DATA_DIR + "labels_full_tr.txt"
LABS_TS = DATA_DIR + "labels_full_ts.txt"
```

- Read the data in and prepare the `x_tr`, `x_ts`, `y_tr`, `y_ts` Numpy arrays, as before, using "EFS" as the target variable.
- Recalling concepts from the first practical, perform an exploratory PCA analysis, plotting the first two components.
- Train a kNN classifier in one-shot mode: fit the model on the training set and predict the labels on the test set. Compute performance metrics using the provided true labels of the test set.
- Experiment with different classifier(s) and/or different parameters.
- Try tuning the parameters (e.g. using GridSearchCV) and find the optimal parameter set.
- Using the optimal parameters, run an (iterated) cross-validation on the training set; compute the average cross-validation metrics.
- Using the optimal parameters, train a model on the whole training set and predict the labels of the test set. Compute the metrics and compare them with the average cross-validation metrics. What do you expect?
- Use the trained model to rank the features and inspect the top ones.
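The suggested workflow can be sketched end-to-end. Since the MAV-G files are not bundled here, the sketch below runs on synthetic data; the dataset, the choice of kNN, and the parameter grid are placeholders for whatever you end up using:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import matthews_corrcoef, make_scorer

# Synthetic stand-in for the expression matrix and the EFS labels
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)
x_tr, x_ts, y_tr, y_ts = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) tune the number of neighbours with a grid search on the training set
grid = GridSearchCV(KNeighborsClassifier(),
                    {'n_neighbors': [1, 3, 5, 7, 9]}, cv=5)
grid.fit(x_tr, y_tr)
best_k = grid.best_params_['n_neighbors']

# 2) cross-validated performance estimate with the optimal parameter
mcc_scorer = make_scorer(matthews_corrcoef)
cv_mcc = cross_val_score(KNeighborsClassifier(n_neighbors=best_k),
                         x_tr, y_tr, cv=5, scoring=mcc_scorer)

# 3) final model on the whole training set, evaluated on the held-out test set
knn = KNeighborsClassifier(n_neighbors=best_k).fit(x_tr, y_tr)
test_mcc = matthews_corrcoef(y_ts, knn.predict(x_ts))
print('average CV MCC = %.3f ; test MCC = %.3f' % (cv_mcc.mean(), test_mcc))
```

On real, small datasets expect the test-set MCC to fluctuate around the cross-validation average; a large gap between the two is a hint of overfitting during tuning.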
``` import numpy as np from scipy.spatial import Delaunay from scipy.interpolate import LinearNDInterpolator from scipy.constants import mu_0 from scipy.constants import elementary_charge as q_e from scipy.constants import proton_mass as m_i from astropy.convolution import convolve, convolve_fft from scipy.signal import fftconvolve from scipy.interpolate import SmoothBivariateSpline import write_canonical_flux_tube_quantities as wcf reload(wcf) from datetime import date from datetime import datetime import visit_writer import structured_3d_vtk as struc_3d reload(struc_3d) %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') import os import ion_current_to_mach_number as ic_to_mach reload(ic_to_mach) import read_from_sql from mpl_toolkits.mplot3d import Axes3D ``` # Read IDL files and generate interpolators ``` bx_all_planes = wcf.save_idl_quantity_to_unstructured_grids('bx', 'B_x', 'now', x_min=-0.032, x_max=0.028, y_min=-0.022, y_max=0.032, z_min=0.249, z_max=0.416) by_all_planes = wcf.save_idl_quantity_to_unstructured_grids('by', 'B_y', 'now', x_min=-0.032, x_max=0.028, y_min=-0.022, y_max=0.032, z_min=0.249, z_max=0.416) bz_all_planes = wcf.save_idl_quantity_to_unstructured_grids('bz', 'B_z', 'now', x_min=-0.032, x_max=0.028, y_min=-0.022, y_max=0.032, z_min=0.249, z_max=0.416) te_all_planes = wcf.save_idl_quantity_to_unstructured_grids('te', 'T_e', 'now', x_min=-0.026, x_max=0.028, y_min=-0.03, y_max=0.028, z_min=0.249, z_max=0.416, bounds=(1e-3, 1e3)) n_all_planes = wcf.save_idl_quantity_to_unstructured_grids('n', 'n', 'now', x_min=-0.026, x_max=0.028, y_min=-0.03, y_max=0.028, z_min=0.249, z_max=0.416, bounds=(1e3, 1e22)) n_three_planes = wcf.remove_plane(0.302, n_all_planes) te_three_planes = wcf.remove_plane(0.302, te_all_planes) n_all_planes = n_three_planes te_all_planes = te_three_planes bx_triangulation, bx_interpolators = wcf.give_delaunay_and_interpolator(bx_all_planes) by_triangulation, by_interpolators = 
wcf.give_delaunay_and_interpolator(by_all_planes) bz_triangulation, bz_interpolators = wcf.give_delaunay_and_interpolator(bz_all_planes) te_triangulation, te_interpolators = wcf.give_delaunay_and_interpolator(te_all_planes) n_triangulation, n_interpolators = wcf.give_delaunay_and_interpolator(n_all_planes) ``` # Read Mach measurements from database and build interpolators ``` timesteps = 250 database = '/home/jensv/rsx/jens_analysis/shots_database/source/shots.db' table = 'Shots' z_direction_1, z_direction_2 = 0, 180 y_direction_1, y_direction_2 = 90, 270 angle_signs = {0: 1, 180: -1, 90: -1, 0: 1} min_spectral_density = 1.6e-8 condition_z_0416 = ("campaigns = 'mach_probe_plane_campaign_1'" " AND fiducial_pre_crowbar_gyration_spectral_density > " + str(min_spectral_density) + " AND mach_signals_exist = 1" " AND (mach_orientation = " + str(z_direction_1) + " OR mach_orientation = " + str(z_direction_2) + ")") condition_y_0416 = ("campaigns = 'mach_probe_plane_campaign_1'" " AND fiducial_pre_crowbar_gyration_spectral_density > " + str(min_spectral_density) + " AND mach_signals_exist = 1" " AND (mach_orientation = " + str(y_direction_1) + " OR mach_orientation = " + str(y_direction_2) + ")") cursor, connection = read_from_sql.cursor_with_rows(condition_z_0416, database, table) z_0416_shots = cursor.fetchall() cursor.close() connection.close() cursor, connection = read_from_sql.cursor_with_rows(condition_y_0416, database, table) y_0416_shots = cursor.fetchall() cursor.close() connection.close() condition_z_302 = ("campaigns = 'mach_probe_plane_campaign_2'" " AND fiducial_pre_crowbar_gyration_spectral_density > " + str(min_spectral_density) + " AND mach_signals_exist = 1" " AND (mach_orientation = " + str(z_direction_1) + " OR mach_orientation = " + str(z_direction_2) + ")") cursor, connection = read_from_sql.cursor_with_rows(condition_z_302, database, table) z_0302_shots = cursor.fetchall() cursor.close() connection.close() mach_z_0416_measurements = 
ic_to_mach.run_mach_analysis(z_0416_shots, timesteps, angle_signs) mach_y_0416_measurements = ic_to_mach.run_mach_analysis(y_0416_shots, timesteps, angle_signs) mach_z_0302_measurements = ic_to_mach.run_mach_analysis(z_0302_shots, timesteps, angle_signs) mach_z_0416_measurements['delays'] = np.arange(timesteps) mach_y_0416_measurements['delays'] = np.arange(timesteps) mach_z_0302_measurements['delays'] = np.arange(timesteps) mach_z_0416_measurements = struc_3d.average_duplicate_points(mach_z_0416_measurements) mach_y_0416_measurements = struc_3d.average_duplicate_points(mach_y_0416_measurements) mach_z_0302_measurements = struc_3d.average_duplicate_points(mach_z_0302_measurements) mach_y_measurements = {0.416: mach_y_0416_measurements} mach_z_measurements = {0.302: mach_z_0302_measurements, 0.416: mach_z_0416_measurements} mach_y_all_planes = wcf.save_quantity_to_unstructured_grids(mach_y_measurements, 'Mach_y', 'Mach_y', '2016-07-26', planes=[0.416], x_min=-0.052, x_max=0.052, y_min=-0.022, y_max=0.032, z_min=0.249, z_max=0.416, bounds=(-10, 10)) mach_z_all_planes = wcf.save_quantity_to_unstructured_grids(mach_z_measurements, 'Mach_z', 'Mach_z', '2016-07-26', planes=[0.302, 0.416], x_min=-0.032, x_max=0.032, y_min=-0.022, y_max=0.032, z_min=0.249, z_max=0.416, bounds=(-10, 10)) mach_y_all_planes = wcf.remove_nan_points(mach_y_all_planes) mach_z_all_planes = wcf.remove_nan_points(mach_z_all_planes) mach_y_triangulation, mach_y_interpolators = wcf.give_delaunay_and_interpolator(mach_y_all_planes) mach_z_triangulation, mach_z_interpolators = wcf.give_delaunay_and_interpolator(mach_z_all_planes) ``` # Just work on timestep 0 ``` time_point = 0 ``` # Interpolate Temperature and density ``` (x_min, x_max, y_min, y_max, z_min, z_max) = wcf.joint_mach_bdot_tp_extent() spatial_increment = 0.001 mesh = np.meshgrid(np.linspace(x_min, x_max, np.ceil((x_max-x_min)/spatial_increment)), np.linspace(y_min, y_max, np.ceil((y_max-y_min)/spatial_increment)), np.linspace(z_min, 
z_max, np.ceil((z_max-z_min)/spatial_increment))) mesh_wo_edges = wcf.remove_edges_mesh([np.array(mesh[0]), np.array(mesh[1]), np.array(mesh[2])]) ones = np.ones(mesh_wo_edges[0].shape) te_interpolator = te_interpolators[time_point] n_interpolator = n_interpolators[time_point] temperature = wcf.scalar_on_mesh(te_interpolator, mesh_wo_edges) density = wcf.scalar_on_mesh(n_interpolator, mesh_wo_edges) b_field, b_field_norm = wcf.b_field_on_mesh([bx_interpolators[time_point], by_interpolators[time_point], bz_interpolators[time_point]], mesh_wo_edges, bias=2e-2) temperature_plane_normalized = temperature / np.nanmax(np.nanmax(temperature, 0), 0)[None, None, :] ``` # Determine ion_velocity_term1 ``` alpha = 1 filter_width = 15 (x_min, x_max, y_min, y_max, z_min, z_max) = wcf.joint_mach_bdot_tp_extent() spatial_increment = 0.001 mesh = np.meshgrid(np.linspace(x_min, x_max, np.ceil((x_max-x_min)/spatial_increment)), np.linspace(y_min, y_max, np.ceil((y_max-y_min)/spatial_increment)), np.linspace(z_min, z_max, np.ceil((z_max-z_min)/spatial_increment))) mesh_wo_edges = wcf.remove_edges_mesh([np.array(mesh[0]), np.array(mesh[1]), np.array(mesh[2])]) ones = np.ones(mesh_wo_edges[0].shape) time_point = 200 bx_interpolator = bx_interpolators[time_point] by_interpolator = by_interpolators[time_point] bz_interpolator = bz_interpolators[time_point] te_interpolator = te_interpolators[time_point] n_interpolator = n_interpolators[time_point] bx_derivative = wcf.triangulate_derivatives(mesh, bx_triangulation, bx_interpolator, increment=0.0000001) bx_derivative = wcf.remove_edges_derivative_meshes(bx_derivative) by_derivative = wcf.triangulate_derivatives(mesh, by_triangulation, by_interpolator, increment=0.0000001) by_derivative = wcf.remove_edges_derivative_meshes(by_derivative) bz_derivative = wcf.triangulate_derivatives(mesh, bz_triangulation, bz_interpolator, increment=0.0000001) bz_derivative = wcf.remove_edges_derivative_meshes(bz_derivative) current = 
wcf.current_on_mesh([bx_derivative, by_derivative, bz_derivative]) b_field, b_field_norm = wcf.b_field_on_mesh([bx_interpolator, by_interpolator, bz_interpolator], mesh_wo_edges, bias=2e-2) temperature = wcf.scalar_on_mesh(te_interpolator, mesh_wo_edges) density = wcf.scalar_on_mesh(n_interpolator, mesh_wo_edges) current = np.asarray(current) density = np.asarray(density) b_field_norm = np.asarray(b_field_norm) density = wcf.boxcar_filter_quantity_mesh(density, filter_width) for direction in xrange(len(current)): current[direction] = wcf.boxcar_filter_quantity_mesh(current[direction], filter_width) density_constant = 1e18*np.ones(density.shape) ion_velocity_term_1 = wcf.calc_ion_velocity_term_1(current, density, q_e) ion_velocity_term_1_constant_density = wcf.calc_ion_velocity_term_1(current, density_constant, q_e) ion_velocity_term_2 = wcf.calc_ion_velocity_term_2(b_field_norm, alpha) ion_vorticity_term_1 = wcf.calc_ion_vorticity_term_1(current, density, q_e, mesh_wo_edges) ion_vorticity_term_1_constant_density = wcf.calc_ion_vorticity_term_1(current, density_constant, q_e, mesh_wo_edges) ion_vorticity_term_2 = wcf.calc_ion_vorticity_term_2(b_field_norm, alpha, mesh_wo_edges) mesh_wo_edges = wcf.remove_edges_mesh([np.array(mesh[0]), np.array(mesh[1]), np.array(mesh[2])]) ``` # Determine measured ion_velocity u_i ``` (x_min, x_max, y_min, y_max, z_min, z_max) = wcf.joint_mach_bdot_tp_extent() spatial_increment = 0.001 mesh = np.meshgrid(np.linspace(x_min, x_max, np.ceil((x_max-x_min)/spatial_increment)), np.linspace(y_min, y_max, np.ceil((y_max-y_min)/spatial_increment)), np.linspace(z_min, z_max, np.ceil((z_max-z_min)/spatial_increment))) mach_y_interpolator = mach_y_interpolators[0] mach_z_interpolator = mach_z_interpolators[0] te_interpolator = te_interpolators[0] mach_y = wcf.scalar_on_mesh(mach_y_interpolator, mesh[:2]) mach_z = wcf.scalar_on_mesh(mach_z_interpolator, mesh) te = wcf.scalar_on_mesh(te_interpolator, mesh) u_i_y = np.sqrt(te*q_e/m_i)*mach_y u_i_z 
= np.sqrt(te*q_e/m_i)*mach_z u_i_y = np.reshape(u_i_y, mesh[0].shape) u_i_z = np.reshape(u_i_z, mesh[0].shape) u_i_y = wcf.remove_edges_scalar_quantity_meshes(u_i_y) u_i_z = wcf.remove_edges_scalar_quantity_meshes(u_i_z) np.nanmax(u_i_y) np.sum(np.isnan(u_i_z[:,:,-1])) alpha_from_y = (u_i_y - ion_velocity_term_1[1])/ b_field_norm[1] alpha_from_z = (u_i_z - ion_velocity_term_1[2])/ b_field_norm[2] alpha_from_y_flattened_z04 = alpha_from_y[:,:,-1].ravel() alpha_from_z_flattened_z04 = alpha_from_z[:,:,-1].ravel() points_x = mesh_wo_edges[0][:, :, -1].ravel() points_y = mesh_wo_edges[1][:, :, -1].ravel() points_z_z04 = mesh[2][0,0,-1]*np.ones(points_x.shape) std_z04 = np.expand_dims(np.zeros(points_x.shape), 0) data_y_z04 = {'a_out': np.expand_dims(alpha_from_y_flattened_z04, 0), 'x_out': points_x, 'y_out': points_y, 'z_out': points_z_z04, 'std': std_z04, 'delays': np.asarray([0]) } data_z_z04 = {'a_out': np.expand_dims(alpha_from_z_flattened_z04, 0), 'x_out': points_x, 'y_out': points_y, 'z_out': points_z_z04, 'std': std_z04, 'delays': np.asarray([0]) } data_y_z04 = wcf.remove_nan_points(data_y_z04) data_z_z04 = wcf.remove_nan_points(data_z_z04) positions_to_remove = np.where(np.abs(data_y_z04['a_out'][0]) > np.abs(np.nanmean(data_y_z04['a_out'][0])*10000)) data_y_z04 = wcf.remove_positions(data_y_z04, positions_to_remove) alpha_z_z04_triangulation, alpha_z_z04_interpolators = wcf.give_delaunay_and_interpolator(data_z_z04) alpha_y_z04_triangulation, alpha_y_z04_interpolators = wcf.give_delaunay_and_interpolator(data_y_z04) alpha_z_test = wcf.scalar_on_mesh(alpha_z_z04_interpolators[0], [mesh_wo_edges[0], mesh_wo_edges[1]]) alpha_z_test[:,:,0] alpha_y_test = wcf.scalar_on_mesh(alpha_y_z04_interpolators[0], [mesh_wo_edges[0], mesh_wo_edges[1]]) alpha_y_test[:,:,0] ``` # Compare contour plots of ion velocities ``` plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], u_i_z[:,:,-1]) plt.colorbar() plt.show() plt.contourf(mesh_wo_edges[0][:,:,0], 
mesh_wo_edges[1][:,:,0], b_field_norm[2][:, :, -1]*alpha_z_test[:,:,-1] + ion_velocity_term_1[2][:, :, -1]) plt.colorbar() plt.show() ``` # Now look at y using alpha_z interpolation ``` plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], u_i_y[:,:,-1]) plt.colorbar() plt.show() plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], b_field_norm[1][:, :, -1]*alpha_z_test[:,:,-1] + ion_velocity_term_1[1][:, :, -1]) plt.colorbar() plt.show() ``` # Now look at u_y using alpha_y interpolation ``` plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], u_i_y[:,:,-1]) plt.colorbar() plt.show() plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], b_field_norm[1][:, :, -1]*alpha_y_test[:,:,-1] + ion_velocity_term_1[1][:, :, -1]) plt.colorbar() plt.show() alpha_y_test_smooth = wcf.boxcar_filter_quantity_mesh(alpha_y_test, 15) plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], b_field_norm[0][:, :, -1]*alpha_y_test[:,:,-1] + ion_velocity_term_1[0][:, :, -1], vmax=1e5) plt.show() plt.contourf(mesh_wo_edges[0][:,:,0], mesh_wo_edges[1][:,:,0], b_field_norm[0][:, :, -1]*alpha_y_test_smooth[:,:,-1] + ion_velocity_term_1[0][:, :, -1]) plt.colorbar() plt.show() np.max(ion_velocity_term_1[0][:, :, -1]) np.max(ion_velocity_term_1[1][:, :, -1]) np.max(b_field_norm[1][:, :, -1]) np.max(b_field_norm[0][:, :, -1]) np.nanmean(ion_velocity_term_1[0][:, :, -1] / ion_velocity_term_1[1][:, :, -1]) np.nanmax(b_field_norm[0][:, :, -1]*alpha_y_test[:,:,-1] + ion_velocity_term_1[0][:, :, -1]) np.where((b_field_norm[0][:, :, -1]*alpha_y_test[:,:,-1] + ion_velocity_term_1[0][:, :, -1]) > 1.2e8) (b_field_norm[0][:, :, -1]*alpha_y_test[:,:,-1] + ion_velocity_term_1[0][:, :, -1])[13,1] b_field_norm[0][13, 1, -1] alpha_y_test[13, 1,-1] ion_velocity_term_1[0][13,1,-1] ion_velocity_term_1[1][:, :, -1] alpha_from_y_flattened_z04 = alpha_from_y[:,:,-1].ravel() alpha_from_z_flattened_z04 = alpha_from_z[:,:,-1].ravel() points_x = mesh_wo_edges[0][:, :, -1].ravel() 
points_y = mesh_wo_edges[1][:, :, -1].ravel() points_z_z04 = mesh[2][0,0,-1]*np.ones(points_x.shape) std_z04 = np.expand_dims(np.zeros(points_x.shape), 0) data_y_z04 = {'a_out': np.expand_dims(alpha_from_y_flattened_z04, 0), 'x_out': points_x, 'y_out': points_y, 'z_out': points_z_z04, 'std': std_z04 } ```
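`wcf.boxcar_filter_quantity_mesh`, used repeatedly above, is a project-specific helper whose source is not shown here. Assuming it applies a plain moving-average (boxcar) window to a 3D quantity mesh, an equivalent operation can be sketched with `scipy.ndimage.uniform_filter`:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hypothetical stand-in for wcf.boxcar_filter_quantity_mesh: smooth a 3D
# quantity mesh with a moving-average (boxcar) window of the given width.
def boxcar_filter(quantity, filter_width):
    return uniform_filter(quantity, size=filter_width, mode='nearest')

# Example: smooth a noisy density-like field
rng = np.random.default_rng(0)
density = 1e18 * (1.0 + 0.5 * rng.random((20, 20, 20)))
smoothed = boxcar_filter(density, 5)
```

The smoothed field keeps the shape and overall scale of the input while reducing point-to-point variance, which is why the notebook filters the density and current before forming ratios like the velocity terms.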
# Similarity Encoders with Keras ## using the model definition from `simec.py` ``` from __future__ import unicode_literals, division, print_function, absolute_import from builtins import range import numpy as np np.random.seed(28) import matplotlib.pyplot as plt from sklearn.manifold import Isomap from sklearn.decomposition import KernelPCA from sklearn.preprocessing import StandardScaler from sklearn.datasets import fetch_mldata, fetch_20newsgroups import tensorflow as tf tf.set_random_seed(28) import keras from keras.models import Sequential, Model from keras.layers import Input, Dense, Activation # https://github.com/cod3licious/nlputils from nlputils.features import FeatureTransform, features2mat from simec import SimilarityEncoder from utils import center_K, check_embed_match, check_similarity_match from utils_plotting import plot_mnist, plot_20news %matplotlib inline %load_ext autoreload %autoreload 2 ``` ### MNIST with Linear Kernel ``` # load digits mnist = fetch_mldata('MNIST original', data_home='data') X = mnist.data/255. 
# normalize to 0-1 y = np.array(mnist.target, dtype=int) # subsample 10000 random data points np.random.seed(42) n_samples = 10000 n_test = 2000 rnd_idx = np.random.permutation(X.shape[0])[:n_samples] X_test, y_test = X[rnd_idx[:n_test],:], y[rnd_idx[:n_test]] X, y = X[rnd_idx[n_test:],:], y[rnd_idx[n_test:]] ss = StandardScaler(with_std=False) X = ss.fit_transform(X) X_test = ss.transform(X_test) n_train, n_features = X.shape # centered linear kernel matrix K_lin = center_K(np.dot(X, X.T)) # linear kPCA kpca = KernelPCA(n_components=2, kernel='linear') X_embed = kpca.fit_transform(X) X_embed_test = kpca.transform(X_test) plot_mnist(X_embed, y, X_embed_test, y_test, title='MNIST - linear Kernel PCA') print("error similarity match: msqe: %.10f ; r^2: %.10f ; rho: %.10f" % check_similarity_match(X_embed, K_lin)) # on how many target similarities you want to train - faster and works equally well than training on all n_targets = 1000 # K_lin.shape[1] # initialize the model simec = SimilarityEncoder(X.shape[1], 2, n_targets, s_ll_reg=0.5, S_ll=K_lin[:n_targets,:n_targets]) # train the model to get an embedding with which the target similarities # can be linearly approximated simec.fit(X, K_lin[:,:n_targets], epochs=25) # get the embeddings X_embeds = simec.transform(X) X_embed_tests = simec.transform(X_test) plot_mnist(X_embeds, y, X_embed_tests, y_test, title='MNIST - SimEc (lin. 
kernel, linear)') # correlation with the embedding produced by the spectral method should be high print("correlation with lin kPCA : %f" % check_embed_match(X_embed, X_embeds)[1]) print("correlation with lin kPCA (test): %f" % check_embed_match(X_embed_test, X_embed_tests)[1]) # similarity match error should be similar to the one from kpca print("error similarity match: msqe: %.10f ; r^2: %.10f ; rho: %.10f" % check_similarity_match(X_embeds, K_lin)) ``` ### Non-linear MNIST embedding with isomap ``` # isomap isomap = Isomap(n_neighbors=10, n_components=2) X_embed = isomap.fit_transform(X) X_embed_test = isomap.transform(X_test) plot_mnist(X_embed, y, X_embed_test, y_test, title='MNIST - isomap') # non-linear SimEc to approximate isomap solution K_geod = center_K(-0.5*(isomap.dist_matrix_**2)) n_targets = 1000 # initialize the model simec = SimilarityEncoder(X.shape[1], 2, n_targets, hidden_layers=[(20, 'tanh')], s_ll_reg=0.5, S_ll=K_geod[:n_targets,:n_targets], opt=keras.optimizers.Adamax(lr=0.01)) # train the model to get an embedding with which the target similarities # can be linearly approximated simec.fit(X, K_geod[:,:n_targets], epochs=25) # get the embeddings X_embeds = simec.transform(X) X_embed_tests = simec.transform(X_test) plot_mnist(X_embeds, y, X_embed_tests, y_test, title='MNIST - SimEc (isomap, 1 h.l.)') print("correlation with isomap : %f" % check_embed_match(X_embed, X_embeds)[1]) print("correlation with isomap (test): %f" % check_embed_match(X_embed_test, X_embed_tests)[1]) ``` ## 20newsgroups embedding ``` ## load the data and transform it into a tf-idf representation categories = [ "comp.graphics", "rec.autos", "rec.sport.baseball", "sci.med", "sci.space", "soc.religion.christian", "talk.politics.guns" ] newsgroups_train = fetch_20newsgroups(subset='train', remove=( 'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42) newsgroups_test = fetch_20newsgroups(subset='test', remove=( 'headers', 'footers', 
'quotes'), data_home='data', categories=categories, random_state=42) # store in dicts (if the text contains more than 3 words) textdict = {i: t for i, t in enumerate(newsgroups_train.data) if len(t.split()) > 3} textdict.update({i: t for i, t in enumerate(newsgroups_test.data, len(newsgroups_train.data)) if len(t.split()) > 3}) train_ids = [i for i in range(len(newsgroups_train.data)) if i in textdict] test_ids = [i for i in range(len(newsgroups_train.data), len(textdict)) if i in textdict] print("%i training and %i test samples" % (len(train_ids), len(test_ids))) # transform into tf-idf features ft = FeatureTransform(norm='max', weight=True, renorm='max') docfeats = ft.texts2features(textdict, fit_ids=train_ids) # organize in feature matrix X, featurenames = features2mat(docfeats, train_ids) X_test, _ = features2mat(docfeats, test_ids, featurenames) print("%i features" % len(featurenames)) targets = np.hstack([newsgroups_train.target,newsgroups_test.target]) y = targets[train_ids] y_test = targets[test_ids] target_names = newsgroups_train.target_names n_targets = 1000 # linear kPCA kpca = KernelPCA(n_components=2, kernel='linear') X_embed = kpca.fit_transform(X) X_embed_test = kpca.transform(X_test) plot_20news(X_embed, y, target_names, X_embed_test, y_test, title='20newsgroups - linear Kernel PCA', legend=True) # compute linear kernel and center K_lin = center_K(X.dot(X.T).A) K_lin_test = center_K(X_test.dot(X_test.T).A) print("similarity approximation : msqe: %.10f ; r^2: %.10f ; rho: %.10f" % check_similarity_match(X_embed, K_lin)) print("similarity approximation (test): msqe: %.10f ; r^2: %.10f ; rho: %.10f" % check_similarity_match(X_embed_test, K_lin_test)) # project to 2d with linear similarity encoder # careful: our input is sparse!!! 
simec = SimilarityEncoder(X.shape[1], 2, n_targets, sparse_inputs=True, opt=keras.optimizers.SGD(lr=50.)) # train the model to get an embedding with which the target similarities # can be linearly approximated simec.fit(X, K_lin[:,:n_targets], epochs=25) # get the embeddings X_embeds = simec.transform(X) X_embed_tests = simec.transform(X_test) plot_20news(X_embeds, y, target_names, X_embed_tests, y_test, title='20 newsgroups - SimEc (lin. kernel, linear)', legend=True) print("correlation with lin kPCA : %f" % check_embed_match(X_embed, X_embeds)[1]) print("correlation with lin kPCA (test): %f" % check_embed_match(X_embed_test, X_embed_tests)[1]) print("similarity approximation : msqe: %.10f ; r^2: %.10f ; rho: %.10f" % check_similarity_match(X_embeds, K_lin)) print("similarity approximation (test): msqe: %.10f ; r^2: %.10f ; rho: %.10f" % check_similarity_match(X_embed_tests, K_lin_test)) ```
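The `center_K` helper imported from `utils` is not shown in this notebook. A minimal sketch of the standard kernel double-centering it presumably performs (an assumption, based on its use alongside `KernelPCA`):

```python
import numpy as np

def center_kernel(K):
    # Double-center a kernel matrix: subtract row means and column means,
    # add back the grand mean. This equals the kernel computed on
    # mean-centered features, which is what kernel PCA requires.
    return K - K.mean(axis=0, keepdims=True) - K.mean(axis=1, keepdims=True) + K.mean()

# Sanity check: centering the linear kernel of X equals the linear
# kernel of the column-centered X
rng = np.random.default_rng(0)
X = rng.random((10, 4))
Xc = X - X.mean(axis=0)
K_centered = center_kernel(X @ X.T)
assert np.allclose(K_centered, Xc @ Xc.T)
```

Equivalently, `center_kernel(K)` computes H K H with the centering matrix H = I - (1/n) 11ᵀ, so every row and column of the result sums to zero.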
# Analyzing data from clusters FAMD-v6 ``` import pandas as pd import numpy as np import json import datetime as dt import seaborn as sns import matplotlib.pyplot as plt import statsmodels.api as sm import statsmodels.formula.api as smf import statsmodels.stats.multicomp as multi import scipy.stats # pandas display configuration pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) ``` ## Loading data ``` path_file = 'data/clusterizacion_v6.csv' data = pd.read_csv(path_file) data.head() data.isnull().any() ``` ## Data analysis This section analyzes cluster behaviour under the two client categorizations ### Categorical data ``` sns.countplot(x='cluster', data = data) plt.suptitle('Frequency of observation by cluster') data['cluster'].value_counts() data['arrears_days'].describe() ``` We can see that there are more customers in cluster 1 than in cluster 0 ### Analyzing the relationship of other variables with the two clusters ``` #---------------------------------------------------------------- ``` #### Categorical = cluster_id_2 vs Quantitative = 'Column name' ``` data.info() sns.catplot(x='cluster', y='arrears_days', kind='bar', data=data) plt.suptitle('Cluster vs arrears_days') ``` On this graph we can see that cluster 1 has fewer arrears_days than cluster 0 ``` sns.catplot(x='cluster', y='Monto Acumulado', kind='bar', data=data) plt.suptitle('Cluster vs Monto Acumulado') ``` Cluster 1 has a lower 'Monto Acumulado' than cluster 0 ``` sns.catplot(x='cluster', y='Score Bureau Empresa', kind='bar', data=data) plt.suptitle('Cluster vs Score Bureau Empresa') ``` Score Bureau values of the two clusters are relatively similar ``` sns.catplot(x='cluster', y='Huellas de Consulta', kind='bar', data=data) plt.suptitle('Cluster vs Huellas de Consulta') ``` Cluster 1 has fewer 'Huellas de consulta' than cluster 0 ``` sns.catplot(x='cluster', y='Número de accionistas', kind='bar', data=data) plt.suptitle('Cluster vs Número 
de accionistas') ``` Cluster 0 has more 'Accionistas' than cluster 1 ``` sns.catplot(x='cluster', y='# Empleados', kind='bar', data=data) plt.suptitle('Cluster vs # Empleados') ``` Cluster 1 has more 'Empleados' than cluster 0 ``` sns.catplot(x='cluster', y='Mujeres en cargos directivos', kind='bar', data=data) plt.suptitle('Cluster vs Mujeres en cargos directivos') ``` Cluster 1 has more 'Mujeres en cargos directivos' than cluster 0 ``` #-------------------------------------------------------- ``` #### Categorical = cluster_id_2 vs Categorical = 'Column name' ``` data.info() data.groupby('cluster')['state'].value_counts()/len(data) ``` Cluster 0 has more clients in state PAID than cluster 1, but at the same time has more clients in state Late than cluster 1 ``` data.groupby('cluster')['Uso de los recursos'].value_counts()/len(data) ``` We can see the distribution between the two clusters ``` data.groupby('cluster')['Plazo'].value_counts()/len(data) # Cluster 0 has more clients with 'Plazo' of 13 to 24 months. 
On the other hand we have cluster 1 with 'Plazo' less than 12 months data.groupby('cluster')['Sector'].value_counts()/len(data) # Cluster 0 has a ranking of (Servicios, Comercio and Industria) - Cluster 1 has a ranking of (Servicios, Industria y Comercio) data.groupby('cluster')['Ingresos'].value_counts()/len(data) data.groupby('cluster')['Acceso previso a la banca'].value_counts()/len(data) data.groupby('cluster')['Mujeres empresarias'].value_counts()/len(data) data.groupby('cluster')['Activador'].value_counts()/len(data) data.groupby('cluster')['Website empresa'].value_counts()/len(data) data.groupby('cluster')['Estrato Mínimo'].value_counts()/len(data) data.groupby('cluster')['Ubicación'].value_counts()/len(data) data.groupby('cluster')['Procesos judiciales'].value_counts()/len(data) data.groupby('cluster')['Instagram empresa'].value_counts()/len(data) data.groupby('cluster')['Impacto'].value_counts()/len(data) #end #-------------------------------------------------------------------- ``` ### Multi-linear regression ``` X = data[['Huellas de Consulta', 'Score Bureau Empresa']] Y = data['arrears_days'] X = sm.add_constant(X) # adding a constant model = sm.OLS(Y, X).fit() predictions = model.predict(X) print_model = model.summary() print(print_model) ```
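The `value_counts` cross-tabulations above are purely descriptive. A chi-square test of independence gives a formal check of whether a categorical variable is associated with the cluster label; a minimal sketch on synthetic labels (with the real data you would cross-tabulate `data['cluster']` against e.g. `data['state']`):

```python
import numpy as np
import pandas as pd
import scipy.stats

# Synthetic stand-in for data['cluster'] and data['state']
rng = np.random.default_rng(0)
cluster = rng.integers(0, 2, size=200)
state = np.where(rng.random(200) < 0.3 + 0.3 * cluster, 'LATE', 'PAID')

# Contingency table, then chi-square test of independence
table = pd.crosstab(cluster, state)
chi2, p, dof, expected = scipy.stats.chi2_contingency(table)
print('chi2 = %.2f, p = %.4f' % (chi2, p))
```

A small p-value indicates that the distribution of the categorical variable differs significantly between the clusters, which backs up the visual comparisons made above.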
** ----- IMPORTANT ------ **

The code presented here assumes that you're running TensorFlow v1.3.0 or higher. At the time of writing this was not released yet, so the easiest way to run it is to update your TensorFlow version to TensorFlow's master. To do that go [here](https://github.com/tensorflow/tensorflow#installation) and then execute: `pip install --ignore-installed --upgrade <URL for the right binary for your machine>`.

For example, considering a Linux CPU-only machine running python2:

`pip install --upgrade https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp27-none-linux_x86_64.whl`

## Here is a walk-through to help you get started with TensorFlow

1) Simple Linear Regression with low-level TensorFlow
2) Simple Linear Regression with a canned estimator
3) Playing with real data: linear regressor and DNN
4) Building a custom estimator to classify handwritten digits (MNIST)

### [What's next?](https://goo.gl/hZaLPA)

## Dependencies

```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections

# tensorflow
import tensorflow as tf
print('Expected TensorFlow version is v1.3.0 or higher')
print('Your TensorFlow version:', tf.__version__)

# data manipulation
import numpy as np
import pandas as pd

# visualization
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = [12,8]
```

## 1) Simple Linear Regression with low-level TensorFlow

### Generating data

This function creates a noisy dataset that's roughly linear, according to the equation y = mx + b + noise. Notice that the expected value for m is 0.1 and for b is 0.3. These are the values we expect the model to predict.
``` def make_noisy_data(m=0.1, b=0.3, n=100): x = np.random.randn(n) noise = np.random.normal(scale=0.01, size=len(x)) y = m * x + b + noise return x, y ``` Create training data ``` x_train, y_train = make_noisy_data() ``` Plot the training data ``` plt.plot(x_train, y_train, 'b.') ``` ### The Model ``` # input and output x = tf.placeholder(shape=[None], dtype=tf.float32, name='x') y_label = tf.placeholder(shape=[None], dtype=tf.float32, name='y_label') # variables W = tf.Variable(tf.random_normal([1], name="W")) # weight b = tf.Variable(tf.random_normal([1], name="b")) # bias # actual model y = W * x + b ``` ### The Loss and Optimizer Define a loss function (here, squared error) and an optimizer (here, gradient descent). ``` loss = tf.reduce_mean(tf.square(y - y_label)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1) train = optimizer.minimize(loss) ``` ### The Training Loop and generating predictions ``` init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) # initialize variables for i in range(100): # train for 100 steps sess.run(train, feed_dict={x: x_train, y_label:y_train}) x_plot = np.linspace(-3, 3, 101) # return evenly spaced numbers over a specified interval # using the trained model to predict values for the training data y_plot = sess.run(y, feed_dict={x: x_plot}) # saving final weight and bias final_W = sess.run(W) final_b = sess.run(b) ``` ### Visualizing predictions ``` plt.scatter(x_train, y_train) plt.plot(x_plot, y_plot, 'g') ``` ### What is the final weight and bias? 
```
print('W:', final_W, 'expected: 0.1')
print('b:', final_b, 'expected: 0.3')
```

## 2) Simple Linear Regression with a canned estimator

### Input Pipeline

```
x_dict = {'x': x_train}
train_input = tf.estimator.inputs.numpy_input_fn(x_dict, y_train, shuffle=True, num_epochs=None)  # repeat forever
```

### Describe input feature usage

```
features = [tf.feature_column.numeric_column('x')]  # because x is a real number
```

### Build and train the model

```
estimator = tf.estimator.LinearRegressor(features)
estimator.train(train_input, steps = 1000)
```

### Generating and visualizing predictions

```
x_test_dict = {'x': np.linspace(-5, 5, 11)}
data_source = tf.estimator.inputs.numpy_input_fn(x_test_dict, shuffle=False)
predictions = list(estimator.predict(data_source))
preds = [p['predictions'][0] for p in predictions]
for y in predictions:
    print(y['predictions'])

plt.scatter(x_train, y_train)
plt.plot(x_test_dict['x'], preds, 'g')
```

## 3) Playing with real data: linear regressor and DNN

### Get the data

The Adult dataset is from the Census bureau and the task is to predict whether a given adult makes more than $50,000 a year based on attributes such as education, hours of work per week, etc. The code presented here is easily applicable to any CSV dataset that fits in memory.
More about the data [here](https://archive.ics.uci.edu/ml/machine-learning-databases/adult/old.adult.names) ``` census_train_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data' census_train_path = tf.contrib.keras.utils.get_file('census.train', census_train_url) census_test_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test' census_test_path = tf.contrib.keras.utils.get_file('census.test', census_test_url) ``` ### Load the data ``` column_names = [ 'age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income' ] census_train = pd.read_csv(census_train_path, index_col=False, names=column_names) census_test = pd.read_csv(census_test_path, index_col=False, names=column_names) census_train_label = census_train.pop('income') == " >50K" census_test_label = census_test.pop('income') == " >50K" census_train.head(10) census_train_label[:20] ``` ### Input pipeline ``` train_input = tf.estimator.inputs.pandas_input_fn( census_train, census_train_label, shuffle=True, batch_size = 32, # process 32 examples at a time num_epochs=None, ) test_input = tf.estimator.inputs.pandas_input_fn( census_test, census_test_label, shuffle=True, num_epochs=1) features, labels = train_input() features ``` ### Feature description ``` features = [ tf.feature_column.numeric_column('hours-per-week'), tf.feature_column.bucketized_column(tf.feature_column.numeric_column('education-num'), list(range(25))), tf.feature_column.categorical_column_with_vocabulary_list('sex', ['male','female']), tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000), ] estimator = tf.estimator.LinearClassifier(features, model_dir='census/linear',n_classes=2) estimator.train(train_input, steps=5000) ``` ### Evaluate the model ``` estimator.evaluate(test_input) ``` ## DNN model ### Update input 
pre-processing ``` features = [ tf.feature_column.numeric_column('education-num'), tf.feature_column.numeric_column('hours-per-week'), tf.feature_column.numeric_column('age'), tf.feature_column.indicator_column( tf.feature_column.categorical_column_with_vocabulary_list('sex',['male','female'])), tf.feature_column.embedding_column( # now using embedding! tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000), 10) ] estimator = tf.estimator.DNNClassifier(hidden_units=[20,20], feature_columns=features, n_classes=2, model_dir='census/dnn') estimator.train(train_input, steps=5000) estimator.evaluate(test_input) ``` ## Custom Input Pipeline using Datasets API ### Read the data ``` def census_input_fn(path): def input_fn(): dataset = ( tf.contrib.data.TextLineDataset(path) .map(csv_decoder) .shuffle(buffer_size=100) .batch(32) .repeat()) columns = dataset.make_one_shot_iterator().get_next() income = tf.equal(columns.pop('income')," >50K") return columns, income return input_fn csv_defaults = collections.OrderedDict([ ('age',[0]), ('workclass',['']), ('fnlwgt',[0]), ('education',['']), ('education-num',[0]), ('marital-status',['']), ('occupation',['']), ('relationship',['']), ('race',['']), ('sex',['']), ('capital-gain',[0]), ('capital-loss',[0]), ('hours-per-week',[0]), ('native-country',['']), ('income',['']), ]) def csv_decoder(line): parsed = tf.decode_csv(line, csv_defaults.values()) return dict(zip(csv_defaults.keys(), parsed)) ``` ### Try the input function ``` tf.reset_default_graph() census_input = census_input_fn(census_train_path) training_batch = census_input() with tf.Session() as sess: features, high_income = sess.run(training_batch) print(features['education']) print(features['age']) print(high_income) ``` ## 4) Building a custom estimator to classify handwritten digits (MNIST) ![mnist](http://rodrigob.github.io/are_we_there_yet/build/images/mnist.png?1363085077) Image from: 
http://rodrigob.github.io/are_we_there_yet/build/images/mnist.png?1363085077

```
train, test = tf.contrib.keras.datasets.mnist.load_data()
x_train, y_train = train
x_test, y_test = test

mnist_train_input = tf.estimator.inputs.numpy_input_fn({'x': np.array(x_train, dtype=np.float32)},
                                                       np.array(y_train, dtype=np.int32),
                                                       shuffle=True,
                                                       num_epochs=None)

mnist_test_input = tf.estimator.inputs.numpy_input_fn({'x': np.array(x_test, dtype=np.float32)},
                                                      np.array(y_test, dtype=np.int32),
                                                      shuffle=True,
                                                      num_epochs=1)
```

### tf.estimator.LinearClassifier

```
estimator = tf.estimator.LinearClassifier([tf.feature_column.numeric_column('x', shape=784)],
                                          n_classes=10,
                                          model_dir="mnist/linear")
estimator.train(mnist_train_input, steps=10000)
estimator.evaluate(mnist_test_input)
```

### Examine the results with [TensorBoard](http://0.0.0.0:6006)

$> tensorboard --logdir mnist/DNN

```
estimator = tf.estimator.DNNClassifier(hidden_units=[256],
                                       feature_columns=[tf.feature_column.numeric_column('x', shape=784)],
                                       n_classes=10,
                                       model_dir="mnist/DNN")
estimator.train(mnist_train_input, steps=10000)
estimator.evaluate(mnist_test_input)

# Parameters
BATCH_SIZE = 128
STEPS = 10000
```

## A Custom Model

```
def build_cnn(input_layer, mode):
    with tf.name_scope("conv1"):
        conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5],
                                 padding='same', activation=tf.nn.relu)
    with tf.name_scope("pool1"):
        pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    with tf.name_scope("conv2"):
        conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[5, 5],
                                 padding='same', activation=tf.nn.relu)
    with tf.name_scope("pool2"):
        pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
    with tf.name_scope("dense"):
        pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
        dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
    with tf.name_scope("dropout"):
        is_training_mode = mode == tf.estimator.ModeKeys.TRAIN
        dropout = tf.layers.dropout(inputs=dense,
                                    rate=0.4, training=is_training_mode)

    logits = tf.layers.dense(inputs=dropout, units=10)
    return logits


def model_fn(features, labels, mode):
    # Describing the model
    input_layer = tf.reshape(features['x'], [-1, 28, 28, 1])
    tf.summary.image('mnist_input', input_layer)
    logits = build_cnn(input_layer, mode)

    # Generate Predictions
    classes = tf.argmax(input=logits, axis=1)
    predictions = {
        'classes': classes,
        'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
    }

    if mode == tf.estimator.ModeKeys.PREDICT:
        # Return an EstimatorSpec object
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    with tf.name_scope('loss'):
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
        loss = tf.reduce_sum(loss)
        tf.summary.scalar('loss', loss)

    with tf.name_scope('accuracy'):
        accuracy = tf.cast(tf.equal(tf.cast(classes, tf.int32), labels), tf.float32)
        accuracy = tf.reduce_mean(accuracy)
        tf.summary.scalar('accuracy', accuracy)

    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss,
            global_step=tf.train.get_global_step(),
            learning_rate=1e-4,
            optimizer='Adam')
        return tf.estimator.EstimatorSpec(mode=mode,
                                          predictions=predictions,
                                          loss=loss,
                                          train_op=train_op)

    # Configure the accuracy metric for evaluation
    # (fixed: tf.metrics.accuracy takes labels and predictions, in that order)
    eval_metric_ops = {
        'accuracy': tf.metrics.accuracy(labels=labels, predictions=classes)
    }
    return tf.estimator.EstimatorSpec(mode=mode,
                                      predictions=predictions,
                                      loss=loss,
                                      eval_metric_ops=eval_metric_ops)
```

## Run the estimator

```
# create estimator
run_config = tf.contrib.learn.RunConfig(model_dir='mnist/CNN')
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)

# train for 10000 steps
estimator.train(input_fn=mnist_train_input, steps=10000)

# evaluate
estimator.evaluate(input_fn=mnist_test_input)

# predict (fixed: use the MNIST test input defined above)
preds = estimator.predict(input_fn=mnist_test_input)
```

## Distributed tensorflow: using experiments

```
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner

# Enable TensorFlow logs
tf.logging.set_verbosity(tf.logging.INFO)

# create experiment
def experiment_fn(run_config, hparams):
    # create estimator
    estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
    return tf.contrib.learn.Experiment(
        estimator,
        train_input_fn=mnist_train_input,  # fixed: these input functions were previously undefined
        eval_input_fn=mnist_test_input,
        train_steps=STEPS
    )

# run experiment
learn_runner.run(experiment_fn, run_config=run_config)
```

### Examine the results with [TensorBoard](http://0.0.0.0:6006)

$> tensorboard --logdir mnist/CNN
``` from autoreduce import * import numpy as np from sympy import symbols # Post conservation law and other approximations phenomenological model at the RNA level n = 4 # Number of states nouts = 2 # Number of outputs # Inputs by user x_init = np.zeros(n) n = 4 # Number of states timepoints_ode = np.linspace(0, 100, 100) C = [[0, 0, 1, 0], [0, 0, 0, 1]] nstates_tol = 3 error_tol = 0.3 # System dynamics symbolically # params = [100, 50, 10, 5, 5, 0.02, 0.02, 0.01, 0.01] # params = [1, 1, 5, 0.1, 0.2, 1, 1, 100, 100] # Parameter set for which reduction doesn't work # K,b_t,b_l,d_t,d_l,del_t,del_l,beta_t,beta_l = params x0 = symbols('x0') x1 = symbols('x1') x2 = symbols('x2') x3 = symbols('x3') x = [x0, x1, x2, x3] K = symbols('K') b_t = symbols('b_t') b_l = symbols('b_l') d_t = symbols('d_t') d_l = symbols('d_l') del_t = symbols('del_t') del_l = symbols('del_l') beta_t = symbols('beta_t') beta_l = symbols('beta_l') params = [K,b_t,b_l,d_t,d_l,del_t,del_l,beta_t,beta_l] f0 = K * b_t**2/(b_t**2 + x[3]**2) - d_t * x[0] f1 = K * b_l**2/(b_l**2 + x[2]**2) - d_l * x[1] f2 = beta_t * x[0] - del_t * x[2] f3 = beta_l * x[1] - del_l * x[3] f = [f0,f1,f2,f3] # parameter values params_values = [100, 50, 10, 5, 5, 0.02, 0.02, 0.01, 0.01] sys = System(x, f, params = params, params_values = params_values, C = C, x_init = x_init) from autoreduce.utils import get_ODE sys_ode = get_ODE(sys, timepoints_ode) sol = sys_ode.solve_system().T try: import matplotlib.pyplot as plt plt.plot(timepoints_ode, np.transpose(np.array(C)@sol)) plt.xlabel('Time') plt.ylabel('[Outputs]') plt.show() except: print('Plotting libraries missing.') from autoreduce.utils import get_SSM timepoints_ssm = np.linspace(0,100,100) sys_ssm = get_SSM(sys, timepoints_ssm) Ss = sys_ssm.compute_SSM() # len(timepoints) x len(params) x len(states) out_Ss = [] for i in range(len(params)): out_Ss.append((np.array(C)@(Ss[:,i,:].T))) out_Ss = np.reshape(np.array(out_Ss), (len(timepoints_ssm), len(params), nouts)) try: import 
seaborn as sn import matplotlib.pyplot as plt for j in range(nouts): sn.heatmap(out_Ss[:,:,j].T) plt.xlabel('Time') plt.ylabel('Parameters') plt.title('Sensitivity of output[{0}] with respect to all parameters'.format(j)) plt.show() except: print('Plotting libraries missing.') from autoreduce.utils import get_reducible timepoints_ssm = np.linspace(0,100,10) timepoints_ode = np.linspace(0, 100, 100) sys_reduce = get_reducible(sys, timepoints_ode, timepoints_ssm) results = sys_reduce.reduce_simple() list(results.keys())[0].f[1] reduced_system, collapsed_system = sys_reduce.solve_timescale_separation([x0,x1], fast_states = [x3, x2]) reduced_system.f[1] ```
``` # Define a smallest_number function that accepts a list of numbers. # It should return the smallest value in the list. def smallest_number(numbers): numbers = list(numbers) smallest = numbers[0] for number in numbers: if number < smallest: smallest = number return smallest smallest_number((1, 2, 3)) #=> 1 # smallest_number([3, 2, 1]) => 1 # smallest_number([4, 5, 4]) => 4 # smallest_number([-3, -2, -1]) => -3 # Define a sum_of_lengths function that accepts a list of strings. # The function should return the sum of the string lengths. def sum_of_lengths(list_of_strings): sums = 0 for string in list_of_strings: sums += len(string) return sums sum_of_lengths(["Hello", "Bob"]) #=> 8 #sum_of_lengths(["Nonsense"]) #=> 8 # Define a product function that accepts a list of numbers. # The function should return the product of the numbers. # The list will always have at least one value # def product(numbers): products = 1 for number in numbers: products *= number return products #product([1, 2, 3]) #=> 6 product([4, 5, 6, 7]) #=> 840 #product([10]) #=> 10 # Define a function concatenate that accepts a list of strings. # The function should return a concatenated string which consists of all list elements whose length is greater than 2 characters. def concatenate(lists): concat = "" for string in lists: if len(string) > 2: concat += string return concat #concatenate(["abc", "def", "ghi"]) # => "abcdefghi" concatenate(["abc", "de", "fgh", "ic"]) #=> "abcfgh" # concatenate(["ab", "cd", "ef", "gh"]) => "" # Write a function super_sum that accepts a list of strings. # The function should sum the index positions of the first occurence of the letter “s” in each word. # Not every word is guaranteed to have an “s”. # Don’t use the sum keyword as it’s a built-in keyword. 
#
def super_sum(string_list):
    index = 0
    for string in string_list:
        pos = string.find('s')
        if pos != -1:  # words without an "s" contribute nothing (find returns -1)
            index += pos
    return index

# super_sum([]) => 0
# super_sum(["mustache"]) => 2
#super_sum(["mustache", "greatest"]) #=> 8
#super_sum(["mustache", "pessimist"]) #=> 4
super_sum(["mustache", "greatest", "almost"]) #=> 12

# Define an in_list function that accepts a list of strings and a separate string.
# Return the index where the string exists in the list. If the string does not exist, return -1.
# Do NOT use the find or index methods.
def in_list(values, target):
    for idx, value in enumerate(values):
        if target == value:
            return idx
    return -1

strings = ["enchanted", "sparks fly", "long live"]
in_list(strings, "enchanted") #==> 0
# in_list(strings, "sparks fly") ==> 1
# in_list(strings, "love story") ==> -1
in_list(strings, "fifteen") # ==> -1

# Define a sum_of_values_and_indices function that accepts a list of numbers.
# It should return the sum of all of the elements along with their index values.
#
def sum_of_values_and_indices(numbers):
    total = 0
    for idx, num in enumerate(numbers):
        total += (idx + num)
    return total

sum_of_values_and_indices([1, 2, 3]) #=> (1 + 0) + (2 + 1) + (3 + 2) => 9
# sum_of_values_and_indices([0, 0, 0, 0]) => 6
# sum_of_values_and_indices([]) => 0

# Declare a function same_index_values that accepts two lists.
# The function should return a list of the index positions in which the two lists have equal elements
#
def same_index_values(list_1, list_2):
    position = []
    for idx, i in enumerate(list_1):
        if i == list_2[idx]:
            position.append(idx)
    return position

#same_index_values([1, 2, 3], [3, 2, 1]) => [1]
same_index_values(["a", "b", "c", "d"], ["c", "b", "a", "d"]) #=> [1, 3]

# Declare a sum_from function that accepts two numbers as arguments.
# The second number will always be greater than the first number.
# The function should return the sum of all numbers from the first number to the second number (inclusive).
#
def sum_from(number_1, number_2):
    summation = 0
    for number in range(number_1, number_2 + 1):
        summation += number
    return summation

# sum_from(1, 2) # 1 + 2 => 3
sum_from(1, 5) # 1 + 2 + 3 + 4 + 5 => 15
# sum_from(3, 8) # 3 + 4 + 5 + 6 + 7 + 8 => 33
# sum_from(9, 12) # 9 + 10 + 11 + 12 => 42

# Declare a length_match function that accepts a list of strings and an integer.
# It should return a count of the number of strings whose length is equal to the number.
#
def length_match(strings, integer):
    count = 0
    for string in strings:
        if len(string) == integer:
            count += 1
    return count

length_match(["cat", "dog", "kangaroo", "mouse"], 3) #=> 2
# length_match(["cat", "dog", "kangaroo", "mouse"], 5) => 1
# length_match(["cat", "dog", "kangaroo", "mouse"], 4) => 0
# length_match([], 5) => 0

# Given the tuple below, destructure the three values and
# assign them to position, city and salary variables
# Do NOT use index positions (i.e. job_opening[1])
job_opening = ("Software Engineer", "New York City", 100000)
position, city, salary = job_opening

# Given the tuple below,
# - destructure the first value and assign it to a street variable
# - destructure the last value and assign it to a zip_code variable
# - destructure the middle two values into a list and assign it to a city_and_state variable
address = ("35 Elm Street", "San Francisco", "CA", "94107")
street, *city_and_state, zip_code = address

# Declare a sum_of_evens_and_odds function that accepts a tuple of numbers.
# It should return a tuple with two numeric values:
# -- the sum of the even numbers
# -- the sum of the odd numbers.
def sum_of_evens_and_odds(numbers):
    sum_odd = 0
    sum_even = 0
    for num in numbers:
        if num % 2 == 1:
            sum_odd += num
        else:
            sum_even += num
    return (sum_even, sum_odd)

sum_of_evens_and_odds((1, 2, 3, 4)) #=> (6, 4)
#sum_of_evens_and_odds((1, 3, 5)) #=> (0, 9)
#sum_of_evens_and_odds((2, 4, 6)) #=> (12, 0)

# this function returns the smallest and the highest value in a list
def s_and_h(numbers):
    smallest = numbers[0]
    highest = numbers[0]
    for number in numbers:
        if number < smallest:
            smallest = number
        if number > highest:
            highest = number
    return (smallest, highest)

s_and_h([9, 1, 2, 3, 0, 4, 5, 6])

# Declare a function right_words that accepts a list of words and a number.
# Return a new list with the words that have a length equal to the number.
# Do NOT use list comprehension.
def right_words(words, number):
    new_list = []
    for word in words:
        if len(word) == number:
            new_list.append(word)  # append returns None, so there is no need to assign it
    return new_list

#right_words(['cat', 'dog', 'bean', 'ace', 'heart'], 3)
right_words(['cat', 'dog', 'bean', 'ace', 'heart'], 5) #=> ['heart']

# Define an only_evens function that accepts a list of numbers.
# It should return a new list consisting of only the even numbers from the original list.
def only_evens(numbers):
    even_numbers = []
    for number in numbers:
        if number % 2 == 0:
            even_numbers.append(number)
    return even_numbers

only_evens([4, 8, 15, 16, 23, 42]) #=> [4, 8, 16, 42]
# only_evens([1, 3, 5]) => []
# only_evens([]) => []

# Define a long_strings function that accepts a list of strings.
# It should return a new list consisting of only the strings that have 5 characters or more.
def long_strings(strings): five_plus_strings = [] for string in strings: if len(string) >= 5: five_plus_strings.append(string) return five_plus_strings long_strings(["Hello", "Goodbye", "Sam"]) #=> ["Hello", "Goodbye"] # long_strings(["Ace", "Cat", "Job"] => [] # long_strings([] => [] # Write a factors function that accepts a positive whole number # It should return a list of all of the number's factors in ascending order # HINT: Could the range function be helpful here? Or maybe a while loop? def factors(pwn): factor_s = [] for number in range(1, pwn + 1): if pwn % number == 0: factor_s.append(number) return factor_s # factors(1) => [1] # factors(2) => [1, 2] # factors(10) => [1, 2, 5, 10] factors(64) #=> [1, 2, 4, 8, 16, 32, 64] # Declare a delete_all function that accepts a list of strings and a target string # Remove all occurrences of the target string from the list and return it def delete_all(strings, target): new_list = [] for string in strings: if string != target: new_list.append(string) return new_list #delete_all([1, 3, 5], 3) #=> [1, 5] delete_all([5, 3, 5], 5) #=> [3] #delete_all([4, 4, 4], 4) #=> [] #delete_all([4, 4, 4], 6) #=> [4, 4, 4] # Declare a push_or_pop function that accepts a list of numbers # Build up and return a new list by iterating over the list of numbers # If a number is greater than 5, add it to the end of the new list # If a number is less than or equal to 5, remove the last element added to the new list # Assume the order of numbers in the argument will never require removing from an empty list def push_or_pop(numbers): new_list = [] for number in numbers: if number > 5: new_list.append(number) else: new_list.pop(-1) return new_list # push_or_pop([10]) => [10] push_or_pop([10, 4]) #=> [] #push_or_pop([10, 20, 30]) #=> [10, 20, 30] #push_or_pop([10, 20, 3, 30, 4, 50, 60]) #=>[10, 50, 60] push_or_pop([10, 20, 2, 30]) #=> [10, 30] bf = ["egg", "fish", "meat"] lc = ["water", "beans", "soup"] dn = ["fat", "thin", "short"] 
print(list(zip(bf, lc, dn)))

for b, l, d in zip(bf, lc, dn):
    print(f"My meals today were {b}, {l} and {d}.")

food = [
    ["egg", "fish", "meat"],
    ["water", "beans", "soup"],
    ["fat", "thin", "short"]
]

print(len(food))    ## length of list
print(food[0])      ## accessing a list in food
print(food[1])
print(food[2])
print(len(food[0]))
print(food[1][1])   ## accessing elements in a list of lists

food = [
    ["egg", "fish", "meat"],
    ["water", "beans", "soup"],
    ["fat", "thin", "short"]
]

all_food = []
for f in food:
    for i in f:
        all_food.append(i)
print(all_food)

# Define a nested_sum function that accepts a list of lists of numbers
# The function should return the sum of the values
# The list may contain empty lists
#
def nested_sum(numbers):
    sums = 0
    for lists in numbers:
        for number in lists:
            sums = sums + number
    return sums

#nested_sum([[1, 2, 3], [4, 5]]) #=> 15
# nested_sum([[1, 2, 3], [], [], [4], [5]]) => 15
nested_sum([[]]) #=> 0

# Define a fancy_concatenate function that accepts a list of lists of strings
# The function should return a concatenated string
# The strings in a list should only be concatenated if the length of the list is 3
#
def fancy_concatenate(strings):
    concat = ""
    for lists in strings:
        if len(lists) == 3:  # check the list length once, before concatenating
            for string in lists:
                concat += string
    return concat

# fancy_concatenate([["A", "B", "C"]]) => "ABC"
#fancy_concatenate([["A", "B", "C"], ["D", "E", "F"]]) #=> "ABCDEF"
fancy_concatenate([["A", "B", "C"], ["D", "E", "F", "G"]]) #=> "ABC"
# fancy_concatenate([["A", "B", "C"], ["D", "E"]]) => "ABC"
# fancy_concatenate([["A", "B"], ["C", "D"]]) => ""

donuts = ["Boston Cream", "Jelly", "Vanilla Cream", "Chocolate Cream"]
print([donut for donut in donuts if "Cream" in donut])
print([donut.split(" ")[0] for donut in donuts if "Cream" in donut])

# Uncomment the commented lines of code below and complete the list comprehension logic
# The floats variable should store the floating point values for each string in the values list.
values = ["3.14", "9.99", "567.324", "5.678"] floats = [float(value) for value in values] # The letters variable should store a list of 5 strings. # Each of the strings should be a character from name concatenated together 3 times. # i.e. ['BBB', 'ooo', 'rrr', 'iii', 'sss'] name = "Boris" letters = [letter*3 for letter in name] # The 'lengths' list should store a list with the lengths # of each string in the 'elements' list elements = ["Hydrogen", "Helium", "Lithium", "Boron", "Carbon"] lengths = [len(element) for element in elements] # Declare a function destroy_elements that accepts two lists. # It should return a list of all elements from the first list that are NOT contained in the second list. # Use list comprehension in your solution. # def destroy_elements(list_1, list_2): #return [element for element in list_1 if element not in list_2] #list comprehension new_list = [] for element in list_1: if element not in list_2: new_list.append(element) return new_list #destroy_elements([1, 2, 3], [1, 2]) #=> [3] #destroy_elements([1, 2, 3], [1, 2, 3]) #=> [] destroy_elements([1, 2, 3], [4, 5]) #=> [1, 2, 3] def delete_all(strings, target): return list(filter(lambda x: x != target, strings)) delete_all([1, 3, 5], 3) #=> [1, 5] #delete_all([5, 3, 5], 5) #=> [3] #delete_all([4, 4, 4], 4) #=> [] #delete_all([4, 4, 4], 6) #=> [4, 4, 4] def delete_all(strings, target): return [d for d in strings if d != target] # delete_all([1, 3, 5], 3) => [1, 5] #delete_all([5, 3, 5], 5) #=> [3] delete_all([4, 4, 4], 4) #=> [] # delete_all([4, 4, 4], 6) => [4, 4, 4] # Define an encrypt_message function that accepts a string. # The input string will consist of only alphabetic characters. # The function should return a string where all characters have been moved # "up" two spots in the alphabet. For example, "a" will become "c". 
def encrypt_message(string): alphabet = "abcdefghijklmnopqrstuvwxyz" new_string = "" for letter in string: moved = (alphabet.index(letter) + 2) % 26 new_string += alphabet[moved] return new_string #encrypt_message("abc") encrypt_message("man") #encrypt_message("xyz") # Define a cleanup function that accepts a list of strings. # The function should return the strings joined together by a space. # There's one BIG problem -- some of the strings are empty or only consist of spaces! # These should NOT be included in the final string # def cleanup(strings): result = [] for string in strings: string = string.strip() if not string: continue result.append(string) return " ".join(result) #cleanup(["cat", "er", "pillar"]) #=> "cat er pillar" cleanup(["cat", " ", "er", "", "pillar"]) #=> "cat er pillar" #cleanup(["", "", " "]) #=> "" # Define a word_lengths function that accepts a string. # It should return a list with the lengths of each word. # def word_lengths(string): word_length = [] string_list = string.split(" ") print(string_list) for word in string_list: word_length.append(len(word)) return word_length #word_lengths("Mary Poppins was a nanny") #=> [4, 7, 3, 1, 5] word_lengths("Somebody stole my donut") #=> [8, 5, 2, 5] # Declare a function right_words that accepts a list of words and a number. # Return a new list with the words that have a length equal to the number. # Do NOT use list comprehension. # def right_words(words, number): new_list = [] for word in words: if len(word) == number: new_list.append(word) return new_list right_words(['cat', 'dog', 'bean', 'ace', 'heart'], 3) #=> ['cat', 'dog', 'ace'] # right_words(['cat', 'dog', 'bean', 'ace', 'heart'], 5) => ['heart'] # right_words([], 4) => [] # Declare an only_odds function. # It should return a list with only the odd numbers. # Do NOT use list comprehension. 
# def only_odds(numbers): new_list = [] for number in numbers: if number%2 == 1: new_list.append(number) return new_list #only_odds([1, 3, 5, 6, 7, 8]) #=> [1, 3, 5, 7] only_odds([2, 4, 6, 8]) #=> [] # Declare a count_of_a function that accepts a list of strings. # It should return a list with counts of how many “a” characters appear per string. # Do NOT use list comprehension. # def count_of_a(words): return (list(map(lambda lists:lists.count("a"), words))) #words = ["alligator", "aardvark", "albatross"] #=> [2, 3, 2] #count_of_a(list3) count_of_a(["alligator", "aardvark", "albatross"]) #count_of_a(["plywood"]) => [0] # count_of_a([]) => [] ```
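For comparison, the `map`/`lambda` solution to `count_of_a` above can also be written with an explicit loop, in the same style as the other exercises:

```python
# Loop-based version of count_of_a: count the "a" characters per string.
def count_of_a(words):
    counts = []
    for word in words:
        counts.append(word.count("a"))
    return counts

print(count_of_a(["alligator", "aardvark", "albatross"]))  # [2, 3, 2]
print(count_of_a(["plywood"]))  # [0]
print(count_of_a([]))  # []
```

Both versions satisfy the exercise's constraint, since neither uses a list comprehension.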
``` from matplotlib import pyplot as plt from matplotlib import cm import pandas as pd from pprint import pprint from random import randint, random, gauss, uniform import numpy as np #import matplotlib as mpl #mpl.rcParams['text.usetex'] = True #mpl.rcParams['text.latex.unicode'] = True blues = cm.get_cmap(plt.get_cmap('Blues')) greens = cm.get_cmap(plt.get_cmap('Greens')) reds = cm.get_cmap(plt.get_cmap('Reds')) oranges = cm.get_cmap(plt.get_cmap('Oranges')) purples = cm.get_cmap(plt.get_cmap('Purples')) greys = cm.get_cmap(plt.get_cmap('Greys')) set1 = cm.get_cmap(plt.get_cmap('Set1')) def tableau20(color): # Use coordinated colors. These are the "Tableau 20" colors as # RGB. Each pair is strong/light. For a theory of color tableau20 = [(31 , 119, 180), (174, 199, 232), # blue [ 0,1 ] (255, 127, 14 ), (255, 187, 120), # orange [ 2,3 ] (44 , 160, 44 ), (152, 223, 138), # green [ 4,5 ] (214, 39 , 40 ), (255, 152, 150), # red [ 6,7 ] (148, 103, 189), (197, 176, 213), # purple [ 8,9 ] (140, 86 , 75 ), (196, 156, 148), # brown [10,11] (227, 119, 194), (247, 182, 210), # pink [12,13] (188, 189, 34 ), (219, 219, 141), # yellow [14,15] (23 , 190, 207), (158, 218, 229), # cyan [16,17] (65 , 68 , 81 ), (96 , 99 , 106), # gray [18,19] (127, 127, 127), (143, 135, 130), # gray [20,21] (165, 172, 175), (199, 199, 199), # gray [22,23] (207, 207, 207)] # gray [24] # Scale the RGB values to the [0, 1] range, which is the format # matplotlib accepts. 
    r, g, b = tableau20[color]
    return (round(r/255., 1), round(g/255., 1), round(b/255., 1))

from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

import warnings
warnings.filterwarnings('ignore')

dist = list()
for _ in range(1000):
    tmp = list()
    for _ in range(1000):
        tmp.append(1 / gauss(1, 0.06))
    dist.append(min(tmp))

fig, axis = plt.subplots(nrows=1, ncols=1)
fig.set_size_inches(15, 7.5)
_ = axis.hist(dist, bins=1000, color=tableau20(0))
_ = axis.set_xticklabels(np.around(axis.get_xticks(), decimals=5).tolist(), fontsize=16)
_ = axis.set_yticklabels((axis.get_yticks()).tolist(), fontsize=16)
_ = axis.grid('on')

print(np.mean(dist), np.std(dist))

dist2 = list()
for _ in range(10000):
    dist2.append(gauss(0 + 1 * np.log10(10000), 0.4))

fig, axis = plt.subplots(nrows=1, ncols=1, figsize=(15, 7.5))
_ = axis.hist(dist2, bins=1000)
print(np.mean(dist2), np.std(dist2))  # fixed: report the stats of dist2, not dist

dist = list()
for _ in range(1000):
    tmp = list()
    for _ in range(1000):
        tmp.append(uniform(-10, 10))
    dist.append(max(tmp))

fig, axis = plt.subplots(nrows=1, ncols=1)
fig.set_size_inches(15, 7.5)
_ = axis.hist(dist, bins=1000, color=tableau20(0))
_ = axis.set_xticklabels(np.around(axis.get_xticks(), decimals=5).tolist(), fontsize=16)
_ = axis.set_yticklabels((axis.get_yticks()).tolist(), fontsize=16)
_ = axis.grid('on')

from scipy.stats import invgauss

dist = list()
for _ in range(10000):
    r = invgauss.rvs(1, size=10000)
    dist.append(max(r))

fig, axis = plt.subplots(nrows=1, ncols=1)
fig.set_size_inches(15, 7.5)
_ = axis.hist(dist, bins=1000, color=tableau20(0))
_ = axis.set_xticklabels(np.around(axis.get_xticks(), decimals=5).tolist(), fontsize=16)
_ = axis.set_yticklabels((axis.get_yticks()).tolist(), fontsize=16)
_ = axis.grid('on')

for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    dist = list()
    for _ in range(100000):
        tmp = list()
        for _ in range(n):
            tmp.append(uniform(-0.1, 0.1))
        dist.append(max(tmp))
    print(n, ':', np.mean(dist), np.std(dist))

for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    dist = list()
    for _ in range(100000):
        tmp = list()
        for _ in range(n):
            tmp.append(uniform(-0.2, 0.2))
        dist.append(max(tmp))
    print(n, ':', np.mean(dist), np.std(dist))

fig, axis = plt.subplots(nrows=1, ncols=1)
fig.set_size_inches(15, 7.5)
_ = axis.hist(dist, bins=1000, color=tableau20(0))
_ = axis.set_xticklabels(np.around(axis.get_xticks(), decimals=5).tolist(), fontsize=16)
_ = axis.set_yticklabels((axis.get_yticks()).tolist(), fontsize=16)
_ = axis.grid('on')

for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    dist = list()
    for _ in range(100000):
        tmp = list()
        for _ in range(n):
            tmp.append(uniform(-0.3, 0.3))
        dist.append(max(tmp))
    print(n, ':', np.mean(dist), np.std(dist))

for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    dist = list()
    for _ in range(100000):
        tmp = list()
        for _ in range(n):
            tmp.append(uniform(-0.4, 0.4))
        dist.append(max(tmp))
    print(n, ':', np.mean(dist), np.std(dist))

for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    dist = list()
    for _ in range(100000):
        tmp = list()
        for _ in range(n):
            tmp.append(uniform(-0.5, 0.5))
        dist.append(max(tmp))
    print(n, ':', np.mean(dist), np.std(dist))

for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    dist = list()
    for _ in range(100000):
        tmp = list()
        for _ in range(n):
            tmp.append(uniform(-0.1, 0.1))
        dist.append(max(tmp))
    print(n, ':', np.mean(dist), np.std(dist))

np.sqrt(np.std(tmp)**2 + 0.0036)
```
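As a cross-check on the empirical numbers printed above (an addition, not part of the original notebook): the maximum of $n$ i.i.d. Uniform$(-a, a)$ draws has the closed-form mean $a\,(n-1)/(n+1)$ (the maximum of $n$ uniforms on $(0,1)$ is Beta$(n,1)$ with mean $n/(n+1)$; shift and scale to $(-a, a)$), which the simulated means should approach:

```python
# Compare the simulated mean of max of n Uniform(-a, a) draws with the
# closed-form expectation a * (n - 1) / (n + 1).
from random import uniform

import numpy as np

a = 0.1
for n in [4, 8, 16, 32, 64, 128, 256, 1024]:
    empirical = np.mean([max(uniform(-a, a) for _ in range(n))
                         for _ in range(20000)])
    analytic = a * (n - 1) / (n + 1)
    print(n, ':', round(empirical, 4), 'vs analytic', round(analytic, 4))
```

The same scaling argument explains why the printed standard deviations shrink as $n$ grows: the maximum concentrates near the upper endpoint $a$.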
``` # Use Splinter to navigate the sites when needed and BeautifulSoup to help find and parse out the necessary data. from splinter import Browser from bs4 import BeautifulSoup import pandas as pd from selenium import webdriver ``` # NASA Mars News ``` executable_path = {"executable_path": "chromedriver"} browser = Browser("chrome", **executable_path, headless=False) url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest" browser.visit(url) html = browser.html soup = BeautifulSoup(html, "html.parser") print(soup.prettify()) news_title = soup.find("div", class_="content_title").get_text() news_p = soup.find("div", class_="article_teaser_body").get_text() print(f"{news_title}:{news_p}") ``` # JPL Mars Space Images ``` executable_path = {"executable_path": "chromedriver"} browser = Browser("chrome", **executable_path, headless=False) url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars" browser.visit(url) html = browser.html soup = BeautifulSoup(html, "html.parser") image_url = soup.footer.find("a", class_="button fancybox")["data-fancybox-href"] featured_image_url = "https://www.jpl.nasa.gov" + image_url print(featured_image_url) ``` # Mars Weather ``` executable_path = {"executable_path": "chromedriver"} browser = Browser("chrome", **executable_path, headless=False) url = "https://twitter.com/marswxreport?lang=en" browser.visit(url) html = browser.html soup = BeautifulSoup(html, "html.parser") tweets = soup.find_all("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text") for tweet in tweets: tweet_parent = tweet.find_parent("div", class_="content") tweet_id = tweet_parent.find("a", class_="account-group js-account-group js-action-profile js-user-profile-link js-nav")["href"] if tweet_id == '/MarsWxReport': mars_weather = tweet_parent.find("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text").get_text() break 
mars_weather ``` # Mars Facts ``` url = 'https://space-facts.com/mars/' tables = pd.read_html(url) tables df = tables[0] df.columns = ["Description", "Value"] df.set_index(df["Description"], inplace=True) df = df[["Value"]] html_table = df.to_html() html_table = html_table.replace('\n', '') html_table ``` # Mars Hemispheres ``` executable_path = {"executable_path": "chromedriver"} browser = Browser("chrome", **executable_path, headless=False) url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars" browser.visit(url) html = browser.html soup = BeautifulSoup(html, "html.parser") h3s = soup.find_all("h3") titles = [] for h3 in h3s: h3 = str(h3) h3 = h3[4:-14] titles.append(h3) titles img_urls = [] for title in titles: browser.click_link_by_partial_text(title) html = browser.html soup = BeautifulSoup(html, "html.parser") img_urls.append(soup.find("div", class_="downloads").find("a")["href"]) img_urls hemisphere_image_urls = [] for title, img_url in zip(titles, img_urls): hemisphere_image_urls.append({"title": title, "img_url":img_url}) hemisphere_image_urls ```
# Understanding the FFT Algorithm

Copied from http://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/

*This notebook first appeared as a post by Jake Vanderplas on [Pythonic Perambulations](http://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/). The notebook content is BSD-licensed.*

The Fast Fourier Transform (FFT) is one of the most important algorithms in signal processing and data analysis. I've used it for years, but having no formal computer science background, it occurred to me this week that I've never thought to ask *how* the FFT computes the discrete Fourier transform so quickly. I dusted off an old algorithms book and looked into it, and enjoyed reading about the deceptively simple computational trick that J.W. Cooley and John Tukey outlined in their classic [1965 paper](http://www.ams.org/journals/mcom/1965-19-090/S0025-5718-1965-0178586-1/) introducing the subject.

The goal of this post is to dive into the Cooley-Tukey FFT algorithm, explaining the symmetries that lead to it, and to show some straightforward Python implementations putting the theory into practice. My hope is that this exploration will give data scientists like myself a more complete picture of what's going on in the background of the algorithms we use.

## The Discrete Fourier Transform

The FFT is a fast, $\mathcal{O}[N\log N]$ algorithm to compute the Discrete Fourier Transform (DFT), which naively is an $\mathcal{O}[N^2]$ computation.
The DFT, like the more familiar continuous version of the Fourier transform, has a forward and inverse form which are defined as follows:

**Forward Discrete Fourier Transform (DFT):**
$$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}$$

**Inverse Discrete Fourier Transform (IDFT):**
$$x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k e^{i~2\pi~k~n~/~N}$$

The transformation from $x_n \to X_k$ is a translation from configuration space to frequency space, and can be very useful in both exploring the power spectrum of a signal, and also for transforming certain problems for more efficient computation. For some examples of this in action, you can check out Chapter 10 of our upcoming Astronomy/Statistics book, with figures and Python source code available [here](http://www.astroml.org/book_figures/chapter10/). For an example of the FFT being used to simplify an otherwise difficult differential equation integration, see my post on [Solving the Schrodinger Equation in Python](http://jakevdp.github.io/blog/2012/09/05/quantum-python/).

Because of the importance of the FFT in so many fields, Python contains many standard tools and wrappers to compute this. Both NumPy and SciPy have wrappers of the extremely well-tested FFTPACK library, found in the submodules ``numpy.fft`` and ``scipy.fftpack`` respectively. The fastest FFT I am aware of is in the [FFTW](http://www.fftw.org/) package, which is also available in Python via the [PyFFTW](https://pypi.python.org/pypi/pyFFTW) package. For the moment, though, let's leave these implementations aside and ask how we might compute the FFT in Python from scratch.

## Computing the Discrete Fourier Transform

For simplicity, we'll concern ourselves only with the forward transform, as the inverse transform can be implemented in a very similar manner.
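As a quick numerical sanity check that the inverse form really does mirror the forward one, we can implement the IDFT sum directly and verify the round trip against NumPy's built-in forward transform (a small sketch of ours, not from the original post):

```python
import numpy as np

def IDFT_slow(X):
    """Inverse DFT: x_n = (1/N) * sum_k X_k * exp(+2j*pi*k*n/N)."""
    X = np.asarray(X, dtype=complex)
    N = X.shape[0]
    k = np.arange(N)
    n = k.reshape((N, 1))
    M = np.exp(2j * np.pi * k * n / N)  # note the positive exponent
    return np.dot(M, X) / N

x = np.random.random(64)
# applying the inverse transform to the forward transform recovers x
assert np.allclose(IDFT_slow(np.fft.fft(x)), x)
```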
Taking a look at the DFT expression above, we see that it is nothing more than a straightforward linear operation: a matrix-vector multiplication of $\vec{x}$, $$\vec{X} = M \cdot \vec{x}$$ with the matrix $M$ given by $$M_{kn} = e^{-i~2\pi~k~n~/~N}.$$ With this in mind, we can compute the DFT using simple matrix multiplication as follows: ``` import numpy as np def DFT_slow(x): """Compute the discrete Fourier Transform of the 1D array x""" x = np.asarray(x, dtype=float) N = x.shape[0] n = np.arange(N) k = n.reshape((N, 1)) M = np.exp(-2j * np.pi * k * n / N) return np.dot(M, x) ``` We can double-check the result by comparing to numpy's built-in FFT function: ``` x = np.random.random(1024) np.allclose(DFT_slow(x), np.fft.fft(x)) ``` Just to confirm the sluggishness of our algorithm, we can compare the execution times of these two approaches: ``` %timeit DFT_slow(x) %timeit np.fft.fft(x) ``` We are over 1000 times slower, which is to be expected for such a simplistic implementation. But that's not the worst of it. For an input vector of length $N$, the FFT algorithm scales as $\mathcal{O}[N\log N]$, while our slow algorithm scales as $\mathcal{O}[N^2]$. That means that for $N=10^6$ elements, we'd expect the FFT to complete in somewhere around 50 ms, while our slow algorithm would take nearly 20 hours! So how does the FFT accomplish this speedup? The answer lies in exploiting symmetry. ## Symmetries in the Discrete Fourier Transform One of the most important tools in the belt of an algorithm-builder is to exploit symmetries of a problem. If you can show analytically that one piece of a problem is simply related to another, you can compute the subresult only once and save that computational cost. Cooley and Tukey used exactly this approach in deriving the FFT. We'll start by asking what the value of $X_{N+k}$ is. 
From our above expression: $$ \begin{align*} X_{N + k} &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~(N + k)~n~/~N}\\ &= \sum_{n=0}^{N-1} x_n \cdot e^{- i~2\pi~n} \cdot e^{-i~2\pi~k~n~/~N}\\ &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N} \end{align*} $$ where we've used the identity $\exp[2\pi~i~n] = 1$ which holds for any integer $n$. The last line shows a nice symmetry property of the DFT: $$X_{N+k} = X_k.$$ By a simple extension, $$X_{k + i \cdot N} = X_k$$ for any integer $i$. As we'll see below, this symmetry can be exploited to compute the DFT much more quickly. ## DFT to FFT: Exploiting Symmetry Cooley and Tukey showed that it's possible to divide the DFT computation into two smaller parts. From the definition of the DFT we have: $$ \begin{align} X_k &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N} \\ &= \sum_{m=0}^{N/2 - 1} x_{2m} \cdot e^{-i~2\pi~k~(2m)~/~N} + \sum_{m=0}^{N/2 - 1} x_{2m + 1} \cdot e^{-i~2\pi~k~(2m + 1)~/~N} \\ &= \sum_{m=0}^{N/2 - 1} x_{2m} \cdot e^{-i~2\pi~k~m~/~(N/2)} + e^{-i~2\pi~k~/~N} \sum_{m=0}^{N/2 - 1} x_{2m + 1} \cdot e^{-i~2\pi~k~m~/~(N/2)} \end{align} $$ We've split the single Discrete Fourier transform into two terms which themselves look very similar to smaller Discrete Fourier Transforms, one on the odd-numbered values, and one on the even-numbered values. So far, however, we haven't saved any computational cycles. Each term consists of $(N/2)*N$ computations, for a total of $N^2$. The trick comes in making use of symmetries in each of these terms. Because the range of $k$ is $0 \le k < N$, while the range of $n$ is $0 \le n < M \equiv N/2$, we see from the symmetry properties above that we need only perform half the computations for each sub-problem. Our $\mathcal{O}[N^2]$ computation has become $\mathcal{O}[M^2]$, with $M$ half the size of $N$. 
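Before continuing, the symmetry $X_{N+k} = X_k$ used in this argument is easy to confirm numerically by evaluating the DFT sum directly at $k$ and $k+N$ (a check of ours, not from the original post):

```python
import numpy as np

def dft_coefficient(x, k):
    """Evaluate the DFT sum X_k = sum_n x_n * exp(-2j*pi*k*n/N) at a single k."""
    N = len(x)
    n = np.arange(N)
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

x = np.random.random(32)
N = len(x)
for k in range(5):
    # the exponential is periodic in k with period N, so the coefficients repeat
    assert np.isclose(dft_coefficient(x, k), dft_coefficient(x, k + N))
```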
But there's no reason to stop there: as long as our smaller Fourier transforms have an even-valued $M$, we can reapply this divide-and-conquer approach, halving the computational cost each time, until our arrays are small enough that the strategy is no longer beneficial. In the asymptotic limit, this recursive approach scales as $\mathcal{O}[N\log N]$.

This recursive algorithm can be implemented very quickly in Python, falling back on our slow DFT code when the size of the sub-problem becomes suitably small:

```
def FFT(x):
    """A recursive implementation of the 1D Cooley-Tukey FFT"""
    x = np.asarray(x, dtype=float)
    N = x.shape[0]

    if N % 2 > 0:
        raise ValueError("size of x must be a power of 2")
    elif N <= 32:  # this cutoff should be optimized
        return DFT_slow(x)
    else:
        X_even = FFT(x[::2])
        X_odd = FFT(x[1::2])
        factor = np.exp(-2j * np.pi * np.arange(N) / N)
        # integer division (//) so the slice indices stay ints in Python 3
        return np.concatenate([X_even + factor[:N // 2] * X_odd,
                               X_even + factor[N // 2:] * X_odd])
```

Here we'll do a quick check that our algorithm produces the correct result:

```
x = np.random.random(1024)
np.allclose(FFT(x), np.fft.fft(x))
```

And we'll time this algorithm against our slow version:

```
%timeit DFT_slow(x)
%timeit FFT(x)
%timeit np.fft.fft(x)
```

Our calculation is faster than the naive version by over an order of magnitude! What's more, our recursive algorithm is asymptotically $\mathcal{O}[N\log N]$: we've implemented the Fast Fourier Transform.

Note that we still haven't come close to the speed of the built-in FFT algorithm in numpy, and this is to be expected. The FFTPACK algorithm behind numpy's ``fft`` is a Fortran implementation which has received years of tweaks and optimizations. Furthermore, our NumPy solution involves both Python-stack recursions and the allocation of many temporary arrays, which adds significant computation time. A good strategy to speed up code when working with Python/NumPy is to vectorize repeated computations where possible.
We can do this, and in the process remove our recursive function calls, and make our Python FFT even more efficient.

## Vectorized Numpy Version

Notice that in the above recursive FFT implementation, at the lowest recursion level we perform $N~/~32$ identical matrix-vector products. The efficiency of our algorithm would benefit by computing these matrix-vector products all at once as a single matrix-matrix product. At each subsequent level of recursion, we also perform duplicate operations which can be vectorized. NumPy excels at this sort of operation, and we can make use of that fact to create this vectorized version of the Fast Fourier Transform:

```
def FFT_vectorized(x):
    """A vectorized, non-recursive version of the Cooley-Tukey FFT"""
    x = np.asarray(x, dtype=float)
    N = x.shape[0]

    if np.log2(N) % 1 > 0:
        raise ValueError("size of x must be a power of 2")

    # N_min here is equivalent to the stopping condition above,
    # and should be a power of 2
    N_min = min(N, 32)

    # Perform an O[N^2] DFT on all length-N_min sub-problems at once
    n = np.arange(N_min)
    k = n[:, None]
    M = np.exp(-2j * np.pi * n * k / N_min)
    X = np.dot(M, x.reshape((N_min, -1)))

    # build-up each level of the recursive calculation all at once
    while X.shape[0] < N:
        # integer division (//) keeps the slice index an int in Python 3
        X_even = X[:, :X.shape[1] // 2]
        X_odd = X[:, X.shape[1] // 2:]
        factor = np.exp(-1j * np.pi * np.arange(X.shape[0])
                        / X.shape[0])[:, None]
        X = np.vstack([X_even + factor * X_odd,
                       X_even - factor * X_odd])

    return X.ravel()
```

Though the algorithm is a bit more opaque, it is simply a rearrangement of the operations used in the recursive version with one exception: we exploit a symmetry in the ``factor`` computation and construct only half of the array.
Again, we'll confirm that our function yields the correct result:

```
x = np.random.random(1024)
np.allclose(FFT_vectorized(x), np.fft.fft(x))
```

Because our algorithms are becoming much more efficient, we can use a larger array to compare the timings, leaving out ``DFT_slow``:

```
x = np.random.random(1024 * 16)
%timeit FFT(x)
%timeit FFT_vectorized(x)
%timeit np.fft.fft(x)
```

We've improved our implementation by another order of magnitude! We're now within about a factor of 10 of the FFTPACK benchmark, using only a couple dozen lines of pure Python + NumPy. Though it's still no match computationally speaking, readability-wise the Python version is far superior to the FFTPACK source, which you can browse [here](http://www.netlib.org/fftpack/fft.c).

So how does FFTPACK attain this last bit of speedup? Well, mainly it's just a matter of detailed bookkeeping. FFTPACK spends a lot of time making sure to reuse any sub-computation that can be reused. Our numpy version still involves an excess of memory allocation and copying; in a low-level language like Fortran it's easier to control and minimize memory use. In addition, the Cooley-Tukey algorithm can be extended to use splits of size other than 2 (what we've implemented here is known as the *radix-2* Cooley-Tukey FFT). Also, other more sophisticated FFT algorithms may be used, including fundamentally distinct approaches based on convolutions (see, e.g., Bluestein's algorithm and Rader's algorithm). The combination of the above extensions and techniques can lead to very fast FFTs even on arrays whose size is not a power of two.

Though the pure-Python functions are probably not useful in practice, I hope they've provided a bit of an intuition into what's going on in the background of FFT-based data analysis.
As data scientists, we can make do with black-box implementations of fundamental tools constructed by our more algorithmically-minded colleagues, but I am a firm believer that the more understanding we have about the low-level algorithms we're applying to our data, the better practitioners we'll be.

*This blog post was written entirely in the IPython Notebook. The full notebook can be downloaded [here](http://jakevdp.github.io/downloads/notebooks/UnderstandingTheFFT.ipynb), or viewed statically [here](http://nbviewer.ipython.org/url/jakevdp.github.io/downloads/notebooks/UnderstandingTheFFT.ipynb).*
``` # default_exp label ``` # Label > A collection of functions to do label-based quantification ``` #hide from nbdev.showdoc import * ``` ## Label search The label search is implemented based on the compare_frags from the search. We have a fixed number of reporter channels and check if we find a respective peak within the search tolerance. Useful resources: - [IsobaricAnalyzer](https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/nightly/html/TOPP_IsobaricAnalyzer.html) - [TMT Talk from Hupo 2015](https://assets.thermofisher.com/TFS-Assets/CMD/Reference-Materials/PP-TMT-Multiplexed-Protein-Quantification-HUPO2015-EN.pdf) ``` #export from numba import njit from alphapept.search import compare_frags import numpy as np @njit def label_search(query_frag: np.ndarray, query_int: np.ndarray, label: np.ndarray, reporter_frag_tol:float, ppm:bool)-> (np.ndarray, np.ndarray): """Function to search for a label for a given spectrum. Args: query_frag (np.ndarray): Array with query fragments. query_int (np.ndarray): Array with query intensities. label (np.ndarray): Array with label masses. reporter_frag_tol (float): Fragment tolerance for search. ppm (bool): Flag to use ppm instead of Dalton. Returns: np.ndarray: Array with intensities for the respective label channel. np.ndarray: Array with offset masses. 
""" report = np.zeros(len(label)) off_mass = np.zeros_like(label) hits = compare_frags(query_frag, label, reporter_frag_tol, ppm) for idx, _ in enumerate(hits): if _ > 0: report[idx] = query_int[_-1] off_mass[idx] = query_frag[_-1] - label[idx] if ppm: off_mass[idx] = off_mass[idx] / (query_frag[_-1] + label[idx]) *2 * 1e6 return report, off_mass def test_label_search(): query_frag = np.array([1,2,3,4,5]) query_int = np.array([1,2,3,4,5]) label = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) frag_tolerance = 0.1 ppm= False assert np.allclose(label_search(query_frag, query_int, label, frag_tolerance, ppm)[0], query_int) query_frag = np.array([1,2,3,4,6]) query_int = np.array([1,2,3,4,5]) assert np.allclose(label_search(query_frag, query_int, label, frag_tolerance, ppm)[0], np.array([1,2,3,4,0])) query_frag = np.array([1,2,3,4,6]) query_int = np.array([5,4,3,2,1]) assert np.allclose(label_search(query_frag, query_int, label, frag_tolerance, ppm)[0], np.array([5,4,3,2,0])) query_frag = np.array([1.1, 2.2, 3.3, 4.4, 6.6]) query_int = np.array([1,2,3,4,5]) frag_tolerance = 0.5 ppm= False assert np.allclose(label_search(query_frag, query_int, label, frag_tolerance, ppm)[1], np.array([0.1, 0.2, 0.3, 0.4, 0.0])) test_label_search() #Example usage query_frag = np.array([127, 128, 129.1, 132]) query_int = np.array([100, 200, 300, 400, 500]) label = np.array([127.0, 128.0, 129.0, 130.0]) frag_tolerance = 0.1 ppm = False report, offset = label_search(query_frag, query_int, label, frag_tolerance, ppm) print(f'Reported intensities {report}, Offset {offset}') ``` ## MS2 Search ``` #export from typing import NamedTuple import alphapept.io def search_label_on_ms_file(file_name:str, label:NamedTuple, reporter_frag_tol:float, ppm:bool): """Wrapper function to search labels on an ms_file and write results to the peptide_fdr of the file. Args: file_name (str): Path to ms_file: label (NamedTuple): Label with channels, mod_name and masses. reporter_frag_tol (float): Fragment tolerance for search. 
        ppm (bool): Flag to use ppm instead of Dalton.
    """
    ms_file = alphapept.io.MS_Data_File(file_name, is_read_only = False)
    df = ms_file.read(dataset_name='peptide_fdr')

    label_intensities = np.zeros((len(df), len(label.channels)))
    off_masses = np.zeros((len(df), len(label.channels)))

    labeled = df['sequence'].str.startswith(label.mod_name).values

    query_data = ms_file.read_DDA_query_data()

    query_indices = query_data["indices_ms2"]
    query_frags = query_data['mass_list_ms2']
    query_ints = query_data['int_list_ms2']

    for idx, query_idx in enumerate(df['raw_idx']):

        query_idx_start = query_indices[query_idx]
        query_idx_end = query_indices[query_idx + 1]
        query_frag = query_frags[query_idx_start:query_idx_end]
        query_int = query_ints[query_idx_start:query_idx_end]

        query_frag_idx = query_frag < label.masses[-1]+1
        query_frag = query_frag[query_frag_idx]
        query_int = query_int[query_frag_idx]

        if labeled[idx]:
            label_int, off_mass = label_search(query_frag, query_int, label.masses, reporter_frag_tol, ppm)
            label_intensities[idx, :] = label_int
            off_masses[idx, :] = off_mass

    df[label.channels] = label_intensities
    df[[_+'_off_ppm' for _ in label.channels]] = off_masses

    ms_file.write(df, dataset_name="peptide_fdr", overwrite=True) #Overwrite dataframe with label information

#export
import logging
import os

from alphapept.constants import label_dict

def find_labels(
    to_process: dict,
    callback: callable = None,
    parallel: bool = False
) -> bool:
    """Wrapper function to search for labels.

    Args:
        to_process (dict): A dictionary with settings indicating which files are to be processed and how.
        callback (callable): A function that accepts a float between 0 and 1 as progress. Defaults to None.
        parallel (bool): If True, process multiple files in parallel. This is not implemented yet! Defaults to False.

    Returns:
        bool: True if and only if the label finding was successful.
""" index, settings = to_process raw_file = settings['experiment']['file_paths'][index] try: base, ext = os.path.splitext(raw_file) file_name = base+'.ms_data.hdf' label = label_dict[settings['isobaric_label']['label']] reporter_frag_tol = settings['isobaric_label']['reporter_frag_tolerance'] ppm = settings['isobaric_label']['reporter_frag_tolerance_ppm'] search_label_on_ms_file(file_name, label, reporter_frag_tol, ppm) logging.info(f'Tag finding of file {file_name} complete.') return True except Exception as e: logging.error(f'Tag finding of file {file_name} failed. Exception {e}') return f"{e}" #Can't return exception object, cast as string return True #hide from nbdev.export import * notebook2script() ```
# For today's code challenge you will be reviewing yesterday's lecture material. Have fun!

### If you get done early check out [these videos](https://www.3blue1brown.com/neural-networks).

# The Perceptron

The first and simplest kind of neural network that we could talk about is the perceptron. A perceptron is just a single node or neuron of a neural network with nothing else. It can take any number of inputs and spit out an output. What a neuron does is it takes each of the input values, multiplies each of them by a weight, sums all of these products up, and then passes the sum through what is called an "activation function", the result of which is the final value.

I really like figure 2.1 found in this [pdf](http://www.uta.fi/sis/tie/neuro/index/Neurocomputing2.pdf) even though it doesn't have the bias term represented there.

![Figure 2.1](http://www.ryanleeallred.com/wp-content/uploads/2019/04/Screen-Shot-2019-04-01-at-2.34.58-AM.png)

If we were to write what is happening in some verbose mathematical notation, it might look something like this:

\begin{align}
 y = sigmoid(\sum(weight_{1}input_{1} + weight_{2}input_{2} + weight_{3}input_{3}) + bias)
\end{align}

Understanding what happens with a single neuron is important because this is the same pattern that will take place for all of our networks. When imagining a neural network I like to think about the arrows as representing the weights, like a wire that has a certain amount of resistance and only lets a certain amount of current through. And I like to think about the node itself as containing the prescribed activation function that neuron will use to decide how much signal to pass onto the next layer.

# Activation Functions (transfer functions)

In Neural Networks, each node has an activation function. Each node in a given layer typically has the same activation function. These activation functions are the biggest piece of neural networks that have been inspired by actual biology.
The activation function decides whether a cell "fires" or not. Sometimes it is said that the cell is "activated" or not. In Artificial Neural Networks activation functions decide how much signal to pass onto the next layer. This is why they are sometimes referred to as transfer functions: they determine how much signal is transferred to the next layer.

## Common Activation Functions:

![Activation Functions](http://www.snee.com/bobdc.blog/img/activationfunctions.png)

# Implementing a Perceptron from scratch in Python

### Establish training data

```
import numpy as np

np.random.seed(812)

inputs = np.array([
    [0, 0, 1],
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 1]
])

correct_outputs = np.array([[0], [1], [1], [0]])
```

### Sigmoid activation function and its derivative for updating weights

```
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    sx = sigmoid(x)
    return sx * (1 - sx)
```

## Updating weights with derivative of sigmoid function:

![Sigmoid Function](https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Logistic-curve.svg/320px-Logistic-curve.svg.png)

### Initialize random weights for our three inputs

```
weights = 2 * np.random.random((3, 1)) - 1
weights.shape
inputs.shape
```

### Calculate weighted sum of inputs and weights

```
weighted_sum = np.dot(inputs, weights)
weighted_sum
```

### Output the activated value for the end of 1 training epoch

```
activated_value = sigmoid(weighted_sum)
activated_value
```

### Take difference of output and true values to calculate error

```
error = correct_outputs - activated_value
error
```

### Put it all together

```
# note: sigmoid_derivative expects the pre-activation weighted sum,
# not the already-activated value (our sigmoid_derivative applies
# sigmoid to its argument internally)
adjustments = error * sigmoid_derivative(weighted_sum)
adjustments, inputs.T

for i in range(10000):
    weighted_sum = np.dot(inputs, weights)
    activated_value = sigmoid(weighted_sum)
    error = correct_outputs - activated_value
    adjustments = error * sigmoid_derivative(weighted_sum)
    weights += np.dot(inputs.T, adjustments)

print(weights)
print("\n-----\n", activated_value)
```
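The training cells above can be collected into one small reusable function. A minimal sketch of the same procedure (`train_perceptron` is our own name, not from the lecture; note the derivative is evaluated at the pre-activation sum):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    sx = sigmoid(x)
    return sx * (1 - sx)

def train_perceptron(inputs, targets, epochs=10000, seed=812):
    """Train a single sigmoid neuron with the batch update rule above."""
    rng = np.random.default_rng(seed)
    weights = 2 * rng.random((inputs.shape[1], 1)) - 1
    for _ in range(epochs):
        weighted_sum = inputs @ weights
        activated = sigmoid(weighted_sum)
        error = targets - activated
        # gradient of the sigmoid is evaluated at the pre-activation sum
        weights += inputs.T @ (error * sigmoid_derivative(weighted_sum))
    return weights

inputs = np.array([[0, 0, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [0, 1, 1]])
targets = np.array([[0], [1], [1], [0]])

weights = train_perceptron(inputs, targets)
predictions = sigmoid(inputs @ weights)
# after training, the rounded predictions match the targets
```

The targets here equal the first input column, so the problem is linearly separable and a single neuron suffices.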
# Random Forest on the Titanic dataset

```
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
```

### Preparations

```
# let's bring in the data and get rid of the NaNs
df = pd.read_csv('./data/train.csv')
# df.dropna(inplace=True)
```

#### 1. Inspect the size of the dataset

```
df.head()
df.isna().sum()

# sns.scatterplot(x=df['Age'], y=df['Pclass'], hue=df['Sex'])  # alternative way to do the same
sns.scatterplot(x='Age', y='Pclass', hue='Sex', data=df)

y = df['Survived']
X = df[['Pclass', 'Age', 'Sex']]

X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
X_train.head()
```

## Pipeline

```
impute_and_MinMaxScaler = make_pipeline(
    SimpleImputer(strategy='most_frequent'),
    MinMaxScaler()
)
```

## Column Transformer

```
fe = ColumnTransformer([
    # (name, transformer, column-names)
    # ('do-nothing', 'passthrough', ['culmen_length_mm', 'body_mass_g']),
    ('imputation, scaling', impute_and_MinMaxScaler, ['Age']),
    # ('imputation', SimpleImputer(), ['Age']),
    # ('binning', KBinsDiscretizer(n_bins=3, encode='onehot-dense'), ['Sex']),
    ('one-hot-encode', OneHotEncoder(), ['Sex', 'Pclass']),
    # ('one-hot', OneHotEncoder, ['Age']),
    # DON'T DO THIS:
    # ('one-hot-encode', OneHotEncoder(sparse=False, handle_unknown='ignore'), ['body_mass_g'])
])
fe
fe.fit(X_train)

# transform the training data
X_train_trans = fe.transform(X_train)
pd.DataFrame(X_train_trans)
```

## Find the optimal separation with Scikit

#### 7.
Train the model ``` from sklearn.tree import DecisionTreeClassifier, plot_tree # initialize and train model # m = DecisionTreeClassifier(max_depth=2) m= RandomForestClassifier( n_estimators=100, # number of decision trees in the forest max_depth=4 # depth of each tree ) m.fit(X_train_trans, y_train) ``` ## predictions ``` y_pred_train = m.predict(X_train_trans) accuracy_score(y_train, y_pred_train) ``` #### Calculate the accuracy ``` m.classes_ titanic_kaggel = pd.read_csv('./data/test.csv', index_col = 0) titanic_kaggel.head() X_kaggel = titanic_kaggel[['Pclass', 'Age', 'Sex']] X_kaggel_trans = fe.transform(X_kaggel) y_pred_kaggel = m.predict(X_kaggel_trans) type(y_pred_kaggel) submission = pd.DataFrame(y_pred_kaggel, index=X_kaggel.index, columns = ['Survived']) submission.to_csv('submission3.csv', index=True) submission # plt.figure(figsize=(10, 12)) # t = plot(m, feature_names=['Age','female', 'male', 'class1', 'class2','class3'], class_names=['Non_survived','Survived']) fe.named_transformers_['one-hot-encode'].get_feature_names() ```
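The impute-scale-encode pattern used in this notebook generalizes beyond the Titanic data. A self-contained sketch of the same ColumnTransformer setup on a tiny hand-made frame (the toy column values are ours):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

X = pd.DataFrame({
    'Age': [22.0, None, 38.0, 26.0],
    'Sex': ['male', 'female', 'female', 'male'],
    'Pclass': [3, 1, 1, 3],
})

impute_and_scale = make_pipeline(
    SimpleImputer(strategy='most_frequent'),  # fill the missing Age value
    MinMaxScaler(),                           # then scale Age to [0, 1]
)

fe = ColumnTransformer([
    ('imputation, scaling', impute_and_scale, ['Age']),
    ('one-hot-encode', OneHotEncoder(), ['Sex', 'Pclass']),
])

X_trans = fe.fit_transform(X)
# one scaled Age column + 2 Sex categories + 2 Pclass categories = 5 columns
```

Fitting the transformer on the training split only (as the notebook does with `fe.fit(X_train)`) keeps imputation statistics and category lists from leaking test-set information.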
``` import numpy as np import pandas as pd import warnings warnings.filterwarnings('ignore') import seaborn as sns sns.set_palette('Set2') import matplotlib.pyplot as plt %matplotlib inline from sklearn.metrics import confusion_matrix, mean_squared_error from sklearn.preprocessing import LabelEncoder, MinMaxScaler, StandardScaler from sklearn.model_selection import train_test_split, cross_val_score from sklearn.linear_model import LinearRegression, Lasso, Ridge, SGDRegressor from sklearn.preprocessing import KBinsDiscretizer from sklearn.ensemble import RandomForestRegressor from sklearn.svm import LinearSVC, SVC, LinearSVR, SVR from sklearn.tree import DecisionTreeRegressor from sklearn.model_selection import GridSearchCV from scipy.stats import zscore from sklearn.metrics import mean_squared_error import requests import json from datetime import datetime import time import os from pandas.tseries.holiday import USFederalHolidayCalendar as calendar from config import yelp_api_key from config import darksky_api_key ``` ## Set Up ``` # Analysis Dates start_date = '2017-01-01' # Start Date Inclusive end_date = '2019-06-19' # End Date Exclusive search_business = 'Jupiter Disco' location = 'Brooklyn, NY' ``` ## Pull Weather Data ### Latitude + Longitude from Yelp API ``` host = 'https://api.yelp.com' path = '/v3/businesses/search' search_limit = 10 # Yelp Authorization Header with API Key headers = { 'Authorization': 'Bearer {}'.format(yelp_api_key) } # Build Requests Syntax with Yelp Host and Path and URL Paramaters # Return JSON response def request(host, path, url_params=None): url_params = url_params or {} url = '{}{}'.format(host, path) response = requests.get(url, headers=headers, params=url_params) return response.json() # Build URL Params for the Request and provide the host and path def search(term, location): url_params = { 'term': term.replace(' ', '+'), 'location': location.replace(' ', '+'), 'limit': search_limit } return request(host, path, 
url_params=url_params) # Return Coordinates if Exact Match Found def yelp_lat_long(business, location): # Call search function here with business name and location response = search(business, location) # Set state to 'No Match' in case no Yelp match found state = 'No Match' possible_matches = [] # Check search returns for match wtith business for i in range(len(response['businesses'])): # If match found: if response['businesses'][i]['name'] == business: # Local variables to help navigate JSON return response_ = response['businesses'][0] name_ = response_['name'] print(f'Weather Location: {name_}') state = 'Match Found' #print(response['businesses'][0]) return response_['coordinates']['latitude'], response_['coordinates']['longitude'] else: # If no exact match, append all search returns to list possible_matches.append(response['businesses'][i]['name']) # If no match, show user potential matches if state == 'No Match': print('Exact match not found, did you mean one of the following? \n') for possible_match in possible_matches: print(possible_match) return None, None lat, long = yelp_lat_long(search_business, location) #print(f'Latitude: {lat}\nLongitude: {long}') ``` ### Darksky API Call ``` # Create List of Dates of target Weather Data def find_dates(start_date, end_date): list_of_days = [] daterange = pd.date_range(start_date, end_date) for single_date in daterange: list_of_days.append(single_date.strftime("%Y-%m-%d")) return list_of_days # Concatenate URL to make API Call def build_url(api_key, lat, long, day): _base_url = 'https://api.darksky.net/forecast/' _time = 'T20:00:00' _url = f'{_base_url}{api_key}/{lat},{long},{day + _time}?America/New_York&exclude=flags' return _url def make_api_call(url): r = requests.get(url) return r.json() # Try / Except Helper Function for Handling JSON API Output def find_val(dictionary, *keys): level = dictionary for key in keys: try: level = level[key] except: return np.NAN return level # Parse API Call Data using Try / Except 
Helper Function def parse_data(data): time = datetime.fromtimestamp(data['currently']['time']).strftime('%Y-%m-%d') try: precip_max_time = datetime.fromtimestamp(find_val(data, 'daily', 'data', 0, 'precipIntensityMaxTime')).strftime('%I:%M%p') except: precip_max_time = datetime(1900,1,1,5,1).strftime('%I:%M%p') entry = {'date': time, 'temperature': float(find_val(data, 'currently', 'temperature')), 'apparent_temperature': float(find_val(data, 'currently', 'apparentTemperature')), 'humidity': float(find_val(data, 'currently', 'humidity')), 'precip_intensity_max': float(find_val(data,'daily','data', 0, 'precipIntensityMax')), 'precip_type': find_val(data, 'daily', 'data', 0, 'precipType'), 'precip_prob': float(find_val(data, 'currently', 'precipProbability')), 'pressure': float(find_val(data, 'currently', 'pressure')), 'summary': find_val(data, 'currently', 'icon'), 'precip_max_time': precip_max_time} return entry # Create List of Weather Data Dictionaries & Input Target Dates def weather_call(start_date, end_date, _lat, _long): weather = [] list_of_days = find_dates(start_date, end_date) for day in list_of_days: data = make_api_call(build_url(darksky_api_key, _lat, _long, day)) weather.append(parse_data(data)) return weather result = weather_call(start_date, end_date, lat, long) # Build DataFrame from List of Dictionaries def build_weather_df(api_call_results): df = pd.DataFrame(api_call_results) # Add day of week to DataFrame + Set Index as date df['date'] = pd.to_datetime(df['date']) df['day_of_week'] = df['date'].dt.weekday df['month'] = df['date'].dt.month df.set_index('date', inplace=True) df['apparent_temperature'].fillna(method='ffill',inplace=True) df['temperature'].fillna(method='ffill',inplace=True) df['humidity'].fillna(method='ffill',inplace=True) df['precip_prob'].fillna(method='ffill', inplace=True) df['pressure'].fillna(method='ffill', inplace=True) df['precip_type'].fillna(value='none', inplace=True) return df weather_df = build_weather_df(result); 
weather_df.to_csv(f'weather_{start_date}_to_{end_date}.csv') weather_csv_file = f'weather_{start_date}_to_{end_date}.csv' ``` ## Import / Clean / Prep File ``` # Import Sales Data bar_sales_file = 'bar_x_sales_export.csv' rest_1_file = 'rest_1_dinner_sales_w_covers_061819.csv' # Set File current_file = rest_1_file weather_csv_file = 'weather_2017-01-01_to_2019-06-19.csv' # HELPER FUNCTION def filter_df(df, start_date, end_date): return df[(df.index > start_date) & (df.index < end_date)] # HELPER FUNCTION def import_parse(file): data = pd.read_csv(file, index_col = 'date', parse_dates=True) df = pd.DataFrame(data) # Rename Column to 'sales' df = df.rename(columns={df.columns[0]: 'sales', 'dinner_covers': 'covers'}) # Drop NaN #df = df.query('sales > 0').copy() df.fillna(0, inplace=True) print(f'"{file}" has been imported + parsed. The file has {len(df)} rows.') return df # HELPER FUNCTION def prepare_data(current_file, weather_file): df = filter_df(import_parse(current_file), start_date, end_date) weather_df_csv = pd.read_csv(weather_csv_file, parse_dates=True, index_col='date') weather_df_csv['summary'].fillna(value='none', inplace=True) df = pd.merge(df, weather_df_csv, how='left', on='date') return df ``` ### Encode Closed Days ``` # Set Closed Dates using Sales ## REST 1 CLOSED DATES additional_closed_dates = ['2018-12-24', '2017-12-24', '2017-02-05', '2017-03-14', '2018-01-01', '2018-02-04', '2019-02-03'] ## BAR CLOSED DATES #additional_closed_dates = ['2018-12-24', '2017-12-24', '2017-10-22'] closed_dates = [pd.to_datetime(date) for date in additional_closed_dates] # Drop or Encode Closed Days def encode_closed_days(df): # CLOSED FEATURE cal = calendar() # Local list of days with zero sales potential_closed_dates = df[df['sales'] == 0].index # Enocodes closed days with 1 df['closed'] = np.where((((df.index.isin(potential_closed_dates)) & \ (df.index.isin(cal.holidays(start_date, end_date)))) | df.index.isin(closed_dates)), 1, 0) df['sales'] = 
np.where(df['closed'] == 1, 0, df['sales']) return df baseline_df = encode_closed_days(prepare_data(current_file, weather_csv_file)) baseline_df = baseline_df[['sales', 'outside', 'day_of_week', 'month', 'closed']] baseline_df = add_dummies(add_clusters(baseline_df)) mod_baseline = target_trend_engineering(add_cal_features(impute_outliers(baseline_df))) mod_baseline = mod_baseline.drop(['month'], axis=1) ``` ### Replace Outliers in Training Data ``` # Replace Outliers with Medians ## Targets for Outliers z_thresh = 3 def impute_outliers(df, *col): # Check for Outliers in Sales + Covers for c in col: # Impute Median for Sales & Covers Based on Day of Week Outiers for d in df['day_of_week'].unique(): # Median / Mean / STD for each day of the week daily_median = np.median(df[df['day_of_week'] == d][c]) daily_mean = np.mean(df[df['day_of_week'] == d][c]) daily_std = np.std(df[df['day_of_week'] ==d ][c]) # Temporary column encoded if Target Columns have an Outlier df['temp_col'] = np.where((df['day_of_week'] == d) & (df['closed'] == 0) & ((np.abs(df[c] - daily_mean)) > (daily_std * z_thresh)), 1, 0) # Replace Outlier with Median df[c] = np.where(df['temp_col'] == 1, daily_median, df[c]) df = df.drop(['temp_col'], axis=1) return df def add_ppa(df): df['ppa'] = np.where(df['covers'] > 0, df['sales'] / df['covers'], 0) return df ``` ## Clean File Here ``` data = add_ppa(impute_outliers(encode_closed_days(prepare_data(current_file, weather_csv_file)), 'sales', 'covers')) ``` ### Download CSV for EDA ``` data.to_csv('CSV_for_EDA.csv') df_outside = data['outside'] ``` ## CHOOSE TARGET --> SALES OR COVERS ``` target = 'sales' def daily_average_matrix_ann(df, target): matrix = df.groupby([df.index.dayofweek, df.index.month, df.index.year]).agg({target: 'mean'}) matrix = matrix.rename_axis(['day', 'month', 'year']) return matrix.unstack(level=1) daily_average_matrix_ann(data, target) ``` ### Create Month Clusters ``` from sklearn.cluster import KMeans day_k = 7 mo_k = 3 def 
create_clusters(df, target, col, k): # MAKE DATAFRAME USING CENTRAL TENDENCIES AS FEATURES describe = df.groupby(col)[target].aggregate(['median', 'std', 'max']) df = describe.reset_index() # SCALE TEMPORARY DF scaler = MinMaxScaler() f = scaler.fit_transform(df) # INSTANTIATE MODEL km = KMeans(n_clusters=k, random_state=0).fit(f) # GET KMEANS CLUSTER PREDICTIONS labels = km.predict(f) # MAKE SERIES FROM PREDICTIONS temp = pd.DataFrame(labels, columns = ['cluster'], index=df.index) # CONCAT CLUSTERS TO DATAFRAME df = pd.concat([df, temp], axis=1) # CREATE CLUSTER DICTIONARY temp_dict = {} for i in list(df[col]): temp_dict[i] = df.loc[df[col] == i, 'cluster'].iloc[0] return temp_dict # Create Global Dictionaries to Categorize Day / Month #day_dict = create_clusters(data, 'day_of_week', day_k) month_dict = create_clusters(data, target, 'month', mo_k) # Print Clusters #print('Day Clusters: ', day_dict, '\n', 'Total Clusters: ', len(set(day_dict.values())), '\n') print('Month Clusters: ', month_dict, '\n', 'Total Clusters: ', len(set(month_dict.values()))) ``` ### Add Temperature Onehot Categories ``` def encode_temp(df): temp_enc = KBinsDiscretizer(n_bins=5, encode='onehot', strategy='kmeans') temp_enc.fit(df[['apparent_temperature']]) return temp_enc def one_hot_temp(df, temp_enc): binned_transform = temp_enc.transform(df[['apparent_temperature']]) binned_df = pd.DataFrame(binned_transform.toarray(), index=df.index, columns=['temp_very_cold', 'temp_cold', 'temp_warm', 'temp_hot', 'temp_very_hot']) df = df.merge(binned_df, how='left', on='date') df.drop(['apparent_temperature', 'temperature'], axis=1, inplace=True) return df, temp_enc ``` ## Feature Engineering ``` # Add Clusters to DataFrame to use as Features def add_clusters(df): #df['day_cluster'] = df['day_of_week'].apply(lambda x: day_dict[x]).astype('category') df['month_cluster'] = df['month'].apply(lambda x: month_dict[x]).astype('category') return df ``` ### Add Weather Features ``` hours_start = '05:00PM' 
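# Sketch with a hypothetical clock string: strptime with "%I:%M%p" returns a datetime on the default date 1900-01-01, so the open-hours window test used here reduces to a plain chained comparison between datetimes.
_probe = datetime.strptime('07:30PM', "%I:%M%p")
_in_window = datetime.strptime('05:00PM', "%I:%M%p") <= _probe <= datetime.strptime('11:59PM', "%I:%M%p")
# _in_window -> True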
hours_end = '11:59PM' hs_dt = datetime.strptime(hours_start, "%I:%M%p") he_dt = datetime.strptime(hours_end, "%I:%M%p") def between_time(check_time): if hs_dt <= datetime.strptime(check_time, "%I:%M%p") <= he_dt: return 1 else: return 0 add_weather = True temp_delta_window = 1 def add_weather_features(df): if add_weather: # POOR WEATHER FEATURES df['precip_while_open'] = df['precip_max_time'].apply(lambda x: between_time(x)) # DROP FEATURES features_to_drop = ['precip_max_time'] df.drop(features_to_drop, axis=1, inplace=True) return df ``` ### Add Calendar Features ``` def add_cal_features(df): cal = calendar() # THREE DAY WEEKEND FEATURE sunday_three_days = [date + pd.DateOffset(-1) for date in cal.holidays(start_date, end_date) if date.dayofweek == 0] df['sunday_three_day'] = np.where(df.index.isin(sunday_three_days), 1, 0) return df ``` ### Add Dummies ``` def add_dummies(df): df['day_of_week'] = df['day_of_week'].astype('category') df = pd.get_dummies(data=df, columns=['day_of_week', 'month_cluster']) return df ``` ### Add Interactions ``` def add_interactions(df): apply_this_interaction = False if apply_this_interaction: for d in [col for col in df.columns if col.startswith('day_cluster')]: for m in [col for col in df.columns if col.startswith('month_cluster')]: col_name = d + '_X_' + m df[col_name] = df[d] * df[m] df.drop([d], axis=1, inplace=True) df.drop([col for col in df.columns if col.startswith('month_cluster')], axis=1, inplace=True) return df else: return df def add_weather_interactions(df): apply_this_interaction = True if apply_this_interaction: try: df['outside_X_precip_open'] = df['outside'] * df['precip_while_open'] for w in [col for col in df.columns if col.startswith('temp_')]: col_name = w + '_X_' + 'outside' df[col_name] = df[w] * df['outside'] df.drop(['outside'], axis=1, inplace=True) except: pass return df else: return df ``` ### Feature Selection ``` def feature_selection(df): try: target_list = ['sales', 'covers', 'ppa'] target_to_drop = 
[t for t in target_list if t != target] df = df.drop(target_to_drop, axis=1) except: pass # Feature Selection / Drop unnecessary or correlated columns cols_to_drop = ['month', 'precip_type', 'summary', 'pressure', 'precip_intensity_max', 'day_of_week_0'] df = df.drop(cols_to_drop, axis=1) return df ``` ### Add Target Trend Feature Engineering ``` trend_days_rolling = 31 trend_days_shift = 7 days_fwd = trend_days_rolling + trend_days_shift + 1 def target_trend_engineering(df): df['target_trend'] = df[target].rolling(trend_days_rolling).mean() / df[target].shift(trend_days_shift).rolling(trend_days_rolling).mean() #df['target_delta'] = df[target].shift(7) + df[target].shift(14) - df[target].shift(21) - df[target].shift(28) return df ``` ## Start Here ``` # IMPORT & PARSE CLEAN TRAINING SET data = add_ppa(impute_outliers(encode_closed_days(prepare_data(current_file, weather_csv_file)), 'sales', 'covers')); # One Hot Encode Temperature Data data, temp_enc = one_hot_temp(data, encode_temp(data)) # Create CSV data.to_csv('csv_before_features.csv') def feature_engineering(df): df.columns = df.columns.map(str) # Add day & Month Clusters // Dicts with data held in Global Variable df = add_clusters(df) # Add Engineered Features for Weather & Calendar df = add_weather_features(df) df = add_cal_features(df) # Create Dummies df = add_dummies(df) # Add Interactions df = add_interactions(df) df = add_weather_interactions(df) # Drop Selected Columns df = feature_selection(df) return df dfx = feature_engineering(data) dfx = target_trend_engineering(dfx) def corr_chart(df): corr = df.corr() mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure sns.set_style('whitegrid') f, ax = plt.subplots(figsize=(16, 12)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmax=1, vmin=-1, center=0, 
square=True, linewidths=.75, annot=False, cbar_kws={"shrink": .75}); corr_chart(dfx) ``` ## Update Sales ``` # # File from Start based on Target Variable # current_sales_df = import_parse(rest_1_file) # date = '2019-06-13' # sales = 15209.75 # covers = 207 # outside = 1 # closed = 0 # def add_sales_row(date, sales): # df = pd.DataFrame({'sales': sales, # 'covers': covers, # 'outside': outside, # 'closed': closed}, # index=[date]) # return df # temp = add_sales_row(date, sales) # def build_sales_df(df, temp): # df = df.append(temp) # return df # current_sales_df = build_sales_df(current_sales_df, temp) # # Download Current DataFrame to CSV # current_sales_df.to_csv(f'rest_1_clean_updated_{start_date}_to_{end_date}.csv') # df_import = pd.read_csv('rest_1_clean_updated_2017-01-01_to_2019-06-17.csv', parse_dates=True, index_col='Unnamed: 0') # def import_current(df): # df.index = pd.to_datetime(df.index) # df = add_ppa(df) # target_list = ['sales', 'covers', 'ppa'] # target_to_drop = [t for t in target_list if t != target] # df = df.drop(target_to_drop, axis=1) # return df # current_df = import_current(df_import) ``` ### Add Recent Sales Data ``` # # Import Most Recent DataFrame # df_before_features = pd.read_csv('csv_before_features.csv', index_col='date', parse_dates=True) # # Create New Weather DataFrame with Updated Data # new_date_start = '2019-06-15' # new_date_end = '2019-06-17' # def update_current_df(sales_df, df_before_features, new_date_start, new_end_date): # sales_df = sales_df[new_date_start:] # sales_df = sales_df.rename_axis(index = 'date') # sales_df.index = pd.to_datetime(sales_df.index) # ## Find Lat Long for Business # lat, long = yelp_lat_long(search_business, location) # ## Pull Weather Data / Forecast # weather_df = build_weather_df(weather_call(new_date_start, new_date_end, lat, long)) # ## Parse, Clean, Engineer # df = pd.merge(sales_df, weather_df, how='left', on='date') # df, _ = one_hot_temp(df, temp_enc) # df = 
pd.concat([df_before_features, df]) # df = target_trend_engineering(feature_engineering(df)) # return df # current_df = update_current_df(current_df, df_before_features, new_date_start, new_date_end) ``` ## Test / Train / Split ### Drop Closed Days? ``` drop_all_closed = False if drop_all_closed: current_df = current_df[current_df['closed'] == 0] dfx.columns def drop_weather(df): no_weather = False if no_weather: df = df.drop(['humidity', 'precip_prob', 'temp_very_cold', 'temp_cold', 'temp_hot', 'temp_very_hot', 'precip_while_open', \ 'temp_very_cold_X_outside', 'temp_cold_X_outside', 'temp_hot_X_outside','temp_very_hot_X_outside', 'outside_X_precip_open'], axis=1) df = df.merge(df_outside, on='date', how='left') return df else: return df dfx = drop_weather(dfx) dfx.head() def cv_split(df): # use the frame passed in (not the global dfx) so the baseline split is built from mod_baseline features = df.drop([target], axis=1)[days_fwd:] y = df[target][days_fwd:] return features, y cv_features, cv_y = cv_split(dfx) baseline_cv_x, baseline_cv_y = cv_split(mod_baseline) def train_test_split(df): # Separate Target & Features y = df[target] features = df.drop([target], axis=1) # Test / Train / Split train_date_start = '2017-01-01' train_date_end = '2018-12-31' X_train = features[pd.to_datetime(train_date_start) + pd.DateOffset(days_fwd):train_date_end] X_test = features[pd.to_datetime(train_date_end) + pd.DateOffset(1): ] y_train = y[pd.to_datetime(train_date_start) + pd.DateOffset(days_fwd):train_date_end] y_test = y[pd.to_datetime(train_date_end) + pd.DateOffset(1): ] # Scale scaler = MinMaxScaler() X_train_scaled = scaler.fit_transform(X_train) X_test_scaled = scaler.transform(X_test) X_train = pd.DataFrame(X_train_scaled, columns=X_train.columns) X_test = pd.DataFrame(X_test_scaled, columns=X_train.columns) print('Train set: ', len(X_train)) print('Test set: ', len(X_test)) return X_train, X_test, y_train, y_test, scaler X_train, X_test, y_train, y_test, scaler = train_test_split(dfx) baseline_X_train, baseline_X_test, baseline_y_train, baseline_y_test, 
baseline_scaler = train_test_split(mod_baseline) ``` ### Linear Regression ``` def linear_regression_model(X_train, y_train): lr = LinearRegression(fit_intercept=True) lr_rgr = lr.fit(X_train, y_train) return lr_rgr lr_rgr = linear_regression_model(X_train, y_train) baseline_lr_rgr = linear_regression_model(baseline_X_train, baseline_y_train) def rgr_score(rgr, X_train, y_train, X_test, y_test, cv_features, cv_y): y_hat = rgr.predict(X_test) sum_squares_residual = sum((y_test - y_hat)**2) sum_squares_total = sum((y_test - np.mean(y_test))**2) r_squared = 1 - (float(sum_squares_residual))/sum_squares_total adjusted_r_squared = 1 - (1-r_squared)*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1) print('Formula Scores - R-Squared: ', r_squared, 'Adjusted R-Squared: ', adjusted_r_squared, '\n') train_score = rgr.score(X_train, y_train) test_score = rgr.score(X_test, y_test) y_pred = rgr.predict(X_test) rmse = np.sqrt(mean_squared_error(y_test, y_pred)) pred_df = pd.DataFrame(y_pred, index=y_test.index) pred_df = pred_df.rename(columns={0: target}) print('Train R-Squared: ', train_score) print('Test R-Squared: ', test_score, '\n') print('Root Mean Squared Error: ', rmse, '\n') print('Cross Val Avg R-Squared: ', \ np.mean(cross_val_score(rgr, cv_features, cv_y, cv=10, scoring='r2')), '\n') print('Intercept: ', rgr.intercept_, '\n') print('Coefficients: \n') for index, col_name in enumerate(X_test.columns): print(col_name, ' --> ', rgr.coef_[index]) return pred_df pred_df = rgr_score(lr_rgr, X_train, y_train, X_test, y_test, cv_features, cv_y) baseline_preds = rgr_score(baseline_lr_rgr, baseline_X_train, baseline_y_train, baseline_X_test, baseline_y_test, baseline_cv_x, baseline_cv_y) ``` ### Prediction Function ``` outside = 1 def predict_df(clf, scaler, X_train, current_df, date_1, date_2): # Find Lat Long for Business lat, long = yelp_lat_long(search_business, location) # Pull Weather Data / Forecast weather_df = build_weather_df(weather_call(date_1, date_2, lat, long)) 
day_of_week, apparent_temperature = weather_df['day_of_week'], weather_df['apparent_temperature'] weather_df['outside'] = outside # One Hot Encode Temperature Using Fitted Encoder df, _ = one_hot_temp(weather_df, temp_enc) df['closed'] = 0 # Add Feature Engineering df = feature_engineering(df) # Add Sales Data for Sales Trend Engineering current_df = current_df[target] df = pd.merge(df, current_df, on='date', how='left') df[target] = df[target].fillna(method='ffill') df = target_trend_engineering(df) df = df.drop([target], axis=1) # Ensure Column Parity missing_cols = set(X_train.columns) - set(df.columns) for c in missing_cols: df[c] = 0 df = df[X_train.columns][-2:] # Scale Transform df_scaled = scaler.transform(df) df = pd.DataFrame(df_scaled, columns=df.columns, index=df.index) # Predict and Build Prediction DataFrame for Review pred_array = pd.DataFrame(clf.predict(df), index=df.index, columns=[target]) pred_df = df[df.columns[(df != 0).any()]] pred_df = pd.concat([pred_df, day_of_week, apparent_temperature], axis=1) final_predict = pd.concat([pred_array, pred_df], axis=1) return final_predict tonight = predict_df(lr_rgr, scaler, X_train, dfx, pd.datetime.now().date() + pd.DateOffset(-days_fwd), pd.datetime.now().date()) tonight[-2:] ``` ## Lasso ``` def lasso_model(X_train, y_train): lassoReg = Lasso(fit_intercept=True, alpha=.05) lasso_rgr = lassoReg.fit(X_train,y_train) return lasso_rgr lasso_rgr = lasso_model(X_train, y_train) baseline_lasso = lasso_model(baseline_X_train, baseline_y_train) pred_df_ppa_lasso = rgr_score(lasso_rgr, X_train, y_train, X_test, y_test, cv_features, cv_y) baseline_lasso = rgr_score(baseline_lasso, baseline_X_train, baseline_y_train, baseline_X_test, baseline_y_test, baseline_cv_x, baseline_cv_y) tonight = predict_df(lasso_rgr, scaler, X_train, dfx, pd.datetime.now().date() + pd.DateOffset(-days_fwd), pd.datetime.now().date()) tonight[-2:] from yellowbrick.regressor import ResidualsPlot from yellowbrick.features.importances 
import FeatureImportances plt.figure(figsize=(12,8)) visualizer = ResidualsPlot(lasso_rgr, hist=False) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.poof() features = list(X_train.columns) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot() labels = list(map(lambda x: x.title(), features)) visualizer = FeatureImportances(lasso_rgr, ax=ax, labels=labels, relative=False) visualizer.fit(X_train, y_train) visualizer.poof() ``` ### Random Forest Regression ``` (1/6)**20 8543 * 8543 def rf_regression_model(X_train, y_train): rfr = RandomForestRegressor(max_depth= 11, max_features= 0.60, min_impurity_decrease= 0.005, n_estimators= 300, min_samples_leaf = 2, min_samples_split = 2, random_state = 0) rfr_rgr = rfr.fit(X_train, y_train) return rfr_rgr rfr_rgr = rf_regression_model(X_train, y_train) def rfr_score(rgr, X_test, y_test, cv_features, cv_y): y_hat = rgr.predict(X_test) sum_squares_residual = sum((y_test - y_hat)**2) sum_squares_total = sum((y_test - np.mean(y_test))**2) r_squared = 1 - (float(sum_squares_residual))/sum_squares_total adjusted_r_squared = 1 - (1-r_squared)*(len(y_test)-1)/(len(y_test)-X_test.shape[1]-1) print('Formula Scores - R-Squared: ', r_squared, 'Adjusted R-Squared: ', adjusted_r_squared, '\n') train_score = rgr.score(X_train, y_train) test_score = rgr.score(X_test, y_test) y_pred = rgr.predict(X_test) rmse = np.sqrt(mean_squared_error(y_test, y_pred)) pred_df = pd.DataFrame(y_pred, index=y_test.index) pred_df = pred_df.rename(columns={0: target}) print('Train R-Squared: ', train_score) print('Test R-Squared: ', test_score, '\n') print('Root Mean Squared Error: ', rmse, '\n') print('Cross Val Avg R-Squared: ', \ np.mean(cross_val_score(rgr, cv_features, cv_y, cv=10, scoring='r2')), '\n') return pred_df pred_df = rfr_score(rfr_rgr, X_test, y_test, cv_features, cv_y) ``` # Random Forest Regression Prediction ``` tonight = predict_df(rfr_rgr, scaler, X_train, dfx, pd.datetime.now().date() + 
pd.DateOffset(-days_fwd), pd.datetime.now().date()) tonight[-2:] ``` ### Grid Search Helper Function ``` def run_grid_search(rgr, params, X_train, y_train): cv = 5 n_jobs = -1 scoring = 'neg_mean_squared_error' grid = GridSearchCV(rgr, params, cv=cv, n_jobs=n_jobs, scoring=scoring, verbose=10) grid = grid.fit(X_train, y_train) best_grid_rgr = grid.best_estimator_ print('Grid Search: ', rgr.__class__.__name__, '\n') print('Grid Search Best Score: ', grid.best_score_) print('Grid Search Best Params: ', grid.best_params_) print('Grid Search Best Estimator: ', grid.best_estimator_) return best_grid_rgr params = { 'n_estimators': [250, 275, 300, 350, 400,500], 'max_depth': [5, 7, 9, 11, 13, 15], 'min_impurity_decrease': [0.005, 0.001, 0.0001], 'max_features': ['auto', 0.65, 0.75, 0.85, 0.95] } best_grid_rgr = run_grid_search(rfr_rgr, params, X_train, y_train) ``` ### OLS Model ``` import statsmodels.api as sm from statsmodels.formula.api import ols f = '' for c in dfx.columns: f += c + '+' x = f[6:-1] f= target + '~' + x model = ols(formula=f, data=dfx).fit() model.summary() ``` ## XGB Regressor ``` from xgboost import XGBRegressor def xgb_model(X_train, y_train): objective = 'reg:linear' booster = 'gbtree' nthread = 4 learning_rate = 0.02 max_depth = 3 colsample_bytree = 0.75 n_estimators = 450 min_child_weight = 2 xgb_rgr= XGBRegressor(booster=booster, objective=objective, colsample_bytree=colsample_bytree, learning_rate=learning_rate, \ max_depth=max_depth, nthread=nthread, n_estimators=n_estimators, min_child_weight=min_child_weight, random_state = 0) xgb_rgr = xgb_rgr.fit(X_train, y_train) return xgb_rgr xgb_rgr = xgb_model(X_train, y_train) # Convert column names back to original X_test = X_test[X_train.columns] pred_df_covers_xgb = rfr_score(xgb_rgr, X_test, y_test, cv_features, cv_y) ``` ## HYBRID ``` filex = 'predicted_ppa_timeseries.csv' df_ts = pd.read_csv(filex,index_col='date',parse_dates=True) pred_df = pred_df_covers_xgb.merge(df_ts, on='date', 
how='left') pred_df.head() pred_df['sales'] = pred_df['covers'] * pred_df['pred_ppa'] pred_df = pred_df[['sales']] r2 = metrics.r2_score(y_test, pred_df) r2 rmse = np.sqrt(mean_squared_error(y_test, pred_df)) rmse tonight = predict_df(xgb_rgr, scaler, X_train, dfx, pd.datetime.now().date() + pd.DateOffset(-days_fwd), pd.datetime.now().date()) tonight[-2:] xgb1 = XGBRegressor() parameters = {'nthread':[4], #when use hyperthread, xgboost may become slower 'objective':['reg:linear'], 'learning_rate': [.015, 0.02, .025], #so called `eta` value 'max_depth': [3, 4, 5], 'min_child_weight': [1, 2, 3], 'silent': [1], 'subsample': [0.7], 'colsample_bytree': [0.55, 0.60, 0.65], 'n_estimators': [400, 500, 600]} xgb_grid = GridSearchCV(xgb1, parameters, cv = 3, n_jobs = 5, verbose=True, scoring = 'neg_mean_squared_error') xgb_grid.fit(X_train, y_train) print(xgb_grid.best_score_) print(xgb_grid.best_params_) ``` ## CLEAN RUN
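As a reference for the scoring helpers above, the adjusted R-squared formula repeated inline in `rgr_score` and `rfr_score` can be checked on toy numbers (all values below are hypothetical, not model output):

```python
# Same adjusted R-squared formula as in rgr_score / rfr_score above,
# factored into a helper; n = observations, k = predictors.
def adjusted_r2(r2, n, k):
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Toy numbers: a high R-squared shrinks slightly once the predictor
# count is penalized.
print(round(adjusted_r2(0.9, 10, 2), 4))  # 0.8714
```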
# Numpy ### GitHub repository: https://github.com/jorgemauricio/curso_itesm ### Instructor: Jorge Mauricio ``` # libraries import numpy as np ``` # Creating Numpy Arrays ## From a Python list We can create the array directly from a Python list or list of lists ``` my_list = [1,2,3] my_list np.array(my_list) my_matrix = [[1,2,3],[4,5,6],[7,8,9]] my_matrix ``` ## Methods ### arange ``` np.arange(0,10) np.arange(0,11,2) ``` ### zeros and ones Generate arrays of zeros and ones ``` np.zeros(3) np.zeros((5,5)) np.ones(3) np.ones((3,3)) ``` ### linspace Generate an array over a specified interval ``` np.linspace(0,10,3) np.linspace(0,10,50) ``` ### eye Generate identity matrices ``` np.eye(4) ``` ### Random ### rand Generate an array of a given shape with numbers drawn from a uniform distribution over [0,1) ``` np.random.rand(2) np.random.rand(5,5) ``` ### randn Generate an array drawn from a standard normal distribution, unlike rand, which is uniform ``` np.random.randn(2) np.random.randn(5,5) ``` ### randint Generate random integers within a given range ``` np.random.randint(1,100) np.random.randint(1,100,10) ``` ### Array methods and attributes ``` arr = np.arange(25) ranarr = np.random.randint(0,50,10) arr ranarr ``` ### Reshape Returns the same array in a different shape ``` arr.reshape(5,5) ``` ### max, min, argmax, argmin Methods for finding the maximum and minimum values and their indices ``` ranarr ranarr.max() ranarr.argmax() ranarr.min() ranarr.argmin() ``` ### Shape Attribute that shows the shape of the array ``` # Vector arr.shape # Note that two brackets appear here arr.reshape(1,25) arr.reshape(1,25).shape arr.reshape(25,1) arr.reshape(25,1).shape ``` ### dtype Shows the data type of the array's elements ``` arr.dtype ``` # Selection and indexing in Numpy ``` # create an array arr = np.arange(0,11) # show the array arr ``` # Selection using brackets ``` # get the value at index 8 arr[8] # get the values in a range arr[1:5] # get the values in another range arr[2:6] ``` # Replacing values ``` # replace values in a given range arr[0:5]=100 # show the array arr # generate the array again arr = np.arange(0,11) # show arr # slice of an array slice_of_arr = arr[0:6] # show the slice slice_of_arr # change the slice's values slice_of_arr[:]=99 # show the slice's values slice_of_arr # show the array (the slice is a view, so arr changed too) arr # to get a copy, it must be done explicitly arr_copy = arr.copy() # show the copied array arr_copy ``` ## Indexing a 2D array (matrices) The general form of a 2D array index is **arr_2d[row][col]** or **arr_2d[row,col]** ``` # generate a 2D array arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45])) # Show arr_2d # row indexing arr_2d[1] # Format is arr_2d[row][col] or arr_2d[row,col] # Select a single element arr_2d[1][0] # Select a single element arr_2d[1,0] # 2D slices # shape (2,2) from the top-right corner arr_2d[:2,1:] # slice from the last row arr_2d[2] # slice from the last row arr_2d[2,:] # length of an array arr_length = arr_2d.shape[1] arr_length ``` # Selection ``` arr = np.arange(1,11) arr arr > 4 bool_arr = arr>4 bool_arr arr[bool_arr] arr[arr>2] x = 2 arr[arr>x] ```
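One more selection idiom, building on the boolean masks above: conditions combine element-wise with `&` and `|`, and the parentheses are required because those operators bind tighter than the comparisons.

```python
import numpy as np

arr = np.arange(1, 11)
# element-wise AND of two masks; parentheses are mandatory
mask = (arr > 3) & (arr < 8)
print(arr[mask])  # [4 5 6 7]
```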
``` from transformers import ( AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq, HfArgumentParser, MBartTokenizer, default_data_collator, AutoModelWithLMHead, set_seed ) model_name = "./models/First/" model = AutoModelWithLMHead.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) if tokenizer.encode("<extra_id_0> <extra_id_1>") != [32099, 32098, 32097, 1]: # For non-t5 tokenizer tokenizer.add_special_tokens( {"additional_special_tokens": ["<Temp_S>", "<Temp_E>", "<Relation_S>", "<Relation_E>", \ "<ORG>", "<VEH>", "<WEA>", "<LOC>","<FAC>","<End>" ,"<PER>","<GPE>"]}) b = tokenizer.convert_tokens_to_ids(["<Temp_S>", "<Temp_E>", "<Relation_S>", "<Relation_E>"]) print(b) tokenizer.add_special_tokens( {"additional_special_tokens": ["<Temp_S>", "<Temp_E>", "<Relation_S>", "<Relation_E>", \ "<ORG>", "<VEH>", "<WEA>", "<LOC>","<FAC>","<End>" ,"<PER>","<GPE>"]}) b = tokenizer.convert_tokens_to_ids(["<Temp_S>", "<Temp_E>", "<Relation_S>", "<Relation_E>"]) print(b) print(tokenizer.convert_tokens_to_ids(["<Temp_S>"])[0]) if tokenizer.convert_tokens_to_ids(["<Temp_S>"])[0] == 32100: print("kakd") def add_space(text): """ add space between special token :param text: :return: """ new_text_list = list() for item in zip(split_bracket.findall(text), split_bracket.split(text)[1:]): new_text_list += item return ' '.join(new_text_list) ``` # Read Data ``` from datasets import load_dataset, load_metric data_files = {} data_files["train"] = './data/new_text2tree/one_ie_ace2005_subtype/train.json' extension = 'json' datasets = load_dataset(extension, data_files=data_files) column_names = datasets["train"].column_names print(column_names) train_dataset = datasets["train"] text_column = "text" summary_column = "event" inputs = train_dataset[text_column] targets = train_dataset[summary_column] padding = "max_length" model_inputs = tokenizer(inputs, max_length= 256, padding=padding, truncation=True) print(type(model_inputs)) ``` # New data 
format ``` import os import json from collections import Counter, defaultdict from data_convert.format.text2tree import Text2Tree from data_convert.task_format.event_extraction import Event, DyIEPP from data_convert.utils import read_file, check_output, data_counter_to_table, get_schema, output_schema from nltk.corpus import stopwords target_class=Text2Tree type_format='subtype' import os in_filename = "data/raw_data/ace05-EN/train.oneie.json" output_filename = "data/new_text2tree/ace2005_event/dev" if not os.path.exists(output_filename): os.makedirs(output_filename) event_output = open(output_filename + '.json', 'w') count = 0 number = 0 for line in read_file(in_filename): document = Event(json.loads(line.strip())) if len(document.entities) > 3: break for sentence in document.generate_relations(): number += 1 if(len(sentence['relations']) == 0): count += 1 print(count) print(number) import os in_filename = "data/raw_data/ace05-EN/train.oneie.json" output_filename = "data/new_text2tree/ace2005_event/dev" if not os.path.exists(output_filename): os.makedirs(output_filename) event_output = open(output_filename + '.json', 'w') count = 0 number = 0 for line in read_file(in_filename): document = Event(json.loads(line.strip())) sentences = document.generate_relations() for sentence in sentences: if(len(sentence['relations']) > 0): print(sentence['relations']) break break for sentence in document.generate_relations(): print(sentence['relations']) print(" ") import os in_filename = "data/datasets/conll04/conll04_train.json" count = 0 number = 0 entity_set = set() relation_type_set = set() for line in read_file(in_filename): for line_ in json.loads(line.strip()): for entity in line_['entities']: entity_set.add(entity['type']) for relation in line_['relations']: relation_type_set.add(relation['type']) break print(entity_set) print(relation_type_set) for relations_in_sentence, sentence_start, sentence, entity in zip(document.relations, document.sentence_start, document.sentences, document.ner): print(sentence) print(relations_in_sentence) print(entity) print(" ") def 
generate_relations(document): relations = list() for relation in document.relations: arguments = list() relation_type = relation['relation_type'] for argument in relation['arguments']: argument_entity = document.entities[argument['entity_id']] arguments += [list(range(argument_entity['start'], argument_entity['end']))] for old_relation in relations: if relation_type == old_relation['type']: old_relation['arguments'].append(arguments) continue relations += [{'type': relation_type, 'arguments': [arguments]}] return relations relations = generate_relations(document) print(relations) def generate_relation(document): for relations_in_sentence, sentence_start in zip(document.relations, document.sentence_start): relations = list() type_set = set() for relation in relations_in_sentence: # 'arguments': [['Arg-1', [9]], ['Arg-2', [14]]] arguments = [list(range(relation[0]-sentence_start, relation[1]+1-sentence_start)), list(range(relation[2]-sentence_start,relation[3]+1-sentence_start))] relation_type = relation[4].split('.')[0] if relation_type in type_set: for old_relation in relations: if relation_type == old_relation['type']: old_relation['arguments'].append(arguments) else: type_set.add(relation_type) relations += [{'type': relation_type, 'arguments': [arguments]}] print(relations) generate_relation(document) Name_Entity_Type = set() for ner, sentence, events_in_sentence, sentence_start in zip(document.ner , document.sentences, document.events, document.sentence_start): events = list() for event in events_in_sentence: trigger, event_type = event[0] trigger_ner = ner trigger -= sentence_start suptype, subtype = event_type.split('.') if type_format == 'subtype': event_type = subtype elif type_format == 'suptype': event_type = suptype else: event_type = suptype + type_format + subtype arguments = list() for start, end, role in event[1:]: start -= sentence_start end -= sentence_start arguments += [[role, list(range(start, end + 1))]] for argument in arguments: for ner_pos 
in ner: Name_Entity_Type.add(ner_pos[2]) if((ner_pos[0]-sentence_start) == argument[1][0]): argument.insert(1, ner_pos[2]) if(len(argument) != 3): print("Wrong") event = {'type': event_type, 'tokens': [trigger], 'arguments': arguments} events += [event] # print(events) if(len(event['arguments']) >1): A_predict = {'tokens': sentence, 'events': events} # print(A_predict) print(Name_Entity_Type) list_A = ['ORG', 'VEH', 'WEA', 'LOC', 'FAC', 'PER', 'GPE'] for a in list_A: entity_token = tokenizer.encode(a) print(entity_token) print(tokenizer.encode('<extra_id_0>')) entity_dic = {'ORG', 'VEH', 'WEA', 'LOC', 'FAC', 'PER', 'GPE'} tokenizer.add_special_tokens({"additional_special_tokens": list_A}) model.resize_token_embeddings(len(tokenizer)) print(tokenizer.convert_tokens_to_ids(list_A)) if 'ORG' in list_A: print("Wrong") print(tokenizer.decode(32103)) print(tokenizer.pad_token_id) a = 1.213232 b = round(a, 4) print(b) ``` # OneIE data processing ``` events = list() # print(document.events) # print(document.entities) for event, entity in zip(document.events, document.entities): # print(event) # print(entity) arguments = list() for argument in event['arguments']: argument_entity = document.entities[argument['entity_id']] # print("argument_entity", argument_entity) arguments += [[argument['role'], argument_entity['entity_type'] ,list(range(argument_entity['start'], argument_entity['end']))]] suptype, subtype = event['event_type'].split(':') if type_format == 'subtype': event_type = subtype elif type_format == 'suptype': event_type = suptype else: event_type = suptype + type_format + subtype events += [{ 'type': event_type, 'tokens': list(range(event['trigger']['start'], event['trigger']['end'])), 'arguments': arguments }] print(events) for ner, sentence, events_in_sentence, sentence_start in zip(document.ner , document.sentences, document.events, document.sentence_start): if(len(ner) > 0): print((ner)) ``` # Annotated from event text to tree ``` from data_convert.utils 
import read_file, check_output, data_counter_to_table, get_schema, output_schema from nltk.corpus import stopwords type_start = '<extra_id_0>' type_end = '<extra_id_1>' role_start = '<extra_id_2>' role_end = '<extra_id_3>' def get_str_from_tokens(tokens, sentence, separator=' '): start, end_exclude = tokens[0], tokens[-1] + 1 return separator.join(sentence[start:end_exclude]) event_schema_set = set() for event in A_predict['events']: print(event) event_schema_set = event_schema_set | get_schema(event) sep = ' ' predicate = sep.join([A_predict['tokens'][index] for index in event['tokens']]) # counter['pred'].update([predicate]) # counter['type'].update([event['type']]) # data_counter[in_filename].update(['event']) # for argument in event['arguments']: # data_counter[in_filename].update(['argument']) # counter['role'].update([argument[0]]) print(predicate) print(event_schema_set) tokens=A_predict['tokens'] predicate_arguments=A_predict['events'] token_separator = ' ' event_str_rep_list = list() for predicate_argument in predicate_arguments: event_type = predicate_argument['type'] # predicate_argument['tokens'] is the trigger index # tokens is the sentence tokens, we get the trigger text span here predicate_text = get_str_from_tokens(predicate_argument['tokens'], tokens, separator=token_separator) # prefix_tokens[predicate_argument['tokens'][0]] = ['[ '] # suffix_tokens[predicate_argument['tokens'][-1]] = [' ]'] role_str_list = list() # role_name is the argument role, role_tokens are corresponding text span index for role_name, role_entity, role_tokens in predicate_argument['arguments']: # if role_name == 'Place' or role_name.startswith('Time'): if role_name == event_type: continue # get the role text span from role tokens index role_text = get_str_from_tokens(role_tokens, tokens, separator=token_separator) # print(role_text) # print(role_entity) if False: role_str = ' '.join([role_start, role_name, role_entity ,role_text, role_end]) else: role_str = ' 
'.join([type_start, role_name,role_entity ,role_text, type_end]) # All arguments in the sentence # print(role_str) role_str_list += [role_str] role_str_list_str = ' '.join(role_str_list) event_str_rep = f"{type_start} {event_type} {predicate_text} {role_str_list_str} {type_end}" event_str_rep_list += [event_str_rep] # print(tokens) source_text = token_separator.join(tokens) target_text = ' '.join(event_str_rep_list) if not False: target_text = f'{type_start} ' + \ ' '.join(event_str_rep_list) + f' {type_end}' print(source_text) print(" ") print(target_text) a = ["%s%s" % ("type_start ", "type_end")] * 2 print(a) ``` # Get Label Name ``` def get_label_name_tree(label_name_list, tokenizer, end_symbol='<end>'): # Change recurring into non-recurring labels, sub_token_tree = dict() # this is label_name token ids label_tree = dict() for typename in label_name_list: # print(typename) after_tokenized = tokenizer.encode(typename) label_tree[typename] = after_tokenized print(label_tree) for _, sub_label_seq in label_tree.items(): # sub_label_seq is the tokenize_ids of typename parent = sub_token_tree for value in sub_label_seq: if value not in parent: parent[value] = dict() parent = parent[value] parent[end_symbol] = None return sub_token_tree from extraction.event_schema import EventSchema event_schema = './data/text2tree/dyiepp_ace2005_subtype/event.schema' decoding_type_schema = EventSchema.read_from_file(event_schema) # print(decoding_type_schema.type_list) # print(decoding_type_schema.role_list) # print(decoding_type_schema.type_role_dict) type_tree = get_label_name_tree(decoding_type_schema.type_list, tokenizer, end_symbol='<tree-end>') # print(list(type_tree.keys())) # print(" ") subtree = type_tree[9900][18][667] print(len(subtree)) print(subtree) if '<tree-end>' in subtree: print("end_tree") role_tree = get_label_name_tree(decoding_type_schema.role_list, tokenizer, end_symbol='<tree-end>') print(role_tree.keys()) print(role_tree) print(role_tree) if('<tree-end>' in 
role_tree): print("Wrong") first_tree = role_tree[30837] print(first_tree[1]) if '<tree-end>' in first_tree[1]: print("Wrong") ``` # Transform output to Tree ``` def convert_bracket(_text): # replace the special token labels to Formal statement _text = add_space(_text) for start in [role_start, type_start]: _text = _text.replace(start, left_bracket) for end in [role_end, type_end]: _text = _text.replace(end, right_bracket) return _text def add_space(text): """ add space between special token :param text: :return: """ new_text_list = list() for item in zip(split_bracket.findall(text), split_bracket.split(text)[1:]): new_text_list += item return ' '.join(new_text_list) def get_tree_str(tree): """ get str from event tree :param tree: :return: """ str_list = list() for element in tree: if isinstance(element, str): str_list += [element] return ' '.join(str_list) import re left_bracket = '【' right_bracket = '】' text = "<extra_id_0> <extra_id_0> ORG-AFF <extra_id_0> Minister <extra_id_0> British <extra_id_1> <extra_id_1> ORG-AFF <extra_id_1> <extra_id_0> PART-WHOLE <extra_id_0> Grand Hotel Europe <extra_id_0> Saint Petersburg <extra_id_1> <extra_id_1> PART-WHOLE <extra_id_1> <extra_id_1>" from nltk.tree import ParentedTree brackets = left_bracket + right_bracket print(text) split_bracket = re.compile(r"<extra_id_\d>") new_text_list = list() for item in zip(split_bracket.findall(text), split_bracket.split(text)[1:]): new_text_list += item print(' '.join(new_text_list)) new_text = convert_bracket(text) print(new_text) gold_tree = ParentedTree.fromstring(new_text, brackets=brackets) print(gold_tree) str_list = list() for relation_tree in gold_tree: print(relation_tree.label()) for role_tree in relation_tree: if isinstance(role_tree, str): print("end", role_tree) else: role1_text = role_tree.label() + ' ' + get_tree_str(role_tree) role2_text = role_tree[-1].label() + ' '+get_tree_str(role_tree[-1]) print(role1_text + " aa "+ role2_text) # print(role_tree.label()+ ' ' +a) # 
print(role_tree[-1].label() + ' '+get_tree_str(role_tree[-1])) # print(role_tree[0]) kind = ' '.join(str_list) print(kind) new_kind = ' '.join(kind.split(' ')[1:]) print(new_kind) b = [] a = ["LOC", "ORG", "VEH", "FAC", "PER", "WEA", "GPE"] a.append(" ") b =a print(b) c = b+[1] print(c) print(tokenizer.encode(" ")) a = tokenizer.decode(3) b = [1,2,3] b.append(a) print(b) c = [0,1] print(c+a) a = [1548, 3, 6, 38, 17952, 3859, 3292, 3457, 3, 6, 79, 33, 6326, 53, 3, 9, 775, 682, 13, 4719, 3, 6, 652, 5413, 13, 8, 7749, 9534, 45, 8, 10101, 3, 5] b = list() for x in a: b.append(tokenizer.decode(x)) print(' '.join(b)) import torch labels = torch.tensor(a) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=False) print(decoded_labels) print(' '.join(decoded_labels)) print(type([3])) special_token_set = {-1,-2} tgt_generated = [1,2,3,-1,2,3,2,-1,4,3,-2,4,-2] special_index_token = list(filter(lambda x: x[1] in special_token_set, list(enumerate(tgt_generated)))) print(special_index_token) Target = [0, 32099, 32099, 4674, 517, 18, 188, 9089, 32099, 7471, 32099, 13143, 32099, 1] Target = [0, 32099, 32099, 3, 19846, 18, 518, 6299, 3765, 32099, 3, 19600, 989, 23, 12489, 32099, 783, 563, 32099, 1] text = tokenizer.decode(Target) print(text) Source = [2, 0, 7, 7, 7, 5] text2 = tokenizer.decode(Source) print(text2) add_special_tokens print(tokenizer.sep_token) ``` # Test List in Python ``` record_list = list() record = {'relation': int} record['relation'] = 1 record_list += [record] print(record_list) record['relation'] = 2 record_list += [record] print(record_list) tokenizer.decode(2) from transformers import AutoModelForSeq2SeqLM, AutoTokenizer def extract_triplets(text): triplets = [] relation, subject, relation, object_ = '', '', '', '' text = text.strip() current = 'x' for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split(): if token == "<triplet>": current = 't' if relation != '': triplets.append({'head': subject.strip(), 'type': 
relation.strip(),'tail': object_.strip()}) relation = '' subject = '' elif token == "<subj>": current = 's' if relation != '': triplets.append({'head': subject.strip(), 'type': relation.strip(),'tail': object_.strip()}) object_ = '' elif token == "<obj>": current = 'o' relation = '' else: if current == 't': subject += ' ' + token elif current == 's': object_ += ' ' + token elif current == 'o': relation += ' ' + token if subject != '' and relation != '' and object_ != '': triplets.append({'head': subject.strip(), 'type': relation.strip(),'tail': object_.strip()}) return triplets # Load model and tokenizer tokenizer = AutoTokenizer.from_pretrained("Babelscape/rebel-large") model = AutoModelForSeq2SeqLM.from_pretrained("Babelscape/rebel-large") gen_kwargs = { "max_length": 256, "length_penalty": 0, "num_beams": 3, "num_return_sequences": 3, } # Text to extract triplets from text = 'Punta Cana is a resort town in the municipality of Higüey, in La Altagracia Province, the easternmost province of the Dominican Republic.' 
# Tokenizer text
model_inputs = tokenizer(text, max_length=256, padding=True, truncation=True, return_tensors = 'pt')

# Generate
generated_tokens = model.generate(
    model_inputs["input_ids"].to(model.device),
    attention_mask=model_inputs["attention_mask"].to(model.device),
    **gen_kwargs,
)

# Extract text
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)

# Extract triplets
for idx, sentence in enumerate(decoded_preds):
    print(f'Prediction triplets sentence {idx}')
    print(extract_triplets(sentence))

tokenizer = AutoTokenizer.from_pretrained("Babelscape/rebel-large")
model_name_or_path = "Babelscape/rebel-large"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
```

# Get the label data

```
output_filename = './data/new_text2tree/ace_label/'
if not os.path.exists(output_filename):
    os.makedirs(output_filename)

import json
with open('./data/new_text2tree/one_ie_ace2005_subtype/test.json', 'r', encoding="utf-8") as f:
    # read every line; each line is one JSON string
    for jsonstr in f.readlines():
        # convert the JSON string into a dict
        jsonstr = json.loads(jsonstr)
        with open(output_filename + 'test_end_before.json', 'a+', encoding='utf-8') as f2:
            line = json.dumps(jsonstr["relation"], ensure_ascii=False)
            f2.write(line+'\n')
```
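To sanity-check the linearized `<triplet> head <subj> tail <obj> relation` format without calling the model, the parser can be exercised on a hand-written string. This is a minimal sketch: the function is a trimmed copy of `extract_triplets` from the cell above, and the example string is made up for illustration (it is not real model output).

```python
# Trimmed copy of the extract_triplets parser above, run on a hand-written string.
def extract_triplets(text):
    triplets = []
    subject, relation, object_ = '', '', ''
    current = 'x'
    for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split():
        if token == "<triplet>":
            current = 't'
            if relation != '':
                triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
                relation = ''
            subject = ''
        elif token == "<subj>":
            current = 's'
            if relation != '':
                triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
            object_ = ''
        elif token == "<obj>":
            current = 'o'
            relation = ''
        else:
            if current == 't':
                subject += ' ' + token
            elif current == 's':
                object_ += ' ' + token
            elif current == 'o':
                relation += ' ' + token
    if subject != '' and relation != '' and object_ != '':
        triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
    return triplets

# illustrative, hand-written linearization in REBEL's format
linearized = "<s><triplet> Punta Cana <subj> Dominican Republic <obj> country</s>"
print(extract_triplets(linearized))
# → [{'head': 'Punta Cana', 'type': 'country', 'tail': 'Dominican Republic'}]
```

Note that in this format the `<subj>` segment carries the *tail* span and the `<obj>` segment carries the relation label, which is why the local variable names look swapped.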
```
import os
import numpy as np
import pandas as pd
import glob

from prediction_utils.util import yaml_read, df_dict_concat

table_path = '../figures/hyperparameters/'
os.makedirs(table_path, exist_ok=True)

param_grid_base = {
    "lr": [1e-3, 1e-4, 1e-5],
    "batch_size": [128, 256, 512],
    "drop_prob": [0.0, 0.25, 0.5, 0.75],
    "num_hidden": [1, 2, 3],
    "hidden_dim": [128, 256],
}

the_dict = {'hyperparameter': [], 'Grid': []}
for key, value in param_grid_base.items():
    the_dict['hyperparameter'].append(key)
    the_dict['Grid'].append(value)
the_df = pd.DataFrame(the_dict)

rename_grid = {
    'hyperparameter': ['lr', 'batch_size', 'drop_prob', 'num_hidden', 'hidden_dim'],
    'Hyperparameter': ['Learning Rate', 'Batch Size', 'Dropout Probability',
                       'Number of Hidden Layers', 'Hidden Dimension']
}
rename_df = pd.DataFrame(rename_grid)

the_df = the_df.merge(rename_df)[['Hyperparameter', 'Grid']].sort_values('Hyperparameter')
the_df
the_df.to_latex(os.path.join(table_path, 'param_grid.txt'), index=False)

selected_models_path = '/share/pi/nigam/projects/spfohl/cohorts/admissions/optum/experiments/baseline_tuning_fold_1/config/selected_models'
selected_models_path_dict = {
    'starr': '/share/pi/nigam/projects/spfohl/cohorts/admissions/starr_20200523/experiments/baseline_tuning_fold_1_10/config/selected_models',
    'mimic': '/share/pi/nigam/projects/spfohl/cohorts/admissions/mimic_omop/experiments/baseline_tuning_fold_1_10/config/selected_models',
    'optum': '/share/pi/nigam/projects/spfohl/cohorts/admissions/optum/experiments/baseline_tuning_fold_1/config/selected_models',
}

selected_param_dict = {
    db: {
        task: yaml_read(glob.glob(os.path.join(db_path, task, '*.yaml'), recursive=True)[0])
        for task in os.listdir(db_path)
    }
    for db, db_path in selected_models_path_dict.items()
}

col_order = {
    'starr': ['hospital_mortality', 'LOS_7', 'readmission_30'],
    'mimic': ['los_icu_3days', 'los_icu_7days', 'mortality_hospital', 'mortality_icu'],
    'optum': ['readmission_30', 'LOS_7'],
}

for db in
selected_param_dict.keys():
    db_params = selected_param_dict[db]
    db_df = (
        pd.concat({key: pd.DataFrame(value, index=[0]) for key, value in db_params.items()})
        .reset_index(level=1, drop=True)
        .rename_axis('task')
        .transpose()
        .rename_axis('hyperparameter')
        .reset_index()
        .merge(rename_df, how='right')
    )
    db_df = db_df[['Hyperparameter'] + list(set(db_df.columns) - set(['hyperparameter', 'Hyperparameter']))].sort_values('Hyperparameter')
    db_df = db_df[['Hyperparameter'] + col_order[db]].sort_values('Hyperparameter')
    selected_param_path = os.path.join(table_path, db)
    os.makedirs(selected_param_path, exist_ok=True)
    db_df.to_latex(os.path.join(selected_param_path, 'selected_param_table.txt'), index=False)
```
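For reference, the grid-to-table step above can be reproduced without pandas. The sketch below (stdlib only; the renaming map simply mirrors `rename_grid` from the cell above) builds the same alphabetically sorted rows that end up in the LaTeX table:

```python
# Build (pretty name, grid) rows from the hyperparameter grid, sorted by name,
# mirroring the pandas merge/sort in the cell above.
param_grid_base = {
    "lr": [1e-3, 1e-4, 1e-5],
    "batch_size": [128, 256, 512],
    "drop_prob": [0.0, 0.25, 0.5, 0.75],
    "num_hidden": [1, 2, 3],
    "hidden_dim": [128, 256],
}
pretty_names = {
    "lr": "Learning Rate",
    "batch_size": "Batch Size",
    "drop_prob": "Dropout Probability",
    "num_hidden": "Number of Hidden Layers",
    "hidden_dim": "Hidden Dimension",
}

rows = sorted((pretty_names[k], v) for k, v in param_grid_base.items())
for name, grid in rows:
    print(f"{name}: {grid}")
```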
# Time Series Forecasting

A time series is data collected at regular intervals over time. Time series forecasting is the task of predicting future data points based on historical data. It has a wide range of uses, including weather forecasting, retail and sales forecasting, stock market prediction, and behavior prediction (such as predicting the flow of traffic over a day). There is a lot of time series data out there, and recognizing patterns in that data is an active area of machine learning research.

<img src='notebook_ims/time_series_examples.png' width=80% />

In this notebook, we'll focus on one method for finding time-based patterns: using SageMaker's supervised learning model, [DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html).

### DeepAR

DeepAR uses a recurrent neural network (RNN), which accepts a sequence of data points as historical input and produces a predicted sequence of data points. So how does this model learn?

During training, you provide the DeepAR estimator with a training dataset made up of several time series. The estimator looks at all of the training time series and tries to identify similarities across them. It trains by randomly sampling **training examples** from the training time series.
* Each training example consists of a pair of adjacent **context** and **prediction** windows of fixed, pre-defined lengths.
* The `context_length` parameter controls how far in the *past* the model can see.
* The `prediction_length` parameter controls how far in the *future* the model can make predictions.
* You can read more in [this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).

<img src='notebook_ims/context_prediction_windows.png' width=50% />

> Since DeepAR trains on several time series, it is well suited for data that exhibits **recurring patterns**.

In any forecasting task, the context window you choose should provide enough **relevant** information to the model to produce accurate predictions. In general, the data closest to the prediction time frame contains the information that is most influential in determining the prediction. In many forecasting applications, such as forecasting monthly sales, the context and prediction windows are the same size, but sometimes it is useful to have a larger context window to discover longer-term patterns in the data.

### Energy Consumption Data

The data we'll be working with in this notebook is global household electric power consumption. The dataset comes from [Kaggle](https://www.kaggle.com/uciml/electric-power-consumption-data-set) and represents power consumption collected from 2006 to 2010. With such a large dataset, we can forecast energy consumption over long periods of time: days, weeks, or months. Forecasting energy consumption is useful in many ways, for example for determining seasonal prices for power, and for efficiently delivering power to residents based on their predicted usage.

**Interesting read**: Google's DeepMind recently took on a related, inverse project, using machine learning to predict the power output of wind turbines and efficiently deliver that power to the grid. You can read about their research in [this post](https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/).

### Machine Learning Workflow

This notebook breaks time series forecasting into the following steps:
* Loading and exploring the data
* Creating training and test sets of time series
* Formatting the data as JSON files and uploading to S3
* Instantiating and training a DeepAR estimator
* Deploying the model and creating a predictor
* Evaluating the predictor

---

Let's start by loading in the usual resources.

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline
```

# Load and Explore the Data

We'll be loading global energy consumption data collected over several years. The cell below downloads and unzips this data, giving you one text file of data, `household_power_consumption.txt`.

```
! wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/March/5c88a3f1_household-electric-power-consumption/household-electric-power-consumption.zip
!
unzip household-electric-power-consumption
```

### Read in the `.txt` File

The next cell displays the first few lines of the text file, so we can see how the data is formatted.

```
# display first ten lines of text data
n_lines = 10

with open('household_power_consumption.txt') as file:
    head = [next(file) for line in range(n_lines)]

display(head)
```

## Pre-Process the Data

The `household_power_consumption.txt` file has the following attributes:
* Each data point has a date and time of recording (hour:minute:second)
* The various data features are separated by semicolons (;)
* Some values are 'nan' or '?', and we'll treat both of these as `NaN` values

### Managing `NaN` values

This DataFrame does include some data points with missing values. So far, we've mostly just dropped such values, but there are other ways to handle `NaN` values. One technique is to fill the missing values with the **mean** of the column they belong to; that way, the filled value is likely to be realistic.

I've provided some helper functions in `txt_preprocessing.py` that will help load the original text file into a DataFrame and fill the `NaN` values in each column with that column's mean feature value. This technique works for long-term forecasting; if you were doing hourly analysis and prediction, it would be better to drop the `NaN` values or average over a small sliding window rather than over an entire column of data.

**In the cell below, I read the file into a DataFrame and fill `NaN` values with feature-level averages.**

```
import txt_preprocessing as pprocess

# create df from text file
initial_df = pprocess.create_df('household_power_consumption.txt', sep=';')

# fill NaN column values with *average* column value
df = pprocess.fill_nan_with_mean(initial_df)

# print some stats about the data
print('Data shape: ', df.shape)
df.head()
```

## Global Active Power

In this example, we want to predict global active power, which is the household-wide, minute-averaged active power (in kilowatts). Below, we select this column of data and display the resulting plot.

```
# Select Global active power data
power_df = df['Global_active_power'].copy()
print(power_df.shape)

# display the data
plt.figure(figsize=(12,6))
# all data points
power_df.plot(title='Global active power', color='blue')
plt.show()
```

Since the data is recorded every minute, the plot above contains *a lot* of values. So, I'm only showing a small slice of the data below.

```
# can plot a slice of hourly data
end_mins = 1440 # 1440 mins = 1 day

plt.figure(figsize=(12,6))
power_df[0:end_mins].plot(title='Global active power, over one day', color='blue')
plt.show()
```

### Hourly vs. Daily

With data recorded every minute, I can choose to analyze it in one of two ways:
1. Create many, short time series, say a week or so long, with energy consumption recorded every hour, and try to predict the consumption over the next few hours or days.
2.
Create fewer, long time series, with data recorded daily, and use those to predict energy consumption over the next few weeks or months.

Both tasks are interesting. It depends on whether you want to predict patterns over a day/week or over a longer time period, like a month. Given the amount of data I have, I think it will be interesting to look at longer, recurring trends that happen over several months or a year. So, I will resample the 'Global active power' to record **daily** data points, computed as averages over 24-hour periods.

> We can use the pandas [time series tools](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html) to resample data at a specified frequency, e.g. resampling data points hourly ('H') or daily ('D').

```
# resample over day (D)
freq = 'D'
# calculate the mean active power for a day
mean_power_df = power_df.resample(freq).mean()

# display the mean values
plt.figure(figsize=(15,8))
mean_power_df.plot(title='Global active power, mean per day', color='blue')
plt.tight_layout()
plt.show()
```

In this plot, we can see an interesting trend that recurs every year. There are spikes of energy consumption around the end/beginning of each year, during winter, when heating and lighting usage is higher. There is also a smaller spike around August, when global temperatures are typically higher.

The data is still not very smooth, but it shows clear trends, which makes it a good candidate for a machine learning model to recognize these patterns.

---
## Create Time Series

My goal is to see if I can accurately predict the average global active power over several months of 2010, based on the full years of data from 2007-2009.

Next, let's make one time series for each complete year of data. This is just a design decision: I am choosing to use full years of data, starting in January of 2007, because there are not enough data points in 2006, and this split makes it easier to handle leap years. I could have also started the time series from the first collected data point, just by changing `t_start` and `t_end` in the function below.

The function `make_time_series` will create a pandas `Series` for each of the passed-in years, `['2007', '2008', '2009']`.
* All of the time series will start at the same time point, `t_start` (or t0).
* When preparing data, it's important to use a consistent start point for each time series; DeepAR uses this time point as a frame of reference, which enables it to learn recurring patterns, e.g. that weekday behavior is different from weekend behavior, or that summer is different from winter.
* You can change the start and end indices and define any time series you create.
* We should account for leap years, like 2008, when creating the time series.
* In general, we create a `Series` by getting the relevant global consumption data and a date index from the DataFrame:

```
# get global consumption data
data = mean_power_df[start_idx:end_idx]

# create time series for the year
index = pd.DatetimeIndex(start=t_start, end=t_end, freq='D')
time_series.append(pd.Series(data=data, index=index))
```

```
def make_time_series(mean_power_df, years, freq='D', start_idx=16):
    '''Creates as many time series as there are complete years. This code
       accounts for the leap year, 2008.
       :param mean_power_df: A dataframe of global power consumption, averaged by day.
           This dataframe should also be indexed by a datetime.
       :param years: A list of years to make time series out of, ex. ['2007', '2008'].
:param freq: The frequency of data recording (D = daily)
       :param start_idx: The starting dataframe index of the first point in the first time series.
           The default, 16, points to '2007-01-01'.
       :return: A list of pd.Series(), time series data.
       '''

    # store time series
    time_series = []

    # store leap year in this dataset
    leap = '2008'

    # create time series for each year in years
    for i in range(len(years)):
        year = years[i]
        if(year == leap):
            end_idx = start_idx+366
        else:
            end_idx = start_idx+365

        # create start and end datetimes
        t_start = year + '-01-01' # Jan 1st of each year = t_start
        t_end = year + '-12-31' # Dec 31st = t_end

        # get global consumption data
        data = mean_power_df[start_idx:end_idx]

        # create time series for the year
        index = pd.DatetimeIndex(start=t_start, end=t_end, freq=freq)
        time_series.append(pd.Series(data=data, index=index))

        start_idx = end_idx

    # return list of time series
    return time_series
```

## Test the Results

Below, we create one time series for each complete year of data and display the result.

```
# test out the code above

# yearly time series for our three complete years
full_years = ['2007', '2008', '2009']
freq='D' # daily recordings

# make time series
time_series = make_time_series(mean_power_df, full_years, freq=freq)

# display first time series
time_series_idx = 0

plt.figure(figsize=(12,6))
time_series[time_series_idx].plot()
plt.show()
```

---
# Splitting in Time

We'll evaluate our model on a test set of data. For machine learning tasks like classification, we typically create train/test data by randomly splitting examples into different sets. For forecasting, it's important to do this train/test split in **time**, rather than by individual data points.

> In general, when creating training data, we take each of our *complete* time series and leave off the last `prediction_length` data points to create *training* time series.

### Exercise: Create training time series

Complete the `create_training_series` function, which should take in our list of complete time series data and return a list of truncated, training time series.
* In this example, we want to predict a month's worth of data, so we set `prediction_length` to 30 (days).
* To create the training dataset, we'll leave out the last 30 points of *each* of the time series we just generated, using only the first part as training data.
* The **test set contains the complete range** of each time series.

```
# create truncated, training time series
def create_training_series(complete_time_series, prediction_length):
    '''Given a complete list of time series data, create training time series.
       :param complete_time_series: A list of all complete time series.
:param prediction_length: The number of points we want to predict.
       :return: A list of training time series.
       '''
    # your code here
    pass

# test your code!

# set prediction length
prediction_length = 30 # 30 days ~ a month

time_series_training = create_training_series(time_series, prediction_length)
```

### Training and Test Series

We can visualize these series by plotting the train/test series on the same axis. We should see that the test series contains all of a year's data, and the training series contains all but the last `prediction_length` points.

```
# display train/test time series
time_series_idx = 0

plt.figure(figsize=(15,8))
# test data is the whole time series
time_series[time_series_idx].plot(label='test', lw=3)
# train data is all but the last prediction pts
time_series_training[time_series_idx].plot(label='train', ls=':', lw=3)
plt.legend()
plt.show()
```

## Convert to JSON

According to the [DeepAR documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html), DeepAR expects input training data in a JSON format with the following fields:
* **start**: A string that defines the starting date of the time series, in the format 'YYYY-MM-DD HH:MM:SS'.
* **target**: An array of numerical values that represent the time series.
* **cat** (optional): A numerical array of categorical features that can encode the group a record belongs to. This is useful for finding models per class of item, e.g. in retail sales you might represent {'shoes', 'jackets', 'pants'} as categories {0, 1, 2}.

The input data should be formatted with one time series per line in a JSON file. Each line looks like a dictionary, for example:
```
{"start":'2007-01-01 00:00:00', "target": [2.54, 6.3, ...], "cat": [1]}
{"start": "2012-01-30 00:00:00", "target": [1.0, -5.0, ...], "cat": [0]}
...
```
In the above example, each time series has one associated categorical feature and one time series feature.

### Exercise: Formatting Energy Consumption Data

For our data:
* The start date, "start," will be the index of the first row in a time series, Jan. 1st of that year.
* The "target" will be all of the energy consumption values that our time series holds.
* We will not use the optional "cat" field.

Complete the following utility function, which should convert a `pandas.Series` object into the appropriate JSON string that DeepAR can consume.

```
def series_to_json_obj(ts):
    '''Returns a dictionary of values in DeepAR, JSON format.
       :param ts: A single time series.
       :return: A dictionary of values with "start" and "target" keys.
'''
    # your code here
    pass

# test out the code
ts = time_series[0]

json_obj = series_to_json_obj(ts)

print(json_obj)
```

### Saving Data, Locally

The helper function below will write one series to a single JSON line, using the new line character '\n'. The data is also encoded and written to a filename that we specify.

```
# import json for formatting data
import json
import os # and os for saving

def write_json_dataset(time_series, filename):
    with open(filename, 'wb') as f:
        # for each of our times series, there is one JSON line
        for ts in time_series:
            json_line = json.dumps(series_to_json_obj(ts)) + '\n'
            json_line = json_line.encode('utf-8')
            f.write(json_line)
    print(filename + ' saved.')

# save this data to a local directory
data_dir = 'json_energy_data'

# make data dir, if it does not exist
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

# directories to save train/test data
train_key = os.path.join(data_dir, 'train.json')
test_key = os.path.join(data_dir, 'test.json')

# write train/test JSON files
write_json_dataset(time_series_training, train_key)
write_json_dataset(time_series, test_key)
```

---
## Uploading Data to S3

Next, to make this data accessible to an estimator, I'll upload it to S3.

### Sagemaker resources

Let's start by specifying:
* The sagemaker role and session for training a model.
* A default S3 bucket where we can save our training, test, and model data.

```
import boto3
import sagemaker
from sagemaker import get_execution_role

# session, role, bucket
sagemaker_session = sagemaker.Session()
role = get_execution_role()

bucket = sagemaker_session.default_bucket()
```

### Exercise: Upload *both* training and test JSON files to S3

Specify *unique* train and test prefixes that define the location of that data in S3.
* Upload training data to a location in S3, and save that location to `train_path`
* Upload test data to a location in S3, and save that location to `test_path`

```
# suggested that you set prefixes for directories in S3

# upload data to S3, and save unique locations
train_path = None
test_path = None

# check locations
print('Training data is stored in: '+ train_path)
print('Test data is stored in: '+ test_path)
```

---
# Training a DeepAR Estimator

Some estimators have specific SageMaker constructors, but not all. Instead, you can create a base `Estimator` and pass in the specific image (or container) that holds a specific model.

Next, we configure the container image to be used for the region that we are running in.

```
from sagemaker.amazon.amazon_estimator import get_image_uri

image_name =
get_image_uri(boto3.Session().region_name, # get the region
              'forecasting-deepar') # specify image
```

### Exercise: Instantiate an Estimator

You can now define the estimator that will launch the training job. A generic Estimator is defined by the usual constructor arguments and an `image_name`.
> You can take a look at the [estimator source code](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/estimator.py#L595) to view specifics.

```
from sagemaker.estimator import Estimator

# instantiate a DeepAR estimator
estimator = None
```

## Setting Hyperparameters

Next, we need to define some DeepAR hyperparameters that define the model size and training behavior. We need to set the number of epochs, the time frequency, the prediction length, and the context length.

* **epochs**: The maximum number of passes over the data when training.
* **time_freq**: The granularity of the time series in the dataset ('D' for daily).
* **prediction_length**: A string; the number of time steps (based on the unit of frequency) that the model is trained to predict.
* **context_length**: The number of time points that the model gets to see *before* making a prediction.

### Context Length

Typically, it is recommended that you start with `context_length`=`prediction_length`. This is because a DeepAR model also receives "lagged" inputs from the target time series, which allow the model to capture long-term dependencies. For example, a daily time series can have yearly seasonality, and DeepAR automatically includes a lag of one year. So, the context length can be shorter than a year, and the model will still be able to capture this seasonality.

The lag values that the model picks depend on the frequency of the time series. For example, lag values for daily frequency are the previous week, 2 weeks, 3 weeks, 4 weeks, and year. You can read more in the [DeepAR "how it works" documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).

### Optional Hyperparameters

You can also configure optional hyperparameters to further tune your model. These include the number of layers in the RNN model, the number of cells per layer, the likelihood function, and training options such as batch size and learning rate.

For an exhaustive list of all the different DeepAR hyperparameters, refer to the DeepAR [hyperparameter documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html).

```
freq='D'
context_length=30 # same as prediction_length

hyperparameters = {
    "epochs": "50",
    "time_freq": freq,
    "prediction_length": str(prediction_length),
    "context_length": str(context_length),
    "num_cells": "50",
    "num_layers": "2",
    "mini_batch_size": "128",
    "learning_rate": "0.001",
    "early_stopping_patience": "10"
}

# set the hyperparams
estimator.set_hyperparameters(**hyperparameters)
```

## Training Job

Now we are ready to launch the training job. SageMaker will start an EC2 instance, download the data from S3, start training the model, and save the trained model.

If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test dataset. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing them to the *actual* values of the time series. The computed error metrics will be included in the log output.

The next cell may take a few minutes to complete, depending on data size, model complexity, and training options.

```
%%time
# train and test channels
data_channels = {
    "train": train_path,
    "test": test_path
}

# fit the estimator
estimator.fit(inputs=data_channels)
```

## Deploy and Create a Predictor
Now that we have a trained model, we can deploy it to a predictor endpoint and use the model to make predictions.

Remember to **delete the endpoint** at the end of this notebook. A cell at the very end of this notebook will do that for you, but it's good to keep in mind.

```
%%time
# create a predictor
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.t2.medium',
    content_type="application/json" # specify that it will accept/produce JSON
)
```

---
# Generating Predictions

According to the [inference format](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-in-formats.html) for DeepAR, the `predictor` expects input data in a JSON format with the following keys:
* **instances**: A list of JSON-formatted time series that the model should forecast.
* **configuration** (optional): A dictionary of configuration information that defines the type of response the request wants back.

Within the configuration, the following keys can be set:
* **num_samples**: An integer specifying how many samples the model generates when making a probabilistic prediction.
* **output_types**: A list specifying the type of response. We want **quantiles**, which looks at the list of num_samples generated by the model and produces [quantile estimates](https://en.wikipedia.org/wiki/Quantile) for each time point based on these values.
* **quantiles**: A list specifying which quantile estimates are generated and returned in the response.

Below is an example of a JSON query to a DeepAR model endpoint.

```
{
 "instances": [
  { "start": "2009-11-01 00:00:00", "target": [4.0, 10.0, 50.0, 100.0, 113.0] },
  { "start": "1999-01-30", "target": [2.0, 1.0] }
 ],
 "configuration": {
  "num_samples": 50,
  "output_types": ["quantiles"],
  "quantiles": ["0.5", "0.9"]
 }
}
```

## JSON Prediction Request

The code below accepts a **list** of time series as input, along with some configuration parameters. It then formats the series into JSON instances and converts the input into an appropriately formatted JSON_input.

```
def json_predictor_input(input_ts, num_samples=50, quantiles=['0.1', '0.5', '0.9']):
    '''Accepts a list of input time series and produces a formatted input.
       :input_ts: A list of input time series.
       :num_samples: Number of samples to calculate metrics with.
       :quantiles: A list of quantiles to return in the predicted output.
       :return: The JSON-formatted input.
'''
    # request data is made of JSON objects (instances)
    # and an output configuration that details the type of data/quantiles we want

    instances = []
    for k in range(len(input_ts)):
        # get JSON objects for input time series
        instances.append(series_to_json_obj(input_ts[k]))

    # specify the output quantiles and samples
    configuration = {"num_samples": num_samples,
                     "output_types": ["quantiles"],
                     "quantiles": quantiles}

    request_data = {"instances": instances,
                    "configuration": configuration}

    json_request = json.dumps(request_data).encode('utf-8')

    return json_request
```

### Get a Prediction

We can then use this function to get a prediction for a formatted time series.

In the next cell, I take the input time series and known targets, and pass the formatted input into the predictor endpoint to get a prediction.

```
# get all input and target (test) time series
input_ts = time_series_training
target_ts = time_series

# get formatted input time series
json_input_ts = json_predictor_input(input_ts)

# get the prediction from the predictor
json_prediction = predictor.predict(json_input_ts)

print(json_prediction)
```

## Decoding Predictions

The predictor returns JSON-formatted predictions, so we need to extract the prediction and quantile data that we want to visualize. The function below reads in a JSON-formatted prediction and produces a list of predictions in each quantile.

```
# helper function to decode JSON prediction
def decode_prediction(prediction, encoding='utf-8'):
    '''Accepts a JSON prediction and returns a list of prediction data.
'''
    prediction_data = json.loads(prediction.decode(encoding))
    prediction_list = []
    for k in range(len(prediction_data['predictions'])):
        prediction_list.append(pd.DataFrame(data=prediction_data['predictions'][k]['quantiles']))
    return prediction_list

# get quantiles/predictions
prediction_list = decode_prediction(json_prediction)

# should get a list of 30 predictions
# with corresponding quantile values
print(prediction_list[0])
```

## Display the Results

The quantile data gives us all the information we need to see the resulting predictions.
* Quantiles 0.1 and 0.9 represent lower and upper bounds for the predicted values.
* Quantile 0.5 represents the median of all sample predictions.

```
# display the prediction median against the actual data
def display_quantiles(prediction_list, target_ts=None):
    # show predictions for all input ts
    for k in range(len(prediction_list)):
        plt.figure(figsize=(12,6))
        # get the target month of data
        if target_ts is not None:
            target = target_ts[k][-prediction_length:]
            plt.plot(range(len(target)), target, label='target')
        # get the quantile values at 10 and 90%
        p10 = prediction_list[k]['0.1']
        p90 = prediction_list[k]['0.9']
        # fill the 80% confidence interval
        plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval')
        # plot the median prediction line
        prediction_list[k]['0.5'].plot(label='prediction median')
        plt.legend()
        plt.show()

# display predictions
display_quantiles(prediction_list, target_ts)
```

## Predicting the Future

We haven't given our model any data about 2010, but let's see if it can predict the energy consumption given **no target** and only a known start date.

### Exercise: Format a request for a "future" prediction

Create a formatted input to send to the deployed `predictor`, passing in the usual "configuration" parameters. The "instances" will, in this case, be just *one* instance, defined by the following:
* **start**: The start time will be a timestamp that you specify. To predict the first 30 days of 2010, start on Jan. 1st, '2010-01-01'.
* **target**: The target will be an empty list, because this year has no complete, associated time series; we specifically withheld that information from our model, for testing purposes.
```
{"start": start_time, "target": []} # empty target
```

```
# Starting my prediction at the beginning of 2010
start_date = '2010-01-01'
timestamp = '00:00:00'

# formatting start_date
start_time = start_date +' '+ timestamp

# format the request_data
# with "instances" and "configuration"
request_data = None

# create JSON input
json_input = json.dumps(request_data).encode('utf-8')
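# As a sketch only (illustrative values, mirroring json_predictor_input above;
# not necessarily the intended solution), the request could look like:
example_request = {"instances": [{"start": "2010-01-01 00:00:00", "target": []}],
                   "configuration": {"num_samples": 50,
                                     "output_types": ["quantiles"],
                                     "quantiles": ["0.1", "0.5", "0.9"]}}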
print('Requesting prediction for '+start_time)
```

Then get and decode the prediction response, as usual.

```
# get prediction response
json_prediction = predictor.predict(json_input)

prediction_2010 = decode_prediction(json_prediction)
```

Finally, I'll compare the predictions to a known target sequence. This target will come from a time series for the 2010 data, which I'm creating below.

```
# create 2010 time series
ts_2010 = []

# get global consumption data
# index 1112 is where the 2010 data starts
data_2010 = mean_power_df.values[1112:]

index = pd.DatetimeIndex(start=start_date, periods=len(data_2010), freq='D')
ts_2010.append(pd.Series(data=data_2010, index=index))

# range of actual data to compare
start_idx=0 # days since Jan 1st 2010
end_idx=start_idx+prediction_length

# get target data
target_2010_ts = [ts_2010[0][start_idx:end_idx]]

# display predictions
display_quantiles(prediction_2010, target_2010_ts)
```

## Delete the Endpoint

Try your code out on different time series. You may want to tweak the DeepAR hyperparameters and see if you can improve the performance of this predictor.

When you're done evaluating the predictor (any predictor), remember to delete the endpoint.

```
## TODO: delete the endpoint
predictor.delete_endpoint()
```

## Conclusion

Now you've seen one complex but widely applicable method for time series forecasting, and you have the skills you need to apply DeepAR models to data that interests you!
# Neural networks with PyTorch

Deep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides a nice way to efficiently build large neural networks.

```
# Import necessary packages

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import torch

import helper

import matplotlib.pyplot as plt
```

Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below

<img src='assets/mnist.png'>

Our goal is to build a neural network that can take one of these images and predict the digit in the image.

First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.

```
### Run this cell

from torchvision import datasets, transforms

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),
                              ])

# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```

We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`.
Later, we'll use this to loop through the dataset for training, like

```python
for image, label in trainloader:
    ## do things with images and labels
```

You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.

```
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(type(images))
print(images.shape)
print(labels.shape)
```

This is what one of the images looks like.

```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```

First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module, which provides a much more convenient and powerful method for defining network architectures.

The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*; we flatten the 2D images into 1D vectors.

Previously you built a network with one output unit. Here we need 10 output units, one for each digit.
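The flattening arithmetic can be checked with a quick sketch; NumPy stands in for `torch` here so the snippet is self-contained (this is an illustration, not part of the original notebook):

```python
import numpy as np

# a fake batch shaped like the MNIST loader output: (64, 1, 28, 28)
batch = np.zeros((64, 1, 28, 28))

# flatten every image to a 784-long vector, keeping the batch dimension
flat = batch.reshape(batch.shape[0], -1)

print(flat.shape)  # (64, 784)
```

The `-1` tells `reshape` (like `torch.Tensor.view`) to infer the second dimension from the remaining elements, 1 * 28 * 28 = 784.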
We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.

> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation; we'll add one that gives us a probability distribution next.

```
## Your solution
def sigmoid(x):
    return 1/(1+torch.exp(-x))

input_unit = images.shape[2]*images.shape[3]
hidden_unit = 256
output_unit = 10

img = images.view(images.shape[0], -1)
# img = images.view(-1, input_unit)

W1 = torch.randn(input_unit, hidden_unit)
W2 = torch.randn(hidden_unit, output_unit)
B1 = torch.randn(hidden_unit)
B2 = torch.randn(output_unit)

out = torch.mm(sigmoid(torch.mm(img, W1)+B1), W2)+B2 # output of your network, should have shape (64,10)
print(out.shape)
```

Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:

<img src='assets/image_distribution.png' width=500px>

Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.

To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function).
Mathematically this looks like

$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$

What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one.

> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.

```
def softmax(x):
    ## TODO: Implement the softmax function here
    return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1)
    # matrix / vector => error (broadcasting)

# Here, out should be the output of the network in the previous exercise with shape (64,10)
print(torch.sum(torch.exp(out), dim=1).shape)
probabilities = softmax(out)

# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```

## Building networks with PyTorch

PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn

class Network(nn.Module):
    def __init__(self):
        super().__init__()  # register layers

        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)

        # Define sigmoid activation and softmax output
        self.sigmoid = nn.Sigmoid()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        # Pass the input tensor through each of our operations
        x = self.hidden(x)
        x = self.sigmoid(x)
        x = self.output(x)
        x = self.softmax(x)

        return x
```

Let's go through this bit by bit.

```python
class Network(nn.Module):
```

Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.

```python
self.hidden = nn.Linear(784, 256)
```

This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.

```python
self.output = nn.Linear(256, 10)
```

Similarly, this creates another linear transformation with 256 inputs and 10 outputs.

```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```

Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.

```python
def forward(self, x):
```

PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```

Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.

Now we can create a `Network` object.

```
# Create the network and look at its text representation
model = Network()
model
```

You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined, as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.

```
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)

    def forward(self, x):
        # Hidden layer with sigmoid activation
        # functional ops are convenient: they don't need to be registered in
        # __init__, and they don't create extra parameters
        x = F.sigmoid(self.hidden(x))
        # Output layer with softmax activation
        x = F.softmax(self.output(x), dim=1)

        return x
```

### Activation functions

So far we've only been looking at the sigmoid activation function, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
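Numerically, these activations are simple one-liners; a NumPy sketch for a few sample inputs (an illustration added here, not part of the original notebook):

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])

sigmoid = 1 / (1 + np.exp(-x))  # squashes into (0, 1)
tanh = np.tanh(x)               # squashes into (-1, 1), zero-centered
relu = np.maximum(0, x)         # zero for negatives, identity otherwise

print(relu)  # [0. 0. 2.]
```

Note that all three are non-linear, which is what lets stacked layers model non-linear functions.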
<img src="assets/activation.png" width=700px>

In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.

### Your Turn to Build a Network

<img src="assets/mlp_mnist.png" width=600px>

> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function. It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.

```
import torch.nn.functional as F

## Your solution here
class fc(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.softmax(x, dim=1)  # pass dim=1 so softmax runs across the class scores

        return x
```

### Initializing weights and biases

The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined; you can get them with `model.fc1.weight` for instance.

```
model = fc()
print(model.fc1.weight)
print(model.fc1.bias)
```

For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.

```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)

# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```

### Forward pass

Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = dataiter.next()

# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size

# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])

img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```

As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet; all the weights are random!

### Using `nn.Sequential`

PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:

```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```

Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.

The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.

```
print(model[0])
model[0].weight
```

You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers.
Note that dictionary keys must be unique, so _each operation must have a different name_.

```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
                      ('fc1', nn.Linear(input_size, hidden_sizes[0])),
                      ('relu1', nn.ReLU()),
                      ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
                      ('relu2', nn.ReLU()),
                      ('output', nn.Linear(hidden_sizes[1], output_size)),
                      ('softmax', nn.Softmax(dim=1))]))
model
```

Now you can access layers either by integer or the name

```
print(model[0])
print(model.fc1)
```

In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
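Before moving on, a quick sanity check on the size of the 784 → 128 → 64 → 10 network defined above. Each `nn.Linear(n_in, n_out)` holds `n_in * n_out` weights plus `n_out` biases, so the total parameter count is plain arithmetic (this check is an addition, not part of the original notebook):

```python
# weights are n_in * n_out per layer, biases are n_out
sizes = [784, 128, 64, 10]
total = sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

print(total)  # 109386
```

That is 100480 + 8256 + 650 parameters for the three layers; the same number should come out of summing `p.numel()` over `model.parameters()`.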
### Netflix Scraper

The purpose of the code is to get details of all the Categories on Netflix and then to gather information about Sub-Categories and movies under each Sub-Category.

```
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np

def make_soup(url):
    return BeautifulSoup(requests.get(url).text, 'html.parser')

def browseCategory(category, data):
    category_url = data[category-1][2]
    category = data[category-1][1]
    subCategory_details = []
    count = 1
    subCategories = []
    soup = make_soup(category_url)
    cards_list = soup.find_all('section', {'class': 'nm-collections-row'})
    for card in cards_list:
        try:
            subCategory = card.find('h1').text
            movie_list = []
            movies = card.find_all('li')
            movie_count = 1
            for movie in movies:
                try:
                    movie_title = movie.find('span', {'class': 'nm-collections-title-name'}).text
                    movie_link = movie.find('a').get('href')
                    movie_list.append([movie_count, movie_title, movie_link])
                    movie_count += 1
                except AttributeError:
                    pass
            subCategories.append(subCategory)
            subCategory_details.append(movie_list)
            count += 1
        except AttributeError:
            pass
    return subCategories, subCategory_details, count-1

def getCategories(base_url):
    category_soup = make_soup(base_url)
    categories = category_soup.find_all('section', {'class': 'nm-collections-row'})
    result = []
    count = 1
    for category in categories:
        try:
            Title = category.find('span', {'class': 'nm-collections-row-name'}).text
            url = category.find('a').get('href')
            result.append([count, Title, url])
            count += 1
        except AttributeError:
            pass
    #print(result)
    return result

def main():
    netflix_url = "https://www.netflix.com/in/browse/genre/839338"
    categories = getCategories(netflix_url)
    print("Please select one of the categories")
    df = pd.DataFrame(np.array(categories), columns=['Sr.No', 'Title', 'link'])
    print(df.to_string(index=False))
    choice = int(input('\n\n Please Enter your Choice: \n'))
    subCategories, movieList, count = browseCategory(choice, categories)
    for i in range(0, count):
        print(subCategories[i], '\n\n')
        subCategory_df = pd.DataFrame(np.array(movieList[i]), columns=['Sr.No', 'Title', 'link'])
        print(subCategory_df.to_string(index=False))
        print("\n\n\n")

if __name__ == '__main__':
    main()
```
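Since the scraper depends on a live request to Netflix, the table-building step can be tried offline with a hard-coded result list. The rows below are made-up placeholders standing in for `getCategories()` output:

```python
import numpy as np
import pandas as pd

# hypothetical output of getCategories(): [Sr.No, Title, link] rows
categories = [
    [1, 'Action', 'https://www.netflix.com/browse/genre/1'],
    [2, 'Comedy', 'https://www.netflix.com/browse/genre/2'],
]

df = pd.DataFrame(np.array(categories), columns=['Sr.No', 'Title', 'link'])
print(df.to_string(index=False))
```

One thing to notice: wrapping the mixed int/string rows in `np.array` coerces everything to strings, which is fine for display but means `Sr.No` is no longer numeric.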
# Tidy datasets are easy to manipulate, model and visualise, and have a specific structure: each variable is a column, each observation is a row, and each type of observational unit is a table (3rd level normalization of a relational database).

### Tidy Data, Hadley Wickham (2014)

```
# This statement widens the notebook page to the window size.
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

import numpy as np
import pandas as pd

# error message in reading as pandas cannot decode the encoding
billboard = pd.read_csv('billboard.csv')

# Latin_1 encoding accepts any possible byte as input and converts it to the unicode character of the same code.
billboard = pd.read_csv('billboard.csv', encoding='latin_1')
billboard
```

## This table is messy in two ways:

1. The table combines two observational units: the song and its rank in each week.
2. The week variable is repeated over 76 columns!

```
# rename columns to be more readable
new_cols = [
    'wk1', 'wk2', 'wk3', 'wk4', 'wk5', 'wk6', 'wk7', 'wk8', 'wk9', 'wk10',
    'wk11', 'wk12', 'wk13', 'wk14', 'wk15', 'wk16', 'wk17', 'wk18', 'wk19', 'wk20',
    'wk21', 'wk22', 'wk23', 'wk24', 'wk25', 'wk26', 'wk27', 'wk28', 'wk29', 'wk30',
    'wk31', 'wk32', 'wk33', 'wk34', 'wk35', 'wk36', 'wk37', 'wk38', 'wk39', 'wk40',
    'wk41', 'wk42', 'wk43', 'wk44', 'wk45', 'wk46', 'wk47', 'wk48', 'wk49', 'wk50',
    'wk51', 'wk52', 'wk53', 'wk54', 'wk55', 'wk56', 'wk57', 'wk58', 'wk59', 'wk60',
    'wk61', 'wk62', 'wk63', 'wk64', 'wk65', 'wk66', 'wk67', 'wk68', 'wk69', 'wk70',
    'wk71', 'wk72', 'wk73', 'wk74', 'wk75', 'wk76'
]
billboard.rename(columns=dict(zip(billboard.columns[7:], new_cols)), inplace=True)

# If there is no need to make the changes inplace in the original read datafile (for example for a later computation),
# the following method is more efficient as just the indices are read, not the whole DataFrame.
# billboard.columns.values[range(7,83)] = new_cols

billboard.rename(columns={'artist.inverted': 'artist'}, inplace=True)
billboard.head()

# `id_vars` or identifier variables are columns not to be changed, while `value_vars` or measured variables are
# "unpivoted" to the row axis, leaving just two non-identifier columns, 'variable' and 'value'.
# Note how readable and concise the table becomes compared to the original table.
# Note that the 'year' and 'date.peaked' extra columns are eliminated as they can be computed from the 'date.entered' and 'week' columns.
billboardmelt = billboard.melt(
    id_vars=['artist', 'track', 'time', 'genre', 'date.entered'],
    value_vars=new_cols,
    var_name='week',
    value_name='rank')
billboardmelt

# Convert the string-integer format of the week column values to integers, to be more readable and also computable.
billboardmelt['week'] = billboardmelt['week'].apply(lambda s: int(s[2:]))

# Sort the table to note the redundancy; the artist, track, time and genre columns have repetitive records.
billboardmelt.sort_values(['artist', 'track'], inplace=True)
billboardmelt
```

## First let's remove date redundancy

### The "date.entered" column was usable for the original dataset, with one date row for all the week columns. However, now that the multiple columns are rows, each consecutive week needs its own corresponding date; otherwise, for the rows of all weeks from 1 to 76, the "date.entered" of the first week would be repeated (date redundancy). To remove this redundancy, the date of each subsequent week is computed by adding multiples of 7 to the original "date.entered": for example, the date for week 3 is "date.entered" + 14, while the date of week 76 is "date.entered" + 75*7.

```
# converts date.entered strings to proper pandas date format to make them computable
billboardmelt['date.entered'] = pd.to_datetime(billboardmelt['date.entered'])

# Compute date column from date.entered (the date of the first week that a song enters the market) and week columns.
billboardmelt['date'] = billboardmelt['date.entered'] + pd.Timedelta('7 days') * (billboardmelt['week'] - 1)

# Drop the original date.entered column.
billboardmelt.drop(['date.entered'], axis=1, inplace=True)
billboardmelt
```

## To remove artist, track, time and genre redundancies we move them into a SEPARATE table as a distinct observational unit of song details, move the week, rank and date columns to another SEPARATE table as a distinct observational unit of ranking, and then link them together through unique indices.

```
# Create a separate table with track data and a unique index (like a primary key in a database) using the drop_duplicates() method
tracks = billboardmelt[['artist', 'track', 'time', 'genre']].drop_duplicates()

# Basically, the unique indices of the tracks dataframe are inserted into the tracksid dataframe as a separate column.
# To carry the unique index "id" explicitly as a separate column, a new index is made by resetting the index to default.
# The tracks dataframe is assigned to the tracksid dataframe with a default index and a column of unique indices.
tracks.index.name = 'id'
tracksid = tracks.reset_index()
tracksid

# Link the tracksid table to the ranking using unique indices by joining the tracksid dataframe with billboardmelt and dropping the common columns.
# This makes the second table, with the unique observational unit of ranking plus the relevant date and week columns.
# Note that unique ids are repeated according to week number, but date, week and rank are unique.
ranking = pd.merge(tracksid, billboardmelt,
                   on=['artist', 'track', 'time', 'genre']).drop(['artist', 'track', 'time', 'genre'], axis=1)
ranking
```

## In summary, the messy billboard dataframe is split into two tidy tables, tracksid and ranking.

- Should we need the rank of a song, we query the distinct observational ranking table.
- Should we need the details of a song, we query the distinct observational song table using the song's unique index.
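The melt-then-clean pattern used above can be tried on a toy frame; everything here is synthetic data made up for illustration:

```python
import pandas as pd

# wide (messy) table: one row per song, one column per week
wide = pd.DataFrame({
    'track': ['A', 'B'],
    'wk1': [3, 1],
    'wk2': [2, 4],
})

# unpivot the week columns into rows, as done with the 76 'wk' columns above
tidy = wide.melt(id_vars=['track'], value_vars=['wk1', 'wk2'],
                 var_name='week', value_name='rank')

# strip the 'wk' prefix so the week is a computable integer
tidy['week'] = tidy['week'].str[2:].astype(int)

print(tidy.sort_values(['track', 'week']).to_string(index=False))
```

Two songs times two weeks yields four observation rows, one per (track, week) pair, which is exactly the tidy shape the notebook builds at full scale.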
## Example: Find the details for the song with maximum ranking in the first week

```
# First find the unique song index
ranking.loc[ranking[ranking.week == 1]['rank'].idxmax()]

# Then find the track details
tracksid.query('id == 248')

%load_ext version_information
%version_information numpy, pandas, matplotlib, version_information
```
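The two-table lookup in the example above follows a simple pattern; a toy version with made-up rows (boolean indexing stands in for `.query` here so no column name is hard-coded):

```python
import pandas as pd

# toy versions of the two tidy tables
tracks = pd.DataFrame({'id': [0, 1], 'track': ['A', 'B']})
ranking = pd.DataFrame({'id': [0, 0, 1, 1],
                        'week': [1, 2, 1, 2],
                        'rank': [3, 2, 1, 4]})

# row with the best (lowest-numbered) rank in week 1
best = ranking.loc[ranking[ranking.week == 1]['rank'].idxmin()]

# then fetch the song details by the unique id
details = tracks[tracks['id'] == best['id']]
print(details)
```

The rank query touches only the ranking table and the detail lookup only the tracks table, which is the payoff of splitting the observational units.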
# Explain Attacking BERT models using Captum

Captum is a PyTorch library to explain neural networks. Here we show a minimal example using Captum to explain BERT models from TextAttack.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/QData/TextAttack/blob/master/docs/2notebook/Example_5_Explain_BERT.ipynb) [![View Source on GitHub](https://img.shields.io/badge/github-view%20source-black.svg)](https://github.com/QData/TextAttack/blob/master/docs/2notebook/Example_5_Explain_BERT.ipynb)

```
import torch
from copy import deepcopy
from textattack.datasets import HuggingFaceDataset
from textattack.models.tokenizers import AutoTokenizer
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.models.wrappers import ModelWrapper
from transformers import AutoModelForSequenceClassification
from captum.attr import IntegratedGradients, LayerConductance, LayerIntegratedGradients, LayerDeepLiftShap, InternalInfluence, LayerGradientXActivation
from captum.attr import visualization as viz

device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")
print(device)
torch.cuda.set_device(device)

dataset = HuggingFaceDataset("ag_news", None, "train")
original_model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-ag-news")
original_tokenizer = AutoTokenizer("textattack/bert-base-uncased-ag-news")
model = HuggingFaceModelWrapper(original_model, original_tokenizer)

def captum_form(encoded):
    input_dict = {k: [_dict[k] for _dict in encoded] for k in encoded[0]}
    batch_encoded = {k: torch.tensor(v).to(device) for k, v in input_dict.items()}
    return batch_encoded

def get_text(tokenizer, input_ids, token_type_ids, attention_mask):
    list_of_text = []
    number = input_ids.size()[0]
    for i in range(number):
        ii = input_ids[i,].cpu().numpy()
        tt = token_type_ids[i,]
        am = attention_mask[i,]
        txt = tokenizer.decode(ii, skip_special_tokens=True)
        list_of_text.append(txt)
    return list_of_text

sel = 2
encoded = model.tokenizer.batch_encode([dataset[i][0]['text'] for i in range(sel)])
labels = [dataset[i][1] for i in range(sel)]
batch_encoded = captum_form(encoded)

clone = deepcopy(model)
clone.model.to(device)

def calculate(input_ids, token_type_ids, attention_mask):
    # convert back to list of text
    return clone.model(input_ids, token_type_ids, attention_mask)[0]

# x = calculate(**batch_encoded)

lig = LayerIntegratedGradients(calculate, clone.model.bert.embeddings)
# lig = InternalInfluence(calculate, clone.model.bert.embeddings)
# lig = LayerGradientXActivation(calculate, clone.model.bert.embeddings)

bsl = torch.zeros(batch_encoded['input_ids'].size()).type(torch.LongTensor).to(device)
labels = torch.tensor(labels).to(device)

attributions, delta = lig.attribute(inputs=batch_encoded['input_ids'],
                                    baselines=bsl,
                                    additional_forward_args=(batch_encoded['token_type_ids'],
                                                             batch_encoded['attention_mask']),
                                    n_steps=10,
                                    target=labels,
                                    return_convergence_delta=True)

atts = attributions.sum(dim=-1).squeeze(0)
atts = atts / torch.norm(atts)

# print(attributions.size())
atts = attributions.sum(dim=-1).squeeze(0)
atts = atts / torch.norm(atts)

from textattack.attack_recipes import PWWSRen2019
attack = PWWSRen2019.build(model)
results_iterable = attack.attack_dataset(dataset, indices=range(10))

viz_list = []
for n, result in enumerate(results_iterable):
    orig = result.original_text()
    pert = result.perturbed_text()

    encoded = model.tokenizer.batch_encode([orig])
    batch_encoded = captum_form(encoded)
    x = calculate(**batch_encoded)
    print(x)
    print(dataset[n][1])

    pert_encoded = model.tokenizer.batch_encode([pert])
    pert_batch_encoded = captum_form(pert_encoded)
    x_pert = calculate(**pert_batch_encoded)

    attributions, delta = lig.attribute(inputs=batch_encoded['input_ids'],
                                        # baselines=bsl,
                                        additional_forward_args=(batch_encoded['token_type_ids'],
                                                                 batch_encoded['attention_mask']),
                                        n_steps=10,
                                        target=torch.argmax(calculate(**batch_encoded)).item(),
                                        return_convergence_delta=True)

    attributions_pert, delta_pert = lig.attribute(inputs=pert_batch_encoded['input_ids'],
                                                  # baselines=bsl,
                                                  additional_forward_args=(pert_batch_encoded['token_type_ids'],
                                                                           pert_batch_encoded['attention_mask']),
                                                  n_steps=10,
                                                  target=torch.argmax(calculate(**pert_batch_encoded)).item(),
                                                  return_convergence_delta=True)

    orig = original_tokenizer.tokenizer.tokenize(orig)
    pert = original_tokenizer.tokenizer.tokenize(pert)

    atts = attributions.sum(dim=-1).squeeze(0)
    atts = atts / torch.norm(atts)

    atts_pert = attributions_pert.sum(dim=-1).squeeze(0)
    atts_pert = atts_pert / torch.norm(atts)

    all_tokens = original_tokenizer.tokenizer.convert_ids_to_tokens(batch_encoded['input_ids'][0])
    all_tokens_pert = original_tokenizer.tokenizer.convert_ids_to_tokens(pert_batch_encoded['input_ids'][0])

    v = viz.VisualizationDataRecord(
                            atts[:45].detach().cpu(),
                            torch.max(x).item(),
                            torch.argmax(x, dim=1).item(),
                            dataset[n][1],
                            2,
                            atts.sum().detach(),
                            all_tokens[:45],
                            delta)

    v_pert = viz.VisualizationDataRecord(
                            atts_pert[:45].detach().cpu(),
                            torch.max(x_pert).item(),
                            torch.argmax(x_pert, dim=1).item(),
                            dataset[n][1],
                            2,
                            atts_pert.sum().detach(),
                            all_tokens_pert[:45],
                            delta_pert)

    viz_list.append(v)
    viz_list.append(v_pert)

    # print(result.perturbed_text())
    print(result.__str__(color_method='ansi'))

print('\033[1m', 'Visualizations For AG NEWS', '\033[0m')
viz.visualize_text(viz_list)

# reference for viz datarecord
# def __init__(
#     self,
#     word_attributions,
#     pred_prob,
#     pred_class,
#     true_class,
#     attr_class,
#     attr_score,
#     raw_input,
#     convergence_score,
# ):
```
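The `atts / torch.norm(atts)` step used throughout simply rescales the per-token attribution scores to unit L2 norm so different examples are comparable. In isolation (a NumPy sketch with made-up scores; NumPy stands in for torch so the snippet runs anywhere):

```python
import numpy as np

# hypothetical summed attribution scores for 6 tokens
atts = np.array([0.8, -0.2, 0.1, 1.5, -0.7, 0.3])

# scale to unit L2 norm, mirroring atts / torch.norm(atts) above
normed = atts / np.linalg.norm(atts)

print(np.linalg.norm(normed))  # ~1.0
```

Only the relative magnitudes and signs matter for the Captum visualization, so this rescaling does not change which tokens look important.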
## Advanced Lane Finding Project

The goals / steps of this project are the following:

* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify binary image ("birds-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

---

## First, I'll compute the camera calibration using chessboard images

```
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
obj_points = [] # 3d points in real world space
img_points = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('../camera_cal/calibration*.jpg')

# Step through the list and search for chessboard corners
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)

    # If found, add object points, image points
    if ret:
        #print('found the corners')
        obj_points.append(objp)
        img_points.append(corners)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
        plt.imshow(img)
        #cv2.waitKey(500)
    else:
        print('not found')

cv2.destroyAllWindows()
```

## Now applying distortion correction to the raw image

```
def remove_distortion(img, obj_points, img_points):
    #print('inside remove distortion')
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
    #ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    un_dst = cv2.undistort(img, mtx, dist, None, mtx)

    #f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
    #f.tight_layout()
    #ax1.imshow(img)
    #ax1.set_title('Original Image', fontsize=50)
    #ax2.imshow(un_dst)
    #ax2.set_title('Undistorted Image', fontsize=50)
    #plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
    #plt.savefig('un_distorted_real.png')
    #plt.show()
    return dist, mtx, un_dst

#org_img = cv2.imread('../test_images/straight_lines1.jpg')
#img = cv2.imread('../camera_cal/calibration1.jpg')
#org_img = cv2.imread('../test_images/test5.jpg')
org_img = cv2.imread('not_working.jpg')
plt.imshow(org_img)

dist, mtx, un_dst = remove_distortion(org_img, obj_points, img_points)
```

## Applying color and gradient transform to the image

```
def gradient_color_transform(img, s_thresh=(170, 255), sx_thresh=(30, 80)):
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    s_channel = hls[:,:,2]

    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
    abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
    scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))

    thresh_min = 20
    thresh_max = 100
    sxbinary = np.zeros_like(scaled_sobel)
    sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1

    s_thresh_min = 170
    s_thresh_max = 255
    s_binary = np.zeros_like(s_channel)
    s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1

    color_binary = np.dstack((np.zeros_like(sxbinary), sxbinary, s_binary)) * 255

    combined_binary = np.zeros_like(sxbinary)
    combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1

    # Plotting thresholded images
    # f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
    # ax1.set_title('Stacked thresholds')
    # ax1.imshow(color_binary)
    # ax2.set_title('Combined S channel and gradient thresholds')
    # ax2.imshow(combined_binary, cmap='gray')
    # plt.imshow(combined_binary, cmap='gray')
    # plt.savefig('binary_combo_real.png')
    # plt.savefig('../combined_image.jpg')

    return combined_binary

image = un_dst
#image = cv2.imread('../test_images/test1.jpg')
plt.imshow(image)
plt.show()

combined_image = gradient_color_transform(image)
```

## Applying perspective transform

```
def transform(cur, dist, mtx):
    #img = np.copy(cur)
    img = cur
    #print(img.shape)
    un_dst = cv2.undistort(img,
mtx, dist, None, mtx) #plt.imshow(un_dst) #plt.show() #print(un_dst.shape) img_size = (img.shape[1], img.shape[0]) src = np.float32([[200, img_size[1] - 1], [595, 450], [685, 450], [1100, img_size[1] - 1]]) dst = np.float32([[350, img_size[1] - 1], [350, 1], [970, 1], [970, img_size[1] - 1]]) M = cv2.getPerspectiveTransform(src, dst) Minv = cv2.getPerspectiveTransform(dst, src) warped = cv2.warpPerspective(un_dst, M, img_size) color=[255, 0, 0] thickness=5 # cv2.line(img, (200, img_size[1] - 1), (590, 450), color, thickness) # cv2.line(img, (590, 450), (690, 450), color, thickness) # cv2.line(img, (690, 450), (1100, img_size[1] - 1), color, thickness) # cv2.line(img, (1100, img_size[1] - 1), (300, img_size[1] - 1), color, thickness) # cv2.line(warped, (350, img_size[1] - 1), (350, 1), color, thickness) # cv2.line(warped, (350, 1), (970, 1), color, thickness) # cv2.line(warped, (970, 1), (970, img_size[1] - 1), color, thickness) # cv2.line(warped, (970, img_size[1] - 1), (350, img_size[1] - 1), color, thickness) # f, (ax1, ax2) = plt.subplots(1, 2, figsize=(30,9)) # f.tight_layout() # ax1.imshow(img) # ax1.set_title('Un-distorted Image with source points', fontsize=50) # ax2.imshow(warped) # ax2.set_title('Warped Image with destination', fontsize=50) # plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) 
# plt.savefig('warped_straight_lines_real.png') # plt.show() return warped, Minv #combined_image = cv2.cvtColor(mpimg.imread('../test_images/straight_lines1.jpg'), cv2.COLOR_RGB2GRAY) #combined_image = mpimg.imread('../binary-combo-img.jpg') plt.imshow(combined_image, cmap='gray') binary_warped, Minv = transform(combined_image, dist, mtx) plt.imshow(binary_warped, cmap='gray') ``` ## Detecting lane And Finding the curvature ``` def getCurvature_and_center_of_line(leftx, lefty, rightx, righty, y_eval, shape): ym_per_pix = 30 / 720 # meters per pixel in y dimension xm_per_pix = 3.7 / 700 # meters per pixel in x dimension left_fit = np.polyfit(lefty * ym_per_pix, leftx * xm_per_pix, 2) right_fit = np.polyfit(righty * ym_per_pix, rightx * xm_per_pix, 2) # Generate x and y values for plotting #print(y_eval, left_fit, right_fit) y_eval_meters = y_eval * ym_per_pix left_bottom = left_fit[0] * y_eval_meters ** 2 + left_fit[1] * y_eval_meters + left_fit[2] right_bottom = right_fit[0] * y_eval_meters ** 2 + right_fit[1] * y_eval_meters + right_fit[2] line_center = (left_bottom + right_bottom) / 2 img_center = shape[1] /2 * xm_per_pix dif = img_center - line_center #print('curvature left ', left_bottom, 'right',right_bottom, 'line_center ', line_center, img_center, dif) left_curvature = ((1 + (2 * left_fit[0] * y_eval *ym_per_pix + left_fit[1]) ** 2) ** 1.5) / np.absolute(2 * left_fit[0]) right_curvature = ((1 + (2 * right_fit[0] * y_eval *ym_per_pix + right_fit[1]) ** 2) ** 1.5) / np.absolute(2 * right_fit[0]) mean_curvature = (left_curvature + right_curvature) /2 return mean_curvature, dif def find_lane_pixels(binary_warped): # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0) #plt.plot(histogram) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be 
    # the starting point for the left and right lines
    midpoint = int(histogram.shape[0] // 2)
    left_margin = 300  # to avoid wrong identification of dividers as lines
    leftx_base = left_margin + np.argmax(histogram[left_margin:midpoint])
    #print('leftx_base', leftx_base, midpoint)
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    #print(leftx_base, rightx_base)

    # HYPERPARAMETERS
    # Choose the number of sliding windows
    nwindows = 9
    # Set the width of the windows +/- margin
    margin = 100
    # Set minimum number of pixels found to recenter window
    minpix = 50
    # Set height of windows - based on nwindows above and image shape
    window_height = int(binary_warped.shape[0] // nwindows)

    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    # Current positions to be updated later for each window in nwindows
    leftx_current = leftx_base
    rightx_current = rightx_base
    # Create empty lists to receive left and right lane pixel indices
    left_lane_inds = []
    right_lane_inds = []

    # Step through the windows one by one
    for window in range(nwindows):
        # Identify window boundaries in x and y (and right and left)
        win_y_low = binary_warped.shape[0] - (window + 1) * window_height
        win_y_high = binary_warped.shape[0] - window * window_height
        win_xleft_low = leftx_current - margin
        win_xleft_high = leftx_current + margin
        win_xright_low = rightx_current - margin
        win_xright_high = rightx_current + margin
        # Draw the windows on the visualization image
        # cv2.rectangle(out_img, (win_xleft_low, win_y_low),
        #               (win_xleft_high, win_y_high), (0, 255, 0), 2)
        # cv2.rectangle(out_img, (win_xright_low, win_y_low),
        #               (win_xright_high, win_y_high), (0, 255, 0), 2)
        # Identify the nonzero pixels in x and y within the window
        good_left_inds = ((nonzerox > win_xleft_low) & (nonzerox < win_xleft_high) &
                          (nonzeroy < win_y_high) & (nonzeroy > win_y_low)).nonzero()[0]
        good_right_inds = ((nonzerox > win_xright_low) & (nonzerox < win_xright_high) &
                           (nonzeroy < win_y_high) & (nonzeroy > win_y_low)).nonzero()[0]
        # Append these indices to the lists
        if good_left_inds.shape[0] != 0:
            left_lane_inds.append(good_left_inds)
        if good_right_inds.shape[0] != 0:
            right_lane_inds.append(good_right_inds)
        # If more than minpix pixels were found, recenter the next window
        # (`rightx_current` or `leftx_current`) on their mean position
        if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))

    # Concatenate the arrays of indices (previously was a list of lists of pixels)
    try:
        left_lane_inds = np.concatenate(left_lane_inds)
        right_lane_inds = np.concatenate(right_lane_inds)
    except ValueError:
        pass

    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds]
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]
    #print(leftx.shape, lefty.shape, rightx.shape, righty.shape)
    return leftx, lefty, rightx, righty, out_img

def fit_polynomial(binary_warped):
    # Find our lane pixels first
    leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)

    # Fit a second order polynomial to each lane using `np.polyfit`
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)
    #print('left fit', left_fit, right_fit)

    # Generate x and y values for plotting
    ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
    try:
        left_fitx = left_fit[0] * ploty ** 2 + left_fit[1] * ploty + left_fit[2]
        right_fitx = right_fit[0] * ploty ** 2 + right_fit[1] * ploty + right_fit[2]
    except TypeError:
        # Avoids an error if `left_fit` and `right_fit` are still None or incorrect
        print('The function failed to fit a
line!') left_fitx = 1 * ploty ** 2 + 1 * ploty right_fitx = 1 * ploty ** 2 + 1 * ploty ## Visualization ## # Colors in the left and right lane regions out_img[lefty, leftx] = [255, 0, 0] out_img[righty, rightx] = [0, 0, 255] y_eval = np.max(ploty) mean_curvature, center_to_image = getCurvature_and_center_of_line(leftx, lefty, rightx, righty, y_eval, out_img.shape) #Plots the left and right polynomials on the lane lines #plt.plot(left_fitx, ploty, color='yellow') #plt.plot(right_fitx, ploty, color='yellow') lines_mid = (left_fitx[-1] + right_fitx[-1]) //2 return out_img, left_fitx, right_fitx, ploty, mean_curvature, center_to_image plt.imshow(binary_warped) plt.show() warped_line_detected_box_method, left_fitx, right_fitx, ploty, mean_curvature, center_to_image = fit_polynomial(binary_warped) plt.imshow(warped_line_detected_box_method) plt.show() #plt.savefig('warped_line_detected_box_method.png') ``` ## Warp line back into the original image ``` def transform_back_add_text(binary_warped, left_fitx, right_fitx, ploty, org_img, mean_curvature, center_to_image ): warp_zero = np.zeros_like(binary_warped).astype(np.uint8) color_warp = np.dstack((warp_zero, warp_zero, warp_zero)) # Recast the x and y points into usable format for cv2.fillPoly() pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) pts = np.hstack((pts_left, pts_right)) #print(np.int_(pts).shape) # Draw the lane onto the warped blank image cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0)) # Warp the blank back to original image space using inverse perspective matrix (Minv) newwarp = cv2.warpPerspective(color_warp, Minv, (org_img.shape[1], org_img.shape[0])) # Combine the result with the original image result = cv2.addWeighted(org_img, 1, newwarp, 0.3, 0) direction = 'left' if center_to_image < 0 else 'right' string_direction = 'Vehicle is %.2fm' %abs(center_to_image)+' '+direction+' to the center' 
string_mean_curvature = 'Radius of Curvature %.2f(m)' %mean_curvature cv2.putText(result,string_mean_curvature, (100,50), cv2.FONT_HERSHEY_SIMPLEX, 2,(255,255,255),2) cv2.putText(result,string_direction, (100,100), cv2.FONT_HERSHEY_SIMPLEX, 2,(255,255,255),2) #cv2.imshow(result) #plt.imshow(result) return result final_image = transform_back_add_text(binary_warped, left_fitx, right_fitx, ploty,org_img, mean_curvature, center_to_image) #plt.imshow(final_image) ``` ## Pipeline for video ``` def pipeline(image): dist, mtx, un_dst = remove_distortion(image, obj_points, img_points) combined_image = gradient_color_transform(un_dst) binary_warped, Minv = transform(combined_image, dist, mtx) warped_line_detected_box_method, left_fitx, right_fitx, ploty, mean_curvature, center_to_image = fit_polynomial(binary_warped) final_image = transform_back_add_text(binary_warped, left_fitx, right_fitx, ploty,image, mean_curvature, center_to_image) #plt.imshow(final_image) #print(final_image.shape) return final_image org_img = cv2.imread('not_working.jpg') res = pipeline(org_img) plt.imshow(res) plt.show() #org_img = cv2.imread('../test_images/test1.jpg') #res = pipeline(org_img) #plt.imshow(res) #plt.show() def process_image(image): return pipeline(image) from moviepy.editor import VideoFileClip from IPython.display import HTML video_output = '../harder_challenge_video_output_copy1.mp4' video_input = "../harder_challenge_video.mp4" clip1 = VideoFileClip(video_input) print(clip1.duration) white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! #clip1.reader.close() #clip1.audio.reader.close_proc() %time white_clip.write_videofile(video_output, audio=False) #print(white_clip) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(video_output)) video = VideoFileClip('../project_video.mp4') np_frame = video.get_frame(23) plt.imshow(np_frame) #plt.savefig('not_working.jpg') video.save_frame('not_working.jpg', t=23) ```
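The radius-of-curvature formula inside `getCurvature_and_center_of_line` can be sanity-checked without any image data. The sketch below (plain NumPy; the circle radius and sample range are arbitrary choices, not values from the pipeline) fits `x = A*y**2 + B*y + C` to points on a circle of known radius and evaluates the same formula:

```python
import numpy as np

def radius_of_curvature(fit, y):
    # R = (1 + (2*A*y + B)^2)^1.5 / |2*A| for x = A*y^2 + B*y + C,
    # the same expression used in getCurvature_and_center_of_line.
    A, B = fit[0], fit[1]
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / np.absolute(2 * A)

# Synthetic check: sample an arc of a circle with radius 100 and fit x(y).
R_true = 100.0
y = np.linspace(-30, 30, 61)
x = R_true - np.sqrt(R_true ** 2 - y ** 2)  # arc with its vertex at x = 0
fit = np.polyfit(y, x, 2)
print(radius_of_curvature(fit, 0.0))  # within a few percent of 100
```

The residual error comes from approximating a circular arc with a parabola; over a short arc the recovered radius is close to the true one, which is the regime the lane-fitting code operates in.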
github_jupyter
``` import pandas as pd import datetime import vk_api import os import requests import json import random %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import sys token = '4e6e771d37dbcbcfcc3b53d291a274d3ae21560a2e81f058a7c177aff044b5141941e89aff1fead50be4f' vk_session = vk_api.VkApi(token=token) vk = vk_session.get_api() vk.messages.send( chat_id=1, random_id=2, message='Matrix has you ...') df = pd.read_csv('/home/jupyter-an.karpov/shared/ads_data.csv.zip', compression='zip') ad_data = df.groupby(['ad_id', 'event'], as_index=False) \ .agg({'user_id': 'count'}) ad_data = ad_data.pivot(index='ad_id', columns='event', values='user_id').reset_index() ad_data = ad_data.fillna(0).assign(ctr=ad_data.click / ad_data.view) top_ctr = ad_data.query('click > 20 & view > 100').sort_values('ctr').tail(10) top_ctr.to_csv('top_ctr_data.csv', index=False) path = '/home/jupyter-an.karpov/lesson_7/top_ctr_data.csv' file_name = 'top_ctr_data.csv' path_to_file = path upload_url = vk.docs.getMessagesUploadServer(peer_id=2000000001)["upload_url"] file = {'file': (file_name, open(path_to_file, 'rb'))} response = requests.post(upload_url, files=file) json_data = json.loads(response.text) json_data saved_file = vk.docs.save(file=json_data['file'], title=file_name) saved_file attachment = 'doc{}_{}'.format(saved_file['doc']['owner_id'], saved_file['doc']['id']) attachment vk.messages.send( chat_id=1, random_id=3, message='Топ объявлений по CTR', attachment = attachment ) import gspread from df2gspread import df2gspread as d2g from oauth2client.service_account import ServiceAccountCredentials scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive'] my_mail = 'anatoly1804@gmail.com' # Authorization credentials = ServiceAccountCredentials.from_json_keyfile_name('heroic-venture-268009-1dfbcc34e5fa.json', scope) gs = gspread.authorize(credentials) # Name of the table in google sheets, # can be url for open_by_url # or id (key) part 
# for open_by_key
table_name = 'to_sequence'  # Your table

# Get this table
work_sheet = gs.open(table_name)
# Select 1st sheet
sheet1 = work_sheet.sheet1
# Get data in python lists format
data = sheet1.get_all_values()
data

headers = data.pop(0)
df = pd.DataFrame(data, columns=headers)
df

df.sort_values('price', ascending=False)

sheet1.append_row([500, 'group_4'])

# Looks like the spreadsheet should already exist (so, run the code in the create table section first)
spreadsheet_name = 'A new spreadsheet'
sheet = 'KarpovCorses2'
d2g.upload(df, spreadsheet_name, sheet, credentials=credentials, row_names=True)

url = 'https://api-metrika.yandex.net/stat/v1/data?'
visits = 'metrics=ym:s:visits&dimensions=ym:s:date&id=44147844'
visits_url = url + visits
visits_request = requests.get(visits_url)
visits_request

json_data = json.loads(visits_request.text)
json_data['data']

y_df = pd.DataFrame([(i['dimensions'][0]['name'], i['metrics'][0]) for i in json_data['data']], columns=['date', 'visits'])

spreadsheet_name = 'A new spreadsheet'
sheet = 'Yandex_visits'
d2g.upload(y_df, spreadsheet_name, sheet, credentials=credentials, row_names=True)
```
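The `groupby` → `pivot` → CTR chain used on the ads data is easy to verify on a tiny hand-made frame. The rows below are invented; only the column names mirror `ads_data.csv`:

```python
import pandas as pd

# Invented event log: one row per (ad, user, event), like ads_data.csv.
events = pd.DataFrame({
    'ad_id':   [1, 1, 1, 2, 2, 2, 2],
    'event':   ['view', 'view', 'click', 'view', 'view', 'view', 'click'],
    'user_id': [10, 11, 11, 12, 13, 14, 14],
})

# Count events per (ad_id, event) pair, then widen to one column per event type.
counts = events.groupby(['ad_id', 'event'], as_index=False).agg({'user_id': 'count'})
wide = counts.pivot(index='ad_id', columns='event', values='user_id').fillna(0)

# Click-through rate, as in the notebook: clicks divided by views.
wide['ctr'] = wide.click / wide.view
print(wide['ctr'].to_dict())  # ad 1: 1 click / 2 views, ad 2: 1 click / 3 views
```

The same shape comes out of the real data; the only extra step in the notebook is filtering by minimum click/view counts before ranking by `ctr`.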
# Hello, Slicer!

Let's test that the setup was successful by checking that:

1. We're running the Python kernel bundled with Slicer
2. The Python kernel actually works
3. We have access to Slicer functionality

```
import sys
print(f'We\'re running {sys.executable}\n')
print("Hello, Slicer!")

try:
    import vtk
    import numpy as np
    from emoji import UNICODE_EMOJI
    import JupyterNotebooksLib as slicernb
    print('Imports succeeded. Slicer environment is set up correctly.')
except ImportError as e:
    print(f'Imports failed with the following message: {e}.\nSlicer environment was not set up correctly.')
```

---

## Great!

Now that we have everything set up and Python is working, let's try to create and view some objects in Slicer.

Let's create a function that generates a cube:

```
def create_cube(size_xyz: list, center_xyz: list) -> vtk.vtkCubeSource:
    """
    Creates a cube of a given size at a given point.

    :param size_xyz: List specifying size [x,y,z].
    :type size_xyz: list
    :param center_xyz: List specifying centroid coordinates [x,y,z].
    :type center_xyz: list
    :returns: A VTK box.
    :rtype: vtkCubeSource
    """
    box = vtk.vtkCubeSource()
    box.SetXLength(size_xyz[0])
    box.SetYLength(size_xyz[1])
    box.SetZLength(size_xyz[2])
    box.SetCenter(center_xyz)
    return box
```

Now let's generate some cubes

```
some = 5  # why not?

for i in range(some):
    box = create_cube(np.random.randint(1, 42, 3).tolist(), np.random.randint(-50, 50, 3).tolist())
    # Create a model node that displays our box
    boxNode = slicer.modules.models.logic().AddModel(box.GetOutputPort())
    boxNode.SetName(f'Box{str(i+1)}')
    # Adjust display properties (color and opacity)
    boxNode.GetDisplayNode().SetColor(np.random.sample(3))
    boxNode.GetDisplayNode().SetOpacity(np.random.uniform(0, 1))
```

And let's adjust the 3D viewport camera position to render our cubes in the notebook output.
``` lm = slicer.app.layoutManager() view = lm.threeDWidget(0).threeDView() threeDViewNode = view.mrmlViewNode() cameraNode = slicer.modules.cameras.logic().GetViewActiveCameraNode(threeDViewNode) cameraNode.SetPosition([342,242,242]) slicernb.View3DDisplay() ``` ## Nice! So we've tested that VTK is in place and that the 3D view works. What about the slice views? --- ## Let's create some images to display them in slice views. We'll create a "💀 Hello Slicer" sign spanned across the Red, Green and Yellow slice views. Let's start with creating images represented as numpy arrays ``` def create_np_text_img(text: str, size: tuple = (128, 128)) -> np.ndarray: """ Creates a numpy text image. Creates a text-on-background image and returns it as a flat 3D numpy array. Check font paths when copying this function. The font paths should point to actual true-type font files on the disk. :param text: Input unicode text. :type text: str :param size: Target image size (optional). :type size: tuple :returns: Flat 3D numpy array containing pixel values. :rtype: np.ndarray """ from PIL import Image, ImageDraw, ImageFont if bool(set(text).intersection(UNICODE_EMOJI)): font_path = "/System/Library/Fonts/Apple Color Emoji.ttc" font = ImageFont.truetype(font_path, 64) else: font_path = "/System/Library/Fonts/Microsoft/Arial Black.ttf" font = ImageFont.truetype(font_path, 24) text_width, text_height = font.getsize(text) text_image = Image.new('I', size, "black") draw = ImageDraw.Draw(text_image) draw.text((text_width/2, text_height/2), text, 'white', font) return np.asarray(text_image).reshape(*size, 1) skull_image_array = create_np_text_img('💀') hello_image_array = create_np_text_img('Hello') slicer_image_array = create_np_text_img('Slicer') ``` The coordinate space of the images, the arrays and slice views is usually not the same. 
There is a good wiki article about it here: [https://www.slicer.org/wiki/Coordinate_systems](https://www.slicer.org/wiki/Coordinate_systems)

That means that displaying our sign in the slice views might require some rotations and flipping. Let's create an empty (128, 128, 128) array and paste our images onto its sides.

```
volume = np.zeros((128, 128, 128))

volume[:1, :, :] = np.rot90(skull_image_array, 2, (0, 1)).transpose(2, 0, 1)
volume[:, :1, :] = np.rot90(hello_image_array[::-1], 1, (0, 1)).transpose(1, 2, 0)
volume[:, :, :1] = np.rot90(slicer_image_array, 2, (0, 1))
```

Finally we can create a volume node and render its slices in the slice views

```
volumeNode = slicer.util.addVolumeFromArray(volume)
volumeNode.SetName('Hello Slicer!')

def show_slice_in_slice_view(volumeNode: slicer.vtkMRMLScalarVolumeNode, sliceNum: int = 0, sliceView: str = 'Red'):
    """
    Render a numpy image on a slice view.

    :param volumeNode: The volume node
    :type volumeNode: vtkMRMLScalarVolumeNode
    :param sliceNum: The number of the slice that we want to show. Optional. Defaults to 0.
    :type sliceNum: int
    :param sliceView: One of the default slice views ('Red', 'Green', 'Yellow')
    :type sliceView: str
    """
    sliceViewWidget = slicer.app.layoutManager().sliceWidget(sliceView)
    sliceWidgetLogic = sliceViewWidget.sliceLogic()
    sliceWidgetLogic.GetSliceCompositeNode().SetForegroundVolumeID(volumeNode.GetID())
    sliceWidgetLogic.FitSliceToAll()
    sliceWidgetLogic.SetSliceOffset(sliceNum)

for sliceView in ['Red', 'Green', 'Yellow']:
    show_slice_in_slice_view(volumeNode, 0, sliceView)

slicernb.ViewDisplay('FourUp', False)
```

# All right! This was fun!

Let's play with volumes a bit more in the next notebook.
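The rotate-and-transpose step used when pasting the text images is easier to follow on a tiny array. A minimal, Slicer-free sketch (the shapes below are made up) that pastes a small `(rows, cols, 1)` image onto the first face of a volume the same way the skull image is pasted above:

```python
import numpy as np

# A tiny stand-in for a create_np_text_img result: shape (rows, cols, 1).
img = np.arange(12).reshape(4, 3, 1)

# Empty volume whose first axis is the "depth" of the face we paste onto.
vol = np.zeros((2, 4, 3), dtype=img.dtype)

# Rotate 180 degrees in-plane, then move the flat third axis to the front
# so the result matches the (1, rows, cols) shape of vol[:1, :, :].
vol[:1, :, :] = np.rot90(img, 2, (0, 1)).transpose(2, 0, 1)

# The pasted face is the original image flipped along both axes:
print((vol[0] == img[::-1, ::-1, 0]).all())  # True
```

The same reasoning applies to the other two faces in the notebook; only the rotation count and the transpose axis order change.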
##### 1 ![1](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0001.jpg) ##### 2 ![2](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0002.jpg) ##### 3 ![3](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0003.jpg) ##### 4 ![4](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0004.jpg) ##### 5 ![5](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0005.jpg) ##### 6 ![6](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0006.jpg) ##### 7 ![7](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0007.jpg) ##### 8 ![8](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0008.jpg) ##### 9 ![9](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0009.jpg) ##### 10 ![10](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0010.jpg) ##### 11 ![11](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0011.jpg) ##### 12 ![12](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0012.jpg) ##### 13 ![13](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0013.jpg) ##### 14 ![14](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0014.jpg) ##### 15 ![15](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0015.jpg) ##### 16 ![16](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0016.jpg) ##### 17 ![17](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0017.jpg) ##### 18 ![18](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0018.jpg) ##### 19 ![19](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0019.jpg) ##### 20 ![20](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0020.jpg) ##### 21 ![21](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0021.jpg) ##### 22 ![22](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0022.jpg) ##### 23 ![23](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0023.jpg) ##### 24 ![24](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0024.jpg) ##### 25 ![25](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0025.jpg) ##### 26 ![26](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0026.jpg) ##### 27 ![27](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0027.jpg) ##### 28 
![28](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0028.jpg) ##### 29 ![29](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0029.jpg) ##### 30 ![30](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0030.jpg) ##### 31 ![31](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0031.jpg) ##### 32 ![32](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0032.jpg) ##### 33 ![33](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0033.jpg) ##### 34 ![34](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0034.jpg) ##### 35 ![35](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0035.jpg) ##### 36 ![36](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0036.jpg) ##### 37 ![37](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0037.jpg) ##### 38 ![38](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0038.jpg) ##### 39 ![39](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0039.jpg) ##### 40 ![40](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0040.jpg) ##### 41 ![41](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0041.jpg) ##### 42 ![42](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0042.jpg) ##### 43 ![43](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0043.jpg) ##### 44 ![44](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0044.jpg) ##### 45 ![45](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0045.jpg) ##### 46 ![46](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0046.jpg) ##### 47 ![47](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0047.jpg) ##### 48 ![48](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0048.jpg) ##### 49 ![49](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0049.jpg) ##### 50 ![50](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0050.jpg) ##### 51 ![51](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0051.jpg) ##### 52 ![52](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0052.jpg) ##### 53 ![53](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0053.jpg) ##### 54 ![54](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0054.jpg) ##### 55 
![55](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0055.jpg) ##### 56 ![56](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0056.jpg) ##### 57 ![57](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0057.jpg) ##### 58 ![58](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0058.jpg) ##### 59 ![59](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0059.jpg) ##### 60 ![60](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0060.jpg) ##### 61 ![61](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0061.jpg) ##### 62 ![62](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0062.jpg) ##### 63 ![63](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0063.jpg) ##### 64 ![64](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0064.jpg) ##### 65 ![65](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0065.jpg) ##### 66 ![66](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0066.jpg) ##### 67 ![67](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0067.jpg) ##### 68 ![68](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0068.jpg) ##### 69 ![69](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0069.jpg) ##### 70 ![70](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0070.jpg) ##### 71 ![71](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0071.jpg) ##### 72 ![72](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0072.jpg) ##### 73 ![73](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0073.jpg) ##### 74 ![74](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0074.jpg) ##### 75 ![75](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0075.jpg) ##### 76 ![76](http://7xqhfk.com1.z0.glb.clouddn.com/zbml/lec08/0076.jpg)
``` import numpy as np import matplotlib.pyplot as plt import math import cv2 import os import tqdm import glob from statistics import mode from sklearn.cluster import KMeans from sklearn.cluster import MeanShift def loadFileName(directory): dealerIDir = glob.glob(directory+'/*') allVidir = [] for i in range(len(dealerIDir)): allVidir.append(glob.glob(dealerIDir[i]+'/*mp4')) return np.array(allVidir) def eulidist(rectlist,center): dx1_2 = (rectlist[0]-center[0])**2 dx2_2 = (rectlist[2]-center[0])**2 dy1_2 = (rectlist[1]-center[1])**2 dy2_2 = (rectlist[3]-center[1])**2 return dx1_2+dx2_2+dy1_2+dy2_2 def selectFaces(faces,center): disList = [] for i in range(len(faces)): (x,y,w,h) = faces[i] loc = np.array((x,y,x+w,y+h)) disList.append(eulidist(loc,center)) disList = np.array(disList) # print(faces) return faces[np.argmin(disList)] def extractFrame(mp4Dir): vid = cv2.VideoCapture(mp4Dir) allFrame = [] while vid.isOpened(): ret,frame = vid.read() if ret: allFrame.append(frame) else: break return np.array(allFrame) def extractXYWH(allFrame): face_cascade = cv2.CascadeClassifier('C:/Users/44754/Anaconda3/Lib/site-packages/cv2/data/haarcascade_frontalface_default.xml') imgList = allFrame.copy() for i in tqdm.tqdm(range(len(allFrame))): img = imgList[i].copy() gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray) if not len(faces): face = np.array((0,0,0,0)) else: center = np.array(imgList[i][:,:,::-1].shape)[:2][::-1]/2 (x,y,w,h) = selectFaces(faces,center) face = np.array((x,y,w,h)) if i == 0: coorList = face else: coorList = np.vstack((coorList,face)) return coorList def meanShiftCluster(coorList): ms = MeanShift(bandwidth=20) ms.fit(coorList[:,:2]) labels = ms.labels_ remainIdx = mode(labels) w = int(np.mean(coorList[labels==remainIdx,2])) h = int(np.mean(coorList[labels==remainIdx,3])) return labels,remainIdx,w,h def plotCluster(coorList,labels): xmin,xmax = np.min(coorList[:,0]),np.max(coorList[:,0]) ymin,ymax = 
np.min(coorList[:,1]),np.max(coorList[:,1]) numCol = int(np.ceil((len(np.unique(labels))+1)/2)) plt.figure(figsize=(20,10)) plt.subplot(2,numCol,1) plt.scatter(coorList[:,0],coorList[:,1],c=labels) plt.xlim([xmin,xmax]) plt.ylim([ymin,ymax]) for i in range(len(np.unique(labels))): plt.subplot(2,numCol,i+2) plt.scatter(coorList[labels==i,0],coorList[labels==i,1]) plt.xlim([xmin,xmax]) plt.ylim([ymin,ymax]) plt.title(np.sum(labels==i)) def modifyFrames(allFrame,labels,remainIdx,w,h): imgList = allFrame.copy() modList = [] for i in tqdm.tqdm(range(len(allFrame))): img = imgList[i].copy() if labels[i] == remainIdx: x,y = coorList[i,0],coorList[i,1] img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2) modList.append(img) modList = np.array(modList) return modList def extractFaces(allFrame,coorList,labels,remainIdx,w,h): imgList = allFrame.copy() facesList = [] for i in range(len(allFrame)): if labels[i] == remainIdx: img = imgList[i].copy() x,y = coorList[i,0],coorList[i,1] tmp = img[y:y+h,x:x+w] facesList.append(tmp) facesList = np.array(facesList) return facesList def dir2processedarr(mp4Dir): allFrame = extractFrame(mp4Dir) print(allFrame.shape) coorArr = extractXYWH(allFrame) labels,remainIdx,w,h = meanShiftCluster(coorArr) facesArr = extractFaces(allFrame,coorArr,labels,remainIdx,w,h) return facesArr def storeProcessedImgs(directory): currentdir = os.getcwd() # create train dir path = os.path.join(currentdir,'train') if not os.path.exists(path): os.makedirs(path) # list all IDs dealersID = os.listdir(directory) for i in range(len(dealersID)): print('{} dealers'.format(i)) tmppath = os.path.join(path,dealersID[i]) if not os.path.exists(tmppath): os.makedirs(tmppath) dealDir = os.path.join(directory,dealersID[i]) vidsID = os.listdir(dealDir) j = 0 # Each video for a specific dealer for vid in vidsID: mp4Dir = os.path.join(dealDir,vid) tmparr = dir2processedarr(mp4Dir) # saving for k in range(tmparr.shape[0]): savepath = os.path.join(tmppath,'{}.jpg'.format(j)) 
cv2.imwrite(savepath,tmparr[k]) j+=1 vidir = 'D:/DreamAI/videosubset' storeProcessedImgs(vidir) currentdir = os.getcwd() # create train dir path = os.path.join(currentdir,'train') # list all IDs dealersID = os.listdir(path) for ids in dealersID: tmp = os.path.join(path,ids) print('Dealer {}: '.format(ids)+str(len(glob.glob(tmp+'/*.jpg')))) ``` ### There are differences among the numbers of images for different dealers.
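The selection heuristic in `eulidist`/`selectFaces` — keep the detection whose box corners lie closest to the frame centre — can be sketched without OpenCV. The boxes and frame size below are made-up values, not real detections:

```python
import numpy as np

# Hypothetical detections as (x, y, w, h), e.g. from detectMultiScale.
faces = np.array([[ 10,  10, 40, 40],   # near the top-left corner
                  [300, 200, 50, 50],   # near the frame centre
                  [600, 400, 30, 30]])  # near the bottom-right corner

center = np.array([320, 240])           # centre of a 640x480 frame

# Same score as eulidist: summed squared distances of both box corners,
# (x, y) and (x + w, y + h), to the centre; keep the minimum.
x, y, w, h = faces.T
scores = ((x - center[0]) ** 2 + (x + w - center[0]) ** 2
          + (y - center[1]) ** 2 + (y + h - center[1]) ** 2)
best = faces[np.argmin(scores)]
print(best.tolist())  # [300, 200, 50, 50]
```

Vectorizing the score over all boxes at once, as here, avoids the per-box Python loop in `selectFaces` without changing which box wins.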
# Software Analytics Mini Tutorial

Part I: Jupyter Notebook and Python basics

## Introduction

This series of notebooks is a simple mini tutorial to introduce you to the basic functionality of Jupyter, Python, pandas and matplotlib. The comprehensive explanations should guide you to be able to analyze software data on your own. Therefore, the example is chosen in such a way that we come across the typical methods in a data analysis. Have fun!

*This is part I: The basics of Jupyter Notebook and Python. For the data analysis framework pandas and the visualization library matplotlib, go to [the other tutorial](10%20pandas%20and%20matplotlib%20basics.ipynb).*

## The Jupyter Notebook System

First, we'll take a closer look at Jupyter Notebook. What you see here is Jupyter, the interactive notebook environment for programming. Jupyter Notebook allows us to write code and documentation in executable **cells**. We see below a cell in which we can enter Python code.

#### Execute a cell

1. select the next cell (mouse click or arrow keys).
1. type in for example `"Hello World!"`.
1. execute the cell with `Ctrl`+`Enter`.
1. click on the cell again.
1. execute the cell with `Shift`+`Enter`.

##### Discussion

* What's the difference between the two ways of executing cells?

#### Create a new cell

We will use the built-in keyboard shortcuts for this:

1. if it hasn't happened yet, click on this cell.
1. enter **command mode**, selectable with the `Esc` key.
1. create a **new cell** after this text by pressing the key `b`.
1. change the **cell type** to **Markdown** with the key `m`.
1. switch to **edit mode** with `Enter`.
1. write a text.
1. execute the cell with `Ctrl`+`Enter`.

*Additional information:*

* We've seen an important feature of Jupyter Notebook: the distinction between **command mode** (accessible via the `Esc` key) and **edit mode** (accessible via the `Enter` key). Note the differences:
  * In command mode, the border of the current cell is blue.
This mode allows you to manipulate the **notebook**'s content.
  * In edit mode, the border turns green. This allows you to manipulate the **cell**'s content.
* **Markdown** is a simple markup language that can be used to write and format texts. This allows us to directly document the steps we have taken.

## Python Basics

Let's take a look at some basic Python programming constructs that we will need later when working with the pandas data analysis framework. We look at very basic functions:

* variable assignments
* value range accesses
* method calls

#### Assign text to a variable

1. **assign** the text **value** "Hello World" to the **variable** `text` by using the syntax `<variable> = <value>`.
1. type the variable `text` in the next line and execute the cell.
1. execute the cell (this will be necessary for each upcoming cell, so we won't mention it from now on).

#### Accessing slices of information

By using the array notation with the square brackets `[` and `]` (the slice operators), we can access the first letter of our text with a 0-based index (this also works for other types like lists).

1. access the first letter in `text` with `[0]`.

#### Select last character

1. access the last letter in `text` with `[-1]`.

#### Select ranges

1. access a range of `text` with the **slice** `[2:5]`.

#### Select open ranges

1. access an open range with the slice `[:5]` (which is an abbreviation for the 0-based slice `[0:5]`).

#### Reverse a list of values

1. reverse the text (or a list) by using the `::` notation with a following `-1`.

#### Use auto completion and execute a method

1. append a `.` to `text` and look at the functions with the `Tab` key.
1. find and execute the **method** `upper()` of the `text` object (tip: type a `u` when using auto completion).

#### Execute a method with parameters

...and find out how this works by using the integrated, interactive documentation:

1. select the `split` method of `text`.
1. press `Shift`+`Tab`.
1.
press `Shift`+`Tab` twice in quick succession.
1. Press `Shift`+`Tab` three times in quick succession (and then `Esc` to hide the documentation).
1. Read the documentation of `split`.
1. Split the text in `text` exactly once (**parameter** `maxsplit`) by using an `l` (a lowercase "L") as the separator (parameter `sep`).

*Note: What we are additionally using here are Python's keyword arguments. You can pass an input directly by assigning it to the method argument's name (e.g. `maxsplit=1`).*

## Summary

OK, we were just warming up! Proceed to [the next section](10%20pandas%20and%20matplotlib%20basics.ipynb).

If you want to dive deeper into this topic, take a look at my [blog posts on that topic](http://www.feststelltaste.de/category/software-analytics/) or my microsite [softwareanalytics.de](https://softwareanalytics.de/). I'm looking forward to your comments and feedback on [GitHub](https://github.com/feststelltaste/software-analytics-workshop/issues) or on [Twitter](https://www.twitter.com/feststelltaste)!
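As a recap, the string exercises from this notebook can be condensed into a single cell (using the same `text` value as in the steps above):

```python
text = "Hello World"

print(text[0])       # first letter: 'H'
print(text[-1])      # last letter: 'd'
print(text[2:5])     # slice: 'llo'
print(text[:5])      # open range: 'Hello'
print(text[::-1])    # reversed: 'dlroW olleH'
print(text.upper())  # method call: 'HELLO WORLD'

# keyword arguments: split only once, at the first lowercase "l"
print(text.split(sep="l", maxsplit=1))  # ['He', 'lo World']
```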
```
## plot the histogram showing the modeled and labeled result
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

# for-loop version: sort picks from a comparison file into categories
def read_comp(file):
    Pwave = {}
    Pwave['correct'] = []
    Pwave['wrongphase'] = []
    Pwave['miss'] = 0
    Pwave['multiphase'] = []
    with open(file, 'r') as comp:
        for line in comp:
            pickline = line.split()
            # labeled phase matches the modeled phase, exactly one pick
            if pickline[0].strip()[-1] == pickline[1].strip() and len(pickline) == 3:
                Pwave['correct'].append(float(pickline[2][:-2]))
            # modeled phase differs from the label (and is not a miss)
            if pickline[0].strip()[-1] != pickline[1].strip() and pickline[1].strip() != 'N':
                Pwave['wrongphase'].append(float(pickline[2][:-2]))
            # correct phase but more than one pick: store the pick count
            if pickline[0].strip()[-1] == pickline[1].strip() and len(pickline) > 3:
                Pwave['multiphase'].append(len(pickline) - 2)
            # no pick at all
            if pickline[1].strip() == 'N':
                Pwave['miss'] += 1
    return Pwave

# run all the output files
ty = ['EQS', 'EQP', 'SUS', 'SUP', 'THS', 'THP', 'SNS', 'SNP', 'PXS', 'PXP']
wave = {}
for name in ty:
    wave[name] = read_comp('../comparison_out/comp.' + name + '.out')

# plot a histogram of the values of one category
def plotfig(data, title='Phase with time difference'):
    fig, ax = plt.subplots(figsize=(10, 6))
    ax.hist(data, bins=10)
    ax.set_ylabel('number', fontsize=15)
    ax.set_xlabel('time difference (s)', fontsize=15)
    ax.set_title(title)
    plt.xticks(np.arange(-10, 10, step=1), fontsize=15)
    plt.yticks(fontsize=15)
    #plt.savefig('test.jpg')

# one histogram per phase type and category;
# 'miss' is a plain count rather than a list, so it is skipped here
for k in wave.keys():
    for t in ['correct', 'wrongphase', 'multiphase']:
        plotfig(wave[k][t], '{} phase with time difference for {}'.format(t, k))
        #plt.savefig('{}_{}.jpg'.format(k, t))

plt.hist(wave['EQS']['correct'])
#plt.xlim(0, 50)
plt.savefig('time_diff.jpg')

plotfig(wave['EQS']['wrongphase'])
plotfig(wave['EQS']['multiphase'])
plotfig(wave['EQS']['correct'])

# plot a histogram of the wrong-phase picks for EQP
fig, ax = plt.subplots(figsize=(10, 6))
#k = np.random.normal(float(test['EQS'][ID]['time']), 3, 1000)
ax.hist(wave['EQP']['wrongphase'])
ax.set_ylabel('number', fontsize=15)
ax.set_xlabel('time difference (s)', fontsize=15)
ax.set_title('Wrong phase with time difference')
plt.xticks(np.arange(-10, 10, step=1), fontsize=15)
plt.yticks(fontsize=15)

# plot a histogram of the multiphase pick counts for EQP
fig, ax = plt.subplots(figsize=(10, 6))
ax.hist(wave['EQP']['multiphase'])
ax.set_ylabel('number', fontsize=15)
ax.set_xlabel('number of picks', fontsize=15)
ax.set_title('Multiphase picks')
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)

comp = pd.read_csv('../comparison.out', names=['type', 'mod_type', 'time'], sep=' ')
comp.head()

comp['mod_type'].value_counts()

comp['time'][comp['type'] == 'THP'].describe()

comp = pd.read_csv('../comparison_out/comp.EQP.out', sep=r'\s+')
comp.head()

with open('../comparison_out/comp.EQP.out', 'r') as comp:
    leng = []
    for line in comp:
        pickline = line.split(' ')
        leng.append(len(pickline))
max(leng)
```
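As a self-contained sketch of the plotting pattern used above, here is the same histogram styling applied to synthetic residuals. The values are hypothetical stand-ins for something like `wave['EQS']['correct']`, since the real comparison files are specific to this analysis:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs outside the notebook
import matplotlib.pyplot as plt

# Hypothetical pick-time residuals (seconds), roughly centred on zero
rng = np.random.default_rng(42)
residuals = rng.normal(loc=0.0, scale=1.5, size=500)

fig, ax = plt.subplots(figsize=(10, 6))
counts, bin_edges, _ = ax.hist(residuals, bins=np.arange(-10, 11, 1))
ax.set_xlabel("time difference (s)", fontsize=15)
ax.set_ylabel("number", fontsize=15)
ax.set_title("Synthetic phase time differences")
fig.savefig("synthetic_time_diff.jpg")

print(int(counts.sum()))  # number of residuals that fell inside the +/-10 s range
```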
<img src="aiayn.png">

> When teaching, I emphasize implementation as a way to understand recent developments in ML. This post is an attempt to keep myself honest along this goal. The recent ["Attention is All You Need"](https://arxiv.org/abs/1706.03762) paper from NIPS 2017 has been an instantly impactful paper, presenting a new method for machine translation and potentially NLP generally. The paper is very clearly written, but the conventional wisdom has been that it is quite difficult to implement correctly.
>
> In this post I follow the paper through from start to finish and try to implement each component in code. (I have done some minor reordering and skipping from the original paper.) This document itself is a working notebook, and should be a completely usable and efficient implementation. To follow along you will first need to install [PyTorch](http://pytorch.org/) and [torchtext](https://github.com/pytorch/text). The complete code is available on [github](https://github.com/harvardnlp/annotated-transformer).
>
> - Alexander "Sasha" Rush ([@harvardnlp](https://twitter.com/harvardnlp))

```
# Standard PyTorch imports
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import math, copy
from torch.autograd import Variable

# For plots
%matplotlib inline
import matplotlib.pyplot as plt
```

# Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12].
In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22]. End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].

# Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure [(cite)](cho2014learning,bahdanau2014neural,sutskever14). Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive [(cite)](graves2013generating), consuming the previously generated symbols as additional input when generating the next.

```
class EncoderDecoder(nn.Module):
    """
    A standard Encoder-Decoder architecture. Base model for this and many other models.
""" def __init__(self, encoder, decoder, src_embed, tgt_embed, generator): super(EncoderDecoder, self).__init__() self.encoder = encoder self.decoder = decoder self.src_embed = src_embed self.tgt_embed = tgt_embed self.generator = generator def forward(self, src, tgt, src_mask, tgt_mask): "Take in and process masked src and target sequences." memory = self.encoder(self.src_embed(src), src_mask) output = self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask) return output ``` The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively. <img src="ModalNet-21.png" width=400px> ## Encoder and Decoder Stacks ### Encoder: The encoder is composed of a stack of $N=6$ identical layers. ``` def clones(module, N): "Produce N identical layers." return nn.ModuleList([copy.deepcopy(module) for _ in range(N)]) class Encoder(nn.Module): "Core encoder is a stack of N layers" def __init__(self, layer, N): super(Encoder, self).__init__() self.layers = clones(layer, N) self.norm = LayerNorm(layer.size) def forward(self, x, mask): "Pass the input (and mask) through each layer in turn." for layer in self.layers: x = layer(x, mask) return self.norm(x) ``` We employ a residual connection [(cite)](he2016deep) around each of the two sub-layers, followed by layer normalization [(cite)](layernorm2016). ``` class LayerNorm(nn.Module): "Construct a layernorm module (See citation for details)." 
def __init__(self, features, eps=1e-6): super(LayerNorm, self).__init__() self.a_2 = nn.Parameter(torch.ones(features)) self.b_2 = nn.Parameter(torch.zeros(features)) self.eps = eps def forward(self, x): mean = x.mean(-1, keepdim=True) std = x.std(-1, keepdim=True) return self.a_2 * (x - mean) / (std + self.eps) + self.b_2 ``` That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. We apply dropout [(cite)](srivastava2014dropout) to the output of each sub-layer, before it is added to the sub-layer input and normalized. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{\text{model}}=512$. ``` class SublayerConnection(nn.Module): """ A residual connection followed by a layer norm. Note for code simplicity we apply the norm first as opposed to last. """ def __init__(self, size, dropout): super(SublayerConnection, self).__init__() self.norm = LayerNorm(size) self.dropout = nn.Dropout(dropout) def forward(self, x, sublayer): "Apply residual connection to any sublayer function that maintains the same size." return x + self.dropout(sublayer(self.norm(x))) ``` Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. ``` class EncoderLayer(nn.Module): "Encoder is made up of two sublayers, self-attn and feed forward (defined below)" def __init__(self, size, self_attn, feed_forward, dropout): super(EncoderLayer, self).__init__() self.self_attn = self_attn self.feed_forward = feed_forward self.sublayer = clones(SublayerConnection(size, dropout), 2) self.size = size def forward(self, x, mask): "Follow Figure 1 (left) for connections." 
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)
```

### Decoder:

The decoder is also composed of a stack of $N=6$ identical layers.

```
class Decoder(nn.Module):
    "Generic N layer decoder with masking."
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)
```

In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization.

```
class DecoderLayer(nn.Module):
    "Decoder is made up of three sublayers, self-attn, src-attn, and feed forward (defined below)"
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        "Follow Figure 1 (right) for connections."
        m = memory
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        return self.sublayer[2](x, self.feed_forward)
```

We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.

```
def subsequent_mask(size):
    "Mask out subsequent positions."
attn_shape = (1, size, size) subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8') return torch.from_numpy(subsequent_mask) == 0 # The attention mask shows the position each tgt word (row) is allowed to look at (column). # Words are blocked for attending to future words during training. plt.figure(figsize=(5,5)) plt.imshow(subsequent_mask(20)[0]) ``` ### Attention: An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. We call our particular attention "Scaled Dot-Product Attention". The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. <img width="220px" src="ModalNet-19.png"> In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as: $$ \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V $$ ``` def attention(query, key, value, mask=None, dropout=0.0): "Compute 'Scaled Dot Product Attention'" d_k = query.size(-1) scores = torch.matmul(query, key.transpose(-2, -1)) \ / math.sqrt(d_k) if mask is not None: scores = scores.masked_fill(mask == 0, -1e9) p_attn = F.softmax(scores, dim = -1) # (Dropout described below) p_attn = F.dropout(p_attn, p=dropout) return torch.matmul(p_attn, value), p_attn ``` The two most commonly used attention functions are additive attention [(cite)](bahdanau2014neural), and dot-product (multiplicative) attention. 
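Before comparing the two, a quick sanity check of the scaled dot-product attention implemented above. The standalone NumPy sketch below (no mask, no dropout, so the cell runs on its own) verifies the output shape, that each row of attention weights is a probability distribution, and that unscaled dot products of random $d_k$-dimensional vectors indeed have variance close to $d_k$, which motivates the $\sqrt{d_k}$ divisor:

```python
import numpy as np

def np_attention(query, key, value):
    "NumPy sketch of scaled dot-product attention (no mask, no dropout)."
    d_k = query.shape[-1]
    scores = query @ key.transpose(0, 2, 1) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    p_attn = np.exp(scores)
    p_attn /= p_attn.sum(axis=-1, keepdims=True)   # softmax over keys
    return p_attn @ value, p_attn

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(2, 5, 64)) for _ in range(3))
out, p_attn = np_attention(q, k, v)
print(out.shape)                         # (2, 5, 64): one output vector per query
print(np.allclose(p_attn.sum(-1), 1.0))  # True: weights form a distribution

# Why the scaling: dot products of vectors with unit-variance components
# have variance ~ d_k, which would push the softmax into saturation.
dots = (rng.normal(size=(10000, 64)) * rng.normal(size=(10000, 64))).sum(-1)
print(dots.var())  # close to d_k = 64
```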
Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ [(cite)](DBLP:journals/corr/BritzGLL17). We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients (To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.). To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. ### Multi-Head Attention Instead of performing a single attention function with $d_{\text{model}}$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values: <img width="270px" src="ModalNet-20.png"> Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. 
$$ \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O \\ \text{where}~\mathrm{head_i} = \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i) $$ Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{d_{\text{model}} \times d_k}$, $W^K_i \in \mathbb{R}^{d_{\text{model}} \times d_k}$, $W^V_i \in \mathbb{R}^{d_{\text{model}} \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times d_{\text{model}}}$. In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=d_{\text{model}}/h=64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. ``` class MultiHeadedAttention(nn.Module): def __init__(self, h, d_model, dropout=0.1): "Take in model size and number of heads." super(MultiHeadedAttention, self).__init__() assert d_model % h == 0 # We assume d_v always equals d_k self.d_k = d_model // h self.h = h self.p = dropout self.linears = clones(nn.Linear(d_model, d_model), 4) self.attn = None def forward(self, query, key, value, mask=None): "Implements Figure 2" if mask is not None: # Same mask applied to all h heads. mask = mask.unsqueeze(1) nbatches = query.size(0) # 1) Do all the linear projections in batch from d_model => h x d_k query, key, value = [l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2) for l, x in zip(self.linears, (query, key, value))] # 2) Apply attention on all the projected vectors in batch. x, self.attn = attention(query, key, value, mask=mask, dropout=self.p) # 3) "Concat" using a view and apply a final linear. x = x.transpose(1, 2).contiguous().view(nbatches, -1, self.h * self.d_k) return self.linears[-1](x) ``` ### Applications of Attention in our Model The Transformer uses multi-head attention in three different ways: 1) In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. 
This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [(cite)](wu2016google, bahdanau2014neural,JonasFaceNet2017). 2) The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. 3) Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. ## Position-wise Feed-Forward Networks In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. $$\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2$$ While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $d_{\text{model}}=512$, and the inner-layer has dimensionality $d_{ff}=2048$. ``` class PositionwiseFeedForward(nn.Module): "Implements FFN equation." def __init__(self, d_model, d_ff, dropout=0.1): super(PositionwiseFeedForward, self).__init__() # Torch linears have a `b` by default. 
self.w_1 = nn.Linear(d_model, d_ff) self.w_2 = nn.Linear(d_ff, d_model) self.dropout = nn.Dropout(dropout) def forward(self, x): return self.w_2(self.dropout(F.relu(self.w_1(x)))) ``` ## Embeddings and Softmax Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $d_{\text{model}}$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [(cite)](press2016using). In the embedding layers, we multiply those weights by $\sqrt{d_{\text{model}}}$. ``` class Embeddings(nn.Module): def __init__(self, d_model, vocab): super(Embeddings, self).__init__() self.lut = nn.Embedding(vocab, d_model) self.d_model = d_model def forward(self, x): return self.lut(x) * math.sqrt(self.d_model) ``` ## Positional Encoding Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d_{\text{model}}$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [(cite)](JonasFaceNet2017). In this work, we use sine and cosine functions of different frequencies: $$ PE_{(pos,2i)} = sin(pos / 10000^{2i/d_{\text{model}}}) \\ PE_{(pos,2i+1)} = cos(pos / 10000^{2i/d_{\text{model}}}) $$ where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. 
We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of $P_{drop}=0.1$. ``` class PositionalEncoding(nn.Module): "Implement the PE function." def __init__(self, d_model, dropout, max_len=5000): super(PositionalEncoding, self).__init__() self.dropout = nn.Dropout(p=dropout) # Compute the positional encodings once in log space. pe = torch.zeros(max_len, d_model) position = torch.arange(0., max_len).unsqueeze(1) div_term = torch.exp(torch.arange(0., d_model, 2) * -(math.log(10000.0) / d_model)) pe[:, 0::2] = torch.sin(position * div_term) pe[:, 1::2] = torch.cos(position * div_term) pe = pe.unsqueeze(0) self.register_buffer('pe', pe) def forward(self, x): x = x + Variable(self.pe[:, :x.size(1)], requires_grad=False) return self.dropout(x) # The positional encoding will add in a sine wave based on position. # The frequency and offset of the wave is different for each dimension. plt.figure(figsize=(15, 5)) pe = PositionalEncoding(20, 0) y = pe.forward(Variable(torch.zeros(1, 100, 20))) plt.plot(np.arange(100), y[0, :, 4:8].data.numpy()) plt.legend(["dim %d"%p for p in [4,5,6,7]]) None ``` We also experimented with using learned positional embeddings [(cite)](JonasFaceNet2017) instead, and found that the two versions produced nearly identical results. We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. ## Generation ``` class Generator(nn.Module): "Standard generation step. 
(Not described in the paper.)"
    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return F.log_softmax(self.proj(x), dim=-1)
```

## Full Model

```
def make_model(src_vocab, tgt_vocab, N=6, d_model=512,
               d_ff=2048, h=8, dropout=0.1):
    "Construct a model object based on hyperparameters."
    c = copy.deepcopy
    attn = MultiHeadedAttention(h, d_model, dropout)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
        Generator(d_model, tgt_vocab))

    # This was important from their code. Initialize parameters with Glorot or fan_avg.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model

# Small example model.
tmp_model = make_model(10, 10, 2)
tmp_model
```

# Training

This section describes the training regime for our models.

## Training Data and Batching

We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [(cite)](DBLP:journals/corr/BritzGLL17), which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [(cite)](wu2016google). Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

## Hardware and Schedule

We trained our models on one machine with 8 NVIDIA P100 GPUs.
For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models, step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).

## Optimizer

We used the Adam optimizer [(cite)](kingma2014adam) with $\beta_1=0.9$, $\beta_2=0.98$ and $\epsilon=10^{-9}$. We varied the learning rate over the course of training, according to the formula: $$ lrate = d_{\text{model}}^{-0.5} \cdot \min({step\_num}^{-0.5}, {step\_num} \cdot {warmup\_steps}^{-1.5}) $$ This corresponds to increasing the learning rate linearly for the first $warmup\_steps$ training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used $warmup\_steps=4000$.

```
# Note: This part is incredibly important.
# The model needs to be trained with this setup; otherwise training is very unstable.
class NoamOpt:
    "Optim wrapper that implements rate."
    def __init__(self, model_size, factor, warmup, optimizer):
        self.optimizer = optimizer
        self._step = 0
        self.warmup = warmup
        self.factor = factor
        self.model_size = model_size
        self._rate = 0

    def step(self):
        "Update parameters and rate"
        self._step += 1
        rate = self.rate()
        for p in self.optimizer.param_groups:
            p['lr'] = rate
        self._rate = rate
        self.optimizer.step()

    def rate(self, step=None):
        "Implement `lrate` above"
        if step is None:
            step = self._step
        return self.factor * \
            (self.model_size ** (-0.5) *
             min(step ** (-0.5), step * self.warmup ** (-1.5)))

def get_std_opt(model):
    return NoamOpt(model.src_embed[0].d_model, 2, 4000,
                   torch.optim.Adam(model.parameters(), lr=0,
                                    betas=(0.9, 0.98), eps=1e-9))

# Three settings of the lrate hyperparameters.
opts = [NoamOpt(512, 1, 4000, None), NoamOpt(512, 1, 8000, None), NoamOpt(256, 1, 4000, None)] plt.plot(np.arange(1, 20000), [[opt.rate(i) for opt in opts] for i in range(1, 20000)]) plt.legend(["512:4000", "512:8000", "256:4000"]) None ``` ## Regularization ### Label Smoothing During training, we employed label smoothing of value $\epsilon_{ls}=0.1$ [(cite)](DBLP:journals/corr/SzegedyVISW15). This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score. ``` class LabelSmoothing(nn.Module): """ Implement label smoothing. """ def __init__(self, size, padding_idx, smoothing=0.0): super(LabelSmoothing, self).__init__() self.criterion = nn.KLDivLoss(size_average=False) self.padding_idx = padding_idx self.confidence = 1.0 - smoothing self.smoothing = smoothing self.size = size self.true_dist = None def forward(self, x, target): assert x.size(1) == self.size true_dist = x.data.clone() true_dist.fill_(self.smoothing / (self.size - 2)) true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence) true_dist[:, self.padding_idx] = 0 mask = torch.nonzero(target.data == self.padding_idx) if mask.dim() > 0 and mask.size(-1) > 0: true_dist.index_fill_(0, mask.squeeze(), 0.0) self.true_dist = true_dist return self.criterion(x, Variable(true_dist, requires_grad=False)) #Example crit = LabelSmoothing(5, 0, 0.5) predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0], [0, 0.2, 0.7, 0.1, 0], [0, 0.2, 0.7, 0.1, 0]]) v = crit(Variable(predict.log()), Variable(torch.LongTensor([2, 1, 0]))) # Show the target distributions expected by the system. 
plt.imshow(crit.true_dist)
None

# Label smoothing starts to penalize the model
# if it gets very confident about a given choice
crit = LabelSmoothing(5, 0, 0.2)
def loss(x):
    d = x + 3 * 1
    predict = torch.FloatTensor([[0, x / d, 1 / d, 1 / d, 1 / d]])
    #print(predict)
    return crit(Variable(predict.log()),
                Variable(torch.LongTensor([1]))).data[0]
plt.plot(np.arange(1, 100), [loss(x) for x in range(1, 100)])
```

### Memory Optimization

```
def loss_backprop(generator, criterion, out, targets, normalize):
    """
    Memory optimization. Compute each timestep separately and sum grads.
    """
    assert out.size(1) == targets.size(1)
    total = 0.0
    out_grad = []
    for i in range(out.size(1)):
        out_column = Variable(out[:, i].data, requires_grad=True)
        gen = generator(out_column)
        # targets stay LongTensors: LabelSmoothing scatters along integer indices
        loss = criterion(gen, targets[:, i]) / normalize
        total += loss.data[0]
        loss.backward()
        out_grad.append(out_column.grad.data.clone())
    out_grad = torch.stack(out_grad, dim=1)
    out.backward(gradient=out_grad)
    return total

def make_std_mask(src, tgt, pad):
    src_mask = (src != pad).unsqueeze(-2)
    tgt_mask = (tgt != pad).unsqueeze(-2)
    tgt_mask = tgt_mask & Variable(
        subsequent_mask(tgt.size(-1)).type_as(tgt_mask.data))
    return src_mask, tgt_mask

def train_epoch(train_iter, model, criterion, opt, transpose=False):
    model.train()
    for i, batch in enumerate(train_iter):
        src, trg, src_mask, trg_mask = \
            batch.src, batch.trg, batch.src_mask, batch.trg_mask
        out = model.forward(src, trg[:, :-1], src_mask, trg_mask[:, :-1, :-1])
        loss = loss_backprop(model.generator, criterion, out,
                             trg[:, 1:], batch.ntokens)
        # use the optimizer passed in rather than the global model_opt
        opt.step()
        opt.optimizer.zero_grad()
        if i % 10 == 1:
            print(i, loss, opt._rate)

def valid_epoch(valid_iter, model, criterion, transpose=False):
    model.eval()  # nn.Module has no test(); eval() disables dropout
    total = 0
    for batch in valid_iter:
        src, trg, src_mask, trg_mask = \
            batch.src, batch.trg, batch.src_mask, batch.trg_mask
        out = model.forward(src, trg[:, :-1], src_mask, trg_mask[:, :-1, :-1])
        loss = loss_backprop(model.generator, criterion, out, trg[:,
1:], batch.ntokens) class Batch: def __init__(self, src, trg, src_mask, trg_mask, ntokens): self.src = src self.trg = trg self.src_mask = src_mask self.trg_mask = trg_mask self.ntokens = ntokens def data_gen(V, batch, nbatches): for i in range(nbatches): data = torch.from_numpy(np.random.randint(1, V, size=(batch, 10))) src = Variable(data, requires_grad=False) tgt = Variable(data, requires_grad=False) src_mask, tgt_mask = make_std_mask(src, tgt, 0) yield Batch(src, tgt, src_mask, tgt_mask, (tgt[1:] != 0).data.sum()) V = 11 criterion = LabelSmoothing(size=V, padding_idx=0, smoothing=0.0) model = make_model(V, V, N=2) model_opt = get_std_opt(model) for epoch in range(2): train_epoch(data_gen(V, 30, 20), model, criterion, model_opt) ``` # A Real World Example ``` # For data loading. from torchtext import data, datasets !pip install torchtext spacy !python -m spacy download en !python -m spacy download de # Load words from IWSLT #!pip install torchtext spacy #!python -m spacy download en #!python -m spacy download de import spacy spacy_de = spacy.load('de') spacy_en = spacy.load('en') def tokenize_de(text): return [tok.text for tok in spacy_de.tokenizer(text)] def tokenize_en(text): return [tok.text for tok in spacy_en.tokenizer(text)] BOS_WORD = '<s>' EOS_WORD = '</s>' BLANK_WORD = "<blank>" SRC = data.Field(tokenize=tokenize_de, pad_token=BLANK_WORD) TGT = data.Field(tokenize=tokenize_en, init_token = BOS_WORD, eos_token = EOS_WORD, pad_token=BLANK_WORD) MAX_LEN = 100 train, val, test = datasets.IWSLT.splits(exts=('.de', '.en'), fields=(SRC, TGT), filter_pred=lambda x: len(vars(x)['src']) <= MAX_LEN and len(vars(x)['trg']) <= MAX_LEN) MIN_FREQ = 1 SRC.build_vocab(train.src, min_freq=MIN_FREQ) TGT.build_vocab(train.trg, min_freq=MIN_FREQ) # Detail. Batching seems to matter quite a bit. # This is temporary code for dynamic batching based on number of tokens. # This code should all go away once things get merged in this library. 
BATCH_SIZE = 4096
global max_src_in_batch, max_tgt_in_batch

def batch_size_fn(new, count, sofar):
    "Keep augmenting batch and calculate total number of tokens + padding."
    global max_src_in_batch, max_tgt_in_batch
    if count == 1:
        max_src_in_batch = 0
        max_tgt_in_batch = 0
    max_src_in_batch = max(max_src_in_batch, len(new.src))
    max_tgt_in_batch = max(max_tgt_in_batch, len(new.trg) + 2)
    src_elements = count * max_src_in_batch
    tgt_elements = count * max_tgt_in_batch
    return max(src_elements, tgt_elements)

class MyIterator(data.Iterator):
    def create_batches(self):
        if self.train:
            def pool(d, random_shuffler):
                for p in data.batch(d, self.batch_size * 100):
                    p_batch = data.batch(
                        sorted(p, key=self.sort_key),
                        self.batch_size, self.batch_size_fn)
                    for b in random_shuffler(list(p_batch)):
                        yield b
            self.batches = pool(self.data(), self.random_shuffler)
        else:
            self.batches = []
            for b in data.batch(self.data(), self.batch_size, self.batch_size_fn):
                self.batches.append(sorted(b, key=self.sort_key))

def rebatch(pad_idx, batch):
    "Fix order in torchtext to match ours."
    src, trg = batch.src.transpose(0, 1), batch.trg.transpose(0, 1)
    src_mask, trg_mask = make_std_mask(src, trg, pad_idx)
    # was `(trg[1:] != pad_idx)`: after the transpose, trg is batch-first
    return Batch(src, trg, src_mask, trg_mask, (trg[:, 1:] != pad_idx).data.sum())

train_iter = MyIterator(train, batch_size=BATCH_SIZE, device=0, repeat=False,
                        sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device=0, repeat=False,
                        sort_key=lambda x: (len(x.src), len(x.trg)),
                        batch_size_fn=batch_size_fn, train=False)

# Create the model and load it onto our GPU.
pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model_opt = get_std_opt(model)
model.cuda()

criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion.cuda()

for epoch in range(15):
    train_epoch((rebatch(pad_idx, b) for b in train_iter), model, criterion, model_opt)
    valid_epoch((rebatch(pad_idx, b) for b in valid_iter), model, criterion)
```

OTHER

```
BOS_WORD = '<s>'
EOS_WORD = '</s>'
BLANK_WORD = "<blank>"
SRC = data.Field()
TGT = data.Field(init_token=BOS_WORD, eos_token=EOS_WORD,
                 pad_token=BLANK_WORD)  # only target needs BOS/EOS

MAX_LEN = 100
train = datasets.TranslationDataset(
    path="/n/home00/srush/Data/baseline-1M_train.tok.shuf",
    exts=('.en', '.fr'), fields=(SRC, TGT),
    filter_pred=lambda x: len(vars(x)['src']) <= MAX_LEN and
                          len(vars(x)['trg']) <= MAX_LEN)
SRC.build_vocab(train.src, max_size=50000)
TGT.build_vocab(train.trg, max_size=50000)

pad_idx = TGT.vocab.stoi["<blank>"]
print(pad_idx)
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)  # was passed a stray `pad_idx` argument
model_opt = get_std_opt(model)                           # was the undefined `get_opt`
model.cuda()

# LabelSmoothing's keyword is `smoothing`, not `label_smoothing`
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion.cuda()

for epoch in range(15):
    train_epoch(train_iter, model, criterion, model_opt)
    valid_epoch(valid_iter, model, criterion)  # was `valid_epoch()` with no arguments

print(pad_idx)
print(len(SRC.vocab))

torch.save(model, "/n/rush_lab/trans_ipython.pt")

#weight = torch.ones(len(TGT.vocab))
#weight[pad_idx] = 0
#criterion = nn.NLLLoss(size_average=False, weight=weight.cuda())
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion.cuda()

for epoch in range(15):
    train_epoch(train_iter, model, criterion, model_opt)

1 10.825187489390373 6.987712429686844e-07
101 9.447168171405792 3.56373333914029e-05
201 7.142856806516647 7.057589553983712e-05
301 6.237934365868568 0.00010551445768827134
401 5.762486848048866 0.00014045301983670557
501 5.415792358107865 0.00017539158198513977
601 5.081815680023283 0.000210330144133574
701 4.788327748770826 0.00024526870628200823 801 4.381739928154275 0.0002802072684304424 901 4.55433791608084 0.00031514583057887664 1001 4.911875109748507 0.0003500843927273108 1101 4.0579032292589545 0.0003850229548757451 1201 4.2276234351193125 0.0004199615170241793 1301 3.932735869428143 0.00045490007917261356 1401 3.8179439397063106 0.0004898386413210477 1501 3.3608515430241823 0.000524777203469482 1601 3.832796103321016 0.0005597157656179162 1701 2.907085266895592 0.0005946543277663504 1801 3.5280659823838505 0.0006295928899147847 1901 2.895841649500653 0.0006645314520632189 2001 3.273784235585481 0.000699470014211653 2101 3.181488689899197 0.0007344085763600873 2201 3.4151616653980454 0.0007693471385085215 2301 3.4343731447588652 0.0008042857006569557 2401 3.0505455391539726 0.0008392242628053899 2501 2.8089329147478566 0.0008741628249538242 2601 2.7827929875456903 0.0009091013871022583 2701 2.4428516102489084 0.0009440399492506926 2801 2.4015486147254705 0.0009789785113991267 2901 2.3568112018401735 0.001013917073547561 3001 2.6349758653668687 0.0010488556356959952 3101 2.5981983028614195 0.0010837941978444295 3201 2.666826274838968 0.0011187327599928637 3301 3.0092043554177508 0.0011536713221412978 3401 2.4580375660589198 0.0011886098842897321 3501 2.586465588421561 0.0012235484464381662 3601 2.5663993963389657 0.0012584870085866006 3701 2.9430236657499336 0.0012934255707350347 3801 2.464644919440616 0.001328364132883469 3901 2.7124062888276512 0.0013633026950319032 4001 2.646443709731102 0.0013971932312809247 4101 2.7294750874862075 0.001380057517579748 4201 2.1295202329056337 0.0013635372009002666 4301 2.596563663915731 0.001347596306985731 4401 2.1265982036820787 0.0013322017384983986 4501 2.3880532500334084 0.0013173229858148 4601 2.6129120760888327 0.0013029318725783852 4701 2.2873719420749694 0.001289002331178292 4801 2.4949760700110346 0.0012755102040816328 4901 2.496607314562425 0.001262433067573089 5001 2.1889712483389303 0.0012497500749750088 
5101 1.8677761815488338 0.0012374418168536253 5201 2.2992054556962103 0.0012254901960784316 5301 2.664361578106707 0.0012138783159049418 5401 2.705850490485318 0.0012025903795063202 5501 2.581445264921058 0.0011916115995949978 5601 2.2480602325085783 0.0011809281169581616 5701 1.9289666265249252 0.0011705269268863989 5801 2.4863578918157145 0.0011603958126073107 5901 2.632946971571073 0.0011505232849492607 6001 2.496141305891797 0.0011408985275576757 6101 2.6422974687084206 0.0011315113470699342 6201 2.448802186456305 0.0011223521277270118 ```
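The smoothed target distribution that `LabelSmoothing` builds can be sketched in plain NumPy. This is an illustration of the same fill/scatter/zero-padding steps, not a replacement for the torch module:

```python
import numpy as np

def smoothed_targets(targets, vocab_size, padding_idx, smoothing):
    """Mirror LabelSmoothing.forward: put `1 - smoothing` on the gold token,
    spread `smoothing` over the remaining vocab_size - 2 slots (the padding
    column and the gold column are excluded), and zero out padded positions."""
    confidence = 1.0 - smoothing
    dist = np.full((len(targets), vocab_size), smoothing / (vocab_size - 2))
    dist[np.arange(len(targets)), targets] = confidence
    dist[:, padding_idx] = 0.0                       # no mass on the padding column
    dist[np.asarray(targets) == padding_idx] = 0.0   # whole row zeroed for padded targets
    return dist

# Same toy setup as the torch example above: vocab of 5, padding index 0.
dist = smoothed_targets([2, 1, 0], vocab_size=5, padding_idx=0, smoothing=0.4)
print(dist)
```

Each non-padding row sums to 1 (confidence plus the smoothing mass), and the third row, whose target is the padding index, is all zeros, matching the `crit.true_dist` image above.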
github_jupyter
# COVID and Ontario Licensed Child Care

```
import pandas as pd
import datetime
import io
from io import StringIO
import os
import requests
import urllib.request
import time
from bs4 import BeautifulSoup

%matplotlib inline
# import naming conventions
import numpy as np
import matplotlib.pyplot as plt

url = 'https://data.ontario.ca/dataset/5bf54477-6147-413f-bab0-312f06fcb388/resource/eee282d3-01e6-43ac-9159-4ba694757aea/download/lccactivecovid.csv'
response = requests.get(url)
s = requests.get(url).text

covid_df = pd.read_csv(StringIO(s))
```

## Toronto (new cases per date)

```
covid_tor_df = covid_df.loc[(covid_df['municipality'] == 'Toronto')]
covid_tor_df = covid_tor_df.set_index('collected_date')  # was a no-op without the assignment
covid_sum_tor = covid_tor_df.groupby('collected_date')['total_confirmed_cases'].sum().to_frame(name='sum')
covid_sum_tor
covid_sum_tor['sum'].plot(figsize=(10,5), kind='line', color='b')
```

## All municipalities in Ontario (inclusive of Toronto; new cases per date)

```
covid_all_df = covid_df
covid_all_df = covid_all_df.set_index('collected_date')  # was a no-op without the assignment
covid_sum_all = covid_all_df.groupby('collected_date')['total_confirmed_cases'].sum().to_frame(name='sum')
covid_sum_all['sum'].plot(figsize=(10,5), kind='line', color='r')

covid_tor_df.head()

# looking at Ys for friends
#covid_tor_df[covid_tor_df['lcc_name'].str.contains("YMCA")]
#ymca_df = covid_tor_df[covid_tor_df['lcc_name'].str.contains("YMCA")]
#ymca_df.groupby(["lcc_name","reported_date"])["total_confirmed_cases"].sum()

# looking at Kids and Co for friends
#kids_df = covid_tor_df[(covid_tor_df['lcc_name']).str.contains("Kids &")]
#kids_df.groupby(["lcc_name","reported_date"])["total_confirmed_cases"].sum()

covid_tor_df.lcc_name

from arcgis.gis import GIS
from arcgis.geocoding import geocode
from IPython.display import display

gis = GIS()
map1 = gis.map()
map1

# set the map's extent by geocoding the location
toronto = geocode("Toronto")[0]
map1.extent = toronto['extent']
map1.zoom = 14

url = 'http://opendata.toronto.ca/childrens.services/licensed-child-care-centres/child-care.csv'
response = requests.get(url)
s = requests.get(url).text

lcc_df = pd.read_csv(StringIO(s))

gis = GIS()
map1 = gis.map()
map1

# set the map's extent by geocoding the location
toronto = geocode("Toronto")[0]
map1.extent = toronto['extent']
map1.zoom = 14

lcc_df = lcc_df[['STR_NO', 'STREET', 'LOC_NAME', 'TOTSPACE', 'LONGITUDE', 'LATITUDE']]
lcc_df
lcc_df['ADDRESS'] = lcc_df['STR_NO'] + " " + lcc_df['STREET']
lcc_df = lcc_df[['LOC_NAME', 'ADDRESS', 'TOTSPACE', 'LONGITUDE', 'LATITUDE']]
lcc_df.columns = ['Name', 'Address', 'Total Capacity', 'Longitude', 'Latitude']
lcc_df

# was 'LOC_NAME', which no longer exists after the rename above
lcc_df[lcc_df['Name'].str.contains('Matthew-John Early Learning Centre')]

lcc_df.isna().sum()
lcc_df[lcc_df['Longitude'].isnull()]

# NEED TO FIGURE OUT WHAT TO DO WITH THE 7 NAN
lcc_df = lcc_df[lcc_df['Longitude'].notna()]
lcc_df.info()
lcc_df = lcc_df.reset_index()
del lcc_df['index']
```

## What is wrong with row 1019 - why it stops here
## Keep only the first 1018 rows (out of 1029) for now

Use this later: https://developers.arcgis.com/python/guide/accessing-and-creating-content/

```
lcc_df2 = lcc_df.head(1018)
lcc1 = gis.content.import_data(lcc_df2)
map1.add_layer(lcc1)
```
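The per-date aggregation used throughout this notebook is a plain groupby-sum. A toy frame (hypothetical numbers, same column names as the Ontario extract) shows the pattern:

```python
import pandas as pd

# Toy stand-in for the licensed-child-care COVID extract:
# one row per (centre, date) report.
toy = pd.DataFrame({
    'municipality': ['Toronto', 'Toronto', 'Ottawa', 'Toronto'],
    'collected_date': ['2020-09-01', '2020-09-01', '2020-09-01', '2020-09-02'],
    'total_confirmed_cases': [1, 2, 5, 3],
})

# Filter to one municipality, then sum cases per collection date.
tor = toy[toy['municipality'] == 'Toronto']
per_date = tor.groupby('collected_date')['total_confirmed_cases'].sum().to_frame(name='sum')
print(per_date)  # 2020-09-01 -> 3, 2020-09-02 -> 3
```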
``` %matplotlib inline ``` torchaudio Tutorial =================== PyTorch is an open source deep learning platform that provides a seamless path from research prototyping to production deployment with GPU support. Significant effort in solving machine learning problems goes into data preparation. ``torchaudio`` leverages PyTorch’s GPU support, and provides many tools to make data loading easy and more readable. In this tutorial, we will see how to load and preprocess data from a simple dataset. For this tutorial, please make sure the ``matplotlib`` package is installed for easier visualization. ``` import torch import torchaudio import matplotlib.pyplot as plt ``` Opening a file ----------------- ``torchaudio`` also supports loading sound files in the wav and mp3 format. We call waveform the resulting raw audio signal. ``` filename = "../_static/img/steam-train-whistle-daniel_simon-converted-from-mp3.wav" waveform, sample_rate = torchaudio.load(filename) print("Shape of waveform: {}".format(waveform.size())) print("Sample rate of waveform: {}".format(sample_rate)) plt.figure() plt.plot(waveform.t().numpy()) ``` When you load a file in ``torchaudio``, you can optionally specify the backend to use either `SoX <https://pypi.org/project/sox/>`_ or `SoundFile <https://pypi.org/project/SoundFile/>`_ via ``torchaudio.set_audio_backend``. These backends are loaded lazily when needed. ``torchaudio`` also makes JIT compilation optional for functions, and uses ``nn.Module`` where possible. Transformations --------------- ``torchaudio`` supports a growing list of `transformations <https://pytorch.org/audio/transforms.html>`_. - **Resample**: Resample waveform to a different sample rate. - **Spectrogram**: Create a spectrogram from a waveform. - **GriffinLim**: Compute waveform from a linear scale magnitude spectrogram using the Griffin-Lim transformation. - **ComputeDeltas**: Compute delta coefficients of a tensor, usually a spectrogram. 
- **ComplexNorm**: Compute the norm of a complex tensor. - **MelScale**: This turns a normal STFT into a Mel-frequency STFT, using a conversion matrix. - **AmplitudeToDB**: This turns a spectrogram from the power/amplitude scale to the decibel scale. - **MFCC**: Create the Mel-frequency cepstrum coefficients from a waveform. - **MelSpectrogram**: Create MEL Spectrograms from a waveform using the STFT function in PyTorch. - **MuLawEncoding**: Encode waveform based on mu-law companding. - **MuLawDecoding**: Decode mu-law encoded waveform. - **TimeStretch**: Stretch a spectrogram in time without modifying pitch for a given rate. - **FrequencyMasking**: Apply masking to a spectrogram in the frequency domain. - **TimeMasking**: Apply masking to a spectrogram in the time domain. Each transform supports batching: you can perform a transform on a single raw audio signal or spectrogram, or many of the same shape. Since all transforms are ``nn.Modules`` or ``jit.ScriptModules``, they can be used as part of a neural network at any point. To start, we can look at the log of the spectrogram on a log scale. ``` specgram = torchaudio.transforms.Spectrogram()(waveform) print("Shape of spectrogram: {}".format(specgram.size())) plt.figure() plt.imshow(specgram.log2()[0,:,:].numpy(), cmap='gray') ``` Or we can look at the Mel Spectrogram on a log scale. ``` specgram = torchaudio.transforms.MelSpectrogram()(waveform) print("Shape of spectrogram: {}".format(specgram.size())) plt.figure() p = plt.imshow(specgram.log2()[0,:,:].detach().numpy(), cmap='gray') ``` We can resample the waveform, one channel at a time. 
```
new_sample_rate = sample_rate/10

# Since Resample applies to a single channel, we resample first channel here
channel = 0
transformed = torchaudio.transforms.Resample(sample_rate, new_sample_rate)(waveform[channel,:].view(1,-1))

print("Shape of transformed waveform: {}".format(transformed.size()))

plt.figure()
plt.plot(transformed[0,:].numpy())
```

As another example of transformations, we can encode the signal based on
Mu-Law encoding. But to do so, we need the signal to be between -1 and
1. Since the tensor is just a regular PyTorch tensor, we can apply
standard operators on it.

```
# Let's check if the tensor is in the interval [-1,1]
print("Min of waveform: {}\nMax of waveform: {}\nMean of waveform: {}".format(waveform.min(), waveform.max(), waveform.mean()))
```

Since the waveform is already between -1 and 1, we do not need to
normalize it.

```
def normalize(tensor):
    # Subtract the mean, and scale to the interval [-1,1]
    tensor_minusmean = tensor - tensor.mean()
    return tensor_minusmean/tensor_minusmean.abs().max()

# Let's normalize to the full interval [-1,1]
# waveform = normalize(waveform)
```

Let's apply the Mu-Law encoding to the waveform.

```
transformed = torchaudio.transforms.MuLawEncoding()(waveform)

print("Shape of transformed waveform: {}".format(transformed.size()))

plt.figure()
plt.plot(transformed[0,:].numpy())
```

And now decode.

```
reconstructed = torchaudio.transforms.MuLawDecoding()(transformed)

print("Shape of recovered waveform: {}".format(reconstructed.size()))

plt.figure()
plt.plot(reconstructed[0,:].numpy())
```

We can finally compare the original waveform with its reconstructed
version.

```
# Compute median relative difference
err = ((waveform-reconstructed).abs() / waveform.abs()).median()
print("Median relative difference between original and MuLaw reconstructed signals: {:.2%}".format(err))
```

Functional
---------------

The transformations seen above rely on lower level stateless functions
for their computations.
These functions are available under ``torchaudio.functional``. The
complete list is available `here <https://pytorch.org/audio/functional.html>`_
and includes:

-  **istft**: Inverse short time Fourier Transform.
-  **gain**: Applies amplification or attenuation to the whole waveform.
-  **dither**: Increases the perceived dynamic range of audio stored at a particular bit-depth.
-  **compute_deltas**: Compute delta coefficients of a tensor.
-  **equalizer_biquad**: Design biquad peaking equalizer filter and perform filtering.
-  **lowpass_biquad**: Design biquad lowpass filter and perform filtering.
-  **highpass_biquad**: Design biquad highpass filter and perform filtering.

For example, let's try the `mu_law_encoding` functional:

```
mu_law_encoding_waveform = torchaudio.functional.mu_law_encoding(waveform, quantization_channels=256)

print("Shape of transformed waveform: {}".format(mu_law_encoding_waveform.size()))

plt.figure()
plt.plot(mu_law_encoding_waveform[0,:].numpy())
```

You can see how the output from ``torchaudio.functional.mu_law_encoding`` is the same as
the output from ``torchaudio.transforms.MuLawEncoding``.

Now let's experiment with a few of the other functionals and visualize their output. Taking our
spectrogram, we can compute its deltas:

```
computed = torchaudio.functional.compute_deltas(specgram, win_length=3)
print("Shape of computed deltas: {}".format(computed.shape))

plt.figure()
plt.imshow(computed.log2()[0,:,:].detach().numpy(), cmap='gray')
```

We can take the original waveform and apply different effects to it.
```
gain_waveform = torchaudio.functional.gain(waveform, gain_db=5.0)
print("Min of gain_waveform: {}\nMax of gain_waveform: {}\nMean of gain_waveform: {}".format(gain_waveform.min(), gain_waveform.max(), gain_waveform.mean()))

dither_waveform = torchaudio.functional.dither(waveform)
print("Min of dither_waveform: {}\nMax of dither_waveform: {}\nMean of dither_waveform: {}".format(dither_waveform.min(), dither_waveform.max(), dither_waveform.mean()))
```

Another example of the capabilities in ``torchaudio.functional`` is applying filters to our
waveform. Applying the lowpass biquad filter to our waveform will output a new waveform with
the high-frequency content of the signal attenuated.

```
lowpass_waveform = torchaudio.functional.lowpass_biquad(waveform, sample_rate, cutoff_freq=3000)

print("Min of lowpass_waveform: {}\nMax of lowpass_waveform: {}\nMean of lowpass_waveform: {}".format(lowpass_waveform.min(), lowpass_waveform.max(), lowpass_waveform.mean()))

plt.figure()
plt.plot(lowpass_waveform.t().numpy())
```

We can also visualize a waveform with the highpass biquad filter.

```
highpass_waveform = torchaudio.functional.highpass_biquad(waveform, sample_rate, cutoff_freq=2000)

print("Min of highpass_waveform: {}\nMax of highpass_waveform: {}\nMean of highpass_waveform: {}".format(highpass_waveform.min(), highpass_waveform.max(), highpass_waveform.mean()))

plt.figure()
plt.plot(highpass_waveform.t().numpy())
```

Migrating to torchaudio from Kaldi
----------------------------------

Users may be familiar with `Kaldi <http://github.com/kaldi-asr/kaldi>`_, a toolkit for speech
recognition. ``torchaudio`` offers compatibility with it in ``torchaudio.kaldi_io``.
It can indeed read from kaldi scp, or ark file or streams with:

-  read_vec_int_ark
-  read_vec_flt_scp
-  read_vec_flt_arkfile/stream
-  read_mat_scp
-  read_mat_ark

``torchaudio`` provides Kaldi-compatible transforms for ``spectrogram``,
``fbank``, ``mfcc``, and ``resample_waveform`` with the benefit of GPU support, see
`here <compliance.kaldi.html>`__ for more information.

```
n_fft = 400.0
frame_length = n_fft / sample_rate * 1000.0
frame_shift = frame_length / 2.0

params = {
    "channel": 0,
    "dither": 0.0,
    "window_type": "hanning",
    "frame_length": frame_length,
    "frame_shift": frame_shift,
    "remove_dc_offset": False,
    "round_to_power_of_two": False,
    "sample_frequency": sample_rate,
}

specgram = torchaudio.compliance.kaldi.spectrogram(waveform, **params)

print("Shape of spectrogram: {}".format(specgram.size()))

plt.figure()
plt.imshow(specgram.t().numpy(), cmap='gray')
```

We also support computing the filterbank features from waveforms,
matching Kaldi's implementation.

```
fbank = torchaudio.compliance.kaldi.fbank(waveform, **params)

print("Shape of fbank: {}".format(fbank.size()))

plt.figure()
plt.imshow(fbank.t().numpy(), cmap='gray')
```

You can create mel frequency cepstral coefficients from a raw audio signal.
This matches the input/output of Kaldi's compute-mfcc-feats.

```
mfcc = torchaudio.compliance.kaldi.mfcc(waveform, **params)

print("Shape of mfcc: {}".format(mfcc.size()))

plt.figure()
plt.imshow(mfcc.t().numpy(), cmap='gray')
```

Available Datasets
------------------

If you do not want to create your own dataset to train your model, ``torchaudio`` offers a
unified dataset interface. This interface supports lazy-loading of files to memory, download
and extract functions, and datasets to build models.

The datasets ``torchaudio`` currently supports are:

-  **VCTK**: Speech data uttered by 109 native speakers of English with various accents
   (`Read more here <https://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html>`_).
-  **Yesno**: Sixty recordings of one individual saying yes or no in Hebrew; each
   recording is eight words long (`Read more here <https://www.openslr.org/1/>`_).
-  **Common Voice**: An open source, multi-language dataset of voices that anyone can use
   to train speech-enabled applications (`Read more here <https://voice.mozilla.org/en/datasets>`_).
-  **LibriSpeech**: Large-scale (1000 hours) corpus of read English speech (`Read more here <http://www.openslr.org/12>`_).

```
yesno_data = torchaudio.datasets.YESNO('./', download=True)

# A data point in Yesno is a tuple (waveform, sample_rate, labels) where labels is a list of integers with 1 for yes and 0 for no.

# Pick data point number 3 to see an example of the yesno_data:
n = 3
waveform, sample_rate, labels = yesno_data[n]

print("Waveform: {}\nSample rate: {}\nLabels: {}".format(waveform, sample_rate, labels))

plt.figure()
plt.plot(waveform.t().numpy())
```

Now, whenever you ask for a sound file from the dataset, it is loaded in memory only when you
ask for it. Meaning, the dataset only loads and keeps in memory the items that you want and
use, saving on memory.

Conclusion
----------

We used an example raw audio signal, or waveform, to illustrate how to open an audio file
using ``torchaudio``, and how to pre-process, transform, and apply functions to such waveform.
We also demonstrated how to use familiar Kaldi functions, as well as utilize built-in datasets
to construct our models. Given that ``torchaudio`` is built on PyTorch, these techniques can be
used as building blocks for more advanced audio applications, such as speech recognition,
while leveraging GPUs.
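The mu-law companding behind ``MuLawEncoding``/``MuLawDecoding`` follows the standard formula F(x) = sign(x) * log(1 + mu*|x|) / log(1 + mu), with mu = 255 for 256 quantization channels (torchaudio's default). The NumPy sketch below illustrates the math of the round trip; it is not torchaudio's implementation.

```python
import numpy as np

MU = 255  # quantization_channels - 1

def mu_law_encode(x, mu=MU):
    """Compress x in [-1, 1] with mu-law, then quantize to integer codes in [0, mu]."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu + 0.5).astype(np.int64)  # round to nearest code

def mu_law_decode(codes, mu=MU):
    """Map codes back to [-1, 1] and apply the inverse (expanding) transform."""
    y = 2 * (codes.astype(np.float64) / mu) - 1
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1, 1, 101)
codes = mu_law_encode(x)
x_hat = mu_law_decode(codes)
print(np.abs(x - x_hat).max())  # small residual quantization error
```

The non-uniform spacing is the point of mu-law: quiet parts of the signal get finer quantization steps than loud parts, which is why the decoded waveform above tracks the original so closely.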
```
import numpy as np
import pandas as pd
from pandas.io.json import json_normalize
from IPython.core.display import display, HTML
import locale
locale.setlocale(locale.LC_ALL, 'en_US')
from concurrent.futures import ProcessPoolExecutor
import multiprocessing
from tqdm import tqdm_notebook as tqdm

timeout = 3600  # Timeout for building bipartite graph.
num_processes = int(multiprocessing.cpu_count())

DATA_DOI = 'doi:10.7910/DVN/DCBDEP'
DATAFILE_URL = 'https://dataverse.harvard.edu/api/access/datafile/{id}?format=original'
DATASETS_URL = 'https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1/files?persistentId={}'.format(DATA_DOI)

import time
import graph_tool as gt
from graph_tool.stats import vertex_average

def buildGraph(df, debug=False, timeout=900):
    if debug:
        tic = time.time()
    b = gt.Graph(directed=False)
    b.add_edge_list(df[['post', 'user']].values)
    if debug:
        # labels fixed: num_vertices() are vertices, num_edges() are edges
        print("Bipartite vertices: %d, edges: %d" % (b.num_vertices(), b.num_edges()))
        print("Average degree:", vertex_average(b, "total"))
        print("Time", time.time() - tic)
        tic = time.time()
    user_count = df.user.value_counts()
    p = projected_graph(b, user_count[user_count >= 2].index, timeout=timeout)
    if debug:
        print("Projection vertices: %d, edges: %d" % (p.num_vertices(), p.num_edges()))
        print("Average degree:", vertex_average(p, "total"))
        print("Time", time.time() - tic)
        # degree_sequence = sorted(p.get_out_degrees(p.get_vertices()), reverse=True)
    del b
    return p

def projected_graph(b, nodes, timeout=900):
    tic = time.time()
    p = gt.Graph(directed=False)
    try:
        for u in nodes:
            nbrs2 = [n for nbr in b.vertex(u).all_neighbours() for n in b.vertex(nbr).all_neighbours()]
            p.add_edge_list((u, n) for n in nbrs2)
            if time.time() - tic > timeout:
                raise
    except:
        p = gt.Graph(directed=False)
    return p

def extractUserInteractions(uri):
    df = pd.read_csv(uri, index_col=False, engine='c', dtype={'post': np.int64, 'user': np.float64, 'type': str})
    df = df[df['type'] == 'C']
    df.dropna(inplace=True)
    _map = df['user'].drop_duplicates().reset_index(drop=True)
    _map = pd.Series(_map.index.values, index=_map)
    df['user'] = df['user'].map(_map)
    _map = df['post'].drop_duplicates().reset_index(drop=True)
    _map = pd.Series(_map.index.values, index=_map)
    _map += len(df['user'].unique()) + 10
    df['post'] = df['post'].map(_map)
    df['user'] = df['user'].astype(np.uint32, casting='safe')
    df['post'] = df['post'].astype(np.uint32, casting='safe')
    # print("Read %d interactions for the page %d" % (len(df), page))
    return df

def download_datafile(dataFile_id, overwrite=True):
    import shutil
    import requests
    import os.path
    try:
        r = requests.get(DATAFILE_URL.format(id=dataFile_id), stream=True, allow_redirects=True, timeout=1)
        filename = r.headers.get('content-disposition')
        filename = filename.split("'")[2] if len(filename.split("'")) > 2 else filename.split('"')[1]
        try:
            if overwrite or not os.path.isfile('data/{}'.format(filename)):
                with open('data/{}'.format(filename), 'wb') as fd:
                    shutil.copyfileobj(r.raw, fd)
            return (filename, dataFile_id)
        except:
            if os.path.isfile('data/{}'.format(filename)):
                os.remove('data/{}'.format(filename))
    except:
        pass
    return (None, dataFile_id)

def process_file(filename, timeout=900):
    try:
        if filename in ['00__combinedPageInteractions.csv']:
            raise
        df = pd.read_csv('data/' + filename, index_col=False, dtype={'post': np.int64, 'user': np.float64, 'type': str})
        types = df.type.value_counts()
        graph = buildGraph(df[df['type'] == 'C'].dropna(), debug=False, timeout=timeout)
        return {filename.split('.')[0]: [len(df.post.unique()), len(df.user.unique()),
                                         types['C'] if 'C' in types.index else 0,
                                         types['L'] if 'L' in types.index else 0,
                                         graph.num_edges() if graph.num_edges() else np.nan,
                                         graph.num_vertices() if graph.num_vertices() else np.nan]}
    except:
        pass
    return {filename.split('.')[0]: []}

files = json_normalize(data=pd.read_json(DATASETS_URL)['data']).set_index('label')
file_ids = files['dataFile.id']
filenames = set()
for _ in range(5):  # Try five times to download all
files from Dataverse. # Process the rows in chunks in parallel with ProcessPoolExecutor(num_processes) as pool: fn = list(tqdm(pool.map(download_datafile, file_ids, chunksize=1), total=len(file_ids))) filenames = filenames.union(set(x[0] for x in fn if x[0])) file_ids = [x[1] for x in fn if not x[0]] if len(file_ids) == 0: break len(filenames) with ProcessPoolExecutor(4) as pool: pages = list(tqdm(pool.map(process_file, filenames, chunksize=1), total=len(filenames))) stats = pd.DataFrame.from_dict(dict((key,d[key]) for d in pages for key in d), orient='index', columns=["Posts", "Users", "Comments", "Likes", 'Edges', 'Nodes']) stats.to_pickle('stats_01.pkl') print(len(stats.drop(['00__combinedPageInteractions'], errors='ignore'))) s = (stats.drop(['00__combinedPageInteractions'], errors='ignore').describe() .T[['mean','std','min','25%', '50%', '75%', 'max']]) s.columns.name = 'Metric' s.index.name = None s['Sum'] = stats.sum() s.rename(columns={'mean': "Mean",'std': 'Std.','min': 'Min','25%': '$Q1$', '50%': 'Median', '75%': '$Q3$', 'max': 'Max'}, inplace=True) s.to_latex('article/pageStats.tex', bold_rows=True, escape=False, float_format=lambda x: "$%s$" % locale.format("%d", x, grouping=True)) display(HTML(s.to_html(float_format=lambda x: locale.format("%d", x, grouping=True)))) names = pd.read_csv("data/names.csv", dtype={0: str, 1: str}).set_index('Page') s = stats.copy().drop(['00__combinedPageInteractions'], errors='ignore') s['Id'] = s.index s['Edges'] = [pd.np.nan if x == 0.0 else x for x in s['Edges']] s['Nodes'] = [pd.np.nan if x == 0.0 else x for x in s['Nodes']] print(len(s)) def name_mapping(x, latex_escape=False): if type(names.loc[x.Id].Name) is float: ret = str(x.Id) else: if latex_escape: ret = (str(names.loc[x.Id].Name).replace('\\', '\\textbackslash ') .replace('_', '\\_') .replace('%', '\\%').replace('$', '\\$') .replace('#', '\\#').replace('{', '\\{') .replace('}', '\\}').replace('~', '\\textasciitilde ') .replace('^', '\\textasciicircum ') 
.replace('&', '\\&')) else: ret = str(names.loc[x.Id].Name) if pd.np.isnan(x['Edges']): ret = ret + "$^\ddagger$" return ret """Save table as latex file.""" s['Name'] = s.apply(lambda x: name_mapping(x, True), axis=1) (s.fillna(np.inf).sort_values(['Edges','Nodes', 'Users']) .to_latex('article/allPages.tex', escape=False, longtable=True, index=False, columns=['Name', 'Id', 'Posts', 'Users', 'Comments', 'Likes','Edges','Nodes'], float_format=lambda x: "" if x == np.inf else locale.format("%d", x, grouping=True))) """Display table (after remapping the Name).""" s['Name'] = s.apply(name_mapping, axis=1) display(HTML(s.fillna(pd.np.inf).sort_values(['Edges','Nodes', 'Users']) .to_html(index=False, columns=['Name', 'Id', 'Posts', 'Users', 'Comments', 'Likes','Edges','Nodes'], float_format=lambda x: "" if x == np.inf else locale.format("%d", x, grouping=True)))) ```
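The one-mode projection computed by `projected_graph` above (linking two users whenever they commented on a common post) can be illustrated without graph_tool on a toy edge list; this sketch returns the projected edge set rather than a graph object:

```python
from collections import defaultdict
from itertools import combinations

# Toy interaction table: (post, user) pairs, mirroring the bipartite
# comment graph built with graph_tool above (illustration only).
interactions = [(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), (3, 'd')]

def project_users(edges):
    """One-mode projection: two users are linked if they commented
    on at least one common post."""
    by_post = defaultdict(set)
    for post, user in edges:
        by_post[post].add(user)
    proj = set()
    for users in by_post.values():
        for u, v in combinations(sorted(users), 2):
            proj.add((u, v))
    return proj

print(project_users(interactions))  # {('a', 'b'), ('b', 'c')} (order may vary)
```

User `d` is isolated in the projection because no one else commented on post 3, which is why the notebook restricts the projection to users with at least two interactions.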
### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.

```
# Dependencies and Setup
import pandas as pd

# File to Load (Remember to Change These)
file_to_load = "purchase_data.csv"

# Read Purchasing File and store into Pandas data frame
purchase_data_df = pd.read_csv(file_to_load)
```

## Player Count

* Display the total number of players

```
# Counting the length of the unique values in the SN column of the data frame
count = len(purchase_data_df["SN"].unique())

# Putting the count into a dictionary inside a list
count1 = [{"Total Players": count}]

# Creating a data frame to display the total number of players
noofplayers_df = pd.DataFrame(count1)

# Printing the data frame
noofplayers_df
```

## Purchasing Analysis (Total)

* Run basic calculations to obtain number of unique items, average price, etc.
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame

```
# Calculating unique items, average price, number of purchases and total revenue
uniitems = len(purchase_data_df["Item ID"].unique())
avgprice = purchase_data_df["Price"].mean(axis=0)
noofpurchases = purchase_data_df["Purchase ID"].count()
totalrevenue = purchase_data_df["Price"].sum()

# Inserting all new calculations into a summary table
summary_df = [{"Number of Unique Items": uniitems, "Average Price": avgprice,
               "Number of Purchases": noofpurchases, "Total Revenue": totalrevenue}]
summary_df = pd.DataFrame(summary_df)

# Formatting the Average Price and Total Revenue columns to include a $ sign and decimal points.
summary_df["Average Price"] = summary_df["Average Price"].map("${:.2f}".format) summary_df["Total Revenue"] = summary_df["Total Revenue"].map("${:,}".format) #Aligning the data to align left summary_df = summary_df.style.set_properties(**{'text-align':'left'}) #Prining the data frame summary_df ``` ## Gender Demographics * Percentage and Count of Male Players * Percentage and Count of Female Players * Percentage and Count of Other / Non-Disclosed ``` #Grouping the data frame by Gender and getting unique count of column SN groupedpurchase = purchase_data_df.groupby('Gender').agg({ "SN": "nunique"}) #Inserting new column called Total Count groupedpurchase.columns=["Total Count"] #Inserting new column called Percentage of Players and calculating the percentange of total players by Gender groupedpurchase["Percentage of Players"] = (groupedpurchase["Total Count"]/purchase_data_df["SN"].nunique())*100 #Dropping the name of the index groupedpurchase.index.name=None #Formatting the column Percentage of Players to have % sign and two decimal points groupedpurchase["Percentage of Players"] = groupedpurchase["Percentage of Players"].map("{:.2f}%".format) #Sorting the data frame by Total Count groupedpurchase = groupedpurchase.sort_values("Total Count", ascending=False) #Aligning all the text to be left indented groupedpurchase = groupedpurchase.style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well groupedpurchase.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) #Printing the table groupedpurchase ``` ## Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. 
by gender * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` #Grouping the data frame by Gender and getting unique count of column SN, count of Purchase Id and mean & sum of Price purchanalysis = purchase_data_df.groupby('Gender').agg({"Purchase ID": "count", "Price": ["mean", "sum"], "SN": "nunique"}) #Inserting new columns purchanalysis.columns=["Purchase Count", "Average Purchase Price", "Total Purchase Value", "SN"] #Calculating Avg Total Purchase per Person and inserting the column purchanalysis["Avg Total Purchase per Person"] = purchanalysis["Total Purchase Value"]/purchanalysis["SN"] #Deleting column SN del purchanalysis ["SN"] #Formatting Average Purchase Price, Total Purchase Value and Avg Total Purchase per Person to have $ sign and two decimal points purchanalysis["Average Purchase Price"] = purchanalysis["Average Purchase Price"].map("${:.2f}".format) purchanalysis["Total Purchase Value"] = purchanalysis["Total Purchase Value"].map("${:.2f}".format) purchanalysis["Avg Total Purchase per Person"] = purchanalysis["Avg Total Purchase per Person"].map("${:.2f}".format) #Aligning all the text to be left indented purchanalysis = purchanalysis.style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well purchanalysis.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) ``` ## Age Demographics * Establish bins for ages * Categorize the existing players using the age bins. 
Hint: use pd.cut() * Calculate the numbers and percentages by age group * Create a summary data frame to hold the results * Optional: round the percentage column to two decimal points * Display Age Demographics Table ``` #Establishing bins for ages bins = [0, 9, 14, 19, 24, 29, 34, 39, 45] #Creating labels for the bin group_labels = ["<10","10-14","15-19","20-24","25-29", "30-34", "35-39", "40+"] #Using cut method to slice the data by Age, applying group labels and to include the lowest range pd.cut(purchase_data_df["Age"], bins, labels=group_labels).head() purchase_data_df["Age Ranges"] = pd.cut(purchase_data_df["Age"], bins, labels=group_labels, include_lowest=True) #Grouping the data frame by new column and include unique counts of column SN to get total count by age Agerange = purchase_data_df.groupby("Age Ranges").agg({"SN": "nunique"}) #Inser the new column Agerange.columns=["Total Count"] #Calculate percentage of players in each age range and insert in new column of percentage of playes Agerange["Percentage of Players"] = (Agerange["Total Count"]/Agerange["Total Count"].sum())*100 #Formatting Percentage of Players to display % and two decimal points Agerange["Percentage of Players"] = Agerange["Percentage of Players"].map("{:.2f}%".format) #Dropping the index name Agerange.index.name=None #Aligning all the text to be left indented Agerange = Agerange.style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well Agerange.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) ``` ## Purchasing Analysis (Age) * Bin the purchase_data data frame by age * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. 
in the table below * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` #Grouping the data frame by alread established bin and group labels and getting unique count of column SN, count of Purchase Id and mean & sum of Price Agerange1 = purchase_data_df.groupby("Age Ranges").agg({"Purchase ID": "count", "Price": ["mean", "sum"], "SN": "nunique"}) #Inserting new columns Agerange1.columns=["Purchase Count", "Average Purchase Price", "Total Purchase Value", "SN"] #Calculating Avg Total Purchase per Person Agerange1["Avg Total Purchase per Person"] = Agerange1["Total Purchase Value"]/Agerange1["SN"] #Deleting column SN del Agerange1 ["SN"] #Formatting Average Purchase Price, Total Purchase Value and Avg Total Purchase per Person to have $ sign and two decimal points Agerange1["Average Purchase Price"] = Agerange1["Average Purchase Price"].map("${:.2f}".format) Agerange1["Total Purchase Value"] = Agerange1["Total Purchase Value"].map("${:.2f}".format) Agerange1["Avg Total Purchase per Person"] = Agerange1["Avg Total Purchase per Person"].map("${:.2f}".format) #Aligning all the text to be left indented Agerange1 = Agerange1.style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well Agerange1.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) ``` ## Top Spenders * Run basic calculations to obtain the results in the table below * Create a summary data frame to hold the results * Sort the total purchase value column in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the summary data frame ``` #Grouping the data frame by SN and getting count of Purchase Id and mean & sum of Price SN1 = purchase_data_df.groupby("SN").agg({"Purchase ID": "count", "Price": ["mean", "sum"]}) #Inserting new columns SN1.columns=["Purchase Count", "Average 
Purchase Price", "Total Purchase Value"] #Sorting by Total Purchase Value highest to lowest SN1 = SN1.sort_values("Total Purchase Value", ascending=False) #Formatting Average Purchase Price and Total Purchase Value to have $ sign and two decimal points SN1["Average Purchase Price"] = SN1["Average Purchase Price"].map("${:.2f}".format) SN1["Total Purchase Value"] = SN1["Total Purchase Value"].map("${:.2f}".format) #Aligning all the text to be left indented SN1 = SN1.head(5).style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well SN1.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) ``` ## Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns * Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value * Create a summary data frame to hold the results * Sort the purchase count column in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the summary data frame ``` #Grouping the data frame by Item Id and Item Name and getting count of Purchase Id and mean & sum of Price Item1 = purchase_data_df.groupby(["Item ID", "Item Name"]).agg({"Purchase ID": "count", "Price": ["mean", "sum"]}) #Inserting new columns Item1.columns=["Purchase Count", "Item Price", "Total Purchase Value"] #Sorting by Purchase Count highest to lowest Item1 = Item1.sort_values("Purchase Count", ascending=False) #Formatting Item Price and Total Purchase Value to have $ sign and two decimal points Item1["Item Price"] = Item1["Item Price"].map("${:.2f}".format) Item1["Total Purchase Value"] = Item1["Total Purchase Value"].map("${:.2f}".format) #Aligning all the text to be left indented Item1 = Item1.head(5).style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well 
Item1.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) ``` ## Most Profitable Items * Sort the above table by total purchase value in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the data frame ``` #Grouping the data frame by Item Id and Item Name and getting count of Purchase Id and mean & sum of Price Item2 = purchase_data_df.groupby(["Item ID", "Item Name"]).agg({"Purchase ID": "count", "Price": ["mean", "sum"]}) #Inserting new columns Item2.columns=["Purchase Count", "Item Price", "Total Purchase Value"] #Sorting by Total Purchase Value highest to lowest Item2 = Item2.sort_values("Total Purchase Value", ascending=False) #Formatting Item Price and Total Purchase Value to have $ sign and two decimal points Item2["Item Price"] = Item2["Item Price"].map("${:.2f}".format) Item2["Total Purchase Value"] = Item2["Total Purchase Value"].map("${:.2f}".format) #Aligning all the text to be left indented Item2 = Item2.head(5).style.set_properties(**{'text-align':'left'}) #Setting all table style to be left indented which causes the index to be left indented as well Item2.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]) ```
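The same groupby-aggregate-rename-format pattern recurs in every section above. A minimal, self-contained sketch of that pattern on a made-up mini purchase log (the column names mirror the notebook's; the data are invented for illustration):

```python
import pandas as pd

# Hypothetical mini purchase log with the notebook's column names
df = pd.DataFrame({
    "SN": ["a", "a", "b", "c"],
    "Item Name": ["Sword", "Axe", "Sword", "Bow"],
    "Price": [4.00, 2.50, 4.00, 1.50],
    "Purchase ID": [0, 1, 2, 3],
})

# One agg() call returns count, mean, and sum in a single pass
top = df.groupby("SN").agg({"Purchase ID": "count", "Price": ["mean", "sum"]})

# agg() with a dict of lists yields MultiIndex columns; flatten them by
# assignment, exactly as the notebook does with `SN1.columns = [...]`
top.columns = ["Purchase Count", "Average Purchase Price", "Total Purchase Value"]
top = top.sort_values("Total Purchase Value", ascending=False)

# map("${:.2f}".format) converts a numeric column to display strings
top["Total Purchase Value"] = top["Total Purchase Value"].map("${:.2f}".format)
print(top)
```

Note that `.map("${:.2f}".format)` turns the column into strings, which no longer sort or compute numerically — that is why every section above sorts and calculates *before* formatting.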
# EEP/IAS 118 - Section 6

## Fixed Effects Regression

### August 1, 2019

Today we will practice with fixed effects regressions in __R__. We have two different ways to estimate the model, and we will see how to do both and the situations in which we might favor one versus the other.

Let's give this a try using the dataset `wateruse.dta`, a panel of residential water use for residents in Alameda and Contra Costa Counties. The subset of households are high water users, people who used over 1,000 gallons per billing cycle. We have information on their water use, weather during the period, as well as information on the city and zipcode of where the home is located, and information on the size and value of the house.

Suppose we are interested in running the following panel regression of residential water use:

$$ GPD_{it} = \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} ~~~~~~~~~~~~~~~~~~~~~~~(1)$$

Where $GPD$ is the gallons used per day by household $i$ in billing cycle $t$, $degree\_days$ the count of degree days experienced by the household in that billing cycle (degree days are a measure of cumulative time spent above a certain temperature threshold), and $precip$ the amount of precipitation in millimeters.

```
library(tidyverse)
library(haven)
library(lfe)

waterdata <- read_dta("wateruse.dta") %>%
    mutate(gpd = (unit*748)/num_days)
waterdata <- mutate(waterdata, n = 1:nrow(waterdata))
head(waterdata)
dim(waterdata)

reg1 <- lm(gpd ~ degree_days + precip, data = waterdata)
summary(reg1)
```

Here we obtain an estimate of $\hat\beta_1 = 0.777$, telling us that an additional degree day per billing cycle is associated with an additional $0.7769$ gallons used per day. These billing cycles are roughly two months long, so this suggests an increase of roughly 47 gallons per billing cycle. Our estimate is statistically significant at all conventional levels, suggesting residential water use does respond to increased exposure to high heat. We estimate a statistically insignificant coefficient on additional precipitation, which tells us that on average household water use in our sample doesn't adjust to how much it rains.

We might think that characteristics of the home impact how much water is used there, so we add in some home controls:

$$ GPD_{it} = \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} + \beta_3 lotsize_{i} + \beta_4 homesize_i + \beta_5 num\_baths_i + \beta_6 num\_beds_i + \beta_7 homeval_i~~~~~~~~~~~~~~~~~~~~~~~(2)$$

```
reg2 <- lm(gpd ~ degree_days + precip + lotsize + homesize + num_baths + num_beds + homeval,
           data = waterdata)
summary(reg2)
```

Our coefficient on $degree\_days$ remains statistically significant and doesn't change much, so we find that $\hat\beta_1$ is robust to the addition of home characteristics. Of these characteristics, we obtain statistically significant coefficients on the size of the lot (in acres), the size of the home ($ft^2$), and the number of bedrooms in the home. We get a curious result for $\hat\beta_6$: for each additional bedroom in the home we predict that water use will _fall_ by 48 gallons per day.

### Discussion: what might be driving this effect?

```
waterdata %>%
    filter(city <= 9) %>%
    ggplot(aes(x = num_beds, y = gpd)) +
    geom_point() +
    facet_grid(. ~ city)

waterdata %>%
    filter(city > 9 & city <= 18) %>%
    ggplot(aes(x = num_beds, y = gpd)) +
    geom_point() +
    facet_grid(. ~ city)

waterdata %>%
    filter(city > 18) %>%
    ggplot(aes(x = num_beds, y = gpd)) +
    geom_point() +
    facet_grid(. ~ city)
```

Since there are likely a number of sources of omitted variable bias in the previous model, we think it might be worth including some fixed effects in our model.

## Method 1: Fixed Effects with lm()

Up to this point we have been running our regressions using the `lm()` function. We can still use `lm()` for our fixed effects models, but it takes some more work.

Recall that we can write our general panel fixed effects model as

$$ y_{it} = \beta x_{it} + \mathbf{a}_i + {d}_t + u_{it} $$

* $y$ our outcome of interest, which varies in both the time and cross-sectional dimensions
* $x_{it}$ our set of time-varying unit characteristics
* $\mathbf{a}_i$ our set of unit fixed effects
* $d_t$ our time fixed effects

We can estimate this model in `lm()` provided we have variables in our dataframe that correspond to $a_i$ and $d_t$. This means we'll have to generate them before we can run any regression.

### Generating Dummy Variables

In order to include fixed effects in our regression, we first have to generate the set of dummy variables that we want. For example, if we want to include a set of city fixed effects in our model, we need to generate them. We can do this in a few ways.

1. First, we can use `mutate()` and add a separate line for each individual city:

```
fe_1 <- waterdata %>%
    mutate(city_1 = as.numeric(city == 1),
           city_2 = as.numeric(city == 2),
           city_3 = as.numeric(city == 3)) %>%
    select(n, city, city_1, city_2, city_3)
head(fe_1)
```

This can be super tedious, though, when we have a bunch of different levels of our variable that we want to make fixed effects for. In this case, we have 27 different cities.

2. Alternatively, we can use the `spread()` function to help us out. Here we add in a constant variable `v` that is equal to one in all rows, and a copy of city that adds "city_" to the front of the city number. Then we pass the data to `spread`, telling it to split the variable `cty` into dummy variables for all its levels, with all the "false" cases filled with zeros.

```
fe_2 <- waterdata %>%
    select(n, city)
head(fe_2)

fe_2 %>%
    mutate(v = 1, cty = paste0("city_", city)) %>%
    spread(cty, v, fill = 0)
```

That is much easier!
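For readers more at home in Python, the same one-hot expansion that `spread()` performs here is a single `pd.get_dummies` call. A hedged sketch on a toy frame (not the `wateruse.dta` data):

```python
import pandas as pd

# Toy stand-in for waterdata: one row per observation, a city id per row
df = pd.DataFrame({"n": [1, 2, 3, 4], "city": [1, 2, 2, 3]})

# get_dummies plays the role of mutate(v = 1) + spread(cty, v, fill = 0):
# one 0/1 column per level of `city`, prefixed "city_"
dummies = pd.get_dummies(df["city"], prefix="city", dtype=int)
fe = pd.concat([df, dummies], axis=1)
print(fe)
```

As with `spread()`, each row gets exactly one 1 across the new `city_*` columns and zeros elsewhere.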
Let's now do that so that they'll be in `waterdata`:

```
waterdata <- waterdata %>%
    mutate(v = 1, cty = paste0("city_", city)) %>%
    spread(cty, v, fill = 0)
head(waterdata)
names(waterdata)
```

Note that both of the variables we used in `spread` are no longer in our dataset. While we're at it, let's also add in a set of billing cycle fixed effects.

```
waterdata <- waterdata %>%
    mutate(v = 1, cyc = paste0("cycle_", billingcycle)) %>%
    spread(cyc, v, fill = 0)
head(waterdata)
```

Now we have all our variables to run the regression

$$ GPD_{it} = \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} + \mathbf{a}_i + \mathbf{d}_t~~~~~~~~~~~~~~~~~~~~~~~(3)$$

Where $\mathbf{a}_i$ are our city fixed effects, and $\mathbf{d}_t$ our billing cycle fixed effects.

Now we can run our model! Well, now we can _write out_ our model. The challenge here is that we need to specify all of the dummy variables in our formula. We could do this all by hand, but when we end up with a bunch of fixed effects it's easier to use the following trick: we can write `y ~ .` to tell __R__ we want it to put every variable in our dataset other than $y$ on the right hand side of our regression. That means we can create a version of our dataset with only $gpd$, $degree\_days$, $precip$, and our fixed effects and won't have to write out all those fixed effects by hand!

Note that we can use `select()` and `-` to remove variables from our dataframe. If there is a run of consecutive variables we want to get rid of, we can drop them all at once by passing a range: since the first 12 variables in our data are the ones to drop, we can add `-unit:-hh` inside `select()`. If we separate with a comma, we can drop other sections of our data too!

```
fe_data <- waterdata %>%
    select(-unit:-hh, -city, -n)
head(fe_data)

fe_reg1 <- lm(gpd ~ ., data = fe_data)
summary(fe_reg1)
```

Since I specified it this way, __R__ chose the last dummy variable of each set of fixed effects to leave out as our omitted group. Now that we account for which billing cycle we're in (i.e. whether we're in the winter or the summer), we find that the coefficient on $degree\_days$ is now much smaller and statistically insignificant. This makes sense, as we were falsely attributing the extra water use that comes from seasonality to temperature on its own. Now that we control for the season via billing cycle fixed effects, we find that deviations in temperature exposure during a billing cycle don't result in dramatically higher water use within the sample.

### Discussion: Why did we drop the home characteristics from our model?

## Method 2: Fixed Effects with felm()

Alternatively, we could do everything way faster using the `felm()` function from the package __lfe__. This package doesn't require us to produce all the dummy variables by hand. Further, it performs the background math way faster, so it will be much quicker to estimate models using large datasets and many variables.

The syntax we use is now `felm(y ~ x1 + x2 + ... + xk | FE_1 + FE_2, data = df)`:

* The first section, $y \sim x1 + x2 + ... + xk$, is our formula, written the same way as with `lm()`
* We now add a `|` and in the second section we specify our fixed effects. Here we say $FE\_1 + FE\_2$, which tells __R__ to include fixed effects for each level of $FE\_1$ and $FE\_2$.
* Note that our fixed effect variables must be of class "factor" - we can force our variables to take this class by adding them as `as.factor(FE_1) + as.factor(FE_2)`.
* We add the data source after the comma, as before.
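The speed gain comes from absorbing the fixed effects rather than inverting a huge dummy matrix. The core idea can be sketched in Python with a one-way "within" transformation: demeaning $y$ and $x$ by group gives exactly the same slope as the full dummy (LSDV) regression. This is a sketch on synthetic data, not the course dataset; with multiple sets of fixed effects, packages like __lfe__ iterate this kind of projection rather than demean once.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
g = np.repeat(np.arange(10), 20)          # 10 groups, 20 observations each
a = rng.normal(size=10)[g]                # group fixed effects, one per group
x = rng.normal(size=200) + a              # regressor correlated with the FE
y = 2.0 * x + a + rng.normal(size=200)    # true slope is 2

df = pd.DataFrame({"g": g, "x": x, "y": y})

# Within transformation: subtract group means from y and x, then regress
xd = df["x"] - df.groupby("g")["x"].transform("mean")
yd = df["y"] - df.groupby("g")["y"].transform("mean")
beta_within = (xd @ yd) / (xd @ xd)

# Same slope from the explicit dummy-variable (LSDV) regression
X = np.column_stack([x[:, None], pd.get_dummies(g).to_numpy()])
beta_lsdv = np.linalg.lstsq(X, y, rcond=None)[0][0]
print(beta_within)
```

The two estimates agree by the Frisch-Waugh-Lovell theorem, which is why absorbing fixed effects loses nothing about $\hat\beta$ — only the fixed effect estimates themselves, matching `felm()`'s behavior of not reporting them.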
Let's go ahead and try this now with our water data model:

```
fe_reg2 <- felm(gpd ~ degree_days + precip | as.factor(city) + as.factor(billingcycle),
                data = waterdata)
summary(fe_reg2)
```

And we estimate the exact same coefficients on $degree\_days$ and $precip$ as in the case where we specified everything by hand! We didn't have to mutate our data or add any variables. The one potential downside is that this approach doesn't report the fixed effects themselves. However, the tradeoff is that `felm` runs a lot faster than `lm`. To see this, we can compare run times:

```
lm_start <- Sys.time()
fe_data <- waterdata %>%
    mutate(v = 1, cyc = paste0("cycle_", billingcycle)) %>%
    spread(cyc, v, fill = 0) %>%
    mutate(v = 1, cty = paste0("city_", city)) %>%
    spread(cty, v, fill = 0) %>%
    select(-unit:-hh, -city, -n)
lm(gpd ~ ., data = fe_data)
lm_end <- Sys.time()
lm_dur <- lm_end - lm_start

felm_start <- Sys.time()
felm(gpd ~ degree_days + precip | as.factor(city) + as.factor(billingcycle), data = waterdata)
felm_end <- Sys.time()
felm_dur <- felm_end - felm_start

print(paste0("lm() duration is ", lm_dur, " seconds, while felm() duration is ", felm_dur, " seconds."))
```

Okay, neither of these models took very long, but that's because we only have two covariates other than our fixed effects and only around 2,300 observations. If we have hundreds of covariates and millions of observations, this time difference becomes massive.

# Regression Discontinuity

Let's practice running a regression discontinuity model. Suppose we were interested in exploring the weird relationship we saw earlier between water use and the number of bedrooms in a home. Let's take a look at that relationship a bit more closely.

```
waterdata %>%
    ggplot(aes(x = num_beds, y = gpd)) +
    geom_point(alpha = 0.4, colour = "royalblue")
```

We see that average water use appears to rise as we add bedrooms to a house from a low number, peaks when households have five bedrooms, then begins to fall with larger and larger houses... though there are a few high outliers in the 6-9 bedroom cases as well, which might overshadow that trend. Is there something else that's correlated with the number of bedrooms in a home that may also be driving this?

```
waterdata %>%
    ggplot(aes(x = num_beds, y = lotsize)) +
    geom_point(alpha = 0.4, colour = "royalblue")
```

It looks like lot size and the number of bedrooms share a similar relationship - lot size increasing in \# bedrooms up until 5, then declining from there.

Given that it looks like 5 bedrooms is where the relationship changes, let's use this as our running variable and allow the relationship for the number of bedrooms to differ around a threshold of five bedrooms. We can write an RD model as

$$ GPD_{it} = \beta_0 + \beta_1 T_i + \beta_2 (num\_beds - 5) + \beta_3\left( T_i \times (num\_beds - 5) \right) + u_{it} $$

```
rd_data <- waterdata %>%
    select(gpd, num_beds, lotsize) %>%
    mutate(treat = (num_beds > 5),
           beds_below = num_beds - 5,
           beds_above = treat * (num_beds - 5))

rd_reg <- lm(gpd ~ treat + beds_below + beds_above, data = rd_data)
summary(rd_reg)
```

What if we limit our comparison closer to around the threshold? Right now we're using data from the entire sample, but this might not be a valid comparison. Let's see what happens when we reduce our bandwidth to 3 and look only at homes with between 2 and 8 bedrooms.

```
rd_data_trim <- rd_data %>%
    filter(!(num_beds < 2)) %>%
    filter(!(num_beds > 8))

rd_reg2 <- lm(gpd ~ treat + beds_below + beds_above, data = rd_data_trim)
summary(rd_reg2)
```

We now estimate a treatment effect at the discontinuity!
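The same piecewise-linear RD specification translates directly to Python: center the running variable at the cutoff, add a treatment dummy, and interact the two. A sketch on simulated data (not the water data) where the true jump at the cutoff is set to -30, so we know what the coefficient on treatment should recover:

```python
import numpy as np

rng = np.random.default_rng(1)
beds = rng.integers(1, 10, size=500)   # running variable, 1 through 9
treat = (beds > 5).astype(float)       # above the 5-bedroom cutoff
run = beds - 5.0                       # running variable centered at the cutoff

# Simulate the outcome: slope 10 below the cutoff, slope -5 above,
# and a discontinuous jump of -30 exactly at the cutoff
gpd = 100 + 10 * run + treat * (-30 - 15 * run) + rng.normal(0, 5, size=500)

# lm(gpd ~ treat + run + treat:run) as least squares with an intercept
X = np.column_stack([np.ones_like(run), treat, run, treat * run])
coef = np.linalg.lstsq(X, gpd, rcond=None)[0]
print(coef[1])   # estimated jump at the cutoff, close to -30
```

The coefficient on `treat` is the estimated discontinuity, the same object `summary(rd_reg2)` reports for `treatTRUE` above.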
Our model finds a discontinuity, estimating a jump down of 284 gallons per day right around the 5-bedroom threshold. However, we saw earlier that lot size appears to be correlated with the number of bedrooms in the home, and it is definitely a factor correlated with residential water use. What happens to our LATE estimate when we control for lot size in our regression?

```
rd_reg3 <- lm(gpd ~ treat + beds_below + beds_above + lotsize, data = rd_data_trim)
summary(rd_reg3)
```

Once we control for lot size, our interpretation changes. Now we estimate a coefficient on treatment nearly half the magnitude as before and without statistical significance.

### Discussion Q: What does this tell us about the covariance between lot size and the number of bedrooms?

# Fixed Effects Practice Question #1

#### From a random sample of agricultural yields Y (1000 dollars per acre) for region $i$ in year $t$ for the US, we have estimated the following equation:

\begin{align*}
\widehat{\log(Y)}_{it} &= 0.49 + .01 GE_{it} ~~~~ R^2 = .32\\
&~~~~~(.11) ~~~~ (.01) ~~~~ n = 1526
\end{align*}

#### (a) Interpret the results on the genetically engineered ($GE$) technology on yields. (Follow SSS = Sign, Size, Significance.)

#### (b) Suppose $GE$ is used more on the West Coast, where crop yields are also higher. How would the estimated effect of GE change if we include a West Coast region dummy variable in the equation? Justify your answer.

#### (c) If we include region fixed effects, would they control for the factors in (b)? Justify your answer.

#### (d) If yields have been generally improving over time and GE adoption was only recently introduced in the USA, what would happen to the coefficient of GE if we included year fixed effects?

# Fixed Effects Practice Question #2

#### A recent paper investigates whether advertisement for Viagra causes increases in birth rates in the USA. Advertising for products, including Viagra, happens on TV and reaches households that have a TV within a marketing region, and does not happen in areas outside a designated marketing region. The authors look at birth rates in hospitals located inside and near the border of the advertising region, collect data on advertising dollars per 100 people (Ads) for a certain period, and compare those to the birth rates in hospitals located outside and near that border. They conduct a panel data analysis and estimate the following model:

$$ Births_{it} = \beta_0 + \beta_1 Ads + \beta_2 Ads^2 + Z_i + M_t + u_{it}$$

#### Where $Z_i$ are zipcode fixed effects and $M_t$ monthly fixed effects.

#### (a) Why do the authors include zip code fixed effects? In particular, what would be a variable that they are controlling for when adding zip code fixed effects that could cause a problem when interpreting the marginal effect of ad spending on birth rates? What would that (solved) problem be?

#### (b) Why do they add month fixed effects?