# Analysing data (GRIB)

In this notebook we will demonstrate how to:

* find locations of extreme values from GRIB data
* compute and plot a time series extracted from a point
* mask values that are not of interest
* compute wind speed
* compute and plot a vertical cross section
* compute and plot a vertical profile

We will use **Metview** to do all of this. The data we will work with in this notebook relates to storm Joachim from December 2011. The 10 metre wind gust forecast was retrieved from MARS for a few steps on a low-resolution grid to provide input for the exercises. This is stored in the file "joachim_wind_gust.grib". Further data, stored in "joachim_uv.grib", gives wind components on various pressure levels.

## Exploring the surface wind gusts

### Loading the data and finding the extreme values

```
import metview as mv
```

Read the data from the GRIB file into a Metview [Fieldset](../data_types/fieldset.rst):

```
filename = "joachim_wind_gust.grib"
if mv.exist(filename):
    wg = mv.read(filename)
else:
    wg = mv.gallery.load_dataset(filename)
print(wg)
print(len(wg))
```

The data is a [Fieldset](../data_types/fieldset.rst) with 5 fields. Let's inspect the contents with a few GRIB keys:

```
mv.grib_get(wg, ['shortName', 'dataDate', 'dataTime', 'stepRange', 'validityDate', 'validityTime'])
```

First let's check the minimum and maximum values over all the fields:

```
print(mv.minvalue(wg), mv.maxvalue(wg))
```

Now, the maximum value for each field - iterate over the fieldset:

```
all_maxes = [mv.maxvalue(f) for f in wg]
all_maxes
```

So we can see immediately that the largest value occurs in the first field.
Let's restrict our operations to this first field:

```
wg0 = wg[0]
max0 = all_maxes[0]
```

Find the locations where the value equals the maximum:

```
max_location = mv.find(wg0, max0)
max_location
```

### Extracting a time series

Obtain a time series of values at this location (one value from each field):

```
vals_for_point = mv.nearest_gridpoint(wg, max_location[0])
times = mv.valid_date(wg)
for tv in zip(times, vals_for_point):
    print(tv)
```

Let's make a simple plot from the data - we could use matplotlib, but Metview can also give us a nice time series curve:

```
haxis = mv.maxis(
    axis_type = "date",
    axis_date_type = "hours",
    axis_hours_label = "on",
    axis_hours_label_height = 0.4,
    axis_years_label_height = 0.4,
    axis_months_label_height = 0.4,
    axis_days_label_height = 0.4
)

ts_view = mv.cartesianview(
    x_automatic = "on",
    x_axis_type = "date",
    y_automatic = "on",
    horizontal_axis = haxis
)

curve_wg = mv.input_visualiser(
    input_x_type = "date",
    input_date_x_values = times,
    input_y_values = vals_for_point)

visdef = mv.mgraph(graph_line_thickness=3)
```

Finally we set the plotting target to the **Jupyter notebook** (we only have to do this once in a notebook) and generate the plot:

```
mv.setoutput('jupyter')
```

The order of parameters to the [plot()](../api/functions/plot.rst) command matters: view, data, visual definitions. There can be multiple sets of data and visual definitions in the same [plot()](../api/functions/plot.rst) command.
```
mv.plot(ts_view, curve_wg, visdef)
```

### Finding a range of extreme values

Find the locations where the value is within 95% of the maximum by supplying a range of values:

```
mv.find(wg0, [max0*0.95, max0])
```

If we want to work with these points in Metview, the easiest way is to use the gfind() function to return a [Geopoints](../data_types/geopoints.rst) variable:

```
max_points = mv.gfind(wg0, max0, max0*0.05)
print(len(max_points), 'points')
print('first point:')
max_points[0]
```

Compute a simple bounding box for these points:

```
north = mv.latitudes(max_points).max() + 2
south = mv.latitudes(max_points).min() - 2
east = mv.longitudes(max_points).max() + 2
west = mv.longitudes(max_points).min() - 2
[north, south, east, west]
```

Plot the points on a map using this bounding box:

```
view = mv.geoview(
    map_area_definition = "corners",
    area = [south, west, north, east]
)

coloured_markers = mv.msymb(
    legend = "on",
    symbol_type = "marker",
    symbol_table_mode = "advanced",
    symbol_advanced_table_max_level_colour = "red",
    symbol_advanced_table_min_level_colour = "RGB(1, 0.8, 0.8)",
    symbol_advanced_table_height_list = 0.8
)

mv.plot(view, max_points, coloured_markers)
```

### Using Fieldset operations to preserve only the extreme values

An alternative way to obtain the largest values is to use [Fieldset](../data_types/fieldset.rst) operations:

```
# compute a field of 1s and 0s according to the test
largest_mask = wg0 > (max0*0.85)

# convert 0s into missing values
largest_mask = mv.bitmap(largest_mask, 0)

# copy the pattern of missing values to wg0
masked_wg0 = mv.bitmap(wg0, largest_mask)
```

The result has **missing values** where the original values were below our threshold. In terms of actual data storage, if we were to write this into a GRIB file, it would be much smaller than the original GRIB file because GRIB is very efficient at (not) storing missing values.
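The masking pattern above (test, convert failures to missing, apply) can be sketched with plain NumPy, using NaN in place of GRIB missing values. The array below is a hypothetical 1-D stand-in for a field, not the notebook's data:

```python
import numpy as np

# hypothetical wind gust values (m/s) standing in for a field
values = np.array([18.0, 31.5, 24.2, 33.1, 29.9, 12.4])
max0 = values.max()

# 1s/0s mask, analogous to `wg0 > (max0*0.85)`
largest_mask = values > (max0 * 0.85)

# replace values that fail the test with NaN,
# analogous to the two mv.bitmap() calls
masked = np.where(largest_mask, values, np.nan)
```

The two `mv.bitmap()` calls collapse into a single `np.where()` here because NumPy can mix the test and the replacement in one step.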
Let's plot the result with grid point markers:

```
gridvals_1x1 = mv.mcont(
    contour = "off",
    contour_grid_value_plot = "on",
    contour_grid_value_plot_type = "both",
    contour_grid_value_format = "(F4.2)",
    contour_grid_value_height = 0.45,
    grib_scaling_of_retrieved_fields = "off"
)

mv.plot(view, masked_wg0, gridvals_1x1)
```

## Exploring the atmosphere

### Retrieve U/V wind component data on multiple pressure levels

To explore further into the atmosphere, we will need appropriate data. We can either retrieve it from MARS or read it from the supplied GRIB file, which was originally taken from MARS and then subsampled and downgraded a little in order to make the file smaller:

```
use_mars = False

if use_mars:
    uv = mv.retrieve(
        type = "fc",
        levelist = [1000,925,850,700,500,400,300,250,200,150,100],
        param = ["u","v"],
        date = 20111215,
        step = 12,
        area = [25,-60,75,60],
        grid = [0.25,0.25]
    )
else:
    uv = mv.read("joachim_uv.grib")

mv.grib_get(uv, ['shortName','level'])
```

### Compute wind speed

Extract the U and V components into separate Fieldsets, each of which will have 11 fields:

```
u = mv.read(data = uv, param = "u")
v = mv.read(data = uv, param = "v")
```

Compute the **wind speed** directly on the Fieldsets, giving us a single Fieldset containing 11 fields of wind speed:

```
spd = mv.sqrt(u*u + v*v)
```

Change the paramId and extract the 500hPa level for plotting (not strictly necessary, but it lets us use the default ecCharts style, which requires a correct paramId):

```
spd = mv.grib_set_long(spd, ['paramId', 10])
spd500 = mv.read(data = spd, levelist = 500)
```

Plot the field into a view that covers the data area:

```
view = mv.geoview(
    map_area_definition = "corners",
    area = [25,-60,75,60]
)

mv.plot(view, spd500, mv.mcont(contour_automatic_setting='ecmwf', legend='on'))
```

### Compute and plot a vertical cross section

Define a line along an area of interest and plot it onto the map:

```
line = [43.3,-36.0,54.4,13.1] # S, W, N, E

line_graph = mv.mgraph(
    graph_type = "curve",
    graph_line_colour = "pink",
    graph_line_thickness = 7
)

mv.plot(
    view,
    spd500, mv.mcont(contour_automatic_setting='ecmwf', legend='on'),
    mv.mvl_geoline(*line, 1), line_graph
)
```

Use this line to define a cross section view ([mxsectview()](../gen_files/icon_functions/mxsectview.rst)):

```
xs_view = mv.mxsectview(
    bottom_level = 1000.0,
    top_level = 100,
    line = line
)
```

Create a colour scale using [mcont()](../gen_files/icon_functions/mcont.rst) to plot the data with:

```
xs_shade = mv.mcont(
    legend = "on",
    contour_line_style = "dash",
    contour_line_colour = "charcoal",
    contour_highlight = "off",
    contour_level_count = 20,
    contour_label = "off",
    contour_shade = "on",
    contour_shade_method = "area_fill",
    contour_shade_max_level_colour = "red",
    contour_shade_min_level_colour = "blue",
    contour_shade_colour_direction = "clockwise"
)
```

Into the view, plot the data using the given contouring definition (note the order):

```
mv.plot(xs_view, spd, xs_shade)
```

We can also obtain the computed cross section data as a [NetCDF](../data_types/netcdf.rst), which we can write to disk with "mv.write('my_xs.nc', xs_data)":

```
xs_data = mv.mcross_sect(
    data = spd,
    line = line
)

print(xs_data)
```

### Compute and plot a vertical profile

Define a **vertical profile view** and plotting attributes:

```
vp_view = mv.mvertprofview(
    point = [47.0,-3.5],
    bottom_level = 1000,
    top_level = 100
)

graph_plotting = mv.mgraph(
    graph_line_colour = "red",
    graph_line_thickness = 3
)

mv.plot(vp_view, spd, graph_plotting)
```
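The wind speed expression `mv.sqrt(u*u + v*v)` works element-wise over every grid point of every field; the same arithmetic can be sketched with NumPy on a hypothetical 2x2 grid of u/v components (not the notebook's data):

```python
import numpy as np

# hypothetical u/v wind components (m/s) on a tiny 2x2 grid
u = np.array([[3.0, 0.0], [6.0, 5.0]])
v = np.array([[4.0, 8.0], [8.0, 12.0]])

# element-wise, exactly like the Fieldset expression sqrt(u*u + v*v)
speed = np.sqrt(u*u + v*v)
```

Because the operation is purely per-grid-point, it extends unchanged from one field to a Fieldset of 11 pressure levels.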
``` # based on notebook "Stage1_MLP" import pandas as pd import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers, activations, losses, Model, Input from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from tensorflow.keras.metrics import SparseCategoricalAccuracy, SparseTopKCategoricalAccuracy # import tensorflow_ranking as tfr from sklearn.model_selection import train_test_split import os, time, gc, json from tqdm.notebook import tqdm, trange from matplotlib import pyplot as plt from joblib import Parallel, delayed from ast import literal_eval import multiprocessing import itertools from pandarallel import pandarallel pandarallel.initialize(progress_bar=True) def create_shift_features(df, c, lags=[1,2,3,4,5]): try: naid = df[c[:-3]].cat.categories.tolist().index('_') except ValueError: naid = df[c].max() + 1 for i in lags: if (c + '_lag%d'%i) in df.columns: continue tmp = df[['session_id_hash', c]].shift(i) tmp.loc[df.session_id_hash != tmp.session_id_hash, c] = naid tmp[c] = tmp[c].astype(df[c].dtype) df[c + '_lag%d'%i] = tmp[c] print('Created\t' + c + '_lag%d'%i) print() if not os.path.exists('df_browse_phase2'): print('df_browse_phase2 not found') # read training data df_browse = pd.read_csv("./data/browsing_train.csv") df_browse['train'] = np.int8(1) df_browse.server_timestamp_epoch_ms = pd.to_datetime(df_browse.server_timestamp_epoch_ms, unit='ms') df_browse['server_day'] = df_browse.server_timestamp_epoch_ms.dt.date # read phase 1 testing data with open('./data/rec_test_phase_1.json') as f: df_test = json.load(f) df_test = pd.json_normalize(df_test, record_path =['query']) df_test['train'] = np.int8(1) # todo: should take search event into account df_test = df_test[~df_test.is_search] df_test.server_timestamp_epoch_ms = pd.to_datetime(df_test.server_timestamp_epoch_ms, unit='ms') df_test['server_day'] = df_test.server_timestamp_epoch_ms.dt.date df_test = df_test[df_browse.columns] # read 
phase 2 testing data with open('./data/rec_test_phase_2.json') as f: df_test2 = json.load(f) df_test2 = pd.json_normalize(df_test2, record_path =['query']) df_test2['train'] = np.int8(0) # todo: should take search event into account df_test2 = df_test2[~df_test2.is_search] df_test2.server_timestamp_epoch_ms = pd.to_datetime(df_test2.server_timestamp_epoch_ms, unit='ms') df_test2['server_day'] = df_test2.server_timestamp_epoch_ms.dt.date df_test2 = df_test2[df_browse.columns] # set items which only appears in test set as minority for c in ['product_sku_hash', 'hashed_url']: tmp = df_test2[c][~df_test2[c].isna()] test_only_sku = tmp[~tmp.isin(df_browse[c])] df_test2.loc[df_test2[c].isin(test_only_sku), c] = 'minority' del tmp, test_only_sku gc.collect() # combine training and testing data df_browse = pd.concat((df_browse, df_test, df_test2), ignore_index=True) df_browse.sort_values(['session_id_hash', 'server_timestamp_epoch_ms'], inplace=True, ignore_index=True) del df_test, df_test2 gc.collect() # combine values with few records in training for c in ['product_sku_hash', 'hashed_url']: tmp = df_browse[c].value_counts() df_browse.loc[df_browse[c].isin(tmp[tmp<=1].index), c] = 'minority' del tmp gc.collect() # factorize columns for c in ['product_sku_hash', 'hashed_url']: # set NaN to _ if df_browse[c].isnull().values.any(): df_browse.loc[df_browse[c].isna(), c] = '_' # add new column, id start from 0 df_browse[c] = df_browse[c].astype('category') df_browse[c + '_id'] = df_browse[c].cat.codes # add shift features max_lag = 5 for c in ['product_sku_hash_id', 'hashed_url_id']: create_shift_features(df_browse, c, lags=[i+1 for i in range(max_lag)]) # add target features (this column only useful for train set) create_shift_features(df_browse, 'product_sku_hash_id', lags=[-1]) df_browse.rename(columns={'product_sku_hash_id_lag-1': 'next_sku'}, inplace=True) naid = df_browse.product_sku_hash.cat.categories.tolist().index('_') minorid = 
df_browse.product_sku_hash.cat.categories.tolist().index('minority') # add next_interacted_sku df_browse['next_interacted_sku'] = df_browse.next_sku.copy() df_browse.loc[df_browse.next_sku==naid, 'next_interacted_sku'] = np.nan df_browse.next_interacted_sku = df_browse.groupby('session_id_hash')['next_interacted_sku'].apply(lambda x: x.bfill()) df_browse.loc[df_browse.next_interacted_sku.isna(), 'next_interacted_sku'] = naid gc.collect() # set a random id per group, used for calculating testing metric later df_browse = df_browse.sample(frac=1, random_state=0).reset_index(drop=True) df_browse['rand_id'] = df_browse.groupby('session_id_hash').cumcount().astype(np.int16) df_browse.sort_values(['session_id_hash', 'server_timestamp_epoch_ms'], inplace=True, ignore_index=True) df_browse.to_parquet('df_browse_phase2') else: print('df_browse_phase2 found') df_browse = pd.read_parquet('df_browse_phase2') df_browse['sample_weights'] = pd.to_datetime(df_browse.server_day).dt.month # get num_x from x_lag1 because function create_shift_features # may created a new_id in lag columns num_sku = df_browse.product_sku_hash_id_lag1.max() + 1 num_url = df_browse.hashed_url_id_lag1.max() + 1 naid = df_browse.product_sku_hash.cat.categories.tolist().index('_') minorid = df_browse.product_sku_hash.cat.categories.tolist().index('minority') fea_prod = ['product_sku_hash_id_lag5', 'product_sku_hash_id_lag4', 'product_sku_hash_id_lag3', 'product_sku_hash_id_lag2', 'product_sku_hash_id_lag1', 'product_sku_hash_id'] fea_url = ['hashed_url_id_lag5', 'hashed_url_id_lag4', 'hashed_url_id_lag3', 'hashed_url_id_lag2', 'hashed_url_id_lag1', 'hashed_url_id'] features = fea_prod + fea_url # training + phase 1 testing set df_train = df_browse[df_browse.train==1] df_train = df_train[(df_train.next_sku!=naid)].reset_index(drop=True) # phase 2 testing set df_test = df_browse[df_browse.train==0] df_test = df_test[(df_test.next_sku!=naid)].reset_index(drop=True) gc.collect() if 'kfoldidx' in 
df_train.columns: df_train.drop(columns='kfoldidx', inplace=True) df_test['kfoldidx'] = 1 def stratified_kfold(df, idcol, k=5): """ sklearn kfold will do unnecessary sorting so build my own function """ df_kidx = df[idcol].unique() np.random.seed(123) np.random.shuffle(df_kidx) df_kidx = pd.DataFrame({idcol: df_kidx}) df_kidx['kfoldidx'] = df_kidx.index % k df = df.merge(df_kidx, on=idcol, copy=False) print(df['kfoldidx'].value_counts()) return df df_train = stratified_kfold(df_train, 'session_id_hash', k=5) df_train = pd.concat([df_train, df_test]) x_train = {} x_val = {} x_one = {} for k, f in zip(['sku','url'], [fea_prod, fea_url]): x_train[k] = np.array(df_train.loc[df_train.kfoldidx!=0, f]) x_val[k] = np.array(df_train.loc[df_train.kfoldidx==0, f]) x_one[k] = x_train[k][0:100] x_train_sessid = df_train.loc[df_train.kfoldidx!=0].session_id_hash.reset_index(drop=True) x_val_sessid = df_train.loc[df_train.kfoldidx==0].session_id_hash.reset_index(drop=True) y_train = np.array(df_train.loc[df_train.kfoldidx!=0, 'next_sku']) y_val = np.array(df_train.loc[df_train.kfoldidx==0, 'next_sku']) y_one = y_train[0:100] x_train_weights = df_train.loc[df_train.kfoldidx!=0].sample_weights.reset_index(drop=True) # model architecture class MLP(Model): def __init__(self, num_sku, num_url, embed_dim=312): super().__init__() self.normal_init = keras.initializers.RandomNormal(mean=0., stddev=0.01) self.sku_embed = layers.Embedding(num_sku, embed_dim, self.normal_init) self.url_embed = layers.Embedding(num_url, embed_dim, self.normal_init) self.dense1 = layers.Dense(1024) self.norm1 = layers.BatchNormalization() self.activate1 = layers.ReLU() self.dropout1 = layers.Dropout(0.2) self.dense2 = layers.Dense(1024) self.norm2 = layers.BatchNormalization() self.activate2 = layers.ReLU() self.dropout2 = layers.Dropout(0.2) self.dense3 = layers.Dense(embed_dim) self.norm3 = layers.BatchNormalization() self.activate3 = layers.ReLU(name='sess_embed') self.dropout3 = layers.Dropout(0.2)
self.output_bias = tf.random.normal((num_sku,), 0., 0.01) def call(self, inputs): lag_sku, lag_url = inputs['sku'], inputs['url'] sku_embed = layers.Flatten()(self.sku_embed(lag_sku)) url_embed = layers.Flatten()(self.url_embed(lag_url)) x = layers.concatenate([sku_embed, url_embed]) x = self.activate1(self.norm1(self.dense1(x))) x = self.dropout1(x) x = self.activate2(self.norm2(self.dense2(x))) x = self.dropout2(x) sess_embed = self.activate3(self.norm3(self.dense3(x))) x = self.dropout3(sess_embed) x = tf.matmul(x, tf.transpose(self.sku_embed.weights[0])) logits = tf.nn.bias_add(x, self.output_bias, name='logits') return {'logits': logits, 'embed': sess_embed} def build_graph(self): x = {'sku': Input(shape=(6)), 'url': Input(shape=(6))} return Model(inputs=x, outputs=self.call(x)) def predict_subset(self, x, u, l): _x = [] for i in range(len(x)): _x.append(x[i][u:l]) return self.predict(_x) keras.utils.plot_model(MLP(num_sku, num_url).build_graph(), show_shapes=False) strategy = tf.distribute.MirroredStrategy() print('Number of devices: {}'.format(strategy.num_replicas_in_sync)) batch_size = 2000 tf_train = tf.data.Dataset.from_tensor_slices((x_train, y_train, x_train_weights)).batch(batch_size) tf_val = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size) tf_one = tf.data.Dataset.from_tensor_slices((x_one, y_one)).batch(10) dist_train = strategy.experimental_distribute_dataset(tf_train) dist_val = strategy.experimental_distribute_dataset(tf_val) with strategy.scope(): model_mlp = MLP(num_sku, num_url) LossFunc = {'logits':keras.losses.SparseCategoricalCrossentropy(from_logits=True), 'embed':None} metrics = {'logits': [keras.metrics.SparseCategoricalAccuracy(name='top1_acc'), keras.metrics.SparseTopKCategoricalAccuracy(k=20, name='top20_acc')]} model_mlp.compile(optimizer='adam', loss=LossFunc, metrics=metrics) es = EarlyStopping(monitor='val_logits_loss', mode='min', verbose=1, patience=1) mc = ModelCheckpoint('model_phase2_nn.h5', 
monitor='val_logits_loss', mode='min', verbose=1, save_best_only=True, save_weights_only=True) history = model_mlp.fit(tf_train.shuffle(batch_size), epochs=10, validation_data=tf_val, callbacks=[es, mc]) model_mlp.load_weights('model_phase2_nn.h5') @tf.function def mrr(i, model, x, y, topk=tf.constant(20)): # this is for tf dataset version interval = tf.shape(y)[0] i = tf.cast(i, tf.int32) _u = interval * i _l = interval * (i+1) y_pred = model(x, training=False) col_to_zero = [naid, minorid] tnsr_shape=tf.shape(y_pred['logits']) mask = [tf.one_hot(col_num*tf.ones((tnsr_shape[0], ), dtype=tf.int32), tnsr_shape[-1]) for col_num in col_to_zero] mask = tf.reduce_sum(mask, axis=0) * -9999 y_pred['logits'] = tf.add(y_pred['logits'], mask) # topk items' id for each session, 2d array r = tf.math.top_k(y_pred['logits'], k=topk).indices # True indicate that item is the correct prediction r = tf.cast(tf.equal(r, tf.expand_dims(tf.cast(y, tf.int32), 1)), tf.float32) # rank of the correct prediction, rank = 9999999+1 if no correction prediction within topk r = tf.add((tf.reduce_sum(r, 1)-1) * -9999999, tf.cast(tf.argmax(r, 1) + 1, tf.float32)) return 1/r s = time.time() rr = [] # Iterate over the `tf.distribute.DistributedDataset` i = tf.constant(0) topk = tf.constant(20) for x, y, z in dist_train: # process dataset elements rr.append(strategy.run(mrr, args=(i, model_mlp, x, y, topk))) i += 1 if i % 1000 == 0: print(time.time()-s) rr = np.append(np.array(rr[0:-1]).reshape(-1), np.array(rr[-1])) print('MRR=%.4f'%np.mean(rr)) out_rr = (rr<1/topk.numpy()) print('%d out of %d records (%.2f%%) with prediction outside top%d'%( out_rr.sum(), out_rr.shape[0], (out_rr.mean())*100., topk.numpy()), flush=True) df_train['rr'] = -1 df_train['r'] = -1 df_train.loc[df_train.kfoldidx!=0, 'rr'] = rr df_train.loc[df_train.kfoldidx!=0, 'r'] = 1/rr print('---------------------------') print('random pick one per session') cond = 
df_train.loc[df_train.kfoldidx!=0].groupby(['session_id_hash'])['rand_id'].transform(min) == df_train.loc[df_train.kfoldidx!=0, 'rand_id'] print('MRR=%.4f ' % np.mean(rr[cond])) print('%d out of %d sessions (%.2f%%) with prediction outside top%d'%( out_rr[cond].sum(), out_rr[cond].shape[0], (out_rr[cond].mean())*100., topk.numpy()), flush=True) s = time.time() rr = [] # Iterate over the `tf.distribute.DistributedDataset` i = tf.constant(0) for x, y in dist_val: # process dataset elements rr.append(strategy.run(mrr, args=(i, model_mlp, x, y, topk))) i += 1 if i % 1000 == 0: print(time.time()-s) rr = np.append(np.array(rr[0:-1]).reshape(-1), np.array(rr[-1])) print('MRR=%.4f'%np.mean(rr)) out_rr = (rr<1/topk.numpy()) print('%d out of %d records (%.2f%%) with prediction outside top%d'%( out_rr.sum(), out_rr.shape[0], (out_rr.mean())*100., topk.numpy()), flush=True) df_train.loc[df_train.kfoldidx==0, 'rr'] = rr df_train.loc[df_train.kfoldidx==0, 'r'] = 1/rr print('---------------------------') print('random pick one per session') cond = df_train.loc[df_train.kfoldidx==0].groupby(['session_id_hash'])['rand_id'].transform(min) == df_train.loc[df_train.kfoldidx==0, 'rand_id'] print('MRR=%.4f ' % np.mean(rr[cond])) print('%d out of %d sessions (%.2f%%) with prediction outside top%d'%( out_rr[cond].sum(), out_rr[cond].shape[0], (out_rr[cond].mean())*100., topk.numpy()), flush=True) ``` ## MRR of testing set ``` df_test = df_browse[df_browse.train==0].copy() df_test = df_test.loc[df_test.next_interacted_sku!=naid] x_test = [np.array(df_test[fea_prod]), np.array(df_test[fea_url])] x_test_sessid = df_test.session_id_hash.reset_index(drop=True) y_test = np.array(df_test.next_interacted_sku) batch_size = 2000 tf_test = tf.data.Dataset.from_tensor_slices(({'sku':x_test[0], 'url':x_test[1]}, y_test)).batch(batch_size) dist_test = strategy.experimental_distribute_dataset(tf_test) s = time.time() rr = [] # Iterate over the `tf.distribute.DistributedDataset` i = tf.constant(0) topk = 
tf.constant(20) for x, y in dist_test: # process dataset elements rr.append(strategy.run(mrr, args=(i, model_mlp, x, y))) i += 1 if i % 1000 == 0: print(time.time()-s) rr = np.append(np.array(rr[0:-1]).reshape(-1), np.array(rr[-1])) print('MRR=%.4f'%np.mean(rr)) out_rr = (rr<1/topk.numpy()) print('%d out of %d records (%.2f%%) with prediction outside top%d'%( out_rr.sum(), out_rr.shape[0], (out_rr.mean())*100., topk.numpy()), flush=True) df_test['rr'] = rr df_test['r'] = 1/rr print('---------------------------') print('random pick one per session') cond = df_test.groupby(['session_id_hash'])['rand_id'].transform(min) == df_test['rand_id'] print('MRR=%.4f ' % np.mean(rr[cond])) print('%d out of %d sessions (%.2f%%) with prediction outside top%d'%( out_rr[cond].sum(), out_rr[cond].shape[0], (out_rr[cond].mean())*100., topk.numpy()), flush=True) tmp = df_test.loc[df_test.next_sku!=naid] x_test = {'sku': np.array(tmp[fea_prod]), 'url': np.array(tmp[fea_url])} x_test_sessid = tmp.session_id_hash.reset_index(drop=True) y_test = np.array(tmp.next_sku) batch_size = 2000 tf_test = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size) dist_test = strategy.experimental_distribute_dataset(tf_test) s = time.time() rr = [] # Iterate over the `tf.distribute.DistributedDataset` i = tf.constant(0) topk = tf.constant(20) for x, y in dist_test: # process dataset elements rr.append(strategy.run(mrr, args=(i, model_mlp, x, y, topk))) i += 1 if i % 1000 == 0: print(time.time()-s) rr = np.append(np.array(rr[0:-1]).reshape(-1), np.array(rr[-1])) print('MRR=%.4f'%np.mean(rr)) out_rr = (rr<1/topk.numpy()) print('%d out of %d records (%.2f%%) with prediction outside top%d'%( out_rr.sum(), out_rr.shape[0], (out_rr.mean())*100., topk.numpy()), flush=True) df_test['rr_next_sku'] = -1 df_test['r_next_sku'] = -1 df_test.loc[df_test.next_sku!=naid, 'rr_next_sku'] = rr df_test.loc[df_test.next_sku!=naid, 'r_next_sku'] = 1/rr print('---------------------------') print('random pick 
one per session') cond = df_test.loc[df_test.next_sku!=naid].groupby(['session_id_hash'])['rand_id'].transform(min) == df_test.loc[df_test.next_sku!=naid,'rand_id'] print('MRR=%.4f ' % np.mean(rr[cond])) print('%d out of %d sessions (%.2f%%) with prediction outside top%d'%( out_rr[cond].sum(), out_rr[cond].shape[0], (out_rr[cond].mean())*100., topk.numpy()), flush=True) ``` ## prepare submission ``` df_submission = df_browse[df_browse.train==0] df_submission = df_submission.groupby('session_id_hash').tail(1).reset_index(drop=True) batch_size = 2000 x_submission = [np.array(df_submission[fea_prod]), np.array(df_submission[fea_url])] tf_submission = tf.data.Dataset.from_tensor_slices(({'sku':x_submission[0], 'url':x_submission[1]})).batch(batch_size) naid = df_browse.product_sku_hash.cat.categories.tolist().index('_') minorid = df_browse.product_sku_hash.cat.categories.tolist().index('minority') next_sku_all = [] for x in tqdm(tf_submission): y = model_mlp.predict(x) y['logits'][:,naid] = -99999 y['logits'][:,minorid] = -99999 next_sku_id = np.argpartition(y['logits'], range(-20, 0), axis=1)[:, ::-1][:,0:20] next_sku_all += np.array(df_browse.product_sku_hash.cat.categories)[next_sku_id].tolist() test_file='./data/rec_test_phase_2.json' with open(test_file) as json_file: # read the test cases from the provided file test_queries = json.load(json_file) def set_submission(q): sess_id = q['query'][0]['session_id_hash'] try: next_sku = next_sku_all[df_submission.session_id_hash.tolist().index(sess_id)] except ValueError: # query with only search events not exists in df_test next_sku = np.random.choice(df_browse.product_sku_hash.cat.categories, 20, False).tolist() # copy the test case _pred = dict(q) # append the label - which needs to be a list _pred["label"] = next_sku return _pred my_predictions = Parallel(n_jobs=multiprocessing.cpu_count()//2, backend='multiprocessing')(delayed(set_submission)(q) for q in tqdm(test_queries)) # check for consistency assert 
len(my_predictions) == len(test_queries) # print out some "coverage" # print("Predictions made in {} out of {} total test cases".format(cnt_preds, len(test_queries))) EMAIL = '' local_prediction_file = '{}_{}.json'.format(EMAIL.replace('@', '_'), round(time.time() * 1000)) # dump to file with open(local_prediction_file, 'w') as outfile: json.dump(my_predictions, outfile, indent=2) print(local_prediction_file) from dotenv import load_dotenv from datetime import datetime import boto3 load_dotenv(verbose=True, dotenv_path='./submission/upload.env') BUCKET_NAME = os.getenv('BUCKET_NAME') # you received it in your e-mail EMAIL = os.getenv('EMAIL') # the e-mail you used to sign up PARTICIPANT_ID = os.getenv('PARTICIPANT_ID') # you received it in your e-mail AWS_ACCESS_KEY = os.getenv('AWS_ACCESS_KEY') # you received it in your e-mail AWS_SECRET_KEY = os.getenv('AWS_SECRET_KEY') # you received it in your e-mail def upload_submission( local_file: str, task: str ): """ Thanks to Alex Egg for catching the bug! :param local_file: local path, may be only the file name or a full path :param task: rec or cart :return: """ print("Starting submission at {}...\n".format(datetime.utcnow())) # instantiate boto3 client s3_client = boto3.client( 's3', aws_access_key_id=AWS_ACCESS_KEY , aws_secret_access_key=AWS_SECRET_KEY, region_name='us-west-2' ) s3_file_name = os.path.basename(local_file) # prepare s3 path according to the spec s3_file_path = '{}/{}/{}'.format(task, PARTICIPANT_ID, s3_file_name) # it needs to be like e.g. "rec/id/*.json" # upload file s3_client.upload_file(local_file, BUCKET_NAME, s3_file_path) # say bye print("\nAll done at {}: see you, space cowboy!".format(datetime.utcnow())) return upload_submission(local_file=local_prediction_file, task='rec') !date ```
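The `mrr()` function above locates the true item within the model's top-k logits and returns its reciprocal rank. The same logic can be sketched with plain NumPy (hypothetical scores, not the notebook's model; the sketch returns 0.0 outside top-k, where the notebook uses a huge rank so that 1/r is effectively 0):

```python
import numpy as np

def reciprocal_ranks(logits, y_true, topk=20):
    # indices of the top-k scores per row, best first
    order = np.argsort(-logits, axis=1)[:, :topk]
    rr = np.zeros(len(y_true))
    for i, true_id in enumerate(y_true):
        hits = np.where(order[i] == true_id)[0]
        if hits.size:
            rr[i] = 1.0 / (hits[0] + 1)  # rank is 1-based
    return rr  # 0.0 when the true item falls outside top-k

logits = np.array([[0.1, 0.9, 0.3],
                   [0.8, 0.2, 0.7]])
rr = reciprocal_ranks(logits, y_true=[1, 2], topk=2)
mrr_score = rr.mean()  # (1/1 + 1/2) / 2 = 0.75
```

Averaging the reciprocal ranks over all records (or over one random record per session, as the notebook does with `rand_id`) gives the MRR.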
``` #Checking the Missing values through heatmap sns.heatmap(df.isnull(),yticklabels=False,cbar=False) df.drop(['PoolQC','Fence','MiscFeature'],axis=1,inplace=True) df.drop(['Alley'],axis=1,inplace=True) df.info() df.drop(['GarageYrBlt'],axis=1,inplace=True) df #Filling the missing values df['LotFrontage']=df['LotFrontage'].fillna(df['LotFrontage'].mean()) df['BsmtCond']=df['BsmtCond'].fillna(df['BsmtCond'].mode()[0]) df['BsmtQual']=df['BsmtQual'].fillna(df['BsmtQual'].mode()[0]) df['FireplaceQu']=df['FireplaceQu'].fillna(df['FireplaceQu'].mode()[0]) df['GarageType']=df['GarageType'].fillna(df['GarageType'].mode()[0]) df['GarageFinish']=df['GarageFinish'].fillna(df['GarageFinish'].mode()[0]) df['GarageQual']=df['GarageQual'].fillna(df['GarageQual'].mode()[0]) df['GarageCond']=df['GarageCond'].fillna(df['GarageCond'].mode()[0]) df.shape df.drop(['Id'],axis=1,inplace=True) df.isnull().sum() df['MasVnrType']=df['MasVnrType'].fillna(df['MasVnrType'].mode()[0]) df['MasVnrArea']=df['MasVnrArea'].fillna(df['MasVnrArea'].mode()[0]) sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='YlGnBu') df['BsmtExposure']=df['BsmtExposure'].fillna(df['BsmtExposure'].mode()[0]) sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='YlGnBu') df['BsmtFinType2']=df['BsmtFinType2'].fillna(df['BsmtFinType2'].mode()[0]) df.dropna(inplace=True) df.shape df.head() columns=['MSZoning','Street','LotShape','LandContour','Utilities','LotConfig','LandSlope','Neighborhood', 'Condition2','BldgType','Condition1','HouseStyle','SaleType', 'SaleCondition','ExterCond', 'ExterQual','Foundation','BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2', 'RoofStyle','RoofMatl','Exterior1st','Exterior2nd','MasVnrType','Heating','HeatingQC', 'CentralAir', 'Electrical','KitchenQual','Functional', 'FireplaceQu','GarageType','GarageFinish','GarageQual','GarageCond','PavedDrive'] import numpy as np # Linear algebra import pandas as pd # Data processing, CSV file I/O (e.g. 
pd.read_csv) import warnings warnings.filterwarnings('ignore') # Loading train data train = pd.read_csv("Untitled Folder 3/train.csv") print(train.shape) train.head() test = pd.read_csv("Untitled Folder 3/test.csv") print(test.shape) test.head() Samplefeatures = train.drop('label',axis=1) Labels = train['label'] Labels.value_counts() from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier trainimg, testimg,trainlab,testlab = train_test_split(Samplefeatures, Labels, test_size = 0.2, random_state=42) knn = KNeighborsClassifier(n_neighbors=7) knn.fit(trainimg, trainlab) knn.predict(testimg) test = pd.read_csv("Untitled Folder 3/test.csv") test.head(3) #accuracy print(knn.score(testimg,testlab)) index_list = [] for i in list(test.index): index_list.append(i+1) test_prediction = knn.predict(test) submissions = pd.DataFrame({ "ImageId": index_list, "Label": test_prediction }) submissions.to_csv('my_submissions', index=False) with open('readme.txt', 'w') as f: f.write('readme') import io from nltk.corpus import stopwords from nltk.tokenize import word_tokenize # word_tokenize accepts # a string as an input, not a file. stop_words = set(stopwords.words('english')) file1 = open("Reportedlinks.txt") # Use this to read file content as a stream: line = file1.read() words = line.split() for r in words: if not r in stop_words: appendFile = open('filteredtext.txt','a') appendFile.write(" "+r) appendFile.close() ```
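The missing-value strategy used above (mean for numeric columns, most frequent value for categoricals) can be shown on a tiny self-contained DataFrame; the column names mirror the notebook's but the values are made up:

```python
import pandas as pd

df = pd.DataFrame({
    "LotFrontage": [60.0, 80.0, None, 70.0],  # numeric -> fill with mean
    "GarageQual":  ["TA", None, "TA", "Fa"],  # categorical -> fill with mode
})

# mean imputation for the numeric column
df["LotFrontage"] = df["LotFrontage"].fillna(df["LotFrontage"].mean())

# mode imputation for the categorical column
# (.mode() returns a Series, so take its first entry)
df["GarageQual"] = df["GarageQual"].fillna(df["GarageQual"].mode()[0])
```

Taking `.mode()[0]` matters: `Series.mode()` returns a Series (there can be ties), so indexing is needed to get a scalar fill value.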
``` import tensorflow as tf from tensorflow import data import shutil import math from datetime import datetime from tensorflow.python.feature_column import feature_column from tensorflow.contrib.learn import learn_runner from tensorflow.contrib.learn import make_export_strategy print(tf.__version__) ``` ## Steps to use the TF Experiment APIs 1. Define dataset **metadata** 2. Define **data input function** to read the data from csv files + **feature processing** 3. Create TF **feature columns** based on metadata + **extended feature columns** 4. Define an **estimator** (DNNRegressor) creation function with the required **feature columns & parameters** 5. Define a **serving function** to export the model 7. Run an **Experiment** with **learn_runner** to train, evaluate, and export the model 8. **Evaluate** the model using test data 9. Perform **predictions** ``` MODEL_NAME = 'reg-model-03' TRAIN_DATA_FILES_PATTERN = 'data/train-*.csv' VALID_DATA_FILES_PATTERN = 'data/valid-*.csv' TEST_DATA_FILES_PATTERN = 'data/test-*.csv' RESUME_TRAINING = False PROCESS_FEATURES = True EXTEND_FEATURE_COLUMNS = True MULTI_THREADING = True ``` ## 1. 
Define Dataset Metadata * CSV file header and defaults * Numeric and categorical feature names * Target feature name * Unused columns ``` HEADER = ['key','x','y','alpha','beta','target'] HEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], [0.0]] NUMERIC_FEATURE_NAMES = ['x', 'y'] CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']} CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys()) FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES TARGET_NAME = 'target' UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME}) print("Header: {}".format(HEADER)) print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES)) print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES)) print("Target: {}".format(TARGET_NAME)) print("Unused Features: {}".format(UNUSED_FEATURE_NAMES)) ``` ## 2. Define Data Input Function * Input csv files name pattern * Use TF Dataset APIs to read and process the data * Parse CSV lines to feature tensors * Apply feature processing * Return (features, target) tensors ### a. parsing and preprocessing logic ``` def parse_csv_row(csv_row): columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS) features = dict(zip(HEADER, columns)) for column in UNUSED_FEATURE_NAMES: features.pop(column) target = features.pop(TARGET_NAME) return features, target def process_features(features): features["x_2"] = tf.square(features['x']) features["y_2"] = tf.square(features['y']) features["xy"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y'] features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y'])) return features ``` ### b. 
data pipeline input function ``` def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL, skip_header_lines=0, num_epochs=None, batch_size=200): shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False print("") print("* data input_fn:") print("================") print("Input file(s): {}".format(files_name_pattern)) print("Batch size: {}".format(batch_size)) print("Epoch Count: {}".format(num_epochs)) print("Mode: {}".format(mode)) print("Shuffle: {}".format(shuffle)) print("================") print("") file_names = tf.matching_files(files_name_pattern) dataset = data.TextLineDataset(filenames=file_names) dataset = dataset.skip(skip_header_lines) if shuffle: dataset = dataset.shuffle(buffer_size=2 * batch_size + 1) # useful for distributed training when training on 1 data file, so it can be sharded #dataset = dataset.shard(num_workers, worker_index) dataset = dataset.batch(batch_size) dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row)) if PROCESS_FEATURES: dataset = dataset.map(lambda features, target: (process_features(features), target)) #dataset = dataset.batch(batch_size) #??? very long time dataset = dataset.repeat(num_epochs) iterator = dataset.make_one_shot_iterator() features, target = iterator.get_next() return features, target features, target = csv_input_fn(files_name_pattern="") print("Feature read from CSV: {}".format(list(features.keys()))) print("Target read from CSV: {}".format(target)) ``` ## 3. Define Feature Columns The input numeric columns are assumed to be normalized (or have the same scale). Otherwise, a normalizer_fn, along with the normalisation params (mean, stdv), should be passed to the tf.feature_column.numeric_column() constructor.
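The `normalizer_fn` mentioned above is just a callable applied to the column's values; its effect can be sketched in plain Python (the mean and stdv here are made-up placeholders standing in for statistics computed on the training data):

```python
def make_normalizer(mean, stdv):
    """Return a z-scoring function with precomputed stats, analogous to the
    normalizer_fn that tf.feature_column.numeric_column() accepts."""
    def normalizer(x):
        return (x - mean) / stdv
    return normalizer

# hypothetical training-set statistics for feature 'x'
norm_x = make_normalizer(mean=10.0, stdv=2.0)
print(norm_x(14.0))  # → 2.0
print(norm_x(10.0))  # → 0.0
```

In the TF version the same closure would be passed as `tf.feature_column.numeric_column('x', normalizer_fn=norm_x)` so normalization happens inside the input pipeline rather than in preprocessing.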
``` def extend_feature_columns(feature_columns): # crossing, bucketizing, and embedding can be applied here feature_columns['alpha_X_beta'] = tf.feature_column.crossed_column( [feature_columns['alpha'], feature_columns['beta']], 4) return feature_columns def get_feature_columns(): CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy'] all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy() if PROCESS_FEATURES: all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name) for feature_name in all_numeric_feature_names} categorical_column_with_vocabulary = \ {item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1]) for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()} feature_columns = {} if numeric_columns is not None: feature_columns.update(numeric_columns) if categorical_column_with_vocabulary is not None: feature_columns.update(categorical_column_with_vocabulary) if EXTEND_FEATURE_COLUMNS: feature_columns = extend_feature_columns(feature_columns) return feature_columns feature_columns = get_feature_columns() print("Feature Columns: {}".format(feature_columns)) ``` ## 4. 
Define an Estimator Creation Function * Get dense (numeric) columns from the feature columns * Convert categorical columns to indicator columns * Instantiate a DNNRegressor estimator given **dense + indicator** feature columns + params ``` def create_estimator(run_config, hparams): feature_columns = list(get_feature_columns().values()) dense_columns = list( filter(lambda column: isinstance(column, feature_column._NumericColumn), feature_columns ) ) categorical_columns = list( filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) | isinstance(column, feature_column._BucketizedColumn), feature_columns) ) indicator_columns = list( map(lambda column: tf.feature_column.indicator_column(column), categorical_columns) ) estimator = tf.estimator.DNNRegressor( feature_columns= dense_columns + indicator_columns , hidden_units= hparams.hidden_units, optimizer= tf.train.AdamOptimizer(), activation_fn= tf.nn.elu, dropout= hparams.dropout_prob, config= run_config ) print("") print("Estimator Type: {}".format(type(estimator))) print("") return estimator ``` ## 5. Define Serving Function ``` def csv_serving_input_fn(): SERVING_HEADER = ['x','y','alpha','beta'] SERVING_HEADER_DEFAULTS = [[0.0], [0.0], ['NA'], ['NA']] rows_string_tensor = tf.placeholder(dtype=tf.string, shape=[None], name='csv_rows') receiver_tensor = {'csv_rows': rows_string_tensor} row_columns = tf.expand_dims(rows_string_tensor, -1) columns = tf.decode_csv(row_columns, record_defaults=SERVING_HEADER_DEFAULTS) features = dict(zip(SERVING_HEADER, columns)) return tf.estimator.export.ServingInputReceiver( process_features(features), receiver_tensor) ``` ## 6. Run Experiment ### a.
Define Experiment Function ``` def generate_experiment_fn(**experiment_args): def _experiment_fn(run_config, hparams): train_input_fn = lambda: csv_input_fn( files_name_pattern=TRAIN_DATA_FILES_PATTERN, mode = tf.contrib.learn.ModeKeys.TRAIN, num_epochs=hparams.num_epochs, batch_size=hparams.batch_size ) eval_input_fn = lambda: csv_input_fn( files_name_pattern=VALID_DATA_FILES_PATTERN, mode=tf.contrib.learn.ModeKeys.EVAL, num_epochs=1, batch_size=hparams.batch_size ) estimator = create_estimator(run_config, hparams) return tf.contrib.learn.Experiment( estimator, train_input_fn=train_input_fn, eval_input_fn=eval_input_fn, eval_steps=None, **experiment_args ) return _experiment_fn ``` ### b. Set HParam and RunConfig ``` TRAIN_SIZE = 12000 NUM_EPOCHS = 1000 BATCH_SIZE = 500 NUM_EVAL = 10 CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL)) hparams = tf.contrib.training.HParams( num_epochs = NUM_EPOCHS, batch_size = BATCH_SIZE, hidden_units=[8, 4], dropout_prob = 0.0) model_dir = 'trained_models/{}'.format(MODEL_NAME) run_config = tf.contrib.learn.RunConfig( save_checkpoints_steps=CHECKPOINT_STEPS, tf_random_seed=19830610, model_dir=model_dir ) print(hparams) print("Model Directory:", run_config.model_dir) print("") print("Dataset Size:", TRAIN_SIZE) print("Batch Size:", BATCH_SIZE) print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE) print("Total Steps:", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS) print("Required Evaluation Steps:", NUM_EVAL) print("That is 1 evaluation step after each",NUM_EPOCHS/NUM_EVAL," epochs") print("Save Checkpoint After",CHECKPOINT_STEPS,"steps") ``` ### c. 
Run Experiment via learn_runner ``` if not RESUME_TRAINING: print("Removing previous artifacts...") shutil.rmtree(model_dir, ignore_errors=True) else: print("Resuming training...") tf.logging.set_verbosity(tf.logging.INFO) time_start = datetime.utcnow() print("Experiment started at {}".format(time_start.strftime("%H:%M:%S"))) print(".......................................") learn_runner.run( experiment_fn=generate_experiment_fn( export_strategies=[make_export_strategy( csv_serving_input_fn, exports_to_keep=1 )] ), run_config=run_config, schedule="train_and_evaluate", hparams=hparams ) time_end = datetime.utcnow() print(".......................................") print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S"))) print("") time_elapsed = time_end - time_start print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds())) ``` ## 7. Evaluate the Model ``` TRAIN_SIZE = 12000 VALID_SIZE = 3000 TEST_SIZE = 5000 train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= TRAIN_SIZE) valid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= VALID_SIZE) test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= TEST_SIZE) estimator = create_estimator(run_config, hparams) train_results = estimator.evaluate(input_fn=train_input_fn, steps=1) train_rmse = round(math.sqrt(train_results["average_loss"]),5) print() print("############################################################################################") print("# Train RMSE: {} - {}".format(train_rmse, train_results)) print("############################################################################################") valid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1) valid_rmse = round(math.sqrt(valid_results["average_loss"]),5) print() 
print("############################################################################################") print("# Valid RMSE: {} - {}".format(valid_rmse,valid_results)) print("############################################################################################") test_results = estimator.evaluate(input_fn=test_input_fn, steps=1) test_rmse = round(math.sqrt(test_results["average_loss"]),5) print() print("############################################################################################") print("# Test RMSE: {} - {}".format(test_rmse, test_results)) print("############################################################################################") ``` ## 8. Prediction ``` import itertools predict_input_fn = lambda: csv_input_fn(files_name_pattern=TEST_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.PREDICT, batch_size= 5) predictions = estimator.predict(input_fn=predict_input_fn) values = list(map(lambda item: item["predictions"][0],list(itertools.islice(predictions, 5)))) print() print("Predicted Values: {}".format(values)) ``` ## What can we improve? * **Use .tfrecords files instead of CSV** - TFRecord files are optimised for tensorflow. * **Build a Custom Estimator** - Custom Estimator APIs give you the flexibility to build custom models in a simple and standard way
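The RMSE figures reported above are derived as `sqrt(average_loss)`, where `average_loss` is the mean squared error the estimator returns; the arithmetic can be checked in plain Python (toy numbers, not the notebook's data):

```python
import math

def rmse_from_errors(y_true, y_pred):
    """Root-mean-square error: the square root of the mean squared error,
    i.e. the same quantity as sqrt(average_loss) above."""
    sq = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    average_loss = sum(sq) / len(sq)
    return math.sqrt(average_loss)

print(round(rmse_from_errors([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]), 5))  # → 1.1547
```

Because the evaluation calls use `steps=1` with the batch size set to the whole split, each RMSE is computed over the full train/valid/test set in a single pass.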
## ERDDAP without erddapy example for ArcticHeat Alamo ** Plot PAR ** it's in a separate file, not on ERDDAP ``` %matplotlib inline import xarray as xa import netCDF4 as nc import pandas as pd import numpy as np import urllib.request import datetime import matplotlib.pyplot as plt import matplotlib as mpl from matplotlib.dates import YearLocator, WeekdayLocator, MonthLocator, DayLocator, HourLocator, DateFormatter import matplotlib.ticker as ticker import cmocean ### Plot settings mpl.rcParams['axes.grid'] = False mpl.rcParams['axes.edgecolor'] = 'white' mpl.rcParams['axes.linewidth'] = 0.25 mpl.rcParams['grid.linestyle'] = '--' mpl.rcParams['xtick.major.size'] = 4 mpl.rcParams['xtick.minor.size'] = 3.75 mpl.rcParams['xtick.major.width'] = 2 mpl.rcParams['xtick.minor.width'] = 1.75 mpl.rcParams['ytick.major.size'] = 4 mpl.rcParams['ytick.minor.size'] = 3.75 mpl.rcParams['ytick.major.width'] = 2 mpl.rcParams['ytick.minor.width'] = 1.75 mpl.rcParams['ytick.direction'] = 'out' mpl.rcParams['xtick.direction'] = 'out' mpl.rcParams['ytick.color'] = 'k' mpl.rcParams['xtick.color'] = 'k' mpl.rcParams['font.size'] = 24 mpl.rcParams['font.sans-serif'] = "Arial" mpl.rcParams['font.family'] = "sans-serif" mpl.rcParams['font.weight'] = 'medium' mpl.rcParams['svg.fonttype'] = 'none' ``` ### connecting and basic information ``` ### old way - Pre erddapy ### cmap = cmocean.cm.solar temp_filename = "data/9119.ptsPar.txt" ALAMOID = "http://ferret.pmel.noaa.gov/alamo/erddap/tabledap/arctic_heat_alamo_profiles_9119" cmap = cmocean.cm.thermal temp_filename = "data/tmp.nc" start_date="2017-09-16" end_date ="2017-12-12" urllib.request.urlretrieve(ALAMOID+".ncCFMA?profileid%2CFLOAT_SERIAL_NO%2CCYCLE_NUMBER%2CREFERENCE_DATE_TIME%2CJULD%2Ctime%2Clatitude%2Clongitude%2CPRES%2CTEMP%2CPSAL&time%3E="+start_date+"T23%3A52%3A00Z",temp_filename) start_date_dt = datetime.datetime.strptime(start_date,"%Y-%m-%d") end_date_dt = datetime.datetime.strptime(end_date,"%Y-%m-%d") ```
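The long percent-encoded query passed to `urlretrieve` above can be generated with the standard library instead of hand-encoding it; a sketch reusing the variable list and dataset URL from that cell (exactly which characters ERDDAP requires to be escaped is an assumption here — this just reproduces the encoding pattern in the hand-written string):

```python
from urllib.parse import quote

base = "http://ferret.pmel.noaa.gov/alamo/erddap/tabledap/arctic_heat_alamo_profiles_9119"
variables = ["profileid", "FLOAT_SERIAL_NO", "CYCLE_NUMBER", "REFERENCE_DATE_TIME",
             "JULD", "time", "latitude", "longitude", "PRES", "TEMP", "PSAL"]
start_date = "2017-09-16"

# quote() percent-encodes commas (%2C), '>' (%3E) and ':' (%3A);
# '&' and '=' are kept literal to match the hand-written constraint
query = quote(",".join(variables)) + quote(f"&time>={start_date}T23:52:00Z", safe="&=")
url = f"{base}.ncCFMA?{query}"
print(url)
```

The advantage over the hard-coded string is that changing the variable list or the time constraint no longer requires hand-editing percent escapes.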
Preliminary PAR data was provided as a text file for ALAMO 9119 ``` #datanc = nc.Dataset('data/tmp.nc') #using netcdf library datapd = pd.read_csv('data/9119.ptsPar.txt',sep='\s+',names=['id','profile','pressure','temperature','salinity','PAR']) dataxa = xa.open_dataset('data/tmp.nc') def plot_PAR(): depth_array = np.arange(0,55,0.25) temparray = np.ones((dataxa.dims['profile'],len(depth_array)))*np.nan ProfileTime = [] cycle_col = 0 plt.figure(1, figsize=(18, 3), facecolor='w', edgecolor='w') plt.subplot(1,1,1) ax1=plt.gca() for cycle in range(dataxa['profile'].min(),dataxa['profile'].max()+1,1): temp_time = dataxa.time[cycle].data[~np.isnat(dataxa.time[cycle].data)] ProfileTime = ProfileTime + [temp_time] #remove where pressure may be unknown Pressure = dataxa.PRES[cycle].data[~np.isnan(dataxa.PRES[cycle].data)] try: Temperature = datapd.groupby('profile').get_group(int(dataxa.CYCLE_NUMBER[cycle][0].values)).PAR except: Temperature = Pressure * 0 + np.nan temparray[cycle_col,:] = np.interp(depth_array,np.flip(Pressure,axis=0),np.flip(Temperature,axis=0), left=np.nan,right=np.nan) cycle_col +=1 ###plot black dots at sample points #plt.scatter(x=temp_time, y=Pressure,s=1,marker='.', edgecolors='none', c='k', zorder=3, alpha=1) ###plot colored dots at sample points with colorscheme based on variable value plt.scatter(x=temp_time, y=Pressure,s=30,marker='.', edgecolors='none', c=np.log(Temperature+1.808), vmin=0, vmax=10, cmap=cmocean.cm.solar, zorder=2) cbar = plt.colorbar() time_array = np.array([x[0] for x in ProfileTime]) #plt.contourf(time_array,depth_array,np.log(temparray.T),extend='both', # cmap=cmocean.cm.solar,levels=np.arange(0,10,1),alpha=0.9,zorder=1) #plt.contour(time_array,depth_array,temparray.T, colors='#d3d3d3',linewidths=1, alpha=1.0,zorder=3) ax1.invert_yaxis() ax1.yaxis.set_minor_locator(ticker.MultipleLocator(5)) ax1.xaxis.set_major_locator(DayLocator(bymonthday=range(0,31,5))) ax1.xaxis.set_minor_locator(DayLocator(bymonthday=range(1,32,1))) 
ax1.xaxis.set_minor_formatter(DateFormatter('')) ax1.xaxis.set_major_formatter(DateFormatter('%d')) ax1.xaxis.set_tick_params(which='major', pad=25) ax1.xaxis.set_tick_params(which='minor', pad=5) ax1.set_xlim([start_date_dt,end_date_dt]) ax1.set_ylim([50,0]) plot_PAR() ```
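`plot_PAR` above regrids each profile onto a uniform depth axis with `np.interp`, using `left=np.nan`/`right=np.nan` so out-of-range depths stay empty. The same piecewise-linear interpolation can be sketched in plain Python (`interp1` is a hypothetical helper, not a library function; `None` stands in for NaN):

```python
def interp1(x_new, xp, fp):
    """Piecewise-linear interpolation of (xp, fp) at points x_new.
    xp must be increasing; points outside [xp[0], xp[-1]] yield None,
    mirroring left=np.nan / right=np.nan in the notebook."""
    out = []
    for x in x_new:
        if x < xp[0] or x > xp[-1]:
            out.append(None)
            continue
        # find the bracketing interval and blend linearly
        for i in range(len(xp) - 1):
            if xp[i] <= x <= xp[i + 1]:
                w = (x - xp[i]) / (xp[i + 1] - xp[i])
                out.append(fp[i] + w * (fp[i + 1] - fp[i]))
                break
    return out

print(interp1([0.5, 1.5, 3.0], [0.0, 1.0, 2.0], [10.0, 20.0, 30.0]))
# → [15.0, 25.0, None]
```

This is also why the notebook flips `Pressure` and `Temperature` before interpolating: `np.interp` requires the sample coordinates to be increasing, and the profiles are stored from depth upward.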
# Time Series Forecast with Basic RNN * Dataset is downloaded from https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data ``` import pandas as pd import numpy as np import datetime from matplotlib import pyplot as plt import seaborn as sns from sklearn.preprocessing import MinMaxScaler df = pd.read_csv('data/pm25.csv') print(df.shape) df.head() df.isnull().sum()*100/df.shape[0] df.dropna(subset=['pm2.5'], axis=0, inplace=True) df.reset_index(drop=True, inplace=True) df['datetime'] = df[['year', 'month', 'day', 'hour']].apply( lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'],hour=row['hour']), axis=1) df.sort_values('datetime', ascending=True, inplace=True) df.head() df['year'].value_counts() plt.figure(figsize=(5.5, 5.5)) g = sns.lineplot(data=df['pm2.5'], color='g') g.set_title('pm2.5 between 2010 and 2014') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings') ``` ### Note * Scaling the variables will make optimization functions work better, so here we're going to scale the variable into the [0,1] range ``` scaler = MinMaxScaler(feature_range=(0, 1)) df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1)) df.head() plt.figure(figsize=(5.5, 5.5)) g = sns.lineplot(data=df['scaled_pm2.5'], color='purple') g.set_title('Scaled pm2.5 between 2010 and 2014') g.set_xlabel('Index') g.set_ylabel('scaled_pm2.5 readings') # 2014 data as validation data, before 2014 as training data split_date = datetime.datetime(year=2014, month=1, day=1, hour=0) df_train = df.loc[df['datetime']<split_date] df_val = df.loc[df['datetime']>=split_date] print('Shape of train:', df_train.shape) print('Shape of test:', df_val.shape) df_val.reset_index(drop=True, inplace=True) df_val.head() # The way this works is to take nb_timesteps consecutive observations as X and the observation that follows them as the target, ## collecting the data with a stride-1 rolling window.
def makeXy(ts, nb_timesteps): """ Input: ts: original time series nb_timesteps: number of time steps in the regressors Output: X: 2-D array of regressors y: 1-D array of target """ X = [] y = [] for i in range(nb_timesteps, ts.shape[0]): X.append(list(ts.loc[i-nb_timesteps:i-1])) y.append(ts.loc[i]) X, y = np.array(X), np.array(y) return X, y X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7) print('Shape of train arrays:', X_train.shape, y_train.shape) print(X_train[0], y_train[0]) print(X_train[1], y_train[1]) X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7) print('Shape of validation arrays:', X_val.shape, y_val.shape) print(X_val[0], y_val[0]) print(X_val[1], y_val[1]) ``` ### Note * In 2D array above for X_train, X_val, it means (number of samples, number of time steps) * However RNN input has to be 3D array, (number of samples, number of time steps, number of features per timestep) * Only 1 feature which is scaled_pm2.5 * So, the code below converts 2D array to 3D array ``` X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)) X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1)) print('Shape of arrays after reshaping:', X_train.shape, X_val.shape) import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import SimpleRNN from tensorflow.keras.layers import Dense, Dropout, Input from tensorflow.keras.models import load_model from tensorflow.keras.callbacks import ModelCheckpoint from sklearn.metrics import mean_absolute_error tf.random.set_seed(10) model = Sequential() model.add(SimpleRNN(32, input_shape=(X_train.shape[1:]))) model.add(Dropout(0.2)) model.add(Dense(1, activation='linear')) model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae']) model.summary() save_weights_at = 'basic_rnn_model' save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='min', save_freq='epoch') history = 
model.fit(x=X_train, y=y_train, batch_size=16, epochs=20, verbose=1, callbacks=[save_best], validation_data=(X_val, y_val), shuffle=True) # load the best model best_model = load_model('basic_rnn_model') # Compare the prediction with y_true preds = best_model.predict(X_val) pred_pm25 = scaler.inverse_transform(preds) pred_pm25 = np.squeeze(pred_pm25) # Measure MAE of y_pred and y_true mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25) print('MAE for the validation set:', round(mae, 4)) mae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds) print('MAE for the scaled validation set:', round(mae, 4)) # Check the metrics and loss of each epoch mae = history.history['mae'] val_mae = history.history['val_mae'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(mae)) plt.plot(epochs, mae, 'bo', label='Training MAE') plt.plot(epochs, val_mae, 'b', label='Validation MAE') plt.title('Training and Validation MAE') plt.legend() plt.figure() # Here I was using MAE as loss too, that's why they looked almost the same... plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and Validation loss') plt.legend() plt.show() ``` ### Note * Best model saved by `ModelCheckpoint` saved the 12th epoch result, which had 0.12 val_loss * From the history plot of training vs validation loss, the 12th epoch result (i=11) has the lowest validation loss. This aligns with the result from `ModelCheckpoint` * Setting a different tensorflow seed will give different results!
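The windowing performed by `makeXy` earlier in this notebook can be sketched without pandas (`make_xy` is an illustrative stand-in operating on a plain list):

```python
def make_xy(series, nb_timesteps):
    """Return (X, y): each X row holds nb_timesteps consecutive values and
    y holds the value that immediately follows them (stride-1 rolling window)."""
    X, y = [], []
    for i in range(nb_timesteps, len(series)):
        X.append(series[i - nb_timesteps:i])
        y.append(series[i])
    return X, y

X, y = make_xy([1, 2, 3, 4, 5], 3)
print(X)  # → [[1, 2, 3], [2, 3, 4]]
print(y)  # → [4, 5]
```

This also shows why the first 7 rows of `df_val` are skipped when computing MAE above: with `nb_timesteps=7` the first predictable target is the 8th observation.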
``` import hddm import pandas as pd import matplotlib.pyplot as plt import numpy as np %matplotlib inline print(hddm.__version__) ``` # load the data ``` data = hddm.load_csv('data/study2.csv') data = hddm.utils.flip_errors(data) data data = data.drop(data[data.stim == '1Vne1A'].index) data['movie_valence'] = data.stim.str[:1] data['movie_arousal'] = data.stim.str[-2:-1] di = {'group1': "1V1A", 'group2': "1V0A", 'group3': "0V1A", 'group4': "0V0A"} data['group_code'] = data.group data = data.replace({"group": di}) data['mood_valence'] = data.group.str[:1] data['mood_arousal'] = data.group.str[-2:-1] ``` # fit a HDDM with parameters depends on the 4 variables ### 2 (movie_valence) * 2 (movie_arousal) * 2 (mood_valence) * 2 (mood_arousal) ``` m_cell_means_all = hddm.HDDM(data, depends_on={'v': ['stim', 'group'], 'a': ['stim', 'group'], 't': ['stim', 'group'], 'z': ['stim', 'group']}, include=('v', 'a', 't', 'z'), p_outlier=0.05) m_cell_means_all.find_starting_values() m_cell_means_all.sample(10000, burn=2000, dbname='study2_cell_mean_all.db', db='pickle') m_cell_means_all.save('study2_cell_mean_all') v_0V0A_0V0A, v_0V0A_0V1A, v_0V0A_1V0A, v_0V0A_1V1A, v_1V0A_0V0A, v_1V0A_0V1A, v_1V0A_1V0A, v_1V0A_1V1A, \ v_0V1A_0V0A, v_0V1A_0V1A, v_0V1A_1V0A, v_0V1A_1V1A, v_1V1A_0V0A, v_1V1A_0V1A, v_1V1A_1V0A, v_1V1A_1V1A, \ a_0V0A_0V0A, a_0V0A_0V1A, a_0V0A_1V0A, a_0V0A_1V1A, a_1V0A_0V0A, a_1V0A_0V1A, a_1V0A_1V0A, a_1V0A_1V1A, \ a_0V1A_0V0A, a_0V1A_0V1A, a_0V1A_1V0A, a_0V1A_1V1A, a_1V1A_0V0A, a_1V1A_0V1A, a_1V1A_1V0A, a_1V1A_1V1A, \ t_0V0A_0V0A, t_0V0A_0V1A, t_0V0A_1V0A, t_0V0A_1V1A, t_1V0A_0V0A, t_1V0A_0V1A, t_1V0A_1V0A, t_1V0A_1V1A, \ t_0V1A_0V0A, t_0V1A_0V1A, t_0V1A_1V0A, t_0V1A_1V1A, t_1V1A_0V0A, t_1V1A_0V1A, t_1V1A_1V0A, t_1V1A_1V1A, \ z_0V0A_0V0A, z_0V0A_0V1A, z_0V0A_1V0A, z_0V0A_1V1A, z_1V0A_0V0A, z_1V0A_0V1A, z_1V0A_1V0A, z_1V0A_1V1A, \ z_0V1A_0V0A, z_0V1A_0V1A, z_0V1A_1V0A, z_0V1A_1V1A, z_1V1A_0V0A, z_1V1A_0V1A, z_1V1A_1V0A, z_1V1A_1V1A = m_cell_means_all.nodes_db.loc[[ 
"v(0V0A.0V0A)", "v(0V0A.0V1A)", "v(0V0A.1V0A)", "v(0V0A.1V1A)", "v(1V0A.0V0A)", "v(1V0A.0V1A)", "v(1V0A.1V0A)", "v(1V0A.1V1A)", "v(0V1A.0V0A)", "v(0V1A.0V1A)", "v(0V1A.1V0A)", "v(0V1A.1V1A)", "v(1V1A.0V0A)", "v(1V1A.0V1A)", "v(1V1A.1V0A)", "v(1V1A.1V1A)", "a(0V0A.0V0A)", "a(0V0A.0V1A)", "a(0V0A.1V0A)", "a(0V0A.1V1A)", "a(1V0A.0V0A)", "a(1V0A.0V1A)", "a(1V0A.1V0A)", "a(1V0A.1V1A)", "a(0V1A.0V0A)", "a(0V1A.0V1A)", "a(0V1A.1V0A)", "a(0V1A.1V1A)", "a(1V1A.0V0A)", "a(1V1A.0V1A)", "a(1V1A.1V0A)", "a(1V1A.1V1A)", "t(0V0A.0V0A)", "t(0V0A.0V1A)", "t(0V0A.1V0A)", "t(0V0A.1V1A)", "t(1V0A.0V0A)", "t(1V0A.0V1A)", "t(1V0A.1V0A)", "t(1V0A.1V1A)", "t(0V1A.0V0A)", "t(0V1A.0V1A)", "t(0V1A.1V0A)", "t(0V1A.1V1A)", "t(1V1A.0V0A)", "t(1V1A.0V1A)", "t(1V1A.1V0A)", "t(1V1A.1V1A)", "z(0V0A.0V0A)", "z(0V0A.0V1A)", "z(0V0A.1V0A)", "z(0V0A.1V1A)", "z(1V0A.0V0A)", "z(1V0A.0V1A)", "z(1V0A.1V0A)", "z(1V0A.1V1A)", "z(0V1A.0V0A)", "z(0V1A.0V1A)", "z(0V1A.1V0A)", "z(0V1A.1V1A)", "z(1V1A.0V0A)", "z(1V1A.0V1A)", "z(1V1A.1V0A)", "z(1V1A.1V1A)",], 'node'] ``` ## Create a dataframe includes the drift rate samples as DV and the IVs ``` trace = np.concatenate((v_0V0A_0V0A.trace(), v_0V0A_0V1A.trace(), v_0V0A_1V0A.trace(), v_0V0A_1V1A.trace(), v_1V0A_0V0A.trace(), v_1V0A_0V1A.trace(), v_1V0A_1V0A.trace(), v_1V0A_1V1A.trace(), \ v_0V1A_0V0A.trace(), v_0V1A_0V1A.trace(), v_0V1A_1V0A.trace(), v_0V1A_1V1A.trace(), \ v_1V1A_0V0A.trace(), v_1V1A_0V1A.trace(), v_1V1A_1V0A.trace(), v_1V1A_1V1A.trace())) mood = np.concatenate((np.repeat("0V0A", 8000), np.repeat("0V0A", 8000), np.repeat("0V0A", 8000), np.repeat("0V0A", 8000), np.repeat("1V0A", 8000), np.repeat("1V0A", 8000), np.repeat("1V0A", 8000), np.repeat("1V0A", 8000), np.repeat("0V1A", 8000), np.repeat("0V1A", 8000), np.repeat("0V1A", 8000), np.repeat("0V1A", 8000), np.repeat("1V1A", 8000), np.repeat("1V1A", 8000), np.repeat("1V1A", 8000), np.repeat("1V1A", 8000))) mood_valence = np.concatenate((np.repeat("negative valence", 8000*4), np.repeat("positive 
valence", 8000*4), np.repeat("negative valence", 8000*4), np.repeat("positive valence", 8000*4))) mood_arousal = np.concatenate((np.repeat("low arousal", 8000*8), np.repeat("high arousal", 8000*8))) movie = np.concatenate((np.repeat("0V0A", 8000), np.repeat("0V1A", 8000), np.repeat("1V0A", 8000), np.repeat("1V1A", 8000), np.repeat("0V0A", 8000), np.repeat("0V1A", 8000), np.repeat("1V0A", 8000), np.repeat("1V1A", 8000), np.repeat("0V0A", 8000), np.repeat("0V1A", 8000), np.repeat("1V0A", 8000), np.repeat("1V1A", 8000), np.repeat("0V0A", 8000), np.repeat("0V1A", 8000), np.repeat("1V0A", 8000), np.repeat("1V1A", 8000))) movie_valence = np.concatenate((np.repeat("negative valence", 8000*2), np.repeat("positive valence", 8000*2), np.repeat("negative valence", 8000*2), np.repeat("positive valence", 8000*2), np.repeat("negative valence", 8000*2), np.repeat("positive valence", 8000*2), np.repeat("negative valence", 8000*2), np.repeat("positive valence", 8000*2))) movie_arousal = np.tile(np.concatenate((np.repeat("low arousal", 8000), np.repeat("high arousal", 8000))),8) trace_df = pd.DataFrame({"trace": trace, "movie": movie, "mood": mood, "movie_valence": movie_valence, "movie_arousal": movie_arousal, "mood_valence": mood_valence, "mood_arousal": mood_arousal}) # save the dataframe for plotting trace_df.to_csv("plot/study2_cell_mean_trace_df.csv") ```
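The 64 node names indexed from `nodes_db` above follow a regular parameter × stim × group pattern, so they could be generated rather than typed out; a sketch with `itertools` (the condition orderings are copied from the lists above, and the `v(stim.group)` naming convention is taken from HDDM's `depends_on` output):

```python
from itertools import product

params = ["v", "a", "t", "z"]
# first token cycles 0V0A, 1V0A, 0V1A, 1V1A; second cycles 0V0A, 0V1A, 1V0A, 1V1A,
# matching the hand-written order in the cell above
firsts = ["0V0A", "1V0A", "0V1A", "1V1A"]
seconds = ["0V0A", "0V1A", "1V0A", "1V1A"]

node_names = [f"{p}({a}.{b})" for p, a, b in product(params, firsts, seconds)]
print(len(node_names))  # → 64
print(node_names[0])    # → v(0V0A.0V0A)
```

Generating the list this way would also make the 64-way tuple unpacking unnecessary: the traces could be collected into a dict keyed by node name instead.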
## Dependencies ``` # !pip install --quiet /kaggle/input/kerasapplications # !pip install --quiet /kaggle/input/efficientnet-git import warnings, glob from tensorflow.keras import Sequential, Model # import efficientnet.tfkeras as efn from cassava_scripts import * seed = 0 seed_everything(seed) warnings.filterwarnings('ignore') ``` ### Hardware configuration ``` # TPU or GPU detection # Detect hardware, return appropriate distribution strategy strategy, tpu = set_up_strategy() AUTO = tf.data.experimental.AUTOTUNE REPLICAS = strategy.num_replicas_in_sync print(f'REPLICAS: {REPLICAS}') ``` # Model parameters ``` BATCH_SIZE = 8 * REPLICAS HEIGHT = 512 WIDTH = 512 CHANNELS = 3 N_CLASSES = 5 TTA_STEPS = 0 # Do TTA if > 0 ``` # Augmentation ``` def data_augment(image, label): p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32) # p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32) # p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32) # p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32) p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32) # Flips image = tf.image.random_flip_left_right(image) image = tf.image.random_flip_up_down(image) if p_spatial > .75: image = tf.image.transpose(image) # Rotates if p_rotate > .75: image = tf.image.rot90(image, k=3) # rotate 270º elif p_rotate > .5: image = tf.image.rot90(image, k=2) # rotate 180º elif p_rotate > .25: image = tf.image.rot90(image, k=1) # rotate 90º # # Pixel-level transforms # if p_pixel_1 >= .4: # image = tf.image.random_saturation(image, lower=.7, upper=1.3) # if p_pixel_2 >= .4: # image = tf.image.random_contrast(image, lower=.8, upper=1.2) # if p_pixel_3 >= .4: # image = tf.image.random_brightness(image, max_delta=.1) # Crops if p_crop > .7: if p_crop > .9: image = tf.image.central_crop(image, central_fraction=.7) elif p_crop > .8: image = tf.image.central_crop(image, central_fraction=.8) else: image = 
tf.image.central_crop(image, central_fraction=.9) elif p_crop > .4: crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32) image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS]) # # Crops # if p_crop > .6: # if p_crop > .9: # image = tf.image.central_crop(image, central_fraction=.5) # elif p_crop > .8: # image = tf.image.central_crop(image, central_fraction=.6) # elif p_crop > .7: # image = tf.image.central_crop(image, central_fraction=.7) # else: # image = tf.image.central_crop(image, central_fraction=.8) # elif p_crop > .3: # crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32) # image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS]) return image, label ``` ## Auxiliary functions ``` # Datasets utility functions def resize_image(image, label): image = tf.image.resize(image, [HEIGHT, WIDTH]) image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS]) return image, label def process_path(file_path): name = get_name(file_path) img = tf.io.read_file(file_path) img = decode_image(img) # img, _ = scale_image(img, None) # img = center_crop(img, HEIGHT, WIDTH) return img, name def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'): dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled) dataset = dataset.map(process_path, num_parallel_calls=AUTO) if tta: dataset = dataset.map(data_augment, num_parallel_calls=AUTO) dataset = dataset.map(resize_image, num_parallel_calls=AUTO) dataset = dataset.batch(BATCH_SIZE) dataset = dataset.prefetch(AUTO) return dataset ``` # Load data ``` database_base_path = '/kaggle/input/cassava-leaf-disease-classification/' submission = pd.read_csv(f'{database_base_path}sample_submission.csv') display(submission.head()) TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec') NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES) print(f'GCS: test: {NUM_TEST_IMAGES}') model_path_list = 
glob.glob('/kaggle/input/126-cassava-leaf-effnetb3-scl-imagenet-512x512/*.h5') model_path_list.sort() print('Models to predict:') print(*model_path_list, sep='\n') ``` # Model ``` class UnitNormLayer(L.Layer): """ Normalize vectors (euclidean norm) in batch to unit hypersphere. """ def __init__(self, **kwargs): super(UnitNormLayer, self).__init__(**kwargs) def call(self, input_tensor): norm = tf.norm(input_tensor, axis=1) return input_tensor / tf.reshape(norm, [-1, 1]) def encoder_fn(input_shape): inputs = L.Input(shape=input_shape, name='input_image') # base_model = efn.EfficientNetB3(input_tensor=inputs, base_model = tf.keras.applications.EfficientNetB3(input_tensor=inputs, include_top=False, weights=None, pooling='avg') norm_embeddings = UnitNormLayer()(base_model.output) model = Model(inputs=inputs, outputs=norm_embeddings) return model def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True): for layer in encoder.layers: layer.trainable = trainable unfreeze_model(encoder) # unfreeze all layers except "batch normalization" inputs = L.Input(shape=input_shape, name='input_image') features = encoder(inputs) features = L.Dropout(.5)(features) features = L.Dense(512, activation='relu')(features) features = L.Dropout(.5)(features) output = L.Dense(N_CLASSES, activation='softmax', name='output')(features) output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy')(features) output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd')(features) model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd]) return model with strategy.scope(): encoder = encoder_fn((None, None, CHANNELS)) model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=True) model.summary() ``` # Test set predictions ``` files_path = f'{database_base_path}test_images/' test_size = len(os.listdir(files_path)) test_preds = np.zeros((test_size, N_CLASSES)) for model_path in model_path_list: print(model_path) K.clear_session() 
model.load_weights(model_path) if TTA_STEPS > 0: test_ds = get_dataset(files_path, tta=True).repeat() ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1) preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)] preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1) test_preds += preds / len(model_path_list) else: test_ds = get_dataset(files_path, tta=False) x_test = test_ds.map(lambda image, image_name: image) test_preds += model.predict(x_test)[0] / len(model_path_list) test_preds = np.argmax(test_preds, axis=-1) test_names_ds = get_dataset(files_path) image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())] submission = pd.DataFrame({'image_id': image_names, 'label': test_preds}) submission.to_csv('submission.csv', index=False) display(submission.head()) ```
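The TTA averaging above relies on how the repeated dataset orders predictions: with `.repeat()`, pass `s` of image `i` lands at flat position `i + s * test_size`, which is exactly what `reshape(test_size, TTA_STEPS, order='F')` followed by a mean over axis 1 groups back together. A minimal pure-Python sketch with made-up scalar scores (the real code averages whole probability vectors):

```python
# Hypothetical flat predictions for 3 images over 2 TTA passes:
# pass 0 -> [10, 20, 30], pass 1 -> [12, 22, 32]
test_size, tta_steps = 3, 2
flat = [10, 20, 30, 12, 22, 32]

# Equivalent of preds.reshape(test_size, TTA_STEPS, order='F') then
# np.mean(..., axis=1): Fortran order puts flat[i::test_size] on row i.
averaged = [sum(flat[i::test_size][:tta_steps]) / tta_steps
            for i in range(test_size)]
print(averaged)  # [11.0, 21.0, 31.0]
```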
``` library(ggplot2) #library(fmsb) library(dplyr) library(tidyr) #library(ggforce) library(tibble) library(RColorBrewer) #library(dynutils) library(plyr) library(stringr) library(R.utils) ``` ``` metrics_tab_lab <- read.csv("/storage/groups/ml01/workspace/group.daniela/metrics_lisi/all.csv") methods <- colnames(metrics_tab_lab)[-1] metrics <- as.character(metrics_tab_lab[,1]) metrics_tab <- t(metrics_tab_lab) metrics_tab[metrics_tab == ""] <- NA metrics_tab <- as.data.frame(metrics_tab[-1,]) colnames(metrics_tab) <- plyr::mapvalues( metrics, from = c("ASW_label", "ASW_label/batch", "cell_cycle_conservation"), to = c("Cell type ASW", "Batch ASW", "CC conservation")) colnames(metrics_tab) <- sub("_", " ", colnames(metrics_tab)) method_names <- sapply(str_split(rownames(metrics_tab), "_"), function(x) x[1]) method_names <- capitalize(method_names) method_names <- plyr::mapvalues(method_names, from = c("Seurat", "Bbknn", "Trvae"), to = c("Seurat v3", "BBKNN", "TrVAE")) row_groups <- sapply(str_split(rownames(metrics_tab), "_"), function(x) x[2]) metrics_tab <- add_column(metrics_tab, "Method" = method_names, .before = 1) bbknn_ind <- grep("bbknn", rownames(metrics_tab)) lisi_ind <- grep("LISI", colnames(metrics_tab)) metrics_tab[bbknn_ind, lisi_ind] <- NA # reorder columns by groups col.ordered <- c("Method", "PCR batch", "Batch ASW", "iLISI", "kBET", "NMI cluster/label", "ARI cluster/label", "Cell type ASW", "cLISI", "CC conservation") metrics_tab <- metrics_tab[, col.ordered] metrics_tab[,-1] <- as.numeric(as.matrix(metrics_tab[, -1])) #metrics_tab[,-1] <- apply(metrics_tab[,-1], 2, function(x) scale_minmax(x)) score_group1 <- rowMeans(metrics_tab[, 2:5], na.rm = T) score_group2 <- rowMeans(metrics_tab[, 6:10], na.rm = T) score_all <- rowMeans(metrics_tab[, 2:10], na.rm = T) metrics_tab <- add_column(metrics_tab, "Overall Score" = score_all, .after = "Method") metrics_tab <- add_column(metrics_tab, "Batch Correction" = score_group1, .after = "Overall Score") 
metrics_tab <- add_column(metrics_tab, "Bio conservation" = score_group2, .after = "kBET") metrics_tab <- add_column(metrics_tab, "method_group" = row_groups, .after = "Method") # order methods by the overall score if(length(unique(score_all))!= 1){ metrics_tab <- metrics_tab[order(metrics_tab$method_group, metrics_tab$`Overall Score`, decreasing = T), ] } method_group <- plyr::mapvalues(metrics_tab$method_group, from = c("knn", "embed", "full"), to = c("KNN", "Embeddings", "Features")) metrics_tab <- metrics_tab[, colnames(metrics_tab) != "method_group"] metrics_tab orig_rownames = rownames(metrics_tab) metrics_tab$Method = paste(metrics_tab$Method, sapply(strsplit(orig_rownames, '_'), `[`, 2)) rownames(metrics_tab) = metrics_tab$Method metrics_tab library(pheatmap) as.matrix(metrics_tab[,3:ncol(metrics_tab)]) library(repr) options(repr.plot.width=5, repr.plot.height=3) p = pheatmap(as.matrix(metrics_tab[,3:ncol(metrics_tab)]), na_col = "white", cluster_rows=FALSE, cluster_cols=FALSE) library(data.table) met_dt = as.data.table(metrics_tab) met_dt[, HVG_label := NULL] typeof(melt(met_dt, variable.name = 'Metrics')$value) options(repr.plot.width=6, repr.plot.height=3.5) title="Integration Performance Scores" ggheatmap <- ggplot(data = melt(met_dt, variable.name = 'Metrics'), aes(x=Metrics, y=Method, fill=value)) + geom_tile(color = "grey50")+ scale_fill_gradient2(high = "royalblue", low = "lightgoldenrodyellow", #mid = "lightgoldenrodyellow", midpoint = 0.5, limit = c(0,1), space = "Lab", name="") + theme_minimal() + theme(axis.text.x = element_text(angle = 30, vjust = 1, hjust = 1)) + labs(title="", x = "", y = "") + coord_fixed() print(ggheatmap) ggheatmap + geom_text(aes(Metrics, Method, label = round(value,2)), size = 3) ggsave(paste0(getwd(), '/score_heatmap.svg'), p, device = 'svg') file.exists(paste0(getwd(), '/score_heatmap.svg')) subset(data, is.na(value)) ```
# YAP Jupyter Interface ![yap.ico](attachment:yap.ico) ## Walkthrough and User Guide The next cells show examples of input/output interaction with Prolog and Jupyter. We assume basic knowledge of both Prolog and Python/R/Jupyter. Notice that this is experimental software, subject to bugs and change. Also remember that - all cells in the same page belong to the same process; - _magic_ refers to annotations that perform extra, non-trivial work; - check the top-right ball next to `YAP 6`: if empty, the system is available; otherwise, it is busy. ### Basic Usage Any Prolog system should be able to unify two terms: ``` X=2 X= s X=f(Y) f(X,['olá',X]) = f(`hello`,Z) ``` Unification may fail: ``` f('olá',[X]) = f(`hello`,Z) X=Y ``` You observe that the first time you press `shift-enter` or `ctrl-enter`, YAP/Jupyter writes down `X=2`, the answer. If you press `shift-enter` again, it writes `No (more) answers`. Pressing again returns you to the first answer, `X=2`: - In YAP/Jupyter, cells have a state that depends on how many answers you have generated. YAP also allows asking for all solutions in a single run: ``` between(1,100,I), J is I^I * ``` The YAP `^` operator generates floating-point numbers for large exponentials. You can try replacing `^` by `**` in the cell: notice that the cell state is reset, as changes in the text of a cell may mean anything. ``` between(1,20,I), J is 20-I, IJ is I*J * ``` NB: in the current version, the states in a page are single-threaded, and only one cell is active at a time. ## Programming with cells Cells can store programs: that is what they do. The next cell shows a program to recognise state-checking predicates: ``` state_info(Name/Arity) :- current_predicate(Name/Arity), atom_concat(current,_,Name). state_info(Name/Arity) :- system_predicate(Name/Arity), atom_concat(current,_,Name). ``` Now you can query: ``` state_info(P) ``` Notice that you need to consult the program cell first. 
We can just do both in the same cell: ``` generate_ith(I, I, [Head|Tail], Head, Tail). generate_ith(I, IN, [_|List], El, Tail) :- I1 is I+1, generate_ith(I1, IN, List, El, Tail). ith(V, In, Element, Tail) :- var(V), !, generate_ith(0, V, In, Element, Tail). ith(0, [Head|Tail], Head, Tail) :- !. ith(N, [Head|Tail], Elem, [Head|Rest]) :- M is N-1, ith(M, Tail, Elem, Rest). ith(X,[1,2,3,4,5],4, T) %%bash ls ``` ### Magic YAP allows the standard magics, both line and cell magics: - line magics should be the first non-empty line, and must start with `%` followed immediately by the name. - cell magics start with `%%` and must be the only magic in the cell. You can use the completion mechanism to list all magics. ``` %matplotlib inline main :- python_import( matplotlib.pyplot as plt ), python_import( numpy as np ), T = np.arange(0.0, 2.0, 0.01), S = 1 + np.sin(2*np.pi*T), plt.plot(T, S), plt.xlabel(`time (s)`), plt.ylabel(`voltage (mV)`), plt.title(`About as simple as it gets, folks`), plt.grid(true), plt.savefig("test2.png"), plt.show(). main %matplotlib inline main2 :- python_import( numpy as np ), python_import( matplotlib.mlab as mlab ), python_import( matplotlib.pyplot as plt ), /* example data */ Mu = 100, /* mean of distribution, */ Sigma = 15, /* standard deviation of distribution, */ X = Mu + Sigma * np.random.randn(10000), NumBins = 50, /* the histogram of the data */ t(n, bins, patches) := plt.hist(X, NumBins, normed=1, facecolor= `green`, alpha=0.5), /* add a `best fit` line */ y := mlab.normpdf(bins, Mu, Sigma), plt.plot(bins, y, 'r--'), plt.xlabel('Smarts'), plt.ylabel('Probability'), plt.title('Histogram of IQ: $\\mu=100$, $\\sigma=15$'), /* Tweak spacing to prevent clipping of ylabel, */ plt.show(). main2 ``` Last, Prolog can talk to R, so you can get a Python to R bridge: ``` :- [library(real)]. X <- c(1:10), x := X ``` get_code(C) ``` get_code(C) %%python3 import sys input() ```
# 📝 Exercise M3.02 The goal is to find the best set of hyperparameters which maximize the statistical performance on a training set. Here again we limit the size of the training set to make the computation run faster. Feel free to increase the `train_size` value if your computer is powerful enough. ``` import numpy as np import pandas as pd adult_census = pd.read_csv("../../datasets/adult-census.csv") target_name = "class" target = adult_census[target_name] data = adult_census.drop(columns=[target_name, "education-num"]) from sklearn.model_selection import train_test_split data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=42, train_size = 0.8) ``` Create your machine learning pipeline. You should: * preprocess the categorical columns using a `OneHotEncoder` and use a `StandardScaler` to normalize the numerical data. * use a `LogisticRegression` as a predictive model. Start by defining the columns and the preprocessing pipelines to be applied on each column. ``` data.dtypes from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import StandardScaler ``` Subsequently, create a `ColumnTransformer` to redirect the specific columns to a preprocessing pipeline. 
``` from sklearn.compose import make_column_selector as selector from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline from sklearn.linear_model import LogisticRegression categorical_columns_selector = selector(dtype_include=object) categorical_columns = categorical_columns_selector(data) numerical_columns_selector = selector(dtype_exclude=object) numerical_columns = numerical_columns_selector(data) categorical_processor = OneHotEncoder(handle_unknown="ignore") numerical_processor = StandardScaler() preprocessor = ColumnTransformer( [('cat-preprocessor', categorical_processor, categorical_columns), ('num-preprocessor', numerical_processor, numerical_columns)] ) model = make_pipeline(preprocessor, LogisticRegression()) ``` Finally, concatenate the preprocessing pipeline with a logistic regression. Use a `RandomizedSearchCV` to find the best set of hyperparameters by tuning the following parameters of the `model`: - the parameter `C` of the `LogisticRegression` with values ranging from 0.001 to 10. You can use a log-uniform distribution (i.e. `scipy.stats.loguniform`); - the parameter `with_mean` of the `StandardScaler` with possible values `True` or `False`; - the parameter `with_std` of the `StandardScaler` with possible values `True` or `False`. Once the computation has completed, print the best combination of parameters stored in the `best_params_` attribute. 
``` from sklearn.model_selection import RandomizedSearchCV from scipy.stats import loguniform param_distributions = { "logisticregression__C": loguniform(0.001, 10), "columntransformer__num-preprocessor__with_mean": [True, False], "columntransformer__num-preprocessor__with_std": [True, False], } model_random_search = RandomizedSearchCV( model, param_distributions=param_distributions, n_iter=20, error_score=np.nan, n_jobs=-1, verbose=1) model_random_search.fit(data_train, target_train) model_random_search.best_params_ ```
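`loguniform(0.001, 10)` draws values whose logarithm is uniform between `log(0.001)` and `log(10)`, so each decade is sampled about equally often — the right shape for a penalty parameter like `C` whose effect scales multiplicatively. A minimal stdlib sketch of that sampling rule (not scipy's implementation):

```python
import math
import random

# A log-uniform variable has a uniformly distributed logarithm, so draws
# spread evenly across orders of magnitude rather than piling up near 10.
def loguniform_sample(low, high, rng=random):
    return math.exp(rng.uniform(math.log(low), math.log(high)))

random.seed(0)
samples = [loguniform_sample(0.001, 10) for _ in range(1000)]

# Roughly equal counts land in each decade: 0.001-0.01, 0.01-0.1, 0.1-1, 1-10.
per_decade = [sum(1 for s in samples if 10 ** e <= s < 10 ** (e + 1))
              for e in range(-3, 1)]
print(per_decade)
```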
# Title _Brief abstract/introduction/motivation. State what the chapter is about in 1-2 paragraphs._ _Then, have an introduction video:_ ``` from bookutils import YouTubeVideo YouTubeVideo("w4u5gCgPlmg") ``` **Prerequisites** * _Refer to earlier chapters as notebooks here, as here:_ [Earlier Chapter](Fuzzer.ipynb). ``` import bookutils ``` ## Synopsis <!-- Automatically generated. Do not edit. --> To [use the code provided in this chapter](Importing.ipynb), write ```python >>> from fuzzingbook.Template import <identifier> ``` and then make use of the following features. _For those only interested in using the code in this chapter (without wanting to know how it works), give an example. This will be copied to the beginning of the chapter (before the first section) as text with rendered input and output._ You can use `int_fuzzer()` as: ```python >>> print(int_fuzzer()) 76.5 ``` ## _Section 1_ \todo{Add} ## _Section 2_ \todo{Add} ### Excursion: All the Details This text will only show up on demand (HTML) or not at all (PDF). This is useful for longer implementations, or repetitive, or specialized parts. ### End of Excursion ## _Section 3_ \todo{Add} _If you want to introduce code, it is helpful to state the most important functions, as in:_ * `random.randrange(start, end)` - return a random number [`start`, `end`] * `range(start, end)` - create a list with integers from `start` to `end`. Typically used in iterations. * `for elem in list: body` executes `body` in a loop with `elem` taking each value from `list`. * `for i in range(start, end): body` executes `body` in a loop with `i` from `start` to `end` - 1. * `chr(n)` - return a character with ASCII code `n` ``` import random def int_fuzzer(): """A simple function that returns a random integer""" return random.randrange(1, 100) + 0.5 # More code pass ``` ## _Section 4_ \todo{Add} ## Synopsis _For those only interested in using the code in this chapter (without wanting to know how it works), give an example. 
This will be copied to the beginning of the chapter (before the first section) as text with rendered input and output._ You can use `int_fuzzer()` as: ``` print(int_fuzzer()) ``` ## Lessons Learned * _Lesson one_ * _Lesson two_ * _Lesson three_ ## Next Steps _Link to subsequent chapters (notebooks) here, as in:_ * [use _mutations_ on existing inputs to get more valid inputs](MutationFuzzer.ipynb) * [use _grammars_ (i.e., a specification of the input format) to get even more valid inputs](Grammars.ipynb) * [reduce _failing inputs_ for efficient debugging](Reducer.ipynb) ## Background _Cite relevant works in the literature and put them into context, as in:_ The idea of ensuring that each expansion in the grammar is used at least once goes back to Burkhardt \cite{Burkhardt1967}, to be later rediscovered by Paul Purdom \cite{Purdom1972}. ## Exercises _Close the chapter with a few exercises such that people have things to do. To make the solutions hidden (to be revealed by the user), have them start with_ ``` **Solution.** ``` _Your solution can then extend up to the next title (i.e., any markdown cell starting with `#`)._ _Running `make metadata` will automatically add metadata to the cells such that the cells will be hidden by default, and can be uncovered by the user. The button will be introduced above the solution._ ### Exercise 1: _Title_ _Text of the exercise_ ``` # Some code that is part of the exercise pass ``` _Some more text for the exercise_ **Solution.** _Some text for the solution_ ``` # Some code for the solution 2 + 2 ``` _Some more text for the solution_ ### Exercise 2: _Title_ _Text of the exercise_ **Solution.** _Solution for the exercise_
# Using `pybind11` The package `pybind11` provides an elegant way to wrap C++ code for Python, including automatic conversions for `numpy` arrays and the C++ `Eigen` linear algebra library. Used with the `cppimport` package, this provides a very nice workflow for integrating C++ and Python: - Edit C++ code - Run Python code ```bash ! pip install pybind11 ! pip install cppimport ``` Copy the Eigen library if necessary - no installation is required as Eigen is a header only library. ``` import os if not os.path.exists('eigen'): ! wget https://gitlab.com/libeigen/eigen/-/archive/3.3.9/eigen-3.3.9.tar.gz ! tar -xzf eigen-3.3.9.tar.gz ! mv eigen-3.3.9 eigen ``` ## Resources - [`pybind11`](http://pybind11.readthedocs.io/en/latest/) - [`cppimport`](https://github.com/tbenthompson/cppimport) - [`Eigen`](http://eigen.tuxfamily.org) ## Example 1 - Basic usage ### Standalone compilation ``` %%file ex0.cpp #include <pybind11/pybind11.h> namespace py = pybind11; PYBIND11_MODULE(ex0, m) { m.def("add", [](int a, int b) { return a + b; }); m.def("mult", [](int a, int b) { return a * b; }); } %%bash g++ -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) \ ex0.cpp -o ex0$(python3-config --extension-suffix) import ex0 ex0.add(3,4), ex0.mult(3,4) ``` ### Using the `cppimport` package ``` %%file ex1.cpp <% setup_pybind11(cfg) %> #include <pybind11/pybind11.h> namespace py = pybind11; PYBIND11_MODULE(ex1, m) { m.def("add", [](int a, int b) { return a + b; }); m.def("mult", [](int a, int b) { return a * b; }); } import cppimport ex1 = cppimport.imp("ex1") ex1.add(3,4) from ex1 import mult mult(3,4) ls ex1*so ``` ## Example 2 - Adding doc and named/default arguments ``` %%file ex2.cpp <% setup_pybind11(cfg) %> #include <pybind11/pybind11.h> namespace py = pybind11; using namespace pybind11::literals; PYBIND11_MODULE(ex2, m) { m.def("add", [](int a, int b) { return a + b; }, "Add two integers.", py::arg("a") = 3, py::arg("b") = 4); m.def("mult", [](int a, int b) { 
return a * b; }, "Multiply two integers.", "a"_a=3, "b"_a=4); } import cppimport ex2 = cppimport.imp("ex2") help(ex1.add) help(ex2.add) ``` ### Example 3 - Split into execution modules for efficient compilation ``` %%file funcs1.cpp #include <pybind11/pybind11.h> namespace py = pybind11; int add(int a, int b) { return a + b; } void init_f1(py::module &m) { m.def("add", &add); } %%file funcs2.cpp #include <pybind11/pybind11.h> namespace py = pybind11; int mult(int a, int b) { return a * b; } void init_f2(py::module &m) { m.def("mult", &mult); } %%file ex3.cpp <% setup_pybind11(cfg) cfg['sources'] = ['funcs1.cpp', 'funcs2.cpp'] %> #include <pybind11/pybind11.h> namespace py = pybind11; void init_f1(py::module &m); void init_f2(py::module &m); PYBIND11_MODULE(ex3, m) { init_f1(m); init_f2(m); } import cppimport ex3 = cppimport.imp("ex3") ex3.add(3,4), ex3.mult(3, 4) ## Example 4 - Using setup.py to create shared libraries %%file funcs.hpp #pragma once int add(int a, int b); int mult(int a, int b); %%file funcs.cpp #include "funcs.hpp" int add(int a, int b) { return a + b; } int mult(int a, int b) { return a * b; } %%file ex4.cpp #include "funcs.hpp" #include <pybind11/pybind11.h> namespace py = pybind11; PYBIND11_MODULE(ex4, m) { m.def("add", &add); m.def("mult", &mult); } import os if not os.path.exists('./pybind11'): ! 
git clone https://github.com/pybind/pybind11.git %%file setup.py import os, sys from distutils.core import setup, Extension from distutils import sysconfig cpp_args = ['-std=c++14'] ext_modules = [ Extension( 'ex4', ['funcs.cpp', 'ex4.cpp'], include_dirs=['pybind11/include'], language='c++', extra_compile_args = cpp_args, ), ] setup( name='ex4', version='0.0.1', author='Cliburn Chan', author_email='cliburn.chan@duke.edu', description='Example', ext_modules=ext_modules, ) %%bash python3 setup.py build_ext -i import ex4 ex4.add(3,4), ex4.mult(3,4) ``` ## Example 5 - Using STL containers ``` %%file ex5.cpp <% setup_pybind11(cfg) %> #include "funcs.hpp" #include <pybind11/pybind11.h> #include <pybind11/stl.h> #include <vector> namespace py = pybind11; double vsum(const std::vector<double>& vs) { double res = 0; for (const auto& i: vs) { res += i; } return res; } std::vector<int> range(int start, int stop, int step) { std::vector<int> res; for (int i=start; i<stop; i+=step) { res.push_back(i); } return res; } PYBIND11_MODULE(ex5, m) { m.def("vsum", &vsum); m.def("range", &range); } import cppimport ex5 = cppimport.imp("ex5") ex5.vsum(range(10)) ex5.range(1, 10, 2) ``` ## Using `cppimport` The `cppimport` package allows you to specify several options. See [Github page](https://github.com/tbenthompson/cppimport) ### Use of `cppimport.imp` Note that `cppimport.imp` only needs to be called to build the shared library. Once it is called, the shared library is created and can be used. Any updates to the C++ files will be detected by `cppimport` and it will automatically trigger a re-build. ## Example 6: Vectorizing functions for use with `numpy` arrays Example showing how to vectorize a `square` function. Note that from here on, we don't bother to use separate header and implementation files for these code snippets, and just write them together with the wrapping code in a `code.cpp` file. 
This means that with `cppimport`, there are only two files that we actually code for, a C++ `code.cpp` file and a python test file. ``` %%file ex6.cpp <% cfg['compiler_args'] = ['-std=c++14'] setup_pybind11(cfg) %> #include <pybind11/pybind11.h> #include <pybind11/numpy.h> namespace py = pybind11; double square(double x) { return x * x; } PYBIND11_MODULE(ex6, m) { m.doc() = "pybind11 example plugin"; m.def("square", py::vectorize(square), "A vectorized square function."); } import cppimport ex6 = cppimport.imp("ex6") ex6.square([1,2,3]) import ex6 ex6.square([2,4,6]) ``` ## Example 7: Using `numpy` arrays as function arguments and return values Example showing how to pass `numpy` arrays in and out of functions. These `numpy` array arguments can either be generic `py::array` or typed `py::array_t<double>`. The properties of the `numpy` array can be obtained by calling its `request` method. This returns a `struct` of the following form: ```c++ struct buffer_info { void *ptr; size_t itemsize; std::string format; int ndim; std::vector<size_t> shape; std::vector<size_t> strides; }; ``` Here is C++ code for two functions - the function `twice` shows how to change a passed in `numpy` array in-place using pointers; the function `sum` shows how to sum the elements of a `numpy` array. By taking advantage of the information in `buffer_info`, the code will work for arbitrary `n-d` arrays. 
``` %%file ex7.cpp <% cfg['compiler_args'] = ['-std=c++11'] setup_pybind11(cfg) %> #include <pybind11/pybind11.h> #include <pybind11/numpy.h> namespace py = pybind11; // Passing in an array of doubles void twice(py::array_t<double> xs) { py::buffer_info info = xs.request(); auto ptr = static_cast<double *>(info.ptr); int n = 1; for (auto r: info.shape) { n *= r; } for (int i = 0; i <n; i++) { *ptr++ *= 2; } } // Passing in a generic array double sum(py::array xs) { py::buffer_info info = xs.request(); auto ptr = static_cast<double *>(info.ptr); int n = 1; for (auto r: info.shape) { n *= r; } double s = 0.0; for (int i = 0; i <n; i++) { s += *ptr++; } return s; } PYBIND11_MODULE(ex7, m) { m.doc() = "auto-compiled c++ extension"; m.def("sum", &sum); m.def("twice", &twice); } %%file test_code.py import cppimport import numpy as np ex7 = cppimport.imp("ex7") if __name__ == '__main__': xs = np.arange(12).reshape(3,4).astype('float') print(xs) print("np :", xs.sum()) print("cpp:", ex7.sum(xs)) print() ex7.twice(xs) print(xs) %%bash python test_code.py ``` ## Example 8: Using the C++ `eigen` library to calculate matrix inverse and determinant Example showing how `Eigen` vectors and matrices can be passed in and out of C++ functions. Note that `Eigen` arrays are automatically converted to/from `numpy` arrays simply by including the `pybind/eigen.h` header. Because of this, it is probably simplest in most cases to work with `Eigen` vectors and matrices rather than `py::buffer` or `py::array` where `py::vectorize` is insufficient. **Note**: When working with matrices, you can make code using `eigen` more efficient by ensuring that the eigen Matrix and numpy array have the same data types and storage layout, and using the Eigen::Ref class to pass in arguments. By default, numpy stores data in row major format while Eigen stores data in column major format, and this incompatibility triggers a copy which can be expensive for large matrices. 
There are basically 3 ways to make pass by reference work: 1. Use Eigen reference with arbitrary storage order `Eigen::Ref<MatrixType, 0, Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic>>` 2. Use Eigen row order matrices `Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>` 3. Create numpy arrays with column order `np.array(data, order='F')` This is an advanced topic that you can explore in the [docs](https://pybind11.readthedocs.io/en/stable/advanced/cast/eigen.html?highlight=eigen#pass-by-reference). **Note**: A more recent alternative to `eigen` is `xtensor` which also provides Python bindings via `pybind11`. `xtensor` has an interface modeled after `numpy` which is appealing, but I have not had time to fully evaluate it for this class. Link: https://github.com/xtensor-stack/xtensor-python ``` %%file ex8.cpp <% cfg['compiler_args'] = ['-std=c++11'] cfg['include_dirs'] = ['./eigen'] setup_pybind11(cfg) %> #include <pybind11/pybind11.h> #include <pybind11/eigen.h> #include <Eigen/LU> namespace py = pybind11; // convenient matrix indexing comes for free double get(Eigen::MatrixXd xs, int i, int j) { return xs(i, j); } // takes numpy array as input and returns double double det(Eigen::MatrixXd xs) { return xs.determinant(); } // takes numpy array as input and returns another numpy array Eigen::MatrixXd inv(Eigen::MatrixXd xs) { return xs.inverse(); } PYBIND11_MODULE(ex8, m) { m.doc() = "auto-compiled c++ extension"; m.def("inv", &inv); m.def("det", &det); } import cppimport import numpy as np code = cppimport.imp("ex8") A = np.array([[1,2,1], [2,1,0], [-1,1,2]]) print(A) print(code.det(A)) print(code.inv(A)) ``` ## Example 9: Using `pybind11` with `openmp` (optional) Here is an example of using OpenMP to integrate the value of $\pi$ written using `pybind11`. 
``` %%file ex9.cpp /* <% cfg['compiler_args'] = ['-std=c++11', '-fopenmp'] cfg['linker_args'] = ['-lgomp'] setup_pybind11(cfg) %> */ #include <cmath> #include <omp.h> #include <pybind11/pybind11.h> #include <pybind11/numpy.h> namespace py = pybind11; // Passing in an array of doubles void twice(py::array_t<double> xs) { py::gil_scoped_acquire acquire; py::buffer_info info = xs.request(); auto ptr = static_cast<double *>(info.ptr); int n = 1; for (auto r: info.shape) { n *= r; } #pragma omp parallel for for (int i = 0; i <n; i++) { ptr[i] *= 2; /* index by i: bumping a shared pointer would be a data race across threads */ } } PYBIND11_MODULE(ex9, m) { m.doc() = "auto-compiled c++ extension"; m.def("twice", [](py::array_t<double> xs) { /* Release GIL before calling into C++ code */ py::gil_scoped_release release; return twice(xs); }); } import cppimport import numpy as np code = cppimport.imp("ex9") xs = np.arange(10).astype('double') code.twice(xs) xs ```
Now we have [for loops](https://matthew-brett.github.io/cfd2019/chapters/03/iteration) and [ranges](https://matthew-brett.github.io/cfd2019/chapters/03/Ranges), we can solve the problem in [population, permutation](https://matthew-brett.github.io/cfd2019/chapters/05/population_permutation). ``` # Array library. import numpy as np # Data frame library. import pandas as pd # Plotting import matplotlib.pyplot as plt %matplotlib inline # Fancy plots plt.style.use('fivethirtyeight') ``` We load the Brexit survey data again: ``` # Load the data frame, and put it in the variable "audit_data" audit_data = pd.read_csv('audit_of_political_engagement_14_2017.tab', sep='\t') ``` Again, we get the ages for the Leavers and the Remainers: ``` # Drop rows where age is 0 age_not_0 = audit_data['numage'] != 0 good_data = audit_data[age_not_0] # Get data frames for leavers and remainers is_remain = good_data['cut15'] == 1 remain_ages = good_data[is_remain]['numage'] is_leave = good_data['cut15'] == 2 leave_ages = good_data[is_leave]['numage'] remain_ages.hist(); leave_ages.hist(); ``` Here is the number of Remain voters: ``` n_remain = len(remain_ages) n_remain ``` Here was the actual difference between the means of the two groups: ``` actual_diff = np.mean(leave_ages) - np.mean(remain_ages) actual_diff ``` We want to know if we have a reasonable chance of seeing a difference of this magnitude, if the two groups are samples from the same underlying population. We don't have the actual population to take samples from, so we need to wing it, by using the data we have. We asserted we could use permutation to take random samples from the data that we already have: ``` pooled = np.append(remain_ages, leave_ages) np.random.shuffle(pooled) fake_remainers = pooled[:n_remain] fake_leavers = pooled[n_remain:] ``` Those are our samples. 
Now we get the difference in mean ages, as one example of a difference we might see, if the samples are from the same population: ``` example_diff = np.mean(fake_leavers) - np.mean(fake_remainers) example_diff ``` Now that we know how to do this once, we can use the `for` loop to do the shuffle operation many times. We collect the results in an array. You will recognize the code in the `for` loop from the code in the cells above. ``` # An array of zeros to store the fake differences example_diffs = np.zeros(10000) # Do the shuffle / difference steps 10000 times for i in np.arange(10000): np.random.shuffle(pooled) fake_remainers = pooled[:n_remain] fake_leavers = pooled[n_remain:] eg_diff = np.mean(fake_leavers) - np.mean(fake_remainers) # Collect the results in the results array example_diffs[i] = eg_diff ``` Our results array now has 10000 fake mean differences. What distribution do these differences have? ``` plt.hist(example_diffs); ``` This is called the *sampling distribution*. In our case, this is the sampling distribution of the difference in means. It is the *sampling* distribution, because it is the distribution we expect to see, when taking random *samples* from the same underlying population. Our question now is: is the difference we actually saw a likely value, given the sampling distribution? Looking at the distribution above - what do you think? As a first pass, let us check how many of the values from the sampling distribution are as large, or larger than the value we actually saw. ``` are_as_high = example_diffs >= actual_diff n_as_high = np.count_nonzero(are_as_high) n_as_high ``` The number above is the number of values in the sampling distribution that are as high as, or higher than, the value we actually saw. If we divide by 10000, we get the proportion of the sampling distribution that is as high, or higher. 
``` proportion = n_as_high / 10000 proportion ``` We think of this proportion as an estimate of the *probability* that we would see a value this high, or higher, *if these were random samples from the same underlying population*. We call this a *p value*.
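The whole recipe above — pool, shuffle, split, take the difference, repeat, then compute the proportion — fits in a few lines. Here is a self-contained sketch with made-up ages (not the survey data), using only the standard library:

```python
import random

random.seed(42)

# Made-up ages for two small groups (not the survey data above).
group_a = [52, 58, 61, 49, 55, 63, 57]
group_b = [44, 50, 47, 53, 41, 48, 46]

def mean(xs):
    return sum(xs) / len(xs)

actual_diff = mean(group_a) - mean(group_b)

# Permutation test: shuffle the pooled values, split into fake groups
# of the original sizes, and record the difference in means each time.
pooled = group_a + group_b
n_a = len(group_a)
fake_diffs = []
for _ in range(10000):
    random.shuffle(pooled)
    fake_diffs.append(mean(pooled[:n_a]) - mean(pooled[n_a:]))

# Proportion of fake differences at least as large as the real one.
p_value = sum(1 for d in fake_diffs if d >= actual_diff) / len(fake_diffs)
print(round(actual_diff, 2), p_value)
```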
## Use Predicto Trade Picks with Alpaca to place hedged orders Sample usage to retrieve latest trade picks and submit to alpaca programmatically (https://predic.to) (https://alpaca.markets) This is a simple example on how to retrieve latest trade pick for a ticker and then place an alpaca order with target price and stoploss price. ### For Predicto authentication To use predicto api and reproduce this notebook, you'll need to have a valid Predicto account. If you don't have one, you can create one here: https://predic.to and get a free trial period. To authenticate you'll need an api key. To retrieve it, login to https://predic.to and head to your [settings page](https://predic.to/account). Then paste it in the `predicto_api_key` variable below. If you get any exception/error while running below code, please make sure your api key is correct and your subscription/trial is not expired. Please note that there is a limit to the number of requests you can make per minute, depending on your account type. ### For Alpaca authentication You'll need an alpaca.markets account. Then you can retrieve your API Key ID and Endpoint from your account page. You can use either paper money or real money. We recommend to experiment with a paper money account first. More info about Alpaca trade api can be found here: https://github.com/alpacahq/alpaca-trade-api-python/ ### Import needed packages ``` import sys sys.path.append("../predicto_api/") import time import requests import pandas as pd import matplotlib.pyplot as plt from datetime import datetime, timedelta import IPython.display as display import alpaca_trade_api as tradeapi from predicto_api_wrapper import PredictoApiWrapper, TradeAction, TradeOrderType from alpaca_api_wrapper import AlpacaApiWrapper ``` ### Prepare and initialize our Predicto wrapper You'll need to have a valid Predicto account as mentioned above, and get an api key. 
Then replace the `predicto_api_key` variable below ``` predicto_api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' predicto_api_wrapper = PredictoApiWrapper(predicto_api_key) ``` ### Prepare and initialize our Alpaca API wrapper You'll need to have a valid Alpaca account as mentioned above, and replace the variables below with your own credentials. ``` alpaca_api_endpoint = 'https://paper-api.alpaca.markets' # use paper money endpoint for now (test env) alpaca_api_key_id = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' alpaca_api_secret_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' alpaca_wrapper = AlpacaApiWrapper(alpaca_api_endpoint, alpaca_api_key_id, alpaca_api_secret_key) ``` ### Make sure our alpaca keys work as expected Retrieve some latest prices ``` print('Latest price of AAPL is: ', alpaca_wrapper.get_latest_price('AAPL')) print('Latest price of FB is: ', alpaca_wrapper.get_latest_price('FB')) print('Latest price of MSFT is: ', alpaca_wrapper.get_latest_price('MSFT')) print('Latest price of NIO is: ', alpaca_wrapper.get_latest_price('NIO')) print('Latest price of TSLA is: ', alpaca_wrapper.get_latest_price('TSLA')) print('Latest price of V is: ', alpaca_wrapper.get_latest_price('V')) print('Looks good \N{grinning face}') ``` ### Let's start by retrieving all supported tickers for Forecasting in Predicto We generate forecasts and trade picks for a limited number of stocks for the time being ``` # Get all supported stocks stocks_json = predicto_api_wrapper.get_supported_tickers() stocks_df = pd.DataFrame(stocks_json) # print some information print('Total tickers supported: {0}'.format(len(stocks_df))) print('Here is a sample:') stocks_df.head(15) ``` ### Let's inspect forecasts and trade picks that were generated yesterday (All forecasts are for 15 days ahead) ``` sdate = (datetime.today() - timedelta(days=1)).strftime('%Y-%m-%d') print(sdate) ``` ### Let's see what forecast we got for Visa Inc (V) We predict an upward trend, and the recommended action is BUY with 
entry/target/stopLoss prices ``` (forecast_json, trade_pick_json) = predicto_api_wrapper.get_forecast_and_tradepick_info('V', sdate, True) ``` ### That seems interesting, let's submit a hedged order to alpaca using the above information ### First, show all my holding positions in Alpaca paper account It's all fake money btw, it's a paper testing account :) ``` positions = alpaca_wrapper.get_holding_positions() ``` ### Inspect our V trade pick json ``` trade_pick_json ``` ### Prepare our alpaca order parameters Symbol, TradeAction, investment amount, entry price, exit price, stop loss price, etc. ``` # Generic parameters like symbol, trade action etc symbol = trade_pick_json['Symbol'] trade_action = trade_pick_json['TradeAction'] generated_date = trade_pick_json['SDate'] # Entry price, exit price, stop loss price entry_price = trade_pick_json['StartingPrice'] target_price = trade_pick_json['TargetSellPrice'] stop_loss_price = trade_pick_json['TargetStopLossPrice'] # How many stocks to BUY or SELL? You need to provide a max investment amount available, and quantity # - if quantity is None or 0, entire investment amount will be used # For now let's assume we will use $1000 for our next trade investmentAmount = 1000 quantity = None ``` ### Step 1 of 2: Submit our market order to Alpaca ### Be careful: if the action is SELL, this means that we are going to short the stock! Make sure you understand the risks of shorting before using those trade orders. Otherwise you can choose to only use BUY orders. If you see any error, check the error msg. For example, our alpaca api wrapper only allows you to submit orders when the market is open! This is to make sure that the market order can be filled so that we can go ahead and set target price and stop loss orders for it. 
``` # Create a client order id, for easy tracking client_market_order_id = 'Predicto__{0}_{1}_{2}_market'.format(generated_date, symbol, trade_action) # submit the order to alpaca market_order_result = alpaca_wrapper.submit_order( TradeOrderType.Market, TradeAction(int(trade_action)), symbol, quantity, investmentAmount, entry_price, target_price, stop_loss_price, client_market_order_id) # Keep in mind that if price changed since the Predicto Trade Pick generation (very possible due to pre-market and after-hours activity) # then target price and stop loss prices will be re-adjusted, and order will only be submitted if it makes sense # e.g. if action is BUY and entry price is more than expected we'll cancel, # if action is BUY and entry price is less than expected we'll go through, etc # (more details on this logic in AlpacaApiWrapper.validate_latest_price_and_stoploss method) # unpack market order result if market_order_result: (marker_order, newStartingPrice, newTargetPrice, newStopLossPrice, newQuantity) = market_order_result print() print('Order details') print('newStartingPrice : ', newStartingPrice) print('newTargetPrice : ', newTargetPrice) print('newStopLossPrice : ', newStopLossPrice) print('newQuantity : ', newQuantity) print() print(market_order_result) ``` ### Note that market order entry price and stop loss price have been readjusted before making the order to match latest prices. Expected price: 210.35 but Actual price: 212.22 ### Step 2 of 2: Hedge our Trade Pick using target price and stop loss order We can do this using an OCO order (one-cancels-other). For more info check alpaca api [relevant documentation](https://alpaca.markets/docs/trading-on-alpaca/orders/#:~:text=OCO%20Orders,only%20exit%20order%20is%20supported.) If you see any error, check the error msg. For example, you can only submit oco orders for market orders that have been already filled! 
``` if market_order_result and marker_order: # Create a client order id, for easy tracking client_oco_order_id = 'Predicto__{0}_{1}_{2}_oco'.format(generated_date, symbol, trade_action) # submit the order to alpaca (oco order will go through only if above market order was filled successfully) oco_order_result = alpaca_wrapper.open_oco_position( marker_order.id, symbol, newTargetPrice, newStopLossPrice, client_oco_order_id) # inspect print() print(oco_order_result) else: print("Market order not found") ``` ### If everything went according to plan and the market order was filled, we should see the new V position in holding positions And there it is - we now have 4 shares of V (matching our maximum investment amount of: 4 shares * \\$212.22 <= \\$1000) ``` positions = alpaca_wrapper.get_holding_positions() ``` ## And this was a very simple example of how to use Predicto forecasts and Trade Picks with the Alpaca API! For robust systems, it's very important to handle all errors and to periodically verify that there are no orphan orders, ideally using some kind of automated supervisor. But this is out of scope for this notebook. 
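The share count reported above can be sanity-checked by hand. Assuming the wrapper floors the share count so the position never exceeds the investment amount (an assumption about its sizing rule, not documented behaviour), the arithmetic is:

```python
investment_amount = 1000   # max dollars to commit, as set above
fill_price = 212.22        # actual fill price seen in the order output

# assumed sizing rule: largest whole-share count within budget
quantity = int(investment_amount // fill_price)

print(quantity)  # 4
# the resulting position cost (848.88) stays within budget
print(quantity * fill_price <= investment_amount)  # True
```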
### As a last example and exercise ### Here is how to get the latest forecasts with a predicted absolute move >= 3% You can filter based on several fields of the prediction and decide which trade picks you want to submit Feel free to experiment with different fields ``` # retrieve all supported stocks stocks_json = predicto_api_wrapper.get_supported_tickers() stocks_df = pd.DataFrame(stocks_json) # our predicted move threshold abs_change_pct_threshold = 0.03 # iterate and check for the ones with >= 3% predicted abs change sdate = (datetime.today() - timedelta(days=1)).strftime('%Y-%m-%d') for symbol in stocks_df.RelatedStock: try: (forecast_json, tp_json) = predicto_api_wrapper.get_forecast_and_tradepick_info(symbol, sdate, False) change_pct = (tp_json['TargetSellPrice'] - tp_json['StartingPrice']) / tp_json['StartingPrice'] if abs(change_pct) >= abs_change_pct_threshold: print('Expected change for {0} is {1:.2f} !'.format(symbol, change_pct)) # do more filtering and inspections here # ... # If all looks good you can: # - Go ahead and submit market order here using examples above # - Go ahead and open oco position for market order here using examples above # make sure you sleep a bit to avoid hitting api limit time.sleep(1) except Exception: pass ```
<a href="https://colab.research.google.com/github/tirtharajghosh/Machine-Learning/blob/master/Multiple_Linear_Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Importing required libraries ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd ``` ### Importing dataset ``` dataset = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Datasets/50_Startups.csv') x = dataset.iloc[:,:-1].values y = dataset.iloc[:,4].values #print("Value of X :\n",x) #print("Value of Y :\n",y) ``` ### Encoding categorical data - Dummy Variables Note: We don't always need to use `LabelEncoder` anymore. Instead, use `ColumnTransformer`. ``` from sklearn.preprocessing import LabelEncoder, OneHotEncoder from sklearn.compose import ColumnTransformer ct = ColumnTransformer([("State", OneHotEncoder(), [3])], remainder = 'passthrough') x = ct.fit_transform(x) #print("Value of X after Dummy Variable Encoding:\n",x) ``` ### Avoiding the Dummy Variable Trap It is good practice to avoid the Dummy Variable Trap manually. Though the Scikit-Learn library takes care of this, we don't want to create any dependencies. 
``` x = x[:,1:] #print("Value of X :",x) ``` ### Splitting dataset - Training and Test Note: the `cross_validation` name is now deprecated and was replaced by `model_selection` ``` from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2,random_state = 0) ``` ### Fitting Multiple Linear Regression to the Training Set ``` from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(x_train,y_train) ``` ### Predicting the Test Set Result ``` from tabulate import tabulate y_pred = regressor.predict(x_test) print(tabulate(zip(y_test,y_pred), headers=["Test","Prediction"], tablefmt="github")) ``` ### Building the optimal model using Backward Elimination With SL = 0.05, columns whose p-value exceeds SL should be removed. ``` import statsmodels.api as sm x = np.append(arr = np.ones((50,1), dtype = int), values = x, axis = 1 ) #print(x) ``` **Stage 1** ``` x_opt = np.array(x[:,[0, 1, 2, 3, 4, 5]], dtype=int) regressor_OLS = sm.OLS(endog = y, exog = x_opt).fit() regressor_OLS.summary() ``` **Stage 2** ``` x_opt = np.array(x[:,[0, 1, 3, 4, 5]], dtype=int) regressor_OLS = sm.OLS(endog = y, exog = x_opt).fit() regressor_OLS.summary() ``` **Stage 3** ``` x_opt = np.array(x[:,[0, 3, 4, 5]], dtype=int) regressor_OLS = sm.OLS(endog = y, exog = x_opt).fit() regressor_OLS.summary() ``` **Stage 4** ``` x_opt = np.array(x[:,[0, 3, 5]], dtype=int) regressor_OLS = sm.OLS(endog = y, exog = x_opt).fit() regressor_OLS.summary() ``` **Stage 5** ``` x_opt = np.array(x[:,[0, 3]], dtype=int) regressor_OLS = sm.OLS(endog = y, exog = x_opt).fit() regressor_OLS.summary() ```
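The dummy variable trap avoided above can be shown numerically: with an intercept plus one dummy column per category, the design matrix is rank-deficient because the dummies sum to the intercept column; dropping one dummy restores full column rank. A small self-contained check (toy state data, not the 50_Startups set):

```python
import numpy as np

states = ['New York', 'California', 'Florida', 'New York', 'Florida']
cats = sorted(set(states))  # 3 categories

# one-hot encode: one column per category
D = np.array([[1.0 if s == c else 0.0 for c in cats] for s in states])

intercept = np.ones((len(states), 1))
X_full = np.hstack([intercept, D])         # intercept + ALL dummies -> trap
X_drop = np.hstack([intercept, D[:, 1:]])  # drop first dummy -> safe

print(np.linalg.matrix_rank(X_full))  # 3 (4 columns, so collinear)
print(np.linalg.matrix_rank(X_drop))  # 3 (3 columns, full column rank)
```

The collinear `X_full` would make the OLS normal equations singular, which is exactly why the notebook slices off the first dummy column.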
``` import os %env MODEL = /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml %env USE_SAFETY_MODEL = ../resources/worker-safety-mobilenet/FP32/worker_safety_mobilenet.xml #!/usr/bin/env python3 """ Copyright (c) 2018 Intel Corporation. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
""" from __future__ import print_function import sys import os import cv2 import numpy as np import datetime import json from inference import Network # Global vars cpu_extension = '' conf_modelLayers = '' conf_modelWeights = '' targetDevice = "CPU" conf_batchSize = 1 conf_modelPersonLabel = 1 conf_inferConfidenceThreshold = 0.7 conf_inFrameViolationsThreshold = 19 conf_inFramePeopleThreshold = 5 padding = 30 viol_wk = 0 acceptedDevices = ['CPU', 'GPU', 'MYRIAD', 'HETERO:FPGA,CPU', 'HDDL'] videos = [] name_of_videos = [] CONFIG_FILE = '../resources/config.json' class Video: def __init__(self, idx, path): if path.isnumeric(): self.video = cv2.VideoCapture(int(path)) self.name = "Cam " + str(idx) else: if os.path.exists(path): self.video = cv2.VideoCapture(path) self.name = "Video " + str(idx) else: print("Either wrong input path or empty line is found. Please check the conf.json file") exit(21) if not self.video.isOpened(): print("Couldn't open video: " + path) sys.exit(20) self.height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT)) self.width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH)) self.currentViolationCount = 0 self.currentViolationCountConfidence = 0 self.prevViolationCount = 0 self.totalViolations = 0 self.totalPeopleCount = 0 self.currentPeopleCount = 0 self.currentPeopleCountConfidence = 0 self.prevPeopleCount = 0 self.currentTotalPeopleCount = 0 cv2.namedWindow(self.name, cv2.WINDOW_NORMAL) self.frame_start_time = datetime.datetime.now() def env_parser(): """ Parses the inputs. 
:return: None """ global use_safety_model, conf_modelLayers, conf_modelWeights, targetDevice, cpu_extension, videos,\ conf_safety_modelWeights, conf_safety_modelLayers, is_async_mode if 'MODEL' in os.environ: conf_modelLayers = os.environ['MODEL'] conf_modelWeights = os.path.splitext(conf_modelLayers)[0] + ".bin" else: print("Please provide path for the .xml file.") sys.exit(0) if 'DEVICE' in os.environ: targetDevice = os.environ['DEVICE'] if 'MULTI' not in targetDevice and targetDevice not in acceptedDevices: print("Unsupported device: " + targetDevice) sys.exit(2) elif 'MULTI' in targetDevice: target_devices = targetDevice.split(':')[1].split(',') for multi_device in target_devices: if multi_device not in acceptedDevices: print("Unsupported device: " + targetDevice) sys.exit(2) if 'CPU_EXTENSION' in os.environ: cpu_extension = os.environ['CPU_EXTENSION'] if 'USE_SAFETY_MODEL' in os.environ: conf_safety_modelLayers = os.environ['USE_SAFETY_MODEL'] conf_safety_modelWeights = os.path.splitext(conf_safety_modelLayers)[0] + ".bin" use_safety_model = True else: use_safety_model = False if 'FLAG' in os.environ: if os.environ['FLAG'] == 'async': is_async_mode = True print('Application running in Async mode') else: is_async_mode = False print('Application running in Sync mode') else: is_async_mode = True print('Application running in Async mode') assert os.path.isfile(CONFIG_FILE), "{} file doesn't exist".format(CONFIG_FILE) config = json.loads(open(CONFIG_FILE).read()) for idx, item in enumerate(config['inputs']): vid = Video(idx, item['video']) name_of_videos.append([idx, item['video']]) videos.append([idx, vid]) def detect_safety_hat(img): """ Detection of the hat of the person. 
:param img: Current frame :return: Boolean value of the detected hat """ lowH = 15 lowS = 65 lowV = 75 highH = 30 highS = 255 highV = 255 crop = 0 height = 15 perc = 8 hsv = np.zeros(1) try: hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) except cv2.error as e: print("%d %d %d" % img.shape) print(e) threshold_img = cv2.inRange(hsv, (lowH, lowS, lowV), (highH, highS, highV)) x = 0 y = int(threshold_img.shape[0] * crop / 100) w = int(threshold_img.shape[1]) h = int(threshold_img.shape[0] * height / 100) img_cropped = threshold_img[y: y + h, x: x + w] if cv2.countNonZero(threshold_img) < img_cropped.size * perc / 100: return False return True def detect_safety_jacket(img): """ Detection of the safety jacket of the person. :param img: Current frame :return: Boolean value of the detected jacket """ lowH = 0 lowS = 150 lowV = 42 highH = 11 highS = 255 highV = 255 crop = 15 height = 40 perc = 23 hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) threshold_img = cv2.inRange(hsv, (lowH, lowS, lowV), (highH, highS, highV)) x = 0 y = int(threshold_img.shape[0] * crop / 100) w = int(threshold_img.shape[1]) h = int(threshold_img.shape[0] * height / 100) img_cropped = threshold_img[y: y + h, x: x + w] if cv2.countNonZero(threshold_img) < img_cropped.size * perc / 100: return False return True def detect_workers(workers, frame): """ Detection of the person with the safety guards. 
:param workers: Total number of the person in the current frame :param frame: Current frame :return: Total violation count of the person """ violations = 0 global viol_wk for worker in workers: xmin, ymin, xmax, ymax = worker crop = frame[ymin:ymax, xmin:xmax] if 0 not in crop.shape: if detect_safety_hat(crop): if detect_safety_jacket(crop): cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2) else: cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 0, 255), 2) violations += 1 viol_wk += 1 else: cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 0, 255), 2) violations += 1 viol_wk += 1 return violations def main(): """ Load the network and parse the output. :return: None """ env_parser() global is_async_mode nextReq = 1 currReq = 0 nextReq_s = 1 currReq_s = 0 prevVideo = None vid_finished = [False] * len(videos) min_FPS = min([videos[i][1].video.get(cv2.CAP_PROP_FPS) for i in range(len(videos))]) # Initialise the class infer_network = Network() infer_network_safety = Network() # Load the network to IE plugin to get shape of input layer plugin, (batch_size, channels, model_height, model_width) = \ infer_network.load_model(conf_modelLayers, targetDevice, 1, 1, 2, cpu_extension) if use_safety_model: batch_size_sm, channels_sm, model_height_sm, model_width_sm = \ infer_network_safety.load_model(conf_safety_modelLayers, targetDevice, 1, 1, 2, cpu_extension, plugin)[1] while True: for index, currVideo in videos: # Read image from video/cam vfps = int(round(currVideo.video.get(cv2.CAP_PROP_FPS))) for i in range(0, int(round(vfps / min_FPS))): ret, current_img = currVideo.video.read() if not ret: vid_finished[index] = True break if vid_finished[index]: stream_end_frame = np.zeros((int(currVideo.height), int(currVideo.width), 1), dtype='uint8') cv2.putText(stream_end_frame, "Input file {} has ended".format (name_of_videos[index][1].split('/')[-1]) , (10, int(currVideo.height/2)), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 255), 2) cv2.imshow(currVideo.name, 
stream_end_frame) continue # Transform image to person detection model input rsImg = cv2.resize(current_img, (model_width, model_height)) rsImg = rsImg.transpose((2, 0, 1)) rsImg = rsImg.reshape((batch_size, channels, model_height, model_width)) infer_start_time = datetime.datetime.now() # Infer current image if is_async_mode: infer_network.exec_net(nextReq, rsImg) else: infer_network.exec_net(currReq, rsImg) prevVideo = currVideo previous_img = current_img # Wait for previous request to end if infer_network.wait(currReq) == 0: infer_end_time = (datetime.datetime.now() - infer_start_time) * 1000 in_frame_workers = [] people = 0 violations = 0 hard_hat_detection =False vest_detection = False result = infer_network.get_output(currReq) # Filter output for obj in result[0][0]: if obj[2] > conf_inferConfidenceThreshold: xmin = int(obj[3] * prevVideo.width) ymin = int(obj[4] * prevVideo.height) xmax = int(obj[5] * prevVideo.width) ymax = int(obj[6] * prevVideo.height) xmin = int(xmin - padding) if (xmin - padding) > 0 else 0 ymin = int(ymin - padding) if (ymin - padding) > 0 else 0 xmax = int(xmax + padding) if (xmax + padding) < prevVideo.width else prevVideo.width ymax = int(ymax + padding) if (ymax + padding) < prevVideo.height else prevVideo.height cv2.rectangle(previous_img, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2) people += 1 in_frame_workers.append((xmin, ymin, xmax, ymax)) new_frame = previous_img[ymin:ymax, xmin:xmax] if use_safety_model: # Transform image to safety model input in_frame_sm = cv2.resize(new_frame, (model_width_sm, model_height_sm)) in_frame_sm = in_frame_sm.transpose((2, 0, 1)) in_frame_sm = in_frame_sm.reshape((batch_size_sm, channels_sm, model_height_sm, model_width_sm)) infer_start_time_sm = datetime.datetime.now() if is_async_mode: infer_network_safety.exec_net(nextReq_s, in_frame_sm) else: infer_network_safety.exec_net(currReq_s, in_frame_sm) # Wait for the result infer_network_safety.wait(currReq_s) infer_end_time_sm = 
(datetime.datetime.now() - infer_start_time_sm) * 1000 result_sm = infer_network_safety.get_output(currReq_s) # Filter output hard_hat_detection = False vest_detection = False detection_list = [] for obj_sm in result_sm[0][0]: if (obj_sm[2] > 0.4): # Detect safety vest if (int(obj_sm[1])) == 2: xmin_sm = int(obj_sm[3] * (xmax-xmin)) ymin_sm = int(obj_sm[4] * (ymax-ymin)) xmax_sm = int(obj_sm[5] * (xmax-xmin)) ymax_sm = int(obj_sm[6] * (ymax-ymin)) if vest_detection == False: detection_list.append([xmin_sm+xmin, ymin_sm+ymin, xmax_sm+xmin, ymax_sm+ymin]) vest_detection = True # Detect hard-hat if int(obj_sm[1]) == 4: xmin_sm_v = int(obj_sm[3] * (xmax-xmin)) ymin_sm_v = int(obj_sm[4] * (ymax-ymin)) xmax_sm_v = int(obj_sm[5] * (xmax-xmin)) ymax_sm_v = int(obj_sm[6] * (ymax-ymin)) if hard_hat_detection == False: detection_list.append([xmin_sm_v+xmin, ymin_sm_v+ymin, xmax_sm_v+xmin, ymax_sm_v+ymin]) hard_hat_detection = True if hard_hat_detection is False or vest_detection is False: violations += 1 for _rect in detection_list: cv2.rectangle(current_img, (_rect[0] , _rect[1]), (_rect[2] , _rect[3]), (0, 255, 0), 2) if is_async_mode: currReq_s, nextReq_s = nextReq_s, currReq_s # Use OpenCV if worker-safety-model is not provided else : violations = detect_workers(in_frame_workers, previous_img) # Check if detected violations equals previous frames if violations == prevVideo.currentViolationCount: prevVideo.currentViolationCountConfidence += 1 # If frame threshold is reached, change validated count if prevVideo.currentViolationCountConfidence == conf_inFrameViolationsThreshold: # If another violation occurred, save image if prevVideo.currentViolationCount > prevVideo.prevViolationCount: prevVideo.totalViolations += (prevVideo.currentViolationCount - prevVideo.prevViolationCount) prevVideo.prevViolationCount = prevVideo.currentViolationCount else: prevVideo.currentViolationCountConfidence = 0 prevVideo.currentViolationCount = violations # Check if detected people count 
equals previous frames if people == prevVideo.currentPeopleCount: prevVideo.currentPeopleCountConfidence += 1 # If frame threshold is reached, change validated count if prevVideo.currentPeopleCountConfidence == conf_inFrameViolationsThreshold: prevVideo.currentTotalPeopleCount += ( prevVideo.currentPeopleCount - prevVideo.prevPeopleCount) if prevVideo.currentTotalPeopleCount > prevVideo.prevPeopleCount: prevVideo.totalPeopleCount += prevVideo.currentTotalPeopleCount - prevVideo.prevPeopleCount prevVideo.prevPeopleCount = prevVideo.currentPeopleCount else: prevVideo.currentPeopleCountConfidence = 0 prevVideo.currentPeopleCount = people frame_end_time = datetime.datetime.now() cv2.putText(previous_img, 'Total people count: ' + str( prevVideo.totalPeopleCount), (10, prevVideo.height - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) cv2.putText(previous_img, 'Current people count: ' + str( prevVideo.currentTotalPeopleCount), (10, prevVideo.height - 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) cv2.putText(previous_img, 'Total violation count: ' + str( prevVideo.totalViolations), (10, prevVideo.height - 70), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) cv2.putText(previous_img, 'FPS: %0.2fs' % (1 / ( frame_end_time - prevVideo.frame_start_time).total_seconds()), (10, prevVideo.height - 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) cv2.putText(previous_img, 'Inference time: N\A for async mode' if is_async_mode else 'Inference time: {}ms'.format((infer_end_time).total_seconds()), (10, prevVideo.height - 130), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) cv2.imshow(prevVideo.name, previous_img) prevVideo.frame_start_time = datetime.datetime.now() # Swap if is_async_mode: currReq, nextReq = nextReq, currReq previous_img = current_img prevVideo = currVideo # Exit if ESC key is pressed if cv2.waitKey(1) == 27: print("Attempting to stop input files") infer_network.clean() infer_network_safety.clean() cv2.destroyAllWindows() return if False not in 
vid_finished: infer_network.clean() infer_network_safety.clean() cv2.destroyAllWindows() break if __name__ == '__main__': main() ```
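Both colour tests in the script above follow the same pattern: build a binary mask from an HSV threshold, crop a horizontal band of the person crop, and require a minimum fraction of "on" pixels. A library-free sketch of that band check — `band_has_colour` is a hypothetical helper, simplified from the script's exact comparison (which counts non-zero pixels of the whole mask against the band size):

```python
def band_has_colour(mask, crop_pct, height_pct, min_pct):
    """mask: 2-D list of 0/1 pixels. Return True if the horizontal band
    starting crop_pct% down the image and height_pct% of rows tall
    has at least min_pct percent of its pixels switched on."""
    rows = len(mask)
    y = rows * crop_pct // 100
    h = rows * height_pct // 100
    band = [px for row in mask[y:y + h] for px in row]
    return sum(band) >= len(band) * min_pct / 100

# toy 4x4 mask: bright pixels only in the top half
mask = [[1, 1, 1, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]

print(band_has_colour(mask, crop_pct=0, height_pct=50, min_pct=80))   # True (7/8 on)
print(band_has_colour(mask, crop_pct=50, height_pct=50, min_pct=80))  # False (0/8 on)
```

The same logic applied to an HSV-thresholded crop is what decides whether a hard hat (yellow band near the top) or a vest (orange band lower down) is present.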
## Metaprogramming Warning: Advanced topic! ### Metaprogramming globals Consider a bunch of variables, each of which needs initialising and incrementing: ``` bananas = 0 apples = 0 oranges = 0 bananas += 1 apples += 1 oranges += 1 ``` The right hand side of these assignments doesn't respect the DRY principle. We could of course define a variable for our initial value: ``` initial_fruit_count = 0 bananas = initial_fruit_count apples = initial_fruit_count oranges = initial_fruit_count ``` However, this is still not as DRY as it could be: what if we wanted to replace the assignment with, say, a class constructor and a buy operation: ``` class Basket: def __init__(self): self.count = 0 def buy(self): self.count += 1 bananas = Basket() apples = Basket() oranges = Basket() bananas.buy() apples.buy() oranges.buy() ``` We had to make the change in three places. Whenever you see a situation where a refactoring or change of design might require you to change the code in multiple places, you have an opportunity to make the code DRYer. In this case, metaprogramming for incrementing these variables would involve just a loop over all the variables we want to initialise: ``` baskets = [bananas, apples, oranges] for basket in baskets: basket.buy() ``` However, this trick **doesn't** work for initialising a new variable: ``` from pytest import raises with raises(NameError): baskets = [bananas, apples, oranges, kiwis] ``` So can we declare a new variable programmatically? Given a list of the **names** of fruit baskets we want, initialise a variable with that name? ``` basket_names = ['bananas', 'apples', 'oranges', 'kiwis'] globals()['apples'] ``` Wow, we can! Every module or class in Python is, under the hood, a special dictionary storing the values in its **namespace**. So we can create new variables by assigning to this dictionary. 
globals() gives a reference to the attribute dictionary for the current module ``` for name in basket_names: globals()[name] = Basket() kiwis.count ``` This is **metaprogramming**. I would NOT recommend using it for an example as trivial as the one above. A better, more Pythonic choice here would be to use a data structure to manage your set of fruit baskets: ``` baskets = {} for name in basket_names: baskets[name] = Basket() baskets['kiwis'].count ``` Or even, using a dictionary comprehension: ``` baskets = {name: Basket() for name in baskets} baskets['kiwis'].count ``` Which is the nicest way to do this, I think. Code which feels like metaprogramming is needed to make it less repetitive can often instead be DRYed up using a refactored data structure, in a way which is cleaner and easier to understand. Nevertheless, metaprogramming is worth knowing. ### Metaprogramming class attributes We can metaprogram the attributes of a **module** using the globals() function. We will also want to be able to metaprogram a class, by accessing its attribute dictionary. This will allow us, for example, to programmatically add members to a class. ``` class Boring: pass ``` If we are adding our own attributes, we can just do so directly: ``` x = Boring() x.name = "Michael" x.name ``` And these turn up, as expected, in an attribute dictionary for the class: ``` x.__dict__ ``` We can use `getattr` to look up an attribute by name: ``` getattr(x, 'name') ``` If we want to add an attribute given its name as a string, we can use setattr: ``` setattr(x, 'age', 75) x.age ``` And we could do this in a loop to programmatically add many attributes. The real power of accessing the attribute dictionary comes when we realise that there is *very little difference* between member data and member functions. 
Now that we know, from our functional programming, that **a function is just a variable that can be *called* with `()`**, we can set an attribute to a function, and it becomes a member function! ``` setattr(Boring, 'describe', lambda self: f"{self.name} is {self.age}") x.describe() x.describe Boring.describe ``` Note that we set this method as an attribute of the class, not the instance, so it is available to other instances of `Boring`: ``` y = Boring() y.name = 'Terry' y.age = 78 y.describe() ``` We can define a standalone function, and then **bind** it to the class. Its first argument automagically becomes `self`. ``` def broken_birth_year(b_instance): import datetime current = datetime.datetime.now().year return current - b_instance.age Boring.birth_year = broken_birth_year x.birth_year() x.birth_year x.birth_year.__name__ ``` ### Metaprogramming function locals We can access the attribute dictionary for the local namespace inside a function with `locals()` but this *cannot be written to*. Lack of safe programmatic creation of function-local variables is a flaw in Python. ``` class Person: def __init__(self, name, age, job, children_count): for name, value in locals().items(): if name == 'self': continue print(f"Setting self.{name} to {value}") setattr(self, name, value) terry = Person("Terry", 78, "Screenwriter", 0) terry.name ``` ### Metaprogramming warning! Use this stuff **sparingly**! The above example worked, but it produced Python code which is not particularly understandable. Remember, your objective when programming is to produce code which is **descriptive of what it does**. 
The above code is **definitely** less readable, less maintainable and more error prone than: ``` class Person: def __init__(self, name, age, job, children_count): self.name = name self.age = age self.job = job self.children_count = children_count ``` Sometimes, metaprogramming will be **really** helpful in making non-repetitive code, and you should have it in your toolbox, which is why I'm teaching you it. But doing it all the time overcomplicates matters. We've talked a lot about the DRY principle, but there is another equally important principle: > **KISS**: *Keep it simple, Stupid!* Whenever you write code and you think, "Gosh, I'm really clever", you're probably *doing it wrong*. Code should be about clarity, not showing off.
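As a compact recap of the two techniques covered above — creating module-level variables through `globals()` and attaching methods through `setattr` — here is a self-contained sketch reusing the fruit-basket names from earlier:

```python
class Basket:
    def __init__(self):
        self.count = 0

# programmatic variable creation via the module's attribute dictionary
for name in ['bananas', 'kiwis']:
    globals()[name] = Basket()

# programmatic method creation: bind a named function onto the class
def buy(self):
    self.count += 1

setattr(Basket, 'buy', buy)

kiwis.buy()            # works even though 'kiwis' was never assigned directly
print(kiwis.count)     # 1
print(bananas.count)   # 0
```

Remember the KISS caveat: a plain `dict` of baskets remains the clearer design; this sketch just shows the machinery in one place.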
# Making multiple interface-reporting dataframes for several structures using snakemake This notebook builds on the basics covered in [Working with PDBePISA interface lists/reports in Jupyter Basics](Working%20with%20PDBePISA%20interfacelists%20in%20Jupyter%20Basics.ipynb) in order to generate dataframes detailing the interface data for many structures. ---- A previous notebook, [Working with PDBePISA interface lists/reports in Jupyter Basics](Working%20with%20PDBePISA%20interfacelists%20in%20Jupyter%20Basics.ipynb), stepped through making dataframes detailing the interface data for many structures using their PDB identifier codes, or data copied from the PDBePISA 'Interfaces' webpage, as input. Is there a way to scale this up to make dataframes for a lot of structures? This may be especially helpful for research that collects data on a lot of complexes. This notebook spells out a way to do this with minimal effort. In fact, you only need knowledge of the PDB code identifiers of the structures of interest. You'll make a file listing the PDB identifier codes, each on a separate line, to define the structures, kick off the process, and make the dataframes. The method used to do this scaling up involves the workflow management software **Snakemake**. For up to a few dozen structures you could make a list of PDB codes in Python and iterate over them, making a dataframe for each. That's perfectly valid. However, as you try to increase your scale beyond that you'll hit issues that workflow management software helps avoid. Several aspects of the scaling problem are easy to illustrate just by imagining running many structures through that simplistic iteration. Imagine you just did that for seventy-seven structures and now someone wants you to add thirteen more today, and then they find they need another forty-six added tomorrow. And what if you realize as you are doing this that a third of the forty-six they added later had been done the first day? 
Now you have two versions of some of the files. In addition, you probably remade the list to process three times, even though the input was the only thing that differed. Now imagine the number of structures being studied was ten- or fifty-fold greater than that. You can probably grasp that it would be beneficial for long-term tracking to have one list of the PDB codes you processed, and it would be nice if you could easily work in additional data to analyze and have the files produced sorted out for you in one place.

Snakemake does that. It tracks what it's already made and just makes any new files you need. In short, it makes it much easier to create reproducible and scalable data analyses. Some examples follow to help convince you of this, and then I provide some guidance if you want to learn to use Snakemake in your own workflows/pipelines.

This notebook will demonstrate using Snakemake to make dataframes for several structures, and then also demonstrate adding in a few more. Some of the general features of Snakemake use are covered in the course of this.

-----

**Step #1:** The only input for the Snakemake file that has been made to do this scalable process is a text file listing the PDB id codes, each on a separate line. The contents of the produced file will look much like this (without the actual indenting):

```text
6kiv
6kiz
6kix
```

If you want, you can open a text file in Jupyter and directly edit the file to make your list. For the sake of the demonstration, this will be done using code within this notebook, found in the cell below.

**Step #2:** Save the list with the following name: `interface_data_2_df_codes.txt`. It has to have that name for the file with the list of PDB codes to be recognized and processed to make the dataframes corresponding to those codes. The following will do that here using this notebook; however, you can, and will want to, skip running this if you already made your own file listing PDB codes. If you run it, it will replace your file.
Alternatively, you can edit the code below to make a file with the contents that interest you.

```
s='''6kiv
6kiz
6kix
'''
%store s >interface_data_2_df_codes.txt
```

**Step #3:** Run snakemake, pointing it at the corresponding snakefile `pdb_code_2_intf_df_snakefile`, and it will process the `interface_data_2_df_codes.txt` file to extract the PDB codes off each line and make pickled datafiles for each dataframe. Since there are only three in this simplistic demo, this will be very similar to running the previous notebooks in this series with the items spelled out for each structure. The snakefile used in this pipeline, named `pdb_code_2_intf_df_snakefile`, is already here. It is related to Python scripts, and you can examine the text if you wish. It will take about a minute or less to complete if you are running the demonstration.

```
!snakemake -s pdb_code_2_intf_df_snakefile --cores 1
```

(For those knowledgeable with snakemake: I set the number of cores to one because I was finding that with eight, occasionally a race condition would ensue where some of the auxiliary scripts fetched in the course of running the report-generating notebooks would overwrite each other while being accessed by another notebook, causing failures. Using one core avoids that hazard. I will add, though, that in most cases if you use multiple cores, you can easily get the additional files and a new archive made by running snakemake with your chosen number of cores again.
I never saw a race hazard with my clean rule, and so if you want to quickly start over you can run `!snakemake -s pdb_code_2_intf_df_snakefile --cores 8 clean`.)

**Step #4:** Verify that the pickled versions of the dataframes were generated. You can go to the dashboard and see the output of running snakemake. To do that, click on the Jupyter logo in the upper left top of this notebook, and on that page look in the notebooks directory; you should see files that begin with the PDB identifier codes and end with `_PISAinterface_summary_pickled_df.pkl`. You could examine some of them to ensure all is as expected. I'm going to move that demonstration to below, so that the snakemake run log above is close by when discussing downloading the result.

If things seem to be working and you haven't run your data yet, run `!snakemake -s pdb_code_2_intf_df_snakefile --cores 8 clean` in a cell to reset things, then edit & save `interface_data_2_df_codes.txt` to have your information, and then run the `!snakemake -s pdb_code_2_intf_df_snakefile --cores 1` step above again.

**Step #5:** If this was anything other than the demonstration run, download the archive containing all the Jupyter notebooks bundled together. For ease in downloading, all the created notebooks have been saved as a compressed archive **so that you only need to retrieve and keep track of one file**. The file you are looking for begins with `collection_of_interface_dfs` in front of a date/time stamp and ends with `.tar.gz`. The snakemake run will actually highlight this archive towards the very bottom of the run, following the words 'Be sure to download'.

**Download that file from this remote, temporary session to your local computer.** You should see this archive file ending in `.tar.gz` on the dashboard. Tick the box next to it to select it, and then select `Download` to bring it from the remote Jupyterhub session to your computer.
If you don't retrieve that file and the session ends, you'll need to re-run to get the results again. You should be able to unpack that archive using your favorite software for extracting compressed files. If that is proving difficult, you can always reopen a session like you did to run this series of notebooks, upload the archive, and then run the following command in a Jupyter notebook cell to unpack it:

```bash
!tar xzf collection_of_interface_dfs*
```

(If you are running that command on the command line, leave off the exclamation mark.)

You can then examine the files in the session or download the individual Jupyter notebooks, similar to the advice on how to download the archive given above.

-----

### Verifying generation of the pickled dataframes

Demonstrating the generation of the pickled dataframes was momentarily skipped over above so that what needed to be collected from the results could be discussed. Also, checking for one of them was covered in the prior notebook, and that can simply be extended to check for the other two, like so:

```
import pandas as pd
dfv = pd.read_pickle("6kiv_PISAinterface_summary_pickled_df.pkl")
dfv.head()

dfx = pd.read_pickle("6kix_PISAinterface_summary_pickled_df.pkl")
dfx.head()

dfz = pd.read_pickle("6kiz_PISAinterface_summary_pickled_df.pkl")
dfz.head()
```

-----

### Benefit of workflow management software

With the simple demo above, the benefit of using workflow management software won't be readily observed. Let's further demonstrate some steps that may be typical, to better illustrate.

#### Example A. Add additional data.

Let's add two more. To do that, all you need is to edit `interface_data_2_df_codes.txt` to add the additional PDB codes and kick off the snakemake workflow again. This will append two more to `interface_data_2_df_codes.txt`.

```
a='''6kiw
6kiu
'''
%store a >>interface_data_2_df_codes.txt
```

Verify those are added:

```
cat interface_data_2_df_codes.txt
```

Now the same snakemake command can be used to re-run the workflow.
```
!snakemake -s pdb_code_2_intf_df_snakefile --cores 1
```

That was easier than writing out the individual code to add in two steps to process the new PDB codes, similar to how it was done in the first notebook in this series. Or at least as easy, because there were only two added. As a bonus from snakemake, notice that for the rule `run_pisa_interface_list_to_df_for_pdb`, it only processed the two added. At the scale of this demonstration, this may not seem like much; however, what if in the first round we had already run the process on, say, 50 PDB codes, and we just wanted to add another two? You can see where snakemake makes this easy and consistent. There's no special editing of any code to only repeat with the extra two. Just add the additional codes into the input file. Snakemake tracks the input and output for each rule, and when called it will make any files the resulting file is dependent on.

#### Example B. Update a file upstream in the workflow.

To illustrate further how snakemake monitors your workflow and will make updates to just what is necessary when the workflow is triggered, let's 'touch' one of the pickled dataframe files so that its file timestamp is newer than the final resulting file in the workflow, which is the packaged archive of the pickled dataframe files. (The unix 'touch' command updates a file's timestamp, so the file shows as modified in your typical computer's graphical user interface. The 'touch' command has the bonus of being useful for making an empty file if one didn't already exist. *Here we are just taking advantage of its ability to show a file as updated without us really editing it.*)

So run the next cell to see the timestamp of `6kiv_PISAinterface_summary_pickled_df.pkl` now:

```
ls -la 6kiv*
```

The timestamp is to the right side of the month and day listing, just in front of the file name `6kiv_PISAinterface_summary_pickled_df.pkl`. Let's build in a pause of a minute to make sure this is noticeable.
**You can skip running this cell if you are running these cells as you read along, because there will already be enough of a pause between when the files were last updated and now.**

```
# So 'Run All' works to show a difference in timestamp; skip if you are running by hand
import time
time.sleep(61)
```

So run the next cell to update the file `6kiv_PISAinterface_summary_pickled_df.pkl` now:

```
!touch 6kiv_PISAinterface_summary_pickled_df.pkl
```

Let's see what that did by examining the timestamps of the files that begin with '6ki':

```
ls -la 6ki*
```

Note that `6kiv_PISAinterface_summary_pickled_df.pkl` is newer than the other pickled dataframes. Now if we trigger the workflow, snakemake will want to update the version of the archive, because it thinks one of the files that the resulting file depends on has been more recently updated. It wants the resulting file, the archive in this case, to be the last file produced; otherwise snakemake suspects it doesn't have the updated set of files in it. Here we didn't actually change the file, but let's imagine we did. **It's nice that snakemake is tracking stuff for us and wants to update the archive to have the latest versions of the input files.** Here we only have six files to track; however, snakemake could handle way more than we'd want to be bothered to track ourselves.

Since we only updated the file by touching it, we didn't change anything, and so let's not bother actually generating a new archive. However, we can check into what snakemake wants to do, and at the same time illustrate another nice feature of snakemake. Ever wondered what would change if you ran a workflow, but didn't actually want to do it because you were 99.9% happy with everything you had, and there was a chance something you did recently could trigger a catastrophe by writing over or destroying files you labored to get? Well, **snakemake has a 'dry run' feature**.
You can call snakemake with the `--dry-run` option, which can also be abbreviated to just the flag `-n`, **to see what it would do WITHOUT ACTUALLY RUNNING IT**.

```
!snakemake -s pdb_code_2_intf_df_snakefile --cores 1 --dry-run
```

As expected, snakemake has seen that one of the files that goes into the archive is newer than the last version of the archive, and it wants to remake the archive with what it believes is the new file. Because we used the 'dry run' option, though, it didn't actually go ahead and do that.

We can see that visually by telling snakemake to build a workflow diagram, a.k.a. directed acyclic graph (DAG), of what it wants to do.

```
!snakemake -s pdb_code_2_intf_df_snakefile --dag | dot -Tpng > dag.png
from IPython.display import Image
Image("dag.png")
```

(For those wondering about the code in the cell above, the first line has snakemake make the image file using a helper program called `dot`, adapted from [here](https://github.com/ctb/2019-snakemake-ucdavis/blob/master/tutorial.md#outputting-the-entire-workflow-diagram). The other two lines are just some Python that can be used to display images in Jupyter cells.)

Interestingly, it illustrates what snakemake presently wants to do. However, that graph isn't that exciting. **What would we get if we had snakemake start over?** First, we'll tell it to use the clean rule using the following cell:

```
!snakemake -s pdb_code_2_intf_df_snakefile --cores 8 clean
```

That rule was pointed out earlier as a way to reset things; this is a good example of taking advantage of that. To see the workflow diagram, **you could have also placed the next cell earlier in the notebook, before actually running snakemake,** so that you have a record of what is to follow. Feel free to use that feature in your own Jupyter notebooks using snakemake. It was left until last here because it is assumed snakemake would be unfamiliar to most people just looking to get interface records in a computationally usable form.
Let's see the workflow diagram now that we have cleaned up to reset things:

```
!snakemake -s pdb_code_2_intf_df_snakefile --dag | dot -Tpng > dag.png
from IPython.display import Image
Image("dag.png")
```

**Yes, that is our expanded workflow nicely visualized!** If we instead reset and remade `interface_data_2_df_codes.txt` with just the original three PDB codes, it would only have those **original three** in the workflow diagram.

```
!snakemake -s pdb_code_2_intf_df_snakefile --cores 8 clean
s="6kiv\n6kiz\n6kix"
%store s >interface_data_2_df_codes.txt
!snakemake -s pdb_code_2_intf_df_snakefile --dag | dot -Tpng > dag.png
Image("dag.png")
```

With the ability to print the workflow diagram ahead of actually running things via the dry run option, you could redo the steps we took for 'Example A. Add additional data' to add the other two PDB codes, and then do a 'dry run' to see ahead of time that the rule `run_pisa_interface_list_to_df_for_pdb` will only process the two added. (The reader can do that if they wish.) We originally had to look over the text logs after the fact to note this. The workflow diagram combined with the 'dry run' option makes following what will happen much clearer.

------

Hopefully, the demonstration of a pretty straightforward and short workflow has given you a sense of how to use Snakemake and an appreciation for how your work can benefit from the use of workflow management software. There are other options for workflow management software that you may want to explore before going with Snakemake: [Nextflow](https://www.nextflow.io/), and Common Workflow Language (CWL) tools such as Rabix and CWL-Airflow, are other popular alternatives for workflow management software in computational biology.
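If you are curious what a snakefile for a pipeline like this looks like inside, here is a rough, hypothetical sketch of the general shape. This is NOT the contents of the provided `pdb_code_2_intf_df_snakefile`; the per-code script name is made up for illustration, and the real file does more (e.g., building the archive).

```
# Hypothetical sketch only -- NOT the provided pdb_code_2_intf_df_snakefile.
codes = [line.strip() for line in open("interface_data_2_df_codes.txt") if line.strip()]

rule all:
    input:
        expand("{code}_PISAinterface_summary_pickled_df.pkl", code=codes)

rule run_pisa_interface_list_to_df_for_pdb:
    output:
        "{code}_PISAinterface_summary_pickled_df.pkl"
    shell:
        "python make_interface_df.py {wildcards.code}"  # script name is an assumption

rule clean:
    shell:
        "rm -f *_PISAinterface_summary_pickled_df.pkl"
```

The key idea is that `rule all` lists every file the workflow should end with, and snakemake works out which per-code jobs actually need to run by comparing what already exists on disk against that list.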
If you wish to see a more detailed list of the features of Snakemake, and why it is a good choice for work such as this, see [here](https://angus.readthedocs.io/en/2019/snakemake_for_automation.html#introduction-to-snakemake) and [here](https://github.com/ctb/2019-snakemake-ucdavis/blob/master/tutorial.md#thinking-about-workflows---a-stronger-argument).

If you are curious to learn about using Snakemake to make your own workflows, I'd suggest Titus Brown's tutorials on this topic, such as [this one](https://angus.readthedocs.io/en/2019/snakemake_for_automation.html), and The Carpentries' [material on Snakemake](https://carpentries-incubator.github.io/workflows-snakemake/). The environment in which you are most likely currently viewing this notebook is all set up for working through The Carpentries' material. You can retrieve and extract the 'Lesson Data Files' accompanying it by running the following commands in a Jupyter notebook cell:

```text
!curl -OL https://github.com/carpentries-incubator/workflows-snakemake/raw/gh-pages/files/workflow-engines-lesson.tar.gz
!tar xzf workflow-engines-lesson.tar.gz
```

(Leave off the exclamation point in front of the two commands if you are looking to run them in a terminal. See [here](https://carpentries-incubator.github.io/workflows-snakemake/setup) for more on that.)

Titus Brown's tutorials link to active launchable sessions that run in RStudio, also served via MyBinder.org, that come complete with the necessary software and materials already installed or present.
----

In the next notebook in this series, [Making the multiple reports generated via snakemake clearer by adding protein names](Making%20the%20multiple%20reports%20generated%20via%20snakemake%20clearer%20by%20adding%20protein%20names.ipynb), I work through how to make the reports more human readable by swapping the chain designations for the actual names of the proteins. This is similar to the approach to making the report more human readable discussed at the bottom of the previous notebook, [Using PDBsum data to highlight changes in protein-protein interactions](Using%20PDBsum%20data%20to%20highlight%20changes%20in%20protein-protein%20interactions.ipynb); however, it will be done to all the notebooks at once, based on the file names beginning with `interactions_report_for_` and ending with `.ipynb`.

-----

Please continue on with the next notebook in this series, [Making the multiple reports generated via snakemake clearer by adding protein names](Making%20the%20multiple%20reports%20generated%20via%20snakemake%20clearer%20by%20adding%20protein%20names.ipynb). Or, if you are interested in using PDBsum's interface statistics tables with Python, or easily comparing those statistics for two structures, see [Interface statistics basics & comparing Interface statistics for two structures](Interface%20statistics%20basics%20and%20comparing%20Interface%20statistics%20for%20two%20structures.ipynb).

-----
```
#pip install pandahouse
import pandahouse as ph
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

connection_default = {'host': 'http://clickhouse.beslan.pro:8080',
                      'database': 'default',
                      'user': 'student',
                      'password': 'dpo_python_2020'
                      }

connection_test = dict(database='default',
                       host='http://clickhouse.beslan.pro:8080',
                       user='student-rw',
                       password='656e2b0c9c')
```

First, let's check that the connection to ClickHouse via pandahouse works by sending a simple query: output the number of rows in the ldn_listings table.

```
q = '''
SELECT count(id) AS count_rows
FROM {db}.ldn_listings
'''
q_count = ph.read_clickhouse(query=q, connection=connection_default)
q_count
```

Group the resulting dataframe by accommodation type and compute the 75th percentile of the price. As the answer, enter the value of the 75th percentile of the price for rooms of type Private room.

```
q2 = '''
SELECT quantile(0.75)(price), room_type
FROM
    (SELECT id, room_type,
            toFloat32OrNull(replaceRegexpAll(price, '[$,]', '')) AS price
     FROM {db}.ldn_listings
     ORDER BY id ASC
     LIMIT 1000) AS df
GROUP BY room_type
'''
q_room = ph.read_clickhouse(query=q2, connection=connection_default)
q_room

q3 = '''
SELECT room_type,
       AVG(toFloat32OrNull(replaceRegexpAll(price, '[$,]', ''))) AS avg_price,
       AVG(toFloat32OrNull(replaceRegexpAll(review_scores_rating, '[$,]', ''))) AS avg_review_scores_rating
FROM default.ldn_listings
GROUP BY room_type
'''
q_room_3 = ph.read_clickhouse(query=q3, connection=connection_default)
q_room_3

sns.scatterplot(data=q_room_3, x='avg_price', y='avg_review_scores_rating', hue='room_type')
```

Use the explode and value_counts methods to count how many times each verification method occurs. How many hosts confirmed their profile with a Google account?
```
q4 = '''
SELECT DISTINCT host_id, host_verifications
FROM default.ldn_listings
WHERE experiences_offered != 'none'
'''
q_room_4 = ph.read_clickhouse(query=q4, connection=connection_default)
q_room_4

q_room_4['host_verifications'] = q_room_4['host_verifications'].apply(lambda x: pd.eval(x))
q_room_4.host_verifications[0]

q_room_4.explode('host_verifications').host_verifications.value_counts().to_frame(name='count')
```

Now let's see for how many listings, and in which neighbourhoods, the hosts specified experiences. Group the data by neighbourhood and experience type and count the number of listings; name the new column experiences_count. Sort the data by experiences_count in descending order and fetch the first 100 rows. Then reshape the data with pivot, placing the neighbourhood name in the index, the experience type in the columns, and the number of listings with that experience in each neighbourhood as the values. Visualize the result with sns.heatmap() using the palette cmap=sns.cubehelix_palette(as_cmap=True).

```
q5 = '''
SELECT neighbourhood_cleansed, experiences_offered,
       COUNT(experiences_offered) AS experiences_count
FROM default.ldn_listings
WHERE experiences_offered != 'none'
GROUP BY neighbourhood_cleansed, experiences_offered
ORDER BY experiences_count DESC
LIMIT 100
'''
q_room_5 = ph.read_clickhouse(query=q5, connection=connection_default)
q_room_5

q_room_5_final = q_room_5.pivot(index='neighbourhood_cleansed', columns='experiences_offered', values='experiences_count')

plt.figure(figsize=(12, 8))
sns.heatmap(q_room_5_final, cmap=sns.cubehelix_palette(as_cmap=True))
```

Fetch the nightly price data for the different accommodation types for which some kind of experience is also available.
The columns needed for the query:

room_type – the type of accommodation offered (possible values: Entire home/apt, Private room, Hotel room, Shared room)

price – the price per night

experiences_offered – the type of experience available (keep values that are not 'none')

Then build two plots using distplot from the seaborn library:

On the first, show the original price distributions for each accommodation type

On the second – the log-transformed values (np.log())

```
q6 = '''
SELECT room_type,
       toFloat32OrNull(replaceRegexpAll(price, '[$,]', '')) AS price
FROM default.ldn_listings
WHERE experiences_offered != 'none'
'''
q_room_6 = ph.read_clickhouse(query=q6, connection=connection_default)
q_room_6

plt.figure(figsize=(12, 8))
sns.displot(data=q_room_6, x="price", hue="room_type", kde=False)
sns.displot(data=q_room_6, x=np.log(q_room_6.price), hue="room_type", kde=False)
```

Fetch the price, accommodation type, and date of the first review, starting from January 2, 2010. Required columns:

room_type – the type of accommodation offered (possible values: Entire home/apt, Private room, Hotel room, Shared room)

price – the price per night

first_review – the date of the first review (filter with the rule "strictly greater than 2010-01-01")

Set a limit of 1000 rows. Using the seaborn library and the lineplot function, plot the dynamics of average accommodation prices (Y axis) depending on the room type (line colour, the 'hue' parameter) over the years (X axis). The dataframe must be sorted by year.
``` q7 = ''' SELECT first_review, toFloat32OrNull(replaceRegexpAll(price, '[$,]', '')) AS price, room_type FROM default.ldn_listings WHERE first_review > '2010-01-01' LIMIT 1000 ''' q_room_7 = ph.read_clickhouse(query=q7, connection=connection_default) q_room_7 q_room_7['first_review'] = q_room_7.first_review.apply(lambda x: str(x)[:-6]) q_room_7.head(10) q_room_8 = q_room_7.groupby(['room_type', 'first_review'], as_index=False) \ .agg({'price': 'mean'}) \ .rename(columns={'price': 'avg_price'}) \ .rename(columns={'first_review': 'year'}) q_room_8.head(10) sns.lineplot(x='year', y='avg_price', hue='room_type', data=q_room_8) ```
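As an aside, the price cleaning done on the SQL side with `toFloat32OrNull(replaceRegexpAll(price, '[$,]', ''))` can also be done in pandas after fetching the raw strings. A small sketch with made-up sample values (not data from ldn_listings):

```python
import pandas as pd

# Raw prices as they come from the table: strings with '$' and thousands commas
df = pd.DataFrame({"price": ["$1,200.00", "$85.00", None]})

# Equivalent of replaceRegexpAll + toFloat32OrNull: strip '$' and ',' then cast;
# errors="coerce" turns unparseable values into NaN, like toFloat32OrNull does
df["price_num"] = pd.to_numeric(
    df["price"].str.replace(r"[$,]", "", regex=True), errors="coerce"
)
```

Doing it in SQL keeps the transferred dataframe small and already numeric; the pandas route is handy when you want to keep the original strings around as well.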
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Visualization/styled_layer_descriptors.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/styled_layer_descriptors.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Visualization/styled_layer_descriptors.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/styled_layer_descriptors.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time. ``` # %%capture # !pip install earthengine-api # !pip install geehydro ``` Import libraries ``` import ee import folium import geehydro ``` Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. 
Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error. ``` # ee.Authenticate() ee.Initialize() ``` ## Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ``` Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ``` ## Add Earth Engine Python script ``` cover = ee.Image('MODIS/051/MCD12Q1/2012_01_01').select('Land_Cover_Type_1') # Define an SLD style of discrete intervals to apply to the image. sld_intervals = \ '<RasterSymbolizer>' + \ ' <ColorMap type="intervals" extended="false" >' + \ '<ColorMapEntry color="#aec3d4" quantity="0" label="Water"/>' + \ '<ColorMapEntry color="#152106" quantity="1" label="Evergreen Needleleaf Forest"/>' + \ '<ColorMapEntry color="#225129" quantity="2" label="Evergreen Broadleaf Forest"/>' + \ '<ColorMapEntry color="#369b47" quantity="3" label="Deciduous Needleleaf Forest"/>' + \ '<ColorMapEntry color="#30eb5b" quantity="4" label="Deciduous Broadleaf Forest"/>' + \ '<ColorMapEntry color="#387242" quantity="5" label="Mixed Deciduous Forest"/>' + \ '<ColorMapEntry color="#6a2325" quantity="6" label="Closed Shrubland"/>' + \ '<ColorMapEntry color="#c3aa69" quantity="7" label="Open Shrubland"/>' + \ '<ColorMapEntry color="#b76031" quantity="8" label="Woody Savanna"/>' + \ '<ColorMapEntry color="#d9903d" quantity="9" label="Savanna"/>' + \ '<ColorMapEntry color="#91af40" quantity="10" label="Grassland"/>' + \ '<ColorMapEntry color="#111149" quantity="11" label="Permanent Wetland"/>' + \ '<ColorMapEntry color="#cdb33b" quantity="12" label="Cropland"/>' + \ '<ColorMapEntry color="#cc0013" quantity="13" label="Urban"/>' + \ '<ColorMapEntry color="#33280d" quantity="14" 
label="Crop, Natural Veg. Mosaic"/>' + \ '<ColorMapEntry color="#d7cdcc" quantity="15" label="Permanent Snow, Ice"/>' + \ '<ColorMapEntry color="#f7e084" quantity="16" label="Barren, Desert"/>' + \ '<ColorMapEntry color="#6f6f6f" quantity="17" label="Tundra"/>' + \ '</ColorMap>' + \ '</RasterSymbolizer>' Map.addLayer(cover.sldStyle(sld_intervals), {}, 'IGBP classification styled') ``` ## Display Earth Engine data layers ``` Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ```
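Hand-concatenating the SLD XML as above works, but is easy to get wrong. As an alternative sketch, the same kind of string can be assembled from a list of (color, quantity, label) tuples (only the first few classes are repeated here for brevity):

```python
# Build an SLD <RasterSymbolizer> string programmatically instead of
# concatenating each <ColorMapEntry> by hand.
classes = [
    ("#aec3d4", 0, "Water"),
    ("#152106", 1, "Evergreen Needleleaf Forest"),
    ("#225129", 2, "Evergreen Broadleaf Forest"),
    # ... remaining classes omitted for brevity
]

entries = "".join(
    f'<ColorMapEntry color="{color}" quantity="{q}" label="{label}"/>'
    for color, q, label in classes
)
sld_intervals = (
    "<RasterSymbolizer>"
    '<ColorMap type="intervals" extended="false">'
    + entries +
    "</ColorMap>"
    "</RasterSymbolizer>"
)
```

The resulting string can be passed to `cover.sldStyle(...)` exactly like the hand-built one above.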
![SegmentLocal](../../assets/images/Logo2.png)

# Simulating Data

*Neural Time Series Data*

### Prerequisites

For this chapter, you should be familiar with the following concepts and techniques:

* Basic Python programming
* Basic Math **(recap your skills in Linear Algebra, Sine Waves and Euler's Formula)**

### Scope of this tutorial

In this tutorial, you will learn the conceptual, mathematical, and implementational (via Python programming) basis of time- and time-frequency-analysis of EEG recordings. Alternating between theoretical background knowledge and practical exercises, you will learn the basics of how EEG data is recorded, preprocessed and analysed. We will, however, only cover the **fundamental basics of EEG analysis**; but with this, you will then be prepared to dig deeper into the endless opportunities of Neural Time Series analysis.

<div class="alert alert-block alert-warning">
<b>General remark:</b> In order to make the most out of the exercises, we highly recommend you to exploit the interactive character of these notebooks; play around with the parameters and see how this affects your results... <span style=font-style:italic>(e.g. what happens if you use a lower transition bandwidth in your filter or if you don't extract the abs() values of the Fourier Transform?)</span> This way, it is much easier to really understand what you are doing!
</div>

## 1. The Math behind EEG signals

Before we are ready to work with real EEG data, we will first create artificial signals. This makes it much easier to understand the *maths* behind EEG signals, which in turn will help you to understand the following analysis steps a lot better. Step by step, we will make our signal more complex until it approximates *'real'* EEG data. In the next section of this chapter, we will then start to use this knowledge in order to analyse EEG data, recorded by a Neurobiopsychology research group of our institute.
For the following exercises, we will use the signal processing toolbox from scipy. [This link](https://docs.scipy.org/doc/scipy/reference/signal.html) leads you to the documentation of the toolbox, where you can find almost all the functions that you need to solve the following tasks. Whenever you are working with a toolbox, I highly recommend taking some time to explore the corresponding documentation. It helps you to make the most of all the tools it supplies!

```
!pip install mne  # run this if mne is not installed on the colab kernel

import matplotlib.pyplot as plt
import numpy as np
import random
from scipy import signal  # don't name any of your variables 'signal', otherwise the package won't work!
```

### Simple signals

One of the key concepts you need in order to understand the maths behind oscillatory signals (like neural signals in EEG) is the **sine wave**. Here you can find a short overview of the parameters that define a sine wave (more details have been covered in the video "Analysing Neural Time Series Data / EEG Intro" in Chapter 6.1. on studIP).

![SineUrl](https://media.giphy.com/media/U6prF59vkch44/giphy.gif "Sine")

![SegmentLocal2](../../assets/images/sinewave.png)

With the parameters ```amplitude```, ```frequency``` and ```phase``` ($\theta$), a sine wave can be described with the following formula:

$$A\sin(2\pi f t + \theta)$$

### 1.1 Simple Sinewave

With this information, we are now ready to create a simple signal as a combination of sinusoids. For this:

- Define a time scale of 1 second, i.e. 1000ms
- Create THREE sinewaves with a length of 1sec: one with a frequency of 10Hz, one of 15Hz, and one of 20Hz (*for simplicity, we will for now ignore amplitude and phase; they will be used in the next step though*)
- Add them together to create your first simple signal
- Create a plot for each of the sinusoids

```
t = np.linspace(0, 1, 1000, False)  # 1 second
...
# plot all three figures
fig, ax = plt.subplots(4, 1, sharex=True, sharey=True, figsize=(16,8))

# Plot each graph
...

plt.show()
```

### 1.2 More complex signal

As a next step, we want to achieve something more 'complex'. For this, we select a list of frequencies that we want our signal to be composed of, and define their amplitudes and phases. The exact values that you should use for this are already predefined; but play around with them and see how your results change!

With the help of these parameters:

- Create a new, a bit more complex signal by combining the resulting sinusoids (you should get 6 sinusoids with the respective ```freq```, ```amplit``` and ```phase```)
- To make it more realistic, create some random Gaussian noise with the same length and add it to your signal
- Then plot both the clean and the noisy signal

<div class="alert alert-block alert-warning">
<b>The Nyquist Sampling Theorem states:</b> In order to prevent distortions of the underlying information, the minimum sampling frequency of a signal should be double the frequency of its highest frequency component (which is in our case 60Hz).
</div>

We will define it a little bit higher for our filter to properly work. But you can of course change it and see how this affects your plots (especially your filter in exercise 2.2).

```
# set parameters:
srate = 1000  # define sampling rate
nyq = srate/2  # nyquist frequency
freq = [3, 10, 5, 15, 35, 60]  # define a list of frequencies
amplit = [5, 15, 10, 5, 7, 1]  # define their amplitudes
phase = [np.pi/7, np.pi/8, np.pi, np.pi/2, -np.pi/4, np.pi/3]  # and their respective phases

# 1. create signal
t = np.linspace(0, 1, 1000, False)  # 1 second
sig = []
# HINT: use a for loop
...

# 2. add some random noise
# info: the third parameter defines the size of your noise  HINT: use np.random.normal
noise = ...
signal_final = ...
signal_noisy = ...

# 3. plot both figures (signal with and without noise)
fig, axs = plt.subplots(2, 1, sharex=True, sharey=True, figsize=(15,8))

# add big shared axes, hide frame to share ylabel between subplots
fig.add_subplot(111, frameon=False)
plt.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
plt.xlabel('Time [s]', fontsize=12)
plt.ylabel('Amplitude', fontsize=12)

# Plot each graph
...

plt.show()
```

In reality, your EEG signal is roughly based on the same components: it typically contains a mixture of simultaneous neural oscillations at different frequencies plus some noise. This noise can be non-neural (caused by line noise or muscle activity); but also neural oscillations that are not of your interest can be considered as 'noise'. In order to be able to do your analysis as "clean" as possible, you want to isolate only the part of the signal that you are interested in, thereby increasing the **signal-to-noise-ratio (SNR)** of your signal. A way to do this is by **filtering** your data - this will be the focus of the following exercises.

## 2. How to get rid of unwanted noise?

### The Fourier Transform

Before telling you more about how EEG data can be filtered, you need to first learn about the **Fourier Transform (FT)**, which is a really useful and important mathematical tool to analyse EEG time series data (and any kind of time series data in general); with its help we can separate the different frequency components that compose our signal and thus get rid of unwanted frequencies. To get a more intuitive explanation of the Fourier Transform, watch this video by [3Blue1Brown](https://www.youtube.com/watch?v=spUNpyF58BY&ab_channel=3Blue1Brown). <span style=color:#1F618D;font-size:11pt>→ If you want to have a more thorough and more EEG-based intro, [check out this video](https://www.youtube.com/watch?v=d1Yj_7ti_IU&list=PLn0OLiymPak3lrIErlYVnIc3pGTwgt_ml&index=3) by Mike X. Cohen.
Or [watch the whole playlist](https://www.youtube.com/playlist?list=PLn0OLiymPak3lrIErlYVnIc3pGTwgt_ml) for even more detailed explanations.</span>

### 2.1 Extracting the frequency spectrum with the FFT

Now we are ready to apply the **fast Fourier Transform** [```fft.fft()```](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html) to our signals, in order to get their frequency spectrum. Since we created the signal on our own, we can check whether it shows all the frequencies that we used to compose it.

* We will have to compute the frequency spectrum several times, therefore it is useful to write a proper function that can compute the Fourier transform of any signal and already selects the absolute part of the Fourier Transform. The explanation for why we only use the absolute values of the FT gets really mathematical; but if you are curious, I can highly recommend [this video](https://www.youtube.com/watch?v=Nupda1rm01Y)!
* Then apply and plot the FFT of the noisy signal from 1.2. You will see a lot more if you limit your x-axis to 100, since we are not interested in higher frequencies anyway.

If you look at the amplitudes, you will realize that they are half the amplitudes that we predefined when creating the signal. This happens because we are only taking the absolute values of the FT-frequencies and the amplitudes for the negative frequencies have been "removed".

```
def getFT(sig):
    # compute fft
    FFT = np.fft.fft(sig)
    FFT = np.abs(FFT)/len(sig)  # normalise the amplitudes to the number of time-points
    return FFT

# compute and plot FFT of the noisy signal
frequencies = getFT(...)
N = int(len(frequencies)/2)

fig, ax = plt.subplots(figsize=(8,8))
ax.plot(frequencies[:N])  # What happens when you plot all frequencies? Try it!!!
plt.suptitle('Fourier Transform of the Signal')
ax.set(
    title='Frequencies {} plus random Gaussian noise'.format(freq),
    xlim=(0,100),
    xlabel='Frequency (Hz)',
    ylabel='Amplitude'
)
plt.show()

print(N)
```

### Filtering EEG Data

Now that we have recovered the components of our signal with the Fourier Transform, we will have a short look into how EEG data is filtered, in order to remove the noise. This knowledge will be important for the second half of the Notebook.

### 2.2 Filtering in the time-domain vs. filtering in the frequency-domain

In this part, we will see with some practical examples how we can filter our data in two different ways. Usually, you only filter in the frequency domain since this is computationally a lot faster. Yet, it is really useful to learn about both procedures in order to better understand the concept of filtering in general. In the video above you already got a first impression of filtering in the frequency domain. In order to better understand its time-domain equivalent, you need to first learn about the process of convolution, i.e. the (mathematical) procedure of applying your filter to your data in the time domain. This should however not be entirely new to you; you will find some similarities to the procedure behind the Fourier Transform:

<div class="alert alert-block alert-success">
<b>Convolution:</b> "Convolution is used to isolate frequency-band-specific activity and to localize that frequency-band-specific activity in time. This is done by <b>convolving wavelets— i.e. time-limited sine waves—with EEG data.</b> As the wavelet (i.e. the convolution kernel) is dragged along the EEG data (the convolution signal): it reveals when and to what extent the EEG data contain features that look like the wavelet. When convolution is repeated on the same EEG data using wavelets of different frequencies, a time-frequency representation can be formed."
<span style=font-style:italic>(Mike X Cohen, "Analyzing Neural Time Series Data: Theory and Practice")</span>

→ If you want a more thorough and more visual explanation of convolution, I can highly recommend [this video](https://www.youtube.com/watch?v=9Hk-RAIzOaw) by Mike X Cohen.
</div>

<div class="alert alert-block alert-success">
<b>Convolution theorem:</b> Convolution in the time domain is the same as multiplication in the frequency domain.
</div>

![SegmentLocal](../../assets/images/Convolution_Theorem.png)

### 2.2.1 Filter in the time domain

According to the figure above, in order to filter our signal in the time domain, we need a windowed sinewave as a filter-kernel. The windowing helps to obtain **temporally localized frequency information**. We then convolve this wavelet with our signal, extracting the frequency bands that we want to work with.

- First define your pass-band as 25Hz. Ideally everything above this frequency is filtered out; in reality however, we need a transition band of about 10 Hz, or a region between the pass-frequency ```f_p``` and stop-frequency ```f_s```. In this range, frequencies are only attenuated instead of completely excluded. This is necessary in order to account for the trade-off between precision in the frequency-domain and precision in the time-domain.
- Next, we define the gains at each frequency band: everything outside 0 and our pass-band of 25Hz should be attenuated, i.e. have a gain close to 0.
- Using the function [```firwin2()```](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.firwin2.html) of the signal package and the parameters from above, we can now construct our filter-kernel ```h_win``` (the result should be a wavelet with a length/duration of 0.8 seconds)
- Plot your kernel as well as its frequency spectrum. It should look like a step-function that assigns a gain of 1 to all frequencies in our pass-band between 0 - 25Hz.

Tip: Play around with the parameters of your filter (e.g. the filter's duration, its transition bandwidth or its stop- and passband, the sampling rate etc.) and see how the plots change. You can then also proceed with the whole filtering process and check out what different filters (with different parameters) do to your data. This way, you can properly understand how the different parameters finally affect your data.

```
# Create a Low-pass Filter: Windowed 10-Hz transition

# 1. Define filtering parameters
filter_duration = 0.8
n = int(round(srate * filter_duration)+1)  # odd number of filter coefficients for linear phase
f_p = 25.  # define passband
trans_bandwidth = 10.
f_s = f_p + trans_bandwidth  # stopband = 35 Hz
print(f_s)

# define gains of each frequency band
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]

# 2. Compute filter graph
h_win = signal.firwin2(..., nyq=nyq)

# 3. Compute frequency spectrum of the filter
frequencies = getFT(...)

# 4. Plot filter in time and in frequency domain
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10,8))
plt.subplots_adjust(hspace=0.5)
time = np.linspace(-1, 1, len(h_win))
...
plt.show()

# use inbuilt mne-function to plot filter characteristics (a lot more detailed)
import mne
flim = (1., nyq)  # limits for plotting
mne.viz.plot_filter(h_win, srate, freq, gain, 'Windowed {} Hz transition ({}s filter-duration)'.format(trans_bandwidth, filter_duration), flim=flim, compensate=True)
```

Now we are ready to convolve our signal with our self-constructed FIR filter ```h_win```.

- For this, we use the [```convolve()```](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve.html) function of the *signal* package.
- Then plot both the filtered and the unfiltered signal in order to see the effect of your filter.
- Afterwards, we want to check in the frequency spectrum of our signal whether our filter successfully attenuated the frequency components above 25Hz. For this: compute and plot the FT of both the filtered and the unfiltered signal.
- In order to compare which filtering procedure is faster, record the computation time of the time-domain convolution with the help of the magic function [```%timeit```](https://docs.python.org/2/library/timeit.html) (you can write an extra line for this, where you again perform your convolution).

```
# 1. Convolve signal with the filter
conv_time = signal.convolve(..., mode='same')

# and calculate computation time
%timeit signal.convolve(..., mode='same')

# 2. Plot filtered and unfiltered signal
...

# 3. Compute and plot frequency spectrum of the filtered and the unfiltered signal
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10,8))
plt.subplots_adjust(hspace=0.5)
...
plt.show()
```

### 2.2.2 Filter in the frequency domain

Filtering in the frequency domain is computationally much faster and easier. According to the convolution theorem (see above):

- Multiply the frequency spectrum of our filter-kernel with the frequency spectrum of our signal.
- In order to compare the filtered and the unfiltered signal, first compute the inverse Fourier Transform of your filtering result with [```fft.ifft```](https://numpy.org/doc/stable/reference/generated/numpy.fft.ifft.html) (in order to translate it to the time domain) and then plot both signals, the unfiltered and the filtered one, in one plot.

<div class="alert alert-block alert-warning">
<b>Note:</b> So far, every time we applied the Fourier transform (FT) to our signal, we only used the absolute values of the FT-result, because this was what we were interested in. To visualize what that means, just plot the FT of any of our signals with and without the abs()-function. For the inverse FT to work properly however, we need the "whole" result of the FT, which is why we omit the abs() function this time.
</div>

- In a second plot, compare your result from filtering in the frequency domain with your convolution result from before in the time domain (from 2.2.1).
According to the relationship between the frequency and the time domain, both curves should look exactly the same!

- In order to compare which filtering procedure is faster, again record the computation time of the frequency-domain filtering with the help of the magic function [```%timeit```](https://docs.python.org/2/library/timeit.html). Compare the result to the computation time of the time-domain convolution. Which one is faster?

```
# 1. Compute lengths of the result
# in order to make the inverse FFT return the correct number of time points:
# we need to make sure to compute the FFTs of the signal and the kernel using
# the appropriate number of time points. In other words:
# the length of the signal (= srate = 1000) plus the length of the kernel (= 801) minus one (= 1800)
# Afterwards the result has to be trimmed to its original length again (step 5)
n_signal = ...
n_kernel = ...
print(n_signal, n_kernel)
nconv = n_signal + n_kernel - 1
halfk = np.floor(n_kernel/2)

# 2. Compute FT of the kernel and the signal
h_winX = np.fft.fft(...)
signalX = np.fft.fft(...)

# 3. Multiply frequencies
result_frequencydomain = h_winX*signalX

# 4. Compute inverse FT (convert frequency-domain to the time-domain)
result_timedomain = np.fft.ifft(result_frequencydomain)
print(len(result_timedomain))

# 5. Cut the signal to original length
result_timedomain = result_timedomain[int(halfk):-int(halfk)]

# 6. Plot both signals (unfiltered and filtered) in one plot
...
plt.legend(bbox_to_anchor=[1.2, 1.3], ncol=2)

# 7. Plot results of filtering in the frequency domain and filtering in the time domain
fig, ax = plt.subplots(3, 1, figsize=(15,8), sharey=True)
...
plt.subplots_adjust(hspace=0.2)
fig.add_subplot(111, frameon=False)
plt.suptitle('Compare results of filtering in the frequency domain and filtering in the time domain')
plt.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
plt.xlabel('Time [ms]', fontsize=12)
plt.ylabel('Amplitude', fontsize=12)
plt.legend();

# 8. Calculate computation time
%timeit conv_fd = h_winX*signalX
```

## Bonus Exercise: Time-Frequency Analysis

The FT alone does not describe the signal perfectly. For non-stationary signals (like EEG), we are interested in the evoked response of the brain, and the result of the simple FT alone will not show us that. Hence, we rely on Time-Frequency Analysis in order to understand the temporal structure of the different frequencies in the signal. **Spectrograms** will do the trick! After applying a specific (time-resolved) version of the FT, they show us how much of each frequency component was present at a specific time point. [This section of one of Cohen's videos about the Fast Fourier Transform](https://youtu.be/T9x2rvdhaIE?t=118) nicely explains and visualizes how a spectrogram is computed.

- Plot the power spectrogram of the noisy signal using the function [```plt.specgram()```](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.specgram.html).
- Also compare the results with the [stft](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.stft.html) method

```
# Plot simple spectrogram of the noisy signal
...
```

## Further reading

In case you want to learn more about EEG processing, I highly recommend the following books:

- Cohen, M. X. (2014). *Analyzing neural time series data: Theory and practice*. MIT press.
- Luck, S. J. (2014). *An introduction to the event-related potential technique, second edition*. Cambridge, Massachusetts: The MIT Press.
- Collection of notebooks for EEG analysis in Python: [EEGinPython](https://elifesciences.org/labs/f779833b/python-for-the-practicing-neuroscientist-an-online-educational-resource)

## Summary: What you have learned about Neural Time Series Data

Congratulations, you've mastered the first chapter about neural time series data analysis!
In this chapter you have learned: - The basic mathematical concepts behind EEG signals - How to first create an artificial signal and then decompose it into its parts with the Fourier Transform - How to apply this knowledge to the different filtering procedures, creating your own filter-kernel and then playing around with its parameters - How to filter your data in the frequency and the time domain and thereby smoothly move between the two domains
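As a recap, the whole pipeline can be condensed into a short sketch. The parameter values are the ones from exercise 1.2; the brick-wall cutoff at 25 Hz is a deliberate simplification of the windowed `firwin2` kernel built in exercise 2.2, used here only to show the frequency-domain idea end to end:

```python
import numpy as np

# Parameters from exercise 1.2
srate = 1000
t = np.linspace(0, 1, srate, False)
freq = [3, 10, 5, 15, 35, 60]
amplit = [5, 15, 10, 5, 7, 1]
phase = [np.pi/7, np.pi/8, np.pi, np.pi/2, -np.pi/4, np.pi/3]

# 1. Compose the signal from six sinusoids plus Gaussian noise
rng = np.random.default_rng(0)
sig = sum(a * np.sin(2 * np.pi * f * t + p)
          for f, a, p in zip(freq, amplit, phase))
sig_noisy = sig + rng.normal(0, 2, len(t))

# 2. Amplitude spectrum: peaks at 3, 5, 10, 15, 35, 60 Hz with half the
#    predefined amplitudes (the other half sits on the negative frequencies)
spectrum = np.abs(np.fft.fft(sig_noisy)) / len(sig_noisy)

# 3. Simplified frequency-domain low-pass: zero every bin above 25 Hz,
#    then invert (a real filter would use a smooth transition band)
sigX = np.fft.fft(sig_noisy)
freqs = np.fft.fftfreq(len(sig_noisy), d=1/srate)
sigX[np.abs(freqs) > 25] = 0
sig_filtered = np.fft.ifft(sigX).real
```

After this step the 35 Hz and 60 Hz components (and most of the noise) are gone, while the peaks below 25 Hz survive untouched.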
# Databases <span class="tocSkip"></span>

Introduction
------------

Many of you will deal with complex data — and often, lots of it. Ecological and Evolutionary data are particularly complex because they contain large numbers of attributes, often measured in very different scales and units for individual taxa, populations, etc. In this scenario, storing the data in a database makes a lot of sense! You can easily include the database in your analysis workflow — indeed, that's why people use databases. And you can use python (and R) to build, manipulate and use your database.

### Relational databases

A *relational* database is a collection of interlinked (*related*) tables that altogether store a complex dataset in a logical, computer-readable format. Dividing a dataset into multiple tables minimizes redundancies. For example, if your data were sampled from three sites — then, rather than repeating the site name and description in each row in a text file, you could just specify a numerical "key" that directs to another table containing the sampling site name and description. Finally, if you have many rows in your data file, the type of sequential access we have been using in our `python` and `R` scripts is inefficient — you should be able to instantly access any row regardless of its position.

Data columns in a database are usually called *fields*, while the rows are the *records*. Here are a few things to keep in mind about databases:

* Each field typically contains only one data type (e.g., integers, floats, strings)
* Each record is a "data point", composed of different values, one for each field — somewhat like a python tuple
* Some fields are special, and are called *keys*:
* The *primary key* uniquely defines a record in a table (e.g., each row is identified by a unique number)
* To allow fast retrieval, some fields (and typically all the keys) are indexed — a copy of certain columns that can be searched very efficiently.
* *Foreign keys* are keys in a table that are primary keys in another table and define relationships between the tables

The key to designing a database is to minimize redundancy and dependency without losing the logical consistency of tables — this is called *normalization* (arguably more of an art than a science!)

Let's look at a simple example. Imagine you recorded body sizes of species from different field sites in a single text file (e.g., a `.csv` file) with the following fields:

|Field|Definition|
|:-|:-|
|`ID` | Unique ID for the record|
|`SiteName` | Name of the site|
|`SiteLong` | Longitude of the site|
|`SiteLat` | Latitude of the site|
|`SamplingDate` | Date of the sample|
|`SamplingHour` | Hour of the sampling|
|`SamplingAvgTemp` | Average air temperature on the sampling day|
|`SamplingWaterTemp` | Temperature of the water|
|`SamplingPH` | pH of the water|
|`SpeciesCommonName`| Species of the sampled individual|
|`SpeciesLatinBinom`| Latin binomial of the species|
|`BodySize` | Width of the individual|
|`BodyWeight` | Weight of the individual|

It would be logical to divide the data into four tables:

*Site table*:

|Field|Definition|
|:-|:-|
|`SiteID` |ID for the site|
|`SiteName`| Name of the site|
|`SiteLong` | Longitude of the site|
|`SiteLat` | Latitude of the site|

*Sample table*:

|Field|Definition|
|:-|:-|
|`SamplingID` | ID for the sampling date|
|`SamplingDate` | Date of the sample|
|`SamplingHour` | Hour of the sample|
|`SamplingAvgTemp` |Average air temperature|
|`SamplingWaterTemp`| Temperature of the water|
|`SamplingPH` | pH of the water|

*Species table*:

|Field|Definition|
|:-|:-|
|`SpeciesID` | ID for the species|
|`SpeciesCommonName`| Species name|
|`SpeciesLatinBinom` | Latin binomial of the species|

*Individual table*:

|Field|Definition|
|:-|:-|
|`IndividualID`| ID for the individual sampled|
|`SpeciesID` | ID for the species|
|`SamplingID` |ID for the sampling day|
|`SiteID` | ID for the site|
|`BodySize` | Width of the individual|
|`BodyWeight` | Weight of the individual|

In each table, the first ID field is the primary key. The last table contains three foreign keys because each individual is associated with one species, one sampling day and one sampling site. These structural features of a database are called its *schema*.

## SQLite

`SQLite` is a simple (and very popular) SQL (Structured Query Language)-based solution for managing localized, personal databases. I can safely bet that most, if not all of you unknowingly (or knowingly!) use `SQLite` — it is used by MacOSX, Firefox, Acrobat Reader, iTunes, Skype, iPhone, etc. SQLite is also the database "engine" underlying your [Silwood Masters Web App](http://silwoodmasters.co.uk)

We can easily use SQLite through Python scripts. First, install SQLite by typing in the Ubuntu terminal:

```bash
sudo apt install sqlite3 libsqlite3-dev
```

Also, make sure that you have the necessary package for python by typing `import sqlite3` in the python or ipython shell. Finally, you may install a GUI for SQLite3: `sudo apt install sqliteman`

Now type `sqlite3` in the Ubuntu terminal to check if SQLite successfully launches.

SQLite has very few data types (and lacks a boolean and a date type):

|Field Data Type| Definition|
|:-|:-|
|`NULL` | The value is a NULL value |
|`INTEGER` | The value is a signed integer, stored in up to 8 bytes |
| `REAL` | The value is a floating point value, stored in 8 bytes |
| `TEXT` | The value is a text string |
| `BLOB` | The value is a blob of data, stored exactly as it was input (useful for binary types, such as bitmap images or pdfs) |

Typically, you will build a database by importing csv data — be aware that:

* Headers: the csv should have no headers
* Separators: if the comma is the separator, each record should not contain any other commas
* Quotes: there should be no quotes in the data
* Newlines: there should be no newlines

Now build your first database in SQLite!
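Everything we do below in the sqlite3 shell can also be scripted from Python with the standard-library `sqlite3` module. Here is a minimal sketch using an in-memory database and a few made-up rows (a stripped-down stand-in for the real TraitInfo csv, not the actual data):

```python
import sqlite3

# In-memory database; a cut-down TraitInfo with made-up rows for illustration
con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("""CREATE TABLE TraitInfo (
                   Numbers INTEGER PRIMARY KEY,
                   OriginalTraitName TEXT,
                   Habitat TEXT)""")

rows = [(1, "Resource Consumption Rate", "freshwater"),
        (2, "Resource Consumption Rate", "marine"),
        (3, "Foraging Velocity", "marine")]
# executemany() with "?" placeholders stands in for the shell's .import step
cur.executemany("INSERT INTO TraitInfo VALUES (?, ?, ?)", rows)
con.commit()

# The same SELECT syntax works as in the sqlite3 shell
counts = cur.execute(
    "SELECT Habitat, COUNT(*) FROM TraitInfo GROUP BY Habitat").fetchall()
print(counts)
```

In a real script you would read `TraitInfo.csv` with the `csv` module and feed its rows to `executemany()` in the same way.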
We will use as an example a global dataset on metabolic traits called *Biotraits* that we are currently developing in our lab (it should be in your `Data` directory). This dataset contains 164 columns (fields). Thermal response curves for different traits and species are stored in rows. This means that site description or taxonomy are repeated as many times as temperatures are measured in the curve. You can imagine how much redundancy can be here!!! For this reason, it is easier to migrate the dataset to SQL and split it into several tables:

* *TCP*: Includes the thermal curve performance for each species and trait (as many rows per trait and species as temperatures have been measured within the TCP)
* *TraitInfo*: Contains site description and conditions under which the traits were measured (one row per thermal curve)
* *Consumer*: Consumer description including taxonomy (one row per thermal curve)
* *Resource*: Resource description including taxonomy (one row per thermal curve)
* *Size*: Size data for each species (one row per thermal curve)
* *DataSource*: Contains information about the data source (citation, contributors) (one row per thermal curve)

So all these tables compose the *Biotraits* `schema`.

In a Linux/Unix terminal, navigate to your `data` directory. Now, launch a new database using sqlite:

```bash
sqlite3 Biotraits.db
```

This should return something like:

```sql
SQLite version 3.11.0 2016-02-15 17:29:24
Enter ".help" for usage hints.
```

This creates an empty database in your `data` directory. You should now see the sqlite cursor (`sqlite>`), and will be entering your commands there. Now we need to create a table with some fields.
Let's start with the *TraitInfo* table (enter these one line at a time, without the `...>`): ```bash sqlite> CREATE TABLE TraitInfo (Numbers integer primary key, ...> OriginalID text, ...> FinalID text, ...> OriginalTraitName text, ...> OriginalTraitDef text, ...> Replicates integer, ...> Habitat integer, ...> Climate text, ...> Location text, ...> LocationType text, ...> LocationDate text, ...> CoordinateType text, ...> Latitude integer, ...> Longitude integer); ``` Note that I am writing all SQL commands in upper case, but it is not necessary. I am using upper case here because SQL syntax is long and clunky, and it quickly becomes hard to spot (and edit) commands in long strings of complex queries. Now let's import the dataset: `sqlite> .mode csv` `sqlite> .import TraitInfo.csv TraitInfo` So we built a table and imported a csv file into it. Now we can ask SQLite to show all the tables we currently have: `sqlite> .tables` Let's run our first *Query* (note that you need a semicolon to end a command): `sqlite> SELECT * FROM TraitInfo LIMIT 5;` Let's turn on some nicer formatting: `sqlite> .mode column` `sqlite> .header ON` `sqlite> SELECT * FROM TraitInfo LIMIT 5;` You should see something like: ```bash Numbers OriginalID FinalID OriginalTraitName ... ------- ---------- ---------- ------------------------- ... 1 1 MTD1 Resource Consumption Rate ... 4 2 MTD2 Resource Consumption Rate ... 6 3 MTD3 Resource Consumption Rate ... 9 4 MTD4 Resource Mass Consumption ... 12 5 MTD5 Resource Mass Consumption ... ``` The main statement to select records from a table is `SELECT`: `sqlite> .width 40 ## NOTE: Control the width` `sqlite> SELECT DISTINCT OriginalTraitName FROM TraitInfo; # Returns unique values` Which gives: ```bash OriginalTraitName ---------------------------------------- Resource Consumption Rate Resource Mass Consumption Rate Mass-Specific Mass Consumption Rate Voluntary Body Velocity Forward Attack Distance Foraging Velocity Resource Reaction Distance .... 
```

Now try these:

```bash
sqlite> SELECT DISTINCT Habitat FROM TraitInfo
   ...> WHERE OriginalTraitName = "Resource Consumption Rate"; # Sets a condition

Habitat
----------------------------------------
freshwater
marine
terrestrial

sqlite> SELECT COUNT (*) FROM TraitInfo; # Returns number of rows

Count (*)
--------------------
2336

sqlite> SELECT Habitat, COUNT(OriginalTraitName) # Returns number of rows for each group
   ...> FROM TraitInfo GROUP BY Habitat;

Habitat     COUNT(OriginalTraitName)
----------  ------------------------
NA          16
freshwater  609
marine      909
terrestria  802

sqlite> SELECT COUNT(DISTINCT OriginalTraitName) # Returns number of unique values
   ...> FROM TraitInfo;

COUNT(DISTINCT OriginalTraitName)
---------------------------------
220

sqlite> SELECT COUNT(DISTINCT OriginalTraitName) TraitCount # Assigns alias to the variable
   ...> FROM TraitInfo;

TraitCount
----------

sqlite> SELECT Habitat,
   ...> COUNT(DISTINCT OriginalTraitName) AS TN
   ...> FROM TraitInfo GROUP BY Habitat;

Habitat     TN
----------  ----------
NA          7
freshwater  82
marine      95
terrestria  96

sqlite> SELECT *              # WHAT TO SELECT
   ...> FROM TraitInfo        # FROM WHERE
   ...> WHERE Habitat = "marine"  # CONDITIONS
   ...> AND OriginalTraitName = "Resource Consumption Rate";

Numbers     OriginalID  FinalID     OriginalTraitName          ...
----------  ----------  ----------  -------------------------  ...
778         308         MTD99       Resource Consumption Rate  ...
798         310         MTD101      Resource Consumption Rate  ...
806         311         MTD102      Resource Consumption Rate  ...
993         351         MTD113      Resource Consumption Rate  ...
```

The structure of the `SELECT` command is as follows (*Note: **all** characters are case **in**sensitive*):

```bash
SELECT [DISTINCT] field
FROM table
WHERE predicate
GROUP BY field
HAVING predicate
ORDER BY field
LIMIT number
;
```

Let's try some more elaborate queries:

```bash
sqlite> SELECT Numbers FROM TraitInfo LIMIT 5;

Numbers
----------
1
4
6
9
12

sqlite> SELECT Numbers
   ...> FROM TraitInfo
   ...> WHERE Numbers > 100
   ...> AND Numbers < 200;

Numbers
----------
107
110
112
115

sqlite> SELECT Numbers
   ...> FROM TraitInfo
   ...> WHERE Habitat = "freshwater"
   ...> AND Numbers > 700
   ...> AND Numbers < 800;

Numbers
----------
704
708
712
716
720
725
730
735
740
744
748
```

You can also match records using something like regular expressions. In SQL, when we use the command `LIKE`, the percent symbol (%) matches any sequence of zero or more characters and the underscore (_) matches any single character. Similarly, `GLOB` uses the asterisk (*) and the question mark (?).

```bash
sqlite> SELECT DISTINCT OriginalTraitName
   ...> FROM TraitInfo
   ...> WHERE OriginalTraitName LIKE "_esource Consumption Rate";

OriginalTraitName
-------------------------
Resource Consumption Rate

sqlite> SELECT DISTINCT OriginalTraitName
   ...> FROM TraitInfo
   ...> WHERE OriginalTraitName LIKE "Resource%";

OriginalTraitName
----------------------------------------
Resource Consumption Rate
Resource Mass Consumption Rate
Resource Reaction Distance
Resource Habitat Encounter Rate
Resource Consumption Probability
Resource Mobility Selection
Resource Size Selection
Resource Size Capture Intent Acceptance
Resource Encounter Rate
Resource Escape Response Probability

sqlite> SELECT DISTINCT OriginalTraitName
   ...> FROM TraitInfo
   ...> WHERE OriginalTraitName GLOB "Resource*";

OriginalTraitName
----------------------------------------
Resource Consumption Rate
Resource Mass Consumption Rate
Resource Reaction Distance
Resource Habitat Encounter Rate
Resource Consumption Probability
Resource Mobility Selection
Resource Size Selection
Resource Size Capture Intent Acceptance
Resource Encounter Rate
Resource Escape Response Probability

# NOTE THAT GLOB IS CASE SENSITIVE, WHILE LIKE IS NOT

sqlite> SELECT DISTINCT OriginalTraitName
   ...> FROM TraitInfo
   ...> WHERE OriginalTraitName LIKE "resource%";

OriginalTraitName
----------------------------------------
Resource Consumption Rate
Resource Mass Consumption Rate
Resource Reaction Distance
Resource Habitat Encounter Rate
Resource Consumption Probability
Resource Mobility Selection
Resource Size Selection
Resource Size Capture Intent Acceptance
Resource Encounter Rate
Resource Escape Response Probability
```

We can also order by any column:

```bash
sqlite> SELECT OriginalTraitName, Habitat FROM
   ...> TraitInfo LIMIT 5;

OriginalTraitName          Habitat
-------------------------  ----------
Resource Consumption Rate  freshwater
Resource Consumption Rate  freshwater
Resource Consumption Rate  freshwater
Resource Mass Consumption  freshwater
Resource Mass Consumption  freshwater

sqlite> SELECT OriginalTraitName, Habitat FROM
   ...> TraitInfo ORDER BY OriginalTraitName LIMIT 5;

OriginalTraitName           Habitat
--------------------------  ----------
48-hr Hatching Probability  marine
Asexual Reproduction Rate   marine
Attack Body Acceleration    marine
Attack Body Velocity        marine
Attack Body Velocity        marine
```

Until now we have just queried data from one single table, but as we have seen, the point of storing a database in SQL is that we can use multiple tables, minimizing redundancies within them. And of course, querying data from those different tables at the same time will be necessary at some point.
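The `LIKE`/`GLOB` contrast above is easy to verify from a Python script as well; a sketch with a tiny illustrative table (trait names borrowed from the output above, not the full dataset):

```python
import sqlite3

# Tiny illustrative table to contrast LIKE (case-insensitive) with GLOB
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE TraitInfo (OriginalTraitName TEXT)")
cur.executemany("INSERT INTO TraitInfo VALUES (?)",
                [("Resource Consumption Rate",),
                 ("Resource Reaction Distance",),
                 ("Foraging Velocity",)])

# LIKE: % matches any sequence, _ one character; matching ignores case
like_hits = cur.execute("SELECT OriginalTraitName FROM TraitInfo "
                        "WHERE OriginalTraitName LIKE 'resource%'").fetchall()

# GLOB: * matches any sequence, ? one character; matching is case-sensitive
glob_hits = cur.execute("SELECT OriginalTraitName FROM TraitInfo "
                        "WHERE OriginalTraitName GLOB 'resource*'").fetchall()

print(len(like_hits), len(glob_hits))  # lowercase pattern: LIKE matches, GLOB doesn't
```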
Let's then import one more table into our database: ```bash sqlite> CREATE TABLE Consumer (Numbers integer primary key, ...> OriginalID text, ...> FinalID text, ...> Consumer text, ...> ConCommon text, ...> ConKingdom text, ...> ConPhylum text, ...> ConClass text, ...> ConOrder text, ...> ConFamily text, ...> ConGenus text, ...> ConSpecies text); ``` ```bash sqlite> .import Consumer.csv Consumer ``` Now we have two tables in our database: ```bash sqlite> .tables Consumer TraitInfo ``` These tables are connected by two different keys: `OriginalID` and `FinalID`. These are unique IDs for each thermal curve. For each `FinalID` we can get the trait name (`OriginalTraitName`) from the `TraitInfo` table and the corresponding species name (`ConSpecies`) from the `Consumer` table. ```bash sqlite> SELECT A1.FinalID, A1.Consumer, A2.FinalID, A2.OriginalTraitName ...> FROM Consumer A1, TraitInfo A2 ...> WHERE A1.FinalID=A2.FinalID LIMIT 8; FinalID Consumer FinalID OriginalTraitName ---------- --------------------- ---------- ------------------------- MTD1 Chaoborus trivittatus MTD1 Resource Consumption Rate MTD2 Chaoborus trivittatus MTD2 Resource Consumption Rate MTD3 Chaoborus americanus MTD3 Resource Consumption Rate MTD4 Stizostedion vitreum MTD4 Resource Mass Consumption MTD5 Macrobrachium rosenbe MTD5 Resource Mass Consumption MTD6 Ranatra dispar MTD6 Resource Consumption Rate MTD7 Ceriodaphnia reticula MTD7 Mass-Specific Mass Consum MTD8 Polyphemus pediculus MTD8 Voluntary Body Velocity # In the same way we assign aliases to variables, we can assign them to tables. ``` This example seems easy because both tables have the same number of rows. But the query is just as simple when the tables have different numbers of rows.
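The aliased two-table join can be sketched the same way with Python's `sqlite3` module; the two miniature tables below hold a couple of made-up rows each, just enough to exercise the join condition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE Consumer (FinalID TEXT, Consumer TEXT)")
c.execute("CREATE TABLE TraitInfo (FinalID TEXT, OriginalTraitName TEXT)")
c.executemany("INSERT INTO Consumer VALUES (?, ?)",
              [("MTD1", "Chaoborus trivittatus"),
               ("MTD3", "Chaoborus americanus")])
c.executemany("INSERT INTO TraitInfo VALUES (?, ?)",
              [("MTD1", "Resource Consumption Rate"),
               ("MTD3", "Resource Consumption Rate")])

# A1 and A2 are table aliases, exactly as in the interactive session above
rows = c.execute("SELECT A1.FinalID, A1.Consumer, A2.OriginalTraitName "
                 "FROM Consumer A1, TraitInfo A2 "
                 "WHERE A1.FinalID = A2.FinalID").fetchall()
for row in rows:
    print(row)
conn.close()
```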
Let's import the TCP table: ```bash sqlite> CREATE TABLE TCP (Numbers integer primary key, ...> OriginalID text, ...> FinalID text, ...> OriginalTraitValue integer, ...> OriginalTraitUnit text, ...> LabGrowthTemp integer, ...> LabGrowthTempUnit text, ...> ConTemp integer, ...> ConTempUnit text, ...> ConTempMethod text, ...> ConAcc text, ...> ConAccTemp integer); sqlite> .import TCP.csv TCP sqlite> .tables Consumer TCP TraitInfo ``` Now imagine we want to query the thermal performance curves that we have stored for the species Mytilus edulis. Using the FinalID to match the tables, the query can be as simple as: ```bash sqlite> SELECT A1.ConTemp, A1.OriginalTraitValue, A2.OriginalTraitName, A3.Consumer ...> FROM TCP A1, TraitInfo A2, Consumer A3 ...> WHERE A1.FinalID=A2.FinalID AND A3.ConSpecies="Mytilus edulis" AND A3.FinalID=A2.FinalID LIMIT 8; ConTemp OriginalTraitValue OriginalTraitName Consumer ---------- -------------------- ------------------------------ -------------------- 25 2.707075 Filtration Rate Mytilus edulis 20 3.40721 Filtration Rate Mytilus edulis 5 3.419455 Filtration Rate Mytilus edulis 15 3.711165 Filtration Rate Mytilus edulis 10 3.875465 Filtration Rate Mytilus edulis 5 0.34 In Vitro Gill Particle Transpo Mytilus edulis 10 0.46 In Vitro Gill Particle Transpo Mytilus edulis 15 0.595 In Vitro Gill Particle Transpo Mytilus edulis ``` So on and so forth (joining tables etc. would come next...). But if you want to keep practicing and learn more about sqlite commands, this is a very useful site: <http://www.sqlite.org/sessions/sqlite.html>.
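The three-table query can likewise be sketched with `sqlite3` in Python. Each miniature table below holds a single made-up row, just enough to show how the two `FinalID` match conditions tie the tables together:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE TCP (FinalID TEXT, ConTemp REAL, OriginalTraitValue REAL)")
c.execute("CREATE TABLE TraitInfo (FinalID TEXT, OriginalTraitName TEXT)")
c.execute("CREATE TABLE Consumer (FinalID TEXT, ConSpecies TEXT)")
c.execute("INSERT INTO TCP VALUES ('MTD9', 25, 2.707075)")
c.execute("INSERT INTO TraitInfo VALUES ('MTD9', 'Filtration Rate')")
c.execute("INSERT INTO Consumer VALUES ('MTD9', 'Mytilus edulis')")

# Both joins go through FinalID; the species filter sits in the same WHERE clause
rows = c.execute(
    "SELECT A1.ConTemp, A1.OriginalTraitValue, A2.OriginalTraitName, A3.ConSpecies "
    "FROM TCP A1, TraitInfo A2, Consumer A3 "
    "WHERE A1.FinalID = A2.FinalID "
    "AND A3.FinalID = A2.FinalID "
    "AND A3.ConSpecies = 'Mytilus edulis'").fetchall()
print(rows)
conn.close()
```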
You can store your queries and database management commands in an `.sql` file (`geany` will take care of syntax highlighting etc.). ## SQLite with Python It is easy to access, update and manage SQLite databases with Python (you will find this script file in the `code` directory): ```python import sqlite3 conn = sqlite3.connect(":memory:") c = conn.cursor() c.execute("CREATE TABLE tt (Val TEXT)") conn.commit() z = [('a',), ('ab',), ('abc',), ('b',), ('c',)] c.executemany("INSERT INTO tt VALUES (?)", z) conn.commit() print(c.execute("SELECT * FROM tt WHERE Val LIKE 'a%'").fetchall()) conn.close() ``` You can create a database in memory, without using the disk, so you can create and discard an SQLite database within your workflow! Readings and Resources ---------------------- * "The Definitive Guide to SQLite" is a pretty complete guide to SQLite and freely available from [here]( http://sd.blackball.lv/library/The_Definitive_Guide_to_SQLite_2nd_edition.pdf) * For databases in general, try the [Stanford Introduction to Databases course](https://www.coursera.org/course/db) * A set of sqlite tutorials in Jupyter: https://github.com/royalosyin/Practice-SQL-with-SQLite-and-Jupyter-Notebook
# Partial Least Squares Regression (PLSR) on Sensory and Fluorescence data This notebook illustrates how to use the **hoggorm** package to carry out partial least squares regression (PLSR) on multivariate data. Furthermore, we will learn how to visualise the results of the PLSR using the **hoggormPlot** package. --- ### Import packages and prepare data First import **hoggorm** for analysis of the data and **hoggormPlot** for plotting of the analysis results. We'll also import **pandas** such that we can read the data into a data frame. **numpy** is needed for checking dimensions of the data. ``` import hoggorm as ho import hoggormplot as hop import pandas as pd import numpy as np ``` Next, load the data that we are going to analyse using **hoggorm**. After the data has been loaded into the pandas data frames, we'll display it in the notebook. ``` # Load fluorescence data X_df = pd.read_csv('cheese_fluorescence.txt', index_col=0, sep='\t') X_df # Load sensory data Y_df = pd.read_csv('cheese_sensory.txt', index_col=0, sep='\t') Y_df ``` The ``nipalsPLS2`` class in hoggorm accepts only **numpy** arrays with numerical values and not pandas data frames. Therefore, the pandas data frames holding the imported data need to be "taken apart" into three parts: * two numpy arrays holding the numeric values * two Python lists holding the variable (column) names * two Python lists holding the object (row) names. The numpy arrays with values will be used as input for the ``nipalsPLS2`` class for analysis. The Python lists holding the variable and row names will be used later in the plotting function from the **hoggormPlot** package when visualising the results of the analysis. Below is the code needed to access the data values, variable names and object names.
``` # Get the values from the data frame X = X_df.values Y = Y_df.values # Get the variable or column names X_varNames = list(X_df.columns) Y_varNames = list(Y_df.columns) # Get the object or row names X_objNames = list(X_df.index) Y_objNames = list(Y_df.index) ``` --- ### Apply PLSR to our data Now, let's run PLSR on the data using the ``nipalsPLS2`` class. The documentation provides a [description of the input parameters](https://hoggorm.readthedocs.io/en/latest/plsr.html). Using input parameters ``arrX`` and ``arrY`` we define which numpy arrays we would like to analyse. ``arrY`` is what typically is considered to be the response matrix, while the measurements are typically defined as ``arrX``. By setting input parameters ``Xstand=False`` and ``Ystand=False`` we make sure that the variables are only mean centered, not scaled to unit variance (assuming this is what you want). This is the default setting and actually doesn't need to be expressed explicitly. Setting parameter ``cvType=["loo"]`` we make sure that we compute the PLS2 model using full cross validation. ``"loo"`` means "Leave One Out". By setting parameter ``numComp=4`` we ask for four components to be computed. ``` model = ho.nipalsPLS2(arrX=X, Xstand=False, arrY=Y, Ystand=False, cvType=["loo"], numComp=4) ``` That's it, the PLS2 model has been computed. Now we would like to inspect the results by visualising them. We can do this using plotting functions of the separate [**hoggormPlot** package](https://hoggormplot.readthedocs.io/en/latest/). If we wish to plot the results for component 1 and component 2, we can do this by setting the input argument ``comp=[1, 2]``. The input argument ``plots=[1, 2, 3, 4, 6]`` lets the user define which plots are to be plotted. If this list for example contains value ``1``, the function will generate the scores plot for the model. If the list contains value ``2``, then the loadings plot will be plotted.
Value ``3`` stands for the correlation loadings plot, value ``4`` for the bi-plot and value ``6`` for the explained variance plot. The hoggormPlot documentation provides a [description of input parameters](https://hoggormplot.readthedocs.io/en/latest/mainPlot.html). ``` hop.plot(model, comp=[1, 2], plots=[1, 2, 3, 4, 6], objNames=X_objNames, XvarNames=X_varNames, YvarNames=Y_varNames) ``` Plots can also be called separately. ``` # Plot cumulative explained variance (both calibrated and validated) using a specific function for that. hop.explainedVariance(model) # Plot cumulative validated explained variance for each variable in Y hop.explainedVariance(model, individual = True) # Plot cumulative validated explained variance in X. hop.explainedVariance(model, which=['X']) hop.scores(model) hop.correlationLoadings(model) # Plot X loadings in line plot hop.loadings(model, weights=True, line=True) # Plot regression coefficients hop.coefficients(model, comp=[3]) ``` --- ### Accessing numerical results Now that we have visualised the PLSR results, we may also want to access the numerical results. Below are some examples. For a complete list of accessible results, please see this part of the documentation. ``` # Get X scores and store in numpy array X_scores = model.X_scores() # Get scores and store in pandas dataframe with row and column names X_scores_df = pd.DataFrame(model.X_scores()) X_scores_df.index = X_objNames X_scores_df.columns = ['comp {0}'.format(x+1) for x in range(model.X_scores().shape[1])] X_scores_df help(ho.nipalsPLS2.X_scores) # Dimension of the X_scores np.shape(model.X_scores()) ``` We see that the numpy array holds the scores for all objects for the four components requested when computing the PLS2 model.
``` # Get X loadings and store in numpy array X_loadings = model.X_loadings() # Get X loadings and store in pandas dataframe with row and column names X_loadings_df = pd.DataFrame(model.X_loadings()) X_loadings_df.index = X_varNames X_loadings_df.columns = ['comp {0}'.format(x+1) for x in range(model.X_loadings().shape[1])] X_loadings_df help(ho.nipalsPLS2.X_loadings) np.shape(model.X_loadings()) ``` Here we see that the array holds the loadings for the 10 variables in the data across four components. ``` # Get Y loadings and store in numpy array Y_loadings = model.Y_loadings() # Get Y loadings and store in pandas dataframe with row and column names Y_loadings_df = pd.DataFrame(model.Y_loadings()) Y_loadings_df.index = Y_varNames Y_loadings_df.columns = ['comp {0}'.format(x+1) for x in range(model.Y_loadings().shape[1])] Y_loadings_df # Get X correlation loadings and store in numpy array X_corrloadings = model.X_corrLoadings() # Get X correlation loadings and store in pandas dataframe with row and column names X_corrloadings_df = pd.DataFrame(model.X_corrLoadings()) X_corrloadings_df.index = X_varNames X_corrloadings_df.columns = ['comp {0}'.format(x+1) for x in range(model.X_corrLoadings().shape[1])] X_corrloadings_df help(ho.nipalsPLS2.X_corrLoadings) # Get Y correlation loadings and store in numpy array Y_corrloadings = model.Y_corrLoadings() # Get Y correlation loadings and store in pandas dataframe with row and column names Y_corrloadings_df = pd.DataFrame(model.Y_corrLoadings()) Y_corrloadings_df.index = Y_varNames Y_corrloadings_df.columns = ['comp {0}'.format(x+1) for x in range(model.Y_corrLoadings().shape[1])] Y_corrloadings_df help(ho.nipalsPLS2.Y_corrLoadings) # Get calibrated explained variance of each component in X X_calExplVar = model.X_calExplVar() # Get calibrated explained variance in X and store in pandas dataframe with row and column names X_calExplVar_df = pd.DataFrame(model.X_calExplVar()) X_calExplVar_df.columns = ['calibrated explained variance in X']
X_calExplVar_df.index = ['comp {0}'.format(x+1) for x in range(model.X_loadings().shape[1])] X_calExplVar_df help(ho.nipalsPLS2.X_calExplVar) # Get calibrated explained variance of each component in Y Y_calExplVar = model.Y_calExplVar() # Get calibrated explained variance in Y and store in pandas dataframe with row and column names Y_calExplVar_df = pd.DataFrame(model.Y_calExplVar()) Y_calExplVar_df.columns = ['calibrated explained variance in Y'] Y_calExplVar_df.index = ['comp {0}'.format(x+1) for x in range(model.Y_loadings().shape[1])] Y_calExplVar_df help(ho.nipalsPLS2.Y_calExplVar) # Get cumulative calibrated explained variance in X X_cumCalExplVar = model.X_cumCalExplVar() # Get cumulative calibrated explained variance in X and store in pandas dataframe with row and column names X_cumCalExplVar_df = pd.DataFrame(model.X_cumCalExplVar()) X_cumCalExplVar_df.columns = ['cumulative calibrated explained variance in X'] X_cumCalExplVar_df.index = ['comp {0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)] X_cumCalExplVar_df help(ho.nipalsPLS2.X_cumCalExplVar) # Get cumulative calibrated explained variance in Y Y_cumCalExplVar = model.Y_cumCalExplVar() # Get cumulative calibrated explained variance in Y and store in pandas dataframe with row and column names Y_cumCalExplVar_df = pd.DataFrame(model.Y_cumCalExplVar()) Y_cumCalExplVar_df.columns = ['cumulative calibrated explained variance in Y'] Y_cumCalExplVar_df.index = ['comp {0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)] Y_cumCalExplVar_df help(ho.nipalsPLS2.Y_cumCalExplVar) # Get cumulative calibrated explained variance for each variable in X X_cumCalExplVar_ind = model.X_cumCalExplVar_indVar() # Get cumulative calibrated explained variance for each variable in X and store in pandas dataframe with row and column names X_cumCalExplVar_ind_df = pd.DataFrame(model.X_cumCalExplVar_indVar()) X_cumCalExplVar_ind_df.columns = X_varNames X_cumCalExplVar_ind_df.index = ['comp {0}'.format(x) for 
x in range(model.X_loadings().shape[1] + 1)] X_cumCalExplVar_ind_df help(ho.nipalsPLS2.X_cumCalExplVar_indVar) # Get cumulative calibrated explained variance for each variable in Y Y_cumCalExplVar_ind = model.Y_cumCalExplVar_indVar() # Get cumulative calibrated explained variance for each variable in Y and store in pandas dataframe with row and column names Y_cumCalExplVar_ind_df = pd.DataFrame(model.Y_cumCalExplVar_indVar()) Y_cumCalExplVar_ind_df.columns = Y_varNames Y_cumCalExplVar_ind_df.index = ['comp {0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)] Y_cumCalExplVar_ind_df help(ho.nipalsPLS2.Y_cumCalExplVar_indVar) # Get calibrated predicted Y for a given number of components # Predicted Y from calibration using 1 component Y_from_1_component = model.Y_predCal()[1] # Predicted Y from calibration using 1 component stored in pandas data frame with row and column names Y_from_1_component_df = pd.DataFrame(model.Y_predCal()[1]) Y_from_1_component_df.index = Y_objNames Y_from_1_component_df.columns = Y_varNames Y_from_1_component_df # Get calibrated predicted Y for a given number of components # Predicted Y from calibration using 4 components Y_from_4_component = model.Y_predCal()[4] # Predicted Y from calibration using 4 components stored in pandas data frame with row and column names Y_from_4_component_df = pd.DataFrame(model.Y_predCal()[4]) Y_from_4_component_df.index = Y_objNames Y_from_4_component_df.columns = Y_varNames Y_from_4_component_df help(ho.nipalsPLS2.X_predCal) # Get validated explained variance of each component in X X_valExplVar = model.X_valExplVar() # Get validated explained variance in X and store in pandas dataframe with row and column names X_valExplVar_df = pd.DataFrame(model.X_valExplVar()) X_valExplVar_df.columns = ['validated explained variance in X'] X_valExplVar_df.index = ['comp {0}'.format(x+1) for x in range(model.X_loadings().shape[1])] X_valExplVar_df help(ho.nipalsPLS2.X_valExplVar) # Get validated explained variance
of each component in Y Y_valExplVar = model.Y_valExplVar() # Get validated explained variance in Y and store in pandas dataframe with row and column names Y_valExplVar_df = pd.DataFrame(model.Y_valExplVar()) Y_valExplVar_df.columns = ['validated explained variance in Y'] Y_valExplVar_df.index = ['comp {0}'.format(x+1) for x in range(model.Y_loadings().shape[1])] Y_valExplVar_df help(ho.nipalsPLS2.Y_valExplVar) # Get cumulative validated explained variance in X X_cumValExplVar = model.X_cumValExplVar() # Get cumulative validated explained variance in X and store in pandas dataframe with row and column names X_cumValExplVar_df = pd.DataFrame(model.X_cumValExplVar()) X_cumValExplVar_df.columns = ['cumulative validated explained variance in X'] X_cumValExplVar_df.index = ['comp {0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)] X_cumValExplVar_df help(ho.nipalsPLS2.X_cumValExplVar) # Get cumulative validated explained variance in Y Y_cumValExplVar = model.Y_cumValExplVar() # Get cumulative validated explained variance in Y and store in pandas dataframe with row and column names Y_cumValExplVar_df = pd.DataFrame(model.Y_cumValExplVar()) Y_cumValExplVar_df.columns = ['cumulative validated explained variance in Y'] Y_cumValExplVar_df.index = ['comp {0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)] Y_cumValExplVar_df help(ho.nipalsPLS2.Y_cumValExplVar) # Get cumulative validated explained variance for each variable in Y Y_cumValExplVar_ind = model.Y_cumValExplVar_indVar() # Get cumulative validated explained variance for each variable in Y and store in pandas dataframe with row and column names Y_cumValExplVar_ind_df = pd.DataFrame(model.Y_cumValExplVar_indVar()) Y_cumValExplVar_ind_df.columns = Y_varNames Y_cumValExplVar_ind_df.index = ['comp {0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)] Y_cumValExplVar_ind_df help(ho.nipalsPLS2.Y_cumValExplVar_indVar) # Get validated predicted Y for a given number of components # Predicted Y
from validation using 1 component Y_from_1_component_val = model.Y_predVal()[1] # Predicted Y from validation using 1 component stored in pandas data frame with row and column names Y_from_1_component_val_df = pd.DataFrame(model.Y_predVal()[1]) Y_from_1_component_val_df.index = Y_objNames Y_from_1_component_val_df.columns = Y_varNames Y_from_1_component_val_df # Get validated predicted Y for a given number of components # Predicted Y from validation using 3 components Y_from_3_component_val = model.Y_predVal()[3] # Predicted Y from validation using 3 components stored in pandas data frame with row and column names Y_from_3_component_val_df = pd.DataFrame(model.Y_predVal()[3]) Y_from_3_component_val_df.index = Y_objNames Y_from_3_component_val_df.columns = Y_varNames Y_from_3_component_val_df help(ho.nipalsPLS2.Y_predVal) # Get predicted scores for new measurements (objects) of X # First pretend that we acquired new X data by using part of the existing data and overlaying some noise import numpy.random as npr new_X = X[0:4, :] + npr.rand(4, np.shape(X)[1]) np.shape(X) # Now insert the new data into the existing model and compute scores for two components (numComp=2) pred_X_scores = model.X_scores_predict(new_X, numComp=2) # Same as above, but results stored in a pandas dataframe with row names and column names pred_X_scores_df = pd.DataFrame(model.X_scores_predict(new_X, numComp=2)) pred_X_scores_df.columns = ['comp {0}'.format(x+1) for x in range(2)] pred_X_scores_df.index = ['new object {0}'.format(x+1) for x in range(np.shape(new_X)[0])] pred_X_scores_df help(ho.nipalsPLS2.X_scores_predict) # Predict Y from new X data pred_Y = model.Y_predict(new_X, numComp=2) # Predict Y from new X data and store results in a pandas dataframe with row names and column names pred_Y_df = pd.DataFrame(model.Y_predict(new_X, numComp=2)) pred_Y_df.columns = Y_varNames pred_Y_df.index = ['new object {0}'.format(x+1) for x in range(np.shape(new_X)[0])] pred_Y_df ```
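As a side note on the preprocessing options used above: with ``Xstand=False``/``Ystand=False`` the data are only mean centered, while ``True`` would also scale each variable to unit variance. A minimal numpy sketch of the difference (the toy matrix is made up for illustration; hoggorm's internals may differ in detail):

```python
import numpy as np

# Toy data matrix: 4 objects x 3 variables (made up for illustration)
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0],
              [4.0, 8.0, 12.0]])

# Mean centering only (the Xstand=False default): subtract each column mean
X_centered = X - X.mean(axis=0)

# Standardisation (Xstand=True): additionally divide by each column's std
X_standardised = X_centered / X.std(axis=0, ddof=1)

print(X_centered.mean(axis=0))             # column means are now (numerically) zero
print(X_standardised.std(axis=0, ddof=1))  # column stds are now one
```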
``` import numpy as np import pandas import urllib2 from sklearn.metrics.cluster import adjusted_rand_score import seaborn as sns import matplotlib.pyplot as plt from scipy import stats from functools import reduce chrs_length = [249250621,243199373,198022430,191154276,180915260,171115067,159138663,146364022,141213431,135534747,135006516,133851895,115169878,107349540,102531392,90354753,81195210,78077248,59128983,63025520,48129895,51304566] res = 10000 bedf = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/data/E116-Ctcf.fc.signal.bigwig.10kb.bed'),sep='\t',header=None) tabf = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/data/E116-Ctcf.fc.signal.bigwig.10kb.tab'),sep='\t',header=None) ###Define a function to caculate the ChIP-seq signal in each bin of corresponding Hi-C matrix###### def computeMatrix(bedfile,boundarylist,chrn,winsize,res,chrs_l,tabfile): blist = boundarylist[(boundarylist>winsize)&(boundarylist<(chrs_l-winsize*res)/res)].astype(int) mm = np.zeros((len(blist),2*winsize+1)) chrinfo = tabfile.loc[bedfile[0]==chrn] for i in range(0,len(blist)): mm[i,:]=chrinfo.iloc[blist[i]-winsize:blist[i]+winsize+1,4].values return mm def compute_jaccard_index(set_1, set_2, offset): if offset == 0: n = len(np.intersect1d(set_1,set_2)) else: set_1_offset=np.copy(set_1) for i in range(0,offset): set_1_offset = np.union1d(np.union1d(set_1_offset,set_1_offset - 1),set_1_offset + 1) n = len(np.intersect1d(set_1_offset,set_2)) return n / float(len(np.union1d(set_1,set_2))) def compute_intersect(set_1, set_2, offset): nlist = np.zeros(offset+4) for i in range(0,offset+1): set_1_offset=np.copy(set_1) set_1_offset = np.union1d(set_1_offset - i,set_1_offset + i) nlist[i] = len(np.intersect1d(set_1_offset,set_2)) nlist[i+1] = len(set_1) nlist[i+2] = len(set_2) nlist[i+3] = len(np.union1d(set_1,set_2)) return nlist def TADtoCluster (tads, chrbinlen, maxdist): tmat = np.zeros((chrbinlen,chrbinlen)) ftads = 
tads[(tads[:,1]-tads[:,0]).argsort()[::-1],:].astype(int) a = [] for i in range(0,ftads.shape[0]): tmat[ftads[i,0]:ftads[i,1],ftads[i,0]:ftads[i,1]] = i for offset in range(0,min(maxdist,chrbinlen-1)): ta= [row[rown+offset] for rown,row in enumerate(tmat) if rown+offset < len(row)] a+=ta return np.asarray(a) def boundarylevel(tad): leftb,leftl = np.unique(tad[:,0],return_counts=True) rightb, rightl = np.unique(tad[:,1],return_counts=True) allb = np.copy(leftb) alll = np.copy(leftl) for i in range(0,len(rightb)): ind = np.where(leftb==rightb[i])[0] if len(ind) > 0: if rightl[i]>leftl[ind[0]]: alll[ind[0]]=rightl[i] else: allb=np.append(allb,rightb[i]) alll=np.append(alll,rightl[i]) return (allb,alll) jar0 = [] jar1 = [] jar2 = [] jarrep = [] rand = [] randrep = [] Arrow_jar = [] Arrow_rand = [] repint = np.zeros(6) rnint = np.zeros(6) cOnTAD_rawball = np.empty((0,21)) cOnTAD_normball = np.empty((0,21)) l1b = np.array([0,0,0]) l2b = np.array([0,0,0]) l3b = np.array([0,0,0]) l4b = np.array([0,0,0]) l5b = np.array([0,0,0]) for chrnum in range(1,23): OnTAD_raw = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/output/OnTAD/Gm12878/10kb/OnTAD_raw_pen0.1_max200_hsz5_chr'+str(chrnum)+'.tad'),sep='\t',header=None) OnTAD_rawa = OnTAD_raw.loc[(OnTAD_raw[2]>0),:].values[:,0:2]-1 OnTAD_rawb = np.unique(OnTAD_rawa.flatten()) #rawt = TADtoCluster(OnTAD_rawa, chrs_length[chrnum-1]/res, 200) OnTAD_norm = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/output/OnTAD/Gm12878/10kb/OnTAD_KRnorm_pen0.1_max200_hsz5_chr'+str(chrnum)+'.tad'),sep='\t',header=None) OnTAD_norma = OnTAD_norm.loc[(OnTAD_norm[2]>0),:].values[:,0:2]-1 OnTAD_normb = np.unique(OnTAD_norma.flatten()) #normt = TADtoCluster(OnTAD_norma, chrs_length[chrnum-1]/res, 200) OnTAD_rawallb, OnTAD_rawalll = boundarylevel(OnTAD_rawa) OnTAD_rawo1b = OnTAD_rawallb[OnTAD_rawalll==1] OnTAD_rawo2b = OnTAD_rawallb[OnTAD_rawalll==2] OnTAD_rawo3b = OnTAD_rawallb[OnTAD_rawalll==3] OnTAD_rawo4b = 
OnTAD_rawallb[OnTAD_rawalll==4] OnTAD_rawo5b = OnTAD_rawallb[OnTAD_rawalll>=5] l1b += np.array([len(OnTAD_rawo1b), len(np.intersect1d(OnTAD_rawo1b, OnTAD_normb)), len(np.intersect1d(reduce(np.union1d, (OnTAD_rawo1b, OnTAD_rawo1b-1, OnTAD_rawo1b+1 )), OnTAD_normb))]) l2b += np.array([len(OnTAD_rawo2b), len(np.intersect1d(OnTAD_rawo2b, OnTAD_normb)), len(np.intersect1d(reduce(np.union1d, (OnTAD_rawo2b, OnTAD_rawo2b-1, OnTAD_rawo2b+1 )), OnTAD_normb))]) l3b += np.array([len(OnTAD_rawo3b), len(np.intersect1d(OnTAD_rawo3b, OnTAD_normb)), len(np.intersect1d(reduce(np.union1d, (OnTAD_rawo3b, OnTAD_rawo3b-1, OnTAD_rawo3b+1 )), OnTAD_normb))]) l4b += np.array([len(OnTAD_rawo4b), len(np.intersect1d(OnTAD_rawo4b, OnTAD_normb)), len(np.intersect1d(reduce(np.union1d, (OnTAD_rawo4b, OnTAD_rawo4b-1, OnTAD_rawo4b+1 )), OnTAD_normb))]) l5b += np.array([len(OnTAD_rawo5b), len(np.intersect1d(OnTAD_rawo5b, OnTAD_normb)), len(np.intersect1d(reduce(np.union1d, (OnTAD_rawo5b, OnTAD_rawo5b-1, OnTAD_rawo5b+1 )), OnTAD_normb))]) cOnTAD_rawball = np.append(cOnTAD_rawball,computeMatrix(bedf,OnTAD_rawb,'chr'+str(chrnum),10,10000,chrs_length[chrnum-1],tabf), axis=0) cOnTAD_normball = np.append(cOnTAD_normball,computeMatrix(bedf,OnTAD_normb,'chr'+str(chrnum),10,10000,chrs_length[chrnum-1],tabf), axis=0) ''' OnTAD_rawrep1 = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/output/OnTAD/Gm12878_primary/10kb/OnTADtopdom_pen0.1_max200_chr'+str(chrnum)+'.tad'),sep='\t',header=None) OnTAD_rawrep1a = OnTAD_rawrep1.loc[(OnTAD_rawrep1[2]>0),:].values[:,0:2]-1 OnTAD_rawrep1b = np.unique(OnTAD_rawrep1a.flatten()) rep1t = TADtoCluster(OnTAD_rawrep1a, chrs_length[chrnum-1]/res, 200) OnTAD_rawrep2 = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/output/OnTAD/Gm12878_replicate/10kb/OnTADtopdom_pen0.1_max200_chr'+str(chrnum)+'.tad'),sep='\t',header=None) OnTAD_rawrep2a = OnTAD_rawrep2.loc[(OnTAD_rawrep2[2]>0),:].values[:,0:2]-1 OnTAD_rawrep2b = 
np.unique(OnTAD_rawrep2a.flatten()) rep2t = TADtoCluster(OnTAD_rawrep2a, chrs_length[chrnum-1]/res, 200) Arrowheadraw = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/output/juicer/Arrowhead.Gm12878.combined.10kb.m2000.raw.chr'+str(chrnum)),sep='\t',header=None) Arrowraw = Arrowheadraw.loc[:,1:2].values/res Arrowrawb=np.unique(Arrowraw.flatten()) Arrowrawt = TADtoCluster(Arrowraw, chrs_length[chrnum-1]/res, 200) Arrowheadnorm = pandas.read_table(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/output/juicer/Arrowhead.Gm12878.10kb.KR.chr'+str(chrnum)),sep='\t',header=None) Arrownorm = Arrowheadnorm.loc[:,1:2].values/res Arrownormb=np.unique(Arrownorm.flatten()) Arrownormt = TADtoCluster(Arrownorm, chrs_length[chrnum-1]/res, 200) Arrow_jar.append(compute_jaccard_index(Arrowrawb,Arrownormb,0)) Arrow_rand.append(adjusted_rand_score(Arrowrawt, Arrownormt)) jar0.append(compute_jaccard_index(OnTAD_rawb,OnTAD_normb,0)) jar1.append(compute_jaccard_index(OnTAD_rawb,OnTAD_normb,1)) jar2.append(compute_jaccard_index(OnTAD_rawb,OnTAD_normb,2)) jarrep.append(compute_jaccard_index(OnTAD_rawrep1b,OnTAD_rawrep2b,0)) repint += compute_intersect(OnTAD_rawrep1b,OnTAD_rawrep2b,2) rnint += compute_intersect(OnTAD_rawb,OnTAD_normb,2) rand.append(adjusted_rand_score(rawt, normt)) randrep.append(adjusted_rand_score(rep1t, rep2t)) cOnTAD_rawball = np.append(cOnTAD_rawball,computeMatrix(bedf,OnTAD_rawb,'chr'+str(chrnum),10,10000,chrs_length[chrnum-1],tabf), axis=0) cOnTAD_normball = np.append(cOnTAD_normball,computeMatrix(bedf,OnTAD_normb,'chr'+str(chrnum),10,10000,chrs_length[chrnum-1],tabf), axis=0) print '####Done with chr'+str(chrnum)+'####' ''' l1b, l2b, l3b, l4b, l5b plt.figure(2,figsize=(5,8)) # set width of bar barWidth = 0.25 # set height of bar bars1 = [l1b[1]/float(l1b[0]), l2b[1]/float(l2b[0]), l3b[1]/float(l3b[0]), l4b[1]/float(l4b[0]), l5b[1]/float(l5b[0])] bars2 = [l1b[2]/float(l1b[0]), l2b[2]/float(l2b[0]), l3b[2]/float(l3b[0]), l4b[2]/float(l4b[0]), 
l5b[2]/float(l5b[0])] # Set position of bar on X axis r1 = np.arange(len(bars1)) r2 = [x + barWidth for x in r1] # Make the plot plt.bar(r1, bars1, color='#7f6d5f', width=barWidth, edgecolor='white', label='offset0') plt.bar(r2, bars2, color='#557f2d', width=barWidth, edgecolor='white', label='offset1') # Add xticks on the middle of the group bars plt.ylabel('Percent recovered in normalized data', {'color': 'k', 'fontsize': 15}) plt.xticks([r + barWidth for r in range(len(bars1))], ['Level1', 'Level2', 'Level3', 'Level4', 'Level>=5'], size=12) plt.yticks(size=14) # Create legend & Show graphic plt.legend(prop={'size': 15}) plt.grid(color='grey', linestyle='-', linewidth=0.25, alpha=0.5) plt.savefig("/Users/linan/Desktop/raw_norm_boverlap.png", dpi=150, transparent=True, bbox_inches='tight') plt.show() out = np.array([np.mean(cOnTAD_rawball, axis=0),np.mean(cOnTAD_normball, axis=0)]) plt.figure(1) plt.plot(out[0,:],c='k',label='Raw',linewidth=2) plt.plot(out[1,:],c='m',label='Normalized',linewidth=2) plt.legend(loc="upper right",prop={'size': 12}) plt.ylabel('CTCF Signal', {'color': 'k', 'fontsize': 20}) plt.xlabel('Relative Position', {'color': 'k', 'fontsize': 20}) plt.yticks(color='k',size=14) plt.xticks((0,5,10,15,20),('-100kb','-50kb','0','50kb','100kb'),color='k',size=15) plt.grid(color='grey', linestyle='-', linewidth=0.25, alpha=0.5) plt.savefig("/Users/linan/Desktop/raw_norm_ctcf.png", dpi=150, transparent=True, bbox_inches='tight') plt.show() tt = np.loadtxt(urllib2.urlopen('http://bx.psu.edu/~lua137/OnTAD/rebuttal/testadjr2allparameter_rawvsnorm_hsz5.txt')) plt.figure(6) fig,ax = plt.subplots(1) ax.plot(tt[0,0:150],c='k',label='Raw',linewidth=2) ax.plot(tt[1,0:150],c='m',label='Normalized',linewidth=2) plt.legend(loc="upper right",prop={'size': 15} ) plt.ylabel('TAD$adjR^2$', {'color': 'k', 'fontsize': 20}) plt.xlabel('Genomic Distance (per bin)', {'color': 'k', 'fontsize': 20}) plt.yticks(color='k',size=14) plt.xticks(color='k',size=14) 
plt.grid(color='grey', linestyle='-', linewidth=0.25, alpha=0.5) plt.savefig("/Users/linan/Desktop/raw_norm_tadrsquared.png", dpi=150, transparent=True, bbox_inches='tight') plt.show() jartable = pandas.DataFrame({'between reps':jarrep, 'Arrowhead':Arrow_jar, 'offset0':jar0, 'offset1':jar1, 'offset2':jar2}) repint, rnint a = np.array([1,2,4]) b = np.array([1,2,3]) compute_jaccard_index(a,b,0) plt.figure(6) sns.boxplot(data=jartable, width=0.5, palette="colorblind") sns.stripplot(data=jartable,jitter=True, marker='o', alpha=0.5, color='black') plt.yticks(color='k',size=15) plt.xticks(color='k',size=15) plt.show() randtable = pandas.DataFrame({'between reps':randrep, 'Raw_vs_Normalized':rand}) sns.boxplot(data=randtable, width=0.5, palette="colorblind") sns.stripplot(data=randtable,jitter=True, marker='o', alpha=0.5, color='black') x1, x2 = 0, 1 # columns y, h, col = randtable['between reps'].max() + 0.01, 0.01, 'k' plt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) plt.text((x1+x2)*.5, y+1.5*h, r"p = %1.3f"% (stats.ttest_ind(randrep, rand)[1]), ha='center', va='bottom', color=col) plt.yticks(color='k',size=15) plt.xticks(color='k',size=15) plt.show() (0,0) + (3,4) ```
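The offset-tolerant Jaccard index that drives the boundary comparisons above is worth seeing in isolation. Below is a self-contained Python 3 restatement of the notebook's `compute_jaccard_index` with toy boundary sets (the bin positions are made up): with `offset > 0`, every boundary in the first set is widened by ±`offset` bins before intersecting, so near-misses still count as shared boundaries.

```python
import numpy as np

def compute_jaccard_index(set_1, set_2, offset):
    # offset == 0: plain Jaccard index of the two boundary sets
    if offset == 0:
        n = len(np.intersect1d(set_1, set_2))
    else:
        # widen each boundary in set_1 by +/- offset bins, then intersect
        set_1_offset = np.copy(set_1)
        for _ in range(offset):
            set_1_offset = np.union1d(
                np.union1d(set_1_offset, set_1_offset - 1), set_1_offset + 1)
        n = len(np.intersect1d(set_1_offset, set_2))
    # the denominator always uses the original (unwidened) sets
    return n / float(len(np.union1d(set_1, set_2)))

a = np.array([10, 20, 30])
b = np.array([10, 21, 50])
print(compute_jaccard_index(a, b, 0))  # only 10 matches exactly: 1/5 = 0.2
print(compute_jaccard_index(a, b, 1))  # 20 also matches 21 within 1 bin: 2/5 = 0.4
```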
# Name
Data processing by creating a cluster in Cloud Dataproc

# Label
Cloud Dataproc, cluster, GCP, Cloud Storage, KubeFlow, Pipeline

# Summary
A Kubeflow Pipeline component to create a cluster in Cloud Dataproc.

# Details
## Intended use
Use this component at the start of a Kubeflow Pipeline to create a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline.

## Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to create the cluster in. | No | GCPRegion | | |
| name | The name of the cluster. Cluster names within a project must be unique. You can reuse the names of deleted clusters. | Yes | String | | None |
| name_prefix | The prefix of the cluster name. | Yes | String | | None |
| initialization_actions | A list of Cloud Storage URIs identifying executables to execute on each node after the configuration is completed. By default, executables are run on the master and all the worker nodes. | Yes | List | | None |
| config_bucket | The Cloud Storage bucket to use to stage the job dependencies, the configuration files, and the job driver console’s output. | Yes | GCSPath | | None |
| image_version | The version of the software inside the cluster. | Yes | String | | None |
| cluster | The full [cluster configuration](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters#Cluster). | Yes | Dict | | None |
| wait_interval | The number of seconds to pause before polling the operation. | Yes | Integer | | 30 |

## Output
Name | Description | Type
:--- | :---------- | :---
cluster_name | The name of the cluster. | String

Note: You can recycle the cluster by using the [Dataproc delete cluster component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/dataproc/delete_cluster).

## Cautions & requirements
To use the component, you must:
* Set up the GCP project by following these [steps](https://cloud.google.com/dataproc/docs/guides/setup-project).
* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.
* Grant the following types of access to the Kubeflow user service account:
    * Read access to the Cloud Storage buckets which contain the initialization action files.
    * The role, `roles/dataproc.editor` on the project.

## Detailed description
This component creates a new Dataproc cluster by using the [Dataproc create cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/create).

Follow these steps to use the component in a pipeline:

1. Install the Kubeflow Pipeline SDK:

```
%%capture --no-stderr

!pip3 install kfp --upgrade
```

2. Load the component using the KFP SDK

```
import kfp.components as comp

dataproc_create_cluster_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0/components/gcp/dataproc/create_cluster/component.yaml')
help(dataproc_create_cluster_op)
```

### Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
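The `cluster` argument from the runtime-arguments table accepts a full Dataproc cluster configuration as a dict. A hypothetical fragment for illustration only — the field names follow the Dataproc REST API `Cluster` schema, and all values here are placeholders, not a tested configuration:

```python
# Hypothetical illustration of the `cluster` runtime argument: a dict
# shaped like the Dataproc REST API Cluster resource, JSON-encoded
# before being passed as a (string) pipeline parameter.
# All values below are placeholders.
import json

cluster_config = {
    "config": {
        "softwareConfig": {"imageVersion": "1.5"},
        "masterConfig": {"numInstances": 1, "machineTypeUri": "n1-standard-4"},
        "workerConfig": {"numInstances": 2, "machineTypeUri": "n1-standard-4"},
    }
}

# Pipeline parameters are strings, so the dict is serialised:
cluster_arg = json.dumps(cluster_config)
```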
#### Set sample parameters

```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'

# Optional Parameters
EXPERIMENT_NAME = 'Dataproc - Create Cluster'
```

#### Example pipeline that uses the component

```
import kfp.dsl as dsl
import json

@dsl.pipeline(
    name='Dataproc create cluster pipeline',
    description='Dataproc create cluster pipeline'
)
def dataproc_create_cluster_pipeline(
    project_id = PROJECT_ID,
    region = 'us-central1',
    name='',
    name_prefix='',
    initialization_actions='',
    config_bucket='',
    image_version='',
    cluster='',
    wait_interval='30'
):
    dataproc_create_cluster_op(
        project_id=project_id,
        region=region,
        name=name,
        name_prefix=name_prefix,
        initialization_actions=initialization_actions,
        config_bucket=config_bucket,
        image_version=image_version,
        cluster=cluster,
        wait_interval=wait_interval)
```

#### Compile the pipeline

```
pipeline_func = dataproc_create_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```

#### Submit the pipeline for execution

```
# Specify pipeline argument values
arguments = {}

# Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)

# Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```

## References
* [Kubernetes Engine for Kubeflow](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts)
* [Component Python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataproc/_create_cluster.py)
* [Component Docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataproc/create_cluster/sample.ipynb)
* [Dataproc create cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/create)

## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
# Bonus Material: Word count

The word count problem is the 'Hello world' equivalent of distributed programming. Word count is also the basic process by which text is converted into features for text mining and topic modeling. We show a variety of ways to solve the word count problem in Python to familiarize you with different coding approaches.

```
text = ''''Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

'Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!'

He took his vorpal sword in hand:
Long time the manxome foe he sought--
So rested he by the Tumtum tree,
And stood awhile in thought.

And as in uffish thought he stood,
The Jabberwock, with eyes of flame,
Came whiffling through the tulgey wood,
And burbled as it came!

One, two! One, two! And through and through
The vorpal blade went snicker-snack!
He left it dead, and with its head
He went galumphing back.

'And hast thou slain the Jabberwock?
Come to my arms, my beamish boy!
O frabjous day! Callooh! Callay!'
He chortled in his joy.

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.'''
```

### Convert to list of words

```
import string

table = dict.fromkeys(map(ord, string.punctuation))
words = text.translate(table).strip().lower().split()
words[:10]
```

### Slower version without translate

```
for char in string.punctuation:
    text = text.replace(char, '')
words2 = text.strip().lower().split()
words2[:10]
```

### Using a regular dictionary

```
c1 = {}
for word in words:
    c1[word] = c1.get(word, 0) + 1

sorted(c1.items(), key=lambda x: x[1], reverse=True)[:3]
```

### Using a default dictionary

```
from collections import defaultdict

c2 = defaultdict(int)
for word in words:
    c2[word] += 1

sorted(c2.items(), key=lambda x: x[1], reverse=True)[:3]
```

### Using a Counter

```
from collections import Counter

c3 = Counter(words)
c3.most_common(3)
```

### Using third party function

```
from toolz import frequencies

c4 = frequencies(words)
sorted(c4.items(), key=lambda x: x[1], reverse=True)[:3]
```

### Counting without dictionaries

```
from itertools import groupby

c5 = map(lambda x: (x[0], sum(1 for item in x[1])), groupby(sorted(words)))
sorted(c5, key=lambda x: x[1], reverse=True)[:3]
```

### Vectorized version

```
import numpy as np

values, counts = np.unique(words, return_counts=True)
c6 = dict(zip(values, counts))
sorted(c6.items(), key=lambda x: x[1], reverse=True)[:3]
```
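Since the introduction calls word count the 'Hello world' of distributed programming, here is one more variant: a minimal map/reduce-style sketch (an addition, not one of the original approaches). The word list is split into chunks, each chunk is counted independently (the "map" step), and the partial counts are merged (the "reduce" step). A framework such as Spark or Dask would run the map step in parallel; here the chunks are processed serially for clarity.

```python
# Map/reduce-style word count: count chunks independently, then merge.
from collections import Counter
from functools import reduce

def chunked(seq, n):
    """Yield n roughly equal-sized contiguous chunks of seq."""
    k, m = divmod(len(seq), n)
    for i in range(n):
        yield seq[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]

def map_count(chunk):
    # "map" step: a partial count over one chunk
    return Counter(chunk)

def reduce_counts(a, b):
    # "reduce" step: Counter addition merges partial counts
    return a + b

words = "the quick brown fox jumps over the lazy dog the end".split()
partials = [map_count(c) for c in chunked(words, 4)]
total = reduce(reduce_counts, partials, Counter())
print(total.most_common(1))  # [('the', 3)]
```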
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Double-check-we-are-using-Python-3.8" data-toc-modified-id="Double-check-we-are-using-Python-3.8-1">Double check we are using Python 3.8</a></span></li><li><span><a href="#What-is-Naive-Bayes?" data-toc-modified-id="What-is-Naive-Bayes?-2">What is Naive Bayes?</a></span></li><li><span><a href="#What-is-Gaussian-Naive-Bayes" data-toc-modified-id="What-is-Gaussian-Naive-Bayes-3">What is Gaussian Naive Bayes</a></span></li><li><span><a href="#Machine-learning-models-need-data" data-toc-modified-id="Machine-learning-models-need-data-4">Machine learning models need data</a></span></li><li><span><a href="#Build-Naive-Bayes-Classifier" data-toc-modified-id="Build-Naive-Bayes-Classifier-5">Build Naive Bayes Classifier</a></span><ul class="toc-item"><li><span><a href="#What-about-P(data)?" data-toc-modified-id="What-about-P(data)?-5.1">What about <code>P(data)</code>?</a></span></li></ul></li><li><span><a href="#Prediction" data-toc-modified-id="Prediction-6">Prediction</a></span></li><li><span><a href="#Sources" data-toc-modified-id="Sources-7">Sources</a></span></li></ul></div>

# Double check we are using Python 3.8

```
from platform import python_version

assert python_version().startswith('3.8')
```

# What is Naive Bayes?

One of the simplest machine learning models

<center><img src="https://chrisalbon.com/images/machine_learning_flashcards/Gaussian_Naive_Bayes_Classifier_print.png" width="75%"/></center>

# What is Gaussian Naive Bayes

Continuous features are assumed to be Gaussian (i.e., normally distributed). Each feature has a mean and variance calculated from the data, segmented by category (aka, class).

# Machine learning models need data

https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Sex_classification

# Build Naive Bayes Classifier

```
from statistics import NormalDist

# Historical data
height_male = NormalDist.from_samples([6, 5.92, 5.58, 5.92])
height_female = NormalDist.from_samples([5, 5.5, 5.42, 5.75])
weight_male = NormalDist.from_samples([180, 190, 170, 165])
weight_female = NormalDist.from_samples([100, 150, 130, 150])
foot_size_male = NormalDist.from_samples([12, 11, 12, 10])
foot_size_female = NormalDist.from_samples([6, 8, 7, 9])

# Starting with a 50% prior probability of being male or female
prior_male, prior_female = 0.5, 0.5

# New person
ht = 6.0  # height
wt = 130  # weight
fs = 8    # foot size

# We compute the posterior as the prior times the product of likelihoods
# for the feature measurements given the gender:
posterior_male = (prior_male * height_male.pdf(ht) *
                  weight_male.pdf(wt) * foot_size_male.pdf(fs))
posterior_female = (prior_female * height_female.pdf(ht) *
                    weight_female.pdf(wt) * foot_size_female.pdf(fs))
```

What about `P(data)`?
-------

The denominator is P(data), which is the prior probability of the data/features occurring: how likely is it to see a human (regardless of category) with this weight? The denominators are the same for all class labels, so we can cancel/drop them to simplify our computation. Ignoring the denominator is standard for a Naive Bayes classifier.

# Prediction

```
# The final prediction goes to the largest posterior.
# This is known as the maximum a posteriori or MAP:
prediction = 'male' if posterior_male > posterior_female else 'female'
print(f"Given the data, the new person is predicted to be: {prediction}")
```

# Sources

- https://docs.python.org/3/library/statistics.html#normaldist-examples-and-recipes
- https://chrisalbon.com/images/machine_learning_flashcards/Gaussian_Naive_Bayes_Classifier_print.png

<br>
<br>
<br>

----
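For completeness, the cancelled denominator can also be recovered by normalising: since the two classes are exhaustive, `P(data)` is simply the sum of the two unnormalised posteriors. A minimal sketch re-using the sample values from above (not part of the original notebook):

```python
# Normalising the unnormalised posteriors: dividing by their sum is
# exactly dividing by P(data), so the results are proper probabilities.
from statistics import NormalDist

height_male = NormalDist.from_samples([6, 5.92, 5.58, 5.92])
height_female = NormalDist.from_samples([5, 5.5, 5.42, 5.75])
weight_male = NormalDist.from_samples([180, 190, 170, 165])
weight_female = NormalDist.from_samples([100, 150, 130, 150])
foot_size_male = NormalDist.from_samples([12, 11, 12, 10])
foot_size_female = NormalDist.from_samples([6, 8, 7, 9])

ht, wt, fs = 6.0, 130, 8
posterior_male = 0.5 * height_male.pdf(ht) * weight_male.pdf(wt) * foot_size_male.pdf(fs)
posterior_female = 0.5 * height_female.pdf(ht) * weight_female.pdf(wt) * foot_size_female.pdf(fs)

evidence = posterior_male + posterior_female  # this is P(data)
p_male = posterior_male / evidence
p_female = posterior_female / evidence
print(f"P(male|data) = {p_male:.6f}, P(female|data) = {p_female:.6f}")
```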
```
import time
import pickle
from itertools import combinations

import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import Select
import pyderman
from tqdm import tqdm

from lazyme import zigzag, retry

options = Options()
options.headless = True  # Headless mode sometimes doesn't work for this site.

path = pyderman.install(browser=pyderman.chrome)
driver = webdriver.Chrome(path, options=options)

# Fetch the page.
driver.get('https://www.gov.sg/resources/translation')

# Select the "All" category.
category = Select(driver.find_element_by_name("content_0$DdlCategory"))
category.select_by_value('-1')  # All.

# Select the "languages".
from_lang = Select(driver.find_element_by_name("content_0$DdlFrom"))
to_lang = Select(driver.find_element_by_name("content_0$DdlTo"))
from_lang.select_by_value('1')  # English.
to_lang.select_by_value('2')  # Mandarin.

@retry(Exception, delay=1)
def find_last_page(driver):
    # Go to the last page.
    driver.find_element_by_id("content_0_RGridTranslation_ctl00_ctl03_ctl01_Last").click()
    # Find the page number of the last page.
    bsoup = BeautifulSoup(driver.page_source, 'lxml')
    last_page = int(bsoup.find("tr", attrs={"class": "rgPager"}).find_all('span')[-1].text)
    return last_page

# Click on the "search" (magnifying glass) button.
driver.find_element_by_name("content_0$BtnTranslateSearch").click()

# Find the last page.
last_page = find_last_page(driver)

# Go back to the first page.
driver.find_element_by_id("content_0_RGridTranslation_ctl00_ctl03_ctl01_BtnFirst").click()

assert last_page == 1045

@retry(Exception, delay=1)
def munge_page_for_translations(driver):
    # Reads the page source into beautiful soup.
    html = driver.page_source
    bsoup = BeautifulSoup(driver.page_source, 'lxml')
    # Munge and get the translations.
    translations = [div.text.strip() for div in bsoup.find('tbody').find_all('div') if div.text.strip()]
    # zigzag splits a list into two by alternative, even and odd items.
    # zip(*iterable) iterates through the zigzag list one pair at a time.
    return dict(zip(*zigzag(translations)))

munge_page_for_translations(driver)

terminology = {}
# Iterate through the pages and get the dictionary entries for each page.
for i in tqdm(range(last_page)):
    translations = munge_page_for_translations(driver)
    terminology.update(translations)
    # Moves to the next page.
    driver.find_element_by_id("content_0_RGridTranslation_ctl00_ctl03_ctl01_Next").click()
    driver.implicitly_wait(1.5)

# Convert the dictionary to a two-column dataframe.
df = pd.DataFrame(list(terminology.items()), columns=['english', 'mandarin'])
df.head()

# Save the dataframe to a tsv file.
df.to_csv('../datasets/gov-sg-terms-translations.tsv', sep='\t', index=False, quotechar='"')

# Example to re-read the saved tsv file.
pd.read_csv('../datasets/gov-sg-terms-translations.tsv', sep='\t', quotechar='"')
```
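The munging step relies on `zigzag` from lazyme, which (per the comment in the code) splits a list into two by alternating even- and odd-indexed items. A stdlib-only equivalent using slicing, to make that step concrete — the `flat` sample data here is illustrative, not scraped output:

```python
# Stdlib equivalent of the lazyme `zigzag` helper used above:
# even-indexed items go into one list, odd-indexed into the other.
def zigzag(items):
    return items[0::2], items[1::2]

# Illustrative flat list of alternating source/target strings,
# mimicking what the scraper extracts from each table page.
flat = ['apple', '苹果', 'bank', '银行']
pairs = dict(zip(*zigzag(flat)))
print(pairs)  # {'apple': '苹果', 'bank': '银行'}
```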
```
import numpy as np
from xgboost import XGBClassifier
from bayes_opt import BayesianOptimization
from sklearn.model_selection import train_test_split
import xgboost as xgb
from pathlib import Path
import os

def xgb_classifier(n_estimators, max_depth, reg_alpha, reg_lambda, min_child_weight, num_boost_round, gamma):
    params = {"booster": 'gbtree',
              "objective": 'multi:softmax',
              # NB: multi-class AUC support depends on the xgboost version;
              # 'mlogloss' is a common alternative eval metric here.
              "eval_metric": "auc",
              # "is_unbalance": True,
              "n_estimators": int(n_estimators),
              "max_depth": int(max_depth),
              "reg_alpha": reg_alpha,
              "reg_lambda": reg_lambda,
              "gamma": gamma,
              "num_class": 3,
              "min_child_weight": int(min_child_weight),
              "learning_rate": 0.01,
              "subsample_freq": 5,
              "verbosity": 0,
              "num_boost_round": int(num_boost_round)}
    cv_result = xgb.cv(params, train_data, 1000, early_stopping_rounds=100, stratified=True, nfold=3)
    return cv_result['test-auc-mean'].iloc[-1]

target_address = os.path.join(Path(os.getcwd()).parent, 'Window_capture\\Data\\command_keys.npy')
screenshot_address = os.path.join(Path(os.getcwd()).parent, 'Window_capture\\Data\\screenshots.npy')

labels = np.load(target_address)
images = np.load(screenshot_address, allow_pickle=True)

labels = labels[1500:2000]
images = images[1500:2000, :]

print("Dimensions for Targets: ", np.unique(labels, return_counts=True), "Dimensions for Images", images.shape)

# X_train, X_test, y_train, y_test = make_dataset(10000, z=100)
X_train, X_test, y_train, y_test = train_test_split(images, labels, test_size=0.25)

train_data = xgb.DMatrix(X_train, y_train)

xgb_bo = BayesianOptimization(xgb_classifier, {"n_estimators": (10, 100),
                                               'max_depth': (5, 40),
                                               'reg_alpha': (0.0, 0.1),
                                               'reg_lambda': (0.0, 0.1),
                                               'min_child_weight': (1, 10),
                                               'num_boost_round': (100, 1000),
                                               "gamma": (0, 10)
                                               })
xgb_bo.maximize(n_iter=15, init_points=2)

# Extracting the best parameters
params = xgb_bo.max['params']
print(params)

# Converting the gamma, max_depth, n_estimators and num_boost_round values from float to int
params['gamma'] = int(params['gamma'])
params['max_depth'] = int(params['max_depth'])
params['n_estimators'] = int(params['n_estimators'])
params['num_boost_round'] = int(params['num_boost_round'])
print(params)

# #Initialize an XGBClassifier with the tuned parameters and fit the training data
# from xgboost import XGBClassifier
# classifier2 = XGBClassifier(**params).fit(text_tfidf, clean_data_train['author'])

# #predicting for training set
# train_p2 = classifier2.predict(text_tfidf)

# #Looking at the classification report
# print(classification_report(train_p2, clean_data_train['author']))
```
<a href="https://colab.research.google.com/github/dimi-fn/Various-Data-Science-Scripts/blob/main/Algorithms.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

Algorithm performance can be measured based on **runtime** (Big-O time complexity) and **space** complexity (Memory Footprint of the algorithm).

# Algorithm Performance Analysis: Runtime Measurement with Big-O Notation

* Execution time is not considered a good metric to measure algorithmic complexity (different hardware will require different execution times), therefore, the `Big-O notation` is used as a standard metric instead.
* Algorithm speed is not measured in seconds but in terms of growth
* The Big-O Notation indicates how an algorithm **scales** with regard to changes in the input dataset size

> **Big-O Notation**: Statistical measure used to describe the **complexity** of the algorithm.
>> **n**: the relationship between the input and the steps taken by the algorithm. In the examples below, `n` will be the number of items in the input array (i.e. n == input size)
>> **c**: positive constant

|Time - Algorithm | Big-O Notation - Complexity | Algorithm Type| Runtime Analysis Example: let n=10 | n = 20 |
| --- | --- | -------|---------------------------------------| ----|
| Constant | O(1) | | 1| 1|
| Logarithmic | O(log(n)) | Binary Search| log(10) = 1 |log(20) = 1.301 |
| Polylogarithmic | O((log(n))<sup>c</sup>) | | (log(10))<sup>2</sup> = 1 |(log(20))<sup>2</sup> = 1.693 |
| Linear | O(n) | Linear Search| 10 = 10| 20 = 20 |
| Log Linear (Superlinear) | O(nlog(n)) | Heap Sort, Merge Sort|10 * log(10) = 10 | 20 * log(20) = 26.0 |
| Quadratic | O(n<sup>2</sup>) | | | |
| Cubic | O(n<sup>3</sup>) | | | |
| Polynomial (Algebraic) |O(n<sup>c</sup>) | Strassen’s Matrix Multiplication, Bubble Sort, Selection Sort, Insertion Sort, Bucket Sort| 10<sup>2</sup> = 100| 20<sup>2</sup> = 400|
| Exponential | O(c<sup>n</sup>) | Tower of Hanoi| 2<sup>10</sup> = 1,024| 2<sup>20</sup> = 1,048,576 |
| Factorial | O(n!) | Determinant Expansion by Minors, Brute force Search algorithm for Traveling Salesman Problem | 10! = 3,628,800 |20! ≈ 2.43 × 10<sup>18</sup>|

## Constant complexity - O(1)

* complexity remains constant
* the steps required to complete the execution of an algorithm remain constant, irrespective of the number of inputs, i.e., the algorithm needs the same amount of time to execute independently of the input size

> e.g. below the algorithm 1) finds the square of the first element of the list, and 2) it prints it out.
It does that every time regardless of the number of items in the list -> complexity == constant

```
def constant_complexity(items):
    result = items[0] * items[0]
    print(result)

constant_complexity([10, 15, 20, 25, 30])

import matplotlib.pyplot as plt
import numpy as np

x = [10, 15, 20, 25, 30]
y = [2, 2, 2, 2, 2]

plt.figure(figsize=(14, 7))
plt.plot(x, y, 'b')
plt.xlabel('Inputs')
plt.ylabel('Steps')
plt.title('Constant Complexity - O(1)')
plt.show()
```

## Logarithmic - O(log(n))

* the number of operations increases by one each time the data is doubled
* binary search -> logarithmic time
* simply put: the algorithm compares the target value to the middle element of a **sorted** array, and then it goes up or down based on the item position to match

## Linear Complexity - O(n)

* the steps required to complete the execution of an algorithm increase or decrease linearly with the number of inputs
* the number of iterations the algorithm performs will be equal to the size of the input items array. Example:

```
def linear_complexity(items):
    for item in items:
        print(item)

linear_complexity([10, 15, 20, 25, 30])

import matplotlib.pyplot as plt
import numpy as np

x = [10, 15, 20, 25, 30]
y = [10, 15, 20, 25, 30]

plt.figure(figsize=(14, 7))
plt.plot(x, y, 'b')
plt.xlabel('Inputs')
plt.ylabel('Steps')
plt.title('Linear Complexity - O(n)')
plt.show()
```

## Quadratic Complexity O(n<sup>2</sup>)

* the steps required to execute an algorithm are a quadratic function of the number of items in the input
* total number of steps: n * n, e.g.
if the input has 10 elements then the algorithm will do 100 operations

> Below: a total of 25 operations since the number of items in the list is 5

```
def quadratic_complexity(items):
    count_operations = 0
    for item in items:
        for item2 in items:
            print(item, '---', item2)
            count_operations = count_operations + 1
    print("Number of operations was: {}".format(count_operations))

quadratic_complexity([10, 15, 20, 25, 30])
```

# Algorithm Performance Analysis: Space Complexity (Memory Footprint)

Time complexity alone is not enough to judge an algorithm's performance; we also need to take into consideration the memory allocated and used during the algorithm's execution. There is a trade-off between achieving the best runtime performance and optimal memory allocation, much like the trade-off between precision and recall in classification metrics. In other words, an algorithm might be quite fast, but it might also occupy a lot of memory.
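The logarithmic section above describes binary search in prose only; a minimal sketch (not from the original notebook) ties the two analyses together: each iteration halves the search interval, giving O(log(n)) comparisons, while the iterative form needs only O(1) extra space.

```python
# Binary search on a sorted list: O(log(n)) time, O(1) extra space.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid        # found: return the index
        elif sorted_items[mid] < target:
            low = mid + 1     # discard the lower half
        else:
            high = mid - 1    # discard the upper half
    return -1                 # not found

print(binary_search([10, 15, 20, 25, 30], 25))  # 3
```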
## Perform Analysis on Athletes This file reads the detailed athlete information and performs Linear Regression analysis on this data. The following areas are examined in this code * <a href=#Visualize>Visualize Data</a> * <a href=#LinearRegression>Linear Regression</a> * <a href=#LASSO>LASSO</a> * <a href=#MixedEffect>Mixed Effect</a> * <a href=#Algebraic>Algebraic Model</a> ``` # Necessary imports import pandas as pd import numpy as np import statsmodels.api as sm import statsmodels.formula.api as smf import patsy from math import sqrt import seaborn as sns import matplotlib.pyplot as plt from scipy.stats import kurtosis, skew from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.linear_model import RidgeCV from sklearn.pipeline import make_pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.model_selection import KFold from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_squared_error from sklearn.linear_model import Lasso from sklearn import linear_model %matplotlib inline ``` ## Read data ``` boy1600 = pd.read_csv("../1allDistrict_boy1600.csv") girl1600 = pd.read_csv("../1allDistrict_girl1600.csv") girl400 = pd.read_csv("../1allDistrict_girl400.csv") boy400 = pd.read_csv("../1allDistrict_boy400.csv") boy1600['sex'] = 'boy' girl1600['sex'] = 'girl' boy400['sex'] = 'boy' girl400['sex'] = 'girl' print(f"Girl 1600: {girl1600.shape}") print(f"Boy 1600: {boy1600.shape}") print(f"Girl 400: {girl400.shape}") print(f"Boy 400: {boy400.shape}") # put the boy and girl data into one file athlete_data = pd.concat([boy1600,girl1600]) #athlete_data = pd.concat([boy400,girl400]) print(athlete_data.shape) print(athlete_data.columns) # rename columns because statsmodels doesn't like the 12_PR format # add a numerical column for sex of the athlete athlete_data['PR12'] = athlete_data['12_PR'] athlete_data['PR11'] = athlete_data['11_PR'] athlete_data['PR10'] = 
athlete_data['10_PR'] athlete_data['PR9'] = athlete_data['9_PR'] athlete_data['Nsex'] = [1 if s == 'boy' else 0 for s in athlete_data['sex']] print('number of unique schools: ',len(athlete_data['School'].unique())) ``` ## Set up X and y ``` # How many unique athletes in each district athlete_data.District.value_counts() print(athlete_data.District[athlete_data.District == 'District 7']) print(athlete_data.District[athlete_data.District == 'District 8']) # for 1600 data # drop the 3 athletes from District 7 and 8 athlete_data.drop(index=104,inplace=True) athlete_data.drop(index=201,inplace=True) athlete_data.drop(index=252,inplace=True) print(athlete_data.District[athlete_data.District == 'District 7']) print(athlete_data.District[athlete_data.District == 'District 8']) # for 400 data # drop the athlete from District 8 athlete_data.drop(index=132,inplace=True) athlete_data.head() ``` Variable |Description |Value ----------|------------------------------:|:---- District 1|Athlete school in this district| 0 or 1 District 2|Athlete school in this district| 0 or 1 District 3|Athlete school in this district| 0 or 1 District 4|Athlete school in this district| 0 or 1 District 5|Athlete school in this district| 0 or 1 District 6|Athlete school in this district| 0 or 1 Sex |Athlete girl or boy | 1=boy, 0=girl Grad Year |Graduation Year | int 9th Grade PR|Best time in 9th Grade | float 10th Grade PR|Best time in 10th Grade | float 11th Grade PR|Best time in 11th Grade | float| ``` #given the athlete_data read from files, generate the X & y dataframes def get_Xy(athlete_data,Dist=100): X = pd.DataFrame() if Dist == 100: # create one-hot columns for District X = pd.get_dummies(athlete_data[['District']]) X = pd.concat([X, athlete_data[['PR9','PR10','PR11','Nsex','Grad_Yr']]], axis=1, sort=False) y = athlete_data['PR12'] else: filtered_data = athlete_data[athlete_data['District'] == 'District '+str(Dist)] X = filtered_data[['PR9','PR10','PR11','Nsex','Grad_Yr']] y = 
filtered_data['PR12'] #y = pd.DataFrame(y.values.reshape((len(y),1))) return(X,y) X,y = get_Xy(athlete_data,100) X.shape y.shape type(y) ``` ## Visualize Data <a name='Visualize' /> ``` X.corr() X.info() sns.distplot(athlete_data['PR12']) plt.show() sns.distplot(athlete_data['PR12'],label = '12th Grade',norm_hist=False) sns.distplot(athlete_data['PR11'],label = '11th Grade',norm_hist=False) sns.distplot(athlete_data['PR10'],label = '10th Grade',norm_hist=False) sns.distplot(athlete_data['PR9'],label = '9th Grade',norm_hist=False) plt.legend() plt.show(); # plot 9th grade PR vs 12th grade PR for boys by district grid=sns.lmplot(x = "PR9",y = "PR12",col="District", col_wrap=3, data=athlete_data[athlete_data['Nsex'] == 1]) plt.ylim(top=450) # adjust the top leaving bottom unchanged plt.ylim(bottom=240) # adjust the top leaving bottom unchanged sns.catplot(x="District",y="PR12", data=athlete_data[(athlete_data['Nsex'] == 1)]); #plt.figure(figsize=(10,2)) plt.ylabel('12th grade PR (Seconds)') plt.xlabel('District') plt.xticks(range(0,6),('1','2','3','4','5','6')); plt.title('Variation in 12th grade times by district'); #plt.figure(figsize=(6,3)) #plt.savefig('12_PR_by_District.png') #boxplot = athlete_data.boxplot(column=[athlete_data[athlete_data[District == 'District 1'['PR12'], # athlete_data[athlete_data[District == 'District 2'['PR12']]) data = ([athlete_data[athlete_data.District == 'District 1']['PR12'], athlete_data[athlete_data.District == 'District 2']['PR12'], athlete_data[athlete_data.District == 'District 3']['PR12'], athlete_data[athlete_data.District == 'District 4']['PR12'], athlete_data[athlete_data.District == 'District 5']['PR12'], athlete_data[athlete_data.District == 'District 6']['PR12']]) fig_box, fig = plt.subplots() fig.set_title('12th grade PR for each district') fig.boxplot(data) plt.xlabel('District') plt.ylabel('time (seconds)') plt.show() # How many unique athletes in each district athlete_data.School.value_counts() ``` ## Linear Regression 
Model <a name='LinearRegression' /> ``` #divide in to train and test sets X_train,X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,random_state=42,stratify=X['Nsex']) X_train.shape X_test.shape # Create an empty model lr = LinearRegression() # Fit the model to the full dataset lr.fit(X_train, y_train) # Print out the R^2 for the model against the full dataset lr.score(X_train,y_train) y_pred = lr.predict(X_test) X.columns RMSE = sqrt(((y_test-y_pred)**2).values.mean()) print(RMSE) plt.scatter(y_pred,y_test,alpha=0.5); plt.ylabel('y_test (seconds)'); plt.xlabel('y_predicted (seconds)'); plt.plot([max(y_pred),min(y_pred)],[max(y_pred),min(y_pred)],color='r') #plt.plot([240,470],[240,470],color='r') #plt.savefig('test_vs_pred.png'); print('Using all data (9th, 10th & 11th grades) to predict 12th grade PR') print('Train R^2: ',lr.score(X_train, y_train)) print('Train RMSE:', sqrt(mean_squared_error(y_train, lr.predict(X_train)))) print('Test R^2: ', lr.score(X_test, y_test)) print('Test RMSE:', sqrt(mean_squared_error(y_test, lr.predict(X_test)))) data = y_test-lr.predict(X_test) print('Skew:',skew(data)) print("mean : ", np.mean(data)) print("var : ", np.var(data)) print("skew : ",skew(data)) print("kurt : ",kurtosis(data)) data = y_test-lr.predict(X_test) plt.hist(data,40) plt.plot([0,0],[0,150],color='r') plt.title('histogram of residuals') plt.xlabel('y_predicted - y') #remove 9th grade PR data - how good does it do now X1_train = X_train.drop(['PR9'],axis=1) X1_test = X_test.drop(['PR9'],axis=1) lr.fit(X1_train,y_train) print('Using only 10th & 11th to predict 12th grade PR') print('Train R^2: ',lr.score(X1_train, y_train)) print('Train RMSE:', sqrt(mean_squared_error(y_train, lr.predict(X1_train)))) print('Test R^2: ', lr.score(X1_test, y_test)) print('Test RMSE:', sqrt(mean_squared_error(y_test, lr.predict(X1_test)))) #remove 9th grade PR data - how good does it do now # only select boys athlete_data_boy = athlete_data[athlete_data.sex == 
'boy'].copy() X1,y1 = get_Xy(athlete_data_boy,100) X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=0.4,random_state=42) X1_train.drop(['PR9'],axis=1) lr = LinearRegression() lr.fit(X1_train,y1_train) print('Using only 10th & 11th to predict 12th grade PR for boys') print('Train R^2: ',lr.score(X1_train, y1_train)) print('Train RSSE:', sqrt(mean_squared_error(y1_train, lr.predict(X1_train)))) print('Test R^2: ', lr.score(X1_test, y1_test)) print('Test RSSE:', sqrt(mean_squared_error(y1_test, lr.predict(X1_test)))) #remove 10th and 11th grade PR data - how good does it do now X2_train = X_train.drop(['PR10','PR11'],axis=1) X2_test = X_test.drop(['PR10','PR11'],axis=1) lr.fit(X2_train,y_train) print('Using only 9th grade to predict 12th grade PR') print('Train R^2: ',lr.score(X2_train, y_train)) print('Train SSE:', mean_squared_error(y_train, lr.predict(X2_train))) print('Test R^2: ', lr.score(X2_test, y_test)) print('Test SSE:', mean_squared_error(y_test, lr.predict(X2_test))) # add a PR11**2 and PR10**2 term to linear regression X3_train = X_train.copy() X3_train['PR11squared'] = X_train['PR11']**2 X3_train['PR10squared'] = X_train['PR10']**2 X3_test = X_test.copy() X3_test['PR11squared'] = X_test['PR11']**2 X3_test['PR10squared'] = X_test['PR10']**2 # Create an empty model lr = LinearRegression() lr.fit(X3_train,y_train) print('Using squared terms as well to predict 12th grade PR') print('Train R^2: ',lr.score(X3_train, y_train)) print('Train RMSE:', sqrt(mean_squared_error(y_train, lr.predict(X3_train)))) print('Test R^2: ', lr.score(X3_test, y_test)) print('Test RMSE:', sqrt(mean_squared_error(y_test, lr.predict(X3_test)))) # add a PR11**2 and PR10**2 term to linear regression X4_train = X_train.copy() X4_train['PR11squared'] = X_train['PR11']**2 X4_train['PR10squared'] = X_train['PR10']**2 #X4_train['PR11_o_PR10'] = X_train['PR11']/X_train['PR10'] #X4_train['PR10_o_PR9'] = X_train['PR10']/X_train['PR9'] X4_test = X_test.copy() 
X4_test['PR11squared'] = X_test['PR11']**2 X4_test['PR10squared'] = X_test['PR10']**2 #X4_test['PR11_o_PR10'] = X_test['PR11']/X_test['PR10'] #X4_test['PR10_o_PR9'] = X_test['PR11']/X_test['PR9'] # Create an empty model lr = LinearRegression() lr.fit(X4_train,y_train) print('Using squared terms as well to predict 12th grade PR') print('Train R^2: ',lr.score(X4_train, y_train)) print('Train RMSE:', sqrt(mean_squared_error(y_train, lr.predict(X4_train)))) print('Test R^2: ', lr.score(X4_test, y_test)) print('Test RMSE:', sqrt(mean_squared_error(y_test, lr.predict(X4_test)))) data = y_test-lr.predict(X4_test) print('Skew:',skew(data)) print("mean : ", np.mean(data)) print("var : ", np.var(data)) print("skew : ",skew(data)) print("kurt : ",kurtosis(data)) import yellowbrick from sklearn.linear_model import Ridge from yellowbrick.regressor import ResidualsPlot # Instantiate the linear model and visualizer visualizer = ResidualsPlot(model = lr) visualizer.fit(X3_train, y_train) # Fit the training data to the model visualizer.poof() ``` Now do it with statsmodels ``` X = pd.DataFrame() # create one-hot columns for District X = pd.get_dummies(athlete_data[['District']]) X = pd.concat([X, athlete_data[['PR9','PR10','PR11','Nsex','Grad_Yr']]], axis=1, sort=False) y = athlete_data['PR12'] #y = pd.DataFrame(y.values.reshape((len(y),1))) X.shape,y.shape sm_data = pd.DataFrame() # create one-hot columns for District sm_data = pd.get_dummies(athlete_data[['District']]) sm_data = pd.concat([X, athlete_data[['PR9','PR10','PR11','PR12','Nsex','Grad_Yr']]], axis=1, sort=False) y_train_sm, X_train_sm = patsy.dmatrices('PR12 ~ PR9 + PR10 + PR11 + Nsex + Grad_Yr',data = sm_data, return_type='dataframe') model = sm.OLS(y_train_sm,X_train_sm) fit = model.fit() print(fit.summary()) ``` Explore the effect of sample size on the results. 
```
# Set District to filter for only one district, Dist=100 is all districts
Dist = 100
filtered_X, filtered_y = get_Xy(athlete_data,Dist)

#divide into train and test sets
X_train, X_test, y_train, y_test = train_test_split(filtered_X, filtered_y, test_size=0.4, random_state=42,stratify=filtered_X['Nsex'])

# Create an empty model
output_data = pd.DataFrame()
max_sample_size = min(401,len(X_train))
for sample_size in range(10,max_sample_size,1):
    # sampling X and y with the same random_state keeps the rows aligned
    X2_train = X_train.sample(n=sample_size,random_state=1)
    y2_train = y_train.sample(n=sample_size,random_state=1)
    #X2_test = X_test.sample(n=sample_size,random_state=1)
    #y2_test = y_test.sample(n=sample_size,random_state=1)
    lr = LinearRegression()
    lr.fit(X2_train, y2_train)
    y2_predict = lr.predict(X_test)
    test_score = lr.score(X_test,y_test)
    train_score = lr.score(X2_train,y2_train)
    train_error = mean_squared_error(y2_train, lr.predict(X2_train))
    test_error = mean_squared_error(y_test, lr.predict(X_test))
    #test_error = mean_squared_error(y2_test, lr.predict(X2_test))
    #print(sample_size,train_error,test_error)
    output_data = output_data.append([[sample_size,test_score,train_score,train_error,test_error]])
    #print('Train R^2: ', train_score)
    #print('Train SSE:', train_error)
    #print('Test R^2: ', test_score)
    #print('Test SSE:', test_error)

plt.plot(output_data[0],output_data[3],label='Train Error')
plt.plot(output_data[0],output_data[4],label='Test Error')
plt.legend()
plt.title('Model error vs. number of data points');
plt.xlabel('Number of data points');
plt.ylabel('Mean Squared Error');

print('boys in train set: ',X_train[X_train.Nsex == 1]['Nsex'].count())
print('girls in train set:',X_train[X_train.Nsex == 0]['Nsex'].count())
print('boys in test set: ',X_test[X_test.Nsex == 1]['Nsex'].count())
print('girls in test set: ',X_test[X_test.Nsex == 0]['Nsex'].count())
```

## LASSO shows feature importance <a name='LASSO' />

```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42,stratify=X['Nsex'])

lr_lasso = linear_model.Lasso(alpha=0.1)
lr_fit = lr_lasso.fit(X_train, y_train)

# Print out the R^2 for the model against the full dataset
lr_lasso.score(X_train,y_train)

#lr_lasso.get_params()['lassocv'].alpha_
lr_lasso.get_params()

X_train.columns
print(X_train.shape)
print(y_train.shape,lr_lasso.predict(X_train).shape)
X_train.head()

print('Train R^2: ',lr_lasso.score(X_train, y_train))
print('Train RMSE:', sqrt(mean_squared_error(y_train,lr_lasso.predict(X_train))))
print('Test R^2: ', lr_lasso.score(X_test, y_test))
print('Test RMSE:', sqrt(mean_squared_error(y_test, lr_lasso.predict(X_test))))

alpha_list = [1e-4, 1e-3, 1e-2, .05, 1e-1,.3,.5,.7]
lasso_results = []
for alpha in alpha_list:
    lr_lasso = linear_model.Lasso(alpha=alpha)
    lr_lasso_fit = lr_lasso.fit(X_train, y_train)
    score = lr_lasso.score(X_train,y_train)
    RMSE = sqrt(mean_squared_error(y_test, lr_lasso.predict(X_test)))
    coef = lr_lasso_fit.coef_.tolist()
    #print(coef)
    lasso_results.append([alpha,score,coef,RMSE])

num_features = X.shape[1]
for alpha,score,coef,RMSE in lasso_results:
    #print(alpha,score,coef)
    test = (alpha == 0.7)
    test = True
    if test:
        plt.plot(range(1,num_features+1),coef,label=f"alpha = {alpha}")
plt.legend()
plt.xticks(np.linspace(0,num_features+1, num=num_features+2));
plt.xlabel('Feature')
plt.ylabel('Lasso coefficient');

num_features = X.shape[1]
for alpha,score,coef,RMSE in lasso_results:
    #print(alpha,score,coef)
    #test = (alpha == 0.7)
    test = (alpha
>= 0.001) and (alpha <= .3)
    if test:
        plt.plot(range(1,num_features+1),coef,label=f"alpha = {alpha}")
plt.legend()
plt.xticks(np.linspace(0,num_features+1, num=num_features+2));
plt.xlabel('Feature')
plt.ylabel('Lasso coefficient');

X_train.columns
pd.DataFrame(lasso_results)
lasso_results[5][2]

xx = [row[0] for row in lasso_results]
yy = [row[3] for row in lasso_results]
plt.semilogx(xx,yy);
plt.xlabel('alpha')
plt.ylabel('RMSE');
```

## Modeling District as a mixed effect <a name='MixedEffect' />

Random effect - District

Fixed effect - PRs from each year, grad year

We expect to see some clustering due to the random effect variable.

```
sm_data = athlete_data[['District','PR9','PR10','PR11','PR12','Nsex','Grad_Yr']]
y_train_sm, X_train_sm = patsy.dmatrices('PR12 ~ PR9 + PR10 + PR11 + Nsex + Grad_Yr', data = sm_data, return_type='dataframe')
print(sm_data.shape)
sm_data.head()
print(y_train_sm.shape,X_train_sm.shape)

#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,random_state=42)
#data_train = pd.concat([y_train,X_train],axis=1,sort=False)
#data_test = pd.concat([y_test,X_test],axis=1,sort=False)

#md = smf.mixedlm("12_PR ~ 9_PR + 10_PR + 11_PR + sex + Grad_Yr",
#                 data = athlete_data,
#                 groups = athlete_data["District"])
md = smf.mixedlm('PR12 ~ PR9 + PR10 + PR11 + Nsex + Grad_Yr',
                 data = sm_data,
                 groups = sm_data['District'])
mdf = md.fit()
print(mdf.summary())

y_sm = sm_data['PR12']
#X_sm = sm_data = athlete_data[['District','PR9','PR10','PR11','Nsex','Grad_Yr']]
#y_sm_predict = mdf.predict(X_sm)
y_sm_predict = mdf.fittedvalues
RMSE = sqrt(((y_sm-y_sm_predict)**2).values.mean())
print(RMSE)

# and let's plot the predictions
performance = pd.DataFrame()
performance["predicted"] = mdf.fittedvalues
performance["residuals"] = mdf.resid.values
#performance["PR12"] = data.age_scaled
sns.lmplot(x = "predicted", y = "residuals", data = performance)
```

## Algebraic Model <a name='Algebraic' />

How well can you predict 12th grade scores if you use a
brute force method? Assume the ratio in the decrease in times from 10th grade to 11th grade is the same as from 11th grade to 12th grade. In this way, with the competition times in 10th and 11th grade you can predict the time for 12th grade.

```
athlete_data.head()
print(y_test.iloc[831])
print(y_test)

RMSE = 0
average = 0
total = 0
growth = []
growth1 = []
residual = []
max_val = []
df = athlete_data.sample(n=len(y_test))
for index,athlete in df.iterrows():
    g12 = athlete['PR12']
    g11 = athlete['PR11']
    g10 = athlete['PR10']
    g9 = athlete['PR9']
    g12_predict = g11 + (g11/g10)*(g11-g10)
    #g12_predict = g11**2/g10
    RMSE += (g12_predict - g12)**2
    average += g12
    total += 1
    growth.append((g12/g11)/(g11/g10))
    residual.append(g12_predict - g12)
    if (g11-g10) != 0:
        g = (g12-g11)/(g11-g10)
        if g < 5:
            growth1.append(g)
            max_val.append(g12)
RMSE = sqrt(RMSE/total)
average = average/total
print('RMSE:',RMSE)
print('12th grade average time:',average)

#plt.scatter(max,growth)
#plt.hist(growth1,1000);
plt.hist(growth,40);
plt.title('Histogram of ratio of 12/11 grade times to 11/10 grade times');
#plt.xlim(-10,10)
plt.plot([1,1],[0,105],color='r')
#plt.plot([0,0],[0,130],color='y')

plt.hist(residual,40)
plt.plot([0,0],[0,150],color='r')
plt.title('Histogram of residuals')
plt.xlabel('y_predicted - y')
```
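The extrapolation rule above can be isolated into a small helper for inspection. A minimal sketch — the function name and the race times below are illustrative, not taken from the athlete data:

```python
def predict_pr12(pr10, pr11):
    """Algebraic forecast: assume the 11th-to-12th grade change repeats the
    10th-to-11th grade change, scaled by the same improvement ratio."""
    return pr11 + (pr11 / pr10) * (pr11 - pr10)

# Hypothetical race times in seconds: a runner improves from 1200 to 1150,
# so the rule predicts a further, slightly smaller improvement.
print(predict_pr12(1200.0, 1150.0))  # -> 1102.0833...
```

Because it has no fitted parameters, this rule makes a useful baseline: a regression model should beat its RMSE to justify the extra complexity.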
<!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
*This notebook contains an excerpt from the book [Machine Learning for OpenCV](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv) by Michael Beyeler. The code is released under the [MIT license](https://opensource.org/licenses/MIT), and is available on [GitHub](https://github.com/mbeyeler/opencv-machine-learning).*

*Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations. If you find this content useful, please consider supporting the work by [buying the book](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv)!*

<!--NAVIGATION-->
< [Reducing the Dimensionality of the Data](04.02-Reducing-the-Dimensionality-of-the-Data.ipynb) | [Contents](../README.md) | [Representing Text Features](04.04-Represening-Text-Features.ipynb) >

# Representing Categorical Variables

One of the most common data types we might encounter while building a machine learning system are **categorical features** (also known as **discrete features**), such as the color of a fruit or the name of a company. The challenge with categorical features is that they don't change in a continuous way, which makes it hard to represent them with numbers. For example, a banana is either green or yellow, but not both. A product belongs either in the clothing department or in the books department, but rarely in both, and so on.

How would you go about representing such features? Consider the following data containing a list of some of the forefathers of machine learning and artificial intelligence:

```
data = [
    {'name': 'Alan Turing', 'born': 1912, 'died': 1954},
    {'name': 'Herbert A. Simon', 'born': 1916, 'died': 2001},
    {'name': 'Jacek Karpinski', 'born': 1927, 'died': 2010},
    {'name': 'J.C.R. Licklider', 'born': 1915, 'died': 1990},
    {'name': 'Marvin Minsky', 'born': 1927, 'died': 2016},
]
```

While the features 'born' and 'died' are already in numeric format, the 'name' feature is a bit trickier to encode. We might be tempted to encode them in the following way:

```
{'Alan Turing': 1,
 'Herbert A. Simon': 2,
 'Jacek Karpinski': 3,
 'J.C.R. Licklider': 4,
 'Marvin Minsky': 5};
```

Although this seems like a good idea, it does not make much sense from a machine learning perspective. Why not? Refer to the book for the answer (p. 97).

A better way is to use a `DictVectorizer`, also known as a *one-hot encoding*. The way it works is by feeding a dictionary containing the data to the `fit_transform` function, and the function automatically determines which features to encode:

```
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False, dtype=int)
vec.fit_transform(data)
```

What happened here? The two year entries are still intact, but the rest of the rows have been replaced by ones and zeros. We can call `get_feature_names` to find out the listed order of the features:

```
vec.get_feature_names()
```

The first row of our data matrix, which stands for Alan Turing, is now encoded as `'born'=1912`, `'died'=1954`, `'Alan Turing'=1`, `'Herbert A. Simon'=0`, `'J.C.R. Licklider'=0`, `'Jacek Karpinski'=0`, and `'Marvin Minsky'=0`.

If your category has many possible values, it is better to use a sparse matrix:

```
vec = DictVectorizer(sparse=True, dtype=int)
vec.fit_transform(data)
```

We will come back to this technique when we talk about neural networks in [Chapter 9](09.00-Using-Deep-Learning-to-Classify-Handwritten-Digits.ipynb), *Using Deep Learning to Classify Handwritten Digits*.
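The book defers the full answer, but the core problem with the integer codes can be shown numerically. A small sketch (not from the book) of how ordinal codes invent distances between categories, while one-hot vectors do not:

```python
import numpy as np

# Ordinal codes: Turing=1, Simon=2, Minsky=5. The numbers imply Turing is
# four times "closer" to Simon than to Minsky, although name categories
# carry no meaningful order or distance at all.
ordinal = np.array([[1.0], [2.0], [5.0]])
print(np.linalg.norm(ordinal[0] - ordinal[1]))  # 1.0
print(np.linalg.norm(ordinal[0] - ordinal[2]))  # 4.0

# One-hot vectors place every pair of distinct categories at the same
# distance, so no spurious ordering leaks into distance-based models.
one_hot = np.eye(3)
print(np.linalg.norm(one_hot[0] - one_hot[1]))  # sqrt(2)
print(np.linalg.norm(one_hot[0] - one_hot[2]))  # sqrt(2)
```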
<!--NAVIGATION--> < [Reducing the Dimensionality of the Data](04.02-Reducing-the-Dimensionality-of-the-Data.ipynb) | [Contents](../README.md) | [Representing Text Features](04.04-Represening-Text-Features.ipynb) >
``` import os import random import numpy as np import pandas as pd import tensorflow as tf from tqdm.notebook import tqdm from nasbench import api from search_spaces import load_nasbench_101 from random_search import run_random_search, random_spec from neural_predictor import classifier, regressor from input_preprocessing import preprocess_nasbench os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"] = "1" os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true" nasbench = load_nasbench_101() ``` ### Two-stage vs One-stage prediction 10 split (N=172) ``` N = 172 def get_N_samples(N): models = [] for _ in range(N): while True: model = random_spec(nasbench) if model not in models: models.append(nasbench.query(model)) break return preprocess_nasbench(models) train_clf_data = get_N_samples(N) clf = classifier([train_clf_data['X'], train_clf_data['norm_A'], train_clf_data['norm_AT']], train_clf_data['labels']) train_data = get_N_samples(N) reg = regressor([train_data['X'], train_data['norm_A'], train_data['norm_AT']], train_data['val_acc']) LOOPS = 60 MAX_SAMPLES = 500 MAX_TIME_BUDGET = 8e5 ``` ### Two-stage ``` np_val_avg, np_test_avg = [], [] np_val_std, np_test_std = [], [] val_acc, test_acc = [], [] for k in tqdm(range(MAX_SAMPLES - N)): K = k + 1 loop_val, loop_test = [], [] for _ in range(LOOPS): test_models = get_N_samples(N+K) filtered_models = clf.predict([test_models['X'], test_models['norm_A'], test_models['norm_AT']]) idx = np.where(filtered_models > 0.50)[0] X = test_models['X'][idx] A = test_models['norm_A'][idx] AT = test_models['norm_AT'][idx] labels = test_models['val_acc'][idx] test_labels = test_models['test_acc'][idx] pred_acc = reg.predict([X, A, AT]).ravel() topk_idx = tf.math.top_k(pred_acc, k=len(pred_acc) if len(pred_acc) < K else K).indices.numpy() selected_val = labels[topk_idx] selected_test = test_labels[topk_idx] best_val_idx = np.argmax(selected_val) loop_val.append(selected_val[best_val_idx]) 
loop_test.append(selected_test[best_val_idx]) val_acc.append(np.mean(loop_val)) test_acc.append(np.mean(loop_test)) np_val_avg.append(np.max(val_acc)) np_val_std.append(np.std(loop_val)) np_test_avg.append(np.max(test_acc)) np_test_std.append(np.std(loop_test)) np.save('outputs/np2_val_avg_by_samples.npy', np_val_avg) np.save('outputs/np2_val_std_by_samples.npy', np_val_std) np.save('outputs/np2_test_avg_by_samples.npy', np_test_avg) np.save('outputs/np2_test_std_by_samples.npy', np_test_std) ```
## Feature Scaling

We discussed previously that the scale of the features is an important consideration when building machine learning models. Briefly:

### Feature magnitude matters because:

- The regression coefficients of linear models are directly influenced by the scale of the variable.
- Variables with bigger magnitude / larger value range dominate over those with smaller magnitude / value range
- Gradient descent converges faster when features are on similar scales
- Feature scaling helps decrease the time to find support vectors for SVMs
- Euclidean distances are sensitive to feature magnitude.
- Some algorithms, like PCA require the features to be centered at 0.

### The machine learning models affected by the feature scale are:

- Linear and Logistic Regression
- Neural Networks
- Support Vector Machines
- KNN
- K-means clustering
- Linear Discriminant Analysis (LDA)
- Principal Component Analysis (PCA)

### Feature Scaling

**Feature scaling** refers to the methods or techniques used to normalize the range of independent variables in our data, or in other words, the methods to set the feature value range within a similar scale. Feature scaling is generally the last step in the data preprocessing pipeline, performed **just before training the machine learning algorithms**.

There are several Feature Scaling techniques, which we will discuss throughout this section:

- Standardisation
- Mean normalisation
- Scaling to minimum and maximum values - MinMaxScaling
- Scaling to maximum value - MaxAbsScaling
- Scaling to quantiles and median - RobustScaling
- Normalization to vector unit length

In this notebook, we will discuss **Standardisation**.

=================================================================

## Standardisation

Standardisation involves centering the variable at zero, and standardising the variance to 1.
The procedure involves subtracting the mean from each observation and then dividing by the standard deviation:

**z = (x - x_mean) / std**

The result of the above transformation is **z**, which is called the z-score, and represents how many standard deviations a given observation deviates from the mean. A z-score specifies the location of the observation within a distribution (in numbers of standard deviations with respect to the mean of the distribution). The sign of the z-score (+ or - ) indicates whether the observation is above (+) or below ( - ) the mean.

The shape of a standardised (or z-scored normalised) distribution will be identical to the original distribution of the variable. If the original distribution is normal, then the standardised distribution will be normal. But, if the original distribution is skewed, then the standardised distribution of the variable will also be skewed. In other words, **standardising a variable does not normalize the distribution of the data** and if this is the desired outcome, we should implement any of the techniques discussed in section 7 of the course.

In a nutshell, standardisation:

- centers the mean at 0
- scales the variance at 1
- preserves the shape of the original distribution
- the minimum and maximum values of the different variables may vary
- preserves outliers

Good for algorithms that require features centered at zero.
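The formula can be checked by hand on a toy vector. A quick sketch with illustrative numbers only — this hand-rolled version is what `StandardScaler` automates, with the bonus that the scaler also remembers the learned mean and std for reuse on test data:

```python
import numpy as np

# standardise a single toy feature: subtract the mean, divide by the std
x = np.array([2.0, 4.0, 6.0, 8.0])
z = (x - x.mean()) / x.std()

print(z)         # values spread symmetrically around 0
print(z.mean())  # 0.0 -> centered
print(z.std())   # 1.0 -> unit variance
```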
## In this demo

We will perform standardisation using the Boston House Prices data set that comes with Scikit-learn

```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

# dataset for the demo
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

# the scaler - for standardisation
from sklearn.preprocessing import StandardScaler

# load the Boston House price data

# this is how we load the boston dataset from sklearn
boston_dataset = load_boston()

# create a dataframe with the independent variables
data = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)

# add target
data['MEDV'] = boston_dataset.target

data.head()

# Information about the boston house price dataset
# you will find details about the different variables

# the aim is to predict the "Median value of the houses"
# MEDV column in this dataset

# and there are variables with characteristics about
# the homes and the neighborhoods

# print the dataset description
print(boston_dataset.DESCR)

# let's have a look at the main statistical parameters of the variables
# to get an idea of the feature magnitudes
data.describe()
```

The different variables present different value ranges, mean, max, min, standard deviations, etc. In other words, they show different magnitudes or scales. Note for this demo, how **the mean values are not centered at zero, and the standard deviations are not scaled to 1**.

When standardising the data set, we need to first identify the mean and standard deviation of the variables. These parameters need to be learned from the train set, stored, and then used to scale test and future data. Thus, we will first divide the data set into train and test, as we have done throughout the course.
```
# let's separate the data into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data.drop('MEDV', axis=1),
                                                    data['MEDV'],
                                                    test_size=0.3,
                                                    random_state=0)

X_train.shape, X_test.shape
```

### Standardisation

The StandardScaler from scikit-learn removes the mean and scales the data to unit variance. Plus, it learns and stores the parameters needed for scaling. Thus, it is the top choice for this feature scaling technique.

On the downside, you can't select which variables to scale directly, it will scale the entire data set, and it returns a NumPy array, without the variable names.

```
# standardisation: with the StandardScaler from sklearn

# set up the scaler
scaler = StandardScaler()

# fit the scaler to the train set, it will learn the parameters
scaler.fit(X_train)

# transform train and test sets
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# the scaler stores the mean of the features, learned from train set
scaler.mean_

# the scaler stores the standard deviation of the features,
# learned from train set
scaler.scale_

# let's transform the returned NumPy arrays to dataframes for the rest of
# the demo
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)

# let's have a look at the original training dataset: mean and standard deviation
# I use np.round to reduce the number of decimals to 1.
np.round(X_train.describe(), 1)

# let's have a look at the scaled training dataset: mean and standard deviation
# I use np.round to reduce the number of decimals to 1.
np.round(X_train_scaled.describe(), 1)
```

As expected, the mean of each variable, which was not centered at zero, is now around zero, and the standard deviation is set to 1. Note however, that the minimum and maximum values vary according to how spread the variable was to begin with, and are highly influenced by the presence of outliers.
```
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))

# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['RM'], ax=ax1)
sns.kdeplot(X_train['LSTAT'], ax=ax1)
sns.kdeplot(X_train['CRIM'], ax=ax1)

# after scaling
ax2.set_title('After Standard Scaling')
sns.kdeplot(X_train_scaled['RM'], ax=ax2)
sns.kdeplot(X_train_scaled['LSTAT'], ax=ax2)
sns.kdeplot(X_train_scaled['CRIM'], ax=ax2)
plt.show()
```

Note from the above plots how standardisation centered all the distributions at zero, but it preserved their original distribution. The value range is not identical, but it looks more homogeneous across the variables.

Note something interesting in the following plot:

```
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))

# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['AGE'], ax=ax1)
sns.kdeplot(X_train['DIS'], ax=ax1)
sns.kdeplot(X_train['NOX'], ax=ax1)

# after scaling
ax2.set_title('After Standard Scaling')
sns.kdeplot(X_train_scaled['AGE'], ax=ax2)
sns.kdeplot(X_train_scaled['DIS'], ax=ax2)
sns.kdeplot(X_train_scaled['NOX'], ax=ax2)
plt.show()

X_train['AGE'].min(), X_train['AGE'].max()
```

In the above plot, we can see how, by scaling, the variable NOX, which varied across a very narrow range of values [0-1], and AGE, which varied across [0-100], now spread over a more homogeneous range of values, so that we can compare them directly in one plot, whereas before it was difficult. In a linear model, AGE would dominate the output, but after standardisation, both variables will be able to have an impact (assuming that they are both predictive).

```
plt.scatter(X_train['AGE'], X_train['NOX'])
plt.scatter(X_train_scaled['AGE'], X_train_scaled['NOX'])
```
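The claim that the large-magnitude variable dominates a linear model can be demonstrated on synthetic data. A sketch with made-up features that mimic the AGE and NOX value ranges (not the Boston data itself):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
wide = rng.uniform(0, 100, 500)   # wide-range feature, like AGE
narrow = rng.uniform(0, 1, 500)   # narrow-range feature, like NOX

# both features contribute equally to the target in standardised terms
y = wide / 100 + narrow + rng.normal(0, 0.01, 500)
X = np.column_stack([wide, narrow])

raw_coef = LinearRegression().fit(X, y).coef_
scaled_coef = LinearRegression().fit(StandardScaler().fit_transform(X), y).coef_

print(raw_coef)     # magnitudes differ ~100x, reflecting scale, not importance
print(scaled_coef)  # roughly equal, matching the equal true contributions
```

The raw coefficients differ by about two orders of magnitude purely because of the feature value ranges; after standardisation both coefficients are comparable, which matches their equal contribution to the target.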
___

<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>

# RNN Example for Time Series

```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```

## Data

Release: Advance Monthly Sales for Retail and Food Services

Units: Millions of Dollars, Not Seasonally Adjusted

Frequency: Monthly

The value for the most recent month is an advance estimate that is based on data from a subsample of firms from the larger Monthly Retail Trade Survey. The advance estimate will be superseded in following months by revised estimates derived from the larger Monthly Retail Trade Survey. The associated series from the Monthly Retail Trade Survey is available at https://fred.stlouisfed.org/series/MRTSSM448USN

Information about the Advance Monthly Retail Sales Survey can be found on the Census website at https://www.census.gov/retail/marts/about_the_surveys.html

Suggested Citation: U.S. Census Bureau, Advance Retail Sales: Clothing and Clothing Accessory Stores [RSCCASN], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/RSCCASN, November 16, 2019.

https://fred.stlouisfed.org/series/RSCCASN

```
df = pd.read_csv('../Data/RSCCASN.csv',index_col='DATE',parse_dates=True)
df.head()
df.columns = ['Sales']
df.plot(figsize=(12,8))
```

## Train Test Split

```
len(df)
```

Data is monthly, let's forecast 1.5 years into the future.
```
len(df)- 18

test_size = 18
test_ind = len(df)- test_size
train = df.iloc[:test_ind]
test = df.iloc[test_ind:]
train
test
```

## Scale Data

```
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()

# IGNORE WARNING ITS JUST CONVERTING TO FLOATS
# WE ONLY FIT TO TRAINING DATA, OTHERWISE WE ARE CHEATING ASSUMING INFO ABOUT TEST SET
scaler.fit(train)
scaled_train = scaler.transform(train)
scaled_test = scaler.transform(test)
```

# Time Series Generator

This class takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as stride, length of history, etc., to produce batches for training/validation.

#### Arguments

    data: Indexable generator (such as list or Numpy array)
        containing consecutive data points (timesteps).
        The data should be 2D, and axis 0 is expected
        to be the time dimension.
    targets: Targets corresponding to timesteps in `data`.
        It should have same length as `data`.
    length: Length of the output sequences (in number of timesteps).
    sampling_rate: Period between successive individual timesteps
        within sequences. For rate `r`, timesteps
        `data[i]`, `data[i-r]`, ... `data[i - length]`
        are used to create a sample sequence.
    stride: Period between successive output sequences.
        For stride `s`, consecutive output samples would
        be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc.
    start_index: Data points earlier than `start_index` will not be used
        in the output sequences. This is useful to reserve part of the
        data for test or validation.
    end_index: Data points later than `end_index` will not be used
        in the output sequences. This is useful to reserve part of the
        data for test or validation.
    shuffle: Whether to shuffle output samples,
        or instead draw them in chronological order.
    reverse: Boolean: if `true`, timesteps in each output sample will be
        in reverse chronological order.
    batch_size: Number of timeseries samples in each batch
        (except maybe the last one).
```
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

# Let's redefine to get 12 months back and then predict the next month out
length = 12
generator = TimeseriesGenerator(scaled_train, scaled_train, length=length, batch_size=1)

# What does the first batch look like?
X,y = generator[0]
print(f'Given the Array: \n{X.flatten()}')
print(f'Predict this y: \n {y}')
```

### Create the Model

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM

# We're only using one feature in our time series
n_features = 1

# define model
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(length, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

model.summary()
```

### EarlyStopping and creating a Validation Generator

NOTE: The scaled_test dataset size MUST be greater than your length chosen for your batches. Review video for more info on this.

```
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss',patience=2)
validation_generator = TimeseriesGenerator(scaled_test,scaled_test, length=length, batch_size=1)

# fit model
model.fit_generator(generator,epochs=20,
                    validation_data=validation_generator,
                    callbacks=[early_stop])

losses = pd.DataFrame(model.history.history)
losses.plot()
```

## Evaluate on Test Data

```
first_eval_batch = scaled_train[-length:]
first_eval_batch = first_eval_batch.reshape((1, length, n_features))
model.predict(first_eval_batch)
scaled_test[0]
```

Now let's put this logic in a for loop to predict into the future for the entire test range.

----

**NOTE: PAY CLOSE ATTENTION HERE TO WHAT IS BEING OUTPUT AND IN WHAT DIMENSIONS.
ADD YOUR OWN PRINT() STATEMENTS TO SEE WHAT IS TRULY GOING ON!!**

```
test_predictions = []

first_eval_batch = scaled_train[-length:]
current_batch = first_eval_batch.reshape((1, length, n_features))

for i in range(len(test)):

    # get prediction 1 time stamp ahead ([0] is for grabbing just the number instead of [array])
    current_pred = model.predict(current_batch)[0]

    # store prediction
    test_predictions.append(current_pred)

    # update batch to now include prediction and drop first value
    current_batch = np.append(current_batch[:,1:,:],[[current_pred]],axis=1)
```

## Inverse Transformations and Compare

```
true_predictions = scaler.inverse_transform(test_predictions)

# IGNORE WARNINGS
test['Predictions'] = true_predictions
test
test.plot(figsize=(12,8))
```

# Retrain and Forecasting

```
full_scaler = MinMaxScaler()
scaled_full_data = full_scaler.fit_transform(df)

length = 12 # Length of the output sequences (in number of timesteps)
generator = TimeseriesGenerator(scaled_full_data, scaled_full_data, length=length, batch_size=1)

model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(length, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# fit model
model.fit_generator(generator,epochs=8)

forecast = []
# Replace periods with whatever forecast length you want
periods = 12

first_eval_batch = scaled_full_data[-length:]
current_batch = first_eval_batch.reshape((1, length, n_features))

for i in range(periods):

    # get prediction 1 time stamp ahead ([0] is for grabbing just the number instead of [array])
    current_pred = model.predict(current_batch)[0]

    # store prediction
    forecast.append(current_pred)

    # update batch to now include prediction and drop first value
    current_batch = np.append(current_batch[:,1:,:],[[current_pred]],axis=1)

# invert with the scaler that was fit on the full dataset
forecast = full_scaler.inverse_transform(forecast)
```

### Creating new timestamp index with pandas.
```
df

forecast_index = pd.date_range(start='2019-11-01',periods=periods,freq='MS')
forecast_df = pd.DataFrame(data=forecast,index=forecast_index,
                           columns=['Forecast'])
forecast_df

df.plot()
forecast_df.plot()
```

### Joining pandas plots

https://stackoverflow.com/questions/13872533/plot-different-dataframes-in-the-same-figure

```
ax = df.plot()
forecast_df.plot(ax=ax)

ax = df.plot()
forecast_df.plot(ax=ax)
plt.xlim('2018-01-01','2020-12-01')
```

# Great Job!
``` import itertools import numpy as np import torch import matplotlib import matplotlib.pyplot as plt %matplotlib inline from bayesian_privacy_accountant import BayesianPrivacyAccountant ma_eps = [] ba_eps = [] quant = 0.05 sigma = 1.0 plot_range = np.arange(100) moment_accountant = BayesianPrivacyAccountant(powers=16, total_steps=plot_range[-1]+1, bayesianDP=False) bayes_accountant = BayesianPrivacyAccountant(powers=16, total_steps=plot_range[-1]+1) for i in plot_range: grads = np.random.weibull(0.5, [50, 1000]) C = np.quantile(np.linalg.norm(grads, axis=1), quant) #grads /= np.maximum(1, np.linalg.norm(grads, axis=1, keepdims=True) / C) moment_accountant.accumulate( ldistr=(C*2, sigma * C * 2), # multiply by 2, because 2C is the actual max distance rdistr=(0, sigma * C * 2), q=1/1000, steps=1 ) ma_eps += [moment_accountant.get_privacy(target_delta=1e-5)[0]] pairs = list(zip(*itertools.combinations(torch.tensor(grads), 2))) bayes_accountant.accumulate( ldistr=(torch.stack(pairs[0]), sigma * C * 2), rdistr=(torch.stack(pairs[1]), sigma * C * 2), #ldistr=(torch.tensor(grads[:25]), sigma * C * 2), #rdistr=(torch.tensor(grads[25:]), sigma * C * 2), q=1/1000, steps=1 ) ba_eps += [bayes_accountant.get_privacy(target_delta=1e-5)[0]] plt.plot(plot_range, ma_eps, label='DP') plt.plot(plot_range, ba_eps, label='BDP') plt.yscale('log') plt.legend() plt.xlabel(r'Step', fontsize=12) plt.ylabel(r'$\varepsilon$', fontsize=12) plt.title(r'Privacy loss evolution, $C=0.05$-quantile, no clipping', fontsize=12) plt.savefig('eps_step_05q_noclip.pdf', format='pdf', bbox_inches='tight') grads = np.random.weibull(0.5, [50, 1000]) quant = 0.99 C = np.quantile(np.linalg.norm(grads, axis=1), quant) grads /= np.maximum(1, np.linalg.norm(grads, axis=1, keepdims=True) / C) ma_eps = [] ba_eps = [] sigmas = np.arange(0.55, 1.4, 0.05) for sigma in sigmas: moment_accountant = BayesianPrivacyAccountant(powers=24, total_steps=1, bayesianDP=False) moment_accountant.accumulate(ldistr=(C * 2, sigma * 
C * 2), rdistr=(0, sigma * C * 2), q=1/1000, steps=1) ma_eps += [moment_accountant.get_privacy(target_delta=1e-5)[0]] bayes_accountant = BayesianPrivacyAccountant(powers=24, total_steps=1) pairs = list(zip(*itertools.combinations(torch.tensor(grads), 2))) bayes_accountant.accumulate( ldistr=(torch.stack(pairs[0]), sigma * C * 2), rdistr=(torch.stack(pairs[1]), sigma * C * 2), q=1/1000, steps=1 ) ba_eps += [bayes_accountant.get_privacy(target_delta=1e-5)[0]] plt.rc('axes', titlesize=14, labelsize=14) plt.plot(sigmas, ma_eps, '--', linewidth=2, label='DP') plt.plot(sigmas, ba_eps, linewidth=2, label='BDP') plt.legend() plt.xlabel(r'$\sigma$') plt.ylabel(r'$\varepsilon$') plt.title(r'$\sigma$ vs $\varepsilon$, $C=0.99$-quantile') plt.savefig('eps_sigma_99q.pdf', format='pdf', bbox_inches='tight') ba_eps = [] sigma = 1.0 plot_range = range(1, 50) ba_eps_c = [] for C in [0.1, 0.7, 1.0]: grads = np.random.randn(50, 1000) grads /= np.maximum(1, np.linalg.norm(grads, axis=1, keepdims=True) / C) ba_eps = [] for p in plot_range: bayes_accountant = BayesianPrivacyAccountant(powers=p, total_steps=10000, bayesianDP=False) bayes_accountant.accumulate(ldistr=(C, sigma), rdistr=(0, sigma), q=64/60000, steps=10000) ba_eps += [bayes_accountant.get_privacy(target_delta=1e-5)[0]] ba_eps_c += [ba_eps] plt.rc('axes', titlesize=14, labelsize=14) plt.plot(plot_range, ba_eps_c[0], ':', linewidth=2, label='C=0.1') plt.plot(plot_range, ba_eps_c[1], '--', linewidth=2, label='C=0.7') plt.plot(plot_range, ba_eps_c[2], '-', linewidth=2, label='C=1.0') plt.yscale('log') plt.legend() plt.xlabel(r'$\lambda$') plt.ylabel(r'$\varepsilon$') plt.title(r'$\lambda$ vs $\varepsilon$ for different C, $\delta=10^{-5}$') plt.savefig('eps_lambda_C.pdf', format='pdf', bbox_inches='tight') ```
``` !pip install transformers==3.3.1 !git clone https://github.com/adeepH/OLD-Shared-Task.git import pandas as pd train = pd.read_csv('/content/tamil_train.csv',delimiter='\t', header=None,names=['text','label','nan']) train = train.drop(columns=['nan']) train.label = train.label.apply({'Not_offensive':0,'Offensive_Untargetede':1,'Offensive_Targeted_Insult_Group':2, 'Offensive_Targeted_Insult_Individual':3,'not-Tamil':4, 'Offensive_Targeted_Insult_Other':5}.get) val = pd.read_csv('/content/Tamil_dev.csv',delimiter='\t', header=None,names=['text','label','nan']) val = val.drop(columns=['nan']) val.label = val.label.apply({'Not_offensive':0,'Offensive_Untargetede':1,'Offensive_Targeted_Insult_Group':2, 'Offensive_Targeted_Insult_Individual':3,'not-Tamil':4, 'Offensive_Targeted_Insult_Other':5}.get) #test = pd.read_csv('/content/drive/MyDrive/sdpra2021/test.csv',delimiter=',', # header=None,names=['text']) from sklearn.model_selection import train_test_split from transformers import set_seed set_seed(52) train,eval = train_test_split(train,test_size=0.2,shuffle=True) print('Training set size:',train.shape) print('Testing set size:',val.shape) print('validation set size:',eval.shape) import pandas as pd from torch.utils.data import Dataset,DataLoader class RFDataset(Dataset): def __init__(self,text,label,tokenizer,max_len): self.text = text self.label = label self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.text) def __getitem__(self,item): text = str(self.text[item]) label = self.label[item] encoding = self.tokenizer.encode_plus( text, add_special_tokens=True, max_length = self.max_len, return_token_type_ids = False, padding = 'max_length', return_attention_mask= True, return_tensors='pt', truncation=True ) return { 'text' : text, 'input_ids' : encoding['input_ids'].flatten(), 'attention_mask' : encoding['attention_mask'].flatten(), 'label' : torch.tensor(label,dtype=torch.long) } import numpy as np from sklearn.utils import 
class_weight class_weights = class_weight.compute_class_weight('balanced', np.unique(train.label.values), train.label.values) class_weights def create_data_loader(df,tokenizer,max_len,batch_size,shuffle): ds = RFDataset( text = df.text.to_numpy(), label = df.label.to_numpy(), tokenizer = tokenizer, max_len = max_len ) return DataLoader(ds, batch_size = batch_size, shuffle = shuffle, num_workers=4) from transformers import AdamW,get_linear_schedule_with_warmup,AutoModel,AutoTokenizer device = 'cuda' PRE_TRAINED_MODEL_NAME = 'bert-base-multilingual-cased' tokenizer = AutoTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME,return_dict=False) BATCH_SIZE = 16 MAX_LEN = 256 train_data_loader = create_data_loader(train,tokenizer,MAX_LEN,BATCH_SIZE,True) val_data_loader = create_data_loader(val,tokenizer,MAX_LEN,BATCH_SIZE,False) bert_model = AutoModel.from_pretrained(PRE_TRAINED_MODEL_NAME,return_dict=False) import torch.nn as nn class RFClassifier(nn.Module): def __init__(self, n_classes,pre_trained): super(RFClassifier, self).__init__() self.auto = AutoModel.from_pretrained(pre_trained,return_dict=False) self.drop = nn.Dropout(p=0.4) #self.out = nn.Linear(self.bert.config.hidden_size, n_classes) self.out1 = nn.Linear(self.auto.config.hidden_size, 128) self.drop1 = nn.Dropout(p=0.4) self.relu = nn.ReLU() self.out = nn.Linear(128, n_classes) def forward(self, input_ids, attention_mask): _,pooled_output = self.auto( input_ids=input_ids, attention_mask=attention_mask ) #output = self.relu(pooled_output) output = self.drop(pooled_output) output = self.out1(output) output = self.relu(output) output = self.drop1(output) return nn.functional.softmax(self.out(output)) import torch.nn as nn from transformers import BertModel,BertTokenizer class MixedBertModel(nn.Module): def __init__(self,pre_trained='bert-base-uncased'): super().__init__() self.bert = BertModel.from_pretrained(pre_trained) self.hidden_size = self.bert.config.hidden_size self.LSTM = 
nn.LSTM(self.hidden_size,self.hidden_size,bidirectional=True) self.clf = nn.Linear(self.hidden_size*2,1) def forward(self,inputs): encoded_layers, pooled_output = self.bert(input_ids=inputs[0],attention_mask=inputs[1]) encoded_layers = encoded_layers.permute(1, 0, 2) enc_hiddens, (last_hidden, last_cell) = self.LSTM(pack_padded_sequence(encoded_layers, inputs[2])) output_hidden = torch.cat((last_hidden[0], last_hidden[1]), dim=1) output_hidden = F.dropout(output_hidden,0.2) output = self.clf(output_hidden) return F.softmax(output,dim=1) model = RFClassifier(6,'bert-base-multilingual-cased') #model = MixedBertModel() model = model.to(device) EPOCHS = 3 optimizer = AdamW(model.parameters(), lr=2e-5, correct_bias=False) total_steps = len(train_data_loader) * EPOCHS scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=total_steps ) loss_fn = nn.CrossEntropyLoss().to(device) def train_epoch(model,data_loader,loss_fn,optimizer,device,scheduler,n_examples): model = model.train() losses = [] correct_predictions = 0 for data in data_loader: input_ids = data['input_ids'].to(device) attention_mask = data['attention_mask'].to(device) labels = data['label'].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs,labels) correct_predictions += torch.sum(preds == labels) losses.append(loss.item()) loss.backward() nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() scheduler.step() optimizer.zero_grad() return correct_predictions.double() / n_examples, np.mean(losses) def eval_model(model, data_loader, loss_fn, device, n_examples): model = model.eval() losses = [] correct_predictions = 0 with torch.no_grad(): for d in data_loader: input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) labels = d["label"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = 
torch.max(outputs, dim=1) loss = loss_fn(outputs, labels) correct_predictions += torch.sum(preds == labels) losses.append(loss.item()) return correct_predictions.double() / n_examples, np.mean(losses) import time def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs from collections import defaultdict import torch history = defaultdict(list) best_accuracy = 0 for epoch in range(EPOCHS): start_time = time.time() train_acc,train_loss = train_epoch( model, train_data_loader, loss_fn, optimizer, device, scheduler, len(train) ) val_acc,val_loss = eval_model( model, val_data_loader, loss_fn, device, len(val) ) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s') print(f'Train Loss {train_loss} accuracy {train_acc}') print(f'Val Loss {val_loss} accuracy {val_acc}') print() history['train_acc'].append(train_acc) history['train_loss'].append(train_loss) history['val_acc'].append(val_acc) history['val_loss'].append(val_loss) if val_acc > best_accuracy: torch.save(model.state_dict(),'bert-base-uncased.bin') best_accuracy = val_acc import matplotlib.pyplot as plt plt.plot(history['train_acc'], label='train accuracy') plt.plot(history['val_acc'], label='validation accuracy') plt.title('Training history') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend() #plt.ylim([0, 1]); val_acc, _ = eval_model( model, val_data_loader, loss_fn, device, len(val) #Change it to test when you have the test results ) val_acc.item() def get_predictions(model, data_loader): model = model.eval() sentence = [] predictions = [] prediction_probs = [] real_values = [] with torch.no_grad(): for d in data_loader: texts = d["text"] input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) labels = d["label"].to(device) outputs = 
model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) sentence.extend(texts) predictions.extend(preds) prediction_probs.extend(outputs) real_values.extend(labels) predictions = torch.stack(predictions).cpu() prediction_probs = torch.stack(prediction_probs).cpu() real_values = torch.stack(real_values).cpu() return sentence, predictions, prediction_probs, real_values test_data_loader = create_data_loader(eval,tokenizer,MAX_LEN,BATCH_SIZE,True) class_name = ['Not_offensive' ,'Offensive_Untargetede' ,'Offensive_Targeted_Insult_Group' , 'Offensive_Targeted_Insult_Individual' ,'not-Tamil' , 'Offensive_Targeted_Insult_Other' ] y_review_texts, y_pred, y_pred_probs, y_test = get_predictions( model, val_data_loader ) from sklearn.metrics import classification_report,confusion_matrix print(classification_report(y_test, y_pred, target_names=class_name,zero_division=0)) import seaborn as sns def show_confusion_matrix(confusion_matrix): hmap = sns.heatmap(confusion_matrix, annot=True, fmt="d", cmap="Blues") hmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation=0, ha='right') hmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation=30, ha='right') plt.ylabel('True sentiment') plt.xlabel('Predicted sentiment'); cm = confusion_matrix(y_test, y_pred) df_cm = pd.DataFrame(cm, index=class_name, columns=class_name) show_confusion_matrix(df_cm) ```
<h2>SMS Spam Detection</h2>

See <a href="https://www.kaggle.com/uciml/sms-spam-collection-dataset">Kaggle.com</a>

```
import numpy as np
import numpy.core.defchararray as npf
from sklearn.preprocessing import LabelEncoder

# load data
import pandas as pd
#sms = pd.read_csv("../data/spam.csv", encoding="latin-1")
sms = pd.read_csv("../data/spam_utf8.csv")
sms = sms.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], axis=1)
sms = sms.rename(columns={'v1': 'label', 'v2': 'message'})
X_raw = sms["message"]
y_raw = sms["label"]
print(X_raw[5])
print(y_raw[5])

# strip quotes and punctuation from the messages
for i in range(0, X_raw.shape[0]):
    X_raw[i] = str(npf.replace(X_raw[i], "'", ""))
    X_raw[i] = str(npf.replace(X_raw[i], "å", ""))
    X_raw[i] = str(npf.replace(X_raw[i], "!", ""))
    X_raw[i] = str(npf.replace(X_raw[i], "?", ""))
    X_raw[i] = str(npf.replace(X_raw[i], ".", " "))
    X_raw[i] = str(npf.replace(X_raw[i], "\"", ""))
print(X_raw[5])

# Convert class label strings to integers
encoder = LabelEncoder()
encoder.fit(y_raw)
y = encoder.transform(y_raw)

#np_data = df.values
# split data into X and y
#X_raw = np_data[:,0:-1]
#y_raw = pd.factorize(np_data[:,-1])[0]
#print(X_raw[4])
#print(y_raw[2])
# set seed to randomizer
#seed = 7
# flatten input matrix to vector
#X_raw = X_raw.ravel()

print("Instances: {}".format(X_raw.shape[0]))
```

<h2>Convert to bag of words</h2>

```
from sklearn.feature_extraction.text import CountVectorizer

def gen_bagofwords(X_raw, ng_min=1, ng_max=1, df_min=1):
    count_vect = CountVectorizer(ngram_range=(ng_min, ng_max), min_df=df_min)
    X = count_vect.fit_transform(X_raw)
    print("Bag-of-words size: {}".format(X.shape[1]))
    return X
```

<h2>Convert from occurrences to frequencies</h2>

```
from sklearn.feature_extraction.text import TfidfTransformer

def conv_frequencies(X):
    tf_transformer = TfidfTransformer(sublinear_tf=True).fit(X)
    X = tf_transformer.transform(X)
    return X
```

<h2>Functions for evaluating model accuracy</h2>

```
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def evaluate_test(model):
    print("\n-- Test set --")
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=111, stratify=y)
    # train model on training dataset
    model.fit(X_train, y_train)
    # evaluate on the held-out test set
    y_pred = model.predict(X_test)
    # calculate accuracy
    accuracy = accuracy_score(y_test, y_pred)
    print("Accuracy: %.2f%%" % (accuracy * 100.0))
    # confusion matrix
    print("Confusion Matrix:")
    conf_mx = confusion_matrix(y_test, y_pred)
    print(conf_mx)

def evaluate_cv(model):
    print("\n-- 5-fold CV --")
    # 5-fold CV
    y_pred = cross_val_predict(model, X, y, cv=5)
    # calculate accuracy
    accuracy = accuracy_score(y, y_pred)
    print("Average accuracy: %.2f%%" % (accuracy * 100.0))
    # confusion matrix
    print("Confusion Matrix:")
    conf_mx = confusion_matrix(y, y_pred)
    print(conf_mx)
```

<h2>Naive Bayes</h2>

```
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

#X = gen_bagofwords(X_raw, df_min=0.001)
X = gen_bagofwords(X_raw)
X = conv_frequencies(X)
model = MultinomialNB(alpha=0.01)
evaluate_test(model)
evaluate_cv(model)

X = gen_bagofwords(X_raw, 1, 2, df_min=0.001)
#X = gen_bagofwords(X_raw)
X = conv_frequencies(X)
model = MultinomialNB(alpha=0.01)
evaluate_test(model)
evaluate_cv(model)
```

<h2>SVM</h2>

```
from sklearn import svm

X = gen_bagofwords(X_raw, df_min=0.001)
#X = gen_bagofwords(X_raw)
X = conv_frequencies(X)
model = svm.LinearSVC(random_state=42)
evaluate_test(model)
evaluate_cv(model)

X = gen_bagofwords(X_raw, 1, 2, df_min=0.001)
#X = gen_bagofwords(X_raw)
X = conv_frequencies(X)
model = svm.LinearSVC(random_state=42)
evaluate_test(model)
evaluate_cv(model)
```

<h2>Pipeline example</h2>

```
from sklearn.pipeline import Pipeline

X = X_raw.ravel()
model = Pipeline([('vect', CountVectorizer(stop_words='english')),
                  ('tfidf', TfidfTransformer()),
                  ('clf', MultinomialNB(alpha=.01))])
evaluate_test(model)
evaluate_cv(model)
```
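The per-character replacement loop used in the preprocessing step above can be written more compactly with Python's built-in `str.translate`; a minimal, standalone sketch (not tied to the dataframe, and the function name is illustrative):

```python
def clean_message(text: str) -> str:
    # Mirror the replacements from the loop above: drop quotes and
    # punctuation outright, and turn full stops into spaces.
    table = str.maketrans({"'": None, "å": None, "!": None,
                           "?": None, '"': None, ".": " "})
    return text.translate(table)

print(clean_message("Hello! How are you?"))  # Hello How are you
```

Building the translation table once and applying it per message avoids re-scanning each string for every character to replace.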
``` import matplotlib.pyplot as plt import pandas as pd import numpy as np from numpy import exp, abs, log from scipy.special import gamma, factorial import os, sys import scipy.stats as stats import statsmodels.api as sm from utils import * import time import datetime as dt import universal as up from universal import tools, algos from universal.algos import * from strategies import * from metrics import * import universal as up from universal import tools, algos from universal.result import AlgoResult, ListResult import random np.set_printoptions(suppress=True) np.set_printoptions(precision=4) np.set_printoptions(threshold=np.inf) np.random.RandomState(seed=12345) import warnings warnings.filterwarnings('ignore') ``` ``` def valid_G4P_passive(strategy, paras, phs, _eps_rng_delta=0.01, max_iter=5): def valid_G4P_passive_run(_eps): paras['passive_eps'] = _eps # paras['passive_gamma'] = _gamma maxiter, subn, lb, c = paras['maxiter'], paras['subn'], paras['lb'], paras['c'] results = [] concat_result_dict = {} data_path1 = './stock_data/TSE_stock_phase%02d_lb%d.npz' % (phs, lb) data = np.load(data_path1) cov_train, choice = data['cov_train'], data['choice'] if paras['inverse']: choice = choice[::-1] cov = cov_train[choice[:subn], :][:, choice[:subn]] predY0, std_varY0, sample_Y0 = get_res(paras, phs, maxiter, subn, lb) result_dict = {} strategy_name = strategy.split('(')[0] rt_v, x_vec = eval(strategy) result_dict[strategy_name] = [rt_v.copy(), x_vec.copy(), [sample_Y0.copy()]] if strategy_name in concat_result_dict: concat_result_dict[strategy_name][0] += rt_v.copy() concat_result_dict[strategy_name][1] += x_vec.copy() concat_result_dict[strategy_name][2] += [sample_Y0.copy()] else: concat_result_dict[strategy_name] = [rt_v.copy(), x_vec.copy(), [sample_Y0.copy()]] results.append(result_dict) rt_v, x_vec, sample_Y0 = concat_result_dict[strategy_name] B = np.concatenate([v.reshape(1,-1) for v in x_vec], axis=0) B = pd.DataFrame(B) Y = 
pd.DataFrame(np.concatenate(sample_Y0, axis=0)) result = AlgoResult(Y, B) result.set_rf_rate(paras['rf']) result.fee = 0.0025 tw = result.total_wealth print(tw) return tw cov = matern32() gp = GaussianProcess(cov, optimize=False, usegrads=True) acq = Acquisition(mode='Entropy') # param = {'_eps': ('cont', _eps_rng), # '_gamma': ('cont', _gamma_rng)} _eps_rng = [paras['passive_eps']-_eps_rng_delta, paras['passive_eps']+_eps_rng_delta] param = {'_eps': ('cont', _eps_rng)} gpgo = GPGO(gp, acq, valid_G4P_passive_run, param) gpgo.run(max_iter=max_iter) rs=gpgo.getResult()[0] _eps = rs['_eps'] return _eps def run(paras, strategy_lst, figsize=(10,5), phase_cnt=12, disp=True): if not disp: old_stdout = sys.stdout sys.stdout = open(os.devnull, 'w') maxiter, subn, lb, c = paras['maxiter'], paras['subn'], paras['lb'], paras['c'] results = [] concat_result_dict = {} for phs in range(0, phase_cnt): print('Phase %d' % phs) data_path1 = './stock_data/TSE_stock_phase%02d_lb%d.npz' % (phs, lb) data = np.load(data_path1) cov_train, choice = data['cov_train'], data['choice'] if paras['inverse']: choice = choice[::-1] cov = cov_train[choice[:subn], :][:, choice[:subn]] predY0, std_varY0, sample_Y0 = get_res(paras, phs, maxiter, subn, lb) result_dict = {} for strategy in strategy_lst: strategy_name = strategy.split('(')[0] print(strategy_name) # if phs > 0: # start validation # if strategy_name=='G4P_passive': # _eps = valid_G4P_passive(strategy, paras, phs-1) # paras['passive_eps'] = _eps # paras['passive_gamma'] = _gamma # elif strategy_name=='G4P': # _gamma = valid_G4P(strategy, paras, phs-1) # paras['gamma'] = _gamma rt_v, x_vec = eval(strategy) result_dict[strategy_name] = [rt_v.copy(), x_vec.copy(), [sample_Y0.copy()]] if strategy_name in concat_result_dict: concat_result_dict[strategy_name][0] += rt_v.copy() concat_result_dict[strategy_name][1] += x_vec.copy() concat_result_dict[strategy_name][2] += [sample_Y0.copy()] else: concat_result_dict[strategy_name] = [rt_v.copy(), 
x_vec.copy(), [sample_Y0.copy()]] if disp: disp_k = 3 fig, axs = plt.subplots(disp_k, disp_k, figsize=(30,6)) for i in range(disp_k): ixv = random.randint(0,len(x_vec)-4) for j in range(disp_k): xv = x_vec[ixv+j] axs[i,j].bar(range(len(xv)),xv) plt.show() results.append(result_dict) if not disp: sys.stdout = old_stdout return results, concat_result_dict def display(strategy_name, concat_results, paras, figsize=(10,5)): rt_v, x_vec, sample_Y0 = concat_results[strategy_name] B = np.concatenate([v.reshape(1,-1) for v in x_vec], axis=0) B = pd.DataFrame(B) Y = pd.DataFrame(np.concatenate(sample_Y0, axis=0)) result = AlgoResult(Y, B) result.set_rf_rate(paras['rf']) print('========================================================') print(strategy_name) print('--------------------------------------------------------') print('fee = 0') print(result.summary()) print('Total wealth:', result.total_wealth) plt.figure(figsize=figsize) result.plot(weights=False, assets=False, ucrp=False, logy=False) plt.show() print('--------------------------------------------------------') c=0.0025 print('fee = %f'%c) result.fee = c print(result.summary()) print('Total wealth:', result.total_wealth) plt.figure(figsize=figsize) result.plot(weights=False, assets=False, ucrp=False, logy=False) plt.show() print('--------------------------------------------------------') c=0.005 print('fee = %f'%c) result.fee = c print(result.summary()) print('Total wealth:', result.total_wealth) plt.figure() result.plot(weights=False, assets=False, ucrp=False, logy=False) plt.show() print() def display_phs(strategy_name, results, phs, paras, figsize=(10,5)): print('========================================================') print('Phase %d'%phs) for strategy in strategy_lst: strategy_name = strategy.split('(')[0] rt_v, x_vec, sample_Y0 = results[phs][strategy_name] B = np.concatenate([v.reshape(1,-1) for v in x_vec], axis=0) B = pd.DataFrame(B) Y = pd.DataFrame(sample_Y0[0]) # print(B.shape) # print(Y.shape) result 
= AlgoResult(Y, B) result.set_rf_rate(paras['rf']) print(strategy_name) print('--------------------------------------------------------') print('fee = 0') print(result.summary()) print('Total wealth:', result.total_wealth) strategy_lst = ['G4P_passive(predY0, std_varY0, sample_Y0, cov, paras["passive_eps"], paras["passive_gamma"])'] paras={'dataset':'TSE', 'maxiter':500, 'subn':100, 'lb':5, 'M':50, 'inverse':'True', 'ggd':'True', 'c':0.001, 'rf':0.0007, 'opt_gamma':0.0, 'passive_eps':1.5, 'passive_gamma':0.001, 'mattype':-2, 'dg':0.85 } results, concat_results = run(paras, strategy_lst, phase_cnt=9) # plot_cumulative_return_history(concat_results, strategy_lst, figsize=(10,5)) for strategy in strategy_lst: strategy_name = strategy.split('(')[0] display(strategy_name, concat_results, paras) strategy_lst = ['G4P(predY0, std_varY0, sample_Y0, cov, paras["gamma"])'] paras={'dataset':'TSE', 'maxiter':500, 'subn':100, 'lb':5, 'M':50, 'inverse':'True', 'ggd':'True', 'c':0.001, 'rf':0.0007, 'gamma':0.01, 'passive_eps':1, 'passive_gamma':0, 'mattype':-2, 'dg':0.85 } results, concat_results = run(paras, strategy_lst, phase_cnt=9) for strategy in strategy_lst: strategy_name = strategy.split('(')[0] display(strategy_name, concat_results, paras) ```
# TNBoW Experiments © 2020 Nokia Licensed under the BSD 3 Clause license SPDX-License-Identifier: BSD-3-Clause ``` %load_ext autoreload %autoreload 2 from pathlib import Path import json import os os.environ["snippets_collection"] = "so-ds-feb20" os.environ["valid_dataset"] = "so-ds-feb20-valid" os.environ["test_dataset"] = "so-ds-feb20-test" os.environ["text_input_raw"] = "so-python-question-titles-feb20" output_dir = Path("test") os.environ["output_dir"] = str(output_dir) if not output_dir.exists(): output_dir.mkdir() ``` ## python SO titles ### preprocessing hyper-params ``` text_overrides = [{}, {"lemmatize": False}, {"remove_stop": False}] text_inputs = ["SO-python-question-titles-feb20-lemma.tok.txt", "SO-python-question-titles-feb20.tok.txt", "SO-python-question-titles-feb20-stopwords.tok.txt"] for i, (text_overrides_, text_input) in enumerate(zip(text_overrides, text_inputs)): os.environ["text_overrides"] = json.dumps(text_overrides_) os.environ["text_input"] = text_input output_base = str(output_dir/f"python_so_preprocess{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 ``` ### fast text hyper-params #### num epochs ``` os.environ["text_overrides"] = "{}" os.environ["text_input"] = "SO-python-question-titles-feb20-lemma.tok.txt" fast_text_overrides = [{"epoch": 10}, {"epoch": 20}, {"epoch": 30}, {"epoch": 40}, {"epoch": 50}, {"epoch": 100}, {"epoch": 200}, {"epoch": 300}, {"epoch": 500}] for i, (fast_text_overrides_) in enumerate(fast_text_overrides): os.environ["fast_text_overrides"] = json.dumps(fast_text_overrides_) output_base = str(output_dir/f"python_so_fasttext_epochs{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 os.environ["text_overrides"] = "{}" os.environ["text_input"] = "SO-python-question-titles-feb20-lemma.tok.txt" fast_text_overrides = [ {"lr": 0.05, "epoch": 200}, {"lr": 0.2, "epoch": 200}] 
for i, (fast_text_overrides_) in enumerate(fast_text_overrides): os.environ["fast_text_overrides"] = json.dumps(fast_text_overrides_) output_base = str(output_dir/f"python_so_fasttext_lr{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 os.environ["text_overrides"] = "{}" os.environ["text_input"] = "SO-python-question-titles-feb20-lemma.tok.txt" fast_text_overrides = [ {"epoch": 200}] * 5 + [{"epoch": 50}] * 5 + [{"epoch": 100}] * 5 + [{"epoch": 150}] * 5 for i, (fast_text_overrides_) in enumerate(fast_text_overrides): os.environ["fast_text_overrides"] = json.dumps(fast_text_overrides_) output_base = str(output_dir/f"python_so_fasttext_epochs.2{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 ``` #### minCount ``` os.environ["text_overrides"] = "{}" os.environ["text_input"] = "SO-python-question-titles-feb20-lemma.tok.txt" fast_text_overrides = [{"epoch": 200, "minCount": 1},{"epoch": 200, "minCount": 3}, {"epoch": 200, "minCount": 5}, {"epoch": 200, "minCount": 10} ] for i, (fast_text_overrides_) in enumerate(fast_text_overrides): os.environ["fast_text_overrides"] = json.dumps(fast_text_overrides_) output_base = str(output_dir/f"python_so_fasttext_minCount{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 ``` #### window size ``` os.environ["text_overrides"] = "{}" os.environ["text_input"] = "SO-python-question-titles-feb20-lemma.tok.txt" fast_text_overrides = [{"epoch": 150, "ws": 10}] * 5 + [{"epoch": 150, "ws": 20}] * 5 + [{"epoch": 200, "ws": 10}] * 5 for i, (fast_text_overrides_) in enumerate(fast_text_overrides): os.environ["fast_text_overrides"] = json.dumps(fast_text_overrides_) output_base = str(output_dir/f"python_so_fasttext_windowsize{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base 
--ExecutePreprocessor.timeout=3600 ``` #### recheck non-lemmatized with optimal fasttext hyperparams ``` os.environ["text_overrides"] = json.dumps({'lemmatize': False}) os.environ["text_input"] = "SO-python-question-titles-feb20.tok.txt" fast_text_overrides = [{"epoch": 300, "ws": 10}] for i, (fast_text_overrides_) in enumerate(fast_text_overrides): os.environ["fast_text_overrides"] = json.dumps(fast_text_overrides_) output_base = str(output_dir/f"python_so_fasttext_nolemma{i}") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 ``` ### Save optimal ``` os.environ["text_overrides"] = '{}' os.environ["text_input"] = "SO-python-question-titles-feb20-lemma.tok.txt" os.environ["fast_text_overrides"] = json.dumps({"epoch": 300}) os.environ["model_filename"] = str(output_dir/"best_tnbow_embedder") output_base = str(output_dir/f"best_tnbow") !python -m nbconvert tnbow.ipynb --execute --NbConvertApp.output_base=$output_base --ExecutePreprocessor.timeout=3600 ```
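The sweeps above pass hyper-parameter overrides to the executed notebook through environment variables holding JSON dicts. Inside `tnbow.ipynb` (not shown here), the receiving side would presumably merge those overrides into a set of defaults; a sketch of that pattern, with illustrative default values and function name:

```python
import json
import os

# Illustrative defaults; the real notebook's defaults are not shown here.
DEFAULT_FAST_TEXT = {"epoch": 5, "lr": 0.1, "minCount": 5, "ws": 5}

def load_overrides(var_name: str, defaults: dict) -> dict:
    # Read a JSON dict from the environment (empty dict if unset)
    # and let its entries take precedence over the defaults.
    overrides = json.loads(os.environ.get(var_name, "{}"))
    return {**defaults, **overrides}

os.environ["fast_text_overrides"] = json.dumps({"epoch": 200, "ws": 10})
params = load_overrides("fast_text_overrides", DEFAULT_FAST_TEXT)
print(params)  # {'epoch': 200, 'lr': 0.1, 'minCount': 5, 'ws': 10}
```

Because the overrides travel as JSON strings, the same driver notebook can sweep any subset of parameters without changing the executed notebook's code.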
```
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')

df = pd.read_csv('creditcard.csv')
df.head()
df.isnull().sum().sum()
df.Class.value_counts()

# creating independent and dependent variables
X = df.drop(['Class'], axis=1)
y = df['Class']

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
```

#### Cross-validation with KFold, hyperparameter tuning with GridSearchCV

```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV

lr = LogisticRegression()
# hyperparameter grid
grid = {'C': 10.0 ** np.arange(-2, 3), 'penalty': ['l1', 'l2']}
# 5-fold cross-validation
cv = KFold(n_splits=5, random_state=None, shuffle=False)
lregression = GridSearchCV(lr, grid, cv=cv, n_jobs=-1, scoring='f1_macro')
lregression.fit(X_train, y_train)
y_pred = lregression.predict(X_test)
print(confusion_matrix(y_pred, y_test))
print(accuracy_score(y_pred, y_test))
print(classification_report(y_pred, y_test))
```

#### Ensemble techniques with the class_weight parameter

```
from sklearn.ensemble import RandomForestClassifier

# class_weight can also be 'balanced' or 'balanced_subsample'; here we assign
# more importance to class 1 because the dataset is imbalanced
Rfc = RandomForestClassifier(class_weight={0: 1, 1: 100})
Rfc.fit(X_train, y_train)
y_pred = Rfc.predict(X_test)
print(confusion_matrix(y_pred, y_test))
print(accuracy_score(y_pred, y_test))
print(classification_report(y_pred, y_test))
```

#### Without any parameters, using the defaults

```
from sklearn.ensemble import RandomForestClassifier

Rfc = RandomForestClassifier()
Rfc.fit(X_train, y_train)
y_pred = Rfc.predict(X_test)
print(confusion_matrix(y_pred, y_test))
print(accuracy_score(y_pred, y_test))
print(classification_report(y_pred, y_test))
```

#### Undersampling method:

This technique reduces the majority class down to the size of the class containing the fewest samples. Suppose class A has 900 samples and class B has 100 samples; the imbalance ratio is 9:1. With undersampling we keep all 100 samples of class B and randomly select 100 of the 900 samples of class A. The ratio then becomes 1:1 and the data is balanced.

```
from collections import Counter
Counter(y_train)

from imblearn.under_sampling import NearMiss
ns = NearMiss(0.8)
X_train, y_train = ns.fit_resample(X_train, y_train)

from sklearn.ensemble import RandomForestClassifier
Rfc = RandomForestClassifier()
Rfc.fit(X_train, y_train)
y_pred = Rfc.predict(X_test)
print(confusion_matrix(y_pred, y_test))
print(accuracy_score(y_pred, y_test))
print(classification_report(y_pred, y_test))
```

#### Oversampling method:

Oversampling is the opposite of undersampling: the minority class is grown to match the majority class by adding more samples to it. In the same example, class A stays at 900 and class B is raised from 100 to 900, so the ratio again becomes 1:1 and the data is balanced.

```
from imblearn.over_sampling import RandomOverSampler

OS = RandomOverSampler(1)
x_train_os, y_train_os = OS.fit_resample(X_train, y_train)

from sklearn.ensemble import RandomForestClassifier
Rfc = RandomForestClassifier()
Rfc.fit(x_train_os, y_train_os)
y_pred = Rfc.predict(X_test)
print(confusion_matrix(y_pred, y_test))
print(accuracy_score(y_pred, y_test))
print(classification_report(y_pred, y_test))
```

#### SMOTETomek sampling technique:

SMOTETomek sits between oversampling and undersampling: it is a hybrid method that combines an undersampling method (Tomek links) with an oversampling method (SMOTE).

```
from imblearn.combine import SMOTETomek

st = SMOTETomek(0.85)
X_train_st, y_train_st = st.fit_resample(X_train, y_train)
print("The number of classes before fit {}".format(Counter(y_train)))
print("The number of classes after fit {}".format(Counter(y_train_st)))

from sklearn.ensemble import RandomForestClassifier
Rfc = RandomForestClassifier()
Rfc.fit(X_train_st, y_train_st)
y_pred = Rfc.predict(X_test)
print(confusion_matrix(y_pred, y_test))
print(accuracy_score(y_pred, y_test))
print(classification_report(y_pred, y_test))
```

#### EasyEnsembleClassifier method:

```
from imblearn.ensemble import EasyEnsembleClassifier

easy = EasyEnsembleClassifier()
easy.fit(X_train, y_train)
y_pred = easy.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

#### BalancedRandomForestClassifier method:

```
from imblearn.ensemble import BalancedRandomForestClassifier

brf = BalancedRandomForestClassifier()
brf.fit(X_train, y_train)
y_pred = brf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
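The random undersampling idea described earlier (sample the 900-instance class down to 100 to match the smaller class) can be sketched without `imblearn` in a few lines of standard Python; this is an illustration only, not the library's algorithm:

```python
import random

def random_undersample(X, y, seed=0):
    # Group sample indices by class label.
    by_class = {}
    for i, label in enumerate(y):
        by_class.setdefault(label, []).append(i)
    # Randomly sample every class down to the size of the smallest one.
    n_min = min(len(idx) for idx in by_class.values())
    rng = random.Random(seed)
    keep = sorted(i for idx in by_class.values()
                  for i in rng.sample(idx, n_min))
    return [X[i] for i in keep], [y[i] for i in keep]

X = list(range(1000))
y = [0] * 900 + [1] * 100            # 9:1 imbalance, as in the example above
X_bal, y_bal = random_undersample(X, y)
print(y_bal.count(0), y_bal.count(1))  # 100 100
```

Library samplers such as `NearMiss` choose *which* majority samples to drop more carefully (based on nearest-neighbour distances) rather than uniformly at random.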
<table align="center"> <td align="center"><a target="_blank" href="http://introtodeeplearning.com"> <img src="http://introtodeeplearning.com/images/colab/mit.png" style="padding-bottom:5px;" /> Visit MIT Deep Learning</a></td> <td align="center"><a target="_blank" href="https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab3/RL.ipynb"> <img src="http://introtodeeplearning.com/images/colab/colab.png?v2.0" style="padding-bottom:5px;" />Run in Google Colab</a></td> <td align="center"><a target="_blank" href="https://github.com/aamini/introtodeeplearning/blob/master/lab3/RL.ipynb"> <img src="http://introtodeeplearning.com/images/colab/github.png" height="70px" style="padding-bottom:5px;" />View Source on GitHub</a></td> </table> # Copyright Information ``` # Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved. # # Licensed under the MIT License. You may not use this file except in compliance # with the License. Use and/or modification of this code outside of 6.S191 must # reference: # # © MIT 6.S191: Introduction to Deep Learning # http://introtodeeplearning.com # ``` # Laboratory 3: Reinforcement Learning Reinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decision. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions. In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". 
![alt text](https://www.kdnuggets.com/images/reinforcement-learning-fig1-700.jpg) While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator; 2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real-world; and 4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent, that is trying, within its environment, to find the optimal way to achieve a goal. The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space. 2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels. Let's get started! First we'll import TensorFlow, the course package, and some dependencies. 
```
!apt-get install -y xvfb python-opengl x11-utils > /dev/null 2>&1
!pip install gym pyvirtualdisplay scikit-video > /dev/null 2>&1
%tensorflow_version 2.x
import tensorflow as tf

import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
from tqdm import tqdm

!pip install mitdeeplearning
import mitdeeplearning as mdl
```

Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:

1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.
2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.
3. **Define a reward function**: describes the reward associated with an action or sequence of actions.
4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.

# Part 1: Cartpole

## 3.1 Define the Cartpole environment and agent

### Environment

In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/#classic_control) by passing the environment name to the `make` function.

One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions.
As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same way each time.

```
### Instantiate the Cartpole environment ###

env = gym.make("CartPole-v0")
env.seed(1)
```

In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:

<img width="400px" src="https://danielpiedrahita.files.wordpress.com/2017/02/cart-pole.png"></img>

Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take.

First, let's consider the observation space. In this Cartpole environment our observations are:

1. Cart position
2. Cart velocity
3. Pole angle
4. Pole rotation rate

We can confirm the size of the space by querying the environment's observation space:

```
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
```

Second, we consider the action space. At every time step, the agent can move either right or left.
Again we can confirm the size of the action space by querying the environment: ``` n_actions = env.action_space.n print("Number of possible actions that the agent can choose from =", n_actions) ``` ### Cartpole agent Now that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API. ``` ### Define the Cartpole agent ### # Defines a feed-forward neural network def create_cartpole_model(): model = tf.keras.models.Sequential([ # First Dense layer tf.keras.layers.Dense(units=32, activation='relu'), tf.keras.layers.Dense(units=n_actions, activation=None) ]) return model cartpole_model = create_cartpole_model() ``` Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. **Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!** ``` ### Define the agent's action function ### # Function that takes observations as input, executes a forward pass through model, # and outputs a sampled action. 
# Arguments:
#   model: the network that defines our agent
#   observation: observation which is fed as input to the model
# Returns:
#   action: choice of agent action
def choose_action(model, observation):
  # add batch dimension to the observation
  observation = np.expand_dims(observation, axis=0)

  # feed the observation through the model to predict the log probabilities (logits) of each possible action
  logits = model.predict(observation)

  # pass the logits through a softmax to compute true probabilities
  prob_weights = tf.nn.softmax(logits).numpy()

  # randomly sample from prob_weights to pick an action; note the dimensionality:
  # the input probabilities form a vector, the output action is a scalar
  action = np.random.choice(n_actions, size=1, p=prob_weights.flatten())[0]

  return action
```

## 3.2 Define the agent's memory

Now that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:

1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.
2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.
3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.

In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple memory buffer that contains the agent's observations, actions, and received rewards from a given episode.
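To make the sampling step inside `choose_action` concrete before moving on, here is the same softmax-then-sample logic sketched in plain numpy — a sketch with made-up logits, independent of the TensorFlow model:

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability before exponentiating
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

rng = np.random.default_rng(seed=1)
logits = np.array([2.0, 0.5])        # hypothetical network output for 2 actions
prob_weights = softmax(logits)       # a probability vector summing to 1

# sample a single scalar action index from the probability vector
action = rng.choice(len(prob_weights), p=prob_weights)
print(prob_weights, action)
```

Because we sample rather than always taking the argmax, the agent still explores lower-probability actions — essential early in training.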
**Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**

```
### Agent Memory ###

class Memory:
  def __init__(self):
      self.clear()

  # Resets/restarts the memory buffer
  def clear(self):
      self.observations = []
      self.actions = []
      self.rewards = []

  # Add observations, actions, rewards to memory
  def add_to_memory(self, new_observation, new_action, new_reward):
      self.observations.append(new_observation)
      # update the list of actions with the new action
      self.actions.append(new_action)
      # update the list of rewards with the new reward
      self.rewards.append(new_reward)

memory = Memory()
```

## 3.3 Reward function

We're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. The idea of discounting rewards is similar to discounting money in the case of interest.

To compute the expected cumulative reward, known as the **return**, at a given timestep $t$ in a learning episode, we sum the discounted rewards expected from that time step onward, projecting into the future. We define the return (cumulative reward) at time step $t$, $R_{t}$, as:

>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$

where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ increments the projection into the future within a single learning episode.
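As a quick numeric sanity check of the return formula above (plain Python, with an illustrative $\gamma$): for rewards $[1, 1, 1]$ and $\gamma = 0.5$, the return at $t=0$ is $1 + 0.5 + 0.25 = 1.75$.

```python
gamma = 0.5
rewards = [1.0, 1.0, 1.0]

# R_0 = sum_k gamma^k * r_k over the (finite) episode
R0 = sum((gamma ** k) * r for k, r in enumerate(rewards))
print(R0)  # 1.75
```

Note how each later reward contributes less — this is exactly the depreciation effect discussed next.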
Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.

Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length of the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory.

What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.

```
### Reward function ###

# Helper function that normalizes an np.array x
def normalize(x):
  x -= np.mean(x)
  x /= np.std(x)
  return x.astype(np.float32)

# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
#   rewards: reward at timesteps in episode
#   gamma: discounting factor
# Returns:
#   normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
  discounted_rewards = np.zeros_like(rewards)
  R = 0
  for t in reversed(range(0, len(rewards))):
      # update the total discounted reward
      R = R * gamma + rewards[t]
      discounted_rewards[t] = R

  return normalize(discounted_rewards)
```

## 3.4 Learning algorithm

Now we can start to define the learning algorithm, which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods, which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions.
We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.

Since the log function is monotonically increasing, minimizing the **negative likelihood** is equivalent to minimizing the **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization.

Let's begin by defining the loss function.

```
### Loss function ###

# Arguments:
#   logits: network's predictions for actions to take
#   actions: the actions the agent took in an episode
#   rewards: the rewards the agent received in an episode
# Returns:
#   loss
def compute_loss(logits, actions, rewards):
  # compute the negative log probabilities of the chosen actions
  neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=actions)

  # scale the negative log probabilities by the rewards
  loss = tf.reduce_mean(neg_logprob * rewards)
  return loss
```

Now let's use the loss function to define a training step of our learning algorithm:

```
### Training step (forward and backpropagation) ###

def train_step(model, optimizer, observations, actions, discounted_rewards):
  with tf.GradientTape() as tape:
      # Forward propagate through the agent network
      logits = model(observations)

      # compute the loss
      loss = compute_loss(logits, actions, discounted_rewards)

  # run backpropagation to minimize the loss using model.trainable_variables
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

## 3.5 Run cartpole!
Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses? ``` ### Cartpole training! ### # Learning rate and optimizer learning_rate = 1e-3 optimizer = tf.keras.optimizers.Adam(learning_rate) # instantiate cartpole agent cartpole_model = create_cartpole_model() # to track our progress smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9) plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards') if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists for i_episode in range(1000): plotter.plot(smoothed_reward.get()) # Restart the environment observation = env.reset() memory.clear() while True: # using our observation, choose an action and take it in the environment action = choose_action(cartpole_model, observation) next_observation, reward, done, info = env.step(action) # add to memory memory.add_to_memory(observation, action, reward) # is the episode over? did you crash or do so well that you're done? if done: # determine total reward and keep a record of this total_reward = sum(memory.rewards) smoothed_reward.append(total_reward) # initiate training - remember we don't know anything about how the # agent is doing until it has crashed! 
      train_step(cartpole_model, optimizer,
                 observations=np.vstack(memory.observations),
                 actions=np.array(memory.actions),
                 discounted_rewards=discount_rewards(memory.rewards))

      # reset the memory
      memory.clear()
      break

    # update our observations
    observation = next_observation
```

To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!

Let's display the saved video to watch how our agent did!

```
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
```

How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more?
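One detail worth unpacking from the training loop: `smoothed_reward` tracks an exponentially smoothed average of episode rewards so the progress plot is readable. A minimal sketch of that idea — assuming `LossHistory` behaves like a standard exponential moving average with `smoothing_factor=0.9` (its actual implementation lives in the `mitdeeplearning` package):

```python
# sketch of an exponential moving average with smoothing factor 0.9
def ema(values, smoothing=0.9):
    out, avg = [], None
    for v in values:
        # first value seeds the average; afterwards blend old average with new value
        avg = v if avg is None else smoothing * avg + (1 - smoothing) * v
        out.append(avg)
    return out

print(ema([10.0, 10.0, 30.0]))  # [10.0, 10.0, 12.0]
```

A sudden jump from 10 to 30 moves the smoothed value only to 12, which is why the reward curve rises gradually even when individual episodes are noisy.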
# The Basics: Training Your First Model

Let's build a Celsius-to-Fahrenheit converter using TensorFlow. The conversion formula is:

$$ f = c \times 1.8 + 32 $$

We will train a neural network in TensorFlow that takes Celsius values (0, 8, 15, 22, 38) as input and learns to output the corresponding Fahrenheit values (32, 46, 59, 72, 100). The result is a trained model that converts Celsius to Fahrenheit.

## Import dependencies

```
import tensorflow as tf
import numpy as np
```

## Set up training data

We create training data for the network:

```
celsius_q    = np.array([-40, -10,  0,  8, 15, 22,  38],  dtype=float)
fahrenheit_a = np.array([-40,  14, 32, 46, 59, 72, 100],  dtype=float)

for i,c in enumerate(celsius_q):
  print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_a[i]))
```

### Some Machine Learning terminology

- **Feature** — the input to the model. Here it is a single value: the Celsius temperature.
- **Labels** — the value the model is trying to predict (the prediction target). Here it is a single value: the Fahrenheit temperature.
- **Example** — a pair of inputs/outputs used during training. In this example, a pair from `celsius_q` and `fahrenheit_a`, such as `(22, 72)`.

## Create the model

We build a simple model using TensorFlow's Dense layer.

### Build a layer

`tf.keras.layers.Dense` is configured with:

*   `input_shape=[1]` — the input is a single value: a one-dimensional array with one element, the Celsius temperature.
*   `units=1` — the number of neurons in the layer. Since our example uses only one layer, we set this to 1 so that the output is a single value — the network's final output, the Fahrenheit temperature.

```
l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
```

### Assemble layers into the model

We create the model from the layer defined above.

```
model = tf.keras.Sequential([l0])
```

**Note** The configuration above can be written more compactly as:

```python
model = tf.keras.Sequential([
  tf.keras.layers.Dense(units=1, input_shape=[1])
])
```

## Compile the model, with loss and optimizer functions

Before training, the model must be compiled. This requires specifying:

- **Loss function** — measures the difference between the model's predictions and the actual values. Training the model means minimizing this loss.
- **Optimizer function** — the algorithm that adjusts the model to minimize the loss.
```
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
```

The loss function ([mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error)) is commonly used for regression problems. A widely used optimizer is [Adam](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/), which we will study in detail later. The most important setting for the optimizer is the learning rate (`0.1` in the code above); values between 0.001 and 0.1 are typical starting points.

## Train the model

We train the model with the `fit` method. As training progresses, the model's "weights" are optimized by the optimizer. The `fit` method sets up the basic elements of training: the first argument is the inputs, the second the outputs. `epochs` specifies how many training passes to run, and `verbose` controls whether the training progress is displayed.

```
history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=0)
print("Finished training the model")
```

## Display training statistics

The `fit` method returns a history object, which we can plot to monitor training progress. The loss decreases as the training epochs proceed.

```
import matplotlib.pyplot as plt
%matplotlib inline
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
```

## Use the model to predict values

Let's use the trained model to make a prediction.

```
print(model.predict([100.0]))
```

## Looking at the layer weights

Finally, let's look at the weights of the trained Dense layer.

```
print("These are the layer variables: {}".format(l0.get_weights()))
```

You can see that the learned weights closely match the conversion formula.

## Going further

It is also possible to build the model with more Dense layers:

```
l0 = tf.keras.layers.Dense(units=4, input_shape=[1])
l1 = tf.keras.layers.Dense(units=4)
l2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([l0, l1, l2])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)
print("Finished training the model")

# predict with the trained model
print("Model predicts that 100 degrees Celsius is: {} degrees Fahrenheit".format(model.predict([100.0])))

# inspect the weights of the trained model
print("These are the l0 variables: {}".format(l0.get_weights()))
print("These are the l1 variables: {}".format(l1.get_weights()))
print("These are the l2 variables: {}".format(l2.get_weights()))
```

## Exercise

Adapt the example above to build your own deep learning model that converts Celsius temperatures to Fahrenheit.

* Configure the model with an appropriate number of hidden layers and hidden units.
* Choose a suitable optimizer.
* Choose an appropriate number of training epochs.
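Before comparing against a trained network, it helps to verify the training data against the ground-truth formula $f = c \times 1.8 + 32$. A quick sanity check in plain Python — note the Fahrenheit labels in the dataset are rounded to whole degrees (e.g. 8 °C is exactly 46.4 °F, listed as 46), so we allow a half-degree tolerance:

```python
# Ground-truth conversion the network is expected to learn
def c_to_f(c):
    return c * 1.8 + 32

celsius_q    = [-40, -10, 0, 8, 15, 22, 38]
fahrenheit_a = [-40, 14, 32, 46, 59, 72, 100]

# labels are rounded to the nearest whole degree, hence the 0.5 tolerance
for c, f in zip(celsius_q, fahrenheit_a):
    assert abs(c_to_f(c) - f) <= 0.5

print(c_to_f(100.0))  # close to 212.0 -- compare with model.predict([100.0])
```

A well-trained single-layer model should recover weights close to 1.8 and a bias close to 32.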
# Multiple Kernel Learning #### By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> This notebook is about multiple kernel learning in shogun. We will see how to construct a combined kernel, determine optimal kernel weights using MKL and use it for different types of [classification](http://en.wikipedia.org/wiki/Statistical_classification) and [novelty detection](http://en.wikipedia.org/wiki/Novelty_detection). 1. [Introduction](#Introduction) 2. [Mathematical formulation](#Mathematical-formulation-(skip-if-you-just-want-code-examples)) 3. [Using a Combined kernel](#Using-a-Combined-kernel) 4. [Example: Toy Data](#Prediction-on-toy-data) 1. [Generating Kernel weights](#Generating-Kernel-weights) 5. [Binary classification using MKL](#Binary-classification-using-MKL) 6. [MKL for knowledge discovery](#MKL-for-knowledge-discovery) 7. [Multiclass classification using MKL](#Multiclass-classification-using-MKL) 8. [One-class classification using MKL](#One-class-classification-using-MKL) ``` %matplotlib inline import os import numpy as np import matplotlib.pyplot as plt import shogun as sg SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') ``` ### Introduction <em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well. [Kernel based methods](http://en.wikipedia.org/wiki/Kernel_methods) such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$. </br> Selecting the kernel function $k()$ and it's parameters is an important issue in training. Kernels designed by humans usually capture one aspect of data. Choosing one kernel means to select exactly one such aspect. Which means combining such aspects is often better than selecting. 
In shogun, [MKL](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKL.html) is the base class for multiple kernel learning. It supports [binary](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLClassification.html), [one-class](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLOneClass.html) and [multiclass](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLMulticlass.html) classification, as well as [regression](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLRegression.html).

### Mathematical formulation (skip if you just want code examples)

In an SVM, defined as:

$$f({\bf x})=\text{sign} \left(\sum_{i=0}^{N-1} \alpha_i k({\bf x}, {\bf x_i})+b\right)$$

where ${\bf x_i}$, $i = 1,\ldots,N$ are labeled training examples ($y_i \in \{\pm1\}$), one could use a combination of kernels:

$${\bf k}(x_i,x_j)=\sum_{k=0}^{K} \beta_k {\bf k_k}(x_i, x_j)$$

where $\beta_k > 0$ and $\sum_{k=0}^{K} \beta_k = 1$.

In the multiple kernel learning problem for binary classification one is given $N$ data points $(x_i, y_i)$ ($y_i \in \{\pm1\}$), where $x_i$ is translated via $K$ mappings $\phi_k(x) \rightarrow R^{D_k}$, $k=1,\ldots,K$, from the input into $K$ feature spaces $(\phi_1(x_i),\ldots,\phi_K(x_i))$, where $D_k$ denotes the dimensionality of the $k$-th feature space. In MKL, $\alpha_i$, $\beta$ and the bias are determined by solving the following optimization program (for details see [1]):

$$\mbox{min} \hspace{4mm} \gamma-\sum_{i=1}^N\alpha_i$$
$$\mbox{w.r.t.} \hspace{4mm} \gamma\in R, \alpha\in R^N$$
$$\mbox{s.t.} \hspace{4mm} {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N \alpha_i y_i=0$$
$$\frac{1}{2}\sum_{i,j=1}^N \alpha_i \alpha_j y_i y_j \,{\bf k_k}({\bf x_i},{\bf x_j}) \leq \gamma, \quad \forall k=1,\ldots,K$$

Here $C$ is a pre-specified regularization parameter. Within shogun this optimization problem is solved using [semi-infinite programming](http://en.wikipedia.org/wiki/Semi-infinite_programming).
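Before moving on to how the weights are found, note that the combined kernel above is just a weighted sum of kernel matrices. A minimal numpy sketch, using the Gaussian form $k(x,y)=\exp(-\lVert x-y\rVert^2/\tau)$ with illustrative widths and weights (not Shogun's API):

```python
import numpy as np

def gaussian_kernel(X, Y, width):
    # k(x, y) = exp(-||x - y||^2 / width)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / width)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

# two sub-kernels with very different widths; weights beta_k > 0 sum to 1
beta = [0.7, 0.3]
K = beta[0] * gaussian_kernel(X, X, 0.5) + beta[1] * gaussian_kernel(X, X, 25.0)

# the combination is still a valid kernel matrix: symmetric, with unit diagonal here
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))  # True True
```

Because each $\beta_k$ is non-negative, the weighted sum of positive semi-definite kernel matrices remains positive semi-definite, which is what lets MKL treat the combination as a single kernel.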
For 1-norm MKL one of the two approaches described in [1] is used. The first approach (also called the wrapper algorithm) wraps around a single kernel SVMs, alternatingly solving for $\alpha$ and $\beta$. It is using a traditional SVM to generate new violated constraints and thus requires a single kernel SVM and any of the SVMs contained in shogun can be used. In the MKL step either a linear program is solved via [glpk](http://en.wikipedia.org/wiki/GNU_Linear_Programming_Kit) or cplex or analytically or a newton (for norms>1) step is performed. The second much faster but also more memory demanding approach performing interleaved optimization, is integrated into the chunking-based [SVMlight](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVMLight.html). ### Using a Combined kernel Shogun provides an easy way to make combination of kernels using the [CombinedKernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CombinedKernel.html) class, to which we can append any [kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html) from the many options shogun provides. It is especially useful to combine kernels working on different domains and to combine kernels looking at independent features and requires [CombinedFeatures](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CombinedFeatures.html) to be used. Similarly the CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object ``` kernel = sg.CombinedKernel() ``` ### Prediction on toy data In order to see the prediction capabilities, let us generate some data using the [GMM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html) class. The data is sampled by setting means ([GMM notebook](http://www.shogun-toolbox.org/static/notebook/current/GMM.html)) such that it sufficiently covers X-Y grid and is not too easy to classify. 
``` num=30; num_components=4 means=np.zeros((num_components, 2)) means[0]=[-1,1] means[1]=[2,-1.5] means[2]=[-1,-3] means[3]=[2,1] covs=np.array([[1.0,0.0],[0.0,1.0]]) # gmm=sg.distribution("GMM") # gmm.set_pseudo_count(num_components) gmm=sg.GMM(num_components) [gmm.set_nth_mean(means[i], i) for i in range(num_components)] [gmm.set_nth_cov(covs,i) for i in range(num_components)] gmm.set_coef(np.array([1.0,0.0,0.0,0.0])) xntr=np.array([gmm.sample() for i in range(num)]).T xnte=np.array([gmm.sample() for i in range(5000)]).T gmm.set_coef(np.array([0.0,1.0,0.0,0.0])) xntr1=np.array([gmm.sample() for i in range(num)]).T xnte1=np.array([gmm.sample() for i in range(5000)]).T gmm.set_coef(np.array([0.0,0.0,1.0,0.0])) xptr=np.array([gmm.sample() for i in range(num)]).T xpte=np.array([gmm.sample() for i in range(5000)]).T gmm.set_coef(np.array([0.0,0.0,0.0,1.0])) xptr1=np.array([gmm.sample() for i in range(num)]).T xpte1=np.array([gmm.sample() for i in range(5000)]).T traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1) trainlab=np.concatenate((-np.ones(2*num), np.ones(2*num))) testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1) testlab=np.concatenate((-np.ones(10000), np.ones(10000))) #convert to shogun features and generate labels for data feats_train=sg.features(traindata) labels=sg.BinaryLabels(trainlab) _=plt.jet() plt.figure(figsize=(18,5)) plt.subplot(121) # plot train data _=plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=100) plt.title('Toy data for classification') plt.axis('equal') colors=["blue","blue","red","red"] # a tool for visualisation from matplotlib.patches import Ellipse def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3): vals, vecs = np.linalg.eigh(cov) order = vals.argsort()[::-1] vals, vecs = vals[order], vecs[:, order] theta = np.degrees(np.arctan2(*vecs[:, 0][::-1])) width, height = 2 * nstd * np.sqrt(vals) e = Ellipse(xy=mean, width=width, height=height, angle=theta, \ edgecolor=color, fill=False, 
linewidth=linewidth) return e for i in range(num_components): plt.gca().add_artist(get_gaussian_ellipse_artist(means[i], covs, color=colors[i])) ``` ### Generating Kernel weights Just to help us visualize let's use two gaussian kernels ([GaussianKernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html)) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e $\beta$s in the above equation), training of [MKL](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLClassification.html) is required. This generates the weights as seen in this example. ``` width0=0.5 kernel0=sg.kernel("GaussianKernel", log_width=np.log(width0)) width1=25 kernel1=sg.kernel("GaussianKernel", log_width=np.log(width1)) #combine kernels kernel.append_kernel(kernel0) kernel.append_kernel(kernel1) kernel.init(feats_train, feats_train) mkl = sg.MKLClassification() #set the norm, weights sum to 1. mkl.set_mkl_norm(1) mkl.set_C(1, 1) mkl.set_kernel(kernel) mkl.set_labels(labels) #train to get weights mkl.train() w=kernel.get_subkernel_weights() print(w) ``` ### Binary classification using MKL Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see that on plotting individual subkernels outputs and outputs of the MKL classification. To apply on test features, we need to reinitialize the kernel with `kernel.init` and pass the test features. After that it's just a matter of doing `mkl.apply` to generate outputs. 
``` size=100 x1=np.linspace(-5, 5, size) x2=np.linspace(-5, 5, size) x, y=np.meshgrid(x1, x2) #Generate X-Y grid test data grid=sg.features(np.array((np.ravel(x), np.ravel(y)))) kernel0t=sg.kernel("GaussianKernel", log_width=np.log(width0)) kernel1t=sg.kernel("GaussianKernel", log_width=np.log(width1)) kernelt=sg.CombinedKernel() kernelt.append_kernel(kernel0t) kernelt.append_kernel(kernel1t) #initailize with test grid kernelt.init(feats_train, grid) mkl.set_kernel(kernelt) #prediction grid_out=mkl.apply() z=grid_out.get_values().reshape((size, size)) plt.figure(figsize=(10,5)) plt.title("Classification using MKL") c=plt.pcolor(x, y, z) _=plt.contour(x, y, z, linewidths=1, colors='black') _=plt.colorbar(c) ``` To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison. ``` z=grid_out.get_labels().reshape((size, size)) # MKL plt.figure(figsize=(20,5)) plt.subplot(131, title="Multiple Kernels combined") c=plt.pcolor(x, y, z) _=plt.contour(x, y, z, linewidths=1, colors='black') _=plt.colorbar(c) comb_ker0=sg.CombinedKernel() comb_ker0.append_kernel(kernel0) comb_ker0.init(feats_train, feats_train) mkl.set_kernel(comb_ker0) mkl.train() comb_ker0t=sg.CombinedKernel() comb_ker0t.append_kernel(kernel0) comb_ker0t.init(feats_train, grid) mkl.set_kernel(comb_ker0t) out0=mkl.apply() # subkernel 1 z=out0.get_labels().reshape((size, size)) plt.subplot(132, title="Kernel 1") c=plt.pcolor(x, y, z) _=plt.contour(x, y, z, linewidths=1, colors='black') _=plt.colorbar(c) comb_ker1=sg.CombinedKernel() comb_ker1.append_kernel(kernel1) comb_ker1.init(feats_train, feats_train) mkl.set_kernel(comb_ker1) mkl.train() comb_ker1t=sg.CombinedKernel() comb_ker1t.append_kernel(kernel1) comb_ker1t.init(feats_train, grid) mkl.set_kernel(comb_ker1t) out1=mkl.apply() # subkernel 2 
z=out1.get_labels().reshape((size, size)) plt.subplot(133, title="Kernel 2") c=plt.pcolor(x, y, z) _=plt.contour(x, y, z, linewidths=1, colors='black') _=plt.colorbar(c) ``` As we can see, the combined multiple-kernel output looks just about right. Kernel 1 tends to overfit, while kernel 2 is not accurate enough; the kernel weights are adjusted accordingly to balance the two and produce a refined output. Looking at the test errors of the individual subkernels gives more food for thought: most of the time the MKL error is lower, as it incorporates aspects of both kernels. One of them is strict while the other is lenient, and MKL finds a balance between the two. ``` kernelt.init(feats_train, sg.features(testdata)) mkl.set_kernel(kernelt) out = mkl.apply() evaluator = sg.evaluation("ErrorRateMeasure") print("Test error is %2.2f%% : MKL" % (100*evaluator.evaluate(out,sg.BinaryLabels(testlab)))) comb_ker0t.init(feats_train, sg.features(testdata)) mkl.set_kernel(comb_ker0t) out = mkl.apply() evaluator = sg.evaluation("ErrorRateMeasure") print("Test error is %2.2f%% : subkernel 1" % (100*evaluator.evaluate(out,sg.BinaryLabels(testlab)))) comb_ker1t.init(feats_train, sg.features(testdata)) mkl.set_kernel(comb_ker1t) out = mkl.apply() evaluator = sg.evaluation("ErrorRateMeasure") print("Test error is %2.2f%% : subkernel 2" % (100*evaluator.evaluate(out,sg.BinaryLabels(testlab)))) ``` ### MKL for knowledge discovery MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundaries of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
``` def circle(x, radius, neg): y=np.sqrt(np.square(radius)-np.square(x)) if neg: return [x, -y] else: return [x, y] def get_circle(radius): neg=False range0=np.linspace(-radius,radius,100) pos_a=np.array([circle(i, radius, neg) for i in range0]).T neg=True neg_a=np.array([circle(i, radius, neg) for i in range0]).T c=np.concatenate((neg_a,pos_a), axis=1) return c def get_data(r1, r2): c1=get_circle(r1) c2=get_circle(r2) c=np.concatenate((c1, c2), axis=1) feats_tr=sg.features(c) return c, feats_tr l=np.concatenate((-np.ones(200),np.ones(200))) lab=sg.BinaryLabels(l) #get two circles with radii 2 and 4 c, feats_tr=get_data(2,4) c1, feats_tr1=get_data(2,3) _=plt.gray() plt.figure(figsize=(10,5)) plt.subplot(121) plt.title("Circles with different separation") p=plt.scatter(c[0,:], c[1,:], c=lab.get_labels()) plt.subplot(122) q=plt.scatter(c1[0,:], c1[1,:], c=lab.get_labels()) ``` These are the types of circles we want to distinguish between. We can try classification with a constant separation between the circles first.
``` def train_mkl(circles, feats_tr): #Four kernels with different widths kernel0=sg.kernel("GaussianKernel", log_width=np.log(1)) kernel1=sg.kernel("GaussianKernel", log_width=np.log(5)) kernel2=sg.kernel("GaussianKernel", log_width=np.log(7)) kernel3=sg.kernel("GaussianKernel", log_width=np.log(10)) kernel = sg.CombinedKernel() kernel.append_kernel(kernel0) kernel.append_kernel(kernel1) kernel.append_kernel(kernel2) kernel.append_kernel(kernel3) kernel.init(feats_tr, feats_tr) mkl = sg.MKLClassification() mkl.set_mkl_norm(1) mkl.set_C(1, 1) mkl.set_kernel(kernel) mkl.set_labels(lab) mkl.train() w=kernel.get_subkernel_weights() return w, mkl def test_mkl(mkl, grid): kernel0t=sg.kernel("GaussianKernel", log_width=np.log(1)) kernel1t=sg.kernel("GaussianKernel", log_width=np.log(5)) kernel2t=sg.kernel("GaussianKernel", log_width=np.log(7)) kernel3t=sg.kernel("GaussianKernel", log_width=np.log(10)) kernelt = sg.CombinedKernel() kernelt.append_kernel(kernel0t) kernelt.append_kernel(kernel1t) kernelt.append_kernel(kernel2t) kernelt.append_kernel(kernel3t) kernelt.init(feats_tr, grid) mkl.set_kernel(kernelt) out=mkl.apply() return out size=50 x1=np.linspace(-10, 10, size) x2=np.linspace(-10, 10, size) x, y=np.meshgrid(x1, x2) grid=sg.features(np.array((np.ravel(x), np.ravel(y)))) w, mkl=train_mkl(c, feats_tr) print(w) out=test_mkl(mkl,grid) z=out.get_values().reshape((size, size)) plt.figure(figsize=(5,5)) c=plt.pcolor(x, y, z) _=plt.contour(x, y, z, linewidths=1, colors='black') plt.title('classification with constant separation') _=plt.colorbar(c) ``` As we can see, the MKL classifier separates the circles as expected. Now let's vary the separation and see how it affects the weights. The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width.
This effect should be visible in the results of the MKL, where we used MKL-SVMs with four kernels of different widths (1, 5, 7, 10). ``` range1=np.linspace(5.5,7.5,50) x=np.linspace(1.5,3.5,50) temp=[] for i in range1: #vary separation between circles c, feats=get_data(4,i) w, mkl=train_mkl(c, feats) temp.append(w) y=np.array([temp[i] for i in range(0,50)]).T plt.figure(figsize=(20,5)) _=plt.plot(x, y[0,:], color='k', linewidth=2) _=plt.plot(x, y[1,:], color='r', linewidth=2) _=plt.plot(x, y[2,:], color='g', linewidth=2) _=plt.plot(x, y[3,:], color='y', linewidth=2) plt.title("Comparison between kernel widths and weights") plt.ylabel("Weight") plt.xlabel("Distance between circles") _=plt.legend(["1","5","7","10"]) ``` In the above plot we see the kernel weightings obtained for the four kernels; each line shows one weighting. The evolution of the weightings reflects the development of the learning problem: as long as the problem is difficult, the best separation is obtained with the smallest-width kernel. As the distance between the circles increases, the low-width kernel loses importance and kernels with larger widths receive larger weights from MKL. ### Multiclass classification using MKL MKL can be used for multiclass classification using the [MKLMulticlass](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLMulticlass.html) class. It is based on the GMNPSVM Multiclass SVM. Its termination criterion is set by `set_mkl_epsilon(float64_t eps)` and the maximal number of MKL iterations is set by `set_max_num_mkliters(int32_t maxnum)`. The epsilon termination criterion is the L2 norm of the difference between the current MKL weights and their counterpart from the previous iteration. We set it to 0.001 as we want pretty accurate weights.
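The termination criterion itself is easy to express in plain Python. The following is only an illustration (the weight vectors below are made up; Shogun performs this check internally):

```python
import math

def mkl_converged(w_prev, w_curr, eps=0.001):
    """Check the MKL termination criterion: the L2 norm of the
    difference between successive kernel-weight vectors."""
    delta = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_prev, w_curr)))
    return delta < eps

# Hypothetical weight vectors from two successive MKL iterations.
print(mkl_converged([0.70, 0.30], [0.70005, 0.29995]))  # tiny change  -> True
print(mkl_converged([0.70, 0.30], [0.60, 0.40]))        # large change -> False
```

With `eps=0.001`, optimization stops as soon as successive weight vectors differ by less than that amount in L2 norm.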
To see this in action let us compare it to the normal [GMNPSVM](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMNPSVM.html) example as in the [KNN notebook](http://www.shogun-toolbox.org/static/notebook/current/KNN.html#Comparison-to-Multiclass-Support-Vector-Machines), just to see how MKL fares in object recognition. We use the [USPS digit recognition dataset](http://www.gaussianprocess.org/gpml/data/). ``` from scipy.io import loadmat, savemat from os import path, sep mat = loadmat(sep.join(['..','..','..','data','multiclass', 'usps.mat'])) Xall = mat['data'] Yall = np.array(mat['label'].squeeze(), dtype=np.double) # map from 1..10 to 0..9, since shogun # requires multiclass labels to be # 0, 1, ..., K-1 Yall = Yall - 1 np.random.seed(0) subset = np.random.permutation(len(Yall)) #get first 1000 examples Xtrain = Xall[:, subset[:1000]] Ytrain = Yall[subset[:1000]] Nsplit = 2 all_ks = range(1, 21) print(Xall.shape) print(Xtrain.shape) ``` Let's plot five of the examples to get a feel for the dataset. ``` def plot_example(dat, lab): for i in range(5): ax=plt.subplot(1,5,i+1) plt.title(int(lab[i])) ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest') ax.set_xticks([]) ax.set_yticks([]) _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xtrain, Ytrain) ``` We combine a [Gaussian kernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html) and a [PolyKernel](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html). To test, examples not included in the training data are used. This is just a demonstration, but we can see here how MKL works behind the scenes. What we have is two kernels with significantly different properties. The Gaussian kernel defines a function space that is a lot larger than that of the linear kernel or the polynomial kernel. The Gaussian kernel has a low width, so it will be able to represent more and more complex relationships between the training data.
But it requires enough data to train on. The number of training examples here is 1000, which is fairly small given that there are 10000 examples in total. We hope the polynomial kernel can compensate for this, since it can fit the data with far less training data than the squared exponential. The kernel weights are printed below to add some insight. ``` # MKL training and output labels = sg.MulticlassLabels(Ytrain) feats = sg.features(Xtrain) #get test data from 5500 onwards Xrem=Xall[:,subset[5500:]] Yrem=Yall[subset[5500:]] #test features not used in training feats_rem = sg.features(Xrem) labels_rem = sg.MulticlassLabels(Yrem) kernel = sg.CombinedKernel() feats_train = sg.CombinedFeatures() feats_test = sg.CombinedFeatures() #append Gaussian kernel subkernel = sg.kernel("GaussianKernel", log_width=np.log(15)) feats_train.append_feature_obj(feats) feats_test.append_feature_obj(feats_rem) kernel.append_kernel(subkernel) #append PolyKernel feats = sg.features(Xtrain) subkernel = sg.kernel('PolyKernel', degree=10, c=2) feats_train.append_feature_obj(feats) feats_test.append_feature_obj(feats_rem) kernel.append_kernel(subkernel) kernel.init(feats_train, feats_train) mkl = sg.MKLMulticlass(1.2, kernel, labels) mkl.set_epsilon(1e-2) mkl.set_mkl_epsilon(0.001) mkl.set_mkl_norm(1) mkl.train() #initialize with test features kernel.init(feats_train, feats_test) out = mkl.apply() evaluator = sg.evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(out, labels_rem) print("Accuracy = %2.2f%%" % (100*accuracy)) idx=np.where(out.get_labels() != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xbad, Ybad) w=kernel.get_subkernel_weights() print(w) # Single kernel:PolyKernel C=1 pk = sg.kernel('PolyKernel', degree=10, c=2) svm = sg.GMNPSVM(C, pk, labels) _=svm.train(feats) out=svm.apply(feats_rem) evaluator = sg.evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(out, labels_rem) print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xbad, Ybad) #Single Kernel:Gaussian kernel width=15 C=1 gk=sg.kernel("GaussianKernel", log_width=np.log(width)) svm=sg.GMNPSVM(C, gk, labels) _=svm.train(feats) out=svm.apply(feats_rem) evaluator = sg.evaluation("MulticlassAccuracy") accuracy = evaluator.evaluate(out, labels_rem) print("Accuracy = %2.2f%%" % (100*accuracy)) idx=np.where(out.get_labels() != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=plt.figure(figsize=(17,6)) plt.gray() plot_example(Xbad, Ybad) ``` The misclassified examples are surely pretty tough to predict. As seen from the accuracies, MKL works a shade better in this case. One could also try this out with more, and different types of, kernels. ### One-class classification using MKL [One-class classification](http://en.wikipedia.org/wiki/One-class_classification) can be done using MKL in Shogun. This is demonstrated in the following simple example using [MKLOneClass](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1MKLOneClass.html). We will see how abnormal data is detected; this is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
``` X = -0.3 * np.random.randn(100,2) traindata = np.r_[X + 2, X - 2].T X = -0.3 * np.random.randn(20, 2) testdata = np.r_[X + 2, X - 2].T trainlab=np.concatenate((np.ones(99),-np.ones(1))) #convert to shogun features and generate labels for data feats=sg.features(traindata) labels=sg.BinaryLabels(trainlab) xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500)) grid=sg.features(np.array((np.ravel(xx), np.ravel(yy)))) #test features feats_t=sg.features(testdata) x_out=(np.random.uniform(low=-4, high=4, size=(20, 2))).T feats_out=sg.features(x_out) kernel=sg.CombinedKernel() feats_train=sg.CombinedFeatures() feats_test=sg.CombinedFeatures() feats_test_out=sg.CombinedFeatures() feats_grid=sg.CombinedFeatures() #append Gaussian kernel subkernel=sg.kernel("GaussianKernel", log_width=np.log(8)) feats_train.append_feature_obj(feats) feats_test.append_feature_obj(feats_t) feats_test_out.append_feature_obj(feats_out) feats_grid.append_feature_obj(grid) kernel.append_kernel(subkernel) #append PolyKernel feats = sg.features(traindata) subkernel = sg.kernel('PolyKernel', degree=10, c=3) feats_train.append_feature_obj(feats) feats_test.append_feature_obj(feats_t) feats_test_out.append_feature_obj(feats_out) feats_grid.append_feature_obj(grid) kernel.append_kernel(subkernel) kernel.init(feats_train, feats_train) mkl = sg.MKLOneClass() mkl.set_kernel(kernel) mkl.set_labels(labels) mkl.set_interleaved_optimization_enabled(False) mkl.set_epsilon(1e-2) mkl.put('mkl_epsilon', 0.1) mkl.set_mkl_norm(1) ``` Now that everything is initialized, let's see MKLOneClass in action by applying it on the test data and on the X-Y grid.
``` mkl.train() print("Weights:") w=kernel.get_subkernel_weights() print(w) #initialize with test features kernel.init(feats_train, feats_test) normal_out = mkl.apply() #test on abnormally generated data kernel.init(feats_train, feats_test_out) abnormal_out = mkl.apply() #test on X-Y grid kernel.init(feats_train, feats_grid) grid_out=mkl.apply() z=grid_out.get_values().reshape((500,500)) z_lab=grid_out.get_labels().reshape((500,500)) a=abnormal_out.get_labels() n=normal_out.get_labels() #check for normal and abnormal classified data idx=np.where(normal_out.get_labels() != 1)[0] abnormal=testdata[:,idx] idx=np.where(normal_out.get_labels() == 1)[0] normal=testdata[:,idx] plt.figure(figsize=(15,6)) pl =plt.subplot(121) plt.title("One-class classification using MKL") _=plt.pink() c=plt.pcolor(xx, yy, z) _=plt.contour(xx, yy, z_lab, linewidths=1, colors='black') _=plt.colorbar(c) p1=pl.scatter(traindata[0, :], traindata[1,:], cmap=plt.gray(), s=100) p2=pl.scatter(normal[0,:], normal[1,:], c="red", s=100) p3=pl.scatter(abnormal[0,:], abnormal[1,:], c="blue", s=100) p4=pl.scatter(x_out[0,:], x_out[1,:], c=a, cmap=plt.jet(), s=100) _=pl.legend((p1, p2, p3), ["Training samples", "normal samples", "abnormal samples"], loc=2) plt.subplot(122) c=plt.pcolor(xx, yy, z) plt.title("One-class classification output") _=plt.gray() _=plt.contour(xx, yy, z, linewidths=1, colors='black') _=plt.colorbar(c) ``` MKL one-class classification gives you a bit more flexibility compared to standard classification. The kernel weights are expected to be more or less similar here, since the training data is neither overly complicated nor too easy, which means both the Gaussian and the polynomial kernel are involved. If you don't know the nature of the training data and a lot of features are involved, you could easily use kernels with very different properties and benefit from their combination. ### References: [1] Soeren Sonnenburg, Gunnar Raetsch, Christin Schaefer, and Bernhard Schoelkopf.
Large Scale Multiple Kernel Learning. Journal of Machine Learning Research, 7:1531-1565, July 2006. [2] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In C. E. Brodley, editor, Twenty-first International Conference on Machine Learning. ACM, 2004. [3] Christoph H. Lampert. Kernel Methods for Object Recognition.
##### Copyright 2021 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Migrating feature_columns to TF2's Keras Preprocessing Layers <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/migrate/migrating_feature_columns"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_feature_columns.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_feature_columns.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/migrating_feature_columns.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Training a model will usually come with some amount of feature preprocessing, particularly when dealing with structured data. When training a `tf.estimator.Estimator` in TF1, this feature preprocessing is usually done with the `tf.feature_column` API. In TF2, this preprocessing can be done directly with Keras layers, called _preprocessing layers_. 
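Conceptually, these transformations are simple. As a framework-free sketch (plain Python, with made-up statistics), the two most common ones, normalizing a numeric feature and one-hot encoding an integer category, look like this:

```python
import math

def normalize(x, mean, variance):
    # Standardize a numeric feature: subtract the mean, divide by the std-dev.
    return (x - mean) / math.sqrt(variance)

def one_hot(index, num_tokens):
    # Encode an integer category as a one-hot vector of length num_tokens.
    return [1.0 if i == index else 0.0 for i in range(num_tokens)]

print(normalize(2.0, mean=2.0, variance=1.0))  # 0.0
print(one_hot(2, num_tokens=3))                # [0.0, 0.0, 1.0]
```

Both `tf.feature_column` and the Keras preprocessing layers shown below package exactly this kind of logic, plus vocabulary handling, batching, and graph integration.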
In this migration guide, you will perform some common feature transformations using both feature columns and preprocessing layers, followed by training a complete model with both APIs. First, start with a couple of necessary imports, ``` import tensorflow as tf import tensorflow.compat.v1 as tf1 import math ``` and add a utility for calling a feature column for demonstration: ``` def call_feature_columns(feature_columns, inputs): # This is a convenient way to call a `feature_column` outside of an estimator # to display its output. feature_layer = tf1.keras.layers.DenseFeatures(feature_columns) return feature_layer(inputs) ``` ## Input handling To use feature columns with an estimator, model inputs are always expected to be a dictionary of tensors: ``` input_dict = { 'foo': tf.constant([1]), 'bar': tf.constant([0]), 'baz': tf.constant([-1]) } ``` Each feature column needs to be created with a key to index into the source data. The output of all feature columns is concatenated and used by the estimator model. ``` columns = [ tf1.feature_column.numeric_column('foo'), tf1.feature_column.numeric_column('bar'), tf1.feature_column.numeric_column('baz'), ] call_feature_columns(columns, input_dict) ``` In Keras, model input is much more flexible. A `tf.keras.Model` can handle a single tensor input, a list of tensor features, or a dictionary of tensor features. You can handle dictionary input by passing a dictionary of `tf.keras.Input` on model creation. Inputs will not be concatenated automatically, which allows them to be used in much more flexible ways. They can be concatenated with `tf.keras.layers.Concatenate`. ``` inputs = { 'foo': tf.keras.Input(shape=()), 'bar': tf.keras.Input(shape=()), 'baz': tf.keras.Input(shape=()), } # Inputs are typically transformed by preprocessing layers before concatenation. 
outputs = tf.keras.layers.Concatenate()(inputs.values()) model = tf.keras.Model(inputs=inputs, outputs=outputs) model(input_dict) ``` ## One-hot encoding integer IDs A common feature transformation is one-hot encoding integer inputs of a known range. Here is an example using feature columns: ``` categorical_col = tf1.feature_column.categorical_column_with_identity( 'type', num_buckets=3) indicator_col = tf1.feature_column.indicator_column(categorical_col) call_feature_columns(indicator_col, {'type': [0, 1, 2]}) ``` Using Keras preprocessing layers, these columns can be replaced by a single `tf.keras.layers.CategoryEncoding` layer with `output_mode` set to `'one_hot'`: ``` one_hot_layer = tf.keras.layers.CategoryEncoding( num_tokens=3, output_mode='one_hot') one_hot_layer([0, 1, 2]) ``` Note: For large one-hot encodings, it is much more efficient to use a sparse representation of the output. If you pass `sparse=True` to the `CategoryEncoding` layer, the output of the layer will be a `tf.sparse.SparseTensor`, which can be efficiently handled as input to a `tf.keras.layers.Dense` layer. ## Normalizing numeric features When handling continuous, floating-point features with feature columns, you need to use a `tf.feature_column.numeric_column`. In the case where the input is already normalized, converting this to Keras is trivial. You can simply feed a `tf.keras.Input` directly into your model, as shown above. A `numeric_column` can also be used to normalize input: ``` def normalize(x): mean, variance = (2.0, 1.0) return (x - mean) / math.sqrt(variance) numeric_col = tf1.feature_column.numeric_column('col', normalizer_fn=normalize) call_feature_columns(numeric_col, {'col': tf.constant([[0.], [1.], [2.]])}) ``` In contrast, with Keras, this normalization can be done with `tf.keras.layers.Normalization`.
``` normalization_layer = tf.keras.layers.Normalization(mean=2.0, variance=1.0) normalization_layer(tf.constant([[0.], [1.], [2.]])) ``` ## Bucketizing and one-hot encoding numeric features Another common transformation of continuous, floating-point inputs is to bucketize them into integers of a fixed range. In feature columns, this can be achieved with a `tf.feature_column.bucketized_column`: ``` numeric_col = tf1.feature_column.numeric_column('col') bucketized_col = tf1.feature_column.bucketized_column(numeric_col, [1, 4, 5]) call_feature_columns(bucketized_col, {'col': tf.constant([1., 2., 3., 4., 5.])}) ``` In Keras, this can be replaced by `tf.keras.layers.Discretization`: ``` discretization_layer = tf.keras.layers.Discretization(bin_boundaries=[1, 4, 5]) one_hot_layer = tf.keras.layers.CategoryEncoding( num_tokens=4, output_mode='one_hot') one_hot_layer(discretization_layer([1., 2., 3., 4., 5.])) ``` ## One-hot encoding string data with a vocabulary Handling string features often requires a vocabulary lookup to translate strings into indices. Here is an example using feature columns to look up strings and then one-hot encode the indices: ``` vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list( 'sizes', vocabulary_list=['small', 'medium', 'large'], num_oov_buckets=0) indicator_col = tf1.feature_column.indicator_column(vocab_col) call_feature_columns(indicator_col, {'sizes': ['small', 'medium', 'large']}) ``` Using Keras preprocessing layers, use the `tf.keras.layers.StringLookup` layer with `output_mode` set to `'one_hot'`: ``` string_lookup_layer = tf.keras.layers.StringLookup( vocabulary=['small', 'medium', 'large'], num_oov_indices=0, output_mode='one_hot') string_lookup_layer(['small', 'medium', 'large']) ``` Note: For large one-hot encodings, it is much more efficient to use a sparse representation of the output.
If you pass `sparse=True` to the `StringLookup` layer, the output of the layer will be a `tf.sparse.SparseTensor`, which can be efficiently handled as input to a `tf.keras.layers.Dense` layer. ## Embedding string data with a vocabulary For larger vocabularies, an embedding is often needed for good performance. Here is an example embedding a string feature using feature columns: ``` vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list( 'col', vocabulary_list=['small', 'medium', 'large'], num_oov_buckets=0) embedding_col = tf1.feature_column.embedding_column(vocab_col, 4) call_feature_columns(embedding_col, {'col': ['small', 'medium', 'large']}) ``` Using Keras preprocessing layers, this can be achieved by combining a `tf.keras.layers.StringLookup` layer and an `tf.keras.layers.Embedding` layer. The default output for the `StringLookup` will be integer indices which can be fed directly into an embedding. Note: The `Embedding` layer contains trainable parameters. While the `StringLookup` layer can be applied to data inside or outside of a model, the `Embedding` must always be part of a trainable Keras model to train correctly. 
``` string_lookup_layer = tf.keras.layers.StringLookup( vocabulary=['small', 'medium', 'large'], num_oov_indices=0) embedding = tf.keras.layers.Embedding(3, 4) embedding(string_lookup_layer(['small', 'medium', 'large'])) ``` ## Complete training example To show a complete training workflow, first prepare some data with three features of different types: ``` features = { 'type': [0, 1, 1], 'size': ['small', 'small', 'medium'], 'weight': [2.7, 1.8, 1.6], } labels = [1, 1, 0] predict_features = {'type': [0], 'size': ['foo'], 'weight': [-0.7]} ``` Define some common constants for both TF1 and TF2 workflows: ``` vocab = ['small', 'medium', 'large'] one_hot_dims = 3 embedding_dims = 4 weight_mean = 2.0 weight_variance = 1.0 ``` ### With feature columns Feature columns must be passed as a list to the estimator on creation, and will be called implicitly during training. ``` categorical_col = tf1.feature_column.categorical_column_with_identity( 'type', num_buckets=one_hot_dims) # Convert index to one-hot; e.g. [2] -> [0,0,1]. indicator_col = tf1.feature_column.indicator_column(categorical_col) # Convert strings to indices; e.g. ['small'] -> [1]. vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list( 'size', vocabulary_list=vocab, num_oov_buckets=1) # Embed the indices. embedding_col = tf1.feature_column.embedding_column(vocab_col, embedding_dims) normalizer_fn = lambda x: (x - weight_mean) / math.sqrt(weight_variance) # Normalize the numeric inputs; e.g. [2.0] -> [0.0]. numeric_col = tf1.feature_column.numeric_column( 'weight', normalizer_fn=normalizer_fn) estimator = tf1.estimator.DNNClassifier( feature_columns=[indicator_col, embedding_col, numeric_col], hidden_units=[1]) def _input_fn(): return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1) estimator.train(_input_fn) ``` The feature columns will also be used to transform input data when running inference on the model. 
``` def _predict_fn(): return tf1.data.Dataset.from_tensor_slices(predict_features).batch(1) next(estimator.predict(_predict_fn)) ``` ### With Keras preprocessing layers Keras preprocessing layers are more flexible in where they can be called. A layer can be applied directly to tensors, used inside a `tf.data` input pipeline, or built directly into a trainable Keras model. In this example, you will apply preprocessing layers inside a `tf.data` input pipeline. To do this, you can define a separate `tf.keras.Model` to preprocess your input features. This model is not trainable, but is a convenient way to group preprocessing layers. ``` inputs = { 'type': tf.keras.Input(shape=(), dtype='int64'), 'size': tf.keras.Input(shape=(), dtype='string'), 'weight': tf.keras.Input(shape=(), dtype='float32'), } # Convert index to one-hot; e.g. [2] -> [0,0,1]. type_output = tf.keras.layers.CategoryEncoding( one_hot_dims, output_mode='one_hot')(inputs['type']) # Convert size strings to indices; e.g. ['small'] -> [1]. size_output = tf.keras.layers.StringLookup(vocabulary=vocab)(inputs['size']) # Normalize the numeric inputs; e.g. [2.0] -> [0.0]. weight_output = tf.keras.layers.Normalization( axis=None, mean=weight_mean, variance=weight_variance)(inputs['weight']) outputs = { 'type': type_output, 'size': size_output, 'weight': weight_output, } preprocessing_model = tf.keras.Model(inputs, outputs) ``` Note: As an alternative to supplying a vocabulary and normalization statistics on layer creation, many preprocessing layers provide an `adapt()` method for learning layer state directly from the input data. See the [preprocessing guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#the_adapt_method) for more details. You can now apply this model inside a call to `tf.data.Dataset.map`. Please note that the function passed to `map` will automatically be converted into a `tf.function`, and usual caveats for writing `tf.function` code apply (no side effects). 
``` # Apply the preprocessing in tf.data.Dataset.map. dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1) dataset = dataset.map(lambda x, y: (preprocessing_model(x), y), num_parallel_calls=tf.data.AUTOTUNE) # Display a preprocessed input sample. next(dataset.take(1).as_numpy_iterator()) ``` Next, you can define a separate `Model` containing the trainable layers. Note how the inputs to this model now reflect the preprocessed feature types and shapes. ``` inputs = { 'type': tf.keras.Input(shape=(one_hot_dims,), dtype='float32'), 'size': tf.keras.Input(shape=(), dtype='int64'), 'weight': tf.keras.Input(shape=(), dtype='float32'), } # Since the embedding is trainable, it needs to be part of the training model. embedding = tf.keras.layers.Embedding(len(vocab), embedding_dims) outputs = tf.keras.layers.Concatenate()([ inputs['type'], embedding(inputs['size']), tf.expand_dims(inputs['weight'], -1), ]) outputs = tf.keras.layers.Dense(1)(outputs) training_model = tf.keras.Model(inputs, outputs) ``` You can now train the `training_model` with `tf.keras.Model.fit`. ``` # Train on the preprocessed data. training_model.compile( loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)) training_model.fit(dataset) ``` Finally, at inference time, it can be useful to combine these separate stages into a single model that handles raw feature inputs. ``` inputs = preprocessing_model.input outputs = training_model(preprocessing_model(inputs)) inference_model = tf.keras.Model(inputs, outputs) predict_dataset = tf.data.Dataset.from_tensor_slices(predict_features).batch(1) inference_model.predict(predict_dataset) ``` This composed model can be saved as a [SavedModel](https://www.tensorflow.org/guide/saved_model) for later use. ``` inference_model.save('model') restored_model = tf.keras.models.load_model('model') restored_model.predict(predict_dataset) ``` Note: Preprocessing layers are not trainable, which allows you to apply them *asynchronously* using `tf.data`.
This has performance benefits, as you can both [prefetch](https://www.tensorflow.org/guide/data_performance#prefetching) preprocessed batches and free up any accelerators to focus on the differentiable parts of a model. As this guide shows, separating preprocessing during training and composing it during inference is a flexible way to leverage these performance gains. However, if your model is small or preprocessing time is negligible, it may be simpler to build preprocessing into a complete model from the start. To do this you can build a single model starting with `tf.keras.Input`, followed by preprocessing layers, followed by trainable layers. ## Feature column equivalence table For reference, here is an approximate correspondence between feature columns and preprocessing layers:<table> <tr> <th>Feature Column</th> <th>Keras Layer</th> </tr> <tr> <td>`feature_column.bucketized_column`</td> <td>`layers.Discretization`</td> </tr> <tr> <td>`feature_column.categorical_column_with_hash_bucket`</td> <td>`layers.Hashing`</td> </tr> <tr> <td>`feature_column.categorical_column_with_identity`</td> <td>`layers.CategoryEncoding`</td> </tr> <tr> <td>`feature_column.categorical_column_with_vocabulary_file`</td> <td>`layers.StringLookup` or `layers.IntegerLookup`</td> </tr> <tr> <td>`feature_column.categorical_column_with_vocabulary_list`</td> <td>`layers.StringLookup` or `layers.IntegerLookup`</td> </tr> <tr> <td>`feature_column.crossed_column`</td> <td>Not implemented.</td> </tr> <tr> <td>`feature_column.embedding_column`</td> <td>`layers.Embedding`</td> </tr> <tr> <td>`feature_column.indicator_column`</td> <td>`output_mode='one_hot'` or `output_mode='multi_hot'`*</td> </tr> <tr> <td>`feature_column.numeric_column`</td> <td>`layers.Normalization`</td> </tr> <tr> <td>`feature_column.sequence_categorical_column_with_hash_bucket`</td> <td>`layers.Hashing`</td> </tr> <tr> <td>`feature_column.sequence_categorical_column_with_identity`</td> <td>`layers.CategoryEncoding`</td>
</tr> <tr> <td>`feature_column.sequence_categorical_column_with_vocabulary_file`</td> <td>`layers.StringLookup`, `layers.IntegerLookup`, or `layers.TextVectorization`†</td> </tr> <tr> <td>`feature_column.sequence_categorical_column_with_vocabulary_list`</td> <td>`layers.StringLookup`, `layers.IntegerLookup`, or `layers.TextVectorization`†</td> </tr> <tr> <td>`feature_column.sequence_numeric_column`</td> <td>`layers.Normalization`</td> </tr> <tr> <td>`feature_column.weighted_categorical_column`</td> <td>`layers.CategoryEncoding`</td> </tr> </table> \* `output_mode` can be passed to `layers.CategoryEncoding`, `layers.StringLookup`, `layers.IntegerLookup`, and `layers.TextVectorization`. † `layers.TextVectorization` can handle freeform text input directly (e.g. entire sentences or paragraphs). This is not a one-to-one replacement for categorical sequence handling in TF1, but may offer a convenient replacement for ad-hoc text preprocessing. ## Next Steps - For more information on Keras preprocessing layers, see [the guide to preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers). - For a more in-depth example of applying preprocessing layers to structured data, see [the structured data tutorial](https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers).
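To ground the `indicator_column` row in the table above: `output_mode='one_hot'` and `'multi_hot'` produce plain indicator vectors. A minimal dependency-free sketch of what those encodings compute (the helper names `one_hot` and `multi_hot` are ours for illustration, not TensorFlow APIs):

```python
# Dependency-free sketch of the indicator encodings referenced in the table.
# `one_hot` and `multi_hot` are illustrative helpers, not TensorFlow APIs.

def one_hot(index, num_tokens):
    """Encode a single category index as an indicator vector."""
    vec = [0.0] * num_tokens
    vec[index] = 1.0
    return vec

def multi_hot(indices, num_tokens):
    """Encode a set of category indices as a single indicator vector."""
    vec = [0.0] * num_tokens
    for i in indices:
        vec[i] = 1.0
    return vec

print(one_hot(2, 4))         # [0.0, 0.0, 1.0, 0.0]
print(multi_hot([0, 2], 4))  # [1.0, 0.0, 1.0, 0.0]
```

The Keras layers do the same thing batched and as tensors, with `num_tokens` fixed at layer construction.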
# ADM Quantities in terms of BSSN Quantities ## Author: Zach Etienne ### Formatting improvements courtesy Brandon Clark [comment]: <> (Abstract: TODO) **Notebook Status:** <font color='orange'><b> Self-Validated </b></font> **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are, as yet, undocumented. (TODO)** ### NRPy+ Source Code for this module: [ADM_in_terms_of_BSSN.py](../edit/BSSN/ADM_in_terms_of_BSSN.py) ## Introduction: This tutorial notebook constructs all quantities in the [ADM formalism](https://en.wikipedia.org/wiki/ADM_formalism) (see also Chapter 2 in Baumgarte & Shapiro's book *Numerical Relativity*) in terms of quantities in our adopted (covariant, tensor-rescaled) BSSN formalism. That is to say, we will write the ADM quantities $\left\{\gamma_{ij},K_{ij},\alpha,\beta^i\right\}$ and their derivatives in terms of the BSSN quantities $\left\{\bar{\gamma}_{ij},\text{cf},\bar{A}_{ij},\text{tr}K,\alpha,\beta^i\right\}$ and their derivatives. ### A Note on Notation: As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates the temporal (time) component. * Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.
As a corollary, any expressions in NRPy+ involving mixed Greek and Latin indices will need to offset one set of indices by one; a Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). <a id='toc'></a> # Table of Contents $$\label{toc}$$ This notebook is organized as follows 1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules 1. [Step 2](#threemetric): The ADM three-metric $\gamma_{ij}$ and its derivatives in terms of rescaled BSSN quantities 1. [Step 2.a](#derivatives_e4phi): Derivatives of $e^{4\phi}$ 1. [Step 2.b](#derivatives_adm_3metric): Derivatives of the ADM three-metric: $\gamma_{ij,k}$ and $\gamma_{ij,kl}$ 1. [Step 2.c](#christoffel): Christoffel symbols $\Gamma^i_{jk}$ associated with the ADM 3-metric $\gamma_{ij}$ 1. [Step 3](#extrinsiccurvature): The ADM extrinsic curvature $K_{ij}$ and its derivatives in terms of rescaled BSSN quantities 1. [Step 4](#code_validation): Code Validation against `BSSN.ADM_in_terms_of_BSSN` NRPy+ module 1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file <a id='initializenrpy'></a> # Step 1: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\] $$\label{initializenrpy}$$ Let's start by importing all the needed modules from Python/NRPy+: ``` # Step 1.a: Import all needed modules from NRPy+ from outputC import * # NRPy+: Core C code output module import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) 
support import NRPy_param_funcs as par # NRPy+: Parameter interface (provides par.parval_from_str, used below) import reference_metric as rfm # NRPy+: Reference metric support import sys # Standard Python module for multiplatform OS-level functions # Step 1.b: Set the coordinate system for the numerical grid par.set_parval_from_str("reference_metric::CoordSystem","Spherical") # Step 1.c: Given the chosen coordinate system, set up # corresponding reference metric and needed # reference metric quantities # The following function call sets up the reference metric # and related quantities, including rescaling matrices ReDD, # ReU, and hatted quantities. rfm.reference_metric() # Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is # a 3+1-dimensional decomposition of the general # relativistic field equations) DIM = 3 # Step 1.e: Import all basic (unrescaled) BSSN scalars & tensors import BSSN.BSSN_quantities as Bq Bq.BSSN_basic_tensors() gammabarDD = Bq.gammabarDD cf = Bq.cf AbarDD = Bq.AbarDD trK = Bq.trK Bq.gammabar__inverse_and_derivs() gammabarDD_dD = Bq.gammabarDD_dD gammabarDD_dDD = Bq.gammabarDD_dDD Bq.AbarUU_AbarUD_trAbar_AbarDD_dD() AbarDD_dD = Bq.AbarDD_dD ``` <a id='threemetric'></a> # Step 2: The ADM three-metric $\gamma_{ij}$ and its derivatives in terms of rescaled BSSN quantities. \[Back to [top](#toc)\] $$\label{threemetric}$$ The ADM three-metric is written in terms of the covariant BSSN three-metric tensor as (Eqs. 2 and 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): $$ \gamma_{ij} = \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/3} \bar{\gamma}_{i j}, $$ where $\gamma=\det{\gamma_{ij}}$ and $\bar{\gamma}=\det{\bar{\gamma}_{ij}}$. The "standard" BSSN conformal factor $\phi$ is given by (Eq.
3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): \begin{align} \phi &= \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right) \\ \implies e^{\phi} &= \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/12} \\ \implies e^{4 \phi} &= \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/3} \end{align} Thus the ADM three-metric may be written in terms of the BSSN three-metric and conformal factor $\phi$ as $$ \gamma_{ij} = e^{4 \phi} \bar{\gamma}_{i j}. $$ NRPy+'s implementation of BSSN allows for $\phi$ and two other alternative conformal factors to be defined: \begin{align} \chi &= e^{-4\phi} \\ W &= e^{-2\phi}, \end{align} Thus if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"chi"`, then \begin{align} \gamma_{ij} &= \frac{1}{\chi} \bar{\gamma}_{i j} \\ &= \frac{1}{\text{cf}} \bar{\gamma}_{i j}, \end{align} and if `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"W"`, then \begin{align} \gamma_{ij} &= \frac{1}{W^2} \bar{\gamma}_{i j} \\ &= \frac{1}{\text{cf}^2} \bar{\gamma}_{i j}. \end{align} ``` # Step 2: The ADM three-metric gammaDD and its # derivatives in terms of BSSN quantities. gammaDD = ixp.zerorank2() exp4phi = sp.sympify(0) if par.parval_from_str("EvolvedConformalFactor_cf") == "phi": exp4phi = sp.exp(4*cf) elif par.parval_from_str("EvolvedConformalFactor_cf") == "chi": exp4phi = (1 / cf) elif par.parval_from_str("EvolvedConformalFactor_cf") == "W": exp4phi = (1 / cf**2) else: print("Error EvolvedConformalFactor_cf type = \""+par.parval_from_str("EvolvedConformalFactor_cf")+"\" unknown.") sys.exit(1) for i in range(DIM): for j in range(DIM): gammaDD[i][j] = exp4phi*gammabarDD[i][j] ``` <a id='derivatives_e4phi'></a> ## Step 2.a: Derivatives of $e^{4\phi}$ \[Back to [top](#toc)\] $$\label{derivatives_e4phi}$$ To compute derivatives of $\gamma_{ij}$ in terms of BSSN variables and their derivatives, we will first need derivatives of $e^{4\phi}$ in terms of the conformal BSSN variable `cf`. 
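Before expressing these derivatives through `cf`, the underlying chain-rule identities can be spot-checked with a quick stand-alone SymPy sketch (independent of the NRPy+ modules; `phi` here is just an arbitrary symbolic function of two variables):

```python
# Stand-alone SymPy spot-check of the chain-rule identities used in this step.
# Not part of the NRPy+ pipeline: phi is an arbitrary symbolic function here.
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Function('phi')(x, y)
e4phi = sp.exp(4*phi)

# First derivative: d/dx e^{4 phi} = 4 e^{4 phi} phi_{,x}
first = sp.diff(e4phi, x) - 4*e4phi*sp.diff(phi, x)

# Mixed second derivative:
# d^2/(dx dy) e^{4 phi} = 16 e^{4 phi} phi_{,x} phi_{,y} + 4 e^{4 phi} phi_{,xy}
second = sp.diff(e4phi, x, y) - (16*e4phi*sp.diff(phi, x)*sp.diff(phi, y)
                                 + 4*e4phi*sp.diff(phi, x, y))

print(sp.simplify(first), sp.simplify(second))  # 0 0
```

The derivation that follows uses exactly these identities, with $\phi_{,i}$ rewritten in terms of derivatives of `cf`.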
\begin{align} \frac{\partial}{\partial x^i} e^{4\phi} &= 4 e^{4\phi} \phi_{,i} \\ \implies \frac{\partial}{\partial x^j} \frac{\partial}{\partial x^i} e^{4\phi} &= \frac{\partial}{\partial x^j} \left(4 e^{4\phi} \phi_{,i}\right) \\ &= 16 e^{4\phi} \phi_{,i} \phi_{,j} + 4 e^{4\phi} \phi_{,ij} \end{align} Thus computing first and second derivatives of $e^{4\phi}$ in terms of the BSSN quantity `cf` requires only that we evaluate $\phi_{,i}$ and $\phi_{,ij}$ in terms of $e^{4\phi}$ (computed above in terms of `cf`) and derivatives of `cf`: If `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"phi"`, then \begin{align} \phi_{,i} &= \text{cf}_{,i} \\ \phi_{,ij} &= \text{cf}_{,ij} \end{align} If `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"chi"`, then \begin{align} \text{cf} = e^{-4\phi} \implies \text{cf}_{,i} &= -4 e^{-4\phi} \phi_{,i} \\ \implies \phi_{,i} &= -\frac{e^{4\phi}}{4} \text{cf}_{,i} \\ \implies \phi_{,ij} &= -e^{4\phi} \phi_{,j} \text{cf}_{,i} -\frac{e^{4\phi}}{4} \text{cf}_{,ij}\\ &= -e^{4\phi} \left(-\frac{e^{4\phi}}{4} \text{cf}_{,j}\right) \text{cf}_{,i} -\frac{e^{4\phi}}{4} \text{cf}_{,ij} \\ &= \frac{1}{4} \left[\left(e^{4\phi}\right)^2 \text{cf}_{,i} \text{cf}_{,j} -e^{4\phi} \text{cf}_{,ij}\right] \\ \end{align} If `"BSSN_quantities::EvolvedConformalFactor_cf"` is set to `"W"`, then \begin{align} \text{cf} = e^{-2\phi} \implies \text{cf}_{,i} &= -2 e^{-2\phi} \phi_{,i} \\ \implies \phi_{,i} &= -\frac{e^{2\phi}}{2} \text{cf}_{,i} \\ \implies \phi_{,ij} &= -e^{2\phi} \phi_{,j} \text{cf}_{,i} -\frac{e^{2\phi}}{2} \text{cf}_{,ij}\\ &= -e^{2\phi} \left(-\frac{e^{2\phi}}{2} \text{cf}_{,j}\right) \text{cf}_{,i} -\frac{e^{2\phi}}{2} \text{cf}_{,ij} \\ &= \frac{1}{2} \left[e^{4\phi} \text{cf}_{,i} \text{cf}_{,j} -e^{2\phi} \text{cf}_{,ij}\right] \\ \end{align} ``` # Step 2.a: Derivatives of $e^{4\phi}$ phidD = ixp.zerorank1() phidDD = ixp.zerorank2() cf_dD = ixp.declarerank1("cf_dD") cf_dDD = ixp.declarerank2("cf_dDD","sym01") if 
par.parval_from_str("EvolvedConformalFactor_cf") == "phi": for i in range(DIM): phidD[i] = cf_dD[i] for j in range(DIM): phidDD[i][j] = cf_dDD[i][j] elif par.parval_from_str("EvolvedConformalFactor_cf") == "chi": for i in range(DIM): phidD[i] = -sp.Rational(1,4)*exp4phi*cf_dD[i] for j in range(DIM): phidDD[i][j] = sp.Rational(1,4)*( exp4phi**2*cf_dD[i]*cf_dD[j] - exp4phi*cf_dDD[i][j] ) elif par.parval_from_str("EvolvedConformalFactor_cf") == "W": exp2phi = (1 / cf) for i in range(DIM): phidD[i] = -sp.Rational(1,2)*exp2phi*cf_dD[i] for j in range(DIM): phidDD[i][j] = sp.Rational(1,2)*( exp4phi*cf_dD[i]*cf_dD[j] - exp2phi*cf_dDD[i][j] ) else: print("Error EvolvedConformalFactor_cf type = \""+par.parval_from_str("EvolvedConformalFactor_cf")+"\" unknown.") sys.exit(1) exp4phidD = ixp.zerorank1() exp4phidDD = ixp.zerorank2() for i in range(DIM): exp4phidD[i] = 4*exp4phi*phidD[i] for j in range(DIM): exp4phidDD[i][j] = 16*exp4phi*phidD[i]*phidD[j] + 4*exp4phi*phidDD[i][j] ``` <a id='derivatives_adm_3metric'></a> ## Step 2.b: Derivatives of the ADM three-metric: $\gamma_{ij,k}$ and $\gamma_{ij,kl}$ \[Back to [top](#toc)\] $$\label{derivatives_adm_3metric}$$ Recall the relation between the ADM three-metric $\gamma_{ij}$, the BSSN conformal three-metric $\bar{\gamma}_{i j}$, and the BSSN conformal factor $\phi$: $$ \gamma_{ij} = e^{4 \phi} \bar{\gamma}_{i j}. 
$$ Now that we have constructed derivatives of $e^{4 \phi}$ in terms of the chosen BSSN conformal factor `cf`, and the [BSSN.BSSN_quantities module](../edit/BSSN/BSSN_quantities.py) ([**tutorial**](Tutorial-BSSN_quantities.ipynb)) defines derivatives of $\bar{\gamma}_{ij}$ in terms of rescaled BSSN variables, derivatives of $\gamma_{ij}$ can be immediately constructed using the product rule: \begin{align} \gamma_{ij,k} &= \left(e^{4 \phi}\right)_{,k} \bar{\gamma}_{i j} + e^{4 \phi} \bar{\gamma}_{ij,k} \\ \gamma_{ij,kl} &= \left(e^{4 \phi}\right)_{,kl} \bar{\gamma}_{i j} + \left(e^{4 \phi}\right)_{,k} \bar{\gamma}_{i j,l} + \left(e^{4 \phi}\right)_{,l} \bar{\gamma}_{ij,k} + e^{4 \phi} \bar{\gamma}_{ij,kl} \end{align} ``` # Step 2.b: Derivatives of gammaDD, the ADM three-metric gammaDDdD = ixp.zerorank3() gammaDDdDD = ixp.zerorank4() for i in range(DIM): for j in range(DIM): for k in range(DIM): gammaDDdD[i][j][k] = exp4phidD[k]*gammabarDD[i][j] + exp4phi*gammabarDD_dD[i][j][k] for l in range(DIM): gammaDDdDD[i][j][k][l] = exp4phidDD[k][l]*gammabarDD[i][j] + \ exp4phidD[k]*gammabarDD_dD[i][j][l] + \ exp4phidD[l]*gammabarDD_dD[i][j][k] + \ exp4phi*gammabarDD_dDD[i][j][k][l] ``` <a id='christoffel'></a> ## Step 2.c: Christoffel symbols $\Gamma^i_{jk}$ associated with the ADM 3-metric $\gamma_{ij}$ \[Back to [top](#toc)\] $$\label{christoffel}$$ The 3-metric analog to the definition of Christoffel symbol (Eq. 
1.18) in Baumgarte & Shapiro's *Numerical Relativity* is given by $$ \Gamma^i_{jk} = \frac{1}{2} \gamma^{il} \left(\gamma_{lj,k} + \gamma_{lk,j} - \gamma_{jk,l} \right), $$ which we implement here: ``` # Step 2.c: 3-Christoffel symbols associated with ADM 3-metric gammaDD # Step 2.c.i: First compute the inverse 3-metric gammaUU: gammaUU, detgamma = ixp.symm_matrix_inverter3x3(gammaDD) GammaUDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for l in range(DIM): GammaUDD[i][j][k] += sp.Rational(1,2)*gammaUU[i][l]* \ (gammaDDdD[l][j][k] + gammaDDdD[l][k][j] - gammaDDdD[j][k][l]) ``` <a id='extrinsiccurvature'></a> # Step 3: The ADM extrinsic curvature $K_{ij}$ and its derivatives in terms of rescaled BSSN quantities. \[Back to [top](#toc)\] $$\label{extrinsiccurvature}$$ The ADM extrinsic curvature may be written in terms of the BSSN trace-free extrinsic curvature tensor $\bar{A}_{ij}$ and the trace of the ADM extrinsic curvature $K$: \begin{align} K_{ij} &= \left(\frac{\gamma}{\bar{\gamma}}\right)^{1/3} \bar{A}_{ij} + \frac{1}{3} \gamma_{ij} K \\ &= e^{4\phi} \bar{A}_{ij} + \frac{1}{3} \gamma_{ij} K \\ \end{align} We only compute first spatial derivatives of $K_{ij}$, as higher-derivatives are generally not needed: $$ K_{ij,k} = \left(e^{4\phi}\right)_{,k} \bar{A}_{ij} + e^{4\phi} \bar{A}_{ij,k} + \frac{1}{3} \left(\gamma_{ij,k} K + \gamma_{ij} K_{,k}\right) $$ which is expressed in terms of quantities already defined. 
``` # Step 3: Define ADM extrinsic curvature KDD and # its first spatial derivatives KDDdD # in terms of BSSN quantities KDD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): KDD[i][j] = exp4phi*AbarDD[i][j] + sp.Rational(1,3)*gammaDD[i][j]*trK KDDdD = ixp.zerorank3() trK_dD = ixp.declarerank1("trK_dD") for i in range(DIM): for j in range(DIM): for k in range(DIM): KDDdD[i][j][k] = exp4phidD[k]*AbarDD[i][j] + exp4phi*AbarDD_dD[i][j][k] + \ sp.Rational(1,3)*(gammaDDdD[i][j][k]*trK + gammaDD[i][j]*trK_dD[k]) ``` <a id='code_validation'></a> # Step 4: Code Validation against `BSSN.ADM_in_terms_of_BSSN` NRPy+ module \[Back to [top](#toc)\] $$\label{code_validation}$$ Here, as a code validation check, we verify agreement in the SymPy expressions between 1. this tutorial and 2. the NRPy+ [BSSN.ADM_in_terms_of_BSSN](../edit/BSSN/ADM_in_terms_of_BSSN.py) module. ``` all_passed=True def comp_func(expr1,expr2,basename,prefixname2="Bq."): global all_passed # update the module-level flag, not a function-local copy if str(expr1-expr2)!="0": print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2)) all_passed=False def gfnm(basename,idx1,idx2=None,idx3=None,idx4=None): if idx2==None: return basename+"["+str(idx1)+"]" if idx3==None: return basename+"["+str(idx1)+"]["+str(idx2)+"]" if idx4==None: return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]" return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]["+str(idx4)+"]" expr_list = [] exprcheck_list = [] namecheck_list = [] import BSSN.ADM_in_terms_of_BSSN as AB AB.ADM_in_terms_of_BSSN() namecheck_list.extend(["detgamma"]) exprcheck_list.extend([AB.detgamma]) expr_list.extend([detgamma]) for i in range(DIM): for j in range(DIM): namecheck_list.extend([gfnm("gammaDD",i,j),gfnm("gammaUU",i,j),gfnm("KDD",i,j)]) exprcheck_list.extend([AB.gammaDD[i][j],AB.gammaUU[i][j],AB.KDD[i][j]]) expr_list.extend([gammaDD[i][j],gammaUU[i][j],KDD[i][j]]) for k in range(DIM): namecheck_list.extend([gfnm("gammaDDdD",i,j,k),gfnm("GammaUDD",i,j,k),gfnm("KDDdD",i,j,k)])
exprcheck_list.extend([AB.gammaDDdD[i][j][k],AB.GammaUDD[i][j][k],AB.KDDdD[i][j][k]]) expr_list.extend([gammaDDdD[i][j][k],GammaUDD[i][j][k],KDDdD[i][j][k]]) for l in range(DIM): namecheck_list.extend([gfnm("gammaDDdDD",i,j,k,l)]) exprcheck_list.extend([AB.gammaDDdDD[i][j][k][l]]) expr_list.extend([gammaDDdDD[i][j][k][l]]) for i in range(len(expr_list)): comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i]) if all_passed: print("ALL TESTS PASSED!") else: print("ERROR. ONE OR MORE TESTS FAILED") sys.exit(1) ``` <a id='latex_pdf_output'></a> # Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-ADM_in_terms_of_BSSN.pdf](Tutorial-ADM_in_terms_of_BSSN.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ``` !jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ADM_in_terms_of_BSSN.ipynb !pdflatex -interaction=batchmode Tutorial-ADM_in_terms_of_BSSN.tex !pdflatex -interaction=batchmode Tutorial-ADM_in_terms_of_BSSN.tex !pdflatex -interaction=batchmode Tutorial-ADM_in_terms_of_BSSN.tex !rm -f Tut*.out Tut*.aux Tut*.log ```
# Ordinary Least Squares ``` %matplotlib inline import numpy as np import statsmodels.api as sm import matplotlib.pyplot as plt from statsmodels.sandbox.regression.predstd import wls_prediction_std np.random.seed(9876789) ``` ## OLS estimation Artificial data: ``` nsample = 100 x = np.linspace(0, 10, 100) X = np.column_stack((x, x**2)) beta = np.array([1, 0.1, 10]) e = np.random.normal(size=nsample) ``` Our model needs an intercept so we add a column of 1s: ``` X = sm.add_constant(X) y = np.dot(X, beta) + e ``` Fit and summary: ``` model = sm.OLS(y, X) results = model.fit() print(results.summary()) ``` Quantities of interest can be extracted directly from the fitted model. Type ``dir(results)`` for a full list. Here are some examples: ``` print('Parameters: ', results.params) print('R2: ', results.rsquared) ``` ## OLS non-linear curve but linear in parameters We simulate artificial data with a non-linear relationship between x and y: ``` nsample = 50 sig = 0.5 x = np.linspace(0, 20, nsample) X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample))) beta = [0.5, 0.5, -0.02, 5.] y_true = np.dot(X, beta) y = y_true + sig * np.random.normal(size=nsample) ``` Fit and summary: ``` res = sm.OLS(y, X).fit() print(res.summary()) ``` Extract other quantities of interest: ``` print('Parameters: ', res.params) print('Standard errors: ', res.bse) print('Predicted values: ', res.predict()) ``` Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the ``wls_prediction_std`` command. ``` prstd, iv_l, iv_u = wls_prediction_std(res) fig, ax = plt.subplots(figsize=(8,6)) ax.plot(x, y, 'o', label="data") ax.plot(x, y_true, 'b-', label="True") ax.plot(x, res.fittedvalues, 'r--.', label="OLS") ax.plot(x, iv_u, 'r--') ax.plot(x, iv_l, 'r--') ax.legend(loc='best'); ``` ## OLS with dummy variables We generate some artificial data. There are 3 groups which will be modelled using dummy variables. 
Group 0 is the omitted/benchmark category. ``` nsample = 50 groups = np.zeros(nsample, int) groups[20:40] = 1 groups[40:] = 2 #dummy = (groups[:,None] == np.unique(groups)).astype(float) dummy = sm.categorical(groups, drop=True) x = np.linspace(0, 20, nsample) # drop reference category X = np.column_stack((x, dummy[:,1:])) X = sm.add_constant(X, prepend=False) beta = [1., 3, -3, 10] y_true = np.dot(X, beta) e = np.random.normal(size=nsample) y = y_true + e ``` Inspect the data: ``` print(X[:5,:]) print(y[:5]) print(groups) print(dummy[:5,:]) ``` Fit and summary: ``` res2 = sm.OLS(y, X).fit() print(res2.summary()) ``` Draw a plot to compare the true relationship to OLS predictions: ``` prstd, iv_l, iv_u = wls_prediction_std(res2) fig, ax = plt.subplots(figsize=(8,6)) ax.plot(x, y, 'o', label="Data") ax.plot(x, y_true, 'b-', label="True") ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted") ax.plot(x, iv_u, 'r--') ax.plot(x, iv_l, 'r--') legend = ax.legend(loc="best") ``` ## Joint hypothesis test ### F test We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of an identical constant across the 3 groups: ``` R = [[0, 1, 0, 0], [0, 0, 1, 0]] print(np.array(R)) print(res2.f_test(R)) ``` You can also use formula-like syntax to test hypotheses: ``` print(res2.f_test("x2 = x3 = 0")) ``` ### Small group effects If we generate artificial data with smaller group effects, the F test can no longer reject the null hypothesis: ``` beta = [1., 0.3, -0.0, 10] y_true = np.dot(X, beta) y = y_true + np.random.normal(size=nsample) res3 = sm.OLS(y, X).fit() print(res3.f_test(R)) print(res3.f_test("x2 = x3 = 0")) ``` ### Multicollinearity The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated.
This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification. ``` from statsmodels.datasets.longley import load_pandas y = load_pandas().endog X = load_pandas().exog X = sm.add_constant(X) ``` Fit and summary: ``` ols_model = sm.OLS(y, X) ols_results = ols_model.fit() print(ols_results.summary()) ``` #### Condition number One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length: ``` norm_x = X.values for i, name in enumerate(X): if name == "const": continue norm_x[:,i] = X[name]/np.linalg.norm(X[name]) norm_xtx = np.dot(norm_x.T,norm_x) ``` Then, we take the square root of the ratio of the largest to the smallest eigenvalue. ``` eigs = np.linalg.eigvals(norm_xtx) condition_number = np.sqrt(eigs.max() / eigs.min()) print(condition_number) ``` #### Dropping an observation Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates: ``` ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit() print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100])) ``` We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out. ``` infl = ols_results.get_influence() ``` In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations ``` 2./len(X)**.5 print(infl.summary_frame().filter(regex="dfb")) ```
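To make the condition-number recipe above concrete without the Longley data, here is a stand-alone toy sketch (for simplicity it normalizes every column, including the constant): after scaling columns to unit length, the square root of the eigenvalue ratio of $X^\top X$ agrees with NumPy's singular-value-based 2-norm condition number.

```python
import numpy as np

# Stand-alone toy example (not the Longley data): two nearly collinear columns.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
X = np.column_stack((np.ones(50), x, x + 1e-3 * rng.normal(size=50)))

# Normalize every column to unit length, as in the recipe above.
norm_x = X / np.linalg.norm(X, axis=0)

# Condition number from the eigenvalues of X'X (eigvalsh: symmetric input,
# eigenvalues returned in ascending order) ...
eigs = np.linalg.eigvalsh(norm_x.T @ norm_x)
cond_eig = float(np.sqrt(eigs[-1] / eigs[0]))

# ... agrees with NumPy's singular-value-based 2-norm condition number.
cond_svd = float(np.linalg.cond(norm_x))
print(cond_eig, cond_svd)
```

Both numbers are far above the rule-of-thumb threshold of 20, flagging the deliberate near-collinearity.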
``` %%html <link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet"> <style>#notebook-container{font-size: 13pt;font-family:'Open Sans', sans-serif;} div.text_cell{max-width: 104ex;}</style> %pylab inline import matplotlib.patches as patches ``` # Left and right-hand sums We want to find the approximate area under the graph of $f(x)=x^4$ for the interval $0 \leq x \leq 1$ by taking the right-hand sum with $n=25$. $$\Delta x=\dfrac{b-a}{n} \implies \dfrac{1-0}{25} = \dfrac{1}{25}$$ We are going to calculate the area of 25 rectangles and take their sum. This can be expressed as: $$R_{25} = \sum\limits_{i=1}^n f(x_i)\cdot\Delta x \iff \sum\limits_{i=1}^n f\left(a+i \cdot \Delta x \right)\cdot \Delta x \implies \sum\limits_{i=1}^{25} \left(i\cdot \dfrac{1}{25}\right)^4\cdot \dfrac{1}{25}=\dfrac{1}{25}\sum\limits_{i=1}^{25}\dfrac{i^4}{25^4}\approx 0.22$$ ``` def right_hand_sum(a, b, n, f): # Approximate the area under the graph with a right-hand sum. delta_x = (b-a)/n # Heights at the right endpoints a + i*delta_x, i = 1..n result = sum([f(a + i*delta_x)*delta_x for i in range(1, n + 1)]) # Line for plotting f(x) X = np.linspace(a, b, n) y = [f(x) for x in X] # Plot fig = plt.figure(figsize=(10,5)) ax = fig.add_subplot(111, aspect='equal') plot(X, y, c='b') ylabel('f(x)') xlabel('x') title('Right-hand sum estimated: {:.3f}'.format(result)) # Drawing rectangles (right-hand) for x in range(0, n): ax.add_patch(patches.Rectangle( (a+x*delta_x, 0), delta_x, f(a+(x+1)*delta_x), fill=True, color='b', alpha=0.2 )) return result def left_hand_sum(a, b, n, f): # Approximate the area under the graph with a left-hand sum.
delta_x = (b-a)/n # Heights at the left endpoints a + i*delta_x, i = 0..n-1 result = sum([f(a + i*delta_x)*delta_x for i in range(0, n)]) # Line for plotting f(x) X = np.linspace(a, b, n) y = [f(x) for x in X] # Plot fig = plt.figure(figsize=(10,5)) ax = fig.add_subplot(111, aspect='equal') plot(X, y, c='b') ylabel('f(x)') xlabel('x') title('Left-hand sum estimated: {:.3f}'.format(result)) # Drawing rectangles (left-hand) for x in range(0, n): ax.add_patch(patches.Rectangle( (a+x*delta_x, 0), delta_x, f(a+x*delta_x), fill=True, color='b', alpha=0.2 )) return result ``` Taking the right-hand sum for $f(x)=x^4$ over the interval $0\leq x\leq 1$ with $n=25$: ``` right_hand_sum(0, 1, 25, lambda x: x**4) ``` Considering that $\int^{2\pi}_0 \sin(x) \ dx=0$, take the right-hand sum for $f(x) = \sin x$ over the interval $0 \leq x \leq 2\pi$ with $n=50$: ``` res = right_hand_sum(0, 2*math.pi, 50, lambda x: math.sin(x)) ``` Left-hand sum for $f(x)=\cos x$ over the interval $0 \leq x \leq \pi$ with $n=50$: ``` left_hand_sum(0, math.pi, 50, lambda x: math.cos(x)) ``` Taking the right-hand sum for $f(x)=-x^2+10$ over the interval $-2\leq x\leq 2$ with $n=25$: ``` right_hand_sum(-2, 2, 25, lambda x: -x**2+10) right_hand_sum(-2, 2, 50, lambda x: -x**2+10) right_hand_sum(-2, 2, 15, lambda x: -x**2+10) right_hand_sum(0, 3, 8, lambda x: -0.6380*x**2+3.97142*x+0.01666) left_hand_sum(0, 3, 8, lambda x: -0.6380*x**2+3.97142*x+0.01666) X = np.array([0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]) y = np.array([0, 1.9, 3.3, 4.5, 5.5, 5.9, 6.2]) a = min(X) b = max(X) n = len(X) degree = 2 fit = np.polyfit(X, y, degree) p = np.poly1d(fit) linspace = np.linspace(a, b, n) fx = p(linspace) plot(X, y, c='lightgray', linestyle='--', lw=2.0) plot(linspace, fx, c='b') legend(['Data', 'Regression ({}-degree)'.format(degree)]) delta_x = (b-a)/(n-1) L = [fit[0]*x**2 + fit[1]*x + fit[2] for x in linspace[:-1]] R = [fit[0]*x**2 + fit[1]*x + fit[2] for x in linspace] delta_x L sum(L)*delta_x R sum(R)*delta_x plt.figure(figsize=(10,4)) plot(linspace, fx, lw=5,
c='lightgray') plot(linspace[1:], L, c='b', ls='dotted') plot(linspace, R, c='b', ls='--') grid(linestyle='--') legend(['f(x)', '$L_6$', '$R_6$'], fontsize='x-large') title('Upper and lower sums'); ``` ## The Definite Integral $$ \tag{1}\lim_{n\rightarrow\infty} \sum\limits_{i=1}^n f(x_i^*)\cdot\Delta x = \lim_{n\rightarrow\infty}\Bigl[\ f(x_1^*)\cdot\Delta x+f(x_2^*)\cdot\Delta x+\ldots+f(x_n^*)\cdot\Delta x\ \Bigr]$$ This type of limit gets a special name and notation. **Definition of a Definite Integral** If $f$ is a function defined for $a\leq x\leq b$, we divide the interval $[a,b]$ into $n$ subintervals of equal width $\Delta x=(b-a)/n$. We let $x_0 (=a), x_1, x_2, \ldots, x_n(=b)$ be the endpoints of these subintervals and we let $x_1^*, x_2^*, \ldots, x_n^*$ be any sample points in these subintervals, so $x_i^*$ lies in the $i$-th subinterval $[x_{i-1},x_i]$. Then the definite integral of $f$ from $a$ to $b$ is: $$\tag{2}\int_a^bf(x) \ dx = \underbrace{\lim_{n\rightarrow\infty}\sum\limits_{i=1}^nf(x_i^*)\cdot\Delta x}_{\text{Riemann sum}}$$ The _integrand_ is $f$ and the _limits of integration_ are $a$ and $b$. The _lower limit_ is $a$, and the _upper limit_ is $b$. The procedure for calculating the integral is called _integration_. ``` x = np.linspace(0, 2*math.pi, 100) y = np.array([math.sin(x) for x in x]) plt.figure(figsize=(10,4)) plt.plot(x, y, c='b') plt.fill_between(x,y, facecolor='b', alpha=0.2) plt.xlabel('x') plt.ylabel('sin x'); ``` **Theorem (3)** If $f$ is continuous on $[a,b]$, or if $f$ has only a finite number of jump discontinuities, then $f$ is integrable on $[a,b]$, that is, the definite integral $\int_a^bf(x) \ dx$ exists.
**Theorem (4)** If $f$ is integrable on $[a,b]$ then $$\tag{4}\int_a^b f(x) \ dx = \lim_{n\rightarrow\infty}\sum\limits_{i=1}^n f(x_i)\cdot\Delta x$$ **Rules for sums** $\tag{5}\sum\limits_{i=1}^n i = \dfrac{n(n+1)}{2}$ $\tag{6}\sum\limits_{i=1}^n i^2 = \dfrac{n(n+1)(2n+1)}{6}$ $\tag{7}\sum\limits_{i=1}^n i^3 = \left[\dfrac{n(n+1)}{2}\right]^2$ $\tag{8}\sum\limits_{i=1}^n c = nc$ $\tag{9}\sum\limits_{i=1}^n c\cdot a_i = c \sum\limits_{i=1}^n a_i$ $\tag{10}\sum\limits_{i=1}^n (a_i + b_i) = \sum\limits_{i=1}^n a_i + \sum\limits_{i=1}^n b_i$ $\tag{11}\sum\limits_{i=1}^n (a_i - b_i) = \sum\limits_{i=1}^n a_i - \sum\limits_{i=1}^n b_i$ **Midpoint rule** $$\int_a^b f(x) \ dx \approx \sum\limits_{i=1}^n f(\bar{x}_i)\cdot\Delta x = \Delta x \left[ f(\bar{x}_1) + f(\bar{x}_2) + \ldots + f(\bar{x}_n) \right]$$ where $\Delta x = \dfrac{b-a}{n}$ and $\bar{x}_i=\dfrac{1}{2}(x_{i-1}+x_i)$ is the midpoint of $[x_{i-1}, x_i]$. **Properties of the Definite Integral** $$\int_a^b f(x) \ dx = -\int_b^a f(x) \ dx$$ $$\int_a^a f(x) \ dx = 0$$ $$\tag{1}\int_a^b c\ dx = c(b-a) \qquad ,\text{where $c$ is any constant}$$ $$\tag{2}\int_a^b\left[f(x)+g(x)\right]\ dx = \int_a^b f(x)\ dx + \int_a^b g(x)\ dx$$ $$\tag{3}\int_a^b c\cdot f(x) \ dx = c \int_a^b f(x)\ dx \qquad ,\text{where $c$ is any constant}$$ $$\tag{4}\int_a^b\left[f(x)-g(x)\right]\ dx = \int_a^b f(x)\ dx - \int_a^b g(x)\ dx$$ $$\tag{5} \int_a^c f(x)\ dx + \int_c^b f(x)\ dx = \int_a^b f(x)\ dx$$ **Comparison Properties of the Integral** $$\tag{6}\text{If $f(x) \geq 0$ for $a\leq x\leq b$, then} \int_a^b f(x)\ dx \geq 0.$$ $$\tag{7}\text{If $f(x) \geq g(x)$ for $a\leq x\leq b$, then} \int_a^b f(x)\ dx \geq \int_a^b g(x)\ dx.$$ $$\tag{8}\text{If $m \leq f(x) \leq M$ for $a\leq x \leq b$, then } m(b-a) \leq \int_a^b f(x)\ dx \leq M(b-a).$$ ``` def riemann_sum(a, b, n, f): # Calculating the Riemann sum using midpoints delta_x = (b - a) / n x = np.linspace(a, b - delta_x, n) + delta_x / 2 y = np.array([f(x) for x in x]) print('Riemann sum for {} from {}
to {} with n={}:'.format('f', a, b, n)) print('delta_x: {}'.format(delta_x)) print('x: {}'.format(x)) print('y: {}'.format(y)) result = sum(y) * delta_x # Plotting fig = plt.figure(figsize=(10,5)) ax = fig.add_subplot(111) ax.axvline(x=a, c='lightgray', ls='--') ax.axvline(x=b, c='lightgray', ls='--') rng = b - a px = np.linspace(a - rng * 0.25, b + rng * 0.25, n+100) py = np.array([f(x) for x in px]) plot(px, py, c='b', lw=2) xlim(a - rng * 0.1, b + rng * 0.1) rngy = np.max(y) - np.min(y) spacing = rngy * 0.10 ylim(np.min(y)-spacing, np.max(y)+spacing) for x, y in zip(x, y): ax.add_patch(patches.Rectangle( (x - delta_x / 2, 0), delta_x, y, fill=True, color='b' if y >= 0 else 'r', alpha=0.2 )) title('Riemann sum for f, ${:.2f}\leq x\leq{:.2f}$, n=${}$'.format(a,b,n)) ylabel('f(x)') xlabel('x') if abs(round(result, 6)) == 0: return 0 return result riemann_sum(0, math.pi/2, 4, lambda x: math.cos(x)**4) ``` Calculating the Riemann Sum for an area under the probability density function: $$f(x\ |\ \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}}e^{-\dfrac{(x-\mu)^2}{2\sigma^2}}$$ ``` riemann_sum(-10, 10, 40, lambda x: 1/math.sqrt(2*math.pi)*math.e**(-(x)**2/25)) riemann_sum(0, 1, 16, lambda x: x**3-x**2+x) riemann_sum(0, math.pi, 16, lambda x: math.cos(x)) riemann_sum(0, 2, 50, lambda x: x/(x+1)) riemann_sum(1, 5, 4, lambda x: x**2*math.e**(-x)) riemann_sum(1, 5, 10, lambda x: x**2*math.e**(-x)) riemann_sum(1, 5, 20, lambda x: x**2*math.e**(-x)) riemann_sum(0, 1.5, 28, lambda x: sin(math.pi*x**2)) ``` Evaluating the right hand Riemann sums $R_n$ for the integral $\int^\pi_0 \sin x \ \mathrm{d}x$ with $n=5,10,50,100$. 
```
for n in [5, 10, 50, 100]:
    delta_x = math.pi / n
    x = np.linspace(delta_x, math.pi, n)  # right endpoints of the subintervals
    y = np.array([math.sin(xi) for xi in x])
    rs = np.sum(y) * delta_x
    print('Right-hand sum for n={} yields {:.6f}.'.format(n, rs))
```

**Right-hand sum**

```
import math

import numpy as np


def rhs(a, b, n, f):
    delta_x = (b - a) / n
    X = np.linspace(a, b - delta_x, n) + delta_x
    y = np.array([f(x) for x in X])
    return np.sum(y) * delta_x

rhs(0, math.pi, 10, lambda x: math.sin(x))
```

**Left-hand sum**

```
def lhs(a, b, n, f):
    delta_x = (b - a) / n
    X = np.linspace(a, b - delta_x, n)
    y = np.array([f(x) for x in X])
    return np.sum(y) * delta_x

lhs(0, math.pi, 10, lambda x: math.sin(x))
```

**Midpoint sum**

```
def mps(a, b, n, f):
    delta_x = (b - a) / n
    X = np.linspace(a, b - delta_x, n) + delta_x / 2
    y = np.array([f(x) for x in X])
    return np.sum(y) * delta_x

mps(0, math.pi, 10, lambda x: math.sin(x))
```

Evaluating the left and right Riemann sums $L_n$ and $R_n$ for the integral $\int_{-1}^{2} e^{-x^2} \ \mathrm{d}x$ with $n=5,10,\ldots,95$.
```
f = lambda x: math.e**(-x**2)

print('Mn: {}\n'.format(riemann_sum(-1, 2, 10, f)))

x = np.arange(5, 100, 5)
y1 = []
y2 = []
for n in x:
    fx = lhs(-1, 2, n, f)
    y1.append(fx)
    print('Left-hand sum with n={} yields {}.'.format(n, fx))
    fx = rhs(-1, 2, n, f)
    y2.append(fx)
    print('Right-hand sum with n={} yields {}.\n'.format(n, fx))
```

**Limit approximation**

```
import numpy.polynomial.polynomial as poly

plt.figure(figsize=(10, 6))
plt.scatter(x, y1)
plt.scatter(x, y2)

reg1 = reg2 = np.linspace(0, x[-1]*1.5)

pf1 = poly.polyfit(x, y1, 2)
p1 = poly.Polynomial(pf1)
r1 = p1(reg1)
plt.plot(reg1, r1)

pf2 = poly.polyfit(x, y2, 2)
p2 = poly.Polynomial(pf2)
r2 = p2(reg2)
plt.plot(reg2, r2)

plt.axhline(y=1.62891, c='lightgray', ls='--')
plt.xlabel('$n$ subintervals')
plt.ylabel('approximation of the sum')
plt.legend(['QuadReg LHS', 'QuadReg RHS', r'$\int_{-1}^2 e^{-x^2}$',
            'Left-hand sum', 'Right-hand sum']);
```

The idea was to approximate the limit by finding the intersection of the two quadratic regressors. As the plot shows, it won't be that easy. The limit appears to lie between the left-hand and the right-hand sum, so adding half of their difference to the smaller sum should give a good approximation of the limit:

$$x = L_n + \dfrac{R_n-L_n}{2}\iff x=\dfrac{R_n+L_n}{2}$$

```
x = (y2[-1] + y1[-1]) / 2
x
```

Considering that $\int_{-1}^2 e^{-x^2} \ \mathrm{d}x \approx 1.62891$, we are indeed quite close. Let's see what $M_n$ yields:

```
mn = mps(-1, 2, 100, f)
mn
```

This result is even better than our previous approximation.

```
plt.bar([r'$M_n$', r'$\int_{-1}^2e^{-x^2}$', r'$L_n + (R_n-L_n)/2$'],
        [mn, 1.62891, x], facecolor='b', alpha=0.5)
plt.ylim((1.6288, 1.62898))

f = lambda x: math.sin(x)

X = np.linspace(0, math.pi*2, 100)
y = [f(xi) for xi in X]
plt.figure(figsize=(10, 6))
plt.plot(X, y)

X = np.linspace(0, math.pi, 100)
y = [f(xi) for xi in X]
plt.figure(figsize=(10, 6))
plt.plot(X, y)

riemann_sum(0, math.pi, 10, lambda x: math.sin(x))
```
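The averaging identity above, $(L_n+R_n)/2$, is in fact exactly the composite trapezoidal rule, which explains why the estimate is so good. A minimal check (the left- and right-hand sums are re-declared here so the cell is self-contained):

```python
import math
import numpy as np

def lhs(a, b, n, f):
    # Left-hand Riemann sum: sample f at the left endpoint of each subinterval
    dx = (b - a) / n
    xs = np.linspace(a, b - dx, n)
    return sum(f(x) for x in xs) * dx

def rhs(a, b, n, f):
    # Right-hand Riemann sum: sample f at the right endpoint of each subinterval
    dx = (b - a) / n
    xs = np.linspace(a + dx, b, n)
    return sum(f(x) for x in xs) * dx

def trapezoid(a, b, n, f):
    # Composite trapezoidal rule: dx * (f(x_0)/2 + f(x_1) + ... + f(x_{n-1}) + f(x_n)/2)
    dx = (b - a) / n
    ys = [f(x) for x in np.linspace(a, b, n + 1)]
    return dx * (ys[0] / 2 + sum(ys[1:-1]) + ys[-1] / 2)

f = lambda x: math.e ** (-x ** 2)
avg = (lhs(-1, 2, 100, f) + rhs(-1, 2, 100, f)) / 2
trap = trapezoid(-1, 2, 100, f)
print(avg, trap)  # the two values agree up to floating-point rounding
```

Averaging the two sums adds the interior points twice and the endpoints once each, which after dividing by two is precisely the trapezoidal weighting.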
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)

# Toggle cell visibility

from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
    if (code_show){
        $('div.input').hide()
    } else {
        $('div.input').show()
    }
    code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)

# Hide the code completely

# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
#     display:none;
# }
# </style>''')
# display(tag)
```

## Equilibrium Points as a function of system input and system modes

This example shows what the equilibrium points of a Linear Time Invariant (LTI) system are. The aim of this example is to show where the equilibrium points of the system lie in the state space (a 2D plane in this example) as a function of the system matrices and the input value.

Recall that, given the LTI system dynamics

$$
\dot{x}(t)=Ax(t)+B\bar{u},
$$

with $\bar{u}$ a constant input, the equilibrium points of the system can be found as the solutions of the equation

$$
Ax(t)=-B\bar{u}.
$$

### How to use this notebook?

Try to define the matrices and the constant input $\bar u$ so that:

* the system has one equilibrium point at the origin,
* the system has one equilibrium point but not at the origin,
* the system has $\infty$ equilibrium points,
* the system has $\infty^2$ equilibrium points.
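Before the interactive tool below, the underlying computation is a one-liner. A minimal sketch with an arbitrarily chosen invertible $A$ (so there is exactly one equilibrium point):

```python
import numpy as np

# Example system: A is invertible, so A x = -B u_bar has a unique solution
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
u_bar = 1.0

# Equilibrium: setting x_dot = 0 in x_dot = A x + B u_bar gives A x = -B u_bar
x_eq = np.linalg.solve(A, -B * u_bar)
print(x_eq)  # x_eq = [1, 0.5]^T
```

When $A$ is singular, `np.linalg.solve` raises `LinAlgError`; the notebook code below handles that case with the pseudo-inverse after checking solvability via a rank test.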
```
# Preparatory Cell

import control
import numpy
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt


# Pretty-print a matrix as LaTeX
def bmatrix(a):
    """Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)

    :a: numpy array
    :returns: LaTeX bmatrix as a string
    """
    if len(a.shape) > 2:
        raise ValueError('bmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{bmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{bmatrix}']
    return '\n'.join(rv)


# Display a formatted matrix:
def vmatrix(a):
    if len(a.shape) > 2:
        raise ValueError('vmatrix can at most display two dimensions')
    lines = str(a).replace('[', '').replace(']', '').splitlines()
    rv = [r'\begin{vmatrix}']
    rv += ['  ' + ' & '.join(l.split()) + r'\\' for l in lines]
    rv += [r'\end{vmatrix}']
    return '\n'.join(rv)


# Create an NxM matrix widget
def createMatrixWidget(n, m):
    M = widgets.GridBox(children=[widgets.FloatText(layout=widgets.Layout(width='100px', height='40px'),
                                                    value=0.0, disabled=False, label=i)
                                  for i in range(n*m)],
                        layout=widgets.Layout(
                            #width='50%',
                            grid_template_columns=''.join(['100px ' for i in range(m)]),
                            #grid_template_rows='80px 80px 80px',
                            grid_row_gap='0px',
                            track_size='0px'))
    return M


# Extract the matrix from the widgets and convert it to a numpy matrix
def getNumpyMatFromWidget(M, n, m):
    M_ = numpy.matrix(numpy.zeros((n, m)))
    for irow in range(0, n):
        for icol in range(0, m):
            # index row-major into the flat children list
            M_[irow, icol] = M.children[irow*m + icol].value
    return M_


# This is a simple derived class from FloatText used to experiment with interact
class floatWidget(widgets.FloatText):
    def __init__(self, **kwargs):
        #self.n = n
        self.value = 30.0
        #self.M = widgets.FloatText.__init__(self, **kwargs)

    # def value(self):
    #     return 0 #self.FloatText.value

from traitlets import Unicode
from ipywidgets import register


# matrixWidget is a matrix-looking widget built with a VBox of HBox(es)
# that returns a numpy array as its value
class matrixWidget(widgets.VBox):
    def updateM(self, change):
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.M_[irow, icol] = self.children[irow].children[icol].value
                #print(self.M_[irow, icol])
        self.value = self.M_

    def dummychangecallback(self, change):
        pass

    def __init__(self, n, m):
        self.n = n
        self.m = m
        self.M_ = numpy.matrix(numpy.zeros((self.n, self.m)))
        self.value = self.M_
        widgets.VBox.__init__(self, children=[
            widgets.HBox(children=[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px'))
                                   for i in range(m)])
            for j in range(n)
        ])
        # Fill in the widgets and tell interact to call updateM each time a child changes value
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].value = self.M_[irow, icol]
                self.children[irow].children[icol].observe(self.updateM, names='value')
        #value = Unicode('example@example.com', help="The email value.").tag(sync=True)
        self.observe(self.updateM, names='value', type='All')

    def setM(self, newM):
        # Disable the callbacks, change the values, and re-enable
        self.unobserve(self.updateM, names='value', type='All')
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].unobserve(self.updateM, names='value')
        self.M_ = newM
        self.value = self.M_
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].value = self.M_[irow, icol]
        for irow in range(0, self.n):
            for icol in range(0, self.m):
                self.children[irow].children[icol].observe(self.updateM, names='value')
        self.observe(self.updateM, names='value', type='All')
        #self.children[irow].children[icol].observe(self.updateM, names='value')


# Overload class for state-space systems that do NOT remove "useless" states
class sss(control.StateSpace):
    def __init__(self, *args):
        # Call the base class constructor
        control.StateSpace.__init__(self, *args)

    # Disable the function below from the base class
    def _remove_useless_states(self):
        pass


# Define the matrices
A = matrixWidget(2, 2)
B = matrixWidget(2, 1)
ubar = matrixWidget(1, 1)
A.setM(-numpy.identity(2))


# This is the main callback; it does all the computations and plots
def main_callback(matA, matB, ubar_, DW, sel):
    # Check whether a specific example is requested or the input is manual
    if sel == 'manual':
        pass
    elif sel == '1 equilibrium point':
        matA = numpy.zeros((2, 2))
        matA[0, 0] = -1
        matA[1, 1] = -2
        A.setM(matA)
        matB = numpy.ones((2, 1))
        B.setM(matB)
        ubar_ = numpy.ones((1, 1))
        ubar.setM(ubar_)
    elif sel == 'infinite^1 equilibrium points':
        matA = numpy.zeros((2, 2))
        matA[0, 0] = -1
        matA[0, 1] = 2
        A.setM(matA)
        matB = numpy.zeros((2, 1))
        B.setM(matB)
        ubar_ = numpy.zeros((1, 1))
        ubar.setM(ubar_)
    elif sel == 'infinite^2 equilibrium points':
        matA = numpy.zeros((2, 2))
        A.setM(matA)
        matB = numpy.zeros((2, 1))
        B.setM(matB)
        ubar_ = numpy.zeros((1, 1))
        ubar.setM(ubar_)
    else:
        matA = numpy.zeros((2, 2))
        matA[0, 0] = -1
        matA[1, 1] = -1
        A.setM(matA)
        matB = numpy.zeros((2, 1))
        B.setM(matB)
        ubar_ = numpy.zeros((1, 1))
        ubar.setM(ubar_)

    # Get the system eigenvalues
    lambdas, eigvectors = numpy.linalg.eig(matA)

    # Count the eigenvalues equal to 0
    # NOTE: when extracting the i-th right eigenvector (a column of the matrix
    # eigvectors), use the notation eigvectors[:, i:i+1], which generates a
    # column array; eigvectors[:, i] generates a row array!
    eigin0 = 0
    if lambdas[0] == 0.0:
        eig0 = True
        dir0 = eigvectors[:, 0:1]
        eigin0 = eigin0 + 1
    else:
        eig0 = False
    if lambdas[1] == 0.0:
        eig1 = True
        dir1 = eigvectors[:, 1:2]
        eigin0 = eigin0 + 1
    else:
        eig1 = False

    # Create the textual output
    display(Markdown('Matrix: $%s$ has $%s$ eigenvalues equal to 0.'
                     % (vmatrix(matA), str(eigin0))))

    # Test whether a solution exists:
    if eigin0 == 0:
        # A solution always exists
        print('There is only one solution, that is, one equilibrium point.')
    else:
        # For a solution of Ax = -Bu to exist with A singular, it is necessary
        # that rank([A B]) = rank(A); if this does not hold, pinv cannot be used!
        if numpy.linalg.matrix_rank(numpy.concatenate((matA, matB), axis=1)) == numpy.linalg.matrix_rank(matA):
            # A solution exists
            if eigin0 == 1:
                print('There are infinite equilibrium points lying on a line.')
            else:
                print('There are infinite^2 equilibrium points: they occupy the entire state space.')
        else:
            # A solution does not exist
            print('Warning: a solution does not exist. No equilibrium points!')
            return

    # Compute the equilibrium points
    if eigin0 == 0:
        # Only one equilibrium point
        eq = -numpy.dot(numpy.linalg.inv(matA), matB)*ubar_
        eqdir = numpy.zeros((2, 1))
    elif eigin0 == 1:
        # Equilibria along a line
        eq = -numpy.dot(numpy.linalg.pinv(matA), matB)*ubar_
        if eig0:
            eqdir = dir0
        else:
            eqdir = dir1
    else:
        # The equilibria fill the entire plane
        eq = numpy.zeros((2, 1))
        eqdir = numpy.zeros((2, 1))

    # Set the limits of the plot
    xlim = max(abs(eq[0, 0])*1.1, 1.)
    ylim = max(abs(eq[1, 0])*1.1, 1.)
    # Plot the equilibrium points
    pzmap = plt.figure(figsize=(6, 6))
    sf = pzmap.add_subplot(111)
    sf.grid(True)
    sf.set_xlabel('$x_1$')
    sf.set_ylabel('$x_2$')
    sf.set_xlim([-xlim, xlim])
    sf.set_ylim([-ylim, ylim])
    #sf.set_aspect('equal', adjustable='datalim')
    if eigin0 == 0:
        # One equilibrium point
        sf.plot(eq[0, 0], eq[1, 0], marker='o')
    elif eigin0 == 1:
        # Infinite equilibrium points along a line
        sf.plot(eq[0, 0], eq[1, 0], marker='o')
        sf.plot([eq[0, 0] - eqdir[0, 0]*xlim*10, eq[0, 0] + eqdir[0, 0]*xlim*10],
                [eq[1, 0] - eqdir[1, 0]*ylim*10, eq[1, 0] + eqdir[1, 0]*ylim*10])
    else:
        # Infinite^2 equilibrium points occupying the entire state plane
        sf.fill((-xlim, xlim, xlim, -xlim), (-ylim, -ylim, ylim, ylim), alpha=0.5)


# Create a dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))

# Create a button widget
START = widgets.Button(
    description='Test',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Test',
    icon='check'
)

def on_start_button_clicked(b):
    # This is a workaround to have interactive_output call the callback:
    # force the value of the dummy widget to change
    if DW.value > 0:
        DW.value = -1
    else:
        DW.value = 1

START.on_click(on_start_button_clicked)

# Define the type of input
SELECT = widgets.Dropdown(
    options=['manual', 'reset', '1 equilibrium point',
             'infinite^1 equilibrium points', 'infinite^2 equilibrium points'],
    value='manual',
    description='examples',
    disabled=False,
)

# Create a graphic structure to hold all the widgets together
alltogether = widgets.VBox([widgets.HBox([widgets.Label('$\dot{x}(t) = $', border=3),
                                          A,
                                          widgets.Label('$x(t) + $', border=3),
                                          B,
                                          widgets.Label('$\\bar{u}$', border=3),
                                          ubar,
                                          START]),
                            SELECT])
out = widgets.interactive_output(main_callback, {'matA': A, 'matB': B, 'ubar_': ubar, 'DW': DW, 'sel': SELECT})
display(alltogether, out)
```
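The solvability test used in the callback above — for singular $A$, an equilibrium exists only when $\mathrm{rank}([A\;B\bar u]) = \mathrm{rank}(A)$ — can also be checked in isolation. A small sketch:

```python
import numpy as np

def has_equilibrium(A, B, u_bar):
    # Rouché–Capelli: A x = -B u_bar is solvable iff augmenting the
    # right-hand side does not increase the rank of A
    rhs = -B * u_bar
    return bool(np.linalg.matrix_rank(np.hstack((A, rhs))) ==
                np.linalg.matrix_rank(A))

A = np.array([[-1.0, 2.0],
              [0.0, 0.0]])  # singular (rank 1)
print(has_equilibrium(A, np.zeros((2, 1)), 0.0))          # True: a whole line of equilibria
print(has_equilibrium(A, np.array([[0.0], [1.0]]), 1.0))  # False: no equilibrium at all
```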
Copyright 2019 The Google Research Authors.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

# Domain Adaptation using DVRL

* Jinsung Yoon, Sercan O Arik, Tomas Pfister, "Data Valuation using Reinforcement Learning", arXiv preprint arXiv:1909.11671 (2019) - https://arxiv.org/abs/1909.11671

This notebook is a user guide for a domain adaptation application of "Data Valuation using Reinforcement Learning (DVRL)". We consider the scenario where the training dataset comes from a substantially different distribution than the validation and testing sets. Data valuation is expected to be beneficial for this task by selecting the samples from the training dataset that best match the distribution of the validation dataset.

You need: **Source / Target / Validation Datasets**

* If there is no explicit validation set, users can utilize a small portion of the target set as the validation set and the remainder as the target set.
* If users come with their own source / target / validation datasets, they should save those files as 'source.csv', 'target.csv', 'valid.csv' in the './data_files/' directory.
* We use the Rossmann store sales dataset (https://www.kaggle.com/c/rossmann-store-sales) as an example in this notebook. Please download the dataset (rossmann-store-sales.zip) from https://www.kaggle.com/c/rossmann-store-sales/data and save it to the './data_files/' directory.
## Requirements

Clone https://github.com/google-research/google-research/tree/master/dvrl to the current directory.

```
%load_ext autoreload
%autoreload 2
```

## Necessary packages and functions call

* load_rossmann_data: data loader for the Rossmann dataset
* preprocess_data: data extraction and normalization
* dvrl: data valuation functions (used here for a regression problem)
* dvrl_metrics: evaluates the quality of the data valuation in the domain adaptation setting

```
import numpy as np
import tensorflow as tf
import lightgbm

from data_loading import load_rossmann_data, preprocess_data
import dvrl
from dvrl_metrics import learn_with_dvrl, learn_with_baseline
```

## Data loading & Select source, target, validation datasets

* Load the source, target, validation datasets and save them as source.csv, target.csv, valid.csv in the './data_files/' directory
* If users have their own source.csv, target.csv, valid.csv, they can skip this cell and just save those files to the './data_files/' directory

**Input**:

* dict_no: the number of source / valid / target samples. We use 79% / 1% / 20% as the ratio of each dataset
* setting: 'train-on-all', 'train-on-rest', 'train-on-specific'
* target_store_type: target store type ('A', 'B', 'C', 'D'). For instance, to evaluate the performance on store type 'A', (1) the 'train-on-all' setting uses the entire source dataset, (2) the 'train-on-rest' setting uses the source samples with store types 'B', 'C', and 'D', (3) the 'train-on-specific' setting uses the source samples with store type 'A'. Therefore, 'train-on-rest' has the maximum distribution difference between the source and target datasets.
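The three settings amount to a simple filter on the store-type column. A hedged sketch of that selection logic (the `StoreType` column name is an assumption following the Kaggle data; the actual filtering happens inside `load_rossmann_data`):

```python
import pandas as pd

def select_source(df, setting, target_store_type):
    # Pick which source rows to train on for a given evaluation setting.
    # 'StoreType' is a hypothetical column name used here for illustration.
    if setting == 'train-on-all':
        return df                                          # entire source set
    if setting == 'train-on-rest':
        return df[df['StoreType'] != target_store_type]    # every other store type
    if setting == 'train-on-specific':
        return df[df['StoreType'] == target_store_type]    # matching type only
    raise ValueError(f'unknown setting: {setting}')
```

With `target_store_type='B'`, `'train-on-rest'` keeps only types A, C and D, which gives the largest source/target distribution gap.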
```
# The number of source / validation / target samples (79%/1%/20%)
dict_no = dict()
dict_no['source'] = 667027  # 79% of data
dict_no['valid'] = 8443  # 1% of data

# Selects a setting and target store type
setting = 'train-on-rest'
target_store_type = 'B'

# Loads data and selects source, target, validation datasets
load_rossmann_data(dict_no, setting, target_store_type)

print('Finished data loading.')
```

## Data preprocessing

* Extract features and labels from source.csv, valid.csv, target.csv in the './data_files/' directory
* Normalize the features of the source, validation, and target sets

```
# Normalization methods: either 'minmax' or 'standard'
normalization = 'minmax'

# Extracts features and labels. Then, normalizes features.
x_source, y_source, x_valid, y_valid, x_target, y_target, _ = \
preprocess_data(normalization, 'source.csv', 'valid.csv', 'target.csv')

print('Finished data preprocess.')
```

## Run DVRL

1. **Input**:

* data valuator network parameters: set the network parameters of the data valuator.
* pred_model: the predictor model that maps the output from the input. Any machine learning model (e.g. a neural network or ensemble decision tree) can be used as the predictor model, as long as it has fit, and predict (for regression) / predict_proba (for classification) as its subfunctions. Fit can be implemented using multiple backpropagation iterations.

2. **Output**:

* data_valuator: function that uses the training set as input to estimate data values
* dvrl_predictor: function that predicts the labels of the testing samples
* dve_out: estimated data values of all the training samples

```
# Resets the graph
tf.reset_default_graph()

# Defines the problem
problem = 'regression'

# Network parameters
parameters = dict()
parameters['hidden_dim'] = 100
parameters['comb_dim'] = 10
parameters['iterations'] = 1000
parameters['activation'] = tf.nn.tanh
parameters['layer_number'] = 5
parameters['batch_size'] = 50000
parameters['learning_rate'] = 0.001

# Defines the predictive model
pred_model = lightgbm.LGBMRegressor()

# Sets the checkpoint file name
checkpoint_file_name = './tmp/model.ckpt'

# Defines flags for using stochastic gradient descent / a pre-trained model
flags = {'sgd': False, 'pretrain': False}

# Initializes DVRL
dvrl_class = dvrl.Dvrl(x_source, y_source, x_valid, y_valid, problem, pred_model,
                       parameters, checkpoint_file_name, flags)

# Trains DVRL
dvrl_class.train_dvrl('rmspe')

# Estimates data values
dve_out = dvrl_class.data_valuator(x_source, y_source)

# Predicts with DVRL
y_target_hat = dvrl_class.dvrl_predictor(x_target)

print('Finished data valuation.')
```

## Evaluations

* In this notebook, we use LightGBM as the predictor model for evaluation purposes (but you can also replace it with another model).
* Here, we use Root Mean Squared Percentage Error (RMSPE) as the performance metric.

### DVRL Performance

DVRL learns robustly although the training data has a different distribution from the target data distribution, using the guidance from the small validation data (which comes from the target distribution) via reinforcement learning.
* Train the predictive model with weighted optimization, using the estimated data values from DVRL as the weights

```
# Defines the evaluation model
eval_model = lightgbm.LGBMRegressor()

# DVRL-weighted learning
dvrl_perf = learn_with_dvrl(dve_out, eval_model,
                            x_source, y_source, x_valid, y_valid,
                            x_target, y_target, 'rmspe')

# Baseline prediction performance (treats all training samples equally)
base_perf = learn_with_baseline(eval_model, x_source, y_source, x_target, y_target, 'rmspe')

print('Finished evaluation.')
print('DVRL learning performance: ' + str(np.round(dvrl_perf, 4)))
print('Baseline performance: ' + str(np.round(base_perf, 4)))
```
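For reference, the RMSPE metric used above can be written down directly. A minimal sketch (the notebook itself relies on the metric computed inside the `dvrl` package; zero targets are skipped here to avoid division by zero, as the Kaggle Rossmann metric also ignores zero-sales rows):

```python
import numpy as np

def rmspe(y_true, y_pred):
    # Root Mean Squared Percentage Error:
    # sqrt(mean(((y - y_hat) / y)^2)) over rows with y != 0
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = y_true != 0
    pct_err = (y_true[mask] - y_pred[mask]) / y_true[mask]
    return np.sqrt(np.mean(pct_err ** 2))

print(rmspe([100, 200, 0], [110, 180, 5]))  # ≈ 0.1
```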
```
#all_slow
#export
from fastai.basics import *
from fastai.vision.all import *

#default_exp vision.gan
#default_cls_lvl 3

#hide
from nbdev.showdoc import *
```

# GAN

> Basic support for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661)

GAN stands for [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661.pdf), which were invented by Ian Goodfellow. The concept is that we train two models at the same time: a generator and a critic. The generator will try to make new images similar to the ones in a dataset, and the critic will try to tell real images apart from the ones the generator produces. The generator returns images, the critic a single number (usually a probability: 0. for fake images and 1. for real ones).

We train them against each other in the sense that at each step (more or less), we:

1. Freeze the generator and train the critic for one step by:
   - getting one batch of true images (let's call that `real`)
   - generating one batch of fake images (let's call that `fake`)
   - having the critic evaluate each batch and compute a loss function from that; the important part is that it rewards positively the detection of real images and penalizes the fake ones
   - updating the weights of the critic with the gradients of this loss

2. Freeze the critic and train the generator for one step by:
   - generating one batch of fake images
   - evaluating the critic on it
   - returning a loss that rewards positively the critic thinking those are real images
   - updating the weights of the generator with the gradients of this loss

> Note: The fastai library provides support for training GANs through the GANTrainer, but doesn't include more than basic models.

## Wrapping the modules

```
#export
class GANModule(Module):
    "Wrapper around a `generator` and a `critic` to create a GAN."
    def __init__(self, generator=None, critic=None, gen_mode=False):
        if generator is not None: self.generator = generator
        if critic is not None: self.critic = critic
        store_attr('gen_mode')

    def forward(self, *args):
        return self.generator(*args) if self.gen_mode else self.critic(*args)

    def switch(self, gen_mode=None):
        "Put the module in generator mode if `gen_mode`, in critic mode otherwise."
        self.gen_mode = (not self.gen_mode) if gen_mode is None else gen_mode
```

This is just a shell to contain the two models. When called, it will either delegate the input to the `generator` or the `critic` depending on the value of `gen_mode`.

```
show_doc(GANModule.switch)
```

By default (leaving `gen_mode` to `None`), this will put the module in the other mode (critic mode if it was in generator mode and vice versa).

```
#export
@delegates(ConvLayer.__init__)
def basic_critic(in_size, n_channels, n_features=64, n_extra_layers=0, norm_type=NormType.Batch, **kwargs):
    "A basic critic for images `n_channels` x `in_size` x `in_size`."
    layers = [ConvLayer(n_channels, n_features, 4, 2, 1, norm_type=None, **kwargs)]
    cur_size, cur_ftrs = in_size//2, n_features
    layers += [ConvLayer(cur_ftrs, cur_ftrs, 3, 1, norm_type=norm_type, **kwargs) for _ in range(n_extra_layers)]
    while cur_size > 4:
        layers.append(ConvLayer(cur_ftrs, cur_ftrs*2, 4, 2, 1, norm_type=norm_type, **kwargs))
        cur_ftrs *= 2 ; cur_size //= 2
    init = kwargs.get('init', nn.init.kaiming_normal_)
    layers += [init_default(nn.Conv2d(cur_ftrs, 1, 4, padding=0), init), Flatten()]
    return nn.Sequential(*layers)

#export
class AddChannels(Module):
    "Add `n_dim` channels at the end of the input."
    def __init__(self, n_dim): self.n_dim = n_dim
    def forward(self, x): return x.view(*(list(x.shape)+[1]*self.n_dim))

#export
@delegates(ConvLayer.__init__)
def basic_generator(out_size, n_channels, in_sz=100, n_features=64, n_extra_layers=0, **kwargs):
    "A basic generator from `in_sz` to images `n_channels` x `out_size` x `out_size`."
    cur_size, cur_ftrs = 4, n_features//2
    while cur_size < out_size: cur_size *= 2; cur_ftrs *= 2
    layers = [AddChannels(2), ConvLayer(in_sz, cur_ftrs, 4, 1, transpose=True, **kwargs)]
    cur_size = 4
    while cur_size < out_size // 2:
        layers.append(ConvLayer(cur_ftrs, cur_ftrs//2, 4, 2, 1, transpose=True, **kwargs))
        cur_ftrs //= 2; cur_size *= 2
    layers += [ConvLayer(cur_ftrs, cur_ftrs, 3, 1, 1, transpose=True, **kwargs) for _ in range(n_extra_layers)]
    layers += [nn.ConvTranspose2d(cur_ftrs, n_channels, 4, 2, 1, bias=False), nn.Tanh()]
    return nn.Sequential(*layers)

critic = basic_critic(64, 3)
generator = basic_generator(64, 3)
tst = GANModule(critic=critic, generator=generator)
real = torch.randn(2, 3, 64, 64)
real_p = tst(real)
test_eq(real_p.shape, [2,1])

tst.switch() #tst is now in generator mode
noise = torch.randn(2, 100)
fake = tst(noise)
test_eq(fake.shape, real.shape)

tst.switch() #tst is back in critic mode
fake_p = tst(fake)
test_eq(fake_p.shape, [2,1])

#export
_conv_args = dict(act_cls = partial(nn.LeakyReLU, negative_slope=0.2), norm_type=NormType.Spectral)

def _conv(ni, nf, ks=3, stride=1, self_attention=False, **kwargs):
    if self_attention: kwargs['xtra'] = SelfAttention(nf)
    return ConvLayer(ni, nf, ks=ks, stride=stride, **_conv_args, **kwargs)

#export
@delegates(ConvLayer)
def DenseResBlock(nf, norm_type=NormType.Batch, **kwargs):
    "Resnet block of `nf` features. `conv_kwargs` are passed to `conv_layer`."
    return SequentialEx(ConvLayer(nf, nf, norm_type=norm_type, **kwargs),
                        ConvLayer(nf, nf, norm_type=norm_type, **kwargs),
                        MergeLayer(dense=True))

#export
def gan_critic(n_channels=3, nf=128, n_blocks=3, p=0.15):
    "Critic to train a `GAN`."
    layers = [
        _conv(n_channels, nf, ks=4, stride=2),
        nn.Dropout2d(p/2),
        DenseResBlock(nf, **_conv_args)]
    nf *= 2 # after dense block
    for i in range(n_blocks):
        layers += [
            nn.Dropout2d(p),
            _conv(nf, nf*2, ks=4, stride=2, self_attention=(i==0))]
        nf *= 2
    layers += [
        ConvLayer(nf, 1, ks=4, bias=False, padding=0, norm_type=NormType.Spectral, act_cls=None),
        Flatten()]
    return nn.Sequential(*layers)

#export
class GANLoss(GANModule):
    "Wrapper around `crit_loss_func` and `gen_loss_func`"
    def __init__(self, gen_loss_func, crit_loss_func, gan_model):
        super().__init__()
        store_attr('gen_loss_func,crit_loss_func,gan_model')

    def generator(self, output, target):
        "Evaluate the `output` with the critic then uses `self.gen_loss_func`"
        fake_pred = self.gan_model.critic(output)
        self.gen_loss = self.gen_loss_func(fake_pred, output, target)
        return self.gen_loss

    def critic(self, real_pred, input):
        "Create some `fake_pred` with the generator from `input` and compare them to `real_pred` in `self.crit_loss_func`."
        fake = self.gan_model.generator(input).requires_grad_(False)
        fake_pred = self.gan_model.critic(fake)
        self.crit_loss = self.crit_loss_func(real_pred, fake_pred)
        return self.crit_loss
```

In generator mode, this loss function expects the `output` of the generator and some `target` (a batch of real images). It will evaluate if the generator successfully fooled the critic using `gen_loss_func`. This loss function has the following signature

```
def gen_loss_func(fake_pred, output, target):
```

to be able to combine the output of the critic on `output` (which is the first argument `fake_pred`) with `output` and `target` (if you want to mix the GAN loss with other losses, for instance).

In critic mode, this loss function expects the `real_pred` given by the critic and some `input` (the noise fed to the generator). It will evaluate the critic using `crit_loss_func`.
This loss function has the following signature

```
def crit_loss_func(real_pred, fake_pred):
```

where `real_pred` is the output of the critic on a batch of real images and `fake_pred` is generated from the noise using the generator.

```
#export
class AdaptiveLoss(Module):
    "Expand the `target` to match the `output` size before applying `crit`."
    def __init__(self, crit): self.crit = crit
    def forward(self, output, target):
        return self.crit(output, target[:,None].expand_as(output).float())

#export
def accuracy_thresh_expand(y_pred, y_true, thresh=0.5, sigmoid=True):
    "Compute accuracy after expanding `y_true` to the size of `y_pred`."
    if sigmoid: y_pred = y_pred.sigmoid()
    return ((y_pred>thresh).byte()==y_true[:,None].expand_as(y_pred).byte()).float().mean()
```

## Callbacks for GAN training

```
#export
def set_freeze_model(m, rg):
    for p in m.parameters(): p.requires_grad_(rg)

#export
class GANTrainer(Callback):
    "Handles GAN Training."
    run_after = TrainEvalCallback
    def __init__(self, switch_eval=False, clip=None, beta=0.98, gen_first=False, show_img=True):
        store_attr('switch_eval,clip,gen_first,show_img')
        self.gen_loss,self.crit_loss = AvgSmoothLoss(beta=beta),AvgSmoothLoss(beta=beta)

    def _set_trainable(self):
        train_model = self.generator if self.gen_mode else self.critic
        loss_model = self.generator if not self.gen_mode else self.critic
        set_freeze_model(train_model, True)
        set_freeze_model(loss_model, False)
        if self.switch_eval:
            train_model.train()
            loss_model.eval()

    def before_fit(self):
        "Initialize smootheners."
        self.generator,self.critic = self.model.generator,self.model.critic
        self.gen_mode = self.gen_first
        self.switch(self.gen_mode)
        self.crit_losses,self.gen_losses = [],[]
        self.gen_loss.reset() ; self.crit_loss.reset()
        #self.recorder.no_val=True
        #self.recorder.add_metric_names(['gen_loss', 'disc_loss'])
        #self.imgs,self.titles = [],[]

    def before_validate(self):
        "Switch in generator mode for showing results."
        self.switch(gen_mode=True)

    def before_batch(self):
        "Clamp the weights with `self.clip` if it's not None, set the correct input/target."
        if self.training and self.clip is not None:
            for p in self.critic.parameters(): p.data.clamp_(-self.clip, self.clip)
        if not self.gen_mode:
            (self.learn.xb,self.learn.yb) = (self.yb,self.xb)

    def after_batch(self):
        "Record `last_loss` in the proper list."
        if not self.training: return
        if self.gen_mode:
            self.gen_loss.accumulate(self.learn)
            self.gen_losses.append(self.gen_loss.value)
            self.last_gen = to_detach(self.pred)
        else:
            self.crit_loss.accumulate(self.learn)
            self.crit_losses.append(self.crit_loss.value)

    def before_epoch(self):
        "Put the critic or the generator back to eval if necessary."
        self.switch(self.gen_mode)

    #def after_epoch(self):
    #    "Show a sample image."
    #    if not hasattr(self, 'last_gen') or not self.show_img: return
    #    data = self.learn.data
    #    img = self.last_gen[0]
    #    norm = getattr(data,'norm',False)
    #    if norm and norm.keywords.get('do_y',False): img = data.denorm(img)
    #    img = data.train_ds.y.reconstruct(img)
    #    self.imgs.append(img)
    #    self.titles.append(f'Epoch {epoch}')
    #    pbar.show_imgs(self.imgs, self.titles)
    #    return add_metrics(last_metrics, [getattr(self.smoothenerG,'smooth',None),getattr(self.smoothenerC,'smooth',None)])

    def switch(self, gen_mode=None):
        "Switch the model and loss function, if `gen_mode` is provided, in the desired mode."
        self.gen_mode = (not self.gen_mode) if gen_mode is None else gen_mode
        self._set_trainable()
        self.model.switch(gen_mode)
        self.loss_func.switch(gen_mode)
```

> Warning: The GANTrainer is useless on its own, you need to complete it with one of the following switchers

```
#export
class FixedGANSwitcher(Callback):
    "Switcher to do `n_crit` iterations of the critic then `n_gen` iterations of the generator."
    run_after = GANTrainer
    def __init__(self, n_crit=1, n_gen=1): store_attr('n_crit,n_gen')
    def before_train(self): self.n_c,self.n_g = 0,0

    def after_batch(self):
        "Switch the model if necessary."
        if not self.training: return
        if self.learn.gan_trainer.gen_mode:
            self.n_g += 1
            n_iter,n_in,n_out = self.n_gen,self.n_c,self.n_g
        else:
            self.n_c += 1
            n_iter,n_in,n_out = self.n_crit,self.n_g,self.n_c
        target = n_iter if isinstance(n_iter, int) else n_iter(n_in)
        if target == n_out:
            self.learn.gan_trainer.switch()
            self.n_c,self.n_g = 0,0

#export
class AdaptiveGANSwitcher(Callback):
    "Switcher that goes back to generator/critic when the loss goes below `gen_thresh`/`crit_thresh`."
    run_after = GANTrainer
    def __init__(self, gen_thresh=None, critic_thresh=None): store_attr('gen_thresh,critic_thresh')

    def after_batch(self):
        "Switch the model if necessary."
        if not self.training: return
        if self.gan_trainer.gen_mode:
            if self.gen_thresh is None or self.loss < self.gen_thresh: self.gan_trainer.switch()
        else:
            if self.critic_thresh is None or self.loss < self.critic_thresh: self.gan_trainer.switch()

#export
class GANDiscriminativeLR(Callback):
    "`Callback` that handles multiplying the learning rate by `mult_lr` for the critic."
    run_after = GANTrainer
    def __init__(self, mult_lr=5.): self.mult_lr = mult_lr

    def before_batch(self):
        "Multiply the current lr if necessary."
        if not self.learn.gan_trainer.gen_mode and self.training:
            self.learn.opt.set_hyper('lr', self.learn.opt.hypers[0]['lr']*self.mult_lr)

    def after_batch(self):
        "Put the LR back to its value if necessary."
if not self.learn.gan_trainer.gen_mode: self.learn.opt.set_hyper('lr', self.learn.opt.hypers[0]['lr']/self.mult_lr) ``` ## GAN data ``` #export class InvisibleTensor(TensorBase): def show(self, ctx=None, **kwargs): return ctx #export def generate_noise(fn, size=100): return cast(torch.randn(size), InvisibleTensor) #export @typedispatch def show_batch(x:InvisibleTensor, y:TensorImage, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs): if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, figsize=figsize) ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs) return ctxs #export @typedispatch def show_results(x:InvisibleTensor, y:TensorImage, samples, outs, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs): if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, add_vert=1, figsize=figsize) ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))] return ctxs bs = 128 size = 64 dblock = DataBlock(blocks = (TransformBlock, ImageBlock), get_x = generate_noise, get_items = get_image_files, splitter = IndexSplitter([]), item_tfms=Resize(size, method=ResizeMethod.Crop), batch_tfms = Normalize.from_stats(torch.tensor([0.5,0.5,0.5]), torch.tensor([0.5,0.5,0.5]))) path = untar_data(URLs.LSUN_BEDROOMS) dls = dblock.dataloaders(path, path=path, bs=bs) dls.show_batch(max_n=16) ``` ## GAN Learner ``` #export def gan_loss_from_func(loss_gen, loss_crit, weights_gen=None): "Define loss functions for a GAN from `loss_gen` and `loss_crit`." 
def _loss_G(fake_pred, output, target, weights_gen=weights_gen): ones = fake_pred.new_ones(fake_pred.shape[0]) weights_gen = ifnone(weights_gen, (1.,1.)) return weights_gen[0] * loss_crit(fake_pred, ones) + weights_gen[1] * loss_gen(output, target) def _loss_C(real_pred, fake_pred): ones = real_pred.new_ones (real_pred.shape[0]) zeros = fake_pred.new_zeros(fake_pred.shape[0]) return (loss_crit(real_pred, ones) + loss_crit(fake_pred, zeros)) / 2 return _loss_G, _loss_C #export def _tk_mean(fake_pred, output, target): return fake_pred.mean() def _tk_diff(real_pred, fake_pred): return real_pred.mean() - fake_pred.mean() #export @delegates() class GANLearner(Learner): "A `Learner` suitable for GANs." def __init__(self, dls, generator, critic, gen_loss_func, crit_loss_func, switcher=None, gen_first=False, switch_eval=True, show_img=True, clip=None, cbs=None, metrics=None, **kwargs): gan = GANModule(generator, critic) loss_func = GANLoss(gen_loss_func, crit_loss_func, gan) if switcher is None: switcher = FixedGANSwitcher(n_crit=5, n_gen=1) trainer = GANTrainer(clip=clip, switch_eval=switch_eval, gen_first=gen_first, show_img=show_img) cbs = L(cbs) + L(trainer, switcher) metrics = L(metrics) + L(*LossMetrics('gen_loss,crit_loss')) super().__init__(dls, gan, loss_func=loss_func, cbs=cbs, metrics=metrics, **kwargs) @classmethod def from_learners(cls, gen_learn, crit_learn, switcher=None, weights_gen=None, **kwargs): "Create a GAN from `learn_gen` and `learn_crit`." losses = gan_loss_from_func(gen_learn.loss_func, crit_learn.loss_func, weights_gen=weights_gen) return cls(gen_learn.dls, gen_learn.model, crit_learn.model, *losses, switcher=switcher, **kwargs) @classmethod def wgan(cls, dls, generator, critic, switcher=None, clip=0.01, switch_eval=False, **kwargs): "Create a WGAN from `data`, `generator` and `critic`." 
return cls(dls, generator, critic, _tk_mean, _tk_diff, switcher=switcher, clip=clip, switch_eval=switch_eval, **kwargs) GANLearner.from_learners = delegates(to=GANLearner.__init__)(GANLearner.from_learners) GANLearner.wgan = delegates(to=GANLearner.__init__)(GANLearner.wgan) from fastai.callback.all import * generator = basic_generator(64, n_channels=3, n_extra_layers=1) critic = basic_critic (64, n_channels=3, n_extra_layers=1, act_cls=partial(nn.LeakyReLU, negative_slope=0.2)) learn = GANLearner.wgan(dls, generator, critic, opt_func = RMSProp) learn.recorder.train_metrics=True learn.recorder.valid_metrics=False learn.fit(1, 2e-4, wd=0.) learn.show_results(max_n=9, ds_idx=0) ``` ## Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
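The alternating schedule that `FixedGANSwitcher` encodes above can be sketched free of any fastai machinery. Below is a hypothetical `simulate_schedule` helper (not part of the library) that flips a `gen_mode` flag exactly the way the callback does: start in critic mode, run `n_crit` critic batches, then `n_gen` generator batches, and repeat.

```python
def simulate_schedule(n_batches, n_crit=1, n_gen=1):
    """Return the mode ('crit' or 'gen') used for each training batch
    under a fixed critic/generator switching schedule."""
    modes, gen_mode = [], False   # gen_first=False: critic trains first
    n_c = n_g = 0
    for _ in range(n_batches):
        modes.append('gen' if gen_mode else 'crit')
        if gen_mode:
            n_g += 1
            if n_g == n_gen:
                gen_mode, n_c, n_g = False, 0, 0
        else:
            n_c += 1
            if n_c == n_crit:
                gen_mode, n_c, n_g = True, 0, 0
    return modes

print(simulate_schedule(8, n_crit=3, n_gen=1))
# ['crit', 'crit', 'crit', 'gen', 'crit', 'crit', 'crit', 'gen']
```

With the WGAN default of `n_crit=5, n_gen=1`, this yields five critic batches for every generator batch.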
github_jupyter
``` %matplotlib inline import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import numpy as np from os.path import join import os import matplotlib.patches as mpatches import matplotlib.pyplot as plt sample_md = pd.read_pickle('pickle_df') alpha_div_fp = '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/arare_max999/alpha_div_collated/observed_species.txt' alpha_div = pd.read_csv(alpha_div_fp, sep='\t', index_col=0) alpha_div = alpha_div.T.drop(['sequences per sample', 'iteration']) alpha_cols = [e for e in alpha_div.columns if '990' in e] alpha_div = alpha_div[alpha_cols] sample_md = pd.concat([sample_md, alpha_div], axis=1, join='outer') sample_md['MeanAlphaITS'] = sample_md[alpha_cols].mean(axis=1) alpha_div_fp = '/home/office-microbe-files/core_div_out/arare_max1000/alpha_div_collated/observed_otus.txt' alpha_div = pd.read_csv(alpha_div_fp, sep='\t', index_col=0) alpha_div = alpha_div.T.drop(['sequences per sample', 'iteration']) alpha_cols = [e for e in alpha_div.columns if '1000' in e] alpha_div = alpha_div[alpha_cols] sample_md = pd.concat([sample_md, alpha_div], axis=1, join='outer') sample_md['MeanAlpha16S'] = sample_md[alpha_cols].mean(axis=1) wa_its = sample_md[(sample_md['wa_day'].notnull()) & (sample_md['MeanAlphaITS'].notnull())].copy() wa_16s = sample_md[(sample_md['wa_day'].notnull()) & (sample_md['MeanAlpha16S'].notnull())].copy() wa_its['normed'] = wa_its['MeanAlphaITS']/wa_its['MeanAlphaITS'].max() wa_16s['normed'] = wa_16s['MeanAlpha16S']/wa_16s['MeanAlpha16S'].max() its_16s_corr = wa_16s.drop_duplicates('ProjectID') its_its_corr = wa_its.drop_duplicates('ProjectID') its_16s_corr = its_16s_corr.set_index('ProjectID')[['MeanAlpha16S']].copy() its_its_corr = its_its_corr.set_index('ProjectID')[['MeanAlphaITS']].copy() import matplotlib.lines as mlines with plt.rc_context(dict(sns.axes_style("darkgrid"), **sns.plotting_context("notebook", font_scale=2))): plt.figure(figsize=(12, 
10)) ax = sns.regplot(x='wa_day', y='MeanAlphaITS', data=wa_its, color='#95d5b9', label='ITS') ax = sns.regplot(x='wa_day', y='MeanAlpha16S', data=wa_16s, color='#1f2f87', label='16S') green_patch = mpatches.Patch(color='#95d5b9', label='ITS') blue_patch = mpatches.Patch(color='#1f2f87', label='16S') plt.legend(handles=[green_patch, blue_patch]) ax.set_xlabel('Daily Water Activity') ax.set_ylim(-50, 600) ax.set_ylabel('Observed OTUs') plt.savefig('figure-6.svg', dpi=300) ```
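The two recurring moves in the cells above, averaging richness across rarefaction iterations with `mean(axis=1)` and max-normalising the result, can be shown on a toy frame. Column names and values here are invented stand-ins for the collated alpha-diversity table:

```python
import pandas as pd

# Toy stand-in: rows are samples, columns are rarefaction iterations
# at one depth (names are made up for illustration).
alpha = pd.DataFrame(
    {'990_iter_1': [110.0, 80.0, 95.0],
     '990_iter_2': [130.0, 90.0, 105.0]},
    index=['s1', 's2', 's3'])

# Mean richness per sample across iterations (like MeanAlphaITS above)
mean_alpha = alpha.mean(axis=1)

# Max-normalisation, as used for the 'normed' column
normed = mean_alpha / mean_alpha.max()
print(mean_alpha.tolist())  # [120.0, 85.0, 100.0]
print(normed.tolist())      # [1.0, 0.708..., 0.833...]
```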
# Modeling and Simulation in Python Chapter 13 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) ``` # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * ``` ### Code from previous chapters `make_system`, `plot_results`, and `calc_total_infected` are unchanged. ``` def make_system(beta, gamma): """Make a system object for the SIR model. beta: contact rate in days gamma: recovery rate in days returns: System object """ init = State(S=89, I=1, R=0) init /= np.sum(init) t0 = 0 t_end = 7 * 14 return System(init=init, t0=t0, t_end=t_end, beta=beta, gamma=gamma) def plot_results(S, I, R): """Plot the results of a SIR model. S: TimeSeries I: TimeSeries R: TimeSeries """ plot(S, '--', label='Susceptible') plot(I, '-', label='Infected') plot(R, ':', label='Recovered') decorate(xlabel='Time (days)', ylabel='Fraction of population') def calc_total_infected(results): """Fraction of population infected during the simulation. results: DataFrame with columns S, I, R returns: fraction of population """ return get_first_value(results.S) - get_last_value(results.S) def run_simulation(system, update_func): """Runs a simulation of the system. system: System object update_func: function that updates state returns: TimeFrame """ init, t0, t_end = system.init, system.t0, system.t_end frame = TimeFrame(columns=init.index) frame.row[t0] = init for t in linrange(t0, t_end): frame.row[t+1] = update_func(frame.row[t], t, system) return frame def update_func(state, t, system): """Update the SIR model. 
state: State (s, i, r) t: time system: System object returns: State (sir) """ beta, gamma = system.beta, system.gamma s, i, r = state infected = beta * i * s recovered = gamma * i s -= infected i += infected - recovered r += recovered return State(S=s, I=i, R=r) ``` ### Sweeping beta Make a range of values for `beta`, with constant `gamma`. ``` beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1] gamma = 0.2 ``` Run the simulation once for each value of `beta` and print total infections. ``` for beta in beta_array: system = make_system(beta, gamma) results = run_simulation(system, update_func) print(system.beta, calc_total_infected(results)) ``` Wrap that loop in a function and return a `SweepSeries` object. ``` def sweep_beta(beta_array, gamma): """Sweep a range of values for beta. beta_array: array of beta values gamma: recovery rate returns: SweepSeries that maps from beta to total infected """ sweep = SweepSeries() for beta in beta_array: system = make_system(beta, gamma) results = run_simulation(system, update_func) sweep[system.beta] = calc_total_infected(results) return sweep ``` Sweep `beta` and plot the results. ``` infected_sweep = sweep_beta(beta_array, gamma) label = 'gamma = ' + str(gamma) plot(infected_sweep, label=label) decorate(xlabel='Contact rate (beta)', ylabel='Fraction infected') savefig('figs/chap13-fig01.pdf') ``` ### Sweeping gamma Using the same array of values for `beta` ``` beta_array ``` And now an array of values for `gamma` ``` gamma_array = [0.2, 0.4, 0.6, 0.8] ``` For each value of `gamma`, sweep `beta` and plot the results. 
``` plt.figure(figsize=(7, 4)) for gamma in gamma_array: infected_sweep = sweep_beta(beta_array, gamma) label = 'gamma = ' + str(gamma) plot(infected_sweep, label=label) decorate(xlabel='Contact rate (beta)', ylabel='Fraction infected', loc='upper left') plt.legend(bbox_to_anchor=(1.02, 1.02)) plt.tight_layout() savefig('figs/chap13-fig02.pdf') ``` **Exercise:** Suppose the infectious period for the Freshman Plague is known to be 2 days on average, and suppose during one particularly bad year, 40% of the class is infected at some point. Estimate the time between contacts. ``` # Solution # Sweep beta with fixed gamma gamma = 1/2 infected_sweep = sweep_beta(beta_array, gamma) # Solution # Interpolating by eye, we can see that the infection rate passes through 0.4 # when beta is between 0.6 and 0.7 # We can use the `crossings` function to interpolate more precisely # (although we don't know about it yet :) beta_estimate = crossings(infected_sweep, 0.4) # Solution # Time between contacts is 1/beta time_between_contacts = 1/beta_estimate ``` ## SweepFrame The following sweeps two parameters and stores the results in a `SweepFrame` ``` def sweep_parameters(beta_array, gamma_array): """Sweep a range of values for beta and gamma. beta_array: array of infection rates gamma_array: array of recovery rates returns: SweepFrame with one row for each beta and one column for each gamma """ frame = SweepFrame(columns=gamma_array) for gamma in gamma_array: frame[gamma] = sweep_beta(beta_array, gamma) return frame ``` Here's what the `SweepFrame` look like. ``` frame = sweep_parameters(beta_array, gamma_array) frame.head() ``` And here's how we can plot the results. ``` for gamma in gamma_array: label = 'gamma = ' + str(gamma) plot(frame[gamma], label=label) decorate(xlabel='Contact rate (beta)', ylabel='Fraction infected', title='', loc='upper left') ``` We can also plot one line for each value of `beta`, although there are a lot of them. 
``` plt.figure(figsize=(7, 4)) for beta in [1.1, 0.9, 0.7, 0.5, 0.3]: label = 'beta = ' + str(beta) plot(frame.row[beta], label=label) decorate(xlabel='Recovery rate (gamma)', ylabel='Fraction infected') plt.legend(bbox_to_anchor=(1.02, 1.02)) plt.tight_layout() savefig('figs/chap13-fig03.pdf') ``` It's often useful to separate the code that generates results from the code that plots the results, so we can run the simulations once, save the results, and then use them for different analysis, visualization, etc. After running `sweep_parameters`, we have a `SweepFrame` with one row for each value of `beta` and one column for each value of `gamma`. ``` contour(frame) decorate(xlabel='Recovery rate (gamma)', ylabel='Contact rate (beta)', title='Fraction infected, contour plot') savefig('figs/chap13-fig04.pdf') ```
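The sweep machinery above relies on `modsim` objects, but the underlying update step is a few lines of arithmetic. A self-contained sketch (same update rule as `update_func`, same initial state of 89 susceptible and 1 infected out of 90) makes the beta dependence easy to check without the library:

```python
def run_sir(beta, gamma, s=89/90, i=1/90, r=0.0, t_end=98):
    """Stand-alone version of the update_func/run_simulation pair above.
    Returns final (s, i, r) and the total fraction infected."""
    s0 = s
    for _ in range(t_end):
        infected = beta * i * s    # new infections this day
        recovered = gamma * i      # new recoveries this day
        s -= infected
        i += infected - recovered
        r += recovered
    return s, i, r, s0 - s         # s0 - s = total fraction infected

s, i, r, total = run_sir(beta=0.5, gamma=0.2)
print(round(total, 3))
```

Raising `beta` with `gamma` held fixed increases the total fraction infected, which is exactly the shape of the sweep curves plotted above.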
# Human numbers ``` from fastai2.text.all import * bs=64 ``` ## Data ``` path = untar_data(URLs.HUMAN_NUMBERS) path.ls() def readnums(d): return ', '.join(o.strip() for o in open(path/d).readlines()) train_txt = readnums('train.txt'); train_txt[:80] valid_txt = readnums('valid.txt'); valid_txt[-80:] train_tok = tokenize1(train_txt) valid_tok = tokenize1(valid_txt) dsets = Datasets([train_tok, valid_tok], tfms=Numericalize, dl_type=LMDataLoader, splits=[[0], [1]]) dls = dsets.dataloaders(bs=bs, val_bs=bs) dsets.show((dsets.train[0][0][:80],)) len(dsets.valid[0][0]) len(dls.valid) dls.seq_len, len(dls.valid) 13017/72/bs it = iter(dls.valid) x1,y1 = next(it) x2,y2 = next(it) x3,y3 = next(it) it.close() x1.numel()+x2.numel()+x3.numel() ``` This is the closest multiple of 64 below 13017. ``` x1.shape,y1.shape x2.shape,y2.shape x1[0] y1[0] v = dls.vocab ' '.join([v[x] for x in x1[0]]) ' '.join([v[x] for x in y1[0]]) ' '.join([v[x] for x in x2[0]]) ' '.join([v[x] for x in x3[0]]) ' '.join([v[x] for x in x1[1]]) ' '.join([v[x] for x in x2[1]]) ' '.join([v[x] for x in x3[1]]) ' '.join([v[x] for x in x3[-1]]) dls.valid.show_batch() ``` ## Single fully connected model ``` dls = dsets.dataloaders(bs=bs, seq_len=3) x,y = dls.one_batch() x.shape,y.shape nv = len(v); nv nh=64 def loss4(input,target): return F.cross_entropy(input, target[:,-1]) def acc4 (input,target): return accuracy(input, target[:,-1]) class Model0(Module): def __init__(self): self.i_h = nn.Embedding(nv,nh) # green arrow self.h_h = nn.Linear(nh,nh) # brown arrow self.h_o = nn.Linear(nh,nv) # blue arrow self.bn = nn.BatchNorm1d(nh) def forward(self, x): h = self.bn(F.relu(self.h_h(self.i_h(x[:,0])))) if x.shape[1]>1: h = h + self.i_h(x[:,1]) h = self.bn(F.relu(self.h_h(h))) if x.shape[1]>2: h = h + self.i_h(x[:,2]) h = self.bn(F.relu(self.h_h(h))) return self.h_o(h) learn = Learner(dls, Model0(), loss_func=loss4, metrics=acc4) learn.fit_one_cycle(6, 1e-4) ``` ## Same thing with a loop ``` class Model1(Module): def
__init__(self): self.i_h = nn.Embedding(nv,nh) # green arrow self.h_h = nn.Linear(nh,nh) # brown arrow self.h_o = nn.Linear(nh,nv) # blue arrow self.bn = nn.BatchNorm1d(nh) def forward(self, x): h = torch.zeros(x.shape[0], nh).to(device=x.device) for i in range(x.shape[1]): h = h + self.i_h(x[:,i]) h = self.bn(F.relu(self.h_h(h))) return self.h_o(h) learn = Learner(dls, Model1(), loss_func=loss4, metrics=acc4) learn.fit_one_cycle(6, 1e-4) ``` ## Multi fully connected model ``` dls = dsets.dataloaders(bs=bs, seq_len=20) x,y = dls.one_batch() x.shape,y.shape class Model2(Module): def __init__(self): self.i_h = nn.Embedding(nv,nh) self.h_h = nn.Linear(nh,nh) self.h_o = nn.Linear(nh,nv) self.bn = nn.BatchNorm1d(nh) def forward(self, x): h = torch.zeros(x.shape[0], nh).to(device=x.device) res = [] for i in range(x.shape[1]): h = h + self.i_h(x[:,i]) h = F.relu(self.h_h(h)) res.append(self.h_o(self.bn(h))) return torch.stack(res, dim=1) learn = Learner(dls, Model2(), loss_func=CrossEntropyLossFlat(), metrics=accuracy) learn.fit_one_cycle(10, 1e-4, pct_start=0.1) ``` ## Maintain state ``` class Model3(Module): def __init__(self): self.i_h = nn.Embedding(nv,nh) self.h_h = nn.Linear(nh,nh) self.h_o = nn.Linear(nh,nv) self.bn = nn.BatchNorm1d(nh) self.h = torch.zeros(bs, nh).cuda() def forward(self, x): res = [] if x.shape[0]!=self.h.shape[0]: self.h = torch.zeros(x.shape[0], nh).cuda() h = self.h for i in range(x.shape[1]): h = h + self.i_h(x[:,i]) h = F.relu(self.h_h(h)) res.append(self.bn(h)) self.h = h.detach() res = torch.stack(res, dim=1) res = self.h_o(res) return res def reset(self): self.h = torch.zeros(bs, nh).cuda() learn = Learner(dls, Model3(), metrics=accuracy, loss_func=CrossEntropyLossFlat()) learn.fit_one_cycle(20, 3e-3) ``` ## nn.RNN ``` class Model4(Module): def __init__(self): self.i_h = nn.Embedding(nv,nh) self.rnn = nn.RNN(nh,nh, batch_first=True) self.h_o = nn.Linear(nh,nv) self.bn = BatchNorm1dFlat(nh) self.h = torch.zeros(1, bs, nh).cuda() def
forward(self, x): if x.shape[0]!=self.h.shape[1]: self.h = torch.zeros(1, x.shape[0], nh).cuda() res,h = self.rnn(self.i_h(x), self.h) self.h = h.detach() return self.h_o(self.bn(res)) learn = Learner(dls, Model4(), loss_func=CrossEntropyLossFlat(), metrics=accuracy) learn.fit_one_cycle(20, 3e-3) ``` ## 2-layer GRU ``` class Model5(Module): def __init__(self): self.i_h = nn.Embedding(nv,nh) self.rnn = nn.GRU(nh, nh, 2, batch_first=True) self.h_o = nn.Linear(nh,nv) self.bn = BatchNorm1dFlat(nh) self.h = torch.zeros(2, bs, nh).cuda() def forward(self, x): if x.shape[0]!=self.h.shape[1]: self.h = torch.zeros(2, x.shape[0], nh).cuda() res,h = self.rnn(self.i_h(x), self.h) self.h = h.detach() return self.h_o(self.bn(res)) learn = Learner(dls, Model5(), loss_func=CrossEntropyLossFlat(), metrics=accuracy) learn.fit_one_cycle(10, 1e-2) ``` ## fin
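As a recap of the looped models above, the core recurrence `h = relu(h_h(h + i_h(x_t)))` can be written out in plain numpy. This is a sketch only: BatchNorm, batching and training are omitted, and the sizes are toy values rather than the notebook's `nh=64`:

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 10, 4                       # toy vocab and hidden sizes
E  = rng.normal(size=(nv, nh))       # i_h: embedding table
W  = rng.normal(size=(nh, nh))       # h_h: hidden-to-hidden weights
Wo = rng.normal(size=(nh, nv))       # h_o: output projection

def forward(tokens):
    """Plain-numpy version of Model1's loop for a single sequence."""
    h = np.zeros(nh)
    for t in tokens:
        h = np.maximum(0, (h + E[t]) @ W)   # h = relu(h_h(h + i_h(x_t)))
    return h @ Wo                            # logits over the vocab

logits = forward([1, 5, 2])
print(logits.shape)  # (10,)
```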
``` import cv2 import numpy as np from glob import glob from matplotlib import pyplot as plt def patch_gen(img_orig,patchSize,r): # print(len(img_orig.shape)) if(len(img_orig.shape)==3): img = cv2.cvtColor(img_orig,cv2.COLOR_RGB2GRAY) else: img = img_orig # plt.imshow(img, cmap='gray') # print(img.shape[1]) x = np.random.randint(r, img.shape[1]-r-patchSize) y = np.random.randint(r, img.shape[0]-r-patchSize) p1 = (x,y) p2 = (patchSize+x, y) p3 = (patchSize+x, patchSize+y) p4 = (x, patchSize+y) src = [p1, p2, p3, p4] # # Printing and plotting Source Points # print(src) # plt.figure() # src_image = img_orig.copy() # cv2.polylines(src_image, np.int64([src]), 1, (255,0,0),2) # plt.imshow(src_image) dst = [] for pt in src: dst.append((pt[0]+np.random.randint(-r, r), pt[1]+np.random.randint(-r, r))) # #Printing and Plotting Destination Points # print(dst) # plt.figure() # dst_image = img_orig.copy() # cv2.polylines(dst_image, np.int64([dst]), 1, (255,0,0),2) # plt.imshow(dst_image) H = cv2.getPerspectiveTransform(np.float32(src), np.float32(dst)) H_inv = np.linalg.inv(H) warpImg = cv2.warpPerspective(img, H_inv, (img.shape[1],img.shape[0])) # # Plotting Warped Image # tempImg = warpImg.copy() # cv2.polylines(tempImg, np.int32([src]), 1, (0),2) # plt.figure() # plt.imshow(tempImg, cmap='gray') # plt.show() patch1 = img[y:y + patchSize, x:x + patchSize] patch2 = warpImg[y:y + patchSize, x:x + patchSize] # plt.figure() # plt.imshow(patch1, cmap='gray') # plt.show() # plt.figure() # plt.imshow(patch2, cmap='gray') # plt.show() imgData = np.dstack((patch1, patch2)) hData = np.subtract(np.array(dst), np.array(src)) return imgData,hData trainImg = glob("./Data/Train/*.jpg") valImg = glob("./Data/Val/*.jpg") print("No. of Training Images = " + str(len(trainImg))) print("No. of Validation Images = " + str(len(valImg))) def data_gen(Image,size=(640,480),patchSize=128,r=32): X=[] Y=[] for j in range(1): print("No. 
of samples collected = "+str(len(Image)*j)) for i in range(len(Image)): img = plt.imread(Image[i]) img = cv2.resize(img,size) imgData,hData = patch_gen(img,patchSize,r) X.append(imgData) Y.append(hData) return X,Y X_train=[] Y_train=[] X_val = [] Y_val = [] X_train,Y_train = data_gen(trainImg) X_val,Y_val = data_gen(valImg) # Saving features and labels import pickle training = {'features': X_train, 'labels': Y_train} pickle.dump(training, open("training_less.pkl", "wb")) validation = {'features': X_val, 'labels': Y_val} pickle.dump(validation, open("validation_less.pkl", "wb")) ```
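The labelling scheme inside `patch_gen` above is the standard deep-homography setup: perturb the four patch corners by up to ±r pixels and regress on the corner offsets `dst - src`. The corner/label part can be isolated without OpenCV (the homography warp itself is omitted; `four_point_label` is an illustrative helper, not a function from the notebook):

```python
import numpy as np

def four_point_label(x, y, patch_size, r, rng):
    """Build the 4-point regression target used by patch_gen:
    src corners of the patch, randomly perturbed dst corners,
    and the (4, 2) offset matrix dst - src."""
    src = np.array([(x, y),
                    (x + patch_size, y),
                    (x + patch_size, y + patch_size),
                    (x, y + patch_size)])
    dst = src + rng.integers(-r, r, size=src.shape)  # offsets in [-r, r)
    return src, dst, dst - src

rng = np.random.default_rng(42)
src, dst, h4pt = four_point_label(x=50, y=40, patch_size=128, r=32, rng=rng)
print(h4pt.shape)  # (4, 2)
```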
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import sys
from pathlib import Path
sys.path.append(str(Path().cwd().parent))
import pandas as pd
from load_dataset import Dataset
from model import TimeSeriesPredictor, TimeSeriesDetector
from sklearn.linear_model import Ridge
import plotting
from typing import Tuple
```
## Walkthrough (no need to run this)
#### Take an hourly time series
```
dataset = Dataset('../data/dataset')
ts = dataset['hour_2263.csv']
ts.plot(figsize=(10, 8))
```
#### Split into train and test
```
ts_train, ts_test = ts[:-100], ts[-100:]
```
#### Create a detector instance with Ridge as the base model
```
detector = TimeSeriesDetector(
    granularity='PT1H',
    num_lags=24,
    model=Ridge,
    alpha=7,
    sigma=2.3
)
```
#### Fit the model on the train set
```
detector.fit(ts_train)
```
#### Using the fitted model, collect statistics from the train set
```
detector.fit_statistics(ts_train)
detector.std
```
#### Make an in-sample forecast on ts_test
```
preds = detector.predict_batch(ts_train, ts_test)
```
#### Using the collected statistic (the standard deviation of the residuals), get confidence intervals
```
lower, upper = detector.get_prediction_intervals(preds)
lower
upper
```
#### Get the anomalies
```
anoms = detector.detect(ts_test, preds)
anoms
plotting.plot_detection(ts_test, upper, lower, preds)
```
## Practice
```
class TimeSeriesDetector(TimeSeriesPredictor):
    def __init__(self, sigma=2.7, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.sigma = sigma

    def fit_statistics(self, ts: pd.Series):
        """
        Uses the predict_batch method to get in-sample predictions on the series ts.
        Then obtains the residuals by subtracting this forecast from the actuals.
        Computes the standard deviation of the residuals and stores it in the std attribute.
        """
        self.std = std

    def get_prediction_intervals(self, y_pred: pd.Series, season=False) -> Tuple[pd.Series]:
        """
        Using the residual std, the given self.sigma and the predicted values y_pred,
        returns confidence intervals for them.
        """
        return lower, upper

    def detect(self, ts_true, ts_pred, season=False) -> pd.Series:
        """
        Using the get_prediction_intervals method, returns the values of ts_true
        that fall outside the confidence intervals.
        """
        return

    def fit_seasonal_statistics(self, ts_train, n_splits=3, period=24):
        pass
```
## Practice. Part 1. Basics.
### Task 1. The fit_statistics method.
Add a `fit_statistics` method to the `TimeSeriesPredictor` class that uses the fitted model to obtain the prediction residuals on the train set and returns their standard deviation.
* takes a series ts as input
* uses `self.predict_batch` to get an in-sample forecast for ts
* obtains the residuals by subtracting the forecast from the actuals (don't forget abs())
* computes the standard deviation of the residuals and stores it in the `self.std` attribute

### Task 2. The get_prediction_intervals method.
Add a get_prediction_intervals method to the TimeSeriesPredictor class that takes a predicted batch ts_pred and, using the std from the train set, returns the lower and upper confidence intervals for each point.
* takes a predicted batch y_pred as input
* returns its upper and lower intervals via the formula `upper, lower = y_pred +/- sigma * std`

### Task 3. The detect method.
Add a detect method to the TimeSeriesPredictor class that takes ts_true and ts_pred as input and returns the values of the series that fall outside the confidence intervals.

## Practice. Part 2. Advanced.
* The most important assumption behind this method is that its residuals are at least roughly normal.
* One obvious violation of residual "normality" can be observed in series where the noise component has a different variance depending on where in the seasonal period it falls. For example, in our series the noise during the day, at the "peaks" of the series, is clearly higher than what is observed in the early morning or at night, which is explained by natural differences in the business patterns.
* One way to deal with this problem is to compute seasonal intervals: instead of computing the standard deviation of the residuals over the entire history of the series, we compute a standard deviation for each segment of the seasonal period.

### Task 1. The fit_seasonal_statistics method.
Add a `fit_seasonal_statistics` method that returns the standard deviation of the prediction residuals for `n_splits` equal segments within the seasonal period. For example, for hourly series it would return sigmas for `n_splits=3` segments: midnight to 8am, 8am to 4pm, and 4pm to midnight.
* `def fit_seasonal_statistics(self, ts_train, n_splits=3, period=24):`
    # get predictions for ts_train
    # split the residuals into datetime intervals of length period/n_splits
    # compute the standard deviation for each segment
    # store the standard deviations in the attribute
    self.season_std = {
        datetime_range_1: std_1,
        datetime_range_2: std_2,
        datetime_range_3: std_3,
    }
### Task 2. Modify get_prediction_intervals.
Add a `season=True/False` parameter to the `get_prediction_intervals` method which, when enabled, determines which interval in `self.season_std` each point of `ts_pred` belongs to and, using the corresponding `std`, returns confidence intervals for it.
```
predictor.fit(ts_train)
predictor.fit_seasonal_statistics(ts_train)
predictions = predictor.predict_batch(ts_train, ts_test)
lower, upper = predictor.get_prediction_intervals(predictions, season=True)
predictor.detect(ts_test, predictions, season=True)
```
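The band-based detection logic the exercises describe can be sketched with plain pandas, independent of the notebook's `TimeSeriesPredictor` base class. `detect_anomalies` below is an illustrative stand-in, not the class method itself; the series values are toy data:

```python
import pandas as pd

def detect_anomalies(ts_true, ts_pred, std, sigma=2.7):
    """Flag points of ts_true outside the band ts_pred +/- sigma * std."""
    lower = ts_pred - sigma * std
    upper = ts_pred + sigma * std
    return ts_true[(ts_true < lower) | (ts_true > upper)]

truth = pd.Series([10.0, 10.5, 30.0, 9.8], index=range(4))
preds = pd.Series([10.0, 10.0, 10.0, 10.0], index=range(4))

anoms = detect_anomalies(truth, preds, std=1.0, sigma=2.7)
print(anoms)  # only the spike at index 2 (value 30.0) is outside [7.3, 12.7]
```

The seasonal variant from Part 2 is the same comparison, just with a per-segment `std` looked up for each timestamp instead of one global value.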
# Data Visualization and manipulations ``` import sys import numpy as np from numpy import set_printoptions set_printoptions(precision=3) import pandas as pd import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (16,12) import seaborn as sns from IPython.display import display ``` # Load diabetes data from CSV file ``` filename = "../data/pima-indians-diabetes.data.csv" columns = ['pregnant', 'plasma_glucose', 'blood_pressure', 'skin_fold', 'serum_insulin', 'bmi', 'pedigree', 'age', 'class'] data = pd.read_csv(filename, names=columns) display(data.head()) display(data.sample(5, random_state=1)) print "Data Rows: {}, Cols: {}".format(data.shape[0], data.shape[1]) data.info() data.describe() ``` # `groupby` operation to consolidate columns ``` display(data.groupby('class').sum()) display(data.groupby('class').mean()) ``` # `apply` to manipulate columns You can pass in any `function`, Return type depends on whether passed function aggregates ``` display(data.apply(np.mean)) display(data.apply(np.sin).head()) ``` # Correlation ``` corr = data.corr(method='pearson') corr sns.heatmap(corr, cmap=sns.cubehelix_palette(as_cmap=True), annot=True) ``` # Skewness ``` data.skew() # Right Skew sns.distplot(data['pedigree']) # Left Skew sns.distplot(data['blood_pressure']) sns.pairplot(data[['plasma_glucose', 'blood_pressure', 'serum_insulin', 'class']], hue="class") ``` # Feature Selections ### Univariate Selection ``` from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 array = data.values X = array[:,0:8] Y = array[:,8] test = SelectKBest(score_func=chi2, k=3) fit = test.fit(X, Y) print(fit.scores_) ``` ### Recursive Feature Elimination ``` from sklearn.linear_model import LogisticRegression from sklearn.feature_selection import RFE model = LogisticRegression() # Model does not matter much # Select top 5 features rfe = RFE(model, 5) fit = rfe.fit(X, Y) print("Num Features: %d" % fit.n_features_) print("Selected Features: %s" % 
fit.support_) print("Feature Ranking: %s" % fit.ranking_) ``` ### Feature importance using Random forest ``` from sklearn.ensemble import ExtraTreesClassifier model = ExtraTreesClassifier() model.fit(X, Y) print(model.feature_importances_) ``` # Statistical Learning Techniques ### Extratrees Classifier ``` from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1234) # model = LogisticRegression() model = ExtraTreesClassifier(max_depth=100) model.fit(X_train, Y_train) result = model.score(X_test, Y_test) print("Accuracy: %.3f%%" % (result*100.0)) ``` --- # Basic Neural Network ## Cross-entropy Loss or Negative Log Likelihood $$-\frac{1}{N}\sum_{n \epsilon N}\sum_{i \epsilon C} y_{n,i} log \hat{y}_{n,i}$$ ### Gradients --- $$\delta_{2} = \hat{y} - y$$ $$\delta_{1} = \delta_{2} W_{2}^{T}\odot a_{1} \odot (1 - a_{1})$$ --- $$ \frac{\delta L}{\delta W_{2}} = a^{T}_{1} \delta_{2}$$ $$ \frac{\delta L}{\delta b_{2}} = \delta_{2}$$ $$ \frac{\delta L}{\delta W_{1}} = x^{T} \delta_{1}$$ $$ \frac{\delta L}{\delta b_{2}} = \delta_{1}$$ ``` class NeuralNetwork(object): def __init__(self, n_features=10, n_output=10, n_hidden=100, learning_rate=0.001, reg_lambda=None): ### Network Dimensions self.n_output = n_output self.n_features = n_features self.n_hidden = n_hidden ### Initialize weights self.w_hid = np.random.randn(self.n_features, self.n_hidden) self.b_hid = np.random.randn(self.n_hidden) self.w_out = np.random.randn(self.n_hidden, self.n_output) self.b_out = np.random.randn(self.n_output) ### Hyper parameters self.learning_rate = learning_rate self.reg_lambda = reg_lambda def _one_hotize(self, y, k): onehot = np.zeros((k, y.shape[0])) for idx, val in enumerate(y): onehot[val, idx] = 1.0 return onehot def _sigmoid(self, x): return 1 / (1 + np.exp(-x)) def _softmax(self, x): exp_x = np.exp(x) return exp_x / exp_x.sum(axis=1, keepdims=True) 
def train(self, input_list, target_list): ### Convert inputs list to 2d array inputs = np.array(input_list, ndmin=2) targets = np.array(target_list, ndmin=2) targets = self._one_hotize(targets, self.n_output).T ### Forward A = self._sigmoid(inputs.dot(self.w_hid) + self.b_hid) Y = self._softmax(A.dot(self.w_out) + self.b_out) ### Calculate Loss ### loss = -np.sum(targets * np.log(Y))/len(targets) ### Backward # Total Error delta2 = Y - targets delta1 = delta2.dot(self.w_out.T) * A * (1 - A) dw_out = A.T.dot(delta2) db_out = delta2.sum(axis=0) dw_hid = inputs.T.dot(delta1) db_hid = delta1.sum(axis=0) ### Add L2 regularization terms (b1 and b2 don't have regularization terms) if self.reg_lambda: dw_out += self.reg_lambda * self.w_out dw_hid += self.reg_lambda * self.w_hid ### Update Weights self.w_hid -= self.learning_rate * dw_hid self.b_hid -= self.learning_rate * db_hid self.w_out -= self.learning_rate * dw_out self.b_out -= self.learning_rate * db_out return loss def inference(self, input_list): inputs = np.array(input_list, ndmin=2) A = self._sigmoid(inputs.dot(self.w_hid) + self.b_hid) Y = self._softmax(A.dot(self.w_out) + self.b_out) return Y def get_accuracy(self, input_list, target_labels): preds = np.argmax(self.inference(input_list), axis=1) accuracy = np.sum(target_labels == preds, axis=0) * 1.0/len(target_labels) return accuracy ### Load Data ### filename = "../data/pima-indians-diabetes.data.csv" columns = ['pregnant', 'plasma_glucose', 'blood_pressure', 'skin_fold', 'serum_insulin', 'bmi', 'pedigree', 'age', 'class'] data = pd.read_csv(filename, names=columns) #### Normalize Data ### feature_columns = ['pregnant', 'plasma_glucose', 'blood_pressure', 'skin_fold', 'serum_insulin', 'bmi', 'pedigree', 'age'] scaled_features = {} for col in feature_columns: mean, std = data[col].mean(), data[col].std() scaled_features[col] = [mean, std] data.loc[:, col] = (data[col] - mean)/std #### Drop Less "Important" columns ### data = data.drop(['pregnant', 'blood_pressure',
'skin_fold', 'serum_insulin', 'pedigree'], axis=1)

### Train/Test Split ###
X = data.iloc[:,:-1]
y = data.iloc[:,-1:]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234)

epochs = 1000
n_features = X_train.shape[1]
n_output = 2
n_hidden = 100
learning_rate = 0.001
reg_lambda = 0.001

### Stats ###
training_stats = {'training_acc': [], 'validation_acc': [], 'loss': []}

network = NeuralNetwork(n_features, n_output, n_hidden, learning_rate, reg_lambda)

for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(X_train.index, size=128)
    loss = 0
    n = 0
    # .loc replaces the deprecated .ix indexer
    for record, target in zip(X_train.loc[batch].values, y_train.loc[batch]['class']):
        loss += network.train(record, target)
        n += 1
    loss = loss/n
    training_stats['loss'].append(loss)

    # Printing out the training progress
    train_acc = network.get_accuracy(X_train, list(y_train['class']))
    val_acc = network.get_accuracy(X_test, list(y_test['class']))
    sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
                     + "% ... Training acc: {}%".format(str(train_acc * 100)[:5]) \
                     + " ... Validation acc: {}%".format(str(val_acc * 100)[:5]))
    training_stats['training_acc'].append(train_acc)
    training_stats['validation_acc'].append(val_acc)

print("\n-------")

stats_df = pd.DataFrame.from_dict(training_stats)
display(stats_df.head())
stats_df[['training_acc', 'validation_acc']].plot()
```

# Now Keras

```
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

def one_hotize(df):
    onehot = []
    for row in df.iterrows():
        cod = [0, 0]
        cod[row[1]['class']] = 1
        onehot.append(cod)
    onehot = np.array(onehot)
    return onehot

filename = "../data/pima-indians-diabetes.data.csv"
columns = ['pregnant', 'plasma_glucose', 'blood_pressure', 'skin_fold',
           'serum_insulin', 'bmi', 'pedigree', 'age', 'class']
data = pd.read_csv(filename, names=columns)

#### Normalize Data ###
feature_columns = ['pregnant', 'plasma_glucose', 'blood_pressure', 'skin_fold',
                   'serum_insulin', 'bmi', 'pedigree', 'age']
scaled_features = {}
for col in feature_columns:
    mean, std = data[col].mean(), data[col].std()
    scaled_features[col] = [mean, std]
    data.loc[:, col] = (data[col] - mean)/std

#### Drop Less "Important" columns ###
data = data.drop(['skin_fold', 'serum_insulin', 'age'], axis=1)

X = data.iloc[:,:-1]
Y = data.iloc[:,-1:]
Y_onehot = one_hotize(Y)

n_features = X.shape[1]
n_output = 2
n_hidden = 20

# create model ('kernel_initializer' replaces the deprecated 'init' argument)
model = Sequential()
model.add(Dense(n_hidden, input_dim=n_features, kernel_initializer='uniform', activation='relu'))
model.add(Dense(n_hidden, kernel_initializer='uniform', activation='relu'))
model.add(Dense(n_output, kernel_initializer='uniform', activation='softmax'))

# categorical_crossentropy matches the one-hot targets and the softmax output
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
training = model.fit(X, Y_onehot, validation_split=0.20, epochs=100, batch_size=5, verbose=1)

pd.DataFrame.from_dict(training.history)[['acc', 'val_acc']].plot()
```
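The backward pass in the from-scratch network above uses `delta2 = Y - targets` as the gradient of the softmax cross-entropy loss with respect to the logits. As a quick sanity check, that identity can be verified numerically with central differences; this is a standalone sketch, independent of the network class:

```
import numpy as np

def softmax(z):
    # Row-wise softmax with the usual max-subtraction for stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(z, t):
    # Summed cross-entropy over the whole batch
    return -np.sum(t * np.log(softmax(z)))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 3))                 # logits: 4 samples, 3 classes
t = np.eye(3)[rng.integers(0, 3, size=4)]   # one-hot targets

analytic = softmax(z) - t                   # the "Y - targets" delta

eps = 1e-6
numeric = np.zeros_like(z)
for i in range(z.shape[0]):
    for j in range(z.shape[1]):
        zp, zm = z.copy(), z.copy()
        zp[i, j] += eps
        zm[i, j] -= eps
        numeric[i, j] = (ce_loss(zp, t) - ce_loss(zm, t)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))   # tiny, on the order of 1e-9
```

If the printed difference were large, it would indicate a bug in either the loss or the gradient expression, which is exactly the kind of mistake this check is designed to catch.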
# Training Built-in Algorithms with SageMaker (Part 4/4)

Download | Structure | Preprocessing (Built-in) | **Train Model (Built-in)**

```
```

**Notes**:
* This notebook should be used with the conda_amazonei_mxnet_p36 kernel
* This notebook is part of a series of notebooks beginning with `01_download_data`, `02_structuring_data` and `03a_builtin_preprocessing`.
* You can also explore training with TensorFlow and PyTorch by running `04b_tensorflow_training` and `04c_pytorch_training`, respectively.

<pre>
</pre>

In this notebook, you will use the SageMaker SDK to create an Estimator for SageMaker's Built-in Image Classification algorithm and train it on a remote EC2 instance.

<pre>
</pre>

## Overview
* #### [Dependencies](#ipg4a.1)
* #### [Built-in Image Classification algorithm](#ipg4a.2)
* #### [Understanding the training output](#ipg4a.3)

<pre>
</pre>

<a id='ipg4a.1'></a>
## Dependencies
___
### Import packages and check SageMaker version
```
import boto3
import shutil
import urllib
import pickle
import pathlib
import tarfile
import subprocess
import sagemaker
```

### Load S3 bucket name & category labels
The `category_labels` file was generated from the first notebook in this series, `01_download_data.ipynb`. You will need to run that notebook before running the code here. An S3 bucket for this guide was created in Part 3.
```
with open("pickled_data/builtin_bucket_name.pickle", "rb") as f:
    bucket_name = pickle.load(f)
print("Bucket Name: ", bucket_name)

with open("pickled_data/category_labels.pickle", "rb") as f:
    category_labels = pickle.load(f)
```

<pre>
</pre>

<a id='ipg4a.2'></a>
## Built-in Image Classification algorithm
___
### Create SageMaker training and validation channels
```
train_data = sagemaker.inputs.TrainingInput(
    s3_data=f"s3://{bucket_name}/data/train",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
    input_mode="Pipe",
)

val_data = sagemaker.inputs.TrainingInput(
    s3_data=f"s3://{bucket_name}/data/val",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
    input_mode="Pipe",
)

data_channels = {"train": train_data, "validation": val_data}
```

### Configure the algorithm's hyperparameters
https://docs.aws.amazon.com/sagemaker/latest/dg/IC-Hyperparameter.html

* **num_layers** - The built-in image classification algorithm is based on the ResNet architecture. There are many versions of this architecture, differing in how many layers they use. We'll use the smallest one for this guide to speed up training. If the algorithm's accuracy is hitting a plateau and you need better accuracy, increasing the number of layers may help.
* **use_pretrained_model** - This will initialize the weights from a pre-trained model for transfer learning. Otherwise weights are initialized randomly.
* **augmentation_type** - Allows you to add augmentations to your training set to help your model generalize better. For small datasets, augmentation can greatly improve training.
* **image_shape** - The channel, height, width of all the images
* **num_classes** - Number of classes in your dataset
* **num_training_samples** - Total number of images in your training set (used to help calculate progress)
* **mini_batch_size** - The batch size you would like to use during training.
* **epochs** - An epoch refers to one cycle through the training set, and training for more epochs means more opportunities to improve accuracy. Suitable values range from 5 to 25 epochs depending on your time and budget constraints. Ideally, the right number of epochs is right before your validation accuracy plateaus.
* **learning_rate** - After each batch of training we update the model's weights to give us the best possible results for that batch. The learning rate controls by how much we should update the weights. Typical values range between 0.2 and 0.001, and rarely go higher than 1. The higher the learning rate, the faster your training will converge to the optimal weights, but going too fast can lead you to overshoot the target. In this example, we're using the weights from a pre-trained model, so we'd want to start with a lower learning rate because the weights have already been optimized and we don't want to move too far away from them.
* **precision_dtype** - Whether you want to use a 32-bit float data type for the model's weights or 16-bit. 16-bit can be used if you're running into memory management issues. However, weights can grow or shrink rapidly, so 32-bit weights make your training more robust to these issues and are typically the default in most frameworks.
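To build intuition for the `learning_rate` bullet above, here is a minimal, self-contained sketch (not SageMaker code) showing how a too-large step size overshoots on the toy objective f(w) = w², whose gradient is 2w; the specific values are illustrative only:

```
# Gradient descent on f(w) = w**2: each step is w -= lr * 2w.
# A small learning rate converges toward the minimum at w = 0;
# a learning rate above 1 overshoots and |w| grows every step.
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(abs(descend(0.1)))   # shrinks toward 0
print(abs(descend(1.1)))   # overshoots: grows with every step
```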
```
num_classes = len(category_labels)
num_training_samples = len(set(pathlib.Path("data_structured/train").rglob("*.jpg")))

hyperparameters = {
    "num_layers": 18,
    "use_pretrained_model": 1,
    "augmentation_type": "crop_color_transform",
    "image_shape": "3,224,224",
    "num_classes": num_classes,
    "num_training_samples": num_training_samples,
    "mini_batch_size": 64,
    "epochs": 5,
    "learning_rate": 0.001,
    "precision_dtype": "float32",
}
```

### Configure the type of algorithm and resources to use
```
training_image = sagemaker.image_uris.retrieve(
    "image-classification", sagemaker.Session().boto_region_name
)

algo_config = {
    "hyperparameters": hyperparameters,
    "image_uri": training_image,
    "role": sagemaker.get_execution_role(),
    "instance_count": 1,
    "instance_type": "ml.p3.2xlarge",
    "volume_size": 100,
    "max_run": 360000,
    "output_path": f"s3://{bucket_name}/data/output",
}
```

### Create and train the algorithm
```
algorithm = sagemaker.estimator.Estimator(**algo_config)
algorithm.fit(inputs=data_channels, logs=True)
```

<pre>
</pre>

<a id='ipg4a.3'></a>
## Understanding the training output
___

```
[09/14/2020 05:37:38 INFO 139869866030912] Epoch[0] Batch [20]#011Speed: 111.811 samples/sec#011accuracy=0.452381
[09/14/2020 05:37:54 INFO 139869866030912] Epoch[0] Batch [40]#011Speed: 131.393 samples/sec#011accuracy=0.570503
[09/14/2020 05:38:10 INFO 139869866030912] Epoch[0] Batch [60]#011Speed: 139.540 samples/sec#011accuracy=0.617700
[09/14/2020 05:38:27 INFO 139869866030912] Epoch[0] Batch [80]#011Speed: 144.003 samples/sec#011accuracy=0.644483
[09/14/2020 05:38:43 INFO 139869866030912] Epoch[0] Batch [100]#011Speed: 146.600 samples/sec#011accuracy=0.664991
```

Training has begun:
* Epoch[0]: One epoch corresponds to one training cycle through all the data. Stochastic optimizers like SGD and Adam improve accuracy by running multiple epochs. Random data augmentation is also applied with each new epoch, allowing the training algorithm to learn on modified data.
* Batch: The number of batches processed by the training algorithm. We specified one batch to be 64 images in the `mini_batch_size` hyperparameter. For algorithms like SGD, the model gets a chance to update itself every batch.
* Speed: the number of images sent to the training algorithm per second. This information is important in determining how changes in your dataset affect the speed of training.
* Accuracy: the training accuracy achieved at each interval (in this case, 20 batches).

```
[09/14/2020 05:38:58 INFO 139869866030912] Epoch[0] Train-accuracy=0.677083
[09/14/2020 05:38:58 INFO 139869866030912] Epoch[0] Time cost=102.745
[09/14/2020 05:39:02 INFO 139869866030912] Epoch[0] Validation-accuracy=0.729492
[09/14/2020 05:39:02 INFO 139869866030912] Storing the best model with validation accuracy: 0.729492
[09/14/2020 05:39:02 INFO 139869866030912] Saved checkpoint to "/opt/ml/model/image-classification-0001.params"
```

The first epoch of training has ended (for this example we only train for one epoch). The final training accuracy is reported as well as the accuracy on the validation set. Comparing these two numbers is important in determining if your model is overfit or underfit, as well as the bias/variance trade-off. The saved model uses the learned weights from the epoch with the best validation accuracy.

```
2020-09-14 05:39:03 Uploading - Uploading generated training model
2020-09-14 05:39:15 Completed - Training job completed
Training seconds: 235
Billable seconds: 235
```

The final model parameters are saved as a `.tar.gz` in S3 to the directory specified in the `output_path` of `algo_config`. Total billable seconds is also reported to help compute the cost of training, since you are only charged for the time the EC2 instance is training on the data. Other costs such as S3 storage also apply, but are not included here.
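Since billing is per second of training, the reported billable seconds can be turned into a rough cost estimate. The hourly rate below is an assumption for illustration only; check current SageMaker pricing for your instance type and region:

```
# Hedged sketch: estimating training cost from the "Billable seconds" line.
billable_seconds = 235
hourly_rate_usd = 3.825  # assumed ml.p3.2xlarge rate -- verify against current pricing

cost = billable_seconds / 3600 * hourly_rate_usd
print(f"approximate training cost: ${cost:.2f}")  # about $0.25 at the assumed rate
```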
<pre>
</pre>

## Rollback to default version of SDK
Only do this if you're done with this guide and want to use the same kernel for other notebooks with an incompatible version of the SageMaker SDK.
```
# print(f'Original version: {original_sagemaker_version[0]}')
# print(f'Current version: {sagemaker.__version__}')
# print('')
# print(f'Rolling back to {original_sagemaker_version[0]}. Restart notebook kernel to use this version.')
# print('')
# s = f'sagemaker=={original_sagemaker_version[0]}'
# !{sys.executable} -m pip install {s}
```

<pre>
</pre>

## Next Steps
This concludes the Image Data Guide for SageMaker's Built-in algorithms. If you'd like to deploy your model and get predictions on your test data, all the info you'll need to get going can be found here: [Deploy Models for Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html)
# Download Repo and Setup Installation Download the github repo containing the model, data, and weights ``` %cd !git clone --quiet https://github.com/bwproud/mask_rcnn_recyclables.git %cd ~/mask_rcnn_recyclables !pip install -r requirements.txt !python setup.py install ``` # Mount Directory If you want to save your model or use images in your drive as input it's necessary to mount your directory ``` from google.colab import drive drive.mount('/content/gdrive') ``` # Unzip Dataset Unzip dataset that contains our images broken up by val/train/test and then by classes ``` %cd ~/mask_rcnn_recyclables !rm -rf dataset import os from zipfile import ZipFile from shutil import copy os.makedirs('dataset') os.chdir('dataset') fileName = '../data.zip' ds = ZipFile(fileName) ds.extractall() print('Extracted zip file ' + fileName) print() ``` # Train model If you want to train the model, run the cell below; you can customize the number of epochs, the number of layers to train, and the initial starting weights ``` %cd ~/mask_rcnn_recyclables !python detect_recyclables.py train --dataset=dataset/ --weights=coco --epochs=30 --style=heads ``` # Load weights Load your newly trained weights for use in inference. If you didn't train and want to supply the weights, just put the path to your weights in custom_WEIGHTS_PATH. To test on our weights, go to https://drive.google.com/file/d/1AunkAymLdhg-dgJYU4jddjcjF0XMxzzJ/view and click "Add to my drive" or download it and add it to your drive manually. 
If you mounted your drive earlier then you should be able to retrieve our weights from your drive ``` %cd ~/mask_rcnn_recyclables import os import cv2 import sys import random import math import re import time import numpy as np import tensorflow as tf import matplotlib import matplotlib.pyplot as plt import matplotlib.patches as patches import skimage import glob from mrcnn import utils from mrcnn import visualize from mrcnn.visualize import display_images import mrcnn.model as modellib from mrcnn.model import log from importlib import reload import detect_recyclables # Root directory of the project ROOT_DIR = os.getcwd() # Import Mask RCNN sys.path.append(ROOT_DIR) # To find local version of the library %matplotlib inline # Directory to save logs and trained model MODEL_DIR = os.path.join(ROOT_DIR, "logs") config = detect_recyclables.RecycleConfig() custom_DIR = os.path.join(ROOT_DIR, "dataset") class InferenceConfig(config.__class__): # Run detection on one image at a time GPU_COUNT = 1 IMAGES_PER_GPU = 1 config = InferenceConfig() config.display() # Device to load the neural network on. # Useful if you're training a model on the same # machine, in which case use CPU and leave the # GPU for training. DEVICE = "/gpu:0" # /cpu:0 or /gpu:0 # Inspect the model in training or inference modes # values: 'inference' or 'training' # TODO: code for 'training' test mode not ready yet TEST_MODE = "inference" def get_ax(rows=1, cols=1, size=16): """Return a Matplotlib Axes array to be used in all visualizations in the notebook. Provide a central point to control graph sizes. 
Adjust the size attribute to control how big to render images """ _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows)) return ax # Load validation dataset val_dataset = detect_recyclables.RecycleDataset() val_dataset.load_recycle(custom_DIR, "val") # Must call before using the dataset val_dataset.prepare() print("Images: {}\nClasses: {}".format(len(val_dataset.image_ids), val_dataset.class_names)) # Load test dataset test_dataset = detect_recyclables.RecycleDataset() test_dataset.load_recycle(custom_DIR, "test") # Must call before using the dataset test_dataset.prepare() print("Images: {}\nClasses: {}".format(len(test_dataset.image_ids), test_dataset.class_names)) # Create model in inference mode with tf.device(DEVICE): model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config) def load_weights(model, weight_path): # Load weights print("Loading weights ", weight_path) model.load_weights(weight_path, by_name=True) reload(visualize) # custom_WEIGHTS_PATH = sorted(glob.glob("/logs/*/mask_rcnn_*.h5"))[-1] # TO RUN ON OUR WEIGHTS UNCOMMENT LINE BELOW AND COMMENT LINE ABOVE # TO RUN ON YOUR OWN TRAINED WEIGHTS COMMENT LINE BELOW AND UNCOMMENT ABOVE custom_WEIGHTS_PATH = '/content/gdrive/My Drive/custom_weights.h5' load_weights(model, custom_WEIGHTS_PATH) ``` # Save weights The cell below is used to save weights to your google drive so you can use them later ``` with open('/content/gdrive/My Drive/custom_weights.h5', 'wb') as f, open(custom_WEIGHTS_PATH, 'rb') as f1: f.write(f1.read()) ``` # Test model Use your trained model on the validation/test set and see your mean average precision ``` def runOnValSet(ds, name, mode): APs=[] for image_id in ds.image_ids: image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(ds, config, image_id, use_mini_mask=False) # Run object detection results = model.detect([image], verbose=1) # Display results r = results[0] AP, precisions, recalls, overlaps = utils.compute_ap(gt_bbox, 
gt_class_id, gt_mask, r['rois'], r['class_ids'], r['scores'], r['masks']) visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], ds.class_names, r['scores'], title="Predictions") APs.append(AP) print("%s (%s): mAP @ IoU=50: %s" % (name, mode, np.mean(APs))) return np.mean(APs) runOnValSet(test_dataset, "custom_weights", "test") ``` # Test image from URL Load an image from a URL and test it on your trained model ``` import requests from io import BytesIO from PIL import Image import numpy as np def load(url): """ Given an url of an image, downloads the image and returns a PIL image """ response = requests.get(url) pil_image = Image.open(BytesIO(response.content)).convert("RGB") # convert to BGR format image = np.array(pil_image)[:, :, [2, 1, 0]] return image image = load("https://image.shutterstock.com/image-photo/different-steps-compressing-plastic-bottle-260nw-427852495.jpg") # Run detection results = model.detect([image], verbose=1) # Visualize results r = results[0] print(r['class_ids']) visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], val_dataset.class_names, r['scores']) ``` # Detect in local photo or video Use model to detect for recyclables in any photo or video. Video is broken into frames, with detection being run on each frame. ffmpeg is needed to stitch together pngs into the final video. 
``` !apt-get install ffmpeg import datetime import cv2 def detectInImage(model, im_path): print("Running on {}".format(im_path)) # Read image image = skimage.io.imread(im_path) # Detect objects r = model.detect([image], verbose=1)[0] im = visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], val_dataset.class_names, r['scores'], save=True) im.savefig('/content/gdrive/My Drive/test{}.png'.format(datetime.datetime.now()), bbox_inches='tight', pad_inches = 0.0, transparent=True) def detectInVideo(model, video_path): # Video capture vcapture = cv2.VideoCapture(video_path) # Define codec and create video writer file_name = "/content/gdrive/My Drive/splash{}.avi".format(datetime.datetime.now()) count = 0 success = True colors = visualize.random_colors(len(val_dataset.class_names)) while success: print("frame: ", count) # Read next image success, image = vcapture.read() if success: # OpenCV returns images as BGR, convert to RGB image = image[..., ::-1] # Detect objects r = model.detect([image], verbose=0)[0] im = visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], val_dataset.class_names, r['scores'], colors=colors, save=True) im.savefig('/content/gdrive/My Drive/vid/test{}.png'.format(count), bbox_inches='tight', pad_inches = 0.0, transparent=True) count += 1 detectInImage(model, '/content/gdrive/My Drive/can.jpg') detectInVideo(model, '/content/gdrive/My Drive/IMG_1776.MOV') !ffmpeg -f image2 -r 24 -i '/content/gdrive/My Drive/vid/test%d.png' -vcodec mpeg4 -y '/content/gdrive/My Drive/vid/movie.mp4' !rm -f /content/gdrive/'My Drive'/vid/*.png ```
``` import pandas as pd import numpy as np import scipy import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns from sklearn.model_selection import train_test_split from sklearn import linear_model from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score from sklearn.metrics import recall_score, confusion_matrix from sklearn.metrics import roc_auc_score ``` Credit card dataset obtained from: https://www.kaggle.com/mlg-ulb/creditcardfraud ``` creditcard = pd.read_csv(r'/Users/admin/Documents/Supervised_learning/Supervised_learning/creditcard.csv') creditcard.head() creditcard.shape #The authors have warned us that the dataset is unbalanced, there are very few fraudulent cases np.unique(creditcard.Class, return_counts = True) ``` # Vanilla Logistic Regression ``` creditcard.columns Y = creditcard['Class'] X =creditcard.loc[:, ~creditcard.columns.isin(['Class'])] X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=108) logreg = LogisticRegression(C=1e6) print(np.mean(cross_val_score(logreg, X_train, Y_train, scoring = 'roc_auc'))) ``` # Ridge Regression ``` roc_scores = [] Cs = [] for value in [1e-10,1e-3, 1, 5, 20]: ridge = LogisticRegression(C=value, penalty= 'l2') roc = np.mean(cross_val_score(ridge, X_train, Y_train, scoring = 'roc_auc')) roc_scores.append(roc) Cs.append(value) df = pd.DataFrame(roc_scores) df['params'] = Cs df.columns=['roc_auc_scores', 'params'] df.sort_values(by = 'roc_auc_scores', ascending=False).reset_index(drop=True) ``` # Test Set Validation ``` ridge = LogisticRegression(C=2.000000e+01, penalty= 'l2') ridge.fit(X_train, Y_train) roc_auc_score(Y_test, ridge.predict_proba(X_test)[:, 1]) ``` # Lasso ``` roc_auc_scores = [] Cs = [] for value in [1e-15, 1e-3, 10]: lasso = LogisticRegression(C=value, penalty= 'l1') 
    roc = np.mean(cross_val_score(lasso, X_train, Y_train, scoring = 'roc_auc'))
    roc_auc_scores.append(roc)
    Cs.append(value)

df = pd.DataFrame(roc_auc_scores)
df['params'] = Cs
df.columns=['roc_auc_scores', 'params']
df.sort_values(by = 'roc_auc_scores', ascending=False).reset_index(drop=True)
```

# Test Set Validation

```
lasso = LogisticRegression(C=1.000000e+01, penalty= 'l1')
lasso.fit(X_train, Y_train)
roc_auc_score(Y_test, lasso.predict_proba(X_test)[:, 1])
```

# Random Forest

```
roc_auc_scores = []
parameters = []
est_number = [100, 500, 700]
for value in est_number:
    rfc = RandomForestClassifier(n_jobs = -1, n_estimators = value, class_weight = 'balanced')
    roc_auc = np.mean(cross_val_score(rfc, X_train, Y_train, scoring = 'roc_auc', n_jobs=-1))
    roc_auc_scores.append(roc_auc)
    parameters.append(value)

df = pd.DataFrame(roc_auc_scores)
df['params'] = parameters
df.columns=['roc_auc_scores', 'params']
df.sort_values(by = 'roc_auc_scores', ascending=False).reset_index(drop=True)

roc_auc_scores = []
parameters = []
depth = [8, 20, 50]
for value in depth:
    rfc = RandomForestClassifier(
        n_jobs = -1, class_weight = 'balanced', n_estimators = 700, max_depth = value)
    roc_auc = np.mean(cross_val_score(
        rfc, X_train, Y_train, scoring = 'roc_auc', n_jobs=-1))
    roc_auc_scores.append(roc_auc)
    parameters.append(value)

df = pd.DataFrame(roc_auc_scores)
df['params'] = parameters
df.columns=['roc_auc_scores', 'params']
df.sort_values(by = 'roc_auc_scores', ascending=False).reset_index(drop=True)

rfc = RandomForestClassifier(
    n_jobs = -1, class_weight = 'balanced', n_estimators = 1000, max_depth = 10)
roc_auc = np.mean(cross_val_score(
    rfc, X_train, Y_train, scoring = 'roc_auc', n_jobs=-1))
print(roc_auc)
```

# Test Set Validation

```
rfc= RandomForestClassifier(n_estimators = 1000, max_depth = 10, n_jobs=-1, class_weight='balanced')
rfc.fit(X_train, Y_train)
roc_auc_score(Y_test, rfc.predict_proba(X_test)[:, 1])
```

# We can manually set a threshold that reflects our business objectives

```
def prediction(classifier, feature_set, prob): y_predicted = [] for i in classifier.predict_proba(feature_set)[:, 1]: if i > prob: y_predicted.append(1) else: y_predicted.append(0) return y_predicted y_predicted = prediction(rfc, X_test, 0.05) confusion_matrix(Y_test, y_predicted) y_predicted = prediction(rfc, X_test, 0.3) confusion_matrix(Y_test, y_predicted) ```
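The `prediction()` helper above loops over probabilities in Python; here is a vectorized sketch of the same idea, plus a small helper to read precision and recall off a confusion matrix (the probabilities are toy numbers, for illustration):

```
import numpy as np

# Vectorized equivalent of the prediction() helper: threshold the
# positive-class probabilities directly instead of looping.
def predict_with_threshold(probs, threshold):
    return (probs > threshold).astype(int)

# Given sklearn's 2x2 confusion matrix layout [[TN, FP], [FN, TP]],
# precision and recall summarize the trade-off a threshold makes.
def precision_recall(cm):
    tn, fp, fn, tp = np.asarray(cm).ravel()
    return tp / (tp + fp), tp / (tp + fn)

probs = np.array([0.02, 0.10, 0.40, 0.90])   # toy predicted fraud probabilities
preds = predict_with_threshold(probs, 0.05)
print(preds)  # [0 1 1 1] -- a low threshold flags more transactions as fraud
```

Lowering the threshold trades precision for recall, which is usually the right direction for fraud detection, where a missed fraud is costlier than a manual review.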
## Multivariable Calculus Review

We will cover some of the most relevant concepts from multivariable calculus for this course, and show how they can be extended to deal with matrices of training data.

### Partial Derivatives

The derivative of a function of 2 variables $f(x, y)$ w.r.t. either one of its variables is defined as

$$
\begin{aligned}
\frac{\partial}{\partial x} f(x, y) &:= \lim_{h \to 0} \frac{f(x + h, y) - f(x, y)}{h}\\
\frac{\partial}{\partial y} f(x, y) &:= \lim_{h \to 0} \frac{f(x, y + h) - f(x, y)}{h}
\end{aligned}
$$

This is a natural extension of the derivative definition for functions of a single variable.

As an example, what is $\frac{\partial}{\partial x} f(x,y)$ where $f(x, y) = x^2 \cos(xy)$?

$$\frac{\partial}{\partial x} f(x,y) = 2x \cos(xy) - x^2 y \sin(xy)$$

---

### Directional Derivatives

Before we define the gradient, we discuss the closely related concept of a directional derivative. The directional derivative of a function $f(x, y)$ at $(x_0, y_0)$ moving in the direction of a unit vector $u = [a, b]$ is

$$
\begin{aligned}
D_{u} f(x_0, y_0) := \lim_{h \to 0} \frac{f(x_0 + ha, y_0 + hb) - f(x_0, y_0)}{h}
\end{aligned}
$$

This simply tells us how the scalar output of $f$ will change when we move in an arbitrary direction that is not necessarily axis-aligned. It can be written as

$$
\begin{aligned}
D_{u} f(x, y) = \left[\frac{\partial}{\partial x} f(x, y), \frac{\partial}{\partial y} f(x, y)\right] \cdot u
\end{aligned}
$$

---

### Gradient

This is perhaps the term you will hear most often in the context of neural networks. The gradient is informally defined as the vector pointing in the direction of steepest increase for a given function. Thus, if we would like to minimize a loss function such as mean squared error (MSE), we move in the opposite direction of the gradient, i.e. in the direction of the negative gradient.
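The worked partial derivative above is easy to sanity-check numerically with a central difference; a quick sketch:

```
import math

# f(x, y) = x^2 cos(xy), with df/dx = 2x cos(xy) - x^2 y sin(xy)
def f(x, y):
    return x**2 * math.cos(x * y)

def dfdx_analytic(x, y):
    return 2 * x * math.cos(x * y) - x**2 * y * math.sin(x * y)

x, y, h = 1.3, 0.7, 1e-6
dfdx_numeric = (f(x + h, y) - f(x - h, y)) / (2 * h)
print(abs(dfdx_numeric - dfdx_analytic(x, y)))  # tiny, limited only by floating point
```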
Now the gradient is simply defined as

$$
\begin{aligned}
\nabla f(x, y) := \left[\frac{\partial}{\partial x} f(x, y), \frac{\partial}{\partial y} f(x, y)\right]
\end{aligned}
$$

Note that this is a vector field (vector-valued function), and not a scalar-valued function!

What does $\nabla f(x, y)$ have to do with neural networks and minimizing loss functions? Let's see in which direction the directional derivative achieves the highest possible value. Using the gradient notation, we can rewrite the directional derivative as

$$
\begin{aligned}
D_{u} f(x, y) &= \nabla f(x, y) \cdot u\\
&= || \nabla f(x, y) ||_{2} \cdot || u ||_{2} \cdot \cos (\theta)\\
&= ||\nabla f(x, y)||_{2} \cdot \cos(\theta)
\end{aligned}
$$

This quantity is maximized when $\cos(\theta) = 1$, i.e. when $\nabla f(x, y)$ and $u$ are parallel. Therefore, the gradient points in the direction of steepest increase for $f$.

So if $f$ is a neural network parameterized by some weights $w$, and $x \in \mathbb{R}^D$ is some input to it, then taking steps in the direction of $\nabla_{w} f_{w}(x)$ will move us in the direction that maximizes the output of $f$. Thus, if $f$ is a binary classification network, say outputting the probability of an image being of a car, then moving in this direction would maximally increase the probability of $x$ being classified as a car.

<img src="https://csc413-2020.github.io/assets/misc/gradient.png" width="500" height="300" align="center"/>

---

## Basic ML Setup

If you recall from CSC311, the basic supervised learning scenarios are regression and classification. Most buzz in computer vision has been driven by success on classification problems, so we start with that first.

### Classification

In $K$-class classification, we are given a dataset $\{\mathbf{x}^{(i)}, t^{(i)}\}_{i=1}^{N}$ of $N$ samples where $\mathbf{x}^{(i)} \in \mathbb{R}^D$ and $t^{(i)} \in [1, ..., K]$.
We would like to fit a probabilistic model, in our case a neural network, that maximizes the probability

$$
\begin{aligned}
\prod_{i=1}^{N} p(t^{(i)} | \mathbf{x}^{(i)}, \mathbf{w})
\end{aligned}
$$

We can vectorize these objects/terms by defining

$$ \mathbf{X} = \begin{bmatrix} (\mathbf{x}^{(1)})^{\top} \\ \vdots \\ (\mathbf{x}^{(N)})^{\top}\end{bmatrix} $$

$$ \mathbf{T} = \begin{bmatrix} (t^{(1)})^{\top} \\ \vdots \\ (t^{(N)})^{\top} \end{bmatrix} $$

where $\mathbf{X}$ is called the design matrix, and has dimensions $(N, D)$, where entry $\mathbf{X}_{ij}$ corresponds to the j-th feature for the i-th training sample. $\mathbf{T}$ is our matrix of labels. The i-th row of $\mathbf{T}$ is the one-hot encoded vector of the label for the i-th sample, which is simply a $K$-dimensional vector of all $0$s except for a single $1$ in the position of the label. If $K=4$ and $t^{(i)} = 2$, the one-hot encoded version is

$$ \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} $$

We assume that all vectors are column vectors.

Our goal becomes to maximize the probability

$$
\begin{aligned}
p(\mathbf{t} | \mathbf{X}, \mathbf{w}) &= \prod_{i=1}^{N} p(t^{(i)} | \mathbf{x}^{(i)}, \mathbf{w})\\
&= \prod_{i=1}^{N} \prod_{k=1}^{K} p(k | \mathbf{x}^{(i)}, \mathbf{w})^{\mathbf{T}_{ik}}
\end{aligned}
$$

Note that in the innermost product, only the factor corresponding to the true label of the i-th sample can be less than 1; every other factor has exponent $0$ and so equals $1$. Since this expression would lead to numerical underflow given a large enough dataset, we can equivalently maximize a monotonically increasing function of this expression, i.e.
take the log $$ \begin{aligned} \mathrm{log}\left(\prod_{i=1}^{N} \prod_{k=1}^{K} p(k | \mathbf{x}^{(i)}, \mathbf{w})^{\mathbf{T}_{ik}}\right) &= \sum_{i=1}^{N} \sum_{k=1}^{K} \mathbf{T}_{ik} \mathrm{log} ( p(k | \mathbf{x}^{(i)}, \mathbf{w}) ) \end{aligned} $$ If we negate this term, we end up with the most popular loss function for classification known as cross-entropy $$ \begin{aligned} L_{CE} = - \sum_{i=1}^{N} \sum_{k=1}^{K} \mathbf{T}_{ik} \mathrm{log} ( p(k | \mathbf{x}^{(i)}, \mathbf{w}) ) \end{aligned} $$ Our goal is to **minimize** $L_{CE}$ by moving in the direction $- \nabla_{\mathbf{w}} L_{CE}$. --- ### Linear Regression #### Multivariate input. Univariate output. $\mathbb{R}^M \rightarrow \mathbb{R}$ ##### Forward pass Suppose we have some features $x_1, x_2, \dots, x_M$, and we predict a single scalar $y = w_1x_1 + w_2x_2 + \dotsm + w_Mx_M$. We can write this in matrix notation as a dot product $$ y = \sum_{i=1}^M w_ix_i = \mathbf{w}^T\mathbf{x} \quad\text{ where }\quad\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_M \end{bmatrix}, \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_M \end{bmatrix} $$ We say $\mathbf{x} \in \mathbb{R}^M$ to indicate that $\mathbf{x}$ is a $M$-dimensional vector. ##### Backward pass Suppose we have a target $t$ and a loss function $L = (y-t)^2$, what is the gradient $\nabla_\mathbf{w} L$? Remember the definition of the gradient, $$ \nabla_w L = \begin{bmatrix} \frac{\partial L}{\partial w_1} \\ \frac{\partial L}{\partial w_2} \\ \vdots \\ \frac{\partial L}{\partial w_M} \\ \end{bmatrix} $$ Each element is a partial derivative that can be written out using *chain rule*. $$ \frac{\partial L}{\partial w_i} = \frac{\partial L}{\partial y} \; \frac{\partial y}{\partial w_i} = 2(y-t) \; x_i $$ So the gradient is $$ \nabla_w L = \begin{bmatrix} 2(y-t) \; x_1 \\ \vdots \\ 2(y-t) \; x_M \\ \end{bmatrix} = 2(y-t) \mathbf{x} $$ **Remark.** There are some matrix calculus formulas that are worth remembering. 
One that we could have used here is $$ \mathbf{w}^T\mathbf{x} \implies \nabla_\mathbf{w} \mathbf{w}^T\mathbf{x} = \mathbf{x} $$ So the math becomes much easier. $$\nabla_\mathbf{w} L = \underbrace{(\nabla_y L )(\nabla_\mathbf{w} y)}_{\text{Chain rule holds for matrix calculus.}} = 2(y - t)\mathbf{x}$$ --- #### Multivariate input. __Multivariate output__. $\mathbb{R}^D \rightarrow \mathbb{R}^M$ ##### Forward pass Suppose we have some features $x_1, x_2, \dots, x_D$, and we predict a **vector** $\mathbf{y \in \mathbb{R}^M}$ where $y_j = w_{j1}x_1 + w_{j2}x_2 + \dotsm + w_{jD}x_D$. We can write this in matrix notation as a **matrix-vector product**. $$ \mathbf{y} = \begin{bmatrix} w_{11}x_1 + w_{12}x_2 + \dotsm + w_{1D}x_D \\ w_{21}x_1 + w_{22}x_2 + \dotsm + w_{2D}x_D \\ \vdots \\ w_{M1}x_1 + w_{M2}x_2 +\dotsm + w_{MD}x_D \\ \end{bmatrix} = \begin{bmatrix} w_1^Tx \\ w_2^Tx \\ \vdots \\ w_M^Tx \\ \end{bmatrix} = \mathbf{W}\mathbf{x}, $$ Where $$ \mathbf{W} = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_M^T \end{bmatrix} = \begin{bmatrix} w_{11} & w_{12} & \dots & w_{1D} \\ w_{21} & w_{22} & \dots & w_{2D} \\ \vdots \\ w_{M1} & w_{M2} & \dots & w_{MD} \end{bmatrix} $$ We say $\mathbf{W} \in \mathbb{R}^{M \times D}$ to indicate that $\mathbf{W}$ is a matrix of dimension $M$ by $D$. ##### Backward pass Suppose we have a target $t$ and a loss function $L = ||y-t||_2^2$, what is the gradient $\nabla_\mathbf{W} L$? 
Remember the definition of the gradient,

$$ \nabla_\mathbf{W} L = \begin{bmatrix} \frac{\partial L}{\partial w_{11}} & \frac{\partial L}{\partial w_{12}} & \dots & \frac{\partial L}{\partial w_{1D}} \\ \frac{\partial L}{\partial w_{21}} & \frac{\partial L}{\partial w_{22}} & \dots & \frac{\partial L}{\partial w_{2D}} \\ \vdots \\ \frac{\partial L}{\partial w_{M1}} & \frac{\partial L}{\partial w_{M2}} & \dots & \frac{\partial L}{\partial w_{MD}} \\ \end{bmatrix} = \begin{bmatrix} \frac{\partial L}{\partial \mathbf{w_1}}^T \\ \frac{\partial L}{\partial \mathbf{w_2}}^T \\ \vdots \\ \frac{\partial L}{\partial \mathbf{w_M}}^T \\ \end{bmatrix} $$

In this particular case, *because $L$ depends on $\mathbf{w_k}$ only through $y_k$*, we have

$$ \frac{\partial L}{\partial \mathbf{w_k}} = \sum_{j=1}^M \frac{\partial L}{\partial y_j} \; \frac{\partial y_j}{\partial \mathbf{w_k}} = \frac{\partial L}{\partial y_k} \; \frac{\partial y_k}{\partial \mathbf{w_k}} $$

(i.e., one can show that $\frac{\partial y_j}{\partial \mathbf{w_k}} = 0$ if $j\ne k$.)

The two parts:

$$\frac{\partial L}{\partial y_k} = \frac{\partial}{\partial y_k} \sum_{j=1}^M (y_j-t_j)^2 = 2(y_k - t_k)$$

$$\frac{\partial y_k}{\partial \mathbf{w_k}} = \frac{\partial}{\partial \mathbf{w_k}} \mathbf{w_k}^T\mathbf{x} = \mathbf{x}$$

And we finally have

$$\nabla_\mathbf{W} L = \begin{bmatrix} \frac{\partial L}{\partial \mathbf{w_1}}^T \\ \frac{\partial L}{\partial \mathbf{w_2}}^T \\ \vdots \\ \frac{\partial L}{\partial \mathbf{w_M}}^T \\ \end{bmatrix} = \begin{bmatrix} 2(y_1 - t_1)\mathbf{x}^T \\ 2(y_2 - t_2)\mathbf{x}^T \\ \vdots \\ 2(y_M - t_M)\mathbf{x}^T \\ \end{bmatrix} = 2(\mathbf{y} - \mathbf{t})\mathbf{x}^T $$

**Remark.** There are some matrix calculus formulas that are worth remembering.
One that we could have used here is: for any vectors $\mathbf{v} \in \mathbb{R}^M$, $\mathbf{u} \in \mathbb{R}^D$ and any matrix $\mathbf{A} \in \mathbb{R}^{M \times D}$,

$$ \nabla_\mathbf{A} \mathbf{v}^T\mathbf{A}\mathbf{u} = \mathbf{v}\mathbf{u}^T $$

So the math becomes much easier:

$$\nabla_\mathbf{W} L = \underbrace{(\nabla_\mathbf{y} L)^T(\nabla_\mathbf{W} \mathbf{y})}_{\text{Chain rule holds for matrix calculus.}} = \nabla_\mathbf{W} (\underbrace{2(\mathbf{y} - \mathbf{t})}_{\text{Evaluate } \nabla_\mathbf{y} L\text{ first.}})^T\mathbf{W}\mathbf{x} = 2(\mathbf{y} - \mathbf{t})\mathbf{x}^T$$

---

### 1-Hidden Layer Neural Network

Putting it all together, suppose we now have a neural network with a single hidden layer.

<img src="https://csc413-2020.github.io/assets/misc/neural_net.png" width="300" height="100" align="center"/>

No more summation notation. Let's put what we've learned about matrix notation to use:

$$ \begin{aligned} &(1) \;\; \nabla_\mathbf{w} \mathbf{w}^T\mathbf{x} = \mathbf{x} \\ &(2) \;\; \nabla_\mathbf{A} \mathbf{v}^T\mathbf{A}\mathbf{u} = \mathbf{v}\mathbf{u}^T \end{aligned} $$

**Forward pass.**

$$ \begin{aligned} & \mathbf{z} = \mathbf{W^{(1)}}\mathbf{x} \quad\quad\quad &\text{(Linear transformation)}\\ & \mathbf{h} = \sigma(\mathbf{z}) &\text{(Element-wise nonlinear function)}\\ & \mathbf{y} = \mathbf{W^{(2)}}\mathbf{h} &\text{(Linear transformation)}\\ & L = ||\mathbf{y}-\mathbf{t}||^2 &\text{(Loss function)}\\ \end{aligned} $$

**Backward pass.**

$$ \begin{aligned} & \nabla_\mathbf{y} L = 2(\mathbf{y} - \mathbf{t}) &\text{(Pass gradient through loss function)}\\ & \nabla_\mathbf{h} L = ((\nabla_\mathbf{y} L)^T\mathbf{W^{(2)}})^T &\text{(Pass gradient through linear function; uses (1))}\\ & \nabla_\mathbf{W^{(2)}} L = (\nabla_\mathbf{y} L)\mathbf{h}^T &\text{(Pass gradient to } \mathbf{W^{(2)}} \text{ ; uses (2))}\\ & \nabla_\mathbf{z} L = (\nabla_\mathbf{h} L) \circ \sigma'(\mathbf{z}) &\text{(Pass gradient through nonlinearity)} \\ &
\nabla_\mathbf{W^{(1)}} L = (\nabla_\mathbf{z} L)\mathbf{x}^T &\text{(Pass gradient to }\mathbf{W^{(1)}} \text{ ; uses (2))}\\ \end{aligned} $$
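The backward-pass equations above can be checked numerically. Below is a minimal plain-Python sketch (no libraries) of the forward and backward passes for tiny weight matrices, with a finite-difference check on one entry of $\mathbf{W^{(1)}}$. The dimensions and values are arbitrary illustrations, not taken from the notes.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(W, v):
    return [sum(a * b for a, b in zip(row, v)) for row in W]

def forward(W1, W2, x, t):
    z = matvec(W1, x)                                  # z = W1 x
    h = [sigmoid(zi) for zi in z]                      # h = sigma(z)
    y = matvec(W2, h)                                  # y = W2 h
    L = sum((yi - ti) ** 2 for yi, ti in zip(y, t))    # L = ||y - t||^2
    return z, h, y, L

def backward(W1, W2, x, t):
    z, h, y, L = forward(W1, W2, x, t)
    dy = [2 * (yi - ti) for yi, ti in zip(y, t)]                 # grad_y L
    dh = [sum(dy[k] * W2[k][j] for k in range(len(dy)))          # grad_h L = W2^T (grad_y L)
          for j in range(len(h))]
    dW2 = [[dyk * hj for hj in h] for dyk in dy]                 # grad_W2 L = (grad_y L) h^T
    dz = [dhj * hj * (1 - hj) for dhj, hj in zip(dh, h)]         # sigma'(z) = h (1 - h)
    dW1 = [[dzj * xi for xi in x] for dzj in dz]                 # grad_W1 L = (grad_z L) x^T
    return dW1, dW2

W1 = [[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]]   # 3 x 2
W2 = [[0.7, -0.1, 0.2]]                        # 1 x 3
x, t = [1.0, -2.0], [0.5]

dW1, dW2 = backward(W1, W2, x, t)

# Finite-difference check on W1[0][0]
eps = 1e-6
W1p = [row[:] for row in W1]; W1p[0][0] += eps
W1m = [row[:] for row in W1]; W1m[0][0] -= eps
num = (forward(W1p, W2, x, t)[3] - forward(W1m, W2, x, t)[3]) / (2 * eps)
print(abs(dW1[0][0] - num) < 1e-6)  # True
```

Every line of `backward` is one of the five backward-pass equations above, written with list comprehensions instead of matrix ops.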
# iChef

<img src="https://imgur.com/ij75Z2Z.png" alt="logo" border="0">

## Backend 3 Hackathon iFood

### Load the required libraries

```
import pandas as pd
import numpy as np
import json
```

### Create the database: a recipe database with title, ingredients and directions

Data source: https://www.kaggle.com/paultimothymooney/recipenlg

```
recipes = pd.read_csv('RecipeNLG_dataset.csv', encoding='utf8')
recipes.sample(5)
```

Second table of the database. The goal of this code is to build a list of the ingredients used and a dictionary mapping each ingredient to the list of ids of the recipes that use it.

```
dic_igredientes = {}
lista_igredientes = []  # was `[list]`, which put the `list` type itself into the list
for i, row in recipes.iterrows():
    for ingrediente in row['NER'].split(','):
        ingrediente = ingrediente.replace('"', "").replace('[', '').replace(']', '')
        ingrediente = " ".join(ingrediente.split())
        lista_igredientes.append(ingrediente)
        try:
            dic_igredientes[ingrediente].append(i)
        except KeyError:
            dic_igredientes[ingrediente] = [i]
```

## API

API code that returns the 5 recipes that best match the customer, given the ingredients they have on hand.

```
'''
Reinforcement-learning AI stub: given the day of the week, the user's cluster,
the time of day and the recipe to compare against the user, it returns how much
the user will like that recipe.

The np.random call is a placeholder seeded from these inputs and should be
replaced as soon as possible in the next phase of the project.
The higher the value, the more the customer will like the recipe.
'''
def ml_reforcement(id_receita):
    week_day = 5
    user_cluster = 30
    time = 1930
    seed = week_day + user_cluster + time + id_receita
    np.random.seed(seed=seed)
    value = np.random.randint(0, 100, 1)
    return value

a = dic_igredientes['peanut butter']
b = dic_igredientes['graham cracker crumbs']
c = dic_igredientes['butter']
d = dic_igredientes['powdered sugar']
e = dic_igredientes['chocolate chips']
#f = dic_igredientes['vanilla']
#g = dic_igredientes['butter']

receitas_possiveis = set.intersection(*[set(ids) for ids in [a, b, c, d, e]])

receitas_finais = []
for receita in receitas_possiveis:
    if len(recipes['NER'][receita].replace('"', "").replace('[', '').replace(']', '').split(',')) == len([a, b, c, d, e]):
        receitas_finais.append(receita)

retorno = {}
indice = 0
for receita in receitas_finais:
    ml_points = ml_reforcement(receita)
    indice = indice + 1
    retorno[ml_points[0]] = {
        'id_recipe': receita,
        'title': recipes['title'][receita],
        'love': ml_points[0],
        'directions': recipes['directions'][receita],
        'ingredients': recipes['ingredients'][receita]  # was a duplicate 'directions' key
    }

retorno = pd.DataFrame.from_dict(retorno, orient='index').sort_index(ascending=False)
result = retorno.head(5).to_json(orient="records")
result = json.loads(result)
result
```

### Team

<table> <tr> <td align="center"><a href="https://www.linkedin.com/in/kaianeluz/"><img src="https://i.imgur.com/6qQQUaX.jpeg" width="100px;" alt=""/><br /><sub> <b> Kaiane Luanara C. F.
Luz</b></sub></a><br /> <a href="https://www.linkedin.com/in/kaianeluz/" title="Site">💻</a> <a href="luzkaiane@gmail.com" title="Email">📧</a> </td> <td align="center"><a href="https://www.linkedin.com/in/barbosaamanda/"><img src="https://i.imgur.com/ilyTmPQ_d.jpg" width="100px;" alt=""/><br /><sub> <b> Amanda Barbosa</b></sub></a><br /> <a href="https://www.linkedin.com/in/barbosaamanda/" title="Site">💻</a> <a href="amandahpereira@gmail.com" title="Email">📧</a> </td> <td align="center"><a href="https://www.linkedin.com/in/devmatheusoliveira"><img src="https://i.imgur.com/GS8Y6Vw_d.jpg" width="100px;" alt=""/><br /><sub> <b> Matheus Oliveira</b></sub></a><br /> <a href="https://www.github.com/devmatheusoliveira/" title="Site">💻</a> <a href="matheusoliveira.workmso@gmail.com" title="Email">📧</a> </td> <td align="center"><a href="https://www.linkedin.com/in/debora-melo-gestao/ "><img src="https://i.imgur.com/06nmBo6_d.jpg" width="100px;" alt=""/><br /><sub> <b> Débora Regianne M. Costa</b></sub></a><br /> <a href="https://www.linkedin.com/in/debora-melo-gestao/ " title="Site">💻</a> <a href="dregiannemelo@gmail.com " title="Email">📧</a> </td> <td align="center"><a href="https://github.com/iulihardt/"><img src="https://i.imgur.com/KNytPG4.png" width="100px;" alt=""/><br /><sub> <b> Iuli Hardt</b></sub></a><br /> <a href="https://www.linkedin.com/in/iuli-hardt-634190119/" title="Site">💻</a><a href="iulihardt@gmail.com" title="Email">📧</a> </td> </table>
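The recipe-matching logic in the API cell above (an inverted index from ingredient to recipe ids, intersected across the ingredients on hand) can be exercised on a toy dataset. The recipes below are made up for illustration:

```python
# Toy version of the matching logic: an inverted index from
# ingredient -> recipe ids, intersected across the ingredients on hand.
recipes = {
    0: {'peanut butter', 'chocolate chips', 'butter'},
    1: {'peanut butter', 'chocolate chips'},
    2: {'flour', 'butter', 'sugar'},
}

# Build the inverted index (the role of dic_igredientes)
index = {}
for rid, ingredients in recipes.items():
    for ing in ingredients:
        index.setdefault(ing, []).append(rid)

# Recipes that use every ingredient we have
on_hand = ['peanut butter', 'chocolate chips']
candidates = set.intersection(*[set(index[ing]) for ing in on_hand])

# Keep only recipes whose full ingredient list is covered by what we have
exact = [rid for rid in candidates if len(recipes[rid]) == len(on_hand)]
print(sorted(candidates), exact)  # [0, 1] [1]
```

The length comparison mirrors the `len(...split(',')) == len([a,b,c,d,e])` filter in the notebook: recipe 0 also needs butter, so only recipe 1 survives the exact-match filter.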
``` # default_exp __init__ # hide import os notebooks_dir = os.getcwd() project_dir = os.path.dirname(notebooks_dir) import sys sys.path.append(project_dir) ``` # Huobi ``` from ccstabilizer import Exchange from ccstabilizer import secrets # export import contextlib, requests from decimal import Decimal, ROUND_DOWN class Huobi(Exchange): def __init__(self): super().__init__() self.huobi = HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']) self.account_id = self.huobi.get_accounts()['data'][0]['id'] self.update_trading_specifications() def update_trading_specification(self, crypto_symbol, fiat_symbol): return self.update_trading_specifications().get((crypto_symbol, fiat_symbol), {}) def update_trading_specifications(self): specs = self.huobi.get_symbols()['data'] for spec in specs: crypto_symbol, fiat_symbol = spec.get('base-currency', '').upper(), spec.get('quote-currency', '').upper() trading_spec = self.trading_specifications.setdefault((crypto_symbol, fiat_symbol), {}) trading_spec['min_trade_unit'] = max( Decimal(repr(spec.get('limit-order-min-order-amt', 0.0001))), Decimal(repr(spec.get('sell-market-min-order-amt', 0.0001))), Decimal('0.1') ** Decimal(repr(spec.get('amount-precision', 0))), ) trading_spec['min_trade_fiat_money_limit'] = Decimal(repr(spec.get('min-order-value', 1))) trading_spec['fee_rate'] = Decimal('0.002') trading_spec['liquid'] = spec.get('api-trading', 'disabled') == 'enabled' return self.trading_specifications def get_portfolio(self): return { asset['currency'].upper(): Decimal(asset.get('balance', '0')) for asset in self.huobi.get_balance(**{'account-id': self.account_id})['data']['list'] \ if asset['type'] == 'trade' and Decimal(asset.get('balance', '0')) > 0 } def get_price(self, crypto_symbol, fiat_symbol): now_ticker = self.huobi.get_ticker(symbol=f'{crypto_symbol.lower()}{fiat_symbol.lower()}')['tick'] now_buy_fiat_price_without_fee = Decimal(now_ticker.get('ask', ['Infinity', '0'])[0]) 
now_sell_fiat_price_without_fee = Decimal(now_ticker.get('bid', ['0', '0'])[0]) fee_rate = self.get_trading_specification(crypto_symbol, fiat_symbol).get('fee_rate', 0) return { 'now_buy_fiat_price': self.get_buy_fiat_price(now_buy_fiat_price_without_fee, fee_rate), 'now_sell_fiat_price': self.get_sell_fiat_price(now_sell_fiat_price_without_fee, fee_rate), 'now_buy_fiat_price_without_fee': now_buy_fiat_price_without_fee, 'now_sell_fiat_price_without_fee': now_sell_fiat_price_without_fee, } def get_buy_fiat_price(self, fiat_price_without_fee, fee_rate): return fiat_price_without_fee / (1 - fee_rate) def get_sell_fiat_price(self, fiat_price_without_fee, fee_rate): return fiat_price_without_fee * (1 - fee_rate) def buy(self, crypto_symbol, fiat_symbol, amount, fiat_price_without_fee): if amount <= 0: raise Exception('Huobi::buy amount <= 0') fiat_money = (fiat_price_without_fee * amount).quantize(Decimal('1.00'), ROUND_DOWN) self.huobi.send_order( **{'account-id': self.account_id, 'symbol': f'{crypto_symbol.lower()}{fiat_symbol.lower()}', 'type': 'buy-market', 'amount': str(fiat_money)} ) def sell(self, crypto_symbol, fiat_symbol, amount, fiat_price_without_fee): if amount <= 0: raise Exception('Huobi::sell amount <= 0') self.huobi.send_order( **{'account-id': self.account_id, 'symbol': f'{crypto_symbol.lower()}{fiat_symbol.lower()}', 'type': 'sell-market', 'amount': str(amount)} ) def has_enough_unused_fiat_money(self, buy_fiat_money, unused_fiat_money): return buy_fiat_money <= unused_fiat_money def trade_fiat_money_is_larger_than_limit(self, crypto_symbol, fiat_symbol, amount, fiat_price_without_fee): return fiat_price_without_fee * amount > self.get_trading_specification(crypto_symbol, fiat_symbol).get('min_trade_fiat_money_limit', 0) ``` ## Huobi API * https://github.com/HuobiRDCenter/huobi_Python * https://huobiapi.github.io/docs * https://github.com/chiangqiqi/pyhuobi/blob/master/huobi/utils.py * 
https://github.com/MJeremy2017/HuobiApi/blob/master/huobiApi/service.py ``` # export import base64 import contextlib import hashlib import hmac import json import urllib import urllib.parse import urllib.request import requests from datetime import datetime class HuobiSVC(object): methods = { # Public methods 'get_kline': {'url': 'market/history/kline', 'method': 'GET', 'private': False}, 'get_depth': {'url': 'market/depth', 'method': 'GET', 'private': False}, 'get_trade': {'url': 'market/trade', 'method': 'GET', 'private': False}, 'get_tickers': {'url': 'market/tickers', 'method': 'GET', 'private': False}, 'get_ticker': {'url': 'market/detail/merged', 'method': 'GET', 'private': False}, 'get_detail': {'url': 'market/detail', 'method': 'GET', 'private': False}, 'get_symbols': {'url': 'v1/common/symbols', 'method': 'GET', 'private': False}, 'get_currencies': {'url': 'v1/common/currencys', 'method': 'GET', 'private': False}, # Private methods 'get_accounts': {'url': 'v1/account/accounts', 'method': 'GET', 'private': True}, 'get_balance': {'url': 'v1/account/accounts/{account-id}/balance', 'method': 'GET', 'private': True}, 'send_order': {'url': 'v1/order/orders/place', 'method': 'POST', 'private': True}, 'cancel_order': {'url': 'v1/order/orders/{order-id}/submitcancel', 'method': 'POST', 'private': True}, 'order_info': {'url': 'v1/order/orders/{order-id}', 'method': 'GET', 'private': True}, 'order_matchresults': {'url': 'v1/order/orders/{order-id}/matchresults', 'method': 'GET', 'private': True}, 'orders_list': {'url': 'v1/order/orders', 'method': 'GET', 'private': True}, 'orders_matchresults': {'url': 'v1/order/matchresults', 'method': 'GET', 'private': True}, 'open_orders': {'url': 'v1/order/openOrders', 'method': 'GET', 'private': True}, 'cancel_open_orders': {'url': 'v1/order/orders/batchCancelOpenOrders', 'method': 'POST', 'private': True}, 'withdraw': {'url': 'v1/dw/withdraw/api/create', 'method': 'POST', 'private': True}, 'cancel_withdraw': {'url': 
'v1/dw/withdraw-virtual/{withdraw-id}/cancel', 'method': 'POST', 'private': True}, } RETRY_INTERVAL = 60 def __init__(self, access_key, secret_key): self.access_key = access_key self.secret_key = secret_key self.base_url = 'https://api-aws.huobi.pro/' def __getattr__(self, command): def call_api(**kwargs): url = self.base_url + type(self).methods[command]['url'].format(**kwargs) if self.methods[command]['private']: if self.methods[command]['method'] == 'GET': return self.api_key_get(url, **kwargs) else: return self.api_key_post(url, **kwargs) else: if self.methods[command]['method'] == 'GET': return self.http_get_request(url, **kwargs) else: return self.http_post_request(url, **kwargs) return call_api def http_get_request(self, url, add_to_headers=None, **params): headers = { 'Content-type': 'application/x-www-form-urlencoded', 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36', } if add_to_headers: headers.update(add_to_headers) postdata = urllib.parse.urlencode(params) with contextlib.closing( requests.get(url, postdata, headers=headers, timeout=5) ) as response: try: response_json = response.json() if response_json['status'] == 'ok': return response_json else: raise Exception('status == error') except BaseException as e: print(f'httpGet failed, detail: {response.text}, {e}') print(f'url = {url}') print(f'params = {params}') raise def http_post_request(self, url, add_to_headers=None, **params): headers = { 'Accept': 'application/json', 'Content-Type': 'application/json' } if add_to_headers: headers.update(add_to_headers) postdata = json.dumps(params) with contextlib.closing( requests.post(url, postdata, headers=headers, timeout=10) ) as response: try: response_json = response.json() if response_json['status'] == 'ok': return response_json else: raise Exception('status == error')
except BaseException as e: print(f'httpPost failed, detail: {response.text}, {e}') print(f'url = {url}') print(f'params = {params}') raise def api_key_get(self, url, **params): method = 'GET' timestamp = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S') params.update({'AccessKeyId': self.access_key, 'SignatureMethod': 'HmacSHA256', 'SignatureVersion': '2', 'Timestamp': timestamp}) host_name = urllib.parse.urlparse(url).hostname host_name = host_name.lower() request_path = urllib.parse.urlparse(url).path params['Signature'] = self.createSign(params, method, host_name, request_path, self.secret_key) return self.http_get_request(url, **params) def api_key_post(self, url, **params): method = 'POST' timestamp = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S') params_to_sign = {'AccessKeyId': self.access_key, 'SignatureMethod': 'HmacSHA256', 'SignatureVersion': '2', 'Timestamp': timestamp} host_name = urllib.parse.urlparse(url).hostname host_name = host_name.lower() request_path = urllib.parse.urlparse(url).path params_to_sign['Signature'] = self.createSign(params_to_sign, method, host_name, request_path, self.secret_key) url += '?' + urllib.parse.urlencode(params_to_sign) return self.http_post_request(url, **params) def createSign(self, pParams, method, host_url, request_path, secret_key): sorted_params = sorted(pParams.items(), key=lambda d: d[0], reverse=False) encode_params = urllib.parse.urlencode(sorted_params) payload = [method, host_url, request_path, encode_params] payload = '\n'.join(payload) payload = payload.encode(encoding='UTF8') secret_key = secret_key.encode(encoding='UTF8') digest = hmac.new(secret_key, payload, digestmod=hashlib.sha256).digest() signature = base64.b64encode(digest) signature = signature.decode() return signature # get KLine # def get_kline(self, symbol, period, size=150): # """ # :param symbol: btcusdt, ethbtc, ... 
# :param period: 可选值:{1min, 5min, 15min, 30min, 60min, 1day, 1mon, 1week, 1year } # :param size: 可选值: [1,2000] # :return: # """ # params = {'symbol': symbol, # 'period': period, # 'size': size} # url = self.market_url + '/market/history/kline' # return self.http_get_request(url, params) # get market prices def get_kline_df(self, symbol, period, size): res = self.get_kline(symbol=symbol, period=period, size=size) if res['status'] == 'ok': import pandas as pd kline_df = pd.DataFrame(res['data']) return kline_df else: raise Exception('Query failed with status: {}'.format(res)) # get market depth # def get_depth(self, symbol, type): # """ # :param symbol # :param type: 可选值:{ percent10, step0, step1, step2, step3, step4, step5 } # :return: # """ # params = {'symbol': symbol, # 'type': type} # url = self.market_url + '/market/depth' # return self.http_get_request(url, params) # get trade detail # def get_trade(self, symbol): # """ # :param symbol # :return: # """ # params = {'symbol': symbol} # url = self.market_url + '/market/trade' # return self.http_get_request(url, params) # Tickers detail # def get_tickers(self): # """ # :return: # """ # params = {} # url = self.market_url + '/market/tickers' # return self.http_get_request(url, params) # get merge ticker # def get_ticker(self, symbol): # """ # :param symbol: # :return: # """ # params = {'symbol': symbol} # url = self.market_url + '/market/detail/merged' # return self.http_get_request(url, params) # get Market Detail 24 hour volume # def get_detail(self, symbol): # """ # :param symbol # :return: # """ # params = {'symbol': symbol} # url = self.market_url + '/market/detail' # return self.http_get_request(url, params) # get available symbols # def get_symbols(self, long_polling=None): # """ # """ # params = {} # if long_polling: # params['long-polling'] = long_polling # path = '/v1/common/symbols' # return self.api_key_get(params, path) # Get available currencies # def get_currencies(self): # """ # :return: # """ # 
params = {} # url = self.market_url + '/v1/common/currencys' # return self.http_get_request(url, params) # Get all the trading assets # def get_trading_assets(self): # """ # :return: # """ # params = {} # url = self.market_url + '/v1/common/symbols' # return self.http_get_request(url, params) """ Trade/Account API """ # def get_accounts(self): # """ # :return: # """ # path = "/v1/account/accounts" # params = {} # return self.api_key_get(params, path) # get account balance # def get_balance(self, account_id=None): # """ # :param account_id # :return: # """ # if account_id is None: # accounts = self.get_accounts() # account_id = accounts['data'][0]['id'] # url = f'{self.base_url}v1/account/accounts/{account_id}/balance' # params = {'account-id': account_id} # return self.api_key_get(url, **params) # get balance for a currency def get_balance_currency(self, account_id, symbol): res = self.get_balance(account_id=account_id) if res['status'] == 'ok': res_dict = {} import pandas as pd balance_df = pd.DataFrame(res['data']['list']) res_df = balance_df[balance_df['currency'] == symbol] res_dict['trade_balance'] = float(res_df[res_df['type'] == 'trade']['balance'].values[0]) res_dict['frozen_balance'] = float(res_df[res_df['type'] == 'frozen']['balance'].values[0]) return res_dict else: raise Exception('Query failed with status -> {}'.format(res['status'])) # Making Orders # def send_order(self, acct_id, amount, source, symbol, _type, price=0, stop_price=0, operator=None): # """ # :param acct_id: account id # :param amount: # :param source: 如果使用借贷资产交易,请在下单接口的请求参数source中填写'margin-api' # :param symbol: # :param _type: options {buy-market:市价买, sell-market:市价卖, buy-limit:限价买, sell-limit:限价卖} # :param price: # :param stop_price: # :param operator: gte – greater than and equal (>=), lte – less than and equal (<=) # :return: # """ # params = {"account-id": acct_id, # "amount": amount, # "symbol": symbol, # "type": _type, # "source": source} # if price: # params["price"] = price # 
if stop_price: # params["stop-price"] = stop_price # if operator: # params["operator"] = operator # url = '/v1/order/orders/place' # return self.api_key_post(params, url) # cancel an order # def cancel_order(self, order_id): # """ # :param order_id: # :return: # """ # params = {} # url = f'{self.base_url}v1/order/orders/{order_id}/submitcancel' # return self.api_key_post(url, **params) # get an order info # def order_info(self, order_id): # """ # :param order_id: # :return: # """ # params = {} # url = f'{self.base_url}v1/order/orders/{order_id}' # return self.api_key_get(url, **params) # get order results # def order_matchresults(self, order_id): # """ # :param order_id: # :return: # """ # params = {} # url = f'{self.base_url}v1/order/orders/{order_id}/matchresults' # return self.api_key_get(url, **params) # get order list # def orders_list(self, symbol, states, types=None, start_date=None, end_date=None, _from=None, direct=None, size=None): # """ # :param symbol: # :param states: options {pre-submitted 准备提交, submitted 已提交, partial-filled 部分成交, partial-canceled 部分成交撤销, filled 完全成交, canceled 已撤销} # :param types: options {buy-market:市价买, sell-market:市价卖, buy-limit:限价买, sell-limit:限价卖} # :param start_date: # :param end_date: # :param _from: # :param direct: options {prev 向前,next 向后} # :param size: # :return: # """ # params = {'symbol': symbol, # 'states': states} # if types: # params['types'] = types # if start_date: # params['start-date'] = start_date # if end_date: # params['end-date'] = end_date # if _from: # params['from'] = _from # if direct: # params['direct'] = direct # if size: # params['size'] = size # url = '/v1/order/orders' # return self.api_key_get(params, url) # get matched orders # def orders_matchresults(self, symbol, types=None, start_date=None, end_date=None, _from=None, direct=None, size=None): # """ # :param symbol: # :param types: options {buy-market:市价买, sell-market:市价卖, buy-limit:限价买, sell-limit:限价卖} # :param start_date: # :param end_date: # 
:param _from: # :param direct: options {prev 向前,next 向后} # :param size: # :return: # """ # params = {'symbol': symbol} # if types: # params['types'] = types # if start_date: # params['start-date'] = start_date # if end_date: # params['end-date'] = end_date # if _from: # params['from'] = _from # if direct: # params['direct'] = direct # if size: # params['size'] = size # url = '/v1/order/matchresults' # return self.api_key_get(params, url) # get open orders # def open_orders(self, account_id, symbol, side='', size=10): # """ # :param symbol: # :return: # """ # params = {} # url = "/v1/order/openOrders" # if symbol: # params['symbol'] = symbol # if account_id: # params['account-id'] = account_id # if side: # params['side'] = side # if size: # params['size'] = size # return self.api_key_get(params, url) # batch cancel orders # def cancel_open_orders(self, account_id, symbol, side='', size=10): # """ # :param symbol: # :return: # """ # params = {} # url = "/v1/order/orders/batchCancelOpenOrders" # if symbol: # params['symbol'] = symbol # if account_id: # params['account-id'] = account_id # if side: # params['side'] = side # if size: # params['size'] = size # return self.api_key_post(params, url) # withdraw currencies # def withdraw(self, address, amount, currency, fee=0, addr_tag=""): # """ # :param address_id: # :param amount: # :param currency:btc, ltc, bcc, eth, etc ...(火币Pro支持的币种) # :param fee: # :param addr-tag: # :return: { # "status": "ok", # "data": 700 # } # """ # params = {'address': address, # 'amount': amount, # "currency": currency, # "fee": fee, # "addr-tag": addr_tag} # url = '/v1/dw/withdraw/api/create' # return self.api_key_post(params, url) # cancel withdraw order # def cancel_withdraw(self, address_id): # """ # :param address_id: # :return: { # "status": "ok", # "data": 700 # } # """ # params = {} # url = '/v1/dw/withdraw-virtual/{0}/cancel'.format(address_id) # return self.api_key_post(params, url) """ MARGIN API """ # create and send margin order # 
def send_margin_order(self, account_id, amount, symbol, _type, price=0): # """ # :param account_id: # :param amount: # :param symbol: # :param _type: options {buy-market:市价买, sell-market:市价卖, buy-limit:限价买, sell-limit:限价卖} # :param price: # :return: # """ # try: # accounts = self.get_accounts() # acct_id = accounts['data'][0]['id'] # except BaseException as e: # print('get acct_id error.%s' % e) # acct_id = account_id # params = {"account-id": acct_id, # "amount": amount, # "symbol": symbol, # "type": _type, # "source": 'margin-api'} # if price: # params["price"] = price # url = '/v1/order/orders/place' # return self.api_key_post(params, url) # exchange account to margin account # def exchange_to_margin(self, symbol, currency, amount): # """ # :param amount: # :param currency: # :param symbol: # :return: # """ # params = {"symbol": symbol, # "currency": currency, # "amount": amount} # url = "/v1/dw/transfer-in/margin" # return self.api_key_post(params, url) # margin account to exchange account # def margin_to_exchange(self, symbol, currency, amount): # """ # :param amount: # :param currency: # :param symbol: # :return: # """ # params = {"symbol": symbol, # "currency": currency, # "amount": amount} # url = "/v1/dw/transfer-out/margin" # return self.api_key_post(params, url) # get margin # def get_margin(self, symbol, currency, amount): # """ # :param amount: # :param currency: # :param symbol: # :return: # """ # params = {"symbol": symbol, # "currency": currency, # "amount": amount} # url = "/v1/margin/orders" # return self.api_key_post(params, url) # repay # def repay_margin(self, order_id, amount): # """ # :param order_id: # :param amount: # :return: # """ # params = {"order-id": order_id, # "amount": amount} # url = "/v1/margin/orders/{0}/repay".format(order_id) # return self.api_key_post(params, url) # loan order # def loan_orders(self, symbol, currency, start_date="", end_date="", start="", direct="", size=""): # """ # :param symbol: # :param currency: # :param 
direct: prev 向前,next 向后 # :return: # """ # params = {"symbol": symbol, # "currency": currency} # if start_date: # params["start-date"] = start_date # if end_date: # params["end-date"] = end_date # if start: # params["from"] = start # if direct and direct in ["prev", "next"]: # params["direct"] = direct # if size: # params["size"] = size # url = "/v1/margin/loan-orders" # return self.api_key_get(params, url) # get margin balance # def margin_balance(self, symbol): # """ # :param symbol: # :return: # """ # params = {} # url = "/v1/margin/accounts/balance" # if symbol: # params['symbol'] = symbol # return self.api_key_get(params, url) # def margin_loan_info(self, symbol): # """ # :param symbol: # :return: # """ # params = {} # url = "/v1/margin/loan-info" # if symbol: # params['symbol'] = symbol # return self.api_key_get(params, url) import pprint for spec in HuobiSVC('', '').get_symbols()['data']: if spec['base-currency'] == 'ar' and spec['quote-currency'] == 'usdt': pprint.PrettyPrinter(indent=4).pprint(spec) Huobi().get_trading_specification('AR', 'ETH') HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).get_accounts() HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).get_balance(**{'account-id': 23069350}) HuobiSVC('', '').get_ticker(symbol='ethusdt') # HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).send_order(**{'account-id': 23069350, 'symbol': 'xrpusdt', 'type': 'buy-limit', 'amount': 10, 'price': 0.5}) # HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).send_order(**{'account-id': 23069350, 'symbol': 'arusdt', 'type': 'buy-market', 'amount': '5'}) # HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).send_order(**{'account-id': 23069350, 'symbol': 'arusdt', 'type': 'sell-market', 'amount': '0.16'}) HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).orders_list(**{'symbol': 
'xrpusdt', 'states': 'submitted'}) HuobiSVC(os.environ['HUOBI_API_ACCESS_KEY'], os.environ['HUOBI_API_SECRET_KEY']).order_info(**{'order-id': 251846879621754}) ```
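The request-signing scheme implemented in `createSign` above (sort the query parameters, URL-encode them, join method, host, path and query with newlines, then HMAC-SHA256 and base64-encode) can be reproduced as a standalone sketch. The credentials and timestamp below are dummies for illustration, not real keys:

```python
import base64
import hashlib
import hmac
import urllib.parse

def create_sign(params, method, host, path, secret_key):
    # Same recipe as HuobiSVC.createSign: sorted params -> urlencode ->
    # newline-joined payload -> HMAC-SHA256 -> base64.
    encoded = urllib.parse.urlencode(sorted(params.items()))
    payload = '\n'.join([method, host, path, encoded]).encode('utf-8')
    digest = hmac.new(secret_key.encode('utf-8'), payload, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

params = {
    'AccessKeyId': 'demo-key',
    'SignatureMethod': 'HmacSHA256',
    'SignatureVersion': '2',
    'Timestamp': '2021-01-01T00:00:00',
}
sig = create_sign(params, 'GET', 'api-aws.huobi.pro', '/v1/account/accounts', 'demo-secret')
print(sig)  # deterministic for fixed inputs; a 44-character base64 string
```

Because the payload includes the HTTP method and path, the same parameters signed for a GET and a POST produce different signatures, which is why `api_key_get` and `api_key_post` each pass their own method string.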
# Riskfolio-Lib Tutorial: <br>__[Financionerioncios](https://financioneroncios.wordpress.com)__ <br>__[Orenji](https://www.orenj-i.net)__ <br>__[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__ <br>__[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ <a href='https://ko-fi.com/B0B833SXD' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://cdn.ko-fi.com/cdn/kofi1.png?v=2' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> ## Tutorial 25: Hierarchical Equal Risk Contribution (HERC) Portfolio Optimization ## 1. Downloading the data: ``` import numpy as np import pandas as pd import yfinance as yf import warnings warnings.filterwarnings("ignore") yf.pdr_override() pd.options.display.float_format = '{:.4%}'.format # Date range start = '2016-01-01' end = '2019-12-30' # Tickers of assets assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM', 'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO', 'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA'] assets.sort() # Downloading data data = yf.download(assets, start = start, end = end) data = data.loc[:,('Adj Close', slice(None))] data.columns = assets # Calculating returns Y = data[assets].pct_change().dropna() display(Y.head()) import riskfolio.PlotFunctions as plf # Plotting Assets Clusters ax = plf.plot_clusters(returns=Y, correlation='pearson', linkage='ward', k=None, max_k=10, leaf_order=True, dendrogram=True, #linecolor='tab:purple', ax=None) ``` The graph above suggests that the optimal number of clusters is four. ## 2. Estimating HERC Portfolio This is the original model proposed by Raffinot (2018). Riskfolio-Lib expands this model to the 22 risk measures used below.
### 2.1 Calculating the HERC portfolio ``` import riskfolio.HCPortfolio as hc # Building the portfolio object port = hc.HCPortfolio(returns=Y) # Estimate optimal portfolio: model='HERC' # Could be HRP or HERC correlation = 'pearson' # Correlation matrix used to group assets in clusters rm = 'MV' # Risk measure used, this time will be variance rf = 0 # Risk free rate linkage = 'ward' # Linkage method used to build clusters max_k = 10 # Max number of clusters used in two difference gap statistic leaf_order = True # Consider optimal order of leafs in dendrogram w = port.optimization(model=model, correlation=correlation, rm=rm, rf=rf, linkage=linkage, max_k=max_k, leaf_order=leaf_order) display(w.T) ``` ### 2.2 Plotting portfolio composition ``` # Plotting the composition of the portfolio ax = plf.plot_pie(w=w, title='HERC Naive Risk Parity', others=0.05, nrow=25, cmap="tab20", height=8, width=10, ax=None) ``` ### 2.3 Plotting Risk Contribution ``` # Plotting the risk contribution per asset mu = Y.mean() cov = Y.cov() # Covariance matrix returns = Y # Returns of the assets ax = plf.plot_risk_con(w=w, cov=cov, returns=returns, rm=rm, rf=0, alpha=0.05, color="tab:blue", height=6, width=10, t_factor=252, ax=None) ``` ### 2.4 Calculate Optimal HERC Portfolios for Several Risk Measures ``` # Risk Measures available: # # 'vol': Standard Deviation. # 'MV': Variance. # 'MAD': Mean Absolute Deviation. # 'MSV': Semi Standard Deviation. # 'FLPM': First Lower Partial Moment (Omega Ratio). # 'SLPM': Second Lower Partial Moment (Sortino Ratio). # 'VaR': Value at Risk. # 'CVaR': Conditional Value at Risk. # 'EVaR': Entropic Value at Risk. # 'WR': Worst Realization (Minimax) # 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio). # 'ADD': Average Drawdown of uncompounded cumulative returns. # 'DaR': Drawdown at Risk of uncompounded cumulative returns. # 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
# 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns. # 'UCI': Ulcer Index of uncompounded cumulative returns. # 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio). # 'ADD_Rel': Average Drawdown of compounded cumulative returns. # 'DaR_Rel': Drawdown at Risk of compounded cumulative returns. # 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns. # 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns. # 'UCI_Rel': Ulcer Index of compounded cumulative returns. rms = ['vol', 'MV', 'MAD', 'MSV', 'FLPM', 'SLPM', 'VaR','CVaR', 'EVaR', 'WR', 'MDD', 'ADD', 'DaR', 'CDaR', 'EDaR', 'UCI', 'MDD_Rel', 'ADD_Rel', 'DaR_Rel', 'CDaR_Rel', 'EDaR_Rel', 'UCI_Rel'] w_s = pd.DataFrame([]) for i in rms: w = port.optimization(model=model, correlation=correlation, rm=i, rf=rf, linkage=linkage, max_k=max_k, leaf_order=leaf_order) w_s = pd.concat([w_s, w], axis=1) w_s.columns = rms w_s.style.format("{:.2%}").background_gradient(cmap='YlGn') import matplotlib.pyplot as plt # Plotting a comparison of assets weights for each portfolio fig = plt.gcf() fig.set_figwidth(14) fig.set_figheight(6) ax = fig.subplots(nrows=1, ncols=1) w_s.plot.bar(ax=ax) ```
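As a rough illustration of the HERC idea (a simplified two-cluster toy, not Riskfolio-Lib's actual algorithm): capital is first split between clusters, and within each cluster it is allocated inversely to each asset's risk, which is the "naive risk parity" flavour named in the pie-chart title above:

```python
import numpy as np

# Toy setup: 4 assets in 2 clusters, with per-asset variances.
variances = np.array([0.04, 0.09, 0.01, 0.16])
clusters = [np.array([0, 1]), np.array([2, 3])]

weights = np.zeros_like(variances)
for members in clusters:
    # Inverse-variance ("naive risk parity") weights within the cluster...
    iv = 1.0 / variances[members]
    intra = iv / iv.sum()
    # ...scaled so each cluster receives an equal share of capital
    # (a stand-in for the risk split between clusters in HERC).
    weights[members] = 0.5 * intra

print(weights, weights.sum())
```

Low-variance assets (like asset 2) end up with the largest weights, mirroring the behaviour seen in the weight comparison chart.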
# 15 minutes to QCoDeS This short introduction is aimed mainly at beginners. Before you start with your first code using QCoDeS, make sure you have properly set up the Python environment for QCoDeS as explained in [this document](http://qcodes.github.io/Qcodes/start/index.html#installation). ## Introduction An experimental setup comprises many instruments and devices; we call such a setup a "station". QCoDeS provides a way to interact with all these instruments, helping users perform measurements and store the data in a database. To interact (read, write, trigger, etc.) with the instruments, we have created a [library of drivers](http://qcodes.github.io/Qcodes/api/generated/qcodes.instrument_drivers.html) for commonly used ones. These drivers implement the most needed functionalities of the instruments. An "Instrument" can perform many functions. For example, on an oscilloscope instrument, we first set a correct trigger level and other parameters and then obtain a trace. In QCoDeS lingo, "trigger_level" and "trace" are `parameter`s of this `instrument`. An instrument at any moment has many such parameters which together define the state of the instrument, so a parameter can be thought of as a state variable of the instrument. QCoDeS provides methods to set the values of these parameters (set trigger level) and get the values from them (obtain a trace). This way, we can interact with all the needed parameters of an instrument and are ready to set up a measurement. QCoDeS has a similar programmatic structure: it comprises a `Station` class, which is a bucket of `Instrument` objects, each containing many `Parameter` objects. The values of these parameters are set and measured during a measurement. The `Measurement` class provides a context manager for registering the parameters and providing a link between different parameters.
The measured data is stored in a database. Here, we will briefly discuss how you can set up your own experiment with the help of QCoDeS. ![SchematicOverviewQcodes](files/Schematic_Overview_Qcodes.png) ## Imports If you are using QCoDeS as your main data acquisition framework, a typical Python script at your disposal may look like: ``` %matplotlib inline import os from time import sleep import matplotlib.pyplot as plt import numpy as np import qcodes as qc from qcodes import ( Measurement, experiments, initialise_database, initialise_or_create_database_at, load_by_guid, load_by_run_spec, load_experiment, load_last_experiment, load_or_create_experiment, new_experiment, ) from qcodes.dataset.plotting import plot_dataset from qcodes.logger.logger import start_all_logging from qcodes.tests.instrument_mocks import DummyInstrument, DummyInstrumentWithMeasurement ``` We strongly recommend not importing unused packages, to increase the readability of your code. ## Logging In every measurement session, it is highly recommended to have QCoDeS logging turned on. This will allow you to have all the logs in case troubleshooting is required. To enable logging, we can either add the following single line of code at the beginning of our scripts after the imports: ``` start_all_logging() ``` or we can configure QCoDeS to automatically start logging on every import of qcodes, by running the following code once (this will persist the current configuration in `~\qcodesrc.json`): ``` from qcodes import config config.logger.start_logging_on_import = 'always' config.save_to_home() ``` You can find the log files in the ".qcodes" directory, typically located in your home folder (e.g., see the corresponding path in the "Filename" key above). This path contains two log files: - command_history.log: contains the commands executed. - 191113-13960-qcodes.log (in this particular case): contains Python logging information. The file is named as \[date (YYMMDD)\]-\[process id\]-\[qcodes\].log.
The display message from the `start_all_logging()` function shows that the `Qcodes Logfile` is saved at `C:\Users\a-halakh\.qcodes\logs\191113-13960-qcodes.log` ## Station creation A station is a collection of all the instruments and devices present in your experiment. As mentioned earlier, it can be thought of as a bucket where you can add your `instruments`, `parameters` and other `components`. Each of these terms has a definite meaning in QCoDeS and shall be explained in later sections. Once a station is properly configured, you can use its instance to access these components. We refer to the tutorial on [Station](http://qcodes.github.io/Qcodes/examples/Station.html) for more details. We start by instantiating a station, which at the moment does not contain any instruments or parameters. ``` station = qc.Station() ``` ### Snapshot We can look at all the instruments and the parameters inside this station bucket using the `snapshot` method. Since at the moment we have not added anything to our station, the snapshot will contain the names of the keys with no values: ``` station.snapshot() ``` The [snapshot](http://qcodes.github.io/Qcodes/examples/DataSet/Working%20with%20snapshots.html) of the station is organized as a dictionary of all the `instruments`, `parameters`, `components` and a list of `default_measurement`. Once you have populated your station you may want to look at the snapshot again. ## Instrument The `Instrument` class in QCoDeS is responsible for holding connections to hardware and for creating a parameter or method for each piece of functionality of the instrument. For more information on the instrument class we refer to the [detailed description here](http://qcodes.github.io/Qcodes/user/intro.html#instrument) or the corresponding [api documentation](http://qcodes.github.io/Qcodes/api/instrument/index.html).
Let us now create two dummy instruments and associate two parameters with each of them: ``` # A dummy instrument dac with two parameters ch1 and ch2 dac = DummyInstrument('dac', gates=['ch1', 'ch2']) # A dummy instrument that generates some real looking output depending # on the values set on the setter_instr, in this case the dac dmm = DummyInstrumentWithMeasurement('dmm', setter_instr=dac) ``` Aside from the bare ``snapshot``, which returns a Python dictionary, a more readable form can be obtained via: ``` dac.print_readable_snapshot() dmm.print_readable_snapshot() ``` ### Add instruments into station Every instrument that you are working with during an experiment should be added to the instance of the `Station` class. Here, we add the `dac` and `dmm` instruments by using the ``add_component`` method: #### Add components ``` station.add_component(dac) station.add_component(dmm) ``` #### Remove component We use the method `remove_component` to remove a component from the station. For example, you can remove `dac` as follows: ``` station.remove_component('dac') station.components ``` Let us add the `dac` instrument back: ``` station.add_component(dac) ``` #### Station snapshot As there are two instruments added to the station object, the snapshot will include all the properties associated with them: ``` station.snapshot() ``` #### Station Configurator The instantiation of the instruments, that is, setting up the proper initial values of the corresponding parameters and similar pre-specifications of a measurement, constitutes the initialization portion of the code. In general, this portion can be quite long and tedious to maintain. These (and more) concerns can be addressed by a YAML configuration file of the `Station` object. We refer to the notebook on [station](http://qcodes.github.io/Qcodes/examples/Station.html#Default-Station) for more details. ## Parameter A QCoDeS `Parameter` has the property that it is settable, gettable or both.
Let us clarify this with an example from a real instrument, say an oscilloscope. An oscilloscope contains settings such as trigger mode, trigger level, source etc. Most of these settings can be set to a particular value in the instrument. For example, trigger mode can be set to 'edge' mode and trigger level to some floating point number. Hence, these parameters are called settable. Similarly, parameters whose current values we can retrieve are called gettable. In this example notebook, we have a 'dac' instrument with 'ch1' and 'ch2' added as its `Parameter`s. Similarly, we have a 'dmm' instrument with 'v1' and 'v2' added as its `Parameter`s. We also note that, apart from the trivial use of `Parameter` as the standard parameter of the instrument, it can be used as a common variable for storing/retrieving data. Furthermore, it can be subclassed in more complex design cases. QCoDeS provides the following parameter classes built in: - `Parameter`: Represents a single value at a given time. Example: voltage. - `ParameterWithSetpoints`: Represents an array of values, all of the same type, that are returned all at once. Example: voltage vs time waveform. We refer to the [notebook](http://qcodes.github.io/Qcodes/examples/Parameters/Simple-Example-of-ParameterWithSetpoints.html) in which more detailed examples concerning the use cases of this parameter can be found. - `DelegateParameter`: It is intended for proxying other parameters. You can use a different label, unit, etc. in the delegated parameter as compared to the source parameter. - `MultiParameter`: Represents a collection of values with different meanings and possibly different dimensions. Example: I and Q, or I vs time and Q vs time. Most of the time you can use these classes directly, using the `get` and `set` functions to get or set the values of those parameters.
But sometimes it may be useful to subclass the above classes; in that case you should define `get_raw` and `set_raw` methods rather than `get` or `set` methods. The `get_raw` and `set_raw` methods are automatically wrapped to provide `get` and `set` methods on the parameter instance. Overriding `get` in a subclass of the above parameters or of `_BaseParameter` is not allowed and will throw a runtime error. To understand more about parameters, consult the [notebook on Parameter](http://qcodes.github.io/Qcodes/examples/index.html#parameters). In most cases, a settable parameter accepts its value as a function argument. Let us set a value of 1.1 for the 'ch1' parameter of the 'dac' instrument: ``` dac.ch1(1.1) ``` Similarly, we ask for the current value of a gettable parameter with a simple function call. For example, the output voltage of the dmm can be read via ``` dmm.v1() ``` Further information can be found in the [user guide](http://qcodes.github.io/Qcodes/user/intro.html#parameter) or [api documentation](http://qcodes.github.io/Qcodes/api/parameters/index.html) of parameter. ## Initialise database and experiment Before starting a measurement, we first initialise a database. The location of the database is specified by the configuration object of the QCoDeS installation. The database is created with the latest supported version complying with the QCoDeS version that is currently in use. If a database already exists but QCoDeS has been upgraded, that database can continue to be used: it will be upgraded to the latest version automatically on first connection.
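The `get_raw`/`set_raw` wrapping pattern described above can be illustrated with a toy class. This is a plain-Python sketch of the mechanism, not QCoDeS itself, and `ToyParameter` is a hypothetical name:

```python
# Toy illustration of wrapping raw accessors into public get/set methods,
# mimicking the get_raw -> get pattern QCoDeS applies to Parameter subclasses.
class ToyParameter:
    def __init__(self, name):
        self.name = name
        self._value = 0.0

    def get_raw(self):
        # In a real driver, this is where hardware communication happens.
        return self._value

    def set_raw(self, value):
        self._value = value

    # Public accessors delegate to the raw methods; QCoDeS generates this
    # wrapping for you (adding validation, ramping and logging on top).
    def get(self):
        return self.get_raw()

    def set(self, value):
        self.set_raw(value)

p = ToyParameter('ch1')
p.set(1.1)
print(p.get())  # -> 1.1
```

Subclasses only implement the raw layer; the public interface stays uniform across all parameters.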
The initialisation of the database is achieved via: ``` initialise_database() ``` Alternatively, if you already have a QCoDeS database which you would like to use for your measurement, say at ``~/myData.db``, it is sufficient to use ``` initialise_or_create_database_at("~/myData.db") ``` Note that it is the user's responsibility to provide the correct absolute path for the existing database. The notation of the path may differ between operating systems. The method ``initialise_or_create_database_at`` makes sure that your QCoDeS session is connected to the referred database. If the database file does not exist, it will be created at the provided path. ### Current location of database By default, QCoDeS initialises an empty database in your home directory: ``` qc.config.core.db_location ``` ### Change location of database In case you would like to change the location of the database, for example, to the current working directory, it is sufficient to assign the new path as the value of the corresponding key ``db_location``: ``` cwd = os.getcwd() qc.config["core"]["db_location"] = os.path.join(cwd, 'testing.db') ``` ### Load or create experiment After initialising the database we create the `Experiment` object. This object contains the name of the experiment and the sample, and the path of the database. You can use `load_or_create_experiment` to find and return an experiment with the given experiment and sample name if it already exists, or create one if not found. ``` exp = load_or_create_experiment(experiment_name='dataset_context_manager', sample_name="no sample1") ``` The method shown above is the most versatile way to load or create an experiment.
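Since path notation differs between operating systems, a portable way to turn a `~`-style path into an absolute one before handing it to the database functions is the standard-library combination below (a general Python sketch, not a QCoDeS-specific API):

```python
import os

# Expand "~" to the user's home directory and normalise to an absolute path,
# so the same script works regardless of the operating system's conventions.
db_path = os.path.abspath(os.path.expanduser("~/myData.db"))
print(db_path)  # e.g. /home/alice/myData.db on Linux
```

The resulting `db_path` can then be passed to a function expecting an absolute database path.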
However, for specific cases, the following alternative methods can be used to create or load experiments: ``` # load_experiment_by_name(experiment_name='dataset_context_manager',sample_name="no sample") # load_last_experiment() # load_experiment(1) # new_experiment(experiment_name='dataset_context_manager',sample_name="no sample") ``` ## Measurement The QCoDeS `Measurement` module provides a context manager for registering parameters to measure and store results. The measurement is first linked to the correct experiment and to the station by passing them as arguments. If no arguments are given, the latest experiment and station are taken as defaults. QCoDeS is capable of storing relations between the parameters, i.e., which parameter is independent and which parameter depends on another one. This capability is later used to make useful plots, where the knowledge of interdependencies is used to define the corresponding variables for the coordinate axes. The required (mandatory) parameters in the measurement are registered first. If there is an interdependency between any two or more given parameters, the independent one is declared as a 'setpoint'. In our example, ``dac.ch1`` is the independent parameter and ``dmm.v1`` is the dependent parameter whose setpoint is ``dac.ch1``. ``` meas = Measurement(exp=exp, station=station) meas.register_parameter(dac.ch1) # register the first independent parameter meas.register_parameter(dmm.v1, setpoints=(dac.ch1,)) # now register the dependent one meas.write_period = 2 with meas.run() as datasaver: for set_v in np.linspace(0, 25, 10): dac.ch1.set(set_v) get_v = dmm.v1.get() datasaver.add_result((dac.ch1, set_v), (dmm.v1, get_v)) dataset = datasaver.dataset # convenient to have for plotting ``` ``meas.run()`` returns a context manager for the experiment run. Entering the context returns the ``DataSaver`` object to the `datasaver` variable.
The ``DataSaver`` class handles the saving of data to the database using the method ``add_result``. The ``add_result`` method validates the sizes of all the data points and stores them intermittently in a private variable. Within every write period of the measurement, the data in the private variable is flushed to the database. ``meas.write_period`` defines the period after which the data is committed to the database. Individual data points are not committed to the database during the measurement; data is committed only after some amount of it has been collected over the stipulated time period (in this case, 2 seconds). The default value of write_period is 5 seconds. ## Data exploration ### List all the experiments in the database The list of experiments that are stored in the database can be called back as follows: ``` experiments() ``` While our example database contains only a few experiments, in reality a database will contain several experiments, each containing many datasets. At some point, you will want to load a dataset from a particular experiment for further analysis. Here we shall explore different ways to find and retrieve an already measured dataset from the database. ### List all the datasets in the database Let us now retrieve the datasets stored within the current experiment via: ``` exp.data_sets() ``` ### Load the data set using one or more specifications The method ``load_by_run_spec`` can be used to load a run with given specifications such as 'experiment name' and 'sample name': ``` dataset = load_by_run_spec(experiment_name='dataset_context_manager', captured_run_id=1) ``` While the arguments are optional, the function call will raise an error if more than one run matching the supplied specifications is found. If such an error occurs, the traceback will contain the specifications of the runs as well.
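The buffered-write behaviour of `write_period` described above can be sketched with a toy buffer. This is a plain-Python illustration of the idea (with a hypothetical `ToyDataSaver` class), not the actual QCoDeS implementation:

```python
import time

class ToyDataSaver:
    """Buffers results and flushes them in batches, mimicking write_period."""

    def __init__(self, write_period=2.0):
        self.write_period = write_period
        self._buffer = []
        self._flushed = []  # stands in for the database
        self._last_flush = time.monotonic()

    def add_result(self, result):
        self._buffer.append(result)
        # Flush only when the write period has elapsed, not on every point.
        if time.monotonic() - self._last_flush >= self.write_period:
            self.flush()

    def flush(self):
        self._flushed.extend(self._buffer)
        self._buffer.clear()
        self._last_flush = time.monotonic()

saver = ToyDataSaver(write_period=0.0)  # flush on every add, for demonstration
saver.add_result((0.0, 1.0))
print(saver._flushed)  # -> [(0.0, 1.0)]
```

Batching the commits like this avoids paying the database round-trip cost for every single data point, which is the design motivation behind `write_period`.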
Further information concerning 'Uniquely identifying and loading runs' can be found in [this example notebook](DataSet/Extracting-runs-from-one-DB-file-to-another.ipynb#Uniquely-identifying-and-loading-runs). For more information on the `DataSet` object that `load_by_run_spec` returned, refer to the [DataSet class walkthrough article](DataSet/DataSet-class-walkthrough.ipynb). ### Plot dataset We have arrived at a point where we can visualize our data. To this end, we use the ``plot_dataset`` method with ``dataset`` as its argument: ``` plot_dataset(dataset) ``` For more detailed examples of plotting QCoDeS datasets, refer to the following articles: - [Offline plotting tutorial](DataSet/Offline%20Plotting%20Tutorial.ipynb) - [Offline plotting with categorical data](DataSet/Offline%20plotting%20with%20categorical%20data.ipynb) - [Offline plotting with complex data](DataSet/Offline%20plotting%20with%20complex%20data.ipynb) ### Get data of specific parameter of a dataset If you are interested in the numerical values of a particular parameter within a given dataset, the corresponding data can be retrieved by using the `get_parameter_data` method: ``` dataset.get_parameter_data('dac_ch1') dataset.get_parameter_data('dmm_v1') ``` We refer the reader to the [exporting data section of the performing measurements using qcodes parameters and dataset](DataSet/Performing-measurements-using-qcodes-parameters-and-dataset.ipynb#Accessing-and-exporting-the-measured-data) and the [Accessing data in DataSet notebook](DataSet/Accessing-data-in-DataSet.ipynb) for further information on the `get_parameter_data` method. ### Export data to pandas dataframe If desired, any data stored within a QCoDeS database can also be exported as pandas dataframes. This can be achieved via: ``` df = dataset.to_pandas_dataframe_dict()['dmm_v1'] df.head() ``` ### Export data to xarray It's also possible to export data stored within a QCoDeS database to an `xarray.DataArray`.
This can be achieved via: ``` xarray = dataset.to_xarray_dataarray_dict()['dmm_v1'] xarray.head() ``` We refer to the [example notebook on working with pandas](DataSet/Working-With-Pandas-and-XArray.ipynb) and the [Accessing data in DataSet notebook](DataSet/Accessing-data-in-DataSet.ipynb) for further information. ### Explore the data using an interactive widget The experiments widget presents the most important information at a glance, has buttons to plot a dataset and easily explore its snapshot, and enables users to add a note to a dataset. It is only available in the Jupyter notebook because it uses [`ipywidgets`](https://ipywidgets.readthedocs.io/) to display interactive elements. Use it in the following ways: ```python # import it first from qcodes.interactive_widget import experiments_widget # and then just run it experiments_widget() # you can pass a specific database path experiments_widget(db="path_of_db.db") # you can also pass a specific list of DataSets: # say, you're only interested in datasets of a particular experiment experiments = qcodes.experiments() data_sets = experiments[2].data_sets() experiments_widget(data_sets=data_sets) # you can change the sorting of the datasets # by passing None, "run_id", "timestamp" as sort_by argument: experiments_widget(sort_by="timestamp") ``` Here's a short video that summarizes the looks and the features: ![video demo about experiments widget should show here](../_static/experiments_widget.webp) ## Things to remember ### QCoDeS configuration QCoDeS uses a JSON based configuration system. It is shipped with a default configuration. The default config file should not be overwritten. If you have any modifications, you should save the updated config file in your home directory or in the current working directory of your script/notebook.
The QCoDeS config system first looks for a config file in the current directory, then in the home directory, and only then - if no config files are found - falls back to using the default one. The default config is located in `qcodes.config`. To learn how to change and save the config, please refer to the [documentation on config](http://qcodes.github.io/Qcodes/user/configuration.html?). ### QCoDeS instrument drivers We support and provide drivers for most of the instruments currently in use at the Microsoft stations. However, if functionality beyond what the drivers currently support is required, one may update the driver or request the features from the QCoDeS team. You are more than welcome to contribute, and if you would like a quick overview of how to write instrument drivers, please refer to the [example notebooks on writing drivers](http://qcodes.github.io/Qcodes/examples/index.html#writing-drivers). ### QCoDeS measurements live plotting with Plottr Plottr is the supported and recommended tool for live plotting of QCoDeS measurements. The [How to use plottr with QCoDeS for live plotting](plotting/How-to-use-Plottr-with-QCoDeS-for-live-plotting.ipynb) notebook contains more information.
``` from platform import python_version print(python_version()) import sys # Make secrets.py importable: append its containing directory (local, or a mounted S3 bucket, e.g. /dbfs/mnt/<path_to_bucket>) to the path sys.path.append('.') # Credentials are assumed to be defined in the local secrets.py (which shadows the stdlib 'secrets' module) from secrets import HOST, PORT, USERNAME, PASSWORD, DATABASE import logging import math import os from influxdb import DataFrameClient import numpy as np import matplotlib.mlab as mlab import pandas as pd import matplotlib.pyplot as plt from tabulate import tabulate from tqdm import tqdm %matplotlib inline logging.basicConfig(level=logging.INFO) LOGGER = logging.getLogger(__name__) # Need to ssh tunnel for this to work # ssh -L 8086:localhost:8086 aq.byu.edu -N influx = DataFrameClient( host=HOST, port=PORT, username=USERNAME, password=PASSWORD, database=DATABASE, ) def large_query(influx, measurement, query, total=None, limit=100_000): if total is not None: total = math.ceil(total / limit) with tqdm(total=total) as pbar: offset = 0 while True: new_query = query + " LIMIT {} OFFSET {}".format(limit, offset) data = influx.query(new_query) data = data[measurement] received = len(data) pbar.update(1) yield data offset += limit if received != limit: break def load_data(filename): if os.path.exists(filename): LOGGER.info("Loading cached data...") return pd.read_hdf(filename) LOGGER.info("Downloading data...") result = influx.query( "SELECT COUNT(sequence) FROM air_quality_sensor WHERE time > '2019-10-01' AND time <= '2020-04-30'" ) count = result["air_quality_sensor"].values[0][0] queries = large_query( influx, "air_quality_sensor", "SELECT * FROM air_quality_sensor WHERE time > '2019-10-01' AND time <= '2020-04-30'", count, ) all_data = pd.concat(list(queries), sort=False) all_data.to_hdf(filename, "data") return all_data data = load_data("aq_data.h5") gold_data = load_data("aq_data.h5") LOGGER.info("Done loading data...") # all_modified_gers - This is the working boxplot for all_modified_gers only Mongolia deployed sensors # https://stackoverflow.com/questions/22800079/converting-time-zone-pandas-dataframe #
https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.between_time.html # Don't include sensors: FL, IA, KS, MB, NB, NJ, NL, OR, WA, WY # Look more closely at: NE, NS from IPython.core.debugger import set_trace # https://matplotlib.org/3.1.3/gallery/statistics/boxplot_color.html print(data.index[1]) data.index = data.index.tz_convert('Asia/Ulaanbaatar') print(data.index[1]) labels = ['All Modified Gers'] plt.xlabel('') plt.ylabel('PM 2.5 Value') plt.title('Week PM 2.5 for All Modified Gers Sensors') plt.grid(True) days = ['05', '06', '07', '08', '09', '10', '11'] # Clean up data this way: data.loc[data['pm2_5'] > 1000, 'pm2_5'] = 1000 data.loc[data['pm2_5'] < 0, 'pm2_5'] = 0 # https://www.geeksforgeeks.org/create-a-new-column-in-pandas-dataframe-based-on-the-existing-columns/ data['pm2_5'] = data['pm2_5'] / (1 + ((0.4/1.65)/(-1+(1/(35/100))))) # data['pm2_5'] = np.where(data['pm2_5'] >= 5000, 5000, data['pm2_5']) data = data[data.location_name == 'Mongolia'] # start clean up data mode when in office or switched from outdoor to indoor or vice versa # -------------------------- In Office ---------------------------------------------------------------------------------------------------------------------------- ak = data[data.index < '2020-02-15'].groupby("name").get_group('AK') co = data[data.index < '2020-02-15'].groupby("name").get_group('CO') ky = data[data.index < '2020-02-15'].groupby("name").get_group('KY') # mb = data[data.index < '2020-02-15'].groupby("name").get_group('MB') # mb = mb[(mb.index < '2020-01-26') | (mb.index >= '2020-02-04')] # nj = data[(data.index < '2020-01-28') | (data.index >= '2020-02-04')].groupby("name").get_group('NJ') nu = data[(data.index < '2020-01-26') | (data.index >= '2020-02-04')].groupby("name").get_group('NU') # oregon = data[(data.index < '2020-01-26') | (data.index >= '2020-02-04')].groupby("name").get_group('OR') pe = data[(data.index < '2020-02-11')].groupby("name").get_group('PE') #outdoor sensor 
we are no longer using this sensor's data # wy = data[(data.index < '2020-02-11')].groupby("name").get_group('WY') # --------------------------------------- Switched --------------------------------------------------------------------------------------------------------------- ab = data[(data.index > '2020-01-28') & (data.index <= '2020-02-14')].groupby("name").get_group('AB') # outdoor sensor: not using its data from before the switch to indoor, but using it after ns = data[(data.index >= '2020-01-28')].groupby("name").get_group('NS') # outdoor sensor: not using its data from before the switch to indoor, but using it after # oregon = oregon[(oregon.index >= '2020-01-28')] # outdoor sensor: not using its data from before the switch to indoor, but using it after ut = data[(data.index >= '2020-01-29')].groupby("name").get_group('UT') # outdoor sensor: not using its data from before the switch to indoor, but using it after # finish clean up data mode when in office or switched from outdoor to indoor or vice versa # ------------------------------------------------------------------------------------------------------------------------------------------------------ modified_gers = ['AL', 'AR', 'AZ', 'CA', 'CT', 'DE', 'ID', 'IL', 'LA', 'MA', 'MD', 'ME', 'MI', 'MN', 'MS', 'MT', 'NC', 'NH', 'NM', 'GA', 'ND', 'NE'] modified_gers_data = data[data.name.isin(modified_gers)] modified_gers_data = modified_gers_data.append(ak) modified_gers_data = modified_gers_data.append(co) modified_gers_data = modified_gers_data.append(ky) # unmodified_gers = ['NJ', 'NS', 'NU', 'OK', 'OR', 'PA', 'RI', 'SD', 'UT', 'VA', 'WI'] unmodified_gers = ['OK', 'PA', 'RI', 'SD', 'VA', 'WI'] unmodified_gers_data = data[data.name.isin(unmodified_gers)] unmodified_gers_data = unmodified_gers_data.append(ab) # unmodified_gers_data = unmodified_gers_data.append(mb) # unmodified_gers_data = unmodified_gers_data.append(nj) unmodified_gers_data = unmodified_gers_data.append(nu) unmodified_gers_data = unmodified_gers_data.append(ns) # unmodified_gers_data = unmodified_gers_data.append(oregon) unmodified_gers_data = unmodified_gers_data.append(ut) ## Import the packages import numpy as np from scipy import stats ## Define 2 random distributions #Sample Size (NOTE: for the manual t-statistic below, N should be the common size of samples a and b, not an arbitrary value) N = 10 #Gaussian distributed data with mean = 2 and var = 1 # a = np.random.randn(N) + 2 a = modified_gers_data.pm2_5.dropna()[0:1162704] #Gaussian distributed data with mean = 0 and var = 1 # b = np.random.randn(N) b = unmodified_gers_data.pm2_5.dropna() ## Calculate the Standard Deviation #Calculate the variance to get the
standard deviation #For an unbiased estimate we have to divide the variance by N-1, hence the parameter ddof = 1 var_a = a.var(ddof=1) var_b = b.var(ddof=1) #std deviation s = np.sqrt((var_a + var_b)/2) s ## Calculate the t-statistic t = (a.mean() - b.mean())/(s*np.sqrt(2/N)) ## Compare with the critical t-value #Degrees of freedom df = 2*N - 2 #p-value from the t-distribution p = 1 - stats.t.cdf(t,df=df) print("t = " + str(t)) print("p = " + str(2*p)) ### After comparing the t statistic with the critical t value (computed internally) we get a small p-value of 0.0005, so we reject the null hypothesis: the means of the two distributions are different, and the difference is statistically significant. ## Cross Checking with the internal scipy function t2, p2 = stats.ttest_ind(a,b) print("t = " + str(t2)) print("p = " + str(p2)) # good reference for pandas stat significance: https://stackoverflow.com/questions/25571882/pandas-columns-correlation-with-statistical-significance # https://realpython.com/numpy-scipy-pandas-correlation-python/ # https://realpython.com/python-statistics/ # https://dataschool.com/fundamentals-of-analysis/correlation-and-p-value/ # https://blog.minitab.com/blog/alphas-p-values-confidence-intervals-oh-my # https://www.thoughtco.com/the-difference-between-alpha-and-p-values-3126420 # Difference Between P-Value and Alpha # To determine if an observed outcome is statistically significant, we compare the values of alpha and the p-value. There are two possibilities that emerge: # The p-value is less than or equal to alpha. In this case, we reject the null hypothesis. When this happens, we say that the result is statistically significant. In other words, we are reasonably sure that there is something besides chance alone that gave us an observed sample. # The p-value is greater than alpha. In this case, we fail to reject the null hypothesis.
When this happens, we say that the result is not statistically significant. In other words, we are reasonably sure that our observed data can be explained by chance alone. from scipy.stats import pearsonr print (len(modified_gers_data.pm2_5.dropna()), 'and', len(unmodified_gers_data.pm2_5.dropna()), ',modified sliced: ', len(modified_gers_data.pm2_5.dropna()[0:1176227])) if len(modified_gers_data.pm2_5.dropna()[0:1176227]) == len(unmodified_gers_data.pm2_5.dropna()): print ('the x and y are the same length') print ('correlation and p-value') pearsonr(modified_gers_data.pm2_5.dropna()[0:1176227], unmodified_gers_data.pm2_5.dropna()) ```
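The slicing above exists only to force the two series to equal lengths. For the t-test itself that is unnecessary: Welch's unequal-variance t-test accepts samples of different sizes. A minimal sketch on synthetic stand-in data (hypothetical values, not the actual PM2.5 series):

```python
# Sketch: Welch's t-test (equal_var=False) handles samples of different sizes
# and variances, so the longer series does not need to be truncated first.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=2.0, scale=1.0, size=1500)  # stand-in: "modified" states
b = rng.normal(loc=0.0, scale=1.5, size=900)   # stand-in: "unmodified" states

t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(t_stat, p_val)
```

Note that `pearsonr` genuinely does require equal-length inputs, so the slicing is still needed for the correlation; the comparison of means does not.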
## Introduction: How can you use leaf indexes from a tree ensemble? [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catboost/tutorials/blob/master/leaf_indexes_calculation/leaf_indexes_calculation_tutorial.ipynb) Suppose we have a fitted tree ensemble of size $T$. Computing a prediction on a new object can then be viewed as follows. First, the original features of the sample are transformed into a sequence of $T$ categorical features indicating which leaf of each tree the object lands in. Then that sequence of categorical features is one-hot encoded. Finally, the prediction is calculated as a scalar product of the one-hot encoding and the vector of all leaf values of the ensemble. So a tree ensemble can be viewed as a linear model over transformed features. Ultimately, one can say that boosting on trees is a linear model with a generator of tree-transformed features: in the process of training it generates new features and fits coefficients for them in a greedy way. This decomposition of a tree ensemble into a feature transformation and a linear model suggests several tricks: 1. We can tune leaf values jointly (not greedily) with the help of all techniques for linear models. 2. Transfer learning: we can take the feature transformation from one model and apply it to another dataset with the same features (e.g. to predict another target, or to fit a new model on fresh data). 3. Online learning: we can keep the feature transformation (i.e. the tree structures) constant and perform online updates on the leaf values (viewed as the coefficients of the linear model). See a real-world example in this paper: [Practical Lessons from Predicting Clicks on Ads at Facebook](https://research.fb.com/wp-content/uploads/2016/11/practical-lessons-from-predicting-clicks-on-ads-at-facebook.pdf). ## In this tutorial we will: 1. See how to get the feature transformation from a CatBoost model (i.e. calculate which leaves of the model's trees objects land in). 2.
Perform a sanity check for the first use case of leaf-index calculation mentioned above on the California housing dataset. ``` from __future__ import print_function import numpy as np from scipy.stats import ttest_rel from sklearn.datasets import fetch_california_housing from sklearn.exceptions import ConvergenceWarning from sklearn.linear_model import ElasticNet from sklearn.model_selection import train_test_split from sklearn.preprocessing import OneHotEncoder from sklearn.metrics import mean_squared_error from catboost import CatBoostRegressor import warnings warnings.filterwarnings("ignore", category=ConvergenceWarning) seed = 42 ``` ### Download and split data Since it's a demo, let's leave the major part of the data for testing. ``` data = fetch_california_housing(return_X_y=True) split_data = train_test_split(*data, test_size = 0.9, random_state=seed) X_train, X_test, y_train, y_test = split_data X_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train, test_size=0.2, random_state=seed) print("{:<20} {}".format("train size:", X_train.shape[0])) print("{:<20} {}".format("validation size:", X_validate.shape[0])) print("{:<20} {}".format("test size:", X_test.shape[0])) ``` ### Fit catboost I've used a very large learning rate in order to get a small model (and a fast tutorial). Decreasing the learning rate yields a better but larger ensemble. The effect of leaf-value tuning deteriorates in that case but remains statistically significant. Interestingly, the trick still works for an ensemble of size $\approx 500$ (learning_rate 0.1-0.2) even when the number of features in the linear model exceeds the number of training objects five times.
``` catboost_params = { "iterations": 500, "learning_rate": 0.6, "depth": 4, "loss_function": "RMSE", "verbose": False, "random_seed": seed } cb_regressor = CatBoostRegressor(**catboost_params) cb_regressor.fit(X_train, y_train, eval_set=(X_validate, y_validate), plot=True) print("tree count: {}".format(cb_regressor.tree_count_)) print("best rmse: {:.5}".format(cb_regressor.best_score_['validation']["RMSE"])) ``` ### Transform train data ``` class LeafIndexTransformer(object): def __init__(self, model): self.model = model self.transformer = OneHotEncoder(handle_unknown="ignore") def fit(self, X): leaf_indexes = self.model.calc_leaf_indexes(X) self.transformer.fit(leaf_indexes) def transform(self, X): leaf_indexes = self.model.calc_leaf_indexes(X) return self.transformer.transform(leaf_indexes) transformer = LeafIndexTransformer(cb_regressor) transformer.fit(X_train) train_embedding = transformer.transform(X_train) validate_embedding = transformer.transform(X_validate) ``` ### Fit linear model ``` lin_reg = ElasticNet(warm_start=True) alpha_range = np.round(np.exp(np.linspace(np.log(0.001), np.log(0.01), 5)), decimals=5) best_alpha = None best_loss = None for curr_alpha in alpha_range: lin_reg.set_params(alpha=curr_alpha) lin_reg.fit(train_embedding, y_train) validate_predict = lin_reg.predict(validate_embedding) validate_loss = mean_squared_error(y_validate, validate_predict) if best_alpha is None or best_loss > validate_loss: best_alpha = curr_alpha best_loss = validate_loss print("best alpha: {}".format(best_alpha)) print("best rmse: {}".format(np.sqrt(best_loss))) lin_reg.set_params(alpha=best_alpha) lin_reg.fit(train_embedding, y_train) ``` ### Evaluate on test data ``` test_embedding = transformer.transform(X_test) tuned_predict = lin_reg.predict(test_embedding) untuned_predict = cb_regressor.predict(X_test) tuned_rmse = np.sqrt(np.mean((tuned_predict - y_test)**2)) untuned_rmse = np.sqrt(np.mean((untuned_predict - y_test)**2)) percent_delta = 100. 
* (untuned_rmse / tuned_rmse - 1) print("Tuned model test rmse: {:.5}".format(tuned_rmse)) print("Untuned model test rmse: {:.5} (+{:.2}%)".format(untuned_rmse, percent_delta)) pvalue = ttest_rel((tuned_predict - y_test)**2, (untuned_predict - y_test)**2).pvalue print("pvalue: {:.5}".format(pvalue)) ```
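The decomposition described in the introduction can also be sanity-checked outside CatBoost. A hedged sketch using scikit-learn's `GradientBoostingRegressor.apply` (an analogue of `calc_leaf_indexes`, used here only because it keeps the example self-contained): the ensemble's predictions should be exactly reproducible by a linear model over one-hot-encoded leaf indexes.

```python
# Sketch (assumed analogue, not CatBoost itself): a fitted tree ensemble's
# predictions are a linear function of one-hot leaf indicators.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder

X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)
gbr = GradientBoostingRegressor(n_estimators=20, max_depth=2,
                                random_state=0).fit(X, y)

# apply() returns, for each sample, the id of the leaf it reaches in each tree
leaves = gbr.apply(X).reshape(X.shape[0], -1)
embedding = OneHotEncoder().fit_transform(leaves).toarray()

# OLS on the leaf indicators reproduces the ensemble's own predictions,
# since those predictions are intercept + a weighted sum of the indicators.
lin = LinearRegression().fit(embedding, gbr.predict(X))
print(np.allclose(lin.predict(embedding), gbr.predict(X)))
```

Swapping the target `gbr.predict(X)` for the true labels `y` is exactly the "tune leaf values jointly" trick from the tutorial.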
# Unsupervised methods In this lesson, we'll cover unsupervised computational text analysis approaches. The central methods covered are TF-IDF and Topic Modeling. Both of these are common approaches in the social sciences and humanities. [DTM/TF-IDF](#dtm)<br> [Topic modeling](#topics)<br> ### Today you will * Understand the DTM and why it's important to text analysis * Learn how to create a DTM in Python * Learn basic functionality of Python's package scikit-learn * Understand tf-idf scores * Learn a simple way to identify distinctive words * Implement a basic topic modeling algorithm and learn how to tweak it * In the process, gain more familiarity and comfort with the Pandas package and manipulating data ### Key Jargon * *Document Term Matrix*: * a matrix that describes the frequency of terms that occur in a collection of documents. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. * *TF-IDF Scores*: * short for term frequency–inverse document frequency, a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. * *Topic Modeling*: * A general class of statistical models that uncover abstract topics within a text. It uses the co-occurrence of words within documents, compared to their distribution across documents, to uncover these abstract themes. The output is a list of weighted words, which indicate the subject of each topic, and a weight distribution across topics for each document. * *LDA*: * Latent Dirichlet Allocation. A particular model for topic modeling. It does not take document order into account, unlike some other topic modeling algorithms. ## DTM/TF-IDF <a id='dtm'></a> In this lesson we will use Python's scikit-learn package to make a document term matrix from a .csv Music Reviews dataset (collected from MetaCritic.com).
We will then use the DTM and a word weighting technique called tf-idf (term frequency–inverse document frequency) to identify important and discriminating words within this dataset (utilizing the Pandas package). The illustrating question: **what words distinguish reviews of Rap albums, Indie Rock albums, and Jazz albums?** ``` import os import numpy as np import pandas as pd DATA_DIR = 'data' music_fname = 'music_reviews.csv' music_fname = os.path.join(DATA_DIR, music_fname) ``` ### First attempt at reading in file ``` reviews = pd.read_csv(music_fname, sep='\t') reviews.head() ``` Print the text of the first review. ``` print(reviews['body'][0]) ``` ### Explore the Data using Pandas Let's first look at some descriptive statistics about this dataset, to get a feel for what's in it. We'll do this using the Pandas package. Note: this is always good practice. It serves two purposes. It checks to make sure your data is correct and there are no major errors. It also keeps you in touch with your data, which will help with interpretation. <3 your data! First, what genres are in this dataset, and how many reviews are in each genre? ``` #We can count this using the value_counts() function reviews['genre'].value_counts() ``` The first thing most people do is to `describe` their data. (This is the `summary` command in R, or the `sum` command in Stata.) ``` #There's only one numeric column in our data so we only get one column for output. reviews.describe() ``` This only gets us numerical summaries. To get summaries of some of the other columns, we can explicitly ask for them. ``` reviews.describe(include=['O']) ``` Who were the reviewers? ``` reviews['critic'].value_counts().head(10) ``` And the artists? ``` reviews['artist'].value_counts().head(10) ``` We can get the average score as follows: ``` reviews['score'].mean() ``` What if we want to know the average score for each genre? To do this, we use Pandas' `groupby` function.
You'll want to get very familiar with the `groupby` function. It's quite powerful. (Similar to `collapse` in Stata.) ``` reviews_grouped_by_genre = reviews.groupby("genre") reviews_grouped_by_genre['score'].mean().sort_values(ascending=False) ``` ### Creating the DTM using scikit-learn Ok, that's the summary of the metadata. Next, we turn to analyzing the text of the reviews. Remember, the text is stored in the 'body' column. First, a preprocessing step to remove numbers. ``` def remove_digits(comment): return ''.join([ch for ch in comment if not ch.isdigit()]) reviews['body_without_digits'] = reviews['body'].apply(remove_digits) reviews reviews['body_without_digits'].head() ``` ### CountVectorizer Function Our next step is to turn the text into a document term matrix using the scikit-learn function called `CountVectorizer`. ``` from sklearn.feature_extraction.text import CountVectorizer countvec = CountVectorizer() sparse_dtm = countvec.fit_transform(reviews['body_without_digits']) ``` Great! We made a DTM! Let's look at it. ``` sparse_dtm ``` This format is called Compressed Sparse Format. It saves a lot of memory to store the DTM in this format, but it is difficult for a human to read. To illustrate the techniques in this lesson we will first convert this matrix back to a Pandas DataFrame, a format we're more familiar with. For larger datasets, you will have to use the Compressed Sparse Format. Putting it into a DataFrame, however, will enable us to get more comfortable with Pandas! ``` dtm = pd.DataFrame(sparse_dtm.toarray(), columns=countvec.get_feature_names(), index=reviews.index) dtm.head() ``` ### What can we do with a DTM? We can quickly identify the most frequent words. ``` dtm.sum().sort_values(ascending=False).head(10) ``` ### Challenge * Print out the most infrequent words rather than the most frequent words. You can look at the [Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/api.html#api-dataframe-stats) for more information.
* Print the average number of times each word is used in a review. * Print this out sorted from highest to lowest. ### TF-IDF scores How to find distinctive words in a corpus is a long-standing question in text analysis. Today, we'll learn one simple approach to this: TF-IDF. The idea behind word scores is to weight words not just by their frequency, but by their frequency in one document compared to their distribution across all documents. Words that are frequent, but are also used in every single document, will not be distinguishing. We want to identify words that are unevenly distributed across the corpus. One of the most popular ways to weight words (beyond frequency counts) is the `tf-idf score`. Offsetting the frequency of a word by its document frequency (the number of documents in which it appears) will in theory filter out common terms such as 'the', 'of', and 'and'. Traditionally, the *inverse document frequency* of word $j$ is calculated as: $idf_{j} = log\left(\frac{\#docs}{\#docs\,with\,j}\right)$ and the *term frequency–inverse document frequency* is $tfidf_{ij} = f_{ij}\times{idf_j}$ where $f_{ij}$ is the number of occurrences of word $j$ in document $i$. You can, and often should, normalize the word frequency: $tfidf_{ij} = \frac{f_{ij}}{\#words\,in\,doc\,i}\times{idf_{j}}$ We can calculate this manually, but scikit-learn has a built-in function to do so. This function also uses log frequencies, so the numbers will not correspond exactly to the calculations above. We'll use the [scikit-learn calculation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html), but a challenge for you: use Pandas to calculate this manually. ### TF-IDFVectorizer Function To do so, we simply do the same thing we did above with CountVectorizer, but instead we use the function TfidfVectorizer.
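As a reference for the manual-calculation challenge, here is a minimal sketch of the formulas above on a toy document-term matrix (unsmoothed idf, so the numbers will differ from scikit-learn's smoothed, normalized variant):

```python
# Sketch: manual tf-idf from a toy DTM, following the formulas above.
import numpy as np
import pandas as pd

dtm_toy = pd.DataFrame(
    {"the": [3, 2, 4], "jazz": [2, 0, 0], "guitar": [0, 1, 1]},
    index=["doc1", "doc2", "doc3"],
)
n_docs = len(dtm_toy)
docs_with_j = (dtm_toy > 0).sum()              # number of docs containing word j
idf = np.log(n_docs / docs_with_j)             # idf_j = log(#docs / #docs with j)
tf = dtm_toy.div(dtm_toy.sum(axis=1), axis=0)  # normalized term frequency
tfidf_toy = tf * idf                           # tfidf_ij = tf_ij * idf_j
print(tfidf_toy.round(3))
```

'the' occurs in every document, so its idf (and hence its tf-idf) is zero for every document — exactly the filtering effect described above.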
``` from sklearn.feature_extraction.text import TfidfVectorizer tfidfvec = TfidfVectorizer() sparse_tfidf = tfidfvec.fit_transform(reviews['body_without_digits']) sparse_tfidf tfidf = pd.DataFrame(sparse_tfidf.toarray(), columns=tfidfvec.get_feature_names(), index=reviews.index) tfidf.head() ``` Let's look at the 20 words with the highest tf-idf weights. ``` tfidf.max().sort_values(ascending=False).head(20) ``` Ok! We have successfully identified content words, without removing stop words. ### Identifying Distinctive Words What can we do with this? These scores are best used when you want to identify distinctive words for individual documents, or groups of documents, compared to other groups or the corpus as a whole. To illustrate this, let's compare three genres and identify the most distinctive words by genre. First we add in a column of genre. ``` tfidf['genre_'] = reviews['genre'] tfidf.head() ``` Now let's compare the words with the highest tf-idf weight for each genre. ``` rap = tfidf[tfidf['genre_']=='Rap'] indie = tfidf[tfidf['genre_']=='Indie'] jazz = tfidf[tfidf['genre_']=='Jazz'] rap.max(numeric_only=True).sort_values(ascending=False).head() indie.max(numeric_only=True).sort_values(ascending=False).head() jazz.max(numeric_only=True).sort_values(ascending=False).head() ``` There we go! A method of identifying distinctive words. ### Challenge Instead of outputting the highest weighted words, output the lowest weighted words. How should we interpret these words? # Topic modeling <a id='topics'></a> The goal of topic models can be twofold: 1/ learning something about the topics themselves, i.e. what the text is about; 2/ reducing the dimensionality of text to represent a document as a weighted average of K topics instead of a vector of token counts over the whole vocabulary. In the latter case, topic modeling is a way to treat text like any other data, in a form more tractable for subsequent statistical analysis (linear/logistic regression, etc).
There are many topic modeling algorithms, but we'll use LDA. This is a standard model to use. Again, the goal is not to learn everything you need to know about topic modeling. Instead, this will provide you some starter code to run a simple model, with the idea that you can use this base of knowledge to explore this further. We will run Latent Dirichlet Allocation, the most basic and the oldest version of topic modeling$^1$. We will run this in one big chunk of code. Our challenge: use our knowledge of scikit-learn that we gained above to walk through the code to understand what it is doing. Your challenge: figure out how to modify this code to work on your own data, and/or tweak the parameters to get better output. First, a bit of theory. LDA is a generative model - a model over the entire data generating process - in which a document is a mixture of topics and topics are probability distributions over tokens in the vocabulary. The (normalized) frequency of word $j$ in document $i$ can be written as: $q_{ij} = v_{i1}*\theta_{1j} + v_{i2}*\theta_{2j} + ... + v_{iK}*\theta_{Kj}$ where K is the total number of topics, $\theta_{kj}$ is the probability that word $j$ shows up in topic $k$ and $v_{ik}$ is the weight assigned to topic $k$ in document $i$. The model treats $v$ and $\theta$ as generated from Dirichlet-distributed priors and can be estimated through Maximum Likelihood or Bayesian methods. Note: we will be using a different dataset for this technique. The music reviews in the above dataset are often short, one word or one sentence reviews. Topic modeling is not really appropriate for texts that are this short. Instead, we want texts that are longer and are composed of multiple topics each. For this exercise we will use a database of children's literature from the 19th century. 
The data were compiled by students in this course: http://english197s2015.pbworks.com/w/page/93127947/FrontPage Found here: http://dhresourcesforprojectbuilding.pbworks.com/w/page/69244469/Data%20Collections%20and%20Datasets#demo-corpora That page has additional corpora, for those interested in exploring text analysis further. $^1$ Reference: Blei, D. M., A. Y. Ng, and M. I. Jordan (2003). Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022. ``` literature_fname = os.path.join(DATA_DIR, 'childrens_lit.csv.bz2') df_lit = pd.read_csv(literature_fname, sep='\t', encoding = 'utf-8', compression = 'bz2', index_col=0) #drop rows where the text is missing df_lit = df_lit.dropna(subset=['text']) df_lit.head() ``` Now we're ready to fit the model. This requires the use of CountVectorizer, which we've already used, and the scikit-learn function LatentDirichletAllocation. See [here](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html) for more information about this function. First, we have to import it from sklearn. ``` from sklearn.decomposition import LatentDirichletAllocation ``` In sklearn, the input to LDA is a DTM (with either counts or TF-IDF scores). ``` tfidf_vectorizer = TfidfVectorizer(max_df=0.80, min_df=50, stop_words='english') tfidf = tfidf_vectorizer.fit_transform(df_lit['text']) tf_vectorizer = CountVectorizer(max_df=0.80, min_df=50, stop_words='english' ) tf = tf_vectorizer.fit_transform(df_lit['text']) ``` This is where we fit the model. ``` import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) #the number-of-topics argument is n_components (called n_topics in older scikit-learn) lda = LatentDirichletAllocation(n_components=10, max_iter=20, random_state=0) lda = lda.fit(tf) ``` This is a function to print out the top words for each topic in a pretty way. Don't worry too much about understanding every line of this code.
``` def print_top_words(model, feature_names, n_top_words): for topic_idx, topic in enumerate(model.components_): print("\nTopic #{}:".format(topic_idx)) print(" ".join([feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) print() tf_feature_names = tf_vectorizer.get_feature_names() print_top_words(lda, tf_feature_names, 20) ``` ### Challenge Modify the script above to: * increase the number of topics * increase the number of printed top words per topic * fit the model to the tf-idf matrix instead of the tf one ## Topic weights One thing we may want to do with the output is compare the prevalence of each topic across documents. A simple way to do this (though not memory efficient) is to merge the topic distribution back into the Pandas dataframe. First get the topic distribution array. ``` topic_dist = lda.transform(tf) topic_dist ``` Merge back with the original dataframe. ``` topic_dist_df = pd.DataFrame(topic_dist) df_w_topics = topic_dist_df.join(df_lit) df_w_topics ``` Now we can check the average weight of each topic across gender using `groupby`. ``` grouped = df_w_topics.groupby('author gender') grouped[0].mean().sort_values(ascending=False) ``` ## LDA as dimensionality reduction Now that we have obtained a distribution of topic weights for each document, we can represent our corpus with a dense document-weight matrix as opposed to our initial sparse DTM. The weights can then replace tokens as features for any subsequent task (classification, prediction, etc). A simple example is measuring cosine similarity between documents. For instance, which book is closest to the first book in our corpus? Let's use pairwise cosine similarity to find out. NB: cosine similarity measures the angle between two vectors, which provides a measure of distance robust to vectors of different lengths (total number of tokens). First, let's turn the DTM into a readable dataframe.
``` dtm = pd.DataFrame(tf_vectorizer.fit_transform(df_lit['text']).toarray(), columns=tf_vectorizer.get_feature_names(), index = df_lit.index) ``` Next let's import the cosine_similarity function from sklearn and print the cosine similarity between the first and second book or the first and third book. ``` from sklearn.metrics.pairwise import cosine_similarity print("Cosine similarity between first and second book: " + str(cosine_similarity(dtm.iloc[0,:], dtm.iloc[1,:]))) print("Cosine similarity between first and third book: " + str(cosine_similarity(dtm.iloc[0,:], dtm.iloc[2,:]))) ``` What if we use the topic weights instead of word frequencies? ``` dwm = df_w_topics.iloc[:,:10] print("Cosine similarity between first and second book: " + str(cosine_similarity(dwm.iloc[0,:], dwm.iloc[1,:]))) print("Cosine similarity between first and third book: " + str(cosine_similarity(dwm.iloc[0,:], dwm.iloc[2,:]))) ``` ### Challenge Calculate the cosine similarity between the first book and all other books to identify the most similar one. ### Further resources [This blog post](https://de.dariah.eu/tatom/feature_selection.html) goes through finding distinctive words using Python in more detail Paper: [Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict](http://languagelog.ldc.upenn.edu/myl/Monroe.pdf), Burt Monroe, Michael Colaresi, Kevin Quinn [Topic modeling with Textacy](https://github.com/repmax/topic-model/blob/master/topic-modelling.ipynb)
This notebook is aimed at identifying unphased regions of the primary contigs based on short-read Illumina mapping data. Parts of it are really slow. This notebook was only designed for the purpose of analyzing the Pst-104E genome. No guarantees it works in any other situation. ### Coverage analysis script The original idea of the script is to simply use the coverage dataframe generated by samtools depth -aa ${x} > ${x}.aall.cov and pull out 1x homozygous and unique coverage areas. This is for now done with the 'bwamem.Pst79_folder5.sam.sorted.bam.aall.cov' files. Think about using DeepTools in the future when comparing multiple mapping experiments. #### preserved comments and ideas from the original version Do a groupby and take the mean on this. Get rid of the high coverage contigs. Calculate standard statistics. Do a rolling window on each group and convert to a gff-file kind of format. Also look at the coverage on p only and calculate coverage there in the same way; this should be the diploid coverage really (kind of). Think about combining the two dataframes for pcontigs and doing a covariation analysis on the sliding windows: look for areas where there is no change for p vs ph mapping and high coverage in both cases. This should be a homozygous region.
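The rolling-window idea described above can be illustrated in isolation before the real data is loaded. A toy sketch with synthetic coverage and a hypothetical threshold (not the values used below):

```python
# Toy sketch of the approach: smooth per-base coverage with a centered
# rolling window, then flag positions below a hypothetical 1x/2x cutoff.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
toy_cov = pd.DataFrame({
    "contig": "pcontig_toy",
    "position": np.arange(1, 5001),
    # first half ~1x (phased/unique), second half ~2x (collapsed/homozygous)
    "coverage": np.r_[rng.poisson(40, 2500), rng.poisson(80, 2500)],
})
toy_cov["rolling_mean"] = (toy_cov["coverage"]
                           .rolling(window=500, min_periods=1, center=True)
                           .mean())
threshold = 60  # hypothetical midpoint between the 1x and 2x levels
toy_cov["below_1x_threshold"] = toy_cov["rolling_mean"] < threshold
print(toy_cov["below_1x_threshold"].mean())
```

In the real analysis below the thresholds come from the mean and std of the per-contig coverage rather than a fixed midpoint.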
``` %matplotlib inline import pandas as pd import matplotlib.pyplot as plt #import seaborn import matplotlib from Bio import SeqIO, SeqUtils import os #Define the PATH BASE_AA_PATH = '/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_104E_v12' BASE_A_PATH = '/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/032017_assembly' #for now use the previous mapping that still included high coverage regions BASE_ORIGINAL = '/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_E104_v1' COV_IN_PATH = '/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_E104_v1/COV' BAM_IN_PATH = '/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_E104_v1/SRM' #apply analysis restricted to final assembly Pst_104E_v12 COV_OUT_PATH = os.path.join(BASE_AA_PATH, 'COV') if not os.path.isdir(COV_OUT_PATH): os.mkdir(COV_OUT_PATH) input_genome = 'Pst_E104_v1' coverage_file_suffix = 'bwamem.Pst79_folder5.sam.sorted.bam.aall.cov' output_genome = 'Pst_104E_v12' #get all the cov files with this coverage_file_suffix. 
Expects to get all three mappings: p, h and ph. cov_files = [os.path.join(COV_IN_PATH, x) for x in os.listdir(COV_IN_PATH) if x.endswith(coverage_file_suffix)] cov_header = ["contig", "position", 'coverage'] ph_cov = pd.read_csv([y for y in cov_files if 'ph_ctg' in y][0], sep='\t', header=None, names=cov_header) print('Read in following file as ph_coverage produced with samtools depth -aa feature: %s' %[y for y in cov_files if 'ph_ctg' in y][0]) p_cov = pd.read_csv([y for y in cov_files if 'p_ctg' in y][0], sep='\t', header=None, names=cov_header) print('Read in following file as p_coverage produced with samtools depth -aa feature: %s' %[y for y in cov_files if 'p_ctg' in y][0]) #exclude 'ph_ctg' files here, since 'h_ctg' is a substring of 'ph_ctg' h_cov = pd.read_csv([y for y in cov_files if 'h_ctg' in y and 'ph_ctg' not in y][0], sep='\t', header=None, names=cov_header) print('Read in following file as h_coverage produced with samtools depth -aa feature: %s' %[y for y in cov_files if 'h_ctg' in y and 'ph_ctg' not in y][0]) #summarize the mean coverage by contig for p_contigs when mapping against p_contigs only mean_cov_per_contig = p_cov.groupby('contig').mean() mean_cov_per_contig['coverage'].plot.box() overall_mean = p_cov['coverage'].mean() overall_std = p_cov['coverage'].std() print("The mean overall coverage is %.2f and the std is %.2f for p" % (overall_mean, overall_std)) #summarize the mean coverage by contig for all contigs when mapping against p and h contigs. Plot the coverage boxplot for p_contigs only mean_cov_per_contig_ph = ph_cov.groupby('contig').mean() mean_cov_per_contig_ph['contig'] = mean_cov_per_contig_ph.index mean_cov_per_contig_ph[mean_cov_per_contig_ph['contig'].str.contains('pcontig')]['coverage'].plot.box(sym ='r+') overall_mean_ph = ph_cov['coverage'].mean() overall_std_ph = ph_cov['coverage'].std() print("The mean overall coverage is %.2f and the std is %.2f for ph mapping" % (overall_mean_ph, overall_std_ph)) #summarize the mean coverage by contig for h contigs when mapping against h contigs.
Plot the coverage boxplot for h_contigs only mean_cov_per_contig_h = h_cov.groupby('contig').mean() mean_cov_per_contig_h['contig'] = mean_cov_per_contig_h.index mean_cov_per_contig_h['coverage'].plot.box(sym ='r+') overall_mean_h = h_cov['coverage'].mean() overall_std_h = h_cov['coverage'].std() print("The mean overall coverage is %.2f and the std is %.2f for h mapping" % (overall_mean_h, overall_std_h)) #drop all contigs that have mean coverage above 2000 as calculated on p mapping. pcontig_greater_2000 = mean_cov_per_contig[mean_cov_per_contig['coverage'] > 2000].index p_cov_smaller_2000 = p_cov[~p_cov['contig'].isin(pcontig_greater_2000)] pcontig_smaller_2000 = p_cov_smaller_2000['contig'].unique() ph_cov_smaller_2000_p = ph_cov[ph_cov['contig'].isin(pcontig_smaller_2000)] #drop all contigs that have mean coverage above 2000 as calculated on p mapping. #In this case h contigs are selected via the ph mapping ph_cov['pcontig'] = ph_cov['contig'].str.replace('h','p').str[:-4] #this is a bit of a hack as the pcontigs are also #processed but shortened, so the next line selects only h contigs ph_cov_smaller_2000_h = ph_cov[ph_cov['pcontig'].isin(pcontig_smaller_2000)] #drop all contigs that have mean coverage above 2000 as calculated on p mapping.
Apply this to the h contigs as well mean_cov_per_contig_h['pcontig'] = mean_cov_per_contig_h['contig'].str.replace('h','p').str[:-4] h_cov['pcontig'] = h_cov['contig'].str.replace('h','p').str[:-4] h_cov_smaller_2000 = h_cov[h_cov['pcontig'].isin(pcontig_smaller_2000)] mean_cov_contig_s2000 = p_cov_smaller_2000.groupby(by='contig')['coverage'].mean() mean_cov_contig_s2000.plot.box() mean_s2000 = mean_cov_contig_s2000.mean() std_s2000 = mean_cov_contig_s2000.std() print("The mean overall coverage for s2000 contigs is %.2f and the std is %.2f" % (mean_s2000, std_s2000)) mean_cov_contig_s2000_ph_p = ph_cov_smaller_2000_p.groupby(by='contig')['coverage'].mean() mean_cov_contig_s2000_ph_p.plot.box() plt.title('mean_cov_contig_s2000_ph_p') mean_cov_contig_s2000_h = h_cov_smaller_2000.groupby(by='contig')['coverage'].mean() mean_cov_contig_s2000_h.plot.box() plt.title('mean_cov_contig_s2000_h') mean_cov_contig_s2000_ph_h = ph_cov_smaller_2000_h.groupby(by='contig')['coverage'].mean() mean_cov_contig_s2000_ph_h.plot.box() plt.title('mean_cov_contig_s2000_ph_h') #these might be collapsed repeats mean_cov_contig_s2000_h[mean_cov_contig_s2000_h > 200] mean_cov_contig_s2000_ph_h[mean_cov_contig_s2000_ph_h >200] ``` ``` len(ph_cov_smaller_2000_p['contig'].unique()) #get the list of p contigs with haplotigs [pwh] and without haplotigs [pwoh] pwh_list = pd.read_csv('/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_E104_v1/Pst_E104_v1_pwh_ctg.txt',\ sep='\t', header=None)[0].tolist() pwoh_list = pd.read_csv('/home/benjamin/genome_assembly/PST79/FALCON/p_assemblies/v9_1/Pst_E104_v1/Pst_E104_v1_pwoh_ctg.txt',\ sep='\t', header=None)[0].tolist() mean_s2000_ph_p = mean_cov_contig_s2000_ph_p.mean() std_s2000_ph_p = mean_cov_contig_s2000_ph_p.std() print("The mean overall coverage for primary contigs with s2000 contigs is %.2f and the std is %.2f" % (mean_s2000_ph_p, std_s2000_ph_p)) mean_s2000_ph_h = mean_cov_contig_s2000_ph_h.mean() std_s2000_ph_h =
mean_cov_contig_s2000_ph_h.std() print("The mean overall coverage for haplotigs with primary contigs cov < 2000 is %.2f and the std is %.2f" % (mean_s2000_ph_h, std_s2000_ph_h)) #mean_cov_contig_s2000_ph_pwh represents all contigs that are below 2000x coverage and are pwh contigs mean_cov_contig_s2000_ph_pwh = mean_cov_contig_s2000_ph_p[mean_cov_contig_s2000_ph_p.index.isin(pwh_list)] mean_cov_contig_s2000_ph_pwh.plot.box() plt.title('mean_cov_contig_s2000_ph_pwh') mean_s2000_ph_pwh = mean_cov_contig_s2000_ph_pwh.mean() std_s2000_ph_pwh = mean_cov_contig_s2000_ph_pwh.std() print("The mean overall coverage for s2000 contigs of pwh contigs while ph mapping is %.2f and the std is %.2f" % (mean_s2000_ph_pwh, std_s2000_ph_pwh)) mean_cov_contig_s2000_ph_pwoh = mean_cov_contig_s2000_ph_p[mean_cov_contig_s2000_ph_p.index.isin(pwoh_list)] mean_cov_contig_s2000_ph_pwoh.plot.box() plt.title('mean_cov_contig_s2000_ph_pwoh') mean_s2000_ph_pwoh = mean_cov_contig_s2000_ph_pwoh.mean() std_s2000_ph_pwoh = mean_cov_contig_s2000_ph_pwoh.std() print("The mean overall coverage for s2000 contigs of pwoh contigs while ph mapping is %.2f and the std is %.2f" % (mean_s2000_ph_pwoh, std_s2000_ph_pwoh)) #add h contigs as well print("We have %i sensible pwh contigs with %i h contigs and %i sensible pwoh contigs" % (len(mean_cov_contig_s2000_ph_pwh),len(mean_cov_contig_s2000_ph_h.index), len(mean_cov_contig_s2000_ph_pwoh) )) mean_cov_contig_s2000_ph_h.index mean_cov_contig_s2000_ph_pwh.index mean_cov_contig_s2000_ph_p.index #think about what thresholds to pick in the long run. 
threshold_up_ph_p = mean_s2000_ph_p + 2*std_s2000_ph_p
threshold_down_ph_p = mean_s2000_ph_p - 2*std_s2000_ph_p
threshold_up = mean_s2000 + 2*std_s2000
threshold_down = mean_s2000 - 2*std_s2000

# potential fully homozygous contigs
mean_cov_contig_s2000_ph_pwoh[mean_cov_contig_s2000_ph_pwoh > threshold_up_ph_p]
mean_cov_contig_s2000_ph_pwoh
2*mean_s2000_ph_p - std_s2000_ph_p
mean_s2000 - std_s2000
```

```
# think about what thresholds to pick in the long run.
threshold_up_ph_p = mean_s2000_ph_p + 2*std_s2000_ph_p
threshold_down_ph_p = mean_s2000_ph_p - 2*std_s2000_ph_p
threshold_up = mean_s2000 + 2*std_s2000
threshold_down = mean_s2000 - 2*std_s2000

import warnings
warnings.filterwarnings('ignore')

# now write a loop that does it all for you over the whole two dataframes
bed_p_unique_list = []
bed_p_homo_list = []
process_p_df_dict = {}
process_ph_df_dict = {}
for contig in pcontig_smaller_2000:
    # subset the two dataframes
    tmp_p_df = p_cov[p_cov['contig'] == contig]
    tmp_p_df_ph = ph_cov[ph_cov['contig'] == contig]
    # generate the rolling windows
    tmp_p_df['Rolling_w1000_p'] = tmp_p_df.rolling(window=1000, min_periods=1, center=True, win_type='blackmanharris')['coverage'].mean()
    tmp_p_df_ph['Rolling_w1000_ph_p'] = tmp_p_df_ph.rolling(window=1000, min_periods=1, center=True, win_type='blackmanharris')['coverage'].mean()
    tmp_p_df['Rolling_w10000_p'] = tmp_p_df.rolling(window=10000, min_periods=1, center=True, win_type='blackmanharris')['coverage'].mean()
    tmp_p_df_ph['Rolling_w10000_ph_p'] = tmp_p_df_ph.rolling(window=10000, min_periods=1, center=True, win_type='blackmanharris')['coverage'].mean()
    tmp_p_df['Rolling_w1000_ph_p'] = tmp_p_df_ph['Rolling_w1000_ph_p']
    process_p_df_dict[contig] = tmp_p_df
    process_ph_df_dict[contig] = tmp_p_df_ph

    # potential p_unique DNA stretches are defined as p contig coverage stretches that show
    # heterozygous coverage while doing p mapping
    # coverage -> mean_s2000_ph_p
    # [Rolling_w1000_p < mean_s2000_ph_p + 2*std_s2000_ph_p]
    tmp_p_df_p_unique = tmp_p_df[tmp_p_df['Rolling_w1000_p'] < (mean_s2000_ph_p + 2*std_s2000_ph_p)]
    if len(tmp_p_df_p_unique) > 0:
        tmp_p_df_p_unique.reset_index(drop=True, inplace=True)
        # add a position+1 column by copying the position column from index 1 onwards and making
        # position+1 for the last element in the dataframe equal to its own value
        tmp_p_df_p_unique['position+1'] = tmp_p_df_p_unique.loc[1:, 'position'].\
            append(pd.Series(tmp_p_df_p_unique.loc[len(tmp_p_df_p_unique)-1, 'position'], index=[tmp_p_df_p_unique.index[-1]])).reset_index(drop=True)
        tmp_p_df_p_unique['position_diff+1'] = tmp_p_df_p_unique['position+1'] - tmp_p_df_p_unique['position']
        # add a position-1 column by copying the position column up to index len-2 and making
        # position-1 for the first element in the dataframe equal to its own value
        position_1 = list(tmp_p_df_p_unique.loc[:len(tmp_p_df_p_unique)-2, 'position'])
        position_1.insert(0, tmp_p_df_p_unique.loc[0, 'position'])
        tmp_p_df_p_unique['position-1'] = position_1
        tmp_p_df_p_unique['position_diff-1'] = tmp_p_df_p_unique['position'] - tmp_p_df_p_unique['position-1']
        # start points of a feature stretch => where the previous position is not exactly 1 away
        # tmp_p_df_p_unique[tmp_p_df_p_unique['position_diff-1'] != 1].head()
        start_pos_index = tmp_p_df_p_unique[tmp_p_df_p_unique['position_diff-1'] != 1].index
        stop_pos_index = tmp_p_df_p_unique[tmp_p_df_p_unique['position_diff+1'] != 1].index
        contig_name_list = [contig]*len(start_pos_index)
        start_pos = [tmp_p_df_p_unique.loc[pos, 'position'] - 1 for pos in start_pos_index]
        stop_pos = [tmp_p_df_p_unique.loc[pos, 'position'] for pos in stop_pos_index]
        p_unique_bed = pd.DataFrame([contig_name_list, start_pos, stop_pos]).T
        bed_p_unique_list.append(p_unique_bed)

    # potential p_homo DNA stretches are defined as p contig coverage stretches that show
    # homozygous coverage while doing ph mapping
    # coverage -> 2*mean_s2000_ph_p
    # [Rolling_w1000_p > 2*mean_s2000_ph_p - 2*std_s2000_ph_p]
    # here might be a consideration to ask for a difference in profile (covariance != 1)
    tmp_p_df_p_homo = tmp_p_df[(tmp_p_df['Rolling_w1000_ph_p'] > (2*mean_s2000_ph_p - 2*std_s2000_ph_p))]
    if len(tmp_p_df_p_homo) > 0:
        tmp_p_df_p_homo.reset_index(drop=True, inplace=True)
        # add a position+1 column by copying the position column from index 1 onwards and making
        # position+1 for the last element in the dataframe equal to its own value
        tmp_p_df_p_homo['position+1'] = tmp_p_df_p_homo.loc[1:, 'position'].\
            append(pd.Series(tmp_p_df_p_homo.loc[len(tmp_p_df_p_homo)-1, 'position'], index=[tmp_p_df_p_homo.index[-1]])).reset_index(drop=True)
        tmp_p_df_p_homo['position_diff+1'] = tmp_p_df_p_homo['position+1'] - tmp_p_df_p_homo['position']
        # add a position-1 column by copying the position column up to index len-2 and making
        # position-1 for the first element in the dataframe equal to its own value
        position_1 = list(tmp_p_df_p_homo.loc[:len(tmp_p_df_p_homo)-2, 'position'])
        position_1.insert(0, tmp_p_df_p_homo.loc[0, 'position'])
        tmp_p_df_p_homo['position-1'] = position_1
        tmp_p_df_p_homo['position_diff-1'] = tmp_p_df_p_homo['position'] - tmp_p_df_p_homo['position-1']
        # start points of a feature stretch => where the previous position is not exactly 1 away
        # tmp_p_df_p_homo[tmp_p_df_p_homo['position_diff-1'] != 1].head()
        start_pos_index = tmp_p_df_p_homo[tmp_p_df_p_homo['position_diff-1'] != 1].index
        stop_pos_index = tmp_p_df_p_homo[tmp_p_df_p_homo['position_diff+1'] != 1].index
        contig_name_list = [contig]*len(start_pos_index)
        start_pos = [tmp_p_df_p_homo.loc[pos, 'position'] - 1 for pos in start_pos_index]
        stop_pos = [tmp_p_df_p_homo.loc[pos, 'position'] for pos in stop_pos_index]
        p_homo_bed = pd.DataFrame([contig_name_list, start_pos, stop_pos]).T
        bed_p_homo_list.append(p_homo_bed)
    print('Contig %s done.' % contig)

len(bed_p_unique_list)
p_homo_bed_df = pd.concat(bed_p_homo_list).sort_values(by=[0,1])
p_unique_bed_df = pd.concat(bed_p_unique_list).sort_values(by=[0,1])
#p_homo_bed_df.to_csv(cov_folder+'Pst_E104_v1_ph_ctg.ph_p_homo_cov.bed', header=None, index=None, sep='\t')
#p_unique_bed_df.to_csv(cov_folder+'Pst_E104_v1_ph_ctg.p_p_het_cov.bed', header=None, index=None, sep='\t')
p_homo_bed_df.to_csv(os.path.join(COV_OUT_PATH, output_genome + '_ph_ctg.ph_p_homo_cov.bed'), header=None, index=None, sep='\t')
p_unique_bed_df.to_csv(os.path.join(COV_OUT_PATH, output_genome + '_ph_ctg.p_p_het_cov.bed'), header=None, index=None, sep='\t')

# also write out the processed dataframes with rolling windows and such
process_p_df = pd.concat(process_p_df_dict.values()).sort_values(by=['contig','position'])
process_ph_df = pd.concat(process_ph_df_dict.values()).sort_values(by=['contig','position'])
process_p_df.to_csv(os.path.join(COV_OUT_PATH, input_genome + 'p_ctg.' + coverage_file_suffix.replace('.cov', '.processed.cov')), index=None, sep='\t')
process_ph_df.to_csv(os.path.join(COV_OUT_PATH, input_genome + 'ph_ctg.' + coverage_file_suffix.replace('.cov', '.processed.cov')), index=None, sep='\t')
process_p_df[process_p_df['Rolling_w1000_p'] < 20]
```
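The start/stop bookkeeping in the loop above (collapsing runs of consecutive positions into BED intervals via `position_diff` columns) can be sketched in isolation. `runs_to_bed` is a made-up helper name for illustration, not part of the pipeline:

```
import pandas as pd

def runs_to_bed(contig, positions):
    """Collapse sorted 1-based positions into 0-based, half-open BED intervals."""
    s = pd.Series(sorted(positions))
    starts = s[s.diff() != 1]    # first position of each consecutive run
    stops = s[s.diff(-1) != -1]  # last position of each consecutive run
    return pd.DataFrame({0: contig, 1: starts.values - 1, 2: stops.values})

# positions 5-7 and 10-11 become the intervals (4, 7) and (9, 11)
print(runs_to_bed('ctg1', [5, 6, 7, 10, 11]))
```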
# Introduction to Python

### Function Exercises

#### 1) Write a function that takes an integer minutes and converts it to seconds. <font color='green'>(EASY)</font>

#### 2) Create a function that converts a date formatted as MM/DD/YYYY to YYYYDDMM. The input is a string! (do not use the datetime module)

#### 3) A typical car can hold four passengers and one driver, allowing five people to travel around. Given n number of people, `return` how many cars are needed to seat everyone comfortably.

#### 4) Create a function that replaces all the vowels in a string with the character '#'.

#### 5) Write a function that inverts the keys and values of a dictionary.

#### 6) Create a function that takes a list of strings and returns a list sorted from shortest to longest.

#### 7) Create a function that receives two lists as parameters and returns True if the first list is a subset of the second, or False otherwise. Do not use sets.

#### 8) Create a function that takes an integer and returns the factorial of that integer. That is, the integer multiplied by all positive lower integers.

#### 9) Create a function that takes a number as an argument and returns how many times it is a multiple of 3.

#### 10) Create a function that sorts a list and removes all duplicate items from it. (Don't use 'set')

#### 11) Create a function that checks if a number is even or odd.

#### 12) Create a function that receives a list, an element and a position, and adds the element at the given position in the list. This list is the output of the function.

#### 13) Create a function that receives multiple parameters and returns a list with all the parameters.

#### 14) Given a string, create a function that finds every word that contains a letter given as a parameter.

#### 15) [Project Euler problem #17](https://projecteuler.net/index.php?section=problems&id=17)

If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand, inclusive) were written out in words, how many letters would be used?

NOTE: Do not count spaces or hyphens. For example, the number 342 (three hundred and forty-two) contains 23 letters and the number 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.

Hint: Create a function (or a set of functions) that receives an integer and returns it in "written form".

Example:

print(written_number(234))
> 'two hundred and thirty-four'

#### 16) [Project Euler problem #19](https://projecteuler.net/index.php?section=problems&id=19) "Counting Sundays"

- 1 Jan 1900 was a Monday.
- Thirty days has September, April, June and November.
- All the rest have thirty-one,
- Saving February alone, which has twenty-eight, rain or shine.
- And on leap years, twenty-nine.
- A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.

How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
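As a sketch of the approach the hint for exercise 15 suggests (the function and list names here are illustrative, not required by the exercise):

```
ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def written_number(n):
    # handles 1..1000, with British-style "and" after the hundreds
    if n < 20:
        return ones[n]
    if n < 100:
        return tens[n // 10] + ("-" + ones[n % 10] if n % 10 else "")
    if n < 1000:
        head = ones[n // 100] + " hundred"
        return head + (" and " + written_number(n % 100) if n % 100 else "")
    return "one thousand"

def letter_count(n):
    # spaces and hyphens are not counted
    return sum(c.isalpha() for c in written_number(n))

print(written_number(234))                        # two hundred and thirty-four
print(sum(letter_count(i) for i in range(1, 6)))  # 19, matching the example above
```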
# Air Quality Predictions with Amazon SageMaker and Amazon EMR This notebook demonstrates the ability to use Apache Spark on Amazon EMR to do data prep with two different datasets in order to build an urban air quality predictor with Amazon SageMaker. To create the environment, use the `us-east-1` CloudFormation template from the [Create and Managed Amazon EMR Clusters from SageMaker Studio](https://aws.amazon.com/blogs/machine-learning/part-1-create-and-manage-amazon-emr-clusters-from-sagemaker-studio-to-run-interactive-spark-and-ml-workloads/) blog post. This notebook makes use of the approach demonstrated in the blog post about how to [Build a model to predict the impact of weather on urban air quality using Amazon SageMaker](https://aws.amazon.com/blogs/machine-learning/build-a-model-to-predict-the-impact-of-weather-on-urban-air-quality-using-amazon-sagemaker/) and combines data from these two open datasets: - [OpenAQ physical air quality data](https://registry.opendata.aws/openaq/) - [NOAA Global Surface Summary of Day](https://registry.opendata.aws/noaa-gsod/) **Note: This notebook was written for Spark3 (running on EMR6+)** Before we get started - we need to upgrade the version of pandas we use as there is a [minor version conflict with numpy](https://github.com/numpy/numpy/issues/18355). Run the cell below, restart the kernel, and run the next cell to validate the version of pandas is `1.0.5`. ``` %%local %pip install pandas==1.0.5 %%local import pandas as pd print(pd.__version__) ``` Next, we use the `sagemaker_studio_analytics_extension` to connect to our EMR cluster that we created using "Clusters" section under the "SageMaker resources" tab to the left. ``` # %load_ext sagemaker_studio_analytics_extension.magics # %sm_analytics emr connect --cluster_id j-xxxxxxxxxxxx --auth-type None ``` When you first connect to the cluster, the extension prints out your YARN Application ID and a link you can use to start the Spark UI. 
If you need to fetch the link again, you can always use the `%%info` magic.

```
%%info
```

## Part 1: Data Prep in Amazon EMR

In the cells below, we're going to perform the following operations:

- Use Spark on the EMR cluster to read our data from the OpenAQ S3 Bucket.
- Filter the available data to Seattle and NO2 readings (indicative of air quality).
- Group the readings by day.
- Export the aggregate dataset to a local Pandas dataframe in the notebook.

```
df = spark.read.json("s3://openaq-fetches/realtime-gzipped/2022-01-05/1641409725.ndjson.gz")
df2 = spark.read.schema(df.schema).json("s3://openaq-fetches/realtime-gzipped/20*")
df2.head()

from pyspark.sql.functions import split, lower

dfSea = df2.filter(lower((df2.city)).contains('seattle')).filter(df2.parameter == "no2").withColumn("year", split(df2.date.utc, "-", 0)[0]).cache()
dfSea.show(truncate=False)

from pyspark.sql.functions import to_date

dfNoAvg = dfSea.withColumn("ymd", to_date(dfSea.date.utc)).groupBy("ymd").avg("value").withColumnRenamed("avg(value)", "no2_avg")
dfNoAvg.show()
```

While this is running, you can click the Spark UI link mentioned above to debug your job. Some useful pages to check out:

- The "Jobs" page shows you the current status of your job/task
- The "Event Timeline" on the Jobs page shows Spark Executors starting up or shutting down
- The "Executors" tab shows you how many Executors are started, what the capacity is of each, and allows you to drill into logs

Here, you could also experiment with the `dfSea` dataframe as it is cached. The command below should execute within a matter of seconds.

```
%%spark -o dfNoAvg
```

## Part 2: Bring Spark results into SageMaker Studio

With the `%%spark -o` command above, we took the `dfNoAvg` dataframe from Spark and made it available in the `%%local` Python context as a Pandas dataframe with the same name. Now we can use local libraries to explore the data as well.
``` %matplotlib inline import pandas as pd from datetime import datetime import numpy as np import seaborn as sns import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [15, 5] dfNoAvg.plot(x='ymd') plt.ylabel('NO2 Conc. ppm') plt.xlabel('Daily Average') plt.show() %%local # There are some gaps in 2017 and 2018 we need to fill dfNoAvg = dfNoAvg.set_index('ymd') dfNoAvg.loc['2018-10-11':'2018-11-21']['no2_avg'].head(50) %%local # Fill in the date index first idx = pd.date_range(dfNoAvg.index.min(), dfNoAvg.index.max()) dfNoAvg = dfNoAvg.reindex(idx, fill_value=None) dfNoAvg.loc['2018-10-11':'2018-10-25']['no2_avg'].head(10) %%local # Then interpolate the values that are missing dfNoAvg = dfNoAvg.interpolate(method='time') dfNoAvg.loc['2018-10-11':'2018-10-20']['no2_avg'].head(10) %%local year_min, year_max = [f"{dfNoAvg.index.min().year}", f"{dfNoAvg.index.max().year}"] year_min, year_max %%send_to_spark -i year_min %%send_to_spark -i year_max ``` ## Part 3: Data Prep in Amazon EMR with the second dataset Now that our first dataset looks good, we used the `%%send_to_spark` magic above to send the start and stop years we want to read data for back to the Spark driver on EMR. We can use those variables to limit the data we want to read. 
## And now the weather ``` from pyspark.sql.types import DoubleType from pyspark.sql import functions as F # Scope to Seattle, WA, USA longLeft, latBottom, longRight, latTop = [-122.459696,47.481002,-122.224433,47.734136] dfSchema = spark.read.csv("s3://noaa-gsod-pds/2022/32509099999.csv", header=True, inferSchema=True) # We read our first year, then union the rest of the years :) def read_year(year): return spark.read.csv(f"s3://noaa-gsod-pds/{year}/", header=True, schema=dfSchema.schema) year_range = range(int(year_min), int(year_max)+1) df = read_year(year_range[0]) for year in year_range[1:]: df = df.union(read_year(year)) df = df \ .withColumn('LATITUDE', df.LATITUDE.cast(DoubleType())) \ .withColumn('LONGITUDE', df.LONGITUDE.cast(DoubleType())) seadf = df \ .filter(df.LATITUDE >= latBottom) \ .filter(df.LATITUDE <= latTop) \ .filter(df.LONGITUDE >= longLeft) \ .filter(df.LONGITUDE <= longRight) # Rename columns so they're easier to read seafeatures = seadf.selectExpr("Date as date", "MAX as temp_max", "MIN as temp_min", "WDSP as wind_avg", "SLP as pressure_sea_level", "STP as pressure_station", "VISIB as visibility") # Remove invalid readings no_data_mappings = [ ["temp_max", 9999.9], ["temp_min", 9999.9], ["wind_avg", 999.9], ["pressure_sea_level", 9999.9], ["pressure_station", 9999.9], ["visibility", 999.9], ] for [name, val] in no_data_mappings: seafeatures = seafeatures.withColumn(name, F.when(F.col(name)==val, None).otherwise(F.col(name))) # Now average each reading per day seafeatures = seafeatures.groupBy("date").agg(*[F.mean(c).alias(c) for c in seafeatures.columns[1:]]) %%spark -o seafeatures ``` ## Part 4: Data Analysis in SageMaker Studio We again use the `%%spark -o` magic to send the aggregate back to SageMaker so we can do some exploration. One thing to note is that you can certainly do some of this exploration with Spark as well. It just depends on the use case and the size of your data. 
Because we've aggregated our data down to a few thousand rows, it's relatively easy to manage in the notebook. But if you're unable to do this, you can still use Spark to split your training/test datasets or do other aggregations and write the results out to S3.

```
%%local
seafeatures.plot(x='date', y=['temp_min', 'temp_max'])
plt.ylabel('Max Temp (F)')
plt.xlabel('Daily Average')
plt.show()

%%local
seafeatures.plot(x='date', y=['wind_avg'])
plt.ylabel('Average Wind (mph)')
plt.xlabel('Daily Average')
plt.show()

%%local
seafeatures.plot(x='date', y=['visibility'])
plt.ylabel('Visibility (miles)')
plt.xlabel('Daily Average')
plt.show()
```

## Part 5: Marry the data

Now that we've taken a quick look at our data and done some initial exploration, let's merge the two datasets.

```
%%local
print(dfNoAvg)
seafeaturesi = seafeatures.set_index('date').sort_index()
print(seafeaturesi)

%%local
# We need to make sure the data frames line up, so we'll create new
# dataframes from the min and max of the existing ones.
min_viable_date = max(dfNoAvg.index.min(), seafeaturesi.index.min())
max_viable_date = min(dfNoAvg.index.max(), seafeaturesi.index.max())
print(f"Merging dataframes between {min_viable_date} and {max_viable_date}")
comp_df = pd.merge(
    seafeaturesi[min_viable_date:max_viable_date],
    dfNoAvg[min_viable_date:max_viable_date][['no2_avg']],
    left_index=True,
    right_index=True
)
print(comp_df.sort_index().head(20))

%%local
# Check some data we looked into previously
print(comp_df.loc['2018-10-11':'2018-10-20'].sort_index())
comp_df = comp_df.sort_index()
```

Now that we've merged them, we can do some quick correlation tests to see what the impact is of different weather events on NO2 readings. Please see the [aforementioned blog post](https://aws.amazon.com/blogs/machine-learning/build-a-model-to-predict-the-impact-of-weather-on-urban-air-quality-using-amazon-sagemaker/) for more in-depth explanations of these different charts.
```
%%local
mydata = comp_df[['wind_avg','no2_avg']]
x = mydata['wind_avg']
y = mydata['no2_avg']
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
plt.ylabel('NO2 Conc. ppm')
plt.xlabel('Wind Speed (mph)')
plt.show()

%%local
mydata = comp_df[['visibility','no2_avg']].dropna()
x = mydata['visibility']
y = mydata['no2_avg']
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
plt.ylabel('NO2 Conc. ppm')
plt.xlabel('Visibility (miles)')
plt.show()

%%local
from datetime import timedelta
comp_df = comp_df.sort_index()
comp_df['no2_avg_prev'] = comp_df["no2_avg"].shift(1)
mydata = comp_df[['no2_avg_prev','no2_avg']]
start_date = comp_df.index.min() + timedelta(days=1)
end_date = comp_df.index.max() + timedelta(days=-1)
mydata = mydata[start_date:end_date]
x = mydata['no2_avg_prev']
y = mydata['no2_avg']
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
plt.ylabel('no2_avg')
plt.xlabel('no2_avg_prev')
plt.show()

%%local
cor_cols = ['temp_max', 'temp_min', 'wind_avg','pressure_sea_level','visibility','no2_avg_prev', 'no2_avg']
fig = plt.figure(figsize=(10,10))
im = plt.matshow(comp_df.loc[:, cor_cols].corr(), fignum=0)
fig.colorbar(im)
plt.xticks(range(len(cor_cols)), cor_cols)
plt.yticks(range(len(cor_cols)), cor_cols)
plt.show()

%%local
# Drop the 1st row as it is NaN
aq_df = comp_df.iloc[1:].copy()
# Drop visibility as it didn't seem to correlate much and has NaNs that break the training
aq_df = aq_df.drop(columns='visibility')
# Use the data from years 2016 up to 2020 as training, and the year 2021 as our candidate year for testing and validating our model.
aq_train_df = aq_df[aq_df.index.year < 2021] aq_test_df = aq_df[aq_df.index.year == 2021] x_train = aq_train_df.drop('no2_avg',1) x_test = aq_test_df.drop('no2_avg',1) y_train = aq_train_df[["no2_avg"]] y_test = aq_test_df[["no2_avg"]] print(x_train.shape, y_train.shape) print(x_test.shape, y_test.shape) print(x_train.head()) %%local from math import sqrt from sklearn.metrics import mean_squared_error, r2_score, explained_variance_score # sMAPE is used in KDD Air Quality challenge: https://biendata.com/competition/kdd_2018/evaluation/ def smape(actual, predicted): dividend= np.abs(np.array(actual) - np.array(predicted)) denominator = np.array(actual) + np.array(predicted) return 2 * np.mean(np.divide(dividend, denominator, out=np.zeros_like(dividend), where=denominator!=0, casting='unsafe')) def print_metrics(y_test, y_pred): print("RMSE: %.4f" % sqrt(mean_squared_error(y_test, y_pred))) print('Variance score: %.4f' % r2_score(y_test, y_pred)) print('Explained variance score: %.4f' % explained_variance_score(y_test, y_pred)) forecast_err = np.array(y_test) - np.array(y_pred) print('Forecast bias: %.4f' % (np.sum(forecast_err) * 1.0/len(y_pred) )) print('sMAPE: %.4f' % smape(y_test, y_pred)) %%local import boto3 from sagemaker import get_execution_role, session sess = session.Session() bucket = sess.default_bucket() # This is used to run the LinearLearner training job role = get_execution_role() ``` ## Part 6: Train and Deploy a Machine Learning Model In the section below, we create a new training job using the Linear Learner algorithm. Once that job completes, we deploy an endpoint and run some validation tests against it. 💁 **NOTE**: You only need to create this training job and deploy it once. You can use the same endpoint, even in future runs of this notebook, without re-training or re-deploying. 
💁 ``` %%local from sagemaker import LinearLearner data_location = f's3://{bucket}/aq-linearlearner/data/train' output_location = f's3://{bucket}/aq-linearlearner/output' llearner = LinearLearner(role=role, predictor_type='regressor', normalize_data=True, normalize_label=True, instance_count=1, instance_type='ml.c5.xlarge', output_path=output_location, data_location=data_location) %%local llearner.fit([ llearner.record_set(x_train.values.astype('float32'), y_train.values[:, 0].astype('float32'), channel='train'), llearner.record_set(x_test.values.astype('float32'), y_test.values[:, 0].astype('float32'), channel='test') ]) ``` ### Create our estimator ``` %%local llearner_predictor = llearner.deploy(initial_instance_count=1, instance_type='ml.t2.medium') %%local result = llearner_predictor.predict(x_test.values.astype('float32')) y_sm_pred = [r.label["score"].float32_tensor.values[0] for r in result] y_sm_test = y_test.values[:, 0].astype('float32') print_metrics(y_sm_test, y_sm_pred) %%local y_sm_pred_df = pd.DataFrame(y_sm_pred, columns=y_train.columns).set_index(y_test.index).sort_index() y_sm_test_df = pd.DataFrame(y_sm_test, columns=y_train.columns).set_index(y_test.index).sort_index() plt.plot(y_sm_test_df, label='actual') plt.plot(y_sm_pred_df, label='forecast') plt.legend() plt.show() %%local endpoint_name = llearner_predictor.endpoint_name ``` ### Reuse an existing estimator ``` %%local # The endpoint can take a while to create, so we'll use a previously created one. 
# Can specify if there is an existing endpoint # endpoint_name = "" from sagemaker import LinearLearnerPredictor llearner_predictor = LinearLearnerPredictor(endpoint_name) result = llearner_predictor.predict(x_test.values.astype('float32')) y_sm_pred = [r.label["score"].float32_tensor.values[0] for r in result] y_sm_test = y_test.values[:, 0].astype('float32') print_metrics(y_sm_test, y_sm_pred) %%local y_sm_pred_df = pd.DataFrame(y_sm_pred, columns=y_train.columns).set_index(y_test.index).sort_index() y_sm_test_df = pd.DataFrame(y_sm_test, columns=y_train.columns).set_index(y_test.index).sort_index() plt.plot(y_sm_test_df, label='actual') plt.plot(y_sm_pred_df, label='forecast') plt.legend() plt.show() ``` ## Clean Up ``` %%cleanup -f %%local llearner_predictor.delete_endpoint() ```
Sascha Spors, Professorship Signal Theory and Digital Signal Processing, Institute of Communications Engineering (INT), Faculty of Computer Science and Electrical Engineering (IEF), University of Rostock, Germany

# Data Driven Audio Signal Processing - A Tutorial with Computational Examples

Winter Semester 2021/22 (Master Course #24512)

- lecture: https://github.com/spatialaudio/data-driven-audio-signal-processing-lecture
- tutorial: https://github.com/spatialaudio/data-driven-audio-signal-processing-exercise

Feel free to contact lecturer frank.schultz@uni-rostock.de

# Exercise 7: Least Squares Solution / Left Inverse in SVD and QR Domain

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import svd, diagsvd, qr, inv, pinv, norm
from numpy.linalg import matrix_rank

np.set_printoptions(precision=2, floatmode='fixed', suppress=True)
matplotlib_widget_flag = False
```

For the given full-column-rank matrix $\mathbf{A}$ (tall/thin shape with independent columns) and outcome vector $\mathbf{b}$, the linear set of equations $\mathbf{A} \mathbf{x} = \mathbf{b}$ is to be solved for the unknowns $\mathbf{x}$. We obviously cannot invert $\mathbf{A}$, so we must find a best possible row space estimate $\hat{\mathbf{x}}$.

### Least Squares Solution

Great material, strong recommendation:

- Gilbert Strang (2020): "Linear Algebra for Everyone", Wellesley-Cambridge Press, Ch. 4.3
- Gilbert Strang (2019): "Linear Algebra and Learning from Data", Wellesley-Cambridge Press, Ch. II.2

We know for sure that a pure row space $\hat{\mathbf{x}}$ maps to a pure column space $\hat{\mathbf{b}}$

$\mathbf{A} \hat{\mathbf{x}} = \hat{\mathbf{b}}$

The given outcome vector $\mathbf{b}$ might not necessarily (in practical problems probably never!)
live in pure column space of $\mathbf{A}$, we therefore need an offset (error) vector to get there $\mathbf{A} \hat{\mathbf{x}} + \mathbf{e} = \mathbf{b}$ We want to find the one $\hat{\mathbf{x}}$ that yields the smallest $||\mathbf{e}||_2^2$ or equivalently $||\mathbf{e}||_2$. This is basically our optimization criterion, known as least squares. So, thinking in vector addition $\mathbf{b} = \hat{\mathbf{b}} + \mathbf{e} \rightarrow \mathbf{e} = \mathbf{b} - \hat{\mathbf{b}}$ we can geometrically figure (imagine this is in 2D / 3D) that smallest $||\mathbf{e}||_2^2$ is achieved when we span a **right-angled triangle** using $\mathbf{b}$ as hypotenuse. Therefore, $\hat{\mathbf{b}} \perp \mathbf{e}$. This means that $\mathbf{e}$ must live in left null space of $\mathbf{A}$, which is perpendicular to the column space where $\hat{\mathbf{b}}$ lives. Left null space requirement formally written as $\mathbf{A}^\mathrm{H} \mathbf{e} = \mathbf{0}$ can be utilized as $\mathbf{A} \hat{\mathbf{x}} + \mathbf{e} = \mathbf{b} \rightarrow \mathbf{A}^\mathrm{H}\mathbf{A} \hat{\mathbf{x}} + \mathbf{A}^\mathrm{H}\mathbf{e} = \mathbf{A}^\mathrm{H}\mathbf{b} \rightarrow \mathbf{A}^\mathrm{H}\mathbf{A} \hat{\mathbf{x}} = \mathbf{A}^\mathrm{H}\mathbf{b}$ The last equation in the line is known as normal equation. 
This can be solved using the left inverse of $\mathbf{A}^\mathrm{H} \mathbf{A}$ (this matrix is full rank and therefore invertible) $(\mathbf{A}^\mathrm{H} \mathbf{A})^{-1} (\mathbf{A}^\mathrm{H} \mathbf{A}) \hat{\mathbf{x}} = (\mathbf{A}^\mathrm{H} \mathbf{A})^{-1} \mathbf{A}^\mathrm{H} \mathbf{b}$ Since for left inverse $(\mathbf{A}^\mathrm{H} \mathbf{A})^{-1} (\mathbf{A}^\mathrm{H} \mathbf{A}) = \mathbf{I}$ holds, we get the least-squares sense solution for $\mathbf{x}$ in the row space of $\mathbf{A}$ $\hat{\mathbf{x}} = (\mathbf{A}^\mathrm{H} \mathbf{A})^{-1} \mathbf{A}^\mathrm{H} \mathbf{b}$ We find the **left inverse** of $\mathbf{A}$ as $\mathbf{A}^{+L} = (\mathbf{A}^\mathrm{H} \mathbf{A})^{-1} \mathbf{A}^\mathrm{H}$ ### Least Squares Solution in SVD Domain great material, strong recommendation: - Gilbert Strang (2019): "Linear Algebra and Learning from Data", Wellesley-Cambridge Press, p.125ff The left inverse $\mathbf{A}^{+L} = (\mathbf{A}^\mathrm{H} \mathbf{A})^{-1} \mathbf{A}^\mathrm{H}$ in terms of SVD $\mathbf{A}^{+L} = ((\mathbf{U}\mathbf{S}\mathbf{V}^\mathrm{H})^\mathrm{H} \mathbf{U}\mathbf{S}\mathbf{V}^\mathrm{H})^{-1} (\mathbf{U}\mathbf{S}\mathbf{V}^\mathrm{H})^\mathrm{H}$ $\mathbf{A}^{+L} = (\mathbf{V}\mathbf{S}^\mathrm{H}\mathbf{S}\mathbf{V}^\mathrm{H})^{-1} (\mathbf{V}\mathbf{S}^\mathrm{H}\mathbf{U}^\mathrm{H})$ $\mathbf{A}^{+L} = \mathbf{V} (\mathbf{S}^\mathrm{H}\mathbf{S})^{-1} \mathbf{V}^\mathrm{H} \mathbf{V}\mathbf{S}^\mathrm{H}\mathbf{U}^\mathrm{H}$ $\mathbf{A}^{+L} = \mathbf{V} (\mathbf{S}^\mathrm{H}\mathbf{S})^{-1} \mathbf{S}^\mathrm{H}\mathbf{U}^\mathrm{H}$ $\mathbf{A}^{+L} = \mathbf{V} \mathbf{S}^\mathrm{+L} \mathbf{U}^\mathrm{H}$ allows for a convenient discussion, how singular values act when mapping column space back to row space. 
Considering only one $\sigma_i$ and corresponding left/right singular vectors, the left inverse $\mathbf{S}^\mathrm{+L} = (\mathbf{S}^\mathrm{H}\mathbf{S})^{-1} \mathbf{S}^\mathrm{H}$ reduces to $\frac{\sigma_i}{\sigma_i^2} = \frac{1}{\sigma_i}$ For very, very small $\sigma_i$, the inversion thus leads to huge values, which might be not meaningful as this(these) weighted $\mathbf{v}_i$ vector(s) then dominate(s) the row space solution. Small changes in $\sigma_i$ then lead to comparably large changes in the row space solution $\hat{\mathbf{x}}$. So-called ridge regression (aka Tikhonov regularization) is a straightforward workaround for ill-conditioned matrices. See stuff below. ### Least Squares Solution in QR Domain great material, strong recommendation: - Gilbert Strang (2020): "Linear Algebra for Everyone", Wellesley-Cambridge Press, p.170ff - Gilbert Strang (2019): "Linear Algebra and Learning from Data", Wellesley-Cambridge Press, p.128ff The normal equation $\mathbf{A}^\mathrm{H}\mathbf{A} \hat{\mathbf{x}} = \mathbf{A}^\mathrm{H}\mathbf{b}$ can be conveniently given as QR decomposition (recall $\mathbf{Q}^\mathrm{H} \mathbf{Q}=\mathbf{I}$ due to Gram-Schmidt orthonormalization) $(\mathbf{Q R})^\mathrm{H}\mathbf{Q R} \hat{\mathbf{x}} = (\mathbf{Q R})^\mathrm{H}\mathbf{b}$ $\mathbf{R}^\mathrm{H} \mathbf{Q}^\mathrm{H} \mathbf{Q R} \hat{\mathbf{x}} = (\mathbf{Q R})^\mathrm{H}\mathbf{b}$ $\mathbf{R}^\mathrm{H} \mathbf{R} \hat{\mathbf{x}} = \mathbf{R}^\mathrm{H} \mathbf{Q}^\mathrm{H} \mathbf{b}$ $\mathbf{R} \hat{\mathbf{x}} = \mathbf{Q}^\mathrm{H} \mathbf{b}$ Numerical approaches try to solve for $\hat{\mathbf{x}}$ using the last line by back substitution, given that we found a numerically robust solution for upper triangle $\mathbf{R}$ and orthonormal $\mathbf{Q}$. 
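A minimal standalone sketch of that back-substitution route with SciPy (variable names here are ad hoc, not from the notebook's own cells):

```
import numpy as np
from scipy.linalg import qr, solve_triangular, lstsq

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))  # tall matrix with full column rank
b = rng.normal(size=100)

# economy-size QR, then back substitution on R xh = Q^H b
Q, R = qr(A, mode='economic')
xh = solve_triangular(R, Q.conj().T @ b)

# agrees with the generic least squares solver
assert np.allclose(xh, lstsq(A, b)[0])
```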
We should not expect that algorithms solve $\hat{\mathbf{x}} = \mathbf{R}^{+L} \mathbf{Q}^\mathrm{H} \mathbf{b}$ explicitly with the left inverse $\mathbf{R}^{+L}$ of the upper triangular $\mathbf{R}$, and we should not do this ourselves for non-toy examples either.

```
rng = np.random.default_rng(1)

mean, stdev = 0, 0.01
M = 100
N = 3
A = rng.normal(mean, stdev, [M, N])
print('rank =', matrix_rank(A), '== number of cols =', N)

if matplotlib_widget_flag:
    %matplotlib widget

[Q, R] = qr(A)
[U, s, Vh] = svd(A)
print('sing vals', s)
V = Vh.conj().T

# scipy function
Ali_pinv = pinv(A)
# manual normal equation solver
Ali_man = inv(A.conj().T @ A) @ A.conj().T
# SVD
Si = diagsvd(1/s, N, M)  # works if array s has only non-zero entries
Ali_svd = V @ Si @ U.conj().T
# QR
Ali_qr = pinv(R) @ Q.conj().T

print('pinv == inverse via normal eq?', np.allclose(Ali_pinv, Ali_man))
print('pinv == inverse via SVD?', np.allclose(Ali_pinv, Ali_svd))
print('pinv == inverse via QR?', np.allclose(Ali_pinv, Ali_qr))

# create b from one column space entry and one left null space entry
# note that we use unit length vectors for convenience: ||e||_2^2 = 1
bh = U[:, 0]  # choose one vector of the column space
e = U[:, N]  # assuming rank N -> choose one vector of the left null space
b = bh + e

# find xh in row space
# only bh gets mapped to the row space (this is our LS solution xh), e is mapped to the zero vector
xh = Ali_pinv @ b
print(xh)
print('norm(A @ xh - bh, 2) == 0 -> ', norm(A @ xh - bh, 2))  # == 0
print('||e||_2^2:')
print(norm(A @ xh - b, 2)**2)
print(norm(e, 2)**2)
```

### Ridge Regression / Regularization in SVD Domain

For ridge regression, aka Tikhonov regularization, aka regression with a penalty on $||\mathbf{x}||_2^2$, the ridge left inverse

$\mathbf{A}^{+\mathrm{L,Ridge}} = \mathbf{V} \left((\mathbf{S}^\mathrm{H}\mathbf{S} + \beta^2 \mathbf{I})^{-1} \mathbf{S}^\mathrm{H}\right) \mathbf{U}^\mathrm{H}$

yields the solution $\hat{\mathbf{x}}^\mathrm{Ridge} = \mathbf{A}^{+\mathrm{L,Ridge}} \mathbf{b}$ for the minimization problem
$\mathrm{min}_\mathbf{x} \left(||\mathbf{A} \mathbf{x} - \mathbf{b}||_2^2 + \beta^2 ||\mathbf{x}||_2^2 \right)$.

In the limit $\beta^2=0$ this is identical to the standard least squares solution above. For a single singular value the ridge penalty becomes

$\frac{\sigma_i}{\sigma_i^2 + \beta^2}$,

which can be discussed conveniently with the plot below.

```
beta = 1/10
lmb = beta**2
singval = np.logspace(-4, 4, 2**5)
# ridge regression
inv_singval = singval / (singval**2 + beta**2)

plt.plot(singval, 1 / singval, label='no penalty')
plt.plot(singval, inv_singval, label='penalty')
plt.xscale('log')
plt.yscale('log')
plt.xticks(10.**np.arange(-4, 5))
plt.yticks(10.**np.arange(-4, 5))
plt.xlabel(r'$\sigma_i$')
plt.ylabel(r'$\sigma_i \,\,\,/\,\,\, (\sigma_i^2 + \beta^2)$')
plt.title(r'ridge penalty, $\beta =$' + str(beta))
plt.legend()
plt.grid()
plt.axis('equal')
plt.show()
print('beta =', beta, 'beta^2 = lambda =', lmb)

rng = np.random.default_rng(1)
mean, stdev = 0, 10
M, N = 3, 3
A_tmp = rng.normal(mean, stdev, [M, N])
[U_tmp, s_tmp, Vh_tmp] = svd(A_tmp)
V_tmp = Vh_tmp.conj().T
s_tmp = [10, 8, 0.5]  # create sing vals
S_tmp = diagsvd(s_tmp, M, N)
# create full rank square matrix to work with (no nullspaces except 0-vectors!)
A = U_tmp @ S_tmp @ Vh_tmp [U, s, Vh] = svd(A) print('A\n', A) print('rank of A: ', matrix_rank(A)) print('sigma', s) S = diagsvd(s, M, N) V = Vh.conj().T # b as column space linear combination b = 1*U[:,0] + 1*U[:,1] + 1*U[:,2] xh = inv(A) @ b print('xh =', xh, '\nA xh =', A @ xh, '\nb =', b) print('inverted sigma no penalty: ', 1 / s) # == (because in b all U weighted with unity gain) print('||xh||_2^2 =', norm(xh,2)) print('norm of vec: inverted sigma no penalty: ', norm(1 / s,2)) lmb = 2 Sli_ridge = inv(S.conj().T @ S + lmb*np.eye(3)) @ S.conj().T Ali_ridge = V @ Sli_ridge @ U.conj().T xh_ridge = Ali_ridge @ b print('xh_ridge =', xh_ridge, '\nA xh_ridge =', A @ xh_ridge, '\nb = ', b) print('inverted sigma with penalty: ', s / (s**2 + lmb)) # == (because in b all U weighted with unity gain) print('||xh_ridge||_2^2 =', norm(xh_ridge,2)) print('norm of vec: inverted sigma with penalty: ', norm(s / (s**2 + lmb),2)) fig1 = plt.figure(figsize=(5,5)) ax = plt.axes(projection='3d') w = Vh @ xh wr = Vh @ xh_ridge for n in range(3): ax.plot([0, w[n]*V[0,n]], [0, w[n]*V[1,n]], [0, w[n]*V[2,n]], color='C'+str(n), lw=1, ls=':', label=r'$\hat{x}$@$v_i$, no penalty') ax.plot([0, wr[n]*V[0,n]], [0, wr[n]*V[1,n]], [0, wr[n]*V[2,n]], color='C'+str(n), lw=3, ls='-', label=r'$\hat{x}$@$v_i$, penalty') ax.plot([0, xh[0]], [0, xh[1]], [0, xh[2]], 'black', label=r'$\hat{x}$, no penalty') ax.plot([0, xh_ridge[0]], [0, xh_ridge[1]], [0, xh_ridge[2]], 'C7', label='$\hat{x}$, penalty') ax.set_xlabel(r'$x$') ax.set_ylabel(r'$y$') ax.set_zlabel(r'$z$') lim = 1 ax.set_xlim(-lim, lim) ax.set_ylim(-lim, lim) ax.set_zlim(-lim, lim) ax.set_title('V / row space') plt.legend() fig2 = plt.figure(figsize=(5,5)) ax = plt.axes(projection='3d') w = Vh @ xh wr = Vh @ xh_ridge for n in range(3): ax.plot([0, U[0,n]], [0, U[1,n]], [0, U[2,n]], color='C'+str(n), lw=2, ls='-', label=r'$u_i$') ax.plot([0, s[n]*U[0,n]], [0, s[n]*U[1,n]], [0, s[n]*U[2,n]], color='C'+str(n), lw=1, ls=':', label=r'$\sigma_i 
\cdot u_i$')
ax.plot([0, b[0]], [0, b[1]], [0, b[2]], 'black', lw=1, label=r'$b$')
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_zlabel(r'$z$')
lim = 5
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
ax.set_zlim(-lim, lim)
ax.set_title('U / column space')
plt.legend();

if matplotlib_widget_flag:
    plt.close(fig1)
    plt.close(fig2)
```

## Copyright

- the notebooks are provided as [Open Educational Resources](https://en.wikipedia.org/wiki/Open_educational_resources)
- feel free to use the notebooks for your own purposes
- the text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/)
- the code of the IPython examples is licensed under the [MIT license](https://opensource.org/licenses/MIT)
- please attribute the work as follows: *Frank Schultz, Data Driven Audio Signal Processing - A Tutorial Featuring Computational Examples, University of Rostock* ideally with relevant file(s), github URL https://github.com/spatialaudio/data-driven-audio-signal-processing-exercise, commit number and/or version tag, year.
# Demonstration of the processing steps for converting the raw data

* RADAR type is FMCW
* 60000 chirps per file, corresponding to 60 seconds of record time
* 1 ms per chirp
* Bandwidth: 400 MHz

Processing code ported from a MATLAB version created by Aleksander Angelov, who modified the original Test SDR-KIT for extracting the micro-Doppler signature by WWW.ANCORTEK.COM (Micro-Doppler Signature of Human Movement).

```
# Plot graphs inline
%matplotlib inline

import os
cwd = os.getcwd()
if cwd == '/content':
    from google.colab import drive
    drive.mount('/content/gdrive')
    BASE_PATH = '/content/gdrive/My Drive/Level-4-Project/'
    os.chdir('gdrive/My Drive/Level-4-Project/')
elif cwd == 'D:\\Google Drive\\Level-4-Project\\notebooks':
    BASE_PATH = "D:/Google Drive/Level-4-Project/"
elif cwd == "/export/home/2192793m/Level-4-Project/":
    BASE_PATH = "/export/home/2192793m/Level-4-Project/"

INTERIM_PATH = BASE_PATH + 'data/interim/'
RESULTS_PATH = BASE_PATH + "results/data_processing_demonstration/"
if not os.path.exists(RESULTS_PATH):
    os.makedirs(RESULTS_PATH)

import pandas as pd
import numpy as np
import time
import matplotlib.pyplot as plt
from matplotlib import mlab
from matplotlib import colors
from scipy.signal import butter, lfilter

SAVE_GRAPHS = False  # change to True to overwrite
```

Load in the data and extract the radar specifications from the top of the file.
```
# Load in file to pandas dataframe
# Dataset 97 is sitting and standing (0 degrees aspect angle)
df = pd.read_csv(INTERIM_PATH + "Dataset_97.dat", header=None)[1]

# Grab RADAR settings from top of file
center_frequency = float(df.iloc[1])  # 5800000000 Hz (5.8 GHz)
sweep_time = float(df.iloc[2])/1000  # convert to seconds (0.001 seconds)
number_of_time_samples = float(df.iloc[3])  # 128
bandwidth = float(df.iloc[4])  # 400000000 Hz (400 MHz)
sampling_frequency = number_of_time_samples/sweep_time

'''
record length = 60s
= 60000 chirps with sweep time of 1ms
= (7680000 measurements / 128 time samples) with sweep time of 1ms
'''
record_length = (len(df.iloc[5:])/number_of_time_samples) * sweep_time
number_of_chirps = record_length/sweep_time  # 60000

# Put data values into an np array and convert to complex (originally str)
data = df.iloc[5:].apply(complex).values
```

Convert the raw measurements into a matrix with a column per chirp.

```
# Reshape into chirps over time
data_time = np.reshape(data, (int(number_of_chirps), int(number_of_time_samples)))
data_time = np.rot90(data_time)  # flip upside down to get range cell axis to be correct

plt.imshow(np.flipud(abs(data_time)), cmap='jet', aspect="auto")
plt.title("Raw RADAR Data (60s)")
plt.xlabel("No. of pulses")
plt.ylabel("Range Cells")
if SAVE_GRAPHS:
    plt.savefig(RESULTS_PATH + "raw_reshaped_60s.pdf", format='pdf')
plt.show()
```

Apply an FFT along each chirp to compute the range profiles.

```
win = np.ones(data_time.shape)
# Apply fast fourier transform which should compute distance (range) from objects
fft_applied = np.fft.fftshift(np.fft.fft((data_time * win), axis=0), 0)
# take half as the other half looks to contain only noise
data_range = fft_applied[1:int(number_of_time_samples/2), :]

plt.imshow(20*np.log10(np.flipud(abs(data_range))), cmap='jet', aspect="auto")
plt.title("Range Profiles")
plt.xlabel("No. of Sweeps")
plt.ylabel("Range")
if SAVE_GRAPHS:
    plt.savefig(RESULTS_PATH + "range_profiles.pdf", format='pdf')
plt.show()
```

Apply an MTI filter to the range profiles. The Moving Target Indicator (MTI) filter:

* suppresses echoes from clutter
* assumes clutter is stationary or close to stationary
* is a high-pass filter that filters out the low Doppler frequencies

(information taken from http://www.diva-portal.se/smash/get/diva2:1143293/FULLTEXT01.pdf, section 5.1)

```
# IIR high-pass (clutter suppression) filter
x = data_range.shape[1]
# set ns to the nearest even number <= x
if x % 2 == 0:
    ns = x
else:
    ns = x - 1

data_range_MTI = np.zeros((data_range.shape[0], ns), dtype=np.complex128)
(b, a) = butter(4, 0.01, btype="high")
# Apply Filter
for i in range(data_range.shape[0]):
    data_range_MTI[i, :ns] = lfilter(b, a, data_range[i, :ns], axis=0)

plt.imshow(20*(np.log10(np.flipud(abs(data_range_MTI)))), cmap='jet', aspect="auto")
plt.title("Range Profiles after MTI Filter")
plt.xlabel("No. of Sweeps")
plt.ylabel("Range cells")
if SAVE_GRAPHS:
    plt.savefig(RESULTS_PATH + "range_profiles_after_MTI_filter.pdf", format='pdf')
plt.show()
```

Now compute the micro-Doppler signature from the MTI filtered range profile data.
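As a quick sanity check of the MTI filter design used above (4th-order Butterworth high-pass, normalized cutoff 0.01), its frequency response confirms that near-zero Doppler frequencies — i.e. stationary clutter — are strongly attenuated while higher Doppler frequencies pass essentially unchanged. A small standalone sketch:

```python
from scipy.signal import butter, freqz

# same design as the MTI filter above: 4th-order Butterworth high-pass,
# normalized cutoff 0.01 (i.e. 0.01 * Nyquist)
b, a = butter(4, 0.01, btype="high")
w, h = freqz(b, a, worN=2048)

# magnitude response near DC (zero Doppler / stationary clutter): close to 0
print(abs(h[0]))
# magnitude response near Nyquist: close to 1
print(abs(h[-1]))
```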
Spectrogram processing: a second FFT to get the Doppler information.

```
# Select range bins
bin_indl = 5
bin_indu = 25

time_window_length = 200
overlap_factor = 0.95
overlap_length = np.round(time_window_length * overlap_factor)
pad_factor = 4
fft_points = pad_factor * time_window_length

prf = 1/sweep_time
doppler_bin = prf / fft_points
doppler_axis = np.arange(-prf / 2, prf / 2 - doppler_bin + 1, doppler_bin)
whole_duration = data_range_MTI.shape[1] / prf
num_segments = np.floor((data_range_MTI.shape[1] - time_window_length) / (np.floor(time_window_length * (1 - overlap_factor))))

data_spec_MTI = 0
for rbin in range(bin_indl-1, bin_indu):
    s, f, t = mlab.specgram(data_range_MTI[rbin, :],
                            Fs=1,
                            window=np.hamming(time_window_length),
                            noverlap=overlap_length,
                            NFFT=time_window_length,
                            mode='complex',
                            pad_to=fft_points
                            )
    data_spec_MTI = data_spec_MTI + abs(s)

time_axis = np.linspace(0, whole_duration, data_spec_MTI.shape[1])
data_spec_MTI.shape

plt.pcolormesh(time_axis, doppler_axis, 20 * np.log10(np.flipud(np.abs(data_spec_MTI))), cmap='jet', rasterized=True)
plt.ylim([-150, 150])
plt.title("Micro-Doppler Signature")
plt.xlabel("Time [s]")
plt.ylabel("Doppler [Hz]")
if SAVE_GRAPHS:
    plt.savefig(RESULTS_PATH + "micro-doppler_signature.pdf", format='pdf')
plt.show()
```

### Removing noise by thresholding (comparing clipping values)

One method to remove the noise would be to record the same room without a subject in front of the radar and then compare histograms; however, as such a recording was not available, the threshold was chosen manually. Below, different threshold levels are compared.
``` for threshold in [15, 20, 25, 30, 35, 40]: minimum_value = threshold norm = colors.Normalize(vmin=minimum_value, vmax=None, clip=True) plt.pcolormesh(time_axis, doppler_axis, 20 * np.log10(np.flipud(np.abs(data_spec_MTI))), cmap='jet', rasterized=True, norm=norm) plt.ylim([-150, 150]) plt.title("Threshold Value: " + str(threshold)) plt.xlabel("Time[s]") plt.ylabel("Doppler [Hz]") if SAVE_GRAPHS: plt.savefig(RESULTS_PATH + "thresholding_comparison_"+str(threshold)+ ".pdf", format='pdf') plt.show() image_width = 150 image_height = 150 minimum_value = 35 norm = colors.Normalize(vmin=minimum_value, vmax=None, clip=True) save_spectrograms = True folder = "C:/Users/macka/Desktop/test/" # save to test folder as only a demonstration # This process is very slow unless matplotlib.use('Agg') has been set # Taking 3 second slices window_size = 300 # 3 seconds iterations = data_spec_MTI.shape[1] - window_size step_size = 10 # 0.1 seconds for i in range(5500, iterations, step_size): start_time = time.time() center = int(data_spec_MTI.shape[0]/2) data_spec_small = data_spec_MTI[(center-150):(center+150), i:(i + window_size)] if save_spectrograms: # used for saving 150px x 150px image of the spectrogram w = image_width h = image_height fig = plt.figure(frameon=False) fig.set_size_inches(w, h) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(20 * np.log10(abs(data_spec_small)), cmap='jet', norm=norm, aspect="auto") fig.savefig(folder + str(i) + ".png", dpi=1) time_for_row = (time.time() - start_time)/60 print("\n--- %s minutes ---" % (time_for_row)) ```
```
import requests
from pprint import pprint
from IPython.core.display import display, HTML
from markdownify import markdownify as md
import json
import re
import urllib.request
from unidecode import unidecode
from datetime import datetime

IMPO_ENDPOINT = 'https://www.impo.com.uy/bases/'
LUC = 'leyes/19889-2020/'

ARTICULOS = [1, 4, 5, 10, 11, 12, 13, 14, 18, 21, 22, 23, 24, 35, 43, 44, 45, 49, 50,
             51, 52, 56, 63, 64, 65, 74, 75, 76, 77, 78, 79, 80, 86, 118, 125, 126,
             127, 128, 129, 130, 134, 135, 136, 140, 142, 143, 144, 145, 146, 148,
             151, 152, 155, 156, 158, 159, 160, 161, 163, 167, 169, 171, 172, 183,
             184, 185, 186, 193, 198, 206, 207, 208, 209, 210, 211, 212, 215, 219,
             220, 221, 224, 225, 235, 236, 237, 285, 357, 358, 392, 399, 403, 404,
             426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439,
             440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453,
             454, 455, 456, 457, 458, 459, 468, 469, 470, 471, 472, 473, 474, 475, 476]
#ARTICULOS = [426]

def getJsonFromUrl(url):
    params = dict(
        json='true',
    )
    resp = requests.get(url=url, params=params)
    data = None
    try:
        data = json.loads(resp.text, strict=False)
    except Exception as e:
        print(resp.text)
        data = json.loads(unidecode(resp.text), strict=False)
    return data

def getAnteriorOriginal(notasArticulo):
    textoOriginal = None
    if (notasArticulo):
        textosOriginales = md(notasArticulo).split('**TEXTO ORIGINAL:**')
        if len(textosOriginales) > 1:
            # Extract the link and reference to the original text from the notes
            textosOriginales = textosOriginales[1].split(',')
            linkUltimoTextoOriginal = 'https://www.impo.com.uy' + textosOriginales[0].split('](')[1].split(')')[0]
            # Download the original text and keep only the text body
            data_articulo_original = getJsonFromUrl(linkUltimoTextoOriginal)
            textoOriginal = md(data_articulo_original['textoArticulo'], strip=['a', 'b'])
    return textoOriginal

def buscarRedaccionModificada(tipo, destino):
    textoOriginalMarkdown = None
    textoModificadoMarkdown = None
    if ('nueva redaccion' in unidecode(tipo) or 'agrego a' in unidecode(tipo)):
        data_nueva_redaccion = getJsonFromUrl(destino)
        textoModificadoMarkdown = md(data_nueva_redaccion['textoArticulo'], strip=['a', 'b'])
        textoOriginalMarkdown = getAnteriorOriginal(md(data_nueva_redaccion['notasArticulo']))
    else:
        print(tipo, destino)
    return textoOriginalMarkdown, textoModificadoMarkdown

tipos = []
for articulo in ARTICULOS:
    rich_data = {}
    data = getJsonFromUrl(IMPO_ENDPOINT + LUC + str(articulo))

    now = datetime.now()  # current date and time
    date_time = now.strftime("%d/%m/%Y, %H:%M:%S")
    rich_data['fechaDescarga'] = date_time
    rich_data['json_original'] = data

    # Extract data
    rich_data['numeroArticulo'] = str(articulo)
    rich_data['seccionArticulo'] = re.search(r'SECCIÓN (.*?) ', data['tituloArticulo']).group(1)
    try:
        rich_data['capituloArticulo'] = re.search(r'CAPÍTULO (.*?) ', data['tituloArticulo']).group(1)
    except Exception as ignore:
        rich_data['capituloArticulo'] = None
    rich_data['textoArticulo'] = md(data['textoArticulo'], strip=['a', 'b'])

    if (data.get('notasArticulo', None)):
        # Convert to Markdown
        notas_articulo_markdown = md(data.get('notasArticulo', ''))
        # Complete the links
        notas_articulo_markdown = notas_articulo_markdown.replace('/bases/', 'https://www.impo.com.uy/bases/')
        # Split into modification type and target
        tipo_modificacion = re.search(r'\*\*(.*?)\*\*', notas_articulo_markdown).group(1)
        destino_modificacion = re.search(r'\((.*?)\)', notas_articulo_markdown).group(1)
        textoOriginal, textoModificado = buscarRedaccionModificada(tipo_modificacion, destino_modificacion)
        rich_data['notasArticulo'] = md(data.get('notasArticulo', ''), strip=['a', 'b'])
        rich_data['textoOriginal'] = textoOriginal
        rich_data['textoModificado'] = textoModificado

    if not rich_data.get('textoOriginal', None):
        rich_data['textoOriginal'] = rich_data['textoArticulo']

    print(rich_data['seccionArticulo'], rich_data['capituloArticulo'], rich_data['numeroArticulo'])

    json_file = open('LUC_articulo_' + rich_data['numeroArticulo'] + '.json', "w")
    json_file.write(json.dumps(rich_data, indent=4))
    json_file.close()
```
## Setup

```
!pip install -q git+https://github.com/rwightman/pytorch-image-models

from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
import timm
import torch

import tensorflow as tf
import tensorflow_datasets as tfds

import pickle
import numpy as np
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm

BATCH_SIZE = 64  # Reduce if ResourceExhaustedError happens
AUTO = tf.data.AUTOTUNE
```

## Data loader

Don't execute.

```
normalization_layer = tf.keras.layers.Normalization(mean=np.array(IMAGENET_DEFAULT_MEAN),
                                                    variance=np.array(IMAGENET_DEFAULT_STD) ** 2)

def preprocess_image(image, label):
    image = tf.cast(image, tf.float32) / 255.
    image = normalization_layer(image)
    return image, label

IMG_SIZE = 256

imagenet_a = tfds.load("imagenet_a", split="test", as_supervised=True)
imagenet_a = (
    imagenet_a
    .map(lambda x, y: (tf.image.resize(x, (IMG_SIZE, IMG_SIZE)), y))
    .batch(BATCH_SIZE)
    .map(preprocess_image, num_parallel_calls=True)
    .prefetch(AUTO)
)

image_batch, label_batch = next(iter(imagenet_a))
print(image_batch.shape, label_batch.shape)
tf.reduce_max(image_batch), tf.reduce_min(image_batch)
```

## BotNet Assessment

Weights not yet available :(

```
# Load BoTNet models
all_botnet_models = timm.list_models("botnet*")
all_botnet_models

!gdown --id 1-jvhJaMyy-KziAuFnmt5rkoZrm5364UF
!tar xf botnet50.pth.tar

top_1_accs = {}
top_5_accs = {}

top_1 = tf.keras.metrics.SparseCategoricalAccuracy()
top_5 = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5)

for botnet_model_name in tqdm(all_botnet_models):
    print(f"Evaluating {botnet_model_name}")
    botnet_model = timm.create_model(botnet_model_name, pretrained=True)
    botnet_model.eval()
    botnet_model = botnet_model.to("cuda")

    all_top_1 = []
    all_top_5 = []

    for image_batch, label_batch in imagenet_a.as_numpy_iterator():
        with torch.no_grad():
            # move the image batch to the GPU in NCHW layout
            image_batch = torch.Tensor(image_batch).to("cuda")
            image_batch = image_batch.permute(0, 3, 1, 2)
            logits = botnet_model(image_batch)
        batch_accuracy_top_1 = top_1(label_batch, logits.cpu().numpy())
batch_accuracy_top_5 = top_5(label_batch, logits.cpu().numpy()) all_top_1.append(batch_accuracy_top_1) all_top_5.append(batch_accuracy_top_5) top_1_accs.update({botnet_model_name: np.mean(all_top_1)}) top_5_accs.update({botnet_model_name: np.mean(all_top_5)}) ``` ## Other attention-infused models<sup>*</sup> <sup>*</sup><sup>As mentioned in [`timm`](https://github.com/rwightman/pytorch-image-models)</sup> * Bottleneck Transformer - https://arxiv.org/abs/2101.11605 * CBAM - https://arxiv.org/abs/1807.06521 * Effective Squeeze-Excitation (ESE) - https://arxiv.org/abs/1911.06667 * Efficient Channel Attention (ECA) - https://arxiv.org/abs/1910.03151 * Gather-Excite (GE) - https://arxiv.org/abs/1810.12348 * Global Context (GC) - https://arxiv.org/abs/1904.11492 * Halo - https://arxiv.org/abs/2103.12731 * Involution - https://arxiv.org/abs/2103.06255 * Lambda Layer - https://arxiv.org/abs/2102.08602 * Non-Local (NL) - https://arxiv.org/abs/1711.07971 * Squeeze-and-Excitation (SE) - https://arxiv.org/abs/1709.01507 * Selective Kernel (SK) - https://arxiv.org/abs/1903.06586 * Split (SPLAT) - https://arxiv.org/abs/2004.08955 * Shifted Window (SWIN) - https://arxiv.org/abs/2103.14030 ### Common utilities ``` def get_normalization_layer(imagenet_stats=False, scale=None, offset=None): if imagenet_stats: return tf.keras.layers.Normalization( mean=np.array(IMAGENET_DEFAULT_MEAN), variance=np.array(IMAGENET_DEFAULT_STD) ** 2 ) elif (scale and offset): return tf.keras.layers.Rescaling( scale=scale, offset=offset ) else: return tf.keras.layers.Rescaling(scale=1./255) def preprocess_image(normalization_layer): def f(image, label): if isinstance(normalization_layer, tf.keras.layers.Normalization): image = tf.cast(image, tf.float32) / 255. 
else: image = tf.cast(image, tf.float32) image = normalization_layer(image) return image, label return f def get_dataset(imagenet_stats=False, resize=224, scale=None, offset=None): if imagenet_stats: norm_layer = get_normalization_layer(imagenet_stats) elif (scale and offset): norm_layer = get_normalization_layer(imagenet_stats, scale, offset) else: norm_layer = get_normalization_layer() imagenet_r = tfds.load("imagenet_r", split="test", as_supervised=True) imagenet_r = ( imagenet_r .map(lambda x, y: (tf.image.resize(x, (resize, resize)), y)) .batch(BATCH_SIZE) .map(preprocess_image(norm_layer), num_parallel_calls=True) .prefetch(AUTO) ) return imagenet_r # Verify ds = get_dataset() image_batch, label_batch = next(iter(ds)) print(image_batch.shape, label_batch.shape) print(tf.reduce_max(image_batch), tf.reduce_min(image_batch)) def eval_single_model(dataset, model_name): top_1 = tf.keras.metrics.SparseCategoricalAccuracy() top_5 = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5) model = timm.create_model(model_name, pretrained=True) model.eval() model = model.to("cuda") all_top_1 = [] all_top_5 = [] for image_batch, label_batch in dataset.as_numpy_iterator(): with torch.no_grad(): image_batch = torch.Tensor(image_batch).to("cuda") image_batch = image_batch.permute(0, 3, 1, 2) logits = model(image_batch) batch_accuracy_top_1 = top_1(label_batch, logits.cpu().numpy()) batch_accuracy_top_5 = top_5(label_batch, logits.cpu().numpy()) all_top_1.append(batch_accuracy_top_1) all_top_5.append(batch_accuracy_top_5) return np.mean(all_top_1), np.mean(all_top_5) ``` ### Global Context ``` all_gc_models = timm.list_models("*gc*") all_gc_models # EfficientNetV2: [-1, 1] # Reference: https://github.com/google/automl/blob/master/efficientnetv2/infer.py dataset = get_dataset(imagenet_stats=False, scale=1./127.5, offset=-1) image_batch, label_batch = next(iter(dataset)) print(image_batch.shape, label_batch.shape) print(tf.reduce_max(image_batch), tf.reduce_min(image_batch)) 
top_1_accs = {} top_5_accs = {} # Only single model is available as pre-trained. print(f"Evaluating {all_gc_models[0]}") mean_top_1, mean_top_5 = eval_single_model(dataset, all_gc_models[0]) top_1_accs.update({all_gc_models[0]: np.mean(mean_top_1)}) top_5_accs.update({all_gc_models[0]: np.mean(mean_top_5)}) top_1_accs[all_gc_models[0]], top_5_accs[all_gc_models[0]] ``` ### Gather-Excite (GE) ``` dataset = get_dataset(imagenet_stats=True) image_batch, label_batch = next(iter(dataset)) print(image_batch.shape, label_batch.shape) print(tf.reduce_max(image_batch), tf.reduce_min(image_batch)) all_ge_models = timm.list_models("ge*") all_ge_models ``` First model is not available as ImageNet-1k pretrained. ``` top_1_accs = {} top_5_accs = {} for ge_model_name in tqdm(all_ge_models[1:]): print(f"Evaluating {ge_model_name}") mean_top_1, mean_top_5 = eval_single_model(dataset, ge_model_name) top_1_accs.update({ge_model_name: mean_top_1}) top_5_accs.update({ge_model_name: mean_top_5}) top_1_accs, top_5_accs ``` ### Selective kernel ``` all_sk_models = timm.list_models("sk*") all_sk_models top_1_accs = {} top_5_accs = {} for sk_model_name in tqdm(all_sk_models): if sk_model_name not in ["skresnet50", "skresnet50d"]: print(f"Evaluating {sk_model_name}") mean_top_1, mean_top_5 = eval_single_model(dataset, sk_model_name) top_1_accs.update({sk_model_name: mean_top_1}) top_5_accs.update({sk_model_name: mean_top_5}) top_1_accs, top_5_accs ```
## Digit Recognizer

Learn computer vision fundamentals with the famous MNIST data: https://www.kaggle.com/c/digit-recognizer

### Competition Description

MNIST ("Modified National Institute of Standards and Technology") is the de facto "hello world" dataset of computer vision. Since its release in 1999, this classic dataset of handwritten images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike.

In this competition, your goal is to correctly identify digits from a dataset of tens of thousands of handwritten images. We've curated a set of tutorial-style kernels which cover everything from regression to neural networks. We encourage you to experiment with different algorithms to learn first-hand what works well and how techniques compare.

### Practice Skills

* Computer vision fundamentals including simple neural networks
* Classification methods such as SVM and K-nearest neighbors

#### Acknowledgements

More details about the dataset, including algorithms that have been tried on it and their levels of success, can be found at http://yann.lecun.com/exdb/mnist/index.html. The dataset is made available under a Creative Commons Attribution-Share Alike 3.0 license.

```
import pandas as pd
import math
import numpy as np
import matplotlib.pyplot as plt, matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
%matplotlib inline
```

# Tensorflow 2.0

### Initiate TF 2.0

```
try:
    # %tensorflow_version only exists in Colab.
%tensorflow_version 2.x except Exception: pass import tensorflow as tf from tensorflow import keras from tensorflow.keras import models from tensorflow.keras import losses,optimizers,metrics from tensorflow.keras import layers from tensorflow.keras.callbacks import TensorBoard import time NAME = 'DigiRecognizer-CNN-{}'.format(int(time.time())) tensorboard = TensorBoard(log_dir='logs/{}'.format(NAME)) ``` ### Data preparation ``` from google.colab import drive drive.mount('/content/gdrive') labeled_images = pd.read_csv('gdrive/My Drive/dataML/train.csv') #labeled_images = pd.read_csv('train.csv') images = labeled_images.iloc[:,1:] labels = labeled_images.iloc[:,:1] train_images, test_images,train_labels, test_labels = train_test_split(images, labels, test_size=0.01) print(train_images.shape) print(train_labels.shape) print(test_images.shape) print(test_labels.shape) ``` ### Helper functions for batch learning ``` def one_hot_encode(vec, vals=10): ''' For use to one-hot encode the 10- possible labels ''' n = len(vec) out = np.zeros((n, vals)) out[range(n), vec] = 1 return out class CifarHelper(): def __init__(self): self.i = 0 # Intialize some empty variables for later on self.training_images = None self.training_labels = None self.test_images = None self.test_labels = None def set_up_images(self): print("Setting Up Training Images and Labels") # Vertically stacks the training images self.training_images = train_images.values train_len = self.training_images.shape[0] # Reshapes and normalizes training images self.training_images = self.training_images.reshape(train_len,28,28,1)/255 # One hot Encodes the training labels (e.g. 
[0,0,0,1,0,0,0,0,0,0])
        self.training_labels = one_hot_encode(train_labels.values.reshape(-1), 10)

        print("Setting Up Test Images and Labels")
        # Vertically stacks the test images
        self.test_images = test_images.values
        test_len = self.test_images.shape[0]
        # Reshapes and normalizes test images
        self.test_images = self.test_images.reshape(test_len,28,28,1)/255
        # One hot Encodes the test labels (e.g. [0,0,0,1,0,0,0,0,0,0])
        self.test_labels = one_hot_encode(test_labels.values.reshape(-1), 10)

    def next_batch(self, batch_size):
        # Note that the 100 dimension in the reshape call is set by an assumed batch size of 100
        x = self.training_images[self.i:self.i+batch_size]
        y = self.training_labels[self.i:self.i+batch_size]
        self.i = (self.i + batch_size) % len(self.training_images)
        return x, y

# Before your tf.Session run these two lines
ch = CifarHelper()
ch.set_up_images()

# During your session to grab the next batch use this line
# (Just like we did for mnist.train.next_batch)
# batch = ch.next_batch(100)
```

### Creating the Model Class

```
class CNNModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # self.Dense1 = layers.Dense(units=10, activation='relu')
        self.conv1 = layers.Conv2D(filters=12, kernel_size=(6,6), strides=(1,1), padding='same', activation='relu')
        self.conv2 = layers.Conv2D(filters=24, kernel_size=(5,5), strides=(2,2), padding='same', activation='relu')
        self.conv3 = layers.Conv2D(filters=48, kernel_size=(4,4), strides=(2,2), padding='same', activation='relu')
        self.flatten = layers.Flatten()
        self.dense = layers.Dense(units=10, activation='softmax')

    def call(self, inputs):
        x = self.conv1(inputs)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.flatten(x)
        output = self.dense(x)
        return output

num_epochs = 1000
batch_size = 50
learning_rate = 0.001
```

### Initialize the model and optimizer

```
model = CNNModel()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
```

### Train the model

```
for epoch in range(num_epochs):
    X, y_true = \
ch.next_batch(batch_size)
    with tf.GradientTape() as tape:
        y_pred = model(X)
        loss = tf.keras.losses.categorical_crossentropy(y_true=y_true, y_pred=y_pred)
        loss = tf.reduce_mean(loss)
    grad = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grad, model.variables))
    if epoch % 100 == 0:
        print("epoch: {}, loss: {}".format(epoch, loss.numpy()))
```

### Evaluate the model

```
categorical_accuracy = tf.keras.metrics.CategoricalAccuracy()
y_pred = model.predict(x=ch.test_images)
categorical_accuracy.update_state(y_true=ch.test_labels, y_pred=y_pred)
accuracy = categorical_accuracy.result().numpy()
print("The accuracy is: {}".format(accuracy))
```

### Train and Evaluate the model

```
categorical_accuracy = tf.keras.metrics.CategoricalAccuracy()

for epoch in range(num_epochs):
    X, y_true = ch.next_batch(batch_size)
    with tf.GradientTape() as tape:
        y_pred = model(X)
        loss = tf.keras.losses.categorical_crossentropy(y_true=y_true, y_pred=y_pred)
        loss = tf.reduce_mean(loss)
    grad = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grad, model.variables))

    y_test_pred = model.predict(x=ch.test_images)
    categorical_accuracy.update_state(y_true=ch.test_labels, y_pred=y_test_pred)
    accuracy = categorical_accuracy.result().numpy()
    if epoch % 100 == 0:
        print("epoch: {}, loss: {}, accuracy: {}".format(epoch, loss.numpy(), accuracy))

print(categorical_accuracy.result().numpy())
```

# Tensorflow 1.0

### Helper functions for batch learning

```
def one_hot_encode(vec, vals=10):
    '''
    For use to one-hot encode the 10 possible labels
    '''
    n = len(vec)
    out = np.zeros((n, vals))
    out[range(n), vec] = 1
    return out

class CifarHelper():
    def __init__(self):
        self.i = 0
        # Initialize some empty variables for later on
        self.training_images = None
        self.training_labels = None
        self.test_images = None
        self.test_labels = None

    def set_up_images(self):
        print("Setting Up Training Images and Labels")
        # Vertically stacks the training images
        self.training_images = \
train_images.as_matrix() train_len = self.training_images.shape[0] # Reshapes and normalizes training images self.training_images = self.training_images.reshape(train_len,28,28,1)/255 # One hot Encodes the training labels (e.g. [0,0,0,1,0,0,0,0,0,0]) self.training_labels = one_hot_encode(train_labels.as_matrix().reshape(-1), 10) print("Setting Up Test Images and Labels") # Vertically stacks the test images self.test_images = test_images.as_matrix() test_len = self.test_images.shape[0] # Reshapes and normalizes test images self.test_images = self.test_images.reshape(test_len,28,28,1)/255 # One hot Encodes the test labels (e.g. [0,0,0,1,0,0,0,0,0,0]) self.test_labels = one_hot_encode(test_labels.as_matrix().reshape(-1), 10) def next_batch(self, batch_size): # Note that the 100 dimension in the reshape call is set by an assumed batch size of 100 x = self.training_images[self.i:self.i+batch_size] y = self.training_labels[self.i:self.i+batch_size] self.i = (self.i + batch_size) % len(self.training_images) return x, y # Before Your tf.Session run these two lines ch = CifarHelper() ch.set_up_images() # During your session to grab the next batch use this line # (Just like we did for mnist.train.next_batch) # batch = ch.next_batch(100) ``` ## Creating the Model ** Create 2 placeholders, x and y_true. Their shapes should be: ** * X shape = [None,28,28,1] * Y_true shape = [None,10] ** Create three more placeholders * lr: learning rate * step:for learning rate decay * drop_rate ``` X = tf.placeholder(tf.float32, shape=[None,28,28,1]) Y_true = tf.placeholder(tf.float32, shape=[None,10]) lr = tf.placeholder(tf.float32) step = tf.placeholder(tf.int32) drop_rate = tf.placeholder(tf.float32) ``` ### Initialize Weights and bias neural network structure for this sample: X [batch, 28, 28, 1] Layer 1: conv. layer 6x6x1=>6, stride 1 W1 [6, 6, 1, 6] , B1 [6] Y1 [batch, 28, 28, 6] Layer 2: conv. layer 5x5x6=>12, stride 2 W2 [5, 5, 6, 12] , B2 [12] Y2 [batch, 14, 14, 12] Layer 3: conv. 
layer 4x4x12=>24, stride 2, W3 [4, 4, 12, 24], B3 [24], Y3 [batch, 7, 7, 24] => reshaped to YY [batch, 7*7*24]
Layer 4: fully connected layer (relu+dropout), W4 [7*7*24, 200], B4 [200], Y4 [batch, 200]
Layer 5: fully connected layer (softmax), W5 [200, 10], B5 [10], Y [batch, 10]

```
# three convolutional layers with their channel counts, and a
# fully connected layer (the last layer has 10 softmax neurons)
K = 12   # first convolutional layer output depth
L = 24   # second convolutional layer output depth
M = 48   # third convolutional layer
N = 200  # fully connected layer

W1 = tf.Variable(tf.truncated_normal([6,6,1,K], stddev=0.1))
B1 = tf.Variable(tf.ones([K])/10)
W2 = tf.Variable(tf.truncated_normal([5,5,K,L], stddev=0.1))
B2 = tf.Variable(tf.ones([L])/10)
W3 = tf.Variable(tf.truncated_normal([4,4,L,M], stddev=0.1))
B3 = tf.Variable(tf.ones([M])/10)
W4 = tf.Variable(tf.truncated_normal([7*7*M,N], stddev=0.1))
B4 = tf.Variable(tf.ones([N])/10)
W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1))
B5 = tf.Variable(tf.zeros([10]))
```

### Layers

```
Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1,1,1,1], padding='SAME') + B1)
Y2 = tf.nn.relu(tf.nn.conv2d(Y1, W2, strides=[1,2,2,1], padding='SAME') + B2)
Y3 = tf.nn.relu(tf.nn.conv2d(Y2, W3, strides=[1,2,2,1], padding='SAME') + B3)

# flatten the inputs for the fully connected layers
YY3 = tf.reshape(Y3, shape=(-1, 7*7*M))
Y4 = tf.nn.relu(tf.matmul(YY3, W4) + B4)
Y4d = tf.nn.dropout(Y4, rate=drop_rate)

Ylogits = tf.matmul(Y4d, W5) + B5
Y = tf.nn.softmax(Ylogits)
```

### Loss Function

```
#cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_true, logits=Ylogits))
cross_entropy = tf.losses.softmax_cross_entropy(onehot_labels=Y_true, logits=Ylogits)
#cross_entropy = -tf.reduce_mean(y_true * tf.log(Ylogits)) * 1000.0
```

### Optimizer

```
lr = 0.0001 + tf.train.exponential_decay(learning_rate=0.003,
                                         global_step=step,
                                         decay_steps=2000,
                                         decay_rate=1/math.e)
#optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.005)
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
train = optimizer.minimize(cross_entropy)
```

### Initialize Variables

```
init = tf.global_variables_initializer()
```

### Saving the Model

```
saver = tf.train.Saver()
```

## Graph Session

**Perform the training and test printouts in a TF session and run your model!**

```
history = {'acc_train': list(), 'acc_val': list(),
           'loss_train': list(), 'loss_val': list(),
           'learning_rate': list()}

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch = ch.next_batch(100)
        sess.run(train, feed_dict={X: batch[0], Y_true: batch[1], step: i, drop_rate: 0.25})

        # PRINT OUT A MESSAGE EVERY 100 STEPS
        if i % 100 == 0:
            # Test the Train Model
            feed_dict_train = {X: batch[0], Y_true: batch[1], drop_rate: 0.25}
            feed_dict_val = {X: ch.test_images, Y_true: ch.test_labels, drop_rate: 0}

            matches = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_true, 1))
            acc = tf.reduce_mean(tf.cast(matches, tf.float32))

            history['acc_train'].append(sess.run(acc, feed_dict=feed_dict_train))
            history['acc_val'].append(sess.run(acc, feed_dict=feed_dict_val))
            history['loss_train'].append(sess.run(cross_entropy, feed_dict=feed_dict_train))
            history['loss_val'].append(sess.run(cross_entropy, feed_dict=feed_dict_val))
            history['learning_rate'].append(sess.run(lr, feed_dict={step: i}))

            print("Iteration {}:\tlearning_rate={:.6f},\tloss_train={:.6f},\tloss_val={:.6f},\tacc_train={:.6f},\tacc_val={:.6f}"
                  .format(i, history['learning_rate'][-1],
                          history['loss_train'][-1], history['loss_val'][-1],
                          history['acc_train'][-1], history['acc_val'][-1]))
    print('\n')
    saver.save(sess, 'models_saving/my_model.ckpt')

plt.plot(history['acc_train'], 'b')
plt.plot(history['acc_val'], 'r')
plt.plot(history['loss_train'], 'b')
plt.plot(history['loss_val'], 'r')
plt.plot(history['learning_rate'])
```

### Loading a Model

```
unlabeled_images_test = pd.read_csv('gdrive/My Drive/dataML/test.csv')
#unlabeled_images_test = pd.read_csv('test.csv')
X_unlabeled = unlabeled_images_test.values.reshape(unlabeled_images_test.shape[0], 28, 28, 1) / 255

with tf.Session() as sess:
    # Restore the model
    saver.restore(sess, 'models_saving/my_model.ckpt')
    # Fetch back results
    label = sess.run(Y, feed_dict={X: X_unlabeled, drop_rate: 0})

label
```

## Predict the unlabeled test sets using the model

```
# Y returns per-class probabilities; take the argmax to get the predicted digit
predicted_labels = np.argmax(label, axis=1)
imageId = np.arange(1, label.shape[0] + 1).tolist()
prediction_pd = pd.DataFrame({'ImageId': imageId, 'Label': predicted_labels})
prediction_pd.to_csv('gdrive/My Drive/dataML/out_cnn4.csv', sep=',', index=False)
```
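The learning-rate schedule fed to the Adam optimizer above is a floor of 0.0001 plus a 0.003 term that decays by a factor of 1/e every 2000 steps. It can be checked without TensorFlow; this is a framework-free sketch of the same formula, using the constants from the `tf.train.exponential_decay` call (the function name here is mine, not part of the notebook):

```python
import math

def decayed_learning_rate(step, lr_min=0.0001, lr_max=0.003, decay_steps=2000):
    """lr_min + lr_max * e^(-step/decay_steps): starts near 0.0031
    and decays toward the lr_min floor."""
    return lr_min + lr_max * math.exp(-step / decay_steps)

print(decayed_learning_rate(0))      # ~0.0031 at the first step
print(decayed_learning_rate(2000))   # the 0.003 part has decayed by a factor of 1/e
print(decayed_learning_rate(20000))  # essentially the 0.0001 floor
```

Keeping a small constant floor ensures the optimizer never stops making progress entirely, even very late in training.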
# Hypothesis Testing of Human Height Data

In this lab, you will learn how to use Python 3 to perform and understand the basics of hypothesis testing. Hypothesis testing is widely used: anytime you are trying to determine whether a parameter or relationship is statistically significant, you can perform a hypothesis test. In this lab you will explore and perform hypothesis tests on a famous data set collected by Francis Galton, who invented the regression method. Galton collected these data from families living in late 19th century London. Galton published his famous paper in 1885, showing that the heights of adult children regressed to the mean of the population, regardless of the heights of the parents. From this seminal study we have the term regression in statistics.

## Exercise 1. Explore the data

In this first exercise you will load the Galton data set. You will then explore differences between some of the variables in these data using some simple visualization techniques.

**Note:** Data visualization is covered in subsequent modules of this course.

### Load and examine the data set

Execute the code in the cell below to load the Galton data set.

```
from azureml import Workspace
ws = Workspace()
ds = ws.datasets['GaltonFamilies.csv']
galton = ds.to_dataframe()
```

With the data loaded, you can examine the first few rows by executing the code in the cell below:

```
galton.head()
```

This data set has 9 features:

1. A case or row number.
2. A unique code for each family in the sample.
3. The height of the father in inches.
4. The height of the mother in inches.
5. The average height of the parents.
6. The number of children in the family.
7. A code for each unique child in the family.
8. The gender of the child.
9. The height of the adult child in inches.

Execute the code in the cell below to determine the number of cases in this data set.

```
galton.shape
```

There are a total of 934 cases, or children, in the sample comprising this data set.
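The `Workspace` loader above is specific to Azure ML Studio. As a quick sanity check of the "count the cases" step, here is a stdlib-only sketch that parses a few hypothetical Galton-style rows and counts them (the rows below are made up for illustration, not the real 934-case sample):

```python
import csv
import io

# A few made-up rows in the GaltonFamilies.csv column layout (toy data)
raw = """family,father,mother,midparentHeight,children,childNum,gender,childHeight
001,78.5,67.0,75.43,4,1,male,73.2
001,78.5,67.0,75.43,4,2,female,69.2
002,75.5,66.5,73.66,4,1,male,73.5
"""

# Each row is one case (one child), just as each row of `galton` is
rows = list(csv.DictReader(io.StringIO(raw)))
print(len(rows))                # 3 cases in this toy sample
print(rows[0]["childHeight"])   # '73.2'
```

This mirrors what `galton.shape` reports: the number of rows is the number of children in the sample.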
### Visualizing some relationships in these data

To develop a better understanding of some of the relationships in these data, you will create and compare some histograms of some of the variables.

The code in the cell below creates a pair of histograms to compare the distributions of two variables. The histograms are plotted on the same horizontal scale to aid comparison. A red line is plotted at the mean value of each variable.

Execute the code in the cell below to plot a pair of histograms comparing the height of mothers to the height of their sons. You can safely ignore any warnings about building a font cache.

```
%matplotlib inline
def hist_family(df, col1, col2, num_bins = 30):
    import matplotlib.pyplot as plt

    ## Setup for plotting two charts one over the other
    fig, ax = plt.subplots(2, 1, figsize = (12,8))
    mins = min([df[col1].min(), df[col2].min()])
    maxs = max([df[col1].max(), df[col2].max()])
    mean1 = df[col1].mean()
    mean2 = df[col2].mean()

    ## Plot the first histogram
    temp = df[col1].as_matrix()
    ax[1].hist(temp, bins = num_bins, alpha = 0.7)
    ax[1].set_xlim([mins, maxs])
    ax[1].axvline(x=mean1, color = 'red', linewidth = 4)
    ax[1].set_ylabel('Count')
    ax[1].set_xlabel(col1)

    ## Plot the second histogram
    temp = df[col2].as_matrix()
    ax[0].hist(temp, bins = num_bins, alpha = 0.7)
    ax[0].set_xlim([mins, maxs])
    ax[0].axvline(x=mean2, color = 'red', linewidth = 4)
    ax[0].set_ylabel('Count')
    ax[0].set_xlabel(col2)
    return [col1, col2]

sons = galton[galton.gender == 'male']
hist_family(sons, 'childHeight', 'mother')
```

Examine these histograms and note the following:

- The distributions of the heights of the mothers and their sons have a fair degree of overlap.
- The mean height of the sons is noticeably greater than that of the mothers.

Next you will compare the heights of mothers to the heights of their daughters.
```
daughters = galton[galton.gender == 'female']
hist_family(daughters, 'childHeight', 'mother')
```

Examine these histograms and note the following:

- The distributions of the heights of the mothers and their daughters overlap almost entirely.
- The mean height of the daughters is nearly the same as that of the mothers.

In summary, it appears that sons are usually taller than their mothers, whereas the heights of daughters do not appear to be much different from their mothers'. But how valid is this conclusion statistically?

## Apply t test

Now that you have examined some of the relationships between the variables in these data, you will apply formal hypothesis testing. In hypothesis testing, a null hypothesis is tested against a statistic. The null hypothesis is simply that the difference is not significant. Depending on the value of the test statistic, you can accept or reject the null hypothesis.

In this case, you will use the two-sided t-test to determine if the difference in means of two variables is significant. The null hypothesis is that there is no significant difference between the means. There are multiple criteria used to interpret the test results. You will determine if you can reject the null hypothesis based on the following criteria:

- Selecting a **significance level** of **5%** or **0.05**.
- Determine if the t-statistic for the degrees of freedom is greater than the **critical value**. The difference in means of Normally distributed variables follows a t-distribution. A large t-statistic indicates that the difference in means is unlikely to have arisen by chance alone.
- Determine if the P-value is less than the **significance level**. A small P-value indicates that the probability of a difference of the means this extreme arising by chance alone is small.
- The **confidence interval** around the difference of the means does not overlap with **0**.
If the **confidence interval** is far from **0**, this indicates that the difference in means is unlikely to be **0**.

Based on these criteria you will accept or reject the null hypothesis. However, rejecting the null hypothesis should not be confused with accepting the alternative. It simply means the null is not a good hypothesis.

The **family_test** function in the cell below uses the **CompareMeans** function from the **weightstats** package to compute the two-sided t statistics. The **hist_family_conf** function calls the **family_test** function and plots the results. Execute this code to compute and display the results.

```
def family_test(df, col1, col2, alpha):
    from scipy import stats
    import scipy.stats as ss
    import pandas as pd
    import statsmodels.stats.weightstats as ws

    n, _, diff, var, _, _ = stats.describe(df[col1] - df[col2])
    degfree = n - 1

    temp1 = df[col1].as_matrix()
    temp2 = df[col2].as_matrix()
    res = ss.ttest_rel(temp1, temp2)

    means = ws.CompareMeans(ws.DescrStatsW(temp1), ws.DescrStatsW(temp2))
    confint = means.tconfint_diff(alpha=alpha, alternative='two-sided', usevar='unequal')
    degfree = means.dof_satt()

    index = ['DegFreedom', 'Difference', 'Statistic', 'PValue', 'Low95CI', 'High95CI']
    return pd.Series([degfree, diff, res[0], res[1], confint[0], confint[1]], index = index)

def hist_family_conf(df, col1, col2, num_bins = 30, alpha = 0.05):
    import matplotlib.pyplot as plt

    ## Setup for plotting two charts one over the other
    fig, ax = plt.subplots(2, 1, figsize = (12,8))
    mins = min([df[col1].min(), df[col2].min()])
    maxs = max([df[col1].max(), df[col2].max()])
    mean1 = df[col1].mean()
    mean2 = df[col2].mean()
    tStat = family_test(df, col1, col2, alpha)
    pv1 = mean2 + tStat[4]
    pv2 = mean2 + tStat[5]

    ## Plot the first histogram
    temp = df[col1].as_matrix()
    ax[1].hist(temp, bins = num_bins, alpha = 0.7)
    ax[1].set_xlim([mins, maxs])
    ax[1].axvline(x=mean1, color = 'red', linewidth = 4)
    ax[1].axvline(x=pv1, color = 'red', linestyle='--', linewidth = 4)
    ax[1].axvline(x=pv2, color = 'red', linestyle='--', linewidth = 4)
    ax[1].set_ylabel('Count')
    ax[1].set_xlabel(col1)

    ## Plot the second histogram
    temp = df[col2].as_matrix()
    ax[0].hist(temp, bins = num_bins, alpha = 0.7)
    ax[0].set_xlim([mins, maxs])
    ax[0].axvline(x=mean2, color = 'red', linewidth = 4)
    ax[0].set_ylabel('Count')
    ax[0].set_xlabel(col2)
    return tStat

hist_family_conf(sons, 'mother', 'childHeight')
```

Examine the printed table of results and the charts, noting the following:

- The difference of the means is 5.2 inches. You can see this difference graphically by comparing the positions of the solid red lines showing the means of the two distributions.
- The **critical value** of the two-sided t-statistic at 945 degrees of freedom is **1.96**. The t-statistic of -39.5 is larger in magnitude than this **critical value**.
- The P-value is effectively 0, which is smaller than the **significance level** of 0.05.
- The 95% **confidence interval** of the difference in means is from -4.9 to -5.5, which does not overlap 0. You can see the confidence interval plotted as the two dashed red lines in the lower chart shown above. This **confidence interval** around the mean of the mothers' heights does not overlap with the mean of the sons' heights.

Overall, these statistics indicate you can reject the null hypothesis, i.e. the difference in the means is not **0**.

```
hist_family_conf(daughters, 'mother', 'childHeight')
```

Examine the printed table of results, which is quite different from the test of the heights of mothers vs. sons. Examine the statistics and charts, noting the following:

- The difference of the means is only 0.04 inches. You can see this small difference graphically by comparing the positions of the solid red lines showing the means of the two distributions.
- The **critical value** of the two-sided t-statistic at 480 degrees of freedom is **1.96**. The t-statistic of 0.35 is smaller than this **critical value**.
- The P-value is 0.73, which is larger than the **significance level** of 0.05.
- The 95% **confidence interval** of the difference is from -0.26 to 0.35, which overlaps 0. You can see the confidence interval plotted as the two dashed red lines in the lower chart shown above. This **confidence interval** around the mean of the mothers' heights overlaps the mean of the daughters' heights.

Overall, these statistics indicate you cannot reject the null hypothesis that there is no significant difference in the means.

**Evaluation question**

You have found that you could not reject the null hypothesis that there was no significant difference between the heights of mothers and their adult daughters. But what about the difference in height between fathers and their adult daughters? Perform the t-test on the Galton data set to answer the question below:

- Can you reject the null hypothesis that there is no significant difference in the heights of fathers and their adult daughters?

```
hist_family_conf(daughters, 'father', 'childHeight')
```
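The paired t-statistic that `ttest_rel` computes above can be reproduced from first principles: it is the mean of the per-pair differences divided by the standard error of those differences. A minimal stdlib-only sketch, using made-up height samples (the toy numbers below are illustrative, not the Galton data):

```python
import math
import statistics

def paired_t_statistic(sample1, sample2):
    """Paired two-sided t statistic: mean of the per-pair differences
    divided by the standard error of those differences."""
    diffs = [a - b for a, b in zip(sample1, sample2)]
    n = len(diffs)
    mean_diff = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)  # stdev uses n-1 (sample sd)
    return mean_diff / se

# Toy data: "mothers" vs "sons", sons systematically ~5 inches taller
mothers = [60.0, 62.0, 64.0, 66.0, 68.0]
sons = [65.0, 66.0, 70.0, 71.0, 73.0]

t = paired_t_statistic(mothers, sons)
print(round(t, 2))  # -15.81: a large negative t, sons significantly taller
```

With |t| far beyond the two-sided critical value of about 2.78 at 4 degrees of freedom, even this toy sample would reject the null hypothesis, mirroring the mothers-vs-sons result above.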
# BLU03 - Learning Notebook - Part 1 of 3 - Relational Databases and SQL

## 1. Introduction

In this notebook, we'll start by learning about databases. There are different types of databases, but here we'll focus on relational databases. In that context, we'll talk about tables and how different tables can relate with each other.

Then, we'll connect to a real database and practice extracting information from there, using a query language called SQL.

Finally, we'll see how to load data from databases into pandas DataFrames.

## 2. What is a Database

Generically, a [**database**](https://en.wikipedia.org/wiki/Database) is an organized collection of data. By this definition, a teenager's diary or an excel spreadsheet are examples of databases.

In fact, a database by itself is of little relevance. We are actually only interested in databases supported by a **database-management system** (DBMS). These are software applications that allow end-users and other applications to interact with the stored data.

<img src="media/dbms.png" width=500/>

You may have heard about some popular DBMSs, like PostgreSQL, MySQL or SQLite. From here on, we will be referring to databases as the combination of the actual database and its DBMS.

---

We can separate databases in two groups: **relational** and **non-relational** databases.

#### Relational databases

In a relational database, we model the data through tables and relationships between them. It may sound a little vague now, but it will start making sense once we look at some examples.

When talking about relational databases, it is fundamental to also talk about [**SQL**](https://en.wikipedia.org/wiki/SQL). It stands for **S**tructured **Q**uery **L**anguage, and as the name suggests, it's a domain-specific language used to interact with this kind of database. As a result, relational databases are often called **SQL databases**.
#### Non-relational databases

There are many different types of non-relational databases, such as key-value databases, document databases or graph databases. In opposition to SQL databases, non-relational databases are often called [**NoSQL**](https://en.wikipedia.org/wiki/NoSQL) **databases**.

In this notebook we will only be talking about relational databases and SQL, but you should definitely check out NoSQL databases if you have the time!

## 3. SQL database management systems

There are a few different DBMSs around these days. Here is a list of the most relevant:

* [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL)
* [MySQL](https://en.wikipedia.org/wiki/MySQL)
* [SQLite](https://en.wikipedia.org/wiki/SQLite)
* [Microsoft SQL Server](https://en.wikipedia.org/wiki/Microsoft_SQL_Server)
* [Oracle DB](https://en.wikipedia.org/wiki/Oracle_Database)

Although this is not what we want to focus on in this notebook, it is worth mentioning that there are some SQL syntax differences between these DBMSs, so when in doubt, just check the respective documentation.

## 4. SQL

In order to introduce you to SQL, let's imagine that we want to build a website similar to [Rotten Tomatoes](https://www.rottentomatoes.com/) or [IMDb](https://www.imdb.com/), with the help of a SQL database to store and interact with our data. For future reference, we will call our database **MoviesDb**.

### 4.1 Tables

In the first place, MoviesDb must be able to display a set of movies to its users. Thus, our first challenge is to model the movie entity. Each movie in our database is characterized by its *original title*, *release date*, *budget*, *runtime* and *original language*. To store such data we need a **table**, called the movie table. It's common practice to use singular nouns as table names.

The table is the basic building block of a SQL database.
Each table has:

* **rows** (aka **records**), that aggregate the data of a **single entry** of the entity being modeled
* **columns** (aka **fields**), that are the properties that characterize the entity

### 4.2 Data Types

Similarly to what we saw in pandas DataFrames, each column has a unique data type, i.e., all the values in a column have the same data type. There are different data types to choose from, such as:

1. INTEGER - integer numeric
2. FLOAT - decimal numeric
3. VARCHAR(n) - character string with variable length, up to n characters
4. DATE - stores year, month, day
5. TIMESTAMP - stores year, month, day, hour, minute, second, millisecond...
6. BOOLEAN - true or false

Our movie table has the following data types:

* *id* - INTEGER
* *original_title* - VARCHAR(255)
* *release_date* - TIMESTAMP
* *budget* - INTEGER
* *runtime* - INTEGER
* *original_language* - VARCHAR(5)

Here is a snapshot of the movie table:

id |original_title |release_date |budget |runtime |original_language |
----|-----------------|--------------------|----------|--------|------------------|
1 |Toy Story |1995-10-30 00:00:00 |30000000 |81 |en |
2 |Jumanji |1995-12-15 00:00:00 |65000000 |104 |en |
3 |Grumpier Old Men |1995-12-22 00:00:00 |0 |101 |en |

### 4.3 Constraints

In SQL tables, we can impose some constraints on the column values. These constraints are like rules that determine if a certain value may belong to a column or not. The most frequent constraints are the following:

* **NOT NULL**: a column with this constraint may not have NULL values
* **UNIQUE**: a column with this constraint may not have duplicate values
* **DEFAULT**: when inserting records in the table, if no value is specified for this column, a default value will be used

### 4.4 Indexes

An index in SQL works like an index in a book: it's a guide to help find the value you're searching for in a faster way. It's very useful to have indexed columns in tables that are consulted frequently.
The downside is that inserting, deleting or updating data in such tables is slower, as they need to be reindexed when the data changes.

### 4.5 Keys

Keys are used to identify records in a table and also to create relationships among different tables. The two most important types of keys are the **primary key** and the **foreign key**.

**Primary key**

A primary key uniquely identifies each record in a table. It can be one column or a combination of multiple columns. In any case, it must respect the UNIQUE and NOT NULL constraints, in order to uniquely identify a record in a table. An example of a primary key is column *id* in the movie table.

**Foreign key**

A foreign key is a column (or a combination of columns) used to link the table with other tables. The foreign key references the primary key in another table.

Let's see an example. In the MoviesDb, we have other tables besides the movie table. One of them is the oscar table, that stores the winners of the Best Picture Oscar since 1929:

id |year |movie_id |
---|-----|---------|
1 |1929 |1817 |
2 |1930 |1818 |

In this table, column *id* is the table's primary key and column *movie_id* is a foreign key that references the primary key of the movie table.

## 5. Connecting to a SQL database

Time to interact with a real database! For this, we'll need a SQL client, which is a graphical interface to interact with the database. So let's install a SQL client and get started!

Here we're going to use the [DBeaver SQL client](https://dbeaver.io/) since it can work with different DBMSs and can be installed on Linux, Mac and Windows machines. If you're a Mac user and DBeaver doesn't work for you, for some reason, try [PSequel](http://www.psequel.com/) which is native for Mac OSX.
#### Step 1: Download and Install DBeaver / PSequel SQL client

[Download Link for DBeaver](https://dbeaver.io/download/)

[Download Link for PSequel](http://www.psequel.com/)

The following printscreens were taken in DBeaver, but in PSequel, it's all very similar.

#### Step 2: Connect to the academy database

First, select a PostgreSQL connection type (File -> New -> DBeaver -> Database Connection), and then introduce the following connection settings:

- **host**: batch4-s02-db-instance.ctq2kxc7kx1i.eu-west-1.rds.amazonaws.com
- **port**: 5432
- **database**: batch4_s02_db
- **user**: ldsa_student
- **password**: (this was shared with you through slack)

<img src="media/dbeaver/connect_1.png" width=400/>

<img src="media/dbeaver/connect_2.png" width=400/>

#### Step 3: Setup

In the Database Navigator (a panel on the left-hand side of the screen), right-click on the Postgres database and select "Edit Connection". Go to Driver Properties and edit the "Current Schema" to **blu03**. Click Ok.

<img src="media/dbeaver/connect_3.png" width=550/>

In PSequel, you can select the schema in the upper left schema drop-down.

<img src="media/psequel_schema.png" width=550/>

Note: a [schema](https://www.postgresql.org/docs/current/static/ddl-schemas.html) is a way to group a set of tables under the same name. Here, we're specifying the blu03 schema in order to only see the tables that matter to us in the context of this BLU.

#### Step 4: Test that everything is working

Select "SQL Editor" in the toolbar and then click on "New SQL editor". Type "SELECT * FROM movie;" and check that you get the results (press CTRL+Enter or click the orange arrow to execute SQL statements).

<img src="media/dbeaver/connect_4.png" width=900/>

## 6. SQL queries

A SQL query is the operation of retrieving some information from a database.
SQL is a very human oriented language, in the sense that its syntax uses keywords that mean something in English, and just by reading a query, you understand immediately what it's doing.

So, now that we are connected to the database, we can start querying the MoviesDb.

### 6.1 SELECT * FROM

We can select all the rows from a table with the query:

**SELECT \* FROM *table_name;***

The \* is called a wildcard and when you read this query, you can think of it as "select all the columns from table_name". Also, the semicolon at the end indicates the end of the query.

~~~
SELECT * FROM movie;
~~~

id |imdb_id |original_title |release_date |budget |runtime |original_language |
----|-----------|----------------|--------------------|----------|--------|------------------|
1 |tt0114709 |Toy Story |1995-10-30 00:00:00 |30000000 |81 |en |
2 |tt0113497 |Jumanji |1995-12-15 00:00:00 |65000000 |104 |en |

---

~~~
SELECT * FROM oscar;
~~~

id |year |movie_id |
---|-----|---------|
1 |1929 |1817 |
2 |1930 |1818 |

We may also select just a subset of the table's fields. It's common to see people saying that doing selects with wildcards (i.e., SELECT * FROM) is bad practice. This is because selecting many columns from a table has a higher performance cost, meaning that the query will take longer to run. So, as a rule of thumb, only select from a table the fields that you really need.

As an example, this is how we select the *release_date* and *original_title* from the movie table.

~~~
SELECT release_date, original_title FROM movie;
~~~

release_date |original_title |
--------------------|-----------------|
1995-10-30 00:00:00 |Toy Story |
1995-12-15 00:00:00 |Jumanji |

### 6.2 WHERE

We can filter the selected rows, according to some conditions, using the **WHERE** keyword. In the example below we are trying to select movies with a specific *runtime* value.
~~~
SELECT id, original_title, runtime FROM movie WHERE runtime = 299;
~~~

id |original_title | runtime |
------|---------------|---------|
42032 |The Phantom |299 |

We can also combine conditions using logical operators like **AND** or **OR**.

~~~
SELECT id, original_title, runtime FROM movie WHERE runtime = 299 OR runtime = 306;
~~~

id |original_title | runtime |
------|------------------|---------|
42031 |The Miracle Rider |306 |
42032 |The Phantom |299 |

---

~~~
SELECT id, original_title, runtime, original_language FROM movie WHERE runtime > 300 AND original_language = 'it';
~~~

id |original_title | runtime | original_language |
------|------------------ |---------|-------------------|
7421 |La Meglio Gioventú | 366 |it |
8754 |Novecento | 317 |it |

Remember what we said about the unique constraint? Well, column *original_title* certainly does not have the unique constraint, otherwise this query would only return one row.

~~~
SELECT id, original_title FROM movie WHERE original_title = 'The Phantom';
~~~

id |original_title |
------|---------------|
744 |The Phantom |
15496 |The Phantom |
42032 |The Phantom |

### 6.3 LIMIT

When doing a SELECT * FROM query, we are selecting **all the rows** from the table. In tables with many rows, this query may take a long time to run. By using the **LIMIT** keyword, we can define how many rows the query will return.

~~~
SELECT id, original_title FROM movie WHERE original_language = 'pt' AND runtime > 200 LIMIT 1;
~~~

id |original_title |
------|--------------------|
5867 |Vale Abraão |

Practical exercise: try to run this query without the LIMIT, to confirm that it would return more than one row.

### 6.4 ORDER BY

We can control the order by which the rows are returned, using the **ORDER BY** keyword. The **DESC** keyword allows us to order the rows in descending order.
There is also an **ASC** keyword, that makes the sorting in ascending order, but since this is the default behaviour of the ORDER BY, the ASC keyword can be omitted.

~~~
SELECT id, original_title, release_date FROM movie WHERE original_language = 'pt' ORDER BY release_date LIMIT 4;
~~~

id |original_title |release_date |
------|---------------------|--------------------|
14480 |Limite |1931-05-17 00:00:00 |
34481 |Douro, Faina Fluvial |1931-09-21 00:00:00 |
33093 |A Canção de Lisboa |1933-11-07 00:00:00 |
33187 |O Pai Tirano |1941-09-19 00:00:00 |

---

~~~
SELECT id, original_title, budget FROM movie WHERE original_language = 'pt' ORDER BY budget DESC LIMIT 4;
~~~

id |original_title |budget |
------|--------------------|---------|
2321 |A Civil Action |70000000 |
9790 |Capitães de Abril |6105121 |
44975 |Operações Especiais |6000000 |
38086 |Besouro |5000000 |

### 6.5 NULL, NOT NULL

When we want to check if a field is null or not in a where clause, we can use the keywords **IS NULL** and **IS NOT NULL**.

~~~
SELECT id, original_title, original_language FROM movie WHERE original_language IS NULL LIMIT 3;
~~~

id |original_title |original_language |
------|------------------------|------------------|
19575 |Shadowing the Third Man | |
21602 |Unfinished Sky | |
22832 |13 Fighting Men | |

---

~~~
SELECT id, original_title, original_language FROM movie WHERE original_language IS NOT NULL LIMIT 3;
~~~

id |original_title |original_language |
---|-----------------|------------------|
1 |Toy Story |en |
2 |Jumanji |en |
3 |Grumpier Old Men |en |

### 6.6 GROUP BY and aggregate functions

Sometimes it's useful to group rows, in order to analyse the data at group level instead of at record level. When doing a **GROUP BY** operation, we then need to define an aggregate function on each selected column. The most common aggregate functions are **COUNT**, **AVG**, **MIN** and **MAX**, and each of them does exactly what its name indicates.
[Here](https://www.postgresql.org/docs/9.5/static/functions-aggregate.html) you can find a list with all the aggregate functions in PostgreSQL. In other DBMSs the available aggregate functions should be similar. In this example, we're grouping movies by their original language and then finding the number of movies in each group (with the COUNT(id) statement), and the average runtime in each group (with the AVG(runtime) statement). We can use an aggregate value to order the result. In the example, we're sorting the query result so that the groups with the most movies come first. For this, we used the ORDER BY COUNT(id) DESC statement. ~~~ SELECT original_language, COUNT(id) AS total, AVG(runtime) AS average_runtime FROM movie WHERE original_language IS NOT NULL GROUP BY original_language ORDER BY COUNT(id) DESC LIMIT 3; ~~~ original_language |total |average_runtime | ------------------|------|------------------| en |32269 |93.13871237717368 | fr |2438 |91.98053024026513 | it |1529 |87.11179277436946 | ### 6.7 HAVING The **HAVING** keyword is used to filter groups, in the same way that the WHERE keyword filters rows. For instance, in this example, we are grouping movies by original_language, and using the HAVING keyword to keep only the groups with fewer than 100 movies. ~~~ SELECT original_language, COUNT(*) AS total_movies, AVG(runtime) AS average_runtime FROM movie WHERE original_language IS NOT NULL GROUP BY original_language HAVING COUNT(*) < 100 ORDER BY COUNT(*) DESC LIMIT 3; ~~~ original_language |total_movies |average_runtime | ------------------|-------------|-------------------| ta |78 |149.67948717948718 | th |76 |99.94736842105263 | he |67 |97.26865671641791 | ### 6.8 JOIN So far we have been making queries that target the movie table alone. What if we wanted something different? What if the answer to our questions is spread across multiple tables? The **JOIN** keyword can help us in these situations.
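Before moving on to joins, the GROUP BY and HAVING behaviour described above can be checked locally with Python's built-in sqlite3 module (the rows below are invented toy data, not MoviesDb):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movie (id INTEGER, original_language TEXT, runtime INTEGER)")
conn.executemany(
    "INSERT INTO movie VALUES (?, ?, ?)",
    [(1, "en", 100), (2, "en", 110), (3, "en", 90),
     (4, "pt", 120), (5, "pt", 80)],
)

# One output row per language: COUNT and AVG are computed per group,
# then HAVING keeps only the groups with fewer than 3 movies
rows = conn.execute(
    "SELECT original_language, COUNT(*) AS total, AVG(runtime) AS avg_runtime "
    "FROM movie GROUP BY original_language "
    "HAVING COUNT(*) < 3 ORDER BY COUNT(*) DESC"
).fetchall()
print(rows)  # [('pt', 2, 100.0)] - the 'en' group (3 movies) is filtered out
```

Note that HAVING runs after the grouping, which is why it can reference aggregates like COUNT(*) that WHERE cannot.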
You can think of a JOIN operation as a horizontal concatenation of two tables, where the rows are aligned according to some field. The basic syntax of a JOIN operation is the following: ~~~ SELECT * FROM tableA JOIN tableB ON tableA.Key = tableB.Key; ~~~ The column(s) we're joining the tables on is called the key. There are 4 main types of joins, which determine which rows should be returned based on how the two tables overlap: * **INNER JOIN**: this selects the rows for which the value of the Key fields exists in the two tables. <img src="media/inner_join.png" width=250 align=left> * **LEFT JOIN**: this selects the rows with a value of Key that exists in the two tables plus the rows where the value of Key only exists in table A. <img src="media/left_join.png" width=230 align=left> * **RIGHT JOIN**: this selects the rows with a value of Key that exists in the two tables plus the rows where the value of Key only exists in table B. <img src="media/right_join.png" width=230 align=left> * **FULL OUTER JOIN**: this selects all the rows that exist in the two tables. <img src="media/full_outer_join.png" width=210 align=left> ### Examples #### 1. Let's see some examples, starting with table movie_actor. This table has a primary key on column *id* and two foreign keys on columns *movie_id* and *actor_id*. The *movie_id* foreign key references the *id* column of the movie table, and the *actor_id* foreign key references the *id* column of the actor table. ~~~ SELECT * FROM movie_actor LIMIT 2; ~~~ id |movie_id |actor_id |character_name | ---|---------|---------|---------------| 1 |2 |46265 |Alan Parrish | 2 |2 |52473 |Judy Shepherd | Here we're joining table movie_actor with table movie, using an INNER JOIN.
~~~ SELECT ma.*, m.original_title FROM movie AS m INNER JOIN movie_actor AS ma ON ma.movie_id = m.id LIMIT 3; ~~~ id |movie_id |actor_id |character_name | original_title | ---|---------|---------|---------------|----------------| 1 |2 |46265 |Alan Parrish |Jumanji | 2 |2 |52473 |Judy Shepherd |Jumanji | 3 |2 |114219 |Peter Shepherd |Jumanji | In this query, note that: * we're using the **alias** m to designate table movie and the alias ma to designate table movie_actor * we're selecting all the columns in table movie_actor by doing: SELECT ma.* * we're only selecting column original_title from table movie by doing: SELECT m.original_title #### 2. In the second example, we'll use table *oscar*. This table has a primary key on column *id* and a foreign key on column *movie_id* referencing column *id* of the movie table. ``` SELECT * FROM oscar LIMIT 3; ``` id |year |movie_id | ---|---------|---------| 1 |1929 |1817 | 2 |1930 |1818 | 3 |1931 |1820 | By doing an INNER JOIN between table *oscar* and table *movie*, we get a table with all the movies that have an oscar. ``` SELECT o.id AS oscar_id, o.year, o.movie_id, m.original_title FROM oscar AS o INNER JOIN movie AS m ON m.id = o.movie_id LIMIT 3; ``` oscar_id |year |movie_id |original_title | ---------|---------|---------|--------------------| 1 |1929 |1817 |Wings | 2 |1930 |1818 |The Broadway Melody | 3 |1931 |1820 |Cimarron | In this query, note that we're using an alias on the o.id column in order to rename it to oscar_id. #### 3. In the third example, we'll see a LEFT JOIN, using tables movie and oscar. The result has NULL values in columns oscar_id and year, for the movies that didn't win an oscar.
``` SELECT m.id AS movie_id, m.original_title, o.id AS oscar_id, o.year FROM movie AS m LEFT JOIN oscar AS o ON m.id = o.movie_id; ``` movie_id |original_title |oscar_id |year | ---------|-----------------------------|---------|-----| 107 |Catwalk |NULL |NULL | 108 |Headless Body in Topless Bar |NULL |NULL | 109 |Braveheart |67 |1996 | 110 |Taxi Driver |NULL |NULL | #### 4. The last example is a slightly more complicated query. We want to count how many movies we have per genre. For this, we'll need to join tables movie, movie_genre, and genre. You can check what they look like in the SQL client. ``` SELECT g.name AS genre, COUNT(*) AS movie_count FROM movie AS m INNER JOIN movie_genre AS mg ON m.id = mg.movie_id INNER JOIN genre AS g ON mg.genre_id = g.id GROUP BY g.name ORDER BY COUNT(*) DESC LIMIT 3; ``` genre |movie_count | ---------|------------| Drama |20265 | Comedy |13182 | Thriller |7624 | You can check some more examples at the end of the notebook. ## 7. SQL queries and python Since SQL databases are such a powerful data source, we're interested in exporting the results of SQL queries to the python realm. And of course pandas, with the **read_sql_query** function, has our back! Let's see how to connect to a SQL database and do a query using python and pandas. ``` # required imports import pandas as pd from sqlalchemy import create_engine # Db settings - PostgreSQL username = 'ldsa_student' password = 'XXX' # the password is not XXX by the way host_name = 'batch4-s02-db-instance.ctq2kxc7kx1i.eu-west-1.rds.amazonaws.com' port = 5432 db_name = 'batch4_s02_db' schema = 'blu03' conn_str = 'postgresql://{}:{}@{}:{}/{}'.format(username, password, host_name, port, db_name) conn_args = {'options': '-csearch_path={}'.format(schema)} ``` We need to create an engine object, which is an object that stores the connection settings required to communicate with our database. For this, we're using a package called [SQLAlchemy](https://www.sqlalchemy.org/).
Then, we just need to call the **read_sql_query** function, passing it the SQL query as a string and the engine. ``` engine = create_engine(conn_str, connect_args=conn_args) query = 'SELECT * FROM movie LIMIT 5;' pd.read_sql_query(query, engine) ``` When queries are a little more complicated, it may not be very convenient to write them inline with the code. In those cases, we can write the query in a file and load it with some python code. ``` # when queries are a bit more complicated, with open('queries/select_movies_and_genres.sql') as query_from_file: df = pd.read_sql_query(query_from_file.read(), engine) df ``` ## 8. Other SQL DBMSs So far we have been using a PostgreSQL database somewhere in the cloud. Now, we're going to try to connect to and query a [SQLite](https://www.sqlite.org/index.html) database. Unlike the Postgres database that we've been working with, a SQLite database is just a file! You can read more about that [here](https://www.sqlite.org/onefile.html). In the root of this repository, you have a file called **the_movies.db**. This file is a SQLite database and now, we're going to connect to it and query it using pandas. You can also connect to it using DBeaver, give it a try! :) If you've been using PSequel, you need to install another client that can connect to SQLite databases, like [DB Browser](https://sqlitebrowser.org/). ``` # Local SQLite Db db_file_path = 'data/the_movies.db' conn_str = 'sqlite:///{}'.format(db_file_path) engine = create_engine(conn_str) query = 'SELECT * FROM movie LIMIT 5' pd.read_sql_query(query, engine) ``` ## 9. Optional ### 9.1 More SQL queries (advanced) Which actor plays in the most movies?
``` SELECT a.name AS actor_name, COUNT(*) AS movie_count FROM actor AS a INNER JOIN movie_actor AS ma ON a.id = ma.actor_id GROUP BY a.name ORDER BY COUNT(*) DESC LIMIT 1; ``` --- For each genre, select: * the number of movies * the number of oscars * the average budget in thousands of dollars as an integer (this is called a CAST) * the average runtime as an integer Then, sort the result by the number of oscars per genre in descending order, and return only the first 3 results. ``` SELECT g.name AS genre_name, COUNT(m.id) AS n_movies, COUNT(DISTINCT o.id) AS n_oscars, AVG(m.budget / 1000) :: INT AS avg_budget_in_k_dollars, AVG(m.runtime) :: INT AS avg_runtime FROM genre AS g INNER JOIN movie_genre AS mg ON g.id = mg.genre_id INNER JOIN movie AS m ON mg.movie_id = m.id LEFT JOIN oscar AS o ON m.id = o.movie_id GROUP BY genre_name ORDER BY COUNT(DISTINCT o.id) DESC LIMIT 3; ``` ### 9.2 Database design If you're wondering why the tables in the MoviesDb are modeled the way they are, in particular regarding the relationships between the tables, read [this](https://database.guide/the-3-types-of-relationships-in-database-design/) article. It's about the types of relationships between tables in relational databases. When designing a database, you must first define its purpose and map out the different types of information that it will store. Then, you can divide this information into tables, defining each table's primary key and the relationships between tables (i.e. the foreign keys). There are several other more advanced design guidelines and best practices (naming conventions, normalization, which columns to index...). You can find many resources online on this topic, if you're interested in exploring further.
# Optimization Methods Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: <img src="images/cost.jpg" style="width:650px;height:300px;"> <caption><center> <u> **Figure 1** </u>: **Minimizing the cost is like finding the lowest point in a hilly landscape**<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption> **Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`. To get started, run the following code to import the libraries you will need. ``` import numpy as np import matplotlib.pyplot as plt import scipy.io import math import sklearn import sklearn.datasets from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset from testCases import * %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' ``` ## 1 - Gradient Descent A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. 
The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$ where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding. ``` # GRADED FUNCTION: update_parameters_with_gd def update_parameters_with_gd(parameters, grads, learning_rate): """ Update parameters using one step of gradient descent Arguments: parameters -- python dictionary containing your parameters to be updated: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients to update each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl learning_rate -- the learning rate, scalar. Returns: parameters -- python dictionary containing your updated parameters """ L = len(parameters) // 2 # number of layers in the neural networks # Update rule for each parameter for l in range(L): ### START CODE HERE ### (approx. 
2 lines) parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads['dW' + str(l+1)] parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads['db' + str(l+1)] ### END CODE HERE ### return parameters parameters, grads, learning_rate = update_parameters_with_gd_test_case() parameters = update_parameters_with_gd(parameters, grads, learning_rate) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td > **W1** </td> <td > [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.74604067] [-0.75184921]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.88020257] [ 0.02561572] [ 0.57539477]] </td> </tr> </table> A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. - **(Batch) Gradient Descent**: ``` python X = data_input Y = labels parameters = initialize_parameters(layers_dims) for i in range(0, num_iterations): # Forward propagation a, caches = forward_propagation(X, parameters) # Compute cost. cost = compute_cost(a, Y) # Backward propagation. grads = backward_propagation(a, caches, parameters) # Update parameters. 
parameters = update_parameters(parameters, grads) ``` - **Stochastic Gradient Descent**: ```python X = data_input Y = labels parameters = initialize_parameters(layers_dims) for i in range(0, num_iterations): for j in range(0, m): # Forward propagation a, caches = forward_propagation(X[:,j], parameters) # Compute cost cost = compute_cost(a, Y[:,j]) # Backward propagation grads = backward_propagation(a, caches, parameters) # Update parameters. parameters = update_parameters(parameters, grads) ``` In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this: <img src="images/kiank_sgd.png" style="width:750px;height:250px;"> <caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **SGD vs GD**<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption> **Note** also that implementing SGD requires 3 for-loops in total: 1. Over the number of iterations 2. Over the $m$ training examples 3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$) In practice, you'll often get faster results if you use neither the whole training set nor just a single training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples. <img src="images/kiank_minibatch.png" style="width:750px;height:250px;"> <caption><center> <u> <font color='purple'> **Figure 2** </u>: <font color='purple'> **SGD vs Mini-Batch GD**<br> "+" denotes a minimum of the cost.
Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption> <font color='blue'> **What you should remember**: - The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step. - You have to tune a learning rate hyperparameter $\alpha$. - With a well-tuned mini-batch size, it usually outperforms both gradient descent and stochastic gradient descent (particularly when the training set is large). ## 2 - Mini-Batch Gradient descent Let's learn how to build mini-batches from the training set (X, Y). There are two steps: - **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. <img src="images/kiank_shuffle.png" style="width:550px;height:300px;"> - **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini-batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this: <img src="images/kiank_partition.png" style="width:550px;height:300px;"> **Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches: ```python first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size] second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size] ...
``` Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$. ``` # GRADED FUNCTION: random_mini_batches def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0): """ Creates a list of random minibatches from (X, Y) Arguments: X -- input data, of shape (input size, number of examples) Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples) mini_batch_size -- size of the mini-batches, integer Returns: mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y) """ np.random.seed(seed) # To make your "random" minibatches the same as ours m = X.shape[1] # number of training examples mini_batches = [] # Step 1: Shuffle (X, Y) permutation = list(np.random.permutation(m)) shuffled_X = X[:, permutation] shuffled_Y = Y[:, permutation].reshape((1,m)) # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case. num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini-batches of size mini_batch_size in your partitioning for k in range(0, num_complete_minibatches): ### START CODE HERE ### (approx. 2 lines) mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size] mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size] ### END CODE HERE ### mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) # Handling the end case (last mini-batch < mini_batch_size) if m % mini_batch_size != 0: ### START CODE HERE ### (approx.
2 lines) mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size : ] mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : ] ### END CODE HERE ### mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) return mini_batches X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case() mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size) print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape)) print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape)) print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape)) print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape)) print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape)) print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape)) print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3])) ``` **Expected Output**: <table style="width:50%"> <tr> <td > **shape of the 1st mini_batch_X** </td> <td > (12288, 64) </td> </tr> <tr> <td > **shape of the 2nd mini_batch_X** </td> <td > (12288, 64) </td> </tr> <tr> <td > **shape of the 3rd mini_batch_X** </td> <td > (12288, 20) </td> </tr> <tr> <td > **shape of the 1st mini_batch_Y** </td> <td > (1, 64) </td> </tr> <tr> <td > **shape of the 2nd mini_batch_Y** </td> <td > (1, 64) </td> </tr> <tr> <td > **shape of the 3rd mini_batch_Y** </td> <td > (1, 20) </td> </tr> <tr> <td > **mini batch sanity check** </td> <td > [ 0.90085595 -0.7612069 0.2344157 ] </td> </tr> </table> <font color='blue'> **What you should remember**: - Shuffling and Partitioning are the two steps required to build mini-batches - Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128. 
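The shuffle-and-partition recipe above can also be sketched on plain example indices with the standard library alone. This is an illustrative sketch (the `partition_indices` helper is invented here), not the graded numpy implementation:

```python
import math
import random

def partition_indices(m, mini_batch_size=64, seed=0):
    """Shuffle the example indices, then slice them into mini-batches;
    the last batch may be smaller than mini_batch_size."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    permutation = list(range(m))
    rng.shuffle(permutation)
    num_complete = math.floor(m / mini_batch_size)
    batches = [permutation[k * mini_batch_size:(k + 1) * mini_batch_size]
               for k in range(num_complete)]
    if m % mini_batch_size != 0:         # handle the leftover examples
        batches.append(permutation[num_complete * mini_batch_size:])
    return batches

batches = partition_indices(148, mini_batch_size=64)
print([len(b) for b in batches])  # [64, 64, 20]
```

With 148 examples and batches of 64 you get two full batches plus one of 20, matching the $\lfloor m / mini\_batch\_size \rfloor$ arithmetic above.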
## 3 - Momentum Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations. Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. <img src="images/opt_momentum.png" style="width:400px;height:250px;"> <caption><center> <u><font color='purple'>**Figure 3**</u><font color='purple'>: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center> **Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is: for $l =1,...,L$: ```python v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)]) v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)]) ``` **Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
``` # GRADED FUNCTION: initialize_velocity def initialize_velocity(parameters): """ Initializes the velocity as a python dictionary with: - keys: "dW1", "db1", ..., "dWL", "dbL" - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters. Arguments: parameters -- python dictionary containing your parameters. parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl Returns: v -- python dictionary containing the current velocity. v['dW' + str(l)] = velocity of dWl v['db' + str(l)] = velocity of dbl """ L = len(parameters) // 2 # number of layers in the neural networks v = {} # Initialize velocity for l in range(L): ### START CODE HERE ### (approx. 2 lines) v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape) v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape) ### END CODE HERE ### return v parameters = initialize_velocity_test_case() v = initialize_velocity(parameters) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) ``` **Expected Output**: <table style="width:40%"> <tr> <td > **v["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> </table> **Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: $$ \begin{cases} v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\ W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}} \end{cases}\tag{3}$$ $$\begin{cases} v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\ b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$ where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. 
Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding. ``` # GRADED FUNCTION: update_parameters_with_momentum def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate): """ Update parameters using Momentum Arguments: parameters -- python dictionary containing your parameters: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients for each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl v -- python dictionary containing the current velocity: v['dW' + str(l)] = ... v['db' + str(l)] = ... beta -- the momentum hyperparameter, scalar learning_rate -- the learning rate, scalar Returns: parameters -- python dictionary containing your updated parameters v -- python dictionary containing your updated velocities """ L = len(parameters) // 2 # number of layers in the neural networks # Momentum update for each parameter for l in range(L): ### START CODE HERE ### (approx. 
4 lines) # compute velocities v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 -beta) * grads["dW" + str(l+1)] v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)] # update parameters parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)] parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)] ### END CODE HERE ### return parameters, v parameters, grads, v = update_parameters_with_momentum_test_case() parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) ``` **Expected Output**: <table style="width:90%"> <tr> <td > **W1** </td> <td > [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.74493465] [-0.76027113]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.87809283] [ 0.04055394] [ 0.58207317]] </td> </tr> <tr> <td > **v["dW1"]** </td> <td > [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[-0.01228902] [-0.09357694]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.02344157] [ 0.16598022] [ 0.07420442]]</td> </tr> </table> **Note** that: - The velocity is initialized with zeros. 
So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps. - If $\beta = 0$, then this just becomes standard gradient descent without momentum. **How do you choose $\beta$?** - The larger the momentum $\beta$ is, the smoother the update, because it takes more of the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much. - Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default. - Tuning the optimal $\beta$ for your model may require trying several values to see what works best in terms of reducing the value of the cost function $J$. <font color='blue'> **What you should remember**: - Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent. - You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$. ## 4 - Adam Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. **How does Adam work?** 1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). 2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). 3. It updates parameters in a direction based on combining information from "1" and "2".
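Both $v$ and $s$ are exponentially weighted averages, and the bias correction divides by $1 - \beta^t$ to compensate for the zero initialization. A tiny scalar sketch (illustrative numbers, not part of the assignment) shows that for a constant gradient the corrected average recovers the gradient's value immediately:

```python
beta = 0.9
v = 0.0
grads = [1.0, 1.0, 1.0]  # a constant gradient stream, for illustration

for t, g in enumerate(grads, start=1):
    v = beta * v + (1 - beta) * g        # exponentially weighted average
    v_corrected = v / (1 - beta ** t)    # bias correction, as in Adam
    print(t, round(v, 3), round(v_corrected, 3))

# The uncorrected v starts near zero (0.1, 0.19, 0.271, ...),
# while the corrected value is 1.0 at every step.
```

The same correction is applied to $s$, with $\beta_2$ in place of $\beta_1$.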
The update rule is, for $l = 1, ..., L$: $$\begin{cases} v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\ v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\ s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\ s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\ W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon} \end{cases}$$ where: - t counts the number of Adam steps taken - L is the number of layers - $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages. - $\alpha$ is the learning rate - $\varepsilon$ is a very small number to avoid dividing by zero As usual, we will store all parameters in the `parameters` dictionary. **Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information. **Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is: for $l = 1, ..., L$: ```python v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)]) v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)]) s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)]) s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)]) ``` ``` # GRADED FUNCTION: initialize_adam def initialize_adam(parameters) : """ Initializes v and s as two python dictionaries with: - keys: "dW1", "db1", ..., "dWL", "dbL" - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters. Arguments: parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl parameters["b" + str(l)] = bl Returns: v -- python dictionary that will contain the exponentially weighted average of the gradient. v["dW" + str(l)] = ... v["db" + str(l)] = ... s -- python dictionary that will contain the exponentially weighted average of the squared gradient. s["dW" + str(l)] = ... s["db" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural networks v = {} s = {} # Initialize v, s. Input: "parameters". Outputs: "v, s". for l in range(L): ### START CODE HERE ### (approx. 4 lines) v["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l + 1)]) v["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l + 1)]) s["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l + 1)]) s["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l + 1)]) ### END CODE HERE ### return v, s parameters = initialize_adam_test_case() v, s = initialize_adam(parameters) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) print("s[\"dW1\"] = " + str(s["dW1"])) print("s[\"db1\"] = " + str(s["db1"])) print("s[\"dW2\"] = " + str(s["dW2"])) print("s[\"db2\"] = " + str(s["db2"])) ``` **Expected Output**: <table style="width:40%"> <tr> <td > **v["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> <tr> <td > **s["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **s["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **s["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **s["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> </table> **Exercise**: Now, implement the parameters update with Adam. 
Recall the general update rule is, for $l = 1, ..., L$: $$\begin{cases} v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\ v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\ s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\ s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\ W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon} \end{cases}$$ **Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding. ``` # GRADED FUNCTION: update_parameters_with_adam def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8): """ Update parameters using Adam Arguments: parameters -- python dictionary containing your parameters: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients for each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl v -- Adam variable, moving average of the first gradient, python dictionary s -- Adam variable, moving average of the squared gradient, python dictionary learning_rate -- the learning rate, scalar. 
beta1 -- Exponential decay hyperparameter for the first moment estimates beta2 -- Exponential decay hyperparameter for the second moment estimates epsilon -- hyperparameter preventing division by zero in Adam updates Returns: parameters -- python dictionary containing your updated parameters v -- Adam variable, moving average of the first gradient, python dictionary s -- Adam variable, moving average of the squared gradient, python dictionary """ L = len(parameters) // 2 # number of layers in the neural networks v_corrected = {} # Initializing first moment estimate, python dictionary s_corrected = {} # Initializing second moment estimate, python dictionary # Perform Adam update on all parameters for l in range(L): # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v". ### START CODE HERE ### (approx. 2 lines) v["dW" + str(l + 1)] = beta1 * v["dW" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)] v["db" + str(l + 1)] = beta1 * v["db" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)] ### END CODE HERE ### # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected". ### START CODE HERE ### (approx. 2 lines) v_corrected["dW" + str(l + 1)] = v["dW" + str(l + 1)] / (1 - np.power(beta1, t)) v_corrected["db" + str(l + 1)] = v["db" + str(l + 1)] / (1 - np.power(beta1, t)) ### END CODE HERE ### # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s". ### START CODE HERE ### (approx. 2 lines) s["dW" + str(l + 1)] = beta2 * s["dW" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2) s["db" + str(l + 1)] = beta2 * s["db" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2) ### END CODE HERE ### # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected". ### START CODE HERE ### (approx. 
2 lines) s_corrected["dW" + str(l + 1)] = s["dW" + str(l + 1)] / (1 - np.power(beta2, t)) s_corrected["db" + str(l + 1)] = s["db" + str(l + 1)] / (1 - np.power(beta2, t)) ### END CODE HERE ### # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters". ### START CODE HERE ### (approx. 2 lines) parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v_corrected["dW" + str(l + 1)] / np.sqrt(s_corrected["dW" + str(l + 1)] + epsilon) parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v_corrected["db" + str(l + 1)] / np.sqrt(s_corrected["db" + str(l + 1)] + epsilon) ### END CODE HERE ### return parameters, v, s parameters, grads, v, s = update_parameters_with_adam_test_case() parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) print("s[\"dW1\"] = " + str(s["dW1"])) print("s[\"db1\"] = " + str(s["db1"])) print("s[\"dW2\"] = " + str(s["dW2"])) print("s[\"db2\"] = " + str(s["db2"])) ``` **Expected Output**: <table> <tr> <td > **W1** </td> <td > [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.75225313] [-0.75376553]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.14121081 -1.09245036 -0.16498684]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.88529978] [ 0.03477238] [ 0.57537385]] </td> </tr> <tr> <td > **v["dW1"]** </td> <td > [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[-0.01228902] [-0.09357694]] </td> </tr> 
<tr>
    <td > **v["dW2"]** </td>
    <td > [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
    <td > **v["db2"]** </td>
    <td > [[ 0.02344157] [ 0.16598022] [ 0.07420442]] </td>
</tr>
<tr>
    <td > **s["dW1"]** </td>
    <td > [[ 0.00121136 0.00131039 0.00081287] [ 0.0002525 0.00081154 0.00046748]] </td>
</tr>
<tr>
    <td > **s["db1"]** </td>
    <td > [[ 1.51020075e-05] [ 8.75664434e-04]] </td>
</tr>
<tr>
    <td > **s["dW2"]** </td>
    <td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04] [ 1.57413361e-04 4.72206320e-04 7.14372576e-04] [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td>
</tr>
<tr>
    <td > **s["db2"]** </td>
    <td > [[ 5.49507194e-05] [ 2.75494327e-03] [ 5.50629536e-04]] </td>
</tr>
</table>

You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.

## 5 - Model with different optimization algorithms

Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)

```
train_X, train_Y = load_dataset()
```

We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch **Gradient Descent**: it will call your function:
    - `update_parameters_with_gd()`
- Mini-batch **Momentum**: it will call your functions:
    - `initialize_velocity()` and `update_parameters_with_momentum()`
- Mini-batch **Adam**: it will call your functions:
    - `initialize_adam()` and `update_parameters_with_adam()`

```
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
          beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
    """
    3-layer neural network model which can be run in different optimizer modes.
Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples) layers_dims -- python list, containing the size of each layer learning_rate -- the learning rate, scalar. mini_batch_size -- the size of a mini batch beta -- Momentum hyperparameter beta1 -- Exponential decay hyperparameter for the past gradients estimates beta2 -- Exponential decay hyperparameter for the past squared gradients estimates epsilon -- hyperparameter preventing division by zero in Adam updates num_epochs -- number of epochs print_cost -- True to print the cost every 1000 epochs Returns: parameters -- python dictionary containing your updated parameters """ L = len(layers_dims) # number of layers in the neural networks costs = [] # to keep track of the cost t = 0 # initializing the counter required for Adam update seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours # Initialize parameters parameters = initialize_parameters(layers_dims) # Initialize the optimizer if optimizer == "gd": pass # no initialization required for gradient descent elif optimizer == "momentum": v = initialize_velocity(parameters) elif optimizer == "adam": v, s = initialize_adam(parameters) # Optimization loop for i in range(num_epochs): # Define the random minibatches. 
# We increment the seed to reshuffle the dataset differently after each epoch
        seed = seed + 1
        minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

        for minibatch in minibatches:

            # Select a minibatch
            (minibatch_X, minibatch_Y) = minibatch

            # Forward propagation
            a3, caches = forward_propagation(minibatch_X, parameters)

            # Compute cost
            cost = compute_cost(a3, minibatch_Y)

            # Backward propagation
            grads = backward_propagation(minibatch_X, minibatch_Y, caches)

            # Update parameters
            if optimizer == "gd":
                parameters = update_parameters_with_gd(parameters, grads, learning_rate)
            elif optimizer == "momentum":
                parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
            elif optimizer == "adam":
                t = t + 1  # Adam counter
                parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                               t, learning_rate, beta1, beta2, epsilon)

        # Print the cost every 1000 epochs
        if print_cost and i % 1000 == 0:
            print("Cost after epoch %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
```

You will now run this 3-layer neural network with each of the 3 optimization methods.

### 5.1 - Mini-batch Gradient descent

Run the following code to see how the model does with mini-batch gradient descent.

```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

### 5.2 - Mini-batch gradient descent with momentum

Run the following code to see how the model does with momentum.
Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.

```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

### 5.3 - Mini-batch with Adam mode

Run the following code to see how the model does with Adam.

```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

### 5.4 - Summary

<table>
  <tr>
    <td> **optimization method** </td>
    <td> **accuracy** </td>
    <td> **cost shape** </td>
  </tr>
  <tr>
    <td> Gradient descent </td>
    <td> 79.7% </td>
    <td> oscillations </td>
  </tr>
  <tr>
    <td> Momentum </td>
    <td> 79.7% </td>
    <td> oscillations </td>
  </tr>
  <tr>
    <td> Adam </td>
    <td> 94% </td>
    <td> smoother </td>
  </tr>
</table>

Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.

Adam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.
Some advantages of Adam include: - Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum) - Usually works well even with little tuning of hyperparameters (except $\alpha$) **References**: - Adam paper: https://arxiv.org/pdf/1412.6980.pdf
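As a quick sanity check outside the graded functions, the full Adam update can be exercised on a single scalar parameter. This is a sketch minimizing $f(w) = w^2$ (gradient $2w$), not part of the assignment:

```python
import math

# Scalar Adam minimizing f(w) = w**2, starting from w = 5.0
w, v, s = 5.0, 0.0, 0.0
alpha, beta1, beta2, epsilon = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 201):
    g = 2 * w                            # gradient of f at w
    v = beta1 * v + (1 - beta1) * g      # first moment (momentum term)
    s = beta2 * s + (1 - beta2) * g * g  # second moment (RMSProp term)
    v_corrected = v / (1 - beta1 ** t)   # bias corrections
    s_corrected = s / (1 - beta2 ** t)
    w -= alpha * v_corrected / (math.sqrt(s_corrected) + epsilon)

print(f"w after 200 Adam steps: {w:.4f}")  # close to the minimum at 0
```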
# Cloud APIs for Computer Vision: Up and Running in 15 Minutes

This code is part of [Chapter 8 - Cloud APIs for Computer Vision: Up and Running in 15 Minutes](https://learning.oreilly.com/library/view/practical-deep-learning/9781492034858/ch08.html).

## Compile Results for Image Tagging

In this file we will compile the results using the ground truth and the collected data for all the test images. You will need to edit the following:

1. Please edit `data_path` with the path to the test images that have been used for the experiments.
2. If you used different filenames for the prediction files, please edit the filenames accordingly.
3. Please download Gensim, which we will use to compare word similarity between the ground truth and the predicted classes. Unzip and place the `GoogleNews-vectors-negative300.bin` within `data_path`. Download at: https://github.com/mmihaltz/word2vec-GoogleNews-vectors

Let's start by loading the ground truth JSON file.

```
data_path = "/home/deepvision/production/code/chapter-8/image-tagging/data-may-2020"
validation_images_path = data_path + "/val2017"

import json

with open(data_path + "/final-ground-truth-tags.json") as json_file:
    ground_truth = json.load(json_file)

# helper functions to get image name from image id and vice versa
def get_id_from_name(name):
    return int(name.split("/")[-1].split(".jpg")[0])

def get_name_from_id(image_id):
    filename = validation_images_path + \
        "/000000" + str(image_id) + ".jpg"
    return filename

# Class ids to their string equivalent
with open(data_path + '/class-id-to-name.json') as f:
    class_id_to_name = json.load(f)
```

## Helper functions

```
def convert_class_id_to_string(l):
    result = []
    for class_id in l:
        result.append(class_id_to_name[str(class_id)])
    return result

def parse(l):
    l1 = []
    for each in l:
        if len(each) >= 2:
            l1.append(each.lower())
    return l1

def get_class_from_prediction(l):
    return list([item[0] for item in l])
```

Load the Word2Vec model, which we will use to compare the similarity between a ground-truth word and the predicted tags.

```
import gensim
from gensim.models import Word2Vec

model = gensim.models.KeyedVectors.load_word2vec_format(data_path + '/GoogleNews-vectors-negative300.bin', binary=True)

def check_gensim(word, pred):
    # get similarity between word and all predicted words in returned predictions
    similarity = 0
    for each_pred in pred:
        # check if returned prediction exists in the Word2Vec model
        if each_pred not in model:
            continue
        current_similarity = model.similarity(word, each_pred)
        #print("Word=\t", word, "\tPred=\t", each_pred, "\tSim=\t", current_similarity)
        if current_similarity > similarity:
            similarity = current_similarity
    return similarity
```

### Parsing

Each cloud provider sends the results in a slightly different format, and we need to parse each of them correctly. So, we will develop a parsing function unique to each cloud provider.
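One caveat before the provider-specific parsers: the `get_name_from_id` helper above builds filenames by prepending a literal `"000000"` prefix, which only produces valid names for ids of one fixed width. A more robust variant (a hypothetical helper, assuming COCO's 12-digit zero-padded filenames) pads with `zfill`:

```python
def get_name_from_id_padded(image_id, images_path="val2017"):
    # COCO-style filenames are zero-padded to 12 digits, e.g. 000000000139.jpg
    return "{0}/{1}.jpg".format(images_path, str(image_id).zfill(12))

print(get_name_from_id_padded(139))  # val2017/000000000139.jpg
```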
#### Microsoft Specific Parsing

```
def microsoft_name(image_id):
    return "000000" + str(image_id) + ".jpg"

def parse_microsoft_inner(word):
    b = word.replace("_", " ")
    c = b.lower().strip().split()
    return c

def parse_microsoft_response_v1(l):
    result = []
    for each in l["categories"]:
        a = each["name"]
        result.extend(parse_microsoft_inner(a))
    for each in l["tags"]:
        a = each["name"]
        result.extend(parse_microsoft_inner(a))
        if "hint" in each:
            a = each["hint"]
            result.extend(parse_microsoft_inner(a))
    return list(set(result))

def parse_microsoft_response(l):
    result = []
    for each in l:
        result.extend(parse_microsoft_inner(each[0]))
    return list(set(result))
```

#### Amazon Specific Parsing

```
def parse_amazon_response(l):
    result = []
    for each in l:
        result.append(each.lower())
    return list(set(result))
```

#### Google specific parsing

```
def parse_google_response(l):
    l1 = []
    for each in l:
        l1.append(each[0].lower())
        if len(each[0].split()) > 1:
            l1.extend(each[0].split())
    return l1
```

The `threshold` defines how similar two words (the ground truth and a predicted category name) need to be, according to Word2Vec, for the prediction to count as correct. You can play around with the `threshold`.
```
threshold = .3

def calculate_score(ground_truth, predictions, arg):
    total = 0
    correct = 0
    avg_ground_truth_length = 0
    avg_amazon_length = 0
    avg_microsoft_length = 0
    avg_google_length = 0

    for each in ground_truth.keys():
        pred = []
        gt = list(set(convert_class_id_to_string(ground_truth[each])))
        if gt == None or len(gt) < 1:
            continue
        total += len(gt)
        avg_ground_truth_length += len(gt)

        if arg == "google" and get_name_from_id(each) in predictions:
            pred = predictions[get_name_from_id(each)]
            if pred == None or len(pred) <= 0:
                continue
            pred = parse_google_response(predictions[get_name_from_id(each)])
            avg_google_length += len(pred)
        elif arg == "microsoft" and microsoft_name(each) in predictions:
            pred = predictions[microsoft_name(each)]
            if pred == None or len(pred) <= 0:
                continue
            pred = parse_microsoft_response(predictions[microsoft_name(each)])
            avg_microsoft_length += len(pred)
        elif arg == "amazon" and get_name_from_id(each) in predictions:
            pred = predictions[get_name_from_id(each)]
            if pred == None or len(pred) <= 0:
                continue
            pred = parse_amazon_response(predictions[get_name_from_id(each)])
            avg_amazon_length += len(pred)

        match = 0
        match_word = []
        for each_word in gt:
            # Check if ground truth exists "as is" in the entire list of predictions
            if each_word in pred:
                correct += 1
                match += 1
                match_word.append(each_word)
            # Also, ensure that ground truth exists in the Word2Vec model
            elif each_word not in model:
                continue
            # Otherwise, check for similarity between the ground truth and the predictions
            elif check_gensim(each_word, pred) >= threshold:
                correct += 1
                match += 1
                match_word.append(each_word)

    if arg == "google":
        print("Google's Stats\nTotal number of tags returned = ", avg_google_length,
              "\nAverage number of tags returned per image = ", avg_google_length * 1.0 / len(ground_truth.keys()))
    elif arg == "amazon":
        print("Amazon's Stats\nTotal number of tags returned = ", avg_amazon_length,
              "\nAverage number of tags returned per image = ", avg_amazon_length * 1.0 / len(ground_truth.keys()))
    elif arg == "microsoft":
        print("Microsoft's Stats\nTotal number of tags returned = ", avg_microsoft_length,
              "\nAverage number of tags returned per image = ", avg_microsoft_length * 1.0 / len(ground_truth.keys()))

    print("\nGround Truth Stats\nTotal number of Ground Truth tags = ", total,
          "\nTotal number of correct tags predicted = ", correct)
    print("\nScore = ", float(correct) / float(total))
```

Now, we are ready to load the predictions that we obtained by using the APIs!

```
# Google
with open(data_path + '/google-tags.json') as f:
    google = json.load(f)

# Get Google Score
calculate_score(ground_truth, google, "google")
```

**Note**: Microsoft's API for object classification has two versions, and the results from the two APIs differ. If you want to check out Microsoft's outdated (v1) API, then use the `microsoft_tags.json` file. We will be using the latest version (i.e., `microsoft_tags_DESCRIPTION.json`) for our November 2019 experiments.

```
# Microsoft
with open(data_path + '/microsoft-tags.json') as f:
    microsoft = json.load(f)

# Get Microsoft score
calculate_score(ground_truth, microsoft, "microsoft")

# Amazon
with open(data_path + '/amazon-tags.json') as f:
    amazon = json.load(f)

# Get Amazon score
calculate_score(ground_truth, amazon, "amazon")
```
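The thresholded matching inside `calculate_score` can be illustrated without loading the Word2Vec model. Here the similarity scores are made up, standing in for `model.similarity(word, each_pred)`:

```python
threshold = 0.3

# Hypothetical similarity scores replacing model.similarity(word, each_pred)
similarities = {("dog", "puppy"): 0.81, ("dog", "car"): 0.12}

def best_similarity(word, preds):
    # Mirrors check_gensim: keep the highest similarity over all predictions
    return max((similarities.get((word, p), 0.0) for p in preds), default=0.0)

print(best_similarity("dog", ["puppy", "car"]) >= threshold)  # True  -> counted as correct
print(best_similarity("dog", ["car"]) >= threshold)           # False -> not counted
```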
```
import sys
sys.path.append('C:\\Users\\Danilo Santos\\Desktop\\Qualificação PPGCC\\abordagem\\RFNS')
from grimoire.BaseEnginnering import BaseEnginnering

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, LabelEncoder

df_acute = pd.read_csv('../datasets/acute/diagnosis.csv', engine='c', memory_map=True, low_memory=True)
df_acute.head()
df_acute.shape

X = df_acute[['temperatura', 'nausea', 'dorlombar', 'urinepushing', 'miccao', 'queimacao', 'inflamacao']]
# Labels
y = df_acute['target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=100, shuffle=True, stratify=y)
X_train.shape
X_train.head()

# n_values was removed from scikit-learn; categories='auto' covers it
encoder_X = OneHotEncoder(categories='auto', drop=None, sparse=False, dtype=np.float64, handle_unknown='ignore')
# NOTE: the line below overrides the OneHotEncoder above; only the LabelEncoder is actually used
encoder_X = LabelEncoder()

# (np.float removed from the list: it was a deprecated alias of the builtin float, already listed)
type_not_encoder = [int, float, complex, np.int8, np.int16, np.int32, np.int64, np.float64, np.complex64]

df_encoded = pd.DataFrame()
dict_feature_encoded = {}

for col in X_train.columns:
    # use .iloc[0] instead of [0]: after the shuffled split, label 0 may not be in the index
    if type(X_train[col].iloc[0]) in type_not_encoder:
        df_encoded.insert(loc=df_encoded.shape[1], column=col, value=X_train[col])
    else:
        df_col = X_train.loc[:, [col]]
        unique_categories = df_col[col].unique()[::-1]
        dict_feature_encoded[col] = unique_categories
        df_tmp = encoder_X.fit_transform(df_col)
        if (len(df_tmp.shape) == 1):
            df_encoded.insert(loc=df_encoded.shape[1], column='{0}_all'.format(col), value=df_tmp)
        else:
            for i, c in zip(range(df_tmp.shape[1]), unique_categories):
                df_encoded.insert(loc=df_encoded.shape[1], column='{0}_{1}'.format(col, c), value=df_tmp[:, i])

print('Shape Before: ', df_encoded.shape)
df_encoded.head()

# test with the engineering model
model = BaseEnginnering()
model.train_X = X_train
print('Shape: ', model.train_X.shape)
model.train_X.head()

# enable encoding of the data (X)
model.encoder_enable = True
model.encoder_data = True

print('Shape Before: ', model.train_X.shape)
model.get_transform()
print('Shape After: ', model.train_X.shape)
print(model.train_X)
model.train_X.head()

X_train.head()

# Inspect the fitted encoder (these attributes belong to the OneHotEncoder API;
# with the LabelEncoder override above, use encoder_X.classes_ instead)
encoder_X.get_feature_names()
encoder_X.categories_
dict_feature_encoded
list(encoder_X.get_feature_names())
```
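For reference, the difference between the two encoders tried above can be shown on a toy column. This is a dependency-free sketch of what they compute, not the scikit-learn implementation: `LabelEncoder` maps each category to an integer code, while `OneHotEncoder` expands each category into indicator columns:

```python
def label_encode(values):
    # Sorted unique categories -> integer codes (what LabelEncoder does)
    cats = sorted(set(values))
    mapping = {c: i for i, c in enumerate(cats)}
    return [mapping[v] for v in values], cats

def one_hot(values):
    # One indicator column per category (what OneHotEncoder does, dense output)
    codes, cats = label_encode(values)
    return [[1.0 if i == code else 0.0 for i in range(len(cats))] for code in codes], cats

codes, cats = label_encode(["yes", "no", "yes"])
print(codes)                      # [1, 0, 1]  ('no' -> 0, 'yes' -> 1)
print(one_hot(["yes", "no"])[0])  # [[0.0, 1.0], [1.0, 0.0]]
```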
# job1: Anti-Terrorism Intelligence Analysis Metro [Link](https://www.indeed.com/jobs?q=Intelligence%20Analysis&l=Washington%2C%20DC&vjk=2666edf71c7b3200&advn=477295927342672) # job2: Counterintelligence Investigator Leido [Link](https://www.indeed.com/viewjob?jk=696eb25ddaa842f0&tk=1d6gov55q270i002&from=serp&vjs=3&advn=8766782434770833&adid=149689977&sjdu=0ZFwD5rbjMRcHz87Kzx_g0cw0T9TkucWQF5N5b0w5Auz7smrsuwW8GraeB26d0QsuqSNiDiEm65m5BjzkzMUQDZkYE2IFBHGyBO7C8I20FLqDvUZ-Tz7kgvVVMmsL9MNzbHflQ-VK8Tc6bHMIqKUL7g3WQtOk7nlIWXpMKPdZ9g0Nm_SJd9fWSAsMFSKnXIb) # Q1 ``` import xlwt from collections import Counter from nltk.corpus import stopwords stop = set(stopwords.words('english')) book = xlwt.Workbook() # create a new excel file sheet_test = book.add_sheet('word_count') # add a new sheet i = 0 sheet_test.write(i,0,'word') # write the header of the first column sheet_test.write(i,1,'count') # write the header of the second column sheet_test.write(i,2,'ratio') # write the header of the third column with open('Job2.txt','r',encoding='utf-8', errors = 'ignore') as text_word: # define the location of your txt file # convert all the word into lower cases # filter out stop words word_list = [i for i in text_word.read().lower().split() if i not in stop] word_total = word_list.__len__() count_result = Counter(word_list) for result in count_result.most_common(10): i = i+1 sheet_test.write(i,0,result[0]) sheet_test.write(i,1,result[1]) sheet_test.write(i,2,(result[1]/word_total)) book.save('Job2.xls')# define the location of your excel file import xlwt from collections import Counter from nltk.corpus import stopwords stop = set(stopwords.words('english')) book = xlwt.Workbook() # create a new excel file sheet_test = book.add_sheet('word_count') # add a new sheet i = 0 sheet_test.write(i,0,'word') # write the header of the first column sheet_test.write(i,1,'count') # write the header of the second column sheet_test.write(i,2,'ratio') # write the header of the third column with 
open('Job.txt','r',encoding='utf-8', errors = 'ignore') as text_word: # define the location of your txt file # convert all the word into lower cases # filter out stop words word_list = [i for i in text_word.read().lower().split() if i not in stop] word_total = word_list.__len__() count_result = Counter(word_list) for result in count_result.most_common(10): i = i+1 sheet_test.write(i,0,result[0]) sheet_test.write(i,1,result[1]) sheet_test.write(i,2,(result[1]/word_total)) book.save('Job1.xls')# define the location of your excel file ``` # Q2 <img src="Job1.png"> <img src="Job2.png"> # Q3 ``` with open('Job.txt','r', encoding='utf-8', errors = 'ignore') as job1: with open('job2.txt','r', encoding='utf-8', errors = 'ignore') as job2: job1_str =job1.read() job2_str =job2.read() job1_set = set (job1_str.split()) job2_set = set(job2_str.split()) print(job1_set.difference(job2_set)) print(job2_set.difference(job1_set)) ``` # Q4 ``` from fuzzywuzzy import fuzz with open('Job.txt','r', encoding='utf-8', errors = 'ignore') as job1: with open('job2.txt','r', encoding='utf-8', errors = 'ignore') as job2: job1_str =job1.read() job2_str =job2.read() print(fuzz.token_sort_ratio(job1_str,job2_str)) # job1_set = set (job1_str.split()) # job2_set = set(job2_str.split()) # print(job1_set.difference(job2_set)) # print(job2_set.difference(job1_set)) ```
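`fuzz.token_sort_ratio` sorts the words of each string before comparing, so word order does not change the score. A rough stdlib-only approximation using `difflib` (not fuzzywuzzy's exact algorithm) looks like this:

```python
from difflib import SequenceMatcher

def token_sort_ratio_approx(a, b):
    # Sort tokens so word order does not matter, then compare the joined strings
    sa = " ".join(sorted(a.lower().split()))
    sb = " ".join(sorted(b.lower().split()))
    return round(SequenceMatcher(None, sa, sb).ratio() * 100)

print(token_sort_ratio_approx("intelligence analysis job", "job intelligence analysis"))  # 100
```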
```
# HIDDEN
# The standard set of libraries we need
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make plots look a little bit more fancy
plt.style.use('fivethirtyeight')

# The standard library for data in tables
import pandas as pd

# A tiny function to read a file directly from a URL
from urllib.request import urlopen
def read_url(url):
    return urlopen(url).read().decode()

# HIDDEN
# Read the text of Pride and Prejudice, split into chapters.
book_url = 'http://www.gutenberg.org/ebooks/42671.txt.utf-8'
book_text = read_url(book_url)
# Break the text into Chapters
book_chapters = book_text.split('CHAPTER ')
# Drop the first "Chapter" - it's the Project Gutenberg header
book_chapters = book_chapters[1:]
```

[Pride and Prejudice](https://en.wikipedia.org/wiki/Pride_and_Prejudice) is the story of five sisters: Jane, Elizabeth, Mary, Kitty and Lydia, and their journey through the social life of the early 19th century. You may remember that Elizabeth ends up marrying the dashing and aloof Mr Darcy, but along the way, the feckless Lydia runs off with the equally feckless Mr Wickham, and the slightly useless Mr Bingley wants to marry Jane, the most beautiful of the sisters.

We can see when these characters appear in the book, by counting how many times their names are mentioned in each chapter.

```
# Count how many times the characters appear in each chapter.
counts = pd.DataFrame.from_dict({
    'Elizabeth': np.char.count(book_chapters, 'Elizabeth'),
    'Darcy': np.char.count(book_chapters, 'Darcy'),
    'Lydia': np.char.count(book_chapters, 'Lydia'),
    'Wickham': np.char.count(book_chapters, 'Wickham'),
    'Bingley': np.char.count(book_chapters, 'Bingley'),
    'Jane': np.char.count(book_chapters, 'Jane')},
)

# The cumulative counts:
# how many times in Chapter 1, how many times in Chapters 1 and 2, and so on.
cum_counts = counts.cumsum() # Add the chapter numbers number_of_chapters = len(book_chapters) cum_counts['Chapter'] = np.arange(number_of_chapters) # Do the plot cum_counts.plot(x='Chapter') plt.title('Cumulative Number of Times Each Name Appears'); ``` In the plot above, the horizontal axis shows chapter numbers and the vertical axis shows how many times each character has been mentioned up to and including that chapter. Notice first that Elizabeth and Darcy are the main characters. Around chapter 13 we see Wickham and Lydia spike up, as they run away together, and mentions of Darcy flatten off, when he goes to look for them. Around chapter 50 we see Jane and Bingley being mentioned at a very similar rate, as Bingley proposes, and Jane accepts. {% data8page Literary_Characters %}
<a href="https://colab.research.google.com/github/papagorgio23/Python101/blob/master/Py_202_F%2B_Model_Answers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # Installing Library !pip install pydata_google_auth # import base packages into the namespace for this program import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import BernoulliNB from sklearn import model_selection from sklearn.model_selection import train_test_split from sklearn.model_selection import StratifiedKFold from yellowbrick.classifier import ROCAUC from yellowbrick.classifier import ClassificationReport ``` # F+ Lead Scoring Model ![NPV Model](https://raw.githubusercontent.com/papagorgio23/Python101/master/NPV%20Model.png) ## Get Data ``` # Using GBQ shout Out to Hughes import pandas_gbq import pydata_google_auth SCOPES = [ 'https://www.googleapis.com/auth/cloud-platform', 'https://www.googleapis.com/auth/drive', ] # get credentials credentials = pydata_google_auth.get_user_credentials( SCOPES, auth_local_webserver=False) # GBQ sql = """ SELECT id , co_app_verifiable_annual_income__c , loan_use__c , employment_status__c , amount_of_loan_requested__c , fico__c , lti__c , bcc0300__c , ndi_ratio__c , utm_source__c , CASE WHEN date_funded__c IS NOT NULL THEN 1 ELSE 0 END AS Fund FROM `freedom-dw.salesforce_ffam.application__c` a WHERE createddate >= '2019-07-01' AND a.loan_officer__c IS NOT NULL """ # run query fplus_df = pandas_gbq.read_gbq(sql, project_id='ffn-dw-bigquery-prd', credentials=credentials, dialect='standard') ``` ## View Data ``` # view top 5 observations fplus_df.head() # view bottom 5 observations fplus_df.tail() # view columns, data types fplus_df.info() # get summary statistics fplus_df.describe() # check NA fplus_df.isna().sum() ``` # Investigate Variables ## 1) - Co App We only care if an 
application includes a Co-App. We use co_app_verifiable_annual_income__c to tell us whether one is present. <br> * If NA then No Co App * If Not NA then Co App <br> ### View data ``` # drop na for co app and view distribution co_app = fplus_df.dropna(subset=['co_app_verifiable_annual_income__c']) co_app.info() co_app.describe() ``` ### Plot ``` # Cut the window in 2 parts f, (ax_hist, ax_box) = plt.subplots(2, sharex=True, gridspec_kw={"height_ratios": (.85, .25)}) # Add a graph in each part sns.boxplot(co_app['co_app_verifiable_annual_income__c'], ax=ax_box) sns.distplot(co_app['co_app_verifiable_annual_income__c'], ax=ax_hist) # Remove x axis name for the boxplot ax_box.set(xlabel='') ``` ### Remove outliers ``` ## remove outliers: keep co-app incomes below $200k co_app = co_app[co_app['co_app_verifiable_annual_income__c'] < 200000] # Cut the window in 2 parts f, (ax_hist, ax_box) = plt.subplots(2, sharex=True, gridspec_kw={"height_ratios": (.85, .25)}) # Add a graph in each part sns.boxplot(co_app['co_app_verifiable_annual_income__c'], ax=ax_box) sns.distplot(co_app['co_app_verifiable_annual_income__c'], ax=ax_hist) # Remove x axis name for the boxplot ax_box.set(xlabel='') def get_co_app_cat(co_app_income): """This function creates a Co-App Flag""" if pd.isnull(co_app_income): return 0 return 1 # apply function to dataset fplus_df['co_app'] = fplus_df['co_app_verifiable_annual_income__c'].apply(get_co_app_cat) # view counts now fplus_df['co_app'].value_counts() # view percentage fplus_df['co_app'].value_counts()/len(fplus_df) ``` ### How does Co-App affect Funding?
``` # plot 2 barplots, 1 showing total counts, 1 showing fund rate by co app fig, (axis1,axis2) = plt.subplots(1,2, sharex=True, figsize=(10,5)) # barplot sns.countplot(x='co_app', data=fplus_df, order=[1,0], ax=axis1) # Get Fund rate for Co-App vs No Co-App (group by co_app, average the Fund flag) fund_perc = fplus_df[["Fund", "co_app"]].groupby(['co_app'], as_index=False).mean() sns.barplot(x='co_app', y='Fund', data=fund_perc, order=[1,0], ax=axis2) ``` ## 2) - Loan to Income (LTI) ### View Data ``` # drop na for lti and view distribution loan_income = fplus_df.dropna(subset=['lti__c']) loan_income['lti__c'].describe() # Cut the window in 2 parts f, (ax_hist, ax_box) = plt.subplots(2, sharex=True, gridspec_kw={"height_ratios": (.85, .25)}) # Add a graph in each part sns.boxplot(loan_income['lti__c'], ax=ax_box) sns.distplot(loan_income['lti__c'], ax=ax_hist) # Remove x axis name for the boxplot ax_box.set(xlabel='') ``` ### Remove Outliers ``` 744/139230  # proportion of outlier rows (about 0.5%) ``` ## 3) - Marketing Channel (utm_source) ``` # Cross tab cm = sns.light_palette("green", as_cmap=True) pd.crosstab(fplus_df['utm_source__c'], fplus_df['Fund'], values=fplus_df['Fund'], aggfunc=[len, np.mean], margins=True, margins_name="Total").style.background_gradient(cmap = cm) # pivot table def fund_rate(x): '''This function is used within pivot_tables to calculate ratios''' return np.sum(x) / np.size(x) pivoting = pd.pivot_table(fplus_df, values='Fund', index='utm_source__c', aggfunc={'Fund': [np.sum, np.size, fund_rate]}) print(pivoting.sort_values('size', ascending=False).to_string()) ``` ## 4) - FICO ## 5) - Employment ## 6) - Debt to Income (NDI) ## 7) - Debt to Income Squared ## 8) - Loan Use ## 9) - Bank Card Trades (bcc0300) ## 10) - Loan Amount # Prediction Model ``` model_data = fplus_df.dropna() def get_co_app_cat(co_app_income): if pd.isnull(co_app_income): return 0 return 1 model_data['co_app_verifiable_annual_income__c'] = model_data['co_app_verifiable_annual_income__c'].apply(get_co_app_cat) # create dummies cat_vars =
['loan_use__c','employment_status__c','utm_source__c'] for var in cat_vars: cat_list = pd.get_dummies(model_data[var], prefix=var) model_data = model_data.join(cat_list) data_vars = model_data.columns.values.tolist() to_keep = [i for i in data_vars if i not in cat_vars] model_data = model_data[to_keep] model_data.info() model_data = model_data.drop(['id'], axis = 1) # segment out the variable we are predicting from the rest of the data y = model_data['Fund'] X = model_data.drop(['Fund'], axis = 1) # Split the data into a train and test dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) # 10 Fold Cross Validation (shuffle is required when passing random_state) seed = 7 kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed) model = LogisticRegression() results = model_selection.cross_val_score(model, X, y, cv=kfold) print("Accuracy: %.3f%% (%.3f%%)" % (results.mean()*100.0, results.std()*100.0)) # Set classes for all plots classes = ['Not Fund', 'Fund'] # Instantiate the visualizer with the classification model visualizer = ROCAUC(model, classes=classes) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data g = visualizer.poof() # Instantiate the classification model and visualizer visualizer = ClassificationReport(model, classes=classes, support=True) visualizer.fit(X_train, y_train) # Fit the visualizer and the model visualizer.score(X_test, y_test) # Evaluate the model on the test data g = visualizer.poof() # Draw/show/poof the data ``` ## Data Transformation Functions ``` import numpy as np import pandas as pd def get_co_app_cat(co_app_income): if pd.isnull(co_app_income): return 0 return 1 def get_loan_use_cat(loan_use): if pd.isnull(loan_use): return 3 loan_use = loan_use.strip() if (loan_use == 'Credit Card Refinancing'): return 4 if (loan_use in ['Major Purchase','Other']):
return 2 if (loan_use == 'Auto Purchase'): return 1 return 3 def get_employment_cat(employment_status): if pd.isnull(employment_status): employment_status = '' employment_status = employment_status.strip() if (employment_status == 'Retired'): return 4 if (employment_status in ['Self-employed']): return 2 if (employment_status in ['Other', '']): return 1 return 3 def get_loan_amount_cat(loan_amount): if pd.isnull(loan_amount): return 1 loan_amount = float(loan_amount) if (loan_amount < 15000): return 4 if (loan_amount >= 15000) and (loan_amount < 20000): return 3 if (loan_amount >= 20000) and (loan_amount < 25000): return 2 return 1 def get_mkt_chan_cat(utm_source): if pd.isnull(utm_source): return 3 utm_source = utm_source.strip() if (utm_source in ['creditkarma','nerdwallet']): return 7 if (utm_source in ['credible','experian']): return 6 if (utm_source in ['website', 'google','msn','ck','nerd', '115','save','dm','SLH','201']): return 5 if (utm_source in ['facebook', 'even','uplift','Quinstreet', 'Personalloanpro','113']): return 2 if (utm_source in ['LendEDU', 'monevo','247','sfl']): return 1 return 3 def get_fico(fico): if pd.isnull(fico): return 990 fico = int(fico) if (fico >= 9000): return 990 if fico < 600: return 990 return fico def get_lti(lti): if pd.isnull(lti): return 36 lti = float(lti) if (lti > 35) or (lti < 1): return 36 if (lti >= 1) and (lti < 2): return 35 if (lti >= 2) and (lti < 3): return 34 return np.floor(lti) def get_bcc0300(bcc0300): if pd.isnull(bcc0300): return 99 bcc0300 = int(bcc0300) if (bcc0300 >= 25): return 30 return bcc0300 def get_ndi_ratio(ndi_ratio): if pd.isnull(ndi_ratio): return 5 ndi_ratio = float(ndi_ratio) ndi_ratio = np.floor(ndi_ratio) if (ndi_ratio < 10): return 5 if (ndi_ratio > 75): return 80 return ndi_ratio # Compute the correlation matrix corr = model_data.corr() # Generate a mask for the upper triangle (np.bool was removed from NumPy; use the builtin bool) mask = np.zeros_like(corr, dtype=bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) # select only numeric data num_data = fplus_df.select_dtypes(include='number') # Compute the correlation matrix corr = num_data.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) ```
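The binning helpers in the "Data Transformation Functions" section above are defined but never applied in this part of the notebook. The sketch below shows how two of them could be mapped onto the raw query columns; the toy `raw` frame and the `co_app`/`lti_bin` column names are my assumptions for illustration, only the helper logic is copied from the notebook.

```python
import numpy as np
import pandas as pd

# Binning helpers copied from the notebook above
def get_co_app_cat(co_app_income):
    """Flag: 1 if a co-applicant income is present, else 0."""
    return 0 if pd.isnull(co_app_income) else 1

def get_lti(lti):
    """Bucket loan-to-income into the notebook's scoring bins."""
    if pd.isnull(lti):
        return 36
    lti = float(lti)
    if (lti > 35) or (lti < 1):
        return 36
    if 1 <= lti < 2:
        return 35
    if 2 <= lti < 3:
        return 34
    return np.floor(lti)

# Toy rows standing in for the GBQ result (column names from the query above)
raw = pd.DataFrame({
    "co_app_verifiable_annual_income__c": [50000.0, None],
    "lti__c": [12.4, None],
})
raw["co_app"] = raw["co_app_verifiable_annual_income__c"].apply(get_co_app_cat)
raw["lti_bin"] = raw["lti__c"].apply(get_lti)
print(raw[["co_app", "lti_bin"]])
```

The same `.apply()` pattern would extend to `get_fico`, `get_ndi_ratio`, and the other helpers before the features are fed to the model.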
**Tools - pandas** *The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools. The main data structure is the `DataFrame`, which you can think of as an in-memory 2D table (like a spreadsheet, with column names and row labels). Many features available in Excel are available programmatically, such as creating pivot tables, computing columns based on other columns, plotting graphs, etc. You can also group rows by column value, or join tables much like in SQL. Pandas is also great at handling time series.* Prerequisites: * NumPy – if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now. <table align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/tools_pandas.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> </table> # Setup First, let's import `pandas`. People usually import it as `pd`: ``` import pandas as pd ``` # `Series` objects The `pandas` library contains these useful data structures: * `Series` objects, which we will discuss now. A `Series` object is a 1D array, similar to a column in a spreadsheet (with a column name and row labels). * `DataFrame` objects. This is a 2D table, similar to a spreadsheet (with column names and row labels). * `Panel` objects. You can see a `Panel` as a dictionary of `DataFrame`s. These are less used, so we will not discuss them here. ## Creating a `Series` Let's start by creating our first `Series` object!
``` s = pd.Series([2,-1,3,5]) s ``` ## Similar to a 1D `ndarray` `Series` objects behave much like one-dimensional NumPy `ndarray`s, and you can often pass them as parameters to NumPy functions: ``` import numpy as np np.exp(s) ``` Arithmetic operations on `Series` are also possible, and they apply *elementwise*, just like for `ndarray`s: ``` s + [1000,2000,3000,4000] ``` Similar to NumPy, if you add a single number to a `Series`, that number is added to all items in the `Series`. This is called *broadcasting*: ``` s + 1000 ``` The same is true for all binary operations such as `*` or `/`, and even conditional operations: ``` s < 0 ``` ## Index labels Each item in a `Series` object has a unique identifier called the *index label*. By default, it is simply the rank of the item in the `Series` (starting at `0`) but you can also set the index labels manually: ``` s2 = pd.Series([68, 83, 112, 68], index=["alice", "bob", "charles", "darwin"]) s2 ``` You can then use the `Series` just like a `dict`: ``` s2["bob"] ``` You can still access the items by integer location, like in a regular array: ``` s2[1] ``` To make it clear when you are accessing by label or by integer location, it is recommended to always use the `loc` attribute when accessing by label, and the `iloc` attribute when accessing by integer location: ``` s2.loc["bob"] s2.iloc[1] ``` Slicing a `Series` also slices the index labels: ``` s2.iloc[1:3] ``` This can lead to unexpected results when using the default numeric labels, so be careful: ``` surprise = pd.Series([1000, 1001, 1002, 1003]) surprise surprise_slice = surprise[2:] surprise_slice ``` Oh look! The first element has index label `2`. The element with index label `0` is absent from the slice: ``` try: surprise_slice[0] except KeyError as e: print("Key error:", e) ``` But remember that you can access elements by integer location using the `iloc` attribute.
This illustrates another reason why it's always better to use `loc` and `iloc` to access `Series` objects: ``` surprise_slice.iloc[0] ``` ## Init from `dict` You can create a `Series` object from a `dict`. The keys will be used as index labels: ``` weights = {"alice": 68, "bob": 83, "colin": 86, "darwin": 68} s3 = pd.Series(weights) s3 ``` You can control which elements you want to include in the `Series` and in what order by explicitly specifying the desired `index`: ``` s4 = pd.Series(weights, index = ["colin", "alice"]) s4 ``` ## Automatic alignment When an operation involves multiple `Series` objects, `pandas` automatically aligns items by matching index labels. ``` print(s2.keys()) print(s3.keys()) s2 + s3 ``` The resulting `Series` contains the union of index labels from `s2` and `s3`. Since `"colin"` is missing from `s2` and `"charles"` is missing from `s3`, these items have a `NaN` result value (i.e. Not-a-Number, meaning *missing*). Automatic alignment is very handy when working with data that may come from various sources with varying structure and missing items. But if you forget to set the right index labels, you can have surprising results: ``` s5 = pd.Series([1000,1000,1000,1000]) print("s2 =", s2.values) print("s5 =", s5.values) s2 + s5 ``` Pandas could not align the `Series`, since their labels do not match at all, hence the full `NaN` result. ## Init with a scalar You can also initialize a `Series` object using a scalar and a list of index labels: all items will be set to the scalar. ``` meaning = pd.Series(42, ["life", "universe", "everything"]) meaning ``` ## `Series` name A `Series` can have a `name`: ``` s6 = pd.Series([83, 68], index=["bob", "alice"], name="weights") s6 ``` ## Plotting a `Series` Pandas makes it easy to plot `Series` data using matplotlib (for more details on matplotlib, check out the [matplotlib tutorial](tools_matplotlib.ipynb)).
Just import matplotlib and call the `plot()` method: ``` %matplotlib inline import matplotlib.pyplot as plt temperatures = [4.4,5.1,6.1,6.2,6.1,6.1,5.7,5.2,4.7,4.1,3.9,3.5] s7 = pd.Series(temperatures, name="Temperature") s7.plot() plt.show() ``` There are *many* options for plotting your data. It is not necessary to list them all here: if you need a particular type of plot (histograms, pie charts, etc.), just look for it in the excellent [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) section of pandas' documentation, and look at the example code. # Handling time Many datasets have timestamps, and pandas is awesome at manipulating such data: * it can represent periods (such as 2016Q3) and frequencies (such as "monthly"), * it can convert periods to actual timestamps, and *vice versa*, * it can resample data and aggregate values any way you like, * it can handle timezones. ## Time range Let's start by creating a time series using `pd.date_range()`. This returns a `DatetimeIndex` containing one datetime per hour for 12 hours starting on October 29th 2016 at 5:30pm. ``` dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H') dates ``` This `DatetimeIndex` may be used as an index in a `Series`: ``` temp_series = pd.Series(temperatures, dates) temp_series ``` Let's plot this series: ``` temp_series.plot(kind="bar") plt.grid(True) plt.show() ``` ## Resampling Pandas lets us resample a time series very simply. Just call the `resample()` method and specify a new frequency: ``` temp_series_freq_2H = temp_series.resample("2H") temp_series_freq_2H ``` The resampling operation is actually a deferred operation, which is why we did not get a `Series` object, but a `DatetimeIndexResampler` object instead. 
To actually perform the resampling operation, we can simply call the `mean()` method: Pandas will compute the mean of every pair of consecutive hours: ``` temp_series_freq_2H = temp_series_freq_2H.mean() ``` Let's plot the result: ``` temp_series_freq_2H.plot(kind="bar") plt.show() ``` Note how the values have automatically been aggregated into 2-hour periods. If we look at the 6-8pm period, for example, we had a value of `5.1` at 6:30pm, and `6.1` at 7:30pm. After resampling, we just have one value of `5.6`, which is the mean of `5.1` and `6.1`. Rather than computing the mean, we could have used any other aggregation function, for example we can decide to keep the minimum value of each period: ``` temp_series_freq_2H = temp_series.resample("2H").min() temp_series_freq_2H ``` Or, equivalently, we could use the `apply()` method instead: ``` temp_series_freq_2H = temp_series.resample("2H").apply(np.min) temp_series_freq_2H ``` ## Upsampling and interpolation This was an example of downsampling. We can also upsample (i.e. increase the frequency), but this creates holes in our data: ``` temp_series_freq_15min = temp_series.resample("15Min").mean() temp_series_freq_15min.head(n=10) # `head` displays the top n values ``` One solution is to fill the gaps by interpolating. We just call the `interpolate()` method. The default is to use linear interpolation, but we can also select another method, such as cubic interpolation: ``` temp_series_freq_15min = temp_series.resample("15Min").interpolate(method="cubic") temp_series_freq_15min.head(n=10) temp_series.plot(label="Period: 1 hour") temp_series_freq_15min.plot(label="Period: 15 minutes") plt.legend() plt.show() ``` ## Timezones By default datetimes are *naive*: they are not aware of timezones, so 2016-10-30 02:30 might mean October 30th 2016 at 2:30am in Paris or in New York.
We can make datetimes timezone *aware* by calling the `tz_localize()` method: ``` temp_series_ny = temp_series.tz_localize("America/New_York") temp_series_ny ``` Note that `-04:00` is now appended to all the datetimes. This means that these datetimes refer to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) - 4 hours. We can convert these datetimes to Paris time like this: ``` temp_series_paris = temp_series_ny.tz_convert("Europe/Paris") temp_series_paris ``` You may have noticed that the UTC offset changes from `+02:00` to `+01:00`: this is because France switches to winter time at 3am that particular night (time goes back to 2am). Notice that 2:30am occurs twice! Let's go back to a naive representation (if you log some data hourly using local time, without storing the timezone, you might get something like this): ``` temp_series_paris_naive = temp_series_paris.tz_localize(None) temp_series_paris_naive ``` Now `02:30` is really ambiguous. If we try to localize these naive datetimes to the Paris timezone, we get an error: ``` try: temp_series_paris_naive.tz_localize("Europe/Paris") except Exception as e: print(type(e)) print(e) ``` Fortunately using the `ambiguous` argument we can tell pandas to infer the right DST (Daylight Saving Time) based on the order of the ambiguous timestamps: ``` temp_series_paris_naive.tz_localize("Europe/Paris", ambiguous="infer") ``` ## Periods The `pd.period_range()` function returns a `PeriodIndex` instead of a `DatetimeIndex`. For example, let's get all quarters in 2016 and 2017: ``` quarters = pd.period_range('2016Q1', periods=8, freq='Q') quarters ``` Adding a number `N` to a `PeriodIndex` shifts the periods by `N` times the `PeriodIndex`'s frequency: ``` quarters + 3 ``` The `asfreq()` method lets us change the frequency of the `PeriodIndex`. All periods are lengthened or shortened accordingly. 
For example, let's convert all the quarterly periods to monthly periods (zooming in): ``` quarters.asfreq("M") ``` By default, the `asfreq` zooms on the end of each period. We can tell it to zoom on the start of each period instead: ``` quarters.asfreq("M", how="start") ``` And we can zoom out: ``` quarters.asfreq("A") ``` Of course we can create a `Series` with a `PeriodIndex`: ``` quarterly_revenue = pd.Series([300, 320, 290, 390, 320, 360, 310, 410], index = quarters) quarterly_revenue quarterly_revenue.plot(kind="line") plt.show() ``` We can convert periods to timestamps by calling `to_timestamp`. By default this will give us the first day of each period, but by setting `how` and `freq`, we can get the last hour of each period: ``` last_hours = quarterly_revenue.to_timestamp(how="end", freq="H") last_hours ``` And back to periods by calling `to_period`: ``` last_hours.to_period() ``` Pandas also provides many other time-related functions that we recommend you check out in the [documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html). To whet your appetite, here is one way to get the last business day of each month in 2016, at 9am: ``` months_2016 = pd.period_range("2016", periods=12, freq="M") one_day_after_last_days = months_2016.asfreq("D") + 1 last_bdays = one_day_after_last_days.to_timestamp() - pd.tseries.offsets.BDay() last_bdays.to_period("H") + 9 ``` # `DataFrame` objects A DataFrame object represents a spreadsheet, with cell values, column names and row index labels. You can define expressions to compute columns based on other columns, create pivot-tables, group rows, draw graphs, etc. You can see `DataFrame`s as dictionaries of `Series`. 
## Creating a `DataFrame` You can create a DataFrame by passing a dictionary of `Series` objects: ``` people_dict = { "weight": pd.Series([68, 83, 112], index=["alice", "bob", "charles"]), "birthyear": pd.Series([1984, 1985, 1992], index=["bob", "alice", "charles"], name="year"), "children": pd.Series([0, 3], index=["charles", "bob"]), "hobby": pd.Series(["Biking", "Dancing"], index=["alice", "bob"]), } people = pd.DataFrame(people_dict) people ``` A few things to note: * the `Series` were automatically aligned based on their index, * missing values are represented as `NaN`, * `Series` names are ignored (the name `"year"` was dropped), * `DataFrame`s are displayed nicely in Jupyter notebooks, woohoo! You can access columns pretty much as you would expect. They are returned as `Series` objects: ``` people["birthyear"] ``` You can also get multiple columns at once: ``` people[["birthyear", "hobby"]] ``` If you pass a list of columns and/or index row labels to the `DataFrame` constructor, it will guarantee that these columns and/or rows will exist, in that order, and no other column/row will exist. 
For example: ``` d2 = pd.DataFrame( people_dict, columns=["birthyear", "weight", "height"], index=["bob", "alice", "eugene"] ) d2 ``` Another convenient way to create a `DataFrame` is to pass all the values to the constructor as an `ndarray`, or a list of lists, and specify the column names and row index labels separately: ``` values = [ [1985, np.nan, "Biking", 68], [1984, 3, "Dancing", 83], [1992, 0, np.nan, 112] ] d3 = pd.DataFrame( values, columns=["birthyear", "children", "hobby", "weight"], index=["alice", "bob", "charles"] ) d3 ``` To specify missing values, you can either use `np.nan` or NumPy's masked arrays: ``` masked_array = np.ma.asarray(values, dtype=object) masked_array[(0, 2), (1, 2)] = np.ma.masked d3 = pd.DataFrame( masked_array, columns=["birthyear", "children", "hobby", "weight"], index=["alice", "bob", "charles"] ) d3 ``` Instead of an `ndarray`, you can also pass a `DataFrame` object: ``` d4 = pd.DataFrame( d3, columns=["hobby", "children"], index=["alice", "bob"] ) d4 ``` It is also possible to create a `DataFrame` with a dictionary (or list) of dictionaries (or lists): ``` people = pd.DataFrame({ "birthyear": {"alice":1985, "bob": 1984, "charles": 1992}, "hobby": {"alice":"Biking", "bob": "Dancing"}, "weight": {"alice":68, "bob": 83, "charles": 112}, "children": {"bob": 3, "charles": 0} }) people ``` ## Multi-indexing If all columns are tuples of the same size, then they are understood as a multi-index. The same goes for row index labels.
For example: ``` d5 = pd.DataFrame( { ("public", "birthyear"): {("Paris","alice"):1985, ("Paris","bob"): 1984, ("London","charles"): 1992}, ("public", "hobby"): {("Paris","alice"):"Biking", ("Paris","bob"): "Dancing"}, ("private", "weight"): {("Paris","alice"):68, ("Paris","bob"): 83, ("London","charles"): 112}, ("private", "children"): {("Paris", "alice"):np.nan, ("Paris","bob"): 3, ("London","charles"): 0} } ) d5 ``` You can now get a `DataFrame` containing all the `"public"` columns very simply: ``` d5["public"] d5["public", "hobby"] # Same result as d5["public"]["hobby"] ``` ## Dropping a level Let's look at `d5` again: ``` d5 ``` There are two levels of columns, and two levels of indices. We can drop a column level by calling `droplevel()` (the same goes for indices): ``` d5.columns = d5.columns.droplevel(level = 0) d5 ``` ## Transposing You can swap columns and indices using the `T` attribute: ``` d6 = d5.T d6 ``` ## Stacking and unstacking levels Calling the `stack()` method will push the lowest column level after the lowest index: ``` d7 = d6.stack() d7 ``` Note that many `NaN` values appeared. This makes sense because many new combinations did not exist before (e.g. there was no `bob` in `London`). Calling `unstack()` will do the reverse, once again creating many `NaN` values. ``` d8 = d7.unstack() d8 ``` If we call `unstack` again, we end up with a `Series` object: ``` d9 = d8.unstack() d9 ``` The `stack()` and `unstack()` methods let you select the `level` to stack/unstack. You can even stack/unstack multiple levels at once: ``` d10 = d9.unstack(level = (0,1)) d10 ``` ## Most methods return modified copies As you may have noticed, the `stack()` and `unstack()` methods do not modify the object they apply to. Instead, they work on a copy and return that copy. This is true of most methods in pandas. ## Accessing rows Let's go back to the `people` `DataFrame`: ``` people ``` The `loc` attribute lets you access rows instead of columns.
The result is a `Series` object in which the `DataFrame`'s column names are mapped to row index labels: ``` people.loc["charles"] ``` You can also access rows by integer location using the `iloc` attribute: ``` people.iloc[2] ``` You can also get a slice of rows, and this returns a `DataFrame` object: ``` people.iloc[1:3] ``` Finally, you can pass a boolean array to get the matching rows: ``` people[np.array([True, False, True])] ``` This is most useful when combined with boolean expressions: ``` people[people["birthyear"] < 1990] ``` ## Adding and removing columns You can generally treat `DataFrame` objects like dictionaries of `Series`, so the following work fine: ``` people people["age"] = 2018 - people["birthyear"] # adds a new column "age" people["over 30"] = people["age"] > 30 # adds another column "over 30" birthyears = people.pop("birthyear") del people["children"] people birthyears ``` When you add a new column from a `Series`, it is aligned on the index: missing rows are filled with NaN, and extra rows are ignored: ``` people["pets"] = pd.Series({"bob": 0, "charles": 5, "eugene":1}) # alice is missing, eugene is ignored people ``` When adding a new column, it is added at the end (on the right) by default. You can also insert a column anywhere else using the `insert()` method: ``` people.insert(1, "height", [172, 181, 185]) people ``` ## Assigning new columns You can also create new columns by calling the `assign()` method.
Note that this returns a new `DataFrame` object, the original is not modified: ``` people.assign( body_mass_index = people["weight"] / (people["height"] / 100) ** 2, has_pets = people["pets"] > 0 ) ``` Note that you cannot access columns created within the same assignment: ``` try: people.assign( body_mass_index = people["weight"] / (people["height"] / 100) ** 2, overweight = people["body_mass_index"] > 25 ) except KeyError as e: print("Key error:", e) ``` The solution is to split this assignment into two consecutive assignments: ``` d6 = people.assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) d6.assign(overweight = d6["body_mass_index"] > 25) ``` Having to create a temporary variable `d6` is not very convenient. You may want to just chain the assignment calls, but it does not work because the `people` object is not actually modified by the first assignment: ``` try: (people .assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) .assign(overweight = people["body_mass_index"] > 25) ) except KeyError as e: print("Key error:", e) ``` But fear not, there is a simple solution. You can pass a function to the `assign()` method (typically a `lambda` function), and this function will be called with the `DataFrame` as a parameter: ``` (people .assign(body_mass_index = lambda df: df["weight"] / (df["height"] / 100) ** 2) .assign(overweight = lambda df: df["body_mass_index"] > 25) ) ``` Problem solved! ## Evaluating an expression A great feature supported by pandas is expression evaluation. This relies on the `numexpr` library which must be installed. ``` people.eval("weight / (height/100) ** 2 > 25") ``` Assignment expressions are also supported.
Let's set `inplace=True` to directly modify the `DataFrame` rather than getting a modified copy: ``` people.eval("body_mass_index = weight / (height/100) ** 2", inplace=True) people ``` You can use a local or global variable in an expression by prefixing it with `'@'`: ``` overweight_threshold = 30 people.eval("overweight = body_mass_index > @overweight_threshold", inplace=True) people ``` ## Querying a `DataFrame` The `query()` method lets you filter a `DataFrame` based on a query expression: ``` people.query("age > 30 and pets == 0") ``` ## Sorting a `DataFrame` You can sort a `DataFrame` by calling its `sort_index` method. By default it sorts the rows by their index label, in ascending order, but let's reverse the order: ``` people.sort_index(ascending=False) ``` Note that `sort_index` returned a sorted *copy* of the `DataFrame`. To modify `people` directly, we can set the `inplace` argument to `True`. Also, we can sort the columns instead of the rows by setting `axis=1`: ``` people.sort_index(axis=1, inplace=True) people ``` To sort the `DataFrame` by the values instead of the labels, we can use `sort_values` and specify the column to sort by: ``` people.sort_values(by="age", inplace=True) people ``` ## Plotting a `DataFrame` Just like for `Series`, pandas makes it easy to draw nice graphs based on a `DataFrame`. For example, it is trivial to create a line plot from a `DataFrame`'s data by calling its `plot` method: ``` people.plot(kind = "line", x = "body_mass_index", y = ["height", "weight"]) plt.show() ``` You can pass extra arguments supported by matplotlib's functions. 
For example, we can create a scatter plot and pass it a list of sizes using the `s` argument of matplotlib's `scatter()` function: ``` people.plot(kind = "scatter", x = "height", y = "weight", s=[40, 120, 200]) plt.show() ``` Again, there are way too many options to list here: the best option is to scroll through the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page in pandas' documentation, find the plot you are interested in and look at the example code. ## Operations on `DataFrame`s Although `DataFrame`s do not try to mimic NumPy arrays, there are a few similarities. Let's create a `DataFrame` to demonstrate this: ``` grades_array = np.array([[8,8,9],[10,9,9],[4, 8, 2], [9, 10, 10]]) grades = pd.DataFrame(grades_array, columns=["sep", "oct", "nov"], index=["alice","bob","charles","darwin"]) grades ``` You can apply NumPy mathematical functions on a `DataFrame`: the function is applied to all values: ``` np.sqrt(grades) ``` Similarly, adding a single value to a `DataFrame` will add that value to all elements in the `DataFrame`. This is called *broadcasting*: ``` grades + 1 ``` Of course, the same is true for all other binary operations, including arithmetic (`*`,`/`,`**`...) and conditional (`>`, `==`...) operations: ``` grades >= 5 ``` Aggregation operations, such as computing the `max`, the `sum` or the `mean` of a `DataFrame`, apply to each column, and you get back a `Series` object: ``` grades.mean() ``` The `all` method is also an aggregation operation: it checks whether all values are `True` or not. Let's see during which months all students got a grade greater than `5`: ``` (grades > 5).all() ``` Most of these functions take an optional `axis` parameter which lets you specify along which axis of the `DataFrame` you want the operation executed. The default is `axis=0`, meaning that the operation is executed vertically (on each column). You can set `axis=1` to execute the operation horizontally (on each row).
For example, let's find out which students had all grades greater than `5`: ``` (grades > 5).all(axis = 1) ``` The `any` method returns `True` if any value is True. Let's see who got at least one grade 10: ``` (grades == 10).any(axis = 1) ``` If you add a `Series` object to a `DataFrame` (or execute any other binary operation), pandas attempts to broadcast the operation to all *rows* in the `DataFrame`. This only works if the `Series` has the same size as the `DataFrame`'s rows. For example, let's subtract the `mean` of the `DataFrame` (a `Series` object) from the `DataFrame`: ``` grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50] ``` We subtracted `7.75` from all September grades, `8.75` from October grades and `7.50` from November grades. It is equivalent to subtracting this `DataFrame`: ``` pd.DataFrame([[7.75, 8.75, 7.50]]*4, index=grades.index, columns=grades.columns) ``` If you want to subtract the global mean from every grade, here is one way to do it: ``` grades - grades.values.mean() # subtracts the global mean (8.00) from all grades ``` ## Automatic alignment Similar to `Series`, when operating on multiple `DataFrame`s, pandas automatically aligns them by row index label, but also by column names. Let's create a `DataFrame` with bonus points for each person from October to December: ``` bonus_array = np.array([[0,np.nan,2],[np.nan,1,0],[0, 1, 0], [3, 3, 0]]) bonus_points = pd.DataFrame(bonus_array, columns=["oct", "nov", "dec"], index=["bob","colin", "darwin", "charles"]) bonus_points grades + bonus_points ``` Looks like the addition worked in some cases but way too many elements are now empty. That's because when aligning the `DataFrame`s, some columns and rows were only present on one side, and thus they were considered missing on the other side (`NaN`). Then adding `NaN` to a number results in `NaN`, hence the result. ## Handling missing data Dealing with missing data is a frequent task when working with real life data. 
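The alignment gaps seen above can also be avoided at computation time: instead of the `+` operator, the `add` method accepts a `fill_value` that substitutes missing entries before the operation (cells missing on *both* sides stay `NaN`). A small sketch with stand-in frames, not the `grades` data:

```python
import numpy as np
import pandas as pd

# Two frames that only partially overlap, mimicking grades + bonus_points.
a = pd.DataFrame({"oct": [1.0, 2.0]}, index=["alice", "bob"])
b = pd.DataFrame({"oct": [10.0], "nov": [20.0]}, index=["bob"])

plain = a + b                    # non-overlapping cells become NaN
filled = a.add(b, fill_value=0)  # missing cells treated as 0 before adding

print(filled)
```

Here `filled.loc["alice", "oct"]` keeps the value `1.0` instead of becoming `NaN`, while the `("alice", "nov")` cell, missing on both sides, remains `NaN`.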
Pandas offers a few tools to handle missing data. Let's try to fix the problem above. For example, we can decide that missing data should result in a zero, instead of `NaN`. We can replace all `NaN` values with any value using the `fillna()` method: ``` (grades + bonus_points).fillna(0) ``` It's a bit unfair that we're setting grades to zero in September, though. Perhaps we should decide that missing grades are missing grades, but missing bonus points should be replaced by zeros: ``` fixed_bonus_points = bonus_points.fillna(0) fixed_bonus_points.insert(0, "sep", 0) fixed_bonus_points.loc["alice"] = 0 grades + fixed_bonus_points ``` That's much better: although we made up some data, we have not been too unfair. Another way to handle missing data is to interpolate. Let's look at the `bonus_points` `DataFrame` again: ``` bonus_points ``` Now let's call the `interpolate` method. By default, it interpolates vertically (`axis=0`), so let's tell it to interpolate horizontally (`axis=1`). ``` bonus_points.interpolate(axis=1) ``` Bob had 0 bonus points in October, and 2 in December. When we interpolate for November, we get the mean: 1 bonus point. Colin had 1 bonus point in November, but we do not know how many bonus points he had in September, so we cannot interpolate; this is why there is still a missing value in October after interpolation. To fix this, we can set the September bonus points to 0 before interpolation. ``` better_bonus_points = bonus_points.copy() better_bonus_points.insert(0, "sep", 0) better_bonus_points.loc["alice"] = 0 better_bonus_points = better_bonus_points.interpolate(axis=1) better_bonus_points ``` Great, now we have reasonable bonus points everywhere. Let's find out the final grades: ``` grades + better_bonus_points ``` It is slightly annoying that the September column ends up on the right. 
This is because the `DataFrame`s we are adding do not have the exact same columns (the `grades` `DataFrame` is missing the `"dec"` column), so to make things predictable, pandas orders the final columns alphabetically. To fix this, we can simply add the missing column before adding: ``` grades["dec"] = np.nan final_grades = grades + better_bonus_points final_grades ``` There's not much we can do about December and Colin: it's bad enough that we are making up bonus points, but we can't reasonably make up grades (well I guess some teachers probably do). So let's call the `dropna()` method to get rid of rows that are full of `NaN`s: ``` final_grades_clean = final_grades.dropna(how="all") final_grades_clean ``` Now let's remove columns that are full of `NaN`s by setting the `axis` argument to `1`: ``` final_grades_clean = final_grades_clean.dropna(axis=1, how="all") final_grades_clean ``` ## Aggregating with `groupby` Similar to the SQL language, pandas allows grouping your data into groups to run calculations over each group. First, let's add some extra data about each person so we can group them, and let's go back to the `final_grades` `DataFrame` so we can see how `NaN` values are handled: ``` final_grades["hobby"] = ["Biking", "Dancing", np.nan, "Dancing", "Biking"] final_grades ``` Now let's group data in this `DataFrame` by hobby: ``` grouped_grades = final_grades.groupby("hobby") grouped_grades ``` We are ready to compute the average grade per hobby: ``` grouped_grades.mean() ``` That was easy! Note that the `NaN` values have simply been skipped when computing the means. ## Pivot tables Pandas supports spreadsheet-like [pivot tables](https://en.wikipedia.org/wiki/Pivot_table) that allow quick data summarization. 
To illustrate this, let's create a simple `DataFrame`: ``` bonus_points more_grades = final_grades_clean.stack().reset_index() more_grades.columns = ["name", "month", "grade"] more_grades["bonus"] = [np.nan, np.nan, np.nan, 0, np.nan, 2, 3, 3, 0, 0, 1, 0] more_grades ``` Now we can call the `pd.pivot_table()` function for this `DataFrame`, asking to group by the `name` column. By default, `pivot_table()` computes the mean of each numeric column: ``` pd.pivot_table(more_grades, index="name") ``` We can change the aggregation function by setting the `aggfunc` argument, and we can also specify the list of columns whose values will be aggregated: ``` pd.pivot_table(more_grades, index="name", values=["grade","bonus"], aggfunc=np.max) ``` We can also specify the `columns` to aggregate over horizontally, and request the grand totals for each row and column by setting `margins=True`: ``` pd.pivot_table(more_grades, index="name", values="grade", columns="month", margins=True) ``` Finally, we can specify multiple index or column names, and pandas will create multi-level indices: ``` pd.pivot_table(more_grades, index=("name", "month"), margins=True) ``` ## Overview functions When dealing with a large `DataFrame`, it is useful to get a quick overview of its content. Pandas offers a few functions for this. First, let's create a large `DataFrame` with a mix of numeric values, missing values and text values. Notice how Jupyter displays only the corners of the `DataFrame`: ``` much_data = np.fromfunction(lambda x,y: (x+y*y)%17*11, (10000, 26)) large_df = pd.DataFrame(much_data, columns=list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")) large_df[large_df % 16 == 0] = np.nan large_df.insert(3,"some_text", "Blabla") large_df ``` The `head()` method returns the top 5 rows: ``` large_df.head() ``` Of course there's also a `tail()` function to view the bottom 5 rows. 
You can pass the number of rows you want: ``` large_df.tail(n=2) ``` The `info()` method prints out a summary of each column's contents: ``` large_df.info() ``` Finally, the `describe()` method gives a nice overview of the main aggregated values over each column: * `count`: number of non-null (not NaN) values * `mean`: mean of non-null values * `std`: [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of non-null values * `min`: minimum of non-null values * `25%`, `50%`, `75%`: 25th, 50th and 75th [percentile](https://en.wikipedia.org/wiki/Percentile) of non-null values * `max`: maximum of non-null values ``` large_df.describe() ``` # Saving & loading Pandas can save `DataFrame`s to various backends, including file formats such as CSV, Excel, JSON, HTML and HDF5, or to a SQL database. Let's create a `DataFrame` to demonstrate this: ``` my_df = pd.DataFrame( [["Biking", 68.5, 1985, np.nan], ["Dancing", 83.1, 1984, 3]], columns=["hobby","weight","birthyear","children"], index=["alice", "bob"] ) my_df ``` ## Saving Let's save it to CSV, HTML and JSON: ``` my_df.to_csv("my_df.csv") my_df.to_html("my_df.html") my_df.to_json("my_df.json") ``` Done! Let's take a peek at what was saved: ``` for filename in ("my_df.csv", "my_df.html", "my_df.json"): print("#", filename) with open(filename, "rt") as f: print(f.read()) print() ``` Note that the index is saved as the first column (with no name) in a CSV file, as `<th>` tags in HTML and as keys in JSON. Saving to other formats works very similarly, but some formats require extra libraries to be installed. For example, saving to Excel requires the openpyxl library: ``` try: my_df.to_excel("my_df.xlsx", sheet_name='People') except ImportError as e: print(e) ``` ## Loading Now let's load our CSV file back into a `DataFrame`: ``` my_df_loaded = pd.read_csv("my_df.csv", index_col=0) my_df_loaded ``` As you might guess, there are similar `read_json`, `read_html`, `read_excel` functions as well. 
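`read_csv` also accepts any file-like object, so CSV text can be parsed straight from memory, which is handy for quick experiments (the data below is made up):

```python
import io
import pandas as pd

csv_text = "name,age\nalice,25\nbob,30\n"

# StringIO wraps the string in a file-like object that read_csv can consume.
df_mem = pd.read_csv(io.StringIO(csv_text), index_col=0)
print(df_mem)
```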
We can also read data straight from the Internet. For example, let's load the top 1,000 U.S. cities from github: ``` us_cities = None try: csv_url = "https://raw.githubusercontent.com/plotly/datasets/master/us-cities-top-1k.csv" us_cities = pd.read_csv(csv_url, index_col=0) us_cities = us_cities.head() except IOError as e: print(e) us_cities ``` There are more options available, in particular regarding datetime format. Check out the [documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) for more details. # Combining `DataFrame`s ## SQL-like joins One powerful feature of pandas is its ability to perform SQL-like joins on `DataFrame`s. Various types of joins are supported: inner joins, left/right outer joins and full joins. To illustrate this, let's start by creating a couple of simple `DataFrame`s: ``` city_loc = pd.DataFrame( [ ["CA", "San Francisco", 37.781334, -122.416728], ["NY", "New York", 40.705649, -74.008344], ["FL", "Miami", 25.791100, -80.320733], ["OH", "Cleveland", 41.473508, -81.739791], ["UT", "Salt Lake City", 40.755851, -111.896657] ], columns=["state", "city", "lat", "lng"]) city_loc city_pop = pd.DataFrame( [ [808976, "San Francisco", "California"], [8363710, "New York", "New-York"], [413201, "Miami", "Florida"], [2242193, "Houston", "Texas"] ], index=[3,4,5,6], columns=["population", "city", "state"]) city_pop ``` Now let's join these `DataFrame`s using the `merge()` function: ``` pd.merge(left=city_loc, right=city_pop, on="city") ``` Note that both `DataFrame`s have a column named `state`, so in the result they got renamed to `state_x` and `state_y`. Also, note that Cleveland, Salt Lake City and Houston were dropped because they don't exist in *both* `DataFrame`s. This is the equivalent of a SQL `INNER JOIN`. 
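When debugging joins, it helps to know which side each row came from; `merge` supports an `indicator=True` argument that adds a `_merge` column for this. A quick sketch with made-up frames (not the `city_loc`/`city_pop` data):

```python
import pandas as pd

left = pd.DataFrame({"city": ["Miami", "Cleveland"], "lat": [25.8, 41.5]})
right = pd.DataFrame({"city": ["Miami", "Houston"], "pop": [413201, 2242193]})

# indicator=True adds a _merge column: "both", "left_only" or "right_only".
merged = pd.merge(left, right, on="city", how="outer", indicator=True)
print(merged)
```

`_merge` is `both` for Miami, `left_only` for Cleveland and `right_only` for Houston.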
If you want a `FULL OUTER JOIN`, where no city gets dropped and `NaN` values are added, you must specify `how="outer"`: ``` all_cities = pd.merge(left=city_loc, right=city_pop, on="city", how="outer") all_cities ``` Of course `LEFT OUTER JOIN` is also available by setting `how="left"`: only the cities present in the left `DataFrame` end up in the result. Similarly, with `how="right"` only cities in the right `DataFrame` appear in the result. For example: ``` pd.merge(left=city_loc, right=city_pop, on="city", how="right") ``` If the key to join on is actually in one (or both) `DataFrame`'s index, you must use `left_index=True` and/or `right_index=True`. If the key column names differ, you must use `left_on` and `right_on`. For example: ``` city_pop2 = city_pop.copy() city_pop2.columns = ["population", "name", "state"] pd.merge(left=city_loc, right=city_pop2, left_on="city", right_on="name") ``` ## Concatenation Rather than joining `DataFrame`s, we may just want to concatenate them. That's what `concat()` is for: ``` result_concat = pd.concat([city_loc, city_pop]) result_concat ``` Note that this operation aligned the data horizontally (by columns) but not vertically (by rows). In this example, we end up with multiple rows having the same index (eg. 3). Pandas handles this rather gracefully: ``` result_concat.loc[3] ``` Or you can tell pandas to just ignore the index: ``` pd.concat([city_loc, city_pop], ignore_index=True) ``` Notice that when a column does not exist in a `DataFrame`, it acts as if it was filled with `NaN` values. If we set `join="inner"`, then only columns that exist in *both* `DataFrame`s are returned: ``` pd.concat([city_loc, city_pop], join="inner") ``` You can concatenate `DataFrame`s horizontally instead of vertically by setting `axis=1`: ``` pd.concat([city_loc, city_pop], axis=1) ``` In this case it really does not make much sense because the indices do not align well (eg. 
Cleveland and San Francisco end up on the same row, because they shared the index label `3`). So let's reindex the `DataFrame`s by city name before concatenating: ``` pd.concat([city_loc.set_index("city"), city_pop.set_index("city")], axis=1) ``` This looks a lot like a `FULL OUTER JOIN`, except that the `state` columns were not renamed to `state_x` and `state_y`, and the `city` column is now the index. The `append()` method is a useful shorthand for concatenating `DataFrame`s vertically: ``` city_loc.append(city_pop) ``` As always in pandas, the `append()` method does *not* actually modify `city_loc`: it works on a copy and returns the modified copy. # Categories It is quite frequent to have values that represent categories, for example `1` for female and `2` for male, or `"A"` for Good, `"B"` for Average, `"C"` for Bad. These categorical values can be hard to read and cumbersome to handle, but fortunately pandas makes it easy. To illustrate this, let's take the `city_pop` `DataFrame` we created earlier, and add a column that represents a category: ``` city_eco = city_pop.copy() city_eco["eco_code"] = [17, 17, 34, 20] city_eco ``` Right now the `eco_code` column is full of apparently meaningless codes. Let's fix that. First, we will create a new categorical column based on the `eco_code`s: ``` city_eco["economy"] = city_eco["eco_code"].astype('category') city_eco["economy"].cat.categories ``` Now we can give each category a meaningful name: ``` city_eco["economy"].cat.categories = ["Finance", "Energy", "Tourism"] city_eco ``` Note that categorical values are sorted according to their categorical order, *not* their alphabetical order: ``` city_eco.sort_values(by="economy", ascending=False) ``` # What next? As you probably noticed by now, pandas is quite a large library with *many* features. Although we went through the most important features, there is still a lot to discover. Probably the best way to learn more is to get your hands dirty with some real-life data. 
It is also a good idea to go through pandas' excellent [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html), in particular the [Cookbook](http://pandas.pydata.org/pandas-docs/stable/cookbook.html).
[![img/pythonista.png](img/pythonista.png)](https://www.pythonista.io) # *URL* rules. ## Preliminaries. ``` from flask import Flask app = Flask(__name__) ``` ## Extracting values from a *URL*. *URL* rules not only let you define static routes that point to a function; they can also define dynamic routes that capture information from the route itself and pass it as an argument to the view function. `` '{text 1}<{name 1}>{text 2}<{name 2}> ...' `` Where: * ```{text i}``` is a fixed text segment of the *URL*. * ```{name i}``` is the name assigned to the text captured in that segment of the *URL*. **Example:** * The following cell defines a rule for ```app.route()``` that captures the text at the end of the ```/saluda/``` route and stores it under the name ```usuario```, which is used as the argument of the ```saluda()``` function. The application returns an *HTML* document with text built from the contents of ```usuario```. ``` @app.route('/saluda/<usuario>') def saluda(usuario): return f'<p>Hola, {usuario}.</p>' ``` ## Route rules with type converters. It is possible to tell the application server the type of data expected in the route. ``` <{type}:{name}> ``` Where: * ```{type}``` can be: * ```string```, which corresponds to a string of characters. * ```int```, which corresponds to an integer. * ```float```, which corresponds to a floating-point number (the decimal point is mandatory). * ```path```, which corresponds to a string representing a path. * ```uuid```, which corresponds to a universally unique identifier, as defined in [*RFC 4122*](https://datatracker.ietf.org/doc/html/rfc4122). If the *URL* segment does not match the indicated type, *Flask* returns a ```404``` status. 
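The converter behaviour described above (a mismatched segment yields a ```404```) can be checked without running a server by using *Flask*'s built-in test client; a minimal, self-contained sketch (the ```/half``` route is illustrative, not part of this notebook's app):

```python
from flask import Flask

demo = Flask(__name__)

@demo.route('/half/<float:number>')
def half(number):
    # Matches only when the segment parses as a float (the dot is required).
    return f'Half of {number} is {number / 2}.'

client = demo.test_client()
print(client.get('/half/5.0').status_code)  # 200
print(client.get('/half/5').status_code)    # 404: no decimal point
```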
**Examples:** The following cell defines a *URL* rule for the view function ```mitad()``` that requires a ```float``` value after the text ```/operacion/``` and before the text ```/mitad```. * The route ```/operacion/5.0/mitad``` captures the value ```5.0``` in the ```numero``` parameter and returns an *HTML* document displaying operations with that value. * The route ```/operacion/5/mitad``` returns a ```404``` status. * The route ```/operacion/Juan/mitad``` returns a ```404``` status. ``` @app.route('/operacion/<float:numero>/mitad') def mitad(numero): return f'La mitad de {numero} es {numero / 2}.' @app.route('/operacion/<int:a>_suma_<int:b>') def suma(a, b): return f'La suma de {a} + {b} es {a + b}.' ``` **Warning:** Once the following cell has been executed, the *Jupyter* kernel must be interrupted in order to run the rest of the cells in the notebook. ``` app.run(host="0.0.0.0", port=5000) ``` * http://localhost:5000/saluda/Juan * http://localhost:5000/operacion/5.0/mitad * http://localhost:5000/operacion/5/mitad * http://localhost:5000/operacion/Juan/mitad * http://localhost:5000/operacion/5.0 * http://localhost:5000/operacion/2_suma_3 <p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p> <p style="text-align: center">&copy; José Luis Chiquete Valdivieso. 2022.</p>
### Load data ``` import numpy as np import pandas as pd column_names = ['txn_key', 'from_user', 'to_user', 'date', 'amount'] df = pd.read_csv('../data/bitcoin_uic_data_and_code_20130410/user_edges.txt', names=column_names) df.head() ``` ### Select transactions in or before 2010 ``` df[ df.date < 20110000000000 ].to_csv('../data/subset/user_edges_2010.csv', index=False) df = pd.read_csv('../data/subset/user_edges_2010.csv') df['date'] = pd.to_datetime(df.date, format='%Y-%m-%d %H:%M:%S') df.to_csv('../data/subset/user_edges_2010.csv', index=False) ``` Transaction Features to use --------------------------- - Number of transaction under this key - Transaction amount, total amount under this key - From equals to? - Number of unique from/to under this key - Transaction date - Year - Month - Day - Day of week - Day of year - Hour - Minute - Second - From/to in/out (unique) degree - From/to clustering coefficient - From/to in/out transaction frequency - All - Within ±12 hours - From/to transaction volume - All - Within ±12 hours - From/to first transaction date - From/to average in/out transaction amount - From/to average time between in/out transactions ### Build graphs from transaction data - Undirected, directed and multi-directed ``` import networkx as nx # for features only defined in undirected graph G = nx.from_pandas_dataframe(df, source='from_user', target='to_user', edge_attr=['txn_key', 'amount', 'date'], create_using=nx.Graph() ) # unique links between users G_di = nx.from_pandas_dataframe(df, source='from_user', target='to_user', edge_attr=['txn_key', 'amount', 'date'], create_using=nx.DiGraph() ) # the full graph G_mdi = nx.from_pandas_dataframe(df, source='from_user', target='to_user', edge_attr=['txn_key', 'amount', 'date'], create_using=nx.MultiDiGraph() ) # transaction feature maps count_by_key = df.groupby('txn_key').size() amount_by_key = df.groupby('txn_key').amount.sum() ufrom_by_key = df.groupby('txn_key').from_user.agg(pd.Series.nunique) 
uto_by_key = df.groupby('txn_key').to_user.agg(pd.Series.nunique) # user feature maps in_txn_count = df.groupby('to_user').size() in_key_count = df.groupby('to_user').txn_key.agg(pd.Series.nunique) out_txn_count = df.groupby('from_user').size() out_key_count = df.groupby('from_user').txn_key.agg(pd.Series.nunique) total_in_txn_amt = df.groupby('to_user').amount.sum() total_out_txn_amt = df.groupby('from_user').amount.sum() avg_in_txn_amt = df.groupby('to_user').amount.mean() avg_out_txn_amt = df.groupby('from_user').amount.mean() from_fst_txn_date = df.groupby('from_user').date.min() df_feat = df.assign( # transaction features count_by_key = df.txn_key.map(count_by_key), amount_by_key = df.txn_key.map(amount_by_key), from_eq_to = df.from_user == df.to_user, ufrom_by_key = df.txn_key.map(ufrom_by_key), uto_by_key = df.txn_key.map(uto_by_key), # transaction date features date_year = df.date.dt.year, date_month = df.date.dt.month, date_day = df.date.dt.day, date_dayofweek = df.date.dt.dayofweek, date_dayofyear = df.date.dt.dayofyear, date_hour = df.date.dt.hour, date_minute = df.date.dt.minute, date_second = df.date.dt.second, # user features from_in_txn_count = df.from_user.map(in_txn_count), from_in_key_count = df.from_user.map(in_key_count), from_out_txn_count = df.from_user.map(out_txn_count), from_out_key_count = df.from_user.map(out_key_count), to_in_txn_count = df.to_user.map(in_txn_count), to_in_key_count = df.to_user.map(in_key_count), to_out_txn_count = df.to_user.map(out_txn_count), to_out_key_count = df.to_user.map(out_key_count), from_total_in_txn_amt = df.from_user.map(total_in_txn_amt), from_total_out_txn_amt = df.from_user.map(total_out_txn_amt), to_total_in_txn_amt = df.to_user.map(total_in_txn_amt), to_total_out_txn_amt = df.to_user.map(total_out_txn_amt), from_avg_in_txn_amt = df.from_user.map(avg_in_txn_amt), from_avg_out_txn_amt = df.from_user.map(avg_out_txn_amt), to_avg_in_txn_amt = df.to_user.map(avg_in_txn_amt), to_avg_out_txn_amt = 
df.to_user.map(avg_out_txn_amt), from_in_deg = df.from_user.map(G_mdi.in_degree()), from_out_deg = df.from_user.map(G_mdi.out_degree()), from_in_udeg = df.from_user.map(G_di.in_degree()), from_out_udeg = df.from_user.map(G_di.out_degree()), to_in_deg = df.to_user.map(G_mdi.in_degree()), to_out_deg = df.to_user.map(G_mdi.out_degree()), to_in_udeg = df.to_user.map(G_di.in_degree()), to_out_udeg = df.to_user.map(G_di.out_degree()), from_cc = df.from_user.map(nx.clustering(G)), to_cc = df.to_user.map(nx.clustering(G)) ) df_feat.fillna(0, inplace=True) ``` ### Isolation Forest for anomaly detection ``` from sklearn.ensemble import IsolationForest not_train_cols = ['txn_key', 'from_user', 'to_user', 'date'] X_train = df_feat[ [col for col in df_feat.columns if col not in not_train_cols] ].values clf = IsolationForest(n_estimators=100, contamination=0.01, n_jobs=-1, random_state=42) clf.fit(X_train) clf.threshold_ pred = clf.predict(X_train) anomalies = (pred != 1) ``` ### Anomaly Scores ``` import seaborn as sns import matplotlib.pyplot as plt scores = clf.decision_function(X_train) plt.figure(figsize=(12, 8)) sns.distplot(scores, kde=False) line = plt.vlines(clf.threshold_, 0, 30000, colors='r', linestyles='dotted') line.set_label('Threshold = -0.0948') plt.legend(loc='upper left', fontsize='medium') plt.title('Anomaly Scores returned by Isolation Forest', fontsize=16); ``` ### Visualizing the Transactions ``` from sklearn.manifold import TSNE tsne = TSNE(n_components=2, #perplexity=50, #n_iter=200, n_iter_without_progress=10, #angle=0.7, random_state=42) X_tsne = tsne.fit_transform(X_train[150000:155000]) %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns outlier = anomalies[150000:155000] plt.figure(figsize=(12,8)) plt.scatter(X_tsne[~outlier][:,0], X_tsne[~outlier][:,1], marker='.', c='b', alpha=.2) plt.scatter(X_tsne[outlier][:,0], X_tsne[outlier][:,1], marker='o', c='r', alpha=1) plt.legend(['Normal Transactions', 'Abnormal Transactions']) plt.title('t-SNE Visualization of Normal 
Transactions vs Abnormal Transactions', fontsize=16); ``` ### D3 Network Visualization ``` json_date_format = '%Y-%m-%dT%H:%M:%SZ' # df['date'] = df.date.dt.strftime(json_date_format) for scc in nx.strongly_connected_components(G_di): if len(scc) > 5: intersect = np.intersect1d(list(scc), anomalies_id) if intersect.size > 0: G_sub = G_di.subgraph(scc) #G_sub_json = json_graph.node_link_data(G_sub) #with open('../d3/json/network.json', 'w') as json_file: # json.dump(G_sub_json, json_file) anomalies = (pred != 1) anomalies_id = np.concatenate((df[anomalies].from_user, df[anomalies].to_user)) anomalies_pairs = list(zip(df[anomalies].from_user, df[anomalies].to_user)) for i, j in anomalies_pairs: neigh = G_di.neighbors(i) neigh += G_di.neighbors(j) nodes = neigh + [i, j] if len(nodes) > 10 and len(nodes) < 20: G_sub = nx.subgraph(G_di, nodes) from networkx.readwrite import json_graph import json anomalies_pairs = set(zip(df[anomalies].from_user, df[anomalies].to_user)) for e in G_sub.edges_iter(): if e[:2] in anomalies_pairs: G_sub.edge[e[0]][e[1]]['type'] = 'licensing' else: G_sub.edge[e[0]][e[1]]['type'] = 'suit' G_sub_json = json_graph.node_link_data(G_sub) with open('../d3/json/network.json', 'w') as json_file: json.dump(G_sub_json, json_file) G_di = nx.from_pandas_dataframe(df, source='from_user', target='to_user', edge_attr=['txn_key', 'amount', 'date'], create_using=nx.DiGraph() ) from IPython.display import IFrame IFrame('./d3/html/network.html', width=1000, height=500) ```
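The `contamination=0.01` setting used above amounts to flagging the lowest-scoring 1% of samples as anomalies. The thresholding step can be sketched with plain numpy, using random numbers as a stand-in for the forest's `decision_function` scores:

```python
import numpy as np

rng = np.random.RandomState(42)
scores = rng.normal(size=1000)  # stand-in for clf.decision_function(X_train)

contamination = 0.01
threshold = np.percentile(scores, 100 * contamination)
anomalies = scores < threshold  # lowest-scoring samples are flagged

print(threshold, anomalies.sum())
```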
**Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x** ``` %reload_ext autoreload %autoreload 2 %matplotlib inline from fastai.learner import * import torchtext from torchtext import vocab, data from torchtext.datasets import language_modeling from fastai.rnn_reg import * from fastai.rnn_train import * from fastai.nlp import * from fastai.lm_rnn import * import dill as pickle import spacy ``` ## Language modeling ### Data The [large movie review dataset](http://ai.stanford.edu/~amaas/data/sentiment/) contains a collection of 50,000 reviews from IMDB. The dataset contains an even number of positive and negative reviews. The authors considered only highly polarized reviews. A negative review has a score ≤ 4 out of 10, and a positive review has a score ≥ 7 out of 10. Neutral reviews are not included in the dataset. The dataset is divided into training and test sets of 25,000 labeled reviews each. The **sentiment classification task** consists of predicting the polarity (positive or negative) of a given text. However, before we try to classify *sentiment*, we will simply try to create a *language model*; that is, a model that can predict the next word in a sentence. Why? Because our model first needs to understand the structure of English, before we can expect it to recognize positive vs negative sentiment. So our plan of attack is the same as we used for Dogs v Cats: pretrain a model to do one thing (predict the next word), and fine tune it to do something else (classify sentiment). Unfortunately, there are no good pretrained language models available to download, so we need to create our own. To follow along with this notebook, we suggest downloading the dataset from [this location](http://files.fast.ai/data/aclImdb.tgz) on files.fast.ai. 
``` PATH='data/aclImdb/' TRN_PATH = 'train/all/' VAL_PATH = 'test/all/' TRN = f'{PATH}{TRN_PATH}' VAL = f'{PATH}{VAL_PATH}' %ls {PATH} ``` Let's look inside the training folder... ``` trn_files = !ls {TRN} trn_files[:10] ``` ...and at an example review. ``` review = !cat {TRN}{trn_files[6]} review[0] ``` Sounds like I'd really enjoy *Zombiegeddon*... Now we'll check how many words are in the dataset. ``` !find {TRN} -name '*.txt' | xargs cat | wc -w !find {VAL} -name '*.txt' | xargs cat | wc -w ``` Before we can analyze text, we must first *tokenize* it. This refers to the process of splitting a sentence into an array of words (or more generally, into an array of *tokens*). *Note:* If you get an error like: Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. then you need to install the Spacy language model by running this command on the command-line: $ python -m spacy download en ``` spacy_tok = spacy.load('en') ' '.join([sent.string.strip() for sent in spacy_tok(review[0])]) ``` We use Pytorch's [torchtext](https://github.com/pytorch/text) library to preprocess our data, telling it to use the wonderful [spacy](https://spacy.io/) library to handle tokenization. First, we create a torchtext *field*, which describes how to preprocess a piece of text - in this case, we tell torchtext to make everything lowercase, and tokenize it with spacy. ``` TEXT = data.Field(lower=True, tokenize="spacy") ``` fastai works closely with torchtext. We create a ModelData object for language modeling by taking advantage of `LanguageModelData`, passing it our torchtext field object, and the paths to our training, test, and validation sets. In this case, we don't have a separate test set, so we'll just use `VAL_PATH` for that too. As well as the usual `bs` (batch size) parameter, we also now have `bptt`; this defines how many words are processed at a time in each row of the mini-batch. 
More importantly, it defines how many 'layers' we will backprop through. Making this number higher will increase time and memory requirements, but will improve the model's ability to handle long sentences. ``` bs=64; bptt=70 FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH) md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10) ``` After building our `ModelData` object, it automatically fills the `TEXT` object with a very important attribute: `TEXT.vocab`. This is a *vocabulary*, which stores which words (or *tokens*) have been seen in the text, and how each word will be mapped to a unique integer id. We'll need to use this information again later, so we save it. *(Technical note: python's standard `Pickle` library can't handle this correctly, so at the top of this notebook we used the `dill` library instead and imported it as `pickle`)*. ``` pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb')) ``` Here are the: # batches; # unique tokens in the vocab; # tokens in the training set; # sentences ``` len(md.trn_dl), md.nt, len(md.trn_ds), len(md.trn_ds[0].text) ``` This is the start of the mapping from integer IDs to unique tokens. ``` # 'itos': 'int-to-string' TEXT.vocab.itos[:12] # 'stoi': 'string to int' TEXT.vocab.stoi['the'] ``` Note that in a `LanguageModelData` object there is only one item in each dataset: all the words of the text joined together. ``` md.trn_ds[0].text[:12] ``` torchtext will handle turning these words into integer IDs for us automatically. ``` TEXT.numericalize([md.trn_ds[0].text[:12]]) ``` Our `LanguageModelData` object will create batches with 64 columns (that's our batch size), and varying sequence lengths of around 80 tokens (that's our `bptt` parameter - *backprop through time*). Each batch also contains the exact same data as labels, but one word later in the text - since we're trying to always predict the next word. The labels are flattened into a 1d array. 
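The batching scheme just described can be sketched with plain numpy: the token stream is cut into `bs` columns, and each mini-batch is a `bptt`-row slice whose labels are the same tokens shifted one step and flattened (toy sizes below, not the real fastai loader):

```python
import numpy as np

tokens = np.arange(1000)  # stand-in for the numericalized corpus
bs, bptt = 4, 10

# Trim so the stream splits evenly, keeping one spare token for the labels.
n = (len(tokens) - 1) // bs * bs
stream = tokens[:n].reshape(bs, -1).T  # shape (n // bs, bs): one column per batch element

x = stream[0:bptt]                  # first mini-batch, shape (bptt, bs)
y = stream[1:bptt + 1].reshape(-1)  # next-word targets, flattened to 1d
print(x.shape, y.shape)
```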
``` next(iter(md.trn_dl)) ``` ### Train We have a number of parameters to set - we'll learn more about these later, but you should find these values suitable for many problems. ``` em_sz = 200 # size of each embedding vector nh = 500 # number of hidden activations per layer nl = 3 # number of layers ``` Researchers have found that large amounts of *momentum* (which we'll learn about later) don't work well with these kinds of *RNN* models, so we create a version of the *Adam* optimizer with less momentum than its default of `0.9`. ``` opt_fn = partial(optim.Adam, betas=(0.7, 0.99)) ``` fastai uses a variant of the state of the art [AWD LSTM Language Model](https://arxiv.org/abs/1708.02182) developed by Stephen Merity. A key feature of this model is that it provides excellent regularization through [Dropout](https://en.wikipedia.org/wiki/Convolutional_neural_network#Dropout). There is no simple way known (yet!) to find the best values of the dropout parameters below - you just have to experiment... However, the other parameters (`alpha`, `beta`, and `clip`) shouldn't generally need tuning. ``` learner = md.get_model(opt_fn, em_sz, nh, nl, dropouti=0.05, dropout=0.05, wdrop=0.1, dropoute=0.02, dropouth=0.05) learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1) learner.clip=0.3 ``` As you can see below, I gradually tuned the language model in a few stages. I possibly could have trained it further (it wasn't yet overfitting), but I didn't have time to experiment more. Maybe you can see if you can train it to a better accuracy! (I used `lr_find` to find a good learning rate, but didn't save the output in this notebook. Feel free to try running it yourself now.) ``` learner.fit(3e-3, 4, wds=1e-6, cycle_len=1, cycle_mult=2) learner.save_encoder('adam1_enc') learner.load_encoder('adam1_enc') learner.fit(3e-3, 1, wds=1e-6, cycle_len=10) ``` In the sentiment analysis section, we'll just need half of the language model - the *encoder*, so we save that part.
``` learner.save_encoder('adam3_10_enc') learner.load_encoder('adam3_10_enc') ``` Language modeling accuracy is generally measured using the metric *perplexity*, which is simply `exp()` of the loss function we used. ``` math.exp(4.165) pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb')) ``` ### Test We can play around with our language model a bit to check it seems to be working OK. First, let's create a short bit of text to 'prime' a set of predictions. We'll use our torchtext field to numericalize it so we can feed it to our language model. ``` m=learner.model ss=""". So, it wasn't quite was I was expecting, but I really liked it anyway! The best""" s = [TEXT.preprocess(ss)] t=TEXT.numericalize(s) ' '.join(s[0]) ``` We haven't yet added methods to make it easy to test a language model, so we'll need to manually go through the steps. ``` # Set batch size to 1 m[0].bs=1 # Turn off dropout m.eval() # Reset hidden state m.reset() # Get predictions from model res,*_ = m(t) # Put the batch size back to what it was m[0].bs=bs ``` Let's see what the top 10 predictions were for the next word after our short text: ``` nexts = torch.topk(res[-1], 10)[1] [TEXT.vocab.itos[o] for o in to_np(nexts)] ``` ...and let's see if our model can generate a bit more text all by itself! ``` print(ss,"\n") for i in range(50): n=res[-1].topk(2)[1] n = n[1] if n.data[0]==0 else n[0] print(TEXT.vocab.itos[n.data[0]], end=' ') res,*_ = m(n[0].unsqueeze(0)) print('...') ``` ### Sentiment We'll need to use the saved vocab from the language model, since we need to ensure the same words map to the same IDs. ``` TEXT = pickle.load(open(f'{PATH}models/TEXT.pkl','rb')) ``` `sequential=False` tells torchtext that a text field should not be tokenized (in this case, we just want to store the 'positive' or 'negative' single label). `splits` is a torchtext method that creates train, test, and validation sets. The IMDB dataset is built into torchtext, so we can take advantage of that.
Take a look at `lang_model-arxiv.ipynb` to see how to define your own fastai/torchtext datasets. ``` IMDB_LABEL = data.Field(sequential=False) splits = torchtext.datasets.IMDB.splits(TEXT, IMDB_LABEL, 'data/') t = splits[0].examples[0] t.label, ' '.join(t.text[:16]) ``` fastai can create a ModelData object directly from torchtext splits. ``` md2 = TextData.from_splits(PATH, splits, bs) m3 = md2.get_model(opt_fn, 1500, bptt, emb_sz=em_sz, n_hid=nh, n_layers=nl, dropout=0.1, dropouti=0.4, wdrop=0.5, dropoute=0.05, dropouth=0.3) m3.reg_fn = partial(seq2seq_reg, alpha=2, beta=1) m3.load_encoder(f'adam3_10_enc') ``` Because we're fine-tuning a pretrained model, we'll use differential learning rates, and also increase the max gradient for clipping, to allow the SGDR to work better. ``` m3.clip=25. lrs=np.array([1e-4,1e-4,1e-4,1e-3,1e-2]) m3.freeze_to(-1) m3.fit(lrs/2, 1, metrics=[accuracy]) m3.unfreeze() m3.fit(lrs, 1, metrics=[accuracy], cycle_len=1) m3.fit(lrs, 7, metrics=[accuracy], cycle_len=2, cycle_save_name='imdb2') m3.load_cycle('imdb2', 4) accuracy_np(*m3.predict_with_targs()) ``` A recent paper from Bradbury et al, [Learned in translation: contextualized word vectors](https://einstein.ai/research/learned-in-translation-contextualized-word-vectors), has a handy summary of the latest academic research in solving this IMDB sentiment analysis problem. Many of the latest algorithms shown are tuned for this specific problem. ![image.png](attachment:image.png) As you see, we just got a new state of the art result in sentiment analysis, decreasing the error from 5.9% to 5.5%! You should be able to get similarly world-class results on other NLP classification problems using the same basic steps. There are many opportunities to further improve this, although we won't be able to get to them until part 2 of this course... ### End
## Special Functions - col and lit Let us understand special functions such as col and lit. These functions are typically used to convert the strings to column type. ``` %%HTML <iframe width="560" height="315" src="https://www.youtube.com/embed/lP2LOZfMcIc?rel=0&amp;controls=1&amp;showinfo=0" frameborder="0" allowfullscreen></iframe> ``` * First let us create Data Frame for demo purposes. Let us start spark context for this Notebook so that we can execute the code provided. You can sign up for our [10 node state of the art cluster/labs](https://labs.itversity.com/plans) to learn Spark SQL using our unique integrated LMS. ``` from pyspark.sql import SparkSession import getpass username = getpass.getuser() spark = SparkSession. \ builder. \ config('spark.ui.port', '0'). \ config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \ enableHiveSupport(). \ appName(f'{username} | Python - Processing Column Data'). \ master('yarn'). \ getOrCreate() ``` If you are going to use CLIs, you can use Spark SQL using one of the 3 approaches. **Using Spark SQL** ``` spark2-sql \ --master yarn \ --conf spark.ui.port=0 \ --conf spark.sql.warehouse.dir=/user/${USER}/warehouse ``` **Using Scala** ``` spark2-shell \ --master yarn \ --conf spark.ui.port=0 \ --conf spark.sql.warehouse.dir=/user/${USER}/warehouse ``` **Using Pyspark** ``` pyspark2 \ --master yarn \ --conf spark.ui.port=0 \ --conf spark.sql.warehouse.dir=/user/${USER}/warehouse ``` ``` employees = [(1, "Scott", "Tiger", 1000.0, "united states", "+1 123 456 7890", "123 45 6789" ), (2, "Henry", "Ford", 1250.0, "India", "+91 234 567 8901", "456 78 9123" ), (3, "Nick", "Junior", 750.0, "united KINGDOM", "+44 111 111 1111", "222 33 4444" ), (4, "Bill", "Gomes", 1500.0, "AUSTRALIA", "+61 987 654 3210", "789 12 6118" ) ] employeesDF = spark. 
\ createDataFrame(employees, schema="""employee_id INT, first_name STRING, last_name STRING, salary FLOAT, nationality STRING, phone_number STRING, ssn STRING""" ) ``` * For Data Frame APIs such as `select`, `groupBy`, `orderBy`, etc., we can pass column names as strings. ``` employeesDF. \ select("first_name", "last_name"). \ show() employeesDF. \ groupBy("nationality"). \ count(). \ show() employeesDF. \ orderBy("employee_id"). \ show() ``` * If no transformation is applied to any column in any function, we should be able to pass all column names as strings. * If not, we need to pass the columns as **Column** type, using the `col` function. * If we want to apply transformations using some of the functions, passing column names as strings will not suffice; we have to pass them as **Column** type. ``` from pyspark.sql.functions import col employeesDF. \ select(col("first_name"), col("last_name")). \ show() from pyspark.sql.functions import upper employeesDF. \ select(upper("first_name"), upper("last_name")). \ show() ``` * `col` is the function that converts a column name from string type to **Column** type. We can also refer to columns as **Column** type using the Data Frame name. ``` from pyspark.sql.functions import col, upper employeesDF. \ select(upper(col("first_name")), upper(col("last_name"))). \ show() employeesDF. \ groupBy(upper(col("nationality"))). \ count(). \ show() ``` * Also, if we want to use functions such as `alias`, `desc`, etc. on columns, then we have to pass the columns as **Column** type (not as strings). ``` # This will fail as the function desc is available only on column type. employeesDF. \ orderBy("employee_id".desc()). \ show() # We can invoke desc on columns which are of type column employeesDF. \ orderBy(col("employee_id").desc()). \ show() employeesDF. \ orderBy(col("first_name").desc()). \ show() # Alternative - we can also refer to column names using the Data Frame like this employeesDF.
\ orderBy(upper(employeesDF['first_name']).alias('first_name')). \ show() # Alternative - we can also refer to column names using the Data Frame like this employeesDF. \ orderBy(upper(employeesDF.first_name).alias('first_name')). \ show() ``` * Sometimes, we want to add a literal to the column values. For example, we might want to concatenate first_name and last_name, separated by a comma and a space. ``` from pyspark.sql.functions import concat # This will fail - ", " is treated as a column name employeesDF. \ select(concat(col("first_name"), ", ", col("last_name"))). \ show() # Referring to columns using the Data Frame - fails for the same reason employeesDF. \ select(concat(employeesDF["first_name"], ", ", employeesDF["last_name"])). \ show() ``` * If we pass literals directly as string or numeric values, the call will fail: Spark searches for a column by name using the string passed. In this example, it looks for a column named `, ` (comma followed by space). * We have to convert literals to **Column** type by using the `lit` function. ``` from pyspark.sql.functions import concat, col, lit employeesDF. \ select(concat(col("first_name"), lit(", "), col("last_name") ).alias("full_name") ). \ show(truncate=False) ```
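A side note on the earlier `desc` example: the failure is ordinary Python attribute lookup - a built-in `str` has no `desc` method, while a Spark `Column` object does. A quick check that needs no Spark session:

```python
# Plain strings have no `desc` attribute, which is why
# orderBy("employee_id".desc()) raises an AttributeError.
print(hasattr("employee_id", "desc"))  # False

try:
    "employee_id".desc()
except AttributeError as e:
    print(type(e).__name__)  # AttributeError
```

This is also the reason `lit` is needed for literals: every argument has to end up as a **Column** expression, and bare strings are interpreted as column names instead.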
# ANOVOS - Datetime Following notebook shows the list of functions related to "datetime" module provided under ANOVOS package and how it can be invoked accordingly. - [Timestamp and Epoch Conversion](#Timestamp-and-Epoch-Conversion) - [Timezone Conversion](#Timezone-Conversion) - [Timestamp and String Conversion](#Timestamp-and-String-Conversion) - [Dateformat Conversion](#Dateformat-Conversion) - [Time Units Extraction](#Time-Units-Extraction) - [Time Difference](#Time-Difference) - [Time Elapsed](#Time-Elapsed) - [Adding Time Units](#Adding-Time-Units) - [Timestamp Comparison](#Timestamp-Comparison) - [Aggregator](#Aggregator) - [Window Aggregation](#Window-Aggregation) - [Lagged Timeseries](#Lagged-Timeseries) - [Start / End of Month / Year / Quarter](#Start-/-End-of-Month-/-Year-/-Quarter) - [Binary features](#Binary-features) - Is start/end of month/year/quarter nor not - Is first half of the year/selected hours/leap year/weekend or not **Setting Spark Session** ``` import pandas as pd import pyspark from pyspark.sql import SparkSession from pyspark.sql import functions as F from pyspark.sql import types as T from pyspark.sql.window import Window spark = SparkSession.builder.appName("spark").getOrCreate() sc = spark.sparkContext sc.setLogLevel('ERROR') # Check the timezone print('Spark Timezone:', spark. 
conf.get("spark.sql.session.timeZone")) ``` ### Read Input Data ``` from anovos.data_ingest.data_ingest import read_dataset df = read_dataset(spark, file_path='../data/datetime_dataset/dataset2.csv', file_type="csv", file_configs={"header": "True", "delimiter": "," , "inferSchema": "True"}) df.limit(5).toPandas() df.printSchema() ``` # Timestamp and Epoch Conversion ## Timestamp to Unix - API specification of function **timestamp_to_unix** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import timestamp_to_unix # Example 1: result in second + input column in local timezone + append the new column odf = timestamp_to_unix(spark, df, 'time1', output_mode='append') odf.limit(5).toPandas() # Example 2: result in millisecond + input column in local timezone + replace the original column odf = timestamp_to_unix(spark, df, 'time1', precision="ms") odf.limit(5).toPandas() # Example 3: result in second + input column in utc + append the new column odf = timestamp_to_unix(spark, df, 'time1', tz='utc', output_mode='append') odf.limit(5).toPandas() ``` ## Unix to Timestamp - API specification of function **unix_to_timestamp** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import unix_to_timestamp # Example 1: input column in second & local timezone + append the new column odf = unix_to_timestamp(spark, df, 'unix', output_mode='append') odf.limit(5).toPandas() # Example 2: input column in millisecond & local timezone + replace the original column df2 = df.withColumn('unix_ms', F.col('unix')*F.lit(1000.0)) odf = unix_to_timestamp(spark, df2, 'unix_ms', precision="ms") odf.limit(5).toPandas() # Example 3: input column in millisecond & UTC + append the new column odf = unix_to_timestamp(spark, df, 'unix', tz='utc', output_mode='append') odf.limit(5).toPandas() ``` # Timezone Conversion - API specification of 
function **timezone_conversion** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import timezone_conversion # Example 1: local to UTC + append the new column odf = timezone_conversion(spark, df, 'time1', given_tz='local', output_tz='UTC',output_mode='append') odf.limit(5).toPandas() # Example 2: UTC to local + replace the original column odf = timezone_conversion(spark, df, 'time1', given_tz='UTC', output_tz='local') odf.limit(5).toPandas() ``` # Timestamp and String Conversion ## String to Timestamp - API specification of function **string_to_timestamp** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import string_to_timestamp # Example 1: output timestamp + append the new column odf = string_to_timestamp(spark, df, 'time2', input_format="%d/%m/%y %H:%M", output_type="ts",output_mode="append") odf.limit(5).toPandas() # Example 2: output date + replace the original column odf = string_to_timestamp(spark, df, 'time2', input_format="%d/%m/%y %H:%M", output_type="dt") odf.limit(5).toPandas() ``` ## Timestamp to String - API specification of function **timestamp_to_string** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import timestamp_to_string # Example 1: output format: %Y/%d/%m %H:%M:%S + append the new column odf = timestamp_to_string(spark, df, 'time1', output_format="%Y/%d/%m %H:%M:%S",output_mode="append") odf.limit(5).toPandas() # Example 2: output format: %Y/%d/%m + replace the original column odf = timestamp_to_string(spark, df, 'time1', output_format="%Y/%d/%m") odf.limit(5).toPandas() # Example 3: output format: %Y + replace the original column odf = timestamp_to_string(spark, df, 'time1', output_format="%Y") odf.limit(5).toPandas() ``` # Dateformat Conversion - API specification of function 
**dateformat_conversion** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import dateformat_conversion # Example 1: to default output format %Y-%m-%d %H:%M:%S + append the new column odf = dateformat_conversion(spark, df, 'time2', input_format="%d/%m/%y %H:%M", output_mode="append") odf.limit(5).toPandas() # Example 1: to %Y/%m/%d + replace the original column odf = dateformat_conversion(spark, df, 'time2', input_format="%d/%m/%y %H:%M", output_format="%Y/%m/%d") odf.limit(5).toPandas() ``` # Time Units Extraction - API specification of function **timeUnits_extraction** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import timeUnits_extraction # Example 1: Extract all units + append new columns odf = timeUnits_extraction(df, 'time1', 'all') odf.limit(5).toPandas() # Example 2: Extract selected units + append new columns odf = timeUnits_extraction(df, 'time1', ['dayofmonth', 'weekofyear', 'quarter']) odf.limit(5).toPandas() # Example 3: Extract selected units + pass units as string + replace the original column odf = timeUnits_extraction(df, 'time1', 'dayofmonth|weekofyear', output_mode='replace') odf.limit(5).toPandas() ``` # Time Difference - API specification of function **time_diff** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import time_diff # Example 1: output difference in hour + append the new column df2 = df.withColumn('time3', (F.col('time1') + F.expr('Interval '+ str(1) + ' hours'))) odf = time_diff(df2, 'time1', 'time3', unit='hour') odf.limit(5).toPandas() # Example 2: output difference in second + replace the original column df2 = df.withColumn('time3', (F.col('time1') + F.expr('Interval '+ str(1) + ' hours'))) odf = time_diff(df2, 'time1', 'time3', unit='second', output_mode="replace") 
odf.limit(5).toPandas() ``` # Time Elapsed - API specification of function **time_elapsed** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import time_elapsed # Example 1: output difference in day + append the new column odf = time_elapsed(df, 'time1', unit='day') odf.limit(5).toPandas() # Example 2: output difference in year + replace the original column odf = time_elapsed(df, 'time1', unit='year', output_mode="replace") odf.limit(5).toPandas() ``` # Adding Time Units - API specification of function **adding_timeUnits** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import adding_timeUnits # Example 1: minus 2 years + append the new column odf = adding_timeUnits(df, 'time1', unit='years', unit_value=-2) odf.limit(5).toPandas() # Example 2: plus 30 seconds + replace the original column odf = adding_timeUnits(df, 'time1', unit='seconds', unit_value=30, output_mode="replace") odf.limit(5).toPandas() ``` # Timestamp Comparison - API specification of function **timestamp_comparison** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import timestamp_comparison # Example 1: use the default comparison_format + append the new column odf = timestamp_comparison(spark, df, "time1", comparison_type="less_than", comparison_value="2015-02-11 06:50:00") odf.limit(5).toPandas() # Example 2: use nondefault comparison_format + append the new column odf = timestamp_comparison(spark, df, "time1", comparison_type="greaterThan_equalTo", comparison_value="2015/02/11 06:50:00", comparison_format="%Y/%m/%d %H:%M:%S") odf.limit(5).toPandas() # Example 3: use nondefault comparison_format + replace the original column odf = timestamp_comparison(spark, df, "time1", comparison_type="greater_than", comparison_value="2015/02/11 06:50:00", 
comparison_format="%Y/%m/%d %H:%M:%S", output_mode="replace") odf.limit(5).toPandas() ``` # Aggregator - API specification of function **aggregator** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import aggregator # Example 1: aggregate by date odf = aggregator(spark, df, ['Temperature', 'Humidity'], list_of_aggs=['min', 'max'], time_col='time1', granularity_format="%Y-%m-%d") odf.limit(5).toPandas() # Example 2: aggregate by week + pass columns and units as string odf = aggregator(spark, df, 'Light|CO2', list_of_aggs='mean|median', time_col='time1', granularity_format="%w") odf.limit(5).toPandas() ``` # Window Aggregation - API specification of function **window_aggregator** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import window_aggregator # Example 1: order by time1 + expanding window odf = window_aggregator(df, ['Temperature', 'Light'], ['min', 'max'], order_col='time1', window_type='expanding') odf.orderBy('id').limit(10).toPandas() # Example 2: order by time1 + rolling window of size 2 odf = window_aggregator(df, 'Humidity|id', 'mean|sum', order_col='time1', window_type='rolling', window_size=5) odf.orderBy('id').limit(10).toPandas() # Example 3: order by time1 + rolling window of size 2 + partition by Occupancy odf = window_aggregator(df, 'Humidity|id', 'mean|sum', order_col='time1', window_type='rolling', window_size=5, partition_col='Occupancy') odf.where(F.col('Occupancy')==0).limit(5).toPandas() ``` # Lagged Timeseries - API specification of function **lagged_ts** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import lagged_ts # Example 1: generate the lag column odf = lagged_ts(df, 'time1', lag=2, output_type='ts') odf.orderBy('id').limit(5).toPandas() # Example 2: generate the lag column and the 
time difference column odf = lagged_ts(df, 'time1', lag=2, output_type='ts_diff') odf.orderBy('id').limit(5).toPandas() # Example 3: generate the lag column and the time difference column in minutes odf = lagged_ts(df, 'time1', lag=2, output_type='ts_diff', tsdiff_unit='minutes') odf.orderBy('id').limit(5).toPandas() # Example 3: generate the lag column and the time difference column in minutes + partition by Occupancy odf = lagged_ts(df, 'time1', lag=2, output_type='ts_diff', tsdiff_unit='minutes', partition_col='Occupancy') odf.where(F.col('Occupancy')==0).limit(5).toPandas() ``` # Start / End of Month / Year / Quarter - `output_mode="replace"` can be used to replace the original column ## Start of Month - API specification of function **start_of_month** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import start_of_month odf = start_of_month(df, 'time1') odf.limit(5).toPandas() ``` ## End of Month - API specification of function **end_of_month** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import end_of_month odf = end_of_month(df, 'time1') odf.limit(5).toPandas() ``` ## Start of Year - API specification of function **start_of_year** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import start_of_year odf = start_of_year(df, 'time1') odf.limit(5).toPandas() ``` ## End of Year - API specification of function **end_of_year** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import end_of_year odf = end_of_year(df, 'time1') odf.limit(5).toPandas() ``` ## Start of Quarter - API specification of function **start_of_quarter** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from 
anovos.data_transformer.datetime import start_of_quarter odf = start_of_quarter(df, 'time1') odf.limit(5).toPandas() ``` ## End of Quarter - API specification of function **end_of_quarter** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import end_of_quarter odf = end_of_quarter(df, 'time1') odf.limit(5).toPandas() ``` # Binary features ## Is Month Start - API specification of function **is_monthStart** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_monthStart df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-02-01 00:00:00'}) odf = is_monthStart(df2, 'time1') odf.limit(5).toPandas() ``` ## Is Month End - API specification of function **is_monthEnd** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_monthEnd df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-02-28 00:00:00'}) odf = is_monthEnd(df2, 'time1') odf.limit(5).toPandas() ``` ## Is Year Start - API specification of function **is_yearStart** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_yearStart df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-01-01 00:00:00'}) odf = is_yearStart(df2, 'time1') odf.limit(5).toPandas() ``` ## Is Year End - API specification of function **is_yearEnd** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_yearEnd df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-12-31 00:00:00'}) odf = is_yearEnd(df2, 'time1') odf.limit(5).toPandas() ``` ## 
Is Quarter Start - API specification of function **is_quarterStart** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_quarterStart df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-04-01 00:00:00'}) odf = is_quarterStart(df2, 'time1') odf.limit(5).toPandas() ``` ## Is Quarter End - API specification of function **is_quarterEnd** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_quarterEnd df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-03-31 00:00:00'}) odf = is_quarterEnd(df2, 'time1') odf.limit(5).toPandas() ``` ## Is First Half of the Year - API specification of function **is_yearFirstHalf** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_yearFirstHalf df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-12-01 00:00:00'}) odf = is_yearFirstHalf(df2, 'time1') odf.limit(5).toPandas() ``` ## Is Selected Hour - API specification of function **is_selectedHour** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_selectedHour df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-02-01 03:00:00'}) odf = is_selectedHour(df2, 'time1', 6, 7) odf.limit(5).toPandas() ``` ## Is Leap Year - API specification of function **is_leapYear** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_leapYear df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2016-02-01 00:00:00'}) odf = is_leapYear(df2, 'time1') 
odf.limit(5).toPandas() ``` ## Is Weekend - API specification of function **is_weekend** can be found <a href="https://docs.anovos.ai/api/data_transformer/datetime.html">here</a> ``` from anovos.data_transformer.datetime import is_weekend df2 = df.withColumn('time1', F.col('time1').cast('string')).replace({'2015-02-11 06:48:00': '2015-02-01 00:00:00'}) odf = is_weekend(df2, 'time1') odf.limit(5).toPandas() ```
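As a rough illustration of what a check like `is_weekend` computes, here is a minimal plain-Python sketch of the same idea. This is an assumed reimplementation for intuition only - refer to the linked API specification for the exact behavior of the ANOVOS function:

```python
from datetime import datetime

def is_weekend_flag(ts: str, fmt: str = "%Y-%m-%d %H:%M:%S") -> int:
    """Return 1 if the timestamp falls on Saturday/Sunday, else 0."""
    # datetime.weekday(): Monday=0 ... Saturday=5, Sunday=6
    return int(datetime.strptime(ts, fmt).weekday() >= 5)

print(is_weekend_flag("2015-02-01 00:00:00"))  # 2015-02-01 was a Sunday -> 1
print(is_weekend_flag("2015-02-11 06:48:00"))  # a Wednesday -> 0
```

This also explains why the demo cell above replaces `2015-02-11 06:48:00` with `2015-02-01 00:00:00`: the substituted date lands on a weekend, so the flag flips to 1.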
Let's find out what causes hotel bookings to be cancelled. This analysis is adapted from a case study on the DoWhy library site and uses the hotel booking dataset of Antonio, Almeida, and Nunes (2019). The data is available from rfordatascience/tidytuesday on GitHub. There can be many reasons why a hotel booking gets cancelled. For example, - 1. The customer makes a request the hotel finds hard to meet (e.g. the hotel is short on parking spaces and the customer requests one), and cancels the booking after the request is refused, or - 2. The customer cancels the booking because their travel plans fell through. In a case like 1, the hotel can take additional measures (e.g. secure parking spaces at another facility), whereas in a case like 2 there is nothing the hotel can do. Either way, our goal is to understand the causes of booking cancellations in more detail. The best way to discover them would be an experiment such as an RCT (Randomized Controlled Trial). To quantify the effect of providing a parking space on booking cancellations, we would split customers into two groups, allocate parking spaces to one group but not the other, and compare the cancellation rates between the two groups. Of course, if word of such an experiment got out, the hotel's business would be as good as over. With only historical data and hypotheses, how do we find the answer? ``` %reload_ext autoreload %autoreload 2 # Config dict to set the logging level import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'INFO', }, } } logging.config.dictConfig(DEFAULT_LOGGING) # Disabling warnings output import warnings # !pip install sklearn from sklearn.exceptions import DataConversionWarning, ConvergenceWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) warnings.filterwarnings(action='ignore', category=ConvergenceWarning) warnings.filterwarnings(action='ignore', category=UserWarning) # !pip install dowhy import dowhy import pandas as pd import numpy as np import matplotlib.pyplot as plt # pd.options.plotting.backend = 'plotly' dataset = pd.read_csv('https://raw.githubusercontent.com/Sid-darthvader/DoWhy-The-Causal-Story-Behind-Hotel-Booking-Cancellations/master/hotel_bookings.csv') # dataset = pd.read_csv('hotel_bookings.csv') dataset.head() dataset.columns dataset[['is_canceled']].plot() ``` ## Feature Engineering Now let's create some meaningful features to reduce the dimensionality of the data.
**Total Stay** = **stays_in_week_nights** + **stays_in_weekend_nights** **Guests** = **adults** + **children** + **babies** **Different_room_assigned** = 1 if a room different from the reserved one was assigned, else 0 ``` # Total stay in nights dataset['total_stay'] = dataset['stays_in_week_nights']+dataset['stays_in_weekend_nights'] # Total number of guests dataset['guests'] = dataset['adults']+dataset['children'] +dataset['babies'] # Creating the different_room_assigned feature dataset['different_room_assigned']=0 slice_indices = dataset['reserved_room_type']!=dataset['assigned_room_type'] dataset.loc[slice_indices,'different_room_assigned']=1 # Deleting older features dataset = dataset.drop(['stays_in_week_nights','stays_in_weekend_nights','adults','children','babies' ,'reserved_room_type','assigned_room_type'],axis=1) ``` Columns with many missing values or many unique values will rarely be used in this analysis, so we delete them. For Country, we substitute the most frequent country for the missing values. We also delete **distribution_channel**, since it overlaps heavily with **market_segment**. ``` dataset.isnull().sum() # Country,Agent,Company contain 488,16340,112593 missing entries dataset = dataset.drop(['agent','company'],axis=1) # Replacing missing countries with most frequently occurring countries dataset['country']= dataset['country'].fillna(dataset['country'].mode()[0]) dataset = dataset.drop(['reservation_status','reservation_status_date','arrival_date_day_of_month'],axis=1) dataset = dataset.drop(['arrival_date_year'],axis=1) dataset = dataset.drop(['distribution_channel'], axis=1) # Replacing 1 by True and 0 by False for the experiment and outcome variables dataset['different_room_assigned']= dataset['different_room_assigned'].replace(1,True) dataset['different_room_assigned']= dataset['different_room_assigned'].replace(0,False) dataset['is_canceled']= dataset['is_canceled'].replace(1,True) dataset['is_canceled']= dataset['is_canceled'].replace(0,False) dataset.dropna(inplace=True) print(dataset.columns) dataset.iloc[:, 5:20].head(100) dataset = dataset[dataset.deposit_type=="No
Deposit"] dataset.groupby(['deposit_type','is_canceled']).count() dataset_copy = dataset.copy(deep=True) ``` ### Calculating Expected Count 가설을 하나 세워봅시다. - *고객은 예약과 다른 방을 배정받으면, 예약을 취소한다.* 위의 가설에 해당하는 그룹과 그렇지 않은 그룹으로 데이터를 구분 후 *가설에 해당되는 그룹의 인원*을 계산해볼 수 있겠습니다. **is_cancled**와 **different_room_assigned**가 매우 Imbalance하기 때문에,(different_room_assigned=0: 104,469개, different_room_assigned=1: 14,917개) 1,000개의 관측치를 랜덤으로 샘플링 후, - **different_room_assigned**변수와 **is_cancled**변수가 같은 값을 가지는 경우가 얼마나 있었는지 (i.e. case 1. "예약과 다른 방이 배정" & "예약 취소" 인 경우와 case 2. "예약과 동일한 방이 배정" & "예약 유지"인 경우가 얼마나 있었는지) 확인합니다. 그리고 이 프로세스(샘플링하고 갯수세기)를 10,000번 반복하면, *가설에 해당되는 그룹 인원* 기댓값를 계산할 수 있겠네요 계산해보면, *가설에 해당되는 그룹*의 Expected Count는 거의 50%에 가깝습니다. (i.e. 두 변수가 무작위로 동일한 값을 얻을 확률) 풀어서 설명하면, 임의의 고객에게 예약한 방과 다른 방을 배정하면, 예약을 취소할 수도 있고 취소하지 않을 수도 있습니다. 따라서, 통계적으로는 이 단계에서는 명확한 결론이 없습니다. ``` counts_sum=0 for i in range(1,10000): counts_i = 0 rdf = dataset.sample(1000) counts_i = rdf[rdf["is_canceled"]== rdf["different_room_assigned"]].shape[0] # counts_i = rdf.loc[(rdf["is_canceled"]==1)&(rdf["different_room_assigned"]==1)].shape[0] counts_sum+= counts_i counts_sum/10000 ``` 이제 예약변경 횟수가 0인 집단 중 *가설에 해당되는 그룹*의 Expected Count를 확인하겠습니다. ``` # Expected Count when there are no booking changes counts_sum=0 for i in range(1,10000): counts_i = 0 rdf = dataset[dataset["booking_changes"]==0].sample(1000) counts_i = rdf[rdf["is_canceled"]== rdf["different_room_assigned"]].shape[0] counts_sum+= counts_i counts_sum/10000 ``` 두 번째 케이스로, 예약변경이 1회 이상인 집단 중 *가설에 해당되는 그룹*의 Expected Count를 확인하겠습니다. ``` # Expected Count when there are booking changes = 66.4% counts_sum=0 for i in range(1,10000): counts_i = 0 rdf = dataset[dataset["booking_changes"]>0].sample(1000) counts_i = rdf[rdf["is_canceled"]== rdf["different_room_assigned"]].shape[0] counts_sum+= counts_i counts_sum/10000 ``` 예약 변경횟수가 1보다 큰 경우의 Expected Count(약 600)가 예약 변경횟수가 0인 경우(약 500)보다 훨씬 큰 것을 확인할 수 있습니다. 
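The sampling loop above estimates 1000 × P(is_canceled == different_room_assigned). As a minimal sketch (on synthetic data, not the real bookings table), the same quantity can be read off directly from the full frame without resampling:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the bookings table: two independent binary
# columns, so agreement should happen about half the time.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "is_canceled": rng.integers(0, 2, size=10_000).astype(bool),
    "different_room_assigned": rng.integers(0, 2, size=10_000).astype(bool),
})

# Expected agreement count per 1000-row sample, read off directly:
p_agree = (df["is_canceled"] == df["different_room_assigned"]).mean()
expected_count = 1000 * p_agree
print(expected_count)  # close to 500 for independent variables
```

On the real (dependent, imbalanced) columns the direct computation gives the same answer the 10,000-sample loop converges to, just without the Monte Carlo noise.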
This tells us that the **Booking Changes** column is a confounding variable (a confounder affects both X and Y, distorting the estimated strength of the causal relationship between them). But is **Booking Changes** the only confounding variable? If there were a confounder among the columns that we had not identified, could we still make the same argument as before?

### Step-1. Create a Causal Graph

Let's first express our prior knowledge about the problem as a causal graph. There is no need to draw the complete graph, so don't worry too much. The assumptions we encode in the graph are as follows.

* The **Market Segment** column takes two values: **TA** means Travel Agent and **TO** means Tour Operator. Market segment should affect **Lead Time** (the time between booking and check-in). For more detail on TA vs. TO see: https://www.tenontours.com/the-difference-between-tour-operators-and-travel-agents/
* **Country** should help determine whether a customer books early or late, and which meal they prefer.
* **Lead Time** should affect the size of **Days in Waitlist** (book late and fewer rooms are left).
* **Days in Waitlist**, **Total Stay in nights** and **Guests** should affect whether a booking is cancelled or kept (with many guests and a long stay, finding another hotel is not easy).
* **Previous Booking Retentions** should affect whether a customer is a **Repeated Guest**, and both variables should affect whether the booking is cancelled (for example, a customer who has kept many previous bookings is likely to keep the next one, while a customer who has often cancelled is likely to cancel again).
* **Booking Changes** should (as we saw above) affect both whether the customer is assigned a different room and whether the booking is cancelled.
* Finally, it is unlikely (in our experience) that **Booking Changes** is the *only* variable confounding the treatment we care about (being assigned a different room).

```
import pygraphviz
causal_graph = """digraph {
different_room_assigned[label="Different Room Assigned"];
is_canceled[label="Booking Cancelled"];
booking_changes[label="Booking Changes"];
previous_bookings_not_canceled[label="Previous Booking Retentions"];
days_in_waiting_list[label="Days in Waitlist"];
lead_time[label="Lead Time"];
market_segment[label="Market Segment"];
country[label="Country"];
U[label="Unobserved Confounders"];
is_repeated_guest;
total_stay;
guests;
meal;
hotel;
U->different_room_assigned; U->is_canceled; U->required_car_parking_spaces;
market_segment -> lead_time;
lead_time -> is_canceled; country -> lead_time;
different_room_assigned -> is_canceled;
country -> meal;
lead_time -> days_in_waiting_list;
days_in_waiting_list -> is_canceled;
previous_bookings_not_canceled -> is_canceled;
previous_bookings_not_canceled -> is_repeated_guest;
is_repeated_guest -> is_canceled;
total_stay -> is_canceled;
guests -> is_canceled;
booking_changes -> different_room_assigned; booking_changes -> is_canceled;
hotel -> is_canceled;
required_car_parking_spaces -> is_canceled;
total_of_special_requests -> is_canceled;
country->{hotel, required_car_parking_spaces,total_of_special_requests,is_canceled};
market_segment->{hotel, required_car_parking_spaces,total_of_special_requests,is_canceled};
}"""
```

The treatment is whether the customer was assigned the room they selected when booking (**different_room_assigned**). The outcome is whether the booking was cancelled (**is_canceled**). A common cause is a variable that affects both the treatment and the outcome: here **Booking Changes** and the **Unobserved Confounders** (confounders we have not identified) are the two common causes. If we do not specify the graph explicitly (not recommended!), the confounders have to be supplied as parameters instead.

```
model = dowhy.CausalModel(
    data=dataset,
    graph=causal_graph.replace("\n", " "),
    treatment='different_room_assigned',
    outcome='is_canceled')
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
```

## Step2. Identify the Causal Effect

If a change in the treatment variable leads to a change in the outcome variable (and nothing else does), we can say the treatment causes the outcome. In this step we identify the causal effect to be estimated.

```
import statsmodels
model = dowhy.CausalModel(
    data=dataset,
    graph=causal_graph.replace("\n", " "),
    treatment="different_room_assigned",
    outcome='is_canceled')

# Identify the causal effect
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
```

## Step3. Estimate the identified estimand

```
estimate = model.estimate_effect(identified_estimand,
                                 method_name="backdoor.propensity_score_stratification",target_units="ate")
# ATE = Average Treatment Effect
# ATT = Average Treatment Effect on Treated (i.e. those who were assigned a different room)
# ATC = Average Treatment Effect on Control (i.e. those who were not assigned a different room)
print(estimate)
```

Quite an interesting result. According to the estimated effect, when a room different from the reserved one is assigned (**different_room_assigned** = 1), the booking is *less* likely to be cancelled. Thinking about it once more: is this really the correct causal effect? Can assigning a different room, because the reserved one is unavailable, really have a positive effect on the customer? <br/>

There may be a different mechanism at work. Assigning a different room usually happens at check-in, which means the customer has already arrived at the hotel and is therefore unlikely to cancel. If that mechanism is correct, our assumed graph is missing information about *when* the different room is assigned; knowing when that event occurs could help improve the analysis. <br/>

Earlier, the association analysis showed a positive correlation between is_canceled and different_room_assigned, but estimating the causal effect with the DoWhy library gives a different answer. This implies that a decision by the hotel to reduce the number of "different room assigned" events could be counterproductive.

## Step4. Refute result

Causation does not come from the data itself; the data is only used for statistical estimation. In other words, it is important to check whether our assumption ("a customer cancels the booking when assigned a room different from the one reserved") holds up. What if there were yet another common cause? What if the treatment's apparent effect were a placebo effect?

### Method-1

**Random Common Cause:** Add a randomly drawn covariate to the data, re-run the analysis, and check whether the causal estimate changes. If our assumptions were correct, the estimate should not change much.
```
refute1_results = model.refute_estimate(identified_estimand, estimate,
        method_name="random_common_cause")
print(refute1_results)
```

### Method-2

**Placebo Treatment Refuter:** Replace the treatment variable with a random covariate and re-run the analysis. If our assumptions were correct, the new estimate should go to zero.

```
refute2_results = model.refute_estimate(identified_estimand, estimate,
        method_name="placebo_treatment_refuter")
print(refute2_results)
```

### Method-3

**Data Subset Refuter:** Create subsets of the data (as in cross-validation), run the analysis on each subset, and check how much the estimates vary. If our assumptions were correct, the estimates should not vary much.

```
refute3_results = model.refute_estimate(identified_estimand, estimate,
        method_name="data_subset_refuter")
print(refute3_results)
```

We have run refutation tests with three different methods. These tests do not prove that the estimate is correct, but they do increase our confidence in it.
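The logic behind the placebo refuter can be sketched in a few lines of numpy (synthetic data, not DoWhy's implementation): shuffling the treatment column breaks any real treatment-to-outcome link, so a naive effect estimate on the shuffled data should collapse towards zero while the estimate on the real treatment stays put.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
t = rng.integers(0, 2, size=n)        # treatment assignment
y = rng.binomial(1, 0.3 + 0.2 * t)    # outcome with a real +0.2 effect

def naive_effect(t, y):
    # Difference in outcome means between treated and control
    return y[t == 1].mean() - y[t == 0].mean()

placebo_t = rng.permutation(t)        # "placebo" treatment: a shuffled copy

print(naive_effect(t, y))             # recovers roughly +0.2
print(naive_effect(placebo_t, y))     # collapses towards 0
```

If the placebo estimate did *not* collapse, that would suggest the original estimate was driven by something other than the treatment.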
github_jupyter
# Encoders and decoders

> In this post, we will implement a simple autoencoder architecture. This is the summary of lecture "Probabilistic Deep Learning with Tensorflow 2" from Imperial College London.

- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Coursera, Tensorflow_probability, ICL]
- image: images/mnist_reconstruction.png

## Packages

```
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
import matplotlib.pyplot as plt

tfd = tfp.distributions
tfpl = tfp.layers
tfb = tfp.bijectors

plt.rcParams['figure.figsize'] = (10, 6)

print("Tensorflow Version: ", tf.__version__)
print("Tensorflow Probability Version: ", tfp.__version__)
```

## Overview

### AutoEncoder architecture with tensorflow

An autoencoder has a bottleneck architecture. The width of each of the dense layers decreases at first, all the way down to the middle bottleneck layer. The network then starts to widen out again until the final layer has the same shape as the input layer.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Reshape

autoencoder = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(256, activation='sigmoid'),
    Dense(64, activation='sigmoid'),
    Dense(2, activation='sigmoid'),
    Dense(64, activation='sigmoid'),
    Dense(256, activation='sigmoid'),
    Dense(784, activation='sigmoid'),
    Reshape((28, 28))
])

optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.0005)
autoencoder.compile(loss='mse', optimizer=optimizer)
autoencoder.fit(X_train, X_train, epochs=20)
```

You can see that this network is trained on X_train, not labels, so it is a form of unsupervised learning.

### Encoder and decoder

We can separate the autoencoder architecture into an encoder and a decoder.
```python
from tensorflow.keras.models import Model

encoder = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(256, activation='sigmoid'),
    Dense(64, activation='sigmoid'),
    Dense(2, activation='sigmoid')
])

decoder = Sequential([
    Dense(64, activation='sigmoid', input_shape=(2, )),
    Dense(256, activation='sigmoid'),
    Dense(784, activation='sigmoid'),
    Reshape((28, 28))
])

autoencoder = Model(inputs=encoder.input, outputs=decoder(encoder.output))
autoencoder.compile(loss='mse', optimizer='sgd')
autoencoder.fit(X_train, X_train, epochs=20)
```

### Test for reconstruction

```python
# X_test: (1, 28, 28)
reconstruction = autoencoder(X_test)
X_encoded = encoder(X_test)

z = tf.random.normal([1, 2])
z_decoded = decoder(z)  # (1, 28, 28)
```

## Tutorial

We'll use Fashion MNIST for the tutorial.

```
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Flatten, Reshape
import seaborn as sns

# Load Fashion MNIST
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
class_names = np.array(['T-shirt/top', 'Trouser/pants', 'Pullover shirt', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag','Ankle boot']) # Display a few examples n_examples = 1000 example_images = X_test[0:n_examples] example_labels = y_test[0:n_examples] f, axs = plt.subplots(1, 5, figsize=(15, 4)) for j in range(len(axs)): axs[j].imshow(example_images[j], cmap='binary') axs[j].axis('off') plt.show() # Define the encoder encoded_dim = 2 encoder = Sequential([ Flatten(input_shape=(28, 28)), Dense(256, activation='sigmoid'), Dense(64, activation='sigmoid'), Dense(encoded_dim) ]) # Encode examples before training pretrain_example_encodings = encoder(example_images).numpy() # Plot encoded examples before training f, ax = plt.subplots(1, 1, figsize=(7, 7)) sns.scatterplot(x=pretrain_example_encodings[:, 0], y=pretrain_example_encodings[:, 1], hue=class_names[example_labels], ax=ax, palette=sns.color_palette("colorblind", 10)); ax.set_xlabel('Encoding dimension 1'); ax.set_ylabel('Encoding dimension 2') ax.set_title('Encodings of example images before training'); plt.show() # Define the decoder decoder = Sequential([ Dense(64, activation='sigmoid', input_shape=(encoded_dim,)), Dense(256, activation='sigmoid'), Dense(28*28, activation='sigmoid'), Reshape((28, 28)) ]) # Compile and fit the model autoencoder = Model(inputs=encoder.inputs, outputs=decoder(encoder.output)) # Specify loss - input and output is in [0., 1.], so we can use a binary cross-entropy loss autoencoder.compile(loss='binary_crossentropy') # Fit the model - highlight that labels and input are the same autoencoder.fit(X_train, X_train, epochs=10, batch_size=32) # Compute example encodings after training posttrain_example_encodings = encoder(example_images).numpy() # Compare the example encodings before and after training f, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 7)) sns.scatterplot(x=pretrain_example_encodings[:, 0], y=pretrain_example_encodings[:, 1], hue=class_names[example_labels], 
ax=axs[0], palette=sns.color_palette("colorblind", 10)); sns.scatterplot(x=posttrain_example_encodings[:, 0], y=posttrain_example_encodings[:, 1], hue=class_names[example_labels], ax=axs[1], palette=sns.color_palette("colorblind", 10)); axs[0].set_title('Encodings of example images before training'); axs[1].set_title('Encodings of example images after training'); for ax in axs: ax.set_xlabel('Encoding dimension 1') ax.set_ylabel('Encoding dimension 2') ax.legend(loc='lower right') plt.show() # Compute the autoencoder's reconstructions reconstructed_example_images = autoencoder(example_images) # Evaluate the autoencoder's reconstructions f, axs = plt.subplots(2, 5, figsize=(15, 4)) for j in range(5): axs[0, j].imshow(example_images[j], cmap='binary') axs[1, j].imshow(reconstructed_example_images[j].numpy().squeeze(), cmap='binary') axs[0, j].axis('off') axs[1, j].axis('off') plt.show() ```
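The bottleneck idea above has a simple linear analogue worth keeping in mind: projecting data onto its top-k singular vectors and back is the optimal rank-k *linear* "autoencoder". A small numpy sketch (separate from the Keras models above, with a synthetic low-rank dataset):

```python
import numpy as np

# Rank-8 data embedded in 28 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 28))

# Truncated SVD gives a linear encoder/decoder pair with a k-dim bottleneck.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 8
encode = lambda X: X @ Vt[:k].T   # (n, 28) -> (n, k)
decode = lambda Z: Z @ Vt[:k]     # (n, k)  -> (n, 28)

X_rec = decode(encode(X))
print(np.allclose(X, X_rec))      # rank-8 data is reconstructed exactly
```

The nonlinear activations in the Keras autoencoder let it do better than this linear baseline on data (like images) that does not live on a linear subspace.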
```
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Grand_Assign").getOrCreate()
df = spark.read.csv('hdfs://quickstart.cloudera:8020/user/cloudera/census.csv',header=True,inferSchema=True)

# Uh oh Strings!
df.describe().printSchema()

df.select('STNAME').distinct().show()
df.select('CTYNAME').distinct().show()
df.select(['CTYNAME','CENSUS2010POP']).distinct().show()

# Could have also used describe
from pyspark.sql.functions import max,min
df.select(max("CENSUS2010POP"),min("CENSUS2010POP")).show()

df.filter("CENSUS2010POP = 82 ").select("STNAME","CTYNAME").show()
df.filter("CENSUS2010POP = 37253956 ").select("STNAME","CTYNAME").show()
df.filter("STNAME= 'California' ").select("STNAME","CTYNAME").show()

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

pd_df = pd.read_csv("https://raw.githubusercontent.com/words-sdsc/coursera/master/big-data-2/csv/census.csv")
pd_df.head()

pd_df_sub = pd_df[['BIRTHS2010', 'DEATHS2010']]
pd_df_sub.plot.scatter(x='BIRTHS2010', y='DEATHS2010')
pd_df_sub = pd_df[['BIRTHS2011', 'DEATHS2011']]
pd_df_sub.plot.scatter(x='BIRTHS2011', y='DEATHS2011')
pd_df_sub = pd_df[['BIRTHS2012', 'DEATHS2012']]
pd_df_sub.plot.scatter(x='BIRTHS2012', y='DEATHS2012')
pd_df_sub = pd_df[['BIRTHS2013', 'DEATHS2013']]
pd_df_sub.plot.scatter(x='BIRTHS2013', y='DEATHS2013')
pd_df_sub = pd_df[['BIRTHS2014', 'DEATHS2014']]
pd_df_sub.plot.scatter(x='BIRTHS2014', y='DEATHS2014')
pd_df_sub = pd_df[['BIRTHS2015', 'DEATHS2015']]
pd_df_sub.plot.scatter(x='BIRTHS2015', y='DEATHS2015')

pd_df_sub = pd_df['RBIRTH2011']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RBIRTH2012']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RBIRTH2013']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RBIRTH2014']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RBIRTH2015']
pd_df_sub.plot.line()

pd_df_sub = pd_df['RDEATH2011']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RDEATH2012']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RDEATH2013']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RDEATH2014']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RDEATH2015']
pd_df_sub.plot.line()

pd_df_sub = pd_df['RINTERNATIONALMIG2011']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RINTERNATIONALMIG2012']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RINTERNATIONALMIG2013']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RINTERNATIONALMIG2014']
pd_df_sub.plot.line()
pd_df_sub = pd_df['RINTERNATIONALMIG2015']
pd_df_sub.plot.line()
```
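The per-year cells above are near-identical; column names like `RBIRTH2011` through `RBIRTH2015` can be generated in a loop instead of copy-pasted. A sketch with a tiny synthetic frame standing in for `pd_df` (the census CSV is not re-downloaded here):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = range(2011, 2016)

# Tiny stand-in for pd_df using the same column-naming scheme.
pd_df = pd.DataFrame({f"RBIRTH{y}": rng.random(10) for y in years})

# Build the column list once, then select all years together.
cols = [f"RBIRTH{y}" for y in years]
sub = pd_df[cols]   # sub.plot.line() would draw all five years on one axis
print(cols)
```

The same pattern works for the `RDEATH` and `RINTERNATIONALMIG` families of columns.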
# Change Detection - Image Ratio U-Net Classifier

### Summary

This notebook trains a Convolutional Neural Network (CNN) to identify building change from the pixel ratios between before/after [Sentinel-2](https://sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/overview) imagery. For a better understanding of the ratio method, begin with `change_detection.ipynb`. The model is trained on the pixel ratios of pre- & post-disaster imagery for events in the Caribbean. Ground truth building damage data is gathered from [Copernicus EMS](https://emergency.copernicus.eu/mapping/map-of-activations-rapid#zoom=3&lat=29.18235&lon=-70.57787&layers=BT00).

If you already have a trained model and simply wish to evaluate its output at a new location, skip to section 5 after section 1.

### Contents
- 1 - [Visualise Ratio & Damage Labels](#trLabels)
- 2 - [Training Images](#trImages)
- 3 - [Build Model](#buildModel)
- 4 - [Train Model](#trainModel)
- 5 - [Predictions](#Predictions)
- 6 - [Evaluate Prediction Accuracy](#PredictionAccuracy)
- 7 - [Test Location](#TestLocation)

### Requirements
- Lat/Long of desired location
- Before and after dates for change detection
- Output of damages at location if evaluating model

__________________________
### Initialisation steps
- Define variables & import packages

```
## Define location, dates and satellite
location = 'Roseau' # Name for saved .pngs
lat, lon = 15.3031, -61.3834 # Center of Area of Interest
zoom = 15 # Map tile zoom, default 16
st_date, end_date = ['2017-08-15', '2017-10-01'], ['2017-09-15', '2017-12-01'] # Timeframes for before-after imagery: start 1, start 2; end 1, end 2
satellite = "sentinel-2:L1C" # Descartes product name
bands = ['red','green','blue'] # Bands used for visualisation
cloudFraction = 0.05 # May need to be adjusted to get images from appropriate dates for Sentinel

## Testing
preModel = "models/optimalModel" # Use a pre-trained model - if training leave as ""
deployed = False # Run model for area without damage assessment

# If a damage geojson already exists for location - else leave as ""
dmgJsons = "" # Damage file name qualifying location and area size if already exists

# Form new damage assessment json from Copernicus EMS database
dmgAssess = "gradings/EMSR246_04ROSEAU_02GRADING_v1_5500_settlements_point_grading.dbf" # Copernicus EMS damage assessment database location (.dbf file needs .prj,.shp,.shx in same directory)
grades = ['Completely Destroyed','Highly Damaged'] # Copernicus EMS labels included, options: 'Not Applicable','Negligible to slight damage', 'Moderately Damaged', 'Highly Damaged'
area = 0.0004 # Building footprint diameter in lat/long degrees (0.0001~10m at equator)
newDmgLocation = 'geojsons/'+location+'Damage'+str(area)[2:]+'g'+str(len(grades))+'.geojson' # Location for newly created damage .json

## Training - Model training input
resolution = 10 # Resolution of satellite imagery -> 10 if Sentinel
tilesize, pad, trainArea = 16, 0, 0.0003 # Tilesize for rastering -> 32 as default, tile padding
records = "records/"+location+str(trainArea)[2:]+"g"+str(len(grades))+"x"+str(tilesize)+"p"+str(pad)+".tfrecords" # Name of file for training labels
learning_rate, epochs, batch_size, n_samples = 1e-3, 50, 8, 2000 # Model training parameters
modelName = "models/"+location+"g"+str(len(grades))+"ts"+str(tilesize)+"pd"+str(pad)+"lr"+str(learning_rate)[2:]+"e"+str(epochs)+"bs"+str(batch_size)+"a"+str(trainArea)[2:]+"n"+str(n_samples) if preModel == "" else preModel # Define output model name

# Import packages
# Python libraries
import IPython
import ipywidgets
import ipyleaflet
import json
import random
import os
import geojson
import numpy as np
import pandas as pd
import geopandas as gpd
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline

# Library functions
from tqdm import tqdm
from ipyleaflet import Map, GeoJSON, GeoData, LegendControl
from shapely.geometry import Polygon, Point
from tensorflow.keras.models import load_model

# Descartes Labs
import descarteslabs as dl
import descarteslabs.workflows as wf

# Custom functions
from utils import make_ground_dataset_from_ratio_polygons, get_center_location
from unet import UNet
```

______________________
<a id='trLabels'></a>
## 1. Visualise Ratio & Damage Labels

First let's extract the training labels from EMS Copernicus data and visualise them. Use the magic markers below the map to scale the imagery properly.

```
# Function to create damage json from EMS Copernicus database
if not deployed:
    def createDmgJson(dmgAssess, grades, area, dmgJsons):
        settlements = gpd.read_file(dmgAssess).to_crs({'init': 'epsg:4326'}) # Read from file
        color_dict = {'Not Applicable':'green','Negligible to slight damage':'blue', 'Moderately Damaged':'yellow',
                      'Highly Damaged':'orange', 'Completely Destroyed':'red'}
        damage = settlements[settlements.grading.isin(grades)] # Filter settlements to be within specified damage grade and location polygon
        if damage.geometry[damage.index[0]].type != 'Polygon': # Gets point assessment damages into geojson file
            features = []
            for i in tqdm(damage.index):
                poly = Polygon([[damage.geometry.x[i], damage.geometry.y[i]],
                                [damage.geometry.x[i]+area, damage.geometry.y[i]],
                                [damage.geometry.x[i]+area, damage.geometry.y[i]+area],
                                [damage.geometry.x[i], damage.geometry.y[i]+area],
                                [damage.geometry.x[i], damage.geometry.y[i]]])
                features.append(geojson.Feature(properties={"Damage": damage.grading[i]}, geometry=poly))
            fc = geojson.FeatureCollection(features)
            with open(dmgJsons, 'w') as f:
                geojson.dump(fc, f)
        else:
            with open(dmgJsons, 'w') as f:
                geojson.dump(damage, f) # Puts polygon assessments into geojson file

# If geojson of damage from EMS Copernicus does not exist - create one
if not os.path.exists(dmgJsons) and not os.path.exists(newDmgLocation):
    createDmgJson(dmgAssess, grades, area, newDmgLocation)

try:
    fc = gpd.read_file(dmgJsons) # Read training label data from damage file
except:
    fc, dmgJsons = gpd.read_file(newDmgLocation), newDmgLocation

# Initialise map
m1 = wf.interactive.MapApp()
m1.center, m1.zoom = (lat, lon), zoom

# Define function which displays satellite imagery on map
def getImage(time, bands, opacity, mapNum):
    img = wf.ImageCollection.from_id(satellite, start_datetime=st_date[time], end_datetime=end_date[time])
    if 'sentinel' in satellite: # Use sentinel cloud-mask band if available
        img = img.filter(lambda img: img.properties["cloud_fraction"] <= cloudFraction)
        img = img.map(lambda img: img.mask(img.pick_bands('cloud-mask')==1))
    mos = (img.mosaic().pick_bands(bands))
    globals()['mos_'+str(time+1)+str(bands)] = mos
    display = mos.visualize('Image '+str(time+1)+' '+str(bands), map=mapNum)
    display.opacity = opacity

# Display before and after images for selected bands - needs to be RGB for training this model
for i in range(len(st_date)):
    getImage(i, bands, 0.7, m1)

# Calculate logarithmic ratio for RGB images and display
ratio = wf.log10(globals()['mos_1'+str(bands)] / globals()['mos_2'+str(bands)])
rdisplay = ratio.visualize('Ratio', map=m1)
rdisplay.opacity = 0

# Plot damage assessment data
if not deployed:
    geo_data = GeoData(geo_dataframe=fc, style={"color": "red", "fillOpacity": 0.4}, hover_style={"fillOpacity": 0.5})
    m1.add_layer(geo_data)
    # Legend
    m1.add_control(LegendControl({"Recorded Damage":"#FF0000"}))

m1 # Display map
```

Sections 2-4 are for training a new model. If assessing performance on a new location with damage assessments, jump to [section 5](#Predictions). If evaluating change over a new area without ground data (i.e. deployed = True), jump to [section 7](#TestLocation)

______________________
<a id='trImages'></a>
## 2. Training images

Next, let's make an image dataset for training. The training data for this segmentation model will be comprised of RGB image tiles with corresponding target rasters of the same size. Targets are binary rasters where 1 indicates the presence of a damaged building and 0 indicates the absence.
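As a toy illustration of the signal those targets are paired with (synthetic arrays, not real Sentinel tiles): pixels that darken between the before and after images get a large positive log10 ratio, and a threshold turns that into a binary change raster of the same shape as the targets described above.

```python
import numpy as np

before = np.full((4, 4), 0.8)
after = before.copy()
after[1:3, 1:3] = 0.2                    # a 2x2 patch of "damaged" pixels

ratio = np.log10(before / after)         # ~0.6 on the patch, 0 elsewhere
change = (ratio > 0.3).astype(np.uint8)  # binary target-style raster
print(change.sum())                      # 4 changed pixels
```

The U-Net learns a far richer mapping than this fixed threshold, but the input it sees is exactly this kind of log-ratio image.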
The function below tiles the region covering the labels, extracts the corresponding tile of the ratio image displayed above, and makes the corresponding target raster. Training pixel size can be varied in the variables section. These training data are saved as TFRecords for efficient model training.

This step will take 5-10 minutes. The dataset only has to be created once; in case the notebook is re-run with the same parameters as a previous run, this cell will be skipped.

```
if not os.path.exists(records): # If records have not already been created
    if not os.path.exists("records"): os.mkdir("records") # Create directory for record output if not existing
    trainJsons = 'geojsons/'+location+'Damage'+str(trainArea)[2:]+'g'+str(len(grades))+'.geojson'
    if not os.path.exists(trainJsons): createDmgJson(dmgAssess, grades, trainArea, trainJsons)
    n_samples = make_ground_dataset_from_ratio_polygons(
        ratio,
        trainJsons,
        products=satellite,
        bands=bands,
        resolution=resolution,
        tilesize=tilesize,
        pad=pad,
        start_datetime=st_date[0],
        end_datetime=end_date[0],
        out_file=records,
    )
```

In order to read the TFRecords, the data structure and a parsing function are defined next.
```
# Define the features in the TFRecords file
features = {
    "image/image_data": tf.io.FixedLenSequenceFeature([], dtype=tf.float32, allow_missing=True),
    "image/height": tf.io.FixedLenFeature([], tf.int64),
    "image/width": tf.io.FixedLenFeature([], tf.int64),
    "image/channels": tf.io.FixedLenFeature([], tf.int64),
    "target/target_data": tf.io.FixedLenSequenceFeature([], dtype=tf.float32, allow_missing=True),
    "target/height": tf.io.FixedLenFeature([], tf.int64),
    "target/width": tf.io.FixedLenFeature([], tf.int64),
    "target/channels": tf.io.FixedLenFeature([], tf.int64),
    "dltile": tf.io.FixedLenFeature([], tf.string),
}

def parse_example(example_proto):
    image_features = tf.io.parse_single_example(example_proto, features)

    img_height = tf.cast(image_features["image/height"], tf.int32)
    img_width = tf.cast(image_features["image/width"], tf.int32)
    img_channels = tf.cast(image_features["image/channels"], tf.int32)

    target_height = tf.cast(image_features["target/height"], tf.int32)
    target_width = tf.cast(image_features["target/width"], tf.int32)
    target_channels = tf.cast(image_features["target/channels"], tf.int32)

    image_raw = tf.reshape(
        tf.squeeze(image_features["image/image_data"]),
        tf.stack([img_height, img_width, img_channels]),
    )
    target_raw = tf.reshape(
        tf.squeeze(image_features["target/target_data"]),
        tf.stack([target_height, target_width, target_channels]),
    )
    return image_raw, target_raw
```

Let's create a simple data pipeline to visualize some samples from the dataset.

```
# Create a TFRecordDataset to read images from these TFRecords
data = tf.data.TFRecordDataset(records).map(parse_example, num_parallel_calls=4)
data_viz = iter(data.batch(1))

# Visualize samples. You can re-run this cell to iterate through the dataset.
img, trg = next(data_viz)

fig, ax = plt.subplots(1, 2, figsize=(8, 5))
rat = ax[0].imshow(np.exp(img.numpy()).astype(float)[0])
lab = ax[1].imshow(trg.numpy().astype(np.uint8)[0].squeeze())
rat_title = ax[0].set_title("Ratio pixels")
lab_title = ax[1].set_title('Building damages (yellow)')
```

The above display shows the first training sample, displaying the image and corresponding target. Each image can have one or more damaged buildings or can be a negative image without any. You can iterate through the training images by re-running the cell above multiple times.

______________________
<a id='buildModel'></a>
## 3. Build Model

The model architecture is a [UNet classifier](https://arxiv.org/abs/1505.04597). We'll use a pre-built implementation in [TensorFlow](https://www.tensorflow.org/) 2 / Keras.

```
# Ensure TensorFlow is version 2
assert int(tf.__version__[0]) > 1, "Please install Tensorflow 2"

learning_rate, epochs, batch_size, n_samples = 1e-4, 10, 2, 300
modelName = "models/"+location+"g"+str(len(grades))+"ts"+str(tilesize)+"pd"+str(pad)+"lr"+str(learning_rate)[2:]+"e"+str(epochs)+"bs"+str(batch_size)+"a"+str(trainArea)[2:]+"n"+str(n_samples) if preModel == "" else preModel # Define output model name

# Build the model. We could just use the base_model but then the input size would be fixed once we load a saved model.
# In order to be able to predict on larger tiles we create an input layer with no fixed size
base_model = UNet()
inputs = tf.keras.layers.Input(shape=(None, None, 3))
model = tf.keras.Model(inputs=inputs, outputs=base_model(inputs))
```

We will now compile the model, which takes three arguments, and output a summary:

- Optimizer: A reasonable choice of optimizer is [Adam](https://arxiv.org/abs/1412.6980v8) - it performs well in most real-world scenarios.
- Loss function: We will use [binary crossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) as a loss, as it is suitable for binary classification problems.
- Metric: From our experience using the simple ratio method, 0.7 precision should be achievable, but a big problem is increasing recall. Therefore we will focus on this metric for training.

```
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
    loss="binary_crossentropy",
    metrics=["binary_accuracy", "Precision", "Recall"]
)
model.summary()
```
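For reference, the precision, recall and F1 score tracked by these metrics can be computed by hand on a tiny synthetic example (threshold 0.5; not the model's real predictions):

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_prob = np.array([0.9, 0.8, 0.3, 0.2, 0.6, 0.1, 0.4, 0.7])
y_pred = (y_prob > 0.5).astype(int)

tp = int(((y_pred == 1) & (y_true == 1)).sum())  # true positives
fp = int(((y_pred == 1) & (y_true == 0)).sum())  # false positives
fn = int(((y_pred == 0) & (y_true == 1)).sum())  # false negatives

precision = tp / (tp + fp)   # of predicted damage, how much was real
recall = tp / (tp + fn)      # of real damage, how much was found
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 0.75 0.75 0.75
```

The same F1 formula is applied to the training and test metrics later in the notebook.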
data.shuffle(buffer_size=300, seed=1) data_train = data.take(n_train_samples).repeat().batch(batch_size) data_val = data.skip(n_train_samples).repeat().batch(batch_size) ``` Let's train the model! This will take a while depending on training set size and number of epochs requested. ``` with tf.device('/gpu:0'): history = model.fit( data_train, steps_per_epoch=n_train_samples // batch_size, validation_data=data_val, validation_steps=n_val_samples // batch_size, epochs=epochs, ) # Save the model to folder if not os.path.exists("models"): os.mkdir("models") tf.saved_model.save(model, modelName) # Save to Descartes Labs storage if you so desire print(modelName) # You'll have to copy this into both parts of the !zip command below !zip -r copyHere.zip copyHere print('Upload model to Storage') storage = dl.Storage() storage.set_file(modelName, modelName+".zip") os.remove(modelName+".zip") ``` Let's plot the training history. Our model's loss should go down smoothly while the accuracy should go up. 
``` fig, ax = plt.subplots(1, 2, figsize=(12, 5)) ax[0].plot(history.history["loss"], label="train") ax[0].plot(history.history["val_loss"], label="val") ax[1].plot(history.history[list(history.history.items())[1][0]], label="train") ax[1].plot(history.history[list(history.history.items())[3][0]], label="val") ax[0].set_title("model loss") ax[1].set_title("model accuracy") ax[0].set_xlabel("epoch") ax[0].set_ylabel("loss") ax[1].set_xlabel("epoch") ax[1].set_ylabel(list(history.history.items())[1][0]) ax[0].legend(loc="upper right") p, r = history.history["precision"][epochs-1], history.history["recall"][epochs-1] f1 = 2*(p*r)/(p+r) print("Training metrics - Precision: ",p,", Recall: ",r,", F1 Score: ",f1) plt.show() n_test_samples = 500 data_test = tf.data.TFRecordDataset("records/HaitiPortSalut0004g3x32.tfrecords").map(parse_example, num_parallel_calls=4) data_test = data_test.map(type_transform, num_parallel_calls=4) data_test = data_test.shuffle(buffer_size=1000, seed=1) data_test = data_test.take(n_test_samples).repeat().batch(batch_size) results = model.evaluate(data_test, batch_size=8, steps=n_test_samples//batch_size) p, r = results[2], results[3] f1 = 2*(p*r)/(p+r) print("F1: ",f1) ``` ________________ <a id='Predictions'></a> ## 5. Predictions Let's take a look at some predictions made by our model. We will retrieve a tile of the ratio image from our specified test location and get the predicted change from our model. If you want to use a model you've saved to Descartes Storage mark `dl_storage` as True in next box. 
``` # Function for loading model from Descartes Labs storage dl_storage = False def load_model_from_storage(storage_key): """Load TF model from DL.Storage""" import tempfile model_zip = tempfile.NamedTemporaryFile() model_dir = tempfile.TemporaryDirectory() dl.Storage().get_file(storage_key, model_zip.name) os.system("unzip {} -d {}".format(model_zip.name, model_dir.name)) model = load_model(os.path.join(model_dir.name, "saved_model")) # load the unzipped SavedModel before cleaning up the temporary files model_zip.close() model_dir.cleanup() return model # Function retrieving appropriate tile of the ratio def get_ratio_image(dltile_key,ratio,tilesize,bands): tile = dl.scenes.DLTile.from_key(dltile_key) sc, ctx = dl.scenes.search(aoi=tile, products=satellite, start_datetime=st_date[0], end_datetime=end_date[0]) return ratio.compute(ctx).ndarray.reshape(tilesize,tilesize,len(bands)) # Function retrieving desired tile from Sentinel imagery for display def get_sentinel_image(dltile_key, bands): tile = dl.scenes.DLTile.from_key(dltile_key) sc, ctx = dl.scenes.search(aoi=tile, products=satellite, start_datetime=st_date[0], end_datetime=end_date[0]) im = sc.mosaic(bands=bands, ctx=ctx, bands_axis=-1) return im, ctx ``` For simplicity, we'll put all of the necessary steps into a single function that loads the model, retrieves the ratio tile for a location specified by `dltile_key`, pre-processes the tile and performs model prediction. ``` def predict_image(dltile_key,ratio,tilesize,bands): print("Predict on image for dltile {}".format(dltile_key)) # load model model = load_model_from_storage(modelName) if dl_storage else load_model(modelName) # get imagery im = get_ratio_image(dltile_key,ratio,tilesize,bands) # add batch dimension im = np.expand_dims(im, axis=0).astype(np.float32) # predict pred = model.predict(im) return im, pred # Type in here if you would like to change the coordinates from the map center defined in variables section.
lat, lon, tilesize = lat, lon, tilesize tile = dl.scenes.DLTile.from_latlon(lat, lon, resolution=resolution, tilesize=tilesize, pad=pad) # Convert coordinates to nearest descartes labs tile with size of our choosing im, pred = predict_image(tile.key,ratio,tilesize,bands) # Run prediction function for tile sent, ctx = get_sentinel_image(tile.key,bands) # Get Sentinel imagery for tile # Simple plot of predictions fig, ax = plt.subplots(1, 3, figsize=(18, 6)) visBand = 0 # Choose band to visualise ratio of # Plot ratio of chosen band a = ax[0].imshow((im.data[0,:,:,visBand].squeeze()).astype("float"), cmap ='magma') fig.colorbar(a, ax = ax[0]) a_tit = ax[0].set_title("Ratio for "+bands[visBand]+" band") # Plot identified change disting = pred > 0.2 b = ax[1].imshow(disting[0].squeeze().astype("float")) fig.colorbar(b, ax = ax[1]) b_tit = ax[1].set_title("Building change classification") # Plot confidence in prediction c = ax[2].imshow(pred[0].squeeze().astype("float")) fig.colorbar(c, ax = ax[2]) c_tit = ax[2].set_title("Damage probability") # Extract latitude & longitude of each pixel in prediction (whether true or false) bounds, disting = ctx.bounds, disting[0,:,:,0] if len(disting.shape) == 4 else disting # Get bounds from tile and reduce extra dimensionality of classification matrix lats, longs = np.linspace(bounds[3],bounds[1],disting.shape[0]), np.linspace(bounds[0],bounds[2],disting.shape[1]) # Vector of lat, longs # Create matrix of coordinates for pixels with change detected xm, ym = np.meshgrid(longs,lats) xc, yc = xm*(disting), ym*(disting) # Get geodataframe for pixel points df = pd.DataFrame(columns=['Northing', 'Easting']) for i,j in zip(np.nonzero(xc)[0], np.nonzero(xc)[1]): df = df.append({'Northing': yc[i][j],'Easting': xc[i][j]}, ignore_index=True) det = gpd.GeoDataFrame(df, crs={'init':ctx.bounds_crs}, geometry=gpd.points_from_xy(df.Easting, df.Northing)).to_crs({'init': 'epsg:4326'}) # Initialise map m3 = wf.interactive.MapApp() m3.center, m3.zoom 
= (lat, lon), zoom getImage(1,bands,0.7,m3) # Display sentinel imagery using function from map 1 # Add layer for predicted building damages geo_data = GeoData(geo_dataframe = det, style={'color': 'yellow', 'radius':2, 'fillColor': 'yellow', 'opacity':1, 'weight':1.9, 'dashArray':'2', 'fillOpacity':1}, hover_style={'fillColor': 'red' , 'fillOpacity': 1}, point_style={'radius': 3, 'color': 'yellow', 'fillOpacity': 0.7, 'fillColor': 'yellow', 'weight': 3}, name = 'Damages') m3.add_layer(geo_data) # Plot bounding box for damage search poly = gpd.GeoSeries(Polygon.from_bounds(ctx.bounds[0],ctx.bounds[1],ctx.bounds[2],ctx.bounds[3]), crs={'init':ctx.bounds_crs}).to_crs(epsg=4326) box = GeoData(geo_dataframe = gpd.GeoDataFrame(geometry = poly.envelope), style={'color':'black','fillOpacity':0, 'opacity':0.9}) m3.add_layer(box) # Legend m3.add_control(LegendControl({"Detected Change":"#FFFF00","Search Area":"#000000"})) m3 ``` ____________ <a id='PredictionAccuracy'></a> ## 6. Evaluate Prediction Accuracy Finally, let's compare the prediction to known damages from Copernicus EMS assessments and evaluate the effectiveness of our learnt model. As in the ratio method notebook (`change_detection.ipynb`) we determine the accuracy by evaluating the correspondence of detected change pixels to building footprints.
The metrics are as follows: - Precision (proportion of detections corresponding to damage): $P = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$ - Recall (proportion of damage detected): $R = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}$ - F1 Score: $F1 = 2\times\frac{P \times R}{P+R}$ ``` # Load building damages and filter for within detection area dmg = gpd.read_file(dmgJsons) filtered = gpd.GeoDataFrame(crs={'init': 'epsg:4326'}) tilePoly = gpd.GeoSeries(Polygon.from_bounds(ctx.bounds[0],ctx.bounds[1],ctx.bounds[2],ctx.bounds[3]), crs={'init':ctx.bounds_crs}).to_crs(epsg=4326).geometry[0] for i in dmg.index: if dmg.geometry[i].centroid.within(tilePoly): filtered = filtered.append(dmg.loc[i]) print('Changed pixels:',len(det), '\nDamaged buildings:',len(filtered)) # Initialise accuracy and recall vectors acc, rec = np.zeros([max(filtered.index)+1,1]), np.zeros([max(det.index)+1,1]) # Initialise accuracy, recall arrays # Loop through pixels to determine recall (if pixel corresponds to damaged building) for i in tqdm(det.index): # Loop through building to determine accuracy (damaged building has been detected) for j in filtered.index: if det.geometry[i].within(filtered.geometry[j]): rec[i,0], acc[j,0] = True, True # Calculate metrics from vector outputs a = sum(acc)/len(filtered) r = sum(rec)/len(det) f1 = 2*(a*r)/(a+r) print('Accuracy:',a[0],'\nRecall:',r[0],'\nF1 score:',f1[0]) ## Plot success of change detection in matplotlib and save figure # Damage detected true/false filtered['found'] = pd.Series(acc[filtered.index,0], index=filtered.index) filtPlot = filtered.plot(figsize=(12,8), column='found',legend=True,cmap='RdYlGn',alpha = 0.7) # False detection points points = np.vstack([rec[i] for i in det.index]) x1, y1 = np.array(det.geometry.x)*(1-points).transpose(), np.array(det.geometry.y)*(1-points).transpose() x1, y1 = x1[x1 != 0], y1[y1 != 0] filtPlot.scatter(x1,y1,s=0.05,color='b', label='False detections') filtPlot.set_xlim([tilePoly.bounds[0],
tilePoly.bounds[2]]) filtPlot.set_ylim([tilePoly.bounds[1], tilePoly.bounds[3]]) # # Set titles and save # plt.set_title('Threshold:'+str(threshold)+', Area:'+str(area)+', Kernel:'+str(kSize)+' - Acc:'+str(a[0])[:6]+', Re:'+str(r[0])[:6]) # plt.legend() # plt.figure.savefig('results/'+location+'_t'+str(threshold)[2:]+'a'+str(area)[2:]+'g'+str(len(grades))+str(bands)+'.png') ## Display on interactive map # Initialise map m4 = wf.interactive.MapApp() m4.center, m4.zoom = (lat, lon), zoom # Plot background imagery as image 2 using function from map 1 getImage(1,bands,0.7,m4) det_data = GeoData(geo_dataframe = det, style={'color': 'blue', 'radius':2, 'fillColor': 'blue', 'opacity':0.7, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.7}, point_style={'radius': 3, 'color': 'blue', 'fillOpacity': 0.7, 'fillColor': 'blue', 'weight': 3}, name = 'Damages') m4.add_layer(det_data) # Add layers for building polygons whether red for not found, green for found not_found = GeoData(geo_dataframe = filtered.loc[filtered['found']==0], style={'color': 'red', 'radius':2, 'fillColor': 'red', 'opacity':0.7, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.7}, hover_style={'fillColor': 'red' , 'fillOpacity': 0.5}, name = 'Damages') found = GeoData(geo_dataframe = filtered.loc[filtered['found']==1], style={'color': 'green', 'radius':2, 'fillColor': 'green', 'opacity':0.7, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.7}, hover_style={'fillColor': 'green' , 'fillOpacity': 0.5}, name = 'Damages') m4.add_layer(not_found) m4.add_layer(found) # Plot bounding box for damage search poly = gpd.GeoSeries(Polygon.from_bounds(ctx.bounds[0],ctx.bounds[1],ctx.bounds[2],ctx.bounds[3]), crs={'init':'EPSG:32618'}).to_crs(epsg=4326) box = GeoData(geo_dataframe = gpd.GeoDataFrame(geometry = poly.envelope), style={'color':'yellow','fillOpacity':0, 'opacity':0.9}) m4.add_layer(box) # Legend m4.add_control(LegendControl({"Damage Identified":"#008000", "Damage Not Identified":"#FF0000", "Detected 
Change":"#0000FF", "Search Area":"#FFFF00"})) m4 ``` _________________ <a id='TestLocation'></a> ## 7. Test Location Beyond predicting for a single tile, we would like to evaluate the model's performance over an arbitrary wider area. For this let's draw a polygon over the desired area. Each corresponding tile will then be fed into the model individually for change detection. If the location has ground data, accuracy will then be evaluated on the combined output of all tiles. > One could ask why we do not simply increase the tile size. Besides making the evaluation area more flexible, this tiled approach avoids feeding the model tiles larger than the size it was trained on, which its fixed input layer cannot accommodate. ``` # Display map upon which to draw Polygon for analysis r = 10*area testPoly = ipyleaflet.Polygon(locations=[(lat-r, lon-r), (lat-r, lon+r), (lat+r, lon+r),(lat+r, lon-r)], color="yellow", fill_color="yellow", transform=True) pos = Map(center=(lat, lon), zoom=zoom) if not deployed: pos.add_layer(geo_data) pos.add_control(LegendControl({"Recorded Damage":"#FF0000"})) pos.add_layer(testPoly) pos # Define all functions required for obtaining detections if deployed: # Define functions if not defined in section 5 # Function retrieving appropriate tile of the ratio def get_ratio_image(dltile_key,ratio,tilesize,bands): tile = dl.scenes.DLTile.from_key(dltile_key) sc, ctx = dl.scenes.search(aoi=tile, products=satellite, start_datetime=st_date[0], end_datetime=end_date[0]) return ratio.compute(ctx).ndarray.reshape(tilesize,tilesize,len(bands)) # Function retrieving desired tile from Sentinel imagery for display def get_sentinel_image(dltile_key, bands): tile = dl.scenes.DLTile.from_key(dltile_key) sc, ctx = dl.scenes.search(aoi=tile, products=satellite, start_datetime=st_date[0], end_datetime=end_date[0]) im = sc.mosaic(bands=bands, ctx=ctx, bands_axis=-1) return im, ctx # Function running predict image for each tile
def predict_image(dltile_key,ratio,tilesize,bands): print("Predict on image for dltile {}".format(dltile_key)) # load model model = load_model(modelName) # get imagery im = get_ratio_image(dltile_key,ratio,tilesize,bands) # add batch dimension im = np.expand_dims(im, axis=0).astype(np.float32) # predict pred = model.predict(im) return im, pred ## Function to get detections for each tile def testTile(lat,lon,tilesize,threshold): tile = dl.scenes.DLTile.from_latlon(lat, lon, resolution=resolution, tilesize=tilesize, pad=pad) # Convert coordinates to nearest descartes labs tile with size of our choosing im, pred = predict_image(tile.key,ratio,tilesize,bands) # Run prediction function for tile sent, ctx = get_sentinel_image(tile.key,bands) # Get Sentinel imagery for tile disting = pred > threshold # Get damaged predictions # Extract latitude & longitude of each pixel in prediction (whether true or false) bounds, disting = ctx.bounds, disting[0,:,:,0] if len(disting.shape) == 4 else disting # Get bounds from tile and reduce extra dimensionality of classification matrix lats, longs = np.linspace(bounds[3],bounds[1],disting.shape[0]), np.linspace(bounds[0],bounds[2],disting.shape[1]) # Vector of lat, longs # Create matrix of coordinates for pixels with change detected xm, ym = np.meshgrid(longs,lats) xc, yc = xm*(disting), ym*(disting) # Get geodataframe for pixel points df = pd.DataFrame(columns=['Northing', 'Easting']) for i,j in zip(np.nonzero(xc)[0], np.nonzero(xc)[1]): df = df.append({'Northing': yc[i][j],'Easting': xc[i][j]}, ignore_index=True) det = gpd.GeoDataFrame(df, crs={'init':ctx.bounds_crs}, geometry=gpd.points_from_xy(df.Easting, df.Northing)).to_crs({'init': 'epsg:4326'}) return det, ctx ``` Looping through tiles may take a while depending on polygon size and tile size. About 8 seconds per tile requested on 16GB RAM. 
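The tile spacing used in the loop that follows converts the tile footprint from metres to degrees with the rough 1° ≈ 100 km approximation, hence the `resolution * 1E-5 * tilesize` step. A minimal sketch of that arithmetic, with made-up resolution, tile size and bounding box (none of these are the notebook's real values):

```python
import numpy as np

resolution, tilesize = 10, 256           # metres per pixel, pixels per tile (hypothetical)
step_deg = resolution * 1e-5 * tilesize  # tile footprint in degrees, since 1 degree is roughly 100 km

# Tile-centre latitudes/longitudes covering a hypothetical bounding box
tile_lats = np.arange(18.0, 18.1, step_deg)
tile_lons = np.arange(-73.9, -73.8, step_deg)
print(len(tile_lats) * len(tile_lons), "tiles requested")  # 16
```

Since every extra tile adds roughly a fixed cost, printing this count before launching the loop gives a quick runtime estimate.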
``` ## Loop through tiles to get all detections # Get latitudes and longitudes for tiles according to polygon drawn and tilesize tileLats = np.arange(testPoly.locations[0][0]['lat'],testPoly.locations[0][2]['lat'],resolution*1E-5*tilesize) tileLons = np.arange(testPoly.locations[0][0]['lng'],testPoly.locations[0][2]['lng'],resolution*1E-5*tilesize) print("Number of tiles requested:",len(tileLats)*len(tileLons),". Approximately",8*len(tileLats)*len(tileLons),"seconds on 16GB RAM.") threshold = 0.5 allDet = gpd.GeoDataFrame(crs={'init': 'epsg:4326'}) allCtx = np.array([]) for lat in tqdm(tileLats): for lon in tqdm(tileLons): newDet, newCtx = testTile(lat,lon,tilesize,threshold) newDet.index = newDet.index + len(allDet.index) allDet = allDet.append(newDet) allCtx = np.append([allCtx], [np.array(newCtx.bounds)]) ## Evaluate against damages if not deployed: # Load building damages and filter for within detection area dmg = gpd.read_file(dmgJsons) filtered = gpd.GeoDataFrame(crs={'init': 'epsg:4326'}) tilePoly = gpd.GeoSeries(Polygon.from_bounds(min(allCtx[0::4]),min(allCtx[1::4]),max(allCtx[2::4]),max(allCtx[3::4])), crs={'init':ctx.bounds_crs}).to_crs(epsg=4326).geometry[0] for i in dmg.index: if dmg.geometry[i].centroid.within(tilePoly): filtered = filtered.append(dmg.loc[i]) print('Changed pixels:',len(allDet), '\nDamaged buildings:',len(filtered)) # Initialise accuracy and recall vectors acc, rec = np.zeros([max(filtered.index)+1,1]), np.zeros([max(allDet.index)+1,1]) # Initialise accuracy, recall arrays # Loop through pixels to determine recall (if pixel corresponds to damaged building) for i in tqdm(allDet.index): # Loop through building to determine accuracy (damaged building has been detected) for j in filtered.index: if allDet.geometry[i].within(filtered.geometry[j]): rec[i,0], acc[j,0] = True, True # Calculate metrics from vector outputs a = sum(acc)/len(filtered) r = sum(rec)/len(allDet) f1 = 2*(a*r)/(a+r) print('Accuracy:',a[0],'\nRecall:',r[0],'\nF1 
score:',f1[0]) # Initialise map m5 = wf.interactive.MapApp() m5.center, m5.zoom = (lat, lon), zoom getImage(1,bands,0.7,m5) # Display sentinel imagery using function from map 1 # Add layer for detections from model allDet_data = GeoData(geo_dataframe = allDet, style={'color': 'yellow', 'radius':2, 'fillColor': 'yellow', 'opacity':0.7, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.7}, point_style={'radius': 2, 'color': 'yellow', 'fillOpacity': 0.7, 'fillColor': 'blue', 'weight': 3}, name = 'Damages') m5.add_layer(allDet_data) # Add layers for building polygons: red for not found, green for found if not deployed: filtered['found'] = pd.Series(acc[filtered.index,0], index=filtered.index) all_not_found = GeoData(geo_dataframe = filtered.loc[filtered['found']==0], style={'color': 'red', 'radius':2, 'fillColor': 'red', 'opacity':0.7, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.7}, hover_style={'fillColor': 'red' , 'fillOpacity': 0.5}, name = 'Damages') all_found = GeoData(geo_dataframe = filtered.loc[filtered['found']==1], style={'color': 'green', 'radius':2, 'fillColor': 'green', 'opacity':0.7, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.7}, hover_style={'fillColor': 'green' , 'fillOpacity': 0.5}, name = 'Damages') m5.add_layer(all_not_found) m5.add_layer(all_found) # Legend m5.add_control(LegendControl({"Damage Identified":"#008000", "Damage Not Identified":"#FF0000", "Detected Change":"#0000FF", "Search Area":"#FFFF00"})) else: m5.add_control(LegendControl({"Detected Change":"#FFFF00", "Search Area":"#0000FF"})) # Plot bounding box for damage search testPoly.color, testPoly.fill_opacity = 'blue', 0 m5.add_layer(testPoly) m5 ``` ## --------- END ----------
## Title : Exercise: Computing the CI ## Description : You are the manager of the Advertising division of your company, and your boss asks you the question, **"How much more in sales will we have if we invest $1000 in TV advertising?"** <img src="../fig/fig3.jpeg" style="width: 500px;"> The goal of this exercise is to estimate the Sales with a 95% confidence interval using the Advertising.csv dataset. ## Data Description: ## Instructions: - Read the file `Advertising.csv` as a dataframe. - Fix a budget amount of 1000 dollars for TV advertising as a variable called `budget`. - Select the number of bootstraps. - For each bootstrap: - Select a new dataframe with the predictor as TV and the response as Sales. - Fit a simple linear regression on the data. - Predict on the budget and compute the error estimate using the helper function `error_func()`. - Store the sales as a sum of the prediction and the error estimate and append to `sales_list`. - Sort the `sales_list` which is a distribution of predicted sales over `numboot` bootstraps. - Compute the 95% confidence interval of `sales_list`. - Use the helper function `plot_simulation` to visualize the distribution and print the estimated sales.
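For reference, the error estimate returned by the helper function `error_func()` is the residual standard error of the simple linear regression:

$$\hat{\sigma} = \sqrt{\frac{1}{n-2}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

The $n-2$ in the denominator accounts for the two fitted parameters (slope and intercept). Each bootstrap's sales value is then drawn as $\hat{y}(1000) + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \hat{\sigma})$, so the spread of `sales_list` reflects both coefficient uncertainty and residual noise.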
## Hints: <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.randint.html" target="_blank">np.random.randint()</a> Returns a list of random integers of the given size <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html" target="_blank">df.sample()</a> Gets a new random sample from a dataframe <a href="https://matplotlib.org/3.2.2/api/_as_gen/matplotlib.pyplot.hist.html" target="_blank">plt.hist()</a> Plots a histogram <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.axvline.html" target="_blank">plt.axvline()</a> Adds a vertical line across the axes <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.axhline.html" target="_blank">plt.axhline()</a> Adds a horizontal line across the axes <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html" target="_blank">plt.legend()</a> Places a legend on the axes <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.sort.html#numpy.ndarray.sort" target="_blank">ndarray.sort()</a> Sorts the ndarray in place (returns `None`). <a href="https://numpy.org/doc/stable/reference/generated/numpy.percentile.html" target="_blank">np.percentile(list, q)</a> Returns the q-th percentile of the provided list of values (the list does not need to be pre-sorted). Note: This exercise is **auto-graded and you can try multiple attempts**.
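Before filling in the blanks below, it can help to see the bootstrap-percentile mechanics end to end on a purely synthetic sample. This sketch is not the exercise solution — the data, seed and statistic (a plain mean rather than a regression prediction) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # synthetic sample, not the Advertising data

numboot = 1000
boot_means = []
for _ in range(numboot):
    # Resample with replacement, same size as the original sample
    resample = rng.choice(data, size=len(data), replace=True)
    boot_means.append(resample.mean())

boot_means.sort()
ci = (np.percentile(boot_means, 2.5), np.percentile(boot_means, 97.5))
print(ci)  # a narrow interval around the sample mean
```

The exercise follows the same pattern, replacing the mean with the regression prediction plus its simulated error term.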
``` # Import necessary libraries %matplotlib inline import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.metrics import mean_squared_error from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import PolynomialFeatures # Read the `Advertising.csv` dataframe df = pd.read_csv('Advertising.csv') # Take a quick look at the data df.head() # Helper function to compute the variance of the error term def error_func(y,y_p): n = len(y) return np.sqrt(np.sum((y-y_p)**2/(n-2))) # Set the number of bootstraps numboot = 1000 # Set the budget as per the instructions given # Use 2D list to facilitate model prediction (sklearn.LinearRegression requires input as a 2d array) budget = [[___]] # Initialize an empty list to store sales predictions for each bootstrap sales_list = [] # Loop through each bootstrap for i in range(___): # Create bootstrapped version of the data using the sample function # Set frac=1 and replace=True to get a bootstrap df_new = df.sample(___, replace=___) # Get the predictor data ('TV') from the new bootstrapped data x = df_new[[___]] # Get the response data ('Sales') from the new bootstrapped data y = df_new.___ # Initialize a Linear Regression model linreg = LinearRegression() # Fit the model on the new data linreg.fit(___,___) # Predict on the budget from the original data prediction = linreg.predict(budget) # Predict on the bootstrapped data y_pred = linreg.predict(x) # Compute the error using the helper function error_func error = np.random.normal(0,error_func(y,y_pred)) # The final sales prediction is the sum of the model prediction # and the error term sales = ___ # Convert the sales to float type and append to the list sales_list.append(np.float64(___)) ### edTest(test_sales) ### # Sort the list containing sales predictions in ascending order sales_list.sort() # Find the 95% confidence 
interval using np.percentile function # at 2.5% and 97.5% sales_CI = (np.percentile(___,___),np.percentile(___, ___)) # Helper function to plot the histogram of beta values along # with the 95% confidence interval def plot_simulation(simulation,confidence): plt.hist(simulation, bins = 30, label = 'beta distribution', align = 'left', density = True,edgecolor='k') plt.axvline(confidence[1], 0, 1, color = 'r', label = 'Right Interval') plt.axvline(confidence[0], 0, 1, color = 'red', label = 'Left Interval') plt.xlabel('Beta value') plt.ylabel('Frequency') plt.legend(frameon = False, loc = 'upper right') plt.show(); # Call the plot_simulation function above with the computed sales # distribution and the confidence intervals computed earlier plot_simulation(sales_list,sales_CI) # Print the computed values print(f"With a TV advertising budget of ${budget[0][0]},") print(f"we can expect an increase of sales anywhere between {sales_CI[0]:0.2f} and {sales_CI[1]:.2f}\ with a 95% confidence interval") ``` ⏸ The sales prediction here is based on the simple linear regression model between `TV` and `Sales`. Re-run the above exercise by fitting the model considering all variables in `Advertising.csv`. Keep the budget the same, i.e. $1000 for 'TV' advertising. You may have to change the `budget` variable to something like `[[1000,0,0]]` for proper computation. Does your predicted sales interval change? Why, or why not? ``` ### edTest(test_chow1) ### # Type your answer within the quotes given answer1 = '___' ```
# Create Correlation Matrix & randomize gaia errors ``` import numpy as np import matplotlib.pyplot as plt from astropy import constants as const, units as u from astropy.table import Table, join, vstack, hstack, Column, MaskedColumn from astropy import coordinates, units as u, wcs import warnings from astropy.utils.exceptions import AstropyWarning import os, glob, getpass, sys user = getpass.getuser() from astropy.io import fits # Specify inputs ======================================= dir_1 = '/Users/' + user + '/Dropbox/Public/clustering_oph/gaia_cone/' tb_all = dir_1 + 'entire_ophiuchus_cone-result.vot' # Read Entire Cone ===================================== warnings.filterwarnings('ignore', category=AstropyWarning, append=True) tb_all = Table.read(tb_all, format = 'votable') # Select parameter-cols ================================ cols = ['ra', 'dec', 'pmra', 'pmdec', 'parallax'] cols_e = [col + '_error' for col in cols] cols_corr = [col for col in tb_all.colnames if '_corr' in col] tb_all = tb_all[cols + cols_e + cols_corr] # Ensure Units are uniform ============================= for col in ['ra_error', 'dec_error']: tb_all[col] = tb_all[col] * 1/1000 * 1/3600 tb_all[col].unit = u.deg tb_all[0:3] #Reading Synthetic measurement ================= syn = fits.open('syn_data.fits') syn = syn[0].data N_REP = len(syn[0]) #Plot Stats ==================================== inp_tg = 0 # Select the target (i.e. 
Gaia row) you want to inspect obs = tb_all[inp_tg] errors = syn[inp_tg] errors = [[inp[j] for inp in errors] for j in range(len(cols))] #Reconstruct Table ============================= tb_0 = Table(syn[:,0], names = cols) tb_0['ra'].unit = u.degree tb_0['dec'].unit = u.degree tb_0['pmra'].unit = u.mas / u.year tb_0['pmdec'].unit = u.mas / u.year tb_0['parallax'].unit = u.mas tb_0 #Make Figure =================================== import matplotlib.ticker as ticker # needed for the FormatStrFormatter calls below fig = plt.figure(figsize=[30,12.5]) ftsize = 14 linewidth = 2 for i in range(len(cols)): ax = plt.subplot(2,5,i + 1) plt.plot(errors[i], 'bo', markersize = 1.5, mew = 0) plt.hlines(xmin=0, xmax=N_REP, y=obs[cols[i]], colors='k', linewidth = linewidth) plt.hlines(xmin=0, xmax=N_REP, y=obs[cols[i]] + obs[cols_e[i]], linestyles='--', colors='k', linewidth = linewidth) plt.hlines(xmin=0, xmax=N_REP, y=obs[cols[i]] - obs[cols_e[i]], linestyles='--', colors='k', linewidth = linewidth) plt.tick_params(axis='x', which='major', labelsize=ftsize) if i == 0 or i == 1: ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.7f')) plt.yticks(rotation=45) ax = plt.subplot(2,5,i + 6) plt.xlabel(cols[i].upper(), fontsize = ftsize*1.5) plt.hist(errors[i], 15) ymax = 2200 plt.ylim([0,ymax]) plt.vlines(ymin=0, ymax=ymax, x=obs[cols[i]], colors='k') plt.vlines(ymin=0, ymax=ymax, x=obs[cols[i]] + obs[cols_e[i]], linestyles='--', colors='k') plt.vlines(ymin=0, ymax=ymax, x=obs[cols[i]] - obs[cols_e[i]], linestyles='--', colors='k') plt.tick_params(axis='x', which='major', labelsize=ftsize) if i == 0 or i == 1: ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.7f')) plt.xticks(rotation=45) plt.show() fig.savefig('random_errors.pdf', bbox_inches = 'tight') # savefig overwrites existing files by default ```
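The point of sampling the five astrometric parameters jointly (rather than perturbing each one by its own sigma independently) is that Gaia publishes the correlations between them — the `_corr` columns selected above. A minimal sketch of how such correlated errors can be drawn for just two parameters, with made-up numbers standing in for a real catalogue row:

```python
import numpy as np

# Hypothetical catalogue values: pmra/pmdec with their 1-sigma errors and correlation
mean = np.array([-7.0, -26.0])   # mas/yr (illustrative, not a real source)
sigma = np.array([0.05, 0.04])   # pmra_error, pmdec_error
rho = 0.6                        # pmra_pmdec_corr

# Covariance matrix: C_ij = rho_ij * sigma_i * sigma_j
corr = np.array([[1.0, rho], [rho, 1.0]])
cov = corr * np.outer(sigma, sigma)

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mean, cov, size=100_000)

# The empirical correlation of the synthetic measurements recovers rho
print(np.corrcoef(samples.T)[0, 1])
```

For the full five-parameter case the same construction applies with a 5×5 correlation matrix assembled from the ten `_corr` columns.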
``` from notebook.services.config import ConfigManager cm = ConfigManager() cm.update('livereveal', {'scroll': True,}) %load_ext autoreload %autoreload 2 import os import sys sys.path.append(os.path.abspath(".")) from viewer import ThreeJsViewer import matplotlib.pyplot as plt %matplotlib inline %%html <style> body.rise-enabled div.code_cell{ font-size:60%; } body.rise-enabled div.inner_cell>div.promt_container{ width:10%; } body.rise-enabled div.inner_cell>div.text_cell_render.rendered_html { font-size: 50%; } </style> ``` # Motion planning and planning scene ## Motion planning * Collision checking (= path planning) * Trajectory checking (synchronization, consider speed and acceleration of moving objects) <img src="images/path_planning_00.png" width="600" /> # Collision checking * Intricate positions (spatial assembly) * Multiple robots working closely <table> <tr> <td><img src="http://www.dfab.arch.ethz.ch/data/ProjectImages/02_Web/M/248/190617_248_robotic_setup_3x2_AX_WM.jpg" style="height: 300px" /></td> <td><img src="http://www.dfab.arch.ethz.ch/data/ProjectImages/02_Web/M/188/160922_188_BuildupPrototype_MP_01_WM.jpg" style="height: 300px" /></td> </tr> </table> # Trajectory checking * Synchronisation * Continuous processes <table> <tr> <td><img src="images/path_planning_04.jpg" style="height: 300px" /></td> <td><img src="http://www.dfab.arch.ethz.ch/web/images/content/GKR_Infrastructure_7.jpg" style="height: 300px" /></td> </tr> </table> ## Path vs. Trajectory ### Path A sequence of robot configurations in a particular order without regard to the timing between these configurations (geometric path, obstacle avoidance, shortest path). ### Trajectory Concerned with the timing between these configurations (involving acceleration, speed, limits).
``` """Example: Joint Trajectory Point and Joint Trajectory """ from compas.robots import Joint from compas_fab.robots import Configuration from compas_fab.robots import JointTrajectory from compas_fab.robots import JointTrajectoryPoint # create configuration values = [1.571, 0, 0, 0.262, 0, 0] types = [Joint.REVOLUTE] * 6 c = Configuration(values, types) # create joint trajectory point p = JointTrajectoryPoint(values, types) print(p.accelerations) print(p.velocities) print(p.time_from_start) # create joint trajectory trj = JointTrajectory([p]) ``` ## MoveIt! * ROS’ default motion planning library * Works with different planners: OMPL (Open Motion Planning Library) * Collision checking through FCL (Flexible Collision Library) * Kinematics plugins (default KDL: Kinematics and Dynamics Library) * Trajectory processing routine <img src="https://moveit.ros.org/assets/images/moveit2_logo_black.png" width="300"> https://gramaziokohler.github.io/compas_fab/latest/backends/ros.html docker compose up ## RViz in web browser http://localhost:8080/vnc.html?resize=scale&autoconnect=true ## RViz in web browser (if you have Docker Toolbox) http://192.168.99.100:8080/vnc.html?resize=scale&autoconnect=true <img src="images/moveit_ur5.jpg" width="700" /> <img src="images/moveit_rfl.jpg" width="700" /> <img src="images/moveit_ddad.jpg" width="700" /> ## OMPL Planners PRM, RRT, EST, SBL, KPIECE, SyCLOP, ... https://ompl.kavrakilab.org/ ## Robot class in compas_fab `robot = Robot(model, artist, semantics, client)` * model: The robot model, usually created from an URDF structure. 
* artist: Instance of the artist used to visualize the robot * semantics: The semantic model of the robot (planning group, disabled joints) * client: The backend client to use for communication (planning requests, execution tasks) ``` import compas_fab from compas.robots import RobotModel from compas.robots import LocalPackageMeshLoader from compas_fab.robots import Robot from compas_fab.robots import RobotSemantics from viewer import RobotArtist urdf_filename = compas_fab.get('universal_robot/ur_description/urdf/ur5.urdf') srdf_filename = compas_fab.get('universal_robot/ur5_moveit_config/config/ur5.srdf') model = RobotModel.from_urdf_file(urdf_filename) loader = LocalPackageMeshLoader(compas_fab.get('universal_robot'), 'ur_description') model.load_geometry(loader) semantics = RobotSemantics.from_srdf_file(srdf_filename, model) artist = RobotArtist(model) robot = Robot(model, artist=artist, semantics=semantics) robot robot.info() ``` ## Inverse kinematic ``` from compas.geometry import Frame from compas_fab.backends import RosClient from compas_fab.robots import Configuration frame = Frame([0.3, 0.1, 0.5], [1, 0, 0], [0, 1, 0]) start_configuration = Configuration.from_revolute_values([0] * 6) group = "manipulator" # or robot.main_group_name # for those with docker toolbox: check your IP, usually 192.168.99.100 ip = "127.0.0.1" with RosClient(ip) as client: robot.client = client config = robot.inverse_kinematics(frame, start_configuration, group) print(config) frame_RCF = robot.forward_kinematics(config, backend='model') frame_WCF = robot.to_world_coords(frame_RCF) print(frame_WCF) from viewer import ThreeJsViewer robot.update(config) geo = robot.draw_visual() viewer = ThreeJsViewer() viewer.show(geometry=geo) ``` ## 8 (analytic) ik solutions for 6-axis robot <img src="images/all_ik.jpg" width="800"> ## Cartesian path <div align="middle"><img src="images/cpath.jpg" width="600"/></div> ## Difference between cartesian motion and free-space motion <img 
src="images/diff_cm_fsm.svg" width="600" /> ``` from compas.geometry import Frame from compas_fab.backends import RosClient from compas_fab.robots import Configuration frames = [] frames.append(Frame([0.3, 0.1, 0.5], [1, 0, 0], [0, 1, 0])) frames.append(Frame([0.5, 0.1, 0.6], [1, 0, 0], [0, 1, 0])) start_configuration = Configuration.from_revolute_values([-0.042, 0.033, -2.174, 5.282, -1.528, 0.000]) ip = "127.0.0.1" with RosClient(ip) as client: robot.client = client trajectory = robot.plan_cartesian_motion(frames, start_configuration, max_step=0.01, avoid_collisions=True) print("Computed cartesian path with %d configurations, " % len(trajectory.points)) print("following %d%% of requested trajectory." % (trajectory.fraction * 100)) print("Executing this path at full speed would take approx. %.3f seconds." % trajectory.time_from_start) positions = [] velocities = [] accelerations = [] time_from_start = [] for p in trajectory.points: positions.append(p.positions) velocities.append(p.velocities) accelerations.append(p.accelerations) time_from_start.append(p.time_from_start.seconds) import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [17, 4] plt.subplot(131) plt.title('positions') plt.plot(positions) plt.subplot(132) plt.plot(velocities) plt.title('velocities') plt.subplot(133) plt.plot(accelerations) plt.title('accelerations') ``` # Free-space motion The problem of motion planning can be stated as follows. Given: * A start pose of the robot * A desired goal pose * A geometric description of the robot * A geometric description of the world Find a path that moves the robot gradually from start to goal while never touching any obstacle <img src="http://www.willowgarage.com/sites/default/files/blog/200909/path_planning_01.600h.png" /> ## Constraints * __JointConstraint__ Constrains the value of a joint to be within a certain bound. * __OrientationConstraint__ Constrains a link to be within a certain orientation. 
* __PositionConstraint__ Constrains a link to be within a certain bounding volume.

```
from compas.geometry import Sphere
from compas.geometry import Frame
from compas_fab.robots import BoundingVolume
from compas_fab.robots import JointConstraint
from compas_fab.robots import OrientationConstraint
from compas_fab.robots import PositionConstraint

# Joint Constraint
jc = JointConstraint("joint_0", 1.4, tolerance=0.1, weight=1.)
print(jc)

# Position Constraint
bv = BoundingVolume.from_sphere(Sphere((3, 4, 5), 0.5))
pc = PositionConstraint('link_0', bv, weight=1.)
print(pc)

# Orientation Constraint
frame = Frame([1, 1, 1], [0.68, 0.68, 0.27], [-0.67, 0.73, -0.15])
oc = OrientationConstraint("link_0", frame.quaternion, tolerances=[0.1, 0.1, 0.1], weight=1.)
print(oc)

import math

from compas.geometry import Frame
from compas_fab.robots import Configuration
from compas_fab.backends import RosClient

frame = Frame([0.4, 0.3, 0.4], [0, 1, 0], [0, 0, 1])
tolerance_position = 0.001
tolerance_axes = [math.radians(1)] * 3
start_configuration = Configuration.from_revolute_values([-0.042, 4.295, 0, -3.327, 4.755, 0.])
group = robot.main_group_name

# create goal constraints from frame
goal_constraints = robot.constraints_from_frame(frame, tolerance_position, tolerance_axes, group)

# Other planners: 'PRM'
with RosClient() as client:
    robot.client = client
    trajectory = robot.plan_motion(goal_constraints, start_configuration, group, planner_id='RRT')

print("Computed kinematic path with %d configurations." % len(trajectory.points))
print("Executing this path at full speed would take approx. %.3f seconds."
      % trajectory.time_from_start)

positions = []
velocities = []
accelerations = []
time_from_start = []

for p in trajectory.points:
    positions.append(p.positions)
    velocities.append(p.velocities)
    accelerations.append(p.accelerations)
    time_from_start.append(p.time_from_start.seconds)

plt.rcParams['figure.figsize'] = [17, 4]
plt.subplot(131)
plt.title('positions')
plt.plot(positions)
plt.subplot(132)
plt.plot(velocities)
plt.title('velocities')
plt.subplot(133)
plt.plot(accelerations)
plt.title('accelerations')
```

## Planning scene and collision objects

## Collision meshes

```
"""Example: add a floor to the planning scene
"""
import time

from compas.datastructures import Mesh

import compas_fab
from compas_fab.backends import RosClient
from compas_fab.robots import CollisionMesh
from compas_fab.robots import PlanningScene
from compas_fab.robots.ur5 import Robot

with RosClient() as client:
    robot = Robot(client)
    scene = PlanningScene(robot)

    mesh = Mesh.from_stl(compas_fab.get('planning_scene/floor.stl'))
    cm = CollisionMesh(mesh, 'floor')
    scene.add_collision_mesh(cm)

    # sleep a bit before terminating the client
    time.sleep(1)

"""Example: remove the floor from the planning scene
"""
import time

from compas_fab.backends import RosClient
from compas_fab.robots import PlanningScene
from compas_fab.robots.ur5 import Robot

with RosClient() as client:
    robot = Robot(client)
    scene = PlanningScene(robot)

    scene.remove_collision_mesh('floor')

    # sleep a bit before terminating the client
    time.sleep(1)

"""Example: add several bricks to the planning scene.
Note: APPEND instead of ADD
"""
import time

from compas.datastructures import Mesh
from compas.geometry import Box

from compas_fab.backends import RosClient
from compas_fab.robots import CollisionMesh
from compas_fab.robots import PlanningScene
from compas_fab.robots.ur5 import Robot

with RosClient() as client:
    robot = Robot(client)
    scene = PlanningScene(robot)

    brick = Box.from_width_height_depth(0.11, 0.07, 0.25)
    for i in range(5):
        mesh = Mesh.from_vertices_and_faces(brick.vertices, brick.faces)
        cm = CollisionMesh(mesh, 'brick')
        cm.frame.point.y += 0.5
        cm.frame.point.z += brick.zsize * i
        scene.append_collision_mesh(cm)

    # sleep a bit before terminating the client
    time.sleep(1)

"""Example: remove the bricks from the planning scene
"""
import time

from compas_fab.backends import RosClient
from compas_fab.robots import PlanningScene
from compas_fab.robots.ur5 import Robot

with RosClient() as client:
    robot = Robot(client)
    scene = PlanningScene(robot)

    scene.remove_collision_mesh('brick')

    # sleep a bit before terminating the client
    time.sleep(1)
```

## Attach a collision mesh to a robot’s end-effector

```
import time

from compas.datastructures import Mesh

import compas_fab
from compas_fab.backends import RosClient
from compas_fab.robots import CollisionMesh
from compas_fab.robots import PlanningScene
from compas_fab.robots.ur5 import Robot

with RosClient() as client:
    robot = Robot(client)
    scene = PlanningScene(robot)

    # create collision objects
    mesh = Mesh.from_stl(compas_fab.get('planning_scene/cone.stl'))
    cm = CollisionMesh(mesh, 'tip')

    # attach it to the end-effector
    group = robot.main_group_name
    scene.attach_collision_mesh_to_robot_end_effector(cm, group=group)

    # sleep a bit before terminating the client
    time.sleep(1)
```

## Plan path with attached collision mesh

```
import math

from compas.geometry import Frame
from compas_fab.backends import RosClient
from compas_fab.robots import Configuration
from compas_fab.robots import AttachedCollisionMesh

frame = Frame([0.4,
               0.3, 0.4], [0, 1, 0], [0, 0, 1])
tolerance_position = 0.001
tolerance_axes = [math.radians(1)] * 3
start_configuration = Configuration.from_revolute_values([-0.042, 4.295, 0, -3.327, 4.755, 0.])
group = robot.main_group_name

# create attached collision object
mesh = Mesh.from_stl(compas_fab.get('planning_scene/cone.stl'))
cm = CollisionMesh(mesh, 'tip')
ee_link_name = 'ee_link'
touch_links = ['wrist_3_link', 'ee_link']
acm = AttachedCollisionMesh(cm, ee_link_name, touch_links)

# create goal constraints from frame
goal_constraints = robot.constraints_from_frame(frame, tolerance_position, tolerance_axes, group)

# Other planners: 'PRM'
with RosClient() as client:
    robot.client = client
    trajectory = robot.plan_motion(goal_constraints, start_configuration, group,
                                   planner_id='RRT',
                                   attached_collision_meshes=[acm])

print("Computed kinematic path with %d configurations." % len(trajectory.points))
print("Executing this path at full speed would take approx. %.3f seconds." % trajectory.time_from_start)
```

## Continue with Grasshopper...

https://github.com/compas-dev/compas_fab/blob/master/docs/examples/03_backends_ros/files/robot.ghx
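The multiple analytic IK solutions shown above can be seen in miniature on a planar 2-link arm, which already admits two closed-form solutions (elbow-up and elbow-down) for any reachable target. The following NumPy sketch is independent of the compas_fab API; the function names are illustrative only:

```python
import numpy as np

def fk_2link(q, l1=1.0, l2=0.8):
    """Forward kinematics of a planar 2-link arm: joint angles -> end effector (x, y)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def ik_2link(p, l1=1.0, l2=0.8):
    """Analytic inverse kinematics: returns both elbow configurations for a target point."""
    x, y = p
    c2 = (x ** 2 + y ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return []  # target out of reach
    solutions = []
    for sign in (1.0, -1.0):  # elbow-up / elbow-down
        q2 = sign * np.arccos(c2)
        q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
        solutions.append(np.array([q1, q2]))
    return solutions

target = fk_2link(np.array([0.3, 0.7]))
for q in ik_2link(target):
    print(q, fk_2link(q))  # both joint sets reach the same target point
```

A 6-axis industrial arm extends the same idea: shoulder, elbow and wrist each contribute a binary choice, which is where the up-to 2 x 2 x 2 = 8 analytic solutions pictured above come from.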
``` from tensorflow.keras import layers from tensorflow.keras.regularizers import l2 from tensorflow.keras.layers import Activation, Conv1D, Conv2D, Input, Lambda from tensorflow.keras.layers import BatchNormalization, Flatten, Dense, Reshape from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D, GlobalAveragePooling2D weight_decay = 1e-4 def identity_block_2D(input_tensor, kernel_size, filters, stage, block, trainable=True): """The identity block is the block that has no conv layer at shortcut. # Arguments input_tensor: input tensor kernel_size: default 3, the kernel size of middle conv layer at main path filters: list of integers, the filterss of 3 conv layer at main path stage: integer, current stage label, used for generating layer names block: 'a','b'..., current block label, used for generating layer names # Returns Output tensor for the block. """ filters1, filters2, filters3 = filters bn_axis = 3 conv_name_1 = 'conv' + str(stage) + '_' + str(block) + '_1x1_reduce' bn_name_1 = 'conv' + str(stage) + '_' + str(block) + '_1x1_reduce/bn' x = Conv2D(filters1, (1, 1), kernel_initializer='orthogonal', use_bias=False, trainable=trainable, kernel_regularizer=l2(weight_decay), name=conv_name_1)(input_tensor) x = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_1)(x) x = Activation('relu')(x) conv_name_2 = 'conv' + str(stage) + '_' + str(block) + '_3x3' bn_name_2 = 'conv' + str(stage) + '_' + str(block) + '_3x3/bn' x = Conv2D(filters2, kernel_size, padding='same', kernel_initializer='orthogonal', use_bias=False, trainable=trainable, kernel_regularizer=l2(weight_decay), name=conv_name_2)(x) x = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_2)(x) x = Activation('relu')(x) conv_name_3 = 'conv' + str(stage) + '_' + str(block) + '_1x1_increase' bn_name_3 = 'conv' + str(stage) + '_' + str(block) + '_1x1_increase/bn' x = Conv2D(filters3, (1, 1), kernel_initializer='orthogonal', use_bias=False, trainable=trainable, 
kernel_regularizer=l2(weight_decay), name=conv_name_3)(x) x = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_3)(x) x = layers.add([x, input_tensor]) x = Activation('relu')(x) return x def conv_block_2D(input_tensor, kernel_size, filters, stage, block, strides=(2, 2), trainable=True): """A block that has a conv layer at shortcut. # Arguments input_tensor: input tensor kernel_size: default 3, the kernel size of middle conv layer at main path filters: list of integers, the filterss of 3 conv layer at main path stage: integer, current stage label, used for generating layer names block: 'a','b'..., current block label, used for generating layer names # Returns Output tensor for the block. Note that from stage 3, the first conv layer at main path is with strides=(2,2) And the shortcut should have strides=(2,2) as well """ filters1, filters2, filters3 = filters bn_axis = 3 conv_name_1 = 'conv' + str(stage) + '_' + str(block) + '_1x1_reduce' bn_name_1 = 'conv' + str(stage) + '_' + str(block) + '_1x1_reduce/bn' x = Conv2D(filters1, (1, 1), strides=strides, kernel_initializer='orthogonal', use_bias=False, trainable=trainable, kernel_regularizer=l2(weight_decay), name=conv_name_1)(input_tensor) x = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_1)(x) x = Activation('relu')(x) conv_name_2 = 'conv' + str(stage) + '_' + str(block) + '_3x3' bn_name_2 = 'conv' + str(stage) + '_' + str(block) + '_3x3/bn' x = Conv2D(filters2, kernel_size, padding='same', kernel_initializer='orthogonal', use_bias=False, trainable=trainable, kernel_regularizer=l2(weight_decay), name=conv_name_2)(x) x = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_2)(x) x = Activation('relu')(x) conv_name_3 = 'conv' + str(stage) + '_' + str(block) + '_1x1_increase' bn_name_3 = 'conv' + str(stage) + '_' + str(block) + '_1x1_increase/bn' x = Conv2D(filters3, (1, 1), kernel_initializer='orthogonal', use_bias=False, trainable=trainable, 
kernel_regularizer=l2(weight_decay), name=conv_name_3)(x) x = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_3)(x) conv_name_4 = 'conv' + str(stage) + '_' + str(block) + '_1x1_proj' bn_name_4 = 'conv' + str(stage) + '_' + str(block) + '_1x1_proj/bn' shortcut = Conv2D(filters3, (1, 1), strides=strides, kernel_initializer='orthogonal', use_bias=False, trainable=trainable, kernel_regularizer=l2(weight_decay), name=conv_name_4)(input_tensor) shortcut = BatchNormalization(axis=bn_axis, trainable=trainable, name=bn_name_4)(shortcut) x = layers.add([x, shortcut]) x = Activation('relu')(x) return x def resnet_2D_v1(inputs, mode='train'): bn_axis = 3 # if mode == 'train': # inputs = Input(shape=input_dim, name='input') # else: # inputs = Input(shape=(input_dim[0], None, input_dim[-1]), name='input') # =============================================== # Convolution Block 1 # =============================================== x1 = Conv2D(64, (7, 7), kernel_initializer='orthogonal', use_bias=False, trainable=True, kernel_regularizer=l2(weight_decay), padding='same', name='conv1_1/3x3_s1')(inputs) x1 = BatchNormalization(axis=bn_axis, name='conv1_1/3x3_s1/bn', trainable=True)(x1) x1 = Activation('relu')(x1) x1 = MaxPooling2D((2, 2), strides=(2, 2))(x1) # =============================================== # Convolution Section 2 # =============================================== x2 = conv_block_2D(x1, 3, [48, 48, 96], stage=2, block='a', strides=(1, 1), trainable=True) x2 = identity_block_2D(x2, 3, [48, 48, 96], stage=2, block='b', trainable=True) # =============================================== # Convolution Section 3 # =============================================== x3 = conv_block_2D(x2, 3, [96, 96, 128], stage=3, block='a', trainable=True) x3 = identity_block_2D(x3, 3, [96, 96, 128], stage=3, block='b', trainable=True) x3 = identity_block_2D(x3, 3, [96, 96, 128], stage=3, block='c', trainable=True) # =============================================== # Convolution 
Section 4 # =============================================== x4 = conv_block_2D(x3, 3, [128, 128, 256], stage=4, block='a', trainable=True) x4 = identity_block_2D(x4, 3, [128, 128, 256], stage=4, block='b', trainable=True) x4 = identity_block_2D(x4, 3, [128, 128, 256], stage=4, block='c', trainable=True) # =============================================== # Convolution Section 5 # =============================================== x5 = conv_block_2D(x4, 3, [256, 256, 512], stage=5, block='a', trainable=True) x5 = identity_block_2D(x5, 3, [256, 256, 512], stage=5, block='b', trainable=True) x5 = identity_block_2D(x5, 3, [256, 256, 512], stage=5, block='c', trainable=True) y = MaxPooling2D((3, 1), strides=(2, 1), name='mpool2')(x5) return inputs, y def resnet_2D_v2(inputs, mode='train'): bn_axis = 3 # if mode == 'train': # inputs = Input(shape=input_dim, name='input') # else: # inputs = Input(shape=(input_dim[0], None, input_dim[-1]), name='input') # =============================================== # Convolution Block 1 # =============================================== x1 = Conv2D(64, (7, 7), strides=(2, 2), kernel_initializer='orthogonal', use_bias=False, trainable=True, kernel_regularizer=l2(weight_decay), padding='same', name='conv1_1/3x3_s1')(inputs) x1 = BatchNormalization(axis=bn_axis, name='conv1_1/3x3_s1/bn', trainable=True)(x1) x1 = Activation('relu')(x1) x1 = MaxPooling2D((2, 2), strides=(2, 2))(x1) # =============================================== # Convolution Section 2 # =============================================== x2 = conv_block_2D(x1, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1), trainable=True) x2 = identity_block_2D(x2, 3, [64, 64, 256], stage=2, block='b', trainable=True) x2 = identity_block_2D(x2, 3, [64, 64, 256], stage=2, block='c', trainable=True) # =============================================== # Convolution Section 3 # =============================================== x3 = conv_block_2D(x2, 3, [128, 128, 512], stage=3, block='a', 
trainable=True) x3 = identity_block_2D(x3, 3, [128, 128, 512], stage=3, block='b', trainable=True) x3 = identity_block_2D(x3, 3, [128, 128, 512], stage=3, block='c', trainable=True) # =============================================== # Convolution Section 4 # =============================================== x4 = conv_block_2D(x3, 3, [256, 256, 1024], stage=4, block='a', strides=(1, 1), trainable=True) x4 = identity_block_2D(x4, 3, [256, 256, 1024], stage=4, block='b', trainable=True) x4 = identity_block_2D(x4, 3, [256, 256, 1024], stage=4, block='c', trainable=True) # =============================================== # Convolution Section 5 # =============================================== x5 = conv_block_2D(x4, 3, [512, 512, 2048], stage=5, block='a', trainable=True) x5 = identity_block_2D(x5, 3, [512, 512, 2048], stage=5, block='b', trainable=True) x5 = identity_block_2D(x5, 3, [512, 512, 2048], stage=5, block='c', trainable=True) y = MaxPooling2D((3, 1), strides=(2, 1), name='mpool2')(x5) return inputs, y import tensorflow.keras as keras import tensorflow as tf import tensorflow.keras.backend as K class VladPooling(keras.layers.Layer): ''' This layer follows the NetVlad, GhostVlad ''' def __init__(self, mode, k_centers, g_centers=0, **kwargs): self.k_centers = k_centers self.g_centers = g_centers self.mode = mode super(VladPooling, self).__init__(**kwargs) def build(self, input_shape): self.cluster = self.add_weight(shape=[self.k_centers+self.g_centers, input_shape[0][-1]], name='centers', initializer='orthogonal') self.built = True def compute_output_shape(self, input_shape): assert input_shape return (input_shape[0][0], self.k_centers*input_shape[0][-1]) def call(self, x): # feat : bz x W x H x D, cluster_score: bz X W x H x clusters. feat, cluster_score = x num_features = feat.shape[-1] # softmax normalization to get soft-assignment. 
# A : bz x W x H x clusters max_cluster_score = K.max(cluster_score, -1, keepdims=True) exp_cluster_score = K.exp(cluster_score - max_cluster_score) A = exp_cluster_score / K.sum(exp_cluster_score, axis=-1, keepdims = True) # Now, need to compute the residual, self.cluster: clusters x D A = K.expand_dims(A, -1) # A : bz x W x H x clusters x 1 feat_broadcast = K.expand_dims(feat, -2) # feat_broadcast : bz x W x H x 1 x D feat_res = feat_broadcast - self.cluster # feat_res : bz x W x H x clusters x D weighted_res = tf.multiply(A, feat_res) # weighted_res : bz x W x H x clusters x D cluster_res = K.sum(weighted_res, [1, 2]) if self.mode == 'gvlad': cluster_res = cluster_res[:, :self.k_centers, :] cluster_l2 = K.l2_normalize(cluster_res, -1) outputs = K.reshape(cluster_l2, [-1, int(self.k_centers) * int(num_features)]) return outputs def amsoftmax_loss(y_true, y_pred, scale=30, margin=0.35): y_pred = y_true * (y_pred - margin) + (1 - y_true) * y_pred y_pred *= scale return K.categorical_crossentropy(y_true, y_pred, from_logits=True) def vggvox_resnet2d_icassp(inputs, num_class=8631, mode='train', args=None): # python predict.py --gpu 1 --net resnet34s --ghost_cluster 2 # --vlad_cluster 8 --loss softmax --resume net='resnet34s' loss='softmax' vlad_clusters=8 ghost_clusters=2 bottleneck_dim=512 aggregation = 'gvlad' mgpu = 0 if net == 'resnet34s': inputs, x = resnet_2D_v1(inputs, mode=mode) else: inputs, x = resnet_2D_v2(inputs, mode=mode) # =============================================== # Fully Connected Block 1 # =============================================== x_fc = keras.layers.Conv2D(bottleneck_dim, (7, 1), strides=(1, 1), activation='relu', kernel_initializer='orthogonal', use_bias=True, trainable=True, kernel_regularizer=keras.regularizers.l2(weight_decay), bias_regularizer=keras.regularizers.l2(weight_decay), name='x_fc')(x) # =============================================== # Feature Aggregation # =============================================== if aggregation == 
'avg': if mode == 'train': x = keras.layers.AveragePooling2D((1, 5), strides=(1, 1), name='avg_pool')(x) x = keras.layers.Reshape((-1, bottleneck_dim))(x) else: x = keras.layers.GlobalAveragePooling2D(name='avg_pool')(x) x = keras.layers.Reshape((1, bottleneck_dim))(x) elif aggregation == 'vlad': x_k_center = keras.layers.Conv2D(vlad_clusters, (7, 1), strides=(1, 1), kernel_initializer='orthogonal', use_bias=True, trainable=True, kernel_regularizer=keras.regularizers.l2(weight_decay), bias_regularizer=keras.regularizers.l2(weight_decay), name='vlad_center_assignment')(x) x = VladPooling(k_centers=vlad_clusters, mode='vlad', name='vlad_pool')([x_fc, x_k_center]) elif aggregation == 'gvlad': x_k_center = keras.layers.Conv2D(vlad_clusters+ghost_clusters, (7, 1), strides=(1, 1), kernel_initializer='orthogonal', use_bias=True, trainable=True, kernel_regularizer=keras.regularizers.l2(weight_decay), bias_regularizer=keras.regularizers.l2(weight_decay), name='gvlad_center_assignment')(x) x = VladPooling(k_centers=vlad_clusters, g_centers=ghost_clusters, mode='gvlad', name='gvlad_pool')([x_fc, x_k_center]) else: raise IOError('==> unknown aggregation mode') # =============================================== # Fully Connected Block 2 # =============================================== x = keras.layers.Dense(bottleneck_dim, activation='relu', kernel_initializer='orthogonal', use_bias=True, trainable=True, kernel_regularizer=keras.regularizers.l2(weight_decay), bias_regularizer=keras.regularizers.l2(weight_decay), name='fc6')(x) # =============================================== # Softmax Vs AMSoftmax # =============================================== if loss == 'softmax': y = keras.layers.Dense(num_class, activation='softmax', kernel_initializer='orthogonal', use_bias=False, trainable=True, kernel_regularizer=keras.regularizers.l2(weight_decay), bias_regularizer=keras.regularizers.l2(weight_decay), name='prediction')(x) trnloss = 'categorical_crossentropy' elif loss == 
'amsoftmax': x_l2 = keras.layers.Lambda(lambda x: K.l2_normalize(x, 1))(x) y = keras.layers.Dense(num_class, kernel_initializer='orthogonal', use_bias=False, trainable=True, kernel_constraint=keras.constraints.unit_norm(), kernel_regularizer=keras.regularizers.l2(weight_decay), bias_regularizer=keras.regularizers.l2(weight_decay), name='prediction')(x_l2) trnloss = amsoftmax_loss else: raise IOError('==> unknown loss.') if mode == 'eval': y = keras.layers.Lambda(lambda x: keras.backend.l2_normalize(x, 1))(x) return y # model = keras.models.Model(inputs, y, name='vggvox_resnet2D_{}_{}'.format(loss, aggregation)) # if mode == 'train': # if mgpu > 1: # model = ModelMGPU(model, gpus=mgpu) # # set up optimizer. # if args.optimizer == 'adam': opt = keras.optimizers.Adam(lr=1e-3) # elif args.optimizer =='sgd': opt = keras.optimizers.SGD(lr=0.1, momentum=0.9, decay=0.0, nesterov=True) # else: raise IOError('==> unknown optimizer type') # model.compile(optimizer=opt, loss=trnloss, metrics=['acc']) # return model class Model: def __init__(self): self.X = tf.placeholder(tf.float32, [None, 257, None, 1]) params = {'dim': (257, None, 1), 'nfft': 512, 'spec_len': 250, 'win_length': 400, 'hop_length': 160, 'n_classes': 5994, 'sampling_rate': 16000, 'normalize': True, } self.logits = vggvox_resnet2d_icassp(self.X, num_class=params['n_classes'], mode='eval') self.logits = tf.identity(self.logits, name = 'logits') ckpt_path = 'out/vggvox.ckpt' tf.reset_default_graph() sess = tf.InteractiveSession() model = Model() sess.run(tf.global_variables_initializer()) var_lists = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) saver = tf.train.Saver(var_list = var_lists) saver.restore(sess, ckpt_path) import librosa import numpy as np # =============================================== # code from Arsha for loading data. 
# =============================================== def load_wav(vid_path, sr, mode='train'): wav, sr_ret = librosa.load(vid_path, sr=sr) assert sr_ret == sr if mode == 'train': extended_wav = np.append(wav, wav) if np.random.random() < 0.3: extended_wav = extended_wav[::-1] return extended_wav else: extended_wav = np.append(wav, wav[::-1]) return extended_wav def lin_spectogram_from_wav(wav, hop_length, win_length, n_fft=1024): linear = librosa.stft(wav, n_fft=n_fft, win_length=win_length, hop_length=hop_length) # linear spectrogram return linear.T def load_data(path, win_length=400, sr=16000, hop_length=160, n_fft=512, spec_len=250, mode='train'): wav = load_wav(path, sr=sr, mode=mode) linear_spect = lin_spectogram_from_wav(wav, hop_length, win_length, n_fft) mag, _ = librosa.magphase(linear_spect) # magnitude mag_T = mag.T freq, time = mag_T.shape if mode == 'train': if time > spec_len: randtime = np.random.randint(0, time-spec_len) spec_mag = mag_T[:, randtime:randtime+spec_len] else: spec_mag = np.pad(mag_T, ((0, 0), (0, spec_len - time)), 'constant') else: spec_mag = mag_T # preprocessing, subtract mean, divided by time-wise var mu = np.mean(spec_mag, 0, keepdims=True) std = np.std(spec_mag, 0, keepdims=True) return (spec_mag - mu) / (std + 1e-5) from glob import glob import numpy as np files = glob('/Users/huseinzolkepli/Documents/malaya-speech/speech/record/*.wav') len(files) from glob import glob import numpy as np files = glob('/Users/huseinzolkepli/Documents/malaya-speech/speech/record/*.wav') wavs = [load_data(wav, mode = 'eval') for wav in files] [wav.shape for wav in wavs] def pred(x): return sess.run(model.logits, feed_dict = {model.X: np.expand_dims([x], -1)}) r = [pred(wav) for wav in wavs] results = np.concatenate(r) results.shape saver = tf.train.Saver() saver.save(sess, 'v2/model.ckpt') strings = ','.join( [ n.name for n in tf.get_default_graph().as_graph_def().node if ('Variable' in n.op or 'Placeholder' in n.name or 'logits' in n.name or 
'alphas' in n.name
            or 'self/Softmax' in n.name)
        and 'adam' not in n.name
        and 'beta' not in n.name
        and 'global_step' not in n.name
        and 'Assign' not in n.name
    ]
)

def freeze_graph(model_dir, output_node_names):
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            'directory: %s' % model_dir
        )

    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path
    absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + '/frozen_model.pb'
    clear_devices = True
    with tf.Session(graph = tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(
            input_checkpoint + '.meta', clear_devices = clear_devices
        )
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(','),
        )
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.'
      % len(output_graph_def.node))

freeze_graph('v2', strings)

# def load_graph(frozen_graph_filename):
#     with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
#         graph_def = tf.GraphDef()
#         graph_def.ParseFromString(f.read())
#     with tf.Graph().as_default() as graph:
#         tf.import_graph_def(graph_def)
#     return graph

def load_graph(frozen_graph_filename, **kwargs):
    with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # https://github.com/onnx/tensorflow-onnx/issues/77#issuecomment-445066091
    # patch training-only ops so the frozen graph imports cleanly
    for node in graph_def.node:
        if node.op == 'RefSwitch':
            node.op = 'Switch'
            for index in range(len(node.input)):
                if 'moving_' in node.input[index]:
                    node.input[index] = node.input[index] + '/read'
        elif node.op == 'AssignSub':
            node.op = 'Sub'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']
        elif node.op == 'AssignAdd':
            node.op = 'Add'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']
        elif node.op == 'Assign':
            node.op = 'Identity'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']
            if 'validate_shape' in node.attr:
                del node.attr['validate_shape']
            if len(node.input) == 2:
                node.input[0] = node.input[1]
                del node.input[1]
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph

g = load_graph('v2/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)

def pred(o):
    return test_sess.run(logits, feed_dict = {x: np.expand_dims([o], -1)})

r = [pred(wav) for wav in wavs]
r = np.concatenate(r)
logits

from tensorflow.tools.graph_transforms import TransformGraph
import tensorflow as tf

pb = 'v2/frozen_model.pb'
transforms = ['add_default_attributes',
              'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
              'fold_constants(ignore_errors=true)',
              'fold_batch_norms',
              'fold_old_batch_norms',
              'quantize_weights',
              'strip_unused_nodes',
              'sort_by_execution_order']
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
    input_graph_def.ParseFromString(f.read())

transformed_graph_def = TransformGraph(input_graph_def,
                                       ['Placeholder'],
                                       ['logits'],
                                       transforms)

with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
    f.write(transformed_graph_def.SerializeToString())
```
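The soft-assignment and residual aggregation performed by the `VladPooling` layer above can be restated in a few lines of NumPy. This is a sketch mirroring its `call` method for a single sample, not a drop-in replacement:

```python
import numpy as np

def gvlad_pool(feat, cluster_score, centers, k, g):
    """NumPy sketch of (Ghost)VLAD pooling for one sample.
    feat: (W, H, D) local descriptors; cluster_score: (W, H, K+G) assignment logits;
    centers: (K+G, D) learned cluster centers. The G ghost clusters take part in
    the soft assignment but are dropped from the output."""
    # softmax over clusters gives the soft assignment A
    a = np.exp(cluster_score - cluster_score.max(-1, keepdims=True))
    a = a / a.sum(-1, keepdims=True)                            # (W, H, K+G)
    # residual of every descriptor to every center
    res = feat[:, :, None, :] - centers[None, None, :, :]       # (W, H, K+G, D)
    vlad = (a[..., None] * res).sum(axis=(0, 1))                # (K+G, D)
    vlad = vlad[:k]                                             # drop ghost clusters
    vlad = vlad / np.linalg.norm(vlad, axis=-1, keepdims=True)  # intra-normalization
    return vlad.reshape(-1)                                     # (K * D,)

rng = np.random.default_rng(0)
out = gvlad_pool(rng.standard_normal((7, 5, 16)),   # local features
                 rng.standard_normal((7, 5, 10)),   # assignment scores, K + G = 10
                 rng.standard_normal((10, 16)),     # cluster centers
                 k=8, g=2)
out.shape  # (128,) = 8 kept clusters * 16 dims
```

The ghost clusters compete in the softmax but are discarded before normalization, which is what lets GhostVLAD soak up noisy, uninformative frames.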
# Sample grouping

We are going to linger on the concept of sample groups. As in the previous section, we will give an example to highlight some surprising results. This time, we will use the handwritten digits dataset.

```
from sklearn.datasets import load_digits

digits = load_digits()
data, target = digits.data, digits.target
```

We will recreate the same model used in the previous exercise: a logistic regression classifier with a preprocessor to scale the data.

```
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(StandardScaler(), LogisticRegression())
```

With this baseline model, we will first use a `KFold` cross-validation without shuffling the data.

```
from sklearn.model_selection import cross_val_score, KFold

cv = KFold(shuffle=False)
test_score_no_shuffling = cross_val_score(model, data, target, cv=cv, n_jobs=-1)
print(f"The average accuracy is "
      f"{test_score_no_shuffling.mean():.3f} +/- "
      f"{test_score_no_shuffling.std():.3f}")
```

Now, let's repeat the experiment by shuffling the data within the cross-validation.

```
cv = KFold(shuffle=True)
test_score_with_shuffling = cross_val_score(model, data, target, cv=cv, n_jobs=-1)
print(f"The average accuracy is "
      f"{test_score_with_shuffling.mean():.3f} +/- "
      f"{test_score_with_shuffling.std():.3f}")
```

We observe that shuffling the data improves the mean accuracy. We could go a little further and plot the distribution of the testing scores. First, let's concatenate the test scores.

```
import pandas as pd

all_scores = pd.DataFrame(
    [test_score_no_shuffling, test_score_with_shuffling],
    index=["KFold without shuffling", "KFold with shuffling"],
).T
```

Let's plot the distribution now.
```
import matplotlib.pyplot as plt
import seaborn as sns

all_scores.plot.hist(bins=10, edgecolor="black", density=True, alpha=0.7)
plt.xlim([0.8, 1.0])
plt.xlabel("Accuracy score")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Distribution of the test scores")
```

The cross-validation testing score obtained with shuffling has less variance than the one obtained without shuffling: without shuffling, some specific fold leads to a low score.

```
print(test_score_no_shuffling)
```

Thus, there is an underlying structure in the data that shuffling breaks, which yields better results. To get a better understanding, we should read the documentation shipped with the dataset.

```
print(digits.DESCR)
```

If we read carefully, 13 writers wrote the digits of our dataset, for a total of 1797 samples. Thus, each writer wrote the same numbers several times. Let's suppose that the samples are grouped by writer. Then, not shuffling the data keeps all of a writer's samples together, either in the training set or in the testing set. Mixing the data breaks this structure, so digits written by the same writer end up in both the training and testing sets.

Besides, a writer will usually tend to write digits in the same manner. Thus, our model will learn to identify a writer's pattern for each digit instead of recognizing the digit itself.

We can solve this problem by ensuring that all the data associated with a writer belongs either to the training set or to the testing set. Thus, we want to group samples by writer. Here, we will manually define the groups for the 13 writers.
```
from itertools import count

import numpy as np

# defines the lower and upper bounds of sample indices
# for each writer
writer_boundaries = [0, 130, 256, 386, 516, 646, 776, 915, 1029,
                     1157, 1287, 1415, 1545, 1667, 1797]
groups = np.zeros_like(target)
lower_bounds = writer_boundaries[:-1]
upper_bounds = writer_boundaries[1:]

for group_id, lb, up in zip(count(), lower_bounds, upper_bounds):
    groups[lb:up] = group_id
```

We can check the grouping by plotting the writer id assigned to each sample index.

```
plt.plot(groups)
plt.yticks(np.unique(groups))
plt.xticks(writer_boundaries, rotation=90)
plt.xlabel("Target index")
plt.ylabel("Writer index")
_ = plt.title("Underlying writer groups existing in the target")
```

Once we group the digits by writer, we can use cross-validation to take this information into account: one of the `Group`-aware strategies should be used.

```
from sklearn.model_selection import GroupKFold

cv = GroupKFold()
test_score = cross_val_score(model, data, target, groups=groups, cv=cv, n_jobs=-1)
print(f"The average accuracy is "
      f"{test_score.mean():.3f} +/- "
      f"{test_score.std():.3f}")
```

We see that this strategy is less optimistic regarding the model's statistical performance. However, it is the most reliable one if our goal is to make handwritten digit recognition independent of the writer. Besides, we can also see that the standard deviation was reduced.

```
all_scores = pd.DataFrame(
    [test_score_no_shuffling, test_score_with_shuffling, test_score],
    index=["KFold without shuffling", "KFold with shuffling", "KFold with groups"],
).T

all_scores.plot.hist(bins=10, edgecolor="black", density=True, alpha=0.7)
plt.xlim([0.8, 1.0])
plt.xlabel("Accuracy score")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Distribution of the test scores")
```

As a conclusion, it is really important to take any sample grouping pattern into account when evaluating a model. Otherwise, the results obtained will be over-optimistic with regard to reality.
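Under the hood, a group-aware splitter such as `GroupKFold` assigns whole groups to folds, so no writer ever appears on both sides of a split. A simplified sketch of that guarantee (round-robin assignment only; the real implementation also balances fold sizes):

```python
import numpy as np

def group_kfold_indices(groups, n_splits=3):
    """Minimal group-aware K-fold: whole groups are assigned to folds, so a
    group never appears in both the train and the test side of a split."""
    unique = np.unique(groups)
    folds = [unique[i::n_splits] for i in range(n_splits)]  # round-robin over groups
    for held_out in folds:
        test = np.isin(groups, held_out)
        yield np.where(~test)[0], np.where(test)[0]

groups = np.repeat(np.arange(6), 4)  # 6 "writers", 4 samples each
for train_idx, test_idx in group_kfold_indices(groups):
    # train and test writers are always disjoint
    assert not set(groups[train_idx]) & set(groups[test_idx])
```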