## 2-1. NISQ Algorithms and Long-Term Algorithms
The quantum algorithms invented and discovered so far can be broadly divided into two groups from the standpoint of feasibility:
one is the class of **NISQ algorithms**, the other that of **long-term algorithms**. (These terms are not in general use, so take care when reading other literature. Also note that **the distinction between the two is not absolute: it shifts with the size of the problem to be solved and with advances in technology.**) Representative examples of each are shown in the table.

(VQE = Variational Quantum Eigensolver (Section 5-1), QAOA = Quantum Approximate Optimization Algorithm (Section 5-3), QCL = Quantum Circuit Learning (Section 5-2), QFT = Quantum Fourier Transform (Section 2-3), QPE = Quantum Phase Estimation (Section 2-4), HHL = Harrow-Hassidim-Lloyd algorithm (Section 7-3))
### NISQ Algorithms
#### What is NISQ?
We begin with NISQ algorithms, but first: what exactly is a NISQ device (not to be confused with [NISA](https://www.fsa.go.jp/policy/nisa2/about/index.html), Japan's tax-exempt investment scheme), a term heard so often lately?
NISQ (commonly pronounced "nisk") stands for Noisy Intermediate-Scale Quantum device, a collective term for the small-to-medium-scale quantum computers (up to roughly a few hundred qubits) expected to become feasible within the next several years to a decade. NISQ devices cannot perform "quantum error correction" ([Chapter 9]), which requires a large number of qubits, so they are fully exposed to the errors (noise) that arise during computation (this is the sense in which they are called "Noisy"). They are still far from true quantum computers that can correct errors on the fly as they compute; by analogy with classical computers, they might be likened to the transistor- or vacuum-tube-based machines that preceded the invention of the integrated circuit.
#### Overview of NISQ algorithms
As noted above, the influence of noise is unavoidable on NISQ devices. The longer the computation runs (the more complex the algorithm), the more this noise accumulates, until the output becomes meaningless. For example, the famous Shor and Grover algorithms require complex circuits (a large number of operations), and NISQ devices, with their low error tolerance, lack the power to run them.
NISQ algorithms were born from the question of whether something practically useful can nevertheless be achieved with NISQ devices. Put that way it may sound negative, but for tasks such as simulating chemical reactions, it has been suggested that NISQ devices may outperform classical computers (see the Qmedia article [Quantum Computing Now and in the Future](https://www.qmedia.jp/nisq-era-john-preskill)). NISQ devices are thus attracting attention as candidates for demonstrating "quantum supremacy", the superiority of quantum computers over classical ones.
In general, a quantum computation becomes more susceptible to errors as the number of qubits grows and the number of quantum operations increases. NISQ algorithms must therefore run on a small number of qubits with shallow quantum circuits (few quantum gates). Against this background, the mainstream approach in NISQ-algorithm research is the "**quantum-classical hybrid algorithm**": rather than delegating the entire computation to the quantum computer, only the parts it is good at are assigned to the quantum machine, and the rest is handled by a classical computer. The NISQ algorithms treated in Quantum Native Dojo are essentially based on this quantum-classical hybrid approach.
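As a rough illustration (a toy model, not from the text): if every gate fails independently with probability p, a depth-d circuit finishes without any error only with probability (1-p)^d, which decays exponentially in the circuit depth.

```
# Toy model: probability that a circuit of `depth` gates runs without
# any error, when each gate independently fails with probability p.
def success_probability(p, depth):
    return (1 - p) ** depth

for depth in (10, 100, 1000, 10000):
    print(depth, round(success_probability(0.001, depth), 3))
# With a 0.1% per-gate error rate, a 1000-gate circuit already succeeds
# only about 37% of the time.
```

This is why deep circuits such as Shor's algorithm are out of reach without error correction, while shallow circuits can still give usable results.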
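A minimal sketch of such a quantum-classical hybrid loop (illustrative only, not from the text): the "quantum" step, here simulated classically for a single qubit, evaluates an expectation value for the current parameters, and the classical optimizer updates the parameters using the parameter-shift rule.

```
import numpy as np

def expectation_z(theta):
    # "Quantum" step: simulate Ry(theta)|0> on one qubit and return <Z>.
    # On a real NISQ device this expectation would be estimated from
    # repeated measurements of a shallow circuit.
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return state[0] ** 2 - state[1] ** 2                      # <Z> = cos(theta)

def hybrid_minimize(theta=0.1, lr=0.4, steps=100):
    # Classical step: gradient descent, with the gradient obtained from two
    # extra "quantum" evaluations via the parameter-shift rule.
    for _ in range(steps):
        grad = 0.5 * (expectation_z(theta + np.pi / 2)
                      - expectation_z(theta - np.pi / 2))
        theta -= lr * grad
    return theta, expectation_z(theta)

theta_opt, energy = hybrid_minimize()
print(theta_opt, energy)  # theta converges to pi, <Z> to -1
```

Only the short, shallow circuit evaluations run on the quantum device; the optimization loop itself stays on the classical side, which is what keeps the circuit depth within what NISQ hardware can tolerate.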
### Long-Term Algorithms
Long-term algorithms, by contrast, become possible only under the assumption that many qubits are available and that error correction can be performed. Of course, **whether an algorithm can run on NISQ depends on the size and required precision of the problem**, so there is no deep meaning in labeling a particular algorithm as NISQ or long-term. It may be best to regard essentially every quantum algorithm as a long-term algorithm, a subset of which happens to be executable on NISQ devices as well.
The algorithms studied in this chapter are introductory long-term algorithms (see the yellow entries in the table above). Later chapters cover NISQ algorithms, an area of very active recent research, as well as more advanced long-term algorithms such as Grover's algorithm.
### Further reading:
- Qmedia, Quantum Computing Now and in the Future (Japanese): https://www.qmedia.jp/nisq-era-john-preskill/
- Quantum Algorithm Zoo: http://quantumalgorithmzoo.org/
- Quantum Algorithm Zoo, Japanese translation: https://www.qmedia.jp/algebraic-number-theoretic-algorithms/
```
# --- added to file ----
# Takes in a string, "bucket_name", a string, "remote_folder",
# and a list of strings or a single string, "keywords". Gets all
# s3 keys for bucket_name/remote_folder. Uses a list convention
# to go through keywords, i.e. ['a', 'b', 'c OR d OR e'] will
# find all files containing 'a' and 'b' and either 'c', 'd', or 'e'.
# Using '' will return every file key in the folder.
import boto3

def get_s3_keys(bucket_name, remote_folder, keywords=''):
    s3 = boto3.resource('s3')
    bucket = s3.Bucket(bucket_name)
    obj_list = []
    if isinstance(keywords, str):  # wrap a bare string so it is not split into characters
        keywords = [keywords]
    keywords = [[k.strip() for k in kw.split('OR')] for kw in keywords]
    for obj in bucket.objects.all():
        filename = obj.key.split("/")[-1]
        kwds_in = all(any(k in filename for k in group) for group in keywords)
        if remote_folder in obj.key and kwds_in:
            obj_list.append(s3.Object(obj.bucket_name, obj.key))
    return obj_list
from os import listdir, getcwd, chdir
from os.path import isfile, join
import pandas as pd
import numpy as np
import csv
!pwd
def get_files(path, keywords=["features_ OR msd_"]):
    """
    Takes in a path and list of keywords. Returns a list of filenames
    that are within the path that contain one of the keywords in the list.
    Set keywords to "" to get all files in the path.
    """
    if isinstance(keywords, str):  # wrap a bare string so it is not split into characters
        keywords = [keywords]
    keywords = [[k.strip() for k in kw.split('OR')] for kw in keywords]
    files = [f for f in listdir(path) if isfile(join(path, f))]
    file_list = []
    for filename in files:
        kwds_in = all(any(k in filename for k in group) for group in keywords)
        if kwds_in:
            file_list.append(filename)
    return file_list
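# --- Standalone demo (not part of the original notebook) of the keyword
# convention shared by get_files/get_s3_keys: ['a', 'b', 'c OR d OR e']
# matches filenames containing 'a' AND 'b' AND at least one of 'c'/'d'/'e';
# '' matches everything.
def matches(filename, keywords):
    if isinstance(keywords, str):
        keywords = [keywords]
    groups = [[k.strip() for k in kw.split('OR')] for kw in keywords]
    return all(any(k in filename for k in group) for group in groups)

print(matches('features_msd_run1.csv', ['features_', 'msd_ OR Traj_']))  # True
print(matches('Traj_run1.csv', ['features_', 'msd_ OR Traj_']))          # False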
def combine_track(trackFile, feature=None, featureDF=None):
    '''
    Adds a feature or set of features to the corresponding track file.
    Preconditions: Both files must exist; Feature(s) must be in the
    feature file. Raises FileNotFoundError if preconditions are not met.
    Input:
    ------
    trackFile : string :
        The file location of the dataframe
    feature : list : string : tuple :
        feature or set of features to attach to track dataframe
    featureDF : pd.DataFrame :
        optional pre-loaded feature dataframe; looked up with find_pair
        when omitted
    Output:
    -------
    trackDF : pd.DataFrame :
        DataFrame of the combined tracks
    '''
    if isinstance(trackFile, str):
        try:
            trackDF = pd.read_csv(trackFile)
        except FileNotFoundError:
            raise FileNotFoundError("DataFrame cannot be located")
else:
trackDF = trackFile
if featureDF is None:
featureDF = find_pair(trackFile)
if feature is None:
feature = np.setdiff1d(featureDF.columns.values, trackDF.columns.values)
elif isinstance(feature, str):
feature = [feature]
elif isinstance(feature, tuple):
feature = list(feature)
trackDF = trackDF.reindex(columns=[*trackDF.columns.tolist()] + [*feature], fill_value=np.nan)
maxFrames = int(trackDF["Frame"].max())
maxTracks = int(trackDF["Track_ID"].max())
for i in range(int(maxTracks)+1):
for feat in feature:
trackFeature = featureDF.loc[i, feat]
trackDF.loc[(maxFrames)*(i+1) + i, feat] = trackFeature
return trackDF
def find_pair(filename):
    """
    Tries to find the feature file pair for either the msd_ or Traj_ df,
    or the Traj_ or msd_ file for an input features_ file.
    Returns the pd.DataFrame of that pair if found.
    """
if "msd_" in filename:
try:
filename = filename.replace("msd_", "").replace("Traj_", "")
filename = filename.split("/")
filename[-1] = "features_" + filename[-1]
featureFile = "/".join(filename)
return pd.read_csv(featureFile)
except FileNotFoundError:
print("File pair could not be found")
elif "features_" in filename:
try:
filename = filename.replace("features_", "")
filename = filename.split("/")
filename[-1] = "msd_" + filename[-1]
featureFile = "/".join(filename)
return pd.read_csv(featureFile)
        except FileNotFoundError:
try:
filename = filename.replace("features_", "")
filename = filename.split("/")
filename[-1] = "Traj_" + filename[-1]
featureFile = "/".join(filename)
return pd.read_csv(featureFile)
except FileNotFoundError:
print("File pair could not be found")
if not 'workbookDir' in globals():
workbookDir = getcwd()
print('Current Notebook Dir: ' + workbookDir)
chdir(workbookDir) # Go to current workbook Dir
chdir('..') # Go up one
workbookDir = getcwd()
print(f'Using current directory for loading data: {getcwd()}')
dataset_path = './raw_data_region_cortex_striatum'
track_file_list = get_files(dataset_path, keywords=['msd_'])
feature_file_list = get_files(dataset_path, ['features_'])
!pwd
feature_file_list
fstats_tot = None
video_num = 0
for filename in feature_file_list:
try:
fstats = pd.read_csv(dataset_path + '/' + filename, encoding = "ISO-8859-1", index_col='Unnamed: 0')
tstats = find_pair(dataset_path + '/' + filename)
print('{} size: {}'.format(filename, fstats.shape))
if 'cortex' in filename:
fstats['region'] = pd.Series(fstats.shape[0]*[0], index=fstats.index)
elif 'striatum' in filename:
fstats['region'] = pd.Series(fstats.shape[0]*[1], index=fstats.index)
else:
print('Error, no target')
fstats['Video Number'] = pd.Series(fstats.shape[0]*[video_num], index=fstats.index)
        # NOTE: `feat` (the feature-name array) is defined in a later cell and must exist before this loop runs
        fstats = combine_track(tstats, feature=np.append(feat, ['region']), featureDF=fstats)
if fstats_tot is None:
fstats_tot = fstats
else:
            fstats_tot = pd.concat([fstats_tot, fstats], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
video_num += 1
except Exception:
print('Skipped!: {}'.format(filename))
fstats_tot
filename = 'features_NT_slice_2_striatum_vid_5.csv'
fstats = pd.read_csv(dataset_path + '/' + filename, encoding = "ISO-8859-1", index_col='Unnamed: 0')
tstats = find_pair(dataset_path + '/' + filename)
print('{} size: {}'.format(filename, fstats.shape))
if 'cortex' in filename:
fstats['region'] = pd.Series(fstats.shape[0]*[0], index=fstats.index)
elif 'striatum' in filename:
fstats['region'] = pd.Series(fstats.shape[0]*[1], index=fstats.index)
else:
print('Error, no target')
fstats = combine_track(tstats, feature=np.append(feat, ['region']), featureDF=fstats)
if fstats_tot is None:
fstats_tot = fstats
else:
    fstats_tot = pd.concat([fstats_tot, fstats], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
# fstats_tot.to_csv('cortex_striatum_featuresandtracks.csv')
fstats_tot = pd.read_csv('saved_datasets/cortex_striatum_featuresandtracks.csv')
feat = np.array(['AR', 'D_fit', 'Deff1', 'Deff2', 'MSD_ratio', 'alpha',
'asymmetry1', 'asymmetry2', 'asymmetry3', 'boundedness',
'efficiency', 'elongation', 'fractal_dim', 'frames', 'kurtosis',
'straightness', 'trappedness'])
def zero_df(df, col, res=(0, 651)):
'''
Zeros a single dataframe column so that the first value will be
located at the start of the track.
'''
try:
shift_val = df.iloc[res[0]:res[1]][col].reset_index().dropna().index[0]
    except IndexError:  # the column slice is all-NaN
        shift_val = res[0]-res[1]-1
return df.iloc[res[0]:res[1]][col].reset_index().shift(-shift_val, fill_value=np.nan)[col]
def get_zeroed_tracks(df, col, res=650):
'''
Creates an array of all the tracks for a single column in a file
in which the value is zeroed to frame = 0
'''
lower = 0
upper = res+1
value = []
while (upper <= len(df)):
value.append(list(zero_df(df, col=col, res=[lower, upper])))
lower = upper
upper = lower + res + 1
return value
import numpy as np
import pandas as pd
# Creates x and y datasets for LSTM based off of input
# track_df data
def get_xy_data(df, target, feat=None, use_feat=False, res=650):
n_tracks = int((len(df))/(res+1))
frame = get_zeroed_tracks(df, 'Frame', res=res)
X = get_zeroed_tracks(df, 'X', res=res)
Y = get_zeroed_tracks(df, 'Y', res=res)
MSDs = get_zeroed_tracks(df, 'MSDs', res=res)
trgt = df[target]
datax = []
datay = []
datafeat = []
print(n_tracks)
for j in range(n_tracks):
trackx = []
tracky = []
trackfeat = []
for i in range(res+1):
trackx.append([int(frame[j][i]), X[j][i], Y[j][i], MSDs[j][i]])
datax.append(trackx)
del(trackx)
tracky.append(trgt[(res+1)*(j+1)-1])
datay.append(tracky)
del(tracky)
if use_feat is True:
trackfeat.append(list(df.loc[(res+1)*(j+1)-1, feat]))
datafeat.append(trackfeat)
del(trackfeat)
del(df, frame, X, Y, MSDs, trgt)
datax = np.array(datax)
datax = datax.reshape(n_tracks, res+1, 4)
datay = np.array(datay)
datay = datay.reshape(n_tracks, 1)
datafeat = np.array(datafeat)
datafeat = datafeat.reshape(n_tracks, len(feat))
result = [datax, datay]
if use_feat is True:
result += [datafeat]
return tuple(result)
feat
(datax, datay, datafeat) = get_xy_data(fstats_tot, 'region', feat, True)
def get_track(df, track, res):
return df.loc[(res+1)*(track):(res+1)*(track+1)-1]
def get_feat(df, track, res, feat):
return df.loc[(res+1)*(track+1)-1, feat]
np.save('./saved_datasets/RNN_region_datax', datax)
np.save('./saved_datasets/RNN_region_datay', datay)
np.save('./saved_datasets/RNN_region_datafeat', datafeat)
# datax = np.load('./saved_datasets/RNN_region_datax.npy')
# datay = np.load('./saved_datasets/RNN_region_datay.npy')
# datafeat = np.load('./saved_datasets/RNN_region_datafeat.npy')
split = 0.7  # train fraction (the remainder is held out for testing)
train_index = np.random.choice(np.arange(0, len(datax)), int(len(datax)*split), replace=False)
test_index = np.setdiff1d(np.arange(0, len(datax)), train_index)
datax = np.nan_to_num(datax, copy=True, nan=-1.0, posinf=-1.0, neginf=-1.0)
datay = np.nan_to_num(datay, copy=True, nan=-1.0, posinf=-1.0, neginf=-1.0)
X_train = datax[train_index]
y_train = datay[train_index]
feat_train = datafeat[train_index]
X_test = datax[test_index]
y_test = datay[test_index]
feat_test = datafeat[test_index]
def numpy_one_hot_encode(mat, encoder=None):
if encoder is None:
encoder = np.unique(mat)
mat = np.array(encoder == mat).astype(int)
return mat, encoder
y_train, encoder = numpy_one_hot_encode(y_train)
y_test, encoder = numpy_one_hot_encode(y_test, encoder)
def numpy_decode(mat, encoder):
    # argmax-based decode; the original `i[i != 0]` trick fails whenever
    # the encoder contains the label 0 (as it does for the region labels)
    return np.array([encoder[row.argmax()] for row in mat])
# Decoding here would undo the one-hot encoding that the categorical
# cross-entropy model below expects, so keep y_train/y_test encoded:
# y_train = numpy_decode(y_train, encoder)
# y_test = numpy_decode(y_test, encoder)
n_timesteps, n_features, n_outputs = X_train.shape[1], X_train.shape[2], y_train.shape[1]
(n_timesteps, n_features, n_outputs)
n_samples, n_feat_size = feat_train.shape
(n_samples, n_feat_size)
#Kera libraries
import numpy
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import Dense, LSTM, Input, Dropout, Concatenate, Flatten, TimeDistributed
from tensorflow.keras.preprocessing import sequence
# LSTM without dropout for sequence classification in the IMDB dataset
import numpy
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
def rnn_clsfy(X_train, y_train, n_timesteps, n_features, n_outputs, epochs=15, batch_size=64, verbose=0, **kwargs):
    dropout = kwargs.get('dropout', 0.5)
    seed = kwargs.get('seed', 123)
    metrics = kwargs.get('metrics', ['accuracy'])
    n_rnnnodes = kwargs.get('n_rnnnodes', 100)
    tf.random.set_seed(seed)  # fix random seed for reproducibility
# create the model
model = Sequential()
model.add(LSTM(n_rnnnodes, input_shape=(n_timesteps, n_features), return_sequences=False))
model.add(Dropout(dropout))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=metrics)
print(model.summary())
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size)
    # Final evaluation (NOTE: X_test and y_test come from the enclosing notebook scope)
    score = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=verbose)
print(f'Accuracy: {score[1]}')
return model
model = rnn_clsfy(X_train, y_train, n_timesteps, n_features, n_outputs, epochs=50, batch_size=100, verbose=0, dropout=0.4, seed=10, metrics = ['accuracy'])
model.save('./saved_models/LSTM_RNN_MODEL_50_50_SPLIT_Striatum_Cortex_TARGET_JUL3020_DATE_50_EPOCHS_40_DROPOUT_SHACK')
model = load_model('./saved_models/LSTM_RNN_MODEL_70_20_SPLIT_Striatum_Cortex_TARGET_JUL3020_DATE_SHACK')
from utils.constants import MAX_NB_VARIABLES, MAX_TIMESTEPS_LIST
from utils.generic_utils import load_dataset_at, calculate_dataset_metrics, cutoff_choice, \
cutoff_sequence
from keras import backend as K
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.optimizers import Adam
from keras.layers import Permute
from keras.models import Model
from sklearn.preprocessing import LabelEncoder
import warnings
warnings.simplefilter('ignore', category=DeprecationWarning)
def multi_label_log_loss(y_pred, y_true):
return K.sum(K.binary_crossentropy(y_pred, y_true), axis=-1)
def _average_gradient_norm(model, X_train, y_train, batch_size):
# just checking if the model was already compiled
if not hasattr(model, "train_function"):
raise RuntimeError("You must compile your model before using it.")
weights = model.trainable_weights # weight tensors
get_gradients = model.optimizer.get_gradients(
model.total_loss, weights) # gradient tensors
input_tensors = [
# input data
model.inputs[0],
# how much to weight each sample by
model.sample_weights[0],
# labels
model.targets[0],
# train or test mode
K.learning_phase()
]
grad_fct = K.function(inputs=input_tensors, outputs=get_gradients)
steps = 0
total_norm = 0
s_w = None
nb_steps = X_train.shape[0] // batch_size
if X_train.shape[0] % batch_size == 0:
pad_last = False
else:
pad_last = True
def generator(X_train, y_train, pad_last):
for i in range(nb_steps):
X = X_train[i * batch_size: (i + 1) * batch_size, ...]
y = y_train[i * batch_size: (i + 1) * batch_size, ...]
yield (X, y)
if pad_last:
X = X_train[nb_steps * batch_size:, ...]
y = y_train[nb_steps * batch_size:, ...]
yield (X, y)
datagen = generator(X_train, y_train, pad_last)
while steps < nb_steps:
X, y = next(datagen)
# set sample weights to one
# for every input
if s_w is None:
s_w = np.ones(X.shape[0])
gradients = grad_fct([X, s_w, y, 0])
total_norm += np.sqrt(np.sum([np.sum(np.square(g))
for g in gradients]))
steps += 1
if pad_last:
X, y = next(datagen)
# set sample weights to one
# for every input
if s_w is None:
s_w = np.ones(X.shape[0])
gradients = grad_fct([X, s_w, y, 0])
total_norm += np.sqrt(np.sum([np.sum(np.square(g))
for g in gradients]))
steps += 1
return total_norm / float(steps)
def rnn_train_model(model: Model,
                    dataset_id,
                    dataset_prefix,
                    dataset_fold_id=None,
                    epochs=50,
                    batch_size=128,
                    val_subset=None,
                    cutoff=None,
                    normalize_timeseries=False,
                    learning_rate=1e-3,
                    monitor='loss',
                    optimization_mode='auto',
                    compile_model=True):
    X_train, y_train, X_test, y_test, is_timeseries = load_dataset_at(dataset_id,
                                                                      fold_index=dataset_fold_id,
                                                                      normalize_timeseries=normalize_timeseries)
max_timesteps, max_nb_variables = calculate_dataset_metrics(X_train)
if max_nb_variables != MAX_NB_VARIABLES[dataset_id]:
if cutoff is None:
choice = cutoff_choice(dataset_id, max_nb_variables)
else:
assert cutoff in [
'pre', 'post'], 'Cutoff parameter value must be either "pre" or "post"'
choice = cutoff
if choice not in ['pre', 'post']:
return
else:
X_train, X_test = cutoff_sequence(
X_train, X_test, choice, dataset_id, max_nb_variables)
classes = np.unique(y_train)
le = LabelEncoder()
y_ind = le.fit_transform(y_train.ravel())
recip_freq = len(y_train) / (len(le.classes_) *
np.bincount(y_ind).astype(np.float64))
class_weight = recip_freq[le.transform(classes)]
print("Class weights : ", class_weight)
y_train = to_categorical(y_train, len(np.unique(y_train)))
y_test = to_categorical(y_test, len(np.unique(y_test)))
if is_timeseries:
factor = 1./np.cbrt(2)
else:
factor = 1./np.sqrt(2)
if dataset_fold_id is None:
weight_fn = "./weights/%s_weights.h5" % dataset_prefix
else:
weight_fn = "./weights/%s_fold_%d_weights.h5" % (
dataset_prefix, dataset_fold_id)
model_checkpoint = ModelCheckpoint(weight_fn, verbose=1, mode=optimization_mode,
monitor=monitor, save_best_only=True, save_weights_only=True)
reduce_lr = ReduceLROnPlateau(monitor=monitor, patience=100, mode=optimization_mode,
factor=factor, cooldown=0, min_lr=1e-4, verbose=2)
callback_list = [model_checkpoint, reduce_lr]
optm = Adam(lr=learning_rate)
if compile_model:
model.compile(optimizer=optm,
loss='categorical_crossentropy', metrics=['accuracy'])
if val_subset is not None:
X_test = X_test[:val_subset]
y_test = y_test[:val_subset]
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, callbacks=callback_list,
class_weight=class_weight, verbose=2, validation_data=(X_test, y_test))
def evaluate_model(model: Model, dataset_id, dataset_prefix, dataset_fold_id=None, batch_size=128, test_data_subset=None,
cutoff=None, normalize_timeseries=False):
_, _, X_test, y_test, is_timeseries = load_dataset_at(dataset_id,
fold_index=dataset_fold_id,
normalize_timeseries=normalize_timeseries)
max_timesteps, max_nb_variables = calculate_dataset_metrics(X_test)
if max_nb_variables != MAX_NB_VARIABLES[dataset_id]:
if cutoff is None:
choice = cutoff_choice(dataset_id, max_nb_variables)
else:
assert cutoff in [
'pre', 'post'], 'Cutoff parameter value must be either "pre" or "post"'
choice = cutoff
if choice not in ['pre', 'post']:
return
else:
_, X_test = cutoff_sequence(
None, X_test, choice, dataset_id, max_nb_variables)
if not is_timeseries:
X_test = pad_sequences(
X_test, maxlen=MAX_NB_VARIABLES[dataset_id], padding='post', truncating='post')
y_test = to_categorical(y_test, len(np.unique(y_test)))
optm = Adam(lr=1e-3)
model.compile(optimizer=optm, loss='categorical_crossentropy',
metrics=['accuracy'])
if dataset_fold_id is None:
weight_fn = "./weights/%s_weights.h5" % dataset_prefix
else:
weight_fn = "./weights/%s_fold_%d_weights.h5" % (
dataset_prefix, dataset_fold_id)
model.load_weights(weight_fn)
if test_data_subset is not None:
X_test = X_test[:test_data_subset]
y_test = y_test[:test_data_subset]
print("\nEvaluating : ")
loss, accuracy = model.evaluate(X_test, y_test, batch_size=batch_size)
print()
print("Final Accuracy : ", accuracy)
return accuracy, loss
def set_trainable(layer, value):
layer.trainable = value
# case: container
if hasattr(layer, 'layers'):
for l in layer.layers:
set_trainable(l, value)
# case: wrapper (which is a case not covered by the PR)
if hasattr(layer, 'layer'):
set_trainable(layer.layer, value)
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
import numpy as np
# save np.load
#np_load_old = np.load
# modify the default parameters of np.load
#np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)
# call load_data with allow_pickle implicitly set to true
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# restore np.load for future normal usage
#np.load = np_load_old
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
import numpy
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# create the model
embedding_vecor_length = 32
model = tf.keras.Sequential()
# model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
# LSTM with Dropout for sequence classification in the IMDB dataset
import numpy
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vecor_length = 32
model = tf.keras.Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
# LSTM without dropout for sequence classification in the IMDB dataset
import numpy
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vecor_length = 32
model = tf.keras.Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
# LSTM and CNN for sequence classification in the IMDB dataset
import numpy
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import MaxPooling1D
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
# Fragment (assumes n_most_common_words, emb_dim and X are defined elsewhere)
from tensorflow.keras.layers import SpatialDropout1D
model = Sequential()
model.add(Embedding(n_most_common_words, emb_dim, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.7))
model.add(LSTM(64, dropout=0.7, recurrent_dropout=0.7))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
X_test
# LSTM with Dropout for sequence classification in the msd dataset
import numpy
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# reuse the msd train/test split created earlier in the notebook:
# (X_train, y_train) and (X_test, y_test)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
```
---
---
```
import collections
from tensorflow.math import tanh  # avoid the private tensorflow.python.ops path
class RNNCell(object):
def __call__(self, inputs, state, scope=None):
raise NotImplementedError("Abstract method")
class LSTMCell(RNNCell):
"""Basic LSTM recurrent network cell.
The implementation is based on: http://arxiv.org/abs/1409.2329.
We add forget_bias (default: 1) to the biases of the forget gate in order to
reduce the scale of forgetting in the beginning of the training.
It does not allow cell clipping, a projection layer, and does not
use peep-hole connections: it is the basic baseline.
For advanced models, please use the full LSTMCell that follows.
"""
    def __init__(self, n_units, n_proj=None, forget_bias=1.0, input_size=None, activation=tanh):
        self._n_units = n_units
        self._n_proj = n_proj
        self._forget_bias = forget_bias
        self._input_size = input_size
        self._activation = activation
        if n_proj:
            self._state_size = LSTMStateTuple(n_units, n_proj)
            self._output_size = n_proj
        else:
            self._state_size = LSTMStateTuple(n_units, n_units)
            self._output_size = n_units
@property
def state_size(self):
return self._state_size
@property
def output_size(self):
        return self._output_size
def __call__(self, inputs, state, scope=None):
pass
# class LSTM(LSTM):
# def __init__(self, ):
# pass
_LSTMStateTuple = collections.namedtuple("LSTMStateTuple", ("c", "h"))
class LSTMStateTuple(_LSTMStateTuple):
"""Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
Stores two elements: `(c, h)`, in that order.
Only used when `state_is_tuple=True`.
"""
__slots__ = ()
@property
def dtype(self):
(c, h) = self
if not c.dtype == h.dtype:
raise TypeError("Inconsistent internal state: %s vs %s" %
(str(c.dtype), str(h.dtype)))
return c.dtype
x = LSTMCell(50, 20, 1.0, 128)
x.state_size
# x = LSTMCell()  # would raise TypeError: n_units is a required argument
```
# Databolt Flow
For data scientists and data engineers, d6tflow is a python library which makes building complex data science workflows easy, fast and intuitive.
https://github.com/d6t/d6tflow
## Benefits of using d6tflow
[4 Reasons Why Your Machine Learning Code is Probably Bad](https://medium.com/@citynorman/4-reasons-why-your-machine-learning-code-is-probably-bad-c291752e4953)
# Example Usage For a Machine Learning Workflow
Below is an example of a typical machine learning workflow: you retrieve data, preprocess it, train a model, and evaluate the model output.
In this example you will:
* Build a machine learning workflow made up of individual tasks
* Check task dependencies and their execution status
* Execute the model training task including dependencies
* Save intermediary task output to Parquet, pickle and in-memory
* Load task output to pandas dataframe and model object for model evaluation
* Intelligently rerun workflow after changing a preprocessing parameter
```
import d6tflow
import luigi
import sklearn, sklearn.datasets, sklearn.svm, sklearn.preprocessing, sklearn.metrics
import pandas as pd
# define workflow
class TaskGetData(d6tflow.tasks.TaskPqPandas): # save dataframe as parquet
def run(self):
iris = sklearn.datasets.load_iris()
df_train = pd.DataFrame(iris.data,columns=['feature{}'.format(i) for i in range(4)])
df_train['y'] = iris.target
self.save(df_train) # quickly save dataframe
class TaskPreprocess(d6tflow.tasks.TaskPqPandas):
do_preprocess = luigi.BoolParameter(default=True) # parameter for preprocessing yes/no
def requires(self):
return TaskGetData() # define dependency
def run(self):
df_train = self.input().load() # quickly load required data
if self.do_preprocess:
df_train.iloc[:,:-1] = sklearn.preprocessing.scale(df_train.iloc[:,:-1])
self.save(df_train)
class TaskTrain(d6tflow.tasks.TaskPickle): # save output as pickle
do_preprocess = luigi.BoolParameter(default=True)
def requires(self):
return TaskPreprocess(do_preprocess=self.do_preprocess)
def run(self):
df_train = self.input().load()
model = sklearn.svm.SVC()
model.fit(df_train.iloc[:,:-1], df_train['y'])
self.save(model)
# Check task dependencies and their execution status
d6tflow.preview(TaskTrain())
# Execute the model training task including dependencies
d6tflow.run(TaskTrain())
# Load task output to pandas dataframe and model object for model evaluation
model = TaskTrain().output().load()
df_train = TaskPreprocess().output().load()
print(sklearn.metrics.accuracy_score(df_train['y'],model.predict(df_train.iloc[:,:-1])))
# Intelligently rerun workflow after changing a preprocessing parameter
d6tflow.preview(TaskTrain(do_preprocess=False))
d6tflow.run(TaskTrain(do_preprocess=False)) # execute with new parameter
```
# Next steps: Transition code to d6tflow
See https://d6tflow.readthedocs.io/en/latest/transition.html
---
```
%load_ext autoreload
%autoreload 2
cd /Users/martin/Git/estates/src/data/gold
from rentals import load_rentals
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, Binarizer, FunctionTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.compose import make_column_transformer
cd /Users/martin/Git/estates/src/models
from utils import GroupImputer, DataFrameTransformer
rentals = load_rentals('/Users/martin/Git/estates/data/silver')
rentals.head()
rentals.columns
constant_imputer = DataFrameTransformer(transformers = [(
'constant_imputer',
SimpleImputer(strategy='constant', fill_value='ostatni'),
['district', 'disposition'])], remainder='passthrough')
zero_imputer = DataFrameTransformer(transformers = [(
'zero_imputer',
SimpleImputer(strategy='constant', fill_value=0),
['furnishing', 'elevator'])], remainder='passthrough')
mode_imputer = DataFrameTransformer(transformers = [(
'mode_imputer',
SimpleImputer(strategy='most_frequent'),
['category', 'efficiency', 'floor', 'building_type', 'building_state', 'ownership'])], remainder='passthrough')
group_imputer = DataFrameTransformer(transformers = [(
    'group_imputer',
    GroupImputer(group_cols=['disposition'], target='area_m2', metric='median'),
    ['disposition', 'area_m2'])], remainder='passthrough')
encoder = DataFrameTransformer([(
'encoder',
OneHotEncoder(),
['district', 'disposition', 'category', 'building_type', 'building_state', 'ownership'])],
remainder='passthrough'
)
imputer = Pipeline([
('constant_imputer', constant_imputer),
('mode_imputer', mode_imputer),
('zero_imputer', zero_imputer),
('group_imputer', group_imputer),
])
binarizer = DataFrameTransformer(transformers = [(
'binarizer',
FunctionTransformer(lambda distance: distance < 500),
['theatre', 'cinema', 'groceries', 'candy_shop',
'veterinary', 'train', 'pharmacist', 'atm', 'sports', 'bus', 'doctors',
'school', 'kindergarten', 'pub', 'post_office', 'restaurant',
'seven_eleven', 'playground']
)], remainder='passthrough')
preprocessor = Pipeline([
('imputer', imputer),
('binarizer', binarizer)
])
X, y = rentals.drop('price', axis=1), rentals.price
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
preprocessed = preprocessor.fit_transform(X_train)
preprocessed
```
---
# 3.1 Constants and variables in programs
In this notebook, you will learn how to use constants and variables in a robot control program.
Once again, you will be creating programs to run in the RoboLab simulator, so load the simulator by running the following code cell:
```
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
```
## 3.1.1 An introduction to constants in computer programs
Various elements of the code in the following code cell may be familiar to you from the previous notebook. Specifically, the code describes a program that is intended to cause the robot to traverse something approximating a square path in the simulator.
Download the program in the following code cell to the notebook and run it with the robot in *pen down* mode so that you can see the path it follows.
*Note that we have used the `-R` magic switch to automatically run the program as soon as it has been downloaded.*
```
%%sim_magic_preloaded --pendown --pencolor green -R
# Try to draw a square
#Go straight
# Set the left and right motors
# in a forward direction
# and run for 1 rotation
tank_drive.on_for_rotations(SpeedPercent(40),
SpeedPercent(40), 1)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(-100,
SpeedPercent(40), 1.6)
#Go straight
# Set the left and right motors
# in a forward direction
# and run for 1 rotation
tank_drive.on_for_rotations(SpeedPercent(40),
SpeedPercent(40), 1)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(-100,
SpeedPercent(40), 1.6)
#Go straight
# Set the left and right motors
# in a forward direction
# and run for 1 rotation
tank_drive.on_for_rotations(SpeedPercent(40),
SpeedPercent(40), 1)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(-100,
SpeedPercent(40), 1.6)
#Go straight
# Set the left and right motors
# in a forward direction
# and run for 1 rotation
tank_drive.on_for_rotations(SpeedPercent(40),
SpeedPercent(40), 1)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(-100,
SpeedPercent(40), 1.6)
```
Within the program, the explicit number `1.6` gives the number of rotations used when turning the robot. Several other numerical values are also evident; for example, the steering setting (`-100`) and the various speeds (`40`). These are all examples of a *literal value*, that is, values (numbers in this case) that are provided explicitly in the program at the point where they are referenced (which is to say, at the point in the program where they are meaningfully used).
When writing computer programs it is bad practice to litter them with literal values like these. One reason is that the programmer may easily forget what the numbers mean, and if another programmer has to maintain the program, they will find it very hard to do so. Another reason is that using literal values can be inflexible, particularly if you need to use the same number at several points in the program, as we have done in the above example.
You will see a better approach to referring to numbers in the next section.
### 3.1.2 Question — Interpreting what you read
In the program above, there are multiple occurrences of the literally stated number `40`. Do they all mean the same thing?
*Record your answer here before revealing my answer.*
#### Example observations
*Click the arrow in the sidebar or run this cell to reveal my answer.*
To a certain extent, the various occurrences of the number `40` do mean the same thing because they are all motor speed percentage values.
However, in another sense, they are *not* the same at all: four of them refer to the speed of the left motor when driving forwards, four to the right motor speed for the same command, and four of them relate to the speed of both motors during the turn.
### 3.1.3 Literals can make program updates and maintenance difficult
Suppose you want the robot to turn more slowly than it currently does to see if this affects the rotation count value you need to set to turn the robot through a right angle.
To do this, you would have to change the motor speed in each of the four turn instructions. You might then have to modify each of the four turn rotation count values so the robot itself continues to turn through roughly ninety degrees.
And then suppose you wanted to see if doing the turn *faster* rather than slower made the robot more or less accurate when trying to set the turn rotation value.
It could be a lot of work, couldn’t it? And possibly prone to error, making all those changes. So how might we improve things?
### 3.1.4 Optional activity
Try changing the turn speed in the program to see if it makes any difference to the precision with which the robot turns through ninety degrees. If it does, try setting the turn rotation count so that the robot draws something close to a square, if not an exact square, once again.
## 3.2 Working with constants and variables
If you tried changing the motor speeds to a reduced level in the turn commands, then you may have found that the robot became more controllable. And you probably also discovered that changing each numerical value individually can be quite time-consuming. How much better would it be if they could all be changed at the same time? This can be achieved by *declaring* a constant in your program.
In some programming languages, *constants* are named items that are assigned a particular value once and once only in a program, and this value remains unchanged as the program executes. *Variables*, on the other hand, are named items whose value may be set at the start of a program but whose value may also change as the program executes.
*By convention*, in many Python programs, if we want to refer to an item with a value that is intended to be a fixed, *constant* value as the program runs, then we create a *variable* but with an UPPER-CASE name.
### 3.2.1 Using constants in RoboLab programs
In the following code cell, I have replaced the literal values ‘inside’ the program with ‘constants’ that are defined at the start of the program.
If you download and run the program, then you should find that it behaves as before.
```
%%sim_magic_preloaded -b Empty_Map -R -p -C
# Try to draw a square
STEERING = -100
TURN_ROTATIONS = 1.6
TURN_SPEED = 40
STRAIGHT_SPEED_PC = SpeedPercent(40)
STRAIGHT_ROTATIONS = 1
#Go straight
# Set the left and right motors in a
# forward direction and run for
# STRAIGHT_ROTATIONS number of rotations
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
STRAIGHT_ROTATIONS)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED), TURN_ROTATIONS)
#Go straight
# Set the left and right motors in a
# forward direction and run for
# STRAIGHT_ROTATIONS number of rotations
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
STRAIGHT_ROTATIONS)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED), TURN_ROTATIONS)
#Go straight
# Set the left and right motors in a
# forward direction and run for
# STRAIGHT_ROTATIONS number of rotations
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
STRAIGHT_ROTATIONS)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED), TURN_ROTATIONS)
#Go straight
# Set the left and right motors in a
# forward direction and run for
# STRAIGHT_ROTATIONS number of rotations
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
STRAIGHT_ROTATIONS)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations
# *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED), TURN_ROTATIONS)
```
Note that I have used two slightly different approaches to define the turn speed and the straight speed. In the case of the turn speed, I have defined `TURN_SPEED = 40`, setting the constant to a value that is passed to the `SpeedPercent()` function. For the straight speed, `STRAIGHT_SPEED_PC = SpeedPercent(40)`, I used a slightly different naming convention and defined the constant as a `SpeedPercent()` value directly.
### 3.2.2 Activity – Changing a constant to tune a program
When the program is executed in the simulator, the value of the constant in the code is used in the same way as the literal value.
However, if we want to try running the program using different robot speeds or turn rotation values, we can now do so very straightforwardly: we simply change the required values in a single place, once for each constant value, at the top of the program.
Modify the above program using different values for the constants, then download and run the program in the simulator. How much easier is it to explore different values now?
*Record your own observations about how the use of named constants influences the ease with which you can experiment with program settings here.*
### 3.2.3 Activity – RoboLab challenge
In the simulator, load the *Square* background by selecting it from the drop-down list at the top of the simulator.
The challenge is to get the robot to go round the outside of the solid grey square and stay within the outer square boundary, without the robot touching either the inner square or the outside square boundary, in the shortest time possible.
*Hint: you may find it useful to use the previous program for traversing a square, or create your own program using a for loop and one or more constants.*
```
%%sim_magic_preloaded -b Square -pC -x 550 -y 300 -a -90
# YOUR CODE HERE
```
#### Example solution
*Click on the arrow in the sidebar or run this cell to reveal an example solution.*
I tried to simplify the program by using a `for` loop to generate each side and turn. I used constants to define the motor speeds and the number of wheel rotations required when driving in a straight line for the edges, and during the turns.
```
%%sim_magic_preloaded -b Square -pCR -x 550 -y 300 -a -90
SIDES = 4
# Try to draw a square
STEERING = -100
TURN_ROTATIONS = 1.6
TURN_SPEED = 40
STRAIGHT_SPEED_PC = SpeedPercent(40)
STRAIGHT_ROTATIONS = 6
for side in range(SIDES):
    # Go straight
    # Set the left and right motors in a forward direction
    # and run for STRAIGHT_ROTATIONS number of rotations
    tank_drive.on_for_rotations(STRAIGHT_SPEED_PC, STRAIGHT_SPEED_PC, STRAIGHT_ROTATIONS)

    # Turn
    # Set the robot to turn on the spot
    # and run for a certain number of rotations *of the wheels*
    tank_turn.on_for_rotations(STEERING, SpeedPercent(TURN_SPEED), TURN_ROTATIONS)
```
### 3.2.4 Optimising parameter values
If you have not already done so, try adjusting the values of `STRAIGHT_ROTATIONS` so that the robot goes as close as possible to the grey square without touching it. Don’t spend too long on this: the simulator might not provide you with the resolution you are reaching for.
## 3.3 Working with variables
How many coins have you got on you? At the moment I have 12 coins in my pocket. (You might say: I have four tappable cards or phone payment devices!)
I could write:
`number_of_coins_in_my_pocket = 12`
If I buy lunch using five of these coins, then there are only seven left. So I could write:
`number_of_coins_in_my_pocket = 7`
At any time the number of coins in my pocket may vary. The name `number_of_coins_in_my_pocket` is an example of what is called a *variable* when used in computer programs.
The value of a variable can change as the program executes. Contrast this with a constant, which is intended to remain unchanged while the program executes.
### 3.3.1 Constant or variable?
Which of the following are intended as constants (that is, things that aren’t intended to change) and which are variables (that is, they are quite likely to change)? Stylistically, how might we represent constants and variables in a Python program so that we can distinguish between them?
`number_of_coins_in_my_pocket`
`the_number_of_pennies_in_a_pound`
`the_diameter_of_robots_wheels`
`the_distance_robot_travels_in_a_second`
Write your answers here:
`number_of_coins_in_my_pocket`: __variable or constant?__
`the_number_of_pennies_in_a_pound`: __variable or constant?__
`the_diameter_of_robots_wheels`: __variable or constant?__
`the_distance_robot_travels_in_a_second`: __variable or constant?__
My thoughts on how we might stylistically distinguish between them: ...
#### Example answer
*Click the arrow in the sidebar or run this cell to reveal the example answer.*
The amount of money in my pocket varies all the time, so if this were used in a computer program it would be a variable.
The number of pennies in a pound is always a hundred, so this would be a constant.
A robot’s wheels might be 50 mm in diameter and this would be a constant. (A different robot might have different size wheels, but their size will not vary while the program executes.)
You may have wondered whether the distance a robot travels in a second is best represented as a constant or a variable. For a robot that could speed up or slow down its drive motors or change gear, and where this value may be used to report the speed of the robot, this would certainly be a variable. But for a simple robot (like the simulated one) with a fixed gear drive travelling at a constant speed, we might use the value to *define* a fixed property of the robot, in which case it would make more sense to treat it as a constant, albeit one that we might wish to tweak or modify as we did in the programs above.
Stylistically, by convention, we use upper-case characters to identify constants and lower-case characters to represent variables. So for example, we might define the constant values `PENNIES_IN_A_POUND` or `WHEEL_DIAMETER`.
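As a small, self-contained sketch of this convention in plain Python (separate from the robot programs, and using the example names discussed above):

```python
# Constants: upper-case names signal values that are set once and not changed
PENNIES_IN_A_POUND = 100
WHEEL_DIAMETER = 50  # mm

# Variables: lower-case names for values that may change as the program runs
number_of_coins_in_my_pocket = 12
number_of_coins_in_my_pocket = number_of_coins_in_my_pocket - 5  # buy lunch
print(number_of_coins_in_my_pocket)  # prints 7
```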
### 3.3.2 Using variables
Anything that will not change during the execution of the program should be defined as a constant, whereas anything that may change should be viewed as a variable.
Variables are an essential ingredient of computer programs: they allow the computer to store the value of things at a given instant of time.
Using the `nbtutor` extension, we can keep track of how specific variables can change their value as a program executes.
First, load in the `nbtutor` extension by running the following code cell:
```
%reload_ext nbtutor
```
*You may find you need to expand the width of this notebook column to see the `nbtutor` display correctly. Click on the right hand edge of the notebook column to drag it and expand or reduce the column width.*
Now let’s use the `nbtutor` visualisation to follow what happens to the values of the `counter` and `previous` variables as the following program executes.
Run the code cell below, then step through each line of code one line at a time using the `nbtutor` *Next >* button.
As you do so, observe how the previously executed line, identified by the green arrow, modifies the value of the variables. Also note how the program flow progresses by comparing the location of the previously executed line with the next line (red arrow).
```
%%nbtutor --reset --force
counter = 0
while counter < 5:
    print("value", counter)
    previous = counter
    counter += 1

counter, previous
```
*The `%%nbtutor` magic opens a toolbar above each cell in the notebook. When you have completed an `nbtutor` activity, you can close the cell toolbar from the notebook `View > Cell Toolbar` menu: selecting the `None` option will hide the cell toolbar.*
*Note that after running an `%%nbtutor` activity, the ability to use simulator magic to download new programs to the `nbev3devsim` simulator seems to be broken.*
## 3.5 Summary
In this notebook, you have seen how we can use constants and variables in a program to take literal values out of the body of a program and replace them by meaningfully named terms defined at the top of the program. This can improve readability of a program, as well as making it easier to maintain and update.
Although Python doesn’t really do constants, by convention we can refer to terms we want to treat as constant values by using upper-case characters when naming them.
When a program executes, the values of variables may be updated by program statements.
In the next notebook, you will explore another way of using variables in a robot control program by associating them with things like particular sensors and using them to provide a means of referring to, and accessing, current sensor values.
---
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
print('total train dataset', mnist.train.images.shape[0])
print('total test dataset', mnist.test.images.shape[0])
print('dimension of picture', mnist.train.images.shape[1])
print('total unique classes', np.unique(np.argmax(mnist.train.labels,axis=1)))
class RNN:
    def __init__(self, input_size, output_size, num_layer, size_layer, learning_rate):
        self.X = tf.placeholder(tf.float32, (None, None, input_size))
        self.Y = tf.placeholder(tf.float32, (None, output_size))

        def rnn_cell():
            return tf.nn.rnn_cell.BasicRNNCell(size_layer)

        self.rnn_cells = tf.nn.rnn_cell.MultiRNNCell([rnn_cell() for _ in range(num_layer)])
        outputs, states = tf.nn.dynamic_rnn(self.rnn_cells, self.X, dtype=tf.float32)
        w1 = tf.Variable(tf.random_normal([size_layer, size_layer]))
        w2 = tf.Variable(tf.random_normal([size_layer, output_size]))
        feed = tf.nn.relu(tf.matmul(outputs[:, -1], w1))
        self.logits = tf.matmul(feed, w2)
        self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=self.Y, logits=self.logits))
        self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
SIZE = 64
EPOCH = 10
BATCH_SIZE = 128
LEARNING_RATE = 0.001
NUM_LAYER = 2
INPUT_SIZE = int(np.sqrt(mnist.train.images.shape[1]))
tf.reset_default_graph()
sess = tf.InteractiveSession()
# Note the constructor's argument order: (input_size, output_size, num_layer, size_layer, learning_rate)
model = RNN(INPUT_SIZE, np.unique(np.argmax(mnist.train.labels, axis=1)).shape[0], NUM_LAYER, SIZE, LEARNING_RATE)
sess.run(tf.global_variables_initializer())
LOSS, ACCURACY, ACCURACY_TEST = [], [], []
for i in range(EPOCH):
    total_loss, total_acc = 0, 0
    for k in range(0, (mnist.train.images.shape[0] // BATCH_SIZE) * BATCH_SIZE, BATCH_SIZE):
        # read_data_sets already scales pixel values to [0, 1], so no further normalization is needed
        batch_x = mnist.train.images[k:k+BATCH_SIZE, :].reshape((-1, INPUT_SIZE, INPUT_SIZE))
        batch_y = mnist.train.labels[k:k+BATCH_SIZE, :]
        logits, loss, _ = sess.run([model.logits, model.cost, model.optimizer], feed_dict={model.X: batch_x, model.Y: batch_y})
        acc = np.mean(np.argmax(logits, axis=1) == np.argmax(batch_y, axis=1))
        total_loss += loss
        total_acc += acc
    total_loss /= (mnist.train.images.shape[0] // BATCH_SIZE)
    total_acc /= (mnist.train.images.shape[0] // BATCH_SIZE)
    LOSS.append(total_loss)
    ACCURACY.append(total_acc)
    total_acc = 0
    for k in range(0, (mnist.test.images.shape[0] // BATCH_SIZE) * BATCH_SIZE, BATCH_SIZE):
        batch_x = mnist.test.images[k:k+BATCH_SIZE, :].reshape((-1, INPUT_SIZE, INPUT_SIZE))
        batch_y = mnist.test.labels[k:k+BATCH_SIZE, :]
        logits = sess.run(model.logits, feed_dict={model.X: batch_x})
        acc = np.mean(np.argmax(logits, axis=1) == np.argmax(batch_y, axis=1))
        total_acc += acc
    total_acc /= (mnist.test.images.shape[0] // BATCH_SIZE)
    ACCURACY_TEST.append(total_acc)
    print('epoch %d, loss %f, training accuracy %f, testing accuracy %f' % (i+1, LOSS[-1], ACCURACY[-1], ACCURACY_TEST[-1]))
plt.figure(figsize=(10,5))
plt.plot(ACCURACY, label='training accuracy')
plt.plot(ACCURACY_TEST, label='testing accuracy')
plt.legend()
plt.show()
```
---
## Generating partial coherence phase screens for modeling rotating diffusers
```
%pylab
%matplotlib inline
import SimMLA.fftpack as simfft
import SimMLA.grids as grids
import SimMLA.fields as fields
from numpy.fft import fft, ifft, fftshift, ifftshift
from scipy.integrate import simps
from scipy.interpolate import interp1d
```
I am simulating a partially spatially coherent laser beam using a technique described in [Xiao and Voelz, "Wave optics simulation approach for partial spatially coherent beams," Opt. Express 14, 6986-6992 (2006)](https://www.osapublishing.org/oe/abstract.cfm?uri=oe-14-16-6986) and Chapter 9 of [Computational Fourier Optics: A MATLAB Tutorial](http://spie.org/Publications/Book/858456), which is also by Voelz. This workbook demonstrates how to generate the partially coherent beam and propagate it through the dual-MLA system.
Here I am breaking from Xiao and Voelz's implementation by decoupling the phase screen parameters \\( \sigma\_f \\) and \\( \sigma\_r \\).
**Note: This notebook contains LaTeX that may not be visible when viewed from GitHub. Try downloading it and opening it with the Jupyter Notebook application.**
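The `fields.diffuserMask` implementation itself is not shown in this notebook, but the idea behind such a screen can be sketched in a few lines of NumPy: filter Gaussian white noise with a Gaussian kernel of width \\( \sigma\_f \\), then rescale the result so the phase standard deviation equals \\( \sigma\_r \\). The function below is my own illustrative version, not SimMLA's API, and its normalization is an assumption:

```python
import numpy as np

def phase_screen(x, sigma_f, sigma_r, rng):
    """1D random phase screen: Gaussian white noise smoothed by a Gaussian
    kernel of correlation length sigma_f, rescaled so that the standard
    deviation of the phase is sigma_r (radians)."""
    noise = rng.standard_normal(x.size)
    kernel = np.exp(-x**2 / (2 * sigma_f**2))
    # FFT-based convolution; periodic boundaries are harmless when the
    # grid is much wider than sigma_f
    screen = np.real(np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel))))
    return screen * (sigma_r / screen.std())  # enforce the requested phase std

rng = np.random.default_rng(0)
x = np.linspace(-2500, 2500, 10001)            # microns
phi = phase_screen(x, sigma_f=10, sigma_r=1, rng=rng)
mask = np.exp(1j * phi)                        # multiply the field by this at the diffuser plane
```

A new screen drawn for each realization models one "frame" of the rotating diffuser; averaging the irradiance over many screens, as done later in this notebook, mimics the time average over the rotation.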
## Build the coordinate system and dual-MLA's
```
numLenslets = 21 # Must be odd; corresponds to the number of lenslets in one dimension
lensletSize = 500 # microns
focalLength = 13700 # microns, lenslet focal lengths
fc = 50000 # microns, collimating lens focal length
dR = -10000 # microns, distance of diffuser from telescope focus
L1 = 500000 # microns, distance between collimating lens and first MLA
L2 = 200000 # microns, distance between second MLA and objective BFP
wavelength = 0.642 # microns
subgridSize = 10001 # Number of grid (or lattice) sites for a single lenslet
physicalSize = numLenslets * lensletSize # The full extent of the MLA
# dim = 1 makes the grid 1D
collGrid = grids.Grid(20001, 5000, wavelength, fc, dim = 1)
grid = grids.GridArray(numLenslets, subgridSize, physicalSize, wavelength, focalLength, dim = 1, zeroPad = 3)
```
Now, the output from the telescope + diffuser may be generated by multiplying the focused Gaussian beam with a random phase mask from the Voelz code.
The input beam has a 4 mm waist (radius), but is focused by a telescope whose first lens has a focal length of 100 mm = 1e5 microns. [Using a Gaussian beam calculator](http://www.calctool.org/CALC/phys/optics/f_NA), this means that the focused beam has a waist diameter of \\( 2w = 10.2 \, \mu m \\) and a beam standard deviation of \\( \frac{5.1 \mu m}{\sqrt{2}} = 3.6 \mu m \\). The measured beam standard deviation in the setup is in reality about \\( 6.0 \, \mu m \\) due to a slight astigmatism in the beam and spherical aberration. (The telescope lenses are simple plano-convex lenses.)
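The quoted waist numbers follow from the standard Gaussian focusing relation \\( w' = \lambda f / (\pi w\_0) \\); as a quick numerical check (values taken from the text above):

```python
import math

wavelength = 0.642   # microns
w0 = 4000            # microns, input beam waist (radius)
f = 1e5              # microns, focal length of the first telescope lens

w_focus = wavelength * f / (math.pi * w0)  # focused waist radius, ~5.1 microns
beam_std = w_focus / math.sqrt(2)          # Gaussian field standard deviation

print(round(2 * w_focus, 1))  # waist diameter: 10.2
print(round(beam_std, 1))     # 3.6
```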
After multiplying the beam by the phase screen, the field is Fourier transformed by the second telescope lens with \\( f = 50 \, mm \\) to produce the field in the focal plane of the collimating lens. The following steps are then taken to get the field on the sample:
1. The field from the collimating lens is propagated a distance \\( L_1 \\) to the first MLA.
2. The field immediately after the second MLA is computed via a spatially-parallel Fourier transform operation.
3. This field is propagated a distance \\( L_2 \\) to the back focal plane of the objective.
4. The field is Fourier transformed to produce the field on the sample.
```
Z0 = 376.73 # Impedance of free space, Ohms
power = 100 # mW
beamStd = 6 # microns
sigma_f = 10 # microns, diffuser correlation length
sigma_r = 1 # variance of the random phase
fieldAmp = np.sqrt(power / 1000 * Z0 / beamStd / np.sqrt(np.pi)) # Factor of 1000 converts from mW to W
# The diffuser sits 'dR' microns from the focus
beam = lambda x: fields.GaussianBeamDefocused(fieldAmp, beamStd, wavelength, dR)(x) \
* fields.diffuserMask(sigma_f, sigma_r, collGrid)(x)
# Sample the beam at the diffuser
beamSample = beam(collGrid.px)
# Propagate the sample back to the focal plane of the telescope
beamSample = simfft.fftPropagate(beamSample, collGrid, -dR)
plt.plot(collGrid.px, np.abs(beamSample), linewidth = 2)
plt.xlim((-1000,1000))
plt.xlabel(r'x-position')
plt.ylabel(r'Field amplitude')
plt.grid(True)
plt.show()
plt.plot(collGrid.px, np.angle(beamSample), linewidth = 2, label ='Phase')
plt.plot(collGrid.px, np.abs(beamSample) / np.max(np.abs(beamSample)) * np.angle(beamSample), label = 'Phase with Gaussian envelope')
plt.xlim((-1000,1000))
plt.ylim((-4, 4))
plt.xlabel(r'x-position')
plt.ylabel(r'Field phase, rad')
plt.grid(True)
plt.legend()
plt.show()
```
## Create the input field to the MLA's
The MLA inputs are the Fourier transform of this field when the diffuser is in the focal plane of the collimating lens.
```
scalingFactor = collGrid.physicalSize / (collGrid.gridSize - 1) / np.sqrt(collGrid.wavelength * collGrid.focalLength)
inputField = scalingFactor * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(beamSample)))
plt.plot(collGrid.pX, np.abs(inputField))
plt.xlim((-20000, 20000))
plt.grid(True)
plt.show()
# Interpolate this field onto the MLA grid
mag = np.abs(inputField)
ang = np.angle(inputField)
inputMag = interp1d(collGrid.pX,
mag,
kind = 'nearest',
bounds_error = False,
fill_value = 0.0)
inputAng = interp1d(collGrid.pX,
ang,
kind = 'nearest',
bounds_error = False,
fill_value = 0.0)
plt.plot(grid.px, np.abs(inputMag(grid.px) * np.exp(1j * inputAng(grid.px))))
plt.xlim((-5000, 5000))
plt.grid(True)
plt.show()
field2 = lambda x: inputMag(x) * np.exp(1j * inputAng(x))
interpMag, interpAng = simfft.fftSubgrid(field2, grid)
# Plot the field behind the second MLA center lenslet
plt.plot(grid.pX, np.abs(interpMag[10](grid.pX) * np.exp(1j * interpAng[10](grid.pX))))
plt.xlim((-500, 500))
plt.xlabel('x-position')
plt.ylabel('Field amplitude')
plt.grid(True)
plt.show()
```
## Propagate this field through the dual MLA illuminator
The rest of this code is exactly the same as before: propagate the partially coherent beam through the illuminator and observe the irradiance pattern on the sample.
## Compute many realizations of the diffuser
```
fObj = 3300 # microns
bfpDiam = 2 * 1.4 * fObj # microns, BFP diameter, 2 * NA * f_OBJ
# Grid for interpolating the field after the second MLA
newGridSize = subgridSize * numLenslets # total number of grid points spanning the MLA
newGrid = grids.Grid(5*newGridSize, 5*physicalSize, wavelength, fObj, dim = 1)
%%time
nIter = 100

#sigma_r = np.array([0.1, 0.3, 1, 3])
sigma_r = np.array([1])

# Create multiple sample irradiance patterns for various values of sigma_r
for sigR in sigma_r:
    # New phase mask; the diffuser sits 'dR' microns from the focus
    beam = lambda x: fields.GaussianBeamDefocused(fieldAmp, beamStd, wavelength, dR)(x) \
                     * fields.diffuserMask(sigma_f, sigR, collGrid)(x)

    avgIrrad = np.zeros(newGrid.px.size, dtype=np.float128)

    for realization in range(nIter):
        print('sigma_r: {0:.2f}'.format(sigR))
        print('Realization number: {0:d}'.format(realization))

        # Propagate the field from the diffuser to the telescope focus
        beamSample = beam(collGrid.px)
        beamSample = simfft.fftPropagate(beamSample, collGrid, -dR)

        # Compute the field in the focal plane of the collimating lens
        scalingFactor = collGrid.physicalSize / (collGrid.gridSize - 1) / np.sqrt(collGrid.wavelength * collGrid.focalLength)
        afterColl = scalingFactor * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(beamSample)))

        # Interpolate the input onto the new grid;
        # Propagate it to the first MLA at distance L1 away from the focal plane of the collimating lens
        inputMag = interp1d(collGrid.pX,
                            np.abs(afterColl),
                            kind = 'nearest',
                            bounds_error = False,
                            fill_value = 0.0)
        inputAng = interp1d(collGrid.pX,
                            np.angle(afterColl),
                            kind = 'nearest',
                            bounds_error = False,
                            fill_value = 0.0)
        inputField = lambda x: simfft.fftPropagate(inputMag(x) * np.exp(1j * inputAng(x)), grid, L1)

        # Compute the field magnitude and phase for each individual lenslet just beyond the second MLA
        interpMag, interpPhase = simfft.fftSubgrid(inputField, grid)

        # For each interpolated magnitude and phase corresponding to a lenslet
        # 1) Compute the full complex field
        # 2) Sum it with the other complex fields
        field = np.zeros(newGrid.gridSize)
        for currMag, currPhase in zip(interpMag, interpPhase):
            fieldMag = currMag(newGrid.px)
            fieldPhase = currPhase(newGrid.px)
            currField = fieldMag * np.exp(1j * fieldPhase)
            field = field + currField

        # Propagate the field to the objective's BFP and truncate the region outside the aperture
        field = simfft.fftPropagate(field, newGrid, L2)
        field[np.logical_or(newGrid.px < -bfpDiam / 2, newGrid.px > bfpDiam / 2)] = 0.0

        # Propagate the truncated field in the BFP to the sample
        scalingFactor = newGrid.physicalSize / (newGrid.gridSize - 1) / np.sqrt(newGrid.wavelength * newGrid.focalLength)
        F = scalingFactor * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(field)))

        # Compute the irradiance on the sample
        Irrad = np.abs(F)**2 / Z0 * 1000

        # Save the results for this realization
        avgIrrad = avgIrrad + Irrad

    # Average irradiance
    avgIrrad = avgIrrad / nIter

    # Save the results
    # The folder 'Rotating Diffuser Calibration' should already exist.
    #np.save('Rotating Diffuser Calibration/x-coords_sigR_{0:.3f}.npy'.format(sigR), newGrid.pX)
    #np.save('Rotating Diffuser Calibration/avgIrrad_sigR_{0:.3f}.npy'.format(sigR), avgIrrad)

plt.plot(newGrid.pX, avgIrrad)
plt.xlim((-100,100))
plt.xlabel(r'Sample plane x-position, $\mu m$')
plt.ylabel(r'Irradiance, $mW / \mu m$')
plt.grid(True)
plt.show()

# Check the output power
powerOut = simps(avgIrrad, newGrid.pX)
print('The output power is {0:.2f} mW'.format(powerOut))
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# The CSV has no header row, so let's add column names:
# columns 0 to 59 will be named "Feature 0", ..., "Feature 59",
# and the target column will be named "Class".
# First, build a list of the column names
new_column_names = []
for i in range(60):
    new_column_names.append(f"Feature {i}")
new_column_names.append("Class")

train = pd.read_csv('../input/sonaralldata/sonar.all-data.csv', header=None, names=new_column_names)
train.head()

# Replacing M with 0 and R with 1
train = train.replace({'Class': {'M': 0,
                                 'R': 1}})
x = train.drop("Class",axis=1)
y = train['Class']
y.value_counts()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x = scaler.fit_transform(x)
# splitting the dataset into train and test dataset with 4:1 ratio (80%-20%)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = .2, random_state = 26,stratify=y)
```
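Because the split above passes `stratify=y`, the M/R class ratio is preserved in both subsets. A quick sanity check could look like the sketch below (dummy data — the class counts 111/97 assume the standard sonar dataset):

```python
# Hypothetical check (not in the original notebook): verify that a
# stratified split preserves the class ratio in both subsets.
import numpy as np
from sklearn.model_selection import train_test_split

y_demo = np.array([0] * 111 + [1] * 97)          # same class counts as sonar.all-data
x_demo = np.arange(len(y_demo)).reshape(-1, 1)   # dummy single feature

x_tr, x_te, y_tr, y_te = train_test_split(
    x_demo, y_demo, test_size=0.2, random_state=26, stratify=y_demo)

# Class proportions should match between the two subsets (up to rounding)
print(np.bincount(y_tr) / len(y_tr))
print(np.bincount(y_te) / len(y_te))
```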
## Training on different algorithms
### Logistic Regression
```
from sklearn.linear_model import LogisticRegression
# Create instance of model
lreg = LogisticRegression()
# Pass training data into model
lreg.fit(x_train, y_train)
# Getting prediciton on x_test
y_pred_lreg = lreg.predict(x_test)
# Scoring our model
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score,f1_score, precision_score, recall_score
# Confusion Matrix
print('Logistic Regression')
print('\n')
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred_lreg))
print('--'*50)
# Classification Report
print('Classification Report')
print(classification_report(y_test,y_pred_lreg))
# Accuracy of our model
print('--'*50)
logreg_accuracy = round(accuracy_score(y_test, y_pred_lreg) * 100,8)
print('Accuracy = ', logreg_accuracy,'%')
```
**We have an accuracy of 99.90%**
### LINEAR SUPPORT VECTOR CLASSIFIER
```
%%time
from sklearn.svm import SVC
# Instantiate the model
svc = SVC()
# Fit the model on training data
svc.fit(x_train, y_train)
# Getting the predictions for x_test
y_pred_svc = svc.predict(x_test)
print('Support Vector Classifier')
print('\n')
# Confusion matrix
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred_svc))
print('--'*50)
# Classification report
print('Classification Report')
print(classification_report(y_test, y_pred_svc))
# Accuracy
print('--'*50)
svc_accuracy = round(accuracy_score(y_test, y_pred_svc)*100,8)
print('Accuracy = ', svc_accuracy,'%')
```
### K-NEAREST NEIGHBORS
```
%%time
from sklearn.neighbors import KNeighborsClassifier
# in KNN we need to select a value for the number of nearest neighbours; for now let's use a for loop. If accuracy
# is better than the other models, then we will search for the optimal parameter
error_rate = []
for i in range(2, 15):
    knn = KNeighborsClassifier(n_neighbors = i)
    knn.fit(x_train, y_train)
    pred_i = knn.predict(x_test)
    error_rate.append(np.mean(pred_i != y_test))
# Plot error rate
plt.figure(figsize = (10,6))
plt.plot(range(2,15), error_rate, color = 'blue', linestyle = '--', marker = 'o',
         markerfacecolor = 'green', markersize = 10)
plt.title('Error Rate vs K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
plt.show()
# now using above data to train with n_neighbors having least error rate
n_value = 0
min_error = float('inf')
for idx, error in enumerate(error_rate):
    if min_error > error:
        min_error = error
        n_value = idx + 2
knn = KNeighborsClassifier(n_neighbors = n_value)
# Fit new KNN on training data
knn.fit(x_train, y_train)
# Predict KNN
y_pred_knn_op = knn.predict(x_test)
print('K-Nearest Neighbors(KNN)')
print('k =',n_value)
# Confusion Matrix
print('\n')
print(confusion_matrix(y_test, y_pred_knn_op))
# Classification Report
print('--'*50)
print('Classification Report', classification_report(y_test, y_pred_knn_op))
# Accuracy
print('--'*50)
knn_op_accuracy =round(accuracy_score(y_test, y_pred_knn_op)*100,8)
print('Accuracy = ',knn_op_accuracy,'%')
```
### RANDOM FOREST
```
from sklearn.ensemble import RandomForestClassifier
# Create model object
rfc = RandomForestClassifier(n_estimators = 250,n_jobs=-1)
# Fit model to training data
rfc.fit(x_train,y_train)
y_pred_rfc = rfc.predict(x_test)
print('Random Forest')
# Confusion matrix
print('\n')
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred_rfc))
# Classification report
print('--'*50)
print('Classification Report')
print(classification_report(y_test, y_pred_rfc))
# Accuracy
print('--'*50)
rf_accuracy = round(accuracy_score(y_test, y_pred_rfc)*100,8)
print('Accuracy = ', rf_accuracy,'%')
```
### XGBoost Classifier
```
from xgboost import XGBClassifier
# Create model object
xgb = XGBClassifier(n_jobs=-1)
# Fit model to training data
xgb.fit(x_train, y_train)
y_pred_xgb = xgb.predict(x_test)
print('XGBoost Classifier')
# Confusion matrix
print('\n')
print('Confusion Matrix')
print(confusion_matrix(y_test, y_pred_xgb))
# Classification report
print('--'*50)
print('Classification Report')
print(classification_report(y_test, y_pred_xgb))
# Accuracy
print('--'*50)
xgb_accuracy = round(accuracy_score(y_test, y_pred_xgb)*100,8)
print('Accuracy = ', xgb_accuracy,'%')
models = pd.DataFrame({
    'Model': ['Logistic Regression', 'Linear SVC',
              'K-Nearest Neighbors', 'Random Forest', 'XGBoost Classifier'],
    'Score': [logreg_accuracy, svc_accuracy,
              knn_op_accuracy, rf_accuracy, xgb_accuracy]})
models.sort_values(by='Score', ascending=False)
```
```
import argparse
import csv
import matplotlib.pyplot as plt
import glob
import os
import json
import seaborn as sns
import pandas as pd
import mpld3
from IPython import display
from process_log import Tags, Log, Epochs
leonhard_directory = "../logs/naive_scaling_Nov_15_073912"
tags = Tags("tags.hpp")
all_names = os.listdir(leonhard_directory)
# Validate JSON
json_file = list(filter(lambda x: ".json" in x, all_names))
if len(json_file) == 0:
    print("Could not find JSON file in directory {}".format(leonhard_directory))
    exit(1)
if len(json_file) > 1:
    print("Found multiple JSON files ({}) in the directory {}".format(json_file, leonhard_directory))
    exit(1)
json_file = json_file[0]

with open(os.path.join(leonhard_directory, json_file)) as file:
    json_file = json.load(file)
repetitions = json_file["repetitions"]
all_names = list(filter(lambda x: os.path.isdir(os.path.join(leonhard_directory, x)), all_names))
unique_names = list(set(map(lambda x: "_".join(x.split("_")[:-1]), all_names)))
unique_names
df = None
for run_name in unique_names:
    n = run_name.split("_")[0]
    data = run_name.split("_")[1]
    for repetition in range(repetitions):
        folder_name = run_name + "_" + str(repetition)
        folder_contents = os.listdir(os.path.join(leonhard_directory, folder_name))
        folder_contents = list(filter(lambda x: ".bin" in x, folder_contents))
        logs = [Log(os.path.join(leonhard_directory, folder_name, path), tags) for path in folder_contents]
        for filename in folder_contents:
            log = Log(os.path.join(leonhard_directory, folder_name, filename), tags)
            rank = int(filename.split("_")[-2])
            epochs = Epochs(log, tags)
            if df is None:
                df = pd.DataFrame(epochs.get_fitness_vs_time_dataframe(), columns=["fitness", "wall clock time", "epoch"])
                df["rank"] = rank
                df["rep"] = repetition
                df["n"] = n
                df["data"] = data
            else:
                df2 = pd.DataFrame(epochs.get_fitness_vs_time_dataframe(), columns=["fitness", "wall clock time", "epoch"])
                df2["rank"] = rank
                df2["rep"] = repetition
                df2["n"] = n
                df2["data"] = data
                df = df.append(df2, ignore_index=True)
df.to_csv("naive_scaling_fitness_time.gz", compression="gzip")
df = pd.read_csv("naive_scaling_fitness_time.gz")
df = df.drop(columns="Unnamed: 0")
df
# Take out rank variation
new_df = df.groupby(["epoch", "rep", "n", "data"], as_index=False).agg({"fitness" : "min", "wall clock time" : "max"})
new_df = new_df.drop(columns="wall clock time")
new_df
fig, ax = plt.subplots()
sns.lineplot(ax=ax, x="epoch", y="fitness", hue="n", legend='full', data=new_df[new_df.data == "a280csv"])
ax.set_xlim(500, 2500)
ax.set_ylim(4000, 14000)
fig.savefig("naive_scaling_a280_part.svg")
fig, ax = plt.subplots()
sns.lineplot(ax=ax, x="epoch", y="fitness", hue="n", legend='full', data=new_df[(new_df.data == "berlin52csv") & (new_df.epoch % 100 == 0)])
ax.set_title("Naive Model - TSP Graph berlin52")
ax.set_xlim(0, 1000)
ax.set_ylim(7500, 11000)
fig.savefig("naive_scaling_berlin52_fast.eps")
```
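The folder-name grouping used above can be illustrated with toy data (hypothetical folder names): the trailing `_<repetition>` suffix is stripped so repeated runs collapse into a single configuration name.

```python
# Toy illustration of the run-name grouping: strip the trailing
# repetition index and deduplicate the remaining configuration names.
folders = ["4_a280csv_0", "4_a280csv_1", "8_berlin52csv_0", "8_berlin52csv_1"]
unique = sorted(set("_".join(f.split("_")[:-1]) for f in folders))
print(unique)  # ['4_a280csv', '8_berlin52csv']
```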
Note: It is recommended to run this notebook from an [Azure DSVM](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview) instance.
```
# Useful for being able to dump images into the Notebook
import IPython.display as D
```
# Big Picture
In the previous notebooks, we tried the [Custom Vision service](https://github.com/CatalystCode/CVWorkshop/blob/master/%232%20Policy%20Classfication%20With%20Custom%20Vision%20Service.ipynb) as well as [Transfer Learning](https://github.com/CatalystCode/CVWorkshop/blob/master/%233%20Policy%20Recognition%20with%20Resnet%20and%20Transfer%20Learning.ipynb), one of the popular approaches in deep learning where pre-trained models are used as the starting point for computer vision tasks.
Looking at the big picture, the previous notebooks focused on preparing/loading the training data set, building models, training them, and evaluating the output.
In this tutorial, we shift the focus to operationalizing models: deploying trained models as web services so that you can consume them later from any client application via a REST API call. For that purpose, we are using the Azure Machine Learning Model Management Service.

# Azure Model Management Service
Azure Machine Learning Model Management enables you to manage and deploy machine-learning models. It provides different services like creating Docker containers with models for local testing, deploying models to production through Azure ML Compute Environment with [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/) and versioning & tracking models. Learn more here: [Conceptual Overview of Azure Model Management Service](https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-overview)
### What's needed to deploy my model?
* Your model file or a directory of model files
* A score.py file that loads your model and returns the prediction result(s); it is also used to generate the schema JSON file
* A schema JSON file for the API parameters (validates API input and output)
* A runtime environment choice, e.g. python or spark-py
* A conda dependency file listing runtime dependencies
### How it works:

Learn more here: [Conceptual Overview of Azure Model Management Service](https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-overview)
### Deployment Steps:
* Use your saved, trained, Machine Learning model
* Create a schema for your web service's input and output data
* Create a Docker-based container image
* Create and deploy the web service
### Deployment Target Environments:
1. Local Environment: You can set up a local environment to deploy and test your web service on your local machine or DSVM. (Requires you to install Docker on the machine)
2. Production Environment: You can use Cluster deployment for high-scale production scenarios. It sets up an ACS cluster with Kubernetes as the orchestrator. The ACS cluster can be scaled out to handle larger throughput for your web service calls. (Kubernetes deployment on an Azure Container Service (ACS) cluster)

# Challenge
```
# Run the following train.py from the notebook to generate a classifier model
from sklearn.svm import SVC
from cvworkshop_utils import ensure_exists
import pickle
# indicator1, NF1, cellprofiling
X = [[362, 160, 88], [354, 140, 86], [320, 120, 76], [308, 108, 47], [332, 130, 80], [380, 180, 94], [350, 128, 78],
     [354, 140, 80], [318, 110, 74], [342, 150, 84], [362, 170, 86]]
Y = ['positive', 'positive', 'negative', 'negative', 'positive', 'positive', 'negative', 'negative', 'negative', 'positive', 'positive']
clf = SVC()
clf = clf.fit(X, Y)
print('Predicted value:', clf.predict([[380, 140, 86]]))
print('Accuracy', clf.score(X,Y))
print('Export the model to output/trainedModel.pkl')
ensure_exists('output')
f = open('output/trainedModel.pkl', 'wb')
pickle.dump(clf, f)
f.close()
print('Import the model from output/trainedModel.pkl')
f2 = open('output/trainedModel.pkl', 'rb')
clf2 = pickle.load(f2)
X_new = [[308, 108, 70]]
print('New Sample:', X_new)
print('Predicted class:', clf2.predict(X_new))
```
Now navigate to the repository root directory then **open "output" folder** and you should be able to see the **created trained model file "trainedModel.pkl"**
```
# Run the following score.py from the notebook to generate the web service schema JSON file
# Learn more about creating score file from here: https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-service-deploy
def init():
    from sklearn.externals import joblib
    global model
    model = joblib.load('output/trainedModel.pkl')

def run(input_df):
    import json
    pred = model.predict(input_df)
    return json.dumps(str(pred[0]))

def main():
    from azureml.api.schema.dataTypes import DataTypes
    from azureml.api.schema.sampleDefinition import SampleDefinition
    from azureml.api.realtime.services import generate_schema
    import pandas

    df = pandas.DataFrame(data=[[380, 120, 76]], columns=['indicator1', 'NF1', 'cellprofiling'])

    # Check the output of the function
    init()
    input1 = pandas.DataFrame([[380, 120, 76]])
    print("Result: " + run(input1))

    inputs = {"input_df": SampleDefinition(DataTypes.PANDAS, df)}

    # Generate the service_schema.json
    generate_schema(run_func=run, inputs=inputs, filepath='output/service_schema.json')
    print("Schema generated")

if __name__ == "__main__":
    main()
```
Navigate again to the repository root directory then **open "output" folder** and you should be able to see the **created JSON schema file "service_schema.json"**
By reaching this point, we now have what's needed (score.py file, trained model, and JSON schema file) to start deploying our trained model using the Azure Model Management Service. Now it's time to decide which deployment environment to target (local deployment or cluster deployment). In this tutorial we will walk through both scenarios, so feel free to follow **scenario A**, **scenario B**, or even **both**.
Before deploying, first log in to your Azure subscription using your command prompt and register a few environment providers.
Once you execute this command, the command prompt will show you a message asking you to open your web browser then navigate to https://aka.ms/devicelogin to enter a specific code given in the terminal to login to your Azure subscription.
```
#Return to your command prompt and execute the following commands
!az login
# Once you are logged in, now let's execute the following commands to register our environment providers
!az provider register -n Microsoft.MachineLearningCompute
!az provider register -n Microsoft.ContainerRegistry
!az provider register -n Microsoft.ContainerService
```
Registering the environments takes some time so you can monitor the status using the following command:
```
az provider show -n {Environment Provider Name}
```
Before you complete this tutorial, make sure that the registration status of all the providers is **"Registered"**.
```
!az provider show -n Microsoft.MachineLearningCompute
!az provider show -n Microsoft.ContainerRegistry
!az provider show -n Microsoft.ContainerService
```
While waiting for the environment providers to be registered, you can create a resource group to hold all the resources that we are going to provision throughout this tutorial.
```
# command format az group create --name {group name} --location {azure region}
!az group create --name capetownrg --location westus
```
Also create a Model Management account to be used for our deployment, whether local or cluster.
```
# command format az ml account modelmanagement create -l {resource targeted region} -n {model management name} -g {name of created resource group}
!az ml account modelmanagement create -l eastus2 -n capetownmodelmgmt -g capetownrg
```
Once your model management account is created, set it as the account to be used in our deployment.
```
# command format az ml account modelmanagement set -n {your model management account name} -g {name of created resource group}
!az ml account modelmanagement set -n capetownmodelmgmt -g capetownrg
```
### Cluster Deployment - Environment Setup:
If you want to deploy to a cluster, you first need to set up a cluster deployment environment using the following command in order to deploy our trained model as a web service.
**Creating the environment may take 10-20 minutes.**
```
# command format az ml env setup -c --name {your environment name} --location {azure region} -g {name of created resource group}
!az ml env setup -c --name capetownenv --location eastus2 -g capetownrg -y --debug
```
You can use the following command to monitor the status:
```
# command format az ml env show -g {name of created resource group} -n {your environment name}
!az ml env show -g capetownrg -n capetownenv
```
Once your provisioning status is "Succeeded", open your web browser and login to your Azure subscription through the portal and you should be able to see the following resources created in your resource group:
* A storage account
* An Azure Container Registry (ACR)
* A Kubernetes deployment on an Azure Container Service (ACS) cluster
* An Application insights account
Now set your environment as the deployment environment using the following command:
```
# command format az ml env set -n {your environment name} -g {name of created resource group}
!az ml env set -n capetownenv -g capetownrg --debug
```
Now feel free to choose one of the following deployment environments as your targeted environment.
### Local Deployment - Environment Setup:
You first need to set up a local environment using the following command in order to deploy our trained model as a web service.
```
# command format az ml env setup -l {azure region} -n {your environment name} -g {name of created resource group}
# !az ml env setup -l eastus2 -n capetownlocalenv -g capetownrg -y
```
Creating the environment may take some time, so you can use the following command to monitor the status:
```
# command format az ml env show -g {name of created resource group} -n {your environment name}
# !az ml env show -g capetownrg -n capetownlocalenv
```
Once your provisioning status is "Succeeded", open your web browser and login to your Azure subscription through the portal and you should be able to see the following resources created in your resource group:
* A storage account
* An Azure Container Registry (ACR)
* An Application insights account
Now set your environment as the deployment environment using the following command:
```
# command format az ml env set -n {your environment name} -g {name of created resource group}
!az ml env set -n capetownenv -g capetownrg --debug
```
**Whether you finished your environment setup by following Scenario A or Scenario B, you are now ready to deploy our trained model as a web service to consume later from any application.**
### Create your Web Service:
As a reminder, here's what's needed to create your web service:
* Your trained model file -> in our case "output/trainedModel.pkl"
* Your score.py file, which loads your model and returns the prediction result(s) -> in our case "modelmanagement/score.py"
* Your JSON schema file, which automatically validates the input and output of your web service -> in our case "output/service_schema.json"
* Your runtime environment for the Docker container -> in our case "python"
* A conda dependencies file for additional Python packages (we don't have one in our case)
Use the following command to create your web service:
```
# command format az ml service create realtime --model-file {model file/folder path} -f {scoring file} -n {your web service name} -s {json schema file} -r {runtime choice} -c {conda dependencies file}
!az ml service create realtime -m output/trainedModel.pkl -f score.py -n classifierservice -s output/service_schema.json -r python --debug
```
### Test your Web Service:
Once the web service is successfully created, open your web browser and login to your Azure subscription through the portal then jump into your resource group and open your model management account.
**Open** your model management account

**Click** on "Model Management" under Application Settings

**Click** on "Services" and select your created "classifier" service from the right-hand side panel

**Copy** your "Service id", "URL" and "Primary key"

**Call your web service from your terminal:**
```
# command format az ml service run realtime -i {your service id} -d {json input for your web service}
# usage example
!az ml service run realtime -i YOUR_SERVICE_ID -d "{\"input_df\": [{\"NF1\": 120, \"cellprofiling\": 76, \"indicator1\": 380}]}"
```
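The same endpoint can also be called from Python; below is a sketch using the standard library's `urllib` (the URL and key are placeholders, and the payload mirrors the `input_df` schema generated earlier):

```python
# Sketch of calling the scoring endpoint from Python.
# service_url and primary_key are placeholders -- substitute your own values.
import json
from urllib import request, error

service_url = "http://YOUR_SERVICE_URL/score"   # placeholder
primary_key = "YOUR_PRIMARY_KEY"                # placeholder

# The payload mirrors the input_df schema used by score.py
payload = json.dumps({"input_df": [{"NF1": 120, "cellprofiling": 76, "indicator1": 380}]}).encode("utf-8")
req = request.Request(service_url, data=payload,
                      headers={"Content-Type": "application/json",
                               "Authorization": "Bearer " + primary_key})
try:
    with request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))
except (error.URLError, ValueError) as err:
    print("Request failed:", err)   # expected with the placeholder URL
```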
**Call your web service from [Postman](https://www.getpostman.com/):**


# Intro to profiling
Python's dirty little secret is that it can be made to run pretty fast.
The bare-metal HPC people will be angrily tweeting at me now, or rather, they would be if they could get their wireless drivers working.
Still, there are some things you *really* don't want to do in Python. Nested loops are usually a bad idea. But often you won't know where your code is slowing down just by looking at it and trying to accelerate everything can be a waste of time. (Developer time, that is, both now and in the future: you incur technical debt if you unintentionally obfuscate code to make it faster when it doesn't need to be).
The first step is always to find the bottlenecks in your code, via _profiling_: analyzing your code by measuring the execution time of its parts.
Tools
-----
1. `cProfile`
2. [`line_profiler`](https://github.com/rkern/line_profiler)
3. `timeit`
**Note**:
If you haven't already installed it, you can do
```console
conda install line_profiler
```
or
```console
pip install line_profiler
```
## Some bad code
Here's a bit of code guaranteed to perform poorly: it sleeps for 1.5 seconds after doing any work! We will profile it and see where we might be able to help.
```
import numpy
from time import sleep
def bad_call(dude):
    sleep(.5)

def worse_call(dude):
    sleep(1)

def sumulate(foo):
    if not isinstance(foo, int):
        return
    a = numpy.random.random((1000, 1000))
    numpy.dot(a, a)
    ans = 0
    for i in range(foo):
        ans += i
    bad_call(ans)
    worse_call(ans)
    return ans

sumulate(150)
sumulate(150)
```
## using `cProfile`
[`cProfile`](https://docs.python.org/3.4/library/profile.html#module-cProfile) is the built-in profiler in Python (available since Python 2.5). It provides a function-by-function report of execution time. First import the module, then usage is simply a call to `cProfile.run()` with your code as argument. It will print out a list of all the functions that were called, with the number of calls and the time spent in each.
```
import cProfile
cProfile.run('sumulate(150)')
```
You can see here that when our code `sumulate()` executes, it spends almost all its time in the method `time.sleep` (a bit over 1.5 seconds).
If your program is more complicated than this cute demo, you'll have a hard time parsing the long output of `cProfile`. In that case, you may want a profiling visualization tool, like [SnakeViz](https://jiffyclub.github.io/snakeviz/). But that is outside the scope of this tutorial.
## using `line_profiler`
`line_profiler` offers more granular information than `cProfile`: it will give timing information about each line of code in a profiled function.
Load the `line_profiler` extension
```
%load_ext line_profiler
```
### For a pop-up window with results in notebook:
IPython has an `%lprun` magic to profile specific functions within an executed statement. Usage:
`%lprun -f func_to_profile <statement>` (get more help by running `%lprun?` in IPython).
### Profiling two functions
```
%lprun -f sumulate sumulate(13)
%lprun -f bad_call -f worse_call sumulate(13)
```
### Write results to a text file
```
%lprun -T timings.txt -f sumulate sumulate(12)
%ls -l
%load timings.txt
```
## Profiling on the command line
Open file, add `@profile` decorator to any function you want to profile, then run
```console
kernprof -l script_to_profile.py
```
which will generate `script_to_profile.py.lprof` (pickled result). To view the results, run
```console
python -m line_profiler script_to_profile.py.lprof
```
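For reference, such a script might look like the sketch below (hypothetical contents for `script_to_profile.py`); the `try/except` guard lets the same file also run under plain `python`, since `kernprof` injects the `profile` decorator into builtins only when it is in charge:

```python
# script_to_profile.py -- a sketch of a script prepared for kernprof
try:
    profile                      # injected into builtins by `kernprof -l`
except NameError:                # fall back to a no-op under plain `python`
    def profile(func):
        return func

@profile
def slow_function():
    total = 0
    for i in range(1000):
        total += i
    return total

if __name__ == '__main__':
    print(slow_function())       # kernprof records per-line timings here
```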
```
from IPython.display import IFrame
IFrame('http://localhost:7000/terminals/1', width=800, height=700)
```
## `timeit`
`timeit` is not perfect, but it is helpful.
Potential concerns re: `timeit`
* Returns minimum time of run
* Only runs benchmark 3 times
* It disables garbage collection
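These defaults can be adjusted when driving `timeit` from Python instead of the command line; here is a sketch (the statement and counts are arbitrary) using `timeit.repeat` with garbage collection re-enabled in the setup:

```python
import timeit

# Run the benchmark 25 times instead of the default, with GC enabled,
# and keep all runs rather than only the minimum.
runs = timeit.repeat(stmt="sum(range(100))",
                     setup="import gc; gc.enable()",  # timeit disables GC by default
                     repeat=25, number=1000)
print(min(runs), max(runs))
```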
```console
python -m timeit -v "print(42)"
```
```console
python -m timeit -r 25 "print(42)"
```
```console
python -m timeit -s "gc.enable()" "print(42)"
```
### Line magic
```
%timeit x = 5
```
### Cell magic
```
%%timeit
x = 5
y = 6
x + y
```
The `-q` flag quiets output. The `-o` flag allows outputting results to a variable. The `-q` flag sometimes disagrees with OSX so please remove it if you're having issues.
```
a = %timeit -qo x = 5
print(a)
a.all_runs
a.best
a.worst
```
```
import numpy as np
import pandas as pd
import scipy.stats as ss
closest_collection = "typeIII_submission_collection_closest.csv"
hungarian_collection = "typeIII_submission_collection_hungarian.csv"
```
## How many predicted pKas are matched differently between closest and hungarian algorithms?
```
df_closest = pd.read_csv(closest_collection,index_col=0)
df_closest.head()
df_hungarian = pd.read_csv(hungarian_collection, index_col=0)
df_hungarian.head()
prediction_methods = set(df_closest["name"])
len(prediction_methods)
# Iterate through prediction methods and create a dataframe that compares hungarian and closest matching
matched_pKa_list = []

for method in prediction_methods:
    #for method in ["Full quantum chemical calculation of free energies and fit to experimental pKa"]:
    submission_id = df_closest[df_closest["name"] == method]["receipt_id"].values[0]

    df_closest_1method = df_closest[df_closest["name"] == method].reset_index(drop=True)
    df_hungarian_1method = df_hungarian[df_hungarian["name"] == method].reset_index(drop=True)

    pKa_IDs = list(df_closest_1method["pKa ID"])

    # Iterate through pKa_IDs to check if predicted pKas match
    for pKa_ID in pKa_IDs:
        pKa_exp = df_closest_1method[df_closest_1method["pKa ID"] == pKa_ID]["pKa (exp)"].values[0]
        pKa_pred_closest = df_closest_1method[df_closest_1method["pKa ID"] == pKa_ID]["pKa (calc)"].values[0]
        pKa_pred_hungarian = df_hungarian_1method[df_hungarian_1method["pKa ID"] == pKa_ID]["pKa (calc)"].values[0]

        closest_hungarian_diff = pKa_pred_closest - pKa_pred_hungarian

        matched_pKa_row = [pKa_ID, pKa_exp, pKa_pred_closest, pKa_pred_hungarian, closest_hungarian_diff, submission_id]
        matched_pKa_list.append(matched_pKa_row)
        #print(matched_pKa_row)
# Convert to pandas dataframe
df_compare_matching = pd.DataFrame(matched_pKa_list, columns = ['pKa ID', 'pKa (exp)',
                                                                'pKa (pred, closest)', 'pKa (pred, hungarian)',
                                                                'closest - hungarian diff.', 'submission ID'])
df_compare_matching.head()
# Print out pKas that have different matching between hungarian and closest
df_difference_in_matching = df_compare_matching[df_compare_matching["closest - hungarian diff."] != 0]
df_difference_in_matching
# Why is nb006 SM14_pKa2 matched to a very different number?
# These are submitted SM14 predictions
# SM14, -1.77, 1.77
# SM14, 3.38, 1.77
# SM14, 24.63, 1.77
# SM14 Experimental values
# 2.58 ± 0.01
# 5.30 ± 0.01
df_nb006 = df_closest[df_closest['receipt_id']=='nb006']
df_nb006_SM14 = df_nb006[df_nb006['Molecule ID']=='SM14']
df_nb006_SM14
df_nb006 = df_hungarian[df_hungarian['receipt_id']=='nb006']
df_nb006_SM14 = df_nb006[df_nb006['Molecule ID']=='SM14']
df_nb006_SM14
```
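The difference between the two matching schemes can be sketched with toy numbers (hypothetical pKa values, not the SAMPL6 data; the actual challenge implementation may differ in detail): closest matching lets each prediction independently pick the nearest experimental value, while Hungarian matching, via `scipy.optimize.linear_sum_assignment`, enforces a one-to-one assignment that minimizes the total absolute error.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # the Hungarian algorithm

exp_pKas = np.array([2.6, 5.3])    # hypothetical experimental values
pred_pKas = np.array([3.0, 3.5])   # hypothetical predictions

# Closest matching: each prediction independently picks the nearest experiment,
# so both predictions can grab the same experimental value
closest = [exp_pKas[np.argmin(np.abs(exp_pKas - p))] for p in pred_pKas]
print("closest:", closest)

# Hungarian matching: one-to-one assignment minimizing the total |error|
cost = np.abs(exp_pKas[:, None] - pred_pKas[None, :])
row, col = linear_sum_assignment(cost)
print("hungarian:", list(zip(exp_pKas[row], pred_pKas[col])))
```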
### Experimental pKas of molecules with pKas differently matched
SM06
3.03 ± 0.04
11.74 ± 0.01
SM14
2.58 ± 0.01
5.30 ± 0.01
SM18
2.15 ± 0.02
9.58 ± 0.03
11.02 ± 0.04
SM22
2.40 ± 0.02
7.43 ± 0.01
### Experimental pKas of molecules with pKas equally matched even thought they have multiple pKas
SM15
4.70 ± 0.01
8.94 ± 0.01
SM16
5.37 ± 0.01
10.65 ± 0.01
## How many pKa predictions are matched without conserving the rank order when matched with the Hungarian method?
```
# Test for comparing rank orders - ORDERED MATCH
exp_pKas = np.array([2.4, 4.3, 7.0])
pred_pKas = np.array([2.5, 4.2, 7.2])
#exp_pKa_ranks = list(np.array([1, 3, 2]))
#pred_pKa_ranks = list(np.array([1, 3, 2]))
exp_pKa_ranks = ss.rankdata(exp_pKas)
print("exp ranks:", exp_pKa_ranks)
pred_pKa_ranks = ss.rankdata(pred_pKas)
print("pred ranks:", pred_pKa_ranks)
# Is rank order the same?
if list(exp_pKa_ranks) == list(pred_pKa_ranks):
    ordered_match = True
else:
    ordered_match = False
ordered_match
# Test for comparing rank orders - UNORDERED MATCH
exp_pKas = np.array([2.4, 4.3, 7.0])
pred_pKas = np.array([2.5, 7.0, 4.5])
#exp_pKa_ranks = list(np.array([1, 3, 2]))
#pred_pKa_ranks = list(np.array([1, 3, 2]))
exp_pKa_ranks = ss.rankdata(exp_pKas)
print("exp ranks:", exp_pKa_ranks)
pred_pKa_ranks = ss.rankdata(pred_pKas)
print("pred ranks:", pred_pKa_ranks)
# Is rank order the same?
if list(exp_pKa_ranks) == list(pred_pKa_ranks):
    ordered_match = True
else:
    ordered_match = False
ordered_match
df_hungarian.head()
pKa_rank_comparison_list = []

# Iterate through methods
for method in prediction_methods:
    #for method in ["Full quantum chemical calculation of free energies and fit to experimental pKa"]:
    submission_ID = set(df_hungarian[df_hungarian["name"] == method]["receipt_id"].values)
    df_hungarian_1method = df_hungarian[df_hungarian["name"] == method].reset_index(drop=True)
    mol_IDs = list(df_hungarian_1method["Molecule ID"])

    # Iterate through molecules
    for mol_ID in mol_IDs:
        df_hungarian_1method_1mol = df_hungarian_1method[df_hungarian_1method["Molecule ID"] == mol_ID].reset_index(drop=True)
        pKa_IDs = df_hungarian_1method_1mol['pKa ID'].values

        # Assign rank order of experimental and predicted pKas
        exp_pKas = df_hungarian_1method_1mol['pKa (exp)'].values
        exp_pKa_ranks = ss.rankdata(exp_pKas)
        pred_pKas = df_hungarian_1method_1mol['pKa (calc)'].values
        pred_pKa_ranks = ss.rankdata(pred_pKas)  # rank is given only to matched pred pKas

        # Is rank order the same?
        if list(exp_pKa_ranks) == list(pred_pKa_ranks):
            ordered_match = True
        else:
            ordered_match = False

        pKa_rank_comparison_list.append([mol_ID, ordered_match, pKa_IDs, exp_pKas, pred_pKas, exp_pKa_ranks, pred_pKa_ranks, submission_ID])
# Convert to pandas dataframe
df_compare_ranks_hungarian = pd.DataFrame(pKa_rank_comparison_list, columns = ['mol ID', 'ordered match', 'pKa IDs', 'pKa (exp)',
'pKa (pred)', 'pKa rank (exp)', 'pKa rank (pred)', 'submission ID'])
df_compare_ranks_hungarian = df_compare_ranks_hungarian.astype(str)
df_compare_ranks_hungarian = df_compare_ranks_hungarian.drop_duplicates()
df_compare_ranks_hungarian
# Print out pKas that don't preserve increasing order when matched by the Hungarian algorithm
df_unordered_matching_hungarian = df_compare_ranks_hungarian[df_compare_ranks_hungarian["ordered match"] == False]
df_unordered_matching_hungarian
# Just SM18
df_unordered_matching_hungarian_SM18 = df_compare_ranks_hungarian[df_compare_ranks_hungarian["mol ID"] == 'SM18']
df_unordered_matching_hungarian_SM18
# SM18 prediction of 0hxtm method was also matched in expected order.
df_unordered_matching_hungarian_SM18_0hxtm = df_unordered_matching_hungarian_SM18[df_unordered_matching_hungarian_SM18["submission ID"] == '0hxtm']
df_unordered_matching_hungarian_SM18_0hxtm
```
There aren't any out-of-order matches in this set.
### Was there an out-of-order match in the past for Hungarian matching? Is it random for SM18 in the 0hxtm submission?
```
# SAMPL6 repository branch pKa_typeIII_analysis3_hungarian
# https://github.com/MobleyLab/SAMPL6/blob/pKa_typeIII_analysis3_hungarian/physical_properties/pKa/analysis/analysis_of_typeIII_predictions/analysis_outputs_hungarian/typeIII_submission_collection.csv
hungarian_collection_a3 = 'typeIII_submission_collection_hungarian_analysis3.csv'
df_hungarian = pd.read_csv(hungarian_collection_a3, index_col=0)
pKa_rank_comparison_list = []

# Iterate through methods
for method in prediction_methods:
    submission_ID = df_hungarian[df_hungarian["name"] == method]["receipt_id"].values[0]
    df_hungarian_1method = df_hungarian[df_hungarian["name"] == method].reset_index(drop=True)
    mol_IDs = list(df_hungarian_1method["Molecule ID"])
    # Iterate through molecules
    for mol_ID in mol_IDs:
        df_hungarian_1method_1mol = df_hungarian_1method[df_hungarian_1method["Molecule ID"] == mol_ID].reset_index(drop=True)
        pKa_IDs = df_hungarian_1method_1mol['pKa ID'].values
        # Assign rank order of experimental and predicted pKas
        exp_pKas = df_hungarian_1method_1mol['pKa (exp)'].values
        exp_pKa_ranks = ss.rankdata(exp_pKas)
        pred_pKas = df_hungarian_1method_1mol['pKa (calc)'].values
        pred_pKa_ranks = ss.rankdata(pred_pKas)  # rank is given only to matched pred pKas
        # Is rank order the same?
        if list(exp_pKa_ranks) == list(pred_pKa_ranks):
            ordered_match = True
        else:
            ordered_match = False
        pKa_rank_comparison_list.append([mol_ID, ordered_match, pKa_IDs, exp_pKas, pred_pKas,
                                         exp_pKa_ranks, pred_pKa_ranks, submission_ID])

# Convert to pandas dataframe
df_compare_ranks_hungarian = pd.DataFrame(pKa_rank_comparison_list,
                                          columns=['mol ID', 'ordered match', 'pKa IDs', 'pKa (exp)',
                                                   'pKa (pred)', 'pKa rank (exp)', 'pKa rank (pred)',
                                                   'submission ID'])
# Print out pKas that don't preserve increasing order when matched by the Hungarian algorithm
df_unordered_matching_hungarian = df_compare_ranks_hungarian[df_compare_ranks_hungarian["ordered match"] == False]
df_unordered_matching_hungarian = df_unordered_matching_hungarian.astype(str)
df_unordered_matching_hungarian = df_unordered_matching_hungarian.drop_duplicates()
df_unordered_matching_hungarian
# SAMPL6 repository branch pKa_typeIII_analysis5_hungarian
# https://github.com/MobleyLab/SAMPL6/blob/pKa_typeIII_analysis5/physical_properties/pKa/analysis/analysis_of_typeIII_predictions/analysis_outputs_hungarian/typeIII_submission_collection.csv
hungarian_collection_a5 = 'typeIII_submission_collection_hungarian_analysis5.csv'
df_hungarian = pd.read_csv(hungarian_collection_a5, index_col=0)
pKa_rank_comparison_list = []

# Iterate through methods
for method in prediction_methods:
    submission_ID = df_hungarian[df_hungarian["name"] == method]["receipt_id"].values[0]
    df_hungarian_1method = df_hungarian[df_hungarian["name"] == method].reset_index(drop=True)
    mol_IDs = list(df_hungarian_1method["Molecule ID"])
    # Iterate through molecules
    for mol_ID in mol_IDs:
        df_hungarian_1method_1mol = df_hungarian_1method[df_hungarian_1method["Molecule ID"] == mol_ID].reset_index(drop=True)
        pKa_IDs = df_hungarian_1method_1mol['pKa ID'].values
        # Assign rank order of experimental and predicted pKas
        exp_pKas = df_hungarian_1method_1mol['pKa (exp)'].values
        exp_pKa_ranks = ss.rankdata(exp_pKas)
        pred_pKas = df_hungarian_1method_1mol['pKa (calc)'].values
        pred_pKa_ranks = ss.rankdata(pred_pKas)  # rank is given only to matched pred pKas
        # Is rank order the same?
        if list(exp_pKa_ranks) == list(pred_pKa_ranks):
            ordered_match = True
        else:
            ordered_match = False
        pKa_rank_comparison_list.append([mol_ID, ordered_match, pKa_IDs, exp_pKas, pred_pKas,
                                         exp_pKa_ranks, pred_pKa_ranks, submission_ID])

# Convert to pandas dataframe
df_compare_ranks_hungarian = pd.DataFrame(pKa_rank_comparison_list,
                                          columns=['mol ID', 'ordered match', 'pKa IDs', 'pKa (exp)',
                                                   'pKa (pred)', 'pKa rank (exp)', 'pKa rank (pred)',
                                                   'submission ID'])
# Print out pKas that don't preserve increasing order when matched by the Hungarian algorithm
df_unordered_matching_hungarian = df_compare_ranks_hungarian[df_compare_ranks_hungarian["ordered match"] == False]
df_unordered_matching_hungarian
# Just SM18
df_unordered_matching_hungarian_SM18 = df_compare_ranks_hungarian[df_compare_ranks_hungarian["mol ID"] == 'SM18']
# SM18 prediction of yqkga method
df_unordered_matching_hungarian_SM18_yqkga = df_unordered_matching_hungarian_SM18[df_unordered_matching_hungarian_SM18["submission ID"] == 'yqkga']
df_unordered_matching_hungarian_SM18_yqkga
```
The Hungarian matching algorithm doesn't always make matches that break the natural order of pKa values.
Only when the order-preserving match and the unordered match have exactly the same cost is the result random.
The Hungarian collection set of branch `pKa_typeIII_analysis5` (commit b1bef28) doesn't have any unordered matches.
The Hungarian collection set of branch `pKa_typeIII_analysis3` (commit 70d828e) has unordered matches of SM18 pKas in the following submissions:
0hxtm, yqkga, ryzue, yc70m
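As a sanity check, the matching itself can be reproduced outside the SAMPL6 pipeline. The sketch below (which assumes the cost function is the absolute pKa error, an assumption about the pipeline, not something stated here) uses SciPy's `linear_sum_assignment`, an implementation of the Hungarian algorithm, on the small test case from the cells above. When two assignments have exactly the same total cost, which one is returned is an implementation detail, which is consistent with the tie cases looking random between runs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

exp_pKas = np.array([2.4, 4.3, 7.0])
pred_pKas = np.array([2.5, 7.0, 4.5])

# Cost matrix: absolute error of matching each experimental pKa (rows)
# to each predicted pKa (columns).
cost = np.abs(exp_pKas[:, None] - pred_pKas[None, :])

row_ind, col_ind = linear_sum_assignment(cost)  # Hungarian matching
print(list(zip(row_ind, col_ind)))   # optimal (exp index, pred index) pairs
print(cost[row_ind, col_ind].sum())  # total absolute error of the optimal match
```

Here the minimum-cost assignment also happens to preserve the rank order; tie cases arise only when an order-breaking assignment has an identical total cost.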
### Was there an out-of-order match in the latest type III analysis run?
sampl6-physicochemical-properties repository,
commit 389a9540 "Rerun type III analysis 20190913."
```
#https://github.com/choderalab/sampl6-physicochemical-properties/blob/master/analysis_of_pKa_predictions/analysis_of_typeIII_predictions/analysis_outputs_hungarian/typeIII_submission_collection.csv
hungarian_collection = "typeIII_submission_collection_hungarian_3891954.csv"
df_hungarian = pd.read_csv(hungarian_collection, index_col=0)
df_hungarian.head()
prediction_methods = set(df_hungarian["name"])
pKa_rank_comparison_list = []

# Iterate through methods
for method in prediction_methods:
    submission_ID = df_hungarian[df_hungarian["name"] == method]["receipt_id"].values[0]
    df_hungarian_1method = df_hungarian[df_hungarian["name"] == method].reset_index(drop=True)
    mol_IDs = list(df_hungarian_1method["Molecule ID"])
    # Iterate through molecules
    for mol_ID in mol_IDs:
        df_hungarian_1method_1mol = df_hungarian_1method[df_hungarian_1method["Molecule ID"] == mol_ID].reset_index(drop=True)
        pKa_IDs = df_hungarian_1method_1mol['pKa ID'].values
        # Assign rank order of experimental and predicted pKas
        exp_pKas = df_hungarian_1method_1mol['pKa (exp)'].values
        exp_pKa_ranks = ss.rankdata(exp_pKas)
        pred_pKas = df_hungarian_1method_1mol['pKa (calc)'].values
        pred_pKa_ranks = ss.rankdata(pred_pKas)  # rank is given only to matched pred pKas
        # Is rank order the same?
        if list(exp_pKa_ranks) == list(pred_pKa_ranks):
            ordered_match = True
        else:
            ordered_match = False
        pKa_rank_comparison_list.append([mol_ID, ordered_match, pKa_IDs, exp_pKas, pred_pKas,
                                         exp_pKa_ranks, pred_pKa_ranks, submission_ID])

# Convert to pandas dataframe
df_compare_ranks_hungarian = pd.DataFrame(pKa_rank_comparison_list,
                                          columns=['mol ID', 'ordered match', 'pKa IDs', 'pKa (exp)',
                                                   'pKa (pred)', 'pKa rank (exp)', 'pKa rank (pred)',
                                                   'submission ID'])
# Print out pKas that don't preserve increasing order when matched by the Hungarian algorithm
df_unordered_matching_hungarian = df_compare_ranks_hungarian[df_compare_ranks_hungarian["ordered match"] == False]
df_unordered_matching_hungarian
# Just SM18
df_unordered_matching_hungarian_SM18 = df_compare_ranks_hungarian[df_compare_ranks_hungarian["mol ID"] == 'SM18']
# SM18 prediction of yqkga method
df_unordered_matching_hungarian_SM18_yqkga = df_unordered_matching_hungarian_SM18[df_unordered_matching_hungarian_SM18["submission ID"] == 'yqkga']
df_unordered_matching_hungarian_SM18_yqkga
```
Still, Hungarian matching doesn't produce any non-sequential matches.
# SLU06 - File & String handling
Now we're going to test how well you understood the learning notebook.
Also, this notebook is going to often require some googling skills. It's very important to learn to google anything you don't remember or don't know how to do.
A small hint: list comprehensions might make it easier to solve some of the exercises.
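As a quick refresher (not a solution to any exercise below), a list comprehension builds a new list by transforming and filtering the items of an existing one:

```python
words = ["The", "Quick", "", "Brown", "Fox"]

# Lowercase every word, skipping empty strings.
cleaned = [w.lower() for w in words if w != ""]
print(cleaned)  # ['the', 'quick', 'brown', 'fox']
```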
### Exercise 1
- Open a local file called 'assignment.txt'. Store it in a variable called 'f'
- Read the whole file and store it in a variable called 'text'. The variable should consist of only one string representing all the lines from the file
```
# f = ...
# text = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(text, str), "Are you sure you read the whole file as one string (not as list of strings)?"
assert len(text) == 727, "The length of the string doesn't match"
assert text[0]=='O', "Did you read the correct file?"
```
### Exercise 2
- Move the read cursor to the beginning of the file
- Read each line of the file and store it in a variable called lines. This line should be a list of strings.
```
# ...
# lines = ...
# YOUR CODE HERE
raise NotImplementedError()
assert len(lines) != 0, "Did you move the cursor to the beginning of the file?"
assert isinstance(lines, list), "Are you sure you stored the file as a list of strings?"
assert len(lines) == 13, "The number of string elements in the list doesn't match"
assert lines[0]=='On an exceptionally hot evening early in July a young man came out of\n', "Are you sure you read the correct file?"
```
### Exercise 3
Let's preprocess our file.
- For each string in the list, remove all the newline characters and save the result in the same variable
```
# lines = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(lines, list), "Lines has to be a list of strings"
assert len(lines) == 13, "The number of strings in the list doesn't match"
assert len(lines[0])==69, "Are you sure you removed the newline characters?"
```
### Exercise 4
Remove all the empty lines:
```
# lines = ...
# YOUR CODE HERE
raise NotImplementedError()
assert len(lines) == 11, "Are you sure that you removed all the empty strings?"
```
### Exercise 5
Concatenate all the strings in the list into one long string separating them with a space symbol
```
# lines = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(lines, str), "It's not a string"
assert len(lines) == 725, "The length of the string doesn't match"
```
### Exercise 6
Convert the string to a list of words and save it in a variable called 'words'.
Hint: Use space symbol as a separator
```
# words = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(words, list), "It's not a list"
assert isinstance(words[0], str), "The list has to consist of strings"
assert len(words) == 132, "The number of words doesn't match"
```
### Exercise 7
Convert all the words to lowercase
```
# words = ...
# YOUR CODE HERE
raise NotImplementedError()
assert all([word.islower() for word in words]), "Not all the words are lowercase"
```
### Exercise 8
In Natural Language Processing we usually want to remove some words that have no meaning. They are usually called 'stop words'.
We're not dealing with NLP yet, but let's also remove such words.
- remove the words that are in the following list:
['on','an','in','a','of','the','and']
```
# stop_words = ['on','an','in','a','of','the','and']
# words = ...
# YOUR CODE HERE
raise NotImplementedError()
assert len(words) == 100, "The number of words doesn't match"
```
### Exercise 9
Keep only the unique words in the list of words created in Exercise 8.
Store them in the same variable. It has to be a list of unique words.
P.S., Don't remove any symbols in the strings! Here we assume that if the original string had both 'hello.' and 'hello', they are both unique and we keep both of them.
Hint: You'll need some Googling on this question.
```
# words = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(words, list), "We need to store the unique words in a list"
assert len(words) == 77, "The number of words doesn't match"
```
### Exercise 10
Create a function to find if a word has more characters than a given number:
- Create the function is_long
- Receive 2 parameters: a string and an integer
- The string represents the word that we're checking
- The integer represents the minimum length that word should have
- return True if length of the word >= integer. Return False otherwise
```
# def is_long(word, num):
# ...
# YOUR CODE HERE
raise NotImplementedError()
assert is_long('Exercise', 5), "The function didn't pass the test for the following parameters: word='exercise', num=5"
assert not is_long('Exercise', 10), "The function didn't pass the test for the following parameters: word='exercise', num=10"
```
### Exercise 11
Apply the is_long function to each word in the 'words' variable you created in Exercise 9.
Assume the integer parameter num = 5.
Store the result in a variable called long_words
```
# long_words = ...
# YOUR CODE HERE
raise NotImplementedError()
assert len(long_words) == 47, "The number of words doesn't match"
```
### Exercise 12
Create a function that calculates how many words in a list start with a given letter.
- Create the function start_with
- Accept 2 parameters: a list of words, a letter to check
- return the number of cases when this letter is the first letter in a word
```
# def start_with(words_list, letter):
# ...
# YOUR CODE HERE
raise NotImplementedError()
assert start_with(long_words, 'b') == 2, "The function didn't work for long_words and letter='b'"
assert start_with(words, 'a') == 5, "The function didn't work for words and letter='a'"
assert start_with(words, 'd') == 3, "The function didn't work for words and letter='d'"
```
### Exercise 13
This task is going to be a bit more difficult, but also pretty realistic. We're going to create our own encoder.
An encoder is a program that transforms one format of data into another. You're going to use encoders all the time at the academy. Now we're going to write a simple encoder ourselves.
Our encoder is going to check whether a sentence contains some predefined words. Imagine that we have a lot of sentences and we want to understand whether they are positive or negative.
Our idea is the following:
If the sentence has words like 'good', 'awesome', 'fantastic', 'hilarious' etc., it's positive. And it's negative otherwise.
Let's call the words like 'good' our vocabulary.
Create a function, which:
- is called encoder()
- gets 2 inputs: 1)sentence (string) 2)vocabulary (list of words)
- for each word in the vocabulary, checks whether this word is in the sentence.
- returns a list of the same length as the vocabulary, but instead of words it has only zeros and ones. 1 means that this word was in the sentence, 0 otherwise.
Example:
`vocabulary = ['good', 'better', 'awesome', 'fantastic']
sentence = 'This day was fantastic. Tomorrow will be even better!'
encoder(sentence, vocabulary)`
Output:
> [0,1,0,1]
Explanation:
'good' was not in the sentence => 0
'better' was in the sentence => 1
'awesome' was not in the sentence => 0
'fantastic' was in the sentence => 1
Don't forget to lowercase the words in the sentence and in the vocabulary. Assume that all the words are separated by space symbol.
If you're stuck with this task, try to divide it in smaller parts.
**A hint:**
we can divide this task into a few smaller steps.
1) We might firstly need to lowercase each word in the vocabulary
2) Split 'sentence' into a list of words. Let's call it "words_list"
3) Lowercase each word in this list.
4) For each word in our vocabulary, ask the program whether this word is in words_list. If yes, the answer is 1. If it's not, the answer is 0.
5) return the list of answers
```
# def encoder(sentence, vocabulary):
# ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(encoder('The day was good', ['good']), list), 'The function has to return a list'
assert len(encoder('The day was good', ['good'])) != 0, 'Your function returns an empty list'
assert isinstance(encoder('The day was good', ['good'])[0], int), 'The elements of the output list have to be integers'
assert len(encoder('The day was good', ['good', 'bad', 'awesome'])) == 3, 'Number of elements in the output list does not match the length of the vocabulary'
assert encoder('The day was good', ['good', 'bad', 'awesome']) == [1,0,0], "The function failed on the test: encoder('The day was good', ['good', 'bad', 'awesome'])"
```
### Exercise 14
`def function(a):
assert isinstance(a, int)
print(a + 1)`
P.S., the isinstance function checks whether a variable has a specific type. In our case it checks whether variable a is an integer.
What is going to happen if you call:
`function(3.5)`
Options:
- a. 4.5 will be printed
- b. AssertionError will be raised
- c. AssertionError will be printed
- d. Nothing. The code is not valid
Write the letter with the correct answer to a variable called 'answer' as a string.
For example,
`answer = 'a'`
Note: try to think before writing anything. If you don't know the right answer, check the learning material.
But please try not to guess the answer by testing the function itself (don't call it).
You're learning it for yourself, not for correct answers or some grades 🙂
```
# answer = ''
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(answer, str), 'The answer should be a string'
assert answer == 'b', 'The answer is wrong'
```
### Exercise 15
What's the output of the following line?
`'I am' + 17 + 'years old'`
Options:
- a. Name error
- b. 'I am 17 years old'
- c. Type error
- d. 'I am years old 17'
Save the answer in a variable called 'answer'. Example:
`answer = 'a'`
```
# answer = ''
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(answer, str), 'The answer should be a string'
assert answer == 'c', 'The answer is wrong'
```
### Exercise 16
What's the result of the following lines?
`text = ['I', 'am', 'happy', 'today']
print(text[4])`
Options:
- a. 'today'
- b. 'h'
- c. Type error
- d. Index error
Save the answer in a variable called 'answer'. Example:
`answer = 'a'`
```
# answer = ''
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(answer, str), 'The answer should be a string'
assert answer == 'd', 'The answer is wrong'
```
### Exercise 17
Create a function called join_words that accepts a list of strings called `list_strings` and a string `separator`.
Join all strings on `list_strings` with string `separator` between all words to a new variable called `sentence`. Return `sentence`.
If the operation goes wrong return the following string `"function inputs don't match the requirements"`
```
#def join_words(list_strings, separator):
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(join_words(["this", "is", "a", "string"], " "), str), "String output is expected"
assert join_words(["this", "is", "a", "string"], " ") == "this is a string", "The answer is wrong for input=['This', 'is', 'a', 'string']"
assert join_words(["this", "is", False, "a", "string"], " ") == "function inputs don't match the requirements", "Error message was not returned"
```
### Exercise 18
Create a function called multiply_not_five, which:
- has an integer num as an input
- return the number multiplied by 2
- if the number is equal to 5, raises an AssertionError
```
#def multiply_not_five(num):
# YOUR CODE HERE
raise NotImplementedError()
assert multiply_not_five(10) == 20, 'Wrong answer for number = 10'
assert multiply_not_five(-5) == -10, 'Wrong answer for number = -5'
def check_answer(function):
    try:
        function(5)
    except AssertionError:
        pass
    except Exception:
        raise Exception("You need to raise an AssertionError, not another exception type")
    else:
        raise Exception("The function shouldn't work for number = 5")
check_answer(multiply_not_five)
```
### Exercise 19
The last exercise is pretty similar to exercise 18.
Create a function called multiply_string, which:
- has a string as an input
- return the string repeated 10 times. E.g., multiply_string('a') = 'aaaaaaaaaa'
- **hint**: you might want to google how to repeat strings in python
- if the input is not a string, raise an Exception with some text output. For example, Exception("It's not a string, dude!")
- **hint**: you might want to google how to check whether a variable is a string in python
```
#def multiply_string(string):
# YOUR CODE HERE
raise NotImplementedError()
assert multiply_string('a') == 'aaaaaaaaaa', "The function didn't work for input='a'"
assert multiply_string('z') == 'zzzzzzzzzz', "The function didn't work for input='z'"
def check_answer(function):
    try:
        function(5)
    except Exception:
        pass
    else:
        raise Exception("The function shouldn't work for integers")
check_answer(multiply_string)
```
# Scheduling a Doubles Pickleball Tournament
My friend Steve asked for help in creating a schedule for a round-robin doubles pickleball tournament with 8 or 9 players on 2 courts. ([Pickleball](https://en.wikipedia.org/wiki/Pickleball) is a paddle/ball/net game played on a court that is smaller than tennis but larger than ping-pong.)
To generalize: given *P* players and *C* available courts, we would like to create a **schedule**: a table where each row is a time period (a round of play), each column is a court, and each cell contains a game, which consists of two players partnered together and pitted against two other players. The preferences for the schedule are:
- Each player should partner with each other player exactly once (or as close to that as possible).
- Fewer rounds are better (in other words, try to fill all the courts each round).
- Each player should play against each other player twice, or as close to that as possible.
- A player should not be scheduled to play two games at the same time.
For example, here's a perfect schedule for *P*=8 players on *C*=2 courts:
[([[1, 6], [2, 4]], [[3, 5], [7, 0]]),
([[1, 5], [3, 6]], [[2, 0], [4, 7]]),
([[2, 3], [6, 0]], [[4, 5], [1, 7]]),
([[4, 6], [3, 7]], [[1, 2], [5, 0]]),
([[1, 0], [6, 7]], [[3, 4], [2, 5]]),
([[2, 6], [5, 7]], [[1, 4], [3, 0]]),
([[2, 7], [1, 3]], [[4, 0], [5, 6]])]
This means that in the first round, players 1 and 6 partner against 2 and 4 on one court, while 3 and 5 partner against 7 and 0 on the other. There are 7 rounds.
My strategy for finding a good schedule is to use **hillclimbing**: start with an initial schedule, then repeatedly alter the schedule by swapping partners in one game with partners in another. If the altered schedule is better, keep it; if not, discard it. Repeat.
## Coding it up
The strategy in more detail:
- First form all pairs of players, using `all_pairs(P)`.
- Put pairs together to form a list of games using `initial_games`.
- Use `Schedule` to create a schedule; it calls `one_round` to create each round and `scorer` to evaluate the schedule.
- Use `hillclimb` to improve the initial schedule: call `alter` to randomly alter a schedule, `Schedule` to re-allocate the games to rounds and courts, and `scorer` to check if the altered schedule's score is better.
(Note: with *P* players there are *P × (P - 1) / 2* pairs of partners; this is an even number when either *P* or *P - 1* is divisible by 4, so everything works out when, say, *P*=4 or *P*=9, but for, say, *P*=10 there are 45 pairs, and so `initial_games` chooses to create 22 games, meaning that one pair of players never play together, and thus play one fewer game than everyone else.)
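The pair-counting in the note above can be checked directly; this is just a sketch of the arithmetic, separate from the scheduler code that follows.

```python
from itertools import combinations

def num_pairs(P):
    """Number of distinct partner pairs among P players: P * (P - 1) // 2."""
    return len(list(combinations(range(P), 2)))

for P in (8, 9, 10):
    pairs = num_pairs(P)
    games, leftover = divmod(pairs, 2)  # each game consumes two pairs
    print(P, "players:", pairs, "pairs ->", games, "games,", leftover, "pair(s) left over")
```

For *P*=8 this gives the 28 pairs / 14 games of the perfect schedule above; for *P*=10 the 45 pairs leave one pair over, matching the note.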
```
import random
from itertools import combinations
from collections import Counter
#### Types

Player = int   # A player is an int: `1`
Pair = list    # A pair is a list of two players who are partners: `[1, 2]`
Game = list    # A game is a list of two pairs: `[[1, 2], [3, 4]]`
Round = tuple  # A round is a tuple of games: `([[1, 2], [3, 4]], [[5, 6], [7, 8]])`

class Schedule(list):
    """A Schedule is a list of rounds (augmented with a score and court count)."""
    def __init__(self, games, courts=2):
        games = list(games)
        while games:  # Allocate games to courts, one round at a time
            self.append(one_round(games, courts))
        self.score = scorer(self)
        self.courts = courts
#### Functions

def hillclimb(P, C=2, N=100000):
    "Schedule games for P players on C courts by randomly altering schedule N times."
    sched = Schedule(initial_games(all_pairs(P)), C)
    for _ in range(N):
        sched = max(alter(sched), sched, key=lambda s: s.score)
    return sched

def all_pairs(P): return list(combinations(range(P), 2))

def initial_games(pairs):
    """An initial list of games: [[[1, 2], [3, 4]], ...].
    We try to have every pair play every other pair once, and
    have each game have 4 different players, but that isn't always true."""
    random.shuffle(pairs)
    games = []
    while len(pairs) >= 2:
        A = pairs.pop()
        B = first(pair for pair in pairs if disjoint(pair, A)) or pairs[0]
        games.append([A, B])
        pairs.remove(B)
    return games

def disjoint(A, B):
    "Do A and B have disjoint players in them?"
    return not (players(A) & players(B))

def one_round(games, courts):
    """Place up to `courts` games into a round, all with disjoint players."""
    round = []
    while True:
        G = first(g for g in games if disjoint(round, g))
        if not G or not games or len(round) == courts:
            return Round(round)
        round.append(G)
        games.remove(G)

def players(x):
    "All distinct players in a pair, game, or sequence of games."
    return {x} if isinstance(x, Player) else set().union(*map(players, x))

def first(items): return next(items, None)
def pairing(p1, p2): return tuple(sorted([p1, p2]))

def scorer(sched):
    "Score has penalties for a non-perfect schedule."
    penalty = 50 * len(sched)                            # More rounds are worse (avoid empty courts)
    penalty += 1000 * sum(len(players(game)) != 4        # A game should have 4 players!
                          for round in sched for game in round)
    penalty += 1 * sum(abs(c - 2) ** 3 + 8 * (c == 0)    # Try to play everyone twice
                       for c in opponents(sched).values())
    return -penalty

def opponents(sched):
    "A Counter of {(player, opponent): times_played}."
    return Counter(pairing(p1, p2)
                   for round in sched for A, B in round for p1 in A for p2 in B)

def alter(sched):
    "Modify a schedule by swapping two pairs."
    games = [Game(game) for round in sched for game in round]
    G = len(games)
    i, j = random.sample(range(G), 2)                    # index into games
    a, b = random.choice((0, 1)), random.choice((0, 1))  # index into each game
    games[i][a], games[j][b] = games[j][b], games[i][a]
    return Schedule(games, sched.courts)

def report(sched):
    "Print information about this schedule."
    for i, round in enumerate(sched, 1):
        print('Round {}: {}'.format(i, '; '.join('{} vs {}'.format(*g) for g in round)))
    games = sum(sched, ())
    P = len(players(sched))
    print('\n{} games in {} rounds for {} players'.format(len(games), len(sched), P))
    opp = opponents(sched)
    fmt = ('{:2X}|' + P * ' {}' + ' {}').format
    print('Number of times each player plays against each opponent:\n')
    print(' |', *map('{:X}'.format, range(P)), ' Total')
    print('--+' + '--' * P + ' -----')
    for row in range(P):
        counts = [opp[pairing(row, col)] for col in range(P)]
        print(fmt(row, *[c or '-' for c in counts], sum(counts) // 2))
```
# 8 Player Tournament
I achieved (in a previous run) a perfect schedule for 8 players: the 14 games fit into 7 rounds, each player partners with each other once, and plays each individual opponent twice:
```
report([
([[1, 6], [2, 4]], [[3, 5], [7, 0]]),
([[1, 5], [3, 6]], [[2, 0], [4, 7]]),
([[2, 3], [6, 0]], [[4, 5], [1, 7]]),
([[4, 6], [3, 7]], [[1, 2], [5, 0]]),
([[1, 0], [6, 7]], [[3, 4], [2, 5]]),
([[2, 6], [5, 7]], [[1, 4], [3, 0]]),
([[2, 7], [1, 3]], [[4, 0], [5, 6]]) ])
```
# 9 Player Tournament
For 9 players, I can fit the 18 games into 9 rounds, but some players play each other 1 or 3 times:
```
report([
([[1, 7], [4, 0]], [[3, 5], [2, 6]]),
([[2, 7], [1, 3]], [[4, 8], [6, 0]]),
([[5, 0], [1, 6]], [[7, 8], [3, 4]]),
([[7, 0], [5, 8]], [[1, 2], [4, 6]]),
([[3, 8], [1, 5]], [[2, 0], [6, 7]]),
([[1, 4], [2, 5]], [[3, 6], [8, 0]]),
([[5, 6], [4, 7]], [[1, 8], [2, 3]]),
([[1, 0], [3, 7]], [[2, 8], [4, 5]]),
([[3, 0], [2, 4]], [[6, 8], [5, 7]]) ])
```
# 10 Player Tournament
With *P*=10 there is an odd number of pairings (45), so two players necessarily play one game less than the other players. Let's see what kind of schedule we can come up with:
```
%time report(hillclimb(P=10))
```
In this schedule several players never play each other; it may be possible to improve on that (in another run that has better luck with random numbers).
# 16 Player Tournament
Let's jump to 16 players on 4 courts (this will take a while):
```
%time report(hillclimb(P=16, C=4))
```
We get a pretty good schedule, although it takes 19 rounds rather than the 15 it would take if every court was filled, and again there are some players who never face each other.
# Introduction #
In the previous lesson we looked at our first model-based method for feature engineering: clustering. In this lesson we look at our next: principal component analysis (PCA). Just like clustering is a partitioning of the dataset based on proximity, you could think of PCA as a partitioning of the variation in the data. PCA is a great tool to help you discover important relationships in the data and can also be used to create more informative features.
(Technical note: PCA is typically applied to [standardized](https://www.kaggle.com/alexisbcook/scaling-and-normalization) data. With standardized data "variation" means "correlation". With unstandardized data "variation" means "covariance". All data in this course will be standardized before applying PCA.)
# Principal Component Analysis #
In the [*Abalone*](https://www.kaggle.com/rodolfomendes/abalone-dataset) dataset are physical measurements taken from several thousand Tasmanian abalone. (An abalone is a sea creature much like a clam or an oyster.) We'll just look at a couple features for now: the `'Height'` and `'Diameter'` of their shells.
You could imagine that within this data are "axes of variation" that describe the ways the abalone tend to differ from one another. Pictorially, these axes appear as perpendicular lines running along the natural dimensions of the data, one axis for each original feature.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/rr8NCDy.png" width=300, alt="">
<figcaption style="textalign: center; font-style: italic"><center>
</center></figcaption>
</figure>
Often, we can give names to these axes of variation. The longer axis we might call the "Size" component: small height and small diameter (lower left) contrasted with large height and large diameter (upper right). The shorter axis we might call the "Shape" component: small height and large diameter (flat shape) contrasted with large height and small diameter (round shape).
Notice that instead of describing abalones by their `'Height'` and `'Diameter'`, we could just as well describe them by their `'Size'` and `'Shape'`. This, in fact, is the whole idea of PCA: instead of describing the data with the original features, we describe it with its axes of variation. The axes of variation become the new features.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/XQlRD1q.png" width=600 alt="">
<figcaption style="text-align: center; font-style: italic"><center>The principal components become the new features by a rotation of the dataset in the feature space.
</center></figcaption>
</figure>
The new features PCA constructs are actually just linear combinations (weighted sums) of the original features:
```
df["Size"] = 0.707 * X["Height"] + 0.707 * X["Diameter"]
df["Shape"] = 0.707 * X["Height"] - 0.707 * X["Diameter"]
```
These new features are called the **principal components** of the data. The weights themselves are called **loadings**. There will be as many principal components as there are features in the original dataset: if we had used ten features instead of two, we would have ended up with ten components.
A component's loadings tell us what variation it expresses through signs and magnitudes:
| Features \ Components | Size (PC1) | Shape (PC2) |
|-----------------------|------------|-------------|
| Height | 0.707 | 0.707 |
| Diameter | 0.707 | -0.707 |
This table of loadings is telling us that in the `Size` component, `Height` and `Diameter` vary in the same direction (same sign), but in the `Shape` component they vary in opposite directions (opposite sign). In each component, the loadings are all of the same magnitude and so the features contribute equally in both.
PCA also tells us the *amount* of variation in each component. We can see from the figures that there is more variation in the data along the `Size` component than along the `Shape` component. PCA makes this precise through each component's **percent of explained variance**.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/xWTvqDA.png" width=600 alt="">
<figcaption style="text-align: center; font-style: italic"><center> Size accounts for about 96% and the Shape for about 4% of the variance between Height and Diameter.
</center></figcaption>
</figure>
The `Size` component captures the majority of the variation between `Height` and `Diameter`. It's important to remember, however, that the amount of variance in a component doesn't necessarily correspond to how good it is as a predictor: it depends on what you're trying to predict.
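As a minimal sketch of how these quantities come out of scikit-learn (synthetic two-feature data standing in for `Height`/`Diameter`, not the actual *Abalone* set): after standardizing, `PCA.components_` holds the loadings and `explained_variance_ratio_` the percent of explained variance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two correlated features playing the role of Height/Diameter
size = rng.normal(size=500)
shape = 0.2 * rng.normal(size=500)
X = np.column_stack([size + shape, size - shape])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize first

pca = PCA().fit(X)
print(pca.components_)                # rows are components, columns are features
print(pca.explained_variance_ratio_)  # most variance lies on the "Size"-like axis
```

With two standardized features, the loadings come out at magnitude 0.707 (up to sign), matching the table above.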
# PCA for Feature Engineering #
There are two ways you could use PCA for feature engineering.
The first way is to use it as a descriptive technique. Since the components tell you about the variation, you could compute the MI scores for the components and see what kind of variation is most predictive of your target. That could give you ideas for kinds of features to create -- a product of `'Height'` and `'Diameter'` if `'Size'` is important, say, or a ratio of `'Height'` and `'Diameter'` if `Shape` is important. You could even try clustering on one or more of the high-scoring components.
The second way is to use the components themselves as features. Because the components expose the variational structure of the data directly, they can often be more informative than the original features. Here are some use-cases:
- **Dimensionality reduction**: When your features are highly redundant (*multicollinear*, specifically), PCA will partition out the redundancy into one or more near-zero variance components, which you can then drop since they will contain little or no information.
- **Anomaly detection**: Unusual variation, not apparent from the original features, will often show up in the low-variance components. These components could be highly informative in an anomaly or outlier detection task.
- **Noise reduction**: A collection of sensor readings will often share some common background noise. PCA can sometimes collect the (informative) signal into a smaller number of features while leaving the noise alone, thus boosting the signal-to-noise ratio.
- **Decorrelation**: Some ML algorithms struggle with highly-correlated features. PCA transforms correlated features into uncorrelated components, which could be easier for your algorithm to work with.
PCA basically gives you direct access to the correlational structure of your data. You'll no doubt come up with applications of your own!
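A hedged sketch of the dimensionality-reduction use-case (synthetic data, not the course datasets): when one feature nearly duplicates another, the trailing component has near-zero variance, and keeping only the components needed for, say, 99% of the variance drops it.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(300, 2))
# Third feature is almost a copy of the first -> multicollinearity
X = np.column_stack([base, base[:, 0] + 0.01 * rng.normal(size=300)])

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
# smallest number of components explaining at least 99% of the variance
n_keep = int(np.searchsorted(cumulative, 0.99) + 1)
X_reduced = PCA(n_components=n_keep).fit_transform(X)
print(n_keep, X_reduced.shape)
```

Here the redundant third column is partitioned into a near-zero variance component that can be dropped with essentially no information loss.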
<blockquote style="margin-right:auto; margin-left:auto; background-color: #ebf9ff; padding: 1em; margin:24px;">
<strong>PCA Best Practices</strong><br>
There are a few things to keep in mind when applying PCA:
<ul>
<li> PCA only works with numeric features, like continuous quantities or counts.
<li> PCA is sensitive to scale. It's good practice to standardize your data before applying PCA, unless you know you have good reason not to.
<li> Consider removing or constraining outliers, since they can have an undue influence on the results.
</ul>
</blockquote>
# Example - 1985 Automobiles #
In this example, we'll return to our [*Automobile*](https://www.kaggle.com/toramky/automobile-dataset) dataset and apply PCA, using it as a descriptive technique to discover features. We'll look at other use-cases in the exercise.
This hidden cell loads the data and defines the functions `plot_variance` and `make_mi_scores`.
```
#$HIDE_INPUT$
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from IPython.display import display
from sklearn.feature_selection import mutual_info_regression
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
def plot_variance(pca, width=8, dpi=100):
# Create figure
fig, axs = plt.subplots(1, 2)
n = pca.n_components_
grid = np.arange(1, n + 1)
# Explained variance
evr = pca.explained_variance_ratio_
axs[0].bar(grid, evr)
axs[0].set(
xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0)
)
# Cumulative Variance
cv = np.cumsum(evr)
axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-")
axs[1].set(
xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0)
)
# Set up figure
fig.set(figwidth=8, dpi=100)
return axs
def make_mi_scores(X, y, discrete_features):
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
df = pd.read_csv("../input/fe-course-data/autos.csv")
```
We've selected four features that cover a range of properties. Each of these features also has a high MI score with the target, `price`. We'll standardize the data since these features aren't naturally on the same scale.
```
features = ["highway_mpg", "engine_size", "horsepower", "curb_weight"]
X = df.copy()
y = X.pop('price')
X = X.loc[:, features]
# Standardize
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
```
Now we can fit scikit-learn's `PCA` estimator and create the principal components. You can see here the first few rows of the transformed dataset.
```
from sklearn.decomposition import PCA
# Create principal components
pca = PCA()
X_pca = pca.fit_transform(X_scaled)
# Convert to dataframe
component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])]
X_pca = pd.DataFrame(X_pca, columns=component_names)
X_pca.head()
```
After fitting, the `PCA` instance contains the loadings in its `components_` attribute. (Terminology for PCA is inconsistent, unfortunately. We're following the convention that calls the transformed columns in `X_pca` the *components*, which otherwise don't have a name.) We'll wrap the loadings up in a dataframe.
```
loadings = pd.DataFrame(
pca.components_.T, # transpose the matrix of loadings
columns=component_names, # so the columns are the principal components
index=X.columns, # and the rows are the original features
)
loadings
```
Recall that the signs and magnitudes of a component's loadings tell us what kind of variation it has captured. The first component (`PC1`) shows a contrast between large, powerful vehicles with poor gas mileage, and smaller, more economical vehicles with good gas mileage. We might call this the "Luxury/Economy" axis. The next figure shows that our four chosen features mostly vary along the Luxury/Economy axis.
```
# Look at explained variance
plot_variance(pca);
```
Let's also look at the MI scores of the components. Not surprisingly, `PC1` is highly informative, though the remaining components, despite their small variance, still have a significant relationship with `price`. Examining those components could be worthwhile to find relationships not captured by the main Luxury/Economy axis.
```
mi_scores = make_mi_scores(X_pca, y, discrete_features=False)
mi_scores
```
The third component shows a contrast between `horsepower` and `curb_weight` -- sports cars vs. wagons, it seems.
```
# Show dataframe sorted by PC3
idx = X_pca["PC3"].sort_values(ascending=False).index
cols = ["make", "body_style", "horsepower", "curb_weight"]
df.loc[idx, cols]
```
To express this contrast, let's create a new ratio feature:
```
df["sports_or_wagon"] = X.curb_weight / X.horsepower
sns.regplot(x="sports_or_wagon", y='price', data=df, order=2);
```
# Your Turn #
[**Improve your feature set**](#$NEXT_NOTEBOOK_URL$) by decomposing the variation in *Ames Housing* and use principal components to detect outliers.
# 3D magnetic modeling of a sphere
## Importing the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import sphere_mag
```
## Generating the coordinate-system parameters
```
Nx = 100
Ny = 50
area = [-1000.,1000.,-1000.,1000.]
shape = (Nx,Ny)
x = np.linspace(area[0],area[1],num=Nx)
y = np.linspace(area[2],area[3],num=Ny)
yc,xc = np.meshgrid(y,x)
voo = -200.
zc = voo*np.ones_like(xc)
coordenadas = np.array([yc.ravel(),xc.ravel(),zc.ravel()])
```
## Generating the sphere parameters
```
intensidades = np.array([50.])
direcoes = np.array([[-30.,-30.]])
modelo = np.array([[0,0,200,100]])
```
## Computing the magnetic field components
```
bz = sphere_mag.magnetics(coordenadas,modelo,intensidades,direcoes,field="b_z")
bx = sphere_mag.magnetics(coordenadas,modelo,intensidades,direcoes,field="b_x")
by = sphere_mag.magnetics(coordenadas,modelo,intensidades,direcoes,field="b_y")
```
### Approximate total-field anomaly
```
I0,D0 = -20.,-20.
j0x = np.cos(np.deg2rad(I0))*np.cos(np.deg2rad(D0))
j0y = np.cos(np.deg2rad(I0))*np.sin(np.deg2rad(D0))
j0z = np.sin(np.deg2rad(I0))
tfa = j0x*bx + j0y*by + j0z*bz
```
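The cell above builds the main-field unit vector from the inclination `I0` and declination `D0` via direction cosines. A minimal standalone check (plain NumPy, same cos/sin convention as the cell above) that those cosines do form a unit vector:

```python
import numpy as np

def direction_cosines(inc_deg, dec_deg):
    """Unit vector components from inclination/declination, using the
    same cos/sin convention as the cell above."""
    inc, dec = np.deg2rad(inc_deg), np.deg2rad(dec_deg)
    return np.array([np.cos(inc) * np.cos(dec),
                     np.cos(inc) * np.sin(dec),
                     np.sin(inc)])

j0 = direction_cosines(-20.0, -20.0)
print(j0, np.linalg.norm(j0))  # norm should be 1
```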
## Visualizing the computed data
```
title_font = 18
bottom_font = 15
plt.close('all')
plt.figure(figsize=(10,10), tight_layout=True)
plt.subplot(2,2,1)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('Bx (nT)', fontsize=title_font)
plt.pcolor(yc,xc,bx.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(2,2,2)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('By (nT)', fontsize=title_font)
plt.pcolor(yc,xc,by.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(2,2,3)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('Bz (nT)', fontsize=title_font)
plt.pcolor(yc,xc,bz.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(2,2,4)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('TFA (nT)', fontsize=title_font)
plt.pcolor(yc,xc,tfa.reshape(shape),shading='auto',cmap='jet')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
file_name = 'images/forward_modeling_mag_sphere_mag_tot_HS'
plt.savefig(file_name+'.png',dpi=300)
plt.show()
```
```
import numpy as np
import pylab as plt
import swyft
swyft.set_verbosity(0)
import torch
from scipy import stats
%load_ext autoreload
%autoreload 2
DEVICE = 'cuda'
```
## Torus model
```
def model(params, center = np.array([0.6, 0.8])):
a, b, c = params['a'], params['b'], params['c']
r = ((a-center[0])**2+(b-center[1])**2)**0.5 # Return radial distance from center
x = np.array([a, r, c])
return dict(x=x)
def noise(obs, params, noise = np.array([0.03, 0.005, 0.2])):
x = obs['x']
n = np.random.randn(*x.shape)*noise
return dict(x = x + n)
par0 = dict(a=0.57, b=0.8, c=1.0)
obs0 = model(par0) # Using Asimov data
prior = swyft.Prior({"a": ["uniform", 0., 1.], "b": ["uniform", 0., 1.], "c": ["uniform", 0., 1.]})
s = swyft.NestedRatios(model, prior, noise = noise, obs = obs0, device = DEVICE, Ninit = 3000, Nmax = 15000)
s.run(train_args = dict(lr_schedule = [1e-3, 1e-4, 1e-5]), max_rounds = 10, keep_history = True)
plt.figure(figsize = (15, 10))
N = len(s._history)
for i in range(N):
params = s._history[i]['marginals']._prior.sample(1000)
r = s._history[i]['marginals']._re.lnL(s._obs, params)[('a',)]
rmax = r.max()
plt.scatter(params['a'], r, label = str(i))
plt.legend()
s2 = []
for Ninit in [1000, 3000, 10000, 30000]:
st = swyft.NestedRatios(model, prior, noise = noise, obs = obs0, device = DEVICE, Ninit = Ninit, Nmax = 15000)
st.run(train_args = dict(lr_schedule = [1e-3, 1e-4, 1e-5]), max_rounds = 1, keep_history = True)
s2.append(st)
st = swyft.NestedRatios(model, prior, noise = noise, obs = obs0, device = DEVICE, Ninit = 100000, Nmax = 15000)
st.run(train_args = dict(lr_schedule = [1e-3, 1e-4, 1e-5]), max_rounds = 1, keep_history = True)
s2.append(st)
plt.figure(figsize = (15, 10))
for i in range(5):
params = s2[i]._history[0]['marginals']._prior.sample(10000)
r = s2[i]._history[0]['marginals']._re.lnL(s._obs, params)[('a',)]
rmax = r.max()
plt.scatter(params['a'], r, label = str(i))
plt.axhline(0)
plt.legend()
plt.ylim([-20, 5])
plt.figure(figsize = (15, 10))
for i in range(5):
params = s2[i]._history[0]['marginals']._prior.sample(10000)
r = s2[i]._history[0]['marginals']._re.lnL(s._obs, params)[('b',)]
rmax = r.max()
plt.scatter(params['b'], r, label = str(i))
plt.axhline(0)
plt.legend()
plt.ylim([-20, 5])
plt.figure(figsize = (15, 10))
for i in range(5):
params = s2[i]._history[0]['marginals']._prior.sample(10000)
r = s2[i]._history[0]['marginals']._re.lnL(s._obs, params)[('c',)]
rmax = r.max()
plt.scatter(params['c'], r, label = str(i))
plt.axhline(0)
plt.legend()
plt.ylim([-10, 3])
```
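A quick standalone illustration (plain NumPy, independent of swyft) of why the posterior for `(a, b)` is ring-shaped: the model above observes only the radial distance from `center`, so every point on a circle around `(0.6, 0.8)` yields the same noiseless observation.

```python
import numpy as np

center = np.array([0.6, 0.8])
r0 = 0.05  # any fixed radius in the (a, b) plane

# Eight points on the circle of radius r0 around the center
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
points = center + r0 * np.column_stack([np.cos(theta), np.sin(theta)])

# All of them map to the same radial-distance observation
r = np.hypot(points[:, 0] - center[0], points[:, 1] - center[1])
print(np.allclose(r, r0))
```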
<font style="font-size:96px; font-weight:bolder; color:#0040a0"><img src="http://montage.ipac.caltech.edu/docs/M51_logo.png" alt="M" style="float: left; padding: 25px 30px 25px 0px;" /></font>
<i><b>Montage</b> is an astronomical image toolkit with components for reprojection, background matching, coaddition and visualization of FITS files. It can be used as a set of command-line tools (Linux, OS X and Windows), C library calls (Linux and OS X) and as Python binary extension modules.
The Montage source is written in ANSI-C and code can be downloaded from GitHub ( https://github.com/Caltech-IPAC/Montage ). The Python package can be installed from PyPI ("</i>pip install MontagePy<i>"). The package has no external dependencies.</i> See http://montage.ipac.caltech.edu/ for details on the design and applications of Montage.
# MontagePy.main modules: mProject
The Montage modules are generally used as steps in a workflow to create a mosaic of a set of input images. These steps are: determine the geometry of the mosaic on the sky, reproject the images to a common frame and spatial sampling; rectify the backgrounds to a common level, and coadd the images into a mosaic. This page illustrates the use of one Montage module, mProject, which is one of the modules used to reproject images.
Visit <a href="Mosaic.ipynb">Building a Mosaic with Montage</a> to see how mProject is used as part of a workflow to create a mosaic (or the <a href="Mosaic_oneshot.ipynb"> one shot </a> version if you just want to see the commands). See the complete list of Montage Notebooks <a href="http://montage.ipac.caltech.edu/MontageNotebooks">here</a>.
```
from MontagePy.main import mProject, mViewer
help(mProject)
```
## mProject Example
mProject is one of four modules focused on the task of reprojecting an astronomical image. It is totally general (any projection and coordinate system) and flux-conserving but is also the slowest. The algorithm is based on pixel overlap in spherical sky coordinates rather than in the input or output planar pixel space.
mProject has a number of extra controls for things like toggling from the normal flux-density mode to total energy mode or excluding a border (image borders often have bad pixels). But the basic inputs are a FITS image and a FITS header describing the output image we want. In all cases, the only output is a FITS image with the data from the input resampled to the output header pixel space.
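To make the flux-conserving bookkeeping concrete, here is a toy one-dimensional sketch (not Montage code, and a gross simplification of the spherical-overlap computation): each output pixel accumulates flux weighted by overlap area along with the total overlap area, and the ratio gives the normalized output value.

```python
import numpy as np

def reproject_1d(in_flux, overlaps):
    """overlaps[i] = list of (output_index, area) pairs for input pixel i.
    Accumulate flux*area and area per output pixel, then normalize --
    the same accumulate-and-normalize scheme mProject describes."""
    n_out = 1 + max(j for ovl in overlaps for j, _ in ovl)
    flux = np.zeros(n_out)
    area = np.zeros(n_out)
    for f, ovl in zip(in_flux, overlaps):
        for j, a in ovl:
            flux[j] += f * a
            area[j] += a
    # Pixels with no coverage stay blank (NaN)
    return np.where(area > 0, flux / area, np.nan)

# Two input pixels (values 1 and 2), each straddling two output pixels
out = reproject_1d([1.0, 2.0], [[(0, 0.5), (1, 0.5)], [(1, 0.5), (2, 0.5)]])
print(out)
```

The normalization by accumulated area is what automatically corrects for multiple coverage, as the module description below notes.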
The input FITS header (actually an ASCII file that looks like a FITS header but with newlines and unpadded line lengths) can be produced in a number of ways. There are Montage tools to take an image list (or point source list) and determine a bounding box (mMakeHdr) or just a location and size (mHdr). You can also pull the header off another file (mGetHdr) if you want to build a matching mosaic from other data. Or you can just create the output header by hand (<i>e.g.</i>, a simple all-sky Aitoff projection).
Here we have pulled the header from the input image and edited it by hand to modify the rotation by 30 degrees.
```
rtn = mProject('M17/raw/2mass-atlas-990502s-j1340186.fits',
'work/M17/2mass-atlas-990502s-j1340186_project.fits',
'M17/rotated.hdr')
print(rtn)
```
## Before and After
Here are the original image and the reprojected one:
```
from IPython.display import Image
rtn = mViewer("-ct 1 -gray M17/raw/2mass-atlas-990502s-j1340186.fits \
-2s max gaussian-log -out work/M17/2mass-atlas-990502s-j1340186.png",
"", mode=2)
print(rtn)
Image(filename='work/M17/2mass-atlas-990502s-j1340186.png')
rtn = mViewer("-ct 1 -gray work/M17/2mass-atlas-990502s-j1340186_project.fits \
-2s max gaussian-log -out work/M17/2mass-atlas-990502s-j1340186_project.png",
"", mode=2)
print(rtn)
Image(filename='work/M17/2mass-atlas-990502s-j1340186_project.png')
```
## mProject Error Handling
If mProject encounters an error, the return structure will just have two elements: a status of 1 ("error") and a message string that tries to diagnose the reason for the error.
For instance, if the user asks for an image that doesn't exist:
```
rtn = mProject('M17/raw/unknown.fits',
'work/M17/2mass-atlas-990502s-j1340186_project.fits',
'M17/rotated.hdr')
print(rtn)
```
## Classic Montage: mProject as a Stand-Alone Program
### Unix/Windows Command-line Arguments
<p>mProject can also be run as a command-line tool in Linux, OS X, and Windows:</p>
<p><tt>
<b>Usage:</b> mProject [-z factor][-d level][-s statusfile][-h hdu][-x scale][-w weightfile][-W fixed-weight][-t threshold][-X(expand)][-b border-string][-e(nergy-mode)][-f(ull-region)] in.fits out.fits hdr.template
</tt></p>
<p> </p>
## mProject as a Library Call
If you are writing in C/C++ on Linux or OSX, mProject can be accessed as a library function:
<pre>
/*-***********************************************************************/
/* */
/* mProject */
/* */
/* Montage is a set of general reprojection / coordinate-transform / */
/* mosaicking programs. Any number of input images can be merged into */
/* an output FITS file. The attributes of the input are read from the */
/* input files; the attributes of the output are read a combination of */
/* the command line and a FITS header template file. */
/* */
/* This module, mProject, processes a single input image and */
/* projects it onto the output space. It's output is actually a pair */
/* of FITS files, one for the sky flux the other for the fractional */
/* pixel coverage. Once this has been done for all input images, */
/* mAdd can be used to coadd them into a composite output. */
/* */
/* Each input pixel is projected onto the output pixel space and the */
/* exact area of overlap is computed. Both the total 'flux' and the */
/* total sky area of input pixels added to each output pixel is */
/* tracked, and the flux is appropriately normalized before writing to */
/* the final output file. This automatically corrects for any multiple */
/* coverages that may occur. */
/* */
/* The input can come from from arbitrarily disparate sources. It is */
/* assumed that the flux scales in the input images match, but this is */
/* not required (leading to some interesting combinations). */
/* */
/* char *input_file FITS file to reproject */
/* char *output_file Reprojected FITS file */
/* char *template_file FITS header file used to define the desired */
/* output */
/* */
/* int hdu Optional HDU offset for input file */
/* char *weight_file Optional pixel weight FITS file (must match */
/* input) */
/* */
/* double fixedWeight A weight value used for all pixels */
/* double threshold Pixels with weights below this level treated */
/* as blank */
/* */
/* char *borderstr Optional string that contains either a border */
/* width or comma-separated 'x1,y1,x2,y2, ...' */
/* pairs defining a pixel region polygon where */
/* we keep only the data inside. */
/* */
/* double drizzle Optional pixel area 'drizzle' factor */
/* double fluxScale Scale factor applied to all pixels */
/* int energyMode Pixel values are total energy rather than */
/* energy density */
/* int expand Expand output image area to include all of */
/* the input pixels */
/* int fullRegion Do not 'shrink-wrap' output area to non-blank */
/* pixels */
/* int debug Debugging output level */
/* */
/*************************************************************************/
struct mProjectReturn *mProject(char *input_file, char *ofile, char *template_file, int hduin,
char *weight_file, double fixedWeight, double threshold, char *borderstr,
double drizzle, double fluxScale, int energyMode, int expand, int fullRegion,
int debugin)
</pre>
<p><b>Return Structure</b></p>
<pre>
struct mProjectReturn
{
int status; // Return status (0: OK, 1:ERROR)
char msg [1024]; // Return message (for error return)
char json[4096]; // Return parameters as JSON string
double time; // Run time (sec)
};
</pre>
<a href="https://colab.research.google.com/github/JSJeong-me/KOSA-Big-Data_Vision/blob/main/Roboflow_CLIP_Zero_Shot_Cake.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# How to use CLIP Zero-Shot on your own classification dataset
This notebook provides an example of how to benchmark CLIP's zero shot classification performance on your own classification dataset.
[CLIP](https://openai.com/blog/clip/) is a new zero-shot image classifier released by OpenAI that has been trained on 400 million text/image pairs across the web. CLIP uses these learnings to make predictions based on a flexible span of possible classification categories.
CLIP is zero-shot, which means **no training is required**.
Try it out on your own task here!
Be sure to experiment with various text prompts to unlock the richness of CLIP's pretraining procedure.
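One simple way to explore prompts (a sketch; the label list mirrors the `_tokenization.txt` example used later in this notebook and is otherwise hypothetical) is to wrap each class name in a few templates before tokenizing:

```python
# CLIP scores often improve when a bare label like "daisy" becomes a
# sentence such as "a photo of a daisy".
templates = [
    "{}",
    "a photo of a {}",
    "a close-up photo of a {}",
]
class_names = ["daisy", "dandelion", "cake"]  # example labels

prompt_sets = {t: [t.format(c) for c in class_names] for t in templates}
for t, prompts in prompt_sets.items():
    print(prompts)
```

Each candidate prompt list can then be passed to `clip.tokenize` (as done below) and the resulting accuracies compared.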

# Download and Install CLIP Dependencies
```
#installing some dependencies, CLIP was release in PyTorch
import subprocess
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
import numpy as np
import torch
print("Torch version:", torch.__version__)
!ls -l
#clone the CLIP repository
!git clone https://github.com/openai/CLIP.git
%cd CLIP
```
# Download Classification Data or Object Detection Data
We will download the [public flowers classification dataset](https://public.roboflow.com/classification/flowers_classification) from Roboflow. The data will come out as folders broken into train/valid/test splits and separate folders for each class label.
You can easily download your own dataset from Roboflow in this format, too.
We made a conversion from object detection to CLIP text prompts in Roboflow, too, if you want to try that out.
To get your data into Roboflow, follow the [Getting Started Guide](https://blog.roboflow.ai/getting-started-with-roboflow/).
```
!mkdir cake
from google.colab import drive
drive.mount('/content/drive')
%cd ..
!cp /content/drive/MyDrive/Flowers_Classification.v3-augmented.clip.zip .
!pwd
#download classification data
#replace with your link
!curl -L "https://public.roboflow.com/ds/vPLCmk4Knv?key=tCrKLQNpTi" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
!unzip ./Flowers.zip
import os
# the classes and images we want to test are stored in folders in the test set
class_names = os.listdir('./test/')
class_names.remove('_tokenization.txt')
class_names
class_names
!pwd
#we auto generate some example tokenizations in Roboflow but you should edit this file to try out your own prompts
#CLIP gets a lot better with the right prompting!
#be sure the tokenizations are in the same order as your class_names above!
%cat ./test/_tokenization.txt
#edit your prompts as you see fit here
%%writefile ./test/_tokenization.txt
daisy
dandelion
cake
candidate_captions = []
with open('./test/_tokenization.txt') as f:
candidate_captions = f.read().splitlines()
!pwd
%cd ./CLIP/
```
# Run CLIP inference on your classification dataset
```
import torch
import clip
from PIL import Image
import glob
def argmax(iterable):
return max(enumerate(iterable), key=lambda x: x[1])[0]
device = "cuda" if torch.cuda.is_available() else "cpu"
model, transform = clip.load("ViT-B/32", device=device)
correct = []
# define our target classifications; you should experiment with these strings of text as you see fit, though make sure they are in the same order as your class names above
text = clip.tokenize(candidate_captions).to(device)
for cls in class_names:
class_correct = []
test_imgs = glob.glob('./test/' + cls + '/*.jpg')
for img in test_imgs:
print(img)
image = transform(Image.open(img)).unsqueeze(0).to(device)
with torch.no_grad():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
logits_per_image, logits_per_text = model(image, text)
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
pred = class_names[argmax(list(probs)[0])]
print(pred)
if pred == cls:
correct.append(1)
class_correct.append(1)
else:
correct.append(0)
class_correct.append(0)
print('accuracy on class ' + cls + ' is :' + str(sum(class_correct)/len(class_correct)))
print('accuracy on all is : ' + str(sum(correct)/len(correct)))
#Hope you enjoyed!
#As always, happy inferencing
#Roboflow
```
- import lib
```
# What version of Python do you have?
import sys
from collections import Counter
import tensorflow.keras
import pandas as pd
import sklearn as sk
from imblearn.over_sampling import SMOTE
from imblearn.over_sampling import SMOTENC
import tensorflow as tf
import seaborn as sns
import math
import matplotlib.pyplot as plt
import numpy as np
import re
import demoji
import datetime
print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
```
- read data
```
tweet_vector_df = pd.read_csv("./data_vectors/vectors-100-2019-02-06.csv")
```
- SVD
```
u, s, vh = np.linalg.svd(tweet_vector_df, full_matrices=True)
num_of_bars = 50
_ = plt.bar(np.arange(num_of_bars), s[:num_of_bars])
threshold = 15
basis_events = []
for i in range(len(s)):
if s[i] < threshold: break
if s[i] >= threshold: basis_events.append(vh[i])
len(basis_events)
```
- project each non-basis vector onto the basis vectors and aggregate them
```
aggregated_vector = [0 for i in range(768)]
for idx, tweet in tweet_vector_df.iterrows():
# iterate through each basis event
for basis_event in basis_events:
norm = np.linalg.norm(basis_event)
aggregated_vector = np.add(aggregated_vector, np.multiply(np.dot(tweet.to_list(), basis_event) / norm, basis_event))
# print(np.dot(tweet.to_list(), basis_event))
# print("==============")
# print(aggregated_vector)
```
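As a standalone sanity check of the projection step (independent of the tweet data): `np.linalg.svd` returns `vh` with orthonormal rows, so the division by `norm` above is effectively a no-op, and summing the projections onto *all* rows reconstructs the original vector exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
_, _, vh = np.linalg.svd(A, full_matrices=True)  # vh has orthonormal rows

v = rng.normal(size=5)
recon = np.zeros(5)
for basis in vh:
    norm = np.linalg.norm(basis)  # ~1 for SVD rows, so dividing changes nothing
    recon += np.dot(v, basis) / norm * basis
print(np.allclose(recon, v))
```

Keeping only the rows with large singular values (as the cells above do) therefore amounts to projecting onto the dominant subspace.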
- assign sentiment class
```
def sentiment_class(num):
if num >= 1/3:
return 1
elif num >=-1/3:
return 0
else:
return -1
def sentiment_class2(num):
if num >= 1/15 or num < -1/15:
return 11
else:
return 22
```
- function to aggregate vector with projection
```
def aggregator(date):
# read date
tweet_vector_df = pd.read_csv("./data_vectors/vectors-10000-" + date +".csv")
tweet_vector_df['sentiment_class'] = tweet_vector_df.sentiment.apply(sentiment_class)
# SMOTE oversampling
all_rows = tweet_vector_df.values.tolist()
X = [row[: -1] for row in all_rows]
y = [row[-1] for row in all_rows]
sentiment_count = Counter(y)
if sentiment_count[0] > sentiment_count[1] + sentiment_count[-1]:
if min(sentiment_count.values()) > 3:
sm = SMOTE(random_state=2, k_neighbors=3)
X_res, y_res = sm.fit_resample(X, y)
# re-assemble back to dataframe
tweet_vector_df = pd.DataFrame(X_res)
tweet_vector_df = tweet_vector_df.rename(columns={768: "sentiment_score"})
tweet_vector_df['sentiment_class'] = y_res
# print(tweet_vector_df.shape, tweet_vector_df.columns)
# svd
u, s, vh = np.linalg.svd(tweet_vector_df, full_matrices=True)
# generate basis events
    threshold = 15
    basis_events = []
    for i in range(len(s)):
        if s[i] < threshold: break
        if s[i] >= threshold: basis_events.append(vh[i])
    # projection
    aggregated_vector = [0 for i in range(797)]
    for idx, tweet in tweet_vector_df.iterrows():
        # iterate through each basis event
        for basis_event in basis_events:
            norm = np.linalg.norm(basis_event)
            aggregated_vector = np.add(aggregated_vector, np.multiply(np.dot(tweet.to_list(), basis_event) / norm, basis_event))
    return np.concatenate((np.array([date]), aggregated_vector))

# plain addition version of aggregator
def aggregator(date):
    # read date
    tweet_vector_df = pd.read_csv("./data_vectors/vectors-10000-" + date + ".csv")
    tweet_vector_df['sentiment_class'] = tweet_vector_df.sentiment.apply(sentiment_class)
    # trigger SMOTE oversampling iff neutral ratio too large
    all_rows = tweet_vector_df.values.tolist()
    X = [row[:-1] for row in all_rows]
    y = [row[-1] for row in all_rows]
    sentiment_count = Counter(y)
    if sentiment_count[0] > sentiment_count[1] + sentiment_count[-1]:
        if min(sentiment_count.values()) > 3:
            sm = SMOTE(random_state=2, k_neighbors=3)
            X_res, y_res = sm.fit_resample(X, y)
            # re-assemble back to dataframe
            tweet_vector_df = pd.DataFrame(X_res)
            tweet_vector_df = tweet_vector_df.rename(columns={768: "sentiment_score"})
            tweet_vector_df['sentiment_class'] = y_res
    '''if sentiment_count[1] <= 3 or sentiment_count[-1] <= 3:
        tweet_vector_df['sentiment_class'] = tweet_vector_df.sentiment.apply(sentiment_class2)
        # SMOTE oversampling
        all_rows = tweet_vector_df.values.tolist()
        X = [row[:-1] for row in all_rows]
        y = [row[-1] for row in all_rows]
        sm = SMOTE(random_state=2, k_neighbors=3)
        X_res, y_res = sm.fit_resample(X, y)
        # re-assemble back to dataframe
        tweet_vector_df = pd.DataFrame(X_res)
        tweet_vector_df = tweet_vector_df.rename(columns={768: "sentiment_score"})
        tweet_vector_df['sentiment_class'] = y_res'''
    # print(tweet_vector_df.shape, tweet_vector_df.columns)
    # svd (note: the basis events are computed but unused in this plain-addition variant)
    u, s, vh = np.linalg.svd(tweet_vector_df, full_matrices=True)
    # generate basis events
    threshold = 15
    basis_events = []
    for i in range(len(s)):
        if s[i] < threshold: break
        if s[i] >= threshold: basis_events.append(vh[i])
    # projection (plain addition instead of basis-event projection)
    aggregated_vector = [0 for i in range(tweet_vector_df.shape[1])]
    for idx, tweet in tweet_vector_df.iterrows():
        aggregated_vector = np.add(aggregated_vector, tweet.to_list())
    return np.concatenate((np.array([date]), aggregated_vector))
```
- wrap up
```
# Inclusive on both sides
start_date = datetime.date(2019, 2, 6)
end_date = datetime.date(2020, 2, 6)
time_window = (end_date - start_date).days + 1
sample_df = pd.read_csv("./data_vectors/vectors-10000-2019-02-06.csv")
feature_by_date = pd.DataFrame(columns=['date'] + list(sample_df.columns) + ['sentiment_class'])
for single_date in (start_date + datetime.timedelta(n) for n in range(time_window)):
    this_date = single_date.strftime("%Y-%m-%d")
    new_row_df = pd.DataFrame(aggregator(this_date)).transpose()
    new_row_df = new_row_df.rename(columns={0: "Date"})
    new_row_df.columns = ['date'] + list(sample_df.columns) + ['sentiment_class']
    feature_by_date = pd.concat([feature_by_date, new_row_df], ignore_index=True)
    # overwrite the progress line in place
    sys.stdout.write('\r')
    sys.stdout.write("finished %s" % this_date)
    sys.stdout.flush()
print("\nall finished")
feature_by_date = feature_by_date.rename(columns={"date": "Date"})
feature_by_date.drop(['sentiment_class'], axis=1).to_csv("aggregated_vector.csv", index=False)
feature_by_date
```
# Measuring Quantum Volume
## Introduction
**Quantum Volume (QV)** is a single-number metric that can be measured using a concrete
protocol on near-term quantum computers of modest size. The QV method quantifies
the largest random circuit of equal width and depth that the computer successfully implements.
Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate
sets, and circuit rewriting toolchains are expected to have higher quantum volumes.
## The Quantum Volume Protocol
The QV protocol (see [1]) consists of the steps detailed in the sections below.
(We first import the relevant Qiskit classes for the demonstration.)
```
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import matplotlib.pyplot as plt
#Import Qiskit classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the qv function
import qiskit.ignis.verification.quantum_volume as qv
```
### Step 1: Generate QV sequences
It is well-known that quantum algorithms can be expressed as polynomial-sized quantum circuits built from two-qubit unitary gates. Therefore, a model circuit consists of $d$ layers of random permutations of the qubit labels, followed by random two-qubit gates (from $SU(4)$). When the circuit width $m$ is odd, one of the qubits is idle in each layer.
More precisely, a **QV circuit** with **depth $d$** and **width $m$**, is a sequence $U = U^{(d)}...U^{(2)}U^{(1)}$ of $d$ layers:
$$ U^{(t)} = U^{(t)}_{\pi_t(m'-1),\pi_t(m')} \otimes ... \otimes U^{(t)}_{\pi_t(1),\pi_t(2)} $$
each labeled by times $t = 1 ... d$ and acting on $m' = 2 \lfloor m/2 \rfloor$ qubits.
Each layer is specified by choosing a uniformly random permutation $\pi_t \in S_m$ of the $m$ qubit indices
and sampling each $U^{(t)}_{a,b}$, acting on qubits $a$ and $b$, from the Haar measure on $SU(4)$.
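The layer structure above can be sampled directly. The following is a minimal sketch, independent of the `qv_circuits` helper used below; it assumes SciPy's `unitary_group` for Haar-random unitaries, and obtains an $SU(4)$ matrix by rescaling away the determinant:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)
m = 4  # circuit width

# uniformly random permutation of the m qubit labels
perm = rng.permutation(m)

# pair the permuted qubits and draw a Haar-random SU(4) gate for each pair
layer = []
for a, b in zip(perm[0::2], perm[1::2]):
    U = unitary_group.rvs(4)              # Haar-random U(4)
    U = U / np.linalg.det(U) ** 0.25      # rescale so det(U) = 1, i.e. SU(4)
    layer.append(((int(a), int(b)), U))

print(len(layer))  # m // 2 = 2 two-qubit gates in one layer
```

One such layer is drawn independently for each of the $d$ time steps.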
In the following example we have 6 qubits: Q0, Q1, Q3, Q5, Q7, Q10. We are going to look at subsets of these qubits up to the full set (each volume circuit will have depth equal to the number of qubits in its subset).
```
# qubit_lists: list of list of qubit subsets to generate QV circuits
qubit_lists = [[0,1,3],[0,1,3,5],[0,1,3,5,7],[0,1,3,5,7,10]]
# ntrials: Number of random circuits to create for each subset
ntrials = 50
```
We generate the quantum volume sequences. We start with a small example (so it doesn't take too long to run).
```
qv_circs, qv_circs_nomeas = qv.qv_circuits(qubit_lists, ntrials)
```
As an example, we print the circuit corresponding to the first QV sequence. Note that the ideal circuits are run on the first n qubits (where n is the number of qubits in the subset).
```
#pass the first trial of the nomeas through the transpiler to illustrate the circuit
qv_circs_nomeas[0] = qiskit.compiler.transpile(qv_circs_nomeas[0], basis_gates=['u1','u2','u3','cx'])
print(qv_circs_nomeas[0][0])
```
### Step 2: Simulate the ideal QV circuits
The quantum volume method requires that we know the ideal output for each circuit, so we use the statevector simulator in Aer to get the ideal result.
```
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('statevector_simulator')
ideal_results = []
for trial in range(ntrials):
    print('Simulating trial %d' % trial)
    ideal_results.append(qiskit.execute(qv_circs_nomeas[trial], backend=backend).result())
```
Next, we load the ideal results into a quantum volume fitter
```
qv_fitter = qv.QVFitter(qubit_lists=qubit_lists)
qv_fitter.add_statevectors(ideal_results)
```
### Step 3: Calculate the heavy outputs
To define when a model circuit $U$ has been successfully implemented in practice, we use the *heavy output* generation problem. The ideal output distribution is $p_U(x) = |\langle x|U|0 \rangle|^2$,
where $x \in \{0,1\}^m$ is an observable bit-string.
Consider the set of output probabilities given by the range of $p_U(x)$ sorted in ascending order
$p_0 \leq p_1 \leq \dots \leq p_{2^m-1}$. The median of the set of probabilities is
$p_{med} = (p_{2^{m-1}} + p_{2^{m-1}-1})/2$, and the *heavy outputs* are
$$ H_U = \{ x \in \{0,1\}^m \text{ such that } p_U(x)>p_{med} \}.$$
The heavy output generation problem is to produce a set of output strings such that more than two-thirds are heavy.
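As a concrete illustration of the definition, the median and heavy set for a toy 2-qubit distribution (made-up numbers, not fitter output) can be computed directly:

```python
import numpy as np

# toy ideal output distribution p_U over m = 2 qubits (hypothetical numbers)
p_U = {'00': 0.05, '01': 0.15, '10': 0.30, '11': 0.50}

# median of the sorted probabilities (even count: mean of the two middle values)
probs = np.sort(list(p_U.values()))
p_med = (probs[len(probs) // 2] + probs[len(probs) // 2 - 1]) / 2

# heavy outputs: bit-strings with probability strictly above the median
heavy = {x for x, p in p_U.items() if p > p_med}
print(p_med, heavy)  # median ≈ 0.225; heavy set {'10', '11'}
```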
As an illustration, we print the heavy outputs from various depths and their probabilities (for trial 0):
```
for qubit_list in qubit_lists:
    l = len(qubit_list)
    print('qv_depth_' + str(l) + '_trial_0:', qv_fitter._heavy_outputs['qv_depth_' + str(l) + '_trial_0'])

for qubit_list in qubit_lists:
    l = len(qubit_list)
    print('qv_depth_' + str(l) + '_trial_0:', qv_fitter._heavy_output_prob_ideal['qv_depth_' + str(l) + '_trial_0'])
```
### Step 4: Define the noise model
We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
```
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.02
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
#noise_model = None
```
We can execute the QV sequences either using Qiskit Aer Simulator (with some noise model) or using IBMQ provider, and obtain a list of exp_results.
```
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 1024
exp_results = []
for trial in range(ntrials):
    print('Running trial %d' % trial)
    exp_results.append(qiskit.execute(qv_circs[trial], basis_gates=basis_gates, backend=backend, noise_model=noise_model, backend_options={'max_parallel_experiments': 0}).result())
```
### Step 5: Calculate the average gate fidelity
The *average gate fidelity* between the ideal $m$-qubit unitary $U$ and the executed unitary $U'$ is:
$$ F_{avg}(U,U') = \frac{|Tr(U^{\dagger}U')|^2/2^m+1}{2^m+1}$$
The observed distribution for an implementation $U'$ of model circuit $U$ is $q_U(x)$, and the probability of sampling
a heavy output is:
$$ h_U = \sum_{x \in H_U} q_U(x)$$
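Both quantities are straightforward to compute. A minimal sketch with hypothetical numbers (not taken from the fitter):

```python
import numpy as np

m = 2
d = 2 ** m

# average gate fidelity between the ideal U and the implemented U'
def avg_gate_fidelity(U, U_prime):
    overlap = abs(np.trace(U.conj().T @ U_prime)) ** 2
    return (overlap / d + 1) / (d + 1)

print(avg_gate_fidelity(np.eye(d), np.eye(d)))  # identical unitaries -> 1.0

# heavy-output probability h_U estimated from measured counts (made-up numbers)
heavy = {'10', '11'}
counts = {'00': 100, '01': 150, '10': 250, '11': 524}
h_U = sum(counts[x] for x in heavy) / sum(counts.values())
print(h_U)  # 774/1024 = 0.755859375
```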
As an illustration, we print the heavy output counts from various depths (for trial 0):
```
qv_fitter.add_data(exp_results)
for qubit_list in qubit_lists:
    l = len(qubit_list)
    # print(qv_fitter._heavy_output_counts)
    print('qv_depth_' + str(l) + '_trial_0:', qv_fitter._heavy_output_counts['qv_depth_' + str(l) + '_trial_0'])
```
### Step 6: Calculate the achievable depth
The probability of observing a heavy output by implementing a randomly selected depth $d$ model circuit is:
$$h_d = \int_U h_U dU$$
The *achievable depth* $d(m)$ is the largest $d$ such that we are confident that $h_d > 2/3$. In other words,
$$ h_1,h_2,\dots,h_{d(m)}>2/3 \text{ and } h_{d(m)+1} \leq 2/3$$
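Given estimated $h_d$ values, finding the achievable depth is a simple scan; a sketch with hypothetical numbers:

```python
# hypothetical mean heavy-output probabilities h_d, indexed by depth d
h = {1: 0.85, 2: 0.78, 3: 0.71, 4: 0.62, 5: 0.55}

# achievable depth: largest d such that h_1, ..., h_d all exceed 2/3
d_achievable = 0
for d in sorted(h):
    if h[d] <= 2 / 3:
        break
    d_achievable = d
print(d_achievable)  # h_4 drops below 2/3, so d(m) = 3
```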
We now convert the heavy outputs in the different trials and calculate the mean $h_d$ and the error for plotting the graph.
```
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the QV data
qv_fitter.plot_qv_data(ax=ax, show_plt=False)
# Add title and label
ax.set_title('Quantum Volume for up to %d Qubits \n and %d Trials'%(len(qubit_lists[-1]), ntrials), fontsize=18)
plt.show()
```
### Step 7: Calculate the Quantum Volume
The quantum volume treats the width and depth of a model circuit with equal importance and measures the largest square-shaped (i.e., $m = d$) model circuit a quantum computer can implement successfully on average.
The *quantum volume* $V_Q$ is defined as
$$\log_2 V_Q = \arg\max_{m} \min (m, d(m))$$
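Under this definition, once the achievable depth $d(m)$ is known for each tested width, the quantum volume follows in one line; a sketch with hypothetical values:

```python
# hypothetical achievable depths d(m) for each tested width m
d_of_m = {2: 4, 3: 3, 4: 2}

log2_VQ = max(min(m, d) for m, d in d_of_m.items())
print(2 ** log2_VQ)  # min(m, d(m)) is maximized at m = 3 (a square 3x3 circuit), so VQ = 8
```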
We list the statistics for each depth. For each depth we list whether it was successful and with what confidence. For a depth to be successful, the confidence must be greater than 97.5%.
```
qv_success_list = qv_fitter.qv_success()
qv_list = qv_fitter.ydata
QV = 1
for qidx, qubit_list in enumerate(qubit_lists):
    if qv_list[0][qidx] > 2/3:
        if qv_success_list[qidx][0]:
            print("Width/depth %d greater than 2/3 (%f) with confidence %f (successful). Quantum volume %d" %
                  (len(qubit_list), qv_list[0][qidx], qv_success_list[qidx][1], qv_fitter.quantum_volume()[qidx]))
            QV = qv_fitter.quantum_volume()[qidx]
        else:
            print("Width/depth %d greater than 2/3 (%f) with confidence %f (unsuccessful)." %
                  (len(qubit_list), qv_list[0][qidx], qv_success_list[qidx][1]))
    else:
        print("Width/depth %d less than 2/3 (unsuccessful)." % len(qubit_list))
print("The Quantum Volume is:", QV)
```
### References
[1] Andrew W. Cross, Lev S. Bishop, Sarah Sheldon, Paul D. Nation, and Jay M. Gambetta, *Validating quantum computers using randomized model circuits*, Phys. Rev. A **100**, 032328 (2019). https://arxiv.org/pdf/1811.12926
```
import qiskit
qiskit.__qiskit_version__
```
# Retail Demo Store Experimentation Workshop - A/B Testing Exercise
In this exercise we will define, launch, and evaluate the results of an A/B experiment using the experimentation framework implemented in the Retail Demo Store project. If you have not already stepped through the **[3.1-Overview](./3.1-Overview.ipynb)** workshop notebook, please do so now as it provides the foundation built upon in this exercise.
Recommended Time: 30 minutes
## Prerequisites
Since this module uses the Retail Demo Store's Recommendation service to run experiments across variations that depend on the personalization features of the Retail Demo Store, it is assumed that you have either completed the [Personalization](../1-Personalization/1.1-Personalize.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead.
## Exercise 1: A/B Experiment
For the first exercise we will demonstrate how to use the A/B testing technique to implement an experiment over two implementations, or variations, of product recommendations. The first variation will represent our current implementation using the **Default Product Resolver**, and the second variation will use the **Personalize Resolver**. The scenario we are simulating is adding product recommendations powered by Amazon Personalize to the home page and measuring the impact/uplift in click-throughs for products as a result of deploying a personalization strategy.
### What is A/B Testing?
A/B testing, also known as bucket or split testing, is used to compare the performance of two variations (A and B) of a single variable/experience by exposing separate groups of users to each variation and measuring user responses. An A/B experiment is run for a period of time, typically dictated by the number of users necessary to reach a statistically significant result, followed by statistical analysis of the results to determine if a conclusion can be reached as to the best performing variation.
### Our Experiment Hypothesis
**Sample scenario:**
Website analytics have shown that user sessions frequently end on the home page for our e-commerce site, the Retail Demo Store. Furthermore, when users do make a purchase, most purchases are for a single product. Currently on our home page we are using a basic approach of recommending featured products. We hypothesize that adding personalized recommendations to the home page will result in increasing the click-through rate of products by 25%. The current click-through rate is 15%.
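In absolute terms, the hypothesis above translates to the following target conversion rate (a quick sanity check of the numbers):

```python
baseline_rate = 0.15   # current click-through rate
relative_lift = 0.25   # hypothesized 25% relative improvement

target_rate = baseline_rate * (1 + relative_lift)
print(target_rate)  # ≈ 0.1875
```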
### ABExperiment Class
Before stepping through creating and executing our A/B test, let's look at the relevant source code for the **ABExperiment** class that implements A/B experiments in the Retail Demo Store project.
As noted in the **3.1-Overview** notebook, all experiment types are subclasses of the abstract **Experiment** class. See **[3.1-Overview](./3.1-Overview.ipynb)** for more details on the experimentation framework.
The `ABExperiment.get_items()` method is where item recommendations are retrieved for the experiment. The `ABExperiment.calculate_variation_index()` method is where users are assigned to a variation/group using a consistent hashing algorithm. This ensures that each user is assigned to the same variation across multiple requests for recommended items for the duration of the experiment. Once the variation is determined, the variation's **Resolver** is used to retrieve recommendations. Details on the experiment are added to item list to support conversion/outcome tracking and UI annotation.
```python
# from src/recommendations/src/recommendations-service/experimentation/experiment_ab.py

class ABExperiment(Experiment):
    ...

    def get_items(self, user_id, current_item_id = None, item_list = None, num_results = 10, tracker = None):
        ...
        # Determine which variation to use for the user.
        variation_idx = self.calculate_variation_index(user_id)

        # Increment exposure counter for variation for this experiment.
        self._increment_exposure_count(variation_idx)

        # Get item recommendations from the variation's resolver.
        variation = self.variations[variation_idx]
        resolve_params = {
            'user_id': user_id,
            'product_id': current_item_id,
            'num_results': num_results
        }
        items = variation.resolver.get_items(**resolve_params)

        # Inject experiment details into recommended item list.
        rank = 1
        for item in items:
            correlation_id = self._create_correlation_id(user_id, variation_idx, rank)

            item_experiment = {
                'id': self.id,
                'feature': self.feature,
                'name': self.name,
                'type': self.type,
                'variationIndex': variation_idx,
                'resultRank': rank,
                'correlationId': correlation_id
            }

            item.update({
                'experiment': item_experiment
            })

            rank += 1
        ...
        return items

    def calculate_variation_index(self, user_id):
        """ Given a user_id and this experiment's configuration, return the variation.

        The same variation will be returned for a given user for this experiment no
        matter how many times this method is called.
        """
        if len(self.variations) == 0:
            return -1

        hash_str = f'experiments.{self.feature}.{self.name}.{user_id}'.encode('ascii')
        hash_int = int(hashlib.sha1(hash_str).hexdigest()[:15], 16)
        index = hash_int % len(self.variations)

        return index
```
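The assignment logic can be exercised in isolation. Below is a minimal, self-contained sketch mirroring `calculate_variation_index` (the feature/experiment names are simply the ones used later in this notebook):

```python
import hashlib

def variation_index(feature, name, user_id, num_variations):
    # deterministic hash of (feature, experiment, user) -> stable bucket
    hash_str = f'experiments.{feature}.{name}.{user_id}'.encode('ascii')
    hash_int = int(hashlib.sha1(hash_str).hexdigest()[:15], 16)
    return hash_int % num_variations

# repeated calls for the same user always land in the same variation
a = variation_index('home_product_recs', 'home_personalize_ab', '42', 2)
b = variation_index('home_product_recs', 'home_personalize_ab', '42', 2)
print(a == b, a in (0, 1))  # True True
```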
### Setup - Import Dependencies
Throughout this workshop we will need access to some common libraries and clients for connecting to AWS services. Let's set those up now.
```
import boto3
import json
import uuid
import numpy as np
import requests
import pandas as pd
import random
import scipy.stats as scs
import time
import decimal
import matplotlib.pyplot as plt
from boto3.dynamodb.conditions import Key
from random import randint
# import custom scripts used for plotting
from src.plot import *
from src.stats import *
%matplotlib inline
plt.style.use('ggplot')
# We will be using a DynamoDB table to store configuration info for our experiments.
dynamodb = boto3.resource('dynamodb')
# Service discovery will allow us to dynamically discover Retail Demo Store resources
servicediscovery = boto3.client('servicediscovery')
# Retail Demo Store config parameters are stored in SSM
ssm = boto3.client('ssm')
# Utility class to convert types for printing as JSON.
class CompatEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, decimal.Decimal):
            if obj % 1 > 0:
                return float(obj)
            else:
                return int(obj)
        else:
            return super(CompatEncoder, self).default(obj)
```
### Sample Size Calculation
The first step is to determine the sample size necessary to reach a statistically significant result given a target of 25% gain in click-through rate from the home page. There are several sample size calculators available online including calculators from [Optimizely](https://www.optimizely.com/sample-size-calculator/?conversion=15&effect=20&significance=95), [AB Tasty](https://www.abtasty.com/sample-size-calculator/), and [Evan Miller](https://www.evanmiller.org/ab-testing/sample-size.html#!15;80;5;25;1). For this exercise, we will use the following function to calculate the minimal sample size for each variation.
```
def min_sample_size(bcr, mde, power=0.8, sig_level=0.05):
    """Returns the minimum sample size to set up a split test

    Arguments:
        bcr (float): probability of success for control, sometimes
            referred to as baseline conversion rate
        mde (float): minimum change in measurement between control
            group and test group if alternative hypothesis is true, sometimes
            referred to as minimum detectable effect
        power (float): probability of rejecting the null hypothesis when the
            null hypothesis is false, typically 0.8
        sig_level (float): significance level often denoted as alpha,
            typically 0.05

    Returns:
        min_N: minimum sample size (float)

    References:
        Stanford lecture on sample sizes
        http://statweb.stanford.edu/~susan/courses/s141/hopower.pdf
    """
    # standard normal distribution to determine z-values
    standard_norm = scs.norm(0, 1)

    # find Z_beta from desired power
    Z_beta = standard_norm.ppf(power)

    # find Z_alpha
    Z_alpha = standard_norm.ppf(1 - sig_level / 2)

    # average of probabilities from both groups
    pooled_prob = (bcr + bcr + mde) / 2

    min_N = (2 * pooled_prob * (1 - pooled_prob) * (Z_beta + Z_alpha)**2
             / mde**2)

    return min_N
# This is the conversion rate using the current implementation
baseline_conversion_rate = 0.15
# This is the lift expected by adding personalization
absolute_percent_lift = baseline_conversion_rate * .25
# Calculate the sample size needed to reach a statistically significant result
sample_size = int(min_sample_size(baseline_conversion_rate, absolute_percent_lift))
print('Sample size for each variation: ' + str(sample_size))
```
### Experiment Strategy Datastore
With our sample size defined, let's create an experiment strategy for our A/B experiment. Walk through each of the following steps to configure your environment.
A DynamoDB table was created by the Retail Demo Store CloudFormation template that we will use to store the configuration information for our experiments. The table name can be found in a system parameter.
```
response = ssm.get_parameter(Name='retaildemostore-experiment-strategy-table-name')
table_name = response['Parameter']['Value'] # Do Not Change
print('Experiments DDB table: ' + table_name)
table = dynamodb.Table(table_name)
```
Next we need to lookup the Amazon Personalize campaign ARN for product recommendations. This is the campaign that was created in the [Personalization workshop](../1-Personalization/personalize.ipynb) (or was pre-built for you depending on your workshop event).
```
response = ssm.get_parameter(Name = 'retaildemostore-product-recommendation-campaign-arn')
campaign_arn = response['Parameter']['Value'] # Do Not Change
print('Personalize product recommendations ARN: ' + campaign_arn)
```
### Create A/B Experiment
The Retail Demo Store supports running multiple experiments concurrently. For this workshop we will create a single A/B test/experiment that uniformly splits users between a control group that receives recommendations from the default behavior and a variation group that receives recommendations from Amazon Personalize. The Recommendations service already has logic that supports A/B tests once an active experiment is detected in our Experiment Strategy DynamoDB table.
Experiment configurations are stored in a DynamoDB table where each item in the table represents an experiment and has the following fields.
- **id** - Uniquely identifies this experiment (UUID).
- **feature** - Identifies the Retail Demo Store feature where the experiment should be applied. The name for the home page product recommendations feature is `home_product_recs`.
- **name** - The name of the experiment. Keep the name short but descriptive. It will be used in the UI for demo purposes and when logging events for experiment result tracking.
- **status** - The status of the experiment (`ACTIVE`, `EXPIRED`, or `PENDING`).
- **type** - The type of test (`ab` for an A/B test, `interleaving` for interleaved recommendations, or `mab` for multi-armed bandit test)
- **variations** - List of configurations representing variations for the experiment. For example, for A/B tests of the `home_product_recs` feature, the `variations` can be two Amazon Personalize campaign ARNs (variation type `personalize-recommendations`) or a single Personalize campaign ARN and the default product behavior.
```
feature = 'home_product_recs'
experiment_name = 'home_personalize_ab'
# First, make sure there are no other active experiments so we can isolate
# this experiment for the exercise (to keep things clean/simple).
response = table.scan(
    ProjectionExpression='#k',
    ExpressionAttributeNames={'#k' : 'id'},
    FilterExpression=Key('status').eq('ACTIVE')
)
for item in response['Items']:
    response = table.update_item(
        Key=item,
        UpdateExpression='SET #s = :inactive',
        ExpressionAttributeNames={
            '#s' : 'status'
        },
        ExpressionAttributeValues={
            ':inactive' : 'INACTIVE'
        }
    )
# Query the experiment strategy table to see if our experiment already exists
response = table.query(
    IndexName='feature-name-index',
    KeyConditionExpression=Key('feature').eq(feature) & Key('name').eq(experiment_name),
    FilterExpression=Key('status').eq('ACTIVE')
)
if response.get('Items') and len(response.get('Items')) > 0:
    print('Experiment already exists')
    home_page_experiment = response['Items'][0]
else:
    print('Creating experiment')

    # Default product resolver
    variation_0 = {
        'type': 'product'
    }

    # Amazon Personalize resolver
    variation_1 = {
        'type': 'personalize-recommendations',
        'campaign_arn': campaign_arn
    }

    home_page_experiment = {
        'id': uuid.uuid4().hex,
        'feature': feature,
        'name': experiment_name,
        'status': 'ACTIVE',
        'type': 'ab',
        'variations': [ variation_0, variation_1 ]
    }

    response = table.put_item(
        Item=home_page_experiment
    )

    print(json.dumps(response, indent=4))

print('Experiment item:')
print(json.dumps(home_page_experiment, indent=4, cls=CompatEncoder))
```
## Load Users
For our experiment simulation, we will load all Retail Demo Store users and run the experiment until the sample size for both variations has been met.
First, let's discover the IP address for the Retail Demo Store's Users service.
```
response = servicediscovery.discover_instances(
    NamespaceName='retaildemostore.local',
    ServiceName='users',
    MaxResults=1,
    HealthStatus='HEALTHY'
)
users_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']
print('Users Service Instance IP: {}'.format(users_service_instance))
```
Next, let's fetch all users, randomize their order, and load them into a local data frame.
```
# Load all 5K users so we have enough to satisfy our sample size requirements.
response = requests.get('http://{}/users/all?count=5000'.format(users_service_instance))
users = response.json()
random.shuffle(users)
users_df = pd.DataFrame(users)
pd.set_option('display.max_rows', 5)
users_df
```
## Discover Recommendations Service
Next, let's discover the IP address for the Retail Demo Store's Recommendation service. This is the service where the Experimentation framework is implemented and the `/recommendations` endpoint is what we call to simulate our A/B experiment.
```
response = servicediscovery.discover_instances(
    NamespaceName='retaildemostore.local',
    ServiceName='recommendations',
    MaxResults=1,
    HealthStatus='HEALTHY'
)
recommendations_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']
print('Recommendation Service Instance IP: {}'.format(recommendations_service_instance))
```
## Simulate Experiment
Next we will define a function to simulate our A/B experiment by making calls to the Recommendations service across the users we just loaded. Then we will run our simulation.
### Simulation Function
The following `simulate_experiment` function is supplied with the sample size for each group (A and B) and the probability of conversion for each group that we want to use for our simulation. It runs the simulation long enough to satisfy the sample size requirements and calls the Recommendations service for each user in the experiment.
```
def simulate_experiment(N_A, N_B, p_A, p_B):
    """Returns a pandas dataframe with simulated CTR data

    Parameters:
        N_A (int): sample size for control group
        N_B (int): sample size for test group
            Note: the final sample sizes may not match N_A and N_B exactly because
            the group for each row is chosen at random by the ABExperiment class.
        p_A (float): conversion rate of control group
        p_B (float): conversion rate of test group

    Returns:
        df (DataFrame): simulated exposure/outcome data
    """
    # will hold exposure/outcome data
    data = []

    # total number of users to sample for both variations
    N = N_A + N_B

    if N > len(users):
        raise ValueError('Sample size is greater than number of users')

    print('Generating data for {} users... this may take a few minutes'.format(N))

    # initiate bernoulli distributions to randomly sample from based on simulated probabilities
    A_bern = scs.bernoulli(p_A)
    B_bern = scs.bernoulli(p_B)

    for idx in range(N):
        if idx > 0 and idx % 500 == 0:
            print('Generated data for {} users so far'.format(idx))

        # initiate empty row
        row = {}

        # Get next user from shuffled list
        user = users[idx]

        # Call Recommendations web service to get recommendations for the user
        response = requests.get('http://{}/recommendations?userID={}&feature={}'.format(recommendations_service_instance, user['id'], feature))

        recommendations = response.json()
        recommendation = recommendations[randint(0, len(recommendations) - 1)]

        variation = recommendation['experiment']['variationIndex']
        row['variation'] = variation

        # Determine if variation converts based on probabilities provided
        if variation == 0:
            row['converted'] = A_bern.rvs()
        else:
            row['converted'] = B_bern.rvs()

        if row['converted'] == 1:
            # Update experiment with outcome/conversion
            correlation_id = recommendation['experiment']['correlationId']
            requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance), data={'correlationId': correlation_id})

        data.append(row)

    # convert data into dataframe
    df = pd.DataFrame(data)

    print('Done')

    return df
```
### Run Simulation
Next we run the simulation by defining our simulation parameters for sample sizes and probabilities and then call `simulate_experiment`. This will take several minutes depending on the sample sizes.
```
%%time
# Set size of both groups to calculated sample size
N_A = N_B = sample_size
# Use probabilities from our hypothesis
# bcr: baseline conversion rate
p_A = 0.15
# d_hat: difference in a metric between the two groups, sometimes referred to as minimal detectable effect or lift depending on the context
p_B = 0.1875
# Run simulation
ab_data = simulate_experiment(N_A, N_B, p_A, p_B)
ab_data
```
### Inspect Experiment Summary Statistics
Since the **Experiment** class updates statistics for the experiment in the experiment strategy DynamoDB table when a user is exposed to an experiment ("exposure") and when a user converts ("outcome"), we should see updated counts on our experiment. Let's reload our experiment and inspect the exposure and conversion counts for our simulation.
```
# Query DDB table for experiment item.
response = table.get_item(Key={'id': home_page_experiment['id']})
print(json.dumps(response['Item'], indent=4, cls=CompatEncoder))
```
You should now see counters for `conversions` and `exposures` for each variation. These represent how many times a user has been exposed to a variation and how many times a user has converted for a variation (i.e. clicked on a recommended item/product).
### Analyze Simulation Results
Next, let's take a closer look at the results of our simulation. We'll start by calculating some summary statistics.
```
ab_summary = ab_data.pivot_table(values='converted', index='variation', aggfunc=np.sum)
# add additional columns to the pivot table
ab_summary['total'] = ab_data.pivot_table(values='converted', index='variation', aggfunc=lambda x: len(x))
ab_summary['rate'] = ab_data.pivot_table(values='converted', index='variation')
ab_summary
```
The output above tells us how many users converted for each variation, the actual sample size for each variation in the simulation, and the conversion rate for each variation.
Next let's isolate the data and conversion counts for each variation.
```
A_group = ab_data[ab_data['variation'] == 0]
B_group = ab_data[ab_data['variation'] == 1]
A_converted, B_converted = A_group['converted'].sum(), B_group['converted'].sum()
A_converted, B_converted
```
Isolate the actual sample size for each variation.
```
A_total, B_total = len(A_group), len(B_group)
A_total, B_total
```
Calculate the actual conversion rates and uplift for our simulation.
```
p_A, p_B = A_converted / A_total, B_converted / B_total
p_A, p_B
p_B - p_A
```
### Determining Statistical Significance
In statistical hypothesis testing there are two types of errors that can occur. These are referred to as type 1 and type 2 errors.
Type 1 errors occur when the null hypothesis is true but is rejected. In other words, a "false positive" conclusion. Put in A/B testing terms, a type 1 error is when we conclude a statistically significant result when there isn't one.
Type 2 errors occur when we conclude that there is not a winner between two variations when in fact there is an actual winner. In other words, the null hypothesis is false yet we fail to reject it. Therefore, type 2 errors are a "false negative" conclusion.
If the probability of making a type 1 error is determined by "α" (alpha), the probability of a type 2 error is "β" (beta). Beta depends on the power of the test (i.e the probability of not committing a type 2 error, which is equal to 1-β).
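These two error rates are exactly what went into the sample-size calculation above; as a quick sketch, the z-values corresponding to α = 0.05 (two-sided) and power = 0.8 are:

```python
import scipy.stats as scs

sig_level = 0.05   # alpha: probability of a type 1 error
power = 0.8        # 1 - beta: probability of avoiding a type 2 error

Z_alpha = scs.norm(0, 1).ppf(1 - sig_level / 2)
Z_beta = scs.norm(0, 1).ppf(power)
print(round(Z_alpha, 3), round(Z_beta, 3))  # 1.96 0.842
```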
Let's inspect the results of our simulation more closely to verify that it is statistically significant.
#### Calculate p-value
Formally, the p-value is the probability of seeing a result at least as extreme as the one observed, assuming that the null hypothesis is true. In other words, the p-value reflects the fluctuation expected in a given sample, similar to the variance. As an example, imagine we ran an A/A test where we displayed the same variation to two groups of users. After such an experiment we would expect the conversion rates of both groups to be very similar but not dramatically different.
What we are hoping to see is a p-value that is less than our significance level. The significance level we used when calculating our sample size was 5%, which means we are seeking results that are significant at the 95% confidence level; 5% is a common industry standard.
```
p_value = scs.binom(A_total, p_A).pmf(p_B * B_total)
print('p-value = {0:0.9f}'.format(p_value))
```
Is the p-value less than the significance level of 5%? This tells us the probability of a type 1 error.
Let's plot the data from both groups as binomial distributions.
```
fig, ax = plt.subplots(figsize=(12,6))
xA = np.linspace(A_converted-49, A_converted+50, 100)
yA = scs.binom(A_total, p_A).pmf(xA)
ax.scatter(xA, yA, s=10)
xB = np.linspace(B_converted-49, B_converted+50, 100)
yB = scs.binom(B_total, p_B).pmf(xB)
ax.scatter(xB, yB, s=10)
plt.xlabel('converted')
plt.ylabel('probability')
```
Based on the probabilities from our hypothesis, we should see that the test group in blue (B) converted more users than the control group in red (A). However, the plot above is not a plot of the null and alternative hypotheses: the null hypothesis concerns the distribution of the *difference* in conversion probability between the two groups.
> Given the randomness of our user selection, group hashing, and probabilities, your simulation results should be different for each simulation run and therefore may or may not be statistically significant.
In order to calculate the difference between the two groups, we need to standardize the data. Because the number of samples can be different between the two groups, we should compare the probability of successes, p.
According to the central limit theorem, by calculating many sample means we can approximate the true mean of the population from which the data for the control group was taken. The distribution of the sample means will be normally distributed around the true mean with a standard deviation equal to the standard error of the mean.
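To make the standardization concrete, here is a hedged sketch of the standard two-proportion z-test built on the pooled standard error (the counts are made up, standing in for the notebook's simulation output):

```python
import numpy as np
from scipy import stats

# made-up counts standing in for the simulation output
A_total, A_converted = 10000, 1000
B_total, B_converted = 10000, 1120

p_A = A_converted / A_total
p_B = B_converted / B_total

# pooled conversion rate under the null (both groups share one rate)
p_pool = (A_converted + B_converted) / (A_total + B_total)
se_pool = np.sqrt(p_pool * (1 - p_pool) * (1 / A_total + 1 / B_total))

z = (p_B - p_A) / se_pool              # standardized difference
p_value = 2 * stats.norm.sf(abs(z))    # two-sided p-value
```

With these illustrative counts, z is close to 2.8 and the two-sided p-value is well under 5%, so the (hypothetical) lift would be declared significant.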
```
SE_A = np.sqrt(p_A * (1-p_A)) / np.sqrt(A_total)
SE_B = np.sqrt(p_B * (1-p_B)) / np.sqrt(B_total)
SE_A, SE_B
fig, ax = plt.subplots(figsize=(12,6))
xA = np.linspace(0, .3, A_total)
yA = scs.norm(p_A, SE_A).pdf(xA)
ax.plot(xA, yA)
ax.axvline(x=p_A, c='red', alpha=0.5, linestyle='--')
xB = np.linspace(0, .3, B_total)
yB = scs.norm(p_B, SE_B).pdf(xB)
ax.plot(xB, yB)
ax.axvline(x=p_B, c='blue', alpha=0.5, linestyle='--')
plt.xlabel('Converted Proportion')
plt.ylabel('PDF')
```
The dashed lines represent the mean conversion rate for each group. The distance between the red dashed line and the blue dashed line is equal to d_hat, the observed difference in conversion rates.
```
p_A_actual = ab_summary.loc[0, 'rate']
p_B_actual = ab_summary.loc[1, 'rate']
bcr = p_A_actual
d_hat = p_B_actual - p_A_actual
A_total, B_total, bcr, d_hat
```
Finally, let's calculate the power, alpha, and beta from our simulation.
```
abplot(A_total, B_total, bcr, d_hat, show_power=True)
```
The power value we used when determining our sample size for the experiment was 80%. This is considered the industry standard. Is the power value calculated in the plot above greater than 80%?
```
abplot(A_total, B_total, bcr, d_hat, show_beta=True)
abplot(A_total, B_total, bcr, d_hat, show_alpha=True)
```
Is the alpha value plotted above less than our significance level of 5%, and is the beta value small enough to preserve our desired power (beta of 20% or less)? If so, we have a statistically significant, adequately powered result.
## Next Steps
You have completed the exercise for implementing an A/B test using the experimentation framework in the Retail Demo Store. Close this notebook and open the notebook for the next exercise, **[3.3-Interleaving-Experiment](./3.3-Interleaving-Experiment.ipynb)**.
### References and Further Reading
- [A/B testing](https://en.wikipedia.org/wiki/A/B_testing), Wikipedia
- [A/B testing](https://www.optimizely.com/optimization-glossary/ab-testing/), Optimizely
- [Evan's Awesome A/B Tools](https://www.evanmiller.org/ab-testing/), Evan Miller
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Vowpal Wabbit Deep Dive
<center>
<img src="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/logo_assets/vowpal-wabbits-github-logo.png?raw=true" height="30%" width="30%" alt="Vowpal Wabbit">
</center>
[Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit) is a fast online machine learning library that implements several algorithms relevant to recommendation use cases.
The main advantage of Vowpal Wabbit (VW) is that training is typically done online, using stochastic gradient descent or similar techniques, so it scales well to very large datasets. It is also optimized to run extremely fast and can support distributed training scenarios for very large datasets.
VW is best suited to problems where the data is too large to fit in memory but can be stored on a single node's disk. Distributed training is possible with additional node setup and configuration. The kinds of problems VW handles well fall mainly in the supervised classification area of machine learning (linear regression, logistic regression, multiclass classification, support vector machines, simple neural nets). It also supports matrix factorization approaches and Latent Dirichlet Allocation, as well as several other algorithms (see the [wiki](https://github.com/VowpalWabbit/vowpal_wabbit/wiki) for details).
A common deployment example is a real-time bidding scenario, where the auction that places an ad for a user is decided in milliseconds. Feature information about the user and item must be extracted and passed to a model to predict the likelihood of a click (or other response) within a short window. Also, when user and context features are constantly changing (for example, the user's browser or local time), it may be impossible to pre-score every possible combination of inputs. VW provides value as a platform for exploring different algorithms offline, training highly accurate models on large amounts of historical data, and deploying those models to production to generate fast predictions in real time. Of course, this is not the only way VW can be deployed: it can also be used in a fully online setting where the model is constantly updated, with an active-learning approach, or entirely offline in a pre-scoring mode.
<h3>Vowpal Wabbit for Recommendations</h3>
This notebook demonstrates how to use the VW library to generate recommendations on the [MovieLens](https://grouplens.org/datasets/movielens/) dataset.
A few things to note about how VW is used in this notebook:
On an Azure Data Science Virtual Machine ([DSVM](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/)), VW comes pre-installed and can be used directly from the command line. If you are not using a DSVM, you will need to install vw yourself.
There are also Python bindings that make VW usable from within a Python environment, as well as a wrapper conforming to the SciKit-Learn Estimator API. However, because the Python bindings must be installed as an additional Python package with a dependency on Boost, VW is run here via subprocess calls that mimic command-line execution of the model, to keep the setup simple.
VW expects a specific [input format](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Input-format), and the to_vw() function in this notebook is a convenience that converts the standard MovieLens dataset into the corresponding data format. The data files are then written to disk and passed to VW for training.
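As a small, self-contained illustration of the VW input format (the IDs here are made up; the format string mirrors the one used by to_vw() below), a single MovieLens row becomes one VW line:

```python
# VW input format: [label] [tag]|user <userID> |item <itemID>
row = {'rating': 4, 'index': 17, 'userID': 196, 'itemID': 242}
line = '{rating:d} {index:d}|user {userID:d} |item {itemID:d}'.format_map(row)
print(line)  # -> 4 17|user 196 |item 242
```

The label (4) is the true rating, the tag (17) links the prediction back to the example, and the `user`/`item` namespaces keep the feature groups separate so interactions can be requested on the command line.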
The examples below demonstrate the functional capabilities of VW and do not show the performance benefits of the different approaches. Several hyperparameters (such as the learning rate and regularization terms) have a large impact on the performance of a VW model and can be adjusted via [command line options](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments). To compare the approaches properly, it would be useful to learn and tune them on the relevant dataset.
# 0. Global Setup
```
import sys
sys.path.append('../..')
import os
from subprocess import run
from tempfile import TemporaryDirectory
from time import process_time
import pandas as pd
import papermill as pm
import scrapbook as sb
from reco_utils.common.notebook_utils import is_jupyter
from reco_utils.dataset.movielens import load_pandas_df
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import (rmse, mae, exp_var, rsquared, get_top_k_items,
map_at_k, ndcg_at_k, precision_at_k, recall_at_k)
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
def to_vw(df, output, logistic=False):
"""Convert Pandas DataFrame to vw input format
Args:
df (pd.DataFrame): input DataFrame
output (str): path to output file
logistic (bool): flag to convert label to logistic value
"""
with open(output, 'w') as f:
tmp = df.reset_index()
# reset the rating type to integer to simplify the vw formatting
tmp['rating'] = tmp['rating'].astype('int64')
# convert the rating to a binary value
if logistic:
tmp['rating'] = tmp['rating'].apply(lambda x: 1 if x >= 3 else -1)
# convert each row to the VW input format (https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Input-format)
# [label] [tag]|[user namespace] [user id feature] |[item namespace] [movie id feature]
# the label is the true rating, the tag is a unique id for the example used to link the prediction
# back to the true user, and the item namespace separates the features to support interaction features via command line options
for _, row in tmp.iterrows():
f.write('{rating:d} {index:d}|user {userID:d} |item {itemID:d}\n'.format_map(row))
def run_vw(train_params, test_params, test_data, prediction_path, logistic=False):
"""Convenience function to train, test, and show metrics of interest
Args:
train_params (str): vw training parameters
test_params (str): vw testing parameters
test_data (pd.dataFrame): test data
prediction_path (str): path to vw prediction output
logistic (bool): flag to convert label to logistic value
Returns:
(dict): metrics and timing information
"""
# train the model
train_start = process_time()
run(train_params.split(' '), check=True)
train_stop = process_time()
# test the model
test_start = process_time()
run(test_params.split(' '), check=True)
test_stop = process_time()
# read the predictions
pred_df = pd.read_csv(prediction_path, delim_whitespace=True, names=['prediction'], index_col=1).join(test_data)
pred_df.drop("rating", axis=1, inplace=True)
test_df = test_data.copy()
if logistic:
# make the true labels binary so that the metrics are captured correctly
test_df['rating'] = test_df['rating'].apply(lambda x: 1 if x >= 3 else -1)
else:
# make sure the results are integers in the correct range
pred_df['prediction'] = pred_df['prediction'].apply(lambda x: int(max(1, min(5, round(x)))))
# calculate the metrics
result = dict()
result['RMSE'] = rmse(test_df, pred_df)
result['MAE'] = mae(test_df, pred_df)
result['R2'] = rsquared(test_df, pred_df)
result['Explained Variance'] = exp_var(test_df, pred_df)
result['Train Time (ms)'] = (train_stop - train_start) * 1000
result['Test Time (ms)'] = (test_stop - test_start) * 1000
return result
# create a temporary directory to manage the data files
tmpdir = TemporaryDirectory()
model_path = os.path.join(tmpdir.name, 'vw.model')
saved_model_path = os.path.join(tmpdir.name, 'vw_saved.model')
train_path = os.path.join(tmpdir.name, 'train.dat')
test_path = os.path.join(tmpdir.name, 'test.dat')
train_logistic_path = os.path.join(tmpdir.name, 'train_logistic.dat')
test_logistic_path = os.path.join(tmpdir.name, 'test_logistic.dat')
prediction_path = os.path.join(tmpdir.name, 'prediction.dat')
all_test_path = os.path.join(tmpdir.name, 'new_test.dat')
all_prediction_path = os.path.join(tmpdir.name, 'new_prediction.dat')
```
# 1. Read and Transform the Data
```
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
TOP_K = 10
# load the MovieLens data
df = load_pandas_df(MOVIELENS_DATA_SIZE)
# split the data into train and test sets; by default 75% of each user's ratings are used for training and 25% for testing
train, test = python_random_split(df, 0.75)
# save the train and test data in vw format
to_vw(df=train, output=train_path)
to_vw(df=test, output=test_path)
# save data for logistic regression (requires adjusting the labels)
to_vw(df=train, output=train_logistic_path, logistic=True)
to_vw(df=test, output=test_logistic_path, logistic=True)
```
# 2. Regression-Based Recommendations
When considering different approaches for solving a machine learning problem, it is useful to build a baseline approach in order to understand how more complex solutions perform in terms of overall performance, time, and resource (memory or CPU) usage.
Regression-based approaches are among the simplest and fastest baselines to consider for many ML problems.
## 2.1 Linear Regression
The data provides ratings as numeric values from 1 to 5, and a simple approach is to fit these values with a linear regression model. The model is trained on examples of the ratings as the target variable, with the corresponding user IDs and movie IDs as independent features.
As each user-item rating is passed in as an example, the model starts to learn weights based on each user's average rating and each item's average rating.
However, this can produce predicted ratings that are no longer integers, so some extra adjustments are made at prediction time to convert them back to the integer scale of 1 to 5 if desired. This is done here in the evaluation function.
```
"""
Quick description of the command line parameters used
Additional optional parameters can be found here: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments
VW uses linear regression by default, so no command line option is needed for it
-f <model_path>: indicates where to store the final model file after training
-d <data_path>: indicates the data file to use for training or testing
--quiet: runs vw in quiet mode (when debugging, it can be helpful not to use quiet mode)
-i <model_path>: indicates where to load the model file created during training
-t: runs inference only (no learning updates are made to the model)
-p <prediction_path>: indicates where to store the prediction output
"""
train_params = 'vw -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
# save the results for use in the top-k analysis later
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = pd.DataFrame(result, index=['Linear Regression'])
comparison
```
## 2.2 Linear Regression with Interaction Features
Previously, user features and item features were treated separately, but accounting for interactions between features can provide a mechanism to learn more fine-grained preferences of the users.
To generate interaction features, use the quadratic command line option '-q ui', which crosses features from the namespaces whose names start with the letters 'u' and 'i'.
The userIDs and itemIDs used so far are integers, used directly as feature IDs: for example, when user 123 rates movie 456, the training example puts a value of 1 in the slots for features 123 and 456. However, when interactions are specified (or when features are strings), the resulting interaction features are hashed into the available feature space. Feature hashing is a way to take a very sparse, high-dimensional feature space and reduce it to a lower-dimensional space. This reduces memory usage while keeping feature and model-weight computation fast.
A caveat of feature hashing is that hash collisions can occur, mapping distinct features to the same slot. In that case it can be beneficial to increase the size of the space to support interactions between high-cardinality features. The available feature space is set with the --bit_precision (-b) argument; the total space available for all features in the model is 2<sup>N</sup>.
For more details, see [Feature Hashing and Extraction](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Feature-Hashing-and-Extraction).
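The idea of feature hashing — mapping arbitrary feature names into a fixed space of 2^N weight slots — can be sketched in a few lines. This is an illustration of the concept only, not VW's actual hash function:

```python
import hashlib

def hash_feature(name: str, bits: int = 18) -> int:
    """Map an arbitrary feature name into one of 2**bits weight slots
    (analogous in spirit to vw's -b / --bit_precision setting)."""
    digest = hashlib.md5(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

# a hypothetical quadratic interaction feature gets one deterministic slot
slot = hash_feature('user^196*item^242')
```

Two distinct names can land in the same slot (a collision); raising `bits` makes collisions rarer at the cost of a larger weight vector.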
```
"""
Quick description of the command line parameters used
-b <N>: sets the memory size to 2^N entries
-q <ab>: creates quadratic feature interactions between features in the namespaces starting with 'a' and 'b'
"""
train_params = 'vw -b 26 -q ui -f {model} -d {data} --quiet'.format(model=saved_model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=saved_model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
saved_result = result
comparison = comparison.append(pd.DataFrame(result, index=['Linear Regression w/ Interaction']))
comparison
```
## 2.3 Multinomial Logistic Regression
Instead of linear regression, we can use multinomial logistic regression — that is, multiclass classification — treating each rating value as a separate class.
This avoids non-integer outputs, but it also reduces the training data available for each class, which can hurt performance if the counts of the different rating levels are skewed.
Basic multiclass logistic regression can be performed with a one-against-all approach specified by the '--oaa N' option, where N is the number of classes, together with the logistic option for the loss function to use.
```
"""
Quick description of the command line parameters used
--loss_function logistic: sets the model loss function to logistic regression
--oaa <N>: trains N separate models using a one-against-all approach (all models are captured in a single model file)
 This expects the labels to be contiguous integers starting at 1
--link logistic: converts the prediction output from logits to probabilities
The predicted output is the most likely class (label)
"""
train_params = 'vw --loss_function logistic --oaa 5 -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw --link logistic -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Multinomial Regression']))
comparison
```
## 2.4 Logistic Regression
Additionally, if we are interested in whether a user likes or dislikes an item, the input data can be adjusted to represent a binary outcome: for instance, ratings in (1,3] are bad ratings (negative outcome) and ratings in (3,5] are good ratings (positive outcome).
With this framing, a simple logistic regression model can be applied. To perform logistic regression, the loss_function parameter is changed to 'logistic' and the target labels are switched to {-1, 1}. Also, '--link logistic' is set during prediction to convert the logit output back to a probability value.
```
train_params = 'vw --loss_function logistic -f {model} -d {data} --quiet'.format(model=model_path, data=train_logistic_path)
test_params = 'vw --link logistic -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_logistic_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path,
logistic=True)
comparison = comparison.append(pd.DataFrame(result, index=['Logistic Regression']))
comparison
```
# 3. Matrix-Factorization-Based Recommendations
All the approaches above train regression models, but VW also supports matrix factorization, via two different approaches.
Rather than learning direct weights for specific users, items, and interactions as a regression model does, matrix factorization tries to learn latent factors that determine how a user rates an item. As an example of how this works, suppose user preferences and item characteristics can be represented in terms of genres. If the set of genres is small, each item can be associated with how strongly it belongs to each genre class, and each user can be given weights matching their preference for each genre. Both sets of weights can be represented as vectors whose inner product is the user-item rating. Matrix factorization approaches learn a low-rank matrix of latent user features and a low-rank matrix of latent item features, such that combining the two matrices approximates the original user-item matrix.
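As a toy NumPy illustration of the inner-product idea behind matrix factorization (random factors and tiny dimensions, purely for intuition):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 2))   # 4 users x 2 latent factors
V = rng.normal(size=(3, 2))   # 3 items x 2 latent factors

# predicted rating matrix: each entry is the inner product of a
# user factor vector with an item factor vector
R_hat = U @ V.T               # shape (4, 3)
```

Training adjusts U and V so that R_hat matches the observed ratings; the low rank (here 2) is what forces the model to generalize across users and items.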
## 3.1. Matrix Factorization with Singular Value Decomposition
The first approach performs matrix factorization based on singular value decomposition (SVD), learning a low-rank approximation of the user-item rating matrix. It is invoked with the '--rank' command line argument.
For more details, see the [matrix factorization example](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example).
```
"""
Quick description of the command line parameters used
--rank <N>: sets the number of latent factors in the reduced matrix
"""
train_params = 'vw --rank 5 -q ui -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Matrix Factorization (Rank)']))
comparison
```
## 3.2. Factorization-Machine-Based Matrix Factorization
Another approach, based on [Rendle's factorization machines](https://cseweb.ucsd.edu/classes/fa17/cse291-b/reading/Rendle2010FM.pdf), is invoked with '--lrq' (low rank quadratic). More details on LRQ in this demo can be found [here](https://github.com/VowpalWabbit/vowpal_wabbit/tree/master/demo/movielens).
This learns two lower-rank matrices that are multiplied together to generate an approximation of the user-item rating matrix. Compressing the matrices in this way learns generalizable factors that avoid some of the limitations of a regression model with very sparse interaction features. This can lead to better convergence and a smaller model on disk.
An additional term that can improve performance is --lrqdropout, which drops out columns during training; however, this tends to increase the optimal rank size. Other parameters such as L2 regularization can help avoid overfitting.
```
"""
Quick description of the command line parameters used
--lrq <abN>: learns a rank-N approximation of the quadratic interactions between the namespaces starting with 'a' and 'b'
--lrqdropout: performs dropout during training to improve generalization
"""
train_params = 'vw --lrq ui7 -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Matrix Factorization (LRQ)']))
comparison
```
# 4. Conclusion
The table above shows some of the approaches in the VW library that can be used for recommendation prediction. The relative performance can change when they are applied to different datasets and properly tuned, but it is useful to note the speed at which all of the approaches can train (75,000 examples) and test (25,000 examples).
# 5. Scoring
After training a model with any of the above approaches, the model can be used to score potential user-item pairs in offline batch mode or in a real-time scoring mode. The example below demonstrates how to use utilities from the reco_utils directory to generate Top-K recommendations from offline scored output.
```
# First, construct a test set of every item for each user (excluding those seen during training)
users = df[['userID']].drop_duplicates()
users['key'] = 1
items = df[['itemID']].drop_duplicates()
items['key'] = 1
all_pairs = pd.merge(users, items, on='key').drop(columns=['key'])
# merge with the training data and keep only the entries that were not seen during training
merged = pd.merge(train[['userID', 'itemID', 'rating']], all_pairs, on=["userID", "itemID"], how="outer")
all_user_items = merged[merged['rating'].isnull()].fillna(0).astype('int64')
# save in vw format (this can take a while)
to_vw(df=all_user_items, output=all_test_path)
# run the saved model (linear regression with interactions) on the new dataset
test_start = process_time()
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=saved_model_path, data=all_test_path, pred=prediction_path)
run(test_params.split(' '), check=True)
test_stop = process_time()
test_time = test_stop - test_start
# load the predictions and get the top-k from the results saved previously
pred_data = pd.read_csv(prediction_path, delim_whitespace=True, names=['prediction'], index_col=1).join(all_user_items)
top_k = get_top_k_items(pred_data, col_rating='prediction', k=TOP_K)[['prediction', 'userID', 'itemID']]
top_k.head()
# get the ranking metrics
args = [test, top_k]
kwargs = dict(col_user='userID', col_item='itemID', col_rating='rating', col_prediction='prediction',
relevancy_method='top_k', k=TOP_K)
rank_metrics = {'MAP': map_at_k(*args, **kwargs),
'NDCG': ndcg_at_k(*args, **kwargs),
'Precision': precision_at_k(*args, **kwargs),
'Recall': recall_at_k(*args, **kwargs)}
# final results
all_results = ['{k}: {v}'.format(k=k, v=v) for k, v in saved_result.items()]
all_results += ['{k}: {v}'.format(k=k, v=v) for k, v in rank_metrics.items()]
print('\n'.join(all_results))
```
# 6. Cleanup
```
# record the test results
if is_jupyter():
sb.glue('rmse', saved_result['RMSE'])
sb.glue('mae', saved_result['MAE'])
sb.glue('rsquared', saved_result['R2'])
sb.glue('exp_var', saved_result['Explained Variance'])
sb.glue("train_time", saved_result['Train Time (ms)'])
sb.glue("test_time", test_time)
sb.glue('map', rank_metrics['MAP'])
sb.glue('ndcg', rank_metrics['NDCG'])
sb.glue('precision', rank_metrics['Precision'])
sb.glue('recall', rank_metrics['Recall'])
tmpdir.cleanup()
```
## References
1. John Langford, et al. Vowpal Wabbit Wiki. URL: https://github.com/VowpalWabbit/vowpal_wabbit/wiki
2. Steffen Rendle. Factorization Machines. 2010 IEEE International Conference on Data Mining.
3. Jake Hoffman. Matrix Factorization Example. URL: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example
4. Paul Minero. Low Rank Quadratic Example. URL: https://github.com/VowpalWabbit/vowpal_wabbit/tree/master/demo/movielens
```
"""
Implementation of the CPC baseline based on the code available on
https://openreview.net/forum?id=8qDwejCuCN
"""
import os
import random
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
os.chdir("../") #Load from parent directory
from data_utils import Plots,gen_loader,load_datasets
from models import select_encoder
utils_plot=Plots()
def epoch_run(data, encoder, ds_estimator, auto_regressor, device, window_size, n_size, optimizer, train=True):
if window_size==-1:
window_size = max(2,int(data.shape[-1]/10))
if window_size*10 == data.shape[-1]:
window_size = window_size-1
if train:
encoder.train()
ds_estimator.train()
auto_regressor.train()
else:
encoder.eval()
ds_estimator.eval()
auto_regressor.eval()
epoch_loss = 0
acc = 0
for sample in data:
rnd_t = np.random.randint(5*window_size,sample.shape[-1]-5*window_size)
sample = torch.Tensor(sample[:,max(0,(rnd_t-20*window_size)):min(sample.shape[-1], rnd_t+20*window_size)])
T = sample.shape[-1]
windowed_sample = np.split(sample[:, :(T // window_size) * window_size], (T // window_size), -1)
windowed_sample = torch.tensor(np.stack(windowed_sample, 0), device=device)
encodings = encoder(windowed_sample)
window_ind = torch.randint(2,len(encodings)-2, size=(1,))
_, c_t = auto_regressor(encodings[max(0, window_ind[0]-10):window_ind[0]+1].unsqueeze(0))
density_ratios = torch.bmm(encodings.unsqueeze(1),
ds_estimator(c_t.squeeze(1).squeeze(0)).expand_as(encodings).unsqueeze(-1)).view(-1,)
r = set(range(0, window_ind[0] - 2))
r.update(set(range(window_ind[0] + 3, len(encodings))))
rnd_n = np.random.choice(list(r), n_size)
X_N = torch.cat([density_ratios[rnd_n], density_ratios[window_ind[0] + 1].unsqueeze(0)], 0)
if torch.argmax(X_N)==len(X_N)-1:
acc += 1
labels = torch.Tensor([len(X_N)-1]).to(device)
loss = torch.nn.CrossEntropyLoss()(X_N.view(1, -1), labels.long())
epoch_loss += loss.item()
if train:
optimizer.zero_grad()
loss.backward()
optimizer.step()
return epoch_loss / len(data), acc/(len(data))
def learn_encoder(n_cross_val,data_type,datasets,lr,window_size,n_size,tr_percentage,
encoder_type,encoding_size,decay,n_epochs,suffix,device,device_ids,verbose,show_encodings):
accuracies=[]
for cv in range(n_cross_val):
train_data,train_labels,test_data,test_labels = load_datasets(data_type,datasets,cv)  # use the function arguments, not the global args dict
#Save Location
save_dir = './results/baselines/%s_cpc/%s/'%(datasets,data_type)
if not os.path.exists(save_dir):
os.makedirs(save_dir)
save_file = str((save_dir +'encoding_%d_encoder_%d_checkpoint_%d%s.pth.tar')
%(encoding_size,encoder_type, cv,suffix))
if verbose:
print('Saving at: ',save_file)
#Models
input_size = train_data.shape[1]
encoder,_ = select_encoder(device,encoder_type,input_size,encoding_size)
ds_estimator = torch.nn.Linear(encoder.encoding_size, encoder.encoding_size).to(device)
auto_regressor = torch.nn.GRU(input_size=encoding_size, hidden_size=encoding_size, batch_first=True).to(device)
#Training init
params = list(ds_estimator.parameters()) + list(encoder.parameters()) + list(auto_regressor.parameters())
optimizer = torch.optim.Adam(params, lr=lr, weight_decay=decay)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, n_epochs, gamma=0.999)
#Split/Shuffle train and val
inds = list(range(len(train_data)))
random.shuffle(inds)
train_data = train_data[inds]
n_train = int(tr_percentage*len(train_data))
best_acc = 0
best_loss = np.inf
train_loss, val_loss = [], []
#Train
for epoch in range(n_epochs):
epoch_loss, acc = epoch_run(train_data[:n_train], encoder, ds_estimator,
auto_regressor, device, window_size, n_size, optimizer, train=True)
epoch_loss_val, acc_val = epoch_run(train_data[n_train:], encoder, ds_estimator,
auto_regressor, device, window_size, n_size, optimizer, train=False)
scheduler.step()
if verbose:
print('\nEpoch ', epoch)
print('Train ===> Loss: ', epoch_loss)
print('Validation ===> Loss: ', epoch_loss_val)
train_loss.append(epoch_loss)
val_loss.append(epoch_loss_val)
if epoch_loss_val<best_loss:
state = {
'epoch': epoch,
'encoder_state_dict': encoder.state_dict()
}
best_acc = acc_val
best_loss = epoch_loss_val
torch.save(state, save_file)
if verbose:
print('Saving ckpt')
accuracies.append(best_acc)
plt.figure()
plt.plot(np.arange(n_epochs), train_loss, label="Train")
plt.plot(np.arange(n_epochs), val_loss, label="Validation")
plt.title("CPC Unsupervised Loss")
plt.legend()
plt.savefig(save_dir +'encoding_%d_encoder_%d_checkpoint_%d%s.png'%(encoding_size,encoder_type, cv,suffix))
if verbose:
plt.show()
plt.close()
if verbose:
print('Best Train ===> Loss: ', np.min(train_loss))
print('Best Validation ===> Loss: ', np.min(val_loss))
print('-----Accuracy: %.2f +- %.2f-----' % (100 * np.mean(accuracies), 100 * np.std(accuracies)))
return
def run_cpc(args):
s_scores=[]
dbi_scores=[]
#Run Process
learn_encoder(**args)
#Plot Features
title = 'CPC Encoding TSNE for %s'%(args['data_type'])
if args['show_encodings']:
for cv in range(args['n_cross_val']):
train_data,train_labels,test_data,test_labels = load_datasets(args['data_type'],args['datasets'],cv)
utils_plot.plot_distribution(test_data, test_labels,args['encoder_type'],
args['encoding_size'],args['window_size'],'cpc',
args['datasets'],args['data_type'],args['suffix'],
args['device'], title, cv,augment=100)
return
def main(args):
#Devices
args['device'] = torch.device("cuda:0" if torch.cuda.is_available() else 'cpu')  # fall back to CPU when no GPU is present
args['device_ids'] = [i for i in range(torch.cuda.device_count())]
print('Using', args['device'])
#Experiment Parameters
args['window_size'] = 2500
args['encoder_type'] = 1
args['encoding_size'] = 128
args['lr'] = 1e-4
args['decay'] = 1e-5
args['datasets'] = args['data_type']
args['n_size'] = 4
run_cpc(args)
return
args = {'n_cross_val':5,
'data_type':'afdb', #options: afdb, ims, urban
'tr_percentage':0.8,
'n_epochs':100,
'suffix':'',
'show_encodings':False,
'verbose': True}
main(args)
```
# SPARTA QuickStart
-----------------------------------
## 1. Extracting Radial Velocities
### 1.1 Reading and handling spectra
#### `Observations` (class)
`Observations` class enables one to load data from a given folder
and place it into a TimeSeries object.
```
from sparta import Observations
```
The `ob.Observations` module can be used to load a batch of spectra from a given folder into an `Observations` class. In the
example below we load data measured by the NRES spectrograph. If no folder is specified, a file-selection dialog will open.
```
obs_list = Observations(survey='NRES', target_visits_lib='data/TOI677/')
```
<br />
The resulting object contains the following methods and attributes:
*Methods*:
`calc_PDC, convert_times_to_relative_float_values`
*Speacial attributes*:
`spec_list, observation_TimeSeries`
*Auxiliary attributes*: `time_list, rad_list, file_list, bcv, data_type, first_time`
<br />
### 1.2 Preparing the spectra for analysis
#### `Spectrum` (class)
Once initialized, the Observations module loads all the spectra found in the given folder. The loaded spectra are shifted according to the Barycentric velocity provided in the fits file header, and stored in a list of `Spectrum` objects under `spec_list`.
<br/>
*Attributes:* The two main attributes of this class are `wv`, a list of wavelength vectors, and `sp`, the corresponding list of intensities. When loaded as part of the `Observations` class, the spectra are shifted to a barycentric frame.
<br/>
*Methods:*
`InterpolateSpectrum`: resamples the spectrum on a linear or logarithmic scale.
`TrimSpec`: Cuts the edges of the spectrum, removes zero and NaN padding.
`FilterSpectrum`: A Butterworth bandpass filter.
`ApplyCosineBell`: Applies a Tukey window to the data.
`RemoveCosmics`: Removes outliers that deviate above the measured spectrum.
`BarycentricCorrection`: Performs a barycentric correction.
There is also an 'overall' procedure, that calls all the routines with
some default values, `SpecPreProccess`.
<br/>
### Let's demonstrate the work with Spectrum objects:
```
import copy as cp
import matplotlib.pyplot as plt
# Choose a specific observation
spec = cp.deepcopy(obs_list.spec_list[5])
# Remove cosmics, NaNs and zero paddings:
spec.SpecPreProccess(Ntrim=100, CleanMargins=True, RemoveNaNs=True,
delta=0.5, RemCosmicNum=3, FilterLC=4, FilterHC=0.15, alpha=0.3)
# Plot the resulting spectrum
plt.rcParams.update({'font.size': 14})
# plot order 27
plt.figure(figsize=(13, 4), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(spec.wv[20], spec.sp[20], 'k')
plt.xlabel(r'Wavelength [${\rm \AA}$]')
plt.ylabel(r'Normazlied Intensity')
plt.grid()
```
###### <br/>
### Preprocess the measured spectra
Now, assuming that we are pleased with the preprocessing parameters, we will run the procedure on all the measured spectra.
```
import numpy as np
# Keep only the required orders
selected_ords = [10+I for I in np.arange(40)]
obs_list.SelectOrders(
selected_ords,
remove=False)
# Remove NaNs, trim edges, remove cosmics:
# ---------------------------------------
RemoveNaNs = True # Remove NaN values.
CleanMargins = True # Cut the edges of the observed spectra. Remove zero padding.
Ntrim = 100 # Number of pixels to cut from each side of the spectrum.
RemCosmicNum = 3 # Number of sigma of outlier rejection. Only points that deviate upwards are rejected.
# Interpolate the spectrum to evenly sampled bins:
# ------------------------------------------------
delta = 0.5 # 0.5 is equivalent to an oversampling factor of 2.
# Filter the spectrum (Butterworth bandpass):
# -------------------------------------------
FilterLC = 4 # Stopband freq for low-pass filter. In units of the minimal frequency (max(w)-min(w))**(-1)
FilterHC = 0.15 # Stopband freq for the high-pass filter. In units of the Nyquist frequency.
order = 1 # The order of the Butterworth filter (integer)
# Apply cosine-bell:
# ------------------
alpha = 0.3 # Shape parameter of the Tukey window
obs_list.PreProccessSpectra(Ntrim=Ntrim, CleanMargins=CleanMargins, RemoveNaNs=RemoveNaNs,
delta=delta, RemCosmicNum=RemCosmicNum, FilterLC=FilterLC, FilterHC=FilterHC,
alpha=alpha, verbose=True)
```
<br/>
### 1.3 Preparing the model spectrum
#### `Template` (class)
In order to derive the velocities, we may want to use a synthetic template. The next step will therefore be to obtain a PHOENIX template, set its parameters, and make it analysis-ready. The `Template`, at its core, is a `Spectrum` object (stored under the `.model` attribute) with several additional routines.
A synthetic spectrum can be easily downloaded from the PHOENIX ftp server, as demonstrated below. Once downloaded, it can be downsampled with integration, broadened, and arranged to match the shape of the observed spectrum.
```
from sparta.UNICOR.Template import Template
# Retrieve the template.
# If the template is not located in a local directory
# it will be downloaded from the PHOENIX FTP:
template = Template(temp=5800,
log_g=3.5,
metal=0.5,
alpha=0.0,
min_val=4650,
max_val=7500,
air=False)
# Bin the template, to reduce computational strain:
print('Integrating.', end=' ')
template.integrate_spec(integration_ratio=3)
# Make sure that the data are evenly sampled.
# No oversampling required, so delta=1 (when delta<1 the spectrum is oversampled)
print('Interpolating.', end=' ')
template.model.InterpolateSpectrum(delta=1)
# Apply rotational broadening of 6 km/s:
print('Rotating.', end=' ')
template.RotationalBroadening(vsini=6, epsilon=0.5)
# Instrumental broadening for R=53,000
print('Broadening.', end=' ')
template.GaussianBroadening(resolution=53000)
# Cut the template like the observed spectrum
print('Cutting to fit spectra.', end=' ')
template.cut_multiorder_like(obs_list.spec_list[0], margins=150)
# Filter the spectrum Just like the observations were filtered:
template.model.SpecPreProccess(Ntrim=10, CleanMargins=False, RemoveNaNs=False,
delta=1, RemCosmicNum=3, FilterLC=4, FilterHC=0.15, alpha=0.3)
print('Done.')
```
##### Let's see how the model looks against the data
```
# Plot the resulting spectrum
plt.rcParams.update({'font.size': 14})
# plot order 27
plt.figure(figsize=(13, 4), dpi= 80, facecolor='w', edgecolor='k')
ax1 = plt.plot(obs_list.spec_list[0].wv[10], obs_list.spec_list[0].sp[10], 'k', label='Data')
ax2 = plt.plot(template.model.wv[10], template.model.sp[10], 'r', label='Model')
plt.xlabel(r'Wavelength [${\rm \AA}$]')
plt.ylabel(r'Normalized Intensity')
plt.legend()
plt.grid()
```
If we are pleased with the template, we can now move on to calculating the CCF and deriving the velocities.
<br/>
### Cross-correlating the spectra against the template
The `CCF1d` class holds the tools for the cross-correlation, and is called by a wrapper in the `Observations` class.
```
# Set the correlation velocity resolution and bounds.
# ---------------------------------------------------
dv = 0.05 # Assumed to be in km/s unless provided as an Astropy Unit.
# Set the velocity range for analysis:
# -----------------------------------
VelBound = [-50, 100] # Boundaries for the cross correlation.
obs_list.calc_rv_against_template(template, dv=dv, VelBound=VelBound, err_per_ord=False, combine_ccfs=True, fastccf=True)
plt.figure(figsize=(13, 4), dpi= 80, facecolor='w', edgecolor='k')
plt.errorbar(obs_list.time_list,
obs_list.vels,
yerr=obs_list.evels,
fmt='.k')
plt.title('RVs!')
plt.xlabel(r'JD $-$ ${\rm JD}_0$')
plt.ylabel(r'RV [km/s]')
plt.grid()
```
#### Plot the CCF
Here's a CCF of observation #4.
The thin gray lines mark the CCFs of each order, and the thick dark line is the combined CCF.
A red stripe is centered around the derived velocity. Its thickness shows the velocity uncertainty.
```
# %matplotlib inline
_ = obs_list.ccf_list[3].plotCCFs()
```
# Machine Learning Exercise 7 - K-Means Clustering & PCA
This notebook covers a Python-based solution for the seventh programming exercise of the machine learning class on Coursera. Please refer to the [exercise text](https://github.com/jdwittenauer/ipython-notebooks/blob/master/exercises/ML/ex7.pdf) for detailed descriptions and equations.
In this exercise we'll implement K-means clustering and use it to compress an image. We'll start with a simple 2D data set to see how K-means works, then we'll apply it to image compression. We'll also experiment with principal component analysis and see how it can be used to find a low-dimensional representation of images of faces.
## K-means clustering
To start out we're going to implement and apply K-means to a simple 2-dimensional data set to gain some intuition about how it works. K-means is an iterative, unsupervised clustering algorithm that groups similar instances together into clusters. The algorithm starts by guessing the initial centroids for each cluster, and then repeatedly assigns instances to the nearest cluster and re-computes the centroid of that cluster. The first piece that we're going to implement is a function that finds the closest centroid for each instance in the data.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from scipy.io import loadmat
%matplotlib inline
def find_closest_centroids(X, centroids):
m = X.shape[0]
k = centroids.shape[0]
idx = np.zeros(m)
for i in range(m):
        min_dist = float('inf')
for j in range(k):
dist = np.sum((X[i,:] - centroids[j,:]) ** 2)
if dist < min_dist:
min_dist = dist
idx[i] = j
return idx
```
Let's test the function to make sure it's working as expected. We'll use the test case provided in the exercise.
```
data = loadmat('data/ex7data2.mat')
X = data['X']
initial_centroids = np.array([[3, 3], [6, 2], [8, 5]])
idx = find_closest_centroids(X, initial_centroids)
idx[0:3]
```
The output matches the expected values in the text (remember our arrays are zero-indexed instead of one-indexed so the values are one lower than in the exercise). Next we need a function to compute the centroid of a cluster. The centroid is simply the mean of all of the examples currently assigned to the cluster.
```
def compute_centroids(X, idx, k):
m, n = X.shape
centroids = np.zeros((k, n))
for i in range(k):
indices = np.where(idx == i)
centroids[i,:] = (np.sum(X[indices,:], axis=1) / len(indices[0])).ravel()
return centroids
compute_centroids(X, idx, 3)
```
This output also matches the expected values from the exercise. So far so good. The next part involves actually running the algorithm for some number of iterations and visualizing the result. This step was implemented for us in the exercise, but since it's not that complicated I'll build it here from scratch. In order to run the algorithm we just need to alternate between assigning examples to the nearest cluster and re-computing the cluster centroids.
```
def run_k_means(X, initial_centroids, max_iters):
m, n = X.shape
k = initial_centroids.shape[0]
idx = np.zeros(m)
centroids = initial_centroids
for i in range(max_iters):
idx = find_closest_centroids(X, centroids)
centroids = compute_centroids(X, idx, k)
return idx, centroids
idx, centroids = run_k_means(X, initial_centroids, 10)
cluster1 = X[np.where(idx == 0)[0],:]
cluster2 = X[np.where(idx == 1)[0],:]
cluster3 = X[np.where(idx == 2)[0],:]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(cluster1[:,0], cluster1[:,1], s=30, color='r', label='Cluster 1')
ax.scatter(cluster2[:,0], cluster2[:,1], s=30, color='g', label='Cluster 2')
ax.scatter(cluster3[:,0], cluster3[:,1], s=30, color='b', label='Cluster 3')
ax.legend()
```
One step we skipped over is a process for initializing the centroids. This can affect the convergence of the algorithm. We're tasked with creating a function that selects random examples and uses them as the initial centroids.
```
def init_centroids(X, k):
m, n = X.shape
centroids = np.zeros((k, n))
    idx = np.random.choice(m, k, replace=False)  # sample without replacement so no centroid repeats
for i in range(k):
centroids[i,:] = X[idx[i],:]
return centroids
init_centroids(X, 3)
```
Our next task is to apply K-means to image compression. The intuition here is that we can use clustering to find a small number of colors that are most representative of the image, and map the original 24-bit colors to a lower-dimensional color space using the cluster assignments. Here's the image we're going to compress.
```
from IPython.display import Image
Image(filename='data/bird_small.png')
```
The raw pixel data has been pre-loaded for us so let's pull it in.
```
image_data = loadmat('data/bird_small.mat')
image_data
A = image_data['A']
A.shape
```
Now we need to apply some pre-processing to the data and feed it into the K-means algorithm.
```
# normalize value ranges
A = A / 255.
# reshape the array
X = np.reshape(A, (A.shape[0] * A.shape[1], A.shape[2]))
X.shape
# randomly initialize the centroids
initial_centroids = init_centroids(X, 16)
# run the algorithm
idx, centroids = run_k_means(X, initial_centroids, 10)
# get the closest centroids one last time
idx = find_closest_centroids(X, centroids)
# map each pixel to the centroid value
X_recovered = centroids[idx.astype(int),:]
X_recovered.shape
# reshape to the original dimensions
X_recovered = np.reshape(X_recovered, (A.shape[0], A.shape[1], A.shape[2]))
X_recovered.shape
plt.imshow(X_recovered)
```
Cool! You can see that we created some artifacts in the compression but the main features of the image are still there. That's it for K-means. We'll now move on to principal component analysis.
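As a rough back-of-the-envelope check on the gain (assuming the 128 × 128 bird image used here and 16 cluster colors): each pixel now only needs a 4-bit index into a 16-color palette instead of a full 24-bit color.

```python
# Hypothetical sizes for a 128x128 RGB image compressed to 16 palette colors.
n_pixels = 128 * 128
original_bits = n_pixels * 24             # 24 bits per pixel (8 per channel)
compressed_bits = n_pixels * 4 + 16 * 24  # 4-bit index per pixel + the palette
ratio = original_bits / compressed_bits
print(round(ratio, 1))  # → 6.0
```

So the clustering buys roughly a factor-of-six reduction, at the cost of the visible artifacts.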
## Principal component analysis
PCA is a linear transformation that finds the "principal components", or directions of greatest variance, in a data set. It can be used for dimension reduction among other things. In this exercise we're first tasked with implementing PCA and applying it to a simple 2-dimensional data set to see how it works. Let's start off by loading and visualizing the data set.
```
data = loadmat('data/ex7data1.mat')
data
X = data['X']
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(X[:, 0], X[:, 1])
```
The algorithm for PCA is fairly simple. After ensuring that the data is normalized, the output is simply the singular value decomposition of the covariance matrix of the original data.
```
def pca(X):
    # normalize the features (column-wise)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    # compute the covariance matrix
    cov = (X.T @ X) / X.shape[0]
# perform SVD
U, S, V = np.linalg.svd(cov)
return U, S, V
U, S, V = pca(X)
U, S, V
```
Now that we have the principal components (matrix U), we can use these to project the original data into a lower-dimensional space. For this task we'll implement a function that computes the projection and selects only the top K components, effectively reducing the number of dimensions.
```
def project_data(X, U, k):
U_reduced = U[:,:k]
return np.dot(X, U_reduced)
Z = project_data(X, U, 1)
Z
```
We can also attempt to recover the original data by reversing the steps we took to project it.
```
def recover_data(Z, U, k):
U_reduced = U[:,:k]
return np.dot(Z, U_reduced.T)
X_recovered = recover_data(Z, U, 1)
X_recovered
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(X_recovered[:, 0], X_recovered[:, 1])
```
Notice that the projection axis for the first principal component was basically a diagonal line through the data set. When we reduced the data to one dimension, we lost the variations around that diagonal line, so in our reproduction everything falls along that diagonal.
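One way to quantify how little was lost (a sketch on made-up data, since `X` has been reassigned above): the singular values `S` returned by `pca` measure the variance along each principal direction, so the fraction retained by a 1-D projection is `S[0] / S.sum()`.

```python
import numpy as np

# Synthetic 2-D data stretched along a diagonal, standing in for ex7data1.
rng = np.random.default_rng(0)
base = rng.standard_normal(50)
X_demo = np.column_stack([base, base + 0.1 * rng.standard_normal(50)])
X_demo = (X_demo - X_demo.mean(axis=0)) / X_demo.std(axis=0)

cov = (X_demo.T @ X_demo) / X_demo.shape[0]
U, S, V = np.linalg.svd(cov)
retained = S[0] / S.sum()  # fraction of total variance kept by component 1
```

For strongly diagonal data like this, `retained` comes out near 1, which is why the 1-D reproduction still looks so much like the original.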
Our last task in this exercise is to apply PCA to images of faces. By using the same dimension reduction techniques we can capture the "essence" of the images using much less data than the original images.
```
faces = loadmat('data/ex7faces.mat')
X = faces['X']
X.shape
```
The exercise code includes a function that will render the first 100 faces in the data set in a grid. Rather than try to re-produce that here, you can look in the exercise text for an example of what they look like. We can at least render one image fairly easily though.
```
face = np.reshape(X[3,:], (32, 32))
plt.imshow(face)
```
Yikes, that looks awful. These are only 32 x 32 grayscale images, though (it's also rendering sideways, but we can ignore that for now). Anyway, let's proceed. Our next step is to run PCA on the faces data set and take the top 100 principal components.
```
U, S, V = pca(X)
Z = project_data(X, U, 100)
```
Now we can attempt to recover the original structure and render it again.
```
X_recovered = recover_data(Z, U, 100)
face = np.reshape(X_recovered[3,:], (32, 32))
plt.imshow(face)
```
Observe that we lost some detail, though not as much as you might expect for a 10x reduction in the number of dimensions.
That concludes exercise 7. In the final exercise we'll implement algorithms for anomaly detection and build a recommendation system using collaborative filtering.
# Lab 2: Welcome to Python + Data Structures
## Overview
Welcome to your first lab! Labs in CS41 are designed to be your opportunity to experiment with Python and gain hands-on experience with the language.
The primary goal of the first half is to ensure that your Python installation process went smoothly, and that there are no lingering Python installation bugs floating around.
The second half focuses more on using data structures to solve some interesting problems.
You're welcome to work in groups or individually. Remember to have some fun! Make friends and maybe relax a little too. If you want to cue up any songs to the CS41 playlist, let us know or add them yourself at [this link](https://open.spotify.com/playlist/1pn8cUoKsLlOfX7WEEARz4?si=jKogUQTsSDmqu6RbSBGfGA)!
**Note: These labs are *designed* to be long! You shouldn't be able to finish all the problems in one class period. Work through as much as you can in the time allotted, but also feel free to skip from question to question freely. Part II is intended to be extra practice, if you want to hone your Python skills even more.**
Above all, have fun playing with Python! Enjoy.
### Running this notebook
To run a Jupyter notebook, first change directories (`cd`) to wherever you've downloaded the notebook. Then, from a command line, activate the CS41 environment (usually `source ~/cs41-env/bin/activate`) and run:
```shell
(cs41-env) $ jupyter notebook
```
You might need to `pip install jupyterlab` from within your virtual environment if you haven't yet.
## Zen of Python
Run the following code cell by selecting the cell and pressing Shift+Enter.
```
import this
```
## Hello World
Edit the following cell so that it prints `"Hello, world!"` when executed, and then run the cell to confirm.
```
# Edit me so that I print out "Hello, world!" when run!
```
## Warmup Problems! 🦄
### Fizz, Buzz, FizzBuzz!
If we list all of the natural numbers under 41 that are a multiple of 3 or 5, we get
```
3, 5, 6, 9, 10, 12, 15,
18, 20, 21, 24, 25, 27, 30,
33, 35, 36, 39, 40
```
The sum of these numbers is 408.
Find the sum of all the multiples of 3 or 5 below 1001. As a sanity check, the last two digits of the sum should be `68`.
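A quick way to sanity-check the 408 figure quoted above (this is essentially the whole solution, so skip it if you want to work it out yourself):

```python
# Sum the multiples of 3 or 5 below 41; the list above totals 408.
total = sum(n for n in range(41) if n % 3 == 0 or n % 5 == 0)
print(total)  # → 408
```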
```
def fizzbuzz(n):
"""Returns the sum of all numbers less than n divisible by 3 or 5."""
pass
print(fizzbuzz(41)) # => 408
print(fizzbuzz(1001))
```
### Collatz Sequence
Depending on from whom you took CS106A, you may have seen this problem before.
The *Collatz sequence* is an iterative sequence defined on the positive integers by:
```
n -> n / 2 if n is even
n -> 3n + 1 if n is odd
```
For example, using the rule above and starting with 13 yields the sequence:
```
13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
```
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although unproven (this is the Collatz conjecture), it is hypothesized that all starting numbers finish at 1.
What is the length of the longest chain which has a starting number under 1000?
**Challenge**: Same question, but for any starting number under 1,000,000. What about for any starting number under 10,000,000? You may need to implement a cleverer-than-naive algorithm.
<details>
<summary><b>Hints</b> (click to expand):</summary>
<ol>
<li>You can implement <code>collatz_len</code> really elegantly using recursion, but you don't need that! You can also use a <code>while</code> loop.</li>
<li>(for the challenge) You don't need to do any complicated math to get the cleverer-than-naive algorithm! Just try to improve efficiency when your code encounters a Collatz sequence that it's already encountered.</li>
</ol>
</details>
*NOTE: Once the chain starts the terms are allowed to go above one thousand.*
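To make the rule concrete, here is a trace of the example chain from the text (just the iteration rule itself — `collatz_len` and `max_collatz_len` are still yours to write):

```python
# Iterate the Collatz rule starting from 13 and collect the terms.
chain = [13]
while chain[-1] != 1:
    n = chain[-1]
    chain.append(n // 2 if n % 2 == 0 else 3 * n + 1)
print(chain)       # → [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
print(len(chain))  # → 10
```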
```
def collatz_len(n):
"""Computes the length of the Collatz sequence starting at n."""
pass
def max_collatz_len(n):
"""Computes the longest Collatz sequence length for starting numbers less than n"""
pass
print(collatz_len(13)) # => 10
print(max_collatz_len(1000))
# Challenge: Only attempt to solve these if you feel very comfortable with this material.
# print(max_collatz_len(1000000))
# print(max_collatz_len(10000000))
```
### Zen Printing
Write a program using `print()` that, when run, prints out a tic-tac-toe board.
```
X | . | .
-----------
. | O | .
-----------
. | O | X
```
You may find the optional arguments to `print()` useful, which you can read about [here](https://docs.python.org/3/library/functions.html#print). In no more than five minutes, try to use these optional arguments to print out this particular tic-tac-toe board.
```
# Print a tic-tac-toe board using optional arguments.
def print_tictactoe():
"""Print out a specific partially-filled tic-tac-toe board."""
pass
```
Maybe you were able to print out the tic-tac-toe board. Maybe not. In the five minutes you've been working on that, I've gotten bored with normal tic-tac-toe (too many ties!) so now, I want to play SUPER tic-tac-toe.
Write a program that prints out a SUPER tic-tac-toe board.
```
| | H | | H | |
--+--+--H--+--+--H--+--+--
| | H | | H | |
--+--+--H--+--+--H--+--+--
| | H | | H | |
========+========+========
| | H | | H | |
--+--+--H--+--+--H--+--+--
| | H | | H | |
--+--+--H--+--+--H--+--+--
| | H | | H | |
========+========+========
| | H | | H | |
--+--+--H--+--+--H--+--+--
| | H | | H | |
--+--+--H--+--+--H--+--+--
| | H | | H | |
```
You'll find that there might be many ways to solve this problem. Which do you think is the most 'pythonic?' Talk to someone next to you about your approach to this problem. Remember the Zen of Python!
**Hint:** To find the most Pythonic way to do this, think about what you'd do if I said you have 10 seconds to write code that prints out the list.
```
def print_super_tictactoe():
"""Print an empty SUPER tic-tac-toe board."""
pass
```
## Data Structures, whooo!
If you've gotten this far, you're already above and beyond! Our recommendation at this point is to pick out the most interesting of the following problems and solve those!
[Optional Reading on Standard Types - check out Sequence types and Mapping types](https://docs.python.org/3/library/stdtypes.html)
### Tuples and Dicts: Caching
In this problem, we'll use tuples and dicts to memoize a recursive function.
*Note*: In general, we try to keep this class as self-contained as possible and don't ask that you know too much computer science outside of CS41. For this problem, though, you will use recursion. If you aren't super comfortable with recursion or recursive memoization, feel free to skip ahead!
This problem is called **lattice paths**. Starting in the top left corner of a $2 \times 2$ grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.

Write a recursive (non-memoized) function to calculate the number of paths from the top left of an $m \times n$ grid ($m$ rows and $n$ columns) to the bottom right corner.
Notice that the problem seems very similar to itself... At every point in the grid, you only have two moves: down and left.
<details>
<summary><b>Hints</b> (click to expand):</summary>
<ol>
<li>The recursive base case is a grid where <i>either</i> the number of rows is zero or the number of columns is zero. How many paths are there in this case? Notice that programming it this way also prevents $m$ or $n$ from being negative.</li>
<li>Suppose you move down. Then, the remaining problem is <i>still</i> to count the number of paths from the top left of a grid to the bottom right, but now the grid is a different size.</li>
<li>If you move down, the problem becomes counting the number of paths from the top left of a grid of size $(m-1) \times n$ to the bottom right of that grid.</li>
</ol>
</details>
```
def lattice_paths(m, n):
pass
print(lattice_paths(2, 2)) # => 6
print(lattice_paths(6, 6)) # => 924
#print(lattice_paths(20, 20)) # => this takes too long without memoization!
#print(lattice_paths(40, 40)) # => this takes WAY too long without memoization!
```
Notice that the way that our code is constructed results in a lot of repeated calls to the same functions. For example, if you start at the top of a $20 \times 20$ grid and go down, then right, you'll make a function call for a $19 \times 19$ grid. But you'll also make that call if you go right and *then* down. This is where memoization comes in. We'll store the output of `lattice_paths` in a dictionary the first time that we call it and then the second time, we'll just immediately return the value from the dictionary so that we won't make calls twice.
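To see the caching pattern without giving away the lattice-paths answer, here is the same idea on a different recursion (Fibonacci). The standard library's `functools.lru_cache` is a ready-made version of the dictionary we're about to build by hand:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # stores results keyed by the argument tuple
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))  # fast with caching; astronomically slow without it
```

Each distinct `n` is computed exactly once; every repeat call is a dictionary lookup. That is precisely what the `(m, n)` tuple keys will do for `lattice_paths`.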
Let's memoize `lattice_paths` by adding its output to a dictionary. This dictionary should have keys that are tuples $(m,n)$ and values that are the output `lattice_paths(m,n)`. Make sure to check if the tuple `(m,n)` is in the memoization dictionary at the beginning of the function execution.
```
memoization_dict = {}
def lattice_paths_memoized(m, n, memoization_dict):
pass
print(lattice_paths_memoized(2, 2, memoization_dict)) # => 6
print(lattice_paths_memoized(6, 6, memoization_dict)) # => 924
print(lattice_paths_memoized(20, 20, memoization_dict)) # => sanity check: ends with 8820
print(lattice_paths_memoized(40, 40, memoization_dict)) # => sanity check: ends with 1620
memoization_dict.clear()
```
### Lists
Predict what the following lines of Python will do. Then, run the code block below to see if they match what you expect:
```Python
s = [0] * 3
print(s)
s[0] += 1
print(s)
s = [''] * 3
print(s)
s[0] += 'a'
print(s)
s = [[]] * 3
print(s)
s[0] += [1]
print(s)
```
```
# Explore the elements of lists. Is the output what you expect?
s = [0] * 3
print(s)
s[0] += 1
print(s)
s = [''] * 3
print(s)
s[0] += 'a'
print(s)
s = [[]] * 3
print(s)
s[0] += [1]
print(s)
```
#### Let's investigate... why is this happening?
Strategies for investigation (that you can use to understand this situation better):
1. We'll use the `id` function to investigate further.
2. Think about when Python creates a copy of objects and when it doesn't... does this help explain the behavior in the examples?
3. What happens when we replace the second-to-last line with `s[0] = s[0] + [1]`? What if we replace the line with `s[0].append(1)`?
```
s_int = [0] * 3
s_str = [''] * 3
s_lst = [[]] * 3
print(s_int[0] is s_int[1] is s_int[2])
print(s_str[0] is s_str[1] is s_str[2])
print(s_lst[0] is s_lst[1] is s_lst[2])
```
Okay, this is how the lists are initialized... so when does this change?
```
s_int[0] += 1
print(id(s_int[0]))
print(id(s_int[1]))
print(id(s_int[2]))
s_str[0] += 'a'
print(id(s_str[0]))
print(id(s_str[1]))
print(id(s_str[2]))
s_lst[0] += [[1]]
print(id(s_lst[0]))
print(id(s_lst[1]))
print(id(s_lst[2]))
```
**WOW**! What does that tell you about the way that Python treats ints vs. strings vs. lists? Tell your neighbor, in an excited tone of voice (or a Barack Obama impression)!
Based on that, predict what this function is going to do...
```
lst = ['I', 'know', "you'll", 'think', "I'm", 'like', 'the', 'others']
s = "Who saw your name and number on the "
i = 8675308
def sing(lst, s, i):
lst += ['before']
s += 'wall'
i += 1
sing(lst, s, i)
print(' '.join(lst))
print(s)
print(i)
```
Jenny, Jenny, who can I turn to... (Jenny may be ignoring you, but Python's always got your back 😉)
### Tuples
Write a function to compute the [GCD](https://en.wikipedia.org/wiki/Greatest_common_divisor) of two positive integers. You can freely use the fact that `gcd(a, b)` is mathematically equal to `gcd(b, a % b)`, and that `gcd(a, 0) == a`.
You can assume that `a >= b` if you'd like.
It is possible to accomplish this in three lines of Python code (or with extra cleverness, even fewer!). Consider exploiting tuple packing and unpacking!
*Note: The standard library has a `gcd` function. Avoid simply importing that function and using it here - the goal is to practice with tuple packing and unpacking!*
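If tuple packing and unpacking are new to you, the canonical demonstration is a swap with no temporary variable — the whole right-hand side is packed into a tuple before anything is assigned. The same trick lets you update both arguments of `gcd` in a single statement:

```python
x, y = 3, 9   # pack: bind two names at once
x, y = y, x   # swap: the tuple (y, x) is built before either assignment
print(x, y)   # → 9 3
```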
```
def gcd(a, b):
"""Compute the GCD of two positive integers."""
pass # Your implementation here
gcd(10, 25) # => 5
gcd(14, 15) # => 1
gcd(3, 9) # => 3
gcd(1, 1) # => 1
```
### Flipping Dictionaries
I asked our course staff what their favorite animal is and they gave me these answers:
```python
fav_animals = {
'parth': 'unicorn',
'michael': 'elephant',
'sam': 'ox',
'zheng': 'tree',
'theo': 'puppy',
'alex': 'dog',
'nick': 'daisy'
}
```
I know that, realistically, they're lying to themselves. In fact, the dict probably looks more like this:
```python
fav_animals = {
'parth': 'unicorn',
'michael': 'unicorn',
'sam': 'unicorn',
'zheng': 'tree',
'theo': 'unicorn',
'alex': 'dog',
'nick': 'daisy'
}
```
In this problem, we'll reverse the `fav_animals` dictionary to create a new dictionary which associates animals to a list of people for whom that animal is their favorite.
More precisely, write a function that properly reverses the keys and values of a dictionary - each key (originally a value) should map to a collection of values (originally keys) that mapped to it. For example,
```python
flip_dict({"CA": "US", "NY": "US", "ON": "CA"})
# => {"US": ["CA", "NY"], "CA": ["ON"]}
```
Note: there is a data structure in the `collections` module from the standard library called `defaultdict` which provides exactly this sort of functionality. You provide it a factory method for creating default values in the dictionary (in this case, a list.) You can read more about `defaultdict` and other `collections` data structures [here](https://docs.python.org/3/library/collections.html).
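To get a feel for `defaultdict` before using it here, a tiny grouping sketch on made-up data — a missing key springs into existence as an empty list, so there is no "is this key present yet?" bookkeeping:

```python
from collections import defaultdict

words = ['ant', 'bee', 'ape', 'bat']
groups = defaultdict(list)       # a missing key defaults to []
for w in words:
    groups[w[0]].append(w)       # no membership check needed

print(dict(groups))  # → {'a': ['ant', 'ape'], 'b': ['bee', 'bat']}
```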
```
fav_animals = {
'parth': 'unicorn',
'michael': 'unicorn',
'sam': 'unicorn',
'zheng': 'tree',
'theo': 'unicorn',
'alex': 'dog',
'nick': 'daisy'
}
def flip_dict(d):
pass
print(flip_dict(fav_animals))
# {'unicorn': ['parth', 'michael', 'sam', 'theo'], 'tree': ['zheng'], 'dog': ['alex'], 'daisy': ['nick']}
```
# Bonus Problems
Don't worry about doing these bonus problems. In most cases, bonus questions ask you to think more critically or use more advanced algorithms.
In this case, we'll use list comprehensions to solve some interesting problems.
### Pascal's Triangle
Write a function that generates the next level of [Pascal's triangle](https://en.wikipedia.org/wiki/Pascal%27s_triangle) given a list that represents a row of Pascal’s triangle.
```
generate_pascal_row([1, 2, 1]) -> [1, 3, 3, 1]
generate_pascal_row([1, 4, 6, 4, 1]) -> [1, 5, 10, 10, 5, 1]
generate_pascal_row([]) -> [1]
```
As a reminder, each element in a row of Pascal's triangle is formed by summing the two elements in the previous row directly above that element (to the left and right). If there is only one element directly above, we only add that one. For example, the first 5 rows of Pascal's triangle look like:
```
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
```
You could solve this problem with `enumerate` or look up the `zip` function for an even more Pythonic solution (use iPython, perhaps?)! Avoid using a loop of the form `for i in range(len(row)):`.
*Hint: Check out the diagram below. How could you use this insight to help complete this problem?*
```
0 1 3 3 1
+ 1 3 3 1 0
-----------
1 4 6 4 1
```
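In code, the diagram above is just two shifted copies of the row summed elementwise, and `zip` is what pairs the shifted copies up (shown here for the row from the diagram — turning this into `generate_pascal_row` is the exercise):

```python
row = [1, 3, 3, 1]
pairs = list(zip([0] + row, row + [0]))   # pad-and-shift, then pair up
print(pairs)    # → [(0, 1), (1, 3), (3, 3), (3, 1), (1, 0)]
new_row = [a + b for a, b in pairs]
print(new_row)  # → [1, 4, 6, 4, 1]
```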
```
def generate_pascal_row(row):
"""Generate the next row of Pascal's triangle."""
pass
generate_pascal_row([1, 2, 1]) # => [1, 3, 3, 1]
generate_pascal_row([1, 4, 6, 4, 1]) # => [1, 5, 10, 10, 5, 1]
generate_pascal_row([]) # => [1]
```
## Cyclone Phrases (challenge)
For the following problem, we describe a criterion that makes a word (or phrase!) special, similarly to our "Efficient Words" from lecture.
Before that, though, we need to load up a list of all of the words in the English language to see which words fit the criterion.
If you are using macOS or Linux, you should have a dictionary file available at `/usr/share/dict/words`, a 2.5M text file containing over 200 thousand English words, one per line. However, we've also mirrored this file at `https://stanfordpython.com/res/misc/words`, so you can download the dictionary from there if your computer doesn't have this dictionary file readily available.
What would be an appropriate data structure in which to store the English words?
Write the method `load_english` to load English words from this file. How many English words are there in this file?
```
# If you downloaded words from the course website,
# change me to the path to the downloaded file.
DICTIONARY_FILE = '/usr/share/dict/words'
def load_english():
"""Load and return a collection of english words from a file."""
pass
english = load_english()
print(len(english))
```
Cyclone words are English words that have a sequence of characters in alphabetical order when following a cyclic pattern.
For example:

Write a function to determine whether an entire phrase is made of cyclone words. You can assume that all words are made of only alphabetic characters, and are separated by whitespace.
```
is_cyclone_phrase("adjourned") # => True
is_cyclone_phrase("settled") # => False
is_cyclone_phrase("all alone at noon") # => True
is_cyclone_phrase("by myself at twelve pm") # => False
is_cyclone_phrase("acb") # => True
is_cyclone_phrase("") # => True
```
Generate a list of all cyclone words. How many are there? As a sanity check, we found 769 distinct cyclone words.
```
def is_cyclone_word(word):
"""Return whether a word is a cyclone word."""
def is_cyclone_phrase(phrase):
"""Return whether a phrase is composed only of cyclone words."""
```
## Done Early?
Download the second part of this lab and keep working, if you'd like! The second part of the lab is significantly more difficult than the problems here, especially the later problems in the extra lab notebook. You can download the notebook at [this link](https://github.com/stanfordpython/python-labs/blob/master/notebooks/lab-2-data-structures-part-2.ipynb).
Skim [Python’s Style Guide](https://www.python.org/dev/peps/pep-0008/), keeping the Zen of Python in mind. Feel free to skip portions of the style guide that cover material we haven't yet touched on in this class, but it's always good to start with an overview of good style.
## Submitting Labs
Alright, you did it! There's nothing to submit for this lab - just show one of the course staff members what you've done. You're free to leave as soon as you've finished this lab.
*Credit to Sam Redmond, Puzzling.SE (specifically [JLee](https://puzzling.stackexchange.com/users/463/jlee)), ProjectEuler and InterviewCake for several problem ideas*
> With 🦄 by @psarin and @coopermj
```
import os
import sys
import pandas as pd
from sklearn.model_selection import train_test_split
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Conv1D, BatchNormalization, GlobalAveragePooling1D, Permute, Dropout, Flatten
from tensorflow.keras.layers import Input, Dense, LSTM, concatenate, Activation, GRU, SimpleRNN
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers
import tensorflow as tf
import pickle
from random import randint
sys.path.append('./')
os.environ["CUDA_VISIBLE_DEVICES"]="1"
from utils.constants import NB_CLASSES_LIST
from model_training.exp_utils import train_model, evaluate_model
def generate_dynamic_lstmfcn(NB_CLASS, NUM_CELLS=128):
ip = Input(shape=(1, None))
x = Permute((2, 1))(ip)
x = LSTM(NUM_CELLS)(x)
x = Dropout(0.2)(x)
y = Permute((2, 1))(ip)
y = Conv1D(128, 8, padding='same', kernel_initializer='he_uniform')(y)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = Conv1D(256, 5, padding='same', kernel_initializer='he_uniform')(y)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = Conv1D(128, 3, padding='same', kernel_initializer='he_uniform')(y)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = GlobalAveragePooling1D()(y)
x = concatenate([x, y])
x = Dense(256, activation='relu',kernel_regularizer=regularizers.l2(0.01))(x)
out = Dense(NB_CLASS, activation='softmax',kernel_regularizer=regularizers.l2(0.001))(x)
model = Model(ip, out)
return model
epochs = 25
LR = 5e-5
batch_size = 512
N_TRIALS = 25
TRAINING = True
for trial_no in range(1,N_TRIALS+1):
    seed = randint(0, 1000)  # randint requires integer bounds (1e3 is a float)
tf.random.set_seed(seed)
dataset_map = [('Dataset-C/TRIAL-{}'.format(trial_no),0),('Dataset-W/TRIAL-{}'.format(trial_no),1)]
base_log_name = '%s_%d_cells_new_datasets.csv'
base_weights_dir = '%s_%d_cells_weights/'
normalize_dataset = False
MODELS = [('dynamic_lstmfcn',generate_dynamic_lstmfcn),]
CELLS = [128]
for model_id, (MODEL_NAME, model_fn) in enumerate(MODELS):
for cell in CELLS:
for dname, did in dataset_map:
NB_CLASS = NB_CLASSES_LIST[did]
K.clear_session()
weights_dir = base_weights_dir % (MODEL_NAME, cell)
os.makedirs('weights/' + weights_dir,exist_ok=True)
dataset_name_ = weights_dir + dname
model = model_fn(NB_CLASS, cell)
print('*' * 20, "Training model %s for dataset %s" % (MODEL_NAME,dname), '*' * 20)
if(TRAINING):
model,history = train_model(model, did, dataset_name_, epochs=epochs, batch_size=batch_size,normalize_timeseries=normalize_dataset,learning_rate=LR)
print(history)
print('--' * 20, "Evaluating model %s for dataset %s" % (MODEL_NAME,dname), '*' * 20)
acc = evaluate_model(model, did, dataset_name_, batch_size=batch_size,normalize_timeseries=normalize_dataset)
```
# PDP Team 11 Design/Course Prep Planning Meeting
## August 9, 2018
Meeting goal: discuss PDP team 11's current status and plan August activities.
[Design notebook](https://docs.google.com/document/d/1iexo2xeYYIVDWD_pfgW4sOA5sRZSStsGNth-_bU_lrU/edit#)
[Teaching plan](https://docs.google.com/document/d/1xb4omX9AnZTlrfoPtOVJf1j6Hr_BCGGEwoEH_YzwSWc/edit)
### Overview of Teaching Plan Status, Priorities
**Teaching Plan Items**
~~Strikethrough~~ means drafted in Teaching Plan
* ~~Meta Info~~
* Content learning outcomes
* Requires: ~~Content rubric~~
* Requires: ~~Content prompt~~
* Practice learning outcomes
* Requires: ~~Practice rubric~~
* ~~E&I Approach~~
* Inquiry Activity
* Set of ODE model systems for students to explore
* Introduction
* Raising Questions
* Investigations
* CAT (Culminating Assessment Task)
* Synthesis
Note that learning outcomes are largely derived from other elements; they are basically a different way of expressing the design elements. Thus, these are not currently a priority, and my current plan is to quickly draft them after the core of the inquiry activity is designed.
**Priorities**
* Determine set of ODE model systems
* Develop introduction materials, draft teaching plan section
* Draft raising questions/investigations in teaching plan
* Develop CAT, draft in teaching plan
**Resources**
[Example of CMSE NB working with compartmental models](http://nbviewer.jupyter.org/url/amjacobs.net/assets/NBs/Day-10_In-Class_CompartmentalModeling-STUDENT.ipynb)
[Example of CMSE NB working with ODEs](http://nbviewer.jupyter.org/url/amjacobs.net/assets/NBs/HW07_LorenzModel_ODEINT.ipynb)
### Tasks
+ Explore the prompt questions for a particular ODE model system
+ Each team member does one (**Adam**, **Justin**, **Mallory**, **Philipp**)
+ Some possibilities (feel free to find others, and if you think a non-compartmental ODE model may work well for the purposes of the activity that would be good to know about as well).
  + predator-prey
  + disease spread (sometimes called SIR)
  + spread of rumor (e.g. through social media)
  + reaction network
+ Things to consider
  + Be careful to relate back to the content rubric/prompt and how useful the model is for helping students engage with the dimensions.
  + Are there domains for certain ranges of model parameters? A model parameter is, for example, a constant in the ODE relating to modeled behavior, such as the breeding rate of rabbits in a Foxes/Rabbits predator-prey model.
  + What does convergence look like? What appear to be reasonable stepsizes?
  + When we get back together, we'll consider common elements/design and how we'll teach the content rubric dimension of the general applicability of ODE methods to different models.
+ Adam will work on getting some resources to the team regarding models and basics
+ Develop Introduction Material, Draft in Teaching Plan (**Adam**)
+ Draft Raising Questions, Investigations sections of Teaching Plan (**TBD**)
+ Draft CAT (**TBD**)
+ Draft Synthesis (not immediate priority) (**TBD**)
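As a concrete starting point for the model exploration tasks above, a predator-prey (Lotka-Volterra) system can be integrated with a simple forward-Euler stepper, which also makes the stepsize/convergence question tangible. The parameter values and function names below are illustrative, not part of any course material:

```python
def lotka_volterra(state, alpha=1.1, beta=0.4, gamma=0.4, delta=0.1):
    """Rabbit/fox derivatives; the constants are the model parameters."""
    r, f = state
    return (alpha * r - beta * r * f,   # rabbit growth minus predation
            delta * r * f - gamma * f)  # fox growth minus natural death

def euler(deriv, state, dt, steps):
    """Forward Euler; re-running with a halved dt is a quick convergence check."""
    traj = [state]
    for _ in range(steps):
        dr, df = deriv(traj[-1])
        r, f = traj[-1]
        traj.append((r + dt * dr, f + dt * df))
    return traj

# ~2 oscillation periods with these parameters
traj = euler(lotka_volterra, (6.0, 3.0), dt=0.01, steps=2000)
```

Comparing runs at dt=0.1 and dt=0.01 is one way students could see what a "reasonable stepsize" means for this model.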
### Next Meeting
+ Discuss model exploration, decide on the model set we'll use
+ Discuss intro, possibly approve draft (this is currently envisioned as Adam's responsibility due to me already having some experience with relevant CMSE materials)
+ Plan and distribute tasks for the completion of the Inquiry Activity design, which will then be drafted in the Teaching Plan
### Meeting notes
```
# testing installation
import pandas as pd
import matplotlib.pyplot as plt
conf = pd.read_csv('sensingbee.conf', index_col='param')
import sys, os
sys.path.append(conf.loc['GEOHUNTER_PATH','val'])
sys.path.append(conf.loc['SOURCE_PATH','val'])
import geohunter
import sensingbee
conf
```
# Data preparation
```
import geopandas as gpd
city_shape = gpd.read_file(os.path.join(conf.loc['DATA_PATH','val'],'newcastle.geojson'))
fig, ax = plt.subplots(figsize=(7,5))
city_shape.plot(ax=ax)
```
Get open sensor data for Newcastle upon Tyne from the [Urban Observatory data portal](http://newcastle.urbanobservatory.ac.uk/) or from its API.
```
data = pd.read_csv('http://newcastle.urbanobservatory.ac.uk/api/v1.1/sensors/data/csv/?starttime=20180117100000&endtime=20180117120000&data_variable=Temperature')
data.head()
```
Separate the data into *samples* and *metadata*
```
# To resample data by median values on regular intervals
samples = data[['Variable','Sensor Name','Timestamp','Value']].loc[data['Flagged as Suspect Reading']==False]
samples['Timestamp'] = pd.to_datetime(samples['Timestamp'])
samples = samples.set_index(['Variable','Sensor Name','Timestamp'])['Value']
level_values = samples.index.get_level_values
samples = (samples.groupby([level_values(i) for i in [0,1]]
+[pd.Grouper(freq=conf.loc['SAMPLE_FREQ','val'], level=-1)]).median())
samples
import shapely
metadata = data[['Sensor Name', 'Ground Height Above Sea Level', 'Broker Name', 'Sensor Centroid Longitude', 'Sensor Centroid Latitude']]
metadata = metadata.set_index('Sensor Name').drop_duplicates()
# Transform into a GeoDataFrame
metadata['geometry'] = metadata.apply(lambda x: shapely.geometry.Point([(x['Sensor Centroid Longitude']), x['Sensor Centroid Latitude']]), axis=1)
metadata = gpd.GeoDataFrame(metadata, geometry=metadata['geometry'], crs={'init':'epsg:4326'})
fig, ax = plt.subplots(figsize=(7,5))
city_shape.plot(ax=ax)
metadata.plot(ax=ax, color='black')
# To get sensors only within city_shape
metadata = gpd.sjoin(metadata, city_shape, op='within')[metadata.columns]
idx = pd.IndexSlice
samples = samples.loc[idx[:,metadata.index,:]]
fig, ax = plt.subplots(figsize=(7,5))
city_shape.plot(ax=ax)
metadata.plot(ax=ax, color='black')
```
Set the regions where predictions will be made. We call this set of regions the "grid".
```
bbox = {'north':city_shape.bounds.max().values[3],
'east':city_shape.bounds.max().values[2],
'south':city_shape.bounds.min().values[1],
'west':city_shape.bounds.min().values[0]}
grid = geohunter.features.Grid(resolution=0.5).fit(city_shape).data
fig, ax = plt.subplots(figsize=(7,5))
city_shape.plot(ax=ax)
grid.plot(ax=ax, color='orange')
class Data(object):
    def __init__(self, samples, metadata, grid):
        self.samples = samples
        self.metadata = metadata
        self.metadata['lon'] = self.metadata.geometry.x
        self.metadata['lat'] = self.metadata.geometry.y
        self.grid = grid

data = Data(samples, metadata, grid)
```
# Feature Engineering
_____
In order to predict spatial variables, one has to translate samples into explanatory variables, also called features. Each spatial variable must be analysed separately and described in terms of spatial features.
One of the ways sensingbee extracts features is from Inverse Distance Weighting (IDW) estimations.
For now, let's extract primary features (Temperature features to predict Temperature, for instance). One can extract IDW features for every cell in the grid, as below.
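Before using sensingbee's `inverse_distance_weighting`, here is a rough standalone sketch of the IDW idea itself. The function name and test points are illustrative, not sensingbee's API:

```python
import numpy as np

def idw_estimate(target_xy, sensor_xy, sensor_values, p=2):
    """Weighted average of sensor readings; weights fall off as 1/d**p,
    so nearby sensors dominate the estimate."""
    d = np.linalg.norm(sensor_xy - target_xy, axis=1)
    if np.any(d == 0):                  # target sits exactly on a sensor
        return float(sensor_values[np.argmin(d)])
    w = 1.0 / d**p
    return float(np.sum(w * sensor_values) / np.sum(w))

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
readings = np.array([10.0, 20.0, 30.0])
estimate = idw_estimate(np.array([0.1, 0.1]), sensors, readings)  # ≈ 10.7
```

The estimate lands close to the nearest sensor's reading, which is exactly the behaviour the `p` and `threshold` parameters below tune.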
```
method = sensingbee.feature_engineering.inverse_distance_weighting
idx = pd.IndexSlice
params = {'p': 2, 'threshold': 10/110}
y = samples.loc['Temperature']
if False:
    X = pd.DataFrame(index=pd.MultiIndex.from_product([grid.index, samples.index.get_level_values('Timestamp')], names=['Sensor Name', 'Timestamp']))
    X = X.loc[~X.index.duplicated(keep='first')]
else:
    X = pd.DataFrame(index=y.index)
for var in samples.index.get_level_values('Variable').unique():
    for time in samples.index.get_level_values('Timestamp').unique():
        v_samples = samples.loc[idx[var, :, time]].reset_index()
        v_sensors = metadata.loc[v_samples['Sensor Name']]
        v_sensors = v_sensors.dropna()
        mask = y.loc[idx[:, time], :]
        if grid is not None:
            _ = grid.apply(lambda x: method(x, v_sensors, v_samples.set_index('Sensor Name'), **params), axis=1)
            _.index = X.loc[idx[:, time], :].index
            X.loc[idx[:, time], var] = _.values
        else:
            _ = y.loc[idx[:, time], :].reset_index()['Sensor Name']\
                .apply(lambda x: method(metadata.loc[x], v_sensors, v_samples.set_index('Sensor Name'), **params))
            _.index = y.loc[idx[:, time], :].index
            X.loc[idx[:, time], var] = _.values
fig, ax = plt.subplots(figsize=(7,5))
t = '2018-01-17 10:00:00'
x = X.loc[idx[:,t],:].join(grid, on='Sensor Name')
gpd.GeoDataFrame(x).plot(column='Temperature', ax=ax)
distance_threshold = 10/69 # limit of 10 miles
X_idw, y = sensingbee.feature_engineering.Features(variable='Temperature',
method='idw', p=2, threshold=distance_threshold).transform(samples, metadata)
```
# Dingocar Demo
This notebook will allow you to train a Dingocar (_Donkeycar, down-under_). The model will be trained using data uploaded to your Google Drive, and the trained model will be saved in your nominated Google Drive folder.
## Requirements
A zip file of training data. I recommend a zip file because you'll be transferring the data from Drive to the virtual machine this notebook is running on, and I found this to be orders of magnitude faster if you zip things up first. If you don't have data but want to have a play, I have a public folder [here](https://drive.google.com/file/d/1gv5k5vK90QOSgenwT42DMm-jmBdB9yEX/view?usp=sharing). Make a folder in your Google Drive called `dingocar` and add this folder.
Some knowledge of Python and a high-level understanding of machine learning; not too much, just the basics.
If anyone wants an introduction to CNNs:
- [Convolutional Neural Networks (CNNs) explained](https://www.youtube.com/watch?v=YRhxdVk_sIs) Length = 8m:36s
- [A friendly introduction to Convolutional Neural Networks and Image Recognition](https://www.youtube.com/watch?v=2-Ol7ZB0MmU) Length = 32m:07s
```
Training Time:
--------------
Input data --> ML Magic --> Prediction
^ |
| ERROR |
(Prediction - Label)^2
Prediction Time:
----------------
Input data --> ML Magic --> Prediction
```
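The `(Prediction - Label)^2` box in the diagram above is the squared error that training pushes down; averaged over a batch it is the mean squared error used later when the model is compiled. A tiny numpy illustration with made-up numbers:

```python
import numpy as np

predictions = np.array([0.2, -0.1, 0.5])    # e.g. predicted steering commands
labels      = np.array([0.3,  0.0, 0.5])    # what the driver actually did
squared_error = (predictions - labels) ** 2  # over- and under-shoot penalised equally
mse = squared_error.mean()                   # the value training minimises
```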
---
## Setup
First we need to:
- Clone the git repo
- Change to the required directory
- Install the python modules
- Make a directory on the Google Colab virtual machine to copy your training data into
```
!git clone https://github.com/tall-josh/dingocar.git
%cd dingocar
!python setup.py develop
%mkdir data
```
## Connect to Google Drive
This piece of code will mount your Google Drive on this Google Colab virtual machine. It will prompt you to follow a link to get a verification code. Once you get it, copy and paste it in the box provided and hit enter.
You can navigate the file system by clicking the "Files" tab in the left sidebar. All your Google Drive files should be in `/content/drive/My\ Drive`
```
from google.colab import drive
drive.mount('/content/drive')
```
Here we copy the contents of '`your/data/directory`' to the '`data`' directory we created earlier.
```
# Gotch'a:
# 1. 'My Drive' has a space so you'll need to delimit it with a '\' or put the
# path in 'single quotes'. ie:
# '/content/drive/My Drive' or /content/drive/My\ Drive
# 2. You can right click on the file system to the right to get the path of the
# file or folder. It omits the leading '/' before the 'content'. So don't
# forget to add it. ie:
# 'content/' = :-(
# '/content/' = :-)
!rm -r ./data/*
!rsync -r --info=progress2 '/content/drive/My Drive/dingocar/data/tub.zip' ./data
!cd data && unzip tub.zip > _ && cd ..
!echo "Number of examples: `ls data/tub/*.jpg | wc -l`"
```
## Import some required modules
```
%matplotlib inline
import matplotlib
from matplotlib.pyplot import imshow
import os
from PIL import Image
from glob import glob
import numpy as np
import json
from tqdm import tqdm
```
## Load and visualise the data
Donkeycar calls the directory(s) where your training data is stored a "_tub_". The Dingocar follows the same convention.
`Tubs` contain 3 types of files:
- images: in the form of `.jpg`
- records: in the form `.json`
- `meta` which contains some additional information, also `.json`
Below we set the `tub` location and visualize an image and record
```
from dingocar.parts.datastore import Tub
tub_path = 'data/tub'
tub = Tub(tub_path)
# Tubs provide a simple way to access the training.
# Each entry is a dict record which contains the
# a camera image plus the steering and throttle commands
# that were recorded when driving the car manually.
# The dict keys are as follows.
IMAGE_KEY = "cam/image_array"
STEERING_KEY = "user/angle"
THROTTLE_KEY = "user/throttle"
# Read a single record from the tub
idx = 123
record = tub.get_record(idx)
print(f"Steering: {record[STEERING_KEY]}")
print(f"Throttle: {record[THROTTLE_KEY]}")
imshow(record[IMAGE_KEY])
```
## Data Augmentation
Data augmentation allows us to add a bit more variety to the training data. One very handy augmentation transformation is to randomly mirror the input image and the steering label. This ensures the data contains the same number of left and right turns, so the neural network does not become biased toward a specific direction of turn.
There are also some other augmentation transformations you can apply below. These will hopefully make the network a bit more robust to changing lighting and help prevent overfitting.
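As a rough sketch of the mirroring idea described above (illustrative only, not the actual `dingocar.parts.augmentation` implementation), assuming images are height x width x channel numpy arrays:

```python
import numpy as np

def mirror_y(image, steering, prob=0.5, rng=np.random):
    """With probability `prob`, flip the image left-right and negate the
    steering angle: a left turn becomes an equivalent right turn."""
    if rng.random() < prob:
        return image[:, ::-1, :], -steering
    return image, steering

img = np.arange(12).reshape(2, 2, 3)
flipped, angle = mirror_y(img, steering=0.4, prob=1.0)  # prob=1.0 always flips
```

Flipping is cheap and doubles the effective variety of turns in the data, which is why it gets the highest `aug_prob` in the config below.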
```
import config
from functools import partial
from dingocar.parts.augmentation import apply_aug_config
# Play with the data augmentation settings if you like.
# In all cases 'aug_prob' is the probability the given
# augmentation will occur. All the other parameters are
# explained below.
aug_config = {
    # Mirror the image horizontally
    "mirror_y": {"aug_prob": 0.5},

    # Randomly turn pixels black or white.
    # "noise" : The probability a pixel is affected.
    #   0.0 : No pixels will be affected
    #   1.0 : All pixels will be affected
    "salt_and_pepper": {"aug_prob": 0.3,
                        "noise": 0.2},

    # Randomly turn pixels a random color
    # "noise" : The probability a pixel is affected.
    #   0.0 : No pixels will be affected
    #   1.0 : All pixels will be affected
    "100s_and_1000s": {"aug_prob": 0.3,
                       "noise": 0.2},

    # Randomly increase or decrease the pixel values by a
    # value between 'min_val' and 'max_val'. The resulting
    # value will be clipped between 0 and 255
    "pixel_saturation": {"aug_prob": 0.3,
                         "min_val": -20,
                         "max_val": 20},

    # Randomly shuffle the RGB channel order
    "shuffle_channels": {"aug_prob": 0.3},

    # Randomly set a rectangular section of the image to 0.
    # The rectangle height and width are randomly generated
    # to be between dimension*min_frac and dimension*max_frac,
    # so min_frac = 0.0 and max_frac = 1.0 would result in a
    # random rectangle that could cover the entire image,
    # none of it, or anything in between.
    "blockout": {"aug_prob": 0.3,
                 "min_frac": 0.07,
                 "max_frac": 0.3}
}
# If you're unfamiliar with the 'partial' function. It allows you to
# call a function with some of the arguments pre-filled.
# In this case we made a function that is like 'apply_aug_config', but
# has the `aug_config` parameter pre-filled.
record_transform=partial(apply_aug_config, aug_config=aug_config)
record = tub.get_record(idx, record_transform=record_transform)
print(f"Steering: {record[STEERING_KEY]}")
print(f"Throttle: {record[THROTTLE_KEY]}")
imshow(record[IMAGE_KEY])
```
## Define the CNN
```
from tensorflow.python.keras.layers import Convolution2D
from tensorflow.python.keras.layers import Dropout, Flatten, Dense
from dingocar.parts.keras import KerasLinear
from tensorflow.python.keras.layers import Input
from tensorflow.python.keras.models import Model, load_model
# Tub objects maintain a dictionary of data. You can access the data via 'keys'.
# Traditionally x stands for inputs and y stands for outputs.
# In our case, for every input image (x) there are 2 output labels,
# steering angle and throttle (y).
X_KEYS = [IMAGE_KEY]
Y_KEYS = [STEERING_KEY, THROTTLE_KEY]
# If you'd like you can play with this neural network as much as you like. See
# if you can get the network to be more accurate!
# The only things you need to watch out for are:
# 1. 'img_in' cannot change.
# 2. 'angle_out' must always have 'units=1'
# 3. 'throttle_out' must always have 'units=1'
def convolutional_neural_network():
    img_in = Input(shape=(120, 160, 3), name='img_in')
    x = img_in
    # Convolution2D class name is an alias for Conv2D
    x = Convolution2D(filters=24, kernel_size=(5, 5), strides=(2, 2), activation='relu')(x)
    x = Convolution2D(filters=32, kernel_size=(5, 5), strides=(2, 2), activation='relu')(x)
    x = Convolution2D(filters=64, kernel_size=(5, 5), strides=(2, 2), activation='relu')(x)
    x = Convolution2D(filters=64, kernel_size=(3, 3), strides=(2, 2), activation='relu')(x)
    x = Convolution2D(filters=64, kernel_size=(3, 3), strides=(1, 1), activation='relu')(x)
    x = Flatten(name='flattened')(x)
    x = Dense(units=100, activation='linear')(x)
    x = Dropout(rate=.2)(x)
    x = Dense(units=50, activation='linear')(x)
    x = Dropout(rate=.2)(x)
    # continuous output of the steering angle
    angle_out = Dense(units=1, activation='linear', name='angle_out')(x)
    # continuous output of the throttle
    throttle_out = Dense(units=1, activation='linear', name='throttle_out')(x)
    model = Model(inputs=[img_in], outputs=[angle_out, throttle_out])
    model.compile(optimizer='adam',
                  loss={'angle_out': 'mean_squared_error',
                        'throttle_out': 'mean_squared_error'},
                  loss_weights={'angle_out': 0.5, 'throttle_out': 0.5})
    return model
# KerasLinear is a class that contains some functions we can use to train
# our model and to get predictions out of it later.
model = KerasLinear(model=convolutional_neural_network())
```
## Train the model
```
from manage import train
import config
# Number of images loaded into the model at a time
BATCH_SIZE = 32
# 70% of the data is used for training. 30% for validation
TRAIN_TEST_SPLIT = 0.7
# Number of times to iterate over all the training data
EPOCHS = 100
# Stop training if the validation loss has not improved for the last 'PATIENCE'
# epochs.
USE_EARLY_STOP = True
PATIENCE = 5
# Where to save the trained model
new_model_path = "/content/drive/My Drive/dingocar/no_mirror1.hdf5"
# If you want to start from a pre-trained model you can add the path here
base_model_path = None
# These are generators that will be used to feed data into the model
# when training. The generator uses a constant random seed so the train/val
# split is the same every time.
train_gen, val_gen = tub.get_train_val_gen(X_KEYS, Y_KEYS,
batch_size=BATCH_SIZE,
train_frac=TRAIN_TEST_SPLIT,
train_record_transform=record_transform,
val_record_transform=None)
training_history = model.train(train_gen,
val_gen,
new_model_path,
epochs=EPOCHS,
patience=PATIENCE,
use_early_stop=USE_EARLY_STOP)
```
# Visualize Predictions
```
from dingocar.parts.keras import KerasLinear
new_model_path = "/content/drive/My Drive/dingocar/no_mirror1.hdf5"
trained_model = new_model_path
# Load a pre-trained model
model = KerasLinear()
model.load(trained_model)
from dingocar.parts.datastore import Tub
_, val_gen = tub.get_train_val_gen(X_KEYS, Y_KEYS,
batch_size=1,
train_frac=TRAIN_TEST_SPLIT,
train_record_transform=None,
val_record_transform=None)
preds = []
truth = []
val_count = int(tub.get_num_records() * (1-TRAIN_TEST_SPLIT))
for _ in tqdm(range(val_count)):
    sample = next(val_gen)
    pred = model.run(sample[0][0][0])
    preds.append(pred)
    truth.append((sample[1][0][0], sample[1][1][0]))
preds = np.array(preds)
truth = np.array(truth)
print(preds.shape)
print(truth.shape)
import matplotlib.pyplot as plt
def mean_squared_error(preds, true):
    squared_error = (true - preds)**2
    return np.mean(squared_error)
def xy_scatter(preds, truth):
    fig = plt.figure(figsize=(14, 14))
    steering_p = preds[..., 0]
    throttle_p = preds[..., 1]
    steering_t = truth[..., 0]
    throttle_t = truth[..., 1]
    steering_mse = mean_squared_error(steering_p, steering_t)
    throttle_mse = mean_squared_error(throttle_p, throttle_t)
    plt.plot(steering_p, steering_t, 'b.')
    plt.title(f"MSE: {steering_mse:.3f}")
    plt.xlabel("predictions")
    plt.ylabel("ground truth")
    plt.gca().set_xlim(-1, 1)
    plt.show()
    # fig = plt.gcf()
    # fig.savefig(path + "/pred_vs_anno.png", dpi=100)
# Only display the validation set
xy_scatter(preds, truth)
preds = []
truth = []
for idx in tqdm(range(tub.get_num_records())):
    sample = tub.get_record(idx)
    pred = model.run(sample[IMAGE_KEY])
    preds.append(pred)
    truth.append((sample[STEERING_KEY], sample[THROTTLE_KEY]))
preds = np.array(preds)
truth = np.array(truth)
print(preds.shape)
from ipywidgets import interact, fixed
import ipywidgets as widgets
def plt_image(ax, image, title):
    ax.imshow(image)
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_title(title)
def plt_samples(idxs, axs, tub):
    records = [tub.get_record(i) for i in idxs]
    images = [r[IMAGE_KEY] for r in records]
    titles = [f"frame: {i}" for i in idxs]
    for a, i, t in zip(axs, images, titles):
        plt_image(a, i, t)
def time_series(x=300):  # axs=axs, tub=tub
    fig = plt.figure(figsize=(21, 12))
    plt.tight_layout()
    ax1 = plt.subplot2grid((2, 5), (0, 0), colspan=5)
    ax2 = plt.subplot2grid((2, 5), (1, 0))
    ax3 = plt.subplot2grid((2, 5), (1, 1))
    ax4 = plt.subplot2grid((2, 5), (1, 2))
    ax5 = plt.subplot2grid((2, 5), (1, 3))
    ax6 = plt.subplot2grid((2, 5), (1, 4))
    axs = [ax2, ax3, ax4, ax5, ax6]
    steering_p = preds[..., 0]
    throttle_p = preds[..., 1]
    steering_t = truth[..., 0]
    throttle_t = truth[..., 1]
    idxs = np.arange(x - 2, x + 3)
    plt_samples(idxs, axs, tub)
    start = x - 300
    end = x + 300
    ax1.plot(steering_p, label="predictions")
    ax1.plot(steering_t, label="ground truth")
    # ax1.axvline(x=x, linewidth=4, color='r')
    ax1.legend(bbox_to_anchor=(0.91, 0.96), loc=2, borderaxespad=0.)
    ax1.set_title("Time Series Steering Predictions vs Ground Truth")
    ax1.set_xlabel("time (frames)")
    ax1.set_ylabel("steering command")
    ax1.set_xlim(start, end)
#time_series(x=600)
interact(time_series, x=(300, len(truth)-300))#, axs=fixed(axs), tub=fixed(tub))
import numpy as np
import matplotlib.pyplot as plt
testData = np.array([[0,0], [0.1, 0], [0, 0.3], [-0.4, 0], [0, -0.5]])
fig, ax = plt.subplots()
sctPlot, = ax.plot(testData[:,0], testData[:,1], "o", picker = 5)
plt.grid(True)
plt.axis([-0.5, 0.5, -0.5, 0.5])
def on_pick(event):
    artist = event.artist
    artist.set_color(np.random.random(3))
    print("click!")
    fig.canvas.draw()
fig.canvas.mpl_connect('pick_event', on_pick)
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#default_exp vision.data
#export
from fastai.torch_basics import *
from fastai.data.all import *
from fastai.vision.core import *
#hide
from nbdev.showdoc import *
# from fastai.vision.augment import *
```
# Vision data
> Helper functions to get data in a `DataLoaders` in the vision application and the higher-level class `ImageDataLoaders`
The main classes defined in this module are `ImageDataLoaders` and `SegmentationDataLoaders`, so you probably want to jump to their definitions. They provide factory methods that are a great way to quickly get your data ready for training, see the [vision tutorial](http://docs.fast.ai/tutorial.vision) for examples.
## Helper functions
```
#export
@delegates(subplots)
def get_grid(n, nrows=None, ncols=None, add_vert=0, figsize=None, double=False, title=None, return_fig=False, **kwargs):
"Return a grid of `n` axes, `rows` by `cols`"
nrows = nrows or int(math.sqrt(n))
ncols = ncols or int(np.ceil(n/nrows))
if double: ncols*=2 ; n*=2
fig,axs = subplots(nrows, ncols, figsize=figsize, **kwargs)
axs = [ax if i<n else ax.set_axis_off() for i, ax in enumerate(axs.flatten())][:n]
if title is not None: fig.suptitle(title, weight='bold', size=14)
return (fig,axs) if return_fig else axs
```
This is used by the type-dispatched versions of `show_batch` and `show_results` for the vision application. By default, there will be `int(math.sqrt(n))` rows and `ceil(n/rows)` columns. `double` will double the number of columns and `n`. The default `figsize` is `(cols*imsize, rows*imsize+add_vert)`. If a `title` is passed it is set to the figure. `sharex`, `sharey`, `squeeze`, `subplot_kw` and `gridspec_kw` are all passed down to `plt.subplots`. If `return_fig` is `True`, returns `fig,axs`, otherwise just `axs`.
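The default row/column arithmetic described above can be sketched on its own (this mirrors the computation, not fastai's API; the function name is made up):

```python
import math

def default_grid_shape(n, nrows=None, ncols=None):
    """Pick a roughly square grid with nrows * ncols >= n axes."""
    nrows = nrows or int(math.sqrt(n))
    ncols = ncols or math.ceil(n / nrows)
    return nrows, ncols

default_grid_shape(10)  # -> (3, 4): 3 rows of 4 columns for 10 axes
```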
```
# export
def clip_remove_empty(bbox, label):
    "Clip bounding boxes with image border and label background the empty ones"
    bbox = torch.clamp(bbox, -1, 1)
    empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) <= 0.)
    return (bbox[~empty], label[~empty])
bb = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5], [-2, -0.5, -1.5, 0.5]])
bb,lbl = clip_remove_empty(bb, tensor([1,2,3,2,5]))
test_eq(bb, tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(lbl, tensor([1,2,2]))
#export
def bb_pad(samples, pad_idx=0):
    "Function that collects `samples` of labelled bboxes and adds padding with `pad_idx`."
    samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
    max_len = max([len(s[2]) for s in samples])
    def _f(img, bbox, lbl):
        bbox = torch.cat([bbox, bbox.new_zeros(max_len-bbox.shape[0], 4)])
        lbl = torch.cat([lbl, lbl.new_zeros(max_len-lbl.shape[0])+pad_idx])
        return img, bbox, lbl
    return [_f(*s) for s in samples]
img1,img2 = TensorImage(torch.randn(16,16,3)),TensorImage(torch.randn(16,16,3))
bb1 = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
lbl1 = tensor([1, 2, 3, 2])
bb2 = tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]])
lbl2 = tensor([2, 2])
samples = [(img1, bb1, lbl1), (img2, bb2, lbl2)]
res = bb_pad(samples)
non_empty = tensor([True,True,False,True])
test_eq(res[0][0], img1)
test_eq(res[0][1], tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(res[0][2], tensor([1,2,2]))
test_eq(res[1][0], img2)
test_eq(res[1][1], tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5], [0,0,0,0]]))
test_eq(res[1][2], tensor([2,2,0]))
```
## Show methods -
```
#export
@typedispatch
def show_batch(x:TensorImage, y, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
    if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, figsize=figsize)
    ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)
    return ctxs

#export
@typedispatch
def show_batch(x:TensorImage, y:TensorImage, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
    if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, add_vert=1, figsize=figsize, double=True)
    for i in range(2):
        ctxs[i::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i), ctxs[i::2], range(max_n))]
    return ctxs
```
## `TransformBlock`s for vision
These are the blocks the vision application provides for the [data block API](http://docs.fast.ai/data.block).
```
#export
def ImageBlock(cls=PILImage):
    "A `TransformBlock` for images of `cls`"
    return TransformBlock(type_tfms=cls.create, batch_tfms=IntToFloatTensor)
#export
def MaskBlock(codes=None):
    "A `TransformBlock` for segmentation masks, potentially with `codes`"
    return TransformBlock(type_tfms=PILMask.create, item_tfms=AddMaskCodes(codes=codes), batch_tfms=IntToFloatTensor)
#export
PointBlock = TransformBlock(type_tfms=TensorPoint.create, item_tfms=PointScaler)
BBoxBlock = TransformBlock(type_tfms=TensorBBox.create, item_tfms=PointScaler, dls_kwargs = {'before_batch': bb_pad})
PointBlock.__doc__ = "A `TransformBlock` for points in an image"
BBoxBlock.__doc__ = "A `TransformBlock` for bounding boxes in an image"
show_doc(PointBlock, name='PointBlock')
show_doc(BBoxBlock, name='BBoxBlock')
#export
def BBoxLblBlock(vocab=None, add_na=True):
    "A `TransformBlock` for labeled bounding boxes, potentially with `vocab`"
    return TransformBlock(type_tfms=MultiCategorize(vocab=vocab, add_na=add_na), item_tfms=BBoxLabeler)
```
If `add_na` is `True`, a new category is added for NaN (that will represent the background class).
## ImageDataLoaders -
```
#export
class ImageDataLoaders(DataLoaders):
    "Basic wrapper around several `DataLoader`s with factory methods for computer vision problems"
    @classmethod
    @delegates(DataLoaders.from_dblock)
    def from_folder(cls, path, train='train', valid='valid', valid_pct=None, seed=None, vocab=None, item_tfms=None,
                    batch_tfms=None, **kwargs):
        "Create from imagenet style dataset in `path` with `train` and `valid` subfolders (or provide `valid_pct`)"
        splitter = GrandparentSplitter(train_name=train, valid_name=valid) if valid_pct is None else RandomSplitter(valid_pct, seed=seed)
        get_items = get_image_files if valid_pct else partial(get_image_files, folders=[train, valid])
        dblock = DataBlock(blocks=(ImageBlock, CategoryBlock(vocab=vocab)),
                           get_items=get_items,
                           splitter=splitter,
                           get_y=parent_label,
                           item_tfms=item_tfms,
                           batch_tfms=batch_tfms)
        return cls.from_dblock(dblock, path, path=path, **kwargs)

    @classmethod
    @delegates(DataLoaders.from_dblock)
    def from_path_func(cls, path, fnames, label_func, valid_pct=0.2, seed=None, item_tfms=None, batch_tfms=None, **kwargs):
        "Create from list of `fnames` in `path`s with `label_func`"
        dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                           splitter=RandomSplitter(valid_pct, seed=seed),
                           get_y=label_func,
                           item_tfms=item_tfms,
                           batch_tfms=batch_tfms)
        return cls.from_dblock(dblock, fnames, path=path, **kwargs)

    @classmethod
    def from_name_func(cls, path, fnames, label_func, **kwargs):
        "Create from the name attrs of `fnames` in `path`s with `label_func`"
        f = using_attr(label_func, 'name')
        return cls.from_path_func(path, fnames, f, **kwargs)

    @classmethod
    def from_path_re(cls, path, fnames, pat, **kwargs):
        "Create from list of `fnames` in `path`s with re expression `pat`"
        return cls.from_path_func(path, fnames, RegexLabeller(pat), **kwargs)

    @classmethod
    @delegates(DataLoaders.from_dblock)
    def from_name_re(cls, path, fnames, pat, **kwargs):
        "Create from the name attrs of `fnames` in `path`s with re expression `pat`"
        return cls.from_name_func(path, fnames, RegexLabeller(pat), **kwargs)

    @classmethod
    @delegates(DataLoaders.from_dblock)
    def from_df(cls, df, path='.', valid_pct=0.2, seed=None, fn_col=0, folder=None, suff='', label_col=1, label_delim=None,
                y_block=None, valid_col=None, item_tfms=None, batch_tfms=None, **kwargs):
        "Create from `df` using `fn_col` and `label_col`"
        pref = f'{Path(path) if folder is None else Path(path)/folder}{os.path.sep}'
        if y_block is None:
            is_multi = (is_listy(label_col) and len(label_col) > 1) or label_delim is not None
            y_block = MultiCategoryBlock if is_multi else CategoryBlock
        splitter = RandomSplitter(valid_pct, seed=seed) if valid_col is None else ColSplitter(valid_col)
        dblock = DataBlock(blocks=(ImageBlock, y_block),
                           get_x=ColReader(fn_col, pref=pref, suff=suff),
                           get_y=ColReader(label_col, label_delim=label_delim),
                           splitter=splitter,
                           item_tfms=item_tfms,
                           batch_tfms=batch_tfms)
        return cls.from_dblock(dblock, df, path=path, **kwargs)

    @classmethod
    def from_csv(cls, path, csv_fname='labels.csv', header='infer', delimiter=None, **kwargs):
        "Create from `path/csv_fname` using `fn_col` and `label_col`"
        df = pd.read_csv(Path(path)/csv_fname, header=header, delimiter=delimiter)
        return cls.from_df(df, path=path, **kwargs)

    @classmethod
    @delegates(DataLoaders.from_dblock)
    def from_lists(cls, path, fnames, labels, valid_pct=0.2, seed:int=None, y_block=None, item_tfms=None, batch_tfms=None,
                   **kwargs):
        "Create from list of `fnames` and `labels` in `path`"
        if y_block is None:
            y_block = MultiCategoryBlock if is_listy(labels[0]) and len(labels[0]) > 1 else (
                RegressionBlock if isinstance(labels[0], float) else CategoryBlock)
        dblock = DataBlock.from_columns(blocks=(ImageBlock, y_block),
                                        splitter=RandomSplitter(valid_pct, seed=seed),
                                        item_tfms=item_tfms,
                                        batch_tfms=batch_tfms)
        return cls.from_dblock(dblock, (fnames, labels), path=path, **kwargs)
ImageDataLoaders.from_csv = delegates(to=ImageDataLoaders.from_df)(ImageDataLoaders.from_csv)
ImageDataLoaders.from_name_func = delegates(to=ImageDataLoaders.from_path_func)(ImageDataLoaders.from_name_func)
ImageDataLoaders.from_path_re = delegates(to=ImageDataLoaders.from_path_func)(ImageDataLoaders.from_path_re)
ImageDataLoaders.from_name_re = delegates(to=ImageDataLoaders.from_name_func)(ImageDataLoaders.from_name_re)
```
This class should not be used directly; one of the factory methods should be preferred instead. All those factory methods accept as arguments:
- `item_tfms`: one or several transforms applied to the items before batching them
- `batch_tfms`: one or several transforms applied to the batches once they are formed
- `bs`: the batch size
- `val_bs`: the batch size for the validation `DataLoader` (defaults to `bs`)
- `shuffle_train`: if we shuffle the training `DataLoader` or not
- `device`: the PyTorch device to use (defaults to `default_device()`)
```
show_doc(ImageDataLoaders.from_folder)
```
If `valid_pct` is provided, a random split is performed (with an optional `seed`) by setting aside that percentage of the data for the validation set (instead of looking at the grandparents folder). If a `vocab` is passed, only the folders with names in `vocab` are kept.
Here is an example loading a subsample of MNIST:
```
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
```
Passing `valid_pct` will ignore the valid/train folders and do a new random split:
```
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2)
dls.valid_ds.items[:3]
show_doc(ImageDataLoaders.from_path_func)
```
The validation set is a random `subset` of `valid_pct`, optionally created with `seed` for reproducibility.
Here is how to create the same `DataLoaders` on the MNIST dataset as the previous example with a `label_func`:
```
fnames = get_image_files(path)
def label_func(x): return x.parent.name
dls = ImageDataLoaders.from_path_func(path, fnames, label_func)
```
Here is another example on the pets dataset. Here filenames are all in an "images" folder and their names have the form `class_name_123.jpg`. One way to properly label them is thus to throw away everything after the last `_`:
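A plain-Python sketch of that labeling rule (the filenames here are made up, following the pets naming scheme):

```python
# Hypothetical filenames of the form class_name_123.jpg
fnames = ["great_pyrenees_173.jpg", "Bengal_85.jpg"]
# Drop everything after the last '_' to recover the class name
labels = ['_'.join(f.split('_')[:-1]) for f in fnames]
print(labels)  # ['great_pyrenees', 'Bengal']
```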
```
show_doc(ImageDataLoaders.from_path_re)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility.
Here is how to create the same `DataLoaders` on the MNIST dataset as the previous example (on Windows, you will need to change the two initial `/` to `\`):
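For illustration, here is how that pattern pulls the label out of an MNIST-style path (`/<label>/<number>.png`) with Python's `re` module; the path shown is hypothetical:

```python
import re

pat = r'/([^/]*)/\d+.png$'
# The capture group grabs the parent folder name, which is the label
m = re.search(pat, '/mnist_tiny/train/7/9243.png')
print(m.group(1))  # 7
```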
```
pat = r'/([^/]*)/\d+.png$'
dls = ImageDataLoaders.from_path_re(path, fnames, pat)
show_doc(ImageDataLoaders.from_name_func)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. This method does the same as `ImageDataLoaders.from_path_func` except `label_func` is applied to the name of each file, not to the full path.
```
show_doc(ImageDataLoaders.from_name_re)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. This method does the same as `ImageDataLoaders.from_path_re` except `pat` is applied to the name of each file, not to the full path.
```
show_doc(ImageDataLoaders.from_df)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. Alternatively, if your `df` contains a `valid_col`, give its name or its index to that argument (the column should have `True` for the elements going to the validation set).
You can add an additional `folder` to the filenames in `df` if they should not be concatenated directly to `path`. If they do not contain the proper extensions, you can add `suff`. If your label column contains multiple labels on each row, you can use `label_delim` to warn the library you have a multi-label problem.
`y_block` should be passed when the task automatically picked by the library is wrong, you should then give `CategoryBlock`, `MultiCategoryBlock` or `RegressionBlock`. For more advanced uses, you should use the data block API.
The tiny mnist example from before also contains a version in a dataframe:
```
path = untar_data(URLs.MNIST_TINY)
df = pd.read_csv(path/'labels.csv')
df.head()
```
Here is how to load it using `ImageDataLoaders.from_df`:
```
dls = ImageDataLoaders.from_df(df, path)
```
Here is another example with a multi-label problem:
```
path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/'train.csv')
df.head()
dls = ImageDataLoaders.from_df(df, path, folder='train', valid_col='is_valid')
```
Note that you can also pass `2` to `valid_col` (the index, starting with 0).
```
show_doc(ImageDataLoaders.from_csv)
```
Same as `ImageDataLoaders.from_df` after loading the file with `header` and `delimiter`.
Here is how to load the same dataset as before with this method:
```
dls = ImageDataLoaders.from_csv(path, 'train.csv', folder='train', valid_col='is_valid')
show_doc(ImageDataLoaders.from_lists)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. `y_block` can be passed to specify the type of the targets.
```
path = untar_data(URLs.PETS)
fnames = get_image_files(path/"images")
labels = ['_'.join(x.name.split('_')[:-1]) for x in fnames]
dls = ImageDataLoaders.from_lists(path, fnames, labels)
#export
class SegmentationDataLoaders(DataLoaders):
"Basic wrapper around several `DataLoader`s with factory methods for segmentation problems"
@classmethod
@delegates(DataLoaders.from_dblock)
def from_label_func(cls, path, fnames, label_func, valid_pct=0.2, seed=None, codes=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from list of `fnames` in `path`s with `label_func`."
dblock = DataBlock(blocks=(ImageBlock, MaskBlock(codes=codes)),
splitter=RandomSplitter(valid_pct, seed=seed),
get_y=label_func,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
res = cls.from_dblock(dblock, fnames, path=path, **kwargs)
return res
show_doc(SegmentationDataLoaders.from_label_func)
```
The validation set is a random subset of `valid_pct`, optionally created with `seed` for reproducibility. `codes` contain the mapping index to label.
```
path = untar_data(URLs.CAMVID_TINY)
fnames = get_image_files(path/'images')
def label_func(x): return path/'labels'/f'{x.stem}_P{x.suffix}'
codes = np.loadtxt(path/'codes.txt', dtype=str)
dls = SegmentationDataLoaders.from_label_func(path, fnames, label_func, codes=codes)
```
# Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
| github_jupyter |
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage
def overlay(overlay_img, bg_img, scale, starting_y, starting_x, rotate=False, choice=''):
# Rotating image 45 degrees if it has to be rotated
if rotate:
overlay_img = scipy.ndimage.rotate(overlay_img, 45)
# Scaling image
overlay_img = cv2.resize(overlay_img, (0, 0), fx=scale, fy=scale)
    # Overlaying overlay image on top of background image
bg_img[starting_y:overlay_img.shape[0]+starting_y,
starting_x:overlay_img.shape[1] +starting_x] = overlay_img
# Showing image
plt.title(choice)
plt.imshow(bg_img)
plt.show()
return bg_img
def sepia(image):
kernel = np.array([[0.393, 0.769, 0.189],
[0.349, 0.686, 0.168],
[0.272, 0.534, 0.131]])
# multiply each layer (r, g, b) by the corresponding array to convert to sepia
sepia_img = cv2.transform(image, kernel)
return sepia_img
def sharpen(image):
kernel = np.array([[-1, -1, -1],
[-1, 9.5, -1],
[-1, -1, -1]])
# change the pixel intensity value of an image based on the surrounding pixel intensity values
sharpened_img = cv2.filter2D(image, -1, kernel)
return sharpened_img
def sharpened_sepia(image):
sepia_img = sepia(image)
sharpened_img = sharpen(sepia_img)
converted_color = cv2.cvtColor(sharpened_img, cv2.COLOR_RGB2BGR)
return converted_color
def surf_bozu(overlay_img, background_img):
sepia_img = sharpened_sepia(overlay_img)
cropped_image = background_img[0:400, 100:700]
bozu_surfing = overlay(sepia_img, cropped_image, 0.09, 50, 220, True, 'Surf Bozu')
cv2.imwrite('Bozu_Surfing.jpg', bozu_surfing)
def vingette(image):
rows, cols = image.shape[:2] # initializing rows and columns
# Create a Gaussian filter
# create gaussian distribution for the shape of for columns (808, 1)
kernel_x = cv2.getGaussianKernel(cols, 250)
# creates a gaussian distribution for the shape of the rows (938, 1)
kernel_y = cv2.getGaussianKernel(rows, 250)
# multiplies the kernels by each other to get the kernel for the final image
kernel = kernel_y @ np.transpose(kernel_x)
filter = 255 * kernel / np.linalg.norm(kernel)
vingette_image = np.copy(image)
layers = image.shape[-1]
for layer in range(layers):
# multiplying each layer by the filter
vingette_image[:, :, layer] = vingette_image[:, :, layer] * filter
return vingette_image
def horror_bozu(image, background_img):
cropped_background = background_img[:, 320:1600]
vingette_bozu = vingette(image)
horror_bozu = overlay(vingette_bozu, cropped_background, 0.40, 500, 470, choice="Horror Bozu")
cv2.imwrite('Horror_Bozu.jpg', horror_bozu)
bozu_img = cv2.imread('Bozu.png')
horror_house = cv2.imread('horror_house.jpeg')
waves = cv2.imread('surf_water.jpeg')
# surf_bozu(bozu, waves)
# horror_bozu(bozu, background_img)
options_dict = {
'1': [surf_bozu, waves],
'2': [horror_bozu, horror_house]
}
choice = input(
"Enter 1 to get Bozu surfing\n2 to get a Bozu in a haunted house\nor 3 to exit\nYour choice: ")
while choice != '3':
if choice not in options_dict.keys():
print('Invalid choice')
else:
# Nice documentation for this trick --> https://www.geeksforgeeks.org/python-store-function-as-dictionary-value/
options_dict[choice][0](bozu_img, options_dict[choice][1])
    choice = input("Enter 1 to get Bozu surfing\n2 to get a Bozu in a haunted house\nor 3 to exit\nYour choice: ")
print("Thank you for using our program!")
```
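As a sanity check on the sepia kernel used above, the same per-pixel mapping can be reproduced with plain numpy (`cv2.transform` multiplies each pixel's channel vector by the kernel); the pixel value here is made up, and clipping to 255 is omitted:

```python
import numpy as np

kernel = np.array([[0.393, 0.769, 0.189],
                   [0.349, 0.686, 0.168],
                   [0.272, 0.534, 0.131]])
pixel = np.array([100.0, 150.0, 200.0])  # one hypothetical pixel
sepia_pixel = kernel @ pixel             # same arithmetic as cv2.transform for this pixel
print(sepia_pixel)                       # ~[192.45, 171.4, 133.5]
```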
### Plotting the ADCP spectra
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Little tweaks in the matplotlib configuration to make nicer plots
```
plt.rcParams.update({'font.size': 25, 'legend.handlelength' : 2.0
, 'legend.markerscale': 1., 'legend.fontsize' : 20, 'axes.titlesize' : 35, 'axes.labelsize' : 30})
plt.rc('xtick', labelsize=35)
plt.rc('ytick', labelsize=35)
#plt.rcParams['lines.width'] = 2.
#plt.rcParams['alpha'] = 0.5
#plt.rcParams?
```
Define nice colors
```
color2 = '#6495ed'
color1 = '#ff6347'
color5 = '#8470ff'
color3 = '#3cb371'
color4 = '#ffd700'
color6 = '#ba55d3'
lw1=3
aph=.7
```
### Load data
```
data_path = './outputs/'
slab1=np.load(data_path+'adcp_spec_slab1.npz')
slab2=np.load(data_path+'adcp_spec_slab2.npz')
slab3=np.load(data_path+'adcp_spec_slab3.npz')
slab2ns=np.load(data_path+'adcp_spec_slab2ns.npz')
kK = slab1['kK1']
# scaling factor to account for the variance reduced by hanning window
k1 = slab1['k']
w = np.hanning(k1.size)
Nw = k1.size/w.sum()
Nw
## -2 and -3 slopes in the loglog space
ks = np.array([1.e-3,1])
Es2 = .2e-4*(ks**(-2))
Es3 = .5e-6*(ks**(-3))
rd1 = 22.64 # [km]
Enoise = np.ones(2)*2.*1.e-4
def add_second_axis(ax1):
""" Add a x-axis at the top of the spectra figures """
ax2 = ax1.twiny()
ax2.set_xscale('log')
ax2.set_xlim(ax1.axis()[0], ax1.axis()[1])
kp = 1./np.array([500.,200.,100.,40.,20.,10.,5.])
lp=np.array([500,200,100,40,20,10,5])
ax2.set_xticks(kp)
ax2.set_xticklabels(lp)
plt.xlabel('Wavelength [km]')
def plt_adcp_spectrum(slab,vlevel=1,lw=3):
""" Plots ADCP spectrum in the given vertical level
    slab is a dictionary containing the spectra """
if vlevel==1:
ltit = r'26-50 m, 232 DOF'
fig_num = 'a'
elif vlevel==2:
ltit=r'58-98 m, 238 DOF'
fig_num = 'b'
elif vlevel==3:
ltit=r'106-202 m, 110 DOF'
fig_num = 'c'
fig = plt.figure(facecolor='w', figsize=(12.,10.))
ax1 = fig.add_subplot(111)
ax1.set_rasterization_zorder(1)
ax1.fill_between(slab['k'],slab['Eul']/2.,slab['Euu']/2., color=color1,\
alpha=0.35, zorder=0)
ax1.fill_between(slab['k'],slab['Evl']/2.,slab['Evu']/2.,\
color=color2, alpha=0.35,zorder=0)
ax1.set_xscale('log'); ax1.set_yscale('log')
ax1.loglog(slab['k'],slab['Eu']/2.,color=color1,\
linewidth=lw,label=r'$\hat{C}^u$: across-track',zorder=0)
ax1.loglog(slab['k'],slab['Ev']/2.,color=color2,\
linewidth=lw,label=r'$\hat{C}^v$: along-track',zorder=0)
ax1.loglog(kK,slab['Kpsi']/2,color=color3,linewidth=lw,\
label='$\hat{C}^\psi$: rotational',zorder=0)
ax1.loglog(kK,slab['Kphi']/2,color=color4,linewidth=lw,\
label='$\hat{C}^\phi$: divergent',zorder=0)
ax1.loglog(slab['ks'],slab['Enoise']/2., color='.5',alpha=.7,\
linewidth=lw1,label='instrumental error',zorder=0)
ax1.loglog(ks,Es2,'--', color='k',linewidth=2.,alpha=.5,zorder=0)
ax1.loglog(ks,Es3,'--', color='k',linewidth=2.,alpha=.5,zorder=0)
ax1.axis((1./(1000),1./4,.4e-5,10))
plt.text(0.0011, 5.41,u'k$^{-2}$')
plt.text(0.0047, 5.51,u'k$^{-3}$')
plt.xlabel('Along-track wavenumber [cpkm]')
plt.ylabel(u'KE spectral density [ m$^{2}$ s$^{-2}$/ cpkm]')
lg = plt.legend(loc=(.01,.075),title=ltit, numpoints=1,ncol=2)
lg.draw_frame(False)
plt.axis((1./1.e3,1./5.,.5/1.e4,1.e1))
plt.text(1/20., 5., "ADCP", size=25, rotation=0.,
ha="center", va="center",
bbox = dict(boxstyle="round",ec='k',fc='w'))
plt.text(1/6.5, 4.5, fig_num, size=35, rotation=0.)
add_second_axis(ax1)
plt.savefig('figs/spec_adcp_slab'+str(vlevel)+'_bcf_decomp_ke',bbox_inches='tight')
plt.savefig('figs/spec_adcp_slab'+str(vlevel)+'_bcf_decomp_ke.eps'\
, rasterized=True, dpi=300)
```
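The `Nw` factor computed above compensates for the amplitude removed by the Hann window; a quick standalone check of its size (record length here is arbitrary):

```python
import numpy as np

w = np.hanning(256)
Nw = w.size / w.sum()
print(round(Nw, 3))  # 2.008 — close to 2, since a Hann window has mean ~0.5
```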
### Call the function to plot the spectra
```
## 26-50 m
plt_adcp_spectrum(slab1,vlevel=1)
## 58-98 m
plt_adcp_spectrum(slab2,vlevel=2)
## 106-202 m
plt_adcp_spectrum(slab3,vlevel=3)
```
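The dashed guide lines `Es2` and `Es3` drawn on the spectra are pure power laws, so their slope in log-log space is just the exponent; a minimal check:

```python
import numpy as np

ks = np.array([1.e-3, 1.0])
Es2 = .2e-4 * ks**(-2)
# Slope in log-log space equals the power-law exponent
slope = np.diff(np.log(Es2)) / np.diff(np.log(ks))
print(slope)  # slope ≈ -2
```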
### Now plot the spectra for sub-transects to the north and south of the polar front
```
fig = plt.figure(figsize=(12.,10.))
ax1 = fig.add_subplot(111)
ax1.fill_between(slab2ns['kns'],slab2ns['Euln']/2,slab2ns['Euun']/2, color=color1, alpha=0.25)
ax1.fill_between(slab2ns['kns'],slab2ns['Evln']/2,slab2ns['Evun']/2, color=color2, alpha=0.25)
ax1.fill_between(slab2ns['kns'],slab2ns['Euls']/2,slab2ns['Euus']/2, color=color1, alpha=0.25)
ax1.fill_between(slab2ns['kns'],slab2ns['Evls']/2,slab2ns['Evus']/2, color=color2, alpha=0.25)
ax1.set_xscale('log'); ax1.set_yscale('log')
ax1.loglog(slab2ns['kns'],slab2ns['Eun']/2,color=color1,linewidth=lw1,label='$\hat{C}^u$: across-track')
ax1.loglog(slab2ns['kns'],slab2ns['Evn']/2,color=color2,linewidth=lw1,label='$\hat{C}^v$: along-track')
ax1.loglog(slab2ns['kns'],slab2ns['Eus']/2,'--',color=color1,linewidth=lw1)
ax1.loglog(slab2ns['kns'],slab2ns['Evs']/2,'--',color=color2,linewidth=lw1)
#ax1.loglog(slab2ns['kKn'],slab2ns['Kpsin']/2,color=color3,linewidth=2.,
# label='$\hat{C}^\psi$: rotational')
#ax1.loglog(slab2ns['kKn'],slab2ns['Kphin']/2,color=color4,linewidth=2.,
# label='$\hat{C}^\phi$: divergent')
#ax1.loglog(slab2ns['kKs'],slab2ns['Kpsis']/2,'--',color=color3,linewidth=2.)
#ax1.loglog(slab2ns['kKs'],slab2ns['Kphis']/2,'--',color=color4,linewidth=2.)
ax1.loglog(slab2ns['ks'],slab2ns['Enoise']/2., color='.5',alpha=.7, linewidth=lw1,label='instrumental error')
ax1.loglog(slab2ns['ks'],slab2ns['Es2'],'--', color='k',linewidth=2.,alpha=.7)
ax1.loglog(slab2ns['ks'],slab2ns['Es3'],'--', color='k',linewidth=2.,alpha=.7)
ax1.axis((1./(1000),1./4,.4e-5,10))
plt.text(0.0011, 5.41,u'k$^{-2}$')
plt.text(0.0047, 5.51,u'k$^{-3}$',fontsize=30)
plt.xlabel('Along-track wavenumber [cpkm]')
plt.ylabel(u'KE spectral density [ m$^{2}$ s$^{-2}$/ cpkm]')
lg = plt.legend(loc=(.01,.05),title=r"58-98 m, 388 (328) DOF North (South) of PF", numpoints=1,ncol=2)
lg.draw_frame(False)
plt.axis((1./1.e3,1./4.,.5/1.e4,1.e1))
plt.text(1/20., 5., "ADCP", size=25, rotation=0.,
ha="center", va="center",
bbox = dict(boxstyle="round",ec='k',fc='w'))
#plt.text(0.7, 4.5, 'd', size=35, rotation=0.)
add_second_axis(ax1)
plt.savefig('figs/spec_adcp_slab2ns_decomp_ke_bw',bbox_inches='tight')
from pyspec import spectrum as spec
ki, Eui = spec.avg_per_decade(slab1['k'],slab1['Eu'].real,nbins = 1000)
ki, Evi = spec.avg_per_decade(slab1['k'],slab1['Ev'].real,nbins = 1000)
plt.loglog(ki,Eui)
plt.loglog(ki,Evi)
plt.loglog(slab1['k'],slab1['Eu'])
plt.loglog(slab1['k'],slab1['Ev'])
from pyspec import helmholtz as helm
E = 1./ki**3/1e10
helm_slab1 = helm.BCFDecomposition(ki,3*E,E)
#helm_slab1 = helm.BCFDecomposition(slab1['k'],3*slab1['Ev'],slab1['Ev'])
plt.loglog(ki,helm_slab1.Cpsi,'r')
plt.loglog(ki,helm_slab1.Cphi,'y')
plt.loglog(ki,3*E/2,'b')
plt.loglog(ki,E/2,'g')
#plt.loglog(slab1['k'],helm_slab1.Cpsi,'r')
#plt.loglog(slab1['k'],helm_slab1.Cphi,'y')
#plt.loglog(slab1['k'],3*slab1['Ev']/2,'m')
#plt.loglog(slab1['k'],slab1['Ev']/2,'g')
slab1['k'].size
dk = 1./(800.)
k = np.arange(0,160/2.)*dk
plt.loglog(k[1:79],slab2['Eu'],'m')
```
<img src="interactive_image.png"/>
# Interactive image
The following interactive widget is intended to allow the developer to explore
images drawn with different parameter settings.
```
# preliminaries
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from jp_doodle import dual_canvas
from IPython.display import display
# Display a canvas with an image which can be adjusted interactively
# Below we configure the canvas using the Python interface.
# This method is terser than using Javascript, but the redraw operations create a jerky effect
# because the canvas displays intermediate states due to roundtrip messages
# between the Python kernel and the Javascript interpreter.
image_canvas = dual_canvas.SnapshotCanvas("interactive_image.png", width=320, height=220)
image_canvas.display_all()
def change_image(x=0, y=0, w=250, h=50, dx=-50, dy=-25,
degrees=0, sx=30, sy=15, sWidth=140, sHeight=20, whole=False
): #sx:30, sy:15, sWidth:140, sHeight:20
if whole:
sx = sy = sWidth = sHeight = None
canvas = image_canvas
with canvas.delay_redraw():
# This local image reference works in "classic" notebook, but not in Jupyter Lab.
canvas.reset_canvas()
mandrill_url = "../mandrill.png"
image_canvas.name_image_url("mandrill", mandrill_url)
canvas.named_image("mandrill",
x, y, w, h, degrees, sx, sy, sWidth, sHeight, dx=dx, dy=dy, name=True)
canvas.fit()
canvas.lower_left_axes(
max_tick_count=4
)
canvas.circle(x=x, y=y, r=10, color="#999")
canvas.fit(None, 30)
#canvas.element.invisible_canvas.show()
change_image()
w = interactive(
change_image,
x=(-100, 100),
y=(-100,100),
dx=(-300, 300),
dy=(-300,300),
w=(-300,300),
h=(-300,300),
degrees=(-360,360),
sx=(0,600),
sy=(0,600),
sWidth=(0,600),
sHeight=(0,600),
)
display(w)
# Display a canvas with an image which can be adjusted interactively
# Using the Javascript interface:
# This approach requires more typing because Python values must
# be explicitly mapped to Javascript variables.
# However the canvas configuration is smooth because no intermediate
# results are shown.
image_canvas2 = dual_canvas.SnapshotCanvas("interactive_image.png", width=320, height=220)
image_canvas2.display_all()
def change_rect_js(x=0, y=0, w=250, h=50, dx=-50, dy=-25,
degrees=0, sx=30, sy=15, sWidth=140, sHeight=20, whole=False
): #sx:30, sy:15, sWidth:140, sHeight:20
if whole:
sx = sy = sWidth = sHeight = None
canvas = image_canvas2
canvas.js_init("""
element.reset_canvas();
var mandrill_url = "../mandrill.png";
element.name_image_url("mandrill", mandrill_url);
element.named_image({image_name: "mandrill",
x:x, y:y, dx:dx, dy:dy, w:w, h:h, degrees:degrees,
sx:sx, sy:sy, sWidth:sWidth, sHeight:sHeight});
element.fit();
element.lower_left_axes({max_tick_count: 4});
element.circle({x:x, y:y, r:5, color:"#999"});
element.fit(null, 30);
""",
x=x, y=y, dx=dx, dy=dy, w=w, h=h, degrees=degrees,
sx=sx, sy=sy, sWidth=sWidth, sHeight=sHeight)
w = interactive(
change_rect_js,
x=(-100, 100),
y=(-100,100),
dx=(-300, 300),
dy=(-300, 300),
w=(-300,300),
h=(-300,300),
    degrees=(-360,360),
    sx=(0,600),
    sy=(0,600),
    sWidth=(0,600),
    sHeight=(0,600),
)
display(w)
```
Before running this notebook, it's helpful to
`conda install -c conda-forge nb_conda_kernels`
`conda install -c conda-forge ipywidgets`
and set the kernel to the conda environment in which you installed glmtools (typically, `glmval`)
```
import os
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from glmtools.io.glm import GLMDataset
```
## Use a sample data file included in glmtools
```
from glmtools.test.common import get_sample_data_path
sample_path = get_sample_data_path()
samples = [
"OR_GLM-L2-LCFA_G16_s20181830433000_e20181830433200_c20181830433231.nc",
"OR_GLM-L2-LCFA_G16_s20181830433200_e20181830433400_c20181830433424.nc",
"OR_GLM-L2-LCFA_G16_s20181830433400_e20181830434000_c20181830434029.nc",
]
samples = [os.path.join(sample_path, s) for s in samples]
filename = samples[0]
```
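As an aside, the sample filenames encode the scan start time in the `s`YYYYJJJHHMMSS`t` field (year, day of year, time, tenths of a second), so it can be decoded with the standard library; the filename below is the first sample above:

```python
import re
from datetime import datetime, timedelta

fname = "OR_GLM-L2-LCFA_G16_s20181830433000_e20181830433200_c20181830433231.nc"
# _sYYYYJJJHHMMSSt: year, day-of-year, hour, minute, seconds in tenths
m = re.search(r'_s(\d{4})(\d{3})(\d{2})(\d{2})(\d{3})', fname)
year, doy, hh, mm, sst = (int(g) for g in m.groups())
start = datetime(year, 1, 1) + timedelta(days=doy - 1, hours=hh,
                                         minutes=mm, seconds=sst / 10)
print(start)  # 2018-07-02 04:33:00
```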
## Use data from the most recent minute or two
Requires siphon.
To load data via siphon from opendap, you must
`conda install -c conda-forge siphon`
```
# Load data from the most recent minute or two!
if False:
from siphon.catalog import TDSCatalog
g16url = "http://thredds-test.unidata.ucar.edu/thredds/catalog/satellite/goes16/GRB16/GLM/LCFA/current/catalog.xml"
satcat = TDSCatalog(g16url)
filename = satcat.datasets[-1].access_urls['OPENDAP']
```
## Load the data
```
glm = GLMDataset(filename)
print(glm.dataset)
```
## Flip through each flash, plotting each.
Event centroids are small black squares.
Group centroids are white circles, colored by group energy.
Flash centroids are red 'x's
```
from glmtools.plot.locations import plot_flash
import ipywidgets as widgets
# print(widgets.Widget.widget_types.values())
fl_id_vals = list(glm.dataset.flash_id.data)
fl_id_vals.sort()
flash_slider = widgets.SelectionSlider(
description='Flash',
options=fl_id_vals,
)
def do_plot(flash_id):
fig = plot_flash(glm, flash_id)
widgets.interact(do_plot, flash_id=flash_slider)
```
# Find flashes in some location
There are hundreds of flashes to browse above, and they are randomly scattered across the full disk. Storms near Lubbock, TX at the time of sample data file had relatively low flash rates, so let's find those.
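`subset_flashes` keeps the flashes whose centroids fall in the given box (it also carries along the child groups and events); the selection itself is just a bounding-box test, sketched here with made-up centroids:

```python
# Hypothetical (lon, lat) flash centroids
centroids = [(-101.2, 33.1), (-95.0, 30.0), (-102.0, 34.0)]
lon_range, lat_range = (-102.5, -100.5), (32.5, 34.5)
subset = [(lon, lat) for lon, lat in centroids
          if lon_range[0] <= lon <= lon_range[1]
          and lat_range[0] <= lat <= lat_range[1]]
print(subset)  # [(-101.2, 33.1), (-102.0, 34.0)]
```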
```
flashes_subset = glm.subset_flashes(lon_range = (-102.5, -100.5), lat_range = (32.5, 34.5))
print(flashes_subset)
from glmtools.plot.locations import plot_flash
import ipywidgets as widgets
# print(widgets.Widget.widget_types.values())
fl_id_vals = list(flashes_subset.flash_id.data)
fl_id_vals.sort()
flash_slider = widgets.SelectionSlider(
description='Flash',
options=fl_id_vals,
)
# from functools import partial
# glm_plotter = partial(plot_flash, glm) # fails with a __name__ attr not found
def do_plot(flash_id):
fig = plot_flash(glm, flash_id)
widgets.interact(do_plot, flash_id=flash_slider)
```
```
from PIL import Image
import glob
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np
import json
```
## Define data path
#### You can add multiple file extensions by extending the glob as shown below
```
images_paths = glob.glob("./data/rit/Harsh2/*.jpg")
images_paths.extend(glob.glob("./data/rit/Harsh2/*.JPG"))
images_paths.extend(glob.glob("./data/rit/Harsh2/*.png"))
def images_to_sprite(data):
"""
Creates the sprite image along with any necessary padding
Source : https://github.com/tensorflow/tensorflow/issues/6322
Args:
data: NxHxW[x3] tensor containing the images.
Returns:
data: Properly shaped HxWx3 image with any necessary padding.
"""
if len(data.shape) == 3:
data = np.tile(data[...,np.newaxis], (1,1,1,3))
data = data.astype(np.float32)
min = np.min(data.reshape((data.shape[0], -1)), axis=1)
data = (data.transpose(1,2,3,0) - min).transpose(3,0,1,2)
max = np.max(data.reshape((data.shape[0], -1)), axis=1)
data = (data.transpose(1,2,3,0) / max).transpose(3,0,1,2)
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = ((0, n ** 2 - data.shape[0]), (0, 0),
(0, 0)) + ((0, 0),) * (data.ndim - 3)
data = np.pad(data, padding, mode='constant',
constant_values=0)
# Tile the individual thumbnails into an image.
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3)
+ tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
data = (data * 255).astype(np.uint8)
return data
def populate_img_arr(images_paths, size=(100,100),should_preprocess= False):
"""
Get an array of images for a list of image paths
Args:
size: the size of image , in pixels
should_preprocess: if the images should be processed (according to InceptionV3 requirements)
Returns:
arr: An array of the loaded images
"""
arr = []
for i,img_path in enumerate(images_paths):
img = image.load_img(img_path, target_size=size)
x = image.img_to_array(img)
arr.append(x)
arr = np.array(arr)
if should_preprocess:
arr = preprocess_input(arr)
return arr
```
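The grid arithmetic in `images_to_sprite` is worth seeing in isolation: the sprite is the smallest n×n grid that holds all thumbnails, with the remaining cells padded blank (the image count here is arbitrary):

```python
import numpy as np

n_images = 10
n = int(np.ceil(np.sqrt(n_images)))  # grid side: smallest n with n*n >= n_images
n_blank = n ** 2 - n_images          # blank cells needed to fill the grid
print(n, n_blank)  # 4 6
```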
## Model Definition
### If you want to use another model, you can change it here
```
model = InceptionV3(include_top=False,pooling='avg')
model.summary()
img_arr = populate_img_arr(images_paths,size=(299,299),should_preprocess=True)
preds = model.predict(img_arr,batch_size=64)
preds.tofile("./oss_data/tensor.bytes")
del img_arr,preds
raw_imgs = populate_img_arr(images_paths ,size=(100,100),should_preprocess=False)
sprite = Image.fromarray(images_to_sprite(raw_imgs).astype(np.uint8))
sprite.save('./oss_data/sprites.png')
del raw_imgs
```
# Exorad 2.0
This notebook shows how to use the exorad library to build your own pipeline.
Before we start, let's silence the exorad logger.
```
import warnings
warnings.filterwarnings("ignore")
from exorad.log import disableLogging
disableLogging()
```
## Preparing the instrument
### Load the instrument description
The first step is to load the instrument description.
We use here the payload described in `examples/payload_example.xml`.
We call the `LoadOptions` task that parses the xml file into a Python dictionary.
```
from exorad.tasks import LoadOptions
payload_file = 'payload_example.xml'
loadOptions = LoadOptions()
payload = loadOptions(filename=payload_file)
```
## Build the channels
Once we have the payload description we can build the channels using the `BuildChannels` task, which iterates over the channels and builds each of the instruments listed in the payload config. To take a closer look, let's do it step by step.
Inside `example_payload.xml` two channels are described: "Phot", a photometer, and "Spec", a spectrometer. We build them and store them in a dictionary.
```
channels = {}
from exorad.tasks import BuildInstrument
buildInstrument = BuildInstrument()
channels['Phot'] = buildInstrument(type="photometer",
name = "Phot",
description=payload['channel']['Phot'],
payload=payload,
write=False, output=None)
channels['Spec'] = buildInstrument(type="spectrometer",
name = "Spec",
description=payload['channel']['Spec'],
payload=payload,
write=False, output=None)
```
## Plot instrument photon-conversion efficiency
Thanks to the exorad plotter you can easily plot the channels' photon-conversion efficiency. To do so, we need to merge the channel output tables into a cumulative table.
```
from exorad.tasks import MergeChannelsOutput
mergeChannelsOutput = MergeChannelsOutput()
table = mergeChannelsOutput(channels=channels)
from exorad.utils.plotter import Plotter
plotter = Plotter(channels=channels, input_table=table)
efficiency_fig = plotter.plot_efficiency()
```
## Access the payload data
Assume you want to edit one of the payload parameters; for example, you want to change the quantum efficiency of the photometer from 0.55 to 0.65.
Then you will need to build the channels again and produce an updated efficiency figure.
```
payload['channel']['Phot']['detector']['qe']['value'] = 0.65
from exorad.tasks import BuildChannels
buildChannels = BuildChannels()
channels = buildChannels(payload=payload, write=False, output=None)
table = mergeChannelsOutput(channels=channels)
plotter = Plotter(channels=channels, input_table=table)
efficiency_fig = plotter.plot_efficiency()
```
## Explore the telescope self-emission
Even without a target, there is still signal in the telescope coming from self-emission. This can be explored with exorad.
We can plot the signals using the previous plotter. We have to set the lower limit of the y-axis manually, because exorad assumes 1e-3 ct/s as the lower limit, but for the instrument we built the self-emission is far lower because of the low temperatures assumed (~60 K for the optics).
The self-emission is stored in the channel output table in a column named `instrument_signal`. Information on the signal produced by each optical element can be retrieved in the channel dictionary under `['built_instr']['optical_path']['signal_table']`.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax = plotter.plot_signal(ax, ylim=1e-32, scale='log', channel_edges=False)
```
## Observing a target list
### Load a Target list
To observe a list of targets we first need to define them. Exorad can load a target list from file, like the one in `examples/test_target.csv`, or directly from Python. Because the first case is covered by the documentation, let's focus here on the latter. To describe a target in Python, use an Astropy QTable with the same notation used in the file and described in the documentation. Here we produce an example with a single target called "test" that has a mass of 1 solar mass, an effective temperature of 5000 K, a radius of 1 solar radius, and a distance of 10 pc. You can, of course, add more elements to the list if you have more than one target.
```
from astropy.table import QTable, Column
import astropy.units as u
names = Column(['test'], name='star name')
masses = Column([1]*u.M_sun, name='star M')
temperatures = Column([5000]*u.K, name='star Teff')
radii = Column([1] * u.R_sun, name='star R')
distances = Column([10] * u.pc, name='star D')
magK = Column([0]* u.Unit(""), name='star magK')
raw_targetlist = QTable([names, masses,temperatures, radii, distances, magK])
from exorad.tasks import LoadTargetList
loadTargetList = LoadTargetList()
targets = loadTargetList(target_list=raw_targetlist)
# "targets" is now a list of Target classes.
# To read the content of the loaded element we need to convert the Target class into a dictionary
print(targets.target[0].to_dict())
```
### Foregrounds
Before you can observe a target you first need to prepare the table to fill, by calling `PrepareTarget`. This populates the target attribute `table`, which contains the merged channel tables and will be filled in by the successive steps.
Then we can think about the foregrounds. These are defined in the payload configuration file. In our case we have indicated a zodiacal foreground and a custom one described by a csv file; both are listed in `payload['common']['foreground']`. The task `EstimateForegrounds` builds both of them in one shot, but for the sake of learning, let's produce them one at a time with their specific classes. These tasks take the target as input and return it as output, because they add foreground information to the class.
Remember that order matters when you list the contributions in your payload configuration file, because foregrounds can have both emission and transmission: the light of an element placed before another passes through the second one, so its total signal contribution is reduced.
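The ordering rule can be sketched with a toy two-element chain (the numbers are arbitrary, not exorad values): the emission of the element listed first is attenuated by the transmission of the element behind it before reaching the detector.

```python
# Toy foreground chain: element 1 is listed before element 2
e1 = 1.0   # emission of element 1 (arbitrary units)
e2 = 0.5   # emission of element 2
t2 = 0.8   # transmission of element 2
# Element 1's light is filtered by element 2; element 2 emits directly
total = e2 + t2 * e1
print(total)  # 1.3
```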
```
from exorad.tasks import PrepareTarget, EstimateForeground, EstimateZodi
target = targets.target[0]
wl_min, wl_max = payload['common']['wl_min']['value'], payload['common']['wl_max']['value']
prepareTarget = PrepareTarget()
target = prepareTarget(target=target, channels=channels)
estimateZodi = EstimateZodi()
target = estimateZodi(zodi=payload['common']['foreground']['zodiacal'],
target=target,
wl_range=(wl_min, wl_max))
estimateForeground = EstimateForeground()
target = estimateForeground(foreground=payload['common']['foreground']['skyFilter'],
target=target,
wl_range=(wl_min, wl_max))
# We plot now the foreground radiances
fig_zodi, ax = target.foreground['zodi'].plot()
fig_zodi.suptitle('zodi')
fig_sky, ax = target.foreground['skyFilter'].plot()
fig_sky.suptitle('skyFilter')
```
Once the contributions have been estimated, we can propagate them. Here we propagate and plot the foreground signals. The `PropagateForegroundLight` task also populates the target table with the computed foreground signal.
```
from exorad.tasks import PropagateForegroundLight
propagateForegroundLight = PropagateForegroundLight()
target = propagateForegroundLight(channels=channels, target=target)
plotter = Plotter(channels=channels, input_table=target.table)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax = plotter.plot_signal(ax, scale='log', channel_edges=False)
# We show here the information content of the target table, that now contains also the foreground signal
print(target.table.keys())
```
### Target source
We can now load the light source we are going to use for the target. As described in the documentation, we can use a black body, a PHOENIX stellar spectrum, or a custom SED described in a csv file. Here we use a black body, as indicated in the payload configuration file (`<sourceSpectrum> planck </sourceSpectrum>`) and now found in the dict `payload['common']['sourceSpectrum']`.
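A planck source is just a black-body SED. As a standalone sketch (SI units, numpy only — not exorad's implementation), the spectral radiance of our 5000 K test star peaks near the wavelength given by Wien's law:

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck_wl(wl, T):
    """Black-body spectral radiance per unit wavelength [W m^-2 sr^-1 m^-1]."""
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))

wl = np.linspace(1e-7, 3e-6, 20000)   # 0.1–3 micron grid
peak = wl[np.argmax(planck_wl(wl, 5000.0))]
print(peak)  # ~5.8e-7 m, matching Wien's law b/T = 2.898e-3 / 5000
```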
```
from exorad.tasks import LoadSource
loadSource = LoadSource()
target, sed = loadSource(target=target,
source=payload['common']['sourceSpectrum'],
wl_range=(wl_min, wl_max))
fig_source, ax=sed.plot()
fig_source.suptitle(target.name)
```
We can now propagate the source light. The signal information will also be added to the target table.
```
from exorad.tasks import PropagateTargetLight
propagateTargetLight = PropagateTargetLight()
target = propagateTargetLight(channels=channels, target=target)
plotter = Plotter(channels=channels, input_table=target.table)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax = plotter.plot_signal(ax, scale='log', channel_edges=False)
# We show here the information content of the target table, that now contains also the source signal
print(target.table.keys())
```
## Estimate the noise
The noise estimation is the last step. ExoRad computes the photon noise from every signal considered so far, but also the dark current noise and read noise from the detector. It also accounts for custom noise sources that can be added at channel level or at common level in the payload description.
Finally, all this information is added to the target table, which is now our final product.
```
from exorad.tasks import EstimateNoise
estimateNoise = EstimateNoise()
target = estimateNoise(target=target, channels=channels)
plotter = Plotter(channels=channels, input_table=target.table)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax = plotter.plot_noise(ax, scale='log', channel_edges=False)
# We show here the information content of the target table, that now contains also all the noise contributions
print(target.table.keys())
```
```
from netCDF4 import Dataset
path = '/home/joao/Downloads/'
ds = Dataset(path+'OR_ABI-L2-CMIPF-M6C13_G16_s20192781230281_e20192781240001_c20192781240078.nc')
import GOES
import numpy as np
SatHeight = ds.variables['goes_imager_projection'].perspective_point_height
SatLon = ds.variables['goes_imager_projection'].longitude_of_projection_origin
SatSweep = ds.variables['goes_imager_projection'].sweep_angle_axis
X = ds.variables['x']
Y = ds.variables['y']
var = ds.variables['CMI']
# calculating the center of the pixels
lons, lats = GOES.get_lonlat(X, Y, SatLon, SatHeight, SatSweep)
# masking invalid values of the satellite image
var = np.where((var[:].mask==True)|(lons==-999.99), np.nan, var[:])
# if you want to use pcolormesh to plot the data you will need to calculate the corners of each pixel
loncor, latcor = GOES.get_lonlat_corners(lons, lats)
# create a custom color palette with the custom_color_palette package
# (see https://github.com/joaohenry23/custom_color_palette)
import matplotlib.pyplot as plt
import custom_color_palette as ccpl
ListColor = ['maroon','red','darkorange','#ffff00',
'forestgreen','cyan','royalblue',(148/255,0/255,211/255)]
mypalette, clabels, norm = ccpl.creates_palette([ListColor, plt.cm.Greys],[180.0,240.0,330.0],
EditPalette=[None,[180.0,330.0,240.0,330.0]])
# creating colorbar labels
tickslabels = np.arange(180,331,15)
# PLOTTING IMAGE USING CARTOPY
import cartopy.crs as ccrs
import cartopy.feature as cf
from cartopy.mpl.ticker import LatitudeFormatter, LongitudeFormatter
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
# Defines the plot area
LLLon, URLon = -170.0, 20.0
LLLat, URLat = -90.0, 90.0
# Defines map projection
MapProj = ccrs.PlateCarree()
# Defines field projection
FieldProj = ccrs.PlateCarree()
# Creates figure
fig = plt.figure('example_01_cartopy', figsize=(4,4), dpi=200)
ax = fig.add_axes([0.1, 0.16, 0.80, 0.75], projection=MapProj)
ax.set_extent(extents=[LLLon, URLon, LLLat, URLat], crs=MapProj)
# Add geographic boundaries
countries = cf.NaturalEarthFeature(category='cultural', name='admin_0_countries',
scale='50m', facecolor='none')
ax.add_feature(countries, edgecolor='black', linewidth=0.25)
states = cf.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(states, edgecolor='black', linewidth=0.25)
# Plot image
img = ax.pcolormesh(loncor, latcor, var, cmap=mypalette, norm=norm, transform = FieldProj)
# Customizing the plot border
ax.outline_patch.set_linewidth(0.3)
# Plot colorbar
cbar = plt.colorbar(img, ticks=tickslabels, extend='neither', spacing='proportional',
orientation = 'horizontal', cax=fig.add_axes([0.14, 0.07, 0.72, 0.02]))
cbar.ax.tick_params(labelsize=6, labelcolor='black', width=0.5, direction='out', pad=1.0)
cbar.set_label(label='Brightness temperature [K]', size=6, color='black', weight='normal')
cbar.outline.set_linewidth(0.5)
# Sets X axis characteristics
xticks = np.arange(-180.0,25.0,20.0)
ax.set_xticks(xticks, crs=MapProj)
ax.set_xticklabels(xticks, fontsize=5.5, color='black')
lon_formatter = LongitudeFormatter(number_format='.0f', degree_symbol='°',
dateline_direction_label=True)
ax.xaxis.set_major_formatter(lon_formatter)
# Sets Y axis characteristics
yticks = np.arange(90.0,-95.0,-15.0)
ax.set_yticks(yticks, crs=MapProj)
ax.set_yticklabels(yticks, fontsize=5.5, color='black')
lat_formatter = LatitudeFormatter(number_format='.0f', degree_symbol='°')
ax.yaxis.set_major_formatter(lat_formatter)
# Sets grid characteristics
ax.tick_params(left=True, bottom=True, labelleft=True, labelbottom=True, length=0.0, width=0.05)
ax.gridlines(xlocs=xticks, ylocs=yticks, color='gray', alpha=0.6, draw_labels=False,
linewidth=0.25, linestyle='--')
ax.set_xlim(LLLon, URLon)
ax.set_ylim(LLLat, URLat)
fig.text(0.5, 0.11, 'Longitude', color='black', fontsize=8, verticalalignment='center',
horizontalalignment='center')
fig.text(0.02, 0.5, 'Latitude', color='black', fontsize=8, verticalalignment='center',
horizontalalignment='center', rotation=90.0)
#plt.savefig(path+'example_01_cartopy.png')
plt.show()
```
# Eurostat bioenergy balance 2018
Extract bioenergy-related data from an archive containing XLSB files, one for each EU country, each of which contains one sheet per year (1990-2018).
Walk through the Excel files (country spreadsheets) and parse selected variables and fuels for each year (sheet) in each country's spreadsheet.
Somewhere on Eurostat there might be a better source for this data, but I did not find it.
```
import os
import zipfile
import requests
import pandas as pd
import numpy as np
import pyxlsb
def parse_values_for_country(file, country, variables, fuels):
"""Reads fuel variable in multiple sheets 2002-2018.
Sums the values across multiple columns if relevant.
Returns: dict
"""
country_data = {}
for year in range(2002,2019):
df = pd.read_excel(
file,
engine='pyxlsb',
sheet_name=str(year),
skiprows=[0,1,2,3],
index_col=1,
na_values=':',
)
for variable in variables:
for fuel, start, end in fuels:
try:
country_data[(country, year, fuel, variable.lower().replace(' ', '_'))] = df.loc[variable, start:end].sum()
except TypeError:
country_data[(country, year, fuel, variable.lower().replace(' ', '_'))] = pd.to_numeric(df.loc[variable, start:end], errors='coerce').sum()
return country_data
def walk_through_excel_files(directory, variables, fuels):
d = {}
for filename in os.listdir(directory):
if '!' not in filename: # skip readme files
country = filename.split('-')[0]
excel_path = os.path.join(directory, filename)
data = parse_values_for_country(excel_path, country, variables, fuels)
d.update(data)
return d
# Selected variables for bioenergy and some other for context
variables = [
'Primary production',
'Imports',
'Exports',
'Gross inland consumption',
]
fuels = [
('total', 'Total', 'Total'),
('renewables', 'Renewables and biofuels', 'Renewables and biofuels'),
('bioenergy', 'Bioenergy', 'Bioenergy',),
('solid_biomass', 'Primary solid biofuels', 'Primary solid biofuels'),
('biofuels', 'Pure biogasoline', 'Other liquid biofuels'),
('biogas', 'Biogases', 'Biogases'),
('ren_mun_waste', 'Renewable municipal waste', 'Renewable municipal waste'),
]
url = 'https://ec.europa.eu/eurostat/documents/38154/4956218/Energy-Balances-April-2020-edition.zip/69da6e9f-bf8f-cd8e-f4ad-50b52f8ce616'
r = requests.get(url)
with open('eurostat_balances_2020.zip', 'wb') as f:
f.write(r.content)
with zipfile.ZipFile('eurostat_balances_2020.zip', 'r') as zip_archive:
zip_archive.extractall(path='balances/')
# This is quite slow: each file is re-opened once for every sheet
# There must be a better way
%time data_dict = walk_through_excel_files('balances/', variables, fuels)
# https://stackoverflow.com/questions/44012099/creating-a-dataframe-from-a-dict-where-keys-are-tuples
df1 = pd.Series(data_dict).reset_index()
df1.columns = ['country', 'year', 'fuel', 'variable', 'value']
df1.head(3)
df2 = df1.set_index(['country', 'year', 'fuel', 'variable']).unstack(level=3)
df2.head(3)
df2.columns = df2.columns.droplevel(0).values
df2.info()
df2.sort_index(ascending=True, inplace=True)
df2['dependency'] = (df2['imports'] - df2['exports']) / df2['gross_inland_consumption']
df2
df2.to_csv(
'balances_bioenergy_2002_2018_ktoe.csv',
decimal=',',
)
df3 = df2.copy()
tj_ktoe = 41.868
df3 = df3.loc[:, 'exports': 'primary_production'] * tj_ktoe
# Keep the share based on the original data in ktoe
df3['dependency'] = df2['dependency']
df3
df3.to_csv(
'balances_bioenergy_2002_2018_tj.csv',
decimal=',',
)
# Some minimal testing
idx = pd.IndexSlice
df2.loc[idx['CZ', 2018, 'bioenergy'], ['exports']]
assert df2.loc[idx['CZ', 2018, 'bioenergy'], ['exports']].item() == 549.453
df2.loc[idx['CZ', 2009, 'bioenergy'], ['primary_production']]
assert df2.loc[idx['CZ', 2009, 'bioenergy'], ['primary_production']].item() == 2761.8
result_cz_2009_bioenergy = df2.loc[idx['CZ', 2009, 'bioenergy']]
result_cz_2009_bioenergy
cz_2009_bioenergy = pd.Series(
{'exports': 318.821,
'gross_inland_consumption': 2568.609,
'imports': 123.617,
'primary_production': 2761.8,
'dependency': -0.075996,
})
cz_2009_bioenergy
cz_2009_bioenergy.name = ('CZ', 2009, 'bioenergy')
pd.testing.assert_series_equal(cz_2009_bioenergy, result_cz_2009_bioenergy)
```
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Discrete Bayes Filter
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
The Kalman filter belongs to a family of filters called *Bayesian filters*. Most textbook treatments of the Kalman filter present the Bayesian formula, perhaps show how it factors into the Kalman filter equations, but mostly keep the discussion at a very abstract level.
That approach requires a fairly sophisticated understanding of several fields of mathematics, and it still leaves much of the work of understanding and forming an intuitive grasp of the situation in the hands of the reader.
I will use a different way to develop the topic, for which I owe a great debt to the work of Dieter Fox and Sebastian Thrun. It depends on building an intuition about how Bayesian statistics work by tracking an object through a hallway - they use a robot, I use a dog. I like dogs, and they are less predictable than robots, which imposes interesting difficulties for filtering. The first published example of this that I can find seems to be Fox 1999 [1], with a fuller example in Fox 2003 [2]. Sebastian Thrun also uses this formulation in his excellent Udacity course Artificial Intelligence for Robotics [3]. In fact, if you like watching videos, I highly recommend pausing your reading of this book in favor of the first few lessons of that course, and then coming back to this book for a deeper dive into the topic.
Let's now use a simple thought experiment, much like we did with the g-h filter, to see how we might reason about the use of probabilities for filtering and tracking.
## Tracking a Dog
Let's begin with a simple problem. We have a dog friendly workspace, and so people bring their dogs to work. Occasionally the dogs wander out of offices and down the halls. We want to be able to track them. So during a hackathon somebody invented a sonar sensor to attach to the dog's collar. It emits a signal, listens for the echo, and based on how quickly an echo comes back we can tell whether the dog is in front of an open doorway or not. It also senses when the dog walks, and reports in which direction the dog has moved. It connects to the network via wifi and sends an update once a second.
I want to track my dog Simon, so I attach the device to his collar and then fire up Python, ready to write code to track him through the building. At first blush this may appear impossible. If I start listening to the sensor of Simon's collar I might read **door**, **hall**, **hall**, and so on. How can I use that information to determine where Simon is?
To keep the problem small enough to plot easily we will assume that there are only 10 positions in the hallway, which we will number 0 to 9, where 1 is to the right of 0. For reasons that will be clear later, we will also assume that the hallway is circular or rectangular. If you move right from position 9, you will be at position 0.
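The circular wrap-around can be expressed with modulo arithmetic; here is a minimal sketch (the helper name is illustrative, not part of the book's code):

```python
# A minimal sketch of the circular hallway arithmetic, assuming 10 positions.
n_positions = 10

def move_right(pos, steps=1):
    """New position after moving `steps` to the right on the circular hallway."""
    return (pos + steps) % n_positions

print(move_right(9))     # wraps around: position 9 -> position 0
print(move_right(7, 4))  # 7 + 4 wraps to position 1
```

Negative steps (moves to the left) wrap the same way, since Python's `%` always returns a value in `[0, n_positions)`.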
When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. From my perspective he is equally likely to be in any position. There are 10 positions, so the probability that he is in any given position is 1/10.
Let's represent our belief of his position in a NumPy array. I could use a Python list, but NumPy arrays offer functionality that we will be using soon.
```
import numpy as np
belief = np.array([1./10]*10)
print(belief)
```
In [Bayesian statistics](https://en.wikipedia.org/wiki/Bayesian_probability) this is called a [*prior*](https://en.wikipedia.org/wiki/Prior_probability). It is the probability prior to incorporating measurements or other information. More completely, this is called the *prior probability distribution*. A [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) is a collection of all possible probabilities for an event. Probability distributions always sum to 1 because something had to happen; the distribution lists all possible events and the probability of each.
I'm sure you've used probabilities before - as in "the probability of rain today is 30%". The last paragraph sounds like more of that. But Bayesian statistics was a revolution in probability because it treats probability as a belief about a single event. Let's take an example. I know that if I flip a fair coin infinitely many times I will get 50% heads and 50% tails. This is called [*frequentist statistics*](https://en.wikipedia.org/wiki/Frequentist_inference) to distinguish it from Bayesian statistics. Computations are based on the frequency in which events occur.
I flip the coin one more time and let it land. Which way do I believe it landed? Frequentist probability has nothing to say about that; it will merely state that 50% of coin flips land as heads. In some ways it is meaningless to assign a probability to the current state of the coin. It is either heads or tails, we just don't know which. Bayes treats this as a belief about a single event - the strength of my belief or knowledge that this specific coin flip is heads is 50%. Some object to the term "belief"; belief can imply holding something to be true without evidence. In this book it always is a measure of the strength of our knowledge. We'll learn more about this as we go.
Bayesian statistics takes past information (the prior) into account. We observe that it rains 4 times every 100 days. From this I could state that the chance of rain tomorrow is 1/25. This is not how weather prediction is done. If I know it is raining today and the storm front is stalled, it is likely to rain tomorrow. Weather prediction is Bayesian.
In practice statisticians use a mix of frequentist and Bayesian techniques. Sometimes finding the prior is difficult or impossible, and frequentist techniques rule. In this book we can find the prior. When I talk about the probability of something I am referring to the probability that some specific thing is true given past events. When I do that I'm taking the Bayesian approach.
Now let's create a map of the hallway. We'll place the first two doors close together, and then another door further away. We will use 1 for doors, and 0 for walls:
```
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
```
I start listening to Simon's transmissions on the network, and the first data I get from the sensor is **door**. For the moment assume the sensor always returns the correct answer. From this I conclude that he is in front of a door, but which one? I have no reason to believe he is in front of the first, second, or third door. What I can do is assign a probability to each door. All doors are equally likely, and there are three of them, so I assign a probability of 1/3 to each door.
```
from kf_book.book_plots import figsize, set_figsize
import kf_book.book_plots as book_plots
import matplotlib.pyplot as plt
belief = np.array([1./3, 1./3, 0, 0, 0, 0, 0, 0, 1/3, 0])
plt.figure()
set_figsize(y=2)
book_plots.bar_plot(belief)
```
This distribution is called a [*categorical distribution*](https://en.wikipedia.org/wiki/Categorical_distribution), which is a discrete distribution describing the probability of observing $n$ outcomes. It is a [*multimodal distribution*](https://en.wikipedia.org/wiki/Multimodal_distribution) because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that we have narrowed down our knowledge to one of these three locations. My (Bayesian) belief is that there is a 33.3% chance of being at door 0, 33.3% at door 1, and a 33.3% chance of being at door 8.
This is an improvement in two ways. I've rejected a number of hallway positions as impossible, and the strength of my belief in the remaining positions has increased from 10% to 33%. This will always happen. As our knowledge improves the probabilities will get closer to 100%.
A few words about the [*mode*](https://en.wikipedia.org/wiki/Mode_(statistics))
of a distribution. Given a set of numbers, such as {1, 2, 2, 2, 3, 3, 4}, the *mode* is the number that occurs most often. For this set the mode is 2. A set can contain more than one mode. The set {1, 2, 2, 2, 3, 3, 4, 4, 4} contains the modes 2 and 4, because both occur three times. We say the former set is [*unimodal*](https://en.wikipedia.org/wiki/Unimodality), and the latter is *multimodal*.
Another term used for this distribution is a [*histogram*](https://en.wikipedia.org/wiki/Histogram). Histograms graphically depict the distribution of a set of numbers. The bar chart above is a histogram.
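The mode computation described above is easy to sketch in code (an illustrative helper, not part of the book's library):

```python
from collections import Counter

# Find the mode(s) of the example sets from the text.
def modes(data):
    counts = Counter(data)
    top = max(counts.values())
    return sorted(k for k, v in counts.items() if v == top)

print(modes([1, 2, 2, 2, 3, 3, 4]))        # [2] - unimodal
print(modes([1, 2, 2, 2, 3, 3, 4, 4, 4]))  # [2, 4] - multimodal
```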
I hand coded the `belief` array in the code above. How would we implement this in code? We represent doors with 1, and walls as 0, so we will multiply the hallway variable by the percentage, like so:
```
belief = hallway * (1./3)
print(belief)
```
## Extracting Information from Sensor Readings
Let's put Python aside and think about the problem a bit. Suppose we were to read the following from Simon's sensor:
* door
* move right
* door
Can we deduce Simon's location? Of course! Given the hallway's layout there is only one place from which you can get this sequence, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. The only possibility is that he is now in front of the second door. Our belief is:
```
belief = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
```
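As a sanity check, the deduction above can be brute-forced: try every starting position and keep the ones consistent with the readings **door**, **move right**, **door**.

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# a start position p is consistent if there is a door at p and at p+1 (circular)
matches = [p for p in range(10)
           if hallway[p] == 1 and hallway[(p + 1) % 10] == 1]
print(matches)  # [0] - only a start at position 0 fits, so Simon is now at 1
```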
I designed the hallway layout and sensor readings to give us an exact answer quickly. Real problems are not so clear cut. But this should trigger your intuition - the first sensor reading only gave us low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we know more about where he is. You might suspect, correctly, that if you had a very long hallway with a large number of doors, then after several sensor readings and position updates we would either know where Simon was, or have narrowed his location down to a small number of possibilities. This is possible when a set of sensor readings only matches one to a few starting locations.
We could implement this solution now, but instead let's consider a real world complication to the problem.
## Noisy Sensors
Perfect sensors are rare. Perhaps the sensor would not detect a door if Simon sat in front of it while scratching himself, or misread if he is not facing down the hallway. Thus when I get **door** I cannot use 1/3 as the probability. I have to assign less than 1/3 to each door, and assign a small probability to each blank wall position. Something like
```Python
[.31, .31, .01, .01, .01, .01, .01, .01, .31, .01]
```
At first this may seem insurmountable. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure?
The answer, as for the problem above, is with probabilities. We are already comfortable assigning a probabilistic belief to the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise.
Say we get a reading of **door**, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to fix that in a moment.
Let's look at that in Python code. Here I use the variable `z` to denote the measurement. `z` or `y` are customary choices in the literature for the measurement. As a programmer I prefer meaningful variable names, but I want you to be able to read the literature and/or other filtering code, so I will start introducing these abbreviated names now.
```
def update_belief(hall, belief, z, correct_scale):
for i, val in enumerate(hall):
if val == z:
belief[i] *= correct_scale
belief = np.array([0.1] * 10)
reading = 1 # 1 is 'door'
update_belief(hallway, belief, z=reading, correct_scale=3.)
print('belief:', belief)
print('sum =', sum(belief))
plt.figure()
book_plots.bar_plot(belief)
```
This is not a probability distribution because it does not sum to 1.0. But the code is doing mostly the right thing - the doors are assigned a number (0.3) that is 3 times higher than the walls (0.1). All we need to do is normalize the result so that the probabilities correctly sum to 1.0. Normalization is done by dividing each element by the sum of all elements in the list. That is easy with NumPy:
```
belief / sum(belief)
```
FilterPy implements this with the `normalize` function:
```Python
from filterpy.discrete_bayes import normalize
normalize(belief)
```
It is a bit odd to say "3 times as likely to be right as wrong". We are working in probabilities, so let's specify the probability of the sensor being correct, and compute the scale factor from that. The equation for that is
$$scale = \frac{prob_{correct}}{prob_{incorrect}} = \frac{prob_{correct}} {1-prob_{correct}}$$
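A quick numeric check of that equation (the helper name is illustrative):

```python
# scale factor implied by the sensor's probability of being correct
def scale_from_prob(z_prob):
    return z_prob / (1. - z_prob)

print(scale_from_prob(0.75))  # 3.0 - a 75%-accurate sensor is 3x more likely right than wrong
print(scale_from_prob(0.5))   # 1.0 - a coin-flip sensor adds no information
```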
Also, the `for` loop is cumbersome. As a general rule you will want to avoid using `for` loops in NumPy code. NumPy is implemented in C and Fortran, so if you avoid for loops the result often runs 100x faster than the equivalent loop.
How do we get rid of this `for` loop? NumPy lets you index arrays with boolean arrays. You create a boolean array with logical operators. We can find all the doors in the hallway with:
```
hallway == 1
```
When you use the boolean array as an index to another array it returns only the elements where the index is `True`. Thus we can replace the `for` loop with
```python
belief[hall==z] *= scale
```
and only the elements which equal `z` will be multiplied by `scale`.
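Here is the idiom on its own, using the hallway from the text (a standalone sketch, separate from the book's `scaled_update` code):

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
belief = np.array([0.1] * 10)
z, scale = 1, 3.

# boolean index: scale only the positions whose value equals the measurement
belief[hallway == z] *= scale
print(belief)  # doors become 0.3, walls stay 0.1
```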
Teaching you NumPy is beyond the scope of this book. I will use idiomatic NumPy constructs and explain them the first time I present them. If you are new to NumPy there are many blog posts and videos on how to use NumPy efficiently and idiomatically. For example, this video by Jake Vanderplas is often recommended: https://vimeo.com/79820956.
Here is our improved version:
```
from filterpy.discrete_bayes import normalize
def scaled_update(hall, belief, z, z_prob):
scale = z_prob / (1. - z_prob)
belief[hall==z] *= scale
normalize(belief)
belief = np.array([0.1] * 10)
scaled_update(hallway, belief, z=1, z_prob=.75)
print('sum =', sum(belief))
print('probability of door =', belief[0])
print('probability of wall =', belief[2])
book_plots.bar_plot(belief, ylim=(0, .3))
```
We can see from the output that the sum is now 1.0, and that the probability of a door vs wall is still three times larger. The result also fits our intuition that the probability of a door must be less than 0.333, and that the probability of a wall must be greater than 0.0. Finally, it should fit our intuition that we have not yet been given any information that would allow us to distinguish between any given door or wall position, so all door positions should have the same value, and the same should be true for wall positions.
This result is called the [*posterior*](https://en.wikipedia.org/wiki/Posterior_probability), which is short for *posterior probability distribution*. All this means is a probability distribution *after* incorporating the measurement information (posterior means 'after' in this context). To review, the *prior* is the probability distribution before including the measurement's information.
Another term is the [*likelihood*](https://en.wikipedia.org/wiki/Likelihood_function). When we computed `belief[hall==z] *= scale` we were computing how *likely* each position was given the measurement. The likelihood is not a probability distribution because it does not sum to one.
The combination of these gives the equation
$$\mathtt{posterior} = \frac{\mathtt{likelihood} \times \mathtt{prior}}{\mathtt{normalization}}$$
It is very important to learn and internalize these terms as most of the literature uses them extensively.
Does `scaled_update()` perform this computation? It does. Let me recast it into this form:
```
def scaled_update(hall, belief, z, z_prob):
scale = z_prob / (1. - z_prob)
likelihood = np.ones(len(hall))
likelihood[hall==z] *= scale
return normalize(likelihood * belief)
```
This function is not fully general. It contains knowledge about the hallway, and how we match measurements to it. We always strive to write general functions. Here we will remove the computation of the likelihood from the function, and require the caller to compute the likelihood themselves.
Here is a full implementation of the algorithm:
```python
def update(likelihood, prior):
return normalize(likelihood * prior)
```
Computation of the likelihood varies per problem. For example, the sensor might not return just 1 or 0, but a `float` between 0 and 1 indicating the probability of being in front of a door. It might use computer vision and report a blob shape that you then probabilistically match to a door. It might use sonar and return a distance reading. In each case the computation of the likelihood will be different. We will see many examples of this throughout the book, and learn how to perform these calculations.
FilterPy implements `update`. Here is the previous example in a fully general form:
```
from filterpy.discrete_bayes import update
def lh_hallway(hall, z, z_prob):
""" compute likelihood that a measurement matches
positions in the hallway."""
try:
scale = z_prob / (1. - z_prob)
except ZeroDivisionError:
scale = 1e8
likelihood = np.ones(len(hall))
likelihood[hall==z] *= scale
return likelihood
belief = np.array([0.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
update(likelihood, belief)
```
## Incorporating Movement
Recall how quickly we were able to find an exact solution when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution with noisy sensors?
Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math.
First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our `belief` array?
I hope that after a moment's thought it is clear that we should shift all the values one space to the right. If we previously thought there was a 50% chance of Simon being at position 3, then after he moved one position to the right we should believe that there is a 50% chance he is at position 4. The hallway is circular, so we will use modulo arithmetic to perform the shift.
```
def perfect_predict(belief, move):
""" move the position by `move` spaces, where positive is
to the right, and negative is to the left
"""
n = len(belief)
result = np.zeros(n)
for i in range(n):
result[i] = belief[(i-move) % n]
return result
belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
plt.subplot(121)
book_plots.bar_plot(belief, title='Before prediction', ylim=(0, .4))
belief = perfect_predict(belief, 1)
plt.subplot(122)
book_plots.bar_plot(belief, title='After prediction', ylim=(0, .4))
```
We can see that we correctly shifted all values one position to the right, wrapping from the end of the array back to the beginning.
If you execute the next cell by pressing CTRL-Enter in it you can see this in action. This simulates Simon walking around and around the hallway. It does not (yet) incorporate new measurements so the probability distribution does not change.
```
import time
%matplotlib notebook
set_figsize(y=2)
fig = plt.figure()
for _ in range(50):
# Simon takes one step to the right
belief = perfect_predict(belief, 1)
plt.cla()
book_plots.bar_plot(belief, ylim=(0, .4))
fig.canvas.draw()
time.sleep(0.05)
# reset to noninteractive plot settings
%matplotlib inline
set_figsize(y=2);
```
## Terminology
Let's pause a moment to review terminology. I introduced this terminology in the last chapter, but let's take a second to help solidify your knowledge.
The *system* is what we are trying to model or filter. Here the system is our dog. The *state* is its current configuration or value. In this chapter the state is our dog's position. We rarely know the actual state, so we say our filters produce the *estimated state* of the system. In practice this often gets called the state, so be careful to understand the context.
One cycle of prediction and updating with a measurement is called the state or system *evolution*, which is short for *time evolution* [7]. Another term is *system propagation*. It refers to how the state of the system changes over time. For filters, time is usually a discrete step, such as 1 second. For our dog tracker the system state is the position of the dog, and the state evolution is the position after a discrete amount of time has passed.
We model the system behavior with the *process model*. Here, our process model is that the dog moves one or more positions at each time step. This is not a particularly accurate model of how dogs behave. The error in the model is called the *system error* or *process error*.
The prediction is our new *prior*. Time has moved forward and we made a prediction without benefit of knowing the measurements.
Let's work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds?
Clearly,
$$ \begin{aligned}
\bar x &= 17 + (15*2) \\
&= 47
\end{aligned}$$
I use bars over variables to indicate that they are priors (predictions). We can write the equation for the process model like this:
$$ \bar x_{k+1} = f_x(\bullet) + x_k$$
$x_k$ is the current position or state. If the dog is at 17 m then $x_k = 17$.
$f_x(\bullet)$ is the state propagation function for x. It describes how much the $x_k$ changes over one time step. For our example it performs the computation $15 \cdot 2$ so we would define it as
$$f_x(v_x, t) = v_x t.$$
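As a quick check of the arithmetic, the worked example can be computed directly (a minimal sketch; `f_x` here is just the equation above written as a hypothetical helper function):

```python
def f_x(v_x, t):
    # state propagation: distance covered in one time step
    return v_x * t

x_k = 17                   # current position in meters
x_bar = x_k + f_x(15, 2)   # prior after one 2-second epoch at 15 m/s
print(x_bar)               # 47
```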
## Adding Uncertainty to the Prediction
`perfect_predict()` assumes perfect measurements, but all sensors have noise. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? This may sound like an insurmountable problem, but let's model it and see what happens.
Assume that the sensor's movement measurement is 80% likely to be correct, 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if the movement measurement is 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces.
Each result in the array now needs to incorporate probabilities for 3 different situations. For example, consider the reported movement of 2. If we are 100% certain the dog started from position 3, then there is an 80% chance he is at 5, and a 10% chance for either 4 or 6. Let's try coding that:
```
def predict_move(belief, move, p_under, p_correct, p_over):
    n = len(belief)
    prior = np.zeros(n)
    for i in range(n):
        prior[i] = (
            belief[(i-move) % n]   * p_correct +
            belief[(i-move-1) % n] * p_over +
            belief[(i-move+1) % n] * p_under)
    return prior

belief = [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]
prior = predict_move(belief, 2, .1, .8, .1)
book_plots.plot_belief_vs_prior(belief, prior)
```
It appears to work correctly. Now what happens when our belief is not 100% certain?
```
belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)
book_plots.plot_belief_vs_prior(belief, prior)
prior
```
Here the results are more complicated, but you should still be able to work it out in your head. The 0.04 is due to the possibility that the 0.4 belief undershot by 1. The 0.38 is due to the following: the 80% chance that we moved 2 positions (0.4 $\times$ 0.8) and the 10% chance that we undershot (0.6 $\times$ 0.1). Overshooting plays no role here because if we overshot both 0.4 and 0.6 would be past this position. **I strongly suggest working some examples until all of this is very clear, as so much of what follows depends on understanding this step.**
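If you prefer to check the arithmetic with code rather than in your head, here is one way (the function is repeated from the cell above so the snippet stands alone):

```python
import numpy as np

def predict_move(belief, move, p_under, p_correct, p_over):
    n = len(belief)
    prior = np.zeros(n)
    for i in range(n):
        prior[i] = (belief[(i-move) % n]   * p_correct +
                    belief[(i-move-1) % n] * p_over +
                    belief[(i-move+1) % n] * p_under)
    return prior

belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)
# prior[3] ≈ 0.04 (the 0.4 belief undershot by 1)
# prior[4] ≈ 0.38 (0.4 x 0.8 plus 0.6 x 0.1), and the total still sums to 1
```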
If you look at the probabilities after performing the prediction you might be dismayed. In the example above we started with probabilities of 0.4 and 0.6 in two positions; after performing the prediction the probabilities are not only lowered, but also spread out across the map.
This is not a coincidence, or the result of a carefully chosen example - it is always true of the prediction. If the sensor is noisy we lose some information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `belief` array. Let's try this with 100 iterations. The plot is animated; recall that you put the cursor in the cell and press Ctrl-Enter to execute the code and see the animation.
```
%matplotlib notebook
set_figsize(y=2)
belief = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
plt.figure()

for i in range(100):
    plt.cla()
    belief = predict_move(belief, 1, .1, .8, .1)
    book_plots.bar_plot(belief)
    plt.title('Step {}'.format(i+1))
    plt.gcf().canvas.draw()

print('Final Belief:', belief)

# reset to noninteractive plot settings
%matplotlib inline
set_figsize(y=2)
```
After 100 iterations we have lost almost all information, even though we were 100% sure that we started in position 0. Feel free to play with the numbers to see the effect of differing numbers of iterations. For example, after 100 iterations a small amount of information is left, after 50 a lot is left, but by 200 iterations essentially all information is lost.
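You can verify the 200-iteration claim without the animation. This sketch writes the same predict step with `np.roll` instead of the explicit loop used above:

```python
import numpy as np

belief = np.array([1.] + [0.] * 9)
for _ in range(200):
    # one predict step with the [.1, .8, .1] kernel and a move of 1
    belief = (.1 * np.roll(belief, 0) +   # undershoot: no move
              .8 * np.roll(belief, 1) +   # correct: one step right
              .1 * np.roll(belief, 2))    # overshoot: two steps right

print(belief.max() - belief.min())  # nearly 0: belief is almost uniform
print(belief.sum())                 # still 1: probability is conserved
```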
And, if you are viewing this online here is an animation of that output.
<img src="animations/02_no_info.gif">
I will not generate these standalone animations through the rest of the book. Please see the preface for instructions to run this book on the web, for free, or install IPython on your computer. This will allow you to run all of the cells and see the animations. It's very important that you practice with this code, not just read passively.
## Generalizing with Convolution
We made the assumption that the movement error is at most one position. But it is possible for the error to be two, three, or more positions. As programmers we always want to generalize our code so that it works for all cases.
This is easily solved with [*convolution*](https://en.wikipedia.org/wiki/Convolution). Convolution modifies one function with another function. In our case we are modifying a probability distribution with the error function of the sensor. The implementation of `predict_move()` is a convolution, though we did not call it that. Formally, convolution is defined as
$$ (f \ast g) (t) = \int_0^t \!f(\tau) \, g(t-\tau) \, \mathrm{d}\tau$$
where $f\ast g$ is the notation for convolving $f$ with $g$. It does not mean multiply.
Integrals are for continuous functions, but we are using discrete functions. We replace the integral with a summation, and the parentheses with array brackets.
$$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$
Comparison shows that `predict_move()` is computing this equation - it computes the sum of a series of multiplications.
[Khan Academy](https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution) [4] has a good introduction to convolution, and Wikipedia has some excellent animations of convolutions [5]. But the general idea is already clear. You slide an array called the *kernel* across another array, multiplying the neighbors of the current cell with the values of the second array. In our example above we used 0.8 for the probability of moving to the correct location, 0.1 for undershooting, and 0.1 for overshooting. We make a kernel of this with the array `[0.1, 0.8, 0.1]`. All we need to do is write a loop that goes over each element of our array, multiplying by the kernel, and summing the results. To emphasize that the belief is a probability distribution I have named it `pdf`.
```
def predict_move_convolution(pdf, offset, kernel):
    N = len(pdf)
    kN = len(kernel)
    width = int((kN - 1) / 2)

    prior = np.zeros(N)
    for i in range(N):
        for k in range(kN):
            index = (i + (width-k) - offset) % N
            prior[i] += pdf[index] * kernel[k]
    return prior
```
This illustrates the algorithm, but it runs very slowly. SciPy provides a convolution routine `convolve()` in the `ndimage.filters` module. We need to shift the pdf by `offset` before convolution; `np.roll()` does that. The move and predict algorithm can then be implemented in one line:
```python
convolve(np.roll(pdf, offset), kernel, mode='wrap')
```
FilterPy implements this with `discrete_bayes`' `predict()` function.
```
from filterpy.discrete_bayes import predict
belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05]
prior = predict(belief, offset=1, kernel=[.1, .8, .1])
book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))
```
All of the elements are unchanged except the middle ones. The values in positions 4 and 6 should be
$$(0.1 \times 0.05)+ (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$$
Position 5 should be $$(0.1 \times 0.05) + (0.8 \times 0.55)+ (0.1 \times 0.05) = 0.45$$
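If FilterPy is not installed, the same numbers can be checked with a self-contained version of the convolution loop (this re-implements the logic of `predict_move_convolution()` from above):

```python
import numpy as np

def predict_conv(pdf, offset, kernel):
    # circular convolution of the shifted belief with the kernel
    n, kn = len(pdf), len(kernel)
    width = (kn - 1) // 2
    prior = np.zeros(n)
    for i in range(n):
        for k in range(kn):
            prior[i] += pdf[(i + (width - k) - offset) % n] * kernel[k]
    return prior

belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05]
prior = predict_conv(belief, offset=1, kernel=[.1, .8, .1])
# prior[4] and prior[6] are 0.1, prior[5] is 0.45, the rest remain 0.05
```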
Let's ensure that it shifts the positions correctly for movements greater than one and for asymmetric kernels.
```
prior = predict(belief, offset=3, kernel=[.05, .05, .6, .2, .1])
book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))
```
The position was correctly shifted by 3 positions and we give more weight to the likelihood of an overshoot vs an undershoot, so this looks correct.
Make sure you understand what we are doing. We are making a prediction of where the dog is moving, and convolving the probabilities to get the prior.
If we weren't using probabilities we would use this equation that I gave earlier:
$$ \bar x_{k+1} = x_k + f_{\mathbf x}(\bullet)$$
The prior, our prediction of where the dog will be, is the amount the dog moved plus his current position. The dog was at 10, he moved 5 meters, so he is now at 15 m. It couldn't be simpler. But we are using probabilities to model this, so our equation is:
$$ \bar{ \mathbf x}_{k+1} = \mathbf x_k \ast f_{\mathbf x}(\bullet)$$
We are *convolving* the current probabilistic position estimate with a probabilistic estimate of how much we think the dog moved. It's the same concept, but the math is slightly different. $\mathbf x$ is bold to denote that it is an array of numbers.
## Integrating Measurements and Movement Updates
The problem of losing information during a prediction may make it seem as if our system would quickly devolve into having no knowledge. However, each prediction is followed by an update where we incorporate the measurement into the estimate. The update improves our knowledge. The output of the update step is fed into the next prediction. The prediction degrades our certainty. That is passed into another update, where certainty is again increased.
Let's think about this intuitively. Consider a simple case - you are tracking a dog while he sits still. During each prediction you predict he doesn't move. Your filter quickly *converges* on an accurate estimate of his position. Then the microwave in the kitchen turns on, and he goes streaking off. You don't know this, so at the next prediction you predict he is in the same spot. But the measurements tell a different story. As you incorporate the measurements your belief will be smeared along the hallway, leading towards the kitchen. On every epoch (cycle) your belief that he is sitting still will get smaller, and your belief that he is inbound towards the kitchen at a startling rate of speed increases.
That is what intuition tells us. What does the math tell us?
We have already programmed the update and predict steps. All we need to do is feed the result of one into the other, and we will have implemented a dog tracker!!! Let's see how it performs. We will input measurements as if the dog started at position 0 and moved right one position each epoch. As in a real world application, we will start with no knowledge of his position by assigning equal probability to all positions.
```
from filterpy.discrete_bayes import update
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
prior = np.array([.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```
After the first update we have assigned a high probability to each door position, and a low probability to each wall position.
```
kernel = (.1, .8, .1)
prior = predict(posterior, 1, kernel)
book_plots.plot_prior_vs_posterior(prior, posterior, True, ylim=(0,.5))
```
The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense.
```
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```
Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now we will add an update and then sense the wall.
```
prior = predict(posterior, 1, kernel)
likelihood = lh_hallway(hallway, z=0, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```
This is exciting! We have a very prominent bar at position 2 with a value of around 35%. It is over twice the value of any other bar in the plot, and is about 4% larger than our last plot, where the tallest bar was around 31%. Let's see one more cycle.
```
prior = predict(posterior, 1, kernel)
likelihood = lh_hallway(hallway, z=0, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```
I ignored an important issue. Earlier I assumed that we had a motion sensor for the predict step; then, when talking about the dog and the microwave I assumed that you had no knowledge that he suddenly began running. I mentioned that your belief that the dog is running would increase over time, but I did not provide any code for this. In short, how do we detect and/or estimate changes in the process model if we aren't directly measuring it?
For now I want to ignore this problem. In later chapters we will learn the mathematics behind this estimation; for now it is a large enough task just to learn this algorithm. It is profoundly important to solve this problem, but we haven't yet built enough of the mathematical apparatus that is required, and so for the remainder of the chapter we will ignore the problem by assuming we have a sensor that senses movement.
## The Discrete Bayes Algorithm
This chart illustrates the algorithm:
```
book_plots.create_predict_update_chart()
```
This filter is a form of the g-h filter. Here we are using the percentages for the errors to implicitly compute the $g$ and $h$ parameters. We could express the discrete Bayes algorithm as a g-h filter, but that would obscure the logic of this filter.
The filter equations are:
$$\begin{aligned} \bar {\mathbf x} &= \mathbf x \ast f_{\mathbf x}(\bullet)\, \, &\text{Predict Step} \\
\mathbf x &= \|\mathcal L \cdot \bar{\mathbf x}\|\, \, &\text{Update Step}\end{aligned}$$
$\mathcal L$ is the usual way to write the likelihood function, so I use that. The $\|\|$ notation denotes taking the norm. We need to normalize the product of the likelihood with the prior to ensure $x$ is a probability distribution that sums to one.
We can express this in pseudocode.
**Initialization**
1. Initialize our belief in the state
**Predict**
1. Based on the system behavior, predict state at the next time step
2. Adjust belief to account for the uncertainty in prediction
**Update**
1. Get a measurement and associated belief about its accuracy
2. Compute residual between estimated state and measurement
3. Determine whether the measurement matches each state
4. Update state belief if it matches the measurement
When we cover the Kalman filter we will use this exact same algorithm; only the details of the computation will differ.
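The pseudocode above can be sketched as one self-contained loop. Everything here re-implements helpers already used in this chapter (`predict`, `update`, `lh_hallway`) in plain NumPy, so treat it as an illustration rather than the FilterPy implementation:

```python
import numpy as np

def normalize(pdf):
    return pdf / pdf.sum()

def update(likelihood, prior):
    return normalize(likelihood * prior)   # update step

def predict(pdf, offset, kernel):
    # predict step: circular convolution of the shifted belief
    n, kn = len(pdf), len(kernel)
    width = (kn - 1) // 2
    prior = np.zeros(n)
    for i in range(n):
        for k in range(kn):
            prior[i] += pdf[(i + (width - k) - offset) % n] * kernel[k]
    return prior

def lh_hallway(hall, z, z_prob):
    # likelihood: positions matching the measurement get scaled up
    likelihood = np.ones(len(hall))
    likelihood[hall == z] *= z_prob / (1. - z_prob)
    return likelihood

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
posterior = np.array([.1] * 10)   # initialization: uniform belief
for z in [1, 1, 0, 0, 0]:         # door, door, wall, wall, wall
    prior = predict(posterior, 1, [.1, .8, .1])
    posterior = update(lh_hallway(hallway, z, .75), prior)
print(np.argmax(posterior))       # most likely position
```

Each pass through the loop is one epoch: predict degrades certainty, the measurement restores it.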
Algorithms in this form are sometimes called *predictor correctors*. We make a prediction, then correct it with a measurement.
Let's animate this. I've plotted the position of the doorways in black. Priors are drawn in orange, and the posterior in blue. You can see how the prior shifts the position and reduces certainty, and the posterior stays in the same position and increases certainty as it incorporates the information from the measurement. I've made the measurement perfect with the line `z_prob = 1.0`; we will explore the effect of imperfect measurements in the next section. Finally, I draw a thick vertical line to indicate where Simon really is. This is not an output of the filter - we know where Simon really is only because we are simulating his movement.
```
%matplotlib notebook

def discrete_bayes_sim(pos, kernel, zs, z_prob_correct, sleep=0.25):
    N = len(hallway)
    fig = plt.figure()
    for i, z in enumerate(zs):
        # predict step: show the prior in orange
        plt.cla()
        prior = predict(pos, 1, kernel)
        book_plots.bar_plot(hallway, c='k')
        book_plots.bar_plot(prior, ylim=(0, 1.0), c='#ff8015')
        plt.axvline(i % N + 0.4, lw=5)
        fig.canvas.draw()
        time.sleep(sleep)

        # update step: show the posterior in blue
        plt.cla()
        likelihood = lh_hallway(hallway, z=z, z_prob=z_prob_correct)
        pos = update(likelihood, prior)
        book_plots.bar_plot(hallway, c='k')
        book_plots.bar_plot(pos, ylim=(0, 1.0))
        plt.axvline(i % 10 + 0.4, lw=5)
        fig.canvas.draw()
        time.sleep(sleep)
    plt.show()
    print('Final posterior:', pos)

# change these numbers to alter the simulation
kernel = (.1, .8, .1)
z_prob = 1.0
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# list of perfect measurements
measurements = [hallway[i % len(hallway)] for i in range(25)]

pos = np.array([.1]*10)
discrete_bayes_sim(pos, kernel, measurements, z_prob)

# reset to noninteractive plot settings
%matplotlib inline
set_figsize(y=2)
```
## The Effect of Bad Sensor Data
You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a *filter* - it should filter out bad sensor measurements. Does it do that?
To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and hallways, and run the algorithm on 5 correct measurements:
```
hallway = np.array([1, 0, 1, 0, 0]*2)
kernel = (.1, .8, .1)
prior = np.array([.1] * 10)
measurements = [1, 0, 1, 0, 0]
z_prob = 0.75
discrete_bayes_sim(prior, kernel, measurements, z_prob)
```
We have identified the likely cases of having started at position 0 or 5, because we saw this sequence of doors and walls: 1,0,1,0,0. Now I inject a bad measurement. The next measurement should be 1, but instead we get a 0:
```
measurements = [1, 0, 1, 0, 0, 0]
discrete_bayes_sim(prior, kernel, measurements, z_prob)
```
That one bad measurement has significantly eroded our knowledge. Now let's continue with a series of correct measurements.
```
with figsize(y=5.5):
    measurements = [0, 1, 0, 1, 0, 0]
    for i, m in enumerate(measurements):
        likelihood = lh_hallway(hallway, z=m, z_prob=.75)
        posterior = update(likelihood, prior)
        prior = predict(posterior, 1, kernel)
        plt.subplot(3, 2, i+1)
        book_plots.bar_plot(posterior, ylim=(0, .4),
                            title='step {}'.format(i+1))
    plt.tight_layout()
```
We quickly filtered out the bad sensor reading and converged on the most likely positions for our dog.
## Drawbacks and Limitations
Do not be misled by the simplicity of the examples I chose. This is a robust and complete filter, and you may use the code in real world solutions. If you need a multimodal, discrete filter, this filter works.
With that said, this filter is not often used because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book.
The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinate, and probably his velocity $(\dot{x},\dot{y})$ as well. We have not covered the multidimensional case, but instead of an array we use a multidimensional grid to store the probabilities at each discrete location. Each `update()` and `predict()` step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time *per time step*. Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements.
The second problem is that the filter is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. A 100 meter hallway requires 10,000 positions to model the hallway to 1cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. A 100x100 m$^2$ courtyard requires 100,000,000 bins to get 1cm accuracy.
A third problem is that the filter is multimodal. In the last example we ended up with strong beliefs that the dog was in position 4 or 9. This is not always a problem. Particle filters, which we will study later, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, and 30% sure you are on Willow Avenue.
A fourth problem is that it requires a measurement of the change in state. We need a motion sensor to detect how much the dog moves. There are ways to work around this problem, but it would complicate the exposition of this chapter, so, given the aforementioned problems, I will not discuss it further.
With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues.
## Tracking and Control
We have been passively tracking an autonomously moving object. But consider this very similar problem. I am automating a warehouse and want to use robots to collect all of the items for a customer's order. Perhaps the easiest way to do this is to have the robots travel on a train track. I want to be able to send the robot a destination and have it go there. But train tracks and robot motors are imperfect. Wheel slippage and imperfect motors means that the robot is unlikely to travel to exactly the position you command. There is more than one robot, and we need to know where they all are so we do not cause them to crash.
So we add sensors. Perhaps we mount magnets on the track every few feet, and use a Hall sensor to count how many magnets are passed. If we count 10 magnets then the robot should be at the 10th magnet. Of course it is possible to either miss a magnet or to count it twice, so we have to accommodate some degree of error. We can use the code from the previous section to track our robot since magnet counting is very similar to doorway sensing.
But we are not done. We've learned to never throw information away. If you have information you should use it to improve your estimate. What information are we leaving out? We know what control inputs we are feeding to the wheels of the robot at each moment in time. For example, let's say that once a second we send a movement command to the robot - move left 1 unit, move right 1 unit, or stand still. If I send the command 'move left 1 unit' I expect that in one second from now the robot will be 1 unit to the left of where it is now. This is a simplification because I am not taking acceleration into account, but I am not trying to teach control theory. Wheels and motors are imperfect. The robot might end up 0.9 units away, or maybe 1.2 units.
Now the entire solution is clear. We assumed that the dog kept moving in whatever direction he was previously moving. That is a dubious assumption for my dog! Robots are far more predictable. Instead of making a dubious prediction based on assumption of behavior we will feed in the command that we sent to the robot! In other words, when we call `predict()` we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of that movement.
### Simulating the Train Behavior
We need to simulate an imperfect train. When we command it to move it will sometimes make a small mistake, and its sensor will sometimes return the incorrect value.
```
class Train(object):
    def __init__(self, track_len, kernel=[1.], sensor_accuracy=.9):
        self.track_len = track_len
        self.pos = 0
        self.kernel = kernel
        self.sensor_accuracy = sensor_accuracy

    def move(self, distance=1):
        """ move in the specified direction
        with some small chance of error"""
        self.pos += distance
        # insert random movement error according to kernel
        r = random.random()
        s = 0
        offset = -(len(self.kernel) - 1) / 2
        for k in self.kernel:
            s += k
            if r <= s:
                break
            offset += 1
        self.pos = int((self.pos + offset) % self.track_len)
        return self.pos

    def sense(self):
        pos = self.pos
        # insert random sensor error
        if random.random() > self.sensor_accuracy:
            if random.random() > 0.5:
                pos += 1
            else:
                pos -= 1
        return pos
```
With that we are ready to write the filter. We will put it in a function so that we can run it with different assumptions. I will assume that the robot always starts at the beginning of the track. The track is implemented as being 10 units long, but think of it as a track of length, say 10,000, with the magnet pattern repeated every 10 units. A length of 10 makes it easier to plot and inspect.
```
def train_filter(iterations, kernel, sensor_accuracy,
                 move_distance, do_print=True):
    track = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    prior = np.array([.9] + [0.01]*9)
    normalize(prior)
    robot = Train(len(track), kernel, sensor_accuracy)
    for i in range(iterations):
        # move the robot and then sense its position
        robot.move(distance=move_distance)
        m = robot.sense()
        if do_print:
            print('''time {}: pos {}, sensed {}, '''
                  '''at position {}'''.format(
                      i, robot.pos, m, track[robot.pos]))

        likelihood = lh_hallway(track, m, sensor_accuracy)
        posterior = update(likelihood, prior)
        index = np.argmax(posterior)

        if i < iterations - 1:
            prior = predict(posterior, move_distance, kernel)
        if do_print:
            print(''' predicted position is {}'''
                  ''' with confidence {:.4f}%:'''.format(
                      index, posterior[index]*100))

    book_plots.bar_plot(posterior)
    if do_print:
        print()
        print('final position is', robot.pos)
        index = np.argmax(posterior)
        print('''predicted position is {} with '''
              '''confidence {:.4f}%:'''.format(
                  index, posterior[index]*100))
```
Read the code and make sure you understand it. Now let's do a run with no sensor or movement error. If the code is correct it should be able to locate the robot with no error. The output is a bit tedious to read, but if you are at all unsure of how the update/predict cycle works make sure you read through it carefully to solidify your understanding.
```
import random
random.seed(3)
np.set_printoptions(precision=2, suppress=True, linewidth=60)
train_filter(4, kernel=[1.], sensor_accuracy=.999,
move_distance=4, do_print=True)
```
We can see that the code was able to perfectly track the robot so we should feel reasonably confident that the code is working. Now let's see how it fares with some errors.
```
random.seed(5)
train_filter(4, kernel=[.1, .8, .1], sensor_accuracy=.9,
move_distance=4, do_print=True)
```
There was a sensing error at time 1, but we are still quite confident in our position.
Now let's run a very long simulation and see how the filter responds to errors.
```
with figsize(y=5.5):
    for i in range(4):
        random.seed(3)
        plt.subplot(221+i)
        train_filter(148+i, kernel=[.1, .8, .1],
                     sensor_accuracy=.8,
                     move_distance=4, do_print=False)
        plt.title('iteration {}'.format(148+i))
```
We can see that there was a problem on iteration 149 as the confidence degrades. But within a few iterations the filter is able to correct itself and regain confidence in the estimated position.
## Bayes Theorem
We developed the math in this chapter merely by reasoning about the information we have at each moment. In the process we discovered [*Bayes Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given previous information. That is exactly what we have been doing in this chapter. With luck our code should match the Bayes Theorem equation!
We implemented the `update()` function with this probability calculation:
$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$
To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.
Bayes theorem is
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$
If you are not familiar with this notation, let's review. $P(A)$ means the probability of event $A$. If $A$ is the event of a fair coin landing heads, then $P(A) = 0.5$.
$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P(\mathtt{rain\_today} \mid \mathtt{rain\_yesterday})$.
In the Bayes theorem equation above $B$ is the *evidence*, $P(A)$ is the *prior*, $P(B \mid A)$ is the *likelihood*, and $P(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $Z$ for the measurement. Hence, we want to know $P(x_i \mid Z)$, that is, the probability of the dog being at $x_i$ given the measurement $Z$.
So, let's plug that into the equation and solve it.
$$P(x_i \mid Z) = \frac{P(Z \mid x_i) P(x_i)}{P(Z)}$$
That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $P(Z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $P(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:
```python
def update(likelihood, prior):
    posterior = prior * likelihood   # P(Z|x)*P(x)
    return normalize(posterior)
```
The last term to consider is the denominator $P(Z)$. This is the probability of getting the measurement $Z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes theorem.
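As a numeric sanity check (plain NumPy, re-deriving the hallway example from this chapter), we can evaluate the three terms of Bayes theorem explicitly and confirm the normalization is exactly the evidence $P(Z)$:

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
prior = np.array([.1] * 10)   # P(x_i): uniform belief over positions

# P(Z|x_i): sensing a door (z=1) is 75% likely to be correct
z, z_prob = 1, .75
likelihood = np.where(hallway == z, z_prob, 1. - z_prob)

evidence = np.sum(likelihood * prior)        # P(Z), the normalization
posterior = likelihood * prior / evidence    # Bayes theorem

print(evidence)        # ≈ 0.4
print(posterior[:3])   # doors get ≈ 0.1875, walls get ≈ 0.0625
```

Note that the likelihood only matters up to a constant factor: any scaling cancels in the division by the evidence, which is why `lh_hallway()` can use the ratio `z_prob/(1-z_prob)` rather than the raw probabilities.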
The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{\int P(B \mid A_j) P(A_j)\, \mathrm{d}A_j}\cdot$$
In practice the denominator can be fiendishly difficult to solve analytically (a recent opinion piece for the Royal Statistical Society [called it](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up) a "dog's breakfast" [8]). Filtering textbooks are filled with integral-laden equations which you cannot be expected to solve. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent.
## Total Probability Theorem
We now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is
$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$
That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation
```python
for i in range(N):
    for k in range(kN):
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
```
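To make that snippet runnable on its own, here is a self-contained version with invented inputs; the `predict` name and the example `belief`, `offset`, and `kernel` values are mine, while `N`, `kN`, and `width` follow the chapter's conventions:

```python
import numpy as np

def predict(prob_dist, offset, kernel):
    """Shift prob_dist by offset cells and blur it with kernel (circular boundary)."""
    N = len(prob_dist)
    kN = len(kernel)
    width = int((kN - 1) / 2)
    result = np.zeros(N)
    for i in range(N):
        for k in range(kN):
            index = (i + (width - k) - offset) % N
            result[i] += prob_dist[index] * kernel[k]
    return result

belief = np.array([0., 0., 1., 0., 0.])              # certain: the dog is in cell 2
prior = predict(belief, offset=1, kernel=[0.1, 0.8, 0.1])
print(prior)  # probability mass moved one cell right and spread out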
## Summary
The code is very short, but the result is impressive! We have implemented a form of a Bayesian filter. We have learned how to start with no information and derive information from noisy sensors. Even though the sensors in this chapter are very noisy (most real sensors are more than 80% accurate, for example) we quickly converge on the most likely position for our dog. We have learned how the predict step always degrades our knowledge, but the addition of another measurement, even when it might have noise in it, improves our knowledge, allowing us to converge on the most likely result.
This book is mostly about the Kalman filter. The math it uses is different, but the logic is exactly the same as used in this chapter. It uses Bayesian reasoning to form estimates from a combination of measurements and process models.
**If you can understand this chapter you will be able to understand and implement Kalman filters.** I cannot stress this enough. If anything is murky, go back and reread this chapter and play with the code. The rest of this book will build on the algorithms that we use here. If you don't understand why this filter works you will have little success with the rest of the material. However, if you grasp the fundamental insight - multiplying probabilities when we measure, and shifting probabilities when we predict leads to a converging solution - then after learning a bit of math you are ready to implement a Kalman filter.
## References
* [1] D. Fox, W. Burgard, and S. Thrun. "Monte Carlo localization: Efficient position estimation for mobile robots." In *Journal of Artificial Intelligence Research*, 1999.
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html
* [2] Dieter Fox, et al. "Bayesian Filters for Location Estimation". In *IEEE Pervasive Computing*, September 2003.
http://swarmlab.unimaas.nl/wp-content/uploads/2012/07/fox2003bayesian.pdf
* [3] Sebastian Thrun. "Artificial Intelligence for Robotics".
https://www.udacity.com/course/cs373
* [4] Khan Academy. "Introduction to the Convolution"
https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution
* [5] Wikipedia. "Convolution"
http://en.wikipedia.org/wiki/Convolution
* [6] Wikipedia. "Law of total probability"
http://en.wikipedia.org/wiki/Law_of_total_probability
* [7] Wikipedia. "Time Evolution"
https://en.wikipedia.org/wiki/Time_evolution
* [8] "We need to rethink how we teach statistics from the ground up"
http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up
```
import matplotlib.pyplot as plt
import iris
import iris.plot as iplt
import numpy
import iris.coord_categorisation
import re
%matplotlib inline
infile = '/g/data/ua6/DRSv2/CMIP5/CSIRO-Mk3-6-0/rcp85/mon/ocean/r1i1p1/tauuo/latest/tauuo_Omon_CSIRO-Mk3-6-0_rcp85_r1i1p1_200601-210012.nc'
cube = iris.load_cube(infile, 'surface_downward_x_stress')
print(cube)
def get_time_constraint(time_list):
    """Get the time constraint used for reading an iris cube."""
    if time_list:
        start_date, end_date = time_list
        date_pattern = '([0-9]{4})-([0-9]{1,2})-([0-9]{1,2})'
        assert re.search(date_pattern, start_date)
        assert re.search(date_pattern, end_date)
        if (start_date == end_date):
            year, month, day = start_date.split('-')
            time_constraint = iris.Constraint(time=iris.time.PartialDateTime(year=int(year), month=int(month), day=int(day)))
        else:
            start_year, start_month, start_day = start_date.split('-')
            end_year, end_month, end_day = end_date.split('-')
            time_constraint = iris.Constraint(time=lambda t: iris.time.PartialDateTime(year=int(start_year), month=int(start_month), day=int(start_day)) <= t.point <= iris.time.PartialDateTime(year=int(end_year), month=int(end_month), day=int(end_day)))
    else:
        time_constraint = iris.Constraint()
    return time_constraint
with iris.FUTURE.context(cell_datetime_objects=True):
    start_cube = cube.extract(get_time_constraint(['2006-01-01', '2025-12-31']))
    start_clim = start_cube.collapsed('time', iris.analysis.MEAN)
fig = plt.figure(figsize=[20,8])
iplt.contourf(start_clim, cmap='RdBu_r',
              levels=[-0.25, -0.2, -0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15, 0.20, 0.25],
              extend='both')
plt.gca().coastlines()
plt.gca().gridlines()
cbar = plt.colorbar()
cbar.set_label(str(start_clim.units))
plt.show()
with iris.FUTURE.context(cell_datetime_objects=True):
    end_cube = cube.extract(get_time_constraint(['2081-01-01', '2100-12-31']))
    end_clim = end_cube.collapsed('time', iris.analysis.MEAN)
fig = plt.figure(figsize=[20,8])
iplt.contourf(end_clim, cmap='RdBu_r',
              levels=[-0.25, -0.2, -0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15, 0.20, 0.25],
              extend='both')
plt.gca().coastlines()
plt.gca().gridlines()
cbar = plt.colorbar()
cbar.set_label(str(end_clim.units))
plt.show()
```
## Metrics
```
iris.coord_categorisation.add_year(cube, 'time')
annual_cube = cube.aggregated_by(['year'], iris.analysis.MEAN)
zonal_mean = annual_cube.collapsed('longitude', iris.analysis.MEAN)
print(zonal_mean)
iplt.plot(zonal_mean[0, :])
plt.show()
y_data = zonal_mean[0, :].data
x_data = zonal_mean.coord('latitude').points
y_data
x_data
plt.plot(x_data[130:150], y_data[130:150], 'o-')
from scipy.interpolate import interp1d
#xnew = numpy.linspace(x_data[0], x_data[-1], num=1000, endpoint=True)
f_data = interp1d(x_data, y_data, kind='cubic')
xnew = numpy.linspace(x_data[0], x_data[-1], num=1000, endpoint=True)
ynew = f_data(xnew)
plt.plot(x_data, y_data, 'o', xnew, ynew, '--')
plt.legend(['data', 'cubic'], loc='best')
plt.xlim(30, 55)
plt.ylim(0, 0.07)
plt.show()
```
Interpolation still noisy... need to fit a smooth curve instead.
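One hedged option for the smooth fit (not necessarily what this notebook settles on) is a smoothing spline, which approximates rather than interpolates the noisy points; the synthetic profile and the choice of the smoothing factor `s` below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

np.random.seed(0)
# Synthetic noisy profile standing in for the zonal-mean wind stress (values invented)
x = np.linspace(-80, 80, 160)
truth = 0.05 * np.exp(-((x + 50) / 15) ** 2)
y = truth + np.random.normal(0, 0.005, x.size)

# s is the target residual sum of squares; n * noise_variance is a common starting point
spline = UnivariateSpline(x, y, s=x.size * 0.005 ** 2)
y_smooth = spline(x)
```

Unlike `interp1d`, the spline is free to pass near (not through) each point, so the wiggles from the noise are suppressed.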
```
xnew = numpy.linspace(x_data[0], x_data[-1], num=1000, endpoint=True)
ynew = f_data(xnew)
plt.plot(x_data, y_data, 'o', xnew, ynew, '--')
plt.legend(['data', 'cubic'], loc='best')
plt.axhline()
plt.show()
type(ynew)
def wind_stress_metrics(xdata, ydata, hemisphere, direction):
    """Calculate location and magnitude metric for surface wind stress."""
    assert hemisphere in ['nh', 'sh']
    assert direction in ['westerly', 'easterly']
    # assert regularly spaced y values
    if (hemisphere == 'sh') and (direction == 'westerly'):
        print(hemisphere, direction)
        selection = (xdata < 0) & (ydata > 0)
    elif (hemisphere == 'nh') and (direction == 'westerly'):
        print(hemisphere, direction)
        selection = (xdata > 0) & (ydata > 0)
    elif (hemisphere == 'sh') and (direction == 'easterly'):
        print(hemisphere, direction)
        selection = (xdata < 0) & (xdata > -50) & (ydata < 0)
    elif (hemisphere == 'nh') and (direction == 'easterly'):
        print(hemisphere, direction)
        selection = (xdata > 0) & (xdata < 40) & (ydata < 0)
    x_filtered = numpy.where(selection, xdata, 0)
    y_filtered = numpy.where(selection, ydata, 0)
    location = numpy.sum(x_filtered * y_filtered) / numpy.sum(y_filtered)  # centre of gravity
    magnitude = numpy.sum(y_filtered)
    return location, magnitude
```
The centre of mass/gravity idea came from [this post](http://ifcuriousthenlearn.com/blog/2015/06/18/center-of-mass/).
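The centre-of-gravity calculation is easy to sanity-check on a toy profile (values invented): a symmetric peak's location should come out at its centre.

```python
import numpy as np

x = np.array([40., 45., 50., 55., 60.])   # latitude points (made up)
y = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # wind stress profile, symmetric about 50

location = np.sum(x * y) / np.sum(y)  # weighted mean, i.e. centre of gravity
magnitude = np.sum(y)
```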
```
sw_loc, sw_mag = wind_stress_metrics(xnew, ynew, 'sh', 'westerly')
print(sw_loc, sw_mag)
nw_loc, nw_mag = wind_stress_metrics(xnew, ynew, 'nh', 'westerly')
print(nw_loc, nw_mag)
se_loc, se_mag = wind_stress_metrics(xnew, ynew, 'sh', 'easterly')
print(se_loc, se_mag)
ne_loc, ne_mag = wind_stress_metrics(xnew, ynew, 'nh', 'easterly')
print(ne_loc, ne_mag)
xnew = numpy.linspace(x_data[0], x_data[-1], num=1000, endpoint=True)
ynew = f_data(xnew)
plt.plot(x_data, y_data, 'o', xnew, ynew, '--')
plt.legend(['data', 'cubic'], loc='best')
plt.axhline()
plt.axvline(sw_loc)
plt.axvline(nw_loc)
plt.axvline(se_loc)
plt.axvline(ne_loc)
plt.show()
```
```
import os
%env DEVICE = CPU
%env MODEL=/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml
"""Restricted Zone Notifier."""
"""
Copyright (c) 2018 Intel Corporation.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import os
import sys
import json
import time
import socket
import cv2
import logging as log
import paho.mqtt.client as mqtt
from threading import Thread
from collections import namedtuple
from argparse import ArgumentParser
from inference import Network
# Assemblyinfo contains information about assembly line
MyStruct = namedtuple("assemblyinfo", "safe, alert")
INFO = MyStruct(True, False)
# MQTT server environment variables
HOSTNAME = socket.gethostname()
IPADDRESS = socket.gethostbyname(HOSTNAME)
TOPIC = "Restricted_zone_python"
MQTT_HOST = IPADDRESS
MQTT_PORT = 1883
MQTT_KEEPALIVE_INTERVAL = 60
# Global variables
accepted_devices = ['CPU', 'GPU', 'MYRIAD', 'HETERO:FPGA,CPU', 'HDDL']
TARGET_DEVICE = 'CPU'
is_async_mode = True
CONFIG_FILE = '../resources/config.json'
# Flag to control background thread
KEEP_RUNNING = True
DELAY = 5
def ssd_out(res, initial_wh, selected_region):
    """
    Parse SSD output.
    :param res: Detection results
    :param initial_wh: Initial width and height of the frame
    :param selected_region: Selected region coordinates
    :return: None
    """
    global INFO
    person = []
    INFO = INFO._replace(safe=True)
    INFO = INFO._replace(alert=False)
    for obj in res[0][0]:
        # Draw only objects when probability more than specified threshold
        if obj[2] > prob_threshold:
            xmin = int(obj[3] * initial_wh[0])
            ymin = int(obj[4] * initial_wh[1])
            xmax = int(obj[5] * initial_wh[0])
            ymax = int(obj[6] * initial_wh[1])
            person.append([xmin, ymin, xmax, ymax])
    for p in person:
        # area_of_person gives area of the detected person
        area_of_person = (p[2] - p[0]) * (p[3] - p[1])
        x_max = max(p[0], selected_region[0])
        x_min = min(p[2], selected_region[0] + selected_region[2])
        y_min = min(p[3], selected_region[1] + selected_region[3])
        y_max = max(p[1], selected_region[1])
        point_x = x_min - x_max
        point_y = y_min - y_max
        # area_of_intersection gives area of intersection of the
        # detected person and the selected area
        area_of_intersection = point_x * point_y
        if point_x < 0 or point_y < 0:
            continue
        else:
            if area_of_person > area_of_intersection:
                # assembly line area flags
                INFO = INFO._replace(safe=True)
                INFO = INFO._replace(alert=False)
            else:
                # assembly line area flags
                INFO = INFO._replace(safe=False)
                INFO = INFO._replace(alert=True)
def message_runner():
    """
    Publish worker status to MQTT topic.
    Pauses for rate second(s) between updates.
    :return: None
    """
    while KEEP_RUNNING:
        s = json.dumps({"Worker safe": INFO.safe, "Alert": INFO.alert})
        time.sleep(rate)
        CLIENT.publish(TOPIC, payload=s)
def main():
    """
    Load the network and parse the output.
    :return: None
    """
    global CLIENT
    global KEEP_RUNNING
    global DELAY
    global SIG_CAUGHT
    global prob_threshold
    global rate
    global TARGET_DEVICE
    global is_async_mode
    CLIENT = mqtt.Client()
    CLIENT.connect(MQTT_HOST, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL)
    CLIENT.subscribe(TOPIC)
    try:
        pointx = int(os.environ['POINTX'])
        pointy = int(os.environ['POINTY'])
        width = int(os.environ['WIDTH'])
        height = int(os.environ['HEIGHT'])
    except KeyError:
        pointx = 0
        pointy = 0
        width = 0
        height = 0
    try:
        # Number of seconds between data updates to MQTT server
        rate = float(os.environ['RATE'])
    except KeyError:
        rate = 1
    try:
        # Probability threshold for detections filtering
        prob_threshold = float(os.environ['PROB_THRESHOLD'])
    except KeyError:
        prob_threshold = 0.7
    if 'DEVICE' in os.environ.keys():
        TARGET_DEVICE = os.environ['DEVICE']
    if 'MULTI' not in TARGET_DEVICE and TARGET_DEVICE not in accepted_devices:
        print("Unsupported device: " + TARGET_DEVICE)
        sys.exit(2)
    elif 'MULTI' in TARGET_DEVICE:
        target_devices = TARGET_DEVICE.split(':')[1].split(',')
        for multi_device in target_devices:
            if multi_device not in accepted_devices:
                print("Unsupported device: " + TARGET_DEVICE)
                sys.exit(2)
    cpu_extension = os.environ['CPU_EXTENSION'] if 'CPU_EXTENSION' in os.environ.keys() else None
    model = os.environ["MODEL"]
    if 'FLAG' in os.environ.keys():
        async_mode = os.environ['FLAG']
        if async_mode == "sync":
            is_async_mode = False
        else:
            is_async_mode = True
    log.basicConfig(format="[ %(levelname)s ] %(message)s",
                    level=log.INFO, stream=sys.stdout)
    logger = log.getLogger()
    render_time = 0
    roi_x = pointx
    roi_y = pointy
    roi_w = width
    roi_h = height
    assert os.path.isfile(CONFIG_FILE), "{} file doesn't exist".format(CONFIG_FILE)
    config = json.loads(open(CONFIG_FILE).read())
    for idx, item in enumerate(config['inputs']):
        if item['video'].isdigit():
            input_stream = int(item['video'])
        else:
            input_stream = item['video']
        cap = cv2.VideoCapture(input_stream)
        if not cap.isOpened():
            logger.error("ERROR! Unable to open video source")
            sys.exit(1)
    # Init inference request IDs
    cur_request_id = 0
    next_request_id = 1
    # Initialise the class
    infer_network = Network()
    # Load the network to IE plugin to get shape of input layer
    n, c, h, w = infer_network.load_model(model, TARGET_DEVICE, 1, 1, 2, cpu_extension)[1]
    message_thread = Thread(target=message_runner, args=())
    message_thread.setDaemon(True)
    message_thread.start()
    if is_async_mode:
        print("Application running in async mode...")
    else:
        print("Application running in sync mode...")
    ret, frame = cap.read()
    while ret:
        ret, next_frame = cap.read()
        if not ret:
            KEEP_RUNNING = False
            break
        initial_wh = [cap.get(3), cap.get(4)]
        if next_frame is None:
            KEEP_RUNNING = False
            log.error("ERROR! blank FRAME grabbed")
            break
        # If either default values or negative numbers are given,
        # then we will default to start of the FRAME
        if roi_x <= 0 or roi_y <= 0:
            roi_x = 0
            roi_y = 0
        if roi_w <= 0:
            roi_w = next_frame.shape[1]
        if roi_h <= 0:
            roi_h = next_frame.shape[0]
        key_pressed = cv2.waitKey(1)
        # 'c' key pressed
        if key_pressed == 99:
            # Give operator chance to change the area
            # Select rectangle from left upper corner, don't display crosshair
            ROI = cv2.selectROI("Assembly Selection", frame, True, False)
            print("Assembly Area Selection: -x = {}, -y = {}, -w = {},"
                  " -h = {}".format(ROI[0], ROI[1], ROI[2], ROI[3]))
            roi_x = ROI[0]
            roi_y = ROI[1]
            roi_w = ROI[2]
            roi_h = ROI[3]
            cv2.destroyAllWindows()
        cv2.rectangle(frame, (roi_x, roi_y),
                      (roi_x + roi_w, roi_y + roi_h), (0, 0, 255), 2)
        selected_region = [roi_x, roi_y, roi_w, roi_h]
        in_frame_fd = cv2.resize(next_frame, (w, h))
        # Change data layout from HWC to CHW
        in_frame_fd = in_frame_fd.transpose((2, 0, 1))
        in_frame_fd = in_frame_fd.reshape((n, c, h, w))
        # Start asynchronous inference for specified request.
        inf_start = time.time()
        if is_async_mode:
            # Async enabled and only one video capture
            infer_network.exec_net(next_request_id, in_frame_fd)
        else:
            # Async disabled
            infer_network.exec_net(cur_request_id, in_frame_fd)
        # Wait for the result
        infer_network.wait(cur_request_id)
        det_time = time.time() - inf_start
        # Results of the output layer of the network
        res = infer_network.get_output(cur_request_id)
        # Parse SSD output
        ssd_out(res, initial_wh, selected_region)
        # Draw performance stats
        inf_time_message = "Inference time: N/A for async mode" if is_async_mode else \
            "Inference time: {:.3f} ms".format(det_time * 1000)
        render_time_message = "OpenCV rendering time: {:.3f} ms". \
            format(render_time * 1000)
        if not INFO.safe:
            warning = "HUMAN IN ASSEMBLY AREA: PAUSE THE MACHINE!"
            cv2.putText(frame, warning, (15, 100), cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 0, 255), 2)
        log_message = "Async mode is on." if is_async_mode else \
            "Async mode is off."
        cv2.putText(frame, log_message, (15, 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
        cv2.putText(frame, inf_time_message, (15, 35), cv2.FONT_HERSHEY_COMPLEX, 0.5, (255, 255, 255), 1)
        cv2.putText(frame, render_time_message, (15, 55), cv2.FONT_HERSHEY_COMPLEX, 0.5, (255, 255, 255), 1)
        cv2.putText(frame, "Worker Safe: {}".format(INFO.safe), (15, 75), cv2.FONT_HERSHEY_COMPLEX, 0.5, (255, 255, 255), 1)
        render_start = time.time()
        cv2.imshow("Restricted Zone Notifier", frame)
        render_end = time.time()
        render_time = render_end - render_start
        frame = next_frame
        if key_pressed == 27:
            print("Attempting to stop background threads")
            KEEP_RUNNING = False
            break
        # Tab key pressed
        if key_pressed == 9:
            is_async_mode = not is_async_mode
            print("Switched to {} mode".format("async" if is_async_mode else "sync"))
        if is_async_mode:
            # Swap infer request IDs
            cur_request_id, next_request_id = next_request_id, cur_request_id
    infer_network.clean()
    message_thread.join()
    cap.release()
    cv2.destroyAllWindows()
    CLIENT.disconnect()


if __name__ == '__main__':
    main()
```
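The region-overlap arithmetic inside `ssd_out` is easy to get wrong, so here is the same min/max intersection idea exercised standalone; `intersection_area` is a hypothetical helper of mine that takes corner coordinates `(xmin, ymin, xmax, ymax)` rather than the notebook's `(x, y, w, h)` region, and the boxes are invented:

```python
def intersection_area(box, region):
    """Overlap area of two (xmin, ymin, xmax, ymax) rectangles; 0 if disjoint."""
    x_min = max(box[0], region[0])
    y_min = max(box[1], region[1])
    x_max = min(box[2], region[2])
    y_max = min(box[3], region[3])
    w, h = x_max - x_min, y_max - y_min
    return w * h if (w > 0 and h > 0) else 0

assert intersection_area((0, 0, 10, 10), (5, 5, 20, 20)) == 25  # partial overlap
assert intersection_area((0, 0, 4, 4), (5, 5, 20, 20)) == 0     # disjoint boxes
```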
# Wasserstein GAN
<img src="https://miro.medium.com/max/3200/1*M_YipQF_oC6owsU1VVrfhg.jpeg" width="800" height="400">
##### Importing libraries
```
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
from PIL import Image
from time import time
import pandas as pd
import argparse
import math
import sys
import re
import itertools
from sklearn.model_selection import train_test_split
import torchvision.transforms as transforms
from torchvision.utils import save_image
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch
import os
os.chdir('C:/Users/Nicolas/Documents/Data/Faces')
```
##### Function to sort files
```
def sorted_alphanumeric(data):
    convert = lambda text: int(text) if text.isdigit() else text.lower()
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
    return sorted(data, key=alphanum_key)
```
##### Loading the 800 women
```
def load_women():
    faces = pd.read_csv('800_women.csv', header=None).values
    faces = faces.ravel().tolist()
    return faces
faces = load_women()
y = np.repeat(1, len(faces))
```
##### Removing problematic target names
```
# faces = [i for i in files if (i[-34] == '1') and len(i[-37:-35].strip('\\').strip('d')) == 2 ] # MEN
# y = [i[-34] for i in files if (i[-34] == '1') and len(i[-37:-35].strip('\\').strip('d')) > 1 ] # MEN
assert len(y) == len(faces), 'The X and Y are not of the same length!'
```
#### This is the shape width/height
```
dim = 60
```
#### Cropping function
```
def crop(img):
    if img.shape[0] < img.shape[1]:
        x = img.shape[0]
        y = img.shape[1]
        crop_img = img[:, int(y/2-x/2):int(y/2+x/2)]
    else:
        x = img.shape[1]
        y = img.shape[0]
        crop_img = img[int(y/2-x/2):int(y/2+x/2), :]
    return crop_img
```
##### Loading and cropping images
```
print('Scaling...', end='')
start = time()
x = []
num_to_load = len(faces) # int(len(faces)/5)
for ix, file in enumerate(faces[:num_to_load]):
    image = plt.imread(file, 'jpg')
    image = Image.fromarray(image).resize((dim, dim)).convert('L')
    image = crop(np.array(image))
    x.append(image)
print(f'\rDone. {int(time() - start)} seconds')
```
##### Turning the pictures into arrays
```
x = np.array(x, dtype=np.float32).reshape(-1, 1, 60, 60)
y = np.array(y, dtype=np.float32)
labels = y.copy()
```
##### Turning the targets into a 2D matrix
```
assert x.ndim == 4, 'The input is the wrong shape!'
yy, xx = y.nbytes, x.nbytes
print(f'The size of X is {xx:,} bytes and the size of Y is {yy:,} bytes.')
files, faces = None, None
```
##### Displaying the pictures
```
fig = plt.figure(figsize=(12, 12))
for i in range(1, 5):
    plt.subplot(1, 5, i)
    rand = np.random.randint(0, x.shape[0])
    ax = plt.imshow(x[rand][0, :, :], cmap='gray')
    plt.title('<Women>')
    yticks = plt.xticks([])
    yticks = plt.yticks([])
print('Scaling...', end='')
image_size = x.shape[1] * x.shape[1]
x = (x.astype('float32') - 127.5) / 127.5
print('\rDone. ')
if torch.cuda.is_available():
    x = torch.from_numpy(x)
    y = torch.from_numpy(y)
    print('Tensors successfully flushed to CUDA.')
else:
    print('CUDA not available!')
```
##### Making a dataset class
```
class Face():
    def __init__(self):
        self.len = x.shape[0]
        self.x = x
        self.y = y

    def __getitem__(self, index):
        return x[index], y[index].unsqueeze(0)

    def __len__(self):
        return self.len
```
##### Instantiating the class
```
faces = Face()
```
##### Parsing the args
```
parser = argparse.ArgumentParser()
parser.add_argument("--n_epochs", type=int, default=1_000, help="number of epochs of training")
parser.add_argument("--batch_size", type=int, default=128, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.00005, help="learning rate")
parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
parser.add_argument("--latent_dim", type=int, default=32, help="dimensionality of the latent space")
parser.add_argument("--img_size", type=int, default=60, help="size of each image dimension")
parser.add_argument("--channels", type=int, default=1, help="number of image channels")
parser.add_argument("--n_critic", type=int, default=5, help="number of training steps for discriminator per iter")
parser.add_argument("--clip_value", type=float, default=0.005, help="lower and upper clip value for disc. weights")
parser.add_argument("--sample_interval", type=int, default=1, help="interval betwen image samples")
opt, unknown = parser.parse_known_args()
print(opt)
```
#### Making the generator
```
img_shape = (opt.channels, opt.img_size, opt.img_size)
cuda = True if torch.cuda.is_available() else False
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()

        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(opt.latent_dim, 128, normalize=False),
            *block(128, 256),
            *block(256, 512),
            *block(512, 1024),
            nn.Linear(1024, int(np.prod(img_shape))),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.shape[0], *img_shape)
        return img
```
#### Making the discriminator
```
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(int(np.prod(img_shape)), 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, img):
        img_flat = img.view(img.shape[0], -1)
        validity = self.model(img_flat)
        return validity
# Initialize generator and discriminator
generator = Generator()
discriminator = Discriminator()
if cuda:
    generator.cuda()
    discriminator.cuda()
```
#### Loading the trained models
```
generator.load_state_dict(torch.load('deep_conv_gan_generator'))
discriminator.load_state_dict(torch.load('deep_conv_gan_discriminator'))
```
#### Setting up the dataloader
```
# Configure data loader
dataloader = torch.utils.data.DataLoader(faces, batch_size=opt.batch_size, shuffle=True)
```
#### Making the optimizers
```
# Optimizers
optimizer_G = torch.optim.RMSprop(generator.parameters(), lr=opt.lr)
optimizer_D = torch.optim.RMSprop(discriminator.parameters(), lr=opt.lr)
Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
```
#### Training the model
```
batches_done = 0
if not os.path.isdir('wsgan'):
    os.mkdir('wsgan')
for epoch in range(1, opt.n_epochs + 1):
    break  # model is already trained!
    for i, (imgs, _) in enumerate(dataloader):
        # Configure input
        real_imgs = Variable(imgs.type(Tensor))

        # ---------------------
        #  Train Discriminator
        # ---------------------
        optimizer_D.zero_grad()
        # Sample noise as generator input
        z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], opt.latent_dim))))
        # Generate a batch of images
        fake_imgs = generator(z).detach()
        # Adversarial loss
        loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs))
        loss_D.backward()
        optimizer_D.step()

        # Clip weights of discriminator
        for p in discriminator.parameters():
            p.data.clamp_(-opt.clip_value, opt.clip_value)

        # Train the generator every n_critic iterations
        if i % opt.n_critic == 0:
            # -----------------
            #  Train Generator
            # -----------------
            optimizer_G.zero_grad()
            # Generate a batch of images
            gen_imgs = generator(z)
            # Adversarial loss
            loss_G = -torch.mean(discriminator(gen_imgs))
            loss_G.backward()
            optimizer_G.step()

        batches_done = epoch * len(dataloader) + i + 1

    if epoch >= 500 and epoch % 100 == 0:
        val = input("\nContinue training? [y/n]: ")
        print()
        if val in ('y', 'yes'):
            val = True
            pass
        elif val in ('n', 'no'):
            break
        else:
            pass
    if batches_done % opt.sample_interval == 0:
        save_image(gen_imgs.data[:25], "wsgan/%d.png" % batches_done, nrow=5, normalize=True)
    if epoch % 50 == 0:
        print(
            "[Epoch %d/%d] [D loss: %f] [G loss: %f]"
            % (epoch, opt.n_epochs, loss_D.item(), loss_G.item())
        )
```
##### Saving the models
```
torch.save(generator.state_dict(), 'wasserstein_gan_generator')
torch.save(discriminator.state_dict(), 'wasserstein_gan_discriminator')
```
##### Function to save images
```
def sample_image(directory, n_row, batches_done):
    """Saves a grid of generated digits"""
    # Sample noise
    z = Variable(Tensor(np.random.normal(0, 1, (n_row ** 2, opt.latent_dim))))
    gen_imgs = generator(z)
    save_image(gen_imgs.data, "%s/%d.png" % (directory, batches_done), nrow=n_row, normalize=True)
```
##### Generating 25,000 pictures
```
if not os.path.isdir('wsgan_800_women'):
    os.mkdir('wsgan_800_women')

images = 0
for epoch in range(1, 2_00 + 1):  # make it 200!
    for i, (imgs, _) in enumerate(dataloader):
        with torch.no_grad():
            # Adversarial ground truths
            valid = Variable(Tensor(imgs.shape[0], 1).fill_(1.0), requires_grad=False)
            fake = Variable(Tensor(imgs.shape[0], 1).fill_(0.0), requires_grad=False)
            # Configure input
            real_imgs = Variable(imgs.type(Tensor))
            batches_done = epoch * len(dataloader) + i
            sample_image('wsgan_800_women', n_row=5, batches_done=batches_done)
            images += 25
            if images % 5_000 == 0:
                print(f'Pictures created: {images:,}')
    if len(os.listdir(os.path.join(os.getcwd(), 'wsgan_800_women'))) >= 1_000:
        print('\n25,000 images successfully generated.')
        break
```
##### Visualizing the generated images
```
images = []
for file_name in sorted_alphanumeric(glob('wsgan_800_women/*.png')):
    if file_name.endswith('.png'):
        file_path = os.path.join(file_name)
        images.append(file_path)
picture = plt.imread(images[-1])
plt.figure(figsize=(6, 6))
plt.imshow(picture)
plt.xticks([]), plt.yticks([])
plt.title('Generated Faces')
plt.show()
```
```
# Code preamble: we'll need some packages to display the information in the notebook.
# Feel free to ignore this cell unless you're running the code.
import folium # Map visualizations
import requests # Basic http requests
import json # For handling API return data
import pandas as pd # Pandas is a data manipulation and analysis library
api_base = "https://api.resourcewatch.org/v1"
def show_layer(layer_id, year, provider):
    tiles_url = f"{api_base}/layer/{layer_id}/tile/{provider}/{{z}}/{{x}}/{{y}}?year={str(year)}"
    attribution = "ResourceWatch & Vizzuality, 2018"
    map_object = folium.Map(tiles=tiles_url, attr=attribution, max_zoom=18, min_zoom=2)
    return map_object
```
# NEX-GDDP & LOCA indicators calculations
As part of the development of PREP we processed data from two climate downscaling datasets: [NEX-GDDP](https://nex.nasa.gov/nex/projects/1356/) (NASA Earth eXchange Global Daily Downscaled Projections) and [LOCA](http://loca.ucsd.edu/) (LOcalized Constructed Analogs). Both these models are *downscaled climate scenarios*, where coarse-resolution climate models are applied to a finer spatial resolution grid. GDDP data is offered at the global scale, while LOCA data covers the contiguous United States. Data access is offered through their homepages (linked above) and through several additional data cloud repositories --[Amazon AWS](https://registry.opendata.aws/nasanex/) and the [OpenNEX initiative](https://nex.nasa.gov/nex/static/htdocs/site/extra/opennex/) among them. For ease of use, we'll illustrate any examples with the GDDP data available in [Google Earth Engine](https://earthengine.google.com/).
The general data structure is similar for both datasets: daily measures of three forecasted variables (minimum and maximum daily temperatures, daily precipitation) are available for two of the scenarios of the Representative Concentration Pathways (RCPs), the RCP 4.5 and RCP 8.5. Roughly, these correspond to different levels of radiative forcing due to greenhouse emissions. The former scenario's level of emissions would peak at 2040 and then decline, while the latter would continue to rise throughout the 21st century. Each of these scenarios is comprised of forecasts for a set of models (21 for GDDP, 31 for LOCA), daily, from 2006 to 2100. A historical series is also included, where these models are applied to the historical forcing conditions, from 1950 to 2006. This results in a massive amount of data: about 12 terabytes of compressed netcdf files are available just for GDDP data. This amount of data is unwieldy, so some processing is needed to reduce it into a smaller, simpler dataset. We have applied two processes on the data: first, we are calculating several climate indicators on the data, in addition to the base variables. These indicators are then used to create an ensemble measure (an average of the different models) and their 25th and 75th percentiles. These are presented at two different temporal resolutions: decadal averages and three 30-year period averages.
## The indicators
This information is present in the [PREP website](https://prepdata.org). We'll query the RW API (which powers PREP) to obtain the datasets and their layers. You can check out the actual code we've ran [here](https://github.com/resource-watch/nexgddp-dataprep/tree/develop).
```
nex_datasets = json.loads(requests.get(f"{api_base}/dataset?provider=nexgddp&page[size]=1000&includes=layer").text)['data']
loca_datasets = json.loads(requests.get(f"{api_base}/dataset?provider=loca&page[size]=1000&includes=layer").text)['data']
get_data = lambda dset: (
    dset['attributes']['name'],
    dset['attributes']['tableName'],
    next(iter(dset['attributes']['layer']), {"id": None})['id']
)
df = pd.DataFrame([
    *[get_data(dset) for dset in nex_datasets],
    *[get_data(dset) for dset in loca_datasets]
])
df.columns = ['description', 'tableName', 'layerId']
df
```
### Calculating an indicator
Given that the data is expressed in several dimensions, the data has to be reduced across these dimensions. This is done in a certain order. Consider the format of the 'raw data': daily maps from 1951 to 2100 for each model. The first step is to extract a single year of the raw data, for a single model. It's from this data from which we calculate the indicator --in this case, the average maximum temperature.

The output we are interested in is still at a lower temporal resolution than the data we have now. If we were to calculate an indicator for the decadal temporal resolution dataset, we would take a whole decade of the indicator, as calculated above, and average it again.

It's from these averaged indicators from where we can calculate the average and the 25th and 75th percentiles. The final measure --the one that can be seen on the web-- is the average of the indicators *across models*. This is known as an ensemble measure.

### Temperature indicators
#### Maximum daily temperature (tasmax)
The maximum daily temperature is already present in the 'raw' datasets. These values are averaged per temporal unit (decadal, 30y) and model, as described above.
```
tasmax_layer = show_layer("964d1388-4490-487d-b9cc-cd282e4d3d28", 1971, "nexgddp")
tasmax_layer
```
#### Minimum daily temperature
As with the indicator above, no processing is needed for this indicator other than the averaging.
```
tasmin_layer = show_layer("c3bb62e8-2d50-4ad2-9ca5-8ce02bed1de5", 1971, "nexgddp")
tasmin_layer
```
#### Average daily temperature
We construct the 'average daily temperature' from the average of the maximum and minimum daily temperatures. This is the first step in the processing -- we first construct a 'tasavg' variable, and then proceed with the rest of the analysis as usual.
```
tasavg_layer = show_layer("02e9f747-7c20-4fc8-a773-a5135f24cc91", 1971, "nexgddp")
tasavg_layer
```
#### Heating degree days
[Heating Degree Days (HDDs)](https://en.wikipedia.org/wiki/Heating_degree_day) are a measure of the demand for energy for heating. They are defined in terms of a fixed baseline, which in our case is 65F. The measure is the accumulated difference of the *average* temperature (in kelvin) from this baseline, over days on which the average temperature does not reach the baseline (i.e. on a day hotter than 65F, zero heating degree days are accumulated).
```
hdd_layer = show_layer("8bc10da3-e610-4105-9f4e-8ebfb1725874", 1971, "nexgddp")
hdd_layer
```
#### Cooling degree days
In the same vein as HDDs, Cooling Degree Days (CDDs) are the accumulated degrees by which the average temperature exceeds the baseline (again, 65F) over a year. They are a measure of energy consumption for cooling on hot days.
```
cdd_layer = show_layer("a632a688-a181-48b5-93bc-d230e24550d9", 1971, "nexgddp")
cdd_layer
```
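The HDD and CDD definitions above reduce to clipping the daily difference from the baseline and summing over the year. A sketch for a single pixel and year (the 65F-to-Kelvin conversion and the synthetic temperatures are assumptions of this example):

```python
import numpy as np

# 65F expressed in Kelvin; note that temperature *differences* in K
# equal differences in Celsius.
BASELINE_K = (65.0 - 32.0) * 5.0 / 9.0 + 273.15  # about 291.48 K

rng = np.random.default_rng(2)
tasavg = rng.normal(290.0, 8.0, 365)  # one year of daily average temps, K

# HDD: accumulated shortfall below the baseline (cold days only);
# CDD: accumulated excess above the baseline (hot days only).
hdd = np.clip(BASELINE_K - tasavg, 0, None).sum()
cdd = np.clip(tasavg - BASELINE_K, 0, None).sum()
```

A useful sanity check: hdd - cdd always equals the total (signed) shortfall, 365 * baseline - sum(tasavg).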
#### Extreme heat days
The number of extreme heat days in a year is defined as the count of days with a maximum temperature higher than the 99th percentile of the baseline. This baseline is calculated per model and per raster pixel, and is the temperature for which 99% of measures from 1971 to 2000 fall below --any temperature higher than this is considered extreme.

```
xh_layer = show_layer("2266fa97-e19c-4056-a1a9-4d4f29dd178e", 1971, "nexgddp")
xh_layer
# Notice the large difference
xh_layer_2 = show_layer("2266fa97-e19c-4056-a1a9-4d4f29dd178e", 2051, "nexgddp")
xh_layer_2
```
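The extreme-heat rule above -- count days exceeding the 99th percentile of the 1971-2000 baseline, computed per model and per pixel -- can be sketched for a single pixel and model (synthetic temperatures, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
# Daily maximum temperatures (K) for one pixel and one model.
baseline_days = rng.normal(300.0, 6.0, 30 * 365)   # 1971-2000 baseline period
year_days = rng.normal(303.0, 6.0, 365)            # a warmer target year

# Threshold: the temperature below which 99% of baseline days fall.
threshold = np.percentile(baseline_days, 99)

# Any day above the threshold counts as an extreme heat day.
extreme_heat_days = int((year_days > threshold).sum())
```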
#### Frost free season
The frost free season is the longest streak of days (measured in *number of days*) above 0C per year.
```
ffs_layer = show_layer("83ec85e4-997b-4613-bf9e-2301ba6d7b63", 1971, "nexgddp")
ffs_layer
```
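The frost-free season is a longest-run computation over the daily minimum temperature; a minimal sketch:

```python
import numpy as np

def frost_free_season(tasmin_k):
    """Longest streak of consecutive days with minimum temperature above 0C."""
    above = tasmin_k > 273.15
    longest = current = 0
    for day_above in above:
        current = current + 1 if day_above else 0
        longest = max(longest, current)
    return longest

# Illustrative year: frost until day 100, frost-free until day 280, frost after.
temps = np.full(365, 270.0)
temps[100:280] = 280.0
```

Here `frost_free_season(temps)` returns 180, the length of the single frost-free stretch.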
### Precipitation indicators
#### Cumulative precipitation
Precipitation is given in kg m^-2 s^-1, in both solid and liquid phases and from all types of clouds. The calculated measure is the accumulated yearly precipitation mass per square meter -- it is converted to mm in the front-end.
```
cummpr_layer = show_layer("56e19aef-3194-4aad-8df0-9bb9064ac8e6", 1971, "nexgddp")
cummpr_layer
```
#### Extreme precipitation days
Calculated with the same method as the extreme heat days indicator, but the baseline is constructed from the precipitation data.
```
xpr_layer = show_layer("7e76e90f-4c35-48fb-9604-3c187a28723b", 1971, "nexgddp")
xpr_layer
```
#### Dry spells
Average count of 5-day periods without precipitation per year. Longer dry periods are counted as consecutive dry spells, and any excess over a multiple of 5 days is added as a 'fractional' dry spell.
```
dry_layer = show_layer("72996b8f-1f59-4d1d-b48b-490d72677473", 1971, "nexgddp")
dry_layer
```
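One reading of the dry-spell definition above is that a dry streak of n days contributes n/5 spells (full 5-day blocks plus a fractional remainder). A sketch under that reading -- whether streaks shorter than five days count fractionally is an assumption here:

```python
import numpy as np

def dry_spells(precip, wet_threshold=0.0):
    """Count 5-day dry spells in a year of daily precipitation.

    Each dry streak of n consecutive days contributes n/5 spells:
    full 5-day blocks plus a fractional remainder.
    """
    spells = 0.0
    streak = 0
    for p in precip:
        if p <= wet_threshold:
            streak += 1
        else:
            spells += streak / 5.0
            streak = 0
    return spells + streak / 5.0  # close out a streak ending at year end

# A 12-day dry streak = 2 full spells plus a 2/5 fractional spell = 2.4.
precip = np.ones(365)
precip[50:62] = 0.0
```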
## 1. Which college majors will pay the bills?
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_584/img/salary.png" width="400" align="center"></p>
<p>Wondering if that Philosophy major will really help you pay the bills? Think you're set with an Engineering degree? Choosing a college major is a complex decision evaluating personal interest, difficulty, and career prospects. Your first paycheck right out of college might say a lot about your salary potential by mid-career. Whether you're in school or navigating the postgrad world, join me as we explore the short and long term financial implications of this <em>major</em> decision.</p>
<p>In this notebook, we'll be using data collected from a year-long survey of 1.2 million people with only a bachelor's degree by PayScale Inc., made available <a href="http://online.wsj.com/public/resources/documents/info-Degrees_that_Pay_you_Back-sort.html?mod=article_inline">here</a> by the Wall Street Journal for their article <a href="https://www.wsj.com/articles/SB121746658635199271">Ivy League's Big Edge: Starting Pay</a>. After doing some data clean up, we'll compare the recommendations from three different methods for determining the optimal number of clusters, apply a k-means clustering analysis, and visualize the results.</p>
<p>To begin, let's prepare by loading the following packages: <code>tidyr</code>, <code>dplyr</code>, <code>readr</code>, <code>ggplot2</code>, <code>cluster</code>, and <code>factoextra</code>. We'll then import the data from <code>degrees-that-pay-back.csv</code> (which is stored in a folder called <code>datasets</code>), and take a quick look at what we're working with.</p>
```
# Load relevant packages
library(tidyr)
library(dplyr)
library(readr)
library(ggplot2)
library(cluster)
library(factoextra)
# Read in the dataset
degrees <- read_csv('datasets/degrees-that-pay-back.csv', col_names=c("College.Major", "Starting.Median.Salary", "Mid.Career.Median.Salary", "Career.Percent.Growth", "Percentile.10", "Percentile.25", "Percentile.75", "Percentile.90"), skip=1)
# Display the first few rows and a summary of the data frame
head(degrees)
summary(degrees)
```
## 2. Currency and strings and percents, oh my!
<p>Notice that our salary data is in currency format, which R considers a string. Let's strip those special characters using the <code>gsub</code> function and convert all of our columns <em>except</em> <code>College.Major</code> to numeric. </p>
<p>While we're at it, we can also convert the <code>Career.Percent.Growth</code> column to a decimal value. </p>
```
# Clean up the data
degrees_clean <- degrees %>%
mutate_at(vars(2:8), function(x) as.numeric(gsub("[\\$,]","",x))) %>%
mutate(Career.Percent.Growth = Career.Percent.Growth / 100)
degrees_clean
```
## 3. The elbow method
<p>Great! Now that we have a more manageable dataset, let's begin our clustering analysis by determining how many clusters we should be modeling. The best number of clusters for an unlabeled dataset is not always a clear-cut answer, but fortunately there are several techniques to help us optimize. We'll work with three different methods to compare recommendations: </p>
<ul>
<li>Elbow Method</li>
<li>Silhouette Method</li>
<li>Gap Statistic Method</li>
</ul>
<p>First up will be the <strong>Elbow Method</strong>. This method plots the total within-cluster sum of squares against the number of clusters. The "elbow" bend of the curve indicates the optimal point at which adding more clusters will no longer explain a significant amount of the variance. To begin, let's select and scale the following features to base our clusters on: <code>Starting.Median.Salary</code>, <code>Mid.Career.Median.Salary</code>, <code>Percentile.10</code>, and <code>Percentile.90</code>. Then we'll use the fancy <code>fviz_nbclust</code> function from the <em>factoextra</em> library to determine and visualize the optimal number of clusters. </p>
```
# Select and scale the relevant features and store as k_means_data
k_means_data <- degrees_clean %>%
select(Starting.Median.Salary, Mid.Career.Median.Salary, Percentile.10, Percentile.90) %>%
scale()
# Run the fviz_nbclust function with our selected data and method "wss"
elbow_method <- fviz_nbclust(k_means_data, FUNcluster = kmeans, method = "wss")
# View the plot
elbow_method
```
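For readers who prefer Python, the quantity `fviz_nbclust` computes with `method = "wss"` is simply the total within-cluster sum of squares (k-means inertia) for each candidate k. A sketch with scikit-learn, on synthetic stand-in data rather than the salary features (this is not part of the original R analysis):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale

# Stand-in for the scaled salary features: three well-separated groups.
rng = np.random.default_rng(0)
X = scale(np.vstack([rng.normal(0, 1, (20, 4)),
                     rng.normal(5, 1, (20, 4)),
                     rng.normal(10, 1, (20, 4))]))

# Total within-cluster sum of squares (inertia) for k = 1..10;
# the "elbow" is where the curve flattens.
wss = [KMeans(n_clusters=k, n_init=10, random_state=1).fit(X).inertia_
       for k in range(1, 11)]
```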
## 4. The silhouette method
<p>Wow, that <code>fviz_nbclust</code> function was pretty nifty. Instead of needing to "manually" apply the elbow method by running multiple k_means models and plotting the calculated total within-cluster sum of squares for each potential value of k, <code>fviz_nbclust</code> handled all of this for us behind the scenes. Can we use it for the <strong>Silhouette Method</strong> as well? The Silhouette Method will evaluate the quality of clusters by how well each point fits within a cluster, maximizing average "silhouette" width.</p>
```
# Run the fviz_nbclust function with the method "silhouette"
silhouette_method <- fviz_nbclust(k_means_data, FUNcluster = kmeans, method = "silhouette")
# View the plot
silhouette_method
```
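Again for Python readers, the silhouette version of this search maximizes the average silhouette width over candidate k. A scikit-learn sketch on synthetic data (not part of the original R analysis):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs.
X = np.vstack([rng.normal(0, 1, (20, 2)),
               rng.normal(6, 1, (20, 2)),
               rng.normal(12, 1, (20, 2))])

# Average silhouette width for k = 2..10; the best k maximizes it.
scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10,
                                        random_state=1).fit_predict(X))
          for k in range(2, 11)}
best_k = max(scores, key=scores.get)
```

On this synthetic data the maximum lands at k = 3, matching the three generated blobs.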
## 5. The gap statistic method
<p>Marvelous! But hmm, it seems that our two methods so far disagree on the optimal number of clusters... Time to pull out the tie breaker.</p>
<p>For our final method, let's see what the <strong>Gap Statistic Method</strong> has to say about this. The Gap Statistic Method will compare the total variation within clusters for different values of <em>k</em> to the null hypothesis, maximizing the "gap." The "null hypothesis" refers to a uniformly distributed <em>simulated reference</em> dataset with no observable clusters, generated by aligning with the principal components of our original dataset. In other words, how much more variance is explained by <em>k</em> clusters in our dataset than in a fake dataset where all majors have equal salary potential? </p>
<p>Fortunately, we have the <code>clusGap</code> function to calculate this behind the scenes and the <code>fviz_gap_stat</code> function to visualize the results.</p>
```
# Use the clusGap function to apply the Gap Statistic Method
gap_stat <- clusGap(k_means_data, FUN = kmeans, nstart = 25, K.max = 10, B = 50)
# Use the fviz_gap_stat function to visualize the results
gap_stat_method <- fviz_gap_stat(gap_stat)
# View the plot
gap_stat_method
```
## 6. K-means algorithm
<p>Looks like the Gap Statistic Method agreed with the Elbow Method! According to majority rule, let's use 3 for our optimal number of clusters. With this information, we can now run our k-means algorithm on the selected data. We will then add the resulting cluster information to label our original dataframe.</p>
```
# Set a random seed
set.seed(111)
# Set k equal to the optimal number of clusters
num_clusters <- 3
# Run the k-means algorithm
k_means <- kmeans(k_means_data, centers = num_clusters, iter.max = 15, nstart = 25)
# Label the clusters of degrees_clean
degrees_labeled <- degrees_clean %>%
mutate(clusters = k_means$cluster)
```
## 7. Visualizing the clusters
<p>Now for the pretty part: visualizing our results. First let's take a look at how each cluster compares in Starting vs. Mid Career Median Salaries. What do the clusters say about the relationship between Starting and Mid Career salaries?</p>
```
# Graph the clusters by Starting and Mid Career Median Salaries
career_growth <- ggplot(degrees_labeled, aes(x = Starting.Median.Salary, y = Mid.Career.Median.Salary, color=factor(clusters))) +
geom_point(alpha = 4/5, size = 7) +
scale_x_continuous(labels = scales::dollar) +
scale_y_continuous(labels = scales::dollar) +
xlab("Starting Median Salary") +
ylab("Mid Career Median Salary") +
ggtitle("Cluster Analysis Plot") +
scale_color_manual(values = c("#EC2C73","#29AEC7","#FFDD30"), name = "Clusters")
# View the plot
career_growth
```
## 8. A deeper dive into the clusters
<p>Unsurprisingly, most of the data points are hovering in the top left corner, with a relatively linear relationship. In other words, the higher your starting salary, the higher your mid career salary. The three clusters provide a level of delineation that intuitively supports this. </p>
<p>How might the clusters reflect potential mid career growth? There are also a couple curious outliers from clusters 1 and 3... perhaps this can be explained by investigating the mid career percentiles further, and exploring which majors fall in each cluster.</p>
<p>Right now, we have a column for each percentile salary value. In order to visualize the clusters and majors by mid career percentiles, we'll need to reshape the <code>degrees_labeled</code> data using tidyr's <code>gather</code> function to make a <code>percentile</code> <em>key</em> column and a <code>salary</code> <em>value</em> column to use for the axes of our following graphs. We'll then be able to examine the contents of each cluster to see what stories they might be telling us about the majors.</p>
```
# Use the gather function to reshape degrees and
# use mutate() to reorder the new percentile column
degrees_perc <- degrees_labeled %>%
select(College.Major, Percentile.10, Percentile.25, Mid.Career.Median.Salary, Percentile.75, Percentile.90, clusters) %>%
gather("percentile", "salary", -c(College.Major, clusters)) %>%
mutate(percentile = factor(percentile, levels = c('Percentile.10','Percentile.25','Mid.Career.Median.Salary','Percentile.75', 'Percentile.90')))
```
## 9. The liberal arts cluster
<p>Let's graph Cluster 1 and examine the results. These Liberal Arts majors may represent the lowest percentiles with limited growth opportunity, but there is hope for those who make it! Music is our riskiest major with the lowest 10th percentile salary, but Drama wins the highest growth potential in the 90th percentile for this cluster (so don't let go of those Hollywood dreams!). Nursing is the outlier culprit of cluster number 1, with a higher safety net in the lowest percentile to the median. Otherwise, this cluster does represent the majors with limited growth opportunity.</p>
<p>An aside: It's worth noting that most of these majors leading to lower-paying jobs are women-dominated, according to this <a href="https://www.glassdoor.com/research/app/uploads/sites/2/2017/04/FULL-STUDY-PDF-Gender-Pay-Gap2FCollege-Major.pdf">Glassdoor study</a>. According to the research:</p>
<blockquote>
<p>"The single biggest cause of the gender pay gap is occupation and industry sorting of men and women into jobs that pay differently throughout the economy. In the U.S., occupation and industry sorting explains 54 percent of the overall pay gap—by far the largest factor." </p>
</blockquote>
<p>Does this imply that women are statistically choosing majors with lower pay potential, or do certain jobs pay less because women choose them...?</p>
```
# Graph the majors of Cluster 1 by percentile
cluster_1 <- degrees_perc %>%
filter(clusters == 1) %>%
ggplot(aes(percentile, salary, group = College.Major, color = College.Major)) +
geom_point() +
geom_line() +
ggtitle("Cluster 1: The Liberal Arts") +
theme(axis.text.x = element_text(size = 7, angle = 25))
# View the plot
cluster_1
```
## 10. The goldilocks cluster
<p>On to Cluster 2, right in the middle! Accountants are known for having stable job security, but once you're in the big leagues you may be surprised to find that Marketing or Philosophy can ultimately result in higher salaries. The majors of this cluster are fairly middle of the road in our dataset, starting off not too low and not too high in the lowest percentile. However, this cluster also represents the majors with the greatest differential between the lowest and highest percentiles.</p>
```
# Modify the previous plot to display Cluster 2
cluster_2 <- degrees_perc %>%
filter(clusters == 2) %>%
ggplot(aes(percentile, salary, group = College.Major, color = College.Major)) +
geom_point() +
geom_line() +
ggtitle("Cluster 2: The Goldilocks") +
theme(axis.text.x = element_text(size = 7, angle = 25))
# View the plot
cluster_2
```
## 11. The over achiever cluster
<p>Finally, let's visualize Cluster 3. If you want financial security, these are the majors to choose from. Besides our one previously observed outlier now identifiable as Physician Assistant lagging in the highest percentiles, these heavy hitters and solid engineers represent the highest growth potential in the 90th percentile, as well as the best security in the 10th percentile rankings. Maybe those Freakonomics guys are on to something...</p>
```
# Modify the previous plot to display Cluster 3
cluster_3 <- degrees_perc %>%
filter(clusters == 3) %>%
ggplot(aes(percentile, salary, group = College.Major, color = College.Major)) +
geom_point() +
geom_line() +
ggtitle("Cluster 3: The Over Achievers") +
theme(axis.text.x = element_text(size = 7, angle = 25))
# View the plot
cluster_3
```
## 12. Every major's wonderful
<p>Thus concludes our journey exploring salary projections by college major via a k-means clustering analysis! Dealing with unsupervized data always requires a bit of creativity, such as our usage of three popular methods to determine the optimal number of clusters. We also used visualizations to interpret the patterns revealed by our three clusters and tell a story. </p>
<p>Which two careers tied for the highest career percent growth? While it's tempting to focus on starting career salaries when choosing a major, it's important to also consider the growth potential down the road. Keep in mind that whether a major falls into the Liberal Arts, Goldilocks, or Over Achievers cluster, one's financial destiny will certainly be influenced by numerous other factors including the school attended, location, passion or talent for the subject, and of course the actual career(s) pursued. </p>
<p>A similar analysis to evaluate these factors may be conducted on the additional data provided by the Wall Street Journal article, comparing salary potential by type and region of college attended. But in the meantime, here's some inspiration from <a href="https://xkcd.com/1052/">xkcd</a> for any students out there still struggling to choose a major.</p>
```
# Sort degrees by Career.Percent.Growth
degrees_labeled %>%
arrange(desc(Career.Percent.Growth))
# Identify the two majors tied for highest career growth potential
highest_career_growth <- c('....','....')
```
### LSTM Model v2
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from utils import split_sequence, get_apple_close_price, plot_series
from utils import plot_residual_forecast_error, print_performance_metrics
from utils import get_range, difference, inverse_difference
from utils import train_test_split, NN_walk_forward_validation
apple_close_price = get_apple_close_price()
short_series = get_range(apple_close_price, '2003-01-01')
# Model parameters
look_back = 5 # days window look back
n_features = 1 # our only feature will be Close price
n_outputs = 5 # days forecast
batch_size = 32 # for NN, batch size before updating weights
n_epochs = 1000 # for NN, number of training epochs
```
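`split_sequence` is imported from a local `utils` module whose source is not shown in this notebook. A minimal windowing implementation consistent with how it is called below -- `look_back` input steps followed by `n_outputs` target steps -- might look like this (a sketch; the real helper may differ in details):

```python
import numpy as np

def split_sequence(series, look_back, n_outputs):
    """Slide a window over the series: each sample pairs `look_back`
    consecutive inputs with the next `n_outputs` values as targets."""
    X, y = [], []
    for i in range(len(series) - look_back - n_outputs + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back:i + look_back + n_outputs])
    return np.array(X), np.array(y)

# 20 points, 5-step window, 5-step forecast -> 11 supervised samples.
X, y = split_sequence(np.arange(20, dtype=float), look_back=5, n_outputs=5)
```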
We first split into train and test sets, then transform and scale the data. The order matters: the Box-Cox transform and the scaler are fit on the training set only, then applied to the test set.
```
from scipy.stats import boxcox
from scipy.special import inv_boxcox
train, test= train_test_split(apple_close_price,'2018-05-31')
boxcox_series, lmbda = boxcox(train.values)
transformed_train = boxcox_series
transformed_test = boxcox(test, lmbda=lmbda)
# transformed_train = train.values
# transformed_test = test.values
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaled_train = scaler.fit_transform(transformed_train.reshape(-1, 1))
scaled_test = scaler.transform(transformed_test.reshape(-1, 1))
X_train, y_train = split_sequence(scaled_train, look_back, n_outputs)
y_train = y_train.reshape(-1, n_outputs)
from keras.models import Sequential
from keras.layers import LSTM, Dense, Flatten, LeakyReLU
from keras.optimizers import Adam
import warnings
warnings.simplefilter('ignore')
def build_LSTM(look_back, n_features, n_outputs, optimizer='adam'):
model = Sequential()
model.add(LSTM(50, input_shape=(look_back, n_features)))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(n_outputs))
model.compile(optimizer=optimizer, loss='mean_squared_error')
return model
model = build_LSTM(look_back, n_features, n_outputs, optimizer=Adam(0.0001))
model.summary()
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, shuffle=False)
plot_series(history.history['loss'], title='LSTM model - Loss over time')
model.save_weights('lstm-model_weights.h5')
size = 252 # approx. one year
predictions = NN_walk_forward_validation(model,
scaled_train, scaled_test[:size],
size=size,
look_back=look_back,
n_outputs=n_outputs)
from utils import plot_walk_forward_validation
from utils import plot_residual_forecast_error, print_performance_metrics
```
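`NN_walk_forward_validation` is another `utils` helper not shown here. The walk-forward idea -- forecast a block of `n_outputs` steps, then slide forward by appending the *observed* test values -- can be sketched as follows (the `predict` argument and array shapes are assumptions of this sketch, standing in for `model.predict`):

```python
import numpy as np

def walk_forward(predict, history, test, look_back, n_outputs):
    """Walk-forward validation sketch: forecast n_outputs steps from the
    last look_back observations, then append the observed test values
    and repeat until the test set is exhausted."""
    history = list(history)
    preds = []
    for i in range(0, len(test), n_outputs):
        window = np.array(history[-look_back:]).reshape(1, look_back, 1)
        preds.extend(np.ravel(predict(window))[:len(test) - i])
        history.extend(test[i:i + n_outputs])  # use actuals, not forecasts
    return np.array(preds)

# Naive stand-in "model": repeat the last observed value 5 times.
naive = lambda w: np.repeat(w[0, -1, 0], 5)
out = walk_forward(naive, np.arange(10.0), np.arange(10.0, 20.0),
                   look_back=5, n_outputs=5)
```

With the naive model, each 5-step block simply repeats the last value seen before that block.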
We need to revert the scaling and transformation:
```
descaled_preds = scaler.inverse_transform(predictions.reshape(-1, 1))
descaled_test = scaler.inverse_transform(scaled_test.reshape(-1, 1))
descaled_preds = inv_boxcox(descaled_preds, lmbda)
descaled_test = inv_boxcox(descaled_test, lmbda)
fig, ax = plt.subplots(figsize=(15, 6))
plt.plot(descaled_test[:size])
plt.plot(descaled_preds)
ax.set_title('Walk forward validation - 5 days prediction')
ax.legend(['Expected', 'Predicted'])
plot_residual_forecast_error(descaled_preds, descaled_test[:size])
print_performance_metrics(descaled_preds,
descaled_test[:size],
model_name='LSTM',
total_days=size, steps=n_outputs)
# Reload the LSTM weights saved above
model.load_weights('lstm-model_weights.h5')
```
# Summary:
This notebook contains the soft smoothing figures for Swarthmore (Figure 2(a)).
## Load libraries
```
# import packages
from __future__ import division
import networkx as nx
import os
import numpy as np
from sklearn import metrics
from sklearn.preprocessing import label_binarize
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit
import matplotlib.pyplot as plt
## function to create + save dictionary of features
def create_dict(key, obj):
return(dict([(key[i], obj[i]) for i in range(len(key))]))
```
## load helper functions and datasets
```
# set the working directory and import helper functions
#get the current working directory and then redirect into the functions under code
cwd = os.getcwd()
# parents working directory of the current directory: which is the code folder
parent_cwd = os.path.dirname(cwd)
# get into the functions folder
functions_cwd = parent_cwd + '/functions'
# change the working directory to be .../functions
os.chdir(functions_cwd)
# import all helper functions
exec(open('parsing.py').read())
exec(open('ZGL.py').read())
exec(open('create_graph.py').read())
exec(open('ZGL_softing_new_new.py').read())
# import the data from the data folder
data_cwd = os.path.dirname(parent_cwd)+ '/data'
# change the working directory and import the fb dataset
fb100_file = data_cwd +'/Swarthmore42'
A, metadata = parse_fb100_mat_file(fb100_file)
# change A(scipy csc matrix) into a numpy matrix
adj_matrix_tmp = A.todense()
#get the gender for each node(1/2,0 for missing)
gender_y_tmp = metadata[:,1]
# get the corresponding gender for each node in a dictionary form
gender_dict = create_dict(range(len(gender_y_tmp)), gender_y_tmp)
(graph, gender_y) = create_graph(adj_matrix_tmp,gender_dict,'gender',0,None,'yes')
```
## general setup
```
percent_initially_unlabelled = [0.99,0.95,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1,0.05]
percent_initially_labelled = np.subtract(1, percent_initially_unlabelled)
n_iter = 10
cv_setup = 'stratified'
w = [0.001,1,10,100,1000,10000000]
```
## hard smoothing (ZGL)
```
adj_matrix_tmp_ZGL = adj_matrix_tmp
(graph, gender_y) = create_graph(adj_matrix_tmp_ZGL,gender_dict,'gender',0,None,'yes')
# ZGL Setup
adj_matrix_gender = np.array(nx.adjacency_matrix(graph).todense())
# run ZGL (ZGL.py was already loaded above)
(mean_accuracy_zgl_Swarthmore, se_accuracy_zgl_Swarthmore,
mean_micro_auc_zgl_Swarthmore,se_micro_auc_zgl_Swarthmore,
mean_wt_auc_zgl_Swarthmore,se_wt_auc_zgl_Swarthmore) =ZGL(np.array(adj_matrix_gender),
np.array(gender_y),percent_initially_unlabelled,
n_iter,cv_setup)
```
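`ZGL.py` itself is not reproduced in this notebook, but the core of ZGL hard smoothing (Zhu, Ghahramani & Lafferty) is the harmonic solution on the graph Laplacian: solve L_UU f_U = -L_UL f_L, where L = D - W and f_L holds the known labels. A minimal sketch (not the notebook's actual implementation):

```python
import numpy as np

def zgl_harmonic(adj, labeled_idx, labels, unlabeled_idx):
    """Harmonic-function label propagation: solve L_UU f_U = -L_UL f_L,
    where L = D - W is the combinatorial graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    return np.linalg.solve(L_uu, -L_ul @ np.asarray(labels, dtype=float))

# 3-node path 0-1-2, with node 0 labeled 0 and node 2 labeled 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
f = zgl_harmonic(A, labeled_idx=[0, 2], labels=[0, 1], unlabeled_idx=[1])
# The middle node lands exactly between its two labeled neighbours (0.5).
```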
## Soft smoothing (with different parameters w)
```
# ZGL soft smoothing Setup
(graph_new, gender_y_new) = create_graph(adj_matrix_tmp,gender_dict,'gender',0,None,'yes')
adj_matrix_gender = np.array(nx.adjacency_matrix(graph_new).todense())
gender_dict_new = create_dict(range(len(gender_y_new)), gender_y_new)
(mean_accuracy_zgl_softing_new_new_Swarthmore01, se_accuracy_zgl_softing_new_new_Swarthmore01,
mean_micro_auc_zgl_softing_new_new_Swarthmore01,se_micro_auc_zgl_softing_new_new_Swarthmore01,
mean_wt_auc_zgl_softing_new_new_Swarthmore01,se_wt_auc_zgl_softing_new_new_Swarthmore01) = ZGL_softing_new_new(w[0], adj_matrix_gender,
gender_dict_new,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_Swarthmore1, se_accuracy_zgl_softing_new_new_Swarthmore1,
mean_micro_auc_zgl_softing_new_new_Swarthmore1,se_micro_auc_zgl_softing_new_new_Swarthmore1,
mean_wt_auc_zgl_softing_new_new_Swarthmore1,se_wt_auc_zgl_softing_new_new_Swarthmore1) = ZGL_softing_new_new(w[1], adj_matrix_gender,
gender_dict_new,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_Swarthmore10, se_accuracy_zgl_softing_new_new_Swarthmore10,
mean_micro_auc_zgl_softing_new_new_Swarthmore10,se_micro_auc_zgl_softing_new_new_Swarthmore10,
mean_wt_auc_zgl_softing_new_new_Swarthmore10,se_wt_auc_zgl_softing_new_new_Swarthmore10) = ZGL_softing_new_new(w[2], adj_matrix_gender,
gender_dict_new,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_Swarthmore100, se_accuracy_zgl_softing_new_new_Swarthmore100,
mean_micro_auc_zgl_softing_new_new_Swarthmore100,se_micro_auc_zgl_softing_new_new_Swarthmore100,
mean_wt_auc_zgl_softing_new_new_Swarthmore100,se_wt_auc_zgl_softing_new_new_Swarthmore100) = ZGL_softing_new_new(w[3], adj_matrix_gender,
gender_dict_new,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_Swarthmore1000, se_accuracy_zgl_softing_new_new_Swarthmore1000,
mean_micro_auc_zgl_softing_new_new_Swarthmore1000,se_micro_auc_zgl_softing_new_new_Swarthmore1000,
mean_wt_auc_zgl_softing_new_new_Swarthmore1000,se_wt_auc_zgl_softing_new_new_Swarthmore1000) = ZGL_softing_new_new(w[4], adj_matrix_gender,
gender_dict_new,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_Swarthmore10000, se_accuracy_zgl_softing_new_new_Swarthmore10000,
mean_micro_auc_zgl_softing_new_new_Swarthmore10000,se_micro_auc_zgl_softing_new_new_Swarthmore10000,
mean_wt_auc_zgl_softing_new_new_Swarthmore10000,se_wt_auc_zgl_softing_new_new_Swarthmore10000) = ZGL_softing_new_new(w[5], adj_matrix_gender,
gender_dict_new,'gender', percent_initially_unlabelled, n_iter,cv_setup)
```
## plot
```
%matplotlib inline
from matplotlib.ticker import FixedLocator,LinearLocator,MultipleLocator, FormatStrFormatter
fig = plt.figure()
#seaborn.set_style(style='white')
from mpl_toolkits.axes_grid1 import Grid
grid = Grid(fig, rect=111, nrows_ncols=(1,1),
axes_pad=0.1, label_mode='L')
for i in range(4):
if i == 0:
# set the x and y axis
grid[i].xaxis.set_major_locator(FixedLocator([0,25,50,75,100]))
grid[i].yaxis.set_major_locator(FixedLocator([0.4, 0.5,0.6,0.7,0.8,0.9,1]))
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_Swarthmore,
yerr=se_wt_auc_zgl_Swarthmore, fmt='--o', capthick=2,
alpha=1, elinewidth=8, color='black')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_Swarthmore01,
yerr=se_wt_auc_zgl_softing_new_new_Swarthmore01, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='gold')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_Swarthmore1,
yerr=se_wt_auc_zgl_softing_new_new_Swarthmore1, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='darkorange')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_Swarthmore10,
yerr=se_wt_auc_zgl_softing_new_new_Swarthmore10, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='crimson')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_Swarthmore100,
yerr=se_wt_auc_zgl_softing_new_new_Swarthmore100, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='red')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_Swarthmore1000,
yerr=se_wt_auc_zgl_softing_new_new_Swarthmore1000, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='maroon')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_Swarthmore10000,
yerr=se_wt_auc_zgl_softing_new_new_Swarthmore10000, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='darkred')
grid[i].set_ylim(0.45,1)
grid[i].set_xlim(0,101)
grid[i].annotate('soft: a = 0.001', xy=(3, 0.96),
color='gold', alpha=1, size=12)
grid[i].annotate('soft: a = 1', xy=(3, 0.92),
color='darkorange', alpha=1, size=12)
grid[i].annotate('soft: a = 10', xy=(3, 0.88),
color='crimson', alpha=1, size=12)
grid[i].annotate('soft: a = 100', xy=(3, 0.84),
color='red', alpha=1, size=12)
grid[i].annotate('soft: a = 1000', xy=(3, 0.80),
color='maroon', alpha=1, size=12)
grid[i].annotate('soft: a = 10000000', xy=(3, 0.76),
color='darkred', alpha=1, size=12)
grid[i].annotate('hard smoothing', xy=(3, 0.72),
color='black', alpha=1, size=12)
grid[i].set_ylim(0.4,0.8)
grid[i].set_xlim(0,100)
grid[i].spines['right'].set_visible(False)
grid[i].spines['top'].set_visible(False)
grid[i].tick_params(axis='both', which='major', labelsize=13)
grid[i].tick_params(axis='both', which='minor', labelsize=13)
grid[i].set_xlabel('Percent of Nodes Initially Labeled').set_fontsize(15)
grid[i].set_ylabel('AUC').set_fontsize(15)
grid[0].set_xticks([0,25, 50, 75, 100])
grid[0].set_yticks([0.4,0.6,0.8,1])
grid[0].minorticks_on()
grid[0].tick_params('both', length=4, width=1, which='major', left=1, bottom=1, top=0, right=0)
```
## Required extra package:
For hypergraphs:
* pip install hypernetx
```
import pandas as pd
import numpy as np
import igraph as ig
import partition_igraph
import hypernetx as hnx
import pickle
import matplotlib.pyplot as plt
%matplotlib inline
from collections import Counter
from functools import reduce
import itertools
## the data directory
datadir='../Datasets/'
```
# Summary of extra functions for HNX hypergraphs
### Build hypergraph and pre-compute key quantities
We build the hypergraph HG using:
```python
HG = hnx.Hypergraph(dict(enumerate(Edges)))
```
where 'Edges' is a list of sets; edges are then indexed as 0-based integers,
so to preserve unique ids, we represent nodes as strings.
For example Edges[0] = {'0','2'}
Once the HNX hypergraph is built, the following function is called to
compute node strengths, d-degrees and binomial coefficients:
```python
HNX_precompute(HG)
```
### Partitions
We use two representations for partitions: list of sets (the parts) or dictionary.
Those functions are used to map from one to the other:
```python
dict2part(D)
part2dict(A)
```
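`dict2part` and `part2dict` are defined in helper code not shown in this chunk; implementations consistent with the description above might look like this (a sketch, assuming 0-based integer part ids):

```python
def dict2part(D):
    """Map a node -> part-id dict to a list of sets, one set per part."""
    parts = {}
    for node, pid in D.items():
        parts.setdefault(pid, set()).add(node)
    return [parts[pid] for pid in sorted(parts)]

def part2dict(A):
    """Map a list of sets (0-based part ids) to a node -> part-id dict."""
    return {node: i for i, part in enumerate(A) for node in part}

# Round trip on a tiny partition with string node ids, as used above.
A = [{'0', '2'}, {'1'}]
```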
### H-modularity
The function to compute H-modularity for HG w.r.t. partition A (list of sets covering the vertices):
```python
HNX_modularity(HG, A, wdc=linear)
```
where 'wdc' is the weight function (default = 'linear'). Other choices are 'strict'
and 'majority', or any user-supplied function with the following format:
```python
def linear(d,c):
return c/d if c>d/2 else 0
```
where d is the edge size, and d>=c>d/2 the number of nodes in the majority class.
### Two-section graph
Build the random-walk based 2-section graph given some hypergraph HG:
```python
G = HNX_2section(HG)
```
where G is an igraph Graph.
```
## Functions for HNX hypergraphs as described above:
def factorial(n):
if n < 2: return 1
return reduce(lambda x, y: x*y, range(2, int(n)+1))
## Precompute some values on HNX hypergraph for computing qH faster
def HNX_precompute(HG):
## 1. compute node strengths (weighted degrees)
for v in HG.nodes:
HG.nodes[v].strength = 0
for e in HG.edges:
try:
w = HG.edges[e].weight
except:
w = 1
## add unit weight if none to simplify other functions
HG.edges[e].weight = 1
for v in list(HG.edges[e]):
HG.nodes[v].strength += w
## 2. compute d-weights
ctr = Counter([len(HG.edges[e]) for e in HG.edges])
for k in ctr.keys():
ctr[k]=0
for e in HG.edges:
ctr[len(HG.edges[e])] += HG.edges[e].weight
HG.d_weights = ctr
HG.total_weight = sum(ctr.values())
    ## 3. compute binomial coefficients (modularity speed-up)
bin_coef = {}
for n in HG.d_weights.keys():
for k in np.arange(n//2+1,n+1):
bin_coef[(n,k)] = factorial(n)/(factorial(k)*factorial(n-k))
HG.bin_coef = bin_coef
#########################################
## some weight functions 'wdc' for d-edges with c-majority
## default: linear w.r.t. c
def linear(d,c):
return c/d if c>d/2 else 0
## majority
def majority(d,c):
return 1 if c>d/2 else 0
## strict
def strict(d,c):
return 1 if c==d else 0
#########################################
## compute vol(A_i)/vol(V) for each part A_i in A (list of sets)
def compute_partition_probas(HG, A):
p = []
for part in A:
vol = 0
for v in part:
vol += HG.nodes[v].strength
p.append(vol)
s = sum(p)
return [i/s for i in p]
## degree tax
def DegreeTax(HG, Pr, wdc):
DT = 0
for d in HG.d_weights.keys():
tax = 0
for c in np.arange(d//2+1,d+1):
for p in Pr:
tax += p**c * (1-p)**(d-c) * HG.bin_coef[(d,c)] * wdc(d,c)
tax *= HG.d_weights[d]
DT += tax
DT /= HG.total_weight
return DT
## edge contribution, A is list of sets
def EdgeContribution(HG, A, wdc):
EC = 0
for e in HG.edges:
d = HG.size(e)
for part in A:
if HG.size(e,part) > d/2:
EC += wdc(d,HG.size(e,part)) * HG.edges[e].weight
EC /= HG.total_weight
return EC
## HG: HNX hypergraph
## A: partition (list of sets)
## wdc: weight function (ex: strict, majority, linear)
def HNX_modularity(HG, A, wdc=linear):
Pr = compute_partition_probas(HG, A)
return EdgeContribution(HG, A, wdc) - DegreeTax(HG, Pr, wdc)
#########################################
## 2-section igraph from HG
def HNX_2section(HG):
s = []
for e in HG.edges:
E = HG.edges[e]
## random-walk 2-section (preserve nodes' weighted degrees)
try:
w = HG.edges[e].weight/(len(E)-1)
except:
w = 1/(len(E)-1)
s.extend([(k[0],k[1],w) for k in itertools.combinations(E,2)])
G = ig.Graph.TupleList(s,weights=True).simplify(combine_edges='sum')
return G
#########################################
## we use 2 representations for partitions (0-based part ids):
## (1) dictionary or (2) list of sets
def dict2part(D):
P = []
k = list(D.keys())
v = list(D.values())
for x in range(max(D.values())+1):
P.append(set([k[i] for i in range(len(k)) if v[i]==x]))
return P
def part2dict(A):
x = []
for i in range(len(A)):
x.extend([(a,i) for a in A[i]])
return {k:v for k,v in x}
```
# Toy hypergraph example with HNX
```
## build a hypergraph from a list of sets (the hyperedges)
## using 'enumerate', edges will have integer IDs
E = [{'A','B'},{'A','C'},{'A','B','C'},{'A','D','E','F'},{'D','F'},{'E','F'}]
HG = hnx.Hypergraph(dict(enumerate(E)))
hnx.draw(HG)
## dual hypergraph
HD = HG.dual()
hnx.draw(HD)
## compute node strength (add unit weight if none), d-degrees, binomial coefficients
HNX_precompute(HG)
## show the edges (unit weights were added by default)
HG.edges.elements
## show the nodes (here strength = degree since all weights are 1)
HG.nodes.elements
## d-weights distribution
HG.d_weights
## compute modularity qH for the following partitions:
A1 = [{'A','B','C'},{'D','E','F'}]
A2 = [{'B','C'},{'A','D','E','F'}]
A3 = [{'A','B','C','D','E','F'}]
A4 = [{'A'},{'B'},{'C'},{'D'},{'E'},{'F'}]
print('linear:',HNX_modularity(HG,A1),HNX_modularity(HG,A2),HNX_modularity(HG,A3),HNX_modularity(HG,A4))
print('strict:',HNX_modularity(HG,A1,strict),HNX_modularity(HG,A2,strict),HNX_modularity(HG,A3,strict),HNX_modularity(HG,A4,strict))
print('majority:',HNX_modularity(HG,A1,majority),HNX_modularity(HG,A2,majority),HNX_modularity(HG,A3,majority),HNX_modularity(HG,A4,majority))
## 2-section graph
G = HNX_2section(HG)
G.vs['label'] = G.vs['name']
ig.plot(G,bbox=(0,0,250,250))
## 2-section clustering with ECG
G.vs['community'] = G.community_ecg().membership
dict2part({v['name']:v['community'] for v in G.vs})
```
# Game of Thrones scenes hypergraph
REF: https://github.com/jeffreylancaster/game-of-thrones
We built a hypergraph from the Game of Thrones scenes with the following elements:
* **Nodes** are characters in the series
* **Hyperedges** are groups of characters appearing in the same scene(s)
* **Hyperedge weights** are total scene(s) duration in seconds involving those characters
We kept hyperedges with at least 2 characters and we discarded characters with degree below 5.
We saved the following:
* *Edges*: list of sets where the nodes are 0-based integers represented as strings: '0', '1', ... 'n-1'
* *Names*: dictionary; mapping of nodes to character names
* *Weights*: list; hyperedge weights (in same order as Edges)
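A hypothetical miniature of the three saved objects (invented toy values, just to show how they line up):

```python
# Invented toy values illustrating the saved format (not the real data).
toy_edges = [{'0', '1'}, {'0', '1', '2'}]   # nodes as 0-based strings
toy_names = {'0': 'Jon Snow', '1': 'Arya Stark', '2': 'Sansa Stark'}
toy_weights = [120, 45]                     # seconds, same order as toy_edges
assert len(toy_weights) == len(toy_edges)
assert all(v in toy_names for e in toy_edges for v in e)
```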
```
with open(datadir+"GoT/GoT.pkl","rb") as f:
Edges, Names, Weights = pickle.load(f)
```
## Build weighted hypergraph
We also show some simple functions to access nodes and edges.
```
## Nodes are represented as strings from '0' to 'n-1'
HG = hnx.Hypergraph(dict(enumerate(Edges)))
## add edge weights
for e in HG.edges:
HG.edges[e].weight = Weights[e]
## add full names
for v in HG.nodes:
HG.nodes[v].name = Names[v]
## pre-compute required quantities for modularity and clustering
HNX_precompute(HG)
print(HG.number_of_nodes(),'nodes and',HG.number_of_edges(),'edges')
## node indices are strings
HG.nodes['0']
## int for edges
HG.edges[0]
## to get the nodes for a given edge
HG.edges[0].elements
## or just the keys
HG.edges[0].elements.keys()
```
### EDA on GoT hypergraph
```
## edge sizes (number of characters per scene)
plt.hist([HG.edges[e].size() for e in HG.edges], bins=25, color='grey')
plt.xlabel("Edge size",fontsize=14);
#plt.savefig('got_hist_1.eps');
## edge weights (total scene durations for each group of characters)
plt.hist([HG.edges[e].weight for e in HG.edges], bins=25, color='grey')
plt.xlabel("Edge weight",fontsize=14);
#plt.savefig('got_hist_2.eps');
## max edge weight
print('max = ',max([HG.edges[e].weight for e in HG.edges]))
## node degrees
plt.hist(hnx.degree_dist(HG),bins=20, color='grey')
plt.xlabel("Node degree",fontsize=14);
#plt.savefig('got_hist_3.eps');
## node strength (total appearance)
plt.hist([HG.nodes[n].strength for n in HG.nodes], bins=20, color='grey')
plt.xlabel("Node strength",fontsize=14);
#plt.savefig('got_hist_4.eps');
## build dataframe with node characteristics
dg = [HG.degree(v) for v in HG.nodes()]
st = [HG.nodes[v].strength for v in HG.nodes()]
nm = [HG.nodes[v].name for v in HG.nodes()]
D = pd.DataFrame(np.array([nm,dg,st]).transpose(),columns=['name','degree','strength'])
D['degree'] = pd.to_numeric(D['degree'])
D['strength'] = pd.to_numeric(D['strength'])
D.sort_values(by='strength',ascending=False).head()
D.sort_values(by='degree',ascending=False).head()
plt.plot(D['degree'],D['strength'],'.')
plt.xlabel('degree',fontsize=14)
plt.ylabel('strength',fontsize=14);
```
## Build 2-section graph and compute a few centrality measures
```
## build 2-section
G = HNX_2section(HG)
## sanity check -- node ordering
## ordering of nodes in HG
ord_HG = list(HG.nodes.elements.keys())
## ordering of nodes in G
ord_G = [v['name'] for v in G.vs]
ord_HG == ord_G
b = G.betweenness(directed=False,weights='weight')
n = G.vcount()
D['betweenness'] = [2*x/((n-1)*(n-2)) for x in b]
D['pagerank'] = G.pagerank(directed=False,weights='weight')
D.sort_values(by='strength',ascending=False).head(10)
D.sort_values(by='betweenness',ascending=False).head()
```
## Hypergraph modularity and clustering
```
## visualize the 2-section graph
print('nodes:',G.vcount(),'edges:',G.ecount())
G.vs['size'] = 10
G.vs['color'] = 'lightgrey'
G.vs['label'] = [int(x) for x in G.vs['name']] ## use int(name) as label
G.vs['character'] = [HG.nodes[n].name for n in G.vs['name']]
G.vs['label_size'] = 5
ly = G.layout_fruchterman_reingold()
ig.plot(G, layout = ly, bbox=(0,0,600,400))
## we see a small clique: Braavosi theater troupe
print([HG.nodes[str(x)].name for x in np.arange(166,173)])
## Modularity (qH) on several random partitions with K parts for a range of K's
## This should be close to 0 and can be negative.
h = []
for K in np.arange(2,21):
for rep in range(10):
V = list(HG.nodes)
p = np.random.choice(K, size=len(V))
RandPart = dict2part({V[i]:p[i] for i in range(len(V))})
## compute qH
h.append(HNX_modularity(HG, RandPart))
print('range for qH:',min(h),'to',max(h))
## Cluster the 2-section graph (with Louvain) and compute qH
## We now see qH >> 0
G.vs['louvain'] = G.community_multilevel(weights='weight').membership
D['cluster'] = G.vs['louvain']
ML = dict2part({v['name']:v['louvain'] for v in G.vs})
## Compute qH
print(HNX_modularity(HG, ML))
## plot 2-section w.r.t. the resulting clusters
cl = G.vs['louvain']
pal = ig.GradientPalette("white","black",max(cl)+2)
## uncomment line below for color plot:
pal = ig.ClusterColoringPalette(max(cl)+1)
G.vs['color'] = [pal[x] for x in cl]
G.vs['label_size'] = 5
ig.plot(G, layout = ly, bbox=(0,0,500,400))
#ig.plot(G, target='GoT_clusters.eps', layout = ly, bbox=(0,0,400,400))
## ex: high strength nodes in same cluster with Daenerys Targaryen
dt = int(D[D['name']=='Daenerys Targaryen']['cluster'])
D[D['cluster']==dt].sort_values(by='strength',ascending=False).head(9)
```
# Motifs example
Using the HNX draw function to draw the patterns from Figure 7.1 in the book.
```
## H1 pattern
E = [{'A','B'},{'A','C'},{'A','D'},{'B','D'},{'C','D'}]
HG = hnx.Hypergraph(dict(enumerate(E)))
hnx.draw(HG)
## H2 pattern
E = [{'A','B','C'},{'A','D'},{'C','D'}]
HG = hnx.Hypergraph(dict(enumerate(E)))
hnx.draw(HG)
## H3 pattern
E = [{'A','B','C'},{'B','C','D'}]
HG = hnx.Hypergraph(dict(enumerate(E)))
hnx.draw(HG)
```
### Counting those patterns -- Table 7.2
## Experiment with simple community random hypergraphs
Note: qH-based heuristics are still very experimental; we provide this for illustration only.
* 16 hypergraphs each with 1000 nodes, 1400 edges of size 2 to 8 (200 each)
* 10 communities with 0%, 5%, 10% or 15% pure noise edges (mu)
* community edge homogeneity (tau) from 0.5 to 1
* 3 algorithms:
* qG-based Louvain on 2-section
* qH-based heuristic clustering algorithm on hypergraph
* qH+: same but using true homogeneity (tau)
* Experiment results are stored in files taus_xx.pkl with xx in {00, 05, 10, 15}
```
## load results (here mu = .05)
with open( datadir+"Hypergraph/taus_05.pkl", "rb" ) as f:
results = pickle.load(f)
R = pd.DataFrame(results,columns=['tau','Graph','Hypergraph','Hypergraph+']).groupby(by='tau').mean()
t = [x for x in np.arange(.501,1,.025)]
pal = ig.GradientPalette("grey","black",3)
#pal = ig.GradientPalette("red","blue",3)
plt.plot(t,R['Graph'],'o-',label='qG-based',color=pal[0])
plt.plot(t,R['Hypergraph'],'o-',label='qH-based',color=pal[1])
plt.plot(t,R['Hypergraph+'],'o-',label='qH-based (tuned)',color=pal[2])
plt.xlabel(r'homogeneity ($\tau$)',fontsize=14)
plt.ylabel('AMI',fontsize=14)
plt.legend();
#plt.savefig('taus_05.eps');
```
## Community hypergraphs
We have the hyperedge lists and communities for 3 random hypergraphs with communities, namely:
* edges65, comm65: hypergraphs with $\tau_e = \lceil 0.65\,d \rceil$ for all community edges of size $d$
* edges85, comm85: hypergraphs with $\tau_e = \lceil 0.85\,d \rceil$ for all community edges of size $d$
* edges65_unif, comm65_unif: hypergraphs with $\tau_e$ chosen uniformly from $\{\lceil 0.65\,d \rceil,\dots,d\}$ for all community edges of size $d$
All have 1000 nodes, 1400 edges of size 2 to 8 (200 each), 10 communities and noise parameter $\mu=0.1$.
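Edge homogeneity, used throughout this experiment, is simply the largest fraction of an edge's nodes that fall in a single community; a minimal sketch (the cells below compute the same quantity inline):

```python
def homogeneity(e, parts):
    ## largest fraction of e's nodes inside one part
    return max(len(e & k) for k in parts) / len(e)

parts = [{'a', 'b', 'c'}, {'d', 'e'}]
assert homogeneity({'a', 'b', 'd'}, parts) == 2/3
assert homogeneity({'a', 'b'}, parts) == 1.0   # pure community edge
```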
```
## load hypergraphs
with open(datadir+"Hypergraph/hypergraphs.pkl","rb") as f:
(edges65, comm65, edges85, comm85, edges65_unif, comm65_unif) = pickle.load(f)
## estimating tau
## pick one of the three hypergraphs
comm = comm65
L = edges65
## true communities
HG = hnx.Hypergraph(dict(enumerate(L)))
x = []
for e in L:
x.append(max([len(e.intersection(k)) for k in comm])/len(e))
y = []
for t in np.arange(.501,1,.025):
y.append(sum([i>t for i in x])/len(x))
plt.plot(np.arange(.501,1,.025),y,'.-',color='grey',label='true communities')
## Louvain
G = HNX_2section(HG)
G.vs['louvain'] = G.community_multilevel(weights='weight').membership
ML = dict2part({v['name']:v['louvain'] for v in G.vs})
x = []
for e in L:
x.append(max([len(e.intersection(k)) for k in ML])/len(e))
y = []
for t in np.arange(.501,1,.025):
y.append(sum([i>t for i in x])/len(x))
plt.plot(np.arange(.501,1,.025),y,'.-',color='black',label='Louvain')
plt.grid()
#plt.title(r'Estimating $\tau$ from data',fontsize=14)
plt.ylabel(r'Pr(homogeneity > $\tau$)',fontsize=14)
plt.xlabel(r'$\tau$',fontsize=14)
plt.legend()
plt.ylim(0,1);
#plt.savefig('tau_65.eps');
## distribution of edge homogeneity -- single value for 'tau'
x = []
for e in edges65:
x.append(max([len(e.intersection(k)) for k in comm65])/len(e))
plt.hist(x,bins='rice',color='grey');
#plt.savefig('hist_65.eps');
## distribution of edge homogeneity -- range for 'tau'
## we see many more pure community edges
x = []
for e in edges65_unif:
x.append(max([len(e.intersection(k)) for k in comm65_unif])/len(e))
plt.hist(x, bins='rice',color='grey');
#plt.savefig('hist_65_unif.eps');
```
```
import numpy as np
import pandas as pd
from IPython.display import clear_output
from matplotlib import pyplot as plt
from matplotlib import style
style.use('fivethirtyeight')
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
dftrain.head()
y_train.head()
import tensorflow as tf
tf.random.set_seed(123)
dftrain.describe()
dftrain.shape[0], dfeval.shape[0]
dftrain.age.hist(bins=20)
plt.show()
dftrain['sex'].value_counts().plot(kind='bar') #kind='barh'
plt.show()
dftrain['class'].value_counts().plot(kind='bar') #kind='barh'
plt.show()
dftrain['embark_town'].value_counts().plot(kind='barh')
plt.show()
pd.concat([dftrain,y_train]).head()
pd.concat([dftrain,y_train],axis=1).head()
pd.concat([dftrain,y_train],axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive')
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list(feature_name,vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name,dtype=tf.float32))
example = dict(dftrain.head(1))
class_fc = tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list('class', ('First', 'Second', 'Third')))
print('Feature value: "{}"'.format(example['class'].iloc[0]))
print('One-hot encoded: ', tf.keras.layers.DenseFeatures([class_fc])(example).numpy())
tf.keras.layers.DenseFeatures(feature_columns)(example).numpy()
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
dataset=dataset.repeat(n_epochs)
dataset=dataset.batch(NUM_EXAMPLES)
return dataset
return input_fn
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
linear_est=tf.estimator.LinearClassifier(feature_columns)
linear_est.train(train_input_fn, max_steps=100)
result = linear_est.evaluate(eval_input_fn)
print(pd.Series(result))
n_batches = 1
est = tf.estimator.BoostedTreesClassifier(feature_columns,
n_batches_per_layer=n_batches)
est.train(train_input_fn, max_steps=100)
result = est.evaluate(eval_input_fn)
print(pd.Series(result))
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
plt.show()
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
plt.show()
```
```
#default_exp basics
#export
from fastcore.imports import *
import builtins
from fastcore.test import *
from nbdev.showdoc import *
from fastcore.nb_imports import *
```
# Basic functionality
> Basic functionality used in the fastai library
## Basics
```
# export
defaults = SimpleNamespace()
# export
def ifnone(a, b):
"`b` if `a` is None else `a`"
return b if a is None else a
```
Since `b if a is None else a` is such a common pattern, we wrap it in a function. However, be careful, because Python will evaluate *both* `a` and `b` when calling `ifnone` (which it doesn't do if using the `if` version directly).
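A small sketch of that caveat (a stand-alone copy of `ifnone` plus a hypothetical helper, for illustration):

```python
def ifnone(a, b):
    "`b` if `a` is None else `a`"
    return b if a is None else a

calls = []
def expensive():
    calls.append(1)
    return 42

# Arguments are evaluated before the call, so expensive() runs
# even though `a` is not None...
assert ifnone(0, expensive()) == 0
assert calls == [1]

# ...whereas the inline form short-circuits and never calls it.
calls.clear()
a = 0
_ = expensive() if a is None else a
assert calls == []
```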
```
test_eq(ifnone(None,1), 1)
test_eq(ifnone(2 ,1), 2)
#export
def maybe_attr(o, attr):
"`getattr(o,attr,o)`"
return getattr(o,attr,o)
```
Return the attribute `attr` for object `o`. If the attribute doesn't exist, then return the object `o` instead.
```
class myobj: myattr='foo'
test_eq(maybe_attr(myobj, 'myattr'), 'foo')
test_eq(maybe_attr(myobj, 'another_attr'), myobj)
#export
def basic_repr(flds=None):
if isinstance(flds, str): flds = re.split(', *', flds)
flds = list(flds or [])
def _f(self):
sig = ', '.join(f'{o}={getattr(self,o)!r}' for o in flds)
return f'{self.__class__.__name__}({sig})'
return _f
```
Look up a user-supplied list of attributes (`flds`) of an object and generate a string with the name of each attribute and its corresponding value. The format of this string is `key=value`, where `key` is the name of the attribute, and `value` is the value of the attribute. Each value is rendered with its `repr` when constructing the string.
```
class SomeClass:
a=1
b='foo'
__repr__=basic_repr('a,b')
__name__='some-class'
class AnotherClass:
c=SomeClass()
d='bar'
__repr__=basic_repr(['c', 'd'])
sc = SomeClass()
ac = AnotherClass()
test_eq(repr(sc), "SomeClass(a=1, b='foo')")
test_eq(repr(ac), "AnotherClass(c=SomeClass(a=1, b='foo'), d='bar')")
#export
def is_array(x):
"`True` if `x` supports `__array__` or `iloc`"
return hasattr(x,'__array__') or hasattr(x,'iloc')
is_array(np.array(1)),is_array([1])
#export
def listify(o=None, *rest, use_list=False, match=None):
"Convert `o` to a `list`"
if rest: o = (o,)+rest
if use_list: res = list(o)
elif o is None: res = []
elif isinstance(o, list): res = o
elif isinstance(o, str) or is_array(o): res = [o]
elif is_iter(o): res = list(o)
else: res = [o]
if match is not None:
if is_coll(match): match = len(match)
if len(res)==1: res = res*match
else: assert len(res)==match, 'Match length mismatch'
return res
```
Conversion is designed to "do what you mean", e.g.:
```
test_eq(listify('hi'), ['hi'])
test_eq(listify(array(1)), [array(1)])
test_eq(listify(1), [1])
test_eq(listify([1,2]), [1,2])
test_eq(listify(range(3)), [0,1,2])
test_eq(listify(None), [])
test_eq(listify(1,2), [1,2])
arr = np.arange(9).reshape(3,3)
listify(arr)
listify(array([1,2]))
```
Generators are turned into lists too:
```
gen = (o for o in range(3))
test_eq(listify(gen), [0,1,2])
```
Use `match` to provide a length to match:
```
test_eq(listify(1,match=3), [1,1,1])
```
If `match` is a sequence, its length is used:
```
test_eq(listify(1,match=range(3)), [1,1,1])
```
If the listified item is not of length `1`, it must be the same length as `match`:
```
test_eq(listify([1,1,1],match=3), [1,1,1])
test_fail(lambda: listify([1,1],match=3))
#export
def tuplify(o, use_list=False, match=None):
"Make `o` a tuple"
return tuple(listify(o, use_list=use_list, match=match))
test_eq(tuplify(None),())
test_eq(tuplify([1,2,3]),(1,2,3))
test_eq(tuplify(1,match=[1,2,3]),(1,1,1))
#export
def true(x):
"Test whether `x` is truthy; collections with >0 elements are considered `True`"
try: return bool(len(x))
except: return bool(x)
[(o,true(o)) for o in
(array(0),array(1),array([0]),array([0,1]),1,0,'',None)]
#export
class NullType:
"An object that is `False` and can be called, chained, and indexed"
def __getattr__(self,*args):return null
def __call__(self,*args, **kwargs):return null
def __getitem__(self, *args):return null
def __bool__(self): return False
null = NullType()
bool(null.hi().there[3])
#export
def tonull(x):
"Convert `None` to `null`"
return null if x is None else x
bool(tonull(None).hi().there[3])
#export
def get_class(nm, *fld_names, sup=None, doc=None, funcs=None, **flds):
"Dynamically create a class, optionally inheriting from `sup`, containing `fld_names`"
attrs = {}
for f in fld_names: attrs[f] = None
for f in listify(funcs): attrs[f.__name__] = f
for k,v in flds.items(): attrs[k] = v
sup = ifnone(sup, ())
if not isinstance(sup, tuple): sup=(sup,)
def _init(self, *args, **kwargs):
for i,v in enumerate(args): setattr(self, list(attrs.keys())[i], v)
for k,v in kwargs.items(): setattr(self,k,v)
all_flds = [*fld_names,*flds.keys()]
def _eq(self,b):
return all([getattr(self,k)==getattr(b,k) for k in all_flds])
if not sup: attrs['__repr__'] = basic_repr(all_flds)
attrs['__init__'] = _init
attrs['__eq__'] = _eq
res = type(nm, sup, attrs)
if doc is not None: res.__doc__ = doc
return res
show_doc(get_class, title_level=4)
_t = get_class('_t', 'a', b=2)
t = _t()
test_eq(t.a, None)
test_eq(t.b, 2)
t = _t(1, b=3)
test_eq(t.a, 1)
test_eq(t.b, 3)
t = _t(1, 3)
test_eq(t.a, 1)
test_eq(t.b, 3)
test_eq(repr(t), '_t(a=1, b=3)')
test_eq(t, pickle.loads(pickle.dumps(t)))
```
Most often you'll want to call `mk_class`, since it adds the class to your module. See `mk_class` for more details and examples of use (which also apply to `get_class`).
```
#export
def mk_class(nm, *fld_names, sup=None, doc=None, funcs=None, mod=None, **flds):
"Create a class using `get_class` and add to the caller's module"
if mod is None: mod = sys._getframe(1).f_locals
res = get_class(nm, *fld_names, sup=sup, doc=doc, funcs=funcs, **flds)
mod[nm] = res
```
Any `kwargs` will be added as class attributes, and `sup` is an optional (tuple of) base classes.
```
mk_class('_t', a=1, sup=dict)
t = _t()
test_eq(t.a, 1)
assert(isinstance(t,dict))
```
An `__init__` is provided that sets attrs for any `kwargs`, and for any `args` (matching by position to fields), along with a `__repr__` which prints all attrs. The docstring is set to `doc`. You can pass `funcs` which will be added as attrs with the function names.
```
def foo(self): return 1
mk_class('_t', 'a', sup=dict, doc='test doc', funcs=foo)
t = _t(3, b=2)
test_eq(t.a, 3)
test_eq(t.b, 2)
test_eq(t.foo(), 1)
test_eq(t.__doc__, 'test doc')
t
#export
def wrap_class(nm, *fld_names, sup=None, doc=None, funcs=None, **flds):
"Decorator: makes function a method of a new class `nm` passing parameters to `mk_class`"
def _inner(f):
mk_class(nm, *fld_names, sup=sup, doc=doc, funcs=listify(funcs)+[f], mod=f.__globals__, **flds)
return f
return _inner
@wrap_class('_t', a=2)
def bar(self,x): return x+1
t = _t()
test_eq(t.a, 2)
test_eq(t.bar(3), 4)
#export
class ignore_exceptions:
"Context manager to ignore exceptions"
def __enter__(self): pass
def __exit__(self, *args): return True
show_doc(ignore_exceptions, title_level=4)
with ignore_exceptions():
# Exception will be ignored
raise Exception
#export
def exec_local(code, var_name):
    "Call `exec` on `code` and return the var `var_name`"
loc = {}
exec(code, globals(), loc)
return loc[var_name]
test_eq(exec_local("a=1", "a"), 1)
#export
def risinstance(types, obj=None):
"Curried `isinstance` but with args reversed"
types = tuplify(types)
if obj is None: return partial(risinstance,types)
if any(isinstance(t,str) for t in types):
return any(t.__name__ in types for t in type(obj).__mro__)
return isinstance(obj, types)
assert risinstance(int, 1)
assert not risinstance(str, 0)
assert risinstance(int)(1)
```
`types` can also be strings:
```
assert risinstance(('str','int'), 'a')
assert risinstance('str', 'a')
assert not risinstance('int', 'a')
```
## NoOp
These are used when you need a pass-through function.
```
show_doc(noop, title_level=4)
noop()
test_eq(noop(1),1)
show_doc(noops, title_level=4)
class _t: foo=noops
test_eq(_t().foo(1),1)
```
## Infinite Lists
These lists are useful for things like padding an array or adding index column(s) to arrays.
```
#export
#hide
class _InfMeta(type):
@property
def count(self): return itertools.count()
@property
def zeros(self): return itertools.cycle([0])
@property
def ones(self): return itertools.cycle([1])
@property
def nones(self): return itertools.cycle([None])
#export
class Inf(metaclass=_InfMeta):
"Infinite lists"
pass
show_doc(Inf, title_level=4);
```
`Inf` defines the following properties:
- `count: itertools.count()`
- `zeros: itertools.cycle([0])`
- `ones : itertools.cycle([1])`
- `nones: itertools.cycle([None])`
```
test_eq([o for i,o in zip(range(5), Inf.count)],
[0, 1, 2, 3, 4])
test_eq([o for i,o in zip(range(5), Inf.zeros)],
[0]*5)
test_eq([o for i,o in zip(range(5), Inf.ones)],
[1]*5)
test_eq([o for i,o in zip(range(5), Inf.nones)],
[None]*5)
```
## Operator Functions
```
#export
_dumobj = object()
def _oper(op,a,b=_dumobj): return (lambda o:op(o,a)) if b is _dumobj else op(a,b)
def _mk_op(nm, mod):
"Create an operator using `oper` and add to the caller's module"
op = getattr(operator,nm)
def _inner(a, b=_dumobj): return _oper(op, a,b)
_inner.__name__ = _inner.__qualname__ = nm
_inner.__doc__ = f'Same as `operator.{nm}`, or returns partial if 1 arg'
mod[nm] = _inner
#export
def in_(x, a):
"`True` if `x in a`"
return x in a
operator.in_ = in_
#export
_all_ = ['lt','gt','le','ge','eq','ne','add','sub','mul','truediv','is_','is_not','in_']
#export
for op in ['lt','gt','le','ge','eq','ne','add','sub','mul','truediv','is_','is_not','in_']: _mk_op(op, globals())
# test if element is in another
assert in_('c', ('b', 'c', 'a'))
assert in_(4, [2,3,4,5])
assert in_('t', 'fastai')
test_fail(in_('h', 'fastai'))
# use in_ as a partial
assert in_('fastai')('t')
assert in_([2,3,4,5])(4)
test_fail(in_('fastai')('h'))
```
In addition to `in_`, the following functions are provided matching the behavior of the equivalent versions in `operator`: *lt gt le ge eq ne add sub mul truediv is_ is_not*.
```
lt(3,5),gt(3,5),is_(None,None),in_(0,[1,2])
```
Similarly to `in_`, they also have additional functionality: if you only pass one param, they return a partial function that passes that param as the second positional parameter.
```
lt(5)(3),gt(5)(3),is_(None)(None),in_([1,2])(0)
#export
def true(*args, **kwargs):
"Predicate: always `True`"
return True
assert true(1,2,3)
assert true(False)
assert true(None)
assert true([])
#export
def stop(e=StopIteration):
    "Raises exception `e` (by default `StopIteration`)"
raise e
#export
def gen(func, seq, cond=true):
"Like `(func(o) for o in seq if cond(func(o)))` but handles `StopIteration`"
return itertools.takewhile(cond, map(func,seq))
test_eq(gen(noop, Inf.count, lt(5)),
range(5))
test_eq(gen(operator.neg, Inf.count, gt(-5)),
[0,-1,-2,-3,-4])
test_eq(gen(lambda o:o if o<5 else stop(), Inf.count),
range(5))
#export
def chunked(it, chunk_sz=None, drop_last=False, n_chunks=None):
"Return batches from iterator `it` of size `chunk_sz` (or return `n_chunks` total)"
assert bool(chunk_sz) ^ bool(n_chunks)
if n_chunks: chunk_sz = max(math.ceil(len(it)/n_chunks), 1)
if not isinstance(it, Iterator): it = iter(it)
while True:
res = list(itertools.islice(it, chunk_sz))
if res and (len(res)==chunk_sz or not drop_last): yield res
if len(res)<chunk_sz: return
```
Note that you must pass either `chunk_sz`, or `n_chunks`, but not both.
```
t = list(range(10))
test_eq(chunked(t,3), [[0,1,2], [3,4,5], [6,7,8], [9]])
test_eq(chunked(t,3,True), [[0,1,2], [3,4,5], [6,7,8], ])
t = map(lambda o:stop() if o==6 else o, Inf.count)
test_eq(chunked(t,3), [[0, 1, 2], [3, 4, 5]])
t = map(lambda o:stop() if o==7 else o, Inf.count)
test_eq(chunked(t,3), [[0, 1, 2], [3, 4, 5], [6]])
t = np.arange(10)
test_eq(chunked(t,3), [[0,1,2], [3,4,5], [6,7,8], [9]])
test_eq(chunked(t,3,True), [[0,1,2], [3,4,5], [6,7,8], ])
test_eq(chunked([], 3), [])
test_eq(chunked([], n_chunks=3), [])
#export
def otherwise(x, tst, y):
"`y if tst(x) else x`"
return y if tst(x) else x
test_eq(otherwise(2+1, gt(3), 4), 3)
test_eq(otherwise(2+1, gt(2), 4), 4)
```
## Attribute Helpers
These functions reduce boilerplate when setting or manipulating attributes or properties of objects.
```
#export
def custom_dir(c, add):
"Implement custom `__dir__`, adding `add` to `cls`"
return object.__dir__(c) + listify(add)
```
`custom_dir` allows you to extract the [`__dict__` property of a class](https://stackoverflow.com/questions/19907442/explain-dict-attribute) and appends the list `add` to it.
```
class _T:
def f(): pass
s = custom_dir(_T(), add=['foo', 'bar'])
assert {'foo', 'bar', 'f'}.issubset(s)
#export
class AttrDict(dict):
"`dict` subclass that also provides access to keys as attrs"
def __getattr__(self,k): return self[k] if k in self else stop(AttributeError(k))
def __setattr__(self, k, v): (self.__setitem__,super().__setattr__)[k[0]=='_'](k,v)
def __dir__(self): return super().__dir__() + list(self.keys())
show_doc(AttrDict, title_level=4)
d = AttrDict(a=1,b="two")
test_eq(d.a, 1)
test_eq(d['b'], 'two')
test_eq(d.get('c','nope'), 'nope')
d.b = 2
test_eq(d.b, 2)
test_eq(d['b'], 2)
d['b'] = 3
test_eq(d['b'], 3)
test_eq(d.b, 3)
assert 'a' in dir(d)
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
```
Below is a list of allowed types for type hints in Python:
```
list(typing._allowed_types)
```
For example, type `func` is allowed, so `type_hints` returns the same value as `typing.get_type_hints`:
```
def f(a:int)->bool: ... # a function with type hints (allowed)
exp = {'a':int,'return':bool}
test_eq(type_hints(f), typing.get_type_hints(f))
test_eq(type_hints(f), exp)
```
However, `class` is not an allowed type, so `type_hints` returns `{}`:
```
class _T:
def __init__(self, a:int=0)->bool: ...
assert not type_hints(_T)
#export
def annotations(o):
"Annotations for `o`, or `type(o)`"
res = {}
if not o: return res
res = type_hints(o)
if not res: res = type_hints(getattr(o,'__init__',None))
if not res: res = type_hints(type(o))
return res
```
This supports a wider range of situations than `type_hints`, by checking `type()` and `__init__` for annotations too:
```
for o in _T,_T(),_T.__init__,f: test_eq(annotations(o), exp)
assert not annotations(int)
assert not annotations(print)
#export
def anno_ret(func):
"Get the return annotation of `func`"
return annotations(func).get('return', None) if func else None
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> typing.Tuple[float,float]: return x
test_eq(anno_ret(f), typing.Tuple[float,float])
```
If your return annotation is `None`, `anno_ret` will return `NoneType` (and not `None`):
```
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
assert anno_ret(f) is not None # returns NoneType instead of None
```
If your function does not have a return type, or if you pass in `None` instead of a function, then `anno_ret` returns `None`:
```
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None) # instead of passing in a func, pass in None
#export
def argnames(f, frame=False):
"Names of arguments to function or frame `f`"
code = getattr(f, 'f_code' if frame else '__code__')
return code.co_varnames[:code.co_argcount+code.co_kwonlyargcount]
test_eq(argnames(f), ['x'])
#export
def with_cast(f):
"Decorator which uses any parameter annotations as preprocessing functions"
anno, out_anno, params = annotations(f), anno_ret(f), argnames(f)
c_out = ifnone(out_anno, noop)
defaults = dict(zip(reversed(params), reversed(f.__defaults__ or {})))
@functools.wraps(f)
def _inner(*args, **kwargs):
args = list(args)
for i,v in enumerate(params):
if v in anno:
c = anno[v]
if v in kwargs: kwargs[v] = c(kwargs[v])
elif i<len(args): args[i] = c(args[i])
elif v in defaults: kwargs[v] = c(defaults[v])
return c_out(f(*args, **kwargs))
return _inner
@with_cast
def _f(a, b:Path, c:str='', d=0): return (a,b,c,d)
test_eq(_f(1, '.', 3), (1,Path('.'),'3',0))
test_eq(_f(1, '.'), (1,Path('.'),'',0))
@with_cast
def _g(a:int=0)->str: return a
test_eq(_g(4.0), '4')
test_eq(_g(4.4), '4')
test_eq(_g(2), '2')
#export
def _store_attr(self, anno, **attrs):
stored = getattr(self, '__stored_args__', None)
for n,v in attrs.items():
if n in anno: v = anno[n](v)
setattr(self, n, v)
if stored is not None: stored[n] = v
#export
def store_attr(names=None, self=None, but='', cast=False, store_args=None, **attrs):
"Store params named in comma-separated `names` from calling context into attrs in `self`"
fr = sys._getframe(1)
args = argnames(fr, True)
if self: args = ('self', *args)
else: self = fr.f_locals[args[0]]
if store_args is None: store_args = not hasattr(self,'__slots__')
if store_args and not hasattr(self, '__stored_args__'): self.__stored_args__ = {}
anno = annotations(self) if cast else {}
if names and isinstance(names,str): names = re.split(', *', names)
ns = names if names is not None else getattr(self, '__slots__', args[1:])
added = {n:fr.f_locals[n] for n in ns}
attrs = {**attrs, **added}
if isinstance(but,str): but = re.split(', *', but)
attrs = {k:v for k,v in attrs.items() if k not in but}
return _store_attr(self, anno, **attrs)
```
In its most basic form, you can use `store_attr` to shorten code like this:

```
class T:
def __init__(self, a,b,c): self.a,self.b,self.c = a,b,c
```
...to this:
```
class T:
def __init__(self, a,b,c): store_attr('a,b,c', self)
```
This class behaves as if we'd used the first form:
```
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2
```
In addition, it stores the attrs as a `dict` in `__stored_args__`, which you can use for display, logging, and so forth.
```
test_eq(t.__stored_args__, {'a':1, 'b':3, 'c':2})
```
Since you normally want to use the first argument (often called `self`) for storing attributes, it's optional:
```
class T:
def __init__(self, a,b,c:str): store_attr('a,b,c')
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2
#hide
class _T:
def __init__(self, a,b):
c = 2
store_attr('a,b,c')
t = _T(1,b=3)
assert t.a==1 and t.b==3 and t.c==2
```
With `cast=True` any parameter annotations will be used as preprocessing functions for the corresponding arguments:
```
class T:
def __init__(self, a:listify, b, c:str): store_attr('a,b,c', cast=True)
t = T(1,c=2,b=3)
assert t.a==[1] and t.b==3 and t.c=='2'
```
You can inherit from a class using `store_attr`, and just call it again to add in any new attributes added in the derived class:
```
class T2(T):
def __init__(self, d, **kwargs):
super().__init__(**kwargs)
store_attr('d')
t = T2(d=1,a=2,b=3,c=4)
assert t.a==2 and t.b==3 and t.c==4 and t.d==1
```
You can skip passing a list of attrs to store. In this case, all arguments passed to the method are stored:
```
class T:
def __init__(self, a,b,c): store_attr()
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2
class T4(T):
def __init__(self, d, **kwargs):
super().__init__(**kwargs)
store_attr()
t = T4(4, a=1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2 and t.d==4
class T4:
def __init__(self, *, a: int, b: float = 1):
store_attr()
t = T4(a=3)
assert t.a==3 and t.b==1
t = T4(a=3, b=2)
assert t.a==3 and t.b==2
#hide
# ensure that subclasses work with or without `store_attr`
class T4(T):
def __init__(self, **kwargs):
super().__init__(**kwargs)
store_attr()
t = T4(a=1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2
class T4(T): pass
t = T4(a=1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2
#hide
#ensure that kwargs work with names==None
class T:
def __init__(self, a,b,c,**kwargs): store_attr(**kwargs)
t = T(1,c=2,b=3,d=4,e=-1)
assert t.a==1 and t.b==3 and t.c==2 and t.d==4 and t.e==-1
#hide
#ensure that kwargs work with names==''
class T:
def __init__(self, a, **kwargs):
self.a = a+1
store_attr('', **kwargs)
t = T(a=1, d=4)
test_eq(t.a, 2)
test_eq(t.d, 4)
```
You can skip some attrs by passing `but`:
```
class T:
def __init__(self, a,b,c): store_attr(but='a')
t = T(1,c=2,b=3)
assert t.b==3 and t.c==2
assert not hasattr(t,'a')
```
You can also pass keywords to `store_attr`, which is identical to setting the attrs directly, but also stores them in `__stored_args__`.
```
class T:
def __init__(self): store_attr(a=1)
t = T()
assert t.a==1
```
You can also use `store_attr` inside functions, by passing the target object as `self`:
```
def create_T(a, b):
t = SimpleNamespace()
store_attr(self=t)
return t
t = create_T(a=1, b=2)
assert t.a==1 and t.b==2
#export
def attrdict(o, *ks, default=None):
"Dict from each `k` in `ks` to `getattr(o,k)`"
return {k:getattr(o, k, default) for k in ks}
class T:
def __init__(self, a,b,c): store_attr()
t = T(1,c=2,b=3)
test_eq(attrdict(t,'b','c'), {'b':3, 'c':2})
#export
def properties(cls, *ps):
"Change attrs in `cls` with names in `ps` to properties"
for p in ps: setattr(cls,p,property(getattr(cls,p)))
class T:
def a(self): return 1
def b(self): return 2
properties(T,'a')
test_eq(T().a,1)
test_eq(T().b(),2)
#export
_c2w_re = re.compile(r'((?<=[a-z])[A-Z]|(?<!\A)[A-Z](?=[a-z]))')
_camel_re1 = re.compile('(.)([A-Z][a-z]+)')
_camel_re2 = re.compile('([a-z0-9])([A-Z])')
#export
def camel2words(s, space=' '):
"Convert CamelCase to 'spaced words'"
return re.sub(_c2w_re, rf'{space}\1', s)
test_eq(camel2words('ClassAreCamel'), 'Class Are Camel')
#export
def camel2snake(name):
"Convert CamelCase to snake_case"
s1 = re.sub(_camel_re1, r'\1_\2', name)
return re.sub(_camel_re2, r'\1_\2', s1).lower()
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
test_eq(camel2snake('Already_Snake'), 'already__snake')
#export
def snake2camel(s):
"Convert snake_case to CamelCase"
return ''.join(s.title().split('_'))
test_eq(snake2camel('a_b_cc'), 'ABCc')
#export
def class2attr(self, cls_name):
"Return the snake-cased name of the class; strip ending `cls_name` if it exists."
return camel2snake(re.sub(rf'{cls_name}$', '', self.__class__.__name__) or cls_name.lower())
class Parent:
@property
def name(self): return class2attr(self, 'Parent')
class ChildOfParent(Parent): pass
class ParentChildOf(Parent): pass
p = Parent()
cp = ChildOfParent()
cp2 = ParentChildOf()
test_eq(p.name, 'parent')
test_eq(cp.name, 'child_of')
test_eq(cp2.name, 'parent_child_of')
#export
def getattrs(o, *attrs, default=None):
"List of all `attrs` in `o`"
return [getattr(o,attr,default) for attr in attrs]
from fractions import Fraction
getattrs(Fraction(1,2), 'numerator', 'denominator')
#export
def hasattrs(o,attrs):
"Test whether `o` contains all `attrs`"
return all(hasattr(o,attr) for attr in attrs)
assert hasattrs(1,('imag','real'))
assert not hasattrs(1,('imag','foo'))
#export
def setattrs(dest, flds, src):
"Set attributes in comma-separated `flds` of `dest` from the corresponding entries in `src` (a `dict` or object)"
f = dict.get if isinstance(src, dict) else getattr
flds = re.split(r",\s*", flds)
for fld in flds: setattr(dest, fld, f(src, fld))
d = dict(a=1,bb="2",ignore=3)
o = SimpleNamespace()
setattrs(o, "a,bb", d)
test_eq(o.a, 1)
test_eq(o.bb, "2")
d = SimpleNamespace(a=1,bb="2",ignore=3)
o = SimpleNamespace()
setattrs(o, "a,bb", d)
test_eq(o.a, 1)
test_eq(o.bb, "2")
#export
def try_attrs(obj, *attrs):
"Return first attr that exists in `obj`"
for att in attrs:
try: return getattr(obj, att)
except: pass
raise AttributeError(attrs)
test_eq(try_attrs(1, 'real'), 1)
test_eq(try_attrs(1, 'foobar', 'real'), 1)
```
## Attribute Delegation
```
#export
class GetAttrBase:
"Basic delegation of `__getattr__` and `__dir__`"
_attr=noop
def __getattr__(self,k):
if k[0]=='_' or k==self._attr: return super().__getattr__(k)
return self._getattr(getattr(self, self._attr)[k])
def __dir__(self): return custom_dir(self, getattr(self, self._attr))
#export
class GetAttr:
"Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`"
_default='default'
def _component_attr_filter(self,k):
if k.startswith('__') or k in ('_xtra',self._default): return False
xtra = getattr(self,'_xtra',None)
return xtra is None or k in xtra
def _dir(self): return [k for k in dir(getattr(self,self._default)) if self._component_attr_filter(k)]
def __getattr__(self,k):
if self._component_attr_filter(k):
attr = getattr(self,self._default,None)
if attr is not None: return getattr(attr,k)
raise AttributeError(k)
def __dir__(self): return custom_dir(self,self._dir())
# def __getstate__(self): return self.__dict__
def __setstate__(self,data): self.__dict__.update(data)
show_doc(GetAttr, title_level=4)
```
Inherit from `GetAttr` to have attr access passed down to an instance attribute.
This makes it easy to create composites that don't require callers to know about their components. For a more detailed discussion of how this works as well as relevant context, we suggest reading the [delegated composition section of this blog article](https://www.fast.ai/2019/08/06/delegation/).
You can customise the behaviour of `GetAttr` in subclasses via:
- `_default`
- By default, this is set to `'default'`, so attr access is passed down to `self.default`
- `_default` can be set to the name of any instance attribute that does not start with dunder `__`
- `_xtra`
- By default, this is `None`, so all attr access is passed down
- You can limit which attrs get passed down by setting `_xtra` to a list of attribute names
To illuminate the utility of `GetAttr`, suppose we have the following two classes, `_WebPage` which is a superclass of `_ProductPage`, which we wish to compose like so:
```
class _WebPage:
def __init__(self, title, author="Jeremy"):
self.title,self.author = title,author
class _ProductPage:
def __init__(self, page, price): self.page,self.price = page,price
page = _WebPage('Soap', author="Sylvain")
p = _ProductPage(page, 15.0)
```
How do we make it so we can just write `p.author`, instead of `p.page.author` to access the `author` attribute? We can use `GetAttr`, of course! First, we subclass `GetAttr` when defining `_ProductPage`. Next, we set `self.default` to the object whose attributes we want to be able to access directly, which in this case is the `page` argument passed on initialization:
```
class _ProductPage(GetAttr):
def __init__(self, page, price): self.default,self.price = page,price #self.default allows you to access page directly.
p = _ProductPage(page, 15.0)
```
Now, we can access the `author` attribute directly from the instance:
```
test_eq(p.author, 'Sylvain')
```
If you wish to store the object you are composing in an attribute other than `self.default`, you can set the class attribute `_default` to the name of that attribute, as shown below. This is useful when you might otherwise have a name collision with `self.default`:
```
class _C(GetAttr):
_default = '_data' # use different component name; `self._data` rather than `self.default`
def __init__(self,a): self._data = a
def foo(self): noop
t = _C('Hi')
test_eq(t._data, 'Hi')
test_fail(lambda: t.default) # we no longer have self.default
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)
```
By default, all attributes and methods of the object you are composing are retained. In the below example, we compose a `str` object with the class `_C`. This allows us to directly call string methods on instances of class `_C`, such as `str.lower()` or `str.upper()`:
```
class _C(GetAttr):
# allow all attributes and methods to get passed to `self.default` (by leaving _xtra=None)
def __init__(self,a): self.default = a
def foo(self): noop
t = _C('Hi')
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)
```
However, you can choose which attributes or methods to retain by defining a class attribute `_xtra`, which is a list of allowed attribute and method names to delegate. In the below example, we only delegate the `lower` method from the composed `str` object when defining class `_C`:
```
class _C(GetAttr):
_xtra = ['lower'] # specify which attributes get passed to `self.default`
def __init__(self,a): self.default = a
def foo(self): noop
t = _C('Hi')
test_eq(t.default, 'Hi')
test_eq(t.lower(), 'hi')
test_fail(lambda: t.upper()) # upper wasn't in _xtra, so it isn't available to be called
assert 'lower' in dir(t)
assert 'upper' not in dir(t)
```
You must be careful to properly set an instance attribute in `__init__` that corresponds to the class attribute `_default`. The below example sets the class attribute `_default` to `data`, but erroneously fails to define `self.data` (and instead defines `self.default`).
Failing to properly set instance attributes leads to errors when you try to access methods directly:
```
class _C(GetAttr):
_default = 'data' # use a bad component name; i.e. self.data does not exist
def __init__(self,a): self.default = a
def foo(self): noop
# TODO: should we raise an error when we create a new instance ...
t = _C('Hi')
test_eq(t.default, 'Hi')
# ... or is it enough for all GetAttr features to raise errors
test_fail(lambda: t.data)
test_fail(lambda: t.lower())
test_fail(lambda: t.upper())
test_fail(lambda: dir(t))
#hide
# I don't think this test is essential to the docs but it probably makes sense to
# check that everything works when we set both _xtra and _default to non-default values
class _C(GetAttr):
_xtra = ['lower', 'upper']
_default = 'data'
def __init__(self,a): self.data = a
def foo(self): noop
t = _C('Hi')
test_eq(t.data, 'Hi')
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)
#hide
# when consolidating the filter logic, I choose the previous logic from
# __getattr__ k.startswith('__') rather than
# _dir k.startswith('_').
class _C(GetAttr):
def __init__(self): self.default = type('_D', (), {'_under': 1, '__dunder': 2})()
t = _C()
test_eq(t.default._under, 1)
test_eq(t._under, 1) # _ prefix attr access is allowed on component
assert '_under' in dir(t)
test_eq(t.default.__dunder, 2)
test_fail(lambda: t.__dunder) # __ prefix attr access is not allowed on component
assert '__dunder' not in dir(t)
assert t.__dir__ is not None # __ prefix attr access is allowed on composite
assert '__dir__' in dir(t)
#hide
#Failing test. TODO: make GetAttr pickle-safe
# class B:
# def __init__(self): self.a = A()
# @funcs_kwargs
# class A(GetAttr):
# wif=after_iter= noops
# _methods = 'wif after_iter'.split()
# _default = 'dataset'
# def __init__(self, **kwargs): pass
# a = A()
# b = A(wif=a.wif)
# a = A()
# b = A(wif=a.wif)
# tst = pickle.dumps(b)
# c = pickle.loads(tst)
#export
def delegate_attr(self, k, to):
"Use in `__getattr__` to delegate to attr `to` without inheriting from `GetAttr`"
if k.startswith('_') or k==to: raise AttributeError(k)
try: return getattr(getattr(self,to), k)
except AttributeError: raise AttributeError(k) from None
```
`delegate_attr` is a functional way to delegate attributes, and is an alternative to `GetAttr`. We recommend reading the documentation of `GetAttr` for more details around delegation.
You can achieve delegation when you define `__getattr__` by using `delegate_attr`:
```
#hide
import pandas as pd
class _C:
def __init__(self, o): self.o = o # self.o corresponds to the `to` argument in delegate_attr.
def __getattr__(self, k): return delegate_attr(self, k, to='o')
t = _C('HELLO') # delegates to a string
test_eq(t.lower(), 'hello')
t = _C(np.array([5,4,3])) # delegates to a numpy array
test_eq(t.sum(), 12)
t = _C(pd.DataFrame({'a': [1,2], 'b': [3,4]})) # delegates to a pandas.DataFrame
test_eq(t.b.max(), 4)
```
## Extensible Types
`ShowPrint` is a base class that defines a `show` method, which is used primarily for callbacks in fastai that expect this method to be defined.
```
#export
#hide
class ShowPrint:
"Base class that prints for `show`"
def show(self, *args, **kwargs): print(str(self))
```
`Int`, `Float`, and `Str` extend `int`, `float` and `str` respectively by adding an additional `show` method by inheriting from `ShowPrint`.
The code for `Int` is shown below:
```
#export
#hide
class Int(int,ShowPrint):
"An extensible `int`"
pass
#export
#hide
class Str(str,ShowPrint):
"An extensible `str`"
pass
class Float(float,ShowPrint):
"An extensible `float`"
pass
```
Examples:
```
Int(0).show()
Float(2.0).show()
Str('Hello').show()
```
## Collection functions
Functions that manipulate popular python collections.
```
#export
def flatten(o):
"Concatenate all collections and items as a generator"
for item in o:
try: yield from flatten(item)
except TypeError: yield item
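For instance, `flatten` recurses into nested iterables and yields non-iterable items as-is. A quick sketch, with the definition repeated so the snippet runs standalone:
```
def flatten(o):
    # repeated from the definition above so this snippet is self-contained
    for item in o:
        try: yield from flatten(item)
        except TypeError: yield item

assert list(flatten([1,[2,(3,4)],5])) == [1,2,3,4,5]
```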
#export
def concat(colls)->list:
"Concatenate all collections and items as a list"
return list(flatten(colls))
concat([(o for o in range(2)),[2,3,4], 5])
#export
def strcat(its, sep:str='')->str:
"Concatenate stringified items `its`"
return sep.join(map(str,its))
test_eq(strcat(['a',2]), 'a2')
test_eq(strcat(['a',2], ';'), 'a;2')
#export
def detuplify(x):
"If `x` is a tuple with one thing, extract it"
return None if len(x)==0 else x[0] if len(x)==1 and getattr(x, 'ndim', 1)==1 else x
test_eq(detuplify(()),None)
test_eq(detuplify([1]),1)
test_eq(detuplify([1,2]), [1,2])
test_eq(detuplify(np.array([[1,2]])), np.array([[1,2]]))
#export
def replicate(item,match):
"Create tuple of `item` copied `len(match)` times"
return (item,)*len(match)
t = [1,1]
test_eq(replicate([1,2], t),([1,2],[1,2]))
test_eq(replicate(1, t),(1,1))
# export
def setify(o):
"Turn any list like-object into a set."
return o if isinstance(o,set) else set(listify(o))
# test
test_eq(setify(None),set())
test_eq(setify('abc'),{'abc'})
test_eq(setify([1,2,2]),{1,2})
test_eq(setify(range(0,3)),{0,1,2})
test_eq(setify({1,2}),{1,2})
#export
def merge(*ds):
"Merge all dictionaries in `ds`"
return {k:v for d in ds if d is not None for k,v in d.items()}
test_eq(merge(), {})
test_eq(merge(dict(a=1,b=2)), dict(a=1,b=2))
test_eq(merge(dict(a=1,b=2), dict(b=3,c=4), None), dict(a=1, b=3, c=4))
#export
def range_of(x):
"All indices of collection `x` (i.e. `list(range(len(x)))`)"
return list(range(len(x)))
test_eq(range_of([1,1,1,1]), [0,1,2,3])
#export
def groupby(x, key, val=noop):
"Like `itertools.groupby` but doesn't need to be sorted, and isn't lazy, plus some extensions"
if isinstance(key,int): key = itemgetter(key)
elif isinstance(key,str): key = attrgetter(key)
if isinstance(val,int): val = itemgetter(val)
elif isinstance(val,str): val = attrgetter(val)
res = {}
for o in x: res.setdefault(key(o), []).append(val(o))
return res
test_eq(groupby('aa ab bb'.split(), itemgetter(0)), {'a':['aa','ab'], 'b':['bb']})
```
Here's an example of how to *invert* a grouping, using an `int` as `key` (which uses `itemgetter`; passing a `str` will use `attrgetter`), and using a `val` function:
```
d = {0: [1, 3, 7], 2: [3], 3: [5], 4: [8], 5: [4], 7: [5]}
groupby(((o,k) for k,v in d.items() for o in v), 0, 1)
#export
def last_index(x, o):
"Finds the last index of occurence of `x` in `o` (returns -1 if no occurence)"
try: return next(i for i in reversed(range(len(o))) if o[i] == x)
except StopIteration: return -1
test_eq(last_index(9, [1, 2, 9, 3, 4, 9, 10]), 5)
test_eq(last_index(6, [1, 2, 9, 3, 4, 9, 10]), -1)
#export
def filter_dict(d, func):
"Filter a `dict` using `func`, applied to keys and values"
return {k:v for k,v in d.items() if func(k,v)}
letters = {o:chr(o) for o in range(65,73)}
letters
filter_dict(letters, lambda k,v: k<67 or v in 'FG')
#export
def filter_keys(d, func):
"Filter a `dict` using `func`, applied to keys"
return {k:v for k,v in d.items() if func(k)}
filter_keys(letters, lt(67))
#export
def filter_values(d, func):
"Filter a `dict` using `func`, applied to values"
return {k:v for k,v in d.items() if func(v)}
filter_values(letters, in_('FG'))
#export
def cycle(o):
"Like `itertools.cycle` except creates list of `None`s if `o` is empty"
o = listify(o)
return itertools.cycle(o) if o is not None and len(o) > 0 else itertools.cycle([None])
test_eq(itertools.islice(cycle([1,2,3]),5), [1,2,3,1,2])
test_eq(itertools.islice(cycle([]),3), [None]*3)
test_eq(itertools.islice(cycle(None),3), [None]*3)
test_eq(itertools.islice(cycle(1),3), [1,1,1])
#export
def zip_cycle(x, *args):
"Like `itertools.zip_longest` but `cycle`s through elements of all but first argument"
return zip(x, *map(cycle,args))
test_eq(zip_cycle([1,2,3,4],list('abc')), [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'a')])
#export
def sorted_ex(iterable, key=None, reverse=False):
"Like `sorted`, but if key is str use `attrgetter`; if int use `itemgetter`"
if isinstance(key,str): k=lambda o:getattr(o,key,0)
elif isinstance(key,int): k=itemgetter(key)
else: k=key
return sorted(iterable, key=k, reverse=reverse)
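For instance, a `str` key sorts by an attribute of each element (missing attributes sort as `0`), and an `int` key sorts by index. A quick sketch, with the definition repeated so the snippet runs standalone (`people` is hypothetical data):
```
from operator import itemgetter
from types import SimpleNamespace

def sorted_ex(iterable, key=None, reverse=False):
    # repeated from the definition above so this snippet is self-contained
    if isinstance(key,str): k = lambda o: getattr(o,key,0)
    elif isinstance(key,int): k = itemgetter(key)
    else: k = key
    return sorted(iterable, key=k, reverse=reverse)

people = [SimpleNamespace(name='b'), SimpleNamespace(name='a')]
assert [p.name for p in sorted_ex(people, key='name')] == ['a','b']  # str key -> attribute
assert sorted_ex([(1,'b'),(0,'a')], key=0) == [(0,'a'),(1,'b')]      # int key -> index
assert sorted_ex([3,1,2], reverse=True) == [3,2,1]                   # otherwise, plain `sorted`
```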
#export
def not_(f):
"Create new function that negates result of `f`"
def _f(*args, **kwargs): return not f(*args, **kwargs)
return _f
def f(a): return a>0
test_eq(f(1),True)
test_eq(not_(f)(1),False)
test_eq(not_(f)(a=-1),True)
#export
def argwhere(iterable, f, negate=False, **kwargs):
"Like `filter_ex`, but return indices for matching items"
if kwargs: f = partial(f,**kwargs)
if negate: f = not_(f)
return [i for i,o in enumerate(iterable) if f(o)]
#export
def filter_ex(iterable, f=noop, negate=False, gen=False, **kwargs):
"Like `filter`, but passing `kwargs` to `f`, defaulting `f` to `noop`, and adding `negate` and `gen`"
if f is None: f = lambda _: True
if kwargs: f = partial(f,**kwargs)
if negate: f = not_(f)
res = filter(f, iterable)
if gen: return res
return list(res)
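For instance, given a predicate with a keyword argument, `argwhere` returns the matching indices while `filter_ex` returns the items themselves. A quick sketch, with the definitions repeated so the snippet runs standalone (`gt` is a hypothetical predicate, and `noop` is a stand-in for the helper of the same name):
```
from functools import partial

def noop(x=None, *args, **kwargs): return x  # stand-in for the `noop` helper

def not_(f):
    # repeated from the definition above
    def _f(*args, **kwargs): return not f(*args, **kwargs)
    return _f

def argwhere(iterable, f, negate=False, **kwargs):
    # repeated from the definition above
    if kwargs: f = partial(f,**kwargs)
    if negate: f = not_(f)
    return [i for i,o in enumerate(iterable) if f(o)]

def filter_ex(iterable, f=noop, negate=False, gen=False, **kwargs):
    # repeated from the definition above
    if f is None: f = lambda _: True
    if kwargs: f = partial(f,**kwargs)
    if negate: f = not_(f)
    res = filter(f, iterable)
    if gen: return res
    return list(res)

def gt(x, a): return x > a  # hypothetical predicate taking a keyword arg

assert argwhere([1,5,2,8], gt, a=4) == [1,3]              # indices of items > 4
assert filter_ex([1,5,2,8], gt, a=4) == [5,8]             # the items themselves
assert filter_ex([1,5,2,8], gt, negate=True, a=4) == [1,2]
```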
#export
def range_of(a, b=None, step=None):
"All indices of collection `a`, if `a` is a collection, otherwise `range`"
if is_coll(a): a = len(a)
return list(range(a,b,step) if step is not None else range(a,b) if b is not None else range(a))
test_eq(range_of([1,1,1,1]), [0,1,2,3])
test_eq(range_of(4), [0,1,2,3])
#export
def renumerate(iterable, start=0):
"Same as `enumerate`, but returns index as 2nd element instead of 1st"
return ((o,i) for i,o in enumerate(iterable, start=start))
test_eq(renumerate('abc'), (('a',0),('b',1),('c',2)))
#export
def first(x, f=None, negate=False, **kwargs):
"First element of `x`, optionally filtered by `f`, or None if missing"
x = iter(x)
if f: x = filter_ex(x, f=f, negate=negate, gen=True, **kwargs)
return next(x, None)
test_eq(first(['a', 'b', 'c', 'd', 'e']), 'a')
test_eq(first([False]), False)
test_eq(first([False], noop), None)
#export
def nested_attr(o, attr, default=None):
"Same as `getattr`, but if `attr` includes a `.`, then looks inside nested objects"
try:
for a in attr.split("."): o = getattr(o, a)
except AttributeError: return default
return o
a = SimpleNamespace(b=(SimpleNamespace(c=1)))
test_eq(nested_attr(a, 'b.c'), getattr(getattr(a, 'b'), 'c'))
test_eq(nested_attr(a, 'b.d'), None)
#export
def nested_idx(coll, *idxs):
"Index into nested collections, dicts, etc, with `idxs`"
if not coll or not idxs: return coll
if isinstance(coll,str) or not isinstance(coll, typing.Collection): return None
res = coll.get(idxs[0], None) if hasattr(coll, 'get') else coll[idxs[0]] if idxs[0]<len(coll) else None
return nested_idx(res, *idxs[1:])
a = {'b':[1,{'c':2}]}
test_eq(nested_idx(a), a)
test_eq(nested_idx(a, 'b'), [1,{'c':2}])
test_eq(nested_idx(a, 'b', 1), {'c':2})
test_eq(nested_idx(a, 'b', 1, 'c'), 2)
#export
def val2idx(x):
"Dict from value to index"
return {v:k for k,v in enumerate(x)}
test_eq(val2idx([1,2,3]), {3:2,1:0,2:1})
#export
def uniqueify(x, sort=False, bidir=False, start=None):
"Unique elements in `x`, optionally `sort`-ed, optionally return reverse correspondence, optionally prepend with elements."
res = list(dict.fromkeys(x))
if start is not None: res = listify(start)+res
if sort: res.sort()
return (res,val2idx(res)) if bidir else res
t = [1,1,0,5,0,3]
test_eq(uniqueify(t),[1,0,5,3])
test_eq(uniqueify(t, sort=True),[0,1,3,5])
test_eq(uniqueify(t, start=[7,8,6]), [7,8,6,1,0,5,3])
v,o = uniqueify(t, bidir=True)
test_eq(v,[1,0,5,3])
test_eq(o,{1:0, 0: 1, 5: 2, 3: 3})
v,o = uniqueify(t, sort=True, bidir=True)
test_eq(v,[0,1,3,5])
test_eq(o,{0:0, 1: 1, 3: 2, 5: 3})
#export
# looping functions from https://github.com/willmcgugan/rich/blob/master/rich/_loop.py
def loop_first_last(values):
"Iterate and generate a tuple with a flag for first and last value."
iter_values = iter(values)
try: previous_value = next(iter_values)
except StopIteration: return
first = True
for value in iter_values:
yield first,False,previous_value
first,previous_value = False,value
yield first,True,previous_value
test_eq(loop_first_last(range(3)), [(True,False,0), (False,False,1), (False,True,2)])
#export
def loop_first(values):
"Iterate and generate a tuple with a flag for first value."
return ((b,o) for b,_,o in loop_first_last(values))
test_eq(loop_first(range(3)), [(True,0), (False,1), (False,2)])
#export
def loop_last(values):
"Iterate and generate a tuple with a flag for last value."
return ((b,o) for _,b,o in loop_first_last(values))
test_eq(loop_last(range(3)), [(False,0), (False,1), (True,2)])
```
## fastuple
A tuple with extended functionality.
```
#export
num_methods = """
__add__ __sub__ __mul__ __matmul__ __truediv__ __floordiv__ __mod__ __divmod__ __pow__
__lshift__ __rshift__ __and__ __xor__ __or__ __neg__ __pos__ __abs__
""".split()
rnum_methods = """
__radd__ __rsub__ __rmul__ __rmatmul__ __rtruediv__ __rfloordiv__ __rmod__ __rdivmod__
__rpow__ __rlshift__ __rrshift__ __rand__ __rxor__ __ror__
""".split()
inum_methods = """
__iadd__ __isub__ __imul__ __imatmul__ __itruediv__
__ifloordiv__ __imod__ __ipow__ __ilshift__ __irshift__ __iand__ __ixor__ __ior__
""".split()
#export
class fastuple(tuple):
"A `tuple` with elementwise ops and more friendly __init__ behavior"
def __new__(cls, x=None, *rest):
if x is None: x = ()
if not isinstance(x,tuple):
if len(rest): x = (x,)
else:
try: x = tuple(iter(x))
except TypeError: x = (x,)
return super().__new__(cls, x+rest if rest else x)
def _op(self,op,*args):
if not isinstance(self,fastuple): self = fastuple(self)
return type(self)(map(op,self,*map(cycle, args)))
def mul(self,*args):
"`*` is already defined in `tuple` for replicating, so use `mul` instead"
return fastuple._op(self, operator.mul,*args)
def add(self,*args):
"`+` is already defined in `tuple` for concat, so use `add` instead"
return fastuple._op(self, operator.add,*args)
def _get_op(op):
if isinstance(op,str): op = getattr(operator,op)
def _f(self,*args): return self._op(op,*args)
return _f
for n in num_methods:
if not hasattr(fastuple, n) and hasattr(operator,n): setattr(fastuple,n,_get_op(n))
for n in 'eq ne lt le gt ge'.split(): setattr(fastuple,n,_get_op(n))
setattr(fastuple,'__invert__',_get_op('__not__'))
setattr(fastuple,'max',_get_op(max))
setattr(fastuple,'min',_get_op(min))
show_doc(fastuple, title_level=4)
```
#### Friendly init behavior
Common failure modes when trying to initialize a tuple in Python:
```py
tuple(3)
> TypeError: 'int' object is not iterable
```
or
```py
tuple(3, 4)
> TypeError: tuple expected at most 1 argument, got 2
```
However, `fastuple` allows you to define tuples like this and in the usual way:
```
test_eq(fastuple(3), (3,))
test_eq(fastuple(3,4), (3, 4))
test_eq(fastuple((3,4)), (3, 4))
```
#### Elementwise operations
```
show_doc(fastuple.add, title_level=5)
test_eq(fastuple.add((1,1),(2,2)), (3,3))
test_eq_type(fastuple(1,1).add(2), fastuple(3,3))
test_eq(fastuple('1','2').add('2'), fastuple('12','22'))
show_doc(fastuple.mul, title_level=5)
test_eq_type(fastuple(1,1).mul(2), fastuple(2,2))
```
#### Other Elementwise Operations
Additionally, the following elementwise operations are available:
- `le`: less than or equal
- `eq`: equal
- `gt`: greater than
- `min`: minimum of
```
test_eq(fastuple(3,1).le(1), (False, True))
test_eq(fastuple(3,1).eq(1), (False, True))
test_eq(fastuple(3,1).gt(1), (True, False))
test_eq(fastuple(3,1).min(2), (2,1))
```
You can also do other elementwise operations like negate a `fastuple`, or subtract two `fastuple`s:
```
test_eq(-fastuple(1,2), (-1,-2))
test_eq(~fastuple(1,0,1), (False,True,False))
test_eq(fastuple(1,1)-fastuple(2,2), (-1,-1))
test_eq(type(fastuple(1)), fastuple)
test_eq_type(fastuple(1,2), fastuple(1,2))
test_ne(fastuple(1,2), fastuple(1,3))
test_eq(fastuple(), ())
```
## Functions on Functions
Utilities for functional programming or for defining, modifying, or debugging functions.
```
# export
class _Arg:
def __init__(self,i): self.i = i
arg0 = _Arg(0)
arg1 = _Arg(1)
arg2 = _Arg(2)
arg3 = _Arg(3)
arg4 = _Arg(4)
#export
class bind:
"Same as `partial`, except you can use `arg0` `arg1` etc param placeholders"
def __init__(self, func, *pargs, **pkwargs):
self.func,self.pargs,self.pkwargs = func,pargs,pkwargs
self.maxi = max((x.i for x in pargs if isinstance(x, _Arg)), default=-1)
def __call__(self, *args, **kwargs):
args = list(args)
kwargs = {**self.pkwargs,**kwargs}
for k,v in kwargs.items():
if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
return self.func(*fargs, **kwargs)
show_doc(bind, title_level=3)
```
`bind` is the same as `partial`, but also allows you to reorder positional arguments using the placeholder variables `arg{i}`, where `i` refers to the zero-indexed positional argument. `bind` as currently implemented only supports reordering of up to the first 5 positional arguments.
Consider the function `myfn` below, which has 3 required positional arguments. These arguments can be referenced as `arg0`, `arg1`, and `arg2`, respectively.
```
def myfn(a,b,c,d=1,e=2): return(a,b,c,d,e)
```
In the below example we bind the positional arguments of `myfn` as follows:
- The second input `14`, referenced by `arg1`, is substituted for the first positional argument.
- We supply a default value of `17` for the second positional argument.
- The first input `19`, referenced by `arg0`, is substituted for the third positional argument.
```
test_eq(bind(myfn, arg1, 17, arg0, e=3)(19,14), (14,17,19,1,3))
```
In this next example:
- We set the default value to `17` for the first positional argument.
- The first input `19`, referenced by `arg0`, becomes the second positional argument.
- The second input `14` becomes the third positional argument.
- We override the default value of the named argument `e` with `3`.
```
test_eq(bind(myfn, 17, arg0, e=3)(19,14), (17,19,14,1,3))
```
This is an example of using `bind` like `partial`, without reordering any arguments:
```
test_eq(bind(myfn)(17,19,14), (17,19,14,1,2))
```
`bind` can also be used to change default values. In the below example, we use the first input `3` to override the default value of the named argument `e`, and supply default values for the first three positional arguments:
```
test_eq(bind(myfn, 17,19,14,e=arg0)(3), (17,19,14,1,3))
#export
def mapt(func, *iterables):
"Tuplified `map`"
return tuple(map(func, *iterables))
t = [0,1,2,3]
test_eq(mapt(operator.neg, t), (0,-1,-2,-3))
#export
def map_ex(iterable, f, *args, gen=False, **kwargs):
"Like `map`, but use `bind`, and supports `str` and indexing"
g = (bind(f,*args,**kwargs) if callable(f)
else f.format if isinstance(f,str)
else f.__getitem__)
res = map(g, iterable)
if gen: return res
return list(res)
test_eq(map_ex(t,operator.neg), [0,-1,-2,-3])
```
If `f` is a string then it is treated as a format string to create the mapping:
```
test_eq(map_ex(t, '#{}#'), ['#0#','#1#','#2#','#3#'])
```
If `f` is a dictionary (or anything supporting `__getitem__`) then it is indexed to create the mapping:
```
test_eq(map_ex(t, list('abcd')), list('abcd'))
```
You can also pass the same `arg` params that `bind` accepts:
```
def f(a=None,b=None): return b
test_eq(map_ex(t, f, b=arg0), range(4))
# export
def compose(*funcs, order=None):
"Create a function that composes all functions in `funcs`, passing along remaining `*args` and `**kwargs` to all"
funcs = listify(funcs)
if len(funcs)==0: return noop
if len(funcs)==1: return funcs[0]
if order is not None: funcs = sorted_ex(funcs, key=order)
def _inner(x, *args, **kwargs):
for f in funcs: x = f(x, *args, **kwargs)
return x
return _inner
f1 = lambda o,p=0: (o*2)+p
f2 = lambda o,p=1: (o+1)/p
test_eq(f2(f1(3)), compose(f1,f2)(3))
test_eq(f2(f1(3,p=3),p=3), compose(f1,f2)(3,p=3))
test_eq(f2(f1(3, 3), 3), compose(f1,f2)(3, 3))
f1.order = 1
test_eq(f1(f2(3)), compose(f1,f2, order="order")(3))
#export
def maps(*args, retain=noop):
"Like `map`, except funcs are composed first"
f = compose(*args[:-1])
def _f(b): return retain(f(b), b)
return map(_f, args[-1])
test_eq(maps([1]), [1])
test_eq(maps(operator.neg, [1,2]), [-1,-2])
test_eq(maps(operator.neg, operator.neg, [1,2]), [1,2])
#export
def partialler(f, *args, order=None, **kwargs):
"Like `functools.partial` but also copies over docstring"
fnew = partial(f,*args,**kwargs)
fnew.__doc__ = f.__doc__
if order is not None: fnew.order=order
elif hasattr(f,'order'): fnew.order=f.order
return fnew
def _f(x,a=1):
"test func"
return x-a
_f.order=1
f = partialler(_f, 2)
test_eq(f.order, 1)
test_eq(f(3), -1)
f = partialler(_f, a=2, order=3)
test_eq(f.__doc__, "test func")
test_eq(f.order, 3)
test_eq(f(3), _f(3,2))
class partial0:
"Like `partialler`, but args passed to callable are inserted at started, instead of at end"
def __init__(self, f, *args, order=None, **kwargs):
self.f,self.args,self.kwargs = f,args,kwargs
self.order = ifnone(order, getattr(f,'order',None))
self.__doc__ = f.__doc__
def __call__(self, *args, **kwargs): return self.f(*args, *self.args, **kwargs, **self.kwargs)
f = partial0(_f, 2)
test_eq(f.order, 1)
test_eq(f(3), 1) # NB: different to `partialler` example
#export
def instantiate(t):
"Instantiate `t` if it's a type, otherwise do nothing"
return t() if isinstance(t, type) else t
test_eq_type(instantiate(int), 0)
test_eq_type(instantiate(1), 1)
#export
def _using_attr(f, attr, x): return f(getattr(x,attr))
#export
def using_attr(f, attr):
"Construct a function which applies `f` to the argument's attribute `attr`"
return partial(_using_attr, f, attr)
t = Path('/a/b.txt')
f = using_attr(str.upper, 'name')
test_eq(f(t), 'B.TXT')
```
### Self (with an _uppercase_ S)
A Concise Way To Create Lambdas
```
#export
class _Self:
"An alternative to `lambda` for calling methods on passed object."
def __init__(self): self.nms,self.args,self.kwargs,self.ready = [],[],[],True
def __repr__(self): return f'self: {self.nms}({self.args}, {self.kwargs})'
def __call__(self, *args, **kwargs):
if self.ready:
x = args[0]
for n,a,k in zip(self.nms,self.args,self.kwargs):
x = getattr(x,n)
if callable(x) and a is not None: x = x(*a, **k)
return x
else:
self.args.append(args)
self.kwargs.append(kwargs)
self.ready = True
return self
def __getattr__(self,k):
if not self.ready:
self.args.append(None)
self.kwargs.append(None)
self.nms.append(k)
self.ready = False
return self
def _call(self, *args, **kwargs):
self.args,self.kwargs,self.nms = [args],[kwargs],['__call__']
self.ready = True
return self
#export
class _SelfCls:
def __getattr__(self,k): return getattr(_Self(),k)
def __getitem__(self,i): return self.__getattr__('__getitem__')(i)
def __call__(self,*args,**kwargs): return self.__getattr__('_call')(*args,**kwargs)
Self = _SelfCls()
#export
_all_ = ['Self']
```
This is a concise way to create lambdas that call methods on an object (note the capitalization!).
`Self.sum()`, for instance, is a shortcut for `lambda o: o.sum()`.
```
f = Self.sum()
x = np.array([3.,1])
test_eq(f(x), 4.)
# This is equivalent to above
f = lambda o: o.sum()
x = np.array([3.,1])
test_eq(f(x), 4.)
f = Self.argmin()
arr = np.array([1,2,3,4,5])
test_eq(f(arr), arr.argmin())
f = Self.sum().is_integer()
x = np.array([3.,1])
test_eq(f(x), True)
f = Self.sum().real.is_integer()
x = np.array([3.,1])
test_eq(f(x), True)
f = Self.imag()
test_eq(f(3), 0)
f = Self[1]
test_eq(f(x), 1)
```
`Self` is also callable, which creates a function which calls any function passed to it, using the arguments passed to `Self`:
```
def f(a, b=3): return a+b+2
def g(a, b=3): return a*b
fg = Self(1,b=2)
list(map(fg, [f,g]))
```
## Patching
```
#export
def copy_func(f):
"Copy a non-builtin function (NB `copy.copy` does not work for this)"
if not isinstance(f,FunctionType): return copy(f)
fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__)
fn.__kwdefaults__ = f.__kwdefaults__
fn.__dict__.update(f.__dict__)
return fn
```
Sometimes it may be desirable to make a copy of a function that doesn't point to the original object. When you use Python's built in `copy.copy` or `copy.deepcopy` to copy a function, you get a reference to the original object:
```
import copy as cp
def foo(): pass
a = cp.copy(foo)
b = cp.deepcopy(foo)
a.someattr = 'hello' # since a and b point at the same object, updating a will update b
test_eq(b.someattr, 'hello')
assert a is foo and b is foo
```
However, with `copy_func`, you can retrieve a copy of a function without a reference to the original object:
```
c = copy_func(foo) # c is an independent object
assert c is not foo
def g(x, *, y=3):
return x+y
test_eq(copy_func(g)(4), 7)
#export
def patch_to(cls, as_prop=False, cls_method=False):
"Decorator: add `f` to `cls`"
if not isinstance(cls, (tuple,list)): cls=(cls,)
def _inner(f):
for c_ in cls:
nf = copy_func(f)
nm = f.__name__
# `functools.update_wrapper` doesn't work correctly when passing a patched function to `Pipeline`, so we copy the wrapper attributes manually
for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o))
nf.__qualname__ = f"{c_.__name__}.{nm}"
if cls_method:
setattr(c_, nm, MethodType(nf, c_))
else:
setattr(c_, nm, property(nf) if as_prop else nf)
# Avoid clobbering existing functions
return globals().get(nm, builtins.__dict__.get(nm, None))
return _inner
```
The `@patch_to` decorator allows you to [monkey patch](https://stackoverflow.com/questions/5626193/what-is-monkey-patching) a function into a class as a method:
```
class _T3(int): pass
@patch_to(_T3)
def func1(self, a): return self+a
t = _T3(1) # we initialized `t` to an int with value 1
test_eq(t.func1(2), 3) # we add 2 to `t`, so 2 + 1 = 3
```
You can access instance properties in the usual way via `self`:
```
class _T4():
def __init__(self, g): self.g = g
@patch_to(_T4)
def greet(self, x): return self.g + x
t = _T4('hello ') # this sets self.g = 'hello '
test_eq(t.greet('world'), 'hello world') #t.greet('world') will append 'world' to 'hello '
```
You can instead specify that the method should be a class method by setting `cls_method=True`:
```
class _T5(int): attr = 3 # attr is a class attribute we will access in a later method
@patch_to(_T5, cls_method=True)
def func(cls, x): return cls.attr + x # you can access class attributes in the normal way
test_eq(_T5.func(4), 7)
```
Additionally, you can specify that the patched function should be a property by setting `as_prop=True`:
```
@patch_to(_T5, as_prop=True)
def add_ten(self): return self + 10
t = _T5(4)
test_eq(t.add_ten, 14)
```
Instead of passing one class to the `@patch_to` decorator, you can pass multiple classes in a tuple to simultaneously patch more than one class with the same method:
```
class _T6(int): pass
class _T7(int): pass
@patch_to((_T6,_T7))
def func_mult(self, a): return self*a
t = _T6(2)
test_eq(t.func_mult(4), 8)
t = _T7(2)
test_eq(t.func_mult(4), 8)
#export
def patch(f=None, *, as_prop=False, cls_method=False):
"Decorator: add `f` to the first parameter's class (based on f's type annotations)"
if f is None: return partial(patch, as_prop=as_prop, cls_method=cls_method)
cls = next(iter(f.__annotations__.values()))
if cls_method: cls = f.__annotations__.pop('cls')
return patch_to(cls, as_prop=as_prop, cls_method=cls_method)(f)
```
`@patch` is an alternative to `@patch_to` that allows you similarly monkey patch class(es) by using [type annotations](https://docs.python.org/3/library/typing.html):
```
class _T8(int): pass
@patch
def func(self:_T8, a): return self+a
t = _T8(1) # we initialized `t` to an int with value 1
test_eq(t.func(3), 4) # we add 3 to `t`, so 3 + 1 = 4
test_eq(t.func.__qualname__, '_T8.func')
```
Similarly to `patch_to`, you can supply a tuple of classes instead of a single class in your type annotations to patch multiple classes:
```
class _T9(int): pass
@patch
def func2(x:(_T8,_T9), a): return x*a # will patch both _T8 and _T9
t = _T8(2)
test_eq(t.func2(4), 8)
test_eq(t.func2.__qualname__, '_T8.func2')
t = _T9(2)
test_eq(t.func2(4), 8)
test_eq(t.func2.__qualname__, '_T9.func2')
```
Just like the `patch_to` decorator, you can use the `as_prop` and `cls_method` parameters with the `patch` decorator:
```
@patch(as_prop=True)
def add_ten(self:_T5): return self + 10
t = _T5(4)
test_eq(t.add_ten, 14)
class _T5(int): attr = 3 # attr is a class attribute we will access in a later method
@patch(cls_method=True)
def func(cls:_T5, x): return cls.attr + x # you can access class attributes in the normal way
test_eq(_T5.func(4), 7)
#export
def patch_property(f):
"Deprecated; use `patch(as_prop=True)` instead"
warnings.warn("`patch_property` is deprecated and will be removed; use `patch(as_prop=True)` instead")
cls = next(iter(f.__annotations__.values()))
return patch_to(cls, as_prop=True)(f)
```
## Other Helpers
```
#export
def compile_re(pat):
"Compile `pat` if it's not None"
return None if pat is None else re.compile(pat)
assert compile_re(None) is None
assert compile_re('a').match('ab')
#export
class ImportEnum(enum.Enum):
"An `Enum` that can have its values imported"
@classmethod
def imports(cls):
g = sys._getframe(1).f_locals
for o in cls: g[o.name]=o
show_doc(ImportEnum, title_level=4)
_T = ImportEnum('_T', {'foobar':1, 'goobar':2})
_T.imports()
test_eq(foobar, _T.foobar)
test_eq(goobar, _T.goobar)
#export
class StrEnum(str,ImportEnum):
"An `ImportEnum` that behaves like a `str`"
def __str__(self): return self.name
show_doc(StrEnum, title_level=4)
#export
def str_enum(name, *vals):
"Simplified creation of `StrEnum` types"
return StrEnum(name, {o:o for o in vals})
_T = str_enum('_T', 'a', 'b')
test_eq(f'{_T.a}', 'a')
test_eq(_T.a, 'a')
test_eq(list(_T.__members__), ['a','b'])
print(_T.a, _T.a.upper())
#export
class Stateful:
"A base class/mixin for objects that should not serialize all their state"
_stateattrs=()
def __init__(self,*args,**kwargs):
self._init_state()
super().__init__(*args,**kwargs) # required for mixin usage
def __getstate__(self):
return {k:v for k,v in self.__dict__.items()
if k not in self._stateattrs+('_state',)}
def __setstate__(self, state):
self.__dict__.update(state)
self._init_state()
def _init_state(self):
"Override for custom init and deserialization logic"
self._state = {}
show_doc(Stateful, title_level=4)
class _T(Stateful):
def __init__(self):
super().__init__()
self.a=1
self._state['test']=2
t = _T()
t2 = pickle.loads(pickle.dumps(t))
test_eq(t.a,1)
test_eq(t._state['test'],2)
test_eq(t2.a,1)
test_eq(t2._state,{})
```
Override `_init_state` to do any necessary setup steps that are required during `__init__` or during deserialization (e.g. `pickle.load`). Here's an example of how `Stateful` simplifies the official Python example for [Handling Stateful Objects](https://docs.python.org/3/library/pickle.html#handling-stateful-objects).
```
class TextReader(Stateful):
"""Print and number lines in a text file."""
_stateattrs=('file',)
def __init__(self, filename):
self.filename,self.lineno = filename,0
super().__init__()
def readline(self):
self.lineno += 1
line = self.file.readline()
if line: return f"{self.lineno}: {line.strip()}"
def _init_state(self):
self.file = open(self.filename)
for _ in range(self.lineno): self.file.readline()
reader = TextReader("00_test.ipynb")
print(reader.readline())
print(reader.readline())
new_reader = pickle.loads(pickle.dumps(reader))
print(reader.readline())
#export
class PrettyString(str):
"Little hack to get strings to show properly in Jupyter."
def __repr__(self): return self
show_doc(PrettyString, title_level=4)
```
Allow strings with special characters to render properly in Jupyter. Without calling `print()`, strings with special characters are displayed like so:
```
with_special_chars='a string\nwith\nnew\nlines and\ttabs'
with_special_chars
```
We can correct this with `PrettyString`:
```
PrettyString(with_special_chars)
#export
def even_mults(start, stop, n):
"Build log-stepped array from `start` to `stop` in `n` steps."
if n==1: return stop
mult = stop/start
step = mult**(1/(n-1))
return [start*(step**i) for i in range(n)]
test_eq(even_mults(2,8,3), [2,4,8])
test_eq(even_mults(2,32,5), [2,4,8,16,32])
test_eq(even_mults(2,8,1), 8)
#export
def num_cpus():
"Get number of cpus"
try: return len(os.sched_getaffinity(0))
except AttributeError: return os.cpu_count()
defaults.cpus = num_cpus()
num_cpus()
#export
def add_props(f, g=None, n=2):
"Create properties passing each of `range(n)` to f"
if g is None: return (property(partial(f,i)) for i in range(n))
return (property(partial(f,i), partial(g,i)) for i in range(n))
class _T(): a,b = add_props(lambda i,x:i*2)
t = _T()
test_eq(t.a,0)
test_eq(t.b,2)
class _T():
def __init__(self, v): self.v=v
def _set(i, self, v): self.v[i] = v
a,b = add_props(lambda i,x: x.v[i], _set)
t = _T([0,2])
test_eq(t.a,0)
test_eq(t.b,2)
t.a = t.a+1
t.b = 3
test_eq(t.a,1)
test_eq(t.b,3)
#export
def _typeerr(arg, val, typ): return TypeError(f"{arg}=={val} not {typ}")
#export
def typed(f):
"Decorator to check param and return types at runtime"
names = f.__code__.co_varnames
anno = annotations(f)
ret = anno.pop('return',None)
def _f(*args,**kwargs):
kw = {**kwargs}
if len(anno) > 0:
for i,arg in enumerate(args): kw[names[i]] = arg
for k,v in kw.items():
if k in anno and not isinstance(v,anno[k]): raise _typeerr(k, v, anno[k])
res = f(*args,**kwargs)
if ret is not None and not isinstance(res,ret): raise _typeerr("return", res, ret)
return res
return functools.update_wrapper(_f, f)
```
`typed` validates argument types at **runtime**. This is in contrast to [MyPy](http://mypy-lang.org/) which only offers static type checking.
For example, a `TypeError` will be raised if we try to pass a float into the first argument (annotated as `int`) of the function below:
```
@typed
def discount(price:int, pct:float):
return (1-pct) * price
with ExceptionExpected(TypeError): discount(100.0, .1)
```
We can also optionally allow multiple types by enumerating the types in a tuple as illustrated below:
```
@typed
def discount(price:(int,float), pct:float):
return (1-pct) * price
assert 90.0 == discount(100.0, .1)
@typed
def foo(a:int, b:str='a'): return a
test_eq(foo(1, '2'), 1)
with ExceptionExpected(TypeError): foo(1,2)
@typed
def foo()->str: return 1
with ExceptionExpected(TypeError): foo()
@typed
def foo()->str: return '1'
assert foo()
```
`typed` works with classes, too:
```
class Foo:
@typed
def __init__(self, a:int, b: int, c:str): pass
@typed
def test(cls, d:str): return d
with ExceptionExpected(TypeError): Foo(1, 2, 3)
with ExceptionExpected(TypeError): Foo(1,2, 'a string').test(10)
#export
def exec_new(code):
"Execute `code` in a new environment and return it"
g = {}
exec(code, g)
return g
g = exec_new('a=1')
g['a']
```
## Notebook functions
```
show_doc(ipython_shell)
show_doc(in_ipython)
show_doc(in_colab)
show_doc(in_jupyter)
show_doc(in_notebook)
```
These variables are available as booleans in `fastcore.basics` as `IN_IPYTHON`, `IN_JUPYTER`, `IN_COLAB` and `IN_NOTEBOOK`.
```
IN_IPYTHON, IN_JUPYTER, IN_COLAB, IN_NOTEBOOK
```
# Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
## Introduction to Exploratory Data Analysis and Visualization
In this lab, we will cover some basic EDAV tools and provide an example using _presidential speeches_.
## Table of Contents
- [Step 0: Import modules](#step0)
- [Step 1: Read in the speeches](#step1)
- [Step 2: Text processing](#step2)
- Step 3: Visualization
  * [Step 3.1: Word cloud](#step3-1)
  * [Step 3.2: Joy plot](#step3-3)
- [Step 4: Sentence analysis](#step4)
- [Step 5: NRC emotion analysis](#step5)
<a id="Example"></a>
## Part 2: Example using _presidential speeches_.
In this section, we will go over an example using a collection of presidential speeches. The data were scraped from the [Presidential Documents Archive](http://www.presidency.ucsb.edu/index_docs.php) of the [American Presidency Project](http://www.presidency.ucsb.edu/index.php) using the `Rvest` package from `R`. The scraped text files can be found in the `data` folder.
For this lab, we use a handful of basic natural language processing (NLP) building blocks provided by NLTK (and a few additional libraries), including text processing (tokenization, stemming, etc.), frequency analysis, and NRC emotion analysis. The lab also includes various data visualizations -- an important part of data science.
<a id="step0"></a>
## Step 0: Import modules
**Initial Setup**: you need Python installed on your system to run the code examples used in this tutorial. This tutorial was constructed using Python 2.7, which is slightly different from Python 3.5.
We recommend that you use Anaconda for your python installation. For more installation recommendations, please use our [check_env.ipynb](https://github.com/DS-BootCamp-Collaboratory-Columbia/AY2017-2018-Winter/blob/master/Bootcamp-materials/notebooks/Pre-assignment/check_env.ipynb).
The main modules we will use in this notebook are:
* *nltk*:
* *nltk* (Natural Language ToolKit) is the most popular Python framework for working with human language.
* *nltk* doesn’t come with super powerful pre-trained models, but contains useful functions for doing a quick exploratory data analysis.
* [Reference webpage](https://nlpforhackers.io/introduction-nltk/#more-4627)
* [NLTK book](http://www.nltk.org/book/)
* *re* and *string*:
* For text processing.
* *scikit-learn*:
* For text feature extraction.
* *wordcloud*:
* Word cloud visualization.
* Pip installation:
```
pip install wordcloud
```
* Conda installation (`wordcloud` is not among the built-in Anaconda packages):
```
conda install -c conda-forge wordcloud=1.2.1
```
* *ipywidgets*:
* *ipywidgets* can render interactive controls on the Jupyter notebook. By using the elements in *ipywidgets*, e.g., `IntSlider`, `Checkbox`, `Dropdown`, we could produce fun interactive visualizations.
* Pip installation:
If you use pip, you also have to enable the ipywidgets extension so the notebook can render the widgets the next time you start it. Type the following commands in your terminal:
```
pip install ipywidgets
jupyter nbextension enable --py widgetsnbextension
```
* Conda installation:
If you use conda, the extension will be enabled automatically. There can be version-incompatibility issues; the following command installs the modules at specific compatible versions.
```
conda install --yes jupyter_core notebook nbconvert ipykernel ipywidgets=6.0 widgetsnbextension=2.0
```
* [Reference webpage](https://towardsdatascience.com/a-very-simple-demo-of-interactive-controls-on-jupyter-notebook-4429cf46aabd)
* *seaborn*:
* *seaborn* provides a high-level interface to draw statistical graphics.
* A comprehensive [tutorial](https://www.datacamp.com/community/tutorials/seaborn-python-tutorial) on it.
```
# Basic
from random import randint
import pandas as pd
import csv
import numpy as np
from collections import OrderedDict, defaultdict, Counter
# Text
import nltk, re, string
from nltk.corpus.reader.plaintext import PlaintextCorpusReader #Read in text files
from nltk import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
nltk.download('punkt')
# Plot
import ipywidgets as widgets
import seaborn as sns
from ipywidgets import interactive, Layout, HBox, VBox
from wordcloud import WordCloud
from matplotlib import pyplot as plt
from matplotlib import gridspec, cm
# Source code
import sys
sys.path.append("../lib/")
import joypy
```
<a id="step1"></a>
## Step 1: Read in the speeches
```
inaug_corpus = PlaintextCorpusReader("../data/inaugurals", ".*\.txt")
#Accessing the name of the files of the corpus
inaug_files = inaug_corpus.fileids()
for f in inaug_files[:5]:
print(f)
len(inaug_files)
#Accessing all the text of the corpus
inaug_all_text = inaug_corpus.raw()
print("First 100 words in all the text of the corpus: \n >>" + inaug_all_text[:100])
#Accessing all the text for one of the files
inaug_ZacharyTaylor1_text=inaug_corpus.raw('inaugZacharyTaylor-1.txt')
print("First 100 words in one file: \n >>" + inaug_ZacharyTaylor1_text[:100])
```
<a id="step2"></a>
## Step 2: Text processing
For the speeches, we do the text processing as follows and define a function `tokenize_and_stem`:
1. convert all letters to lower case
2. split the text into sentences and then words
3. remove [stop words](https://github.com/arc12/Text-Mining-Weak-Signals/wiki/Standard-set-of-english-stopwords), remove empty words due to formatting errors, and remove punctuation
4. [stem words](https://en.wikipedia.org/wiki/Stemming) using the NLTK Porter stemmer. There are [many other stemmers](http://www.nltk.org/howto/stem.html) built into NLTK. You can play around and see the difference.
Then we compute the [Document-Term Matrix (DTM)](https://en.wikipedia.org/wiki/Document-term_matrix) and [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf).
See [Natural Language Processing with Python](http://www.nltk.org/book/) for a more comprehensive discussion about NLTK.
There are many more interesting topics in NLP, which we will not cover in this lab. If you are interested, here are some online resources.
1. [Named Entity Recognition](https://github.com/charlieg/A-Smattering-of-NLP-in-Python)
2. [Topic modeling](https://medium.com/mlreview/topic-modeling-with-scikit-learn-e80d33668730)
3. [Sentiment analysis](https://pythonspot.com/python-sentiment-analysis/) (positive vs. negative)
4. [Supervised model](https://www.dataquest.io/blog/natural-language-processing-with-python/)
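Before running the full pipeline below, it can help to see what a Document-Term Matrix looks like on a small scale. The following is a minimal sketch on a hypothetical three-document corpus, built in pure Python (the actual pipeline below uses `CountVectorizer` to do the same thing with tokenization, stemming, and stop-word removal):

```python
from collections import Counter

# Hypothetical three-document corpus, just to illustrate the DTM shape:
# one row per document, one column per vocabulary term, cells are raw counts.
docs = ["the people of the union", "a more perfect union", "we the people"]

tokenized = [d.split() for d in docs]
vocab = sorted(set(w for doc in tokenized for w in doc))
dtm = [[Counter(doc)[term] for term in vocab] for doc in tokenized]

print(vocab)  # ['a', 'more', 'of', 'people', 'perfect', 'the', 'union', 'we']
for row in dtm:
    print(row)
```

Note how "the" appears twice in the first document, giving a count of 2 in that row, while terms absent from a document get 0.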
```
stemmer = PorterStemmer()
def tokenize_and_stem(text):
lowers = text.lower()
tokens = [word for sent in nltk.sent_tokenize(lowers) for word in nltk.word_tokenize(sent)]
filtered_tokens = []
# filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
for token in tokens:
if re.search('[a-zA-Z]', token) and not token in stopwords.words('english'):
filtered_tokens.append(re.sub(r'[^\w\s]','',token))
stems = [stemmer.stem(t) for t in filtered_tokens]
return stems
#return filtered_tokens
token_dict = {}
for fileid in inaug_corpus.fileids():
token_dict[fileid] = inaug_corpus.raw(fileid)
# Construct a bag of words matrix.
# This will lowercase everything, and ignore all punctuation by default.
# It will also remove stop words.
vectorizer = CountVectorizer(lowercase=True,
tokenizer=tokenize_and_stem,
stop_words='english')
dtm = vectorizer.fit_transform(token_dict.values()).toarray()
```
**TF - IDF**
TF-IDF (term frequency-inverse document frequency) is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval, text mining, and user modeling. The TF-IDF value increases proportionally with the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps adjust for the fact that some words appear more frequently in general.
$$
\begin{aligned}
\mbox{TF}(t) &=\frac{\mbox{Number of times term $t$ appears in a document}}{\mbox{Total number of terms in the document}}\\
\mbox{IDF}(t) &=\log{\frac{\mbox{Total number of documents}}{\mbox{Number of documents with term $t$ in it}}}\\
\mbox{TF-IDF}(t) &=\mbox{TF}(t)\times\mbox{IDF}(t)
\end{aligned}
$$
```
vectorizer = TfidfVectorizer(tokenizer=tokenize_and_stem,
stop_words='english',
decode_error='ignore')
tfidf_matrix = vectorizer.fit_transform(token_dict.values())
# The above line can take some time (about < 60 seconds)
feature_names = vectorizer.get_feature_names()
num_samples, num_features=tfidf_matrix.shape
print "num_samples: %d, num_features: %d" %(num_samples,num_features)
num_clusters=10
## Checking
print('first term: ' + feature_names[0])
print('last term: ' + feature_names[len(feature_names) - 1])
for i in range(0, 4):
print('random term: ' +
feature_names[randint(1,len(feature_names) - 2)] )
def top_tfidf_feats(row, features, top_n=20):
topn_ids = np.argsort(row)[::-1][:top_n]
top_feats = [(features[i], row[i]) for i in topn_ids]
df = pd.DataFrame(top_feats, columns=['features', 'score'])
return df
def top_feats_in_doc(X, features, row_id, top_n=25):
row = np.squeeze(X[row_id].toarray())
return top_tfidf_feats(row, features, top_n)
print(inaug_files[2:3])
print(top_feats_in_doc(tfidf_matrix, feature_names, 3, 10))
d =3
top_tfidf = top_feats_in_doc(tfidf_matrix, feature_names, d, 10)
def plot_tfidf_classfeats_h(df, doc):
''' Plot the data frames returned by the function tfidf_feats_in_doc. '''
x = np.arange(len(df))
fig = plt.figure(figsize=(6, 9), facecolor="w")
ax = fig.add_subplot(1, 1, 1)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.set_frame_on(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.set_xlabel("Tf-Idf Score", labelpad=16, fontsize=14)
ax.set_title(doc, fontsize=16)
ax.ticklabel_format(axis='x', style='sci', scilimits=(-2,2))
ax.barh(x, df.score, align='center', color='#3F5D7D')
ax.set_yticks(x)
ax.set_ylim([-1, x[-1]+1])
yticks = ax.set_yticklabels(df.features)
plt.subplots_adjust(bottom=0.09, right=0.97, left=0.15, top=0.95, wspace=0.52)
plt.show()
plot_tfidf_classfeats_h(top_tfidf, inaug_files[(d-1):d])
```
<a id="step3-1"></a>
## Step 3: Visualization
Data visualization is an integral part of the data science workflow. In the following, we use simple data visualizations to reveal some interesting patterns in our data.
### 1 . Word cloud
```
array_for_word_cloud = []
word_count_array = dtm.sum(0)
for idx, word in enumerate(feature_names):
array_for_word_cloud.append((word,word_count_array[idx]))
def random_color_func(word=None, font_size=None,
position=None, orientation=None, font_path=None, random_state=None):
h = int(360.0 * 45.0 / 255.0)
s = int(100.0 * 255.0 / 255.0)
l = int(100.0 * float(random_state.randint(60, 120)) / 255.0)
return "hsl({}, {}%, {}%)".format(h, s, l)
array_for_word_cloud = dict(array_for_word_cloud)
wordcloud = WordCloud(background_color='white',
width=1600,
height=1000,
color_func=random_color_func).generate_from_frequencies(array_for_word_cloud)
%matplotlib inline
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
```
Let us try making it interactive.
```
word_cloud_dict = {}
counter = 0
for fileid in inaug_corpus.fileids():
row = dtm[counter,:]
word_cloud_dict[fileid] = []
for idx, word in enumerate(feature_names):
word_cloud_dict[fileid].append((word,row[idx]))
counter += 1
def f_wordclouds(t):
df_dict = dict(word_cloud_dict[t])
wordcloud = WordCloud(background_color='white',
color_func=random_color_func).generate_from_frequencies(df_dict)
plt.figure(figsize=(3, 3), dpi=100)
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
interactive_plot_1 = interactive(f_wordclouds, t=widgets.Dropdown(options=inaug_corpus.fileids(),description='text1'))
interactive_plot_2 = interactive(f_wordclouds, t=widgets.Dropdown(options=inaug_corpus.fileids(),description='text2'))
# Define the layout here.
hbox_layout = Layout(display='flex', flex_flow='row', justify_content='space-between', align_items='center')
vbox_layout = Layout(display='flex', flex_flow='column', justify_content='space-between', align_items='center')
%matplotlib inline
HBox([interactive_plot_1,interactive_plot_2])
```
<a id="step3-3"></a>
### 2. Joy plot
The following joy plot allows us to compare the frequencies of the top 10 most frequent words in individual speeches.
```
joy_df = pd.DataFrame(dtm, columns=feature_names)
selected_words = joy_df.sum(0).sort_values(ascending=False).head(10).index
print(selected_words)
%matplotlib inline
plt.rcParams['axes.facecolor'] = 'white'
fig, axes = joypy.joyplot(joy_df.loc[:,selected_words],
range_style='own', grid="y",
colormap=cm.YlGn_r,
title="Top 10 word distribution")
```
<a id="step4"></a>
## Step 4: Sentence analysis
In the previous sections, we focused on word-level distributions in inaugural speeches. Next, we will use sentences as our units of analysis, since they are natural language units for organizing thoughts and ideas.
For simpler visualization, we chose a subset of better known presidents or presidential candidates on whom to focus our analysis.
```
filter_comparison=["DonaldJTrump","JohnMcCain", "GeorgeBush", "MittRomney", "GeorgeWBush",
"RonaldReagan","AlbertGore,Jr", "HillaryClinton","JohnFKerry",
"WilliamJClinton","HarrySTruman", "BarackObama", "LyndonBJohnson",
"GeraldRFord", "JimmyCarter", "DwightDEisenhower", "FranklinDRoosevelt",
"HerbertHoover","JohnFKennedy","RichardNixon","WoodrowWilson",
"AbrahamLincoln", "TheodoreRoosevelt", "JamesGarfield",
"JohnQuincyAdams", "UlyssesSGrant", "ThomasJefferson",
"GeorgeWashington", "WilliamHowardTaft", "AndrewJackson",
"WilliamHenryHarrison", "JohnAdams"]
```
### Nomination speeches
We first look at the *nomination acceptance speeches* at major parties' national conventions.
Following the same procedure as in [step 1](#step1), we use a `pandas` dataframe to store the nomination speech sentences. For each sentence in a speech (`file_id`), we find the name of the president (`president`) and the term (`term`), and also calculate the number of words in each sentence as *sentence length* (`word_count`) using a self-defined function `word_count`.
```
def word_count(string):
tokens = [word for word in nltk.word_tokenize(string)]
counter = 0
for token in tokens:
if re.search('[a-zA-Z]', token):
counter += 1
return counter
nomin_corpus = PlaintextCorpusReader("../data/nomimations", ".*\.txt")
nomin_files = nomin_corpus.fileids()
nomin_file_df = pd.DataFrame(columns=["file_id","president","term","raw_text"])
for fileid in nomin_corpus.fileids():
nomin_file_df = nomin_file_df.append({"file_id": fileid,
"president": fileid[0:fileid.find("-")][5:],
"term": fileid.split("-")[-1][0],
"raw_text": nomin_corpus.raw(fileid)}, ignore_index=True)
sentences = []
for row in nomin_file_df.itertuples():
for sentence in sent_tokenize(row[4]):
sentences.append({"file_id": row[1],
"president": row[2],
"term": row[3],
"sentence": sentence})
nomin_sen_df = pd.DataFrame(sentences, columns=["file_id","president","term","sentence"])
nomin_sen_df["word_count"] = [word_count(sentence) for sentence in nomin_sen_df["sentence"]]
```
#### First term
For comparison between presidents, we first limit our attention to speeches for the first terms of former U.S. presidents. We noticed that a number of presidents have very short sentences in their nomination acceptance speeches.
```
filtered_nomin_sen_df = nomin_sen_df.loc[(nomin_sen_df["president"].isin(filter_comparison))&(nomin_sen_df["term"]=='1')]
filtered_nomin_sen_df = filtered_nomin_sen_df.reset_index()
filtered_nomin_sen_df['group_mean'] = filtered_nomin_sen_df.groupby('president')['word_count'].transform('mean')
filtered_nomin_sen_df = filtered_nomin_sen_df.sort_values('group_mean', ascending=False)
%matplotlib inline
plt.figure(figsize=(20, 10))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1])
plt.subplot(gs[0])
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
sns.swarmplot(y='president', x='word_count',
data=filtered_nomin_sen_df,
palette = "Set3",
size=2.5,
orient='h').set(xlabel='Number of words in a sentence')
plt.title("Swarm plot")
plt.subplot(gs[1])
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
sns.violinplot(y='president', x='word_count',
data=filtered_nomin_sen_df,
palette = "Set3", cut = 3,
width=1.5, saturation=0.8, linewidth= 0.3, scale="count",
orient='h').set(xlabel='Number of words in a sentence')
plt.title("Violin plot")
plt.xlim(-3, 350)
plt.tight_layout()
```
#### Second term
```
filtered_nomin_sen_df = nomin_sen_df.loc[(nomin_sen_df["president"].isin(filter_comparison))&(nomin_sen_df["term"]=='2')]
filtered_nomin_sen_df = filtered_nomin_sen_df.reset_index()
filtered_nomin_sen_df['group_mean'] = filtered_nomin_sen_df.groupby('president')['word_count'].transform('mean')
filtered_nomin_sen_df = filtered_nomin_sen_df.sort_values('group_mean', ascending=False)
%matplotlib inline
plt.figure(figsize=(20, 10))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1])
plt.subplot(gs[0])
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
sns.swarmplot(y='president', x='word_count',
data=filtered_nomin_sen_df,
palette = "Set3",
size=2.5,
               orient='h').set(xlabel='Number of words in a sentence')
plt.title("Swarm plot")
plt.subplot(gs[1])
sns.violinplot(y='president', x='word_count',
data=filtered_nomin_sen_df,
palette = "Set3", cut = 3,
width=1.5, saturation=0.8, linewidth= 0.3, scale="count",
orient='h').set(xlabel='Number of words in a sentence')
plt.title("Violin plot")
plt.xlim(-3, 350)
plt.tight_layout()
```
### Inaugural speeches
We notice that the sentences in inaugural speeches are longer than those in nomination acceptance speeches.
```
inaug_file_df = pd.DataFrame(columns=["file_id","president","term","raw_text"])
for fileid in inaug_corpus.fileids():
inaug_file_df = inaug_file_df.append({"file_id": fileid,
"president": fileid[0:fileid.find("-")][5:],
"term": fileid.split("-")[-1][0],
"raw_text": inaug_corpus.raw(fileid)}, ignore_index=True)
sentences = []
for row in inaug_file_df.itertuples():
for sentence in sent_tokenize(row[4]):
sentences.append({"file_id": row[1],
"president": row[2],
"term": row[3],
"sentence": sentence})
inaug_sen_df = pd.DataFrame(sentences, columns=["file_id","president","term","sentence"])
wordCounts = [word_count(sentence) for sentence in inaug_sen_df["sentence"]]
inaug_sen_df["word_count"] = wordCounts
filtered_inaug_sen_df = inaug_sen_df.loc[(inaug_sen_df["president"].isin(filter_comparison))&(inaug_sen_df["term"]=='1')]
filtered_inaug_sen_df = filtered_inaug_sen_df.reset_index()
filtered_inaug_sen_df['group_mean'] = filtered_inaug_sen_df.groupby('president')['word_count'].transform('mean')
filtered_inaug_sen_df = filtered_inaug_sen_df.sort_values('group_mean', ascending=False)
%matplotlib inline
plt.figure(figsize=(20, 10))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1])
plt.subplot(gs[0])
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
sns.swarmplot(y='president', x='word_count',
data=filtered_inaug_sen_df,
palette = "Set3",
size=2.5,
               orient='h').set(xlabel='Number of words in a sentence')
plt.title("Swarm plot")
plt.subplot(gs[1])
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
sns.violinplot(y='president', x='word_count',
data=filtered_inaug_sen_df,
palette = "Set3", cut = 3,
width=1.5, saturation=0.8, linewidth= 0.3, scale="count",
orient='h').set(xlabel='Number of words in a sentence')
plt.title("Violin plot")
plt.xlim(-3, 350)
plt.tight_layout()
```
<a id="step5"></a>
## Step 5: NRC emotion analysis
For each extracted sentence, we apply sentiment analysis using the [NRC emotion lexicon](http://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm). "The NRC Emotion Lexicon is a list of English words and their associations with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and two sentiments (negative and positive). The annotations were manually done by crowdsourcing."
```
wordList = defaultdict(list)
emotionList = defaultdict(list)
with open('../data/NRC-emotion-lexicon-wordlevel-alphabetized-v0.92.txt', 'r') as f:
reader = csv.reader(f, delimiter='\t')
headerRows = [i for i in range(0, 46)]
for row in headerRows:
next(reader)
for word, emotion, present in reader:
if int(present) == 1:
#print(word)
wordList[word].append(emotion)
emotionList[emotion].append(word)
from __future__ import division # for Python 2.7 only
def generate_emotion_count(string):
emoCount = Counter()
tokens = [word for word in nltk.word_tokenize(string)]
counter = 0
for token in tokens:
token = token.lower()
if re.search('[a-zA-Z]', token):
counter += 1
emoCount += Counter(wordList[token])
for emo in emoCount:
emoCount[emo]/=counter
return emoCount
emotionCounts = [generate_emotion_count(sentence) for sentence in nomin_sen_df["sentence"]]
nomin_sen_df_with_emotion = pd.concat([nomin_sen_df, pd.DataFrame(emotionCounts).fillna(0)], axis=1)
emotionCounts = [generate_emotion_count(sentence) for sentence in inaug_sen_df["sentence"]]
inaug_sen_df_with_emotion = pd.concat([inaug_sen_df, pd.DataFrame(emotionCounts).fillna(0)], axis=1)
inaug_sen_df_with_emotion.sample(n=3)
```
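To make the normalization step in `generate_emotion_count` concrete, here is a minimal, self-contained sketch with a toy two-word lexicon (the entries are invented for illustration; the real notebook reads them from the NRC file):

```python
from collections import Counter, defaultdict

# A toy stand-in for the NRC lexicon (these entries are illustrative,
# not taken from the real lexicon file).
word_list = defaultdict(list)
word_list["abandon"] = ["fear", "negative", "sadness"]
word_list["cheer"] = ["joy", "positive"]

def emotion_fractions(tokens):
    """Count lexicon emotions over the tokens and normalize by token count,
    mirroring the idea of generate_emotion_count above."""
    counts = Counter()
    for token in tokens:
        counts += Counter(word_list[token.lower()])
    return {emo: n / len(tokens) for emo, n in counts.items()}

fractions = emotion_fractions(["We", "cheer", "and", "never", "abandon", "hope"])
```

With six tokens and two lexicon hits, each matched emotion gets a score of 1/6, which is exactly the per-sentence normalization the notebook applies before comparing speeches of different lengths.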
### Sentence length variation over the course of the speech, with emotions.
How do our presidents (or candidates) alternate between long and short sentences, and how do they shift between different sentiments over the course of a speech? It is interesting to note that some presidential candidates' speeches are more colorful than others. Here we use the same color theme as in the movie "Inside Out."
```
def make_rgb_transparent(color_name, bg_color_name, alpha):
from matplotlib import colors
rgb = colors.colorConverter.to_rgb(color_name)
bg_rgb = colors.colorConverter.to_rgb(bg_color_name)
return [alpha * c1 + (1 - alpha) * c2
for (c1, c2) in zip(rgb, bg_rgb)]
def f_plotsent_len(InDf, InTerm, InPresident):
import numpy as np
import pylab as pl
from matplotlib import colors
from math import sqrt
from matplotlib import collections as mc
col_use={"zero":"lightgray",
"anger":"#ee0000",
"anticipation":"#ffb90f",
"disgust":"#66cd00",
"fear":"blueviolet",
"joy":"#eead0e",
"sadness":"#1874cd",
"surprise":"#ffb90f",
"trust":"#ffb90f",
"negative":"black",
"positive":"#eead0e"}
InDf["top_emotion"] = InDf.loc[:,'anger':'trust'].idxmax(axis=1)
InDf["top_emotion_value"] = InDf.loc[:,'anger':'trust'].max(axis=1)
InDf.loc[InDf["top_emotion_value"] < 0.05, "top_emotion"] = "zero"
InDf.loc[InDf["top_emotion_value"] < 0.05, "top_emotion_value"] = 1
tempDf = InDf.loc[(InDf["president"]==InPresident)&(InDf["term"]==InTerm)]
pt_col_use = []
lines = []
for i in tempDf.index:
pt_col_use.append(make_rgb_transparent(col_use[tempDf.at[i,"top_emotion"]],
"white",
sqrt(sqrt(tempDf.at[i,"top_emotion_value"]))))
lines.append([(i,0),(i,tempDf.at[i,"word_count"])])
%matplotlib inline
lc = mc.LineCollection(lines, colors=pt_col_use, linewidths=min(5,300/len(tempDf.index)))
fig, ax = pl.subplots() #figsize=(15, 6)
ax.add_collection(lc)
ax.autoscale()
ax.axis('off')
plt.title(InPresident, fontsize=30)
plt.tight_layout()
plt.show()
f_plotsent_len(nomin_sen_df_with_emotion, '1', 'HillaryClinton')
f_plotsent_len(nomin_sen_df_with_emotion, '1', 'DonaldJTrump')
f_plotsent_len(nomin_sen_df_with_emotion, '1', 'BarackObama')
f_plotsent_len(nomin_sen_df_with_emotion, '1', 'GeorgeWBush')
```
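The alpha blend inside `make_rgb_transparent` is just a linear interpolation of each channel toward the background colour. A standalone sketch of that calculation (with colour names already converted to RGB tuples):

```python
def blend(rgb, bg_rgb, alpha):
    """Linearly interpolate each channel between a colour and its background,
    as make_rgb_transparent does after converting colour names to RGB."""
    return [alpha * c1 + (1 - alpha) * c2 for c1, c2 in zip(rgb, bg_rgb)]

# Pure red at 50% opacity over a white background gives pink
pink = blend((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5)
```

This is why weak emotion values fade toward white in the plots: a small `alpha` pulls every channel close to the background.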
### Clustering of emotions
```
sns.set(font_scale=1.3)
sns.clustermap(inaug_sen_df_with_emotion.loc[:,'anger':'trust'].corr(),
figsize=(6,7))
```
```
import matplotlib.pyplot as plt
import numpy as np
colors=['darkorange', 'crimson', 'darkseagreen', 'navy', 'wheat', 'gray', 'palevioletred', 'gold', 'lightcoral', 'forestgreen', 'cornflowerblue']
participants = ['p{:02}'.format(index) for index in range(15)]
# 0 1 2* 3* 4** 5 6 7* 8 9* 10* 11** 12 13 14
lenet_vals=[4.2, 5.0, 6.25, 6.53, 7.93, 6.2, 5.85, 7.1, 6.85, 8.5, 7.05, 6.25, 6.0, 6.8, 6.05]
values = [ 4.35, 4.66, 5.55, 5.95, 7.42, 6.023, 5.56, 6.74, 6.17, 6.85, 5.44, 6.39, 5.43, 6.12, 6.21 ]
v_cyclic = [4.25, 4.94, 5.54, 5.47, 6.93, 6.25, 5.88, 6.57, 6.44, 7.13, 5.33, 6.33, 5.7, 6.18, 6.17 ]
v_rlrop = [ 4.70, 4.25, 5.69, 5.30, 7.12, 6.28, 5.46, 6.56, 5.98, 6.55, 5.83, 6.50, 5.30, 6.0, 6.0 ]
v_rlrop1 = [4.53, 4.39, 5.50, 5.33, 7.62, 6.02, 5.36, 6.69, 6.08, 6.84, 5.44, 6.46, 5.51, 6.03, 6.01 ]
v_resnet = [3.98, 5.21, 5.89, 6.00, 7.39, 6.19, 5.56, 6.63, 6.12, 6.95, 6.45, 5.48, 5.62, 6.24, 5.89 ]
v_inc_rlrop=[4.63, 5.42, 6.32, 6.55, 9.22, 10.05, 6.1, 7.46, 7.24, 7.17, 7.28, 5.47, 6.64, 7.49, 5.7]
v_inc_mlr = [5.6, 7.005, 6.35, 6.79, 8.84, 10.05, 6.28, 7.6, 9.83, 7.63, 6.82, 5.48, 7.12, 6.61, 5.79]
v_inc_cyclr=[5.12, 6.96, 6.27, 6.62, 6.54, 7.18, 6.74, 7.37, 7.26, 8.21, 5.97, 5.67, 6.59, 6.68, 5.96]
v_ensemble =[4.35, 5.44, 5.92, 5.77, 7.42, 6.04, 5.42, 6.44, 6.38, 8.15, 5.97, 5.92, 5.38, 5.86, 5.9]
v_ens_rlrop=[4.55, 5.33, 5.83, 5.89, 6.54, 6.17, 7.16, 6.66, 6.57, 7.95, 6.61, 5.7, 6.1, 6.8, 5.6]
v_ens_lr003 =[4.31, 4.12, 5.82, 5.52, 7.41, 6.05, 5.50, 6.41, 6.55, 7.81, 5.79, 6.11, 5.23, 5.84, 5.45]
v16_ens_lr004 =[4.22, 4.14, 6.14, 5.44, 6.94, 6.26, 5.90, 6.58, 6.10, 6.26, 6.36, 7.34, 5.95, 6.08, 5.71]
att = [4.28, 5.46, 5.69, 5.97, 6.56, 6.51, 5.63, 6.80, 6.58, 6.39, 6.37, 6.36, 5.52, 5.89, 5.94]
ens_att = [5.58, 4.53, 5.38, 7.10, 7.70, 7.24, 6.93, 7.49, 7.49, 6.99, 5.92, 6.35, 6.21, 6.05, 7.41]
ens_msk = [4.35, 4.59, 5.96, 5.97, 6.61, 6.70, 5.59, 7.02, 6.68, 6.82, 5.79, 6.32, 5.52, 6.52, 5.34]
levgg = [4.42, 4.20, 5.90, 5.89, 7.13, 6.32, 5.43, 7.20, 6.67, 6.71, 5.69, 6.19, 5.52, 6.12, 5.45]
import pandas as pd
df = pd.DataFrame()
df['participants'] = participants
df['lenet'] = lenet_vals
df['vgg19_rlrop'] = v_rlrop
df['vgg19_cyclic'] = v_cyclic
df['vgg19_mlr'] = values
df['resnet50'] = v_resnet
df['inception_rlrop'] = v_inc_rlrop
df['inception_cycliclr'] = v_inc_cyclr
df['inception_mlr'] = v_inc_mlr
df['ensemble'] = v_ensemble
df['ensemble_rlrop'] = v_ens_rlrop
df['v_ens_lr003'] = v_ens_lr003
df['v16_ens_lr004'] = v16_ens_lr004
df['att'] = att
df['ens_att'] = ens_att
df['ens_msk'] = ens_msk
df['levgg'] = levgg
#df[['lenet', 'vgg19_rlrop', 'v_ens_lr003', 'att', 'ens_att']]
df
df.mean().sort_values()
### was problematic
df.where(df.participants=='p04').min().sort_values()
### was problematic
df.where(df.participants=='p00').min().sort_values()
### high angle error (lenet)
df.where(df.participants=='p07').min().sort_values()
### high angle error (lenet)
df.where(df.participants=='p08').min().sort_values()
### highest angle error (lenet)
df.where(df.participants=='p09').min().sort_values()
### high angle error (lenet). Significant improvement.
df.where(df.participants=='p10').min().sort_values()
### was problematic
df.where(df.participants=='p11').min().sort_values()
### was problematic.
df.where(df.participants=='p14').min().sort_values()
def autolabel(rects):
"""Attach a text label above each bar."""
for rect in rects:
height = rect.get_height()
ax.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset
textcoords="offset points",
ha='center', va='bottom')
x = np.arange(len(participants)) # the label locations
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_inc_cyclr, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,9])
ax.set_xlabel("Subject Ids")
ax.set_title(" Inception v3 \n Using CyclicLR \n Mean Angle Error = {}".format(np.mean(v_inc_cyclr)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_inc_rlrop, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,11])
ax.set_xlabel("Subject Ids")
ax.set_title(" Inception v3 \n Using ReduceLROP \n Mean Angle Error = {}".format(np.mean(v_inc_rlrop)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_inc_mlr, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,11])
ax.set_xlabel("Subject Ids")
ax.set_title(" Inception v3 \n Using MulStepLR \n Mean Angle Error = {}".format(np.mean(v_inc_mlr)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_resnet, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,9])
ax.set_xlabel("Subject Ids")
ax.set_title("Resnet50 \n Mean Angle Error = {}".format(np.mean(v_resnet)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_rlrop, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,9])
ax.set_xlabel("Subject Ids")
ax.set_title("VGG19 \n Using ReduceLROP \n Mean Angle Error = {}".format(np.mean(v_rlrop)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_cyclic, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,9])
ax.set_xlabel("Subject Ids")
ax.set_title("VGG19 \n Using CyclicLR \n Mean Angle Error = {}".format(np.mean(v_cyclic)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, values, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,9])
ax.set_xlabel("Subject Ids")
ax.set_title("VGG19 \n Using MulStepLR \n Mean Angle Error = {}".format(np.mean(values)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_ensemble, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,11])
ax.set_xlabel("Subject Ids")
ax.set_title(" Ensemble \n Using MulStepLR \n Mean Angle Error = {}".format(np.mean(v_ensemble)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, v_ens_lr003, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,11])
ax.set_xlabel("Subject Ids")
ax.set_title(" Ensemble \n Using MulStepLR \n Mean Angle Error = {}".format(np.mean(v_ens_lr003)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
width = 0.35 # the width of the bars
figg, ax = plt.subplots(figsize=(9,5))
fig = ax.bar(participants, att, color=colors)
ax.set_ylabel("Angle Error (deg)")
ax.set_ylim([0,11])
ax.set_xlabel("Subject Ids")
ax.set_title(" VGG with attention \n Using MulStepLR \n Mean Angle Error = {}".format(np.mean(att)))
ax.set_xticks(x)
ax.set_xticklabels(participants)
figg.tight_layout()
#figg.subplots_adjust(wspace=20)
autolabel(fig)
plt.show()
```
# Step input, output, and substeps
* **Difficulty level**: easy
* **Time needed to learn**: 10 minutes or less
* **Key points**:
* Input files are specified with the `input` statement, which defines variable `_input`
* Output files are specified with the `output` statement, which defines variable `_output`
* Input files can be processed in groups with the `group_by` option
## Specifying step input and output
Taking again the example workflow from [our first tutorial](sos_overview.html), we have defined variables such as `excel_file` and used them directly in the scripts.
```
[global]
excel_file = 'data/DEG.xlsx'
csv_file = 'DEG.csv'
figure_file = 'output.pdf'
[plot_10]
run: expand=True
xlsx2csv {excel_file} > {csv_file}
[plot_20]
R: expand=True
data <- read.csv('{csv_file}')
pdf('{figure_file}')
plot(data$log2FoldChange, data$stat)
dev.off()
```
You can add an `input` and an `output` statement to the steps and write the workflow as
```
[global]
excel_file = 'data/DEG.xlsx'
csv_file = 'DEG.csv'
figure_file = 'output.pdf'
[plot_10]
input: excel_file
output: csv_file
run: expand=True
xlsx2csv {_input} > {_output}
[plot_20]
input: csv_file
output: figure_file
R: expand=True
data <- read.csv('{_input}')
pdf('{_output}')
plot(data$log2FoldChange, data$stat)
dev.off()
```
Comparing the two workflows, you will notice that steps in the new workflow have `input` and `output` statements that define the input and output of the steps, and two magic variables `_input` and `_output` are used in the scripts. These two variables are of type `sos_targets` and are of vital importance to the use of SoS.
## Substeps and input option `group_by`
The `input` and `output` statements notify SoS of the input and output of the steps and allow SoS to handle them in a much more intelligent way. One of the most useful features is the definition of substeps, which allows SoS to process groups of input one by one, and/or the same groups of input with different sets of variables (option `for_each`, discussed later).
Let us assume that we have two input files, `data/S20_R1.fastq` and `data/S20_R2.fastq`, and we would like to check their quality using a tool called [fastqc](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/). Using a plain Python approach and the `sh` action, the analysis can be performed by
```
for infile in ['data/S20_R1.fastq', 'data/S20_R2.fastq']:
sh(f'fastqc {infile}')
```
Or using the `input` statement to define variable `_input` with two files, and use a (slightly more convenient but less Pythonic) indented script format:
```
input: 'data/S20_R1.fastq', 'data/S20_R2.fastq'
for infile in _input:
sh: expand=True
fastqc {infile}
```
There are two problems with this approach:
* The action `sh`, in either function-call or indented script format, is less readable, especially if the script is long, and more importantly,
* The input files are handled one by one even though they are independent and could be processed in parallel
To address these problems, you can write the step as follows:
```
input: 'data/S20_R1.fastq', 'data/S20_R2.fastq', group_by=1
sh: expand=True
fastqc {_input}
```
<div class="bs-callout bs-callout-primary" role="alert">
<h4>Substeps created by the <code>group_by</code> input option</h4>
<ul>
<li>The <code>group_by</code> option groups input files and creates multiple groups of input files</li>
<li>Multiple <em>substeps</em> are created for each group of input files</li>
<li>The input of each substep is stored in variable <code>_input</code></li>
<li>The substeps are by default executed in parallel</li>
</ul>
</div>
In this example, option `group_by=1` divides the two input files into two groups, each with one input file. Two substeps are created from these groups; they execute the same step process (the statements after the `input` statement) but with different values of the variable `_input`. The `sh` action is written in script format, which can be a lot more readable when the script is long. Because the substeps are executed in parallel, the step can complete much faster than the `for` loop version.
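The grouping itself is easy to emulate in plain Python. This sketch (ordinary Python, not SoS code) shows what `group_by=n` does to the input list:

```python
def group_by_n(files, n):
    """Partition input files into consecutive groups of size n,
    mimicking the effect of SoS's group_by input option."""
    return [files[i:i + n] for i in range(0, len(files), n)]

fastq = ['data/S20_R1.fastq', 'data/S20_R2.fastq']
groups = group_by_n(fastq, 1)  # two groups -> two substeps
```

With `n=1` each file becomes its own group (and hence its own substep); with `n=2` both files would be processed together in a single substep.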
## Output of substeps
<div class="bs-callout bs-callout-primary" role="alert">
<h4>The <code>output</code> statement</h4>
<ul>
<li>The <code>output</code> statement defines the output of each <b>substep</b>, represented by variable <code>_output</code>.</li>
<li>The output of the entire step consists of <code>_output</code> from each substep.</li>
</ul>
</div>
The `input` statement defines input of the entire step, and optionally input of each substep as variable `_input`. **The `output` statement, however, defines the output of each substep**.
In the following example, the two input files are divided into two groups, represented by `_input` in each substep. The `output` statement defines a variable `_output` for each substep.
```
input: 'data/S20_R1.fastq', 'data/S20_R2.fastq', group_by=1
output: f'{_input:n}_fastqc.html'
sh: expand=True
fastqc {_input}
```
<div class="bs-callout bs-callout-primary" role="alert">
<h4>Special format specification for <code>_input</code> objects</h4>
<p>SoS variables <code>_input</code> and <code>_output</code> are of type <code>sos_targets</code> and accept additional <a href="https://docs.python.org/3/reference/lexical_analysis.html#f-strings">format specifications</a>. For example,
<ul>
<li><code>:n</code> is the name of the path. e.g. <code>f'{_input:n}'</code> returns <code>/path/to/a</code> if <code>_input</code> is <code>/path/to/a.txt</code></li>
<li><code>:b</code> is the basename of the path. e.g. <code>a.txt</code> from <code>/path/to/a.txt</code></li>
<li><code>:d</code> is the directory name of the path. e.g. <code>/path/to</code> from <code>/path/to/a.txt</code></li>
</ul>
</div>
The output statement of this example is
```python
output: f'{_input:n}_fastqc.html'
```
which takes the name of `_input` and adds `_fastqc.html`. For example, if `_input = 'data/S20_R1.fastq'`, the corresponding `_output = 'data/S20_R1_fastqc.html'`.
With this output statement, SoS will, among many other things, check whether the output is properly generated after the completion of each substep, and return an output object containing the `_output` of each substep.
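For readers more familiar with plain Python, `pathlib` offers rough equivalents of these format specifiers (a sketch for comparison only — it is not SoS code):

```python
from pathlib import Path

p = Path('data/S20_R1.fastq')
name_no_ext = p.with_suffix('')   # roughly {_input:n} -> data/S20_R1
base = p.name                     # roughly {_input:b} -> S20_R1.fastq
directory = p.parent              # roughly {_input:d} -> data

# Building the fastqc output name, as the output statement above does
output = f"{name_no_ext}_fastqc.html"
```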
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/cross-validation).**
---
In this exercise, you will leverage what you've learned to tune a machine learning model with **cross-validation**.
# Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
```
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex5 import *
print("Setup Complete")
```
You will work with the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course) from the previous exercise.

Run the next code cell without changes to load the training and test data in `X` and `X_test`. For simplicity, we drop categorical variables.
```
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
train_data = pd.read_csv('../input/train.csv', index_col='Id')
test_data = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Select numeric columns only
numeric_cols = [cname for cname in train_data.columns if train_data[cname].dtype in ['int64', 'float64']]
X = train_data[numeric_cols].copy()
X_test = test_data[numeric_cols].copy()
```
Use the next code cell to print the first several rows of the data.
```
X.head()
```
So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use [`SimpleImputer()`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) to replace missing values in the data, before using [`RandomForestRegressor()`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) to train a random forest model to make predictions. We set the number of trees in the random forest model with the `n_estimators` parameter, and setting `random_state` ensures reproducibility.
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
```
You have also learned how to use pipelines in cross-validation. The code below uses the [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the `cv` parameter.
```
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
```
# Step 1: Write a useful function
In this exercise, you'll use cross-validation to select parameters for a machine learning model.
Begin by writing a function `get_score()` that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses:
- the data in `X` and `y` to create folds,
- `SimpleImputer()` (with all parameters left as default) to replace missing values, and
- `RandomForestRegressor()` (with `random_state=0`) to fit a random forest model.
The `n_estimators` parameter supplied to `get_score()` is used when setting the number of trees in the random forest model.
```
def get_score(n_estimators):
"""Return the average MAE over 3 CV folds of random forest model.
Keyword argument:
n_estimators -- the number of trees in the forest
"""
# Replace this body with your own code
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators, random_state=0))
])
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=3,
scoring='neg_mean_absolute_error')
return scores.mean()
# Check your answer
step_1.check()
# Lines below will give you a hint or solution code
#step_1.hint()
step_1.solution()
```
# Step 2: Test different parameter values
Now, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400.
Store your results in a Python dictionary `results`, where `results[i]` is the average MAE returned by `get_score(i)`.
```
results = {}
for i in range(1,9):
results[50*i] = get_score(50*i) # Your code here
# Check your answer
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
step_2.solution()
```
Use the next cell to visualize your results from Step 2. Run the code without changes.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(list(results.keys()), list(results.values()))
plt.show()
```
# Step 3: Find the best parameter value
Given the results, which value for `n_estimators` seems best for the random forest model? Use your answer to set the value of `n_estimators_best`.
```
n_estimators_best = 200
# Check your answer
step_3.check()
# Lines below will give you a hint or solution code
#step_3.hint()
#step_3.solution()
```
In this exercise, you have explored one method for choosing appropriate parameters in a machine learning model.
If you'd like to learn more about [hyperparameter optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization), you're encouraged to start with **grid search**, which is a straightforward method for determining the best _combination_ of parameters for a machine learning model. Thankfully, scikit-learn also contains a built-in function [`GridSearchCV()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) that can make your grid search code very efficient!
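To see what grid search does under the hood, here is a hand-rolled sketch over a made-up scoring function (`GridSearchCV` adds cross-validation, scoring conventions, and parallelism on top of this basic idea; `toy_score` is invented for illustration):

```python
from itertools import product

def toy_score(n_estimators, max_depth):
    """A pretend error surface with its minimum at (200, 5)."""
    return abs(n_estimators - 200) + abs(max_depth - 5)

# Try every combination of parameter values and keep the best one
grid = {'n_estimators': [100, 200, 300], 'max_depth': [3, 5, 7]}
best = min(product(grid['n_estimators'], grid['max_depth']),
           key=lambda combo: toy_score(*combo))
```

This exhaustively evaluates all 3 × 3 = 9 combinations, which is exactly why grid search gets expensive as the number of parameters grows.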
# Keep going
Continue to learn about **[gradient boosting](https://www.kaggle.com/alexisbcook/xgboost)**, a powerful technique that achieves state-of-the-art results on a variety of datasets.
---
*Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/intermediate-machine-learning/discussion) to chat with other learners.*
# Intro to Python!
Stuart Geiger and Yu Feng for The Hacker Within
# Contents
## 1. Installing Python
## 2. The Language
- Expressions
- List, Tuple and Dictionary
- Strings
- Functions
## 3. Example: Word Frequency Analysis with Python
- Reading text files
- Getting and using Python packages: wordcloud
- Histograms
- Exporting data as text files
## 1. Installing Python:
- Easy way : with a Python distribution, anaconda: https://www.continuum.io/downloads
- Hard way : install python and all dependencies yourself
- Super hard way : compile everything from scratch
### Three Python user interfaces
#### Python Shell `python`
```
[yfeng1@waterfall ~]$ python
Python 2.7.12 (default, Sep 29 2016, 13:30:34)
[GCC 6.2.1 20160916 (Red Hat 6.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
#### Jupyter Notebook (in a browser, like this)
#### IDEs: PyCharm, Spyder, etc.
We use Jupyter Notebook here.
Jupyter Notebook is included in the Anaconda distribution.
# Expressions
```
2 + 3
2 / 3
2 * 3
2 ** 3
```
# Variables
```
num = 2 ** 3
print(num)
num
type(num)
name = "The Hacker Within"
type(name)
name + 8
name + str(8)
```
# Lists
```
num_list = [0,1,2,3,4,5,6,7,8]
print(num_list)
type(num_list)
num_list[3]
num_list[3] = 10
print(num_list)
```
Appending new items to a list
```
num_list.append(3)
print(num_list)
```
# Loops and iteration
```
for num in num_list:
print(num)
for num in num_list:
print(num, num * num)
num_list.append("LOL")
print(num_list)
```
## If / else conditionals
```
for num in num_list:
if type(num) is int or type(num) is float:
print(num, num * num)
else:
print("ERROR!", num, "is not an int")
```
## Functions
```
def process_list(input_list):
for num in input_list:
if type(num) is int or type(num) is float:
print(num, num * num)
else:
print("ERROR!", num, "is not an int")
process_list(num_list)
process_list([1,3,4,14,1,9])
```
## Dictionaries
Store a key : value relationship
```
yearly_value = {2001: 10, 2002: 14, 2003: 18, 2004: 20}
print(yearly_value)
yearly_value = {}
yearly_value[2001] = 10
yearly_value[2002] = 14
yearly_value[2003] = 18
yearly_value[2004] = 20
print(yearly_value)
yearly_value.pop(2001)
yearly_value
yearly_value[2001] = 10213
```
You can iterate through dictionaries too:
```
for key, value in yearly_value.items():
print(key, value)
for key, value in yearly_value.items():
print(key, value * 1.05)
```
# Strings
We have seen strings a few times.
String literals can be defined with single or double quotation marks. Triple quotes allow multi-line strings.
```
name = "the hacker within"
name_long = """
~*~*~*~*~*~*~*~*~*~*~
THE HACKER WITHIN
~*~*~*~*~*~*~*~*~*~*~
"""
print(name)
print(name_long)
```
Strings have many built in methods:
```
print(name.upper())
print(name.split())
print(name.upper().split())
```
Strings can also be indexed and sliced like lists; substrings are accessed with `string[start:end]`
```
print(name[4:10])
print(name[4:])
print(name[:4])
count = 0
for character in name:
print(count, character)
count = count + 1
print(name.find('hacker'))
print(name[name.find('hacker'):])
```
# Functions
```
def square_num(num):
return num * num
print(square_num(10))
print(square_num(9.1))
print(square_num(square_num(10)))
def yearly_adjustment(yearly_dict, adjustment):
for key, value in yearly_dict.items():
print(key, value * adjustment)
yearly_adjustment(yearly_value, 1.05)
```
We can expand on this a bit, adding some features:
```
def yearly_adjustment(yearly_dict, adjustment, print_values = False):
adjusted_dict = {}
for key, value in yearly_value.items():
if print_values is True:
print(key, value * adjustment)
adjusted_dict[key] = value * adjustment
return adjusted_dict
adjusted_yearly = yearly_adjustment(yearly_value, 1.05)
adjusted_yearly = yearly_adjustment(yearly_value, 1.05, print_values = True)
adjusted_yearly
```
# Example: word counts
If you begin a line in a notebook cell with ```!```, it will execute a bash command as if you typed it in the terminal. We will use this to download a list of previous THW topics with the curl program.
```
!curl -o thw.txt http://stuartgeiger.com/thw.txt
# and that's how it works, that's how you get to curl
with open('thw.txt') as f:
text = f.read()
text
words = text.split()
lines = text.split("\n")
lines[0:5]
```
But there is an error! R always appears as "RR" -- so we will replace "RR" with "R"
```
text.replace("RR", "R")
text = text.replace("RR", "R")
words = text.split()
lines = text.split("\n")
lines[0:5]
```
### Wordcloud library
```
!pip install wordcloud
from wordcloud import WordCloud
wordcloud = WordCloud()
wordcloud.generate(text)
wordcloud.to_image()
wordcloud = WordCloud(width=800, height=300, prefer_horizontal=1, stopwords=None)
wordcloud.generate(text)
wordcloud.to_image()
```
## Frequency counts
```
freq_dict = {}
for word in words:
    if word in freq_dict:
        freq_dict[word] = freq_dict[word] + 1
    else:
        freq_dict[word] = 1
print(freq_dict)
```
A better way to do this is:
```
freq_dict = {}
for word in words:
    freq_dict[word] = freq_dict.get(word, 0) + 1
print(freq_dict)
```
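The standard library also ships this pattern ready-made: `collections.Counter` performs the same `get`-based counting loop internally (an alternative worth knowing, not part of the original lesson):

```python
from collections import Counter

# Counter builds the same word -> frequency mapping in one call.
words = "the quick brown fox jumps over the lazy dog the end".split()
freq = Counter(words)
print(freq["the"])           # 3
print(freq.most_common(1))   # [('the', 3)]
```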
# Outputting to files
Let's start from a loop that prints the values to the screen
```
for word, freq in sorted(freq_dict.items()):
    line = word + "\t" + str(freq)
    print(line)
```
Then expand this to writing a file object:
```
with open("freq_dict_thw.csv", 'w') as f:
    for word, freq in sorted(freq_dict.items()):
        line = word + ", " + str(freq) + "\n"
        f.write(line)
!head -10 freq_dict_thw.csv
```
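For CSV output specifically, the standard `csv` module is a safer alternative than joining strings by hand, because it handles quoting automatically (a sketch; `freq_dict` below is a small stand-in for the dictionary built above):

```python
import csv

freq_dict = {"python": 4, "data, science": 2}   # note the comma inside one key
with open("freq_dict_demo.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for word, freq in sorted(freq_dict.items()):
        writer.writerow([word, freq])           # the comma-bearing key is quoted for us
```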
# Verifying Kinetics Models: Part 2 - Writing Tests
Writing verification tests for kinetics models requires having insights as to the dynamic behavior expected of model variables. This discussion focuses on the concentration of molecules (floating species in ``tellurium``).
```
import numpy as np
import tellurium as te
from teUtils.named_timeseries import NamedTimeseries, TIME
from teUtils.timeseries_plotter import TimeseriesPlotter
plotter = TimeseriesPlotter()
```
## Lecture
### Expected Behavior of Model Variables
Our starting point is a framework for describing the expected behavior of model variables. This is best illustrated by example. Consider the model below that you used in the breakout.
```
def runModel(model, is_plot=True):
    """
    Runs and plots the model.
    Parameters
    ----------
    model: str
        An antimony model
    Returns
    -------
    NamedTimeseries of simulation results
    """
    rr = te.loada(model)
    data = rr.simulate(0, 4, 30)
    ts = NamedTimeseries(named_array=data)
    if is_plot:
        plotter.plotTimeSingle(ts, num_row=2, figsize=(12, 10))
    return ts
true_model = '''
# Reactions
J1: S1 -> S2; k1*S1
J2: S2 -> S3; k2*S2
J3: S3 -> S4; k3*S3
J4: S4 -> S5; k4*S4
J5: S5 -> S6; k5*S5;
# Species initializations
S1 = 10;
k1 = 1; k2 = 2; k3 = 3; k4 = 4; k5 = 5;
S1 = 10; S2 = 0; S3 = 0; S4 = 0; S5 = 0; S6 = 0;
'''
ts0 = runModel(true_model)
```
Here are some observations about the dynamical behavior of this linear pipeline:
1. S1 is monotone decreasing
1. S6 is monotone increasing
1. S2-S5 are not monotone.
1. The time at which $S_i$ peaks is less than the time at which $S_{i+1}$ peaks.
Let's see what happens if we had an error in our model.
```
# Incorrect model - forgot J3
bad_model1 = '''
# Reactions
J1: S1 -> S2; k1*S1
J2: S2 -> S3; k2*S2
J4: S4 -> S5; k4*S4
J5: S5 -> S6; k5*S5;
# Species initializations
S1 = 10;
k1 = 1; k2 = 2; k3 = 3; k4 = 4; k5 = 5;
S1 = 10; S2 = 0; S3 = 0; S4 = 0; S5 = 0; S6 = 0;
'''
ts1 = runModel(bad_model1)
```
Which of the following are no longer true:
1. S1 is monotone decreasing
1. S6 is monotone increasing
1. S2-S5 are not monotone.
1. The time at which $S_i$ peaks is less than the time at which $S_{i+1}$ peaks.
How do we implement tests for these predicates?
First, we need to determine if a curve is monotone, and if so, whether it is monotone increasing or decreasing.
``ts`` is monotone decreasing if the following hold:
1. ``ts[0] >= ts[-1]``
1. ``ts[0] == max(ts)``
Similarly, ``ts`` is monotone increasing if the following hold:
1. ``ts[-1] >= ts[0]``
1. ``ts[-1] == max(ts)``
```
# Implementation of tests for monotonicity
def isIncreasing(arr, is_strict=False):
    """
    Checks if the timeseries is monotone non-decreasing
    Parameters
    ----------
    arr: numpy array
    is_strict: bool
        strictly increasing
    Returns
    -------
    bool
    """
    diff_arr = arr[1:] - arr[:-1]
    if is_strict:
        return all([v > 0 for v in diff_arr])
    else:
        return all([v >= 0 for v in diff_arr])
# Implementation of tests for monotonicity
def isDecreasing(arr, is_strict=False):
    """
    Checks if the timeseries is monotone non-increasing
    Parameters
    ----------
    arr: numpy array
    is_strict: bool
        strictly decreasing
    Returns
    -------
    bool
    """
    return isIncreasing(-arr, is_strict=is_strict)
# Implementation of the test for monotonicity in either direction
def isMonotone(arr, is_strict=False):
    """
    Checks if the series is monotone
    Parameters
    ----------
    arr: numpy array
    is_strict: bool
        strictly monotone
    Returns
    -------
    bool
    """
    return isDecreasing(arr, is_strict=is_strict) or isIncreasing(arr, is_strict=is_strict)
print("Correct model, Is S1 decreasing: %r" % isDecreasing(ts0["S1"]))
print("Bad model 1, Is S1 decreasing: %r" % isDecreasing(ts1["S1"]))
print("Correct model, Is S6 increasing: %r" % isIncreasing(ts0["S6"]))
print("Bad model 1, Is S6 increasing: %r" % isIncreasing(ts1["S6"]))
```
But S6 in ``bad_model1`` is constant at 0, which still counts as (non-strictly) increasing. We want to check for strictly increasing.
```
print("Bad model 1, Is S6 increasing: %r" % isIncreasing(ts1["S6"], is_strict=True))
def checkNonMontone(model, is_strict=False):
    """
    Checks that the interior species of the model are not monotone
    Parameters
    ----------
    model: str
        An antimony model
    is_strict: bool
        strictly monotone
    Returns
    -------
    None (prints the result)
    """
    ts = runModel(model, is_plot=False)
    is_ok = True
    for species in ["S2", "S3", "S4", "S5"]:
        if isMonotone(ts[species], is_strict=is_strict):
            print("Species %s is monotone and should not be." % species)
            is_ok = False
    if is_ok:
        print("Ok.")
checkNonMontone(true_model)
checkNonMontone(bad_model1)
```
Now consider a different kind of error in the model where an extra reaction is added.
```
bad_model2 = '''
# Reactions
J1: S1 -> S2; k1*S1
J1a: S1 -> S3; k1*S1
J2: S2 -> S3; k2*S2
J3: S3 -> S4; k3*S3
J4: S4 -> S5; k4*S4
J5: S5 -> S6; k5*S5;
# Species initializations
S1 = 10;
k1 = 1; k2 = 2; k3 = 3; k4 = 4; k5 = 5;
S1 = 10; S2 = 0; S3 = 0; S4 = 0; S5 = 0; S6 = 0;
'''
ts2 = runModel(bad_model2)
```
Do our tests catch this error?
```
print("Bad model 2, S1 is decreasing. %r." % isDecreasing(ts2["S1"]))
print("Bad model 2, S6 is increasing. %r." % isIncreasing(ts2["S6"]))
checkNonMontone(bad_model2)
```
We need another test.
Recall that one expectation for this linear pathway is that the interior species peak at progressively later times. If you look closely, you see that S5 peaks *before* S4.
```
def reportPeaks(timeseries):
    for colname in timeseries.colnames:
        print("%s: %2.2f" % (colname, getTimeOfPeak(timeseries, colname)))

def getTimeOfPeak(ts, colname):
    """
    Finds the time at which the maximum value occurs
    Parameters
    ----------
    ts: NamedTimeseries
    colname: str
        name of column in ts
    """
    return ts.getTimesForValue(colname, max(ts[colname]))[0]
reportPeaks(ts0)
reportPeaks(ts2)
```
## Breakout
In the following, you will write test code for a linear pathway that checks for:
1. S1 is monotone decreasing
1. S6 is monotone increasing
1. S2-S5 are not monotone.
1. The time at which $S_i$ peaks is less than the time at which $S_{i+1}$ peaks.
You can assume that the pathway starts with S1, and successive integers indicate the step in the pathway.
```
# TestContainer from Part 1
class TestContainer(object):

    def __init__(self, model):
        ts = self._runSimulation(model)
        self.ts = ts

    def test1(self):
        if len(self.ts) > 0:
            return True
        else:
            print("No data produced by simulation.")
            return False

    def test2(self):
        s1 = self.ts["S1"]
        if s1[0] == 10:
            return True
        else:
            print("Initial value of S1 should be 10!")
            return False

    def test3(self):
        """
        Check that the maximum value of S1 is at time 0.
        """
        s1 = self.ts["S1"]
        if s1[0] == max(s1):
            return True
        else:
            print("The maximum value of S1 should be at time 0!")
            return False

    def test4(self):
        """
        Check that the maximum value of S6 is at the end of the simulation.
        """
        s6 = self.ts["S6"]
        if s6[-1] == max(s6):
            return True
        else:
            print("The maximum value of S6 should be at the last time!")
            return False

    # Simulation runner
    def _runSimulation(self, model):
        """
        Runs a simulation for an antimony model
        Parameters
        ----------
        model: str
        Returns
        -------
        NamedTimeseries
        """
        rr = te.loada(model)
        data = rr.simulate()
        return NamedTimeseries(named_array=data)

    def run(self):
        is_ok = True
        for item in dir(self):
            if item[0:4] == "test":
                # Construct the function call
                func = "self.%s()" % item
                is_ok = is_ok and eval(func)
        if is_ok:
            print("OK.")
        else:
            print("Problems encountered in model.")
tester = TestContainer(true_model)
tester.run()
```
#### Create new tests in TestContainer.
1. The first species is monotone decreasing.
1. The last species is monotone increasing.
1. Interior species are not monotone.
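One possible sketch of the requested monotonicity checks, written as standalone helpers on plain arrays (the toy series below are assumed values for illustration, not actual simulation output):

```python
import numpy as np

def is_increasing(arr, is_strict=False):
    """True if arr is monotone non-decreasing (strictly increasing if is_strict)."""
    diffs = np.diff(np.asarray(arr, dtype=float))
    return bool(np.all(diffs > 0)) if is_strict else bool(np.all(diffs >= 0))

def is_decreasing(arr, is_strict=False):
    """True if arr is monotone non-increasing (strictly decreasing if is_strict)."""
    return is_increasing(-np.asarray(arr, dtype=float), is_strict=is_strict)

# S1 should fall, S6 should rise, and an interior species should do neither.
s1 = [10.0, 6.0, 3.0, 1.0]
s6 = [0.0, 1.0, 4.0, 9.0]
s3 = [0.0, 5.0, 2.0, 1.0]
print(is_decreasing(s1, is_strict=True))       # True
print(is_increasing(s6, is_strict=True))       # True
print(is_increasing(s3) or is_decreasing(s3))  # False: S3 peaks in the middle
```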
#### Run the tests for true_model and bad_model2
# SIF4Sci Usage Example
## Overview
SIF4Sci is a module that provides question segmentation and annotation. It splits question text into token sequences in a customizable way, in preparation for vectorizing the questions later.
This document demonstrates how SIF4Sci is used, taking the question below (from the LUNA question bank) as an example.

- A question entered in the [SIF format](https://edunlp.readthedocs.io/en/docs_dev/tutorial/zh/sif.html) looks like:
```
item = {
"stem": r"如图来自古希腊数学家希波克拉底所研究的几何图形.此图由三个半圆构成,三个半圆的直径分别为直角三角形$ABC$的斜边$BC$, 直角边$AB$, $AC$.$\bigtriangleup ABC$的三边所围成的区域记为$I$,黑色部分记为$II$, 其余部分记为$III$.在整个图形中随机取一点,此点取自$I,II,III$的概率分别记为$p_1,p_2,p_3$,则$\SIFChoice$$\FigureID{1}$",
"options": ["$p_1=p_2$", "$p_1=p_3$", "$p_2=p_3$", "$p_1=p_2+p_3$"]
}
item["stem"]
```
- Loading the figure referenced by `$\\FigureID{1}$`:
```
from PIL import Image
img = Image.open("../../asset/_static/item_figure.png")
figures = {"1": img}
img
```
## Importing the Module
```
from EduNLP.SIF import sif4sci, is_sif, to_sif
```
## Validating the Question Format
```
is_sif(item['stem'])
```
- If a question fails the SIF check because a formula is not wrapped in `$$`, the `to_sif` function can convert it to the standard format. For example:
```
text = '某校一个课外学习小组为研究某作物的发芽率y和温度x(单位...'
is_sif(text)
text = '某校一个课外学习小组为研究某作物的发芽率y和温度x(单位...'
to_sif(text)
```
## Question Segmentation and Tokenization
Now that we have question text in the standard format, we can preprocess it further, for example by segmenting and tokenizing it.
### Question Segmentation
#### Basic Segmentation
Separates the text, formulas, figures, and special symbols.
```
segments = sif4sci(item["stem"], figures=figures, tokenization=False)
segments
```
- The text segments
```
segments.text_segments
```
- The formula segments
```
segments.formula_segments
```
- The figure segments
```
segments.figure_segments
segments.figure_segments[0].figure
```
- The special symbols
```
segments.ques_mark_segments
```
#### Symbolized Segmentation
If you are not interested in the concrete content of the text and formulas, but only in the overall (or partial) composition of the question, you can set the `symbol` parameter to replace the different components with special markers.
- symbol:
- "t": text
- "f": formula
- "g": figure
- "m": question mark
```
sif4sci(item["stem"], figures=figures, tokenization=False, symbol="tfgm")
```
### Tokenization
To support vectorizing questions later, this module provides tokenization of the question text, i.e. converting a question into a token sequence.
Based on the element types that make up a question, parsing is divided into **text parsing** and **formula parsing**. For a more detailed walkthrough, see [Tokenization](../Tokenizer/tokenizer.ipynb).
```
tokens = sif4sci(item["stem"], figures=figures, tokenization=True)
```
- Text parsing results
```
tokens.text_tokens
```
#### Formula parsing results
```
tokens.formula_tokens
```
- Customize the parameters to get tailored parsing results
(1) If you want the formula split into its LaTeX syntax tokens as a flat, ordered sequence, choose `linear` as the output method (`method`):
```
sif4sci(
item["stem"],
figures=figures,
tokenization=True,
tokenization_params={
"formula_params": {
"method": "linear",
}
}
).formula_tokens
```
(2) If you want the syntax-tree sequence parsed from the formula, choose `ast` as the output method:
> An abstract syntax tree (AST), often just called a syntax tree, is an abstract representation of the syntactic structure of source code. It represents the grammar as a tree, where each node corresponds to a construct in the source.
> An AST can therefore be viewed as a representation of a formula's syntactic structure.
```
sif4sci(
item["stem"],
figures=figures,
tokenization=True,
tokenization_params={
"formula_params":{
"method": "ast",
}
}
).formula_tokens
```
- Displaying the syntax trees:
```
f = sif4sci(
item["stem"],
figures=figures,
tokenization=True,
tokenization_params={
"formula_params":{
"method": "ast",
"return_type": "ast",
"ord2token": True,
"var_numbering": True,
}
}
).formula_tokens
f
# ForestPlotter is assumed to come from EduNLP's formula-visualization utilities; its import is not shown in the original notebook.
for i in range(0, len(f)):
    ForestPlotter().export(
        f[i], root_list=[node for node in f[i]],
    )
    # plt.show()
```
(3) Suppose you only care about a formula's structure and type, not the concrete variables: for example, the equation `x^2 + y = 1` is structurally identical to `w^2 + z = 1`.
In that case, set `ord2token = True` to convert formula variable names into tokens:
```
sif4sci(
item["stem"],
figures=figures,
tokenization=True,
tokenization_params={
"formula_params":{
"method": "ast",
"return_type": "list",
"ord2token": True,
}
}
).formula_tokens
```
(4) If, beyond the behavior in (3), you also need to distinguish different variables, additionally set `var_numbering=True`:
```
sif4sci(
item["stem"],
figures=figures,
tokenization=True,
tokenization_params={
"formula_params":{
"method": "ast",
"ord2token": True,
"return_type": "list",
"var_numbering": True
}
}
).formula_tokens
```
## Putting It All Together
Combining the methods above, we convert the question into a token sequence, ready for later vectorization.
```
sif4sci(item["stem"], figures=figures, tokenization=True,
symbol="fgm")
```
# Altair Data Server
This notebook shows an example of using the [Altair data server](https://github.com/altair-viz/altair_data_server), a lightweight plugin for [Altair](http://altair-viz.github.io) that lets you efficiently and transparently work with larger datasets.
Altair data server can be installed with pip:
```
!pip install altair_data_server
```
## Motivation
Altair charts are built on [vega-lite](http://vega.github.io/vega-lite), a visualization grammar that encodes charts in JSON before rendering them in your browser with Javascript.
For example, consider the following chart:
```
import pandas as pd
import numpy as np
import altair as alt
data = pd.DataFrame({
'value': [2, 6, 4, 7, 6],
'category': list('ABCDE'),
})
chart = alt.Chart(data).mark_bar().encode(
x='value',
y='category'
)
chart
```
The chart itself, including the data, is encoded to a JSON specification that you can inspect:
```
print(chart.to_json())
```
Notice that the data is encoded in the chart specification itself: this is very convenient because it results in a single, well-defined specification that contains **everything** required to recreate the chart.
However, this leads to issues for larger datasets. For example:
```
df = pd.DataFrame({
'timepoint': np.arange(5000),
'value': np.random.randn(5000),
'label': np.random.choice(list('ABCDE'), 5000)
})
chart = alt.Chart(df).mark_line().encode(
x='timepoint',
y='value',
color='label'
)
def print_size_of(chart):
    spec = chart.to_json()
    print(f"Size of chart spec: {len(spec) / 1024:.1f} KB")
print_size_of(chart)
```
If we had rendered this chart, it would have resulted in about half a megabyte of JSON text being embedded into the notebook. If your notebook contains many charts, this can quickly lead to large and unwieldy notebooks, and in the worst cases to crashing the browser.
For this reason, Altair builds in a protection that prevents you from embedding extremely large data. Here's what happens when you use a dataset with a large number of rows:
```
df = pd.DataFrame({
'x': np.arange(50000),
'y': np.random.randn(50000),
})
big_chart = alt.Chart(df).mark_line().encode(
x='x',
y='y'
)
big_chart.display()
```
We can print the size of the chart by temporarily disabling the maximum rows check:
```
with alt.data_transformers.enable(max_rows=None):
    print_size_of(big_chart)
```
Had Altair displayed this, it would have added 3MB of JSON text to the notebook, and if we created multiple charts, it would be another 3MB each time. This can quickly add-up in the context of interactive data exploration.
The way to get around this is to put the data somewhere that is not in the notebook itself, but is visible to the renderer running in the notebook. Altair has some [existing approaches](https://altair-viz.github.io/user_guide/faq.html#maxrowserror-how-can-i-plot-large-datasets) that work by saving the data to disk, but this is not always desirable, and doesn't always work in cloud-based Jupyter frontends.
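A toy, standard-library-only illustration of why a URL reference helps: the embedded spec grows with the data, while a URL-based spec stays tiny. (The dicts below only mimic the shape of a vega-lite spec; they are not real charts.)

```python
import json

rows = [{"x": i, "y": i * 0.5} for i in range(50000)]
embedded_spec = {"mark": "line", "data": {"values": rows}}
url_spec = {"mark": "line", "data": {"url": "http://localhost:18888/data.json"}}

# The embedded spec is megabytes; the URL-based one is a fixed handful of bytes.
print(f"embedded: {len(json.dumps(embedded_spec)) / 1024:.0f} KB")
print(f"url-based: {len(json.dumps(url_spec))} bytes")
```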
## A Solution: Altair Data Server
The [Altair data server](https://github.com/altair-viz/altair_data_server) plugin provides a nice solution to this. Rather than embedding the data in the notebook or saving the data to disk, when enabled it starts a background server, serves the data, and inserts the appropriate URL into the altair chart:
```
alt.data_transformers.enable('data_server')
print_size_of(big_chart)
big_chart.display()
```
*Note: If you are running on a cloud-based hosted notebook like MyBinder, you will have to modify the above slightly, and instead run*
```python
alt.data_transformers.enable('data_server_proxied')
```
The resulting spec is only 0.4KB, which is small enough that embedding it in the notebook doesn't cause problems. It's instructive to look at the spec directly:
```
print(big_chart.to_json())
```
What the data transformer has done is to replace the embedded data with a URL reference, and to make that data available at that URL. We can see this by accessing the URL directly:
```
url = big_chart.to_dict()['data']['url']
print(url)
```
We can load the data from the backend using Pandas
```
if not url.startswith('http://localhost'):
    # Using proxied URL; reconstruct the host url
    *proxy, port, filename = url.split('/')
    url = f"http://localhost:{port}/{filename}"
served_data = pd.read_json(url)
served_data.head()
```
## When to use the data server
There is one distinct disadvantage of using the data server approach: your charts will only render as long as your Python session is active.
So the data server is a good option when you'll be **working interactively, generating multiple charts as part of an exploration of data**.
But once you are finished with exploration and want to generate charts that will be fully embedded in the notebook, you can restore the default data transformer:
```python
alt.data_transformers.enable('default')
```
and carry on from there.
# Vanilla Recurrent Neural Network
<br>
Character-level implementation of a vanilla recurrent neural network
## Import dependencies
```
import numpy as np
import matplotlib.pyplot as plt
```
## Parameters Initialization
```
def initialize_parameters(hidden_size, vocab_size):
    '''
    Returns:
    parameters -- a tuple of network parameters
    adagrad_mem_vars -- a tuple of mem variables required for adagrad update
    '''
    Wxh = np.random.randn(hidden_size, vocab_size) * 0.01
    Whh = np.random.randn(hidden_size, hidden_size) * 0.01
    Why = np.random.randn(vocab_size, hidden_size) * 0.01
    bh = np.zeros([hidden_size, 1])
    by = np.zeros([vocab_size, 1])
    mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    mbh, mby = np.zeros_like(bh), np.zeros_like(by)  # memory variables for Adagrad
    parameters = (Wxh, Whh, Why, bh, by)
    adagrad_mem_vars = (mWxh, mWhh, mWhy, mbh, mby)
    return (parameters, adagrad_mem_vars)
```
## Forward Propagation
```
def softmax(X):
    t = np.exp(X - np.max(X, axis=0))  # subtract the column max for numerical stability
    return t / np.sum(t, axis=0)

def forward_propogation(X, parameters, seq_length, hprev):
    '''
    Implement the forward propagation in the network
    Arguments:
    X -- input to the network
    parameters -- a tuple containing weights and biases of the network
    seq_length -- length of sequence of input
    hprev -- previous hidden state
    Returns:
    caches -- dict of activations and hidden states for each step of forward prop
    '''
    caches = {}
    caches['h0'] = np.copy(hprev)
    Wxh, Whh, Why, bh, by = parameters
    for i in range(seq_length):
        x = X[i].reshape(vocab_size, 1)
        ht = np.tanh(np.dot(Whh, caches['h' + str(i)]) + np.dot(Wxh, x) + bh)
        Z = np.dot(Why, ht) + by
        A = softmax(Z)
        caches['A' + str(i+1)] = A
        caches['h' + str(i+1)] = ht
    return caches
```
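A quick sanity check of the softmax used above (re-defined here so the snippet is self-contained): every column of the output should be a valid probability distribution summing to 1.

```python
import numpy as np

def softmax(X):
    t = np.exp(X - np.max(X, axis=0))  # subtracting the max avoids overflow
    return t / np.sum(t, axis=0)

Z = np.random.randn(5, 3)
A = softmax(Z)
print(np.allclose(A.sum(axis=0), 1.0))  # True: each column sums to 1
print(bool((A >= 0).all()))             # True: all entries are valid probabilities
```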
## Cost Computation
```
def compute_cost(Y, caches):
    """
    Implement the cost function for the network
    Arguments:
    Y -- true "label" vector, shape (vocab_size, number of examples)
    caches -- dict of activations and hidden states for each step of forward prop
    Returns:
    cost -- cross-entropy cost
    """
    seq_length = len(caches) // 2
    cost = 0
    for i in range(seq_length):
        y = Y[i].reshape(vocab_size, 1)
        cost += - np.sum(y * np.log(caches['A' + str(i+1)]))
    return np.squeeze(cost)
```
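A tiny worked example of the cross-entropy term summed above: with a one-hot target, the cost reduces to minus the log of the probability assigned to the true class (the numbers are made up for illustration).

```python
import numpy as np

y = np.array([[0.0], [1.0], [0.0]])  # true class is index 1
a = np.array([[0.1], [0.7], [0.2]])  # predicted distribution
cost = -np.sum(y * np.log(a))        # only the true-class term survives
print(round(float(cost), 4))         # 0.3567, i.e. -log(0.7)
```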
## Backward Propagation
```
def backward_propogation(X, Y, caches, parameters):
    '''
    Implement backpropagation
    Arguments:
    X -- input to the network
    Y -- true labels of data
    caches -- dict containing values of `A` and `h` for each char in forward prop
    parameters -- tuple containing parameters of the network
    Returns
    grads -- tuple containing gradients of the network parameters
    '''
    Wxh, Whh, Why, bh, by = parameters
    dWhh, dWxh, dWhy = np.zeros_like(Whh), np.zeros_like(Wxh), np.zeros_like(Why)
    dbh, dby = np.zeros_like(bh), np.zeros_like(by)
    dhnext = np.zeros_like(caches['h0'])
    seq_length = len(caches) // 2
    for i in reversed(range(seq_length)):
        y = Y[i].reshape(vocab_size, 1)
        x = X[i].reshape(vocab_size, 1)
        dZ = np.copy(caches['A' + str(i+1)]) - y
        dWhy += np.dot(dZ, caches['h' + str(i+1)].T)
        dby += dZ
        dht = np.dot(Why.T, dZ) + dhnext
        dhraw = dht * (1 - caches['h' + str(i+1)] * caches['h' + str(i+1)])
        dbh += dhraw
        dWhh += np.dot(dhraw, caches['h' + str(i)].T)
        dWxh += np.dot(dhraw, x.T)
        dhnext = np.dot(Whh.T, dhraw)
    for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
        np.clip(dparam, -5, 5, out=dparam)  # clip to mitigate exploding gradients
    grads = (dWxh, dWhh, dWhy, dbh, dby)
    return grads
```
## Parameters Update
```
def update_parameters(parameters, grads, adagrad_mem_vars, learning_rate):
    '''
    Update parameters of the network using the Adagrad update
    Arguments:
    parameters -- tuple containing weights and biases of the network
    grads -- tuple containing the gradients of the parameters
    adagrad_mem_vars -- tuple of accumulated squared gradients
    learning_rate -- rate of the adagrad update
    Returns
    parameters -- tuple containing updated parameters
    '''
    for param, dparam, mem in zip(parameters, grads, adagrad_mem_vars):
        mem += dparam * dparam
        param -= learning_rate * dparam / np.sqrt(mem + 1e-8)  # adagrad update
    return (parameters, adagrad_mem_vars)
```
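One hand-computed Adagrad step matching the update rule above (the values are made up for illustration): the effective step is the learning rate times the gradient divided by the root of the accumulated squared gradients.

```python
import numpy as np

param = np.array([1.0])
grad = np.array([2.0])
mem = np.zeros(1)
learning_rate = 0.1

mem += grad * grad                                   # mem = 4
param -= learning_rate * grad / np.sqrt(mem + 1e-8)  # step ~= 0.1 * 2 / 2 = 0.1
print(param)                                         # approximately [0.9]
```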
## Sample text from model
```
def print_sample(ht, seed_ix, n, parameters):
    """
    Samples a sequence of integers from the model.
    Arguments
    ht -- memory state
    seed_ix -- seed letter for first time step
    n -- number of chars to sample
    parameters -- tuple containing network weights and biases
    """
    Wxh, Whh, Why, bh, by = parameters
    x = np.eye(vocab_size)[seed_ix].reshape(vocab_size, 1)
    ixes = []
    for t in range(n):
        ht = np.tanh(np.dot(Wxh, x) + np.dot(Whh, ht) + bh)
        y = np.dot(Why, ht) + by
        p = np.exp(y) / np.sum(np.exp(y))
        ix = np.random.choice(range(vocab_size), p=p.ravel())  # sampling (rather than argmax) adds variety and avoids deterministic repetitive loops
        x = np.eye(vocab_size)[ix].reshape(vocab_size, 1)
        ixes.append(ix)
    txt = ''.join(ix_to_char[ix] for ix in ixes)
    print('----\n %s \n----' % txt)

def get_one_hot(p, char_to_ix, data, vocab_size):
    '''
    Gets indexes of chars of `seq_length` from `data`, returns them in one-hot representation
    '''
    inputs = [char_to_ix[ch] for ch in data[p:p+seq_length]]
    targets = [char_to_ix[ch] for ch in data[p+1:p+seq_length+1]]
    X = np.eye(vocab_size)[inputs]
    Y = np.eye(vocab_size)[targets]
    return X, Y
```
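A minimal demonstration of the char-to-index-to-one-hot round trip that `get_one_hot` relies on, using a toy vocabulary (not the notebook's training data):

```python
import numpy as np

data = "hello"
chars = sorted(set(data))  # ['e', 'h', 'l', 'o']
char_to_ix = {ch: i for i, ch in enumerate(chars)}
vocab_size = len(chars)

inputs = [char_to_ix[ch] for ch in data[0:3]]  # indices for 'h', 'e', 'l'
X = np.eye(vocab_size)[inputs]                 # indexing the identity matrix one-hot encodes
print(X.shape)                                 # (3, 4): one row per char, one column per vocab entry
print(int(X[0].argmax()) == char_to_ix['h'])   # True: row 0 encodes 'h'
```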
## Model
```
def Model(data, seq_length, lr, char_to_ix, ix_to_char, num_of_iterations):
    '''
    Train the RNN model and return the trained parameters
    '''
    parameters, adagrad_mem_vars = initialize_parameters(hidden_size, vocab_size)
    costs = []
    n, p = 0, 0
    smooth_loss = -np.log(1.0 / vocab_size) * seq_length
    while n < num_of_iterations:
        if p + seq_length + 1 >= len(data) or n == 0:
            hprev = np.zeros((hidden_size, 1))  # reset RNN memory
            p = 0  # go from start of data
        X, Y = get_one_hot(p, char_to_ix, data, vocab_size)
        caches = forward_propogation(X, parameters, seq_length, hprev)
        cost = compute_cost(Y, caches)
        grads = backward_propogation(X, Y, caches, parameters)
        parameters, adagrad_mem_vars = update_parameters(parameters, grads, adagrad_mem_vars, lr)
        smooth_loss = smooth_loss * 0.999 + cost * 0.001
        if n % 1000 == 0:
            print_sample(hprev, char_to_ix['a'], 200, parameters)
            print('Iteration: %d -- Cost: %0.3f' % (n, smooth_loss))
            costs.append(cost)
        hprev = caches['h' + str(seq_length)]
        n += 1
        p += seq_length
    plt.plot(costs)
    return parameters
```
## Implementing the model on a text
```
data = open('data/text-data.txt', 'r').read() # read a text file
chars = list(set(data)) # vocabulary
data_size, vocab_size = len(data), len(chars)
print ('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) } # maps char to it's index in vocabulary
ix_to_char = { i:ch for i,ch in enumerate(chars) } # maps index in vocabular to corresponding character
# hyper-parameters
learning_rate = 0.1
hidden_size = 100
seq_length = 25
num_of_iterations = 20000
parameters = Model(data, seq_length, learning_rate, char_to_ix, ix_to_char, num_of_iterations)
```
```
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
def identity_block(X, f, filters, stage, block):
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    F1, F2, F3 = filters
    X_shortcut = X
    X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)
    X = Conv2D(filters = F2, kernel_size = (f,f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
    X = Activation('relu')(X)
    X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
    X = layers.Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X

def convolutional_block(X, f, filters, stage, block, s = 2):
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    F1, F2, F3 = filters
    X_shortcut = X
    X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)
    X = Conv2D(F2, (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
    X = Activation('relu')(X)
    X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
    X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
    X = layers.Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X

def ResNet50(input_shape = (64, 64, 3), classes = 6):
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)
    X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)
    X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
    X = convolutional_block(X, f = 3, filters = [128, 128, 512], stage = 3, block='a', s = 2)
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')
    X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
    X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
    X = AveragePooling2D((2,2), name='avg_pool')(X)
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
    model = Model(inputs = X_input, outputs = X, name='ResNet50')
    return model
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
model.fit(X_train, Y_train, epochs = 20, batch_size = 32)
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
model.summary()
```
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Resize an image (preserving its aspect ratio) so the whole image fits on screen
def ResizeWithAspectRatio(image, width=None, height=None, inter=cv2.INTER_AREA):
    dim = None
    (h, w) = image.shape[:2]
    if width is None and height is None:
        return image
    if width is None:
        r = height / float(h)
        dim = (int(w * r), height)
    else:
        r = width / float(w)
        dim = (width, int(h * r))
    return cv2.resize(image, dim, interpolation=inter)
img = cv2.imread('surfacedata/contrastcrop/Nr385.jpg')
# below is from original code
#img1 = cv2.imread('cylinder.png')
#images=np.concatenate(img(img,img1),axis=1)
img_rs = ResizeWithAspectRatio(img, width=1000)
cv2.imshow("Image",img_rs)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Convert to Grayscale image
gray_img=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# find histogram of grayscale and display
hist = cv2.calcHist([gray_img], [0], None, [256], [0, 256])  # calcHist expects a list of images
plt.subplot(121)
plt.title("Image1")
plt.xlabel('bins')
plt.ylabel("No of pixels")
plt.plot(hist)
plt.show()
# Normalise brightness and increase contrast
gray_img_eqhist=cv2.equalizeHist(gray_img)
hist = cv2.calcHist([gray_img_eqhist], [0], None, [256], [0, 256])
plt.subplot(121)
plt.plot(hist)
plt.show()
# Display enhanced image
gray_img_eqhistrs = ResizeWithAspectRatio(gray_img_eqhist, width=1000)
cv2.imshow("Enhanced",gray_img_eqhistrs)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Using Contrast Limited Adaptive Histogram Equalization (CLAHE)
clahe=cv2.createCLAHE(clipLimit=40) # 40 is default
gray_img_clahe=clahe.apply(gray_img_eqhist)
imgrs = ResizeWithAspectRatio(gray_img_clahe, width=1000)
cv2.imshow("Images",imgrs)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Attempting to separate foreground and background with thresholding
th=80
max_val=255
ret, o1 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_BINARY)
cv2.putText(o1,"Thresh_Binary",(40,100),cv2.FONT_HERSHEY_SIMPLEX,2,(255,255,255),3,cv2.LINE_AA)
ret, o2 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_BINARY_INV)
cv2.putText(o2,"Thresh_Binary_inv",(40,100),cv2.FONT_HERSHEY_SIMPLEX,2,(255,255,255),3,cv2.LINE_AA)
ret, o3 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_TOZERO)
cv2.putText(o3,"Thresh_Tozero",(40,100),cv2.FONT_HERSHEY_SIMPLEX,2,(255,255,255),3,cv2.LINE_AA)
ret, o4 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_TOZERO_INV)
cv2.putText(o4,"Thresh_Tozero_inv",(40,100),cv2.FONT_HERSHEY_SIMPLEX,2,(255,255,255),3,cv2.LINE_AA)
ret, o5 = cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_TRUNC)
cv2.putText(o5,"Thresh_trunc",(40,100),cv2.FONT_HERSHEY_SIMPLEX,2,(255,255,255),3,cv2.LINE_AA)
ret ,o6= cv2.threshold(gray_img_clahe, th, max_val, cv2.THRESH_OTSU)
cv2.putText(o6,"Thresh_OSTU",(40,100),cv2.FONT_HERSHEY_SIMPLEX,2,(255,255,255),3,cv2.LINE_AA)
final=np.concatenate((o1,o2,o3),axis=1)
final1=np.concatenate((o4,o5,o6),axis=1)
cv2.imwrite("surfaceenhance/thimg1.jpg",final)
cv2.imwrite("surfaceenhance/thimg2.jpg",final1)
# Attempt adaptive thresholding to deal with different illumination
thresh1 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
thresh2 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 3)
thresh3 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 13, 5)
thresh4 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 4)
final=np.concatenate((thresh1,thresh2,thresh3,thresh4),axis=1)
cv2.imwrite('surfaceenhance/rect.jpg',final)
```
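The adaptive-threshold calls above compute a different threshold for every pixel from its local neighborhood, which is what makes them robust to uneven illumination. As a sanity check, here is a self-contained NumPy sketch of what `cv2.ADAPTIVE_THRESH_MEAN_C` does (the helper `mean_adaptive_threshold` is illustrative, not part of OpenCV; OpenCV's implementation is far faster):

```python
import numpy as np

def mean_adaptive_threshold(img, block_size=3, C=2, max_val=255):
    """Per-pixel threshold = mean of the block_size x block_size
    neighborhood minus C, mirroring cv2.ADAPTIVE_THRESH_MEAN_C."""
    pad = block_size // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros_like(img, dtype=np.uint8)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block_size, x:x + block_size].mean()
            out[y, x] = max_val if img[y, x] > local_mean - C else 0
    return out

# A horizontal gradient image: a single global threshold would split it
# in half, but the local rule responds to each pixel's own neighborhood.
img = np.tile(np.arange(0, 250, 50, dtype=np.uint8), (5, 1))
print(mean_adaptive_threshold(img, block_size=3, C=2))
```

Larger `blockSize` values (11, 31 above) average over a bigger window and so track only slow illumination changes, while `C` biases how aggressively pixels are kept.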
| github_jupyter |
# Sonar - Decentralized Model Training Simulation (local)
DISCLAIMER: This is a proof-of-concept implementation. It is not a remotely product-ready implementation and does not follow proper conventions for security, convenience, or scalability. It is part of a broader proof-of-concept demonstrating the vision of the OpenMined project, its major moving parts, and how they might work together.
# Getting Started: Installation
##### Step 1: install IPFS
- https://ipfs.io/docs/install/
##### Step 2: Turn on IPFS Daemon
Execute on command line:
> ipfs daemon
##### Step 3: Install Ethereum testrpc
- https://github.com/ethereumjs/testrpc
##### Step 4: Turn on testrpc with 1000 initialized accounts (each with some money)
Execute on command line:
> testrpc -a 1000
##### Step 5: install openmined/sonar and all dependencies (truffle)
##### Step 6: Locally Deploy Smart Contracts in openmined/sonar
From the OpenMined/Sonar repository root run
> truffle compile
> truffle migrate
You should see something like this when you run `truffle migrate`:
```
Using network 'development'.
Running migration: 1_initial_migration.js
Deploying Migrations...
Migrations: 0xf06039885460a42dcc8db5b285bb925c55fbaeae
Saving successful migration to network...
Saving artifacts...
Running migration: 2_deploy_contracts.js
Deploying ConvertLib...
ConvertLib: 0x6cc86f0a80180a491f66687243376fde45459436
Deploying ModelRepository...
ModelRepository: 0xe26d32efe1c573c9f81d68aa823dcf5ff3356946
Linking ConvertLib to MetaCoin
Deploying MetaCoin...
MetaCoin: 0x6d3692bb28afa0eb37d364c4a5278807801a95c5
```
The address after 'ModelRepository' is something you'll need to copy-paste into the code
below when you initialize the `ModelRepository` object. In this case the address to be
copy-pasted is `0xe26d32efe1c573c9f81d68aa823dcf5ff3356946`.
##### Step 7: execute the following code
# The Simulation: Diabetes Prediction
In this example, a diabetes research center (Cure Diabetes Inc) wants to train a model to try to predict the progression of diabetes based on several indicators. They have collected a small sample (42 patients) of data but it's not enough to train a model. So, they intend to offer up a bounty of $5,000 to the OpenMined community to train a high-quality model.
As it turns out, there are 400 diabetics in the network who are candidates for the model (they are collecting the relevant fields). In this simulation, we're going to facilitate the training of Cure Diabetes Inc's model by incentivizing these 400 anonymous contributors via the Ethereum blockchain.
Note, in this simulation we're only going to use the sonar and syft packages (and everything is going to be deployed locally on a test blockchain). Future simulations will incorporate mine and capsule for greater anonymity and automation.
### Imports and Convenience Functions
```
import warnings
import numpy as np
import phe as paillier
from sonar.contracts import ModelRepository, Model
from syft.he.Paillier import KeyPair
from syft.nn.linear import LinearClassifier
from sklearn.datasets import load_diabetes

def get_balance(account):
    return repo.web3.fromWei(repo.web3.eth.getBalance(account), 'ether')

warnings.filterwarnings('ignore')
```
### Setting up the Experiment
```
# for the purpose of the simulation, we're going to split our dataset up amongst
# the relevant simulated users
diabetes = load_diabetes()
y = diabetes.target
X = diabetes.data
validation = (X[0:5],y[0:5])
anonymous_diabetes_users = (X[6:],y[6:])
# we're also going to initialize the model trainer smart contract, which in the
# real world would already be on the blockchain (managing other contracts) before
# the simulation begins
# ATTENTION: copy paste the correct address (NOT THE DEFAULT SEEN HERE) from truffle migrate output.
repo = ModelRepository('0xb0f99be3d5c858efaabe19bcc54405f3858d48bc', ipfs_host='localhost', web3_host='localhost') # blockchain hosted model repository
# we're going to set aside 9 accounts (accounts[1:10]) for our patients.
# Let's go ahead and pair each data point with each patient's
# address so that we know we don't get them confused
patient_addresses = repo.web3.eth.accounts[1:10]
anonymous_diabetics = list(zip(patient_addresses,
                               anonymous_diabetes_users[0],
                               anonymous_diabetes_users[1]))
# we're going to set aside 1 account for Cure Diabetes Inc
cure_diabetes_inc = repo.web3.eth.accounts[1]
```
## Step 1: Cure Diabetes Inc Initializes a Model and Provides a Bounty
```
pubkey,prikey = KeyPair().generate(n_length=1024)
diabetes_classifier = LinearClassifier(desc="DiabetesClassifier",n_inputs=10,n_labels=1)
initial_error = diabetes_classifier.evaluate(validation[0],validation[1])
diabetes_classifier.encrypt(pubkey)
diabetes_model = Model(owner=cure_diabetes_inc,
                       syft_obj=diabetes_classifier,
                       bounty=1,
                       initial_error=initial_error,
                       target_error=10000)
model_id = repo.submit_model(diabetes_model)
```
## Step 2: An Anonymous Patient Downloads the Model and Improves It
```
model_id
model = repo[model_id]
diabetic_address,input_data,target_data = anonymous_diabetics[0]
repo[model_id].submit_gradient(diabetic_address,input_data,target_data)
```
## Step 3: Cure Diabetes Inc. Evaluates the Gradient
```
repo[model_id]
old_balance = get_balance(diabetic_address)
print(old_balance)
new_error = repo[model_id].evaluate_gradient(cure_diabetes_inc,repo[model_id][0],prikey,pubkey,validation[0],validation[1])
new_error
new_balance = get_balance(diabetic_address)
incentive = new_balance - old_balance
print(incentive)
```
## Step 4: Rinse and Repeat
```
model
for i, (addr, input, target) in enumerate(anonymous_diabetics):
    try:
        model = repo[model_id]
        # patient is doing this
        model.submit_gradient(addr, input, target)
        # Cure Diabetes Inc does this
        old_balance = get_balance(addr)
        new_error = model.evaluate_gradient(cure_diabetes_inc, model[i+1], prikey, pubkey,
                                            validation[0], validation[1], alpha=2)
        print("new error = " + str(new_error))
        incentive = round(get_balance(addr) - old_balance, 5)
        print("incentive = " + str(incentive))
    except Exception:
        print("Connection Reset")
```
| github_jupyter |
```
!pip install neural-tangents
```
## Imports
```
import time
import itertools
import numpy.random as npr
import jax.numpy as np
from jax.config import config
from jax import jit, grad, random
from jax.nn import log_softmax
from jax.experimental import optimizers
import jax.experimental.stax as jax_stax
import neural_tangents.stax as nt_stax
import neural_tangents
from PIL import Image
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torchvision import transforms, datasets
from torchvision.datasets import FashionMNIST
def data_to_numpy(dataloader):
    X = []
    y = []
    for batch_id, (cur_X, cur_y) in enumerate(dataloader):
        X.extend(cur_X.numpy())
        y.extend(cur_y.numpy())
    X = np.asarray(X)
    y = np.asarray(y)
    return X, y

def _one_hot(x, k, dtype=np.float32):
    """Create a one-hot encoding of x of size k."""
    return np.array(x[:, None] == np.arange(k), dtype)
def cifar_10():
    torch.manual_seed(0)
    num_classes = 10
    if torch.cuda.is_available():
        device = torch.device('cuda:0')
    else:
        device = torch.device('cpu')
    cifar10_stats = {
        "mean": (0.4914, 0.4822, 0.4465),
        "std": (0.24705882352941178, 0.24352941176470588, 0.2615686274509804),
    }
    simple_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(cifar10_stats['mean'], cifar10_stats['std']),
    ])
    train_loader = torch.utils.data.DataLoader(
        datasets.CIFAR10(root='./data', train=True, download=True, transform=simple_transform),
        batch_size=2048, shuffle=True, pin_memory=True)
    test_loader = torch.utils.data.DataLoader(
        datasets.CIFAR10(root='./data', train=False, download=True, transform=simple_transform),
        batch_size=2048, shuffle=True, pin_memory=True)
    train_images, train_labels = data_to_numpy(train_loader)
    test_images, test_labels = data_to_numpy(test_loader)
    train_images = np.transpose(train_images, (0, 2, 3, 1))
    test_images = np.transpose(test_images, (0, 2, 3, 1))
    train_labels = _one_hot(train_labels, num_classes)
    test_labels = _one_hot(test_labels, num_classes)
    return train_images, train_labels, test_images, test_labels
%%time
train_images, train_labels, test_images, test_labels = cifar_10()
train_images.shape, train_labels.shape, test_images.shape, test_labels.shape
```
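The `_one_hot` helper above is just a broadcasting trick: comparing a column vector of labels against the row of class indices yields an `(n, k)` boolean matrix. A self-contained version in plain NumPy (same semantics as the notebook's `jax.numpy` version) makes the behavior easy to check:

```python
import numpy as np

def one_hot(x, k, dtype=np.float32):
    # x[:, None] has shape (n, 1), np.arange(k) has shape (k,);
    # broadcasting gives an (n, k) boolean matrix, cast to float.
    return np.array(x[:, None] == np.arange(k), dtype)

labels = np.array([0, 2, 1])
print(one_hot(labels, 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```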
## Define training primitives
Note: The training code is based on the following example: https://github.com/google/jax/blob/master/examples/mnist_classifier.py.
```
def loss(params, batch):
    inputs, targets = batch
    preds = predict(params, inputs)
    return -np.mean(np.sum(log_softmax(preds, axis=1) * targets, axis=1))

def accuracy(params, batch):
    inputs, targets = batch
    target_class = np.argmax(targets, axis=1)
    predicted_class = np.argmax(predict(params, inputs), axis=1)
    return np.mean(predicted_class == target_class)

@jit
def update(i, opt_state, batch):
    params = get_params(opt_state)
    return opt_update(i, grad(loss)(params, batch), opt_state)
rng_state = npr.RandomState(0)
def data_stream_of(images, labels, batch_size=500, batch_limit=None):
    assert len(images) == len(labels)
    rng = npr.RandomState(0)
    n = len(images)
    perm = rng.permutation(n)
    for i in range(n // batch_size):
        if (batch_limit is not None) and i >= batch_limit:
            break
        batch_idx = perm[i * batch_size:(i + 1) * batch_size]
        yield images[batch_idx], labels[batch_idx]
```
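On dummy data the batching generator behaves as follows (a self-contained copy using plain NumPy's `RandomState` so it runs outside the notebook; note it drops the final partial batch because of the `n // batch_size` loop bound):

```python
import numpy as np

def data_stream_of(images, labels, batch_size=500, batch_limit=None):
    # Shuffle once, then yield consecutive slices of the permutation.
    assert len(images) == len(labels)
    rng = np.random.RandomState(0)
    n = len(images)
    perm = rng.permutation(n)
    for i in range(n // batch_size):
        if (batch_limit is not None) and i >= batch_limit:
            break
        batch_idx = perm[i * batch_size:(i + 1) * batch_size]
        yield images[batch_idx], labels[batch_idx]

X = np.arange(12).reshape(12, 1)
y = np.arange(12)
batches = list(data_stream_of(X, y, batch_size=4))
print(len(batches))            # 12 // 4 = 3 full batches
print(batches[0][0].shape)     # (4, 1)
# batch_limit truncates the stream, which the training loops below
# use to evaluate accuracy on only a few batches:
print(len(list(data_stream_of(X, y, batch_size=4, batch_limit=1))))  # 1
```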
## Train a small CNN in JAX with NTK parameterization
Here I run a few epochs to make sure that my training code works.
I mix `jax.stax` with `neural_tangents.stax` because I want to use both BatchNorm and the NTK parameterization.
```
channels = 32
num_classes = 10
init_random_params, predict = jax_stax.serial(
    nt_stax.Conv(channels, (3, 3), padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
    nt_stax.Conv(channels, (3, 3), strides=(2, 2), padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
    nt_stax.Conv(channels, (3, 3), strides=(2, 2), padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
    nt_stax.Conv(channels, (3, 3), strides=(2, 2), padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
    nt_stax.AvgPool((1, 1)), nt_stax.Flatten(),
    nt_stax.Dense(num_classes), jax_stax.Identity,
)
rng = random.PRNGKey(0)
step_size = 10.
num_epochs = 10
batch_size = 500
momentum_mass = 0.9
opt_init, opt_update, get_params = optimizers.momentum(step_size, mass=momentum_mass)
_, init_params = init_random_params(rng, (batch_size, 32, 32, 3))
opt_state = opt_init(init_params)
itercount = itertools.count()
print("\nStarting training...")
for epoch in range(num_epochs):
    start_time = time.time()
    for batch in data_stream_of(train_images, train_labels):
        opt_state = update(next(itercount), opt_state, batch)
    params = get_params(opt_state)
    train_accs = [accuracy(params, batch) for batch in data_stream_of(train_images, train_labels)]
    train_acc = np.average(train_accs)
    test_accs = [accuracy(params, batch) for batch in data_stream_of(test_images, test_labels)]
    test_acc = np.average(test_accs)
    epoch_time = time.time() - start_time
    print(f"Epoch {epoch} in {epoch_time:0.2f} sec")
    print(f"Training set accuracy {train_acc}")
    print(f"Test set accuracy {test_acc}")
```
## Train a ResNet
```
num_classes = 10
def WideResnetBlock(channels, strides=(1, 1), channel_mismatch=False):
    Main = jax_stax.serial(
        nt_stax.Conv(channels, (3, 3), strides, padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
        nt_stax.Conv(channels, (3, 3), padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
        jax_stax.Identity
    )
    Shortcut = nt_stax.Identity() if not channel_mismatch else nt_stax.Conv(channels, (3, 3), strides, padding='SAME')
    return jax_stax.serial(jax_stax.FanOut(2),
                           jax_stax.parallel(Main, Shortcut),
                           jax_stax.FanInSum,
                           jax_stax.Identity)

def WideResnetGroup(n, channels, strides=(1, 1)):
    blocks = []
    blocks += [WideResnetBlock(channels, strides, channel_mismatch=True)]
    for _ in range(n - 1):
        blocks += [WideResnetBlock(channels, (1, 1))]
    return jax_stax.serial(*blocks)

def WideResnet(num_classes, num_channels=32, block_size=1):
    return jax_stax.serial(
        nt_stax.Conv(num_channels, (3, 3), padding='SAME'), jax_stax.BatchNorm(), nt_stax.Relu(),
        WideResnetGroup(block_size, num_channels),
        WideResnetGroup(block_size, num_channels, (2, 2)),
        WideResnetGroup(block_size, num_channels, (2, 2)),
        nt_stax.AvgPool((1, 1)),
        nt_stax.Flatten(),
        nt_stax.Dense(num_classes),
        jax_stax.Identity
    )
init_random_params, predict = WideResnet(num_classes)
rng = random.PRNGKey(0)
step_size = 10.
num_epochs = 10
momentum_mass = 0.9
opt_init, opt_update, get_params = optimizers.momentum(step_size, mass=momentum_mass)
_, init_params = init_random_params(rng, (batch_size, 32, 32, 3))
opt_state = opt_init(init_params)
itercount = itertools.count()
print("\nStarting training...")
for epoch in range(num_epochs):
    start_time = time.time()
    for batch in data_stream_of(train_images, train_labels):
        opt_state = update(next(itercount), opt_state, batch)
    params = get_params(opt_state)
    train_accs = [accuracy(params, batch) for batch in data_stream_of(train_images, train_labels, batch_limit=4)]
    train_acc = np.average(train_accs)
    test_accs = [accuracy(params, batch) for batch in data_stream_of(test_images, test_labels, batch_limit=4)]
    test_acc = np.average(test_accs)
    epoch_time = time.time() - start_time
    print(f"Epoch {epoch} in {epoch_time:0.2f} sec")
    print(f"Training set accuracy {train_acc}")
    print(f"Test set accuracy {test_acc}")
```
## Train a linearization of ResNet
Note: I have removed BatchNorm layers because with them training didn't work.
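Before the full ResNet case, the idea behind `linearize` can be seen on a scalar toy model: replace `f(p)` by its first-order Taylor expansion around the initial parameters, `f_lin(p) = f(p0) + f'(p0) * (p - p0)`. The sketch below is hand-derived with an analytic derivative rather than JAX's `jvp` machinery, and the model `f` is made up for illustration:

```python
def f(p, x):
    # A deliberately nonlinear model in its parameter p.
    return p ** 2 * x

def linearize_scalar(f, p0, dfdp):
    # First-order Taylor expansion of f around p0 in the parameter.
    def f_lin(p, x):
        return f(p0, x) + dfdp(p0, x) * (p - p0)
    return f_lin

dfdp = lambda p, x: 2 * p * x      # analytic derivative of p**2 * x w.r.t. p
f_lin = linearize_scalar(f, p0=1.0, dfdp=dfdp)

print(f(1.0, 3.0), f_lin(1.0, 3.0))   # agree exactly at p0: 3.0 3.0
print(f(1.1, 3.0), f_lin(1.1, 3.0))   # close for small parameter steps
print(f(2.0, 3.0), f_lin(2.0, 3.0))   # diverge far from p0: 12.0 vs 9.0
```

Training the linearized network keeps the parameters close to initialization, which is exactly the regime where this approximation (and the NTK theory behind it) is accurate.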
```
from jax.tree_util import tree_multimap
from jax.api import jvp
from jax.api import vjp
# copied from
def linearize(f, params):
    """Returns a function `f_lin`, the first-order Taylor approximation to `f`.

    Example:
    >>> # Compute the MSE of the first-order Taylor series of a function.
    >>> f_lin = linearize(f, params)
    >>> mse = np.mean((f(new_params, x) - f_lin(new_params, x)) ** 2)
    """
    @jit
    def f_lin(p, *args, **kwargs):
        dparams = tree_multimap(lambda x, y: x - y, p, params)
        f_params_x, proj = jvp(lambda param: f(param, *args, **kwargs),
                               (params,), (dparams,))
        return f_params_x + proj
    return f_lin
def WideResnetBlock(channels, strides=(1, 1), channel_mismatch=False):
    Main = jax_stax.serial(
        nt_stax.Conv(channels, (3, 3), strides, padding='SAME'), nt_stax.Relu(),
        nt_stax.Conv(channels, (3, 3), padding='SAME'), nt_stax.Relu(),
        jax_stax.Identity
    )
    Shortcut = nt_stax.Identity() if not channel_mismatch else nt_stax.Conv(channels, (3, 3), strides, padding='SAME')
    return jax_stax.serial(jax_stax.FanOut(2),
                           jax_stax.parallel(Main, Shortcut),
                           jax_stax.FanInSum,
                           jax_stax.Identity)

def WideResnetGroup(n, channels, strides=(1, 1)):
    blocks = []
    blocks += [WideResnetBlock(channels, strides, channel_mismatch=True)]
    for _ in range(n - 1):
        blocks += [WideResnetBlock(channels, (1, 1))]
    return jax_stax.serial(*blocks)

def WideResnet(num_classes, num_channels=32, block_size=1):
    return jax_stax.serial(
        nt_stax.Conv(num_channels, (3, 3), padding='SAME'), nt_stax.Relu(),
        WideResnetGroup(block_size, num_channels),
        WideResnetGroup(block_size, num_channels, (2, 2)),
        WideResnetGroup(block_size, num_channels, (2, 2)),
        nt_stax.AvgPool((1, 1)),
        nt_stax.Flatten(),
        nt_stax.Dense(num_classes),
        jax_stax.Identity
    )
num_classes = 10
init_random_params, predict = WideResnet(num_classes, num_channels=512)
rng = random.PRNGKey(0)
step_size = 1.
num_epochs = 100
momentum_mass = 0.9
opt_init, opt_update, get_params = optimizers.momentum(step_size, mass=momentum_mass)
_, init_params = init_random_params(rng, (batch_size, 32, 32, 3))
opt_state = opt_init(init_params)
itercount = itertools.count()
predict = linearize(predict, init_params) # !important: linearization
print("\nStarting training...")
for epoch in range(num_epochs):
    start_time = time.time()
    for batch in data_stream_of(train_images, train_labels, batch_size=100):
        opt_state = update(next(itercount), opt_state, batch)
    params = get_params(opt_state)
    train_accs = [accuracy(params, batch) for batch in data_stream_of(train_images, train_labels, batch_size=100, batch_limit=20)]
    train_acc = np.average(train_accs)
    test_accs = [accuracy(params, batch) for batch in data_stream_of(test_images, test_labels, batch_size=100, batch_limit=20)]
    test_acc = np.average(test_accs)
    epoch_time = time.time() - start_time
    print(f"Epoch {epoch} in {epoch_time:0.2f} sec")
    print(f"Training set accuracy {train_acc}")
    print(f"Test set accuracy {test_acc}")
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Regression: Predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/basic_regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/pt/tutorials/keras/basic_regression"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/pt/tutorials/keras//basic_regression"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
</table>
Note: Our TensorFlow community has translated these documents. Because community translations are *best-effort*, there is no guarantee that they are an accurate and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have a suggestion to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, contact the [docs@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
In a regression problem, the goal is to predict the output of a continuous value, such as a price or a probability. This contrasts with classification problems, where the goal is to choose a class from a list of classes (for example, whether an image contains an apple or an orange, and thus which fruit the image represents).
This notebook uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and builds a model to predict the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we provide the model with descriptions of many automobiles from that period. These descriptions include attributes such as cylinders, displacement, horsepower, and weight.
This example uses the `tf.keras` API; see [this guide](https://www.tensorflow.org/guide/keras) for details.
```
# Use seaborn for pairplot
!pip install seaborn
from __future__ import absolute_import, division, print_function, unicode_literals
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Getting the data
First, download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Import the data using pandas:
```
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
                          na_values="?", comment='\t',
                          sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Clean the data
The dataset contains a few unknown values.
```
dataset.isna().sum()
```
To keep this tutorial simple, drop the rows containing those unknown values.
```
dataset = dataset.dropna()
```
The "Origin" column is categorical, not numeric, so convert it to a *one-hot* encoding:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into training and test sets
Now split the dataset into a training set and a test set.
We will use the test set in the final evaluation of the model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Take a quick look at the joint distribution of a few columns from the training set.
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also look at the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Split features from labels
Separate the target value, or *label*, from the features. This label is the value the model is trained to predict.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Look again at the `train_stats` block above and note how different the ranges of the features are.
It is good practice to normalize features that use different scales and ranges. Although the model might converge without normalization, it makes training harder and makes the resulting model dependent on the choice of units used in the input.
Note: although we intentionally generate these statistics from only the training dataset, they will also be used to normalize the test dataset. We need to project the test set onto the same distribution that the model was trained on.
```
def norm(x):
    return (x - train_stats['mean']) / train_stats['std']

normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
This normalized data is what we will use to train the model.
Caution: the statistics used to normalize the inputs here (mean and standard deviation) must be applied to any other data fed to the model, along with the one-hot encoding we did earlier. That includes the test set as well as live data when the model is used in production.
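That caution can be made concrete: fit the statistics once on the training data, then reuse them verbatim for anything else. A NumPy sketch with made-up numbers (note `np.std` defaults to the population standard deviation, while pandas' `describe()` reports the sample standard deviation, but the reuse principle is identical):

```python
import numpy as np

train = np.array([[100.0], [200.0], [300.0]])
new_data = np.array([[250.0]])   # e.g. a test row or a production request

mu, sigma = train.mean(axis=0), train.std(axis=0)  # fit on TRAIN only
normed_train = (train - mu) / sigma
normed_new = (new_data - mu) / sigma               # reuse the SAME mu/sigma
print(normed_new)   # new data expressed in the training distribution's units
```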
## The model
### Build the model
Let's build our model. Here we'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model-building steps are wrapped in a function, `build_model`, since we'll create a second model later on.
```
def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
        layers.Dense(64, activation=tf.nn.relu),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model

model = build_model()
```
## Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now try out the model. Take a batch of 10 examples from the training data and call `model.predict` on it.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working, and it produces a result of the expected shape and type.
### Train the model
Train the model for 1000 epochs, and record the training and validation accuracy in the `history` object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        if epoch % 100 == 0:
            print('')
        print('.', end='')
EPOCHS = 1000
history = model.fit(
    normed_train_data, train_labels,
    epochs=EPOCHS, validation_split=0.2, verbose=0,
    callbacks=[PrintDot()])
```
Visualize the model's training progress using the stats stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch

    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [MPG]')
    plt.plot(hist['epoch'], hist['mean_absolute_error'],
             label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
             label='Val Error')
    plt.ylim([0, 5])
    plt.legend()

    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Square Error [$MPG^2$]')
    plt.plot(hist['epoch'], hist['mean_squared_error'],
             label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_squared_error'],
             label='Val Error')
    plt.ylim([0, 20])
    plt.legend()
    plt.show()
plot_history(history)
```
This graph shows little improvement, or even degradation, in the validation error after about 100 epochs. Let's update the `model.fit` call to stop training automatically when the validation score no longer improves. We'll use an `EarlyStopping` callback that tests a training condition at every epoch. If a set number of epochs passes without improvement, training stops automatically.
You can learn more about this callback [here](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()

# The patience parameter is the number of epochs to wait for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
                    validation_split=0.2, verbose=0,
                    callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that on the validation set, the average error is usually around +/- 2 MPG. Is this good? We'll leave that decision to you.
Let's see how well the model generalizes by using the **test** set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world.
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
### Make predictions
Finally, predict MPG values using data in the test set.
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model predicts reasonably well. Let's take a look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
It's not quite Gaussian, but that is to be expected given that the number of examples is quite small.
## Conclusion
This notebook introduced a few techniques for handling a regression problem.
* Mean Squared Error (MSE) is a common loss function used for regression problems (different loss functions are used for classification problems).
* Similarly, the evaluation metrics used for regression differ from those used for classification. A common regression metric is Mean Absolute Error (MAE).
* When numeric input features have values with different ranges, each feature should be scaled independently to the same range.
* If there is not much training data, one technique is to prefer a small network with few hidden layers to avoid overfitting.
* Early stopping is a useful technique to prevent overfitting.
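The loss-versus-metric distinction above can be made concrete in a few lines of NumPy (illustrative numbers only):

```python
import numpy as np

y_true = np.array([20.0, 25.0, 30.0])   # true MPG values
y_pred = np.array([22.0, 24.0, 27.0])   # model predictions

mse = np.mean((y_pred - y_true) ** 2)   # penalizes large errors quadratically
mae = np.mean(np.abs(y_pred - y_true))  # average error in the label's own units (MPG)
print(mse, mae)
```

MSE is in squared units (MPG²), which makes it a convenient training loss but awkward to report, while MAE reads directly as "off by 2 MPG on average."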
| github_jupyter |
```
import pandas as pd
import numpy as np
import scanpy as sc
import os
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics.cluster import adjusted_mutual_info_score
from sklearn.metrics.cluster import homogeneity_score
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri
df_metrics = pd.DataFrame(columns=['ARI_Louvain', 'ARI_kmeans', 'ARI_HC',
                                   'AMI_Louvain', 'AMI_kmeans', 'AMI_HC',
                                   'Homogeneity_Louvain', 'Homogeneity_kmeans', 'Homogeneity_HC'])
workdir = './output/'
path_fm = os.path.join(workdir,'feature_matrices/')
path_clusters = os.path.join(workdir,'clusters/')
path_metrics = os.path.join(workdir,'metrics/')
os.system('mkdir -p '+path_clusters)
os.system('mkdir -p '+path_metrics)
metadata = pd.read_csv('./input/metadata.tsv',sep='\t',index_col=0)
num_clusters = len(np.unique(metadata['label']))
print(num_clusters)
files = [x for x in os.listdir(path_fm) if x.startswith('FM')]
len(files)
files
def getNClusters(adata, n_cluster, range_min=0, range_max=3, max_steps=20):
    this_step = 0
    this_min = float(range_min)
    this_max = float(range_max)
    while this_step < max_steps:
        print('step ' + str(this_step))
        this_resolution = this_min + ((this_max - this_min) / 2)
        sc.tl.louvain(adata, resolution=this_resolution)
        this_clusters = adata.obs['louvain'].nunique()
        print('got ' + str(this_clusters) + ' at resolution ' + str(this_resolution))
        if this_clusters > n_cluster:
            this_max = this_resolution
        elif this_clusters < n_cluster:
            this_min = this_resolution
        else:
            return this_resolution, adata
        this_step += 1
    print('Cannot find the number of clusters')
    print('Clustering solution from last iteration is used: ' + str(this_clusters) + ' at resolution ' + str(this_resolution))
for file in files:
file_split = file.split('_')
method = file_split[1]
dataset = file_split[2].split('.')[0]
if(len(file_split)>3):
method = method + '_' + '_'.join(file_split[3:]).split('.')[0]
print(method)
pandas2ri.activate()
readRDS = robjects.r['readRDS']
df_rds = readRDS(os.path.join(path_fm,file))
fm_mat = pandas2ri.ri2py(robjects.r['data.frame'](robjects.r['as.matrix'](df_rds)))
fm_mat.fillna(0,inplace=True)
fm_mat.columns = metadata.index
adata = sc.AnnData(fm_mat.T)
adata.var_names_make_unique()
adata.obs = metadata.loc[adata.obs.index,]
df_metrics.loc[method,] = ""
#Louvain
sc.pp.neighbors(adata, n_neighbors=15,use_rep='X')
# sc.tl.louvain(adata)
getNClusters(adata,n_cluster=num_clusters)
#kmeans
kmeans = KMeans(n_clusters=num_clusters, random_state=2019).fit(adata.X)
adata.obs['kmeans'] = pd.Series(kmeans.labels_,index=adata.obs.index).astype('category')
#hierarchical clustering
hc = AgglomerativeClustering(n_clusters=num_clusters).fit(adata.X)
adata.obs['hc'] = pd.Series(hc.labels_,index=adata.obs.index).astype('category')
#clustering metrics
#adjusted Rand index
ari_louvain = adjusted_rand_score(adata.obs['label'], adata.obs['louvain'])
ari_kmeans = adjusted_rand_score(adata.obs['label'], adata.obs['kmeans'])
ari_hc = adjusted_rand_score(adata.obs['label'], adata.obs['hc'])
#adjusted mutual information
ami_louvain = adjusted_mutual_info_score(adata.obs['label'], adata.obs['louvain'],average_method='arithmetic')
ami_kmeans = adjusted_mutual_info_score(adata.obs['label'], adata.obs['kmeans'],average_method='arithmetic')
ami_hc = adjusted_mutual_info_score(adata.obs['label'], adata.obs['hc'],average_method='arithmetic')
#homogeneity
homo_louvain = homogeneity_score(adata.obs['label'], adata.obs['louvain'])
homo_kmeans = homogeneity_score(adata.obs['label'], adata.obs['kmeans'])
homo_hc = homogeneity_score(adata.obs['label'], adata.obs['hc'])
df_metrics.loc[method,['ARI_Louvain','ARI_kmeans','ARI_HC']] = [ari_louvain,ari_kmeans,ari_hc]
df_metrics.loc[method,['AMI_Louvain','AMI_kmeans','AMI_HC']] = [ami_louvain,ami_kmeans,ami_hc]
df_metrics.loc[method,['Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC']] = [homo_louvain,homo_kmeans,homo_hc]
adata.obs[['louvain','kmeans','hc']].to_csv(os.path.join(path_clusters ,method + '_clusters.tsv'),sep='\t')
df_metrics.to_csv(path_metrics+'clustering_scores.csv')
df_metrics
```
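The three agreement metrics collected above (ARI, AMI, homogeneity) can be sanity-checked on a toy labeling — a minimal sketch using scikit-learn, with made-up label vectors:

```python
from sklearn.metrics import (adjusted_rand_score,
                             adjusted_mutual_info_score,
                             homogeneity_score)

# Ground truth and two candidate clusterings (illustrative values only)
truth   = [0, 0, 1, 1, 2, 2]
perfect = [1, 1, 0, 0, 2, 2]   # the same partition under different label names
noisy   = [0, 1, 1, 1, 2, 0]   # a partially wrong assignment

# All three scores are invariant to label permutation, so a relabeled
# but otherwise identical partition scores 1.0 (up to floating point)
assert adjusted_rand_score(truth, perfect) > 0.999
assert adjusted_mutual_info_score(truth, perfect) > 0.999
assert homogeneity_score(truth, perfect) > 0.999

# A partially wrong clustering scores strictly lower
assert adjusted_rand_score(truth, noisy) < 1.0
```

This label-permutation invariance is why the loop above can compare the Louvain, k-means and hierarchical labels directly against `metadata['label']` without first aligning cluster IDs.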
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
eth = pd.read_csv("ETH.csv").set_index("Date")
rai = pd.read_csv("RAI.csv").set_index('Date')
rai.index = pd.to_datetime(rai.index)
rai.index = pd.to_datetime(rai.index.date)
eth.index = pd.to_datetime(eth.index)
prices = pd.concat([eth, rai], axis=1).dropna()
prices.columns = ['ETH', 'RAI']
prices['RAI'] = prices['RAI'].astype(float)
prices['ETH'] = prices['ETH'].astype(float)
prices['Ratio'] = prices['ETH'] / prices['RAI']
prices['Ratio'].plot(kind='line')
plt.show()
prices["Log Ratio"] = np.log(prices['Ratio']+1)
prices['Log Ratio'].plot(kind='line')
plt.show()
prices['Log Ratio Diff'] = prices['Log Ratio'].diff()
prices['Log Ratio Diff'].plot.hist(bins=10)
plt.show()
import seaborn as sns
sns.kdeplot(prices['Log Ratio Diff'])
initial_ratio = prices['Log Ratio'].iloc[-1]
from scipy.stats import norm
mu, std = norm.fit(prices['Log Ratio Diff'].dropna())
mu = mu/24
std = std / (24**.5)
mu*100
mu, std
run = 0
timesteps = 100
np.random.seed(seed=run)
deltas = np.random.normal(mu, std, 100)
ratios = np.exp(initial_ratio + deltas.cumsum()) - 1
initial_ratio
np.exp(0)
rai_res = 4953661
eth_res = 7785
rai_res/eth_res
```
$$ A \cdot B = C$$
$$ (A+\Delta A) \cdot (B + \Delta B) = C$$
$$ A \Delta B + B \Delta A + \Delta B \Delta A = 0$$
$$ \frac{A}{B} = R_{1}$$
$$ \frac{A+\Delta A}{B + \Delta B} = R_{2}$$
$$ A = B R_{1}$$
$$ A+\Delta A = R_{2} [B + \Delta B]$$
$$ B R_{1}+\Delta A = R_{2} [B + \Delta B]$$
$$ \Delta A = B R_{2} + \Delta B R_{2} - B R_{1}$$
$$ A \Delta B + [B + \Delta B] \cdot [B R_{2} + \Delta B R_{2} - B R_{1}] = 0$$
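The constant-product relations above can be checked numerically — a small sketch with illustrative reserve values (the `4953661`/`7785` figures echo the scratch cell above):

```python
# Constant-product invariant: A * B = C must hold before and after a trade
A, B = 4953661.0, 7785.0           # illustrative RAI / ETH reserves
C = A * B

delta_B = 10.0                     # a trader sells 10 ETH into the pool
delta_A = C / (B + delta_B) - A    # the RAI delta that keeps the product constant

# (A + dA)(B + dB) = C, up to floating-point rounding
assert abs((A + delta_A) * (B + delta_B) - C) / C < 1e-9

# Equivalently, A*dB + B*dA + dB*dA = 0 (the third equation above)
assert abs(A * delta_B + B * delta_A + delta_B * delta_A) < 1e-2

# The ratio R = A / B moves against the seller: R2 < R1 when ETH is sold in
assert (A + delta_A) / (B + delta_B) < A / B
```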
```
rai_res / eth_res
true = 600
def agent_action(signal, s):
#Find current ratio
current_ratio = s['RAI_balance'] / s['ETH_balance']
eth_res = s['ETH_balance']
rai_res = s['RAI_balance']
#Find the side of the trade
if signal < current_ratio:
action_key = "eth_sold"
else:
action_key = "tokens_sold"
#Constant for equations
C = rai_res * eth_res
#Find the maximum shift, so that the trade soaks up all arbitrage opportunities
#(the target ratio is the incoming signal, not the module-level constant)
max_shift = abs(rai_res / eth_res - signal)
#Start with a constant choice of a 10 ETH trade
eth_size = 10
#Decide on the sign of the ETH delta
if action_key == "eth_sold":
eth_delta = eth_size
else:
eth_delta = -eth_size
#Compute the RAI delta to hold C constant
rai_delta = C / (eth_res + eth_delta) - rai_res
#Calculate the implied shift in ratio
implied_shift = abs((rai_res + rai_delta)/ (eth_res + eth_delta) - rai_res / eth_res)
#While the trade is too large, cut the trade size in half
while implied_shift > max_shift:
#Cut the trade in half (the signed delta must shrink too, or the loop never terminates)
eth_size = eth_size/2
eth_delta = eth_delta/2
rai_delta = C / (eth_res + eth_delta) - rai_res
implied_shift = abs((rai_res + rai_delta)/ (eth_res + eth_delta) - rai_res / eth_res)
if action_key == "eth_sold":
I_t = s['ETH_balance']
O_t = s['RAI_balance']
I_t1 = s['ETH_balance']
O_t1 = s['RAI_balance']
delta_I = eth_delta
delta_O = rai_delta
else:
I_t = s['RAI_balance']
O_t = s['ETH_balance']
I_t1 = s['RAI_balance']
O_t1 = s['ETH_balance']
delta_I = rai_delta
delta_O = eth_delta
return I_t, O_t, I_t1, O_t1, delta_I, delta_O, action_key
s = {"RAI_balance": 4960695.994,
"ETH_balance": 7740.958682}
signal = 651.4802496080162
agent_action(signal, s)
eth_size
delta_I = uniswap_events['eth_delta'][t]
delta_O = uniswap_events['token_delta'][t]
action_key = 'eth_sold'
implied_shift
max_shift
(rai_res + rai_delta)/ (eth_res + eth_delta)
I_t, O_t, I_t1, O_t1, delta_I, delta_O, action_key
```
# Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] [Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012](https://arxiv.org/abs/1207.0580)
```
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs682.classifiers.fc_net import *
from cs682.data_utils import get_CIFAR10_data
from cs682.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs682.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
# Dropout forward pass
In the file `cs682/layers.py`, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
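For orientation, a minimal inverted-dropout forward pass might look like the sketch below — `dropout_forward_sketch` is an illustrative stand-in, not the assignment's reference solution; as in the cells that follow, `p` is the probability of *keeping* a unit:

```python
import numpy as np

def dropout_forward_sketch(x, dropout_param):
    """Inverted dropout: rescale at train time so test time is the identity."""
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p  # keep w.p. p, rescale by 1/p
        out = x * mask
    else:
        mask = None        # test mode: pass the input through unchanged
        out = x
    return out, (dropout_param, mask)
```

Because of the `1/p` rescaling, the train-time output matches the input in expectation, and roughly a `1 - p` fraction of entries are zeroed.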
```
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.25, 0.4, 0.7]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
```
# Dropout backward pass
In the file `cs682/layers.py`, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
```
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.2, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
# Error should be around e-10 or less
print('dx relative error: ', rel_error(dx, dx_num))
```
## Inline Question 1:
What happens if we do not divide the values being passed through inverse dropout by `p` in the dropout layer? Why does that happen?
## Answer:
If we do not divide by `p`, the expected magnitude of each activation at train time is `p` times its test-time value (only a fraction `p` of units survive), so the network would be trained on systematically smaller activations than it sees at test time.
# Fully-connected nets with Dropout
In the file `cs682/classifiers/fc_net.py`, modify your implementation to use dropout. Specifically, if the constructor of the net receives a value that is not 1 for the `dropout` parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
```
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [1, 0.75, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
# Relative errors should be around e-6 or less; Note that it's fine
# if for dropout=1 you have W2 error be on the order of e-5.
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
```
## Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a keep probability of 0.25. We will then visualize the training and validation accuracies of the two networks over time.
```
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [1, 0.25]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Inline Question 2:
Compare the validation and training accuracies with and without dropout -- what do your results suggest about dropout as a regularizer?
## Answer:
Our experiment shows that although the training accuracy of the network with dropout is lower, its validation accuracy is higher, which suggests that dropout reduces overfitting and acts as a regularizer.
## Inline Question 3:
Suppose we are training a deep fully-connected network for image classification, with dropout after hidden layers (parameterized by keep probability p). How should we modify p, if at all, if we decide to decrease the size of the hidden layers (that is, the number of nodes in each layer)?
## Answer:
We should use a larger `p` (i.e., keep more nodes): smaller hidden layers have less redundant capacity, so we need to keep a larger fraction of units in order not to lose the representation of the original data.
## Visualizing-Food-Insecurity-with-Pixie-Dust-and-Watson-Analytics
_IBM Journey showing how to visualize US Food Insecurity with Pixie Dust and Watson Analytics._
Often in data science we do a great deal of work to glean insights that have an impact on society or a subset of it and yet, often, we end up not communicating our findings or communicating them ineffectively to non data science audiences. That's where visualizations become the most powerful. By visualizing our insights and predictions, we, as data scientists and data lovers, can make a real impact and educate those around us that might not have had the same opportunity to work on a project of the same subject. By visualizing our findings and those insights that have the most power to do social good, we can bring awareness and maybe even change. This Code Pattern walks you through how to do just that, with IBM's Data Science Experience (DSX), Pandas, Pixie Dust and Watson Analytics.
For this particular Code Pattern, the focus is food insecurity throughout the US. Low access, diet-related diseases, race, poverty, geography and other factors are considered by using open government data. For some context, this is an increasingly relevant problem for the United States as obesity and diabetes rise: two out of three adult Americans are considered obese, one third of American minors are considered obese, nearly ten percent of Americans have diabetes and nearly fifty percent of the African American population have heart disease. Even more, cardiovascular disease is the leading global cause of death, accounting for 17.3 million deaths per year, and rising. Native American populations often do not have grocery stores on their reservations, and all of these trends are on the rise. The problem lies not only in low access to fresh produce, but also in food culture, low education on healthy eating, and racial and income inequality.
The government data that I use in this journey has been conveniently combined into a dataset for our use, which you can find in this repo under combined_data.csv. You can find the original, government data from the US Bureau of Labor Statistics https://www.bls.gov/cex/ and The United States Department of Agriculture https://www.ers.usda.gov/data-products/food-environment-atlas/data-access-and-documentation-downloads/.
### What is DSX, Pixie Dust and Watson Analytics and why should I care enough about them to use them for my visualizations?
IBM's Data Science Experience, or DSX, is an online browser platform where you can use notebooks or R Studio for your data science projects. DSX is unique in that it automatically starts up a Spark instance for you, allowing you to work in the cloud without any extra work. DSX also has open data available to you, which you can connect to your notebook. There are also other projects available, in the form of notebooks, which you can follow along with and apply to your own use case. DSX also lets you save your work, share it and collaborate with others, much like I'm doing now!
Pixie Dust is a visualization library you can use on DSX. It is already installed into DSX and once it's imported, it only requires one line of code (two words) to use. With that same line of code, you can pick and choose different values to showcase and visualize in whichever way you want from matplotlib, seaborn and bokeh. If you have geographic data, you can also connect to google maps and Mapbox, depending on your preference. Check out a tutorial on Pixie Dust here: https://ibm-watson-data-lab.github.io/pixiedust/displayapi.html#introduction
IBM's Watson Analytics is another browser platform which allows you to input your data, conduct analysis on it and then visualize your findings. If you're new to data science, Watson recommends connections and visualizations with the data it has been given. These visualizations range from bar and scatter plots to predictive spirals, decision trees, heatmaps, trend lines and more. The Watson platform then allows you to share your findings and visualizations with others, completing your pipeline. Check out my visualizations with the link further down in the notebook, or in the images in this repo.
### Let's start with DSX.
Here's a tutorial on getting started with DSX: https://datascience.ibm.com/docs/content/analyze-data/creating-notebooks.html.
To summarize the introduction, you must first make an account and log in. Then, you can create a project (I titled mine: "Diet-Related Disease"). From there, you'll be able to add data and start a notebook. To begin, I used the combined_data.csv as my data asset. You'll want to upload it as a data asset and once that is complete, go into your notebook in the edit mode (click on the pencil icon next to your notebook on the dashboard). To load your data in your notebook, you'll click on the "1001" data icon in the top right. The combined_data.csv should show up. Click on it and select "Insert Pandas Data Frame". Once you do that, a whole bunch of code will show up in your first cell. Once you see that, run the cell and follow along with my tutorial!
_Quick Note: In Github you can view all of the visualizations by selecting the circle with the dash in the middle at the top right of the notebook!_
```
from io import StringIO
import requests
import json
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#Insert Pandas Data Frame
import sys
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3
def __iter__(self): return 0
# @hidden_cell
# The following code accesses a file in your IBM Cloud Object Storage. It includes your credentials.
# You might want to remove those credentials before you share your notebook.
client_3ebc7942f56c4334ae3dfda7d1f19d40 = ibm_boto3.client(service_name='s3',
ibm_api_key_id='-9FQb_5uaEltHpWcqXkeVsFIShUoUOJht768ihN-VgYq',
ibm_auth_endpoint="https://iam.ng.bluemix.net/oidc/token",
config=Config(signature_version='oauth'),
endpoint_url='https://s3-api.us-geo.objectstorage.service.networklayer.com')
body = client_3ebc7942f56c4334ae3dfda7d1f19d40.get_object(Bucket='testpixiestorage302e1eb2addc4a09a8e6c82f7f1ae0e3',Key='combined_data.csv')['Body']
# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
df_data_1 = pd.read_csv(body)
df_data_1.head()
```
### Cleaning data and Exploring
This notebook starts out as a typical data science pipeline: exploring what our data looks like and cleaning the data. Though this is often considered the boring part of the job, it is extremely important. Without clean data, our insights and visualizations could be inaccurate or unclear.
To initially explore, I used matplotlib to see a correlation matrix of our original data. I also looked at some basic statistics to get a feel for what kind of data we are looking at. I also went ahead and plotted using pandas and seaborn to make bar plots, scatterplots and regression plots. You can also find the meanings of the values at the following link in my repo: https://github.com/IBM/visualize-food-insecurity/blob/mjmyers/data/Variable%20list.xlsx.
```
df_data_1.columns
df_data_1.describe()
#to see columns distinctly and evaluate their state
df_data_1['PCT_LACCESS_POP10'].unique()
df_data_1['PCT_REDUCED_LUNCH10'].unique()
df_data_1['PCT_DIABETES_ADULTS10'].unique()
df_data_1['FOODINSEC_10_12'].unique()
#looking at correlation in a table format
df_data_1.corr()
#checking out a correlation matrix with matplotlib
plt.matshow(df_data_1.corr())
#we notice that there is a great deal of variables which makes it hard to read!
#other stats
df_data_1.max()
df_data_1.min()
df_data_1.std()
# Plot counts of a specified column using Pandas
df_data_1.FOODINSEC_10_12.value_counts().plot(kind='barh')
# Bar plot example
sns.factorplot("PCT_SNAP09", "PCT_OBESE_ADULTS10", data=df_data_1,size=3,aspect=2)
# Regression plot
sns.regplot("FOODINSEC_10_12", "PCT_OBESE_ADULTS10", data=df_data_1, robust=True, ci=95, color="seagreen")
sns.despine();
```
After looking at the data I realize that I'm only interested in seeing the connection between certain values and because the dataset is so large it's bringing in irrelevant information and creating noise. To change this, I created a smaller data frame, making sure to remove NaN and 0 values (0s in this dataset generally mean that a number was not recorded).
```
#create a dataframe of values that are most interesting to food insecurity
df_focusedvalues = df_data_1[["State", "County","PCT_REDUCED_LUNCH10", "PCT_DIABETES_ADULTS10", "PCT_OBESE_ADULTS10", "FOODINSEC_10_12", "PCT_OBESE_CHILD11", "PCT_LACCESS_POP10", "PCT_LACCESS_CHILD10", "PCT_LACCESS_SENIORS10", "SNAP_PART_RATE10", "PCT_LOCLFARM07", "FMRKT13", "PCT_FMRKT_SNAP13", "PCT_FMRKT_WIC13", "FMRKT_FRVEG13", "PCT_FRMKT_FRVEG13", "PCT_FRMKT_ANMLPROD13", "FOODHUB12", "FARM_TO_SCHOOL", "SODATAX_STORES11", "State_y", "GROC12", "SNAPS12", "WICS12", "PCT_NHWHITE10", "PCT_NHBLACK10", "PCT_HISP10", "PCT_NHASIAN10", "PCT_65OLDER10", "PCT_18YOUNGER10", "POVRATE10", "CHILDPOVRATE10"]]
#remove NaNs and 0s
df_focusedvalues = df_focusedvalues[(df_focusedvalues != 0).all(1)]
df_focusedvalues = df_focusedvalues.dropna(how='any')
```
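The effect of the two filters above can be seen on a toy frame — a small sketch with made-up values:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'a': [1.0, 0.0, 2.0, 3.0],
                    'b': [4.0, 5.0, np.nan, 6.0]})

# (toy != 0).all(1) keeps only rows with no zero in any column;
# a NaN compares as "not equal to 0", so it survives this first filter...
no_zeros = toy[(toy != 0).all(1)]

# ...and is then removed by dropna(how='any')
clean = no_zeros.dropna(how='any')

assert clean.index.tolist() == [0, 3]   # only fully populated, nonzero rows remain
```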
Before visualizing, a quick heatmap is created so that we can see what correlations we may want to visualize. I visualized a few of these relationships using seaborn, but I ultimately want to try out other visualizations. The quickest way to explore these is through Pixie Dust.
```
#look at heatmap of correlations with the dataframe to see what we should visualize
corr = df_focusedvalues.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values)
```
We can immediately see that a fair amount of strong correlations and relationships exist. Some of these include 18 and younger and Hispanic, an inverse relationship between Asian and obese, a correlation between sodatax and Hispanic, African American and obesity as well as food insecurity, sodatax and obese minors, farmers markets and aid such as WIC and SNAP, obese minors and reduced lunches and a few more.
Let's try and plot some of these relationships with seaborn.
```
#Percent of the population that is white vs SNAP aid participation (positive relationship)
sns.regplot("PCT_NHWHITE10", "SNAP_PART_RATE10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Percent of the population that is Hispanic vs SNAP aid participation (negative relationship)
sns.regplot("SNAP_PART_RATE10", "PCT_HISP10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Eligibility and use of reduced lunches in schools vs percent of the population that is Hispanic (positive relationship)
sns.regplot("PCT_REDUCED_LUNCH10", "PCT_HISP10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Percent of the population that is black vs percent of the population with diabetes (positive relationship)
sns.regplot("PCT_NHBLACK10", "PCT_DIABETES_ADULTS10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
#Percent of population with diabetes vs percent of population with obesity (positive relationship)
sns.regplot("PCT_DIABETES_ADULTS10", "PCT_OBESE_ADULTS10", data=df_focusedvalues, robust=True, ci=95, color="seagreen")
sns.despine();
```
### Now, let's visualize with Pixie Dust.
Now that we've gained some initial insights, let's try out a different tool: Pixie Dust!
As you can see in the notebook below, to activate Pixie Dust, we just import it and then write:
```display(your_dataframe_name)```
After doing this your dataframe will show up in a column-row table format. To visualize your data, you can click the chart icon at the top left (looks like an arrow going up). From there you can choose from a variety of visuals. Once you select the type of chart you want, you can then select the variables you want to showcase. It's worth playing around with this to see how you can create the most effective visualizations for your audience. The notebook below showcases a couple options such as scatterplots, bar charts, line charts, and histograms.
```
import pixiedust
#looking at the dataframe table. Pixie Dust does this automatically, but to find it again you can click the table icon.
display(df_focusedvalues)
#using seaborn in Pixie Dust to look at Food Insecurity and the Percent of the population that is black in a scatter plot
display(df_focusedvalues)
#using matplotlib in Pixie Dust to view Food Insecurity by state in a bar chart
display(df_focusedvalues)
#using bokeh in Pixie Dust to view the percent of the population that is black vs the percent of the population that is obese in a line chart
display(df_focusedvalues)
#using seaborn in Pixie Dust to view obesity vs diabetes in a scatterplot
display(df_focusedvalues)
#using matplotlib in Pixie Dust to view the percent of the population that is white vs SNAP participation rates in a histogram
display(df_focusedvalues)
#using bokeh in Pixie Dust to view the trends in obesity, diabetes, food insecurity and the percent of the population that is black in a line graph
display(df_focusedvalues)
#using matplotlib in Pixie Dust to view childhood obesity vs reduced school lunches in a scatterplot
display(df_focusedvalues)
```
### Let's download our dataframe and work with it on Watson Analytics.
Once you follow along, you can take the new .csv (found under "Data Services" --> "Object Storage" from the top button) and upload it to Watson Analytics. Again, if you do not have an account, you'll want to set one up. Once you are logged in and ready to go, you can upload the data (saved in this repo as df_focusedvalues.csv) to your Watson platform.
```
#First get your credentials by going to the "1001" button again and under your csv file selecting "Insert Credentials".
#The cell below will be hidden because it has my personal credentials so go ahead and insert your own.
# @hidden_cell
# The following code contains the credentials for a file in your IBM Cloud Object Storage.
# You might want to remove those credentials before you share your notebook.
credentials_1 = {
'IBM_API_KEY_ID': '-9FQb_5uaEltHpWcqXkeVsFIShUoUOJht768ihN-VgYq',
'IAM_SERVICE_ID': 'iam-ServiceId-c8681118-cbee-4807-9adf-ac48dfd1cfdd',
'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
'IBM_AUTH_ENDPOINT': 'https://iam.ng.bluemix.net/oidc/token',
'BUCKET': 'testpixiestorage302e1eb2addc4a09a8e6c82f7f1ae0e3',
'FILE': 'combined_data.csv'
}
df_focusedvalues.to_csv('df_focusedvalues.csv',index=False)
import ibm_boto3
from ibm_botocore.client import Config
cos = ibm_boto3.client(service_name='s3',
ibm_api_key_id=credentials_1['IBM_API_KEY_ID'],
ibm_service_instance_id=credentials_1['IAM_SERVICE_ID'],
ibm_auth_endpoint=credentials_1['IBM_AUTH_ENDPOINT'],
config=Config(signature_version='oauth'),
endpoint_url=credentials_1['ENDPOINT'])
cos.upload_file(Filename='df_focusedvalues.csv',Bucket=credentials_1['BUCKET'],Key='df_focusedvalues.csv')
```
Once this is complete, go get your csv file from Data Services, Object Storage! (Find this above! ^)
### Using Watson to visualize our insights.
Once you've set up your account, you can see that the Watson platform has three sections: data, discover and display. You uploaded your data to the "data" section, but now you'll want to go to the "discover" section. Under "discover" you can select your dataframe dataset for use. Once you've selected it, the Watson platform will suggest different insights to visualize. You can move forward with its selections or your own, or both. You can take a look at mine here (you'll need an account to view): https://ibm.co/2xAlAkq or see the screen shots attached to this repo. You can also go into the "display" section and create a shareable layout like mine (again you'll need an account): https://ibm.co/2A38Kg6.
You can see that with these visualizations the user can see the impact of food insecurity by state, geographically distributed and used aid such as reduced school lunches, a map of diabetes by state, a predictive model for food insecurity and diabetes (showcasing the factors that, in combination, suggest a likelihood of food insecurity), drivers of adult diabetes, drivers of food insecurity, the relationship with the frequency of farmers market locations, food insecurity and adult obesity, as well as the relationship between farmers markets, the percent of the population that is Asian, food insecurity and poverty rates.
By reviewing our visualizations both in DSX and Watson, we learn that obesity and diabetes almost go hand in hand, along with food insecurity. We can also learn that this seems to be an inequality issue, both in income and race, with Black and Hispanic populations being more heavily impacted by food insecurity and diet-related diseases than White and Asian populations. We can also see that school-aged children who qualify for reduced lunch are more likely to be obese, whereas those in districts with a farm-to-school program are less likely to be obese.
Like many data science investigations, this analysis could have a big impact on policy and people's approach to food insecurity in the U.S. What's best is that we can create many projects much like this in a quick time period and share them with others by using Pandas, Pixie Dust as well as Watson's predictive and recommended visualizations.
# Iterators
Often an important piece of data analysis is repeating a similar calculation, over and over, in an automated fashion.
For example, you may have a table of names that you'd like to split into first and last, or perhaps of dates that you'd like to convert to some standard format.
One of Python's answers to this is the *iterator* syntax.
We've seen this already with the ``range`` iterator:
```
for i in range(10):
print(i, end=' ')
```
Here we're going to dig a bit deeper.
It turns out that in Python 3, ``range`` is not a list, but is something called an *iterator*, and learning how it works is key to understanding a wide class of very useful Python functionality.
## Iterating over lists
Iterators are perhaps most easily understood in the concrete case of iterating through a list.
Consider the following:
```
for value in [2, 4, 6, 8, 10]: # the loop variable comes before "in"; the iterable collection (here a list) comes after
# do some operation
print(value + 1, end=' ')
```
The familiar "``for x in y``" syntax allows us to repeat some operation for each value in the list.
The fact that the syntax of the code is so close to its English description ("*for [each] value in [the] list*") is just one of the syntactic choices that makes Python such an intuitive language to learn and use.
But the face-value behavior is not what's *really* happening.
When you write something like "``for val in L``", the Python interpreter checks whether ``L`` has an *iterator* interface, which you can check yourself with the built-in ``iter`` function:
```
iter([2, 4, 6, 8, 10])
```
It is this iterator object that provides the functionality required by the ``for`` loop.
The ``iter`` object is a container that gives you access to the next object for as long as it's valid, which can be seen with the built-in function ``next``:
```
I = iter([2, 4, 6, 8, 10])
print(next(I))
print(next(I))
print(next(I))
```
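In fact, a `for` loop is (roughly) syntactic sugar for exactly this `iter`/`next` protocol — a sketch of what the interpreter does under the hood:

```python
L = [2, 4, 6, 8, 10]

# Roughly what "for value in L: print(value + 1, end=' ')" expands to:
it = iter(L)               # ask the object for its iterator
while True:
    try:
        value = next(it)   # pull the next item
    except StopIteration:  # raised once the iterator is exhausted
        break
    print(value + 1, end=' ')
```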
What is the purpose of this level of indirection? Well, it turns out this is incredibly useful, because it allows Python to treat things as lists that are *not actually lists*.
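Under the hood, the ``for`` loop is roughly equivalent to the following sketch, which drives the iterator manually and stops when ``StopIteration`` is raised:

```python
# What "for value in L" does behind the scenes: obtain an iterator,
# then call next() repeatedly until StopIteration is raised.
L = [2, 4, 6, 8, 10]
I = iter(L)
results = []
while True:
    try:
        value = next(I)
    except StopIteration:   # signals that the iterator is exhausted
        break
    results.append(value + 1)
print(results)  # → [3, 5, 7, 9, 11]
```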
## ``range()``: A List Is Not Always a List
Perhaps the most common example of this indirect iteration is the ``range()`` function in Python 3 (named ``xrange()`` in Python 2), which returns not a list, but a special ``range`` object:
```
range(10)
```
``range``, like a list, exposes an iterator:
```
iter(range(10))
```
So Python knows to treat it *as if* it's a list:
```
for i in range(10):
    print(i, end=' ')
```
The benefit of the iterator indirection is that *the full list is never explicitly created!* We can see this by doing a range calculation that would overwhelm our system memory if we actually instantiated it (note that in Python 2, ``range`` creates a list, so running the following there will not lead to good things!):
```
N = 10 ** 12
for i in range(N):       # a range of a trillion values
    if i >= 10: break    # stop after the first 10
    print(i, end=', ')
```
If ``range`` were to actually create that list of one trillion values, it would occupy tens of terabytes of machine memory: a waste, given the fact that we're ignoring all but the first 10 values!
In fact, there's no reason that iterators ever have to end at all! Python's ``itertools`` library contains a ``count`` function that acts as an infinite range:
```
from itertools import count
for i in count():        # count() never terminates on its own
    if i >= 10:
        break
    print(i, end=', ')
```
Had we not thrown in a loop break here, it would go on happily counting until the process is manually interrupted or killed (using, for example, ``ctrl-C``).
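A tidier way to take a finite slice of an infinite iterator, sketched below, is ``itertools.islice``, which avoids the explicit ``break``:

```python
from itertools import count, islice

# islice lazily consumes only the first 10 values of the infinite counter
first_ten = list(islice(count(), 10))
print(first_ten)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```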
## Useful Iterators
This iterator syntax is used nearly universally in Python built-in types, as well as in the more data-science-specific objects we'll explore in later sections. Here we'll cover some of the more useful iterators in the Python language:
### ``enumerate``
Often you need to iterate not only over the values in an array, but also keep track of the index. Following habits from other languages, you might be tempted to do things this way:
```
L = [2, 4, 6, 8, 10]
for i in range(len(L)):   # iterate over the indices 0 .. len(L)-1
    print(i, L[i])
```
Although this does work, Python provides a cleaner syntax using the ``enumerate`` iterator:

(Translator's note 1: prefer the version below; the index-based loop takes two mental hops to read and is not very Pythonic.)

(Translator's note 2: Go borrowed this idea from Python; Go's ``range`` keyword yields index-value pairs just like ``enumerate``.)
```
for i, val in enumerate(L):   # enumerate yields (index, value) tuples
    print(i, val)
```
This is the more "Pythonic" way to enumerate the indices and values in a list.
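``enumerate`` also accepts an optional ``start`` argument, handy for 1-based numbering; a quick sketch:

```python
L = ['a', 'b', 'c']
numbered = list(enumerate(L, start=1))   # indices begin at 1 instead of 0
print(numbered)  # → [(1, 'a'), (2, 'b'), (3, 'c')]
```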
### ``zip``
Other times, you may have multiple lists that you want to iterate over simultaneously. You could certainly iterate over the index as in the non-Pythonic example we looked at previously, but it is better to use the ``zip`` iterator, which zips together iterables:

(Translator's note: think of ``zip`` as a zipper: pulling it closed stitches the two sides together.)
```
L = [2, 4, 6, 8, 10]
R = [3, 6, 9, 12, 15]
for lval, rval in zip(L, R):   # zip pairs up corresponding elements of the two lists
    print(lval, rval)
```
Any number of iterables can be zipped together, and if they are different lengths, the shortest will determine the length of the ``zip``.
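When you want the *longest* input to determine the length instead, the standard library offers ``itertools.zip_longest``; a sketch:

```python
from itertools import zip_longest

# missing values from the shorter iterable are padded with fillvalue
pairs = list(zip_longest([1, 2, 3], ['a', 'b'], fillvalue='-'))
print(pairs)  # → [(1, 'a'), (2, 'b'), (3, '-')]
```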
### ``map`` and ``filter``
The ``map`` iterator takes a function and applies it to the values in an iterator:
```
# squares of the first 10 non-negative integers
square = lambda x: x ** 2
for val in map(square, range(10)):   # apply square to each value in range(10)
    print(val, end=' ')
```
The ``filter`` iterator looks similar, except it only passes through values for which the filter function evaluates to True:
```
# find the even numbers among 0-9
is_even = lambda x: x % 2 == 0
for val in filter(is_even, range(10)):   # keep only values where is_even returns True
    print(val, end=' ')
```
The ``map`` and ``filter`` functions, along with the ``reduce`` function (which lives in Python's ``functools`` module), are fundamental components of the *functional programming* style, which, while not a dominant programming style in the Python world, has its outspoken proponents (see, for example, the [pytoolz](https://toolz.readthedocs.org/en/latest/) library).
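For completeness, a brief sketch of the third member of that trio: ``functools.reduce`` folds an iterable down to a single value:

```python
from functools import reduce

# reduce applies the function cumulatively: ((((1 + 2) + 3) + 4) + 5)
total = reduce(lambda a, b: a + b, [1, 2, 3, 4, 5])
print(total)  # → 15
```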
### Iterators as function arguments
We saw in [``*args`` and ``**kwargs``: Flexible Arguments](08-Defining-Functions.ipynb) that ``*args`` and ``**kwargs`` can be used to pass sequences and dictionaries to functions. It turns out that the ``*args`` syntax works not just with sequences, but with any iterator:
```
print(*range(10))
```
So, for example, we can get tricky and compress the ``map`` example from before into the following:
```
print(*map(lambda x: x ** 2, range(10)))
```
Using this trick lets us answer the age-old question that comes up in Python learners' forums: why is there no ``unzip()`` function which does the opposite of ``zip()``? If you lock yourself in a dark closet and think about it for a while, you might realize that the opposite of ``zip()`` is... ``zip()``! The key is that ``zip()`` can zip together any number of iterators or sequences. Observe:
```
L1 = (1, 2, 3, 4)
L2 = ('a', 'b', 'c', 'd')
z = zip(L1, L2)
print(*z)
z = zip(L1, L2)
new_L1, new_L2 = zip(*z)
print(new_L1, new_L2)
```
Ponder this for a while. If you understand why it works, you'll have come a long way in understanding Python iterators!
## Specialized Iterators: ``itertools``
We briefly looked at the infinite ``range`` iterator, ``itertools.count``. The ``itertools`` module contains a whole host of useful iterators; it's well worth your while to explore the module to see what's available. As an example, consider the ``itertools.permutations`` function, which iterates over all permutations of a sequence:
```
from itertools import permutations
p = permutations(range(3))
print(*p)
```
Similarly, the ``itertools.combinations`` function iterates over all unique combinations of ``N`` values within a list:
```
from itertools import combinations
c = combinations(range(4), 2)
print(*c)
```
Somewhat related is the ``product`` iterator, which iterates over all sets of pairs between two or more iterables:
```
from itertools import product
p = product('ab', range(3))
print(*p)
```
Many more useful iterators exist in ``itertools``: the full list can be found, along with some examples, in Python's [online documentation](https://docs.python.org/3.5/library/itertools.html).
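One more member worth knowing, sketched here: ``itertools.chain`` stitches several iterables together into a single stream:

```python
from itertools import chain

# chain iterates through each input in turn, without building a combined list
combined = list(chain([1, 2], (3, 4), range(5, 7)))
print(combined)  # → [1, 2, 3, 4, 5, 6]
```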
# 0. Import
```
import torch
```
# 1. Data
We create a data tensor with 10 rows and 3 columns. [Tensors themselves are explained in a separate article](https://www.bualabs.com/archives/1629/what-is-tensor-element-wise-broadcasting-operations-high-order-tensor-numpy-array-matrix-vector-tensor-ep-1/)
```
z = torch.tensor([
[ 1, 2, 3 ],
[ 11, 12, 13 ],
[ 0.1, 0.2, 0.3 ],
[ -1, -2, -3 ],
[ 10, 20, 30 ],
[ -5, 0, 5 ],
[ -11, 1/12, 13 ],
[ -0.1, -0.2, -0.3 ],
[ -11, -12, -13 ],
[ -10, -20, -30 ],
]).float()
z.shape
z
```
# 2. The softmax formula
$$\sigma(\mathbf{z})_i = \frac{e^{z_i}}{\sum_{j=1}^K e^{z_j}} \text{ for } i = 1, \dotsc , K \text{ and } \mathbf z=(z_1,\dotsc,z_K) \in R^K$$
## 2.1 The numerator (dividend)
First the numerator: exponentiate every element of z and keep the result for later.
```
exp_z = torch.exp(z)
exp_z
```
## 2.2 The denominator (divisor)
The denominator is the sum of exp_z. We sum along dimension 1 (across each row's columns), leaving one value per row; keepdim=True makes the result easier to read and broadcast.
```
sum_exp_z = torch.sum(exp_z, 1, keepdim=True)
sum_exp_z
```
# 3. Computing the softmax of z
Dividing the numerator (dividend) by the denominator (divisor) gives the softmax values.
```
softmax_z = exp_z / sum_exp_z
softmax_z
```
# 4. Softmax function
Wrapped up as a function:
```
def softmax(z):
    exp_z = torch.exp(z)
    sum_exp_z = torch.sum(exp_z, 1, keepdim=True)
    return exp_z / sum_exp_z
```
# 5. Using softmax
Softmax has the special property that each row sums to 1, which is why it is used to represent a probability / likelihood.
Look at the first row:
```
z[0]
softmax_z[0]
```
The values sum to 1:
```
softmax_z[0].sum()
```
The second row:
```
z[1]
softmax_z[1]
```
also sums to 1:
```
softmax_z[1].sum()
```
# 6. Numerical Stability
If the input values are very large, exponentiating them overflows. We can shift the input with z = z - max(z) before applying the softmax function; this is known as a numerical stability trick, and it leaves the result unchanged because the shift cancels in the ratio.
```
n = torch.tensor([
[ 10, 20, 30 ],
[ -100, -200, -300 ],
[ 0.001, 0.0001, 0.0001 ],
[ 70, 80, 90 ],
[ 100, 1000, 10000 ],
]).float()
```
When a number is larger than the floating-point type can represent, the result becomes Not a Number (nan):
```
m = softmax(n)
m
```
Let's adjust the softmax function to be numerically stable so it can accept large numbers:
```
def softmax2(z):
    z = z - z.max(1, keepdim=True)[0]   # shift each row by its max; cancels out in the ratio
    exp_z = torch.exp(z)
    sum_exp_z = torch.sum(exp_z, 1, keepdim=True)
    return exp_z / sum_exp_z
m = softmax2(n)
m
```
# 7. Summary
1. The softmax function is not hard to compute ourselves.
1. Softmax pushes small values further down and pulls large values toward 1, behaving like a soft version of the max function used to find the largest value.
1. Softmax is often used to represent a probability / likelihood, and is a building block of the cross-entropy loss in neural networks.
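The same stabilization trick can be checked without PyTorch; here is a minimal pure-Python sketch (the function name is illustrative):

```python
import math

def stable_softmax(row):
    # subtracting the row max before exp prevents overflow and leaves
    # the result unchanged, since the shift cancels in the ratio
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

probs = stable_softmax([1000.0, 1001.0, 1002.0])  # naive math.exp(1000.0) would overflow
print(probs)
print(sum(probs))  # sums to 1 (up to floating-point rounding)
```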
# Credit
* https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/
* https://en.wikipedia.org/wiki/Softmax_function
* https://medium.com/data-science-bootcamp/understand-the-softmax-function-in-minutes-f3a59641e86d
# PerfForesightConsumerType: Perfect foresight consumption-saving
```
# Initial imports and notebook setup, click arrow to show
from copy import copy
import matplotlib.pyplot as plt
import numpy as np
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
from HARK.utilities import plot_funcs
mystr = lambda number: "{:.4f}".format(number)
```
The module `HARK.ConsumptionSaving.ConsIndShockModel` concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks that are either fully transitory or fully permanent.
`ConsIndShockModel` currently includes three models:
1. A very basic "perfect foresight" model with no uncertainty.
2. A model with risk over transitory and permanent income shocks.
3. The model described in (2), with an interest rate for debt that differs from the interest rate for savings.
This notebook provides documentation for the first of these three models.
$\newcommand{\CRRA}{\rho}$
$\newcommand{\DiePrb}{\mathsf{D}}$
$\newcommand{\PermGroFac}{\Gamma}$
$\newcommand{\Rfree}{\mathsf{R}}$
$\newcommand{\DiscFac}{\beta}$
## Statement of perfect foresight consumption-saving model
The `PerfForesightConsumerType` class represents the problem of a consumer with Constant Relative Risk Aversion utility
\begin{equation}
U(C) = \frac{C^{1-\CRRA}}{1-\CRRA},
\end{equation}
who has perfect foresight about everything except whether he will die between the end of period $t$ and the beginning of period $t+1$, which occurs with probability $\DiePrb_{t+1}$. Permanent labor income $P_t$ grows from period $t$ to period $t+1$ by factor $\PermGroFac_{t+1}$.
At the beginning of period $t$, the consumer has an amount of market resources $M_t$ (which includes both market wealth and current income) and must choose how much of those resources to consume $C_t$ and how much to retain in a riskless asset $A_t$, which will earn return factor $\Rfree$. The consumer cannot necessarily borrow arbitrarily; instead, he might be constrained to have a wealth-to-income ratio at least as great as some "artificial borrowing constraint" $\underline{a} \leq 0$.
The agent's flow of future utility $U(C_{t+n})$ from consumption is geometrically discounted by factor $\DiscFac$ per period. If the consumer dies, he receives zero utility flow for the rest of time.
The agent's problem can be written in Bellman form as:
\begin{eqnarray*}
V_t(M_t,P_t) &=& \max_{C_t}~U(C_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) V_{t+1}(M_{t+1},P_{t+1}), \\
& s.t. & \\
A_t &=& M_t - C_t, \\
A_t/P_t &\geq& \underline{a}, \\
M_{t+1} &=& \Rfree A_t + Y_{t+1}, \\
Y_{t+1} &=& P_{t+1}, \\
P_{t+1} &=& \PermGroFac_{t+1} P_t.
\end{eqnarray*}
The consumer's problem is characterized by a coefficient of relative risk aversion $\CRRA$, an intertemporal discount factor $\DiscFac$, an interest factor $\Rfree$, and age-varying sequences of the permanent income growth factor $\PermGroFac_t$ and survival probability $(1 - \DiePrb_t)$.
While it does not reduce the computational complexity of the problem (as permanent income is deterministic, given its initial condition $P_0$), HARK represents this problem with *normalized* variables (represented in lower case), dividing all real variables by permanent income $P_t$ and utility levels by $P_t^{1-\CRRA}$. The Bellman form of the model thus reduces to:
\begin{eqnarray*}
v_t(m_t) &=& \max_{c_t}~U(c_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) \PermGroFac_{t+1}^{1-\CRRA} v_{t+1}(m_{t+1}), \\
& s.t. & \\
a_t &=& m_t - c_t, \\
a_t &\geq& \underline{a}, \\
m_{t+1} &=& \Rfree/\PermGroFac_{t+1} a_t + 1.
\end{eqnarray*}
## Solution method for PerfForesightConsumerType
Because of the assumptions of CRRA utility, no risk other than mortality, and no artificial borrowing constraint, the problem has a closed form solution. In fact, the consumption function is perfectly linear, and the value function composed with the inverse utility function is also linear. The mathematical solution of this model is described in detail in the lecture notes [PerfForesightCRRA](https://www.econ2.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA).
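As an illustrative sketch (not HARK's internal code), the closed-form objects for the infinite-horizon, unconstrained case can be computed directly from the example parameter values used below. The formulas assume $\PermGroFac < \Rfree$ so that human wealth is finite; note that conventions for whether human wealth includes current-period income vary across treatments, so the exact intercept here is one common choice rather than the definitive one:

```python
# Closed-form perfect foresight objects (sketch; parameter names follow the text)
CRRA, DiscFac, Rfree = 2.0, 0.96, 1.03
LivPrb, PermGroFac = 0.98, 1.01

# "Return patience factor" (R * beta_eff)^(1/rho) / R, with survival
# probability folded into the effective discount factor
pat = (Rfree * DiscFac * LivPrb) ** (1.0 / CRRA) / Rfree
kappa = 1.0 - pat   # the constant MPC of the linear consumption function

# Normalized human wealth: PDV of future labor income relative to current
# permanent income, finite only because PermGroFac < Rfree
h = 1.0 / (1.0 - PermGroFac / Rfree)

def cFunc(m):
    # Linear consumption function: consume a constant fraction kappa of
    # total (market + human) wealth, under one common normalization
    return kappa * (m + h - 1.0)

print(kappa)  # a small constant fraction, as the text describes
```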
The one period problem for this model is solved by the function `solveConsPerfForesight`, which creates an instance of the class `ConsPerfForesightSolver`. To construct an instance of the class `PerfForesightConsumerType`, several parameters must be passed to its constructor as shown in the table below.
## Example parameter values to construct an instance of PerfForesightConsumerType
| Parameter | Description | Code | Example value | Time-varying? |
| :---: | --- | --- | --- | --- |
| $\DiscFac$ |Intertemporal discount factor | $\texttt{DiscFac}$ | $0.96$ | |
| $\CRRA $ |Coefficient of relative risk aversion | $\texttt{CRRA}$ | $2.0$ | |
| $\Rfree$ | Risk free interest factor | $\texttt{Rfree}$ | $1.03$ | |
| $1 - \DiePrb_{t+1}$ |Survival probability | $\texttt{LivPrb}$ | $[0.98]$ | $\surd$ |
|$\PermGroFac_{t+1}$|Permanent income growth factor|$\texttt{PermGroFac}$| $[1.01]$ | $\surd$ |
|$\underline{a}$|Artificial borrowing constraint|$\texttt{BoroCnstArt}$| $None$ | |
|$(none)$|Maximum number of gridpoints in consumption function |$\texttt{aXtraCount}$| $200$ | |
|$T$| Number of periods in this type's "cycle" |$\texttt{T_cycle}$| $1$ | |
|(none)| Number of times the "cycle" occurs |$\texttt{cycles}$| $0$ | |
Note that the survival probability and income growth factor have time subscripts; likewise, the example values for these parameters are *lists* rather than simply single floats. This is because those parameters are *time-varying*: their values can depend on which period of the problem the agent is in. All time-varying parameters *must* be specified as lists, even if the same value occurs in each period for this type.
The artificial borrowing constraint can be any non-positive `float`, or it can be `None` to indicate no artificial borrowing constraint. The maximum number of gridpoints in the consumption function is only relevant if the borrowing constraint is not `None`; without an upper bound on the number of gridpoints, kinks in the consumption function will propagate indefinitely in an infinite horizon model if there is a borrowing constraint, eventually resulting in an overflow error. If there is no artificial borrowing constraint, then the number of gridpoints used to represent the consumption function is always exactly two.
The last two parameters in the table specify the "nature of time" for this type: the number of (non-terminal) periods in this type's "cycle", and the number of times that the "cycle" occurs. *Every* subclass of `AgentType` uses these two code parameters to define the nature of time. Here, `T_cycle` has the value $1$, indicating that there is exactly one period in the cycle, while `cycles` is $0$, indicating that the cycle is repeated an *infinite* number of times-- it is an infinite horizon model, with the same "kind" of period repeated over and over.
In contrast, we could instead specify a life-cycle model by setting `cycles` to $1$ and specifying age-varying sequences of income growth and survival probability. In all cases, the number of elements in each time-varying parameter should exactly equal $\texttt{T_cycle}$.
The parameter $\texttt{AgentCount}$ specifies how many consumers there are of this *type*-- how many individuals have these exact parameter values and are *ex ante* homogeneous. This information is not relevant for solving the model, but is needed in order to simulate a population of agents, introducing *ex post* heterogeneity through idiosyncratic shocks. Of course, simulating a perfect foresight model is quite boring, as there are *no* idiosyncratic shocks other than death!
The cell below defines a dictionary that can be passed to the constructor method for `PerfForesightConsumerType`, with the values from the table here.
```
PerfForesightDict = {
# Parameters actually used in the solution method
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Default intertemporal discount factor
"LivPrb": [0.98], # Survival probability
"PermGroFac": [1.01], # Permanent income growth factor
"BoroCnstArt": None, # Artificial borrowing constraint
"aXtraCount": 200, # Maximum number of gridpoints in consumption function
# Parameters that characterize the nature of time
"T_cycle": 1, # Number of periods in the cycle for this agent type
"cycles": 0, # Number of times the cycle occurs (0 --> infinitely repeated)
}
```
## Solving and examining the solution of the perfect foresight model
With the dictionary we have just defined, we can create an instance of `PerfForesightConsumerType` by passing the dictionary to the class (as if the class were a function). This instance can then be solved by invoking its `solve` method.
```
PFexample = PerfForesightConsumerType(**PerfForesightDict)
PFexample.cycles = 0
PFexample.solve()
```
The $\texttt{solve}$ method fills in the instance's attribute `solution` as a time-varying list of solutions to each period of the consumer's problem. In this case, `solution` will be a list with exactly one instance of the class `ConsumerSolution`, representing the solution to the infinite horizon model we specified.
```
print(PFexample.solution)
```
Each element of `solution` has a few attributes. To see all of them, we can use the built-in `vars` function; in particular, the (time-varying) consumption function is stored in the attribute $\texttt{cFunc}$ of each element of `solution`.
```
print(vars(PFexample.solution[0]))
```
The two most important attributes of a single period solution of this model are the (normalized) consumption function $\texttt{cFunc}$ and the (normalized) value function $\texttt{vFunc}$. Let's plot those functions near the lower bound of the permissible state space (the attribute $\texttt{mNrmMin}$ tells us the lower bound of $m_t$ where the consumption function is defined).
```
print("Linear perfect foresight consumption function:")
mMin = PFexample.solution[0].mNrmMin
plot_funcs(PFexample.solution[0].cFunc, mMin, mMin + 10.0)
print("Perfect foresight value function:")
plot_funcs(PFexample.solution[0].vFunc, mMin + 0.1, mMin + 10.1)
```
An element of `solution` also includes the (normalized) marginal value function $\texttt{vPfunc}$, and the lower and upper bounds of the marginal propensity to consume (MPC) $\texttt{MPCmin}$ and $\texttt{MPCmax}$. Note that with a linear consumption function, the MPC is constant, so its lower and upper bound are identical.
### Liquidity constrained perfect foresight example
Without an artificial borrowing constraint, a perfect foresight consumer is free to borrow against the PDV of his entire future stream of labor income-- his "human wealth" $\texttt{hNrm}$-- and he will consume a constant proportion of his total wealth (market resources plus human wealth). If we introduce an artificial borrowing constraint, both of these features vanish. In the cell below, we define a parameter dictionary that prevents the consumer from borrowing *at all*, create and solve a new instance of `PerfForesightConsumerType` with it, and then plot its consumption function.
```
LiqConstrDict = copy(PerfForesightDict)
LiqConstrDict["BoroCnstArt"] = 0.0 # Set the artificial borrowing constraint to zero
LiqConstrExample = PerfForesightConsumerType(**LiqConstrDict)
LiqConstrExample.cycles = 0 # Make this type be infinite horizon
LiqConstrExample.solve()
print("Liquidity constrained perfect foresight consumption function:")
plot_funcs(LiqConstrExample.solution[0].cFunc, 0.0, 10.0)
# At this time, the value function for a perfect foresight consumer with an artificial borrowing constraint is not computed nor included as part of its $\texttt{solution}$.
```
## Simulating the perfect foresight consumer model
Suppose we wanted to simulate many consumers who share the parameter values that we passed to `PerfForesightConsumerType`-- an *ex ante* homogeneous *type* of consumers. To do this, our instance would have to know *how many* agents there are of this type, as well as their initial levels of assets $a_t$ and permanent income $P_t$.
### Setting simulation parameters
Let's fill in this information by passing another dictionary to `PFexample` with simulation parameters. The table below lists the parameters that an instance of `PerfForesightConsumerType` needs in order to successfully simulate its model using the `simulate` method.
| Description | Code | Example value |
| :---: | --- | --- |
| Number of consumers of this type | $\texttt{AgentCount}$ | $10000$ |
| Number of periods to simulate | $\texttt{T_sim}$ | $120$ |
| Mean of initial log (normalized) assets | $\texttt{aNrmInitMean}$ | $-6.0$ |
| Stdev of initial log (normalized) assets | $\texttt{aNrmInitStd}$ | $1.0$ |
| Mean of initial log permanent income | $\texttt{pLvlInitMean}$ | $0.0$ |
| Stdev of initial log permanent income | $\texttt{pLvlInitStd}$ | $0.0$ |
| Aggregate productivity growth factor | $\texttt{PermGroFacAgg}$ | $1.0$ |
| Age after which consumers are automatically killed | $\texttt{T_age}$ | $None$ |
We have specified the model so that initial assets and permanent income are both distributed lognormally, with mean and standard deviation of the underlying normal distributions provided by the user.
The parameter $\texttt{PermGroFacAgg}$ exists for compatibility with more advanced models that employ aggregate productivity shocks; it can simply be set to 1.
In infinite horizon models, it might be useful to prevent agents from living extraordinarily long lives through a fortuitous sequence of mortality shocks. We have thus provided the option of setting $\texttt{T_age}$ to specify the maximum number of periods that a consumer can live before they are automatically killed (and replaced with a new consumer with initial state drawn from the specified distributions). This can be turned off by setting it to `None`.
The cell below puts these parameters into a dictionary, then gives them to `PFexample`. Note that all of these parameters *could* have been passed as part of the original dictionary; we omitted them above for simplicity.
```
SimulationParams = {
"AgentCount": 10000, # Number of agents of this type
"T_sim": 120, # Number of periods to simulate
"aNrmInitMean": -6.0, # Mean of log initial assets
"aNrmInitStd": 1.0, # Standard deviation of log initial assets
"pLvlInitMean": 0.0, # Mean of log initial permanent income
"pLvlInitStd": 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg": 1.0, # Aggregate permanent income growth factor
"T_age": None, # Age after which simulated agents are automatically killed
}
PFexample.assign_parameters(**SimulationParams)
```
To generate simulated data, we need to specify which variables we want to track the "history" of for this instance. To do so, we set the `track_vars` attribute of our `PerfForesightConsumerType` instance to be a list of strings with the simulation variables we want to track.
In this model, valid arguments to `track_vars` include $\texttt{mNrm}$, $\texttt{cNrm}$, $\texttt{aNrm}$, and $\texttt{pLvl}$. Because this model has no idiosyncratic shocks, our simulated data will be quite boring.
### Generating simulated data
Before simulating, the `initialize_sim` method must be invoked. This resets our instance back to its initial state, drawing a set of initial $\texttt{aNrm}$ and $\texttt{pLvl}$ values from the specified distributions and storing them in the attributes $\texttt{aNrmNow_init}$ and $\texttt{pLvlNow_init}$. It also resets this instance's internal random number generator, so that the same initial states will be set every time `initialize_sim` is called. In models with non-trivial shocks, this also ensures that the same sequence of shocks will be generated on every simulation run.
Finally, the `simulate` method can be called.
```
PFexample.track_vars = ['mNrm']
PFexample.initialize_sim()
PFexample.simulate()
# Each simulation variable $\texttt{X}$ named in $\texttt{track_vars}$ will have the *history* of that variable for each agent stored in the attribute $\texttt{X_hist}$ as an array of shape $(\texttt{T_sim},\texttt{AgentCount})$. To see that the simulation worked as intended, we can plot the mean of $m_t$ in each simulated period:
plt.plot(np.mean(PFexample.history['mNrm'], axis=1))
plt.xlabel("Time")
plt.ylabel("Mean normalized market resources")
plt.show()
```
A perfect foresight consumer can borrow against the PDV of his future income-- his human wealth-- and thus as time goes on, our simulated agents approach the (very negative) steady state level of $m_t$ while being steadily replaced with consumers with roughly $m_t=1$.
The slight wiggles in the plotted curve are due to consumers randomly dying and being replaced; their replacement will have an initial state drawn from the distributions specified by the user. To see the current distribution of ages, we can look at the attribute $\texttt{t_age}$.
```
N = PFexample.AgentCount
F = np.linspace(0.0, 1.0, N)
plt.plot(np.sort(PFexample.t_age), F)
plt.xlabel("Current age of consumers")
plt.ylabel("Cumulative distribution")
plt.show()
```
The distribution is (discretely) exponential, with a point mass at $120$ of consumers who have survived since the beginning of the simulation.
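That shape is easy to confirm without HARK; a small illustrative simulation (pure Python, with variable names mirroring the parameters above):

```python
import random

random.seed(0)
LivPrb, T_sim, AgentCount = 0.98, 120, 10000

# Each period, every agent survives with probability LivPrb; a death
# replaces the agent with a newborn, resetting age to zero.
ages = [0] * AgentCount
for _ in range(T_sim):
    ages = [a + 1 if random.random() < LivPrb else 0 for a in ages]

# Agents who survived the whole simulation pile up at age T_sim,
# matching the point mass at 120 in the plot above.
frac_at_max = sum(a == T_sim for a in ages) / AgentCount
print(frac_at_max)  # roughly LivPrb ** T_sim, i.e. on the order of 0.09
```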
One might wonder why HARK requires users to call `initialize_sim` before calling `simulate`: Why doesn't `simulate` just call `initialize_sim` as its first step? We have broken up these two steps so that users can simulate some number of periods, change something in the environment, and then resume the simulation.
When called with no argument, `simulate` will simulate the model for $\texttt{T_sim}$ periods. The user can optionally pass an integer specifying the number of periods to simulate (which should not exceed $\texttt{T_sim}$).
In the cell below, we simulate our perfect foresight consumers for 80 periods, then seize a bunch of their assets (dragging their wealth even more negative), then simulate for the remaining 40 periods.
The `state_prev` attribute of an AgenType stores the values of the model's state variables in the _previous_ period of the simulation.
```
PFexample.initialize_sim()
PFexample.simulate(80)
PFexample.state_prev['aNrm'] += -5.0 # Adjust all simulated consumers' assets downward by 5
PFexample.simulate(40)
plt.plot(np.mean(PFexample.history['mNrm'], axis=1))
plt.xlabel("Time")
plt.ylabel("Mean normalized market resources")
plt.show()
```
```
# from google.colab import drive
# drive.mount('/content/drive')
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
background_classes = {'cat', 'deer', 'dog', 'frog', 'horse','ship', 'truck'}
fg1,fg2,fg3 = 0,1,2
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
    images, labels = next(dataiter)   # the .next() method was removed from DataLoader iterators; use next()
    for j in range(batch_size):
        if classes[labels[j]] in background_classes:
            img = images[j].tolist()
            background_data.append(img)
            background_label.append(labels[j])
        else:
            img = images[j].tolist()
            foreground_data.append(img)
            foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx, fg_idx, fg):
    """
    bg_idx : list of indexes of background_data[] to be used as background images in the mosaic
    fg_idx : index of the image from foreground_data to be used as the foreground image
    fg : position/index (0-8) at which the foreground image is placed
    """
    image_list = []
    j = 0
    for i in range(9):
        if i != fg:
            image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
            j += 1
        else:
            image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
    label = foreground_label[fg_idx] - fg1   # subtract fg1 so foreground labels are re-indexed to 0,1,2
    # image_list = np.concatenate(image_list, axis=0)
    image_list = torch.stack(image_list)
    return image_list, label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 8
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, padding=0)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=6, kernel_size=3, padding=0)
# self.conv3 = nn.Conv2d(in_channels=12, out_channels=32, kernel_size=3, padding=0)
self.fc1 = nn.Linear(1014, 512)
self.fc2 = nn.Linear(512, 64)
# self.fc3 = nn.Linear(512, 64)
# self.fc4 = nn.Linear(64, 10)
self.fc3 = nn.Linear(64,1)
    def forward(self, z):  # z: batch of 9-tile mosaics; returns (attention weights x, weighted-average image y)
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
y = y.to("cuda")
x = x.to("cuda")
for i in range(9):
x[:,i] = self.helper(z[:,i])[:,0]
x = F.softmax(x,dim=1)
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None,None,None],z[:,i])
return x, y
def helper(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = (F.relu(self.conv2(x)))
# print(x.shape)
# x = (F.relu(self.conv3(x)))
x = x.view(x.size(0), -1)
# print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
# x = F.relu(self.fc3(x))
# x = F.relu(self.fc4(x))
x = self.fc3(x)
return x
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=3, padding=0)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(in_channels=12, out_channels=6, kernel_size=3, padding=0)
# self.conv3 = nn.Conv2d(in_channels=12, out_channels=20, kernel_size=3, padding=0)
self.fc1 = nn.Linear(1014, 512)
self.fc2 = nn.Linear(512, 64)
# self.fc3 = nn.Linear(512, 64)
# self.fc4 = nn.Linear(64, 10)
self.fc3 = nn.Linear(64,3)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = (F.relu(self.conv2(x)))
# print(x.shape)
# x = (F.relu(self.conv3(x)))
x = x.view(x.size(0), -1)
# print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
# x = F.relu(self.fc3(x))
# x = F.relu(self.fc4(x))
x = self.fc3(x)
return x
classify = Classification().double()
classify = classify.to("cuda")
test_images =[] #list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image
test_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(10000):
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx_test.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
test_images.append(image_list)
test_label.append(label)
test_data = MosaicDataset(test_images,test_label,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001)#, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.001)#, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
nos_epochs = 200
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 60
        if cnt % mini == mini-1:    # print every 60 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if(np.mean(epoch_loss) <= 0.005):
        break
if epoch % 5 == 0:
# focus_net.eval()
# classify.eval()
col1.append(epoch+1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
print('Finished Training')
# torch.save(focus_net.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn_3layers/"+name+"_focus_net.pt")
# torch.save(classify.state_dict(),"/content/drive/My Drive/Research/Cheating_data/16_experiments_on_cnn_3layers/"+name+"_classify.pt")
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
# plt.figure(12,12)
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.savefig("train_ftpt.pdf", bbox_inches='tight')
plt.show()
df_test
# plt.figure(12,12)
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.savefig("test_ftpt.pdf", bbox_inches='tight')
plt.show()
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 30000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
max_alpha =[]
alpha_ftpt=[]
argmax_more_than_half=0
argmax_less_than_half=0
for i, data in enumerate(test_loader):
inputs, labels,fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
alphas, avg = focus_net(inputs)
outputs = classify(avg)
    _, predicted = torch.max(outputs.data, 1)  # recompute predictions for this batch (the loop below previously reused a stale value)
mx,_ = torch.max(alphas,1)
max_alpha.append(mx.cpu().detach().numpy())
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if (focus == fore_idx[j] and predicted[j] == labels[j]):
alpha_ftpt.append(alphas[j][focus].item())
max_alpha = np.concatenate(max_alpha,axis=0)
print(max_alpha.shape)
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(max_alpha,bins=50,color ="c")
plt.title("alpha values histogram")
plt.savefig("alpha_hist.pdf")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c")
plt.title("alpha values in ftpt")
plt.savefig("alpha_hist_ftpt.pdf")
```
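The evaluation loops above repeat the same four-way bookkeeping: did the focus network pick the true foreground tile, and did the classifier predict the true class? A minimal pure-Python sketch of that categorization (names and sample values are illustrative, not taken from the experiment):

```python
from collections import Counter

def ftpt_category(focus_idx, fore_idx, predicted, label):
    """Four-way split used by the evaluation loops: focus correct/incorrect
    crossed with prediction correct/incorrect."""
    focus_ok = (focus_idx == fore_idx)
    pred_ok = (predicted == label)
    if focus_ok and pred_ok:
        return "FTPT"  # focus true, prediction true
    if not focus_ok and pred_ok:
        return "FFPT"  # focus false, prediction true
    if focus_ok and not pred_ok:
        return "FTPF"  # focus true, prediction false
    return "FFPF"      # focus false, prediction false

# Tally a few hand-made (focus, fore_idx, predicted, label) samples
samples = [(3, 3, 1, 1), (0, 3, 1, 1), (3, 3, 2, 1), (0, 3, 2, 1)]
counts = Counter(ftpt_category(*s) for s in samples)
print(dict(counts))  # {'FTPT': 1, 'FFPT': 1, 'FTPF': 1, 'FFPF': 1}
```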
```
!nvidia-smi
!pip --quiet install transformers
!pip --quiet install tokenizers
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/'
MODEL_BASE_PATH = COLAB_BASE_PATH + 'Models/Files/236-roBERTa_base/'
import os
os.makedirs(MODEL_BASE_PATH, exist_ok=True)  # don't fail if the directory already exists
```
## Dependencies
```
import json, warnings, shutil
from scripts_step_lr_schedulers import *
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
```
# Load data
```
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64_clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training samples: {len(k_fold)}')
display(k_fold.head())
```
# Model parameters
```
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
config = {
"MAX_LEN": 64,
"BATCH_SIZE": 32,
"EPOCHS": 7,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 2,
"N_FOLDS": 5,
"question_size": 4,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
    json.dump(config, json_file)
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
## Learning rate schedule
```
lr_min = 1e-6
lr_start = 0
lr_max = config['LEARNING_RATE']
train_size = len(k_fold[k_fold['fold_1'] == 'train'])
step_size = train_size // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
warmup_steps = total_steps * 0.1
decay = .9985
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
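`exponential_schedule_with_warmup` is imported from the local `scripts_step_lr_schedulers` module, whose source is not shown here. A plausible pure-Python sketch of such a schedule — an assumption about its shape, not the actual implementation — is a linear warm-up from `lr_start` to `lr_max` followed by exponential decay floored at `lr_min`:

```python
def exponential_schedule_sketch(step, warmup_steps, lr_start, lr_max, lr_min, decay):
    # Hypothetical shape: linear warm-up to lr_max, then exponential
    # decay floored at lr_min. The real helper may differ in detail.
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    return max(lr_max * decay ** (step - warmup_steps), lr_min)

peak = exponential_schedule_sketch(100, 100, 0.0, 3e-5, 1e-6, 0.9985)
late = exponential_schedule_sketch(10_000, 100, 0.0, 3e-5, 1e-6, 0.9985)
print(peak, late)  # 3e-05 1e-06
```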
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
logits = layers.Dense(2, name="qa_outputs", use_bias=False)(last_hidden_state)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1)
end_logits = tf.squeeze(end_logits, axis=-1)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
return model
```
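The head above projects each token's hidden state to 2 values, then splits them into start and end logits. A small NumPy sketch of that reshaping (shapes reduced for illustration):

```python
import numpy as np

batch, seq_len = 2, 5
rng = np.random.default_rng(0)
logits = rng.standard_normal((batch, seq_len, 2))  # Dense(2) output per token

# Equivalent of tf.split(logits, 2, axis=-1) followed by tf.squeeze(..., axis=-1)
start_logits = logits[..., 0]
end_logits = logits[..., 1]
print(start_logits.shape, end_logits.shape)  # (2, 5) (2, 5)
```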
# Train
```
def get_training_dataset(x_train, y_train, batch_size, buffer_size, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_train[0], 'attention_mask': x_train[1]},
(y_train[0], y_train[1])))
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_validation_dataset(x_valid, y_valid, batch_size, buffer_size, repeated=False, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_valid[0], 'attention_mask': x_valid[1]},
(y_valid[0], y_valid[1])))
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.cache()
dataset = dataset.prefetch(buffer_size)
return dataset
AUTO = tf.data.experimental.AUTOTUNE
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint((MODEL_BASE_PATH + model_path), monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
optimizer = optimizers.Adam(learning_rate=lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay))
model.compile(optimizer, loss=[losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True),
losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)])
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=False, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
# model.load_weights(MODEL_BASE_PATH + model_path)
predict_eval_df(k_fold, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
```
# Model loss graph
```
#@title
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
```
# Model evaluation
```
#@title
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
#@title
k_fold['jaccard_mean'] = 0
for n in range(config['N_FOLDS']):
k_fold['jaccard_mean'] += k_fold[f'jaccard_fold_{n+1}'] / config['N_FOLDS']
display(k_fold[['text', 'selected_text', 'sentiment', 'text_tokenCnt',
'selected_text_tokenCnt', 'jaccard', 'jaccard_mean'] + [c for c in k_fold.columns if (c.startswith('prediction_fold'))]].head(15))
```
<h1>2b. Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
```
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
```
Read data created in the previous chapter.
```
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
df_test.tail()
```
```
df_train.head()
```
```
def make_train_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000
)
def make_eval_input_fn(df):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
shuffle = False,
queue_capacity = 1000
)
```
Our input function for predictions is the same except we don't provide a label
```
def make_prediction_input_fn(df):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = None,
batch_size = 128,
shuffle = False,
queue_capacity = 1000
)
```
### Create feature columns for estimator
```
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
```
<h3> Linear Regression with tf.Estimator framework </h3>
```
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10))
```
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
```
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
```
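The `average_loss` reported by the estimator is the mean squared error, so its square root is the RMSE. A quick NumPy check on toy values (the numbers here are illustrative only):

```python
import numpy as np

predictions = np.array([10.0, 12.0, 9.0])
labels = np.array([11.0, 10.0, 9.0])
average_loss = np.mean((predictions - labels) ** 2)  # what metrics['average_loss'] holds
rmse = np.sqrt(average_loss)
print(round(float(rmse), 3))  # 1.291
```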
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
```
predictions = model.predict(input_fn = make_prediction_input_fn(df_test))
for items in predictions:
print(items)
```
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
```
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 100));
print_rmse(model, df_valid)
```
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset.
```
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""
phase: 1 = train 2 = valid
"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N is None:
if phase < 2:
# Training
query = "{0} AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 4) < 2".format(base_query)
else:
# Validation
query = "{0} AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 4) = {1}".format(base_query, phase)
else:
query = "{0} AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), {1}) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, df)
```
RMSE on benchmark dataset is <b>9.41</b> (your results will vary because of random seeds).
This is not only way more than our original benchmark of 6.00, but it doesn't even beat our distance-based rule's RMSE of 8.02.
Fear not -- you have learned how to write a TensorFlow model, but not yet how to do all the things needed to make your ML model performant. We will do this in the next chapters. In this chapter, though, we will get our TensorFlow model ready for these improvements.
In a software sense, the rest of the labs in this chapter will be about refactoring the code so that we can improve it.
## Challenge Exercise
Create a neural network that is capable of finding the volume of a cylinder given the radius of its base (r) and its height (h). Assume that the radius and height of the cylinder are both in the range 0.5 to 2.0. Simulate the necessary training dataset.
<p>
Hint (highlight to see):
<p style='color:white'>
The input features will be r and h and the label will be $\pi r^2 h$
Create random values for r and h and compute V.
Your dataset will consist of r, h and V.
Then, use a DNN regressor.
Make sure to generate enough data.
</p>
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Feature Crosses
**Learning Objectives:**
* Improve a linear regression model with the addition of additional synthetic features (this is a continuation of the previous exercise)
* Use an input function to convert pandas `DataFrame` objects to `Tensors` and invoke the input function in `fit()` and `predict()` operations
* Use the FTRL optimization algorithm for model training
* Create new synthetic features through one-hot encoding, binning, and feature crosses
## Setup
First, let's define the input and create the data-loading code, as we did in the previous exercises.
```
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.cn/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
```
## FTRL Optimization Algorithm
High-dimensional linear models benefit from a gradient-based optimization method called FTRL. This algorithm has the benefit of scaling the learning rate differently for different coefficients, which can be useful if some features rarely take non-zero values (it is also well suited to supporting L1 regularization). We can apply FTRL using the [FtrlOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/FtrlOptimizer).
```
def train_model(
learning_rate,
steps,
batch_size,
feature_columns,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
feature_columns: A `set` specifying the input feature columns to use.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a linear regressor object.
my_optimizer = tf.train.FtrlOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
return linear_regressor
_ = train_model(
learning_rate=1.0,
steps=500,
batch_size=100,
feature_columns=construct_feature_columns(training_examples),
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
## One-Hot Encoding of Discrete Features
Discrete (i.e., string, enumerated, integer) features are usually converted into families of binary features before training a logistic regression model.
For example, suppose we created a synthetic feature that can take any of the values `0`, `1` or `2`, and that we have a few training points:
| # | feature_value |
|---|---------------|
| 0 | 2 |
| 1 | 0 |
| 2 | 1 |
For each possible categorical value, we make a new **binary** feature of **real values** that can take one of just two possible values: 1.0 if the example has that value, and 0.0 if not. In the example above, the categorical feature would be converted into three features, and the training points now look like:
| # | feature_value_0 | feature_value_1 | feature_value_2 |
|---|-----------------|-----------------|-----------------|
| 0 | 0.0 | 0.0 | 1.0 |
| 1 | 1.0 | 0.0 | 0.0 |
| 2 | 0.0 | 1.0 | 0.0 |
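The transformation above can be sketched in a few lines of NumPy (a minimal illustration, not part of the lab code):

```python
import numpy as np

feature_values = np.array([2, 0, 1])  # the categorical values from the first table
num_values = 3                        # possible values: 0, 1, 2

# Row i gets a 1.0 in the column selected by feature_values[i], 0.0 elsewhere.
one_hot = np.zeros((len(feature_values), num_values))
one_hot[np.arange(len(feature_values)), feature_values] = 1.0
print(one_hot)
# → [[0. 0. 1.]
#    [1. 0. 0.]
#    [0. 1. 0.]]
```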
## Bucketized (Binned) Features
Bucketization is also known as binning.
For example, we can bucketize `population` into the following 3 buckets:
- `bucket_0` (`< 5000`): corresponding to less populated blocks
- `bucket_1` (`5000 - 25000`): corresponding to mid populated blocks
- `bucket_2` (`> 25000`): corresponding to highly populated blocks
Given the preceding bucket definitions, the following `population` vector:
[[10001], [42004], [2500], [18000]]
becomes the following bucketized feature vector:
[[1], [2], [0], [1]]
The feature values are now the bucket indices. Note that these indices are considered to be discrete features. Typically, they will be further converted into the one-hot representation described above, but this is done transparently.
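The mapping from values to bucket indices shown above can be reproduced with NumPy's `np.digitize` (an illustration only; the feature column used in the lab does this internally):

```python
import numpy as np

population = np.array([10001, 42004, 2500, 18000])
boundaries = [5000, 25000]  # bucket_0: < 5000, bucket_1: 5000-25000, bucket_2: > 25000

# np.digitize returns, for each value, the index of the bucket it falls into.
bucket_indices = np.digitize(population, boundaries)
print(bucket_indices)  # → [1 2 0 1]
```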
To define feature columns for bucketized features, instead of using `numeric_column`, we can use [`bucketized_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column), which takes a numeric column as input and transforms it into a bucketized feature using the bucket boundaries specified in the `boundaries` argument. The following code defines bucketized feature columns for `households` and `longitude`; the `get_quantile_based_boundaries` function calculates boundaries based on quantiles, so that each bucket contains an equal number of elements.
```
def get_quantile_based_boundaries(feature_values, num_buckets):
boundaries = np.arange(1.0, num_buckets) / num_buckets
quantiles = feature_values.quantile(boundaries)
return [quantiles[q] for q in quantiles.keys()]
# Divide households into 7 buckets.
households = tf.feature_column.numeric_column("households")
bucketized_households = tf.feature_column.bucketized_column(
households, boundaries=get_quantile_based_boundaries(
california_housing_dataframe["households"], 7))
# Divide longitude into 10 buckets.
longitude = tf.feature_column.numeric_column("longitude")
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=get_quantile_based_boundaries(
california_housing_dataframe["longitude"], 10))
```
## Task 1: Train the Model on Bucketized Feature Columns
**Bucketize all the real-valued features in our example, train the model, and see whether the results improve.**
In the preceding code block, two real-valued columns (namely `households` and `longitude`) have been transformed into bucketized feature columns. Your task is to bucketize the rest of the columns, then run the code to train the model. There are various heuristics for choosing the ranges of the buckets. This exercise uses a quantile-based technique, which chooses the bucket boundaries in such a way that each bucket has the same number of examples.
```
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
households = tf.feature_column.numeric_column("households")
longitude = tf.feature_column.numeric_column("longitude")
latitude = tf.feature_column.numeric_column("latitude")
housing_median_age = tf.feature_column.numeric_column("housing_median_age")
median_income = tf.feature_column.numeric_column("median_income")
rooms_per_person = tf.feature_column.numeric_column("rooms_per_person")
# Divide households into 7 buckets.
bucketized_households = tf.feature_column.bucketized_column(
households, boundaries=get_quantile_based_boundaries(
training_examples["households"], 7))
# Divide longitude into 10 buckets.
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=get_quantile_based_boundaries(
training_examples["longitude"], 10))
#
# YOUR CODE HERE: bucketize the following columns, following the example above:
#
bucketized_latitude =
bucketized_housing_median_age =
bucketized_median_income =
bucketized_rooms_per_person =
feature_columns = set([
bucketized_longitude,
bucketized_latitude,
bucketized_housing_median_age,
bucketized_households,
bucketized_median_income,
bucketized_rooms_per_person])
return feature_columns
_ = train_model(
learning_rate=1.0,
steps=500,
batch_size=100,
feature_columns=construct_feature_columns(),
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
### Solution
Click below for a solution.
You may be wondering how to determine how many buckets to use. That is, of course, data-dependent. Here we just selected arbitrary values so as to obtain a not-too-large model.
```
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
households = tf.feature_column.numeric_column("households")
longitude = tf.feature_column.numeric_column("longitude")
latitude = tf.feature_column.numeric_column("latitude")
housing_median_age = tf.feature_column.numeric_column("housing_median_age")
median_income = tf.feature_column.numeric_column("median_income")
rooms_per_person = tf.feature_column.numeric_column("rooms_per_person")
# Divide households into 7 buckets.
bucketized_households = tf.feature_column.bucketized_column(
households, boundaries=get_quantile_based_boundaries(
training_examples["households"], 7))
# Divide longitude into 10 buckets.
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=get_quantile_based_boundaries(
training_examples["longitude"], 10))
# Divide latitude into 10 buckets.
bucketized_latitude = tf.feature_column.bucketized_column(
latitude, boundaries=get_quantile_based_boundaries(
training_examples["latitude"], 10))
# Divide housing_median_age into 7 buckets.
bucketized_housing_median_age = tf.feature_column.bucketized_column(
housing_median_age, boundaries=get_quantile_based_boundaries(
training_examples["housing_median_age"], 7))
# Divide median_income into 7 buckets.
bucketized_median_income = tf.feature_column.bucketized_column(
median_income, boundaries=get_quantile_based_boundaries(
training_examples["median_income"], 7))
# Divide rooms_per_person into 7 buckets.
bucketized_rooms_per_person = tf.feature_column.bucketized_column(
rooms_per_person, boundaries=get_quantile_based_boundaries(
training_examples["rooms_per_person"], 7))
feature_columns = set([
bucketized_longitude,
bucketized_latitude,
bucketized_housing_median_age,
bucketized_households,
bucketized_median_income,
bucketized_rooms_per_person])
return feature_columns
_ = train_model(
learning_rate=1.0,
steps=500,
batch_size=100,
feature_columns=construct_feature_columns(),
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
## Feature Crosses
Crossing two (or more) features is a clever way to learn non-linear relations using a linear model. In our problem, if we just use the feature `latitude` for learning, the model might learn that city blocks at a particular latitude (or within a particular range of latitudes, since we have bucketized it) are more likely to be expensive than others. Similarly for the feature `longitude`. However, if we cross `longitude` by `latitude`, the crossed feature represents a well-defined city block. If the model learns that certain city blocks (within particular ranges of latitude and longitude) are more likely to be expensive than others, it is a stronger signal than considering the two features individually.
Currently, the feature columns API only supports crossing discrete features. To cross two continuous values, such as `latitude` and `longitude`, we can bucketize them first.
If we cross the `latitude` and `longitude` features (supposing, for example, that `longitude` was bucketized into `2` buckets, while `latitude` has `3` buckets), we actually get 6 crossed binary features. Each of these features gets its own separate weight when the model is trained.
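The bucket arithmetic can be made concrete with a small sketch (hypothetical bucket indices; the real `crossed_column` hashes the cross into `hash_bucket_size` buckets rather than enumerating it):

```python
import numpy as np

num_longitude_buckets = 2
num_latitude_buckets = 3

# Every (longitude_bucket, latitude_bucket) pair becomes its own binary feature.
num_crossed_features = num_longitude_buckets * num_latitude_buckets
print(num_crossed_features)  # → 6

# A block in longitude bucket 1 and latitude bucket 2 activates exactly one of them.
lon_bucket, lat_bucket = 1, 2
cross_index = lon_bucket * num_latitude_buckets + lat_bucket

crossed = np.zeros(num_crossed_features)
crossed[cross_index] = 1.0
print(crossed)  # → [0. 0. 0. 0. 0. 1.]
```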
## Task 2: Train the Model Using Feature Crosses
**Add a feature cross of `longitude` and `latitude` to the model, train the model, and determine whether the results improve.**
Refer to the TensorFlow API docs for [`crossed_column()`](https://www.tensorflow.org/api_docs/python/tf/feature_column/crossed_column) to learn how to build the feature column for your cross. A `hash_bucket_size` of `1000` can be used.
```
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
households = tf.feature_column.numeric_column("households")
longitude = tf.feature_column.numeric_column("longitude")
latitude = tf.feature_column.numeric_column("latitude")
housing_median_age = tf.feature_column.numeric_column("housing_median_age")
median_income = tf.feature_column.numeric_column("median_income")
rooms_per_person = tf.feature_column.numeric_column("rooms_per_person")
# Divide households into 7 buckets.
bucketized_households = tf.feature_column.bucketized_column(
households, boundaries=get_quantile_based_boundaries(
training_examples["households"], 7))
# Divide longitude into 10 buckets.
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=get_quantile_based_boundaries(
training_examples["longitude"], 10))
# Divide latitude into 10 buckets.
bucketized_latitude = tf.feature_column.bucketized_column(
latitude, boundaries=get_quantile_based_boundaries(
training_examples["latitude"], 10))
# Divide housing_median_age into 7 buckets.
bucketized_housing_median_age = tf.feature_column.bucketized_column(
housing_median_age, boundaries=get_quantile_based_boundaries(
training_examples["housing_median_age"], 7))
# Divide median_income into 7 buckets.
bucketized_median_income = tf.feature_column.bucketized_column(
median_income, boundaries=get_quantile_based_boundaries(
training_examples["median_income"], 7))
# Divide rooms_per_person into 7 buckets.
bucketized_rooms_per_person = tf.feature_column.bucketized_column(
rooms_per_person, boundaries=get_quantile_based_boundaries(
training_examples["rooms_per_person"], 7))
# YOUR CODE HERE: Make a feature column for the long_x_lat feature cross
long_x_lat =
feature_columns = set([
bucketized_longitude,
bucketized_latitude,
bucketized_housing_median_age,
bucketized_households,
bucketized_median_income,
bucketized_rooms_per_person,
long_x_lat])
return feature_columns
_ = train_model(
learning_rate=1.0,
steps=500,
batch_size=100,
feature_columns=construct_feature_columns(),
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
### Solution
Click below for a solution.
```
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
households = tf.feature_column.numeric_column("households")
longitude = tf.feature_column.numeric_column("longitude")
latitude = tf.feature_column.numeric_column("latitude")
housing_median_age = tf.feature_column.numeric_column("housing_median_age")
median_income = tf.feature_column.numeric_column("median_income")
rooms_per_person = tf.feature_column.numeric_column("rooms_per_person")
# Divide households into 7 buckets.
bucketized_households = tf.feature_column.bucketized_column(
households, boundaries=get_quantile_based_boundaries(
training_examples["households"], 7))
# Divide longitude into 10 buckets.
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=get_quantile_based_boundaries(
training_examples["longitude"], 10))
# Divide latitude into 10 buckets.
bucketized_latitude = tf.feature_column.bucketized_column(
latitude, boundaries=get_quantile_based_boundaries(
training_examples["latitude"], 10))
# Divide housing_median_age into 7 buckets.
bucketized_housing_median_age = tf.feature_column.bucketized_column(
housing_median_age, boundaries=get_quantile_based_boundaries(
training_examples["housing_median_age"], 7))
# Divide median_income into 7 buckets.
bucketized_median_income = tf.feature_column.bucketized_column(
median_income, boundaries=get_quantile_based_boundaries(
training_examples["median_income"], 7))
# Divide rooms_per_person into 7 buckets.
bucketized_rooms_per_person = tf.feature_column.bucketized_column(
rooms_per_person, boundaries=get_quantile_based_boundaries(
training_examples["rooms_per_person"], 7))
# YOUR CODE HERE: Make a feature column for the long_x_lat feature cross
long_x_lat = tf.feature_column.crossed_column(
set([bucketized_longitude, bucketized_latitude]), hash_bucket_size=1000)
feature_columns = set([
bucketized_longitude,
bucketized_latitude,
bucketized_housing_median_age,
bucketized_households,
bucketized_median_income,
bucketized_rooms_per_person,
long_x_lat])
return feature_columns
_ = train_model(
learning_rate=1.0,
steps=500,
batch_size=100,
feature_columns=construct_feature_columns(),
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
## Optional Challenge: Try Out More Synthetic Features
So far we have tried simple bucketized columns and feature crosses, but there are many more combinations that could potentially improve the results. For example, you could cross multiple columns. What happens if you vary the number of buckets? What other synthetic features can you think of? Do they improve the model?
# Likelihood based models
This notebook will outline the likelihood based approach to training on Bandit feedback.
Before proceeding, however, we will study the output of the simulator in a little more detail.
```
from numpy.random.mtrand import RandomState
from recogym import Configuration
from recogym.agents import Agent
from sklearn.linear_model import LogisticRegression
from recogym import verify_agents
from recogym.agents import OrganicUserEventCounterAgent, organic_user_count_args
from recogym.evaluate_agent import verify_agents, plot_verify_agents
import gym, recogym
from copy import deepcopy
from recogym import env_1_args
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = [6, 3]
ABTestNumberOfUsers=1000
NumberOfProducts=10
NumberOfSamples = 20
env_1_args['phi_var']=0.0
env_1_args['number_of_flips']=0
env_1_args['sigma_mu_organic'] = 0.0
env_1_args['sigma_omega']=1
env_1_args['random_seed'] = 42
env_1_args['num_products'] = NumberOfProducts
env_1_args['K'] = 5
env_1_args['number_of_flips'] = 5
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
data = deepcopy(env).generate_logs(ABTestNumberOfUsers)
```
# Logistic Regression Model
## Turn Data into Features
Now we are going to build a _Logistic Regression_ model.
The model will predict _the probability of the click_ for the following data:
* _`Views`_ is the total number of views of a particular _`Product`_ shown during _Organic_ _`Events`_ **before** a _Bandit_ _`Event`_.
* _`Action`_ is a proposed _`Product`_ at a _Bandit_ _`Event`_.
For example, assume that we have _`10`_ products. In _Organic_ _`Events`_, these products were shown to a user as follows:
<table>
<tr>
<th>Product ID</th>
<th>Views</th>
</tr>
<tr>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>7</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>0</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
</tr>
<tr>
<td>8</td>
<td>11</td>
</tr>
<tr>
<td>9</td>
<td>0</td>
</tr>
</table>
When we want to know the probability of the click for _`Product`_ = _`8`_ with available amounts of _`Views`_, the input data for the model will be:
_`0 0 0 7 0 0 0 0 8 11 0`_ _**`8`**_
The first 10 numbers are the _`Views`_ of the _`Products`_ (see above); the last one is the _`Action`_.
The output will be two numbers:
* $0^{th}$ index: $1 - \mathbb{P}_c(P=p|V)$.
* $1^{st}$ index: $\mathbb{P}_c(P=p|V)$.
Here, $\mathbb{P}_c(P=p|V)$ is the probability of the click for a _`Product`_ $p$, provided that we have _`Views`_ $V$.
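This two-column output is the standard scikit-learn `predict_proba` convention, which the models below rely on; a minimal sketch on made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny made-up dataset: one feature, binary click labels.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)

# Column 0 holds 1 - P(click), column 1 holds P(click); each row sums to 1.
print(probs.shape)  # → (4, 2)
```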
In all following models, an _`Action`_ will not be used as a number, but it will be decoded as a _vector_.
In our current example, the _`Action`_ is _`8`_. Thus, it is encoded as:
_`0 0 0 0 0 0 0 0`_ _**`1`**_ _`0`_
Here,
* Vector of _`Actions`_ has a size that is equal to the _*number of `Products`*_ i.e. _`10`_.
* _`Action`_ _`8`_ is marked as _`1`_ (_`Action`_ starts with _`0`_).
```
import math
import numpy as np
def build_train_data(data):
"""
Build Train Data
Parameters:
data: offline experiment logs
the data contains both Organic and Bandit Events
Returns:
:(outs, history, actions)
"""
num_products = int(data.v.max() + 1)
number_of_users = int(data.u.max()) + 1
history = []
actions = []
outs = []
for user_id in range(number_of_users):
views = np.zeros((0, num_products))
for _, user_datum in data[data['u'] == user_id].iterrows():
if user_datum['z'] == 'organic':
assert (math.isnan(user_datum['a']))
assert (math.isnan(user_datum['c']))
assert (not math.isnan(user_datum['v']))
view = int(user_datum['v'])
tmp_view = np.zeros(num_products)
tmp_view[view] = 1
# Append the latest view at the beginning of all views.
views = np.append(tmp_view[np.newaxis, :], views, axis = 0)
else:
assert (user_datum['z'] == 'bandit')
assert (not math.isnan(user_datum['a']))
assert (not math.isnan(user_datum['c']))
assert (math.isnan(user_datum['v']))
action = int(user_datum['a'])
action_flags = np.zeros(num_products, dtype = np.int8)
action_flags[int(action)] = 1
click = int(user_datum['c'])
history.append(views.sum(0))
actions.append(action_flags)
outs.append(click)
return np.array(outs), history, actions
clicks, history, actions = build_train_data(data)
data[0:27]
history[0:8]
actions[0:8]
```
Look at the data and see how it maps into the features -- which are the combination of the history and the actions -- and the label, which is the clicks. Note that only the bandit events correspond to records in the training data.
In order to do personalisation, it is necessary to cross the action and history features. _Why?_ Because without the cross, the model could not learn a different response for each action given the same history. We use the simplest possible cross: an element-wise Kronecker product.
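What `np.kron` does with a one-hot action vector and a history vector can be seen in a tiny example (made-up numbers):

```python
import numpy as np

action_one_hot = np.array([0, 1, 0])  # action 1 out of 3 products
history = np.array([5.0, 7.0])        # made-up view counts for 2 features

# The Kronecker product copies the history vector into the slot selected
# by the one-hot action and zeroes every other slot, giving one weight
# per (action, history-feature) pair.
cross = np.kron(action_one_hot, history)
print(cross)  # → [0. 0. 5. 7. 0. 0.]
```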
```
from recogym.agents import FeatureProvider
class CrossFeatureProvider(FeatureProvider):
"""Feature provider as an abstract class that defined interface of setting/getting features"""
def __init__(self, config):
super(CrossFeatureProvider, self).__init__(config)
self.feature_data = None
def observe(self, observation):
"""Consider an Organic Event for a particular user"""
for session in observation.sessions():
self.feature_data[session['v']] += 1
def features(self, observation):
"""Provide feature values adjusted to a particular feature set"""
return self.feature_data
def reset(self):
self.feature_data = np.zeros((self.config.num_products))
class ModelBasedAgent(Agent):
def __init__(self, env, feature_provider, model):
# Set environment as an attribute of Agent.
self.env = env
self.feature_provider = feature_provider
self.model = model
self.reset()
def act(self, observation, reward, done):
"""Act method returns an action based on current observation and past history"""
self.feature_provider.observe(observation)
cross_features = np.kron(np.eye(env.config.num_products),self.feature_provider.features(observation))
prob = self.model.predict_proba(cross_features)[:, 1]
action = np.argmax(prob)
prob = np.zeros_like(prob)
prob[action] = 1.0
return {
**super().act(observation, reward, done),
**{
'a': action,
'ps': 1.,
'ps-a': prob,
}
}
def reset(self):
self.feature_provider.reset()
def build_history_agent(env_args, data):
outs, history, actions = build_train_data(data)
features = np.vstack([np.kron(aa,hh) for hh, aa in zip(history, actions)])
config = Configuration(env_args)
logreg = LogisticRegression(
solver = 'lbfgs',
max_iter = 5000,
random_state = config.random_seed
)
log_reg_fit = logreg.fit(features, outs)
return ModelBasedAgent(
config,
CrossFeatureProvider(config),
log_reg_fit
)
likelihood_logreg = build_history_agent(env_1_args, data)
organic_counter_agent = OrganicUserEventCounterAgent(Configuration({
**organic_user_count_args,
**env_1_args,
'select_randomly': True,
}))
result = verify_agents(env, 5000, {'likelihood logreg': likelihood_logreg, 'organic count': organic_counter_agent})
fig = plot_verify_agents(result)
plt.show()
```
# Requirements
```
import numpy as np
import pandas as pd
```
# Dataframe
We create a very simple dataframe with three columns `alpha`, `beta` and `gamma`, as well as a non-trivial index.
```
indices = 'ABCDEFGHIJK'
df = pd.DataFrame({
'alpha': [i for i in range(1, 1 + len(indices))],
'beta': [i**2 for i in range(1, 1 + len(indices))],
'gamma': [i**3 for i in range(1, 1 + len(indices))],
'idx': [c for c in indices],
})
df.set_index('idx', inplace=True)
df
df.info()
```
# Row selection
Rows can be selected either by row number, or by index value. Selection by value is likely to be the better choice if the index has some semantics, e.g., datetime or unique identifier.
## By row number
To select by row numbers, use the `iloc` locator object. It takes an `int`, a slice, or a list of `int`.
```
df.iloc[7]
```
In this case, a pandas `Series` is returned.
Selecting a slice, e.g., from the second to the fourth row, the familiar Python slicing index can be used.
```
df.iloc[1:4]
```
It is also possible to provide a list of row numbers.
```
df.iloc[[1, 3, 5, 7]]
```
## By index value
To select by index value, the `loc` object can be used. It too takes a single index value, a slice or a list of values. In our example, the index is a `str` with values from `A` to `K` in order.
```
df.loc['B']
```
As for the `iloc` locator, a pandas `Series` object is returned for a single value.
Slicing with index values is also possible. *Note:* as opposed to the familiar slicing semantics in Python, the end value is inclusive in this case.
```
df.loc['B':'E']
```
Finally, a list of index values can also be used.
```
df.loc[['B', 'D', 'H']]
```
# Column selection
Just as for rows, columns can be selected by column number, or column name. In most situations the latter is preferred since the order and even the number of columns may change over time.
## By column number
The `iloc` selector object is again used for selecting by column number. It takes either an `int`, a slice or a list of integers.
```
df.iloc[:, 1]
df.iloc[:, 0:2]
df.iloc[:, [0, 2]]
```
## By column names
Selection by column name is much more often used, and typically more appropriate.
A single column can be accessed by simply using the column name as a dataframe attribute.
```
df.alpha
```
This is a nice syntax, but note that it will not work when column names contain spaces or characters that would lead to Python syntax errors. In general, it is good practice to use straightforward column names. If that is not possible, you can use an alternative approach that is a bit more cumbersome.
```
df['alpha']
```
To select columns with the plain `[]` indexer, slicing cannot be used directly. However, a list of column names can be used, and is very useful in many situations.
```
df[['gamma', 'beta']]
```
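Column slicing is, however, available through the `loc` selector. A small self-contained sketch (toy dataframe, not the one built above); note that, as for row slicing by label, the end label is inclusive:

```python
import pandas as pd

df = pd.DataFrame({'alpha': [1, 2, 3],
                   'beta': [1, 4, 9],
                   'gamma': [1, 8, 27]}, index=list('ABC'))

# column slicing works through .loc; like row slicing by label, the end is inclusive
sub = df.loc[:, 'alpha':'beta']
```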
# Selecting both rows and columns
Both the `iloc` and the `loc` selector objects can be used to select a range of rows and columns simultaneously.
```
df.iloc[1:9:2, [0, 2]]
df.loc['B':'I':2, ['alpha', 'gamma']]
```
# Sampling rows
It can often be useful to sample a subset of the rows in a pandas dataframe. It is easy to get the head or the tail of a dataframe using the methods of the same name, optionally providing the number of rows you want to select (the default is 5 rows).
```
df.head(3)
df.tail(4)
```
A random sample of the rows can be obtained using the `sample` method.
```
df.sample(n=5)
```
The `sample` method has many useful options. It is possible to sample a given fraction of the rows, sample with replacement, or do weighted sampling.
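These options can be sketched as follows, using a small stand-in for the example dataframe (the column and index values here are illustrative):

```python
import pandas as pd

# a small stand-in for the example dataframe used earlier
df = pd.DataFrame({'alpha': range(1, 11),
                   'beta': [i**2 for i in range(1, 11)],
                   'gamma': [i**3 for i in range(1, 11)]},
                  index=list('ABCDEFGHIJ'))

frac_sample = df.sample(frac=0.3)            # 30% of the rows (3 rows here)
boot_sample = df.sample(n=20, replace=True)  # with replacement, n may exceed len(df)
weighted = df.sample(n=3, weights='beta')    # rows with larger beta are drawn more often
```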
# Conditional selection
In many data analysis tasks, you want to select rows based on conditions. For instance, selecting the rows where `beta` is larger than 64 can be done in two ways.
## Simple queries
The first approach is to create a temporary `Series` of Boolean values with the obvious semantics.
```
df[df.beta > 64]
```
As you can see below, the condition `df.beta > 64` evaluates to a `Series` of Boolean values; the `~` operator negates such a condition.
```
df.beta > 64
df[~(df.beta == 4)]
```
Although this method is quite fast, it may consume considerable memory for large dataframes since a `Series` object has to be constructed. In that case, the `query` method is quite useful.
```
df.query('beta > 64')
df.query('not(beta == 4)')
```
## Complex queries
More complex selection criteria can be implemented using both approaches, e.g.,
```
df[((df.beta > 64) | (df.gamma > 10)) & (df.alpha < 7)]
```
*Note:* the bitwise logical operators `~`, `&` and `|` should be used rather than the logical operators `not`, `and` and `or`. Due to the precedence of the bitwise operators, the parentheses around each comparison are also required.
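To make the precedence pitfall concrete, here is a minimal self-contained sketch (toy dataframe, not the one above). Without parentheses, `&` binds tighter than `>`, so the expression becomes a chained comparison, and the implicit `and` on a `Series` raises a `ValueError`:

```python
import pandas as pd

df = pd.DataFrame({'alpha': [1, 2, 3], 'beta': [1, 4, 9]})

# correct: each comparison parenthesized before combining with `&`
ok = df[(df.beta > 1) & (df.alpha < 3)]

# without parentheses this parses as df.beta > (1 & df.alpha) < 3 -- a chained
# comparison whose implicit `and` on a Series raises a ValueError
try:
    df[df.beta > 1 & df.alpha < 3]
    raised = False
except ValueError:
    raised = True
```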
The `query` method offers a much cleaner syntax for the same selection query, but it is somewhat less efficient (to my surprise).
```
df.query('(beta > 64 or gamma > 10) and alpha < 7')
```
## Performance
Testing the performance of the two approaches for a larger dataframe reveals that `query` is the slower option. However, it is more memory efficient since it avoids the creation of temporary Boolean `Series` objects.
```
data = pd.DataFrame({
column: np.random.uniform(0.0, 1.0, size=(500_000, ))
for column in 'ABCDEFGHIK'
})
data.info()
%timeit data.loc[(data.A > 0.5) & (data.B < 0.5)]
%timeit data.query('A > 0.5 and B < 0.5')
```
Obviously, you should benchmark for your own use cases.
```
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
from keras.utils.np_utils import to_categorical
from snntoolbox.datasets.aedat.DVSIterator import DVSIterator, load_event_list, get_frames_from_sequence, extract_batch, next_eventframe_batch
data_path = '/home/rbodo/.snntoolbox/Datasets/predator_prey/aedat/all'
data_path2 = '/home/rbodo/.snntoolbox/Datasets/predator_prey/npz/dvs/rectified_sum'
path = '/home/rbodo/.snntoolbox/data/predator_prey'
x_test = np.load(os.path.join(data_path, 'x_9.npz'))['arr_0']
y_test = np.load(os.path.join(data_path, 'y_9.npz'))['arr_0']
x_test2 = np.load(os.path.join(data_path2, 'x_test.npz'))['arr_0']
y_test2 = np.load(os.path.join(data_path2, 'y_test.npz'))['arr_0']
filename_x = 'DAVIS240C-2016-02-23T12-42-14+0000-04010058-0_recording_8.aedat'
filename_y = 'DAVIS240C-2016-02-23T12-42-14+0000-04010058-0_recording_8-targets.txt'
def get_y(filepath, start_times):
label_file = open(filepath)
labels_list = []
times_list = []
lines = label_file.readlines()
for line in lines:
line_segments = line.split()
if line_segments[0] == '#':
continue
t = int(line_segments[1])
x_coord = int(line_segments[2])
if x_coord == -1:
label = 0 # N
elif x_coord < 80:
label = 1 # L
elif x_coord < 160:
label = 2 # C
elif x_coord < 240:
label = 3 # R
else:
label = 4 # Invalid
labels_list.append(label)
times_list.append(t)
labels_array = np.array(labels_list)
times_array = np.array(times_list)
y = []
for st in start_times:
diff = np.abs(times_array - st)
y.append(labels_array[np.argmin(diff)])
return np.array(y)
data_format = 'channels_last'
num_dvs_events_per_sample = 5000
chip_size = [240, 180]
image_shape = [36, 36]
label_dict = {"N": "0", "L": "1", "C": "2", "R": "3",}
frame_gen_method = 'rectified_sum'
is_x_first = True
is_x_flipped = True
is_y_flipped = True
filepath_x = os.path.join(data_path, filename_x)
filepath_y = os.path.join(data_path, filename_y)
event_sequence = load_event_list(filepath_x)
num_frames = int(len(event_sequence) / num_dvs_events_per_sample)
timestamps = [e[2] for e in event_sequence]
start_times = timestamps[::num_dvs_events_per_sample]
y_ = get_y(filepath_y, start_times)
y_test3 = to_categorical(y_, 4)
plt.hist(np.argmax(y_test3, -1));
maxpool_subsampling = True
do_clip_three_sigma = True
x_test3 = get_frames_from_sequence(
event_sequence, num_dvs_events_per_sample, data_format, frame_gen_method,
is_x_first, is_x_flipped, is_y_flipped, maxpool_subsampling,
do_clip_three_sigma, chip_size, image_shape)
maxpool_subsampling = False
do_clip_three_sigma = True
x_test4 = get_frames_from_sequence(
event_sequence, num_dvs_events_per_sample, data_format, frame_gen_method,
is_x_first, is_x_flipped, is_y_flipped, maxpool_subsampling,
do_clip_three_sigma, chip_size, image_shape)
maxpool_subsampling = True
do_clip_three_sigma = True
batch_size = 1 # len(x_test3)
event_deques_list = extract_batch(
event_sequence, frame_gen_method, batch_size, 0, num_dvs_events_per_sample,
maxpool_subsampling, do_clip_three_sigma, chip_size, image_shape)
maxpool_subsampling = False
do_clip_three_sigma = True
batch_size = 1 # len(x_test3)
event_deques_list2 = extract_batch(
event_sequence, frame_gen_method, batch_size, 0, num_dvs_events_per_sample,
maxpool_subsampling, do_clip_three_sigma, chip_size, image_shape)
print(x_test3[-1012, :, :, 0])
j = 343
plt.imshow(x_test3[j, :, :, 0].transpose())
plt.title(np.argmax(y_test3[j]))
plt.imshow(x_test3[2, :, :, 0].transpose())
x_test3_sum = np.sum(x_test3, (1, 2, 3))
x_test4_sum = np.sum(x_test4, (1, 2, 3))
x_test3_sum
plt.hist(x_test3[0, :, :, 0].flatten())
plt.hist(x_test3[x_test3.nonzero()], 100);
plt.hist(x_test4[x_test4.nonzero()], 100);
idxs = np.arange(10)
fig, ax = plt.subplots(len(idxs), 2, figsize=(5, 30))
tick_params = {'left': 'off', 'bottom': 'off',
'labelbottom': 'off', 'labelleft': 'off'}
for i, j in enumerate(idxs):
ax[i, 0].imshow(x_test3[j, :, :, 0].transpose())
ax[i, 1].imshow(x_test4[j, :, :, 0].transpose())
ax[i, 0].tick_params(**tick_params)
ax[i, 1].tick_params(**tick_params)
idxs_all = np.nonzero(np.argmax(y_test3, -1) == 1)[0] # np.random.randint(0, len(x_test3), 10)
idxs = idxs_all[50:60]
print(idxs)
fig, ax = plt.subplots(len(idxs), 2, figsize=(5, 30))
tick_params = {'left': 'off', 'bottom': 'off',
'labelbottom': 'off', 'labelleft': 'off'}
for i, j in enumerate(idxs):
ax[i, 0].imshow(x_test3[j, :, :, 0].transpose())
ax[i, 1].imshow(x_test3[j, :, :, 0].transpose())
ax[i, 0].set_title(np.argmax(y_test3[j]))
ax[i, 1].set_title(np.argmax(y_test3[j-1]))
ax[i, 0].tick_params(**tick_params)
ax[i, 1].tick_params(**tick_params)
idx = 203
y_max = image_shape[1]
xy = []
for x, y in zip(x_b_xaddr[idx], x_b_yaddr[idx]):
xy.append(x*y_max + y)
xy2 = []
for x, y in zip(x_b_xaddr2[idx], x_b_yaddr2[idx]):
xy2.append(x*y_max + y)
diff = len(xy2) - len(xy)
print('Number of events discarded during subsampling: {} ({:.2%})'.format(diff, diff/len(xy2)))
min2 = min(x_b_ts2[idx])
plt.figure(figsize=(10, 6))
plt.xlabel('Simulation time')
plt.title('Spiketrains of input layer (DVS)')
plt.ylabel('Neuron index')
plt.scatter(np.array(x_b_ts2[idx])-min2, xy2, s=50, linewidths=0, color='b')
plt.scatter(np.array(x_b_ts[idx])-min(x_b_ts[idx]), xy, s=5, linewidths=0, color='r')
# plt.xlim(0, 100)
# plt.xlim(4000, 4300)
a = [(tt, xx) for tt, xx in zip(x_b_ts[idx], xy)]
b = [(tt, xx) for tt, xx in zip(x_b_ts2[idx], xy2)]
c = np.array([(tt2-min2, xx2) for tt2, xx2, in zip(x_b_ts2[idx], xy2) if (tt2, xx2) not in a])
plt.scatter(c[:, 0], c[:, 1], s=5, linewidths=0, color='r')
plt.xlim(0, 100)
#plt.xlim(4146, 4148)
aa = [(tt, xx) for tt, xx in zip(x_b_ts[idx], xy)]
bb = list(set(xy))
cc =[x_b_ts[idx][xy.index(bbb)]-min2 for bbb in bb]
plt.scatter(cc, bb, s=5, linewidths=0, color='r')
c[np.nonzero(c[:, 0] == 4353)]
print(len(xy2))
print(len(a))
print(len(b))
print(len(c))
plt.hist([x_b_ts2[idx]-min(x_b_ts2[idx]), x_b_ts[idx]-min(x_b_ts[idx])], 50);
plt.hist2d(x_b_xaddr[idx], x_b_yaddr[idx], 36);
plt.hist2d(x_b_xaddr2[idx], x_b_yaddr2[idx], 36);
plt.imshow(x_test4[0, :, :, 0].transpose())
idxs = np.random.randint(0, len(x_test), 10)
idxs = np.arange(0, 10) - 1700
for i in idxs:
plt.imshow(x_test[i, :, :, 0].transpose())
plt.title(np.argmax(y_test[i]))
plt.show()
len(x_test)
len(x_test2)
plt.hist(x_test[x_test.nonzero()], 100);
a = x_test[:min(len(x_test), len(x_test2)), :, :, 0].flatten()
b = a[a.nonzero()]
plt.hist(b, 100);
a = x_test2[:min(len(x_test), len(x_test2)), :, :, 0].flatten()
b = a[a.nonzero()]
plt.hist(b, 100);
print(x_test2[0, :, :, 0])
print(len(event_deques_list[0]))
print(len(event_deques_list2[0]))
batch_shape = [1] + list(image_shape) + [1]
nrows = 5
ncols = 5
fig = plt.figure(figsize=(15, 15))
axes = [fig.add_subplot(nrows, ncols, 1 + r * ncols + c) for r in range(nrows) for c in range(ncols)]
for i, ax in enumerate(axes):
input_l = next_eventframe_batch(event_deques_list, is_x_first, is_x_flipped,
is_y_flipped, batch_shape, data_format, 10)[0]
ax.imshow(input_l[:, :, 0], cmap='gray')
ax.set_title(format(np.sum(input_l)))
ax.set_xticks([])
ax.set_yticks([])
batch_shape = [1] + list(image_shape) + [1]
nrows = 5
ncols = 5
fig = plt.figure(figsize=(15, 15))
axes = [fig.add_subplot(nrows, ncols, 1 + r * ncols + c) for r in range(nrows) for c in range(ncols)]
for i, ax in enumerate(axes):
input_l = next_eventframe_batch(event_deques_list2, is_x_first,
is_x_flipped, is_y_flipped, batch_shape,
data_format, 10)[0]
ax.imshow(input_l[:, :, 0], cmap='gray')
ax.set_title(format(np.sum(input_l)))
ax.set_xticks([])
ax.set_yticks([])
```
```
if len(groups_desc) > 0:
markdown_str = ["## Differential feature functioning"]
markdown_str.append("This section shows differential feature functioning (DFF) plots "
"for all features and subgroups. The features are shown after applying "
"transformations (if applicable) and truncation of outliers.")
display(Markdown('\n'.join(markdown_str)))
# check if we already created the merged file in another notebook
try:
df_train_preproc_merged
except NameError:
df_train_preproc_merged = pd.merge(df_train_preproc, df_train_metadata, on = 'spkitemid')
for group in groups_desc:
display(Markdown("### DFF by {}".format(group)))
if group in min_n_per_group:
display(Markdown("The report only shows the results for groups with "
"at least {} responses in the training set.".format(min_n_per_group[group])))
category_counts = df_train_preproc_merged[group].value_counts()
selected_categories = category_counts[category_counts >= min_n_per_group[group]].index
df_train_preproc_selected = df_train_preproc_merged[df_train_preproc_merged[group].isin(selected_categories)].copy()
else:
df_train_preproc_selected = df_train_preproc_merged.copy()
if len(df_train_preproc_selected) > 0:
# we need to reduce col_wrap and increase width if the feature names are too long
if longest_feature_name > 10:
col_wrap = 2
# adjust height to allow for wrapping really long names. We allow 0.25 in per line
height = 2+(math.ceil(longest_feature_name/30)*0.25)
aspect = 5/height
# show legend near the second plot in the grid
plot_with_legend = 1
else:
height=3
col_wrap = 3
aspect = 1
# show the legend near the third plot in the grid
plot_with_legend = 2
selected_columns = ['spkitemid', 'sc1'] + features_used + [group]
df_melted = pd.melt(df_train_preproc_selected[selected_columns], id_vars=['spkitemid', 'sc1', group], var_name='feature')
group_values = sorted(df_melted[group].unique())
colors = sns.color_palette("Greys", len(group_values))
with sns.axes_style('whitegrid'), sns.plotting_context('notebook', font_scale=1.2):
p = sns.catplot(x='sc1', y='value', hue=group, hue_order = group_values,
col='feature', col_wrap=col_wrap, height=height, aspect=aspect,
scale=0.6,
palette=colors,
sharey=False, sharex=False, legend=False, kind="point",
data=df_melted)
for i, axis in enumerate(p.axes):
axis.set_xlabel('score')
if i == plot_with_legend:
legend = axis.legend(group_values, title=group,
frameon=True, fancybox=True,
ncol=1, fontsize=10,
loc='upper right', bbox_to_anchor=(1.75, 1))
for j in range(len(group_values)):
legend.legendHandles[j].set_color(colors[j])
plt.setp(legend.get_title(), fontsize='x-small')
for ax, cname in zip(p.axes, p.col_names):
ax.set_title('\n'.join(wrap(str(cname), 30)))
with warnings.catch_warnings():
warnings.simplefilter('ignore')
plt.tight_layout(h_pad=1.0)
imgfile = join(figure_dir, '{}_dff_{}.svg'.format(experiment_id, group))
plt.savefig(imgfile)
if use_thumbnails:
show_thumbnail(imgfile, next(id_generator))
else:
plt.show()
else:
display(Markdown("None of the groups in {} had {} or more responses.".format(group,
min_n_per_group[group])))
```
# Bayesian Probabilistic Matrix Factorization
**Published**: November 6, 2020
**Author**: Xinyu Chen [[**GitHub homepage**](https://github.com/xinychen)]
**Download**: This Jupyter notebook is at our GitHub repository. If you want to evaluate the code, please download the notebook from the [**transdim**](https://github.com/xinychen/transdim/blob/master/imputer/BPMF.ipynb) repository.
This notebook shows how to implement the Bayesian Probabilistic Matrix Factorization (BPMF), a fully Bayesian matrix factorization model, on some real-world data sets. For an in-depth discussion of BPMF, please see [1].
<div class="alert alert-block alert-info">
<font color="black">
<b>[1]</b> Ruslan Salakhutdinov, Andriy Mnih (2008). <b>Bayesian probabilistic matrix factorization using Markov chain Monte Carlo</b>. ICML 2008. <a href="https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf" title="PDF"><b>[PDF]</b></a> <a href="https://www.cs.toronto.edu/~rsalakhu/BPMF.html" title="Matlab code"><b>[Matlab code]</b></a>
</font>
</div>
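Before the implementation, it may help to summarize the generative model that the sampling functions below target (a sketch following [1]; see the code for the exact hyperparameter settings):

$$
y_{ij}\mid\boldsymbol{w}_i,\boldsymbol{x}_j,\tau \sim\mathcal{N}\bigl(\boldsymbol{w}_i^{\top}\boldsymbol{x}_j,\ \tau^{-1}\bigr),\qquad
\boldsymbol{w}_i \sim\mathcal{N}\bigl(\boldsymbol{\mu}_w,\ \Lambda_w^{-1}\bigr),\qquad
\boldsymbol{x}_j \sim\mathcal{N}\bigl(\boldsymbol{\mu}_x,\ \Lambda_x^{-1}\bigr),
$$

with conjugate Gaussian-Wishart hyperpriors on $(\boldsymbol{\mu}_w,\Lambda_w)$ and $(\boldsymbol{\mu}_x,\Lambda_x)$ and a Gamma prior on the precision $\tau$, all of which are resampled in the Gibbs iterations below.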
#### Import some necessary packages
```
import numpy as np
from numpy.linalg import inv as inv
from numpy.random import normal as normrnd
from scipy.linalg import khatri_rao as kr_prod
from scipy.stats import wishart
from numpy.linalg import solve as solve
from scipy.linalg import cholesky as cholesky_upper
from scipy.linalg import solve_triangular as solve_ut
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Define some functions
```
def mvnrnd_pre(mu, Lambda):
    """Draw one sample from N(mu, Lambda^{-1}), where Lambda is the precision matrix."""
    src = normrnd(size = (mu.shape[0],))
    return solve_ut(cholesky_upper(Lambda, overwrite_a = True, check_finite = False),
                    src, lower = False, check_finite = False, overwrite_b = True) + mu

def cov_mat(mat, mat_bar):
    """Return the scatter matrix (mat - mat_bar)^T (mat - mat_bar)."""
    mat = mat - mat_bar
    return mat.T @ mat
```
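As a quick sanity check, here is a standalone sketch reproducing `mvnrnd_pre`: if $\Lambda = U^{\top}U$ with $U$ upper triangular, then $U^{-1}\boldsymbol{z}$ (with $\boldsymbol{z}$ standard normal) has covariance $\Lambda^{-1}$, so the empirical covariance of many draws should be close to $\Lambda^{-1}$:

```python
import numpy as np
from numpy.random import normal as normrnd
from scipy.linalg import cholesky as cholesky_upper
from scipy.linalg import solve_triangular as solve_ut

def mvnrnd_pre(mu, Lambda):
    # one draw from N(mu, Lambda^{-1}): if Lambda = U^T U (U upper), U^{-1} z has covariance Lambda^{-1}
    src = normrnd(size=(mu.shape[0],))
    return solve_ut(cholesky_upper(Lambda), src, lower=False) + mu

np.random.seed(0)
Lambda = np.array([[4.0, 1.0], [1.0, 3.0]])
mu = np.zeros(2)
samples = np.array([mvnrnd_pre(mu, Lambda) for _ in range(20000)])
emp_cov = np.cov(samples.T)
target = np.linalg.inv(Lambda)
```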
#### Sample factor $\boldsymbol{W}$
```
def sample_factor_w(tau_sparse_mat, tau_ind, W, X, tau, beta0 = 1, vargin = 0):
"""Sampling N-by-R factor matrix W and its hyperparameters (mu_w, Lambda_w)."""
dim1, rank = W.shape
W_bar = np.mean(W, axis = 0)
temp = dim1 / (dim1 + beta0)
var_mu_hyper = temp * W_bar
var_W_hyper = inv(np.eye(rank) + cov_mat(W, W_bar) + temp * beta0 * np.outer(W_bar, W_bar))
var_Lambda_hyper = wishart.rvs(df = dim1 + rank, scale = var_W_hyper)
var_mu_hyper = mvnrnd_pre(var_mu_hyper, (dim1 + beta0) * var_Lambda_hyper)
if dim1 * rank ** 2 > 1e+8:
vargin = 1
if vargin == 0:
var1 = X.T
var2 = kr_prod(var1, var1)
var3 = (var2 @ tau_ind.T).reshape([rank, rank, dim1]) + var_Lambda_hyper[:, :, np.newaxis]
var4 = var1 @ tau_sparse_mat.T + (var_Lambda_hyper @ var_mu_hyper)[:, np.newaxis]
for i in range(dim1):
W[i, :] = mvnrnd_pre(solve(var3[:, :, i], var4[:, i]), var3[:, :, i])
    elif vargin == 1:
        for i in range(dim1):
            # use tau_sparse_mat (= tau * sparse_mat) so this branch does not rely on a global sparse_mat
            pos0 = np.where(tau_sparse_mat[i, :] != 0)
            Xt = X[pos0[0], :]
            var_mu = Xt.T @ tau_sparse_mat[i, pos0[0]] + var_Lambda_hyper @ var_mu_hyper
            var_Lambda = tau * Xt.T @ Xt + var_Lambda_hyper
            W[i, :] = mvnrnd_pre(solve(var_Lambda, var_mu), var_Lambda)
return W
```
#### Sample factor $\boldsymbol{X}$
```
def sample_factor_x(tau_sparse_mat, tau_ind, W, X, beta0 = 1):
"""Sampling T-by-R factor matrix X and its hyperparameters (mu_x, Lambda_x)."""
dim2, rank = X.shape
X_bar = np.mean(X, axis = 0)
temp = dim2 / (dim2 + beta0)
var_mu_hyper = temp * X_bar
var_X_hyper = inv(np.eye(rank) + cov_mat(X, X_bar) + temp * beta0 * np.outer(X_bar, X_bar))
var_Lambda_hyper = wishart.rvs(df = dim2 + rank, scale = var_X_hyper)
var_mu_hyper = mvnrnd_pre(var_mu_hyper, (dim2 + beta0) * var_Lambda_hyper)
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = (var2 @ tau_ind).reshape([rank, rank, dim2]) + var_Lambda_hyper[:, :, np.newaxis]
var4 = var1 @ tau_sparse_mat + (var_Lambda_hyper @ var_mu_hyper)[:, np.newaxis]
for t in range(dim2):
X[t, :] = mvnrnd_pre(solve(var3[:, :, t], var4[:, t]), var3[:, :, t])
return X
```
#### Sample precision $\tau$
```
def sample_precision_tau(sparse_mat, mat_hat, ind):
var_alpha = 1e-6 + 0.5 * np.sum(ind)
var_beta = 1e-6 + 0.5 * np.sum(((sparse_mat - mat_hat) ** 2) * ind)
return np.random.gamma(var_alpha, 1 / var_beta)
def compute_mape(var, var_hat):
return np.sum(np.abs(var - var_hat) / var) / var.shape[0]
def compute_rmse(var, var_hat):
return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0])
```
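The Gamma update above can be checked on synthetic residuals with known noise level; the following sketch mirrors the computation in `sample_precision_tau` (the toy data and true precision $\tau = 4$ are illustrative assumptions), and a single posterior draw should land close to the true precision:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic residuals (observed minus reconstructed) with known noise std 0.5, i.e. true tau = 4
ind = np.ones((100, 100))                        # pretend every entry is observed
residual = rng.normal(0.0, 0.5, size=(100, 100))

# one draw from the conditional posterior Gamma(alpha, 1/beta), mirroring sample_precision_tau
var_alpha = 1e-6 + 0.5 * np.sum(ind)
var_beta = 1e-6 + 0.5 * np.sum((residual ** 2) * ind)
tau = rng.gamma(var_alpha, 1 / var_beta)
```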
#### BPMF Implementation
```
def BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter):
"""Bayesian Probabilistic Matrix Factorization, BPMF."""
dim1, dim2 = sparse_mat.shape
W = init["W"]
X = init["X"]
if np.isnan(sparse_mat).any() == False:
ind = sparse_mat != 0
pos_obs = np.where(ind)
pos_test = np.where((dense_mat != 0) & (sparse_mat == 0))
elif np.isnan(sparse_mat).any() == True:
pos_test = np.where((dense_mat != 0) & (np.isnan(sparse_mat)))
ind = ~np.isnan(sparse_mat)
pos_obs = np.where(ind)
sparse_mat[np.isnan(sparse_mat)] = 0
dense_test = dense_mat[pos_test]
del dense_mat
tau = 1
W_plus = np.zeros((dim1, rank))
X_plus = np.zeros((dim2, rank))
temp_hat = np.zeros(sparse_mat.shape)
show_iter = 200
mat_hat_plus = np.zeros(sparse_mat.shape)
for it in range(burn_iter + gibbs_iter):
tau_ind = tau * ind
tau_sparse_mat = tau * sparse_mat
W = sample_factor_w(tau_sparse_mat, tau_ind, W, X, tau)
X = sample_factor_x(tau_sparse_mat, tau_ind, W, X)
mat_hat = W @ X.T
tau = sample_precision_tau(sparse_mat, mat_hat, ind)
temp_hat += mat_hat
if (it + 1) % show_iter == 0 and it < burn_iter:
temp_hat = temp_hat / show_iter
print('Iter: {}'.format(it + 1))
print('MAPE: {:.6}'.format(compute_mape(dense_test, temp_hat[pos_test])))
print('RMSE: {:.6}'.format(compute_rmse(dense_test, temp_hat[pos_test])))
temp_hat = np.zeros(sparse_mat.shape)
print()
if it + 1 > burn_iter:
W_plus += W
X_plus += X
mat_hat_plus += mat_hat
mat_hat = mat_hat_plus / gibbs_iter
W = W_plus / gibbs_iter
X = X_plus / gibbs_iter
print('Imputation MAPE: {:.6}'.format(compute_mape(dense_test, mat_hat[pos_test])))
print('Imputation RMSE: {:.6}'.format(compute_rmse(dense_test, mat_hat[pos_test])))
print()
return mat_hat, W, X
```
## Evaluation on Guangzhou Speed Data
**Scenario setting**:
- Tensor size: $214\times 61\times 144$ (road segment, day, time of day)
- Non-random missing (NM)
- 40% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')['random_matrix']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
## Non-random missing (NM)
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 10
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $214\times 61\times 144$ (road segment, day, time of day)
- Random missing (RM)
- 40% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 80
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $214\times 61\times 144$ (road segment, day, time of day)
- Random missing (RM)
- 60% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.6
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 80
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## Evaluation on Birmingham Parking Data
**Scenario setting**:
- Tensor size: $30\times 77\times 18$ (parking slot, day, time of day)
- Non-random missing (NM)
- 40% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')['random_matrix']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
## Non-random missing (NM)
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 20
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $30\times 77\times 18$ (parking slot, day, time of day)
- Random missing (RM)
- 40% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 20
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $30\times 77\times 18$ (parking slot, day, time of day)
- Random missing (RM)
- 60% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.6
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 20
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## Evaluation on Hangzhou Flow Data
**Scenario setting**:
- Tensor size: $80\times 25\times 108$ (metro station, day, time of day)
- Non-random missing (NM)
- 40% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')['random_matrix']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
## Non-random missing (NM)
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
for i2 in range(tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $80\times 25\times 108$ (metro station, day, time of day)
- Random missing (RM)
- 40% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $80\times 25\times 108$ (metro station, day, time of day)
- Random missing (RM)
- 60% missing rate
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')['tensor']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.6
## Random missing (RM)
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
.reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## Evaluation on Seattle Speed Data
**Scenario setting**:
- Tensor size: $323\times 28\times 288$ (road segment, day, time of day)
- Non-random missing (NM)
- 40% missing rate
```
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
missing_rate = 0.4
## Non-random missing (NM)
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
```
**Model setting**:
- Low rank: 10
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $323\times 28\times 288$ (road segment, day, time of day)
- Random missing (RM)
- 40% missing rate
```
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.4
## Random missing (RM)
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
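As a side note (not in the original notebook), the `np.round(RM_mat + 0.5 - missing_rate)` trick keeps an entry exactly when its uniform draw is at least `missing_rate`, so each entry is masked independently with probability `missing_rate`:

```python
import numpy as np

rng = np.random.default_rng(42)
missing_rate = 0.4
u = rng.random((1000, 1000))              # stand-in for RM_mat
binary_mat = np.round(u + 0.5 - missing_rate)
# binary_mat is 1 where u >= missing_rate and 0 otherwise, so on
# average a fraction (1 - missing_rate) of the entries survives.
print(binary_mat.mean())
```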
**Model setting**:
- Low rank: 50
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $323\times 28\times 288$ (road segment, day, time of day)
- Random missing (RM)
- 60% missing rate
```
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
missing_rate = 0.6
## Random missing (RM)
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 50
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## Evaluation on London Movement Speed Data
The London movement speed data set is a city-wide hourly traffic speed data set collected in London.
- Collected from 200,000+ road segments.
- 720 time points in April 2019.
- 73% missing values in the original data.
| Observation rate | $>90\%$ | $>80\%$ | $>70\%$ | $>60\%$ | $>50\%$ |
|:------------------|--------:|--------:|--------:|--------:|--------:|
|**Number of roads**| 17,666 | 27,148 | 35,912 | 44,352 | 52,727 |
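Assuming, as elsewhere in this notebook, that a zero entry denotes a missing observation, the road counts per threshold could be reproduced with a small sketch like this (`roads_above` is a hypothetical helper, not part of the original code):

```python
import numpy as np

def roads_above(speed_mat, rate):
    # A road (row) qualifies when its fraction of non-zero
    # (observed) entries exceeds the given rate.
    obs_rate = np.count_nonzero(speed_mat, axis=1) / speed_mat.shape[1]
    return np.where(obs_rate > rate)[0]
```

For instance, `len(roads_above(hourly_speed_mat, 0.9))` should roughly match the "$>90\%$" column of the table.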
If you want to test on the full dataset, you could consider the following setting for masking observations as missing values.
```python
import numpy as np
np.random.seed(1000)
mask_rate = 0.20
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
pos_obs = np.where(dense_mat != 0)
num = len(pos_obs[0])
sample_ind = np.random.choice(num, size = int(mask_rate * num), replace = False)
sparse_mat = dense_mat.copy()
sparse_mat[pos_obs[0][sample_ind], pos_obs[1][sample_ind]] = 0
```
Notably, you could also evaluate the model on a subset of the data with the following setting.
```
import numpy as np
np.random.seed(1000)
missing_rate = 0.4
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Non-random missing (NM)
binary_mat = np.zeros(dense_mat.shape)
random_mat = np.random.rand(dense_mat.shape[0], 30)
for i1 in range(dense_mat.shape[0]):
    for i2 in range(30):
        binary_mat[i1, i2 * 24 : (i2 + 1) * 24] = np.round(random_mat[i1, i2] + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 20
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Random missing (RM)
- 40% missing rate
```
import numpy as np
np.random.seed(1000)
missing_rate = 0.4
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Random missing (RM)
random_mat = np.random.rand(dense_mat.shape[0], dense_mat.shape[1])
binary_mat = np.round(random_mat + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 20
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Random missing (RM)
- 60% missing rate
```
import numpy as np
np.random.seed(1000)
missing_rate = 0.6
dense_mat = np.load('../datasets/London-data-set/hourly_speed_mat.npy')
binary_mat = dense_mat.copy()
binary_mat[binary_mat != 0] = 1
pos = np.where(np.sum(binary_mat, axis = 1) > 0.7 * binary_mat.shape[1])
dense_mat = dense_mat[pos[0], :]
## Random missing (RM)
random_mat = np.random.rand(dense_mat.shape[0], dense_mat.shape[1])
binary_mat = np.round(random_mat + 0.5 - missing_rate)
sparse_mat = np.multiply(dense_mat, binary_mat)
```
**Model setting**:
- Low rank: 20
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 20
init = {"W": 0.01 * np.random.randn(dim1, rank), "X": 0.01 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## Evaluation on NYC Taxi Flow Data
**Scenario setting**:
- Tensor size: $30\times 30\times 1461$ (origin, destination, time)
- Random missing (RM)
- 40% missing rate
```
import scipy.io
dense_tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')['tensor'].astype(np.float32)
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')['rm_tensor']
missing_rate = 0.4
## Random missing (RM)
binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
sparse_tensor = dense_tensor.copy()
sparse_tensor[binary_tensor == 0] = np.nan
dim1, dim2, dim3 = dense_tensor.shape
dense_mat = np.zeros((dim1 * dim2, dim3))
sparse_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]
    sparse_mat[i * dim2 : (i + 1) * dim2, :] = sparse_tensor[i, :, :]
```
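As a side note (not part of the original notebook), the slice-copy loop above is just an unfolding of the tensor along its first two modes, and a C-order `reshape` produces the same matrix in one call:

```python
import numpy as np

dense_tensor = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
dim1, dim2, dim3 = dense_tensor.shape

# Loop version, mirroring the cell above
dense_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]

# Equivalent one-liner: a C-order reshape stacks the same slices
assert np.array_equal(dense_mat, dense_tensor.reshape(dim1 * dim2, dim3))
```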
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $30\times 30\times 1461$ (origin, destination, time)
- Random missing (RM)
- 60% missing rate
```
import scipy.io
dense_tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')['tensor'].astype(np.float32)
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')['rm_tensor']
missing_rate = 0.6
## Random missing (RM)
binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
sparse_tensor = dense_tensor.copy()
sparse_tensor[binary_tensor == 0] = np.nan
dim1, dim2, dim3 = dense_tensor.shape
dense_mat = np.zeros((dim1 * dim2, dim3))
sparse_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]
    sparse_mat[i * dim2 : (i + 1) * dim2, :] = sparse_tensor[i, :, :]
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $30\times 30\times 1461$ (origin, destination, time)
- Non-random missing (NM)
- 40% missing rate
```
import scipy.io
dense_tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')['tensor'].astype(np.float32)
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')['nm_tensor']
missing_rate = 0.4
## Non-random missing (NM)
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
    for i2 in range(dense_tensor.shape[1]):
        for i3 in range(61):
            binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3] + 0.5 - missing_rate)
sparse_tensor = dense_tensor.copy()
sparse_tensor[binary_tensor == 0] = np.nan
dim1, dim2, dim3 = dense_tensor.shape
dense_mat = np.zeros((dim1 * dim2, dim3))
sparse_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]
    sparse_mat[i * dim2 : (i + 1) * dim2, :] = sparse_tensor[i, :, :]
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
## Evaluation on Pacific Surface Temperature Data
**Scenario setting**:
- Tensor size: $30\times 84\times 396$ (location x, location y, month)
- Random missing (RM)
- 40% missing rate
```
import numpy as np
np.random.seed(1000)
dense_tensor = np.load('../datasets/Temperature-data-set/tensor.npy').astype(np.float32)
pos = np.where(dense_tensor[:, 0, :] > 50)
dense_tensor[pos[0], :, pos[1]] = 0
random_tensor = np.random.rand(dense_tensor.shape[0], dense_tensor.shape[1], dense_tensor.shape[2])
missing_rate = 0.4
## Random missing (RM)
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
sparse_tensor = dense_tensor.copy()
sparse_tensor[binary_tensor == 0] = np.nan
sparse_tensor[sparse_tensor == 0] = np.nan
dim1, dim2, dim3 = dense_tensor.shape
dense_mat = np.zeros((dim1 * dim2, dim3))
sparse_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]
    sparse_mat[i * dim2 : (i + 1) * dim2, :] = sparse_tensor[i, :, :]
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $30\times 84\times 396$ (location x, location y, month)
- Random missing (RM)
- 60% missing rate
```
import numpy as np
np.random.seed(1000)
dense_tensor = np.load('../datasets/Temperature-data-set/tensor.npy').astype(np.float32)
pos = np.where(dense_tensor[:, 0, :] > 50)
dense_tensor[pos[0], :, pos[1]] = 0
random_tensor = np.random.rand(dense_tensor.shape[0], dense_tensor.shape[1], dense_tensor.shape[2])
missing_rate = 0.6
## Random missing (RM)
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
sparse_tensor = dense_tensor.copy()
sparse_tensor[binary_tensor == 0] = np.nan
sparse_tensor[sparse_tensor == 0] = np.nan
dim1, dim2, dim3 = dense_tensor.shape
dense_mat = np.zeros((dim1 * dim2, dim3))
sparse_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]
    sparse_mat[i * dim2 : (i + 1) * dim2, :] = sparse_tensor[i, :, :]
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Scenario setting**:
- Tensor size: $30\times 84\times 396$ (location x, location y, month)
- Non-random missing (NM)
- 40% missing rate
```
import numpy as np
np.random.seed(1000)
dense_tensor = np.load('../datasets/Temperature-data-set/tensor.npy').astype(np.float32)
pos = np.where(dense_tensor[:, 0, :] > 50)
dense_tensor[pos[0], :, pos[1]] = 0
random_tensor = np.random.rand(dense_tensor.shape[0], dense_tensor.shape[1], int(dense_tensor.shape[2] / 3))
missing_rate = 0.4
## Non-random missing (NM)
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
    for i2 in range(dense_tensor.shape[1]):
        for i3 in range(int(dense_tensor.shape[2] / 3)):
            binary_tensor[i1, i2, i3 * 3 : (i3 + 1) * 3] = np.round(random_tensor[i1, i2, i3] + 0.5 - missing_rate)
sparse_tensor = dense_tensor.copy()
sparse_tensor[binary_tensor == 0] = np.nan
sparse_tensor[sparse_tensor == 0] = np.nan
dim1, dim2, dim3 = dense_tensor.shape
dense_mat = np.zeros((dim1 * dim2, dim3))
sparse_mat = np.zeros((dim1 * dim2, dim3))
for i in range(dim1):
    dense_mat[i * dim2 : (i + 1) * dim2, :] = dense_tensor[i, :, :]
    sparse_mat[i * dim2 : (i + 1) * dim2, :] = sparse_tensor[i, :, :]
```
**Model setting**:
- Low rank: 30
- The number of burn-in iterations: 1000
- The number of Gibbs iterations: 200
```
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.randn(dim1, rank), "X": 0.1 * np.random.randn(dim2, rank)}
burn_iter = 1000
gibbs_iter = 200
mat_hat, W, X = BPMF(dense_mat, sparse_mat, init, rank, burn_iter, gibbs_iter)
end = time.time()
print('Running time: %d seconds'%(end - start))
```
### License
<div class="alert alert-block alert-danger">
<b>This work is released under the MIT license.</b>
</div>
```
import numpy as np
from ctapipe.io import EventSource
from ctapipe.io import EventSeeker
import matplotlib.pyplot as plt
import numpy as np
from ctapipe.instrument import CameraGeometry
from ctapipe.visualization import CameraDisplay
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 9)
plt.rcParams['font.size'] = 20
source = EventSource(input_url="/Users/thomasvuillaume/Work/CTA/Data/LST1/LST-1.1.Run00088.0000.fits.fz",
max_events=20)
def format_axes(ax):
    ax.set_xlabel("")
    ax.set_ylabel("")
    ax.tick_params(
        axis='x',          # changes apply to the x-axis
        which='both',      # both major and minor ticks are affected
        bottom=False,      # ticks along the bottom edge are off
        top=False,         # ticks along the top edge are off
        labelbottom=False) # labels along the bottom edge are off
    ax.tick_params(
        axis='y',          # changes apply to the y-axis
        which='both',      # both major and minor ticks are affected
        left=False,        # ticks along the left edge are off
        right=False,       # ticks along the right edge are off
        labelleft=False)   # labels along the left edge are off
    return ax
geom = CameraGeometry.from_name("LSTCam-002")
evt_id = 203 # The id of the selected event
for i, ev in enumerate(source):
    N_modules = 7*265
    print(ev.lst.tel[0].evt.event_id)
    #if((ev.lst.tel[0].evt.event_id<19800) or (ev.lst.tel[0].evt.event_id>19860)):
    #    continue
    std_signal = np.zeros(1855)
    for pixel in range(0, N_modules):
        std_signal[pixel] = np.max(ev.r0.tel[0].waveform[0, pixel, 2:38])
    if(np.size(std_signal[std_signal>1000.]) < 15):
        continue
    print(f"Event {ev.lst.tel[0].evt.event_id}, Max: {np.max(std_signal)} counts")
    #geom = CameraGeometry.from_name("LSTCam-002")
    fig, ax = plt.subplots(figsize=(10,10))
    #disp0 = CameraDisplay(geom, ax=ax)
    disp0 = CameraDisplay(ev.inst.subarray.tels[0].camera, ax=ax)
    disp0.cmap = 'viridis'
    disp0.image = std_signal
    disp0.add_colorbar(ax=ax)
    # Establish max and min
    sort = np.argsort(std_signal)
    min_color = std_signal[sort][7]  # There was one cluster off
    max_color = std_signal[sort][-2]
    max_color = np.max(std_signal)
    disp0.set_limits_minmax(min_color, max_color)
    ax.set_title(f"Event {ev.lst.tel[0].evt.event_id}")
    format_axes(ax)
    # fig.savefig("Images_LST/Event_%i.png"%(ev.lst.tel[0].evt.event_id))
    plt.show()
    if(ev.lst.tel[0].evt.event_id==evt_id):
        break
# If you want to make a movie with all the slices
max_color = np.max(std_signal)
sort = np.argsort(std_signal)
min_color = std_signal[sort][7]
for cell in range(1,39):
    print("cell", cell)
    fig, ax = plt.subplots(figsize=(10,10))
    disp0 = CameraDisplay(geom, ax=ax)
    disp0.cmap = 'viridis'
    disp0.add_colorbar(ax=ax)
    disp0.image = ev.r0.tel[0].waveform[0,:,cell]
    disp0.set_limits_minmax(min_color, max_color)
    format_axes(ax)
    ax.set_title(f"Event {ev.lst.tel[0].evt.event_id}, Time {cell} ns")
    # fig.savefig("Images_LST/for_gifs/Event_{:02d}_cell{:02d}.png".format(ev.lst.tel[0].evt.event_id,cell))
    #plt.show()
```
```
import pandas as pd
import numpy as np
import os
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
import cv2
```
Hyper parameters
```
epochs = 20
width = height = 224
```
Prepare dataset
```
!pip install -q kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!kaggle datasets download -d jangedoo/utkface-new
!unzip -qq utkface-new.zip
images = [] # x
ages = [] # y
for image_name in os.listdir('crop_part1'):
    parts = image_name.split('_')
    ages.append(int(parts[0]))
    image = cv2.imread(f'crop_part1/{image_name}')
    image = cv2.resize(image, (width, height))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    images.append(image)
images = pd.Series(images, name='Images')
ages = pd.Series(ages, name='Ages')
df = pd.concat([images, ages], axis=1)
df.head()
print(df['Ages'][0])
plt.imshow(df['Images'][0])
print(df['Ages'][1])
plt.imshow(df['Images'][1])
plt.figure(figsize=(18, 6))
plt.hist(df['Ages'], bins=df['Ages'].max())
plt.show()
```
There are too many faces of people between 0 and 4 years old. The model would fit too well to these ages and not well enough to the other ages. To resolve this, I'm only going to include about a third of the images in this age range.
```
under_4 = df[df['Ages'] <= 4]
under_4_small = under_4.sample(frac=0.3)
up_4 = df[df['Ages'] > 4]
df = pd.concat([under_4_small, up_4])
plt.figure(figsize=(18, 6))
plt.hist(df['Ages'], bins=df['Ages'].max())
plt.show()
```
This looks much better! The dataframe is more representative of the population now. However, there aren't many images of people over 80, which would cause the model to not train well enough on those ages. It's best to just remove over 80s and only have a model that can predict the ages of people under 80.
```
df = df[df['Ages'] < 80]
plt.figure(figsize=(18, 6))
plt.hist(df['Ages'], bins=df['Ages'].max())
plt.show()
X = np.array(df['Images'].values.tolist())
Y = np.array(df['Ages'].values.tolist())
X.shape
x_train, x_val, y_train, y_val = train_test_split(X, Y, test_size=0.2, stratify=Y)
print(x_train.shape)
print(y_train.shape)
print(x_val.shape)
print(y_val.shape)
data_generator = ImageDataGenerator(rescale=1./255,
                                    horizontal_flip=True)
train_data = data_generator.flow(x_train, y_train, batch_size=32)
val_data = data_generator.flow(x_val, y_val, batch_size=32)
```
Train
```
base_model = tf.keras.applications.MobileNetV2(
input_shape=(width, height, 3),
weights='imagenet',
include_top=False,
pooling='avg'
)
for layer in base_model.layers:
    layer.trainable = False
model = tf.keras.Sequential([
    base_model,
    Dropout(0.5),
    Dense(1, activation='relu')
])
model.compile(loss='mean_squared_error',
optimizer=Adam(learning_rate=0.001))
model.fit(train_data,
validation_data=val_data,
epochs=epochs,
shuffle=True)
```
Inference
```
!wget --content-disposition https://github.com/SajjadAemmi/Face-Alignment/blob/main/models/shape_predictor_68_face_landmarks.dat?raw=true
from imutils.face_utils import FaceAligner
import imutils
import dlib
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
fa = FaceAligner(predictor, desiredFaceWidth=256)
def process_and_predict(image_path):
    image = cv2.imread(image_path)
    image = imutils.resize(image, width=800)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 2)
    for i, rect in enumerate(rects):
        faceAligned = fa.align(image, gray, rect)
        faceAligned = cv2.resize(faceAligned, (width, height))
        faceAligned = cv2.cvtColor(faceAligned, cv2.COLOR_BGR2RGB)
        plt.imshow(faceAligned)
        faceAligned = faceAligned / 255.0
        faceAligned = np.expand_dims(faceAligned, axis=0)
        age = model.predict(faceAligned)
        print('Age:', int(age))
process_and_predict('/content/trump.jpg')
```
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# _*Quantum Tic-Tac-Toe*_
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
***
### Contributors
[Maor Ben-Shahar](https://github.com/MA0R)
***
An example run of quantum Tic-Tac-Toe is provided below, with explanations of the game workings following after. Despite the ability to superimpose moves, a winning strategy still exists for both players (meaning the game will be a draw if both implement it). See if you can work it out.
```
#Import the game!
import sys
sys.path.append('game_engines')
from q_tic_tac_toe import Board
#inputs are (X,Y,print_info).
#X,Y are the dimensions of the board. print_info boolean controls if to print instructions at game launch.
#Since it is our first time playing, lets set it True and see the instructions!
B = Board(3,3,True)
B.run()
```
When playing the game, the two players are asked in turn whether to make a classical move (1 cell) or a quantum move (1 or 2 cells at most, for now). When making any move there are several scenarios that can happen; they are explained below. The terminology used:
- Each turn a "move" is made
- Each move consists of one or two "cells", the location(s) where the move is made. It is a superposition of classical moves.
Quantum moves are restricted to two cells only because they require an increasing number of qubits, which is slow to simulate.
## One move on an empty cell
This is the simplest move, it is a "classical" move. The game registers this move as a set of coordinates, and the player who made the move. No qubits are used here.
It is registered as such:
`Play in one or two cells?1
x index: 0
y index: 0`
First the player is asked how many cells to play in (we chose 1, a classical move). Then the game asks for the indices of the move.
And the board registers it as
`
[['O1','',''],
['','',''],
['','','']]
`
This move is *always* present at the end of the game.
## Two-cell moves in empty cells
This is a quantum move, the game stores a move that is in a superposition of being played at *two* cells. Ordered coordinates for the two cells to be occupied need to be provided. A row in the board with a superposition move would look like so
`[X1,X1,'']`
Two qubits were used in order to register this move. They are in the state $|10>+|01>$: if the first qubit is measured to be 1 then the board becomes `[X1,'','']` and vice versa. Why can we not use just one qubit to record this? We can; the qubit would have to be put into the state $|0>+|1>$. This is not implemented yet, since the two-qubit method is consistent with the later types of quantum moves.
Let us see this in action:
```
B = Board(3,3)
B.run()
```
The game outcome is almost 50% in each cell, as we would expect. There is a redundant bit at the end of the bit code (to be removed soon!). Also note that the bit strings are in the reverse order of what we write here; this is because the quantum register in Qiskit has positions $|q_n,...,q_0>$.
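The roughly 50/50 statistics can also be checked outside the game engine with a small NumPy sketch that samples measurement outcomes of the state $(|10>+|01>)/\sqrt{2}$ (illustrative only; the game's own qubit bookkeeping is independent of this):

```python
import numpy as np

rng = np.random.default_rng(0)
# Amplitudes of (|10> + |01>)/sqrt(2); basis states indexed 00, 01, 10, 11
amps = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
probs = np.abs(amps) ** 2            # Born rule: |amplitude|^2
shots = rng.choice(4, size=10_000, p=probs)
counts = {format(k, '02b'): int(np.sum(shots == k)) for k in range(4)}
print(counts)  # only '01' and '10' occur, each close to 5000
```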
## One-cell move played in a maybe-occupied cell
It is possible that after the game is in state `[X1,X1,'']` one would definitely want to make a move at position (0,0), perhaps when the board is nearly full (it is otherwise not a very good strategy). Such a move can be erased from the game history! Let us see how it is recorded. The first row of the board is now
`[X1 O2,X1,'']`
and the state of the game qubits is
$$ |100>+|011> $$
with the first qubit recording the success of the first move at cell (0,0), the second qubit recording the success of the first move at cell (0,1), and the third qubit recording the move by player O, which is anti-correlated with the move by X at cell (0,0).
Notice that this move can be completely erased!
```
B = Board(3,3)
B.add_move([[0,0],[0,1]],0) #Directly adding moves, ([indx1,indx2],player) 0=X, 1=O.
B.add_move([[0,0]],1)
B.compute_winner()
```
Once again note that the move could be erased completely, and in fact this happens 50% of the time. Notice how the bit string output from Qiskit is translated into a board state.
## Two-cell moves in maybe-occupied cells
Instead of the above, player O might like to choose a better strategy. Perhaps O is interested in a quantum move on cells (0,0) and (0,2). In such a case the game records the two moves in the order they are entered.
- In order (0,0) then (0,2): The state of the game is first made into $ |100>+|011> $ as above, with the third qubit recording the success of player O getting position (0,0). Then the (0,2) position is registered, anti-correlated with succeeding at position (0,0): $|1001>+|0110>$. Now, unlike before, player O succeeds in registering a move regardless of the outcome.
- In order (0,2) then (0,0): Now playing at (0,2) is not dependent on anything, so the game state is $(|10>+|01>)\otimes (|1>+|0>) = |101>+|100>+|011>+|010>$. And when the move at position (0,0) is added too, it is anti-correlated with BOTH the move at (0,2) AND the pre-existing move at (0,0), so the qubit state becomes $|1010>+|1000>+|0110>+|0101>$. Notice how now the move could be erased, so order does matter!
```
B = Board(3,3)
#Instead of running the game, for the purpose of demonstrating, we can just create the appropriate state manually.
#Directly adding moves, ([[x1,y1],[x2,y2]], player) with player=0->X, 1->O.
B.add_move([[0,0],[0,1]],0)
B.add_move([[0,0],[0,2]],1)
B.compute_winner()
```
### Exercise: what if player O chose coordinates (x=0,y=0) and (x=1,y=0) instead?
### Exercise: At this stage, can player X ensure that no matter what O plays, both (x=0,y=0) and (x=1,y=0) are occupied by X?
And that is all there is to quantum tic-tac-toe! Remember, to run the game, import the board:
`from q_tic_tac_toe import Board`
Create the board you want to play on:
`B = Board(3,3,True)`
and run!
`B.run()`
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv("data/Mall_Customers.csv")
df.head()
print("Size of the data : ", df.shape)
from sklearn.cluster import KMeans
```
### Segmentation using Age and Spending Score
```
X = df[["Age", "Spending Score (1-100)"]]
X.head()
```
#### Model 1. with k = 3
```
k_means = KMeans(n_clusters=3,random_state = 42)
k_means.fit(X)
#To get cluster centers
k_means.cluster_centers_
labels = k_means.labels_
labels
```
#### Visualizing the clusters
```
plt.figure(figsize = (9,7))
plt.scatter(X["Age"],X["Spending Score (1-100)"],c=k_means.labels_,cmap='rainbow')
plt.title("Cluster obtained using KMeans")
plt.xlabel("Age")
plt.ylabel("Spending Score (1-100)")
plt.show()
```
#### Visualizing the clusters with their centroids
```
plt.figure(figsize = (9,7))
plt.scatter(X.values[labels == 0, 0], X.values[labels == 0, 1], s = 50, c = 'red', label = 'Cluster 1')
plt.scatter(X.values[labels == 1, 0], X.values[labels == 1, 1], s = 50, c = 'blue', label = 'Cluster 2')
plt.scatter(X.values[labels == 2, 0], X.values[labels == 2, 1], s = 50, c = 'green', label = 'Cluster 3')
plt.scatter(k_means.cluster_centers_[:, 0], k_means.cluster_centers_[:, 1], s = 100, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Age')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```
#### Assigning cluster to a query point
```
plt.figure(figsize = (9,7))
plt.scatter(X.values[labels == 0, 0], X.values[labels == 0, 1], s = 50, c = 'red', label = 'Cluster 1')
plt.scatter(X.values[labels == 1, 0], X.values[labels == 1, 1], s = 50, c = 'blue', label = 'Cluster 2')
plt.scatter(X.values[labels == 2, 0], X.values[labels == 2, 1], s = 50, c = 'green', label = 'Cluster 3')
plt.scatter(k_means.cluster_centers_[:, 0], k_means.cluster_centers_[:, 1], s = 100, c = 'yellow', label = 'Centroids')
plt.scatter(25, 50, s = 100, c = 'black', label = 'Query point')
plt.title('Clusters of customers')
plt.xlabel("Age")
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
k_means.predict([[25,50]])
```
#### Model 2. with k = 5
```
k_means_5 = KMeans(n_clusters=5,random_state = 42)
k_means_5.fit(X)
plt.figure(figsize = (9,7))
plt.scatter(df["Age"],df["Spending Score (1-100)"],c=k_means_5.labels_,cmap='rainbow')
plt.title("Cluster obtained using KMeans")
plt.xlabel("Age")
plt.ylabel("Spending Score (1-100)")
plt.show()
```
### Finding the optimal value of k
#### 1. Elbow Method
```
inertia = []
for i in range(1, 15):
    k_means = KMeans(n_clusters = i)
    k_means.fit(X)
    inertia.append(k_means.inertia_)
inertia
plt.figure(figsize = (9 ,7))
plt.plot(np.arange(1 , 15) , inertia , 'o')
plt.plot(np.arange(1 , 15) , inertia , '-')
plt.xlabel('Number of Clusters')
plt.ylabel('Inertia')
plt.show()
k_means_4 = KMeans(n_clusters=4,random_state = 42)
k_means_4.fit(X)
labels = k_means_4.labels_
plt.figure(figsize = (9,7))
plt.scatter(X.values[labels == 0, 0], X.values[labels == 0, 1], s = 50, c = 'red', label = 'Low Spenders')
plt.scatter(X.values[labels == 1, 0], X.values[labels == 1, 1], s = 50, c = 'blue', label = 'Young High Spenders')
plt.scatter(X.values[labels == 2, 0], X.values[labels == 2, 1], s = 50, c = 'green', label = 'Young Average Spenders')
plt.scatter(X.values[labels == 3, 0], X.values[labels == 3, 1], s = 50, c = 'maroon', label = 'Old Average Spenders')
plt.scatter(k_means_4.cluster_centers_[:, 0], k_means_4.cluster_centers_[:, 1], s = 100, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```
#### 2. Average Silhouette Method
```
from sklearn.metrics import silhouette_score
s_scores = []
for i in range(2, 15):
    k_means = KMeans(n_clusters=i, random_state = 42)
    k_means.fit(X)
    silhouette_avg = silhouette_score(X, k_means.labels_)
    s_scores.append(silhouette_avg)
plt.figure(figsize = (9 ,7))
plt.plot(np.arange(2 , 15) , s_scores , 'o')
plt.plot(np.arange(2 , 15) , s_scores , '-')
plt.xlabel('Number of Clusters')
plt.ylabel('Average Silhouette Score')
plt.show()
```
### Segmentation using Annual Income and Spending Score
```
X = df[["Annual Income (k$)", "Spending Score (1-100)"]]
X.head()
inertia = []
for i in range(1, 15):
    k_means = KMeans(n_clusters = i)
    k_means.fit(X)
    inertia.append(k_means.inertia_)
plt.figure(figsize = (9 ,7))
plt.plot(np.arange(1 , 15) , inertia , 'o')
plt.plot(np.arange(1 , 15) , inertia , '-')
plt.xlabel('Number of Clusters')
plt.ylabel('Inertia')
plt.show()
k_means = KMeans(n_clusters=5,random_state = 42)
k_means.fit(X)
labels = k_means.labels_
plt.figure(figsize = (9,7))
plt.scatter(X.values[labels == 0, 0], X.values[labels == 0, 1], s = 50, c = 'red', label = 'Standard')
plt.scatter(X.values[labels == 1, 0], X.values[labels == 1, 1], s = 50, c = 'blue', label = 'Careful')
plt.scatter(X.values[labels == 2, 0], X.values[labels == 2, 1], s = 50, c = 'green', label = 'Sensible')
plt.scatter(X.values[labels == 3, 0], X.values[labels == 3, 1], s = 50, c = 'maroon', label = 'Careless')
plt.scatter(X.values[labels == 4, 0], X.values[labels == 4, 1], s = 50, c = 'orange', label = 'Target')
plt.scatter(k_means.cluster_centers_[:, 0], k_means.cluster_centers_[:, 1], s = 100, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```
## K-Medoids
#### K-Means centroids are generally not actual data points.
```
k_means.cluster_centers_
#checking whether the centroids are actual data points or not
for i in k_means.cluster_centers_:
    if len(X[(X["Annual Income (k$)"] == i[0]) & (X["Spending Score (1-100)"] == i[1])]) != 0:
        print(i)
#installing sklearn_extra
#!conda install -c conda-forge scikit-learn-extra
from sklearn_extra.cluster import KMedoids
k_medoids = KMedoids(n_clusters=5, random_state=42)
k_medoids.fit(X)
labels = k_medoids.labels_
labels
k_medoids.cluster_centers_
#checking whether the centroids are actual data points or not
for i in k_medoids.cluster_centers_:
    if len(X[(X["Annual Income (k$)"] == i[0]) & (X["Spending Score (1-100)"] == i[1])]) != 0:
        print("Yes,", i, "is an actual data point")
```
## Data Source and Description:
Creator/Donor:
Jeffrey C. Schlimmer (Jeffrey.Schlimmer '@' a.gp.cs.cmu.edu)
Sources:
1. 1985 Model Import Car and Truck Specifications, 1985 Ward's Automotive Yearbook.
2. Personal Auto Manuals, Insurance Services Office, 160 Water Street, New York, NY 10038
3. Insurance Collision Report, Insurance Institute for Highway Safety, Watergate 600, Washington, DC 20037
Data Set Information:
This data set consists of three types of entities: (a) the specification of an auto in terms of various characteristics, (b) its assigned insurance risk rating, (c) its normalized losses in use as compared to other cars. The second rating corresponds to the degree to which the auto is more risky than its price indicates. Cars are initially assigned a risk factor symbol associated with their price. Then, if the car is more risky (or less), this symbol is adjusted by moving it up (or down) the scale. Actuaries call this process "symboling". A value of +3 indicates that the auto is risky, -3 that it is probably pretty safe.
The third factor is the relative average loss payment per insured vehicle year. This value is normalized for all autos within a particular size classification (two-door small, station wagons, sports/speciality, etc...), and represents the average loss per car per year.
Note: Several of the attributes in the database could be used as a "class" attribute.
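To make the symboling scale concrete, here is a small hypothetical pandas sketch; the bin edges and label names below are our own illustration, not part of the dataset:

```python
import pandas as pd

# Hypothetical mapping of the -3..+3 symboling scale to coarse risk labels
df = pd.DataFrame({"symboling": [-2, 0, 3]})
df["risk"] = pd.cut(df["symboling"], bins=[-4, -1, 1, 3],
                    labels=["safe", "average", "risky"])
print(df["risk"].tolist())  # -> ['safe', 'average', 'risky']
```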
Attribute Information:
Attribute: Attribute Range
1. symboling: -3, -2, -1, 0, 1, 2, 3.
2. normalized-losses: continuous from 65 to 256.
3. make:
alfa-romero, audi, bmw, chevrolet, dodge, honda,
isuzu, jaguar, mazda, mercedes-benz, mercury,
mitsubishi, nissan, peugot, plymouth, porsche,
renault, saab, subaru, toyota, volkswagen, volvo
4. fuel-type: diesel, gas.
5. aspiration: std, turbo.
6. num-of-doors: four, two.
7. body-style: hardtop, wagon, sedan, hatchback, convertible.
8. drive-wheels: 4wd, fwd, rwd.
9. engine-location: front, rear.
10. wheel-base: continuous from 86.6 to 120.9.
11. length: continuous from 141.1 to 208.1.
12. width: continuous from 60.3 to 72.3.
13. height: continuous from 47.8 to 59.8.
14. curb-weight: continuous from 1488 to 4066.
15. engine-type: dohc, dohcv, l, ohc, ohcf, ohcv, rotor.
16. num-of-cylinders: eight, five, four, six, three, twelve, two.
17. engine-size: continuous from 61 to 326.
18. fuel-system: 1bbl, 2bbl, 4bbl, idi, mfi, mpfi, spdi, spfi.
19. bore: continuous from 2.54 to 3.94.
20. stroke: continuous from 2.07 to 4.17.
21. compression-ratio: continuous from 7 to 23.
22. horsepower: continuous from 48 to 288.
23. peak-rpm: continuous from 4150 to 6600.
24. city-mpg: continuous from 13 to 49.
25. highway-mpg: continuous from 16 to 54.
26. price: continuous from 5118 to 45400.
load modules
```
import numpy as np
import pandas as pd
```
reading data and columns names.
```
# columns names
names = ['symboling','normalized-losses','make',
'fuel-type','aspiration','num-of-doors',
'body-style','drive-wheels','engine-location',
'wheel-base','length','width','height','curb-weight',
'engine-type','num-of-cylinders','engine-size','fuel-system',
'bore','stroke','compression-ratio','horsepower','peak-rpm',
'city-mpg','highway-mpg','price']
# reading data
automobile_df = pd.read_csv('autos.data',names=names)
# checking the dataset by displaying
automobile_df.head(20)
```
check datatype for each variable.
```
print(automobile_df.dtypes)
```
check size of the dataset.
```
print(automobile_df.shape)
```
after checking the dataset size, we see it has **26** `columns` and **205** `rows`.
the dataset contains unknown values marked `?`, which we need to replace with `np.nan`.
```
automobile_df.replace('?',np.nan,inplace=True)
automobile_df.head()
```
after converting the unknown values to `NaN`, we need to count all missing values and identify the missing values for each feature.
### Handling Missing Values:
* replace missing numerical or continuous values
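A minimal sketch of one common approach, mean imputation; the toy values below are illustrative, and in the notebook this would be applied to the numerical columns of `automobile_df`:

```python
import numpy as np
import pandas as pd

# Toy frame with missing numerical values (illustrative)
df = pd.DataFrame({"normalized-losses": [164.0, np.nan, 122.0],
                   "price": [13495.0, 16500.0, np.nan]})

# Replace each missing numerical value with the column mean
for col in ["normalized-losses", "price"]:
    df[col] = df[col].fillna(df[col].mean())

print(df["normalized-losses"].tolist())  # -> [164.0, 143.0, 122.0]
```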
```
!unzip spam.zip -d /
#importing libraries
import numpy as np
import random
import pandas as pd
import sys
import os
import time
import codecs
import collections
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
from nltk.tokenize import sent_tokenize, word_tokenize
import scipy
from scipy import spatial
from nltk.tokenize.toktok import ToktokTokenizer
import re
tokenizer = ToktokTokenizer()
file_content = pd.read_csv('spam.csv', encoding = "ISO-8859-1")
# Just selecting emails and converting them into a list
Email_Data = file_content[[ 'v2']]
list_data = Email_Data.values.tolist()
list_data
#Converting list to string
from collections.abc import Iterable  # collections.Iterable was removed in Python 3.10

def flatten(items):
    """Yield items from any nested iterable"""
    for x in items:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            for sub_x in flatten(x):
                yield sub_x
        else:
            yield x
TextData=list(flatten(list_data))
TextData = ''.join(TextData)
# Remove unwanted lines and converting into lower case
TextData = TextData.replace('\n','')
TextData = TextData.lower()
pattern = r'[^a-zA-Z0-9\s]'  # a-zA-Z, not a-zA-z, which would also match punctuation between Z and a
TextData = re.sub(pattern, '', TextData)
# Tokenizing
tokens = tokenizer.tokenize(TextData)
tokens = [token.strip() for token in tokens]
# get the distinct words and sort it
word_counts = collections.Counter(tokens)
word_c = len(word_counts)
print(word_c)
distinct_words = [x[0] for x in word_counts.most_common()]
distinct_words_sorted = list(sorted(distinct_words))
# Generate indexing for all words
word_index = {x: i for i, x in enumerate(distinct_words_sorted)}
# decide on sentence length
sentence_length = 25
#prepare the dataset of input to output pairs encoded as integers
# Generate the data for the model
#input = the input sentence to the model with index
#output = output of the model with index
InputData = []
OutputData = []
for i in range(0, word_c - sentence_length, 1):
    X = tokens[i:i + sentence_length]
    Y = tokens[i + sentence_length]
    InputData.append([word_index[char] for char in X])
    OutputData.append(word_index[Y])
print (InputData[:1])
print ("\n")
print(OutputData[:1])
X = numpy.reshape(InputData, (len(InputData), sentence_length, 1))
# One hot encode the output variable
Y = np_utils.to_categorical(OutputData)
Y
# define the LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(Y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
#define the checkpoint
file_name_path= "weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(file_name_path, monitor='loss',
verbose=1, save_best_only=True, mode='min')
callbacks = [checkpoint]
#fit the model
model.fit(X, Y, epochs=5, batch_size=128, callbacks=callbacks)
# load the network weights
file_name = "weights-improvement-05-6.8075.hdf5"
model.load_weights(file_name)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# Generating random sequence
start = numpy.random.randint(0, len(InputData))
input_sent = InputData[start]
# Generate index of the next word of the email
X = numpy.reshape(input_sent, (1, len(input_sent), 1))
predict_word = model.predict(X, verbose=0)
index = numpy.argmax(predict_word)
print(input_sent)
print ("\n")
print(index)
word_index_rev = dict((i, c) for i, c in enumerate(tokens))
result = word_index_rev[index]
sent_in = [word_index_rev[value] for value in input_sent]
print(sent_in)
print ("\n")
print(result)
```
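The single-step prediction above extends naturally to generating a whole sequence by sliding the input window forward one token at a time. The sketch below separates that loop from the model: `predict_index` stands in for `model.predict` followed by `numpy.argmax`, and the toy predictor is only there to make the example self-contained:

```python
def generate(input_sent, predict_index, n_words):
    """Generate n_words indices by repeatedly predicting and sliding the window."""
    sent = list(input_sent)
    out = []
    for _ in range(n_words):
        idx = predict_index(sent)   # stand-in for model.predict + argmax
        out.append(idx)
        sent = sent[1:] + [idx]     # drop the oldest token, append the prediction
    return out

# Toy predictor: next index is the window sum modulo a vocabulary size of 10
print(generate([1, 2, 3], lambda s: sum(s) % 10, 3))  # -> [6, 1, 0]
```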
# Dataset Loading Overview
`Ascend` `GPU` `CPU` `Data Preparation`
## Overview
MindSpore supports loading datasets commonly used in the image domain; users can load them directly with the corresponding classes in `mindspore.dataset`. The currently supported common datasets and their dataset classes are listed in the table below.
| Image Dataset | Dataset Class | Description |
| :--------- | :-------------- | :----------------------------------------------------------------------------------------------------------------------------------- |
| MNIST | MnistDataset | MNIST is a large handwritten-digit image dataset with 60,000 training images and 10,000 test images, commonly used for training image processing systems. |
| CIFAR-10 | Cifar10Dataset | CIFAR-10 is a small-image dataset of 60,000 32x32 color images in 10 classes, averaging 6,000 images per class, of which 5,000 are training and 1,000 are test images. |
| CIFAR-100 | Cifar100Dataset | CIFAR-100 is similar to CIFAR-10 but has 100 classes, averaging 600 images per class, of which 500 are training and 100 are test images. |
| CelebA | CelebADataset | CelebA is a large face-image dataset containing more than 200,000 celebrity face images, each annotated with 40 attributes. |
| PASCAL-VOC | VOCDataset | PASCAL-VOC is a widely used image dataset in computer vision fields such as object detection and image segmentation. |
| COCO | CocoDataset | COCO is a large dataset for object detection, image segmentation, and pose estimation. |
| CLUE | CLUEDataset | CLUE is a large Chinese language-understanding dataset. |
MindSpore also supports loading datasets stored in a variety of data formats; users can load data files from disk directly with the corresponding classes in `mindspore.dataset`. The currently supported data formats and their loading methods are listed in the table below.
| Data Format | Dataset Class | Description |
| :--------- | :----------------- | :------------------------------------------------------------------------------------------------ |
| MindRecord | MindDataset | MindRecord is MindSpore's self-developed data format, with advantages such as efficient reads/writes and easy distributed processing. |
| Manifest | ManifestDataset | Manifest is a data format supported by Huawei ModelArts; it describes the raw files and annotation information and can be used in annotation, training, and inference scenarios. |
| TFRecord | TFRecordDataset | TFRecord is a binary data file format defined by TensorFlow. |
| NumPy | NumpySlicesDataset | A NumPy data source refers to a dataset of NumPy arrays already read into memory. |
| Text File | TextFileDataset | Text File refers to common text-format data. |
| CSV File | CSVDataset | CSV stands for comma-separated values; such files store tabular data in plain text. |
MindSpore likewise supports loading custom datasets via `GeneratorDataset`; users can implement their own dataset classes as needed.
| Dataset Class | Description |
| :----------------- | :------------------------------------ |
| GeneratorDataset | A user-defined way of reading and processing a dataset. |
| NumpySlicesDataset | A user-defined way of building a dataset from NumPy data. |
> For more detailed descriptions of the dataset loading APIs, see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.html).
## Loading Common Datasets
The following describes how to load several common datasets.
### CIFAR-10/100 Dataset
Download the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz) and extract it to the specified location; the example code below downloads the dataset and extracts it to the specified location.
```
import os
import requests
import tarfile
import zipfile
import shutil
requests.packages.urllib3.disable_warnings()
def download_dataset(url, target_path):
    """download and decompress dataset"""
    if not os.path.exists(target_path):
        os.makedirs(target_path)
    download_file = url.split("/")[-1]
    if not os.path.exists(download_file):
        res = requests.get(url, stream=True, verify=False)
        if download_file.split(".")[-1] not in ["tgz", "zip", "tar", "gz"]:
            download_file = os.path.join(target_path, download_file)
        with open(download_file, "wb") as f:
            for chunk in res.iter_content(chunk_size=512):
                if chunk:
                    f.write(chunk)
    if download_file.endswith("zip"):
        z = zipfile.ZipFile(download_file, "r")
        z.extractall(path=target_path)
        z.close()
    if download_file.endswith(".tar.gz") or download_file.endswith(".tar") or download_file.endswith(".tgz"):
        t = tarfile.open(download_file)
        names = t.getnames()
        for name in names:
            t.extract(name, target_path)
        t.close()
    print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(url), target_path))
download_dataset("https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/cifar-10-binary.tar.gz", "./datasets")
test_path = "./datasets/cifar-10-batches-bin/test"
train_path = "./datasets/cifar-10-batches-bin/train"
os.makedirs(test_path, exist_ok=True)
os.makedirs(train_path, exist_ok=True)
if not os.path.exists(os.path.join(test_path, "test_batch.bin")):
    shutil.move("./datasets/cifar-10-batches-bin/test_batch.bin", test_path)
[shutil.move("./datasets/cifar-10-batches-bin/"+i, train_path) for i in os.listdir("./datasets/cifar-10-batches-bin/") if os.path.isfile("./datasets/cifar-10-batches-bin/"+i) and not i.endswith(".html") and not os.path.exists(os.path.join(train_path, i))]
```
The directory structure of the extracted dataset files is as follows:
```text
./datasets/cifar-10-batches-bin
├── readme.html
├── test
│ └── test_batch.bin
└── train
├── batches.meta.txt
├── data_batch_1.bin
├── data_batch_2.bin
├── data_batch_3.bin
├── data_batch_4.bin
└── data_batch_5.bin
```
The following example uses the `Cifar10Dataset` API to load the CIFAR-10 dataset, fetches 5 samples with a sequential sampler, and then shows the shape and label of the corresponding images.
The CIFAR-100 and MNIST datasets are loaded in a similar way.
```
import mindspore.dataset as ds
DATA_DIR = "./datasets/cifar-10-batches-bin/train/"
sampler = ds.SequentialSampler(num_samples=5)
dataset = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
for data in dataset.create_dict_iterator():
    print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
### VOC Dataset
The VOC dataset has multiple versions; VOC2012 is used here. Download the [VOC2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar) and extract it; the directory structure is as follows.
```text
└─ VOCtrainval_11-May-2012
└── VOCdevkit
└── VOC2012
├── Annotations
├── ImageSets
├── JPEGImages
├── SegmentationClass
└── SegmentationObject
```
The following example uses the `VOCDataset` API to load the VOC2012 dataset, showing the original image shape and target shape when the task is set to segmentation (Segmentation) and detection (Detection), respectively.
```python
import mindspore.dataset as ds
DATA_DIR = "VOCtrainval_11-May-2012/VOCdevkit/VOC2012/"
dataset = ds.VOCDataset(DATA_DIR, task="Segmentation", usage="train", num_samples=2, decode=True, shuffle=False)
print("[Segmentation]:")
for data in dataset.create_dict_iterator():
    print("image shape:", data["image"].shape)
    print("target shape:", data["target"].shape)
dataset = ds.VOCDataset(DATA_DIR, task="Detection", usage="train", num_samples=1, decode=True, shuffle=False)
print("[Detection]:")
for data in dataset.create_dict_iterator():
    print("image shape:", data["image"].shape)
    print("bbox shape:", data["bbox"].shape)
```
Output:
```text
[Segmentation]:
image shape: (281, 500, 3)
target shape: (281, 500, 3)
image shape: (375, 500, 3)
target shape: (375, 500, 3)
[Detection]:
image shape: (442, 500, 3)
bbox shape: (2, 4)
```
### COCO Dataset
The COCO dataset has multiple versions; the COCO2017 validation dataset is used here. Download the COCO2017 [validation set](http://images.cocodataset.org/zips/val2017.zip), [detection task annotations](http://images.cocodataset.org/annotations/annotations_trainval2017.zip), and [panoptic segmentation task annotations](http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip) and extract them, keeping only the validation-set part, stored in the following directory structure.
```text
└─ COCO
├── val2017
└── annotations
├── instances_val2017.json
├── panoptic_val2017.json
└── person_keypoints_val2017.json
```
The following example uses the `CocoDataset` API to load the COCO2017 dataset, showing the different data obtained when the task is set to object detection (Detection), stuff segmentation (Stuff), keypoint detection (Keypoint), and panoptic segmentation (Panoptic).
```python
import mindspore.dataset as ds
DATA_DIR = "COCO/val2017/"
ANNOTATION_FILE = "COCO/annotations/instances_val2017.json"
KEYPOINT_FILE = "COCO/annotations/person_keypoints_val2017.json"
PANOPTIC_FILE = "COCO/annotations/panoptic_val2017.json"
dataset = ds.CocoDataset(DATA_DIR, annotation_file=ANNOTATION_FILE, task="Detection", num_samples=1)
for data in dataset.create_dict_iterator():
    print("Detection:", data.keys())

dataset = ds.CocoDataset(DATA_DIR, annotation_file=ANNOTATION_FILE, task="Stuff", num_samples=1)
for data in dataset.create_dict_iterator():
    print("Stuff:", data.keys())

dataset = ds.CocoDataset(DATA_DIR, annotation_file=KEYPOINT_FILE, task="Keypoint", num_samples=1)
for data in dataset.create_dict_iterator():
    print("Keypoint:", data.keys())

dataset = ds.CocoDataset(DATA_DIR, annotation_file=PANOPTIC_FILE, task="Panoptic", num_samples=1)
for data in dataset.create_dict_iterator():
    print("Panoptic:", data.keys())
```
Output:
```text
Detection: dict_keys(['image', 'bbox', 'category_id', 'iscrowd'])
Stuff: dict_keys(['image', 'segmentation', 'iscrowd'])
Keypoint: dict_keys(['image', 'keypoints', 'num_keypoints'])
Panoptic: dict_keys(['image', 'bbox', 'category_id', 'iscrowd', 'area'])
```
## Loading Datasets in Specific Formats
The following describes how to load dataset files in several specific formats.
### MindRecord Format
MindRecord is a data format defined by MindSpore; using MindRecord can deliver better performance.
> Read the [Dataset Conversion](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_conversion.html) section to learn how to convert datasets into the MindSpore data format.
Before running this example, download the test data `test_mindrecord.zip` and extract it to the specified location by running the following command:
```
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/test_mindrecord.zip", "./datasets/mindspore_dataset_loading/")
```
The directory structure of the downloaded dataset files is as follows:
```text
./datasets/mindspore_dataset_loading/
├── test.mindrecord
└── test.mindrecord.db
```
The following example uses the `MindDataset` API to load a MindRecord file and shows the column names of the loaded data.
```
import mindspore.dataset as ds
DATA_FILE = ["./datasets/mindspore_dataset_loading/test.mindrecord"]
mindrecord_dataset = ds.MindDataset(DATA_FILE)
for data in mindrecord_dataset.create_dict_iterator(output_numpy=True):
    print(data.keys())
```
### Manifest Format
Manifest is a data format supported by Huawei ModelArts; see the [Manifest documentation](https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0009.html) for details.
For this example, download the test data `test_manifest.zip` and extract it to the specified location by running the following command:
```
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/test_manifest.zip", "./datasets/mindspore_dataset_loading/test_manifest/")
```
The directory structure of the extracted dataset files is as follows:
```text
./datasets/mindspore_dataset_loading/test_manifest/
├── eval
│ ├── 1.JPEG
│ └── 2.JPEG
├── test_manifest.json
└── train
├── 1.JPEG
└── 2.JPEG
```
The following example uses the `ManifestDataset` API to load the Manifest file `test_manifest.json` and shows the labels of the loaded data.
```
import mindspore.dataset as ds
DATA_FILE = "./datasets/mindspore_dataset_loading/test_manifest/test_manifest.json"
manifest_dataset = ds.ManifestDataset(DATA_FILE)
for data in manifest_dataset.create_dict_iterator():
    print(data["label"])
```
### TFRecord Format
TFRecord is a binary data file format defined by TensorFlow.
The following example uses the `TFRecordDataset` API to load TFRecord files and introduces two approaches for setting the dataset schema.
Download the `tfrecord` test data `test_tftext.zip` and extract it to the specified location by running the following command:
```
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/test_tftext.zip", "./datasets/mindspore_dataset_loading/test_tfrecord/")
```
The directory structure of the extracted dataset files is as follows:
```text
./datasets/mindspore_dataset_loading/test_tfrecord/
└── test_tftext.tfrecord
```
1. Pass in the dataset path or a list of TFRecord files (here `test_tftext.tfrecord`) to create a `TFRecordDataset` object.
```
import mindspore.dataset as ds
DATA_FILE = "./datasets/mindspore_dataset_loading/test_tfrecord/test_tftext.tfrecord"
tfrecord_dataset = ds.TFRecordDataset(DATA_FILE)
for tf_data in tfrecord_dataset.create_dict_iterator():
    print(tf_data.keys())
```
2. Users can set the dataset schema and features by writing a schema file or by creating a schema object.
    - Writing a schema file
    Write the dataset schema and features into a schema file in JSON format.
    - `columns`: the column-information field, which must be defined according to the dataset's actual column names. In the example above, the dataset has three columns, named `chinese`, `line`, and `words`.
    Then pass the schema file path in when creating the `TFRecordDataset`.
```
import os
import json
data_json = {
    "columns": {
        "chinese": {
            "type": "uint8",
            "rank": 1
        },
        "line": {
            "type": "int8",
            "rank": 1
        },
        "words": {
            "type": "uint8",
            "rank": 0
        }
    }
}
if not os.path.exists("dataset_schema_path"):
    os.mkdir("dataset_schema_path")
SCHEMA_DIR = "dataset_schema_path/schema.json"
with open(SCHEMA_DIR, "w") as f:
    json.dump(data_json, f, indent=4)
tfrecord_dataset = ds.TFRecordDataset(DATA_FILE, schema=SCHEMA_DIR)
for tf_data in tfrecord_dataset.create_dict_iterator():
    print(tf_data.values())
```
    - Creating a schema object
    Create a schema object, add custom fields to it, and then pass it in when creating the dataset object.
```
from mindspore import dtype as mstype
schema = ds.Schema()
schema.add_column('chinese', de_type=mstype.uint8)
schema.add_column('line', de_type=mstype.uint8)
tfrecord_dataset = ds.TFRecordDataset(DATA_FILE, schema=schema)
for tf_data in tfrecord_dataset.create_dict_iterator():
    print(tf_data)
```
Comparing the file-writing and object-creation steps above:
|Step|chinese|line|words
|:---|:---|:---|:---
| Write file|UInt8 |Int8|UInt8
| Create object|UInt8 |UInt8|
The columns change from `chinese` (UInt8), `line` (Int8), and `words` (UInt8) in the file-writing step to `chinese` (UInt8) and `line` (UInt8) in the object-creation step: the schema object sets the dataset's data types and features, so the columns' data types and features change accordingly.
### NumPy Format
If all data is already in memory, it can be loaded directly with the `NumpySlicesDataset` class.
The following examples show how to load array data, list data, and dict data with `NumpySlicesDataset`.
- Loading NumPy array data
```
import numpy as np
import mindspore.dataset as ds
np.random.seed(6)
features, labels = np.random.sample((4, 2)), np.random.sample((4, 1))
data = (features, labels)
dataset = ds.NumpySlicesDataset(data, column_names=["col1", "col2"], shuffle=False)
for np_arr_data in dataset:
    print(np_arr_data[0], np_arr_data[1])
```
- Loading Python list data
```
import mindspore.dataset as ds
data1 = [[1, 2], [3, 4]]
dataset = ds.NumpySlicesDataset(data1, column_names=["col1"], shuffle=False)
for np_list_data in dataset:
    print(np_list_data[0])
```
- Loading Python dict data
```
import mindspore.dataset as ds
data1 = {"a": [1, 2], "b": [3, 4]}
dataset = ds.NumpySlicesDataset(data1, column_names=["col1", "col2"], shuffle=False)
for np_dic_data in dataset.create_dict_iterator():
    print(np_dic_data)
```
### CSV Format
The following example uses `CSVDataset` to load CSV-format dataset files and shows the `keys` of the loaded data.
Download the test data `test_csv.zip` and extract it to the specified location by running the following command:
```
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/test_csv.zip", "./datasets/mindspore_dataset_loading/test_csv/")
```
The directory structure of the extracted dataset files is as follows:
```text
./datasets/mindspore_dataset_loading/test_csv/
├── test1.csv
└── test2.csv
```
Pass in the dataset path or a list of CSV files. Text-format dataset files are loaded in a similar way to CSV files.
```
import mindspore.dataset as ds
DATA_FILE = ["./datasets/mindspore_dataset_loading/test_csv/test1.csv", "./datasets/mindspore_dataset_loading/test_csv/test2.csv"]
csv_dataset = ds.CSVDataset(DATA_FILE)
for csv_data in csv_dataset.create_dict_iterator(output_numpy=True):
    print(csv_data.keys())
```
## Loading Custom Datasets
For datasets that MindSpore does not currently support loading directly, you can load them in a custom way via the `GeneratorDataset` API, or convert them into the MindRecord format. `GeneratorDataset` accepts a random-access object or an iterable object, which defines how the data is read.
> 1. Compared with an iterable object, a random-access object with a `__getitem__` function needs no index-increment logic, so it is simpler and easier to use.
> 2. In distributed training scenarios the dataset must be sharded. `GeneratorDataset` can take a `sampler` argument at initialization, or the `num_shards` and `shard_id` arguments to specify the number of shards and which shard to take; the latter is easier to use.
The two custom loading methods are shown below; for easy comparison, the generated random data is kept the same.
### Constructing a Random-Access Object
A random-access object has a `__getitem__` function and can access the data at any given index. Overriding `__getitem__` when defining the dataset class makes objects of that class support random access.
```
import numpy as np
import mindspore.dataset as ds
class GetDatasetGenerator:
    def __init__(self):
        np.random.seed(58)
        self.__data = np.random.sample((5, 2))
        self.__label = np.random.sample((5, 1))

    def __getitem__(self, index):
        return (self.__data[index], self.__label[index])

    def __len__(self):
        return len(self.__data)
dataset_generator = GetDatasetGenerator()
dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False)
for data in dataset.create_dict_iterator():
    print(data["data"], data["label"])
```
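What the `num_shards` / `shard_id` arguments mentioned in the note above accomplish can be sketched in plain NumPy: under a common interleaved convention, each of the `num_shards` workers sees every `num_shards`-th sample starting at its own `shard_id` (the exact sampling order MindSpore uses may differ):

```python
import numpy as np

data = np.arange(10)          # 10 samples
num_shards, shard_id = 2, 1   # this worker is shard 1 of 2

# Interleaved sharding: take every num_shards-th sample, offset by shard_id
shard = data[shard_id::num_shards]
print(shard.tolist())  # -> [1, 3, 5, 7, 9]
```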
### Constructing an Iterable Object
An iterable object has `__iter__` and `__next__` functions and returns one piece of data on each call. Override `__iter__` to return the iterator and `__next__` to define how data is loaded, and objects of the class become iterable.
```
import numpy as np
import mindspore.dataset as ds
class IterDatasetGenerator:
    def __init__(self):
        np.random.seed(58)
        self.__index = 0
        self.__data = np.random.sample((5, 2))
        self.__label = np.random.sample((5, 1))

    def __next__(self):
        if self.__index >= len(self.__data):
            raise StopIteration
        else:
            item = (self.__data[self.__index], self.__label[self.__index])
            self.__index += 1
            return item

    def __iter__(self):
        self.__index = 0
        return self

    def __len__(self):
        return len(self.__data)
dataset_generator = IterDatasetGenerator()
dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False)
for data in dataset.create_dict_iterator():
    print(data["data"], data["label"])
```
Note that if the dataset itself is not complex, custom loading can be implemented quickly by defining an iterable (generator) function directly.
```
import numpy as np
import mindspore.dataset as ds
np.random.seed(58)
data = np.random.sample((5, 2))
label = np.random.sample((5, 1))
def GeneratorFunc():
    for i in range(5):
        yield (data[i], label[i])
dataset = ds.GeneratorDataset(GeneratorFunc, ["data", "label"])
for item in dataset.create_dict_iterator():
    print(item["data"], item["label"])
```
```
import numpy as np
import matplotlib.pyplot as plt
datafile = 'data/ex1data1.txt'
cols = np.loadtxt(datafile,delimiter=',',usecols=(0,1),unpack=True) #Read in comma separated data
#Form the usual "X" matrix and "y" vector
X = np.transpose(np.array(cols[:-1]))
y = np.transpose(np.array(cols[-1:]))
m = y.size # number of training examples
#Insert the usual column of 1's into the "X" matrix
X = np.insert(X,0,1,axis=1)
#Plot the data to see what it looks like
plt.figure(figsize=(10,6))
plt.plot(X[:,1],y[:,0],'rx',markersize=10)  # fixed: the data lives in X and y, not an undefined df
plt.grid(True) #Always plot.grid true!
plt.ylabel('Profit in $10,000s')
plt.xlabel('Population of City in 10,000s')
#gradient decent
iterations = 1500
alpha = 0.01
def h(theta,X): #Linear hypothesis function
    return np.dot(X,theta)

def computeCost(mytheta,X,y): #Cost function
    """
    theta_start is an n- dimensional vector of initial theta guess
    X is matrix with n- columns and m- rows
    y is a matrix with m- rows and 1 column
    """
    #note to self: *.shape is (rows, columns)
    return float((1./(2*m)) * np.dot((h(mytheta,X)-y).T,(h(mytheta,X)-y)))
#Test that running computeCost with 0's as theta returns 32.07:
initial_theta = np.zeros((X.shape[1],1)) #(theta is a vector with n rows and 1 columns (if X has n features) )
print(computeCost(initial_theta,X,y))
#Actual gradient descent minimizing routine
def descendGradient(X, theta_start = np.zeros(2)):
    """
    theta_start is an n- dimensional vector of initial theta guess
    X is matrix with n- columns and m- rows
    """
    theta = theta_start
    jvec = [] #Used to plot cost as function of iteration
    thetahistory = [] #Used to visualize the minimization path later on
    for meaninglessvariable in range(iterations):
        tmptheta = theta
        jvec.append(computeCost(theta,X,y))
        # Buggy line
        #thetahistory.append(list(tmptheta))
        # Fixed line
        thetahistory.append(list(theta[:,0]))
        #Simultaneously updating theta values
        for j in range(len(tmptheta)):
            tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(theta,X) - y)*np.array(X[:,j]).reshape(m,1))
        theta = tmptheta
    return theta, thetahistory, jvec
#Actually run gradient descent to get the best-fit theta values
initial_theta = np.zeros((X.shape[1],1))
theta, thetahistory, jvec = descendGradient(X,initial_theta)
#Plot the convergence of the cost function
def plotConvergence(jvec):
    plt.figure(figsize=(10,6))
    plt.plot(range(len(jvec)),jvec,'bo')
    plt.grid(True)
    plt.title("Convergence of Cost Function")
    plt.xlabel("Iteration number")
    plt.ylabel("Cost function")
    dummy = plt.xlim([-0.05*iterations,1.05*iterations])
    #dummy = plt.ylim([4,8])
plotConvergence(jvec)
dummy = plt.ylim([4,7])
#Plot the line on top of the data to ensure it looks correct
def myfit(xval):
    return theta[0] + theta[1]*xval
plt.figure(figsize=(10,6))
plt.plot(X[:,1],y[:,0],'rx',markersize=10,label='Training Data')
plt.plot(X[:,1],myfit(X[:,1]),'b-',label = 'Hypothesis: h(x) = %0.2f + %0.2fx'%(theta[0],theta[1]))
plt.grid(True) #Always plot.grid true!
plt.ylabel('Profit in $10,000s')
plt.xlabel('Population of City in 10,000s')
plt.legend()
#Import necessary matplotlib tools for 3d plots
from mpl_toolkits.mplot3d import axes3d, Axes3D
from matplotlib import cm
import itertools
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
xvals = np.arange(-10,10,.5)
yvals = np.arange(-1,4,.1)
myxs, myys, myzs = [], [], []
for david in xvals:
    for kaleko in yvals:
        myxs.append(david)
        myys.append(kaleko)
        myzs.append(computeCost(np.array([[david], [kaleko]]),X,y))
scat = ax.scatter(myxs,myys,myzs,c=np.abs(myzs),cmap=plt.get_cmap('YlOrRd'))
plt.xlabel(r'$\theta_0$',fontsize=30)
plt.ylabel(r'$\theta_1$',fontsize=30)
plt.title('Cost (Minimization Path Shown in Blue)',fontsize=30)
plt.plot([x[0] for x in thetahistory],[x[1] for x in thetahistory],jvec,'bo-')
plt.show()
datafile = 'data/ex1data2.txt'
#Read into the data file
cols = np.loadtxt(datafile,delimiter=',',usecols=(0,1,2),unpack=True) #Read in comma separated data
#Form the usual "X" matrix and "y" vector
X = np.transpose(np.array(cols[:-1]))
y = np.transpose(np.array(cols[-1:]))
m = y.size # number of training examples
#Insert the usual column of 1's into the "X" matrix
X = np.insert(X,0,1,axis=1)
#Quick visualize data
plt.grid(True)
plt.xlim([-100,5000])
dummy = plt.hist(X[:,0],label = 'col1')
dummy = plt.hist(X[:,1],label = 'col2')
dummy = plt.hist(X[:,2],label = 'col3')
plt.title('Clearly we need feature normalization.')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
#Feature normalizing the columns (subtract mean, divide by standard deviation)
#Store the mean and std for later use
#Note don't modify the original X matrix, use a copy
stored_feature_means, stored_feature_stds = [], []
Xnorm = X.copy()
for icol in range(Xnorm.shape[1]):
    stored_feature_means.append(np.mean(Xnorm[:,icol]))
    stored_feature_stds.append(np.std(Xnorm[:,icol]))
    #Skip the first column
    if not icol: continue
    #Faster to not recompute the mean and std again, just used stored values
    Xnorm[:,icol] = (Xnorm[:,icol] - stored_feature_means[-1])/stored_feature_stds[-1]
#Quick visualize the feature-normalized data
plt.grid(True)
plt.xlim([-5,5])
dummy = plt.hist(Xnorm[:,0],label = 'col1')
dummy = plt.hist(Xnorm[:,1],label = 'col2')
dummy = plt.hist(Xnorm[:,2],label = 'col3')
plt.title('Feature Normalization Accomplished')
plt.xlabel('Column Value')
plt.ylabel('Counts')
dummy = plt.legend()
#Run gradient descent with multiple variables, initial theta still set to zeros
#(Note! This doesn't work unless we feature normalize! "overflow encountered in multiply")
initial_theta = np.zeros((Xnorm.shape[1],1))
theta, thetahistory, jvec = descendGradient(Xnorm,initial_theta)
#Plot convergence of cost function:
plotConvergence(jvec)
#print "Final result theta parameters: \n",theta
print("Check of result: What is price of house with 1650 square feet and 3 bedrooms?")
ytest = np.array([1650.,3.])
#To "undo" feature normalization, we "undo" 1650 and 3, then plug it into our hypothesis
ytestscaled = [(ytest[x]-stored_feature_means[x+1])/stored_feature_stds[x+1] for x in range(len(ytest))]
ytestscaled.insert(0,1)
print("$%0.2f" % float(h(theta,ytestscaled)))
from numpy.linalg import inv
#Implementation of normal equation to find analytic solution to linear regression
def normEqtn(X,y):
    #restheta = np.zeros((X.shape[1],1))
    return np.dot(np.dot(inv(np.dot(X.T,X)),X.T),y)
print ("Normal equation prediction for price of house with 1650 square feet and 3 bedrooms")
print ("$%0.2f" % float(h(normEqtn(X,y),[1,1650.,3])))
```
# Welcome to Jupyter!
With Jupyter notebooks you can write and execute code, annotate it with Markdown, and use powerful visualization tools, all in one document.
## Running code
Code cells can be executed in sequence by pressing Shift-ENTER. Try it now.
```
import math
from matplotlib import pyplot as plt
a=1
b=2
a+b
```
## Visualisations
Many Python visualization libraries, matplotlib for example, integrate seamlessly with Jupyter. Visualizations will appear directly in the notebook.
```
def display_sinusoid():
    X = range(180)
    Y = [math.sin(x/10.0) for x in X]
    plt.plot(X, Y)
display_sinusoid()
```
## Tensorflow environment and accelerators
On Google's AI Platform notebooks, Tensorflow support is built-in and powerful accelerators are supported out of the box. Run this cell to test if your current notebook instance has Tensorflow and an accelerator (in some codelabs, you will add an accelerator later).
```
import tensorflow as tf
from tensorflow.python.client import device_lib
print("Tensorflow version " + tf.__version__)
try: # detect TPUs
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError: # detect GPUs
strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines (works on CPU too)
#strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
#strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines
print("Number of accelerators: ", strategy.num_replicas_in_sync)
```
## Restarting
If you get stuck, the Jupyter environment can be restarted from the menu Kernel > Restart Kernel.
You can also run the entire notebook using Run > Run All Cells. Try it now.
## License
---
author: Martin Gorner<br>
twitter: @martin_gorner
---
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
This is not an official Google product but sample code provided for an educational purpose
| github_jupyter |
# Using Deep Learning for Medical Imaging
In the United States, it takes an average of [1 to 5 days](https://www.ncbi.nlm.nih.gov/pubmed/29132998) to receive a diagnosis after a chest x-ray. This long wait has been shown to increase anxiety in 45% of patients. In addition, impoverished countries usually lack personnel with the technical knowledge to read chest x-rays, assuming an x-ray machine is even available. In such cases, a [short term solution](https://www.theatlantic.com/health/archive/2016/09/radiology-gap/501803/) has been to upload the images online and have volunteers read the images; volunteers diagnose an average of 4000 CT scans per week. This solution works somewhat, but many people travel for days to a clinic and cannot keep traveling back and forth for a diagnosis or treatment, nor can those with more life-threatening injuries wait days for a diagnosis.
Clearly, there is a shortage of trained physicians/radiologists for the amount of care needed. To help reduce diagnosis time, we can turn to deep learning. Specifically, I will be using 3 pre-trained models (VGG19, MobileNet, and ResNet50) to apply transfer learning to chest x-rays. The largest database of chest x-ray images is compiled by the NIH Clinical Center and can be found [here](https://www.nih.gov/news-events/news-releases/nih-clinical-center-provides-one-largest-publicly-available-chest-x-ray-datasets-scientific-community). The database has 112,120 X-ray images from over 30,000 patients. There are 14 different pathologies/conditions and a 'no findings' label, for a total of 15 different labels. Due to time constraints, this notebook will go through the steps as I used transfer learning to train two of these labels: pneumonia and effusion.
## 1 - Retrieving the Data
For this project, I used tensorflow and Keras for my deep learning library. Unfortunately, I ran into reproducibility problems, which seems to be a common problem (see [machinelearningmastery](https://machinelearningmastery.com/reproducible-results-neural-networks-keras/) and this [StackOverflow question](https://stackoverflow.com/questions/48631576/reproducible-results-using-keras-with-tensorflow-backend)), which is why I set random seeds for the Python hash seed, numpy, and Python's random module in my import section.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import os
import cv2
import random
from PIL import Image
import skimage
from skimage import io
from skimage.transform import resize
from numpy import expand_dims
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score, roc_curve, confusion_matrix, roc_auc_score
import tensorflow as tf
os.environ['PYTHONHASHSEED']='0'
np.random.seed(42)
random.seed(42)
import keras
from keras import backend as K
# import keras.backend.tensorflow_backend as K
tf.random.set_seed(42)
from keras.preprocessing.image import load_img, ImageDataGenerator, img_to_array
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, GlobalAveragePooling2D, BatchNormalization
from keras.models import Model
from keras.optimizers import RMSprop, Adam
from keras.applications.vgg19 import preprocess_input, decode_predictions, VGG19
from keras.applications.mobilenet import MobileNet
from keras.applications.resnet import ResNet50
dirpath = 'all_images/'
alldata_df = pd.read_csv('./Data_Entry_2017.csv')
```
## 2 - Data Exploration
The master dataframe below shows all the info we know regarding each image, including the image filename, label(s), patient information, and image height and width.
```
alldata_df.head()
```
### 2.1 - Pathologies
One thing to notice is that each image can have multiple labels. To isolate each individual pathology, I will create a column for each pathology and use 0 and 1 to indicate if the image has that pathology or not respectively.
```
alldata_df['Labels List'] = alldata_df['Finding Labels'].apply(lambda x: x.split('|'))
pathology_lst = ['Cardiomegaly', 'No Finding', 'Hernia', 'Infiltration', 'Nodule',
'Emphysema', 'Effusion', 'Atelectasis', 'Pleural_Thickening',
'Pneumothorax', 'Mass', 'Fibrosis', 'Consolidation', 'Edema',
'Pneumonia']
def get_label(col, pathology):
if pathology in col:
return 1
else:
return 0
for pathology in pathology_lst:
alldata_df[pathology] = alldata_df['Labels List'].apply(lambda x: get_label(x, pathology))
```
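As a design note, the same binarization can be done in a single call with pandas' built-in indicator helper. A minimal sketch with toy labels (in the notebook this would be applied to `alldata_df['Finding Labels']`):

```
import pandas as pd

# str.get_dummies splits on the separator and returns one 0/1 column per label
labels = pd.Series(['Effusion|Pneumonia', 'No Finding', 'Pneumonia'])
dummies = labels.str.get_dummies(sep='|')
print(dummies['Pneumonia'].tolist())  # [1, 0, 1]
```

This avoids the explicit `get_label` loop, at the cost of less control over which pathology columns are created.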
Below is a table of the percentage of each type of label that makes up the dataset. Not surprisingly, the 'no findings' label makes up the majority of the images, at about 50%. The two pathologies I'd like to train are pneumonia and effusion. Pneumonia, a pathology that most of us have heard of, takes up about 1.2% of the dataset whereas effusion is ~10%. This corresponds to a total of 1431 and 13317 images for pneumonia and effusion respectively. Due to the small number of images available for pneumonia, I suspect it will be difficult to get good results.
```
alldata_df[pathology_lst].sum()/alldata_df.shape[0]*100
```
### 2.2 - Image Data
There are two things to explore with the image data before diving into creating models. It's useful to know the heights and widths of the images, especially since the models I'm using expect a dimension of 224 x 224 pixels.
The distributions of the heights and widths are shown in the histograms below. Most image heights cluster around 2000 pixels, with a good number hovering at 2500 and 3000, and a minimum of ~970 pixels. The widths also show 3 distinct peaks, with the largest at 2500, significant peaks at 2000 and 3000 pixels, and a minimum of ~1140 pixels. All the images have dimensions greater than 224.
```
fig, (axis1, axis2) = plt.subplots(1, 2, figsize=(15, 4))
# Note: the CSV header 'OriginalImage[Width,Height]' is parsed into two columns named 'OriginalImage[Width' and 'Height]'
sns.distplot(alldata_df['Height]'], ax = axis1)
sns.distplot(alldata_df['OriginalImage[Width'], ax = axis2)
axis1.set_title('Distribution of Image Height')
axis2.set_title('Distribution of Image Widths')
for ax in [axis1, axis2]:
ax.set_xlabel('Pixels')
ax.set_ylabel('Density')
plt.tight_layout()
```
Another column is the view position. There are two unique values for this column: PA and AP. These represent whether the X-rays pass from the back to the front or vice versa. Thus, I'd like to find out if there are any stark differences in the images between the two view positions. Most of the images are in the PA viewing position. This is preferred because the AP position creates 'shadows', but 'AP' images are taken because the patient is unable to stand up for the 'PA' position and needs to lie down on a table.
```
alldata_df['View Position'].value_counts()/alldata_df.shape[0]*100
def get_images(filename_df, target_pathology, num_images = 500, imageSize = 224):
X = []
sample_df = filename_df.sample(n = num_images)
sample_df.reset_index(drop = True, inplace = True)
truncated_image_filename_lst = sample_df['Image Index'].values
full_image_filename_lst = []
for truncated_filename in truncated_image_filename_lst:
full_image_filename_lst.append(find_file(truncated_filename))
for i, file in enumerate(full_image_filename_lst):
img_file = cv2.imread(file)
img_file = cv2.resize(img_file, (imageSize, imageSize), interpolation = cv2.INTER_CUBIC)
img_arr = np.asarray(img_file)
if img_arr.shape == (224, 224, 3):
X.append(img_arr)
else:
sample_df.drop(i, inplace = True)
y = sample_df[target_pathology]
return np.array(X), np.array(y)
# images extracted from 12 files,
image_dir = sorted([dir for dir in os.listdir(dirpath) if 'images' in dir ])
def find_file(filename):
for dirfile in image_dir:
if filename in os.listdir(dirpath + dirfile + '/images'):
return dirpath + dirfile + '/images/' + filename
pa_images, _ = get_images(alldata_df[alldata_df['View Position']=='PA'], 'No Finding', num_images = 9, imageSize = 224)
ap_images, _ = get_images(alldata_df[(alldata_df['View Position']=='AP') & (alldata_df['No Finding']==1)], 'No Finding', num_images = 9, imageSize = 224)
```
Below I've plotted 16 images total. To make sure there aren't any differences due to pathologies, I only took from the 'no findings' label. The first 8 images are X-rays in the PA position (the majority), and the latter 8 images are in the AP position.
The PA images seem to generally have a white mass near the bottom, although how much white varies from image to image. In addition, there is a small protrusion to the left of the spine, and a larger protrusion to the right of the spine.
The AP images are similar, but much blurrier, possibly due to the shadows mentioned before.
```
plt.figure(figsize = (13, 6))
print('Images for PA view')
for i in range(8):
plt.subplot(2, 4, i+1)
tmp1 = pa_images[i].astype(np.uint8)
plt.imshow(tmp1)
plt.tight_layout()
plt.figure(figsize = (13, 6))
print('Images for AP view')
for i in range(8):
plt.subplot(2, 4, i+1)
tmp1 = ap_images[i].astype(np.uint8)
plt.imshow(tmp1)
plt.tight_layout()
```
I looked at the percentages of AP vs PA view positions for 'pneumonia' and 'effusion', and threw in the overall percentage and 'no finding' for comparison. The overall and no findings are pretty close, with 'PA' positions at 60-65%. However, the 'PA' positions for 'pneumonia' and 'effusion' are lower, at 45-50%. Although removing 'AP' images might improve the models, for pneumonia it would leave too few images to train on.
```
pathology_percent_lst = ['No Finding', 'Pneumonia', 'Effusion']
pa_percent_lst = [alldata_df[(alldata_df[path]==1) & (alldata_df['View Position'] == 'PA')].shape[0]/alldata_df[(alldata_df[path]==1)].shape[0]*100 for path in pathology_percent_lst]
ap_percent_lst = [alldata_df[(alldata_df[path]==1) & (alldata_df['View Position'] == 'AP')].shape[0]/alldata_df[(alldata_df[path]==1)].shape[0]*100 for path in pathology_percent_lst]
pathology_percent_lst.insert(0, 'Overall')
pa_percent_lst.insert(0, alldata_df[alldata_df['View Position']=='PA'].shape[0]/alldata_df.shape[0]*100)
ap_percent_lst.insert(0, alldata_df[alldata_df['View Position']=='AP'].shape[0]/alldata_df.shape[0]*100)
ap_pa_percent_df = pd.DataFrame(np.array([pa_percent_lst, ap_percent_lst]),
columns = pathology_percent_lst,
index = ['PA', 'AP'])
ap_pa_percent_df
```
### 2.3 - Splitting into Training, Validation, and Test Sets
Lastly, the original [paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.pdf) split the images into training/validation and test sets and released that information to the public so we can easily compare our results to theirs. I have split my own data into the same training/validation and test sets as the original authors.
```
train_val_filenames = pd.read_csv('./train_val_list.txt', sep=" ", header=None)
test_filenames = pd.read_csv('./test_list.txt', sep=" ", header=None)
train_val_filenames.shape[0] + test_filenames.shape[0]
train_val_df = alldata_df[alldata_df['Image Index'].isin(train_val_filenames.values.flatten())]
test_df = alldata_df[alldata_df['Image Index'].isin(test_filenames.values.flatten())]
```
## 3 - Pneumonia
First, I will be performing transfer learning on pneumonia images. In order, the three models I will be using are VGG19, MobileNet, and ResNet50.
### 3.1 - VGG19 Model
First I must preprocess the images. The VGG model expects an image size of 224 x 224 pixels.
```
imgSize = 224
```
Note: After writing this notebook, I realized my smaller test set should have the same distribution of pathologies as the original test set. This will be corrected the next time I improve upon this project.
```
# Get test images
Xtest_pneu, ytest_pneu = get_images(test_df[test_df['Pneumonia']==1],
'Pneumonia',
num_images = test_df[test_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
Xtest_notpneu, ytest_notpneu = get_images(test_df[test_df['Pneumonia']==0],
'Pneumonia',
num_images = test_df[test_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
X_test_pneu = np.concatenate((Xtest_pneu, Xtest_notpneu), axis = 0)
y_test_pneu = np.concatenate((ytest_pneu, ytest_notpneu))
```
I use sklearn's train_test_split function to shuffle the test set. Unfortunately that means I lose 1% of the images, leaving me with 1099 images total in the test set.
```
_, X_test_pneu, _, y_test_pneu = train_test_split(X_test_pneu,
y_test_pneu,
test_size=0.99,
random_state=42,
stratify = y_test_pneu)
```
The training and validation sets have a total of 1752 and 351 images respectively.
```
# get training images and split into validation set
Xtrain_pneu, ytrain_pneu = get_images(train_val_df[train_val_df['Pneumonia']==1],
'Pneumonia',
num_images = train_val_df[train_val_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
Xtrain_notpneu, ytrain_notpneu = get_images(train_val_df[train_val_df['Pneumonia']==0],
'Pneumonia',
num_images = train_val_df[train_val_df['Pneumonia']==1].shape[0],
imageSize = imgSize)
Xtrain_pneu = np.concatenate((Xtrain_pneu, Xtrain_notpneu), axis = 0)
ytrain_pneu = np.concatenate((ytrain_pneu, ytrain_notpneu))
X_train_pneu, X_val_pneu, y_train_pneu, y_val_pneu = train_test_split(Xtrain_pneu,
ytrain_pneu,
test_size=0.2,
random_state=42,
stratify = ytrain_pneu)
```
Next, I need to convert the images into a format accepted by the VGG model.
```
def convert_X_data(Xtrain, Xval, Xtest, imageSize = 224, num_classes = 2):
if K.image_data_format() == 'channels_first':
Xtrain_model = Xtrain.reshape(Xtrain.shape[0], 3, imageSize, imageSize)
Xval_model = Xval.reshape(Xval.shape[0], 3, imageSize, imageSize)
Xtest_model = Xtest.reshape(Xtest.shape[0], 3, imageSize, imageSize)
else:
Xtrain_model = Xtrain.reshape(Xtrain.shape[0], imageSize, imageSize, 3)
Xval_model = Xval.reshape(Xval.shape[0], imageSize, imageSize, 3)
Xtest_model = Xtest.reshape(Xtest.shape[0], imageSize, imageSize, 3)
# input_shape = (img_rows, img_cols, 1)
Xtrain_model = Xtrain_model.astype('float32')
Xval_model = Xval_model.astype('float32')
Xtest_model = Xtest_model.astype('float32')
Xtrain_model = preprocess_input(Xtrain_model)
Xval_model = preprocess_input(Xval_model)
Xtest_model = preprocess_input(Xtest_model)
return Xtrain_model, Xval_model, Xtest_model
def convert_y_data(ytrain, yval, ytest, num_classes = 2):
ytrain_model = keras.utils.to_categorical(ytrain, num_classes)
yval_model = keras.utils.to_categorical(yval, num_classes)
ytest_model = keras.utils.to_categorical(ytest, num_classes)
return ytrain_model, yval_model, ytest_model
X_train_pneu_model, X_val_pneu_model, X_test_pneu_model = convert_X_data(X_train_pneu,
X_val_pneu,
X_test_pneu,
imageSize = imgSize,
num_classes = 2)
y_train_pneu_model, y_val_pneu_model, y_test_pneu_model = convert_y_data(y_train_pneu,
y_val_pneu,
y_test_pneu,
num_classes = 2)
```
Lastly, Keras only has built-in metrics for accuracy and loss. I am interested in accuracy, precision, recall, and f1 scores, however, so I will write my own function to compute these metrics.
The two metrics I'm most concerned about are recall and f1-score. I am interested in recall because for a medical dataset, I believe it is best to reduce false negatives. However, I know there are cases where the model can predict all 0s or all 1s, which would skew precision and recall. As such, it is important to look at f1-scores as well.
```
def get_metrics(model, xtest, ytrue, verbose = True):
y_pred_probs = model.predict(xtest)
try:
y_pred_classes = model.predict_classes(xtest)
except AttributeError:
y_pred_classes = [np.argmax(i) for i in y_pred_probs]
y_pred_probs = y_pred_probs[:, 0]
try:
y_pred_classes = y_pred_classes[:, 0]
except: #IndexError:
pass
if verbose:
print('Accuracy Score: {}'.format(accuracy_score(ytrue, y_pred_classes)))
print('Precision Score: {}'.format(precision_score(ytrue, y_pred_classes)))
print('Recall: {}'.format(recall_score(ytrue, y_pred_classes)))
print('F1 Score: {}'.format(f1_score(ytrue, y_pred_classes)))
print('Confusion matrix: \n{}'.format(confusion_matrix(ytrue, y_pred_classes)))
return accuracy_score(ytrue, y_pred_classes), precision_score(ytrue, y_pred_classes), recall_score(ytrue, y_pred_classes), f1_score(ytrue, y_pred_classes)
```
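The degenerate all-0s/all-1s scenario mentioned above can be made concrete with a tiny example (toy labels, not drawn from the dataset): a classifier that flags every image as pneumonia gets a perfect recall while remaining useless, which is exactly why the f1-score matters.

```
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Degenerate classifier that predicts 'pneumonia' (1) for every image,
# evaluated on a balanced toy label set.
y_true = np.array([0, 0, 1, 1])
y_pred = np.ones_like(y_true)

acc = accuracy_score(y_true, y_pred)    # 0.5  -- chance level
prec = precision_score(y_true, y_pred)  # 0.5  -- half the flagged cases are wrong
rec = recall_score(y_true, y_pred)      # 1.0  -- 'perfect' recall despite a useless model
f1 = f1_score(y_true, y_pred)           # ~0.667 -- balances precision and recall
```

Recall alone looks perfect here; the f1-score of ~0.67 exposes the inflated positives.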
#### 3.1.1 - VGG Baseline with Pneumonia Images
The first step is to establish what the baseline metrics will be for the VGG model. To do this, first I import the layers from the VGG model. Since this model was trained on the ImageNet dataset, it expects to predict from 1000 classes. I replace this last layer with a dense layer with 2 classes and softmax activation, and compile the model with keras's categorical cross entropy loss function. Lastly, due to the reproducibility issues mentioned in the beginning, I run the model 3 times and average the metrics and show the standard deviation.
For the baseline metric, the model has an accuracy of ~50%, with a precision of ~0.55 and recall and f1 score of ~0.23, so this is just about as good as flipping a coin given that the test set is balanced. In addition, the standard deviations for the recall and precision scores are just about as big as the averaged scores themselves, so the reliability of this baseline model is very poor.
```
num_classes = 2
vgg_model = VGG19()
vgg_baseline_acc_scores = []
vgg_baseline_prec_scores = []
vgg_baseline_recall_scores = []
vgg_baseline_f1_scores = []
for i in range(3):
vgg_model_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_baseline.add(layer)
vgg_model_baseline.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_baseline.layers:
layer.trainable = False
vgg_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_baseline_history = vgg_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model, y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_baseline, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_baseline_acc_scores.append(acc)
vgg_baseline_prec_scores.append(prec)
vgg_baseline_recall_scores.append(recall)
vgg_baseline_f1_scores.append(f1)
print('Accuracy of VGG baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_baseline_acc_scores) * 100, np.std(vgg_baseline_acc_scores)*100))
print('Precision of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_prec_scores), np.std(vgg_baseline_prec_scores)))
print('Recall of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_recall_scores), np.std(vgg_baseline_recall_scores)))
print('f1 score of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_f1_scores), np.std(vgg_baseline_f1_scores)))
```
#### 3.1.2 - Training Layers with VGG19 and Pneumonia Images
A way to fine tune and hopefully improve upon this baseline model is to unfreeze certain layers. That is, the weights imported by the VGG (or any) model are optimized for the ImageNet dataset. Unfreezing layers allows the model to learn the features of your current dataset. Typically, the first layers of the model are kept frozen as they extract big/common features, while the last layers extract features that are more specific to your dataset.
VGG19 has 26 layers, and I found that it is optimal to train the last 4 layers, leaving the first 22 layers frozen. With this model, the accuracy rises slightly to ~53% while the precision stays the same. Recall and f1-score rise to over 0.60. In addition, the standard deviations have been drastically reduced.
```
vgg_frozen_acc_scores = []
vgg_frozen_prec_scores = []
vgg_frozen_recall_scores = []
vgg_frozen_f1_scores = []
for i in range(3):
vgg_model_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_frozen.add(layer)
vgg_model_frozen.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_frozen.layers[:-4]:
layer.trainable = False
vgg_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_frozen_history = vgg_model_frozen.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model, y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_frozen, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_frozen_acc_scores.append(acc)
vgg_frozen_prec_scores.append(prec)
vgg_frozen_recall_scores.append(recall)
vgg_frozen_f1_scores.append(f1)
print('Accuracy of VGG model with last 4 layers trained: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_frozen_acc_scores) * 100, np.std(vgg_frozen_acc_scores)*100))
print('Precision of VGG model with last 4 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_prec_scores), np.std(vgg_frozen_prec_scores)))
print('Recall of VGG model with last 4 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_recall_scores), np.std(vgg_frozen_recall_scores)))
print('f1 score of VGG model with last 4 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_f1_scores), np.std(vgg_frozen_f1_scores)))
```
#### 3.1.3 - Adding Augmented Images to VGG19 Model with Pneumonia Images
So far, I've been able to get an ok f1 score, but an accuracy of 55% concerns me because that's barely doing better than randomly guessing if an image shows pneumonia or not. I believe part of that is the low number of images in the training set, with a total of 1752 images. Normally, a deep learning model will want hundreds of thousands of images. To help with this, I can generate images from the ones I already have by altering them. The alterations are defined in the Image Data Generator function below, but to summarize, the image generator can zoom, shift the image horizontally, or shift the image vertically. If the image is shifted, black pixels fill in the empty space.
As usual, I want to get a baseline for this augmented images model. Unfortunately, the metrics seem to be similar to the non-augmented baseline scores.
```
gen = ImageDataGenerator(zoom_range=0.05,
height_shift_range=0.05,
width_shift_range=0.05,
fill_mode = 'constant',
cval = 0)
vgg_aug_baseline_acc_scores = []
vgg_aug_baseline_prec_scores = []
vgg_aug_baseline_recall_scores = []
vgg_aug_baseline_f1_scores = []
for i in range(3):
vgg_model_aug_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_baseline.add(layer)
vgg_model_aug_baseline.add(Dense(num_classes, activation = 'softmax'))
for layer in vgg_model_aug_baseline.layers:
layer.trainable = False
vgg_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_baseline_history = vgg_model_aug_baseline.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_baseline, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_aug_baseline_acc_scores.append(acc)
vgg_aug_baseline_prec_scores.append(prec)
vgg_aug_baseline_recall_scores.append(recall)
vgg_aug_baseline_f1_scores.append(f1)
print('Accuracy of VGG augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_baseline_acc_scores) * 100, np.std(vgg_aug_baseline_acc_scores)*100))
print('Precision of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_prec_scores), np.std(vgg_aug_baseline_prec_scores)))
print('Recall of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_recall_scores), np.std(vgg_aug_baseline_recall_scores)))
print('f1 score of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_f1_scores), np.std(vgg_aug_baseline_f1_scores)))
```
Again, I'd like to fine tune the augmented image model by letting some layers be trainable. I found the optimal number of trainable layers to be 5. The accuracy of the augmented model with trainable layers is slightly higher, but the standard deviation is also larger, and the precision, recall, and f1-score are either the same or lower than the baseline model with trainable layers.
```
vgg_aug_frozen_acc_scores = []
vgg_aug_frozen_prec_scores = []
vgg_aug_frozen_recall_scores = []
vgg_aug_frozen_f1_scores = []
for i in range(3):
vgg_model_aug_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_frozen.add(layer)
vgg_model_aug_frozen.add(Dense(num_classes, activation = 'softmax'))
for layer in vgg_model_aug_frozen.layers[:-5]:
layer.trainable = False
vgg_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_frozen_history = vgg_model_aug_frozen.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_frozen, X_test_pneu_model, y_test_pneu, verbose = True)
vgg_aug_frozen_acc_scores.append(acc)
vgg_aug_frozen_prec_scores.append(prec)
vgg_aug_frozen_recall_scores.append(recall)
vgg_aug_frozen_f1_scores.append(f1)
print('Accuracy of VGG augmented model with last 5 layers trained: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_frozen_acc_scores) * 100, np.std(vgg_aug_frozen_acc_scores)*100))
print('Precision of VGG augmented model with last 5 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_prec_scores), np.std(vgg_aug_frozen_prec_scores)))
print('Recall of VGG augmented model with last 5 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_recall_scores), np.std(vgg_aug_frozen_recall_scores)))
print('f1 score of VGG augmented model with last 5 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_f1_scores), np.std(vgg_aug_frozen_f1_scores)))
```
#### 3.1.4 - VGG19 Summary for Pneumonia Images
A table that summarizes the metrics of the VGG model for pneumonia is below. Overall, it shows that the best VGG model comes from the baseline model while training the last 4 layers, as it has the highest recall and f1-score.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| VGG19 baseline | 49.50 +/- 1.29%| 0.545 +/- 0.055 | 0.236 +/- 0.279| 0.229 +/- 0.226 |
| VGG19 baseline with training | 53.75 +/- 0.43% | 0.525 +/- 0.006 | 0.800 +/- 0.105 | 0.631 +/- 0.031 |
| VGG19 augmented baseline | 51.08 +/- 0.79% | 0.548 +/- 0.022 | 0.245 +/- 0.286 | 0.244 +/- 0.237 |
| VGG19 augmented with training | 54.14 +/- 1.30% | 0.538 +/- 0.004 | 0.573 +/- 0.141 | 0.546 +/- 0.074 |
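The 'mean +/- std' cells in the table above are produced by the print statements in the cells above; a hypothetical helper (`mean_std` and `toy_scores` are illustrative names, standing in for the collected score lists such as `vgg_frozen_recall_scores`) could centralize that formatting:

```
import numpy as np

# Format a list of per-run scores as 'mean +/- std', optionally as a percentage
def mean_std(scores, pct=False):
    m, s = np.mean(scores), np.std(scores)
    if pct:
        return '{:0.2f} +/- {:0.2f}%'.format(m * 100, s * 100)
    return '{:0.3f} +/- {:0.3f}'.format(m, s)

toy_scores = [0.70, 0.80, 0.90]
row = mean_std(toy_scores)
print(row)  # 0.800 +/- 0.082
```

Using one helper for every model keeps the summary rows consistent and avoids repeating the format strings in each results cell.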
### 3.2 - MobileNet Model
The next model to try is the MobileNet model. This model also expects images of 224 x 224, so the training and test sets are already set up.
#### 3.2.1 - MobileNet Baseline with Pneumonia Images
The rest of the notebook follows the same steps as the VGG model, so here I'll establish the baseline metrics for MobileNet. Here I use the argument include_top = False. Compared to the original MobileNet model, however, this removes the last two layers of the model instead of just one. To make up for this, I have to add the GlobalAveragePooling2D layer back in.
For MobileNet's baseline, accuracy is again at ~50% whereas the precision is slightly worse and the recall and f1 score are slightly better than VGG's baseline. All of these metrics are lower than the best VGG model.
```
mobilenet_model = MobileNet(include_top=False, input_shape=(imgSize, imgSize, 3))
mobilenet_baseline_acc_scores = []
mobilenet_baseline_prec_scores = []
mobilenet_baseline_recall_scores = []
mobilenet_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_baseline.layers:
layer.trainable = False
mobilenet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_baseline_history = mobilenet_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_baseline,
X_test_pneu_model,
y_test_pneu)
mobilenet_baseline_acc_scores.append(acc)
mobilenet_baseline_prec_scores.append(prec)
mobilenet_baseline_recall_scores.append(recall)
mobilenet_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_baseline_acc_scores) * 100, np.std(mobilenet_baseline_acc_scores)*100))
print('Precision of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_prec_scores), np.std(mobilenet_baseline_prec_scores)))
print('Recall of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_recall_scores), np.std(mobilenet_baseline_recall_scores)))
print('f1 score of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_f1_scores), np.std(mobilenet_baseline_f1_scores)))
```
#### 3.2.2 - MobileNet Training Layers with Pneumonia Images
There are ~85 layers in the MobileNet model, so it's more difficult to find the sweet spot of how many layers to train compared to VGG. Due to time, I trained up to the last 25 layers, and found that the optimal number of trainable layers is 21. Accuracy is slightly higher, at 53%, but the precision has stayed the same as MobileNet's baseline. Its recall and f1-scores, however, have improved so that they are at least 0.6, and the standard deviations are generally more stable.
```
mobilenet_frozen_acc_scores = []
mobilenet_frozen_prec_scores = []
mobilenet_frozen_recall_scores = []
mobilenet_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_baseline.layers[:-21]:
layer.trainable = False
mobilenet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_baseline_history = mobilenet_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_baseline,
X_test_pneu_model,
y_test_pneu)
mobilenet_frozen_acc_scores.append(acc)
mobilenet_frozen_prec_scores.append(prec)
mobilenet_frozen_recall_scores.append(recall)
mobilenet_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet model with last 21 layers trained: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_frozen_acc_scores) * 100, np.std(mobilenet_frozen_acc_scores)*100))
print('Precision of MobileNet model with last 21 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_prec_scores), np.std(mobilenet_frozen_prec_scores)))
print('Recall of MobileNet model with last 21 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_recall_scores), np.std(mobilenet_frozen_recall_scores)))
print('f1 score of MobileNet model with last 21 layers trained: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_frozen_f1_scores), np.std(mobilenet_frozen_f1_scores)))
```
#### 3.2.3 - Adding Augmented Images to MobileNet with Pneumonia Images
The metrics for the baseline augmented MobileNet model are similar to those of the non-augmented model. One thing to note from the confusion matrix is that the model tends to classify nearly everything as a single class, either 0 or 1 (not pneumonia vs. pneumonia).
```
mobilenet_aug_baseline_acc_scores = []
mobilenet_aug_baseline_prec_scores = []
mobilenet_aug_baseline_recall_scores = []
mobilenet_aug_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_aug_baseline.layers:
layer.trainable = False
mobilenet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_baseline_history = mobilenet_model_aug_baseline.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_aug_baseline,
X_test_pneu_model,
y_test_pneu)
mobilenet_aug_baseline_acc_scores.append(acc)
mobilenet_aug_baseline_prec_scores.append(prec)
mobilenet_aug_baseline_recall_scores.append(recall)
mobilenet_aug_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_baseline_acc_scores) * 100, np.std(mobilenet_aug_baseline_acc_scores)*100))
print('Precision of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_prec_scores), np.std(mobilenet_aug_baseline_prec_scores)))
print('Recall of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_recall_scores), np.std(mobilenet_aug_baseline_recall_scores)))
print('f1 score of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_baseline_f1_scores), np.std(mobilenet_aug_baseline_f1_scores)))
```
Fine-tuning the number of trained layers and finding a sweet spot was very difficult for the augmented MobileNet model. The best model I have found so far trains the last 11 layers. This gives scores similar to MobileNet's trained-layer model, but with larger standard deviations.
```
mobilenet_aug_frozen_acc_scores = []
mobilenet_aug_frozen_prec_scores = []
mobilenet_aug_frozen_recall_scores = []
mobilenet_aug_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_frozen=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_aug_frozen.layers[:-11]:
layer.trainable = False
mobilenet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_frozen_history = mobilenet_model_aug_frozen.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(mobilenet_model_aug_frozen,
X_test_pneu_model,
y_test_pneu)
mobilenet_aug_frozen_acc_scores.append(acc)
mobilenet_aug_frozen_prec_scores.append(prec)
mobilenet_aug_frozen_recall_scores.append(recall)
mobilenet_aug_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet augmented model while training last 11 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_frozen_acc_scores) * 100, np.std(mobilenet_aug_frozen_acc_scores)*100))
print('Precision of MobileNet augmented model while training last 11 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_prec_scores), np.std(mobilenet_aug_frozen_prec_scores)))
print('Recall of MobileNet augmented model while training last 11 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_recall_scores), np.std(mobilenet_aug_frozen_recall_scores)))
print('f1 score of MobileNet augmented model while training last 11 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_aug_frozen_f1_scores), np.std(mobilenet_aug_frozen_f1_scores)))
```
#### 3.2.4 - MobileNet Summary for Pneumonia Images
A table summarizing the results of each of the MobileNet models is below. The two models with trained layers had similar f1-scores, but the baseline model with trained layers had a much higher recall, leading me to pick it as the better model.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| MobileNet baseline | 50.47 +/- 2.28% | 0.488 +/- 0.079 | 0.344 +/- 0.314 | 0.325 +/- 0.231 |
| MobileNet baseline with training | 53.78 +/- 0.32% | 0.522 +/- 0.003 | 0.876 +/- 0.069 | 0.653 +/- 0.018 |
| MobileNet augmented baseline | 49.65 +/- 0.19% | 0.405 +/- 0.085 | 0.341 +/- 0.461 | 0.240 +/- 0.300 |
| MobileNet augmented with training | 53.05 +/- 1.12% | 0.521 +/- 0.010 | 0.797 +/- 0.123 | 0.626 +/- 0.029 |
### 3.3 - ResNet50 Model
Lastly, we have a ResNet model, specifically the ResNet50 model. I chose this model because it gets good results for most image recognition problems.
#### 3.3.1 - ResNet Baseline with Pneumonia Images
Similarly to MobileNet, I use the `include_top=False` argument, so I have to add the GlobalAveragePooling2D and Dense layers myself. This model also expects an image size of 224 x 224.
I had high hopes that this baseline would score higher than 50%, but it was not to be. The recall is much higher than the other baselines at ~0.86, but I believe this is because the model tends to predict everything as 1 (has pneumonia). This is a good reminder of why you should check confusion matrices.
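To see why a high recall can coexist with ~50% accuracy, consider a toy example (illustrative numbers only, not the notebook's data) where a model predicts "has pneumonia" for nearly every image:

```python
import numpy as np

# Toy balanced test set: 50 negatives, 50 positives. The model predicts
# class 1 for 95 of the 100 images.
y_true = np.array([0] * 50 + [1] * 50)
y_pred = np.array([1] * 45 + [0] * 5 + [1] * 50)

tp = np.sum((y_true == 1) & (y_pred == 1))  # 50
tn = np.sum((y_true == 0) & (y_pred == 0))  # 5
fp = np.sum((y_true == 0) & (y_pred == 1))  # 45
fn = np.sum((y_true == 1) & (y_pred == 0))  # 0

accuracy = (tp + tn) / len(y_true)  # 0.55: barely better than chance
recall = tp / (tp + fn)             # 1.0: every positive is "caught"
precision = tp / (tp + fp)          # ~0.53: but almost half the alarms are false
```

The accuracy alone looks like coin-flipping, while the recall looks excellent; only the confusion matrix reveals that the model is simply saying "yes" to almost everything.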
```
resnet_model = ResNet50(include_top=False, input_shape=(imgSize, imgSize, 3))
resnet_baseline_acc_scores = []
resnet_baseline_prec_scores = []
resnet_baseline_recall_scores = []
resnet_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_baseline=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_baseline.layers:
layer.trainable = False
resnet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_baseline_history = resnet_model_baseline.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(resnet_model_baseline,
X_test_pneu_model,
y_test_pneu)
resnet_baseline_acc_scores.append(acc)
resnet_baseline_prec_scores.append(prec)
resnet_baseline_recall_scores.append(recall)
resnet_baseline_f1_scores.append(f1)
print('Accuracy of ResNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_baseline_acc_scores) * 100, np.std(resnet_baseline_acc_scores)*100))
print('Precision of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_prec_scores), np.std(resnet_baseline_prec_scores)))
print('Recall of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_recall_scores), np.std(resnet_baseline_recall_scores)))
print('f1 score of ResNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_baseline_f1_scores), np.std(resnet_baseline_f1_scores)))
```
#### 3.3.2 - ResNet50 Training Layers with Pneumonia Images
I found that the optimal number of layers to train is 22, which finally pushes the accuracy past 53%, to ~54%. Not much of an improvement, but I was beginning to wonder if any model could break 53%. The precision, recall, and f1-scores all hover around 0.50.
```
resnet_frozen_acc_scores = []
resnet_frozen_prec_scores = []
resnet_frozen_recall_scores = []
resnet_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_frozen=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_frozen.layers[:-22]:
layer.trainable = False
resnet_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_frozen_history = resnet_model_frozen.fit(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, prec, recall, f1 = get_metrics(resnet_model_frozen,
X_test_pneu_model,
y_test_pneu)
resnet_frozen_acc_scores.append(acc)
resnet_frozen_prec_scores.append(prec)
resnet_frozen_recall_scores.append(recall)
resnet_frozen_f1_scores.append(f1)
print('Accuracy of ResNet model while training last 22 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_frozen_acc_scores) * 100, np.std(resnet_frozen_acc_scores)*100))
print('Precision of ResNet model while training last 22 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_prec_scores), np.std(resnet_frozen_prec_scores)))
print('Recall of ResNet model while training last 22 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_recall_scores), np.std(resnet_frozen_recall_scores)))
print('f1 score of ResNet model while training last 22 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_frozen_f1_scores), np.std(resnet_frozen_f1_scores)))
```
#### 3.3.3 - Adding Augmented Images to ResNet Model with Pneumonia Images
Again, not much improvement here over the ResNet baseline model without augmented images. In fact, every metric aside from accuracy has a worse score with larger standard deviations.
```
resnet_aug_baseline_acc_scores = []
resnet_aug_baseline_prec_scores = []
resnet_aug_baseline_recall_scores = []
resnet_aug_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_baseline=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_aug_baseline.layers:
layer.trainable = False
resnet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_baseline_history = resnet_model_aug_baseline.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_baseline,
X_test_pneu_model,
y_test_pneu)
resnet_aug_baseline_acc_scores.append(acc)
resnet_aug_baseline_prec_scores.append(pres)
resnet_aug_baseline_recall_scores.append(recall)
resnet_aug_baseline_f1_scores.append(f1)
print('Accuracy of ResNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_baseline_acc_scores) * 100, np.std(resnet_aug_baseline_acc_scores)*100))
print('Precision of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_prec_scores), np.std(resnet_aug_baseline_prec_scores)))
print('Recall of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_recall_scores), np.std(resnet_aug_baseline_recall_scores)))
print('f1 score of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_baseline_f1_scores), np.std(resnet_aug_baseline_f1_scores)))
```
The optimal number of layers to train for the augmented ResNet model is 14. While the accuracy still hovers at ~55%, the recall is at almost 0.90 with a relatively small standard deviation. However, one must be careful, since the confusion matrices show that the model tends to categorize the images as 1 (has pneumonia).
```
resnet_aug_frozen_acc_scores = []
resnet_aug_frozen_prec_scores = []
resnet_aug_frozen_recall_scores = []
resnet_aug_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_frozen=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_aug_frozen.layers[:-14]:
layer.trainable = False
resnet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_frozen_history = resnet_model_aug_frozen.fit_generator(gen.flow(X_train_pneu_model,
y_train_pneu_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_pneu_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_pneu_model,
y_val_pneu_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_frozen,
X_test_pneu_model,
y_test_pneu)
resnet_aug_frozen_acc_scores.append(acc)
resnet_aug_frozen_prec_scores.append(pres)
resnet_aug_frozen_recall_scores.append(recall)
resnet_aug_frozen_f1_scores.append(f1)
print('Accuracy of ResNet augmented model while training 14 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_frozen_acc_scores) * 100, np.std(resnet_aug_frozen_acc_scores)*100))
print('Precision of ResNet augmented model while training 14 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_prec_scores), np.std(resnet_aug_frozen_prec_scores)))
print('Recall of ResNet augmented model while training 14 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_recall_scores), np.std(resnet_aug_frozen_recall_scores)))
print('f1 score of ResNet augmented model while training 14 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(resnet_aug_frozen_f1_scores), np.std(resnet_aug_frozen_f1_scores)))
```
#### 3.3.4 - ResNet50 Summary for Pneumonia Images
The table summarizing ResNet50's metrics is below. Overall, the max f1-score achieved was 0.667 and the max recall was ~0.90, both attained by the model with augmented images and trained layers.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| ResNet50 baseline | 49.56 +/- 0.65% | 0.496 +/- 0.005 | 0.862 +/- 0.189 | 0.622 +/- 0.061 |
| ResNet50 baseline with training | 54.47 +/- 2.17% | 0.549 +/- 0.021 | 0.493 +/- 0.214 | 0.496 +/- 0.120 |
| ResNet50 augmented baseline | 49.71 +/- 1.15% | 0.487 +/- 0.022 | 0.665 +/- 0.347 | 0.517 +/- 0.183 |
| ResNet50 augmented with training | 55.14 +/- 1.34% | 0.530 +/- 0.008 | 0.899 +/- 0.042 | 0.667 +/- 0.013 |
### 3.4 - Summary of Pneumonia Images
As a reminder, a table of the best models from each architecture (VGG, MobileNet, and ResNet) is below. Overall, each model gives similar results, with accuracies between 53-55% and recalls between 0.80 and 0.90. This means that the best model I have found (so far...) for identifying images with pneumonia is ResNet50 with 14 trained layers. However, the other two models aren't far behind.
The low accuracy concerns me because it means none of these models does particularly well. I believe part of the problem stems from the training data containing only ~1750 images, which is probably not enough to train on. Augmenting images helped a little for the ResNet model, but not enough.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| VGG19 baseline with training | 53.75 +/- 0.43% | 0.525 +/- 0.006 | 0.800 +/- 0.105 | 0.631 +/- 0.031 |
| MobileNet baseline with training | 53.78 +/- 0.32% | 0.522 +/- 0.003 | 0.876 +/- 0.069 | 0.653 +/- 0.018 |
| ResNet50 augmented with training | 55.14 +/- 1.34% | 0.530 +/- 0.008 | 0.899 +/- 0.042 | 0.667 +/- 0.013 |
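The augmented models above rely on the `gen` generator configured earlier in the notebook; the specific transforms it applies are not shown in this section. As a rough numpy-only illustration of the idea (the flip and shift here are assumptions for illustration, not the notebook's actual settings), augmentation creates label-preserving variants of each training image:

```python
import numpy as np

# Illustrative augmentation: random horizontal flip plus a small horizontal
# shift. These particular transforms are assumptions; the notebook's real
# generator (`gen`) is configured elsewhere.
def augment(img, rng):
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]              # horizontal flip
    shift = int(rng.integers(-2, 3))    # shift by -2..2 pixels (wrapping)
    return np.roll(out, shift, axis=1)

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
aug = augment(img, rng)                 # same shape, same pixel multiset
```

Each epoch then sees slightly different versions of the same ~1750 images, which is why augmentation can partially compensate for a small training set.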
## 4 - Effusion
### 4.1 - VGG19 Model for Effusion Images
I suspected that the pneumonia images were difficult to train on due to the low number of images in the training set. One way to test this theory is to find a pathology with more images. However, more images also mean longer computation times, which is why I chose to train on effusion images, the pathology with the second-largest number of images in the dataset. For reference, there are ~14000, ~3500, and ~9200 images in the training, validation, and test sets, respectively.
```
# Get test images
Xtest_eff, ytest_eff = get_images(test_df[test_df['Effusion']==1],
'Effusion',
num_images = test_df[test_df['Effusion']==1].shape[0],
imageSize = imgSize)
Xtest_noteff, ytest_noteff = get_images(test_df[test_df['Effusion']==0],
'Effusion',
num_images = test_df[test_df['Effusion']==1].shape[0],
imageSize = imgSize)
X_test_eff = np.concatenate((Xtest_eff, Xtest_noteff), axis = 0)
y_test_eff = np.concatenate((ytest_eff, ytest_noteff))
_, X_test_eff, _, y_test_eff = train_test_split(X_test_eff,
y_test_eff,
test_size=0.99,
random_state=42,
stratify = y_test_eff)
# get training images and split into validation set
Xtrain_eff, ytrain_eff = get_images(train_val_df[train_val_df['Effusion']==1],
'Effusion',
num_images = train_val_df[train_val_df['Effusion']==1].shape[0],
imageSize = imgSize)
Xtrain_noteff, ytrain_noteff = get_images(train_val_df[train_val_df['Effusion']==0],
'Effusion',
num_images = train_val_df[train_val_df['Effusion']==1].shape[0],
imageSize = imgSize)
Xtrain_eff = np.concatenate((Xtrain_eff, Xtrain_noteff), axis = 0)
ytrain_eff = np.concatenate((ytrain_eff, ytrain_noteff))
X_train_eff, X_val_eff, y_train_eff, y_val_eff = train_test_split(Xtrain_eff,
ytrain_eff,
test_size=0.2,
random_state=42,
stratify = ytrain_eff)
X_train_eff_model, X_val_eff_model, X_test_eff_model = convert_X_data(X_train_eff,
X_val_eff,
X_test_eff,
imageSize = imgSize,
num_classes = 2)
y_train_eff_model, y_val_eff_model, y_test_eff_model = convert_y_data(y_train_eff,
y_val_eff,
y_test_eff,
num_classes = 2)
```
#### 4.1.1 - VGG Baseline for Effusion Images
This is the first model for the effusion pathology. Already the metrics are generally higher than pneumonia's baselines, possibly just because there are more images for the model to train on.
```
vgg_baseline_acc_eff_scores = []
vgg_baseline_pres_eff_scores = []
vgg_baseline_recall_eff_scores = []
vgg_baseline_f1_eff_scores = []
for i in range(3):
vgg_model_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_baseline.add(layer)
vgg_model_baseline.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_baseline.layers:
layer.trainable = False
vgg_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_baseline_history = vgg_model_baseline.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_baseline, X_test_eff_model, y_test_eff, verbose = True)
vgg_baseline_acc_eff_scores.append(acc)
vgg_baseline_pres_eff_scores.append(prec)
vgg_baseline_recall_eff_scores.append(recall)
vgg_baseline_f1_eff_scores.append(f1)
print('Accuracy of VGG baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_baseline_acc_eff_scores) * 100, np.std(vgg_baseline_acc_eff_scores)*100))
print('Precision of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_pres_eff_scores), np.std(vgg_baseline_pres_eff_scores)))
print('Recall of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_recall_eff_scores), np.std(vgg_baseline_recall_eff_scores)))
print('f1 score of VGG baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_baseline_f1_eff_scores), np.std(vgg_baseline_f1_eff_scores)))
```
#### 4.1.2 - VGG19 Training Layers for Effusion Images
After training the last 2 layers, the accuracy jumps up to 63%, which makes me more convinced that the limiting factor is the number of images. The precision stays the same, but the recall and f1-scores jump to at least 0.67.
```
vgg_frozen_acc_eff_scores = []
vgg_frozen_pres_eff_scores = []
vgg_frozen_recall_eff_scores = []
vgg_frozen_f1_eff_scores = []
for i in range(3):
vgg_model_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_frozen.add(layer)
vgg_model_frozen.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_frozen.layers[:-2]:
layer.trainable = False
vgg_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_frozen_history = vgg_model_frozen.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_frozen, X_test_eff_model, y_test_eff, verbose = True)
vgg_frozen_acc_eff_scores.append(acc)
vgg_frozen_pres_eff_scores.append(prec)
vgg_frozen_recall_eff_scores.append(recall)
vgg_frozen_f1_eff_scores.append(f1)
print('Accuracy of VGG model while training last 2 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_frozen_acc_eff_scores) * 100, np.std(vgg_frozen_acc_eff_scores)*100))
print('Precision of VGG model while training last 2 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_pres_eff_scores), np.std(vgg_frozen_pres_eff_scores)))
print('Recall of VGG model while training last 2 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_recall_eff_scores), np.std(vgg_frozen_recall_eff_scores)))
print('f1 score of VGG model while training last 2 layers: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_frozen_f1_eff_scores), np.std(vgg_frozen_f1_eff_scores)))
```
#### 4.1.3 - Adding Augmented Images to VGG19 Effusion Images
Not much change in accuracy and precision compared to the baseline model. However, recall and f1-scores drop quite a bit, to under 0.15. The confusion matrix shows that the model tends to predict most images as 0 (no effusion), resulting in mediocre precision and extremely low recall, which drags down the f1-score.
```
vgg_aug_baseline_acc_eff_scores = []
vgg_aug_baseline_pres_eff_scores = []
vgg_aug_baseline_recall_eff_scores = []
vgg_aug_baseline_f1_eff_scores = []
for i in range(3):
vgg_model_aug_baseline = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_baseline.add(layer)
vgg_model_aug_baseline.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_aug_baseline.layers:
layer.trainable = False
vgg_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_baseline_history = vgg_model_aug_baseline.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_baseline, X_test_eff_model, y_test_eff, verbose = True)
vgg_aug_baseline_acc_eff_scores.append(acc)
vgg_aug_baseline_pres_eff_scores.append(prec)
vgg_aug_baseline_recall_eff_scores.append(recall)
vgg_aug_baseline_f1_eff_scores.append(f1)
print('Accuracy of VGG augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_baseline_acc_eff_scores) * 100, np.std(vgg_aug_baseline_acc_eff_scores)*100))
print('Precision of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_pres_eff_scores), np.std(vgg_aug_baseline_pres_eff_scores)))
print('Recall of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_recall_eff_scores), np.std(vgg_aug_baseline_recall_eff_scores)))
print('f1 score of VGG augmented baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_baseline_f1_eff_scores), np.std(vgg_aug_baseline_f1_eff_scores)))
```
Training the last layer of the augmented VGG model improves the recall and therefore the f1-score, but they are still not as high as those of the baseline model with trained layers.
```
vgg_aug_frozen_acc_eff_scores = []
vgg_aug_frozen_pres_eff_scores = []
vgg_aug_frozen_recall_eff_scores = []
vgg_aug_frozen_f1_eff_scores = []
for i in range(3):
vgg_model_aug_frozen = Sequential()
for layer in vgg_model.layers[:-1]:
vgg_model_aug_frozen.add(layer)
vgg_model_aug_frozen.add(Dense(num_classes, activation = 'softmax'))
# freeze layers, excluding from future training. weights are not updated.
for layer in vgg_model_aug_frozen.layers[:-1]:
layer.trainable = False
vgg_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
vgg_model_aug_frozen_history = vgg_model_aug_frozen.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model, y_val_eff_model))
acc, prec, recall, f1 = get_metrics(vgg_model_aug_frozen, X_test_eff_model, y_test_eff, verbose = True)
vgg_aug_frozen_acc_eff_scores.append(acc)
vgg_aug_frozen_pres_eff_scores.append(prec)
vgg_aug_frozen_recall_eff_scores.append(recall)
vgg_aug_frozen_f1_eff_scores.append(f1)
print('Accuracy of VGG augmented model while training last 1 layer: {:0.2f} +/- {:0.2f}%'.format(np.mean(vgg_aug_frozen_acc_eff_scores) * 100, np.std(vgg_aug_frozen_acc_eff_scores)*100))
print('Precision of VGG augmented model while training last 1 layer: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_pres_eff_scores), np.std(vgg_aug_frozen_pres_eff_scores)))
print('Recall of VGG augmented model while training last 1 layer: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_recall_eff_scores), np.std(vgg_aug_frozen_recall_eff_scores)))
print('f1 score of VGG augmented model while training last 1 layer: {:0.3f} +/- {:0.3f}'.format(np.mean(vgg_aug_frozen_f1_eff_scores), np.std(vgg_aug_frozen_f1_eff_scores)))
```
#### 4.1.4 - Summary of VGG19 Model with Effusion Images
Overall, the best VGG19 model for effusion images was the baseline model after training the last 2 layers, with an f1-score of 0.67 and recall of 0.76.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| VGG19 baseline | 53.09 +/- 1.24% | 0.531 +/- 0.000 | 0.574 +/- 0.054 | 0.549 +/- 0.016 |
| VGG19 with training | 63.06 +/- 1.92% | 0.531 +/- 0.000 | 0.764 +/- 0.109 | 0.672 +/- 0.019 |
| VGG19 augmented | 50.05 +/- 0.48% | 0.531 +/- 0.000 | 0.081 +/- 0.040 | 0.135 +/- 0.059 |
| VGG19 augmented with training | 50.02 +/- 1.59% | 0.531 +/- 0.000 | 0.632 +/- 0.297 | 0.522 +/- 0.145 |
### 4.2 - MobileNet Model for Effusion Images
#### 4.2.1 - MobileNet Baseline for Effusion Images
The baseline model for MobileNet has no surprises, with similar accuracy and precision as the other baselines. The recall is slightly higher at 0.65, which raises the f1-score compared to the other baselines.
```
mobilenet_baseline_acc_scores = []
mobilenet_baseline_pres_scores = []
mobilenet_baseline_recall_scores = []
mobilenet_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_baseline.layers:
layer.trainable = False
mobilenet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_baseline_history = mobilenet_model_baseline.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_baseline,
X_test_eff_model,
y_test_eff)
mobilenet_baseline_acc_scores.append(acc)
mobilenet_baseline_pres_scores.append(pres)
mobilenet_baseline_recall_scores.append(recall)
mobilenet_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_baseline_acc_scores) * 100, np.std(mobilenet_baseline_acc_scores)*100))
print('Precision of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_pres_scores), np.std(mobilenet_baseline_pres_scores)))
print('Recall of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_recall_scores), np.std(mobilenet_baseline_recall_scores)))
print('f1 score of MobileNet baseline model: {:0.3f} +/- {:0.3f}'.format(np.mean(mobilenet_baseline_f1_scores), np.std(mobilenet_baseline_f1_scores)))
```
#### 4.2.2 - MobileNet Training Layers for Effusion Images
Unfortunately, at the time of writing I am having trouble reproducing the results from my preliminary work, so these metrics are excluded from the summary table.
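One common source of run-to-run variation is unseeded randomness (weight initialization, data shuffling); the notebook does not pin seeds, which plausibly contributes to the irreproducibility. A minimal numpy illustration of the difference seeding makes:

```python
import numpy as np

# Two generators created with the same seed produce identical sequences;
# unseeded runs would differ each time. (Keras training has additional
# randomness, e.g. weight initialization and GPU nondeterminism, which
# needs its own framework-level seeds.)
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
draws_a = rng_a.random(3)
draws_b = rng_b.random(3)  # identical to draws_a
```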
```
mobilenet_frozen_acc_scores = []
mobilenet_frozen_pres_scores = []
mobilenet_frozen_recall_scores = []
mobilenet_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_frozen=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_frozen.layers[:-24]:
layer.trainable = False
mobilenet_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_frozen_history = mobilenet_model_frozen.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_frozen,
X_test_eff_model,
y_test_eff)
mobilenet_frozen_acc_scores.append(acc)
mobilenet_frozen_pres_scores.append(pres)
mobilenet_frozen_recall_scores.append(recall)
mobilenet_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet model while training last 24 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_frozen_acc_scores) * 100, np.std(mobilenet_frozen_acc_scores)*100))
print('Precision of MobileNet model while training last 24 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_frozen_pres_scores), np.std(mobilenet_frozen_pres_scores)))
print('Recall of MobileNet model while training last 24 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_frozen_recall_scores), np.std(mobilenet_frozen_recall_scores)))
print('f1 score of MobileNet model while training last 24 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_frozen_f1_scores), np.std(mobilenet_frozen_f1_scores)))
```
#### 4.2.3 - Adding Augmented Images to MobileNet for Effusion Images
```
mobilenet_aug_baseline_acc_scores = []
mobilenet_aug_baseline_pres_scores = []
mobilenet_aug_baseline_recall_scores = []
mobilenet_aug_baseline_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_baseline=Model(inputs=mobilenet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in mobilenet_model_aug_baseline.layers:
layer.trainable = False
mobilenet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_baseline_history = mobilenet_model_aug_baseline.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_aug_baseline,
X_test_eff_model,
y_test_eff)
mobilenet_aug_baseline_acc_scores.append(acc)
mobilenet_aug_baseline_pres_scores.append(pres)
mobilenet_aug_baseline_recall_scores.append(recall)
mobilenet_aug_baseline_f1_scores.append(f1)
print('Accuracy of MobileNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_baseline_acc_scores) * 100, np.std(mobilenet_aug_baseline_acc_scores)*100))
print('Precision of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_aug_baseline_pres_scores), np.std(mobilenet_aug_baseline_pres_scores)))
print('Recall of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_aug_baseline_recall_scores), np.std(mobilenet_aug_baseline_recall_scores)))
print('f1 score of MobileNet augmented baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_aug_baseline_f1_scores), np.std(mobilenet_aug_baseline_f1_scores)))
```
Again, I was unable to find the optimal number of layers to train, so these metrics will be removed from the summary.
```
mobilenet_aug_frozen_acc_scores = []
mobilenet_aug_frozen_pres_scores = []
mobilenet_aug_frozen_recall_scores = []
mobilenet_aug_frozen_f1_scores = []
for i in range(3):
x=mobilenet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
mobilenet_model_aug_frozen=Model(inputs=mobilenet_model.input,outputs=preds)
for layer in mobilenet_model_aug_frozen.layers[:-24]:
layer.trainable = False
mobilenet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
mobilenet_model_aug_frozen_history = mobilenet_model_aug_frozen.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(mobilenet_model_aug_frozen,
X_test_eff_model,
y_test_eff)
mobilenet_aug_frozen_acc_scores.append(acc)
mobilenet_aug_frozen_pres_scores.append(pres)
mobilenet_aug_frozen_recall_scores.append(recall)
mobilenet_aug_frozen_f1_scores.append(f1)
print('Accuracy of MobileNet augmented model while training last 24 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(mobilenet_aug_frozen_acc_scores) * 100, np.std(mobilenet_aug_frozen_acc_scores)*100))
print('Precision of MobileNet augmented model while training last 24 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_aug_frozen_pres_scores), np.std(mobilenet_aug_frozen_pres_scores)))
print('Recall of MobileNet augmented model while training last 24 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_aug_frozen_recall_scores), np.std(mobilenet_aug_frozen_recall_scores)))
print('f1 score of MobileNet augmented model while training last 24 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(mobilenet_aug_frozen_f1_scores), np.std(mobilenet_aug_frozen_f1_scores)))
```
#### 4.2.4 - Summary of MobileNet Model with Effusion Images
Since I was unable to optimize the number of trainable layers in this notebook, I only have the baselines of the augmented and non-augmented models. This section will have to be revisited at a later date.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| MobileNet baseline | 49.72 +/- 1.53% | 0.488 +/- 0.022 | 0.655 +/- 0.324 | 0.519 +/- 0.172 |
| MobileNet augmented baseline | 49.98 +/- 0.29% | 0.455 +/- 0.065 | 0.552 +/- 0.409 | 0.417 +/- 0.284 |
### 4.3 - ResNet Model for Effusion Images
#### 4.3.1 - ResNet Baseline for Effusion Images
Finally, we reach the last model for effusion images. As expected, accuracy is around 50% and precision is around 0.50. The recall and f1-score, however, seem to be higher than normal for a baseline.
```
resnet_baseline_acc_scores = []
resnet_baseline_pres_scores = []
resnet_baseline_recall_scores = []
resnet_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_baseline=Model(inputs=resnet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in resnet_model_baseline.layers:
layer.trainable = False
resnet_model_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_baseline_history = resnet_model_baseline.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_baseline,
X_test_eff_model,
y_test_eff)
resnet_baseline_acc_scores.append(acc)
resnet_baseline_pres_scores.append(pres)
resnet_baseline_recall_scores.append(recall)
resnet_baseline_f1_scores.append(f1)
print('Accuracy of ResNet baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_baseline_acc_scores) * 100, np.std(resnet_baseline_acc_scores)*100))
print('Precision of ResNet baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_baseline_pres_scores), np.std(resnet_baseline_pres_scores)))
print('Recall of ResNet baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_baseline_recall_scores), np.std(resnet_baseline_recall_scores)))
print('f1 score of ResNet baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_baseline_f1_scores), np.std(resnet_baseline_f1_scores)))
```
#### 4.3.2 - ResNet Training Layers for Effusion Images
With the last 26 layers of the baseline ResNet model trained, accuracy increases to 62%, precision rises slightly to 0.59, and the recall and f1-score rise to 0.70 and above.
```
resnet_frozen_acc_scores = []
resnet_frozen_pres_scores = []
resnet_frozen_recall_scores = []
resnet_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_frozen=Model(inputs=resnet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in resnet_model_frozen.layers[:-26]:
layer.trainable = False
resnet_model_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_frozen_history = resnet_model_frozen.fit(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_frozen,
X_test_eff_model,
y_test_eff)
resnet_frozen_acc_scores.append(acc)
resnet_frozen_pres_scores.append(pres)
resnet_frozen_recall_scores.append(recall)
resnet_frozen_f1_scores.append(f1)
print('Accuracy of ResNet model while training last 26 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_frozen_acc_scores) * 100, np.std(resnet_frozen_acc_scores)*100))
print('Precision of ResNet model while training last 26 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_frozen_pres_scores), np.std(resnet_frozen_pres_scores)))
print('Recall of ResNet model while training last 26 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_frozen_recall_scores), np.std(resnet_frozen_recall_scores)))
print('f1 score of ResNet model while training last 26 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_frozen_f1_scores), np.std(resnet_frozen_f1_scores)))
```
#### 4.3.3 - Adding Augmented Images to ResNet for Effusion Images
The baseline model with augmented images has slightly better metrics than the baseline model without augmentation. The metrics are still lower than those of the baseline model with fine-tuning.
```
resnet_aug_baseline_acc_scores = []
resnet_aug_baseline_pres_scores = []
resnet_aug_baseline_recall_scores = []
resnet_aug_baseline_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_baseline=Model(inputs=resnet_model.input,outputs=preds)
# freeze layers, excluding from future training. weights are not updated.
for layer in resnet_model_aug_baseline.layers:
layer.trainable = False
resnet_model_aug_baseline.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_baseline_history = resnet_model_aug_baseline.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_baseline,
X_test_eff_model,
y_test_eff)
resnet_aug_baseline_acc_scores.append(acc)
resnet_aug_baseline_pres_scores.append(pres)
resnet_aug_baseline_recall_scores.append(recall)
resnet_aug_baseline_f1_scores.append(f1)
print('Accuracy of ResNet augmented baseline model: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_baseline_acc_scores) * 100, np.std(resnet_aug_baseline_acc_scores)*100))
print('Precision of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_aug_baseline_pres_scores), np.std(resnet_aug_baseline_pres_scores)))
print('Recall of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_aug_baseline_recall_scores), np.std(resnet_aug_baseline_recall_scores)))
print('f1 score of ResNet augmented baseline model: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_aug_baseline_f1_scores), np.std(resnet_aug_baseline_f1_scores)))
```
After training the last 27 layers of the augmented model, all metrics are quite similar to the baseline with fine-tuning.
```
resnet_aug_frozen_acc_scores = []
resnet_aug_frozen_pres_scores = []
resnet_aug_frozen_recall_scores = []
resnet_aug_frozen_f1_scores = []
for i in range(3):
x=resnet_model.output
x=GlobalAveragePooling2D()(x)
preds=Dense(2,activation='softmax')(x)
resnet_model_aug_frozen=Model(inputs=resnet_model.input,outputs=preds)
for layer in resnet_model_aug_frozen.layers[:-27]:
layer.trainable = False
resnet_model_aug_frozen.compile(loss = 'categorical_crossentropy',
optimizer = Adam(learning_rate = 0.001),
metrics = ['acc'])
batchsize = 64
resnet_model_aug_frozen_history = resnet_model_aug_frozen.fit_generator(gen.flow(X_train_eff_model,
y_train_eff_model,
batch_size=batchsize),
steps_per_epoch = len(X_train_eff_model)//batchsize,
epochs = 15,
verbose = 0,
validation_data=(X_val_eff_model,
y_val_eff_model))
acc, pres, recall, f1 = get_metrics(resnet_model_aug_frozen,
X_test_eff_model,
y_test_eff)
resnet_aug_frozen_acc_scores.append(acc)
resnet_aug_frozen_pres_scores.append(pres)
resnet_aug_frozen_recall_scores.append(recall)
resnet_aug_frozen_f1_scores.append(f1)
print('Accuracy of ResNet augmented model while training last 27 layers: {:0.2f} +/- {:0.2f}%'.format(np.mean(resnet_aug_frozen_acc_scores) * 100, np.std(resnet_aug_frozen_acc_scores)*100))
print('Precision of ResNet augmented model while training last 27 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_aug_frozen_pres_scores), np.std(resnet_aug_frozen_pres_scores)))
print('Recall of ResNet augmented model while training last 27 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_aug_frozen_recall_scores), np.std(resnet_aug_frozen_recall_scores)))
print('f1 score of ResNet augmented model while training last 27 layers: {:0.3f} +/- {:0.3f}%'.format(np.mean(resnet_aug_frozen_f1_scores), np.std(resnet_aug_frozen_f1_scores)))
```
#### 4.3.4 - Summary of ResNet50 Model with Effusion Images
Overall, the two models with trained layers have similar results. However, the model without augmented images has better recall and f1-scores as well as generally smaller standard deviations.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| ResNet baseline | 47.50 +/- 2.92% | 0.460 +/- 0.047 | 0.519 +/- 0.340 | 0.450 +/- 0.156 |
| ResNet with training | 62.76 +/- 0.11% | 0.585 +/- 0.002 | 0.878 +/- 0.009 | 0.702 +/- 0.002 |
| ResNet augmented | 51.13 +/- 1.25% | 0.518 +/- 0.014 | 0.561 +/- 0.343 | 0.471 +/- 0.202 |
| ResNet augmented with training | 63.88 +/- 0.25% | 0.606 +/- 0.007 | 0.793 +/- 0.050 | 0.686 +/- 0.014 |
### 4.4 - Summary of Effusion Images
In the end, I have two models that produce reasonable results for effusion images. For both the VGG19 and ResNet50 models, the version with trained layers but without augmented images did the best. Between VGG and ResNet, however, the winner is ResNet.
These results are interesting because even though there are almost 10 times more effusion images compared to pneumonia images, the number of images is still relatively small compared to what these models are generally used to seeing. I would have thought that augmenting the images would have helped here. Perhaps one reason it didn't help is that the augmented images are generated inside the fit_generator call, so I cannot pick the images to be augmented. Perhaps in a future iteration of this project, I can generate and select the augmented images manually.
| Model | Accuracy | Precision | Recall | F1-score |
|------|------|------|------|------|
| VGG19 with training | 63.06 +/- 1.92% | 0.531 +/- 0.000 | 0.764 +/- 0.109 | 0.672 +/- 0.019 |
| ResNet with training | 62.76 +/- 0.11% | 0.585 +/- 0.002 | 0.878 +/- 0.009 | 0.702 +/- 0.002 |
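One limitation noted above is that augmentation happened inside fit_generator, with no chance to inspect or hand-pick the augmented images. A minimal NumPy sketch of the alternative, pre-generating augmented copies up front (the `augment_batch` helper is hypothetical, not part of this notebook's pipeline):

```
import numpy as np

def augment_batch(images):
    """Return horizontally flipped copies of a batch of images.

    images: array of shape (n, height, width, channels).
    Pre-generating copies like this lets you inspect and select
    augmented images before adding them to the training set.
    """
    return images[:, :, ::-1, :]

rng = np.random.default_rng(0)
batch = rng.random((4, 8, 8, 3))
flipped = augment_batch(batch)
combined = np.concatenate([batch, flipped])  # doubled training set
print(combined.shape)  # (8, 8, 8, 3)
```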
## 5 - Future Research
So far, I've only applied this transfer learning technique to two pathologies, but there are 12 more! Future research would definitely be to extend this project to the other 12 pathologies. In addition, it would also be interesting to see whether building a model from scratch would be more beneficial, considering how difficult and time-consuming it was to find the optimal number of layers to train.
# Surname Generation
## Imports
```
import os
from argparse import Namespace
from collections import Counter
import json
import re
import string
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.nn import functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm_notebook
```
## Data Vectorization classes
### Vocabulary
```
class Vocabulary(object):
"""Class to process text and extract vocabulary for mapping"""
def __init__(self, token_to_idx=None):
"""
Args:
token_to_idx (dict): a pre-existing map of tokens to indices
"""
if token_to_idx is None:
token_to_idx = {}
self._token_to_idx = token_to_idx
self._idx_to_token = {idx: token
for token, idx in self._token_to_idx.items()}
def to_serializable(self):
""" returns a dictionary that can be serialized """
return {'token_to_idx': self._token_to_idx}
@classmethod
def from_serializable(cls, contents):
""" instantiates the Vocabulary from a serialized dictionary """
return cls(**contents)
def add_token(self, token):
"""Update mapping dicts based on the token.
Args:
token (str): the item to add into the Vocabulary
Returns:
index (int): the integer corresponding to the token
"""
if token in self._token_to_idx:
index = self._token_to_idx[token]
else:
index = len(self._token_to_idx)
self._token_to_idx[token] = index
self._idx_to_token[index] = token
return index
def add_many(self, tokens):
"""Add a list of tokens into the Vocabulary
Args:
tokens (list): a list of string tokens
Returns:
indices (list): a list of indices corresponding to the tokens
"""
return [self.add_token(token) for token in tokens]
def lookup_token(self, token):
"""Retrieve the index associated with the token
Args:
token (str): the token to look up
Returns:
index (int): the index corresponding to the token
"""
return self._token_to_idx[token]
def lookup_index(self, index):
"""Return the token associated with the index
Args:
index (int): the index to look up
Returns:
token (str): the token corresponding to the index
Raises:
KeyError: if the index is not in the Vocabulary
"""
if index not in self._idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self._idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self._token_to_idx)
class SequenceVocabulary(Vocabulary):
def __init__(self, token_to_idx=None, unk_token="<UNK>",
mask_token="<MASK>", begin_seq_token="<BEGIN>",
end_seq_token="<END>"):
super(SequenceVocabulary, self).__init__(token_to_idx)
self._mask_token = mask_token
self._unk_token = unk_token
self._begin_seq_token = begin_seq_token
self._end_seq_token = end_seq_token
self.mask_index = self.add_token(self._mask_token)
self.unk_index = self.add_token(self._unk_token)
self.begin_seq_index = self.add_token(self._begin_seq_token)
self.end_seq_index = self.add_token(self._end_seq_token)
def to_serializable(self):
contents = super(SequenceVocabulary, self).to_serializable()
contents.update({'unk_token': self._unk_token,
'mask_token': self._mask_token,
'begin_seq_token': self._begin_seq_token,
'end_seq_token': self._end_seq_token})
return contents
def lookup_token(self, token):
"""Retrieve the index associated with the token
or the UNK index if token isn't present.
Args:
token (str): the token to look up
Returns:
index (int): the index corresponding to the token
Notes:
`unk_index` needs to be >=0 (having been added into the Vocabulary)
for the UNK functionality
"""
if self.unk_index >= 0:
return self._token_to_idx.get(token, self.unk_index)
else:
return self._token_to_idx[token]
```
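The index-assignment invariant behind `add_token` — an unseen token gets the next free integer, a known token returns its existing index — can be restated with a stand-alone sketch of the same logic:

```
token_to_idx = {}
idx_to_token = {}

def add_token(token):
    # assign the next free index to unseen tokens; reuse otherwise
    if token not in token_to_idx:
        index = len(token_to_idx)
        token_to_idx[token] = index
        idx_to_token[index] = token
    return token_to_idx[token]

print(add_token("a"), add_token("b"), add_token("a"))  # 0 1 0
```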
### Vectorizer
```
class SurnameVectorizer(object):
""" The Vectorizer which coordinates the Vocabularies and puts them to use"""
def __init__(self, char_vocab, nationality_vocab):
"""
Args:
char_vocab (SequenceVocabulary): maps words to integers
nationality_vocab (Vocabulary): maps nationalities to integers
"""
self.char_vocab = char_vocab
self.nationality_vocab = nationality_vocab
def vectorize(self, surname, vector_length=-1):
"""Vectorize a surname into a vector of observations and targets
The outputs are the vectorized surname split into two vectors:
surname[:-1] and surname[1:]
At each timestep, the first vector is the observation and the second vector is the target.
Args:
surname (str): the surname to be vectorized
vector_length (int): an argument for forcing the length of index vector
Returns:
a tuple: (from_vector, to_vector)
from_vector (numpy.ndarray): the observation vector
to_vector (numpy.ndarray): the target prediction vector
"""
indices = [self.char_vocab.begin_seq_index]
indices.extend(self.char_vocab.lookup_token(token) for token in surname)
indices.append(self.char_vocab.end_seq_index)
if vector_length < 0:
vector_length = len(indices) - 1
from_vector = np.zeros(vector_length, dtype=np.int64)
from_indices = indices[:-1]
from_vector[:len(from_indices)] = from_indices
from_vector[len(from_indices):] = self.char_vocab.mask_index
to_vector = np.zeros(vector_length, dtype=np.int64)
to_indices = indices[1:]
to_vector[:len(to_indices)] = to_indices
to_vector[len(to_indices):] = self.char_vocab.mask_index
return from_vector, to_vector
@classmethod
def from_dataframe(cls, surname_df):
"""Instantiate the vectorizer from the dataset dataframe
Args:
surname_df (pandas.DataFrame): the surname dataset
Returns:
an instance of the SurnameVectorizer
"""
char_vocab = SequenceVocabulary()
nationality_vocab = Vocabulary()
for index, row in surname_df.iterrows():
for char in row.surname:
char_vocab.add_token(char)
nationality_vocab.add_token(row.nationality)
return cls(char_vocab, nationality_vocab)
@classmethod
def from_serializable(cls, contents):
"""Instantiate the vectorizer from saved contents
Args:
contents (dict): a dict holding two vocabularies for this vectorizer
This dictionary is created using `vectorizer.to_serializable()`
Returns:
an instance of SurnameVectorizer
"""
char_vocab = SequenceVocabulary.from_serializable(contents['char_vocab'])
nat_vocab = Vocabulary.from_serializable(contents['nationality_vocab'])
return cls(char_vocab=char_vocab, nationality_vocab=nat_vocab)
def to_serializable(self):
""" Returns the serializable contents """
return {'char_vocab': self.char_vocab.to_serializable(),
'nationality_vocab': self.nationality_vocab.to_serializable()}
```
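The observation/target split that `vectorize` produces (the index sequence shifted by one position, padded with the mask index) can be illustrated with a toy vocabulary; the index values below are made up for the example:

```
import numpy as np

# toy vocabulary: 0=<MASK>, 1=<BEGIN>, 2=<END>, then characters
char_to_idx = {'l': 3, 'e': 4}
MASK, BEGIN, END = 0, 1, 2

def vectorize(surname, vector_length):
    indices = [BEGIN] + [char_to_idx[c] for c in surname] + [END]
    from_vector = np.full(vector_length, MASK, dtype=np.int64)
    to_vector = np.full(vector_length, MASK, dtype=np.int64)
    from_vector[:len(indices) - 1] = indices[:-1]   # observations
    to_vector[:len(indices) - 1] = indices[1:]      # targets, shifted by one
    return from_vector, to_vector

from_v, to_v = vectorize("lee", vector_length=6)
print(from_v)  # [1 3 4 4 0 0]
print(to_v)    # [3 4 4 2 0 0]
```

At each timestep the model sees a character from `from_v` and is trained to predict the same position in `to_v`, which is the next character in the name.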
### Dataset
```
class SurnameDataset(Dataset):
def __init__(self, surname_df, vectorizer):
"""
Args:
surname_df (pandas.DataFrame): the dataset
vectorizer (SurnameVectorizer): vectorizer instantiated from dataset
"""
self.surname_df = surname_df
self._vectorizer = vectorizer
self._max_seq_length = max(map(len, self.surname_df.surname)) + 2
self.train_df = self.surname_df[self.surname_df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.surname_df[self.surname_df.split=='val']
self.validation_size = len(self.val_df)
self.test_df = self.surname_df[self.surname_df.split=='test']
self.test_size = len(self.test_df)
self._lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.validation_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
@classmethod
def load_dataset_and_make_vectorizer(cls, surname_csv):
"""Load dataset and make a new vectorizer from scratch
Args:
surname_csv (str): location of the dataset
Returns:
an instance of SurnameDataset
"""
surname_df = pd.read_csv(surname_csv)
return cls(surname_df, SurnameVectorizer.from_dataframe(surname_df))
@classmethod
def load_dataset_and_load_vectorizer(cls, surname_csv, vectorizer_filepath):
"""Load dataset and the corresponding vectorizer.
Used in the case where the vectorizer has been cached for re-use
Args:
surname_csv (str): location of the dataset
vectorizer_filepath (str): location of the saved vectorizer
Returns:
an instance of SurnameDataset
"""
surname_df = pd.read_csv(surname_csv)
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(surname_df, vectorizer)
@staticmethod
def load_vectorizer_only(vectorizer_filepath):
"""a static method for loading the vectorizer from file
Args:
vectorizer_filepath (str): the location of the serialized vectorizer
Returns:
an instance of SurnameVectorizer
"""
with open(vectorizer_filepath) as fp:
return SurnameVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
"""saves the vectorizer to disk using json
Args:
vectorizer_filepath (str): the location to save the vectorizer
"""
with open(vectorizer_filepath, "w") as fp:
json.dump(self._vectorizer.to_serializable(), fp)
def get_vectorizer(self):
""" returns the vectorizer """
return self._vectorizer
def set_split(self, split="train"):
self._target_split = split
self._target_df, self._target_size = self._lookup_dict[split]
def __len__(self):
return self._target_size
def __getitem__(self, index):
"""the primary entry point method for PyTorch datasets
Args:
index (int): the index to the data point
Returns:
a dictionary holding the data point: (x_data, y_target, class_index)
"""
row = self._target_df.iloc[index]
from_vector, to_vector = \
self._vectorizer.vectorize(row.surname, self._max_seq_length)
nationality_index = \
self._vectorizer.nationality_vocab.lookup_token(row.nationality)
return {'x_data': from_vector,
'y_target': to_vector,
'class_index': nationality_index}
def get_num_batches(self, batch_size):
"""Given a batch size, return the number of batches in the dataset
Args:
batch_size (int)
Returns:
number of batches in the dataset
"""
return len(self) // batch_size
def generate_batches(dataset, batch_size, shuffle=True,
drop_last=True, device="cpu"):
"""
A generator function which wraps the PyTorch DataLoader. It will
ensure each tensor is on the right device.
"""
dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
shuffle=shuffle, drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
# quick sanity check: tensors can be moved between devices
if torch.cuda.is_available():
    x = torch.ones(5, 5).to("cuda")
    y = torch.zeros(5)
    print(y.to(x.device))
```
## The Model: SurnameGenerationModel
```
class SurnameGenerationModel(nn.Module):
def __init__(self, char_embedding_size, char_vocab_size, num_nationalities,
rnn_hidden_size, batch_first=True, padding_idx=0, dropout_p=0.5):
"""
Args:
char_embedding_size (int): The size of the character embeddings
char_vocab_size (int): The number of characters to embed
num_nationalities (int): The size of the prediction vector
rnn_hidden_size (int): The size of the RNN's hidden state
batch_first (bool): Informs whether the input tensors will
have batch or the sequence on the 0th dimension
padding_idx (int): The index for the tensor padding;
see torch.nn.Embedding
dropout_p (float): the probability of zeroing activations using
the dropout method. higher means more likely to zero.
"""
super(SurnameGenerationModel, self).__init__()
self.char_emb = nn.Embedding(num_embeddings=char_vocab_size,
embedding_dim=char_embedding_size,
padding_idx=padding_idx)
self.nation_emb = nn.Embedding(num_embeddings=num_nationalities,
embedding_dim=rnn_hidden_size)
self.rnn = nn.GRU(input_size=char_embedding_size,
hidden_size=rnn_hidden_size,
batch_first=batch_first)
self.fc = nn.Linear(in_features=rnn_hidden_size,
out_features=char_vocab_size)
self._dropout_p = dropout_p
def forward(self, x_in, nationality_index, apply_softmax=False):
"""The forward pass of the model
Args:
x_in (torch.Tensor): an input data tensor.
x_in.shape should be (batch, max_seq_size)
nationality_index (torch.Tensor): The index of the nationality for each data point
Used to initialize the hidden state of the RNN
apply_softmax (bool): a flag for the softmax activation
should be false if used with the Cross Entropy losses
Returns:
the resulting tensor. tensor.shape should be (batch, char_vocab_size)
"""
x_embedded = self.char_emb(x_in)
# hidden_size: (num_layers * num_directions, batch_size, rnn_hidden_size)
nationality_embedded = self.nation_emb(nationality_index).unsqueeze(0)
y_out, _ = self.rnn(x_embedded, nationality_embedded)
batch_size, seq_size, feat_size = y_out.shape
y_out = y_out.contiguous().view(batch_size * seq_size, feat_size)
y_out = self.fc(F.dropout(y_out, p=self._dropout_p))
if apply_softmax:
y_out = F.softmax(y_out, dim=1)
new_feat_size = y_out.shape[-1]
y_out = y_out.view(batch_size, seq_size, new_feat_size)
return y_out
def sample_from_model(model, vectorizer, nationalities, sample_size=20,
temperature=1.0):
"""Sample a sequence of indices from the model
Args:
model (SurnameGenerationModel): the trained model
vectorizer (SurnameVectorizer): the corresponding vectorizer
nationalities (list): a list of integers representing nationalities
sample_size (int): the max length of the samples
temperature (float): accentuates or flattens
the distribution.
0.0 < temperature < 1.0 will make it peakier.
temperature > 1.0 will make it more uniform
Returns:
indices (torch.Tensor): the matrix of indices;
shape = (num_samples, sample_size)
"""
num_samples = len(nationalities)
begin_seq_index = [vectorizer.char_vocab.begin_seq_index
for _ in range(num_samples)]
begin_seq_index = torch.tensor(begin_seq_index,
dtype=torch.int64).unsqueeze(dim=1)
indices = [begin_seq_index]
nationality_indices = torch.tensor(nationalities, dtype=torch.int64).unsqueeze(dim=0)
h_t = model.nation_emb(nationality_indices)
for time_step in range(sample_size):
x_t = indices[time_step]
x_emb_t = model.char_emb(x_t)
rnn_out_t, h_t = model.rnn(x_emb_t, h_t)
prediction_vector = model.fc(rnn_out_t.squeeze(dim=1))
probability_vector = F.softmax(prediction_vector / temperature, dim=1)
indices.append(torch.multinomial(probability_vector, num_samples=1))
indices = torch.stack(indices).squeeze().permute(1, 0)
return indices
def decode_samples(sampled_indices, vectorizer):
"""Transform indices into the string form of a surname
Args:
sampled_indices (torch.Tensor): the indices from `sample_from_model`
vectorizer (SurnameVectorizer): the corresponding vectorizer
"""
decoded_surnames = []
vocab = vectorizer.char_vocab
for sample_index in range(sampled_indices.shape[0]):
surname = ""
for time_step in range(sampled_indices.shape[1]):
sample_item = sampled_indices[sample_index, time_step].item()
if sample_item == vocab.begin_seq_index:
continue
elif sample_item == vocab.end_seq_index:
break
else:
surname += vocab.lookup_index(sample_item)
decoded_surnames.append(surname)
return decoded_surnames
```
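The `temperature` argument in `sample_from_model` divides the logits before the softmax: values below 1 sharpen the distribution toward the most likely character, values above 1 flatten it toward uniform. A self-contained NumPy illustration (`softmax_with_temperature` is an illustrative helper, not from the notebook):

```
import numpy as np

def softmax_with_temperature(logits, temperature):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # peaked at the max logit
print(softmax_with_temperature(logits, 2.0))  # closer to uniform
```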
## Training Routine
### Helper functions
```
def make_train_state(args):
return {'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'learning_rate': args.learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': args.model_state_file}
def update_train_state(args, model, train_state):
"""Handle the training state updates.
Components:
- Early Stopping: Prevent overfitting.
- Model Checkpoint: Model is saved if the model is better
:param args: main arguments
:param model: model to train
:param train_state: a dictionary representing the training state values
:returns:
a new train_state
"""
# Save at least one model
if train_state['epoch_index'] == 0:
torch.save(model.state_dict(), train_state['model_filename'])
train_state['stop_early'] = False
# Save model if performance improved
elif train_state['epoch_index'] >= 1:
loss_tm1, loss_t = train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= loss_tm1:
# Update step
train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < train_state['early_stopping_best_val']:
torch.save(model.state_dict(), train_state['model_filename'])
train_state['early_stopping_best_val'] = loss_t
# Reset early stopping step
train_state['early_stopping_step'] = 0
# Stop early ?
train_state['stop_early'] = \
train_state['early_stopping_step'] >= args.early_stopping_criteria
return train_state
def normalize_sizes(y_pred, y_true):
"""Normalize tensor sizes
Args:
y_pred (torch.Tensor): the output of the model
If a 3-dimensional tensor, reshapes to a matrix
y_true (torch.Tensor): the target predictions
If a matrix, reshapes to be a vector
"""
if len(y_pred.size()) == 3:
y_pred = y_pred.contiguous().view(-1, y_pred.size(2))
if len(y_true.size()) == 2:
y_true = y_true.contiguous().view(-1)
return y_pred, y_true
def compute_accuracy(y_pred, y_true, mask_index):
y_pred, y_true = normalize_sizes(y_pred, y_true)
_, y_pred_indices = y_pred.max(dim=1)
correct_indices = torch.eq(y_pred_indices, y_true).float()
valid_indices = torch.ne(y_true, mask_index).float()
n_correct = (correct_indices * valid_indices).sum().item()
n_valid = valid_indices.sum().item()
return n_correct / n_valid * 100
def sequence_loss(y_pred, y_true, mask_index):
y_pred, y_true = normalize_sizes(y_pred, y_true)
return F.cross_entropy(y_pred, y_true, ignore_index=mask_index)
```
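The masked accuracy computed by `compute_accuracy` above ignores positions whose target is the padding index. A minimal NumPy sketch of the same logic, with toy values, shows why that matters:

```
import numpy as np

# Toy check of masked accuracy: four positions, the last one is padding
# (mask_index = 0) and must be excluded from the denominator.
mask_index = 0
y_true = np.array([2, 1, 3, 0])          # last position is padding
y_pred = np.array([[0., 0., 5., 0.],     # argmax 2 -> correct
                   [0., 5., 0., 0.],     # argmax 1 -> correct
                   [5., 0., 0., 0.],     # argmax 0 -> wrong
                   [0., 0., 0., 5.]])    # padding position, ignored
idx = y_pred.argmax(axis=1)
correct = (idx == y_true).astype(float)
valid = (y_true != mask_index).astype(float)
acc = (correct * valid).sum() / valid.sum() * 100
print(round(acc, 2))  # 66.67: 2 of the 3 non-padding positions are correct
```

Without the mask, the padding row would drag the score down even though the model is never trained on it.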
### General utilities
```
def set_seed_everywhere(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
def handle_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
```
### Settings and some prep work
```
args = Namespace(
# Data and Path information
surname_csv="data/surnames/surnames_with_splits.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="model_storage/ch7/model2_conditioned_surname_generation",
# Model hyper parameters
char_embedding_size=32,
rnn_hidden_size=32,
# Training hyper parameters
seed=1337,
learning_rate=0.001,
batch_size=128,
num_epochs=100,
early_stopping_criteria=5,
# Runtime options
catch_keyboard_interrupt=True,
cuda=True,
expand_filepaths_to_save_dir=True,
reload_from_files=False,
)
if args.expand_filepaths_to_save_dir:
args.vectorizer_file = os.path.join(args.save_dir,
args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir,
args.model_state_file)
print("Expanded filepaths: ")
print("\t{}".format(args.vectorizer_file))
print("\t{}".format(args.model_state_file))
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
# Set seed for reproducibility
set_seed_everywhere(args.seed, args.cuda)
# handle dirs
handle_dirs(args.save_dir)
```
### Initializations
```
if args.reload_from_files:
# training from a checkpoint
dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv,
args.vectorizer_file)
else:
# create dataset and vectorizer
dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.get_vectorizer()
model = SurnameGenerationModel(char_embedding_size=args.char_embedding_size,
char_vocab_size=len(vectorizer.char_vocab),
num_nationalities=len(vectorizer.nationality_vocab),
rnn_hidden_size=args.rnn_hidden_size,
padding_idx=vectorizer.char_vocab.mask_index,
dropout_p=0.5)
```
### Training loop
```
mask_index = vectorizer.char_vocab.mask_index
model = model.to(args.device)
optimizer = optim.Adam(model.parameters(), lr=args.learning_rate)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer,
mode='min', factor=0.5,
patience=1)
train_state = make_train_state(args)
epoch_bar = tqdm_notebook(desc='training routine',
total=args.num_epochs,
position=0)
dataset.set_split('train')
train_bar = tqdm_notebook(desc='split=train',
total=dataset.get_num_batches(args.batch_size),
position=1,
leave=True)
dataset.set_split('val')
val_bar = tqdm_notebook(desc='split=val',
total=dataset.get_num_batches(args.batch_size),
position=1,
leave=True)
try:
for epoch_index in range(args.num_epochs):
train_state['epoch_index'] = epoch_index
# Iterate over training dataset
# setup: batch generator, set loss and acc to 0, set train mode on
dataset.set_split('train')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.0
running_acc = 0.0
model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# the training routine is these 5 steps:
# --------------------------------------
# step 1. zero the gradients
optimizer.zero_grad()
# step 2. compute the output
y_pred = model(x_in=batch_dict['x_data'],
nationality_index=batch_dict['class_index'])
# step 3. compute the loss
loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index)
# step 4. use loss to produce gradients
loss.backward()
# step 5. use optimizer to take gradient step
optimizer.step()
# -----------------------------------------
# compute the running loss and running accuracy
running_loss += (loss.item() - running_loss) / (batch_index + 1)
acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index)
running_acc += (acc_t - running_acc) / (batch_index + 1)
# update bar
train_bar.set_postfix(loss=running_loss,
acc=running_acc,
epoch=epoch_index)
train_bar.update()
train_state['train_loss'].append(running_loss)
train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# setup: batch generator, set loss and acc to 0; set eval mode on
dataset.set_split('val')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.
running_acc = 0.
model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = model(x_in=batch_dict['x_data'],
nationality_index=batch_dict['class_index'])
# compute the loss
loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index)
# compute the running loss and running accuracy
running_loss += (loss.item() - running_loss) / (batch_index + 1)
acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index)
running_acc += (acc_t - running_acc) / (batch_index + 1)
# Update bar
val_bar.set_postfix(loss=running_loss, acc=running_acc,
epoch=epoch_index)
val_bar.update()
train_state['val_loss'].append(running_loss)
train_state['val_acc'].append(running_acc)
train_state = update_train_state(args=args, model=model,
train_state=train_state)
scheduler.step(train_state['val_loss'][-1])
if train_state['stop_early']:
break
# move model to cpu for sampling
nationalities = np.random.choice(np.arange(len(vectorizer.nationality_vocab)), replace=True, size=2)
model = model.cpu()
sampled_surnames = decode_samples(
sample_from_model(model, vectorizer, nationalities=nationalities),
vectorizer)
sample1 = "{}->{}".format(vectorizer.nationality_vocab.lookup_index(nationalities[0]),
sampled_surnames[0])
sample2 = "{}->{}".format(vectorizer.nationality_vocab.lookup_index(nationalities[1]),
sampled_surnames[1])
epoch_bar.set_postfix(sample1=sample1,
sample2=sample2)
# move model back to whichever device it should be on
model = model.to(args.device)
train_bar.n = 0
val_bar.n = 0
epoch_bar.update()
except KeyboardInterrupt:
print("Exiting loop")
# compute the loss & accuracy on the test set using the best available model
model.load_state_dict(torch.load(train_state['model_filename']))
model = model.to(args.device)
dataset.set_split('test')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.
running_acc = 0.
model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = model(x_in=batch_dict['x_data'],
nationality_index=batch_dict['class_index'])
# compute the loss
loss = sequence_loss(y_pred, batch_dict['y_target'], mask_index)
# compute the running loss and running accuracy
running_loss += (loss.item() - running_loss) / (batch_index + 1)
acc_t = compute_accuracy(y_pred, batch_dict['y_target'], mask_index)
running_acc += (acc_t - running_acc) / (batch_index + 1)
train_state['test_loss'] = running_loss
train_state['test_acc'] = running_acc
print("Test loss: {};".format(train_state['test_loss']))
print("Test Accuracy: {}".format(train_state['test_acc']))
```
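The update `running += (x - running) / (n + 1)` used for `running_loss` and `running_acc` throughout the loops above is a streaming arithmetic mean; a quick self-contained check:

```
# Incremental averaging: after processing all values, `running` equals
# the plain arithmetic mean, without storing the whole list of losses.
values = [4.0, 8.0, 6.0]
running = 0.0
for n, x in enumerate(values):
    running += (x - running) / (n + 1)
print(running)  # 6.0, the same as sum(values) / len(values)
```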
## Sampling
```
model = model.cpu()
for index in range(len(vectorizer.nationality_vocab)):
nationality = vectorizer.nationality_vocab.lookup_index(index)
print("Sampled for {}: ".format(nationality))
sampled_indices = sample_from_model(model, vectorizer,
nationalities=[index] * 3,
temperature=0.7)
for sampled_surname in decode_samples(sampled_indices, vectorizer):
print("- " + sampled_surname)
```
## Collaborative filtering
```
from fastai.gen_doc.nbdoc import *
```
This package contains all the necessary functions to quickly train a model for a collaborative filtering task. Let's start by importing all we'll need.
```
from fastai.collab import *
```
## Overview
Collaborative filtering is the task of predicting how much a user is going to like a certain item. The fastai library contains a [`CollabFilteringDataset`](/collab.html#CollabFilteringDataset) class that will help you create datasets suitable for training, and a function `collab_learner` to build a simple model directly from a ratings table. Let's first see how we can get started before delving into the documentation.
For this example, we'll use a small subset of the [MovieLens](https://grouplens.org/datasets/movielens/) dataset to predict the rating a user would give a particular movie (from 0 to 5). The dataset comes in the form of a csv file where each line is a rating of a movie by a given person.
```
path = untar_data(URLs.ML_SAMPLE)
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
```
We'll first turn the `userId` and `movieId` columns into category codes, so that we can replace them with their codes when it's time to feed them to an `Embedding` layer. This step would be even more important if our csv had names of users, or names of items in it. To do it, we simply have to call a [`CollabDataBunch`](/collab.html#CollabDataBunch) factory method.
```
data = CollabDataBunch.from_df(ratings)
```
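What that factory method does with the id columns can be illustrated standalone; a minimal pandas sketch with toy ids (not the MovieLens data — `CollabDataBunch` handles this internally):

```
import pandas as pd

# Raw ids become contiguous category codes, which is what an Embedding
# layer expects as indices.
ratings = pd.DataFrame({'userId': [10, 42, 10], 'movieId': [7, 7, 99]})
for col in ['userId', 'movieId']:
    ratings[col] = ratings[col].astype('category').cat.codes
print(ratings['userId'].tolist())   # [0, 1, 0]
print(ratings['movieId'].tolist())  # [0, 0, 1]
```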
Now that this step is done, we can directly create a [`Learner`](/basic_train.html#Learner) object:
```
learn = collab_learner(data, n_factors=50, y_range=(0.,5.))
```
And then immediately begin training
```
learn.fit_one_cycle(5, 5e-3, wd=0.1)
show_doc(CollabDataBunch)
```
The init function shouldn't be called directly (as it is simply that of a basic [`DataBunch`](/basic_data.html#DataBunch)); instead, you'll want to use the following factory method.
```
show_doc(CollabDataBunch.from_df)
```
Takes a `ratings` dataframe and splits it randomly into training and validation sets following `pct_val` (unless it's None). `user_name`, `item_name` and `rating_name` give the names of the corresponding columns (defaulting to the first, second and third columns). Optionally, a `test` dataframe can be passed, as can a `seed` for the separation between training and validation sets. The `kwargs` will be passed to [`DataBunch.create`](/basic_data.html#DataBunch.create).
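The random train/validation separation described above can be sketched with plain pandas and NumPy (a simplified illustration under assumed mechanics, not fastai's exact implementation):

```
import numpy as np
import pandas as pd

# Hold out pct_val of the rows for validation, reproducibly via the seed.
ratings = pd.DataFrame({'userId': range(10), 'movieId': range(10), 'rating': [3] * 10})
pct_val, seed = 0.2, 42
rng = np.random.RandomState(seed)
val_idx = rng.choice(len(ratings), int(len(ratings) * pct_val), replace=False)
valid = ratings.iloc[val_idx]
train = ratings.drop(ratings.index[val_idx])
print(len(train), len(valid))  # 8 2
```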
## Model and [`Learner`](/basic_train.html#Learner)
```
show_doc(CollabLearner, title_level=3)
```
This is a subclass of [`Learner`](/basic_train.html#Learner) that just introduces helper functions to analyze results; the initialization is the same as for a regular [`Learner`](/basic_train.html#Learner).
```
show_doc(CollabLearner.bias)
show_doc(CollabLearner.get_idx)
show_doc(CollabLearner.weight)
show_doc(EmbeddingDotBias, title_level=3)
```
Creates a simple model with `Embedding` weights and biases for `n_users` and `n_items`, with `n_factors` latent factors. Takes the dot product of the embeddings and adds the biases; then, if `y_range` is specified, feeds the result to a sigmoid rescaled to go from `y_range[0]` to `y_range[1]`.
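Numerically, that forward pass looks like this for a single (user, item) pair — a hand-rolled sketch with made-up factors, not the fastai code itself:

```
import numpy as np

# Dot product of latent factors, plus both biases, squashed into y_range.
u = np.array([0.2, -0.1, 0.4])   # user latent factors (toy values)
v = np.array([0.3, 0.5, -0.2])   # item latent factors (toy values)
u_bias, v_bias = 0.1, -0.05
y_min, y_max = 0.0, 5.0
raw = u @ v + u_bias + v_bias
pred = y_min + (y_max - y_min) / (1 + np.exp(-raw))  # sigmoid rescaled to y_range
print(round(float(pred), 3))  # ~2.475, near the middle of the 0-5 range
```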
```
show_doc(EmbeddingNN, title_level=3)
```
`emb_szs` will overwrite the default and `kwargs` are passed to [`TabularModel`](/tabular.models.html#TabularModel).
```
show_doc(collab_learner)
```
More specifically, this binds [`data`](/tabular.data.html#tabular.data) with a model that is either an [`EmbeddingDotBias`](/collab.html#EmbeddingDotBias) with `n_factors` if `use_nn=False` or an [`EmbeddingNN`](/collab.html#EmbeddingNN) with `emb_szs` otherwise. In both cases the numbers of users and items will be inferred from the data; `y_range` can be specified in the `kwargs`, and you can pass [`metrics`](/metrics.html#metrics) or `wd` to the [`Learner`](/basic_train.html#Learner) constructor.
## Links with the Data Block API
```
show_doc(CollabLine, doc_string=False, title_level=3)
```
Subclass of [`TabularLine`](/tabular.data.html#TabularLine) for collaborative filtering.
```
show_doc(CollabList, title_level=3, doc_string=False)
```
Subclass of [`TabularList`](/tabular.data.html#TabularList) for collaborative filtering.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(EmbeddingDotBias.forward)
show_doc(CollabList.reconstruct)
show_doc(EmbeddingNN.forward)
```
## New Methods - Please document or move to the undocumented section
# Aerospike Python Client Tutorial
### Refer to https://www.aerospike.com/docs/client/python/index.html for information on installing the Aerospike Python client.
#### Tested with Python 3.7
```
# IP Address or DNS name for one host in your Aerospike cluster
AS_HOST ="127.0.0.1"
# Please reach out to us if you do not have a feature key
AS_FEATURE_KEY_PATH = "/etc/aerospike/features.conf"
AS_PORT = 3000 # Usually 3000, but change here if not
import aerospike
```
## Create Sample Data and load it into Aerospike
```
# We create age vs salary data, using three different Gaussian distributions
import numpy as np
import pandas as pd
import math
# Create covariance matrix from std devs + correlation
def covariance_matrix(std_dev_1,std_dev_2,correlation):
return [[std_dev_1 ** 2, correlation * std_dev_1 * std_dev_2],
[correlation * std_dev_1 * std_dev_2, std_dev_2 ** 2]]
# Return a bivariate sample given means/std dev/correlation
def age_salary_sample(distribution_params,sample_size):
mean = [distribution_params["age_mean"], distribution_params["salary_mean"]]
cov = covariance_matrix(distribution_params["age_std_dev"],distribution_params["salary_std_dev"],
distribution_params["age_salary_correlation"])
return np.random.multivariate_normal(mean, cov, sample_size).T
# Define the characteristics of our age/salary distribution
age_salary_distribution_1 = {"age_mean":25,"salary_mean":50000,
"age_std_dev":1,"salary_std_dev":5000,"age_salary_correlation":0.3}
age_salary_distribution_2 = {"age_mean":45,"salary_mean":80000,
"age_std_dev":4,"salary_std_dev":10000,"age_salary_correlation":0.7}
age_salary_distribution_3 = {"age_mean":35,"salary_mean":70000,
"age_std_dev":2,"salary_std_dev":9000,"age_salary_correlation":0.1}
distribution_data = [age_salary_distribution_1,age_salary_distribution_2,age_salary_distribution_3]
# Sample age/salary data for each distribution
group_1_ages,group_1_salaries = age_salary_sample(age_salary_distribution_1,sample_size=100)
group_2_ages,group_2_salaries = age_salary_sample(age_salary_distribution_2,sample_size=120)
group_3_ages,group_3_salaries = age_salary_sample(age_salary_distribution_3,sample_size=80)
ages=np.concatenate([group_1_ages,group_2_ages,group_3_ages])
salaries=np.concatenate([group_1_salaries,group_2_salaries,group_3_salaries])
print("Data created")
# Turn the above records into a Data Frame
# First of all, create an array of arrays
inputBuf = []
for i in range(0, len(ages)) :
id = i + 1 # Avoid counting from zero
name = "Individual: {:03d}".format(id)
# Note we need to make sure values are typed correctly
# salary will have type numpy.float64 - if it is not cast as below, an error will be thrown
age = float(ages[i])
salary = int(salaries[i])
inputBuf.append((id, name,age,salary))
for i in inputBuf:
print (i)
```
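The `covariance_matrix` helper above builds `[[s1^2, r*s1*s2], [r*s1*s2, s2^2]]`. As a quick sanity check (using the distribution-1 parameters), normalizing the off-diagonal term by the two standard deviations recovers the requested correlation:

```
import numpy as np

# The off-diagonal of the covariance matrix is r * s1 * s2; dividing by
# sqrt of the two diagonal entries gives back the correlation coefficient.
s1, s2, r = 1.0, 5000.0, 0.3           # age/salary std devs and correlation
cov = [[s1 ** 2, r * s1 * s2],
       [r * s1 * s2, s2 ** 2]]
recovered_r = cov[0][1] / (np.sqrt(cov[0][0]) * np.sqrt(cov[1][1]))
print(recovered_r)  # ~0.3
```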
## Connect to the Aerospike Cluster
```
# Configure the client
config = {
'hosts': [ (AS_HOST, AS_PORT) ]
}
# Create a client and connect it to the cluster
try:
client = aerospike.client(config).connect()
except:
import sys
print("failed to connect to the cluster with", config['hosts'])
sys.exit(1)
```
## Write Data
```
# Type 'show namespaces' at the aql prompt if you are not sure about the namespace
namespace= "test"
write_set= "write_set"
for i in inputBuf:
_id, _name, _age, _salary = i
key = (namespace, write_set,_id)
client.put(key, {'id': _id,'name': _name,'age': _age,'salary': _salary})
```
## Read Data
```
for i in inputBuf:
_id, _name, _age, _salary = i
key = (namespace, write_set,_id)
(key, metadata, record) = client.get(key)
print(record)
```
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
def build_dataset(words, n_words, atleast=1):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, dictionary['UNK'])
if index == dictionary['UNK']:
unk_count += 1
data.append(index)
count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\n')
conv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\n')
id2line = {}
for line in lines:
_line = line.split(' +++$+++ ')
if len(_line) == 5:
id2line[_line[0]] = _line[4]
convs = [ ]
for line in conv_lines[:-1]:
_line = line.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","")
convs.append(_line.split(','))
questions = []
answers = []
for conv in convs:
for i in range(len(conv)-1):
questions.append(id2line[conv[i]])
answers.append(id2line[conv[i+1]])
def clean_text(text):
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
text = re.sub(r"what's", "what is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return ' '.join([i.strip() for i in filter(None, text.split())])
clean_questions = []
for question in questions:
clean_questions.append(clean_text(question))
clean_answers = []
for answer in answers:
clean_answers.append(clean_text(answer))
min_line_length = 2
max_line_length = 5
short_questions_temp = []
short_answers_temp = []
i = 0
for question in clean_questions:
if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:
short_questions_temp.append(question)
short_answers_temp.append(clean_answers[i])
i += 1
short_questions = []
short_answers = []
i = 0
for answer in short_answers_temp:
if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:
short_answers.append(answer)
short_questions.append(short_questions_temp[i])
i += 1
question_test = short_questions[500:550]
answer_test = short_answers[500:550]
short_questions = short_questions[:500]
short_answers = short_answers[:500]
concat_from = ' '.join(short_questions+question_test).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = ' '.join(short_answers+answer_test).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab from size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(short_answers)):
short_answers[i] += ' EOS'
class Chatbot:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate,
batch_size, dropout = 0.5, beam_width = 15):
def lstm_cell(size, reuse=False):
return tf.nn.rnn_cell.LSTMCell(size, initializer=tf.orthogonal_initializer(),
reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.placeholder(tf.int32, [None])
self.Y_seq_len = tf.placeholder(tf.int32, [None])
# encoder
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
for n in range(num_layers):
(out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
cell_fw = lstm_cell(size_layer // 2),
cell_bw = lstm_cell(size_layer // 2),
inputs = encoder_embedded,
sequence_length = self.X_seq_len,
dtype = tf.float32,
scope = 'bidirectional_rnn_%d'%(n))
encoder_embedded = tf.concat((out_fw, out_bw), 2)
bi_state_c = tf.concat((state_fw.c, state_bw.c), -1)
bi_state_h = tf.concat((state_fw.h, state_bw.h), -1)
bi_lstm_state = tf.nn.rnn_cell.LSTMStateTuple(c=bi_state_c, h=bi_state_h)
self.encoder_state = tuple([bi_lstm_state] * num_layers)
self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
# decoder
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
decoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell(size_layer) for _ in range(num_layers)])
dense_layer = tf.layers.Dense(to_dict_size)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = training_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = predicting_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 16
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate,batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
X = str_idx(short_questions, dictionary_from)
Y = str_idx(short_answers, dictionary_to)
X_test = str_idx(question_test, dictionary_from)
Y_test = str_idx(answer_test, dictionary_to)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
def check_accuracy(logits, Y):
acc = 0
for i in range(logits.shape[0]):
internal_acc = 0
count = 0
for k in range(len(Y[i])):
try:
if Y[i][k] == logits[i][k]:
internal_acc += 1
count += 1
if Y[i][k] == EOS:
break
except:
break
acc += (internal_acc / count)
return acc / logits.shape[0]
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, (len(short_questions) // batch_size) * batch_size, batch_size):
batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD)
predicted, loss, _ = sess.run([model.predicting_ids, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y,
model.X_seq_len:seq_x,
model.Y_seq_len:seq_y})
total_loss += loss
total_accuracy += check_accuracy(predicted,batch_y)
total_loss /= (len(short_questions) // batch_size)
total_accuracy /= (len(short_questions) // batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD)
predicted = sess.run(model.predicting_ids, feed_dict={model.X:batch_x,model.X_seq_len:seq_x})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Note: Animations are available in version 1.12.10+
Run `pip install plotly --upgrade` to update your Plotly version.
```
import plotly
plotly.__version__
```
#### Import Data
Let us import some apple stock data for this animation.
```
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
from plotly.tools import FigureFactory as FF
import time
from datetime import datetime
import numpy as np
import pandas as pd
appl = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
appl.columns = [col.replace('AAPL.', '') for col in appl.columns]
apple_data_matrix = appl.head(10).round(2)
table = FF.create_table(apple_data_matrix)
py.iplot(table, filename='apple_data_table')
```
#### Make the Grid
```
def to_unix_time(dt):
epoch = datetime.utcfromtimestamp(0)
return (dt - epoch).total_seconds() * 1000
appl_price = list(appl['Adjusted'])
my_columns = []
for k in range(len(appl.Date) - 1):
my_columns.append(Column(list(appl.Date)[:k + 1], 'x{}'.format(k + 1)))
my_columns.append(Column(appl_price[:k + 1], 'y{}'.format(k + 1)))
grid = Grid(my_columns)
py.grid_ops.upload(grid, 'AAPL-daily-stock-price' + str(time.time()), auto_open=False)
```
#### Make the Figure
```
data=[dict(type='scatter',
xsrc=grid.get_column_reference('x1'),
ysrc= grid.get_column_reference('y1'),
name='AAPL',
mode='lines',
line=dict(color= 'rgb(114, 186, 59)'),
fill='tozeroy',
fillcolor='rgba(114, 186, 59, 0.5)')]
axis=dict(ticklen=4,
mirror=True,
zeroline=False,
showline=True,
autorange=False,
showgrid=False)
layout = dict(title='AAPL Daily Stock Price',
font=dict(family='Balto'),
showlegend=False,
autosize=False,
width=800,
height=400,
xaxis=dict(axis, **{'nticks':12, 'tickangle':-45,
'range': [to_unix_time(datetime(2015, 2, 17)),
to_unix_time(datetime(2016, 11, 30))]}),
yaxis=dict(axis, **{'title': '$', 'range':[0,170]}),
updatemenus=[dict(type='buttons',
showactive=False,
y=1,
x=1.1,
xanchor='right',
yanchor='top',
pad=dict(t=0, r=10),
buttons=[dict(label='Play',
method='animate',
args=[None, dict(frame=dict(duration=50, redraw=False),
transition=dict(duration=0),
fromcurrent=True,
mode='immediate')])])])
frames=[{'data':[{'xsrc': grid.get_column_reference('x{}'.format(k + 1)),
'ysrc': grid.get_column_reference('y{}'.format(k + 1))}],
'traces': [0]
} for k in range(len(appl.Date) - 1)]
fig=dict(data=data, layout=layout, frames=frames)
py.icreate_animations(fig, 'AAPL-stockprice' + str(time.time()))
```
#### Reference
For additional information on filled area plots in Plotly see: https://plot.ly/python/filled-area-plots/.
For more documentation on creating animations with Plotly, see https://plot.ly/python/#animations.
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
!pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'filled-area-animation.ipynb', 'python/filled-area-animation/', 'Filled-Area Animation | plotly',
'How to make an animated filled-area plot with apple stock data in Python.',
title='Filled-Area Animation | plotly',
name='Filled-Area Animation',
language='python',
page_type='example_index', has_thumbnail='true', thumbnail='thumbnail/apple_stock_animation.gif',
display_as='animations', ipynb= '~notebook_demo/128', order=3)
```
```
%matplotlib inline
import math
import scipy
from scipy.stats import *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
colorDic = {"blue" : "#6599FF", "yellow" : "#FFAD33", "purple": "#683b96", "green" : "#198D6D", "red" : "#FF523F"}
colors = list(colorDic.values())
raycasts = ["IFRC", "FRC","HRC", "EFRC"]
OUTPUT_PATH = "./out/"
'''
Find the intersection of a ray with a finite cylinder whose bottom base is
centered at centerBottomBase. To do that, translate the ray origin so that
the center of the bottom base is at the origin, then compute the intersection
with the canonical infinite cylinder. Check whether the ray intersects the
lateral surface between the two bases; if not, check the bases themselves,
and otherwise the ray does not intersect the actual cylinder.
'''
def calcCylinderIntersectValue (centerBottomBase, radius, rayOrigin, rayDirection):
# translate the ray origin
# Point
p0 = np.array([rayOrigin[0]-centerBottomBase[0], rayOrigin[1]-centerBottomBase[1], rayOrigin[2]-centerBottomBase[2]])
# coefficients for the intersection equation
# got them mathematically intersecting the line equation with the cylinder equation
a = rayDirection[0] * rayDirection[0] + rayDirection[2] * rayDirection[2]
b = rayDirection[0]*p0[0] +rayDirection[2]*p0[2]
c = p0[0]*p0[0]+p0[2]*p0[2]-radius*radius
delta = b*b - a*c
#use epsilon because of computation errors between doubles
epsilon = 0.00000001
# delta < 0 means no intersections
if (delta < epsilon):
return (False, np.inf, np.inf)
# nearest intersection
t1 = (-b - np.sqrt (delta))/a
t2 = (-b + np.sqrt (delta))/a
return (True, t1, t2)
def calcCylinderIntersect (centerBottomBase, radius, rayOrigin, rayDirection):
b, t1, t2 = calcCylinderIntersectValue(centerBottomBase, radius, rayOrigin, rayDirection)
if (b):
i1, i2 = (rayOrigin + rayDirection* t1, rayOrigin + rayDirection* t2)
        # distances from the ray origin (not the direction vector) to each hit
        d1 = np.sqrt(np.sum(np.square(rayOrigin - i1)))
        d2 = np.sqrt(np.sum(np.square(rayOrigin - i2)))
if (d1 < d2):
return i1
else:
return i2
else:
return np.inf
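# Hand-computed sanity check of the quadratic above (illustrative only, not part
# of the original pipeline): a ray from (-5, 0.5, 0) along +x meets an infinite
# cylinder of radius 2 around the y-axis at t = 3 (x = -2) and t = 7 (x = +2).
import math
_p0, _d, _r = (-5.0, 0.5, 0.0), (1.0, 0.0, 0.0), 2.0
_a = _d[0] * _d[0] + _d[2] * _d[2]
_b = _d[0] * _p0[0] + _d[2] * _p0[2]
_c = _p0[0] * _p0[0] + _p0[2] * _p0[2] - _r * _r
_delta = _b * _b - _a * _c
_t1 = (-_b - math.sqrt(_delta)) / _a
_t2 = (-_b + math.sqrt(_delta)) / _a
assert abs(_t1 - 3.0) < 1e-9 and abs(_t2 - 7.0) < 1e-9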
def rotate3DAroundOrigin(point, pitch, yaw):
x = point[:,0]
y = point[:,1]
z = point[:,2]
r = np.sqrt(x*x + y*y + z*z)
phi = np.arctan2(x,z)+np.deg2rad(yaw)
theta = np.arccos(y/r)+np.deg2rad(pitch)
x = r * np.sin( theta ) * np.sin( phi )
y = r * np.cos( theta )
z = r * np.sin( theta ) * np.cos( phi )
return np.array([x,y,z]).T
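# Spot check of the spherical rotation (illustrative, standalone copy of the
# math above): yawing the unit vector (0, 0, 1) by 90 degrees should land on
# (1, 0, 0); a zero rotation should leave it unchanged.
import numpy as np
def _rotate(point, pitch, yaw):
    x, y, z = point[:, 0], point[:, 1], point[:, 2]
    r = np.sqrt(x * x + y * y + z * z)
    phi = np.arctan2(x, z) + np.deg2rad(yaw)
    theta = np.arccos(y / r) + np.deg2rad(pitch)
    return np.array([r * np.sin(theta) * np.sin(phi), r * np.cos(theta), r * np.sin(theta) * np.cos(phi)]).T
assert np.allclose(_rotate(np.array([[0.0, 0.0, 1.0]]), 0, 90), [[1.0, 0.0, 0.0]])
assert np.allclose(_rotate(np.array([[0.0, 0.0, 1.0]]), 0, 0), [[0.0, 0.0, 1.0]])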
def getOrientaion(points):
x = points[:,0]
y = points[:,1]
z = points[:,2]
r = np.sqrt(x*x + y*y + z*z)
phi = np.arctan2(x,z)
theta = np.arccos(y/r)
return np.array([np.rad2deg(theta), np.rad2deg(phi)]).T
#yaw = np.arctan2(points[:,0], points[:,2])
#pitch = np.arctan2(np.sqrt((points[:,0] * points[:,0]) + (points[:,1] * points[:,1])), points[:,2]);
#return np.array([np.rad2deg(pitch), np.rad2deg(yaw)]).T
points = np.array([[1,1,0], [1,-1,0], [1,99,0], [1,-99,0],[1,1,1], [5, 5,5], [-5, -5,-5]])
getOrientaion(points)
dfMatch = pd.read_pickle(OUTPUT_PATH + "data_v02.pkl")
dfPointer = dfMatch[dfMatch.IsPointer == True].copy(deep=True)
del dfMatch
len(dfPointer)
i = 0
dfPointer["TargetId2"] = -1
for y in sorted(dfPointer.TargetProjectionY.unique()):
for x in sorted(dfPointer.TargetProjectionX.unique()):
dfPointer.loc[(dfPointer.TargetProjectionX == x) & (dfPointer.TargetProjectionY == y), "TargetId2"] = i
i = i+1
for rc in raycasts:
target = np.array(list(dfPointer["TargetPos"].apply(lambda x: list(x)).values))
intersection = np.array(list(dfPointer["Intersection"+rc].apply(lambda x: list(x)).values))
if(rc == "IFRC"):
origin = np.array(list(dfPointer["FingertipPos"].apply(lambda x: list(x)).values))
elif(rc == "FRC"):
origin = np.array(list(dfPointer["RightLowerArmPos"].apply(lambda x: list(x)).values))
else:
origin = np.array(list(dfPointer["HMDPos"].apply(lambda x: list(x)).values))
rotations = getOrientaion(target-origin)
dfPointer["Target"+rc+"Pitch"] = rotations[:,0]
dfPointer["Target"+rc+"Yaw"] = rotations[:,1]
rotations = getOrientaion(intersection-origin)
dfPointer[rc+"Pitch"] = rotations[:,0]
dfPointer[rc+"Yaw"] = rotations[:,1]
dfPointer[rc+"PitchError"] = dfPointer[rc+"Pitch"]-dfPointer["Target"+rc+"Pitch"]
dfPointer.loc[dfPointer[rc+"PitchError"] > 180, rc+"PitchError"] = dfPointer[rc+"PitchError"]-360
dfPointer.loc[dfPointer[rc+"PitchError"] < -180, rc+"PitchError"] = dfPointer[rc+"PitchError"]+360
dfPointer[rc+"YawError"] = dfPointer[rc+"Yaw"]-dfPointer["Target"+rc+"Yaw"]
dfPointer.loc[dfPointer[rc+"YawError"] > 180, rc+"YawError"] = dfPointer[rc+"YawError"]-360
dfPointer.loc[dfPointer[rc+"YawError"] < -180, rc+"YawError"] = dfPointer[rc+"YawError"]+360
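# The .loc adjustments above wrap angle differences into the +/-180 degree range;
# the same wrap can be written in one step (illustrative helper, not used by the
# pipeline; it differs only at the exact +180 boundary):
def _wrap180(deg):
    return (deg + 180.0) % 360.0 - 180.0
assert _wrap180(190.0) == -170.0
assert _wrap180(-190.0) == 170.0
assert _wrap180(45.0) == 45.0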
lst = ["TargetId2"]
dfError = dfPointer.groupby("TargetId2").mean().reset_index(drop=False)
for c in dfPointer.columns:
if ("Yaw" in c) or ("Pitch" in c):
lst.append(c)
for c in lst:
for target in dfError.TargetId:
x = dfPointer[dfPointer.TargetId == target][c].values
dfError.loc[dfError.TargetId == target, c] = np.rad2deg(scipy.stats.circmean(np.deg2rad(x)))
dfError.loc[dfError[c] > 180, c] = dfError[c]-360
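# scipy.stats.circmean (used above) averages angles on the circle, avoiding the
# naive-mean pitfall: the arithmetic mean of 350 deg and 10 deg is 180 deg, but
# the circular mean is 0 deg (mod 360). Illustrative check, independent of the data:
import numpy as np
from scipy.stats import circmean
_m = np.rad2deg(circmean(np.deg2rad([350.0, 10.0])))
assert abs((_m + 180.0) % 360.0 - 180.0) < 1e-6  # i.e. _m is 0 mod 360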
dfError["HMDPos"] = dfPointer.groupby(dfPointer.TargetId).HMDPos.apply(lambda x: np.mean(np.array(list(x)), axis=0))
dfError["TargetPos"] = dfPointer.groupby(dfPointer.TargetId).TargetPos.apply(lambda x: np.mean(np.array(list(x)), axis=0))
dfError["FingertipPos"] = dfPointer.groupby(dfPointer.TargetId).FingertipPos.apply(lambda x: np.mean(np.array(list(x)), axis=0))
dfError["RightLowerArmPos"] = dfPointer.groupby(dfPointer.TargetId).RightLowerArmPos.apply(lambda x: np.mean(np.array(list(x)), axis=0))
df = dfError[["TargetId2", "TargetIFRCYaw", "IFRCYaw", "FRCYaw", "HRCYaw", "EFRCYaw"]]
df = df.sort_values("TargetId2").set_index("TargetId2")
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
df.plot(ax=ax)
#dfError.groupby('TargetId').mean()["TargetAngle"].plot(ax=ax)
plt.legend(ncol=4)
df.describe()
plt.show()
df = dfError[["TargetId2", "TargetIFRCPitch", "TargetFRCPitch", "IFRCPitch", "FRCPitch", "HRCPitch", "EFRCPitch"]]
df = df.sort_values("TargetId2").set_index("TargetId2")
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
df.plot(ax=ax)
plt.legend(ncol=4)
df.describe()
plt.gca().invert_yaxis()
plt.show()
lst = ["TargetId2"]
for c in dfPointer.columns:
if ("Yaw" in c) or ("Pitch" in c):
lst.append(c)
for c in lst:
for target in dfError.TargetId:
x = dfPointer[dfPointer.TargetId == target][c].values
dfError.loc[dfError.TargetId == target, c] = np.rad2deg(scipy.stats.circmean(np.deg2rad(x)))
dfError.loc[dfError[c] > 180, c] = dfError[c]-360
df = dfError[["TargetId2", "IFRCYawError", "FRCYawError", "HRCYawError", "EFRCYawError"]]
df = df.sort_values("TargetId2").set_index("TargetId2")
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
df.plot(ax=ax)
#dfError.groupby('TargetId').mean()["TargetAngle"].plot(ax=ax)
plt.legend(ncol=4)
df.describe()
plt.show()
df = dfError[["TargetId2", "IFRCPitchError", "FRCPitchError", "HRCPitchError", "EFRCPitchError"]]
df = df.sort_values("TargetId2").set_index("TargetId2")
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
df.plot(ax=ax)
plt.legend(ncol=4)
df.describe()
plt.show()
dfPointer[rc+"YawError"].hist()
for rc in sorted(raycasts):
#point = np.array(list(dfPointer["Target"+ rc].apply(lambda x: list(x)).values))
if(rc == "IFRC"):
origin = np.array(list(dfPointer["FingertipPos"].apply(lambda x: list(x)).values))
elif(rc == "FRC"):
origin = np.array(list(dfPointer["RightLowerArmPos"].apply(lambda x: list(x)).values))
elif (rc=="EFRC"):
origin = np.array(list(dfPointer["HMDPos"].apply(lambda x: list(x)).values))
else:
origin = np.array(list(dfPointer["HMDPos"].apply(lambda x: list(x)).values))
point = np.array(list(dfPointer["Intersection"+ rc].apply(lambda x: list(x)).values))
print(rc)
intersect1 = rotate3DAroundOrigin(point-origin, dfPointer[rc+"PitchError"].values, dfPointer[rc+"YawError"].values)
intersect2 = intersect1+origin
centerBottomBase = np.array([0,1,0])
intersectReal = np.zeros(intersect2.shape)
for i in range(intersect2.shape[0]):
intersectReal[i] = calcCylinderIntersect (centerBottomBase, 4.0, origin[i], intersect2[i]-origin[i])
dfPointer["Intersection"+rc+"FromTarget"] = list(intersectReal)
from matplotlib.patches import Circle
fig, ax = plt.subplots(figsize=(10,10))
plt.scatter(0,0, marker="X", color="k")
plt.scatter(point[:5,0],point[:5,2], label="point")
plt.scatter(origin[:5,0],origin[:5,2], label="origin")
plt.scatter((point-origin)[:5,0],(point-origin)[:5,2], label="point-origin")
plt.scatter(intersect1[:5,0],intersect1[:5,2], label="intersect1")
plt.scatter(intersect2[:5,0],intersect2[:5,2], label="intersect1+origin")
plt.scatter(intersectReal[:5,0],intersectReal[:5,2], marker="X", label="intersectReal")
for i in range(5):
l = mlines.Line2D([intersect2[i,0],origin[i,0]], [intersect2[i,2],origin[i,2]])
ax.add_line(l)
for i in range(5):
l = mlines.Line2D([origin[i,0],intersectReal[i,0]], [origin[i,2],intersectReal[i,2]])
ax.add_line(l)
circle = Circle((0, 0), 4.0, fill=False)
ax.add_patch(circle)
plt.axis('equal')
plt.legend()
(point -intersectReal).mean(axis=0).round(3)
(point)[:,1].mean()
(origin)[:,1].mean()
(point-origin)[:,1].mean()
intersect1[:,1].mean()
intersect2[:,1].mean()
intersectReal[:,1].mean()
dfPointer[rc+"YawError"].hist()
plt.scatter(np.array(list(dfPointer["Intersection"+rc+"FromTarget"].apply(lambda x: list(x)).values))[:,0],
np.array(list(dfPointer["Intersection"+rc+"FromTarget"].apply(lambda x: list(x)).values))[:,2])
plt.scatter(np.array(list(dfPointer["Intersection"+rc+""].apply(lambda x: list(x)).values))[:,0],
np.array(list(dfPointer["Intersection"+rc+""].apply(lambda x: list(x)).values))[:,2])
plt.axis('equal')
for rc in raycasts:
dfPointer[rc+"Angle"] = dfPointer["Intersection"+ rc+"FromTarget"].apply(lambda x: math.degrees(math.atan2(x[0],x[2])))
dfPointer[rc+"X"] = (dfPointer[rc + "Angle"] / 180.0) * np.pi * 4.0
dfPointer[rc+"Y"] = dfPointer["Intersection"+ rc+"FromTarget"].apply(lambda x: x[1])
dfPointer[rc+"XCorected"] = dfPointer[rc+"X"]
dfPointer[rc+"XCorected"] = dfPointer[["TargetProjectionX", rc+"X"]].apply(lambda x: x[rc+"X"] - 4*np.pi*2 if x[rc+"X"] - x.TargetProjectionX > 4*np.pi else x[rc+"X"] ,axis=1)
dfPointer[rc+"DistanceError"] = np.sqrt(np.power(dfPointer.TargetProjectionX-dfPointer[rc+"XCorected"], 2) +
np.power(dfPointer.TargetProjectionY-dfPointer[rc+"Y"], 2))
for rc in raycasts:
plt.figure(figsize=(15,3))
f = dfPointer[rc+"DistanceError"].describe()
dfPointerFiltered = dfPointer[dfPointer[rc+"DistanceError"] < f["mean"] + 3 * f["std"]]
print("%s filtered: %.2f %%" % (rc, (1-len(dfPointerFiltered)/len(dfPointer))*100))
plt.title(rc)
plt.scatter(dfPointerFiltered.TargetProjectionX, dfPointerFiltered.TargetProjectionY, marker="+", color=colorDic["red"], label="Target")
plt.scatter(dfPointerFiltered[rc+"XCorected"], dfPointerFiltered[rc+"Y"], marker="+", color=colorDic["blue"])
ax = plt.gca()
for i, e in dfPointerFiltered.iterrows():
l = mlines.Line2D([e.TargetProjectionX, e[rc+"XCorected"]], [e.TargetProjectionY, e[rc+"Y"]], color=colorDic["blue"], alpha=0.2)
ax.add_line(l)
lstTicks = []
plt.axis('equal')
plt.ylim(-3, 6)
#for x in np.arange(-180,181,45):
# lstTicks.append(str(x)+"°")
#plt.xticks((4*np.pi*2)/360 * np.arange(-180,181,45), lstTicks)
    plt.ylabel("Height in m")
    plt.xlabel("Rotation")
plt.show()
dfPointer.to_pickle(OUTPUT_PATH + "data_v03_PointerOnly.pkl")
```
| github_jupyter |
[](https://colab.sandbox.google.com/github/kornia/tutorials/blob/master/source/data_augmenation_segmentation.ipynb)
# **Data Augmentation Semantic Segmentation**
In this tutorial we will show how we can quickly perform **data augmentation for semantic segmentation** using the `kornia.augmentation` API.
## Install and get data
We install Kornia and some dependencies, and download a simple data sample
```
%%capture
!pip install kornia opencv-python matplotlib
%%capture
!wget http://www.zemris.fer.hr/~ssegvic/multiclod/images/causevic16semseg3.png
# import the libraries
%matplotlib inline
import matplotlib.pyplot as plt
import cv2
import numpy as np
import torch
import torch.nn as nn
import kornia as K
```
## Define Augmentation pipeline
We build our augmentation pipeline as a class based on `nn.Module`
```
class MyAugmentation(nn.Module):
def __init__(self):
super(MyAugmentation, self).__init__()
# we define and cache our operators as class members
self.k1 = K.augmentation.ColorJitter(0.15, 0.25, 0.25, 0.25)
self.k2 = K.augmentation.RandomAffine([-45., 45.], [0., 0.15], [0.5, 1.5], [0., 0.15])
def forward(self, img: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # 1. apply color jitter to the image only
        # 2. apply the geometric transform
img_out = self.k2(self.k1(img))
# 3. infer geometry params to mask
# TODO: this will change in future so that no need to infer params
mask_out = self.k2(mask, self.k2._params)
return img_out, mask_out
```
Load the data and apply the transforms
```
def load_data(data_path: str) -> torch.Tensor:
data: np.ndarray = cv2.imread(data_path, cv2.IMREAD_COLOR)
data_t: torch.Tensor = K.image_to_tensor(data, keepdim=False)
data_t = K.bgr_to_rgb(data_t)
data_t = K.normalize(data_t, 0., 255.)
img, labels = data_t[..., :571], data_t[..., 572:]
return img, labels
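# K.normalize(x, 0., 255.) applies (x - mean) / std per value, mapping 8-bit
# intensities into [0, 1]. Illustrative numpy equivalent of that arithmetic
# (not part of the original notebook):
import numpy as np
_px = np.array([0.0, 127.5, 255.0])
assert np.allclose((_px - 0.0) / 255.0, [0.0, 0.5, 1.0])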
# load data (B, C, H, W)
img, labels = load_data("causevic16semseg3.png")
# create augmentation instance
aug = MyAugmentation()
# apply the augmentation pipeline to our batch of data
img_aug, labels_aug = aug(img, labels)
# visualize
img_out = torch.cat([img, labels], dim=-1)
plt.imshow(K.tensor_to_image(img_out))
plt.axis('off')
# generate several samples
num_samples: int = 10
for img_id in range(num_samples):
# generate data
img_aug, labels_aug = aug(img, labels)
img_out = torch.cat([img_aug, labels_aug], dim=-1)
# save data
plt.figure()
plt.imshow(K.tensor_to_image(img_out))
plt.axis('off')
plt.savefig(f"img_{img_id}.png", bbox_inches='tight')
```
| github_jupyter |
# Pandas
Pandas is a Python module built around tables, much like spreadsheet applications such as MS Excel. A particular strength of Pandas is that it can read and write CSV, DSV, and Excel files directly.
More about Pandas on the official website: http://pandas.pydata.org/
### Installing Pandas
```
# do not run: Pandas is already installed and the necessary permissions are missing
!sudo pip install pandas
```
### Using Pandas
```
import pandas as pd
S = pd.Series([11, 28, 72, 3, 5, 8])
print(S)
print(S.index)
print(S.values)
```
### Comparing NumPy and Pandas
```
import numpy as np
X = np.array([11, 28, 72, 3, 5, 8])
print(X)
print(S.values)
# both are the same type:
print(type(X))
print(type(S.values))
print(X==S.values)
```
#### Custom indices
```
fruits = ['apples', 'oranges', 'cherries', 'pears']
quantities = [20, 33, 52, 10]
S = pd.Series(quantities, index=fruits)
print(S)
fruits = ['apples', 'oranges', 'cherries', 'pears']
fruits_turkey = ['elma', 'portakal', 'kiraz', 'armut']
S = pd.Series([20, 33, 52, 10], index=fruits)
S2 = pd.Series([17, 13, 31, 32], index=fruits_turkey)
print(S + S2)
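# Note on the all-NaN result above: Series addition aligns on index labels, and
# values add only where a label exists in both operands (illustrative check):
import pandas as pd
_sa = pd.Series([20, 33], index=['apples', 'oranges'])
_sb = pd.Series([1, 2], index=['apples', 'elma'])
_sum = _sa + _sb
assert _sum['apples'] == 21                                # label present in both
assert pd.isna(_sum['oranges']) and pd.isna(_sum['elma'])  # unmatched labels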
```
### Accessing indices
```
print(S['apples'])
print(S[['apples', 'oranges', 'cherries']])
S[S>30]
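# A boolean mask like S > 30 selects only the rows where the condition holds
# (illustrative check with the same fruit data, using throwaway names):
import pandas as pd
_s = pd.Series([20, 33, 52, 10], index=['apples', 'oranges', 'cherries', 'pears'])
_big = _s[_s > 30]
assert list(_big.index) == ['oranges', 'cherries']
assert (_big > 30).all()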
```
### Renaming columns
```
years = range(2014, 2018)
shop1 = pd.Series([2409.14, 2941.01, 3496.83, 3119.55], index=years)
shop2 = pd.Series([1203.45, 3441.62, 3007.83, 3619.53], index=years)
shop3 = pd.Series([3412.12, 3491.16, 3457.19, 1963.10], index=years)
shops_df = pd.concat([shop1, shop2, shop3], axis=1)
print(shops_df)
print(type(shops_df))
print(type(shop1))
```
---
```
shops_df.columns.values
cities = ["Zürich", "Winterthur", "Freiburg"]
shops_df.columns = cities
print(shops_df)
```
### Accessing columns
```
print(shops_df.Zürich)
```
### Changing the index
```
cities = {"name": ["London", "Berlin", "Madrid", "Rome",
"Paris", "Vienna", "Bucharest", "Hamburg",
"Budapest", "Warsaw", "Barcelona",
"Munich", "Milan"],
"population": [8615246, 3562166, 3165235, 2874038,
2273305, 1805681, 1803425, 1760433,
1754000, 1740119, 1602386, 1493900,
1350680],
"area": [1572, 891.85, 605.77, 1285, 105.4, 414.6, 228, 755,
525.2, 517, 101.9, 310.4, 181.8],
"country": ["England", "Germany", "Spain", "Italy",
"France", "Austria", "Romania",
"Germany", "Hungary", "Poland", "Spain",
"Germany", "Italy"]}
city_frame = pd.DataFrame(cities)
print(city_frame)
ordinals = ["first", "second", "third", "fourth",
"fifth", "sixth", "seventh", "eigth",
"ninth", "tenth", "eleventh", "twelvth",
"thirteenth"]
city_frame = pd.DataFrame(cities, index=ordinals)
print(city_frame)
```
### Reordering columns
```
city_frame = pd.DataFrame(cities, columns=["name", "country", "population", "area"])
print(city_frame)
city_frame = city_frame.reindex(index=[0, 2, 4, 6, 8, 10, 12, 1, 3, 5, 7, 9, 11], columns=['country', 'name', 'area', 'population'])
print(city_frame)
```
### Sorting
```
print(city_frame)
# sort by area, descending
city_frame = city_frame.sort_values(by="area", ascending=False)
print(city_frame)
# sort by population, ascending
city_frame = city_frame.sort_values(by="population", ascending=True)
print(city_frame)
```
## Reading files
```
import zipfile
# Source: https://www.kaggle.com/unsdsn/world-happiness#2019.csv
target = 'data/world-happiness.zip'
handle = zipfile.ZipFile(target)
handle.extractall('data/')
handle.close()
dataframe = pd.read_csv('data/2019.csv')
print(dataframe)
dataframe.head() # always shows the first 5 elements (rows)
dataframe.tail()
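# As noted in the introduction, Pandas can also write these formats directly.
# Illustrative round-trip through an in-memory CSV buffer (sample data only):
import io
import pandas as pd
_df = pd.DataFrame({'city': ['Zurich', 'Bern'], 'population': [400000, 130000]})
_buf = io.StringIO()
_df.to_csv(_buf, index=False)
_buf.seek(0)
_back = pd.read_csv(_buf)
assert _back.equals(_df)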
```
| github_jupyter |
```
!pip install pyforest
from pyforest import *
import warnings
!pip install quandl
import quandl
from pandas import DataFrame
!pip install tscv
from tscv import GapKFold
!pip install backtrader
import backtrader as bt
from backtrader.feeds import PandasData
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
!pip install strategies
from strategies import *
import traceback
import sys, logging, json, pprint, requests
from google.colab import files
!pip install pyfolio
import pyfolio as pf
import unittest
# ONNX runtime
!pip install skl2onnx
!pip install onnxruntime
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as rt
def NaturalGasData():
NaturalGas = quandl.get("CHRIS/CME_NG1", authtoken="LSQpgUzwJRoF667ZpzyL")
NaturalGas = NaturalGas.loc['2010-01-01':,]
NaturalGas = NaturalGas [['Open', 'High', 'Low', 'Last', 'Settle', 'Volume']].copy()
NaturalGas.rename(columns ={'Last': 'Close', 'Settle': 'Adj Close'}, inplace=True)
return NaturalGas
NaturalGas = NaturalGasData()
# NaturalGas.reset_index(inplace=True)
NaturalGas.tail()
NaturalGas.head()
# count = np.isinf(NaturalGas).values.sum()
# print("It contains " + str(count) + " infinite values")
# print("printing column name where infinity is present")
# col_name = NaturalGas.columns.to_series()[np.isinf(NaturalGas).any()]
# print(col_name)
# print("printing row index with infinity ")
# r = NaturalGas.index[np.isinf(NaturalGas).any(1)]
# print(r)
```
We will create the signal based on the value of the priceDirection column (the next day's close minus today's close). If the value is positive, the signal will be 1; otherwise, it will remain 0:
```
def ClassificationCondition():
df = NaturalGas.copy()
df['closeReturn'] = df['Close'].pct_change()
df['highReturn'] = df['High'].pct_change()
df['lowReturn'] = df['Low'].pct_change()
df['volReturn'] = df['Volume'].pct_change()
df['dailyChange'] = (df['Close'] - df['Open']) / df['Open']
df['priceDirection'] = (df['Close'].shift(-1) - df['Close'])
df = df.replace([np.inf, -np.inf], np.nan)
df['signal'] = 0.0
df['signal'] = np.where((df.loc[:,'priceDirection'] > 0), 1.0, 0.0)
df.drop( columns =
['Open', 'High', 'Low',
'Close', 'Adj Close','Volume',
'priceDirection'
], axis=1, inplace=True)
df.dropna(inplace=True)
return df
df = ClassificationCondition()
df
# df['signal'] = df['signal'].diff()
# df
def features():
X = df.drop( columns = ['signal'], axis=1)
y = df.signal
return X,y
X,y = features()
X
MLDataFrame = NaturalGas.tail(len(X))[['Open', 'High', 'Low','Close', 'Volume']]
MLDataFrame
y.describe()
import seaborn as sns
sns.countplot(x = 'signal', data=pd.DataFrame(y), hue='signal')
plt.show()
# X = X.astype(float)
# # Create training and test sets
gkcv = GapKFold(n_splits=10, gap_before=2, gap_after=1)
"""
Introduced gaps between the training and test set to mitigate the temporal dependence.
Here the split function splits the data into Kfolds.
The test sets are untouched, while the training sets get the gaps removed
"""
for trainIndex, testIndex in gkcv.split(X):
# print("TRAIN:", trainIndex, "TEST:", testIndex)
xTrain, xTest = X.values[trainIndex], X.values[testIndex];
yTrain, yTest = y.values[trainIndex], y.values[testIndex];
# print('Observations: %d' % (len(xTrain) + len(xTest)))
print('Training Observations: %d' % (len(xTrain)))
print('Testing Observations: %d' % (len(xTest)))
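# Sketch of the idea behind the gaps (pure Python, illustrative -- not tscv's
# exact implementation): training indices that fall within gap_before bars
# before a test fold or gap_after bars after it are dropped, so no training
# sample is temporally adjacent to the test set.
def _gapped_split(n, n_splits, gap_before, gap_after):
    fold = n // n_splits
    for k in range(n_splits):
        lo = k * fold
        hi = (k + 1) * fold if k < n_splits - 1 else n
        test = list(range(lo, hi))
        train = [i for i in range(n) if i < lo - gap_before or i >= hi + gap_after]
        yield train, test
for _train, _test in _gapped_split(100, 10, gap_before=2, gap_after=1):
    assert not set(_train) & set(range(min(_test) - 2, max(_test) + 2))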
# Create models
print("Accuracy scores/Confusion Matrices:\n")
models = [("LR", LogisticRegression(penalty='l2',random_state = 0)),
("LDA", LinearDiscriminantAnalysis()),
("QDA", QuadraticDiscriminantAnalysis()),
("LSVC", LinearSVC()),
("RF", RandomForestClassifier(bootstrap=True,
ccp_alpha=0.0,
class_weight="balanced",
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=1000,
random_state=0,
verbose=0,
warm_start=False))]
# iterate over the models
for m in models:
# Train each of the models on the training set
m[1].fit(xTrain, yTrain)
# predictions on the test set
pred = m[1].predict(xTest)
# Accuracy Score and the confusion matrix for each model
print("%s:\n%0.3f" % (m[0], m[1].score(xTest, yTest)))
print("%s\n" % confusion_matrix(pred, yTest))
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
RF = RandomForestClassifier(bootstrap=True,
ccp_alpha=0.0,
class_weight="balanced",
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=1000,
random_state=0,
verbose=0,
warm_start=False)
LR = LogisticRegression(penalty='l2',random_state = 0)
ensembleClassifiers = VotingClassifier(
estimators=[
('RF', RF),
('LogReg', LR)
],
voting='hard'
).fit(X, y)
ensembleScore = cross_val_score(ensembleClassifiers, X, y, cv=gkcv).mean()
ensembleScore
LR = LogisticRegression(penalty='l2',random_state = 0).fit(X,y)
cross_val_score(LR, X, y, cv=gkcv).mean()
# X.shape[1]
initial_type = [('float_input', FloatTensorType([None, X.shape[1]]))]
onx = convert_sklearn(LR, initial_types=initial_type)
with open("LRModel.onnx", "wb") as f:
f.write(onx.SerializeToString())
sess = rt.InferenceSession("LRModel.onnx")
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
predictions = DataFrame(sess.run([label_name], {input_name: X.values.astype(np.float32)})[0])
predictions.rename({0: 'PredictedSignal'}, axis = 'columns', inplace=True)
predictions.index = MLDataFrame.index
print(predictions)
MLDataFrame = pd.concat([MLDataFrame, predictions], axis=1)
print(MLDataFrame)
# MLDataFrame['PredictedSignal'] = LR.predict(X)
MLDataFrame['PredictedSignal']= MLDataFrame['PredictedSignal'].diff(periods=1)
print(MLDataFrame)
MLDataFrame.tail(10)
print('Number of trades (buy) = ', (MLDataFrame['PredictedSignal']== 1).sum())
print('Number of trades (sell) = ', (MLDataFrame['PredictedSignal']==-1).sum())
```
To limit the position in the market, it should be impossible to buy or sell more than once consecutively. Therefore, `diff()` was applied to the signal column:
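The effect of `diff()` can be seen on a toy signal (a minimal illustration, independent of the notebook's data): runs of consecutive 1s collapse to a single +1 entry, so positions are never stacked.

```python
import pandas as pd

signal = pd.Series([0, 1, 1, 1, 0, 0, 1])
trades = signal.diff()  # +1 = buy, -1 = sell, 0/NaN = hold
assert (trades == 1).sum() == 2   # entries into a position
assert (trades == -1).sum() == 1  # exits
```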
```
%matplotlib inline
print(MLDataFrame.PredictedSignal.value_counts())
# Buy/Sell signals plot
buys = MLDataFrame.loc[MLDataFrame["PredictedSignal"] == 1];
sells = MLDataFrame.loc[MLDataFrame["PredictedSignal"] == -1];
# Plot
fig = plt.figure(figsize=(20, 5));
# plt.plot(MLDataFrame.index[-100:], MLDataFrame['Close'][-100:], lw=2., label='Price');
# Plot buy and sell signals
# up arrow when we buy one share
plt.plot(buys.index[-100:], MLDataFrame.loc[buys.index]['Close'][-100:], '^', markersize=10, color='red', lw=2., label='Buy');
# down arrow when we sell one share
plt.plot(sells.index[-100:], MLDataFrame.loc[sells.index]['Close'][-100:], 'v', markersize = 10, color='green', lw=2., label='Sell');
plt.ylabel('Price (USD)'); plt.xlabel('Date');
plt.title('Last 100 Buy and Sell signals'); plt.legend(loc='best');
plt.grid(True)
plt.show()
```
In the strategy below, we initiate a buy when the predicted signal flips down (diff = -1) and sell the position when it flips up (diff = +1)
```
from __future__ import (absolute_import, division, print_function,
unicode_literals)
...
prices = MLDataFrame['2021-01-01':].copy()
OHLCV = ['Open', 'High', 'Low', 'Close', 'Volume']
...
# class to define the columns we will provide
class NaturalGasData(PandasData):
"""
Define pandas DataFrame structure
"""
cols = OHLCV + ['PredictedSignal']
# create lines
lines = tuple(cols)
# define parameters
params = {c: -1 for c in cols}
params.update({'datetime': None})
params = tuple(params.items())
...
# define backtesting strategy class
class NGStrategy(bt.Strategy):
params = (
('percents', 0.9),
('stopLoss', 0.10), # if current price is %10 below the original buy price then sell the stock
('stopWin', 0.20) # if current price is %20 above the original buy price then sell the stock
)
# Float: 1 == 100%
def __init__(self):
# import ipdb; ipdb.set_trace()
'''Initializes logger and variables required for the strategy implementation.'''
        # initialize logger for the log function (set to CRITICAL to prevent unwanted autologs; we only care about logging one thing)
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.basicConfig(format='%(message)s', level=logging.CRITICAL, handlers=[
logging.FileHandler("LOG.log"),
logging.StreamHandler()
]
)
self.startCash = self.broker.getvalue()
date = self.data.datetime.date()
close = prices.Close[0]
print('{}: Close: ${}, Position Size: {}'.format(date, close, self.position.size))
# keep track of open, close prices and predicted value in the series
self.data_PredictedSignal = self.datas[0].PredictedSignal
self.data_Open = self.datas[0].Open
self.data_Close = self.datas[0].Close
# keep track of pending orders/buy price/buy commission
self.order = None
self.price = None
self.stop_price = None
self.comm = None
self.trade = None
self.buystop_order = None
self.sellstop_order = None
self.qty = 1
# logging function
"""
log function allows us to pass in data via the txt variable that we want to output to the screen.
    It will attempt to grab the datetime of the most recent data point, if available, and log it to the screen.
"""
def log(self, txt, doprint=True):
'''Logging function'''
# Logging function for the strategy. 'txt' is the statement and 'dt' can be used to specify a specific datetime
dt = self.datas[0].datetime.date(0)
print('{0},{1}'.format(dt.isoformat(), txt))
def notify_order(self, order):
date = self.data.datetime.date()
"""
Run on every next iteration, logs the order execution status whenever an order is filled or rejected,
setting the order parameter back to None if the order is filled or cancelled to denote that there are no more pending orders.
"""
# 1. If order is submitted/accepted, do nothing
if order.status in [order.Submitted, order.Accepted]:
return
# 2. If order is buy/sell executed, report price executed
if order.status == order.Completed:
if order.isbuy():
self.log('BUY@ Price: {0:8.2f}, Cost: {1:8.2f}, Comm: {2:8.2f}'.format(
order.executed.price,
order.executed.value,
order.executed.comm))
self.buyprice = order.executed.price
self.buycomm = order.executed.comm
else:
self.log('SELL@ Price: {0:8.2f}, Cost: {1:8.2f}, Comm{2:8.2f}'.format(
order.executed.price,
order.executed.value,
order.executed.comm))
self.bar_executed = len(self) # when was trade executed
# 3. If order is canceled/margin/rejected, report order canceled
elif order.status in [order.Canceled, order.Margin, order.Rejected]:
self.log('Order Canceled/Margin/Rejected')
"""
    When the system receives a buy or sell signal, we can instruct it to create an order.
However, that order won’t be executed until the next bar is called, at whatever price that may be.
"""
"""
    Automate the stop loss and stop profit as a percentage of the original buy price.
"""
if order.status in [order.Completed]:
if order.isbuy():
if(self.params.stopLoss):
self.stopOrder = self.sell(price=order.executed.price,exectype=bt.Order.StopTrail,trailpercent=self.params.stopLoss)
if(self.params.stopWin):
self.stopOrder = self.sell(price=(order.executed.price*(1+self.params.stopWin)),exectype=bt.Order.Limit,oco=self.stopOrder)
# set no pending order
self.order = None
"""
    Below we override the notify_trade function; everything related to completed trades gets processed here.
    It logs when a trade closes and the resulting profit, and provides notification in case an order didn't go through.
"""
def notify_trade(self, trade):
date = self.data.datetime.date()
if not trade.isclosed:
return
self.log(f'OPERATION RESULT --- Gross: {trade.pnl:.2f}, Net: {trade.pnlcomm:.2f}')
"""
    Since we predict the market direction from the day's closing price, we use cheat_on_open=True
    when creating the bt.Cerebro object. This means the number of shares to buy is based on day t+1's open price.
    As a result, we also define the next_open method instead of next within the Strategy class.
"""
def next_open(self):
if self.order: # check if order is pending, if so, then break out
return
# since there is no order pending, are we in the market?
if not self.position:
if self.data_PredictedSignal == -1:
size = int(self.broker.getcash() / self.datas[0].Open)
self.buy(size=size)
else:
if self.data_PredictedSignal == 1:
# sell order
self.log(f'SELL CREATED --- Size: {self.position.size}')
self.sell(size=self.position.size)
class Test(unittest.TestCase):
def setUp(self):
self.trading_strategy= NGStrategy()
def printTradeAnalysis(analyzer):
'''
Function to print the Technical Analysis results in tabular format.
'''
# Get the results we are interested in
totalOpen = analyzer.total.open
totalClosed = analyzer.total.closed
totalWon = analyzer.won.total
totalLost = analyzer.lost.total
winStreak = analyzer.streak.won.longest
loseStreak = analyzer.streak.lost.longest
pnlNet = round(analyzer.pnl.net.total,2)
strikeRate = (totalWon / totalClosed) * 100
# Designate the rows
a = ['Total Open', 'Total Closed', 'Total Won', 'Total Lost']
b = ['Strike Rate','Win Streak', 'Losing Streak', 'PnL Net']
c = [totalOpen, totalClosed,totalWon,totalLost]
d = [strikeRate, winStreak, loseStreak, pnlNet]
# Check which set of headers is the longest.
if len(a) > len(b):
header_length = len(a)
else:
header_length = len(b)
# Print the rows
print_list = [a,b,c,d]
row_format ="{:<20}" * (header_length + 1)
print("Trade Analysis Results:")
for row in print_list:
print(row_format.format('',*row))
def printSQN(analyzer):
sqn = round(analyzer.sqn,2)
print('SQN: {}'.format(sqn))
...
# instantiate SignalData class
data = NaturalGasData(dataname=prices)
def runstrat():
# Variable for our starting cash
startCash = 5000.0
NGtrade = bt.Cerebro(stdstats = False, cheat_on_open=True, maxcpus=1)
NGtrade.addstrategy(NGStrategy)
NGtrade.addanalyzer(bt.analyzers.PyFolio, _name='pyfolio')
# Add the analyzers we are interested in
NGtrade.addanalyzer(bt.analyzers.TradeAnalyzer, _name="ta")
NGtrade.addanalyzer(bt.analyzers.SQN, _name="sqn")
NGtrade.addobserver(bt.observers.Value)
NGtrade.addanalyzer(bt.analyzers.SharpeRatio, riskfreerate=0.0)
NGtrade.addanalyzer(bt.analyzers.Returns)
NGtrade.addanalyzer(bt.analyzers.DrawDown)
NGtrade.adddata(data)
# Set desired cash start
NGtrade.broker.setcash(startCash)
NGtrade.broker.setcommission(commission=0.001)
startPortfolioValue = NGtrade.broker.getvalue()
print('Starting Portfolio Value:', startPortfolioValue)
print()
strategies = NGtrade.run(runonce=False)
firstStrat = strategies[0]
# print the analyzers
print()
printTradeAnalysis(firstStrat.analyzers.ta.get_analysis())
printSQN(firstStrat.analyzers.sqn.get_analysis())
# Get final portfolio Value
endPortfolioValue = NGtrade.broker.getvalue()
print()
print(f'Final Portfolio Value: {endPortfolioValue:.2f}')
pnl = endPortfolioValue - startPortfolioValue
print()
print(f'PnL: {pnl:.2f}')
NGtrade.plot(style='candlestick', barup='green', bardown='red',
subtxtsize=8)[0][0].savefig('samplefigure.png', dpi=300)
#files.download('samplefigure.png')
if __name__ == '__main__':
runstrat()
```
| github_jupyter |